Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/18318
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lokhande, Mukul | en_US |
| dc.contributor.author | Sankhe, Akash | en_US |
| dc.contributor.author | Vishvakarma, Santosh Kumar | en_US |
| dc.date.accessioned | 2026-05-14T12:28:24Z | - |
| dc.date.available | 2026-05-14T12:28:24Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.citation | Lokhande, M., Sankhe, A., & Vishvakarma, S. K. (2025). REFLEX-PIM: A Resource-Efficient and Flexible Trans-Precision Digital Processing-in-Memory SRAM Macro for AI Workloads. 7th IEEE International Conference on Emerging Electronics, ICEE 2025. https://doi.org/10.1109/ICEE67165.2025.11409858 | en_US |
| dc.identifier.isbn | 979-833155547-4 | - |
| dc.identifier.other | EID(2-s2.0-105036637595) | - |
| dc.identifier.uri | https://dx.doi.org/10.1109/ICEE67165.2025.11409858 | - |
| dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18318 | - |
| dc.description.abstract | This work presents REFLEX-PIM, a resource-efficient and flexible trans-precision digital Processing-in-Memory (PIM) SRAM macro that integrates posit computations and a floating-point non-linear activation function (NAF) within the macro. This work reduces the overhead associated with the pre-/post-processing pipeline in prior works through a SIMD-shared hardware approach and a performance-enhanced SRAM Unified PIM (UPIM) engine. This work utilises a shared datapath for the SRAM UPIM engine, Shift-OR-based regime handling, and a novel transistor-reduced compressor tree (TRCT) for 67.7% area and 79.5% power savings with processing-in-hardware and a 65.3% transistor reduction in accumulation. The proposed 16Kb PIM macro delivers 3.38 TFLOPS throughput (Posit-4) at 37.6 TFLOPS/W energy efficiency and 0.4 TFLOPS/mm2 compute density in a 65 nm CMOS process. Detailed application-level evaluations show accuracy comparable (within 1-2%) to the baseline and prior works, while shrinking model size by up to 5.6×. REFLEX-PIM shows 4.34× higher energy efficiency and 1.9× higher compute density compared to recent SoTA macros. This assessment marks REFLEX-PIM as a potential PIM solution for next-generation XR SoC systems. © 2025 IEEE. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
| dc.source | 7th IEEE International Conference on Emerging Electronics, ICEE 2025 | en_US |
| dc.title | REFLEX-PIM: A Resource-Efficient and Flexible Trans-Precision Digital Processing-in-Memory SRAM Macro for AI Workloads | en_US |
| dc.type | Conference Paper | en_US |
| Appears in Collections: | Department of Electrical Engineering | |
Files in This Item:
There are no files associated with this item.