Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/17043
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Landge, Shruti | en_US |
| dc.date.accessioned | 2025-10-31T17:40:59Z | - |
| dc.date.available | 2025-10-31T17:40:59Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.citation | Deshmukh, S., Singhal, R., Landge, S., Saraswat, V., Biswas, A., Kadam, A. A., Singh, A. K., Subramoney, S., Somappa, L., Shojaei Baghini, M. S., & Ganguly, U. (2025). Analog and Temporary On-chip Memory for ANN Training and Inference. ACM Journal on Emerging Technologies in Computing Systems, 21(4). https://doi.org/10.1145/3765899 | en_US |
| dc.identifier.issn | 1550-4832 | - |
| dc.identifier.issn | 1550-4840 | - |
| dc.identifier.other | EID(2-s2.0-105019256057) | - |
| dc.identifier.uri | https://dx.doi.org/10.1145/3765899 | - |
| dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17043 | - |
| dc.description.abstract | On-chip training at the edge is a primary requirement for real-time and security-sensitive artificial neural network (ANN) applications. In-memory computation (IMC) techniques have been proposed to facilitate data-intensive computational operations in ANNs. IMC-based multiply-accumulate (MAC) accelerates ANN training but suffers from significant communication overhead between the MAC engine and the off-chip storage for intermediate data. This article proposes an analog temporary on-chip memory (ATOM) to store this intermediate data during ANN training. The ANN training architecture with the proposed ATOM has two significant advantages. First, the energy required to store intermediate data is scaled down by ∼40× due to the on-chip and analog nature of the memory. Second, the proposed architecture avoids power- and area-consuming analog-to-digital converters (ADCs) between neural network stages. ATOM cell measurements are carried out on 20 fabricated chips, and the impact of ATOM characteristics on ANN system accuracy is analyzed. This article shows a significant latency improvement of ∼9× and area savings of ∼5× for intermediate data storage compared to on-chip SRAM during the forward and backward passes of ANN training. These improvements in area and latency will benefit the construction of area- and energy-efficient hardware systems for on-chip ANN applications. © 2025 Association for Computing Machinery. All rights reserved. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Association for Computing Machinery | en_US |
| dc.source | ACM Journal on Emerging Technologies in Computing Systems | en_US |
| dc.subject | Analog memory | en_US |
| dc.subject | Artificial Neural Network (ANN) | en_US |
| dc.subject | In-Memory Computation (IMC) | en_US |
| dc.subject | Matrix-Vector Multiplication (MVM) | en_US |
| dc.subject | On-chip training | en_US |
| dc.title | Analog and Temporary On-chip Memory for ANN Training and Inference | en_US |
| dc.type | Journal Article | en_US |
Appears in Collections: Department of Electrical Engineering
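The abstract above describes an analog buffer that holds intermediate activations between IMC layers so that no ADC is needed between network stages. The following is a minimal behavioral sketch of that idea, not the paper's implementation: the function names (`imc_mvm`, `atom_store`) and the noise and droop parameters are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def imc_mvm(weights, x):
    """Ideal IMC crossbar: a matrix-vector multiply computed as summed
    column currents; modeled here as an exact dot product."""
    return weights @ x

def atom_store(v, noise_sigma=0.01, droop=0.001):
    """Temporary analog hold of a vector: small write noise and voltage
    droop stand in for buffer non-idealities.
    (Parameter values are illustrative, not from the paper.)"""
    return v * (1.0 - droop) + rng.normal(0.0, noise_sigma, size=v.shape)

# Two-layer forward pass that keeps the intermediate activation analog,
# so no ADC sits between the layers.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16))
x = rng.normal(size=8)

h = atom_store(np.maximum(imc_mvm(W1, x), 0.0))  # MVM + ReLU, held in analog
y = imc_mvm(W2, h)                               # next layer reads the analog buffer
print(y)
```

In the same spirit, such a buffer would hold activations needed for the backward pass during training; a faithful system study would replace the ideal MVM and the Gaussian noise model with the measured cell characteristics reported in the article.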
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.