Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/17043
Full metadata record
DC Field | Value | Language
dc.contributor.author | Landge, Shruti | en_US
dc.date.accessioned | 2025-10-31T17:40:59Z | -
dc.date.available | 2025-10-31T17:40:59Z | -
dc.date.issued | 2025 | -
dc.identifier.citation | Deshmukh, S., Singhal, R., Landge, S., Saraswat, V., Biswas, A., Kadam, A. A., Singh, A. K., Subramoney, S., Somappa, L., Shojaei Baghini, M. S., & Ganguly, U. (2025). Analog and Temporary On-chip Memory for ANN Training and Inference. ACM Journal on Emerging Technologies in Computing Systems, 21(4). https://doi.org/10.1145/3765899 | en_US
dc.identifier.issn | 1550-4832 | -
dc.identifier.issn | 1550-4840 | -
dc.identifier.other | EID(2-s2.0-105019256057) | -
dc.identifier.uri | https://dx.doi.org/10.1145/3765899 | -
dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17043 | -
dc.description.abstract | On-chip training at the edge is a primary requisite for real-time and security-sensitive artificial neural network (ANN) applications. In-memory computation (IMC) techniques have been proposed to facilitate data-intensive computational operations in ANNs. IMC-based multiply-accumulate (MAC) accelerates ANN training but suffers from significant communication overhead between the MAC engine and the off-chip storage for intermediate data. This article proposes an analog temporary on-chip memory (ATOM) to store this intermediate data during ANN training. The ANN training architecture with the proposed ATOM has two significant advantages. First, the energy required to store intermediate data is scaled down by ∼40× due to the on-chip and analog nature of the memory. Second, the proposed architecture avoids power- and area-consuming analog-to-digital converters (ADCs) between neural network stages. ATOM cell measurements are carried out on 20 fabricated chips, and the impact of ATOM characteristics on ANN system accuracy is analyzed. This article shows a significant latency improvement of ∼9× and area savings of ∼5× for intermediate data storage compared to on-chip SRAM during ANN training's forward and backward pass operations. These area and latency improvements enable area- and energy-efficient hardware for on-chip ANN applications. © 2025 Association for Computing Machinery. All rights reserved. | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computing Machinery | en_US
dc.source | ACM Journal on Emerging Technologies in Computing Systems | en_US
dc.subject | Analog memory | en_US
dc.subject | Artificial Neural Network (ANN) | en_US
dc.subject | In-Memory Computation (IMC) | en_US
dc.subject | Matrix-Vector Multiplication (MVM) | en_US
dc.subject | On-chip training | en_US
dc.title | Analog and Temporary On-chip Memory for ANN Training and Inference | en_US
dc.type | Journal Article | en_US
Appears in Collections: Department of Electrical Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.