Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/16954
| Title: | Retrospective: A CORDIC Based Configurable Activation Function for NN Applications |
| Authors: | Kokane, Omkar; Lokhande, Mukul; Vishvakarma, Santosh Kumar |
| Keywords: | Activation Functions; AI Accelerators; CORDIC; Reconfigurable Computing; Transformers; Integrated Circuit Design; Neural Networks; Reconfigurable Architectures; Reconfigurable Hardware; Constrained Systems; Functionals; Hardware Design; Reconfigurability |
| Issue Date: | 2025 |
| Publisher: | IEEE Computer Society |
| Citation: | Kokane, O., Raut, G., Ullah, S., Lokhande, M., Teman, A., Kumar, A., & Vishvakarma, S. K. (2025). Retrospective: A CORDIC Based Configurable Activation Function for NN Applications. Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI. https://doi.org/10.1109/ISVLSI65124.2025.11130218 |
| Abstract: | A CORDIC-based configuration for the design of Activation Functions (AF) was previously proposed to accelerate ASIC hardware design for resource-constrained systems by providing functional reconfigurability. Since its introduction, this approach to neural network acceleration has gained widespread popularity, influencing numerous activation-function designs in both academic and commercial AI processors. In this retrospective, we explore the foundational aspects of the original work, summarize key developments over recent years, and introduce the DA-VINCI AF tailored to the evolving needs of AI applications. This new generation of dynamically configurable and precision-adjustable activation function cores promises greater adaptability across a range of activation functions in AI workloads, including Swish, SoftMax, SeLU, and GeLU, using the Shift-and-Add CORDIC technique (sketched below). The previously presented design has been optimized for MAC, Sigmoid, and Tanh functionalities and combined with ReLU AFs, culminating in the accumulative NEURIC compute unit. These enhancements position NEURIC as a fundamental component of the resource-efficient vector engine for AI accelerators targeting DNNs, RNNs/LSTMs, and Transformers, achieving a quality of results (QoR) of 98.5%. |
| URI: | https://dx.doi.org/10.1109/ISVLSI65124.2025.11130218 ; https://dspace.iiti.ac.in:8080/jspui/handle/123456789/16954 |
| ISBN: | 9781728157757; 9781479987184; 9781665439466; 9781467390385; 0769514863; 9781538670996; 9781479913312; 9798350327694; 9798350354119; 9781479937639 |
| ISSN: | 2159-3477; 2159-3469 |
| Type of Material: | Conference Paper |
| Appears in Collections: | Department of Electrical Engineering |
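The abstract above centers on the Shift-and-Add CORDIC technique. As a rough illustration only, and not the authors' hardware design, the following Python sketch shows hyperbolic CORDIC in rotation mode computing Tanh, with Sigmoid derived from the same datapath via the identity sigmoid(z) = 0.5·(1 + tanh(z/2)). The function names, iteration count, and floating-point arithmetic here are illustrative assumptions; the actual design operates on fixed-point shift-and-add hardware.

```python
import math

def tanh_cordic(z, iterations=16):
    """Approximate tanh(z) with hyperbolic CORDIC in rotation mode.

    In hardware the 2**-i factors are bare wire shifts and the atanh
    constants sit in a small ROM, so each iteration is shift-and-add
    only. Converges for roughly |z| < 1.12; larger inputs need range
    reduction (not shown in this sketch).
    """
    # Iteration schedule: i = 1, 2, 3, ... with indices 4, 13, 40, ...
    # each repeated once, as hyperbolic CORDIC convergence requires.
    schedule, i, repeat = [], 1, 4
    while len(schedule) < iterations:
        schedule.append(i)
        if i == repeat:              # repeat this index once
            schedule.append(i)
            repeat = 3 * repeat + 1
        i += 1
    schedule = schedule[:iterations]

    x, y = 1.0, 0.0                  # the CORDIC gain cancels in y/x
    for i in schedule:
        d = 1.0 if z >= 0 else -1.0  # rotation direction from sign of z
        x, y, z = (x + d * y * 2.0**-i,
                   y + d * x * 2.0**-i,
                   z - d * math.atanh(2.0**-i))
    return y / x                     # tanh(z) = sinh(z) / cosh(z)

def sigmoid_cordic(z):
    """sigmoid(z) = 0.5 * (1 + tanh(z/2)) reuses the same datapath."""
    return 0.5 * (1.0 + tanh_cordic(z / 2.0))
```

Because the gain factor cancels in the y/x division and every multiply is a power-of-two shift, a single iterative core of this shape can be reconfigured across several activation functions, which is what makes the approach attractive for resource-constrained accelerators.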
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.