Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/13610
Title: QuantMAC: Enhancing Hardware Performance in DNNs with Quantize Enabled Multiply-Accumulate Unit
Authors: Ashar, Neha
Raut, Gopal
Trivedi, Vasundhara
Vishvakarma, Santosh Kumar
Keywords: Approximate compute; bit-truncation; CORDIC; deep neural network; hardware accelerator; quantize processing element
Issue Date: 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Citation: Ashar, N., Raut, G., Trivedi, V., Vishvakarma, S. K., & Kumar, A. (2024). QuantMAC: Enhancing Hardware Performance in DNNs with Quantize Enabled Multiply-Accumulate Unit. IEEE Access. Scopus. https://doi.org/10.1109/ACCESS.2024.3379906
Abstract: In response to the escalating demand for hardware-efficient Deep Neural Network (DNN) architectures, we present a novel quantize-enabled multiply-accumulate (MAC) unit. Our method uses a right-shift-and-add computation for the MAC operation, enabling runtime truncation without additional hardware. The architecture makes optimal use of hardware resources, improving throughput while reducing computational complexity through bit-truncation. At its core is a hardware-efficient MAC algorithm that supports both iterative and pipelined implementations, serving accelerators that prioritize either hardware efficiency or throughput. Additionally, we introduce a processing element (PE) with a bias pre-loading scheme, which saves one clock cycle and eliminates the extra resources of a conventional PE implementation. The PE performs quantization-based MAC calculations through an efficient bit-truncation method, removing the need for extra hardware logic. This versatile PE accommodates variable bit precision with a dynamic fraction part in the sfxpt<N,f> representation, meeting the demands of specific models or layers. In software emulation, the proposed approach shows minimal accuracy loss: under 1.6% for LeNet-5 on MNIST and around 4% for ResNet-18 and VGG-16 on CIFAR-10 in the sfxpt<8,5> format, compared with conventional float32-based implementations. Hardware results on the Xilinx Virtex-7 board show a 37% reduction in area utilization and a 45% reduction in power consumption compared with the best state-of-the-art MAC architecture. Extending the proposed MAC to a LeNet DNN model yields a 42% reduction in resource requirements and a 27% reduction in delay. This architecture provides notable advantages for resource-efficient, high-throughput edge-AI applications. © 2013 IEEE.
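
To make the abstract's mechanism concrete, the following is a minimal Python sketch of a quantize-enabled MAC in an sfxpt<N,f> signed fixed-point format (N total bits, f fraction bits), combining shift-and-add multiplication, bit-truncation back to f fraction bits, and bias pre-loading of the accumulator. All names and details here are illustrative assumptions for this record, not the authors' hardware design.

    # Software emulation of a quantize-enabled MAC in sfxpt<N,f> fixed point.
    # Sketch of the right shift-and-add idea from the abstract; hypothetical code.

    def to_fixed(x, n=8, f=5):
        """Quantize a float to a signed sfxpt<n,f> integer code, with saturation."""
        lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
        return max(lo, min(hi, round(x * (1 << f))))

    def to_float(code, f=5):
        """Convert an integer code back to float for inspection."""
        return code / (1 << f)

    def mac_shift_add(a_code, w_code, acc_code, n=8, f=5):
        """One MAC step: acc += a * w, via shift-and-add on the integer codes.

        The product of two sfxpt<n,f> codes carries 2f fraction bits; the
        final right shift by f truncates back to f fraction bits, which is
        where the 'free' bit-truncation quantization happens.
        """
        neg = (a_code < 0) != (w_code < 0)
        a, w = abs(a_code), abs(w_code)
        prod = 0
        for i in range(n):              # add a, shifted, for each set bit of w
            if (w >> i) & 1:
                prod += a << i
        prod >>= f                      # truncate 2f -> f fraction bits
        if neg:
            prod = -prod
        acc_code += prod
        lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
        return max(lo, min(hi, acc_code))   # saturate to sfxpt<n,f>

    # Bias pre-loading: start the accumulator at the quantized bias instead
    # of spending a separate cycle adding it at the end.
    acc = to_fixed(0.25)                    # pre-load bias
    for a, w in [(0.5, 0.75), (-0.25, 0.5)]:
        acc = mac_shift_add(to_fixed(a), to_fixed(w), acc)
    print(to_float(acc))                    # 0.25 + 0.375 - 0.125 = 0.5

In hardware, the shift-and-add loop maps to shifters and one adder rather than a full multiplier, so narrowing f at runtime changes only the final shift amount; that is the sense in which truncation costs no extra logic.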
URI: https://doi.org/10.1109/ACCESS.2024.3379906
https://dspace.iiti.ac.in/handle/123456789/13610
ISSN: 2169-3536
Type of Material: Journal Article
Appears in Collections: Department of Electrical Engineering

Files in This Item:
There are no files associated with this item.


