Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/4578
Full metadata record
DC Field | Value | Language
dc.contributor.author | Gupta, Siddharth | en_US
dc.contributor.author | Ahuja, Kapil | en_US
dc.contributor.author | Tiwari, Aruna | en_US
dc.contributor.author | Kumar, Akash | en_US
dc.date.accessioned | 2022-03-17T01:00:00Z | -
dc.date.accessioned | 2022-03-17T15:34:53Z | -
dc.date.available | 2022-03-17T01:00:00Z | -
dc.date.available | 2022-03-17T15:34:53Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Ullah, S., Gupta, S., Ahuja, K., Tiwari, A., & Kumar, A. (2020). L2L: A highly accurate log-2-lead quantization of pre-trained neural networks. Paper presented at the Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition, DATE 2020, 979-982. doi:10.23919/DATE48585.2020.9116373 | en_US
dc.identifier.isbn | 9783981926347 | -
dc.identifier.other | EID(2-s2.0-85087410213) | -
dc.identifier.uri | https://doi.org/10.23919/DATE48585.2020.9116373 | -
dc.identifier.uri | https://dspace.iiti.ac.in/handle/123456789/4578 | -
dc.description.abstract | Deep neural networks are machine learning techniques that are increasingly used in a variety of applications. However, their significantly high memory and computation demands often limit their deployment on embedded systems. Many recent works have addressed this problem by proposing different data quantization schemes. However, most of these techniques either require post-quantization retraining of the deep neural network or incur a significant loss in output accuracy. In this paper, we propose a novel quantization technique for the parameters of pre-trained deep neural networks. Our technique largely preserves the accuracy of the parameters and does not require retraining of the networks. Compared to a single-precision floating-point implementation, our proposed 8-bit quantization technique incurs only ~1% and ~0.4% loss in top-1 and top-5 accuracy, respectively, for the VGG16 network on the ImageNet dataset. © 2020 EDAA. | en_US
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US
dc.source | Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition, DATE 2020 | en_US
dc.subject | Digital arithmetic | en_US
dc.subject | Embedded systems | en_US
dc.subject | Learning systems | en_US
dc.subject | Data quantizations | en_US
dc.subject | Highly accurate | en_US
dc.subject | Machine learning techniques | en_US
dc.subject | Output accuracy | en_US
dc.subject | Single precision | en_US
dc.subject | Trained neural networks | en_US
dc.subject | Deep neural networks | en_US
dc.title | L2L: A Highly Accurate Log-2-Lead Quantization of Pre-trained Neural Networks | en_US
dc.type | Conference Paper | en_US
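
Note: The abstract above describes 8-bit, post-training quantization of pre-trained network parameters without retraining. The Python sketch below is a rough illustration of that general class of technique only, rounding pre-trained weights to the nearest power of two; it is not the paper's L2L (log-2-lead) encoding, whose exact format is not described in this record, and the function name and random stand-in weights are assumptions for illustration.

    # Illustrative sketch only: generic power-of-two ("log-2") rounding of
    # pre-trained weights. NOT the paper's L2L (log-2-lead) format.
    import numpy as np

    def quantize_pow2(w: np.ndarray, eps: float = 1e-12) -> np.ndarray:
        """Round each weight's magnitude to the nearest power of two, keeping its sign."""
        sign = np.sign(w)
        mag = np.maximum(np.abs(w), eps)   # avoid log2(0)
        exp = np.round(np.log2(mag))       # nearest integer exponent
        q = sign * np.exp2(exp)
        q[np.abs(w) < eps] = 0.0           # keep exact zeros as zeros
        return q.astype(np.float32)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Random stand-in for one layer's weights; real use would load the
        # actual pre-trained weights (e.g. a VGG16 layer) instead.
        w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)
        q = quantize_pow2(w)
        rel_err = np.abs(w - q) / np.maximum(np.abs(w), 1e-12)
        print("mean relative error:", rel_err.mean())

In practice such a rounding would be applied layer by layer to the pre-trained weights, and the resulting top-1/top-5 accuracy drop would be measured on the target dataset, which is the kind of evaluation the abstract reports for ImageNet.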
Appears in Collections: Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
