Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/10401
Title: | Approximate deep learning |
Authors: | Anand, Samarth; Kumpatla, Vijay Babu; Ahuja, Kapil [Guide] |
Keywords: | Computer Science and Engineering |
Issue Date: | 25-May-2022 |
Publisher: | Department of Computer Science and Engineering, IIT Indore |
Series/Report no.: | BTP594;CSE 2022 ANA |
Abstract: | Deep learning is used to solve complex day-to-day problems. Solving machine learning tasks requires large DNNs, but due to their sheer size and the computational cost associated with them, it is difficult to deploy these DNN models on embedded systems. As the scale of deep neural network models grows, they require more computation and memory. These models can, however, be deployed on low-power devices by applying some approximation while maintaining network accuracy. Many recent studies have focused on reducing the size and complexity of DNNs using various techniques. Quantisation of the DNN model parameters is one such technique: it reduces model size by representing the high-precision parameters in a lower bit-width representation. In this thesis we examine efficient quantisation techniques that do not require any retraining or fine-tuning of the model after quantisation. We also attempt to reduce the quantisation-induced error by using sampling techniques. |
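The core idea described in the abstract, representing high-precision parameters at a lower bit-width without retraining, can be sketched as symmetric uniform post-training quantisation. This is a minimal NumPy illustration of the general technique, not the specific method used in the thesis:

```python
import numpy as np

def quantise_dequantise(w: np.ndarray, bits: int = 8) -> np.ndarray:
    """Symmetric uniform quantisation of a weight tensor, then dequantisation.

    Maps float weights onto a grid of 2**bits signed integer levels and
    back, so the returned tensor is the lower-precision approximation.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit signed codes
    scale = np.max(np.abs(w)) / qmax      # largest weight maps to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)  # integer codes
    return q * scale                      # dequantised approximation

# Quantise a random weight tensor and measure the induced error.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
w_hat = quantise_dequantise(w, bits=8)
print("max abs error:", np.max(np.abs(w - w_hat)))
```

The per-element error of this scheme is bounded by half the quantisation step (`scale / 2`); reducing that residual error further is what techniques such as the sampling-based correction mentioned in the abstract target.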
URI: | https://dspace.iiti.ac.in/handle/123456789/10401 |
Type of Material: | B.Tech Project |
Appears in Collections: | Department of Computer Science and Engineering_BTP |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
BTP_594_Samarth_Anand_180001046_Vijay_Babu_Kumpatla_180001027.pdf | | 2.15 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.