Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/9735
Title: | Deep CNN-based damage classification of milled rice grains using a high-magnification image dataset |
Authors: | Bhupendra; Moses, Kriz; Miglani, Ankur; Kumar Kankar, Pavan |
Keywords: | Classification (of information)|Convolution|Convolutional neural networks|Grain (agricultural product)|Image classification|Image quality|Quality control|Deep convolutional neural network|Deep learning|Efficientnet|High magnifications|High-magnification image|Image datasets|Magnification images|Milled rice|Rice grains|Rice qualities|Deep neural networks|agricultural market|agricultural production|data set|machine learning|rice |
Issue Date: | 2022 |
Publisher: | Elsevier B.V. |
Citation: | Bhupendra, Moses, K., Miglani, A., & Kumar Kankar, P. (2022). Deep CNN-based damage classification of milled rice grains using a high-magnification image dataset. Computers and Electronics in Agriculture, 195 doi:10.1016/j.compag.2022.106811 |
Abstract: | Surface quality evaluation of pre-processed rice grains is a key factor in determining their market acceptance, storage stability, processing quality, and overall customer approval. On the one hand, conventional methods of surface quality evaluation are time-intensive, subjective, and inconsistent. On the other hand, current methods are limited either to sorting healthy rice grains from damaged ones, without classifying the latter, or to segregating different types of rice. A detailed classification of damage in milled rice grains has remained largely unexplored due to the lack of an extensive labelled image dataset and of advanced CNN models applied to it, which enable quick, accurate, and precise classification by excelling at end-to-end tasks, minimizing pre-processing, and eliminating the need for manual feature extraction. In this study, a machine vision system is developed to first construct a dataset of 8048 high-magnification (4.5×) images of damaged rice refractions obtained through on-field collection. The dataset spans seven damage classes: healthy, full chalky, chalky discolored, half chalky, broken, discolored, and normal damage. Subsequently, five state-of-the-art, memory-efficient deep CNN models, namely EfficientNet-B0, ResNet-50, InceptionV3, MobileNetV2, and MobileNetV3, are adopted and fine-tuned to enable damage classification of milled rice grains. Experimental results show that EfficientNet-B0 is the best-performing model in terms of accuracy, average recall, precision, and F1-score. It achieves individual class accuracies of 98.33%, 96.51%, 95.45%, 100%, 100%, 99.26%, and 98.72% for the healthy, full chalky, chalky discolored, half chalky, broken, discolored, and normal damage classes, respectively. The EfficientNet-B0 architecture achieves an overall classification accuracy of 98.37% with a significantly reduced model size (47 MB) and a small prediction time of 0.122 s, and can further sub-classify the chalky class into three classes, i.e., full chalky, half chalky, and chalky discolored. Overall, this study demonstrates that deep CNN architectures applied to a high-magnification image dataset enable the classification of damaged rice grains with high accuracy, which could be utilized as a tool for better and more objective quality assessment of damaged rice grains at market and trading locations. © 2022 Elsevier B.V. |
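Editor's note: the abstract describes fine-tuning ImageNet-pretrained CNNs (EfficientNet-B0 among them) for seven-class damage classification. The minimal sketch below illustrates that kind of transfer-learning setup; it is not the authors' code. The framework (PyTorch/torchvision), the folder path "rice_dataset/train", the input size, learning rate, batch size, and epoch count are all assumptions made for illustration only.

    # Illustrative sketch only: framework, data layout, and hyperparameters are assumptions,
    # not taken from the paper.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    NUM_CLASSES = 7  # healthy, full chalky, chalky discolored, half chalky, broken, discolored, normal damage

    # Standard ImageNet-style preprocessing; the paper's exact pipeline is not reproduced here.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    # Hypothetical folder layout: one sub-directory per damage class.
    train_set = datasets.ImageFolder("rice_dataset/train", transform=preprocess)
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # Load an ImageNet-pretrained EfficientNet-B0 and replace its classifier head
    # with a 7-way linear layer for the rice damage classes.
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

    # Minimal fine-tuning loop.
    model.train()
    for epoch in range(5):  # assumed epoch count
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()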
URI: | https://dspace.iiti.ac.in/handle/123456789/9735 | https://doi.org/10.1016/j.compag.2022.106811 |
ISSN: | 0168-1699 |
Type of Material: | Journal Article |
Appears in Collections: | Department of Mechanical Engineering |
Files in This Item:
There are no files associated with this item.