Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/11699
Title: Robust face recognition using multimodal data and transfer learning
Authors: Srivastava, Akhilesh Mohan
Chintaginjala, Sai Dinesh
Bhogavalli, Samhit Chowdary
Prakash, Surya
Keywords: Access control;Biometrics;Deep neural networks;Large dataset;Modal analysis;3D data;3D face recognition;3D faces;Data augmentation;Depth image;Face images;Multi-modal data;Residual network;Siamese network;Transfer learning;Face recognition
Issue Date: 2023
Publisher: SPIE
Citation: Srivastava, A. M., Chintaginjala, S. D., Bhogavalli, S. C., & Prakash, S. (2023). Robust face recognition using multimodal data and transfer learning. Journal of Electronic Imaging, 32(4). doi:10.1117/1.JEI.32.4.042105
Abstract: In recent years, technological advancements in face recognition have sparked numerous research efforts and opened up a variety of applications in fields such as security, access control, and identity verification. The accuracy of two-dimensional (2D) face recognition degrades noticeably in highly illuminated or dark environments. Further, its vulnerability to spoofing makes it a poor choice for security applications. These problems can be largely resolved with the help of three-dimensional (3D) face recognition. However, 3D data comes with its own set of issues and challenges: collecting and processing 3D data demands substantial resources and computational power. Most recent progress in this area has been achieved by training deep neural networks on large datasets, which is computationally costly and time-consuming. To address these issues, instead of using 3D face data directly, we propose the use of a 2.5D representation of 3D face data along with registered 2D face images, which is relatively easy to work with in terms of computational power and time requirements. The paper proposes a robust face recognition approach using multi-modal data (2.5D face images along with 2D face images) and transfer learning. The proposed approach is built on ResNet-34 and Siamese network models. The ResNet-34 network is first trained on 2D face images. Then, by reusing the pretrained ResNet-34 model, we perform transfer learning to produce a network that can make accurate predictions on 2.5D images. The final face recognition outcome is obtained by fusing the results on the 2D and 2.5D data. The proposed approach has been validated on the University of Notre Dame 3D face dataset (ND-Collection D). The experimental analysis shows the effectiveness of the proposed technique. © 2022 SPIE and IS&T.
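Note: The abstract describes a two-branch pipeline in which a ResNet-34 trained on 2D face images is reused, via transfer learning, for 2.5D depth images, with the two branches' scores fused for the final decision. The sketch below is a minimal illustration of that transfer-learning and score-fusion idea, assuming PyTorch/torchvision; the layer-freezing scheme, the fusion weight alpha, and all function names are illustrative assumptions and not the authors' code, and the Siamese matching stage is not shown.

    # Minimal sketch (assumed PyTorch/torchvision), not the authors' implementation.
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_embedding_net(num_identities: int) -> nn.Module:
        """ResNet-34 backbone with its final layer replaced for face identities."""
        net = models.resnet34(weights=None)  # trained from scratch on 2D face images
        net.fc = nn.Linear(net.fc.in_features, num_identities)
        return net

    def transfer_to_depth(pretrained_2d: nn.Module, num_identities: int) -> nn.Module:
        """Reuse the 2D-trained weights to initialise the 2.5D (depth) branch,
        then fine-tune only the later layers (an assumed freezing scheme)."""
        depth_net = build_embedding_net(num_identities)
        depth_net.load_state_dict(pretrained_2d.state_dict())
        for name, p in depth_net.named_parameters():
            if not (name.startswith("layer4") or name.startswith("fc")):
                p.requires_grad = False  # freeze the early convolutional blocks
        return depth_net

    def fused_score(score_2d: torch.Tensor, score_25d: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
        """Weighted score-level fusion of the 2D and 2.5D branch predictions."""
        return alpha * score_2d + (1.0 - alpha) * score_25d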
URI: https://doi.org/10.1117/1.JEI.32.4.042105
https://dspace.iiti.ac.in/handle/123456789/11699
ISSN: 1017-9909
Type of Material: Journal Article
Appears in Collections: Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


