Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/18205
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chattopadhyay, Soumi | en_US
dc.contributor.author | Parihar, Ashutosh | en_US
dc.contributor.author | Suralkar, Anand | en_US
dc.date.accessioned | 2026-05-14T12:28:17Z | -
dc.date.available | 2026-05-14T12:28:17Z | -
dc.date.issued | 2025 | -
dc.identifier.citation | Das, B., Adak, C., Deo, A., Bangar, A., Verma, R., Akhtar, Z., Chattopadhyay, S., Dutta, S., Parihar, A., Suralkar, A., Nagar, D., & Kumar, V. (2025). Securing AI-Generated Media: Rethinking Deepfake Vulnerabilities in Side-Face Perspectives. Proceedings - 2025 Conference on Building a Secure and Empowered Cyberspace, BuildSEC 2025, 126–132. https://doi.org/10.1109/BuildSEC68439.2025.00026 | en_US
dc.identifier.isbn | 979-833157964-7 | -
dc.identifier.other | EID(2-s2.0-105035826090) | -
dc.identifier.uri | https://dx.doi.org/10.1109/BuildSEC68439.2025.00026 | -
dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18205 | -
dc.description.abstract | Deepfake technology has advanced significantly, producing highly sophisticated fake images that challenge detection mechanisms. However, existing deepfake generators struggle to maintain realism in side-face perspectives, particularly under diverse indoor and outdoor lighting conditions. This limitation is further pronounced for individuals of Indian ethnicity, where variations in skin tone, hairstyles, facial hair, and image capture distance from the camera introduce additional challenges. In this paper, we critically examine the performance of state-of-the-art deepfake generators in these scenarios, highlighting key vulnerabilities in side-face synthesis. We also assess the effectiveness of current detection frameworks in identifying these inconsistencies. Furthermore, we discuss the broader implications of generative models in security-sensitive applications and propose future research directions to enhance the robustness of deepfake synthesis and detection. Our recommendations include improving dataset diversity, developing adaptive generative models, and leveraging multimodal approaches to strengthen detection mechanisms, ensuring more secure and reliable AI-driven media applications. ©2025 IEEE. | en_US
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US
dc.source | Proceedings - 2025 Conference on Building a Secure and Empowered Cyberspace, BuildSEC 2025 | en_US
dc.title | Securing AI-Generated Media: Rethinking Deepfake Vulnerabilities in Side-Face Perspectives | en_US
dc.type | Conference Paper | en_US
Appears in Collections:Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.