Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/18205
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Chattopadhyay, Soumi | en_US |
| dc.contributor.author | Parihar, Ashutosh | en_US |
| dc.contributor.author | Suralkar, Anand | en_US |
| dc.date.accessioned | 2026-05-14T12:28:17Z | - |
| dc.date.available | 2026-05-14T12:28:17Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.citation | Das, B., Adak, C., Deo, A., Bangar, A., Verma, R., Akhtar, Z., Chattopadhyay, S., Dutta, S., Parihar, A., Suralkar, A., Nagar, D., & Kumar, V. (2025). Securing AI-Generated Media: Rethinking Deepfake Vulnerabilities in Side-Face Perspectives. Proceedings - 2025 Conference on Building a Secure and Empowered Cyberspace, BuildSEC 2025, 126–132. https://doi.org/10.1109/BuildSEC68439.2025.00026 | en_US |
| dc.identifier.isbn | 979-833157964-7 | - |
| dc.identifier.other | EID(2-s2.0-105035826090) | - |
| dc.identifier.uri | https://dx.doi.org/10.1109/BuildSEC68439.2025.00026 | - |
| dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18205 | - |
| dc.description.abstract | Deepfake technology has advanced significantly, producing highly sophisticated fake images that challenge detection mechanisms. However, existing deepfake generators struggle to maintain realism in side-face perspectives, particularly under diverse indoor and outdoor lighting conditions. This limitation is further pronounced for individuals of Indian ethnicity, where variations in skin tone, hairstyles, facial hair, and image capture distance from the camera introduce additional challenges. In this paper, we critically examine the performance of state-of-the-art deepfake generators in these scenarios, highlighting key vulnerabilities in side-face synthesis. We also assess the effectiveness of current detection frameworks in identifying these inconsistencies. Furthermore, we discuss the broader implications of generative models in security-sensitive applications and propose future research directions to enhance the robustness of deepfake synthesis and detection. Our recommendations include improving dataset diversity, developing adaptive generative models, and leveraging multimodal approaches to strengthen detection mechanisms, ensuring more secure and reliable AI-driven media applications. ©2025 IEEE. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
| dc.source | Proceedings - 2025 Conference on Building a Secure and Empowered Cyberspace, BuildSEC 2025 | en_US |
| dc.title | Securing AI-Generated Media: Rethinking Deepfake Vulnerabilities in Side-Face Perspectives | en_US |
| dc.type | Conference Paper | en_US |
| Appears in Collections: | Department of Computer Science and Engineering | |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.