Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/13593
Full metadata record
DC Field | Value | Language
dc.contributor.author | Singh, Rituraj K. | en_US
dc.contributor.author | Sethi, Aniket | en_US
dc.contributor.author | Saini, Krishanu | en_US
dc.contributor.author | Tiwari, Aruna | en_US
dc.date.accessioned | 2024-04-26T12:43:24Z | -
dc.date.available | 2024-04-26T12:43:24Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Singh, R., Sethi, A., Saini, K., Saurav, S., Tiwari, A., & Singh, S. (2024). CVAD-GAN: Constrained video anomaly detection via generative adversarial network. Image and Vision Computing. Scopus. https://doi.org/10.1016/j.imavis.2024.104950 | en_US
dc.identifier.issn | 0262-8856 | -
dc.identifier.other | EID(2-s2.0-85185840474) | -
dc.identifier.uri | https://doi.org/10.1016/j.imavis.2024.104950 | -
dc.identifier.uri | https://dspace.iiti.ac.in/handle/123456789/13593 | -
dc.description.abstract | Automatic detection of abnormal behavior in video sequences is a fundamental and challenging problem for intelligent video surveillance systems. However, existing state-of-the-art Video Anomaly Detection (VAD) methods are computationally expensive and lack the robustness required in real-world scenarios. Contemporary VAD methods cannot detect fundamental features absent during training, which usually results in a high false positive rate at test time. To this end, we propose a Constrained Generative Adversarial Network (CVAD-GAN) for real-time VAD. Adding white Gaussian noise to the input video frame, together with the constrained latent space of CVAD-GAN, improves its learning of fine-grained features from normal video frames. In addition, the dilated convolution layers and skip connections preserve information across layers to capture the broader context of complex video scenes in real time. Our proposed approach achieves a higher Area Under Curve (AUC) score and a lower Equal Error Rate (EER), with better computational efficiency, than existing state-of-the-art VAD methods. CVAD-GAN achieves AUC and EER scores of 98.0% and 6.0% on UCSD Peds1, 97.8% and 7.0% on UCSD Peds2, 94.0% and 8.1% on CUHK Avenue, and 76.2% and 21.7% on the ShanghaiTech dataset, respectively. It also detects 63 and 19 abnormal events, with 3 and 1 false alarms, on the Subway-Entry and Subway-Exit datasets, respectively. The source code to replicate the results of the proposed CVAD-GAN is available at https://github.com/Rituraj-ksi/CVAD-GAN. © 2024 Elsevier B.V. (An illustrative sketch of the described pipeline appears below the metadata record.) | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier Ltd | en_US
dc.source | Image and Vision Computing | en_US
dc.subject | Adversarial learning | en_US
dc.subject | Generative adversarial network (GAN) | en_US
dc.subject | Surveillance video | en_US
dc.subject | Video anomaly detection | en_US
dc.title | CVAD-GAN: Constrained video anomaly detection via generative adversarial network | en_US
dc.type | Journal Article | en_US
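
Illustrative sketch (not the authors' implementation; see the GitHub link in the abstract): the abstract describes a generator that perturbs input frames with white Gaussian noise, constrains the latent space, and uses dilated convolutions with skip connections, flagging frames the model cannot reproduce well. The minimal PyTorch-style sketch below shows only the generator side of that idea; the adversarial training with a discriminator is omitted, and the layer widths, noise level sigma, and reconstruction-error anomaly score are assumptions made for illustration.

# Illustrative sketch only -- not the code from the paper or its repository.
# Layer widths, `sigma`, and the reconstruction-error score are assumptions.
import torch
import torch.nn as nn


class SketchGenerator(nn.Module):
    """Tiny encoder-decoder in the spirit of the abstract: noisy input,
    narrow ("constrained") latent bottleneck, dilated convolutions, skips."""

    def __init__(self, in_channels: int = 1, latent_channels: int = 16, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma  # std of the white Gaussian noise added to inputs
        # Encoder: the dilated convolution widens the receptive field without pooling.
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=2, dilation=2), nn.ReLU())
        # "Constrained" latent space here means a narrow 1x1 bottleneck.
        self.bottleneck = nn.Sequential(nn.Conv2d(64, latent_channels, 1), nn.ReLU())
        # Decoder: skip connections reinject encoder features at each stage.
        self.dec2 = nn.Sequential(nn.Conv2d(latent_channels + 64, 64, 3, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(32, in_channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Perturb the input frame with white Gaussian noise (training only).
            x = x + self.sigma * torch.randn_like(x)
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        z = self.bottleneck(e2)
        d2 = self.dec2(torch.cat([z, e2], dim=1))   # skip connection from enc2
        d1 = self.dec1(torch.cat([d2, e1], dim=1))  # skip connection from enc1
        return self.out(d1)


def anomaly_score(model: nn.Module, frame: torch.Tensor) -> float:
    """Reconstruction-error score (an assumption, not stated in the abstract):
    frames the generator reconstructs poorly receive a high score."""
    model.eval()
    with torch.no_grad():
        recon = model(frame)
    return torch.mean((recon - frame) ** 2).item()


# Example usage on a single grayscale frame:
# score = anomaly_score(SketchGenerator(), torch.rand(1, 1, 128, 128))

Under these assumptions, a test frame whose score is far above the scores observed on normal training frames would be flagged as anomalous.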
Appears in Collections:Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
