Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/13593
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Singh, Rituraj K. | en_US |
dc.contributor.author | Sethi, Anikeit | en_US |
dc.contributor.author | Saini, Krishanu | en_US |
dc.contributor.author | Tiwari, Aruna | en_US |
dc.date.accessioned | 2024-04-26T12:43:24Z | - |
dc.date.available | 2024-04-26T12:43:24Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Singh, R., Sethi, A., Saini, K., Saurav, S., Tiwari, A., & Singh, S. (2024). CVAD-GAN: Constrained video anomaly detection via generative adversarial network. Image and Vision Computing. Scopus. https://doi.org/10.1016/j.imavis.2024.104950 | en_US |
dc.identifier.issn | 0262-8856 | - |
dc.identifier.other | EID(2-s2.0-85185840474) | - |
dc.identifier.uri | https://doi.org/10.1016/j.imavis.2024.104950 | - |
dc.identifier.uri | https://dspace.iiti.ac.in/handle/123456789/13593 | - |
dc.description.abstract | Automatic detection of abnormal behavior in video sequences is a fundamental and challenging problem for intelligent video surveillance systems. However, the existing state-of-the-art Video Anomaly Detection (VAD) methods are computationally expensive and lack the desired robustness in real-world scenarios. Contemporary VAD methods cannot detect fundamental features absent during training, which usually results in a high false positive rate during testing. To this end, we propose a Constrained Generative Adversarial Network (CVAD-GAN) for real-time VAD. Adding white Gaussian noise to the input video frame, together with the constrained latent space of CVAD-GAN, improves its learning of fine-grained features from normal video frames. Also, the dilated convolution layers and skip-connections preserve information across layers, helping the network understand the broader context of complex video scenes in real time. Our proposed approach achieves a higher Area Under Curve (AUC) score and a lower Equal Error Rate (EER), with greater computational efficiency, than the existing state-of-the-art VAD methods. CVAD-GAN achieves an AUC and EER score of 98.0% and 6.0% on UCSD Peds1, 97.8% and 7.0% on UCSD Peds2, 94.0% and 8.1% on CUHK Avenue, and 76.2% and 21.7% on ShanghaiTech dataset, respectively. Also, it detects 63 and 19 abnormal events, with false alarms of 3 and 1, respectively, on the Subway-Entry and Subway-Exit datasets. The source code to replicate the results of the proposed CVAD-GAN is available at https://github.com/Rituraj-ksi/CVAD-GAN. © 2024 Elsevier B.V. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Elsevier Ltd | en_US |
dc.source | Image and Vision Computing | en_US |
dc.subject | Adversarial learning | en_US |
dc.subject | Generative adversarial network (GAN) | en_US |
dc.subject | Surveillance video | en_US |
dc.subject | Video anomaly detection | en_US |
dc.title | CVAD-GAN: Constrained video anomaly detection via generative adversarial network | en_US |
dc.type | Journal Article | en_US |
Appears in Collections: | Department of Computer Science and Engineering |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
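The abstract's noise-injection step (adding white Gaussian noise to input frames before they reach the generator) can be sketched as follows. This is a minimal illustration, not code from the paper's repository: the [0, 1] frame normalization and the `sigma=0.1` noise level are assumptions for the example.

```python
import numpy as np

def add_gaussian_noise(frame, sigma=0.1, seed=None):
    """Add white Gaussian noise to a normalized video frame.

    A minimal sketch of the noise-injection step described in the
    abstract; `sigma` and the [0, 1] value range are assumptions,
    not parameters taken from the paper.
    """
    rng = np.random.default_rng(seed)
    noisy = frame + rng.normal(0.0, sigma, size=frame.shape)
    # Keep pixel values in the assumed normalized range.
    return np.clip(noisy, 0.0, 1.0)

# Example: corrupt a dummy 64x64 grayscale frame before feeding it
# to a reconstruction network such as the CVAD-GAN generator.
frame = np.zeros((64, 64), dtype=np.float32)
noisy = add_gaussian_noise(frame, sigma=0.1, seed=0)
```

In reconstruction-based VAD, this kind of input corruption forces the generator to learn robust features of normal frames rather than an identity mapping, which is consistent with the abstract's claim that noise injection improves fine-grained feature learning.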