Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/13135
Full metadata record
DC Field | Value | Language
dc.contributor.author | Singh, Rituraj K. | en_US
dc.contributor.author | Sethi, Aniket | en_US
dc.contributor.author | Saini, Krishanu | en_US
dc.contributor.author | Tiwari, Aruna | en_US
dc.date.accessioned | 2024-01-29T05:19:18Z | -
dc.date.available | 2024-01-29T05:19:18Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Singh, R., Sethi, A., Saini, K., Saurav, S., Tiwari, A., & Singh, S. (2024). Attention-guided generator with dual discriminator GAN for real-time video anomaly detection. Engineering Applications of Artificial Intelligence. Scopus. https://doi.org/10.1016/j.engappai.2023.107830 | en_US
dc.identifier.issn | 0952-1976 | -
dc.identifier.other | EID(2-s2.0-85182400112) | -
dc.identifier.uri | https://doi.org/10.1016/j.engappai.2023.107830 | -
dc.identifier.uri | https://dspace.iiti.ac.in/handle/123456789/13135 | -
dc.description.abstract | Detecting anomalies in videos presents a significant challenge in the field of video surveillance. The primary goal is to detect uncommon actions or events within a video sequence. The difficulty arises from the limited availability of video frames depicting anomalies and the ambiguous definition of anomaly. Building on the extensive applications of Generative Adversarial Networks (GANs), which consist of a generator and a discriminator network, we propose an Attention-guided Generator with Dual Discriminator GAN (A2D-GAN) for real-time video anomaly detection (VAD). The generator network uses an encoder–decoder architecture with multi-stage self-attention added to the encoder and multi-stage channel attention added to the decoder. The framework uses adversarial learning from noise and video frame reconstruction to enhance the generalization of the generator network. Of the dual discriminators in A2D-GAN, one discriminates between the reconstructed video frame and the real video frame, while the other discriminates between the reconstructed noise and the real noise. Exhaustive experiments and ablation studies on four benchmark video anomaly datasets, namely UCSD Peds, CUHK Avenue, ShanghaiTech, and Subway, demonstrate the effectiveness of the proposed A2D-GAN compared to other state-of-the-art methods. The proposed A2D-GAN model is robust and can detect anomalies in videos in real time. The source code to replicate the results of the proposed A2D-GAN model is available at https://github.com/Rituraj-ksi/A2D-GAN. © 2024 Elsevier Ltd | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier Ltd | en_US
dc.source | Engineering Applications of Artificial Intelligence | en_US
dc.subject | Adversarial learning | en_US
dc.subject | Generative adversarial networks (GAN) | en_US
dc.subject | One-class classification (OCC) | en_US
dc.subject | Video anomaly detection | en_US
dc.title | Attention-guided generator with dual discriminator GAN for real-time video anomaly detection | en_US
dc.type | Journal Article | en_US
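The dual-discriminator objective described in the abstract can be sketched as follows. This is an illustrative reconstruction based only on the abstract, not on the paper's released code: the function names (`bce`, `dual_discriminator_loss`) and the exact loss formulation are assumptions, using the standard binary cross-entropy GAN objective applied once per discriminator.

```python
import math

# NOTE: illustrative sketch only; names and loss form are assumptions,
# not taken from the A2D-GAN repository.

def bce(prediction, target):
    """Binary cross-entropy for a single scalar prediction in (0, 1)."""
    eps = 1e-7
    p = min(max(prediction, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def dual_discriminator_loss(d_frame_real, d_frame_fake,
                            d_noise_real, d_noise_fake):
    """Combined adversarial loss over the two discriminators: one scores
    real vs. reconstructed frames, the other real vs. reconstructed noise.
    Each discriminator wants real inputs scored 1 and fakes scored 0."""
    frame_loss = bce(d_frame_real, 1.0) + bce(d_frame_fake, 0.0)
    noise_loss = bce(d_noise_real, 1.0) + bce(d_noise_fake, 0.0)
    return frame_loss + noise_loss

# Confident discriminators (reals near 1, fakes near 0) give a small loss;
# confused discriminators (everything near 0.5) give a larger one.
confident = dual_discriminator_loss(0.95, 0.05, 0.9, 0.1)
confused = dual_discriminator_loss(0.5, 0.5, 0.5, 0.5)
print(confident < confused)  # True
```

In the adversarial setup the abstract describes, the generator is trained against both scores simultaneously, so it must produce reconstructions that fool the frame discriminator and the noise discriminator at once.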
Appears in Collections: Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
