Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/12759
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Singh, Rituraj K. | en_US |
dc.contributor.author | Saini, Krishanu | en_US |
dc.contributor.author | Sethi, Anikeit | en_US |
dc.contributor.author | Tiwari, Aruna | en_US |
dc.date.accessioned | 2023-12-14T12:38:24Z | - |
dc.date.available | 2023-12-14T12:38:24Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Singh, R. K., Saini, K., Sethi, A., & Tiwari, A. (2023). STemGAN: Spatio-temporal generative adversarial network for video anomaly detection. Applied Intelligence. Scopus. https://doi.org/10.1007/s10489-023-04940-7 | en_US |
dc.identifier.issn | 0924-669X | - |
dc.identifier.other | EID(2-s2.0-85171750483) | - |
dc.identifier.uri | https://doi.org/10.1007/s10489-023-04940-7 | - |
dc.identifier.uri | https://dspace.iiti.ac.in/handle/123456789/12759 | - |
dc.description.abstract | Automatic detection and interpretation of abnormal events have become crucial tasks in large-scale video surveillance systems. The challenges arise from the lack of a clear definition of abnormality, which restricts the use of supervised methods. To this end, we propose a novel unsupervised anomaly detection method, the Spatio-Temporal Generative Adversarial Network (STemGAN). This framework consists of a generator and a discriminator that learn from the video context, utilizing both spatial and temporal information to predict future frames. The generator follows an Autoencoder (AE) architecture, with a dual-stream encoder that extracts appearance and motion information and a decoder with a Channel Attention (CA) module to focus on dynamic foreground features. In addition, we provide a transfer-learning method that enhances the generalizability of STemGAN. We compare the performance of our approach with existing state-of-the-art approaches on benchmark Anomaly Detection (AD) datasets using standard evaluation metrics, i.e., AUC (Area Under the Curve) and EER (Equal Error Rate). The empirical results show that our proposed STemGAN outperforms the existing state-of-the-art methods, achieving an AUC score of 97.5% on UCSD Ped2, 86.0% on CUHK Avenue, 90.4% on Subway-entrance, and 95.2% on Subway-exit. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature. (Illustrative sketches of the channel-attention idea and the AUC/EER computation described here appear after the record below.) | en_US |
dc.language.iso | en | en_US |
dc.publisher | Springer | en_US |
dc.source | Applied Intelligence | en_US |
dc.subject | Anomaly detection | en_US |
dc.subject | Attention | en_US |
dc.subject | Generative adversarial networks | en_US |
dc.subject | Spatio-temporal | en_US |
dc.subject | Unsupervised learning | en_US |
dc.subject | Video surveillance | en_US |
dc.title | STemGAN: spatio-temporal generative adversarial network for video anomaly detection | en_US |
dc.type | Journal Article | en_US |
Appears in Collections: | Department of Computer Science and Engineering |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
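
The abstract describes a generator with a dual-stream (appearance + motion) encoder and a decoder that uses a Channel Attention (CA) module. The sketch below is a minimal, illustrative PyTorch rendering of those two ideas only; the class names, layer sizes, and the use of frame differences as the motion cue are assumptions made here, not details taken from the paper.

```python
# Illustrative sketch only: names, dimensions, and the frame-difference motion
# cue are assumptions; this is not the authors' STemGAN implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: re-weights feature
    channels so a decoder can emphasise dynamic foreground features."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # one value per channel (global context)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gates in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # scale each feature channel by its gate


class DualStreamEncoder(nn.Module):
    """Two small convolutional streams: one over stacked RGB frames
    (appearance) and one over frame differences (a simple motion cue)."""

    def __init__(self, in_frames: int = 4, base: int = 32):
        super().__init__()

        def stream(in_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, base, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )

        self.appearance = stream(in_frames * 3)    # stacked RGB frames
        self.motion = stream((in_frames - 1) * 3)  # frame-to-frame differences

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W), a clip of T consecutive frames
        b, t, c, h, w = frames.shape
        app = self.appearance(frames.reshape(b, t * c, h, w))
        diff = frames[:, 1:] - frames[:, :-1]
        mot = self.motion(diff.reshape(b, (t - 1) * c, h, w))
        return torch.cat([app, mot], dim=1)        # fused spatio-temporal features


if __name__ == "__main__":
    clip = torch.randn(2, 4, 3, 64, 64)            # batch of 2 four-frame clips
    feats = DualStreamEncoder()(clip)              # -> (2, 128, 16, 16)
    attended = ChannelAttention(feats.shape[1])(feats)
    print(feats.shape, attended.shape)
```

In a future-frame-prediction setup of this kind, the attended features would feed a decoder that predicts the next frame, and large prediction error would signal an anomaly; that pipeline is not reproduced here.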
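The abstract also reports results in the two standard frame-level metrics it names, AUC and EER. The snippet below is a generic sketch of how these metrics are commonly computed from per-frame anomaly scores using scikit-learn; it is not the authors' evaluation code, and the synthetic scores are for illustration only.

```python
# Generic frame-level AUC / EER computation from anomaly scores, assuming
# NumPy and scikit-learn; not taken from the paper.
import numpy as np
from sklearn.metrics import roc_curve, auc


def frame_level_auc_eer(scores: np.ndarray, labels: np.ndarray):
    """scores: higher = more anomalous; labels: 1 = anomalous frame, 0 = normal."""
    fpr, tpr, _ = roc_curve(labels, scores)
    roc_auc = auc(fpr, tpr)
    # EER: the operating point where false-positive rate equals false-negative rate.
    fnr = 1.0 - tpr
    eer_idx = int(np.argmin(np.abs(fpr - fnr)))
    eer = (fpr[eer_idx] + fnr[eer_idx]) / 2.0
    return roc_auc, eer


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)                 # synthetic ground truth
    scores = labels + 0.8 * rng.standard_normal(1000)      # synthetic anomaly scores
    print(frame_level_auc_eer(scores, labels))
```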