Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/6561
Title: Spatio-Temporal Self-Attention Network for Fire Detection and Segmentation in Video Surveillance
Authors: Tanveer, M.
Keywords: Classification (of information);Disaster prevention;Fire detectors;Fires;Image segmentation;Neural networks;Object detection;Object recognition;Security systems;Convolutional neural network;Early detection;Fire detection;Fire segmentations;Semi-supervised;Small-sized fire;Spatial temporals;Spatio-temporal;Video fire segmentation;Video surveillance;Disasters
Issue Date: 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Citation: Shahid, M., Virtusio, J. J., Wu, Y. -., Chen, Y. -., Tanveer, M., Muhammad, K., & Hua, K. -. (2022). Spatio-temporal self-attention network for fire detection and segmentation in video surveillance. IEEE Access, 10, 1259-1275. doi:10.1109/ACCESS.2021.3132787
Abstract: Convolutional Neural Network (CNN) based approaches are popular for various image/video related tasks due to their state-of-the-art performance. However, for problems like object detection and segmentation, CNNs still struggle with objects of arbitrary shape or size, occlusions, and varying viewpoints. This limitation makes them largely unsuitable for fire detection and segmentation, since flames can have an unpredictable scale and shape. In this paper, we propose a method that detects and segments fire regions with special consideration of their arbitrary sizes and shapes. Specifically, our approach uses a self-attention mechanism to augment spatial characteristics with temporal features, allowing the network to reduce its reliance on spatial factors like shape or size and to exploit robust spatio-temporal dependencies. As a whole, our pipeline has two stages: in the first stage, we extract region proposals using spatio-temporal features, and in the second stage, we classify each region proposal as flame or non-flame. Due to the scarcity of large fire datasets, we adopt a transfer learning strategy and pre-train our classifier on the ImageNet dataset. Additionally, our spatio-temporal network requires only semi-supervision, needing just one ground-truth segmentation mask per frame-sequence input. In our experiments, the proposed method significantly outperforms state-of-the-art fire detection, with a 2 to 4% relative improvement in F1-score for large-scale fires and a nearly 60% relative improvement for small fires at a very early stage.
URI: https://doi.org/10.1109/ACCESS.2021.3132787
https://dspace.iiti.ac.in/handle/123456789/6561
ISSN: 2169-3536
Type of Material: Journal Article
Appears in Collections: Department of Mathematics

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
