Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/14760
Title: | A context-aware attention and graph neural network-based multimodal framework for misogyny detection
Authors: | Rehman, Mohammad Zia Ur; Kumar, Nagendra
Keywords: | Data fusion;Deep learning;Hate speech against women;Misogyny detection;Multimodal learning;Sexism detection |
Issue Date: | 2025 |
Publisher: | Elsevier Ltd |
Citation: | Rehman, M. Z. U., Zahoor, S., Manzoor, A., Maqbool, M., & Kumar, N. (2025). A context-aware attention and graph neural network-based multimodal framework for misogyny detection. Information Processing and Management. Scopus. https://doi.org/10.1016/j.ipm.2024.103895 |
Abstract: | A substantial portion of offensive content on social media is directed towards women. Since approaches for general offensive content detection struggle to detect misogynistic content, solutions tailored to offensive content against women are required. To this end, we propose a novel multimodal framework for the detection of misogynistic and sexist content. The framework comprises three modules: the Multimodal Attention Module (MANM), the Graph-based Feature Reconstruction Module (GFRM), and the Content-specific Features Learning Module (CFLM). The MANM employs adaptive gating-based multimodal context-aware attention, enabling the model to focus on relevant visual and textual information and to generate contextually relevant features. The GFRM utilizes graphs to refine features within individual modalities, while the CFLM focuses on learning text- and image-specific features such as toxicity features and caption features. Additionally, we curate a set of misogynous lexicons to compute a misogyny-specific lexicon score from the text. We apply test-time augmentation in feature space to better generalize predictions on diverse inputs. The performance of the proposed approach has been evaluated on two multimodal datasets, MAMI and MMHS150K, with 11,000 and 13,494 samples, respectively. The proposed method demonstrates an average improvement of 11.87% and 10.82% in macro-F1 over existing multimodal methods on the MAMI and MMHS150K datasets, respectively. © 2024 Elsevier Ltd
URI: | https://doi.org/10.1016/j.ipm.2024.103895 https://dspace.iiti.ac.in/handle/123456789/14760 |
ISSN: | 0306-4573 |
Type of Material: | Journal Article |
Appears in Collections: | Department of Computer Science and Engineering |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
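The abstract above describes adaptive gating-based multimodal context-aware attention for fusing text and image features. The snippet below is a minimal, hypothetical sketch of what such a gated cross-modal attention block could look like; the layer choices, feature dimensions, and pooling strategy are assumptions for illustration only and are not taken from the paper's MANM implementation.

```python
import torch
import torch.nn as nn

class GatedMultimodalAttention(nn.Module):
    """Illustrative sketch (not the paper's MANM): cross-modal attention
    followed by an adaptive sigmoid gate that decides, per dimension, how
    much of the attended signal to keep versus the original unimodal features."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Cross-attention: text queries attend over image tokens and vice versa.
        self.text_to_image = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Adaptive gates computed from concatenated original + attended features.
        self.text_gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.image_gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (batch, text_len, dim), image_feats: (batch, num_regions, dim)
        attended_text, _ = self.text_to_image(text_feats, image_feats, image_feats)
        attended_image, _ = self.image_to_text(image_feats, text_feats, text_feats)

        # Gate each modality: values near 1 keep the cross-attended signal,
        # values near 0 fall back to the original unimodal features.
        g_t = self.text_gate(torch.cat([text_feats, attended_text], dim=-1))
        g_i = self.image_gate(torch.cat([image_feats, attended_image], dim=-1))
        text_out = g_t * attended_text + (1 - g_t) * text_feats
        image_out = g_i * attended_image + (1 - g_i) * image_feats

        # Pool each modality and fuse into a single multimodal representation.
        fused = torch.cat([text_out.mean(dim=1), image_out.mean(dim=1)], dim=-1)
        return self.fuse(fused)

if __name__ == "__main__":
    model = GatedMultimodalAttention(dim=768)
    text = torch.randn(2, 32, 768)    # e.g. transformer token embeddings
    image = torch.randn(2, 49, 768)   # e.g. 7x7 grid of visual region features
    print(model(text, image).shape)   # torch.Size([2, 768])
```

The gating step reflects the context-aware idea in the abstract: when one modality is uninformative for a given sample (e.g. a benign image paired with misogynistic text), the learned gate can suppress the cross-attended contribution from that modality rather than blending it in unconditionally.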