Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/16684
Title: | A multimodal–multitask framework with cross-modal relation and hierarchical interactive attention for semantic comprehension |
Authors: | Rehman, Mohammad Zia Ur; Bansal, Shubhi; Kumar, Nagendra
Keywords: | Cross-modal Learning;Generative AI Augmentation;Hate Speech Detection;Multimodal–multitask Learning;Semantic Comprehension;Sentiment Analysis;Interactive Computer Systems;Latent Semantic Analysis;Learning Systems;Modal Analysis;Multi-task Learning;Multitasking;Semantics;Cross-modal;Multi-modal;Multitask Learning;Speech Detection
Issue Date: | 2026 |
Publisher: | Elsevier B.V. |
Citation: | Rehman, M. Z. U., Raghuvanshi, D., Bansal, S., & Kumar, N. (2026). A multimodal–multitask framework with cross-modal relation and hierarchical interactive attention for semantic comprehension. Information Fusion, 126. https://doi.org/10.1016/j.inffus.2025.103628 |
Abstract: | A major challenge in multimodal learning is the presence of noise within individual modalities. This noise inherently affects the resulting multimodal representations, especially when these representations are obtained through explicit interactions between different modalities. Moreover, multimodal fusion techniques, while aiming to achieve a strong joint representation, can neglect valuable discriminative information within the individual modalities. To this end, we propose a Multimodal-Multitask framework with crOss-modal Relation and hIErarchical iNteractive aTtention (MM-ORIENT) that is effective for multiple tasks. The proposed approach acquires multimodal representations cross-modally without explicit interaction between different modalities, reducing the noise effect at the latent stage. To achieve this, we propose cross-modal relation graphs that reconstruct monomodal features to acquire multimodal representations. The features are reconstructed based on the node neighborhood, where the neighborhood is decided by the features of a different modality. We also propose Hierarchical Interactive Monomodal Attention (HIMA) to focus on pertinent information within a modality. While cross-modal relation graphs help comprehend high-order relationships between two modalities, HIMA helps in multitasking by learning discriminative features of individual modalities before late-fusing them. Finally, extensive experimental evaluation on three datasets demonstrates that the proposed approach effectively comprehends multimodal content for multiple tasks. The code is available in the GitHub repository: https://github.com/devraj-raghuvanshi/MM-ORIENT. © 2025 Elsevier B.V. All rights reserved.
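The core idea described in the abstract, reconstructing one modality's features over a node neighborhood that is decided by a different modality, can be illustrated with a minimal sketch. The sketch below is not the authors' implementation (see the linked GitHub repository for that); the function name, the kNN graph construction, the softmax aggregation, and the assumption that the two modalities share aligned nodes are all illustrative assumptions.

# Hypothetical sketch (not the authors' code): reconstruct modality A's features
# by aggregating over a neighborhood chosen from modality B's similarities,
# in the spirit of the cross-modal relation graphs described in the abstract.
import torch
import torch.nn.functional as F

def cross_modal_reconstruct(feats_a, feats_b, k=5):
    """Reconstruct feats_a using a kNN graph built from feats_b.

    feats_a: (N, d_a) monomodal features to be reconstructed (e.g., text tokens)
    feats_b: (N, d_b) features of the other modality (e.g., image regions),
             assumed here to be aligned node-for-node with feats_a
    k:       neighborhood size
    """
    # Pairwise cosine similarity in modality B decides each node's neighborhood.
    b_norm = F.normalize(feats_b, dim=-1)
    sim = b_norm @ b_norm.T                          # (N, N)
    topk = sim.topk(k, dim=-1)                       # k neighbors per node

    # Softmax over the k similarities gives aggregation weights.
    weights = F.softmax(topk.values, dim=-1)         # (N, k)

    # Each node of modality A is rebuilt as a weighted sum of the A-features
    # of its cross-modally selected neighbors.
    neighbors_a = feats_a[topk.indices]              # (N, k, d_a)
    return (weights.unsqueeze(-1) * neighbors_a).sum(dim=1)  # (N, d_a)

# Example usage with random features
text = torch.randn(16, 768)    # 16 "text" nodes
image = torch.randn(16, 512)   # 16 "image" nodes aligned with the text nodes
recon_text = cross_modal_reconstruct(text, image, k=4)
print(recon_text.shape)        # torch.Size([16, 768])

Because the aggregation weights come entirely from modality B, noise in modality A is not injected into the neighborhood selection itself, which is one plausible reading of how the paper avoids explicit cross-modal feature interaction at the latent stage.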
URI: | https://dx.doi.org/10.1016/j.inffus.2025.103628 https://dspace.iiti.ac.in:8080/jspui/handle/123456789/16684 |
ISSN: | 1566-2535 |
Type of Material: | Journal Article |
Appears in Collections: | Department of Computer Science and Engineering |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.