Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/17312
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Uppal, Dolly | en_US |
| dc.contributor.author | Prakash, Surya | en_US |
| dc.date.accessioned | 2025-12-04T10:00:51Z | - |
| dc.date.available | 2025-12-04T10:00:51Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.citation | Uppal, D., & Prakash, S. (2025). MS2ADM-BTS: Multi-scale Dual Attention Guided Diffusion Model for Volumetric Brain Tumor Segmentation. Pattern Recognition Letters, 198, 115–122. https://doi.org/10.1016/j.patrec.2025.10.010 | en_US |
| dc.identifier.issn | 0167-8655 | - |
| dc.identifier.other | EID(2-s2.0-105022201444) | - |
| dc.identifier.uri | https://dx.doi.org/10.1016/j.patrec.2025.10.010 | - |
| dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17312 | - |
| dc.description.abstract | Accurate segmentation of brain tumors plays a vital role in diagnosis and clinical evaluation. Diffusion models have emerged as a promising approach in medical image segmentation due to their ability to generate high-quality representations. However, existing diffusion-based approaches integrate multimodal images across multiple scales only to a limited extent, and effectively suppressing noise interference in brain tumor images remains a major limitation in segmentation tasks. Further, these methods struggle to balance global and local feature extraction and integration, particularly in multi-label segmentation tasks. To address these challenges, we propose a Multi-scale Dual Attention Guided Diffusion Model, named MS2ADM-BTS, tailored for volumetric brain tumor segmentation in multimodal Magnetic Resonance Imaging (MRI). It consists of a Context-Aware Feature (CxAF) encoder and a Dual-Stage Multi-Scale Feature Fusion (DS-MSFF) denoising network that learns the denoising process to generate multi-label segmentation predictions. The DS-MSFF denoising network includes an Attention-Guided Cross-Scale Feature Fusion (AGCS-FF) module that effectively models long-range dependencies in high-resolution feature maps and improves feature representation and reconstruction quality. In addition, we introduce a novel inference-time sampling procedure that incorporates a Spectral-Guided Noise Initialization mechanism to mitigate the training-inference gap and Uncertainty-Guided Diffusion Sampling to produce robust segmentation outcomes. We evaluate the proposed approach on benchmark datasets from the 2020 Multimodal Brain Tumor Segmentation (BraTS) Challenge and the Medical Segmentation Decathlon (MSD) BraTS dataset. The results show that it outperforms existing state-of-the-art methods owing to its effective denoising capability. The code is available at https://github.com/Dolly-Uppal/MS2ADM-BTS. © 2025 Elsevier B.V. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier B.V. | en_US |
| dc.source | Pattern Recognition Letters | en_US |
| dc.subject | Attention | en_US |
| dc.subject | Brain tumor segmentation | en_US |
| dc.subject | Diffusion model | en_US |
| dc.subject | Multi-scale features | en_US |
| dc.subject | Multimodal MRI | en_US |
| dc.title | MS2ADM-BTS: Multi-scale Dual Attention Guided Diffusion Model for Volumetric Brain Tumor Segmentation | en_US |
| dc.type | Journal Article | en_US |
| Appears in Collections: | Department of Computer Science and Engineering | |
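The abstract describes generating segmentation masks by learning a denoising (reverse-diffusion) process conditioned on the input images. As a rough orientation only, the sketch below shows a generic DDPM-style reverse-diffusion sampling loop for image-conditioned segmentation; it does not reproduce the paper's actual components (the CxAF encoder, DS-MSFF network, AGCS-FF module, spectral-guided noise initialization, or uncertainty-guided sampling), and the `toy_denoiser`, schedule values, and shapes are all hypothetical placeholders.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule and its cumulative products."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def sample_segmentation(denoiser, image, T=50, seed=0):
    """Start from Gaussian noise and iteratively denoise a mask,
    conditioning the noise predictor on the input image at each step."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(image.shape)            # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps_hat = denoiser(x, image, t)             # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return (x > 0).astype(np.uint8)                 # threshold to a binary mask

# Toy stand-in for a trained conditional denoising network (hypothetical):
toy_denoiser = lambda x, img, t: x - img
image = np.ones((4, 4))
mask = sample_segmentation(toy_denoiser, image)
print(mask.shape)  # (4, 4)
```

In practice the denoiser would be a trained 3D network taking multimodal MRI volumes as conditioning input, and the final prediction would be multi-label rather than the binary threshold used here for brevity.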
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.