Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/16782
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Uppal, Dolly | en_US |
| dc.contributor.author | Prakash, Surya | en_US |
| dc.date.accessioned | 2025-09-04T12:47:48Z | - |
| dc.date.available | 2025-09-04T12:47:48Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.citation | Uppal, D., & Prakash, S. (2025). CLT-MambaSeg: An integrated model of Convolution, Linear Transformer and Multiscale Mamba for medical image segmentation. Computers in Biology and Medicine, 196. https://doi.org/10.1016/j.compbiomed.2025.110736 | en_US |
| dc.identifier.issn | 1879-0534 | - |
| dc.identifier.issn | 0010-4825 | - |
| dc.identifier.other | EID(2-s2.0-105011704853) | - |
| dc.identifier.uri | https://dx.doi.org/10.1016/j.compbiomed.2025.110736 | - |
| dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/16782 | - |
| dc.description.abstract | Recent advances in deep learning have significantly enhanced the performance of medical image segmentation. However, maintaining a balanced integration of feature localization, global context modeling, and computational efficiency remains a critical research challenge. Convolutional Neural Networks (CNNs) effectively capture fine-grained local features through hierarchical convolutions; however, they often struggle to model long-range dependencies due to their limited receptive field. Transformers address this limitation by leveraging self-attention mechanisms to capture global context, but they are computationally intensive and require large-scale data for effective training. The Mamba architecture has emerged as a promising approach, effectively capturing long-range dependencies while maintaining low computational overhead and high segmentation accuracy. Based on this, we propose a method named CLT-MambaSeg that integrates Convolution, Linear Transformer, and Multiscale Mamba architectures to capture local features, model global context, and improve computational efficiency for medical image segmentation. It utilizes a convolution-based Spatial Representation Extraction (SREx) module to capture intricate spatial relationships and dependencies. Further, it comprises a Mamba Vision Linear Transformer (MVLTrans) module to capture multiscale context, spatial and sequential dependencies, and enhanced global context. In addition, to address the problem of limited data, we propose a novel Memory-Guided Augmentation Generative Adversarial Network (MeGA-GAN) that generates synthetic realistic images to further enhance the segmentation performance. We conduct extensive experiments and ablation studies on five benchmark datasets, namely CVC-ClinicDB, Breast UltraSound Images (BUSI), PH2, and two datasets from the International Skin Imaging Collaboration (ISIC), namely ISIC-2016 and ISIC-2017. Experimental results demonstrate the efficacy of the proposed CLT-MambaSeg compared to other state-of-the-art methods. © 2025 Elsevier B.V. All rights reserved. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier Ltd | en_US |
| dc.source | Computers in Biology and Medicine | en_US |
| dc.subject | Generative Adversarial Network | en_US |
| dc.subject | Mamba | en_US |
| dc.subject | Medical Image Segmentation | en_US |
| dc.subject | State Space Model | en_US |
| dc.subject | Transformer | en_US |
| dc.subject | Computational Efficiency | en_US |
| dc.subject | Convolution | en_US |
| dc.subject | Convolutional Neural Networks | en_US |
| dc.subject | Deep Learning | en_US |
| dc.subject | Image Enhancement | en_US |
| dc.subject | Medical Image Processing | en_US |
| dc.subject | Memory Architecture | en_US |
| dc.subject | Network Architecture | en_US |
| dc.subject | State Space Methods | en_US |
| dc.subject | Adversarial Networks | en_US |
| dc.subject | Global Context | en_US |
| dc.subject | Local Feature | en_US |
| dc.subject | Long-range Dependencies | en_US |
| dc.subject | Skin Imaging | en_US |
| dc.subject | State-space | en_US |
| dc.subject | Image Segmentation | en_US |
| dc.subject | Article | en_US |
| dc.subject | Back Propagation | en_US |
| dc.subject | Computer Vision | en_US |
| dc.subject | Convolutional Neural Network | en_US |
| dc.subject | Cross Validation | en_US |
| dc.subject | Echomammography | en_US |
| dc.subject | Feature Extraction | en_US |
| dc.subject | Feature Learning (Machine Learning) | en_US |
| dc.subject | Gaussian Noise | en_US |
| dc.subject | Human | en_US |
| dc.subject | Machine Learning | en_US |
| dc.subject | Natural Language Processing | en_US |
| dc.subject | Residual Neural Network | en_US |
| dc.title | CLT-MambaSeg: An integrated model of Convolution, Linear Transformer and Multiscale Mamba for medical image segmentation | en_US |
| dc.type | Journal Article | en_US |
| Appears in Collections: | Department of Computer Science and Engineering | |
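The abstract above names three complementary ingredients: convolution for local features, a linear transformer for global context, and a Mamba-style state-space mixer for efficient long-range dependencies. Purely as an illustration of how such a hybrid block can be wired together, and not the authors' implementation, the following minimal PyTorch sketch chains a depthwise convolution, linear attention, and a heavily simplified gated scan; all class names, shapes, and the fusion order are assumptions.

```python
# Hypothetical sketch only: a toy hybrid block combining convolution (local features),
# linear attention (global context), and a simplified gated scan standing in for a
# Mamba-style state-space mixer. Names, shapes, and fusion order are assumptions,
# not the CLT-MambaSeg code.
import torch
import torch.nn as nn


class LinearAttention(nn.Module):
    """O(N) attention: softmax feature maps on Q/K instead of a full QK^T softmax."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                           # x: (B, N, C)
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda t: t.view(B, N, self.heads, C // self.heads).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)       # (B, H, N, C//H)
        q = q.softmax(dim=-1)
        k = k.softmax(dim=-2)
        ctx = k.transpose(-2, -1) @ v                # (B, H, C//H, C//H), cost O(N * d^2)
        out = (q @ ctx).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


class SimpleSSMMixer(nn.Module):
    """Very simplified per-channel recurrent scan with a learnable decay and output gate."""
    def __init__(self, dim: int):
        super().__init__()
        self.in_proj = nn.Linear(dim, dim * 2)
        self.decay = nn.Parameter(torch.zeros(dim))  # learnable per-channel decay
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):                            # x: (B, N, C)
        u, gate = self.in_proj(x).chunk(2, dim=-1)
        a = torch.sigmoid(self.decay)                # (C,)
        h = torch.zeros_like(u[:, 0])
        states = []
        for t in range(u.shape[1]):                  # sequential scan over the token axis
            h = a * h + (1 - a) * u[:, t]
            states.append(h)
        y = torch.stack(states, dim=1) * torch.sigmoid(gate)
        return self.out_proj(y)


class HybridBlock(nn.Module):
    """Conv (local) -> linear attention (global) -> scan (long range), with residuals."""
    def __init__(self, dim: int):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.attn = LinearAttention(dim)
        self.ssm = SimpleSSMMixer(dim)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):                            # x: (B, C, H, W)
        x = x + self.conv(x)                          # local spatial features
        B, C, H, W = x.shape
        seq = x.flatten(2).transpose(1, 2)            # (B, H*W, C) token sequence
        seq = seq + self.attn(self.norm1(seq))        # global context at linear cost
        seq = seq + self.ssm(self.norm2(seq))         # sequential long-range mixing
        return seq.transpose(1, 2).view(B, C, H, W)


if __name__ == "__main__":
    block = HybridBlock(dim=32)
    print(block(torch.randn(2, 32, 16, 16)).shape)    # torch.Size([2, 32, 16, 16])
```

The sketch only illustrates the division of labor implied by the title; the paper's SREx and MVLTrans modules, multiscale design, and MeGA-GAN augmentation are not reproduced here.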
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.