Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/16522
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Nath, Anirban | en_US |
dc.contributor.author | Shukla, Sneha | en_US |
dc.contributor.author | Gupta, Puneet | en_US |
dc.date.accessioned | 2025-07-23T10:58:37Z | - |
dc.date.available | 2025-07-23T10:58:37Z | - |
dc.date.issued | 2025 | - |
dc.identifier.citation | Nath, A., Shukla, S., & Gupta, P. (2025). MTMedFormer: multi-task vision transformer for medical imaging with federated learning. Medical and Biological Engineering and Computing. https://doi.org/10.1007/s11517-025-03404-z | en_US |
dc.identifier.issn | 0140-0118 | - |
dc.identifier.other | EID(2-s2.0-105010010786) | - |
dc.identifier.uri | https://dx.doi.org/10.1007/s11517-025-03404-z | - |
dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/16522 | - |
dc.description.abstract | Deep learning has revolutionized medical imaging, improving tasks like image segmentation, detection, and classification, often surpassing human accuracy. However, the training of effective diagnostic models is hindered by two major challenges: the need for large datasets for each task and privacy laws restricting the sharing of medical data. Multi-task learning (MTL) addresses the first challenge by enabling a single model to perform multiple tasks, though convolution-based MTL models struggle with contextualizing global features. Federated learning (FL) helps overcome the second challenge by allowing models to train collaboratively without sharing data, but traditional methods struggle to aggregate stable feature maps due to the permutation-invariant nature of neural networks. To tackle these issues, we propose MTMedFormer, a transformer-based multi-task medical imaging model. We leverage the transformers’ ability to learn task-agnostic features using a shared encoder and utilize task-specific decoders for robust feature extraction. By combining MTL with a hybrid loss function, MTMedFormer learns distinct diagnostic tasks in a synergistic manner. Additionally, we introduce a novel Bayesian federation method for aggregating multi-task imaging models. Our results show that MTMedFormer outperforms traditional single-task and MTL models on mammogram and pneumonia datasets, while our Bayesian federation method surpasses traditional methods in image segmentation. © International Federation for Medical and Biological Engineering 2025. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Springer Science and Business Media Deutschland GmbH | en_US |
dc.source | Medical and Biological Engineering and Computing | en_US |
dc.subject | Bayesian federation | en_US |
dc.subject | Federated learning | en_US |
dc.subject | Medical imaging | en_US |
dc.subject | Multi-task model | en_US |
dc.subject | Vision transformer | en_US |
dc.title | MTMedFormer: multi-task vision transformer for medical imaging with federated learning | en_US |
dc.type | Journal Article | en_US |
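
The abstract above names three concrete ingredients: a shared transformer encoder with task-specific decoders, a hybrid multi-task loss, and a Bayesian method for aggregating federated models. No files are attached to this record, so the authors' implementation is not available here; the sketches below are illustrative only. First, a minimal PyTorch sketch of the shared-encoder / task-specific-decoder pattern trained with a weighted-sum hybrid loss. All layer sizes, the single-channel (grayscale) input, the two example heads (classification and segmentation), and the loss weights are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch (not the authors' code) of a multi-task vision transformer:
# one shared ViT-style encoder feeding task-specific decoder heads, trained
# with a hybrid (weighted-sum) loss. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Tiny ViT-style encoder: patchify, add positions, run transformer blocks."""
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.blocks(tokens + self.pos)

class MultiTaskModel(nn.Module):
    """Shared encoder + one decoder per diagnostic task (both heads assumed)."""
    def __init__(self, dim=256, n_classes=2, img_size=224, patch=16):
        super().__init__()
        self.encoder = SharedEncoder(img_size, patch, dim)
        self.grid = img_size // patch
        # Task 1: classification head (e.g. pneumonia present / absent).
        self.cls_head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, n_classes))
        # Task 2: segmentation head, upsampling tokens back to a pixel mask.
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(dim, 64, kernel_size=patch // 2, stride=patch // 2),
            nn.GELU(),
            nn.ConvTranspose2d(64, 1, kernel_size=2, stride=2),
        )

    def forward(self, x):
        feats = self.encoder(x)                    # (B, N, dim), task-agnostic
        logits = self.cls_head(feats.mean(dim=1))  # pooled tokens -> class logits
        grid = feats.transpose(1, 2).reshape(x.size(0), -1, self.grid, self.grid)
        mask = self.seg_head(grid)                 # (B, 1, H, W) mask logits
        return logits, mask

def hybrid_loss(logits, mask, y_cls, y_mask, w_cls=1.0, w_seg=1.0):
    """Hybrid loss as a weighted sum of per-task losses (weights assumed)."""
    return (w_cls * nn.functional.cross_entropy(logits, y_cls)
            + w_seg * nn.functional.binary_cross_entropy_with_logits(mask, y_mask))

# Example: one forward/backward pass on random data.
model = MultiTaskModel()
x = torch.randn(2, 1, 224, 224)
logits, mask = model(x)
loss = hybrid_loss(logits, mask, torch.tensor([0, 1]), torch.rand(2, 1, 224, 224))
loss.backward()
```

The abstract also mentions a novel Bayesian federation method for aggregating multi-task models, whose details are not given in this record. As a hedged stand-in, the sketch below shows precision-weighted parameter averaging, a common Bayesian-flavoured variant of FedAvg in which each client's weights are scaled by an inverse-variance (precision) estimate, so less reliable clients contribute less to the global model. The per-client variance inputs are hypothetical.

```python
# Illustrative stand-in for Bayesian-style federated aggregation (not the
# paper's method): precision-weighted averaging of client state_dicts.
import torch

def precision_weighted_average(client_states, client_variances):
    """client_states: list of state_dicts from clients; client_variances:
    assumed per-client scalar variance estimates. Returns the aggregated
    state_dict for the global model."""
    precisions = torch.tensor([1.0 / v for v in client_variances])
    weights = precisions / precisions.sum()  # normalize to sum to 1
    global_state = {}
    for name in client_states[0]:
        stacked = torch.stack([s[name].float() for s in client_states])
        # Broadcast each client's scalar weight over its parameter tensor.
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        global_state[name] = (w * stacked).sum(dim=0)
    return global_state

# Example usage with two hypothetical client checkpoints:
# states = [model_a.state_dict(), model_b.state_dict()]
# agg = precision_weighted_average(states, client_variances=[0.5, 2.0])
# global_model.load_state_dict(agg)
```
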
Appears in Collections: Department of Computer Science and Engineering
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.