Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/13297
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tanveer, M. | en_US |
dc.date.accessioned | 2024-03-19T12:56:46Z | - |
dc.date.available | 2024-03-19T12:56:46Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Mazher, M., Razzak, I., Qayyum, A., Tanveer, M., Beier, S., Khan, T., & Niederer, S. A. (2024). Self-supervised spatial–temporal transformer fusion based federated framework for 4D cardiovascular image segmentation. Information Fusion. Scopus. https://doi.org/10.1016/j.inffus.2024.102256 | en_US |
dc.identifier.issn | 1566-2535 | - |
dc.identifier.other | EID(2-s2.0-85184516052) | - |
dc.identifier.uri | https://doi.org/10.1016/j.inffus.2024.102256 | - |
dc.identifier.uri | https://dspace.iiti.ac.in/handle/123456789/13297 | - |
dc.description.abstract | The availability of high-quality, large annotated datasets is a significant challenge in healthcare. In addition, privacy concerns and data-sharing restrictions often hinder access to large and diverse medical image datasets. To reduce the requirement for annotated training data, self-supervised pre-training strategies on non-annotated data have been extensively used, while federated learning enables collaborative algorithm training without the need to exchange the underlying data. In this paper, we introduce a novel federated learning-based self-supervised spatial–temporal transformer fusion (SSFL) framework for cardiovascular image segmentation. A spatial–temporal Swin transformer is used to extract features from multiple phases of 3D short-axis (SAX) images covering the full cardiac cycle. An efficient self-supervised contrastive framework consisting of a spatial–temporal transformer network with 25 encoders is used to model the temporal features. The spatial and temporal features are fused and forwarded to the decoder for cardiac segmentation on cine MRI images. To further improve segmentation, we use an attention-based unpaired GAN model to transfer the style from ACDC to M&Ms and use the synthetically generated volumes in the proposed self-supervised approach. Experiments on three cardiovascular image segmentation tasks, namely segmentation of the right ventricle, left ventricle, and myocardium, showed significant improvement over state-of-the-art segmentation frameworks. © 2024 The Author(s) | en_US |
dc.language.iso | en | en_US |
dc.publisher | Elsevier B.V. | en_US |
dc.source | Information Fusion | en_US |
dc.subject | Federated learning | en_US |
dc.subject | Information fusion | en_US |
dc.subject | Medical image segmentation | en_US |
dc.subject | Multiview fusion | en_US |
dc.subject | Self-supervised learning | en_US |
dc.title | Self-supervised spatial–temporal transformer fusion based federated framework for 4D cardiovascular image segmentation | en_US |
dc.type | Journal Article | en_US |
dc.rights.license | All Open Access, Hybrid Gold | - |
Appears in Collections: | Department of Mathematics |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.