Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/13833
Full metadata record
DC Field | Value | Language
dc.contributor.author | Pachori, Ram Bilas | en_US
dc.date.accessioned | 2024-07-05T12:49:20Z | -
dc.date.available | 2024-07-05T12:49:20Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Das, D., Nayak, D. R., & Pachori, R. B. (2024). AES-Net: An adapter and enhanced self-attention guided network for multi-stage glaucoma classification using fundus images. Image and Vision Computing. Scopus. https://www.scopus.com/inward/record.uri?eid=2-s2.0-85191431418&doi=10.1016%2fj.imavis.2024.105042&partnerID=40&md5=77fd8b211beac4a7ee2d4e5b5d5b73c4 | en_US
dc.identifier.issn | 0262-8856 | -
dc.identifier.other | EID(2-s2.0-85191431418) | -
dc.identifier.uri | https://doi.org/10.1016/j.imavis.2024.105042 | -
dc.identifier.uri | https://dspace.iiti.ac.in/handle/123456789/13833 | -
dc.description.abstract | Glaucoma is a progressive eye condition that can lead to permanent vision loss, so timely detection is critical for planning effective treatment. In recent years, considerable effort has been devoted to developing automated glaucoma classification systems using CNNs on fundus images. In contrast, few methods have been proposed for diagnosing the different stages of glaucoma, mainly owing to the lack of large, publicly available labeled datasets. Moreover, fundus images exhibit high inter-stage resemblance, redundant features, and minute size variations of lesions, which make it difficult for conventional CNNs to classify multiple stages of glaucoma accurately. To address these challenges, this paper proposes a novel adapter- and enhanced self-attention-based CNN framework, named AES-Net, for effective classification of glaucoma stages. In particular, we propose a spatial adapter module on top of the backbone network for learning better feature representations, and an enhanced self-attention module (ESAM) to capture global feature correlations among the relevant channels and spatial positions. The ESAM assists in capturing stage-specific and detailed lesion features from the fundus images. Extensive experiments on two multi-stage glaucoma datasets indicate that our AES-Net surpasses existing CNN-based approaches. The Grad-CAM++ visualization maps further confirm the effectiveness of our AES-Net. © 2023 | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier Ltd | en_US
dc.source | Image and Vision Computing | en_US
dc.subject | AES-Net | en_US
dc.subject | Enhanced self-attention module (ESAM) | en_US
dc.subject | Fundus image | en_US
dc.subject | Multi-stage glaucoma classification | en_US
dc.subject | Spatial-adapter module | en_US
dc.title | AES-Net: An adapter and enhanced self-attention guided network for multi-stage glaucoma classification using fundus images | en_US
dc.type | Journal Article | en_US
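
The abstract above names two architectural components: a spatial adapter placed on top of a CNN backbone, and an enhanced self-attention module (ESAM) that captures correlations across both channels and spatial positions. The following is a minimal, hypothetical PyTorch sketch of those two ideas only; it is not the authors' implementation, and all module names, layer sizes, and the classifier head are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' AES-Net code): a lightweight spatial
# adapter applied to backbone feature maps, plus a self-attention block that
# mixes spatial and channel-wise correlations, as described in the abstract.
import torch
import torch.nn as nn

class SpatialAdapter(nn.Module):
    """Bottleneck adapter over backbone feature maps (assumed design)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.dw = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x):
        # Residual connection keeps the original backbone features intact.
        return x + self.up(self.act(self.dw(self.act(self.down(x)))))

class ChannelSpatialSelfAttention(nn.Module):
    """Self-attention over spatial positions followed by channel re-weighting."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)  # spatial correlations
        x = attended.transpose(1, 2).reshape(b, c, h, w) + x
        return x * self.channel_gate(x)                  # channel correlations

class GlaucomaStageClassifier(nn.Module):
    """Backbone -> adapter -> attention -> pooled classifier (illustrative)."""
    def __init__(self, backbone: nn.Module, channels: int, num_stages: int = 3):
        super().__init__()
        self.backbone = backbone
        self.adapter = SpatialAdapter(channels)
        self.attention = ChannelSpatialSelfAttention(channels)
        self.head = nn.Linear(channels, num_stages)

    def forward(self, x):
        feats = self.backbone(x)          # assumed to return (B, C, H, W) maps
        feats = self.attention(self.adapter(feats))
        pooled = feats.mean(dim=(2, 3))   # global average pooling
        return self.head(pooled)
```

In this sketch the backbone is assumed to return spatial feature maps of shape (B, C, H, W), for example a ResNet trunk with its pooling and classification layers removed; the number of glaucoma stages is likewise an assumption.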
Appears in Collections: Department of Electrical Engineering

Files in This Item:
There are no files associated with this item.


