Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/17081
Full metadata record
DC Field | Value | Language
dc.contributor.author | Maurya, Chandresh Kumar | en_US
dc.date.accessioned | 2025-10-31T17:41:01Z | -
dc.date.available | 2025-10-31T17:41:01Z | -
dc.date.issued | 2026 | -
dc.identifier.citation | Punneshetty, S., Italiya, D., Agarwal, V., Maurya, C. K., & Agrawal, A. (2026). An Explainable Multimodal Framework with LLM Agents for Intracranial Hemorrhage Detection. In Lecture Notes in Computer Science: Vol. 16147 LNCS. https://doi.org/10.1007/978-3-032-06004-4_1 | en_US
dc.identifier.isbn | 9789819698936 | -
dc.identifier.isbn | 9789819698042 | -
dc.identifier.isbn | 9789819698110 | -
dc.identifier.isbn | 9789819698905 | -
dc.identifier.isbn | 9789819512324 | -
dc.identifier.isbn | 9783032026019 | -
dc.identifier.isbn | 9783032008909 | -
dc.identifier.isbn | 9783031915802 | -
dc.identifier.isbn | 9789819698141 | -
dc.identifier.isbn | 9783031984136 | -
dc.identifier.issn | 1611-3349 | -
dc.identifier.issn | 0302-9743 | -
dc.identifier.other | EID(2-s2.0-105018304220) | -
dc.identifier.uri | https://dx.doi.org/10.1007/978-3-032-06004-4_1 | -
dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17081 | -
dc.description.abstract | Explainability in intracranial hemorrhage (ICH) diagnosis is essential for timely and accurate clinical decisions, especially in life-threatening situations. We propose a framework that generates explainable, clinically relevant text from 2D CT scans using two cooperative GPT-4o agents: a Multi-modal User Agent (MUA) and a Planner Agent. The MUA interprets scans with YOLOv10 (mosaic augmentation), SAM2, and clustering; | en_US
dc.description.abstract | the Planner selects tools and outputs key imaging parameters: bleed location, midline shift, calvarial fracture, and mass effect, which are crucial for urgent interventions. Explainability is enforced via chain-of-thought prompting to ensure transparent decision-making. Experiments show YOLOv10 with mosaic augmentation improves mAP@0.5:0.95 by 4.1% over existing methods, and the LLM agents extract clinical parameters with 78.1% accuracy (Our code is available at https://github.com/Shashwathp/Explainable-ICH-Detection-with-LLM-Agents/tree/main). These results underscore the potential of explainable AI to enhance trust and reliability in critical healthcare applications. © 2025 Elsevier B.V. All rights reserved. | en_US
dc.language.iso | en | en_US
dc.publisher | Springer Science and Business Media Deutschland GmbH | en_US
dc.source | Lecture Notes in Computer Science | en_US
dc.subject | Explainable AI | en_US
dc.subject | Intracranial Hemorrhage | en_US
dc.subject | LLM Agents | en_US
dc.title | An Explainable Multimodal Framework with LLM Agents for Intracranial Hemorrhage Detection | en_US
dc.type | Conference Paper | en_US
Appears in Collections: Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
