Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/17081
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Maurya, Chandresh Kumar | en_US |
| dc.date.accessioned | 2025-10-31T17:41:01Z | - |
| dc.date.available | 2025-10-31T17:41:01Z | - |
| dc.date.issued | 2026 | - |
| dc.identifier.citation | Punneshetty, S., Italiya, D., Agarwal, V., Maurya, C. K., & Agrawal, A. (2026). An Explainable Multimodal Framework with LLM Agents for Intracranial Hemorrhage Detection. In Lecture Notes in Computer Science: Vol. 16147 LNCS. https://doi.org/10.1007/978-3-032-06004-4_1 | en_US |
| dc.identifier.isbn | 9789819698936 | - |
| dc.identifier.isbn | 9789819698042 | - |
| dc.identifier.isbn | 9789819698110 | - |
| dc.identifier.isbn | 9789819698905 | - |
| dc.identifier.isbn | 9789819512324 | - |
| dc.identifier.isbn | 9783032026019 | - |
| dc.identifier.isbn | 9783032008909 | - |
| dc.identifier.isbn | 9783031915802 | - |
| dc.identifier.isbn | 9789819698141 | - |
| dc.identifier.isbn | 9783031984136 | - |
| dc.identifier.issn | 1611-3349 | - |
| dc.identifier.issn | 0302-9743 | - |
| dc.identifier.other | EID(2-s2.0-105018304220) | - |
| dc.identifier.uri | https://dx.doi.org/10.1007/978-3-032-06004-4_1 | - |
| dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17081 | - |
| dc.description.abstract | Explainability in intracranial hemorrhage (ICH) diagnosis is essential for timely and accurate clinical decisions, especially in life-threatening situations. We propose a framework that generates explainable, clinically relevant text from 2D CT scans using two cooperative GPT-4o agents: a Multi-modal User Agent (MUA) and a Planner Agent. The MUA interprets scans with YOLOv10 (mosaic augmentation), SAM2, and clustering; the Planner selects tools and outputs key imaging parameters (bleed location, midline shift, calvarial fracture, and mass effect) crucial for urgent interventions. Explainability is enforced via chain-of-thought prompting to ensure transparent decision-making. Experiments show YOLOv10 with mosaic augmentation improves mAP@0.5:0.95 by 4.1% over existing methods, and the LLM agents extract clinical parameters with 78.1% accuracy (our code is available at https://github.com/Shashwathp/Explainable-ICH-Detection-with-LLM-Agents/tree/main). These results underscore the potential of explainable AI to enhance trust and reliability in critical healthcare applications. © 2025 Elsevier B.V. All rights reserved. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Springer Science and Business Media Deutschland GmbH | en_US |
| dc.source | Lecture Notes in Computer Science | en_US |
| dc.subject | Explainable AI | en_US |
| dc.subject | Intracranial Hemorrhage | en_US |
| dc.subject | LLM Agents | en_US |
| dc.title | An Explainable Multimodal Framework with LLM Agents for Intracranial Hemorrhage Detection | en_US |
| dc.type | Conference Paper | en_US |
| Appears in Collections: | Department of Computer Science and Engineering | |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.