Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/16936
| Title: | eXplainable artificial intelligence-Eval: A framework for comparative evaluation of explanation methods in healthcare |
| Authors: | Agrawal, Krish |
| Keywords: | Black-box Models; Explainable Artificial Intelligence (XAI); Healthcare AI; Model Interpretability |
| Issue Date: | 2025 |
| Publisher: | SAGE Publications Inc. |
| Citation: | Agrawal, K., el Shawi, R., & Ahmed, N. (2025). eXplainable artificial intelligence-Eval: A framework for comparative evaluation of explanation methods in healthcare. Digital Health, 11. https://doi.org/10.1177/20552076251368045 |
| Abstract: | Objective: Machine learning systems are increasingly used in high-stakes domains such as healthcare, where predictive accuracy must be accompanied by explainability to ensure trust, validation, and regulatory compliance. This study aims to evaluate the effectiveness of widely used local and global explanation methods in real-world clinical settings. Methods: We introduce a structured evaluation methodology for the quantitative comparison of explainability techniques. Our analysis covers five local model-agnostic methods, namely local interpretable model-agnostic explanations (LIME), contextual importance and utility, RuleFit, RuleMatrix, and Anchor, assessed using multiple explainability criteria. For global interpretability, we consider LIME, Anchor, RuleFit, and RuleMatrix. Experiments are conducted on diverse healthcare datasets and tasks to assess performance. Results: The results show that RuleFit and RuleMatrix consistently provide robust and interpretable global explanations across tasks. Local methods show varying performance depending on the evaluation dimension and dataset. Our findings highlight important trade-offs between fidelity, stability, and complexity, offering critical insights into method suitability for clinical applications. Conclusion: This work provides a practical framework for systematically assessing explanation methods in healthcare. It offers actionable guidance for selecting appropriate and trustworthy techniques, supporting safe and transparent deployment of machine learning models in sensitive, real-world environments. |
| URI: | https://dx.doi.org/10.1177/20552076251368045 ; https://dspace.iiti.ac.in:8080/jspui/handle/123456789/16936 |
| ISSN: | 2055-2076 |
| Type of Material: | Journal Article |
| Appears in Collections: | Department of Computer Science and Engineering |
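
The abstract names fidelity, stability, and complexity as evaluation criteria but the record contains no code. Purely as an illustration of what a "stability" criterion can look like in practice, the sketch below measures how consistent a local explanation method (LIME) is across repeated runs on the same instance. This is not the paper's XAI-Eval framework; the dataset, black-box model, top-k Jaccard metric, and use of the `lime` package are assumptions chosen for illustration only.

```python
# Minimal sketch (assumptions, not the paper's framework): quantify the
# *stability* of LIME explanations as the mean pairwise Jaccard overlap of
# the top-k features returned across repeated runs on one instance.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# A generic black-box model standing in for a clinical predictor.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

def top_k_features(instance, k=5):
    """Return the set of feature conditions in the top-k LIME explanation."""
    exp = explainer.explain_instance(instance, model.predict_proba, num_features=k)
    return {name for name, _ in exp.as_list()}

def jaccard(a, b):
    """Jaccard overlap between two sets of explanation features."""
    return len(a & b) / len(a | b)

# Repeat the explanation several times and compare the feature sets.
instance = X[0]
runs = [top_k_features(instance) for _ in range(5)]
scores = [jaccard(runs[i], runs[j])
          for i in range(len(runs)) for j in range(i + 1, len(runs))]
print(f"Mean pairwise Jaccard stability: {np.mean(scores):.3f}")
```

A score near 1.0 means the explainer keeps selecting the same features for the same patient record; lower scores indicate run-to-run instability, one of the trade-offs the abstract highlights when choosing methods for clinical use.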
Files in This Item:
There are no files associated with this item.