Explainable AI Methods in ICU Machine Learning - A Scoping Review
Abstract
Deep learning adoption in critical care faces persistent transparency barriers. This scoping review examines how technical interpretability techniques are being translated into clinical practice, noting the trade-off frequently reported between predictive performance and interpretability. The review follows the Arksey & O'Malley methodological framework [1] and the PRISMA-ScR reporting guidelines.
1. Research Question
What explainable AI (XAI) methods have been applied to machine learning models used in intensive care unit (ICU) clinical decision support, and what are their reported effectiveness, limitations, and implementation challenges?
The review systematically maps the landscape of XAI deployment in critical care, identifying gaps between technical capabilities and clinical utility.
2. XAI Methods Examined
Model-Agnostic Approaches:
• SHAP (SHapley Additive exPlanations): game theory-based feature attribution [3] (a minimal sketch follows this list)
• LIME (Local Interpretable Model-agnostic Explanations): local surrogate models
• Permutation Importance: impact of feature shuffling on model performance
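To make the SHAP entry concrete, the sketch below trains a gradient-boosted classifier on synthetic tabular data and computes per-prediction Shapley attributions with the shap library. The feature names (heart_rate, lactate, creatinine, age) and the data are illustrative assumptions, not values drawn from any study in the review.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for tabular ICU data; feature names are hypothetical.
rng = np.random.default_rng(0)
feature_names = ["heart_rate", "lactate", "creatinine", "age"]
X = rng.normal(size=(500, 4))
# Toy mortality label loosely driven by lactate and age.
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

# Per-feature additive attributions for the first prediction: together with
# the explainer's expected value they sum to the model's raw margin output.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

The additivity property is what makes Shapley values attractive as per-prediction overlays: each attribution states exactly how much a feature moved this patient's score from the baseline.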
Model-Specific Methods:
• Attention mechanisms in transformer architectures
• Decision trees as inherently interpretable baselines (rule printout sketched below)
• Rule extraction from neural networks
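In contrast to post-hoc techniques, inherently interpretable baselines expose their reasoning directly. The sketch below fits a shallow decision tree on synthetic data and prints its learned rules with scikit-learn's export_text; the feature names are hypothetical placeholders, not variables from the reviewed studies.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a tabular ICU dataset.
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the learned splits as human-readable if/else rules,
# which is what makes shallow trees usable as interpretable baselines.
print(export_text(tree, feature_names=["map_mmhg", "spo2", "urine_output", "gcs"]))
```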
Advanced Approaches:
• Concept-based explanations aligned with clinical terminology
• Prototype-based methods for case comparison
• Causal inference and counterfactual analysis (toy search sketched below)
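Counterfactual analysis asks what minimal change to an input would alter the model's output. The toy sketch below does a brute-force search for the smallest shift in a single feature that flips a logistic regression's prediction; it illustrates the idea only and is not a specific method reported by the review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # toy label determined by feature 0
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
original = clf.predict([x])[0]

# Scan growing perturbations of feature 0 in both directions and stop at
# the first one that flips the predicted class.
found = False
for delta in np.linspace(0.01, 3.0, 300):
    for sign in (1.0, -1.0):
        candidate = x.copy()
        candidate[0] += sign * delta
        if clf.predict([candidate])[0] != original:
            print(f"prediction flips when feature 0 changes by {sign * delta:+.2f}")
            found = True
            break
    if found:
        break
```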
3. Clinical Applications
The review identifies key ICU applications:
• Mortality prediction with APACHE/SOFA score overlays
• Sepsis early warning systems with time-critical alerts (a minimal alerting sketch follows this list)
• Acute kidney injury prediction for intervention timing
• Mechanical ventilation weaning decision support
• Intraoperative hypotension forecasting
• Drug dosing optimization recommendations
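To make the early-warning use case concrete, the hedged sketch below pairs a transparent logistic regression risk score with an exact additive decomposition of its log-odds (coefficient times standardized feature value), the kind of per-alert explanation such a system might surface. The features, threshold, and data are illustrative assumptions, not operating points reported in the review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical vital-sign features and synthetic data, for illustration only.
rng = np.random.default_rng(2)
features = ["temp", "wbc", "lactate", "resp_rate"]
X = rng.normal(size=(400, 4))
y = (0.8 * X[:, 2] + 0.4 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

ALERT_THRESHOLD = 0.7  # illustrative cutoff, not a validated operating point

def score_and_explain(obs):
    z = scaler.transform([obs])[0]
    risk = clf.predict_proba([z])[0, 1]
    # For a linear model, coef * standardized value is an exact additive
    # decomposition of the log-odds, so the explanation is faithful.
    contributions = sorted(
        zip(features, clf.coef_[0] * z), key=lambda kv: abs(kv[1]), reverse=True
    )
    return risk, contributions

risk, contribs = score_and_explain(X[0])
if risk >= ALERT_THRESHOLD:
    print(f"ALERT risk={risk:.2f}; top drivers:")
    for name, c in contribs[:2]:
        print(f"  {name}: {c:+.2f}")
else:
    print(f"no alert (risk={risk:.2f})")
```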
References
[1] Arksey, H., & O'Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19-32.
[2] Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv:1702.08608.
[3] Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. NeurIPS.
[4] Tonekaboni, S., et al. (2019). What clinicians want: Contextualizing explainable machine learning for clinical end use. MLHC.