Interpretability in healthcare: a comparative study of local machine learning interpretability techniques

Radwa Elshawi, Youssef Sherif, Mouaz Al-Mallah, Sherif Sakr

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

36 Scopus citations

Abstract

Although complex machine learning models (e.g., Random Forest, Neural Networks) commonly outperform traditional, simple, interpretable models (e.g., Linear Regression, Decision Tree), clinicians in the healthcare domain find it hard to understand and trust these complex models because their predictions lack intuition and explanation. With the new General Data Protection Regulation (GDPR), plausibility and verifiability of the predictions made by machine learning models have become essential. To tackle this challenge, several machine learning interpretability techniques have recently been developed and introduced. In general, these techniques aim to shed light on the prediction process of machine learning models and to explain how their predictions are produced. In practice, however, assessing the quality of the explanations provided by the various interpretability techniques remains an open question. In this paper, we present a comprehensive experimental evaluation of three recent and popular local model-agnostic interpretability techniques, namely LIME, SHAP, and Anchors, on different types of real-world healthcare data. Our evaluation compares the techniques on several aspects: identity, stability, separability, similarity, execution time, and bias detection. The results show that LIME achieves the lowest performance on the identity metric and the highest performance on the separability metric across all datasets included in this study. SHAP has the shortest average time to produce an explanation across all datasets, and it also best enables participants to detect model bias.
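To make the compared techniques concrete, the following is a minimal sketch, not the paper's code, of producing local explanations for one Random Forest prediction with LIME and SHAP, plus a naive version of the identity check mentioned above (explaining the same instance twice). The dataset, model, and parameters here are illustrative assumptions; Anchors follows a similar pattern via the anchor package.

    # Minimal sketch (illustrative assumptions, not the paper's code):
    # local explanations for one Random Forest prediction with LIME and SHAP.
    import lime.lime_tabular
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Stand-in tabular health data; the paper's datasets differ.
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # LIME: fit a local surrogate model around a single instance.
    lime_explainer = lime.lime_tabular.LimeTabularExplainer(
        X_train,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification")
    lime_exp = lime_explainer.explain_instance(
        X_test[0], model.predict_proba, num_features=5)
    print(lime_exp.as_list())  # top local feature contributions

    # SHAP: Shapley-value attributions for the same instance
    # (output format varies across SHAP versions).
    shap_explainer = shap.TreeExplainer(model)
    print(shap_explainer.shap_values(X_test[:1]))

    # Identity check (one of the paper's metrics): the same input should
    # yield the same explanation. LIME perturbs the input at random, which
    # is one reason it can score low on this metric.
    exp_a = lime_explainer.explain_instance(
        X_test[0], model.predict_proba, num_features=5)
    exp_b = lime_explainer.explain_instance(
        X_test[0], model.predict_proba, num_features=5)
    print(exp_a.as_list() == exp_b.as_list())  # often False for LIME

Note that SHAP's TreeExplainer exploits the structure of tree ensembles to compute Shapley values efficiently, which is consistent with the short average explanation times reported in the abstract.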

Original language: English (US)
Title of host publication: Proceedings - 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems, CBMS 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 275-280
Number of pages: 6
ISBN (Electronic): 9781728122861
DOIs
State: Published - Jun 2019
Event: 32nd IEEE International Symposium on Computer-Based Medical Systems, CBMS 2019 - Cordoba, Spain
Duration: Jun 5, 2019 – Jun 7, 2019

Publication series

Name: Proceedings - IEEE Symposium on Computer-Based Medical Systems
Volume: 2019-June
ISSN (Print): 1063-7125

Other

Other: 32nd IEEE International Symposium on Computer-Based Medical Systems, CBMS 2019
Country/Territory: Spain
City: Cordoba
Period: 6/5/19 – 6/7/19

Keywords

  • Black-Box Model
  • Machine Learning
  • Machine Learning Interpretability
  • Model-Agnostic Interpretability

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging
  • Computer Science Applications
