A comparison and analysis of explainable clinical decision making using white box and black box models

Authors

Dingle, Liam

Abstract

Explainability is a crucial element of machine learning-based decision making in high-stakes scenarios such as risk assessment in criminal justice [80], climate modeling [79], disaster response [82], education [81] and critical care. There currently exists a performance tradeoff between low-complexity machine learning models whose predictions are inherently interpretable to a human (white box) and cutting-edge, high-complexity models whose predictions are not readily interpretable (black box). In this thesis we first aim to assess the reliability of the predictions made by black box models. We train a series of machine learning models on an ICU (Intensive Care Unit) outcome prediction task using the MIMIC-III dataset. We compare the predictions made by white box models and their black box counterparts by contrasting the white box models' feature coefficients/importances with post-hoc SHAP (SHapley Additive exPlanations) values computed for the black box models. We then validate our results with a panel of clinical experts. The first study shows that both black box and white box models prioritize clinically relevant variables when making outcome predictions. Higher-performing models prioritized more clinically relevant variables than lower-performing models, and the black box models showed better overall performance than the white box models. [...]
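As a concrete illustration of the comparison described in the abstract, the following is a minimal sketch (not the thesis code) of contrasting white box feature coefficients with post-hoc SHAP importances for a black box model. It assumes scikit-learn, shap, and scipy are available, and uses a synthetic binary-outcome dataset as a stand-in for the access-restricted MIMIC-III ICU data; all names and parameters are illustrative.

```python
# Sketch: compare white box coefficients to post-hoc SHAP importances.
# Synthetic data stands in for MIMIC-III; features are placeholders.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic binary outcome task standing in for ICU outcome prediction.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X = StandardScaler().fit_transform(X)

# White box: standardized logistic regression coefficients serve as
# directly interpretable global feature importances.
white_box = LogisticRegression(max_iter=1000).fit(X, y)
white_importance = np.abs(white_box.coef_[0])

# Black box: gradient-boosted trees explained post hoc with SHAP values.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(black_box).shap_values(X)
black_importance = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature

# Agreement between the two importance rankings (Spearman rank correlation).
rho, p_value = spearmanr(white_importance, black_importance)
print(f"Rank agreement between coefficient and SHAP importances: rho={rho:.2f}")
```

A high rank correlation under this kind of comparison would indicate that the white box and black box models weight the same variables most heavily, which is the style of agreement the thesis then checks against clinical expert judgment.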
