KDD 2018: Explainable Models for Healthcare AI

The Explainable Models for Healthcare AI tutorial was presented by a trio from KenSci Inc. that included a data scientist and a clinician. The premise of the session was that explainability is particularly important in healthcare applications of machine learning, due to the far-reaching consequences of decisions, the high cost of mistakes, and fairness and compliance requirements. The tutorial walked through a number of aspects of interpretability and discussed techniques that can be applied to explain model predictions.

Seven Pillars

While interpretability is usually mentioned in the context of models, it spans the entire system: from the features, to the algorithm, to the model parameters, and eventually the model itself. The tutorial was built around what the presenters called the “seven pillars” of explainable models:

  1. Transparency – the ability for the user to understand what drives the predictions; even if the user is not familiar with the details of the model, or the model itself is opaque (as in the case of deep learning), key factors that drive the decision can be identified and explained.
  2. Domain sense – the explanation must itself be comprehensible to the target user, and must take into account that user’s background and needs. It is important that different roles – doctor, nurse, manager – are considered when building the explanations. It is also important to focus on factors that are actionable, rather than merely important for the prediction. This is not always an obvious criterion to apply; for example, a high patient BMI might be an important factor in a model prediction, but how much a physician can intervene on it is debatable. It is also important to remember that “actionable” does not mean “causal” – causality in machine learning models is a controversial subject, and at this point it is safest to assume none.
  3. Consistency – the explanations should be consistent across different models and different runs of the same model.
  4. Parsimony – the simplest explanation should be preferred, particularly in an environment where the users are constantly bombarded by data. The number of factors presented to the user and the time it takes to process them are important to the usefulness of the explanation.
  5. Generalisability – the explanation technique should not be devised separately for each model; ideally it should be general enough to apply to a variety of models, and even algorithms.
  6. Trust – both the model and the explanation should match human performance, and when they make mistakes, those should be similar to the mistakes made by humans.
  7. Fidelity – the explanation should represent what the model really did; it should be an explanation rather than a justification.

While we should strive for all of those qualities, there are inherent trade-offs, for example between generalisability and fidelity. Usually the former is prioritised.

Techniques

The most suitable form of explanation depends on the type of model. Learning algorithms can roughly be split into three families:

  • rule-based (for example, decision tree) – explained by specifying the rules that triggered;
  • factor-based (for example, logistic regression) – explained by listing the most important features (factors); a code sketch follows this list;
  • case-based (for example, k nearest neighbours) – explained by providing cases similar to the one being predicted on.
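
As a rough illustration of a factor-based explanation (my own sketch, not from the tutorial), the snippet below lists the features that contribute most to a single logistic regression prediction, approximating each contribution as coefficient × standardised feature value; the dataset and helper name are arbitrary choices.

```python
# Minimal sketch of a factor-based explanation for one case, assuming a fitted
# scikit-learn LogisticRegression on standardised features; the per-feature
# contribution to the log-odds is approximated as coefficient * feature value.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target
model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_case(model, x, feature_names, top_k=5):
    # Contribution of each feature to the log-odds for this particular case.
    contributions = model.coef_[0] * x
    order = np.argsort(np.abs(contributions))[::-1][:top_k]
    return [(feature_names[i], float(contributions[i])) for i in order]

# Top factors pushing the prediction for the first case up or down.
for name, contribution in explain_case(model, X[0], data.feature_names):
    print(f"{name}: {contribution:+.2f}")
```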

Ensembles, heavily engineered features and deep learning complicate the task of explanation quite a bit. Generic techniques that sacrifice fidelity in the interest of generalisability include local interpretable model-agnostic explanations (LIME), where a simple (for example, linear) surrogate model is fitted around a specific case. While it can generate a simple explanation, the LIME model does not necessarily faithfully reproduce the decision process of the original model; it also does not scale well, since an explanation model has to be created for each example individually. Alternatively, Shapley values are a game-theoretic approach to determining the contribution of individual features to the model decision; however, exact computation is exponential in the number of features, so they suffer from even heavier scalability issues than LIME. Mimic models – simple, interpretable models trained on the input data, with the predictions of the real model as the label – are another technique that, while not faithful to the original derivation of the prediction, can provide a justification for it. The slides contain a comparison of explanation approaches along a number of axes in a large table towards the end of the deck.
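
To make the mimic-model idea concrete, here is a minimal sketch (again my own, not from the tutorial): a shallow decision tree is fitted to a black-box model’s predictions instead of the true labels; the particular models and dataset are arbitrary choices.

```python
# Minimal sketch of a mimic model, assuming scikit-learn: a shallow decision
# tree is trained on the black-box model's predictions rather than the true
# labels, trading fidelity for an interpretable, rule-like justification.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = GradientBoostingClassifier().fit(X, y)   # the opaque model
soft_labels = black_box.predict(X)                   # what the mimic imitates

mimic = DecisionTreeClassifier(max_depth=3).fit(X, soft_labels)
print("Agreement with the black box:", mimic.score(X, soft_labels))
print(export_text(mimic, feature_names=list(data.feature_names)))
```

The agreement score gives a crude measure of how much fidelity the mimic gives up in exchange for its readability.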

Challenges

The most urgent challenges identified by the speakers included the ranking of explanations, and the lack of good theoretical foundations for that task. Kendall’s tau and W statistics are commonly used to compare rankings, but they seem like a crude tool and not the last word in this space. Another area of concern was imputed values – these are extremely problematic in medical applications, since an incorrect imputation can have disastrous consequences, and explanations need to be robust to them. Model scale and the increasing number of features also pose problems for explanations: as the number of features grows, even the most informative features might not be very informative in the absolute sense.
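
As a small illustration of the ranking-comparison point (hypothetical feature names and rankings, not from the tutorial), two explanation rankings can be compared with Kendall’s tau via scipy:

```python
# Minimal sketch: compare the feature rankings produced by two explanation
# methods with Kendall's tau; the features and rankings below are made up.
from scipy.stats import kendalltau

features = ["age", "bmi", "creatinine", "hba1c", "systolic_bp"]
ranking_a = ["hba1c", "creatinine", "age", "systolic_bp", "bmi"]
ranking_b = ["creatinine", "hba1c", "age", "bmi", "systolic_bp"]

# Express each ranking as the rank it assigns to every feature.
ranks_a = [ranking_a.index(f) for f in features]
ranks_b = [ranking_b.index(f) for f in features]

tau, p_value = kendalltau(ranks_a, ranks_b)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.2f})")
```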

An aspect that deserved, and received, a separate mention was that of fairness. Underrepresentation of certain parts of the population in the data, and the resulting bias of machine learning models, have been discussed for a while, but the same issue affects the explanations: if a certain group is underrepresented, explanation techniques that work satisfactorily for the majority of the population might give incorrect or misleading results for that group, depriving it of equal access to quality healthcare.

An even higher-level issue is that we, humans, often cannot explain our own decisions. Apparently, our introspection abilities are very limited: while we are well adapted to constructing plausible narratives and retrospective justifications, we do not have direct access to the mental processes responsible for making decisions, and there is no reason to think that physicians are unaffected by this limitation. Should we then require computers to meet standards that humans cannot? I think so. While civilisation has developed redundancies around, and resilience to, human error, we have yet to appreciate the havoc that can be wreaked by systematic mistakes made by machines. Perhaps we are lucky that the first indications of this fell to the retail and financial industries.

Slides: https://mlhealthcare.github.io

20/08/2018