An Easy Guide For Machine Learning Model Interpretation

By Kimberly Cook | Dec 31, 2018

A comprehensive guide to interpreting machine learning models

Interpreting machine learning models is no longer a luxury but a necessity, given the rapid adoption of AI in the industry. This article is a continuation of my series of articles on 'Explainable Artificial Intelligence (XAI)'. The idea here is to cut through the hype and equip you with the tools and techniques needed to start interpreting any black-box machine learning model. The previous articles in the series are worth a quick skim, but they are not mandatory for following this one.

In this article, we provide a hands-on guide showcasing various ways to explain potentially black-box machine learning models in a model-agnostic way. We will be working with a real-world dataset on census income, also known as the Adult dataset, available in the UCI ML Repository, where we will predict whether a person's income is more than $50K/yr.

The purpose of this article is manifold. The first main objective is to familiarize ourselves with the major state-of-the-art model interpretation frameworks out there (many of them being extensions of LIME, the original framework and approach proposed for model interpretation, which we covered in detail in Part 2 of this series).

We cover the usage of several model interpretation frameworks in this tutorial. The major model interpretation techniques we will be covering include the following:
  • Feature Importances
  • Partial Dependence Plots
  • Model Prediction Explanations with Local Interpretation
  • Building Interpretable Models with Surrogate Tree-based Models
  • Model Prediction Explanation with SHAP values
  • Dependence & Interaction Plots with SHAP

Without further ado let's get started!

Loading Necessary Dependencies
We will be using a number of frameworks and tools in this article, given that it is a hands-on guide to model interpretation. We recommend loading the following dependencies to get the most out of this guide!

Remember to call the shap.initjs() function, since many of the plots from shap require JavaScript.
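The original import listing is not reproduced here; the following is a minimal sketch of the dependencies this guide relies on (the exact set of imports is an assumption based on the tools used below):

# A minimal sketch of the dependencies used in this guide.
# The exact import list is an assumption; adjust to your environment.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
import shap

# Many shap plots render with JavaScript, so initialize it up front.
shap.initjs()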

Load and View the Census Income Dataset
You can actually get the census income dataset (popularly known as the adult dataset) from the UCI ML repository. Fortunately, shap provides us an already cleaned up version of this dataset which we will be using here since the intent of this article is model interpretation.

Viewing the Data Attributes
Let's take a look at the major features or attributes of our dataset.

# Load the pre-cleaned Adult census dataset bundled with shap.
# display=True returns human-readable (string) categorical values.
data, labels = shap.datasets.adult(display=True)
# Convert the boolean labels (income > $50K/yr) to integers: 1 = True, 0 = False.
labels = np.array([int(label) for label in labels])
print(data.shape, labels.shape)
data.head()

((32561, 12), (32561,))

We will explain these features shortly.

Viewing the Class Labels
Let's view the distribution of our class labels, which we want to predict: income <= $50K/yr (label 0) and > $50K/yr (label 1).

In [6]: Counter(labels)
Out[6]: Counter({0: 24720, 1: 7841})

There is definitely some class imbalance, which is expected, given that fewer people tend to have a higher income.
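As a quick sanity check (an added illustration, not part of the original article), we can express these counts as proportions:

# Express the class counts as proportions of the dataset
# (an added illustration, not part of the original article).
counts = Counter(labels)
total = sum(counts.values())
for label, count in sorted(counts.items()):
    print('Class {}: {} ({:.1%})'.format(label, count, count / total))

Roughly 76% of the examples fall into the <= $50K class and about 24% into the > $50K class.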

Understanding the Census Income Dataset
Let's now take a look at our dataset attributes and understand their meaning and significance.
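The attribute description table from the original article is not reproduced here; as a hedged substitute, we can inspect the feature names and types directly from the DataFrame:

# Inspect the twelve feature names and their data types
# (a substitute for the attribute table in the original article).
print(data.dtypes)

These are standard census attributes such as age, workclass, education level, marital status, occupation, relationship, race, sex, capital gain/loss, hours worked per week, and native country.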


