Interpreting PyTorch models with Captum

As models become increasingly complex, it is ever more important to develop methods for interpreting their decisions. I have already covered the topic of model interpretability extensively over the last months, including posts about: 'Introduction to Machine Learning Model Interpretation', 'Hands-on Global Model Interpretation', and 'Local Model Interpretation: An Introduction'. This article will cover Captum, a flexible, easy-to-use model interpretability library for PyTorch models, providing state-of-the-art tools for understanding how specific neurons and layers affect predictions. ...

December 16, 2019 · 7 min · Gilbert Tanner
Local Model Interpretation: An Introduction

This article is a continuation of my series of articles on Model Interpretability and Explainable Artificial Intelligence. If you haven’t read the first two articles, I highly recommend doing so first. The first article of the series, ‘Introduction to Machine Learning Model Interpretation’, covers the basics of Model Interpretation. The second article, ‘Hands-on Global Model Interpretation’, goes over the details of global model interpretation and how to apply it to a real-world problem using Python. ...

August 18, 2019 · 6 min · Gilbert Tanner
Hands-on Global Model Interpretation

This article is a continuation of my series of articles on Model Interpretability and Explainable Artificial Intelligence. If you haven't read it yet, I highly recommend checking out the first article of this series — 'Introduction to Machine Learning Model Interpretation' — which covers the basics of Model Interpretability, ranging from what model interpretability is and why we need it to the underlying distinctions of model interpretation. This article picks up where we left off by diving deeper into the ins and outs of global model interpretation. First, we will quickly recap what global model interpretation is and why it is important. Then we will dive into the theory of two of its most popular methods — feature importance and partial dependence plots — and apply them to get information about the features of the heart disease dataset. ...

August 5, 2019 · 8 min · Gilbert Tanner
Introduction to Machine Learning Model Interpretation

Regardless of what problem you are solving, an interpretable model will always be preferred because both the end-user and your boss/co-workers can understand what your model is doing. Model Interpretability also helps you debug your model by giving you a chance to see what the model thinks is essential. Furthermore, you can use interpretable models to combat the common belief that Machine Learning algorithms are black boxes and that we humans aren't capable of gaining any insights into how they work. ...

May 13, 2019 · 8 min · Gilbert Tanner