<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Model Interpretation on Gilbert Tanner</title>
    <link>https://gilberttanner.com/tag/model-interpretation/</link>
    <description>Recent content in Model Interpretation on Gilbert Tanner</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 16 Dec 2019 08:52:47 +0000</lastBuildDate>
    <atom:link href="https://gilberttanner.com/tag/model-interpretation/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Interpreting PyTorch models with Captum</title>
      <link>https://gilberttanner.com/blog/interpreting-pytorch-models-with-captum/</link>
      <pubDate>Mon, 16 Dec 2019 08:52:47 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/interpreting-pytorch-models-with-captum/</guid>
      <description>Learn how to interpret PyTorch models with Captum, a model interpretability library for PyTorch.</description>
    </item>
    <item>
      <title>Local Model Interpretation: An Introduction</title>
      <link>https://gilberttanner.com/blog/local-model-interpretation-an-introduction/</link>
      <pubDate>Sun, 18 Aug 2019 10:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/local-model-interpretation-an-introduction/</guid>
      <description>Local model interpretation is a set of techniques aimed at answering questions like: Why did the model make this specific prediction? What effect did this specific feature value have on the prediction?</description>
    </item>
    <item>
      <title>Hands-on Global Model Interpretation</title>
      <link>https://gilberttanner.com/blog/hands-on-global-model-interpretation/</link>
      <pubDate>Mon, 05 Aug 2019 10:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/hands-on-global-model-interpretation/</guid>
      <description>Global model interpretation is a set of techniques that help us answer questions like: How does the model behave in general? Which features drive its predictions, and which features are completely useless for your purposes?</description>
    </item>
    <item>
      <title>Introduction to Machine Learning Model Interpretation</title>
      <link>https://gilberttanner.com/blog/introduction-to-machine-learning-model-interpretation/</link>
      <pubDate>Mon, 13 May 2019 19:34:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/introduction-to-machine-learning-model-interpretation/</guid>
      <description>Regardless of what problem you are solving, an interpretable model will always be preferred, because both the end user and your boss/co-workers can understand what your model is really doing.</description>
    </item>
  </channel>
</rss>
