<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Home on Gilbert Tanner</title>
    <link>https://gilberttanner.com/</link>
    <description>Recent content in Home on Gilbert Tanner</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 01 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://gilberttanner.com/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Sign Language Interpreter</title>
      <link>https://gilberttanner.com/projects/sign-language-interpreter/</link>
      <pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/projects/sign-language-interpreter/</guid>
      <description>Building a pipeline that converts spoken language into continuous sign language motions for a humanoid robot.</description>
    </item>
    <item>
      <title>SAPIENCE - Sense &amp; Avoid - a cooPeratIvE droNe CompEtition</title>
      <link>https://gilberttanner.com/projects/sapience/</link>
      <pubDate>Fri, 15 Nov 2024 12:56:23 +0000</pubDate>
      <guid>https://gilberttanner.com/projects/sapience/</guid>
      <description>Through a series of collaborative competitions, the Sapience initiative fosters innovation in search and rescue operations, enabling multiple drones to effectively navigate and map GPS-denied environments, detect and deliver aid to victims, and perform complex cooperative tasks.</description>
    </item>
    <item>
      <title>HASCY - HTLs Asfinag Safety Cat</title>
      <link>https://gilberttanner.com/projects/hascy/</link>
      <pubDate>Sun, 22 Oct 2023 13:13:11 +0000</pubDate>
      <guid>https://gilberttanner.com/projects/hascy/</guid>
      <description>HASCY is a remote-controlled sled that drives on a rail mounted at the top of a tunnel. It is equipped with multiple sensors, including a thermal and an optical PTZ camera, and can therefore provide visual information to both the Asfinag operators and emergency services.</description>
    </item>
    <item>
      <title>TransformerBot – Multi-Mission Ground/Drone Platform</title>
      <link>https://gilberttanner.com/projects/transformerbot-multi-mission-ground-drone-platform/</link>
      <pubDate>Sat, 20 Dec 2025 22:01:27 +0000</pubDate>
      <guid>https://gilberttanner.com/projects/transformerbot-multi-mission-ground-drone-platform/</guid>
      <description>&lt;p&gt;This project presents a&amp;nbsp;&lt;strong&gt;multi-mission transformer robot&lt;/strong&gt;&amp;nbsp;capable of operating both as a&amp;nbsp;&lt;strong&gt;ground vehicle&lt;/strong&gt;&amp;nbsp;and a&amp;nbsp;&lt;strong&gt;quadrotor drone&lt;/strong&gt;.&lt;br&gt;The design integrates&amp;nbsp;&lt;strong&gt;constant-velocity (CV) joints&lt;/strong&gt;,&amp;nbsp;&lt;strong&gt;slip rings&lt;/strong&gt;, and a&amp;nbsp;&lt;strong&gt;servo-driven lifting mechanism&lt;/strong&gt;&amp;nbsp;to enable seamless transformation between driving and flying modes.&lt;/p&gt;&lt;p&gt;The design also features a&amp;nbsp;&lt;strong&gt;dual-use wheel-propeller system&lt;/strong&gt;, where each wheel houses a&amp;nbsp;&lt;strong&gt;brushless DC (BLDC) drone motor&lt;/strong&gt;&amp;nbsp;and&amp;nbsp;&lt;strong&gt;propeller&lt;/strong&gt;. This allows the same structure to serve as a rolling surface in ground mode and as an active rotor in drone mode, minimizing redundant components.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Real Time Gesture Control System with EMG</title>
      <link>https://gilberttanner.com/projects/real-time-gesture-control-system-with-emg/</link>
      <pubDate>Thu, 06 Mar 2025 20:46:42 +0000</pubDate>
      <guid>https://gilberttanner.com/projects/real-time-gesture-control-system-with-emg/</guid>
      <description>&lt;p&gt;This project implements real-time gesture classification using the&amp;nbsp;&lt;a href=&#34;https://udevices.io/products/umyo-wearable-emg-sensor&#34; rel=&#34;nofollow&#34;&gt;uMyo EMG sensors&lt;/a&gt;. The system is designed for precision control applications, such as controlling a robotic arm or a drone. To demonstrate this, we implemented two demos: controlling a simulated drone via the ArduPilot Gazebo simulation and controlling a Ryze Tello drone in the real world.&lt;/p&gt;&lt;p&gt;The project was implemented as part of the Pervasive Computing Lab course (24W) at the University of Klagenfurt.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Turtlebot3 DRL Navigation</title>
      <link>https://gilberttanner.com/projects/turtlebot3-drl-navigation/</link>
      <pubDate>Thu, 27 Feb 2025 20:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/projects/turtlebot3-drl-navigation/</guid>
      <description>&lt;p&gt;Extended an existing &lt;a href=&#34;https://github.com/reiniscimurs/DRL-robot-navigation&#34;&gt;Deep Reinforcement Learning navigation framework&lt;/a&gt; to support the TurtleBot3 platform with realistic 2D LiDAR observations, and migrated the full system from ROS 1 to ROS 2.&lt;/p&gt;&lt;p&gt;The project enables training DRL agents for goal-directed mobile robot navigation with obstacle avoidance in Gazebo, using LiDAR-based state representations and velocity-based control tailored to TurtleBot3 kinematics. To accelerate experimentation, the simulation can be executed at increased real-time factors while maintaining stable learning dynamics.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Project collection</title>
      <link>https://gilberttanner.com/projects/project-collection/</link>
      <pubDate>Mon, 11 Dec 2023 15:17:40 +0000</pubDate>
      <guid>https://gilberttanner.com/projects/project-collection/</guid>
      <description>&lt;p&gt;This page contains a short collage of some of the hardware projects I worked on over the years, starting with school projects created when I was studying at HTL Mössingerstraße.&lt;/p&gt;
&lt;h2 id=&#34;coil-gun--coil-winder&#34;&gt;Coil gun / Coil winder&lt;/h2&gt;
&lt;ul&gt;&lt;li&gt;Start: 11.2017&lt;/li&gt;&lt;li&gt;Project members:&lt;ul&gt;&lt;li&gt;Gilbert Tanner&lt;/li&gt;&lt;li&gt;Gabriel Tanner&lt;/li&gt;&lt;li&gt;Alexander Pichler&lt;/li&gt;&lt;li&gt;Aaron Armbruster&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;Project supervisor:&lt;ul&gt;&lt;li&gt;Herwig Guggi&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;figure class=&#34;kg-card kg-gallery-card kg-width-wide kg-card-hascaption&#34;&gt;&lt;div class=&#34;kg-gallery-container&#34;&gt;&lt;div class=&#34;kg-gallery-row&#34;&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMAG0080.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMAG0335.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMG_20180108_122504.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&#34;kg-gallery-row&#34;&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMG_20180118_203310.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMG_20180118_213450.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;figcaption&gt;&lt;p&gt;&lt;span style=&#34;white-space: pre-wrap;&#34;&gt;Figure 2: Coil gun / Coil winder&lt;/span&gt;&lt;/p&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&#34;self-driving-rc-car&#34;&gt;Self Driving RC car&lt;/h2&gt;
&lt;ul&gt;&lt;li&gt;Start: 10.2018&lt;/li&gt;&lt;li&gt;Project members:&lt;ul&gt;&lt;li&gt;Gilbert Tanner&lt;/li&gt;&lt;li&gt;Gabriel Tanner&lt;/li&gt;&lt;li&gt;Alexander Pichler&lt;/li&gt;&lt;li&gt;Aaron Armbruster&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;Project supervisor:&lt;ul&gt;&lt;li&gt;Herwig Guggi&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;Original homepage: &lt;a href=&#34;https://projectbusters.github.io/self-driving-car/&#34;&gt;https://projectbusters.github.io/self-driving-car/&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;figure class=&#34;kg-card kg-gallery-card kg-width-wide kg-card-hascaption&#34;&gt;&lt;div class=&#34;kg-gallery-container&#34;&gt;&lt;div class=&#34;kg-gallery-row&#34;&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMG_20181106_154422.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMAG1096.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMAG1098.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&#34;kg-gallery-row&#34;&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMG_20181113_145953.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;div class=&#34;kg-gallery-image&#34;&gt;&lt;img src=&#34;https://gilberttanner.com/content/images/2023/12/IMG_20181115_120926.jpg&#34; loading=&#34;lazy&#34; alt=&#34;&#34;&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;figcaption&gt;&lt;p&gt;&lt;span style=&#34;white-space: pre-wrap;&#34;&gt;Figure 3: Self Driving RC car&lt;/span&gt;&lt;/p&gt;&lt;/figcaption&gt;&lt;/figure&gt;</description>
    </item>
    <item>
      <title>Multiagent Simulation for Drones, Ground Robots &amp; Fixed Wings with Gazebo</title>
      <link>https://gilberttanner.com/blog/multiagent-simulation-drones-ground-robots-gazebo/</link>
      <pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/multiagent-simulation-drones-ground-robots-gazebo/</guid>
      <description>A comprehensive guide to setting up multiagent simulation environments for PX4 and ArduPilot with Gazebo, covering namespacing, sensor configuration, and external odometry.</description>
    </item>
    <item>
      <title>Run TFLITE models on the web</title>
      <link>https://gilberttanner.com/blog/run-tflite-models-on-the-web/</link>
      <pubDate>Wed, 03 Nov 2021 10:07:07 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/run-tflite-models-on-the-web/</guid>
      <description>Using either the TFJS Task API or the TFLITE Web API, you can now deploy Tensorflow Lite models on the web without even needing to convert them into Tensorflow.js format.</description>
    </item>
    <item>
      <title>TFLite Object Detection with TFLite Model Maker</title>
      <link>https://gilberttanner.com/blog/tflite-model-maker-object-detection/</link>
      <pubDate>Thu, 17 Jun 2021 13:34:14 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/tflite-model-maker-object-detection/</guid>
      <description>The TensorFlow Lite Model Maker library is a high-level library that simplifies the process of training a TensorFlow Lite model using a custom dataset. It uses transfer learning to reduce the amount of training data required and shorten the training time.</description>
    </item>
    <item>
      <title>D2Go - Use Detectron2 on mobile devices</title>
      <link>https://gilberttanner.com/blog/d2go-use-detectron2-on-mobile-devices/</link>
      <pubDate>Sat, 20 Mar 2021 10:40:07 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/d2go-use-detectron2-on-mobile-devices/</guid>
      <description>D2Go is a production-ready software system from Facebook Research that supports end-to-end model training and deployment for mobile platforms.</description>
    </item>
    <item>
      <title>Tensorflow.js Crash-Course</title>
      <link>https://gilberttanner.com/blog/tensorflow-js-crash-course/</link>
      <pubDate>Mon, 28 Dec 2020 11:30:42 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/tensorflow-js-crash-course/</guid>
      <description>TensorFlow.js is a deep learning library providing you with the power to train and deploy your favorite deep learning models in the browser and Node.js.</description>
    </item>
    <item>
      <title>Tensorflow Object Detection with Tensorflow 2: Creating a custom model</title>
      <link>https://gilberttanner.com/blog/tensorflow-object-detection-with-tensorflow-2-creating-a-custom-model/</link>
      <pubDate>Mon, 27 Jul 2020 18:26:37 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/tensorflow-object-detection-with-tensorflow-2-creating-a-custom-model/</guid>
      <description>With the recently released official Tensorflow 2 support for the Tensorflow Object Detection API, it&amp;#39;s now possible to train your own custom object detection models with Tensorflow 2.</description>
    </item>
    <item>
      <title>Tensorflow Object Detection with Tensorflow 2</title>
      <link>https://gilberttanner.com/blog/object-detection-with-tensorflow-2/</link>
      <pubDate>Mon, 13 Jul 2020 17:02:19 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/object-detection-with-tensorflow-2/</guid>
      <description>Learn how to use the Tensorflow Object Detection API with Tensorflow 2</description>
    </item>
    <item>
      <title>Arduino Nano 33 BLE Sense Overview</title>
      <link>https://gilberttanner.com/blog/arduino-nano-33-ble-sense-overview/</link>
      <pubDate>Tue, 07 Jul 2020 18:36:19 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/arduino-nano-33-ble-sense-overview/</guid>
      <description>The Arduino Nano 33 BLE Sense is an evolution of the traditional Arduino Nano, featuring a much more powerful processor: the nRF52840 from Nordic Semiconductor, a 32-bit ARM® Cortex™-M4 CPU running at 64 MHz.</description>
    </item>
    <item>
      <title>Run PyTorch models on the Jetson Nano with TensorRT</title>
      <link>https://gilberttanner.com/blog/run-pytorch-models-on-the-jetson-nano-with-tensorrt/</link>
      <pubDate>Sat, 04 Jul 2020 07:35:30 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/run-pytorch-models-on-the-jetson-nano-with-tensorrt/</guid>
      <description>Use TensorRT to run PyTorch models on the Jetson Nano.</description>
    </item>
    <item>
      <title>Run Tensorflow models on the Jetson Nano with TensorRT</title>
      <link>https://gilberttanner.com/blog/run-tensorflow-on-the-jetson-nano/</link>
      <pubDate>Tue, 30 Jun 2020 18:36:46 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/run-tensorflow-on-the-jetson-nano/</guid>
      <description>Run Tensorflow models on the Jetson Nano by converting them into TensorRT format.</description>
    </item>
    <item>
      <title>Jetson Nano YOLO Object Detection with TensorRT</title>
      <link>https://gilberttanner.com/blog/jetson-nano-yolo-object-detection/</link>
      <pubDate>Tue, 23 Jun 2020 07:09:11 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/jetson-nano-yolo-object-detection/</guid>
      <description>YOLO Object Detection on the Jetson Nano using TensorRT</description>
    </item>
    <item>
      <title>Getting Started With NVIDIA Jetson Nano Developer Kit</title>
      <link>https://gilberttanner.com/blog/jetson-nano-getting-started/</link>
      <pubDate>Mon, 15 Jun 2020 09:57:16 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/jetson-nano-getting-started/</guid>
      <description>The NVIDIA Jetson Nano Developer Kit is a small edge computer for AI development. The Jetson Nano Developer Kit packs a Quad-core ARM A57 CPU with a clock-rate of 1.43GHz and 4GB of low-power DDR4 Memory.</description>
    </item>
    <item>
      <title>YOLO Object Detection in PyTorch</title>
      <link>https://gilberttanner.com/blog/yolo-object-detection-in-pytorch/</link>
      <pubDate>Mon, 08 Jun 2020 17:14:56 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/yolo-object-detection-in-pytorch/</guid>
      <description>Train a custom YOLO object detection model in PyTorch</description>
    </item>
    <item>
      <title>YOLO Object Detection with keras-yolo3</title>
      <link>https://gilberttanner.com/blog/yolo-object-detection-with-keras-yolo3/</link>
      <pubDate>Mon, 01 Jun 2020 08:19:34 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/yolo-object-detection-with-keras-yolo3/</guid>
      <description>Use and create YOLOv3 models with keras-yolo3.</description>
    </item>
    <item>
      <title>YOLO Object Detection with OpenCV</title>
      <link>https://gilberttanner.com/blog/yolo-object-detection-with-opencv/</link>
      <pubDate>Mon, 25 May 2020 09:08:35 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/yolo-object-detection-with-opencv/</guid>
      <description>Use YOLOv3 with OpenCV to detect objects in both images and videos.</description>
    </item>
    <item>
      <title>YOLO Object Detection Introduction</title>
      <link>https://gilberttanner.com/blog/yolo-object-detection-introduction/</link>
      <pubDate>Mon, 18 May 2020 18:56:05 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/yolo-object-detection-introduction/</guid>
      <description>Learn how to use YOLO for Object Detection.</description>
    </item>
    <item>
      <title>Getting started with Mask R-CNN in Keras</title>
      <link>https://gilberttanner.com/blog/getting-started-with-mask-rcnn-in-keras/</link>
      <pubDate>Mon, 11 May 2020 15:02:58 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/getting-started-with-mask-rcnn-in-keras/</guid>
      <description>Getting started with Mask R-CNN in Keras</description>
    </item>
    <item>
      <title>Train a Mask R-CNN model with the Tensorflow Object Detection API</title>
      <link>https://gilberttanner.com/blog/train-a-mask-r-cnn-model-with-the-tensorflow-object-detection-api/</link>
      <pubDate>Mon, 04 May 2020 13:40:01 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/train-a-mask-r-cnn-model-with-the-tensorflow-object-detection-api/</guid>
      <description>Create a custom Mask R-CNN model with the Tensorflow Object Detection API.</description>
    </item>
    <item>
      <title>Detectron2: Train an Instance Segmentation Model</title>
      <link>https://gilberttanner.com/blog/detectron2-train-a-instance-segmentation-model/</link>
      <pubDate>Mon, 13 Apr 2020 20:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/detectron2-train-a-instance-segmentation-model/</guid>
      <description>Learn how to create a custom instance segmentation model using Detectron2.</description>
    </item>
    <item>
      <title>Getting started with LoraWAN and The Things Stack</title>
      <link>https://gilberttanner.com/blog/getting-started-with-lorawan-and-the-things-stack/</link>
      <pubDate>Wed, 26 Feb 2020 09:44:13 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/getting-started-with-lorawan-and-the-things-stack/</guid>
      <description>The LoRaWAN® specification is a Low Power, Wide Area (LPWA) networking protocol designed to wirelessly connect battery-operated &amp;#39;things&amp;#39; to the internet in regional, national or global networks.</description>
    </item>
    <item>
      <title>Introduction to LoRa</title>
      <link>https://gilberttanner.com/blog/introduction-to-lora/</link>
      <pubDate>Mon, 17 Feb 2020 10:13:06 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/introduction-to-lora/</guid>
      <description>LoRa is a spread spectrum modulation technique derived from chirp spread spectrum (CSS) technology. LoRa allows for long-range, low power wireless communication, often applied in IoT (Internet of Things) applications.</description>
    </item>
    <item>
      <title>Creating math animations in Python with Manim</title>
      <link>https://gilberttanner.com/blog/creating-math-animations-in-python-with-manim/</link>
      <pubDate>Mon, 03 Feb 2020 08:12:51 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/creating-math-animations-in-python-with-manim/</guid>
      <description>Creating math animations in Python with Manim, a mathematical animation engine made by 3Blue1Brown</description>
    </item>
    <item>
      <title>Convert your Tensorflow Object Detection model to Tensorflow Lite</title>
      <link>https://gilberttanner.com/blog/convert-your-tensorflow-object-detection-model-to-tensorflow-lite/</link>
      <pubDate>Mon, 27 Jan 2020 15:07:35 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/convert-your-tensorflow-object-detection-model-to-tensorflow-lite/</guid>
      <description>Use your Tensorflow Object Detection models on edge devices by converting them to Tensorflow Lite.</description>
    </item>
    <item>
      <title>Deploying your Streamlit dashboard with Heroku</title>
      <link>https://gilberttanner.com/blog/deploying-your-streamlit-dashboard-with-heroku/</link>
      <pubDate>Tue, 31 Dec 2019 08:23:50 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/deploying-your-streamlit-dashboard-with-heroku/</guid>
      <description>Deploy your Streamlit application using Heroku, a platform as a service (PaaS)</description>
    </item>
    <item>
      <title>Interpreting PyTorch models with Captum</title>
      <link>https://gilberttanner.com/blog/interpreting-pytorch-models-with-captum/</link>
      <pubDate>Mon, 16 Dec 2019 08:52:47 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/interpreting-pytorch-models-with-captum/</guid>
      <description>Interpret PyTorch models with Captum.</description>
    </item>
    <item>
      <title>Detectron2 - Object Detection with PyTorch</title>
      <link>https://gilberttanner.com/blog/detectron-2-object-detection-with-pytorch/</link>
      <pubDate>Mon, 18 Nov 2019 10:17:14 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/detectron-2-object-detection-with-pytorch/</guid>
      <description>Detectron2 is Facebook&amp;#39;s new vision library that allows us to easily use and create object detection, instance segmentation, keypoint detection and panoptic segmentation models. Learn how to use it for both inference and training.</description>
    </item>
    <item>
      <title>Turn your data science scripts into websites with Streamlit</title>
      <link>https://gilberttanner.com/blog/turn-your-data-science-script-into-websites-with-streamlit/</link>
      <pubDate>Thu, 31 Oct 2019 17:16:49 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/turn-your-data-science-script-into-websites-with-streamlit/</guid>
      <description>Turn your data science scripts and projects into beautiful websites/dashboards using Streamlit.</description>
    </item>
    <item>
      <title>Introduction to Machine Learning in C# with ML.NET</title>
      <link>https://gilberttanner.com/blog/introduction-to-machine-learning-in-c-with-ml-net/</link>
      <pubDate>Sun, 15 Sep 2019 10:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/introduction-to-machine-learning-in-c-with-ml-net/</guid>
      <description>C# is one of the most popular languages today and is used for many applications. To bring the power of Machine Learning to C#, Microsoft created a package called ML.NET, which provides all the basic Machine Learning functionality.</description>
    </item>
    <item>
      <title>Local Model Interpretation: An Introduction</title>
      <link>https://gilberttanner.com/blog/local-model-interpretation-an-introduction/</link>
      <pubDate>Sun, 18 Aug 2019 10:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/local-model-interpretation-an-introduction/</guid>
      <description>Local model interpretation is a set of techniques aimed at answering questions like: Why did the model make this specific prediction? What effect did this specific feature value have on the prediction?</description>
    </item>
    <item>
      <title>Hands-on Global Model Interpretation</title>
      <link>https://gilberttanner.com/blog/hands-on-global-model-interpretation/</link>
      <pubDate>Mon, 05 Aug 2019 10:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/hands-on-global-model-interpretation/</guid>
      <description>Global model interpretation is a set of techniques that helps us answer questions like: How does the model behave in general? Which features drive its predictions, and which features are completely useless for your cause?</description>
    </item>
    <item>
      <title>Google Coral USB Accelerator Introduction</title>
      <link>https://gilberttanner.com/blog/google-coral-usb-accelerator-introduction/</link>
      <pubDate>Mon, 27 May 2019 10:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/google-coral-usb-accelerator-introduction/</guid>
      <description>The Google Coral Edge TPU allows edge devices like the Raspberry Pi or other microcontrollers to exploit the power of artificial intelligence.</description>
    </item>
    <item>
      <title>Introduction to Machine Learning Model Interpretation</title>
      <link>https://gilberttanner.com/blog/introduction-to-machine-learning-model-interpretation/</link>
      <pubDate>Mon, 13 May 2019 19:34:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/introduction-to-machine-learning-model-interpretation/</guid>
      <description>Regardless of what problem you are solving, an interpretable model will always be preferred, because both the end-user and your boss/co-workers can understand what your model is really doing.</description>
    </item>
    <item>
      <title>Creating your own object detector with the Tensorflow Object Detection API</title>
      <link>https://gilberttanner.com/blog/creating-your-own-objectdetector/</link>
      <pubDate>Wed, 06 Feb 2019 12:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/creating-your-own-objectdetector/</guid>
      <description>Learn how to create your own object detector using the Tensorflow Object Detection API.</description>
    </item>
    <item>
      <title>Introduction to Data Visualization in Python</title>
      <link>https://gilberttanner.com/blog/introduction-to-data-visualization-inpython/</link>
      <pubDate>Wed, 23 Jan 2019 12:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/introduction-to-data-visualization-inpython/</guid>
      <description>Get started visualizing data in Python using Matplotlib, Pandas and Seaborn</description>
    </item>
    <item>
      <title>Introduction to Deep Learning with Keras</title>
      <link>https://gilberttanner.com/blog/introduction-to-deep-learning-withkeras/</link>
      <pubDate>Wed, 09 Jan 2019 14:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/introduction-to-deep-learning-withkeras/</guid>
      <description>Learn the basics of Keras, a high-level library for creating neural networks running on Tensorflow.</description>
    </item>
    <item>
      <title>Scraping Reddit data</title>
      <link>https://gilberttanner.com/blog/scraping-redditdata/</link>
      <pubDate>Sat, 05 Jan 2019 12:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/scraping-redditdata/</guid>
      <description>Scrape data from Reddit using PRAW, the Python wrapper for the Reddit API.</description>
    </item>
    <item>
      <title>Building a book Recommendation System using Keras</title>
      <link>https://gilberttanner.com/blog/building-a-book-recommendation-system-usingkeras/</link>
      <pubDate>Thu, 22 Nov 2018 12:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/building-a-book-recommendation-system-usingkeras/</guid>
      <description>Build a system that is able to recommend books to users depending on what books they have already read using the Keras deep learning library.</description>
    </item>
    <item>
      <title>Generating text using a Recurrent Neural Network</title>
      <link>https://gilberttanner.com/blog/generating-text-using-a-recurrent-neuralnetwork/</link>
      <pubDate>Mon, 29 Oct 2018 12:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/blog/generating-text-using-a-recurrent-neuralnetwork/</guid>
      <description>Generating text in the style of Sir Arthur Conan Doyle using an RNN</description>
    </item>
    <item>
      <title>About</title>
      <link>https://gilberttanner.com/about/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://gilberttanner.com/about/</guid>
      <description>&lt;section&gt;
    &lt;figure style=&#34;margin: 0 auto; width: 50%;&#34;&gt;
        &lt;img src=&#34;https://gilberttanner.com/content/images/size/w600/2022/06/profil_picture-1.jpg&#34; alt=&#34;Gilbert Tanner&#34; /&gt;
        &lt;figcaption style=&#34;text-align: center;&#34;&gt;Gilbert Tanner (2021)&lt;/figcaption&gt;
    &lt;/figure&gt;
&lt;/section&gt;
&lt;p&gt;&lt;strong&gt;Gilbert Tanner&lt;/strong&gt; is a robotics researcher currently pursuing his Master&amp;rsquo;s in Robotics, Systems and Control at &lt;a href=&#34;https://ethz.ch/&#34;&gt;ETH Zürich&lt;/a&gt;. For his Bachelor&amp;rsquo;s, he studied Robotics and Artificial Intelligence at the &lt;a href=&#34;https://www.aau.at/&#34;&gt;University of Klagenfurt&lt;/a&gt;, where he also worked on &lt;a href=&#34;https://www.aau.at/en/blog/team-from-the-university-of-klagenfurt-wins-drone-competition-in-huntsville-usa/&#34;&gt;multi-agent drone research&lt;/a&gt;. In high school, he studied Electronics and Computer Science at &lt;a href=&#34;https://www.htl-klu.at/&#34;&gt;HTL Mössingerstraße&lt;/a&gt;. For his diploma project, he worked on &lt;a href=&#34;https://hascy.at/&#34;&gt;&amp;lsquo;HASCY – HTLs Asfinag Safety Cat&amp;rsquo;&lt;/a&gt;, a rail-bound sledge for tunnel security created in cooperation with ASFINAG and HTL Lastenstraße, which was turned into an industrial project after graduation.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
