Google Coral USB Accelerator Introduction

Last year at the Google Next conference, Google announced that they are building two new hardware products around their Edge TPU. Their purpose is to allow edge devices like the Raspberry Pi or other microcontrollers to exploit the power of artificial intelligence applications such as image classification and object detection by letting them run inference of pre-trained TensorFlow Lite models locally on their own hardware. This is not only more secure than sending every request to a cloud server, it can also reduce latency quite a bit.

The Coral USB Accelerator

The Coral USB Accelerator measures 65 x 30 x 8 mm, making it slightly smaller than its competitor, the Intel Movidius Neural Compute Stick. That might not seem like a big deal at first, but it makes quite a difference if you consider that the Intel stick tends to block nearby USB ports, which makes it hard to use other peripherals.

The Coral USB Accelerator is priced at 75€ and can be ordered through Mouser, Seeed, and Gravitylink. On the hardware side, it contains an Edge Tensor Processing Unit (TPU), which provides fast inference for deep learning models at comparatively low power consumption.

Figure 2: The box contains the USB Accelerator, a USB Type-C to USB 3 adapter, and simple getting started instructions

Currently, the USB Accelerator only works with Debian 6.0+ or any of its derivatives, such as Ubuntu or Raspbian. It works best when connected over USB 3.0, but it can also be used over USB 2.0 and therefore works with a board like the Raspberry Pi 3, which doesn't offer any USB 3 ports.

Setup

The setup of the Coral USB Accelerator is pain-free. The getting started instructions on the official website worked like a charm on my Raspberry Pi, and I was ready to go after only a few minutes. The only thing to keep in mind is that Coral is still in a "beta" release phase, so the software, as well as its setup instructions, is likely to change over time, but I will certainly try to keep this article updated so that the instructions keep working.

At the moment, you first need to download the latest Edge TPU runtime and Python library by executing the following commands:

cd ~/
wget https://dl.google.com/coral/edgetpu_api/edgetpu_api_latest.tar.gz -O edgetpu_api.tar.gz --trust-server-names
tar xzf edgetpu_api.tar.gz
cd edgetpu_api
bash ./install.sh

During the execution of install.sh you'll be asked, "Would you like to enable the maximum operating frequency?". Enabling this option improves inference speed but can cause the USB Accelerator to get hot, although I didn't experience any overheating while using it. For normal usage I would still recommend disabling this option, because it doesn't bring that much of a performance increase.

Once the installation has finished, plug the USB Accelerator into the Raspberry Pi or whatever Debian device you are using. If it was already plugged in during the installation, you'll need to unplug and replug it, because the installation script adds some udev rules that only take effect after replugging.

Running Example Scripts

Now that we know what the Coral USB Accelerator is and have the Edge TPU software installed, we can run a few example scripts. The installation of the Edge TPU Python module provides us with a simple API that allows us to perform image classification, object detection, and transfer learning on the Edge TPU.

To run these examples, we need an Edge TPU compatible model as well as an input file. We can download some pre-trained models for image classification and object detection, along with some example images, using the following commands:

cd ~/Downloads

# Download files for classification demo:
curl -O https://dl.google.com/coral/canned_models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
-O https://dl.google.com/coral/canned_models/inat_bird_labels.txt \
-O https://coral.withgoogle.com/static/images/parrot.jpg

# Download files for object detection demo:
curl -O https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
-O https://coral.withgoogle.com/static/images/face.jpg

Now that we have our models and example files, we can execute one of the examples by navigating into the demo directory and running the respective file with the right parameters.

# If using the USB Accelerator with Debian/Ubuntu:
cd /usr/local/lib/python3.6/dist-packages/edgetpu/demo

# If using the USB Accelerator with Raspberry Pi:
cd /usr/local/lib/python3.5/dist-packages/edgetpu/demo
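
If you are not sure which Python version the library was installed under, you can ask Python itself where the package lives. This is just a small helper sketch; it only assumes that the edgetpu package was installed by the script above:

import os
import edgetpu

# Print the path of the demo folder that ships with the installed edgetpu package.
package_dir = os.path.dirname(edgetpu.__file__)
print(os.path.join(package_dir, 'demo'))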

We can now run an image classification model using the following:

python3 classify_image.py \
--model ~/Downloads/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--label ~/Downloads/inat_bird_labels.txt \
--image ~/Downloads/parrot.jpg

Figure 3: Parrot (Link)

This script outputs the predicted class along with a confidence score:

---------------------------
Ara macao (Scarlet Macaw)
Score :  0.761719

The demo folder also includes an object detection script called object_detection.py, to which we only need to pass a compatible model, an input image path, and an output image path. Since we already downloaded the face detection model and example image above, we can run it right away:

cd /usr/local/lib/python3.5/dist-packages/edgetpu/demo

python3 object_detection.py \
--model ~/Downloads/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
--input ~/Downloads/face.jpg \
--output ~/detection_results.jpg

Figure 4: Faces (Link)
Figure 4: Faces with bounding boxes

A deeper look at the Example Scripts

As you can see, it's pretty easy to work with the Google Coral USB Accelerator. But ease of use isn't its only strong point: the Edge TPU Python module also makes it easy to read the example scripts and to write your own.

As an example, we will take a closer look at the classify_image.py file, which provides the functionality to predict the class of a given image.

import argparse
import re
from edgetpu.classification.engine import ClassificationEngine
from PIL import Image


# Function to read labels from text files.
def ReadLabelFile(file_path):
  """Reads labels from text file and store it in a dict.
  Each line in the file contains id and description separted by colon or space.
  Example: '0:cat' or '0 cat'.
  Args:
    file_path: String, path to the label file.
  Returns:
    Dict of (int, string) which maps label id to description.
  """
  with open(file_path, 'r', encoding='utf-8') as f:
    lines = f.readlines()
  ret = {}
  for line in lines:
    pair = re.split(r'[:\s]+', line.strip(), maxsplit=1)
    ret[int(pair[0])] = pair[1].strip()
  return ret


def main():
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--model', help='File path of Tflite model.', required=True)
  parser.add_argument(
      '--label', help='File path of label file.', required=True)
  parser.add_argument(
      '--image', help='File path of the image to be recognized.', required=True)
  args = parser.parse_args()

  # Prepare labels.
  labels = ReadLabelFile(args.label)
  # Initialize engine.
  engine = ClassificationEngine(args.model)
  # Run inference.
  img = Image.open(args.image)
  for result in engine.ClassifyWithImage(img, top_k=3):
    print('---------------------------')
    print(labels[result[0]])
    print('Score : ', result[1])

if __name__ == '__main__':
    main()

The first important part of the script is the import of the Edge TPU library, specifically the ClassificationEngine, which is responsible for performing the classification on the Edge TPU.

The ReadLabelFile method opens the given text file containing the labels for the classifier and builds a dictionary that maps each label id to its description.

In the main method, we use the argparse library to create an ArgumentParser that lets us pass arguments to our script.

After parsing the arguments, we load the labels by calling ReadLabelFile and load the model by creating a new ClassificationEngine object.

Lastly, the script opens the image using Pillow and classifies it using the ClassifyWithImage method of the ClassificationEngine object.
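
Stripped of the argument parsing and label handling, the core of the script boils down to just a few lines. The snippet below is a minimal sketch of that flow; the model and image paths are placeholders for the files we downloaded earlier:

from edgetpu.classification.engine import ClassificationEngine
from PIL import Image

# Placeholder paths for the model and image downloaded earlier.
engine = ClassificationEngine('mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite')
results = engine.ClassifyWithImage(Image.open('parrot.jpg'), top_k=3)

# Each result is a (label_id, score) pair.
for label_id, score in results:
    print(label_id, score)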

The object detection script works almost the same as the classification script. The only change is the use of the DetectionEngine instead of the ClassificationEngine: instead of creating a ClassificationEngine and calling ClassifyWithImage, we create a DetectionEngine and call its DetectWithImage method to make a prediction.

# Initialize engine.
engine = DetectionEngine(args.model)
labels = ReadLabelFile(args.label) if args.label else None

# Open image.
img = Image.open(args.input)
draw = ImageDraw.Draw(img)

# Run inference.
ans = engine.DetectWithImage(img, threshold=0.05, keep_aspect_ratio=True,
                             relative_coord=False, top_k=10)

# Display result.
if ans:
  for obj in ans:
    print('-----------------------------------------')
    if labels:
      print(labels[obj.label_id])
    print('score = ', obj.score)
    box = obj.bounding_box.flatten().tolist()
    print('box = ', box)
    # Draw a rectangle around the detected object.
    draw.rectangle(box, outline='red')
  img.save(output_name)
  if platform.machine() == 'x86_64':
    # For gLinux, simply show the image.
    img.show()
  elif platform.machine() == 'armv7l':
    # For Raspberry Pi, you need to install 'feh' to display the image.
    subprocess.Popen(['feh', output_name])
  else:
    print('Please check ', output_name)
else:
  print('No object detected!')

Live Classification/Object Detection and External Camera Support

Coral also provides us with a live image classification script called classify_capture.py, which uses the PiCamera library to capture images from the Raspberry Pi camera and display them with their predicted label.

The only problem with this script is that it only works with a PiCamera. To add support for other webcams, we replace the PiCamera code with an imutils VideoStream, which works with both a PiCamera and a normal USB camera.

import cv2
import numpy
import argparse
import time
import re

from edgetpu.classification.engine import ClassificationEngine
from PIL import Image, ImageDraw, ImageFont

from imutils.video import FPS
from imutils.video import VideoStream


def ReadLabelFile(file_path):
    with open(file_path, 'r', encoding='utf-8') as f:
        lines = f.readlines()
    ret = {}
    for line in lines:
        pair = re.split(r'[:\s]+', line.strip(), maxsplit=1)
        ret[int(pair[0])] = pair[1].strip()
    return ret


def draw_image(image, result):
    draw = ImageDraw.Draw(image)
    draw.text((0, 0), result, font=ImageFont.truetype("/usr/share/fonts/truetype/piboto/Piboto-Regular.ttf", 20))
    displayImage = numpy.asarray(image)
    cv2.imshow('Live Inference', displayImage)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model', help='File path of Tflite model.', required=True)
    parser.add_argument(
        '--label', help='File path of label file.', required=True)
    parser.add_argument(
        '--picamera', action='store_true',
        help="Use PiCamera for image capture", default=False)
    args = parser.parse_args()

    # Prepare labels.
    labels = ReadLabelFile(args.label) if args.label else None
    # Initialize engine.
    engine = ClassificationEngine(args.model)

    # Initialize video stream
    vs = VideoStream(usePiCamera=args.picamera, resolution=(640, 480)).start()
    time.sleep(1)

    fps = FPS().start()

    while True:
        try:
            # Read frame from video
            screenshot = vs.read()
            image = Image.fromarray(screenshot)

            # Perform inference
            results = engine.ClassifyWithImage(image, top_k=1)
            result = labels[results[0][0]] if results!=[] else 'None'
            draw_image(image, result)

            if cv2.waitKey(5) & 0xFF == ord('q'):
                fps.stop()
                break

            fps.update()
        except KeyboardInterrupt:
            fps.stop()
            break

    print("Elapsed time: " + str(fps.elapsed()))
    print("Approx FPS: :" + str(fps.fps()))

    cv2.destroyAllWindows()
    vs.stop()
    time.sleep(2)


if __name__ == '__main__':
    main()

This script not only performs live image classification but also creates a live window displaying the current frame and its label. This works by using Pillow's ImageDraw module, which allows us to add text on top of an image.

Figure 5: Classification Example

The same thing can be done for object detection. The only differences are that we use a DetectionEngine instead of a ClassificationEngine and that the draw_image method changes, because we now also need to draw the bounding boxes.

import cv2
import numpy
import argparse
import time
import re

from edgetpu.detection.engine import DetectionEngine
from PIL import Image, ImageDraw, ImageFont

from imutils.video import FPS
from imutils.video import VideoStream


def ReadLabelFile(file_path):
    with open(file_path, 'r', encoding='utf-8') as f:
        lines = f.readlines()
    ret = {}
    for line in lines:
        pair = re.split(r'[:\s]+', line.strip(), maxsplit=1)
        ret[int(pair[0])] = pair[1].strip()
    return ret


def draw_image(image, results, labels):
    draw = ImageDraw.Draw(image)
    for obj in results:
        # Prepare the bounding box.
        box = obj.bounding_box.flatten().tolist()

        # Draw the rectangle a few times, slightly inflated each pass, to get a thicker outline.
        for x in range(0, 4):
            draw.rectangle([box[0] - x, box[1] - x, box[2] + x, box[3] + x],
                           outline=(255, 255, 0))

        # Annotate the image with the label and confidence score.
        display_str = labels[obj.label_id] + ": " + str(round(obj.score * 100, 2)) + "%"
        draw.text((box[0], box[1]), display_str, font=ImageFont.truetype("/usr/share/fonts/truetype/piboto/Piboto-Regular.ttf", 20))

    # Show the annotated frame.
    displayImage = numpy.asarray(image)
    cv2.imshow('Coral Live Object Detection', displayImage)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model', help='File path of Tflite model.', required=True)
    parser.add_argument(
        '--label', help='File path of label file.', required=True)
    parser.add_argument(
        '--maxobjects', type=int, default=3, help='Maximum objects')
    parser.add_argument(
        '--threshold', type=float, default=0.3, help="Minimum threshold")
    parser.add_argument(
        '--picamera', action='store_true',
        help="Use PiCamera for image capture", default=False)
    args = parser.parse_args()

    # Prepare labels.
    labels = ReadLabelFile(args.label) if args.label else None
    # Initialize engine.
    engine = DetectionEngine(args.model)

    # Initialize video stream
    vs = VideoStream(usePiCamera=args.picamera, resolution=(640, 480)).start()
    time.sleep(1)

    fps = FPS().start()

    while True:
        try:
            # Read frame from video
            screenshot = vs.read()
            image = Image.fromarray(screenshot)

            # Perform inference
            results = engine.DetectWithImage(image, threshold=args.threshold, keep_aspect_ratio=True, relative_coord=False, top_k=args.maxobjects)

            # draw image
            draw_image(image, results, labels)

            # closing condition
            if cv2.waitKey(5) & 0xFF == ord('q'):
                fps.stop()
                break

            fps.update()
        except KeyboardInterrupt:
            fps.stop()
            break

    print("Elapsed time: " + str(fps.elapsed()))
    print("Approx FPS: :" + str(fps.fps()))

    cv2.destroyAllWindows()
    vs.stop()
    time.sleep(2)


if __name__ == '__main__':
    main()

Figure 6: Object Detection Example

Building your own models

Even though Google offers a lot of precompiled models that can be used with the USB Accelerator, you might want to run your own custom models.

For this, you have multiple options. Instead of building your own model from scratch, you can retrain an existing model that is already compatible with the Edge TPU using a technique called transfer learning. For more details, check out the official tutorials for retraining an image classification or object detection model.

If you prefer to train a model from scratch, you can certainly do so, but you need to be aware of some restrictions when deploying your model on the USB Accelerator.

First off, you need to use a model optimization technique called quantization, which represents the weights and activations with 8-bit integers instead of 32-bit floats. This shrinks the model and speeds up inference.

Furthermore, you need to convert the quantized model to a TensorFlow Lite file and then compile that TensorFlow Lite model for the Edge TPU using Google's web compiler.

Figure 7: Building custom model process (Link)
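
To give a concrete idea of the conversion step, the sketch below shows one possible way to produce a fully integer-quantized TensorFlow Lite file using post-training quantization with the TFLiteConverter. It is only a minimal sketch: it assumes you have a SavedModel and some calibration data, the paths, input shape, and representative_dataset contents are placeholders, and the exact options depend on your TensorFlow version.

import numpy as np
import tensorflow as tf

# Placeholder calibration data; in practice, yield a few hundred real input samples
# so the converter can estimate the activation ranges.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# 'my_saved_model' is a placeholder path to your own trained model.
converter = tf.lite.TFLiteConverter.from_saved_model('my_saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# The Edge TPU only runs fully integer-quantized models.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model)

The resulting .tflite file is what you then upload to the web compiler, which returns a model compiled for the Edge TPU (the demo models we downloaded earlier carry the corresponding _edgetpu.tflite suffix) that can be loaded by the ClassificationEngine or DetectionEngine classes.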

Using a web compiler is a neat move by Google to get around problems like the hardware compatibility issues the Intel Movidius Neural Compute Stick faced with its compiler, which required an x86-based development machine to compile your models.

Conclusion

The Google Coral USB Accelerator is an excellent piece of hardware that allows edge devices like the Raspberry Pi or other microcontrollers to run artificial intelligence applications. It comes with excellent documentation covering everything from the installation and demo applications to building your own models, as well as a detailed Python API reference.

The USB Accelerator is also perfect for prototyping, because it can easily be integrated into most Raspberry Pi camera projects.

That's all for this article. Thanks for reading. If you have any feedback, recommendations, or ideas about what I should cover next, feel free to leave a comment or contact me on social media.