Detectron2 - Object Detection with PyTorch
Detectron2 is Facebook's new vision library that allows us to easily use and create object detection, instance segmentation, keypoint detection and panoptic segmentation models. Learn how to use it for both inference and training.
Update Feb/2020: Facebook Research released pre-built Detectron2 versions, making local installation a lot easier. (Tested on Linux and Windows)
Alongside PyTorch version 1.3, Facebook also released a ground-up rewrite of their object detection framework Detectron. The new framework is called Detectron2 and is now implemented in PyTorch instead of Caffe2.
Detectron2 allows us to easily use and build object detection models. This article will help you get started with Detectron2 by learning how to use a pre-trained model for inference and how to train your own model.
You can find all the code covered in the article on my Github.
Install Detectron2
Installing Detectron2 is easy compared to other object detection frameworks like the TensorFlow Object Detection API.
Installation on Google Colab
If you are working in Google Colab, it can be installed with the following four lines:
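A sketch of what that cell might look like; the torch/torchvision versions and the CUDA tag in the wheel URL are assumptions and should be matched to the current Colab runtime (the official Colab tutorial always has the up-to-date cell):

```python
# Install dependencies and a pre-built Detectron2 wheel inside Colab.
# Versions below are assumptions; pick the wheel matching your runtime's torch/CUDA versions.
!pip install pyyaml==5.1
!pip install torch==1.8.0 torchvision==0.9.0
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html
!pip list | grep detectron2  # quick check that the install succeeded
```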
After executing the cell, click the "RESTART RUNTIME" button at the bottom of the output for the installation to take effect.
Installation on a local machine
If you are working on a local machine, it isn't quite that easy but still manageable.
First, you need to have all the requirements installed.
Requirements:
- Python >= 3.6
- PyTorch >=1.6
- torchvision that matches the PyTorch installation. You can install them together at pytorch.org to make sure of this.
- OpenCV, needed for demo and visualization
- GCC >= 5 (if building from source)
Build Detectron2 from Source
Once you have the above dependencies installed, you can install Detectron2 from source by running:
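(The first command follows the official installation guide; the editable install is an alternative if you want to modify the source.)

```bash
# Install the latest Detectron2 directly from the GitHub repository
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'

# Or clone the repository and install it in editable/development mode
# git clone https://github.com/facebookresearch/detectron2.git
# python -m pip install -e detectron2
```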
Installing Pre-built Detectron2
On Linux, you can now install a pre-built version with the following command:
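For example, for CUDA 11.1 (the torch version in the URL is an assumption; pick the index that matches your setup from the official install guide):

```bash
python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html
```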
You can replace cu111 with "cu{110,102,101,100,92}" depending on your CUDA version or "cpu" if you don't have a GPU.
For most machines, this installation should work fine. However, if you are experiencing any errors, take a look at the Common Installation Issues section of the official installation guide.
Install using Docker
Another great way to install Detectron2 is by using Docker. Docker is great because you don't need to install anything locally, which allows you to keep your machine nice and clean.
If you want to run Detectron2 with Docker, you can find a Dockerfile and docker-compose.yml file in the docker directory of the repository.
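Roughly, the workflow looks like this (the image tag and compose service name are assumptions; check the README inside the docker directory for the exact commands and flags):

```bash
cd docker/
# Build the image from the provided Dockerfile
docker build -t detectron2:latest .
# Run a container with GPU access
docker run --gpus all -it detectron2:latest
# Alternatively, use the provided docker-compose.yml
# docker-compose run detectron2
```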
For those of you who also want to use Jupyter notebooks inside the container, I created a custom Docker configuration, which automatically starts Jupyter after running the container. If you're interested, you can find the files on my Github.
Inference with a pre-trained model
Using a pre-trained model is super easy in Detectron2. You only need to load a config and some weights and then create a DefaultPredictor. After that, you can make predictions and display them using Detectron2's Visualizer utility.
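A sketch of what this looks like, using a Mask R-CNN model from the model zoo (the example image URL is just a sample COCO image):

```python
import cv2
import urllib.request

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Download an example image
url = "http://images.cocodataset.org/val2017/000000439715.jpg"
urllib.request.urlretrieve(url, "input.jpg")
im = cv2.imread("input.jpg")

# Create a config and point it to the weights of a pre-trained Mask R-CNN model
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # only keep predictions above this confidence
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")

# Create the predictor and run inference on the image
predictor = DefaultPredictor(cfg)
outputs = predictor(im)
```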
The above code imports detectron2, downloads an example image, creates a config, downloads the weights of a Mask R-CNN model, and makes a prediction on the image.
After making the prediction, we can display the results using the following code:
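(A sketch; the cv2.imshow call assumes a local machine. In Colab, use cv2_imshow from google.colab.patches instead.)

```python
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

# Overlay the predicted instances on the image (convert BGR -> RGB for the Visualizer)
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))

# Display the result (convert back to BGR for OpenCV)
cv2.imshow("Detectron2 prediction", out.get_image()[:, :, ::-1])
cv2.waitKey(0)
```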
You can find all the available models on the "Detectron2 Model Zoo and Baselines" site.
To find the config file's path, you need to click on the name of the model and then look at the location.
The URL of the model weights can be copied directly from the link saying model.
You can either paste the link in directly.
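For example, with a placeholder URL (copy the real one from the "model" link on the model zoo page):

```python
# Placeholder path; replace with the actual checkpoint URL copied from the model zoo
cfg.MODEL.WEIGHTS = "https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/.../model_final.pkl"
```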
Or you can use the following shortcut:
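(model_zoo.get_checkpoint_url resolves a model zoo config name to its corresponding weight URL.)

```python
from detectron2 import model_zoo

cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
```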
Other models – Instance Segmentation, Person Keypoint Detection and Panoptic Segmentation
As you might have noticed when looking through the Model zoo, Detectron2 supports object detection and other vision tasks like Instance Segmentation, Person Keypoint Detection and Panoptic Segmentation, and switching from one to another is incredibly easy.
The only thing we need to change to perform instance segmentation instead of object detection is to use the config and weights of an instance segmentation model instead of an object detection model.
If you are interested, you can also try out Person Keypoint Detection or Panoptic Segmentation by choosing a pre-trained model from the model zoo.
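For example, a sketch of switching the earlier predictor to person keypoint detection (the config name comes from the model zoo; the score threshold is an arbitrary choice):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Same workflow as before, just with a keypoint detection config and weights
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")

predictor = DefaultPredictor(cfg)
outputs = predictor(im)
```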
Train a custom model
To train a model on a custom data-set, we need to register our data-set to use the predefined data loaders.
Registering a data-set can be done by creating a function that returns all the needed information about the data as a list of dictionaries and passing that function to DatasetCatalog.register.
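A minimal sketch (the data-set name, loader function, and class names are placeholders):

```python
from detectron2.data import DatasetCatalog, MetadataCatalog

def get_my_dataset_dicts():
    # Hypothetical loader: return a list of dicts in Detectron2's expected format,
    # e.g. each dict contains "file_name", "height", "width", "image_id", "annotations"
    ...

# Register the function (not its result) under a data-set name
DatasetCatalog.register("my_dataset_train", get_my_dataset_dicts)
MetadataCatalog.get("my_dataset_train").set(thing_classes=["my_class"])
```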
For more information about what format a dictionary should have, check out the "Register a Dataset" section of the documentation.
After registering the data-set, we can train a model using the DefaultTrainer class.
Training a model to detect balloons
In their Detectron2 Tutorial notebook, the Detectron2 team shows how to train a Mask R-CNN model to detect all the balloons inside an image.
To do so, they first downloaded the data-set.
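(The download link below is the one used in the official tutorial notebook.)

```bash
# Download and unzip the balloon data-set
wget https://github.com/matterport/Mask_RCNN/releases/download/v2.1/balloon_dataset.zip
unzip balloon_dataset.zip > /dev/null
```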
After downloading, the data has to be registered as discussed above.
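A sketch of the registration code, closely following the official tutorial notebook (it parses the VIA annotation file that ships with the data-set):

```python
import json
import os

import cv2
import numpy as np

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.structures import BoxMode

def get_balloon_dicts(img_dir):
    # Parse the VIA annotation file into Detectron2's standard dict format
    json_file = os.path.join(img_dir, "via_region_data.json")
    with open(json_file) as f:
        imgs_anns = json.load(f)

    dataset_dicts = []
    for idx, v in enumerate(imgs_anns.values()):
        record = {}

        filename = os.path.join(img_dir, v["filename"])
        height, width = cv2.imread(filename).shape[:2]
        record["file_name"] = filename
        record["image_id"] = idx
        record["height"] = height
        record["width"] = width

        objs = []
        for anno in v["regions"].values():
            shape = anno["shape_attributes"]
            px = shape["all_points_x"]
            py = shape["all_points_y"]
            poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
            poly = [p for point in poly for p in point]

            objs.append({
                "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
                "bbox_mode": BoxMode.XYXY_ABS,
                "segmentation": [poly],
                "category_id": 0,
            })
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts

# Register the train and validation splits
for d in ["train", "val"]:
    DatasetCatalog.register("balloon_" + d, lambda d=d: get_balloon_dicts("balloon/" + d))
    MetadataCatalog.get("balloon_" + d).set(thing_classes=["balloon"])
```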
Lastly, the pre-trained model can be fine-tuned for the new data-set using the DefaultTrainer.
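A sketch of the fine-tuning step (the solver settings are the small values used for this toy data-set and will likely need tuning for other data):

```python
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("balloon_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")  # start from COCO weights
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 300            # enough iterations for this small data-set
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # only one class: balloon

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```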
And that's it! That's how easy it is to train a custom model with Detectron2.
Now that the model is trained, it can be used for inference on the validation set:
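(A sketch; the score threshold and the use of cv2.imshow for display on a local machine are assumptions.)

```python
import random

from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

# Load the freshly trained weights and create a predictor
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
predictor = DefaultPredictor(cfg)

# Visualize predictions on a few random validation images
dataset_dicts = get_balloon_dicts("balloon/val")
for d in random.sample(dataset_dicts, 3):
    im = cv2.imread(d["file_name"])
    outputs = predictor(im)
    v = Visualizer(im[:, :, ::-1], metadata=MetadataCatalog.get("balloon_val"), scale=0.8)
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2.imshow("balloon prediction", out.get_image()[:, :, ::-1])
    cv2.waitKey(0)
```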
Save your model and config
After training, the model is saved under cfg.OUTPUT_DIR + "/model_final.pth". To use the model for inference, you need both the model weights and the config. To save the config, use:
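(The output file name is an arbitrary choice; cfg.dump() serializes the config as YAML.)

```python
import os

# Write the full config next to the trained weights
with open(os.path.join(cfg.OUTPUT_DIR, "config.yaml"), "w") as f:
    f.write(cfg.dump())
```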
For an inference example, check out my detect_from_webcam_or_video.py script.
Data Augmentation
Augmentation is an integral part of training a model, as it allows practitioners to significantly increase the diversity of data available without actually having to collect new data.
Data Augmentation is most commonly used for image classification, but it can also be used in many other areas, including object detection, instance segmentation, and keypoint detection.
Detectron2 allows you to perform data augmentation by writing a custom DatasetMapper. The role of the mapper is to transform the lightweight representation of a data-set into a format that is ready for the model to consume.
A mapper could look like:
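Here is a sketch following the pattern from the Detectron2 documentation; the specific augmentations and their parameters are just examples:

```python
import copy

import torch
from detectron2.data import detection_utils as utils
import detectron2.data.transforms as T

def custom_mapper(dataset_dict):
    # Work on a copy so the cached data-set dict is not modified
    dataset_dict = copy.deepcopy(dataset_dict)
    image = utils.read_image(dataset_dict["file_name"], format="BGR")

    # Example augmentations; adjust to your data
    transform_list = [
        T.Resize((800, 800)),
        T.RandomBrightness(0.8, 1.8),
        T.RandomContrast(0.6, 1.3),
        T.RandomSaturation(0.8, 1.4),
        T.RandomFlip(prob=0.5, horizontal=True, vertical=False),
    ]
    image, transforms = T.apply_transform_gens(transform_list, image)
    dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))

    # Apply the same transforms to the annotations and convert them to Instances
    annos = [
        utils.transform_instance_annotations(obj, transforms, image.shape[:2])
        for obj in dataset_dict.pop("annotations")
        if obj.get("iscrowd", 0) == 0
    ]
    instances = utils.annotations_to_instances(annos, image.shape[:2])
    dataset_dict["instances"] = utils.filter_empty_instances(instances)
    return dataset_dict
```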
To use the mapper inside the data loader, you need to overwrite the build_train_loader method of the trainer:
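(A minimal sketch reusing the custom_mapper defined above.)

```python
from detectron2.data import build_detection_train_loader
from detectron2.engine import DefaultTrainer

class CustomTrainer(DefaultTrainer):
    @classmethod
    def build_train_loader(cls, cfg):
        # Plug the custom mapper into the default training data loader
        return build_detection_train_loader(cfg, mapper=custom_mapper)
```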
Now, for training, instantiate the custom trainer instead of the DefaultTrainer so the data loader actually uses the custom mapper:
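(A sketch; CustomTrainer is the class defined above.)

```python
# Before: trainer = DefaultTrainer(cfg)
trainer = CustomTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```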
For more information, check out the data augmentation section of the official Detectron2 documentation.
Print accuracy on the validation set while training
When training, we often want to know how well the model is doing on the validation set to assess if the model is overfitting on the training data.
This functionality is available out-of-the-box in most deep learning frameworks, but unfortunately, Detectron2 doesn't support it out-of-the-box.
To get this to work in Detectron2, we need to create a hook that evaluates the model on the validation set and then insert this hook into the model by building a custom Trainer class that extends from DefaultTrainer.
Marcelo Ortega went over the complete code needed for this in his post "Training on Detectron2 with a Validation set, and plot loss on it to avoid overfitting", so I recommend checking this out if you're interested in evaluating your model on the validation set while training.
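Marcelo's post builds a hook that computes the validation loss. If you only need periodic COCO-style metrics on the validation set, a simpler (related, but not identical) sketch is to override build_evaluator on a custom trainer; the data-set name and evaluation period below are assumptions:

```python
import os

from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

class TrainerWithValidation(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        # Evaluate on cfg.DATASETS.TEST every cfg.TEST.EVAL_PERIOD iterations
        if output_folder is None:
            output_folder = os.path.join(cfg.OUTPUT_DIR, "eval")
        return COCOEvaluator(dataset_name, output_dir=output_folder)

# cfg.DATASETS.TEST = ("balloon_val",)  # validation split registered earlier (assumption)
# cfg.TEST.EVAL_PERIOD = 100            # run evaluation every 100 iterations
```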
Conclusion
Detectron2 is Facebook's new vision library that allows us to easily use and create object detection, instance segmentation, keypoint detection, and panoptic segmentation models. In addition, it has a simple, modular design that makes it easy to rewrite a script for another data-set.
Overall, I really like the workflow of Detectron2 and look forward to using it more. With that said, that's all for this article. If you have any questions or want to chat with me, feel free to contact me via email or social media.