By Javier Bonilla
DeepLab is a state-of-the-art artificial neural network for semantic image segmentation at pixel level, where the goal is to assign a semantic label to every single pixel in an image. This tutorial shows how to run this model on a Raspberry Pi with TensorFlow Lite as the machine learning framework and Qt/QML for the design of the Graphical User Interface.
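Pixel-level segmentation boils down to a simple post-processing step: the network emits a per-class score for every pixel, and the label of each pixel is the class with the highest score. The sketch below illustrates this argmax step in plain C++; the tensor layout (height × width × classes, row-major) matches the DeepLab v3 starter model, but the function is written generically and the names are our own, not taken from the app's source.

```cpp
#include <cstddef>
#include <vector>

// For each pixel, pick the class with the highest score (argmax over the
// class dimension). The scores vector is laid out row-major as
// height x width x numClasses, as in the DeepLab v3 starter model output.
std::vector<int> labelMap(const std::vector<float>& scores,
                          std::size_t height, std::size_t width,
                          std::size_t numClasses) {
    std::vector<int> labels(height * width, 0);
    for (std::size_t p = 0; p < height * width; ++p) {
        const float* pixel = &scores[p * numClasses];
        int best = 0;
        for (std::size_t c = 1; c < numClasses; ++c) {
            if (pixel[c] > pixel[best]) best = static_cast<int>(c);
        }
        labels[p] = best;
    }
    return labels;
}
```

The resulting label map has one integer per pixel, which the app can then turn into a coloured overlay on the live video.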
In a previous tutorial, we already learnt how to integrate TensorFlow Lite with Qt/QML for the development of Raspberry Pi apps, together with an open-source example app for object detection: Raspberry Pi, TensorFlow Lite and Qt/QML: object detection example. Have a look at it to learn about the basics.
- Hardware used in this tutorial
- Qt: download, cross-compile and install on Raspberry Pi
- TensorFlow Lite: download and cross-compile for Raspberry Pi
- Raspberry Pi image segmentation app
Hardware used in this tutorial
- Raspberry Pi 3 Model B+
- Raspberry Pi Camera
- Raspberry Pi 3 – 7″ Touchscreen Display
- 2 x Flex Cable for Raspberry Pi (for camera and display)
- 2 x Raspberry Pi 3 B+ Power Supply – 5V 2.5A (for Raspberry Pi and display)
We need a Linux distribution on our host computer for this tutorial.
Qt: download, cross-compile and install on Raspberry Pi
Have a look at Cross-compile and deploy Qt 5.12 for Raspberry Pi. It provides all the details to do this step. There, you can also find how to set up Qt Creator to deploy Qt apps to Raspberry Pi.
TensorFlow Lite: download and cross-compile for Raspberry Pi
Compilation of TensorFlow Lite for Raspberry Pi, as well as for the host Linux operating system, is already covered in a previous tutorial: Raspberry Pi, TensorFlow Lite and Qt/QML: object detection example.
Raspberry Pi image segmentation app
This app is open source and it is hosted in a Git repository on our GitHub account.
The app is basically the same as the one developed in Raspberry Pi, TensorFlow Lite and Qt/QML: object detection example. The main differences are the following.
- DeepLab is the artificial neural network for image segmentation.
- New code to show the artificial neural network results over the live video frames.
- New configuration options which are specific to this particular neural network.
The DeepLab v3 neural network is already included in our Git repository. It can also be downloaded from the TensorFlow website (starter model download button), which also provides more insight into the DeepLab model and how image segmentation works.
DeepLab v3 is able to identify 20 object classes, besides the image background: aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, dining table, dog, horse, motorbike, person, potted plant, sheep, sofa, train and TV monitor.
The new configuration options in the app are in the Screen info tab of the settings page.
- Inference time: this option was already present in the previous app. It shows the time taken by the neural network to run inference on one video frame.
- Semi-transparent objects: if this option is enabled, detected objects are drawn semi-transparent, tinted at pixel level with a particular color per class. Otherwise, objects are painted with a solid color which also depends on the object class.
- Show real background: if this option is checked, pixels detected as background are shown on screen as captured by the camera. Otherwise, background pixels are set to black.
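The two options above can be combined per pixel. The sketch below shows one way to compose an output pixel from the camera pixel and its predicted class; the class palette, the 50% blend factor, and the function names are illustrative choices, not the app's actual values.

```cpp
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Compose one output pixel from the camera pixel and the class predicted
// for it, following the two settings described above. Class 0 is assumed
// to be the background; the 50/50 blend factor is an illustrative choice.
Rgb composePixel(Rgb camera, int label, Rgb classColor,
                 bool semiTransparent, bool showRealBackground) {
    if (label == 0)  // background pixel
        return showRealBackground ? camera : Rgb{0, 0, 0};
    if (!semiTransparent)
        return classColor;  // solid color per class
    // semi-transparent: 50/50 blend of camera pixel and class color
    auto blend = [](std::uint8_t a, std::uint8_t b) {
        return static_cast<std::uint8_t>((a + b) / 2);
    };
    return Rgb{blend(camera.r, classColor.r),
               blend(camera.g, classColor.g),
               blend(camera.b, classColor.b)};
}
```

Running this over the whole label map produced by the network yields the overlay image that is painted on top of the live video.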
We have introduced an app that supports DeepLab together with TensorFlow Lite and Qt/QML for Raspberry Pi, building on the example apps developed previously. This allows us to apply and visualize image segmentation on device with a Raspberry Pi camera, a touchscreen display and a pre-trained TensorFlow neural network model.
I hope you enjoyed this tutorial. Please consider rating it with the stars you can find below; this gives us feedback about how we are doing. If you have any doubt, proposal, comment or issue, just write below, we are here to help :-).