The next step is to read the single frames from the video stream and preprocess them. As the first step of the image preprocessing, we resize the frame. Since the video is recorded mirrored, we flip each frame horizontally using the computer vision library OpenCV. This code, together with the rest of the frame processing, happens inside a loop which runs until the user stops the application.

The frame we captured from the video is a numpy ndarray, and the color model of the pixels stored in this array is BGR. Since we need an RGB image as input for the face detection engine, we have to convert the colors from BGR to RGB and create an image out of the array using the imaging library Pillow (PIL).

Using the preprocessed image and the previously initialized model, we can now run the face detection. For this, we call the method detect_with_image on the model, which runs the inference and produces the predicted faces. The method takes multiple inputs: the image, a threshold which defines the minimum confidence for the detected faces, and the top_k parameter, which defines the maximum number of faces the model should detect. The detected faces are returned as a list of DetectionCandidates, where each entry provides the bounding box of the detected face.

In the next step we iterate over the detected faces, extract each bounding box and use it to overlay the face with a face filter. How we determine the face filter will be explained in the next chapter. Notice that we apply the face filter to the resized frame (ndarray) instead of the Pillow image. We achieve this by extracting the coordinates from the bounding box and resizing the face filter to the size of the bounding box. Afterwards, we can overwrite the original face with the resized face filter.
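To make these steps concrete, here is a minimal sketch of such a frame-processing loop. It assumes the (now deprecated) edgetpu Python package with its DetectionEngine.detect_with_image API; the model path, the filter image and the use of cv2.VideoCapture instead of the project's own stream helper are illustrative placeholders rather than the repository's actual code.

```python
import cv2
from PIL import Image
from edgetpu.detection.engine import DetectionEngine

# Placeholder paths -- not necessarily the files shipped in the repository.
engine = DetectionEngine("models/face_detection_edgetpu.tflite")
face_filter = cv2.imread("filters/smiley.png")  # BGR ndarray

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocessing: resize the frame and un-mirror it.
    frame = cv2.resize(frame, (640, 480))
    frame = cv2.flip(frame, 1)

    # The detection engine expects an RGB Pillow image, but the frame is BGR.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image = Image.fromarray(rgb)

    # Run the inference; the result is a list of DetectionCandidates.
    candidates = engine.detect_with_image(
        image, threshold=0.5, top_k=5, relative_coord=False)

    for candidate in candidates:
        # bounding_box holds [[x1, y1], [x2, y2]] in pixel coordinates.
        (x1, y1), (x2, y2) = candidate.bounding_box.astype(int)
        x1, y1 = max(x1, 0), max(y1, 0)
        x2, y2 = min(x2, frame.shape[1]), min(y2, frame.shape[0])
        if x2 <= x1 or y2 <= y1:
            continue
        # Resize the filter to the bounding box and overwrite the face
        # region of the BGR frame (not the Pillow image).
        frame[y1:y2, x1:x2] = cv2.resize(face_filter, (x2 - x1, y2 - y1))

    cv2.imshow("face replace", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In the actual application the filter is chosen per tracked face, as described in the next chapter; here a single static filter keeps the sketch short.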
After the installation, unplug and replug the Coral USB Accelerator once. Now you should be able to run the face replace demo with python3.7 -m face_replace.

Face detection

First we have to initialize the detection engine with the pre-trained model contained in the repository. Now we can read the video stream from the webcam; for this, we use the image-utils library. To allow the camera sensor to warm up, we wait 1 second before we start processing the stream.
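The initialization could look roughly like the following sketch. It assumes the deprecated edgetpu package for the DetectionEngine and that the "image-utils" library mentioned above refers to the imutils package; the model filename is a placeholder, not necessarily the file shipped in the repository.

```python
import time

from edgetpu.detection.engine import DetectionEngine
from imutils.video import VideoStream

# Initialize the detection engine with the pre-trained TensorFlow Lite model.
engine = DetectionEngine("models/face_detection_edgetpu.tflite")

# Start reading the video stream from the webcam (device 0) and give the
# camera sensor one second to warm up before we process any frames.
vs = VideoStream(src=0).start()
time.sleep(1.0)

frame = vs.read()  # a single frame as a BGR numpy ndarray
```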
With your Raspberry Pi up and running, you can install git and clone the repository we prepared to help you get started fast. Installing all needed dependencies for the USB accelerator is quite some work. To save you the time and effort, we wrote a script that automates the installation: simply run install.sh, which is located in the root folder of the repository, and it will install all dependencies.
First you should install a clean Raspbian distribution on your Raspberry Pi using the NOOBS installer. After connecting the devices as shown below, we can start to install the needed drivers and libraries.
The accelerator contains an Edge TPU (Tensor Processing Unit) coprocessor which is optimized to process matrix operations. It can perform 4 trillion operations per second, which allows high inference speed for image classification and object detection using neural networks. It currently only supports pre-compiled TensorFlow Lite models.

When we ran our experiments on the CPU of the Raspberry Pi 4 without the Coral USB accelerator, the application could process between 0.5 and 1.5 frames per second. Using the accelerator, we achieved between 10 and 25 frames per second, depending on how many image manipulation features we added and which image resolution we used.

The USB accelerator is connected to the Raspberry Pi 4 via the USB 3.0 Type-C interface. While the accelerator also supports USB 2.0, it is recommended to use USB 3.0 to ensure sufficient data transfer rates. The Pi 4 is the first Pi with USB 3.0 on board. You can also use older Raspberry Pi versions, but expect USB 2.0 to be a bottleneck which will substantially lower the achievable framerate; have a look at the framerate from the experiment we did two years ago with a Pi 3 and the Movidius stick, which was connected via USB 2.0. We used a Logitech C920 HD Pro webcam for the setup but, as mentioned earlier, many webcams should work and lead to similar results.
The application detects faces based on a pre-trained neural network and overlays them with face filters. In order to keep the face filters assigned to individual faces, even if multiple people appear in the video, it tracks the detected faces over time. In this blog post we explain how it works and how you can build your own face detection application with low-cost consumer hardware and without much machine learning knowledge.

We used the following hardware components: a Raspberry Pi 4, the Coral USB Accelerator and a webcam with 60 fps – have a look at webcams which are compatible with a Raspberry Pi. The Coral USB accelerator connected to a Raspberry Pi 4 is the heart of the setup.