Facial Landmark Detection and Drawing | with OpenCV & UNIHIKER

Introduction

This project uses the UNIHIKER board to detect faces in images and draw facial landmarks on them.

 

Project Objectives

1. Learn the method of using the Haar Cascade Algorithm for face detection.

2. Learn the method of using the LBF (Local Binary Features) algorithm in the Facemark framework for facial landmark detection.

HARDWARE LIST
1 UNIHIKER - IoT Python Single Board Computer with Touchscreen
1 Type-C&Micro 2-in-1 USB Cable

SOFTWARE

- Mind+ Programming Software

 

 

Practical Process

1. Hardware Setup

Connect the UniHiker board to the computer via USB cable.

 

 

2. Software Development

Step 1: Open Mind+ and remotely connect to the UniHiker board.

 

 

Step 2: Create a folder named "AI" under "Files in UNIHIKER", and inside it create a folder named "Facial Landmark Drawing with OpenCV & UNIHIKER". Import the facial images and models, along with the required dependency packages and files.

 

 

Step 3: Write the program

Create a new project file at the same level as the above images and model files, and name it "main.py".

Sample Program:

CODE
import cv2
import numpy as np
import time
from unihiker import GUI

gui = GUI()
# Show the original photo on the UNIHIKER screen
img_image2 = gui.draw_image(x=0, y=0, image='1.jpg')
time.sleep(3)

# Load the Haar cascade face detector bundled with OpenCV
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Create the LBF facial landmark detector and load the pre-trained model
facemark = cv2.face.createFacemarkLBF()
facemark.loadModel("lbfmodel.yaml")

# Read the image and convert it to grayscale for face detection
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces: 1.3 is the pyramid scale factor, 5 is the number of neighbors
faces = face_cascade.detectMultiScale(gray, 1.3, 5)

# Fit facial landmarks to each detected face
_, landmarks = facemark.fit(image, faces)

# Draw each landmark as a small green dot
for landmark in landmarks:
    for x, y in landmark[0]:
        cv2.circle(image, (int(x), int(y)), 2, (0, 255, 0), -1)

# Save the annotated image and display it on the screen
cv2.imwrite('output.jpg', image)
img_image2.config(image='output.jpg')

# Keep the program running so the image stays on screen
while True:
    time.sleep(1)

 

3. Run and Debug

Step 1: Run the "1-Install_dependency.py" program file to automatically install the dependency packages. The completed result is shown below.

 

 

Step 2: Run the main program

Run the "main.py" program. The screen displays a face, and after a few seconds the facial features such as eyes, nose, mouth, and eyebrows are outlined with small dots.

 

 

You can also find the saved facial image "output.jpg" in the same directory.

 

Program Analysis

In the "main.py" file above, we mainly use the Haar feature classifier from the OpenCV library to detect faces. Once a face is detected, we use Facemark's LBF (Local Binary Features) algorithm to locate the facial feature points. The overall process is as follows.

 

 

Algorithm Explanation

1. Haar cascade algorithm

The Haar cascade algorithm, also known as the Haar cascade classifier, is a widely used object detection method in computer vision, particularly in the field of face detection. It is a machine learning-based method that generates a cascade classifier by training on a large number of positive samples (containing the target object) and negative samples (not containing the target object).

How does it work? The following flowchart helps us understand the workflow of the Haar cascade algorithm.

 

 

Get original image

Use the "cv2.imread('1.jpg')" command to read the original image "1.jpg" from the current path.

 

 

Convert to grayscale image

After reading the original image, convert the color image to a grayscale image with "cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)". Haar features are computed from grayscale intensity, so color information is not needed.
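Under the hood, cv2.COLOR_BGR2GRAY applies the standard luma weighting Y = 0.299·R + 0.587·G + 0.114·B to each pixel. A minimal pure-Python sketch of the same per-pixel conversion (the example pixel values are made up for illustration):

```python
def bgr_to_gray(b, g, r):
    """Convert one BGR pixel to grayscale using OpenCV's luma weights."""
    return round(0.114 * b + 0.587 * g + 0.299 * r)

# Pure white stays white; pure blue becomes quite dark,
# because blue contributes the least to perceived brightness.
print(bgr_to_gray(255, 255, 255))  # 255
print(bgr_to_gray(255, 0, 0))      # 29
```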

 

 

Haar feature extraction

Haar feature extraction places black-and-white rectangle templates at the corresponding positions of the picture and compares the pixel sums under the black and white regions, checking whether they match the expected grayscale distribution.

 

 

In the first face detection image, the eye region is darker than the cheeks below the eyes. In the second face detection image, the left and right eye regions are darker than the nose region between them. Feature extraction is carried out over the whole image in this way, and if the extracted features are consistent with the face data model, the face feature extraction is successful.

Note: The face data model, which is trained in advance, won't be described in detail here.
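A Haar-like feature is simply the difference between the pixel sums under the white and black rectangles. A toy sketch of a two-rectangle feature on a tiny made-up "image" (the array values are invented to mimic the "eyes darker than cheeks" pattern; real cascades compute these sums quickly via integral images):

```python
# Toy 4x4 grayscale "image": top half dark (eye region), bottom half bright (cheeks)
image = [
    [40, 42, 38, 41],
    [39, 41, 40, 38],
    [200, 198, 202, 199],
    [201, 203, 197, 200],
]

def rect_sum(img, top, left, h, w):
    """Sum of pixel values inside a rectangle."""
    return sum(img[r][c] for r in range(top, top + h)
                         for c in range(left, left + w))

# Two-rectangle feature: bright (bottom) region minus dark (top) region.
# A large positive value matches the "eyes darker than cheeks" pattern.
feature = rect_sum(image, 2, 0, 2, 4) - rect_sum(image, 0, 0, 2, 4)
print(feature)  # 1281
```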

 

Recognition information

Recognition information: using the "face_cascade.detectMultiScale(gray, 1.3, 5)" command, you can obtain the detected faces as a list of bounding rectangles. Here 1.3 is the factor by which the image size is reduced at each pyramid level, and 5 is the number of neighbors. These two parameters are discussed below.

 

(1) Scaling ratio

The scaling ratio is an important parameter in face detection that controls how the image pyramid is built. A scaling ratio of 1.3 means that the image size changes by a factor of about 1.3 between every two layers of the pyramid: if the original image size is taken as 1.0, the image in the next layer is reduced by a factor of 1.3, reduced by 1.3 again in the layer after that, and so on.

 

 

How many layers the pyramid has depends on the detector settings and the size of the input image: a larger input image needs more layers, a smaller one fewer. The best scaling ratio depends on the needs and performance requirements of the application, and an appropriate value can usually be chosen through experiments and performance tuning. In our tests, a scaling ratio of 1.3 gave the best face detection results.
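The number of pyramid layers can be estimated directly: keep dividing the image size by the scale factor until it drops below the detection window. A small sketch of this calculation, assuming a 640 px wide input and a 24 px detection window (both values are chosen here purely for illustration):

```python
def pyramid_widths(width, scale=1.3, min_width=24):
    """Widths of successive pyramid levels, stopping once the image
    becomes smaller than the detection window."""
    widths = []
    while width >= min_width:
        widths.append(int(width))
        width /= scale
    return widths

levels = pyramid_widths(640)
print(len(levels))   # 13 levels for this input size
print(levels[:4])    # [640, 492, 378, 291]
```

Doubling the scale factor roughly halves the number of levels, which is why a larger scaleFactor runs faster but may miss faces whose size falls between two levels.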

 

(2) Number of neighbors

The number of neighbors is a parameter used to filter out false detections: it controls how many overlapping candidate rectangles must agree on a face location for that detection to be kept as valid. For example, the number of neighbors is set to 5 in this program: a candidate rectangle supported by fewer than 5 neighboring candidates is treated as a false detection, while one supported by 5 or more is kept as a valid detection.

Note: Regarding the number of neighbors, you can also choose the appropriate parameter based on the experimental results.
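The effect of the neighbor count can be sketched with a simplified grouping step: candidates that land on roughly the same rectangle count as neighbors, and a candidate survives only if its group is large enough. This is a toy illustration with invented rectangles, not OpenCV's exact grouping algorithm:

```python
def overlaps(a, b, tol=10):
    """Treat two (x, y, w, h) rectangles as neighbors if all four
    values differ by at most `tol` pixels (a deliberately crude test)."""
    return all(abs(p - q) <= tol for p, q in zip(a, b))

def filter_detections(candidates, min_neighbors=5):
    """Keep a candidate only if at least `min_neighbors` candidates
    (including itself) agree on roughly the same location."""
    kept = []
    for c in candidates:
        group = [d for d in candidates if overlaps(c, d)]
        if len(group) >= min_neighbors:
            kept.append(c)
    return kept

# Six nearly identical candidates around a real face, plus one stray outlier
candidates = [(100, 80, 50, 50), (102, 81, 49, 51), (99, 79, 51, 50),
              (101, 82, 50, 49), (103, 80, 48, 50), (98, 78, 52, 52),
              (300, 200, 40, 40)]
print(len(filter_detections(candidates)))  # 6 -- the lone outlier is dropped
```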

 

 

2. Understanding the LBF algorithm in the Facemark framework

Facemark is a framework for facial feature point detection in the OpenCV library. It provides a generic interface for facial feature point detection by implementing different algorithms. One of the implementations is LBF (Local Binary Features).

LBF is an algorithm for facial feature point detection based on local binary features. This algorithm first uses cascade regression to predict the rough locations of facial feature points, then uses local binary features to describe the appearance information of the face and uses this information to correct the locations of the feature points to get more accurate detection results.

The main advantages of the LBF algorithm are high detection accuracy and relatively high computational efficiency. It can get better detection results in a variety of different situations, such as different lighting, expressions and postures.

In OpenCV's Facemark framework, you can use the cv2.face.createFacemarkLBF() function to create a feature point detection object based on the LBF algorithm, load a pre-trained model with its loadModel() method, and then call its fit() method to perform feature point detection.

 

Appendix 1: Material and Extended Program Link

 

 

Feel free to join our UNIHIKER Discord community! You can engage in more discussions and share your insights!

License
All Rights Reserved