MediaPipe Face Detection Facial Mesh Drawing Based on UNIHIKER

Introduction

In this project, an external USB camera is connected to the UNIHIKER, and MediaPipe's Face Mesh model is used to detect a person's face in the camera feed and draw a mesh over it.

 

Project Objectives

Learn how to use MediaPipe's Face Mesh model for facial mesh drawing.

HARDWARE LIST
1 UNIHIKER - IoT Python Single Board Computer with Touchscreen
1 Type-C&Micro 2-in-1 USB Cable
1 USB camera

Software

Mind+ Programming Software

 

Practical Process

1. Hardware Setup

Connect the camera to the USB port of the UNIHIKER.

 

 

Connect the UNIHIKER board to the computer via the USB cable.

 

 

2. Software Development

Step 1: Open Mind+ and remotely connect to the UNIHIKER.

 

 

Step 2: Find the folder named "AI" under "Files in UNIHIKER", and create a folder named "Mediapipe Face Detection Facial Mesh Drawing Based on UNIHIKER" inside it. Import the dependency packages and files into this folder.

 

 

Step 3: Write the program. Create a new project file named "main.py" at the same level as the images and model files above.

Sample Program:

CODE
import cv2
import mediapipe as mp

# MediaPipe drawing helpers and the Face Mesh solution
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_face_mesh = mp.solutions.face_mesh

# Drawing specification: 1 px lines and dots (kept for custom styling;
# the default MediaPipe styles are used in the loop below)
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)

# Open the USB camera
cap = cv2.VideoCapture(0)

# Lower the resolution to 320x240 to keep inference fast on the UNIHIKER
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

# Keep only the latest frame in the buffer to reduce display latency
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)

# Create a full-screen window on the UNIHIKER display
cv2.namedWindow('MediaPipe Face Mesh', cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty('MediaPipe Face Mesh', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

# Detect at most one face; refine_landmarks=True adds the iris landmarks
with mp_face_mesh.FaceMesh(max_num_faces=1,
                           refine_landmarks=True,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5) as face_mesh:
    while cap.isOpened():
        success, image = cap.read()
        if not success:
            print("Ignoring empty camera frame.")
            continue

        # Mark the frame read-only for a small performance gain, and convert
        # BGR (OpenCV) to RGB (MediaPipe) before inference
        image.flags.writeable = False
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        results = face_mesh.process(image)

        # Convert back to BGR so OpenCV can draw on and display the frame
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        if results.multi_face_landmarks:
            for face_landmarks in results.multi_face_landmarks:
                # Draw the full mesh (tesselation), the face contours, and the irises
                mp_drawing.draw_landmarks(
                    image=image,
                    landmark_list=face_landmarks,
                    connections=mp_face_mesh.FACEMESH_TESSELATION,
                    landmark_drawing_spec=None,
                    connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_tesselation_style())
                mp_drawing.draw_landmarks(
                    image=image,
                    landmark_list=face_landmarks,
                    connections=mp_face_mesh.FACEMESH_CONTOURS,
                    landmark_drawing_spec=None,
                    connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_contours_style())
                mp_drawing.draw_landmarks(
                    image=image,
                    landmark_list=face_landmarks,
                    connections=mp_face_mesh.FACEMESH_IRISES,
                    landmark_drawing_spec=None,
                    connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_iris_connections_style())

        # Rotate for the portrait screen and mirror the image for a selfie view
        image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
        cv2.imshow('MediaPipe Face Mesh', cv2.flip(image, 1))
        # Exit when the ESC key (code 27) is pressed
        if cv2.waitKey(5) & 0xFF == 27:
            break

cap.release()
cv2.destroyAllWindows()

 

3. Run and Debug

Step 1: Run the "1-Install_dependency.py" program file to automatically install the dependency packages (a screenshot after the installation completes is shown below).
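The exact contents of "1-Install_dependency.py" come with the project materials and are not shown here. As a rough, hypothetical sketch (assuming pip is available on the UNIHIKER and that the required packages are opencv-python and mediapipe), such an installer could look like this:

CODE
# Hypothetical sketch of a dependency installer; the actual
# "1-Install_dependency.py" shipped with the project may differ.
import subprocess
import sys

# Install the packages main.py depends on, using this interpreter's pip
for package in ["opencv-python", "mediapipe"]:
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

print("Dependency installation finished.")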

 

 

Step 2: Run the "main.py" program. The initial screen shows the real-time image captured by the camera. Point the camera at a face, and you can see that the face is detected and a mesh is drawn over it.

 

 

 

4. Program Analysis

In the "main.py" file above, we use the OpenCV library to call the camera and obtain a real-time video stream, then use MediaPipe's Face Mesh model to detect the face in each frame and draw the mesh. The overall process is as follows.

1. Import the necessary libraries: the program imports the OpenCV library (for image processing and camera control) and the MediaPipe library (for machine-learning model inference).

2. Set the drawing specification: the program defines a DrawingSpec object to specify the line thickness and circle radius used when drawing the facial mesh (see the customization sketch after this list).

3. Turn on the camera: the program opens the camera with cv2.VideoCapture(0) and gets a VideoCapture object.

4. Set the camera parameters: the program sets the camera resolution and buffer size.

5. Create a full-screen window: the program creates a full-screen window for displaying the processed image.

6. Create a FaceMesh object: the program creates a FaceMesh object for facial mesh detection.

7. Process the video stream from the camera: the program enters an infinite loop that continuously reads images from the camera and processes each one with the FaceMesh object. If a face is detected, a mesh is drawn on the image. The processed image is then displayed in the window created earlier.

8. Wait for user action: the program checks whether the user has pressed the ESC key; if so, it breaks out of the loop, releases the camera, and ends the program.
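Note that main.py defines drawing_spec but draws with MediaPipe's default styles. As a sketch of the customization described in step 2, you can pass your own DrawingSpec to draw_landmarks; the color below is illustrative, not from the original project:

CODE
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_face_mesh = mp.solutions.face_mesh

# Illustrative values (not from the original project): DrawingSpec colors
# are BGR tuples, so (0, 255, 0) draws green 1 px connection lines
custom_spec = mp_drawing.DrawingSpec(color=(0, 255, 0), thickness=1, circle_radius=1)

def draw_custom_mesh(image, face_landmarks):
    """Draw the tesselation with a custom spec instead of the default style."""
    mp_drawing.draw_landmarks(
        image=image,
        landmark_list=face_landmarks,
        connections=mp_face_mesh.FACEMESH_TESSELATION,
        landmark_drawing_spec=None,           # no dots on the landmarks
        connection_drawing_spec=custom_spec)  # custom line style

Calling draw_custom_mesh(image, face_landmarks) in place of the first draw_landmarks call in main.py renders the mesh in a single flat color.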

 

 

MediaPipe's Face Mesh model

Face Mesh is a cross-platform, real-time facial mesh model. It is based on machine learning and is capable of detecting and tracking 468 facial keypoints from an image or video, including areas such as the eyes, mouth, nose, cheeks, and chin. Together, these keypoints form a detailed 3D mesh of the face, which can be used for various applications such as facial expression recognition and facial shape estimation.
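Each keypoint is exposed as normalized coordinates. As a minimal sketch of reading them back (assuming a results object returned by FaceMesh.process as in main.py; landmark index 1, often cited as the nose tip, is an illustrative choice):

CODE
def landmark_to_pixels(results, image_width, image_height, index=1):
    """Convert one normalized Face Mesh landmark to pixel coordinates.

    Assumes `results` was returned by FaceMesh.process() as in main.py;
    index 1 (often cited as the nose tip) is an illustrative choice.
    """
    if not results.multi_face_landmarks:
        return None
    landmark = results.multi_face_landmarks[0].landmark[index]
    # x and y are normalized to [0, 1]; z is a relative depth on roughly
    # the same scale as x, with smaller values closer to the camera
    return int(landmark.x * image_width), int(landmark.y * image_height), landmark.z

With refine_landmarks=True, as used in main.py, the landmark list actually contains 478 points: the 468 mesh keypoints plus 10 iris keypoints.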

MediaPipe's Face Mesh model is a deep learning-based model: it first uses a face detection model called BlazeFace to localize the face, and then a model called Face Landmark to predict 468 keypoints on the face. These keypoints are mapped into the 3D space of the face to form a detailed facial mesh.
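In video mode, as in main.py, the detector only runs when landmark tracking is lost; for individual photos you can force detection on every input with static_image_mode=True. A minimal sketch, assuming a hypothetical local image file named "face.jpg":

CODE
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

# static_image_mode=True runs face detection on every image instead of
# relying on landmark tracking between consecutive video frames
with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
    image = cv2.imread("face.jpg")  # hypothetical input file
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        print("Detected", len(results.multi_face_landmarks[0].landmark), "landmarks")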

 

 

Appendix 1: Materials and Additional Programs

License: All Rights Reserved