Pose Classification-Based Face-Tracking Oscillating Fan

1.1 Project Introduction

This project aims to create an intelligent fan that rotates to follow the movement of a human face. Unlike ordinary oscillating fans, our fan detects facial key points in real time and automatically tracks the position of the face.

You can use it to keep cool air pointed at a family member, or place it on a workbench so the airflow follows your face as you move. If you replace the fan module with another module (such as a camera), the device becomes a face-tracking camera instead. From data collection and model training to inference and application, this project demonstrates how easily AI technology can be put to work in daily life, making pose classification technology accessible, interactive, and fun!

1.2 Demo Video

2. Project Implementation Principle

This project implements a pose classification-based oscillating fan, covering the entire process from data preparation to model deployment and application. First, pose images are collected and labeled on the Mind+ model training platform to build a dataset, and a pose classification model is trained on it. After training, the model performs real-time inference and outputs the face position on the computer. Finally, the UNIHIKER K10 receives the inference results through the IoT platform and controls the servo to rotate, driving the fan to oscillate.
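
Concretely, the only data exchanged between the computer and the K10 is a short class label per inference, carried over MQTT. The sketch below illustrates that contract; the topic name is an assumption (check your own SIoT settings), and the labels match the classes trained later in section 4.1.

```python
# The message contract between the computer and the UNIHIKER K10, in miniature.
# The topic name is an assumption; the labels are the classes trained in 4.1.
POSE_TOPIC = "siot/pose"                    # hypothetical SIoT topic
POSE_LABELS = ("left", "middle", "right")   # payload: one label per inference
```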

3. Hardware and Software Preparation

3.1 Equipment List

HARDWARE LIST
1 UNIHIKER K10
1 Servo
1 Fan Module
1 3-Pin Connection Wire
1 USB Cable

3.2 Hardware Connection

Make connections by referring to the diagram below.

3.3 Software Preparation

Download the Mind+ installation package (Version 2 or above) from the official website and install it. Double-click to open it after installation.

Double-click "start SIoT.bat" (found in the attachment) to run the program, and note your computer's IP address (e.g., 192.168.9.231).

Note:
When the system pops up the window "Do you want to allow public and private networks to access this app?" for the first time, click "Show more", check both "Public networks" and "Private networks", then click "Allow". Otherwise, devices may fail to access the server.
This program must keep running throughout data transmission.
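
If you want to confirm the broker is reachable before continuing, a short script can attempt a connection from the computer. This is optional and rests on a few assumptions: the paho-mqtt library (1.x callback API) and SIoT's usual defaults (port 1883, account siot/dfrobot); substitute the IP address reported by start SIoT.bat.

```python
# Optional connectivity check for the local SIoT broker.
# Assumptions: paho-mqtt 1.x API; SIoT defaults (port 1883, user "siot",
# password "dfrobot"). Replace BROKER_IP with the address from start SIoT.bat.
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.9.231"   # the IP reported by start SIoT.bat

def on_connect(client, userdata, flags, rc):
    # rc == 0 means the broker accepted the connection
    print("Connected to SIoT" if rc == 0 else f"Connection failed, rc={rc}")
    client.disconnect()

client = mqtt.Client()
client.username_pw_set("siot", "dfrobot")   # SIoT default account
client.on_connect = on_connect
client.connect(BROKER_IP, 1883, keepalive=60)
client.loop_forever()   # returns once on_connect() triggers disconnect
```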

4. Project Making

We use Mind+'s model training feature to train the face-position recognition model.

First, open the Mind+ software, click "Model", and select "Pose Classification" (note: only Mind+ 2.0 and above include the model training function).

The initial interface will open as follows:

The page is divided into three sections, from left to right: Data Collection, Model Training, and Model Validation & Deployment.

4.1 Data Collection and Labeling

The first step in model training is to prepare the pose image dataset. In this project, we use "left", "right", and "middle" as labels for the head positions.

Follow the steps below to collect and label data using the camera.

Click the "Camera" button; this opens the computer's built-in camera for data collection.

Press and hold the "Hold to Record" button to start recording, changing angles to capture multiple images of the target. By default, twenty photos are collected per second, so holding for three seconds yields about sixty images; the rate can be adjusted in the settings if needed.

Tip: FPS (Frames Per Second) refers to the number of image frames collected per second.

After collecting all the data for this class, rename "Class1" to "left".

Follow the same method to complete the image collection and labeling for "right" and "middle".

4.2 Model Training

Click "Train Model" and wait for the model training to complete.

Tip: To adjust training parameters, click "Advanced Settings".

4.3 Validation

After model training is completed, you can validate the model to check its performance. Camera-based validation lets you directly observe how the model behaves in practice. Please follow these steps:

Enable the "Input" switch → Select the camera → Aim the camera at the target → Observe the output

4.4 Inference Result Push

During validation, the inferred pose data can be pushed to the IoT platform (SIoT) in real time and then transmitted to the UNIHIKER K10. The specific steps are as follows:

Step 1: Click the red dot next to "Real-time Result Push" to open the server settings window.

Step 2: Fill in the server parameters: select "SIoT V2" as the server, and set the MQTT server address to the local IP address you recorded earlier.

Step 3: Click "Complete". The red dot turns green, indicating that data is being pushed in real time.
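
To verify that results are actually arriving at the broker, you can subscribe from a small script while validation runs. As before, this sketch assumes paho-mqtt's 1.x API and SIoT's default account; the topic name below is a placeholder, so use the topic shown in your Mind+ push settings.

```python
# Optional: watch the pushed inference results arrive at the SIoT broker.
# POSE_TOPIC is a placeholder; use the topic from the Mind+ push settings.
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.9.231"
POSE_TOPIC = "siot/pose"      # hypothetical topic name

def on_message(client, userdata, msg):
    # Each payload should be one of the trained labels:
    # "left", "middle", or "right"
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.username_pw_set("siot", "dfrobot")   # SIoT default account
client.on_message = on_message
client.connect(BROKER_IP, 1883)
client.subscribe(POSE_TOPIC)
client.loop_forever()
```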

4.5 Programming

Now, we can start writing the UNIHIKER K10 program to receive inference results and control the servo's rotation.

Step 1: Click "+", then select "Upload Mode" in the "Coding" tab.

Step 2: Connect the device.

Step 3: Download and load relevant extension libraries.

Since the UNIHIKER K10 needs to receive data and control the servo, we need to load the UNIHIKER K10, Wi-Fi, MQTT, and Servo libraries. The search method is as follows:
UNIHIKER K10 library: Click "Extensions", then under "Board" search for "UNIHIKER K10" and click to download. Wait for the download to complete, then click again to finish loading.

Wi-Fi, MQTT, and Servo libraries: Still on the "Extensions" page, select "Module", find each library, and click to download. Wait for each download to complete, then click again to finish loading.

Step 4: Write the program.
Write the program yourself or directly open the attached one. The method for opening the project is shown in the following figure:

The opened program is shown in the following diagram:

The analysis of the core code is as follows:
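
Because the screenshots are not reproduced here, the following Python-style sketch mirrors the block program's logic: join Wi-Fi, connect to the SIoT broker, subscribe to the pose topic, and turn each incoming label into a servo angle. The helper functions are hypothetical stand-ins for the corresponding Mind+ blocks (stubbed with prints so the control flow can be read on its own), and the network credentials, topic, and angles are placeholders to tune for your setup.

```python
# Python-style sketch of the K10 block program's logic. The helpers below are
# hypothetical stand-ins for Mind+ blocks (Wi-Fi, MQTT, servo), stubbed with
# prints; they are NOT a real K10 API.

ANGLE_FOR_LABEL = {"left": 150, "middle": 90, "right": 30}  # tune to mounting

def connect_wifi(ssid: str, password: str) -> None:
    print(f"[stub] joining Wi-Fi network {ssid!r}")           # Wi-Fi block

def mqtt_connect_and_subscribe(broker_ip: str, topic: str, handler) -> None:
    print(f"[stub] MQTT connect to {broker_ip}, subscribe {topic!r}")  # MQTT blocks

def set_servo(angle: int) -> None:
    print(f"[stub] servo -> {angle} degrees")                 # servo block

def on_pose_label(label: str) -> None:
    # Runs once for each label pushed by the computer-side model
    set_servo(ANGLE_FOR_LABEL.get(label, 90))                 # default: center

connect_wifi("your-ssid", "your-password")                    # placeholders
mqtt_connect_and_subscribe("192.168.9.231", "siot/pose", on_pose_label)
```

When the real program runs on the K10, moving left or right in front of the camera should produce a matching servo movement, which is exactly what Step 5 tests.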

Step 5: Click the "Upload" icon and wait for the program to finish uploading.

Stay on the Pose Classification page and move left and right in front of your computer's camera. Observe the rotation of the servo horn and adjust its installation position as needed. Then, after mounting everything on a simple structure, you can experience the face-tracking oscillating fan.

The implementation effect is shown in the following image:

5. Attachment

SIoT V2_EN: https://drive.google.com/file/d/1jOcsHETgLeWd7lVjnL0qlSfMyhJ8DSuJ/view?usp=sharing

Pose Classification.mpcode.zip (4 KB)
License: All Rights Reserved