Image-Classification-Based Emotion-Recognition Driving Companion

1.1 Project Introduction

This project aims to create an intelligent driving "companion" that understands emotions and helps keep you safe, alleviating the real-world problems of boredom, lack of interaction, and fatigue during long-distance driving. For example: when driving on the highway, you want to share a happy moment but there is no one to respond; or when driving late at night, your eyelids droop with sleepiness but no one is there to remind you in time. By using image classification technology to recognize emotions, the UNIHIKER M10 becomes a close driving companion. It captures your facial expressions in real time via the camera, identifies states such as happiness, anger, and drowsiness, and responds with real-time interaction. For instance, if it detects signs of drowsiness such as drooping eyelids and a slightly lowered head, it will immediately remind you via text, images, and music: "Take a break, take a break."

You can use it during daily commutes, for example, interacting with it during morning rush-hour traffic jams to ease irritation. It can also be extended to long-distance road trips for real-time fatigue monitoring, and even adapted to the working scenarios of ride-hailing and freight truck drivers, adding an "extra safety lock" for professional driving. From data collection and model training to inference and application, this project fully demonstrates how artificial intelligence technology solves real-life needs, bringing "emotionally aware and safety-protective" intelligent companionship into every daily drive.


1.2 Demo Video

2. Project Implementation Principle

This project implements face emotion recognition and interactive functions based on image classification technology. The entire implementation process covers the full workflow from data preparation to model inference and application. Specifically, it first uses the Mind+ model training platform to collect images and train the image classification model. After training, the model is verified and optimized, then exported in a format suitable for the UNIHIKER M10. Finally, the model is deployed to the UNIHIKER M10 for inference application: it uses the board's camera to capture real-time images, performs inference and recognition via the model, outputs the emotion category, and executes the corresponding actions.
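Conceptually, the program running on the UNIHIKER M10 is a simple capture-classify-respond loop. The sketch below is only an illustration of that flow in plain Python, assuming OpenCV for the camera; classify_emotion and react_to are hypothetical placeholders for the trained model and the interaction logic covered in the following sections.

    import cv2  # OpenCV, used here to read frames from the USB camera

    def classify_emotion(frame):
        # Placeholder: run the trained image-classification model on one frame
        # and return a label such as "normal", "happy", "tired" or "mad".
        ...

    def react_to(label):
        # Placeholder: show text/images and play music according to the label,
        # e.g. a rest reminder when "tired" is detected.
        ...

    cap = cv2.VideoCapture(0)              # open the USB camera
    while True:
        ok, frame = cap.read()             # capture one frame
        if not ok:
            continue
        label = classify_emotion(frame)    # image classification -> emotion label
        react_to(label)                    # interactive response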


3. Hardware and Software Preparation

3.1 Equipment List

Note: UNIHIKER system version v0.4.1 or above is required for this project. The Python environment on the UNIHIKER is version 3.12, and the Mind+ programming software is v2.0.

HARDWARE LIST
UNIHIKER M10 × 1
USB Cable × 2
USB Camera × 1

3.2 Hardware Connection

Make connections by referring to the diagram below.

3.3 Software Preparation

Download and install the Mind+ installation package (V2.0 or above) from the official website, then double-click to open it after installation.


3.4 Environment and Extension Preparation

This project requires setting the Python environment of the UNIHIKER M10 to version 3.12.7. Please follow these steps for environment preparation:

Open the programming software Mind+, and select "Python Block Mode" in the "Coding" tab.


Next, add the required extensions in Mind+.
Enter the "Extensions" page, switch to "Board" tab, search for"M10", and click "UNIHIKER M10".


Click the "Back" button to return to the programming interface.


Expand the "Local Terminal", choose your device for connecting.

In the terminal, input "python --version" and press the Enter key to check the Python environment version of the UNIHIKER M10:

As shown in the figure, the Python environment of the UNIHIKER M10 is already the specified version 3.12.7.

If the version does not match, input "pyenv global 3.12.7" in the terminal to switch to this version.
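If you prefer to confirm the interpreter version from inside a Python program rather than in the terminal, a quick check such as the following (a minimal sketch, not part of the project code) also works:

    import sys

    print(sys.version)  # should report 3.12.7 on a correctly configured UNIHIKER M10
    assert sys.version_info >= (3, 12), "Python 3.12 or above is required"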


To complete the extension preparation, please follow the steps below:
(1) Connect UNIHIKER M10 to internet
(2) Load the extension


First, connect the UNIHIKER M10 to the network.
Enter "10.1.2.3" in the browser address bar to access the UNIHIKER M10's web configuration page.

Choose the Wi-Fi name, enter the password, and click "Connect".


A "WiFi connection is successful" message will appear, and your UNIHIKER M10 is now connected to the network.


Secondly, you can proceed to load the model inference-related extension.

Go to the "Extensions" page and download the "Model Training and Inference" library.

Click the "Back" button to return to the programming interface.


Wait until the dependency libraries have been installed.


4. Project Making

We use the model training feature in Mind+ to complete the collection of facial expression data, model training, and model export.
First, open the Mind+ software, select "Model Training", and open "Image Classification" (Note: Only Mind+ 2.0 or higher versions have the model training function).


The initial interface is shown below:


The page is divided into three parts, from left to right: Data Collection, Model Training, and Model Validation & Export (the usage of each part will be detailed later).


4.1 Data Collection

To train a Face Emotion Recognition model, you need to prepare a dataset containing different categories.
Click Add Category to set up the categories.


Edit the category name.

In this project, we need to recognize four emotions: normal, happy, tired, and mad. Therefore, we need to add four categories. The set-up categories are as follows:

Once the categories are set, proceed to collect data for each category using the camera. The steps are:
Click the Camera button to start data collection.

Press and hold the "Press to Record" button to begin data collection.

By default, twenty photos are captured per second; this rate can be adjusted in the settings if needed. At this rate, holding the button for about 2.5 seconds captures roughly 50 images for a category.


(FPS, or Frames Per Second, refers to the number of image frames captured per second)

Collect data for the remaining three categories following the above steps.


Now, the emotion dataset is ready. In this project, the dataset size is: 50 samples per category, totaling 200 samples across four categories.
This dataset is available in the attachment, or you can upload data using the following steps:
Click "Upload Data" and select "Select File to Upload".


Open the dataset folder for the category.

Select all files and click "Open".

Change the category name to the corresponding emotion name.

Upload data for the remaining three categories in the same manner.
After uploading, the interface will look like this:

Once the dataset is prepared, we can proceed to the model training phase.

4.2 Model Training

To train the model, you need to adjust the training parameters based on the dataset characteristics.

Expand "Advanced Settings" to modify the parameters.

In this project, with a dataset size of approximately 200 images, the training parameters are set as follows:

Once the parameters are configured, simply click the "Train Model" button to start the training process (keep this page open during training to ensure no interruption).

During training, click "Learn More" to monitor the model training process.


Wait for training to complete, then click "OK".


4.3 Model Validation and Export

After the model training is completed, you can verify the model performance through model validation. Please follow these steps to use the camera for testing:
Enable the "Input" switch → Select Webcam → Align the camera with the target → Observe the output

Meanwhile, test files are also provided in the project attachment, and you can verify by uploading the file using the following steps:

Enable the "Input" switch → Select "File" → Click to upload file → Choose the file and click "open" → Observe the validation result.


Once the validation result meets expectations, you can export the model file.
Click "Export Model" to export the model as an ONNX format file and a YAML format configuration file (both files will be used in model inference and application).


Select a location to save the model file.


It is recommended to save this model training project as a project file for easy model optimization and adjustments later. The steps are:
Expand "Quick Experience", select "Save Project", choose the save location, and click "Save" to complete the saving operation. Then open the saved project file via "Open Project" in "Quick Experience"


4.4 Model Inference and Application

Click "+", and select "Python Block Mode" in "Coding".

Connect the device and load the extension.
Go to the "Extensions" page and download the "Model Training and Inference" library.


Please upload the model files as follows.

Click "Resource Files" → Select "Upload File" → Select the model (.onnx) and its configuration file (.yaml) → Click "Open".

The model files have been uploaded successfully.

Then go back to Blocks.

Write the program as follows:

The analysis of the core code is as follows:
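Since the block program and its core-code analysis are shown as screenshots, the sketch below is only a rough plain-Python approximation of the same capture-infer-respond logic, not the exact Mind+ block program. The file names, the 224x224 input size, the normalization, and the "labels" key in the YAML configuration are assumptions and must be matched to the model you actually exported.

    import cv2
    import numpy as np
    import onnxruntime as ort
    import yaml

    MODEL_PATH = "emotion_model.onnx"     # hypothetical file names from section 4.3
    CONFIG_PATH = "emotion_model.yaml"

    session = ort.InferenceSession(MODEL_PATH)
    input_name = session.get_inputs()[0].name

    # Read the class labels from the configuration; the "labels" key is an assumption.
    with open(CONFIG_PATH, "r", encoding="utf-8") as f:
        labels = yaml.safe_load(f).get("labels", ["normal", "happy", "tired", "mad"])

    cap = cv2.VideoCapture(0)                                  # USB camera
    while True:
        ok, frame = cap.read()
        if not ok:
            continue

        # Preprocessing must match the exported model; 224x224 RGB scaled to 0-1 is assumed.
        img = cv2.resize(frame, (224, 224))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        img = np.transpose(img, (2, 0, 1))[np.newaxis, ...]    # HWC -> NCHW

        scores = session.run(None, {input_name: img})[0][0]    # model inference
        label = labels[int(np.argmax(scores))]                 # emotion category

        # Interactive response: overlay the result and remind the driver when tired.
        text = "Take a break, take a break!" if label == "tired" else label
        cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
        cv2.imshow("Emotion Companion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):                  # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()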

Click "Run" to run the program.


The effect is as follows:


There is a complete program file for this project in the attachment.
Click "Open Project" → "Open Local File" to load the project.

Select the project in the attachment and click "Open".


Click "Run" to run the program.

5. Attachment

Google Drive: https://drive.google.com/file/d/1qIC9xujwSWheYe-0yW-loPhnOsGU_Uwc/view?usp=drive_link

License
All Rights Reserved