Build a Sleep Monitoring Assistant Using Voice Classification in Mind+
1 Project Introduction
1.1 Demo Video
1.2 Project Design
Have you ever wondered about your sleep quality at night? Have you wanted to know if you snore, talk in your sleep, or cough during the night? The "Sleep Monitoring Assistant" uses the Mind+ Model Training Tool to classify sounds during sleep, identifying and recording the number of various sounds such as snoring, sleep talking, and coughing, and generates an AI sleep quality report.
By working through this project, you will learn:
• Model Training: How to train a voice classification model using the Mind+ 2.0 Model Training Tool.
• Model Deployment: How to deploy model inference results to the UNIHIKER M10 via real-time push, enabling sleep recording and report generation.
• Ethical Discussion: How to think about and address AI ethics issues such as privacy and data security and technological over-reliance in technical practice.
1.3 Project Implementation Process
This project trains a sleep-monitoring voice classification model with the Voice Classification module in the Mind+ 2.0 Model Training Tool. Using push-based deployment, the model's inference results are pushed to the IoT server in real time; the UNIHIKER M10 receives and displays them and, together with a large language model, generates sleep reports. The entire project flow is shown in the following figure:
2 AI Knowledge Garden - Voice Classification
2.1 Voice Classification
Voice classification is a key technology in artificial intelligence, whose core task is to analyze input audio signals and classify them into predefined categories. For example, distinguishing whether a segment of audio is a cat meow, dog bark, or ambient sound.
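Conceptually, a trained classifier maps the features extracted from an audio clip to a score for each predefined category and outputs the highest-scoring label. A minimal, illustrative sketch of that final step (the class names and score values below are invented for the example, not the output of a real model):

```python
# Illustrative only: a trained model would compute these scores from
# real audio features; the numbers here are made up for the example.

def classify(scores: dict[str, float]) -> str:
    """Return the label with the highest score (argmax over categories)."""
    return max(scores, key=scores.get)

# Hypothetical model output for one audio clip:
scores = {"cat meow": 0.08, "dog bark": 0.85, "ambient sound": 0.07}
print(classify(scores))  # dog bark
```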
2.2 Applications of Voice Classification
The application fields of voice classification are very extensive, mainly including:
• Smart voice assistants: Recognize instruction types (such as playing music, setting alarms, asking about the weather) to achieve multi-intent interaction.
• Emotion recognition: Analyze intonation to determine the speaker's emotion, used in customer service quality inspection, mental health monitoring, etc.
• Security monitoring: Identify abnormal sounds such as glass breaking, screams, and emergency calls to improve security response speed.
• Sleep monitoring: Detect sleep behaviors such as snoring, coughing, and sleep talking for health analysis.
• Medical diagnosis assistance: Classify cough sounds and breathing sounds to assist in respiratory disease screening.
• Entertainment and content retrieval: Recognize music genres, opera styles, and instrument types for intelligent recommendation.
• Industrial and home scenarios: Judge whether equipment operation sounds are abnormal for fault detection and maintenance.
3 Sleep Monitoring Model Training
3.1 Download Software and Create Training Project
Download and install Mind+ 2.0 or higher from the official website, then double-click to open it once installed. Download address: https://www.mindplus.cc/
Create a new project, click "Model" in the left navigation bar, and select the "Speech Classification" task.
3.2 Data Preparation
• Label Setup: Add three categories in addition to "Background Noise", and set the labels as "Snoring", "Sleep Talking", and "Coughing".
• Data Collection: Data can be collected by recording live with the microphone or by uploading audio files. Each category requires at least 20 samples.
3.3 Model Training
• Train Model: Click the "Train Model" button to start model training.
• Training Parameter Settings: Click the "Advanced Settings" button to configure training parameters.
• Training Process and Result Observation: During training, clicking "Deep Dive" in "Advanced Settings" allows you to view accuracy and loss for each training cycle.
3.4 Model Validation
Enable the microphone and play new audio that was not used in training for testing; check the real-time classification results.
• Model Optimization and Retraining: When model validation results are unsatisfactory, you can retrain the model by optimizing data quality, adjusting model parameters, and other methods.
4 Sleep Monitoring Model Deployment
4.1 Hardware List
4.2 Deployment Approach
Using a real-time push method, the AI model training platform pushes model inference results to the IoT server as they are produced. The UNIHIKER M10 serves as the terminal that receives the results and displays the counts of sounds such as snoring, sleep talking, and coughing.
4.3 Real-time Result Push
1. Environment Preparation
• Enter 10.1.2.3 in the browser.
⢠Click "Network Settings", connect to WiFi, and obtain the UNIHIKER M10 IP address.
For IoT applications built on a local server, all devices must be on the same network (e.g., a private LAN or a phone hotspot).
• Enable the SIoT service.
At this point, UNIHIKER M10 acts as the IoT server.
2. Real-time Result Push
⢠Click "Real-time Result Push". In "Real-time Push Server Settings", modify the MQTT server address to the UNIHIKER M10 IP address (checkable in UNIHIKER M10 Network Settings).
• Once the server connection is successful, the "Real-time Result Push" button turns green.
4.4 Model Application
As a terminal device in the IoT system, UNIHIKER M10 receives messages from the IoT server. Follow these steps to write a program that connects to the server, receives real-time inference results, and generates a sleep analysis report.
• Create a New Project
Click "Coding" in the left navigation bar, then select "Python Blocks Mode".
• Import Libraries
Add the main controller: Click "Extensions", search for "M10" in the top-right search box, click the "Download" button on the UNIHIKER M10 extension library, and after downloading, click the library again to complete the import.
Add extension libraries: Search for "MQTT" and "deepseek" in the top-right search box, download and import them. Finally, click "Back" in the top-left corner to return to the programming interface.
• Connect Terminal Device
In the terminal connection options, select "Default-10.1.2.3" to connect to UNIHIKER M10.
• Write the Program
STEP 1: Connect to the IoT Server and Subscribe to a Topic
Write the program (as shown in the figure) to connect to the SIoT server and subscribe to a topic.
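The block program in STEP 1 can be sketched in plain Python. This is a hypothetical equivalent, assuming the SIoT broker on the UNIHIKER M10 listens on the standard MQTT port 1883 with SIoT's default siot/dfrobot credentials, and using the paho-mqtt 1.x client API; the IP address and topic name are placeholders you would replace with your own values:

```python
SERVER_IP = "10.1.2.3"     # placeholder: use your UNIHIKER M10's IP address
TOPIC = "siot/inference"   # placeholder: the topic configured in Mind+

def decode_payload(payload: bytes) -> str:
    """MQTT payloads arrive as bytes; decode them to a text label."""
    return payload.decode("utf-8").strip()

def main():
    # paho-mqtt (1.x API) stands in here for the Mind+ MQTT extension blocks.
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        print("Inference Result:", decode_payload(msg.payload))

    client = mqtt.Client()
    client.username_pw_set("siot", "dfrobot")  # SIoT default credentials
    client.on_message = on_message
    client.connect(SERVER_IP, 1883)            # connect to the SIoT server
    client.subscribe(TOPIC)                    # subscribe to the topic
    client.loop_forever()                      # wait for pushed results

# Uncomment to run on a device that can reach the SIoT server:
# main()
```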
STEP 2: Receive Model Inference Results
UNIHIKER M10 receives the real-time result messages pushed over MQTT and assigns the message content to the variable "Inference Result".
STEP 3: Analyze MQTT Messages
If the received MQTT message is "Snoring", increment the count of the "Snoring" variable by 1;
If the message is "Sleep Talking", increment the count of the "Sleep Talking" variable by 1;
If the message is "Coughing", increment the count of the "Coughing" variable by 1.
UNIHIKER M10's screen displays the real-time counts of "Snoring", "Sleep Talking", and "Coughing".
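The logic of STEPS 2 and 3 amounts to a message handler: decode each pushed label, then increment the matching counter. A minimal plain-Python sketch (the label strings must match those pushed by the training tool; displaying the counts on the UNIHIKER screen is left to the blocks and is only noted in comments):

```python
# Counters for the three monitored sounds, as in STEP 3.
counts = {"Snoring": 0, "Sleep Talking": 0, "Coughing": 0}

def handle_message(payload: bytes) -> None:
    """Decode one pushed inference result and update the counters.

    In the block program this logic runs inside the MQTT extension's
    "when message received" block; here it is a plain callback.
    """
    label = payload.decode("utf-8").strip()
    if label in counts:
        counts[label] += 1
    # Other labels (e.g. "Background Noise") are ignored.
    # The block program would then refresh the counts on the screen.

# Simulated stream of pushed results:
for msg in [b"Snoring", b"Snoring", b"Coughing", b"Background Noise"]:
    handle_message(msg)

print(counts)  # {'Snoring': 2, 'Sleep Talking': 0, 'Coughing': 1}
```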
STEP 4: Generate Sleep Analysis Report
Initialize the large model. If Button-A is pressed, push the three data points ("Snoring", "Sleep Talking", "Coughing") to the large model for analysis, and UNIHIKER M10's screen displays the sleep quality analysis report.
Note: To protect your sleep privacy, the analysis can be completed locally (corresponding programs need to be added). If more in-depth analysis is required, you can choose to enable the large model service (requires a key) for online sleep analysis.
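As the note above suggests, the analysis can also be done locally without a large-model key. A toy rule-based report generator along those lines (the thresholds are arbitrary placeholders for illustration, not medical guidance):

```python
def sleep_report(snoring: int, sleep_talking: int, coughing: int) -> str:
    """Build a short text report from the night's event counts.

    The thresholds below are arbitrary examples; a real deployment would
    tune them, or hand the counts to the large-model service instead.
    """
    lines = [f"Snoring: {snoring}, Sleep talking: {sleep_talking}, "
             f"Coughing: {coughing}."]
    if snoring > 20:
        lines.append("Frequent snoring detected; consider a follow-up.")
    if coughing > 10:
        lines.append("Frequent coughing detected; monitor for illness.")
    if snoring <= 20 and coughing <= 10:
        lines.append("No unusually frequent events recorded.")
    lines.append("Note: AI-generated, for reference only; "
                 "not a medical diagnosis.")
    return "\n".join(lines)

# Example: the report the screen might show after Button-A is pressed.
print(sleep_report(25, 2, 3))
```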
The complete program is as follows:
• Run & Verify
5 AI Ethics Discussion
Technology itself is neutral, but those who design and use it bear responsibility. While enjoying the convenience brought by AI, we need to actively think about the ethical issues behind it and explore responsible solutions. Now, let's discuss several core ethical challenges that may arise in the Sleep Monitoring Assistant project.
5.1 Privacy and Data Security
• Issue: When the system collects and analyzes sleep audio, it may capture and leak users' sensitive personal information, such as personal health conditions, nighttime conversation content, etc.
• Solution: Adopt local processing to ensure audio data is analyzed and processed on the user's device without uploading to the cloud. Encrypt stored audio data. Additionally, clearly inform users about the scope and purpose of data collection and obtain their explicit consent.
5.2 Technical Over-Reliance
• Issue: Users may over-rely on the assistant's monitoring reports, ignoring their actual feelings and bodily signals. If the technology misjudges, it may lead to misdiagnosis or anxiety.
• Solution: Embed a disclaimer in the system design, e.g., displaying on the screen: "Important Note: This report is generated by AI algorithms for reference only and cannot replace professional medical diagnosis." When an AI misjudgment causes anxiety, remember: AI is a fallible tool, not an authoritative judge.
6 Self-Test
6.1 Extended Exercise
Think about other sleep data that can be monitored, such as teeth grinding sounds, nightmare screams, etc. Continue training the voice classification model in Mind+, and deploy it to UNIHIKER M10 for recognition testing to improve the Sleep Monitoring Assistant.
6.2 Learning Evaluation Form
7 Attachment
Google drive: https://drive.google.com/file/d/1Y0DvnF3qLF4go72DQf8OIq0WnxJTGvys/view?usp=drive_link









