Lesson Plan on UNIHIKER K10 for Electrical & Electronic Engineering Students

Lesson 1: Getting Started with UNIHIKER K10 on Arduino IDE

Learning Objectives:

By the end of this project, students will be able to:

a. Identify hardware features of the UNIHIKER K10 board (ESP32-S3 core, built-in sensors, display, camera, and GPIO).

b. Set up Arduino IDE and configure it to support the UNIHIKER K10 by installing the appropriate board package and libraries.

c. Connect and recognize the board on different operating systems (Windows, macOS, Linux), selecting the correct board type and serial port.

d. Write, compile, and upload Arduino sketches to the UNIHIKER K10 successfully.

e. Control onboard components (e.g., built-in LED, display) using Arduino code.

f. Utilize onboard sensors (such as light, temperature, or motion) and display or transmit sensor data.

g. Debug simple hardware and software issues (USB connection, port selection, driver installation, upload errors).

h. Demonstrate problem-solving and practical skills in configuring a modern AI/IoT development board in the Arduino environment.

Required Materials:

Core Hardware

a. UNIHIKER K10 board (ESP32-S3 based learning board)

b. USB-C data cable (make sure it supports data, not just charging)

c. Computer (Windows / macOS / Linux with Arduino IDE installed)

Optional Accessories (for experiments)

a. Breadboard (for prototyping)

b. Jumper wires (male-to-male, male-to-female)

c. LEDs (external LEDs, different colors)

d. Resistors (220Ω – 1kΩ for LEDs, others for experiments)

e. Buzzer / piezo speaker

f. Push buttons / switches

g. Potentiometer (for analog input testing)

h. Basic sensors (e.g., DHT11/DHT22 temperature & humidity sensor, LDR, ultrasonic sensor)

i. Small DC motor or servo motor (for actuator experiments)

Software & Tools

a. Arduino IDE (v1.8.19 recommended, v2.x possible with updated SDK)

b. UNIHIKER Arduino board package & libraries (installed via Boards Manager)

c. Drivers, if required (CH340 / CP210x for clone boards, or Espressif USB drivers)

d. Internet access (for installing board packages and libraries)

Lesson Flow:

Duration: ~2–3 hours

Target Audience: Electrical & Electronic Engineering students (beginners with Arduino)

1. Introduction (15 minutes)

a. Brief overview of UNIHIKER K10 (features: ESP32-S3 core, display, camera, onboard sensors).

b. Importance of Arduino IDE for embedded systems and IoT development.

c. Show real-world applications (IoT devices, AI edge computing, robotics).

2. Setting Up the Environment (30 minutes)

a. Install Arduino IDE (v1.8.19 or v2.x as supported).

b. Add the UNIHIKER board package URL via Preferences → Additional Boards Manager URLs.

c. Install UNIHIKER K10 board support from Boards Manager.

d. Select board type: UNIHIKER K10.

e. Connect K10 via USB-C and select the correct COM/Port.

3. First Hands-On Coding (45 minutes)

Activity 1: Blink onboard LED

i. Open the Arduino IDE and write or load the Blink sketch, using pin 25 for the onboard LED (a minimal sketch follows after this activity).

ii. Compile and upload, then observe the LED blinking.
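
A minimal Blink sketch for this activity (assuming, as noted above, that the onboard LED is on GPIO 25; if the K10 board package defines LED_BUILTIN, that constant can be used instead):

    const int LED_PIN = 25;          // onboard LED pin used in this lesson; adjust if your board package differs

    void setup() {
      pinMode(LED_PIN, OUTPUT);      // configure the LED pin as an output
    }

    void loop() {
      digitalWrite(LED_PIN, HIGH);   // LED on
      delay(500);                    // wait 500 ms
      digitalWrite(LED_PIN, LOW);    // LED off
      delay(500);
    }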

Activity 2: Serial Communication

i. Modify the Blink sketch to print messages to the Serial Monitor (see the snippet after this activity).

ii. Open Serial Monitor and view output.
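
For Activity 2, the Blink sketch from Activity 1 can be extended with serial prints (a sketch assuming the board's USB serial port at 115200 baud):

    const int LED_PIN = 25;

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      Serial.begin(115200);          // open the serial port used by the Serial Monitor
    }

    void loop() {
      digitalWrite(LED_PIN, HIGH);
      Serial.println("LED ON");      // message appears in the Serial Monitor
      delay(500);
      digitalWrite(LED_PIN, LOW);
      Serial.println("LED OFF");
      delay(500);
    }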

4. Exploring Onboard Features (30 minutes)

a. Access a simple onboard sensor (e.g., light or temperature).

b. Write a sketch to read sensor data and print it to the Serial Monitor (a sketch outline follows after this list).

c. Demonstrate data visualization in Serial Plotter.
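
A sketch outline for this step, assuming the chosen onboard sensor can be read as an analog input; LIGHT_SENSOR_PIN below is a placeholder, so replace it with the pin or API documented in the K10 board package:

    const int LIGHT_SENSOR_PIN = 1;   // placeholder: use the pin/API documented for the K10 light sensor

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      int lightRaw = analogRead(LIGHT_SENSOR_PIN);  // raw ADC reading (0-4095 on the ESP32-S3)
      Serial.println(lightRaw);                     // one numeric value per line plots directly in Serial Plotter
      delay(200);
    }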

5. Troubleshooting & Debugging (15 minutes)

a. Common issues: USB driver problems, wrong port, library errors, upload failures.

b. Walkthrough solutions (driver install, reset, selecting correct board).

6. Wrap-up & Discussion (15 minutes)

Recap key takeaways:

a. Setting up Arduino IDE for UNIHIKER K10.

b. Writing, compiling, and uploading sketches.

c. Using onboard LED and sensors.

d. Discuss how this foundation leads to IoT/AI applications (cloud connectivity, dashboards, TinyML).

Assign homework/extension task: Control an external LED or sensor module and show output.

Expected Outcomes:

By the end of this lesson, students will:

a. Successfully configure Arduino IDE for UNIHIKER K10.

b. Upload and run basic Arduino sketches on the board.

c. Use Serial Monitor and Plotter for debugging and data visualization.

d. Gain confidence to extend experiments to IoT and AI projects.

Lesson 2: Smart Environment Monitor

Learning Objectives:

By the end of this project, students will be able to:

a. Explain the concept of environmental monitoring systems and their relevance in smart cities, IoT, and sustainability applications.

b. Identify and describe the onboard sensors of the UNIHIKER K10 (temperature, humidity, and light) and their practical uses.

c. Interface and integrate an external Grove air quality sensor with the UNIHIKER K10 using Arduino IDE.

d. Write Arduino sketches to read multiple sensor values, process data, and display results on Serial Monitor and/or the onboard screen.

e. Implement basic data visualization (e.g., graphs or indicators) using Arduino Serial Plotter or display libraries.

f. Apply sensor calibration and data validation techniques to ensure accurate readings.

g. Demonstrate data fusion by combining multiple sensor outputs into a meaningful environmental index.

h. Debug hardware and software issues during sensor interfacing and code development.

i. Lay the foundation for IoT integration by preparing sensor data for transmission to cloud platforms (ThingSpeak, MQTT, etc.) in future projects.

j. Develop problem-solving and teamwork skills by working through real-world challenges in environmental data collection.

Required Materials:

Core Hardware

a. UNIHIKER K10 board (ESP32-S3 based board with built-in sensors and screen)

b. USB-C data cable (for programming and power)

c. Computer (Windows/macOS/Linux with Arduino IDE installed)

Sensors

Onboard Sensors (built-in on K10)

a. Temperature & Humidity sensor

b. Light sensor

External Sensor

Grove – Air Quality Sensor (e.g., Grove Air Quality v1.3 or compatible)

Supporting Components

a. Grove connection cable (to interface the air quality sensor with K10)

b. Breadboard & jumper wires (optional, if using breakout sensors)

c. Resistors (if needed for additional experiments with LEDs/indicators)

Optional Add-ons

a. External Grove modules (like a Grove OLED or Grove buzzer for alerts/visualization)

b. MicroSD card (for data logging)

c. Power bank or Li-ion battery pack (for portable monitoring)

Software & Tools

a. Arduino IDE (with UNIHIKER K10 board package installed)

b. Drivers (if required) for USB communication

c. Serial Monitor / Serial Plotter (built into Arduino IDE for data visualization)

Optional: Excel / Python / ThingSpeak / Node-RED (for advanced data logging or IoT integration)

Lesson Flow:

Duration: ~2.5–3 hours

Target Audience: Electrical & Electronic Engineering Students

1. Introduction (15 minutes)

a. Explain the importance of environmental monitoring (air quality, temperature, humidity, light) in smart cities, health, and conservation.

b. Discuss the onboard sensors of UNIHIKER K10 and how they can be extended with Grove sensors.

c. Present the project goal: build a Smart Environment Monitor that collects, displays, and analyzes environmental data.

2. Hardware Setup (20 minutes)

a. Connect the UNIHIKER K10 to the PC via USB-C and identify the onboard sensors (temperature & humidity, light).

b. Connect the Grove Air Quality Sensor to the Grove port (I²C or analog, depending on module version).

c. Verify connections and power status.

3. Arduino IDE Configuration (15 minutes)

a. Open the Arduino IDE and select the UNIHIKER K10 board from Tools → Board.

b. Select the correct Port.

c. Confirm the Grove libraries (if required for the air quality sensor) are installed via Library Manager.

4. Coding and Implementation (60 minutes)

Activity 1: Read onboard sensors

a. Write a sketch to read temperature, humidity, and light sensor data.

b. Print the values on the Serial Monitor.

Activity 2: Add Grove Air Quality Sensor

a. Include the library (if applicable).

b. Read and print the air quality index (AQI) or raw value.

c. Display all sensor values together in the Serial Monitor (a combined sketch follows after this activity).
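
A combined sketch outline for Activities 1 and 2, assuming the Grove air quality module is read as an analog signal and using placeholder functions for the onboard sensors; replace the placeholders with the calls documented in the K10 board package or sensor libraries:

    const int AIR_QUALITY_PIN = 1;   // placeholder: analog pin of the Grove port used for the air quality sensor

    // Placeholder helpers: substitute the real board-package calls for the onboard sensors.
    float readTemperatureC() { return 25.0; }
    float readHumidityPct()  { return 50.0; }
    int   readLightRaw()     { return 0; }

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      float t   = readTemperatureC();
      float h   = readHumidityPct();
      int light = readLightRaw();
      int aq    = analogRead(AIR_QUALITY_PIN);   // raw air-quality reading

      // Comma-separated values on one line appear as separate traces in Serial Plotter.
      Serial.print(t);     Serial.print(",");
      Serial.print(h);     Serial.print(",");
      Serial.print(light); Serial.print(",");
      Serial.println(aq);

      delay(1000);
    }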

Activity 3: Data Visualization

a. Use Serial Plotter to show changes in temperature, humidity, light, and air quality over time.

b. Discuss how graphs help interpret environmental data.

5. Testing & Troubleshooting (15 minutes)

a. Adjust sensor placement for accurate readings (avoid covering sensors).

b. Check connections if no data is displayed.

c. Verify Grove library installation if compile errors occur.

6. Wrap-up & Discussion (20 minutes)

Recap what was achieved:

a. Integration of onboard and external sensors.

b. Reading and visualizing multiple data streams.

c. Discuss real-world applications: home monitoring, pollution tracking, smart agriculture.

d. Suggest extensions:

   i. Store data on an SD card.

   ii. Send data to ThingSpeak or an IoT dashboard.

   iii. Trigger alerts (buzzer/LED) when air quality is poor.

Expected Outcomes:

By the end of the lesson, students will:

a. Successfully integrate onboard and external sensors with UNIHIKER K10.

b. Read, process, and visualize multiple environmental parameters.

c. Understand how to troubleshoot hardware/software issues.

d. Be prepared to extend the project to IoT/cloud-based monitoring.

Lesson 3: Face Detection System using UNIHIKER K10

Learning Objectives:

By the end of this project, students will be able to:

a. Understand the basics of Edge Impulse and its role in deploying AI models to embedded devices.

b. Collect and prepare image datasets (faces and non-faces) using the UNIHIKER K10 camera for training.

c. Train and optimize a face detection model on Edge Impulse Studio.

d. Deploy the trained model to UNIHIKER K10 and integrate it with Arduino IDE.

e. Capture real-time camera input and run inference for detecting faces.

f. Visualize detection results on the onboard screen and/or Serial Monitor.

g. Apply troubleshooting techniques to optimize inference speed and accuracy on resource-limited hardware.

h. Explore practical applications of edge-based face detection in security, attendance, and smart systems.

Required Materials:

Core Hardware

a. UNIHIKER K10 board (with onboard camera and display)

b. USB-C data cable (for programming and power)

c. Computer (Windows/macOS/Linux with Arduino IDE + Edge Impulse tools installed)

Software & Tools

a. Arduino IDE (latest version)

b. Arduino Board Package for UNIHIKER K10 (installed via Boards Manager)

c. Edge Impulse Studio account (free, cloud-based)

d. Edge Impulse CLI (Command Line Interface) installed on PC (for data upload & deployment)

e. Drivers (if required for USB communication)

f. Serial Monitor / Plotter in Arduino IDE (for testing inference results)

Datasets

a. Face image samples (captured using K10 camera or uploaded to Edge Impulse Studio)

b. Non-face/background samples (for model training and validation)

Optional Add-ons

a. External microSD card (for local dataset storage or logging inference results)

b. Grove modules (e.g., buzzer, LED, relay for alert/response actions)

c. Portable power bank (for mobile/field testing of the device)

Lesson Flow:

Duration: ~3–3.5 hours

Target Audience: Electrical & Electronic Engineering Students

1. Introduction (15 minutes)

a. Explain what face detection is and how it differs from face recognition.

b. Introduce Edge Impulse as a platform for building and deploying ML models on edge devices.

c. State the project goal: Develop a face detection system on UNIHIKER K10 using its onboard camera, Edge Impulse, and Arduino IDE.

2. Hardware Setup (10 minutes)

a. Connect UNIHIKER K10 to PC using USB-C cable.

b. Verify the onboard camera and display are functional.

c. Open Arduino IDE and ensure K10 board package is installed.

3. Dataset Collection (30 minutes)

a. Create a new project in Edge Impulse Studio.
b. Collect face images using the UNIHIKER K10 camera or upload existing samples.

c. Collect non-face/background images for negative samples.

d. Label the dataset as “face” and “no face”.

4. Model Training on Edge Impulse (40 minutes)

a. Use Image Data Block and Transfer Learning (MobileNetV2 or similar) for training.

b. Configure input size (e.g., 96×96 grayscale or color).

c. Train the model and check accuracy, loss, and confusion matrix.

d. Optimize for Embedded/Edge devices.

e. Generate the Arduino Library from Edge Impulse.

5. Model Deployment (30 minutes)

a. Download the Edge Impulse Arduino Library and add it to Arduino IDE.

b. Write an Arduino sketch that (a skeleton follows after this list):

   i. Initializes the K10 camera.

   ii. Captures frames in real time.

   iii. Runs inference using the deployed model.

   iv. Displays the detection result (Face/No Face) on the K10 screen and Serial Monitor.
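
A skeleton of such a sketch, assuming the Edge Impulse Arduino library has been exported (the header name below is a placeholder for your project's generated <project>_inferencing.h) and that a camera-capture helper fills the input buffer; the actual K10 camera and display calls must come from the board package:

    #include <face_detection_inferencing.h>   // placeholder name for your generated Edge Impulse library

    static float frame_buf[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

    // Placeholder: fill 'frame_buf' with pixel data from the K10 camera, scaled to the model's input size.
    void capture_frame(float *buf, size_t len) { /* camera driver call goes here */ }

    // Callback through which the Edge Impulse SDK pulls input data.
    int get_signal_data(size_t offset, size_t length, float *out_ptr) {
      memcpy(out_ptr, frame_buf + offset, length * sizeof(float));
      return 0;
    }

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      capture_frame(frame_buf, EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT);

      signal_t signal;
      signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
      signal.get_data = &get_signal_data;

      ei_impulse_result_t result = { 0 };
      if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
        // Print each class probability; the "face" / "no face" labels depend on your dataset.
        for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
          Serial.print(result.classification[i].label);
          Serial.print(": ");
          Serial.println(result.classification[i].value);
        }
      }
      delay(200);
    }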

6. Testing & Visualization (20 minutes)

a. Test the model by showing different faces and backgrounds.

b. Observe real-time inference speed and accuracy.

c. Use Serial Monitor to log inference probabilities.

d. Experiment with lighting conditions and distances.

7. Troubleshooting & Optimization (15 minutes)

a. If inference is slow → try reducing the input image size or switching to grayscale.

b. If accuracy is low → collect more diverse training data.

c. Check serial output for debugging camera initialization issues.

8. Wrap-up & Discussion (20 minutes)

Recap what students achieved:

a. Dataset collection and labeling.

b. Model training and deployment with Edge Impulse.

c. Real-time face detection on UNIHIKER K10.

d. Discuss real-world applications: smart attendance, security access, robotics.

e. Suggest extensions:

   i. Add a buzzer or LED alert for detection.

   ii. Log face detection events to the cloud/ThingSpeak.

   iii. Explore face recognition (identifying specific people).

Expected Outcomes:

By the end of this lesson, students will:

a. Understand how to use Edge Impulse with Arduino IDE.

b. Successfully train and deploy a face detection model on UNIHIKER K10.

c. Capture, process, and classify real-time camera input.

d. Be prepared to extend the project into more advanced AI-powered IoT applications.

Lesson 4: Keyword Spotting Project using UNIHIKER K10

Learning Objectives:

By the end of this project, students will be able to:

a. Understand the concept of keyword spotting (KWS) and its applications in voice assistants, IoT devices, and human-machine interaction.

b. Record audio samples using the built-in microphone of the UNIHIKER K10 and store them on the SD card for dataset preparation.

c. Organize and upload audio datasets (keywords vs. background/noise) to Edge Impulse Studio for training.

d. Train and optimize a keyword spotting model on Edge Impulse using spectrogram and neural network classifiers.

e. Export and integrate the trained model into the Arduino IDE for deployment on UNIHIKER K10.

f. Perform real-time inference on the K10 by capturing audio through the microphone.

g. Display the detected keyword on the onboard screen in real time.

h. Apply debugging and optimization to improve accuracy and latency in real-world environments.

i. Explore practical applications of keyword spotting, such as smart home control, security systems, and hands-free device operation.

Required Materials:

Core Hardware

a. UNIHIKER K10 board (with built-in microphone, display, and SD card slot)

b. USB-C data cable (for power and programming)

c. Computer (Windows/macOS/Linux with Arduino IDE and Edge Impulse tools installed)

Storage

a. MicroSD card (at least 8 GB recommended, for storing audio samples)

b. SD card reader (if dataset needs to be transferred directly from card to computer)

Software & Tools

a. Arduino IDE (latest version)

b. Arduino board package for UNIHIKER K10

c. Edge Impulse Studio account (free, cloud-based)

d. Edge Impulse CLI (Command Line Interface) installed on PC (for dataset upload and deployment)

e. Serial Monitor / Serial Plotter (built into Arduino IDE, for debugging and visualization)

Dataset

a. Recorded keyword audio samples (captured with K10 microphone)

b. Background/noise audio samples (for better training and avoiding false detections)

Optional Add-ons

a. Portable power bank (for mobile/field deployment)

b. External Grove modules (like LED or buzzer to provide feedback on keyword detection)

c. Headphones / speakers (for testing audio quality of recordings, if needed)

Lesson Flow:

Duration: ~3–3.5 hours

Target Audience: Electrical & Electronic Engineering Students

1. Introduction (15 minutes)

a. Explain Keyword Spotting (KWS): detecting specific spoken words (e.g., “yes”, “no”, “start”).

b. Discuss real-world applications: voice assistants, smart devices, security access, robotics.

c. Present the project goal: Develop a KWS system that records audio using the UNIHIKER K10, trains a model on Edge Impulse, and displays recognized keywords on the onboard screen.

2. Hardware Setup (10 minutes)

a. Connect UNIHIKER K10 to the PC via USB-C cable.

b. Insert a microSD card into the board.

c. Verify that the built-in microphone is detected and working (test recording script).

3. Data Collection (30 minutes)

a. Use Arduino code (or Edge Impulse CLI) to record keyword audio samples via the K10 microphone.

b. Save recordings to the SD card.

c. Collect at least:

   i. Keyword samples (e.g., “start”, “stop”).

   ii. Background noise samples (ambient sounds, silence).

d. Transfer audio files to the computer via SD card reader if needed.

4. Dataset Upload & Labeling (20 minutes)

a. Log in to Edge Impulse Studio.

b. Create a new project and upload audio samples.

c. Label the data as Keyword vs Noise.

d. Split dataset into training and testing sets.

5. Model Training (40 minutes)

a. Add an Audio (MFCC or Spectrogram) processing block in Edge Impulse.

b. Configure the neural network classifier.

c. Train the model and evaluate performance (accuracy, confusion matrix).

d. Optimize for embedded devices (quantized model).

e. Export the model as an Arduino Library.

6. Deployment & Arduino Integration (30 minutes)

a. Import the Edge Impulse Arduino Library into Arduino IDE.

b. Write an Arduino sketch that (a skeleton follows after this section):

   i. Captures audio in real time from the K10 microphone.

   ii. Runs inference using the trained model.

   iii. Displays the detected keyword on the UNIHIKER K10 screen.

c. Optionally log results to Serial Monitor.
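
A skeleton for the inference loop, assuming the exported Edge Impulse library (placeholder header name) and a placeholder audio-capture helper; the real microphone and screen calls come from the K10 board package. A confidence threshold (0.8 here, an arbitrary starting value) reduces false detections and ties into the threshold tuning in section 7:

    #include <keyword_spotting_inferencing.h>   // placeholder name for your generated Edge Impulse library

    static float audio_buf[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

    // Placeholder: fill the buffer with one window of microphone samples (converted to float).
    void record_audio(float *buf, size_t len) { /* K10 microphone driver call goes here */ }

    int get_signal_data(size_t offset, size_t length, float *out_ptr) {
      memcpy(out_ptr, audio_buf + offset, length * sizeof(float));
      return 0;
    }

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      record_audio(audio_buf, EI_CLASSIFIER_RAW_SAMPLE_COUNT);

      signal_t signal;
      signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
      signal.get_data = &get_signal_data;

      ei_impulse_result_t result = { 0 };
      if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

      // Pick the label with the highest probability and apply a confidence threshold.
      size_t best = 0;
      for (size_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        if (result.classification[i].value > result.classification[best].value) best = i;
      }
      if (result.classification[best].value > 0.8f) {
        Serial.println(result.classification[best].label);  // also show the keyword on the K10 screen
      }
    }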

7. Testing & Validation (20 minutes)

a. Speak the trained keyword near the K10 microphone.

b. Observe detection results on the display.

c. Test with background noise and untrained words.

d. Adjust thresholds if false detections occur.

8. Troubleshooting & Optimization (15 minutes)

a. If model misclassifies → collect more training samples.

b. If inference is slow → reduce sampling rate or shorten input window size.

c. If noise interferes → add more noise samples to dataset.

9. Wrap-up & Discussion (20 minutes)

Recap achievements:

a. Recorded an audio dataset with the onboard microphone.

b. Trained and deployed a keyword spotting model with Edge Impulse.

c. Performed real-time keyword inference on UNIHIKER K10.

d. Discuss applications: voice-controlled robotics, IoT home automation, smart devices.

e. Suggest extensions:

   i. Add multiple keywords.

   ii. Trigger hardware actions (LED/buzzer) on detection.

   iii. Send detected keyword events to an IoT dashboard.

Expected Outcomes:

By the end of this lesson, students will:

a. Understand keyword spotting and its implementation on embedded devices.

b. Record, label, and train audio datasets with Edge Impulse.

c. Deploy and run inference on UNIHIKER K10 via Arduino IDE.

d. Display detected keywords on the onboard screen in real time.

e. Be prepared to extend the project into voice-controlled IoT systems.

Lesson 5: Hand Gesture Recognition using UNIHIKER K10 and Python

Learning Objectives:

By the end of this project, students will be able to:

a. Understand the fundamentals of computer vision and how it can be applied to real-time hand gesture recognition.

b. Set up the UNIHIKER K10 for Python-based computer vision applications using OpenCV.

c. Capture and process video frames from the onboard camera in real time.

d. Apply image preprocessing techniques (grayscale conversion, thresholding, background subtraction, contour detection) to isolate hand regions.

e. Design and implement a gesture classification pipeline using OpenCV methods such as contour analysis, convex hull, and defect detection.

f. Display the recognized gesture (e.g., Open Palm, Fist, Thumbs Up, Victory Sign) on the UNIHIKER K10 screen.

g. Debug and optimize the recognition algorithm for speed and accuracy on a resource-constrained edge device.

h. Explore practical applications of gesture recognition in human–computer interaction, robotics, and IoT control.

Required Materials:

Core Hardware

a. UNIHIKER K10 board (with onboard camera and display)

b. USB-C data cable (for programming and powering the board)

c. Computer (Windows/macOS/Linux) for coding and transferring scripts

Software & Libraries

a. Python 3.x (pre-installed or installed on K10)

b. OpenCV library for Python (opencv-python)

c. NumPy library (for image array processing)

d. UNIHIKER Python SDK / drivers (for camera and display integration)

e. Thonny / VS Code / Jupyter Notebook (optional, for writing and testing code)

Dataset & Testing Environment

a. Hand gestures for training & testing (e.g., Open Palm, Fist, Thumbs Up, Victory Sign)

b. Plain background or controlled lighting setup (helps improve accuracy)

Optional Add-ons

a. MicroSD card (to store captured gesture images if building a dataset)

b. External lighting (desk lamp or LED ring light for better gesture visibility)

c. Tripod or stand (to keep K10 camera stable during gesture recognition tests)

d. External Grove modules (e.g., buzzer, LED, relay → to trigger actions based on gestures)

e. Portable power bank (for standalone deployment outside lab/classroom)

Lesson Flow:

Duration: ~2–2.5 hours

Target Audience: Electrical & Electronic Engineering Students

1. Introduction (10 minutes)

a. Briefly explain what gesture recognition is and how it is applied in IoT, robotics, and human–computer interaction.

b. Discuss the role of computer vision, OpenCV, and Python in real-time gesture recognition.

c. Show a short demo video/example to inspire students.

2. Required Setup (15 minutes)

a. Connect the UNIHIKER K10 to the computer using USB-C.

b. Ensure Python 3.x is installed and running on the K10.

c. Install the necessary Python libraries:

   pip install opencv-python numpy

d. Verify access to the K10 camera and display through a simple test script, e.g., capture a frame and show it on screen (see the test script below).
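
A minimal camera test script for step (d), assuming the K10 camera is exposed to OpenCV as video device 0; on the K10 itself, the frame can instead be pushed to the onboard screen via the UNIHIKER Python SDK:

    import cv2

    cap = cv2.VideoCapture(0)            # assumption: the onboard camera is device 0
    if not cap.isOpened():
        raise RuntimeError("Camera not found - check the device index or drivers")

    ret, frame = cap.read()              # grab a single frame
    if ret:
        cv2.imshow("K10 camera test", frame)
        cv2.waitKey(0)                   # wait for a key press before closing

    cap.release()
    cv2.destroyAllWindows()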

3. Data Acquisition & Preprocessing (20 minutes)

a. Use the onboard camera to capture live video frames.

b. Apply image preprocessing techniques:

i. Convert to grayscale.

ii. Apply Gaussian blur to reduce noise.

iii. Use thresholding or background subtraction to isolate the hand.

c. Introduce a Region of Interest (ROI) for gesture capture (a preprocessing sketch follows after this list).
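
A preprocessing sketch covering these steps; the ROI coordinates and threshold settings are illustrative and should be tuned to your camera frame and lighting:

    import cv2

    cap = cv2.VideoCapture(0)

    while True:
        ret, frame = cap.read()
        if not ret:
            break

        roi = frame[50:300, 50:300]                       # region of interest where the hand is expected
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)      # i. grayscale conversion
        blur = cv2.GaussianBlur(gray, (7, 7), 0)          # ii. Gaussian blur to reduce noise
        _, mask = cv2.threshold(blur, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # iii. isolate the hand (Otsu threshold)

        cv2.imshow("mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):             # press 'q' to quit
            break

    cap.release()
    cv2.destroyAllWindows()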

4. Feature Extraction (20 minutes)

a. Detect hand contours from the ROI.

b. Apply convex hull & convexity defects to identify fingers.

c. Extract features such as the number of fingers shown or the gesture shape (see the sketch after this list).

d. Discuss limitations (lighting, background clutter, camera resolution).
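
A feature-extraction sketch continuing from the thresholded mask above; it assumes the `mask` and `roi` variables from the preprocessing step and returns the hand contour plus its convexity defects:

    import cv2

    def extract_features(mask, roi):
        """Return the hand contour and its convexity defects from a thresholded mask."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None, None

        hand = max(contours, key=cv2.contourArea)           # largest contour assumed to be the hand
        hull = cv2.convexHull(hand, returnPoints=False)     # convex hull as indices into the contour
        defects = cv2.convexityDefects(hand, hull) if len(hull) > 3 else None

        cv2.drawContours(roi, [hand], -1, (0, 255, 0), 2)   # draw the hand outline for visual feedback
        return hand, defects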

5. Gesture Classification (25 minutes)

a. Define a small set of gestures (e.g., Fist, Open Palm, Thumbs Up, Victory Sign).

b. Implement simple rule-based classification, e.g., count convexity defects and map the count to a gesture (a sketch follows after this list).

c. Optionally, extend to ML-based classification later.

d. Display recognized gesture as text on K10 screen.
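
A rule-based classifier sketch for step (b), using the `hand` contour and `defects` returned by the feature-extraction sketch above: defects with a sharp angle and sufficient depth correspond to gaps between extended fingers, so counting them gives a rough finger count that maps to a gesture label. The 90° angle and depth thresholds are illustrative starting points:

    import numpy as np

    def classify_gesture(defects, contour):
        """Map convexity defects to a coarse gesture label."""
        if defects is None:
            return "No hand"

        finger_gaps = 0
        for i in range(defects.shape[0]):
            s, e, f, depth = defects[i, 0]
            start, end, far = contour[s][0], contour[e][0], contour[f][0]

            # Angle at the defect point between the two neighbouring hull points.
            a = np.linalg.norm(end - start)
            b = np.linalg.norm(far - start)
            c = np.linalg.norm(end - far)
            cos_angle = np.clip((b**2 + c**2 - a**2) / (2 * b * c + 1e-6), -1.0, 1.0)
            angle = np.degrees(np.arccos(cos_angle))

            if angle < 90 and depth > 10000:   # sharp, deep defect -> gap between two fingers
                finger_gaps += 1

        fingers = finger_gaps + 1 if finger_gaps > 0 else 0
        if fingers == 0:
            return "Fist"
        if fingers >= 4:
            return "Open Palm"
        if fingers == 2:
            return "Victory Sign"
        return f"{fingers} finger(s)"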

6. Inference & Testing (15 minutes)

a. Run real-time inference on the K10.

b. Test different gestures in front of the camera.

c. Validate accuracy under various lighting/background conditions.

7. Applications & Extensions (10 minutes)

a. Show how recognized gestures can be used to control IoT devices (e.g., turning on a light with a “thumbs up”).

b. Suggest future improvements:

i. Use deep learning models for more gestures.

ii. Add gesture-controlled games on K10 screen.

c. Deploy with external Grove modules (e.g., LEDs, buzzer, motors).

8. Wrap-up & Reflection (10 minutes)

a. Review the main steps: camera input → preprocessing → feature extraction → gesture classification → output.

b. Encourage students to experiment with new gestures and improve recognition accuracy.

c. Open discussion: How can gesture recognition be used in daily life or industry?

Expected Outcomes:


By completing this project, students will be able to:

a. Set up and configure the UNIHIKER K10 for Python and OpenCV-based computer vision applications.

b. Capture and process real-time video from the onboard camera.

c. Apply image preprocessing techniques such as grayscale conversion, blurring, thresholding, and contour detection.

d. Extract features from hand images using contours, convex hulls, and convexity defects.

e. Recognize and classify basic hand gestures (e.g., fist, open palm, thumbs up, victory sign).

f. Display the detected gesture as text or symbols on the K10’s onboard screen.

g. Troubleshoot and optimize recognition for different lighting conditions and backgrounds.

h. Demonstrate a working prototype of gesture-based human–computer interaction on an embedded edge device.

i. Relate gesture recognition to real-world applications in robotics, IoT device control, and assistive technologies.

License
All Rights Reserved