
Train and deploy TFLite vision models on the Unihiker K10: teach your K10 to recognize anything
Using the Unihiker K10's onboard camera and ESP32-S3 processor, we can train and deploy vision models with Edge Impulse.
The K10 can be taught to recognize almost anything.
1. Download the K10 WebCamera library
2. Download the K10 EdgeImpulse Vision library
3. Download the Arduino IDE
Prefer Arduino IDE v1.8.19. In this project you will frequently fine-tune the model and swap libraries, which is cumbersome in Arduino IDE v2; in v1.8.19 you only need to modify the files inside the library directory.
4. Configure the K10 in the Arduino IDE
Download and install the Unihiker K10 Webcam library, then upload the following code to connect the K10 to your Wi-Fi. Once connected, you can read the K10's IP address in the serial monitor.
Be sure to set “Tools → USB CDC On Boot” to “Enabled”.
#include "unihiker_k10_webcam.h"
#include "unihiker_k10.h"
#include <WiFi.h>

UNIHIKER_K10 k10;
unihiker_k10_webcam webcam;

// Wi-Fi credentials: change these to your own network name and password
const char* SSID = "your_wifi_name";
const char* PASSWORD = "your_wifi_password";

// Connect to Wi-Fi
void wifi_init() {
    WiFi.begin(SSID, PASSWORD);
    Serial.print("Connecting to WiFi");
    while (WiFi.status() != WL_CONNECTED) {
        Serial.print(".");
        delay(500);
    }
    Serial.println("\nWiFi Connected!");
    Serial.print("ESP32 IP: ");
    Serial.println(WiFi.localIP());
}

void setup() {
    Serial.begin(115200);
    // Initialize the K10
    k10.begin();
    // Initialize the screen
    k10.initScreen();
    wifi_init();
    // Display the camera image on the K10 screen
    k10.initBgCamerImage();
}

void loop() {
    // Start the webcam server
    Serial.println("enableWebcam");
    webcam.enableWebcam();
}
After the K10's IP address appears in the Arduino IDE serial monitor, type that IP address into your web browser and click the capture button on the page to save a photo taken by the K10.
Then point the K10 camera at any object you want it to learn and click the button multiple times in the browser to capture images.
For a single object category, I recommend taking at least 30 to 50 photos from different angles to get more accurate results in the subsequent model training.
The photos will appear on your PC. I recommend storing photos with different labels in separate folders, for example (folder names here are just examples):

dataset/
    apple/
    banana/
    orange/
Visit Edge Impulse. Edge Impulse is an online model-training platform: you upload data, train in the browser, and export models, including TFLite models suitable for the ESP32.
1. Create a new project
2. Click “Add existing data”
3. Click “Upload data”
4. Choose the photos you collected and enter the label for that series of photos
5. Label the main object in each image under “Data acquisition → Labeling queue” and save.
1. After labeling all the data, click “Impulse design → Create impulse” to generate the impulse and save it.
The purpose of each processing block
2. Go to “Image” and click “Save parameters”.
3. Go to “Object detection” and click “Save & train” to train the model.

4. After training is complete, you can view the model's performance. If it is unsatisfactory, adjust the parameters and retrain.

5. Go to the “Retrain model” page and click “Train model”.

Go to the “Deployment” page, select the “Arduino library” option under “DEFAULT DEPLOYMENT”, choose the “TensorFlow Lite” option under “MODEL OPTIMIZATIONS”, then click “Build”.
Your browser will then automatically download the model library file.
Extract the trained model library into the “Arduino/libraries” folder used by Arduino IDE 1.8.19.
Replace the files “depthwise_conv.cpp” and “conv.cpp” in the directory “src/edge-impulse-sdk/tensorflow/lite/micro/kernels” inside this model library.
Download the EdgeImpulse_Vision library and replace the project name referenced in src/usr_include.h with your own EdgeImpulse project name.
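The edit to src/usr_include.h amounts to swapping one include. The exact contents of the file depend on your library version, so the sketch below is only illustrative; the header name is a placeholder, following Edge Impulse's convention of naming the Arduino export's top-level header after your project (e.g. <your-project>_inferencing.h):

```
// src/usr_include.h (illustrative; actual contents may differ by version)
//
// Replace the project header below with the one from your own
// Edge Impulse export. "My_K10_Project_inferencing.h" is a placeholder,
// not a real file name.
#include <My_K10_Project_inferencing.h>
```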
Compile and upload edgeImpulse_vision/example/edgeimpulse/edgeimpulse.ino.
You can then view the detected object information via the serial port.
The edgeImpulse_vision library pulls the vision model files in through usr_include.h.
Then upload the following code.
Because it involves the ESP32 core files and the TFLite model, compilation in the Arduino IDE will take a long time, approximately 20 minutes.
#include "unihiker_k10.h"
#include "edgeImpulse_vision.h"

UNIHIKER_K10 k10;
edgeImpulse_vision edgeImpulse;
sEdgeData carData;
uint8_t screen_dir = 2;

void setup() {
    Serial.begin(115200);
    k10.begin();
    k10.initScreen();
    k10.initBgCamerImage();
    k10.creatCanvas();
}

void loop() {
    // Run inference on the current camera frame
    edgeImpulse.request();
    if (edgeImpulse.isObjectDetection()) {
        // Fetch the detection result (label, confidence, bounding box)
        edgeImpulse.getdata(&carData);
        Serial.print(carData.label);
        Serial.print(", ");
        Serial.print(carData.value);
        Serial.print(", ");
        Serial.print("x:");
        Serial.print(carData.x);
        Serial.print(", ");
        Serial.print("y:");
        Serial.print(carData.y);
        Serial.print(", ");
        Serial.print("width:");
        Serial.print(carData.width);
        Serial.print(", ");
        Serial.print("height:");
        Serial.println(carData.height);
        // Draw the result on the K10 screen
        k10.canvas->canvasText(carData.label, 0, 0, 0x0000FF, k10.canvas->eCNAndENFont16, 50, true);
        k10.canvas->canvasText(carData.value, 0, 20, 0x0000FF, k10.canvas->eCNAndENFont16, 50, true);
        k10.canvas->canvasText(carData.x, 0, 40, 0x0000FF, k10.canvas->eCNAndENFont16, 50, true);
        k10.canvas->canvasText(carData.y, 0, 60, 0x0000FF, k10.canvas->eCNAndENFont16, 50, true);
        k10.canvas->updateCanvas();
    } else {
        Serial.println("No objects found");
    }
    delay(1000);
}
After the program is uploaded, reset your K10 and it will be able to recognize the corresponding object.
You can see the result in the serial monitor.
The above are some models I trained myself, which you can try out directly in the Arduino IDE.
The animal model can recognize horses, chickens and cats
The flower model can recognize daisies, roses and sunflowers
The fruit model can recognize oranges, bananas and apples
The TrafficSign model can recognize left turn, right turn, stop and sidewalk signs
