Object Detection-Based Self Checkout System

1.1 Project Introduction

This project aims to create an intelligent and fun product detection and automatic checkout system, inspired by self-service checkout counters in modern supermarkets and AI smart shopping carts. Through object detection technology, we transform a compact UNIHIKER M10 into an "AI Cashier" – it can recognize items in front of it in real time via the camera, automatically label their names, frame their positions, and quickly calculate the total price.

You can use it to experiment with home snack inventory checks, desktop product detection challenges, or even to simulate the interactive experience of an unmanned store. From data collection through model training to inference and application, the project shows how easily artificial intelligence can be put to work in daily life, making object detection technology truly hands-on, interactive, and fun!

1.2 Demo Video

2. Project Implementation Principle

This project implements product recognition and checkout functionality based on object detection technology. The entire implementation process covers the full workflow from data collection to model inference and application. Specifically, it first constructs a dataset through data collection and annotation using the Mind+ model training platform and trains an object detection model. After training, the model is validated and optimized, and exported in a format compatible with the UNIHIKER M10. Finally, the model is deployed on the UNIHIKER M10 for inference: the USB camera captures real-time images, and the model performs inference detection, outputs the product location, category, and corresponding price, and automatically calculates the total price.
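
As a concrete illustration of the final checkout step, the minimal sketch below shows how detected class labels could be mapped to prices and summed into a total. The product names and prices are placeholders, not values from the project files.

```python
# Minimal sketch of the checkout step: map detected class labels to prices
# and add them up. Product names and prices here are illustrative only.
PRICE_LIST = {
    "apple": 1.20,
    "cola": 2.50,
    "lipstick": 9.90,
    "eye drops": 6.80,
}

def checkout(detected_labels):
    """Return the total price for a list of detected class labels."""
    return sum(PRICE_LIST.get(label, 0.0) for label in detected_labels)

# Example: one frame in which the model detected two apples and a cola.
print(checkout(["apple", "apple", "cola"]))  # 4.90
```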

3. Hardware and Software Preparation


3.1 Equipment List

- Mind+ Software
- UNIHIKER M10
- USB Cable ×2
- USB Camera


Note: This project requires UNIHIKER system version v0.4.1 or above. The UNIHIKER Python environment version is 3.12, and the Mind+ programming software version is v2.0.


3.2 Hardware Connection

Make connections by referring to the diagram below.

3.3 Software Preparation

Download and install the Mind+ installation package (version 2.0 or above) from the official website, then double-click to open it after installation.

3.4 Environment and Extension Preparation

This project requires setting the Python environment of the UNIHIKER M10 to version 3.12.7. Please follow these steps for environment preparation:

Open the programming software Mind+, and select "Python Block Mode" in the "Coding" tab.


Next, add the required extensions in Mind+.
Enter the "Extensions" page, switch to "Board" tab, search for"M10", and click "UNIHIKER M10".


Click the "Back" button to return to the programming interface.


Expand the "Local Terminal", choose your device for connecting.

In the terminal, input "python --version" and press the Enter key to check the Python environment version of the UNIHIKER M10:

As shown in the figure, the Python environment of the UNIHIKER M10 is already the specified version 3.12.7.

If the version does not match, input "pyenv global 3.12.7" in the terminal to switch to this version.
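
If you prefer, you can also confirm the interpreter version from inside Python itself. The snippet below is a minimal check you can run after starting Python on the UNIHIKER M10:

```python
# Quick check of the Python interpreter version from within Python.
import platform

print(platform.python_version())  # expected on the UNIHIKER M10: 3.12.7
```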


Complete the extension preparation by following these steps:
(1) Connect the UNIHIKER M10 to the internet
(2) Load the extension


First, connect the UNIHIKER M10 to the network.
Enter "10.1.2.3" in the browser address bar to access the UNIHIKER M10's web configuration page and switch to "Network settings".

Choose the network name, enter the password, and click "Connect".


A "WiFi connection is successful" message will appear, and your UNIHIKER M10 is now connected to the network.


Second, you can proceed to load the model inference-related extension.

Go to the "Extensions" page and download the "Model Training and Inference" library.

Click the "Back" button to return to the programming interface.


Wait for the dependency libraries to be installed.

4. Project Making

We use the Mind+ model training platform to complete data collection, annotation, training, validation, and export.
First, open the Mind+ software, select "Model", and open "Object Detection" (Note: Only Mind+ 2.0 or higher versions have the model training function).


The initial interface is shown below:


The page is divided into three parts, from left to right: Data Collection, Model Training, and Validation & Deploy (the usage of each part will be detailed later).


4.1 Data Collection and Annotation

The first step in model training is preparing a dataset of products. The products used in this project include four categories: apples, cola, lipstick, and eye drops.

Follow these steps to collect and label the data:
Connect the USB camera to the computer via a USB cable, which facilitates data collection.


Click the "Webcam" button and choose the "USB Camera" to start data collection.

Left-click the "Hold to Record" button to begin capturing. Rotate the camera to record multiple angles of the products.

By default, one photo is captured per second, which can be adjusted in the settings if needed.

(FPS, or Frames Per Second, refers to the number of images captured per second)

After collecting all product images, close the camera capture function.


Next, label the data for each category:

Click "Data Annotation"and a pop-up window will appear: "Tag list is empty. Would you like to create tags?" Click "Confirm" to create a label.

Let's create a tag named "Apple" first:
Enter the label name → Select a label color → Confirm → Label created successfully.

To create new tags, click the "Create Tag" button and repeat the steps above to create the remaining three tags.

(This project uses four product tags: apples, cola, lipstick, and eye drops.)

Once all tags are created, start labeling objects in the images:

Select the tag (or use its corresponding Hotkey) → Use the mouse to draw a box around the target object (left-click at the start and end points).

If you need to adjust the bounding box's size or position, right-click within the box to modify it; to delete a box, select it and press the "Delete" key on your keyboard.

After tagging all objects in one image, click "Next" (or press the space bar) to continue tagging the next image.

Once all objects are labeled, click the "×" button to return to the main interface.

A prepared dataset in YOLO format is provided in the attachment (see the format sketch after the upload steps below). Follow these steps to upload it:

Click "Upload".


Select "Import Data Type" as "Annotated Data (YOLO Format)" and click "Select File to Upload".

Choose the file "Object Detection Dataset.zip" and click "Open" to upload.
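
For reference, a YOLO-format dataset pairs each image with a .txt label file containing one line per bounding box: a class index followed by the normalized box center and size. The snippet below parses one such line; the numbers are made up for illustration.

```python
# Hypothetical example of one line from a YOLO-format label file.
# Per-line format: <class_id> <x_center> <y_center> <width> <height>,
# with all coordinates normalized to the 0-1 range relative to the image size.
line = "0 0.512 0.437 0.280 0.310"

class_id, x_c, y_c, w, h = line.split()
print(int(class_id), float(x_c), float(y_c), float(w), float(h))
```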


Once the dataset is ready, you can proceed to the model training phase.


4.2 Model Training

To train the model, you need to adjust the training parameters based on the dataset characteristics.

Expand "Advanced Settings" to modify the parameters.

In this project, with a dataset size of approximately 130 images, the training parameters are set as follows:

Once the parameters are configured, simply click the "Train Model" button to start the training process (keep this page open during training to ensure no interruption).

During training, click "Learn More" to monitor the model training process.


Wait for training to complete, then click "Confirm".


4.3 Model Validation and Export

After the model training is completed, you can verify the model performance through model validation. Please follow these steps to use the camera for testing:
Enable the Input switch → Select Webcam → Select USB 2.0 Camera → Align the camera with the target → Observe the output

You can also test with files (which can be found in the attachment) using the following steps:
Enable the Input switch → Select File → Click to upload file → Choose the file and click "Open" → Observe the validation result.


Once the validation result meets expectations, you can export the model file.
Click "Export Model" to export the model as an ONNX format file and a YAML format configuration file (both files are required for model inference and application).

Select a location to save the model file.
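
As an optional sanity check, you can confirm that the exported .onnx file loads correctly and inspect its expected input shape. The sketch below assumes the onnxruntime package is installed and uses a hypothetical file name.

```python
# Optional sanity check for the exported model (hypothetical file name "best.onnx").
# Requires the onnxruntime package: pip install onnxruntime
import onnxruntime as ort

session = ort.InferenceSession("best.onnx")   # path to the exported model file
inp = session.get_inputs()[0]
print("input name:", inp.name)                # e.g. "images"
print("input shape:", inp.shape)              # e.g. [1, 3, 640, 640]
```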


It is recommended to save this model training project as a project file so the model can be optimized and adjusted later. The steps are:
Expand "Quick Experience", select "Save Project", choose the save location, and click "Save" to complete the saving operation. You can later reopen the saved project file via "Open Project" in "Quick Experience".


4.4 Model Inference and Application

Click "+", and select "Python Block Mode" in "Coding".

Connect the device and load the extension.
Go to the "Extensions" page and download the "Model Training and Inference" library.


Please upload the model files as follows.

Click "Resource Files" → Select "Upload File" → Select the model (.onnx) and its configuration file (.yaml) → Click "Open".

The model files have been uploaded successfully.


Then go back to Blocks.

Write the program as follows:

The analysis of the core code is as follows:
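
The block program and its explanation are shown in the screenshots above. Purely as a rough, hypothetical approximation of what the inference loop does, the plain-Python sketch below uses opencv-python and onnxruntime instead of the Mind+ "Model Training and Inference" blocks; the decoding of the raw network output is left as a placeholder because it depends on the exported model, and the file name and prices are illustrative.

```python
# Hypothetical sketch of the inference-and-checkout loop; this is NOT the
# Mind+ block program. Assumes opencv-python and onnxruntime are available
# and that the exported model is named "best.onnx" (illustrative name).
import cv2
import onnxruntime as ort

PRICE_LIST = {"apple": 1.20, "cola": 2.50, "lipstick": 9.90, "eye drops": 6.80}  # illustrative

session = ort.InferenceSession("best.onnx")

def detect(frame):
    """Placeholder: preprocess the frame, run the ONNX session, and decode the
    output into (label, (x1, y1, x2, y2)) pairs. The exact decoding depends on
    the exported model's output layout, so it is not spelled out here."""
    return []

cap = cv2.VideoCapture(0)  # USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    detections = detect(frame)
    total = sum(PRICE_LIST.get(label, 0.0) for label, _box in detections)
    for label, (x1, y1, x2, y2) in detections:
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.putText(frame, f"Total: {total:.2f}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    cv2.imshow("AI Cashier", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```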

Click "Run" to run the program.


The effect is as follows:


There is a complete program file for this project in the attachment.
Click “+” → Open Project → Open Local File.


Select the project in the attachment and click "Open".


Click "Run" to run the program.

License: All Rights Reserved