A Joint Project of Object Detection & HUSKYLENS 2 for Liangzhu Museum Pottery Classification
1.1 Project Introduction
This project, built on Mind+ V2.0 model training and HuskyLens 2, develops an intelligent explanation device that recognizes Liangzhu pottery types. A self-trained object detection model accurately distinguishes four categories (cooking vessels, food containers, storage vessels, and wine vessels), and the device then shows the corresponding pottery type and introduction on the screen. For example, in the pottery exhibition hall of Liangzhu Museum, visitors simply aim the device at the display cases to immediately see which exhibits are food containers and which are wine vessels, along with professional explanations of their functions.
The same device can also identify garden plants, instantly presenting their profiles, cultural background, care tips, and interesting facts. Interaction is as simple as pointing at the target; recognition is automatic. During operation it delivers rapid classification results together with vivid explanations, combining technological sophistication with everyday usability to make science popularization activities more engaging and practical.
1.2 Demo Video
2. Project Implementation Principle
This project is based on object detection in Mind+ V2.0 and HuskyLens 2, aiming to recognize and explain pottery categories from the Liangzhu period, covering the entire process from data preparation and model training to inference and application.
The specific steps are as follows:
1. Data collection: using the object detection mode of Mind+ V2.0 model training, collect image samples of different pottery types (e.g., food containers, cooking vessels, wine vessels) from the Pottery Gallery of Liangzhu Museum to build a dataset containing hundreds of samples.
2. Annotation: label the pottery in the collected images.
3. Model training: train the model on this dataset so that it can recognize the morphological features of different pottery and output the corresponding categories.
4. Model export: after training, export the model in a format suitable for HuskyLens 2.
5. Deployment: install the exported model on HuskyLens 2, which then detects and classifies pottery in the image in real time, outputs key information such as "food container", "cooking vessel", or "wine vessel", and triggers the display of the corresponding explanation text on the UNIHIKER M10 (e.g., "The utensil used for holding food, which is convenient for placing and taking food"), realizing intelligent explanation of museum exhibits.
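The final deployment step boils down to a lookup from the category the model outputs to the explanation text shown on the screen. The sketch below illustrates that mapping; only the "food container" description comes from this project, while the other descriptions and the fallback text are illustrative placeholders to be replaced with your own exhibit explanations.

```python
# Hypothetical mapping from recognized pottery categories to explanation
# text. The category names must match the labels used when annotating
# the dataset; all descriptions except "food container" are illustrative.
EXPLANATIONS = {
    "food container": "The utensil used for holding food, which is "
                      "convenient for placing and taking food.",
    "cooking vessel": "The utensil used for cooking food.",       # illustrative
    "wine vessel": "The utensil used for holding wine.",          # illustrative
    "storage vessel": "The utensil used for storing provisions.", # illustrative
}

def explain(category: str) -> str:
    """Return the explanation for a recognized pottery category."""
    return EXPLANATIONS.get(category, "Unknown pottery type")

print(explain("food container"))
```

Keeping this mapping in one dictionary makes it easy to extend the device to new exhibits (or to garden plants, as mentioned above) by adding entries rather than changing program logic.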

3. Hardware and Software Preparation
3.1 Equipment List

Note: the HuskyLens must be Version 2 (HUSKYLENS 2).
3.2 Software Preparation
Download and install the Mind+ installation package (V2.0 or above) from the official website, then double-click to open it.

3.3 Model Training Preparation
Please follow the tutorial via the link below to complete the data collection, annotation, and training of the object detection model.
https://community.dfrobot.com/makelog-318320.html
3.4 Model Download Preparation
Click the "Deploy to HuskyLens2" button.

Enter the application name and title (currently only English letters and numbers are supported).

Start the conversion. Once the file upload succeeds, cloud conversion begins automatically; please wait for it to finish.


Once the model conversion succeeds, click "Download to Local Computer" to save a ZIP file to your computer.

Note: this hardware currently supports object detection models only, and the conversion requires an internet connection.
3.5 Model Installation
Use a USB cable to connect your computer to HUSKYLENS 2. Once connected, your computer will detect a drive named HuskyLens.

Copy the generated model ZIP file to the \storage\installation_package directory on HuskyLens's internal storage.

Then swipe left or right on the HUSKYLENS 2 screen and tap to enter 'Model Installation'.

Select local installation; the screen below appears once installation is complete.

Observe the HUSKYLENS 2 screen: a new function named "liangzhu" appears, indicating that the self-trained model has been successfully imported into HUSKYLENS 2.

Finally, select the protocol type for HUSKYLENS 2.
Tap System Settings -> Protocol Type -> select I2C communication mode, then return to the main menu interface.

3.6 Hardware Connection
Make connections by referring to the diagram below.

4. Project Making
Open the programming software Mind+, choose "Coding" mode, then click "Upload" to create a new project.

Next, add the required extensions in Mind+, including UNIHIKER M10 and HUSKYLENS 2.
Enter the "Extensions" page, switch to "Board" tab, search for "M10", and click "UNIHIKER M10".



Load the "HUSKYLENS 2 AI Camera" extension in the same way.

Click the "Back" button to return to the programming interface.

Expand "Local Terminal" and choose your device to connect.


After the device is successfully connected, write the program as follows:
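The program's control flow can be summarized as: poll the camera, map the detected category to its explanation, and show the text on the screen. The sketch below mirrors that flow in plain Python. It is a minimal illustration only: `get_detected_category` style polling and `show_on_screen` are stand-ins for the HUSKYLENS 2 and UNIHIKER M10 extension blocks that the real Mind+ program provides, and the abbreviated category dictionary uses the labels assumed from annotation.

```python
from typing import Callable, Optional

# Abbreviated category-to-explanation mapping; only the "food container"
# text comes from this project, the rest is illustrative.
EXPLANATIONS = {
    "food container": "The utensil used for holding food, which is "
                      "convenient for placing and taking food.",
    "wine vessel": "The utensil used for holding wine.",  # illustrative
}

def show_on_screen(text: str) -> None:
    """Stand-in for drawing text on the UNIHIKER M10 display."""
    print(text)

def handle_detection(category: Optional[str]) -> Optional[str]:
    """Map a detected category to the explanation to display.

    Returns None when nothing (or an unknown object) is detected,
    so the screen is left unchanged.
    """
    if category is None:
        return None
    return EXPLANATIONS.get(category)

def main_loop(detect: Callable[[], Optional[str]], iterations: int = 1) -> None:
    """Poll the detector, look up the explanation, and display it."""
    for _ in range(iterations):
        text = handle_detection(detect())
        if text is not None:
            show_on_screen(text)

# Simulate one detection; on real hardware, detect() would query
# HUSKYLENS 2 over I2C via the extension blocks.
main_loop(lambda: "wine vessel")
```

Separating `handle_detection` from the display call keeps the lookup logic testable without the camera or screen attached.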

The analysis of the core code is as follows:



The attachment contains the model.zip and program files. You can implement this project by following the steps below.
First, download model.zip and install it on HUSKYLENS 2 as described in section 3.5. Then, in Mind+ V2.0, expand the "Project" menu, click "Load Project" to open the program file, and click "Go" to run it.



The effect is as follows:


5. Attachment









