Object Classification Project Using NanoDet on UNIHIKER

Introduction

This project implements real-time object classification on UNIHIKER using the NanoDet model. A USB camera connected to UNIHIKER captures video, and the program detects objects in each frame, draws bounding boxes around them, and labels them.

Project Objectives

Learn how to implement real-time object detection using the NCNN framework and the NanoDet model.

Hardware List
1 × UNIHIKER - IoT Python Single Board Computer with Touchscreen
1 × Type-C & Micro 2-in-1 USB Cable
1 × USB camera

Practical Process

1. Hardware Setup

Connect the camera to the USB port of UNIHIKER.

 

[Figure: connect the camera]

 

Connect the UNIHIKER board to the computer via USB cable.

 

[Figure: connect to the computer]

 

2. Software Development

Step 1: Open Mind+, and remotely connect to UNIHIKER.

 

[Figure: Mind+ remote connection to UNIHIKER]

 

Step 2: In "Files in UNIHIKER", locate the folder named "AI" and create a folder named "Object Classification Project Using NanoDet on UNIHIKER" inside it. Then import the dependency files for this lesson into the new folder.

 

[Figure: Mind+]

 

Step 3: Create a new project file named "main.py" in the same directory as the files above.

Sample Program:

CODE
import os

# Let the ncnn model zoo cache model files in the current directory.
os.environ["NCNN_HOME"] = os.getcwd()

import cv2
from ncnn.model_zoo import get_model
from utils import draw_detection_objects

# Open the default camera with a small frame size for real-time performance.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)

# Display the image in a full-screen window.
cv2.namedWindow('image', cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty('image', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

# Load the NanoDet model from the ncnn model zoo.
net = get_model(
    "nanodet",
    target_size=320,
    prob_threshold=0.4,
    nms_threshold=0.5,
    num_threads=4,
    use_gpu=False,
)

while cap.isOpened():
    success, image = cap.read()
    if not success:
        print("Ignoring empty camera frame.")
        # If loading a video, use 'break' instead of 'continue'.
        continue

    # Detect objects in the frame and draw labeled bounding boxes.
    objects = net(image)
    image = draw_detection_objects(image, net.class_names, objects)

    # Rotate to match the UNIHIKER screen orientation and display.
    image = cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE)
    cv2.imshow('image', image)
    # Press ESC to exit.
    if cv2.waitKey(5) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()

 

3. Run and Debug

Step 1: Execute the "1-Install_dependency.py" program file and wait for the automatic installation of dependencies.
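Before running the main program, you can check that the key packages were actually installed. This small check is only illustrative; the module names `cv2` and `ncnn` are assumed to be what the dependency script installs:

```python
import importlib.util

def missing_modules(modules):
    """Return the names of modules that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# cv2 (OpenCV) and ncnn are assumed to be installed by the dependency script.
missing = missing_modules(["cv2", "ncnn"])
if missing:
    print("Missing dependencies:", ", ".join(missing))
else:
    print("All dependencies are installed.")
```

If any module is reported missing, re-run the dependency installation before proceeding.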

Step 2: Running the Main Program

Run the "main.py" program. The screen shows the real-time image captured by the camera. Aim the camera at different objects, such as apples, mice, or phones, and each will be recognized in turn.

 

[Figure: recognizing objects]

 

Tips: During program execution, an internet connection may be required to automatically download or update the models.

 

4. Program Analysis

In the "main.py" file, we primarily use the OpenCV library to access the camera and capture the real-time video stream, then use the NanoDet model from the ncnn model zoo to detect objects in each frame. Detected objects are then marked on the image. The overall process is outlined as follows:

① Initialization: When the program starts, it opens the default camera device and sets the camera resolution and buffer size. The program then creates a full-screen window named 'image' for displaying the video. Finally, it loads the NanoDet model from the model zoo and sets the model's parameters.

② Main Loop: The program enters an infinite loop where, in each iteration:

Read a frame from the camera. If reading fails, the frame is ignored and the loop continues.

Use the NanoDet model to implement object detection on the captured frame to get the list of detected targets.

Mark the detected objects on the frame by drawing each object's bounding rectangle, class name, and confidence score.

Rotate the annotated frame 90 degrees counterclockwise and display it in the window.

③ User Interaction: The program checks keyboard input; if 'ESC' is pressed, the main loop exits.

④ Termination: When the main loop ends, the program releases the camera device and exits.

Knowledge Corner - The NanoDet Model in the ncnn Library

NCNN is an efficient neural network inference (forward computation) framework optimized for mobile devices, open-sourced by Tencent. It runs on mobile platforms such as Android and iOS and supports various neural network models, including NanoDet.

NanoDet is an efficient and lightweight object detection model. Its design goal is to maintain high accuracy while minimizing the model size and computational load, enabling real-time object detection on resource-constrained devices such as mobile and embedded devices.

 

Key features of NanoDet include:

1. Lightweight: NanoDet has a very small model size, approximately 1MB, allowing it to run on resource-constrained devices.

2. Efficiency: NanoDet has a very low computational cost, approximately 1.5 GFLOPs, enabling real-time operation on mobile devices.

3. High Accuracy: Despite its small model size and computational load, NanoDet achieves high detection accuracy comparable to some larger, more complex models.

NanoDet incorporates advanced neural network design techniques such as depthwise separable convolution and attention modules to achieve efficient and accurate object detection. Its network architecture mainly consists of a feature extraction network and an object detection head. The feature extraction network extracts features from the input image, and the object detection head performs detection based on the extracted features.
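Depthwise separable convolution is one of the main reasons for NanoDet's small footprint. A quick parameter-count comparison (standard 3×3 convolution versus depthwise 3×3 plus pointwise 1×1, biases ignored; the channel sizes here are arbitrary examples, not NanoDet's actual layer sizes) shows the saving:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)           # 73728 parameters
separable = dw_separable_params(64, 128, 3)  # 8768 parameters
print(f"standard: {standard}, separable: {separable}, "
      f"ratio: {standard / separable:.1f}x")  # ratio: 8.4x
```

For these sizes, the separable version uses roughly an eighth of the parameters, which is why such layers dominate lightweight detector backbones.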

Attachment: Project.zip (3.91 MB)

Feel free to join our UNIHIKER Discord community! You can engage in more discussions and share your insights!

License
All Rights Reserved