OpenCV Square Detection based on UNIHIKER

Introduction

This project connects an external USB camera to the UNIHIKER, uses it to capture a real-time video stream, detects square objects in each frame, and frames them on the screen.

 

Project Objectives

Learn how to use OpenCV for image processing and shape detection.

HARDWARE LIST
1 UNIHIKER - IoT Python Single Board Computer with Touchscreen
1 Type-C&Micro 2-in-1 USB Cable
1 USB camera

Software

Mind+ Programming Software

 

Practical Process

1. Hardware Setup

Connect the camera to the USB port of UNIHIKER.

 

 

Connect the UNIHIKER board to the computer via USB cable.

 

 

2. Software Development

Step 1: Open Mind+, and remotely connect to UNIHIKER.

 

 

Step 2: In "Files in UNIHIKER", find the folder named "AI" and create a folder named "OpenCV Square Detection based on UNIHIKER" inside it. Then import the dependency files for this lesson.

 

 

Step 3: Create a new project file in the same directory as the files above and name it "main.py".

Sample Program (the video module imported below is a camera-capture helper that comes with the dependency files for this lesson):

CODE
#!/usr/bin/env python
'''
Simple "Square Detector" program.

Captures frames from the USB camera and tries to find squares in each frame.
'''
from __future__ import print_function
import sys
PY3 = sys.version_info[0] == 3
if PY3:
    xrange = range
import numpy as np
import cv2 as cv
import video
def angle_cos(p0, p1, p2):
    # Cosine of the angle at vertex p1 formed by points p0 and p2
    d1, d2 = (p0-p1).astype('float'), (p2-p1).astype('float')
    return abs( np.dot(d1, d2) / np.sqrt( np.dot(d1, d1)*np.dot(d2, d2) ) )
 
def find_squares(img):
    # Smooth the image to suppress noise before edge detection / thresholding
    img = cv.GaussianBlur(img, (5, 5), 0)
    squares = []
    for gray in cv.split(img):
        for thrs in xrange(0, 255, 26):
            if thrs == 0:
                # At threshold 0, use Canny edge detection followed by dilation
                bin = cv.Canny(gray, 0, 50, apertureSize=5)
                bin = cv.dilate(bin, None)
            else:
                # Otherwise, use simple binary thresholding
                _retval, bin = cv.threshold(gray, thrs, 255, cv.THRESH_BINARY)
            contours, _hierarchy = cv.findContours(bin, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
            for cnt in contours:
                cnt_len = cv.arcLength(cnt, True)
                cnt = cv.approxPolyDP(cnt, 0.02*cnt_len, True)
                # Keep contours with 4 vertices, area > 1000 and a convex shape
                if len(cnt) == 4 and cv.contourArea(cnt) > 1000 and cv.isContourConvex(cnt):
                    cnt = cnt.reshape(-1, 2)
                    max_cos = np.max([angle_cos( cnt[i], cnt[(i+1) % 4], cnt[(i+2) % 4] ) for i in xrange(4)])
                    # All corner angles must be close to 90 degrees (cosine near 0)
                    if max_cos < 0.1:
                        squares.append(cnt)
    return squares
 
def main():
    # Open the USB camera through the video helper module
    cap = video.create_capture(0)
    # Display the result full screen on the UNIHIKER screen
    cv.namedWindow('squares', cv.WND_PROP_FULLSCREEN)
    cv.setWindowProperty('squares', cv.WND_PROP_FULLSCREEN, cv.WINDOW_FULLSCREEN)
    while True:
        _flag, img = cap.read()
        squares = find_squares(img)
        # Draw the detected squares on the frame in green
        cv.drawContours(img, squares, -1, (0, 255, 0), 3)
        cv.imshow('squares', img)
        ch = cv.waitKey(1)
        if ch == 27:  # ESC key exits
            break

if __name__ == '__main__':
    print(__doc__)
    main()
    cv.destroyAllWindows()

 

3. Run and Debug

Step 1: Run the main program

Run the "main.py" program, you can see that the initial screen shows the real-time image captured by the camera. Aiming the camera image at the hand, you can see that the object is detected and boxed out.

 

 

4. Program Analysis

In the above "main.py" file, we mainly use OpenCV library to call the camera, get the real-time video stream, and realize the detection of square objects with the help of traditional computer vision algorithms and techniques. The overall process is as follows.

1. Capture the video stream from the camera: The program first turns on the camera to get the real-time video stream.

2. Processing each frame in the video stream: The program reads the video stream frame by frame and processes each frame. The first step in the processing is to apply Gaussian blur to the image and then split it into its separate color channels.

3. Find square objects in the image: For each color channel, the program finds edges with the Canny algorithm or by threshold segmentation, and then locates objects by finding contours. For each contour found, the program calculates its perimeter and performs polygon approximation, keeping contours with 4 sides, an area greater than 1000, and a convex shape as candidate square objects.

4. Determine whether the found contour is a square: For each candidate, the program calculates the maximum cosine of the angles at its corners; if the maximum cosine is less than 0.1 (every corner is close to 90 degrees), the contour is considered a square object (see the sketch after this list).

5. Draw and display the result: Mark the found square objects on the original image and display the processed image in the window.

6. User Interactions: If the user presses the ESC key, exit the loop and end the program.

7. Loop: The program repeats the above steps until the user chooses to exit.
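As a minimal standalone sketch of steps 3 and 4 (assuming only OpenCV and NumPy are installed; the rectangle coordinates are arbitrary), the snippet below draws a filled rectangle on a blank image and runs the same contour approximation and corner-angle test used in "main.py":

CODE
import numpy as np
import cv2 as cv

# Synthetic test image: a white rectangle on a black background
img = np.zeros((240, 320), dtype=np.uint8)
cv.rectangle(img, (60, 50), (260, 190), 255, -1)

contours, _hierarchy = cv.findContours(img, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # Approximate the contour with a polygon (2% of the perimeter as tolerance)
    approx = cv.approxPolyDP(cnt, 0.02 * cv.arcLength(cnt, True), True)
    if len(approx) == 4 and cv.contourArea(approx) > 1000 and cv.isContourConvex(approx):
        pts = approx.reshape(-1, 2).astype('float')
        # Cosine of the angle at each corner; a value near 0 means close to 90 degrees
        cosines = []
        for i in range(4):
            d1 = pts[i] - pts[(i + 1) % 4]
            d2 = pts[(i + 2) % 4] - pts[(i + 1) % 4]
            cosines.append(abs(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))))
        if max(cosines) < 0.1:
            print("square found:", pts.astype(int).tolist())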

 

 

Knowledge analysis

1. Gaussian Blur in image processing

Gaussian blur, also known as Gaussian smoothing or Gaussian filtering, is a technique widely used in image processing. Its main function is to reduce the details and noise in an image and make it smoother.

The principle of Gaussian blur is based on the Gaussian function in mathematics, which is a normal distribution curve. In image processing, Gaussian blur is achieved by replacing the color value of each pixel with a weighted average of the color values of its neighboring pixels. The weight of this weighted average is calculated from the Gaussian function: pixels closer to the current pixel have a larger weight, and pixels farther away have a smaller weight.
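To see these weights concretely, OpenCV can return the 1-D Gaussian kernel it uses; a brief sketch (the printed values depend on the OpenCV version):

CODE
import cv2

# 1-D Gaussian weights for a kernel of size 5 (sigma derived from the kernel size);
# the center coefficient is largest and the weights fall off with distance.
# The 2-D kernel used for blurring is the outer product of this vector with itself.
kernel = cv2.getGaussianKernel(5, 0)
print(kernel.ravel())  # e.g. [0.0625 0.25 0.375 0.25 0.0625]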

In OpenCV, you can use the cv2.GaussianBlur() function to perform Gaussian blur on an image. This function requires the specification of the width and height of the Gaussian kernel (which must be odd), as well as the standard deviation in the X and Y directions.
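For example, a minimal sketch (assuming an image file named "test.jpg" exists in the working directory):

CODE
import cv2

img = cv2.imread("test.jpg")                # hypothetical input image
# 5x5 Gaussian kernel; a standard deviation of 0 lets OpenCV derive it from the kernel size
blurred = cv2.GaussianBlur(img, (5, 5), 0)
cv2.imwrite("test_blurred.jpg", blurred)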

Gaussian blur is a preprocessing step in many image processing algorithms, such as edge detection, feature extraction, etc. With Gaussian blurring, small textures and noise in an image can be eliminated, allowing these algorithms to find true edges and features more accurately.

 

2. Canny Edge Detection

Edge detection is an important technique in computer vision and image processing that aims to locate significant changes in object boundaries in an image. Edge detection algorithms typically find edges by detecting rapid changes in the image intensity function.

The importance of edge detection lies in the fact that edges are usually the boundaries of objects and can provide information about an object's shape and location. In addition, because edges occupy only a small part of the image yet carry most of its information, edge detection is also often used to reduce the amount of data that needs to be processed and to eliminate unnecessary information, making image processing tasks easier.

Edge detection usually consists of three steps: filtering, enhancement, and detection. Filtering removes noise from the image; enhancement highlights edges by computing the gradient magnitude and direction of the image; and detection identifies true edges through steps such as non-maximum suppression and hysteresis thresholding.

Common edge detection algorithms include the Sobel operator, Prewitt operator, Laplacian operator, and Canny edge detection. Among them, the Canny algorithm is one of the most well-known and widely used; it provides good edge localization and good suppression of noise.
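As a small illustration of these steps (a sketch assuming the same hypothetical "test.jpg"), Canny edge detection in OpenCV can be applied like this:

CODE
import cv2

img = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
img = cv2.GaussianBlur(img, (5, 5), 0)              # filtering: suppress noise first
# Pixels with gradient above 150 are strong edges; pixels between 50 and 150 are
# kept only if connected to a strong edge (hysteresis thresholding)
edges = cv2.Canny(img, 50, 150)
cv2.imwrite("test_edges.jpg", edges)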

 

 

Dependency files 

project.zip (8KB)

Feel free to join our UNIHIKER Discord community! You can engage in more discussions and share your insights!

License
All Rights Reserved