Unihiker: Where's My Spot? A DIY Parking Slot Detector!


Ever wished you could harness the power of computer vision to find out which parking slots are free? Well, here's your chance to be the tech wizard of your neighborhood! With a Unihiker, a USB camera, and some Python magic, we'll create a parking slot detector that tells you whether a spot is free or occupied. Let's get started!

HARDWARE LIST
1 Unihiker
1 USB camera

Optional but handy:

1. Keyboard - For calibration and configuration (unless you want to juggle keys on the Unihiker itself).

2. Printer - To print a parking lot and test your detector without sneaking into real parking lots.

3. Small toy cars - To simulate vehicles.

STEP 1
Project preparation

First, we create the project with all the necessary files and folders on a local computer (Windows, Linux, macOS, etc.). Later, we load everything onto the Unihiker. This makes development a little easier, and IDEs like PyCharm, Visual Studio, etc. can be used for coding.

 

Note: At the end of this tutorial you will find an archive with all the necessary files.

 

Here is the preview of the final structure for this project:

 

CODE
# list all files (optional)
$ tree ParkingSpaceOccupancy/
ParkingSpaceOccupancy/
├── calibration.py
├── main.py
├── reference_data
│   ├── reference.jpg
│   └── roi.json
└── test.py

Create the project folder: ParkingSpaceOccupancy and within it another folder called reference_data.
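
If you prefer to create the structure from code instead of by hand, here is a tiny optional Python sketch (run it from the directory that should contain the project; it is not part of the project files):

CODE
# optional: create the project skeleton with Python
from os import makedirs

makedirs("ParkingSpaceOccupancy/reference_data", exist_ok=True)
print("[INFO] Project folders created.")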

STEP 2
Calibration Script

The script calibration.py will help you define the Regions of Interest (ROIs) and create a clean reference image of your parking lot. The ROIs are stored in JSON format in the roi.json file. Save the Python file inside the project folder and the JSON file in the reference_data folder. The reference image will be created and stored as reference.jpg by the Python script.

 

Note: The camera position may be slightly different for you! Either adjust the position later to match the ROIs or change the values in the JSON file.
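
If you edit the coordinates by hand, a quick sanity check can save you some debugging. The following optional sketch is not part of the project files; it assumes four-corner polygons and a 640x480 camera frame, so adjust the values if your setup differs:

CODE
# optional sanity check for reference_data/roi.json
# (run from the project folder; assumes 4-corner polygons and 640x480 frames)
from json import load

FRAME_WIDTH, FRAME_HEIGHT = 640, 480

with open("reference_data/roi.json", "r") as file:
    data = load(file)

for name, slots in data.items():
    for index, polygon in enumerate(slots, start=1):
        if len(polygon) != 4:
            print(f"[WARNING] {name} slot {index}: {len(polygon)} corners, expected 4.")
        for x, y in polygon:
            if not (0 <= x <= FRAME_WIDTH and 0 <= y <= FRAME_HEIGHT):
                print(f"[WARNING] {name} slot {index}: point ({x}, {y}) lies outside the frame.")

print("[INFO] ROI check finished.")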

 

Here is the code for both files:

 

CODE
{
    "slots_1": [
        [[110, 50], [200, 50], [180, 185], [75, 185]],
        [[215, 50], [310, 50], [310, 185], [200, 185]],
        [[325, 50], [420, 50], [440, 185], [325, 185]],
        [[440, 50], [530, 50], [560, 180], [460, 180]]
    ],
    "slots_2": [
        [[70, 200], [180, 200], [160, 380], [35, 380]],
        [[200, 200], [310, 200], [310, 380], [180, 380]],
        [[330, 200], [440, 200], [460, 380], [330, 380]],
        [[460, 200], [560, 200], [600, 380], [480, 380]]
    ]
}
CODE
from os.path import dirname, abspath, join
from json import load
import numpy as np
import cv2


WINDOW_NAME: str = "test"
DISPLAY_WIDTH: int = 240
DISPLAY_HEIGHT: int = 320
ROI_COLOR: tuple = (0, 0, 255)


def save_reference_image(image, directory: str) -> None:
    reference_image = join(directory, "reference_data/reference.jpg")

    cv2.imwrite(reference_image, image)

    print(f"[INFO] Reference image {reference_image} saved.")


if __name__ == '__main__':
    current_file_path = dirname(abspath(__file__))
    json_file_path = join(current_file_path, "reference_data/roi.json")

    with open(json_file_path, 'r') as file:
        data = load(file)

    polygon_set_1 = [np.array(coords, dtype=np.int32) for coords in data["slots_1"]]
    polygon_set_2 = [np.array(coords, dtype=np.int32) for coords in data["slots_2"]]

    camera = cv2.VideoCapture(0)
    camera.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    camera.set(cv2.CAP_PROP_FPS, 15)

    cv2.namedWindow(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    if not camera.isOpened():
        print('[ERROR] Could not open camera.')
        exit(1)
    else:
        print('[INFO] Press key s to save the reference image.')
        print('[INFO] Press key q or ESC to quit.')

    while True:
        ret, frame = camera.read()

        if not ret:
            print('[ERROR] No frame received from camera.')
            break

        if frame is None or frame.size == 0:
            print("[WARNING] Empty frame. Skipping...")
            continue

        org_frame = frame.copy()

        for poly in polygon_set_1:
            cv2.fillPoly(frame, [poly], ROI_COLOR)

        for poly in polygon_set_2:
            cv2.fillPoly(frame, [poly], ROI_COLOR)

        cv2.imshow(WINDOW_NAME, frame)

        key = cv2.waitKey(1) & 0xFF
        if key == ord('q') or key == 27:
            break

        if key == ord('s'):
            save_reference_image(image=org_frame, directory=current_file_path)

    camera.release()
    cv2.destroyAllWindows()
STEP 3
Testing Script

This script test.py (inside the project root) compares the reference image with the live feed and detects changes (like cars taking up space). Free slots are highlighted in green and occupied slots in red.

The constants THRESHOLD_DIFFERENCE and CHANGED_PIXELS_THRESHOLD will help you with fine-tuning later (see the sketch after this list):

- THRESHOLD_DIFFERENCE: 50 (adjust for lighting conditions).

- CHANGED_PIXELS_THRESHOLD: 300 (tweak it to control how many changed pixels mark a slot as occupied).
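
To get a feel for what these two numbers do, here is a tiny standalone sketch of the same comparison test.py performs, using two synthetic grayscale "images" instead of camera frames (the array values are made up purely for illustration):

CODE
# minimal sketch of the occupancy test (synthetic data, no camera needed)
import cv2
import numpy as np

THRESHOLD_DIFFERENCE = 50       # minimum per-pixel brightness change that counts at all
CHANGED_PIXELS_THRESHOLD = 300  # number of changed pixels that marks a slot as occupied

reference = np.full((100, 100), 120, dtype=np.uint8)  # "empty slot"
current = reference.copy()
current[20:60, 20:60] = 200                           # a bright "car" appears

diff = cv2.absdiff(reference, current)                # per-pixel difference
_, diff_thresh = cv2.threshold(diff, THRESHOLD_DIFFERENCE, 255, cv2.THRESH_BINARY)
changed_pixels = cv2.countNonZero(diff_thresh)        # 40 * 40 = 1600 here

print(f"[INFO] {changed_pixels} changed pixels ->",
      "occupied" if changed_pixels > CHANGED_PIXELS_THRESHOLD else "free")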

 

Important: This script requires the reference image and ROIs!

CODE
from json import load
from os.path import dirname, abspath, join
from sys import exit
from typing import List, Tuple, Dict
import cv2
import numpy as np


WINDOW_NAME: str = "Parking Space Occupancy"
DISPLAY_WIDTH: int = 240
DISPLAY_HEIGHT: int = 320
REFERENCE_IMAGE_PATH: str = "reference_data/reference.jpg"
JSON_FILE_PATH: str = "reference_data/roi.json"

THRESHOLD_DIFFERENCE: int = 50
CHANGED_PIXELS_THRESHOLD: int = 300


def load_rois(file_path: str) -> Tuple[List[np.ndarray], List[np.ndarray]]:
    """
    Loads regions of interest (ROIs) from a JSON file. The function parses the given
    JSON file and extracts coordinate arrays, which are then converted to numpy
    arrays for further use. The ROIs are organized into two separate lists based
    on the file's structure, enabling structured analysis or further processing
    of the data.

    :param file_path: The file path to a JSON file containing ROI data.
    :type file_path: str

    :return: A tuple containing two lists of ROIs represented as numpy arrays of 32-bit integers.
    :rtype: Tuple[List[numpy.ndarray], List[numpy.ndarray]]
    """
    try:
        with open(file_path, 'r') as file:
            data: Dict = load(file)

        set_1: List[np.ndarray] = [np.array(coords, dtype=np.int32) for coords in data["slots_1"]]
        set_2: List[np.ndarray] = [np.array(coords, dtype=np.int32) for coords in data["slots_2"]]

        return set_1, set_2
    except FileNotFoundError:
        print(f"[ERROR] Could not find JSON file {file_path}.")
        exit(1)


def analyze_parking_spots(reference: np.ndarray, current_frame: np.ndarray,
                          rois: List[np.ndarray], display_frame: np.ndarray) -> int:
    """
    Analyzes parking spots within a provided frame to detect available spaces. Takes a reference frame,
    the current grayscale frame, a list of regions of interest (ROIs) and the color display frame.
    Calculates pixel-wise differences between the reference and current ROI to identify occupied or
    free parking spots, draws a colored overlay onto the display frame and returns the number of free spots.

    :param reference: The reference frame/image for comparison where no vehicles are present.
    :type reference: np.ndarray
    :param current_frame: The current frame/image under analysis for detecting parked vehicles.
    :type current_frame: np.ndarray
    :param rois: List of regions of interest polygons defining individual parking spots.
    :type rois: List[np.ndarray]
    :param display_frame: The color frame shown on screen; the free/occupied overlay is drawn onto it in place.
    :type display_frame: np.ndarray

    :return: The number of free parking spots determined from the analysis.
    :rtype: int
    """
    overlay = display_frame.copy()
    alpha = 0.5
    free_spots = 0

    for roi in rois:
        mask = np.zeros_like(reference, dtype=np.uint8)
        cv2.fillPoly(mask, [roi], 255)

        ref_roi = cv2.bitwise_and(reference, reference, mask=mask)
        current_roi = cv2.bitwise_and(current_frame, current_frame, mask=mask)

        diff = cv2.absdiff(ref_roi, current_roi)
        _, diff_thresh = cv2.threshold(diff, THRESHOLD_DIFFERENCE, 255, cv2.THRESH_BINARY)
        changed_pixels = cv2.countNonZero(diff_thresh)

        if changed_pixels > CHANGED_PIXELS_THRESHOLD:
            status_color = (0, 0, 255)
        else:
            status_color = (0, 255, 0)
            free_spots += 1

        cv2.fillPoly(overlay, [roi], color=status_color)

    cv2.addWeighted(overlay, alpha, display_frame, 1 - alpha, 0, display_frame)

    return free_spots


if __name__ == '__main__':
    current_file_path = dirname(abspath(__file__))
    reference_image_path = join(current_file_path, REFERENCE_IMAGE_PATH)
    ref_img = cv2.imread(reference_image_path, cv2.IMREAD_GRAYSCALE)

    if ref_img is None:
        print(f"[ERROR] Could not read reference image {reference_image_path}.")
        exit(1)

    json_file_path = join(current_file_path, JSON_FILE_PATH)
    roi_set_1, roi_set_2 = load_rois(json_file_path)

    camera = cv2.VideoCapture(0)
    camera.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    camera.set(cv2.CAP_PROP_FPS, 15)

    cv2.namedWindow(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    if not camera.isOpened():
        print('[ERROR] Could not open camera.')
        exit(1)

    while True:
        ret, frame = camera.read()

        if not ret:
            print('[ERROR] No frame received from camera.')
            break

        if frame is None or frame.size == 0:
            print("[WARNING] Empty frame. Skipping...")
            continue

        current_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        free_spots_set_1 = analyze_parking_spots(reference=ref_img, current_frame=current_img,
                                                 rois=roi_set_1, display_frame=frame)
        free_spots_set_2 = analyze_parking_spots(reference=ref_img, current_frame=current_img,
                                                 rois=roi_set_2, display_frame=frame)

        total_free_spots = free_spots_set_1 + free_spots_set_2
        print(f"[INFO] {total_free_spots} free spots detected.")

        cv2.imshow(WINDOW_NAME, frame)

        key = cv2.waitKey(1) & 0xFF
        if key == ord('q') or key == 27:
            break

    camera.release()
    cv2.destroyAllWindows()
STEP 4
Final Script with Tkinter UI

The script main.py (inside the root folder of the project) provides a user-friendly interface on the Unihiker device. It displays the parking status on the Unihiker screen, making it visually appealing and practical.
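
The UI refresh relies on a simple Tkinter pattern: a function updates the label and then re-schedules itself with window.after(). Stripped of the camera logic, a bare-bones sketch of that pattern (not part of the project files) looks like this:

CODE
# bare-bones sketch of the Tkinter polling pattern used in main.py
from tkinter import Tk, Label

counter: int = 0


def update_label(label: Label, window: Tk) -> None:
    global counter
    counter += 1
    label.config(text=f"Update {counter}")
    window.after(100, update_label, label, window)  # schedule the next run in 100 ms


root = Tk()
status = Label(root, text="Initializing...")
status.pack()

update_label(label=status, window=root)
root.mainloop()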

 

Important: This script requires the reference image and ROIs!

 

CODE
from json import load
from os.path import dirname, abspath, join
from sys import exit
from tkinter import Tk, Label, Canvas
from typing import List, Tuple, Dict
import cv2
import numpy as np


DISPLAY_WIDTH: int = 240
DISPLAY_HEIGHT: int = 320
REFERENCE_IMAGE_PATH: str = "reference_data/reference.jpg"
JSON_FILE_PATH: str = "reference_data/roi.json"

THRESHOLD_DIFFERENCE: int = 50
CHANGED_PIXELS_THRESHOLD: int = 300

FONT_A: Tuple[str, int] = ("Arial", 20)
FONT_B: Tuple[str, int, str] = ("Arial", 24, "bold")
FONT_C: Tuple[str, int, str] = ("Helvetica", 50, "bold")


def load_rois(file_path: str) -> Tuple[List[np.ndarray], List[np.ndarray]]:
    """
    Load regions of interest (ROIs) from a JSON file. The function processes the
    JSON content and extracts coordinates for two sets of ROIs, converting them
    into numpy arrays for further use. Each set is represented as a list of
    numpy arrays.

    :param file_path: The path to the JSON file containing ROIs structured in "slots_1" and "slots_2".
    :type file_path: str

    :return: A tuple of two lists of numpy arrays.
    :rtype: Tuple[List[np.ndarray], List[np.ndarray]]

    :raises FileNotFoundError: If the specified JSON file is not found.
    """
    try:
        with open(file_path, 'r') as file:
            data: Dict = load(file)

        set_1: List[np.ndarray] = [np.array(coords, dtype=np.int32) for coords in data["slots_1"]]
        set_2: List[np.ndarray] = [np.array(coords, dtype=np.int32) for coords in data["slots_2"]]

        return set_1, set_2
    except FileNotFoundError:
        print(f"[ERROR] Could not find JSON file {file_path}.")
        exit(1)


def analyze_parking_spots(reference: np.ndarray, current_frame: np.ndarray, rois: List[np.ndarray]) -> int:
    """
    Analyzes parking spots by comparing a reference frame with the current frame using regions of interest
    to detect any significant changes. This function determines the number of free parking spots based
    on the specified regions.

    :param reference: A reference frame image represented as a matrix of pixel intensities.
    :type reference: np.ndarray
    :param current_frame: The current frame image to compare against the reference frame.
    :type current_frame: np.ndarray
    :param rois: A list of polygons, where each polygon represents a region of interest (ROI).
    :type rois: List[np.ndarray]

    :return: The number of free parking spots detected after analyzing the regions of interest.
    :rtype: int
    """
    free_spots = 0

    for roi in rois:
        mask = np.zeros_like(reference, dtype=np.uint8)
        cv2.fillPoly(mask, [roi], 255)

        ref_roi = cv2.bitwise_and(reference, reference, mask=mask)
        current_roi = cv2.bitwise_and(current_frame, current_frame, mask=mask)

        diff = cv2.absdiff(ref_roi, current_roi)
        _, diff_thresh = cv2.threshold(diff, THRESHOLD_DIFFERENCE, 255, cv2.THRESH_BINARY)
        changed_pixels = cv2.countNonZero(diff_thresh)

        if changed_pixels <= CHANGED_PIXELS_THRESHOLD:
            free_spots += 1

    return free_spots


def create_parking_sign(canvas: Canvas, x: int, y: int, width: int, height: int) -> None:
    """
    Create a parking sign on a given canvas.

    This function draws a rectangular parking sign with a blue fill,
    white outline, and the letter "P" in the center. The sign is
    positioned at a specified location on the canvas with the given
    dimensions.

    :param canvas: Canvas object where the parking sign will be drawn.
    :param x: x-coordinate for the top-left corner of the rectangle.
    :param y: y-coordinate for the top-left corner of the rectangle.
    :param width: The width of the parking sign rectangle.
    :param height: The height of the parking sign rectangle.

    :return: None
    """
    canvas.create_rectangle(x, y, x + width, y + height, fill="blue", outline="white", width=5)
    canvas.create_text(x + width // 2, y + height // 2, text="P", fill="white", font=FONT_C)


def update_free_spots(cam: cv2.VideoCapture,
                      reference: np.ndarray,
                      roi_1: List[np.ndarray],
                      roi_2: List[np.ndarray],
                      label: Label,
                      window: Tk) -> None:
    """
    Updates the `label` widget to display the number of free parking spots based on
    video feed analysis. The function captures a frame from the given camera,
    processes it to identify free parking spots in the specified regions of
    interest, and updates the label accordingly. It also sets up a recurring task
    to run itself after a given interval of time.

    :param cam: OpenCV VideoCapture object used for capturing video frames.
    :type cam: cv2.VideoCapture
    :param reference: Reference image used for comparing the parking lot state.
    :type reference: numpy.ndarray
    :param roi_1: List of regions of interest for the first set of parking spots.
    :type roi_1: list[numpy.ndarray]
    :param roi_2: List of regions of interest for the second set of parking spots.
    :type roi_2: list[numpy.ndarray]
    :param label: Label widget to be updated with the number of free parking spots.
    :type label: tkinter.Label
    :param window: Tkinter window object managing the UI.
    :type window: tkinter.Tk

    :return: This function does not return a value.
    :rtype: None
    """
    ret, frame = cam.read()

    if ret and frame is not None:
        current_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        free_spots_set_1 = analyze_parking_spots(reference=reference, current_frame=current_img, rois=roi_1)
        free_spots_set_2 = analyze_parking_spots(reference=reference, current_frame=current_img, rois=roi_2)
        total_free_spots = free_spots_set_1 + free_spots_set_2

        if total_free_spots > 0:
            label.config(text=f"{total_free_spots} Free")
        else:
            label.config(text="Occupied")
    else:
        label.config(text="Camera Error")

    window.after(100, update_free_spots, cam, reference, roi_1, roi_2, label, window)


if __name__ == '__main__':
    current_file_path = dirname(abspath(__file__))
    reference_image_path = join(current_file_path, REFERENCE_IMAGE_PATH)
    ref_img = cv2.imread(reference_image_path, cv2.IMREAD_GRAYSCALE)

    if ref_img is None:
        print(f"[ERROR] Could not read reference image {reference_image_path}.")
        exit(1)

    json_file_path = join(current_file_path, JSON_FILE_PATH)
    roi_set_1, roi_set_2 = load_rois(file_path=json_file_path)

    camera = cv2.VideoCapture(0)
    camera.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    camera.set(cv2.CAP_PROP_FPS, 15)

    if not camera.isOpened():
        print('[ERROR] Could not open camera.')
        exit(1)

    root = Tk()
    root.geometry(f"{DISPLAY_WIDTH}x{DISPLAY_HEIGHT}+0+0")
    root.resizable(width=False, height=False)
    root.config(bg='black')

    static_label = Label(root, text="Central Parking", fg="white", bg="red", font=FONT_A, width=DISPLAY_WIDTH)
    static_label.pack(fill='x', side='top', ipady=5)

    sign_canvas = Canvas(root, width=DISPLAY_WIDTH, height=120, bg="black", highlightthickness=0)
    sign_canvas.pack(pady=15)
    create_parking_sign(canvas=sign_canvas, x=(DISPLAY_WIDTH - 100) // 2, y=10, width=100, height=100)

    label_txt = Label(root, text="Initializing...", fg="yellow", bg="black", font=FONT_B)
    label_txt.pack(fill='x', side='top', pady=10)

    update_free_spots(cam=camera, reference=ref_img, roi_1=roi_set_1, roi_2=roi_set_2, label=label_txt, window=root)
    root.mainloop()

    camera.release()
    cv2.destroyAllWindows()
STEP 5
Upload and Test on Unihiker

Upload the project to the Unihiker via SCP or SMB. The online documentation will help you!

 

Here is an example for SCP:

CODE
# upload via SCP
$ scp -r ParkingSpaceOccupancy/ root@10.1.2.3:/root/

Calibration/Reference Image

 

Print out the picture parking_slots.jpg on A4 paper; you will find it inside the archive. Place the paper in front of the camera. Make sure that the lighting is right and that there are no other objects between the camera and the paper.

 

Run the Python script calibration.py (via Touch screen or command line).  

 

Press the s key to create the reference image. When you are done, press the q or ESC key to stop the Python script.

 

 

CODE
# ssh from local to unihiker
$ ssh root@10.1.2.3

# change directory
$ cd /root/ParkingSpaceOccupancy/

# execute Python script
$ python3 calibration.py

Test ROIs

 

Run the Python script test.py (via Touch screen or command line). Cover the respective parking spaces and check whether they are recognized as free (green) or occupied (red).

 

When you're done press the q or ESC key.

 

 

CODE
# ssh from local to unihiker
$ ssh root@10.1.2.3

# change directory
$ cd /root/ParkingSpaceOccupancy/

# execute Python script
$ python3 test.py

Run application

 

As soon as you are finished with the previous steps, you can start the final application. Run the Python script main.py (via Touch screen or command line).

 

Here is a demo on Instagram.

STEP 6
Annotations and Ideas

Add Notifications: Integrate SMS or email alerts for real-time updates.

 

Weather Adaptation: Teach your detector to handle rainy or snowy days.

 

License Plate Recognition: Add a layer that recognizes license plates and logs parking times.

 

Real-Time Web Dashboard: Create a web interface to monitor parking remotely.
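
As a starting point for the dashboard idea, here is a minimal sketch that uses only the Python standard library. The FREE_SPOTS value and the port are placeholders; in practice you would feed the count from the detection loop into it:

CODE
# minimal sketch of a JSON status endpoint (standard library only, placeholder data)
from http.server import BaseHTTPRequestHandler, HTTPServer
from json import dumps

FREE_SPOTS = 3  # placeholder; update this from the detection loop in a real setup


class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = dumps({"free_spots": FREE_SPOTS}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()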

Download: demo.zip (15 KB), the archive with all project files.

License: All Rights Reserved