No helmet no green AI traffic light

Making the Honda Awareness AI Traffic Light from scratch with Edge Impulse

 

HARDWARE LIST
1 DFRobot UNIHIKER - IoT Python Programming Single Board Computer with Touchscreen
1 Megapixel 720p USB Wide-angle Camera for Raspberry Pi and NVIDIA Jetson Nano
1 Texas Instruments AM62A

Software apps and online services 

- Edge Impulse Studio        
- Autodesk Fusion 360 

Story

 

No helmet no green AD. (Hey Dall-e, where is my front wheel?)

 

“Awareness traffic light” was the name of an initiative by Honda Safety, a department of Honda Motor Argentina, that aims to raise awareness and promote safe motorcycle riding. The initiative consists of AI cams located in front of traffic lights that detect whether motorcycle riders are wearing helmets and ask them to put them on; otherwise, the traffic light will not turn green.

As a motorcycle rider and an AI enthusiast, I wondered how hard it would be to make the Honda Awareness AI Traffic Light from scratch.

 

Machine Learning model training

Identifying motorcycle riders not wearing helmets was a complicated task until the popularization of Machine Learning. Now it is just a matter of taking a set of pictures and letting Machine Learning find the subtle patterns.

 

Dataset Label Queue

 

For obvious reasons (lack of a good staircase to start with) I did not use an actual traffic light, but I made a scaled intersection with Lego.

Even though Lego figures were used for training here, real pictures could be used instead and the entire system would work right away.

 

No green no helmet

 

I took 60 pictures of this Lego figure with and without a helmet using Open Camera, a free Android app with interval shooting.

I uploaded the pictures to the well-known Edge Impulse platform, which is free for developers.

I went to the labeling queue. Then I created an Impulse for Image data using 96x96px images, an Image processing block, Object Detection as the learning block and 2 output features: helmet and nohelmet.

 

 

After generating the features, I configured a Neural Network with 60 training cycles, a 0.001 learning rate and data augmentation, and I got an impressive 95.2% F1 score.

 

NN F1 score

 

What is the F1 score? It is a machine learning evaluation metric that measures a model's accuracy by combining its precision and recall scores. By contrast, the plain accuracy metric simply computes how many times a model made a correct prediction across the entire dataset.
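As a quick aside, the F1 score is the harmonic mean of precision and recall, which a few lines of Python make concrete. The 0.96 and 0.944 values below are illustrative only, chosen to land near the reported 95.2%; they are not the project's actual precision and recall:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values that land near the reported 95.2% F1 score
print(round(f1_score(0.96, 0.944), 3))  # 0.952
```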

 

Testing helmet detection with Edge Impulse runner

 

At this point, it is possible to download a library with the model for Arduino, ESP32, Python, etc. In this case I did not deploy it, since the model will be downloaded directly to the Texas Instruments AM62A board.

With the model trained, I could define the architecture of the complete system.

 

Architecture

 

The AM62A with a USB cam will take the pictures and, from a Python script, determine whether motorcycle riders are wearing helmets.

An intermediate server will receive the inference via PHP and store it in a .ini file to be queried by the traffic lights.

The traffic lights will be built with Unihiker boards, also running a Python script.
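The data flow between the two Python scripts can be sketched in a few lines. This is a simplified illustration, not the actual project code: SERVER_URL and the 0.5 threshold are assumptions, and the real AM62A script uses the requests library installed later on; stdlib urllib is used here to keep the sketch self-contained.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

SERVER_URL = "http://MyServer/updateHelmet.php"  # hypothetical server name
NO_HELMET_LIMIT = 0.5                            # assumed threshold

def should_block_green(nohelmet_score: float, limit: float = NO_HELMET_LIMIT) -> bool:
    """Traffic light side: hold the green phase while the reported
    no-helmet confidence is above the limit."""
    return nohelmet_score > limit

def report_inference(nohelmet_score: float) -> None:
    """AM62A side: publish the latest no-helmet score to the server,
    which stores it in helmet.ini for the traffic lights to query."""
    query = urlencode({"nohelmet": nohelmet_score})
    with urlopen(f"{SERVER_URL}?{query}", timeout=5):
        pass

if __name__ == "__main__":
    print(should_block_green(0.74))  # True: rider likely without a helmet
```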
 

Texas Instruments AM62A AI Board

 

TI AM62A

 

The Texas Instruments AM62A is a “low-power starter kit for edge AI systems” featuring a quad-core 64-bit Arm® Cortex®-A53 microprocessor, a single-core Arm Cortex-R5F, H.264/H.265 video encode/decode, 2GB 32-bit LPDDR4 memory, 512MB OSPI, 16GB eMMC, USB 2.0 (too bad that there is just one port), a microSD slot, Gigabit Ethernet (no WiFi), a 3.5mm TRRS audio jack and a 40-pin GPIO expansion header.

Another USB port and WiFi would be nice, but the board comes with lots of interesting AI features, so it is not that important.

There is one main difference about working with this board compared to a Raspberry Pi or a standard Linux box, for example. You cannot just connect an HDMI display and a keyboard to run Linux commands. The AM62A OS is an Arago Linux build with very limited tools, and the HDMI output displays demos of pre-installed AI applications.

 

There are several setup options but my recommendation is:

. Download the SDK image (.wic.xz) from https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-AM62A/08.06.00.45

. Burn the image into a microSD card with Balena Etcher or another similar app

. Connect a 5V 3A power supply, HDMI to a TV, the USB camera, and the Ethernet cable to the router

. Check the assigned IP on the HDMI screen or using the router's Connected Devices utility

 

Get the IP from HDMI screen

 

. Log in to that IP using PuTTY or any other SSH client. The user is root and the password is blank

 

. Run

CODE
npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm

 

. Run

(this is required to publish the detection rate for the traffic light Unihiker module)

CODE
sudo pip3 install requests

 

. Run

CODE
edge-impulse-linux-runner

Log in with your Edge Impulse credentials, select the motorcycle helmet project using the arrows, then press Enter.

. You can adjust the camera position and check the classification results by loading http://YourAM62AIP:4912

 

74% helmet detection in 4ms

 

Server side PHP script

For the server, a Linux box with PHP is enough. I uploaded the PHP and INI files to a web server, assigned 777 permissions to the helmet.ini file and edited my server name inside the am62a_traffic.py file.

I did a manual test by loading the URL http://MyServer/updateHelmet.php?nohelmet=0.123

Then I checked that the .ini file content was updated, so everything was working for the next step.

Note: for this example, only one inference value is uploaded to the server. For a multi-traffic-light environment, a traffic light ID would have to be added, and of course some security layers.
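The PHP script itself is in the project files, but its job is simple: take the nohelmet query parameter and persist it. A minimal Python equivalent of that store-and-query round trip, assuming a helmet.ini layout with a single section and key (the actual file layout may differ), could look like this:

```python
import configparser

INI_PATH = "helmet.ini"  # assumed layout: one [inference] section

def store_score(score: float) -> None:
    """What updateHelmet.php does: persist the latest no-helmet score."""
    config = configparser.ConfigParser()
    config["inference"] = {"nohelmet": str(score)}
    with open(INI_PATH, "w") as f:
        config.write(f)

def read_score() -> float:
    """What the traffic light queries: read the stored score back."""
    config = configparser.ConfigParser()
    config.read(INI_PATH)
    return float(config["inference"]["nohelmet"])

store_score(0.123)
print(read_score())  # 0.123
```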

 

Traffic Light

 

DFRobot Unihiker

 

For the traffic lights, there are several options: a simple ESP32 with LEDs querying the server, for example. But I had recently received a Unihiker by DFRobot, and there is nothing faster than a board with Linux and Python pre-installed and an onboard touchscreen.

 

Unihiker command line

 

I wrote a script that cycles through 3 PNGs (green, yellow and red); before turning to red, the server is queried. If the missing-helmet score is higher than a limit value, a “no helmet no green” image appears.
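The loop in that script can be sketched as below. This is a simplified stand-in, not the actual unihiker_trafficLight.py: the 0.5 threshold and image names are assumptions, the server query is stubbed with a fixed value, and the real script draws the PNGs on the Unihiker touchscreen with its Python GUI library rather than printing.

```python
import time

NO_HELMET_LIMIT = 0.5  # assumed threshold

def query_nohelmet_score() -> float:
    """Ask the intermediate server for the latest no-helmet score.
    Stubbed here; the real script fetches it over HTTP from helmet.ini."""
    return 0.7  # pretend a rider without a helmet was just detected

def show(image_name: str) -> None:
    """Stand-in for drawing a PNG on the Unihiker touchscreen."""
    print(f"showing {image_name}")

def traffic_light_cycle() -> list:
    """One green-yellow-red cycle; before red, decide whether to warn."""
    shown = []
    for phase in ("green.png", "yellow.png", "red.png"):
        show(phase)
        shown.append(phase)
        time.sleep(0.1)  # the real script waits several seconds per phase
    if query_nohelmet_score() > NO_HELMET_LIMIT:
        show("nohelmet.png")   # no helmet, no green
        shown.append("nohelmet.png")
    return shown

print(traffic_light_cycle())
```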

I connected a USB-C cable to the Unihiker, opened the web interface at 10.1.2.3, configured my WiFi SSID and password, and obtained the IP of the Unihiker.

With that IP, user root and password dfrobot, I connected with SFTP and uploaded unihiker_trafficLight.py and the traffic light images to the /images folder.

I also made a small 3D-printed base with Fusion 360.

 

Traffic Light base

 

Running the system

 

Low power starter kit for Edge AI systems

 

Download am62a_traffic.py from GitHub and upload it to the AM62A using SFTP.

With the inference module, the intermediate server and the traffic light all ready, I started the system with 2 commands, which of course could be auto-executed with cron jobs.

 

Texas Instruments:

CODE
python3 am62a_traffic.py

 

Unihiker:

CODE
python unihiker_trafficLight.py

 

Demo

 

Conclusions

This project demonstrates how easy it is to implement Honda's awareness traffic light with the right Machine Learning platform and boards.

But why stop here? What about sending a Telegram notification to the authorities?

· Just add this function and you are set.
CODE
import requests

def telegramAlert(message):
    # Placeholder bot token and chat ID: replace with your own values
    apiToken = '00:000'
    chatID = '-0000'
    apiURL = f'https://api.telegram.org/bot{apiToken}/sendMessage'
    try:
        # Send the alert text to the Telegram chat via the Bot API
        response = requests.post(apiURL, json={'chat_id': chatID, 'text': message})
        print(response.text)
    except Exception as e:
        print(e)

 

What about a secondary cam with OCR and an automatic ticket?

The TI AM62A is able to use several cameras. In fact, there are 2 CSI ports ready for RPi cameras. You can use either of them to take a picture from behind, send it to OCR, obtain the license plate and issue the ticket automatically.

For OCR you can use https://pypi.org/project/pytesseract/

CODE
from PIL import Image
import pytesseract

# Print any text recognized in the license plate picture
print(pytesseract.image_to_string(Image.open('licenseplate.png')))

Files

· Source code

· Edge Impulse project (trained with Lego)

· Traffic light 3d printed stand https://www.thingiverse.com/thing:6277702

 

Contact

Interested in other AI projects?

· Twitter

· Instagram

· Web

 

Custom parts and enclosures

· 3d printed traffic light base

 

Code

· No helmet no green source code

 

 

This article was first published on Hackster on November 2, 2023

CR: https://www.hackster.io/roni-bandini/no-helmet-no-green-ai-traffic-light-4a5959

Author: Roni Bandini

License
All Rights Reserved