
Sound Cannon activated by Artificial Intelligence


Fights and screaming at dawn on your block almost every night. Civilized channels have been tried. Is there something that can be done?

 

To disperse crowds, police forces in some countries use a sound cannon called LRAD. The LRAD plays specially designed sounds at very high volumes. Police models can generate 160 dB at one meter, while 85 dB is already quite annoying for human ears.
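For a rough sense of what those numbers mean at a distance, sound pressure from a point source in free field drops about 20·log10(d) dB at d meters from the 1-meter reference. A quick sketch of that arithmetic (a simplified spherical-spreading assumption that ignores the horn's directivity and any reflections):

CODE
import math

def spl_at_distance(spl_1m, meters):
    # Free-field estimate: SPL drops 20*log10(d) dB from the 1 m reference
    return spl_1m - 20 * math.log10(meters)

print(spl_at_distance(160, 10))  # a 160 dB source is still about 140 dB at 10 m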

 

projectImage

Image by Drew Litowitz, published at pitchfork.com

 

Would it be possible to make a civilian LRAD to disperse these people? And not only that: could artificial intelligence be added to trigger the sound after specific actions?

 

Hardware

 

On the local market, at reasonable prices, I was able to get an outdoor speaker horn and a matching mono amplifier board. For detection and audio playback I decided to use a Raspberry Pi, since it takes a decent camera module, has an audio output, and integrates easily with the Edge Impulse machine learning platform.

 

The complete list of materials is:

- Raspberry Pi 3
- Raspberry Pi cam
- Outdoor speaker horn
- Mono amp TPA3118
- 12 V power supply for the amp
- 5 V power supply for the Raspberry
- DFRobot LED button (12 V green button with LED)
- Miniplug cable
- 7-segment display TM1637
- 1-channel relay module
projectImage
- TM1637 display: CLK pin to GPIO 3, DIO pin to GPIO 2, GND to GND, VCC to VCC.
- Relay: signal pin to GPIO 4, GND to GND, VCC to VCC. The switched contacts (NC and common) sit in the middle of the positive line: 12 V + to one contact, amp power + to the other. The 12 V negative goes directly to the amp power GND.
- 5 V power supply to the Raspberry Pi micro USB, with the + line intercepted by the DFRobot LED button.
- Speaker to the amp audio output.
- Raspberry Pi 3.5 mm audio output to the amp input (only two contacts of the jack carry audio, since the same connector is also used for composite video; in my case a stereo miniplug not fully inserted worked).
projectImage
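Before closing everything up, a short smoke test can confirm the wiring. This is a minimal sketch, assuming the raspberrypi-tm1637 Python package (its exact API may vary) and the BCM pin numbers used above; whether HIGH means "amp on" depends on using the relay's NC or NO contact:

CODE
import time
import RPi.GPIO as GPIO
import tm1637  # pip install raspberrypi-tm1637 (API assumed)

RELAY_PIN = 4  # BCM numbering, per the wiring above

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

display = tm1637.TM1637(clk=3, dio=2)  # CLK on GPIO 3, DIO on GPIO 2
display.show("HOLA")  # greeting on the 7-segment display

GPIO.output(RELAY_PIN, GPIO.HIGH)  # relay should click, toggling amp power
time.sleep(2)
GPIO.output(RELAY_PIN, GPIO.LOW)
GPIO.cleanup()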

Enclosure

 

I found an old removable PC hard drive tray big enough for all the components of this project. Then I designed a small 3D-printed part to hold the 7-segment display and the DFRobot LED button.

 

You can download this part at https://cults3d.com/en/3d-model/gadget/lrad-panel

 

projectImage

Computer Vision

 

Lots of people pass by, and it would be bad to shock a neighbor walking the dog.

 

If the sound cannon shouldn't fire on merely detecting people, could it detect certain actions instead? Like fights, which actually happen almost every day. The interesting part is that I don't even need to take hundreds of pictures, since the AI community Hugging Face hosts a dataset of human actions.

projectImage

I wrote a Python script to download 2 actions from this dataset: "people sitting" and "people fighting". I uploaded the pictures to the Edge Impulse machine learning platform, trained a model, and exported it as an EIM file.
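The script boils down to filtering the dataset by label and saving the matching images. A minimal sketch, assuming the Hugging Face datasets library and an image dataset with "image" and "labels" columns; the dataset id and the exact label names here are placeholders:

CODE
import os
from datasets import load_dataset  # pip install datasets

DATASET_ID = "user/human-action-recognition"  # placeholder dataset id
WANTED = {"sitting", "fighting"}              # the two actions to keep

os.makedirs("dataset", exist_ok=True)
ds = load_dataset(DATASET_ID, split="train")
label_names = ds.features["labels"].names     # int label -> readable name

for i, row in enumerate(ds):
    name = label_names[row["labels"]]
    if name in WANTED:
        row["image"].save("dataset/{}_{}.jpg".format(name, i))  # PIL image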

projectImage

Below, model training on the Edge Impulse platform.

projectImage

Audio files

 

A brave man confronted a police-grade LRAD and uploaded the video to YouTube. I downloaded the file, extracted the audio, processed it with the Audacity app, replicated it in time to make it longer, and exported the result as a .WAV file.
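If you prefer to script those steps, here is a minimal sketch with pydub (an assumption on my part; the processing above was done in Audacity, and the file names are placeholders) that repeats the extracted clip and exports it as WAV:

CODE
from pydub import AudioSegment  # pip install pydub (requires ffmpeg)

clip = AudioSegment.from_file("lrad_extracted.mp3")  # audio from the video
longer = clip * 10                 # replicate the clip 10 times in a row
longer.export("lrad.wav", format="wav")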

projectImage

Raspberry Pi setup

 

For the Raspberry Pi, I executed these commands after installing the Lite (non-desktop) version of Raspberry Pi OS.

CODE
sudo apt-get install python3-pip
sudo apt install git
pip install picamera
sudo apt install python3-opencv
sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev
git clone https://github.com/edgeimpulse/linux-sdk-python
sudo python3 -m pip install edge_impulse_linux -i https://pypi.python.org/simple
sudo python3 -m pip install numpy
sudo python3 -m pip install pyaudio

I then uploaded the LRAD Python script, the WAV file, and the EIM machine learning model to the board (all included in the Attachments section of this tutorial).

I assigned 744 permissions to the EIM file (chmod 744, so the owner can execute it) and added the script as a cron job.

 

CODE
crontab -e
@reboot sudo python3 /home/pi/lrad4.py > /home/pi/lradlog.txt
sudo reboot

 

The LRAD script now runs automatically at boot. The 7-segment display shows a greeting, then the detection percentage, and finally the trigger notice.

 

All console output of the LRAD script is logged to the lradlog.txt file.

 

Some code settings to edit:

CODE
# EIM file name in case you use a new version
model = "domestic-ml-lrad-2-linux-armv7-v8.eim"
# fighting percentage
detectionLimit=0.75
# GPIO pin for the relay
relayPin=4
# seconds to play the crowd dispersion sound
playSeconds=5
projectImage

How does it work?

 

The script takes a picture every X seconds and sends it to the machine learning model, which runs an inference: "is this a fight?". The resulting percentage is shown on the 7-segment display, and if it is high enough, the relay enables the amplification and the annoying sound is played through a subprocess call.
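Condensed into code, the loop looks roughly like this. A minimal sketch, assuming the Edge Impulse Linux Python SDK and OpenCV for capture; the class label "fighting", the lrad.wav file name, and aplay playback are assumptions based on the description above (the attached lrad4.py is the real implementation):

CODE
import time, subprocess
import cv2
import RPi.GPIO as GPIO
from edge_impulse_linux.image import ImageImpulseRunner

model = "domestic-ml-lrad-2-linux-armv7-v8.eim"
detectionLimit = 0.75   # fighting percentage
relayPin = 4            # GPIO pin for the relay
playSeconds = 5         # seconds to play the crowd dispersion sound

GPIO.setmode(GPIO.BCM)
GPIO.setup(relayPin, GPIO.OUT, initial=GPIO.LOW)

with ImageImpulseRunner(model) as runner:
    runner.init()
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        # the SDK expects an RGB image and returns the model's feature vector
        features, _ = runner.get_features_from_image(
            cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        res = runner.classify(features)
        score = res["result"]["classification"].get("fighting", 0.0)
        print("fighting:", score)
        if score >= detectionLimit:
            GPIO.output(relayPin, GPIO.HIGH)        # enable the amp via relay
            subprocess.call(["aplay", "lrad.wav"])  # play the dispersion sound
            time.sleep(playSeconds)
            GPIO.output(relayPin, GPIO.LOW)
        time.sleep(2)  # take a picture every X seconds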

Attachments

 

Source code, audio files, machine learning model: 

https://github.com/ronibandini/domesticLMLRAD

 

Interested in more Machine Learning projects?

 

Follow this Machine Learning YouTube list

License
All Rights Reserved