Using object detection to identify and scare away predators, as well as to notify us of their presence with a literal siren.
Intro
Long before eggs became the new gold, my wife wanted chickens. Thing is, a lot of the animals in the area also wanted our chickens. I went all in making the chicken coop safer for the chickens in general, but I wanted to do something more to really help keep them safe.
The idea for this project started with the fact that we wanted to be notified if a predator was by the chicken coop. If we were able to get out there and yell at them while they were figuring out how to get to the chickens, they would scamper off before any harm was done. This made me realize I could even automate the "yelling at the predator" part, and the project was born.
So, today, we're putting together a device that uses object detection to automatically identify predators, scare them off, and notify us of their presence. Let's get into it.
Preparation
First up is the object detection. I started off with a site called Roboflow. It had a nice dataset setup, but it seemed to want me to make calls online, which isn't much of an option over in the chicken coop, far beyond the reach of our WiFi. So, I opted to do the training locally. Turns out, that meant a week and a half of training each night. I may have gone overboard on the data augmentation... Either way, the model I trained is available within this project. It focuses on predators that I'm confident are in my area. Unfortunately, we moved to an area that has quite an extensive list in that regard. The model covers coyotes, foxes, hawks, opossums, raccoons, and snakes.
One of the main goals of this project is to scare the predators away. I reinforced the coop, but if they stick around and try to figure out how to get in, we want to stop them in their tracks. So, I recorded voice lines for each predator on the list. Granted, they're going to have absolutely no clue that I'm specifically addressing them, or what I'm saying in general, but when they hear a person's voice angrily shouting at them, they're going to bow out and leave the area. The voice lines are included as well but, honestly, it's pretty fun to record raccoon smack talk, so this might be a good area for customization.
Now we have a way to detect the predators and something to scare them off with when we see them, so it's time to put together the actual device and put it to the test.
Hardware
For running all the features we're putting together, we're using a Raspberry Pi from PCBWay. Part one, as discussed, is identifying the predator, so we have a very standard USB webcam for that. Next up is playing the audio, which just uses a USB speaker. After this, we get a bit fancier. To alert us of predators, we need to get data back to the house and do something with it. For this, we're going to use Blues Wireless. I've been using a Notecarrier-F for some projects that leverage Blues and I've grown quite comfortable with it, so that's what we're using here.
As far as hardware assembly goes, everything (including the Notecarrier-F) plugs into the Raspberry Pi via USB. Note that during the development phase this means three USB devices plus a mouse and keyboard, for a total of five, with only four USB ports available on the Raspberry Pi. I just used a USB splitter for the mouse and keyboard and had no issues whatsoever.
That takes care of our detection device. As for communicating to those in the house that there's a predator, we sound a literal siren. The siren itself is very simple: it has an on/off switch and a volume knob. That doesn't do much for us on its own, so to actually control it we're using a smart plug. As it turns out, some smart plugs make life easy and others take a lot of effort to use in this type of project. Of the ones I had around the house, the easiest to use was a TP-Link Kasa. It has a simple Python library I was able to leverage, which was so much easier than some of the other options I was looking into before finding it. With it, we can take our predator alert and turn on the smart plug (and therefore our siren) for a set amount of time before turning it back off. This gives us a very clear indication that a predator is after our chickens, all automatically.
Object Detection
With all the hardware in place, it's time to get into how it works. Thankfully, a lot of it is intuitive based on what has been discussed so far. We have a YOLO detection model, and we use it to look for predators. I used YOLOv8, and all of the training occurred on my PC. I did augment the data a bit, meaning I included slightly altered versions of the training images to help improve the dataset. If I were to do this all again, I would probably skip some of that, especially blurring images (adding a bit of blur is an option when augmenting data, which made sense to me when factoring in movement and such). The end result has had some false positives for the brushtail possum, where it thinks blurs are an opossum. This can be rectified a bit by increasing the confidence requirement for the brushtail possum in particular but, for anyone following along with the intention of training their own model, maybe leave the blurrier images out of the dataset.
As far as training goes, the dataset from Roboflow was well labeled, including the bounding box for each animal, so all that was needed was to put a data.yaml file in the right spot so the training knew what data to process and where, and then to run the training.
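For reference, here's a minimal sketch of what that data.yaml might look like. The paths below assume a typical Roboflow export layout, and the class names mirror this project's, so adjust both to match your own dataset:
# data.yaml - tells the trainer where the images live and what the classes are
train: train/images
val: valid/images

nc: 8
names: ["chicken", "coyote", "fox", "hawk", "opossum", "possumbrushtail", "raccoon", "snake"]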
Then running the following in the same folder will get you up and running with a fully trained model:
pip install ultralytics
yolo task=detect mode=train model=yolov8n.pt data=data.yaml epochs=50
yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=data.yaml
Note: this took me a week and a half of overnight training sessions, but (a) that's partly due to how much I packed into the dataset with all the augmentations, and (b) you can stop and resume training without issue.
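If you do stop partway through, something along these lines should resume from the last checkpoint (the exact runs path depends on your setup):
yolo task=detect mode=train resume=True model=runs/detect/train/weights/last.pt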
Now that training is done, we have a fully functional object detection model to use. Training generates a best.pt file, which is what you'll want to use on the Raspberry Pi side of the project. If we see a predator, we play an audio file. This is the main thing that's meant to scare them off, since it happens long before any actual humans can get over there.
Of note, since I'm using just a normal webcam to make this happen, I also got motion-activated, solar-powered lights. These act as an extra deterrent for predators, but they also make it so our gizmo can actually see what's happening if one decides to stick around.
Moving on, we also record a video clip when a predator is detected. This is very helpful for confirming that there is indeed a predator, that it's the type of predator that was identified, and so on. When one is identified, we send an alert through Blues Wireless, which we'll get to next.
Importantly, I absolutely do not want to be inundated with sirens, so I have some logic in place to protect us from triggering too many alarms. It hasn't been needed thus far, thankfully, but it's good to have this kind of protection in place.
First, we only allow alarms to trigger once every 30 seconds. Second, if we trigger too many times within a set timespan, currently 10 triggers within 5 minutes, then we enter a cooldown period, currently 2 hours. Since a predator is not realistically going to be chilling with the chickens for 2 hours, the most realistic explanation for that many triggers is some sort of false positive in view. This helps avoid driving us to madness while still ensuring we have our chicken lookout.
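Stripped down from the full listing at the end, the shape of that logic is roughly this:
import time

MIN_TIME_BETWEEN_TRIGGERS = 30   # seconds between individual alarms
MAX_TRIGGERS_5_MIN = 10          # alarms allowed per 5-minute window
FORCED_COOLDOWN_HOURS = 2        # back off this long once the window fills

last_trigger_time = 0.0
forced_cooldown_until = 0.0
triggers = []                    # timestamps of recent alarms

def should_trigger():
    global last_trigger_time, forced_cooldown_until, triggers
    now = time.time()
    if now < forced_cooldown_until:
        return False  # still cooling down
    if (now - last_trigger_time) < MIN_TIME_BETWEEN_TRIGGERS:
        return False  # too soon after the last alarm
    triggers = [t for t in triggers if t >= now - 300]  # keep the last 5 min
    if len(triggers) >= MAX_TRIGGERS_5_MIN:
        # Probably a false positive stuck in view; stand down for 2 hours
        forced_cooldown_until = now + FORCED_COOLDOWN_HOURS * 3600
        return False
    last_trigger_time = now
    triggers.append(now)
    return True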
Blues Wireless
With Blues Wireless, you set up a route. I talk about this in the video, but one intuitive option for this particular project would be utilizing Twilio.
This is a texting service, which means that we could send an alert through Blues and receive a text saying that a predator has been detected. I did leverage this in one of my projects, but it requires a subscription and I have a more eccentric plan in mind.
I've been continuously adding to a custom Smart Home setup I've been putting together, so I'll be using that: Blues routes to our Smart Home, and our Smart Home processes that a predator has been detected and does something with it.
Of note, since the end goal is notifying us with a siren, if I want to see which type of predator was detected I can easily check the Smart Home logs or the Blues Wireless logs.
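For context, the alert the Pi writes to the Notecard (the note.add request in the full listing at the end) reaches the route with a body shaped like this; the values here are purely illustrative:
{
  "alert": "PREDATOR_DETECTED",
  "predator": "raccoon",
  "confidence": 0.87,
  "escalated": false,
  "timestamp": "2026-02-03T14:05:00Z"
}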
Other Code
As a bit of a bonus, we also have some code that identifies whether a predator is in the same shot as a chicken. This is an early iteration of basically saying "a predator has a chicken, do something fast". As is, I don't currently do anything with this, but it seemed potentially useful to have down the line. Since it's not going to be super convenient to bring the Raspberry Pi in for updates, I figured it'd make sense to code it in now. This way, if the current iteration of the code doesn't cut it, I'll be able to add logic on the Smart Home side to increase the urgency of the alert.
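The check itself is just an axis-aligned bounding-box overlap test between each predator and each chicken; it's the boxes_intersect helper from the full listing at the end:
def boxes_intersect(boxA, boxB):
    """True if two (x1, y1, x2, y2) boxes overlap at all."""
    Ax1, Ay1, Ax2, Ay2 = boxA
    Bx1, By1, Bx2, By2 = boxB
    if Ax2 < Bx1 or Bx2 < Ax1:
        return False  # separated horizontally
    if Ay2 < By1 or By2 < Ay1:
        return False  # separated vertically
    return True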
Activating the Siren
Once our Smart Home setup receives a predator alert, we turn on the siren. As noted, since the siren is a simple device, we use a Smart Plug to make this work with our Smart Home setup.
To briefly get a bit more into how we do this, you can install the Kasa library with:
pip install python-kasa
Then you just need to find the IP address of your smart plug and declare it like so:
import asyncio
from kasa import SmartPlug
KASA_IP = "192.168.xx.xx" # The IP address of your Kasa plug
Then we just have a couple of helper functions we're able to call within the code:
They're short: one coroutine turns the plug on, one turns it off, and a wrapper runs the siren for a set time. Here's a minimal sketch using python-kasa's async API; the helper names are my own, and the 15-second duration matches my setup:
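async def siren_on():
    """Turn the Kasa plug (and therefore the siren) on."""
    plug = SmartPlug(KASA_IP)
    await plug.update()   # python-kasa needs an update() before sending commands
    await plug.turn_on()

async def siren_off():
    """Turn the Kasa plug (and therefore the siren) off."""
    plug = SmartPlug(KASA_IP)
    await plug.update()
    await plug.turn_off()

async def sound_siren(duration=15):
    """Run the siren for `duration` seconds, then shut it back off."""
    await siren_on()
    await asyncio.sleep(duration)
    await siren_off()

# Example usage: asyncio.run(sound_siren())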
In my setup I have it turn the plug, and thus the siren, off after 15 seconds.
Putting it Into Action!
Now we have the following flow:
Raspberry Pi device --> detects predator --> goes through Blues Wireless --> goes to Smart Home --> activates the red rotating light style alarm/siren
The full system is now up and running, such that it will detect and scare off predators, then alert us of their presence with a siren/alarm.
So, all that's left to do is put it out in the chicken coop and see if it can keep the chickens from harm!
I tested this out with a couple of pictures of predators to see if it was all still working, and it was indeed. Unfortunately, I did also end up with some false positives for the brushtail possum, as mentioned before, so I'll need to rectify that by increasing the confidence requirement for that animal in particular. That, or if I run into other issues, I may eventually retrain the model. As is, though, I'm pretty thrilled with how well it works.
Hopefully you enjoyed the ride, and hopefully the chickens are all set so I can be done dealing with chicken stuff for a while :P!
Full Code

import time
import os
import cv2
import serial
import json
import RPi.GPIO as GPIO
from ultralytics import YOLO

MODEL_PATH = "best.pt"  # YOLO model file
CONF_THRESHOLD = 0.8    # Confidence threshold

# Predator classes
PREDATOR_CLASSES = [
    "coyote",
    "fox",
    "hawk",
    "opossum",
    "raccoon",
    "snake",
    "possumbrushtail"  # mapped to "opossum" for audio
]

CHICKEN_CLASS = "chicken"  # This is used for the escalation logic
ESCALATED_ALARM = "escalated.mp3"

# Unique sounds for each predator
SOUND_MAP = {
    "coyote": "coyote.mp3",
    "fox": "fox.mp3",
    "hawk": "hawk.mp3",
    "opossum": "opossum.mp3",
    "raccoon": "raccoon.mp3",
    "snake": "snake.mp3"
    # possumbrushtail => "opossum" in code
}

USE_NOTECARD = True
# NOTECARD_PORT = "/dev/ttyAMA0"
NOTECARD_PORT = "/dev/ttyACM0"
NOTECARD_BAUD = 9600

# Rate-limiting
MIN_TIME_BETWEEN_TRIGGERS = 30  # seconds
MAX_TRIGGERS_5_MIN = 10
FORCED_COOLDOWN_HOURS = 2

# We still do frame-skipping logic
FRAME_RATE = 30
PROCESSING_TIME_SECONDS = 8
FRAMES_PER_INFERENCE = FRAME_RATE * PROCESSING_TIME_SECONDS  # 240 => one detection per ~8s

# Snapshots folder
SAVE_DIR = "captures"
os.makedirs(SAVE_DIR, exist_ok=True)

# Show debug window with bounding boxes?
SHOW_VIDEO = True

# Camera resolution (optional)
CAM_WIDTH = 640
CAM_HEIGHT = 480

# Video recording settings
RECORD_DURATION = 5  # seconds to record
RECORD_FPS = 15      # frames/sec in recorded clip
VIDEO_OUTPUT_DIR = "recordings"
os.makedirs(VIDEO_OUTPUT_DIR, exist_ok=True)

# ==========================================

def play_alarm_sound(sound_file):
    if not sound_file:
        return
    if not os.path.exists(sound_file):
        print("[ERROR] Sound file not found:", sound_file)
        return
    print("[INFO] Playing sound:", sound_file)
    os.system(f"mpg123 -q '{sound_file}'")

def send_alert_via_notecard(species_label, confidence, escalated=False):
    if not USE_NOTECARD:
        return
    try:
        with serial.Serial(NOTECARD_PORT, NOTECARD_BAUD, timeout=1) as port:
            alert_body = {
                "alert": "PREDATOR_DETECTED",
                "predator": species_label,
                "confidence": round(confidence, 2),
                "escalated": escalated,
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
            }
            req = {
                "req": "note.add",
                "file": "alerts.qo",
                "body": alert_body
            }
            port.write((json.dumps(req) + "\n").encode("utf-8"))
            port.flush()
            print(f"[INFO] Notecard alert sent. Predator={species_label}, escalated={escalated}")
    except Exception as e:
        print("[ERROR] Could not send Notecard alert:", e)

def boxes_intersect(boxA, boxB):
    Ax1, Ay1, Ax2, Ay2 = boxA
    Bx1, By1, Bx2, By2 = boxB
    if Ax2 < Bx1 or Bx2 < Ax1:
        return False
    if Ay2 < By1 or By2 < Ay1:
        return False
    return True

def record_video(cap, duration=5, fps=15):
    """Record a short video from the cap feed for duration seconds, store in recordings/ folder."""
    # Build a filename with timestamp
    time_str = time.strftime("%Y%m%d_%H%M%S")
    out_name = f"{time_str}_predator_clip.avi"
    out_path = os.path.join(VIDEO_OUTPUT_DIR, out_name)
    print(f"[INFO] Recording video: {out_path}")

    # Get actual camera resolution
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter(out_path, fourcc, fps, (width, height))

    start_time = time.time()
    while (time.time() - start_time) < duration:
        ret, frame_rec = cap.read()
        if not ret:
            break
        out.write(frame_rec)
        # If you want to see a live preview while recording, you could do:
        # if SHOW_VIDEO:
        #     cv2.imshow("Recording...", frame_rec)
        #     if cv2.waitKey(1) & 0xFF == ord('q'):
        #         break

    out.release()
    print("[INFO] Finished recording video.")

def main():
    # No relay usage, no need for GPIO config beyond setwarnings(False) if you want
    GPIO.setwarnings(False)

    print("[INFO] Loading YOLO model from:", MODEL_PATH)
    model = YOLO(MODEL_PATH)

    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, CAM_WIDTH)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, CAM_HEIGHT)
    if not cap.isOpened():
        print("[ERROR] Cannot open camera (index 0).")
        return

    last_trigger_time = 0.0
    forced_cooldown_until = 0.0
    triggers = []
    frame_count = 0

    print("[INFO] Starting detection loop. We'll do inference every 240 frames (~8s).")
    if SHOW_VIDEO:
        print("Press 'q' in the window to exit if you have a desktop/VNC.")

    try:
        while True:
            ret, frame = cap.read()
            if not ret:
                print("[ERROR] Camera read failed.")
                break

            frame_count += 1
            now = time.time()

            # forced cooldown
            if now < forced_cooldown_until:
                if SHOW_VIDEO:
                    cv2.imshow("Chicken Guardian", frame)
                    if cv2.waitKey(1) & 0xFF == ord('q'):
                        break
                continue

            # skip detection if not time
            if frame_count % FRAMES_PER_INFERENCE != 0:
                if SHOW_VIDEO:
                    cv2.imshow("Chicken Guardian", frame)
                    if cv2.waitKey(1) & 0xFF == ord('q'):
                        break
                continue

            # ========== RUN YOLO INFERENCE ==========
            results = model.predict(frame, conf=CONF_THRESHOLD)
            bboxes = results[0].boxes

            predator_boxes = []
            chicken_boxes = []

            for box in bboxes:
                cls_id = int(box.cls[0])
                conf = float(box.conf[0])
                raw_label = model.names[cls_id]
                label = raw_label.lower()

                # unify possumbrushtail => opossum for audio
                if label == "possumbrushtail":
                    audio_label = "opossum"
                else:
                    audio_label = label

                xyxy = box.xyxy[0].cpu().numpy().astype(int)
                (x1, y1, x2, y2) = xyxy

                # Draw bounding box
                if label in PREDATOR_CLASSES:
                    color = (0, 0, 255)    # red
                elif label == CHICKEN_CLASS:
                    color = (255, 255, 0)  # cyan
                else:
                    color = (0, 255, 0)    # green

                text = f"{label}: {conf:.2f}"
                cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
                cv2.putText(frame, text, (x1, max(y1 - 10, 0)),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

                if label in PREDATOR_CLASSES:
                    predator_boxes.append((label, audio_label, conf, xyxy))
                elif label == CHICKEN_CLASS:
                    chicken_boxes.append(xyxy)

            # Rate-limiting triggers
            for (pred_label, audio_label, pred_conf, pred_xyxy) in predator_boxes:
                if (now - last_trigger_time) < MIN_TIME_BETWEEN_TRIGGERS:
                    continue

                # remove triggers older than 5 min
                cutoff = now - 300
                triggers = [t for t in triggers if t >= cutoff]
                if len(triggers) >= MAX_TRIGGERS_5_MIN:
                    print("[WARNING] Too many triggers -> 2-hour cooldown.")
                    forced_cooldown_until = now + (FORCED_COOLDOWN_HOURS * 3600)
                    break

                # check if escalated
                escalated = any(boxes_intersect(pred_xyxy, cxy) for cxy in chicken_boxes)
                if escalated:
                    print(f"[INFO] ESCALATED alarm: {pred_label} overlapping chicken!")
                    alarm_sound = ESCALATED_ALARM
                else:
                    alarm_sound = SOUND_MAP.get(audio_label, None)

                last_trigger_time = now
                triggers.append(now)
                print(f"[INFO] Detected {pred_label} (conf={pred_conf:.2f}), escalated={escalated}")

                # Save snapshot
                time_str = time.strftime("%Y%m%d_%H%M%S")
                file_name = f"{time_str}_{pred_label}_{pred_conf:.2f}.jpg"
                save_path = os.path.join(SAVE_DIR, file_name)
                cv2.imwrite(save_path, frame)
                print(f"[INFO] Snapshot saved: {save_path}")

                # === Record short video clip ===
                record_video(cap, RECORD_DURATION, RECORD_FPS)

                # === Then play audio & send Notecard ===
                play_alarm_sound(alarm_sound)
                send_alert_via_notecard(pred_label, pred_conf, escalated=escalated)
                # After this, we simply return to the main loop and detection resumes.

            # Show debug window if desired
            if SHOW_VIDEO:
                cv2.imshow("Chicken Guardian", frame)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
    except KeyboardInterrupt:
        print("[INFO] Exiting on Ctrl+C.")
    finally:
        cap.release()
        if SHOW_VIDEO:
            cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
