A Grand Vision
Imagine upgrading your Raspberry Pi with an “eye” so it can recognize and target your cat.
Or wave hello and take snapshots when it sees you.
Imagine it being able to find Waldo in under 3 seconds, lol.
Imagine a smart security system that recognizes intruders.
I succeeded in acquiring this knowledge and more with just a Raspberry Pi, a webcam, and a Python library known as OpenCV.
Open Computer Vision
OpenCV is a powerful, open-source computer vision library, and it's pretty much what it sounds like: it allows you to program your Raspberry Pi to see, and to respond to what it sees. You can do everything from image analysis and face recognition to video recording and snapshots, among other cool things.
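Just to give you a taste, here's a bare-bones sketch (my own minimal example, not the project code from later in this post) that grabs a single snapshot from a USB webcam and saves it to disk:

import cv2

cap = cv2.VideoCapture(0)               # open the first USB webcam
ret, frame = cap.read()                 # grab a single frame
if ret:                                 # ret is False if the grab failed
    cv2.imwrite('snapshot.jpg', frame)  # save the frame as a JPEG
cap.release()                           # free the camera for other programs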
Gear
So to get started, you need a Raspberry Pi 3B (haven't tested the 3B+ yet):
And a USB webcam of practically any kind.
My weapon of choice: The Logitech HD Pro Webcam C920
I'm in love with the C920 for its excellent recording quality, both in sound with its dual mics and in its HD 1080p camera. It's proven its versatility in many of my projects, including computer vision and voice commands.
OpenCV on the Raspberry Pi 3
I tried many ways of installing OpenCV over many weeks, with many miserable results that wreaked havoc on my system.
Eventually I found one that works: upgrading to the latest Raspbian Jessie with PIXEL.
It would seem the Pi can't handle the full version of OpenCV. It's just way too big and powerful, and the build usually fails about an hour into installation.
So this trimmed version of OpenCV includes the bare essentials like recognition, snapshots, and video recording, and apparently removes some of the higher-functioning, CPU-heavy features.
Though I imagine those are things I wouldn't really use anyway, as I haven't had any trouble yet. Besides, if it's good enough for my Pi, it's good enough for me. 🙂
Installation took a while, as expected. In the meantime, I had a look at the official examples to find anything interesting that I may want to mess around with down the road.
Face Recognition
My main motivation for seeking this knowledge was to be able to grant my projects the ability to recognize and respond to visual stimuli. So I figured I'd start with face/eye recognition:
Heh, note that it recognizes my nostrils as eyes.
I wanted to see just how specific recognition can be, so I took it a small step further with smile recognition:
And from there, choosing what you want your Pi to recognize is as easy as modifying a single line of code. And it's just as easy to program a response to said recognition.
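To show the general shape of it, here's a minimal detection loop (a generic OpenCV sketch, not my donation code; the cascade path is an assumption, so point it at wherever the file lives on your Pi):

import cv2

# One of the cascades that ships with OpenCV (path assumed; adjust to your install)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # detection runs on grayscale
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)  # scale factor, min neighbors
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # box each face
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):                # press q to quit
        break

cap.release()
cv2.destroyAllWindows()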
Code
I'll give you my personal Python 3 code for basic face recognition as well as smile recognition in exchange for a small donation.
All code comes with highly detailed comments so that you can thoroughly understand my method snippet by snippet, and it can be applied however you like in ANY computer vision project utilizing Python and the Raspberry Pi. (All donations go toward site maintenance and new research.)
One-time donation, lifetime benefits.
Haar Cascades
Now.
While getting to know this thing, you may have noticed that the face has to be positioned just right in order to be recognized.
The key is in the Haar cascades you call up. Haar cascades are a sort of library that you use in your code to allow your machine to recognize what it sees. A cascade can be built from a picture library of anything that you want your system to recognize.
So if you want, say, your computer to recognize you and only you, you would put a bunch of pictures of yourself, from all angles and lighting conditions, into a custom Haar cascade and use that in the script.
The more pictures of varying types you have of the subject, the easier it is for your Pi to recognize said subject.
OpenCV already has a few ready-to-go cascades in its directory to experiment with if you don't need a custom one, and you can easily find ready-made cascades on the net to use in your projects.
All you'd have to do in the code is switch out the path of the cascade with the one you want.
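For example, going from faces to smiles is just pointing the classifier at a different file (file names as shipped in OpenCV's data directory; exact paths depend on your install):

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# ...becomes, say:
smile_cascade = cv2.CascadeClassifier('haarcascade_smile.xml')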
Pretty cool, huh? 🙂
Raspberry Pi Computer Vision Part 2
OpenCV with Servos
So by using Haar cascades, we can choose what our Pi sees and reacts to, such as a face, or even something as specific as a smile.
But my question at this point was: can I apply my little Adafruit 16-channel servo hat system to get a nice servo targeting/tracking thing going?
Turns out I could 🙂 and much more easily than I thought it would be:
The Adafruit 16-channel servo hat is a Raspberry Pi add-on that gives the Pi the ability to seamlessly control up to 16 hobby servos. A fantastic and essential piece of hardware when it comes to physical computing.
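To give you an idea, here's a minimal sketch of driving one servo on the hat with Adafruit's old Adafruit_PWM_Servo_Driver Python library; the pulse limits and the angle mapping are typical values I'm assuming here, so tune them for your servos:

from Adafruit_PWM_Servo_Driver import PWM

pwm = PWM(0x40)      # default I2C address of the 16-channel hat
pwm.setPWMFreq(60)   # 60 Hz is the standard update rate for hobby servos

SERVO_MIN = 150      # pulse length (out of 4096) at one end of travel
SERVO_MAX = 600      # pulse length at the other end

def set_angle(channel, angle):
    # Map 0-180 degrees onto the servo's pulse range (rough linear mapping)
    pulse = SERVO_MIN + (SERVO_MAX - SERVO_MIN) * angle / 180
    pwm.setPWM(channel, 0, int(pulse))

set_angle(0, 90)     # center the servo plugged into channel 0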
Code
The code works much like the previous code, except with the upgrade of my servo controller neatly merged in, to now allow for actual physical tracking of your desired target.
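Roughly, the merge looks like this (a sketch of the logic only, not the full donation code; it leans on the assumed set_angle helper from the sketch above, and the 7.5-degree scale is just a starting point):

def track(face, cam_pan, cam_tilt, frame_w, frame_h):
    # Nudge the pan/tilt servos toward the detected face
    x, y, w, h = face
    err_x = (x + w / 2 - frame_w / 2) / (frame_w / 2)   # -1..1 horizontal offset
    err_y = (y + h / 2 - frame_h / 2) / (frame_h / 2)   # -1..1 vertical offset
    cam_pan -= err_x * 7.5    # scale the offset to degrees; flip the signs
    cam_tilt += err_y * 7.5   # to match how your servos are mounted
    cam_pan = max(0, min(180, cam_pan))                 # clamp to servo range
    cam_tilt = max(0, min(180, cam_tilt))
    set_angle(0, cam_pan)     # pan servo on channel 0
    set_angle(1, cam_tilt)    # tilt servo on channel 1
    return cam_pan, cam_tilt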
Donate for detailed code (Python 3) on targeting and tracking any object using OpenCV 3, the Adafruit servo hat, and the Raspberry Pi.
With servos at your disposal, you can really make full use of OpenCV's potential.
Imagine pulling off:
-Automatic surveillance cameras that follow and record unfamiliar people.
-Smart cameras that track your movement while recording, for better YouTube videos.
-A face-activated door that locks itself if it doesn't recognize you, and opens if it does.
-Programs that activate upon recognizing certain things.
-Interpreting sign language!
-Alerts on your target based on the target's body language.
-Or even a bionic selfie stick…
All simply by swapping out the Haar cascades to have your camera track just about anything.
Sky's the limit.
See Ya Later
Well, that's just about everything you need to get started on some simple yet crazy computer vision mischief.
Don't forget to comment, like and share 🙂
Cheers!
Did donate but didn’t receive code
Hmm please try again. I have no donation data from you.
Ooh, I saw the previous messages. I will look into it. Thank you so much.
Ok, this has never happened before. Have you tried clearing your browser cache and cookies?
Never mind, I got your Stripe donation. Sorry about that. I emailed you the code; let me know when you get it. And thank you so much!
Got it, thank you for making this available
Hi, I donated but didn’t receive the code, got this error after transaction: 500 Internal Server Error
An error occurred while processing this request.
Website owner? Check your code and/or debug log. If you need assistance, contact support.
Hi, how should I change the code if I can't use the Adafruit servo hat? I would like to use only ServoBlaster with an extra power supply for the servos. https://github.com/richardghirst/PiBits/tree/master/ServoBlaster
Hey Jussi. As long as it's in Python, all you'd have to do is alter the fuchikoma file so that the "move" function works like it would using your ServoBlaster. That way it should work fine when you run the face tracking program, which simply imports fuchikoma as a servo controller.
In other words, you pretty much make your own fuchi.move function (using ServoBlaster) so that facetracking.py uses THAT as its logic to move. Thank you for the donation, by the way, and happy holidays! 🙂
SO! If you're working with the Python version of ServoBlaster, https://github.com/jabelone/pythonSB, you'd be in business 🙂
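*Something like this ought to do the trick (a rough sketch, assuming ServoBlaster's servod daemon is already running; positions are in ServoBlaster's 10 µs units):

def move(channel, position):
    # ServoBlaster accepts commands like "0=150" written to its device file
    with open('/dev/servoblaster', 'w') as dev:
        dev.write('%d=%d\n' % (channel, position))

move(0, 150)   # channel 0 to a 1.5 ms pulse (roughly centered)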
Copy, thanks for that. I've got it running OK now, BUT it tilts up when it means to tilt down. Where can I flip that?
Sick! There's a line that gives you an updated x and y. One of 'em oughtta be negative, either the x or the y. Switch it 🙂
Hey, I lost the link to the code after I donated a few weeks ago. Could you send it to me? adrian.stucker @ gmail.com
🙂 Sure thing. I'll have it over in a bit.
Hey could I get that link?
Heh, sorry for the delay! Sent it. 🙂
I'm getting the
TypeError: unbound method move() must be called with FUCHI instance as first argument (got int instance instead)
error also. What can I do? Tried running Chris SK's workaround and the program just hangs at ret, frame = cap.read()
I'm so close!
Ooh, that's a good one. I'm out n' about today; I'll get back to you tomorrow. Simple lil adjustment though, don't sweat it! And thank you so much!
Anywhere I can look for the fix? Really trying to get this working right now while I have time. Thanks!
🙂 Man on a mission. OK. Pretty sure it's how you import and use the fuchikoma. (Haven't gotten a chance to update the code.) Mess around with: from fuchikoma import Fuchi as F. Then your functions should look something like F.move() or F().move(). I wouldn't be able to tell you without the code in front of me, but more than likely, the fix would be something like that.
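To spell out the idea (class name guessed from your traceback; adjust to whatever fuchikoma actually exposes): that error means move() is being called on the class itself instead of on an instance, so create an instance first:

from fuchikoma import FUCHI   # class name assumed from the traceback

fuchi = FUCHI()               # instantiate the servo controller
fuchi.move(0, 90)             # now move() is bound to an instance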
Sent you a donation! Great work! I hope your code can help me further with face tracking.
Thank you so much! Keep me posted on your progress and feel free to ask for help. By the way, I have a growing Facebook group where we like to showcase cool stuff and help each other grow. If you're interested, check it out and set your password as Oliver 🙂 University of Mad Science
Are you still working on this? I'm a noob and have a project it looks like this will be perfect for.
Yes I am actually! And I update the code every now and then so that the knowledge will never be obsolete.
🙂 Besides, if you have any questions you can always ask. It'll help me improve this site exponentially.
Great! Sent you a donation, looking forward to working with the code. I think I have 80% of the hardware I need.
Sick! Thank you so much! Literally everything I get is going into perfecting this blog as well as my logic so that I may better spread the knowledge 🙂
I've got what I need for a prototype; when it's going I'll throw something up. It's been an on-and-off project I decided I want to complete this year, and that sent me out in search of OpenCV code to work with. By the way, how do I obtain the code?
There was no download link!!?
Nope, all I got was the PayPal receipt. Always something to fix, I wasn’t worried and suspected you thought something had happened. No worries, I dropped an email to the PayPal address.
Ron
Ok, gotcha. Lol, I apologize for the name on the email; it's the name of my band. Forgot to edit that too 🙂
Whatya missing, by the way?
Servos powerful and mounts strong enough to do what I want. I've been enamored with this since I first saw it: http://www.roboticgizmos.com/pinokio-arduino-driven-robotic-lamp/ It was built on Arduino, which I could do, but a lot has changed since 2012. My office is long and narrow with the door at the far end from my desk, and I want to put it on a stand by the door. I have a small engine lathe and the ability to mill small parts, but I want to get the code working first, then do the metal work when it warms up.
That sounds so cool! Hey keep me posted. I’m all about helping you make that happen. I’d love to see a video too!
*This is exactly what you're looking for. Once your servos are calibrated, it oughtta be pretty straightforward. 🙂
Hi, I donated and didn't receive the code, with the following error after the transaction: 500 Internal Server Error
An error occurred while processing this request.
Website owner? Check your code and/or debug log. If you need assistance, contact support.
So I fixed the problem… I made this code, which has the move function in it. I already tested it and it works fine. Here it is:
import cv2
import numpy as np
from Adafruit_PWM_Servo_Driver import PWM
import traceback
import time
import sys

def mover(servo, angle):  # , delta=170):
    # delay = max(delta * 0.003, 0.03)  # calculate delay
    zero_pulse = (servoMin + servoMax) / 2   # half-way == 0 degrees
    pulse_width = zero_pulse - servoMin      # maximum pulse to either side
    pulse = zero_pulse + (pulse_width * angle / 80)
    # print("angle=%s pulse=%s" % (angle, pulse))
    pwm.setPWM(servo, 0, int(pulse))
    # time.sleep(delay)  # sleep to give the servo time to do its thing

# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Initialise the PWM device using the default address
pwm = PWM(0x40)
servoMin = 150  # Min pulse length out of 4096
servoMax = 600  # Max pulse length out of 4096
pwm.setPWMFreq(60)

cam_pan = 0     # initial position (was 77)
cam_tilt = 60   # initial position (was 77)

cap = cv2.VideoCapture(0)
FRAME_W = 440
FRAME_H = 280
cap.set(3, FRAME_W)  # width
cap.set(4, FRAME_H)  # height

cascPath = '/home/pi/Seguidor/haar/lbpcascades/lbpcascade_frontalface.xml'  # this address has to be changed depending on where you have this file
# face_cascade = cv2.CascadeClassifier('/home/pi/Desktop/NAVI/memory/haar/haarcascade_frontalface_default.xml')
face_cascade = cv2.CascadeClassifier(cascPath)
# eye_cascade = cv2.CascadeClassifier('/home/pi/Desktop/NAVI/memory/haar/haarcascade_eye.xml')

mover(0, cam_pan)
mover(1, cam_tilt)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    print(len(faces))
    for (x, y, w, h) in faces:
        # Draw a rectangle around the face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # //////////////////////////////////////////////
        # Track first face
        # Get the center of the face
        x = x + (w / 2)
        y = y + (h / 2)
        # Correct relative to center of image
        turn_x = float(x - (FRAME_W / 2))
        turn_y = float(y - (FRAME_H / 2))
        # Convert to percentage offset
        turn_x /= float(FRAME_W / 2)
        turn_y /= float(FRAME_H / 2)
        # Scale offset to degrees
        turn_x *= 7.5  # HFOV
        turn_y *= 7.5  # VFOV
        cam_pan += -turn_x  # this direction depends on the way you have your servos attached to the camera
        cam_tilt += turn_y  # this direction depends on the way you have your servos attached to the camera
        # Clamp pan/tilt to 0..180 degrees
        cam_pan = max(0, min(180, cam_pan))
        cam_tilt = max(0, min(180, cam_tilt))
        # Update the servos
        mover(0, cam_pan)
        mover(1, cam_tilt)
        # /////////////////////////////////
        '''roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)'''
    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
Ah, so you took fuchikoma's "heart" and put it directly into the script. Pretty cool!
When I built fuchikoma, I designed it to be a modular servo control system that can be plugged into anything that I want to move.
Like, say, this more developed version: https://github.com/matty0077/Project-Nasbots/blob/master/NASBOT/PHYSICAL/fuchikoma.py. It makes it so easy to add servo movement to any script by simply importing the functions; you can turn anything into a plug-n'-play robot. So if you ARE going into more servo work, I'd recommend finding out more about that lil problem. Otherwise this code is great! And I'm glad you found me! 🙂
*sorry for the late reply by the way.
Hey man!! Thanks for your help!!! So the problem is not from the camera; the problem comes when importing fuchikoma. I changed some things and it looks like it's running now. I'll send you the code when I finish it. Thanks again, I appreciate it!!!
🙂 Awesome!!! Can't wait.
Thank you!! I don't know why I still get the same problem:
Traceback (most recent call last):
File "FaceTrack.py", line 18, in
FUCHI.move(0,cam_pan)
TypeError: unbound method move() must be called with FUCHI instance as first argument (got int instance instead)
——————
(program exited with code: 1)
Press return to continue
I really don't know what to do
Whaaa? Hmm… OK! So the good news is, it's not my fault or yours. The BETTER news is, I think I've got your answer: http://stackoverflow.com/questions/40230679/typeerror-unbound-method-start-preview-must-be-called-with-picamera-instance
So: 1. make sure the camera is enabled in sudo raspi-config. 2. If you're using the picam, make sure it's REALLY plugged in. 3. Instead of "from fuchikoma import *", try "import fuchikoma".
I would run a "sudo apt-get update" as well.
Hey man!! Thanks for your help! Sorry to bother you… I'm getting this error message:
Traceback (most recent call last):
File "FaceTrack.py", line 18, in
FUCHI.move(5,cam_pan)
TypeError: unbound method move() must be called with FUCHI instance as first argument (got int instance instead)
——————
(program exited with code: 1)
Press return to continue
What can I do?? Is fuchikoma wrong or something??
No, please keep bothering me if you can. It helps me streamline my blog and correct my silly mistakes. I actually thank you for it 🙂 So! The 5 and 6 on lines 18, 19, 60 and 61 are the slots on the servo board… and 5 and 6 are the shoulder and elbow on my robot targeting system (sorry). Switch the "5" on lines 18 and 19 to "0" and the "6" on lines 19 and 61 to "1" and it should work fine. And remember to plug them in correctly. The pan servo (side to side) is 0 and the tilt (up and down) servo is 1, and they should be plugged into the first 2 slots of the actual board to reflect this.
🙂 Keep me posted. And thanks again. I'll update the GitHub repo in a sec.
Hello!! EVILGENIUS0077, so I just bought everything I need for the project. I also uploaded the Py code to the Raspberry, but when I try to run the code I get an error that says "from servodriver import ServoDriver ImportError: no module named servodriver". I don't know if I need to find a library somewhere, or if you missed uploading that file.
Oh my goodness! OK, so servodriver.py is a file that allows use of continuous rotation servos… and I guess I forgot to delete all trace of that experiment. Delete anything that says servodriver and I'll fix that in a bit 🙂 Sorry man.
Tell me how it works out after. If you have a video, I’ll even post it.
thank you very much!!!!
how many servos are you using??
Hey, cheers Chris! I'm glad you ran into Evil Genius. I use two 9g servos in a pan/tilt chassis… like this one: http://amzn.to/2mZ46Nc or this one: http://amzn.to/2onrapc 🙂 which I used before being able to 3D print these myself.
thank you!!
And be sure to let me know how it works out for you. 🙂