Computer Vision with OpenCV and Raspberry Pi 3

A Grand Vision

Imagine upgrading your Raspberry Pi with an “eye” so it can recognize and target your cat.

Or to wave hello and take snapshots when it sees you.

Imagine it being able to find Waldo in less than 3 seconds lol.

Imagine a smart security system that recognizes intruders.

I succeeded in acquiring this knowledge and more with just a Raspberry Pi, a webcam,
and a Python library known as OpenCV.

Open Computer Vision

OpenCV is a powerful, open-source computer vision library, and it’s pretty much what it sounds like: it allows you to program your Raspberry Pi to see, and to respond to what it sees. You can do everything from image analysis and face recognition to video recording and snapshots, among other cool things.

Gear

So to get started, you need a Raspberry Pi 3B (I haven’t tested the 3B+ yet):

The perfect computer, on which I base all my research.

And a USB webcam of practically any kind.

My weapon of choice: The Logitech HD Pro Webcam C920

My go-to webcam for OpenCV as well as voice-command Raspberry Pi projects.

I’m in love with the C920 for its excellent recording quality, both in sound with its dual mics and in its HD 1080p camera. It’s proven its versatility in many of my projects, including computer vision and voice commands.

OpenCV on the Raspberry Pi 3

I spent many weeks trying many ways of installing OpenCV, with miserable results that wreaked havoc on my system.

Eventually I found one that works, after upgrading to the latest Raspbian Jessie with PIXEL.

It would seem that the Pi can’t handle the full version of OpenCV. It’s just way too big, and the build usually fails about an hour into installation.

So this trimmed version of OpenCV includes the bare essentials like recognition, snapshots, and video recording, and apparently leaves out some of the higher-functioning, CPU-heavy features.

Though I imagine those are things I wouldn’t really use anyway, as I haven’t had any trouble yet. Besides, if it’s good enough for my Pi, it’s good enough for me. 🙂

Installation took a while, as expected. In the meantime, I had a look at the official examples to find anything interesting that I may want to mess around with down the road.
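Once it finishes, a quick way to confirm the install actually worked is to fire up Python 3 and check the version (just a tiny sanity-check snippet; the exact number printed depends on the build you installed):

import cv2 # the OpenCV Python bindings
print(cv2.__version__) # prints the installed OpenCV version, e.g. 3.x.x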

Face Recognition

My main motivation for seeking this knowledge was to grant my projects the ability to recognize and respond to visual stimuli. So I figured I’d start with face/eye recognition:

Heh, note that it recognizes my nostrils as eyes.

I wanted to see just how specific recognition can be, so I took it a small step further with smile recognition:

And from there, choosing what you want your Pi to recognize is as easy as modifying a single line of code. And it’s just as easy to program a response to said recognition.
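To give you a feel for the overall shape of such a script, here’s a bare-bones face detection sketch (this is not my donation code, just a minimal outline; the cascade file is one of OpenCV’s standard bundled ones, and you may need its full path depending on where it lives on your system):

import cv2

# Load one of OpenCV's bundled Haar cascades (adjust the path for your install)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0) # first USB webcam

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Cascades work on grayscale images
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        # Draw a green box around each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()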

Code

I’ll give you my personal Python 3 code on basic face recognition as well as smile recognition in exchange for a small donation.





All code comes with highly detailed comments so that you can thoroughly understand my method snippet by snippet, and it can be applied however you like in ANY computer vision project utilizing Python and the Raspberry Pi. (All donations go toward site maintenance and new research.)

One-time donation, lifetime benefits.

Haar Cascades
Now.

While getting to know this thing, you may have noticed that the face has to be positioned just right in order to be recognized.

The key is in the Haar cascades you call up. Haar cascades are trained classifier files (usually XML) that you load in your code to allow your machine to recognize what it sees. A cascade can be trained on pictures of anything you want your system to recognize.

So if you want, say, your computer to recognize you and only you, you would train a custom Haar cascade on a bunch of pictures of yourself from all angles and lighting conditions, and use that in the script.

The more pictures of varying types you have of the subject, the easier it is for your Pi to recognize said subject.

OpenCV already has a few ready-to-go cascades in its data directory to experiment with if you don’t need a custom cascade, and you can easily find ready-made cascades on the net to use in your projects.

All you’d have to do in the code is switch out the path of the cascade with the one you want.
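For instance, going from face detection to smile detection could look like this (assuming OpenCV’s standard bundled cascade files are reachable on your system):

# Detect faces:
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# ...or detect smiles instead: only the cascade path changes
cascade = cv2.CascadeClassifier('haarcascade_smile.xml')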

Pretty cool, huh? 🙂

Raspberry Pi Computer Vision, Part 2

OpenCV with Servos

So by using Haar cascades, we can choose what our Pi sees and reacts to, such as a face or even something as specific as a smile.

But my question at this point was: can I apply my little Adafruit 16-channel servo HAT system to get a nice servo targeting/tracking thing going?

Both standard and stackable configurations of the Adafruit 16-channel servo HAT.

Turns out I could 😉 and it was much easier than I thought it would be:

The Adafruit 16-channel servo HAT is a Raspberry Pi add-on that gives the Pi the ability to seamlessly control up to 16 hobby servos. A fantastic and essential piece of hardware when it comes to physical computing.

Code

The code works much like the previous code, except my servo controller is now neatly merged in, allowing actual physical tracking of your desired target.
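The core idea looks something like this (a minimal sketch using the Adafruit_PWM_Servo_Driver library that comes with the HAT; the channel numbers, pulse range, and degree mapping here are assumptions you’d calibrate for your own rig):

from Adafruit_PWM_Servo_Driver import PWM

pwm = PWM(0x40) # servo HAT at its default I2C address
pwm.setPWMFreq(60) # standard 60 Hz servo update rate

servoMin = 150 # min pulse length out of 4096; calibrate for your servos
servoMax = 600 # max pulse length out of 4096

def set_angle(channel, angle):
    # Hypothetical helper: map 0-180 degrees onto the calibrated pulse range
    pulse = servoMin + (servoMax - servoMin) * angle / 180.0
    pwm.setPWM(channel, 0, int(pulse))

# Inside the detection loop, you'd turn the face's offset from the frame
# center into pan/tilt corrections and push them to channels 0 and 1:
# cam_pan += -turn_x
# cam_tilt += turn_y
# set_angle(0, cam_pan)
# set_angle(1, cam_tilt)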

Donate for detailed code (Python 3) on targeting and tracking any object using OpenCV 3, the Adafruit servo HAT, and the Raspberry Pi.





With servos at your disposal, you can really make full use of OpenCV’s potential.

Imagine pulling off:
-Automatic surveillance cameras that follow and record unfamiliar people.

-Smart cameras that track your movement while recording, for better YouTube videos.

-A face-activated door that locks itself if it doesn’t recognize you and opens if it does.

-Activating certain programs upon recognizing certain things.

-Interpreting sign language!

-Alerts on your target based on the target’s body language.

-Or even a bionic selfie stick…

All simply by swapping out the Haar cascades to have your camera track just about anything.

Sky’s the limit.

See Ya Later

Well that’s just about it to get you started on some simple yet crazy computer vision mischief.

Don’t forget to comment, like, and share 🙂

Cheers!


39 thoughts on “Computer Vision with OpenCV and Raspberry Pi 3”

  1. hello!! EVILGENIUS0077, so i just bought everything i need for the project, i also uploaded the Py code to the raspberry, but when i try to run the code i get an error that says “from servodriver import ServoDriver ImportError: No module named servodriver”. i dont know if i need to find a library somewhere, or if you missed uploading that file

    1. Oh my goodness! Ok, so servodriver.py is a file that allows use of continuous rotation servos… and I guess I forgot to delete all traces of that experiment. Delete anything that says servodriver and I’ll fix that in a bit 🙂 sorry man

  2. hey man!! thanks for your help! sorry to bother you… im getting this error message:
    Traceback (most recent call last):
    File "FaceTrack.py", line 18, in
    FUCHI.move(5,cam_pan)
    TypeError: unbound method move() must be called with FUCHI instance as first argument (got int instance instead)
    ——————
    (program exited with code: 1)
    Press return to continue
    what can i do?? is fuchikoma wrong or something??

    1. No, please keep bothering me if you can. it helps me streamline my blog and correct my silly mistakes. I actually thank you for it 🙂 so! The 5 and 6 on lines 18, 19, 60 and 61 are the slots on the servo board… 5 and 6 are the shoulder and elbow on my robot targeting system (sorry). Switch the “5” on lines 18 and 19 to “0” and the “6” on lines 60 and 61 to “1” and it should work fine. And remember to plug them in correctly. The pan servo (side to side) is 0 and the tilt (up and down) servo is 1, and they should be plugged into the first 2 slots of the actual board to reflect this.

  3. thank you!! i dont know why i still get the same problem:

    Traceback (most recent call last):
    File "FaceTrack.py", line 18, in
    FUCHI.move(0,cam_pan)
    TypeError: unbound method move() must be called with FUCHI instance as first argument (got int instance instead)
    ——————
    (program exited with code: 1)
    Press return to continue

    i really dont know what to do

    1. So: 1 - make sure the camera is enabled in sudo raspi-config. 2 - If you’re using the Picam, make sure it’s REALLY plugged in. 3 - Instead of “from fuchikoma import *”, try out “import fuchikoma”.

  4. hey man!! thanks for your help!!! so the problem is not from the camera, the problem comes when importing fuchikoma. i changed some things, and it looks like it’s running now. ill send you the code when i finish it. thanks again, i appreciate it!!!

  5. so i fixed the problem… i made this code which has the function mover in it. i already tested it and it works fine, here it is:

    import cv2
    import numpy as np
    from Adafruit_PWM_Servo_Driver import PWM
    import traceback
    import time
    import sys

    def mover(servo, angle): #, delta=170):
        #delay = max(delta * 0.003, 0.03) # calculate delay
        zero_pulse = (servoMin + servoMax) / 2 # half-way == 0 degrees
        pulse_width = zero_pulse - servoMin # maximum pulse to either side
        pulse = zero_pulse + (pulse_width * angle / 80)
        #print("angle=%s pulse=%s" % (angle, pulse))
        pwm.setPWM(servo, 0, int(pulse))
        #time.sleep(delay) # sleep to give the servo time to do its thing
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

    # Initialise the PWM device using the default address
    pwm = PWM(0x40)

    servoMin = 150 # Min pulse length out of 4096
    servoMax = 600 # Max pulse length out of 4096

    pwm.setPWMFreq(60)

    cam_pan = 0 # 77 initial position
    cam_tilt = 60 # 77 initial position

    cap = cv2.VideoCapture(0)

    FRAME_W = 440
    FRAME_H = 280
    cap.set(3, 440) # width
    cap.set(4, 320) # height

    cascPath = '/home/pi/Seguidor/haar/lbpcascades/lbpcascade_frontalface.xml' # this address has to be changed depending on where you have this file
    #face_cascade = cv2.CascadeClassifier('/home/pi/Desktop/NAVI/memory/haar/haarcascade_frontalface_default.xml')
    face_cascade = cv2.CascadeClassifier(cascPath)
    #eye_cascade = cv2.CascadeClassifier('/home/pi/Desktop/NAVI/memory/haar/haarcascade_eye.xml')
    mover(0, cam_pan)
    mover(1, cam_tilt)

    while(True):
        # Capture frame-by-frame
        ret, frame = cap.read()

        # Our operations on the frame come here
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        print(len(faces))

        for (x, y, w, h) in faces:
            # Draw a rectangle around the face
            cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
            #//////////////////////////////////////////////
            # Track first face

            # Get the center of the face
            x = x + (w/2)
            y = y + (h/2)

            # Correct relative to center of image
            turn_x = float(x - (FRAME_W/2))
            turn_y = float(y - (FRAME_H/2))

            # Convert to percentage offset
            turn_x /= float(FRAME_W/2)
            turn_y /= float(FRAME_H/2)

            # Scale offset to degrees
            turn_x *= 7.5 # VFOV
            turn_y *= 7.5 # HFOV
            cam_pan += -turn_x # this direction depends on the way you have your servos attached to the camera
            cam_tilt += turn_y # this direction depends on the way you have your servos attached to the camera

            # Clamp pan/tilt to 0 to 180 degrees
            cam_pan = max(0, min(180, cam_pan))
            cam_tilt = max(0, min(180, cam_tilt))

            # Update the servos
            mover(0, cam_pan)
            mover(1, cam_tilt)
            #/////////////////////////////////
            '''roi_gray = gray[y:y+h, x:x+w]
            roi_color = frame[y:y+h, x:x+w]
            eyes = eye_cascade.detectMultiScale(roi_gray)
            for (ex,ey,ew,eh) in eyes:
                cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)'''

        # Display the resulting frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # When everything done, release the capture
    cap.release()
    cv2.destroyAllWindows()

    1. Like, say, this more developed version: https://github.com/matty0077/Project-Nasbots/blob/master/NASBOT/PHYSICAL/fuchikoma.py, which makes it so easy to add servo movement to any script by simply importing the functions. You can turn anything into a plug-n’-play robot. So if you ARE going into more servo work, I’d recommend finding out more about that lil problem. Otherwise this code is great! And I’m glad you found me! 🙂

    1. 🙂 besides, if you have any questions you can always ask. It’ll help me improve this site exponentially.

      1. Great! Sent you a donation, looking forward to working with the code. I think I have 80% of the hardware I need.

      2. Sick! Thank you so much! Literally everything I get is going into perfecting this blog as well as my logic so that I may better spread the knowledge 🙂

      3. I’ve got what I need for a prototype; when it’s going I’ll throw something up. It’s been an on-and-off project I decided I want to complete this year, and that sent me out in search of OpenCV code to work with. By the way, how do I obtain the code?

      4. Nope, all I got was the PayPal receipt. Always something to fix, I wasn’t worried and suspected you thought something had happened. No worries, I dropped an email to the PayPal address.

        Ron

      5. Ok gotcha. Lol, I apologize for the name on the email, it’s the name of my band. Forgot to edit that too 🙂

      6. Servos powerful and mounts strong enough to do what I want. I’ve been enamored with this since I first saw it: http://www.roboticgizmos.com/pinokio-arduino-driven-robotic-lamp/ It was built on Arduino, which I could do, but a lot has changed since 2012. My office is long and narrow with the door at the far end from my desk, and I want to put it on a stand by the door. I have a small engine lathe and the ability to mill small parts, but I want to get the code working first, then do the metal work when it warms up.

      7. That sounds so cool! Hey keep me posted. I’m all about helping you make that happen. I’d love to see a video too!

      8. This is exactly what you’re looking for. Once your servos are calibrated, it oughtta be pretty straightforward. 🙂

    1. Thank you so much! Keep me posted on your progress and feel free to ask for help. By the way, I have a growing Facebook group where we like to showcase cool stuff and help each other grow. If you’re interested, check it out and set your password as Oliver 🙂 University of Mad Science
