Computer Vision with OpenCV and the Raspberry Pi 3

A Grand Vision

Imagine upgrading your Raspberry Pi with an "eye" so it can recognize and target your cat.

Or wave hello and take snapshots when it sees you.

Imagine it being able to find Waldo in under three seconds.

Imagine a smart security system that recognizes intruders.

I acquired this knowledge, and more, with just a Raspberry Pi, a webcam
and a Python library known as OpenCV.

Open Computer Vision

OpenCV is a powerful, open-source computer vision library, and it's pretty much what it sounds like: it lets you program your Raspberry Pi to see, and to respond to what it sees. You can perform image analysis, face recognition, video capture and snapshots, among other cool things.


So to get started, you need a Raspberry Pi 3B (I haven't tested the 3B+ yet):

The perfect computer, on which I base all my research.

And a USB webcam of practically any kind.

My weapon of choice: The Logitech HD Pro Webcam C920

My go-to webcam for OpenCV as well as voice-command Raspberry Pi projects.

I'm in love with the C920 for its excellent recording quality, both in sound with its dual mics and in its HD 1080p camera. It's proven its versatility in many of my projects, including computer vision and voice commands.

OpenCV on the Raspberry Pi 3

I tried many ways of installing OpenCV over many weeks, with miserable results that wreaked havoc on my system.

Eventually I found one that actually works, after upgrading to the latest Raspbian Jessie with PIXEL.

It seems the Pi can't handle the full version of OpenCV. It's just too big and heavy, and the build usually fails about an hour into installation.

So this trimmed version of OpenCV includes the bare essentials, like recognition, snapshots and video recording, and apparently drops some of the heavier, CPU-hungry features.

Though I imagine those are things I wouldn't really use anyway, and I haven't had any trouble yet. Besides, if it's good enough for my Pi, it's good enough for me. 🙂

Installation took a while, as expected. In the meantime, I had a look at the official examples to find anything interesting I might want to mess around with down the road.

Face Recognition

My main motivation for seeking this knowledge was to grant my projects the ability to recognize and respond to visual stimuli. So I figured I'd start with face/eye recognition:

Heh, note that it recognizes my nostrils as eyes.

I wanted to see just how specific recognition can be, so I took it a small step further with smile recognition:

And from there, choosing what you want your Pi to recognize is as easy as modifying a single line of code. Programming a response to said recognition is just as easy.


I'll give you my personal Python 3 code for basic face recognition as well as smile recognition in exchange for a small donation.

[sell_media_file item_name="test download" name="" label="Donate with Stripe" description="Face and Smile Recognition" amount="3.50" locale="auto" panelLabel="Donate" download_link=""]

All code comes with highly detailed comments so that you can thoroughly understand my method snippet by snippet, and it can be applied however you like in ANY computer vision project using Python and the Raspberry Pi. (All donations go toward site maintenance and new research.)

One time donation, lifetime benefits.

Haar Cascades

While getting to know this thing, you may have noticed that the face has to be positioned just right in order to be recognized.

The key is in the Haar cascades you call up. A Haar cascade is a sort of trained classifier you load in your code to let your machine recognize what it sees, and it can be built from pictures of anything you want your system to recognize.

So if you want, say, your computer to recognize you and only you, you would feed a bunch of pictures of yourself, from all angles and lighting conditions, into a custom Haar cascade and use that in the script.

The more pictures of varying types you have of the subject, the easier it is for your Pi to recognize it.

OpenCV already has a few ready-to-go cascades in its directory to experiment with if you don't need a custom cascade, and you can easily find ready-made cascades on the net to use in your projects.

All you'd have to do in the code is switch out the cascade path for the one you want.

Pretty cool huh? πŸ™‚

Raspberry Pi Computer Vision, Part 2

OpenCV with Servos

So by using Haar cascades, we can choose what our Pi sees and reacts to, such as a face, or even something as specific as a smile.

But my question at this point was: can I apply my little Adafruit 16-channel servo HAT system to get a nice servo targeting/tracking thing going?

Both standard and stackable configurations of the Adafruit 16-channel servo HAT.

Turns out I could 😉 and much more easily than I thought:

The Adafruit 16-channel servo HAT is a Raspberry Pi add-on that lets the Pi seamlessly control up to 16 hobby servos. A fantastic and essential piece of hardware when it comes to physical computing.

The code works much like the previous code, except my servo controller is now neatly merged in, allowing actual physical tracking of your desired target.
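The core of that merge is a small piece of math: convert the detected face's pixel offset from the frame centre into a nudge of the pan/tilt angles. The sketch below is illustrative only; the frame size and degrees-per-frame gain are assumptions for the example, not values from the donation code:

```python
def update_pan_tilt(face_x, face_y, cam_pan, cam_tilt,
                    frame_w=640, frame_h=480, gain_deg=7.5):
    """Nudge pan/tilt servo angles toward a detected face.

    face_x, face_y: centre of the detected face in pixels.
    cam_pan, cam_tilt: current servo angles in degrees (0-180).
    gain_deg: degrees to move for a face at the frame edge
              (a tuning assumption, not a measured field of view).
    """
    # Offset from frame centre as a fraction of half the frame (-1..1)
    turn_x = (face_x - frame_w / 2) / (frame_w / 2)
    turn_y = (face_y - frame_h / 2) / (frame_h / 2)

    # Move the camera toward the face; flip a sign here if your servo
    # is mounted the other way round and it tracks in reverse.
    cam_pan -= turn_x * gain_deg
    cam_tilt += turn_y * gain_deg

    # Clamp to the servo's 0-180 degree range
    cam_pan = max(0.0, min(180.0, cam_pan))
    cam_tilt = max(0.0, min(180.0, cam_tilt))
    return cam_pan, cam_tilt

# A face dead-centre leaves the angles alone:
print(update_pan_tilt(320, 240, 90, 90))   # -> (90.0, 90.0)
```

Run this once per detected face each frame, then hand the resulting angles to whatever drives your servos.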

Donate for detailed Python 3 code on targeting and tracking any object using OpenCV 3, the Adafruit servo HAT and the Raspberry Pi.

[sell_media_file item_name="test download" name="" label="Donate with Stripe" description="Face Tracking" amount="7.50" locale="auto" panelLabel="Donate" download_link=""]

With servos at your disposal, you can really make full use of OpenCV's potential.

Imagine pulling off:
-Automatic surveillance cameras that follow and record unfamiliar people.

-Smart cameras that track your movement while recording, for better YouTube movies.

-A face-activated door that locks itself if it doesn't recognize you and opens if it does.

-Programs that activate upon recognizing certain things.

-Interpreting sign language!

-Alerts based on your target's body language.

-Or even a bionic selfie stick…

All simply by swapping out the Haar cascades to have your camera track just about anything.

Sky's the limit.

See Ya Later

Well, that's just about all you need to get started on some simple yet crazy computer vision mischief.

Don't forget to comment, like and share 🙂


61 thoughts on "Computer Vision with OpenCV and the Raspberry Pi 3"

  1. Hi, I donated but didn't receive the code; I got this error after the transaction: 500 Internal Server Error
    An error occurred while processing this request.

    Website owner? Check your code and/or debug log. If you need assistance, contact support.

    1. Hey Jussi. As long as it's in Python, all you'd have to do is alter the fuchikoma file so that the "move" function works as it would using your ServoBlaster. That way it should work fine when you run the face tracking program, which simply imports fuchikoma as a servo controller.

      1. In other words, you pretty much make your own fuchi.move function (using ServoBlaster) so that the program uses THAT as its logic to move. Thank you for the donation, by the way, and happy holidays! 🙂

  2. Copy, thanks for that. I've got it running OK now, BUT it tilts up when it means to tilt down. Where can I flip that?

      1. Hey, I lost the link to the code after I donated a few weeks ago. Could you send it to me? adrian.stucker @

  3. I'm getting the

    TypeError: unbound method move() must be called with FUCHI instance as first argument (got int instance instead)

    error also. What can I do? I tried running Chris SK's workaround and the program just hangs at ret, frame =

    I’m so close!

      1. Anywhere I can look for the fix? Really trying to get this working right now while I have time. Thanks!

      2. 🙂 Man on a mission. OK, pretty sure it's how you import and use fuchikoma. (Haven't gotten a chance to update the code.) Mess around with: from fuchikoma import Fuchi as F. Then your functions should look something like F.move() or F().move(). I wouldn't be able to tell without the code in front of me, but more than likely the fix is something like that.
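For anyone hitting that TypeError: it's the classic Python 2 unbound-method error, caused by calling a method on the class itself instead of on an instance. The Fuchi class below is a generic stand-in to show the pattern, not the real fuchikoma code:

```python
class Fuchi:
    """Generic stand-in for a servo controller class (hypothetical)."""
    def move(self, servo, angle):
        # A real controller would command the servo here; we just echo.
        return (servo, angle)

# Fuchi.move(0, 90) on Python 2 raises:
#   TypeError: unbound method move() must be called with ... instance ...
# because `self` never gets bound. The fix is to instantiate first:
fuchi = Fuchi()
print(fuchi.move(0, 90))   # -> (0, 90)
```

So after `from fuchikoma import Fuchi as F`, call `F().move(...)` (or keep a single instance around and reuse it), rather than `F.move(...)`.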

  4. Sent you a donation! Great work! I hope your code can help me further with face tracking.

    1. Thank you so much! Keep me posted on your progress and feel free to ask for help. By the way, I have a growing Facebook group where we like to showcase cool stuff and help each other grow. If you're interested, check it out and set your password as Oliver 🙂 University of Mad Science

  5. Are you still working on this? I’m a Noob and have a project it looks like this will be perfect for.

    1. Yes I am actually! And I update the code every now and then so that the knowledge will never be obsolete.

    2. 🙂 Besides, if you have any questions, you can always ask. It'll help me improve this site exponentially.

      1. Great! Sent you a donation, looking forward to working with the code. I think I have 80% of the hardware I need.

      2. Sick! Thank you so much! Literally everything I get goes into perfecting this blog as well as my logic so that I may better spread the knowledge 🙂

      3. I’ve got what I need for a prototype, when it’s going I’ll throw something up. It’s been an on and off project I decided I want to complete this year and that sent me out in search of Open CV code to work with. By the way, how do I obtain the code?

      4. Nope, all I got was the PayPal receipt. Always something to fix, I wasn’t worried and suspected you thought something had happened. No worries, I dropped an email to the PayPal address.


      5. OK, gotcha. Lol, I apologize for the name on the email; it's the name of my band. Forgot to edit that too 🙂

      6. Servos powerful and mounts strong enough to do what I want. I've been enamored with this since I first saw it: It was built on Arduino, which I could do, but a lot has changed since 2012. My office is long and narrow with the door at the far end from my desk, and I want to put it on a stand by the door. I have a small engine lathe and the ability to mill small parts, but I want to get the code working first, then do the metal work when it warms up.

      7. That sounds so cool! Hey keep me posted. I’m all about helping you make that happen. I’d love to see a video too!

      8. *This is exactly what you're looking for. Once your servos are calibrated, it ought to be pretty straightforward. 🙂

      9. Hi, I donated and didn't receive the code; I got the following error after the transaction: 500 Internal Server Error
        An error occurred while processing this request.

        Website owner? Check your code and/or debug log. If you need assistance, contact support.

  6. So I fixed the problem… I made this code, which has the move function in it. I already tested it and it works fine; here it is:

    import cv2
    from Adafruit_PWM_Servo_Driver import PWM

    def mover(servo, angle):
        zero_pulse = (servoMin + servoMax) / 2  # half-way == 0 degrees
        pulse_width = zero_pulse - servoMin     # maximum pulse to either side
        pulse = zero_pulse + (pulse_width * angle / 80)
        pwm.setPWM(servo, 0, int(pulse))

    # Initialise the PWM device using the default address
    pwm = PWM(0x40)

    servoMin = 150  # min pulse length out of 4096
    servoMax = 600  # max pulse length out of 4096

    cam_pan = 0    # initial position
    cam_tilt = 60  # initial position

    FRAME_W = 440
    FRAME_H = 280

    cap = cv2.VideoCapture(0)
    cap.set(3, FRAME_W)  # width
    cap.set(4, FRAME_H)  # height

    # This path has to be changed depending on where you have this file
    cascPath = '/home/pi/Seguidor/haar/lbpcascades/lbpcascade_frontalface.xml'
    face_cascade = cv2.CascadeClassifier(cascPath)

    while True:
        # Capture frame-by-frame
        ret, frame = cap.read()
        if not ret:
            break

        # Our operations on the frame come here
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)

        # Track the first face
        for (x, y, w, h) in faces:
            # Draw a green rectangle around the face
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

            # (eye detection inside the face region could go here, e.g.
            #  roi_gray = gray[y:y+h, x:x+w];
            #  eyes = eye_cascade.detectMultiScale(roi_gray))

            # Get the center of the face
            x = x + (w / 2)
            y = y + (h / 2)

            # Correct relative to center of image
            turn_x = float(x - (FRAME_W / 2))
            turn_y = float(y - (FRAME_H / 2))

            # Convert to percentage offset
            turn_x /= float(FRAME_W / 2)
            turn_y /= float(FRAME_H / 2)

            # Scale offset to degrees
            turn_x *= 7.5  # VFOV
            turn_y *= 7.5  # HFOV
            cam_pan += -turn_x  # direction depends on how your servos are mounted
            cam_tilt += turn_y  # direction depends on how your servos are mounted

            # Clamp pan/tilt to 0 to 180 degrees
            cam_pan = max(0, min(180, cam_pan))
            cam_tilt = max(0, min(180, cam_tilt))

            # Update the servos
            mover(0, cam_pan)
            mover(1, cam_tilt)
            break

        # Display the resulting frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # When everything is done, release the capture
    cap.release()
    cv2.destroyAllWindows()
    1. Like, say, this more developed version: . It makes it so easy to add servo movement to any script by simply importing the functions; you can turn anything into a plug-n'-play robot. So if you ARE going into more servo work, I'd recommend finding out more about that little problem. Otherwise this code is great! And I'm glad you found me! 🙂

  7. Hey man!! Thanks for your help!!! So the problem is not the camera; it comes when importing fuchikoma. I changed some things and it looks like it's running now. I'll send you the code when I finish it. Thanks again, I appreciate it!!!

  8. Thank you!! I don't know why I still get the same problem:

    Traceback (most recent call last):
    File "", line 18, in
    TypeError: unbound method move() must be called with FUCHI instance as first argument (got int instance instead)
    (program exited with code: 1)
    Press return to continue

    I really don't know what to do

    1. So: 1. Make sure the camera is enabled in sudo raspi-config. 2. If you're using the PiCam, make sure it's REALLY plugged in. 3. Instead of "from fuchikoma import *", try "import fuchikoma".

  9. Hey man!! Thanks for your help! Sorry to bother you… I'm getting this error message:
    Traceback (most recent call last):
    File "", line 18, in
    TypeError: unbound method move() must be called with FUCHI instance as first argument (got int instance instead)
    (program exited with code: 1)
    Press return to continue
    What can I do?? Is fuchikoma wrong or something??

    1. No, please keep bothering me if you can; it helps me streamline my blog and correct my silly mistakes. I actually thank you for it 🙂 So! The 5 and 6 on lines 18, 19, 60 and 61 are the slots on the servo board, and 5 and 6 are the shoulder and elbow on my robot targeting system (sorry). Switch the "5" on lines 18 and 19 to "0" and the "6" on lines 60 and 61 to "1" and it should work fine. And remember to plug them in correctly: the pan servo (side to side) is 0 and the tilt servo (up and down) is 1, and they should be plugged into the first two slots of the actual board to reflect this.

  10. Hello, EVILGENIUS0077! So I just bought everything I need for the project and uploaded the Python code to the Raspberry Pi, but when I try to run the code I get an error that says "from servodriver import ServoDriver ImportError: no module named servodriver". I don't know if I need to find a library somewhere or if you missed uploading that file.

    1. Oh my goodness! OK, so servodriver is a file that allows use of continuous-rotation servos… and I guess I forgot to delete all traces of that experiment. Delete anything that mentions servodriver and I'll fix that in a bit 🙂 Sorry man.

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.