Realtime Face Detection and Recognition Using Python

Have you ever wanted to build a facial recognition system using Python? Look no further! In this tutorial, we will use the face_recognition library to detect and recognize faces in images, video streams, and even in real time using your webcam.


Face recognition and face detection are two separate tasks in the field of computer vision.

Face detection is the process of automatically locating faces in a photograph or video. It typically involves finding the locations of key points on a face, such as the corners of the mouth and eyes, and using these points to determine the position, size, and orientation of the face.

Face recognition, on the other hand, is the process of identifying and verifying people from images or video frames. It typically involves comparing a face in a photograph or video frame with a database of known faces and determining the identity of the person in the photograph or video. Both face detection and face recognition systems can be trained to be more accurate by providing them with more data. However, face recognition systems are generally more complex and require more computing power than face detection systems.
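Under the hood, recognition with this library boils down to comparing 128-dimensional face encodings by Euclidean distance: two encodings closer than a tolerance (face_recognition's default is 0.6) are treated as the same person. Here is a minimal NumPy sketch of that idea, using made-up vectors in place of real encodings:

```python
import numpy as np

# Hypothetical stand-ins for 128-dimensional face encodings; real
# encodings come from face_recognition.face_encodings()
known_encoding = np.zeros(128)
same_person = np.full(128, 0.01)      # very close to the known encoding
different_person = np.full(128, 0.1)  # much farther away

def is_match(known, candidate, tolerance=0.6):
    """Two faces 'match' when the Euclidean distance between their
    encodings is within the tolerance (0.6 is face_recognition's default)."""
    return np.linalg.norm(known - candidate) <= tolerance

print(is_match(known_encoding, same_person))       # True  (distance ~0.11)
print(is_match(known_encoding, different_person))  # False (distance ~1.13)
```

This is the same comparison that compare_faces() performs for every known encoding at once.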

This code uses the face_recognition library to detect and recognize faces in a video feed, so the first step is to install the library using pip. To do so, open a command window and run:

pip install face_recognition

Next, we need to install a few more dependencies for this face recognition algorithm to work as intended.

  1. CMake: download and install it from the official CMake website.
  2. Visual Studio: download Visual Studio and select the “C++ CMake Tools for Windows” workload during installation.
  3. After installing, make sure CMake is on the Windows Path environment variable. If not, open “Environment Variables” in Windows and add the CMake install location to the Path variable.

Once this is done, open the command window again and install the cmake and dlib packages:

pip install cmake
pip install dlib

Once these libraries are installed, we can start writing the face recognition code.

Next, we’ll start by loading and processing some images of the people whose faces we want to recognize. We’ll use the load_image_file() function to load the images and the face_encodings() function to compute the face encodings, which are numerical representations of the unique features of each face. We’ll store these encodings in a list called encodings.

import face_recognition
import time
import cv2 


# Load Face Image 
image_anil = face_recognition.load_image_file('me.jpg')
image_obama = face_recognition.load_image_file('Obama.jpg')
image_trump = face_recognition.load_image_file('trump.jpg')

# Get the face encoding of the person in each image; face_encodings()
# returns one encoding per detected face, so [0] takes the first
encodings_anil = face_recognition.face_encodings(image_anil)[0]
encodings_obama = face_recognition.face_encodings(image_obama)[0]
encodings_trump = face_recognition.face_encodings(image_trump)[0]

# Create a list of all the encodings 
encodings = [encodings_anil,encodings_obama,encodings_trump]

Now that we have the encodings of the people we want to recognize, we can start detecting and recognizing faces in real time using our webcam. To do this, we’ll use OpenCV’s VideoCapture function to get a video feed and the face_locations() function to detect the locations of the faces in each frame. Inside the loop, the current frame is resized to 1/4 of its original size and converted from BGR color format to RGB color format. This is done for faster processing.
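The colour conversion is simply a reversal of the last (channel) axis of the image array, as this quick NumPy illustration shows; note that cv2.cvtColor with COLOR_BGR2RGB does the same thing and also returns a contiguous array, which newer dlib builds require:

```python
import numpy as np

# A 1x1 'image' with one pixel: blue=10, green=20, red=30 (OpenCV's BGR order)
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)

# Reversing the channel axis yields RGB order
rgb = bgr[:, :, ::-1]
print(rgb[0, 0].tolist())  # [30, 20, 10] -> red, green, blue
```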

# Frame rate bookkeeping (used to compute FPS inside the loop)
prev_frame_time = 0

# Get the video feed 
cv2.namedWindow('video', cv2.WINDOW_FREERATIO)

cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print('Camera Error!')
    exit()

while True:
    ret, vidframe = cap.read()
    if not ret:
        break
    vidframe = cv2.flip(vidframe, 1)
    
    # Framerate Info
    new_frame_time = time.time()
    fps = int(1 / max(new_frame_time - prev_frame_time, 1e-6))
    prev_frame_time = new_frame_time
    
    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(vidframe, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color
    # (which face_recognition uses); cvtColor also returns a contiguous array
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)
    top, right, bottom, left = [], [], [], []

The face_locations() function is then used to detect the locations of the faces in the frame. We’ll also use the compare_faces() function to compare the encoding of each detected face with the encodings in the encodings list. If there is a match, we’ll display the person’s name on the frame using OpenCV’s putText() function.

    # Get face locations; each is a (top, right, bottom, left) tuple
    face_location = face_recognition.face_locations(rgb_small_frame, model='hog')  # model = hog/cnn
    num_faces = len(face_location)

    names = ['Anil', 'Obama', 'Trump', 'Unknown']
    if num_faces > 0:
        # Scale the coordinates back up, since detection ran on the 1/4-size frame
        top = [loc[0] * 4 for loc in face_location]
        right = [loc[1] * 4 for loc in face_location]
        bottom = [loc[2] * 4 for loc in face_location]
        left = [loc[3] * 4 for loc in face_location]

        # Get the face encodings of the detected faces
        faceencoding = face_recognition.face_encodings(rgb_small_frame, face_location)

        for index, eachface in enumerate(faceencoding):
            results = face_recognition.compare_faces(encodings, eachface)
            # First matching known face wins; otherwise label it 'Unknown'
            name = names[results.index(True)] if True in results else names[3]

            cv2.putText(vidframe, name, (left[index], top[index] - 20),
                        cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 0), 1)
            cv2.rectangle(vidframe, (left[index], top[index]),
                          (right[index], bottom[index]), (0, 255, 0), 2)

    cv2.putText(vidframe, f'FPS:{fps}', (300, 45), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 0), 1)
    cv2.imshow('video', vidframe)
    
    key = cv2.waitKey(1)
    if key == ord('q') or key == ord('Q'):
        break
cap.release()
cv2.destroyAllWindows()

And that’s it! You now have a basic facial recognition system using Python and the face_recognition library. You can further improve the detection accuracy by switching to the CNN model (model='cnn', which is more accurate but slower than HOG) and recognize more people by adding their encodings to the encodings list. Below is the complete code for you to try this out yourself.

import face_recognition
import time
import cv2 


# Load Face Image 
image_anil = face_recognition.load_image_file('me.jpg')
image_obama = face_recognition.load_image_file('Obama.jpg')
image_trump = face_recognition.load_image_file('trump.jpg')

# Get the face encoding of the person in each image; face_encodings()
# returns one encoding per detected face, so [0] takes the first
encodings_anil = face_recognition.face_encodings(image_anil)[0]
encodings_obama = face_recognition.face_encodings(image_obama)[0]
encodings_trump = face_recognition.face_encodings(image_trump)[0]

# Create a list of all the encodings 
encodings = [encodings_anil,encodings_obama,encodings_trump]

# Frame rate calculation
prev_frame_time = 0
new_frame_time = 0

# Get the video feed 
cv2.namedWindow('video',cv2.WINDOW_FREERATIO)

cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print('Camera Error!')
    exit()

while True:
    ret, vidframe = cap.read()
    if not ret:
        break
    vidframe = cv2.flip(vidframe, 1)
    
    # Framerate Info
    new_frame_time = time.time()
    fps = int(1 / max(new_frame_time - prev_frame_time, 1e-6))
    prev_frame_time = new_frame_time
    
    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(vidframe, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color
    # (which face_recognition uses); cvtColor also returns a contiguous array
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)
    top,right,bottom,left = [],[],[],[]

    # Get face locations; each is a (top, right, bottom, left) tuple
    face_location = face_recognition.face_locations(rgb_small_frame, model='hog')  # model = hog/cnn
    num_faces = len(face_location)

    names = ['Anil', 'Obama', 'Trump', 'Unknown']
    if num_faces > 0:
        # Scale the coordinates back up, since detection ran on the 1/4-size frame
        top = [loc[0] * 4 for loc in face_location]
        right = [loc[1] * 4 for loc in face_location]
        bottom = [loc[2] * 4 for loc in face_location]
        left = [loc[3] * 4 for loc in face_location]

        # Get the face encodings of the detected faces
        faceencoding = face_recognition.face_encodings(rgb_small_frame, face_location)

        for index, eachface in enumerate(faceencoding):
            results = face_recognition.compare_faces(encodings, eachface)
            # First matching known face wins; otherwise label it 'Unknown'
            name = names[results.index(True)] if True in results else names[3]

            cv2.putText(vidframe, name, (left[index], top[index] - 20),
                        cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 0), 1)
            cv2.rectangle(vidframe, (left[index], top[index]),
                          (right[index], bottom[index]), (0, 255, 0), 2)

    cv2.putText(vidframe, f'FPS:{fps}', (300, 45), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 0), 1)
    cv2.imshow('video', vidframe)
    
    key = cv2.waitKey(1)
    if key == ord('q') or key == ord('Q'):
        break
cap.release()
cv2.destroyAllWindows()
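One concrete improvement: instead of checking each known encoding with chained if/elif branches, you can pick the best match directly. face_recognition.face_distance() returns the distance from a probe encoding to every known encoding; the smallest distance wins, and anything above the tolerance stays “Unknown”. Here is a sketch of that selection logic using NumPy, with synthetic 128-dimensional vectors standing in for real encodings:

```python
import numpy as np

names = ['Anil', 'Obama', 'Trump']
# Synthetic stand-ins for the known-encodings list built earlier
encodings = [np.full(128, v) for v in (0.0, 0.5, 1.0)]

def best_match(encodings, names, probe, tolerance=0.6):
    """Return the name of the closest known encoding, or 'Unknown'
    if even the closest one is farther than the tolerance."""
    distances = np.array([np.linalg.norm(e - probe) for e in encodings])
    best = int(np.argmin(distances))
    return names[best] if distances[best] <= tolerance else 'Unknown'

print(best_match(encodings, names, np.full(128, 0.51)))  # 'Obama' (closest, within tolerance)
print(best_match(encodings, names, np.full(128, 5.0)))   # 'Unknown' (far from everyone)
```

With real encodings, `distances = face_recognition.face_distance(encodings, eachface)` replaces the list comprehension, and the same argmin-plus-tolerance logic applies inside the video loop.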
