How to Detect Motion using Background Subtraction Algorithms


In this tutorial, we will implement a motion detection program using OpenCV's background subtraction algorithms in Python.

Introduction

Background subtraction (BS) is a widely used technique for generating a foreground mask (namely, a binary image containing the pixels belonging to moving objects in the scene) with a static camera.

As the name suggests, BS calculates the foreground mask by performing a subtraction between the current frame and a background model, which contains the static part of the scene, or in other words, a frame of the scene with no foreground objects.

1 – Absolute Background Subtraction (ABS)

In the absolute background subtraction method, a static frame of the scene with no foreground object is compared with the incoming frames. The absolute difference between the frames separates the foreground from the background.

Absolute background subtraction based motion detection.

As you can see, the first frame is subtracted from the current frame. One problem with this method is that the background model is never updated: if a foreground object was present in the reference frame, its silhouette remains in the mask even after the object leaves the scene, as can be seen in the image above.

2 – MOG2 (Mixture of Gaussians)

In this method, each background pixel is modeled by a mixture of K Gaussian distributions, with K typically between 3 and 5. The different distributions are assumed to represent the different background and foreground colors. The weight of each distribution in the model is proportional to the amount of time its color stays on that pixel. When the weight of the distribution matching a pixel's current value is low, that pixel is classified as foreground.

MOG2 based motion detection. Every frame is used both for calculating the foreground mask and for updating the background.

3 – KNN (K-Nearest Neighbors)

The K-nearest neighbors (KNN) method is non-parametric: it keeps a history of recent sample values for each pixel and classifies a new pixel value as background if enough of its nearest samples (measured by Euclidean distance in color space) lie close to it. Recursive equations are used to constantly update the model and to simultaneously select the appropriate number of samples for each pixel.

KNN based Motion Detection.

Code

# BGS based tracking 

import numpy as np
import cv2

minimum = 4000                  #Define Min Contour area
frame1 = None
cap = cv2.VideoCapture(0)   # Capture object to access the camera
method = 'ABS'

# Background Subtraction Methods
mog = cv2.createBackgroundSubtractorMOG2()  
knn = cv2.createBackgroundSubtractorKNN()   

while True:
    ret, frame = cap.read()
    if not ret:                 # Stop if the camera returns no frame
        break
    vid = cv2.flip(frame, 1)    # Mirror the frame for a natural webcam view

    if method == 'MOG2':
        bgs = mog.apply(vid)

    elif method == 'KNN':
        bgs = knn.apply(vid)

    # Frame difference method: the current frame is subtracted from the
    # first frame, which is assumed to be static and to contain no
    # foreground objects.
    elif method == 'ABS':
        frame = cv2.GaussianBlur(vid, (7, 7), 0)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        if frame1 is None:      # Store the first frame as the background
            frame1 = frame
            continue

        framedelta = cv2.absdiff(frame1, frame)
        retval, bgs = cv2.threshold(framedelta, 50, 255, cv2.THRESH_BINARY)
    
    mask = np.zeros_like(bgs)   # Single-channel canvas for drawing contours

    # Find contours in the foreground mask and draw the largest one
    contours, _ = cv2.findContours(bgs, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)

    for cnt in contours:
        if cv2.contourArea(cnt) < minimum:
            continue

        (x, y, w, h) = cv2.boundingRect(cnt)
        cv2.rectangle(vid, (x, y), (x + w, y + h), (0, 255, 10), 1)
        cv2.putText(vid, f'{method}', (20, 20), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 255, 0), 2)
        cv2.putText(vid, 'Motion Detected', (20, 40), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 255, 0), 2)
        cv2.drawContours(mask, [cnt], -1, 255, 3)
        break                   # Only the largest contour is needed

    cv2.imshow('frame',vid)
    cv2.imshow('BGS',bgs)


    key = cv2.waitKey(1)
    if key == ord('q') or key == ord('Q'):
        break
    elif key == ord('M') or key == ord('m'):
        method = 'MOG2'
    elif key == ord('K') or key == ord('k'):
        method = 'KNN'
    elif key == ord('A') or key == ord('a'):
        frame1 = None           # Re-capture the static background frame
        method = 'ABS'

cap.release()
cv2.destroyAllWindows()

