Static Background Removal from a Video using OpenCV and Python
Contributor - 14 March 2017 - 12min
Background removal is an important pre-processing step in many vision-based applications. It is useful in scenarios where the background remains approximately constant across the capture while there is some movement in the foreground. For example, intrusion-detection applications can detect movement in a live camera feed and trigger an alarm.
We will be using OpenCV and NumPy for this application.
Below are the operations we need to perform in order to get the background-subtracted image:
Read the video capture
Import the numpy and opencv modules using:
import numpy as np
import cv2
To read the video file (vid.mp4 in the same directory as our application), we will use the VideoCapture API of OpenCV. It returns a capture object which will enable us to read the video frame by frame.
cap = cv2.VideoCapture("vid.mp4")
Loop over the video frames
In order to grab, decode and use the next frame of the video, we can use the read() function of the capture object. We loop until read() stops returning frames:
while True:
    ret_val, frame = cap.read()
    if frame is None:
        break
Initialize the result image to the first frame
We will initialize the result image to the first frame, converted to a 32-bit float image. The reason for the float32 conversion will become clear in a moment. For now, let's use NumPy to do the conversion:
if first_iter: # first_iter is initialized to True before the while loop
avg = np.float32(frame)
first_iter = False
Maintain a running average
Now that we have done all the initializations, let's get to the meat of the problem.
Internally, every frame is represented as an array where each element is a pixel intensity value of the image. In order to approximate the static background in a video, we can maintain a running average of these arrays.
If a moving object is present in only some of the frames over the course of the video, it will not influence the running average much.
OpenCV provides an API called accumulateWeighted which does exactly what we want. This function expects the src image, dst image and alpha as its arguments. In our case, the src image is each frame and the dst image is the accumulated result.
cv2.accumulateWeighted(frame, avg, 0.005)
The alpha argument regulates the update speed, i.e. how fast the accumulator "forgets" earlier frames. The higher the alpha, the more a moving object disturbs the averaged image. Because each update adds only a small fraction (0.005 in this case) of the new frame, the accumulator must be able to store fractional values; this is why we converted the 8-bit frames to 32-bit float frames.
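To see the effect of alpha concretely, here is a small pure-NumPy sketch. The `accumulate_weighted` helper re-implements the update rule used by cv2.accumulateWeighted, and the one-pixel "video" with a passing foreground object is a made-up example for illustration:

```python
import numpy as np

def accumulate_weighted(frame, avg, alpha):
    # Same update rule as cv2.accumulateWeighted:
    # avg <- (1 - alpha) * avg + alpha * frame, updated in place.
    avg *= (1.0 - alpha)
    avg += alpha * frame

def peak_disturbance(alpha, n_frames=200):
    # Synthetic one-pixel video: static background of intensity 100,
    # with a foreground object of intensity 255 passing through
    # frames 50-59. Returns the largest deviation of the running
    # average from the true background value.
    background = 100.0
    avg = np.float32([[background]])
    peak = 0.0
    for i in range(n_frames):
        value = 255.0 if 50 <= i < 60 else background
        accumulate_weighted(np.float32([[value]]), avg, alpha)
        peak = max(peak, abs(float(avg[0, 0]) - background))
    return peak

print(peak_disturbance(0.005))  # small deviation: the object barely registers
print(peak_disturbance(0.1))    # much larger deviation from the background
```

With a small alpha the passing object barely moves the average off the background value, while a larger alpha lets it bleed visibly into the result.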
Convert the result to 8 bit channel
After all the video frames have been accumulated, we can convert the result back to an 8-bit image using:
result = cv2.convertScaleAbs(avg)
Show and write the result
Processing the video mentioned above generates an averaged frame which doesn't have any disturbance from the moving marker.