Teknomo–Fernandez algorithm

By assuming that the background image is shown in the majority of the video, the algorithm is able to generate a good background image of a video in O(R)-time, where R is the resolution of an image, using only a small number of binary and Boolean bit operations, which require little memory and have built-in operators found in many programming languages such as C, C++, and Java.[1][2][3]

People tracking from videos usually involves some form of background subtraction to segment foreground from background.

Once the foreground images are extracted, the desired algorithms (such as those for motion tracking, object tracking, and facial recognition) can be executed on them.
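As a simple illustration (a hypothetical sketch, not part of the TF algorithm itself; the function name is mine), background subtraction can be as basic as thresholding the per-pixel difference between a frame and the background image:

```python
# Hypothetical background-subtraction sketch for grayscale frames:
# a pixel is marked foreground (1) when it differs from the background
# image by more than a fixed threshold, otherwise background (0).
def foreground_mask(frame, background, threshold=25):
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frame_row, bg_row)]
            for frame_row, bg_row in zip(frame, background)]

background = [[100, 100], [100, 100]]
frame      = [[100, 180], [ 90, 100]]   # one bright foreground pixel
print(foreground_mask(frame, background))  # → [[0, 1], [0, 0]]
```

Real trackers use more robust segmentation, but any of them benefits from a good background image, which is what the TF algorithm provides.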

Traditionally, the background image is obtained, manually or automatically, from the video frames in which no foreground objects appear.

More recently, automatic background generation through object detection, median filtering, medoid filtering, approximated median filtering, linear predictive filtering, non-parametric modeling, Kalman filtering, and adaptive smoothing has been suggested; however, most of these methods have high computational complexity and are resource-intensive.

The Teknomo–Fernandez algorithm is also an automatic background generation algorithm. Its advantage lies in its computational speed of only O(R)-time, depending on the resolution R of an image, and its accuracy gained within a manageable number of frames.

At least three frames from a video are needed to produce the background image, assuming that for every pixel position, the background occurs in the majority of the frames.

Furthermore, it can be performed for both grayscale and colored videos.

However, the algorithm will certainly work whenever the following single important assumption holds: for each pixel position, the majority of the pixel values in the entire video contain the pixel value of the actual background image at that position.

As long as each part of the background is shown in the majority of the video, the entire background image need not appear in any single frame; the algorithm is still expected to recover it accurately.[1]

At the first level, three frames F1, F2, F3 are selected at random from the image sequence and combined into a background image by taking, at every pixel position, the bit-level majority (modal bit) of the three frames:

B = (F2 AND (F1 OR F3)) OR (F1 AND F3),

evaluated bitwise on the pixel values. At the second level, three background images produced at the first level are combined in the same way, yielding a better background image. The procedure is repeated until the desired level L is reached, which draws on 3^L frames in total.[1] The probability that the modal bit predicted at level L is the actual modal bit is represented by the equation

p_L = 3(p_{L-1})^2 - 2(p_{L-1})^3,

where p_{L-1} denotes the corresponding probability at level L-1, and p_0 is the fraction of frames whose bit at that position equals the bit of the actual background.
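The level-by-level modal-bit combination described above can be sketched as follows (an illustrative Python sketch; the function names are mine, and the bitwise-majority expression is the standard majority-of-three formula assumed from the description):

```python
# Bitwise majority ("modal bit") of three frames' pixel values: each bit of
# the result is the bit value occurring in at least two of the three inputs.
def majority3(f1, f2, f3):
    return (f2 & (f1 | f3)) | (f1 & f3)

# One background pixel from 3**L randomly chosen frame values, combined
# three at a time, level by level, until a single value remains.
def tf_background_pixel(values):          # len(values) must be a power of 3
    while len(values) > 1:
        values = [majority3(values[i], values[i + 1], values[i + 2])
                  for i in range(0, len(values), 3)]
    return values[0]

# Nine samples of one pixel (L = 2): the background value 100 occurs in the
# majority, so it survives both combination levels.
print(tf_background_pixel([100, 100, 30, 100, 50, 100, 100, 100, 70]))  # → 100
```

Because the combination is pure bitwise AND/OR/XOR-free integer arithmetic, the same code applies unchanged to 8-bit grayscale values or to each channel of a color image.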

It can be observed that even if the modal bit at the considered position occurs in only a low 60% of the frames, the probability of accurate modal bit determination is already more than 99% at 6 levels.
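This convergence can be checked numerically. Below is a minimal sketch (not from the source; the function name is mine) iterating the recursion p_L = 3p^2 - 2p^3, the probability that the majority of three independent bits equals the background bit:

```python
# Probability that the predicted modal bit is the actual background bit,
# iterated over combination levels: p_L = 3*p^2 - 2*p^3 (majority of three).
def modal_bit_probability(p0, levels):
    p = p0
    for _ in range(levels):
        p = 3 * p**2 - 2 * p**3
    return p

# Even a weak 60% per-frame majority exceeds 99% accuracy after 6 levels.
print(round(modal_bit_probability(0.60, 6), 4))  # → 0.9976
```

Note that the recursion only improves the estimate when p0 > 0.5, which is exactly the majority assumption stated earlier.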

The space requirement of the Teknomo–Fernandez algorithm depends on the resolution R of an image, the number of frames in the video, and the desired number of levels L: level L draws on 3^L frames, giving a space requirement on the order of O(R·3^L). The observation that L will probably not exceed 6 (so that 3^L is at most the constant 3^6 = 729) reduces the space complexity to O(R).[1]

A variant of the Teknomo–Fernandez algorithm, named CRF, that incorporates the Monte Carlo method has been developed.[1]

Experiments on some colored video sequences showed that the CRF configurations outperform the TF algorithm in terms of accuracy.

However, the TF algorithm remains more efficient in terms of processing time.

The TF algorithm produces the background image from a video of a street with many pedestrians crossing.
The TF algorithm generates the colored background image and uses it for background subtraction.
Computed probabilities table
This table gives the probability values across six levels, computed from the recursion p_L = 3(p_{L-1})^2 - 2(p_{L-1})^3 for an initial probability of p_0 = 0.60. Even when the modal bit at the considered position occurs in only 60% of the frames, the probability of accurate modal bit determination already exceeds 99% at six levels.

Level L    p_L
1          0.6480
2          0.7155
3          0.8033
4          0.8991
5          0.9715
6          0.9976