In computer vision, the Lucas–Kanade method is a widely used differential method for optical flow estimation developed by Bruce D. Lucas and Takeo Kanade.
It assumes that the flow is essentially constant in a local neighbourhood of the pixel under consideration, and solves the basic optical flow equations for all the pixels in that neighbourhood by the least squares criterion.[1][2]
By combining information from several nearby pixels, the Lucas–Kanade method can often resolve the inherent ambiguity of the optical flow equation.
It is also less sensitive to image noise than point-wise methods.
On the other hand, since it is a purely local method, it cannot provide flow information in the interior of uniform regions of the image.
The Lucas–Kanade method assumes that the displacement of the image contents between two nearby instants (frames) is small and approximately constant within a neighbourhood of the point p under consideration. Thus the optical flow equation can be assumed to hold for all pixels within a window centred at p. Namely, the local image flow (velocity) vector (V_x, V_y) must satisfy

    I_x(q_1) V_x + I_y(q_1) V_y = -I_t(q_1)
    I_x(q_2) V_x + I_y(q_2) V_y = -I_t(q_2)
        ⋮
    I_x(q_n) V_x + I_y(q_n) V_y = -I_t(q_n)

where q_1, q_2, ..., q_n are the pixels inside the window, and I_x(q_i), I_y(q_i), I_t(q_i) are the partial derivatives of the image I with respect to position x, y and time t, evaluated at the point q_i and at the current time.

These equations can be written in matrix form A v = b, where A is the n×2 matrix whose i-th row is ( I_x(q_i)  I_y(q_i) ), v = (V_x, V_y)^T is the unknown flow vector, and b is the n-vector with entries -I_t(q_i).
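As an illustration, the per-window linear system (conventionally written A v = b) can be assembled from two consecutive frames with finite-difference derivatives. This is a minimal NumPy sketch under made-up conventions (frames as 2-D float arrays, a square window of side 2·half_win + 1), not a full implementation:

```python
import numpy as np

def lucas_kanade_system(frame1, frame2, x, y, half_win=2):
    """Assemble the gradient matrix and right-hand side for the
    window centred at pixel (x, y). Returns (A, b) with one row
    per pixel q_i in the window."""
    # Spatial derivatives I_x, I_y (central differences) and the
    # temporal derivative I_t approximated by the frame difference.
    Iy, Ix = np.gradient(frame1.astype(float))
    It = frame2.astype(float) - frame1.astype(float)

    rows = slice(y - half_win, y + half_win + 1)
    cols = slice(x - half_win, x + half_win + 1)
    A = np.column_stack([Ix[rows, cols].ravel(),
                         Iy[rows, cols].ravel()])   # n x 2
    b = -It[rows, cols].ravel()                     # length n
    return A, b
```

The resulting over-determined system can then be handed to any least-squares solver, e.g. `np.linalg.lstsq(A, b, rcond=None)`.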
This system has more equations than unknowns and thus it is usually over-determined. The Lucas–Kanade method obtains a compromise solution by the least squares principle. Namely, it solves the 2×2 system

    A^T A v = A^T b,    i.e.    v = (A^T A)^{-1} A^T b,

where A^T is the transpose of A. Here A^T A is the 2×2 matrix

    A^T A = [ Σ_i I_x(q_i)^2          Σ_i I_x(q_i) I_y(q_i)
              Σ_i I_x(q_i) I_y(q_i)   Σ_i I_y(q_i)^2        ]

and A^T b = ( -Σ_i I_x(q_i) I_t(q_i),  -Σ_i I_y(q_i) I_t(q_i) )^T, with the sums running from i = 1 to n.

The matrix A^T A is often called the structure tensor of the image at the point p.
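For a single window this 2×2 least-squares solve can be sketched directly from the derivative values at the window pixels. The three input arrays are hypothetical precomputed derivatives, one entry per pixel q_i:

```python
import numpy as np

def solve_lk(Ix, Iy, It):
    """Solve the normal equations A^T A v = A^T b for one window.

    Ix, Iy, It: 1-D arrays of the spatial and temporal derivatives
    at the window pixels q_1..q_n. Returns v = (V_x, V_y)."""
    # Structure tensor A^T A, accumulated from gradient products.
    S = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    # Right-hand side A^T b.
    rhs = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(S, rhs)
```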
The plain least squares solution above gives the same importance to all n pixels q_i in the window. In practice it is usually better to give more weight to the pixels that are closer to the central pixel p. For that, one uses the weighted version of the least squares equation,

    A^T W A v = A^T W b,    i.e.    v = (A^T W A)^{-1} A^T W b,

where W is an n×n diagonal matrix containing the weights W_ii = w_i to be assigned to the equation of pixel q_i. The weight w_i is usually set to a Gaussian function of the distance between q_i and p.
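A sketch of the weighted variant follows. The derivative arrays and the squared distances of the pixels from the window centre are hypothetical precomputed inputs, and the Gaussian width sigma is an illustrative choice:

```python
import numpy as np

def solve_lk_weighted(Ix, Iy, It, d2, sigma=1.5):
    """Weighted solve A^T W A v = A^T W b for one window.

    Ix, Iy, It: 1-D derivative arrays at the window pixels q_i.
    d2: squared distance of each q_i from the centre pixel p.
    sigma: width of the Gaussian weighting (arbitrary here)."""
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # diagonal entries of W
    S = np.array([[np.sum(w * Ix * Ix), np.sum(w * Ix * Iy)],
                  [np.sum(w * Ix * Iy), np.sum(w * Iy * Iy)]])
    rhs = -np.array([np.sum(w * Ix * It), np.sum(w * Iy * It)])
    return np.linalg.solve(S, rhs)
```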
For the equation A^T A v = A^T b to be solvable, A^T A should be invertible; equivalently, its eigenvalues should satisfy λ_1 ≥ λ_2 > 0. To avoid noise issues, λ_2 is usually also required to be not too small. Moreover, if the ratio λ_1/λ_2 is too large, the point p lies on an edge, and this method suffers from the aperture problem. So for this method to work properly, the condition is that λ_1 and λ_2 are large enough and of similar magnitude; this is also the condition used for corner detection. This observation shows that one can easily tell which pixels are suitable for the Lucas–Kanade method to work on by inspecting a single image.
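This suitability test can be sketched as a simple predicate on the eigenvalues of the structure tensor; the thresholds below are illustrative, not canonical:

```python
import numpy as np

def is_trackable(Ix, Iy, min_eig=1.0, max_ratio=10.0):
    """Check whether a window is suitable for Lucas-Kanade.

    Requires both eigenvalues of the structure tensor to be large
    enough and of similar magnitude (the corner condition).
    min_eig and max_ratio are illustrative thresholds."""
    S = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    lam2, lam1 = np.linalg.eigvalsh(S)   # ascending: lam2 <= lam1
    return bool(lam2 > min_eig and lam1 / max(lam2, 1e-12) < max_ratio)
```

A window with strong gradients in two directions (a corner) passes; a window whose gradients all point one way (an edge) fails.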
One main assumption of this method is that the motion is small (for example, less than one pixel between two frames).
If the motion is large and violates this assumption, one technique is to first reduce the resolution of the images and then apply the Lucas–Kanade method.[3]
To achieve motion tracking with this method, the flow vector can be applied and recomputed iteratively, until some threshold near zero is reached, at which point the image windows can be assumed to be very similar.[1]
By doing this for each successive tracking window, the point can be tracked through several images of a sequence, until it is either obscured or goes out of frame.
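The iterative refinement can be sketched generically. Here `step` is a hypothetical callable standing in for one Lucas–Kanade solve at the current position; the loop applies the estimated flow and recomputes until the update falls below a threshold:

```python
import numpy as np

def track_point(step, p0, max_iters=20, eps=1e-3):
    """Iteratively refine the position of a tracked point.

    step(p) -> incremental flow estimate at position p (a
    hypothetical single Lucas-Kanade solve supplied by the caller).
    Stops once the update magnitude drops below eps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iters):
        dv = np.asarray(step(p))   # one flow estimate at p
        p = p + dv                 # apply the flow
        if np.linalg.norm(dv) < eps:
            break                  # windows are now very similar
    return p
```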
The least-squares approach implicitly assumes that the errors in the image data have a Gaussian distribution with zero mean.
If one expects the window to contain a certain percentage of "outliers" (grossly wrong data values, that do not follow the "ordinary" Gaussian error distribution), one may use statistical analysis to detect them, and reduce their weight accordingly.
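One such scheme is iteratively reweighted least squares: solve, inspect the residuals, down-weight pixels whose residuals are large, and re-solve. In this sketch the Huber-style weighting and the threshold k are illustrative choices, not part of the original method:

```python
import numpy as np

def robust_lk(A, b, iters=10, k=1.0):
    """Least-squares flow with outlier down-weighting (IRLS sketch).

    A: n x 2 matrix of spatial gradients, b: length-n right-hand
    side. Pixels whose residual |r_i| exceeds k get weight k/|r_i|
    (a Huber-style choice; k is illustrative)."""
    w = np.ones(len(b))
    v = np.zeros(2)
    for _ in range(iters):
        Aw = A * w[:, None]                  # rows scaled by weights
        v = np.linalg.solve(Aw.T @ A, Aw.T @ b)
        r = np.abs(A @ v - b)                # per-pixel residual
        w = np.where(r > k, k / np.maximum(r, 1e-12), 1.0)
    return v
```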
The Lucas–Kanade method per se can be used only when the image flow vector
between the two frames is small enough for the differential equation of the optical flow to hold, which is often less than the pixel spacing.
When the flow vector may exceed this limit, as in stereo matching or warped document registration, the Lucas–Kanade method may still be used to refine a coarse estimate obtained by other means, for example by extrapolating the flow vectors computed for previous frames, or by running the Lucas–Kanade algorithm on reduced-scale versions of the images.
Indeed, the latter method is the basis of the popular Kanade–Lucas–Tomasi (KLT) feature matching algorithm.
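A coarse-to-fine scheme of this kind can be sketched as follows. The per-level solver `flow_fn` is a hypothetical stand-in for a single-scale Lucas–Kanade estimate; real pyramids also smooth before downsampling and interpolate when warping:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def pyramid_flow(flow_fn, f1, f2, levels=3):
    """Coarse-to-fine flow: estimate on the smallest image pair
    first, then double the estimate and refine at each finer level.

    flow_fn(f1, f2, guess) -> refined flow vector (a hypothetical
    single-level Lucas-Kanade solver supplied by the caller)."""
    pairs = [(f1, f2)]
    for _ in range(levels - 1):
        f1, f2 = downsample(f1), downsample(f2)
        pairs.append((f1, f2))
    v = np.zeros(2)
    for a, b in reversed(pairs):    # coarsest level first
        v = flow_fn(a, b, 2.0 * v)  # upscale the previous estimate
    return v
```

Because each level halves the image size, a motion of several pixels at full resolution becomes sub-pixel at a coarse enough level, restoring the small-motion assumption.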
A similar technique can be used to compute differential affine deformations of the image contents.