Feature (computer vision)

More broadly, a feature is any piece of information that is relevant for solving the computational task related to a certain application.

Feature detectors vary widely in the kinds of features they detect, their computational complexity, and their repeatability.

Although local decisions are made, the output from a feature detection step does not need to be a binary image.

The result is often represented in terms of sets of (connected or unconnected) coordinates of the image points where features have been detected, sometimes with subpixel accuracy.
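As an illustration, the following Python sketch shows one way such a coordinate-set representation can be produced: local maxima of a hypothetical feature-response map are collected as (row, col) pairs, and a standard quadratic (parabola-vertex) fit refines each maximum to subpixel accuracy. The function names and the response map are illustrative, not part of any specific detector.

```python
import numpy as np

def local_maxima_coords(response, threshold):
    """Return (row, col) coordinates of strict local maxima above `threshold`.

    `response` is assumed to be a 2-D feature-response map.
    """
    r = np.asarray(response, dtype=float)
    interior = r[1:-1, 1:-1]
    mask = interior > threshold
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = r[1 + dy:r.shape[0] - 1 + dy, 1 + dx:r.shape[1] - 1 + dx]
            mask &= interior > neighbor
    rows, cols = np.nonzero(mask)
    return np.stack([rows + 1, cols + 1], axis=1)  # back to full-image coordinates

def subpixel_refine(r, y, x):
    """Refine an interior integer maximum to subpixel accuracy with a 1-D
    quadratic fit along each axis (assumes a strict maximum, so the
    denominators are nonzero)."""
    dy = 0.5 * (r[y - 1, x] - r[y + 1, x]) / (r[y - 1, x] - 2 * r[y, x] + r[y + 1, x])
    dx = 0.5 * (r[y, x - 1] - r[y, x + 1]) / (r[y, x - 1] - 2 * r[y, x] + r[y, x + 1])
    return y + dy, x + dx
```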

In some applications, it is not sufficient to extract only one type of feature to obtain the relevant information from the image data.[1]

A common example of feature vectors appears when each image point is to be classified as belonging to a specific class.
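The sketch below illustrates this idea: a few elementary measurements (gradient magnitude, smoothed intensity, local standard deviation; an illustrative rather than canonical choice) are stacked into one feature vector per pixel.

```python
import numpy as np
from scipy import ndimage

def per_pixel_feature_vectors(image):
    """Stack elementary measurements into one feature vector per image point."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)                      # intensity gradients
    grad_mag = np.hypot(gx, gy)                    # edge strength
    smoothed = ndimage.gaussian_filter(img, sigma=2.0)
    local_mean = ndimage.uniform_filter(img, size=5)
    local_sq = ndimage.uniform_filter(img ** 2, size=5)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    return np.dstack([grad_mag, smoothed, local_std])  # shape (H, W, 3)
```

Reshaping the result to (H*W, 3) yields one feature vector per point, in the form expected by most point-wise classifiers.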

Another, related example occurs when neural network-based processing is applied to images.

During a learning phase, the network can itself find which combinations of different features are useful for solving the problem at hand.
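A minimal sketch of this idea, using a small convolutional network (the architecture here is hypothetical, chosen only to show the structure):

```python
import torch
import torch.nn as nn

# Early convolution kernels act as learned local feature detectors; later
# layers learn which combinations of their responses help solve the task.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),    # 8 learned local features
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),   # combinations of features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                             # e.g. a two-class decision
)

logits = model(torch.randn(1, 1, 64, 64))  # one dummy grayscale image
```

During training, both the individual kernels and their combinations are adjusted jointly by gradient descent.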

In practice, edges are usually defined as sets of points in the image that have a strong gradient magnitude.
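Concretely, this definition can be implemented with a derivative filter and a threshold, as in the following sketch (the Sobel operator and the threshold value are illustrative choices):

```python
import numpy as np
from scipy import ndimage

def strong_gradient_points(image, threshold):
    """Boolean mask of image points whose gradient magnitude exceeds `threshold`."""
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=1)   # horizontal derivative
    gy = ndimage.sobel(img, axis=0)   # vertical derivative
    return np.hypot(gx, gy) > threshold
```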

Furthermore, some common algorithms will then chain high-gradient points together to form a more complete description of an edge.

These algorithms usually place some constraints on the properties of an edge, such as shape, smoothness, and gradient value.
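The Canny detector is a widely used example of this approach: it thins the gradient-magnitude map by non-maximum suppression and links strong edge points through hysteresis thresholding, which imposes a gradient-value constraint along each chain. A sketch using OpenCV (the file name and thresholds are placeholder values):

```python
import cv2

image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
edges = cv2.Canny(image, threshold1=50, threshold2=150)

# Group the linked edge pixels into explicit point chains (OpenCV 4 API).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
```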

The terms corners and interest points are used somewhat interchangeably and refer to point-like features in an image, which have a local two-dimensional structure.

Corner detection algorithms were subsequently developed so that explicit edge detection was no longer required, for instance by looking for high levels of curvature in the image gradient.
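The Harris detector is a standard example: its response is large only where the gradient varies strongly in two directions, so corners are found without a separate edge detection step. A sketch using OpenCV (file name and parameter values are placeholders):

```python
import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)     # placeholder input
response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs
```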

Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like.
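A common blob detector is the scale-normalized Laplacian of Gaussian; the sketch below computes its response at one scale (sigma is an assumed, user-chosen parameter):

```python
import numpy as np
from scipy import ndimage

def log_blob_response(image, sigma):
    """Scale-normalised Laplacian-of-Gaussian response at one scale.

    Bright blobs of radius roughly sigma * sqrt(2) appear as strong minima
    of this response, dark blobs as strong maxima.
    """
    return sigma ** 2 * ndimage.gaussian_laplace(np.asarray(image, float), sigma)
```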

The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves or connected regions.

In some cases, a higher level of detail in the description of a feature may be necessary for solving the problem, but this comes at the cost of having to deal with more data and more demanding processing.

In the case of orientation, the value of this feature may be more or less undefined if more than one edge is present in the corresponding neighborhood.

Local velocity is undefined if the corresponding image region does not contain any spatial variation.
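Such situations motivate attaching a certainty measure to each computed feature value. For orientation, one common (but not the only) choice is the coherence of the local structure tensor, which drops toward zero both where several edge directions mix and where there is no spatial variation at all; the sketch below assumes grayscale input and an illustrative smoothing scale:

```python
import numpy as np
from scipy import ndimage

def orientation_and_certainty(image, sigma=2.0):
    """Local orientation from the structure tensor, with coherence as certainty."""
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    # Locally averaged structure tensor components.
    jxx = ndimage.gaussian_filter(gx * gx, sigma)
    jxy = ndimage.gaussian_filter(gx * gy, sigma)
    jyy = ndimage.gaussian_filter(gy * gy, sigma)
    orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    # Coherence = (l1 - l2) / (l1 + l2), from the tensor's eigenvalues l1 >= l2.
    coherence = np.hypot(jxx - jyy, 2 * jxy) / (jxx + jyy + 1e-12)
    return orientation, coherence
```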

This enables a new feature descriptor to be computed from several descriptors, for example descriptors computed at the same image point but at different scales, or at different but neighboring points, as a weighted average in which the weights are derived from the corresponding certainties.
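A sketch of this certainty-weighted combination (the array shapes are assumptions of this example):

```python
import numpy as np

def combine_descriptors(descriptors, certainties):
    """Certainty-weighted average of n feature descriptors.

    `descriptors`: array of shape (n, d), e.g. descriptors of the same point
    at different scales, or of neighboring points.
    `certainties`: length-n vector of non-negative weights.
    """
    d = np.asarray(descriptors, dtype=float)
    w = np.asarray(certainties, dtype=float)
    return (w[:, None] * d).sum(axis=0) / w.sum()
```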

In the simplest case, the corresponding computation can be implemented as a low-pass filtering of the feature image.
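For example, smoothing a scalar feature image with a Gaussian kernel averages each value with its neighbors' values, with weights that fall off with distance:

```python
import numpy as np
from scipy import ndimage

feature_image = np.random.rand(128, 128)             # placeholder feature map
smoothed = ndimage.gaussian_filter(feature_image, sigma=1.5)
```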