VisualRank

The VisualRank algorithm uses both computer vision techniques and locality-sensitive hashing (LSH).[1]

An existing search technique based on image metadata and surrounding text retrieves the initial result candidates; these, together with other images in the index, are clustered in a graph according to their (precomputed) visual similarity.
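A minimal sketch of this graph-construction step in Python with NumPy (the function name, the dictionary input format, and the column normalization are illustrative assumptions rather than the published implementation):

```python
import numpy as np

def build_similarity_graph(similarities, n_images):
    """Assemble precomputed pairwise similarities into a symmetric
    weighted adjacency matrix and column-normalize it so that it can
    drive a random-walk ranking."""
    S = np.zeros((n_images, n_images))
    for (i, j), sim in similarities.items():   # {(i, j): similarity}
        S[i, j] = sim
        S[j, i] = sim                           # similarity is symmetric
    np.fill_diagonal(S, 0.0)                    # ignore self-similarity
    col_sums = S.sum(axis=0)
    col_sums[col_sums == 0] = 1.0               # keep isolated images well-defined
    return S / col_sums                         # column-stochastic matrix
```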

Centrality is then measured on this graph using a PageRank-style random walk, which returns the most canonical image(s) with respect to the query.
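The centrality computation can be sketched as a damped power iteration over the column-normalized similarity matrix from the previous step (the damping factor, tolerance, and iteration limit below are illustrative choices, not values from the original system):

```python
import numpy as np

def visual_rank(S, damping=0.85, tol=1e-6, max_iter=100):
    """Damped random-walk ranking over a column-stochastic similarity
    matrix S: iterate r = damping * S @ r + (1 - damping) * p, where p
    is a uniform teleportation vector.  Higher scores indicate more
    canonical images for the query."""
    n = S.shape[0]
    p = np.full(n, 1.0 / n)
    r = p.copy()
    for _ in range(max_iter):
        r_next = damping * (S @ r) + (1.0 - damping) * p
        if np.abs(r_next - r).sum() < tol:      # stop once scores stabilize
            return r_next
        r = r_next
    return r
```

With this sketch, `np.argsort(-visual_rank(S))` would order the candidate images from most to least canonical.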

Clearly, the image similarity measure is crucial to the performance of VisualRank since it determines the underlying graph structure.

Local feature descriptors are used instead of color histograms because they allow image similarity to be assessed even under rotation, scaling, and perspective transformations.
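For illustration, local descriptors can be extracted with SIFT via OpenCV; SIFT is used here only as a common example of a local feature descriptor, not necessarily the exact descriptor used by VisualRank:

```python
import cv2

def local_descriptors(image_path):
    """Extract SIFT keypoints and their 128-dimensional descriptors.
    Unlike a global color histogram, these local features can be
    matched across rotated, rescaled, or perspective-distorted views."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors
```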

Locality-sensitive hashing is then applied to these feature vectors using the p-stable distribution scheme.
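A minimal sketch of the p-stable scheme in its 2-stable (Gaussian) form: each hash value is floor((a·v + b)/w), so descriptors that are close in Euclidean distance tend to fall into the same bucket (the number of hash functions and the bucket width w are illustrative parameters):

```python
import numpy as np

class PStableLSH:
    """Locality-sensitive hashing with 2-stable (Gaussian) projections:
    h(v) = floor((a . v + b) / w)."""

    def __init__(self, dim, num_hashes=10, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((num_hashes, dim))  # Gaussian (2-stable) projections
        self.b = rng.uniform(0.0, w, size=num_hashes)    # random offsets in [0, w)
        self.w = w

    def hash(self, v):
        """Return a tuple of bucket indices; nearby descriptors are
        likely to collide in the same buckets."""
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))
```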