Geometric feature learning

Geometric feature learning is a technique combining machine learning and computer vision to solve visual tasks.

The main goal of this method is to find a set of representative geometric features that describe an object, by collecting geometric features from images and learning them using efficient machine learning methods.

Humans solve visual tasks and respond quickly to their environment by extracting perceptual information from what they see.

Researchers simulate humans' ability of recognizing objects to solve computer vision problems.

For example, M. Mata et al. (2002)[1] applied feature learning techniques to mobile robot navigation tasks in order to avoid obstacles.

They used genetic algorithms to learn features and recognise objects (figures).

Geometric feature learning methods can not only solve recognition problems but also predict subsequent actions by analysing a set of sequential input sensory images, usually some features extracted from the images.

Through learning, several hypotheses about the next action are generated, and the most probable action is chosen according to the probability of each hypothesis.
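The selection step above can be sketched as follows. This is a minimal illustration, not the source's method: the action names and probabilities are invented, and a real system would derive the probabilities from the learned model.

```python
# Illustrative sketch: pick the most probable next action from a set of
# hypotheses scored by probability (hypothetical names and values).

def most_probable_action(hypotheses):
    """Return the action whose hypothesis has the highest probability."""
    return max(hypotheses, key=hypotheses.get)

hypotheses = {
    "turn_left": 0.2,     # hypothetical probabilities from the learned model
    "turn_right": 0.1,
    "move_forward": 0.7,
}
print(most_probable_action(hypotheses))  # move_forward
```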

This technique is widely used in the area of artificial intelligence.

Geometric feature learning methods extract distinctive geometric features from images.

Geometric features are features of objects constructed by a set of geometric elements like points, lines, curves or surfaces.
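As a concrete sketch of how such elements yield feature values, the snippet below summarises a line segment (the simplest geometric element beyond a point) by its length and orientation. The representation is an assumption chosen for illustration, not a method from the source.

```python
import math

# Sketch of a geometric feature built from two points: a line segment
# summarised by simple measurements (length and orientation).

def segment_features(p1, p2):
    """Length and orientation (in degrees) of the line segment p1 -> p2."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx))
    return length, angle

length, angle = segment_features((0, 0), (3, 4))
print(length, angle)  # 5.0 and roughly 53.13 degrees
```

Richer geometric features (curves, surfaces) would add further measurements, but the principle is the same: geometric elements are reduced to numeric descriptors that a learner can consume.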

The feature space was first considered in the computer vision area by Segen,[4] who used a multilevel graph to represent the geometric relations of local features.

There are many learning algorithms which can be applied to learn distinctive features of objects in an image.

Learning can be incremental, meaning that the object classes can be added at any time.

2. According to the recognition algorithm, evaluate the result. If the result is true, new object classes are recognised.

After recognising the features, the results should be evaluated to determine whether the classes can be recognised. There are five evaluation categories of recognition results: correct, wrong, ambiguous, confused and ignorant.

If recognition fails, the distinctive power of the feature nodes, which is defined by the Kolmogorov–Smirnov distance (KSD), should be maximised.
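The source gives no formula, but the Kolmogorov–Smirnov distance between two empirical samples is standard: the maximum gap between their cumulative distributions. The sketch below computes it for two hypothetical sets of feature values, one per object class; a larger value would indicate a more distinctive feature.

```python
# Kolmogorov-Smirnov distance between two empirical samples: the maximum
# gap between their empirical cumulative distribution functions.

def ks_distance(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)
    values = sorted(set(a) | set(b))
    cdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in values)

# Two hypothetical feature-value samples from different object classes:
print(ks_distance([1, 2, 3, 4], [3, 4, 5, 6]))  # 0.5
```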

3. Feature learning algorithm: after a feature is recognised, it should be applied to a Bayesian network to recognise the image, using the feature learning algorithm to test.

D. Roth (2002) applied the probably approximately correct (PAC) model to solve computer vision problems by developing a distribution-free learning theory based on this model.[5] This theory relied heavily on the development of a feature-efficient learning approach.

The goal of this algorithm is to learn an object represented by some geometric features in an image.

The input is a feature vector, and the output is 1 if the object is successfully detected, or 0 otherwise.

The main point of this learning approach is to collect representative elements which can represent the object through a function, and to test it by recognising an object from an image, finding the representation with high probability.

The learner predicts whether a given instance x ∈ X belongs to a class, where X is the instance space consisting of the parameters, and then tests whether the prediction is correct.
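A feature-efficient on-line learner of this kind can be sketched with the classic Winnow algorithm, which learns a linear threshold function over Boolean feature vectors and outputs 1 (object detected) or 0 otherwise. This is an illustrative stand-in, not Roth's actual implementation, and the training data below is invented (the target concept is "feature 0 is active").

```python
# Winnow: a feature-efficient on-line learner over Boolean feature vectors.
# Mistakes on positive examples promote active features; mistakes on
# negative examples demote them.

def winnow_train(examples, n, threshold=None, alpha=2.0):
    """examples: list of (feature_vector, label) with labels in {0, 1}."""
    threshold = threshold if threshold is not None else n / 2
    w = [1.0] * n
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0
        if pred == 0 and y == 1:      # promote active features
            w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
        elif pred == 1 and y == 0:    # demote active features
            w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w, threshold

def winnow_predict(w, threshold, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0

# Hypothetical training data: the object is present iff feature 0 is active.
data = [([1, 0, 0, 1], 1), ([0, 1, 1, 0], 0),
        ([1, 1, 0, 0], 1), ([0, 0, 1, 1], 0)]
w, t = winnow_train(data * 5, n=4)
print(winnow_predict(w, t, [1, 0, 1, 0]))  # 1
```

Winnow's mistake bound grows only logarithmically with the number of irrelevant features, which is the sense in which such an approach is "feature-efficient".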

D. Roth applied two learning algorithms, one of which is the support vector machine (SVM). The main purpose of an SVM is to find a hyperplane that separates the set of training samples, where each input vector is a selection of features.
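The separating-hyperplane idea can be sketched with a plain perceptron, which finds some separating hyperplane on linearly separable data; a real SVM additionally maximises the margin. The 2-D samples below are invented for illustration.

```python
# Sketch of finding a separating hyperplane w.x + b = 0 with a perceptron.
# Labels are in {-1, +1}; each misclassified sample nudges the hyperplane.

def find_hyperplane(samples, epochs=100):
    """samples: list of (x, y) with x a feature vector and y in {-1, +1}."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in samples:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

samples = [([2.0, 2.0], 1), ([3.0, 3.0], 1),
           ([-1.0, -2.0], -1), ([-2.0, -1.0], -1)]
w, b = find_hyperplane(samples)
# Every sample now lies on the correct side of the hyperplane:
assert all((sum(wi * xi for wi, xi in zip(w, x)) + b) * y > 0
           for x, y in samples)
```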