Camera auto-calibration

In the visual effects industry, camera auto-calibration is often part of the "match moving" process, in which a synthetic camera trajectory and intrinsic projection model are solved so that synthetic content can be reprojected into video.

Camera auto-calibration is a form of sensor ego-structure discovery: the subjective effects of the sensor are separated from the objective effects of the environment, yielding a reconstruction of the perceived world without the bias introduced by the measurement device.

This is achieved via the fundamental assumption that images are projected from a Euclidean space through a linear pinhole camera model with, in the simplest case, five degrees of freedom, optionally augmented by a non-linear optical distortion model.

The five linear pinhole parameters are the focal length, the aspect ratio, the skew, and the two coordinates of the principal point.
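These five parameters are conventionally collected into an upper-triangular intrinsic matrix K. A minimal numpy sketch, with illustrative parameter values (not taken from the text), showing how K maps a point in camera coordinates to pixel coordinates:

```python
import numpy as np

# Illustrative parameter values (assumptions for this sketch).
f = 800.0               # focal length in pixels
a = 1.0                 # aspect ratio
s = 0.0                 # skew
cx, cy = 320.0, 240.0   # 2D principal point

# The 5-DOF linear pinhole model as an upper-triangular
# intrinsic matrix K.
K = np.array([[f,   s,     cx],
              [0.0, a * f, cy],
              [0.0, 0.0,   1.0]])

# Project a 3D point given in camera coordinates: x ~ K X,
# then dehomogenize to obtain pixel coordinates (u, v).
X = np.array([0.1, -0.2, 2.0])
x_h = K @ X
u, v = x_h[:2] / x_h[2]
```

With these values the point lands at (u, v) = (360, 160); optical distortion, when modelled, would be applied as a separate non-linear warp of these coordinates.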

With only a set of uncalibrated (or calibrated) images, a scene may be reconstructed up to a six-degree-of-freedom Euclidean transform and an isotropic scaling, i.e. up to a similarity transform.

A mathematical theory for general multi-view camera self-calibration was originally demonstrated in 1992 by Olivier Faugeras, QT Luong, and Stephen J. Maybank.

For 3D scenes under general motion, each pair of views provides two constraints on the five-degree-of-freedom calibration.
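A back-of-the-envelope count (ignoring degenerate motions and possible dependencies among the pairwise constraints) shows why three views already suffice in principle for constant intrinsics:

```python
# Each pair of views contributes 2 constraints on the 5
# intrinsic degrees of freedom, so m views with constant
# intrinsics give 2 * m*(m-1)/2 = m*(m-1) constraints.
def constraints(num_views):
    return num_views * (num_views - 1)

# Smallest view count whose constraints cover the 5 DOF.
min_views = next(m for m in range(2, 10) if constraints(m) >= 5)
```

Here two views give only 2 constraints, while three views give 6 >= 5, which is consistent with the classical result that three images are in principle enough for self-calibration.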

For example, calibration may also be obtained if multiple sets of parallel lines or objects of known shape (e.g. circular) are identified in the images.
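As one concrete instance of using parallel lines: two families of parallel lines along orthogonal scene directions give two vanishing points, and under the simplifying assumptions of zero skew, unit aspect ratio, and a known principal point, their orthogonality constraint determines the focal length. A minimal numpy sketch with hypothetical vanishing-point coordinates (chosen here to be consistent with f = 800):

```python
import numpy as np

# Assumed principal point (e.g. the image centre).
p = np.array([320.0, 240.0])

# Vanishing points of two orthogonal scene directions, e.g.
# intersections of two detected families of parallel lines.
# These values are illustrative, not measured.
v1 = np.array([1120.0, 240.0])
v2 = np.array([-480.0, 240.0])

# With zero skew and unit aspect ratio, orthogonality of the
# two directions gives (v1 - p).(v2 - p) + f^2 = 0, so:
f = np.sqrt(-np.dot(v1 - p, v2 - p))
```

The constraint is the vanishing-point form of v1ᵀωv2 = 0, where ω is the image of the absolute conic; with more unknowns (skew, aspect ratio, principal point), more vanishing points or views are needed.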

Once the scene has been reconstructed up to a projective ambiguity (using, for example, the bundle adjustment method), we wish to find the rectifying homography that upgrades the projective reconstruction to a metric one.
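The reason the rectifying homography must be fixed by additional (self-)calibration constraints is that reprojection alone cannot distinguish the candidates: for any invertible 4x4 homography H, replacing cameras and points (P, X) with (P H⁻¹, H X) leaves every image measurement unchanged. A minimal numpy sketch of this invariance, using random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative 3x4 camera and homogeneous 3D points from a
# projective reconstruction (random stand-in data).
P = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 10))

# Any invertible 4x4 homography H yields another reconstruction
# (P @ inv(H), H @ X) with identical reprojections; a rectifying
# H is the particular one that makes the reconstruction metric.
H = rng.standard_normal((4, 4))
P2 = P @ np.linalg.inv(H)
X2 = H @ X

def dehom(x):
    # Pixel coordinates from homogeneous image points.
    return x[:2] / x[2]

reproj_a = dehom(P @ X)
reproj_b = dehom(P2 @ X2)
```

Since (P H⁻¹)(H X) = P X exactly, the two sets of reprojected points agree up to floating-point error, which is why constraints on the intrinsics (or on scene structure) are needed to single out the metric member of this family.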