Computational photography

Light field cameras use novel optical elements to capture three-dimensional scene information, which can then be used to produce 3D images, enhanced depth of field, and selective de-focusing (or "post-focus").[8]
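
Post-focus is commonly implemented by shift-and-add refocusing of the captured 4D light field. The following is a minimal sketch under assumed conventions: the light field is a NumPy array indexed as `L[u, v, s, t]` (aperture coordinates u, v; pixel coordinates s, t), integer-pixel shifts are used, and the function and parameter names are illustrative, not any particular camera's API.

```python
# A minimal sketch of synthetic refocusing ("post-focus") from a 4D light
# field. Assumptions: light_field[u, v] is one sub-aperture view, and the
# shift of each view is proportional to its aperture offset; varying
# `alpha` moves the synthetic focal plane.
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Shift each sub-aperture image by alpha * (aperture offset), then average."""
    n_u, n_v, _, _ = light_field.shape
    cu, cv = (n_u - 1) / 2, (n_v - 1) / 2
    out = np.zeros(light_field.shape[2:], dtype=np.float64)
    for u in range(n_u):
        for v in range(n_v):
            ds = int(round(alpha * (u - cu)))
            dt = int(round(alpha * (v - cv)))
            # Integer-pixel shift of one sub-aperture view.
            out += np.roll(light_field[u, v], shift=(ds, dt), axis=(0, 1))
    return out / (n_u * n_v)
```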

Other examples include processing and merging differently illuminated images of the same subject matter ("lightspace").
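
Because light adds linearly in a radiometrically linear image, pictures of the same scene each lit by a single source can be recombined with arbitrary weights to synthesize new illumination. The sketch below illustrates that idea under stated assumptions (linearized inputs normalized to [0, 1], one light source per image); the names and weights are hypothetical.

```python
# A minimal sketch of merging differently illuminated images: a weighted
# sum of single-light-source images synthesizes the scene under any
# mixture of those sources. Assumes linear-radiance inputs in [0, 1].
import numpy as np

def relight(images: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Synthesize a new illumination as a weighted sum of single-light images."""
    out = np.zeros_like(images[0], dtype=np.float64)
    for img, w in zip(images, weights):
        out += w * img.astype(np.float64)
    return np.clip(out, 0.0, 1.0)

# Example (hypothetical inputs): emphasize the key light, dim the fill.
# composite = relight([key, fill, rim], [1.0, 0.4, 0.25])
```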

Coded apertures can also improve the quality of light field acquisition using Hadamard transform optics.
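
The gain from Hadamard multiplexing comes from measuring weighted combinations of the signal and then demultiplexing with the (scaled) transpose. Here is a minimal sketch of that linear model, assuming idealized noiseless measurements; the sizes are illustrative, and physical masks would realize the ±1 patterns with complementary 0/1 apertures.

```python
# A minimal sketch of Hadamard-multiplexed acquisition: each measurement
# weights the signal by one row of a Hadamard matrix. Since H @ H.T = n*I,
# demultiplexing is a single matrix product.
import numpy as np
from scipy.linalg import hadamard

n = 16                      # number of mask patterns = number of measurements
H = hadamard(n)             # entries are +1 / -1
x = np.random.rand(n)       # unknown signal (e.g. light field samples)

y = H @ x                   # multiplexed measurements, one per mask
x_hat = (H.T @ y) / n       # demultiplex: H is orthogonal up to a factor n

assert np.allclose(x, x_hat)
```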

Computational sensors are detectors that combine sensing and processing, typically in hardware, such as the oversampled binary image sensor.
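
An oversampled binary image sensor subdivides each output pixel into many tiny detectors that each report only "at least one photon" versus "no photons"; the light level is then recovered computationally from the fraction of fired detectors. A minimal simulation of that estimator follows; the oversampling factor and light level are assumptions for illustration.

```python
# A minimal sketch of an oversampled binary image sensor. Under a Poisson
# model with per-detector mean `lam`, P(detector fires) = 1 - exp(-lam),
# so lam can be estimated from the fraction of fired detectors.
import numpy as np

rng = np.random.default_rng(0)
oversampling = 4096          # binary detectors per output pixel (assumed)
lam = 0.8                    # mean photon count per binary detector (assumed)

photons = rng.poisson(lam, size=oversampling)
bits = (photons > 0).astype(np.float64)   # each detector saturates at 1

frac = bits.mean()
lam_hat = -np.log1p(-frac)   # maximum-likelihood estimate: -ln(1 - frac)

print(f"true {lam:.3f}, estimated {lam_hat:.3f}")
```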

Computational photography, as an art form, has been practiced by capturing differently exposed pictures of the same subject matter and combining them.
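
A common way to combine such a set is a certainty-weighted average of per-image radiance estimates. The sketch below assumes the inputs are already linearized (camera response removed), registered, and normalized to [0, 1]; the hat-shaped weighting function and exposure-time parameters are illustrative assumptions, not a specific author's method.

```python
# A minimal sketch of merging differently exposed pictures into one
# high-dynamic-range estimate: each pixel's radiance is a weighted average
# of (pixel value / exposure time), distrusting values near under- and
# over-exposure. Assumes linearized, registered inputs in [0, 1].
import numpy as np

def merge_hdr(images: list[np.ndarray], exposures: list[float]) -> np.ndarray:
    """Weighted average of radiance estimates from differently exposed images."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposures):
        img = img.astype(np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)     # hat weight, peak at mid-gray
        num += w * img / t                    # per-image radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)        # avoid division by zero
```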

Computational photography was inspired by the work of Charles Wyckoff, and thus computational photography datasets (e.g., differently exposed pictures of the same subject matter taken in order to make a single composite image) are sometimes referred to as Wyckoff sets in his honor.

Early work in this area (joint estimation of image projection and exposure value) was undertaken by Mann and Candoccia.

Charles Wyckoff devoted much of his life to creating special kinds of 3-layer photographic films that captured different exposures of the same subject matter.

A picture of a nuclear explosion, taken on Wyckoff's film, appeared on the cover of Life magazine and showed the dynamic range from the dark outer areas to the inner core.

Computational photography provides many new capabilities. This example combines HDR (high-dynamic-range) imaging with panoramics (image stitching) by optimally combining information from multiple differently exposed pictures of overlapping subject matter.[1][2][3][4][5]
A 1981 wearable computational photography apparatus. Wearable computational photography originated in the 1970s and early 1980s and has evolved into a more recent art form. This picture was used on the cover of the John Wiley and Sons textbook on the subject.