Deconvolution

The foundations for deconvolution and time-series analysis were largely laid by Norbert Wiener of the Massachusetts Institute of Technology in his book Extrapolation, Interpolation, and Smoothing of Stationary Time Series (1949).

The objective of deconvolution is to find the solution f of a convolution equation of the form f ∗ g = h. Usually, h is a distorted version of f, and the shape of f cannot easily be recognized by eye or by simpler time-domain operations.

The function g represents the impulse response of an instrument or a driving force that was applied to a physical system.

In physical measurements, the situation is usually closer to h = (f ∗ g) + ε, where ε is noise that has entered our recorded signal.

That is the reason why inverse filtering the signal ("raw deconvolution", i.e., dividing the spectrum of h by that of g) is usually not a good solution: the division amplifies the noise at frequencies where g has little energy. If at least some knowledge of the noise exists, however, the estimate of f can be improved, for example by Wiener deconvolution.
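
To make this concrete, the following sketch (with a made-up sparse signal, an assumed Gaussian impulse response, and synthetic noise) contrasts raw deconvolution by spectral division with Wiener deconvolution; the `snr` value is an assumed signal-to-noise power ratio, not something given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
f = np.zeros(n)
f[40], f[90], f[150] = 1.0, -0.6, 0.8            # made-up sparse "true" signal

t = np.arange(n)
d = np.minimum(t, n - t)                          # circular distance from sample 0
g = np.exp(-d**2 / (2 * 3.0**2))                  # assumed Gaussian impulse response
g /= g.sum()

h = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real   # h = f * g (circular convolution)
h += 0.01 * rng.standard_normal(n)                    # additive noise epsilon

H, G = np.fft.fft(h), np.fft.fft(g)
raw = np.fft.ifft(H / G).real                     # raw deconvolution: divide spectra
snr = 1e4                                         # assumed signal-to-noise power ratio
wiener = np.fft.ifft(H * np.conj(G) / (np.abs(G)**2 + 1.0 / snr)).real

print(f"raw error:    {np.abs(raw - f).max():.3g}")     # enormous: noise / tiny G
print(f"wiener error: {np.abs(wiener - f).max():.3g}")  # modest
```

Because the Gaussian's spectrum is vanishingly small at high frequencies, the raw estimate is dominated by amplified noise, while the Wiener filter suppresses exactly those frequencies.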

In seismology, Enders Robinson, then a graduate student at MIT, worked with others there, such as Norbert Wiener, Norman Levinson, and economist Paul Samuelson, to develop the "convolutional model" of a reflection seismogram.

The reflectivity may be recovered by designing and applying a Wiener filter that shapes the estimated wavelet to a Dirac delta function (i.e., a spike).

The result may be seen as a series of scaled, shifted delta functions (although this is not mathematically rigorous): e(t) = Σ_{i=1}^{N} r_i δ(t − τ_i), where N is the number of reflection events, r_i are the reflection coefficients, τ_i are the reflection times of each event, and δ is the Dirac delta function.

However, by formulating the problem as the solution of a Toeplitz system of equations and using Levinson recursion, we can relatively quickly estimate a filter with the smallest mean squared error possible.
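
As a sketch of that approach, the following uses SciPy's solve_toeplitz (which implements Levinson recursion) to design a least-squares spiking filter; the wavelet, filter length, and prewhitening level are illustrative assumptions, not values from the text.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_filter(wavelet, n_taps, prewhitening=1e-3):
    """Least-squares inverse filter shaping `wavelet` toward a spike at lag 0."""
    # The wavelet's autocorrelation gives the first column of the Toeplitz
    # matrix; lags beyond the wavelet length are zero.
    full = np.correlate(wavelet, wavelet, mode="full")
    r = np.zeros(n_taps)
    m = min(n_taps, len(wavelet))
    r[:m] = full[len(wavelet) - 1 : len(wavelet) - 1 + m]
    r[0] *= 1.0 + prewhitening                # stabilize the inversion
    # Right-hand side: cross-correlation of the desired spike with the
    # wavelet, which reduces to the wavelet's first sample at lag 0.
    rhs = np.zeros(n_taps)
    rhs[0] = wavelet[0]
    # Levinson recursion solves the Toeplitz system in O(n_taps^2).
    return solve_toeplitz(r, rhs)

wavelet = np.array([1.0, -0.8, 0.3, -0.1])    # assumed, roughly minimum-phase
filt = spiking_filter(wavelet, n_taps=32)
print(np.round(np.convolve(filt, wavelet)[:5], 3))   # approximately 1, 0, 0, 0, 0
```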

In optical microscopy, deconvolution is usually performed in the digital domain by a software algorithm, as part of a suite of microscope image processing techniques.

Deconvolution is also practical for sharpening images that suffer from fast motion or camera shake during capture.

Early Hubble Space Telescope images were distorted by a flawed mirror and were sharpened by deconvolution.

The usual method is to assume that the optical path through the instrument is optically perfect, convolved with a point spread function (PSF), that is, a mathematical function that describes the distortion in terms of the pathway a theoretical point source of light (or other waves) takes through the instrument.[3]

Usually, such a point source contributes a small area of fuzziness to the final image.

In practice, finding the true PSF is impossible, and usually an approximation of it is used, either calculated theoretically[4] or estimated experimentally using known probes.[3]
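
For illustration, here is a minimal sketch of one common iterative deconvolution scheme, the Richardson-Lucy algorithm (referenced in a figure caption below), applied with an assumed Gaussian PSF; the PSF width, image size, and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, num_iter=30):
    """Iteratively estimate the undistorted image given an (approximate) PSF."""
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]            # adjoint of convolution with the PSF
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)     # guard against division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Hypothetical Gaussian PSF, normalized so it conserves total intensity.
x = np.arange(-7, 8)
gauss = np.exp(-x**2 / (2 * 2.0**2))
psf = np.outer(gauss, gauss)
psf /= psf.sum()

# Usage: blur a synthetic point source, then try to recover it.
true = np.zeros((64, 64))
true[32, 32] = 1.0
observed = fftconvolve(true, psf, mode="same")
restored = richardson_lucy(observed, psf)
```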

Blind deconvolution is a well-established image restoration technique in astronomy, where the point nature of the objects photographed exposes the PSF, making deconvolution more feasible.

Division of the time-domain data by an exponential function reduces the width of Lorentzian lines in the frequency domain: an exponential decay in the time domain corresponds to a Lorentzian line shape in the frequency domain, so dividing out part of the decay slows the effective decay and narrows the line.
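
A small numerical illustration of this effect, using a synthetic decaying cosine (standing in for, say, an NMR free-induction decay); the frequency, decay constants, and sampling parameters are assumed values.

```python
import numpy as np

dt, n = 1e-3, 4096
t = np.arange(n) * dt
fid = np.cos(2 * np.pi * 50 * t) * np.exp(-t / 0.05)   # 50 Hz line, decay T2 = 50 ms
sharpened = fid * np.exp(t / 0.10)                     # divide by exp(-t / 0.10)

def linewidth(signal):
    """Crude full width (Hz) of the amplitude spectrum at half its peak."""
    spec = np.abs(np.fft.rfft(signal))
    return np.sum(spec > spec.max() / 2) / (n * dt)

print(linewidth(fid), linewidth(sharpened))   # the second is about half the first
```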

Figure: Before-and-after deconvolution of an image of the lunar crater Copernicus using the Richardson-Lucy algorithm.
Figure: Example of a deconvolved microscope image.
Figure: High-resolution THz imaging by deconvolution: (a) THz image of an integrated circuit (IC) before enhancement; (b) mathematically modeled THz PSF; (c) high-resolution THz image obtained by deconvolving (a) with the PSF in (b); (d) high-resolution X-ray image confirming the accuracy of the measured values.[5]