Image fusion

In remote sensing applications, the increasing availability of space-borne sensors motivates different image fusion algorithms.[4]

However, standard image fusion techniques can distort the spectral information of the multispectral data during merging.

At the receiver station, the panchromatic image is merged with the multispectral data to convey more information.
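As an illustration of merging panchromatic and multispectral data, the following sketch implements the Brovey transform, one common pansharpening method; it is not necessarily the method used at any particular receiver station, and the function name is hypothetical:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Fuse a multispectral image (already resampled to the panchromatic
    grid) with a high-resolution panchromatic band using the Brovey
    transform: each band is scaled by pan / mean(bands).

    ms:  (H, W, B) multispectral array
    pan: (H, W) panchromatic array
    """
    intensity = ms.mean(axis=2)        # synthetic intensity from the bands
    ratio = pan / (intensity + eps)    # per-pixel gain, eps avoids div-by-zero
    return ms * ratio[..., None]       # rescale every band by the gain
```

Because each band is multiplied by the same ratio, the Brovey transform injects spatial detail from the panchromatic band but can alter band ratios, which is one source of the spectral distortion mentioned above.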

Later techniques are based on the discrete wavelet transform, uniform rational filter banks, and the Laplacian pyramid.
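To make the pyramid-based approach concrete, here is a minimal sketch of Laplacian-pyramid fusion in NumPy/SciPy, assuming two pre-aligned grayscale images and a simple choose-max rule at each level (the function names and parameters are illustrative, not from the literature cited here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_laplacian_pyramid(img, levels=3, sigma=1.0):
    """Return the Laplacian pyramid of a 2-D float image
    (detail images at each scale plus the coarsest residual)."""
    gauss = [img]
    for _ in range(levels):
        blurred = gaussian_filter(gauss[-1], sigma)
        gauss.append(blurred[::2, ::2])           # blur, then decimate by 2
    lap = []
    for fine, coarse in zip(gauss[:-1], gauss[1:]):
        up = zoom(coarse, (fine.shape[0] / coarse.shape[0],
                           fine.shape[1] / coarse.shape[1]), order=1)
        lap.append(fine - up)                     # band-pass detail
    lap.append(gauss[-1])                         # low-pass residual
    return lap

def fuse_laplacian(img_a, img_b, levels=3):
    """Fuse two aligned images by keeping, at each pyramid level,
    the coefficient with the larger absolute value (more detail)."""
    la = build_laplacian_pyramid(img_a, levels)
    lb = build_laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la, lb)]
    out = fused[-1]                               # collapse coarse to fine
    for detail in reversed(fused[:-1]):
        up = zoom(out, (detail.shape[0] / out.shape[0],
                        detail.shape[1] / out.shape[1]), order=1)
        out = up + detail
    return out
```

A wavelet-based method follows the same analyze-select-synthesize pattern, with the pyramid replaced by a wavelet decomposition.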

Multi-sensor data fusion has become a discipline that demands more general formal solutions to a number of application cases.

Categories of image fusion metrics are based on information theory,[4] features, structural similarity, or human perception.
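A typical information-theoretic metric is the mutual information between a source image and the fused result. The following sketch estimates it from a joint grayscale histogram; the bin count and function name are illustrative choices, not a standardized definition:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information (in bits) between two equally sized
    images from their joint histogram. Higher values suggest the fused
    image retains more information from the source."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of img_b
    nz = pxy > 0                               # skip empty bins: log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

Structural-similarity metrics such as SSIM instead compare local luminance, contrast, and structure rather than global statistics.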

In many applications of visual sensor networks (VSN), a camera cannot capture an image that shows all details of the scene in focus.

Only the objects located at the focal distance of the camera are in focus and clear, while the other parts of the image are blurred.[8]

A VSN can capture images with different depths of focus in the scene using several cameras.

Due to the large amount of data generated by cameras compared to other sensors, such as pressure and temperature sensors, and due to limitations such as restricted bandwidth, energy consumption, and processing time, it is essential to process the local input images so as to decrease the amount of data transmitted.
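One simple way a VSN node could fuse differently focused images locally, so that only a single all-in-focus image needs to be transmitted, is to pick the sharper source at each pixel. This is a sketch under the assumption of two pre-aligned grayscale images, using windowed Laplacian energy as the sharpness measure:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def multifocus_fuse(img_a, img_b, window=9):
    """Fuse two differently focused images of the same scene by taking,
    per pixel, the source with the higher local sharpness (measured as
    the windowed energy of the Laplacian)."""
    sharp_a = uniform_filter(laplace(img_a) ** 2, size=window)
    sharp_b = uniform_filter(laplace(img_b) ** 2, size=window)
    return np.where(sharp_a >= sharp_b, img_a, img_b)
```

Transmitting only the fused image instead of every source frame directly addresses the bandwidth and energy constraints described above.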

The LANDSAT TM satellite, for example, provides low-resolution (30 m pixel) multispectral images.

In medical imaging, the term is used when multiple images of a patient are registered and overlaid or merged to provide additional information.[11]