Color balance

Image data acquired by sensors – either film or electronic image sensors – must be transformed from the acquired values to new values that are appropriate for color reproduction or display.

Several aspects of the acquisition and display process make such color correction essential – including that the acquisition sensors do not match the sensors in the human eye, that the properties of the display medium must be accounted for, and that the ambient viewing conditions of the acquisition differ from the display viewing conditions.

The color balance operations in popular image editing applications usually operate directly on the red, green, and blue channel pixel values,[1][2] without respect to any color sensing or reproduction model.

Humans relate to flesh tones more critically than to other colors.[5] Some reproduction systems therefore deliberately imbalance their color primaries; the purpose of this imbalance is to reproduce flesh tones more faithfully through the entire brightness range.

Most digital cameras provide a means to select a color correction based on the type of scene lighting, using manual lighting selection, automatic white balance, or custom white balance.[6] The algorithms for these processes perform generalized chromatic adaptation.

Setting a button on a camera is a way for the user to indicate to the processor the nature of the scene lighting.

Examples of such algorithms include Retinex, artificial neural networks,[7] and Bayesian methods.

Color constancy is, in turn, related to chromatic adaptation.

Conceptually, color balancing consists of two steps: first, determining the illuminant under which an image was captured; and second, scaling the components (e.g., R, G, and B) of the image or otherwise transforming the components so they conform to the viewing illuminant.
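As a sketch of these two steps, one simple illuminant estimate (by no means the only one) is the gray-world assumption: take the per-channel mean of the image as the illuminant, then scale each channel so the means equalize. The following NumPy sketch illustrates the idea; it is a minimal example, not the algorithm of any particular application:

```python
import numpy as np

def gray_world_balance(img):
    """Two-step balance: (1) estimate the illuminant as the per-channel
    mean (gray-world assumption), (2) scale each channel so all channel
    means equalize to the overall mean."""
    img = img.astype(np.float64)
    illuminant = img.reshape(-1, 3).mean(axis=0)   # step 1: estimate illuminant
    gains = illuminant.mean() / illuminant          # step 2: per-channel gains
    return np.clip(img * gains, 0, 255)

# A small image with a simulated warm (reddish) cast:
rng = np.random.default_rng(0)
img = rng.uniform(0, 200, (4, 4, 3))
img[..., 0] *= 1.25                        # pull the red channel up
balanced = gray_world_balance(img)
means = balanced.reshape(-1, 3).mean(axis=0)
print(np.allclose(means[0], means[1]))     # channel means now equal -> True
```

After balancing, the three channel means coincide, which is exactly the gray-world criterion for a neutral average.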

White balancing in a camera's native RGB space has been found to produce less color error than balancing in monitor RGB; this difference typically amounted to a factor of more than two in favor of camera RGB.[10] This means that it is advantageous to get color balance right at the time an image is captured, rather than edit it later on a monitor.

Color balancing is sometimes performed on a three-component image (e.g., RGB) using a 3x3 matrix.

This type of transformation is appropriate if the image was captured using the wrong white balance setting on a digital camera, or through a color filter.
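Such a correction can be illustrated as a per-pixel matrix-vector product. The matrix below is made up for illustration (diagonal-dominant with small cross terms, as might result from correcting a mild filter cast), not a calibrated correction:

```python
import numpy as np

# Hypothetical 3x3 correction matrix (illustrative values only).
M = np.array([
    [ 1.20, -0.10,  0.00],
    [-0.05,  1.10, -0.05],
    [ 0.00, -0.10,  1.15],
])

def apply_matrix(img, M):
    """Apply a 3x3 color matrix to every RGB pixel of an HxWx3 image:
    pixel' = M @ pixel, vectorized over all pixels."""
    out = img.astype(np.float64) @ M.T
    return np.clip(out, 0, 255)

img = np.full((2, 2, 3), 100.0)        # uniform mid-gray patch
print(apply_matrix(img, M)[0, 0])      # -> [110. 100. 105.]
```

Note that a mid-gray input comes out slightly red- and blue-shifted here, showing how off-diagonal terms mix channels rather than merely rescaling them.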

In principle, one wants to scale all relative luminances in an image so that objects which are believed to be neutral appear so. If, say, a pixel with a red value of 240 is believed to be part of a white object, and 255 is the count corresponding to white, then every red value could be multiplied by 255/240. Doing analogously for green and blue would result, at least in theory, in a color balanced image. In this case the 3x3 matrix is diagonal:

\[
\begin{bmatrix} R \\ G \\ B \end{bmatrix} =
\begin{bmatrix} 255 / R'_W & 0 & 0 \\ 0 & 255 / G'_W & 0 \\ 0 & 0 & 255 / B'_W \end{bmatrix}
\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix}
\]

where

R, G, and B are the color balanced red, green, and blue components of a pixel in the image;

R', G', and B' are the red, green, and blue components of the image before color balancing, and

R'_W, G'_W, and B'_W are the red, green, and blue components of a pixel which is believed to be a white surface in the image before color balancing.
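This diagonal scaling translates directly into code: given the RGB values of a pixel believed to be white, each channel is multiplied by 255 over that reference value. A minimal sketch:

```python
import numpy as np

def white_patch_balance(img, white_rgb):
    """Scale each channel by 255 / (reference white channel value),
    so the chosen reference pixel becomes pure white (255, 255, 255)."""
    gains = 255.0 / np.asarray(white_rgb, dtype=np.float64)
    return np.clip(img.astype(np.float64) * gains, 0, 255)

img = np.array([[[240.0, 250.0, 220.0],
                 [120.0, 125.0, 110.0]]])
# Treat the first pixel as the white reference.
balanced = white_patch_balance(img, white_rgb=img[0, 0])
print(balanced[0, 0])   # -> [255. 255. 255.]
```

The reference pixel maps exactly to white, and every other pixel is scaled by the same three gains.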

It has been demonstrated that performing the white balancing in the phosphor set assumed by sRGB tends to produce large errors in chromatic colors, even though it can render the neutral surfaces perfectly neutral.[10]

If the image can be transformed into CIE XYZ tristimulus values, the color balancing may be performed there.[11][12] Although balancing in XYZ has been demonstrated to offer usually poorer results than balancing in monitor RGB, it is mentioned here as a bridge to other methods.

The balancing then takes the form

\[
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} =
\begin{bmatrix} X_W / X'_W & 0 & 0 \\ 0 & Y_W / Y'_W & 0 \\ 0 & 0 & Z_W / Z'_W \end{bmatrix}
\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}
\]

where

X, Y, and Z are the color balanced tristimulus values of a pixel in the image;

X_W, Y_W, and Z_W are the tristimulus values of the viewing illuminant (the white point to which the image is being transformed to conform);

X'_W, Y'_W, and Z'_W are the tristimulus values of an object believed to be white in the un-color-balanced image, and

X', Y', and Z' are the tristimulus values of a pixel in the un-color-balanced image.

If the tristimulus values of the monitor primaries are collected in a matrix P so that \([X\ Y\ Z]^T = P\,[L\ M\ N]^T\), where L, M, and N are the un-gamma corrected monitor RGB, one may use

\[
\begin{bmatrix} L \\ M \\ N \end{bmatrix} =
P^{-1}
\begin{bmatrix} X_W / X'_W & 0 & 0 \\ 0 & Y_W / Y'_W & 0 \\ 0 & 0 & Z_W / Z'_W \end{bmatrix}
P
\begin{bmatrix} L' \\ M' \\ N' \end{bmatrix}.
\]

Johannes von Kries, whose theory of rods and three color-sensitive cone types in the retina has survived as the dominant explanation of color sensation for over 100 years, motivated the method of converting color to the LMS color space, representing the effective stimuli for the Long-, Medium-, and Short-wavelength cone types that are modeled as adapting independently.

A 3x3 matrix converts RGB or XYZ to LMS, and then the three LMS primary values are scaled to balance the neutral; the color can then be converted back to the desired final color space:[13]

\[
\begin{bmatrix} L \\ M \\ S \end{bmatrix} =
\begin{bmatrix} L_W / L'_W & 0 & 0 \\ 0 & M_W / M'_W & 0 \\ 0 & 0 & S_W / S'_W \end{bmatrix}
\begin{bmatrix} L' \\ M' \\ S' \end{bmatrix}
\]

where

L, M, and S are the color balanced LMS values of a pixel;

L_W, M_W, and S_W are the LMS values of the viewing illuminant;

L'_W, M'_W, and S'_W are the tristimulus values of an object believed to be white in the un-color-balanced image, and

L', M', and S' are the LMS values of a pixel in the un-color-balanced image.

Matrices to convert to LMS space were not specified by von Kries, but can be derived from CIE color matching functions and LMS color matching functions when the latter are specified; matrices can also be found in reference books.
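As a sketch of the von Kries procedure, the code below uses the Hunt-Pointer-Estevez XYZ-to-LMS matrix (in its commonly quoted D65-normalized form) as one such published choice; since von Kries specified no matrix, treat it as an illustrative assumption rather than the canonical one:

```python
import numpy as np

# Hunt-Pointer-Estevez XYZ-to-LMS matrix (D65-normalized variant);
# one common choice, since von Kries did not specify a matrix.
M_HPE = np.array([
    [ 0.4002, 0.7076, -0.0808],
    [-0.2263, 1.1653,  0.0457],
    [ 0.0000, 0.0000,  0.9182],
])

def von_kries_adapt(xyz, src_white_xyz, dst_white_xyz):
    """Adapt XYZ colors from a source white point to a destination white
    point by scaling each LMS cone channel independently (von Kries)."""
    lms = xyz @ M_HPE.T                               # XYZ -> LMS
    gains = (M_HPE @ dst_white_xyz) / (M_HPE @ src_white_xyz)
    return (lms * gains) @ np.linalg.inv(M_HPE).T     # scale, LMS -> XYZ

# Sanity check: the source white itself maps to the destination white.
d50 = np.array([0.9642, 1.0000, 0.8251])   # CIE D50 white point (XYZ)
d65 = np.array([0.9505, 1.0000, 1.0890])   # CIE D65 white point (XYZ)
print(np.allclose(von_kries_adapt(d50, d50, d65), d65))  # -> True
```

By construction, whatever is taken as the source white maps exactly onto the destination white, while other colors shift by independent cone-channel gains.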

It has long been known that if the space of illuminants can be described as a linear model with N basis terms, the proper color transformation will be the weighted sum of N fixed linear transformations, not necessarily consistently diagonalizable.
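The structure of that result can be sketched abstractly: for an illuminant described by basis weights w_1, ..., w_N, the correction is T(w) = sum_i w_i T_i with fixed matrices T_i. The matrices below are random placeholders (in practice each would be fit in advance):

```python
import numpy as np

rng = np.random.default_rng(1)
# N = 3 fixed 3x3 transformations, one per illuminant basis term
# (random placeholders for illustration only).
T = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(3)]

def correction(weights):
    """Correction for an illuminant with the given basis weights:
    the weighted sum of the fixed linear transformations."""
    return sum(w * Ti for w, Ti in zip(weights, T))

M = correction([0.5, 0.3, 0.2])
print(M.shape)   # -> (3, 3)
```

Note the resulting matrix is a full 3x3 transformation; unlike the von Kries case, there is no guarantee that the T_i share eigenvectors, so the sum is not, in general, diagonalizable in any single channel basis.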

Example of color balancing: the left half shows the photo as it came from the digital camera; the right half shows the photo adjusted to make a gray surface neutral in the same light.

A seascape photograph at Clifton Beach, South Arm, Tasmania, Australia. The white balance has been adjusted towards the warm side for creative effect.

Photograph of a ColorChecker as a reference shot for color balance adjustments.

Two photos of a high-rise building shot within a minute of each other with an entry-level point-and-shoot camera. The left photo shows a "normal", more accurate color balance, while the right shows a "vivid" color balance, an in-camera effect with no post-production besides the black background.

Comparison of color versions (raw, natural, white balance) of Mount Sharp (Aeolis Mons) on Mars.

A white-balanced image of Mount Sharp (Aeolis Mons) on Mars.