Y′UV

Today, the term YUV is commonly used in the computer industry to describe color spaces that are encoded using YCbCr.

This lends itself naturally to the usage of the same letter in luma (Y′), which approximates a perceptually uniform correlate of luminance.

In all formats other than Y′IQ, each chroma component is a scaled version of the difference between red/blue and Y′; the main difference lies in the scaling factors used, which are determined by the color primaries and the intended numeric range (compare the use of Umax and Vmax in § SDTV with BT.470 with a fixed 1/2 in YCbCr § R'G'B' to Y'PbPr).
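
As a minimal sketch of this scaled-difference form (function names are mine, BT.470 System M / PAL constants assumed), the snippet below computes the blue-difference chroma once with the Umax scaling used for SDTV with BT.470 and once with the fixed 1/2 of Y′PbPr; the difference term is identical and only the scale factor changes.

```python
# Sketch: the same blue-difference chroma, scaled two different ways.
# Constants assumed from BT.470 System M (PAL): W_B = 0.114, Umax = 0.436.
W_B, U_MAX = 0.114, 0.436

def u_bt470(b_prime: float, y_prime: float) -> float:
    """U as in SDTV/BT.470: scaled so that |U| <= Umax = 0.436."""
    return U_MAX * (b_prime - y_prime) / (1.0 - W_B)   # ~0.492 * (B' - Y')

def pb_ypbpr(b_prime: float, y_prime: float) -> float:
    """Pb as in Y'PbPr: the same difference, but scaled to |Pb| <= 1/2."""
    return 0.5 * (b_prime - y_prime) / (1.0 - W_B)

# Pure blue (R' = G' = 0, B' = 1, so Y' = 0.114) hits both extremes:
print(u_bt470(1.0, 0.114))   # 0.436
print(pb_ypbpr(1.0, 0.114))  # 0.5
```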

Color television engineers needed a signal transmission method that was compatible with black-and-white (B&W) TV while being able to add color.

The closer the U and V values are to zero, the less the color is shifted, meaning that the red, green, and blue lights will be more equally bright, producing a grayer spot.
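
A minimal sketch of this behaviour, assuming the commonly quoted approximate PAL inverse matrix (R′ = Y′ + 1.140 V, G′ = Y′ − 0.395 U − 0.581 V, B′ = Y′ + 2.032 U): as U and V approach zero, the decoded R′, G′, and B′ all collapse toward Y′, i.e. a neutral gray.

```python
# Sketch: decode Y'UV back to R'G'B' with approximate PAL inverse coefficients.
def yuv_to_rgb(y: float, u: float, v: float) -> tuple[float, float, float]:
    r = y + 1.140 * v
    g = y - 0.395 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b

print(yuv_to_rgb(0.5, 0.0, 0.0))    # (0.5, 0.5, 0.5): equal lights, a gray spot
print(yuv_to_rgb(0.5, 0.05, 0.05))  # slightly non-zero chroma: a faint color cast
```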

The PAL standard (NTSC used YIQ, which is further rotated) defines the following constants,[8] derived from the BT.470 System M primaries and white point using SMPTE RP 177 (the same constants, called matrix coefficients, were used later in BT.601, although it uses 1/2 instead of 0.436 and 0.615):

WR = 0.299
WG = 1 − WR − WB = 0.587
WB = 0.114
Umax = 0.436
Vmax = 0.615

PAL signals in Y′UV are computed from R'G'B' (only SECAM IV used linear RGB[9]) as follows:

Y′ = WR·R′ + WG·G′ + WB·B′ = 0.299·R′ + 0.587·G′ + 0.114·B′
U = Umax·(B′ − Y′) / (1 − WB) ≈ 0.492·(B′ − Y′)
V = Vmax·(R′ − Y′) / (1 − WR) ≈ 0.877·(R′ − Y′)

The resulting ranges of Y′, U, and V respectively are [0, 1], [−Umax, Umax], and [−Vmax, Vmax].
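
A short sketch of this forward conversion (plain Python, function name mine), assuming gamma-corrected R′G′B′ inputs in [0, 1] and the BT.470/PAL constants above:

```python
# Sketch: R'G'B' (gamma-corrected, each in [0, 1]) -> PAL Y'UV.
W_R, W_B = 0.299, 0.114
W_G = 1.0 - W_R - W_B            # 0.587
U_MAX, V_MAX = 0.436, 0.615

def rgb_to_yuv(r: float, g: float, b: float) -> tuple[float, float, float]:
    y = W_R * r + W_G * g + W_B * b        # Y' in [0, 1]
    u = U_MAX * (b - y) / (1.0 - W_B)      # U in [-0.436, 0.436]
    v = V_MAX * (r - y) / (1.0 - W_R)      # V in [-0.615, 0.615]
    return y, u, v

print(rgb_to_yuv(1.0, 0.0, 0.0))  # pure red: Y' = 0.299, U ≈ -0.147, V = 0.615
```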

The Y′ channel saves all the data recorded by black-and-white cameras, so it produces a signal suitable for reception on old monochrome displays.

The human eye has fairly little spatial sensitivity to color: the accuracy of the brightness information of the luminance channel has far more impact on the image detail discerned than that of the other two.

Understanding this human shortcoming, standards such as NTSC and PAL reduce the bandwidth of the chrominance channels considerably.

Early versions of NTSC rapidly alternated between particular colors in identical image areas to make them appear to blend into each other to the human eye, while all modern analogue and even most digital video standards use chroma subsampling, recording a picture's color information at reduced resolution.

Today, only high-end equipment processing uncompressed signals uses a chroma subsampling of 4:4:4 with identical resolution for both brightness and color information.
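
The sketch below (NumPy assumed, names mine) illustrates the idea behind one common subsampling scheme, 4:2:0: the Y′ plane keeps full resolution while each chroma plane is averaged down to half resolution in both directions, quartering the amount of chroma data.

```python
# Sketch: 4:2:0-style chroma subsampling by 2x2 block averaging.
import numpy as np

def subsample_420(u: np.ndarray, v: np.ndarray):
    """Average each 2x2 block of the chroma planes (height and width must be even)."""
    def pool(c: np.ndarray) -> np.ndarray:
        h, w = c.shape
        return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return pool(u), pool(v)

u = np.random.rand(480, 640)      # full-resolution chroma planes
v = np.random.rand(480, 640)
u_sub, v_sub = subsample_420(u, v)
print(u_sub.shape, v_sub.shape)   # (240, 320) (240, 320): one quarter of the samples each
```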

However, true I and Q demodulation was comparatively complex, requiring two analog delay lines, and NTSC receivers rarely used it.

However, this color modulation strategy is lossy, particularly because of crosstalk from the luma to the chroma-carrying wire, and vice versa, in analogue equipment (including RCA connectors, which cannot transfer a digital signal, as all they carry is analogue composite video, whether YUV, YIQ, or even CVBS).

Furthermore, NTSC and PAL encoded color signals in a manner that causes high bandwidth chroma and luma signals to mix with each other in a bid to maintain backward compatibility with black and white television equipment, which results in dot crawl and cross color artifacts.

When the NTSC standard was created in the 1950s, this was not a real concern since the quality of the image was limited by the monitor equipment, not the limited-bandwidth signal being received.

To keep pace with the abilities of new display technologies, attempts have been made since the late 1970s to preserve more of the Y′UV signal while transferring images, such as the SCART (1977) and S-Video (1987) connectors.

Digital television and DVDs preserve their compressed video streams in the MPEG-2 format, which uses a fully defined Y′CbCr color space, while retaining the established process of chroma subsampling.
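
As a rough sketch (not the MPEG-2 specification itself, and the function name is mine), the snippet below quantizes analog Y′PbPr to the 8-bit studio-range Y′CbCr commonly used with BT.601: Y′ maps to 16–235, and Pb/Pr map to 16–240 around an offset of 128.

```python
# Sketch: analog Y'PbPr (Y' in [0, 1], Pb/Pr in [-0.5, 0.5]) -> 8-bit studio-range Y'CbCr.
def ypbpr_to_8bit_ycbcr(y: float, pb: float, pr: float) -> tuple[int, int, int]:
    Y  = round(16 + 219 * y)      # 16..235
    Cb = round(128 + 224 * pb)    # 16..240
    Cr = round(128 + 224 * pr)    # 16..240
    return Y, Cb, Cr

print(ypbpr_to_8bit_ycbcr(1.0, 0.0, 0.0))   # white: (235, 128, 128)
print(ypbpr_to_8bit_ycbcr(0.0, -0.5, 0.5))  # chroma extremes: (16, 16, 240)
```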

[Figure] Example of the U–V color plane at Y′ = 0.5, represented within the RGB color gamut.
[Figure] An image along with its Y′, U, and V components, respectively.
[Figure] HDTV Rec. 709 (quite close to SDTV Rec. 601) compared with UHDTV Rec. 2020.