Video coding format

A video coding format typically uses a standardized video compression algorithm, most commonly based on discrete cosine transform (DCT) coding and motion compensation.

Although video coding formats such as H.264 are sometimes referred to as codecs, there is a clear conceptual difference between a specification and its implementations.

For example, a large part of how video compression typically works is by finding similarities between video frames (block-matching) and then achieving compression by copying previously coded similar subimages (such as macroblocks) and adding small differences when necessary.
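As an illustration of the block-matching idea, the following is a minimal sketch of an exhaustive (full-search) matcher; the helper name best_match, the 16×16 block size, and the ±8-pixel search window are illustrative choices, not part of any standard.

```python
import numpy as np

def best_match(prev_frame, block, top, left, search=8):
    """Full-search block matching: scan a +/-search window in the previous
    frame for the position minimizing the sum of absolute differences (SAD)."""
    h, w = block.shape
    best_cost, best_pos = float("inf"), (top, left)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= prev_frame.shape[0] - h and 0 <= x <= prev_frame.shape[1] - w:
                sad = np.abs(prev_frame[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()
                if sad < best_cost:
                    best_cost, best_pos = sad, (y, x)
    return best_pos, best_cost

prev = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
curr = np.roll(prev, (2, 3), axis=(0, 1))        # simulate rigid motion
block = curr[16:32, 16:32]                       # one 16x16 "macroblock"
(y, x), sad = best_match(prev, block, 16, 16)
# Instead of raw pixels, an encoder stores the motion vector and the
# (ideally near-zero) residual: block - matched block.
residual = block.astype(int) - prev[y:y+16, x:x+16].astype(int)
```

Real encoders replace the exhaustive scan with faster search heuristics, which is exactly the freedom described next.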

Though the video coding format must support such compression across frames in the bitstream format, by not needlessly mandating specific algorithms for finding such block-matches and other encoding steps, the codecs implementing the video coding specification have some freedom to optimize and innovate in their choice of algorithms.[2]

Free choice of algorithm also allows different space–time complexity trade-offs for the same video coding format, so a live feed can use a fast but space-inefficient algorithm, and a one-time DVD encoding for later mass production can trade long encoding time for space-efficient encoding.[9]

In 1967, University of London researchers A. H. Robinson and C. Cherry proposed run-length encoding (RLE), a lossless compression scheme, to reduce the transmission bandwidth of analog television signals.[12]
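Robinson and Cherry's scheme operated on analog signals, but the underlying principle survives unchanged in digital form; a minimal sketch (function names are illustrative):

```python
def rle_encode(data):
    """Collapse each run of repeated symbols into a (symbol, count) pair."""
    runs = []
    for symbol in data:
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1
        else:
            runs.append([symbol, 1])
    return runs

def rle_decode(runs):
    """Expand (symbol, count) pairs back to the original sequence."""
    return [symbol for symbol, count in runs for _ in range(count)]

scanline = list("WWWWWBBBWWWW")
print(rle_encode(scanline))                           # [['W', 5], ['B', 3], ['W', 4]]
assert rle_decode(rle_encode(scanline)) == scanline   # lossless round trip
```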

Similarly, uncompressed high-definition (HD) 1080p video requires bitrates exceeding 1 Gbit/s, significantly greater than the bandwidth available in the 2000s.
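A back-of-the-envelope check of that figure, assuming 8-bit RGB (24 bits per pixel) at 30 frames per second; chroma-subsampled formats such as 4:2:0 need proportionally less:

```python
width, height = 1920, 1080        # 1080p frame
bits_per_pixel = 24               # assumed: 8-bit RGB / 4:4:4
frames_per_second = 30            # assumed frame rate
bitrate = width * height * bits_per_pixel * frames_per_second
print(f"{bitrate / 1e9:.2f} Gbit/s")   # 1.49 Gbit/s uncompressed
```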

The DCT was then developed into a practical image compression algorithm by Nasir Ahmed with T. Natarajan and K. R. Rao at the University of Texas in 1973, and was published in 1974.[9][21]

For the spatial transform coding, they experimented with different transforms, including the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for them, and found the DCT to be the most efficient due to its reduced complexity. It was capable of compressing image data down to 0.25 bit per pixel for a videotelephone scene with image quality comparable to that of a typical intra-frame coder requiring 2 bits per pixel.[26][9]
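The property being exploited here is the DCT's energy compaction: for smooth image content, almost all of the signal energy lands in a few low-frequency coefficients, so most coefficients can be discarded or coarsely quantized. A minimal numpy/scipy sketch, with an arbitrary smooth test block and an arbitrary choice to keep four coefficients:

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth 8x8 gradient

coeffs = dctn(block, norm="ortho")                     # 2-D DCT-II
k = 4                                                  # keep 4 of 64 coefficients
threshold = np.sort(np.abs(coeffs).ravel())[-k]
sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

recon = idctn(sparse, norm="ortho")                    # inverse 2-D DCT
print(np.abs(block - recon).max())                     # modest error despite ~94%
                                                       # of coefficients discarded
```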

This led to Wen-Hsiung Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981.[9]

Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards.[11][27]

H.261 was the first practical video coding standard,[28] and it used patents licensed from a number of companies, including Hitachi, PictureTel, NTT, BT, and Toshiba.[29]

Since H.261, motion-compensated DCT compression has been adopted by all the major video coding standards (including the H.26x and MPEG formats) that followed.[11][27]

MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991; it was designed to compress VHS-quality video.[28]

It was succeeded in 1994 by MPEG-2/H.262,[28] which was developed with patents licensed from a number of companies, primarily Sony, Thomson and Mitsubishi Electric.[28]

Its motion-compensated DCT algorithm was able to achieve a compression ratio of up to 100:1, enabling the development of digital media technologies such as video on demand (VOD)[12] and high-definition television (HDTV).[33]
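To put 100:1 in perspective, a rough sketch assuming uncompressed standard-definition studio video at about 270 Mbit/s (the ITU-R BT.601 serial rate); actual MPEG-2 ratios varied with content and target quality:

```python
uncompressed_mbit_s = 270      # assumed: ~ITU-R BT.601 SD studio rate
ratio = 100
compressed = uncompressed_mbit_s / ratio
print(f"{compressed:.1f} Mbit/s")   # ~2.7 Mbit/s, feasible for 1990s VOD and broadcast links
```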

H.264/MPEG-4 AVC was developed in 2003, and it uses patents licensed from a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics.

It is also widely used by streaming internet sources, such as videos from YouTube, Netflix, Vimeo, and the iTunes Store, web software such as the Adobe Flash Player and Microsoft Silverlight, and also various HDTV broadcasts over terrestrial (ATSC standards, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S2).

Interframe compression complicates editing of an encoded video sequence.

Because most frames are decoded from data in neighboring frames, cutting at an arbitrary point generally requires re-encoding the affected frames; this process demands a lot more computing power than editing intraframe-compressed video with the same picture quality.
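A toy model of why this is so, assuming the simplest case where every frame after a keyframe is stored only as a difference from its predecessor (real codecs use more elaborate prediction, but the dependency chain is the same):

```python
import numpy as np

frames = [np.random.randint(0, 256, (4, 4)).astype(int) for _ in range(5)]
keyframe = frames[0]                                        # independently decodable
diffs = [frames[i] - frames[i - 1] for i in range(1, 5)]    # "P-frame" residuals

def decode(k):
    """Frame k can only be rebuilt by replaying every residual
    since the last keyframe -- there is no random access."""
    frame = keyframe.copy()
    for d in diffs[:k]:
        frame = frame + d
    return frame

assert np.array_equal(decode(3), frames[3])
# A cut that makes frame 3 the new first frame forces the editor either to
# keep frames 0-2 or to re-encode frame 3 as a fresh keyframe; the extra
# decode/re-encode work is the computing cost mentioned above.
```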