# H.264/MPEG-4 AVC

H.264/MPEG-4 AVC is a standard for video compression. The final drafting work on the first version of the standard was completed in May 2003.

H.264/AVC is the latest block-oriented motion-compensation-based codec standard developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG), and it was the product of a partnership effort known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 AVC standard (formally, ISO/IEC 14496-10 - MPEG-4 Part 10, Advanced Video Coding) are jointly maintained so that they have identical technical content. H.264 is best known for its use on Blu-ray Disc, HD DVD, and videos from the iTunes Store.

## Overview

The intent of the H.264/AVC project was to create a standard capable of providing good video quality at substantially lower bit rates than previous standards (e.g. half or less the bit rate of MPEG-2, H.263, or MPEG-4 Part 2), without increasing the complexity of design so much that it would be impractical or excessively expensive to implement. An additional goal was to provide enough flexibility to allow the standard to be applied to a wide variety of applications on a wide variety of networks and systems, including low and high bit rates, low and high resolution video, broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems.

The H.264 standard is a "family of standards", the members of which are the profiles described below. A specific decoder decodes at least one, but not necessarily all, profiles. The decoder specification describes which of the profiles can be decoded.

The standardization of the first version of H.264/AVC was completed in May 2003. The JVT then developed extensions to the original standard that are known as the Fidelity Range Extensions (FRExt). These extensions enable higher quality video coding by supporting increased sample bit depth precision and higher-resolution color information, including sampling structures known as YUV 4:2:2 and YUV 4:4:4. Several other features are also included in the Fidelity Range Extensions project, such as adaptive switching between 4×4 and 8×8 integer transforms, encoder-specified perceptual-based quantization weighting matrices, efficient inter-picture lossless coding, and support of additional color spaces. The design work on the Fidelity Range Extensions was completed in July 2004, and the drafting work on them was completed in September 2004.

Further recent extensions of the standard have included adding five new profiles intended primarily for professional applications, adding extended-gamut color space support, defining additional aspect ratio indicators, defining two additional types of "supplemental enhancement information" (post-filter hint and tone mapping), and deprecating one of the prior FRExt profiles that industry feedback indicated should have been designed differently.

Scalable Video Coding as specified in Annex G of H.264/AVC allows the construction of bitstreams that contain sub-bitstreams that conform to H.264/AVC. For temporal bitstream scalability, i.e., the presence of a sub-bitstream with a smaller temporal sampling rate than the bitstream, complete access units are removed from the bitstream when deriving the sub-bitstream. In this case, high-level syntax and inter prediction reference pictures in the bitstream are constructed accordingly. For spatial and quality bitstream scalability, i.e., the presence of a sub-bitstream with lower spatial resolution or quality than the bitstream, NAL (Network Abstraction Layer) units are removed from the bitstream when deriving the sub-bitstream. In this case, inter-layer prediction, i.e., the prediction of the higher spatial resolution or quality signal from data of the lower spatial resolution or quality signal, is typically used for efficient coding. The Scalable Video Coding extension was completed in November 2007.
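The temporal-scalability idea above can be sketched with a toy model. This is a minimal illustration, not real NAL parsing: access units are represented as hypothetical `(temporal_id, payload)` tuples (names and the dyadic layer layout are illustrative assumptions), and deriving a temporal sub-bitstream is simply dropping complete access units above a target layer.

```python
# Toy illustration of temporal sub-bitstream extraction (not real NAL syntax).
# Each access unit is modeled as (temporal_id, payload); layer 0 is the base.

def extract_temporal_sublayer(access_units, max_temporal_id):
    """Keep only the complete access units at or below the target temporal layer."""
    return [(tid, au) for tid, au in access_units if tid <= max_temporal_id]

# A hypothetical 30 fps stream with a dyadic hierarchy: base layer at 7.5 fps.
stream = [(0, "AU0"), (2, "AU1"), (1, "AU2"), (2, "AU3"),
          (0, "AU4"), (2, "AU5"), (1, "AU6"), (2, "AU7")]

base = extract_temporal_sublayer(stream, 0)  # 7.5 fps sub-bitstream
half = extract_temporal_sublayer(stream, 1)  # 15 fps sub-bitstream
print([au for _, au in base])  # ['AU0', 'AU4']
print([au for _, au in half])  # ['AU0', 'AU2', 'AU4', 'AU6']
```

In a real SVC bitstream the temporal layer is signaled in the NAL unit header, and inter prediction is constrained so that lower layers never reference the dropped higher-layer pictures.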

The H.264 name follows the ITU-T naming convention, where the standard is a member of the H.26x line of VCEG video coding standards; the MPEG-4 AVC name relates to the naming convention in ISO/IEC MPEG, where the standard is part 10 of ISO/IEC 14496, which is the suite of standards known as MPEG-4. The standard was developed jointly in a partnership of VCEG and MPEG, after earlier development work in the ITU-T as a VCEG project called H.26L. It is thus common to refer to the standard with names such as H.264/AVC, AVC/H.264, H.264/MPEG-4 AVC, or MPEG-4/H.264 AVC, to emphasize the common heritage. The name H.26L, referring to its ITU-T history, is less common, but still used. Occasionally, it is also referred to as "the JVT codec", in reference to the Joint Video Team (JVT) organization that developed it. (Such partnership and multiple naming is not uncommon; for example, the video codec standard known as MPEG-2 also arose from a partnership between MPEG and the ITU-T, where MPEG-2 video is known to the ITU-T community as H.262.)

## Features

H.264/AVC/MPEG-4 Part 10 contains a number of new features that allow it to compress video much more effectively than older standards and to provide more flexibility for application to a wide variety of network environments. In particular, some such key features include:
• Multi-picture inter-picture prediction including the following features:
• Using previously-encoded pictures as references in a much more flexible way than in past standards, allowing up to 16 reference frames (or 32 reference fields, in the case of interlaced encoding) to be used in some cases. This is in contrast to prior standards, where the limit was typically one; or, in the case of conventional "B pictures", two. This particular feature usually allows modest improvements in bit rate and quality in most scenes. But in certain types of scenes, such as those with repetitive motion or back-and-forth scene cuts or uncovered background areas, it allows a significant reduction in bit rate while maintaining clarity.
• Variable block-size motion compensation (VBSMC) with block sizes as large as 16×16 and as small as 4×4, enabling precise segmentation of moving regions. The supported luma prediction block sizes include 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4, many of which can be used together in a single macroblock. Chroma prediction block sizes are correspondingly smaller according to the chroma subsampling in use.
• The ability to use multiple motion vectors per macroblock (one or two per partition) with a maximum of 32 in the case of a B macroblock constructed of sixteen 4×4 partitions. The motion vectors for each 8×8 or larger partition region can point to different reference pictures.
• The ability to use any macroblock type in B-frames, including I-macroblocks, resulting in much more efficient encoding when using B-frames. This feature was notably left out from MPEG-4 ASP.
• Six-tap filtering for derivation of half-pel luma sample predictions, for sharper subpixel motion compensation. Quarter-pixel motion is derived by linear interpolation of the half-pel values, to save processing power.
• Quarter-pixel precision for motion compensation, enabling precise description of the displacements of moving areas. For chroma the resolution is typically halved both vertically and horizontally (see 4:2:0) therefore the motion compensation of chroma uses one-eighth chroma pixel grid units.
• Weighted prediction, allowing an encoder to specify the use of a scaling and offset when performing motion compensation, and providing a significant benefit in performance in special cases—such as fade-to-black, fade-in, and cross-fade transitions. This includes implicit weighted prediction for B-frames, and explicit weighted prediction for P-frames.
• Spatial prediction from the edges of neighboring blocks for "intra" coding, rather than the "DC"-only prediction found in MPEG-2 Part 2 and the transform coefficient prediction found in H.263v2 and MPEG-4 Part 2. This includes luma prediction block sizes of 16×16, 8×8, and 4×4 (of which only one type can be used within each macroblock).
• Lossless macroblock coding features including:
• A lossless "PCM macroblock" representation mode in which video data samples are represented directly, allowing perfect representation of specific regions and allowing a strict limit to be placed on the quantity of coded data for each macroblock.
• An enhanced lossless macroblock representation mode allowing perfect representation of specific regions while ordinarily using substantially fewer bits than the PCM mode.
• Flexible interlaced-scan video coding features, including:
• Macroblock-adaptive frame-field (MBAFF) coding, using a macroblock pair structure for pictures coded as frames, allowing 16×16 macroblocks in field mode (compared with 16×8 half-macroblocks in MPEG-2).
• Picture-adaptive frame-field coding (PAFF or PicAFF) allowing a freely-selected mixture of pictures coded as MBAFF frames with pictures coded as individual single fields (half frames) of interlaced video.
• New transform design features, including:
• An exact-match integer 4×4 spatial block transform, allowing precise placement of residual signals with little of the "ringing" often found with prior codec designs. This is conceptually similar to the well-known DCT design, but simplified and made to provide exactly-specified decoding.
• An exact-match integer 8×8 spatial block transform, allowing highly correlated regions to be compressed more efficiently than with the 4×4 transform. This is conceptually similar to the well-known DCT design, but simplified and made to provide exactly-specified decoding.
• Adaptive encoder selection between the 4×4 and 8×8 transform block sizes for the integer transform operation.
• A secondary Hadamard transform performed on "DC" coefficients of the primary spatial transform applied to chroma DC coefficients (and also luma in one special case) to obtain even more compression in smooth regions.
• Logarithmic step size control for easier bit rate management by encoders and simplified inverse-quantization scaling.
• Frequency-customized quantization scaling matrices selected by the encoder for perceptual-based quantization optimization.
• An in-loop deblocking filter which helps prevent the blocking artifacts common to other DCT-based image compression techniques, resulting in better visual appearance and compression efficiency.
• Context-adaptive binary arithmetic coding (CABAC), an algorithm to losslessly compress syntax elements in the video stream knowing the probabilities of syntax elements in a given context. CABAC compresses data more efficiently than CAVLC but requires considerably more processing to decode.
• Context-adaptive variable-length coding (CAVLC), which is a lower-complexity alternative to CABAC for the coding of quantized transform coefficient values. Although lower complexity than CABAC, CAVLC is more elaborate and more efficient than the methods typically used to code coefficients in other prior designs.
• A common simple and highly structured variable length coding (VLC) technique for many of the syntax elements not coded by CABAC or CAVLC, referred to as Exponential-Golomb coding (or Exp-Golomb).
• Loss resilience features including:
• A Network Abstraction Layer (NAL) definition allowing the same video syntax to be used in many network environments. One very fundamental design concept of H.264 is to generate self-contained packets, to remove the header duplication seen in MPEG-4's Header Extension Code (HEC). This was achieved by decoupling information relevant to more than one slice from the media stream. The combination of the higher-level parameters is called a parameter set. The H.264 specification includes two types of parameter sets: Sequence Parameter Set (SPS) and Picture Parameter Set (PPS). An active sequence parameter set remains unchanged throughout a coded video sequence, and an active picture parameter set remains unchanged within a coded picture. The sequence and picture parameter set structures contain information such as picture size, optional coding modes employed, and the macroblock-to-slice-group map.
• Flexible macroblock ordering (FMO), also known as slice groups, and arbitrary slice ordering (ASO), which are techniques for restructuring the ordering of the representation of the fundamental regions (macroblocks) in pictures. Typically considered an error/loss robustness feature, FMO and ASO can also be used for other purposes.
• Data partitioning (DP), a feature providing the ability to separate more important and less important syntax elements into different packets of data, enabling the application of unequal error protection (UEP) and other types of improvement of error/loss robustness.
• Redundant slices (RS), an error/loss robustness feature allowing an encoder to send an extra representation of a picture region (typically at lower fidelity) that can be used if the primary representation is corrupted or lost.
• Frame numbering, a feature that allows the creation of "sub-sequences", enabling temporal scalability by optional inclusion of extra pictures between other pictures, and the detection and concealment of losses of entire pictures, which can occur due to network packet losses or channel errors.
• Switching slices, called SP and SI slices, allowing an encoder to direct a decoder to jump into an ongoing video stream for such purposes as video streaming bit rate switching and "trick mode" operation. When a decoder jumps into the middle of a video stream using the SP/SI feature, it can get an exact match to the decoded pictures at that location in the video stream despite using different pictures, or no pictures at all, as references prior to the switch.
• A simple automatic process for preventing the accidental emulation of start codes, which are special sequences of bits in the coded data that allow random access into the bitstream and recovery of byte alignment in systems that can lose byte synchronization.
• Supplemental enhancement information (SEI) and video usability information (VUI), which are extra information that can be inserted into the bitstream to enhance the use of the video for a wide variety of purposes.
• Auxiliary pictures, which can be used for such purposes as alpha compositing.
• Support of monochrome, 4:2:0, 4:2:2, and 4:4:4 chroma subsampling (depending on the selected profile).
• Support of sample bit depth precision ranging from 8 to 14 bits per sample (depending on the selected profile).
• The ability to encode individual color planes as distinct pictures with their own slice structures, macroblock modes, motion vectors, etc., allowing encoders to be designed with a simple parallelization structure (supported only in the three 4:4:4-capable profiles).
• Picture order count, a feature that serves to keep the ordering of the pictures and the values of samples in the decoded pictures isolated from timing information, allowing timing information to be carried and controlled/changed separately by a system without affecting decoded picture content.
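The Exponential-Golomb coding mentioned in the list above is simple enough to sketch directly. The following is a minimal illustration of the unsigned `ue(v)` code operating on bit strings (a real bitstream packs these bits, of course): a value k is written as N leading zeros followed by the (N+1)-bit binary representation of k+1.

```python
# Minimal Exp-Golomb ue(v) encoder/decoder operating on bit strings.

def ue_encode(code_num):
    """Encode an unsigned value: N leading zeros, then (code_num + 1) in binary."""
    bits = bin(code_num + 1)[2:]          # binary representation of code_num + 1
    return "0" * (len(bits) - 1) + bits   # the zero prefix signals the length

def ue_decode(bitstring):
    """Decode one ue(v) value from the front; returns (value, bits_consumed)."""
    zeros = 0
    while bitstring[zeros] == "0":
        zeros += 1
    length = 2 * zeros + 1
    return int(bitstring[:length], 2) - 1, length

for v in range(4):
    print(v, ue_encode(v))  # 0 1 / 1 010 / 2 011 / 3 00100
```

Small values get short codes, and no code is a prefix of another, so the decoder can always tell where one value ends and the next begins.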

These techniques, along with several others, help H.264 to perform significantly better than any prior standard under a wide variety of circumstances in a wide variety of application environments. H.264 can often perform radically better than MPEG-2 video, typically obtaining the same quality at half the bit rate or less, especially in high bit rate and high resolution situations.

Like other ISO/IEC MPEG video standards, H.264/AVC has a reference software implementation that can be freely downloaded. Its main purpose is to give examples of H.264/AVC features, rather than being a useful application per se. Some reference hardware design work is also under way in the Moving Picture Experts Group. The features listed above cover all profiles of H.264. A profile for a codec is a set of features of that codec identified to meet a certain set of specifications of intended applications; this means that many of the features listed are not supported in some profiles. The various profiles of H.264/AVC are discussed in the next section.

## Profiles

The standard includes the following sets of capabilities, which are referred to as profiles, targeting specific classes of applications:
Constrained Baseline Profile (CBP): Primarily for low-cost applications, this profile is used widely in videoconferencing and mobile applications. It corresponds to the subset of features that are in common between the Baseline, Main, and High Profiles described below.
Baseline Profile (BP): Primarily for low-cost applications that require additional error robustness. Although it is used in some videoconferencing and mobile applications, it adds error resilience tools beyond the Constrained Baseline Profile; its importance has faded since the Constrained Baseline Profile was defined.
Main Profile (MP): Originally intended as the mainstream consumer profile for broadcast and storage applications, the importance of this profile faded when the High Profile was developed for those applications.
Extended Profile (XP): Intended as the streaming video profile, this profile has relatively high compression capability and some extra tricks for robustness to data losses and server stream switching.
High Profile (HiP): The primary profile for broadcast and disc storage applications, particularly for high-definition television applications (this is the profile adopted into HD DVD and Blu-ray Disc, for example).
High 10 Profile (Hi10P): Going beyond today's mainstream consumer product capabilities, this profile builds on top of the High Profile, adding support for up to 10 bits per sample of decoded picture precision.
High 4:2:2 Profile (Hi422P): Primarily targeting professional applications that use interlaced video, this profile builds on top of the High 10 Profile, adding support for the 4:2:2 chroma subsampling format while using up to 10 bits per sample of decoded picture precision.
High 4:4:4 Predictive Profile (Hi444PP): This profile builds on top of the High 4:2:2 Profile, supporting up to 4:4:4 chroma sampling, up to 14 bits per sample, and additionally supporting efficient lossless region coding and the coding of each picture as three separate color planes.
Stereo High Profile: This profile targets two-view stereoscopic 3D video and combines the tools of the High Profile with the inter-view prediction capabilities of the Multiview Video Coding (MVC) extension.

In addition, the standard contains four additional all-Intra profiles, which are defined as simple subsets of other corresponding profiles. These are mostly for professional (e.g., camera and editing system) applications:
High 10 Intra Profile: The High 10 Profile constrained to all-Intra use.
High 4:2:2 Intra Profile: The High 4:2:2 Profile constrained to all-Intra use.
High 4:4:4 Intra Profile: The High 4:4:4 Profile constrained to all-Intra use.
CAVLC 4:4:4 Intra Profile: The High 4:4:4 Profile constrained to all-Intra use and to CAVLC entropy coding (i.e., not supporting CABAC).

As a result of the Scalable Video Coding extension, the standard contains three additional scalable profiles, each defined as a combination of the H.264/AVC profile for the base layer (the second word in the scalable profile name) and tools that achieve the scalable extension:
Scalable Baseline Profile: Primarily targeting video conferencing, mobile, and surveillance applications, this profile builds on top of a constrained version of the H.264/AVC Baseline profile to which the base layer (a subset of the bitstream) must conform. For the scalability tools, a subset of the available tools is enabled.
Scalable High Profile: Primarily targeting broadcast and streaming applications, this profile builds on top of the H.264/AVC High Profile to which the base layer must conform.
Scalable High Intra Profile: Primarily targeting production applications, this profile is the Scalable High Profile constrained to all-Intra use.

Predefined profiles

| Feature | CBP | BP | XP | MP | HiP | Hi10P | Hi422P | Hi444PP |
|---|---|---|---|---|---|---|---|---|
| B slices | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
| SI and SP slices | No | No | Yes | No | No | No | No | No |
| Flexible macroblock ordering (FMO) | No | Yes | Yes | No | No | No | No | No |
| Arbitrary slice ordering (ASO) | No | Yes | Yes | No | No | No | No | No |
| Redundant slices (RS) | No | Yes | Yes | No | No | No | No | No |
| Data partitioning | No | No | Yes | No | No | No | No | No |
| Interlaced coding (PicAFF, MBAFF) | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
| CABAC entropy coding | No | No | No | Yes | Yes | Yes | Yes | Yes |
| Quantization scaling matrices | No | No | No | No | Yes | Yes | Yes | Yes |
| Separate Cb and Cr QP control | No | No | No | No | Yes | Yes | Yes | Yes |
| Monochrome (4:0:0) | No | No | No | No | Yes | Yes | Yes | Yes |
| Chroma formats | 4:2:0 | 4:2:0 | 4:2:0 | 4:2:0 | 4:2:0 | 4:2:0 | 4:2:0/4:2:2 | up to 4:4:4 |
| Largest sample depth (bits) | 8 | 8 | 8 | 8 | 8 | 10 | 10 | 14 |
| Separate color plane coding | No | No | No | No | No | No | No | Yes |
| Predictive lossless coding | No | No | No | No | No | No | No | Yes |

## Levels

Levels with maximum property values

| Level | Max macroblocks per second | Max macroblocks per frame | Max bit rate, BP/XP/MP (kbit/s) | Max bit rate, HiP (kbit/s) | Max bit rate, Hi10P (kbit/s) | Max bit rate, Hi422P/Hi444PP (kbit/s) | Examples: resolution@frame rate (max stored frames) |
|---|---|---|---|---|---|---|---|
| 1 | 1485 | 99 | 64 | 80 | 192 | 256 | 128×96@30.9 (8); 176×144@15.0 (4) |
| 1b | 1485 | 99 | 128 | 160 | 384 | 512 | 128×96@30.9 (8); 176×144@15.0 (4) |
| 1.1 | 3000 | 396 | 192 | 240 | 576 | 768 | 176×144@30.3 (9); 320×240@10.0 (3); 352×288@7.5 (2) |
| 1.2 | 6000 | 396 | 384 | 480 | 1152 | 1536 | 320×240@20.0 (7); 352×288@15.2 (6) |
| 1.3 | 11880 | 396 | 768 | 960 | 2304 | 3072 | 320×240@36.0 (7); 352×288@30.0 (6) |
| 2 | 11880 | 396 | 2000 | 2500 | 6000 | 8000 | 320×240@36.0 (7); 352×288@30.0 (6) |
| 2.1 | 19800 | 792 | 4000 | 5000 | 12000 | 16000 | 352×480@30.0 (7); 352×576@25.0 (6) |
| 2.2 | 20250 | 1620 | 4000 | 5000 | 12000 | 16000 | 352×480@30.7 (10); 352×576@25.6 (7); 720×480@15.0 (6); 720×576@12.5 (5) |
| 3 | 40500 | 1620 | 10000 | 12500 | 30000 | 40000 | 352×480@61.4 (12); 352×576@51.1 (10); 720×480@30.0 (6); 720×576@25.0 (5) |
| 3.1 | 108000 | 3600 | 14000 | 17500 | 42000 | 56000 | 720×480@80.0 (13); 720×576@66.7 (11); 1280×720@30.0 (5) |
| 3.2 | 216000 | 5120 | 20000 | 25000 | 60000 | 80000 | 1280×720@60.0 (5); 1280×1024@42.2 (4) |
| 4 | 245760 | 8192 | 20000 | 25000 | 60000 | 80000 | 1280×720@68.3 (9); 1920×1080@30.1 (4); 2048×1024@30.0 (4) |
| 4.1 | 245760 | 8192 | 50000 | 62500 | 150000 | 200000 | 1280×720@68.3 (9); 1920×1080@30.1 (4); 2048×1024@30.0 (4) |
| 4.2 | 522240 | 8704 | 50000 | 62500 | 150000 | 200000 | 1920×1080@64.0 (4); 2048×1080@60.0 (4) |
| 5 | 589824 | 22080 | 135000 | 168750 | 405000 | 540000 | 1920×1080@72.3 (13); 2048×1024@72.0 (13); 2048×1080@67.8 (12); 2560×1920@30.7 (5); 3680×1536@26.7 (5) |
| 5.1 | 983040 | 36864 | 240000 | 300000 | 720000 | 960000 | 1920×1080@120.5 (16); 4096×2048@30.0 (5); 4096×2304@26.7 (5) |
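The throughput limits in the table can be checked directly: for a given level, the maximum frame rate at a resolution is the level's macroblocks-per-second limit divided by the frame size in macroblocks. The sketch below uses only the two macroblock columns (a subset of levels; the `MaxMBPS`/`MaxFS` names follow the spec's Table A-1) and ignores the bit rate limits.

```python
import math

# (MaxMBPS, MaxFS) per level, from the macroblock columns of the levels table.
LEVEL_LIMITS = {"3": (40500, 1620), "3.1": (108000, 3600),
                "4": (245760, 8192), "4.1": (245760, 8192),
                "5.1": (983040, 36864)}

def max_fps(level, width, height):
    """Maximum frame rate for a resolution at a level (macroblock limits only)."""
    max_mbps, max_fs = LEVEL_LIMITS[level]
    frame_mbs = math.ceil(width / 16) * math.ceil(height / 16)
    if frame_mbs > max_fs:
        raise ValueError("frame too large for this level")
    return max_mbps / frame_mbs

print(round(max_fps("4", 1920, 1080), 1))   # 30.1, matching the table entry
print(round(max_fps("3.1", 1280, 720), 1))  # 30.0
```

Note that dimensions are rounded up to whole macroblocks (1080 lines occupy 68 macroblock rows), which is why 1920×1080 at Level 4 tops out at 30.1 fps rather than exactly 30.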

## Reference Frames

Reference frames are used by H.264 encoders as a sliding reference window, allowing the encoder to make efficient decisions on the best way to encode a given frame. However, users often have difficulty determining how many reference frames are appropriate for a given H.264 Level and target video resolution. You can use the formulas below to determine the maximum number of reference frames.

##### Formula
The raw formula for the maximum number of reference frames uses width and height values expressed in macroblocks (MBs):

 min(1024 * MaxDPB / (PicWidthInMbs * FrameHeightInMbs * 384), 16)

To make things easier when working from pixel dimensions, use the formula below. Refer to the chart below to determine the appropriate MaxDPB value for the level you're encoding.

 ROUNDDOWN(MIN(1024 * MaxDPB / (((Width * Height) / 256) * 384), 16), 0)

##### Examples
Example 1: Level 4.1, 720×480 target video resolution:
 ROUNDDOWN(MIN(1024 * 12288 / (((720 * 480) / 256) * 384), 16), 0) = 16 reference frames max

Example 2: Level 4.1, 1920×1080 target video resolution:
 ROUNDDOWN(MIN(1024 * 12288 / (((1920 * 1080) / 256) * 384), 16), 0) = 4 reference frames max

Example 3: Level 3, 720×480 target video resolution:
 ROUNDDOWN(MIN(1024 * 3037.5 / (((720 * 480) / 256) * 384), 16), 0) = 6 reference frames max

##### MaxDPB Values
Excerpt from Table A-1 of the H.264 specification:

| Level | 1 | 1b | 1.1 | 1.2 | 1.3 | 2 | 2.1 | 2.2 |
|---|---|---|---|---|---|---|---|---|
| MaxDPB | 148.5 | 148.5 | 337.5 | 891 | 891 | 891 | 1782 | 3037.5 |

| Level | 3 | 3.1 | 3.2 | 4 | 4.1 | 4.2 | 5 | 5.1 |
|---|---|---|---|---|---|---|---|---|
| MaxDPB | 3037.5 | 6750 | 7680 | 12288 | 12288 | 13056 | 41400 | 69120 |
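The spreadsheet-style formula above translates directly to Python; the MaxDPB values (in units of 1024 bytes, per the Table A-1 excerpt) are embedded as a dict, and the function reproduces the three worked examples.

```python
import math

# MaxDPB values from the Table A-1 excerpt above (units of 1024 bytes).
MAX_DPB = {"1": 148.5, "1b": 148.5, "1.1": 337.5, "1.2": 891, "1.3": 891,
           "2": 891, "2.1": 1782, "2.2": 3037.5, "3": 3037.5, "3.1": 6750,
           "3.2": 7680, "4": 12288, "4.1": 12288, "4.2": 13056,
           "5": 41400, "5.1": 69120}

def max_ref_frames(level, width, height):
    """ROUNDDOWN(MIN(1024 * MaxDPB / (((W * H) / 256) * 384), 16), 0)."""
    frame_bytes = (width * height) / 256 * 384  # macroblocks * 384 bytes each
    return math.floor(min(1024 * MAX_DPB[level] / frame_bytes, 16))

print(max_ref_frames("4.1", 720, 480))    # 16
print(max_ref_frames("4.1", 1920, 1080))  # 4
print(max_ref_frames("3", 720, 480))      # 6
```

The 384 factor is the size in bytes of one decoded 8-bit 4:2:0 macroblock (256 luma + 128 chroma samples), and the cap of 16 is the maximum reference frame count the standard allows in any case.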

Note: Reference frames are exclusive of B-Frames.

To clear up common confusion between the two: B-frames are not restricted by Level in the way reference frames are.

It is normal to have more or fewer B-frames than reference frames, depending on perceptual taste and the encoding options chosen.

Another way to think of it: the reference frame setting controls how many previously encoded frames the encoder can make reference to, while the B-frame setting controls how many frames in the encoder's output can be B-frames.

In terms of x264.exe, reference frames are set with the option --ref x (where "x" is the number of reference frames the encoder should use, considering the chosen H.264 Level).

B-frames are set with the option --bframes x (where "x" is a number between 0 and 15).

## Standardization committee and history

In early 1998, the Video Coding Experts Group (VCEG - ITU-T SG16 Q.6) issued a call for proposals on a project called H.26L, with the target of doubling the coding efficiency (which means halving the bit rate necessary for a given level of fidelity) in comparison to any other existing video coding standard, for a broad variety of applications. VCEG was chaired by Gary Sullivan (Microsoft, formerly PictureTel, USA). The first draft design for the new standard was adopted in August 1999. In 2000, Thomas Wiegand (Heinrich Hertz Institute, Germany) became VCEG co-chair. In December 2001, VCEG and the Moving Picture Experts Group (MPEG - ISO/IEC JTC 1/SC 29/WG 11) formed a Joint Video Team (JVT), with the charter to finalize the video coding standard. Formal approval of the specification came in March 2003. The JVT is chaired by Gary Sullivan, Thomas Wiegand, and Ajay Luthra (Motorola, USA). In June 2004, the Fidelity Range Extensions (FRExt) project was finalized. From January 2005 to November 2007, the JVT worked on an extension of H.264/AVC towards scalability, an Annex (G) called Scalable Video Coding (SVC); for this work, the JVT management team was extended by Jens-Rainer Ohm (RWTH Aachen University, Germany). Since July 2006, the JVT has been working on Multiview Video Coding (MVC), an extension of H.264/AVC towards free viewpoint television and 3D television.

## Versions

Versions of the H.264/AVC standard include the following completed revisions, corrigenda, and amendments (dates are final approval dates in ITU-T, while final "International Standard" approval dates in ISO/IEC are somewhat different and slightly later in most cases). Each version represents changes relative to the next lower version that are integrated into the text.
• Version 1: (May 2003) First approved version of H.264/AVC containing Baseline, Extended, and Main profiles.
• Version 2: (May 2004) Corrigendum containing various minor corrections.
• Version 3: (March 2005) Major addition to H.264/AVC containing the first Amendment providing Fidelity Range Extensions (FRExt) containing High, High 10, High 4:2:2, and High 4:4:4 profiles.
• Version 4: (September 2005) Corrigendum containing various minor corrections and adding three aspect ratio indicators.
• Version 5: (June 2006) Amendment consisting of removal of prior High 4:4:4 profile (processed as a corrigendum in ISO/IEC).
• Version 6: (June 2006) Amendment consisting of minor extensions like extended-gamut color space support (bundled with above-mentioned aspect ratio indicators in ISO/IEC).
• Version 7: (April 2007) Amendment containing the addition of High 4:4:4 Predictive and four Intra-only profiles (High 10 Intra, High 4:2:2 Intra, High 4:4:4 Intra, and CAVLC 4:4:4 Intra).
• Version 8: (November 2007) Major addition to H.264/AVC containing the Amendment for Scalable Video Coding (SVC) containing Scalable Baseline, Scalable High, and Scalable High Intra profiles.
• Version 9: (February 2009) Corrigendum containing minor corrections.
• Version 10: (February 2009) Amendment containing definition of a new profile (the Constrained Baseline profile) with only the common subset of capabilities supported in various previously-specified profiles.
• Version 11: (February 2009) Major addition to H.264/AVC containing the Amendment for Multiview Video Coding (MVC) extension, including the Multiview High profile.
• Version 12: (November 2009) Amendment containing definition of a new MVC profile (the Stereo High profile) for two-view video coding with support of interlaced coding tools and specifying an additional SEI message (the frame packing arrangement SEI message).
• Version 13: (November 2009) Corrigendum containing minor corrections.

## Patent licensing

In countries where patents on software algorithms are upheld, the vendors of products which make use of H.264/AVC are expected to pay patent licensing royalties for the patented technology that their products use. This applies to the Baseline Profile as well. A private organization known as MPEG LA, which is not affiliated in any way with the MPEG standardization organization, administers the licenses for patents applying to this standard, as well as the patent pools for MPEG-2 Part 1 Systems, MPEG-2 Part 2 Video, MPEG-4 Part 2 Video, and other technologies.

In 2005, Qualcomm, which was the assignee of US Patents 5,452,104 and 5,576,767, sued Broadcom in US District Court, alleging that Broadcom infringed the two patents by making products that were compliant with the H.264 video compression standard. In 2007, the District Court found that the patents were unenforceable because Qualcomm had failed to disclose them to the JVT prior to the release of the H.264 standard in May 2003. In December 2008, the US Court of Appeals for the Federal Circuit affirmed the District Court's holding that the patents were unenforceable, but remanded the case to the District Court with instructions to limit the scope of unenforceability to H.264-compliant products.

### Patents and GNU Free Software licenses

Discussions are often held regarding the legality of free software implementations of codecs like H.264, especially concerning the legal use of GNU LGPL and GPL implementations of H.264 and other patented codecs. The consensus in such discussions is that the allowable use depends on the laws of the local jurisdiction. If operating or shipping a product in a country or group of countries where none of the patents covering H.264 apply, then using, for example, an LGPL implementation of the codec is not a problem: there is no conflict between the software license and the (non-existent) patent license.

Conversely, shipping (not necessarily implementing) a product in the U.S. which includes an LGPL H.264 decoder/encoder would be in violation of the software license of the codec implementation. In simple terms, the LGPL and GPL licenses require that any rights held in conjunction with distributing the code also apply to anyone receiving the code (GPL version 3, section 10), and that no further restrictions are put on distribution or use (GPL version 3, section 7). If a patent license must be sought, this is a clear violation of both the GPL and LGPL terms (GPL version 3, section 11). Thus, the right to distribute patent-encumbered code under those licenses as part of the product is revoked per the terms of the GPL and LGPL. It should be realized that the party who would enforce any such breach of copyright would be the copyright holders, i.e., the code's authors; any suit over a breach of that clause would have to argue that there exist valid, applicable patents covering the capabilities of the GPL-licensed code, a stance the copyright holders have not taken.

## Applications

The H.264 video codec has a very broad application range that covers all forms of digital compressed video, from low bit-rate Internet streaming applications to HDTV broadcast and Digital Cinema applications with nearly lossless coding. With the use of H.264, bit rate savings of 50% or more are reported. Digital satellite TV quality, for example, was reported to be achievable at 1.5 Mbit/s, compared to the then-current operating point of MPEG-2 video at around 3.5 Mbit/s. To ensure compatibility and problem-free adoption of H.264/AVC, many standards bodies have amended or added to their video-related standards so that users of those standards can employ H.264/AVC.

Both the Blu-ray Disc format and the now-discontinued HD DVD format include the H.264/AVC High Profile as one of three mandatory video compression codecs. Sony has also chosen this format for its Memory Stick Video format.

The Digital Video Broadcast project (DVB) approved the use of H.264/AVC for broadcast television in late 2004. The Advanced Television Systems Committee (ATSC) standards body in the United States is considering the possibility of specifying one or two advanced video codecs for its optional Enhanced-VSB (E-VSB) transmission mode for use in U.S. broadcast television. It has included H.264/AVC and VC-1 into Candidate Standards as CS/TSG-659r2 and CS/TSG-658r1 respectively for this purpose.

AVCHD is a high-definition recording format designed by Sony and Panasonic that uses H.264 (conforming to H.264 while adding additional application-specific features and constraints).

AVC-Intra is an intraframe compression only format, developed by Panasonic.

## Software encoder feature comparison

AVC software implementations compared: QT, Nero, LEAD, x264, MC, Dicas, Elecard, TSE, VSofts, ProCoder, Avivo, Elemental, and IPP.

Features compared:
• B slices
• SI and SP slices
• Multiple reference frames
• Flexible macroblock ordering (FMO)
• Arbitrary slice ordering (ASO)
• Redundant slices (RS)
• Data partitioning
• Interlaced coding (PicAFF, MBAFF)
• CABAC entropy coding
• Quantization scaling matrices
• Separate Cb and Cr QP control
• Monochrome (4:0:0)
• Chroma formats (4:2:x)
• Largest sample depth (bit)
• Separate color plane coding
• Predictive lossless coding
• Film grain modeling

Profiles checked for full support: Constrained Baseline, Baseline, Extended, Main, and High.

## Hardware Encoder and IP

Because H.264 encoding requires significant computing power, software encoders running on general-purpose CPUs are typically slow, especially when dealing with HD content. To offload the CPU and/or to encode in real time, hardware encoders are employed.

A hardware H.264 encoder can be an ASIC or an FPGA. An FPGA is a general-purpose programmable chip; to use an FPGA as a hardware encoder, an H.264 encoder IP core is required. As technology has evolved, by 2009 a full-HD (Main Profile, Level 4.1, 1080p, 30 fps) H.264 encoder could run on a single low-cost FPGA.

ASIC encoders with H.264 encoding functions are available from many different semiconductor companies, but the H.264 encoder IP used in these ASICs is mostly licensed from a few IP vendors. Some vendors' H.264 IP targets FPGAs or ASICs only, and some targets both FPGAs and ASICs.
