MPEG-4 Part 3


MPEG-4 Part 3 or MPEG-4 Audio is the third part of the ISO/IEC MPEG-4 international standard developed by the Moving Picture Experts Group. It specifies audio coding methods. The first version of ISO/IEC 14496-3 was published in 1999.
MPEG-4 Part 3 covers a variety of audio coding technologies: lossy speech coding, general audio coding, lossless audio compression, a Text-To-Speech Interface, Structured Audio, and several additional audio synthesis and coding techniques.
MPEG-4 Audio does not target a single application such as real-time telephony or high-quality audio compression. It applies to every application which requires the use of advanced sound compression, synthesis, manipulation, or playback.
MPEG-4 Audio is a new type of audio standard that integrates numerous different types of audio coding: natural sound and synthetic sound, low bitrate delivery and high-quality delivery, speech and music, complex soundtracks and simple ones, traditional content and interactive content.

Versions

Subparts

MPEG-4 Part 3 contains the following subparts:
MPEG-4 Audio includes a system for handling a diverse group of audio formats in a uniform manner. Each format is assigned a unique Audio Object Type to represent it. The Object Type is used to distinguish between different coding methods, and it directly determines the MPEG-4 tool subset required to decode a specific object. The MPEG-4 profiles are based on the object types, and each profile supports a different list of object types.
Object Type ID | Audio Object Type | First public release date | Description
1 | AAC Main | 1999 | contains AAC LC
2 | AAC LC (Low Complexity) | 1999 | Used in the "AAC Profile". The MPEG-4 AAC LC Audio Object Type is based on the MPEG-2 Part 7 Low Complexity profile combined with Perceptual Noise Substitution (PNS).
3 | AAC SSR (Scalable Sample Rate) | 1999 | The MPEG-4 AAC SSR Audio Object Type is based on the MPEG-2 Part 7 Scalable Sampling Rate profile combined with Perceptual Noise Substitution (PNS).
4 | AAC LTP (Long Term Prediction) | 1999 | contains AAC LC
5 | SBR (Spectral Band Replication) | 2003 | used with AAC LC in the "High Efficiency AAC Profile"
6 | AAC Scalable | 1999 |
7 | TwinVQ | 1999 | audio coding at very low bitrates
8 | CELP (Code Excited Linear Prediction) | 1999 | speech coding
9 | HVXC (Harmonic Vector eXcitation Coding) | 1999 | speech coding
10 | (Reserved) | |
11 | (Reserved) | |
12 | TTSI (Text-To-Speech Interface) | 1999 |
13 | Main synthesis | 1999 | contains 'wavetable' sample-based synthesis and Algorithmic Synthesis and Audio Effects
14 | 'wavetable' sample-based synthesis | 1999 | based on SoundFont and DownLoadable Sounds, contains General MIDI
15 | General MIDI | 1999 |
16 | Algorithmic Synthesis and Audio Effects | 1999 |
17 | ER AAC LC | 2000 | Error Resilient
18 | (Reserved) | |
19 | ER AAC LTP | 2000 | Error Resilient
20 | ER AAC Scalable | 2000 | Error Resilient
21 | ER TwinVQ | 2000 | Error Resilient
22 | ER BSAC (Bit-Sliced Arithmetic Coding) | 2000 | Error Resilient. Also known as "Fine Granule Audio" or the fine-grain scalability tool; used in combination with the AAC coding tools, it replaces the noiseless coding and the bitstream formatting of the MPEG-4 Version 1 General Audio (GA) coder.
23 | ER AAC LD (Low Delay) | 2000 | Error Resilient; used with CELP, ER CELP, HVXC, ER HVXC and TTSI in the "Low Delay Profile"
24 | ER CELP | 2000 | Error Resilient
25 | ER HVXC | 2000 | Error Resilient
26 | ER HILN (Harmonic and Individual Lines plus Noise) | 2000 | Error Resilient
27 | ER Parametric | 2000 | Error Resilient
28 | SSC (SinuSoidal Coding) | 2004 |
29 | PS (Parametric Stereo) | 2004 and 2006 | used with AAC LC and SBR in the "HE-AAC v2 Profile". The PS coding tool was defined in 2004 and the Object Type in 2006.
30 | MPEG Surround | 2007 | also known as MPEG Spatial Audio Coding, a type of spatial audio coding
31 | (escape value) | |
32 | MPEG-1/2 Layer-1 | 2005 |
33 | MPEG-1/2 Layer-2 | 2005 |
34 | MPEG-1/2 Layer-3 | 2005 | also known as "MP3onMP4"
35 | DST (Direct Stream Transfer) | 2005 | lossless audio coding, used on Super Audio CD
36 | ALS (Audio Lossless Coding) | 2006 | lossless audio coding
37 | SLS (Scalable Lossless Coding) | 2006 | two-layer audio coding with a lossless layer and a lossy General Audio core/layer
38 | SLS non-core | 2006 | lossless audio coding without a lossy General Audio core/layer
39 | ER AAC ELD (Enhanced Low Delay) | 2008 | Error Resilient
40 | SMR (Symbolic Music Representation) Simple | 2008 | note: Symbolic Music Representation is also specified as MPEG-4 Part 23
41 | SMR Main | 2008 |
42 | USAC (Unified Speech and Audio Coding) | 2012 | Unified Speech and Audio Coding is defined in MPEG-D Part 3
43 | SAOC (Spatial Audio Object Coding) | 2010 | note: Spatial Audio Object Coding is also specified as MPEG-D Part 2
44 | LD MPEG Surround | 2010 | conveys Low Delay MPEG Surround Coding side information in the MPEG-4 Audio framework
45 | SAOC-DE | 2013 | Spatial Audio Object Coding Dialogue Enhancement
46 | Audio Sync | 2015 | The audio synchronization tool provides the capability of synchronizing multiple contents across multiple devices.
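In practice, a decoder learns the Audio Object Type from the AudioSpecificConfig at the start of a stream's configuration data. The following minimal sketch (written for illustration, not taken from the standard's pseudocode) reads the 5-bit audioObjectType field and handles the escape value 31 used to signal object types 32 and above.

```python
# Minimal sketch: reading the Audio Object Type from the start of an MPEG-4
# AudioSpecificConfig (ISO/IEC 14496-3). A 5-bit audioObjectType comes first;
# the value 31 is an escape to a 6-bit extension field for types 32 and above.

def read_audio_object_type(config: bytes) -> int:
    # Assumes at least two bytes of configuration data are available.
    bits = int.from_bytes(config[:2], "big")
    aot = bits >> 11                        # first 5 bits: audioObjectType
    if aot == 31:                           # escape value
        aot = 32 + ((bits >> 5) & 0x3F)     # next 6 bits: audioObjectTypeExt
    return aot

# Example: 0x12 0x10 is a common AudioSpecificConfig for AAC LC (object type 2),
# 44.1 kHz, stereo.
print(read_audio_object_type(bytes([0x12, 0x10])))  # -> 2
```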

Audio Profiles

The MPEG-4 Audio standard defines several profiles. These profiles are based on the object types, and each profile supports a different list of object types. Each profile may also have several levels, which limit some parameters of the tools present in the profile; these parameters are usually the maximum sampling rate and the maximum number of audio channels decoded at the same time.
Audio Profile | Audio Object Types | First public release date
AAC Profile | AAC LC | 2003
High Efficiency AAC Profile | AAC LC, SBR | 2003
HE-AAC v2 Profile | AAC LC, SBR, PS | 2006
Main Audio Profile | AAC Main, AAC LC, AAC SSR, AAC LTP, AAC Scalable, TwinVQ, CELP, HVXC, TTSI, Main synthesis | 1999
Scalable Audio Profile | AAC LC, AAC LTP, AAC Scalable, TwinVQ, CELP, HVXC, TTSI | 1999
Speech Audio Profile | CELP, HVXC, TTSI | 1999
Synthetic Audio Profile | TTSI, Main synthesis | 1999
High Quality Audio Profile | AAC LC, AAC LTP, AAC Scalable, CELP, ER AAC LC, ER AAC LTP, ER AAC Scalable, ER CELP | 2000
Low Delay Audio Profile | CELP, HVXC, TTSI, ER AAC LD, ER CELP, ER HVXC | 2000
Natural Audio Profile | AAC Main, AAC LC, AAC SSR, AAC LTP, AAC Scalable, TwinVQ, CELP, HVXC, TTSI, ER AAC LC, ER AAC LTP, ER AAC Scalable, ER TwinVQ, ER BSAC, ER AAC LD, ER CELP, ER HVXC, ER HILN, ER Parametric | 2000
Mobile Audio Internetworking Profile | ER AAC LC, ER AAC Scalable, ER TwinVQ, ER BSAC, ER AAC LD | 2000
HD-AAC Profile | AAC LC, SLS | 2009
ALS Simple Profile | ALS | 2010
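As an illustration of how levels constrain a decoder, the sketch below checks a stream's parameters against per-level limits. The limit values shown are placeholders for illustration only, not the normative figures from the standard's level tables.

```python
# Illustrative sketch: checking whether a stream fits a profile level.
# The (profile, level) limits below are placeholder values, not the normative
# channel/sampling-rate limits defined in ISO/IEC 14496-3.

LEVEL_LIMITS = {
    # (profile, level): (max_channels, max_sample_rate_hz)  -- example values only
    ("AAC Profile", 2): (2, 48000),
    ("AAC Profile", 4): (5, 48000),
    ("High Efficiency AAC Profile", 2): (2, 48000),
}

def fits_level(profile: str, level: int, channels: int, sample_rate: int) -> bool:
    """Return True if the stream parameters fall within the given profile level."""
    max_channels, max_sample_rate = LEVEL_LIMITS[(profile, level)]
    return channels <= max_channels and sample_rate <= max_sample_rate

print(fits_level("AAC Profile", 2, channels=2, sample_rate=44100))  # True
```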

Audio storage and transport

There is no standard for the transport of elementary streams over a channel, because the broad range of MPEG-4 applications has delivery requirements too diverse to be characterized by a single solution.
The capabilities of a transport layer and the communication between transport, multiplex, and demultiplex functions are described in the Delivery Multimedia Integration Framework in ISO/IEC 14496-6. A wide variety of delivery mechanisms exist below this interface, e.g., MPEG transport stream, Real-time Transport Protocol, etc.
Transport in Real-time Transport Protocol is defined in RFC 3016, RFC 3640, RFC 4281 and RFC 4337.
LATM (Low-overhead MPEG-4 Audio Transport Multiplex) and LOAS (Low Overhead Audio Stream) were defined for natural audio applications that do not require sophisticated object-based coding or other functions provided by MPEG-4 Systems.
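As an example of this low-overhead framing, a LOAS AudioSyncStream can be located with very little logic: each frame starts with an 11-bit syncword (0x2B7) followed by a 13-bit length field. The sketch below is a simplified reading of that framing, not a full demultiplexer.

```python
# Minimal sketch of locating LOAS/AudioSyncStream frames in a byte stream.
# Each frame begins with an 11-bit syncword (0x2B7) followed by a 13-bit
# audioMuxLengthBytes field giving the size of the AudioMuxElement payload.

def iter_loas_frames(data: bytes):
    i = 0
    while i + 3 <= len(data):
        # 11-bit syncword 0x2B7: first byte 0x56, top 3 bits of second byte set.
        if data[i] == 0x56 and (data[i + 1] & 0xE0) == 0xE0:
            length = ((data[i + 1] & 0x1F) << 8) | data[i + 2]  # audioMuxLengthBytes
            frame_end = i + 3 + length
            if frame_end > len(data):
                break                      # truncated frame; wait for more data
            yield data[i + 3:frame_end]    # the AudioMuxElement payload
            i = frame_end
        else:
            i += 1                         # resynchronize byte by byte

# Usage (hypothetical file name):
# with open("stream.loas", "rb") as f:
#     for mux_element in iter_loas_frames(f.read()):
#         ...
```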

Bifurcation in the AAC technical standard

The Advanced Audio Coding in MPEG-4 Part 3 Subpart 4 was enhanced relative to the previous standard MPEG-2 Part 7, in order to provide better sound quality for a given encoding bitrate.
It is assumed that any differences between Part 3 and Part 7 will be ironed out by the ISO standards body to avoid the possibility of future bitstream incompatibilities. At present there are no known player or codec incompatibilities, owing to the newness of the standard.
The MPEG-2 Part 7 standard was first published in 1997 and offers three default profiles: Low Complexity profile, Main profile and Scalable Sampling Rate profile.
The MPEG-4 Part 3 Subpart 4 combined the profiles from MPEG-2 Part 7 with Perceptual Noise Substitution and defined them as Audio Object Types.

HE-AAC

HE-AAC is an extension of AAC LC using Spectral Band Replication (SBR) and, in HE-AAC v2, Parametric Stereo (PS). It is designed to increase coding efficiency at low bitrates by using a partial parametric representation of the audio signal.
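A typical HE-AAC configuration runs the AAC LC core at half the output sampling rate ("dual-rate" SBR), with the upper part of the spectrum reconstructed from SBR parameters. The sketch below illustrates this common arrangement; the exact crossover frequency is set by the SBR configuration and is only approximated here.

```python
# Illustration of the common dual-rate HE-AAC arrangement: the AAC LC core runs
# at half the output sampling rate and SBR reconstructs the upper spectrum
# parametrically. The crossover shown is a rough approximation, not the value
# signalled by a real SBR header.

def heaac_dual_rate(output_rate_hz: int) -> dict:
    core_rate = output_rate_hz // 2                   # AAC LC core sampling rate
    return {
        "core_sample_rate_hz": core_rate,             # e.g. 24000
        "output_sample_rate_hz": output_rate_hz,      # e.g. 48000 after SBR
        "approx_sbr_range_hz": (core_rate // 2, output_rate_hz // 2),
    }

print(heaac_dual_rate(48000))
# {'core_sample_rate_hz': 24000, 'output_sample_rate_hz': 48000,
#  'approx_sbr_range_hz': (12000, 24000)}
```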

AAC-SSR

AAC Scalable Sample Rate (AAC-SSR) was introduced by Sony to the MPEG-2 Part 7 and MPEG-4 Part 3 standards and was first published in ISO/IEC 13818-7 (Part 7: Advanced Audio Coding) in 1997. The audio signal is first split into 4 bands using a 4-band polyphase quadrature filter (PQF) bank. These 4 bands are then further split using MDCTs with a size k of 32 or 256 samples. This is similar to normal AAC LC, which applies MDCTs with a size k of 128 or 1024 directly to the audio signal.
The advantage of this technique is that short-block switching can be done separately for every PQF band, so high frequencies can be encoded using short blocks for better temporal resolution while low frequencies are still encoded with high spectral resolution. However, due to aliasing between the 4 PQF bands, coding efficiency around the band boundaries (at 1, 2 and 3 × fs/8) is worse than that of normal MPEG-4 AAC LC.
MPEG-4 AAC-SSR is very similar to ATRAC and ATRAC3.
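The band structure can be made concrete with a short sketch (illustrative only): each PQF band nominally covers fs/8 of bandwidth, and within each band the encoder chooses between long (256) and short (32) MDCT blocks.

```python
# Sketch of the AAC-SSR analysis structure described above: a 4-band PQF split,
# each band transformed with an MDCT of 256 (long) or 32 (short) samples, in
# contrast to AAC LC's single full-band MDCT of 1024 or 128 samples.

def ssr_band_layout(sample_rate_hz: int):
    """Return the nominal frequency range covered by each of the 4 PQF bands."""
    band_width = sample_rate_hz / 8          # each band spans fs/8
    return [(k * band_width, (k + 1) * band_width) for k in range(4)]

# At 48 kHz each PQF band nominally covers 6 kHz:
for lo, hi in ssr_band_layout(48000):
    print(f"{lo/1000:4.1f}-{hi/1000:4.1f} kHz, MDCT size 256 (long) or 32 (short)")
```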

Why AAC-SSR was introduced

The idea behind AAC-SSR was not only the advantage listed above, but also the possibility of reducing the data rate by removing 1, 2 or 3 of the upper PQF bands. A very simple bitstream splitter can remove these bands and thus reduce the bitrate and sample rate.
Example:
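As an illustration (with assumed figures, since the exact numbers depend on the encoder configuration): for a 48 kHz input split into four 6 kHz PQF bands, keeping only some of the bands reduces the bandwidth, the effective sampling rate and, roughly proportionally, the bitrate.

```python
# Illustrative sketch with assumed figures: the effect of a simple bitstream
# splitter that keeps only the lowest k of the 4 PQF bands of an AAC-SSR stream.
# The bitrate estimate is rough, since the bands do not carry equal numbers of bits.

def split_ssr(total_kbps: float, bands_kept: int, sample_rate_hz: int = 48000) -> dict:
    band_width_hz = sample_rate_hz / 8               # each PQF band spans fs/8
    return {
        "bands_kept": bands_kept,
        "bandwidth_hz": bands_kept * band_width_hz,
        "sample_rate_hz": bands_kept * sample_rate_hz // 4,
        "approx_kbps": total_kbps * bands_kept / 4,  # rough estimate only
    }

print(split_ssr(256, bands_kept=1))
# keeps ~6 kHz of bandwidth at roughly 64 kbit/s
```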
Note: although this is possible, the resulting quality is much worse than is typical for that bitrate. For normal 64 kbit/s AAC LC, a bandwidth of 14–16 kHz is achieved by using intensity stereo and reduced NMRs (noise-to-mask ratios); this degrades audible quality less than transmitting a 6 kHz bandwidth with perfect quality.

BSAC

Bit-Sliced Arithmetic Coding (BSAC) is an MPEG-4 standard for scalable audio coding. BSAC replaces the noiseless (Huffman) coding of AAC with arithmetic coding of bit slices, with the rest of the processing being identical to AAC. This support for scalability allows for nearly transparent sound quality at 64 kbit/s and graceful degradation at lower bit rates. BSAC coding is best performed in the range of 40 kbit/s to 64 kbit/s, though it operates in the range of 16 kbit/s to 64 kbit/s. The AAC-BSAC codec is used in Digital Multimedia Broadcasting (DMB) applications.
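The scalability works by layering the bitstream: a base layer is followed by small enhancement layers (commonly described as steps of about 1 kbit/s per channel), and a server or receiver can simply drop the upper layers to hit a target bitrate. The sketch below illustrates the idea with assumed layer sizes; it is not a BSAC decoder.

```python
# Illustrative sketch (assumed layer sizes): fine-grain scalability by keeping
# only as many enhancement layers as fit under a target bitrate.

def enhancement_layers_to_keep(base_kbps: float, layer_kbps: float, target_kbps: float) -> int:
    """Number of enhancement layers that fit under the target bitrate."""
    if target_kbps < base_kbps:
        raise ValueError("target bitrate is below the base-layer bitrate")
    return int((target_kbps - base_kbps) // layer_kbps)

# Example: a stereo stream with a 16 kbit/s base layer and 2 kbit/s enhancement
# layers (about 1 kbit/s per channel), scaled down from 64 kbit/s to 48 kbit/s.
print(enhancement_layers_to_keep(base_kbps=16, layer_kbps=2, target_kbps=48))  # -> 16
```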

Licensing

In 2002, the MPEG-4 Audio Licensing Committee selected the Via Licensing Corporation as the Licensing Administrator for the MPEG-4 Audio patent pool.