
MPEG 132

MPEG 132 took place as an online conference from 2020-10-12 until 2020-10-16.

Press release of 132nd MPEG meeting: here
Press release of WG 8 MPEG GENOMIC CODING: here
MPEG Roadmap: here
Liaison response to SC 29/WG 1: here
Verbal report from AG3: here
Assets of Communication: here
White paper on Multi-Image Application Format: here

Press release:

The 132nd meeting of MPEG was held online from 2020-10-12 to 2020-10-16.

MPEG Continues to Progress – First Meeting with the New Structure

After the 131st MPEG meeting in June/July 2020, SC 29 raised the organizational level of its MPEG activities by elevating several subgroups of the former WG 11 (MPEG) to independent SC 29 Working Groups (WGs) and Advisory Groups (AGs). The MPEG community is now an affiliated group of WGs and AGs that will continue meeting together according to previous MPEG meeting practices and will further advance the standardization activities of the MPEG work program.

The 132nd MPEG meeting was the first meeting with the new structure consisting of AG 2 MPEG Technical Coordination (for overall MPEG work coordination), WG 2 MPEG Technical Requirements, WG 3 MPEG Systems, WG 4 MPEG Video Coding, WG 5 MPEG Joint Video Coding Team(s) with ITU-T SG 16, WG 6 MPEG Audio Coding, WG 7 MPEG Coding of 3D Graphics, WG 8 MPEG Genome Coding, AG 3 MPEG Liaison and Communication, and AG 5 MPEG Visual Quality Assessment. More than 300 participants continued to work efficiently on standards for the future needs of the industry. As a group, MPEG started to explore new application areas that will benefit from standardized compression technology in the future.

Versatile Video Coding (VVC) Ultra-HD Verification Test Completed and Conformance and Reference Software Standards Reach their First Milestone

At the 132nd MPEG meeting, verification testing of the new Versatile Video Coding (VVC) standard was completed for ultra-high definition (UHD) content with standard dynamic range, as may be used in newer streaming and broadcast television applications. The testing was performed using rigorous subjective assessment methods. It showed that VVC provides a compelling gain over its predecessor, the High Efficiency Video Coding (HEVC) standard produced in 2013. Using its reference software implementation, VVC showed bit rate savings of roughly 45% over HEVC for comparable subjective video quality. In the same assessment, an open source encoder implementation of VVC achieved additional bit rate savings of more than 10% relative to the VVC reference software, while running significantly faster. HEVC itself already offers considerably better compression than the AVC standard used in most current video applications. VVC was developed jointly by MPEG, which has published it as ISO/IEC 23090-3, and ITU-T, which has approved it as H.266.

Additionally, the standardization work for both conformance testing and reference software for the VVC standard reached its first major milestone – progressing to Committee Draft ballot in the ISO/IEC approval process. The conformance testing standard ISO/IEC 23090-15 will ensure interoperability among the diverse applications that use the VVC standard, and the reference software standard ISO/IEC 23090-16 will provide an illustration of the capabilities of VVC and a valuable example showing how the standard can be implemented. The reference software will further facilitate adoption of the standard by being available for use as the basis of product implementations.

MPEG Completes Geometry-based Point Cloud Compression Standard

At its 132nd meeting, MPEG promoted its ISO/IEC 23090-9 Geometry-based Point Cloud Compression (G‑PCC) standard to the Final Draft International Standard (FDIS) stage. G‑PCC addresses lossless and lossy coding of time-varying 3D point clouds with associated attributes such as colour and material properties. This technology is particularly suitable for sparse point clouds. ISO/IEC 23090-5 Video-based Point Cloud Compression (V‑PCC), which reached the FDIS stage in July 2020, addresses the same problem but for dense point clouds, by projecting the (typically dense) 3D point clouds onto planes, and then processing the resulting sequences of 2D images using video compression techniques. The generalized approach of G-PCC, where the 3D geometry is directly coded to exploit any redundancy in the point cloud itself, is complementary to V-PCC and particularly useful for sparse point clouds representing large environments.
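As a rough illustration of the geometry-based approach (a minimal sketch, not the actual G-PCC coding tools), the following Python fragment voxelizes a point cloud into an octree and emits one occupancy byte per occupied node; such an occupancy stream is what an entropy coder would then compress, instead of the projected 2D images used in V-PCC.

```python
# Minimal sketch (not G-PCC itself): code point-cloud geometry directly by
# recursively splitting the bounding cube and emitting one occupancy byte
# (8 bits, one per child octant) for every occupied node.
import numpy as np

def octree_occupancy(points, depth):
    # Normalize points into the unit cube [0, 1)^3.
    pts = (points - points.min(0)) / (np.ptp(points, axis=0).max() + 1e-9)
    occupancy = []

    def recurse(pts, level):
        if level == depth or len(pts) == 0:
            return
        # Classify each point into one of the 8 child octants.
        octant = ((pts >= 0.5) * np.array([1, 2, 4])).sum(axis=1)
        byte = 0
        for child in range(8):
            if (octant == child).any():
                byte |= 1 << child
        occupancy.append(byte)
        for child in range(8):
            mask = octant == child
            if mask.any():
                # Re-map the child octant back to the unit cube and descend.
                shift = np.array([(child >> k) & 1 for k in range(3)]) * 0.5
                recurse((pts[mask] - shift) * 2.0, level + 1)

    recurse(pts, 0)
    # These occupancy bytes are what an entropy coder would compress.
    return bytes(occupancy)

cloud = np.random.rand(1000, 3)
print(len(octree_occupancy(cloud, depth=6)), "occupancy bytes for 1000 points")
```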

Point clouds are typically represented by extremely large amounts of data, which is a significant barrier to mass market applications. However, the relative ease of capturing and rendering spatial information compared to other volumetric video representations makes point clouds increasingly popular for displaying immersive volumetric data. The current draft reference software implementation of a lossless, intra-frame G‑PCC encoder provides a compression ratio of up to 10:1 and lossy coding of acceptable quality for a variety of applications with a ratio of up to 35:1.

By providing high immersion at currently available bit rates, the G‑PCC standard will enable various applications such as 3D mapping, indoor navigation, autonomous driving, advanced augmented reality (AR) with environmental mapping, and cultural heritage.

MPEG Evaluates Extensions and Improvements to MPEG-G and Announces a Call for Evidence on New Advanced Genomics Features and Technologies

The extensive use of high-throughput DNA sequencing technologies enables a new approach to healthcare known as “precision medicine”. DNA sequencing technologies produce extremely large amounts of raw data that are stored in various repositories around the world. One challenge is to efficiently handle the increasing flood of sequencing data. A second challenge is the ability to process such a flood of data in order to 1) expand scientific knowledge of genome sequence information and 2) search genome databases for diagnostic and therapeutic purposes. High-performance compression of genomic data is required to reduce the storage size and increase transmission speed of large data sets.

The current MPEG-G standard series (ISO/IEC 23092) deals with the representation, compression, and transport of genome sequencing data, with support for annotation data under development. These specifications provide a file and transport format, compression technology, metadata specifications, protection support, and standard APIs for accessing genomic data in its native compressed format.
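To illustrate why dedicated genomic compression pays off (a toy sketch, not an MPEG-G coding tool), the snippet below packs a nucleotide read into 2 bits per base, a quarter of its plain-text FASTA/FASTQ size even before any entropy coding of quality values and metadata.

```python
# Illustrative only (not MPEG-G): pack an ACGT-only read into 2 bits per base,
# 4 bases per byte, to show how much redundancy plain-text storage leaves.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_read(read: str) -> bytes:
    out = bytearray()
    for i in range(0, len(read), 4):
        byte = 0
        for j, base in enumerate(read[i:i + 4]):
            byte |= CODE[base] << (2 * j)
        out.append(byte)
    return bytes(out)

read = "ACGTACGTGGTTAACC"
packed = pack_read(read)
print(len(read), "text bytes ->", len(packed), "packed bytes")  # 16 -> 4
```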

In response to a call for proposals issued at the 131st meeting, MPEG received submissions addressing low-complexity coding modes that directly improve coding and decoding speed to enable lower-latency access to data, and advanced sequencing data and metadata indexing and search that can be applied to both aligned and unaligned data directly in the compressed domain. In addition, technologies for compressing and indexing aligned and unaligned read data were proposed. MPEG is currently evaluating the integration of these new technologies into the MPEG-G standard series.

In line with MPEG’s traditional practice of continuously improving the quality and performance of its standards, MPEG issued a public Call for Evidence (CfE) at its 132nd meeting. This CfE aims to evaluate new technologies that 1) demonstrate that the current compression, transport, and indexing technology of the ISO/IEC 23092 series can be improved, especially for very long reads, and 2) yield higher compression rates, support new features, or improve other important performance metrics.

MPEG Issues Draft Call for Proposals on the Coded Representation of Haptics

Haptics provides an additional layer of entertainment and sensory immersion beyond audio and visual media. The user experience and enjoyment of media content can be significantly enhanced by adding haptics to audio/video content, whether in ISO Base Media File Format (ISOBMFF) files or in media streams such as ATSC 3.0 broadcasts, streaming games, and mobile advertising. For this purpose, haptics has been proposed as a first-order media type, co-equal to audio and video, in ISOBMFF. Furthermore, haptics has been proposed as an addition to the MPEG-DASH standard, which would make DASH streaming clients aware of the presence of haptics in ISOBMFF segments. Finally, haptics has been added to the MPEG-I use cases, resulting in a number of haptics-specific requirements for MPEG-I. These proposals are in various stages of the MPEG standardization process.

These ongoing efforts to standardize haptics underline the need for a standardized coded representation of haptics. A standardized haptics coding format and associated standardized decoder will facilitate the inclusion of haptics in the ISOBMFF, MPEG-DASH, and MPEG-I standards, making it easier for content creators and streaming content providers to include haptics and benefit from their impact on the user experience.

At its 132nd meeting, MPEG issued a draft Call for Proposals (CfP) on the Coded Representation of Haptics. This CfP calls for the submission of technologies that enable efficient representation and compression of time-dependent haptic signals and are suitable for the coding of timed haptic tracks that can be synchronized with audio and/or video media. The final CfP will be issued at the 133rd MPEG meeting in January 2021.
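As a hedged illustration of what a time-dependent haptic signal might look like in practice (the names and sample format below are hypothetical and are not taken from the draft CfP), a timed haptic track can be thought of as timestamped vibration amplitudes that a coder would quantize and synchronize against the media timeline.

```python
# Hypothetical sketch of a timed haptic track: timestamped vibration amplitudes
# quantized as a crude stand-in for coding. Not the CfP submission format.
from dataclasses import dataclass

@dataclass
class HapticSample:
    time_ms: int       # position on the media timeline
    amplitude: float   # normalized vibration strength, 0.0..1.0

def quantize_track(track, bits=8):
    """Uniformly quantize amplitudes to 'bits' bits per sample."""
    levels = (1 << bits) - 1
    return [(s.time_ms, round(s.amplitude * levels)) for s in track]

track = [HapticSample(0, 0.0), HapticSample(40, 0.6), HapticSample(80, 1.0)]
print(quantize_track(track))  # [(0, 0), (40, 153), (80, 255)]
```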

Companies and organizations are invited to submit proposals for this call. Parties do not need to be MPEG members to respond. Responses should be submitted by March 1, 2021 and will be evaluated during the 134th MPEG meeting. Detailed information, including instructions on how to respond to the call for proposals, the requirements to be considered, and the test data to be used, is available at http://www.mpeg.org/. For further information on the call, please contact Dr. Igor Curcio, WG 2 MPEG Technical Requirements Convenor, at igor.curcio@nokia.com.

MPEG Evaluates Responses to MPEG IPR Smart Contracts CfP

At its 132nd meeting, MPEG concluded the evaluation of the responses to its Call for Proposals (CfP) on technologies for converting MPEG Intellectual Property Rights (IPR) contracts to smart contracts. The responses proved complementary in addressing the CfP requirements for converting MPEG IPR XML and RDF contracts (ISO/IEC 21000-19 Media Value Chain Ontology, 21000-19/AMD1 Audio Value Chain Ontology, 21000-20 Contract Expression Language, and 21000-21 Media Contract Ontology) into smart contracts that can be executed in existing Distributed Ledger Technology (DLT) environments. The responses have been aligned in terms of terminology, workflow descriptions, architecture, and APIs. As a result, MPEG adopted a draft specification of MPEG IPR smart contracts (ISO/IEC 21000-23) at the working draft (WD) level. The final specification is expected to be available by the end of 2021.

This important standard will greatly assist the industry in achieving effective interoperability for the exchange of verified IPR contract data between different DLTs and provide valuable information to facilitate the development of interoperable MPEG IPR smart contracts. Another important feature of this standard is that it makes it possible to bind the clauses of a smart contract to those of a human-readable contract and vice versa. In this way, it increases trust and promotes the adoption of DLT applications by the parties in the music and media value chain, as each party signing an MPEG IPR smart contract will know exactly what the clauses of the smart contract express.
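A minimal sketch of the clause-binding idea described above (purely illustrative; ISO/IEC 21000-23 defines its own mechanisms): each machine-executable clause carries a digest of the corresponding human-readable clause, so either side can be verified against the other.

```python
# Illustrative only: bind a machine-executable clause to its human-readable text
# by storing a digest of the text next to the executable form, so each party can
# check that the smart contract clause matches the narrative contract clause.
import hashlib

def bind_clause(clause_id: str, human_text: str, executable_form: dict) -> dict:
    digest = hashlib.sha256(human_text.encode("utf-8")).hexdigest()
    return {"id": clause_id, "human_text_sha256": digest, "executable": executable_form}

def verify_binding(bound_clause: dict, human_text: str) -> bool:
    return bound_clause["human_text_sha256"] == hashlib.sha256(
        human_text.encode("utf-8")).hexdigest()

clause = bind_clause(
    "royalty-1",
    "The licensee shall pay 2% of net revenue to the rights holder.",
    {"action": "pay", "rate": 0.02, "payee": "rights_holder"},
)
print(verify_binding(clause, "The licensee shall pay 2% of net revenue to the rights holder."))  # True
```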

MPEG Completes Standard on Harmonization of DASH and CMAF

At its 132nd meeting, MPEG completed the development of the standard that enables the integration of Dynamic Adaptive Streaming over HTTP (DASH) with the Common Media Application Format (CMAF) by promoting ISO/IEC 23009-1:2019 Amendment 1 to Final Draft Amendment (FDAM) status. Among other improvements, this amendment defines a DASH profile for the use of CMAF.

CMAF and DASH segments are both based on the ISO Base Media File Format (ISOBMFF), which enables a smooth integration of both technologies. This change defines constraints on the content models of DASH and CMAF so that the two technologies can be used effectively together. To distribute CMAF content in DASH, this profile defines a normative mapping of CMAF structures to the structures of DASH and its usage with a Media Presentation Description (MPD) as a manifest format.

In addition, this amendment adds: DASH events and timed metadata track timing and processing models with in-band event streams; a method for specifying resynchronization points of segments whose internal structure allows container-level resynchronization; an MPD patch framework that allows partial MPD information to be transmitted, instead of the complete MPD, using the XML patch framework defined in IETF RFC 5261; and content protection enhancements for efficient signalling.
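The sketch below (hypothetical MPD content, and only a tiny subset of RFC 5261) shows the general idea behind the MPD patch framework: the client keeps its cached manifest and applies small XML patch operations, here a single "replace" on an attribute selected by XPath, instead of downloading the full MPD again.

```python
# Sketch of the MPD patch idea: apply an RFC 5261-style <replace sel="..."> to a
# cached manifest. The MPD below is a toy fragment, not a valid DASH manifest.
# Requires the third-party lxml package for XPath support.
from lxml import etree

mpd = etree.fromstring(
    b'<MPD publishTime="2020-10-12T10:00:00Z">'
    b'<Period id="1"><AdaptationSet id="0"/></Period>'
    b"</MPD>"
)

# One "replace" operation: update the publishTime attribute of the MPD element.
patch_ops = [("replace", "/MPD/@publishTime", "2020-10-12T10:00:10Z")]

for op, sel, value in patch_ops:
    if op == "replace" and "/@" in sel:
        node_path, attr = sel.rsplit("/@", 1)
        mpd.xpath(node_path)[0].set(attr, value)

print(etree.tostring(mpd).decode())
```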

It is expected that the 5th edition of the MPEG DASH standard (ISO/IEC 23009-1) containing this change will be issued at the 133rd MPEG meeting in January 2021.

MPEG Completes 2nd Edition of the Omnidirectional Media Format

At its 132nd meeting, MPEG completed the standardization of the 2nd edition of the Omnidirectional MediA Format (OMAF) by promoting ISO/IEC 23090-2 to Final Draft International Standard (FDIS) status.

This edition adds “late binding” technologies to deliver and present only the part of the content that covers the user’s dynamically changing viewport. To enable an efficient implementation of this feature, the specification introduces the concept of bitstream rewriting, in which the client dynamically generates a compliant bitstream, by combining the received portions of the bitstream, that covers only the user’s viewport. In addition, this edition extends the application area of the OMAF specification beyond 360-degree video. It introduces the concept of viewpoints, which can be considered user-switchable camera positions for viewing content or temporally contiguous parts of a storyline, giving the user multiple choices for the storyline to follow. This edition also enhances the use of video, image, or timed text overlays on top of omnidirectional visual background video or images related to a sphere or a viewport.
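To make the "late binding" idea concrete (a simplified sketch with an assumed tile grid and field of view, not OMAF signalling), a client could compute which tiles of an equirectangular tiling overlap the current viewport and request only those before the viewport-specific bitstream is rewritten.

```python
# Simplified sketch: select the tiles of an equirectangular tile grid whose
# yaw/pitch range overlaps the user's viewport. Angles in degrees.
def overlapping_tiles(viewport_yaw, viewport_pitch, fov=(90.0, 60.0), grid=(8, 4)):
    """Return (col, row) indices of tiles that intersect the viewport."""
    tiles = []
    tile_w, tile_h = 360.0 / grid[0], 180.0 / grid[1]
    for col in range(grid[0]):
        for row in range(grid[1]):
            yaw_center = -180.0 + col * tile_w + tile_w / 2
            pitch_center = -90.0 + row * tile_h + tile_h / 2
            # Yaw overlap is computed on the circle (wrap-around at +/-180).
            dyaw = (yaw_center - viewport_yaw + 180.0) % 360.0 - 180.0
            yaw_hit = abs(dyaw) <= (tile_w + fov[0]) / 2
            pitch_hit = abs(pitch_center - viewport_pitch) <= (tile_h + fov[1]) / 2
            if yaw_hit and pitch_hit:
                tiles.append((col, row))
    return tiles

print(overlapping_tiles(viewport_yaw=30.0, viewport_pitch=0.0))
```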

MPEG Completes the Low Complexity Enhancement Video Coding Standard

MPEG is pleased to announce the completion of the new ISO/IEC 23094-2 standard, Low Complexity Enhancement Video Coding (MPEG-5 Part 2, LCEVC), which was promoted to Final Draft International Standard (FDIS) status at the 132nd MPEG meeting.

LCEVC adds an enhancement data stream on top of existing and future video codecs that can appreciably improve the resolution and visual quality of the reconstructed video, with effective compression efficiency and limited complexity.
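As a conceptual sketch of this layering (not the normative LCEVC decoding process, and with nearest-neighbour upsampling standing in for the real filters), reconstruction can be thought of as: decode the base layer with any codec, upsample it, then add the decoded enhancement residuals.

```python
# Conceptual sketch of the LCEVC layering idea: a base codec reconstructs a
# lower-resolution picture; the enhancement layer upsamples it and adds coded
# residuals to restore detail at full resolution.
import numpy as np

def lcevc_style_reconstruct(base_picture: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """base_picture: HxW output of any base codec; residual: 2Hx2W enhancement data."""
    upsampled = base_picture.repeat(2, axis=0).repeat(2, axis=1)   # crude upsampling
    return np.clip(upsampled.astype(np.int16) + residual, 0, 255).astype(np.uint8)

base = np.full((540, 960), 128, dtype=np.uint8)        # e.g. an HD base layer
residual = np.zeros((1080, 1920), dtype=np.int16)      # decoded enhancement residuals
full = lcevc_style_reconstruct(base, residual)
print(full.shape)  # (1080, 1920)
```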

LCEVC can be used to complement devices originally designed only for decoding the base layer bitstream, by using firmware, operating system, or browser support. It is designed to be compatible with existing video workflows (e.g., CDNs, metadata management, DRM/CA) and network protocols (e.g., HLS, DASH, CMAF) to facilitate the rapid deployment of enhanced video services.

LCEVC can be used to deliver higher video quality in limited bandwidth scenarios, especially when the available bit rate is low for high-resolution video delivery and decoding complexity is a challenge. Typical use cases include mobile streaming and social media, and services that benefit from high-density/low-power transcoding.

WG 8 MPEG GENOMIC CODING

MPEG evaluates extensions and improvements to MPEG-G and announces Call for Evidence on new advanced genomics features and technologies

The extensive usage of high-throughput DNA sequencing technologies enables a new approach to healthcare known as “precision medicine”. DNA sequencing technologies produce extremely large amounts of raw data which are stored in different repositories worldwide. One challenge is to efficiently handle the increasing flood of sequencing data. A second challenge is the ability to process such a deluge of data in order to 1) increase the scientific knowledge of genome sequence information and 2) search genome databases for diagnosis and therapy purposes. High-performance compression of genomic data is required to reduce the storage size and increase transmission speed of large data sets.

The current MPEG-G standard series (ISO/IEC 23092) addresses the representation, compression, and transport of genome sequencing data, with support for annotation data under development. It provides a file and transport format, compression technology, metadata specifications, protection support, and standard APIs for accessing genomic data in its native compressed format.

In response to a Call for Proposals issued at the 131st meeting, MPEG is evaluating extensions to the MPEG-G standard series. Submissions were received addressing low-complexity coding modes that directly improve encoding and decoding speed to provide faster, lower-latency access to data, as well as advanced sequencing data and metadata indexing and search that can be applied to both aligned and unaligned data directly in the compressed domain. In addition, technologies for compressing and indexing aligned and unaligned read data have been proposed.

In line with the traditional MPEG practice of continuously improving the quality and performance of its standards, MPEG issued a public Call for Evidence (CfE) at the 1st SC 29/WG 8 meeting. This CfE aims to assess the performance of new technologies that 1) demonstrate that the current compression, transport, and indexing technology of the ISO/IEC 23092 series can be improved with new compression technologies, particularly for very long reads, and 2) yield higher compression rates, support new functionality, or improve other performance metrics.

Output documents published in MPEG 132

MPEG-I

# | Part | Title
3 | Versatile Video Coding | Working Draft 1 of ISO/IEC 23090-3 Amd.1 Operation range extensions
3 | Versatile Video Coding | VVC verification test report for UHD SDR video content
3 | Versatile Video Coding | VVC verification test plan (draft 4)
3 | Versatile Video Coding | Test Model 11 for Versatile Video Coding (VTM 11)
3 | Versatile Video Coding | Core experiment on high bit depth and high bit rate entropy coding in VVC
5 | Visual Volumetric Video-based Coding (V3C) and Video-based Point Cloud Compression (V-PCC) | V-PCC codec description
9 | Geometry-based Point Cloud Compression | G-PCC codec description
9 | Geometry-based Point Cloud Compression | EE4FE 13.47 On spherical coordinate geometry
12 | Immersive Video | Potential Improvements of MPEG Immersive Video
12 | Immersive Video | Test Model 7 for MPEG Immersive Video
12 | Immersive Video | Common Test Conditions for MPEG Immersive Video
14 | Scene Description for MPEG Media | Text of ISO/IEC CD 23090-14 Scene Description for MPEG Media
15 | Conformance Testing for Versatile Video Coding | Text of ISO/IEC CD 23090-15 Conformance Testing for Versatile Video Coding
16 | Reference Software for Versatile Video Coding | Text of ISO/IEC CD 23090-16 Reference Software for Versatile Video Coding

MPEG-4

# | Part | Title
10 | Advanced Video Coding | Working draft of ISO/IEC 14496-10:202X (9th ed.) Amd.1 Additional SEI messages
22 | Open Font Format | WD of ISO/IEC 14496-22:2019 AMD 2 Extending color font functionality and other updates

MPEG-5

# | Part | Title
1 | Essential Video Coding | Report on Essential Video Coding Compression Performance Verification Testing for HDR/WCG Content

MPEG-C

# | Part | Title
7 | Versatile Supplemental Enhancement Information Messages for Coded Video Bitstreams | Working Draft 1 of ISO/IEC 23002-7 Amd.1 Additional SEI messages

MPEG-A

# | Part | Title
22 | Multi-Image Application Format (MIAF) | White paper on Multi-Image Application Format

MPEG-7

# | Part | Title
17 | Compression of Neural Networks for Multimedia Content Description and Analysis | Evaluation Framework for Compression of Neural Networks for Multimedia Content Description and Analysis
17 | Compression of Neural Networks for Multimedia Content Description and Analysis | Call for Proposals on Incremental Compression of Neural Networks for Multimedia Content Description and Analysis

Administrative

Title
MPEG Press Release

Explorations

# | Part | Title
– | – | Draft Call for Proposals on the Coded Representation of Haptics. Phase 1
– | – | Draft Encoder Input Format for MPEG Haptics
7 | Immersive Video | Call for MPEG-I Visual Test Materials
7 | Immersive Video | Software manual of IV-PSNR for Immersive Video
23 | Advance signalling of MPEG containers content | Advance signalling of MPEG containers content
34 | Video Coding for Machines | Draft Evaluation Framework for Video Coding for Machines
34 | Video Coding for Machines | Draft Call for Evidence for Video Coding for Machines
36 | Neural Network-based Video Compression | Exploration experiment on neural network-based video coding technology
38 | Systems Functionalities for Video Conformance | Systems Functionalities for Video Conformance
38 | Systems Functionalities for Video Conformance | Summary of Systems standards for video codecs after the 1st WG03 meeting
39 | Exploration on Encoder and Packager Synchronization | On Exploration on Encoder and Packager Synchronization

Management

Title
Liaison response to SC 29/WG 1

Reports

Title
Verbal report from AG3

All

Title
Assets of Communication

Other documents published in MPEG 132

Type | Title
Press Release | Press release of the 1st SC29/WG 08 meeting
Work plan | Roadmap after MPEG 132