
MPEG 146

MPEG 146 took place in Rennes from 2024-04-22 until 2024-04-26.

Press Release

MPEG issues Call for Proposals for AI-based Point Cloud Coding

At the 146th MPEG meeting, MPEG Technical Requirements (WG 2) issued a Call for Proposals (CfP) focusing on AI-based point cloud coding technologies. This initiative stems from ongoing explorations by MPEG into potential use cases, requirements, and the capabilities of AI-driven point cloud encoding, particularly for dynamic point clouds.

With recent significant progress in AI-based point cloud compression technologies, MPEG is keen on studying and adopting AI methodologies. MPEG is specifically looking for learning-based codecs capable of handling a broad spectrum of dynamic point clouds, which are crucial for applications ranging from immersive experiences to autonomous driving and navigation.

As the field evolves rapidly, MPEG expects to receive multiple innovative proposals. These may include a unified codec, capable of addressing multiple types of point clouds, or specialized codecs tailored to meet specific requirements, contingent upon demonstrating clear advantages. MPEG has therefore publicly called for submissions of AI-based point cloud codecs, aimed at deepening the understanding of the various options available and their respective impacts. Submissions that meet the requirements outlined in the call will be invited to provide source code for further analysis, potentially laying the groundwork for a new standard in AI-based point cloud coding. MPEG welcomes all relevant contributions and looks forward to evaluating the responses.

Interested parties are requested to contact the MPEG WG 2 Convenor Igor Curcio (igor.curcio@nokia.com) and MPEG WG 7 Convenor Marius Preda (marius.preda@it-sudparis.eu) to register their participation in the CfP and to submit responses for review at the 148th MPEG meeting in November 2024. Further details are given in the CfP, issued as WG 2 document N 365 and available from https://www.mpeg.org/meetings/mpeg-146/.

MPEG issues Call for Interest in Object Wave Compression

At the 146th MPEG meeting, MPEG Technical Requirements (WG 2) issued a Call for Interest (CfI) in object wave compression. Computer holography, a 3D display technology, utilizes a digital fringe pattern called a computer-generated hologram (CGH) to reconstruct 3D images from input 3D models. Holographic near-eye displays (HNEDs) reduce the need for extensive pixel counts because their wearable design places the display close to the eye. This makes HNEDs frontrunners for the early commercialization of computer holography, with significant research underway toward product development.

Innovative approaches facilitate the transmission of object wave data, crucial for CGH calculations, over networks. Object wave transmission offers several advantages, including independent treatment from playback device optics, lower computational complexity, and compatibility with video coding technology. These advancements open doors for diverse applications, ranging from entertainment experiences to real-time two-way spatial transmissions, revolutionizing fields such as remote surgery and virtual collaboration. As MPEG explores object wave compression for computer holography transmission, a Call for Interest seeks contributions to address market needs in this field.

Interested parties are requested to contact the MPEG WG 2 Convenor Igor Curcio (igor.curcio@nokia.com) or to submit inputs for review at the 147th MPEG meeting in July 2024. Further details are given in the Call for Interest, issued as WG 2 document N 377 and available from https://www.mpeg.org/meetings/mpeg-146/.

MPEG reaches First Milestone for Fifth Edition of Open Font Format

At the 146th MPEG meeting, MPEG Systems (WG 3) promoted the 5th edition of ISO/IEC 14496-22 Open font format to Committee Draft (CD), marking the initial stage of standard development.

The importance of textual representation within multimedia content cannot be overstated. In recognition of this, MPEG Systems has long pursued the standardization of interoperable font formats, and the start of the 5th edition marks a pivotal milestone. This latest iteration not only improves the legibility of the specification but also removes previous limitations, notably the 64K glyph encoding constraint of a single font file. By surpassing this barrier, the new edition enables comprehensive coverage of the entire Unicode repertoire, accommodating diverse world languages and writing systems, including multiple glyph variants, within a single font file.
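The 64K constraint follows from 16-bit glyph identifiers: 2^16 distinct IDs can address at most 65,536 glyphs, fewer than a pan-Unicode font with variants may need. A minimal sketch of the arithmetic (the exact identifier width adopted by the 5th edition is not stated here; 24-bit appears purely as an illustration):

```python
# Number of glyphs addressable with an id_bits-wide glyph identifier.
def max_glyphs(id_bits: int) -> int:
    return 2 ** id_bits

# Classic Open Font Format: 16-bit glyph IDs cap a font at 64K glyphs.
print(max_glyphs(16))   # 65536
# A wider identifier (24-bit shown only as an illustration) lifts the
# cap far beyond Unicode's entire code space (U+0000..U+10FFFF).
print(max_glyphs(24))   # 16777216
print(0x10FFFF + 1)     # 1114112 code points in the Unicode code space
```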

Moreover, the latest edition introduces more space-efficient composite glyph representations, along with many new features and capabilities for variable fonts. Together these substantially reduce font file sizes and enable the creation of parametric variable fonts using higher-order interpolation.

The standard is expected to reach Final Draft International Standard (FDIS) status by the end of 2025.

MPEG ratifies Second Edition of Scene Description

At the 146th MPEG meeting, MPEG Systems (WG 3) promoted the 2nd edition of ISO/IEC 23090-14 Scene description to Final Draft International Standard (FDIS), the final stage of standard development.

Since the first edition of the standard on immersive media scene description was released in 2022, work on extending its capabilities has continued steadily. The new edition consolidates two amendments into the previous text, improving readability. Notable advancements include the integration of MPEG-developed immersive media objects, such as Video-based Point Cloud Compression (V-PCC, specified in ISO/IEC 23090-5) and MPEG Immersive Video (MIV, specified in ISO/IEC 23090-12), within a scene. Furthermore, this edition strengthens support for data types essential to immersive scenes, including haptics, augmented reality, avatars, interactivity, and lighting.

Looking ahead, MPEG Systems is steadfast in its commitment to advancing the standard’s development, with plans to expand support to encompass MPEG-I immersive audio and beyond.

MPEG reaches First Milestone for Second Edition of MPEG Immersive Video (MIV)

At the 146th MPEG meeting, MPEG Video Coding (WG 4) reached the Committee Draft (CD) stage of the 2nd edition of ISO/IEC 23090-12 MPEG immersive video (MIV), the first stage of standard development.

MIV was developed to support the compression of immersive video content, in which multiple real or virtual cameras capture a real or virtual 3D scene. The standard enables the storage and distribution of immersive video content over existing and future networks for playback with 6 degrees of freedom (6DoF) of view position and orientation. MIV is a flexible standard for multi-view video plus depth (MVD) and multi-planar video (MPI) that leverages strong hardware support for commonly used video formats to compress volumetric video.
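As a rough illustration of the multi-view video plus depth (MVD) representation that MIV builds on, a pixel in one view can be unprojected to a 3D point using its depth value and the camera intrinsics. The pinhole model below is a generic sketch, not the MIV specification's exact reprojection math; all parameter values are illustrative:

```python
# Unproject pixel (u, v) with depth z to a 3D point in the camera
# frame using a pinhole model:
#   x = (u - cx) * z / fx,  y = (v - cy) * z / fy
def unproject(u: float, v: float, z: float,
              fx: float, fy: float, cx: float, cy: float):
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

# Example: principal point at the centre of a 1920x1080 view,
# focal length 1000 px, depth 2 m.
x, y, z = unproject(u=1500, v=600, z=2.0,
                    fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
print(x, y, z)  # 1.08 0.12 2.0
```

Synthesizing a new viewpoint then amounts to re-projecting such 3D points into the target camera, which is why accurate depth (or decoder-side depth estimation) is central to the standard.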

New features in the 2nd edition are coloured depth, capture device information, patch margins, background views, static background atlases, support for decoder-side depth estimation, chroma dynamic range modification, piecewise linear normalized disparity quantization, and linear depth quantization. These features provide additional functionality and improved performance.

The first edition of the standard included the MIV Main profile for MVD, the MIV Extended profile, which enables MPI, and the MIV Geometry Absent profile, which is suitable for use with cloud-based and decoder-side depth estimation. In the 2nd edition, the MIV 2 profile is being added, which is a superset of the existing profiles and covers all new functionality. In addition, a document entitled “profiles under consideration” was started to study the inclusion of narrower profiles in this edition.

Finally, a 2nd edition of ISO/IEC 23090-23 Conformance and reference software for MPEG immersive video is expected to be requested at the next MPEG meeting.

MPEG releases New Editions of AVC, HEVC, and Video CICP

At the 146th MPEG meeting, the MPEG Joint Video Experts Team with ITU-T SG 16 (WG 5), also known as JVET, promoted (i) the 11th edition of ISO/IEC 14496-10 Advanced Video Coding (AVC), (ii) the 5th edition of ISO/IEC 23008-2 High Efficiency Video Coding (HEVC), and (iii) the 3rd edition of ISO/IEC 23091-2 Video Coding-independent Code Points (Video CICP) to Final Draft International Standard (FDIS), the final stage of standard development.

The latest editions of AVC and HEVC now incorporate support for additional SEI messages, drawing from ISO/IEC 23002-7 Versatile Supplemental Enhancement Information (SEI) Messages for Coded Video Bitstreams. Specifically, this includes the integration of (a) the neural network post-filtering SEI message and (b) the phase indication SEI message into these standards. HEVC has been expanded to include extended multiview profiles for 8-bit and 10-bit video, as well as monochrome multiview profiles supporting standalone depth map coding with up to 16 bits. Additionally, the new version of Video CICP introduces additional color code points and implements text improvements and clarifications.

These advancements demonstrate a commitment to maintaining support for legacy standards developed jointly with ITU-T, ensuring their relevance to current market needs.

MPEG Promotes Standard Development for Machine-Optimized Video Compression

At the 146th MPEG meeting, the MPEG Joint Video Experts Team with ITU-T SG16 (WG 5), also known as JVET, advanced ISO/IEC 23888-3 “Optimization of Encoders and Receiving Systems for Machine Analysis of Coded Video Content” as part 3 of MPEG AI to Committee Draft Technical Report (CDTR), marking the initial stage of standard development.

In recent years, the efficacy of machine learning-based algorithms in video content analysis has steadily improved. However, an encoder designed for human consumption does not always produce compressed video conducive to effective machine analysis. This challenge lies not in the compression standard but in optimizing the encoder or receiving system. The forthcoming technical report addresses this gap by showcasing technologies and methods that optimize encoders or receiving systems to enhance machine analysis performance.

Developed collaboratively with ITU-T SG16, this technical report will be published as a technically aligned twin text, corresponding to a forthcoming supplement or technical paper of ITU-T. It is available at https://www.mpeg.org/meetings/mpeg-146/.

MPEG reaches First Milestone for MPEG-I Immersive Audio

At the 146th MPEG meeting, MPEG Audio Coding (WG 6) promoted ISO/IEC 23090-4 MPEG-I immersive audio and ISO/IEC 23090-34 Immersive audio reference software to Committee Draft (CD) stage, the first stage of standard development. The MPEG-I immersive audio standard sets a new benchmark for compact and lifelike audio representation in virtual and physical spaces, catering to Virtual, Augmented, and Mixed Reality (VR/AR/MR) applications. By enabling high-quality, real-time interactive rendering of audio content with six degrees of freedom (6DoF), users can experience immersion, freely exploring 3D environments while enjoying dynamic audio.

Designed in accordance with MPEG’s rigorous standards, MPEG-I immersive audio ensures efficient distribution across bandwidth-constrained networks without compromising on quality. Unlike proprietary frameworks, this standard prioritizes interoperability, stability, and versatility, supporting both streaming and downloadable content while seamlessly integrating with MPEG-H 3D audio compression.

MPEG-I’s comprehensive modeling of real-world acoustic effects, including sound source properties and environmental characteristics, guarantees an authentic auditory experience. Moreover, its efficient rendering algorithms balance computational complexity with accuracy, empowering users to finely tune scene characteristics for desired outcomes.

The release of the CD for ISO/IEC 23090-34 Immersive Audio Reference Software, which encompasses all aspects of the standard, facilitates real-time evaluation and adoption in industry and consumer applications. Interested parties can access both the text specification and reference software publicly at https://www.mpeg.org/meetings/mpeg-146/, with additional insights available through a dedicated white paper released during this meeting.

MPEG reaches First Milestone for Video-based Dynamic Mesh Coding (V-DMC)

At the 146th MPEG meeting, MPEG Coding of 3D Graphics and Haptics (WG 7) reached the Committee Draft (CD) stage of ISO/IEC 23090-29 Video-based Dynamic Mesh Compression (V-DMC), the first stage of standard development. This standard represents a significant advancement in 3D content compression, catering to the ever-increasing complexity of dynamic meshes used across various applications, including real-time communications, storage, free-viewpoint video, augmented reality (AR), and virtual reality (VR). The standard addresses the challenges associated with dynamic meshes that exhibit time-varying connectivity and attribute maps, which were not sufficiently supported by previous standards.

Video-based Dynamic Mesh Compression promises to revolutionize how dynamic 3D content is stored and transmitted, allowing more efficient and realistic interactions with 3D content globally. The Committee Draft follows an extensive call for proposals issued by MPEG, which invited technology developers to submit innovations that could contribute to the new standard. Proposals were evaluated based on various objective and subjective metrics to ensure the selected technologies meet and exceed the current market and technical demands. MPEG extends its gratitude to all contributors who have submitted proposals and participated in the rigorous testing and evaluation process. The results of these evaluations have shaped the draft of the standard, ensuring it meets the high expectations and needs of the industry.

The Committee Draft of the Video-based Dynamic Mesh Compression standard is now available for further comments and evaluation by national bodies. It is available at https://www.mpeg.org/meetings/mpeg-146/. MPEG encourages continued participation from the community to finalize the standard for publication.

MPEG reaches First Milestone for Low Latency, Low Complexity LiDAR Coding

At the 146th MPEG meeting, MPEG Coding of 3D Graphics and Haptics (WG 7) reached the Committee Draft (CD) stage of ISO/IEC 23090-30 Low Latency, Low Complexity LiDAR Coding, the first stage of standard development. This milestone underscores MPEG’s commitment to advancing coding technologies required by modern LiDAR applications across diverse sectors. The new standard addresses critical needs in the processing and compression of LiDAR-acquired point clouds, which are integral to applications ranging from automated driving to smart city management. It provides an optimized solution for scenarios requiring high efficiency in both compression and real-time delivery, responding to the increasingly complex demands of LiDAR data handling.

LiDAR technology has become essential for various applications that require detailed environmental scanning, from autonomous vehicles navigating roads to robots mapping indoor spaces. The Low Latency, Low Complexity LiDAR Coding standard will facilitate a new level of efficiency and responsiveness in LiDAR data processing, which is critical for the real-time decision-making capabilities needed in these applications.

This Committee Draft builds on comprehensive analysis and industry feedback to address specific challenges such as noise reduction, temporal data redundancy, and the need for region-based quality of compression. The standard also emphasizes the importance of low latency coding to support real-time applications, essential for operational safety and efficiency in dynamic environments.
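As a generic illustration of one building block common to point cloud codecs (not the specific toolset of ISO/IEC 23090-30), quantizing raw LiDAR coordinates onto a voxel grid discards sub-voxel noise and merges duplicate points, trading geometric precision against bitrate:

```python
# Quantize LiDAR points to a voxel grid: each coordinate is mapped to
# its nearest voxel index, and points falling into the same voxel
# collapse into one. Coarser voxels -> fewer points -> fewer bits.
def voxelize(points, voxel_size):
    seen = set()
    out = []
    for x, y, z in points:
        key = (round(x / voxel_size),
               round(y / voxel_size),
               round(z / voxel_size))
        if key not in seen:          # merge points sharing a voxel
            seen.add(key)
            out.append(key)
    return out

pts = [(0.01, 0.02, 0.00), (0.03, 0.01, 0.02),  # near-duplicates
       (1.20, 0.00, 0.40)]
print(voxelize(pts, voxel_size=0.1))  # [(0, 0, 0), (12, 0, 4)]
```

Region-based quality of compression, one of the requirements mentioned above, can then be approximated by using a finer `voxel_size` in regions of interest and a coarser one elsewhere.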

Key applications highlighted for the new standard include:

  • Automotive Industry: enhancing driver assistance systems and self-driving functionalities through efficient and rapid processing of road and environmental data.
  • Robotics: optimizing navigation and operational efficiency in automated robots.
  • Surveillance: supporting advanced security systems with combined video and LiDAR data processing capabilities.
  • Aerial Drones: enabling safer and more effective use of drones in professional and emergency scenarios through improved obstacle detection and environmental mapping.
  • Industrial Automation: enhancing precision and safety in industrial applications through better tracking and positioning of machinery.

The Committee Draft is available at https://www.mpeg.org/meetings/mpeg-146/.

MPEG White Paper

At the 146th MPEG meeting, MPEG Liaison and Communication (AG 3) approved the following MPEG white paper, available at https://www.mpeg.org/whitepapers/.

White paper on MPEG-I Immersive Audio

The MPEG-I immersive audio standard aims at providing a convincing solution for compact representation and for high-quality real-time interactive rendering of virtual audio content with six degrees of freedom (6DoF), i.e., the user can not only turn their head in all directions (pitch/yaw/roll) but also move around freely in 3D space.
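The six degrees of freedom are three rotations (yaw, pitch, roll) and three translations. As a generic geometric sketch (not the standard's renderer), a source's position relative to the listener is obtained by subtracting the listener's position and applying the inverse of the listener's head rotation; for brevity only yaw is shown, assuming forward = +y, right = +x, and yaw positive counterclockwise:

```python
import math

# Relative source position in the horizontal plane for a listener at
# (lx, ly) with head yaw in radians. A full 6DoF renderer would use a
# 3D rotation covering yaw/pitch/roll; yaw alone is shown here.
def source_relative_to_listener(sx, sy, lx, ly, yaw):
    dx, dy = sx - lx, sy - ly
    cos_y, sin_y = math.cos(-yaw), math.sin(-yaw)  # inverse rotation
    return (dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y)

# A source 1 m ahead of a listener at the origin: after the listener
# turns 90 degrees left, the source sits to the listener's right.
x, y = source_relative_to_listener(0.0, 1.0, 0.0, 0.0, math.pi / 2)
print(round(x, 6), round(y, 6))  # 1.0 0.0
```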

To let users explore the 6DoF virtual world convincingly, many acoustic effects of the real world must be modeled accurately, including properties of sound sources (e.g., level, size, radiation/directivity characteristics, Doppler processing) as well as effects of the acoustic environment (e.g., sound reflections and reverberation, diffraction, total and partial occlusion). MPEG-I immersive audio features a plethora of technology components that support computationally efficient rendering of such aspects. In contrast to many existing technologies, it offers scene descriptions using physics-inspired metadata (for easier scene authoring from CAD scenes and material databases) and possibilities for artistic tuning of the scene characteristics to achieve the desired results.
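One of the listed source properties, Doppler processing, follows from the classical relation between source and listener velocities along the connecting line. The textbook formula is sketched below as background; it is not the standard's exact rendering algorithm:

```python
# Classical Doppler shift:
#   f_observed = f * (c + v_listener) / (c - v_source)
# Velocities are positive when moving toward the other party;
# c is the speed of sound, roughly 343 m/s in air at 20 degrees C.
def doppler(f_hz, v_listener, v_source, c=343.0):
    return f_hz * (c + v_listener) / (c - v_source)

# A 440 Hz source approaching a stationary listener at 20 m/s
# is heard noticeably sharp.
print(round(doppler(440.0, 0.0, 20.0), 1))  # 467.2
```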

During the standardization process, extensive listening test comparisons and evaluations were conducted.

Output documents published in MPEG 146

MPEG-I

# | Part | Title
2 | Omnidirectional Media Format | Technologies under Consideration for OMAF
2 | Omnidirectional Media Format | Draft text of ISO/IEC 23090-2 DAM 1 Server-side dynamic adaptation
3 | Versatile Video Coding | Text of ISO/IEC 23090-3:202x/CDAM1 Additions and corrections for VVC
3 | Versatile Video Coding | Test model 22 for versatile video coding (VTM 22)
4 | Immersive Audio | Text of ISO/IEC CD 23090-4, MPEG-I immersive audio
4 | Immersive Audio | MPEG-I immersive audio Encoder Input Format, version 9
4 | Immersive Audio | MPEG-I immersive audio LSDF, version 3
4 | Immersive Audio | White Paper on MPEG-I Immersive Audio
6 | Immersive Media Metrics | Technologies under Consideration for ISO/IEC 23090-6 Immersive media metrics
7 | Immersive Media Metadata | Technologies under Consideration for Immersive media metadata
8 | Network based Media Processing | NBMP reference software and conformance framework
8 | Network based Media Processing | Technologies under Consideration for NBMP
10 | Carriage of Visual Volumetric Video-based Coding Data | Technologies under consideration on carriage of V3C data
12 | Immersive Video | Manual of the Extrinsic Camera Parameters Calibration framework
12 | Immersive Video | Common test conditions for MPEG immersive video
12 | Immersive Video | Test model 20 for MPEG immersive video
13 | Video Decoding Interface for Immersive Media | WD of ISO/IEC 23090-13 2nd edition Video decoding interface for immersive media
14 | Scene Description for MPEG Media | Draft registration of Khronos extensions 2nd edition
14 | Scene Description for MPEG Media | Technologies under consideration for ISO/IEC 23090-14 Scene Description
14 | Scene Description for MPEG Media | Procedures for standard development for ISO/IEC 23090-14 (MPEG-I Scene Description)
16 | Reference Software for Versatile Video Coding | Text of ISO/IEC CD 23090-16:202x Reference software for versatile video coding (2nd ed.)
17 | Reference Software and Conformance for OMAF | WD of Reference software and conformance for omnidirectional media format (OMAF) 2nd edition
18 | Carriage of Geometry-based Point Cloud Compression Data | Technologies under Considerations on Carriage of geometry-based point cloud compression data
18 | Carriage of Geometry-based Point Cloud Compression Data | WD of ISO/IEC 23090-18 AMD 2 Point reliability indication and other improvements
24 | Conformance and Reference Software for Scene Description for MPEG Media | Procedures for test scenarios and reference software development for MPEG-I Scene Description
24 | Conformance and Reference Software for Scene Description for MPEG Media | WD of ISO/IEC 23090-24 AMD 1 Conformance and reference software for scene description on haptics, augmented reality, avatars, interactivity and lighting
29 | Video-based dynamic mesh coding | Text of ISO/IEC CD 23090-29 Video-based mesh coding
30 | Low latency, low complexity LiDAR coding | Text of ISO/IEC 23090-30 CD Low complexity low latency LIDAR coding
31 | Haptics coding | Evaluation platform for Haptics Coding Phase 2
32 | Carriage of haptics data | Potential improvement of ISO/IEC DIS 23090-32 Carriage of haptics data
34 | Immersive audio reference software | Text of ISO/IEC CD 23090-34, Immersive audio reference software

MPEG-DASH

# | Part | Title
1 | Media Presentation Description and Segment Formats | Technologies under Consideration for DASH
1 | Media Presentation Description and Segment Formats | Potential improvement of ISO/IEC DIS 23009-1 6th edition Media presentation description and segment formats
7 | Delivery of CMAF content with DASH | Exploration on alignment of ISOBMFF/DASH/CMAF terminology, concepts and solutions

MPEG-H

# | Part | Title
12 | Image File Format | Technology under Consideration on ISO/IEC 23008-12
12 | Image File Format | Request for ISO/IEC 23008-12 3rd edition AMD 3 Low-overhead image file format
12 | Image File Format | WD of ISO/IEC 23008-12 3rd edition AMD 2 Support for tone map derivation and others
12 | Image File Format | WD of ISO/IEC 23008-12 3rd edition AMD 3 Low-overhead image file format

MPEG-4

# | Part | Title
12 | ISO base Media File Format | Technologies under Consideration for ISO/IEC 14496-12
12 | ISO base Media File Format | Potential improvement of ISO/IEC 2nd DIS 14496-12 8th edition ISO base media file format
12 | ISO base Media File Format | Exploration of carriage of depth map and alpha map as a new media type in ISOBMFF
12 | ISO base Media File Format | WD of ISO/IEC 14496-12 8th edition AMD 2 Tools for enhanced CMAF and DASH integration
14 | MP4 File Format | Technologies under Consideration for ISO/IEC 14496-14 MP4 File format
15 | Carriage of Network Abstraction Layer (NAL) Unit Structured Video in the ISO base Media File Format | Technologies under Consideration for ISO/IEC 14496-15 Carriage of NAL unit structured video in ISOBMFF
15 | Carriage of Network Abstraction Layer (NAL) Unit Structured Video in the ISO base Media File Format | WD of ISO/IEC 14496-15 7th edition AMD 2 Improvement of carriage of L-HEVC
22 | Open Font Format | Text of ISO/IEC CD 14496-22 5th edition Open font format
32 | File Format Reference | Technologies under Consideration for file format reference software and conformance
32 | File Format Reference | Potential improvement of ISO/IEC DIS 14496-32 2nd edition File format reference software and conformance
34 | Syntactic description language | Technology under Consideration on ISO/IEC 14496-34 Syntactic Description Language
34 | Syntactic description language | Draft text of ISO/IEC FDIS 14496-34 Syntactic description language

MPEG-2

# | Part | Title
1 | Systems | Text of ISO/IEC 13818-1 9th edition CDAM 1 Codec parameter clarifications and other improvements

MPEG-CICP

# | Part | Title
2 | Video | Technologies under consideration for future extensions of video CICP

MPEG-C

# | Part | Title
7 | Versatile Supplemental Enhancement Information Messages for Coded Video Bitstreams | Text of ISO/IEC 23002-7:202x/CDAM1 Additional SEI messages for VSEI
7 | Versatile Supplemental Enhancement Information Messages for Coded Video Bitstreams | Technologies under consideration for future extensions of VSEI (version 4)

MPEG-B

# | Part | Title
7 | Common Encryption in ISO Base Media File Format Files | Technologies under Consideration for ISO/IEC 23001-7 Common Encryption
10 | Carriage of Timed Metadata Metrics of Media in ISO Base Media File Format | Technologies under Consideration for ISO/IEC 23001-10 Carriage of timed metadata metrics of media in ISOBMFF
11 | Energy-Efficient Media Consumption (green metadata) | Potential improvement of ISO/IEC 23001-11 DAM 2 Energy-efficient media consumption for new display power reduction metadata
11 | Energy-Efficient Media Consumption (green metadata) | Preliminary draft of consolidated text on carriage of green metadata
16 | Derived Visual Tracks in the ISO Base Media File Format | Technologies under Consideration for ISO/IEC 23001-16 Derived visual tracks including further visual derivations
17 | Carriage of Uncompressed Video in ISOBMFF | Draft text of ISO/IEC 23001-17 DAM 2 Generic compression for samples and items in ISOBMFF
17 | Carriage of Uncompressed Video in ISOBMFF | Potential improvement of ISO/IEC 23001-17 DAM 1 High precision timing tagging

MPEG-A

# | Part | Title
19 | Common Media Application Format (CMAF) for Segmented Media | WD of ISO/IEC 23000-19 AMD 2 New Structural CMAF Brand Profile
22 | Multi-Image Application Format (MIAF) | Technology under consideration on multi-image application format
23 | Decentralized media rights application format | WD of ISO/IEC 23000-23 Decentralized media rights application format
24 | Messaging media application format | Working Draft of ISO/IEC 23000-24 Messaging media application format

Explorations

# | Part | Title
  |   | Use cases and requirements for Lenslet Video Coding
  |   | Call for interest of object wave compression
7 | Immersive Video | Overview of lenslet video coding activities
7 | Immersive Video | Common test conditions of lenslet video coding
36 | Neural Network-based Video Compression | Exploration experiment on neural network-based video coding (EE1)
36 | Neural Network-based Video Compression | Description of algorithms and software in neural network-based video coding (NNVC) version 7
41 | Enhanced compression beyond VVC capability | Exploration experiment on enhanced compression beyond VVC capability (EE2)
41 | Enhanced compression beyond VVC capability | Algorithm description of enhanced compression model 13 (ECM 13)
41 | Enhanced compression beyond VVC capability | Visual quality comparison of ECM/VTM encoding
44 | AI-based coding for graphics | Requirements for AI-based Point Cloud Coding
44 | AI-based coding for graphics | Use cases for AI-based Point Cloud Coding
44 | AI-based coding for graphics | Call for Proposals for AI-based Point Cloud Coding
46 | Audio Coding for Machines | Use Cases and Requirements on Audio Coding for Machines
47 | Metadata Definition and Carriage for Split Rendering | Exploration on Metadata Definition and Carriage for Split Rendering
48 | Indicating AI generated/altered content using the MPEG Systems technologies | Exploration on indicating AI generated/altered content using the MPEG Systems technologies
49 | Quality metrics for 2D video | Call for learning-based video codecs for study of quality assessment, version 2

MPEG-AI

# | Part | Title
2 | Video coding for machines | Common test conditions for video coding for machines
3 | Optimization of encoders and receiving systems for machine analysis of coded video content | Text of ISO/IEC CDTR 23888-3 Optimization of encoders and receiving systems for machine analysis of coded video content
4 | Feature coding for machines | Common test and training conditions for FCM

Other documents published in MPEG 146

Type | Title
AhG | List of the AHGs established at the 15th SC 29/WG 03 Meeting (MPEG 146)
Output | Assets of communication
Work plan | MPEG Roadmap after the MPEG 146 meeting
Work plan | MPEG Roadmap after the MPEG 146 meeting (extended PPT)
Administrative Matters | Request for offers to host a MPEG meeting (MPEG 154 - MPEG 157)
Administrative Matters | Meeting Notice of the 147th MPEG meeting including the 16th meeting of SC29/AG2,3,5, WG2,3,4,5,6,7,8
Administrative Matters | Draft guidelines for meeting hosts of SC29 MPEG Groups (April 2024)