JCTVC-L0226: VPS AND VPS_EXTENSION UPDATES
The VPS and vps_extension( ) syntax structures are updated with some cleanups for the HEVC Extensions in scalable video coding, multi-view coding and 3D video coding areas. In addition, four options of adding syntaxes to support mixed video sequences in various layers for the VPS extension are described.
This application claims priority under 35 U.S.C. § 119(e) of the U.S. Provisional Patent Application Ser. No. 61/748,893, filed Jan. 4, 2013 and titled, “JCTVC-LOXXX: VPS AND VPS_EXTENSION UPDATES,” which is hereby incorporated by reference in its entirety for all purposes.
FIELD OF THE INVENTION
The present invention relates to the field of video encoding. More specifically, the present invention relates to high efficiency video coding.
BACKGROUND OF THE INVENTION
The Video Parameter Set (VPS) has been added as metadata to describe the overall characteristics of coded video sequences, including the dependencies between temporal sublayers. The primary purpose of this is to enable the compatible extensibility of the standard in terms of signaling at the systems layer, e.g., when the base layer of a future extended scalable or multiview bitstream would need to be decodable by a legacy decoder, but for which additional information about the bitstream structure that is only relevant for the advanced decoder would be ignored.
SUMMARY OF THE INVENTION
The VPS and vps_extension( ) syntax structures are updated with some cleanups for the HEVC Extensions in scalable video coding, multi-view coding and 3D video coding areas. In addition, four options of adding syntaxes to support mixed video sequences in various layers for the VPS extension are described.
In one aspect, a method programmed in a non-transitory memory of a device comprises decoding content and accessing information related to the content, wherein the information comprises video parameter set data, further wherein the video parameter set data comprises mixed signaling information. A video parameter set function used in determining the video parameter set data is under a condition of a video parameter set extension flag. Byte-alignment syntaxes used in determining the video parameter set data are under a condition of a video parameter set extension flag. The video parameter set data includes a raw byte sequence payload trailing bits value. The video parameter set data is determined without using byte alignment syntaxes. The video parameter set data is determined using two syntaxes for mixed sequence signaling support. The video parameter set data is determined using a source mixed codec flag syntax parameter. The video parameter set data is determined using a source mixed video present flag syntax parameter. The video parameter set data is determined using specific application support and a source mixed video present flag syntax parameter. The method further comprises encoding the content. The device comprises a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone, a portable music player, a tablet computer, a mobile device, a video player, a video disc writer/player, a high definition disc writer/player, an ultra high definition disc writer/player, a television, a home entertainment system, or a smart watch.
In another aspect, a method programmed in a non-transitory memory of a device comprises providing content and enabling access of information related to the content, wherein the information comprises video parameter set data, further wherein the video parameter set data comprises mixed signaling information. A video parameter set function used in determining the video parameter set data is under a condition of a video parameter set extension flag. Byte-alignment syntaxes used in determining the video parameter set data are under a condition of a video parameter set extension flag. The video parameter set data includes a raw byte sequence payload trailing bits value. The video parameter set data is determined without using byte alignment syntaxes. The video parameter set data is determined using two syntaxes for mixed sequence signaling support. The video parameter set data is determined using a source mixed codec flag syntax parameter. The video parameter set data is determined using a source mixed video present flag syntax parameter. The video parameter set data is determined using specific application support and a source mixed video present flag syntax parameter. The method further comprises encoding the content. The device comprises a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone, a portable music player, a tablet computer, a mobile device, a video player, a video disc writer/player, a high definition disc writer/player, an ultra high definition disc writer/player, a television, a home entertainment system, or a smart watch.
In another aspect, an apparatus comprises a non-transitory memory for storing an application, the application for decoding content and accessing information related to the content, wherein the information comprises video parameter set data, further wherein the video parameter set data comprises mixed signaling information and a processing component coupled to the memory, the processing component configured for processing the application. A video parameter set function used in determining the video parameter set data is under a condition of a video parameter set extension flag. Byte-alignment syntaxes used in determining the video parameter set data are under a condition of a video parameter set extension flag. The video parameter set data includes a raw byte sequence payload trailing bits value. The video parameter set data is determined without using byte alignment syntaxes. The video parameter set data is determined using two syntaxes for mixed sequence signaling support. The video parameter set data is determined using a source mixed codec flag syntax parameter. The video parameter set data is determined using a source mixed video present flag syntax parameter. The video parameter set data is determined using specific application support and a source mixed video present flag syntax parameter. The apparatus further comprises encoding the content.
The Video Parameter Set (VPS) syntax structure in the current Draft International Standard (DIS) specification for High Efficiency Video Coding (HEVC) is shown below.
The if (vps_extension_flag) section above is updated below to support various HEVC extensions.
The updates include:
- 1) The vps_reserved_zero_6bits in version-1 is changed to vps_max_num_layers_minus1. However, in the vps_extension( ) syntax structure, this parameter is used as vps_max_layers_minus1, which is also consistent with the similar parameter vps_max_sub_layers_minus1 in version-1. Thus, vps_max_layers_minus1 is used in the VPS.
- 2) For HEVC extensions, the “vps_extension( )” syntax structure as defined previously is inserted in VPS under the condition of vps_extension_flag.
- 3) The vps_reserved_0xffff_16bits in version-1 is replaced by next_essential_info_byte_offset. This new syntax parameter is able to be used to locate the essential information available in the “vps_extension( )” syntax structure.
- 4) The syntax parameter next_essential_info_byte_offset also accounts for the byte-alignment related syntaxes near the top of the “vps_extension( )” syntax structure. These byte-alignment syntaxes are taken out and placed inside the VPS under the condition of vps_extension_flag, just before the “vps_extension( )” structure, for cleanliness.
- 5) The “rbsp_trailing_bits” contain a number of byte alignment bits for the VPS_rbsp( ), and such bit-counts are able to be determined as follows:
nByteAlignmentBits=((vps_extension_length+7)/8)*8−vps_extension_length, where vps_extension_length is the total number of bits in vps_extension( ). This is able to be computed easily since the vps_extension( ) syntax structure so far contains only fixed-length unsigned integer fields. A non-normative sketch of the updated VPS tail and of this computation is shown below.
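The following sketch (in C) is illustrative only: the helper name byte_alignment_bits, the example value, and the commented pseudocode summarizing items 2) through 5) are assumptions added for clarity, not specification text.

```c
/* Non-normative sketch of the updated tail of the VPS RBSP, per items 2)-5)
 * above (shown as comments), plus the byte-alignment bit-count computation
 * of item 5). Names are illustrative only.
 *
 *   if( vps_extension_flag ) {
 *       // byte-alignment syntaxes moved here from the top of vps_extension( )
 *       while( !byte_aligned( ) )
 *           vps_extension_alignment_bit_equal_to_one   // u(1)
 *       vps_extension( )
 *   }
 *   rbsp_trailing_bits( )
 */
#include <stdio.h>

/* Padding bits needed to reach the next byte boundary after vps_extension( ). */
static unsigned byte_alignment_bits(unsigned vps_extension_length_in_bits)
{
    return ((vps_extension_length_in_bits + 7u) / 8u) * 8u
           - vps_extension_length_in_bits;
}

int main(void)
{
    /* Example: a 45-bit vps_extension( ) needs 3 byte-alignment bits. */
    printf("%u\n", byte_alignment_bits(45u));  /* prints 3 */
    return 0;
}
```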
The syntax structure of vps_extension( ) is shown below:
While reviewing the above syntax structure and semantics description, a number of issues were identified; they are discussed below as cleanups or editing suggestions.
1) For the scalability_mask syntax parameter, the following parameter table is listed; the corresponding entries show the bit-locations in the register-type data of “scalability_mask,” but they are not described well.
In place of the above table for scalability_mask, the following register-type description provides a clearer understanding:
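As an illustration of the register-type view, a minimal sketch is given below; the bit-position assignments in the enum are hypothetical placeholders chosen for illustration and are not taken from the specification table.

```c
/* Sketch: scalability_mask treated as register-type data, where each bit
 * position enables one scalability dimension for the layers in the VPS.
 * The bit-to-dimension mapping below is a hypothetical example, not the
 * normative assignment. */
#include <stdbool.h>
#include <stdint.h>

enum {
    SCAL_BIT_MULTIVIEW = 0,  /* hypothetical: multi-view (view) dimension   */
    SCAL_BIT_SPATIAL   = 1,  /* hypothetical: spatial scalability dimension */
    SCAL_BIT_QUALITY   = 2,  /* hypothetical: quality (SNR) dimension       */
    SCAL_BIT_DEPTH     = 3   /* hypothetical: depth (3D) dimension          */
};

/* True if the dimension signalled at 'bit' is present in scalability_mask. */
static bool scalability_dimension_present(uint16_t scalability_mask, unsigned bit)
{
    return ((scalability_mask >> bit) & 1u) != 0u;
}
```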
2) Also, the modified syntax structure of vps_extension( ) shown below indicates some helpful cleanups.
3) As a correction, this loop is “for (i=0; i<=vps_max_layers_minus1; i++){ . . . }” for both cases, except for the “profile_tier_level( . . . )” loop.
4) The semantics description for the syntax parameter “dimension_id[i][j]” is also not available with respect to “scalability dimension” and “dimension_id_len_minus1[j].” An exemplary table is shown below to illustrate the relationship among these parameters:
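In addition to the exemplary table, a minimal, non-normative parsing sketch of this relationship is given below; read_bits( ) is a stub standing in for a bitstream reader, and the array bounds are illustrative assumptions.

```c
/* Sketch: for each layer i (using the corrected bound i <= vps_max_layers_minus1)
 * and each signalled scalability dimension j, dimension_id[i][j] is read with
 * dimension_id_len_minus1[j] + 1 bits and gives layer i's identifier in
 * scalability dimension j. */
#include <stdint.h>

#define MAX_LAYERS      64   /* illustrative bound */
#define MAX_DIMENSIONS  16   /* illustrative bound */

static uint32_t read_bits(unsigned n) { (void)n; return 0u; }  /* stub reader */

static void parse_dimension_ids(unsigned vps_max_layers_minus1,
                                unsigned num_scalability_dimensions,
                                const uint8_t dimension_id_len_minus1[MAX_DIMENSIONS],
                                uint32_t dimension_id[MAX_LAYERS][MAX_DIMENSIONS])
{
    for (unsigned i = 0; i <= vps_max_layers_minus1; i++)        /* corrected loop bound */
        for (unsigned j = 0; j < num_scalability_dimensions; j++)
            dimension_id[i][j] = read_bits(dimension_id_len_minus1[j] + 1u);
}
```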
The VPS and vps_extension( ) syntax structures are updated for the HEVC Extensions in scalable video coding, multi-view coding and 3D video coding areas.
The VPS contains metadata to describe the overall characteristics of coded video and enables standards-extension compatibility in terms of systems-layer signaling, so that a legacy decoder is able to ignore additional information about the extension bitstream structure. The current VPS syntax structure and its extension contain information for multi-layered video sequences, where a layer is able to be a base layer (for HEVC and its extensions), a scalable enhancement layer (spatial or quality) for SHVC, or a view layer (texture view or depth view) for multi-view coding of HEVC extension (MHVC). All such layers are also able to have their own (temporal) sublayers. Each video layer is able to depend upon its neighboring lower layers, or it is able to be independent with no inter-layer prediction.
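A minimal sketch of this multi-layer model is shown below; the struct and field names are assumptions chosen for illustration and do not correspond to specification syntax elements.

```c
/* Illustrative model of the layers a VPS describes: each layer is a base
 * layer, a scalable (spatial/quality) enhancement layer, or a texture/depth
 * view layer; each may carry temporal sublayers; and each either depends on
 * neighboring lower layers or is independent (no inter-layer prediction). */
#include <stdbool.h>
#include <stdint.h>

typedef enum {
    LAYER_BASE,           /* HEVC base layer                     */
    LAYER_SCALABLE,       /* SHVC spatial or quality enhancement */
    LAYER_TEXTURE_VIEW,   /* multi-view texture view             */
    LAYER_DEPTH_VIEW      /* multi-view / 3D depth view          */
} layer_type_t;

typedef struct {
    layer_type_t type;
    uint8_t      num_temporal_sublayers;   /* temporal sublayers of this layer   */
    bool         independent;              /* true: no inter-layer prediction    */
    uint8_t      num_reference_layers;     /* lower layers this layer depends on */
    uint8_t      reference_layer_ids[8];   /* illustrative fixed-size bound      */
} layer_info_t;
```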
The motivation behind “mixed sequences signaling support” is to develop new layer-specific properties in the VPS extension syntax structure, as suggested in “JCTVC-K_notes.” The layered syntax agreement for independent layers in the VPS is able to be suitably exploited to support a variety of mixed contents (source-types, coding types and others) in the high level syntax structure for HEVC extensions.
Some possible application examples in mixed broadcasting or network transitions for a VPS with two layers are:
Frame structured video in layer-1 and field structured video in layer-2: each video layer has its respective SPS and more. Legacy mono-view (2-D) decoders are able to use either of these two layers for their base-layer application.
Frame structured video (2-D, mono-view) in layer-1 and frame-compatible (frame-packing) video (3D, stereo-view) in layer-2: a legacy mono-view (2-D) decoder uses the layer-1 video, while an advanced stereo-view (3D) frame-compatible video decoder is able to use layer-2 for 3D stereo-view video application.
The combination of mixed video types will be present in the video bitstreams to support two such decoders (legacy and advanced).
Many such combinations of coded video content with varied source-types and coding types are able to co-exist in the current and future applications where either legacy or advanced decoders will sort out the respective bitstream for decoding.
The inherent layer-based descriptions of VPS are able to be extended to support such applications, at least for the systems level solutions (ad-insertions, splicing and more).
There are four possible options for adding syntax for mixed sequences signaling in the VPS extension:
Option 1: two syntaxes for mixed sequences signaling support
Option 2: Option 1 plus source_mixed_codec_flag syntax parameter
Option 3: Option 2 plus mixed_video_present_flag syntax parameter
Option 4: specific applications support plus the mixed_video_present_flag syntax parameter, used as a condition for the added syntax/semantics of this option.
Option 1
The semantics descriptions for these two added syntax parameters are shown in the tables below:
Option 2
The semantics description for the added new syntax is the same as that of avc_base_codec_flag, as follows:
Option 3
The mixed_video_present_flag syntax parameter equal to 1 indicates the presence of mixed video in the VPS layers. If it is 0, no such mixed sequences are present in the layers.
The presence of this flag is able to save some added bits in the case of “no mixed video-type.”
Option 4
The mixed_video_present_flag syntax parameter equal to 1 indicates the presence of mixed video in the VPS layers. If it is 0, no such mixed sequences are present in the layers.
In case of “mixed_video_present_flag=1,” two new syntaxes are present for each layer; these two syntax parameters address two specific application examples with mixed video sources (a parsing sketch follows the list below):
Frame/field structured scan-types for 2D video, and
Frame-packing arrangement (FPA) video for 3D video.
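A minimal, non-normative parsing sketch of Option 4 is given below; the per-layer element names source_scan_type_idc and source_fpa_type_idc, their bit widths, and the read_bits( ) stub are assumptions chosen for illustration only.

```c
/* Sketch of the Option 4 gating: when mixed_video_present_flag is 1, two
 * per-layer syntax elements are read, one describing the frame/field
 * scan-type for 2D video and one describing the frame-packing arrangement
 * (FPA) for 3D video; when the flag is 0, no extra bits are spent
 * (the bit saving noted for Option 3). */
#include <stdint.h>

static uint32_t read_bits(unsigned n) { (void)n; return 0u; }  /* stub reader */

static void parse_mixed_video_signaling(unsigned vps_max_layers_minus1)
{
    uint32_t mixed_video_present_flag = read_bits(1);
    if (mixed_video_present_flag) {
        for (unsigned i = 0; i <= vps_max_layers_minus1; i++) {
            uint32_t source_scan_type_idc = read_bits(2);  /* placeholder: frame/field scan-type */
            uint32_t source_fpa_type_idc  = read_bits(2);  /* placeholder: frame-packing (FPA) type */
            (void)source_scan_type_idc;
            (void)source_fpa_type_idc;
        }
    }
}
```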
The semantics descriptions for these two new syntaxes are explained in the tables below:
The syntax modifications described herein extend the layer-based properties of VPS to address various emerging applications in the System layers (content editing, splicing, ad-insertion) as well as in the overall decoding path for better communication and system integration purposes.
In some embodiments, the modified VPS application(s) 230 include several applications and/or modules. In some embodiments, modules include one or more sub-modules as well. In some embodiments, fewer or additional modules are able to be included.
Examples of suitable computing devices include a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone, a portable music player, a tablet computer, a mobile device, a video player, a video disc writer/player (e.g., DVD writer/player, high definition disc writer/player, ultra high definition disc writer/player), a television, a home entertainment system, smart jewelry (e.g., smart watch) or any other suitable computing device.
To utilize the modified VPS method, devices are able to access parameters in the VPS and its extension for scalable video coding, multi-view coding, 3D video coding and mixed video sequences. The modified VPS method is automatically used when performing video processing or at other times. The modified VPS method is able to be implemented automatically without user involvement.
In operation, the VPS and vps_extension( ) syntax structures are updated with some cleanups for the HEVC Extensions in scalable video coding, multi-view coding and 3D video coding areas. In addition, four options of adding syntaxes to support mixed video sequences in various layers for the VPS extension are described. The VPS is generated using the modified syntax structure.
U.S. patent application No. Atty Docket No. SONY-57100, titled “VIDEO PARAMETER SET (VPS) SYNTAX RE-ORDERING FOR EASY ACCESS OF EXTENSION PARAMETERS” and U.S. patent application No. Atty Docket No. SONY-57300, titled “JCTVC-L0227: VPS_EXTENSION WITH UPDATES OF PROFILE-TIER-LEVEL SYNTAX STRUCTURE” are hereby incorporated by reference in their entireties for all purposes.
Some Embodiments of JCTVC-L0226: VPS and VPS_Extension Updates
1. A method programmed in a non-transitory memory of a device comprising:
- 1. decoding content; and
- 2. accessing information related to the content, wherein the information comprises video parameter set data, further wherein the video parameter set data comprises mixed signaling information.
2. The method of clause 1 wherein a video parameter set function used in determining the video parameter set data is under a condition of a video parameter set extension flag.
3. The method of clause 1 wherein byte-alignment syntaxes used in determining the video parameter set data are under a condition of a video parameter set extension flag.
4. The method of clause 1 wherein the video parameter set data includes a raw byte sequence payload trailing bits value.
5. The method of clause 1 wherein the video parameter set data is determined without using byte alignment syntaxes.
6. The method of clause 1 wherein the video parameter set data is determined using two syntaxes for mixed sequence signaling support.
7. The method of clause 6 wherein the video parameter set data is determined using a source mixed codec flag syntax parameter.
8. The method of clause 7 wherein the video parameter set data is determined using a source mixed video present flag syntax parameter.
9. The method of clause 1 wherein the video parameter set data is determined using specific application support and a source mixed video present flag syntax parameter.
10. The method of clause 1 further comprising encoding the content.
11. The method of clause 1 wherein the device comprises a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone, a portable music player, a tablet computer, a mobile device, a video player, a video disc writer/player, a high definition disc writer/player, an ultra high definition disc writer/player, a television, a home entertainment system, or a smart watch.
12. A method programmed in a non-transitory memory of a device comprising:
- 1. providing content; and
- 2. enabling access of information related to the content, wherein the information comprises video parameter set data, further wherein the video parameter set data comprises mixed signaling information.
13. The method of clause 12 wherein a video parameter set function used in determining the video parameter set data is under a condition of a video parameter set extension flag.
14. The method of clause 12 wherein byte-alignment syntaxes used in determining the video parameter set data are under a condition of a video parameter set extension flag.
15. The method of clause 12 wherein the video parameter set data includes a raw byte sequence payload trailing bits value.
16. The method of clause 12 wherein the video parameter set data is determined without using byte alignment syntaxes.
17. The method of clause 12 wherein the video parameter set data is determined using two syntaxes for mixed sequence signaling support.
18. The method of clause 17 wherein the video parameter set data is determined using a source mixed codec flag syntax parameter.
19. The method of clause 18 wherein the video parameter set data is determined using a source mixed video present flag syntax parameter.
20. The method of clause 12 wherein the video parameter set data is determined using specific application support and a source mixed video present flag syntax parameter.
21. The method of clause 12 further comprising encoding the content.
22. The method of clause 12 wherein the device comprises a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone, a portable music player, a tablet computer, a mobile device, a video player, a video disc writer/player, a high definition disc writer/player, an ultra high definition disc writer/player, a television, a home entertainment system, or a smart watch.
23. An apparatus comprising:
- 1. a non-transitory memory for storing an application, the application for:
- 1. decoding content; and
- 2. accessing information related to the content, wherein the information comprises video parameter set data, further wherein the video parameter set data comprises mixed signaling information; and
- 2. a processing component coupled to the memory, the processing component configured for processing the application.
24. The apparatus of clause 23 wherein a video parameter set function used in determining the video parameter set data is under a condition of a video parameter set extension flag.
25. The apparatus of clause 23 wherein byte-alignment syntaxes used in determining the video parameter set data are under a condition of a video parameter set extension flag.
26. The apparatus of clause 23 wherein the video parameter set data includes a raw byte sequence payload trailing bits value.
27. The apparatus of clause 23 wherein the video parameter set data is determined without using byte alignment syntaxes.
28. The apparatus of clause 23 wherein the video parameter set data is determined using two syntaxes for mixed sequence signaling support.
29. The apparatus of clause 28 wherein the video parameter set data is determined using a source mixed codec flag syntax parameter.
30. The apparatus of clause 29 wherein the video parameter set data is determined using a source mixed video present flag syntax parameter.
31. The apparatus of clause 23 wherein the video parameter set data is determined using specific application support and a source mixed video present flag syntax parameter.
32. The apparatus of clause 23 further comprising encoding the content.
The present invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of principles of construction and operation of the invention. Such reference herein to specific embodiments and details thereof is not intended to limit the scope of the claims appended hereto. It will be readily apparent to one skilled in the art that other various modifications may be made in the embodiment chosen for illustration without departing from the spirit and scope of the invention as defined by the claims.
Claims
1-32. (canceled)
33. An image processing method, comprising:
- in an image processing apparatus comprising at least one processor: generating a bitstream by encoding image data, wherein the bitstream includes a video parameter set (VPS) syntax, and wherein a byte-alignment syntax in the VPS syntax is under a condition of a VPS extension flag; and transmitting the generated bitstream.
34. The image processing method of claim 33,
- wherein the byte-alignment syntax is included in the VPS syntax based on a determination that the VPS extension flag is true.
35. The image processing method of claim 33,
- wherein the bitstream is generated with a coding tree unit as a unit.
36. The image processing method of claim 33, further comprising filtering reference image data for motion compensation as a deblock filter.
37. The image processing method of claim 36, further comprising filtering reference image data applied to the deblock filter as a sample adaptive offset (SAO) processing.
38. An image processing apparatus, comprising:
- at least one processor configured to: generate a bitstream by encoding image data, wherein the bitstream includes a video parameter set (VPS) syntax, and wherein a byte-alignment syntax in the VPS syntax is under a condition of a VPS extension flag; and transmit the generated bitstream.
39. The image processing apparatus of claim 38,
- wherein the byte-alignment syntax is included in the VPS syntax based on a determination that the VPS extension flag is true.
40. The image processing apparatus of claim 38, wherein the generation of the bitstream is with a coding tree unit as a unit.
41. The image processing apparatus of claim 38, wherein the at least one processor is further configured to filter reference image data for motion compensation as a deblock filter.
42. The image processing apparatus of claim 41, wherein the at least one processor is further configured to filter reference image data applied to the deblock filter as a sample adaptive offset (SAO) processing.
Type: Application
Filed: Jan 3, 2019
Publication Date: May 9, 2019
Inventors: MUNSI HAQUE (SAN JOSE, CA), ALI J. TABATABAI (CUPERTINO, CA)
Application Number: 16/238,751