Method for coding and decoding a digital video, and related coding and decoding devices
A method is described for generating a video stream starting from a plurality of sequences of 2D and/or 3D video frames, wherein a video stream generator composes video frames coming from N different sources (S1, S2, S3, SN) into a container video frame and generates a single output video stream of container video frames, which is coded by an encoder, wherein said encoder enters into the output video stream a signalling adapted to indicate the structure of the container video frames. A corresponding method for regenerating the video stream is also described.
This application is a Reissue application of U.S. patent application Ser. No. 14/435,408, filed Apr. 13, 2015, now U.S. Pat. No. 9,961,324, which issued on May 1, 2018, which is a nationalization of PCT Application No. PCT/IB2013/059349, filed Oct. 14, 2013, and claims priority to Italian Application No. TO2012A0901, filed Oct. 15, 2012, which are incorporated herein by specific reference.
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a method for coding and decoding a digital video, in particular a method for coding a video stream into independent partitions, and to a corresponding method for independently decoding one or more partitions making up the video stream.
The present invention also relates to a device for coding a video stream into independent partitions and to a device for independently decoding one or more of said partitions.
2. Present State of the Art

The coding and distribution of independent video streams representing different views of the same event or of a mosaic of multimedia services (multiview video, Free-to-View Video) have long been known. Distributing such multiview videos to users typically requires coding a number of independent video streams matching the number of generated views.
A coding and decoding method of this kind is described, for example, in document “ISO/IEC 13818-1:2000 (E)—Information technology—Generic coding of moving pictures and associated audio information: Systems”, or in document “ISO/IEC 14496-10 Information technology—Coding of audio-visual objects Part 10: Advanced Video Coding” and in the corresponding document “ITU-T H.264—Advanced video coding for generic audiovisual services”, hereafter referred to as the H.264/AVC specification. The coding methods currently in use have several drawbacks: they require a number of video encoders equal to the number of video components to be distributed; the distributed video streams are difficult to synchronize with one another and with the corresponding audio streams; and transporting the video streams requires additional bandwidth, because similar signalling elements required for decoding each independent stream must be replicated. On the other hand, the corresponding decoding methods require the use of multiple decoders for decoding and displaying two or more views being transmitted, leading to higher complexity and cost of the user terminals' architecture.
It is also known that a single video stream can be used for distributing multiple independent views, as in the case, for example, of the so-called “mosaic” services, wherein a single frame is constituted by n frames extracted from independent videos and composed into one image, or of the two component videos of a 3D stereoscopic pair composed into a single frame (the so-called “Frame Packing Arrangement” or “frame compatible format”). Such composite videos are typically compressed by using any one of the available compression techniques, such as, for example, MPEG-2, H.264/AVC or HEVC. These compression techniques provide no tools allowing a specification-compliant decoder to independently decode one or more of the component video streams. Methods have been developed which allow a 2D decoder to extract from the decoded video only one of the two component views of the stereoscopic pair, but these methods rely on a suitable signalling that allows the decoder, once the entire container frame has been decoded, to crop and display a frame area containing only one of the two views.
It is currently impossible to code the video in such a way as to enable a decoder (upon user selection or due to limited computational or storage resources) to decode only a chosen subset of the whole frame. For example, it is not possible to code a video containing one of the above-mentioned Frame Packing Arrangements in a manner such that a 2D decoder, which is not interested in both images making up the stereoscopic pair, can decode and display only the region corresponding to one of the two views (e.g., the left one).
This implies wasting computational and energy resources. It should be noted that this problem is especially felt in the field of mobile terminals, where any undue utilization of computational resources can drastically shorten the battery life.
Furthermore, a decoder may be used in a device such as a set-top box or a smart gateway, to which one or more displays, not necessarily having homogeneous characteristics, can be connected. Let us consider, for example, the case of a smart gateway receiving a coded video stream from a distribution network (e.g., an IP network or a broadcasting network) or reading the stream from a storage device. A plurality of displays with different characteristics (e.g., an HD display or a tablet) can be connected to said smart gateway through cables and/or wireless connections. In such a case, the decoder should be able to adapt the decoded video to the characteristics of the display(s) to be served: if just one display with lower resolution than the decoded video is connected to the decoder, the latter should be able to decode only that part of the video which is most relevant for the terminal involved.
Besides, the current techniques only allow automatic identification of one of the component video streams (as in the above stereoscopic pair example), so that it is impossible to expressly indicate to the decoder the presence of one or more additional component video streams. A “default” choice is thus imposed on the decoder with fewer resources, and the presence of alternative contents cannot be indicated.
Moreover, coding a single video stream not only allows the utilization of computational resources to be scaled during the decoding process, but also allows a single coded stream to serve, according to different service models, terminals characterized by different availability of storage and computational resources. For example, it is conceivable to code the composition of 4 HD videos (1920×1080 pixels) as a single 4K (3840×2160 pixels) video stream: of such a video, a decoder with limited computational resources might decode a subset containing just one of the HD components; alternatively, a more powerful decoder might decode the entire 4K video and, for example, display the whole mosaic of contents.
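The arithmetic behind this mosaic example can be checked directly. The following fragment is a trivial illustrative sketch, not part of any coding specification:

```python
# Quick check of the mosaic example above: a 4K container frame
# (3840x2160) holds exactly four full-HD (1920x1080) component videos
# arranged in a 2x2 grid.
container_w, container_h = 3840, 2160
hd_w, hd_h = 1920, 1080

cols = container_w // hd_w   # 2 columns of HD tiles
rows = container_h // hd_h   # 2 rows of HD tiles
num_hd_components = cols * rows
print(num_hd_components)     # 4
```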
SUMMARY OF THE INVENTION

It is one object of the present invention to define a coding method that allows coding into a single container video stream one or more different component video streams, so that at least one of the latter can be decoded independently of the others.
It is another object of the present invention to specify a decoding method which allows one or more component video streams to be independently decoded from a single container video stream through the use of a single decoder.
It is a further object of the present invention to provide an encoder which codes a container video stream made up of multiple component video streams, so as to allow one or more component video streams to be independently decoded.
It is yet another object of the present invention to provide a decoder that independently decodes at least one of a plurality of component video streams coded as a single container video stream.
These and further aspects of the present invention will become more apparent from the following description, which will illustrate some embodiments thereof with reference to the annexed drawings, wherein:
The existing video coding standards, as well as those currently under definition, offer the possibility of partitioning the images that constitute digital video streams for the purpose of optimizing the coding and decoding processes. As shown in
Instead,
The HEVC specification has defined tiles in such a way as to allow the images that constitute the video stream to be segmented into regions and to make the decoding thereof mutually independent. The decoding process, however, even when parallelized, is still carried out for the entire image only, and the segments cannot be used independently of one another.
As mentioned above, it would be useful to be able to partition the video stream in such a way that different terminals can decide, automatically or upon instructions received from the user, which parts of the video should be decoded and sent to the display for visualization.
The tile structure described in the HEVC specification is not sufficient to allow a decoder to properly recognize and decode the content transported by the container video. This problem can be solved by entering a suitable level of signalling describing which content is being transported in each one of the independently decodable regions and how to proceed in order to properly decode and display it.
At least two different scenarios can be foreseen. In the first one, it is necessary to indicate the association between the single contents and at least one of the tiles into which the image has been disassembled, and its possible reassembly into a coherent video stream (for example, as shown in
The proposed solution provides for entering a descriptor which indicates, for at least one of the tiles, one or more specific characteristics: for example, it must be possible to signal whether the content is 2D or, in the case of a stereoscopic content, which frame packing arrangement it uses. Furthermore, it is desirable to indicate any “relationships” (joint decoding and/or display) between tiles, the view identifier (to be used, for example, in the case of multiview contents), and a message stating whether the view in question is the right view or the left view of a stereoscopic pair, or a depth map. By way of example, the solution is illustrated as pseudo code in the table of
Frame_packing_arrangement_type is an index that might correspond, for example, to the values commonly used in the MPEG2, H.264/AVC or SMPTE specifications, which catalogue the currently known and used stereoscopic video formats.
Tile_content_relationship_bitmask is a bitmask that unambiguously describes, for each tile, its association with the other tiles into which the coded video stream has been subdivided.
Content_interpretation_type provides the information necessary for interpreting the content of each tile. An example is specified in the table of
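The descriptor fields discussed above can be pictured as a simple per-tile record. The following Python sketch is purely illustrative: the field names follow the pseudo code (frame_packing_arrangement_type, tile_content_relationship_bitmask, view_id, content_interpretation_type), but the class itself and its helper method are assumptions of this example, not the normative HEVC syntax:

```python
from dataclasses import dataclass

@dataclass
class TileDescriptor:
    # Field names mirror the signalling pseudo code; the values follow the
    # conventions used in the examples (e.g. frame packing type 3 = side-by-side,
    # content interpretation 2 = left view, 1 = right view).
    tile_id: int
    frame_packing_arrangement_type: int
    tile_content_relationship_bitmask: int  # one bit per tile, leftmost bit = tile 0
    view_id: int
    content_interpretation_type: int

    def related_tiles(self, num_tiles: int) -> list[int]:
        """Return the tiles whose bit is set in the relationship bitmask."""
        return [t for t in range(num_tiles)
                if (self.tile_content_relationship_bitmask >> (num_tiles - 1 - t)) & 1]

# Tile 0 of a two-tile stereoscopic stream: bitmask 11 relates tiles 0 and 1.
d0 = TileDescriptor(0, 3, 0b11, 0, 2)
print(d0.related_tiles(2))   # [0, 1]
```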
With reference to the above case, wherein a stereoscopic video is coded as two tiles, in order to enable a 2D decoder to decode just one view, the following information will be associated with tile 0:
- frame_packing_arrangement_type[0]=3
- tile_content_relationship_bitmask[0]=11
- view_id[0]=0
- content_interpretation_type[0]=2
It should be noted that this type of signalling might be used together with or instead of other tools, such as, for example, the cropping rectangle. The cropping rectangle technique, according to which the part of the decoded frame inside a rectangle signalled by means of suitable metadata must be cropped, is already commonly used for making “2D compatible” a stereoscopic video stream coded in one of the frame packing arrangements that require the stereoscopic pair to be entered into a single frame.
Assuming, for example, that the video stream has been divided into four tiles, as shown in
- frame_packing_arrangement_type[0]=3
- frame_packing_arrangement_type[1]=3
- frame_packing_arrangement_type[2]=3
- frame_packing_arrangement_type[3]=3
- tile_content_relationship_bitmask[0]=1100
- tile_content_relationship_bitmask[1]=1100
- tile_content_relationship_bitmask[2]=0011
- tile_content_relationship_bitmask[3]=0011
- view_id[0]=0
- view_id[1]=0
- view_id[2]=1
- view_id[3]=1
- content_interpretation_type[0]=2
- content_interpretation_type[1]=1
- content_interpretation_type[2]=2
- content_interpretation_type[3]=1
This signalling indicates to the decoder that tiles 0 and 1 belong to the same 3D video content (tile_content_relationship_bitmask=1100) in side-by-side format (frame_packing_arrangement_type=3). The value of tile_content_relationship_bitmask allows the decoder to know that the two views (which belong to the same stereoscopic pair, because view_id=0 for both tiles) are contained in different tiles (and hence, in this case, at full resolution). Content_interpretation_type allows the decoder to understand that tile 0 corresponds to the left view, while tile 1 corresponds to the right view.
The same considerations apply to tiles 2 and 3.
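The way a decoder can exploit the bitmask to recover the two stereoscopic pairs can be sketched as follows. This is an illustrative Python fragment using the signalling values of the four-tile example above; the grouping function is an assumption of this example, not part of any specification:

```python
def group_tiles_by_bitmask(bitmasks):
    """Map each distinct relationship bitmask to the tiles that carry it."""
    groups = {}
    for tile, mask in enumerate(bitmasks):
        groups.setdefault(mask, []).append(tile)
    return groups

# Signalling of the four-tile example: tiles 0-1 form one stereo pair,
# tiles 2-3 the other; content_interpretation_type 2 = left, 1 = right.
bitmasks = [0b1100, 0b1100, 0b0011, 0b0011]
content_interpretation_type = [2, 1, 2, 1]

groups = group_tiles_by_bitmask(bitmasks)
print(groups)        # {12: [0, 1], 3: [2, 3]}

# A 2D decoder would keep only the left view of the chosen pair:
left_tile = next(t for t in groups[0b1100] if content_interpretation_type[t] == 2)
print(left_tile)     # 0
```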
The arrangement of
- frame_packing_arrangement_type[0]=3
- frame_packing_arrangement_type[1]=3
- frame_packing_arrangement_type[2]=6
- frame_packing_arrangement_type[3]=6
- tile_content_relationship_bitmask[0]=1111
- tile_content_relationship_bitmask[1]=1111
- tile_content_relationship_bitmask[2]=1010
- tile_content_relationship_bitmask[3]=0101
- view_id[0]=1
- view_id[1]=1
- content_interpretation_type[0]=2
- content_interpretation_type[1]=1
- content_interpretation_type[2]=5
- content_interpretation_type[3]=5
Unlike
In the syntax of the HEVC specification, this type of signalling could easily be coded as an SEI (Supplemental Enhancement Information) message: application information which, without altering the basic coding and decoding mechanisms, allows the construction of additional functions concerning not only the decoding, but also the subsequent visualization process. As an alternative, the same signalling could be entered into the Picture Parameter Set (PPS), a syntax element that contains information necessary for decoding a dataset corresponding to a frame. The table of
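As a rough illustration of how such a descriptor could travel as an opaque payload, the sketch below packs one entry per tile into a byte string. This is only a toy layout: the actual HEVC SEI syntax (payload type codes, variable-length ue(v) fields, emulation prevention) differs and is defined by the specification, not by this example:

```python
import struct

def pack_tile_entry(tile_id, fpa_type, bitmask, view_id, cit):
    """Pack one per-tile descriptor entry as 6 big-endian bytes (toy format)."""
    return struct.pack(">BBHBB", tile_id, fpa_type, bitmask, view_id, cit)

# Two-tile stereoscopic example: side-by-side (3), bitmask 11, left/right views.
payload = b"".join([
    pack_tile_entry(0, 3, 0b11, 0, 2),
    pack_tile_entry(1, 3, 0b11, 0, 1),
])
print(len(payload))   # 12 (two 6-byte entries)
```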
A further generalization might provide for entering the signalling into the Sequence Parameter Set (SPS): a syntax element that contains information necessary for decoding a dataset corresponding to a consecutive sequence of frames. The table of
The latter are described by the same signalling used for representing the content of
An encoder receives the container video stream, constructs the tiles in such a way as to map them onto the structure of the single component video streams, generates the signalling describing the tiles, the structure of the component video streams and their relationships, and compresses the container video stream. If the “source composer” does not automatically generate the signalling that describes the component video streams, the encoder can be programmed manually by the operator. The compressed video stream outputted by the encoder can then be decoded in different ways, i.e., by selecting independent parts depending on the functional characteristics and/or computational resources of the decoder and/or of the display it is connected to. The audio of each component video stream can be transported in accordance with the specifications of the System Layer part adopted for transportation.
A 2D decoder analyzes the bitstream, finds the signalling of the two tiles containing the two views, and decides to decode a single tile, displaying only one image compatible with a 2D display. A 3D decoder, instead, will decode both tiles and will proceed with stereoscopic visualization on a 3D display.
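The decision just described can be condensed into a small selection routine. This Python fragment is a sketch under the same assumptions as the examples above (content_interpretation_type 2 = left view, 1 = right view); a real decoder would derive these values from the parsed signalling:

```python
def select_tiles(descriptors, stereo_capable):
    """Pick which tiles to decode: both views for a 3D decoder,
    only the left view for a 2D decoder."""
    if stereo_capable:
        return [d["tile_id"] for d in descriptors]
    return [d["tile_id"] for d in descriptors
            if d["content_interpretation_type"] == 2]   # left view only

descriptors = [
    {"tile_id": 0, "view_id": 0, "content_interpretation_type": 2},  # left view
    {"tile_id": 1, "view_id": 0, "content_interpretation_type": 1},  # right view
]

print(select_tiles(descriptors, stereo_capable=False))  # [0]    -> 2D display
print(select_tiles(descriptors, stereo_capable=True))   # [0, 1] -> 3D display
```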
Similarly,
Therefore, the present invention relates to a method for generating a video stream starting from a plurality of sequences of 2D and/or 3D video frames, wherein a video stream generator composes video frames coming from N different sources S1, S2, S3, SN into a container video frame. Subsequently, an encoder codes the single output video stream of container video frames by entering into it a signalling adapted to indicate the structure of the container video frames.
The invention also relates to a method for regenerating a video stream comprising a sequence of container frames, each comprising a plurality of 2D and/or 3D video frames coming from N different sources S1, S2, S3, SN. A decoder reads a signalling adapted to indicate the structure of the container video frames and regenerates one or more video streams by extracting at least one, or a subset, of the plurality of video frames, decoding only those portions of the container video frames which comprise the video frames selected for display.
Claims
1. A method for generating a digital video stream in a video stream generator comprising a video stream receiver unit and a video encoder, wherein the video stream generator generates a container video stream containing a plurality of independently encoded regions, the method comprising:
- receiving by the video stream receiver unit three or more component video streams from a plurality of video sources;
- mapping by the video encoder the three or more component video streams to three or more independently decodable regions;
- entering by the video encoder a signal indicating a presence of three or more independently decodable regions;
- entering by the video encoder a signalling indicating an association between each of the three or more component video streams and each of the three or more independently decodable regions, whereby any of the three or more component video streams can be associated with any of the three or more independently decodable regions in an independent way; and
- outputting a digital video stream comprising the signal, the signalling and the container video stream.
2. The method according to claim 1, wherein the signalling entered by the video encoder enters a descriptor into the digital video stream indicating a type of content of the three or more component video streams.
3. The method according to claim 1, wherein each one of the three or more independently decodable regions is coded by the video encoder as a tile.
4. The method according to claim 1, wherein the video encoder employs the coding technique H.264/AVC or HEVC.
5. The method according to claim 2, wherein the signalling entered by the video encoder into the digital video stream that indicates the association between each of the three or more component video streams and each of the three or more independently decodable regions and that includes the descriptor indicating the type of content of the three or more component video streams is an SEI message.
6. The method according to claim 2, wherein the signalling that indicates the association between each of the three or more component video streams and each of the three or more independently decodable regions and that includes the descriptor indicating the type of content of the three or more component video streams is entered by the video encoder into an SPS signalling or into a PPS signalling.
7. The method according to claim 1, wherein the signalling entered by the video encoder into the digital video stream indicating the association between each of the three or more component video streams and each of the three or more independently decodable regions includes a bitmask.
8. The method according to claim 1, wherein the three or more component video streams represent one or more independent video streams.
9. The method according to claim 8, wherein the three or more component video streams include one or more of the following formats:
- one or more stereoscopic video pairs;
- video streams and depth maps;
- one or more video streams in the frame packing arrangement format;
- mosaic of independent videos.
10. The method according to claim 2, wherein the descriptor comprises one or more metadata describing:
- Frame packing arrangement;
- Content interpretation type; and
- View ID.
11. A method for decoding an encoded digital video stream including three or more component video streams in a video decoder comprising a signalling decoder and a video data decoder, the method comprising:
- reading by the signalling decoder a signal indicating a presence of three or more independently decodable regions;
- reading by the signalling decoder a signalling comprised in the digital video stream indicating an association between each of the three or more component video streams and each of the three or more independently decodable regions, wherein the three or more component video streams are originated by a plurality of video sources and wherein any of the three or more component video streams can be associated with any of the three or more independently decodable regions in an independent way;
- reading by the signalling decoder a descriptor comprised in the digital video stream indicating a type of content of each one of the three or more independently decodable regions;
- selecting for decoding one or more of the three or more independently decodable regions indicated by the signalling or by the descriptor, and
- decoding the selected one or more independently decodable regions by the video decoder and outputting a decoded video stream obtained by the video data decoder for displaying.
12. The method according to claim 11, wherein the video decoder selects the one or more of the three or more independently decodable regions based on an evaluation of computational resources of the video decoder.
13. The method according to claim 11, wherein the selected one or more of the three or more independently decodable regions are made available for display on a single display.
14. The method according to claim 11, wherein the selected one or more of the three or more independently decodable regions are made available for display on multiple heterogeneous devices.
15. The method according to claim 11, wherein the selected one or more of the three or more independently decodable regions to be decoded is determined based on an automatic negotiation with a display device associated to the video decoder and configured to display the video data decoded by the video decoder.
16. The method according to claim 11, wherein the selected one or more of the three or more independently decodable regions to be decoded is determined based on a manual selection of a display format on a display device associated to the video decoder by a user performed by means of a remote control device associated to the video decoder or to the display device.
17. A decoding device for decoding a digital video stream including three or more component video streams and configured to read a signal indicating a presence of three or more independently decodable regions, the decoding device comprising:
- a signalling decoder configured to read a signalling comprised in the digital video stream indicating an association between the three or more component video streams and the three or more independently decodable regions and configured to read a descriptor comprised in the digital video stream indicating a type of content of each one of the three or more independently decodable regions, wherein the three or more component video streams are originated by a plurality of video sources and wherein any of the three or more component video streams can be associated with any of the three or more independently decodable regions in an independent way;
- a video data decoder configured to decode video data comprised in the digital video stream according to a decoding strategy; and
- a selecting unit configured to select for decoding by the video data decoder a set of the three or more independently decodable regions indicated by the signalling or by the descriptor,
- wherein the video data decoder decodes the set of the three or more independently decodable regions selected by the selecting unit, and outputs a decoded digital video stream comprising the selected set of the three or more independently decodable regions.
18. The decoding device of claim 17, wherein the selecting unit is further configured to automatically or manually select for display on a display device associated to the decoding device the selected set of the three or more independently decodable regions decoded by the decoding device.
19. The decoding device according to claim 17, wherein the selecting unit is further configured to select, by means of a negotiation process with a display device associated to the decoding device, a display format comprising the selected set of the three or more independently decodable regions decoded by the decoding device.
U.S. Patent Documents

20080303893 | December 11, 2008 | Kim |
20090128620 | May 21, 2009 | Lipton |
20110194619 | August 11, 2011 | Yu |
20120042050 | February 16, 2012 | Chen et al. |
20120106921 | May 3, 2012 | Sasaki et al. |
20120281068 | November 8, 2012 | Celia |
Foreign Patent Documents

1 524 859 | April 2005 | EP |
H11-346370 | December 1999 | JP |
2004-7266 | January 2004 | JP |
2005-124200 | May 2005 | JP |
2010-527217 | August 2010 | JP |
2011-109397 | June 2011 | JP |
201220826 | May 2012 | TW |
201234833 | August 2012 | TW |
2008/054100 | May 2008 | WO |
2009/136681 | November 2009 | WO |
WO-2011128818 | October 2011 | WO |
Other Publications

- Japanese Office Action dated Jul. 16, 2020, issued in Japanese Application No. 2017-221897.
- International Search Report dated Mar. 4, 2014, issued in PCT Application No. PCT/IB2013/059349, filed Oct. 14, 2013.
- Written Opinion dated Mar. 4, 2014, issued in PCT Application No. PCT/IB2013/059349, filed Oct. 14, 2013.
- Japanese Office Action dated Jun. 13, 2017, issued in Japanese Application No. 2015-536276.
- Paola Sunna et al., Tile Descriptor, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Geneva, Switzerland, Jan. 2013, 7 pp.
- Xiaofeng Yang et al., 2D Compatible Frame Packing Stereo 3D Video, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting, Shanghai, China, Oct. 10-19, 2012, 6 pp.
- Ohji Nakagami et al., Frame Packing Arrangement SEI Extension for HEVC, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP3 and ISO/IEC JTC 1/SC 29/WG 11, 9th Meeting, Geneva, Switzerland, Apr. 27-May 7, 2012, 3 pp.
- Taiwanese Office Action dated Aug. 24, 2015, issued in Taiwan Application No. TW 10421127080.
Type: Grant
Filed: Apr 30, 2020
Date of Patent: Jan 2, 2024
Assignees: RAI Radiotelevisione Italiana S.P.A. (Rome), S.I.SV.EL. Societa' Italiana Per Lo Sviluppo Dell' Elettronica S.P.A. (None)
Inventors: Marco Arena (Turin), Giovanni Ballocca (Turin), Paola Sunna (Turin)
Primary Examiner: Eric B. Kiss
Application Number: 16/863,620
International Classification: H04N 13/161 (20180101); H04N 19/132 (20140101); H04N 19/156 (20140101); H04N 19/162 (20140101); H04N 19/164 (20140101); H04N 19/17 (20140101); H04N 19/44 (20140101); H04N 19/597 (20140101); H04N 19/70 (20140101); H04N 13/178 (20180101);