Abstract: A video coding system uses a value of a chroma quantization parameter table to quantize or de-quantize chroma values of a block of the video, the chroma quantization parameter table being directly coded in a set of parameters at the picture level or at the sequence level of the coded video stream. A corresponding encoding method, encoding device, decoding method, decoding device, video signal, computer program and non-transitory readable medium are proposed.
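The abstract above describes looking up a chroma quantization parameter in a table coded directly at the picture or sequence level, and using that value to quantize or de-quantize chroma samples. As a rough illustrative sketch only (the function names, the 64-entry table layout, and the step-size formula that doubles every 6 QP units, as in HEVC/VVC-style codecs, are all assumptions, not the claimed syntax):

```python
# Illustrative sketch only: a directly coded luma-to-chroma QP mapping table,
# loosely in the spirit of the abstract above. Names and the step-size formula
# (step doubling every 6 QP units, an HEVC/VVC-style convention) are assumptions.

def chroma_qp(luma_qp, qp_table):
    """Map a luma QP to a chroma QP via a directly coded table."""
    return qp_table[min(luma_qp, len(qp_table) - 1)]

def dequantize(level, qp):
    """Reconstruct a coefficient magnitude from its quantized level.

    The quantization step size doubles every 6 QP units (2^(qp/6) scaling).
    """
    step = 2.0 ** (qp / 6.0)
    return level * step

# Example: an identity-like table that caps chroma QP below luma QP at the top end.
table = list(range(30)) + [29] * 34   # 64 entries covering luma QP 0..63
qp_c = chroma_qp(35, table)           # luma QP 35 maps to the capped value 29
coeff = dequantize(4, qp_c)
```

The point of coding such a table explicitly in a parameter set is that the encoder can tune the luma-to-chroma QP relationship per sequence or per picture instead of relying on a fixed mapping.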
Abstract: Methods, apparatus, systems, devices, and computer program products directed to augmenting reality with respect to real-world places, and/or real-world scenes that may include real-world places may be provided. Among the methods, apparatus, systems, devices, and computer program products is a method directed to augmenting reality via a device. The method may include capturing a real-world view that includes a real-world place, identifying the real-world place, determining an image associated with the real-world place familiar to a user of the device viewing the real-world view, and/or augmenting the real-world view that includes the real-world place with the image of the real-world place familiar to a user viewing the real-world view.
Abstract: There are provided methods and apparatus for in-loop artifact filtering. An apparatus includes an encoder for encoding an image region. The encoder has at least two filters for successively performing in-loop filtering to respectively reduce at least a first and a second type of quantization artifact.
Type:
Grant
Filed:
July 2, 2021
Date of Patent:
May 7, 2024
Assignee:
INTERDIGITAL VC HOLDINGS, INC.
Inventors:
Meng-Ping Kao, Peng Yin, Oscar Divorra Escoda
Abstract: Methods, devices and streams are disclosed for encoding, transporting and decoding a 3D scene prepared to be viewed from the inside of a viewing zone. A central view comprising texture and depth information is encoded by projecting points of the 3D scene visible from a central point of view onto an image plane. Patches are generated to encode small parts of the 3D scene not visible from the central point of view. At the rendering, a viewport image is generated for the current point of view. Holes, that is dis-occluded areas, of the viewport are filled using a patch-based inpainting algorithm adapted to take the patches, warped according to the rotation and translation between the virtual camera used for capturing the patch and the current virtual camera.
Abstract: A method for encoding a volumetric video content representative of a 3D scene is disclosed. The method comprises obtaining a reference viewing bounding box and an intermediate viewing bounding box defined within the 3D scene. For the reference viewing bounding box, the volumetric video reference sub-content is encoded as a central image and peripheral patches for parallax. For the intermediate viewing bounding box, the volumetric video intermediate sub-content is encoded as intermediate central patches which are differences between the intermediate central image and the reference central image.
Type:
Grant
Filed:
November 30, 2020
Date of Patent:
April 23, 2024
Assignee:
InterDigital VC Holdings, Inc.
Inventors:
Charles Salmon-Legagneur, Bertrand Chupeau, Julien Fleureau
Abstract: Disclosed herein are systems and methods for augmented reality multi-view telepresence. An embodiment takes the form of a method that includes obtaining a session geometry of a multi-location telepresence session that includes a first-location participant at a first location and a second-location participant at a second location, each location having respective pluralities of cameras and display segments. The method includes selecting, according to the session geometry, both a first-to-second-viewpoint second-location camera from the plurality of second-location cameras as well as a first-to-second-viewpoint first-location display segment from the plurality of first-location display segments.
Abstract: A decoding method is presented. An illumination compensation parameter is first determined for a current block of a picture. The illumination compensation parameter is determined from one or more illumination compensation parameters or from one or more reconstructed samples of at least one spatially neighboring block only in the case where said at least one spatially neighboring block belongs to the same local illumination compensation group as the current block, called the current local illumination compensation group. Finally, the current block is reconstructed using the determined illumination compensation parameter.
Type:
Grant
Filed:
September 19, 2019
Date of Patent:
April 16, 2024
Assignee:
InterDigital VC Holdings, Inc.
Inventors:
Ya Chen, Tangi Poirier, Philippe Bordes, Fabrice Leleannec, Franck Galpin
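A common way to derive local illumination compensation (LIC) parameters, which the abstract of this entry builds on, is a least-squares fit of a scale and offset between reference-picture neighbors and current-picture neighbors. The sketch below shows only that general technique; the function names are hypothetical, and the patent's actual contribution is the grouping rule restricting which neighboring blocks may contribute, which is not modeled here.

```python
# Illustrative sketch: deriving local illumination compensation (LIC)
# parameters (scale a, offset b) by a least-squares fit between reference
# and current neighboring samples, then applying them to a prediction.
# General technique only; the claimed grouping restriction is not modeled.

def lic_params(ref_neighbors, cur_neighbors):
    """Fit cur ~= a * ref + b over the neighboring samples (least squares)."""
    n = len(ref_neighbors)
    sx = sum(ref_neighbors)
    sy = sum(cur_neighbors)
    sxx = sum(x * x for x in ref_neighbors)
    sxy = sum(x * y for x, y in zip(ref_neighbors, cur_neighbors))
    denom = n * sxx - sx * sx
    if denom == 0:               # flat neighborhood: fall back to pure offset
        return 1.0, (sy - sx) / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def apply_lic(pred_samples, a, b):
    """Compensate a motion-compensated prediction with the fitted model."""
    return [a * p + b for p in pred_samples]

# Neighbors that are exactly twice as bright, plus an offset of 3:
ref = [10, 20, 30, 40]
cur = [23, 43, 63, 83]
a, b = lic_params(ref, cur)      # recovers a = 2, b = 3
```

Restricting the fit to neighbors in the same compensation group, as the abstract describes, avoids mixing samples from regions with different lighting into one model.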
Abstract: Encoding or decoding syntax information associated with video information can involve identifying a coding context associated with a syntax element of a current coding unit of the video information, wherein the identifying occurs without using a syntax element of a neighboring block, and encoding or decoding the syntax element of the current coding unit based on the coding context.
Type:
Grant
Filed:
March 5, 2020
Date of Patent:
April 16, 2024
Assignee:
InterDigital VC Holdings, Inc.
Inventors:
Ya Chen, Fabrice Leleannec, Tangi Poirier
Abstract: An encoding method for a picture part encoded in a bitstream and reconstructed can involve refinement data encoded in the bitstream and determined based on at least one comparison between versions of a rate-distortion cost, computed using a data coding cost and a distortion between an original version of the picture part and a reconstructed picture part, where the compared versions involve combinations of: with or without refinement by the refinement data, refinement either in or out of the decoding loop, and with or without a mapping or an inverse mapping.
Abstract: Systems and methods are described for compressing color information in point cloud data. In some embodiments, point cloud data includes point position information and point color information for each of a plurality of points. The point position information is provided to a neural network, and the neural network generates predicted color information (e.g. predicted luma and chroma values) for respective points in the point cloud. A prediction residual is generated to represent the difference between the predicted color information and the input point color information. The point position information (which may be in compressed form) and the prediction residual are encoded in a bitstream. In some embodiments, color hint data is encoded to improve color prediction.
Type:
Grant
Filed:
December 11, 2019
Date of Patent:
April 16, 2024
Assignee:
InterDigital VC Holdings, Inc.
Inventors:
Tatu V. J. Harviainen, Louis Kerofsky, Ralph Neff
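The point-cloud color compression entry above encodes a residual between predicted and actual color rather than the color itself. A minimal sketch of that residual pipeline follows; the trivial neighbor-averaging predictor here is a stand-in for the neural network, and all names are hypothetical.

```python
# Illustrative sketch: encoding point-cloud color as a residual against a
# prediction, per the abstract above. A trivial stand-in predictor (average
# of already-known neighbor colors) replaces the neural network; the
# residual encode/decode round trip is the part being illustrated.

def predict_color(neighbor_colors):
    """Stand-in predictor: integer average of neighbor colors per channel."""
    n = len(neighbor_colors)
    return tuple(sum(c[i] for c in neighbor_colors) // n for i in range(3))

def encode_residual(actual, predicted):
    """Residual = actual color minus predicted color, per channel."""
    return tuple(a - p for a, p in zip(actual, predicted))

def decode_color(predicted, residual):
    """Decoder reconstructs the color by re-running prediction and adding."""
    return tuple(p + r for p, r in zip(predicted, residual))

neighbors = [(100, 128, 128), (104, 130, 126)]
pred = predict_color(neighbors)                      # (102, 129, 127)
residual = encode_residual((105, 129, 127), pred)    # (3, 0, 0)
decoded = decode_color(pred, residual)               # round-trips exactly
```

Because the decoder can reproduce the same prediction from the (decoded) positions, only the small residual needs to be transmitted, which is what makes the scheme compressive.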
Abstract: A video codec can involve processing video information based on a motion model involving a coding unit including a plurality of sub-blocks, such as an affine motion model, to produce motion compensation information, obtaining a local illumination compensation model, and encoding or decoding the video information based on the motion compensation information and the local illumination compensation model.
Abstract: Methods and devices for transmitting and rendering a 3D scene are disclosed. The method for rendering comprises: receiving a manifest; requesting, from a server, at least one available first data stream; requesting, from the server, a subset of available second data streams selected based at least on an angular sector associated with each available second data stream; and rendering the 3D scene using the central patch content from the requested first data streams and parallax patch content from the requested selected subset of available second data streams.
Type:
Application
Filed:
December 18, 2023
Publication date:
April 11, 2024
Applicant:
INTERDIGITAL VC HOLDINGS, INC.
Inventors:
Yvon Legallais, Charline Taibi, Serge Travert, Charles Salmon-Legagneur
Abstract: Video coding tools can be controlled by including syntax in a video bitstream that makes better use of video decoding resources. An encoder inserts syntax into a video bitstream to enable a decoder to parse the bitstream and easily control which tool combinations are enabled, which combinations are not permitted, and which tools are activated for various components in a multiple component bitstream, leading to potential parallelization of bitstream decoding.
Type:
Grant
Filed:
November 22, 2022
Date of Patent:
April 9, 2024
Assignee:
INTERDIGITAL VC HOLDINGS, INC.
Inventors:
Edouard Francois, Pierre Andrivon, Franck Galpin, Fabrice Leleannec, Michel Kerdranvat
Abstract: Different implementations are described, particularly implementations for video encoding and decoding based on asymmetric binary partitioning of image blocks. The encoding method comprises, for a picture wherein at least one component is divided into blocks of samples: partitioning a block into block partitions, wherein at least one block partition has a size equal to a positive integer different from a power of 2 in width and/or height; obtaining a residual based on a difference between a prediction of the block partitions and the block partitions; splitting the residual into at least two residual blocks with a size equal to a power of 2 in width and height; and encoding the at least two residual blocks. Other embodiments are presented for block partitioning on the border of the picture, for setting maximum and/or minimum block sizes, and for the corresponding decoding method.
Abstract: A method of performing intra prediction for encoding or decoding uses multiple layers of reference samples. The layers are formed into reference arrays that are used by a function, such as a weighted combination, to form a final prediction. The weights can be variable, chosen from among a number of sets of weights. The prediction is used in encoding or decoding a block of video data. The weights can be determined in a number of ways, and for a given prediction mode, the same weights, or different weights, can be used for all pixels in a target block. If the weights are varied, they can depend on the distance of the target pixel from the reference arrays. An index can be sent indicating which set of weights is to be used.
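The weighted combination described in the multi-reference intra prediction abstract above can be sketched as follows. This is a simplified stand-in under stated assumptions: real angular modes interpolate each per-layer prediction along a direction, and the fixed 0.75/0.25 weights here merely illustrate giving the nearer reference line more influence.

```python
# Illustrative sketch: forming an intra prediction as a weighted combination
# of predictions obtained from two reference-sample layers, per the abstract
# above. Per-layer predictions and the weight values are simplified stand-ins.

def combine_layers(pred_layer1, pred_layer2, w1, w2):
    """Weighted per-pixel combination of two per-layer predictions (w1 + w2 == 1)."""
    return [w1 * p1 + w2 * p2 for p1, p2 in zip(pred_layer1, pred_layer2)]

# Predictions for a 4-pixel row derived from the nearest and second-nearest
# reference lines; the nearer layer gets the larger weight.
near = [100.0, 102.0, 104.0, 106.0]
far = [96.0, 98.0, 100.0, 102.0]
final = combine_layers(near, far, 0.75, 0.25)
```

Signaling an index into a small set of weight pairs, as the abstract suggests, lets the encoder pick the blend per block at low syntax cost.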
Abstract: Systems and methods are described for rendering a 3D synthetic scene. A display client receives point cloud samples of the scene, where the point cloud samples include a point location, one or more viewing directions, and color information for each of the viewing directions. The point cloud samples may be generated by a server using ray tracing. The display client combines information from the point cloud samples with a locally generated rendering of at least a portion of the scene to generate a combined rendering of the scene, and the client displays the combined rendering. The number of point cloud samples may be adjusted adaptively based on performance metrics at the client.
Type:
Grant
Filed:
July 2, 2020
Date of Patent:
April 9, 2024
Assignee:
InterDigital VC Holdings, Inc.
Inventors:
Tatu V. J. Harviainen, Marko Palviainen
Abstract: A method and an apparatus are defined for displaying images on a flexible display in a head-mountable device (HMD). One or more flexible display devices may be inserted in the HMD. The one or more flexible display devices may be constrained by the HMD to take a particular curved form and wrap the field of view (FOV) of images displayed on the one or more flexible display devices for an improved user experience. The display surface of the one or more flexible displays may be divided in display areas, the display areas corresponding to a focus area and a peripheral FOV. Image processing may be differentiated according to display area.
Type:
Grant
Filed:
March 18, 2020
Date of Patent:
April 2, 2024
Assignee:
INTERDIGITAL VC HOLDINGS, INC.
Inventors:
Pierre Andrivon, Franck Galpin, Fabrice Urban
Abstract: Methods (1500, 1600) and apparatuses (700, 900, 1700) for video coding and decoding are provided. The method of video encoding (1500) includes determining (1510) a set of parameters for illumination compensation associated with a first motion compensated reference block of a block in a picture of a video based on a function of a set of samples of the first motion compensated reference block and a set of samples of a second motion compensated reference block of the block, processing (1520) a prediction of the block based on the set of parameters, the prediction being associated with the first motion compensated reference block and encoding (1530) the block based on the processed prediction. A bitstream formatted to include encoded data, a computer-readable storage medium and a computer program product are also described.
Type:
Grant
Filed:
September 28, 2018
Date of Patent:
April 2, 2024
Assignee:
InterDigital VC Holdings, Inc.
Inventors:
Philippe Bordes, Franck Galpin, Fabien Racape
Abstract: At least one embodiment relates to a method for padding a first depth image representative of depth values of nearer points of a point cloud frame and a second depth image representative of depth values of farther points of a point cloud frame. The method also comprises encoding a video stream including a time-interleaving of said encoded first and second images. There is also provided a method comprising decoding a video stream to provide a first depth image representative of depth values of nearer points of a point cloud frame and a second depth image representative of depth values of farther points of a point cloud frame; and filtering pixel values of the second depth image by using pixel values of the first depth image.
Type:
Grant
Filed:
January 21, 2019
Date of Patent:
April 2, 2024
Assignee:
InterDigital VC Holdings, Inc.
Inventors:
Jean-Claude Chevet, Joan Llach Pinsach, Kangying Cai
Abstract: An image is split into a plurality of blocks of various sizes and a subdivision level counter is associated with each of the blocks. The value of this subdivision level counter for a block is representative of the size of the block and is used to determine the quantization parameter for the block. The value is propagated for each subdivision and incremented according to the type of the subdivision. When the image is split, an analysis is done according to the subdivision level counter, a maximal value of subdivision and the type of split, in order to determine the start of a new quantization group; when that is the case, the current position of the partition is propagated to the further split partitions, to be stored with these partitions and used in the prediction process when decoding.
Type:
Grant
Filed:
October 21, 2019
Date of Patent:
March 19, 2024
Assignee:
InterDigital VC Holdings, Inc.
Inventors:
Philippe De Lagrange, Philippe Bordes, Edouard Francois
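The subdivision-level-counter mechanism in the last abstract can be sketched as below. The per-split increments (2 for a quad split, which divides the area by four, 1 for a binary split) and the threshold test are assumptions chosen to illustrate the idea, not values taken from the patent.

```python
# Illustrative sketch: propagating a subdivision level counter through
# recursive block splits, in the spirit of the abstract above. The increment
# per split type and the quantization-group test are assumed values.

def child_counter(counter, split_type):
    """Increment the subdivision level counter according to the split type."""
    increments = {"quad": 2, "binary": 1, "ternary": 1}   # assumed values
    return counter + increments[split_type]

def starts_new_qg(counter, max_level):
    """A child block starts a new quantization group while the counter is
    within the configured maximum subdivision level."""
    return counter <= max_level

# A block quad-split once, then one child binary-split:
c0 = 0
c1 = child_counter(c0, "quad")     # counter 2 after the quad split
c2 = child_counter(c1, "binary")   # counter 3 after the binary split
```

Tracking subdivision depth this way gives a split-type-independent measure of block size, so blocks reached by different split sequences can share one quantization-group rule.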