Patents by Inventor Karam NASER

Karam NASER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220329829
    Abstract: At least a method and an apparatus are presented for efficiently encoding or decoding video. For example, one or more chroma residual scaling parameters are determined based on one or more luma mapping parameters and based on a corrective value of the one or more chroma residual scaling parameters. The video is encoded or decoded based on the determined one or more chroma residual scaling parameters.
    Type: Application
    Filed: September 16, 2020
    Publication date: October 13, 2022
    Inventors: Edouard Francois, Franck Galpin, Karam Naser, Philippe De Lagrange
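
A minimal sketch of the idea in the abstract above, in Python. The names (slope_table, crs_offset), the bin width and the fixed-point shift are illustrative assumptions, not the application's actual derivation: a base scale implied by the luma mapping is looked up and refined by a corrective value before being applied to the chroma residual.

```python
def chroma_residual_scale(avg_luma, slope_table, crs_offset, num_bins=16, bit_depth=10):
    """Derive a chroma residual scaling factor from luma mapping parameters plus a correction."""
    bin_idx = min(avg_luma >> (bit_depth - 4), num_bins - 1)  # luma bin of the block (hypothetical binning)
    base_scale = slope_table[bin_idx]                         # scale implied by the luma mapping slope
    return base_scale + crs_offset                            # corrective value refines the base scale

def scale_chroma_residual(residual, scale, shift=11):
    # fixed-point multiply of the chroma residual by the derived scale
    return [(r * scale + (1 << (shift - 1))) >> shift for r in residual]

slopes = [2048] * 16  # identity scaling in 11-bit fixed point
print(scale_chroma_residual([4, -3, 7], chroma_residual_scale(512, slopes, crs_offset=32)))
```
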
  • Publication number: 20220312040
    Abstract: A method and apparatus to improve compression efficiency in a video compression scheme enable use of new tools with multiple transform selection. In one embodiment, transform pair selection is based on a flag indicative of low-frequency non-separable transforms. In another embodiment, transform pair selection is based on a flag indicative of low-frequency non-separable transforms and on a flag indicative of matrix-based intra prediction. In another embodiment, when an implicit multiple transform selection mode is used, transform pair selection is based on a flag indicative of low-frequency non-separable transforms. Bitstream syntax is used to convey the flags.
    Type: Application
    Filed: May 28, 2020
    Publication date: September 29, 2022
    Inventors: Karam Naser, Fabrice LeLeannec, Tangi Poirier
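
A minimal sketch of the selection logic described above, with hypothetical names and an illustrative MTS pair table rather than the actual bitstream syntax: the transform pair falls back to the default DCT-II pair when the low-frequency non-separable transform (LFNST) flag, and in one variant also the matrix-based intra prediction (MIP) flag, is set.

```python
def select_transform_pair(lfnst_flag: bool, mip_flag: bool, mts_idx: int):
    if lfnst_flag:
        # LFNST operates on DCT-II coefficients, so keep the default transform pair
        return ("DCT-II", "DCT-II")
    if mip_flag:
        # one embodiment also conditions the selection on MIP
        return ("DCT-II", "DCT-II")
    # otherwise pick an explicit transform pair from the signalled index (illustrative table)
    mts_pairs = [("DCT-II", "DCT-II"), ("DST-VII", "DST-VII"),
                 ("DCT-VIII", "DST-VII"), ("DST-VII", "DCT-VIII"),
                 ("DCT-VIII", "DCT-VIII")]
    return mts_pairs[mts_idx]

print(select_transform_pair(lfnst_flag=False, mip_flag=False, mts_idx=1))
```
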
  • Patent number: 11457211
    Abstract: In video coding, a transform can be selected from multiple transform sets. To efficiently implement multiple transforms, a unified architecture of implementing the Discrete Trigonometric Transforms (DTTs) or flipped DTT can be used. In the proposed unified architecture, the relationships between the transforms are utilized. In particular, all transforms can be implemented based on DCT-II, DCT-IV, a reverse order operation, and a sign changing operation for odd elements. The DCT-II can be implemented at a minimum size, and other sizes for DCT-II can be implemented recursively from the minimum size DCT-II and DCT-IV at various sizes. In one example, the multiple transforms are {DCT-II, DST-II, DCT-III, DST-III, DCT-IV, DST-IV}. The relationships between transforms can also be used to guide the design of additional transforms that can be implemented by the unified architecture.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: September 27, 2022
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Karam Naser, Tangi Poirier, Gagan Bihari Rath
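
One relationship that such a unified architecture relies on can be checked numerically. The sketch below is an independent verification of that identity, not code from the patent: DST-II of a signal equals DCT-II applied to the signal with its odd elements sign-changed, followed by a reversal of the output order.

```python
import numpy as np

def dct2_matrix(N):
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.cos(np.pi * k * (2 * n + 1) / (2 * N))

def dst2_matrix(N):
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.sin(np.pi * (k + 1) * (2 * n + 1) / (2 * N))

N = 8
x = np.random.randn(N)
direct = dst2_matrix(N) @ x                       # DST-II applied directly
sign_flipped = x * (-1.0) ** np.arange(N)         # sign change on the odd input elements
via_dct2 = (dct2_matrix(N) @ sign_flipped)[::-1]  # DCT-II, then reverse the output order
print(np.allclose(direct, via_dct2))              # True
```
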
  • Publication number: 20220303535
    Abstract: A lossless coding mode is proposed for a video coding system comprising a plurality of coding tools, some of which are lossy by design and some of which can be adapted to become lossless or near lossless. To enable a lossless mode in such a system, it is proposed to disable the tools that are lossy by design and use only lossless tools, to adapt some tools to enable lossless coding, and to adapt some tools to enable near-lossless coding so that a secondary lossless coding may be applied after residual coding. In a specific embodiment, the type of residual coding is determined by obtaining a flag representative of a special mode when signaled information indicates that transform skip residual coding is used; when this flag is true, regular residual coding is used instead of the transform skip residual coding that would otherwise apply.
    Type: Application
    Filed: June 16, 2020
    Publication date: September 22, 2022
    Inventors: Tangi Poirier, Fabrice Le Leannec, Karam Naser, Edouard Francois
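
The decision in the last sentence of the abstract can be summarized in a few lines. This is a hedged sketch with hypothetical flag names, not the actual syntax: even when transform skip residual coding is indicated, a special-mode flag forces the regular residual coding process.

```python
def select_residual_coding(transform_skip_used: bool, special_mode_flag: bool) -> str:
    if transform_skip_used and special_mode_flag:
        return "regular_residual_coding"      # special mode overrides the transform-skip process
    return "ts_residual_coding" if transform_skip_used else "regular_residual_coding"

print(select_residual_coding(transform_skip_used=True, special_mode_flag=True))
```
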
  • Publication number: 20220264095
    Abstract: Methods and apparatuses for video coding and decoding are provided. The method of video encoding includes accessing a bin of a syntax element associated with a block in a picture of a video, determining a context for the bin of the syntax element, and entropy encoding the bin of the syntax element based on the determined context, wherein either the bin of the syntax element is based on the relevance of a prediction of the syntax element by a neural network, or the probability associated with the context is determined by a neural network. A bitstream formatted to include encoded data, a computer-readable storage medium and a computer-readable program product are also described.
    Type: Application
    Filed: April 22, 2022
    Publication date: August 18, 2022
    Applicant: InterDigital VC Holdings, Inc.
    Inventors: Franck GALPIN, Fabien RACAPE, Karam NASER, Philippe BORDES
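
As a toy illustration of the second alternative in the abstract (the probability associated with a context produced by a neural network), the following sketch uses a single-layer model; the features and weights are placeholders, and a real CABAC engine would consume the resulting probability when arithmetic-coding the bin.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bin_probability(context_features, weights, bias):
    """Tiny one-layer 'network' mapping context features to P(bin == 1)."""
    return sigmoid(context_features @ weights + bias)

ctx = np.array([1.0, 0.0, 0.5])          # e.g. neighbouring-block statistics (hypothetical)
p1 = bin_probability(ctx, np.array([0.7, -0.2, 0.4]), bias=-0.1)
print(f"P(bin=1) = {p1:.3f}")
```
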
  • Publication number: 20220141466
    Abstract: In one implementation, the context coded bin (CCB) counting methods are unified between the transform residual coding process and the Transform Skip (TS) residual coding process. In one example, in TS residual coding, the CCB counting excludes the coeff_sign_flag so that the syntax used for the CCB count is unified for the two residual coding processes. In addition, a separate maximum number of context coded bins can be specified and used for coeff_sign_flag only. In another example, in TS residual coding, the maximum CCB count is reduced from TB_size*2 for a TB to TB_size*1.75, or, more generally, the maximum CCB counts of the transform residual coding and TS residual coding are both set to an identical value, so that the maximum CCB count is unified for the two residual coding processes.
    Type: Application
    Filed: September 18, 2020
    Publication date: May 5, 2022
    Inventors: Ya CHEN, Fabrice LE LEANNEC, Franck GALPIN, Karam NASER
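
The budget change mentioned in the second example is simple arithmetic; the sketch below (illustrative only) compares the previous TS residual coding budget of TB_size*2 context coded bins with the reduced TB_size*1.75 budget for a 16x16 transform block.

```python
def max_ccb(tb_size: int, per_sample_budget: float) -> int:
    # maximum number of context coded bins allowed for a transform block
    return int(tb_size * per_sample_budget)

tb_size = 16 * 16
print(max_ccb(tb_size, 2.0))    # 512: TB_size * 2 budget
print(max_ccb(tb_size, 1.75))   # 448: reduced TB_size * 1.75 budget
```
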
  • Patent number: 11323716
    Abstract: Methods and apparatuses for video coding and decoding are provided. The method of video encoding includes accessing a bin of a syntax element associated with a block in a picture of a video, determining a context for the bin of the syntax element, and entropy encoding the bin of the syntax element based on the determined context, wherein either the bin of the syntax element is based on the relevance of a prediction of the syntax element by a neural network, or the probability associated with the context is determined by a neural network. A bitstream formatted to include encoded data, a computer-readable storage medium and a computer-readable program product are also described.
    Type: Grant
    Filed: April 24, 2019
    Date of Patent: May 3, 2022
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Franck Galpin, Fabien Racape, Karam Naser, Philippe Bordes
  • Publication number: 20220124337
    Abstract: Methods and apparatus for using wide-angle intra prediction with position dependent intra prediction combination. Wide-angle intra prediction enables intra prediction direction angles higher than the conventional 45 degrees. Also, position dependent intra prediction combination (PDPC) was adopted in a specification for the next generation of video coding H.266/VVC and enables more reference pixels along edges of a block. In one embodiment, when a video block to be coded or decoded is non-square, additional intra prediction directions are enabled in the direction of the longer block edge. An index is used to indicate the prediction direction and can be adapted according to the additional intra predictions in the longer direction, with correspondingly fewer prediction directions along the shorter block edge. This preserves the number of prediction modes that need to be indexed but allows their angles to correspond to the shape of the block.
    Type: Application
    Filed: September 19, 2019
    Publication date: April 21, 2022
    Inventors: Karam NASER, Fabien RACAPE, Gagan RATH
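
A simplified sketch of the index remapping idea for non-square blocks; the shift count of 6 modes and the 67-mode count are illustrative assumptions rather than the exact mapping tables: for a wide block, a few conventional modes near the first diagonal are re-indexed to wide angles, so the total number of indexed modes is preserved while the covered angles follow the block shape.

```python
def remap_intra_mode(mode: int, width: int, height: int, num_modes: int = 67) -> int:
    if width > height and 2 <= mode < 2 + 6:                   # 6 = hypothetical number of shifted modes
        return mode + (num_modes - 2)                          # re-indexed to a wide angle beyond the last mode
    if height > width and num_modes - 1 - 6 < mode <= num_modes - 1:
        return mode - num_modes                                # re-indexed to a wide angle below the first mode
    return mode

print(remap_intra_mode(4, width=32, height=8))   # remapped to a wide-angle index
print(remap_intra_mode(40, width=32, height=8))  # unchanged
```
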
  • Publication number: 20220038704
    Abstract: A decoding method is disclosed. A quantization parameter of at least one luma block is obtained. The at least one luma block comprises a luma sample co-located with at least one chroma sample selected in a current chroma block. The luma and chroma blocks are coded in dual tree mode. A quantization parameter of the current chroma block is then determined responsive to the quantization parameter of the at least one luma block. Finally, the current chroma block is decoded using the quantization parameter of the current chroma block.
    Type: Application
    Filed: September 19, 2019
    Publication date: February 3, 2022
    Inventors: Philippe De Lagrange, Karam Naser, Philippe Bordes
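
A minimal sketch of the derivation, under simplifying assumptions (4:2:0 sampling, the block centre as the selected chroma sample, and a generic mapping table) rather than the application's exact procedure: the QP of the luma block containing the co-located luma sample drives the chroma QP.

```python
def chroma_qp_from_luma(chroma_block, luma_qp_at, qp_map, chroma_qp_offset=0):
    x0, y0, w, h = chroma_block                    # chroma block position and size
    cx, cy = x0 + w // 2, y0 + h // 2              # centre chroma sample (selected sample)
    luma_qp = luma_qp_at(2 * cx, 2 * cy)           # QP of the luma block holding the co-located sample
    return qp_map[luma_qp] + chroma_qp_offset      # map luma QP to chroma QP, plus an offset

identity_map = list(range(64))
print(chroma_qp_from_luma((0, 0, 8, 8), lambda x, y: 32, identity_map, chroma_qp_offset=1))
```
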
  • Publication number: 20220030238
    Abstract: A decoding method is presented. A type of split of a block into transform units is first decoded. A transform is then determined for each transform unit of said block responsive to said type of split. Finally, decoded transform coefficients of said transform units are inverse transformed using the determined transforms.
    Type: Application
    Filed: December 2, 2019
    Publication date: January 27, 2022
    Inventors: Karam Naser, Fabrice Le Leannec, Tangi Poirier
  • Publication number: 20210400269
    Abstract: At least a method and an apparatus are provided for efficiently encoding or decoding video. For example, a plurality of different motion prediction modes for a current block are obtained. The current block is encoded or decoded based on a combination of the plurality of different motion prediction modes with corresponding weights for a plurality of sub-blocks of the current block, wherein the combination with the corresponding weights comprises an inter prediction mode and an intra prediction mode.
    Type: Application
    Filed: November 19, 2019
    Publication date: December 23, 2021
    Inventors: Tangi Poirier, Karam Naser, Edouard Francois
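
A small sketch of the per-sub-block blend described above; the 2-bit weights and the example values are illustrative, not the weights of the application: each sub-block mixes an inter predictor and an intra predictor with its own weight.

```python
import numpy as np

def blend_predictions(inter_pred, intra_pred, weights, shift=2):
    """Weighted combination per sub-block: (w*intra + ((1<<shift)-w)*inter) >> shift, with rounding."""
    total = 1 << shift
    return (weights * intra_pred + (total - weights) * inter_pred + total // 2) // total

inter_sub = np.full((4, 4), 100)   # inter prediction of one sub-block
intra_sub = np.full((4, 4), 140)   # intra prediction of the same sub-block
w = np.full((4, 4), 3)             # heavier intra weight, e.g. near the intra reference
print(blend_predictions(inter_sub, intra_sub, w)[0, 0])   # 130
```
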
  • Publication number: 20210400276
    Abstract: At least a method and an apparatus are presented for efficiently encoding or decoding video. For example, a quantization mode selection condition is obtained. A first quantization mode is selected for processing a first portion of a set of transform coefficients based on the quantization mode selection condition. A second quantization mode is selected for processing a second portion of the set of transform coefficients based on the quantization mode selection condition. The video is encoded or decoded based on the processed first and second portions of the set of transform coefficients.
    Type: Application
    Filed: November 19, 2019
    Publication date: December 23, 2021
    Inventors: Ya Chen, Fabrice Le Leannec, Karam Naser
  • Publication number: 20210385471
    Abstract: At least a method and an apparatus are presented for encoding or decoding video and can involve, for example, obtaining a group of coding units including two or more of a plurality of coding units divided from a current block, wherein the two or more of the plurality of coding units share a coding parameter and the group of coding units overlaps at least two different pipeline units associated with a pipelined decoding operation, and encoding or decoding the current block based on the group of coding units and the shared coding parameter.
    Type: Application
    Filed: October 31, 2019
    Publication date: December 9, 2021
    Inventors: Fabrice Le Leannec, Tangi Poirier, Karam Naser
  • Publication number: 20210385452
    Abstract: In general, encoding or decoding image information can involve processing a signal including image information based on determining a block of spatial-domain values for a prediction residual; replacing in a set of multiple transforms at least one first transform matrix with at least one second transform matrix and/or adding at least one second transform matrix to said set of multiple transforms; transforming the block of spatial-domain values using said second transform matrix; and encoding or decoding at least a portion of the image information based on the transforming of the block of spatial-domain values.
    Type: Application
    Filed: October 25, 2019
    Publication date: December 9, 2021
    Inventors: Karam Naser, Franck Galpin, Gagan Bihari Rath
  • Publication number: 20210160498
    Abstract: In video coding, a transform can be selected from multiple transform sets. To efficiently implement multiple transforms, a unified architecture of implementing the Discrete Trigonometric Transforms (DTTs) or flipped DTT can be used. In the proposed unified architecture, the relationships between the transforms are utilized. In particular, all transforms can be implemented based on DCT-II, DCT-IV, a reverse order operation, and a sign changing operation for odd elements. The DCT-II can be implemented at a minimum size, and other sizes for DCT-II can be implemented recursively from the minimum size DCT-II and DCT-IV at various sizes. In one example, the multiple transforms are {DCT-II, DST-II, DCT-III, DST-III, DCT-IV, DST-IV}. The relationships between transforms can also be used to guide the design of additional transforms that can be implemented by the unified architecture.
    Type: Application
    Filed: April 29, 2019
    Publication date: May 27, 2021
    Inventors: Karam Naser, Tangi Poirier, Gagan Bihari Rath
  • Publication number: 20210120247
    Abstract: Methods and apparatuses for video coding and decoding are provided. The method of video encoding includes accessing a bin of a syntax element associated with a block in a picture of a video, determining a context for the bin of the syntax element, and entropy encoding the bin of the syntax element based on the determined context, wherein either the bin of the syntax element is based on the relevance of a prediction of the syntax element by a neural network, or the probability associated with the context is determined by a neural network. A bitstream formatted to include encoded data, a computer-readable storage medium and a computer-readable program product are also described.
    Type: Application
    Filed: April 24, 2019
    Publication date: April 22, 2021
    Inventors: Franck GALPIN, Fabien RACAPE, Karam NASER, Philippe BORDES
  • Publication number: 20200359025
    Abstract: The present embodiments relate to a method and an apparatus for efficiently encoding and decoding video using multiple transforms. For example, a horizontal transform or a vertical transform may be selected from a set of transforms to transform prediction residuals of a current block of a video picture being encoded. In one example, the set of transforms includes: 1) only one transform with a constant lowest frequency basis function, 2) one or more transforms with an increasing lowest frequency basis function, and 3) only one transform with a decreasing lowest frequency basis function. In one embodiment, the transform with a constant lowest frequency basis function is DCT-II, the transforms with an increasing lowest frequency basis function are DST-VII (and DST-IV), and the transform with a decreasing lowest frequency basis function is DCT-VIII. At the decoder side, the corresponding inverse transforms are selected.
    Type: Application
    Filed: December 19, 2018
    Publication date: November 12, 2020
    Inventors: Karam NASER, Fabrice Leleannec, Franck Galpin
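
The classification of transforms by their lowest-frequency basis function can be checked numerically. The sketch below uses the standard DCT-II, DST-VII and DCT-VIII basis definitions as an independent verification, not code from the application, and confirms constant, increasing and decreasing first basis functions respectively.

```python
import numpy as np

N = 8
n = np.arange(N)
dct2_b0 = np.cos(np.pi * 0 * (2 * n + 1) / (2 * N))             # lowest DCT-II basis: constant
dst7_b0 = np.sin(np.pi * (n + 1) * 1 / (2 * N + 1))             # lowest DST-VII basis: increasing
dct8_b0 = np.cos(np.pi * 1 * (2 * n + 1) / (2 * (2 * N + 1)))   # lowest DCT-VIII basis: decreasing

print(np.all(np.diff(dct2_b0) == 0),
      np.all(np.diff(dst7_b0) > 0),
      np.all(np.diff(dct8_b0) < 0))   # True True True
```
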
  • Publication number: 20190082182
    Abstract: Some embodiments are directed to a method for encoding a dynamic texture region of a video sequence, said video sequence including a plurality of video frames and each video frame including at least one coding block. The method includes a rate-distortion optimization step in a coding loop based on a measured distortion value (SSD). According to the invention, the rate-distortion optimization includes the steps of estimating, for at least one current coding block of dynamic texture to be encoded, a perceived distortion value (SSDp), replacing the distortion value (SSD) measured for said current coding block of dynamic texture by said estimated perceived distortion value (SSDp), and applying the rate-distortion optimization step with said estimated perceived distortion value.
    Type: Application
    Filed: September 8, 2017
    Publication date: March 14, 2019
    Inventors: Karam NASER, Vincent RICORDEL, Patrick LE CALLET
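
The substitution described above only changes the distortion term of the usual rate-distortion cost D + lambda*R; the numbers and lambda below are placeholders used purely to illustrate the comparison, not values from the application.

```python
def rd_cost(distortion: float, rate_bits: float, lam: float) -> float:
    # classical Lagrangian rate-distortion cost
    return distortion + lam * rate_bits

ssd, ssd_p, rate, lam = 1200.0, 700.0, 96.0, 4.0
print(rd_cost(ssd, rate, lam))     # 1584.0: cost with the measured distortion (SSD)
print(rd_cost(ssd_p, rate, lam))   # 1084.0: cost with the estimated perceived distortion (SSDp)
```
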
  • Publication number: 20180343447
    Abstract: Some embodiments are directed to a method and a device for encoding a current frame of a video sequence, the current frame being encoded block by block. A current block of the current frame is encoded by performing: applying a texture synthesis to the video sequence in order to generate a set of n candidate blocks for replacing the current block, the n candidate blocks being similar to the current block according to a predefined criterion, encoding the candidate blocks in order to generate encoded candidate blocks and computing a coding cost for each encoded candidate block, and selecting, as the encoded block for the current block, the encoded candidate block having the lowest coding cost.
    Type: Application
    Filed: September 7, 2016
    Publication date: November 29, 2018
    Inventors: Karam NASER, Vincent RICORDEL, Patrick LE CALLET
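
A sketch of the selection loop described above, with synthesize_candidates, encode_block and coding_cost as hypothetical stand-ins for the actual encoder calls: every synthesized candidate similar to the current block is encoded, and the cheapest encoding is kept.

```python
def encode_best_candidate(current_block, synthesize_candidates, encode_block, coding_cost, n=8):
    candidates = synthesize_candidates(current_block, n)   # n blocks similar to the current one
    encoded = [encode_block(c) for c in candidates]        # encode every candidate
    return min(encoded, key=coding_cost)                   # keep the encoding with the lowest cost

# toy usage: the "cost" is just the length of a dummy encoding here
best = encode_best_candidate(
    current_block="B",
    synthesize_candidates=lambda b, n: [b * (i + 1) for i in range(n)],
    encode_block=lambda c: c,
    coding_cost=len,
)
print(best)   # "B": the shortest, i.e. cheapest, encoded candidate
```
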