Patents by Inventor Lester Lu

Lester Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250260824
Abstract: Decoding a current block includes obtaining at least one of a maximum motion vector (MV) precision or a minimum MV precision for a group of blocks. The group of blocks includes the current block. A block-level MV precision for decoding the current block is obtained. The block-level MV precision is limited by the at least one of the maximum MV precision or the minimum MV precision. An MV for the current block is decoded using the block-level MV precision. A prediction block is obtained for the current block using the MV.
    Type: Application
    Filed: May 7, 2022
    Publication date: August 14, 2025
    Inventors: Debargha Mukherjee, Urvang Joshi, Onur Guleryuz, Yaowu Xu, Yue Chen, Lester Lu, Adrian W. Grange, Mohammed Golam Sarwer, Jianle Chen, Rachel Barker, Chi Yo Tsai
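    The limiting step the abstract describes can be sketched as a simple clamp. This is an illustrative reading only: the function name, the representation of precision as a count of fractional bits, and the example values are assumptions, not syntax from any codec specification.

    ```python
    # Hypothetical sketch of limiting a block-level MV precision to a
    # group-level [min, max] range, as the abstract describes.

    def clamp_mv_precision(block_precision: int,
                           min_precision: int,
                           max_precision: int) -> int:
        """Limit a block-level MV precision to [min_precision, max_precision].

        Precisions are expressed here as fractional bits, so a larger
        value means a finer (e.g. 1/8-pel) motion vector.
        """
        return max(min_precision, min(block_precision, max_precision))


    # A block signalling 1/16-pel precision (4 fractional bits) in a group
    # limited to at most 1/8-pel (3 fractional bits) is clamped down.
    print(clamp_mv_precision(4, 0, 3))  # 3
    ```

    Because the group-level bounds are available to both encoder and decoder, the clamp can be applied identically on both sides without extra signalling.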
  • Publication number: 20250260837
Abstract: Motion vector (MV) coding using an MV precision is described. An MV class of a motion vector difference (MVD) is decoded. Whether to omit decoding least significant bits of offset bits of an integer portion of the MVD is determined using an MV precision. The integer portion is obtained using at least some decoded offset bits and the least significant bits of the integer portion. Whether to omit decoding least significant bits of fractional bits of a fractional portion is determined using the MV precision. The fractional portion is obtained using at least some decoded fractional bits and the least significant bits of the fractional portion. The MVD is obtained using at least the integer portion and the fractional portion. An MV for the current block is obtained using the MVD.
    Type: Application
    Filed: May 7, 2022
    Publication date: August 14, 2025
    Inventors: Debargha Mukherjee, Urvang Joshi, Onur Guleryuz, Yaowu Xu, Yue Chen, Lester Lu, Adrian Grange, Mohammed Golam Sarwer, Jianle Chen, Rachel Barker, Chi Yo Tsai
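    The bit-omission idea can be illustrated as rebuilding a field whose low bits were never coded because the MV precision makes them implicitly zero. The function name and the bit layout below are assumptions for illustration, not the codec's actual syntax.

    ```python
    # Illustrative sketch of reconstructing part of an MVD when some
    # least-significant bits are omitted from the bitstream and treated
    # as implicitly zero, as the abstract describes.

    def reconstruct_bits(decoded_msbs: int, total_bits: int, coded_bits: int) -> int:
        """Rebuild a bit field when only its `coded_bits` most significant
        bits were read; the omitted low bits are implicitly zero."""
        omitted = total_bits - coded_bits
        return decoded_msbs << omitted


    # With 3 fractional bits total but a precision that zeroes the lowest
    # 2 bits, only the top bit is coded: a decoded 1 becomes 0b100.
    print(reconstruct_bits(decoded_msbs=1, total_bits=3, coded_bits=1))  # 4
    ```

    The same reconstruction applies to the integer-portion offset bits; only the counts differ.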
  • Publication number: 20250229510
Abstract: A composite structure includes: a substrate, and a printable layer, provided on at least a part of a surface of the substrate and having accommodated therein a colorful material.
    Type: Application
    Filed: January 25, 2022
    Publication date: July 17, 2025
    Inventors: Lester LU, Ming WU, Lisa LI, Yong LIANG
  • Patent number: 12294705
Abstract: Residual coding using vector quantization (VQ) is described. A flag indicating whether a residual block for the current block is encoded using VQ is decoded. In response to the flag indicating that the residual block is encoded using VQ, a parameter indicating an entry in a codebook is decoded, and the residual block is decoded using the entry. In response to the flag indicating that the residual block is not encoded using VQ, the residual block is decoded based on a skip flag indicating whether the current block is encoded using transform skip. The current block is reconstructed using the residual block.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: May 6, 2025
    Assignee: GOOGLE LLC
    Inventors: Debargha Mukherjee, Lester Lu, Elliott Karpilovsky
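    The decoder-side branching the abstract describes can be sketched as follows. The codebook contents and the fallback path are stand-ins; real codebooks would be trained and the non-VQ path would apply an actual inverse transform.

    ```python
    # Minimal, hypothetical sketch of the branching in the abstract:
    # a flag selects vector-quantized residual decoding (codebook lookup)
    # versus the ordinary transform / transform-skip path.

    CODEBOOK = [
        [0, 0, 0, 0],      # entry 0: all-zero residual
        [1, -1, 1, -1],    # entry 1: illustrative pattern
    ]

    def decode_residual(use_vq: bool, vq_index: int,
                        transform_skip: bool, coeffs: list) -> list:
        if use_vq:
            # Residual is a single codebook entry selected by a parsed index.
            return list(CODEBOOK[vq_index])
        if transform_skip:
            # Coefficients are the spatial residual directly.
            return list(coeffs)
        # Otherwise an inverse transform would be applied; the identity
        # here stands in for it.
        return list(coeffs)

    print(decode_residual(True, 1, False, []))  # [1, -1, 1, -1]
    ```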
  • Publication number: 20250047833
    Abstract: A new reference framework is described that ranks reference frames based on a normative procedure (e.g., a calculated score) and signals the reference frames based on their ranks. The bitstream syntax is simplified by using a context tree that relies on the ranking. Moreover, mapping reference frames to buffers does not have to be signaled and can be determined at the decoder. In an example, the identifier of a reference frame used to code a current block can include identifying a syntax element corresponding to the identifier, determining context information for the syntax element, determining a node of a context tree that includes the syntax element, and coding the syntax element according to a probability model using the context information associated with the node. The context tree is a binary tree that includes, as nodes, the available reference frames arranged in the ranking.
    Type: Application
    Filed: December 7, 2022
    Publication date: February 6, 2025
    Inventors: Sarah Parker, Debargha Mukherjee, Lester Lu
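    A normative ranking that both encoder and decoder can derive independently might look like the sketch below. The scoring rule (temporal distance to the current frame) is an assumption; the abstract only says the procedure is normative, e.g. a calculated score.

    ```python
    # Hypothetical sketch of ranking reference frames by a derived score
    # so the mapping to buffers need not be signalled.

    def rank_references(current_order: int, ref_orders: list) -> list:
        """Return reference frame display orders sorted so that frames
        closer to the current frame rank first (ties broken by order)."""
        return sorted(ref_orders,
                      key=lambda o: (abs(o - current_order), o))

    # Encoder and decoder both compute the same ranking from frame order.
    print(rank_references(8, [0, 4, 6, 12]))  # [6, 4, 12, 0]
    ```

    The resulting order would then determine the arrangement of the reference frames as nodes of the binary context tree used for coding the syntax element.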
  • Publication number: 20250039436
    Abstract: Coding including dynamic range handling of high dimensional inverse autocorrelation in optical flow refinement includes obtaining a refinement model from available warped refinement models, wherein the available warped refinement models include a four-parameter scaling refinement model, a three-parameter scaling refinement model, and a four-parameter rotational refinement model, obtaining refined motion vectors using the warped refinement model and previously obtained reference frame data in the absence of data expressly indicating the refined motion vectors in the encoded bitstream, wherein obtaining the refined motion vectors includes using a dynamic range adjusted autocorrelation matrix, generating refined prediction block data using the refined motion vectors, generating reconstructed block data using the refined prediction block data, including the reconstructed block data in reconstructed frame data for the current frame, and outputting the reconstructed frame data.
    Type: Application
    Filed: July 26, 2024
    Publication date: January 30, 2025
    Inventors: Lester Lu, Xiang Li, Debargha Mukherjee
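    One plausible reading of "dynamic range adjusted autocorrelation matrix" is shifting the matrix entries down by a common amount so integer arithmetic on them cannot overflow. The target bit width and shift rule below are assumptions for illustration only, not the patent's actual procedure.

    ```python
    # Hypothetical sketch of dynamic range adjustment: right-shift all
    # autocorrelation entries by a common amount so the largest magnitude
    # fits a target signed bit depth, preserving their relative scale.

    def adjust_dynamic_range(matrix, target_bits=15):
        """Return (shifted_matrix, shift) with every entry fitting in
        `target_bits` signed bits."""
        peak = max(abs(v) for row in matrix for v in row)
        shift = 0
        while (peak >> shift) >= (1 << (target_bits - 1)):
            shift += 1
        return [[v >> shift for v in row] for row in matrix], shift

    m, s = adjust_dynamic_range([[1 << 20, 3], [3, 1 << 18]])
    print(s)  # 7: 2**20 >> 7 = 8192 < 2**14
    ```

    The adjusted matrix can then be inverted (or solved against) in fixed-point arithmetic when deriving the refined motion vectors.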
  • Publication number: 20240422309
    Abstract: Methods, systems and apparatuses are disclosed including computer readable medium storing instructions used to encode or decode a video or a bitstream encodable or decodable using disclosed steps. The steps include reconstructing a first reference frame and a second reference frame for a current frame to be encoded or decoded, projecting motion vectors of the first reference frame and the second reference frame onto pixels of a current reference frame resulting in a first pixel in the current reference frame being associated with a plurality of projected motion vectors, and selecting a first projected motion vector from the plurality of projected motion vectors as a selected motion vector associated with the first pixel to be used for determining a pixel value of the first pixel, the selection based on magnitudes of the respective ones of the plurality of projected motion vectors.
    Type: Application
    Filed: August 30, 2024
    Publication date: December 19, 2024
    Inventors: Lin Zheng, Yaowu Xu, Lester Lu, Jingning Han, Bohan Li
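    Resolving the collision when several motion vectors project onto the same pixel can be sketched as a magnitude-based selection. Choosing the smallest magnitude is an assumption; the abstract only states that the selection is based on the magnitudes of the projected motion vectors.

    ```python
    # Hypothetical sketch: several MVs project onto one pixel of the
    # current reference frame; pick one by magnitude (here, the smallest
    # squared magnitude, avoiding a square root).

    def select_projected_mv(candidates):
        """candidates: list of (dx, dy) projected motion vectors.
        Returns the candidate with the smallest squared magnitude."""
        return min(candidates, key=lambda mv: mv[0] ** 2 + mv[1] ** 2)

    print(select_projected_mv([(4, 3), (1, -2), (0, 7)]))  # (1, -2)
    ```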
  • Patent number: 12143605
    Abstract: Transform modes are derived for inter-predicted blocks using side information. A prediction residual is generated for a current video block using a reference frame. Side information associated with one or both of the current video block or the reference frame is identified. A trained transform is determined from amongst multiple trained transforms based on the side information, in which each of the trained transforms is determined using individual side information types and combinations of the individual side information types and the side information represents values of one of the individual side information types or one of the combinations of the individual side information types. The prediction residual is transformed according to the trained transform, and data associated with the transformed prediction residual and the side information are encoded to a bitstream.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: November 12, 2024
    Assignee: GOOGLE LLC
    Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
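    The side-information-keyed selection can be sketched as a lookup from side-information values to a trained transform matrix. The keys and the 2x2 matrices below are invented placeholders; real trained transforms would be learned offline per side-information type or combination of types.

    ```python
    # Toy sketch of selecting a trained transform from side information
    # and applying it to a residual, as the abstract describes.

    TRAINED_TRANSFORMS = {
        ("qp_low",): [[1, 1], [1, -1]],             # e.g. Hadamard-like
        ("qp_high", "small_block"): [[2, 0], [0, 2]],
    }

    def transform_residual(residual, side_info):
        """Pick the trained transform keyed by the side information and
        apply it to a length-2 residual vector (matrix-vector product)."""
        t = TRAINED_TRANSFORMS[tuple(side_info)]
        return [sum(t[r][c] * residual[c] for c in range(2)) for r in range(2)]

    print(transform_residual([3, 1], ["qp_low"]))  # [4, 2]
    ```

    Since the decoder sees the same side information in the bitstream, it can select the matching inverse transform without an explicit transform index.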
  • Publication number: 20240323361
    Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
    Type: Application
    Filed: May 30, 2024
    Publication date: September 26, 2024
    Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
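    The mode-dependent transform sets the abstract describes can be sketched as two candidate pools: inter blocks draw from fixed transforms plus inter-learned transforms, while intra blocks draw from a proper subset of those fixed transforms plus an intra-learned transform. The transform names below are illustrative only.

    ```python
    # Hypothetical sketch of mode-dependent transform availability.

    INTER_FIXED = {"DCT", "ADST", "FLIPADST", "IDTX"}
    INTRA_FIXED = {"DCT", "ADST"}            # proper subset of INTER_FIXED
    INTER_LEARNED = {"KLT_INTER_16", "KLT_INTER_32"}
    INTRA_LEARNED = {"KLT_INTRA"}

    def available_transforms(prediction_mode: str) -> set:
        """Return the transform candidates for a block's prediction mode."""
        if prediction_mode == "inter":
            return INTER_FIXED | INTER_LEARNED
        return INTRA_FIXED | INTRA_LEARNED

    print(sorted(available_transforms("intra")))  # ['ADST', 'DCT', 'KLT_INTRA']
    ```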
  • Publication number: 20240205458
    Abstract: Transform prediction with parsing independent coding includes generating a reconstructed frame and outputting the reconstructed frame.
    Type: Application
    Filed: December 18, 2023
    Publication date: June 20, 2024
    Inventors: Onur Guleryuz, Zeyu Deng, Debargha Mukherjee, Lester Lu, Yue Chen
  • Patent number: 12003706
    Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: June 4, 2024
    Assignee: GOOGLE LLC
    Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
  • Publication number: 20230011893
Abstract: Residual coding using vector quantization (VQ) is described. A flag indicating whether a residual block for the current block is encoded using VQ is decoded. In response to the flag indicating that the residual block is encoded using VQ, a parameter indicating an entry in a codebook is decoded, and the residual block is decoded using the entry. In response to the flag indicating that the residual block is not encoded using VQ, the residual block is decoded based on a skip flag indicating whether the current block is encoded using transform skip. The current block is reconstructed using the residual block.
    Type: Application
    Filed: December 23, 2019
    Publication date: January 12, 2023
    Inventors: Debargha Mukherjee, Lester Lu, Elliott Karpilovsky
  • Publication number: 20220217336
    Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
    Type: Application
    Filed: March 21, 2022
    Publication date: July 7, 2022
    Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
  • Publication number: 20220094950
    Abstract: Transform modes are derived for inter-predicted blocks using side information. A prediction residual is generated for a current video block using a reference frame. Side information associated with one or both of the current video block or the reference frame is identified. A trained transform is determined from amongst multiple trained transforms based on the side information, in which each of the trained transforms is determined using individual side information types and combinations of the individual side information types and the side information represents values of one of the individual side information types or one of the combinations of the individual side information types. The prediction residual is transformed according to the trained transform, and data associated with the transformed prediction residual and the side information are encoded to a bitstream.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
  • Patent number: 11284071
    Abstract: Coding a block of video data includes determining a prediction mode for the block, which is an inter-prediction or intra-prediction mode, determining a transform type for the block, and coding the block using the prediction mode and the transform type. The transform type is one of a first plurality of transform types when the prediction mode is the inter-prediction mode, and is one of a second plurality of transform types when the prediction mode is the intra-prediction mode. The first plurality of transform types includes first fixed transform types and first mode-dependent transform types that are based on a first learned transform generated using inter-predicted blocks. The second plurality of transform types includes second fixed transform types and second mode-dependent transform types that are based on a second learned transform generated using intra-predicted blocks. The first and second fixed transform types have at least some fixed transform types in common.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: March 22, 2022
    Assignee: GOOGLE LLC
    Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
  • Patent number: 11197004
    Abstract: Transform modes are derived for inter-predicted blocks using side information available within a bitstream. An inter-predicted encoded video block and side information are identified within a bitstream. Based on the side information, a trained transform is determined for inverse transforming transform coefficients of the inter-predicted encoded video block from amongst multiple trained transforms. The transform coefficients of the inter-predicted encoded video block are inverse transformed according to the trained transform to produce a prediction residual. A video block is reconstructed using the prediction residual and the reference frame. The video block is then output within an output video stream for storage or display. To determine the trained transforms, a learning model uses individual side information types and combinations of the individual side information types processed against a training data set.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: December 7, 2021
    Assignee: GOOGLE LLC
    Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
  • Publication number: 20210185312
    Abstract: Coding a block of video data includes determining a prediction mode for the block, which is an inter-prediction or intra-prediction mode, determining a transform type for the block, and coding the block using the prediction mode and the transform type. The transform type is one of a first plurality of transform types when the prediction mode is the inter-prediction mode, and is one of a second plurality of transform types when the prediction mode is the intra-prediction mode. The first plurality of transform types includes first fixed transform types and first mode-dependent transform types that are based on a first learned transform generated using inter-predicted blocks. The second plurality of transform types includes second fixed transform types and second mode-dependent transform types that are based on a second learned transform generated using intra-predicted blocks. The first and second fixed transform types have at least some fixed transform types in common.
    Type: Application
    Filed: December 12, 2019
    Publication date: June 17, 2021
    Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky