Patents by Inventor Lester Lu
Lester Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250260824
Abstract: Decoding a current block includes obtaining at least one of a maximum motion vector (MV) precision or a minimum MV precision for a group of blocks, where the group of blocks includes the current block. A block-level MV precision for decoding the current block is obtained and is limited by the maximum MV precision, the minimum MV precision, or both. An MV for the current block is decoded using the block-level MV precision, and a prediction block is obtained for the current block using the MV.
Type: Application
Filed: May 7, 2022
Publication date: August 14, 2025
Inventors: Debargha Mukherjee, Urvang Joshi, Onur Guleryuz, Yaowu Xu, Yue Chen, Lester Lu, Adrian W. Grange, Mohammed Golam Sarwer, Jianle Chen, Rachel Barker, Chi Yo Tsai
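The clamping step the abstract describes is easy to illustrate. Below is a minimal sketch, assuming MV precisions are modeled as small integers where precision p means MV components are multiples of 2^-p luma samples; the function names and the eighth-pel full precision are illustrative assumptions, not taken from the application.

```python
def clamp_mv_precision(block_precision, max_precision=None, min_precision=None):
    """Limit a signaled block-level MV precision to the group-level bounds."""
    if max_precision is not None:
        block_precision = min(block_precision, max_precision)
    if min_precision is not None:
        block_precision = max(block_precision, min_precision)
    return block_precision

def decode_mv_component(coded_value, precision, full_precision=3):
    """Expand a value coded at 'precision' into full-precision (1/8-pel) units."""
    return coded_value << (full_precision - precision)

# A group that allows at most quarter-pel (2) clamps a block signaling
# eighth-pel (3) down to 2 before its MV is decoded.
p = clamp_mv_precision(3, max_precision=2, min_precision=0)
print(p, decode_mv_component(5, p))  # 2 10 (5 quarter-pel units = 10 eighth-pel units)
```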
-
Publication number: 20250260837
Abstract: Motion vector (MV) coding using an MV precision is described. An MV class of a motion vector difference (MVD) is decoded. Whether to omit decoding the least significant bits of the offset bits of an integer portion of the MVD is determined using the MV precision, and the integer portion is obtained using at least some decoded offset bits and the least significant bits of the integer portion. Similarly, whether to omit decoding the least significant bits of the fractional bits of a fractional portion is determined using the MV precision, and the fractional portion is obtained using at least some decoded fractional bits and the least significant bits of the fractional portion. The MVD is obtained using at least the integer portion and the fractional portion, and an MV for the current block is obtained using the MVD.
Type: Application
Filed: May 7, 2022
Publication date: August 14, 2025
Inventors: Debargha Mukherjee, Urvang Joshi, Onur Guleryuz, Yaowu Xu, Yue Chen, Lester Lu, Adrian Grange, Mohammed Golam Sarwer, Jianle Chen, Rachel Barker, Chi Yo Tsai
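The key observation is that at a coarse MV precision the trailing MVD bits are known to be zero, so the decoder can skip them and shift the decoded bits back into place. A minimal sketch follows, assuming eighth-pel full precision (3 fractional bits); the names and bit layout are assumptions for illustration only.

```python
FRAC_BITS = 3  # assumed full precision: eighth-pel, i.e. 3 fractional bits

def omitted_bits(precision):
    """Trailing bits that need not be decoded at the given precision:
    precision 3 keeps everything, precision 0 (integer-pel) omits all
    3 fractional bits, and a coarser-than-integer precision would also
    omit integer least-significant bits."""
    return FRAC_BITS - precision

def reconstruct_mvd(decoded_msbs, precision):
    """Omitted least-significant bits are implicitly zero, so the MVD
    magnitude (in eighth-pel units) is the decoded bits shifted up."""
    return decoded_msbs << omitted_bits(precision)

# At quarter-pel precision (2), one fractional bit is omitted:
print(reconstruct_mvd(0b1011, 2))  # 22, i.e. 0b10110 eighth-pel units
```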
-
Publication number: 20250229510
Abstract: A composite structure includes a substrate and a printable layer provided on at least a part of a surface of the substrate, the printable layer having a colorful material accommodated therein.
Type: Application
Filed: January 25, 2022
Publication date: July 17, 2025
Inventors: Lester LU, Ming WU, Lisa LI, Yong LIANG
-
Patent number: 12294705
Abstract: Residual coding using vector quantization (VQ) is described. A flag is decoded that indicates whether a residual block for the current block is encoded using VQ. If the flag indicates that the residual block is encoded using VQ, a parameter indicating an entry in a codebook is decoded, and the residual block is decoded using that entry. If the flag indicates that the residual block is not encoded using VQ, the residual block is decoded based on a skip flag indicating whether the current block is encoded using transform skip. The current block is reconstructed using the residual block.
Type: Grant
Filed: December 23, 2019
Date of Patent: May 6, 2025
Assignee: GOOGLE LLC
Inventors: Debargha Mukherjee, Lester Lu, Elliott Karpilovsky
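The decode-side branching the abstract describes reduces to a three-way choice. A minimal sketch, in which an iterator of pre-parsed values stands in for a real entropy decoder and the identity stands in for a real inverse transform; all names here are assumptions:

```python
def decode_residual(symbols, codebook):
    """Decode one residual block; 'symbols' iterates over values that have
    already been entropy-decoded, 'codebook' maps VQ indices to blocks."""
    use_vq = next(symbols)
    if use_vq:
        # VQ path: a single codebook index yields the whole residual block.
        return codebook[next(symbols)]
    transform_skip = next(symbols)
    if transform_skip:
        # Transform-skip path: residual samples were coded directly.
        return next(symbols)
    # Transform path: a real decoder would inverse transform decoded
    # coefficients here; the identity stands in for that step.
    return next(symbols)

codebook = {0: [0, 0, 0, 0], 1: [8, -8, 8, -8]}
print(decode_residual(iter([1, 1]), codebook))                 # VQ: entry 1
print(decode_residual(iter([0, 1, [2, 0, 0, -2]]), codebook))  # transform skip
```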
-
Publication number: 20250047833
Abstract: A new reference framework is described that ranks reference frames based on a normative procedure (e.g., a calculated score) and signals the reference frames based on their ranks. The bitstream syntax is simplified by using a context tree that relies on the ranking. Moreover, the mapping of reference frames to buffers does not have to be signaled and can be determined at the decoder. In an example, coding the identifier of a reference frame used to code a current block can include identifying a syntax element corresponding to the identifier, determining context information for the syntax element, determining the node of a context tree that includes the syntax element, and coding the syntax element according to a probability model using the context information associated with that node. The context tree is a binary tree that includes, as nodes, the available reference frames arranged according to the ranking.
Type: Application
Filed: December 7, 2022
Publication date: February 6, 2025
Inventors: Sarah Parker, Debargha Mukherjee, Lester Lu
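A sketch of the ranking-plus-tree idea follows. Both the scoring rule (display-order distance to the current frame) and the degenerate, unary-shaped tree are assumptions chosen for brevity; the application's normative score and tree shape may differ. The point illustrated is only that highly ranked frames cost fewer bits and need no buffer mapping in the bitstream.

```python
def rank_references(current_index, ref_indices):
    """Rank reference frames by an assumed score: display-order distance
    to the current frame (closer frames rank first)."""
    return sorted(ref_indices, key=lambda idx: abs(idx - current_index))

def reference_bits(ranked_refs, chosen_ref):
    """Bit decisions a degenerate (unary-shaped) binary context tree would
    code: at each node, one bit asks whether the chosen frame is the
    next-ranked candidate; each node carries its own probability model."""
    bits = []
    for ref in ranked_refs[:-1]:
        bits.append(int(ref == chosen_ref))
        if bits[-1]:
            break
    return bits  # the last-ranked frame is implied by all-zero bits

ranked = rank_references(10, [8, 12, 4, 16])  # -> [8, 12, 4, 16]
print(reference_bits(ranked, 8))   # [1]       cheap: top-ranked frame
print(reference_bits(ranked, 4))   # [0, 0, 1]  costlier: lower-ranked frame
```

Because both encoder and decoder derive the same ranking, the mapping from coded bits to reference buffers never needs to be transmitted.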
-
Publication number: 20250039436
Abstract: Coding with dynamic range handling of high-dimensional inverse autocorrelation in optical flow refinement includes obtaining a warped refinement model from the available warped refinement models, which include a four-parameter scaling refinement model, a three-parameter scaling refinement model, and a four-parameter rotational refinement model. Refined motion vectors are obtained using the warped refinement model and previously obtained reference frame data, in the absence of data expressly indicating the refined motion vectors in the encoded bitstream; obtaining the refined motion vectors includes using a dynamic-range-adjusted autocorrelation matrix. Refined prediction block data is generated using the refined motion vectors, reconstructed block data is generated using the refined prediction block data and included in reconstructed frame data for the current frame, and the reconstructed frame data is output.
Type: Application
Filed: July 26, 2024
Publication date: January 30, 2025
Inventors: Lester Lu, Xiang Li, Debargha Mukherjee
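The range-adjustment idea can be sketched in isolation: before solving the least-squares system A x = b whose solution gives the refinement parameters, both sides are downshifted so the entries fit a fixed-point budget, keeping the inversion stable in integer arithmetic. This is a minimal sketch under assumed names and an assumed 15-bit budget; a 2x2 system stands in for the 3x3/4x4 systems the three- and four-parameter models imply.

```python
def range_adjust(A, b, max_bits=15):
    """Right-shift the autocorrelation matrix A and vector b together until
    every entry fits in max_bits; shifting both sides by the same amount
    leaves the least-squares solution essentially unchanged."""
    peak = max([abs(v) for row in A for v in row] + [abs(v) for v in b] + [1])
    shift = max(0, peak.bit_length() - max_bits)
    return ([[v >> shift for v in row] for row in A],
            [v >> shift for v in b])

def solve_2x2(A, b):
    """Solve the adjusted 2x2 system A x = b for the refinement parameters."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        return (0.0, 0.0)  # degenerate: keep the unrefined motion vectors
    return ((A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det)

A, b = range_adjust([[1 << 20, 3 << 10], [3 << 10, 1 << 19]],
                    [5 << 12, -(7 << 11)])
print(solve_2x2(A, b))
```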
-
Publication number: 20240422309
Abstract: Methods, systems, and apparatuses are disclosed, including a computer-readable medium storing instructions used to encode or decode a video or a bitstream encodable or decodable using the disclosed steps. The steps include reconstructing a first reference frame and a second reference frame for a current frame to be encoded or decoded, projecting motion vectors of the first reference frame and the second reference frame onto pixels of a current reference frame such that a first pixel in the current reference frame becomes associated with a plurality of projected motion vectors, and selecting a first projected motion vector from the plurality of projected motion vectors as the motion vector used to determine the pixel value of the first pixel, the selection being based on the magnitudes of the projected motion vectors.
Type: Application
Filed: August 30, 2024
Publication date: December 19, 2024
Inventors: Lin Zheng, Yaowu Xu, Lester Lu, Jingning Han, Bohan Li
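A sketch of the projection-and-collision step follows. The abstract only says the selection is based on magnitudes; keeping the smallest-magnitude candidate is an assumption here, as are the names and the simple rational scaling of the MVs.

```python
def select_projected_mv(candidates):
    """Among MVs projected onto the same pixel, keep the one with the
    smallest magnitude (an assumed tie-breaking rule for illustration)."""
    return min(candidates, key=lambda mv: mv[0] * mv[0] + mv[1] * mv[1])

def project_mvs(mv_field, scale_num, scale_den):
    """Project each source MV onto the current reference frame, collecting
    every candidate that lands on a given pixel, then resolve collisions."""
    projected = {}
    for (x, y), (mx, my) in mv_field.items():
        px = x + (mx * scale_num) // scale_den
        py = y + (my * scale_num) // scale_den
        projected.setdefault((px, py), []).append((mx, my))
    return {pos: select_projected_mv(c) for pos, c in projected.items()}

# (0,0) and (1,1) both project onto pixel (2,1); the smaller MV (3,1) wins.
field = {(0, 0): (4, 2), (2, 2): (2, 0), (1, 1): (3, 1)}
print(project_mvs(field, 1, 2))
```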
-
Patent number: 12143605
Abstract: Transform modes are derived for inter-predicted blocks using side information. A prediction residual is generated for a current video block using a reference frame. Side information associated with the current video block, the reference frame, or both is identified. A trained transform is determined from amongst multiple trained transforms based on the side information; each of the trained transforms is determined using individual side information types and combinations of those types, and the side information represents values of one of the individual side information types or one of the combinations. The prediction residual is transformed according to the trained transform, and data associated with the transformed prediction residual and the side information are encoded to a bitstream.
Type: Grant
Filed: December 6, 2021
Date of Patent: November 12, 2024
Assignee: GOOGLE LLC
Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
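In effect, side information keys a table of offline-trained transforms. A minimal sketch, in which the side-information types (MV magnitude class, reference distance), the thresholds, and the tiny 2x2 matrices are all assumptions standing in for transforms trained on residuals that shared that side information:

```python
# one assumed trained transform per side-information key (2x2 for brevity)
TRAINED_TRANSFORMS = {
    ("mv_small", "near_ref"): [[1, 1], [1, -1]],
    ("mv_small", "far_ref"):  [[2, 1], [1, -2]],
    ("mv_large", "near_ref"): [[1, 2], [2, -1]],
    ("mv_large", "far_ref"):  [[1, 0], [0, 1]],
}

def side_info_key(mv, ref_distance):
    """Classify a block's side information (thresholds are assumptions)."""
    mv_class = "mv_small" if abs(mv[0]) + abs(mv[1]) < 8 else "mv_large"
    ref_class = "near_ref" if ref_distance <= 2 else "far_ref"
    return (mv_class, ref_class)

def transform_residual(residual, mv, ref_distance):
    """Apply the trained transform selected by the block's side information."""
    t = TRAINED_TRANSFORMS[side_info_key(mv, ref_distance)]
    return [[sum(t[i][k] * residual[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(transform_residual([[4, 0], [0, 4]], mv=(3, 1), ref_distance=1))
```

Note the design payoff: since the side information is itself available to (or coded for) the decoder, the transform choice can be rederived there rather than signaled with a separate index.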
-
Publication number: 20240323361
Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
Type: Application
Filed: May 30, 2024
Publication date: September 26, 2024
Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
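The structure here (and in the related entries 12003706, 20220217336, 11284071, and 20210185312 below) is that inter and intra blocks draw from different transform candidate sets. A minimal sketch, where the set contents are placeholders; only the relationships match the abstract: the intra fixed set is a proper subset of the inter fixed set, and each mode adds its own learned transform(s).

```python
INTER_FIXED   = ["DCT", "ADST", "FLIP_ADST", "IDENTITY"]
INTRA_FIXED   = ["DCT", "ADST"]                  # proper subset of INTER_FIXED
INTER_LEARNED = ["INTER_KLT_A", "INTER_KLT_B"]   # trained on inter residuals
INTRA_LEARNED = ["INTRA_KLT"]                    # trained on intra residuals

def available_transforms(prediction_mode):
    """Return the transform candidates for a block's prediction mode."""
    if prediction_mode == "inter":
        return INTER_FIXED + INTER_LEARNED
    return INTRA_FIXED + INTRA_LEARNED

# A coded transform index is interpreted against the mode's own set:
print(available_transforms("inter")[4])  # INTER_KLT_A
print(available_transforms("intra")[2])  # INTRA_KLT
```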
-
Publication number: 20240205458
Abstract: Transform prediction with parsing independent coding includes generating a reconstructed frame and outputting the reconstructed frame.
Type: Application
Filed: December 18, 2023
Publication date: June 20, 2024
Inventors: Onur Guleryuz, Zeyu Deng, Debargha Mukherjee, Lester Lu
-
Patent number: 12003706
Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
Type: Grant
Filed: March 21, 2022
Date of Patent: June 4, 2024
Assignee: GOOGLE LLC
Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
-
Publication number: 20230011893
Abstract: Residual coding using vector quantization (VQ) is described. A flag is decoded that indicates whether a residual block for the current block is encoded using VQ. If the flag indicates that the residual block is encoded using VQ, a parameter indicating an entry in a codebook is decoded, and the residual block is decoded using that entry. If the flag indicates that the residual block is not encoded using VQ, the residual block is decoded based on a skip flag indicating whether the current block is encoded using transform skip. The current block is reconstructed using the residual block.
Type: Application
Filed: December 23, 2019
Publication date: January 12, 2023
Inventors: Debargha Mukherjee, Lester Lu, Elliott Karpilovsky
-
Publication number: 20220217336
Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
Type: Application
Filed: March 21, 2022
Publication date: July 7, 2022
Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
-
Publication number: 20220094950
Abstract: Transform modes are derived for inter-predicted blocks using side information. A prediction residual is generated for a current video block using a reference frame. Side information associated with the current video block, the reference frame, or both is identified. A trained transform is determined from amongst multiple trained transforms based on the side information; each of the trained transforms is determined using individual side information types and combinations of those types, and the side information represents values of one of the individual side information types or one of the combinations. The prediction residual is transformed according to the trained transform, and data associated with the transformed prediction residual and the side information are encoded to a bitstream.
Type: Application
Filed: December 6, 2021
Publication date: March 24, 2022
Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
-
Patent number: 11284071
Abstract: Coding a block of video data includes determining a prediction mode for the block, which is an inter-prediction or intra-prediction mode, determining a transform type for the block, and coding the block using the prediction mode and the transform type. The transform type is one of a first plurality of transform types when the prediction mode is the inter-prediction mode, and is one of a second plurality of transform types when the prediction mode is the intra-prediction mode. The first plurality of transform types includes first fixed transform types and first mode-dependent transform types that are based on a first learned transform generated using inter-predicted blocks. The second plurality of transform types includes second fixed transform types and second mode-dependent transform types that are based on a second learned transform generated using intra-predicted blocks. The first and second fixed transform types have at least some fixed transform types in common.
Type: Grant
Filed: December 12, 2019
Date of Patent: March 22, 2022
Assignee: GOOGLE LLC
Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
-
Patent number: 11197004
Abstract: Transform modes are derived for inter-predicted blocks using side information available within a bitstream. An inter-predicted encoded video block and side information are identified within the bitstream. Based on the side information, a trained transform is determined, from amongst multiple trained transforms, for inverse transforming the transform coefficients of the inter-predicted encoded video block. The transform coefficients are inverse transformed according to the trained transform to produce a prediction residual. A video block is reconstructed using the prediction residual and a reference frame, and is then output within an output video stream for storage or display. To determine the trained transforms, a learning model uses individual side information types and combinations of the individual side information types processed against a training data set.
Type: Grant
Filed: July 2, 2020
Date of Patent: December 7, 2021
Assignee: GOOGLE LLC
Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
-
Publication number: 20210185312
Abstract: Coding a block of video data includes determining a prediction mode for the block, which is an inter-prediction or intra-prediction mode, determining a transform type for the block, and coding the block using the prediction mode and the transform type. The transform type is one of a first plurality of transform types when the prediction mode is the inter-prediction mode, and is one of a second plurality of transform types when the prediction mode is the intra-prediction mode. The first plurality of transform types includes first fixed transform types and first mode-dependent transform types that are based on a first learned transform generated using inter-predicted blocks. The second plurality of transform types includes second fixed transform types and second mode-dependent transform types that are based on a second learned transform generated using intra-predicted blocks. The first and second fixed transform types have at least some fixed transform types in common.
Type: Application
Filed: December 12, 2019
Publication date: June 17, 2021
Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky