Patents by Inventor Debargha Mukherjee
Debargha Mukherjee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250150641
Abstract: Entropy coding a sequence of syntax elements is described, where an observation for a syntax element of the sequence is determined, and the observation is arithmetic coded using a probability model. Thereafter, the probability model is updated using a time-variant update rate to produce an updated probability model. Updating the probability model includes regularizing one or more probability values of the probability model so that no probability of the updated probability model is below a defined minimum resolution. As a result, the use of a minimum probability value during the arithmetic coding, which can distort the probability model, may be omitted.
Type: Application
Filed: December 29, 2022
Publication date: May 8, 2025
Inventors: Jingning Han, Yaowu Xu, Joseph Young, In Suk Chong, Debargha Mukherjee
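The regularization step lends itself to a small illustration. The Python sketch below uses an assumed update schedule and an assumed minimum resolution of 1/1024 (neither is specified in the abstract): it moves the model toward the coded observation at a time-variant rate, then clamps and renormalizes so no probability falls below the minimum.

```python
def update_probability_model(probs, observed_symbol, update_count, min_prob=1.0 / 1024):
    """Update probs (a list summing to 1.0) after coding observed_symbol."""
    # Time-variant update rate: adapt quickly at first, more slowly later
    # (an assumed schedule, not the one defined in the application).
    rate = 1.0 / (4 + update_count)

    # Move probability mass toward the observed symbol.
    updated = [(1 - rate) * p + (rate if i == observed_symbol else 0.0)
               for i, p in enumerate(probs)]

    # Regularize: clamp every value to the minimum resolution, then renormalize.
    clamped = [max(p, min_prob) for p in updated]
    total = sum(clamped)
    return [p / total for p in clamped]
```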
-
Patent number: 12294705
Abstract: Residual coding using vector quantization (VQ) is described. A flag is decoded that indicates whether a residual block for the current block is encoded using VQ. In response to the flag indicating that the residual block is encoded using VQ, a parameter indicating an entry in a codebook is decoded, and the residual block is decoded using the entry. In response to the flag indicating that the residual block is not encoded using VQ, the residual block is decoded based on a skip flag indicating whether the current block is encoded using transform skip. The current block is reconstructed using the residual block.
Type: Grant
Filed: December 23, 2019
Date of Patent: May 6, 2025
Assignee: GOOGLE LLC
Inventors: Debargha Mukherjee, Lester Lu, Elliott Karpilovsky
-
Publication number: 20250119577
Abstract: Encoding using chroma intra prediction with filtering includes encoding a current block from a current frame, which includes obtaining a first chroma prediction value for a current chroma pixel using a current spatial intra prediction mode, obtaining a current luma prediction value for a current luma pixel collocated with the current chroma pixel, obtaining a second chroma prediction value for the current chroma pixel for the current chroma component by applying derived filter coefficients to the current luma prediction value, obtaining, as a third chroma prediction value for the current chroma pixel for the current chroma component, a weighted average of the first chroma prediction value and the second chroma prediction value, obtaining encoded chroma pixel data for the current chroma pixel by encoding the current chroma pixel using the third chroma prediction value, and including the encoded chroma pixel data in the encoded block data.
Type: Application
Filed: September 30, 2024
Publication date: April 10, 2025
Inventors: Xiang Li, Debargha Mukherjee, Yaowu Xu, Jingning Han
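As a rough illustration of the blended prediction described above, the sketch below forms the second chroma prediction by applying a scale-and-offset filter to the collocated luma prediction and averages it with the spatial chroma prediction. The filter form and the equal weighting are assumptions; the abstract only states that derived coefficients and a weighted average are used.

```python
def predict_chroma_pixel(spatial_chroma_pred, luma_pred, scale, offset, weight=0.5):
    """Combine a spatial chroma prediction with a luma-derived chroma prediction."""
    # Second chroma prediction: derived filter applied to the collocated luma
    # prediction (a simple scale/offset form is assumed here).
    luma_derived_pred = scale * luma_pred + offset
    # Third chroma prediction: weighted average of the two candidate predictions.
    return weight * spatial_chroma_pred + (1 - weight) * luma_derived_pred
```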
-
Patent number: 12267484
Abstract: Encoding and decoding using a warped reference list includes generating a reconstructed frame from an encoded bitstream by, for decoding a current block for the reconstructed frame, obtaining a dynamic reference list, obtaining a warped reference list, decoding a warped reference list index value, obtaining optimal predicted warped model parameters from the warped reference list in accordance with the index value, decoding differential warped model parameters, obtaining, as optimal warped model parameters, a result of adding the optimal predicted warped model parameters and the differential warped model parameters, obtaining predicted block data in accordance with the optimal warped model parameters, decoding residual block data, and obtaining, as decoded block data for the current block, a result of adding the residual block data and the predicted block data.
Type: Grant
Filed: December 5, 2022
Date of Patent: April 1, 2025
Assignee: GOOGLE LLC
Inventors: Mohammed Golam Sarwer, Rachel Barker, Jianle Chen, Debargha Mukherjee
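The decoder-side reconstruction of the warp parameters reduces to an element-wise add, sketched below; the list-of-parameters representation and function name are illustrative only.

```python
def reconstruct_warp_params(warped_reference_list, index, differential_params):
    """Select the predicted warp parameters by index and add the decoded deltas."""
    predicted = warped_reference_list[index]
    return [p + d for p, d in zip(predicted, differential_params)]
```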
-
Patent number: 12262065
Abstract: A portion Y of a degraded frame is restored using a projection operation that uses a first projection parameter α, a second projection parameter β, and at least two guide portions. Restoring the portion Y of the degraded frame includes generating, using first restoration parameters, a first guide portion Y1 for the portion Y; generating, using second restoration parameters, a second guide portion Y2 for the portion Y; and generating a reconstructed portion YR, wherein the projection operation is based on α(Y1 − Y) + β(Y2 − Y).
Type: Grant
Filed: February 9, 2024
Date of Patent: March 25, 2025
Assignee: GOOGLE LLC
Inventor: Debargha Mukherjee
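A minimal sketch of the projection, assuming the reconstructed portion is the degraded portion plus the projection term (which the abstract implies but does not state outright):

```python
import numpy as np

def project_restore(Y, Y1, Y2, alpha, beta):
    """Restore a portion Y from two guide portions Y1 and Y2.

    YR = Y + alpha * (Y1 - Y) + beta * (Y2 - Y)
    """
    Y, Y1, Y2 = np.asarray(Y, float), np.asarray(Y1, float), np.asarray(Y2, float)
    return Y + alpha * (Y1 - Y) + beta * (Y2 - Y)
```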
-
Publication number: 20250088635
Abstract: Entropy coding a sequence of transform coefficients includes determining a predictor value corresponding to a transform coefficient, selecting a probability model from a set of pre-defined probability models based on the predictor value, and entropy coding a symbol associated with the transform coefficient using the selected probability model. The predictor value can be calculated based on a previous predictor value used for coding an immediately preceding symbol associated with an immediately preceding transform coefficient of the sequence of the transform coefficients. The predictor value can be further calculated based on the immediately preceding symbol.
Type: Application
Filed: August 13, 2024
Publication date: March 13, 2025
Inventors: Joseph Young, In Suk Chong, Debargha Mukherjee
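A minimal sketch of the selection mechanism described above; the recursion used to carry the predictor forward and the clamping of the predictor to the number of available models are assumptions, not taken from the publication.

```python
def select_probability_model(models, predictor_value):
    """Pick a pre-defined probability model indexed by the predictor value."""
    return models[min(predictor_value, len(models) - 1)]

def next_predictor(previous_predictor, previous_symbol):
    """Carry the predictor forward from the preceding predictor and symbol
    (a simple running average; the actual recursion is not given in the abstract)."""
    return (previous_predictor + previous_symbol) // 2
```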
-
Publication number: 20250047833
Abstract: A new reference framework is described that ranks reference frames based on a normative procedure (e.g., a calculated score) and signals the reference frames based on their ranks. The bitstream syntax is simplified by using a context tree that relies on the ranking. Moreover, mapping reference frames to buffers does not have to be signaled and can be determined at the decoder. In an example, coding the identifier of a reference frame used to code a current block can include identifying a syntax element corresponding to the identifier, determining context information for the syntax element, determining a node of a context tree that includes the syntax element, and coding the syntax element according to a probability model using the context information associated with the node. The context tree is a binary tree that includes, as nodes, the available reference frames arranged in the ranking.
Type: Application
Filed: December 7, 2022
Publication date: February 6, 2025
Inventors: Sarah Parker, Debargha Mukherjee, Lester Lu
-
Publication number: 20250039436
Abstract: Coding that includes dynamic range handling of a high-dimensional inverse autocorrelation in optical flow refinement includes obtaining a warped refinement model from available warped refinement models, wherein the available warped refinement models include a four-parameter scaling refinement model, a three-parameter scaling refinement model, and a four-parameter rotational refinement model; obtaining refined motion vectors using the warped refinement model and previously obtained reference frame data in the absence of data expressly indicating the refined motion vectors in the encoded bitstream, wherein obtaining the refined motion vectors includes using a dynamic range adjusted autocorrelation matrix; generating refined prediction block data using the refined motion vectors; generating reconstructed block data using the refined prediction block data; including the reconstructed block data in reconstructed frame data for the current frame; and outputting the reconstructed frame data.
Type: Application
Filed: July 26, 2024
Publication date: January 30, 2025
Inventors: Lester Lu, Xiang Li, Debargha Mukherjee
-
Publication number: 20240388690
Abstract: Video coding using warped motion compensation is described. Extended rotations for the warped motion compensation can be explicitly signaled. For example, motion parameters for predicting the current block and a rotation angle can be decoded. A warping matrix is obtained using the motion parameters and the rotation angle, and a prediction block is obtained by projecting the current block to a quadrilateral in a reference frame. Also described is determining a prediction model of the current block and obtaining a prediction block by projecting the current block to a quadrilateral in a reference frame. Determining the prediction model can include determining whether to predict the current block using a motion vector, a local warping model, or a global motion model, obtaining motion parameters of the prediction model, decoding a rotation angle, and obtaining a warping matrix using the motion parameters and the rotation angle.
Type: Application
Filed: July 15, 2021
Publication date: November 21, 2024
Inventors: Yue Chen, Yu Wang, Hui Su, Debargha Mukherjee, Yunqing Wang
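One way to picture a "warping matrix from motion parameters plus a signaled rotation angle" is the composition below, where a scale/translation model is combined with an explicit rotation. The parameterization is assumed for illustration; the abstract does not define the matrix layout.

```python
import math

def build_warp_matrix(scale_x, scale_y, tx, ty, angle_radians):
    """Compose scaling, the signaled rotation, and translation into a 2x3 warp."""
    c, s = math.cos(angle_radians), math.sin(angle_radians)
    return [
        [scale_x * c, -scale_y * s, tx],
        [scale_x * s,  scale_y * c, ty],
    ]

def project_point(warp, x, y):
    """Map a pixel of the current block to its position in the reference frame."""
    (a, b, e), (c, d, f) = warp
    return a * x + b * y + e, c * x + d * y + f
```

Mapping the four corners of the current block through project_point yields the quadrilateral in the reference frame mentioned in the abstract.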
-
Publication number: 20240380924
Abstract: Decoding a current block of a current frame includes decoding, from a compressed bitstream, one or more syntax elements indicating that a geometric transformation is to be applied; applying the geometric transformation to at least a portion of the current frame to obtain a transformed portion; and obtaining a prediction of the current block based on the transformed portion and an intra-prediction mode.
Type: Application
Filed: April 15, 2024
Publication date: November 14, 2024
Inventors: Bohan Li, Debargha Mukherjee, Yaowu Xu, Jingning Han
-
Patent number: 12143605
Abstract: Transform modes are derived for inter-predicted blocks using side information. A prediction residual is generated for a current video block using a reference frame. Side information associated with one or both of the current video block or the reference frame is identified. A trained transform is determined from amongst multiple trained transforms based on the side information, in which each of the trained transforms is determined using individual side information types and combinations of the individual side information types, and the side information represents values of one of the individual side information types or one of the combinations of the individual side information types. The prediction residual is transformed according to the trained transform, and data associated with the transformed prediction residual and the side information are encoded to a bitstream.
Type: Grant
Filed: December 6, 2021
Date of Patent: November 12, 2024
Assignee: GOOGLE LLC
Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
-
Publication number: 20240357098
Abstract: Obtaining a restored frame from a degraded frame includes obtaining, for a pixel of the degraded frame, magnitude features based on a first window centered at the pixel. A cardinality N of the magnitude features is at least 1. The magnitude features are used to obtain a pixel-adaptive filter. The pixel-adaptive filter is applied to the pixel to obtain a pixel of the restored frame.
Type: Application
Filed: August 12, 2022
Publication date: October 24, 2024
Inventors: Onur Guleryuz, Debargha Mukherjee
-
Patent number: 12120345
Abstract: A method for intra-prediction of a current block includes selecting peripheral pixels of the current block, where the peripheral pixels are used to generate a prediction block for the current block; for each prediction pixel of the prediction block, performing steps including selecting two respective pixels of the peripheral pixels and calculating the prediction pixel by interpolating at least the two respective pixels; and coding a residual block corresponding to a difference between the current block and the prediction block.
Type: Grant
Filed: May 14, 2020
Date of Patent: October 15, 2024
Assignee: GOOGLE LLC
Inventors: James Bankoski, Debargha Mukherjee
-
Publication number: 20240333961
Abstract: Generating a compound predictor block includes generating a first predictor block and generating a second predictor block. The first predictor block includes a first pixel and the second predictor block includes a second pixel. The first and the second pixels are located at a same location within the first predictor block and the second predictor block, respectively. A first weight is determined for the first pixel based on a difference between a first value of the first pixel and a second value of the second pixel. A second weight is determined for the second pixel based on the first weight. The compound predictor block is generated by combining the first predictor block and the second predictor block. The compound predictor block includes a weighted pixel that is determined based on a weighted sum of the first pixel and the second pixel based on the first weight and the second weight.
Type: Application
Filed: June 13, 2024
Publication date: October 3, 2024
Inventors: Debargha Mukherjee, James Bankoski, Yue Chen, Yuxin Liu, Sarah Parker
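A per-pixel sketch of the weighting rule; the constants are placeholders, since the abstract only says that the first weight is a function of the pixel difference and that the second weight follows from the first.

```python
def compound_predict(pred1, pred2, base_weight=38, max_weight=64, step=4):
    """Blend two predictor blocks (flat pixel lists) with difference-modulated weights."""
    output = []
    for p1, p2 in zip(pred1, pred2):
        diff = abs(p1 - p2)
        w1 = min(max_weight, base_weight + diff // step)  # first weight from the difference
        w2 = max_weight - w1                              # second weight from the first
        output.append((w1 * p1 + w2 * p2 + max_weight // 2) // max_weight)
    return output
```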
-
Publication number: 20240323361
Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
Type: Application
Filed: May 30, 2024
Publication date: September 26, 2024
Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
-
Publication number: 20240314345
Abstract: A method for inter-prediction includes coding a first block of a current frame using a first motion vector (MV) and a reference frame type; storing, in at least one MV buffer, the first MV and the reference frame type; identifying MV candidates for coding a current block using the reference frame type; responsive to a determination that a cardinality of the MV candidates is less than a maximum number of MV candidates, identifying the first motion vector in the at least one MV buffer and, responsive to a determination that the first MV is not included in the MV candidates, adding the first MV as an MV candidate; and selecting one of the MV candidates for coding the current block.
Type: Application
Filed: July 15, 2021
Publication date: September 19, 2024
Inventors: Hui Su, Debargha Mukherjee
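The fallback described here reduces to a short membership check, sketched below (names are illustrative):

```python
def add_buffered_mv(mv_candidates, buffered_mv, max_candidates):
    """Append the buffered motion vector when the list is not full and it is new."""
    if len(mv_candidates) < max_candidates and buffered_mv not in mv_candidates:
        mv_candidates.append(buffered_mv)
    return mv_candidates
```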
-
Patent number: 12075081
Abstract: A super-resolution coding mode is described. An encoded image can be decoded from an encoded bitstream stored on a non-transitory computer-readable storage medium. A flag can indicate whether an image was encoded using the super-resolution mode at a first resolution. Responsive to the flag indicating that the image was encoded using the super-resolution mode, bits indicating an amount of scaling of the image are included. The image is decoded from the encoded bitstream to obtain a reconstructed image at the first resolution, and the reconstructed image is upscaled to a second resolution using the amount of scaling to obtain an upscaled reconstructed image. The second resolution is higher than the first resolution. Loop restoration parameters within the bitstream can be used for loop restoration filtering of the upscaled reconstructed image to obtain a loop restored image at the second resolution.
Type: Grant
Filed: January 17, 2023
Date of Patent: August 27, 2024
Assignee: GOOGLE LLC
Inventors: Urvang Joshi, Debargha Mukherjee, Andrew Simpson
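A rough decoder-side sketch of the mode: decode at the reduced resolution, then upscale by the signaled amount. The numerator/denominator parameterization and the nearest-neighbour resampling stand in for the normative upscaling filter and are assumptions.

```python
def apply_superres(reconstructed_rows, superres_flag, scale_numerator, scale_denominator=8):
    """Upscale a reconstructed frame horizontally when the super-resolution flag is set."""
    if not superres_flag:
        return reconstructed_rows
    scale = scale_numerator / scale_denominator          # e.g. 16/8 doubles the width
    src_width = len(reconstructed_rows[0])
    dst_width = int(round(src_width * scale))
    upscaled = []
    for row in reconstructed_rows:
        # Nearest-neighbour horizontal resampling as a stand-in for the real filter.
        upscaled.append([row[min(int(x / scale), src_width - 1)] for x in range(dst_width)])
    return upscaled
```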
-
Patent number: 12075089
Abstract: A method for coding a current block using an intra-prediction mode includes obtaining a focal point, the focal point having coordinates (a, b) in a coordinate system; and generating, using first peripheral pixels and second peripheral pixels, a prediction block for the current block, where the first peripheral pixels form a first peripheral pixel line constituting an x-axis, and where the second peripheral pixels form a second peripheral pixel line constituting a y-axis. Generating the prediction block includes, for each location (i, j) of the prediction block, determining at least one of an x-intercept or a y-intercept; and determining a prediction pixel value for that location of the prediction block using the at least one of the x-intercept or the y-intercept.
Type: Grant
Filed: May 14, 2020
Date of Patent: August 27, 2024
Assignee: GOOGLE LLC
Inventors: James Bankoski, Debargha Mukherjee
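To make the intercept step concrete: take the line through the focal point (a, b) and the prediction location (i, j) and find where it crosses the peripheral pixel lines. The sketch below treats i as the horizontal coordinate and j as the vertical coordinate, predicts from the nearest peripheral pixel at the intercept, and falls back when the line is parallel to an axis; these conventions are assumptions, not taken from the patent.

```python
def focal_point_prediction(top_row, left_column, a, b, i, j):
    """Predict pixel (i, j) from the peripheral line hit by the line through (a, b)."""
    dx, dy = i - a, j - b
    if dy != 0:
        # x-intercept: where the line through (a, b) and (i, j) crosses the x-axis (y = 0).
        x_intercept = a - b * dx / dy
        index = int(round(x_intercept))
        if 0 <= index < len(top_row):
            return top_row[index]
    if dx != 0:
        # y-intercept: where the same line crosses the y-axis (x = 0).
        y_intercept = b - a * dy / dx
        index = int(round(y_intercept))
        if 0 <= index < len(left_column):
            return left_column[index]
    return top_row[0]  # degenerate fallback
```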
-
Patent number: 12034963
Abstract: Generating a compound predictor block for a current block of video includes generating, for the current block, a first predictor block using one of inter-prediction or intra-prediction and generating a second predictor block. The first predictor block includes a first pixel and the second predictor block includes a second pixel that is co-located with the first pixel. A first weight is determined for the first pixel using a difference between a value of the first pixel and a value of the second pixel. A second weight is determined for the second pixel using the first weight. The compound predictor block is generated by combining the first predictor block and the second predictor block. The compound predictor block includes a weighted pixel that is determined using a weighted sum of the first pixel and the second pixel using the first weight and the second weight.
Type: Grant
Filed: April 28, 2022
Date of Patent: July 9, 2024
Assignee: GOOGLE LLC
Inventors: Debargha Mukherjee, James Bankoski, Yue Chen, Yuxin Liu, Sarah Parker
-
Publication number: 20240223796
Abstract: Coding using local global prediction modes with projected motion fields includes identifying a current frame, identifying a current reference frame, obtaining a projected motion field, for the current frame, using motion data from the current reference frame, identifying a current superblock from the current frame, obtaining reference warp motion parameters for the current superblock by fitting the projected motion field to a warp motion model, and using the reference warp motion parameters to code respective blocks from the superblock.
Type: Application
Filed: December 20, 2023
Publication date: July 4, 2024
Inventors: Debargha Mukherjee, Mohammed Golam Sarwer, Rachel Barker, Jianle Chen, Xiang Li
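The fitting step can be pictured as a least-squares problem over the projected motion field. The sketch below assumes an affine warp model; the actual model form and solver in the application may differ.

```python
import numpy as np

def fit_affine_warp(positions, motion_vectors):
    """Fit x' = a*x + b*y + e, y' = c*x + d*y + f to a projected motion field."""
    src = np.asarray(positions, dtype=float)               # (N, 2) pixel positions
    dst = src + np.asarray(motion_vectors, dtype=float)    # positions displaced by the field
    design = np.hstack([src, np.ones((src.shape[0], 1))])  # rows [x, y, 1]
    params_x, *_ = np.linalg.lstsq(design, dst[:, 0], rcond=None)  # (a, b, e)
    params_y, *_ = np.linalg.lstsq(design, dst[:, 1], rcond=None)  # (c, d, f)
    return params_x, params_y
```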