Patents by Inventor Debargha Mukherjee
Debargha Mukherjee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240098280
Abstract: Image coding using guided machine learning restoration may include obtaining reconstructed frame data by decoding, obtaining a restored frame by restoring the reconstructed frame, and outputting the restored frame. Obtaining the restored frame may include obtaining a reconstructed block, obtaining guide parameter values, obtaining a restored block, and including the restored block in the restored frame. Obtaining the restored block may include inputting the reconstructed block to an input layer of a trained guided convolutional neural network, wherein the neural network is constrained such that an output layer has a defined cardinality of channels, obtaining, from the output layer, neural network output channel predictions, obtaining a guided neural network prediction as a linear combination of the guide parameter values and the neural network output channel predictions, and generating the restored block using the guided neural network prediction.
Type: Application
Filed: January 19, 2021
Publication date: March 21, 2024
Inventors: Urvang Joshi, Yue Chen, Sarah Parker, Elliott Karpilovsky, Debargha Mukherjee
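The core combination step described in this abstract is a linear combination of per-block guide parameter values with the network's output channels. A minimal sketch of that step in plain Python (the function name, data layout, and the network producing the channels are all assumptions; the patent defines the actual scheme):

```python
def guided_prediction(channel_preds, guide_params):
    """Combine N neural-network output channels into one guided prediction.

    channel_preds: list of N equally sized blocks (lists of rows), one per
                   output channel of the constrained output layer.
    guide_params:  N scalar guide parameter values obtained for the block.
    """
    assert len(channel_preds) == len(guide_params)
    rows, cols = len(channel_preds[0]), len(channel_preds[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for pred, g in zip(channel_preds, guide_params):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += g * pred[r][c]
    return out
```

The restored block would then be generated from this guided prediction (for example, as a correction added to the reconstructed block), per the abstract's final step.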
-
Publication number: 20240098298
Abstract: Multiple global motion models associated with respective segments of a current frame are decoded from a compressed bitstream. Each global motion model is based on a segmentation of the current frame and represents a respective underlying motion of blocks within a respective segment. Blocks of the current frame are decoded by: for each inter-predicted block of a segment, decoding, from the compressed bitstream, an indication of whether to decode the inter-predicted block based on a global motion model of the multiple global motion models that is associated with the segment, or whether to decode the inter-predicted block based on a motion vector that is different from the global motion model; and decoding the inter-predicted block based on the indication.
Type: Application
Filed: November 28, 2023
Publication date: March 21, 2024
Inventors: Debargha Mukherjee, Yuxin Liu, Sarah Parker
-
Patent number: 11924476
Abstract: A device for restoring a degraded frame resulting from reconstruction of a source frame includes a processor that is configured to receive a compressed bitstream. The compressed bitstream includes a first projection parameter α, a second projection parameter β, first restoration parameters comprising a first radius value, and second restoration parameters comprising a second radius value. The processor is further configured to restore at least a portion of the degraded frame using a projection operation that uses the first projection parameter α, the second projection parameter β, and at least two guide tiles.
Type: Grant
Filed: July 18, 2022
Date of Patent: March 5, 2024
Assignee: GOOGLE LLC
Inventor: Debargha Mukherjee
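A sketch of what a two-guide projection operation of this form can look like: the degraded tile is moved toward a linear combination of the differences between each guide tile and the degraded tile. This is one common shape of self-guided restoration; the patent's exact projection and parameter semantics are not reproduced here:

```python
def project_restore(degraded, guide1, guide2, alpha, beta):
    """Restore a tile using two guide tiles and projection parameters.

    Each tile is a list of rows of pixel values; alpha and beta weight
    the per-pixel differences (guide - degraded) added back in.
    """
    return [
        [d + alpha * (g1 - d) + beta * (g2 - d)
         for d, g1, g2 in zip(dr, g1r, g2r)]
        for dr, g1r, g2r in zip(degraded, guide1, guide2)
    ]
```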
-
Patent number: 11870993
Abstract: Improved transforms are used to encode and decode large video and image blocks. During encoding, a prediction residual block having a large size (e.g., larger than 32×32) is generated. The pixel values of the prediction residual block are transformed to produce transform coefficients. After determining that the transform coefficients exceed a threshold cardinality representative of a maximum transform block size (e.g., 32×32), a number of the transform coefficients are discarded such that a remaining number of transform coefficients does not exceed the threshold cardinality. A transform block is then generated using the remaining number. During decoding, after determining that the transform coefficients exceed the threshold cardinality, a number of new coefficients are added to the transform coefficients such that a total number of transform coefficients exceeds the threshold cardinality. The transform coefficients are then inverse transformed into a prediction residual block having a large size.
Type: Grant
Filed: June 28, 2021
Date of Patent: January 9, 2024
Assignee: GOOGLE LLC
Inventors: Urvang Joshi, Debargha Mukherjee
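The truncate-then-refill idea can be sketched as follows, assuming coefficients are handled as a flat list in scan order and that the refilled coefficients are zeros (the abstract says only that "new coefficients are added", so zero-filling is an assumption):

```python
MAX_COEFFS = 32 * 32  # threshold cardinality for a 32x32 maximum transform size

def truncate_coeffs(coeffs):
    """Encoder side: keep at most MAX_COEFFS coefficients (the first ones
    in scan order, typically the low-frequency ones) and discard the rest."""
    return coeffs[:MAX_COEFFS] if len(coeffs) > MAX_COEFFS else coeffs

def pad_coeffs(coeffs, full_size):
    """Decoder side: append placeholder (here, zero) coefficients so the
    inverse transform can run at the block's full size."""
    return coeffs + [0] * (full_size - len(coeffs))
```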
-
Publication number: 20230291925
Abstract: Video coding in accordance with an inter-intra prediction model may include coding an inter-prediction motion vector for a current block of a current frame, obtaining spatial block-context pixels oriented relative to the current block, generating an inter-prediction block, generating a corresponding set of reference block-context pixels oriented relative to the inter-prediction block, identifying inter-intra prediction parameters that correspond with minimizing error between the spatial block-context pixels and the reference block-context pixels, generating a prediction block for the current block by, for a current pixel of the current block, obtaining an inter-prediction pixel, determining a predictor for the current pixel using a combination of the inter-prediction pixel and the inter-intra prediction parameters, and including the predictor in the prediction block.
Type: Application
Filed: July 1, 2020
Publication date: September 14, 2023
Applicant: Google LLC
Inventors: Debargha Mukherjee, Yue Chen, Urvang Joshi, Sarah Parker, Elliott Karpilovsky, Hui Su
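One way to realize "parameters that minimize error between the context pixel sets" is an ordinary least-squares fit. The sketch below assumes a simple affine model `spatial ≈ w * reference + b`, which is an illustrative choice, not the parametric form claimed by the application:

```python
def fit_inter_intra(spatial_ctx, ref_ctx):
    """Fit (w, b) minimizing sum((spatial - (w*ref + b))**2) over the
    block-context pixels: a closed-form least-squares line fit."""
    n = len(ref_ctx)
    mx = sum(ref_ctx) / n
    my = sum(spatial_ctx) / n
    sxx = sum((x - mx) ** 2 for x in ref_ctx)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ref_ctx, spatial_ctx))
    w = sxy / sxx if sxx else 1.0
    b = my - w * mx
    return w, b

def apply_params(inter_pixel, w, b):
    """Combine one inter-prediction pixel with the fitted parameters."""
    return w * inter_pixel + b
```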
-
Patent number: 11689726
Abstract: A hybrid apparatus for coding a video stream includes a first encoder. The first encoder includes a neural network having at least one hidden layer, and the neural network receives source data from the video stream at a first hidden layer of the at least one hidden layer, receives side information correlated with the source data at the first hidden layer, and generates guided information using the source data and the side information. The first encoder outputs the guided information and the side information for a decoder to reconstruct the source data.
Type: Grant
Filed: July 19, 2019
Date of Patent: June 27, 2023
Assignee: GOOGLE LLC
Inventors: Debargha Mukherjee, Urvang Joshi, Yue Chen, Sarah Parker
-
Publication number: 20230199179
Abstract: Video coding may include generating, by a processor, a decoded frame by decoding a current frame from an encoded bitstream and outputting a reconstructed frame based on the decoded frame. Decoding includes identifying a current encoded block from the current frame, identifying a prediction coding model for the current block, wherein the prediction coding model is a machine learning prediction coding model from a plurality of machine learning prediction coding models, identifying reference values for decoding the current block based on the prediction coding model, obtaining prediction values based on the prediction coding model and the reference values, generating a decoded block corresponding to the current encoded block based on the prediction values, and including the decoded block in the decoded frame.
Type: Application
Filed: February 23, 2023
Publication date: June 22, 2023
Inventors: Debargha Mukherjee, Urvang Joshi, Yue Chen, Sarah Parker
-
Publication number: 20230179789
Abstract: A super-resolution coding mode is described. An encoded image can be decoded from an encoded bitstream stored on a non-transitory computer-readable storage medium. A flag can indicate whether an image was encoded using the super-resolution mode at a first resolution. Responsive to the flag indicating that the image was encoded using the super-resolution mode, bits indicating an amount of scaling of the image are included. The image is decoded from the encoded bitstream to obtain a reconstructed image at the first resolution, and the reconstructed image is upscaled to a second resolution using the amount of scaling to obtain an upscaled reconstructed image. The second resolution is higher than the first resolution. Loop restoration parameters within the bitstream can be used for loop restoration filtering of the upscaled reconstructed image to obtain a loop restored image at the second resolution.
Type: Application
Filed: January 17, 2023
Publication date: June 8, 2023
Inventors: Urvang Joshi, Debargha Mukherjee, Andrew Simpson
-
Patent number: 11601644
Abstract: Video coding may include generating, by a processor, a decoded frame by decoding a current frame from an encoded bitstream and outputting a reconstructed frame based on the decoded frame. Decoding includes identifying a current encoded block from the current frame, identifying a prediction coding model for the current block, wherein the prediction coding model is a machine learning prediction coding model from a plurality of machine learning prediction coding models, identifying reference values for decoding the current block based on the prediction coding model, obtaining prediction values based on the prediction coding model and the reference values, generating a decoded block corresponding to the current encoded block based on the prediction values, and including the decoded block in the decoded frame.
Type: Grant
Filed: March 7, 2019
Date of Patent: March 7, 2023
Assignee: GOOGLE LLC
Inventors: Debargha Mukherjee, Urvang Joshi, Yue Chen, Sarah Parker
-
Publication number: 20230058845
Abstract: A method for coding a current block using an intra-prediction mode includes obtaining a focal point, the focal point having coordinates (a, b) in a coordinate system; and generating, using first peripheral pixels and second peripheral pixels, a prediction block for the current block, where the first peripheral pixels form a first peripheral pixel line constituting an x-axis, and where the second peripheral pixels form a second peripheral pixel line constituting a y-axis. Generating the prediction block includes, for each location (i, j) of the prediction block, determining at least one of an x-intercept or a y-intercept; and determining a prediction pixel value for that location using the at least one of the x-intercept or the y-intercept.
Type: Application
Filed: May 14, 2020
Publication date: February 23, 2023
Inventors: James Bankoski, Debargha Mukherjee
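The intercept computation is plain line geometry: the line through the focal point (a, b) and a prediction location (x, y) is intersected with the two peripheral pixel lines (the x-axis and y-axis). The coordinate convention below is an assumption made for illustration:

```python
def intercepts(a, b, x, y):
    """Intercepts of the line through focal point (a, b) and prediction
    location (x, y) with the two peripheral pixel lines.

    Returns (x_intercept, y_intercept); an intercept is None when the
    line is parallel to that axis.
    """
    x_int = x - y * (x - a) / (y - b) if y != b else None
    y_int = y - x * (y - b) / (x - a) if x != a else None
    return x_int, y_int
```

The prediction pixel value would then be read (or interpolated) from the peripheral pixels at whichever intercept falls on an available pixel line.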
-
Publication number: 20230050660
Abstract: A method for intra-prediction of a current block includes selecting peripheral pixels of the current block, where the peripheral pixels are used to generate a prediction block for the current block; for each prediction pixel of the prediction block, performing steps including selecting two respective pixels of the peripheral pixels; and calculating the prediction pixel by interpolating at least the two respective pixels; and coding a residual block corresponding to a difference between the current block and the prediction block.
Type: Application
Filed: May 14, 2020
Publication date: February 16, 2023
Inventors: James Bankoski, Debargha Mukherjee
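The two per-pixel steps here (interpolate two selected peripheral pixels, then code the block difference) can be sketched directly; linear interpolation is an illustrative assumption, since the abstract does not name the interpolation filter:

```python
def interpolate_pair(p0, p1, t):
    """Linear interpolation between the two peripheral pixels selected
    for a prediction pixel; t in [0, 1] is the position between them."""
    return (1.0 - t) * p0 + t * p1

def residual(current, prediction):
    """Residual block to be coded: current block minus prediction block."""
    return [[c - p for c, p in zip(cr, pr)]
            for cr, pr in zip(current, prediction)]
```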
-
Patent number: 11558631
Abstract: A super-resolution coding mode is described. An encoded image can be decoded by decoding, from an encoded bitstream, a flag indicating whether an image was encoded using the super-resolution mode. The image is encoded at a first resolution. Responsive to the flag indicating that the image was encoded using the super-resolution mode, bits indicating an amount of scaling of the image are decoded. The image is decoded from the encoded bitstream to obtain a reconstructed image at the first resolution, and the reconstructed image is upscaled to a second resolution using the amount of scaling to obtain an upscaled reconstructed image. The second resolution is higher than the first resolution. Loop restoration filtering is applied to the upscaled reconstructed image using loop restoration parameters to obtain a loop restored image at the second resolution.
Type: Grant
Filed: March 31, 2020
Date of Patent: January 17, 2023
Assignee: GOOGLE LLC
Inventors: Urvang Joshi, Debargha Mukherjee, Andrew Simpson
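The decode path (reconstruct at the low first resolution, then upscale by the signaled amount) can be sketched as below. Nearest-neighbor sampling stands in for the codec's actual upscaling filter, and the rational `num/den` scale factor is an assumed representation of the signaled "amount of scaling":

```python
def superres_upscale(recon, scale_num, scale_den):
    """Upscale a reconstructed image (list of pixel rows) horizontally by
    scale_num/scale_den. Loop restoration filtering would follow at the
    upscaled resolution; it is not modeled here."""
    out_w = len(recon[0]) * scale_num // scale_den
    return [[row[c * scale_den // scale_num] for c in range(out_w)]
            for row in recon]
```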
-
Publication number: 20230011893
Abstract: Residual coding using vector quantization (VQ) is described. A flag indicates whether a residual block for the current block is encoded using VQ. In response to the flag indicating that the residual block is encoded using VQ, a parameter indicating an entry in a codebook is decoded, and the residual block is decoded using the entry. In response to the flag indicating that the residual block is not encoded using VQ, the residual block is decoded based on a skip flag indicating whether the current block is encoded using transform skip. The current block is reconstructed using the residual block.
Type: Application
Filed: December 23, 2019
Publication date: January 12, 2023
Inventors: Debargha Mukherjee, Lester Lu, Elliott Karpilovsky
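The decoder-side branching the abstract describes is simple to sketch: the VQ flag selects between a codebook lookup and the ordinary residual-decoding path. The function and parameter names are illustrative, not from the application:

```python
def decode_residual(use_vq, codebook, index=None, fallback_decode=None):
    """If the VQ flag is set, the residual block is the signaled codebook
    entry; otherwise defer to the usual transform / transform-skip
    residual decoding path (represented here by a callback)."""
    if use_vq:
        return codebook[index]
    return fallback_decode()
```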
-
Publication number: 20220353545
Abstract: A device for restoring a degraded frame resulting from reconstruction of a source frame includes a processor that is configured to receive a compressed bitstream. The compressed bitstream includes a first projection parameter α, a second projection parameter β, first restoration parameters comprising a first radius value, and second restoration parameters comprising a second radius value. The processor is further configured to restore at least a portion of the degraded frame using a projection operation that uses the first projection parameter α, the second projection parameter β, and at least two guide tiles.
Type: Application
Filed: July 18, 2022
Publication date: November 3, 2022
Inventor: Debargha Mukherjee
-
Publication number: 20220345704
Abstract: Transform-level partitioning of a prediction residual block is performed to improve compression efficiency of video data. During encoding, a prediction residual block is generated responsive to prediction-level partitioning performed against a video block, a transform block partition type to use is determined based on the prediction residual block, a non-recursive transform-level partitioning is performed against the prediction residual block according to the transform block partition type, and transform blocks generated as a result of the transform-level partitioning are encoded to a bitstream.
Type: Application
Filed: July 8, 2022
Publication date: October 27, 2022
Inventors: Sarah Parker, Debargha Mukherjee, Yue Chen, Elliott Karpilovsky, Urvang Joshi
-
Publication number: 20220256186
Abstract: Generating a compound predictor block for a current block of video includes generating, for the current block, a first predictor block using one of inter-prediction or intra-prediction and generating a second predictor block. The first predictor block includes a first pixel and the second predictor block includes a second pixel that is co-located with the first pixel. A first weight is determined for the first pixel using a difference between a value of the first pixel and a value of the second pixel. A second weight is determined for the second pixel using the first weight. The compound predictor block is generated by combining the first predictor block and the second predictor block. The compound predictor block includes a weighted pixel that is determined using a weighted sum of the first pixel and the second pixel using the first weight and the second weight.
Type: Application
Filed: April 28, 2022
Publication date: August 11, 2022
Inventors: Debargha Mukherjee, James Bankoski, Yue Chen, Yuxin Liu, Sarah Parker
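The weighting scheme (first weight from the pixel difference, second weight from the first) can be sketched per pixel. The specific mapping from difference to weight below is a hypothetical linear one; the application defines the actual mapping:

```python
def compound_pixel(p1, p2, max_diff=255):
    """Blend co-located pixels from two predictor blocks.

    The first weight is derived from how much the predictors disagree
    at this pixel (hypothetical linear mapping); the second weight is
    derived from the first so the two sum to one.
    """
    diff = abs(p1 - p2)
    w1 = 1.0 - diff / max_diff  # bigger disagreement -> less weight on p1
    w2 = 1.0 - w1
    return w1 * p1 + w2 * p2
```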
-
Patent number: 11405653
Abstract: A method includes generating, using first restoration parameters, a first guide tile for a degraded tile of the degraded frame, the degraded tile corresponding to a source tile of the source frame; generating, using second restoration parameters, a second guide tile for the degraded tile of the degraded frame, the second restoration parameters being different from the first restoration parameters; determining a first tile difference between the source tile and the first guide tile; determining a second tile difference between the source tile and the second guide tile; calculating projection parameters that minimize a difference between a restored tile of the degraded tile and the source tile; and encoding, in an encoded bitstream, the projection parameters. The difference between the restored tile of the degraded tile and the source tile is a linear combination, using the projection parameters, of the first tile difference and the second tile difference.
Type: Grant
Filed: October 29, 2019
Date of Patent: August 2, 2022
Assignee: GOOGLE LLC
Inventor: Debargha Mukherjee
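Minimizing a linear combination of two tile differences against the source is a two-parameter least-squares problem, solvable in closed form from the 2×2 normal equations. A sketch over flattened pixel lists (an illustration of the minimization, not the patented procedure itself):

```python
def fit_projection_params(source, degraded, guide1, guide2):
    """Least-squares (alpha, beta) so that
    degraded + alpha*(guide1 - degraded) + beta*(guide2 - degraded)
    best approximates source, via 2x2 normal equations."""
    e  = [s - d for s, d in zip(source, degraded)]
    d1 = [g - d for g, d in zip(guide1, degraded)]
    d2 = [g - d for g, d in zip(guide2, degraded)]
    a11 = sum(x * x for x in d1)
    a22 = sum(x * x for x in d2)
    a12 = sum(x * y for x, y in zip(d1, d2))
    b1  = sum(x * y for x, y in zip(d1, e))
    b2  = sum(x * y for x, y in zip(d2, e))
    det = a11 * a22 - a12 * a12
    if det == 0:
        return 0.0, 0.0  # degenerate guides: fall back to no correction
    alpha = (b1 * a22 - b2 * a12) / det
    beta  = (a11 * b2 - a12 * b1) / det
    return alpha, beta
```

The fitted (alpha, beta) are what gets encoded in the bitstream; the decoder then applies the same projection to reproduce the restored tile.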
-
Patent number: 11388401
Abstract: Transform-level partitioning of a prediction residual block is performed to improve compression efficiency of video data. During encoding, a prediction residual block is generated responsive to prediction-level partitioning performed against a video block, a transform block partition type to use is determined based on the prediction residual block, a non-recursive transform-level partitioning is performed against the prediction residual block according to the transform block partition type, and transform blocks generated as a result of the transform-level partitioning are encoded to a bitstream.
Type: Grant
Filed: June 26, 2020
Date of Patent: July 12, 2022
Assignee: GOOGLE LLC
Inventors: Sarah Parker, Debargha Mukherjee, Yue Chen, Elliott Karpilovsky, Urvang Joshi
-
Publication number: 20220217336
Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
Type: Application
Filed: March 21, 2022
Publication date: July 7, 2022
Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
-
Publication number: 20220207654
Abstract: Guided restoration is used to restore video data degraded from a video frame. The video frame is divided into restoration units (RUs) which each correspond to one or more blocks of the video frame. Restoration schemes are selected for each RU. The restoration schemes may indicate to use one of a plurality of neural networks trained for the guided restoration. Alternatively, the restoration schemes may indicate to use a neural network and a filter-based restoration tool. The video frame is then restored by processing each RU according to the respective selected restoration scheme. During encoding, the restored video frame is encoded to an output bitstream, and the use of the selected restoration schemes may be signaled within the output bitstream. During decoding, the restored video frame is output to an output video stream.
Type: Application
Filed: March 18, 2022
Publication date: June 30, 2022
Inventors: Debargha Mukherjee, Urvang Joshi, Yue Chen, Sarah Parker