Patents by Inventor Elliott Karpilovsky

Elliott Karpilovsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240098280
    Abstract: Image coding using guided machine learning restoration may include obtaining reconstructed frame data by decoding, obtaining a restored frame by restoring the reconstructed frame, and outputting the restored frame. Obtaining the restored frame may include obtaining a reconstructed block, obtaining guide parameter values, obtaining a restored block, and including the restored block in the restored frame. Obtaining the restored block may include inputting the reconstructed block to an input layer of a trained guided convolutional neural network, wherein the neural network is constrained such that an output layer has a defined cardinality of channels, obtaining, from the output layer, neural network output channel predictions, obtaining a guided neural network prediction as a linear combination of the guide parameter values and the neural network output channel predictions, and generating the restored block using the guided neural network prediction.
    Type: Application
    Filed: January 19, 2021
    Publication date: March 21, 2024
    Inventors: Urvang Joshi, Yue Chen, Sarah Parker, Elliott Karpilovsky, Debargha Mukherjee
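The guided-restoration step above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the CNN's constrained output layer has already produced its C channel predictions, and it assumes the guided prediction is applied as an additive correction to the reconstruction (the abstract only says the restored block is "generated using" the guided prediction).

```python
import numpy as np

def guided_restoration(recon_block, channel_preds, guide_params):
    """Restore a reconstructed block with a guided CNN prediction.

    recon_block:   (H, W) reconstructed pixels
    channel_preds: (C, H, W) predictions from the network's output layer,
                   whose channel count C is constrained to a defined cardinality
    guide_params:  (C,) guide parameter values obtained for this block
    """
    # Guided prediction: linear combination of the C output channel
    # predictions, weighted by the guide parameter values.
    guided = np.tensordot(guide_params, channel_preds, axes=1)
    # Assumption: the guided prediction is an additive correction.
    return recon_block + guided
```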
  • Publication number: 20230291925
    Abstract: Video coding in accordance with an inter-intra prediction model may include coding an inter-prediction motion vector for a current block of a current frame, obtaining spatial block-context pixels oriented relative to the current block, generating an inter-prediction block, generating a corresponding set of reference block-context pixels oriented relative to the inter-prediction block, identifying inter-intra prediction parameters that correspond with minimizing error between the spatial block-context pixels and the reference block-context pixels, generating a prediction block for the current block by, for a current pixel of the current block, obtaining an inter-prediction pixel, determining a predictor for the current pixel using a combination of the inter-prediction pixel and the inter-intra prediction parameters, and including the predictor in the prediction block.
    Type: Application
    Filed: July 1, 2020
    Publication date: September 14, 2023
    Applicant: Google LLC
    Inventors: Debargha Mukherjee, Yue Chen, Urvang Joshi, Sarah Parker, Elliott Karpilovsky, Hui Su
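The parameter-fitting step above can be sketched with an ordinary least-squares fit. The abstract only requires parameters "minimizing error" between the two context-pixel sets; the linear gain-plus-offset model (a, b) used here is an assumption for illustration.

```python
import numpy as np

def fit_inter_intra_params(spatial_ctx, ref_ctx):
    """Fit inter-intra prediction parameters from context pixels.

    Finds (a, b) minimizing ||a * ref_ctx + b - spatial_ctx||^2, i.e. the
    error between the spatial block-context pixels around the current block
    and the reference block-context pixels around the inter-prediction block.
    """
    A = np.stack([ref_ctx, np.ones_like(ref_ctx)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, spatial_ctx, rcond=None)
    return a, b

def predict_block(inter_pred_block, a, b):
    # Each predictor combines the inter-prediction pixel with the
    # fitted inter-intra prediction parameters.
    return a * inter_pred_block + b
```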
  • Publication number: 20230011893
    Abstract: Residual coding using vector quantization (VQ) is described. A flag indicating whether a residual block for the current block is encoded using VQ is decoded. In response to the flag indicating that the residual block is encoded using VQ, a parameter indicating an entry in a codebook is decoded, and the residual block is decoded using the entry. In response to the flag indicating that the residual block is not encoded using VQ, the residual block is decoded based on a skip flag indicating whether the current block is encoded using transform skip. The current block is reconstructed using the residual block.
    Type: Application
    Filed: December 23, 2019
    Publication date: January 12, 2023
    Inventors: Debargha Mukherjee, Lester Lu, Elliott Karpilovsky
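The decoder-side branching described above can be sketched as below. The payload layout and the `inverse_transform` stand-in are assumptions for illustration; a real decoder would parse these fields from the bitstream and apply the codec's actual inverse transform.

```python
import numpy as np

def inverse_transform(coeffs):
    # Stand-in for the codec's inverse transform; a real decoder would
    # apply the signaled inverse DCT/ADST here.
    return np.asarray(coeffs)

def decode_residual(use_vq, payload, codebook):
    """Branch on the VQ flag to recover the residual block.

    use_vq:   decoded flag indicating the residual is VQ-coded
    payload:  a codebook index when use_vq is set, else a dict holding the
              transform-skip flag and the coded data (layout assumed here)
    codebook: list of candidate residual blocks (the VQ codebook)
    """
    if use_vq:
        # The decoded parameter indexes an entry in the codebook.
        return codebook[payload]
    if payload["skip"]:
        # Transform skip: residual samples are coded directly.
        return np.asarray(payload["data"])
    # Otherwise the data holds transform coefficients.
    return inverse_transform(payload["data"])
```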
  • Publication number: 20220345704
    Abstract: Transform-level partitioning of a prediction residual block is performed to improve compression efficiency of video data. During encoding, a prediction residual block is generated responsive to prediction-level partitioning performed against a video block, a transform block partition type to use is determined based on the prediction residual block, a non-recursive transform-level partitioning is performed against the prediction residual block according to the transform block partition type, and transform blocks generated as a result of the transform-level partitioning are encoded to a bitstream.
    Type: Application
    Filed: July 8, 2022
    Publication date: October 27, 2022
    Inventors: Sarah Parker, Debargha Mukherjee, Yue Chen, Elliott Karpilovsky, Urvang Joshi
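The non-recursive transform-level split described above can be sketched as follows. The partition-type names ("NONE", "HORZ", "VERT", "SPLIT") are hypothetical labels; the abstract does not name the partition types, only that one type is chosen from the prediction residual and applied without recursion.

```python
import numpy as np

def partition_residual(residual, part_type):
    """Apply one non-recursive transform-level split to a residual block."""
    h, w = residual.shape
    if part_type == "NONE":
        return [residual]
    if part_type == "HORZ":
        # Two stacked halves, each transformed independently.
        return [residual[: h // 2], residual[h // 2 :]]
    if part_type == "VERT":
        return [residual[:, : w // 2], residual[:, w // 2 :]]
    # "SPLIT": four quadrants, still a single (non-recursive) level.
    return [residual[: h // 2, : w // 2], residual[: h // 2, w // 2 :],
            residual[h // 2 :, : w // 2], residual[h // 2 :, w // 2 :]]
```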
  • Patent number: 11388401
    Abstract: Transform-level partitioning of a prediction residual block is performed to improve compression efficiency of video data. During encoding, a prediction residual block is generated responsive to prediction-level partitioning performed against a video block, a transform block partition type to use is determined based on the prediction residual block, a non-recursive transform-level partitioning is performed against the prediction residual block according to the transform block partition type, and transform blocks generated as a result of the transform-level partitioning are encoded to a bitstream.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: July 12, 2022
    Assignee: GOOGLE LLC
    Inventors: Sarah Parker, Debargha Mukherjee, Yue Chen, Elliott Karpilovsky, Urvang Joshi
  • Publication number: 20220217336
    Abstract: Decoding video data includes, for a block encoded using a prediction mode, determining a transform mode for the block using the prediction mode. The transform mode is a first mode when the prediction mode is an inter-prediction mode and is a second mode when the prediction mode is an intra-prediction mode. The first mode is an available first transform type that is a combination of transforms selected from first fixed transforms and first learned transforms that each comprise a respective transformation matrix generated iteratively using blocks predicted using the inter-prediction mode. The second mode is an available second transform type that is a combination of transforms selected from second fixed transforms, which is a proper subset of the first fixed transforms, and a second learned transform comprising a transformation matrix that is generated iteratively using blocks predicted using the intra-prediction mode. Decoding the block uses the prediction and transform modes.
    Type: Application
    Filed: March 21, 2022
    Publication date: July 7, 2022
    Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
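The mode-dependent transform selection above can be sketched as a set lookup. All transform names here are hypothetical placeholders; the abstract only requires that the intra fixed transforms be a proper subset of the inter fixed transforms, and that each mode's set adds a learned transform trained on blocks of that prediction mode.

```python
# Hypothetical transform names, for illustration only.
INTER_FIXED = ["DCT_DCT", "ADST_ADST", "DCT_ADST", "IDTX"]
INTRA_FIXED = ["DCT_DCT", "ADST_ADST"]       # proper subset of INTER_FIXED

INTER_SET = INTER_FIXED + ["LEARNED_INTER"]  # learned from inter-predicted blocks
INTRA_SET = INTRA_FIXED + ["LEARNED_INTRA"]  # learned from intra-predicted blocks

def transform_set(prediction_mode):
    """Select the candidate transform set from the block's prediction mode."""
    return INTER_SET if prediction_mode == "INTER" else INTRA_SET
```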
  • Publication number: 20220094950
    Abstract: Transform modes are derived for inter-predicted blocks using side information. A prediction residual is generated for a current video block using a reference frame. Side information associated with one or both of the current video block or the reference frame is identified. A trained transform is determined from amongst multiple trained transforms based on the side information, in which each of the trained transforms is determined using individual side information types and combinations of the individual side information types and the side information represents values of one of the individual side information types or one of the combinations of the individual side information types. The prediction residual is transformed according to the trained transform, and data associated with the transformed prediction residual and the side information are encoded to a bitstream.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
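The side-information lookup above can be sketched as below. The keying scheme (tuples of side-information type names, with combination keys preferred over single-type keys) and the example type names in the test are assumptions; the abstract only states that transforms are trained per individual side-information type and per combination of types.

```python
import numpy as np

def select_trained_transform(transforms, side_info):
    """Look up a trained transform from side-information values.

    transforms: dict mapping side-info type tuples (single types or
                combinations of types) to trained transform matrices
    side_info:  dict of side-information values for the current block
    """
    # Prefer a transform trained on the full combination of types,
    # then fall back to any single type that is present.
    combo = tuple(sorted(side_info))
    if combo in transforms:
        return transforms[combo]
    for t in combo:
        if (t,) in transforms:
            return transforms[(t,)]
    raise KeyError("no trained transform matches this side information")
```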
  • Patent number: 11284071
    Abstract: Coding a block of video data includes determining a prediction mode for the block, which is an inter-prediction or intra-prediction mode, determining a transform type for the block, and coding the block using the prediction mode and the transform type. The transform type is one of a first plurality of transform types when the prediction mode is the inter-prediction mode, and is one of a second plurality of transform types when the prediction mode is the intra-prediction mode. The first plurality of transform types includes first fixed transform types and first mode-dependent transform types that are based on a first learned transform generated using inter-predicted blocks. The second plurality of transform types includes second fixed transform types and second mode-dependent transform types that are based on a second learned transform generated using intra-predicted blocks. The first and second fixed transform types have at least some fixed transform types in common.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: March 22, 2022
    Assignee: GOOGLE LLC
    Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky
  • Publication number: 20210409705
    Abstract: Transform-level partitioning of a prediction residual block is performed to improve compression efficiency of video data. During encoding, a prediction residual block is generated responsive to prediction-level partitioning performed against a video block, a transform block partition type to use is determined based on the prediction residual block, a non-recursive transform-level partitioning is performed against the prediction residual block according to the transform block partition type, and transform blocks generated as a result of the transform-level partitioning are encoded to a bitstream.
    Type: Application
    Filed: June 26, 2020
    Publication date: December 30, 2021
    Inventors: Sarah Parker, Debargha Mukherjee, Yue Chen, Elliott Karpilovsky, Urvang Joshi
  • Patent number: 11197004
    Abstract: Transform modes are derived for inter-predicted blocks using side information available within a bitstream. An inter-predicted encoded video block and side information are identified within a bitstream. Based on the side information, a trained transform is determined for inverse transforming transform coefficients of the inter-predicted encoded video block from amongst multiple trained transforms. The transform coefficients of the inter-predicted encoded video block are inverse transformed according to the trained transform to produce a prediction residual. A video block is reconstructed using the prediction residual and the reference frame. The video block is then output within an output video stream for storage or display. To determine the trained transforms, a learning model uses individual side information types and combinations of the individual side information types processed against a training data set.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: December 7, 2021
    Assignee: GOOGLE LLC
    Inventors: Rohit Singh, Debargha Mukherjee, Elliott Karpilovsky, Lester Lu
  • Publication number: 20210185312
    Abstract: Coding a block of video data includes determining a prediction mode for the block, which is an inter-prediction or intra-prediction mode, determining a transform type for the block, and coding the block using the prediction mode and the transform type. The transform type is one of a first plurality of transform types when the prediction mode is the inter-prediction mode, and is one of a second plurality of transform types when the prediction mode is the intra-prediction mode. The first plurality of transform types includes first fixed transform types and first mode-dependent transform types that are based on a first learned transform generated using inter-predicted blocks. The second plurality of transform types includes second fixed transform types and second mode-dependent transform types that are based on a second learned transform generated using intra-predicted blocks. The first and second fixed transform types have at least some fixed transform types in common.
    Type: Application
    Filed: December 12, 2019
    Publication date: June 17, 2021
    Inventors: Lester Lu, Debargha Mukherjee, Elliott Karpilovsky