Patents by Inventor Jingning Han
Jingning Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12219143
Abstract: Entropy coding a sequence of symbols is described. A first probability model for entropy coding is selected. At least one symbol of the sequence is coded using a probability determined using the first probability model. The probability according to the first probability model is updated with an estimation of a second probability model to entropy code a subsequent symbol. The combination may be a fixed or adaptive combination.
Type: Grant
Filed: November 9, 2020
Date of Patent: February 4, 2025
Assignee: GOOGLE LLC
Inventors: Jingning Han, Yue Sun, Yaowu Xu
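For readers who want a concrete picture of what combining two probability models can look like, here is a minimal Python sketch. It is not the patented implementation: the two models, their adaptation rates, and the fixed mixing weight are all assumptions chosen for illustration.

```python
def adapt(prob_one, bit, rate):
    """Nudge the model's estimate of P(bit == 1) toward the observed bit."""
    target = 1.0 if bit else 0.0
    return prob_one + rate * (target - prob_one)

def mixed_probabilities(bits, weight=0.5):
    """Return the combined P(1) an arithmetic coder would use for each bit."""
    fast, slow = 0.5, 0.5                                  # two probability models
    used = []
    for bit in bits:
        mixed = weight * fast + (1.0 - weight) * slow      # fixed combination
        used.append(mixed)                                 # consumed by the entropy coder
        fast = adapt(fast, bit, rate=0.20)                 # quickly adapting model
        slow = adapt(slow, bit, rate=0.02)                 # slowly adapting model
    return used

if __name__ == "__main__":
    print([round(p, 3) for p in mixed_probabilities([1, 1, 0, 1, 1, 1, 0, 1])])
```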
-
Patent number: 12206842
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Grant
Filed: January 26, 2024
Date of Patent: January 21, 2025
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
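The following Python sketch illustrates, under simplifying assumptions, the two operations the abstract describes: rescaling (concatenating) a reference-frame motion vector by temporal distance, and filling an unavailable motion-field entry from its neighbors. The linear scaling model and the 4-neighbor average are assumptions, not the patent's method.

```python
def project_mv(mv, ref_distance, cur_distance):
    """Linearly rescale a motion vector spanning ref_distance frames so it
    spans cur_distance frames (a simple model of trajectory concatenation)."""
    scale = cur_distance / ref_distance
    return (mv[0] * scale, mv[1] * scale)

def fill_from_neighbors(field, row, col):
    """Interpolate a missing motion-field entry from its available 4-neighbors."""
    found = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(field) and 0 <= c < len(field[0]) and field[r][c] is not None:
            found.append(field[r][c])
    if not found:
        return (0.0, 0.0)
    return (sum(v[0] for v in found) / len(found),
            sum(v[1] for v in found) / len(found))

if __name__ == "__main__":
    print(project_mv((8.0, -2.0), ref_distance=2, cur_distance=1))
    field = [[(1.0, 0.0), None], [(3.0, 0.0), (5.0, 0.0)]]
    print(fill_from_neighbors(field, 0, 1))
```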
-
Publication number: 20240422309
Abstract: Methods, systems, and apparatuses are disclosed, including a computer-readable medium storing instructions used to encode or decode a video or bitstream using the disclosed steps. The steps include reconstructing a first reference frame and a second reference frame for a current frame to be encoded or decoded; projecting motion vectors of the first and second reference frames onto pixels of a current reference frame, resulting in a first pixel in the current reference frame being associated with a plurality of projected motion vectors; and selecting a first projected motion vector from that plurality as the motion vector used to determine the pixel value of the first pixel, the selection being based on the magnitudes of the projected motion vectors.
Type: Application
Filed: August 30, 2024
Publication date: December 19, 2024
Inventors: Lin Zheng, Yaowu Xu, Lester Lu, Jingning Han, Bohan Li
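A small illustrative sketch of the selection step follows. Choosing the smallest-magnitude candidate is an assumption; the abstract only states that the selection is based on the magnitudes of the projected motion vectors.

```python
import math

def select_projected_mv(candidates):
    """Choose one of several motion vectors projected onto the same pixel.
    Picking the smallest-magnitude candidate is an assumption made here
    purely for illustration."""
    return min(candidates, key=lambda mv: math.hypot(mv[0], mv[1]))

if __name__ == "__main__":
    print(select_projected_mv([(6.0, 1.0), (-2.0, 0.5), (4.0, -4.0)]))
```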
-
Publication number: 20240397055
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for encoding video comprising a sequence of video frames. In one aspect, a method comprises for one or more of the video frames: obtaining a feature embedding for the video frame; processing the feature embedding using a rate control machine learning model to generate a respective score for each of multiple quantization parameter values; selecting a quantization parameter value using the scores; determining a cumulative amount of data required to represent: (i) an encoded representation of the video frame and (ii) encoded representations of each preceding video frame; determining, based on the cumulative amount of data, that a feedback control criterion for the video frame is satisfied; updating the selected quantization parameter value; and processing the video frame using an encoding model to generate the encoded representation of the video frame.
Type: Application
Filed: August 1, 2024
Publication date: November 28, 2024
Inventors: Chenjie Gu, Hongzi Mao, Ching-Han Chiang, Cheng Chen, Jingning Han, Ching Yin Derek Pang, Rene Andre Claus, Marisabel Guevara Hechtman, Daniel James Visentin, Christopher Sigurd Fougner, Charles Booth Schaff, Nishant Patil, Alejandro Ramirez Bellido
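The control loop described above can be pictured with the hedged Python sketch below. The scoring model and encoder are stand-in stubs, and the specific feedback rule (raising the quantization parameter once the running byte count exceeds a running budget) is an assumption made for illustration.

```python
def rate_control(frames, qp_values, budget_per_frame, score_model, encode):
    """Schematic per-frame loop: score candidate QPs, pick the best-scoring one,
    then raise it when the running byte count exceeds the running budget."""
    total_bytes = 0
    encoded = []
    for index, frame in enumerate(frames):
        scores = score_model(frame)                              # one score per candidate QP
        qp = qp_values[max(range(len(qp_values)), key=lambda i: scores[i])]
        if total_bytes > budget_per_frame * index:               # feedback control criterion
            qp = min(qp + 4, qp_values[-1])                      # spend fewer bits
        data = encode(frame, qp)
        total_bytes += len(data)
        encoded.append(data)
    return encoded

if __name__ == "__main__":
    qps = [16, 24, 32, 40]
    stub_scores = lambda frame: [0.1, 0.7, 0.2, 0.0]             # stand-in rate control model
    stub_encode = lambda frame, qp: b"x" * max(1, 1000 - 20 * qp)  # stand-in encoder
    print(len(rate_control([object()] * 3, qps, 600, stub_scores, stub_encode)))
```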
-
Publication number: 20240380924
Abstract: Decoding a current block of a current frame includes decoding, from a compressed bitstream, one or more syntax elements indicating that a geometric transformation is to be applied; applying the geometric transformation to at least a portion of the current frame to obtain a transformed portion; and obtaining a prediction of the current block based on the transformed portion and an intra-prediction mode.
Type: Application
Filed: April 15, 2024
Publication date: November 14, 2024
Inventors: Bohan Li, Debargha Mukherjee, Yaowu Xu, Jingning Han
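As a toy illustration of combining a geometric transformation with intra prediction, the sketch below horizontally flips the reconstructed row above a block and then forms a simple vertical prediction from it. Both the choice of transformation and the predictor are arbitrary assumptions, not the disclosed method.

```python
def horizontal_flip(rows):
    """Apply a simple geometric transformation (horizontal flip) to reconstructed rows."""
    return [list(reversed(row)) for row in rows]

def vertical_intra_prediction(top_row, block_height):
    """Predict each row of the block by copying the (transformed) row above it."""
    return [list(top_row) for _ in range(block_height)]

if __name__ == "__main__":
    above = [[10, 20, 30, 40]]                 # reconstructed pixels above the block
    transformed = horizontal_flip(above)
    print(vertical_intra_prediction(transformed[0], 4))
```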
-
Publication number: 20240305802
Abstract: Syntax elements are written to a bitstream to designate bit depth precision for palette mode coding of video blocks. During encoding, a bit depth to use for palette mode coding a current block may be based on an input video signal including the current block or based on some change in bit depth precision. A prediction residual for the current block is encoded to a bitstream along with syntax elements indicative of the bit depth used for the palette mode coding of the current block. In particular, the syntax elements include a first element indicating the palette mode coding bit depth used and a second element indicating whether to apply a bit offset to the palette mode coding bit depth. During decoding, values of the syntax elements are read from the bitstream and used to determine a bit depth for palette mode coding the encoded block.
Type: Application
Filed: February 9, 2021
Publication date: September 12, 2024
Applicant: Google LLC
Inventors: Cheng Chen, Jingning Han, Hui Su, Yaowu Xu
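A hypothetical sketch of the two syntax elements is shown below; the field ordering and the fixed offset value of 2 are assumptions made only to make the example concrete.

```python
def write_palette_bit_depth(syntax, base_depth, apply_offset):
    """Append the two described syntax elements to a list standing in for the bitstream."""
    syntax.append(base_depth)                  # first element: palette-mode coding bit depth
    syntax.append(1 if apply_offset else 0)    # second element: whether to apply a bit offset

def read_palette_bit_depth(syntax, offset=2):
    """Recover the effective palette-mode bit depth from the two elements."""
    base_depth, flag = syntax[0], syntax[1]
    return base_depth + (offset if flag else 0)

if __name__ == "__main__":
    stream = []
    write_palette_bit_depth(stream, base_depth=8, apply_offset=True)
    print(read_palette_bit_depth(stream))      # 10 under the assumed offset of 2
```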
-
Patent number: 12088823
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for encoding video comprising a sequence of video frames. In one aspect, a method comprises for one or more of the video frames: obtaining a feature embedding for the video frame; processing the feature embedding using a rate control machine learning model to generate a respective score for each of multiple quantization parameter values; selecting a quantization parameter value using the scores; determining a cumulative amount of data required to represent: (i) an encoded representation of the video frame and (ii) encoded representations of each preceding video frame; determining, based on the cumulative amount of data, that a feedback control criterion for the video frame is satisfied; updating the selected quantization parameter value; and processing the video frame using an encoding model to generate the encoded representation of the video frame.
Type: Grant
Filed: November 3, 2021
Date of Patent: September 10, 2024
Assignee: DeepMind Technologies Limited
Inventors: Chenjie Gu, Hongzi Mao, Ching-Han Chiang, Cheng Chen, Jingning Han, Ching Yin Derek Pang, Rene Andre Claus, Marisabel Guevara Hechtman, Daniel James Visentin, Christopher Sigurd Fougner, Charles Booth Schaff, Nishant Patil, Alejandro Ramirez Bellido
-
Publication number: 20240276015
Abstract: An encoded bitstream is decodable by a processor configured to execute instructions to store, in a first line buffer, first values of a first scan-order diagonal line scanned immediately before a current scan-order diagonal line of a transform block; and store, in a second line buffer, second values of a second scan-order diagonal line scanned immediately before the first scan-order diagonal line. The first values of the first line buffer and the second values of the second line buffer are interleaved in a destination buffer. Using the destination buffer, a probability distribution is selected for coding a current value of the current scan-order diagonal line. The current value is entropy decoded from the bitstream using the probability distribution. One of the second line buffer or the first line buffer is replaced with current values of the current scan-order diagonal line for coding values of an immediately subsequent scan-order diagonal line.
Type: Application
Filed: April 22, 2024
Publication date: August 15, 2024
Inventors: Jingning Han, James Zern, Linfeng Zhang, Ching-Han Chiang, Yaowu Xu
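The buffer bookkeeping can be sketched as follows. The interleaving order and the rule for which buffer the current diagonal replaces are assumptions; the sketch only shows the general two-line-buffer pattern the abstract describes.

```python
from itertools import zip_longest

def interleave(first, second):
    """Alternate entries from the two line buffers into a destination buffer."""
    out = []
    for a, b in zip_longest(first, second, fillvalue=0):
        out.extend((a, b))
    return out

def code_diagonals(diagonals):
    """Walk the scan-order diagonals, building the interleaved destination buffer
    for each and then letting the current diagonal replace the older buffer."""
    first, second = [], []             # previously coded diagonal, and the one before it
    contexts = []
    for diagonal in diagonals:
        destination = interleave(first, second)
        contexts.append(destination)               # would drive probability selection
        second, first = first, list(diagonal)      # current values replace the older buffer
    return contexts

if __name__ == "__main__":
    print(code_diagonals([[1], [2, 3], [4, 5, 6]]))
```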
-
Patent number: 12047606
Abstract: Transform kernel candidates including a vertical transform type associated with a vertical motion and a horizontal transform type associated with a horizontal motion can be encoded or decoded. During a decoding operation, a probability model for decoding encoded bitstream video data associated with a transform kernel candidate for an encoded transform block is identified based on one or both of a first transform kernel candidate selected for an above neighbor transform block of the encoded transform block or a second transform kernel candidate selected for a left neighbor transform block of the encoded transform block. The encoded bitstream video data associated with the transform kernel candidate is decoded using the probability model.
Type: Grant
Filed: July 18, 2022
Date of Patent: July 23, 2024
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Jingning Han, Ching-Han Chiang
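One way to picture neighbor-based probability model selection is the sketch below. The candidate kernel set and the context-index formula are assumptions for illustration, not the codec's actual tables.

```python
KERNELS = ("DCT_DCT", "DCT_ADST", "ADST_DCT", "ADST_ADST")   # hypothetical candidate set

def kernel_context(above, left):
    """Map the above and left neighbors' kernel choices (or None when a neighbor
    is unavailable) to a context index that selects a probability model."""
    a = KERNELS.index(above) if above is not None else len(KERNELS)
    l = KERNELS.index(left) if left is not None else len(KERNELS)
    return a * (len(KERNELS) + 1) + l            # one context per (above, left) pair

if __name__ == "__main__":
    print(kernel_context("DCT_ADST", None))      # 1 * 5 + 4 = 9
```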
-
Publication number: 20240214607
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
Type: Application
Filed: March 4, 2024
Publication date: June 27, 2024
Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao
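The motion vector mapping idea can be sketched for the simplest case, an equirectangular frame under a yaw-only rotation. The projection formulas below are standard equirectangular mappings; restricting the rotation to yaw is a simplifying assumption.

```python
import math

def pixel_to_sphere(x, y, width, height):
    """Equirectangular pixel -> (longitude, latitude) in radians."""
    lon = (x / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (y / height) * math.pi
    return lon, lat

def sphere_to_pixel(lon, lat, width, height):
    """(longitude, latitude) in radians -> equirectangular pixel coordinates."""
    x = (lon + math.pi) / (2.0 * math.pi) * width
    y = (math.pi / 2.0 - lat) / math.pi * height
    return x, y

def mapped_motion_vector(x, y, yaw, width, height):
    """Motion vector for a pixel under a yaw-only rotation of the sphere."""
    lon, lat = pixel_to_sphere(x, y, width, height)
    lon = ((lon + yaw + math.pi) % (2.0 * math.pi)) - math.pi   # wrap longitude
    new_x, new_y = sphere_to_pixel(lon, lat, width, height)
    return new_x - x, new_y - y

if __name__ == "__main__":
    print(mapped_motion_vector(960.0, 200.0, math.radians(3.0), 3840, 1920))
```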
-
Publication number: 20240195979
Abstract: A motion vector for a current block of a current frame is decoded from a compressed bitstream. A location of a reference block within an un-generated reference frame is identified. The reference block is generated using a forward reference frame and a backward reference frame without generating the un-generated reference frame. The reference block is generated by identifying an extended reference block by extending the reference block at each boundary of the reference block by a number of pixels related to a filter length of a filter used in sub-pixel interpolation; and generating pixel values of only the extended reference block by performing a projection using the forward reference frame and the backward reference frame without generating the whole of the un-generated reference frame. The current block is then decoded based on the reference block and the motion vector.
Type: Application
Filed: December 18, 2023
Publication date: June 13, 2024
Inventors: Yaowu Xu, Bohan Li, Jingning Han
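The extension-by-filter-length bookkeeping can be sketched as below. Averaging the co-located forward and backward pixels stands in for the projection step, and edge clamping is omitted; both are simplifications rather than the patented procedure.

```python
def extended_bounds(x, y, width, height, filter_length):
    """Grow the block by half the interpolation filter length on every side."""
    pad = filter_length // 2
    return x - pad, y - pad, width + 2 * pad, height + 2 * pad

def generate_extended_block(forward, backward, x, y, width, height, filter_length=8):
    """Produce only the extended block's pixels (here a plain average of the
    co-located forward and backward pixels); edge clamping is omitted."""
    ex, ey, ew, eh = extended_bounds(x, y, width, height, filter_length)
    return [[(forward[row][col] + backward[row][col]) / 2.0
             for col in range(ex, ex + ew)]
            for row in range(ey, ey + eh)]

if __name__ == "__main__":
    fwd = [[r * 16 + c for c in range(16)] for r in range(16)]
    bwd = [[255 - v for v in row] for row in fwd]
    block = generate_extended_block(fwd, bwd, x=6, y=6, width=4, height=4)
    print(len(block), len(block[0]))        # 12 x 12 with an assumed 8-tap filter
```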
-
Publication number: 20240171733
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Application
Filed: January 26, 2024
Publication date: May 23, 2024
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Patent number: 11991392
Abstract: A transform type is obtained for decoding a transform block of transform coefficients. A template for entropy-decoding values related to the transform coefficients is selected based on the transform type. The template indicates, for a to-be-coded value, positions of already coded values. A context for selecting a probability distribution for entropy decoding a current value of the values is determined using the template. The current value is entropy decoded from a compressed bitstream using the probability distribution.
Type: Grant
Filed: July 18, 2022
Date of Patent: May 21, 2024
Assignee: GOOGLE LLC
Inventors: Jingning Han, James Zern, Linfeng Zhang, Ching-Han Chiang, Yaowu Xu
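A hedged sketch of template-based context derivation follows. The per-transform-type neighbor offsets and the count-of-nonzero-neighbors rule are assumptions used only to show the shape of the technique.

```python
TEMPLATES = {
    "dct_dct":  ((0, -1), (-1, 0), (-1, -1)),   # left, above, above-left (assumed)
    "identity": ((0, -1), (0, -2)),             # row-oriented neighbors only (assumed)
}

def coefficient_context(coded, row, col, transform_type):
    """Count nonzero already-coded neighbors at the positions named by the
    template selected for this transform type."""
    context = 0
    for dr, dc in TEMPLATES[transform_type]:
        r, c = row + dr, col + dc
        if r >= 0 and c >= 0 and coded[r][c] != 0:
            context += 1
    return context

if __name__ == "__main__":
    coded = [[3, 0, 0], [1, 0, 0], [0, 0, 0]]
    print(coefficient_context(coded, 1, 1, "dct_dct"))   # left=1, above=0, above-left=3 -> 2
```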
-
Publication number: 20240155121
Abstract: A bitstream that stores encoded image data is described. In addition to the compressed data for the color planes of the image, the bitstream includes signals identifying respective deblocking filters for the different color planes. The deblocking filters may include those having different lengths for a luma plane as compared to one or more chroma planes of the image. One or more of the color planes, such as the luma plane, may have different filters for filtering reconstructed pixels vertically as compared to filtering the reconstructed pixels horizontally.
Type: Application
Filed: January 8, 2024
Publication date: May 9, 2024
Inventors: Yaowu Xu, Jingning Han, Cheng Chen
-
Patent number: 11924467
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
Type: Grant
Filed: November 16, 2021
Date of Patent: March 5, 2024
Assignee: GOOGLE LLC
Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao
-
Patent number: 11917128
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Grant
Filed: November 5, 2020
Date of Patent: February 27, 2024
Assignee: GOOGLE LLC
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Patent number: 11876974
Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
Type: Grant
Filed: May 6, 2022
Date of Patent: January 16, 2024
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Patent number: 11870983
Abstract: Techniques for encoding and decoding image data are described. An image is reconstructed and deblocked. A respective deblocking filter is identified for different color planes of the image. The deblocking filters may include those having different lengths for a luma plane as compared to one or more chroma planes of the image. One or more of the color planes, such as the luma plane, may have different filters for filtering reconstructed pixels vertically as compared to filtering the reconstructed pixels horizontally.
Type: Grant
Filed: August 17, 2020
Date of Patent: January 9, 2024
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Jingning Han, Cheng Chen
-
Patent number: 11800136
Abstract: Decoding a current block of a current frame includes obtaining motion trajectories between the current frame and at least one previously coded frame by projecting motion vectors from the at least one previously coded frame onto the current frame. A motion field is obtained between the current frame and a reference frame used for coding the current frame. The motion field is obtained by extending the motion trajectories from the current frame towards the reference frame. A motion vector for the current block is identified based on the motion field. A prediction block is obtained for the current block using a reference block of the reference frame identified using the motion vector.
Type: Grant
Filed: July 19, 2022
Date of Patent: October 24, 2023
Assignee: GOOGLE LLC
Inventors: Jingning Han, Yaowu Xu, James Bankoski, Jia Feng
-
Publication number: 20230336739
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for encoding video comprising a sequence of video frames. In one aspect, a method comprises for one or more of the video frames: obtaining a feature embedding for the video frame; processing the feature embedding using a rate control machine learning model to generate a respective score for each of multiple quantization parameter values; selecting a quantization parameter value using the scores; determining a cumulative amount of data required to represent: (i) an encoded representation of the video frame and (ii) encoded representations of each preceding video frame; determining, based on the cumulative amount of data, that a feedback control criterion for the video frame is satisfied; updating the selected quantization parameter value; and processing the video frame using an encoding model to generate the encoded representation of the video frame.
Type: Application
Filed: November 3, 2021
Publication date: October 19, 2023
Inventors: Chenjie Gu, Hongzi Mao, Ching-Han Chiang, Cheng Chen, Jingning Han, Ching Yin Derek Pang, Rene Andre Claus, Marisabel Guevara Hechtman, Daniel James Visentin, Christopher Sigurd Fougner, Charles Booth Schaff, Nishant Patil, Alejandro Ramirez Bellido