Patents by Inventor Yaowu Xu
Yaowu Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240080482
Abstract: An apparatus for decoding frames of a compressed video data stream having at least one frame divided into partitions includes a memory and a processor configured to execute instructions stored in the memory to read partition data information indicative of a partition location for at least one of the partitions, decode a first partition of the partitions that includes a first sequence of blocks, and decode a second partition of the partitions that includes a second sequence of blocks identified from the partition data information using decoded information of the first partition.
Type: Application
Filed: November 2, 2023
Publication date: March 7, 2024
Inventors: Yaowu Xu, Paul Wilkins, James Bankoski
-
Patent number: 11917128
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Grant
Filed: November 5, 2020
Date of Patent: February 27, 2024
Assignee: GOOGLE LLC
Inventors: Bohan Li, Yaowu Xu, Jingning Han
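The abstract above rests on two operations: chaining motion vectors across reference frames into trajectories, and filling gaps in the resulting motion field from neighboring estimates. A minimal Python sketch of both, with the data layout and the 4-neighbour averaging chosen purely for illustration (the patent does not specify this implementation):

```python
def concatenate(mv_a, mv_b):
    # Chain two motion vectors: if mv_a carries a block from frame 2 to
    # frame 1 and mv_b from frame 1 to frame 0, their sum approximates
    # the frame-2-to-frame-0 trajectory.
    return (mv_a[0] + mv_b[0], mv_a[1] + mv_b[1])

def interpolate_missing(field):
    # Fill None entries in a 2-D motion field by averaging whichever of
    # the 4 neighbours are available -- a crude stand-in for the
    # "interpolating unavailable motion vectors using neighbors" step.
    h, w = len(field), len(field[0])
    out = [row[:] for row in field]
    for y in range(h):
        for x in range(w):
            if field[y][x] is None:
                nbrs = [field[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and field[ny][nx] is not None]
                if nbrs:
                    out[y][x] = (sum(v[0] for v in nbrs) / len(nbrs),
                                 sum(v[1] for v in nbrs) / len(nbrs))
    return out
```

In a real codec the concatenation also rescales vectors by the frame-distance ratio; that scaling is omitted here for brevity.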
-
Patent number: 11876974
Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
Type: Grant
Filed: May 6, 2022
Date of Patent: January 16, 2024
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
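The key efficiency claim above is that only the needed block is synthesised from the two references, never the full optical-flow reference frame. A hypothetical sketch of that per-block generation, using a simple average of co-located forward/backward pixels (the actual blending and per-pixel flow refinement in a real decoder would be more elaborate):

```python
def optical_flow_reference_block(fwd, bwd, x, y, size):
    # fwd/bwd: 2-D lists of pixel values for the forward and backward
    # reference frames; (x, y) is the block's top-left corner, located
    # via the decoded motion vector. Only the size x size region is
    # produced -- the surrounding reference frame is never built.
    return [[(fwd[y + r][x + c] + bwd[y + r][x + c]) // 2
             for c in range(size)]
            for r in range(size)]
```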
-
Patent number: 11870983
Abstract: Techniques for encoding and decoding image data are described. An image is reconstructed and deblocked. A respective deblocking filter is identified for different color planes of the image. The deblocking filters may include those having different lengths for a luma plane as compared to one or more chroma planes of the image. One or more of the color planes, such as the luma plane, may have different filters for filtering reconstructed pixels vertically as compared to filtering the reconstructed pixels horizontally.
Type: Grant
Filed: August 17, 2020
Date of Patent: January 9, 2024
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Jingning Han, Cheng Chen
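The selection rule described above (filter length varies by color plane, and for luma also by edge direction) can be pictured as a small lookup table. The tap counts below are invented for illustration and are not taken from the patent:

```python
# Hypothetical filter-length table: longer filters for luma, shorter for
# chroma, and a different luma choice for vertical vs. horizontal filtering.
FILTER_TAPS = {
    ("luma", "vertical"): 13,
    ("luma", "horizontal"): 7,
    ("chroma", "vertical"): 5,
    ("chroma", "horizontal"): 5,
}

def select_deblocking_filter(plane, direction):
    # Returns the number of taps of the deblocking filter to apply to
    # reconstructed pixels of the given plane in the given direction.
    return FILTER_TAPS[(plane, direction)]
```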
-
Patent number: 11800136
Abstract: Decoding a current block of a current frame includes obtaining motion trajectories between the current frame and at least one previously coded frame by projecting motion vectors from the at least one previously coded frame onto the current frame. A motion field is obtained between the current frame and a reference frame used for coding the current frame. The motion field is obtained by extending the motion trajectories from the current frame towards the reference frame. A motion vector for the current block is identified based on the motion field. A prediction block is obtained for the current block using a reference block of the reference frame identified using the motion vector.
Type: Grant
Filed: July 19, 2022
Date of Patent: October 24, 2023
Assignee: GOOGLE LLC
Inventors: Jingning Han, Yaowu Xu, James Bankoski, Jia Feng
-
Patent number: 11785226
Abstract: Adaptive composite intra-prediction may include, in response to a determination that a first prediction pixel from a first block immediately adjacent to a first edge of a current block is available for predicting a current pixel of the current block, determining whether a second prediction pixel from a second block immediately adjacent to a second edge of the current block is available for predicting the current pixel, wherein the second edge is opposite the first edge, and, in response to a determination that the second prediction pixel is available, generating a prediction value for the current pixel based on at least one of the first prediction pixel or the second prediction pixel. Adaptive composite intra-prediction may include generating a reconstructed pixel corresponding to the current pixel based on the prediction value, including the reconstructed pixel in the decoded current block, and outputting or storing the decoded current block.
Type: Grant
Filed: April 14, 2017
Date of Patent: October 10, 2023
Assignee: GOOGLE INC.
Inventors: Yaowu Xu, Hui Su
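The per-pixel decision above (combine prediction pixels from opposite edges when both are available, otherwise fall back to the one that is) can be sketched in a few lines. Averaging is one plausible combination; the patent's claim covers combinations more generally:

```python
def predict_pixel(first_edge_pixel, second_edge_pixel):
    # Composite intra prediction for one pixel. `None` marks an
    # unavailable prediction pixel from the adjacent block on that edge.
    if first_edge_pixel is not None and second_edge_pixel is not None:
        # Both edges available: combine (here, integer average).
        return (first_edge_pixel + second_edge_pixel) // 2
    # Only one edge available: use it directly.
    return first_edge_pixel if first_edge_pixel is not None else second_edge_pixel
```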
-
Publication number: 20230308679
Abstract: Video coding using motion prediction coding with coframe motion vectors includes generating a reference coframe spatiotemporally concurrent with a current frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, generating an encoded frame by encoding the current frame using the reference coframe, including the encoded frame in an encoded bitstream, and outputting the encoded bitstream.
Type: Application
Filed: May 25, 2023
Publication date: September 28, 2023
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Publication number: 20230232001
Abstract: A system, apparatus, and method for encoding and decoding a video image having a plurality of frames is disclosed. Encoding and decoding the video image can include selecting, for a current block, a prediction mode from a plurality of prediction modes; identifying, for the current block, a quantization value; selecting, for the current block, a probability distribution from a plurality of probability distributions based on the identified quantization value using a processor; and entropy encoding the selected prediction mode using the selected probability distribution.
Type: Application
Filed: March 22, 2023
Publication date: July 20, 2023
Inventors: Yaowu Xu, Paul Gordon Wilkins, James Bankoski
-
Patent number: 11665365
Abstract: Video coding may include generating, by a processor executing instructions stored on a non-transitory computer-readable medium, an encoded frame by encoding a current frame from an input bitstream, by generating a reference coframe spatiotemporally corresponding to the current frame, wherein the current frame is a frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, and encoding the current frame using the reference coframe. Video coding may include including the encoded frame in an output bitstream and outputting the output bitstream.
Type: Grant
Filed: September 14, 2018
Date of Patent: May 30, 2023
Assignee: GOOGLE LLC
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Patent number: 11647223
Abstract: Dynamic motion vector referencing is used to predict motion within video blocks. A motion trajectory is determined for a current frame including a video block to encode or decode based on a reference motion vector used for encoding or decoding one or more reference frames of the current frame. One or more temporal motion vector candidates are then determined for predicting motion within the video block based on the motion trajectory. A motion vector is selected from a motion vector candidate list including the one or more temporal motion vector candidates and used to generate a prediction block. The prediction block is then used to encode or decode the video block. The motion trajectory is based on an order of video frames indicated by frame offset values encoded to a bitstream. The motion vector candidate list may include one or more spatial motion vector candidates.
Type: Grant
Filed: December 23, 2020
Date of Patent: May 9, 2023
Assignee: GOOGLE LLC
Inventors: Jingning Han, James Bankoski, Yaowu Xu
-
Patent number: 11627321
Abstract: Generating, by a processor in response to instructions stored on a non-transitory computer readable medium, a reconstructed frame may include generating a reconstructed block of the reconstructed frame by decoding from an encoded bitstream. Decoding may include decoding a value from the encoded bitstream, identifying, in accordance with the value, a probability distribution for generating the reconstructed block, wherein the value indicates the probability distribution among a plurality of probability distributions determined independently of generating the reconstructed frame, entropy decoding an encoded prediction mode from the encoded bitstream using the probability distribution to identify a prediction mode for generating the reconstructed block, generating a prediction block in accordance with the prediction mode, combining the prediction block and a reconstructed residual block to obtain the reconstructed block, and including the reconstructed block in the reconstructed frame.
Type: Grant
Filed: June 7, 2021
Date of Patent: April 11, 2023
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Paul Gordon Wilkins, James Bankoski
-
Publication number: 20230007260
Abstract: Entropy coding a sequence of symbols is described. A first probability model for entropy coding is selected. At least one symbol of the sequence is coded using a probability determined using the first probability model. The probability according to the first probability model is updated with an estimation of a second probability model to entropy code a subsequent symbol. The combination may be a fixed or adaptive combination.
Type: Application
Filed: November 9, 2020
Publication date: January 5, 2023
Inventors: Jingning Han, Yue Sun, Yaowu Xu
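The two-model combination described above (a coding probability updated with a second model's estimate, with either a fixed or an adaptive weight) can be sketched as follows. The mixing formula and the adaptation schedule are illustrative assumptions, not the publication's actual scheme:

```python
def mixed_probability(p_first, p_second, weight):
    # Convex combination of the first model's probability and the second
    # model's estimate; weight = 1.0 uses only the first model.
    return weight * p_first + (1.0 - weight) * p_second

def adaptive_weight(symbols_seen, floor=16, cap=0.9):
    # One plausible adaptive schedule: lean on the second model more as
    # more symbols have been observed, saturating at `cap`.
    return min(cap, symbols_seen / (symbols_seen + floor))
```

A fixed combination would simply pass a constant `weight`; the adaptive variant recomputes it per symbol.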
-
Publication number: 20220377376
Abstract: A transform type is obtained for decoding the transform block of transform coefficients. A template for entropy-decoding values related to the transform coefficients is selected based on the transform type. The template indicates, for a to-be-coded value, positions of already coded values. A context for selecting a probability distribution for entropy decoding a current value of the values is determined using the template. The current value is entropy decoded from a compressed bitstream using the probability distribution.
Type: Application
Filed: July 18, 2022
Publication date: November 24, 2022
Inventors: Jingning Han, James Zern, Linfeng Zhang, Ching-Han Chiang, Yaowu Xu
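To make the template idea concrete: a template is a set of offsets to already-coded positions, chosen per transform type, and the context is a function of the values found there. The template shapes, names, and clamping below are hypothetical, not taken from the publication:

```python
# Hypothetical per-transform-type templates: offsets (dr, dc) from the
# current coefficient to already-coded neighbours.
TEMPLATES = {
    "2d": [(0, -1), (-1, 0), (-1, -1)],   # 2-D transform: look left, above, above-left
    "1d_row": [(0, -1), (0, -2)],         # 1-D row transform: look along the row only
}

def context(coeffs, r, c, transform_type):
    # Sum clamped magnitudes at the template positions; the result
    # indexes into a set of probability distributions.
    total = 0
    for dr, dc in TEMPLATES[transform_type]:
        rr, cc = r + dr, c + dc
        if rr >= 0 and cc >= 0:
            total += min(abs(coeffs[rr][cc]), 3)
    return total
```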
-
Publication number: 20220377364
Abstract: Decoding a current block of a current frame includes obtaining motion trajectories between the current frame and at least one previously coded frame by projecting motion vectors from the at least one previously coded frame onto the current frame. A motion field is obtained between the current frame and a reference frame used for coding the current frame. The motion field is obtained by extending the motion trajectories from the current frame towards the reference frame. A motion vector for the current block is identified based on the motion field. A prediction block is obtained for the current block using a reference block of the reference frame identified using the motion vector.
Type: Application
Filed: July 19, 2022
Publication date: November 24, 2022
Inventors: Jingning Han, Yaowu Xu, James Bankoski, Jia Feng
-
Publication number: 20220353534
Abstract: Transform kernel candidates including a vertical transform type associated with a vertical motion and a horizontal transform type associated with a horizontal motion can be encoded or decoded. During a decoding operation, a probability model for decoding encoded bitstream video data associated with a transform kernel candidate for an encoded transform block is identified based on one or both of a first transform kernel candidate selected for an above neighbor transform block of the encoded transform block or a second transform kernel candidate selected for a left neighbor transform block of the encoded transform block. The encoded bitstream video data associated with the transform kernel candidate is decoded using the probability model.
Type: Application
Filed: July 18, 2022
Publication date: November 3, 2022
Inventors: Yaowu Xu, Jingning Han, Ching-Han Chiang
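The probability-model selection described above keys off the kernel choices of the above and left neighbor transform blocks. A minimal sketch of one way to map those two choices to a context index (the encoding of missing neighbors as 0 and the index arithmetic are assumptions for illustration):

```python
def kernel_context(above_kernel, left_kernel, num_kernels=4):
    # above_kernel / left_kernel: integer kernel indices chosen for the
    # neighbour transform blocks, or None at a frame/tile boundary.
    # Returns a context index used to pick the probability model.
    a = above_kernel if above_kernel is not None else 0
    l = left_kernel if left_kernel is not None else 0
    return a * num_kernels + l
```

The decoder would index a table of probability models with this value and entropy-decode the current block's kernel candidate from it.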
-
Publication number: 20220303583
Abstract: Video coding using constructed reference frames may include generating, by a processor in response to instructions stored on a non-transitory computer readable medium, a reconstructed video. Generating the reconstructed video may include receiving an encoded bitstream. Video coding using constructed reference frames may include generating a reconstructed non-showable reference frame. Generating the reconstructed non-showable reference frame may include decoding a first encoded frame from the encoded bitstream. Video coding using constructed reference frames may include generating a reconstructed frame. Generating the reconstructed frame may include decoding a second encoded frame from the encoded bitstream using the reconstructed non-showable reference frame as a reference frame. Video coding using constructed reference frames may include including the reconstructed frame in the reconstructed video and outputting the reconstructed video.
Type: Application
Filed: June 8, 2022
Publication date: September 22, 2022
Inventors: James Bankoski, Yaowu Xu, Paul Wilkins
-
Publication number: 20220264109
Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
Type: Application
Filed: May 6, 2022
Publication date: August 18, 2022
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Patent number: 11405645
Abstract: Transform kernel candidates including a vertical transform type associated with a vertical motion and a horizontal transform type associated with a horizontal motion can be encoded or decoded. During an encoding operation, a residual block of a current block is transformed according to a selected transform kernel candidate to produce a transform block. A probability model for encoding the selected transform kernel candidate is then identified based on neighbor transform blocks of the transform block. The selected transform kernel candidate is then encoded according to the probability model. During a decoding operation, the encoded transform kernel candidate is decoded using the probability model. The encoded transform block is then decoded by inverse transforming dequantized transform coefficients thereof according to the decoded transform kernel candidate.
Type: Grant
Filed: June 22, 2017
Date of Patent: August 2, 2022
Assignee: GOOGLE LLC
Inventors: Jingning Han, Yaowu Xu, Ching-Han Chiang
-
Patent number: RE49615
Abstract: Decoding a video stream may include decoding a first block of a current frame by decoding a first motion vector from the encoded video stream, decoding an identifier of a first interpolation filter from the encoded video stream, and reconstructing the first block using the first motion vector and the first interpolation filter. Decoding a second block of the current frame may include identifying the first motion vector from the first block as a selected motion vector for predicting the second block in response to decoding an inter-prediction mode identifier for decoding the second block, identifying the first interpolation filter as a selected interpolation filter for predicting the second block in response to identifying the first motion vector from the first block as the selected motion vector for predicting the second block, and reconstructing the second block using the first motion vector and the first interpolation filter.
Type: Grant
Filed: November 10, 2020
Date of Patent: August 15, 2023
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Jingning Han
-
Patent number: RE49727
Abstract: An apparatus for decoding frames of a compressed video data stream having at least one frame divided into partitions includes a memory and a processor configured to execute instructions stored in the memory to read partition data information indicative of a partition location for at least one of the partitions, decode a first partition of the partitions that includes a first sequence of blocks, and decode a second partition of the partitions that includes a second sequence of blocks identified from the partition data information using decoded information of the first partition.
Type: Grant
Filed: March 12, 2021
Date of Patent: November 14, 2023
Inventors: Yaowu Xu, Paul Wilkins, James Bankoski