Patents by Inventor Haoping Yu

Haoping Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12167042
    Abstract: A method for decoding a video including a sequence of pictures includes: a bitstream is parsed to obtain a value of a first flag; whether the value of the first flag indicates that a set of header extension parameters is present is determined; when the value of the first flag indicates that the set of header extension parameters is present, the bitstream is parsed to obtain a value of a second flag; whether the value of the second flag indicates that a first parameter in the set of header extension parameters is enabled for the sequence of pictures is determined; when it does, the bitstream is parsed to obtain a value of the first parameter for one of the slices in the sequence of pictures; and the slice is decoded based on the value of the first parameter for the slice.
    Type: Grant
    Filed: December 29, 2023
    Date of Patent: December 10, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Yue Yu, Haoping Yu
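    Illustrative sketch: a minimal Python sketch of the gated parsing order this abstract describes, assuming a toy bit reader and a hypothetical one-bit per-slice parameter; it is not the patented syntax or any real codec API.
      class BitReader:
          """Toy MSB-first bit reader over a byte string."""
          def __init__(self, data: bytes):
              self.data, self.pos = data, 0

          def read_flag(self) -> int:
              bit = (self.data[self.pos // 8] >> (7 - self.pos % 8)) & 1
              self.pos += 1
              return bit

      def parse_header_extension(r: BitReader, num_slices: int):
          # First flag: is the set of header extension parameters present?
          if not r.read_flag():
              return None
          # Second flag: is the first extension parameter enabled for the sequence?
          if not r.read_flag():
              return None
          # Hypothetical per-slice parameter, modeled here as one flag bit per slice.
          return [r.read_flag() for _ in range(num_slices)]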
  • Patent number: 12160593
    Abstract: A method for decoding a video from a video bitstream is provided and at least includes: accessing a binary string representing a partition of the video, the partition comprising a plurality of coding tree units (CTUs) forming one or more CTU rows; for each CTU of the plurality of CTUs in the partition, determining whether the CTU is the first CTU in a slice or a tile; in response to determining that the CTU is the first CTU in a slice or a tile, initializing context variables for context-adaptive binary arithmetic coding (CABAC) according to a first context variable initialization process; in response to determining that the CTU is not the first CTU in a slice or a tile, determining whether parallel decoding is enabled and the CTU is the first CTU in a CTU row of a tile.
    Type: Grant
    Filed: May 11, 2024
    Date of Patent: December 3, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Yue Yu, Haoping Yu
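    Illustrative sketch: the per-CTU selection logic this abstract walks through, in Python. The helpers init_contexts and restore_wpp_contexts are hypothetical callbacks, and the behavior of the parallel-decoding branch (the abstract stops at the check itself) is an assumption based on typical wavefront processing.
      def choose_cabac_initialization(ctu, parallel_enabled,
                                      init_contexts, restore_wpp_contexts):
          """Pick the CABAC context-variable initialization path for one CTU."""
          if ctu["first_in_slice"] or ctu["first_in_tile"]:
              # First context variable initialization process.
              init_contexts(ctu)
          elif parallel_enabled and ctu["first_in_ctu_row"]:
              # Assumed branch: restore context state saved from the row above.
              restore_wpp_contexts(ctu)
          # Otherwise the contexts simply carry over from the previous CTU.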
  • Publication number: 20240373027
    Abstract: In some embodiments, a video decoder decodes a video from a bitstream of the video using a history-based Rice parameter derivation. The video decoder accesses a binary string representing a partition of the video and processes each coding tree unit (CTU) in the partition to generate decoded coefficient values in the CTU. The process includes updating a history counter for a color component for calculating Rice parameters and, prior to calculating a next Rice parameter, updating a replacement variable based on the updated history counter. The process further includes calculating the Rice parameters for transform units (TUs) in the CTU based on the value of the replacement variable and decoding the binary string corresponding to the TUs in the CTU into coefficient values of the TUs based on the calculated Rice parameters.
    Type: Application
    Filed: August 18, 2022
    Publication date: November 7, 2024
    Inventors: Yue YU, Haoping YU
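    Illustrative sketch: a simplified Python decode loop showing a per-component history counter feeding a replacement variable that in turn yields a Rice parameter, as outlined above. The codeword layout and the update rules are placeholders, not the derivation from this publication or any standard.
      def golomb_rice_decode(bits, rice):
          """Decode one value from a bit iterator: unary quotient (1s ended by a
          0) followed by `rice` fixed remainder bits (toy layout)."""
          q = 0
          while next(bits) == 1:
              q += 1
          r = 0
          for _ in range(rice):
              r = (r << 1) | next(bits)
          return (q << rice) | r

      def decode_ctu(tus, history):
          """`history` maps a color component to its running statistic."""
          decoded = []
          for tu in tus:
              # Refresh the replacement variable from the history counter before
              # the next Rice parameter is calculated (placeholder rule).
              replacement = 1 << history[tu["color"]]
              rice = min(3, replacement.bit_length() // 2)
              levels = [golomb_rice_decode(iter(cw), rice) for cw in tu["codewords"]]
              # Fold the decoded magnitudes back into the per-component history.
              if levels:
                  history[tu["color"]] = max(levels).bit_length()
              decoded.append(levels)
          return decoded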
  • Publication number: 20240364910
    Abstract: A method for decoding a video includes the following: a decoder decodes a video from a bitstream of the video; the decoder accesses a bitstream of the video and extracts a GCI flag from the bitstream; the decoder determines that one or more general constraints are imposed for the video based on the GCI flag value and extracts, from the bitstream of the video, a value indicating a quantity of additional bits included in the bitstream of the video, where the additional bits include flag bits indicating respective additional coding tools to be constrained for the video; if the value is greater than five, the decoder extracts six flags from the bitstream of the video that indicate respective constraints for six additional coding tools; and the decoder decodes the bitstream of the video into images based on the constraints for the six additional coding tools indicated by the six flags.
    Type: Application
    Filed: June 25, 2024
    Publication date: October 31, 2024
    Inventors: Jonathan GAN, Yue YU, Haoping YU
  • Publication number: 20240364907
    Abstract: A decoding method, system and storage medium are disclosed. In the method, a video decoder decodes a video from a bitstream of the video. The video decoder decodes, from the bitstream, an additional bit count M indicating a quantity of additional general constraints information (GCI) bits included in the bitstream. The additional bits include flag bits indicating respective additional coding tools to be constrained for the video, and an expected value of the additional bit count is 0, 6 or greater than 6. The decoder decodes M-6 bits that follow six flag bits in the bitstream in response to determining that the decoded additional bit count M is greater than 6. The decoder further decodes the remaining portion of the bitstream into images independent of the decoded M-6 bits and based, at least in part, upon constraints specified for the respective additional coding tools by the six flag bits.
    Type: Application
    Filed: July 5, 2024
    Publication date: October 31, 2024
    Inventors: Jonathan GAN, Yue YU, Haoping YU
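    Illustrative sketch: a Python sketch of the additional-GCI-bits handling described in this and the preceding publication, assuming hypothetical bitstream helpers read_bits(n) and read_flag() and an 8-bit count field; it is not the actual VVC syntax.
      def parse_additional_gci(read_bits, read_flag):
          m = read_bits(8)                     # additional bit count M
          tool_constraint_flags = []
          if m > 5:
              # Six flags constraining six additional coding tools.
              tool_constraint_flags = [read_flag() for _ in range(6)]
              # Any remaining M - 6 bits are read but ignored, so the pictures
              # remain decodable when newer bitstreams carry more GCI bits.
              for _ in range(m - 6):
                  read_flag()
          return tool_constraint_flags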
  • Publication number: 20240364939
    Abstract: In some embodiments, a video decoder decodes a video from a bitstream of the video using a history-based Rice parameter derivation. The video decoder accesses a binary string representing a partition of the video and processes each coding tree unit (CTU) in the partition to generate decoded coefficient values in the CTU. The process includes updating a replacement variable for a transform unit (TU) in the CTU for calculating Rice parameters independently of the previous TU or CTU. The process further includes calculating the Rice parameters for the TU in the CTU based on the value of the replacement variable and decoding the binary string corresponding to the TU into coefficient values based on the calculated Rice parameters. Pixel values of the TU can be determined from the decoded coefficient values for output.
    Type: Application
    Filed: August 25, 2022
    Publication date: October 31, 2024
    Inventors: Yue YU, Haoping YU
  • Publication number: 20240305797
    Abstract: In some embodiments, a video decoder decodes a video from a bitstream of the video using a history-based Rice parameter derivation along with wavefront parallel processing (WPP). The video decoder accesses a binary string representing a partition of the video and processes each coding tree unit (CTU) in the partition to generate decoded coefficient values in the CTU. The process includes, prior to decoding the CTU, determining whether WPP is enabled and whether the CTU is the first CTU of a current CTU row in the partition, and if so, setting a history counter to an initial value. The process further includes decoding the CTU by calculating the Rice parameters for transform units (TUs) in the CTU based on the value of the history counter and decoding the binary string corresponding to the TUs in the CTU into coefficient values of the TUs based on the calculated Rice parameters.
    Type: Application
    Filed: August 26, 2022
    Publication date: September 12, 2024
    Inventors: Yue YU, Haoping YU
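    Illustrative sketch: a small Python loop showing the history counter being reset at the first CTU of each CTU row when wavefront parallel processing is enabled, as this abstract describes. decode_ctu is a hypothetical per-CTU decoder that derives Rice parameters from the history value and returns the updated value; the initial value is a placeholder.
      def decode_partition_wpp(ctu_rows, wpp_enabled, decode_ctu, initial_history=0):
          history = initial_history
          for row in ctu_rows:
              for index, ctu in enumerate(row):
                  if wpp_enabled and index == 0:
                      # First CTU of the row: reset the history counter so rows
                      # can be decoded in parallel without cross-row state.
                      history = initial_history
                  history = decode_ctu(ctu, history)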
  • Publication number: 20240297997
    Abstract: A method for encoding a picture of a video including a current transform unit is disclosed. A coefficient of each position in the current transform unit is quantized by a processor to generate quantization levels of the current transform unit. A value of a Rice parameter of a current position in the current transform unit for Golomb-Rice binarization is determined by the processor based on a value of a history variable for a previous transform unit preceding the current transform unit. The value of the history variable is determined based on at least one of a bit depth or a bit rate for encoding the picture. The quantization level of the current position is converted by the processor into a binary representation using Golomb-Rice binarization with the value of the Rice parameter. The binary representation of the current position is compressed by the processor into a bitstream.
    Type: Application
    Filed: June 1, 2022
    Publication date: September 5, 2024
    Inventors: Yue YU, Haoping YU
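    Illustrative sketch: an encoder-side Python sketch of Golomb-Rice binarization driven by a history variable seeded from the bit depth, following the idea in the abstract. The codeword layout and the seeding rule are placeholders, not the publication's derivation.
      def golomb_rice_encode(value, rice):
          """Toy codeword: unary quotient (1s ended by a 0) plus `rice` remainder bits."""
          q, r = value >> rice, value & ((1 << rice) - 1)
          return "1" * q + "0" + (format(r, "0{}b".format(rice)) if rice else "")

      def binarize_tu(levels, bit_depth, history=None):
          # Placeholder: seed the history variable from the coding bit depth when
          # no statistic from a previous transform unit is available.
          if history is None:
              history = max(0, bit_depth - 10)
          rice = min(3, history)
          return [golomb_rice_encode(abs(v), rice) for v in levels], history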
  • Publication number: 20240298015
    Abstract: A method for decoding a video from a video bitstream is provided and at least includes: accessing a binary string representing a partition of the video, the partition comprising a plurality of coding tree units (CTUs) forming one or more CTU rows; for each CTU of the plurality of CTUs in the partition, determining whether the CTU is the first CTU in a slice or a tile; in response to determining that the CTU is the first CTU in a slice or a tile, initializing context variables for context-adaptive binary arithmetic coding (CABAC) according to a first context variable initialization process; in response to determining that the CTU is not the first CTU in a slice or a tile, determining whether parallel decoding is enabled and the CTU is the first CTU in a CTU row of a tile.
    Type: Application
    Filed: May 11, 2024
    Publication date: September 5, 2024
    Inventors: Yue YU, Haoping YU
  • Publication number: 20240298030
    Abstract: Systems and methods of the present disclosure provide solutions that address technological challenges related to 3D content. These solutions include a computer-implemented method for encoding three-dimensional (3D) content comprising: determining connectivity information of a mesh frame; packing the connectivity information of the mesh frame into coding blocks; dividing the coding blocks into connectivity coding units (CCUs) comprising connectivity coding samples; and encoding a video connectivity frame associated with the mesh frame based on the coding blocks and the connectivity coding units.
    Type: Application
    Filed: September 9, 2022
    Publication date: September 5, 2024
    Inventors: Vladyslav ZAKHARCHENKO, Haoping YU, Yue YU
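    Illustrative sketch: a Python sketch of packing mesh connectivity (faces as vertex-index triples) into fixed-size coding blocks and splitting each block into connectivity coding units (CCUs). The block and CCU sizes are arbitrary illustration values, not the publication's layout.
      def pack_connectivity(faces, block_size=64, ccu_size=16):
          blocks = [faces[i:i + block_size] for i in range(0, len(faces), block_size)]
          # Each coding block is divided into CCUs of connectivity coding samples.
          return [[block[j:j + ccu_size] for j in range(0, len(block), ccu_size)]
                  for block in blocks]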
  • Publication number: 20240289996
    Abstract: A method for coding three-dimensional (3D) content includes: extracting a video frame from a video, where the video frame includes connectivity information associated with the 3D content; and reconstructing the 3D content based on the connectivity information, where the connectivity information includes: segments representing the 3D content; and sorted faces and vertex indices within each segment.
    Type: Application
    Filed: September 9, 2022
    Publication date: August 29, 2024
    Inventors: Vladyslav ZAKHARCHENKO, Haoping YU, Yue YU
  • Publication number: 20240289997
    Abstract: A method for decoding three-dimensional (3D) content comprises extracting a video frame from a video, wherein the video frame includes connectivity information associated with the 3D content; and reconstructing the 3D content based on the connectivity information. The reconstructing comprises: determining an end of the connectivity information with respect to a mesh frame block based on a termination connectivity coding unit (CCU) in the block; and determining an end of the connectivity information with respect to a mesh frame based on a termination block in the mesh frame.
    Type: Application
    Filed: September 9, 2022
    Publication date: August 29, 2024
    Inventors: Vladyslav ZAKHARCHENKO, Haoping YU, Yue YU
  • Publication number: 20240242391
    Abstract: Systems and methods of the present disclosure provide solutions that address technological challenges related to 3D content. These solutions include a computer-implemented method for encoding three-dimensional (3D) content comprising: processing the 3D content into segments, each segment comprising a set of faces and vertex indices representative of the 3D content; processing each segment to sort the respective set of faces and vertex indices in each segment; packing each segment of 3D content to generate connectivity information frames of blocks, each block comprising a subset of the sorted faces and vertex indices; and encoding the connectivity information frames.
    Type: Application
    Filed: September 9, 2022
    Publication date: July 18, 2024
    Inventors: Vladyslav ZAKHARCHENKO, Haoping YU, Yue YU
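    Illustrative sketch: a Python sketch of sorting one segment's faces and vertex indices before packing, as the encoding method above describes. The canonicalization and sort key are illustration choices only.
      def sort_segment(faces):
          """faces: iterable of vertex-index triples for one segment."""
          canonical = [tuple(sorted(face)) for face in faces]
          return sorted(canonical)

      # Example: sort_segment([(5, 2, 9), (1, 3, 2)]) -> [(1, 2, 3), (2, 5, 9)]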
  • Publication number: 20240214585
    Abstract: In certain aspects, a method for encoding a picture of a video including a coding block is disclosed. A coefficient of each position in the coding block is quantized by a processor to generate a quantization level of the respective position. A high throughput mode is enabled. In the high throughput mode, at least one residual coding bin of the coding block is changed from a context-coded bin to a bypass-coded bin, and bypass bit-alignment is applied. The quantization levels of the coding block are encoded by the processor into a bitstream in the high throughput mode.
    Type: Application
    Filed: April 25, 2022
    Publication date: June 27, 2024
    Inventors: Yue YU, Haoping YU
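    Illustrative sketch: a Python sketch of the high throughput mode described above, in which residual bins are routed to bypass coding after a bypass bit-alignment. context_code and bypass_code are hypothetical bin coders returning bit strings; a real CABAC engine is not modeled.
      def encode_bins(bitstream, bins, high_throughput, context_code, bypass_code,
                      align=8):
          """Append coded bins to `bitstream`, a toy string of '0'/'1' characters."""
          if high_throughput:
              # Bypass bit-alignment, then code every residual bin in bypass mode.
              bitstream += "0" * (-len(bitstream) % align)
              return bitstream + "".join(bypass_code(b) for b in bins)
          return bitstream + "".join(context_code(b) for b in bins)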
  • Publication number: 20240179323
    Abstract: A video decoder decodes a video from a video bitstream encoded using Versatile Video Coding (VVC). The video decoder determines a bit depth of samples of the video based on Sequence Parameter Set (SPS) syntax element sps_bitdepth_minus8 whose value is in the range of 0 to 8. The decoder further determines the size of a decoded picture buffer (DPB) based on a Video Parameter Set (VPS) syntax element vps_ols_dpb_bitdepth_minus8 whose value is in the range of 0 to 8. The decoder allocates a storage space with the determined size of the DPB, decodes the video bitstream based on the determined bit depth, and thus obtains and stores a decoded picture in the DPB. The decoder further outputs the decoded picture.
    Type: Application
    Filed: February 7, 2024
    Publication date: May 30, 2024
    Inventors: Yue YU, Haoping YU
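    Illustrative sketch: a Python sketch of deriving the sample bit depth and a DPB storage size from the two syntax elements named in the abstract, assuming the usual "_minus8" convention (bit depth = 8 + value) and a rough 4:2:0 per-picture estimate; real VVC DPB sizing also depends on level limits not modeled here.
      def allocate_dpb(sps_bitdepth_minus8, vps_ols_dpb_bitdepth_minus8,
                       width, height, max_pictures):
          for value in (sps_bitdepth_minus8, vps_ols_dpb_bitdepth_minus8):
              if not 0 <= value <= 8:
                  raise ValueError("syntax element outside the 0..8 range")
          bit_depth = 8 + sps_bitdepth_minus8                # sample bit depth
          dpb_bit_depth = 8 + vps_ols_dpb_bitdepth_minus8    # storage bit depth
          bytes_per_sample = (dpb_bit_depth + 7) // 8
          picture_bytes = width * height * 3 // 2 * bytes_per_sample  # 4:2:0 estimate
          return bytearray(picture_bytes * max_pictures), bit_depth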
  • Publication number: 20240137567
    Abstract: A method for decoding a video including a sequence of pictures includes: a bitstream is parsed to obtain a value of a first flag; whether the value of the first flag indicates that a set of header extension parameters is present is determined; when the value of the first flag indicates that the set of header extension parameters is present, the bitstream is parsed to obtain a value of a second flag; whether the value of the second flag indicates that a first parameter in the set of header extension parameters is enabled for the sequence of pictures is determined; when it does, the bitstream is parsed to obtain a value of the first parameter for one of the slices in the sequence of pictures; and the slice is decoded based on the value of the first parameter for the slice.
    Type: Application
    Filed: December 29, 2023
    Publication date: April 25, 2024
    Inventors: Yue YU, Haoping YU
  • Publication number: 20240064303
    Abstract: In certain aspects, a method for encoding a picture of a video including a transform unit is disclosed. A coefficient of each position in the transform unit is quantized by a processor to generate a quantization level of the respective position. A high throughput mode is enabled. In the high throughput mode, a plurality of transform unit bins of the transform unit are changed from context-coded bins to bypass-coded bins, and bypass bit-alignment is applied. The quantization levels of the transform unit are encoded by the processor into a bitstream in the high throughput mode.
    Type: Application
    Filed: April 25, 2022
    Publication date: February 22, 2024
    Inventors: Yue YU, Haoping YU
  • Publication number: 20240022731
    Abstract: A computer-implemented method for encoding or decoding an input video is provided. The method includes determining a bit depth associated with the input video; determining a bit depth associated with weighted prediction offset values for the input video based on the bit depth associated with the input video; determining weighted prediction values for pictures of the input video based on an application of the weighted prediction offset values to prediction values for the pictures of the input video; and processing the input video based on the weighted prediction values and the weighted prediction offset values. An encoder and a decoder are further provided.
    Type: Application
    Filed: September 25, 2023
    Publication date: January 18, 2024
    Inventors: Yue YU, Haoping YU
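    Illustrative sketch: a Python sketch of applying a weighted prediction offset at the sample bit depth, in the conventional explicit-weighting form ((p * w + rounding) >> shift) + offset with clipping. The rule tying the offset's bit depth to the input bit depth is the subject of the application and is not reproduced here.
      def weighted_prediction(pred_samples, weight, offset, bit_depth, log2_wd=6):
          max_val = (1 << bit_depth) - 1
          rounding = (1 << (log2_wd - 1)) if log2_wd else 0
          return [min(max_val, max(0, ((p * weight + rounding) >> log2_wd) + offset))
                  for p in pred_samples]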
  • Publication number: 20230336715
    Abstract: A computer-implemented method and a computing system for encoding or decoding a video and a storage medium are provided. The method includes determining a bit depth associated with an input video, determining a bit depth associated with a weighted prediction of the input video based on the bit depth associated with the input video, determining a weighting factor and an offset value of the weighted prediction based on the bit depth associated with the weighted prediction, and processing the input video based on the weighting factor and the offset value of the weighted prediction.
    Type: Application
    Filed: June 22, 2023
    Publication date: October 19, 2023
    Inventors: Yue YU, Haoping YU
  • Patent number: 11694316
    Abstract: A method for determining experience quality of virtual reality (VR) multimedia includes, in a process of playing VR multimedia, obtaining a first sensory parameter, a second sensory parameter, and a third sensory parameter of the VR multimedia, where the first sensory parameter, the second sensory parameter, and the third sensory parameter are obtained by performing sampling separately according to the same set of at least two perceptual dimensions, and are respectively parameters that affect fidelity experience, enjoyment experience, and interaction experience, and determining a mean opinion score (MOS) of the VR multimedia based on the first sensory parameter, the second sensory parameter, and the third sensory parameter of the VR multimedia. Because the third sensory parameter is a parameter that affects the interaction experience, an interaction feature of the VR multimedia is taken into account.
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: July 4, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yi Li, Haoping Yu
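    Illustrative sketch: a Python sketch of combining the three sensory parameters into a mean opinion score. The normalization, weights, and linear mapping are placeholders, not the scoring model from the patent.
      def estimate_mos(fidelity, enjoyment, interaction, weights=(0.4, 0.3, 0.3)):
          """Each input is assumed normalized to 0..1; output is a 1..5 MOS."""
          wf, we, wi = weights
          score = wf * fidelity + we * enjoyment + wi * interaction
          return 1.0 + 4.0 * max(0.0, min(1.0, score))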