Patents by Inventor Xiaoyu Xiu

Xiaoyu Xiu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200322630
    Abstract: Video data may be palette decoded. Data defining a palette table may be received. The palette table may comprise index values corresponding to respective colors. Palette index prediction data may be received and may comprise data indicating index values for at least a portion of a palette index map mapping pixels of the video data to color indices in the palette table. The palette index prediction data may comprise run value data associating run values with index values for at least a portion of a palette index map. A run value may be associated with an escape color index. The palette index map may be generated from the palette index prediction data at least in part by determining whether to adjust an index value of the palette index prediction data based on a last index value. The video data may be reconstructed in accordance with the palette index map.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Applicant: VID SCALE, INC.
    Inventors: Chia-Ming Tsai, Yuwen He, Xiaoyu Xiu, Yan Ye
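
A minimal Python sketch of the general idea in the abstract above: run-value pairs are expanded into a flat palette index map, with each signaled index adjusted against the last decoded index value. The adjustment rule (shifting signaled indices at or above the previous index up by one) and the flat layout are assumptions for illustration, not the claimed decoding process.

```python
def decode_palette_index_map(runs, num_pixels, escape_index):
    """runs: list of (index_value, run_length) pairs; returns a flat palette index map."""
    index_map = []
    last_index = None
    for index_value, run_length in runs:
        # Hypothetical adjustment rule: a signaled index at or above the previously
        # decoded index is shifted up by one, so the previous index never needs
        # to be signaled again explicitly.
        if last_index is not None and index_value >= last_index:
            index_value += 1
        index_map.extend([index_value] * run_length)
        # Escape-coded samples do not update the last index value in this sketch.
        if index_value != escape_index:
            last_index = index_value
    assert len(index_map) == num_pixels, "runs must cover the whole block"
    return index_map
```
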
  • Publication number: 20200304788
    Abstract: A block may be identified. The block may be partitioned into one or more (e.g., two) sibling nodes (e.g., sibling nodes B0 and B1). A partition direction and a partition type for the block may be determined. If the partition type for the block is binary tree (BT), one or more (e.g., two) partition parameters may be determined for sibling node B0. A partition parameter (e.g., a first partition parameter) may be determined for sibling node B1. A decoder may determine whether to receive an indication of a second partition parameter for B1 based on, for example, the partition direction for the block, the partition type for the block, and the first partition parameter for B1. The decoder may derive the second partition parameter based on, for example, the partition direction and type for the block, and the first partition parameter for B1.
    Type: Application
    Filed: November 1, 2018
    Publication date: September 24, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yuwen He, Fanyi Duanmu, Xiaoyu Xiu, Yan Ye
  • Publication number: 20200296406
    Abstract: Methods and apparatuses are provided for video coding. The method includes: partitioning video pictures into a plurality of coding units (CUs), at least one of which is further partitioned into two prediction units (PUs) including at least one triangular shaped PU; constructing a first merge list including a plurality of candidates, each including one or more motion vectors, based on a merge list construction process for regular merge prediction; and obtaining an index listing including a plurality of reference indices, where each reference index comprises a reference to a motion vector of a candidate in the first merge list.
    Type: Application
    Filed: March 12, 2020
    Publication date: September 17, 2020
    Applicant: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Xianglin WANG, Yi-Wen CHEN, Xiaoyu XIU, Tsung-Chuan MA
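
A small sketch of the index-list idea in the abstract above, assuming a simplified candidate layout: the regular merge list is built first, and each entry of the index list refers to one motion vector (L0 or L1) of one of those candidates, so a triangular PU can reuse a uni-directional motion vector. The ordering and data layout are illustrative assumptions, not the claimed construction.

```python
def build_uni_directional_index_list(merge_list):
    """merge_list: list of candidates, each a dict with optional 'L0'/'L1' motion vectors.
    Returns a list of (candidate_index, list_id) references into the merge list."""
    index_list = []
    for cand_idx, cand in enumerate(merge_list):
        for list_id in ("L0", "L1"):
            if cand.get(list_id) is not None:
                # Each reference points at a single motion vector of a merge candidate.
                index_list.append((cand_idx, list_id))
    return index_list
```
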
  • Publication number: 20200288168
    Abstract: Overlapped block motion compensation (OBMC) may be performed for a current video block based on motion information associated with the current video block and motion information associated with one or more neighboring blocks of the current video block. Under certain conditions, some or all of these neighboring blocks may be omitted from the OBMC operation of the current block. For instance, a neighboring block may be skipped during the OBMC operation if the current video block and the neighboring block are both uni-directionally or bi-directionally predicted, if the motion vectors associated with the current block and the neighboring block refer to a same reference picture, and if a sum of absolute differences between those motion vectors is smaller than a threshold value. Further, OBMC may be conducted in conjunction with regular motion compensation and may use simpler filters than those traditionally allowed.
    Type: Application
    Filed: September 28, 2018
    Publication date: September 10, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yan Zhang, Xiaoyu Xiu, Yuwen He, Yan Ye
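
A minimal sketch of the neighbor-skipping test described in the abstract above, assuming a simplified motion-information structure; the field names and the threshold value are illustrative, not the claimed design.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MotionInfo:
    ref_pictures: List[int]        # reference picture indices, one per prediction direction
    mvs: List[Tuple[int, int]]     # one motion vector per reference (1 entry = uni, 2 = bi)

def skip_neighbor_for_obmc(cur: MotionInfo, nbr: MotionInfo, mv_sad_threshold: int = 1) -> bool:
    """Return True if the neighboring block may be omitted from OBMC for the current block."""
    # Both blocks must use the same prediction direction (both uni- or both bi-predicted).
    if len(cur.mvs) != len(nbr.mvs):
        return False
    # Their motion vectors must refer to the same reference picture(s).
    if cur.ref_pictures != nbr.ref_pictures:
        return False
    # The motion must be nearly identical: sum of absolute MV differences below a threshold.
    sad = sum(abs(c[0] - n[0]) + abs(c[1] - n[1]) for c, n in zip(cur.mvs, nbr.mvs))
    return sad < mv_sad_threshold
```
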
  • Patent number: 10735764
    Abstract: Video data may be palette decoded. Data defining a palette table may be received. The palette table may comprise index values corresponding to respective colors. Palette index prediction data may be received and may comprise data indicating index values for at least a portion of a palette index map mapping pixels of the video data to color indices in the palette table. The palette index prediction data may comprise run value data associating run values with index values for at least a portion of a palette index map. A run value may be associated with an escape color index. The palette index map may be generated from the palette index prediction data at least in part by determining whether to adjust an index value of the palette index prediction data based on a last index value. The video data may be reconstructed in accordance with the palette index map.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: August 4, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Chia-Ming Tsai, Yuwen He, Xiaoyu Xiu, Yan Ye
  • Publication number: 20200221122
    Abstract: A device may determine whether to enable or disable bi-directional optical flow (BIO) for a current coding unit (CU) (e.g., block and/or sub-block). Prediction information for the CU may be identified and may include prediction signals associated with a first reference block and a second reference block (e.g., or a first reference sub-block and a second reference sub-block). A prediction difference may be calculated and may be used to determine the similarity between the two prediction signals. The CU may be reconstructed based on the similarity. For example, whether to reconstruct the CU with BIO enabled or BIO disabled may be based on whether the two prediction signals are similar. It may be determined to enable BIO for the CU when the two prediction signals are determined to be dissimilar. For example, the CU may be reconstructed with BIO disabled when the two prediction signals are determined to be similar.
    Type: Application
    Filed: July 3, 2018
    Publication date: July 9, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yan Ye, Xiaoyu Xiu, Yuwen He
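
A minimal sketch of the enable/disable decision described in the abstract above: a distortion between the two prediction signals is computed, and BIO is enabled only when they are dissimilar. The SAD metric and the per-sample threshold are assumptions for illustration, not the specific measure defined in the application.

```python
def bio_enabled(pred0, pred1, per_sample_threshold=2):
    """pred0, pred1: equal-length sequences of predicted sample values for the CU or sub-block."""
    prediction_difference = sum(abs(a - b) for a, b in zip(pred0, pred1))
    # Similar prediction signals -> reconstruct with BIO disabled;
    # dissimilar prediction signals -> reconstruct with BIO enabled.
    return prediction_difference > per_sample_threshold * len(pred0)
```
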
  • Patent number: 10694204
    Abstract: Systems and methods are disclosed for improving the prediction efficiency for residual prediction using motion compensated residual prediction (MCRP). Exemplary residual prediction techniques employ motion compensated prediction and processed residual reference pictures. Further disclosed herein are systems and methods for generating residual reference pictures. These pictures can be generated adaptively with or without considering in-loop filtering effects. Exemplary de-noising filter designs are also described for enhancing the quality of residual reference pictures, and compression methods are described for reducing the storage size of reference pictures. Further disclosed herein are exemplary syntax designs for communicating residuals' motion information.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: June 23, 2020
    Assignee: Vid Scale, Inc.
    Inventors: Chun-Chi Chen, Xiaoyu Xiu, Yuwen He, Yan Ye
  • Publication number: 20200169753
    Abstract: 360-degree video content may be coded. A sampling position in a projection format may be determined to code 360-degree video content. For example, a sampling position in a target projection format and a sampling position in a reference projection format may be identified. The sample position in the target projection format may be related to the corresponding sample position in the reference projection format via a transform function. A parameter weight (e.g., a reference parameter weight) for the sampling position in the reference projection format may be identified. An adjustment factor associated with the parameter weight for the sampling position in the reference projection format may be determined. The parameter weight (e.g., adjusted parameter weight) for the sampling position in the target projection format may be calculated. The calculated adjusted parameter weight may be applied to the sampling position in the target projection format when coding the 360-degree video content.
    Type: Application
    Filed: June 29, 2018
    Publication date: May 28, 2020
    Applicant: VID SCALE, INC.
    Inventors: Xiaoyu Xiu, Yuwen He, Yan Ye
  • Publication number: 20200107027
    Abstract: A video coding device may identify a network abstraction layer (NAL) unit. The video coding device may determine whether the NAL unit includes an active parameter set for a current layer. When the NAL unit includes the active parameter set for the current layer, the video coding device may set an NAL unit header layer identifier associated with the NAL unit to at least one of: zero, a value indicative of the current layer, or a value indicative of a reference layer of the current layer. The NAL unit may be a picture parameter set (PPS) NAL unit. The NAL unit may be a sequence parameter set (SPS) NAL unit.
    Type: Application
    Filed: December 5, 2019
    Publication date: April 2, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yong He, Yan Ye, Xiaoyu Xiu, Yuwen He
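
A minimal sketch of the layer-identifier rule in the abstract above, assuming hypothetical names: when a PPS or SPS NAL unit carries the active parameter set for the current layer, its header layer identifier is set to zero, a value indicating the current layer, or a value indicating one of its reference layers.

```python
def allowed_nuh_layer_ids(current_layer_id, reference_layer_ids):
    """Candidate values for the NAL unit header layer identifier of a PPS/SPS NAL unit
    that carries the active parameter set for the current layer: zero, a value
    indicating the current layer, or a value indicating a reference layer of it."""
    return {0, current_layer_id, *reference_layer_ids}

# Example: current layer 2 with reference layers [0, 1]
# allowed_nuh_layer_ids(2, [0, 1]) -> {0, 1, 2}
```
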
  • Publication number: 20200092582
    Abstract: A system, method, and/or instrumentality may be provided for coding a 360-degree video. A picture of the 360-degree video may be received. The picture may include one or more faces associated with one or more projection formats. A first projection format indication may be received that indicates a first projection format may be associated with a first face. A second projection format indication may be received that indicates a second projection format may be associated with a second face. Based on the first projection format, a first transform function associated with the first face may be determined. Based on the second projection format, a second transform function associated with the second face may be determined. At least one decoding process may be performed on the first face using the first transform function and/or at least one decoding process may be performed on the second face using the second transform function.
    Type: Application
    Filed: May 24, 2018
    Publication date: March 19, 2020
    Applicant: VID SCALE, INC.
    Inventors: Xiaoyu Xiu, Yuwen He, Yan Ye
  • Publication number: 20200077087
    Abstract: A video coding device may encode a video signal using intra-block copy prediction. A first picture prediction unit of a first picture may be identified. A second picture may be coded and identified. The second picture may be temporally related to the first picture, and the second picture may include second picture prediction units. A second picture prediction unit that is collocated with the first picture prediction unit may be identified. Prediction information for the first picture prediction unit may be generated. The prediction information may be based on a block vector of the second picture prediction unit that is collocated with the first picture prediction unit.
    Type: Application
    Filed: November 7, 2019
    Publication date: March 5, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yuwen He, Xiaoyu Xiu, Yan Ye
  • Publication number: 20200059671
    Abstract: Systems, methods, and instrumentalities are provided to implement a video coding system (VCS). The VCS may be configured to receive a video signal, which may include one or more layers (e.g., a base layer (BL) and/or one or more enhancement layers (ELs)). The VCS may be configured to process a BL picture into an inter-layer reference (ILR) picture, e.g., using a picture-level inter-layer prediction process. The VCS may be configured to select one or both of the processed ILR picture or an enhancement layer (EL) reference picture. The selected reference picture(s) may comprise one of the EL reference picture, or the ILR picture. The VCS may be configured to predict a current EL picture using one or more of the selected ILR picture or the EL reference picture. The VCS may be configured to store the processed ILR picture in an EL decoded picture buffer (DPB).
    Type: Application
    Filed: October 24, 2019
    Publication date: February 20, 2020
    Applicant: Vid Scale, Inc.
    Inventors: Yan Ye, George W. McClellan, Yong He, Xiaoyu Xiu, Yuwen He, Jie Dong, Can Bal, Eun Seok Ryu
  • Publication number: 20200045336
    Abstract: A video coding system (e.g., an encoder and/or a decoder) may perform face-based sub-block motion compensation for 360-degree video to predict samples (e.g., of a sub-block). The video coding system may receive a 360-degree video content. The 360-degree video content may include a current block. The current block may include a plurality of sub-blocks. The system may determine whether a sub-block mode is used for the current block. The system may predict a sample in the current block based on the sub-block level face association. For a first sub-block in the current block, the system may identify a first location of the first sub-block. The system may associate the first sub-block with a first face based on the identified first location of the first sub-block. The system may predict a first sample in the first sub-block based on the first face that is associated with the first sub-block.
    Type: Application
    Filed: March 3, 2018
    Publication date: February 6, 2020
    Applicant: VID SCALE, INC.
    Inventors: Xiaoyu Xiu, Yuwen He, Yan Ye
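
A small sketch of the sub-block/face association described in the abstract above, assuming a hypothetical packed-face layout: each sub-block is assigned to the face containing its own position and predicted with that face's geometry. The `predict_in_face` callable is a placeholder, not part of the described system.

```python
def face_for_position(x, y, face_width, face_height):
    """Return the (row, col) of the packed face that contains position (x, y)."""
    return (y // face_height, x // face_width)

def predict_sub_blocks(sub_blocks, face_width, face_height, predict_in_face):
    """sub_blocks: list of dicts with 'x'/'y' positions; predict_in_face is a placeholder
    callable that predicts a sub-block's samples using a given face's geometry."""
    predictions = []
    for sub in sub_blocks:
        face = face_for_position(sub["x"], sub["y"], face_width, face_height)
        # Each sub-block is associated with the face at its own location and predicted
        # with that face's geometry, rather than with a single face for the whole block.
        predictions.append(predict_in_face(face, sub))
    return predictions
```
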
  • Patent number: 10547853
    Abstract: A video coding device may identify a network abstraction layer (NAL) unit. The video coding device may determine whether the NAL unit includes an active parameter set for a current layer. When the NAL unit includes the active parameter set for the current layer, the video coding device may set an NAL unit header layer identifier associated with the NAL unit to at least one of: zero, a value indicative of the current layer, or a value indicative of a reference layer of the current layer. The NAL unit may be a picture parameter set (PPS) NAL unit. The NAL unit may be a sequence parameter set (SPS) NAL unit.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: January 28, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Yong He, Yan Ye, Xiaoyu Xiu, Yuwen He
  • Patent number: 10516882
    Abstract: A video coding device may encode a video signal using intra-block copy prediction. A first picture prediction unit of a first picture may be identified. A second picture may be coded and identified. The second picture may be temporally related to the first picture, and the second picture may include second picture prediction units. A second picture prediction unit that is collocated with the first picture prediction unit may be identified. Prediction information for the first picture prediction unit may be generated. The prediction information may be based on a block vector of the second picture prediction unit that is collocated with the first picture prediction unit.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: December 24, 2019
    Assignee: VID Scale, Inc.
    Inventors: Yuwen He, Xiaoyu Xiu, Yan Ye
  • Patent number: 10484686
    Abstract: A palette index map of a video coding unit may be flipped during palette coding if a large run of similar pixels is present at the beginning of the coding unit and a small run of similar pixels is present at the end of the coding unit. The flipping may enable efficient signaling and coding of the large run of pixels. An indication may be sent signaling the flipping. During decoding, an inverse flip may be performed to restore the pixels of the flipped coding unit to their original positions. Selection of a prediction mode for palette coding may take into account various combinations of an index mode run followed by a copy-above mode run. A prediction mode with the smallest per-pixel average bit cost may be selected. Palette sharing may be enabled.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: November 19, 2019
    Assignee: VID SCALE, Inc.
    Inventors: Xiaoyu Xiu, Yan Ye, Yuwen He
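
A minimal sketch of the flipping idea in the abstract above: when the long run of similar pixels sits at the start of the coding unit, flipping the index map moves it to the end, where run-based signaling is cheaper, and the decoder undoes the flip. The run-comparison heuristic below is a stand-in assumption, not the claimed selection criterion.

```python
def _leading_run(values):
    """Length of the run of identical values at the start of a non-empty sequence."""
    run = 1
    for a, b in zip(values, values[1:]):
        if a != b:
            break
        run += 1
    return run

def maybe_flip_index_map(index_map):
    """index_map: 2D list of palette indices. Returns (flipped_flag, index_map)."""
    flat = [v for row in index_map for v in row]
    run_at_start = _leading_run(flat)
    run_at_end = _leading_run(flat[::-1])
    if run_at_start > run_at_end:
        # 180-degree flip so the long run ends the block instead of starting it.
        return True, [row[::-1] for row in index_map[::-1]]
    return False, index_map

def inverse_flip(index_map, flipped_flag):
    """Decoder side: restore the original pixel positions when the flag is set."""
    if flipped_flag:
        return [row[::-1] for row in index_map[::-1]]
    return index_map
```
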
  • Patent number: 10484717
    Abstract: Systems, methods, and instrumentalities are provided to implement a video coding system (VCS). The VCS may be configured to receive a video signal, which may include one or more layers (e.g., a base layer (BL) and/or one or more enhancement layers (ELs)). The VCS may be configured to process a BL picture into an inter-layer reference (ILR) picture, e.g., using a picture-level inter-layer prediction process. The VCS may be configured to select one or both of the processed ILR picture or an enhancement layer (EL) reference picture. The selected reference picture(s) may comprise one of the EL reference picture, or the ILR picture. The VCS may be configured to predict a current EL picture using one or more of the selected ILR picture or the EL reference picture. The VCS may be configured to store the processed ILR picture in an EL decoded picture buffer (DPB).
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: November 19, 2019
    Assignee: VID SCALE, Inc.
    Inventors: Yan Ye, George W. McClellan, Yong He, Xiaoyu Xiu, Yuwen He, Jie Dong, Can Bal, Eun Seok Ryu
  • Patent number: 10469847
    Abstract: Cross-component prediction (CCP) and adaptive color transform (ACT) may be performed concurrently in a video coding system. CCP and ACT may be enabled/disabled at the same level (e.g. at the transform unit level) via an indicator signaled in the bitstream such as the ACT enable indicator for the CU. Inverse CCP and ACT may be operated at the same level (e.g. at the transform unit level). Prediction residuals may be converted to original color space without waiting for reconstruction of luma and chroma residuals of an entire prediction unit or coding unit. CCP and ACT transforms may be combined into one process to reduce encoding/decoding latency. Differences in dynamic ranges of color components may be compensated by variable dynamic range adjustments.
    Type: Grant
    Filed: September 11, 2015
    Date of Patent: November 5, 2019
    Assignee: VID SCALE, Inc.
    Inventors: Xiaoyu Xiu, Yan Ye, Yuwen He
  • Patent number: 10405000
    Abstract: Methods and apparatus are provided for performing one-dimensional (1D) transform and coefficient scanning. An encoder may apply 1D transform in either a horizontal or a vertical direction. The encoder may then determine a coefficient scan order based on the 1D transform direction. The scan order may be determined to be in a direction orthogonal to the 1D transform direction. The encoder may further flip the coefficients prior to scanning. The flipping may also be in a direction orthogonal to the 1D transform direction. A decoder may receive indications from the encoder with respect to the 1D transform, coefficient scanning, and/or coefficient flipping. The decoder may perform functions inverse to those performed by the encoder based on the indications.
    Type: Grant
    Filed: November 23, 2015
    Date of Patent: September 3, 2019
    Assignee: VID SCALE, Inc.
    Inventors: Jiun-Yu Kao, Maryam Azimi Hashemi, Xiaoyu Xiu, Yuwen He, Yan Ye
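
A minimal sketch of the scan-order rule in the abstract above: after a 1-D transform in one direction, coefficients are scanned in the orthogonal direction, optionally after a flip along that orthogonal direction. The block layout and flip handling are illustrative assumptions, not the claimed scheme.

```python
def scan_coefficients(block, transform_dir, flip=False):
    """block: 2D list of coefficients; transform_dir: 'horizontal' or 'vertical'.
    Returns a 1D coefficient list scanned orthogonally to the transform direction."""
    if transform_dir == "horizontal":
        # Horizontal 1-D transform -> scan column by column (vertical scan).
        cols = list(zip(*block))
        if flip:
            cols = [col[::-1] for col in cols]   # flip along the vertical (orthogonal) direction
        return [c for col in cols for c in col]
    elif transform_dir == "vertical":
        # Vertical 1-D transform -> scan row by row (horizontal scan).
        rows = [list(r) for r in block]
        if flip:
            rows = [row[::-1] for row in rows]   # flip along the horizontal (orthogonal) direction
        return [c for row in rows for c in row]
    raise ValueError("transform_dir must be 'horizontal' or 'vertical'")
```
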
  • Patent number: 10404988
    Abstract: Systems and methods are described for generating and decoding a video data bit stream containing a high-level signaling lossless coding syntax element indicating that lossless coding is used. The high-level signaling syntax is one of a Video Parameter Set (VPS), Sequence Parameter Set (SPS), Picture Parameter Set (PPS), or slice segment header. The lossless coding syntax element may be used as a condition for generating one or more SPS, PPS and slice segment header syntax elements related to the quantization, transform, transform skip, transform skip rotation, and in-loop filtering processes.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: September 3, 2019
    Assignee: Vid Scale, Inc.
    Inventors: Yan Ye, Xiaoyu Xiu, Yuwen He
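
A minimal sketch of the conditional signaling described in the abstract above: a high-level lossless flag gates whether quantization-, transform-, and filtering-related syntax elements are generated at all. The element names below are illustrative assumptions, not the actual syntax of the application.

```python
def sps_syntax_elements(lossless_coding_enabled_flag, params):
    """Return the ordered list of (name, value) syntax elements to signal."""
    elements = [("lossless_coding_enabled_flag", int(lossless_coding_enabled_flag))]
    if not lossless_coding_enabled_flag:
        # Quantization, transform-skip, and in-loop-filter tools are only meaningful
        # for lossy coding, so their syntax elements are generated only in that case.
        elements += [
            ("init_qp_minus26", params["init_qp_minus26"]),
            ("transform_skip_enabled_flag", params["transform_skip_enabled_flag"]),
            ("transform_skip_rotation_enabled_flag", params["transform_skip_rotation_enabled_flag"]),
            ("sample_adaptive_offset_enabled_flag", params["sample_adaptive_offset_enabled_flag"]),
        ]
    return elements
```
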