Patents by Inventor Yuwen He

Yuwen He has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200322630
    Abstract: Video data may be palette decoded. Data defining a palette table may be received. The palette table may comprise index values corresponding to respective colors. Palette index prediction data may be received and may comprise data indicating index values for at least a portion of a palette index map mapping pixels of the video data to color indices in the palette table. The palette index prediction data may comprise run value data associating run values with index values for at least a portion of a palette index map. A run value may be associated with an escape color index. The palette index map may be generated from the palette index prediction data at least in part by determining whether to adjust an index value of the palette index prediction data based on a last index value. The video data may be reconstructed in accordance with the palette index map.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Applicant: VID SCALE, INC.
    Inventors: Chia-Ming Tsai, Yuwen He, Xiaoyu Xiu, Yan Ye
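The palette decoding flow in the abstract above lends itself to a short illustration. The sketch below is a simplified, hypothetical rendering in Python: the run-length representation, the ESCAPE_INDEX marker, and the "shift the parsed index up when it meets or exceeds the last index" rule are assumptions standing in for the signaled syntax, not the claimed method.

```python
ESCAPE_INDEX = -1  # assumed marker for escape-coded (out-of-palette) pixels


def decode_palette_index_map(runs, num_pixels):
    """runs: list of (parsed_index, run_length) pairs; returns a flat palette index map."""
    index_map, last_index = [], None
    for parsed_index, run_length in runs:
        index = parsed_index
        # Assumed redundancy-removal rule: the encoder never repeats the previous
        # index, so parsed values at or above the last index are shifted up by one.
        if last_index is not None and last_index != ESCAPE_INDEX and index != ESCAPE_INDEX:
            if index >= last_index:
                index += 1
        index_map.extend([index] * run_length)
        last_index = index
    assert len(index_map) == num_pixels
    return index_map


def reconstruct_pixels(index_map, palette_table, escape_values):
    """Map palette indices back to colors; escape pixels take explicitly coded values."""
    escape_iter = iter(escape_values)
    return [next(escape_iter) if i == ESCAPE_INDEX else palette_table[i] for i in index_map]
```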
  • Publication number: 20200322632
    Abstract: Systems, methods, and instrumentalities are disclosed for discontinuous face boundary filtering for 360-degree video coding. A face discontinuity may be filtered (e.g., to reduce seam artifacts) in whole or in part, for example, using coded samples or padded samples on either side of the face discontinuity. Filtering may be applied, for example, as an in-loop filter or a post-processing step. 2D positional information related to two sides of the face discontinuity may be signaled in a video bitstream so that filtering may be applied independent of projection formats and/or frame packing techniques.
    Type: Application
    Filed: December 18, 2018
    Publication date: October 8, 2020
    Applicant: VID SCALE, INC.
    Inventors: Philippe Hanhart, Yan Ye, Yuwen He
  • Patent number: 10798423
    Abstract: Cross-plane filtering may be used to restore blurred edges and/or textures in one or both chroma planes using information from a corresponding luma plane. Adaptive cross-plane filters may be implemented. Cross-plane filter coefficients may be quantized and/or signaled such that overhead in a bitstream minimizes performance degradation. Cross-plane filtering may be applied to select regions of a video image (e.g., to edge areas). Cross-plane filters may be implemented in single-layer video coding systems and/or multi-layer video coding systems.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: October 6, 2020
    Assignee: InterDigital Madison Patent Holdings, SAS
    Inventors: Jie Dong, Yuwen He, Yan Ye
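As a rough illustration of the cross-plane idea in the abstract above, the sketch below high-pass filters the luma plane and adds the result to co-located chroma samples. The 3x3 kernel, the assumption of equal luma/chroma resolution, and the filter values are illustrative choices, not the signaled coefficients.

```python
import numpy as np


def cross_plane_filter(luma, chroma, coeffs):
    """luma, chroma: 2-D arrays of the same size; coeffs: 3x3 cross-plane filter taps."""
    h, w = luma.shape
    padded = np.pad(luma.astype(float), 1, mode="edge")
    enhanced = chroma.astype(float).copy()
    for y in range(h):
        for x in range(w):
            # The offset derived from the luma neighborhood restores edge and
            # texture detail that was blurred in the chroma plane.
            enhanced[y, x] += np.sum(padded[y:y + 3, x:x + 3] * coeffs)
    return enhanced


# Example: a Laplacian-style high-pass kernel used as the cross-plane filter.
hp_kernel = np.array([[0, -0.25, 0],
                      [-0.25, 1.0, -0.25],
                      [0, -0.25, 0]])
```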
  • Publication number: 20200304788
    Abstract: A block may be identified. The block may be partitioned into one or more (e.g., two) sibling nodes (e.g., sibling nodes B0 and B1). A partition direction and a partition type for the block may be determined. If the partition type for the block is binary tree (BT), one or more (e.g., two) partition parameters may be determined for sibling node B0. A partition parameter (e.g., a first partition parameter) may be determined for sibling node B1. A decoder may determine whether to receive an indication of a second partition parameter for B1 based on, for example, the partition direction for the block, the partition type for the block, and the first partition parameter for B1. The decoder may derive the second partition parameter based on, for example, the partition direction and type for the block, and the first partition parameter for B1.
    Type: Application
    Filed: November 1, 2018
    Publication date: September 24, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yuwen He, Fanyi Duanmu, Xiaoyu Xiu, Yan Ye
  • Publication number: 20200288168
    Abstract: Overlapped block motion compensation (OBMC) may be performed for a current video block based on motion information associated with the current video block and motion information associated with one or more neighboring blocks of the current video block. Under certain conditions, some or all of these neighboring blocks may be omitted from the OBMC operation of the current block. For instance, a neighboring block may be skipped during the OBMC operation if the current video block and the neighboring block are both uni-directionally or bi-directionally predicted, if the motion vectors associated with the current block and the neighboring block refer to a same reference picture, and if a sum of absolute differences between those motion vectors is smaller than a threshold value. Further, OBMC may be conducted in conjunction with regular motion compensation and may use simpler filters than those traditionally allowed.
    Type: Application
    Filed: September 28, 2018
    Publication date: September 10, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yan Zhang, Xiaoyu Xiu, Yuwen He, Yan Ye
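The skip condition spelled out in the abstract above maps to a simple test. In this sketch, motion information is modeled as a list of (reference index, mv_x, mv_y) tuples, one per prediction direction; that representation and the threshold are assumptions for illustration.

```python
MV_SIMILARITY_THRESHOLD = 1  # assumed threshold, e.g. in quarter-pel units


def skip_neighbor_for_obmc(cur_motion, nbr_motion, threshold=MV_SIMILARITY_THRESHOLD):
    """Return True if the neighboring block may be omitted from the OBMC operation."""
    # Both blocks must use the same prediction type (both uni- or both bi-directional).
    if len(cur_motion) != len(nbr_motion):
        return False
    mv_difference = 0
    for (cur_ref, cur_x, cur_y), (nbr_ref, nbr_x, nbr_y) in zip(cur_motion, nbr_motion):
        # The motion vectors must refer to the same reference picture.
        if cur_ref != nbr_ref:
            return False
        mv_difference += abs(cur_x - nbr_x) + abs(cur_y - nbr_y)
    # The sum of absolute MV differences must be smaller than the threshold.
    return mv_difference < threshold
```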
  • Publication number: 20200267381
    Abstract: Systems, methods and instrumentalities are disclosed for adaptively selecting an adaptive loop filter (ALF) procedure for a frame based on which temporal layer the frame is in. ALF procedures may vary in computational complexity. One or more frames including the current frame may be in a temporal layer of a coding scheme. The decoder may determine the current frame's temporal layer level within the coding scheme. The decoder may select an ALF procedure based on the current frame's temporal layer level. If the current frame's temporal layer level is higher within the coding scheme than some other temporal layer levels, an ALF procedure that is less computationally complex may be selected for the current frame. Then the decoder may perform the selected ALF procedure on the current frame.
    Type: Application
    Filed: October 31, 2018
    Publication date: August 20, 2020
    Applicant: VID SCALE, INC.
    Inventors: Rahul Vanam, Yuwen He, Yan Ye
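A minimal sketch of the selection logic described above, assuming three hypothetical ALF variants and layer cut-offs; the actual complexity tiers and thresholds would be design choices of the codec.

```python
def select_alf_procedure(temporal_layer, max_temporal_layer):
    """Pick an ALF variant whose computational complexity decreases as the
    frame's temporal layer level increases."""
    if temporal_layer >= max_temporal_layer:
        return "alf_low_complexity"      # cheapest filtering for the top layer
    if temporal_layer >= max_temporal_layer - 1:
        return "alf_medium_complexity"   # intermediate complexity
    return "alf_full_complexity"         # full ALF for low (heavily referenced) layers
```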
  • Patent number: 10750172
    Abstract: Prediction systems and methods for video coding based on nearest neighboring pixels are described. In exemplary embodiments, to code a first pixel, a plurality of neighboring pixels of the first pixel are reconstructed. The coefficients of a filter such as a Wiener filter are derived based on the reconstructed neighboring pixels. The Wiener filter is applied to the reconstructed neighboring pixels to predict the first pixel. The coefficients of the Wiener filter may be derived on a pixel-by-pixel or a block-by-block basis. The reconstructed pixels may be pixels in the same picture (for intra prediction) or in a reference picture (for inter prediction). In some embodiments, the residuals of the prediction are encoded using RDPCM. In some embodiments, the residuals may be predicted using a Wiener filter.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: August 18, 2020
    Assignee: Vid Scale, Inc.
    Inventors: Rahul Vanam, Yuwen He, Yan Ye
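The sketch below illustrates the idea of deriving filter coefficients from already-reconstructed neighbors and applying them to predict a pixel. The causal 3-sample template (left, top, top-left) and the least-squares solve are simplifying assumptions standing in for the Wiener-filter derivation.

```python
import numpy as np


def predict_pixel(recon, y, x):
    """Predict the sample at (y, x) from reconstructed samples in recon (y, x >= 1)."""
    samples, targets = [], []
    # Collect training pairs from the reconstructed region above and left of (y, x).
    for ty in range(1, y):
        for tx in range(1, x):
            samples.append([recon[ty, tx - 1], recon[ty - 1, tx], recon[ty - 1, tx - 1]])
            targets.append(recon[ty, tx])
    if not samples:
        return float(recon[y, x - 1])  # not enough context: fall back to the left neighbor
    A = np.asarray(samples, dtype=float)
    b = np.asarray(targets, dtype=float)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares filter taps
    # Apply the derived filter to the current pixel's causal neighbors.
    template = np.array([recon[y, x - 1], recon[y - 1, x], recon[y - 1, x - 1]], dtype=float)
    return float(template @ coeffs)
```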
  • Publication number: 20200260120
    Abstract: Systems, methods, and instrumentalities may be provided for discounting reconstructed samples and/or coding information from spatial neighbors across face discontinuities. Whether a current block is located at a face discontinuity may be determined. The face discontinuity may be a face boundary between two or more adjoining blocks that are not spherical neighbors. The coding availability of a neighboring block of the current block may be determined, e.g., based on whether the neighboring block is on the same side of the face discontinuity as the current block. For example, the neighboring block may be determined to be available for decoding the current block if it is on the same side of the face discontinuity as the current block, and unavailable if it is not on the same side of the face discontinuity. The neighboring block may be a spatial neighboring block or a temporal neighboring block.
    Type: Application
    Filed: September 19, 2018
    Publication date: August 13, 2020
    Applicant: Vid Scale, Inc.
    Inventors: Philippe Hanhart, Yuwen He, Yan Ye
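The availability rule in the abstract above reduces to a small check once each block knows which face it lies in. The face ids and the set of discontinuous face pairs below are illustrative assumptions about the frame-packed layout.

```python
def neighbor_is_available(cur_face, nbr_face, discontinuous_pairs):
    """A neighboring block is usable for decoding the current block only if no
    face discontinuity separates the two blocks."""
    if cur_face == nbr_face:
        return True
    # Blocks on opposite sides of a face discontinuity adjoin in the packed picture
    # but are not spherical neighbors, so their samples/motion are discounted.
    return frozenset((cur_face, nbr_face)) not in discontinuous_pairs


# Example: in a hypothetical 3x2 cube-map packing, faces 2 and 3 adjoin in the
# packed picture without being spherical neighbors.
discontinuities = {frozenset((2, 3))}
print(neighbor_is_available(2, 3, discontinuities))  # False
```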
  • Publication number: 20200252629
    Abstract: Sampling grid information may be determined for multi-layer video coding systems. The sampling grid information may be used to align the video layers of a coding system. Sampling grid correction may be performed based on the sampling grid information. The sampling grids may also be detected. In some embodiments, a sampling grid precision may also be detected and/or signaled.
    Type: Application
    Filed: April 22, 2020
    Publication date: August 6, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yan Ye, Yuwen He, Jie Dong
  • Patent number: 10735764
    Abstract: Video data may be palette decoded. Data defining a palette table may be received. The palette table may comprise index values corresponding to respective colors. Palette index prediction data may be received and may comprise data indicating index values for at least a portion of a palette index map mapping pixels of the video data to color indices in the palette table. The palette index prediction data may comprise run value data associating run values with index values for at least a portion of a palette index map. A run value may be associated with an escape color index. The palette index map may be generated from the palette index prediction data at least in part by determining whether to adjust an index value of the palette index prediction data based on a last index value. The video data may be reconstructed in accordance with the palette index map.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: August 4, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Chia-Ming Tsai, Yuwen He, Xiaoyu Xiu, Yan Ye
  • Publication number: 20200221122
    Abstract: A device may determine whether to enable or disable bi-directional optical flow (BIO) for a current coding unit (CU) (e.g., block and/or sub-block). Prediction information for the CU may be identified and may include prediction signals associated with a first reference block and a second reference block (e.g., or a first reference sub-block and a second reference sub-block). A prediction difference may be calculated and may be used to determine the similarity between the two prediction signals. The CU may be reconstructed based on the similarity. For example, whether to reconstruct the CU with BIO enabled or BIO disabled may be based on whether the two prediction signals are similar. It may be determined to enable BIO for the CU when the two prediction signals are determined to be dissimilar. For example, the CU may be reconstructed with BIO disabled when the two prediction signals are determined to be similar.
    Type: Application
    Filed: July 3, 2018
    Publication date: July 9, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yan Ye, Xiaoyu Xiu, Yuwen He
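A compact sketch of the similarity test described above: the two prediction signals are compared, and BIO is enabled only when they differ enough for the optical-flow refinement to be worthwhile. The per-sample absolute-difference metric and the threshold value are assumptions for illustration.

```python
import numpy as np


def use_bio(pred0, pred1, threshold=2.0):
    """pred0, pred1: 2-D arrays holding the two prediction signals for the CU
    (or sub-block). Returns True if BIO should be enabled."""
    diff_per_sample = np.abs(pred0.astype(float) - pred1.astype(float)).mean()
    return diff_per_sample > threshold   # similar signals -> BIO stays disabled
```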
  • Patent number: 10708605
    Abstract: A video device may generate an enhanced inter-layer reference (E-ILR) picture to assist in predicting an enhancement layer picture of a scalable bitstream. An E-ILR picture may include one or more E-ILR blocks. An E-ILR block may be generated using a differential method, a residual method, a bi-prediction method, and/or a uni-prediction method. The video device may determine a first time instance. The video device may subtract a block of a first base layer picture characterized by the first time instance from a block of an enhancement layer picture characterized by the first time instance to generate a differential block characterized by the first time instance. The video device may perform motion compensation on the differential block and add the motion compensated differential block to a block of a second base layer picture characterized by a second time instance to generate an E-ILR block.
    Type: Grant
    Filed: April 4, 2014
    Date of Patent: July 7, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Yuwen He, Yan Ye
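The differential method described above can be sketched in a few lines: the base/enhancement difference at the first time instance is motion-compensated and added to the base-layer block at the second time instance. The whole-pel motion compensation via np.roll is a deliberate simplification.

```python
import numpy as np


def e_ilr_block(base_t0, enh_t0, base_t1, mv):
    """All inputs are 2-D arrays of co-located blocks; mv = (dy, dx) in whole pels."""
    diff_t0 = enh_t0.astype(float) - base_t0.astype(float)   # differential block at t0
    mc_diff = np.roll(diff_t0, shift=mv, axis=(0, 1))         # crude motion compensation
    return base_t1.astype(float) + mc_diff                    # E-ILR block for time t1
```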
  • Patent number: 10694204
    Abstract: Systems and methods are disclosed for improving the prediction efficiency for residual prediction using motion compensated residual prediction (MCRP). Exemplary residual prediction techniques employ motion compensated prediction and processed residual reference pictures. Further disclosed herein are systems and methods for generating residual reference pictures. These pictures can be generated adaptively with or without considering in-loop filtering effects. Exemplary de-noising filter designs are also described for enhancing the quality of residual reference pictures, and compression methods are described for reducing the storage size of reference pictures. Further disclosed herein are exemplary syntax designs for communicating residuals' motion information.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: June 23, 2020
    Assignee: Vid Scale, Inc.
    Inventors: Chun-Chi Chen, Xiaoyu Xiu, Yuwen He, Yan Ye
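A rough sketch of the motion compensated residual prediction idea, assuming a residual reference picture (reconstruction minus prediction) is stored next to the ordinary reference picture and that the residual carries its own motion vector; whole-pel compensation and the array-slicing block fetch are simplifications.

```python
import numpy as np


def mcrp_predict(ref_pic, residual_ref_pic, mv_texture, mv_residual, y, x, h, w):
    """Return the combined prediction for an h x w block at position (y, x)."""
    def fetch(picture, mv):
        dy, dx = mv
        return picture[y + dy:y + dy + h, x + dx:x + dx + w].astype(float)

    texture_pred = fetch(ref_pic, mv_texture)              # regular motion-compensated prediction
    residual_pred = fetch(residual_ref_pic, mv_residual)   # motion-compensated residual prediction
    return texture_pred + residual_pred
```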
  • Publication number: 20200169768
    Abstract: Media content coded using scalable coding techniques may be cached among a group of cache devices. Layered segments of the media content may be pre-loaded onto the cache devices, which may be located throughout a content distribution network, including a home network. The caching location of the media content may be determined based on multiple factors including a content preference associated with the group of cache devices and device capabilities. A cache controller may manage the caching of the media content.
    Type: Application
    Filed: January 29, 2020
    Publication date: May 28, 2020
    Applicant: Vid Scale, Inc.
    Inventors: Yong He, Yuwen He, Yan Ye, Ralph Neff
  • Publication number: 20200169753
    Abstract: 360-degree video content may be coded. A sampling position in a projection format may be determined to code 360-degree video content. For example, a sampling position in a target projection format and a sampling position in a reference projection format may be identified. The sampling position in the target projection format may be related to the corresponding sampling position in the reference projection format via a transform function. A parameter weight (e.g., a reference parameter weight) for the sampling position in the reference projection format may be identified. An adjustment factor associated with the parameter weight for the sampling position in the reference projection format may be determined. The parameter weight (e.g., adjusted parameter weight) for the sampling position in the target projection format may be calculated. The calculated adjusted parameter weight may be applied to the sampling position in the target projection format when coding the 360-degree video content.
    Type: Application
    Filed: June 29, 2018
    Publication date: May 28, 2020
    Applicant: VID SCALE, INC.
    Inventors: Xiaoyu Xiu, Yuwen He, Yan Ye
  • Patent number: 10666953
    Abstract: Sampling grid information may be determined for multi-layer video coding systems. The sampling grid information may be used to align the video layers of a coding system. Sampling grid correction may be performed based on the sampling grid information. The sampling grids may also be detected. In some embodiments, a sampling grid precision may also be detected and/or signaled.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: May 26, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Yan Ye, Yuwen He, Jie Dong
  • Patent number: 10652588
    Abstract: Systems, methods, and instrumentalities are disclosed for inverse reshaping for high dynamic range (HDR) video coding. A video coding device, e.g., such as a decoder, may determine a plurality of pivot points associated with a plurality of piecewise segments of an inverse reshaping model. The plurality of pivot points may be determined based on an indication received via a message. Each piecewise segment may be defined by a plurality of coefficients. The video coding device may receive an indication of a first subset of coefficients associated with the plurality of piecewise segments. The video coding device may calculate a second subset of coefficients based on the first subset of coefficients and the plurality of pivot points. The video coding device may generate an inverse reshaping model using one or more of the plurality of pivot points, the first subset of coefficients, and the second subset of coefficients.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: May 12, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Louis Kerofsky, Yuwen He, Yan Ye, Arash Vosoughi
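To make the pivot/coefficient split concrete, the sketch below models each piecewise segment as linear: the slope plays the role of a signaled (first-subset) coefficient, and the offset is the derived (second-subset) coefficient obtained by enforcing continuity at the pivot points. The linear form and the continuity rule are assumptions, not the signaled model.

```python
def derive_offsets(pivots, slopes, start_value=0.0):
    """pivots: segment boundaries x_0..x_n; slopes: one signaled slope per segment.
    Returns one derived offset per segment so that the segments join continuously."""
    offsets, value = [], start_value
    for i, slope in enumerate(slopes):
        offsets.append(value - slope * pivots[i])        # segment i: y = slope * x + offset
        value += slope * (pivots[i + 1] - pivots[i])     # model value at the next pivot
    return offsets


def inverse_reshape(sample, pivots, slopes, offsets):
    """Map a coded sample back through its piecewise segment."""
    for i in range(len(slopes)):
        if sample < pivots[i + 1] or i == len(slopes) - 1:
            return slopes[i] * sample + offsets[i]


# Example: three segments over [0, 1024) with hypothetical signaled slopes.
pivots, slopes = [0, 256, 512, 1024], [0.5, 1.0, 2.0]
offsets = derive_offsets(pivots, slopes)
print(inverse_reshape(300, pivots, slopes, offsets))  # 172.0
```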
  • Publication number: 20200120359
    Abstract: A coding device (e.g., that may be or may include encoder and/or decoder) may receive a frame-packed picture of 360-degree video. The coding device may identify a face in the frame-packed picture that the current block belongs to. The coding device may determine that the current block is located at a boundary of the face that the current block belongs to. The coding device may identify multiple spherical neighboring blocks of the current block. The coding device may identify a cross-face boundary neighboring block. The coding device may identify a block in the frame-packed picture that corresponds to the cross-face boundary neighboring block. The coding device may determine whether to use the identified block to code the current block based on availability of the identified block. The coding device may code the current block based on the determination to use the identified block.
    Type: Application
    Filed: April 10, 2018
    Publication date: April 16, 2020
    Applicant: VID SCALE, INC.
    Inventors: Philippe Hanhart, Yuwen He, Yan Ye
  • Patent number: 10616597
    Abstract: Systems, methods, and instrumentalities are disclosed for reference picture set mapping for scalable video coding. A device may receive an encoded scalable video stream comprising a base layer video stream and an enhancement layer video stream. The base layer video stream and the enhancement layer video stream may be encoded according to different video codecs. For example, the base layer video stream may be encoded according to H.264/AVC and the enhancement layer may be encoded according to HEVC. The enhancement layer video stream may include inter-layer prediction information. The inter-layer prediction information may include information relating to the base layer coding structure. The inter-layer prediction information may identify one or more reference pictures available in a base layer decoded picture buffer (DPB). A decoder may use the inter-layer prediction information to decode the enhancement layer video stream.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: April 7, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Yong He, Yan Ye, Yuwen He
  • Publication number: 20200107027
    Abstract: A video coding device may identify a network abstraction layer (NAL) unit. The video coding device may determine whether the NAL unit includes an active parameter set for a current layer. When the NAL unit includes the active parameter set for the current layer, the video coding device may set an NAL unit header layer identifier associated with the NAL unit to at least one of: zero, a value indicative of the current layer, or a value indicative of a reference layer of the current layer. The NAL unit may be a picture parameter set (PPS) NAL unit. The NAL unit may be a sequence parameter set (SPS) NAL unit.
    Type: Application
    Filed: December 5, 2019
    Publication date: April 2, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yong He, Yan Ye, Xiaoyu Xiu, Yuwen He
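As a final illustration, the constraint described above can be expressed as a simple validity check on the NAL unit header layer identifier. The NAL unit type codes follow HEVC (SPS = 33, PPS = 34), but the function name and the permitted-value rule below are illustrative assumptions.

```python
SPS_NUT, PPS_NUT = 33, 34  # HEVC NAL unit type codes for SPS and PPS


def nuh_layer_id_is_valid(nal_unit_type, nuh_layer_id, current_layer, reference_layers):
    """Check that a parameter-set NAL unit active for current_layer carries a
    permitted nuh_layer_id: zero, the current layer, or one of its reference layers."""
    if nal_unit_type not in (SPS_NUT, PPS_NUT):
        return True  # the rule only constrains parameter-set NAL units
    permitted = {0, current_layer} | set(reference_layers)
    return nuh_layer_id in permitted
```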