Patents Assigned to VID Scale, Inc.
  • Patent number: 10932152
    Abstract: Systems, methods, and instrumentalities are disclosed to perform rate adaptation in a wireless transmit/receive unit (WTRU). The WTRU may receive an encoded data stream, which may be encoded according to the Dynamic Adaptive Streaming over HTTP (DASH) standard. The WTRU may request and/or receive the data stream from a content server. The WTRU may monitor and/or receive a cross-layer parameter, such as a physical layer parameter, an RRC layer parameter, and/or a MAC layer parameter (e.g., a CQI, a PRB allocation, an MRM, or the like). The WTRU may perform rate adaptation based on the cross-layer parameter. For example, the WTRU may set the CE bit of an Explicit Congestion Notification (ECN) field based on the cross-layer parameter. The WTRU may determine to request the data stream encoded at a different rate based on the cross-layer parameter, the CE bit, and/or a prediction based on the cross-layer parameter.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: February 23, 2021
    Assignee: VID SCALE, Inc.
    Inventors: Anantharaman Balasubramanian, Gregory S. Sternberg, Liangping Ma, Samian Kaur, Yuriy Reznik, Avi Rapaport, Weimin Liu, Eduardo Asbun
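The congestion-marking and rate-selection logic described above can be sketched as follows; this is a minimal illustration, and the function names, CQI threshold, and rate values are assumptions rather than details from the patent.

```python
def mark_ce_bit(cqi, cqi_threshold=7):
    """Set the ECN Congestion Experienced (CE) bit when the physical-layer
    channel quality indicator (CQI) drops below a threshold."""
    return 1 if cqi < cqi_threshold else 0

def select_rate(available_rates_kbps, ce_bit):
    """Request the lowest encoded rate when congestion is signaled via the
    CE bit; otherwise request the highest available encoded rate."""
    rates = sorted(available_rates_kbps)
    return rates[0] if ce_bit else rates[-1]
```

For example, `select_rate([500, 1000, 2000], mark_ce_bit(cqi=3))` falls back to the 500 kbps representation when the channel degrades.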
  • Patent number: 10917660
    Abstract: Intra planar approach(es) may be used to predict a pixel(s) in a current block. The current block may be associated with a reconstructed left reference line, a reconstructed top reference line, and a non-reconstructed reference line to be predicted. The reconstructed reference lines may have been decoded and may be available. The non-reconstructed reference lines to be predicted may include a non-reconstructed right and/or a non-reconstructed bottom reference line. A pivot reference pixel may be identified and may be located on an extension of the reconstructed left and/or top reference lines. A reference pixel may be determined and may be located on the reconstructed top and/or left reference lines. Pixels on the non-reconstructed reference line(s) may be predicted based on the pivot reference pixel and the reference pixel. Pixels of the current block may be predicted using the predicted pixels on the right and the bottom reference lines.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: February 9, 2021
    Assignee: VID SCALE, Inc.
    Inventors: Rahul Vanam, Yuwen He, Yan Ye
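The two-stage prediction above can be sketched as follows: interpolate the non-reconstructed line between a reference pixel and a pivot pixel, then form a bilinear planar prediction from all four lines. This is an illustrative sketch with integer arithmetic chosen for simplicity, not the patent's exact weighting.

```python
def interp_line(ref, pivot, n):
    """Predict n samples on a non-reconstructed reference line by linear
    interpolation between a reconstructed reference pixel and a pivot
    reference pixel located on an extended reconstructed line."""
    return [((n - 1 - i) * ref + (i + 1) * pivot) // n for i in range(n)]

def planar_block(top, left, right, bottom):
    """Bilinear planar prediction of an n x n block from its four
    reference lines (the right/bottom lines may themselves be predicted)."""
    n = len(top)
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            h = (n - 1 - x) * left[y] + (x + 1) * right[y]   # horizontal blend
            v = (n - 1 - y) * top[x] + (y + 1) * bottom[x]   # vertical blend
            pred[y][x] = (h + v) // (2 * n)
    return pred
```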
  • Publication number: 20210029378
    Abstract: A video coding device may be configured to perform directional Bi-directional optical flow (BDOF) refinement on a coding unit (CU). The device may determine the direction in which to perform directional BDOF refinement. The device may calculate the vertical direction gradient difference and the horizontal direction gradient difference for the CU. The vertical direction gradient difference may indicate the difference between the vertical gradients for a first reference picture and the vertical gradients for a second reference picture. The horizontal direction gradient difference may indicate the difference between the horizontal gradients for the first reference picture and the horizontal gradients for the second reference picture. The video coding device may determine the direction in which to perform directional BDOF refinement based on the vertical direction gradient difference and the horizontal direction gradient difference.
    Type: Application
    Filed: April 5, 2019
    Publication date: January 28, 2021
    Applicant: VID SCALE, INC.
    Inventors: Yuwen He, Xiaoyu Xiu, Yan Ye
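The direction decision described above can be sketched as a comparison of accumulated gradient differences between the two reference pictures; the ratio threshold and return labels here are assumptions for illustration.

```python
def bdof_direction(gh0, gh1, gv0, gv1, ratio=2):
    """Choose the direction for directional BDOF refinement: compare the
    horizontal gradient difference (ref0 vs. ref1) against the vertical
    gradient difference, refining only the dominant direction."""
    dh = sum(abs(a - b) for a, b in zip(gh0, gh1))
    dv = sum(abs(a - b) for a, b in zip(gv0, gv1))
    if dh > ratio * dv:
        return "horizontal"
    if dv > ratio * dh:
        return "vertical"
    return "both"
```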
  • Patent number: 10904571
    Abstract: A system, method, and/or instrumentality may be provided for coding a 360-degree video. A picture of the 360-degree video may be received. The picture may include one or more faces associated with one or more projection formats. A first projection format indication may be received that indicates a first projection format may be associated with a first face. A second projection format indication may be received that indicates a second projection format may be associated with a second face. Based on the first projection format, a first transform function associated with the first face may be determined. Based on the second projection format, a second transform function associated with the second face may be determined. At least one decoding process may be performed on the first face using the first transform function and/or at least one decoding process may be performed on the second face using the second transform function.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: January 26, 2021
    Assignee: VID SCALE, Inc.
    Inventors: Xiaoyu Xiu, Yuwen He, Yan Ye
  • Patent number: 10897629
    Abstract: In an intra-block copy video encoding method, an encoder performs a hash-based search to identify a selected set of candidate blocks for prediction of an input video block. For each of the candidate blocks in the selected set, the encoder determines a correlation between, on the one hand, luma and chroma components of the input video block and, on the other hand, luma and chroma components of the respective candidate blocks. A predictor block is selected based on the correlation and is used to encode the input video block. In different embodiments, the correlation may be the negative of the sum of absolute differences of the components, may include a Jaccard similarity measure between respective pixels, or may be based on a Hamming distance between two high precision hash values of the input video block and the candidate block.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: January 19, 2021
    Assignee: Vid Scale, Inc.
    Inventors: Yuwen He, Xiaoyu Xiu, Yan Ye, Ralph Neff
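The predictor selection above can be sketched with one of the correlation measures named in the abstract (negative SAD); the Hamming-distance helper shows the hash-based variant. Names and the candidate representation (flat sample lists) are assumptions.

```python
def sad(a, b):
    """Sum of absolute differences over corresponding samples."""
    return sum(abs(x - y) for x, y in zip(a, b))

def hamming(h0, h1):
    """Hamming distance between two integer hash values (the hash-based
    correlation variant mentioned in the abstract)."""
    return bin(h0 ^ h1).count("1")

def select_predictor(input_block, candidates):
    """Among hash-matched candidate blocks, select the one whose combined
    luma+chroma samples correlate best with the input block; maximizing
    the negative SAD is equivalent to minimizing the SAD."""
    return min(candidates, key=lambda c: sad(input_block, c))
```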
  • Publication number: 20210014472
    Abstract: Systems, methods, and instrumentalities are disclosed for client centric service quality control. A first viewport of a 360 degree video may be determined. The 360 degree video may comprise one or more of an equirectangular, a cube-map, a cylindrical, a pyramidal, and/or a spherical projection mapping. The first viewport may be associated with a spatial region of the 360 degree video. An adjacent area that extends around the spatial region may be determined. A second viewport of the 360 degree video may be determined. A bitstream associated with the 360 degree video may be received. One or more enhanced regions may be included in the bitstream. The one or more enhanced regions may correspond to the first and/or second viewport. A high coding bitrate may be associated with the first viewport and/or the second viewport.
    Type: Application
    Filed: September 29, 2020
    Publication date: January 14, 2021
    Applicant: Vid Scale, Inc.
    Inventors: Yong He, Yan Ye, Srinivas Gudumasu, Eduardo Asbun, Ahmed Hamza
  • Patent number: 10887621
    Abstract: Processing video data may include capturing the video data with multiple cameras and stitching the video data together to obtain a 360-degree video. A frame-packed picture may be provided based on the captured and stitched video data. A current sample location may be identified in the frame-packed picture. Whether a neighboring sample location is located outside of a content boundary of the frame-packed picture may be determined. When the neighboring sample location is located outside of the content boundary, a padding sample location may be derived based on at least one circular characteristic of the 360-degree video content and the projection geometry. The 360-degree video content may be processed based on the padding sample location.
    Type: Grant
    Filed: July 7, 2017
    Date of Patent: January 5, 2021
    Assignee: VID SCALE, Inc.
    Inventors: Yuwen He, Yan Ye, Philippe Hanhart, Xiaoyu Xiu
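For an equirectangular projection, the circular characteristic used to derive a padding sample location can be sketched as below; the pole-reflection rule is a common geometry-padding convention and is assumed here rather than taken verbatim from the patent.

```python
def pad_sample_location(x, y, width, height):
    """Map a neighboring sample location that falls outside the content
    boundary of an equirectangular frame back inside it: horizontal
    positions wrap circularly, while vertical overshoot reflects across
    the pole together with a 180-degree horizontal shift."""
    if y < 0:
        y = -y - 1
        x += width // 2
    elif y >= height:
        y = 2 * height - 1 - y
        x += width // 2
    return x % width, y
```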
  • Patent number: 10880349
    Abstract: Quality-based optimizations of a delivery process of streaming content may be enabled. The optimization may take the form of quality-based switching. To enable quality-based switching in a streaming client, the client may have access to information about the quality of an encoded segment and/or sub-segment. Quality-related information may include any number of added quality metrics relating to an encoded segment and/or sub-segment of an encoded video stream. The addition of quality-related information may be accomplished by including the quality-related information in a manifest file, including the quality-related information in segment indices stored in a segment index file, and/or providing additional files with quality-related segment information and providing a link to the information from an MPD file. Upon receiving the quality-related information, the client may request and receive a stream that has a lower bitrate, thereby saving bandwidth while retaining quality of the streaming content.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: December 29, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Yuriy Reznik, Eduardo Asbun, Zhifeng Chen, Rahul Vanam
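The quality-based switching logic can be sketched as follows: given per-representation quality metrics parsed from a manifest or segment index, the client requests the cheapest stream that still meets a quality floor. The tuple layout and fallback rule are illustrative assumptions.

```python
def pick_representation(representations, quality_floor):
    """Given (bitrate_kbps, quality) pairs from e.g. an MPD or segment
    index, request the lowest-bitrate stream whose signaled quality meets
    the floor; fall back to the best quality if none does."""
    acceptable = [r for r in representations if r[1] >= quality_floor]
    if acceptable:
        return min(acceptable, key=lambda r: r[0])
    return max(representations, key=lambda r: r[1])
```

For example, with representations at 500, 1000, and 2000 kbps, a client that only needs quality 0.94 can drop from 2000 to 1000 kbps, saving bandwidth while retaining acceptable quality.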
  • Patent number: 10872582
    Abstract: Systems and methods are described for adjusting the color spectrum of synthetic objects in augmented reality (AR) displays under varying lighting conditions, accounting for a human observer's spectral sensitivities, to produce customized color matches for a specific observer. Spectral data may be captured, and color matching functions (CMFs) of the observer may be used by a spectral color workflow that produces color display values, for example coordinates in RGB space. The color rendering may be custom-matched for multiple observers with different color perceptions under a wide range of environmental (ambient lighting) conditions.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: December 22, 2020
    Assignee: Vid Scale, Inc.
    Inventor: David Wyble
  • Patent number: 10841566
    Abstract: Systems, methods, and instrumentalities are disclosed for client centric service quality control. A first viewport of a 360 degree video may be determined. The 360 degree video may comprise one or more of an equirectangular, a cube-map, a cylindrical, a pyramidal, and/or a spherical projection mapping. The first viewport may be associated with a spatial region of the 360 degree video. An adjacent area that extends around the spatial region may be determined. A second viewport of the 360 degree video may be determined. A bitstream associated with the 360 degree video may be received. One or more enhanced regions may be included in the bitstream. The one or more enhanced regions may correspond to the first and/or second viewport. A high coding bitrate may be associated with the first viewport and/or the second viewport.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: November 17, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Yong He, Yan Ye, Srinivas Gudumasu, Eduardo Asbun, Ahmed Hamza
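The viewport-plus-adjacent-area region described above can be sketched as a simple rectangle expansion; coordinate conventions and the margin parameter are assumptions for illustration.

```python
def enhanced_region(viewport, margin, frame_w, frame_h):
    """Grow a viewport's spatial region (x, y, w, h) by an adjacent area
    of 'margin' pixels on every side, clamped to the frame; this is the
    region that would be carried at the higher coding bitrate."""
    x, y, w, h = viewport
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(frame_w, x + w + margin), min(frame_h, y + h + margin)
    return (x0, y0, x1 - x0, y1 - y0)
```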
  • Patent number: 10841615
    Abstract: Systems, methods, and devices are disclosed for performing adaptive color space conversion and adaptive entropy encoding of LUT parameters. A video bitstream may be received and a first flag may be determined based on the video bitstream. The residual may be converted from a first color space to a second color space in response to the first flag. The residual may be coded in two parts separated by the most significant bits and least significant bits of the residual. The residual may be further coded based on its absolute value.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: November 17, 2020
    Assignee: VID SCALE, Inc.
    Inventors: Yuwen He, Yan Ye
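The two-part residual coding described above (most significant bits and least significant bits, with the value's sign and absolute value handled separately) can be sketched as below; the split point `lsb_bits` is an illustrative assumption.

```python
def split_residual(value, lsb_bits=2):
    """Code a residual in two parts: sign, most significant bits, and
    least significant bits of its absolute value."""
    mag = abs(value)
    return (1 if value < 0 else 0,
            mag >> lsb_bits,
            mag & ((1 << lsb_bits) - 1))

def join_residual(sign, msb, lsb, lsb_bits=2):
    """Reassemble the residual from its coded parts."""
    mag = (msb << lsb_bits) | lsb
    return -mag if sign else mag
```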
  • Publication number: 20200351543
    Abstract: Systems, methods, and instrumentalities are disclosed for dynamic picture-in-picture (PIP) by a client. The client may reside on any device. The client may receive video content from a server, and identify an object within the video content using at least one of object recognition or metadata. The metadata may include information that indicates a location of an object within a frame of the video content. The client may receive a selection of the object by a user, and determine positional data of the object across frames of the video content using at least one of object recognition or metadata. The client may display an enlarged and time-delayed version of the object within a PIP window across the frames of the video content. Alternatively or additionally, the location of the PIP window within each frame may be fixed or may be based on the location of the object within each frame.
    Type: Application
    Filed: August 23, 2018
    Publication date: November 5, 2020
    Applicant: Vid Scale, Inc.
    Inventors: Louis Kerofsky, Eduardo Asbun
  • Publication number: 20200344458
    Abstract: A video coding device may be configured to periodically select the frame packing configuration (e.g., face layout and/or face rotation parameters) associated with a RAS. The device may receive a plurality of pictures, which may each comprise a plurality of faces. The pictures may be grouped into a plurality of RASs. The device may select a frame packing configuration with the lowest cost for a first RAS. For example, the cost of a frame packing configuration may be determined based on the first picture of the first RAS. The device may select a frame packing configuration for a second RAS. The frame packing configuration for the first RAS may be different than the frame packing configuration for the second RAS. The frame packing configuration for the first RAS and the frame packing configuration for the second RAS may be signaled in the video bitstream.
    Type: Application
    Filed: January 14, 2019
    Publication date: October 29, 2020
    Applicant: VID SCALE, INC.
    Inventors: Philippe Hanhart, Yuwen He, Yan Ye
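The per-RAS selection described above reduces to a lowest-cost search evaluated on the first picture of each segment; the cost model is caller-supplied and hypothetical here.

```python
def select_frame_packing(configurations, cost_fn, first_picture):
    """Select, per random access segment (RAS), the frame packing
    configuration (face layout + rotations) with the lowest cost, where
    the cost is evaluated on the first picture of that RAS."""
    return min(configurations, key=lambda cfg: cost_fn(cfg, first_picture))
```

Different RASs may thus end up with different configurations, each of which would be signaled in the bitstream.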
  • Publication number: 20200336738
    Abstract: Systems, methods, and instrumentalities are described herein for calculating local illumination compensation (LIC) parameters for a bi-predicted coding unit (CU). The LIC parameters may be used to generate adjusted samples for the current CU and to address local illumination changes that may exist among temporal neighboring pictures. LIC parameters may be calculated based on bi-predicted reference template samples and template samples for a current CU. Bi-predicted reference template samples may be generated based on reference template samples neighboring temporal reference CUs. For example, the bi-predicted reference template samples may be generated based on averaging the reference template samples. The reference template samples may correspond to template samples for the current CU. A CU may be or may include a coding block and/or a sub-block that may be derived by dividing the coding block.
    Type: Application
    Filed: January 15, 2019
    Publication date: October 22, 2020
    Applicant: VID SCALE, INC.
    Inventors: Xiaoyu Xiu, Yuwen He, Yan Ye, Saurav Bandyopadhyay
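The parameter derivation above can be sketched as averaging the two reference templates and fitting a scale/offset model against the current CU's template; the least-squares fit is a common way to derive LIC parameters and is assumed here for illustration.

```python
def lic_params(tmpl_ref0, tmpl_ref1, tmpl_cur):
    """Derive LIC scale a and offset b (y ~ a*x + b) from the current
    CU's template samples (y) and bi-predicted reference template
    samples (x) obtained by averaging the two reference templates."""
    x = [(r0 + r1) / 2 for r0, r1 in zip(tmpl_ref0, tmpl_ref1)]
    y = tmpl_cur
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 1.0
    b = (sy - a * sx) / n
    return a, b
```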
  • Publication number: 20200322630
    Abstract: Video data may be palette decoded. Data defining a palette table may be received. The palette table may comprise index values corresponding to respective colors. Palette index prediction data may be received and may comprise data indicating index values for at least a portion of a palette index map mapping pixels of the video data to color indices in the palette table. The palette index prediction data may comprise run value data associating run values with index values for at least a portion of a palette index map. A run value may be associated with an escape color index. The palette index map may be generated from the palette index prediction data at least in part by determining whether to adjust an index value of the palette index prediction data based on a last index value. The video data may be reconstructed in accordance with the palette index map.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Applicant: VID SCALE, INC.
    Inventors: Chia-Ming Tsai, Yuwen He, Xiaoyu Xiu, Yan Ye
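The run expansion and index adjustment described above can be sketched as follows; the adjustment rule shown (parsed values at or above the last decoded index are shifted up, since a repeat would have been coded as a longer run) is a common palette-coding convention and is assumed here for illustration.

```python
def adjust_index(parsed, last):
    """Restore a coded palette index against the last decoded index:
    parsed values >= last are shifted up by one."""
    return parsed + 1 if last is not None and parsed >= last else parsed

def decode_index_map(run_pairs):
    """Expand (parsed_index, run_length) pairs from the palette index
    prediction data into a palette index map."""
    index_map, last = [], None
    for parsed, run in run_pairs:
        idx = adjust_index(parsed, last)
        index_map.extend([idx] * run)
        last = idx
    return index_map
```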
  • Publication number: 20200322632
    Abstract: Systems, methods, and instrumentalities are disclosed for discontinuous face boundary filtering for 360-degree video coding. A face discontinuity may be filtered (e.g., to reduce seam artifacts) in whole or in part, for example, using coded samples or padded samples on either side of the face discontinuity. Filtering may be applied, for example, as an in-loop filter or a post-processing step. 2D positional information related to the two sides of the face discontinuity may be signaled in a video bitstream so that filtering may be applied independently of projection formats and/or frame packing techniques.
    Type: Application
    Filed: December 18, 2018
    Publication date: October 8, 2020
    Applicant: VID SCALE, INC.
    Inventors: Philippe Hanhart, Yan Ye, Yuwen He
  • Publication number: 20200304788
    Abstract: A block may be identified. The block may be partitioned into one or more (e.g., two) sibling nodes (e.g., sibling nodes B0 and B1). A partition direction and a partition type for the block may be determined. If the partition type for the block is binary tree (BT), one or more (e.g., two) partition parameters may be determined for sibling node B0. A partition parameter (e.g., a first partition parameter) may be determined for sibling node B1. A decoder may determine whether to receive an indication of a second partition parameter for B1 based on, for example, the partition direction for the block, the partition type for the block, and the first partition parameter for B1. The decoder may derive the second partition parameter based on, for example, the partition direction and type for the block, and the first partition parameter for B1.
    Type: Application
    Filed: November 1, 2018
    Publication date: September 24, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yuwen He, Fanyi Duanmu, Xiaoyu Xiu, Yan Ye
  • Publication number: 20200288168
    Abstract: Overlapped block motion compensation (OBMC) may be performed for a current video block based on motion information associated with the current video block and motion information associated with one or more neighboring blocks of the current video block. Under certain conditions, some or all of these neighboring blocks may be omitted from the OBMC operation of the current block. For instance, a neighboring block may be skipped during the OBMC operation if the current video block and the neighboring block are both uni-directionally or bi-directionally predicted, if the motion vectors associated with the current block and the neighboring block refer to a same reference picture, and if a sum of absolute differences between those motion vectors is smaller than a threshold value. Further, OBMC may be conducted in conjunction with regular motion compensation and may use simpler filters than those traditionally allowed.
    Type: Application
    Filed: September 28, 2018
    Publication date: September 10, 2020
    Applicant: VID SCALE, INC.
    Inventors: Yan Zhang, Xiaoyu Xiu, Yuwen He, Yan Ye
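The skip condition above can be sketched directly; the motion-vector representation and threshold value are illustrative assumptions.

```python
def skip_obmc_neighbor(cur_mvs, nb_mvs, cur_refs, nb_refs, threshold=4):
    """Decide whether a neighboring block can be omitted from OBMC: skip
    it when both blocks use the same prediction direction (same number of
    motion vectors), refer to the same reference pictures, and their
    motion vectors differ by less than a threshold (sum of absolute
    differences over all vector components)."""
    if len(cur_mvs) != len(nb_mvs) or cur_refs != nb_refs:
        return False
    diff = sum(abs(a - b)
               for mv_c, mv_n in zip(cur_mvs, nb_mvs)
               for a, b in zip(mv_c, mv_n))
    return diff < threshold
```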
  • Patent number: 10771780
    Abstract: Improved method and apparatus for signaling of reference pictures used for temporal prediction. The signaling schemes and construction process for different reference picture lists in HEVC Working Draft 5 (WD5) are improved.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: September 8, 2020
    Assignee: VID SCALE, INC.
    Inventors: Yan Ye, Yong He
  • Publication number: 20200275156
    Abstract: A device may be configured to render at least one spatial region of 360-degree media content, which may include two or more spatial regions. The device may include a receiver configured to receive the 360-degree media content and metadata associated with the 360-degree content. The metadata may include a classification of a respective spatial region of the 360-degree media content. The device may further include a memory configured to store a user preference and a sensor configured to detect a user movement. The device may include a processor configured to determine that the user movement is associated with a rendering of the respective spatial region. The processor may further determine whether the classification complies with the user preference and alter the rendering of the respective spatial region if the classification violates the user preference.
    Type: Application
    Filed: October 2, 2018
    Publication date: August 27, 2020
    Applicant: Vid Scale, Inc.
    Inventors: Yong He, Yan Ye, Ali C. Begen, Ahmed Hamza