Patents by Inventor Jian-Liang Lin

Jian-Liang Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11263722
    Abstract: A video processing method includes: decoding a part of a bitstream to generate a decoded frame, where the decoded frame is a projection-based frame that includes projection faces in a hemisphere cubemap projection layout; and remapping sample locations of the projection-based frame to locations on the sphere, where a sample location within the projection-based frame is converted into a local sample location within a projection face packed in the projection-based frame; in response to adjustment criteria being met, an adjusted local sample location within the projection face is generated by applying adjustment to one coordinate value of the local sample location within the projection face, and the adjusted local sample location within the projection face is remapped to a location on the sphere; and in response to the adjustment criteria not being met, the local sample location within the projection face is remapped to a location on the sphere.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: March 1, 2022
    Assignee: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
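The conditional sample-location adjustment this abstract describes can be sketched roughly as follows. This is an illustrative reading only, not the claimed implementation: the adjustment criterion, the half-sample shift, and the front-face mapping are all hypothetical stand-ins.

```python
import math

def remap_to_sphere(u, v, face_w, face_h, needs_adjustment):
    """Sketch: convert a frame sample location into a local location
    within a projection face, optionally adjust one coordinate, then
    remap to the sphere. The adjustment criterion and shift amount
    are hypothetical; the cube-face mapping shown is the front face."""
    # Local sample location in [-1, 1] within the projection face
    x = 2.0 * (u + 0.5) / face_w - 1.0
    y = 2.0 * (v + 0.5) / face_h - 1.0
    if needs_adjustment:
        # Apply adjustment to one coordinate value of the local location
        x = x + 0.5 / face_w  # hypothetical half-sample shift
    # Remap the (possibly adjusted) local location to the unit sphere
    norm = math.sqrt(x * x + y * y + 1.0)
    return (x / norm, y / norm, 1.0 / norm)
```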
  • Publication number: 20210398245
    Abstract: A video processing method includes: decoding a part of a bitstream to generate a decoded frame, where the decoded frame is a projection-based frame that includes projection faces in a hemisphere cubemap projection layout; and remapping sample locations of the projection-based frame to locations on the sphere, where a sample location within the projection-based frame is converted into a local sample location within a projection face packed in the projection-based frame; in response to adjustment criteria being met, an adjusted local sample location within the projection face is generated by applying adjustment to one coordinate value of the local sample location within the projection face, and the adjusted local sample location within the projection face is remapped to a location on the sphere; and in response to the adjustment criteria not being met, the local sample location within the projection face is remapped to a location on the sphere.
    Type: Application
    Filed: January 7, 2021
    Publication date: December 23, 2021
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Publication number: 20210392374
    Abstract: A video processing method includes a step of receiving a bitstream, and a step of decoding a part of the bitstream to generate a decoded frame, including parsing a plurality of syntax elements from the bitstream. The decoded frame is a projection-based frame that includes a plurality of projection faces packed at a plurality of face positions with different position indexes in a hemisphere cubemap projection layout. A portion of a 360-degree content of a sphere is mapped to the plurality of projection faces via hemisphere cubemap projection. Values of the plurality of syntax elements are indicative of face indexes of the plurality of projection faces packed at the plurality of face positions, respectively, and are constrained to meet a requirement of bitstream conformance.
    Type: Application
    Filed: February 16, 2021
    Publication date: December 16, 2021
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Patent number: 11196992
    Abstract: A method and apparatus of video coding incorporating a Deep Neural Network (DNN) are disclosed. A target signal is processed using the DNN, where the target signal provided to the DNN input corresponds to the reconstructed residual, the output from the prediction process, the reconstruction process, one or more filtering processes, or a combination thereof. The output data from the DNN is provided to the encoding process or the decoding process. The DNN can be used to restore pixel values of the target signal or to predict the sign of one or more residual pixels between the target signal and an original signal. The absolute value of one or more residual pixels can be signalled in the video bitstream and used together with the sign to reduce the residual error of the target signal.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: December 7, 2021
    Assignee: MEDIATEK INC.
    Inventors: Yu-Wen Huang, Yu-Chen Sun, Tzu-Der Chuang, Jian-Liang Lin, Ching-Yeh Chen
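The sign-prediction idea in this abstract reduces bitstream cost by sending only magnitudes. A minimal sketch, with the DNN's output replaced by a plain list of predicted +1/-1 signs:

```python
def reconstruct_residual(abs_values, predicted_signs):
    """Sketch: the decoder receives signalled absolute residual values
    and combines them with DNN-predicted signs (here a plain list of
    +1/-1 standing in for the network output) to form the residual."""
    return [s * a for s, a in zip(predicted_signs, abs_values)]
```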
  • Patent number: 11190768
    Abstract: A video decoding method includes decoding a part of a bitstream to generate a decoded frame, and parsing at least one syntax element from the bitstream. The decoded frame is a projection-based frame that has projection faces packed in a cube-based projection layout. At least a portion of a 360-degree content of a sphere is mapped to the projection faces via cube-based projection. The at least one syntax element is indicative of packing of the projection faces in the cube-based projection layout.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: November 30, 2021
    Assignee: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Patent number: 11190801
    Abstract: A video encoding method includes: encoding a projection-based frame to generate a part of a bitstream, wherein at least a portion of a 360-degree content of a sphere is mapped to projection faces via cube-based projection, and the projection-based frame has the projection faces packed in a cube-based projection layout; and signaling at least one syntax element via the bitstream, wherein said at least one syntax element is associated with a mapping function that is employed by the cube-based projection to determine sample locations for each of the projection faces.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: November 30, 2021
    Assignee: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Publication number: 20210337229
    Abstract: A video decoding method includes decoding a part of a bitstream to generate a decoded frame. The decoded frame is a projection-based frame that comprises at least one projection face and at least one guard band packed in a projection layout. At least a portion of a 360-degree content of a sphere is mapped to the at least one projection face via projection. The decoded frame is in a 4:2:0 chroma format or a 4:2:2 chroma format, and a guard band size of each of the at least one guard band is equal to an even number of luma samples.
    Type: Application
    Filed: July 11, 2021
    Publication date: October 28, 2021
    Applicant: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
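The even-guard-band constraint follows from chroma subsampling: in 4:2:0 and 4:2:2 the chroma planes are subsampled by 2 horizontally, so an even guard-band size in luma samples keeps guard bands aligned to whole chroma samples. A sketch of the check (function name is illustrative, not codec syntax):

```python
def guard_band_conforms(guard_band_luma_samples, chroma_format):
    """Sketch: in 4:2:0 or 4:2:2 chroma formats, a guard band must
    span an even number of luma samples so it covers an integer
    number of chroma samples."""
    if chroma_format in ("4:2:0", "4:2:2"):
        return guard_band_luma_samples % 2 == 0
    return True  # e.g. 4:4:4 has no subsampling constraint
```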
  • Publication number: 20210337230
    Abstract: A video decoding method includes decoding a part of a bitstream to generate a decoded frame, wherein the decoded frame is a projection-based frame that includes a plurality of projection faces packed in a projection layout with M projection face columns and N projection face rows, M and N are positive integers, and at least a portion of a 360-degree content of a sphere is mapped to the plurality of projection faces via projection. Regarding the decoded frame, a picture width excluding guard band samples is equal to an integer multiple of M, and a picture height excluding guard band samples is equal to an integer multiple of N.
    Type: Application
    Filed: July 11, 2021
    Publication date: October 28, 2021
    Applicant: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
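The divisibility constraint above ensures every projection face receives an integer number of samples. A trivial sketch of the conformance check (names are illustrative):

```python
def layout_conforms(pic_width, pic_height, M, N):
    """Sketch: with M face columns and N face rows, the picture width
    and height (excluding guard-band samples) must be integer
    multiples of M and N respectively."""
    return pic_width % M == 0 and pic_height % N == 0
```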
  • Patent number: 11134271
    Abstract: Methods and apparatus of processing 360-degree virtual reality (VR360) pictures are disclosed. According to one method, if a leaf coding unit contains one or more face edges, the leaf coding unit is split into sub-processing units along the face edges without the need to signal the partition. In another method, if the quadtree (QT) or binary tree (BT) partition depth for a processing unit has not reached the maximum QT or BT depth, the processing unit is split. If the processing unit contains a horizontal face edge, QT or horizontal BT partition is applied. If the processing unit contains a vertical face edge, QT or vertical BT partition is applied.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: September 28, 2021
    Assignee: MEDIATEK INC.
    Inventors: Cheng-Hsuan Shih, Jian-Liang Lin
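The edge-driven split rule in this abstract can be sketched as a small decision function. The return labels are illustrative, not codec syntax, and the preference for QT over BT when both remain available is an assumption:

```python
def choose_partition(has_horizontal_edge, has_vertical_edge,
                     qt_depth, bt_depth, max_qt_depth, max_bt_depth):
    """Sketch: a unit containing a face edge is split without
    signalling, using QT or the BT direction that runs along the
    edge, until the maximum QT/BT depths are reached."""
    if qt_depth >= max_qt_depth and bt_depth >= max_bt_depth:
        return "no_split"
    if has_horizontal_edge:
        return "QT" if qt_depth < max_qt_depth else "BT_horizontal"
    if has_vertical_edge:
        return "QT" if qt_depth < max_qt_depth else "BT_vertical"
    return "no_split"
```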
  • Patent number: 11094088
    Abstract: Methods and apparatus of coding a video sequence, wherein pictures from the video sequence contain one or more discontinuous edges, are disclosed. The loop filtering process associated with the loop filter is applied to the current reconstructed pixel to generate a filtered reconstructed pixel, where if the loop filtering process is across a virtual boundary of the current picture, one or more alternative reference pixels are used to replace unexpected reference pixels located on a different side of the virtual boundary from the current reconstructed pixel, and said one or more alternative reference pixels are generated from second reconstructed pixels on the same side of the virtual boundary as the current reconstructed pixel. According to another method, reference pixels are derived from spherical neighbouring reference pixels for the loop filtering process.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: August 17, 2021
    Assignee: MEDIATEK INC.
    Inventors: Sheng Yen Lin, Lin Liu, Jian-Liang Lin
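One way to read the alternative-reference-pixel mechanism is as same-side padding of the filter window. The sketch below uses simple replication padding as one possible derivation of the alternative references; the patent is not limited to this choice:

```python
def padded_references(row, boundary_x, current_x):
    """Sketch: for a 1-D filter window over `row`, a virtual boundary
    sits between indices boundary_x - 1 and boundary_x. References on
    the far side of the boundary from the current pixel are replaced
    with the nearest same-side pixel (replication padding)."""
    if current_x < boundary_x:
        return [row[i] if i < boundary_x else row[boundary_x - 1]
                for i in range(len(row))]
    return [row[i] if i >= boundary_x else row[boundary_x]
            for i in range(len(row))]
```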
  • Patent number: 11095912
    Abstract: A video decoding method includes decoding a part of a bitstream to generate a decoded frame. The decoded frame is a projection-based frame that comprises at least one projection face and at least one guard band packed in a projection layout. At least a portion of a 360-degree content of a sphere is mapped to the at least one projection face via projection. The decoded frame is in a 4:2:0 chroma format or a 4:2:2 chroma format, and a guard band size of each of the at least one guard band is equal to an even number of luma samples.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: August 17, 2021
    Assignee: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Patent number: 11089330
    Abstract: A method and apparatus for coding a depth block in three-dimensional video coding are disclosed. Embodiments of the present invention divide a depth block into depth sub-blocks and determine default motion parameters. For each depth sub-block, the motion parameters of a co-located texture block covering the center sample of the depth sub-block are determined. If the motion parameters are available, the motion parameters are assigned as inherited motion parameters for the depth sub-block. If the motion parameters are unavailable, the default motion parameters are assigned as inherited motion parameters for the depth sub-block. The depth sub-block is then encoded or decoded using the inherited motion parameters or a motion candidate selected from a motion candidate set including the inherited motion parameters. The depth block may correspond to a depth prediction unit (PU) and the depth sub-block corresponds to a depth sub-PU.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: August 10, 2021
    Assignee: HFI INNOVATION INC.
    Inventors: Jicheng An, Kai Zhang, Jian-Liang Lin
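The sub-PU motion inheritance this abstract describes can be sketched as follows. `texture_motion` is a hypothetical lookup from a sample position to motion parameters (or `None` when unavailable); real codecs index motion fields differently:

```python
def inherit_motion(pu_size, sub_size, texture_motion, default_motion):
    """Sketch: divide a depth PU (width, height) into sub-blocks; each
    sub-block inherits the motion parameters of the co-located texture
    block covering its centre sample, falling back to the default
    motion parameters when those are unavailable."""
    w, h = pu_size
    inherited = {}
    for y in range(0, h, sub_size):
        for x in range(0, w, sub_size):
            centre = (x + sub_size // 2, y + sub_size // 2)
            mp = texture_motion(centre)
            inherited[(x, y)] = mp if mp is not None else default_motion
    return inherited
```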
  • Patent number: 11089335
    Abstract: Method and apparatus of coding a video sequence are disclosed. According to this method, a first syntax is signalled in or parsed from a bitstream, where the first syntax indicates whether a loop filtering process is disabled for one or more virtual boundaries in a corresponding region. A reconstructed filter unit in a current picture is received, wherein the reconstructed filter unit is associated with the loop filter and the reconstructed filter unit comprises reconstructed pixels for applying a loop filtering process associated with the loop filter to a current reconstructed pixel. When the first syntax is true, the loop filtering process is disabled when the reconstructed filter unit is across said one or more virtual boundaries in the corresponding region. When the first syntax is false, the loop filtering process is not disabled when the reconstructed filter unit is across said one or more virtual boundaries.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: August 10, 2021
    Assignee: MEDIATEK INC.
    Inventors: Sheng Yen Lin, Jian-Liang Lin, Lin Liu
  • Patent number: 11069026
    Abstract: A video processing method includes: obtaining a plurality of square projection faces from an omnidirectional content of a sphere according to a cube-based projection, scaling the square projection faces to generate a plurality of scaled projection faces, respectively, creating at least one padding region, generating a projection-based frame by packing the scaled projection faces and said at least one padding region in a projection layout of the cube-based projection, and encoding the projection-based frame to generate a part of a bitstream.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: July 20, 2021
    Assignee: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Publication number: 20210211736
    Abstract: A video processing method includes receiving a reconstructed frame, and applying in-loop filtering, by at least one in-loop filter, to the reconstructed frame. The step of in-loop filtering includes performing a sample adaptive offset (SAO) filtering operation. The step of performing the SAO filtering operation includes keeping a value of a current pixel unchanged by blocking the SAO filtering operation of the current pixel included in the reconstructed frame from being applied across a virtual boundary defined in the reconstructed frame.
    Type: Application
    Filed: December 25, 2020
    Publication date: July 8, 2021
    Inventors: Sheng-Yen Lin, Jian-Liang Lin
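The "keep the pixel unchanged" behaviour for SAO at a virtual boundary can be sketched with a simplified horizontal edge-offset class. The classification and offset below are stand-ins for the full SAO specification:

```python
def sao_edge_offset(row, i, boundary_positions, offset=1):
    """Sketch: horizontal edge-offset SAO compares a pixel with its
    left and right neighbours. A virtual boundary at position b
    separates indices b-1 and b; if either neighbour lies across a
    boundary, the operation is blocked and the value is unchanged."""
    if i - 1 < 0 or i + 1 >= len(row):
        return row[i]
    if i in boundary_positions or (i + 1) in boundary_positions:
        return row[i]  # blocked: the filter would cross the boundary
    if row[i] < row[i - 1] and row[i] < row[i + 1]:
        return row[i] + offset  # local minimum: apply positive offset
    return row[i]
```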
  • Patent number: 11057643
    Abstract: A video processing method includes: receiving an omnidirectional content corresponding to a sphere, obtaining projection faces from the omnidirectional content, and creating a projection-based frame by generating at least one padding region and packing the projection faces and said at least one padding region in a 360 VR projection layout. The projection faces packed in the 360 VR projection layout include a first projection face and a second projection face, where there is an image content discontinuity edge between the first projection face and the second projection face if the first projection face connects with the second projection face. The at least one padding region packed in the 360 VR projection layout includes a first padding region, where the first padding region connects with the first projection face and the second projection face for isolating the first projection face from the second projection face in the 360 VR projection layout.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: July 6, 2021
    Assignee: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Chia-Ying Li, Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
  • Publication number: 20210203995
    Abstract: A video decoding method includes: decoding a part of a bitstream to generate a decoded frame, including parsing a syntax element from the bitstream. The decoded frame is a projection-based frame that includes at least one projection face and at least one guard band packed in a projection layout with padding, and at least a portion of a 360-degree content of a sphere is mapped to the at least one projection face via projection. The syntax element specifies a guard band type of the at least one guard band.
    Type: Application
    Filed: December 28, 2020
    Publication date: July 1, 2021
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Patent number: 11049314
    Abstract: Methods and apparatus of processing 360-degree virtual reality images are disclosed. According to one method, the method receives coded data for an extended 2D (two-dimensional) frame including an encoded 2D frame with one or more encoded guard bands, wherein the encoded 2D frame is projected from a 3D (three-dimensional) sphere using a target projection, wherein said one or more encoded guard bands are based on a blending of one or more guard bands with an overlapped region when the overlapped region exists. The method then decodes the coded data into a decoded extended 2D frame including a decoded 2D frame with one or more decoded guard bands, and derives a reconstructed 2D frame from the decoded extended 2D frame.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: June 29, 2021
    Assignee: MEDIATEK INC.
    Inventors: Cheng-Hsuan Shih, Chia-Ying Li, Ya-Hsuan Lee, Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
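The encoder-side blending of guard bands with an overlapped region can be sketched as a per-sample weighted average. The 0.5 weight is an illustrative choice; the patent does not fix the blending weights:

```python
def blend_guard_band(guard, overlap, weight=0.5):
    """Sketch: when an overlapped region exists, the encoded guard
    band is a weighted blend of the original guard-band samples and
    the co-located overlapped-region samples."""
    return [round(weight * g + (1.0 - weight) * o)
            for g, o in zip(guard, overlap)]
```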
  • Patent number: 11004173
    Abstract: A video processing method includes: obtaining a plurality of projection faces from an omnidirectional content of a sphere, wherein the omnidirectional content of the sphere is mapped onto the projection faces via cubemap projection, and the projection faces comprise a first projection face; obtaining, by a re-sampling circuit, a first re-sampled projection face by re-sampling at least a portion of the first projection face through non-uniform mapping; generating a projection-based frame according to a projection layout of the cubemap projection, wherein the projection-based frame comprises the first re-sampled projection face packed in the projection layout; and encoding the projection-based frame to generate a part of a bitstream.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: May 11, 2021
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Peng Wang, Lin Liu, Ya-Hsuan Lee, Hung-Chih Lin, Shen-Kai Chang
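One concrete non-uniform mapping of the kind this abstract covers is the tangent-based re-sampling used in equi-angular cubemaps, which spends more samples near face edges where plain cubemap projection is most distorted. The patent covers non-uniform mappings generally; the tangent mapping below is one illustrative choice:

```python
import math

def nonuniform_sample_position(u, face_size):
    """Sketch: map a uniform sample position on a cubemap face through
    a tangent-based non-uniform function and back to sample units."""
    t = (u + 0.5) / face_size          # normalised position in [0, 1]
    angle = (t - 0.5) * math.pi / 2.0  # angle in [-pi/4, pi/4]
    x = math.tan(angle)                # non-uniform position in [-1, 1]
    return (x + 1.0) / 2.0 * face_size - 0.5  # back to sample units
```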
  • Patent number: 10999595
    Abstract: A method and apparatus of priority-based MVP (motion vector predictor) derivation for motion compensation in a video encoder or decoder are disclosed. According to this method, one or more final motion vector predictors (MVPs) are derived using priority-based MVP derivation process. The one or more final MVPs are derived by selecting one or more firstly available MVs from a priority-based MVP list for Inter prediction mode, Skip mode or Merge mode based on reference data of one or two target reference pictures that are reconstructed prior to the current block according to a priority order. Therefore, there is no need for transmitting information at the encoder side nor deriving information at the decoder side that is related to one or more MVP indices to identify the one or more final MVPs in the video bitstream.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: May 4, 2021
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Tzu-Der Chuang, Yu-Wen Huang, Yi-Wen Chen
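The key point of this abstract is that selecting the first available candidate in a fixed priority order removes the need to signal an MVP index. A minimal sketch, with candidates given as (motion vector, available) pairs in an illustrative priority order:

```python
def derive_mvp(candidates):
    """Sketch of priority-based MVP derivation: candidates are tried
    in a fixed priority order and the firstly available motion vector
    is selected as the final MVP, so encoder and decoder agree
    without an MVP index in the bitstream."""
    for mv, available in candidates:
        if available:
            return mv
    return (0, 0)  # fallback, e.g. a zero motion vector
```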