Patents by Inventor Jian-Liang Lin

Jian-Liang Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190297350
    Abstract: A sample adaptive offset (SAO) filtering method for a reconstructed projection-based frame includes: obtaining at least one padding pixel in a padding area that acts as an extension of a face boundary of a first projection face, and applying SAO filtering to a block that has at least one pixel included in the first projection face. In the reconstructed projection-based frame, there is image content discontinuity between the face boundary of the first projection face and a face boundary of a second projection face. The at least one padding pixel is involved in the SAO filtering of the block.
    Type: Application
    Filed: March 22, 2019
    Publication date: September 26, 2019
    Inventors: Sheng-Yen Lin, LIN LIU, Jian-Liang Lin
  • Publication number: 20190289328
    Abstract: Methods and apparatus of processing 360-degree virtual reality (VR360) pictures are disclosed. According to one method, if a leaf coding unit contains one or more face edges, the leaf coding unit is split into sub-processing units along the face edges without the need to signal the partition. In another method, if the quadtree (QT) or binary tree (BT) partition depth for a processing unit has not reached the maximum QT or BT depth, the processing unit is split. If the processing unit contains a horizontal face edge, QT or horizontal BT partition is applied. If the processing unit contains a vertical face edge, QT or vertical BT partition is applied.
    Type: Application
    Filed: March 12, 2019
    Publication date: September 19, 2019
    Inventors: Cheng-Hsuan SHIH, Jian-Liang LIN
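The implicit-split rule in the abstract above can be sketched as follows. This is a minimal illustration, assuming a processing unit given as an (x, y, w, h) rectangle and face edges given as flat lists of coordinates; the function name and inputs are illustrative, not the patented syntax, and the QT/BT depth checks are omitted.

```python
def implicit_splits(unit, h_edges, v_edges):
    """Split a processing unit (x, y, w, h) along any face edges it
    contains, without signaling the partition.  h_edges / v_edges are
    the y- and x-coordinates of horizontal and vertical face edges
    (illustrative inputs; the patent derives them from the frame layout)."""
    x, y, w, h = unit
    # Face edges strictly inside the unit force a split at that position.
    ys = sorted({y, y + h} | {e for e in h_edges if y < e < y + h})
    xs = sorted({x, x + w} | {e for e in v_edges if x < e < x + w})
    # Emit one sub-processing unit per cell of the resulting grid.
    return [(x0, y0, x1 - x0, y1 - y0)
            for y0, y1 in zip(ys, ys[1:])
            for x0, x1 in zip(xs, xs[1:])]
```

A unit containing a horizontal face edge at y = 4 is split into two sub-units, while a unit with no face edge is returned unchanged.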
  • Publication number: 20190289316
    Abstract: Method and apparatus of coding 360-degree virtual reality (VR360) pictures are disclosed. According to the method, when a first MV (motion vector) of a target neighboring block for the current block is not available within the 2D projection picture, or when the target neighboring block is not in a same face as the current block: a true neighboring block corresponding to the target neighboring block is identified within the 2D projection picture; if a second MV of the true neighboring block exists, the second MV of the true neighboring block is transformed into a derived MV; and a current MV of the current block is encoded or decoded using the derived MV or one selected candidate in an MV candidate list including the derived MV as an MV predictor.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 19, 2019
    Inventors: Cheng-Hsuan SHIH, Jian-Liang LIN
  • Publication number: 20190289327
    Abstract: Methods and apparatus of processing 360-degree virtual reality (VR360) pictures are disclosed. A target reconstructed VR picture in a reconstructed VR picture sequence is divided into multiple processing units and whether a target processing unit contains any discontinuous edge corresponding to a face boundary in the target reconstructed VR picture is determined. If the target processing unit contains any discontinuous edge: the target processing unit is split into two or more sub-processing units along the discontinuous edges; and NN processing is applied to each of the sub-processing units to generate a filtered processing unit. If the target processing unit contains no discontinuous edge, the NN processing is applied to the target processing unit to generate the filtered processing unit. A method and apparatus for CNN training process are also disclosed. The input reconstructed VR pictures and original pictures are divided into sub-frames along discontinuous boundaries for the training process.
    Type: Application
    Filed: February 27, 2019
    Publication date: September 19, 2019
    Inventors: Sheng-Yen LIN, Jian-Liang LIN
  • Publication number: 20190281273
    Abstract: An adaptive loop filtering (ALF) method for a reconstructed projection-based frame includes: obtaining at least one spherical neighboring pixel in a padding area that acts as an extension of a face boundary of a first projection face, and applying adaptive loop filtering to a block in the first projection face. In the reconstructed projection-based frame, there is image content discontinuity between the face boundary of the first projection face and a face boundary of a second projection face. A region on the sphere to which the padding area corresponds is adjacent to a region on the sphere from which the first projection face is obtained. The at least one spherical neighboring pixel is involved in the adaptive loop filtering of the block.
    Type: Application
    Filed: March 7, 2019
    Publication date: September 12, 2019
    Inventors: Sheng-Yen Lin, Jian-Liang Lin
  • Publication number: 20190281293
    Abstract: A de-blocking method is applied to a reconstructed projection-based frame having a first projection face and a second projection face, and includes obtaining a first spherical neighboring block for a first block with a block edge to be de-blocking filtered, and selectively applying de-blocking to the block edge of the first block for at least updating a portion of pixels of the first block. There is image content discontinuity between a face boundary of the first projection face and a face boundary of the second projection face. The first block is a part of the first projection face, and the block edge of the first block is a part of the face boundary of the first projection face. A region on a sphere to which the first spherical neighboring block corresponds is adjacent to a region on the sphere from which the first projection face is obtained.
    Type: Application
    Filed: March 8, 2019
    Publication date: September 12, 2019
    Inventors: Sheng-Yen Lin, Jian-Liang Lin, Cheng-Hsuan Shih
  • Patent number: 10412407
    Abstract: A method and apparatus for video coding utilizing a motion vector predictor (MVP) for a motion vector (MV) for a block are disclosed. According to an embodiment, a mean candidate is derived from at least two candidates in the current candidate list. The mean candidate includes two MVs for bi-prediction or one MV for uni-prediction, and at least one MV of the mean candidate is derived as a mean of the MVs of said at least two candidates in one of list 0 and list 1. The mean candidate is added to the current candidate list to form a modified candidate list, and one selected candidate is determined from the modified candidate list as the MVP or MVPs for the current MV or MVs of the current block. The current block is then encoded or decoded in Inter, Merge, or Skip mode utilizing the selected MVP or MVPs.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: September 10, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen
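The per-list mean-candidate derivation in the abstract above can be sketched as follows, assuming each candidate is a dict mapping a reference list ('L0'/'L1') to an integer (mvx, mvy) pair. The per-list fallback to the single available MV is one of several combination rules the patent covers; the rounding choice here is illustrative.

```python
def mean_candidate(cand_a, cand_b):
    """Derive a mean candidate from two candidates in the candidate list.
    A missing 'L0'/'L1' key means the candidate has no MV in that list.
    Per list, the mean of the two MVs is used when both exist; otherwise
    the single available MV is reused (illustrative fallback)."""
    mean = {}
    for lst in ('L0', 'L1'):
        mvs = [c[lst] for c in (cand_a, cand_b) if lst in c]
        if mvs:
            # Component-wise integer mean over the MVs present in this list.
            mean[lst] = (sum(mv[0] for mv in mvs) // len(mvs),
                         sum(mv[1] for mv in mvs) // len(mvs))
    return mean
```

The resulting candidate would then be appended to the candidate list to form the modified list from which the MVP is selected.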
  • Patent number: 10412402
    Abstract: A method and apparatus for applying a filter to Intra prediction samples are disclosed. According to an embodiment of the present invention, a filter is applied to one or more prediction samples of the Initial Intra prediction block to form one or more filtered prediction samples. For example, the filter is applied to the prediction samples in the non-boundary locations of the Initial Intra prediction block. Alternatively, the filter is applied to all prediction samples in the Initial Intra prediction block. The filtered Intra prediction block comprising one or more filtered prediction samples is used as a predictor for Intra prediction encoding or decoding of the current block. The filter corresponds to an FIR (finite impulse response) filter or an IIR (infinite impulse response) filter.
    Type: Grant
    Filed: December 4, 2015
    Date of Patent: September 10, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang
  • Publication number: 20190272616
    Abstract: A video processing method includes: obtaining a plurality of square projection faces from an omnidirectional content of a sphere according to a cube-based projection, scaling the square projection faces to generate a plurality of scaled projection faces, respectively, creating at least one padding region, generating a projection-based frame by packing the scaled projection faces and said at least one padding region in a projection layout of the cube-based projection, and encoding the projection-based frame to generate a part of a bitstream.
    Type: Application
    Filed: February 27, 2019
    Publication date: September 5, 2019
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Publication number: 20190272617
    Abstract: A cube-based projection method includes generating pixels of different square projection faces associated with a cube-based projection of a 360-degree image content of a sphere. Pixels of a first square projection face are generated by utilizing a first mapping function set. Pixels of a second square projection face are generated by utilizing a second mapping function set. The different square projection faces include the first square projection face and the second square projection face. The second mapping function set is not identical to the first mapping function set.
    Type: Application
    Filed: February 26, 2019
    Publication date: September 5, 2019
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
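The idea of non-identical mapping function sets per face, described in the abstract above, can be illustrated with two common cube-face mappings. The equi-angular adjustment shown here is used purely as an example of a second mapping function set; the patent does not fix the functions themselves.

```python
import math

def map_face_coords(u, v, mapping):
    """Apply a face-specific mapping function to normalized face
    coordinates u, v in [-1, 1] before projecting onto the cube face.
    'linear' is the classic cubemap mapping; 'eac' applies the
    equi-angular adjustment, which equalizes sampling density on the
    sphere (illustrative choices, not taken from the patent)."""
    if mapping == 'eac':
        u = math.tan(u * math.pi / 4)
        v = math.tan(v * math.pi / 4)
    return u, v
```

Using 'eac' for one face and 'linear' for another gives two square projection faces generated with non-identical mapping function sets, as the abstract describes.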
  • Publication number: 20190251660
    Abstract: A video processing method includes receiving a bitstream, processing the bitstream to obtain at least one syntax element from the bitstream, and decoding the bitstream to generate a current decoded frame having a rotated 360-degree image/video content represented in a 360-degree Virtual Reality (360 VR) projection format. The at least one syntax element signaled via the bitstream indicates rotation information of content-oriented rotation that is involved in generating the rotated 360-degree image/video content, and includes a first syntax element. When the content-oriented rotation is enabled, the first syntax element indicates a rotation degree along a specific rotation axis.
    Type: Application
    Filed: April 24, 2019
    Publication date: August 15, 2019
    Inventors: Hung-Chih Lin, Chao-Chih Huang, Chia-Ying Li, Jian-Liang Lin, Shen-Kai Chang
  • Patent number: 10380715
    Abstract: A video processing method includes: receiving an omnidirectional image/video content corresponding to a viewing sphere, generating a sequence of projection-based frames according to the omnidirectional image/video content and an octahedron projection layout, and encoding, by a video encoder, the sequence of projection-based frames to generate a bitstream. Each projection-based frame has a 360-degree image/video content represented by triangular projection faces packed in the octahedron projection layout. The omnidirectional image/video content of the viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. An equator of the viewing sphere is not mapped along any side of each of the triangular projection faces.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: August 13, 2019
    Assignee: MEDIATEK INC.
    Inventors: Hung-Chih Lin, Chao-Chih Huang, Chia-Ying Li, Hui Ou Yang, Jian-Liang Lin, Shen-Kai Chang
  • Publication number: 20190246105
    Abstract: Methods and apparatus of processing cube face images are disclosed. According to embodiments of the present invention, one or more discontinuous boundaries within each assembled cubic frame are determined and used for selective filtering, where the filtering process is skipped at said one or more discontinuous boundaries within each assembled cubic frame when the filtering process is enabled. Furthermore, the filtering process is applied to one or more continuous areas in each assembled cubic frame.
    Type: Application
    Filed: April 17, 2019
    Publication date: August 8, 2019
    Inventors: Hung-Chih LIN, Jian-Liang LIN, Chia-Ying LI, Chao-Chih HUANG, Shen-Kai CHANG
  • Patent number: 10368067
    Abstract: Methods and apparatus of processing cube face images are disclosed. According to embodiments of the present invention, one or more discontinuous boundaries within each assembled cubic frame are determined and used for selective filtering, where the filtering process is skipped at said one or more discontinuous boundaries within each assembled cubic frame when the filtering process is enabled. Furthermore, the filtering process is applied to one or more continuous areas in each assembled cubic frame.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: July 30, 2019
    Assignee: MEDIATEK INC.
    Inventors: Hung-Chih Lin, Jian-Liang Lin, Chia-Ying Li, Chao-Chih Huang, Shen-Kai Chang
  • Patent number: 10362314
    Abstract: Aspects of the disclosure include a method for video coding. The method includes receiving input data associated with a current block in a current image frame of video data, where the current block is coded by intra-prediction or to be coded by intra-prediction. The method also includes determining an intra-prediction mode of the current block, selecting one of a plurality of filters including at least a default filter and an N-tap filter, and generating filtered neighboring samples by filtering neighboring samples adjacent to the current block using the selected filter, where N is a positive integer different from 3. Moreover, the method includes encoding or decoding the current block by predicting the current block based on the filtered neighboring samples and the intra-prediction mode.
    Type: Grant
    Filed: November 18, 2016
    Date of Patent: July 23, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yu-Wen Huang
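The reference-sample filtering step in the abstract above can be sketched as follows. This is a minimal FIR smoothing of the row of reconstructed neighboring samples; the (1, 2, 1)/4 kernel and the choice to leave the edge samples unfiltered are illustrative assumptions, not the patent's filter selection logic.

```python
def filter_reference_samples(samples, taps):
    """Smooth reconstructed neighboring samples with an N-tap FIR filter
    before intra prediction.  'taps' is an odd-length integer kernel
    (e.g. (1, 2, 1), normalized by its sum); samples within half the
    kernel width of either end are kept unfiltered."""
    half = len(taps) // 2
    norm = sum(taps)
    out = list(samples)
    for i in range(half, len(samples) - half):
        acc = sum(t * samples[i - half + k] for k, t in enumerate(taps))
        out[i] = (acc + norm // 2) // norm  # rounded integer division
    return out
```

The filtered samples, rather than the raw reconstructed neighbors, are then fed to the intra-prediction mode to predict the current block.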
  • Patent number: 10356386
    Abstract: A video processing method includes obtaining projection face(s) from an omnidirectional content of a sphere, and obtaining a re-sampled projection face by re-sampling at least a portion of a projection face of the projection face(s) through non-uniform mapping. The omnidirectional content of the sphere is mapped onto the projection face(s) via a 360-degree Virtual Reality (360 VR) projection. The projection face has a first source region and a second source region. The re-sampled projection face has a first re-sampled region and a second re-sampled region. The first re-sampled region is derived from re-sampling the first source region with a first sampling density. The second re-sampled region is derived from re-sampling the second source region with a second sampling density that is different from the first sampling density.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: July 16, 2019
    Assignee: MEDIATEK INC.
    Inventors: Ya-Hsuan Lee, Peng Wang, Jian-Liang Lin, Shen-Kai Chang
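The two-density re-sampling in the abstract above can be sketched in one dimension as follows. Treating a face as a list of rows split into two source regions at a boundary row is an illustrative simplification, and nearest-neighbor picking stands in for the patent's non-uniform mapping.

```python
def resample_rows(face, boundary, density_a, density_b):
    """Re-sample a projection face (a list of rows) non-uniformly: rows
    above 'boundary' form the first source region, re-sampled with
    density_a (output rows per source row); rows below form the second,
    re-sampled with density_b.  Nearest-neighbor selection is an
    illustrative stand-in for the patented mapping."""
    out = []
    for (start, stop), density in (((0, boundary), density_a),
                                   ((boundary, len(face)), density_b)):
        count = round((stop - start) * density)
        for i in range(count):
            # Nearest-neighbor pick inside the source region.
            src = start + min(stop - start - 1, int(i / density))
            out.append(face[src])
    return out
```

With density_a = 0.5 and density_b = 1.0, the first region is decimated to half its rows while the second is kept intact, yielding the two re-sampled regions with different sampling densities that the abstract describes.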
  • Patent number: 10349083
    Abstract: A method and apparatus for low-latency illumination compensation in a three-dimensional (3D) and multi-view coding system are disclosed. According to the present invention, the encoder determines whether to enable or disable the illumination compensation for the current picture or slice based on a condition related to statistics associated with a selected reference picture or slice respectively, or related to high-level coding information associated with the current picture or slice respectively. The high-level coding information associated with the current picture or slice excludes any information related to pixel values of the current picture or slice respectively. The illumination compensation is then applied according to the decision made by the encoder. A similar low-latency method is also applied for depth lookup table (DLT) based coding.
    Type: Grant
    Filed: March 17, 2015
    Date of Patent: July 9, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Yi-Wen Chen, Kai Zhang, Jian-Liang Lin, Yu-Wen Huang
  • Patent number: 10341638
    Abstract: A method and apparatus using a single converted DV (disparity vector) from the depth data for a conversion region are disclosed. Embodiments according to the present invention receive input data and depth data associated with a conversion region of a current picture in a current dependent view. The conversion region is checked to determine whether it is partitioned into multiple motion prediction sub-blocks. If the conversion region is partitioned into multiple motion prediction sub-blocks, then a single converted DV from the depth data associated with the conversion region is determined and each of the multiple motion prediction sub-blocks of the conversion region is processed according to a first coding tool using the single converted DV. If the conversion region is not partitioned into multiple motion prediction sub-blocks, the conversion region is processed according to the first coding tool or a second coding tool using the single converted DV.
    Type: Grant
    Filed: January 7, 2014
    Date of Patent: July 2, 2019
    Assignee: MediaTek Inc.
    Inventors: Jian-Liang Lin, Yi-Wen Chen
  • Publication number: 20190191180
    Abstract: A method and apparatus for coding a depth block in three-dimensional video coding are disclosed. Embodiments of the present invention divide a depth block into depth sub-blocks and determine default motion parameters. For each depth sub-block, the motion parameters of a co-located texture block covering the center sample of the depth sub-block are determined. If the motion parameters are available, the motion parameters are assigned as inherited motion parameters for the depth sub-block. If the motion parameters are unavailable, the default motion parameters are assigned as inherited motion parameters for the depth sub-block. The depth sub-block is then encoded or decoded using the inherited motion parameters or a motion candidate selected from a motion candidate set including the inherited motion parameters. The depth block may correspond to a depth prediction unit (PU) and the depth sub-block corresponds to a depth sub-PU.
    Type: Application
    Filed: February 22, 2019
    Publication date: June 20, 2019
    Applicant: HFI INNOVATION INC.
    Inventors: Jicheng AN, Kai Zhang, Jian-Liang Lin
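The sub-PU motion inheritance with a default fallback, described in the abstract above, can be sketched as follows. Representing the co-located texture lookup as a dict keyed by center-sample position is an illustrative assumption.

```python
def inherit_sub_pu_motion(texture_motion, sub_pu_centers, default_mp):
    """For each depth sub-PU, look up the motion parameters of the
    co-located texture block covering the sub-PU's center sample; when
    none are available, assign the default motion parameters instead.
    texture_motion maps a center position to motion parameters (an
    illustrative stand-in for the co-located texture block lookup)."""
    return [texture_motion.get(center) or default_mp
            for center in sub_pu_centers]
```

Each sub-PU is then encoded or decoded using its inherited motion parameters, or using a candidate selected from a motion candidate set that includes them.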
  • Patent number: 10264282
    Abstract: A method and apparatus of video encoding or decoding for a video encoding or decoding system applied to multi-face sequences corresponding to a 360-degree virtual reality sequence are disclosed. According to the present invention, one or more multi-face sequences representing the 360-degree virtual reality sequence are derived. If Inter prediction is selected for a current block in a current face, one virtual reference frame is derived for each face of said one or more multi-face sequences by assigning one target reference face to a center of said one virtual reference frame and connecting neighboring faces of said one target reference face to said one target reference face at boundaries of said one target reference face. Then, the current block in the current face is encoded or decoded using a current virtual reference frame derived for the current face to derive an Inter predictor for the current block.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: April 16, 2019
    Assignee: MEDIATEK INC.
    Inventors: Chao-Chih Huang, Hung-Chih Lin, Jian-Liang Lin, Chia-Ying Li, Shen-Kai Chang