Patents by Inventor Jian-Liang Lin

Jian-Liang Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10264281
    Abstract: A method and apparatus for three-dimensional video coding are disclosed. Embodiments according to the present invention apply the pruning process to one or more spatial candidates and at least one of the inter-view candidate and the temporal candidate to generate a retained candidate set. The pruning process removes any redundant candidate among one or more spatial candidates and at least one of the inter-view candidate and the temporal candidate. A Merge/Skip candidate list is then generated, which includes the retained candidate set. In one embodiment, the temporal candidate is exempted from the pruning process. In another embodiment, the inter-view candidate is exempted from the pruning process. In other embodiments of the present invention, the pruning process is applied to the inter-view candidate and two or more spatial candidates. The pruning process compares the spatial candidates with the inter-view candidate.
    Type: Grant
    Filed: July 2, 2013
    Date of Patent: April 16, 2019
    Assignee: HFI Innovation Inc.
    Inventors: Jian-Liang Lin, Yi-Wen Chen
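
To make the pruning idea concrete, here is a minimal Python sketch. The candidate representation and the `prune_candidates` helper are hypothetical, not the 3D-HEVC reference code; only the comparison pattern follows the abstract: spatial candidates are checked against the inter-view candidate, while the temporal candidate is exempted from pruning (one of the described embodiments).

```python
# Minimal sketch of Merge/Skip candidate pruning for 3D video coding.
# Candidates are modeled as (motion_vector, reference_index) tuples; this
# representation and the function name are illustrative assumptions.

def prune_candidates(spatial, inter_view, temporal):
    """Build the retained candidate set: spatial candidates duplicating the
    inter-view candidate are removed, the temporal candidate is exempted."""
    retained = []
    if inter_view is not None:
        retained.append(inter_view)
    for cand in spatial:
        # Pruning step: compare each spatial candidate with the inter-view one
        # (and with already retained candidates) and drop redundant entries.
        if cand is not None and cand not in retained:
            retained.append(cand)
    if temporal is not None:
        retained.append(temporal)   # exempted from the pruning comparisons
    return retained

# Example: the second spatial candidate duplicates the inter-view candidate
# and is removed; the temporal candidate is kept even though it repeats one.
iv = ((4, 0), 0)
spatial = [((1, 2), 0), ((4, 0), 0), ((3, -1), 1)]
temporal = ((1, 2), 0)
print(prune_candidates(spatial, iv, temporal))
```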
  • Patent number: 10257539
    Abstract: A method and apparatus for coding a depth block in three-dimensional video coding are disclosed. Embodiments of the present invention divide a depth block into depth sub-blocks and determine default motion parameters. For each depth sub-block, the motion parameters of a co-located texture block covering the center sample of the depth sub-block are determined. If the motion parameters are available, the motion parameters are assigned as inherited motion parameters for the depth sub-block. If the motion parameters are unavailable, the default motion parameters are assigned as inherited motion parameters for the depth sub-block. The depth sub-block is then encoded or decoded using the inherited motion parameters or a motion candidate selected from a motion candidate set including the inherited motion parameters. The depth block may correspond to a depth prediction unit (PU) and the depth sub-block corresponds to a depth sub-PU.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: April 9, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Jicheng An, Kai Zhang, Jian-Liang Lin
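
A small sketch of the sub-block motion inheritance described above. The block sizes, the dictionary-based texture motion field, and all names are assumptions made for illustration; the center-sample lookup and the fallback to default motion parameters follow the abstract.

```python
# Sketch of sub-block motion parameter inheritance for a depth PU.

def inherit_motion(depth_pu_pos, depth_pu_size, sub_size,
                   texture_motion_field, default_params):
    """Return inherited motion parameters for every depth sub-block."""
    x0, y0 = depth_pu_pos
    inherited = {}
    for dy in range(0, depth_pu_size, sub_size):
        for dx in range(0, depth_pu_size, sub_size):
            # Center sample of this depth sub-block.
            cx = x0 + dx + sub_size // 2
            cy = y0 + dy + sub_size // 2
            # Motion of the co-located texture block covering the center sample
            # (None means the texture block has no usable motion parameters).
            params = texture_motion_field.get((cx // sub_size, cy // sub_size))
            inherited[(dx, dy)] = params if params is not None else default_params
    return inherited

# Tiny example: a 16x16 depth PU split into 8x8 sub-blocks; one co-located
# texture block has no motion, so the default parameters are inherited there.
field = {(0, 0): ("mv", (1, 0)), (1, 0): None,
         (0, 1): ("mv", (0, 2)), (1, 1): ("mv", (2, 2))}
print(inherit_motion((0, 0), 16, 8, field, ("mv", (0, 0))))
```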
  • Patent number: 10249019
    Abstract: Methods and apparatus of processing omnidirectional images are disclosed. According to one method, a current set of omnidirectional images converted from each spherical image in a 360-degree panoramic video sequence using a selected projection format is received, where the selected projection format belongs to a projection format group comprising a cubicface format, and the current set of omnidirectional images with the cubicface format consists of six cubic faces. If the selected projection format corresponds to the cubicface format, one or more mapping syntax elements to map the current set of omnidirectional images into a current cubemap image are signaled. The coded data are then provided in a bitstream including said one or more mapping syntax elements for the current set of omnidirectional images.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: April 2, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Hung-Chih Lin, Chia-Ying Li, Shen-Kai Chang
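
The signalling step can be pictured with a short sketch. The syntax element names below (`projection_format_idx`, `face_idx`, `face_rotation_idx`) are hypothetical; the abstract only states that mapping syntax elements are signaled when the cubicface format is selected.

```python
# Sketch of signaling mapping syntax for a set of six cube faces.

def write_mapping_syntax(projection_format, face_order, face_rotations):
    """Return a list of (syntax_element, value) pairs to place in the bitstream."""
    syntax = [("projection_format_idx", projection_format)]
    if projection_format == "cubicface":
        # Which cube face goes to which position of the packed cubemap image,
        # and how each face is rotated before packing.
        for pos, (face, rot) in enumerate(zip(face_order, face_rotations)):
            syntax.append((f"face_idx[{pos}]", face))
            syntax.append((f"face_rotation_idx[{pos}]", rot))
    return syntax

# Example: a 3x2 packing of the six faces with per-face rotations.
print(write_mapping_syntax("cubicface",
                           face_order=[0, 1, 2, 3, 4, 5],
                           face_rotations=[0, 0, 1, 1, 2, 2]))
```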
  • Patent number: 10244258
    Abstract: A method and apparatus for processing a prediction block and using the modified prediction block for predictive coding of a current block are disclosed. Embodiments according to the present invention receive a prediction block for the current block and classify pixels in the prediction block into two or more segments. Each segment of the prediction block is then processed depending on information derived from each segment of the prediction block to form a modified prediction segment. The modified prediction block consisting of modified prediction segments of the prediction block is used as a predictor for encoding or decoding the current block.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: March 26, 2019
    Assignee: MEDIATEK SINGAPORE PTE. LTD.
    Inventors: Kai Zhang, Jicheng An, Xianguo Zhang, Han Huang, Jian-Liang Lin
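
A minimal sketch of the segment-wise prediction processing described above, assuming a simple two-segment classification against the block mean and a per-segment adjustment toward the segment mean; both choices are illustrative, since the abstract only requires that each segment is processed using information derived from that segment.

```python
import numpy as np

def modify_prediction(pred_block):
    """Return a modified prediction block built from per-segment processing."""
    mean = pred_block.mean()
    seg_mask = pred_block >= mean                 # two segments: low / high
    modified = pred_block.astype(float).copy()
    for seg in (seg_mask, ~seg_mask):
        if not seg.any():                         # skip empty segments
            continue
        seg_mean = pred_block[seg].mean()         # information from the segment
        # Example processing: pull each pixel halfway toward its segment mean.
        modified[seg] = 0.5 * pred_block[seg] + 0.5 * seg_mean
    return modified

pred = np.array([[10, 12, 90, 95],
                 [11, 13, 92, 96],
                 [10, 12, 91, 94],
                 [11, 14, 93, 97]])
print(modify_prediction(pred))
```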
  • Patent number: 10244259
    Abstract: A method and apparatus of three-dimensional/multi-view coding using aligned reference information are disclosed. The method operates by receiving input data associated with a current block of a current frame in a dependent view, determining a first DV (Disparity Vector) from one or more neighboring blocks of the current block, wherein the first DV refers to a first reference view to derive first reference information, selecting a second reference view for the current block to derive second reference information, aligning the first reference information associated with the first reference view with the second reference information associated with the second reference view, and applying inter-view encoding or decoding to the input data utilizing the first DV or the second reference information after applying said aligning the first reference information with the second reference information.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: March 26, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Jicheng An, Kai Zhang, Jian-Liang Lin
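
One way to picture the alignment step is to rescale the disparity vector derived against the first reference view so that it refers to the selected second reference view. The linear scaling by view-order distance below is purely an assumption for illustration, not the claimed derivation.

```python
# Illustrative sketch only: align a disparity vector derived against a first
# reference view so that it refers to a second reference view.

def align_disparity_vector(dv, current_view, first_ref_view, second_ref_view):
    """Scale dv (derived w.r.t. first_ref_view) to refer to second_ref_view."""
    d_first = current_view - first_ref_view
    d_second = current_view - second_ref_view
    if d_first == 0:
        return dv
    scale = d_second / d_first
    return (round(dv[0] * scale), round(dv[1] * scale))

# Example: a DV of (-8, 0) derived against view 0 is re-aligned to view 1
# when coding view 2.
print(align_disparity_vector((-8, 0), current_view=2,
                             first_ref_view=0, second_ref_view=1))
```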
  • Publication number: 20190088001
    Abstract: A projection-based frame is generated according to an omnidirectional video frame and an octahedron projection layout. The projection-based frame has a 360-degree image content represented by triangular projection faces assembled in the octahedron projection layout. A 360-degree image content of a viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. One side of a first triangular projection face has contact with one side of a second triangular projection face, and one side of a third triangular projection face has contact with another side of the second triangular projection face. One image content continuity boundary exists between one side of the first triangular projection face and one side of the second triangular projection face, and another image content continuity boundary exists between one side of the third triangular projection face and another side of the second triangular projection face.
    Type: Application
    Filed: September 30, 2017
    Publication date: March 21, 2019
    Inventors: Jian-Liang Lin, Hung-Chih Lin, Chia-Ying Li, Shen-Kai Chang, Chi-Cheng Ju, Chao-Chih Huang, Hui Ouyang
  • Publication number: 20190082183
    Abstract: Methods for processing 360-degree virtual reality images are disclosed. According to one method, coding flags for the target block are skipped for inactive blocks at the encoder side, or pixels for the target block are derived based on information identifying the target block as an inactive block at the decoder side. According to another method, when a target block is partially filled with inactive pixels, the best predictor is selected using rate-distortion optimization, where distortion associated with the rate-distortion optimization is measured by excluding inactive pixels of the target block. According to another method, the inactive pixels of a residual block are padded with values to achieve the best rate-distortion optimization. According to another method, active pixels of the residual block are rearranged into a smaller block and coding is applied to the smaller block, or shape-adaptive transform coding is applied to the active pixels of the residual block.
    Type: Application
    Filed: September 11, 2018
    Publication date: March 14, 2019
    Inventors: Cheng-Hsuan SHIH, Jian-Liang LIN
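
The rate-distortion measure from the second method can be sketched as follows, assuming a sum-of-squared-differences distortion and a hypothetical lambda value; the key point from the abstract is that inactive pixels of a partially filled block are excluded from the distortion.

```python
import numpy as np

def rd_cost(original, reconstruction, active_mask, rate_bits, lam=10.0):
    """Rate-distortion cost with inactive pixels excluded from distortion."""
    diff = (original.astype(float) - reconstruction.astype(float)) ** 2
    distortion = diff[active_mask].sum()          # inactive pixels ignored
    return distortion + lam * rate_bits

orig = np.array([[10, 20], [30, 40]])
reco = np.array([[12, 19], [99, 41]])             # large error on an inactive pixel
mask = np.array([[True, True], [False, True]])    # bottom-left pixel is inactive
print(rd_cost(orig, reco, mask, rate_bits=16))
```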
  • Patent number: 10230937
    Abstract: A method and apparatus for a three-dimensional or multi-view video encoding or decoding system utilizing unified disparity vector derivation is disclosed. When a three-dimensional coding tool using a derived disparity vector (DV) is selected, embodiments according to the present invention will first obtain the derived DV from one or more neighboring blocks. If the derived DV is available, the selected three-dimensional coding tool is applied to the current block using the derived DV. If the derived DV is not available, the selected three-dimensional coding tool is applied to the current block using a default DV, where the default DV is set to point to an inter-view reference picture in a reference picture list of the current block.
    Type: Grant
    Filed: August 13, 2014
    Date of Patent: March 12, 2019
    Assignee: HFI Innovation Inc.
    Inventors: Jian-Liang Lin, Na Zhang, Yi-Wen Chen, Jicheng An, Yu-Lin Chang
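
A compact sketch of the unified derivation, assuming a hypothetical neighbour list and reference-list representation; the behaviour follows the abstract: use the first available derived DV, otherwise fall back to a default DV pointing to an inter-view reference picture in the reference picture list.

```python
# Sketch of unified disparity-vector (DV) derivation with a default fallback.

def derive_dv(neighbor_dvs, reference_list):
    """Return (dv, ref_idx) for the selected three-dimensional coding tool."""
    for dv in neighbor_dvs:
        if dv is not None:                        # derived DV is available
            return dv
    # Default DV: zero vector pointing to the first inter-view reference
    # picture found in the reference picture list.
    for idx, ref in enumerate(reference_list):
        if ref["is_inter_view"]:
            return ((0, 0), idx)
    return None                                   # no inter-view reference at all

refs = [{"is_inter_view": False}, {"is_inter_view": True}]
print(derive_dv([None, None], refs))              # -> ((0, 0), 1), the default DV
print(derive_dv([None, ((-6, 0), 1)], refs))      # -> the derived DV
```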
  • Publication number: 20190075315
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) candidate set for a block are disclosed. Embodiments according to the present invention determine a plurality of spatial neighboring blocks of the block, obtain one or more spatial MVP candidates from motion vectors associated with the spatial neighboring blocks, determine whether one or more redundant MVP candidates exist in the spatial MVP candidates, generate a first MVP candidate set, wherein said generating the first MVP candidate set comprises not including the determined one or more redundant MVP candidates into the first MVP candidate set, and generate a final MVP candidate set according to the first MVP candidate set.
    Type: Application
    Filed: November 5, 2018
    Publication date: March 7, 2019
    Inventors: Tzu-Der CHUANG, Jian-Liang LIN, Yu-Wen HUANG, Shaw-Min LEI
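
The candidate-set construction can be sketched briefly. The neighbour scan order, the two-candidate list size, and the zero-MV padding are assumptions; the abstract only requires that redundant spatial MVP candidates are detected and excluded from the first candidate set, from which the final set is generated.

```python
# Sketch of MVP candidate-set construction with redundancy removal.

def build_mvp_list(spatial_mvs, list_size=2):
    first_set = []
    for mv in spatial_mvs:
        if mv is None:
            continue                      # neighbour has no usable motion vector
        if mv in first_set:
            continue                      # redundant candidate: do not include it
        first_set.append(mv)
    # Final candidate set generated from the first set (here: truncate / pad).
    final_set = first_set[:list_size]
    while len(final_set) < list_size:
        final_set.append((0, 0))
    return final_set

print(build_mvp_list([(3, 1), (3, 1), None, (0, -2)]))   # -> [(3, 1), (0, -2)]
```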
  • Publication number: 20190068948
    Abstract: A method for a three-dimensional encoding or decoding system incorporating sub-block based inter-view motion prediction is disclosed. The system utilizes motion or disparity parameters associated with reference sub-blocks in a reference picture of a reference view corresponding to the texture sub-PUs split from a current texture PU (prediction unit) to predict the motion or disparity parameters of the current texture PU. Candidate motion or disparity parameters for the current texture PU may comprise candidate motion or disparity parameters derived for all texture sub-PUs from splitting the current texture PU. The candidate motion or disparity parameters for the current texture PU can be used as a sub-block-based inter-view Merge candidate for the current texture PU in Merge mode. The sub-block-based inter-view Merge candidate can be inserted into a first position of a candidate list.
    Type: Application
    Filed: October 25, 2018
    Publication date: February 28, 2019
    Applicant: HFI Innovation Inc.
    Inventors: Jicheng AN, Kai ZHANG, Jian-Liang LIN
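
A sketch of the sub-PU based inter-view candidate, assuming a fixed sub-PU size, a single disparity vector for the whole PU, and a dictionary-based reference-view motion field; the per-sub-PU inheritance and the insertion at the first position of the Merge list follow the abstract.

```python
# Sketch of sub-PU based inter-view motion prediction.

def sub_pu_inter_view_candidate(pu_pos, pu_size, sub_size, dv,
                                ref_view_motion, default_motion):
    x0, y0 = pu_pos
    candidate = {}
    for dy in range(0, pu_size, sub_size):
        for dx in range(0, pu_size, sub_size):
            # Locate the reference sub-block in the reference view via the DV.
            rx = (x0 + dx + dv[0]) // sub_size
            ry = (y0 + dy + dv[1]) // sub_size
            candidate[(dx, dy)] = ref_view_motion.get((rx, ry), default_motion)
    return candidate

def build_merge_list(sub_pu_candidate, other_candidates):
    # The sub-PU based inter-view candidate is inserted at the first position.
    return [sub_pu_candidate] + list(other_candidates)

ref_motion = {(0, 0): (1, 0), (1, 0): (2, 0), (0, 1): (1, 1), (1, 1): (2, 1)}
cand = sub_pu_inter_view_candidate((16, 0), 16, 8, dv=(-16, 0),
                                   ref_view_motion=ref_motion,
                                   default_motion=(0, 0))
print(build_merge_list(cand, [("spatial", (3, 1))]))
```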
  • Publication number: 20190068949
    Abstract: According to one method, at a source side or an encoder side, a selected viewport associated with the 360-degree virtual reality images is determined. One or more parameters related to the selected pyramid projection format are then determined. According to the present invention, one or more syntax elements for said one or more parameters are included in coded data of the 360-degree virtual reality images. The coded data of the 360-degree virtual reality images are provided as output data. At a receiver side or a decoder side, one or more syntax elements for one or more parameters are parsed from the coded data of the 360-degree virtual reality images. A selected pyramid projection format associated with the 360-degree virtual reality images is determined based on information including said one or more parameters. The 360-degree virtual reality images are then recovered according to the selected viewport.
    Type: Application
    Filed: August 20, 2018
    Publication date: February 28, 2019
    Inventors: Peng WANG, Hung-Chih LIN, Jian-Liang LIN, Shen-Kai CHANG
  • Patent number: 10218957
    Abstract: A method of sub-PU (prediction unit) syntax element signaling for a three-dimensional or multi-view video coding system is disclosed. A first syntax element associated with a texture sub-PU size is transmitted only for texture video data and a second syntax element associated with a depth sub-PU size is transmitted only for depth video data. The first syntax element associated with the texture sub-PU size is used to derive an IVMP (inter-view motion prediction) prediction candidate used for a texture block. The second syntax element associated with the depth sub-PU size is used to derive an MPI (motion parameter inheritance) prediction candidate for a depth block.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: February 26, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Han Huang, Xianguo Zhang, Jicheng An, Jian-Liang Lin, Kai Zhang
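
The signalling rule is simple enough to sketch directly. The element names below mirror 3D-HEVC style log2 sizes but should be read as assumptions; what follows the abstract is that the texture sub-PU size is sent only for texture data and the depth (MPI) sub-PU size only for depth data.

```python
# Sketch of data-type-dependent sub-PU size signalling.

def write_sub_pu_syntax(is_depth, log2_texture_sub_pu, log2_depth_sub_pu):
    syntax = []
    if not is_depth:
        # Used to derive the IVMP (inter-view motion prediction) candidate.
        syntax.append(("log2_sub_pb_size_minus3", log2_texture_sub_pu - 3))
    else:
        # Used to derive the MPI (motion parameter inheritance) candidate.
        syntax.append(("log2_mpi_sub_pb_size_minus3", log2_depth_sub_pu - 3))
    return syntax

print(write_sub_pu_syntax(is_depth=False, log2_texture_sub_pu=3, log2_depth_sub_pu=3))
print(write_sub_pu_syntax(is_depth=True,  log2_texture_sub_pu=3, log2_depth_sub_pu=3))
```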
  • Patent number: 10212411
    Abstract: A method of simplified depth-based block partitioning (DBBP) for three-dimensional and multi-view video coding is disclosed. In one embodiment, the method receives input data associated with a current texture block in a dependent view, and determines a corresponding depth block or a reference texture block in a reference view for the current texture block. Then, the method derives a representative value based on the corresponding depth block or the reference texture block, and generates a current segmentation mask from the corresponding depth block or the reference texture block. Then, the method selects a current block partition from block partition candidates, wherein the representative value is used for generating the segmentation mask or selecting the current block partition or both, and applies DBBP coding to the current texture block according to the current segmentation mask generated and the current block partition selected.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: February 19, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Xianguo Zhang, Kai Zhang, Jicheng An, Han Huang, Jian-Liang Lin
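
A sketch of the simplified DBBP flow, assuming the representative value is the mean of the four depth corner samples and that the partition choice is between a horizontal and a vertical two-part split; these specifics are illustrative, while the mask generation from the representative value and the partition selection follow the abstract.

```python
import numpy as np

def dbbp_mask_and_partition(depth_block):
    h, w = depth_block.shape
    corners = [depth_block[0, 0], depth_block[0, -1],
               depth_block[-1, 0], depth_block[-1, -1]]
    representative = sum(int(c) for c in corners) / 4.0
    mask = depth_block > representative          # binary segmentation mask
    # Pick the two-part partition (horizontal or vertical split) that best
    # matches the mask, by comparing mask density on the two halves.
    top, bottom = mask[: h // 2], mask[h // 2 :]
    left, right = mask[:, : w // 2], mask[:, w // 2 :]
    horiz_score = abs(top.mean() - bottom.mean())
    vert_score = abs(left.mean() - right.mean())
    partition = "2NxN" if horiz_score >= vert_score else "Nx2N"
    return mask, partition

depth = np.array([[10, 10, 80, 90],
                  [12, 11, 85, 88],
                  [10, 13, 90, 92],
                  [11, 12, 86, 91]])
print(dbbp_mask_and_partition(depth))            # mask splits left/right -> Nx2N
```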
  • Publication number: 20190045224
    Abstract: A method and apparatus of video coding using Non-Local (NL) denoising filter are disclosed. According to the present invention, the decoded picture or the processed-decoded picture is divided into multiple blocks. The NL loop-filter is applied to a target block with NL on/off control to generate a filtered output. The NL loop-filter process comprises determining, for the target block, a patch group consisting of K nearest reference blocks within a search window located in one or more reference regions and deriving one filtered output which could be one block for the target block or one filtered patch group based on pixel values of the target block and pixel values of the patch group. The filtered output is provided for further loop-filter processing if there is any further loop-filter processing or the filtered output is provided for storing in a reference picture buffer if there is no further loop-filter processing.
    Type: Application
    Filed: February 3, 2017
    Publication date: February 7, 2019
    Inventors: Yu-Wen HUANG, Ching-Yeh CHEN, Tzu-Der CHUANG, Jian-Liang LIN, Yi-Wen CHEN
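
The NL loop-filter step can be illustrated with a small patch-group search. The SSD similarity measure and plain averaging of the patch group are assumed for simplicity; the abstract covers finding the K nearest reference blocks in a search window and deriving a filtered output from the target block and the patch group.

```python
import numpy as np

def nl_filter_block(frame, x0, y0, bsize, search, k):
    """Non-local filtering of one block using its K nearest patches."""
    target = frame[y0:y0 + bsize, x0:x0 + bsize].astype(float)
    candidates = []
    for y in range(max(0, y0 - search), min(frame.shape[0] - bsize, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(frame.shape[1] - bsize, x0 + search) + 1):
            if (x, y) == (x0, y0):
                continue                           # skip the target block itself
            ref = frame[y:y + bsize, x:x + bsize].astype(float)
            candidates.append((((ref - target) ** 2).sum(), ref))
    # Patch group: the K nearest reference blocks within the search window.
    candidates.sort(key=lambda c: c[0])
    group = [ref for _, ref in candidates[:k]]
    # Filtered output derived from the target block and its patch group.
    return np.mean([target] + group, axis=0)

frame = (np.arange(64).reshape(8, 8) % 7).astype(float)
print(nl_filter_block(frame, x0=2, y0=2, bsize=2, search=2, k=3))
```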
  • Patent number: 10194170
    Abstract: Aspects of the disclosure provide a method for video coding. The method includes receiving input data associated with a processing block in a current picture, selecting, from a set of neighboring reconstructed samples for intra-coding pixels in the processing block, a plurality of reference samples for a pixel in the processing block based on a position of the pixel and an intra prediction mode of the processing block, determining a projection phase for the pixel based on the position of the pixel and the intra prediction mode of the processing block, determining coefficients of an interpolation filter based on the projection phase for the pixel, applying the interpolation filter with the determined coefficients on the reference samples to generate a prediction of the pixel, and encoding or decoding the pixel in the processing block using the prediction of the pixel.
    Type: Grant
    Filed: November 17, 2016
    Date of Patent: January 29, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yu-Wen Huang
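
A sketch of the phase-dependent interpolation, assuming a 2-tap linear filter and 1/32-sample angle precision; both are illustrative choices, while the projection of the pixel position onto the reference samples and the phase-dependent filter coefficients follow the abstract.

```python
# Sketch of angular intra prediction with a phase-dependent interpolation filter.

def predict_pixel(ref_row, x, y, angle_per_row_32nds):
    """Predict pixel (x, y) from the reference row above the block."""
    # Projected position on the reference row, in 1/32-sample units.
    pos_32 = (x << 5) + (y + 1) * angle_per_row_32nds
    index = pos_32 >> 5                # integer reference sample position
    phase = pos_32 & 31                # fractional projection phase (0..31)
    # Coefficients of the interpolation filter depend on the phase.
    c0, c1 = 32 - phase, phase
    return (c0 * ref_row[index] + c1 * ref_row[index + 1] + 16) >> 5

ref = [100, 104, 108, 112, 116, 120, 124, 128]
print([predict_pixel(ref, x, y=0, angle_per_row_32nds=13) for x in range(4)])
```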
  • Publication number: 20190026858
    Abstract: A video processing method includes: obtaining a plurality of projection faces from an omnidirectional content of a sphere, wherein the omnidirectional content of the sphere is mapped onto the projection faces via cubemap projection, and the projection faces comprise a first projection face; obtaining, by a re-sampling circuit, a first re-sampled projection face by re-sampling at least a portion of the first projection face through non-uniform mapping; generating a projection-based frame according to a projection layout of the cubemap projection, wherein the projection-based frame comprises the first re-sampled projection face packed in the projection layout; and encoding the projection-based frame to generate a part of a bitstream.
    Type: Application
    Filed: September 26, 2018
    Publication date: January 24, 2019
    Inventors: Jian-Liang Lin, Peng Wang, Lin Liu, Ya-Hsuan Lee, Hung-Chih Lin, Shen-Kai Chang
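
The non-uniform re-sampling of a single cube face can be sketched as follows; the tangent-based mapping (denser sampling near the face centre) and the nearest-neighbour fetch are assumptions, since the abstract only requires that at least part of the face is re-sampled through a non-uniform mapping.

```python
import numpy as np

def resample_face(face):
    """Re-sample one projection face through a non-uniform mapping."""
    n = face.shape[0]
    out = np.empty_like(face)
    for j in range(n):
        for i in range(n):
            # Normalised output coordinates in [-1, 1].
            u = (2 * i + 1) / n - 1
            v = (2 * j + 1) / n - 1
            # Non-uniform mapping: tangent warp toward the face centre.
            su = np.tan(u * np.pi / 4)
            sv = np.tan(v * np.pi / 4)
            # Source sample position (nearest neighbour for simplicity).
            si = min(n - 1, int((su + 1) * n / 2))
            sj = min(n - 1, int((sv + 1) * n / 2))
            out[j, i] = face[sj, si]
    return out

face = np.arange(16, dtype=float).reshape(4, 4)
print(resample_face(face))
```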
  • Publication number: 20190026934
    Abstract: Methods and apparatus of processing 360-degree virtual reality images are disclosed. According to one method, a 2D (two-dimensional) frame is divided into multiple blocks. The multiple blocks are encoded or decoded using quantization parameters by restricting a delta quantization parameter to be within a threshold for any two blocks corresponding to two neighboring blocks on a 3D sphere. According to another embodiment, one or more guard bands are added to one or more edges that are discontinuous in the 2D frame but continuous in the 3D sphere. A fade-out process is applied to said one or more guard bands to generate one or more faded guard bands. At the decoder side, the reconstructed 2D frame is generated from the decoded extended 2D frame by cropping said one or more decoded faded guard bands or by blending said one or more decoded faded guard bands and reconstructed duplicated areas.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 24, 2019
    Inventors: Cheng-Hsuan SHIH, Chia-Ying LI, Ya-Hsuan LEE, Hung-Chih LIN, Jian-Liang LIN, Shen-Kai CHANG
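
The delta-QP restriction from the first method can be sketched with a simple clamping pass; the neighbour list and the threshold value are assumptions, while the constraint itself (spherically neighbouring blocks may not differ in QP by more than a threshold) follows the abstract.

```python
# Sketch of restricting delta QP between blocks that neighbour on the 3D sphere.

def restrict_qps(block_qps, sphere_neighbors, max_delta=2):
    """Clamp QPs so spherical neighbours never differ by more than max_delta."""
    qps = dict(block_qps)
    changed = True
    while changed:                       # iterate until all pairs satisfy the limit
        changed = False
        for a, b in sphere_neighbors:
            if qps[a] - qps[b] > max_delta:
                qps[a] = qps[b] + max_delta
                changed = True
            elif qps[b] - qps[a] > max_delta:
                qps[b] = qps[a] + max_delta
                changed = True
    return qps

# Blocks 0 and 3 touch on the sphere even though they are far apart in the 2D
# frame, so their QP difference is pulled back within the threshold.
print(restrict_qps({0: 30, 1: 31, 2: 33, 3: 37},
                   sphere_neighbors=[(0, 1), (1, 2), (2, 3), (0, 3)]))
```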
  • Patent number: 10178410
    Abstract: A method and apparatus for three-dimensional and scalable video coding are disclosed. Embodiments according to the present invention determine a motion information set associated with the video data, wherein at least part of the motion information set is made available or unavailable conditionally depending on the video data type. The video data type may correspond to depth data, texture data, a view associated with the video data in three-dimensional video coding, or a layer associated with the video data in scalable video coding. The motion information set is then provided for coding or decoding of the video data, other video data, or both. At least a flag may be used to indicate whether part of the motion information set is available or unavailable. Alternatively, a coding profile for the video data may be used to determine whether the motion information is available or not based on the video data type.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: January 8, 2019
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Yi-Wen Chen, Jian-Liang Lin, Yu-Wen Huang
  • Patent number: 10165252
    Abstract: A method for a three-dimensional encoding or decoding system incorporating sub-block based inter-view motion prediction is disclosed. The system utilizes motion or disparity parameters associated with reference sub-blocks in a reference picture of a reference view corresponding to the texture sub-PUs split from a current texture PU (prediction unit) to predict the motion or disparity parameters of the current texture PU. Candidate motion or disparity parameters for the current texture PU may comprise candidate motion or disparity parameters derived for all texture sub-PUs from splitting the current texture PU. The candidate motion or disparity parameters for the current texture PU can be used as a sub-block-based inter-view Merge candidate for the current texture PU in Merge mode. The sub-block-based inter-view Merge candidate can be inserted into a first position of a candidate list.
    Type: Grant
    Filed: July 10, 2014
    Date of Patent: December 25, 2018
    Assignee: HFI Innovation Inc.
    Inventors: Jicheng An, Kai Zhang, Jian-Liang Lin
  • Publication number: 20180359459
    Abstract: A video processing method includes obtaining projection face(s) from an omnidirectional content of a sphere, and obtaining a re-sampled projection face by re-sampling at least a portion of a projection face of the projection face(s) through non-uniform mapping. The omnidirectional content of the sphere is mapped onto the projection face(s) via a 360-degree Virtual Reality (360 VR) projection. The projection face has a first source region and a second source region. The re-sampled projection face has a first re-sampled region and a second re-sampled region. The first re-sampled region is derived from re-sampling the first source region with a first sampling density. The second re-sampled region is derived from re-sampling the second source region with a second sampling density that is different from the first sampling density.
    Type: Application
    Filed: April 3, 2018
    Publication date: December 13, 2018
    Inventors: Ya-Hsuan Lee, Peng Wang, Jian-Liang Lin, Shen-Kai Chang