Patents by Inventor Jian-Liang Lin

Jian-Liang Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10154279
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) candidate set for a block are disclosed. Embodiments according to the present invention generate a complete full MVP candidate set based on the redundancy-removed MVP candidate set if one or more redundant MVP candidates exist. In one embodiment, the method generates the complete full MVP candidate set by adding replacement MVP candidates to the redundancy-removed MVP candidate set, and a value corresponding to a non-redundant MVP is assigned to each replacement MVP candidate. In another embodiment, the method generates the complete full MVP candidate set by adding replacement MVP candidates to the redundancy-removed MVP candidate set, and a value is assigned to each replacement MVP candidate according to a rule. The procedure of assigning a value, checking redundancy, and removing redundant MVP candidates is repeated until the MVP candidate set is complete and full.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: December 11, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
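    Illustrative sketch (Python): a minimal sketch of the candidate-filling loop described in the abstract above. The fixed list size, the zero-motion starting value, and the offset rule used to avoid redundancy are assumptions for illustration, not details taken from the patent.
      def complete_mvp_candidate_set(candidates, target_size=3):
          # Remove redundant (duplicate) MVP candidates while keeping order.
          unique = []
          for mv in candidates:
              if mv not in unique:
                  unique.append(mv)
          # Fill the set back up with replacement candidates, each assigned a
          # value chosen by a simple rule so that it is not redundant.
          replacement = (0, 0)                      # assumed starting value
          while len(unique) < target_size:
              while replacement in unique:
                  replacement = (replacement[0] + 1, replacement[1])  # assumed rule
              unique.append(replacement)
          return unique

      # Example: two redundant candidates collapse to one, then the set is refilled.
      print(complete_mvp_candidate_set([(4, -2), (4, -2), (0, 0)]))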
  • Patent number: 10142655
    Abstract: A method and apparatus for direct Simplified Depth Coding (dSDC) are disclosed, which derive the prediction value directly for each segment without deriving depth prediction samples or depth prediction subsamples. The dSDC method substantially reduces the computations associated with deriving the prediction samples or subsamples and calculating their average, by deriving the prediction value directly based on the reconstructed neighboring depth samples. The direct SDC can be applied to derive the two prediction values, P0 and P1, for the two segments of a depth block coded by SDC depth modelling mode 1 (DMM-1).
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: November 27, 2018
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Kai Zhang, Yi-Wen Chen, Jicheng An
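    Illustrative sketch (Python): a rough sketch of deriving the two segment prediction values directly from reconstructed neighbouring depth samples. Which neighbouring positions are averaged (corner samples of the top row and left column here) is purely an assumption; the abstract does not specify the sample positions.
      import numpy as np

      def dsdc_prediction_values(top_neighbors, left_neighbors):
          # Derive P0 and P1 directly from reconstructed neighbouring depth
          # samples, without building a full block of prediction samples first.
          p0 = int(np.mean([top_neighbors[0], left_neighbors[0]]))    # segment 0 (assumed positions)
          p1 = int(np.mean([top_neighbors[-1], left_neighbors[-1]]))  # segment 1 (assumed positions)
          return p0, p1

      top = np.array([118, 120, 122, 200], dtype=np.int32)   # row above the block
      left = np.array([117, 119, 190, 205], dtype=np.int32)  # column left of the block
      print(dsdc_prediction_values(top, left))  # -> (117, 202)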
  • Publication number: 20180338160
    Abstract: Methods and apparatus of processing 360-degree virtual reality images are disclosed. According to one method, each 360-degree virtual reality image is projected into one first projection picture using first projection-format conversion. The first projection pictures are encoded and decoded into first reconstructed projection pictures. Each first reconstructed projection picture is then projected into one second reconstructed projection picture or one third reconstructed projection picture corresponding to a selected viewpoint using second projection-format conversion. One or more discontinuous edges in one or more second reconstructed projection pictures or one or more third reconstructed projection pictures corresponding to the selected viewpoint are identified. A post-processing filter is then applied to at least one discontinuous edge in the second reconstructed projection pictures or third reconstructed projection pictures corresponding to the selected viewpoint to generate filtered output.
    Type: Application
    Filed: May 10, 2018
    Publication date: November 22, 2018
    Inventors: Ya-Hsuan LEE, Jian-Liang LIN, Shen-Kai CHANG
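    Illustrative sketch (Python): a toy illustration of applying a post-processing filter across an identified discontinuous edge in a reconstructed projection picture. The vertical-edge layout and the simple 3-tap average are assumptions for illustration only.
      import numpy as np

      def filter_discontinuous_edge(picture, edge_col):
          # Smooth across a vertical discontinuous edge assumed to lie between
          # columns edge_col-1 and edge_col of the reconstructed picture.
          out = picture.astype(np.float32).copy()
          for col in (edge_col - 1, edge_col):
              out[:, col] = (picture[:, col - 1].astype(np.float32)
                             + picture[:, col]
                             + picture[:, col + 1]) / 3.0
          return out.round().astype(picture.dtype)

      pic = np.zeros((4, 8), dtype=np.uint8)
      pic[:, 4:] = 200                      # artificial discontinuity at column 4
      print(filter_discontinuous_edge(pic, 4))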
  • Publication number: 20180332305
    Abstract: A video encoding method includes: setting a 360-degree Virtual Reality (360 VR) projection layout of projection faces, wherein the projection faces have a plurality of triangular projection faces located at a plurality of positions in the 360 VR projection layout, respectively; encoding a frame having a 360-degree image content represented by the projection faces arranged in the 360 VR projection layout to generate a bitstream; and for each position included in at least a portion of the positions, signaling at least one syntax element via the bitstream, wherein the at least one syntax element is set to indicate at least one of an index of a triangular projection view filled into a corresponding triangular projection face located at the position and a rotation angle of content rotation applied to the triangular projection view filled into the corresponding triangular projection face located at the position.
    Type: Application
    Filed: September 30, 2017
    Publication date: November 15, 2018
    Inventors: Jian-Liang Lin, Hung-Chih Lin, Chia-Ying Li, Shen-Kai Chang, Chi-Cheng Ju
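    Illustrative sketch (Python): a small sketch of the per-position signalling described above, writing one face index and one rotation angle per triangular position. The dictionary-based stand-in for a bitstream, the set of allowed angles, and the field names are illustrative assumptions.
      ALLOWED_ROTATIONS = (0, 90, 180, 270)   # assumed rotation angles

      def signal_triangle_layout(layout):
          # layout: list of (face_index, rotation_angle), one per triangular position.
          # Returns per-position syntax elements (stand-in for bitstream writing).
          syntax = []
          for position, (face_index, rotation) in enumerate(layout):
              assert rotation in ALLOWED_ROTATIONS
              syntax.append({"position": position,
                             "face_index": face_index,
                             "rotation": rotation})
          return syntax

      # Example: an 8-triangle layout (octahedron-style packing assumed).
      layout = [(i, (i % 4) * 90) for i in range(8)]
      print(signal_triangle_layout(layout)[:2])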
  • Publication number: 20180332292
    Abstract: A method and apparatus of Intra prediction filtering in an image or video encoder or decoder are disclosed.
    Type: Application
    Filed: November 16, 2016
    Publication date: November 15, 2018
    Applicant: MEDIATEK INC.
    Inventors: Jian-Liang LIN, Yu-Wen HUANG
  • Publication number: 20180324454
    Abstract: A method and apparatus for video coding utilizing a motion vector predictor (MVP) for a motion vector (MV) of a block are disclosed. According to an embodiment, a mean candidate is derived from at least two candidates in the current candidate list. The mean candidate includes two MVs for bi-prediction or one MV for uni-prediction, and at least one MV of the mean candidate is derived as a mean of the MVs of said at least two candidates in one of list 0 and list 1. The mean candidate is added to the current candidate list to form a modified candidate list, and one selected candidate is determined from the modified candidate list as an MVP or MVPs for the current MV or MVs of the current block. The current block is then encoded or decoded in Inter, Merge, or Skip mode utilizing the selected MVP or MVPs.
    Type: Application
    Filed: October 28, 2016
    Publication date: November 8, 2018
    Inventors: Jian-Liang LIN, Yi-Wen CHEN
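    Illustrative sketch (Python): a minimal sketch of forming the mean candidate by averaging the list-0 and list-1 motion vectors of two existing candidates component-wise. Skipping the average when one candidate lacks an MV in a list is an assumed fallback for illustration.
      def mean_candidate(cand_a, cand_b):
          # Each candidate is {'L0': (x, y) or None, 'L1': (x, y) or None}.
          # The mean candidate averages MVs per list when both candidates have one.
          mean = {}
          for lst in ("L0", "L1"):
              mv_a, mv_b = cand_a.get(lst), cand_b.get(lst)
              if mv_a is not None and mv_b is not None:
                  mean[lst] = ((mv_a[0] + mv_b[0]) // 2, (mv_a[1] + mv_b[1]) // 2)
              else:
                  mean[lst] = mv_a if mv_a is not None else mv_b  # assumed fallback
          return mean

      a = {"L0": (4, -2), "L1": (8, 6)}
      b = {"L0": (2, 2), "L1": None}
      print(mean_candidate(a, b))   # {'L0': (3, 0), 'L1': (8, 6)}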
  • Patent number: 10116964
    Abstract: A method for a three-dimensional encoding or decoding system incorporating restricted sub-PU level prediction is disclosed. In one embodiment, the sub-PU level prediction associated with inter-view motion prediction or view synthesis prediction is restricted to uni-prediction. In another embodiment, the sub-PU partition associated with inter-view motion prediction or view synthesis prediction is disabled if the sub-PU partition would result in a sub-PU size smaller than the minimum PU split size or the PU belongs to a restricted partition group. The minimum PU split size may correspond to 8×8. The restricted partition group may correspond to one or more asymmetric motion partition (AMP) modes.
    Type: Grant
    Filed: October 17, 2014
    Date of Patent: October 30, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Jicheng An, Kai Zhang, Yi-Wen Chen, Jian-Liang Lin
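    Illustrative sketch (Python): a compact sketch of the restriction test described above, disabling sub-PU partitioning when it would yield sub-PUs smaller than the minimum size (8x8 in the abstract) or when the PU uses a restricted (AMP) partition. The partition-mode names and the size test are illustrative.
      MIN_SUB_PU_SIZE = 8                      # minimum PU split size from the abstract
      RESTRICTED_PARTITIONS = {"2NxnU", "2NxnD", "nLx2N", "nRx2N"}   # AMP modes (assumed set)

      def sub_pu_partition_allowed(pu_width, pu_height, partition_mode,
                                   sub_pu_size=MIN_SUB_PU_SIZE):
          # Return False when sub-PU level prediction must be disabled.
          too_small = pu_width < sub_pu_size or pu_height < sub_pu_size
          restricted = partition_mode in RESTRICTED_PARTITIONS
          return not (too_small or restricted)

      print(sub_pu_partition_allowed(16, 16, "2Nx2N"))   # True
      print(sub_pu_partition_allowed(16, 4, "2NxnU"))    # False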
  • Patent number: 10110915
    Abstract: Embodiments of the present invention identify a texture collocated block of a texture picture in the given view corresponding to a current depth block. A Merge candidate, or a motion vector predictor (MVP) or disparity vector predictor (DVP) candidate, is derived from a candidate list that includes a texture candidate derived from the motion information of the texture collocated block. Coding or decoding is then applied to the input data associated with the current depth block using the texture candidate if the texture candidate is selected as the Merge candidate in Merge mode, or as the MVP or DVP candidate in Inter mode.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: October 23, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Yi-Wen Chen, Jian-Liang Lin
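    Illustrative sketch (Python): a minimal sketch of adding a texture candidate, derived from the motion information of the texture collocated block, to the depth block's candidate list. Placing it at the front of the list and the candidate fields are assumptions for illustration.
      def build_depth_merge_list(texture_collocated_block, spatial_candidates,
                                 max_candidates=5):
          # texture_collocated_block: dict with 'mv' and 'ref_idx' of the texture
          # block collocated with the current depth block.
          texture_candidate = {"mv": texture_collocated_block["mv"],
                               "ref_idx": texture_collocated_block["ref_idx"],
                               "source": "texture_collocated"}
          # Assumed ordering: texture candidate first, then spatial candidates.
          return ([texture_candidate] + list(spatial_candidates))[:max_candidates]

      collocated = {"mv": (6, -1), "ref_idx": 0}
      spatial = [{"mv": (5, 0), "ref_idx": 0, "source": "A1"}]
      print(build_depth_merge_list(collocated, spatial))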
  • Patent number: 10110923
    Abstract: A method of deriving VSP (View Synthesis Prediction) Merge candidates with aligned inter-view reference pictures is disclosed. The method generates a second Disparity Vector (DV) using a scaled DV derived from the Neighboring Block Disparity Vector (NBDV) of the current block. A method of deriving one or more inter-view DV Merge candidates with aligned DVs and associated inter-view reference pictures is also disclosed. The inter-view reference picture pointed to by the DV derived from Depth-oriented NBDV (DoNBDV) is used as the reference picture, and the DV derived from DoNBDV is used as the DV for the inter-view DV Merge candidate. Furthermore, a method of deriving a temporal DV for NBDV is disclosed, where, if a DV exists for the temporal neighboring block, that DV is used as an available DV for the current CU only if the associated inter-view reference picture exists in the reference lists of the current CU.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: October 23, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Na Zhang, Yi-Wen Chen, Jian-Liang Lin, Jicheng An, Kai Zhang
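    Illustrative sketch (Python): a small sketch of the temporal-DV availability rule at the end of the abstract: the temporal neighbor's DV counts as available only if the inter-view reference picture it points to is also present in the current CU's reference lists. The dictionary keys and picture identifiers are illustrative assumptions.
      def temporal_dv_available(temporal_neighbor, current_ref_lists):
          # temporal_neighbor: dict with 'dv' (or None) and 'inter_view_ref', an
          # identifier of the inter-view reference picture its DV points to.
          # current_ref_lists: set of identifiers of pictures in the CU's ref lists.
          dv = temporal_neighbor.get("dv")
          if dv is None:
              return None
          # Rule from the abstract: the DV is available only if its inter-view
          # reference picture also appears in the current CU's reference lists.
          if temporal_neighbor["inter_view_ref"] in current_ref_lists:
              return dv
          return None

      neighbor = {"dv": (-5, 0), "inter_view_ref": ("view", 1)}
      print(temporal_dv_available(neighbor, {("view", 1), ("view", 2)}))  # (-5, 0)
      print(temporal_dv_available(neighbor, {("view", 2)}))               # None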
  • Patent number: 10110922
    Abstract: A method of illumination compensation for three-dimensional or multi-view encoding and decoding is disclosed. The method incorporates an illumination compensation flag only if illumination compensation is enabled and the current coding unit is processed by one 2N×2N prediction unit. The illumination compensation is applied to the current coding unit according to the illumination compensation flag. The illumination compensation flag is incorporated when the current coding unit is coded in Merge mode, without checking whether a current reference picture is an inter-view reference picture.
    Type: Grant
    Filed: April 3, 2014
    Date of Patent: October 23, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Kai Zhang, Yi-Wen Chen, Jicheng An, Jian-Liang Lin
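    Illustrative sketch (Python): a short sketch of the signalling condition in the abstract: the illumination-compensation flag is sent only when IC is enabled and the coding unit uses a single 2Nx2N prediction unit, and in Merge mode it is sent without checking for an inter-view reference picture. Function and field names are illustrative.
      def should_signal_ic_flag(ic_enabled, partition_mode):
          # Signal the IC flag only for enabled IC and a single 2Nx2N PU.
          return ic_enabled and partition_mode == "2Nx2N"

      def signal_cu(cu, bitstream):
          if should_signal_ic_flag(cu["ic_enabled"], cu["partition_mode"]):
              # In Merge mode the flag is written without checking whether the
              # current reference picture is an inter-view reference picture.
              bitstream.append(("ic_flag", cu["ic_flag"]))

      stream = []
      signal_cu({"ic_enabled": True, "partition_mode": "2Nx2N", "ic_flag": 1}, stream)
      signal_cu({"ic_enabled": True, "partition_mode": "Nx2N", "ic_flag": 1}, stream)
      print(stream)   # only the first CU's flag is signalled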
  • Patent number: 10110925
    Abstract: A method of video coding utilizing ARP (advanced residual prediction) is disclosed, in which the temporal reference picture is either explicitly signaled or derived at the encoder and the decoder using an identical process. To encode or decode a current block in a current picture from a dependent view, a corresponding block in a reference view corresponding to the current block is determined based on a DV (disparity vector). At the encoder side, the temporal reference picture in the reference view of the corresponding block is explicitly signaled using syntax element(s) in the slice header or derived using the identical process. At the decoder side, the temporal reference picture in the reference view of the corresponding block is determined according to the syntax element(s) in the slice header or derived using the identical process. The temporal reference picture is then used for ARP.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: October 23, 2018
    Assignee: HFI Innovation Inc.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Yu-Lin Chang
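    Illustrative sketch (Python): a minimal sketch of the two options in the abstract for agreeing on the ARP temporal reference picture: either the encoder writes it into the slice header and the decoder reads it, or both sides run the same derivation. Picking the first temporal (same-view) reference in list 0 as the shared derivation rule, and the header/field names, are assumptions.
      def derive_arp_temporal_ref(ref_list0):
          # Identical derivation run at both encoder and decoder (assumed rule:
          # first temporal, i.e. non-inter-view, reference picture in list 0).
          for idx, ref in enumerate(ref_list0):
              if not ref["is_inter_view"]:
                  return idx
          return None

      def encode_slice_header(header, ref_list0, explicit_signalling):
          if explicit_signalling:
              header["arp_ref_idx"] = derive_arp_temporal_ref(ref_list0)  # signalled
          # otherwise nothing is written; the decoder derives it the same way

      refs = [{"is_inter_view": True}, {"is_inter_view": False}]
      header = {}
      encode_slice_header(header, refs, explicit_signalling=True)
      print(header)                          # {'arp_ref_idx': 1}
      print(derive_arp_temporal_ref(refs))   # decoder-side derivation -> 1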
  • Patent number: 10097850
    Abstract: A method implemented in an apparatus for video coding of a current block coded in an Inter, Merge, or Skip mode determines neighboring blocks of the current block, wherein a motion vector predictor (MVP) candidate set is derived from MVP candidates associated with the neighboring blocks. The method determines at least one redundant MVP candidate if said MVP candidate is within the same PU (Prediction Unit) as another MVP candidate in the MVP candidate set. The method removes said at least one redundant MVP candidate from the MVP candidate set and provides a modified MVP candidate set for determining a final MVP, wherein the modified MVP candidate set corresponds to the MVP candidate set with said at least one redundant MVP candidate removed. Finally, the method encodes or decodes the current block according to the final MVP.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: October 9, 2018
    Assignee: HFI Innovation Inc.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Shaw-Min Lei
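    Illustrative sketch (Python): a brief sketch of the pruning rule in the abstract: a candidate is dropped when it comes from the same prediction unit as another candidate already in the set. Representing each candidate by its source PU identifier is an assumption for illustration.
      def prune_same_pu_candidates(candidates):
          # candidates: list of dicts with 'mv' and 'pu_id' (the PU that the
          # neighboring block belongs to). Keep only the first candidate per PU.
          seen_pus = set()
          pruned = []
          for cand in candidates:
              if cand["pu_id"] in seen_pus:
                  continue                      # redundant: same PU already used
              seen_pus.add(cand["pu_id"])
              pruned.append(cand)
          return pruned

      cands = [{"mv": (2, 1), "pu_id": 7},
               {"mv": (2, 1), "pu_id": 7},      # same PU -> removed
               {"mv": (0, -3), "pu_id": 9}]
      print(prune_same_pu_candidates(cands))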
  • Publication number: 20180276788
    Abstract: A video processing method includes receiving an omnidirectional content corresponding to a sphere, generating a projection-based frame according to at least the omnidirectional content and a segmented sphere projection (SSP) format, and encoding, by a video encoder, the projection-based frame to generate a part of a bitstream. The projection-based frame has a 360-degree content represented by a first circular projection face, a second circular projection face, and at least one rectangular projection face packed in an SSP layout. A north polar region of the sphere is mapped onto the first circular projection face. A south polar region of the sphere is mapped onto the second circular projection face. At least one non-polar ring-shaped segment between the north polar region and the south polar region of the sphere is mapped onto said at least one rectangular projection face.
    Type: Application
    Filed: March 20, 2018
    Publication date: September 27, 2018
    Inventors: Ya-Hsuan Lee, Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
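    Illustrative sketch (Python): a rough sketch of how a polar region might be mapped onto a circular projection face, assuming an azimuthal mapping in which the radius grows linearly with the angular distance from the pole; the exact SSP mapping and the 45-degree cap boundary used here are assumptions, not taken from the abstract.
      import math

      POLAR_CAP_DEG = 45.0     # assumed boundary between the polar caps and the ring

      def north_pole_to_circle(lon_deg, lat_deg, face_radius_px):
          # Map a point of the north polar cap (lat >= POLAR_CAP_DEG) to (x, y)
          # inside a circular face of radius face_radius_px, centred at (0, 0).
          assert lat_deg >= POLAR_CAP_DEG
          # Radius proportional to angular distance from the pole (assumed rule).
          r = face_radius_px * (90.0 - lat_deg) / (90.0 - POLAR_CAP_DEG)
          x = r * math.cos(math.radians(lon_deg))
          y = r * math.sin(math.radians(lon_deg))
          return x, y

      print(north_pole_to_circle(0.0, 90.0, 128))    # pole -> centre (0, 0)
      print(north_pole_to_circle(90.0, 45.0, 128))   # cap boundary -> circle edge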
  • Publication number: 20180262775
    Abstract: A video processing method includes: receiving an omnidirectional content corresponding to a sphere, obtaining projection faces from the omnidirectional content, and creating a projection-based frame by generating at least one padding region and packing the projection faces and said at least one padding region in a 360 VR projection layout. The projection faces packed in the 360 VR projection layout include a first projection face and a second projection face, where there is an image content discontinuity edge between the first projection face and the second projection face if the first projection face connects with the second projection face. The at least one padding region packed in the 360 VR projection layout includes a first padding region, where the first padding region connects with the first projection face and the second projection face for isolating the first projection face from the second projection face in the 360 VR projection layout.
    Type: Application
    Filed: March 12, 2018
    Publication date: September 13, 2018
    Inventors: Ya-Hsuan Lee, Chia-Ying Li, Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
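    Illustrative sketch (Python): a small sketch of packing a padding region between two projection faces whose shared edge is discontinuous. Filling the padding by repeating the nearest edge column of each face is an assumption for illustration; the abstract only states that a padding region isolates the two faces.
      import numpy as np

      def pack_with_padding(face_a, face_b, pad_width=4):
          # Pack face_a | padding | face_b horizontally. The padding repeats the
          # adjacent edge column of each face (assumed fill rule).
          left_fill = np.repeat(face_a[:, -1:], pad_width // 2, axis=1)
          right_fill = np.repeat(face_b[:, :1], pad_width - pad_width // 2, axis=1)
          return np.hstack([face_a, left_fill, right_fill, face_b])

      a = np.full((4, 4), 10, dtype=np.uint8)
      b = np.full((4, 4), 200, dtype=np.uint8)
      print(pack_with_padding(a, b).shape)   # (4, 12): 4 + 4 padding + 4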
  • Publication number: 20180262774
    Abstract: A video processing method includes: receiving a first input frame with a 360-degree Virtual Reality (360 VR) projection format; applying first content-oriented rotation to the first input frame to generate a first content-rotated frame; encoding the first content-rotated frame to generate a first part of a bitstream, including generating a first reconstructed frame and storing a reference frame derived from the first reconstructed frame; receiving a second input frame with the 360 VR projection format; applying second content-oriented rotation to the second input frame to generate a second content-rotated frame; configuring content re-rotation according to the first content-oriented rotation and the second content-oriented rotation; applying the content re-rotation to the reference frame to generate a re-rotated reference frame; and encoding, by a video encoder, the second content-rotated frame to generate a second part of the bitstream, including using the re-rotated reference frame for predictive coding of the second content-rotated frame.
    Type: Application
    Filed: March 5, 2018
    Publication date: September 13, 2018
    Inventors: Hung-Chih Lin, Jian-Liang Lin, Shen-Kai Chang
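    Illustrative sketch (Python): a sketch of how the content re-rotation could be configured from the two content-oriented rotations: the reference frame, already rotated by R1, is re-rotated by R2 composed with the inverse of R1 so that it matches the second frame's orientation. Using 3x3 rotation matrices and yaw-only rotations here is an assumption for illustration.
      import numpy as np

      def yaw_matrix(deg):
          # 3x3 rotation about the vertical axis (yaw), in degrees.
          c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
          return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

      def configure_re_rotation(r1, r2):
          # Re-rotation applied to the reference frame: R2 composed with R1 inverse,
          # so content rotated by R1 ends up oriented as if rotated by R2.
          return r2 @ np.linalg.inv(r1)

      r1 = yaw_matrix(30.0)        # first content-oriented rotation (assumed yaw-only)
      r2 = yaw_matrix(75.0)        # second content-oriented rotation
      re_rot = configure_re_rotation(r1, r2)
      print(np.allclose(re_rot @ r1, r2))   # True: re-rotated reference matches R2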
  • Patent number: 10075690
    Abstract: A method for three-dimensional video coding using aligned motion parameter derivation for motion information prediction and inheritance is disclosed. Embodiments according to the present invention utilize motion parameters associated with a corresponding block for motion information prediction or inheritance. The aligned motion parameters may be derived by searching each current reference picture list of the current block to find a matched reference picture having the same POC (Picture Order Count) or the same view index as that of the reference picture pointed to by the MV of the corresponding block. The aligned motion parameters may also be derived by searching each current reference picture list to check whether the reference picture index of the reference picture in the reference view to be inherited exceeds the maximum reference picture index of each current reference picture list of the current block.
    Type: Grant
    Filed: October 17, 2014
    Date of Patent: September 11, 2018
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen
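    Illustrative sketch (Python): a compact sketch of the first search described in the abstract: scan the current block's reference picture list for a picture whose POC or view index matches that of the picture referenced by the corresponding block in the reference view. The record fields and the fallback behaviour are illustrative.
      def find_aligned_reference(current_ref_list, corresponding_ref):
          # current_ref_list: list of dicts with 'poc' and 'view_idx'.
          # corresponding_ref: the picture referenced by the corresponding block.
          # Returns the index of a matching picture, or None if no match exists.
          for idx, ref in enumerate(current_ref_list):
              if (ref["poc"] == corresponding_ref["poc"]
                      or ref["view_idx"] == corresponding_ref["view_idx"]):
                  return idx
          return None   # alignment failed; a caller would fall back (not shown)

      ref_list = [{"poc": 16, "view_idx": 1}, {"poc": 8, "view_idx": 0}]
      print(find_aligned_reference(ref_list, {"poc": 8, "view_idx": 2}))   # -> 1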
  • Patent number: 10075692
    Abstract: A method and apparatus for video coding of a block of depth data or texture data using a simple Intra mode are disclosed. The method determines a prediction process selected from a prediction process list for the current block, where the prediction process list comprises at least one single sample mode and at least one simplified Intra prediction mode. If the prediction process selected for the current block corresponds to a single sample mode, the current block is encoded or decoded using a single sample value derived from one or more previously decoded pixels for the whole current block. If the prediction process selected for the current block corresponds to a simplified Intra prediction mode, the current block is encoded or decoded using an Intra prediction signal derived according to a corresponding Intra prediction mode, with no residual coding for the current block.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: September 11, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Yi-Wen Chen, Jian-Liang Lin
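    Illustrative sketch (Python): a small sketch of the two prediction processes named in the abstract: a single sample mode fills the whole block with one value derived from previously decoded pixels, while a simplified Intra mode uses only the Intra prediction signal with no residual coding. Using the neighbour mean for the single value and horizontal Intra prediction for the Intra signal are assumptions.
      import numpy as np

      def decode_block(mode, block_size, neighbors):
          # neighbors: 1-D array of previously decoded boundary pixels.
          if mode == "single_sample":
              # One value for the whole block, derived from decoded neighbours
              # (mean used here as an assumed derivation rule).
              value = int(np.mean(neighbors))
              return np.full((block_size, block_size), value, dtype=np.int32)
          if mode == "simplified_intra":
              # Intra prediction signal with no residual coding (horizontal Intra
              # prediction assumed: each row copies its left neighbour pixel).
              left = np.asarray(neighbors[:block_size], dtype=np.int32)
              return np.tile(left[:, None], (1, block_size))
          raise ValueError("unknown prediction process")

      print(decode_block("single_sample", 4, np.array([100, 102, 98, 104]))[0])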
  • Publication number: 20180249158
    Abstract: A method and apparatus of video coding incorporating a Deep Neural Network are disclosed. A target signal is processed using a DNN (Deep Neural Network), where the target signal provided to the DNN input corresponds to the reconstructed residual, the output from the prediction process, the reconstruction process, or one or more filtering processes, or a combination of them. The output data from the DNN output is provided for the encoding process or the decoding process. The DNN can be used to restore pixel values of the target signal or to predict a sign of one or more residual pixels between the target signal and an original signal. An absolute value of one or more residual pixels can be signalled in the video bitstream and used with the sign to reduce the residual error of the target signal.
    Type: Application
    Filed: August 29, 2016
    Publication date: August 30, 2018
    Inventors: Yu-Wen HUANG, Yu-Chen SUN, Tzu-Der CHUANG, Jian-Liang LIN, Ching-Yeh CHEN
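    Illustrative sketch (Python): a toy sketch of placing a DNN in the reconstruction path to restore pixel values of a target signal. The network shape, the residual-learning formulation, and the use of PyTorch as a framework are all assumptions; the abstract only says a DNN processes the target signal and its output feeds the encoding or decoding process.
      import torch
      import torch.nn as nn

      class RestorationDNN(nn.Module):
          # Tiny CNN that maps a reconstructed (degraded) signal to a restored one.
          # Three conv layers are an arbitrary choice for illustration.
          def __init__(self, channels=1, features=16):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(features, channels, 3, padding=1),
              )

          def forward(self, reconstructed):
              # Predict a correction and add it back (residual-learning assumption).
              return reconstructed + self.net(reconstructed)

      # One 64x64 luma block from the in-loop reconstruction path (random stand-in).
      block = torch.rand(1, 1, 64, 64)
      restored = RestorationDNN()(block)
      print(restored.shape)   # torch.Size([1, 1, 64, 64])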
  • Publication number: 20180249146
    Abstract: A method of simplified depth-based block partitioning (DBBP) for three-dimensional and multi-view video coding is disclosed. In one embodiment, the method receives input data associated with a current texture block in a dependent view, and determines a corresponding depth block or a reference texture block in a reference view for the current texture block. Then, the method derives a representative value based on the corresponding depth block or the reference texture block, and generates a current segmentation mask from the corresponding depth block or the reference texture block. Then, the method selects a current block partition from block partition candidates, wherein the representative value is used for generating the segmentation mask or selecting the current block partition or both, and applies DBBP coding to the current texture block according to the current segmentation mask generated and the current block partition selected.
    Type: Application
    Filed: May 1, 2018
    Publication date: August 30, 2018
    Inventors: Xianguo ZHANG, Kai ZHANG, Jicheng AN, Han HUANG, Jian-Liang LIN
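    Illustrative sketch (Python): a brief sketch of the simplified DBBP steps in the abstract: derive a representative value from the corresponding depth block and threshold the depth block against it to obtain the segmentation mask. Using the mean of the four corner samples as the representative value is an assumption for illustration.
      import numpy as np

      def dbbp_segmentation_mask(depth_block):
          # Return (representative_value, binary segmentation mask).
          corners = [depth_block[0, 0], depth_block[0, -1],
                     depth_block[-1, 0], depth_block[-1, -1]]
          representative = int(np.mean(corners))            # assumed derivation
          mask = (depth_block > representative).astype(np.uint8)
          return representative, mask

      depth = np.array([[ 30,  32, 150, 155],
                        [ 31,  33, 152, 157],
                        [ 29,  35, 149, 160],
                        [ 28,  36, 151, 158]])
      rep, mask = dbbp_segmentation_mask(depth)
      print(rep)    # representative value
      print(mask)   # 0/1 mask separating the two partitions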
  • Publication number: 20180249154
    Abstract: A method and apparatus of video coding using decoder-derived motion information based on bilateral matching or template matching are disclosed. According to one method, an initial motion vector (MV) index is signalled in a video bitstream at the encoder side or determined from the video bitstream at the decoder side. A selected MV is then derived using bilateral matching, template matching, or both to refine an initial MV associated with the initial MV index. In another method, when MVs for both list 0 and list 1 exist in template matching, the smaller-cost MV of the two may be used for uni-prediction template matching if its cost is lower than that of bi-prediction template matching. According to yet another method, the refinement of the MV search is dependent on the block size. According to yet another method, a merge candidate MV pair is always used for bilateral matching or template matching.
    Type: Application
    Filed: September 2, 2016
    Publication date: August 30, 2018
    Inventors: Tzu-Der CHUANG, Ching-Yeh CHEN, Chih-Wei HSU, Yu-Wen HUANG, Jian-Liang LIN, Yu-Chen SUN, Yi-Ting SHEN
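    Illustrative sketch (Python): a condensed sketch of the uni- versus bi-prediction decision described above for template matching: compute a template cost for the list-0 MV, the list-1 MV, and their bi-prediction, then keep the cheaper of the best uni-prediction and the bi-prediction. Using SAD as the cost and a rounded average as the bi-prediction are assumptions.
      import numpy as np

      def sad(a, b):
          return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

      def choose_template_mode(template, pred_l0, pred_l1):
          # template: reconstructed neighbouring samples of the current block.
          # pred_l0 / pred_l1: template predictions fetched with the list-0 / list-1 MV.
          cost_l0 = sad(template, pred_l0)
          cost_l1 = sad(template, pred_l1)
          cost_bi = sad(template, (pred_l0.astype(np.int32)
                                   + pred_l1.astype(np.int32) + 1) // 2)
          best_uni = min(cost_l0, cost_l1)
          return "uni" if best_uni < cost_bi else "bi"

      tmpl = np.full((4, 4), 100, dtype=np.uint8)
      p0 = np.full((4, 4), 98, dtype=np.uint8)
      p1 = np.full((4, 4), 130, dtype=np.uint8)
      print(choose_template_mode(tmpl, p0, p1))   # 'uni' (list-0 cost is lowest)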