Patents by Inventor Jian-Liang Lin

Jian-Liang Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10554095
    Abstract: An AC motor with a reduction mechanism is disclosed. The AC motor has a stator unit, a rotor unit, and a reduction transmission unit. Because the reduction transmission unit is built directly into the AC motor, the axial space of the motor can be substantially reduced and the power-transmission path between the rotor unit and an output shaft can be shortened, so that the loss of mechanical energy is lowered.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: February 4, 2020
    Assignee: NATIONAL CHENG KUNG UNIVERSITY
    Inventors: Hong-sen Yan, Yi-chang Wu, Jian-liang Lin, Kuan-chen Chen
  • Patent number: 10528842
    Abstract: An image processing method applied to an image processing system. The image processing method comprises: (a) computing an image intensity distribution of an input image; (b) performing atmospheric light estimation on the input image; (c) performing transmission estimation on the input image according to a result of step (a), to generate a transmission estimation parameter; and (d) recovering scene radiance of the input image according to a result generated by step (b) and the transmission estimation parameter. At least one of steps (a)-(c) is performed on data corresponding to only a subset of the pixels of the input image.
    Type: Grant
    Filed: February 6, 2017
    Date of Patent: January 7, 2020
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yu-Wen Huang
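The recovery step (d) above follows the standard haze imaging model I = J·t + A·(1 − t), which can be inverted to obtain the scene radiance J. A minimal sketch of that inversion, assuming float images and a lower clamp `t_min` on the transmission (the partial-pixel optimization the abstract describes, and all names here, are illustrative, not the patented implementation):

```python
import numpy as np

def recover_radiance(image, atmospheric_light, transmission, t_min=0.1):
    # Haze model: I = J * t + A * (1 - t)  =>  J = (I - A) / t + A.
    # Clamp the transmission to avoid amplifying noise where t is tiny.
    t = np.maximum(transmission, t_min)[..., np.newaxis]
    return (image - atmospheric_light) / t + atmospheric_light
```

With transmission equal to 1 everywhere (no haze), the recovered radiance equals the input image, as the model requires.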
  • Patent number: 10511835
    Abstract: Method and apparatus of video coding using decoder-derived motion information based on bilateral matching or template matching are disclosed. According to one method, an initial motion vector (MV) index is signalled in a video bitstream at an encoder side or determined from the video bitstream at a decoder side. A selected MV is then derived using bilateral matching, template matching or both to refine an initial MV associated with the initial MV index. In another method, when both MVs for list 0 and list 1 exist in template matching, the smaller-cost MV of the two may be used for uni-prediction template matching if its cost is lower than that of bi-prediction template matching. According to yet another method, the refinement of the MV search is dependent on the block size. According to yet another method, a merge candidate MV pair is always used for bilateral matching or template matching.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: December 17, 2019
    Assignee: MEDIATEK INC.
    Inventors: Tzu-Der Chuang, Ching-Yeh Chen, Chih-Wei Hsu, Yu-Wen Huang, Jian-Liang Lin, Yu-Chen Sun, Yu-Ting Shen
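The uni- versus bi-prediction decision described in this abstract amounts to a cost comparison over the template region. A sketch under simplifying assumptions (SAD as the cost, simple rounding average for bi-prediction; the real codec's interpolation and cost function differ):

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equal-shaped blocks.
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def select_template_mode(template, pred_l0, pred_l1):
    # Compare the template cost of each uni-prediction (list 0, list 1)
    # against the bi-prediction average; keep uni only if strictly cheaper.
    cost_l0 = sad(template, pred_l0)
    cost_l1 = sad(template, pred_l1)
    bi = (pred_l0.astype(np.int64) + pred_l1.astype(np.int64) + 1) // 2
    cost_bi = sad(template, bi)
    best_uni, best_cost = ('L0', cost_l0) if cost_l0 <= cost_l1 else ('L1', cost_l1)
    return best_uni if best_cost < cost_bi else 'BI'
```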
  • Patent number: 10477234
    Abstract: A method and apparatus derive a motion vector predictor (MVP) candidate set for a block. Embodiments according to the present invention determine a plurality of spatial neighboring blocks of the block, obtain one or more spatial MVP candidates from motion vectors associated with the spatial neighboring blocks, determine whether one or more redundant MVP candidates exist in the spatial MVP candidates, generate a first MVP candidate set, wherein said generating the first MVP candidate set comprises not including the determined one or more redundant MVP candidates into the first MVP candidate set, and generate a final MVP candidate set according to the first MVP candidate set.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: November 12, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
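The redundancy removal this abstract describes is, at its core, order-preserving deduplication of the spatial MVP candidates. A minimal sketch, with motion vectors as integer tuples and a hypothetical `max_candidates` cap (the actual standard's pruning and list-completion rules are more involved):

```python
def build_mvp_candidate_set(spatial_mvs, max_candidates=2):
    # Drop unavailable (None) and duplicate MVs while keeping the
    # original candidate order, then truncate to the list size.
    seen, candidates = set(), []
    for mv in spatial_mvs:
        if mv is not None and mv not in seen:
            seen.add(mv)
            candidates.append(mv)
    return candidates[:max_candidates]
```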
  • Patent number: 10477183
    Abstract: A method of three-dimensional video encoding and decoding that adaptively incorporates camera parameters in the video bitstream according to a control flag is disclosed. The control flag is derived based on a combination of individual control flags associated with multiple depth-oriented coding tools. Another control flag can be incorporated in the video bitstream to indicate whether there is a need for the camera parameters for the current layer. In another embodiment, a first flag and a second flag are used to adaptively control the presence and location of camera parameters for each layer or each view in the video bitstream. The first flag indicates whether camera parameters for each layer or view are present in the video bitstream. The second flag indicates camera parameter location for each layer or view in the video bitstream.
    Type: Grant
    Filed: July 18, 2014
    Date of Patent: November 12, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Yu-Lin Chang, Yi-Wen Chen, Jian-Liang Lin
  • Patent number: 10477230
    Abstract: A method and apparatus for determining a derived disparity vector (DV) directly from an associated depth block for motion vector prediction in three-dimensional video encoding or decoding are disclosed. Input data associated with current motion information of a current texture block of a current texture picture in a current dependent view and a depth block associated with the current texture block are received. The derived DV for the current texture block based on the depth block is then determined and used for inter-view or temporal motion vector prediction (MVP). If the current motion information corresponds to inter-view prediction, the current DV is encoded or decoded using the derived DV as an MVP. If the current motion information corresponds to temporal prediction, the current MV is encoded or decoded using a derived MV of a corresponding texture block in a reference view as the MVP.
    Type: Grant
    Filed: April 2, 2014
    Date of Patent: November 12, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen
  • Patent number: 10462484
    Abstract: A video encoding method includes: setting a 360-degree Virtual Reality (360 VR) projection layout of projection faces, wherein the projection faces have a plurality of triangular projection faces located at a plurality of positions in the 360 VR projection layout, respectively; encoding a frame having a 360-degree image content represented by the projection faces arranged in the 360 VR projection layout to generate a bitstream; and for each position included in at least a portion of the positions, signaling at least one syntax element via the bitstream, wherein the at least one syntax element is set to indicate at least one of an index of a triangular projection view filled into a corresponding triangular projection face located at the position and a rotation angle of content rotation applied to the triangular projection view filled into the corresponding triangular projection face located at the position.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: October 29, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Hung-Chih Lin, Chia-Ying Li, Shen-Kai Chang, Chi-Cheng Ju
  • Patent number: 10462459
    Abstract: Aspects of the disclosure provide a method for denoising a reconstructed picture. The method can include receiving reconstructed video data corresponding to a picture, dividing the picture into current patches, forming patch groups each including a current patch and a number of reference patches that are similar to the current patch, denoising the patch groups to modify pixel values of the patch groups to create a filtered picture, and generating a reference picture based on the filtered picture for encoding or decoding a picture. The operation of denoising the patch groups includes deriving a variance of compression noise in the respective patch group based on a compression noise model. A selection of model parameters is determined based on coding unit level information.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: October 29, 2019
    Assignee: MEDIATEK INC.
    Inventors: Ching-Yeh Chen, Jian-Liang Lin, Tzu-Der Chuang, Yu-Wen Huang
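The patch-group formation step above selects, for each current patch, a set of similar reference patches. A toy sketch using SAD as the similarity measure (a real denoiser would search only a local window and use the noise-model-weighted filtering the abstract describes; `num_refs` is an illustrative parameter):

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equal-shaped patches.
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def form_patch_groups(patches, num_refs=2):
    # For each current patch, take the num_refs most similar other
    # patches by SAD; each group is [current_index, ref_indices...].
    groups = []
    for i, cur in enumerate(patches):
        costs = sorted((sad(cur, p), j) for j, p in enumerate(patches) if j != i)
        groups.append([i] + [j for _, j in costs[:num_refs]])
    return groups
```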
  • Publication number: 20190325553
    Abstract: A video processing method includes receiving a bitstream, and decoding, by a video decoder, the bitstream to generate a decoded frame. The decoded frame is a projection-based frame that has a 360-degree image/video content represented by triangular projection faces packed in an octahedron projection layout. An omnidirectional image/video content of a viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. An equator of the viewing sphere is not mapped along any side of each of the triangular projection faces.
    Type: Application
    Filed: July 2, 2019
    Publication date: October 24, 2019
    Inventors: Hung-Chih Lin, Chao-Chih Huang, Chia-Ying Li, Hui Ou Yang, Jian-Liang Lin, Shen-Kai Chang
  • Publication number: 20190297350
    Abstract: A sample adaptive offset (SAO) filtering method for a reconstructed projection-based frame includes: obtaining at least one padding pixel in a padding area that acts as an extension of a face boundary of a first projection face, and applying SAO filtering to a block that has at least one pixel included in the first projection face. In the reconstructed projection-based frame, there is image content discontinuity between the face boundary of the first projection face and a face boundary of a second projection face. The at least one padding pixel is involved in the SAO filtering of the block.
    Type: Application
    Filed: March 22, 2019
    Publication date: September 26, 2019
    Inventors: Sheng-Yen Lin, LIN LIU, Jian-Liang Lin
  • Publication number: 20190289328
    Abstract: Methods and apparatus of processing 360-degree virtual reality (VR360) pictures are disclosed. According to one method, if a leaf coding unit contains one or more face edges, the leaf coding unit is split into sub-processing units along the face edges without the need to signal the partition. In another method, if the quadtree (QT) or binary tree (BT) partition depth for a processing unit has not reached the maximum QT or BT depth, the processing unit is split. If the processing unit contains a horizontal face edge, QT or horizontal BT partition is applied. If the processing unit contains a vertical face edge, QT or vertical BT partition is applied.
    Type: Application
    Filed: March 12, 2019
    Publication date: September 19, 2019
    Inventors: Cheng-Hsuan SHIH, Jian-Liang LIN
  • Publication number: 20190289327
    Abstract: Methods and apparatus of processing 360-degree virtual reality (VR360) pictures are disclosed. A target reconstructed VR picture in a reconstructed VR picture sequence is divided into multiple processing units, and whether a target processing unit contains any discontinuous edge corresponding to a face boundary in the target reconstructed VR picture is determined. If the target processing unit contains any discontinuous edge, the target processing unit is split into two or more sub-processing units along the discontinuous edges, and neural network (NN) processing is applied to each of the sub-processing units to generate a filtered processing unit. If the target processing unit contains no discontinuous edge, the NN processing is applied to the target processing unit to generate the filtered processing unit. A method and apparatus for a CNN training process are also disclosed. The input reconstructed VR pictures and original pictures are divided into sub-frames along discontinuous boundaries for the training process.
    Type: Application
    Filed: February 27, 2019
    Publication date: September 19, 2019
    Inventors: Sheng-Yen LIN, Jian-Liang LIN
  • Publication number: 20190289316
    Abstract: Method and apparatus of coding 360-degree virtual reality (VR360) pictures are disclosed. According to the method, when a first MV (motion vector) of a target neighboring block for the current block is not available within the 2D projection picture, or when the target neighboring block is not in a same face as the current block: a true neighboring block corresponding to the target neighboring block is identified within the 2D projection picture; if a second MV of the true neighboring block exists, the second MV of the true neighboring block is transformed into a derived MV; and a current MV of the current block is encoded or decoded using the derived MV, or one selected candidate in an MV candidate list including the derived MV, as an MV predictor.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 19, 2019
    Inventors: Cheng-Hsuan SHIH, Jian-Liang LIN
  • Publication number: 20190281273
    Abstract: An adaptive loop filtering (ALF) method for a reconstructed projection-based frame includes: obtaining at least one spherical neighboring pixel in a padding area that acts as an extension of a face boundary of a first projection face, and applying adaptive loop filtering to a block in the first projection face. In the reconstructed projection-based frame, there is image content discontinuity between the face boundary of the first projection face and a face boundary of a second projection face. A region on the sphere to which the padding area corresponds is adjacent to a region on the sphere from which the first projection face is obtained. The at least one spherical neighboring pixel is involved in the adaptive loop filtering of the block.
    Type: Application
    Filed: March 7, 2019
    Publication date: September 12, 2019
    Inventors: Sheng-Yen Lin, Jian-Liang Lin
  • Publication number: 20190281293
    Abstract: A de-blocking method is applied to a reconstructed projection-based frame having a first projection face and a second projection face, and includes obtaining a first spherical neighboring block for a first block with a block edge to be de-blocking filtered, and selectively applying de-blocking to the block edge of the first block for at least updating a portion of pixels of the first block. There is image content discontinuity between a face boundary of the first projection face and a face boundary of the second projection face. The first block is a part of the first projection face, and the block edge of the first block is a part of the face boundary of the first projection face. A region on a sphere to which the first spherical neighboring block corresponds is adjacent to a region on the sphere from which the first projection face is obtained.
    Type: Application
    Filed: March 8, 2019
    Publication date: September 12, 2019
    Inventors: Sheng-Yen Lin, Jian-Liang Lin, Cheng-Hsuan Shih
  • Patent number: 10412402
    Abstract: A method and apparatus for applying filter to Intra prediction samples are disclosed. According to an embodiment of the present invention, a filter is applied to one or more prediction samples of the Initial Intra prediction block to form one or more filtered prediction samples. For example, the filter is applied to the prediction sample in the non-boundary locations of the Initial Intra prediction block. Alternatively, the filter is applied to all prediction samples in the Initial Intra prediction block. The filtered Intra prediction block comprising one or more filtered prediction samples is used as a predictor for Intra prediction encoding or decoding of the current block. The filter corresponds to a FIR (finite impulse response) filter or an IIR (infinite impulse response) filter.
    Type: Grant
    Filed: December 4, 2015
    Date of Patent: September 10, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang
  • Patent number: 10412407
    Abstract: A method and apparatus for video coding utilizing a motion vector predictor (MVP) for a motion vector (MV) for a block are disclosed. According to an embodiment, a mean candidate is derived from at least two candidates in the current candidate list. The mean candidate includes two MVs for the bi-prediction or one MV for the uni-prediction, and at least one MV of the mean candidate is derived as a mean of the MVs of said at least two candidates in one of list 0 and list 1. The mean candidate is added to the current candidate list to form a modified candidate list, and one selected candidate is determined as a MVP or MVPs from the modified candidate list, for current MV or MVs of the current block. The current block is then encoded or decoded in Inter, Merge, or Skip mode utilizing the MVP or MVPs selected.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: September 10, 2019
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen
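The mean candidate described in this abstract averages the motion vectors of two existing candidates per prediction list. A sketch assuming integer MV components and candidates represented as dicts mapping list id (0 or 1) to an (mvx, mvy) pair; the copy-when-only-one-list-present rule is an illustrative assumption, not the codec's exact derivation:

```python
def mean_mv_candidate(cand_a, cand_b):
    # Average the MVs list by list; if only one candidate has an MV
    # for a list, that MV is carried over unchanged (assumed rule).
    mean = {}
    for lst in (0, 1):
        mv_a, mv_b = cand_a.get(lst), cand_b.get(lst)
        if mv_a is not None and mv_b is not None:
            mean[lst] = ((mv_a[0] + mv_b[0]) // 2, (mv_a[1] + mv_b[1]) // 2)
        elif mv_a is not None or mv_b is not None:
            mean[lst] = mv_a if mv_a is not None else mv_b
    return mean
```

The resulting candidate would then be appended to the candidate list before the MVP selection the abstract describes.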
  • Publication number: 20190272616
    Abstract: A video processing method includes: obtaining a plurality of square projection faces from an omnidirectional content of a sphere according to a cube-based projection, scaling the square projection faces to generate a plurality of scaled projection faces, respectively, creating at least one padding region, generating a projection-based frame by packing the scaled projection faces and said at least one padding region in a projection layout of the cube-based projection, and encoding the projection-based frame to generate a part of a bitstream.
    Type: Application
    Filed: February 27, 2019
    Publication date: September 5, 2019
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Publication number: 20190272617
    Abstract: A cube-based projection method includes generating pixels of different square projection faces associated with a cube-based projection of a 360-degree image content of a sphere. Pixels of a first square projection face are generated by utilizing a first mapping function set. Pixels of a second square projection face are generated by utilizing a second mapping function set. The different square projection faces include the first square projection face and the second square projection face. The second mapping function set is not identical to the first mapping function set.
    Type: Application
    Filed: February 26, 2019
    Publication date: September 5, 2019
    Inventors: Ya-Hsuan Lee, Jian-Liang Lin
  • Publication number: 20190251660
    Abstract: A video processing method includes receiving a bitstream, processing the bitstream to obtain at least one syntax element from the bitstream, and decoding the bitstream to generate a current decoded frame having a rotated 360-degree image/video content represented in a 360-degree Virtual Reality (360 VR) projection format. The at least one syntax element signaled via the bitstream indicates rotation information of content-oriented rotation that is involved in generating the rotated 360-degree image/video content, and includes a first syntax element. When the content-oriented rotation is enabled, the first syntax element indicates a rotation degree along a specific rotation axis.
    Type: Application
    Filed: April 24, 2019
    Publication date: August 15, 2019
    Inventors: Hung-Chih Lin, Chao-Chih Huang, Chia-Ying Li, Jian-Liang Lin, Shen-Kai Chang