Patents by Inventor Yu-Wen Huang

Yu-Wen Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9967563
    Abstract: A method and apparatus for loop filter processing of boundary pixels across a block boundary aligned with a slice or tile boundary is disclosed. Embodiments according to the present invention use a parameter of a neighboring slice or tile for loop filter processing across slice or tile boundaries, according to a flag indicating whether cross-slice or cross-tile loop filter processing is allowed or not. According to one embodiment of the present invention, the parameter is a quantization parameter corresponding to a neighboring slice or tile, and the quantization parameter is used for the filter decision in the deblocking filter.
    Type: Grant
    Filed: January 30, 2013
    Date of Patent: May 8, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Wei Hsu, Chia-Yang Tsai, Yu-Wen Huang
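    Illustrative sketch (not the patented implementation): the Python fragment below shows one way the quantization parameter of a neighboring slice or tile could enter the deblocking-filter decision when the cross-boundary flag allows it. The flag name, the HEVC-style averaging rule, and the function name are assumptions of this sketch.

        def boundary_filter_qp(qp_current, qp_neighbor, cross_boundary_filter_enabled):
            """Return the QP used to derive deblocking thresholds for a block edge
            that coincides with a slice or tile boundary."""
            if cross_boundary_filter_enabled:
                # Both sides contribute, as in HEVC-style deblocking decisions (assumed).
                return (qp_current + qp_neighbor + 1) >> 1
            # Cross-boundary filtering disallowed: only the current side's QP is used.
            return qp_current

        # Example: current-slice QP 30, neighboring-slice QP 34, cross-boundary filtering on.
        assert boundary_filter_qp(30, 34, True) == 32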
  • Patent number: 9961369
    Abstract: A method and apparatus for three-dimensional video encoding or decoding using the disparity vector derived from an associated depth block are disclosed. The method determines an associated depth block for a current texture block and derives a disparity vector based on a subset of depth samples of the associated depth block. The subset contains fewer depth samples than the associated depth block, and the subset excludes a single-sample subset corresponding to a center sample of the associated depth block. The derived disparity vector can be used as an inter-view motion (disparity) vector predictor in Inter mode, or as an inter-view (disparity) candidate in Merge mode or Skip mode. The derived disparity vector can also be used to locate a reference block for inter-view motion prediction in Inter mode, an inter-view candidate in Merge or Skip mode, inter-view motion prediction, inter-view disparity prediction, or inter-view residual prediction.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: May 1, 2018
    Assignee: HFI Innovation Inc.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Shaw-Min Lei
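    Illustrative sketch (not the patented method): a minimal Python example of deriving a disparity vector from a small subset of depth samples, here the four corner samples of the associated depth block. The corner-sample choice, the max rule, and the linear depth-to-disparity table are assumptions of this sketch.

        def derive_disparity_vector(depth_block, depth_to_disparity):
            """Convert the maximum of the four corner depth samples into a
            horizontal disparity; the vertical component is assumed to be zero."""
            h, w = len(depth_block), len(depth_block[0])
            corners = [depth_block[0][0], depth_block[0][w - 1],
                       depth_block[h - 1][0], depth_block[h - 1][w - 1]]
            return (depth_to_disparity[max(corners)], 0)

        # Toy example: 4x4 depth block, assumed linear 8-bit depth-to-disparity mapping.
        table = [d // 4 for d in range(256)]
        block = [[10, 20, 30, 40],
                 [11, 21, 31, 41],
                 [12, 22, 32, 42],
                 [13, 23, 33, 100]]
        print(derive_disparity_vector(block, table))   # (25, 0)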
  • Patent number: 9961364
    Abstract: An apparatus and method for temporal motion vector prediction for a current block in a picture are disclosed. In the present method, one temporal block in a first reference picture in a first list, selected from a list group comprising list 0 and list 1, is determined. When the determined temporal block has at least one motion vector, a candidate set is determined based on the motion vector of the temporal block. The temporal motion vector predictor or temporal motion vector predictor candidate or temporal motion vector or temporal motion vector candidate for the current block is determined from the candidate set by checking for the presence of a motion vector pointing to a reference picture in a first specific list in said at least one motion vector, wherein the first specific list is selected from the list group based on a priority order.
    Type: Grant
    Filed: July 21, 2015
    Date of Patent: May 1, 2018
    Assignee: HFI Innovation Inc.
    Inventors: Yu-Pao Tsai, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
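    Illustrative sketch (not the patented method): the abstract describes checking the co-located block's motion vectors for one pointing into a specific reference list chosen by a priority order. The sketch below assumes the co-located motion data is given as a dictionary keyed by list id; the names and the default priority are assumptions.

        def select_temporal_mvp(colocated_mvs, list_priority=(0, 1)):
            """Pick a temporal MV predictor candidate from the co-located block.
            colocated_mvs maps reference list id (0 or 1) to an (mvx, mvy) tuple;
            lists are checked in priority order and the first available MV wins."""
            for lst in list_priority:
                if lst in colocated_mvs:
                    return colocated_mvs[lst]
            return None   # co-located block has no usable motion vector

        # Example: the co-located block only carries a list-1 MV; it is selected.
        print(select_temporal_mvp({1: (4, -2)}))   # (4, -2)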
  • Publication number: 20180115764
    Abstract: A method and apparatus for deriving MV/MVP (motion vector or motion vector predictor) or DV/DVP (disparity vector or disparity vector predictor) associated with Skip mode, Merge mode or Inter mode for a block of a current picture in three-dimensional (3D) video coding are disclosed. The 3D video coding may use temporal prediction and inter-view prediction to exploit temporal and inter-view correlation. MV/DV prediction is applied to reduce the bitrate associated with MV/DV coding. The MV/MVP or DV/DVP for a block is derived from spatial candidates, temporal candidates and inter-view candidates. For the inter-view candidate, the position of the inter-view co-located block can be located using a global disparity vector (GDV) or by warping the current block onto the co-located picture according to the depth information. The candidate can also be derived as the vector corresponding to warping the current block onto the co-located picture according to the depth information.
    Type: Application
    Filed: December 20, 2017
    Publication date: April 26, 2018
    Applicant: HFI INNOVATION INC.
    Inventors: Jian-Liang LIN, Yi-Wen CHEN, Yu-Pao TSAI, Yu-Wen HUANG, Shaw-Min LEI
  • Publication number: 20180109812
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to encode an image or video. A slice is partitioned into a set of first units. For each first unit in the set of first units, the first unit is partitioned into a set of second units. The partitioning includes, for each second unit in the set of second units, determining whether the second unit satisfies a predetermined constraint. If the second unit does not satisfy the predetermined constraint, a first set of partitioning techniques is tested to partition the second unit. If the second unit satisfies the predetermined constraint, the first set of partitioning techniques and a second set of partitioning techniques are tested to partition the second unit. The second unit is partitioned using a technique from the first set of partitioning techniques or the second set of partitioning techniques identified by the testing.
    Type: Application
    Filed: October 12, 2017
    Publication date: April 19, 2018
    Applicant: MediaTek Inc.
    Inventors: Chia-Ming Tsai, Chih-Wei Hsu, Tzu-Der Chuang, Ching-Yeh Chen, Yu-Wen Huang
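    Illustrative sketch (not the patented method): a minimal Python rendering of the constraint-gated testing described above, where a second-level unit that satisfies the predetermined constraint is allowed to test an additional set of partitioning techniques. The cost-function interface and the toy constraint are assumptions of this sketch.

        def choose_partitioning(unit, satisfies_constraint, first_set, second_set, cost):
            """Test only the first set of techniques unless the unit satisfies the
            constraint, in which case both sets are tested; return the cheapest."""
            techniques = list(first_set)
            if satisfies_constraint(unit):
                techniques += list(second_set)
            return min(techniques, key=lambda t: cost(unit, t))

        # Toy demo: units larger than 32 satisfy the constraint; the assumed cost
        # function simply prefers the technique named "quadtree" when available.
        demo_cost = lambda unit, tech: 0 if tech == "quadtree" else 1
        print(choose_partitioning(64, lambda u: u > 32, ["no_split"], ["quadtree"], demo_cost))  # quadtree
        print(choose_partitioning(16, lambda u: u > 32, ["no_split"], ["quadtree"], demo_cost))  # no_split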
  • Publication number: 20180109785
    Abstract: In one embodiment, a method receives a video bitstream corresponding to compressed video, wherein Filter Unit (FU) based in-loop filtering is allowed in a reconstruction loop associated with the compressed video. The method then derives reconstructed video from the video bitstream, wherein the reconstructed video is partitioned into FUs and derives a merge flag from the video bitstream for each of the FUs, wherein the merge flag indicates whether said each of the FUs is merged with a neighboring FU. The method further receives a merge index from the video bitstream if the merge flag indicates that said each of the FUs is merged, and receives the filter parameters from the video bitstream if the merge flag indicates that said each of the FUs is not merged. Finally, the method applies the in-loop filtering to said each of the FUs using the filter parameters.
    Type: Application
    Filed: December 14, 2017
    Publication date: April 19, 2018
    Inventors: Ching-Yeh CHEN, Chih-Ming FU, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
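    Illustrative sketch (not the patented method): decoder-side logic for the merge flag and merge index handling of filter units (FUs) described above. The parsing callbacks and the neighbor table are stand-ins and assumptions of this sketch.

        def derive_fu_filter_parameters(read_flag, read_index, read_params,
                                        filter_units, neighbors):
            """For each FU: if its merge flag is set, reuse the filter parameters of
            the neighboring FU selected by the merge index; otherwise read the FU's
            own filter parameters from the bitstream."""
            params = {}
            for fu in filter_units:
                if read_flag(fu):                              # merge_flag
                    neighbor = neighbors[fu][read_index(fu)]   # merge_index
                    params[fu] = params[neighbor]
                else:
                    params[fu] = read_params(fu)
            return params

        # Toy demo: FU 0 carries explicit parameters, FU 1 merges with FU 0.
        print(derive_fu_filter_parameters(
            read_flag=lambda fu: fu == 1,
            read_index=lambda fu: 0,
            read_params=lambda fu: {"coeffs": [1, 2, 1]},
            filter_units=[0, 1],
            neighbors={1: [0]},
        ))   # {0: {'coeffs': [1, 2, 1]}, 1: {'coeffs': [1, 2, 1]}}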
  • Publication number: 20180103260
    Abstract: A method and apparatus of sharing an on-chip buffer or cache memory for a video coding system using coding modes including Inter prediction mode or Intra Block Copy (IntraBC) mode are disclosed. At least partial pre-deblocking reconstructed video data of a current picture is stored in an on-chip buffer or cache memory. If the current block is coded using IntraBC mode, the pre-deblocking reconstructed video data of the current picture stored in the on-chip buffer or cache memory are used to derive IntraBC prediction for the current block. In some embodiments, if the current block is coded using Inter prediction mode, Inter reference video data from the previous picture stored in the on-chip buffer or cache memory are used to derive Inter prediction for the current block. In another embodiment, the motion compensation/motion estimation unit is shared by the two modes.
    Type: Application
    Filed: June 3, 2016
    Publication date: April 12, 2018
    Inventors: Tzu-Der CHUANG, Ping CHAO, Ching-Yeh CHEN, Yu-Chen SUN, Chih-Ming WANG, Chia-Yun CHENG, Han-Liang CHOU, Yu-Wen HUANG
  • Patent number: 9942571
    Abstract: A method and apparatus for sharing context among different SAO syntax elements for a video coder are disclosed. Embodiments of the present invention apply CABAC coding to multiple SAO syntax elements according to a joint context model, wherein the multiple SAO syntax elements share the joint context. The multiple SAO syntax elements may correspond to SAO merge left flag and SAO merge up flag. The multiple SAO syntax elements may correspond to SAO merge left flags or merge up flags associated with different color components. The joint context model can be derived based on joint statistics of the multiple SAO syntax elements. Embodiments of the present invention code the SAO type index using truncated unary binarization, using CABAC with only one context, or using CABAC with context mode for the first bin associated with the SAO type index and with bypass mode for any remaining bin.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: April 10, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Yu-Wen Huang, Chih-Wei Hsu, Shaw-Min Lei
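    Illustrative sketch (not the patented implementation): the abstract mentions coding the SAO type index with truncated unary binarization, a CABAC context for the first bin and bypass mode for any remaining bins. The sketch below only produces the bin string and tags each bin with its assumed coding mode; the maximum index of 2 follows HEVC practice and is an assumption.

        def truncated_unary(value, max_value):
            """Truncated-unary binarization: `value` ones followed by a terminating
            zero, except that the zero is dropped when value == max_value."""
            bins = [1] * value
            if value < max_value:
                bins.append(0)
            return bins

        def sao_type_idx_bins(sao_type_idx, max_idx=2):
            bins = truncated_unary(sao_type_idx, max_idx)
            # First bin context-coded, remaining bins bypass-coded (as described above).
            return [(b, "context" if i == 0 else "bypass") for i, b in enumerate(bins)]

        print(sao_type_idx_bins(0))   # [(0, 'context')]
        print(sao_type_idx_bins(2))   # [(1, 'context'), (1, 'bypass')]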
  • Publication number: 20180098088
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) candidate set for a block are disclosed. Embodiments according to the present invention generate a complete full MVP candidate set based on the redundancy-removed MVP candidate set if one or more redundant MVP candidates exist. In one embodiment, the method generates the complete full MVP candidate set by adding replacement MVP candidates to the redundancy-removed MVP candidate set, and a value corresponding to a non-redundant MVP is assigned to each replacement MVP candidate. In another embodiment, the method generates the complete full MVP candidate set by adding replacement MVP candidates to the redundancy-removed MVP candidate set, and a value is assigned to each replacement MVP candidate according to a rule. The procedures of assigning a value, checking redundancy, and removing redundant MVP candidates are repeated until the MVP candidate set is complete and full.
    Type: Application
    Filed: December 6, 2017
    Publication date: April 5, 2018
    Inventors: Tzu-Der Chuang, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
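    Illustrative sketch (not the patented method): a small Python example of the candidate-set completion described above, where redundant MVP candidates are removed and replacement candidates with non-redundant values are appended until the set is full. The required size and the source of replacement values are assumptions of this sketch.

        def complete_mvp_candidate_set(candidates, required_size, replacement_values):
            """Remove duplicate MVP candidates, then refill the set to its required
            size with replacement candidates that do not duplicate existing entries."""
            seen, result = set(), []
            for mv in candidates:
                if mv not in seen:                 # redundancy check
                    seen.add(mv)
                    result.append(mv)
            for mv in replacement_values:
                if len(result) == required_size:
                    break
                if mv not in seen:                 # keep the refilled set non-redundant
                    seen.add(mv)
                    result.append(mv)
            return result

        # Example: two of three candidates are identical; the set is refilled to size 3.
        print(complete_mvp_candidate_set(
            candidates=[(2, 0), (2, 0), (-1, 3)],
            required_size=3,
            replacement_values=[(0, 0), (1, 1)],
        ))   # [(2, 0), (-1, 3), (0, 0)]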
  • Publication number: 20180091829
    Abstract: A method of palette index map coding of blocks in a picture by grouping coded symbols of the same type is disclosed for a video encoder and decoder. In one embodiment, all syntax elements corresponding to the pixel index are grouped into a pixel index group, and all syntax elements corresponding to the escape pixel are grouped into an escape pixel group. All syntax elements corresponding to the run type and run length are grouped into an interleaved run type/run length group, or grouped into a separate run type group and run length group. In another embodiment, the system parses from the video bitstream a last-run mode syntax element for a current block, where the last-run mode syntax element indicates whether the last run mode is a copy-index mode or a copy-above mode. Information associated with the last-run mode syntax element is used for reconstructing the palette index map.
    Type: Application
    Filed: February 3, 2016
    Publication date: March 29, 2018
    Inventors: Shan LIU, Xiaozhong XU, Tzu-Der CHUANG, Yu-Chen SUN, Wang-Lin LAI, Yu-Wen HUANG, Jing YE
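    Illustrative sketch (not the patented syntax): a toy Python example of regrouping per-pixel palette-coding symbols so that elements of the same type become contiguous, as the abstract describes for pixel indices, escape values, run types and run lengths. The symbol layout is a simplification and an assumption of this sketch.

        def group_palette_syntax(symbols):
            """Collect run types, run lengths, palette indices and escape values
            into separate groups instead of interleaving them per pixel."""
            groups = {"run_type": [], "run_length": [], "index": [], "escape": []}
            for sym in symbols:
                for key, value in sym.items():
                    groups[key].append(value)
            return groups

        stream = [{"run_type": "copy_index", "run_length": 3, "index": 2},
                  {"run_type": "copy_above", "run_length": 5},
                  {"escape": 137}]
        print(group_palette_syntax(stream))
        # {'run_type': ['copy_index', 'copy_above'], 'run_length': [3, 5],
        #  'index': [2], 'escape': [137]}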
  • Patent number: 9924181
    Abstract: A method and apparatus for inter-layer prediction for scalable video coding are disclosed. Embodiments of the present invention utilize weighted prediction for scalable coding. The weighted prediction is based on the predicted texture data and the inter-layer Intra prediction data derived from BL reconstructed data. The inter-layer Intra prediction data corresponds to the BL reconstructed data or up-sampled BL reconstructed data. The predicted texture data corresponds to spatial Intra prediction data or motion-compensated prediction data based on the second EL video data in the same layer as the current EL picture. Embodiments of the present invention also utilize the reference picture list including an inter-layer reference picture (ILRP) corresponding to BL reconstructed texture frame or up-sampled BL reconstructed texture frame for Inter prediction of EL video data. The motion vector is limited to a range around (0,0) when the ILRP is selected as a reference picture.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: March 20, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Yu-Wen Huang, Ching-Yeh Chen, Chia-Yang Tsai, Chih-Ming Fu, Shih-Ta Hsiang
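    Illustrative sketch (not the patented method): a minimal Python example of the weighted prediction described above, blending the enhancement-layer predicted texture (spatial Intra or motion-compensated prediction) with the up-sampled base-layer reconstruction. The 50/50 weight is an assumption; the abstract only states that a weighted prediction is formed.

        def weighted_inter_layer_prediction(el_predicted, upsampled_bl_reco, weight=0.5):
            """Blend two equally sized sample blocks, with the given weight applied
            to the enhancement-layer prediction."""
            return [
                [round(weight * p + (1.0 - weight) * b) for p, b in zip(prow, brow)]
                for prow, brow in zip(el_predicted, upsampled_bl_reco)
            ]

        # 2x2 toy example.
        print(weighted_inter_layer_prediction([[100, 120], [90, 80]],
                                               [[110, 100], [90, 70]]))
        # [[105, 110], [90, 75]]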
  • Patent number: 9918068
    Abstract: A method and apparatus for texture image compression in a 3D video coding system are disclosed. Embodiments according to the present invention derive depth information related to a depth map associated with a texture image and then process the texture image based on the derived depth information. The invention can be applied to the encoder side as well as the decoder side. The encoding order or decoding order for the depth maps and the texture images can be based on block-wise interleaving or picture-wise interleaving. One aspect of the present invention relates to partitioning of the texture image based on depth information of the depth map. Another aspect of the present invention relates to motion vector or motion vector predictor processing based on the depth information.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: March 13, 2018
    Assignee: MEDIATEK INC.
    Inventors: Yu-Lin Chang, Shih-Ta Hsiang, Chi-Ling Wu, Chih-Ming Fu, Chia-Ping Chen, Yu-Pao Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Publication number: 20180070077
    Abstract: A method of inter-layer or inter-view prediction for an inter-layer or inter-view video coding system is disclosed. The method includes receiving a to-be-processed block in the EL (Enhancement Layer) or the EV (Enhancement View), determining a collocated block in the BL (Base Layer) or the BV (Base View), wherein the collocated block is located at a location in the BL or the BV corresponding to the to-be-processed block in the EL or the EV, deriving a predictor for the to-be-processed block in the EL or the EV from the collocated block in the BL or the BV based on pixel data of the BL or the BV, wherein the predictor corresponds to a linear function of pixel data in the collocated block, and encoding or decoding the to-be-processed block in the EL or the EV using the predictor.
    Type: Application
    Filed: October 27, 2017
    Publication date: March 8, 2018
    Inventors: Chia-Yang TSAI, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
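    Illustrative sketch (not the patented method): the abstract states that the predictor for the EL/EV block is a linear function of the pixel data in the co-located BL/BV block. The sketch below takes the linear coefficients as inputs; how they would be derived is not covered here, and the parameter names are assumptions.

        def linear_inter_layer_predictor(collocated_bl_block, a, b):
            """Predict each enhancement-layer/view sample as a*x + b from the
            corresponding co-located base-layer/view sample x."""
            return [[a * x + b for x in row] for row in collocated_bl_block]

        print(linear_inter_layer_predictor([[10, 20], [30, 40]], a=2, b=1))
        # [[21, 41], [61, 81]]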
  • Patent number: 9905023
    Abstract: A depth image processing method and a depth image processing system are provided. The depth image processing method includes: capturing a first image and a second image; performing a feature comparison to acquire a plurality of feature pairs between the first image and the second image, wherein each of the feature pairs includes a feature in the first image and a corresponding feature in the second image; computing disparities of the feature pairs; and computing a depth image from the first image and the second image when the disparities of the feature pairs are all smaller than a disparity threshold.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: February 27, 2018
    Assignee: Wistron Corporation
    Inventors: Sheng-Shien Hsieh, Kai-Chung Cheng, Yu-Wen Huang, Tzu-Yao Lin, Pin-Hong Liou
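    Illustrative sketch (not the patented method): a small Python check of the condition described above, where depth computation proceeds only when the disparities of all feature pairs are smaller than the disparity threshold. Treating disparity as the horizontal offset between matched features is an assumption of this sketch.

        def all_disparities_below_threshold(feature_pairs, disparity_threshold):
            """feature_pairs holds ((x1, y1), (x2, y2)) correspondences between the
            first and second image; disparity is taken as |x1 - x2|."""
            return all(abs(p1[0] - p2[0]) < disparity_threshold
                       for p1, p2 in feature_pairs)

        pairs = [((100, 50), (96, 50)), ((200, 80), (195, 81))]
        print(all_disparities_below_threshold(pairs, disparity_threshold=8))   # True
        print(all_disparities_below_threshold(pairs, disparity_threshold=4))   # False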
  • Publication number: 20180041769
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to receive compressed video data, wherein the compressed video data is related to a set of frames. A decoder-side predictor refinement technique is used to calculate a new motion vector for a current frame from the set of frames, wherein the new motion vector estimates motion for the current frame based on one or more reference frames. An existing motion vector associated with a different frame is retrieved from a motion vector buffer. The new motion vector is calculated from the existing motion vector using a decoder-side motion vector prediction technique, such that the existing motion vector remains in the motion vector buffer after the new motion vector is calculated.
    Type: Application
    Filed: August 7, 2017
    Publication date: February 8, 2018
    Applicant: MediaTek Inc.
    Inventors: Tzu-Der Chuang, Ching-Yeh Chen, Chih-Wei Hsu, Yu-Wen Huang
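    Illustrative sketch (not the patented method): the key point of the abstract is that the refined motion vector is computed from an existing motion vector without overwriting it, so the existing vector is still in the motion vector buffer afterwards. The refinement callback below is a stand-in and an assumption of this sketch.

        def refine_mv_without_touching_buffer(mv_buffer, key, refine):
            """Compute a refined MV from the buffered MV; the buffer entry itself is
            left unchanged so later blocks and frames still see the unrefined MV."""
            refined = refine(mv_buffer[key])
            return refined                      # mv_buffer[key] is NOT overwritten

        buf = {("frame0", "blk3"): (6, 2)}
        print(refine_mv_without_touching_buffer(buf, ("frame0", "blk3"),
                                                lambda mv: (mv[0] + 1, mv[1])))   # (7, 2)
        print(buf)   # unchanged: {('frame0', 'blk3'): (6, 2)}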
  • Patent number: 9877019
    Abstract: Implementations of the invention are provided in methods for filter-unit based in-loop filtering in a video decoder and encoder. In one implementation, filter parameters are selected from a filter parameter set for each filter based on a filter index. In another implementation, the picture is partitioned into filter units according to a filter unit size, which can be selected between a default size and another size. When another size is selected, the filter unit size may be conveyed using direct size information or ratio information. In another implementation, a merge flag and a merge index are used to convey filter unit merge information. A method for filter-unit based in-loop filtering in a video encoder for color video is also disclosed. In one embodiment, the method incorporates filter syntax in the video bitstream by interleaving the color-component filter syntax for the FUs.
    Type: Grant
    Filed: December 31, 2011
    Date of Patent: January 23, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Ching-Yeh Chen, Chih-Ming Fu, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9877041
    Abstract: A method and apparatus for deriving a scaled motion vector (MV) for a current block based on a candidate MV associated with a candidate block determines a first picture distance between a current picture corresponding to the current block and a target reference picture pointed to by a current motion vector of the current block, and then determines a second picture distance between a candidate picture corresponding to the candidate block and a candidate reference picture pointed to by the candidate MV of the candidate block. The method further determines a pre-scaled distance division having a first value related to dividing a pre-scaling factor by the second picture distance, and determines an intermediate scaling factor by right-shifting a multiplication result associated with the first picture distance and the pre-scaled distance division by q bits, wherein q is a positive integer.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: January 23, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Jian-Liang Lin, Ching-Yeh Chen, Yi-Wen Chen, Yu-Wen Huang
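    Illustrative sketch (not the patented implementation): an HEVC-style rendering of the scaling steps named in the abstract, where the pre-scaled distance division is derived from the second picture distance and the intermediate scaling factor comes from right-shifting its product with the first picture distance. The constant 16384, the shift amounts q = 6 and 8, and the clipping ranges follow common HEVC practice and are assumptions of this sketch.

        def scale_mv(mv, tb, td):
            """tb: picture distance between the current picture and its target
            reference; td: distance between the candidate picture and the
            candidate's reference. Returns the candidate MV scaled by tb/td."""
            tx = (16384 + abs(td) // 2) // td          # pre-scaled distance division
            scale = (tb * tx + 32) >> 6                # right-shift by q = 6 bits
            scale = max(-4096, min(4095, scale))       # clip intermediate scaling factor
            def scale_comp(v):
                s = scale * v
                return max(-32768, min(32767, (s + 127 + int(s < 0)) >> 8))
            return (scale_comp(mv[0]), scale_comp(mv[1]))

        # Example: candidate MV (8, -4), current distance 2, candidate distance 4.
        print(scale_mv((8, -4), tb=2, td=4))   # (4, -2), i.e. the MV halved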
  • Patent number: 9872016
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) candidate set for a block are disclosed. Embodiments according to the present invention generate a complete full MVP candidate set based on the redundancy-removed MVP candidate set if one or more redundant MVP candidates exist. In one embodiment, the method generates the complete full MVP candidate set by adding replacement MVP candidates to the redundancy-removed MVP candidate set, and a value corresponding to a non-redundant MVP is assigned to each replacement MVP candidate. In another embodiment, the method generates the complete full MVP candidate set by adding replacement MVP candidates to the redundancy-removed MVP candidate set, and a value is assigned to each replacement MVP candidate according to a rule. The procedures of assigning a value, checking redundancy, and removing redundant MVP candidates are repeated until the MVP candidate set is complete and full.
    Type: Grant
    Filed: October 18, 2012
    Date of Patent: January 16, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9872015
    Abstract: Video decoding and encoding with in-loop processing of reconstructed video are disclosed. At the decoder side, a flag is received from the video bitstream and, according to the flag, information associated with the in-loop filter parameters is received either from a data payload in the video bitstream shared by two or more coding blocks or from individual coding block data in the video bitstream. At the encoder side, information associated with the in-loop filter parameters is incorporated either in a data payload in a video bitstream to be shared by two or more coding blocks or interleaved with individual coding block data in the video bitstream, according to the flag. The data payload in the video bitstream is in a picture level, an Adaptation Parameter Set (APS), or a slice header.
    Type: Grant
    Filed: April 20, 2012
    Date of Patent: January 16, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Publication number: 20180014028
    Abstract: Methods for decoding of a video bitstream by a video decoding circuit are provided. In one implementation, a method receives coded data for a 2N×2N coding unit (CU) from the video bitstream, selects one or more first codewords according to whether asymmetric motion partition is disabled or enabled when a size of said 2N×2N CU is not equal to a smallest CU size, wherein none of the first codewords corresponds to INTER N×N partition, selects one or more second codewords when the size of said 2N×2N CU is equal to the smallest CU size, wherein none of the second codewords corresponds to the INTER N×N partition when N is 4, determines a CU structure for said 2N×2N CU from the video bitstream using said one or more first codewords or said one or more second codewords, and decodes the video bitstream using the CU structure.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 11, 2018
    Inventors: Shan LIU, Yu-Wen HUANG, Shaw-Min LEI