Patents by Inventor Chih-Ming Fu

Chih-Ming Fu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10405004
    Abstract: A method and apparatus for processing reconstructed video using in-loop filter in a video coding system are disclosed. The method uses chroma in-loop filter indication to indicate whether chroma components are processed by in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether the in-loop filter processing is applied to an entire picture using same in-loop filter information or each block of the picture using individual in-loop filter information. Various embodiments according to the present invention to increase efficiency are disclosed, wherein various aspects of in-loop filter information are taken into consideration for efficient coding such as the property of quadtree-based partition, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing to a set of in-loop filter information, and prediction of in-loop filter information.
    Type: Grant
    Filed: February 4, 2016
    Date of Patent: September 3, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
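The conditional signaling described in this abstract can be sketched in a few lines: the chroma in-loop filter indication is only written (and read) when the luma indication is on, saving a bit whenever luma filtering is off. The function names and flat bit-list "bitstream" below are illustrative, not actual HEVC syntax:

```python
def write_inloop_flags(bits, luma_on, chroma_on):
    """Append in-loop filter flags to a bit list (illustrative syntax).

    The chroma indication is only signaled when the luma indication
    says in-loop filtering is applied to the luma component.
    """
    bits.append(1 if luma_on else 0)
    if luma_on:
        bits.append(1 if chroma_on else 0)
    return bits


def read_inloop_flags(bits):
    """Parse the flags back; chroma defaults to off when not signaled."""
    it = iter(bits)
    luma_on = next(it) == 1
    chroma_on = (next(it) == 1) if luma_on else False
    return luma_on, chroma_on
```

A round trip shows the saving: when luma filtering is off the writer emits a single bit, and the reader still recovers a well-defined chroma state.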
  • Publication number: 20190261008
    Abstract: A system and method of content adaptive pixel intensity processing are described. The method includes receiving a predefined set of processed video data configured from the processed video data, deriving range information associated with an original maximum value and an original minimum value for a predefined set of original video data, wherein the predefined set of processed video data is derived from the predefined set of original video data, and adaptively clipping the pixel intensity of the predefined set of processed video data to a range derived from the range information, wherein the range information is incorporated in a bitstream and represented in the form of the original maximum value and the original minimum value, prediction values associated with a reference maximum value and a reference minimum value, or a range index associated with a predefined range set.
    Type: Application
    Filed: May 6, 2019
    Publication date: August 22, 2019
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Yu-Wen Huang
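The core of the clipping step above is simple once the range has been decoded from the bitstream: every processed sample is clamped to the signaled [min, max]. A one-function sketch, operating per sample on a flat list (real codecs apply this per colour component on 2-D blocks; the names are illustrative):

```python
def adaptive_clip(samples, sig_min, sig_max):
    """Clamp processed pixel intensities to [sig_min, sig_max], the range
    derived from the signaled range information (illustrative sketch)."""
    return [min(max(s, sig_min), sig_max) for s in samples]
```

For example, with a signaled range of [16, 235], out-of-range values produced by filtering are pulled back into the range while in-range values pass through unchanged.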
  • Publication number: 20190149846
    Abstract: A method and apparatus for processing reconstructed video using in-loop filter in a video coding system are disclosed. The method uses chroma in-loop filter indication to indicate whether chroma components are processed by in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether the in-loop filter processing is applied to an entire picture using same in-loop filter information or each block of the picture using individual in-loop filter information. Various embodiments according to the present invention to increase efficiency are disclosed, wherein various aspects of in-loop filter information are taken into consideration for efficient coding such as the property of quadtree-based partition, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing to a set of in-loop filter information, and prediction of in-loop filter information.
    Type: Application
    Filed: January 16, 2019
    Publication date: May 16, 2019
    Inventors: Chih-Ming FU, Ching-Yeh CHEN, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
  • Patent number: 10291921
    Abstract: A system and method of content adaptive pixel intensity processing are described. The method includes receiving a predefined set of processed video data configured from the processed video data, deriving range information associated with an original maximum value and an original minimum value for a predefined set of original video data, wherein the predefined set of processed video data is derived from the predefined set of original video data, and adaptively clipping the pixel intensity of the predefined set of processed video data to a range derived from the range information, wherein the range information is incorporated in a bitstream and represented in the form of the original maximum value and the original minimum value, prediction values associated with a reference maximum value and a reference minimum value, or a range index associated with a predefined range set.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: May 14, 2019
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Yu-Wen Huang
  • Patent number: 10178410
    Abstract: A method and apparatus for three-dimensional and scalable video coding are disclosed. Embodiments according to the present invention determine a motion information set associated with the video data, wherein at least part of the motion information set is made available or unavailable conditionally depending on the video data type. The video data type may correspond to depth data, texture data, a view associated with the video data in three-dimensional video coding, or a layer associated with the video data in scalable video coding. The motion information set is then provided for coding or decoding of the video data, other video data, or both. At least a flag may be used to indicate whether part of the motion information set is available or unavailable. Alternatively, a coding profile for the video data may be used to determine whether the motion information is available or not based on the video data type.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: January 8, 2019
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Yi-Wen Chen, Jian-Liang Lin, Yu-Wen Huang
  • Patent number: 10116967
    Abstract: A method and apparatus for Sample Adaptive Offset (SAO) processing of video data in a video decoder are disclosed. In an embodiment, the method includes receiving a block of processed-reconstructed pixels associated with a picture from a media or a processor, wherein the block of processed-reconstructed pixels are decoded from a video bitstream; determining a SAO type index from the video bitstream, wherein the SAO type index is decoded according to truncated unary binarization, the SAO type index is decoded using CABAC (context-based adaptive binary arithmetic coding) with one context, or the SAO type index is decoded by CABAC using a context mode for a first bin associated with the SAO type index and using a bypass mode for any remaining bin associated with the SAO type index; and applying SAO processing to the block of processed-reconstructed pixels based on SAO information including the SAO type index.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: October 30, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Yu-Wen Huang, Chih-Wei Hsu, Shaw-Min Lei
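Truncated unary binarization, mentioned for the SAO type index, maps a value v (with a known maximum cMax) to v one-bins followed by a terminating zero-bin, except that the terminator is omitted when v equals cMax. A sketch of both directions, working on bin values only (the CABAC context/bypass coding of each bin is a separate stage):

```python
def truncated_unary_encode(value, c_max):
    """Binarize `value` (0..c_max) as `value` ones plus a terminating
    zero; the terminator is omitted when value == c_max."""
    bins = [1] * value
    if value < c_max:
        bins.append(0)
    return bins


def truncated_unary_decode(bins, c_max):
    """Recover the value by counting leading ones, stopping at a zero
    bin or when the cap c_max is reached."""
    value = 0
    for b in bins:
        if value == c_max or b == 0:
            break
        value += 1
    return value
```

With cMax = 3 the codewords are 0, 10, 110, 111: the largest value needs no terminator because the decoder already knows the cap.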
  • Patent number: 9998737
    Abstract: A method and apparatus for processing of coded video using in-loop processing are disclosed. The method operates by receiving input data to said in-loop processing, wherein the input data corresponds to reconstructed coding units of the picture; configuring the input data into multiple filter units; selecting a filter from a candidate filter set comprising at least two candidate filters associated with said in-loop processing for one of said multiple filter units; applying said in-loop processing to said one of said multiple filter units using the selected filter to generate a processed filter unit, wherein when said one of said multiple filter units comprises at least two reconstructed coding units, the selected filter is applied to all of said at least two reconstructed coding units; and providing processed video data comprising the processed filter unit. The apparatus provides circuits to carry out the operations of the method.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: June 12, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Publication number: 20180109785
    Abstract: In one embodiment, a method receives a video bitstream corresponding to compressed video, wherein Filter Unit (FU) based in-loop filtering is allowed in a reconstruction loop associated with the compressed video. The method then derives reconstructed video from the video bitstream, wherein the reconstructed video is partitioned into FUs and derives a merge flag from the video bitstream for each of the FUs, wherein the merge flag indicates whether said each of the FUs is merged with a neighboring FU. The method further receives a merge index from the video bitstream if the merge flag indicates that said each of the FUs is merged, and receives the filter parameters from the video bitstream if the merge flag indicates that said each of the FUs is not merged. Finally, the method applies the in-loop filtering to said each of the FUs using the filter parameters.
    Type: Application
    Filed: December 14, 2017
    Publication date: April 19, 2018
    Inventors: Ching-Yeh CHEN, Chih-Ming FU, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
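The merge mechanism in this abstract amounts to a small parsing loop: each FU either carries its own filter parameters or a merge index pointing at an FU whose parameters it reuses. A sketch with decoded syntax elements modeled as dicts (field names are illustrative, and merging is simplified to an index into already-parsed FUs rather than a left/up neighbour):

```python
def parse_fu_filters(syntax):
    """Resolve per-FU filter parameters from merge flags and indices.

    `syntax` is a list of dicts, one per filter unit (FU), standing in
    for decoded bitstream elements.
    """
    fu_params = []
    for elem in syntax:
        if elem["merge_flag"]:
            # Merged: reuse the parameters of the referenced FU.
            fu_params.append(fu_params[elem["merge_index"]])
        else:
            # Not merged: the parameters are read from the bitstream.
            fu_params.append(elem["filter_params"])
    return fu_params
```

Merged FUs thus cost only a flag and an index in the bitstream instead of a full parameter set.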
  • Patent number: 9942571
    Abstract: A method and apparatus for sharing context among different SAO syntax elements for a video coder are disclosed. Embodiments of the present invention apply CABAC coding to multiple SAO syntax elements according to a joint context model, wherein the multiple SAO syntax elements share the joint context. The multiple SAO syntax elements may correspond to SAO merge left flag and SAO merge up flag. The multiple SAO syntax elements may correspond to SAO merge left flags or merge up flags associated with different color components. The joint context model can be derived based on joint statistics of the multiple SAO syntax elements. Embodiments of the present invention code the SAO type index using truncated unary binarization, using CABAC with only one context, or using CABAC with context mode for the first bin associated with the SAO type index and with bypass mode for any remaining bin.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: April 10, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Yu-Wen Huang, Chih-Wei Hsu, Shaw-Min Lei
  • Patent number: 9924181
    Abstract: A method and apparatus for inter-layer prediction for scalable video coding are disclosed. Embodiments of the present invention utilize weighted prediction for scalable coding. The weighted prediction is based on the predicted texture data and the inter-layer Intra prediction data derived from BL reconstructed data. The inter-layer Intra prediction data corresponds to the BL reconstructed data or up-sampled BL reconstructed data. The predicted texture data corresponds to spatial Intra prediction data or motion-compensated prediction data based on the second EL video data in the same layer as the current EL picture. Embodiments of the present invention also utilize the reference picture list including an inter-layer reference picture (ILRP) corresponding to BL reconstructed texture frame or up-sampled BL reconstructed texture frame for Inter prediction of EL video data. The motion vector is limited to a range around (0,0) when the ILRP is selected as a reference picture.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: March 20, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Yu-Wen Huang, Ching-Yeh Chen, Chia-Yang Tsai, Chih-Ming Fu, Shih-Ta Hsiang
  • Patent number: 9918068
    Abstract: A method and apparatus for texture image compression in a 3D video coding system are disclosed. Embodiments according to the present invention derive depth information related to a depth map associated with a texture image and then process the texture image based on the depth information derived. The invention can be applied to the encoder side as well as the decoder side. The encoding order or decoding order for the depth maps and the texture images can be based on block-wise interleaving or picture-wise interleaving. One aspect of the present invention is related to partitioning of the texture image based on depth information of the depth map. Another aspect of the present invention is related to motion vector or motion vector predictor processing based on the depth information.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: March 13, 2018
    Assignee: MEDIATEK INC.
    Inventors: Yu-Lin Chang, Shih-Ta Hsiang, Chi-Ling Wu, Chih-Ming Fu, Chia-Ping Chen, Yu-Pao Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9877019
    Abstract: Implementations of the invention are provided in methods for filter-unit based in-loop filtering in a video decoder and encoder. In one implementation, filter parameters are selected from a filter parameter set for each filter based on a filter index. In another implementation, the picture is partitioned into filter units according to a filter unit size, which can be either a default size or another size. When another size is selected, the filter unit size may be conveyed using direct size information or ratio information. In another implementation, a merge flag and a merge index are used to convey filter unit merge information. A method for filter-unit based in-loop filtering in a video encoder for color video is disclosed. In one embodiment, the method incorporates filter syntax in the video bitstream by interleaving the color-component filter syntax for the FUs.
    Type: Grant
    Filed: December 31, 2011
    Date of Patent: January 23, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Ching-Yeh Chen, Chih-Ming Fu, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9872022
    Abstract: Implementations of sample adaptive offset (SAO) processing a reconstructed picture in an image or video coding system are described. In one example implementation, a method may receive input data associated with the reconstructed picture. The method may also perform multiple stages of SAO filtering operations on a SAO processing unit of at least a portion of the reconstructed picture. Information related to a SAO parameter set that signals one or more SAO types, one or more SAO subtypes, one or more SAO offset values, or a combination thereof, used by the multiple stages of SAO filtering operations are encoded or decoded.
    Type: Grant
    Filed: December 30, 2014
    Date of Patent: January 16, 2018
    Assignee: MEDIATEK INC.
    Inventors: Shih-Ta Hsiang, Chih-Ming Fu
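As background for the SAO entries in this listing: HEVC-style SAO edge offset classifies each pixel against its two neighbours along a chosen direction and adds a signaled per-category offset. A 1-D sketch of the classification and filtering (real SAO operates on 2-D blocks along one of four directions, with offsets decoded from the bitstream):

```python
def sign(x):
    """Return -1, 0, or 1 according to the sign of x."""
    return (x > 0) - (x < 0)


def sao_edge_category(left, cur, right):
    """Edge-offset category for a pixel given its two neighbours along
    the chosen direction (0 means no offset is applied)."""
    s = sign(cur - left) + sign(cur - right)
    return {-2: 1, -1: 2, 1: 3, 2: 4}.get(s, 0)


def sao_filter_row(row, offsets):
    """Apply edge offsets to the interior pixels of a 1-D row.
    `offsets` maps categories 1..4 to signed offset values."""
    out = list(row)
    for i in range(1, len(row) - 1):
        cat = sao_edge_category(row[i - 1], row[i], row[i + 1])
        if cat:
            out[i] += offsets[cat]
    return out
```

Local minima (category 1) typically receive positive offsets and local maxima (category 4) negative ones, smoothing ringing artifacts around edges.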
  • Patent number: 9872015
    Abstract: Video decoding and encoding with in-loop processing of reconstructed video are disclosed. At the decoder side, a flag is received from the video bitstream and according to the flag, information associated with in-loop filter parameters is received either from a data payload in the video bitstream to be shared by two or more coding blocks or individual coding block data in the video bitstream. At the encoder side, information associated with the in-loop filter parameters is incorporated either in a data payload in a video bitstream to be shared by two or more coding blocks or interleaved with individual coding block data in the video bitstream according to a flag. The data payload in the video bitstream is in a picture level, Adaptation Parameter Set (APS), or a slice header.
    Type: Grant
    Filed: April 20, 2012
    Date of Patent: January 16, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9860528
    Abstract: A method and apparatus for scalable video coding are disclosed, wherein the video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. According to embodiments of the present invention, information from the base layer is exploited for coding the enhancement layer. The information coding for the enhancement layer includes CU structure, motion vector predictor (MVP) information, MVP/merge candidates, intra prediction mode, residual quadtree information, texture information, residual information, context adaptive entropy coding, Adaptive Loop Filter (ALF), Sample Adaptive Offset (SAO), and deblocking filter.
    Type: Grant
    Filed: May 31, 2012
    Date of Patent: January 2, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Ching-Yeh Chen, Chih-Ming Fu, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9860530
    Abstract: A method and apparatus for loop processing of reconstructed video in an encoder system are disclosed. The loop processing comprises an in-loop filter and one or more adaptive filters. The filter parameters for the adaptive filter are derived from the pre-in-loop video data so that the adaptive filter processing can be applied to the in-loop processed video data without the need to wait for completion of the in-loop filter processing for a picture or an image unit. In another embodiment, two adaptive filters derive their respective adaptive filter parameters based on the same pre-in-loop video data. In yet another embodiment, a moving window is used for an image-unit-based coding system incorporating an in-loop filter and one or more adaptive filters. The in-loop filter and the adaptive filter are applied to a moving window of pre-in-loop video data comprising one or more sub-regions from corresponding one or more image units.
    Type: Grant
    Filed: October 11, 2012
    Date of Patent: January 2, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Kun-Bin Lee, Yi-Hau Chen, Chi-Cheng Ju, Yu-Wen Huang, Shaw-Min Lei, Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Chih-Wei Hsu
  • Patent number: 9826911
    Abstract: A wearable device is provided. The wearable device includes a photon sensor, a processor, and an output unit. The photon sensor senses light reflected from a specific region and transforms the sensed light to a plurality of electric-signal components. The processor receives the electric-signal components sensed within a period to form a dimensional sensing signal. The processor extracts a feature of a waveform of the dimensional sensing signal and determines whether a predetermined heart condition of the object is present according to the feature of the waveform of the dimensional sensing signal to generate a determination signal. The output unit is coupled to the processor. The output unit receives the determination signal and generates an alarm signal according to the determination signal.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: November 28, 2017
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Shu-Yu Hsu, Po-Wen Ku
  • Patent number: 9813738
    Abstract: A method and apparatus for processing in-loop reconstructed video using an in-loop filter are disclosed. In the recent HEVC development, adaptive loop filtering (ALF) is being adopted to process in-loop reconstructed video data, where ALF can be selectively turned ON or OFF for each block in a frame or a slice. An advanced ALF is further disclosed that allows a choice of multiple filter sets to be applied to the reconstructed video data adaptively. In the present disclosure, pixels of the in-loop reconstructed video data are divided into a plurality of to-be-filtered regions, and an in-loop filter from a filter set is determined for each to-be-filtered region based on a rate-distortion optimization procedure. According to one embodiment of the present invention, computation of the cost function associated with the rate-distortion optimization procedure is related to correlation values associated with the original video data and the in-loop reconstructed video data.
    Type: Grant
    Filed: August 24, 2011
    Date of Patent: November 7, 2017
    Assignee: HFI Innovation Inc.
    Inventors: Chia-Yang Tsai, Chih-Ming Fu, Ching-Yeh Chen, Yu-Wen Huang, Shaw-Min Lei
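The rate-distortion optimization procedure mentioned above picks, per region, the filter minimizing the cost J = D + λ·R, where D is the distortion after filtering, R the bit cost of signaling the filter choice, and λ the Lagrange multiplier. A generic sketch of that selection loop (the distortion/rate callbacks and the λ value are placeholders, not the patent's specific cost computation):

```python
def select_filter(filters, distortion_fn, rate_fn, lmbda):
    """Pick the filter minimizing the rate-distortion cost
    J = D + lambda * R for one to-be-filtered region."""
    best, best_cost = None, float("inf")
    for f in filters:
        cost = distortion_fn(f) + lmbda * rate_fn(f)
        if cost < best_cost:
            best, best_cost = f, cost
    return best, best_cost
```

Raising λ shifts the choice toward cheaper-to-signal filters; lowering it favours whichever filter reduces distortion the most.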
  • Patent number: 9762925
    Abstract: A video encoder that utilizes adaptive interpolation filtering for coding video data includes a prediction unit, a reconstruction unit, a reference picture buffer, a filter parameter estimator for estimating filter parameters according to the original video data and the predicted samples, and an adaptive interpolation filter for utilizing the stored filter parameters to perform filtering on the reconstructed video data.
    Type: Grant
    Filed: March 17, 2009
    Date of Patent: September 12, 2017
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Xun Guo, Kai Zhang, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9729897
    Abstract: The invention provides a motion prediction method. First, a plurality of candidate units corresponding to a current unit of a current frame is determined. A plurality of motion vectors of the candidate units is then obtained. A plurality of scaling factors of the candidate units is then calculated according to a plurality of respective temporal distances depending on a plurality of reference frames of the motion vectors. The motion vectors of the candidate units are then scaled according to the scaling factors to obtain a plurality of scaled motion vectors. The scaled motion vectors are ranked, and a subset of the highest-ranking motion vectors is identified to be included in a candidate set. Finally, a motion vector predictor for motion prediction of the current unit is selected from the candidate units.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: August 8, 2017
    Assignee: HFI INNOVATION INC.
    Inventors: Yu-Pao Tsai, Chih-Ming Fu, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
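The scaling step in this last abstract multiplies each candidate motion vector by the ratio of temporal distances, so a vector measured against a distant reference frame is shrunk (or stretched) to match the current unit's reference distance. A floating-point sketch (real codecs such as HEVC perform this with clipped fixed-point arithmetic):

```python
def scale_mv(mv, dist_candidate, dist_current):
    """Scale candidate motion vector `mv` (x, y) by the ratio of the
    current temporal distance to the candidate's temporal distance."""
    factor = dist_current / dist_candidate
    return (round(mv[0] * factor), round(mv[1] * factor))
```

For example, a candidate vector of (4, -8) measured over a two-frame distance becomes (2, -4) when the current unit's reference is one frame away.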