Patents by Inventor Chih-Ming Fu

Chih-Ming Fu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9998737
    Abstract: A method and apparatus for processing of coded video using in-loop processing are disclosed. The method operates by receiving input data to said in-loop processing, wherein the input data corresponds to reconstructed coding units of the picture; configuring the input data into multiple filter units; selecting a filter from a candidate filter set comprising at least two candidate filters associated with said in-loop processing for one of said multiple filter units; applying said in-loop processing to said one of said multiple filter units using the selected filter to generate a processed filter unit, wherein when said one of said multiple filter units comprises at least two reconstructed coding units, the selected filter is applied to all of said at least two reconstructed coding units; and providing processed video data comprising the processed filter unit. The apparatus provides circuits to carry out the operations of the method.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: June 12, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
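    Illustrative sketch (editorial, not part of the patent record): a minimal Python model of per-filter-unit selection, assuming a toy 3x3 candidate filter set and an SSE-based selection rule; the filter shapes and the selection criterion are assumptions, not the patented design.
      import numpy as np

      # Toy candidate filter set standing in for the in-loop candidates (e.g. ALF variants).
      CANDIDATE_FILTERS = {
          "identity": np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float),
          "smooth": np.full((3, 3), 1.0 / 9.0),
      }

      def filter_2d(block, kernel):
          """Apply a 3x3 kernel with edge replication (simple stand-in for an in-loop filter)."""
          padded = np.pad(block, 1, mode="edge")
          out = np.zeros_like(block, dtype=float)
          for dy in range(3):
              for dx in range(3):
                  out += kernel[dy, dx] * padded[dy:dy + block.shape[0], dx:dx + block.shape[1]]
          return out

      def process_filter_unit(recon_fu, orig_fu):
          """Select the candidate with the lowest SSE and apply it to the whole filter unit,
          i.e. to every reconstructed coding unit the filter unit covers."""
          best = min(CANDIDATE_FILTERS,
                     key=lambda n: np.sum((filter_2d(recon_fu, CANDIDATE_FILTERS[n]) - orig_fu) ** 2))
          return best, filter_2d(recon_fu, CANDIDATE_FILTERS[best])

      rng = np.random.default_rng(0)
      orig = rng.integers(0, 256, size=(16, 16)).astype(float)   # original filter unit
      recon = orig + rng.normal(0.0, 4.0, size=orig.shape)       # reconstructed FU with coding noise
      print("selected filter:", process_filter_unit(recon, orig)[0])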
  • Publication number: 20180109785
    Abstract: In one embodiment, a method receives a video bitstream corresponding to compressed video, wherein Filter Unit (FU) based in-loop filtering is allowed in a reconstruction loop associated with the compressed video. The method then derives reconstructed video from the video bitstream, wherein the reconstructed video is partitioned into FUs and derives a merge flag from the video bitstream for each of the FUs, wherein the merge flag indicates whether said each of the FUs is merged with a neighboring FU. The method further receives a merge index from the video bitstream if the merge flag indicates that said each of the FUs is merged, and receives the filter parameters from the video bitstream if the merge flag indicates that said each of the FUs is not merged. Finally, the method applies the in-loop filtering to said each of the FUs using the filter parameters.
    Type: Application
    Filed: December 14, 2017
    Publication date: April 19, 2018
    Inventors: Ching-Yeh CHEN, Chih-Ming FU, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
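    Illustrative sketch (editorial): a decoder-side walk-through of the merge signalling described above, in which each FU either copies filter parameters from a left or above neighbour or reads its own; the toy reader, the two-neighbour merge-index convention, and the four-offset parameter layout are assumptions.
      from dataclasses import dataclass

      @dataclass
      class FilterParams:
          offsets: tuple                                   # e.g. SAO-style offsets for one FU

      class ToyReader:
          """Stand-in for an entropy decoder: pops pre-parsed symbols from a list."""
          def __init__(self, symbols): self.symbols = list(symbols)
          def read_flag(self): return self.symbols.pop(0)
          def read_value(self): return self.symbols.pop(0)

      def decode_fu_filter_params(reader, fu_grid_width, num_fus):
          """Return one FilterParams per FU in raster order, honouring merge flags."""
          params = [None] * num_fus
          for fu in range(num_fus):
              if reader.read_flag():                       # merge flag
                  merge_index = reader.read_value()        # 0 = left, 1 = above (assumed convention)
                  neighbour = fu - 1 if merge_index == 0 else fu - fu_grid_width
                  params[fu] = params[neighbour]           # reuse the neighbour's parameters
              else:
                  params[fu] = FilterParams(tuple(reader.read_value() for _ in range(4)))
          return params

      # 2x2 FU grid: FU0 explicit, FU1 merges left, FU2 explicit, FU3 merges above.
      symbols = [0, 1, 2, 3, 4,  1, 0,  0, 5, 6, 7, 8,  1, 1]
      print(decode_fu_filter_params(ToyReader(symbols), fu_grid_width=2, num_fus=4))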
  • Patent number: 9942571
    Abstract: A method and apparatus for sharing context among different SAO syntax elements for a video coder are disclosed. Embodiments of the present invention apply CABAC coding to multiple SAO syntax elements according to a joint context model, wherein the multiple SAO syntax elements share the joint context. The multiple SAO syntax elements may correspond to SAO merge left flag and SAO merge up flag. The multiple SAO syntax elements may correspond to SAO merge left flags or merge up flags associated with different color components. The joint context model can be derived based on joint statistics of the multiple SAO syntax elements. Embodiments of the present invention code the SAO type index using truncated unary binarization, using CABAC with only one context, or using CABAC with context mode for the first bin associated with the SAO type index and with bypass mode for any remaining bin.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: April 10, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Yu-Wen Huang, Chih-Wei Hsu, Shaw-Min Lei
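    Illustrative sketch (editorial): truncated unary binarization, the scheme named above for coding the SAO type index; the cMax value used in the demo is an assumption, not the HEVC table.
      def truncated_unary(value, c_max):
          """Binarize `value` as `value` ones followed by a terminating zero;
          the terminating zero is dropped when value == c_max."""
          bins = [1] * value
          if value < c_max:
              bins.append(0)
          return bins

      for v in range(3):
          print(v, truncated_unary(v, c_max=2))   # 0 -> [0], 1 -> [1, 0], 2 -> [1, 1]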
  • Patent number: 9924181
    Abstract: A method and apparatus for inter-layer prediction for scalable video coding are disclosed. Embodiments of the present invention utilize weighted prediction for scalable coding. The weighted prediction is based on the predicted texture data and the inter-layer Intra prediction data derived from BL reconstructed data. The inter-layer Intra prediction data corresponds to the BL reconstructed data or up-sampled BL reconstructed data. The predicted texture data corresponds to spatial Intra prediction data or motion-compensated prediction data based on the second EL video data in the same layer as the current EL picture. Embodiments of the present invention also utilize the reference picture list including an inter-layer reference picture (ILRP) corresponding to BL reconstructed texture frame or up-sampled BL reconstructed texture frame for Inter prediction of EL video data. The motion vector is limited to a range around (0,0) when the ILRP is selected as a reference picture.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: March 20, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Yu-Wen Huang, Ching-Yeh Chen, Chia-Yang Tsai, Chih-Ming Fu, Shih-Ta Hsiang
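    Illustrative sketch (editorial): the weighted-prediction idea above, blending an EL predictor with up-sampled BL reconstruction; the nearest-neighbour 2x up-sampling and the fixed weight are assumptions.
      import numpy as np

      def upsample_2x(bl_block):
          """Nearest-neighbour 2x up-sampling of a base-layer reconstructed block."""
          return np.repeat(np.repeat(bl_block, 2, axis=0), 2, axis=1)

      def weighted_prediction(el_pred, bl_recon, weight=0.5):
          """Blend EL predicted texture with inter-layer Intra prediction data (up-sampled BL)."""
          return weight * el_pred + (1.0 - weight) * upsample_2x(bl_recon)

      bl = np.arange(16, dtype=float).reshape(4, 4)    # base-layer reconstruction
      el_pred = np.full((8, 8), 7.0)                   # spatial-Intra or MC prediction in the EL
      print(weighted_prediction(el_pred, bl).shape)    # (8, 8)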
  • Patent number: 9918068
    Abstract: A method and apparatus for texture image compression in a 3D video coding system are disclosed. Embodiments according to the present invention derive depth information related to a depth map associated with a texture image and then process the texture image based on the depth information derived. The invention can be applied to the encoder side as well as the decoder side. The encoding order or decoding order for the depth maps and the texture images can be based on block-wise interleaving or picture-wise interleaving. One aspect of the present invention is related to partitioning of the texture image based on depth information of the depth map. Another aspect of the present invention is related to motion vector or motion vector predictor processing based on the depth information.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: March 13, 2018
    Assignee: MEDIATEK INC.
    Inventors: Yu-Lin Chang, Shih-Ta Hsiang, Chi-Ling Wu, Chih-Ming Fu, Chia-Ping Chen, Yu-Pao Tsai, Yu-Wen Huang, Shaw-Min Lei
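    Illustrative sketch (editorial): two depth-guided decisions in the spirit of the abstract above; the variance threshold and the pinhole disparity relation are stand-ins, not the patented rules.
      import numpy as np

      def should_split_texture_block(depth_block, var_threshold=100.0):
          """Split the co-located texture block when the depth samples vary strongly
          (a large variance hints at an object boundary crossing the block)."""
          return float(np.var(depth_block)) > var_threshold

      def disparity_from_depth(depth, focal_length, baseline):
          """Map a representative depth value to a horizontal disparity usable as an
          inter-view motion vector predictor."""
          return focal_length * baseline / depth

      flat = np.full((8, 8), 120.0)
      edge = np.hstack([np.full((8, 4), 40.0), np.full((8, 4), 200.0)])
      print(should_split_texture_block(flat), should_split_texture_block(edge))    # False True
      print(disparity_from_depth(depth=50.0, focal_length=1000.0, baseline=0.1))   # 2.0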
  • Patent number: 9877019
    Abstract: Implementations of the invention are provided in methods for filter-unit based in-loop filtering in a video decoder and encoder. In one implementation, filter parameters are selected from a filter parameter set for each filter based on a filter index. In another implementation, the picture is partitioned into filter units according to a filter unit size, which can be selected between a default size and another size. When another size is selected, the filter unit size may be conveyed using direct size information or ratio information. In another implementation, a merge flag and a merge index are used to convey filter unit merge information. A method for filter-unit based in-loop filtering in a video encoder for color video is disclosed. In one embodiment, the method incorporates filter syntax in the video bitstream by interleaving the color-component filter syntax for the FUs.
    Type: Grant
    Filed: December 31, 2011
    Date of Patent: January 23, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Ching-Yeh Chen, Chih-Ming Fu, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9872022
    Abstract: Implementations of sample adaptive offset (SAO) processing of a reconstructed picture in an image or video coding system are described. In one example implementation, a method may receive input data associated with the reconstructed picture. The method may also perform multiple stages of SAO filtering operations on a SAO processing unit of at least a portion of the reconstructed picture. Information related to a SAO parameter set that signals one or more SAO types, one or more SAO subtypes, one or more SAO offset values, or a combination thereof, used by the multiple stages of SAO filtering operations is encoded or decoded.
    Type: Grant
    Filed: December 30, 2014
    Date of Patent: January 16, 2018
    Assignee: MEDIATEK INC.
    Inventors: Shih-Ta Hsiang, Chih-Ming Fu
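    Illustrative sketch (editorial): applying SAO-style offset stages in sequence to one processing unit; the band-offset rule (32-sample bands for 8-bit video) is standard-flavoured, but the two-stage parameters are invented for the demo.
      import numpy as np

      def sao_band_offset(samples, band_offsets, band_shift=5):
          """Add a per-band offset; band index = sample >> band_shift (32-wide bands for 8-bit)."""
          out = samples.astype(int)
          bands = out >> band_shift
          for band, offset in band_offsets.items():
              out[bands == band] += offset
          return np.clip(out, 0, 255)

      def multi_stage_sao(samples, stages):
          """Apply each stage's offsets, in order, to the same SAO processing unit."""
          for band_offsets in stages:
              samples = sao_band_offset(samples, band_offsets)
          return samples

      pixels = np.array([12, 40, 100, 200])
      print(multi_stage_sao(pixels, stages=[{0: 3, 1: -2}, {3: 1, 6: -1}]))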
  • Patent number: 9872015
    Abstract: Video decoding and encoding with in-loop processing of reconstructed video are disclosed. At the decoder side, a flag is received from the video bitstream and, according to the flag, information associated with in-loop filter parameters is received either from a data payload in the video bitstream to be shared by two or more coding blocks or from individual coding block data in the video bitstream. At the encoder side, information associated with the in-loop filter parameters is incorporated either in a data payload in the video bitstream to be shared by two or more coding blocks or interleaved with individual coding block data in the video bitstream according to a flag. The data payload in the video bitstream is carried at the picture level, in an Adaptation Parameter Set (APS), or in a slice header.
    Type: Grant
    Filed: April 20, 2012
    Date of Patent: January 16, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
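    Illustrative sketch (editorial): the flag-controlled location of in-loop filter parameters described above, with a toy parser; the class names and the single-shared-set assumption are editorial.
      class ToyParamReader:
          """Stand-in for a bitstream parser: hands out pre-decoded parameter sets in order."""
          def __init__(self, param_sets): self._queue = list(param_sets)
          def read_params(self): return self._queue.pop(0)

      def collect_filter_params(reader, num_blocks, shared_payload_flag):
          """Per-block in-loop filter parameters: read once from a shared payload
          (e.g. APS, picture level, or slice header) or interleaved with each coding block."""
          if shared_payload_flag:
              shared = reader.read_params()
              return [shared] * num_blocks
          return [reader.read_params() for _ in range(num_blocks)]

      reader = ToyParamReader([{"offsets": (1, 0, -1, 2)}])
      print(collect_filter_params(reader, num_blocks=3, shared_payload_flag=True))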
  • Patent number: 9860530
    Abstract: A method and apparatus for loop processing of reconstructed video in an encoder system are disclosed. The loop processing comprises an in-loop filter and one or more adaptive filters. The filter parameters for the adaptive filter are derived from the pre-in-loop video data so that the adaptive filter processing can be applied to the in-loop processed video data without waiting for completion of the in-loop filter processing for a picture or an image unit. In another embodiment, two adaptive filters derive their respective adaptive filter parameters based on the same pre-in-loop video data. In yet another embodiment, a moving window is used for an image-unit-based coding system incorporating an in-loop filter and one or more adaptive filters. The in-loop filter and the adaptive filter are applied to a moving window of pre-in-loop video data comprising one or more sub-regions from one or more corresponding image units.
    Type: Grant
    Filed: October 11, 2012
    Date of Patent: January 2, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Kun-Bin Lee, Yi-Hau Chen, Chi-Cheng Ju, Yu-Wen Huang, Shaw-Min Lei, Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Chih-Wei Hsu
  • Patent number: 9860528
    Abstract: A method and apparatus for scalable video coding are disclosed, wherein the video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. According to embodiments of the present invention, information from the base layer is exploited for coding the enhancement layer. The information coding for the enhancement layer includes CU structure, motion vector predictor (MVP) information, MVP/merge candidates, intra prediction mode, residual quadtree information, texture information, residual information, context adaptive entropy coding, Adaptive Loop Filter (ALF), Sample Adaptive Offset (SAO), and deblocking filter.
    Type: Grant
    Filed: May 31, 2012
    Date of Patent: January 2, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Ching-Yeh Chen, Chih-Ming Fu, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9826911
    Abstract: A wearable device is provided. The wearable device includes a photon sensor, a processor, and an output unit. The photon sensor senses light reflected from a specific region and transforms the sensed light to a plurality of electric-signal components. The processor receives the electric-signal components sensed within a period to form a dimensional sensing signal. The processor extracts a feature of a waveform of the dimensional sensing signal and determines whether a predetermined heart condition of the object is present according to the feature of the waveform of the dimensional sensing signal to generate a determination signal. The output unit is coupled to the processor. The output unit receives the determination signal and generates an alarm signal according to the determination signal.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: November 28, 2017
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Shu-Yu Hsu, Po-Wen Ku
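    Illustrative sketch (editorial): one way to turn a one-dimensional sensing signal into an alarm decision, using inter-peak interval variability as the waveform feature; the feature and the threshold are assumptions, not the patented criterion.
      import numpy as np

      def detect_peaks(signal, min_height):
          """Indices of simple local maxima above min_height."""
          return [i for i in range(1, len(signal) - 1)
                  if signal[i] > min_height and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]

      def irregular_rhythm(signal, sample_rate_hz, min_height=0.5, cv_threshold=0.2):
          """Flag the waveform when inter-peak intervals vary too much (coefficient of variation)."""
          peaks = detect_peaks(signal, min_height)
          if len(peaks) < 3:
              return False
          intervals = np.diff(peaks) / sample_rate_hz
          return float(np.std(intervals) / np.mean(intervals)) > cv_threshold

      t = np.arange(0, 10, 0.01)                           # 100 Hz samples for 10 s
      pulse = np.sin(2 * np.pi * 1.2 * t)                  # regular ~72 bpm pulse-like signal
      print(irregular_rhythm(pulse, sample_rate_hz=100))   # False (regular rhythm)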
  • Patent number: 9813738
    Abstract: A method and apparatus for processing in-loop reconstructed video using an in-loop filter are disclosed. In the recent HEVC development, adaptive loop filtering (ALF) is being adopted to process in-loop reconstructed video data, where ALF can be selectively turned ON or OFF for each block in a frame or a slice. An advanced ALF is also disclosed that allows a choice of multiple filter sets that can be applied to the reconstructed video data adaptively. In the present disclosure, pixels of the in-loop reconstructed video data are divided into a plurality of to-be-filtered regions, and an in-loop filter from a filter set is determined for each to-be-filtered region based on a rate-distortion optimization procedure. According to one embodiment of the present invention, computation of the cost function associated with the rate-distortion optimization procedure is related to correlation values associated with the original video data and the in-loop reconstructed video data.
    Type: Grant
    Filed: August 24, 2011
    Date of Patent: November 7, 2017
    Assignee: HFI Innovation Inc.
    Inventors: Chia-Yang Tsai, Chih-Ming Fu, Ching-Yeh Chen, Yu-Wen Huang, Shaw-Min Lei
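    Illustrative sketch (editorial): a correlation-based cost computation in the spirit of the abstract above, where the SSE of a candidate filter over a to-be-filtered region follows from an auto-correlation matrix R and a cross-correlation vector p without re-filtering pixels; the lambda and rate values are assumptions.
      import numpy as np

      def sse_term(w, R, p):
          """SSE of filtering with taps w equals (const - 2 w.p + w^T R w); the constant
          (sum of squared original samples) is the same for every candidate and is dropped."""
          return float(w @ R @ w - 2.0 * (w @ p))

      def rd_cost(w, R, p, rate_bits, lam=10.0):
          """Rate-distortion cost used to decide the filter for one to-be-filtered region."""
          return sse_term(w, R, p) + lam * rate_bits

      R = np.array([[4.0, 1.0], [1.0, 3.0]])   # auto-correlation of reconstructed samples
      p = np.array([2.0, 1.5])                 # cross-correlation with the original samples
      w = np.linalg.solve(R, p)                # Wiener taps minimising the SSE term
      print(rd_cost(w, R, p, rate_bits=12))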
  • Patent number: 9762925
    Abstract: A video encoder that utilizes adaptive interpolation filtering for coding video data includes a prediction unit, a reconstruction unit, a reference picture buffer, a filter parameter estimator for estimating filter parameters according to the original video data and the predicted samples, and an adaptive interpolation filter for utilizing the stored filter parameters to perform filtering on the reconstructed video data.
    Type: Grant
    Filed: March 17, 2009
    Date of Patent: September 12, 2017
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Xun Guo, Kai Zhang, Yu-Wen Huang, Shaw-Min Lei
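    Illustrative sketch (editorial): the filter-parameter estimation step, fitting interpolation taps by least squares so that filtering the predicted samples best matches the original samples; the 1-D, 3-tap formulation is purely for illustration.
      import numpy as np

      def estimate_filter_taps(original, predicted, num_taps=3):
          """Solve min_w || original - (sliding window of predicted) . w || over interior samples."""
          half = num_taps // 2
          A = np.array([predicted[i - half:i + half + 1] for i in range(half, len(predicted) - half)])
          b = original[half:len(original) - half]
          taps, *_ = np.linalg.lstsq(A, b, rcond=None)
          return taps

      rng = np.random.default_rng(1)
      pred = rng.normal(size=200)
      orig = np.convolve(pred, [0.25, 0.5, 0.25], mode="same")   # data a symmetric filter explains
      print(estimate_filter_taps(orig, pred))                    # close to [0.25, 0.5, 0.25]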
  • Patent number: 9729897
    Abstract: The invention provides a motion prediction method. First, a plurality of candidate units corresponding to a current unit of a current frame is determined. A plurality of motion vectors of the candidate units is then obtained. A plurality of scaling factors of the candidate units is then calculated according to a plurality of respective temporal distances depending on a plurality of reference frames of the motion vectors. The motion vectors of the candidate units are then scaled according to the scaling factors to obtain a plurality of scaled motion vectors. The scaled motion vectors are ranked, and a subset of the highest-ranking motion vectors is identified to be included in a candidate set. Finally, a motion vector predictor for motion prediction of the current unit is selected from the candidate units.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: August 8, 2017
    Assignee: HFI INNOVATION INC.
    Inventors: Yu-Pao Tsai, Chih-Ming Fu, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
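    Illustrative sketch (editorial): scaling candidate motion vectors by the ratio of temporal distances and keeping a small ranked subset; the plain float rounding and the magnitude-based ranking key are assumptions, since the abstract does not specify the ranking criterion.
      def scale_mv(mv, td_current, td_candidate):
          """Scale (mvx, mvy) from the candidate's temporal distance to the current one."""
          factor = td_current / td_candidate
          return (round(mv[0] * factor), round(mv[1] * factor))

      def build_candidate_set(candidates, td_current, top_n=2):
          """candidates: list of (mv, temporal_distance). Scale, rank, keep the best few."""
          scaled = [scale_mv(mv, td_current, td) for mv, td in candidates]
          return sorted(scaled, key=lambda v: v[0] * v[0] + v[1] * v[1])[:top_n]

      print(build_candidate_set([((8, -4), 2), ((20, 10), 4), ((2, 2), 1)], td_current=2))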
  • Publication number: 20170156592
    Abstract: A healthcare system is provided. The healthcare system includes a data server, an algorithm server, a display device, and a communication network. The data server stores a plurality of physiological signals. The algorithm server receives the plurality of physiological signals from the data server. The algorithm server applies a plurality of algorithms to the plurality of physiological signals to obtain at least one feature of the plurality of physiological signals and generates at least one label according to the at least one feature. The display device displays the at least one label. The communication network communicatively connects the data server, the algorithm server, and the display device for providing signal transmission paths therebetween.
    Type: Application
    Filed: November 17, 2016
    Publication date: June 8, 2017
    Inventor: Chih-Ming FU
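    Illustrative sketch (editorial): the data flow of the system described above, in which a data store hands physiological signals to an algorithm stage that extracts a feature, derives a label from that feature, and passes the label to a display stage; all class names and the threshold are invented.
      import statistics

      class DataServer:
          def __init__(self, signals): self._signals = signals
          def get_signals(self): return self._signals

      class AlgorithmServer:
          def label(self, signals):
              """Feature: mean heart rate across the stored signals; label derived from the feature."""
              mean_hr = statistics.mean(sample for signal in signals for sample in signal)
              return "elevated" if mean_hr > 100 else "normal"

      class DisplayDevice:
          def show(self, label): print("label:", label)

      data = DataServer([[72, 75, 71], [68, 70, 74]])
      DisplayDevice().show(AlgorithmServer().label(data.get_signals()))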
  • Publication number: 20170163982
    Abstract: A method and apparatus for processing of coded video using in-loop processing are disclosed. The method operates by receiving input data to said in-loop processing, wherein the input data corresponds to reconstructed coding units of the picture; configuring the input data into multiple filter units; selecting a filter from a candidate filter set comprising at least two candidate filters associated with said in-loop processing for one of said multiple filter units; applying said in-loop processing to said one of said multiple filter units using the selected filter to generate a processed filter unit, wherein when said one of said multiple filter units comprises at least two reconstructed coding units, the selected filter is applied to all of said at least two reconstructed coding units; and providing processed video data comprising the processed filter unit. The apparatus provides circuits to carry out the operations of the method.
    Type: Application
    Filed: February 23, 2017
    Publication date: June 8, 2017
    Inventors: Chih-Ming FU, Ching-Yeh CHEN, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
  • Patent number: 9641863
    Abstract: An apparatus and method for sample adaptive offset (SAO) to restore intensity shift of processed video data are disclosed. At the encoder side, the processed video data corresponding to reconstructed video data, deblocked-reconstructed video data, or adaptive loop filtered and deblocked-reconstructed video data are partitioned into regions smaller than a picture. The region partition information is signaled in the video bitstream at a position before the intensity offset values syntax. At the decoder side, the processed video data are partitioned into regions according to the partition information parsed from the bitstream at a position before the intensity offset values syntax. Region-based SAO is applied to each region based on the intensity offset for the category of the region-based SAO type selected.
    Type: Grant
    Filed: January 16, 2015
    Date of Patent: May 2, 2017
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9615093
    Abstract: A method and apparatus receive input data for in-loop processing, where the input data corresponds to reconstructed or reconstructed-and-deblocked coding units of the picture. The input data is divided into multiple filter units and each filter unit includes one or more boundary-aligned reconstructed or reconstructed-and-deblocked coding units. A candidate filter is then selected from a candidate filter set for the in-loop processing. The candidate filter set comprises at least two candidate filters for said in-loop processing, which corresponds to adaptive loop filter (ALF), adaptive offset (AO), or adaptive clipping (AC). The in-loop processing is then applied to one of the filter units to generate a processed filter unit by applying the selected candidate filter to all boundary-aligned reconstructed or reconstructed-and-deblocked coding units in said one of the filter units.
    Type: Grant
    Filed: August 10, 2015
    Date of Patent: April 4, 2017
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Publication number: 20170041638
    Abstract: A method and apparatus for Sample Adaptive Offset (SAO) processing of video data in a video decoder are disclosed. In an embodiment, the method includes receiving a block of processed-reconstructed pixels associated with a picture from a media or a processor, wherein the block of processed-reconstructed pixels are decoded from a video bitstream; determining a SAO type index from the video bitstream, wherein the SAO type index is decoded according to truncated unary binarization, the SAO type index is decoded using CABAC (context-based adaptive binary arithmetic coding) with one context, or the SAO type index is decoded by CABAC using a context mode for a first bin associated with the SAO type index and using a bypass mode for any remaining bin associated with the SAO type index; and applying SAO processing to the block of processed-reconstructed pixels based on SAO information including the SAO type index.
    Type: Application
    Filed: October 19, 2016
    Publication date: February 9, 2017
    Inventors: Chih-Ming FU, Yu-Wen HUANG, Chih-Wei HSU, Shaw-Min LEI
  • Patent number: 9560362
    Abstract: A method and apparatus for a 3D video coding system are disclosed. Embodiments according to the present invention apply an SAO process (sample adaptive offset process) to at least one dependent-view image of the processed multi-view images if processed multi-view images are received. Embodiments according to the present invention also apply the SAO process to at least one dependent-view image of the processed multi-view images or at least one depth map of the processed multi-view depth maps if both processed multi-view images and processed multi-view depth maps are received. The SAO can be applied to each color component of the processed multi-view images or the processed multi-view depth maps. The SAO parameters associated with a target region in one dependent-view image or in one depth map corresponding to one view may be shared with, or predicted from, second SAO parameters associated with a source region corresponding to another view.
    Type: Grant
    Filed: December 14, 2012
    Date of Patent: January 31, 2017
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Yi-Wen Chen, Chih-Wei Hsu