Patents by Inventor Yu-Wen Huang

Yu-Wen Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160150096
    Abstract: A method for changing a setting of a mobile communication device is disclosed. The method includes receiving context information of the mobile communication device, changing the setting of the mobile communication device according to the context information and a user preference rule, and updating the user preference rule according to the context information and the changed setting.
    Type: Application
    Filed: May 19, 2015
    Publication date: May 26, 2016
    Inventors: Chia-Ping Chen, Yu-Wen Huang, Shaw-Min Lei
  • Publication number: 20160142706
    Abstract: A method for chroma intra prediction mode decoding includes decoding a chroma intra prediction mode for a current chroma block according to a codeword set corresponding to a chroma intra prediction mode set, wherein the codeword set comprises at least one codeword with a first length type and at least one codeword with a second length type. If a codeword is one of said at least one codeword with the first length type, the chroma intra prediction mode is decoded as a Luma-based chroma prediction Mode (LM) or a Direct Mode (DM). The method also includes determining the chroma intra prediction mode based on an intra prediction mode of a current luma block if the chroma intra prediction mode is the DM.
    Type: Application
    Filed: January 27, 2016
    Publication date: May 19, 2016
    Inventors: Tzu-Der Chuang, Ching-Yeh Chen, Yu-Wen Huang, Shan Liu, Zhi Zhou, Shaw-Min Lei
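The codeword-driven decoding this abstract describes can be sketched as follows. This is a minimal illustration, not the binarization actually claimed: the codeword table, bit lengths, and mode names are all assumptions.

```python
# Hypothetical codeword set with two length types: short codewords for the
# frequent LM/DM cases, longer ones for the remaining modes.
CODEWORD_TABLE = {
    "0":    "DM",         # first length type (1 bit)
    "10":   "LM",         # first length type (2 bits)
    "1100": "PLANAR",     # second length type (4 bits)
    "1101": "VERTICAL",
    "1110": "HORIZONTAL",
    "1111": "DC",
}

def decode_chroma_mode(bitstring, luma_mode):
    """Decode a chroma intra mode; DM inherits the current luma block's mode."""
    prefix = ""
    for bit in bitstring:
        prefix += bit
        if prefix in CODEWORD_TABLE:
            mode = CODEWORD_TABLE[prefix]
            # Direct Mode: the chroma mode is taken from the luma block.
            return luma_mode if mode == "DM" else mode
    raise ValueError("no codeword matched")

print(decode_chroma_mode("0", "VERTICAL"))   # DM -> inherits VERTICAL
print(decode_chroma_mode("1110", "DC"))      # HORIZONTAL
```

The short first-length-type codewords cover the LM and DM cases, and the Direct Mode simply reuses the co-located luma block's prediction mode.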
  • Publication number: 20160127747
    Abstract: A method of modified SAO (sample-adaptive offset) processing for a reconstructed picture in a video coding system to improve the performance is disclosed. In one example, a SAO-sign threshold is introduced to determine the sign of the difference between a current reconstructed pixel and a neighboring reconstructed pixel. A range of difference values greater than the negative SAO-sign threshold and smaller than the positive SAO-sign threshold is assigned to have a sign value of 0. In another example, the SAO-offset value is derived by multiplying the SAO-offset sign with a result from applying left shift by the SAO-bit-shift value to the absolute SAO-offset value. In yet another example, the absolute SAO-offset value is coded by truncated Rice (TR) codes and a maximum TR value is indicated by a syntax element.
    Type: Application
    Filed: July 15, 2014
    Publication date: May 5, 2016
    Inventors: Shih-Ta Hsiang, Yu-Wen Huang
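The sign-threshold and offset-derivation rules in this abstract reduce to two small formulas, sketched below. The function and parameter names are illustrative assumptions; the actual syntax elements and value ranges are defined by the claims.

```python
def sao_sign(diff, threshold):
    """Sign with a dead zone: differences strictly inside
    (-threshold, threshold) are assigned a sign value of 0."""
    if diff >= threshold:
        return 1
    if diff <= -threshold:
        return -1
    return 0

def sao_offset(offset_sign, offset_abs, bit_shift):
    """Offset = sign * (absolute offset left-shifted by the SAO bit shift)."""
    return offset_sign * (offset_abs << bit_shift)

print(sao_sign(1, 2))        # 0 (inside the dead zone)
print(sao_offset(-1, 3, 2))  # -12
```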
  • Patent number: 9307239
    Abstract: An apparatus and method for deriving a motion vector predictor are disclosed. A search set comprising multiple (spatial or temporal) search MVs with priority is determined, wherein the search MVs for multiple neighboring reference blocks or one or more co-located reference blocks are configured into multiple search MV groups. In order to improve coding efficiency, embodiments according to the present invention perform a redundancy check each time after a search MV group is searched to determine whether an available search MV is found. If an available search MV is found and the available search MV is not the same as a previously derived motion vector predictor (MVP), the available search MV is used as the MVP and the MVP derivation process terminates. Otherwise, the MVP derivation process moves to the next reference block. The search MV group can be configured to include different search MV(s) associated with reference blocks.
    Type: Grant
    Filed: March 13, 2012
    Date of Patent: April 5, 2016
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Pao Tsai, Yu-Wen Huang, Shaw-Min Lei
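The group-wise search with redundancy checking can be sketched as follows, assuming for illustration that MVs are simple tuples and `None` marks an unavailable MV:

```python
def derive_mvp(search_groups, prev_mvp):
    """Search MV groups in priority order; after each candidate, check
    availability and redundancy against the previously derived MVP."""
    for group in search_groups:
        for mv in group:
            if mv is None:        # MV not available for this block
                continue
            if mv != prev_mvp:    # redundancy check
                return mv         # found: derivation terminates
        # otherwise move on to the next reference block's group
    return None                   # no non-redundant MV found

groups = [[None, (1, 0)], [(2, 3)]]
print(derive_mvp(groups, (1, 0)))  # (2, 3): (1, 0) rejected as redundant
```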
  • Patent number: 9300963
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) are disclosed. The MVP is selected from spatial MVP and temporal MVP candidates. The method uses a flag to indicate whether temporal MVP candidates are disabled. If the flag indicates that the temporal MVP candidates are disabled, the MVP is derived from the spatial MVP candidates only. Otherwise, the MVP is derived from the spatial and temporal MVP candidates. The method may further skip spatial redundant MVP removal by comparing MV values. Furthermore, the parsing error robustness scheme determines a forced temporal MVP when a temporal MVP is not available and the temporal MVP candidates are allowed as indicated by the flag. The flag may be incorporated in sequence, picture, slice level, or a combination of these levels.
    Type: Grant
    Filed: January 19, 2012
    Date of Patent: March 29, 2016
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Chih-Ming Fu, Chi-Ling Wu, Yu-Pao Tsai, Ching-Yeh Chen, Shaw-Min Lei
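A minimal sketch of the flag-controlled candidate list described above, with an assumed zero MV as the forced temporal MVP (the actual forced value and its signaling are defined by the claims):

```python
def build_mvp_candidates(spatial, temporal, temporal_enabled,
                         forced_temporal=(0, 0)):
    """Assemble the MVP candidate list under a temporal-enable flag, with a
    forced temporal MVP for parsing-error robustness."""
    candidates = list(spatial)
    if temporal_enabled:
        # Substitute a forced temporal MVP when none is available, so the
        # parsed candidate index stays decodable after a reference is lost.
        candidates.append(temporal if temporal is not None else forced_temporal)
    return candidates

print(build_mvp_candidates([(1, 1)], None, True))     # [(1, 1), (0, 0)]
print(build_mvp_candidates([(1, 1)], (2, 0), False))  # [(1, 1)]
```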
  • Patent number: 9237349
    Abstract: A method and apparatus for sharing information in a video decoding system are disclosed. The method derives reconstructed data for a picture from a bitstream, where the picture is partitioned into multiple slices. An information-sharing flag is parsed from the bitstream associated with a current reconstructed slice. If the information-sharing flag indicates information sharing, shared information is determined from a part of the bitstream not corresponding to the current reconstructed slice, and an in-loop filtering process is applied to the current reconstructed slice according to the shared information. If the information-sharing flag indicates no information sharing, individual information is determined from a part of the bitstream corresponding to the current reconstructed slice, and an in-loop filtering process is applied to the current reconstructed slice according to the individual information. A method for a corresponding encoder is also disclosed.
    Type: Grant
    Filed: February 17, 2015
    Date of Patent: January 12, 2016
    Assignee: MEDIATEK INC.
    Inventors: Chia-Yang Tsai, Chih-Wei Hsu, Yu-Wen Huang, Ching-Yeh Chen, Chih-Ming Fu, Shaw-Min Lei
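The sharing mechanism can be sketched as follows; the dictionary-based slice container and field names are purely illustrative, not the actual bitstream syntax:

```python
def slice_filter_params(slices, idx):
    """Resolve in-loop filter parameters for slice `idx` from its
    information-sharing flag (hypothetical container format)."""
    s = slices[idx]
    if s["share_flag"]:
        # Shared information comes from a part of the bitstream not
        # corresponding to this slice, modeled here as another slice.
        return slices[s["share_source"]]["filter_params"]
    # Otherwise the slice carries its own individual information.
    return s["filter_params"]

slices = [
    {"share_flag": False, "filter_params": {"sao": "EO90"}},
    {"share_flag": True, "share_source": 0, "filter_params": None},
]
print(slice_filter_params(slices, 1))  # {'sao': 'EO90'}
```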
  • Publication number: 20150381999
    Abstract: A method of inter-layer motion vector scaling using an inter-layer MV scaling factor to reduce computational complexity is disclosed. In one embodiment, image size information regarding the EL picture and the BL picture of the video sequence is received. An inter-layer motion vector (MV) scaling factor is determined based on the image size information. Scaled MVs are determined based on the BL MVs and the inter-layer MV scaling factor. The scaled MVs are then provided for inter-layer coding of the EL picture. In another embodiment, an inter-layer position-mapping scaling factor is determined based on the image size information regarding the EL picture and the BL picture. BL mapping positions corresponding to EL pixel positions are determined based on the EL pixel positions and the inter-layer position-mapping scaling factor. The up-sampled BL picture at the BL mapping positions is then provided for inter-layer coding of the EL picture.
    Type: Application
    Filed: March 12, 2014
    Publication date: December 31, 2015
    Inventors: Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
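A common way to realize a precomputed scaling factor of this kind is fixed-point arithmetic, sketched below with an assumed 8-bit precision and round-half-up; the actual precision and rounding are defined by the claims.

```python
def mv_scale_factor(el_size, bl_size, shift=8):
    """Fixed-point inter-layer scaling factor from EL/BL picture sizes
    (illustrative 8-bit precision)."""
    return ((el_size << shift) + (bl_size >> 1)) // bl_size

def scale_mv(mv, factor, shift=8):
    """Scale one MV component with rounding using the precomputed factor,
    avoiding a per-MV division."""
    return (mv * factor + (1 << (shift - 1))) >> shift

fx = mv_scale_factor(1920, 960)  # 2x spatial ratio -> factor 512
print(scale_mv(7, fx))           # 14
```

Precomputing the factor once per layer pair replaces a division per motion vector with a multiply-and-shift, which is the complexity reduction the abstract refers to.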
  • Publication number: 20150365680
    Abstract: A method and apparatus of line buffer reduction for context adaptive entropy processing are disclosed. The context formation for context adaptive entropy processing depends on block information associated with one or more neighboring blocks. When a first neighboring block is on an upper side of a horizontal region boundary or on a left side of a vertical region boundary of the region, the block information is replaced by replacement block information to reduce or remove line buffer requirement for storing the block information of neighboring blocks on the other side of the region boundaries from the current block. The context adaptive entropy processing is CABAC encoding, CABAC decoding, CAVLC encoding, or CAVLC decoding.
    Type: Application
    Filed: August 26, 2015
    Publication date: December 17, 2015
    Inventors: Tzu-Der Chuang, Yu-Wen Huang, Ching-Yeh Chen
  • Publication number: 20150365684
    Abstract: A method for cross-color Intra prediction using the LM Intra mode using multi-row or multi-column neighboring reconstructed pixels for LM parameter derivation or using only top pixels or left pixels of neighboring pixels is disclosed. Multiple LM Intra modes can be used. For example, three LM Intra modes can be used, and the LM parameters for the three LM Intra modes can be determined based on only the top pixels, only the left pixels, and both the top pixels and left pixels of the neighboring reconstructed pixels, respectively. To remove the need for an additional buffer for deriving the LM parameters from multi-row or multi-column neighboring reconstructed pixels, the method re-uses existing buffers that are already used for deblocking. A syntax element can be used to indicate which of the multiple LM modes is selected.
    Type: Application
    Filed: March 13, 2014
    Publication date: December 17, 2015
    Inventors: Ching-Yeh CHEN, Chih-Wei HSU, Chia-Yang TSAI, Yu-Wen HUANG
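LM parameters of this kind are conventionally a least-squares fit of chroma against luma over the chosen neighboring pixels (top-only, left-only, or both). A floating-point sketch under that assumption; real codecs use integer arithmetic:

```python
def lm_parameters(luma_neighbors, chroma_neighbors):
    """Least-squares fit chroma ~= alpha * luma + beta over the selected
    neighboring reconstructed pixels."""
    n = len(luma_neighbors)
    sum_l = sum(luma_neighbors)
    sum_c = sum(chroma_neighbors)
    sum_ll = sum(l * l for l in luma_neighbors)
    sum_lc = sum(l * c for l, c in zip(luma_neighbors, chroma_neighbors))
    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:
        return 0.0, sum_c / n   # flat luma: predict the mean chroma
    alpha = (n * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta

# Chroma is exactly 0.5 * luma + 10 over these neighbors:
print(lm_parameters([100, 120, 140], [60, 70, 80]))  # (0.5, 10.0)
```

Selecting which neighbor set feeds this fit (top, left, or both) is exactly what distinguishes the three LM Intra modes described above.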
  • Publication number: 20150350676
    Abstract: A method and apparatus for three-dimensional video coding, multi-view video coding and scalable video coding are disclosed. Embodiments of the present invention use two-stage motion data compression to reduce the motion data buffer requirement. A first-stage motion data compression is applied after each texture picture or depth map is coded. Accordingly, first compressed motion data is stored at reduced resolution in the buffer to reduce storage requirement, and the first compressed motion data is used for the coding process of other texture pictures or depth maps in the same access unit. After all pictures in an access unit are coded, motion data associated with the access unit is further compressed, and the second compressed motion data is used during the coding process of pictures in other access units.
    Type: Application
    Filed: September 18, 2013
    Publication date: December 3, 2015
    Applicant: MediaTek Inc.
    Inventors: Yi-Wen CHEN, Jian-Liang LIN, Yu-Wen HUANG
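Each compression stage is essentially a subsampling of the stored motion field; a sketch assuming a keep-top-left rule (the actual representative-MV selection is defined by the claims):

```python
def compress_motion(mv_field, factor):
    """Keep one MV per factor-by-factor block (subsampled motion field)."""
    return [row[::factor] for row in mv_field[::factor]]

# A 4x4 motion field of (x, y) placeholders:
field = [[(x, y) for x in range(4)] for y in range(4)]
stage1 = compress_motion(field, 2)   # after each picture: 2x2 remains
stage2 = compress_motion(stage1, 2)  # after the access unit: 1x1 remains
print(stage2)                        # [[(0, 0)]]
```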
  • Publication number: 20150350648
    Abstract: A method and apparatus for processing of coded video using in-loop processing are disclosed. Input data to the in-loop processing is received, where the input data corresponds to reconstructed or reconstructed-and-deblocked coding units of the picture. The input data is divided into multiple filter units and each filter unit includes one or more boundary-aligned reconstructed or reconstructed-and-deblocked coding units. A candidate filter is then selected from a candidate filter set for the in-loop processing. The candidate filter set comprises at least two candidate filters, and the in-loop processing corresponds to adaptive loop filter (ALF), adaptive offset (AO), or adaptive clipping (AC). The in-loop processing is then applied to one of the filter units to generate a processed filter unit by applying the selected candidate filter to all boundary-aligned reconstructed or reconstructed-and-deblocked coding units in said one of the filter units.
    Type: Application
    Filed: August 10, 2015
    Publication date: December 3, 2015
    Inventors: Chih-Ming FU, Ching-Yeh CHEN, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
  • Publication number: 20150341636
    Abstract: A method and apparatus of inter-layer and inter-view adaptive Intra prediction (IL-AIP and IV-AIP) for a video coding system are disclosed. The video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) for the inter-layer video coding system, and the video data is configured into a Base View (BV) and an Enhancement View (EV) for the inter-view video coding system. The adaptive Intra predictor for the to-be-processed block in the EL or the EV is derived based on the BL or the BV. For inter-layer and inter-view adaptive LM Intra prediction, the LM adaptive Intra predictor for the to-be-processed chroma block in the EL or the EV is derived based on the BL or the BV.
    Type: Application
    Filed: April 25, 2013
    Publication date: November 26, 2015
    Applicant: MEDIATEK INC.
    Inventors: Chia-Yang Tsai, Tzu-Der Chuang, Ching-Yeh Chen, Yu-Wen Huang
  • Publication number: 20150326886
    Abstract: A method and apparatus for loop processing of reconstructed video in an encoder system are disclosed. The loop processing comprises an in-loop filter and one or more adaptive filters. The filter parameters for the adaptive filter are derived from the pre-in-loop video data so that the adaptive filter processing can be applied to the in-loop processed video data without the need of waiting for completion of the in-loop filter processing for a picture or an image unit. In another embodiment, two adaptive filters derive their respective adaptive filter parameters based on the same pre-in-loop video data. In yet another embodiment, a moving window is used for image-unit-based coding system incorporating in-loop filter and one or more adaptive filters. The in-loop filter and the adaptive filter are applied to a moving window of pre-in-loop video data comprising one or more sub-regions from corresponding one or more image units.
    Type: Application
    Filed: October 10, 2012
    Publication date: November 12, 2015
    Inventors: Yi-Hau CHEN, Kun-Bin LEE, Chi-Cheng JU, Yu-Wen HUANG, Shaw-Min LEI, Chih-Ming FU, Ching-Yeh CHEN, Chia-Yang TSAI, Chih-Wei HSU
  • Publication number: 20150326876
    Abstract: An apparatus and method for temporal motion vector prediction for a current block in a picture are disclosed. In the present method, one temporal block in a first reference picture in a first list selected from a list group comprising list 0 and list 1 is determined. When the determined temporal block has at least one motion vector, a candidate set is determined based on the motion vector of the temporal block. The temporal motion vector predictor or temporal motion vector predictor candidate or temporal motion vector or temporal motion vector candidate for the current block is determined from the candidate set by checking for the presence, among said at least one motion vector, of a motion vector pointing to a reference picture in a first specific list, wherein the first specific list is selected from the list group based on a priority order.
    Type: Application
    Filed: July 21, 2015
    Publication date: November 12, 2015
    Inventors: Yu-Pao Tsai, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9161041
    Abstract: For sample adaptive offset, classification may be used to classify the pixels into multiple categories, and pixels in each category are offset compensated using an offset value for the category. The classification may be based on values of the current pixel and its neighboring pixels before SAO compensation. Therefore, the SAO compensated pixel cannot be written back to the current pixel location until the categories for all pixels are determined. An embodiment of the present invention stores the relation between the current pixel and said one or more neighboring pixels so that the SAO compensated current pixel can replace the current pixel without buffering the to-be-processed pixels for classification. The SAO process may be performed on a region by region basis to adapt to the local characteristics of the picture. Rate-distortion optimization (RDO) is often used to guide the mode decision, such as region splitting/region merging decision.
    Type: Grant
    Filed: July 6, 2011
    Date of Patent: October 13, 2015
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
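The buffering trick in this abstract, caching the sign relation to a neighbor before that neighbor is overwritten, can be sketched for one row of pixels with edge-offset classification. The category numbering and offset table here are illustrative assumptions:

```python
def sao_compensate_inplace(row, offsets):
    """Apply edge-offset SAO in place: the sign relation to the left
    neighbor is cached before that pixel is overwritten, so compensated
    pixels can replace originals without a second classification buffer."""
    sign = lambda d: (d > 0) - (d < 0)
    category = {-2: 1, -1: 2, 0: 0, 1: 3, 2: 4}  # valley .. flat .. peak
    left_sign = sign(row[1] - row[0])
    for i in range(1, len(row) - 1):
        right_sign = sign(row[i] - row[i + 1])
        cat = category[left_sign + right_sign]
        left_sign = -right_sign   # relation for the next pixel, cached now
        row[i] += offsets[cat]    # safe to overwrite: relation is cached
    return row

offsets = {0: 0, 1: 2, 2: 1, 3: -1, 4: -2}
print(sao_compensate_inplace([10, 8, 12, 12, 9], offsets))
# [10, 10, 11, 11, 9]
```

Without the cached `left_sign`, the loop would classify pixel `i` against an already-compensated left neighbor and produce wrong categories, which is exactly the problem the stored relation avoids.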
  • Patent number: 9154778
    Abstract: A method and apparatus for processing of coded video using adaptive offset (AO) are disclosed. Embodiments of the present invention divide reconstructed video data into multiple filter units and apply adaptive offset to the filter units to generate filtered video data, where boundaries of filter units correspond to boundaries of coding units and each filter unit contains one or more coding units. Furthermore, two or more of the multiple filter units can be merged as indicated by a merge index to share filter information of the adaptive offset. A filter control flag can be used to indicate filter ON/OFF control. The luma and chroma components may also share the same filter information. In another embodiment, the filter information sharing among filter units can be applied regardless of whether the boundaries of the filter units are aligned with the boundaries of the coding units.
    Type: Grant
    Filed: May 29, 2013
    Date of Patent: October 6, 2015
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Publication number: 20150281733
    Abstract: A method and apparatus for three-dimensional and scalable video coding are disclosed. Embodiments according to the present invention determine a motion information set associated with the video data, wherein at least part of the motion information set is made available or unavailable conditionally depending on the video data type. The video data type may correspond to depth data, texture data, a view associated with the video data in three-dimensional video coding, or a layer associated with the video data in scalable video coding. The motion information set is then provided for coding or decoding of the video data, other video data, or both. At least a flag may be used to indicate whether part of the motion information set is available or unavailable. Alternatively, a coding profile for the video data may be used to determine whether the motion information is available or not based on the video data type.
    Type: Application
    Filed: August 29, 2013
    Publication date: October 1, 2015
    Applicant: MediaTek Inc.
    Inventors: Chih-Ming Fu, Yi-Wen Chen, Jian-Liang Lin, Yu-Wen Huang
  • Publication number: 20150281708
    Abstract: A method and apparatus for coding video data using Inter prediction mode or Merge mode in a video coding system are disclosed, where the video data is configured into a Base Layer (BL) and an Enhancement Layer (EL), and the EL has higher spatial resolution or better video quality than the BL. In one embodiment, at least one information piece of motion information associated with one or more BL blocks in the BL is identified. A motion vector prediction (MVP) candidate list or a Merge candidate list for the selected block in the EL is then determined, where said at least one information piece associated with said one or more BL blocks in the BL is included in the MVP candidate list or the Merge candidate list. The input data associated with the selected block is coded or decoded using the MVP candidate list or the Merge candidate list.
    Type: Application
    Filed: March 19, 2013
    Publication date: October 1, 2015
    Inventors: Tzu-Der Chuang, Ching-Yeh Chen, Yu-Wen Huang, Jian-Liang Lin, Yi-Wen Chen, Shih-Ta Hsiang, Shan Liu, Shaw-Min Lei
  • Patent number: 9137544
    Abstract: A method and apparatus for deriving a temporal motion vector predictor (MVP) are disclosed. The MVP is derived for a current block of a current picture in Inter, or Merge, or Skip mode based on co-located reference blocks of a co-located block. The co-located reference blocks comprise an above-left reference block of the bottom-right neighboring block of the co-located block. The reference motion vectors associated with the co-located reference blocks are received and used to derive the temporal MVP. Various configurations of co-located reference blocks can be used to practice the present invention. If the MVP cannot be found based on the above-left reference block, search for the MVP can be continued based on other co-located reference blocks. When an MVP is found, the MVP is checked against the previously found MVP. If the MVP is the same as the previously found MVP, the search for MVP continues.
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: September 15, 2015
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yu-Pao Tsai, Yi-Wen Chen, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9124898
    Abstract: An apparatus and method for motion vector prediction for a current block in a picture are disclosed. In video coding systems, spatial and temporal redundancy is exploited using spatial and temporal prediction to reduce the information to be transmitted. Motion Vector Prediction (MVP) has been used to further conserve the bitrate associated with motion vectors. In conventional temporal MVP, the predictor is often based on a single candidate such as the co-located motion vector in the previous frame/picture. If the co-located motion vector in the previous frame/picture does not exist, the predictor for the current block is not available. A technique for improved MVP is disclosed where the MVP utilizes multiple candidates based on co-located motion vectors from future and/or past reference pictures. The candidates are arranged according to a priority order to provide better availability of the MVP and also to provide more accurate prediction.
    Type: Grant
    Filed: March 3, 2011
    Date of Patent: September 1, 2015
    Assignee: MEDIATEK INC.
    Inventors: Yu-Pao Tsai, Jian-Liang Lin, Yu-Wen Huang, Shaw-Min Lei
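The priority-ordered candidate selection can be sketched as a first-available scan; the list layout (future reference first, then past) is an assumption for illustration:

```python
def temporal_mvp(candidates):
    """Return the first available co-located MV in priority order
    (e.g. future reference first, then past). Multiple candidates improve
    availability over a single co-located MV."""
    for mv in candidates:
        if mv is not None:   # None models a missing co-located MV
            return mv
    return None              # predictor unavailable only if all are missing

print(temporal_mvp([None, (3, -1)]))  # (3, -1)
```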