Patents by Inventor Shaw-Min Lei

Shaw-Min Lei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9432670
    Abstract: A method and apparatus for sample adaptive offset (SAO) compensation of reconstructed video data are disclosed. In one embodiment, the relation between the current pixel and one or more of its neighboring pixels is stored so that the SAO-compensated current pixel can replace the current pixel without buffering the to-be-processed pixels for classification. The SAO process may be performed on a region-by-region basis to adapt to the local characteristics of the picture.
    Type: Grant
    Filed: January 11, 2015
    Date of Patent: August 30, 2016
    Assignee: MEDIATEK INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
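    Example (not from the patent text): the key point of the abstract above is that the classification depends only on the sign relations between the current pixel and its neighbors, which can be computed and stored before the current pixel is overwritten. A minimal Python sketch of an HEVC-style edge-offset classification, with illustrative function names and offset table:
        def sign(x):
            return (x > 0) - (x < 0)

        def sao_edge_offset(cur, n0, n1, offsets, max_val=255):
            # Classify from the signs of the differences with the two neighbors
            # along the chosen direction, then add the offset of that category.
            category = {-2: 1, -1: 2, 0: 0, 1: 3, 2: 4}[sign(cur - n0) + sign(cur - n1)]
            if category == 0:
                return cur                        # "plain" category: no offset
            return max(0, min(max_val, cur + offsets[category]))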
  • Patent number: 9420296
    Abstract: A method and apparatus for clipping a transform coefficient are disclosed. Embodiments according to the present invention avoid overflow of the quantized transform coefficient by clipping the quantization level adaptively after quantization. In one embodiment, the method comprises generating the quantization level for the transform coefficient of a transform unit by quantizing the transform coefficient according to a quantization matrix and quantization parameter. The clipping condition is determined and the quantization level is clipped according to the clipping condition to generate a clipping-processed quantization level. The clipping condition includes a null clipping condition. The quantization level is clipped to a fixed range represented in n bits for the null clipping condition, where n corresponds to 8, 16, or 32. The quantization level may also be clipped within a range from −m to m−1 for the null clipping condition, where m may correspond to 128, 32768, or 2147483648.
    Type: Grant
    Filed: December 14, 2012
    Date of Patent: August 16, 2016
    Assignee: MEDIATEK SINGAPORE PTE. LTD.
    Inventors: Xun Guo, Shaw-Min Lei
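    Example (not from the patent text): a minimal Python sketch of the null clipping condition described above, clamping the quantized level into a fixed n-bit signed range:
        def clip_quant_level(level, n=16):
            # Clip to [-m, m-1] with m = 2**(n-1), i.e. m = 128, 32768, or
            # 2147483648 for n = 8, 16, or 32 as listed in the abstract.
            m = 1 << (n - 1)
            return max(-m, min(level, m - 1))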
  • Publication number: 20160234499
    Abstract: A method and apparatus for scalable video coding are disclosed, wherein the video data is configured into a Base Layer (BL) and an Enhancement Layer (EL) and wherein the EL has higher spatial resolution or better video quality than the BL. According to embodiments of the present invention, information from the base layer is exploited for coding the enhancement layer. The information coding for the enhancement layer includes CU structure, motion information, MVP/merge candidates, intra prediction mode, residual quadtree information, texture information, residual information, context adaptive entropy coding, Adaptive Loop Filter (ALF), Sample Adaptive Offset (SAO), and deblocking filter.
    Type: Application
    Filed: April 19, 2016
    Publication date: August 11, 2016
    Inventors: Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen Huang, Shaw-Min Lei, Chih-Ming FU, Chia-Yang TSAI
  • Publication number: 20160205410
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) for a motion vector (MV) of a current block of a current picture in Inter, Merge, or Skip mode are disclosed. The method selects a co-located block corresponding to a co-located picture and receives one or more reference motion vectors (MVs) of one or more co-located reference blocks associated with the co-located block. The method also determines a search set and a search order for the search set; if the search MV corresponding to a given reference list is not available, the search order then searches the search MV corresponding to a reference list different from the given reference list. Finally, the method determines the MVP for the current block based on the search set and the search order and provides the MVP for the current block.
    Type: Application
    Filed: March 17, 2016
    Publication date: July 14, 2016
    Inventors: Jian-Liang Lin, Yu-Pao Tsai, Yi-Wen Chen, Jicheng An, Yu-Wen Huang, Shaw-Min Lei
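    Example (not from the patent text): a minimal Python sketch of the fallback search order described above, assuming the co-located block stores at most one MV per reference list:
        def colocated_search_mv(colocated_ref_mvs, given_list):
            # colocated_ref_mvs: dict mapping reference list (0 or 1) to an MV or None.
            # Search the MV of the given reference list first; if it is not
            # available, fall back to the MV of the other reference list.
            for ref_list in (given_list, 1 - given_list):
                mv = colocated_ref_mvs.get(ref_list)
                if mv is not None:
                    return mv
            return None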
  • Publication number: 20160205403
    Abstract: In one implementation, a method codes video pictures, in which each of the video pictures is partitioned into LCUs (largest coding units). The method operates by receiving a current LCU, partitioning the current LCU adaptively to result in multiple leaf CUs, determining whether a current leaf CU has at least one nonzero quantized transform coefficient according to both Prediction Mode (PredMode) and Coded Block Flag (CBF), and incorporating quantization parameter information for the current leaf CU in a video bitstream if the current leaf CU has at least one nonzero quantized transform coefficient. If the current leaf CU has no nonzero quantized transform coefficient, the method excludes the quantization parameter information for the current leaf CU from the video bitstream.
    Type: Application
    Filed: March 18, 2016
    Publication date: July 14, 2016
    Inventors: Yu-Wen HUANG, Ching-Yeh CHEN, Chih-Ming FU, Chih-Wei HSU, Yu-Lin CHANG, Tzu-Der CHUANG, Shaw-Min LEI
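    Example (not from the patent text): a simplified Python sketch of the signaling decision described above; the exact PredMode/CBF criterion in the patent may differ:
        def include_qp_info(pred_mode, cbf_luma, cbf_cb, cbf_cr):
            # Write quantization parameter information for a leaf CU only when
            # it carries at least one nonzero quantized transform coefficient.
            return pred_mode != 'SKIP' and (cbf_luma or cbf_cb or cbf_cr)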
  • Publication number: 20160205401
    Abstract: A method and apparatus for clipping a transform coefficient are disclosed. In one implementation, a method is implemented in a video encoder for clipping a quantization level. The method operates by generating the quantization level for a transform coefficient of a transform unit by quantizing the transform coefficient according to a quantization matrix and quantization parameter, determining a clipping condition in the video encoder based on video source bit-depth, and clipping the quantization level according to the clipping condition to generate a clipping-processed quantization level.
    Type: Application
    Filed: March 24, 2016
    Publication date: July 14, 2016
    Inventors: Xun GUO, Shaw-Min Lei
  • Publication number: 20160191939
    Abstract: A method for video coding of a current block coded in Inter, Merge, or Skip mode is disclosed. The method determines neighboring blocks of the current block, wherein a motion vector predictor (MVP) candidate set is derived from MVP candidates associated with the neighboring blocks. The method determines at least one redundant MVP candidate, if said MVP candidate is within the same PU (Prediction Unit) as another MVP candidate in the MVP candidate set. The method removes said at least one redundant MVP candidate from the MVP candidate set, and provides a modified MVP candidate set for determining a final MVP, wherein the modified MVP candidate set corresponds to the MVP candidate set with said at least one redundant MVP candidate removed. Finally, the method encodes or decodes the current block according to the final MVP. A corresponding apparatus is also provided.
    Type: Application
    Filed: March 4, 2016
    Publication date: June 30, 2016
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Shaw-Min Lei
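    Example (not from the patent text): a minimal Python sketch of removing candidates that lie in the same PU as an earlier candidate; the (pu_id, mv) representation is illustrative:
        def remove_same_pu_candidates(candidates):
            # candidates: list of (pu_id, mv) pairs gathered from neighboring blocks.
            kept, seen_pus = [], set()
            for pu_id, mv in candidates:
                if pu_id in seen_pus:
                    continue              # redundant: this PU already contributed
                seen_pus.add(pu_id)
                kept.append(mv)
            return kept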
  • Publication number: 20160191921
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) candidate set for motion vector coding of a current block are disclosed. Embodiments according to the present invention determine a redundancy-removed spatial-temporal MVP candidate set. The redundancy-removed spatial-temporal MVP candidate set is derived from a spatial-temporal MVP candidate set by removing any redundant MVP candidate. The spatial-temporal MVP candidate set includes a top spatial MVP candidate, a left spatial MVP candidate and one temporal MVP candidate. The method further checks whether the candidate number of the redundancy-removed spatial-temporal MVP candidate set is smaller than a threshold, and adds a zero motion vector to the redundancy-removed spatial-temporal MVP candidate set if the candidate number is smaller than the threshold. Finally, the method provides the redundancy-removed spatial-temporal MVP candidate set for encoding or decoding of the motion vector of the current block.
    Type: Application
    Filed: March 9, 2016
    Publication date: June 30, 2016
    Inventors: Liang ZHAO, Xun GUO, Shaw-Min LEI
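    Example (not from the patent text): a minimal Python sketch of building the redundancy-removed spatial-temporal candidate set and padding it with a zero MV; the threshold value is illustrative:
        def build_candidate_set(top_mv, left_mv, temporal_mv, threshold=2):
            # Drop unavailable and duplicate MVs, then pad with the zero MV if
            # the candidate number is still smaller than the threshold.
            candidates = []
            for mv in (top_mv, left_mv, temporal_mv):
                if mv is not None and mv not in candidates:
                    candidates.append(mv)
            while len(candidates) < threshold:
                candidates.append((0, 0))
            return candidates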
  • Patent number: 9374600
    Abstract: A method of video coding using intra prediction is disclosed, in which the intra prediction modes are ranked according to a priority order associated with the block size. One or more tables are used, where the tables rank the intra prediction modes according to a first priority order for a block having a first block size and according to a second priority order for a block having a second block size. Two or more neighboring intra prediction modes corresponding to two or more neighboring blocks are received, where each neighboring block has a neighboring block size corresponding to the first block size or the second block size. The highest-priority mode among said two or more neighboring intra prediction modes is selected as the most probable mode. The current mode is then encoded or decoded using the most probable mode as a predictor.
    Type: Grant
    Filed: January 11, 2015
    Date of Patent: June 21, 2016
    Assignee: MEDIATEK SINGAPORE PTE. LTD.
    Inventors: Mei Guo, Xun Guo, Shaw-Min Lei
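    Example (not from the patent text): a Python sketch of one plausible reading of the scheme, with each neighboring mode ranked by the priority table of its own block size; the tables and sizes below are placeholders, not the patent's actual orders:
        PRIORITY_BY_BLOCK_SIZE = {
            4: [0, 1, 2, 3],   # smaller index = higher priority
            8: [2, 0, 1, 3],
        }

        def most_probable_mode(neighbor_modes_and_sizes):
            # neighbor_modes_and_sizes: list of (intra_mode, block_size) pairs.
            def rank(item):
                mode, size = item
                return PRIORITY_BY_BLOCK_SIZE[size].index(mode)
            best_mode, _ = min(neighbor_modes_and_sizes, key=rank)
            return best_mode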
  • Publication number: 20160173872
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) are disclosed. The MVP is selected from spatial MVP candidates and one or more temporal MVP candidates. The method determines a value of a flag in a video bitstream, where the flag is utilized for selectively disabling use of one or more temporal MVP candidates for motion vector prediction. The method selects, based on an index derived from the video bitstream, the MVP from one or more non-temporal MVP candidates responsive to the flag indicating that said one or more temporal MVP candidates are not to be utilized for motion vector prediction. Further, the method provides the MVP for the current block.
    Type: Application
    Filed: February 19, 2016
    Publication date: June 16, 2016
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Chih-Ming Fu, Chi-Ling Wu, Yu-Pao Tsai, Ching-Yeh Chen, Shaw-Min Lei
  • Publication number: 20160173905
    Abstract: A method for deriving a motion vector predictor (MVP) receives motion vectors (MVs) associated with reference blocks of the current block. The method determines at least one first spatial search MV associated with a first MV searching order and at least one second spatial search MV associated with a second MV searching order for each neighboring reference block. Then, the method determines whether a first available-first spatial search MV exists for said at least one neighboring reference block according to the first MV searching order, and provides the first available-first spatial search MV as a spatial MVP for the current block. Finally, the method determines whether a first available-second spatial search MV exists for said at least one neighboring reference block according to the second MV searching order only if none of first spatial search MVs for said at least one neighboring reference block is available.
    Type: Application
    Filed: February 25, 2016
    Publication date: June 16, 2016
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Pao Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Publication number: 20160156938
    Abstract: A method and apparatus for processing reconstructed video using in-loop filter in a video coding system are disclosed. The method uses chroma in-loop filter indication to indicate whether chroma components are processed by in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether the in-loop filter processing is applied to an entire picture using same in-loop filter information or each block of the picture using individual in-loop filter information. Various embodiments according to the present invention to increase efficiency are disclosed, wherein various aspects of in-loop filter information are taken into consideration for efficient coding such as the property of quadtree-based partition, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing to a set of in-loop filter information, and prediction of in-loop filter information.
    Type: Application
    Filed: February 4, 2016
    Publication date: June 2, 2016
    Inventors: Chih-Ming FU, Ching-Yeh CHEN, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
  • Publication number: 20160150096
    Abstract: A method for changing a setting of a mobile communication device is disclosed. The method includes receiving context information of the mobile communication device, changing the setting of the mobile communication device according to the context information and a user preference rule, and updating the user preference rule according to the context information and the changed setting.
    Type: Application
    Filed: May 19, 2015
    Publication date: May 26, 2016
    Inventors: Chia-Ping Chen, Yu-Wen Huang, Shaw-Min Lei
  • Publication number: 20160142706
    Abstract: A method for chroma intra prediction mode decoding includes decoding a chroma intra prediction mode for a current chroma block according to a codeword set corresponding to a chroma intra prediction mode set, wherein the codeword set comprises at least one codeword with a first length type and at least one codeword with a second length type. If a codeword is one of said at least one codeword with the first length type, the chroma intra prediction mode is decoded as a Luma-based chroma prediction Mode (LM) or a Direct Mode (DM). The method also includes determining the chroma intra prediction mode based on an intra prediction mode of a current luma block if the chroma intra prediction mode is the DM.
    Type: Application
    Filed: January 27, 2016
    Publication date: May 19, 2016
    Inventors: Tzu-Der Chuang, Ching-Yeh Chen, Yu-Wen Huang, Shan Liu, Zhi Zhou, Shaw-Min Lei
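    Example (not from the patent text): a toy Python sketch of the decoding rule, with an illustrative codeword set in which the two shortest codewords signal DM and LM:
        def decode_chroma_intra_mode(codeword, luma_mode):
            short_codewords = {'0': 'DM', '10': 'LM'}
            long_codewords = {'110': 'PLANAR', '1110': 'DC', '1111': 'HORIZONTAL'}
            mode = short_codewords.get(codeword) or long_codewords[codeword]
            if mode == 'DM':
                return luma_mode      # Direct Mode reuses the collocated luma intra mode
            return mode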
  • Patent number: 9307239
    Abstract: An apparatus and method for deriving a motion vector predictor are disclosed. A search set comprising multiple spatial or temporal search MVs with priority is determined, wherein the search MVs for multiple neighboring reference blocks or one or more co-located reference blocks are configured into multiple search MV groups. In order to improve coding efficiency, embodiments according to the present invention perform a redundancy check every time after a search MV group is searched to determine whether an available search MV is found. If an available search MV is found and the available search MV is not the same as a previously derived motion vector predictor (MVP), the available search MV is used as the MVP and the MVP derivation process terminates. Otherwise, the MVP derivation process moves to the next reference block. The search MV group can be configured to include different search MV(s) associated with reference blocks.
    Type: Grant
    Filed: March 13, 2012
    Date of Patent: April 5, 2016
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Pao Tsai, Yu-Wen Huang, Shaw-Min Lei
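    Example (not from the patent text): a minimal Python sketch of the group-by-group search with a redundancy check after each group; the data representation is illustrative:
        def derive_mvp(search_mv_groups, previously_derived_mvps):
            # Visit the groups in priority order; after each group, check whether
            # an available MV was found and whether it duplicates an earlier MVP.
            for group in search_mv_groups:
                available = next((mv for mv in group if mv is not None), None)
                if available is not None and available not in previously_derived_mvps:
                    return available      # derivation terminates here
            return None                   # no suitable MV in any group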
  • Patent number: 9300963
    Abstract: A method and apparatus for deriving a motion vector predictor (MVP) are disclosed. The MVP is selected from spatial MVP and temporal MVP candidates. The method uses a flag to indicate whether temporal MVP candidates are disabled. If the flag indicates that the temporal MVP candidates are disabled, the MVP is derived from the spatial MVP candidates only. Otherwise, the MVP is derived from the spatial and temporal MVP candidates. The method may further skip spatial redundant MVP removal by comparing MV values. Furthermore, the parsing error robustness scheme determines a forced temporal MVP when a temporal MVP is not available and the temporal MVP candidates are allowed as indicated by the flag. The flag may be incorporated in sequence, picture, slice level, or a combination of these levels.
    Type: Grant
    Filed: January 19, 2012
    Date of Patent: March 29, 2016
    Assignee: MEDIATEK INC.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Chih-Ming Fu, Chi-Ling Wu, Yu-Pao Tsai, Ching-Yeh Chen, Shaw-Min Lei
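    Example (not from the patent text): a minimal Python sketch of MVP selection under the disable flag; the index-based selection is an assumption for illustration:
        def select_mvp(spatial_candidates, temporal_candidates, temporal_disabled, mvp_index):
            # If the flag disables temporal MVP candidates, derive the MVP from
            # the spatial candidates only; otherwise use both candidate groups.
            candidates = list(spatial_candidates)
            if not temporal_disabled:
                candidates.extend(temporal_candidates)
            return candidates[mvp_index]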
  • Publication number: 20160050436
    Abstract: A method and apparatus of scaling list data signaling for inter-layer or inter-view sharing of the scaling list data from a reference layer or a reference view in a scalable or three-dimensional video coding system are disclosed. A first flag may be incorporated in the current bitstream to indicate the scaling list data sharing from a reference layer or view. When the first flag exists and the first flag has a first value, the scaling list data for the current layer or the current view is determined from a reference bitstream for a reference layer or a reference view. When the first flag exists and the first flag has a second value, the scaling list data for the current layer or the current view is determined from the current bitstream.
    Type: Application
    Filed: March 20, 2014
    Publication date: February 18, 2016
    Inventors: Shan LIU, Shaw-Min LEI
  • Patent number: 9237349
    Abstract: A method and apparatus for sharing information in a video decoding system are disclosed. The method derives reconstructed data for a picture from a bitstream, where the picture is partitioned into multiple slices. An information-sharing flag is parsed from the bitstream associated with a current reconstructed slice. If the information-sharing flag indicates information sharing, shared information is determined from a part of the bitstream not corresponding to the current reconstructed slice, and an in-loop filtering process is applied to the current reconstructed slice according to the shared information. If the information-sharing flag indicates no information sharing, individual information is determined from a part of the bitstream corresponding to the current reconstructed slice, and the in-loop filtering process is applied to the current reconstructed slice according to the individual information. A method for a corresponding encoder is also disclosed.
    Type: Grant
    Filed: February 17, 2015
    Date of Patent: January 12, 2016
    Assignee: MEDIATEK INC.
    Inventors: Chia-Yang Tsai, Chih-Wei Hsu, Yu-Wen Huang, Ching-Yeh Chen, Chih-Ming Fu, Shaw-Min Lei
  • Publication number: 20150350648
    Abstract: A method and apparatus for processing of coded video using in-loop processing are disclosed. Input data to the in-loop processing is received, and the input data corresponds to reconstructed or reconstructed-and-deblocked coding units of the picture. The input data is divided into multiple filter units, and each filter unit includes one or more boundary-aligned reconstructed or reconstructed-and-deblocked coding units. A candidate filter is then selected from a candidate filter set for the in-loop processing. The candidate filter set comprises at least two candidate filters, and the in-loop processing corresponds to adaptive loop filter (ALF), adaptive offset (AO), or adaptive clipping (AC). The in-loop processing is then applied to one of the filter units to generate a processed filter unit by applying the selected candidate filter to all boundary-aligned reconstructed or reconstructed-and-deblocked coding units in said one of the filter units.
    Type: Application
    Filed: August 10, 2015
    Publication date: December 3, 2015
    Inventors: Chih-Ming FU, Ching-Yeh CHEN, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
  • Publication number: 20150326886
    Abstract: A method and apparatus for loop processing of reconstructed video in an encoder system are disclosed. The loop processing comprises an in-loop filter and one or more adaptive filters. The filter parameters for the adaptive filter are derived from the pre-in-loop video data so that the adaptive filter processing can be applied to the in-loop processed video data without the need of waiting for completion of the in-loop filter processing for a picture or an image unit. In another embodiment, two adaptive filters derive their respective adaptive filter parameters based on the same pre-in-loop video data. In yet another embodiment, a moving window is used for image-unit-based coding system incorporating in-loop filter and one or more adaptive filters. The in-loop filter and the adaptive filter are applied to a moving window of pre-in-loop video data comprising one or more sub-regions from corresponding one or more image units.
    Type: Application
    Filed: October 10, 2012
    Publication date: November 12, 2015
    Inventors: Yi-Hau CHEN, Kun-Bin LEE, Chi-Cheng JU, Yu-Wen HUANG, Shaw-Min LEI, Chih-Ming FU, Ching-Yeh CHEN, Chia-Yang TSAI, Chih-Wei HSU