Patents by Inventor Chia-Yang Tsai

Chia-Yang Tsai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190268619
    Abstract: An unencoded video frame of a sequence of video frames is encoded to generate an encoded bit-stream representative of the unencoded video frame. The encoded bit-stream includes a header portion and a video data payload portion. The unencoded video frame may be divided into an array of prediction blocks, including a first prediction block. A coding mode is selected from a plurality of coding modes for use in encoding the first prediction block. The first prediction block is encoded using the selected coding mode to generate a portion of the video data payload of the encoded bit-stream. A coding mode selection flag is provided in the header portion of the encoded bit-stream, which indicates which coding mode of the plurality of coding modes was selected for encoding the first prediction block.
    Type: Application
    Filed: May 13, 2019
    Publication date: August 29, 2019
    Inventors: Chia-Yang TSAI, Kyle KUANG, Xiaobo LIU
  • Patent number: 10321150
    Abstract: An unencoded video frame of a sequence of video frames is encoded to generate an encoded bit-stream representative of the unencoded video frame. The encoded bit-stream includes a header portion and a video data payload portion. The unencoded video frame may be divided into an array of prediction blocks, including a first prediction block. A coding mode is selected from a plurality of coding modes for use in encoding the first prediction block. The first prediction block is encoded using the selected coding mode to generate a portion of the video data payload of the encoded bit-stream. A coding mode selection flag is provided in the header portion of the encoded bit-stream, which indicates which coding mode of the plurality of coding modes was selected for encoding the first prediction block.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: June 11, 2019
    Assignee: RealNetworks, Inc.
    Inventors: Chia-Yang Tsai, Kyle Kuang, Xiaobo Liu
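The mode-selection scheme described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the patented method: the function names, the flag encoding (an index into the mode list), and the callback signatures are all assumptions.

```python
def encode_frame(prediction_blocks, modes, select_mode, encode_block):
    """Encode each prediction block with a chosen coding mode and record a
    per-block mode-selection flag for the header portion of the bit-stream."""
    header_flags = []  # coding-mode selection flags (header portion)
    payload = b""      # encoded block data (video data payload portion)
    for block in prediction_blocks:
        mode = select_mode(block, modes)        # choose from the plurality of modes
        header_flags.append(modes.index(mode))  # flag records which mode was used
        payload += encode_block(block, mode)
    return header_flags, payload
```

A decoder would read the flags back from the header portion to learn which coding mode to apply when decoding each prediction block.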
  • Publication number: 20190149846
    Abstract: A method and apparatus for processing reconstructed video using an in-loop filter in a video coding system are disclosed. The method uses a chroma in-loop filter indication to indicate whether chroma components are processed by the in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether the in-loop filter processing is applied to an entire picture using the same in-loop filter information or to each block of the picture using individual in-loop filter information. Various embodiments according to the present invention that increase efficiency are disclosed, wherein various aspects of in-loop filter information are taken into consideration for efficient coding, such as the property of quadtree-based partition, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing to a set of in-loop filter information, and prediction of in-loop filter information.
    Type: Application
    Filed: January 16, 2019
    Publication date: May 16, 2019
    Inventors: Chih-Ming FU, Ching-Yeh CHEN, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
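The conditional signalling described above, where the chroma indication only matters when the luma indication is on, can be sketched with a toy bit reader. This is a hypothetical illustration; the actual syntax elements and bit layout are defined by the patent and the codec specification.

```python
def read_inloop_filter_flags(read_bit):
    """Read luma and chroma in-loop filter indications; the chroma flag is
    only signalled when the luma flag indicates filtering is applied."""
    luma_on = read_bit()
    chroma_on = read_bit() if luma_on else False  # not signalled => chroma off
    return luma_on, chroma_on
```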
  • Publication number: 20190110050
    Abstract: A transform block processing procedure wherein a maximum coding-block size and a maximum transform-block size for an unencoded video frame are determined. The unencoded video frame is divided into a plurality of coding-blocks, including a first coding-block, and the first coding-block is divided into at least one prediction block and a plurality of transform blocks. The size of the transform blocks depends at least in part on the size of the coding-block and the corresponding prediction blocks. The transform blocks are then encoded, thereby generating a video data payload of an encoded bit-stream. A frame header of the encoded bit-stream, including a maximum-coding-block-size flag and a maximum-transform-block-size flag, is generated.
    Type: Application
    Filed: December 11, 2018
    Publication date: April 11, 2019
    Inventors: Chia-Yang TSAI, Wenpeng DING, Gang WU
  • Patent number: 10218974
    Abstract: A transform block processing procedure wherein a maximum coding-block size and a maximum transform-block size for an unencoded video frame are determined. The unencoded video frame is divided into a plurality of coding-blocks, including a first coding-block, and the first coding-block is divided into at least one prediction block and a plurality of transform blocks. The size of the transform blocks depends at least in part on the size of the coding-block and the corresponding prediction blocks. The transform blocks are then encoded, thereby generating a video data payload of an encoded bit-stream. A frame header of the encoded bit-stream, including a maximum-coding-block-size flag and a maximum-transform-block-size flag, is generated.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: February 26, 2019
    Assignee: RealNetworks, Inc.
    Inventors: Chia-Yang Tsai, Wenpeng Ding, Gang Wu
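A minimal sketch of the size dependency described in the abstract above, under two assumptions that are not taken from the patent: the transform-block size is bounded by both the coding-block and prediction-block sizes, and the header flags carry the maxima as log2 values.

```python
def choose_transform_size(coding_block_size, prediction_block_size,
                          max_transform_size):
    # Transform size depends on the coding block and its prediction blocks,
    # and never exceeds the signalled maximum.
    return min(coding_block_size, prediction_block_size, max_transform_size)

def make_frame_header(max_coding_block_size, max_transform_block_size):
    # Hypothetical flag encoding: log2 of each maximum size.
    return {
        "max_cb_size_flag": max_coding_block_size.bit_length() - 1,
        "max_tb_size_flag": max_transform_block_size.bit_length() - 1,
    }
```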
  • Publication number: 20190007696
    Abstract: An unencoded video frame of a sequence of video frames is encoded to generate an encoded bit-stream representative of the unencoded video frame. The encoded bit-stream includes a header portion and a video data payload portion. The unencoded video frame may be divided into an array of prediction blocks, including a first prediction block. A coding mode is selected from a plurality of coding modes for use in encoding the first prediction block. The first prediction block is encoded using the selected coding mode to generate a portion of the video data payload of the encoded bit-stream. A coding mode selection flag is provided in the header portion of the encoded bit-stream, which indicates which coding mode of the plurality of coding modes was selected for encoding the first prediction block.
    Type: Application
    Filed: December 22, 2015
    Publication date: January 3, 2019
    Applicant: RealNetworks, Inc.
    Inventors: Chia-Yang TSAI, Gang WU
  • Publication number: 20190007681
    Abstract: A transform block processing procedure wherein a maximum coding-block size and a maximum transform-block size for an unencoded video frame are determined. The unencoded video frame is divided into a plurality of coding-blocks, including a first coding-block, and the first coding-block is divided into at least one prediction block and a plurality of transform blocks. The size of the transform blocks depends at least in part on the size of the coding-block and the corresponding prediction blocks. The transform blocks are then encoded, thereby generating a video data payload of an encoded bit-stream. A frame header of the encoded bit-stream, including a maximum-coding-block-size flag and a maximum-transform-block-size flag, is generated.
    Type: Application
    Filed: December 22, 2015
    Publication date: January 3, 2019
    Inventors: Chia-Yang TSAI, Wenpeng DING
  • Patent number: 10154268
    Abstract: In one implementation, a method operates by receiving neighboring reconstructed first-color pixels and current reconstructed first-color pixels of a current first-color block and receiving neighboring reconstructed second-color pixels of a current second-color block collocated with the current first-color block. The method then determines linear model (LM) parameters according to a linear model for one or more LM Intra modes. The method then receives input data associated with current second-color pixels of the current second-color block and generates a cross-color Intra predictor from the current reconstructed first-color pixels of the current first-color block using the LM parameters associated with an LM Intra mode selected from said one or more LM Intra modes. Finally, the method applies cross-color Intra prediction encoding or decoding to the current second-color pixels of the current second-color block using the cross-color Intra predictor for the selected LM Intra mode.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: December 11, 2018
    Assignee: MEDIATEK INC.
    Inventors: Ching-Yeh Chen, Chih-Wei Hsu, Chia-Yang Tsai, Yu-Wen Huang
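Linear-model derivation of the kind described above is commonly done by least-squares fitting of second-color (e.g. chroma) samples against collocated first-color (e.g. luma) samples, chroma ≈ alpha·luma + beta. The sketch below assumes that form in floating point; the patent's exact derivation and integer arithmetic may differ.

```python
def lm_parameters(neighbor_luma, neighbor_chroma):
    """Least-squares fit of chroma = alpha * luma + beta over the
    neighboring reconstructed pixels of the two collocated blocks."""
    n = len(neighbor_luma)
    sx = sum(neighbor_luma)
    sy = sum(neighbor_chroma)
    sxx = sum(x * x for x in neighbor_luma)
    sxy = sum(x * y for x, y in zip(neighbor_luma, neighbor_chroma))
    denom = n * sxx - sx * sx
    alpha = (n * sxy - sx * sy) / denom if denom else 0.0
    beta = (sy - alpha * sx) / n
    return alpha, beta

def lm_predict(reconstructed_luma, alpha, beta):
    # Cross-color Intra predictor for the collocated second-color block.
    return [alpha * x + beta for x in reconstructed_luma]
```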
  • Patent number: 10136144
    Abstract: A method and apparatus for inter-layer prediction for scalable video coding are disclosed. Embodiments according to the present invention apply inter-layer adaptive filtering to the video data derived from the reconstructed BL video data to generate inter-layer adaptive filtered data. The inter-layer adaptive filtered data is then included as prediction data to encode or decode the EL video data. The video data derived from the reconstructed BL video data is up-sampled before applying inter-layer adaptive filtering. The up-sampling may also be included in the inter-layer adaptive filtering. In another embodiment, the inter-layer adaptive filtering comprises adaptive up-sampling. For up-sampled BL video data at locations not collocated with the EL video data, the up-sampled BL video data is divided into location types according to locations of the up-sampled BL video data. Each location type may have an individual filter for up-sampling video data in the group.
    Type: Grant
    Filed: May 21, 2013
    Date of Patent: November 20, 2018
    Assignee: MEDIATEK Singapore Pte. Ltd.
    Inventors: Shan Liu, Mei Guo, Chia-Yang Tsai, Ching-Yeh Chen, Shaw-Min Lei
  • Patent number: 10123048
    Abstract: A method of adaptive loop filtering with implicit sample-based On/Off control for reconstructed video to improve the performance is disclosed. In one embodiment, each pixel of the video data associated with the reconstructed current image unit is classified into a first group and a second group. Adaptive Loop Filter (ALF) is then applied to these pixels belonging to the first group. For pixels in the second group, ALF is not applied. The image unit may correspond to one coding tree block (CTB) or one coding tree unit (CTU). Various classification means for classifying each pixel into a first group or a second group have also been disclosed. The adaptive loop filtering with implicit sample-based On/Off control may also be used as an additional mode in a system supporting block-based On/Off control.
    Type: Grant
    Filed: November 13, 2014
    Date of Patent: November 6, 2018
    Assignee: MediaTek Inc.
    Inventors: Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang
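The implicit sample-based on/off control described above amounts to a per-pixel classification followed by conditional filtering. Both the classifier and the filter in this sketch are placeholders; the patent discloses several concrete classification means.

```python
def alf_with_implicit_onoff(pixels, classify, apply_alf):
    """Apply the adaptive loop filter (ALF) only to pixels classified into
    the first group; second-group pixels pass through unfiltered."""
    return [apply_alf(p) if classify(p) else p for p in pixels]
```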
  • Publication number: 20180295362
    Abstract: A protocol is provided by which a current block and a neighboring block are identified and the current block is processed. In some variants a deblocking filter is applied with a filtering block size set either to the standard blocksize or to the shared blocksize, depending on whether the shared size of the current and neighboring blocks is smaller than a standard blocksize.
    Type: Application
    Filed: September 30, 2015
    Publication date: October 11, 2018
    Inventors: Chia-Yang TSAI, Kai WANG, Chao KUANG
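The size-selection rule in the abstract above can be sketched as follows; the default standard size of 8 and the helper name are assumptions for illustration.

```python
def deblock_filter_block_size(current_size, neighbor_size, standard_size=8):
    """Filter with the shared size of the current and neighboring blocks
    when it is smaller than the standard size; otherwise use the standard."""
    shared = min(current_size, neighbor_size)
    return shared if shared < standard_size else standard_size
```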
  • Publication number: 20180199053
    Abstract: An unencoded video frame of a sequence of video frames is encoded to generate an encoded bit-stream representative of the unencoded video frame. The encoded bit-stream includes a header portion and a video data payload portion. The unencoded video frame may be divided into an array of prediction blocks, including a first prediction block. A coding mode is selected from a plurality of coding modes for use in encoding the first prediction block. The first prediction block is encoded using the selected coding mode to generate a portion of the video data payload of the encoded bit-stream. A coding mode selection flag is provided in the header portion of the encoded bit-stream, which indicates which coding mode of the plurality of coding modes was selected for encoding the first prediction block.
    Type: Application
    Filed: March 31, 2015
    Publication date: July 12, 2018
    Applicant: REALNETWORKS, INC.
    Inventors: Chia-Yang TSAI, Kyle KUANG, Xiaobo LIU
  • Patent number: 9998737
    Abstract: A method and apparatus for processing of coded video using in-loop processing are disclosed. The method operates by receiving input data to said in-loop processing, wherein the input data corresponds to reconstructed coding units of the picture; configuring the input data into multiple filter units; selecting a filter from a candidate filter set comprising at least two candidate filters associated with said in-loop processing for one of said multiple filter units; applying said in-loop processing to said one of said multiple filter units using the selected filter to generate a processed filter unit, wherein when said one of said multiple filter units comprises at least two reconstructed coding units, the selected filter is applied to all of said at least two reconstructed coding units; and providing processed video data comprising the processed filter unit. The apparatus provides circuits to carry out the operations of the method.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: June 12, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Ming Fu, Ching-Yeh Chen, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei
  • Patent number: 9967563
    Abstract: A method and apparatus for loop filter processing of boundary pixels across a block boundary aligned with a slice or tile boundary are disclosed. Embodiments according to the present invention use a parameter of a neighboring slice or tile for loop filter processing across slice or tile boundaries according to a flag indicating whether cross-slice or cross-tile loop filter processing is allowed or not. According to one embodiment of the present invention, the parameter is a quantization parameter corresponding to a neighboring slice or tile, and the quantization parameter is used for the filter decision in the deblocking filter.
    Type: Grant
    Filed: January 30, 2013
    Date of Patent: May 8, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Chih-Wei Hsu, Chia-Yang Tsai, Yu-Wen Huang
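One way to read the QP-based filter decision described above: the deblocking decision uses the neighboring slice's or tile's quantization parameter only when a flag allows filtering across the boundary. The rounding-average rule below mirrors how deblocking filters typically combine the two sides' QPs; it is illustrative, not the claimed method.

```python
def deblock_boundary_qp(cur_qp, neighbor_qp, cross_filter_allowed):
    """Pick the QP used for the deblocking filter decision at a slice/tile
    boundary."""
    if cross_filter_allowed:
        return (cur_qp + neighbor_qp + 1) // 2  # use the neighbor's QP too
    return cur_qp                               # crossing disabled: stay local
```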
  • Publication number: 20180109793
    Abstract: A transform block processing procedure wherein a maximum coding-block size and a maximum transform-block size for an unencoded video frame are determined. The unencoded video frame is divided into a plurality of coding-blocks, including a first coding-block, and the first coding-block is divided into at least one prediction block and a plurality of transform blocks. The size of the transform blocks depends at least in part on the size of the coding-block and the corresponding prediction blocks. The transform blocks are then encoded, thereby generating a video data payload of an encoded bit-stream. A frame header of the encoded bit-stream, including a maximum-coding-block-size flag and a maximum-transform-block-size flag, is generated.
    Type: Application
    Filed: March 31, 2015
    Publication date: April 19, 2018
    Inventors: Chia-Yang TSAI, Wenpeng DING, Gang WU
  • Publication number: 20180109816
    Abstract: Methods and systems for inserting and extracting message data into and out of an encoded bitstream representative of an unencoded video frame are described herein. The unencoded video frame and at least one accompanying message for inclusion in the encoded bitstream are obtained and the unencoded video frame is encoded, thereby generating a video data payload of the encoded bitstream. A message size corresponding to the accompanying message(s) is obtained and a frame header of the encoded bitstream is generated. The frame header may include a message-enabled flag, a message-count flag, at least one message-size flag corresponding to each of the accompanying messages, and message-data corresponding to the contents of the accompanying message(s). The message-count flag indicates a number of accompanying messages being included in the frame header, and each message-size flag indicates the size of a corresponding accompanying message.
    Type: Application
    Filed: March 31, 2015
    Publication date: April 19, 2018
    Inventors: Chia-Yang TSAI, Gang WU, Kai WANG, Ihwan LIMASI
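The header layout described above might be serialized as in this sketch. The field widths (one byte each for the message-enabled and message-count flags, two big-endian bytes for each message size) are invented for illustration and are not taken from the patent.

```python
import struct

def build_message_header(messages):
    """Pack a message-enabled flag, a message-count flag, and a
    (message-size flag, message data) pair per accompanying message."""
    out = struct.pack("B", 1 if messages else 0)  # message-enabled flag
    out += struct.pack("B", len(messages))        # message-count flag
    for m in messages:
        out += struct.pack(">H", len(m))          # message-size flag
        out += m                                  # message data
    return out
```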
  • Publication number: 20180109785
    Abstract: In one embodiment, a method receives a video bitstream corresponding to compressed video, wherein Filter Unit (FU) based in-loop filtering is allowed in a reconstruction loop associated with the compressed video. The method then derives reconstructed video from the video bitstream, wherein the reconstructed video is partitioned into FUs and derives a merge flag from the video bitstream for each of the FUs, wherein the merge flag indicates whether said each of the FUs is merged with a neighboring FU. The method further receives a merge index from the video bitstream if the merge flag indicates that said each of the FUs is merged, and receives the filter parameters from the video bitstream if the merge flag indicates that said each of the FUs is not merged. Finally, the method applies the in-loop filtering to said each of the FUs using the filter parameters.
    Type: Application
    Filed: December 14, 2017
    Publication date: April 19, 2018
    Inventors: Ching-Yeh CHEN, Chih-Ming FU, Chia-Yang TSAI, Yu-Wen HUANG, Shaw-Min LEI
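The merge-flag protocol described above can be sketched with toy reader callbacks (the names and calling conventions are assumptions): a merge flag per filter unit (FU) decides between reusing a neighboring FU's parameters via a merge index and reading explicit parameters from the bitstream.

```python
def parse_fu_filters(fus, read_bit, read_index, read_params):
    """Derive filter parameters for each FU from merge flags: reuse a
    neighbor's parameters when merged, else read them explicitly."""
    params = []
    for _ in fus:
        if read_bit():                           # merge flag set
            params.append(params[read_index()])  # reuse neighbor's parameters
        else:
            params.append(read_params())         # explicit filter parameters
    return params
```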
  • Patent number: 9924181
    Abstract: A method and apparatus for inter-layer prediction for scalable video coding are disclosed. Embodiments of the present invention utilize weighted prediction for scalable coding. The weighted prediction is based on the predicted texture data and the inter-layer Intra prediction data derived from BL reconstructed data. The inter-layer Intra prediction data corresponds to the BL reconstructed data or up-sampled BL reconstructed data. The predicted texture data corresponds to spatial Intra prediction data or motion-compensated prediction data based on the second EL video data in the same layer as the current EL picture. Embodiments of the present invention also utilize the reference picture list including an inter-layer reference picture (ILRP) corresponding to BL reconstructed texture frame or up-sampled BL reconstructed texture frame for Inter prediction of EL video data. The motion vector is limited to a range around (0,0) when the ILRP is selected as a reference picture.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: March 20, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Yu-Wen Huang, Ching-Yeh Chen, Chia-Yang Tsai, Chih-Ming Fu, Shih-Ta Hsiang
  • Publication number: 20180070077
    Abstract: A method of inter-layer or inter-view prediction for an inter-layer or inter-view video coding system is disclosed. The method includes receiving a to-be-processed block in the EL or the EV, determining a collocated block in the BL (Base Layer) or the BV (Base View), wherein the collocated block is located at a location in the BL or the BV corresponding to the to-be-processed block in the EL (Enhancement Layer) or in the EV (Enhancement View), deriving a predictor for the to-be-processed block in the EL or the EV from the collocated block in the BL or the BV based on pixel data of the BL or the BV, wherein the predictor corresponds to a linear function of pixel data in the collocated block, and encoding or decoding the to-be-processed block in the EL or the EV using the predictor.
    Type: Application
    Filed: October 27, 2017
    Publication date: March 8, 2018
    Inventors: Chia-Yang TSAI, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
  • Patent number: 9877019
    Abstract: Implementations of the invention are provided in methods for filter-unit based in-loop filtering in a video decoder and encoder. In one implementation, filter parameters are selected from a filter parameter set for each filter based on a filter index. In another implementation, the picture is partitioned into filter units according to a filter unit size, which can be selected between a default size and another size. When another size is selected, the filter unit size may be conveyed using direct size information or ratio information. In another implementation, a merge flag and a merge index are used to convey filter unit merge information. A method for filter-unit based in-loop filtering in a video encoder for color video is disclosed. In one embodiment, the method incorporates filter syntax in the video bitstream by interleaving the color-component filter syntax for the FUs.
    Type: Grant
    Filed: December 31, 2011
    Date of Patent: January 23, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Ching-Yeh Chen, Chih-Ming Fu, Chia-Yang Tsai, Yu-Wen Huang, Shaw-Min Lei