Patents by Inventor Olena CHUBACH
Olena CHUBACH has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250080756
Abstract: A method and apparatus for inter prediction in a video coding system are disclosed. According to the method, one or more model parameters of one or more cross-color models for the second-color block are determined. Then, cross-color predictors for the second-color block are determined, wherein one cross-color predictor value for the second-color block is generated for each second-color pixel of the second-color block by applying said one or more cross-color models to corresponding reconstructed or predicted first-color pixels. The input data associated with the second-color block is encoded using prediction data comprising the cross-color predictors for the second-color block at the encoder side, or the input data associated with the second-color block is decoded using the prediction data comprising the cross-color predictors for the second-color block at the decoder side.
Type: Application
Filed: December 20, 2022
Publication date: March 6, 2025
Inventors: Man-Shu CHIANG, Olena CHUBACH, Yu-Ling HSIAO, Chia-Ming TSAI, Chun-Chia CHEN, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
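As a rough illustration of the per-pixel cross-color prediction described above, the following Python sketch assumes a single linear model pred_second = a * rec_first + b fitted by least squares on neighbouring samples; the helper names are hypothetical and the patent is not limited to this derivation.

```python
import numpy as np

def derive_linear_model(rec_first, rec_second):
    """Least-squares fit of a linear model: second ~ a * first + b.
    Illustrative only; the patent covers more general model derivations."""
    a, b = np.polyfit(rec_first.ravel(), rec_second.ravel(), 1)
    return a, b

def cross_color_predict(rec_first_block, a, b):
    """Apply the cross-color model to every co-located first-color pixel."""
    return a * rec_first_block + b

# Toy usage: fit on neighbouring samples, then predict the current second-color block.
neigh_luma = np.array([100., 120., 140., 160.])
neigh_chroma = np.array([50., 60., 70., 80.])
a, b = derive_linear_model(neigh_luma, neigh_chroma)
print(cross_color_predict(np.array([[110., 130.], [150., 170.]]), a, b))
```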
-
Publication number: 20250071260
Abstract: A method and apparatus for video coding are disclosed. According to the method, a set of MC (Motion Compensation) candidates is determined, with each MC candidate comprising predicted samples for coding boundary pixels of the current block. The set of MC candidates comprises a first candidate, wherein the first candidate corresponds to a weighted sum of first predicted pixels generated according to first motion information of the current block and second predicted pixels generated according to second motion information of a neighbouring boundary block of the current block. Boundary matching costs associated with the set of MC candidates are determined respectively. A final candidate is determined from the set of MC candidates based on the boundary matching costs. The current block is encoded or decoded using the final candidate.
Type: Application
Filed: January 10, 2023
Publication date: February 27, 2025
Inventors: Chun-Chia CHEN, Olena CHUBACH, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
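A minimal sketch of the boundary-matching selection described above, assuming SAD as the matching cost and a two-candidate set whose second candidate is a weighted sum of the block's own prediction and a neighbouring boundary block's prediction; the names and weights are illustrative.

```python
import numpy as np

def boundary_matching_cost(pred_boundary, reco_neighbours):
    """SAD between a candidate's predicted boundary pixels and the reconstructed
    pixels just outside the block (one possible matching cost)."""
    return np.abs(pred_boundary - reco_neighbours).sum()

def select_candidate(mc_candidates, reco_neighbours):
    """Pick the MC candidate whose boundary best matches the neighbours."""
    costs = [boundary_matching_cost(c, reco_neighbours) for c in mc_candidates]
    return int(np.argmin(costs)), costs

# Toy usage: candidate 1 blends the block's own MC prediction with a
# neighbouring boundary block's prediction, as in the abstract.
own_pred = np.array([10., 20., 30., 40.])        # boundary row from own motion
neighbour_pred = np.array([12., 22., 28., 38.])  # boundary row from neighbour's motion
candidates = [own_pred, 0.5 * own_pred + 0.5 * neighbour_pred]
print(select_candidate(candidates, np.array([11., 21., 29., 39.])))
```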
-
Publication number: 20250063155
Abstract: A method and apparatus for inter prediction in a video coding system are disclosed. According to the method, input data associated with a current block comprising at least one colour block are received. A blending predictor is determined according to a weighted sum of at least two candidate predictions generated based on one or more first hypotheses of prediction, one or more second hypotheses of prediction, or both. The first hypotheses of prediction are generated based on one or more intra prediction modes comprising a DC mode, a planar mode or at least one angular mode. The second hypotheses of prediction are generated based on one or more cross-component modes and a collocated block of said at least one colour block. The input data associated with the colour block is encoded or decoded using the blending predictor.
Type: Application
Filed: December 20, 2022
Publication date: February 20, 2025
Inventors: Man-Shu CHIANG, Olena CHUBACH, Chia-Ming TSAI, Yu-Ling HSIAO, Chun-Chia CHEN, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
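The blending step can be pictured as a simple weighted sum, as in the sketch below; the weights, the DC-like intra hypothesis, and the cross-component hypothesis are illustrative stand-ins, not values prescribed by the patent.

```python
import numpy as np

def blend_predictors(hypotheses, weights):
    """Weighted sum of candidate predictions (intra and/or cross-component)."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                      # normalise the weights
    return sum(w * h for w, h in zip(weights, hypotheses))

# Toy usage: one DC-like intra hypothesis and one cross-component hypothesis
# derived from a collocated luma block (all values are illustrative).
intra_hyp = np.full((2, 2), 64.0)                                # e.g. DC prediction
ccm_hyp = 0.5 * np.array([[120., 124.], [128., 132.]]) + 4.0     # linear model of luma
print(blend_predictors([intra_hyp, ccm_hyp], weights=[1, 3]))
```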
-
Publication number: 20250056008
Abstract: A video coding system that uses multiple models to predict chroma samples is provided. The video coding system receives data for a block of pixels to be encoded or decoded as a current block of a current picture of a video. The system constructs two or more chroma prediction models based on luma and chroma samples neighboring the current block. The system applies the two or more chroma prediction models to incoming or reconstructed luma samples of the current block to produce two or more model predictions. The system computes predicted chroma samples by combining the two or more model predictions. The system uses the predicted chroma samples to reconstruct chroma samples of the current block or to encode the current block.
Type: Application
Filed: December 20, 2022
Publication date: February 13, 2025
Inventors: Yu-Ling HSIAO, Olena CHUBACH, Chun-Chia CHEN, Chia-Ming TSAI, Man-Shu CHIANG, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
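One way to picture the multi-model combination, assuming two least-squares linear models fitted on darker and brighter neighbouring samples and a soft blend of their predictions near a luma threshold; this is only one of many combinations the abstract allows.

```python
import numpy as np

def fit_model(luma, chroma):
    """Least-squares linear model chroma ~ a * luma + b (illustrative)."""
    return np.polyfit(luma, chroma, 1)

def predict_with_two_models(luma_block, model_lo, model_hi, threshold):
    """Apply both models and combine their predictions with a soft blend
    around the luma threshold (an assumed combination rule)."""
    pred_lo = model_lo[0] * luma_block + model_lo[1]
    pred_hi = model_hi[0] * luma_block + model_hi[1]
    w_hi = np.clip((luma_block - threshold) / 16.0 + 0.5, 0.0, 1.0)
    return (1.0 - w_hi) * pred_lo + w_hi * pred_hi

# Toy usage: fit one model on dark neighbours, one on bright neighbours.
neigh_luma = np.array([40., 60., 80., 160., 180., 200.])
neigh_chroma = np.array([30., 40., 50., 90., 100., 110.])
thr = neigh_luma.mean()
m_lo = fit_model(neigh_luma[neigh_luma <= thr], neigh_chroma[neigh_luma <= thr])
m_hi = fit_model(neigh_luma[neigh_luma > thr], neigh_chroma[neigh_luma > thr])
print(predict_with_two_models(np.array([[50., 190.]]), m_lo, m_hi, thr))
```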
-
Publication number: 20250039356
Abstract: A video coding system that uses multiple models to predict chroma samples is provided. The video coding system receives data for a block of pixels to be encoded or decoded as a current block of a current picture of a video. The video coding system derives multiple prediction linear models based on luma and chroma samples neighboring the current block. The video coding system constructs a composite linear model based on the multiple prediction linear models. The video coding system applies the composite linear model to incoming or reconstructed luma samples of the current block to generate a chroma predictor of the current block. The video coding system uses the chroma predictor to reconstruct chroma samples of the current block or to encode the current block.
Type: Application
Filed: December 29, 2022
Publication date: January 30, 2025
Inventors: Chia-Ming TSAI, Chun-Chia CHEN, Yu-Ling HSIAO, Man-Shu CHIANG, Chih-Wei HSU, Olena CHUBACH, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
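As an illustration of building a composite linear model, the sketch below simply takes a weighted average of the per-model parameters; the abstract does not fix the composition rule, so this is an assumption.

```python
import numpy as np

def composite_model(models, weights):
    """Build one composite (a, b) from several linear models by a weighted
    average of their parameters (an assumed composition rule)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    a = sum(wi * m[0] for wi, m in zip(w, models))
    b = sum(wi * m[1] for wi, m in zip(w, models))
    return a, b

# Toy usage: two linear models derived from different neighbouring regions.
models = [(0.45, 8.0), (0.55, 4.0)]
a, b = composite_model(models, weights=[1, 1])
luma_block = np.array([[100., 120.], [140., 160.]])
print(a * luma_block + b)      # chroma predictor from the composite model
```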
-
Publication number: 20250024072
Abstract: A method and apparatus for a video coding system that uses intra prediction based on a cross-colour linear model are disclosed. According to the method, model parameters for a first-colour predictor model are determined, and the first-colour predictor model provides a predicted first-colour pixel value according to a combination of at least two corresponding reconstructed second-colour pixel values. According to another method, the first-colour predictor model provides a predicted first-colour pixel value based on a second-degree or higher model of one or more corresponding reconstructed second-colour pixel values. First-colour predictors for the current first-colour block are determined according to the first-colour predictor model. The input data are then encoded at the encoder side or decoded at the decoder side using the first-colour predictors.
Type: Application
Filed: October 26, 2022
Publication date: January 16, 2025
Inventors: Olena CHUBACH, Ching-Yeh CHEN, Tzu-Der CHUANG, Chun-Chia CHEN, Man-Shu CHIANG, Chia-Ming TSAI, Yu-Ling HSIAO, Chih-Wei HSU, Yu-Wen HUANG
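The second-degree variant can be sketched as a quadratic least-squares fit on neighbouring reconstructed samples, as below; the function names are hypothetical, and the patent also covers higher-degree and multi-sample models.

```python
import numpy as np

def fit_quadratic_model(rec_second, rec_first):
    """Fit pred_first = c2*x**2 + c1*x + c0 on neighbouring samples, where x
    are reconstructed second-colour samples (least-squares, illustrative)."""
    return np.polyfit(rec_second, rec_first, 2)

def predict_first_colour(rec_second_block, coeffs):
    """Evaluate the second-degree cross-colour model per pixel."""
    return np.polyval(coeffs, rec_second_block)

# Toy usage with made-up neighbouring samples.
neigh_second = np.array([60., 80., 100., 120., 140.])
neigh_first = np.array([30., 45., 62., 81., 102.])
coeffs = fit_quadratic_model(neigh_second, neigh_first)
print(predict_first_colour(np.array([[70., 110.]]), coeffs))
```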
-
Publication number: 20250008171
Abstract: A video streaming server transmits coded video for a current picture in a first set of network packets. The server generates a set of configuration data for the current picture. The server transmits the set of configuration data in a second set of network packets. The server transmits a particular network packet comprising a group identifier identifying a group that is applicable to the current picture, the identified group comprising the second set of network packets. A video streaming client receives the first set of network packets and reconstructs the current picture. The client receives the second set of network packets and the particular network packet. The client uses the group identifier in the particular network packet to identify the second set of network packets as being in the group that is applicable to the current picture. The client outputs the reconstructed current picture by applying the set of configuration data.
Type: Application
Filed: June 27, 2024
Publication date: January 2, 2025
Inventors: Lulin Chen, Olena Chubach, Yu-Wen Huang
-
Publication number: 20250008125
Abstract: A video coding system that uses chroma prediction is provided. The system receives data for a block of pixels to be encoded or decoded as a current block of a current picture of a video. The system constructs a chroma prediction model based on luma and chroma samples neighboring the current block. The system signals a set of chroma-prediction-related syntax elements and a refinement to the chroma prediction model. The system performs chroma prediction by applying the chroma prediction model to reconstructed luma samples of the current block to obtain predicted chroma samples of the current block. The system uses the predicted chroma samples to reconstruct chroma samples of the current block or to encode the current block.
Type: Application
Filed: October 11, 2022
Publication date: January 2, 2025
Inventors: Chia-Ming TSAI, Olena CHUBACH, Chun-Chia CHEN, Ching-Yeh CHEN, Man-Shu CHIANG, Yu-Ling HSIAO, Tzu-Der CHUANG, Chih-Wei HSU, Yu-Wen HUANG
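A toy sketch of the signalled refinement, assuming the refinement is an additive delta applied to the implicitly derived linear-model parameters; the actual refinement syntax is not specified in the abstract.

```python
import numpy as np

def refine_model(derived_a, derived_b, delta_a, delta_b):
    """Apply a signalled refinement to the implicitly derived model
    parameters (additive deltas are an assumption)."""
    return derived_a + delta_a, derived_b + delta_b

# Toy usage: the decoder derives (a, b) from neighbouring samples, then
# parses a small correction from the bitstream and applies it.
derived_a, derived_b = 0.50, 6.0        # from neighbouring luma/chroma samples
delta_a, delta_b = 0.0625, -1.0         # signalled refinement (illustrative)
a, b = refine_model(derived_a, derived_b, delta_a, delta_b)
rec_luma = np.array([[100., 140.], [120., 160.]])
print(a * rec_luma + b)                 # predicted chroma samples
```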
-
Publication number: 20240430474
Abstract: A video encoder or a video decoder may perform operations to determine an initial motion vector (MV), such as a control point motion vector (CPMV) candidate according to an affine mode or an additional prediction signal representing an additional hypothesis motion vector, for a current sub-block in a current frame of a video stream; determine a current template associated with the current sub-block in the current frame; retrieve a reference template within a search area in a reference frame; and compute a difference between the reference template and the current template based on an optimization measurement. Additional operations performed may include iterating the retrieving and the computing of the difference for a different reference template within the search area until a refinement MV, such as a refined CPMV or refined additional hypothesis motion vector, is found to minimize the difference according to the optimization measurement.
Type: Application
Filed: August 18, 2022
Publication date: December 26, 2024
Inventors: Olena CHUBACH, Chun-Chia CHEN, Man-Shu CHIANG, Tzu-Der CHUANG, Ching-Yeh CHEN, Chih-Wei HSU, Yu-Wen HUANG
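The iterative template search can be illustrated by a brute-force loop over a small search window that minimises SAD, as in the sketch below; the optimization measurement and the search pattern here are assumptions.

```python
import numpy as np

def tm_refine(ref_frame, cur_template, init_xy, search_range=2):
    """Search a small window around the initial position for the reference
    template that minimises SAD against the current template."""
    h, w = cur_template.shape
    best_cost, best_xy = None, init_xy
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = init_xy[0] + dy, init_xy[1] + dx
            ref_template = ref_frame[y:y + h, x:x + w]
            cost = np.abs(ref_template - cur_template).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_xy = cost, (y, x)
    return best_xy, best_cost

# Toy usage on a random "reference frame".
rng = np.random.default_rng(0)
ref = rng.integers(0, 255, size=(32, 32)).astype(float)
cur_tmpl = ref[10:14, 8:12] + rng.normal(0, 1, size=(4, 4))  # noisy copy
print(tm_refine(ref, cur_tmpl, init_xy=(11, 9)))
```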
-
Publication number: 20240414366
Abstract: A video coding system that uses local illumination compensation to code pixel blocks is provided. A video encoder receives samples for an original block of pixels to be encoded as a current block of a current picture of a video. The video encoder applies a linear model to a reference block to generate a prediction block for the current block. The linear model includes a scale parameter and an offset parameter. The video encoder may use the samples of the original block and samples from a reconstructed reference frame to derive the scale parameter and the offset parameter. The video encoder signals the scale parameter and the offset parameter in a bitstream. The video encoder encodes the current block by using the prediction block to reconstruct the current block.
Type: Application
Filed: November 25, 2022
Publication date: December 12, 2024
Inventors: Olena CHUBACH, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG, Chun-Chia CHEN
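A minimal sketch of deriving the scale and offset parameters, assuming an encoder-side least-squares fit between the original block and the reference block; the fitted values would then be signalled in the bitstream as the abstract describes.

```python
import numpy as np

def derive_lic_params(original_block, reference_block):
    """Encoder-side least-squares fit of orig ~ scale * ref + offset.
    The fitted parameters would then be signalled in the bitstream."""
    scale, offset = np.polyfit(reference_block.ravel(), original_block.ravel(), 1)
    return scale, offset

# Toy usage: the reference block is a dimmer version of the original.
orig = np.array([[100., 110.], [120., 130.]])
ref = 0.8 * orig - 5.0
scale, offset = derive_lic_params(orig, ref)
pred = scale * ref + offset          # prediction block for the current block
print(scale, offset, pred)
```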
-
Publication number: 20240357082
Abstract: A video coding system that uses template matching (TM) to improve signaling of coding modes is provided. The system receives data to be encoded or decoded as a current block of a current picture of a video. The system identifies a set of pixels neighboring the current block as a current template. The system identifies a reference template of each candidate coding mode in a plurality of candidate coding modes. The system computes a template matching (TM) cost for each candidate coding mode based on matching the current template with the reference template of the candidate coding mode. The system selects a candidate coding mode from the plurality of candidate coding modes based on the computed TM costs. The system reconstructs the current block or encodes the current block into a bitstream by using the selected candidate coding mode.
Type: Application
Filed: August 18, 2022
Publication date: October 24, 2024
Inventors: Olena CHUBACH, Chun-Chia CHEN, Man-Shu CHIANG, Tzu-Der CHUANG, Ching-Yeh CHEN, Chih-Wei HSU, Yu-Wen HUANG
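The mode selection can be pictured as ranking candidate coding modes by a template-matching cost, as in the sketch below; SAD is assumed as the cost, and the mode names are placeholders.

```python
import numpy as np

def tm_cost(cur_template, ref_template):
    """SAD between the current template and a candidate mode's reference
    template (SAD is one possible matching cost)."""
    return np.abs(cur_template - ref_template).sum()

def select_mode(cur_template, ref_templates_by_mode):
    """Rank candidate coding modes by their template-matching cost and
    select the best-matching one."""
    costs = {m: tm_cost(cur_template, t) for m, t in ref_templates_by_mode.items()}
    return min(costs, key=costs.get), costs

# Toy usage with two hypothetical candidate modes.
cur = np.array([[100., 102.], [104., 106.]])
cands = {"mode_a": cur + 1.0, "mode_b": cur + 20.0}
print(select_mode(cur, cands))
```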
-
Publication number: 20240357153
Abstract: A method and apparatus for template matching with a determined area are disclosed. According to this method, a current template for a current block is received, comprising current neighbouring pixels on an above side of the current block, on a left side of the current block, or a combination thereof. An area in a reference picture is then determined, where the reference picture corresponds to a previously coded picture. A matching result between a restricted reference template of a reference block and the current template is then determined, wherein the restricted reference template is generated by using only neighbouring reference pixels of a reference template inside the determined area, the reference template has a same shape as the current template, and a location of the reference template is determined according to a target motion vector (MV) from the current template.
Type: Application
Filed: August 18, 2022
Publication date: October 24, 2024
Inventors: Chun-Chia CHEN, Olena CHUBACH, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
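A small sketch of the restricted reference template, assuming a boolean mask marks which reference-template pixels lie inside the determined area and only those pixels contribute to the matching cost.

```python
import numpy as np

def restricted_tm_cost(cur_template, ref_template, inside_mask):
    """Matching cost using only reference-template pixels inside the
    determined area (inside_mask == True); other pixels are ignored."""
    diff = np.abs(cur_template - ref_template)
    return diff[inside_mask].sum(), int(inside_mask.sum())

# Toy usage: the right column of the reference template falls outside the
# determined area (e.g. beyond a picture or tile boundary).
cur = np.array([[100., 102., 104.], [106., 108., 110.]])
ref = np.array([[101., 103., 999.], [107., 109., 999.]])
mask = np.array([[True, True, False], [True, True, False]])
print(restricted_tm_cost(cur, ref, mask))
```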
-
Publication number: 20240357084
Abstract: A method and apparatus for a video coding system that utilizes low-latency template-matching motion-vector refinement are disclosed. According to this method, a current template for the current block is determined, where at least one of a current above template and a current left template is removed or is located away from a respective above edge or a respective left edge of the current block, and the current template is generated using reconstructed samples. Candidate reference templates, corresponding to the current template at respective candidate locations, associated with the current block at a set of candidate locations in a reference picture are determined. A location of a target reference template among the candidate reference templates is determined, where the target reference template achieves a best match with the current template. A refined motion vector (MV) is determined by refining an initial MV according to the location of the target reference template.
Type: Application
Filed: August 12, 2022
Publication date: October 24, 2024
Inventors: Olena CHUBACH, Chun-Chia CHEN, Man-Shu CHIANG, Tzu-Der CHUANG, Ching-Yeh CHEN, Chih-Wei HSU, Yu-Wen HUANG
-
Publication number: 20240357081
Abstract: A method and apparatus for a video coding system that utilizes low-latency template-matching motion-vector refinement are disclosed. According to this method, input data associated with a current block of a video unit in a current picture are received. Motion compensation is then applied to the current block according to an initial motion vector (MV) to obtain initial motion-compensated predictors of the current block. After applying the motion compensation to the current block, template-matching MV refinement is applied to the current block to obtain a refined MV for the current block. The current block is then encoded or decoded using information including the refined MV. The method may further comprise determining gradient values of the initial motion-compensated predictors. The initial motion-compensated predictors can be adjusted by taking into consideration the gradient values and/or the MV difference between the refined and initial MVs.
Type: Application
Filed: August 18, 2022
Publication date: October 24, 2024
Inventors: Chun-Chia CHEN, Olena CHUBACH, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
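The predictor adjustment can be pictured as a first-order, optical-flow-style correction that uses the predictor's spatial gradients and the MV difference, as in the sketch below; the exact adjustment formula is an assumption.

```python
import numpy as np

def adjust_predictor(init_pred, refined_mv, init_mv):
    """Adjust the initial motion-compensated predictor using its spatial
    gradients and the MV difference (an assumed first-order correction)."""
    dmv_x = refined_mv[0] - init_mv[0]
    dmv_y = refined_mv[1] - init_mv[1]
    gy, gx = np.gradient(init_pred)        # vertical and horizontal gradients
    return init_pred + gx * dmv_x + gy * dmv_y

# Toy usage: a smooth ramp predictor and a quarter-sample MV refinement.
pred = np.outer(np.arange(4.0), np.ones(4)) * 4 + np.arange(4.0)
print(adjust_predictor(pred, refined_mv=(0.25, -0.25), init_mv=(0.0, 0.0)))
```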
-
Publication number: 20240357083
Abstract: A method and apparatus for a video coding system that utilizes low-latency template-matching motion-vector refinement are disclosed. According to this method, a current template for a current block is determined, where the current template includes an inside current template including inside prediction samples or inside partially reconstructed samples inside the current block. The inside partially reconstructed samples are derived by adding a DC value of the current block to the inside prediction samples. Corresponding candidate reference templates associated with the current block are determined at a set of candidate locations. A location of a target reference template among the candidate reference templates that achieves a best match between the current template and the candidate reference templates is determined. An initial motion vector (MV) is then refined according to the location of the target reference template.
Type: Application
Filed: August 12, 2022
Publication date: October 24, 2024
Inventors: Olena CHUBACH, Chun-Chia CHEN, Man-Shu CHIANG, Tzu-Der CHUANG, Ching-Yeh CHEN, Chih-Wei HSU, Yu-Wen HUANG
-
Patent number: 12095993
Abstract: A method and apparatus for video coding using a coding mode belonging to a mode group comprising an Intra Block Copy (IBC) mode and an Intra mode are disclosed. According to the present invention, for both IBC and Intra mode, a same default scaling matrix is used to derive the scaling matrix for a current block. In another embodiment, for the current block with block size of M×N or N×M, and M greater than N, a target scaling matrix is derived from an M×M scaling matrix by down-sampling the M×M scaling matrix to an M×N or N×M scaling matrix.
Type: Grant
Filed: March 6, 2020
Date of Patent: September 17, 2024
Assignee: HFI INNOVATION INC.
Inventors: Chen-Yen Lai, Olena Chubach, Tzu-Der Chuang, Ching-Yeh Chen, Chih-Wei Hsu, Yu-Wen Huang
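The down-sampling derivation can be sketched as simple subsampling of the M×M matrix to the target block shape, as below; the exact down-sampling filter is an assumption.

```python
import numpy as np

def downsample_scaling_matrix(mat_mm, rows, cols):
    """Derive a rows x cols scaling matrix from an M x M matrix by simple
    subsampling (one plausible down-sampling; the filter is not fixed here)."""
    m = mat_mm.shape[0]
    r_idx = (np.arange(rows) * m) // rows
    c_idx = (np.arange(cols) * m) // cols
    return mat_mm[np.ix_(r_idx, c_idx)]

# Toy usage: derive an 8x4 scaling matrix from an 8x8 default matrix.
default_8x8 = np.arange(16, 16 + 64).reshape(8, 8)
print(downsample_scaling_matrix(default_8x8, 8, 4))
```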
-
Patent number: 12041248
Abstract: A down-sample video coding system is provided. A decoding system receives to-be-decoded data from a bitstream for one or more pictures of a video. Each picture includes pixels having different color components. The decoding system receives up-down-sampling parameters that are applicable to a current video unit in the received data. The up-down-sampling parameters include different subsets for different color components. The decoding system decodes the data to reconstruct the current video unit. The decoding system up-samples the reconstructed current video unit according to the up-down-sampling parameters. The different color components of the current video unit are up-sampled according to different subsets of the up-down-sampling parameters.
Type: Grant
Filed: July 15, 2022
Date of Patent: July 16, 2024
Assignee: MediaTek Singapore Pte. Ltd.
Inventors: Olena Chubach, Yu-Wen Huang, Ching-Yeh Chen
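A toy sketch of per-component up-sampling, assuming each colour plane carries its own parameter subset (here just an up-sampling factor) and a nearest-neighbour filter stands in for whatever filter the parameters select.

```python
import numpy as np

def upsample_component(plane, factor):
    """Nearest-neighbour up-sampling of one colour plane by an integer factor
    (a stand-in for the filter selected by the signalled parameters)."""
    return np.repeat(np.repeat(plane, factor, axis=0), factor, axis=1)

def upsample_video_unit(planes, params_per_component):
    """Up-sample each colour component with its own parameter subset."""
    return {c: upsample_component(p, params_per_component[c]["factor"])
            for c, p in planes.items()}

# Toy usage: luma coded at half resolution, chroma at quarter resolution.
planes = {"Y": np.ones((2, 2)), "Cb": np.ones((1, 1)), "Cr": np.ones((1, 1))}
params = {"Y": {"factor": 2}, "Cb": {"factor": 4}, "Cr": {"factor": 4}}
out = upsample_video_unit(planes, params)
print({c: p.shape for c, p in out.items()})
```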
-
Patent number: 11979613
Abstract: Encoding methods and apparatuses include receiving input video data of a current block in a current picture and applying a Cross-Component Adaptive Loop Filter (CCALF) processing on the current block based on cross-component filter coefficients to refine chroma components of the current block according to luma sample values. The method further includes signaling two Adaptive Loop Filter (ALF) signal flags and two CCALF signal flags in an Adaptation Parameter Set (APS) with an APS parameter type equal to ALF or parsing two ALF signal flags and two CCALF signal flags from an APS with an APS parameter type equal to ALF, signaling or parsing one or more Picture Header (PH) CCALF syntax elements or Slice Header (SH) CCALF syntax elements, wherein both ALF and CCALF signaling are present either in a PH or SH, and encoding or decoding the current block in the current picture.
Type: Grant
Filed: June 28, 2022
Date of Patent: May 7, 2024
Assignee: HFI INNOVATION INC.
Inventors: Ching-Yeh Chen, Olena Chubach, Chen-Yen Lai, Tzu-Der Chuang, Chih-Wei Hsu, Yu-Wen Huang
-
Patent number: 11882270
Abstract: Method and apparatus for signaling or parsing constrained active entries in reference picture lists for multi-layer coding are disclosed. For the decoder side, when the current picture is a RADL (Random Access Decodable Leading) picture, reference picture list 0 or reference picture list 1 of the current picture is mandatorily required to contain no active entry corresponding to a RASL (Random Access Skipped Leading) picture with pps_mixed_nalu_types_in_pic_flag equal to 0 or a picture that precedes an associated IRAP (Intra Random Access Point) picture in decoding order, and wherein an active entry in the reference picture list 0 or the reference picture list 1 of the RADL picture can refer to a RASL picture with the pps_mixed_nalu_types_in_pic_flag equal to 1 and a referenced RASL picture either belongs to the same layer or a different layer than a layer containing the current picture which is the RADL picture.
Type: Grant
Filed: June 8, 2021
Date of Patent: January 23, 2024
Assignee: HFI INNOVATION INC.
Inventors: Shih-Ta Hsiang, Lulin Chen, Chih-Wei Hsu, Olena Chubach
-
Patent number: 11778235
Abstract: A method for performing transform skip mode (TSM) in a video decoder is provided. A video decoder receives data from a bitstream to be decoded as a plurality of video pictures. The video decoder parses the bitstream for a first syntax element in a sequence parameter set (SPS) of a current sequence of video pictures. When the first syntax element indicates that transform skip mode is allowed for the current sequence of video pictures and when transform skip mode is used for a current block in a current picture of the current sequence, the video decoder reconstructs the current block by using quantized residual signals that are not transformed.
Type: Grant
Filed: April 11, 2022
Date of Patent: October 3, 2023
Assignee: HFI INNOVATION INC.
Inventors: Shih-Ta Hsiang, Lulin Chen, Tzu-Der Chuang, Chih-Wei Hsu, Ching-Yeh Chen, Olena Chubach, Yu-Wen Huang
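Transform-skip reconstruction can be sketched in a few lines: the parsed residual is de-quantized and added to the prediction with no inverse transform applied; a simple uniform quantization step is assumed below.

```python
import numpy as np

def reconstruct_transform_skip(prediction, quantized_residual, qstep):
    """Transform-skip reconstruction: the residual is de-quantized and added
    to the prediction directly, with no inverse transform applied."""
    residual = quantized_residual * qstep          # simple uniform de-quantization
    return prediction + residual

# Toy usage with a flat prediction and a small residual block.
pred = np.full((2, 2), 128.0)
qres = np.array([[1, 0], [-1, 2]])
print(reconstruct_transform_skip(pred, qres, qstep=4.0))
```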