Ching-Yeh Chen has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
Abstract: Video processing methods and apparatuses in a video encoding or decoding system for processing a video picture partitioned into blocks with one or more partition constraints. The video encoding or decoding system receives input data of a current block and checks whether a predefined splitting type is allowed to partition the current block according to first and second constraints. The first constraint restricts each sub-block partitioned from the current block to be completely contained in one pipeline unit, and the second constraint restricts each sub-block partitioned from the current block to contain one or more complete pipeline units. The pipeline units are non-overlapping units in the video picture designed for pipeline processing. The current block is not partitioned by the predefined splitting type if any sub-block partitioned by the predefined splitting type violates both the first and second constraints. The system encodes or decodes the current block.
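The two partition constraints above can be sketched as a simple geometric check. This is a minimal illustration, assuming square pipeline units and an (x, y, width, height) sub-block convention; the function names and grid model are not from the patent:

```python
def satisfies_pipeline_constraints(x, y, w, h, unit_size):
    """Check the two constraints from the abstract for a sub-block at
    (x, y) with width w and height h, given square pipeline units of
    side unit_size (illustrative assumptions).

    Constraint 1: the sub-block is completely contained in one pipeline unit.
    Constraint 2: the sub-block contains one or more complete pipeline units.
    """
    # Constraint 1: top-left and bottom-right corners fall in the same unit.
    same_unit = (x // unit_size == (x + w - 1) // unit_size and
                 y // unit_size == (y + h - 1) // unit_size)
    # Constraint 2: sub-block boundaries align with the pipeline-unit grid.
    grid_aligned = (x % unit_size == 0 and y % unit_size == 0 and
                    w % unit_size == 0 and h % unit_size == 0)
    return same_unit or grid_aligned


def split_allowed(sub_blocks, unit_size):
    # Per the abstract, the split is disallowed if ANY sub-block
    # violates both constraints.
    return all(satisfies_pipeline_constraints(x, y, w, h, unit_size)
               for (x, y, w, h) in sub_blocks)
```

A 64×64 sub-block starting at x = 32 on a 64-sample pipeline grid, for example, straddles two units without covering either completely, so a split producing it would be disallowed.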
Abstract: An intra prediction method comprises receiving input data of a current block in a current picture, deriving multiple Most Probable Modes (MPMs) to be included in an MPM list for the current block, setting the remaining intra prediction modes as non-MPMs, and encoding or decoding the current block according to a current intra prediction mode selected from the MPMs and non-MPMs. The first MPM in the MPM list is Planar mode for blocks coded or to be coded in intra prediction, and one or more other MPMs in the MPM list are derived according to the number of available angular modes of one or more neighboring blocks of the current block.
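A hedged sketch of such an MPM list construction follows. The mode numbering (0 = Planar, 1 = DC, 2–66 = angular), the ±1 angular offsets, and the default fill modes loosely follow common VVC-style practice and are assumptions, not the exact derivation in the claims:

```python
PLANAR, DC = 0, 1  # assumed mode numbering, loosely VVC-style


def build_mpm_list(neighbor_modes, list_size=6):
    """Build an MPM list whose first entry is always Planar; remaining
    entries are derived from the available angular modes of the
    neighbouring blocks (illustrative derivation)."""
    mpm = [PLANAR]
    angular = [m for m in neighbor_modes if m > DC]  # available angular modes
    for m in angular:
        if m not in mpm:
            mpm.append(m)
    # Add +/-1 offsets of the available angular modes (wrapping in 2..66).
    for m in angular:
        for off in (-1, 1):
            cand = 2 + ((m - 2 + off) % 65)
            if len(mpm) < list_size and cand not in mpm:
                mpm.append(cand)
    # Fill any remaining slots with default modes.
    for d in (DC, 50, 18, 46, 54):
        if len(mpm) < list_size and d not in mpm:
            mpm.append(d)
    return mpm[:list_size]
```

Modes not placed in the list would be coded as non-MPMs, matching the split described in the abstract.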
Abstract: A method and apparatus of Inter prediction for video coding using Multi-hypothesis (MH) are disclosed. If an MH mode is used for the current block: at least one MH candidate is derived using reduced reference data by adjusting at least one coding-control setting; an Inter candidate list is generated, where the Inter candidate list comprises said at least one MH candidate; and current motion information associated with the current block is encoded using the Inter candidate list at the video encoder side or the current motion information associated with the current block is decoded at the video decoder side using the Merge candidate list. The coding control setting may correspond to prediction direction setting, filter tap setting, block size of reference block to be fetched, reference picture setting or motion limitation setting.
Abstract: A deblocking filtering method includes receiving reconstructed video data associated with a block boundary in a video coding system. The block boundary has N lines of samples crossing the boundary from a P side to a Q side. The method determines whether to apply a first filter set to reduce block artifacts at the block boundary based on whether a first inter-side difference of a first line of the N lines of samples is greater than an inter-side difference threshold. When it is determined to apply the first filter set, a filter length of a filter in the first filter set is determined based on a first side length of the P side and a second side length of the Q side, and at least one filter in the first filter set is applied with the determined filter length on the block boundary.
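The decision and length selection can be sketched as follows. The |p0 − q0| gap as the inter-side difference and the concrete side-length-to-filter-length mapping are stand-in assumptions; the patent's actual measures may differ:

```python
def deblock_decision(p_line, q_line, threshold):
    """Decide whether to apply the first filter set for one line of
    samples: compare an inter-side difference against a threshold.
    Here the difference is |p0 - q0| at the boundary (an assumption)."""
    inter_side_diff = abs(p_line[-1] - q_line[0])  # samples nearest the boundary
    return inter_side_diff > threshold


def filter_length(p_side_len, q_side_len):
    """Pick a filter length per side from its side length; the exact
    thresholds (32/8) and lengths (7/5/3) are illustrative."""
    def length(side_len):
        if side_len >= 32:
            return 7
        if side_len >= 8:
            return 5
        return 3
    return length(p_side_len), length(q_side_len)
```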
Abstract: Method and apparatus of using Bi-directional optical flow (BIO) for a true bi-direction predicted block are disclosed. According to one method of the present invention, the gradients are limited to a predefined bit-depth in order to reduce the computational complexity. According to another method, the data range of the gradient sum and/or the difference between L0 and L1 interpolated pixels is shifted by a predefined bit-depth. The predefined bit-depth can be implicitly determined or signalled in a bitstream at a sequence level, picture level or slice level. The predefined bit-depth can also be determined depending on the input bit-depth.
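The two range-limiting approaches above can be sketched in a few lines; the two's-complement clamp and the arithmetic right shift are standard operations, but their pairing with specific BIO terms here is illustrative:

```python
def clip_to_bit_depth(value, bit_depth):
    """Limit a signed gradient value to the range representable in
    `bit_depth` bits (two's complement), sketching the first method."""
    lo = -(1 << (bit_depth - 1))
    hi = (1 << (bit_depth - 1)) - 1
    return max(lo, min(hi, value))


def shift_to_bit_depth(value, shift):
    """Second method: reduce the data range of a gradient sum or an
    L0/L1 interpolated-pixel difference by a right shift. Python's >>
    is an arithmetic shift, so the sign is preserved."""
    return value >> shift
```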
June 10, 2019
Date of Patent: October 19, 2021
Inventors: Yu-Chi Su, Ching-Yeh Chen, Tzu-Der Chuang, Chen-Yen Lai
Abstract: Video data processing methods and apparatuses receive input data associated with a current split node partitioned from a parent node by a splitting type, determine a depth of the current split node according to the splitting type, and compare the depth of the current split node with a maximum delta QP signaling depth. A video decoding system derives a delta QP from one or more syntax elements signaled in a TU associated with the current split node according to the comparison result, reconstructs a final QP for the current split node based on a reference QP and the delta QP, and decodes one or more TUs associated with the current split node using the final QP. The depth is counted in a way that accounts for different splitting types and splitting partitions.
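The depth comparison driving delta-QP signalling can be sketched as follows. The half-depth counting convention (quad split = 2 units, binary/ternary split = 1 unit) is one plausible way to count depth across splitting types and is an assumption, not the claimed scheme:

```python
def node_depth(parent_depth, split_type):
    """Depth of a split node, counted in half-depth units so that
    different splitting types contribute different increments
    (illustrative convention)."""
    increments = {"quad": 2, "binary": 1, "ternary": 1}
    return parent_depth + increments[split_type]


def delta_qp_signalled(depth, max_delta_qp_depth):
    """A delta QP is parsed for the node only when its depth does not
    exceed the maximum delta-QP signalling depth."""
    return depth <= max_delta_qp_depth


def final_qp(reference_qp, delta_qp):
    # Reconstruct the final QP from the reference QP and the delta QP.
    return reference_qp + delta_qp
```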
Abstract: Video processing methods and apparatuses for processing video pictures referring to a high-level syntax set include receiving input data, determining a first syntax element indicating whether reference picture resampling is disabled or constrained, determining a second syntax element indicating whether subpicture partitioning is disabled or constrained, and encoding or decoding the video pictures. The first and second syntax elements are restricted to disable or constrain subpicture partitioning when reference picture resampling is enabled, or to disable or constrain reference picture resampling when subpicture partitioning is enabled. The first syntax element and the second syntax element are signaled in the high-level syntax set.
Abstract: A method of video coding using generalized bi-prediction (GBi) receives input data associated with a current block in a current picture, wherein the input data comprises information associated with a block size of the current block, determines a set of weighting factor pairs, wherein the size of the set of weighting factor pairs depends on the block size of the current block, and derives a set of advanced motion vector prediction (AMVP) candidate lists comprising MVP (motion vector prediction) candidates. The method further derives a set of final motion information based on the MVP candidates, determines that the set of final motion information comprises a bi-prediction predictor, generates a final predictor by combining two reference blocks associated with the final motion information using a target weighting factor pair selected from the set of weighting factor pairs, and encodes or decodes the current block using the final predictor.
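The weighted combination and the size-dependent weight set can be sketched as below. The specific weight pairs, the denominator of 8, and the 256-sample threshold are assumptions chosen to resemble common GBi practice, not the claimed values:

```python
def gbi_predictor(ref0, ref1, weight_pair, denom=8):
    """Combine two reference blocks (flat sample lists) with a
    weighting-factor pair (w0, w1), assumed to satisfy w0 + w1 == denom,
    with rounding."""
    w0, w1 = weight_pair
    return [(w0 * p0 + w1 * p1 + denom // 2) // denom
            for p0, p1 in zip(ref0, ref1)]


def weight_pair_set(block_width, block_height):
    """Return a set of weighting-factor pairs whose size depends on the
    block size: smaller blocks get a reduced set (threshold is an
    assumption for illustration)."""
    small_set = ((4, 4), (5, 3), (3, 5))
    full_set = ((4, 4), (5, 3), (3, 5), (10, -2), (-2, 10))
    return small_set if block_width * block_height < 256 else full_set
```

For equal weights (4, 4) the combination reduces to the ordinary bi-prediction average.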
Abstract: A video processing method comprises receiving input data of a current block, checking whether the current block satisfies one or more predefined criteria, and setting the current block to be a root block if it does. If the current block is a root block, the color components of the blocks within the current block are not checked against the predefined criteria, and the one or more color components of one or more blocks in the current block are encoded or decoded using neighboring reconstructed samples of the one or more color components of the current block as reference samples. If the current block is not a root block, each block in the current block is encoded or decoded using neighboring reconstructed samples of that block as reference samples.
Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to encode or decode video data. A current block of video data is coded using affine prediction. A first set of candidates of a candidate list for the current block is generated, including determining one or more inherited candidates and deriving one or more constructed candidates. After generating the first set of candidates, it is determined whether the candidate list is full. Upon determining the candidate list is not full, the candidate list is filled by generating a second set of candidates of the candidate list, including one or more of generating one or more zero motion vector candidates, generating one or more additional derived candidates based on the plurality of associated neighboring blocks of the current block, and generating a temporal motion vector candidate based on a temporal collocated picture.
Abstract: A training method for a memory system is provided. The memory system includes a memory controller and a memory. The memory controller is connected with the memory. The training method includes the following steps. Firstly, the memory samples n command/address signals according to a first signal edge and a second signal edge of a clock signal to acquire a first sampled content and a second sampled content. The memory then selectively outputs one of the first sampled content and the second sampled content through m data signals to the memory controller in response to a control signal. Moreover, m is larger than n and smaller than 2n.
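The sampling-and-select step can be modelled in a few lines. Representing the n command/address signals as bit tuples and the control signal as a boolean is an illustrative simplification of the hardware behaviour:

```python
def sample_command_address(rising_bits, falling_bits, control_select_first):
    """Model the training step: the memory captures the n command/address
    signals on both clock edges (first and second sampled contents) and,
    per the control signal, returns one of the two over the data signals."""
    first_sampled = tuple(rising_bits)    # sampled on the first signal edge
    second_sampled = tuple(falling_bits)  # sampled on the second signal edge
    return first_sampled if control_select_first else second_sampled


def valid_data_width(n, m):
    """The abstract constrains the number of data signals: n < m < 2n."""
    return n < m < 2 * n
```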
Abstract: Methods and apparatus of Inter prediction using coding modes including an affine mode are disclosed. According to one method, if the target neighbouring block is in a neighbouring region of the current block, an affine control-point MV candidate is derived based on two target MVs (motion vectors) of the target neighbouring block where the affine control-point MV candidate is based on a 4-parameter affine model and the target neighbouring block is coded in a 6-parameter affine mode. According to another method, if the target neighbouring block is in a neighbouring region of the current block, an affine control-point MV candidate is derived based on two sub-block MVs (motion vectors) of the target neighbouring block, if the target neighbouring block is in a same region as the current block, the affine control-point MV candidate is derived based on control-point MVs of the target neighbouring block.
June 20, 2019
September 23, 2021
Inventors: Tzu-Der Chuang, Ching-Yeh Chen, Zhi-Yi Lin
Abstract: Method and apparatus of video coding are disclosed. According to one method, the left reference boundary samples and the top reference boundary samples are checked jointly. According to another method, selected original left reference boundary samples and selected original top reference boundary samples at specific positions are used for predictor up-sampling. According to yet another method, the horizontal interpolation and the vertical interpolation are performed in a fixed order regardless of the shape of the current block, the size of the current block, or both.
Abstract: Aspects of the disclosure provide a video coding method for processing a current prediction unit (PU) with a sub-PU temporal motion vector prediction (TMVP) mode. The method can include receiving the current PU including sub-PUs, determining an initial motion vector that is a motion vector of a spatial neighboring block of the current PU, performing a searching process to search for a main collocated picture in a sequence of reference pictures of the current PU based on the initial motion vector, and obtaining collocated motion information in the main collocated picture for the sub-PUs of the current PU. The searching process can include turning on motion vector scaling operation for searching a subset of the sequence of reference pictures, and turning off the motion vector scaling operation for searching the other reference pictures in the sequence of reference pictures.
Abstract: Video processing methods and apparatuses for candidate set determination for binary-tree splitting blocks comprise receiving input data of a current block partitioned from a parent block by binary-tree splitting, determining a candidate set for the current block by prohibiting a spatial candidate derived from a neighboring block partitioned from the same parent block, or determining the candidate set for the current block by conducting a pruning process if the neighboring block is coded in Inter prediction, and encoding or decoding the current block based on the candidate set by selecting one final candidate from the candidate set. The pruning process comprises scanning the candidate set to determine whether any candidate equals the spatial candidate derived from the neighboring block, and removing any candidate that equals the spatial candidate from the candidate set.
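The pruning process described above amounts to a scan-and-remove pass over the candidate set. Representing candidates as (mv_x, mv_y, ref_idx) tuples is an illustrative choice, not the patent's data model:

```python
def prune_candidate_set(candidates, spatial_candidate):
    """Scan the candidate set and remove every candidate equal to the
    spatial candidate derived from the neighbouring block partitioned
    from the same parent block."""
    return [c for c in candidates if c != spatial_candidate]
```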
Abstract: A method and apparatus for block partition in video encoding and decoding are disclosed. According to one method, a current data unit is partitioned into initial blocks using inferred splitting without split-syntax signalling. The initial blocks comprise multiple initial luma blocks and multiple initial chroma blocks; the size of each initial luma block is M×N, where M and N are positive integers, and the current data unit is larger than M×N for the luma component. A partition structure is determined for partitioning each initial luma block and each initial chroma block into one or more luma CUs (coding units) and one or more chroma CUs respectively. The luma syntaxes and the chroma syntaxes associated with one initial block in the current data unit are signalled or parsed, and then the luma syntaxes and the chroma syntaxes associated with the next initial block in the current data unit are signalled or parsed.
Abstract: A method and apparatus use an Inter coding tool and OBMC (Overlapped Block Motion Compensation). According to one implementation, a method of video coding using OBMC (Overlapped Block Motion Compensation) operates by receiving input data associated with a current block, wherein the input data correspond to pixel data to be coded at an encoder side or coded data to be decoded at a decoder side, applying the OBMC to the current block depending on one or more constraints, and signaling an OBMC syntax conditionally at the encoder side or parsing the OBMC syntax conditionally at the decoder side for the current block, wherein the OBMC syntax indicates whether the OBMC is applied to the current block.
Abstract: Exemplary video processing methods and apparatuses for coding a current block by overlapped sub-block motion compensation split the current block into overlapped sub-blocks, determine a sub-block MV for each overlapped sub-block, derive an initial predictor for each sub-block by motion compensation using the sub-block MV, derive a final predictor for each overlapped region by blending the initial predictors of the overlapped region, and encode or decode the current block based on the final predictors. Exemplary video processing methods and apparatuses for coding blocks with OBMC generate a converted MV by changing an MV to an integer MV or changing an MV component to an integer component, derive an OBMC region by motion compensation using the converted MV, and encode or decode the blocks by blending an OBMC predictor with an original predictor.
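Both operations above, the blending and the integer-MV conversion, can be sketched compactly. The 3:1 blending weights and the 1/16-pel MV precision are assumptions chosen to resemble common OBMC practice, not the claimed values:

```python
def blend_obmc(original, obmc, weights=(3, 1)):
    """Blend an OBMC predictor with the original predictor sample by
    sample, with rounding. The default 3:1 weighting is illustrative."""
    w_orig, w_obmc = weights
    total = w_orig + w_obmc
    return [(w_orig * o + w_obmc * b + total // 2) // total
            for o, b in zip(original, obmc)]


def to_integer_mv(mv, frac_bits=4):
    """Convert an MV to an integer MV by dropping the fractional part,
    rounding toward zero (1/16-pel precision assumed, i.e. 4 fractional
    bits per component)."""
    def trunc(v):
        return (abs(v) >> frac_bits) * (1 if v >= 0 else -1) << frac_bits
    x, y = mv
    return trunc(x), trunc(y)
```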
Abstract: Method and apparatus of using Bi-directional optical flow (BIO) for a true bi-direction predicted block are disclosed. According to one method of the present invention, a division-free BIO process is disclosed, where the x-motion offset and y-motion offset are derived using operations including right-shifting without any division operation. According to another method, a refined predictor is generated for the current block by applying BIO process to the reference blocks, where said applying the BIO process comprises applying a boundary-condition-dependent BIO process conditionally to boundary pixels associated with the reference blocks.
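The division-free derivation above replaces divisions with right shifts. The sketch below uses the usual BIO correlation terms (s1..s6 naming) and assumes the denominators have been normalised to a power of two; the exact formulas in the claims are not reproduced:

```python
def bio_motion_offsets(s1, s2, s3, s5, s6, shift=6):
    """Division-free approximation of the BIO x/y motion offsets: the
    conventional vx = s3 / s1 and vy = (s6 - vx * s2) / s5 divisions are
    replaced by arithmetic right shifts, assuming the denominators are
    normalised to 2**shift (an illustrative simplification)."""
    vx = s3 >> shift if s1 > 0 else 0
    vy = (s6 - vx * s2) >> shift if s5 > 0 else 0
    return vx, vy
```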
Abstract: Methods and apparatus of motion compensation using the bi-directional optical flow (BIO) techniques are disclosed. According to one method of the present invention, the BIO process is applied to encode or decode a bi-directional current block in Merge mode only or in AMVP (advanced motion vector prediction) mode only. According to another method, the BIO process is conditionally applied to encode or decode the current block depending on a jointly-coded flag if the current block is coded using a bi-prediction mode. According to yet another method, the x-offset value vx and y-offset value vy for the current block are added to the current motion vector to form a final motion vector. The final motion vector is then used as a reference motion vector for following blocks. In still yet another method, the BIO process is applied to the chroma component.