Patents by Inventor Kai Zhang

Kai Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250150604
    Abstract: Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current video block of a video and a bitstream of the video, a sign prediction of a block vector difference (BVD) of the current video block; determining the BVD at least based on the sign prediction of the BVD; and performing the conversion based on the BVD.
    Type: Application
    Filed: January 10, 2025
    Publication date: May 8, 2025
    Inventors: Na ZHANG, Kai ZHANG, Li ZHANG
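    Illustrative sketch (not from the patent; the function name, flag semantics, and sign-handling rule are assumptions): reconstructing a signed block vector difference from decoded magnitudes plus a sign prediction.

      # Hypothetical illustration of combining predicted signs with coded BVD
      # magnitudes; the flag semantics are assumed, not the claimed scheme.
      def reconstruct_bvd(magnitudes, predicted_signs, sign_correct_flags):
          """Return signed BVD components from magnitudes and sign predictions."""
          bvd = []
          for mag, pred, ok in zip(magnitudes, predicted_signs, sign_correct_flags):
              sign = pred if ok else -pred   # flip the sign if the prediction failed
              bvd.append(sign * mag)
          return tuple(bvd)

      # Example: magnitudes (3, 5), predicted signs (+1, -1), second prediction wrong.
      print(reconstruct_bvd((3, 5), (+1, -1), (True, False)))   # -> (3, 5)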
  • Publication number: 20250148654
    Abstract: A method for processing video data is disclosed. The method includes: performing a conversion between visual media data and a bitstream of the visual media data by an image compression framework, wherein the image compression framework includes a preprocessing function and a compressor, and the visual media data is processed by the preprocessing function and the compressor sequentially, and wherein the preprocessing function receives a first image among the visual media data with a size of W0×H0×C0 as input and outputs a preprocessed first image with a size of W1×H1×C1, wherein W0 is an input width, H0 is an input height, and C0 is an input channel number, and wherein W1 is an output width, H1 is an output height, and C1 is an output channel number.
    Type: Application
    Filed: January 7, 2025
    Publication date: May 8, 2025
    Inventors: Meng Wang, Kai Zhang, Li Zhang
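    Toy sketch of the described pipeline shape (preprocessing function followed by a compressor, W0×H0×C0 in, W1×H1×C1 out). The average-pooling preprocessor, the stub compressor, and C1 == C0 are assumptions, not the patented framework.

      import numpy as np

      def preprocess(image: np.ndarray, factor: int = 2) -> np.ndarray:
          """Average-pool a W0 x H0 x C0 image down to W1 x H1 x C1 (here C1 == C0)."""
          w0, h0, c0 = image.shape
          w1, h1 = w0 // factor, h0 // factor
          pooled = image[:w1 * factor, :h1 * factor, :].reshape(w1, factor, h1, factor, c0)
          return pooled.mean(axis=(1, 3))        # shape: (W1, H1, C1)

      def compress(image: np.ndarray) -> bytes:
          """Stub compressor: quantize to 8 bits and serialize (no real entropy coding)."""
          return np.clip(image, 0, 255).astype(np.uint8).tobytes()

      image = np.random.rand(64, 48, 3) * 255    # W0=64, H0=48, C0=3
      bitstream = compress(preprocess(image))
      print(len(bitstream))                      # 32 * 24 * 3 bytes for factor=2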
  • Publication number: 20250150580
    Abstract: A video processing method includes deriving multiple temporal motion vector prediction (TMVP) candidates for a video block in a current picture based on multiple blocks associated with a second block in one or more pictures that are temporally co-located with the current picture, wherein the current picture is excluded from the one or more pictures, and the second block is temporally collocated with the video block, wherein the second block has a same size as the video block, and wherein a relative position of the second block to a top-left corner of a second picture of the one or more pictures is same as that of the video block to a top-left corner of the current picture; adding the multiple TMVP candidates to a motion candidate list associated with the video block; and performing a conversion between the video block and a bitstream.
    Type: Application
    Filed: January 13, 2025
    Publication date: May 8, 2025
    Inventors: Li Zhang, Kai Zhang, Hongbin Liu, Yue Wang
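    Rough sketch of gathering several temporal motion vector predictor candidates from blocks associated with a co-located block and appending them to a candidate list; the sampled positions, the dictionary-based motion field, and the duplicate pruning are assumptions for illustration only.

      # Hypothetical sketch: collect multiple TMVP candidates from a temporally
      # co-located picture and add them to a motion candidate list.
      def collect_tmvp_candidates(colocated_mv_field, x, y, w, h, max_candidates=3):
          """(x, y) is the top-left of the co-located block (same size and relative
          position as the current block); the probed positions are illustrative."""
          positions = [(x, y), (x + w, y + h), (x + w // 2, y + h // 2)]
          candidates = []
          for pos in positions:
              mv = colocated_mv_field.get(pos)
              if mv is not None and mv not in candidates:   # simple duplicate pruning
                  candidates.append(mv)
              if len(candidates) == max_candidates:
                  break
          return candidates

      field = {(0, 0): (2, -1), (16, 16): (2, -1), (8, 8): (0, 3)}
      motion_candidate_list = collect_tmvp_candidates(field, 0, 0, 16, 16)
      print(motion_candidate_list)                          # -> [(2, -1), (0, 3)]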
  • Publication number: 20250150575
    Abstract: Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: obtaining, for a conversion between a current video block of a video and a bitstream of the video, a motion candidate list for the current video block; determining, based on a similarity metric between a first motion candidate and a second motion candidate in the motion candidate list, whether to update the motion candidate list; and performing the conversion based on the determination.
    Type: Application
    Filed: December 20, 2024
    Publication date: May 8, 2025
    Inventors: Mehdi Salehifar, Yuwen He, Kai Zhang, Na Zhang, Li Zhang
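    Minimal sketch of deciding whether to update (prune) a motion candidate list based on a similarity metric between two candidates; the L1 distance and the threshold are assumptions, not the claimed metric.

      # Hypothetical sketch: prune a motion candidate list when two candidates
      # are too similar under an assumed L1 distance metric.
      def mv_similarity(mv_a, mv_b):
          """Smaller value means more similar (L1 distance between motion vectors)."""
          return abs(mv_a[0] - mv_b[0]) + abs(mv_a[1] - mv_b[1])

      def maybe_prune(candidates, threshold=1):
          """Drop the second candidate if it is within `threshold` of the first."""
          if len(candidates) >= 2 and mv_similarity(candidates[0], candidates[1]) <= threshold:
              return [candidates[0]] + candidates[2:]
          return candidates

      print(maybe_prune([(4, 2), (4, 3), (-8, 0)]))   # -> [(4, 2), (-8, 0)]
      print(maybe_prune([(4, 2), (0, 0)]))            # unchanged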
  • Publication number: 20250148652
    Abstract: Embodiments of the present disclosure provide a solution for point cloud coding. A method for point cloud coding is proposed. The method comprises: determining, for a conversion between a current frame of a point cloud sequence and a bitstream of the point cloud sequence, first attribute information of a first node of the current frame based on second attribute information of a second node of the current frame, a node representing a spatial partition of the current frame, a first partition depth of the first node being different from a second partition depth of the second node; and performing the conversion based on the first attribute information.
    Type: Application
    Filed: January 9, 2025
    Publication date: May 8, 2025
    Inventors: Wenyi WANG, Yingzhan XU, Kai ZHANG, Li ZHANG
  • Publication number: 20250150599
    Abstract: Embodiments of the disclosure provide a solution for video processing. A method for video processing is proposed. The method includes: generating, for a conversion between a video unit of a video and a bitstream of the video, an intra mode for the video unit based on coding information associated with the video unit, wherein the video unit is an intra template matching (TM) coded block or the video unit is an intra block copy (IBC) coded block; and performing the conversion based on the generated intra mode.
    Type: Application
    Filed: January 13, 2025
    Publication date: May 8, 2025
    Inventors: Zhipin Deng, Kai Zhang, Li Zhang
  • Patent number: 12294701
    Abstract: A method of video processing is described. The method includes performing a conversion between a chroma block of a video region of a video picture of a video and a coded representation of the video according to a rule. The rule specifies that, due to the chroma block having a size M×N, the chroma block is disallowed to be represented in the coded representation using an intra coding mode. M and N are integers that indicate a width and a height of the chroma block, respectively. The intra coding mode includes coding the chroma block based on a previously coded video region of the video picture.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: May 6, 2025
    Assignees: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., BYTEDANCE INC.
    Inventors: Jizheng Xu, Zhipin Deng, Li Zhang, Hongbin Liu, Kai Zhang
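    Simple sketch of the kind of size-based restriction described; the concrete disallowed shape used here (width of 2) is an assumption, as the actual condition on M and N is defined by the claims.

      # Hypothetical sketch of a size-based intra-mode restriction for chroma blocks.
      def intra_mode_allowed_for_chroma(width: int, height: int) -> bool:
          """Return False for chroma block sizes the rule disallows (assumed 2xN)."""
          return width != 2

      for size in [(2, 8), (4, 4), (2, 2)]:
          print(size, intra_mode_allowed_for_chroma(*size))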
  • Patent number: 12294739
    Abstract: A video processing method includes performing a conversion between a video including a video unit and a coded representation of the video, where, after the video unit is encoded or decoded with an intra prediction mode, one or more frequence tables and/or one or more sorted intra prediction mode (IPM) tables are selectively updated according to a rule, where the one or more frequence tables include information about frequence of the intra prediction mode used for processing the video unit in the conversion, where the frequence indicates an occurrence of the intra prediction mode used for the conversion, and where the one or more sorted IPM tables indicate the intra prediction mode used in the processing.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: May 6, 2025
    Assignees: Beijing Bytedance Network Technology Co., Ltd., Bytedance Inc., Bytedance (HK) Limited
    Inventors: Junru Li, Meng Wang, Li Zhang, Kai Zhang, Hongbin Liu, Yue Wang, Shiqi Wang
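    Compact sketch of maintaining a frequence table of used intra prediction modes and a companion table sorted by that frequency; the update-on-every-unit rule is an assumption for illustration.

      from collections import Counter

      # Hypothetical sketch: track how often each intra prediction mode (IPM)
      # is used and keep a table sorted by that frequency.
      frequence_table = Counter()

      def update_tables(used_ipm: int):
          """Update the frequence table after a unit is coded, then re-sort."""
          frequence_table[used_ipm] += 1
          return [ipm for ipm, _ in frequence_table.most_common()]

      for mode in [50, 18, 50, 0, 50, 18]:
          sorted_ipm_table = update_tables(mode)
      print(dict(frequence_table))   # {50: 3, 18: 2, 0: 1}
      print(sorted_ipm_table)        # [50, 18, 0]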
  • Patent number: 12294709
    Abstract: A method of video processing includes performing a conversion between a video including a video region and a bitstream of the video according to a rule. The rule specifies a relationship between enablement of a palette mode and a coding type of the video region. The video region may represent a coding block of the video.
    Type: Grant
    Filed: October 10, 2022
    Date of Patent: May 6, 2025
    Assignees: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., BYTEDANCE INC.
    Inventors: Jizheng Xu, Zhipin Deng, Li Zhang, Hongbin Liu, Kai Zhang
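    Tiny sketch of a rule linking palette-mode enablement to the coding type of a video region; the specific mapping below is an assumption, not the normative rule.

      # Hypothetical sketch: enable palette mode only for certain coding types.
      ALLOWED_CODING_TYPES = {"intra", "ibc"}    # assumed set

      def palette_mode_enabled(coding_type: str) -> bool:
          return coding_type in ALLOWED_CODING_TYPES

      for ct in ["intra", "inter", "ibc"]:
          print(ct, palette_mode_enabled(ct))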
  • Publication number: 20250142130
    Abstract: A mechanism for processing video data is disclosed. The mechanism determines to modify a video unit attendant to applying a video compression function. The modification may include applying a geometric conversion to the video unit. A conversion is performed between a visual media data and a bitstream based on the modified video unit.
    Type: Application
    Filed: January 6, 2025
    Publication date: May 1, 2025
    Inventors: Yue Li, Kai Zhang, Li Zhang
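    Small sketch of applying a geometric conversion (here a horizontal flip or a 90-degree rotation) to a video unit before compression; the choice of transforms and how they are signaled are assumptions.

      import numpy as np

      def apply_geometric_conversion(unit: np.ndarray, kind: str) -> np.ndarray:
          """Return a geometrically converted copy of the video unit."""
          if kind == "hflip":
              return unit[:, ::-1]
          if kind == "rot90":
              return np.rot90(unit)
          return unit                    # identity: no modification

      unit = np.arange(12).reshape(3, 4)
      print(apply_geometric_conversion(unit, "hflip").shape)   # (3, 4)
      print(apply_geometric_conversion(unit, "rot90").shape)   # (4, 3)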
  • Publication number: 20250142080
    Abstract: Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current block of a video and a bitstream of the video, motion information of the current block based on motion information of a neighboring block of the current block and an intra block copy merge mode with block vector difference (IBC-MBVD) mode, the neighboring block being coded with a reconstruction-reordered intra block copy (RRIBC) mode; and performing the conversion based on the motion information of the current block.
    Type: Application
    Filed: January 3, 2025
    Publication date: May 1, 2025
    Inventors: Zhipin DENG, Kai ZHANG, Li ZHANG, Na ZHANG
  • Publication number: 20250142121
    Abstract: Embodiments of the present disclosure provide a solution for point cloud coding. A method for point cloud coding is proposed. The method comprises: obtaining, for a conversion between a current point cloud (PC) sample of a point cloud sequence and a bitstream of the point cloud sequence, target information regarding whether an attribute inter prediction is enabled for the current PC sample, the target information being determined based on at least one of rate information or distortion information associated with coding at least one target PC sample with the attribute inter prediction, wherein the at least one target PC sample comprises at least one of: the current PC sample, or at least one PC sample of the point cloud sequence coded before the current PC sample; and performing the conversion based on the target information.
    Type: Application
    Filed: January 3, 2025
    Publication date: May 1, 2025
    Inventors: Yingzhan Xu, Wenyi Wang, Kai Zhang, Li Zhang
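    Minimal sketch of an enable/disable decision driven by rate and distortion information from previously coded samples; the cost model (D + lambda*R) and the comparison rule are assumptions.

      # Hypothetical sketch: enable attribute inter prediction for the current
      # point-cloud sample only if it gave the lower rate-distortion cost.
      def rd_cost(distortion: float, rate_bits: float, lmbda: float = 0.1) -> float:
          return distortion + lmbda * rate_bits

      def enable_attribute_inter(cost_with_inter: float, cost_without_inter: float) -> bool:
          return cost_with_inter < cost_without_inter

      with_inter = rd_cost(distortion=120.0, rate_bits=800.0)      # 200.0
      without_inter = rd_cost(distortion=150.0, rate_bits=900.0)   # 240.0
      print(enable_attribute_inter(with_inter, without_inter))     # True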
  • Publication number: 20250142064
    Abstract: Embodiments of the disclosure provide a solution for video processing. A method for video processing is proposed. The method includes: generating, for a conversion between a video unit of a video and a bitstream of the video unit, a sample value of a first color component of the video unit that is corresponding to a sample of a second color component by applying a plurality of filters to at least one sample of the first color component; and performing the conversion based on the generated sample value.
    Type: Application
    Filed: January 3, 2025
    Publication date: May 1, 2025
    Inventors: Kai Zhang, Li Zhang
  • Publication number: 20250142056
    Abstract: Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: determining, for a conversion between a current video block of a video and a bitstream of the video, whether a candidate frame associated with the current video block is a co-located frame based on a frame type of the candidate frame, the co-located frame being co-located with a frame comprising the current video block; and performing the conversion based on the determining.
    Type: Application
    Filed: January 6, 2025
    Publication date: May 1, 2025
    Inventors: Kai ZHANG, Li Zhang
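    Tiny sketch of deciding whether a candidate frame may serve as the co-located frame based on its frame type; the excluded types listed here are assumptions for illustration only.

      # Hypothetical sketch: frame-type based co-located frame decision.
      NON_COLOCATED_FRAME_TYPES = {"intra_only", "long_term_reference"}   # assumed

      def is_colocated_frame(frame_type: str) -> bool:
          """Treat a candidate frame as co-located unless its type is excluded."""
          return frame_type not in NON_COLOCATED_FRAME_TYPES

      for ft in ["inter", "intra_only", "long_term_reference"]:
          print(ft, is_colocated_frame(ft))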
  • Publication number: 20250142054
    Abstract: An example method of video processing includes determining, for a conversion between a current picture of a video and a coded representation of the video, a position of a reference sample in a reference picture that is associated with the current picture based on a top-left position of a window of a picture. The picture includes at least the current picture or the reference picture, and the window is subject to a processing rule during the conversion. The method also includes performing the conversion based on the determining.
    Type: Application
    Filed: January 6, 2025
    Publication date: May 1, 2025
    Inventors: Kai Zhang, Li Zhang, Hongbin Liu, Zhipin Deng, Jizheng Xu, Yue Wang
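    Minimal sketch of locating a reference sample relative to a window's top-left corner rather than the picture origin, with scaling between the current and reference pictures; the formula is modeled on typical reference-scaling arithmetic and is an assumption, not the normative derivation.

      # Hypothetical sketch: window-relative reference sample position.
      def reference_sample_position(cur_x, cur_y, cur_win_tl, ref_win_tl, scale_x, scale_y):
          """Map a current-picture sample position into the reference picture."""
          # Express the position relative to the current picture's window top-left.
          rel_x = cur_x - cur_win_tl[0]
          rel_y = cur_y - cur_win_tl[1]
          # Scale into the reference picture and shift by its window top-left.
          ref_x = ref_win_tl[0] + rel_x * scale_x
          ref_y = ref_win_tl[1] + rel_y * scale_y
          return ref_x, ref_y

      print(reference_sample_position(100, 60, (8, 8), (4, 4), 0.5, 0.5))  # -> (50.0, 30.0)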
  • Publication number: 20250142087
    Abstract: Embodiments of the disclosure provide a solution for video processing. A method for video processing is proposed. The method includes: determining, during a conversion between a target block of a video and a bitstream of the target block, that a combination of an intra block copy (IBC) and an intra prediction (CIBCIP) mode is applied to the target block; obtaining an IBC predicted signal and an intra predicted signal based on the CIBCIP mode; deriving a prediction or a reconstruction of the target block by combining the IBC predicted signal and the intra predicted signal; and performing the conversion based on the prediction or the reconstruction of the target block.
    Type: Application
    Filed: January 3, 2025
    Publication date: May 1, 2025
    Inventors: Yang Wang, Kai Zhang, Na Zhang, Li Zhang
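    Small sketch of blending an IBC-predicted signal with an intra-predicted signal into one prediction block; the equal weighting is an assumption, as the actual weighting is defined by the CIBCIP mode.

      import numpy as np

      # Hypothetical sketch: combine IBC and intra predicted signals.
      def combine_ibc_intra(ibc_pred: np.ndarray, intra_pred: np.ndarray,
                            w_ibc: float = 0.5) -> np.ndarray:
          """Weighted blend of the two predicted signals."""
          return w_ibc * ibc_pred + (1.0 - w_ibc) * intra_pred

      ibc = np.full((4, 4), 120.0)
      intra = np.full((4, 4), 80.0)
      print(combine_ibc_intra(ibc, intra)[0, 0])   # -> 100.0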
  • Publication number: 20250139883
    Abstract: Embodiments are configured to render 3D models using an importance sampling method. First, embodiments obtain a 3D model including a plurality of density values corresponding to a plurality of locations in a 3D space, respectively. Embodiments then sample color information from a random subset of the plurality of locations using a probability distribution based on the plurality of density values, so that locations with higher density values are more likely to be sampled. Embodiments then render an image depicting a view of the 3D model based on the sampling within the random subset of the plurality of locations.
    Type: Application
    Filed: November 1, 2023
    Publication date: May 1, 2025
    Inventors: Milos Hasan, Iliyan Georgiev, Sai Bi, Julien Philip, Kalyan K. Sunkavalli, Xin Sun, Fujun Luan, Kevin James Blackburn-Matzen, Zexiang Xu, Kai Zhang
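    Brief sketch of density-weighted importance sampling over 3D locations; the array names and the probability-proportional-to-density rule are assumptions for illustration, not the claimed renderer.

      import numpy as np

      rng = np.random.default_rng(0)
      densities = np.array([0.1, 0.5, 2.0, 0.05, 1.0])   # one density per 3D location
      colors = rng.random((densities.size, 3))            # RGB per location

      # Sampling probability proportional to density: denser locations are more
      # likely to be chosen for the random subset.
      probs = densities / densities.sum()
      subset = rng.choice(densities.size, size=3, replace=False, p=probs)

      sampled_colors = colors[subset]                      # colors used for rendering
      print(subset, sampled_colors.shape)                  # e.g. [2 4 1] (3, 3)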
  • Publication number: 20250139771
    Abstract: A method and a device for nidus recognition in a neuroimage, an electronic apparatus, and a storage medium are provided. A collection of neuroimages to be recognized, including a first structural image, a first nidus image, and a first metabolic image, is determined, and then image preprocessing is performed on the collection of neuroimages to acquire a collection of object images including a second structural image, a second nidus image, and a second metabolic image. The collection of object images is input into a trained three-dimensional convolutional neural network to acquire a position of a nidus of the target object, and then the position of the nidus is labeled on the first structural image based on the position of the nidus of the target object to acquire and display an image of the position of the nidus.
    Type: Application
    Filed: June 13, 2024
    Publication date: May 1, 2025
    Applicant: Beijing Tiantan Hospital, Capital Medical University
    Inventors: Jiajie MO, Kai ZHANG, Wenhan HU, Chao ZHANG, Xiu WANG, Baotian ZHAO, Zhihao GUO, Bowen YANG, Zilin LI, Yuan YAO
  • Publication number: 20250142081
    Abstract: Embodiments of the present disclosure provide a solution for video processing. A method for video processing is proposed. The method comprises: obtaining, for a conversion between a current video block of a video and a bitstream of the video, values for a set of adjusting parameters associated with values for a set of model parameters of a local illumination compensation (LIC) model for coding the current video block; updating the values for the set of model parameters based on the values for the set of adjusting parameters; and performing the conversion based on the updated values for the set of model parameters.
    Type: Application
    Filed: January 3, 2025
    Publication date: May 1, 2025
    Inventors: Bharath VISHWANATH, Kai ZHANG, Li ZHANG
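    Minimal sketch of a linear LIC model (pred = scale * ref + offset) whose parameters are updated by adjusting values; the additive update rule is an assumption for illustration.

      # Hypothetical sketch: local illumination compensation with adjusted parameters.
      def apply_lic(ref_sample: float, scale: float, offset: float) -> float:
          return scale * ref_sample + offset

      def adjust_lic_params(scale, offset, delta_scale, delta_offset):
          """Update the model parameters using the decoded adjusting parameters."""
          return scale + delta_scale, offset + delta_offset

      scale, offset = 1.0, 0.0                       # initial model parameters
      scale, offset = adjust_lic_params(scale, offset, 0.125, -4.0)
      print(apply_lic(100.0, scale, offset))         # -> 108.5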
  • Publication number: 20250142106
    Abstract: Embodiments of the disclosure provide a solution for video processing. A method for video processing is proposed. The method includes: applying, for a conversion between a video unit of a video and a bitstream of the video unit, a wrap around motion compensation (WAMC) during a derivation of motion information for the video unit; and performing the conversion based on the derived motion information.
    Type: Application
    Filed: December 27, 2024
    Publication date: May 1, 2025
    Inventors: Yang WANG, Kai ZHANG, Zhipin DENG, Li ZHANG
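    Small sketch of wrap-around handling of a horizontal reference coordinate, as used for motion compensation across the left/right picture boundary (e.g. 360-degree content); the modulo rule and width value are assumptions for illustration.

      # Hypothetical sketch: wrap a horizontal reference sample position.
      def wrap_reference_x(ref_x: int, wrap_width: int) -> int:
          """Wrap a horizontal sample position into [0, wrap_width)."""
          return ref_x % wrap_width

      picture_width = 1920
      for x in (-16, 5, 1930):
          print(x, "->", wrap_reference_x(x, picture_width))   # -16 -> 1904, 1930 -> 10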