Patents by Inventor Lingzhi Liu
Lingzhi Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12327385
Abstract: A neural network system, a method, and an apparatus for image compression are provided. The neural network may include a generator including an encoder, an entropy estimator, and a decoder, where the encoder receives an input image and generates an encoder output, a plurality of quantized feature entries are obtained based on the encoder output outputted at a last encoder block, the entropy estimator receives the plurality of quantized feature entries and calculates an entropy loss based on the plurality of quantized feature entries, and the decoder receives the plurality of quantized feature entries and generates a reconstructed image. Furthermore, the neural network may include a discriminator that determines whether the reconstructed image is different from the input image based on a discriminator loss. Moreover, the generator may determine whether content of the reconstructed image matches content of the input image based on a generator loss including the entropy loss.
Type: Grant
Filed: October 19, 2022
Date of Patent: June 10, 2025
Assignees: SANTA CLARA UNIVERSITY, KWAI INC.
Inventors: Yifei Pei, Ying Liu, Nam Ling, Yongxiong Ren, Lingzhi Liu
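The generator loss described above combines reconstruction quality, an adversarial term, and the entropy term. A minimal pure-Python sketch of such a combined loss follows; the function names and the weights `lambda_adv` and `lambda_ent` are illustrative assumptions, not values from the patent:

```python
import math

def entropy_loss(symbol_probs):
    # Estimated bits needed to code the quantized feature entries
    # (negative log-likelihood in base 2); hypothetical helper.
    return -sum(math.log2(max(p, 1e-12)) for p in symbol_probs)

def generator_loss(recon_err, disc_score, symbol_probs,
                   lambda_adv=0.1, lambda_ent=0.01):
    # Reconstruction term keeps the output's content close to the input;
    # adversarial term pushes the discriminator score toward "real";
    # entropy term penalizes the bitrate of the quantized features.
    adv = -math.log(max(disc_score, 1e-12))
    return recon_err + lambda_adv * adv + lambda_ent * entropy_loss(symbol_probs)
```

A lower discriminator score (the discriminator thinks the reconstruction is fake) raises the loss, pushing the generator toward more realistic reconstructions.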
-
Publication number: 20250039373
Abstract: A decoding method is disclosed, including: parsing a received bitstream to determine whether a current image block is required to be partitioned; parsing the bitstream to determine a partition direction when the current image block is required to be partitioned, wherein the partition direction is a horizontal direction; partitioning the current image block into four rectangular subblocks when the partition direction is the horizontal direction, wherein a size of the current image block is expressed as 16×H, where H represents a height of the current image block and 16 is a width of the current image block, wherein H is not equal to 16, and wherein a size of each of the four rectangular subblocks is expressed as 16×H/4; and reconstructing the current image block based on the four rectangular subblocks.
Type: Application
Filed: October 10, 2024
Publication date: January 30, 2025
Inventors: Changcai Lai, Xiaoran Cao, Yongbing Lin, Lingzhi Liu, Yun He
-
Patent number: 12143582
Abstract: A decoding method is disclosed, including: parsing a bitstream to determine whether a current coding block is required to be partitioned; when the current coding block is required to be partitioned, parsing the bitstream to determine whether the current coding block is partitioned in a horizontal direction or a vertical direction; partitioning the current coding block into four first rectangular subblocks in the horizontal direction or four second rectangular subblocks in the vertical direction; and reconstructing the current coding block based on the four first rectangular subblocks or the four second rectangular subblocks.
Type: Grant
Filed: July 21, 2023
Date of Patent: November 12, 2024
Assignees: Huawei Technologies Co., Ltd, Tsinghua University
Inventors: Changcai Lai, Xiaoran Cao, Yongbing Lin, Lingzhi Liu, Yun He
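The four-way split described in this entry and the one above can be sketched in a few lines. The function below is an illustrative stand-in (the name and the divisibility checks are assumptions), showing how a W×H block yields four equal rectangular subblocks in either direction:

```python
def partition_block(width, height, direction):
    """Split a W x H coding block into four equal rectangular subblocks,
    stacked top-to-bottom ('horizontal') or side-by-side ('vertical').
    Sketch only; a real codec also signals the choice in the bitstream."""
    if direction == "horizontal":
        if height % 4:
            raise ValueError("height must be divisible by 4")
        return [(width, height // 4)] * 4
    if direction == "vertical":
        if width % 4:
            raise ValueError("width must be divisible by 4")
        return [(width // 4, height)] * 4
    raise ValueError("unknown direction")
```

For the 16×H case in the previous entry, a horizontal split of a 16×32 block gives four 16×8 subblocks, matching the 16×H/4 subblock size.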
-
Publication number: 20240362519
Abstract: A method for processing data in a multi-mode single-engine system, an apparatus, and a non-transitory computer-readable storage medium are provided. In the method, a graphic processing engine receives a first input query. Further, the graphic processing engine obtains a first set of model parameters by switching between multiple sets of model parameters based on the first input query. Moreover, the graphic processing engine infers a first output for the first input query based on the first set of model parameters.
Type: Application
Filed: April 25, 2023
Publication date: October 31, 2024
Applicant: KWAI INC.
Inventors: Yongxiong REN, Yang LIU, Lingzhi LIU
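A toy sketch of the parameter-switching idea follows; the class and field names are hypothetical, and `infer` is a stand-in for real model inference on the engine:

```python
class SingleEngine:
    """One engine serving multiple modes: all parameter sets stay
    resident, and only the active set is switched per query (sketch)."""
    def __init__(self, param_sets):
        self.param_sets = param_sets
        self.active = None

    def infer(self, query, mode):
        # Switch to the parameter set selected for the incoming query,
        # then run inference with that set.
        if self.active != mode:
            self.active = mode
        params = self.param_sets[self.active]
        return params["scale"] * len(query)  # stand-in for real inference
```

Switching a pointer to a resident parameter set avoids reloading a model for each mode change.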
-
Publication number: 20240311946
Abstract: Methods and apparatuses are provided for serving multiple models with a single engine. The method includes: providing a single engine for a plurality of models on a server, where the server includes at least one graphics processing unit (GPU) and memory coupled with the at least one GPU; loading the plurality of models onto the memory of the server at once; and serving, by the single engine, the plurality of models, where the single engine accommodates all structures and weights of the plurality of models with a shared input and output of the memory.
Type: Application
Filed: March 14, 2023
Publication date: September 19, 2024
Applicant: KWAI INC.
Inventors: Yang LIU, Yongxiong REN, Lingzhi LIU, Xing WEN
-
Patent number: 12058312
Abstract: A method and an apparatus for video processing are provided. The method includes that a decoding terminal receives a plurality of coded video frames coded using one or more generative adversarial networks (GANs), receives network parameters related to the one or more GANs, and decodes the plurality of coded video frames using GANs based on the network parameters. Further, the one or more GANs respectively implement one or more video coding functions including reference-frame coding, motion-compensated frame prediction, and residue-frame coding.
Type: Grant
Filed: October 6, 2021
Date of Patent: August 6, 2024
Assignees: KWAI INC., SANTA CLARA UNIVERSITY
Inventors: Pengli Du, Ying Liu, Nam Ling, Lingzhi Liu, Yongxiong Ren, Ming Kai Hsu
-
Publication number: 20240185473
Abstract: A neural network system, a method, and an apparatus for image compression are provided. The neural network may include a generator including an encoder, an entropy estimator, and a decoder, where the encoder receives an input image and generates an encoder output, a plurality of quantized feature entries are obtained based on the encoder output outputted at a last encoder block, the entropy estimator receives the plurality of quantized feature entries and calculates an entropy loss based on the plurality of quantized feature entries, and the decoder receives the plurality of quantized feature entries and generates a reconstructed image. Furthermore, the neural network may include a discriminator that determines whether the reconstructed image is different from the input image based on a discriminator loss. Moreover, the generator may determine whether content of the reconstructed image matches content of the input image based on a generator loss including the entropy loss.
Type: Application
Filed: October 19, 2022
Publication date: June 6, 2024
Applicants: SANTA CLARA UNIVERSITY, KWAI INC.
Inventors: Yifei PEI, Ying LIU, Nam LING, Yongxiong REN, Lingzhi LIU
-
Publication number: 20240185075
Abstract: A method, an apparatus, and a non-transitory computer-readable storage medium for video compression using a generative adversarial network (GAN) are provided. The method includes obtaining, by a generator of the GAN, a reconstructed target frame based on a reference frame and a raw target frame to be reconstructed; concatenating, by a transformer-based discriminator of the GAN, the reference frame, the raw target frame, and the reconstructed target frame to obtain a paired data; determining, by the transformer-based discriminator of the GAN, whether the paired data is real or fake to guide reconstruction of the raw target frame; and determining a generator loss and a transformer-based discriminator loss, and performing gradient back propagation and updating network parameters of the GAN based on the generator loss and the transformer-based discriminator loss.
Type: Application
Filed: October 21, 2022
Publication date: June 6, 2024
Applicants: SANTA CLARA UNIVERSITY, KWAI INC.
Inventors: Pengli DU, Ying LIU, Nam LING, Yongxiong REN, Lingzhi LIU
-
Patent number: 12001510
Abstract: A method and an apparatus for length-aware local tiling in a sparse attention module in a transformer in heterogeneous devices are provided. The method includes that a heterogeneous device including one or more GPUs: divides a transformed sparsity mask into a plurality of first tiles and obtains one or more effective first tiles from the plurality of first tiles, where each effective first tile includes at least one non-zero element; loads the one or more effective first tiles into a shared memory in the one or more GPUs and loads a plurality of elements in a first matrix corresponding to the one or more effective first tiles into the shared memory; and performs multiplication by a first sampled dense-dense matrix multiplication (SDDMM) kernel in the sparse attention module in the transformer by fetching the one or more effective first tiles and the plurality of elements from the shared memory.
Type: Grant
Filed: November 17, 2021
Date of Patent: June 4, 2024
Assignee: BEIJING TRANSTREAMS TECHNOLOGY CO. LTD.
Inventors: Zhendong Wang, Yongxiong Ren, Yang Liu, Lingzhi Liu
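The tile-filtering step, identifying which tiles of the sparsity mask contain any non-zero element, can be sketched in pure Python as a CPU stand-in for the GPU kernel (the function name and nested-list mask layout are assumptions):

```python
def effective_tiles(mask, tile):
    """Return (tile_row, tile_col) indices of tiles containing at least
    one non-zero element of the sparsity mask. Only these tiles would
    need to be loaded into GPU shared memory for the SDDMM."""
    n_rows, n_cols = len(mask), len(mask[0])
    tiles = []
    for r0 in range(0, n_rows, tile):
        for c0 in range(0, n_cols, tile):
            # Scan this tile's elements; keep the tile if any is non-zero.
            if any(mask[r][c]
                   for r in range(r0, min(r0 + tile, n_rows))
                   for c in range(c0, min(c0 + tile, n_cols))):
                tiles.append((r0 // tile, c0 // tile))
    return tiles
```

Skipping all-zero tiles is what saves shared-memory traffic and multiply-accumulate work in the sparse attention computation.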
-
Patent number: 12002453
Abstract: A method and an apparatus for automatic speech recognition are provided. The method includes: generating a weight matrix for a layer of a plurality of layers in a neural network; dividing the weight matrix into a plurality of blocks, each block including a plurality of weights; selecting a pre-determined percentage of weights from at least one block for block-wise pruning; and generating a block-wise pruned weight matrix by setting the pre-determined percentage of weights selected from the at least one block to zero. The weight matrix includes a set of weights associated with the layer, the plurality of layers includes a first layer receiving a first input associated with one or more audio feature sequences, and the plurality of layers are executed on one or more processors. The method efficiently accelerates model inference using irregular pruning.
Type: Grant
Filed: March 25, 2021
Date of Patent: June 4, 2024
Assignee: BEIJING TRANSTREAMS TECHNOLOGY CO. LTD.
Inventors: Yongxiong Ren, Bingbing Li, Yang Liu, Lingzhi Liu
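Block-wise pruning as described, zeroing a fixed percentage of the smallest-magnitude weights within each block, can be sketched on plain Python lists (1-D blocks within each row for simplicity; real implementations operate on 2-D tensor blocks):

```python
def blockwise_prune(matrix, block, percent):
    """Within each `block`-sized slice of every row, set the
    smallest-magnitude `percent` of weights to zero (sketch)."""
    out = []
    for row in matrix:
        new_row = list(row)
        for start in range(0, len(row), block):
            idx = list(range(start, min(start + block, len(row))))
            k = int(len(idx) * percent)
            # Zero the k smallest-magnitude weights in this block.
            for i in sorted(idx, key=lambda j: abs(new_row[j]))[:k]:
                new_row[i] = 0.0
        out.append(new_row)
    return out
```

Because each block loses the same fraction of weights, the resulting sparsity pattern is regular enough at the block level for efficient kernels while remaining irregular within blocks.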
-
Patent number: 11928446
Abstract: A method, an apparatus, and a non-transitory computer-readable storage medium for generating heterogeneous platform code are provided. The method may obtain a neural network model. The neural network model may be programmed to run on at least one platform. The method may also obtain an initial intermediate representation (IR) code by encoding the neural network model, and obtain a target IR code by adding decorations to the initial IR code based on a target platform. The method may also output an executable code optimized to run on the target platform by decoding the target IR code.
Type: Grant
Filed: November 11, 2021
Date of Patent: March 12, 2024
Assignee: KWAI INC.
Inventors: Zhen Peng, Yang Liu, Hanxian Huang, Yongxiong Ren, Jishen Yang, Lingzhi Liu, Xin Chen
-
Patent number: 11830480
Abstract: Systems and methods are provided for automatic speech recognition. In the method, the system obtains a padded sequence by processing a plurality of acoustic signals. The system compresses the padded sequence by reducing the size of the padded sequence to obtain a compressed sequence. The system inputs the compressed sequence into a pre-trained encoder neural network to obtain an encoded sequence and then decompresses the encoded sequence by recovering the encoded sequence to an original sequential ordering. The system inputs the encoded sequence to a decoding module to obtain recognition texts.
Type: Grant
Filed: February 17, 2021
Date of Patent: November 28, 2023
Assignee: KWAI INC.
Inventors: Yongxiong Ren, Yang Liu, Heng Liu, Lingzhi Liu
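One common way to realize the compress/decompress steps is sorting sequences by length so batches need less padding, then undoing the permutation after encoding. The sketch below illustrates that idea; the patent's exact compression scheme may differ:

```python
def compress(seqs):
    """Sort sequences by length (descending) to reduce padding waste;
    also return the permutation needed to undo the sort (sketch)."""
    order = sorted(range(len(seqs)), key=lambda i: len(seqs[i]), reverse=True)
    return [seqs[i] for i in order], order

def decompress(encoded, order):
    """Restore encoder outputs to the original sequential ordering."""
    restored = [None] * len(encoded)
    for pos, i in enumerate(order):
        restored[i] = encoded[pos]
    return restored
```

Because decompression is a pure reordering, the downstream decoding module sees outputs in the same order as the original acoustic signals.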
-
Publication number: 20230362374
Abstract: A decoding method is disclosed, including: parsing a bitstream to obtain a first flag, wherein the first flag specifies whether a current coding block is required to be partitioned; when the first flag specifies that the current coding block is required to be partitioned, parsing the bitstream to obtain a second flag, wherein the second flag specifies whether the current coding block is partitioned in a horizontal direction or a vertical direction; partitioning the current coding block into four first rectangular subblocks in the horizontal direction or four second rectangular subblocks in the vertical direction; and reconstructing the current coding block based on the four first rectangular subblocks or the four second rectangular subblocks.
Type: Application
Filed: July 21, 2023
Publication date: November 9, 2023
Inventors: Changcai Lai, Xiaoran Cao, Yongbing Lin, Lingzhi Liu, Yun He
-
Patent number: 11750809
Abstract: An encoding method with multiple image block division manners is disclosed, including: determining a division manner and a division direction of an image block; dividing the image block to obtain image subblocks sequentially arranged horizontally or vertically; determining whether the image subblocks need subdivision, and if subdivision is not needed, predicting the encoding object in the frame according to the image subblocks, to obtain residual data; performing transformation, quantization, and entropy encoding for the residual data so as to obtain coded residual data; and writing the division manner of the image block, the division direction of the image block, an identifier indicating whether the image subblocks need subdivision, and the coded residual data into a bitstream. By applying the encoding method, better prediction accuracy can be achieved when the image block presents a small change of pixel value in the horizontal or vertical direction.
Type: Grant
Filed: February 16, 2022
Date of Patent: September 5, 2023
Assignees: Huawei Technologies Co., Ltd., Tsinghua University
Inventors: Changcai Lai, Xiaoran Cao, Yongbing Lin, Lingzhi Liu, Yun He
-
Patent number: 11741967
Abstract: An automatic speech recognition system and a method thereof are provided. The system includes an encoder and a decoder. The encoder comprises a plurality of encoder layers. At least one encoder layer includes a plurality of encoder sublayers fused into one or more encoder kernels. The system further comprises a first pair of ping-pong buffers communicating with the one or more encoder kernels. The decoder comprises a plurality of decoder layers. At least one decoder layer includes a plurality of decoder sublayers fused into one or more decoder kernels. The decoder receives an input related to the encoder output and generates a decoder output. The decoder sends the decoder output to a beam search kernel.
Type: Grant
Filed: January 4, 2021
Date of Patent: August 29, 2023
Assignee: KWAI INC.
Inventors: Yongxiong Ren, Heng Liu, Yang Liu, Lingzhi Liu, Jie Li, Yuanyuan Zhao, Xiaorui Wang
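A minimal sketch of the ping-pong buffer pattern used between fused kernels follows; the class and property names are hypothetical, and lists stand in for GPU buffers:

```python
class PingPong:
    """Two buffers alternate roles: one holds the previous kernel's
    output while the other receives the next kernel's output (sketch)."""
    def __init__(self, size):
        self.bufs = [[0.0] * size, [0.0] * size]
        self.cur = 0

    def swap(self):
        # Flip which buffer is written next; the just-written buffer
        # becomes the readable "back" buffer.
        self.cur ^= 1

    @property
    def front(self):  # buffer the next kernel writes into
        return self.bufs[self.cur]

    @property
    def back(self):   # buffer holding the previous kernel's output
        return self.bufs[self.cur ^ 1]
```

Alternating buffers lets consecutive fused kernels pass activations without allocating new memory or copying between layers.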
-
Publication number: 20230153381
Abstract: A method and an apparatus for length-aware local tiling in a sparse attention module in a transformer in heterogeneous devices are provided. The method includes that a heterogeneous device including one or more GPUs: divides a transformed sparsity mask into a plurality of first tiles and obtains one or more effective first tiles from the plurality of first tiles, where each effective first tile includes at least one non-zero element; loads the one or more effective first tiles into a shared memory in the one or more GPUs and loads a plurality of elements in a first matrix corresponding to the one or more effective first tiles into the shared memory; and performs multiplication by a first sampled dense-dense matrix multiplication (SDDMM) kernel in the sparse attention module in the transformer by fetching the one or more effective first tiles and the plurality of elements from the shared memory.
Type: Application
Filed: November 17, 2021
Publication date: May 18, 2023
Applicant: KWAI INC.
Inventors: Zhendong WANG, Yongxiong REN, Yang LIU, Lingzhi LIU
-
Publication number: 20230143291
Abstract: A method, an apparatus, and a non-transitory computer-readable storage medium for generating heterogeneous platform code are provided. The method may obtain a neural network model. The neural network model may be programmed to run on at least one platform. The method may also obtain an initial intermediate representation (IR) code by encoding the neural network model, and obtain a target IR code by adding decorations to the initial IR code based on a target platform. The method may also output an executable code optimized to run on the target platform by decoding the target IR code.
Type: Application
Filed: November 11, 2021
Publication date: May 11, 2023
Applicant: KWAI INC.
Inventors: Zhen PENG, Yang LIU, Hanxian HUANG, Yongxiong REN, Jishen YANG, Lingzhi LIU, Xin CHEN
-
Publication number: 20230133305
Abstract: A method and an apparatus for accelerating a transformer with a sparse attention pattern are provided. The method includes that a heterogeneous device including one or more GPUs loads a first matrix, a second matrix, and a transformed sparsity mask into a first sampled dense-dense matrix multiplication (SDDMM) kernel in a sparse attention module in the transformer and generates a first output based on the first matrix, the second matrix, and the transformed sparsity mask by the first SDDMM kernel, generates a second output by a softmax kernel in the sparse attention module based on the first output, loads the second output, a third matrix, and the transformed sparsity mask into a matrix multiplication kernel in the sparse attention module, and generates an output of the sparse attention module.
Type: Application
Filed: October 28, 2021
Publication date: May 4, 2023
Applicant: KWAI INC.
Inventors: Zhendong WANG, Yongxiong REN, Yang LIU, Lingzhi LIU
-
Publication number: 20230105436
Abstract: A method and an apparatus for video processing are provided. The method includes that a decoding terminal receives a plurality of coded video frames coded using one or more generative adversarial networks (GANs), receives network parameters related to the one or more GANs, and decodes the plurality of coded video frames using GANs based on the network parameters. Further, the one or more GANs respectively implement one or more video coding functions including reference-frame coding, motion-compensated frame prediction, and residue-frame coding.
Type: Application
Filed: October 6, 2021
Publication date: April 6, 2023
Applicants: KWAI INC., SANTA CLARA UNIVERSITY
Inventors: Pengli DU, Ying LIU, Nam LING, Lingzhi LIU, Yongxiong REN, Ming Kai HSU
-
Patent number: D1069081
Type: Grant
Filed: June 15, 2023
Date of Patent: April 1, 2025
Assignee: NEOLINK GROUP CO., Ltd.
Inventor: Lingzhi Liu