Patents by Inventor Wehai Wu

Wehai Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8335260
Abstract: A method for vector quantization. The method includes: performing a quantization process on an input vector using N basic codebook vectors and the adjustment vectors associated with each basic codebook vector, and generating the basic codebook vector and adjustment vector used to quantize the input vector, where N is a positive integer greater than or equal to 1. A device for vector quantization based on the method is also disclosed. According to embodiments of the present invention, the quantization of an input vector is performed by introducing adjustment vectors for the basic codebook vectors, so the memory required to store the basic codebook vectors is reduced effectively, and the computation required is only that of searching the N codebooks. Therefore, the complexity of vector quantization can be reduced effectively. (A minimal code sketch of this scheme follows the listing below.)
    Type: Grant
    Filed: November 18, 2008
    Date of Patent: December 18, 2012
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Lixiong Li, Libin Guo, Liang Zhang, Dejun Zhang, Wehai Wu, Tinghong Wang
  • Publication number: 20090074076
Abstract: A method for vector quantization. The method includes: performing a quantization process on an input vector using N basic codebook vectors and the adjustment vectors associated with each basic codebook vector, and generating the basic codebook vector and adjustment vector used to quantize the input vector, where N is a positive integer greater than or equal to 1. A device for vector quantization based on the method is also disclosed. According to embodiments of the present invention, the quantization of an input vector is performed by introducing adjustment vectors for the basic codebook vectors, so the memory required to store the basic codebook vectors is reduced effectively, and the computation required is only that of searching the N codebooks. Therefore, the complexity of vector quantization can be reduced effectively.
    Type: Application
    Filed: November 18, 2008
    Publication date: March 19, 2009
Applicant: Huawei Technologies Co., Ltd.
    Inventors: Lixiong Li, Libin Guo, Liang Zhang, Dejun Zhang, Wehai Wu, Tinghong Wang
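
The sketch below illustrates the kind of quantization the abstract describes: an input vector is matched against N basic codebook vectors, each refined by a small set of adjustment vectors, and the best base-plus-adjustment pair is chosen. It is a minimal illustration only, assuming a squared-error distortion measure and a fixed number of adjustment vectors per basic codebook vector; the function name, parameter shapes, and data layout are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def quantize_vector(x, base_vectors, adjustment_vectors):
    """Quantize x as (basic codebook vector + adjustment vector).

    Illustrative sketch, not the patented method itself.
    x:                   input vector of shape (D,)
    base_vectors:        (N, D) array of N basic codebook vectors
    adjustment_vectors:  (N, M, D) array, M adjustment vectors per basic vector
    Returns ((n, m), reconstruction) for the best base/adjustment pair.
    """
    best_idx, best_err = (0, 0), np.inf
    for n, base in enumerate(base_vectors):            # go through the N codebooks
        candidates = base + adjustment_vectors[n]       # (M, D) candidate code vectors
        errors = np.sum((candidates - x) ** 2, axis=1)  # squared-error distortion
        m = int(np.argmin(errors))
        if errors[m] < best_err:
            best_idx, best_err = (n, m), errors[m]
    n, m = best_idx
    return best_idx, base_vectors[n] + adjustment_vectors[n, m]

# Example usage with made-up sizes: 4 basic vectors of dimension 8,
# 16 adjustment vectors per basic vector.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
adjust = 0.1 * rng.normal(size=(4, 16, 8))
indices, reconstruction = quantize_vector(rng.normal(size=8), base, adjust)
```

Under these assumptions the stored codebook is N basic vectors plus their adjustment vectors rather than one large flat codebook, and the search cost is a pass through the N codebooks, which is the memory and complexity benefit the abstract claims.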