Patents by Inventor Libin Guo

Libin Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240097434
    Abstract: A method for detecting abnormal direct current voltage measurements in a modular multilevel converter (MMC) high voltage direct current transmission system is provided. In the method, a valve group voltage at a detection pole is obtained and the voltages at the voltage measurement points of that pole are collected; comparisons are then performed according to the actual arrangement of the measurement points to determine whether an abnormal measurement has occurred at each point. (An illustrative sketch of this kind of consistency check follows the listing.)
    Type: Application
    Filed: May 18, 2022
    Publication date: March 21, 2024
    Applicant: ELECTRIC POWER RESEARCH INSTITUTE, CHINA SOUTHERN POWER GRID
    Inventors: Qinlei CHEN, Shuyong LI, Qi GUO, Libin HUANG, Xuehua LIN, Zhijiang LIU, Deyang CHEN, Chao LUO, Guanming ZENG, Mengjun LIAO, Lijun DENG, Liu CUI, Zhida HUANG, Haiping GUO, Tianyu GUO
  • Publication number: 20240086693
    Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer trains a DNN using a plurality of training sub-images derived from a down-sampled training image, and a tester tests the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps is generated by the CNN from an input image. Hard attention is applied by the local attention mechanism by selecting a subset of the generated feature maps; soft attention is then applied by assigning weights to the selected subset, yielding weighted feature maps. (A minimal sketch of this hard/soft attention step follows the listing.)
    Type: Application
    Filed: September 22, 2023
    Publication date: March 14, 2024
    Inventors: Yiwen GUO, Yuqing Hou, Anbang YAO, Dongqi Cai, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yurong Chen, Libin Wang
  • Patent number: 8335260
    Abstract: A method for vector quantization is provided. The method includes performing a quantization process on a vector to be quantized using N basic codebook vectors and the adjustment vectors of each basic codebook vector, and generating the basic codebook vector and adjustment vector used to quantize the vector, where N is a positive integer greater than or equal to 1. A vector quantization device based on the method is also disclosed. Because quantization of an input vector is performed by introducing adjustment vectors for the basic codebook vectors, the memory needed to store the basic codebook vectors is reduced effectively, while the computation required is only that of searching the N codebooks. The complexity of the vector quantization is therefore decreased effectively. (A minimal quantization sketch follows the listing.)
    Type: Grant
    Filed: November 18, 2008
    Date of Patent: December 18, 2012
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Lixiong Li, Libin Guo, Liang Zhang, Dejun Zhang, Wehai Wu, Tinghong Wang
  • Publication number: 20090074076
    Abstract: A method for vector quantization is provided. The method includes performing a quantization process on a vector to be quantized using N basic codebook vectors and the adjustment vectors of each basic codebook vector, and generating the basic codebook vector and adjustment vector used to quantize the vector, where N is a positive integer greater than or equal to 1. A vector quantization device based on the method is also disclosed. Because quantization of an input vector is performed by introducing adjustment vectors for the basic codebook vectors, the memory needed to store the basic codebook vectors is reduced effectively, while the computation required is only that of searching the N codebooks. The complexity of the vector quantization is therefore decreased effectively. (A minimal quantization sketch follows the listing.)
    Type: Application
    Filed: November 18, 2008
    Publication date: March 19, 2009
    Applicant: Huawei Technologies Co., Ltd.
    Inventors: Lixiong Li, Libin Guo, Liang Zhang, Dejun Zhang, Wehai Wu, Tinghong Wang
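
The abstract of publication 20240097434 describes comparing voltages collected at measurement points on a detection pole against a valve group voltage, according to the actual arrangement of the points. The sketch below is only an illustration of that kind of consistency check, not the claimed method: it assumes the measurement points sit above series-connected valve groups, so the expected pole-to-ground reading at each point is a known multiple of the valve group voltage, and the function name detect_abnormal_points and the tolerance value are invented for the example.

```python
# Hypothetical per-point consistency check; the arrangement assumption and all
# names here are illustrative, not taken from publication 20240097434.

def detect_abnormal_points(point_voltages, valve_group_voltage, tol=0.05):
    """Flag measurement points whose reading deviates from its expected value.

    point_voltages: pole-to-ground voltages ordered from the neutral end upward,
        so the k-th point sits above k + 1 series-connected valve groups.
    valve_group_voltage: nominal voltage contributed by one valve group.
    tol: relative tolerance before a reading is treated as abnormal.
    """
    flagged = []
    for k, measured in enumerate(point_voltages):
        expected = (k + 1) * valve_group_voltage
        flagged.append(abs(measured - expected) > tol * abs(expected))
    return flagged


if __name__ == "__main__":
    # Illustrative readings in kV: the second point has drifted well away from
    # its expected 800 kV level and is flagged.
    readings = [400.0, 845.0, 1200.0]
    print(detect_abnormal_points(readings, valve_group_voltage=400.0))
    # -> [False, True, False]
```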
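
Publication 20240086693 describes a local attention mechanism that applies hard attention by selecting a subset of CNN feature maps and soft attention by weighting the selected maps. The sketch below illustrates that two-step idea under assumptions of its own: random arrays stand in for CNN output, each map is scored by its mean activation (an assumed scoring rule), and the function hard_soft_attention is invented for the example rather than taken from the publication.

```python
import numpy as np


def hard_soft_attention(feature_maps, k=4):
    """Apply hard then soft attention to CNN feature maps of shape (C, H, W)."""
    # Score each feature map; here the mean activation is used as the score.
    scores = feature_maps.mean(axis=(1, 2))

    # Hard attention: keep only the k highest-scoring maps.
    keep = np.argsort(scores)[-k:]
    selected = feature_maps[keep]

    # Soft attention: softmax weights over the kept maps, applied per map.
    w = np.exp(scores[keep] - scores[keep].max())
    w /= w.sum()
    weighted = selected * w[:, None, None]
    return weighted, keep, w


if __name__ == "__main__":
    maps = np.random.rand(16, 8, 8)      # 16 stand-in feature maps, 8x8 each
    weighted, keep, w = hard_soft_attention(maps, k=4)
    print(weighted.shape, keep, w.round(3))
```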
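
The two Huawei entries above (patent 8335260 and publication 20090074076) describe quantizing a vector with N basic codebook vectors plus per-base adjustment vectors, so that only the small base and adjustment codebooks need to be stored. The sketch below shows one way such a two-stage search could look; the exhaustive nearest-neighbour search, the random codebooks, and the name quantize are assumptions for illustration, not the patented construction.

```python
import numpy as np


def quantize(x, base_codebook, adjustments):
    """Return ((base_index, adj_index), error) minimizing ||x - (base + adjustment)||."""
    best, best_err = (None, None), np.inf
    for i, base in enumerate(base_codebook):
        for j, adj in enumerate(adjustments[i]):
            err = np.linalg.norm(x - (base + adj))
            if err < best_err:
                best, best_err = (i, j), err
    return best, best_err


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_base, n_adj = 4, 8, 4
    base_codebook = rng.normal(size=(n_base, dim))             # N basic vectors
    adjustments = rng.normal(scale=0.1, size=(n_base, n_adj, dim))
    x = rng.normal(size=dim)
    (i, j), err = quantize(x, base_codebook, adjustments)
    print(f"base {i}, adjustment {j}, error {err:.3f}")
```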