Patents by Inventor Linyong HUANG

Linyong HUANG has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240045975
    Abstract: The present disclosure describes a processor and a multi-core processor. The processor includes a processor core and a memory. The processor core includes a homomorphic encryption instruction execution module and a general-purpose instruction execution module. The homomorphic encryption instruction execution module is configured to perform homomorphic encryption operations and includes a plurality of instruction set architecture extension components, wherein the instruction set architecture extension components are respectively configured to perform sub-operations related to the homomorphic encryption; the general-purpose instruction execution module is configured to perform non-homomorphic-encryption operations. The memory is vertically stacked with the processor core and is used as a cache or scratchpad memory of the processor core.
    Type: Application
    Filed: December 14, 2022
    Publication date: February 8, 2024
    Inventors: Shuangchen LI, Zhe ZHANG, Linyong HUANG, Dimin NIU, Xuanle REN, Hongzhong ZHENG
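The split the abstract describes — dedicated ISA extension components for homomorphic encryption sub-operations alongside a general-purpose path — can be sketched at a high level in software. This is a minimal illustrative model, not the disclosed hardware: the opcode names, the choice of modular arithmetic as the HE sub-operations, and the toy modulus are all assumptions for the sketch.

```python
class ProcessorCore:
    """Illustrative model of the disclosed split: homomorphic encryption (HE)
    instructions dispatch to dedicated extension components, while all other
    instructions take the general-purpose path."""

    MODULUS = 2**13 - 1  # toy modulus, purely illustrative

    def __init__(self):
        # Each ISA extension component performs one HE-related sub-operation.
        # Opcode names and sub-operations here are hypothetical.
        self.he_components = {
            "he.modadd": lambda a, b: [(x + y) % self.MODULUS
                                       for x, y in zip(a, b)],
            "he.modmul": lambda a, b: [(x * y) % self.MODULUS
                                       for x, y in zip(a, b)],
        }

    def execute(self, opcode, a, b):
        if opcode in self.he_components:
            # Homomorphic encryption instruction execution module
            return self.he_components[opcode](a, b)
        if opcode == "add":
            # General-purpose instruction execution module
            return [x + y for x, y in zip(a, b)]
        raise ValueError(f"unknown opcode: {opcode}")
```

In the actual disclosure the components are hardware units and the vertically stacked memory serves as their cache or scratchpad; the dictionary dispatch above only mirrors the routing between the two execution modules.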
  • Publication number: 20240005133
    Abstract: This application describes a hardware and software design for quantization in GNN computation. An exemplary method may include: receiving a graph comprising a plurality of nodes respectively represented by a plurality of feature vectors; segmenting the plurality of feature vectors into a plurality of sub-vectors and grouping the plurality of sub-vectors into a plurality of groups of sub-vectors; performing vector clustering on each of the plurality of groups of sub-vectors to generate a plurality of centroids as a codebook; encoding each of the plurality of feature vectors to obtain a plurality of index maps by quantizing sub-vectors within each feature vector based on the codebook, wherein each index map occupies a smaller storage space than the corresponding feature vector does; and storing the plurality of index maps as an assignment table instead of the plurality of feature vectors to represent the plurality of nodes for GNN computation.
    Type: Application
    Filed: August 30, 2022
    Publication date: January 4, 2024
    Inventors: Linyong HUANG, Zhe ZHANG, Shuangchen LI, Hongzhong ZHENG
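The encoding pipeline in the abstract — segment each feature vector into sub-vectors, cluster each group to obtain centroids as a codebook, then store per-group centroid indices as an assignment table — can be sketched in NumPy. This is a minimal product-quantization-style sketch under assumptions (a toy k-means, equal-width groups, function names invented for illustration), not the patented hardware/software design itself.

```python
import numpy as np

def build_codebook(subvectors, k, iters=10, seed=0):
    # Minimal k-means over one group of sub-vectors; returns k centroids.
    rng = np.random.default_rng(seed)
    centroids = subvectors[rng.choice(len(subvectors), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(subvectors[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(k):
            members = subvectors[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids

def product_quantize(features, num_groups, k):
    # Segment each feature vector into num_groups sub-vectors, cluster each
    # group into k centroids (the codebook), and encode every vector as its
    # per-group nearest-centroid indices (the index map).
    n, d = features.shape
    sub_dim = d // num_groups
    codebooks, index_maps = [], []
    for g in range(num_groups):
        sub = features[:, g * sub_dim:(g + 1) * sub_dim]
        cb = build_codebook(sub, k)
        codebooks.append(cb)
        dists = np.linalg.norm(sub[:, None] - cb[None], axis=2)
        index_maps.append(dists.argmin(axis=1))
    # The assignment table (n x num_groups small integers) stands in for the
    # full n x d floating-point feature matrix during GNN computation.
    return np.stack(codebooks), np.stack(index_maps, axis=1)
```

For example, 64 nodes with 16-dimensional features split into 4 groups of 8 centroids compress each 16-float vector into 4 small indices, which is where the claimed storage saving comes from.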
  • Publication number: 20220051086
    Abstract: The present disclosure provides an accelerator for processing a vector or matrix operation. The accelerator comprises a vector processing unit comprising a plurality of computation units having circuitry configured to process a vector operation in parallel; a matrix multiplication unit comprising a first matrix multiplication operator, a second matrix multiplication operator, and an accumulator, the first matrix multiplication operator and the second matrix multiplication operator having circuitry configured to process a matrix operation and the accumulator having circuitry configured to accumulate output results of the first matrix multiplication operator and the second matrix multiplication operator; and a memory storing input data for the vector operation or the matrix operation and being configured to communicate with the vector processing unit and the matrix multiplication unit.
    Type: Application
    Filed: July 22, 2021
    Publication date: February 17, 2022
    Inventors: Fei XUE, Wei HAN, Yuhao WANG, Fei SUN, Lide DUAN, Shuangchen LI, Dimin NIU, Tianchan GUAN, Linyong HUANG, Zhaoyang DU, Hongzhong ZHENG
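The matrix path the abstract describes — two matrix multiplication operators whose output results are combined by an accumulator, beside a vector unit that processes element-wise operations in parallel — can be summarized functionally. This is a NumPy sketch of the dataflow only, with illustrative function names; the disclosure claims circuitry, not software.

```python
import numpy as np

def matrix_unit(a1, b1, a2, b2):
    # Two matrix multiplication operators produce partial results...
    partial1 = a1 @ b1
    partial2 = a2 @ b2
    # ...and the accumulator sums their outputs.
    return partial1 + partial2

def vector_unit(x, y):
    # The vector processing unit applies an element-wise operation across
    # its computation units in parallel; NumPy vectorization stands in for
    # the parallel hardware lanes here.
    return np.maximum(x + y, 0.0)
```

Functionally, the accumulator lets one result matrix be built from two concurrent matrix multiplications, e.g. when a large operand is split in half along the inner dimension.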