Patents by Inventor Chenglong Zeng

Chenglong Zeng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230307036
    Abstract: The present disclosure provides storage and access methods for parameters in a streaming AI accelerator chip, and relates to the technical field of artificial intelligence. The streaming-based data buffer comprises: a plurality of banks, different banks being configured to store different data; and a data read circuit configured to receive a read control signal and a read address corresponding to a computation task, and, in the case where the read control signal corresponds to a first read mode, determine n banks from the plurality of banks based on the read control signal and read first data required for performing the computation task in parallel from the n banks based on the read address, the first data comprising n pieces of data corresponding to the n banks in a one-to-one correspondence, where n≥2 and n is a positive integer.
    Type: Application
    Filed: March 16, 2023
    Publication date: September 28, 2023
    Inventors: Chenglong Zeng, Kuen Hung Tsoi, Xinyu Niu
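
    A minimal behavioral sketch (in Python) of the parallel bank read described in this abstract; the names BankedBuffer and PARALLEL_MODE, the bank count, and the access pattern are illustrative assumptions, not details from the filing:

        # Hypothetical model: in the "first read mode", one read address is
        # applied to n banks at once, returning n words per access (n >= 2).
        PARALLEL_MODE = 0

        class BankedBuffer:
            def __init__(self, num_banks: int, depth: int):
                self.banks = [[0] * depth for _ in range(num_banks)]

            def write(self, bank: int, addr: int, value: int) -> None:
                self.banks[bank][addr] = value

            def read(self, mode: int, n: int, addr: int) -> list:
                if mode == PARALLEL_MODE:
                    assert n >= 2
                    # Same address, n banks, one access: n words in parallel.
                    return [self.banks[b][addr] for b in range(n)]
                raise NotImplementedError("other read modes not modeled here")

        buf = BankedBuffer(num_banks=4, depth=16)
        for b in range(4):
            buf.write(b, addr=3, value=10 * b)
        print(buf.read(PARALLEL_MODE, n=4, addr=3))  # -> [0, 10, 20, 30]
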
  • Publication number: 20230205607
    Abstract: A data stream architecture-based accelerator includes a storage unit, a read-write address generation unit and a computing unit. The storage unit includes a plurality of banks. The read-write address generation unit is used for generating storage unit read-write addresses according to a preset read-write parallelism, determining target banks in the storage unit according to the storage unit read-write addresses and reading to-be-processed data from the target banks for operations in the computing unit. The computing unit includes a plurality of data paths and is configured to determine target data paths according to a preset computing parallelism so that the target data paths can perform operations on the to-be-processed data to obtain processed data, and then store the processed data into the target banks according to the storage unit read-write addresses.
    Type: Application
    Filed: December 26, 2022
    Publication date: June 29, 2023
    Inventors: Chenglong Zeng, Kuen Hung Tsoi, Xinyu Niu
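
    A short illustrative sketch of the bank-targeting address generation this abstract describes; the bank-interleaved layout and all names here are assumptions made for illustration, not the patent's own scheme:

        # Given a preset parallelism P, map each flat address to a
        # (bank, offset) pair so that P consecutive words land in P
        # distinct banks and can be read or written in parallel.
        def generate_addresses(base: int, parallelism: int, num_banks: int):
            """Yield (bank, offset) targets for `parallelism` consecutive words."""
            assert parallelism <= num_banks
            for i in range(parallelism):
                flat = base + i
                yield flat % num_banks, flat // num_banks  # interleaved layout

        # With 8 banks and parallelism 4, addresses 12..15 hit 4 distinct banks:
        print(list(generate_addresses(base=12, parallelism=4, num_banks=8)))
        # -> [(4, 1), (5, 1), (6, 1), (7, 1)]
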
  • Publication number: 20230128529
    Abstract: An acceleration system includes: a direct memory accessor configured to store a computation graph, a first data stream lake buffer and a second data stream lake buffer, the first data stream lake buffer being configured to cache the computation graph; an arithmetic unit configured to operate on an i-th layer of computing nodes of the computation graph to obtain an (i+1)-th layer of computing nodes; and a first fan-out device configured to replicate the (i+1)-th layer of computing nodes and store the copies in the direct memory accessor and the second data stream lake buffer, respectively. The arithmetic unit extracts the (i+1)-th layer of computing nodes from the second data stream lake buffer to obtain an (i+2)-th layer of computing nodes, and the above steps are repeated until the n-th layer of computing nodes is obtained, where 1≤i≤n-3, n≥4, and i and n are positive integers.
    Type: Application
    Filed: December 22, 2022
    Publication date: April 27, 2023
    Inventors: Chenglong Zeng, Yuanchao Li, Kuen Hung Tsoi, Xinyu Niu
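
    A rough software analogue of the flow in this abstract: each computed layer is fanned out to both backing storage (the "direct memory accessor") and a fast on-chip buffer (the "data stream lake buffer"), so the next iteration reads from the fast buffer instead of external memory. The compute step and all names below are placeholders:

        def compute_next_layer(nodes):
            return [n + 1 for n in nodes]  # stand-in for the arithmetic unit

        def run(graph_layer_1, n_layers):
            dma_storage = [graph_layer_1]   # durable copy of every layer
            lake_buffer = graph_layer_1     # fast buffer feeding the next step
            for _ in range(n_layers - 1):
                nxt = compute_next_layer(lake_buffer)  # arithmetic unit
                dma_storage.append(list(nxt))          # fan-out copy 1: DMA
                lake_buffer = nxt                      # fan-out copy 2: lake buffer
            return dma_storage

        # Layers 1..4, each derived from the previous one via the lake buffer:
        print(run([0, 0, 0], n_layers=4))
        # -> [[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]]
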
  • Publication number: 20230128421
    Abstract: An embodiment of the present application discloses a neural network accelerator, including: a convolution calculation module, which is used to perform a convolution operation on input data fed into a preset neural network to obtain first output data; a tail calculation module, which is used to perform a calculation on the first output data to obtain second output data; a storage module, which is used to cache the input data and the second output data; and a first control module, which is used to transmit the first output data to the tail calculation module. The convolution calculation module includes a plurality of convolution calculation units, the tail calculation module includes a plurality of tail calculation units, the first control module includes a plurality of first control units, and at least two convolution calculation units are connected to one tail calculation unit through one first control unit.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 27, 2023
    Inventors: Chenglong Zeng, Yuanchao Li, Kuen Hung Tsoi, Xinyu Niu
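
    A sketch of the many-to-one wiring this abstract describes: several convolution units share one tail (post-processing) unit through a control unit that forwards their outputs. The arithmetic, the names, and the 2:1 ratio used here are illustrative assumptions:

        def conv_unit(x):
            return 2 * x      # placeholder for a real convolution

        def tail_unit(y):
            return max(y, 0)  # e.g., a ReLU-style tail calculation

        def control_unit(outputs):
            # Serializes the outputs of the attached convolution units
            # toward the single tail unit they share.
            for y in outputs:
                yield tail_unit(y)

        conv_outputs = [conv_unit(x) for x in (-3, 5)]  # two conv units, one tail
        print(list(control_unit(conv_outputs)))          # -> [0, 10]
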
  • Patent number: D950111
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: April 26, 2022
    Inventor: Chenglong Zeng
  • Patent number: D1002901
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: October 24, 2023
    Inventor: Chenglong Zeng