Patents by Inventor Qilin ZHENG

Qilin ZHENG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11914347
    Abstract: A method includes selecting one of a first safety architecture and a second safety architecture of a protection system configured to monitor a machine. The protection system includes an input base, a controller base and an output base. The selecting includes selecting one of a first voting logic associated with the first safety architecture and a second voting logic associated with the second safety architecture. The controller base is configured to execute the selected voting logic. The method also includes configuring the protection system including a plurality of processing channels to operate in one of a first configuration associated with the first safety architecture and a second configuration associated with the second safety architecture. The configuring includes altering the number of processing channels releasably coupled to the protection system and a hardware relay output in the protection system.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: February 27, 2024
    Assignee: GE Infrastructure Technology LLC
    Inventors: Peigen Zheng, Qiang Bai, Lifeng Wang, Qilin Xue
  • Patent number: 11822617
    Abstract: Embodiments of apparatus and method for matrix multiplication using processing-in-memory (PIM) are disclosed. In an example, an apparatus for matrix multiplication includes an array of PIM blocks in rows and columns, a controller, and an accumulator. Each PIM block is configured into a computing mode or a memory mode. The controller is configured to divide the array of PIM blocks into a first set of PIM blocks each configured into the memory mode and a second set of PIM blocks each configured into the computing mode. The first set of PIM blocks are configured to store a first matrix, and the second set of PIM blocks are configured to store a second matrix and calculate partial sums of a third matrix based on the first and second matrices. The accumulator is configured to output the third matrix based on the partial sums of the third matrix.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: November 21, 2023
    Assignee: NEONEXUS PTE. LTD.
    Inventor: Qilin Zheng
  • Patent number: 11797643
    Abstract: Embodiments of apparatus and method for matrix multiplication using processing-in-memory (PIM) are disclosed. In an example, an apparatus for matrix multiplication includes an array of tiles that each include one or more PIM blocks. A PIM block may include a hybrid-mode PIM block that may be configured into a digital mode or an analog mode. The PIM block configured into digital mode may perform operations associated with depth-wise (DW) convolution. On the other hand, a PIM block configured into analog mode may perform operations associated with point-wise (PW) convolution. A controller may be used to configure the PIM block into either digital mode or analog mode, depending on the computations.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: October 24, 2023
    Assignee: NEONEXUS PTE. LTD.
    Inventor: Qilin Zheng
  • Publication number: 20220129519
    Abstract: Embodiments of apparatus and method for matrix multiplication using processing-in-memory (PIM) are disclosed. In an example, an apparatus for matrix multiplication includes an array of tiles that each include one or more PIM blocks. A PIM block may include a hybrid-mode PIM block that may be configured into a digital mode or an analog mode. The PIM block configured into digital mode may perform operations associated with depth-wise (DW) convolution. On the other hand, a PIM block configured into analog mode may perform operations associated with point-wise (PW) convolution. A controller may be used to configure the PIM block into either digital mode or analog mode, depending on the computations.
    Type: Application
    Filed: November 9, 2020
    Publication date: April 28, 2022
    Inventor: Qilin Zheng
  • Publication number: 20220012303
    Abstract: Embodiments of apparatus and method for matrix multiplication using processing-in-memory (PIM) are disclosed. In an example, an apparatus for matrix multiplication includes an array of PIM blocks in rows and columns, a controller, and an accumulator. Each PIM block is configured into a computing mode or a memory mode. The controller is configured to divide the array of PIM blocks into a first set of PIM blocks each configured into the memory mode and a second set of PIM blocks each configured into the computing mode. The first set of PIM blocks are configured to store a first matrix, and the second set of PIM blocks are configured to store a second matrix and calculate partial sums of a third matrix based on the first and second matrices. The accumulator is configured to output the third matrix based on the partial sums of the third matrix.
    Type: Application
    Filed: September 29, 2020
    Publication date: January 13, 2022
    Inventor: Qilin Zheng
  • Patent number: 11216375
    Abstract: A data caching circuit and method are provided. The circuit is configured to cache data for a feature map calculated by a neural network, wherein a size of a convolution kernel of the neural network is K*K data, and a window corresponding to the convolution kernel slides at a step of S in the feature map, where K is a positive integer and S is a positive integer, the circuit comprising: a cache comprising K caching units, each caching unit being configured to respectively store a plurality of rows of the feature map, the plurality of rows comprising a corresponding row in every K consecutive rows of the feature map.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: January 4, 2022
    Assignee: Hangzhou Zhicun Intelligent Technology Co., Ltd.
    Inventors: Qilin Zheng, Shaodi Wang
  • Publication number: 20210263849
    Abstract: A data caching circuit and method are provided. The circuit is configured to cache data for a feature map calculated by a neural network, wherein a size of a convolution kernel of the neural network is K*K data, and a window corresponding to the convolution kernel slides at a step of S in the feature map, where K is a positive integer and S is a positive integer, the circuit comprising: a cache comprising K caching units, each caching unit being configured to respectively store a plurality of rows of the feature map, the plurality of rows comprising a corresponding row in every K consecutive rows of the feature map.
    Type: Application
    Filed: April 15, 2020
    Publication date: August 26, 2021
    Inventors: Qilin ZHENG, Shaodi WANG
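
The PIM matrix-multiplication abstracts (patent 11822617 and publication 20220012303) describe storing one matrix in memory-mode blocks, computing partial sums of the product in computing-mode blocks, and combining those partial sums in an accumulator. The following is a minimal Python sketch of that dataflow only; the function name, block size, and use of NumPy are illustrative assumptions, not details from the patent.

```python
import numpy as np

def pim_matmul(a, b, block=2):
    """Sketch of the tiled PIM matrix multiply: `a` plays the role of the
    first matrix (held in memory-mode blocks), `b` the second matrix (held
    in computing-mode blocks). Each slice of the shared dimension yields a
    partial sum of the third matrix; the accumulator adds them together."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n))           # accumulator output (the "third matrix")
    for k0 in range(0, k, block):  # one pass per set of computing-mode blocks
        # partial sum contributed by this block of the shared dimension
        partial = a[:, k0:k0 + block] @ b[k0:k0 + block, :]
        c += partial               # accumulator combines partial sums
    return c
```

The per-block partial sums model why the accumulator is a separate component: each computing-mode block only ever sees a slice of the operands.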
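
The hybrid-mode abstracts (patent 11797643 and publication 20220129519) route depth-wise convolution to digital-mode blocks and point-wise convolution to analog-mode blocks. A rough Python sketch of that split is below; it models only the arithmetic each mode performs and the controller's dispatch decision. All function names are hypothetical, and the digital/analog distinction is reduced to two ordinary software paths.

```python
import numpy as np

def depthwise_conv(x, k):
    # "digital mode" sketch: each channel convolved with its own KxK kernel
    c, h, w = x.shape
    kc, kh, kw = k.shape
    assert c == kc
    out = np.zeros((c, h - kh + 1, w - kw + 1))
    for ch in range(c):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[ch, i, j] = np.sum(x[ch, i:i + kh, j:j + kw] * k[ch])
    return out

def pointwise_conv(x, k):
    # "analog mode" sketch: a 1x1 convolution is a matrix multiply over channels
    c, h, w = x.shape
    out_c, in_c = k.shape
    assert c == in_c
    return (k @ x.reshape(c, h * w)).reshape(out_c, h, w)

def dispatch(op, x, k):
    # controller sketch: DW work goes to the digital path, PW to the analog path
    return depthwise_conv(x, k) if op == "dw" else pointwise_conv(x, k)
```

Point-wise convolution reduces to a dense matrix multiply, which is why it maps naturally onto an analog crossbar-style block, while the per-channel depth-wise pass fits a digital block.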
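
The data-caching abstracts (patent 11216375 and publication 20210263849) describe K caching units, each holding a corresponding row out of every K consecutive rows of the feature map, so a full K-row convolution window is always spread across the K units. A small Python sketch of that row-interleaving scheme, under the assumption that row r maps to unit r mod K (one natural reading of "a corresponding row in every K consecutive rows"):

```python
def cache_rows(feature_map, K):
    """Distribute feature-map rows across K caching units: row r is stored
    in unit r % K, so any K consecutive rows (one window height) land in
    K distinct units and can be read out in parallel."""
    units = [[] for _ in range(K)]
    for r, row in enumerate(feature_map):
        units[r % K].append(row)
    return units

def window(units, top_row, K):
    # reassemble the K rows of the window whose top row is `top_row`
    return [units[r % K][r // K] for r in range(top_row, top_row + K)]
```

Because consecutive rows never share a unit, sliding the window down by the stride S only requires fetching new rows into the unit being recycled, rather than re-reading the whole window.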