Patents by Inventor Zhijiong Luo

Zhijiong Luo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11663454
    Abstract: A digital integrated circuit with embedded memory for neural network inferring may include a controller and a matrix of processing blocks with cyclic bidirectional interconnections, where each processing block is coupled to 4 neighboring processing blocks regardless of its position in the matrix. A cyclic bidirectional interconnection may transmit every processing block's output to its upper, lower, left and right neighboring blocks, or to its cyclic neighbors of the same row or column in place of any missing upper, lower, left or right neighbors. Each processing block may include invariant word buffers, variant word buffers, a multiplexer, and a processing unit. The multiplexer may select one of the 4 neighboring processing blocks' outputs. The processing unit may accept as inputs the multiplexer's selected value, a selected value from the variant word buffers and a selected value from the invariant word buffers, and produce an output that serves as the processing block's output.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: May 30, 2023
    Assignee: Aspiring Sky Co. Limited
    Inventors: Yujie Wen, Zhijiong Luo
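The cyclic neighbor rule described in the abstract above (an edge block wraps around to the far end of its own row or column when an upper, lower, left or right neighbor is missing) is standard torus addressing. A minimal Python sketch of that index arithmetic; the function and key names are illustrative, not from the patent:

```python
def cyclic_neighbors(row, col, rows, cols):
    """Return the (row, col) indices of a processing block's four
    neighbors, wrapping cyclically within the same row or column
    when a neighbor would fall outside the matrix."""
    return {
        "up":    ((row - 1) % rows, col),
        "down":  ((row + 1) % rows, col),
        "left":  (row, (col - 1) % cols),
        "right": (row, (col + 1) % cols),
    }
```

For an interior block this reduces to the four adjacent blocks; only edge and corner blocks take the wraparound path.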
  • Patent number: 11568219
    Abstract: Technologies are described for multiple accelerators for a neural network, and methods thereof. In an example implementation, a neural network can be mapped to a system comprising a control unit and multiple accelerators, where the control unit controls each accelerator's behavior and sends data to and receives data from each accelerator through the interconnections. Sub-networks may be created by grouping several network layers or dividing a network layer into multiple sub-layers, depending on the data to be processed and the memory capacity of each accelerator. Accelerators have internal storage and thus do not require external memory.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: January 31, 2023
    Assignee: Aspiring Sky Co. Limited
    Inventors: Yujie Wen, Zhijiong Luo
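The sub-network creation described above (grouping consecutive layers until an accelerator's internal storage is full) can be modeled as a greedy partition. A sketch under the assumption that layer footprints and accelerator capacity are given in the same units; the names are invented for illustration and a layer larger than the capacity would need the abstract's sub-layer split instead (not shown):

```python
def group_layers(layer_sizes, capacity):
    """Greedily group consecutive layers into sub-networks so that each
    group's total memory footprint fits one accelerator's internal
    storage. Returns a list of groups (each a list of layer sizes)."""
    groups, current, used = [], [], 0
    for size in layer_sizes:
        if used + size > capacity and current:
            groups.append(current)        # close the full sub-network
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        groups.append(current)
    return groups
```

Each resulting group would then be assigned to one accelerator by the control unit.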
  • Patent number: 11514136
    Abstract: A circuit for performing parallel convolutional computation for features and kernels of variable sizes may receive inputs of an m×n matrix of feature data, an m×n matrix of convolution data, and a (2m−1)×(2n−1) matrix of kernel data. A feature manager of the circuit may hold m rows of n data buffers storing the input feature data and rotating values between rows during one restricted convolution calculation. A kernel manager of the circuit may hold a (2m−1)×(2n−1) matrix of data buffers storing the input kernel data in the buffers and cyclically rotating values in upwards, downwards, leftwards and rightwards directions for different restricted convolution calculations. A row convolution engine of the circuit may hold m row convolution processors, each storing and updating input convolution data by multiplication-and-accumulation (MAC) operations on its input feature and kernel data rows. The circuit produces accumulated convolutional data.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: November 29, 2022
    Assignee: Aspiring Sky Co. Limited
    Inventors: Yujie Wen, Zhijiong Luo
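Two of the building blocks named in the abstract above, the per-row multiply-and-accumulate update and the cyclic rotation of buffer values, can be sketched in a few lines of Python. These are simplified illustrative models, not the patented circuit:

```python
def row_mac(conv_row, feature_row, kernel_row):
    """One step of a row convolution processor: update each stored
    convolution value by multiplying the aligned feature and kernel
    entries and accumulating into the running sum."""
    return [c + f * k for c, f, k in zip(conv_row, feature_row, kernel_row)]

def rotate(buffer_row, steps):
    """Cyclically rotate a buffer row left by `steps` positions, as the
    kernel manager does between restricted convolution calculations."""
    steps %= len(buffer_row)
    return buffer_row[steps:] + buffer_row[:steps]
```

A full convolution would repeat these two steps: rotate the kernel (or feature) buffers, then apply `row_mac` to accumulate the next partial products.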
  • Patent number: 11361813
    Abstract: Technologies for a three-dimensional (3D) multi-bit non-volatile dynamic random access memory (nvDRAM) device, which may include a DRAM array having a plurality of DRAM cells with single or dual transistor implementations and a non-volatile memory (NVM) array having a plurality of NVM cells with single or dual transistor implementations, where the DRAM array and the NVM array are arranged by rows of word lines and columns of bit lines. The nvDRAM device may also include one or more isolation devices coupled between the DRAM array and the NVM array and configured to control the connection between the dynamic random access bit lines (BLs) and the non-volatile BLs. The word lines run horizontally and may enable selection of one word of memory data, whereas the bit lines run vertically and may be connected to storage cells of different memory addresses.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: June 14, 2022
    Assignee: Aspiring Sky Co. Limited
    Inventors: Zhijiong Luo, Xuntong Zhao
  • Patent number: 11270748
    Abstract: Technologies for various memory structures for artificial intelligence (AI) applications and methods thereof are described. An XNOR circuit along with a sense amplifier may be combined with an array (or multiple arrays) of memory, such as non-volatile memory (NVM) or an NVM/SRAM combination, to perform an XNOR operation on the data read from the memory. Various versions may include different connections allowing simplification of circuitry or timing. In some examples, the memory array may include programmable resistor/switch device combinations, or multiple columns connected to a single XNOR+SA circuit.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: March 8, 2022
    Assignee: Aspiring Sky Co., Limited
    Inventors: Zhijiong Luo, Xuntong Zhao
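The XNOR-plus-sense-amplifier operation described above is the core primitive of binarized neural network inference: XNOR two bit vectors, count the matching bits, and recover a signed dot product. A small functional model, assuming a +1/−1 encoding of the stored bits (the encoding is an assumption for illustration, not stated in the abstract):

```python
def xnor_popcount(word_a, word_b, bits):
    """Model of an XNOR + sense-amplifier read: XNOR the two bit
    vectors, popcount the matching bits, and map the count to the
    +1/-1 dot product, which equals 2*matches - bits."""
    mask = (1 << bits) - 1
    matches = bin(~(word_a ^ word_b) & mask).count("1")
    return 2 * matches - bits
```

Identical words give +bits, complementary words give −bits, matching the range of a +1/−1 dot product.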
  • Patent number: 11087823
    Abstract: Technologies for a multi-bit non-volatile dynamic random access memory (nvDRAM) device, which may include a DRAM array having a plurality of DRAM cells with single or dual transistor implementations and a non-volatile memory (NVM) array having a plurality of NVM cells with single or dual transistor implementations, where the DRAM array and the NVM array are arranged by rows of word lines and columns of bit lines. The nvDRAM device may also include one or more isolation devices coupled between the DRAM array and the NVM array and configured to control the connection between the dynamic random access bit lines (BLs) and the non-volatile BLs. The word lines run horizontally and may enable selection of one word of memory data, whereas the bit lines run vertically and may be connected to storage cells of different memory addresses.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: August 10, 2021
    Assignee: Aspiring Sky Co. Limited
    Inventors: Zhijiong Luo, Xuntong Zhao
  • Publication number: 20210134353
    Abstract: Technologies for a three-dimensional (3D) multi-bit non-volatile dynamic random access memory (nvDRAM) device, which may include a DRAM array having a plurality of DRAM cells with single or dual transistor implementations and a non-volatile memory (NVM) array having a plurality of NVM cells with single or dual transistor implementations, where the DRAM array and the NVM array are arranged by rows of word lines and columns of bit lines. The nvDRAM device may also include one or more isolation devices coupled between the DRAM array and the NVM array and configured to control the connection between the dynamic random access bit lines (BLs) and the non-volatile BLs. The word lines run horizontally and may enable selection of one word of memory data, whereas the bit lines run vertically and may be connected to storage cells of different memory addresses.
    Type: Application
    Filed: January 8, 2021
    Publication date: May 6, 2021
    Applicant: Aspiring Sky Co., Limited
    Inventors: Zhijiong LUO, Xuntong ZHAO
  • Publication number: 20200364288
    Abstract: A circuit for performing parallel convolutional computation for features and kernels of variable sizes may receive inputs of an m×n matrix of feature data, an m×n matrix of convolution data, and a (2m−1)×(2n−1) matrix of kernel data. A feature manager of the circuit may hold m rows of n data buffers storing the input feature data and rotating values between rows during one restricted convolution calculation. A kernel manager of the circuit may hold a (2m−1)×(2n−1) matrix of data buffers storing the input kernel data in the buffers and cyclically rotating values in upwards, downwards, leftwards and rightwards directions for different restricted convolution calculations. A row convolution engine of the circuit may hold m row convolution processors, each storing and updating input convolution data by multiplication-and-accumulation (MAC) operations on its input feature and kernel data rows. The circuit produces accumulated convolutional data.
    Type: Application
    Filed: May 14, 2020
    Publication date: November 19, 2020
    Applicant: Aspiring Sky Co. Limited
    Inventors: Yujie WEN, Zhijiong LUO
  • Publication number: 20200364544
    Abstract: Technologies are described for multiple accelerators for a neural network, and methods thereof. In an example implementation, a neural network can be mapped to a system comprising a control unit and multiple accelerators, where the control unit controls each accelerator's behavior and sends data to and receives data from each accelerator through the interconnections. Sub-networks may be created by grouping several network layers or dividing a network layer into multiple sub-layers, depending on the data to be processed and the memory capacity of each accelerator. Accelerators have internal storage and thus do not require external memory.
    Type: Application
    Filed: May 14, 2020
    Publication date: November 19, 2020
    Applicant: Aspiring Sky Co. Limited
    Inventors: Yujie WEN, Zhijiong LUO
  • Patent number: 10811096
    Abstract: A memory system may include one or more hybrid fast memory blocks with m-bit fast volatile random access memory (RAM) cells and N×m bit non-volatile memory (NVM) cells. The memory system may also include one or more other memory blocks with NVM cells. The fast flash memory may buffer the NVM data improving access speed. The different memory blocks may utilize a single, unified interface to communicate with other devices/circuits. The unified interface may be a parallel interface (e.g., flash memory/SRAM combinations), or the unified interface may be a pipeline interface (e.g., system on a chip “SOC” implementations) supporting fast memory read/write operations.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: October 20, 2020
    Assignee: Aspiring Sky Co. Limited
    Inventors: Zhijiong Luo, Shu Wang, Xiaoming Jin
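The buffering idea above, a small fast memory fronting a larger NVM array so that repeated accesses hit the fast path, can be modeled in software. A toy sketch; the class and method names are invented for illustration and do not describe the patented circuit:

```python
class HybridBlock:
    """Toy model of a hybrid block: an m-word fast RAM buffer in front
    of an NVM array of nvm_rows rows. Reads hit the buffer when the
    requested NVM row is already loaded; otherwise the whole row is
    fetched into the buffer first (the slow path)."""

    def __init__(self, nvm_rows, m):
        self.nvm = [[0] * m for _ in range(nvm_rows)]
        self.buffer = [0] * m
        self.loaded_row = None

    def read(self, row, col):
        if row != self.loaded_row:            # slow path: load NVM row
            self.buffer = list(self.nvm[row])
            self.loaded_row = row
        return self.buffer[col]               # fast path: buffered read
```

Sequential reads within one row pay the NVM latency once, which is the access-speed improvement the abstract attributes to the fast buffer.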
  • Publication number: 20200311530
    Abstract: A digital integrated circuit with embedded memory for neural network inferring may include a controller and a matrix of processing blocks with cyclic bidirectional interconnections, where each processing block is coupled to 4 neighboring processing blocks regardless of its position in the matrix. A cyclic bidirectional interconnection may transmit every processing block's output to its upper, lower, left and right neighboring blocks, or to its cyclic neighbors of the same row or column in place of any missing upper, lower, left or right neighbors. Each processing block may include invariant word buffers, variant word buffers, a multiplexer, and a processing unit. The multiplexer may select one of the 4 neighboring processing blocks' outputs. The processing unit may accept as inputs the multiplexer's selected value, a selected value from the variant word buffers and a selected value from the invariant word buffers, and produce an output that serves as the processing block's output.
    Type: Application
    Filed: March 27, 2020
    Publication date: October 1, 2020
    Applicant: Aspiring Sky Co. Limited
    Inventors: Yujie WEN, Zhijiong LUO
  • Publication number: 20200251157
    Abstract: Technologies for various memory structures for artificial intelligence (AI) applications and methods thereof are described. An XNOR circuit along with a sense amplifier may be combined with an array (or multiple arrays) of memory, such as non-volatile memory (NVM) or an NVM/SRAM combination, to perform an XNOR operation on the data read from the memory. Various versions may include different connections allowing simplification of circuitry or timing. In some examples, the memory array may include programmable resistor/switch device combinations, or multiple columns connected to a single XNOR+SA circuit.
    Type: Application
    Filed: February 5, 2020
    Publication date: August 6, 2020
    Applicant: Aspiring Sky Co., Limited
    Inventors: Zhijiong LUO, Xuntong ZHAO
  • Publication number: 20200126610
    Abstract: Technologies for a multi-bit non-volatile dynamic random access memory (nvDRAM) device, which may include a DRAM array having a plurality of DRAM cells with single or dual transistor implementations and a non-volatile memory (NVM) array having a plurality of NVM cells with single or dual transistor implementations, where the DRAM array and the NVM array are arranged by rows of word lines and columns of bit lines. The nvDRAM device may also include one or more isolation devices coupled between the DRAM array and the NVM array and configured to control the connection between the dynamic random access bit lines (BLs) and the non-volatile BLs.
    Type: Application
    Filed: December 19, 2019
    Publication date: April 23, 2020
    Applicant: Aspiring Sky Co. Limited
    Inventors: Zhijiong LUO, Xuntong ZHAO
  • Patent number: 10559344
    Abstract: Technologies are generally described herein for a hybrid non-volatile memory structure that includes a number of SRAM buffers. SRAM access times may be achieved for non-volatile read/write operations by performing access-queue-buffered read/write operations first. The SRAM buffer may be shareable as a system SRAM. In other examples, a hybrid non-volatile memory according to some embodiments may include a high speed block and a high endurance block to store different types of data with different access needs. The hybrid non-volatile memory may also include a normal block to store data that changes infrequently.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: February 11, 2020
    Assignee: Aspiring Sky Co. Limited
    Inventors: Zhijiong Luo, Shu Wang, Xiaoming Jin
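The access-queue-buffered scheme above can be modeled as a write queue that absorbs stores at SRAM speed and commits them to slow NVM later; reads must check the queue first so buffered data stays visible. A toy sketch with invented names, not the patented structure:

```python
from collections import deque

class BufferedNVM:
    """Sketch of queue-buffered NVM writes: stores land in a fast
    queue immediately and drain to the slow NVM array in the
    background, so the writer sees SRAM-like access times."""

    def __init__(self, size):
        self.nvm = [0] * size
        self.queue = deque()

    def write(self, addr, value):      # fast path: enqueue only
        self.queue.append((addr, value))

    def read(self, addr):              # newest pending write wins
        for a, v in reversed(self.queue):
            if a == addr:
                return v
        return self.nvm[addr]

    def drain_one(self):               # background NVM commit
        if self.queue:
            addr, value = self.queue.popleft()
            self.nvm[addr] = value
```

Draining happens off the critical path, which is how the buffered operations hide the NVM write latency from the caller.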
  • Patent number: 10403342
    Abstract: A memory system includes a code flash and data flash merged flash memory, which may contain a code flash with a differential cell structure, a data flash with a single cell structure, decoder circuitry, a sense amplifier, and other suitable support circuitry. The code flash and data flash may be located in the same plane or in multiple planes. In some examples, the code flash may also be read while the data flash is performing a write operation, and vice versa.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: September 3, 2019
    Assignee: Aspiring Sky Co. Limited
    Inventors: Zhijiong Luo, Shu Wang, Xiaoming Jin
  • Patent number: 10402342
    Abstract: Technologies are described for re-configurable non-volatile memory structures and systems for FPGAs, as well as non-volatile static random access memory (nvSRAM) cells with multiple non-volatile memory (NVM) bits. The proposed structures may quickly switch/reconfigure look-up tables (LUTs) and/or reconfigure FPGA routings. Memory structures according to some embodiments may reduce the switching/reconfiguring times to one or a few clock cycles. Thus, fast or real-time FPGA reconfiguration is enabled and one LUT may serve multiple functions, so a fraction of a current FPGA may perform multiple functions, which may substantially reduce FPGA chip area. Structures according to embodiments may further provide simple routing for the entire system through re-configuration, and enhanced data security by avoiding external data transmission.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: September 3, 2019
    Assignee: Aspiring Sky Co., Limited
    Inventors: Zhijiong Luo, Xiaoming Jin, Shu Wang
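The fast LUT switching described above relies on holding several configurations in the NVM bits of each nvSRAM cell and copying one into the active SRAM bits on demand. A toy functional model of that idea; class, method, and variable names are illustrative, not from the patent:

```python
class MultiConfigLUT:
    """Toy model of an nvSRAM-backed LUT: several truth tables are
    held in non-volatile bits, and any one can be switched into the
    active SRAM bits in a single step, modeling reconfiguration in
    one (or a few) clock cycles."""

    def __init__(self, configs):
        self.configs = configs           # stored truth tables (NVM bits)
        self.active = list(configs[0])   # SRAM bits driving the logic

    def reconfigure(self, index):        # fast switch, no external load
        self.active = list(self.configs[index])

    def lookup(self, inputs):            # inputs: tuple of 0/1 bits
        addr = 0
        for bit in inputs:
            addr = (addr << 1) | bit
        return self.active[addr]
```

One physical LUT can thus serve as, say, an AND gate in one cycle and an OR gate the next, which is the multi-function reuse the abstract describes.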
  • Patent number: 10353715
    Abstract: Memory structures are provided, where a fast SRAM in an mNVSRAM block may serve as the buffer for a large-block NVM memory to increase the data exchange rate between computing units or processor cores and the large NVM memory. The mNVSRAM blocks may also provide a fast boot function, where a boot code may be stored in the NVM parts of the mNVSRAM block; due to the high-bandwidth communication between the fast SRAM part and the associated NVM memories, the boot code may be transferred into the fast SRAM in one or a few clock cycles, enabling a fast boot-up function. Similarly, code stored in the NVM parts of an mNVSRAM block may be transferred into the fast SRAM rapidly at wake-up time, enabling fast wake-up and avoiding a need to wake up any other memory part, which may also result in energy savings for the computing system.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: July 16, 2019
    Assignee: Aspiring Sky Co. Limited
    Inventors: Zhijiong Luo, Xiaoming Jin, Shu Wang, Zuqu Li
  • Patent number: 10354716
    Abstract: Technologies are generally described herein for static random access memory (SRAM) based memory structures and methods thereof such as multi-bit non-volatile static random-access memory (nvSRAM) with arrayed SRAM and NVM or SRAM buffered one time programmable (OTP) memories, RRAMs or other resistive RAMs.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: July 16, 2019
    Assignee: Aspiring Sky Co. Limited
    Inventors: Zhijiong Luo, Xiaoming Jin, Shu Wang
  • Publication number: 20180366170
    Abstract: A memory system includes a code flash and data flash merged flash memory, which may contain a code flash with a differential cell structure, a data flash with a single cell structure, decoder circuitry, a sense amplifier, and other suitable support circuitry. The code flash and data flash may be located in the same plane or in multiple planes. In some examples, the code flash may also be read while the data flash is performing a write operation, and vice versa.
    Type: Application
    Filed: June 19, 2018
    Publication date: December 20, 2018
    Applicant: Aspiring Sky Co. Limited
    Inventors: Zhijiong LUO, Shu WANG, Xiaoming JIN
  • Publication number: 20180336948
    Abstract: A memory system may include one or more hybrid fast memory blocks with m-bit fast volatile random access memory (RAM) cells and N×m bit non-volatile memory (NVM) cells. The memory system may also include one or more other memory blocks with NVM cells. The fast flash memory may buffer the NVM data improving access speed. The different memory blocks may utilize a single, unified interface to communicate with other devices/circuits. The unified interface may be a parallel interface (e.g., flash memory/SRAM combinations), or the unified interface may be a pipeline interface (e.g., system on a chip “SOC” implementations) supporting fast memory read/write operations.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 22, 2018
    Applicant: Aspiring Sky Co. Limited
    Inventors: Zhijiong LUO, Shu WANG, Xiaoming JIN