Patents by Inventor Xinjie Guo
Xinjie Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12347484
Abstract: A memory device includes non-volatile memory cells, source regions, and drain regions arranged in rows and columns. Respective ones of the columns of drain regions include first drain regions and second drain regions that alternate with each other. Respective ones of first lines electrically connect together the source regions in one of the rows of the source regions and are electrically isolated from the source regions in other rows of the source regions. Respective ones of second lines electrically connect together the first drain regions of one of the columns of drain regions and are electrically isolated from the second drain regions of the one column of drain regions. Respective ones of third lines electrically connect together the second drain regions of one of the columns of drain regions and are electrically isolated from the first drain regions of the one column of drain regions.
Type: Grant
Filed: April 28, 2023
Date of Patent: July 1, 2025
Assignees: Silicon Storage Technology, Inc., The Regents of the University of California
Inventors: Hieu Van Tran, Nhan Do, Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Vipin Tiwari, Mark Reiten
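The wiring scheme above alternates first and second drain regions within a column and ties each group to its own line. A minimal sketch of that index assignment, purely illustrative (the function name and even/odd convention are assumptions, not from the patent):

```python
def drain_line_assignment(num_rows):
    """For one column of drain regions in which first and second drain
    regions alternate, assign the alternating (here: even-index) first
    drains to the column's second line and the (odd-index) second drains
    to the column's third line, keeping the two groups electrically
    isolated from each other."""
    second_line = [r for r in range(num_rows) if r % 2 == 0]  # first drains
    third_line = [r for r in range(num_rows) if r % 2 == 1]   # second drains
    return second_line, third_line

# Six rows of drains in a column split across the two lines:
print(drain_line_assignment(6))  # ([0, 2, 4], [1, 3, 5])
```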
-
Publication number: 20250201773
Abstract: A computing-in-memory system, a packaging method for a computing-in-memory system, and an apparatus are provided. The computing-in-memory system includes: one or more first sub-chips integrated on a first side of a computing-in-memory chip, each of the one or more first sub-chips including one or more arrays of computing-in-memory cells of the computing-in-memory system, where the one or more arrays of computing-in-memory cells are configured to perform computations on received data; a second sub-chip integrated on a second side, opposite to the first side, of the computing-in-memory chip, the second sub-chip including a peripheral analog circuit IP core and a digital circuit IP core of the computing-in-memory chip; and an interface module configured to communicatively couple the second sub-chip to each of the one or more first sub-chips.
Type: Application
Filed: July 26, 2024
Publication date: June 19, 2025
Inventors: Yu Tian, Xinjie Guo, Xuguang Sun
-
Publication number: 20250203881
Abstract: A computing-in-memory system, a packaging method for a computing-in-memory system, and an apparatus are provided. The computing-in-memory system includes: one or more first chips that each include one or more arrays of computing-in-memory cells that are configured to perform computations on received data; a second chip that includes a peripheral analog circuit IP core and a digital circuit IP core; an interposer positioned between the one or more first chips and the second chip; and an interface module configured to communicatively couple the second chip to each first chip via the interposer. The interface module includes one or more sub-interface modules on each first chip, where the sub-interface modules are aligned with each other. The interposer includes a first portion aligned with the sub-interface modules on the one or more first chips and a second portion configured to arrange a communication path between the second chip and each first chip.
Type: Application
Filed: July 11, 2024
Publication date: June 19, 2025
Inventors: Xinjie Guo, Xuguang Sun, Yu Tian
-
Publication number: 20250199966
Abstract: A computing-in-memory system is provided, which includes: one or more first chips each integrated with one or more arrays of computing-in-memory cells of the computing-in-memory system that are configured to perform computations on received data; a second chip, on a first side of which a peripheral analog circuit IP core and a digital circuit IP core of the computing-in-memory system are integrated; and an interface module configured to communicatively couple the second chip to each first chip. The interface module includes one or more first sub-interface modules on each first chip and aligned with each other, and one or more second sub-interface modules integrated on a second side, opposite to the first side, of the second chip and aligned with the one or more first sub-interface modules. A communication path between the second chip and each first chip is integrated on the second side of the second chip.
Type: Application
Filed: July 26, 2024
Publication date: June 19, 2025
Inventors: Xuguang Sun, Xinjie Guo, Yu Tian
-
Publication number: 20250199998
Abstract: A computing-in-memory system, a packaging method for a computing-in-memory system, and an apparatus are provided. The computing-in-memory system includes: one or more first chips each integrated with one or more arrays of computing-in-memory cells of a computing-in-memory chip, where the one or more arrays of computing-in-memory cells are configured to perform computations on received data; a second chip integrated with a peripheral analog circuit IP core and a digital circuit IP core of the computing-in-memory system; a third chip between the one or more first chips and the second chip, the third chip including NAND memory; and an interface module configured to communicatively couple the one or more first chips, the second chip, and the third chip, such that the one or more first chips and the second chip have access to data stored in the NAND memory.
Type: Application
Filed: July 11, 2024
Publication date: June 19, 2025
Inventors: Shaodi Wang, Yu Tian, Xinjie Guo
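The abstract above describes compute chips and a peripheral chip sharing access to NAND memory on a third chip through an interface module. A behavioral sketch of that sharing relationship; all class and method names here are illustrative assumptions, not terminology from the publication:

```python
from dataclasses import dataclass, field

@dataclass
class NandChip:
    """The third chip: NAND memory modeled as a key/value store."""
    data: dict = field(default_factory=dict)

@dataclass
class InterfaceModule:
    """Couples the first chips and the second chip to the NAND chip."""
    nand: NandChip
    def read(self, key):
        return self.nand.data.get(key)
    def write(self, key, value):
        self.nand.data[key] = value

@dataclass
class ComputeChip:
    """A 'first chip' with computing-in-memory arrays; computes on data
    it fetches through the shared interface module."""
    iface: InterfaceModule
    def compute(self, key):
        weights = self.iface.read(key)
        return [w * 2 for w in weights]  # stand-in for an in-memory computation

iface = InterfaceModule(NandChip())
iface.write("w0", [1, 2, 3])             # e.g. the peripheral chip stores data
print(ComputeChip(iface).compute("w0"))  # [2, 4, 6]
```

The point of the sketch is only the topology: both chip types reach the same stored data through one interface module, as the abstract requires.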
-
Patent number: 12300313
Abstract: An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses. The synapses are configured to receive inputs and to generate therefrom outputs. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs.
Type: Grant
Filed: January 21, 2022
Date of Patent: May 13, 2025
Assignees: Silicon Storage Technology, Inc., The Regents of the University of California
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
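The cells described above multiply inputs by stored weights, and the neurons sum the results: in aggregate, a vector-by-matrix multiplication. A minimal numerical sketch of that operation, with weights modeled directly as a matrix (the values and dimensions are arbitrary, for illustration only):

```python
import numpy as np

# Hypothetical model: each cell stores a weight (set physically by the
# electrons on its floating gate); a column of cells feeds one neuron.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 synapse inputs -> 3 neurons
x = np.array([0.2, 0.5, 0.1, 0.9])       # the inputs applied to the array

# Each cell multiplies its input by its stored weight; each neuron sums
# the currents of its column: one vector-by-matrix multiplication.
outputs = x @ G
assert outputs.shape == (3,)
```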
-
Patent number: 12112798
Abstract: Numerous examples are disclosed for an output block coupled to a non-volatile memory array in a neural network, and associated methods. In one example, a circuit for converting a current in a neural network into an output voltage comprises a non-volatile memory cell comprising a word line terminal, a bit line terminal, and a source line terminal, wherein the bit line terminal receives the current; and a switch for selectively coupling the word line terminal to the bit line terminal; wherein when the switch is closed, the current flows into the non-volatile memory cell and the output voltage is provided on the bit line terminal.
Type: Grant
Filed: March 20, 2023
Date of Patent: October 8, 2024
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Patent number: 12106211
Abstract: Building blocks for implementing Vector-by-Matrix Multiplication (VMM) are implemented with analog circuitry including non-volatile memory devices (flash transistors) and using in-memory computation. In one example, improved performance and more accurate VMM is achieved in arrays including multi-gate flash transistors when computation uses a control gate or the combination of control gate and word line (instead of using the word line alone). In another example, very fast weight programming of the arrays is achieved using a novel programming protocol. In yet another example, higher density and faster array programming is achieved when the gate(s) responsible for erasing devices, or the source line, are re-routed across different rows, e.g., in a zigzag form. In yet another embodiment, a neural network is provided with nonlinear synaptic weights implemented with nonvolatile memory devices.
Type: Grant
Filed: April 27, 2018
Date of Patent: October 1, 2024
Assignee: The Regents of the University of California
Inventors: Dmitri Strukov, Farnood Merrikh Bayat, Mohammad Bavandpour, Mohammad Reza Mahmoodi, Xinjie Guo
-
Patent number: 12057160
Abstract: Numerous examples of summing circuits for a neural network are disclosed. In one example, a circuit for summing current received from a plurality of synapses in a neural network comprises a voltage source; a load coupled between the voltage source and an output node; a voltage clamp coupled to the output node for maintaining a voltage at the output node; and a plurality of synapses coupled between the output node and ground; wherein an output current flows through the output node, the output current equal to a sum of currents drawn by the plurality of synapses.
Type: Grant
Filed: March 20, 2023
Date of Patent: August 6, 2024
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
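The summing circuit above clamps the output node at a fixed voltage, so the output current is simply the sum of the currents drawn by the synapses. A toy sketch under that idealization (the function name and units are assumptions for illustration):

```python
def summed_output_current(synapse_currents):
    """Ideal clamped-node model: with the output node held at a fixed
    voltage by the clamp, the current through the node equals the sum
    of the currents drawn by the synapses tied between node and ground."""
    return sum(synapse_currents)

# Three synapses drawing 1.0, 2.5, and 0.5 (arbitrary current units):
total = summed_output_current([1.0, 2.5, 0.5])
print(total)  # 4.0
```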
-
Patent number: 11972795
Abstract: Numerous examples are disclosed for verifying a weight programmed into a selected non-volatile memory cell in a neural memory. In one example, a circuit comprises a digital-to-analog converter to convert a target weight comprising digital bits into a target voltage, a current-to-voltage converter to convert an output current from the selected non-volatile memory cell during a verify operation into an output voltage, and a comparator to compare the output voltage to the target voltage during a verify operation.
Type: Grant
Filed: March 10, 2023
Date of Patent: April 30, 2024
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
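The verify scheme above has three stages: a DAC turns the digital target weight into a target voltage, a current-to-voltage converter turns the cell's read current into an output voltage, and a comparator checks the two. A behavioral sketch of that flow; the reference voltage, bit width, transimpedance gain, and tolerance are all assumed values for illustration:

```python
def dac(target_code, vref=1.0, nbits=8):
    """Digital-to-analog conversion: map an integer code to a voltage."""
    return vref * target_code / (2**nbits - 1)

def current_to_voltage(i_cell, r_trans=1e4):
    """Current-to-voltage conversion with an assumed transimpedance gain."""
    return i_cell * r_trans

def verify(target_code, i_cell, tol=0.01):
    """Comparator stage: the programmed weight passes verify when the
    cell's output voltage matches the DAC target within a tolerance."""
    return abs(current_to_voltage(i_cell) - dac(target_code)) <= tol

# A cell reading 50 uA against a mid-scale 8-bit target code:
print(verify(128, 50e-6))  # True (0.5 V vs. ~0.502 V target)
```

In a real programming loop this comparison would gate further program pulses; here it only demonstrates the signal path the abstract describes.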
-
Patent number: 11853856
Abstract: An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses. The synapses are configured to receive inputs and to generate therefrom outputs. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs. Various algorithms for tuning the memory cells to contain the correct weight values are disclosed.
Type: Grant
Filed: January 18, 2020
Date of Patent: December 26, 2023
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Patent number: 11829859
Abstract: Numerous embodiments are disclosed for verifying a weight programmed into a selected non-volatile memory cell in a neural memory. In one embodiment, a circuit for verifying a weight programmed into a selected non-volatile memory cell in a neural memory comprises a converter for converting a target weight into a target current and a comparator for comparing the target current to an output current from the selected non-volatile memory cell during a verify operation. In another embodiment, a circuit for verifying a weight programmed into a selected non-volatile memory cell in a neural memory comprises a digital-to-analog converter for converting a target weight comprising digital bits into a target voltage, a current-to-voltage converter for converting an output current from the selected non-volatile memory cell during a verify operation into an output voltage, and a comparator for comparing the output voltage to the target voltage during a verify operation.
Type: Grant
Filed: April 16, 2021
Date of Patent: November 28, 2023
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Patent number: 11790208
Abstract: A number of circuits for use in an output block coupled to a non-volatile memory array in a neural network are disclosed. The embodiments include a circuit for converting an output current from a neuron in a neural network into an output voltage, a circuit for converting a voltage received on an input node into an output current, a circuit for summing current received from a plurality of neurons in a neural network, and a circuit for summing current received from a plurality of neurons in a neural network.
Type: Grant
Filed: April 22, 2021
Date of Patent: October 17, 2023
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20230252265
Abstract: A method of scanning N×N pixels using a vector-by-matrix multiplication array by (a) associating a filter of M×M pixels adjacent first vertical and horizontal edges, (b) providing values for the pixels associated with different respective rows of the filter to input lines of different respective N input line groups, (c) shifting the filter horizontally by X pixels, (d) providing values for the pixels associated with different respective rows of the horizontally shifted filter to input lines, of different respective N input line groups, which are shifted by X input lines, (e) repeating steps (c) and (d) until a second vertical edge is reached, (f) shifting the filter horizontally to be adjacent the first vertical edge, and shifting the filter vertically by X pixels, (g) repeating steps (b) through (e) for the vertically shifted filter, and (h) repeating steps (f) and (g) until a second horizontal edge is reached.
Type: Application
Filed: March 24, 2023
Publication date: August 10, 2023
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
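The scan above slides an M×M filter across an N×N pixel array by X pixels at a time, first along a row to the second vertical edge, then back to the first vertical edge and down by X, until the second horizontal edge is reached: the same traversal as a strided convolution window. A sketch of just that stepping logic (the function name is an assumption; variable names follow the abstract):

```python
def filter_positions(N, M, X):
    """Enumerate the top-left corner of an M x M filter scanned over an
    N x N pixel array: shift right by X until the filter hits the second
    vertical edge, then return to the first vertical edge, shift down by
    X, and repeat until the second horizontal edge is reached."""
    positions = []
    row = 0
    while row + M <= N:          # steps (f)-(h): vertical shifts
        col = 0
        while col + M <= N:      # steps (c)-(e): horizontal shifts
            positions.append((row, col))
            col += X
        row += X
    return positions

# A 2x2 filter over 4x4 pixels with shift X=2 visits four positions:
print(filter_positions(4, 2, 2))  # [(0, 0), (0, 2), (2, 0), (2, 2)]
```

In the claimed method, each position additionally maps the filter rows onto shifted groups of the array's input lines; the sketch covers only the traversal order.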
-
Publication number: 20230229887
Abstract: Numerous examples are disclosed for an output block coupled to a non-volatile memory array in a neural network, and associated methods. In one example, a circuit for converting a current in a neural network into an output voltage comprises a non-volatile memory cell comprising a word line terminal, a bit line terminal, and a source line terminal, wherein the bit line terminal receives the current; and a switch for selectively coupling the word line terminal to the bit line terminal; wherein when the switch is closed, the current flows into the non-volatile memory cell and the output voltage is provided on the bit line terminal.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20230229888
Abstract: Numerous examples of summing circuits for a neural network are disclosed. In one example, a circuit for summing current received from a plurality of synapses in a neural network comprises a voltage source; a load coupled between the voltage source and an output node; a voltage clamp coupled to the output node for maintaining a voltage at the output node; and a plurality of synapses coupled between the output node and ground; wherein an output current flows through the output node, the output current equal to a sum of currents drawn by the plurality of synapses.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20230206026
Abstract: Numerous examples are disclosed for verifying a weight programmed into a selected non-volatile memory cell in a neural memory. In one example, a circuit comprises a digital-to-analog converter to convert a target weight comprising digital bits into a target voltage, a current-to-voltage converter to convert an output current from the selected non-volatile memory cell during a verify operation into an output voltage, and a comparator to compare the output voltage to the target voltage during a verify operation.
Type: Application
Filed: March 10, 2023
Publication date: June 29, 2023
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Patent number: 11308383
Abstract: An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses. The synapses are configured to receive inputs and to generate therefrom outputs. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs.
Type: Grant
Filed: May 12, 2017
Date of Patent: April 19, 2022
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20210287065
Abstract: A number of circuits for use in an output block coupled to a non-volatile memory array in a neural network are disclosed. The embodiments include a circuit for converting an output current from a neuron in a neural network into an output voltage, a circuit for converting a voltage received on an input node into an output current, a circuit for summing current received from a plurality of neurons in a neural network, and a circuit for summing current received from a plurality of neurons in a neural network.
Type: Application
Filed: April 22, 2021
Publication date: September 16, 2021
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20210019609
Abstract: Building blocks for implementing Vector-by-Matrix Multiplication (VMM) are implemented with analog circuitry including non-volatile memory devices (flash transistors) and using in-memory computation. In one example, improved performance and more accurate VMM is achieved in arrays including multi-gate flash transistors when computation uses a control gate or the combination of control gate and word line (instead of using the word line alone). In another example, very fast weight programming of the arrays is achieved using a novel programming protocol. In yet another example, higher density and faster array programming is achieved when the gate(s) responsible for erasing devices, or the source line, are re-routed across different rows, e.g., in a zigzag form. In yet another embodiment, a neural network is provided with nonlinear synaptic weights implemented with nonvolatile memory devices.
Type: Application
Filed: April 27, 2018
Publication date: January 21, 2021
Applicant: The Regents of the University of California
Inventors: Dmitri Strukov, Farnood Merrikh Bayat, Mohammad Bavandpour, Mohammad Reza Mahmoodi, Xinjie Guo