Patents by Inventor Dmitri Strukov
Dmitri Strukov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12112798
Abstract: Numerous examples are disclosed for an output block coupled to a non-volatile memory array in a neural network, and associated methods. In one example, a circuit for converting a current in a neural network into an output voltage comprises a non-volatile memory cell comprising a word line terminal, a bit line terminal, and a source line terminal, wherein the bit line terminal receives the current; and a switch for selectively coupling the word line terminal to the bit line terminal; wherein when the switch is closed, the current flows into the non-volatile memory cell and the output voltage is provided on the bit line terminal.
Type: Grant
Filed: March 20, 2023
Date of Patent: October 8, 2024
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
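The converter in this entry is essentially a diode-connected memory cell: with the switch closed, the bit line both carries the input current and exposes the resulting voltage. A minimal behavioral sketch, assuming an illustrative subthreshold I-V model whose parameters (i0, n, vt) are made up and not taken from the patent, solves numerically for that bit-line voltage:

```python
import math

def cell_current(v_bl, i0=1e-9, n=1.5, vt=0.026):
    """Assumed subthreshold I-V curve of the diode-connected cell (illustrative only)."""
    return i0 * math.exp(v_bl / (n * vt))

def i_to_v(i_in, v_lo=0.0, v_hi=2.0, iters=60):
    """Bisect for the bit-line voltage at which the cell sinks the incoming current."""
    for _ in range(iters):
        v_mid = 0.5 * (v_lo + v_hi)
        if cell_current(v_mid) < i_in:
            v_lo = v_mid
        else:
            v_hi = v_mid
    return 0.5 * (v_lo + v_hi)

print(i_to_v(1e-6))  # settled output voltage for a 1 uA neuron current
```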
-
Patent number: 12106211
Abstract: Building blocks for implementing Vector-by-Matrix Multiplication (VMM) are implemented with analog circuitry including non-volatile memory devices (flash transistors) and using in-memory computation. In one example, improved performance and more accurate VMM are achieved in arrays including multi-gate flash transistors when computation uses a control gate or the combination of control gate and word line (instead of using the word line alone). In another example, very fast weight programming of the arrays is achieved using a novel programming protocol. In yet another example, higher density and faster array programming are achieved when the gate(s) responsible for erasing devices, or the source line, are re-routed across different rows, e.g., in a zigzag form. In yet another embodiment, a neural network is provided with nonlinear synaptic weights implemented with nonvolatile memory devices.
Type: Grant
Filed: April 27, 2018
Date of Patent: October 1, 2024
Assignee: The Regents of the University of California
Inventors: Dmitri Strukov, Farnood Merrikh Bayat, Mohammad Bavandpour, Mohammad Reza Mahmoodi, Xinjie Guo
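As a rough illustration of the in-memory computation this patent builds on, the sketch below models an ideal flash-based VMM in which each cell's programmed conductance multiplies its row's input voltage and each bit line sums the resulting currents. The array size and values are arbitrary assumptions and do not reflect the patent's specific gate-biasing or routing techniques:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-5, size=(4, 3))   # cell conductances in siemens (illustrative)
V = np.array([0.1, 0.2, 0.0, 0.3])         # input voltages on the rows (volts)

I_out = V @ G                              # per-column output currents: I_j = sum_i V_i * G_ij
print(I_out)
```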
-
Patent number: 12057160
Abstract: Numerous examples of summing circuits for a neural network are disclosed. In one example, a circuit for summing current received from a plurality of synapses in a neural network comprises a voltage source; a load coupled between the voltage source and an output node; a voltage clamp coupled to the output node for maintaining a voltage at the output node; and a plurality of synapses coupled between the output node and ground; wherein an output current flows through the output node, the output current equal to a sum of currents drawn by the plurality of synapses.
Type: Grant
Filed: March 20, 2023
Date of Patent: August 6, 2024
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
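A minimal sketch of the summing idea, assuming the voltage clamp holds the output node at a fixed potential so that each synapse's current is set independently and the node current is simply their sum; the conductances and clamp voltage are made-up values, not the patent's:

```python
# Assumed clamped node voltage and synapse conductances (illustrative only).
v_clamp = 0.5                                     # volts
synapse_conductances = [2e-6, 5e-6, 1e-6, 3e-6]   # siemens

# With the node voltage fixed, each synapse draws g * v_clamp and the
# output current through the node is the sum of those currents.
i_out = sum(g * v_clamp for g in synapse_conductances)
print(f"summed output current: {i_out:.2e} A")
```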
-
Patent number: 11972795
Abstract: Numerous examples are disclosed for verifying a weight programmed into a selected non-volatile memory cell in a neural memory. In one example, a circuit comprises a digital-to-analog converter to convert a target weight comprising digital bits into a target voltage, a current-to-voltage converter to convert an output current from the selected non-volatile memory cell during a verify operation into an output voltage, and a comparator to compare the output voltage to the target voltage during a verify operation.
Type: Grant
Filed: March 10, 2023
Date of Patent: April 30, 2024
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
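A hedged software sketch of the verify flow this abstract describes: a DAC maps the digital target weight to a target voltage, an I-to-V stage maps the cell's read current to an output voltage, and a comparator decides whether the target has been reached. The linear trans-resistance gain and reference values are illustrative assumptions, not the patent's circuit parameters:

```python
def dac(bits, v_ref=1.0):
    """Convert digital weight bits (MSB first) to a target voltage."""
    code = int("".join(str(b) for b in bits), 2)
    return v_ref * code / (2 ** len(bits) - 1)

def current_to_voltage(i_cell, r_gain=1e5):
    """Assumed linear current-to-voltage conversion (trans-resistance stage)."""
    return i_cell * r_gain

def verify(bits, i_cell):
    """Comparator: True once the cell's output voltage reaches the target voltage."""
    return current_to_voltage(i_cell) >= dac(bits)

print(verify([1, 0, 1, 1], i_cell=7.0e-6))
```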
-
Patent number: 11853856
Abstract: An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses is disclosed. The synapses are configured to receive inputs and to generate outputs therefrom. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced-apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs. Various algorithms for tuning the memory cells to contain the correct weight values are disclosed.
Type: Grant
Filed: January 18, 2020
Date of Patent: December 26, 2023
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
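The abstract mentions algorithms for tuning cells to the correct weight values without detailing them; one plausible shape for such a procedure is an iterative program-and-verify loop, sketched below with an assumed toy cell model. The update rule and tolerance are illustrative assumptions, not the patented algorithms:

```python
def tune_cell(target_weight, read_cell, program_pulse, tol=0.01, max_pulses=100):
    """Repeatedly read the cell and nudge its stored charge toward the target."""
    for _ in range(max_pulses):
        weight = read_cell()
        if abs(weight - target_weight) <= tol:
            return weight                      # within tolerance: verified
        program_pulse(target_weight - weight)  # apply a corrective pulse
    return read_cell()

# Toy cell model for demonstration: each pulse closes 30% of the remaining error.
state = {"w": 0.0}
read = lambda: state["w"]
pulse = lambda err: state.update(w=state["w"] + 0.3 * err)
print(tune_cell(0.8, read, pulse))
```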
-
Patent number: 11829859
Abstract: Numerous embodiments are disclosed for verifying a weight programmed into a selected non-volatile memory cell in a neural memory. In one embodiment, a circuit for verifying a weight programmed into a selected non-volatile memory cell in a neural memory comprises a converter for converting a target weight into a target current and a comparator for comparing the target current to an output current from the selected non-volatile memory cell during a verify operation. In another embodiment, a circuit for verifying a weight programmed into a selected non-volatile memory cell in a neural memory comprises a digital-to-analog converter for converting a target weight comprising digital bits into a target voltage, a current-to-voltage converter for converting an output current from the selected non-volatile memory cell during a verify operation into an output voltage, and a comparator for comparing the output voltage to the target voltage during a verify operation.
Type: Grant
Filed: April 16, 2021
Date of Patent: November 28, 2023
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Patent number: 11790208
Abstract: A number of circuits for use in an output block coupled to a non-volatile memory array in a neural network are disclosed. The embodiments include a circuit for converting an output current from a neuron in a neural network into an output voltage, a circuit for converting a voltage received on an input node into an output current, a circuit for summing current received from a plurality of neurons in a neural network, and a circuit for summing current received from a plurality of neurons in a neural network.
Type: Grant
Filed: April 22, 2021
Date of Patent: October 17, 2023
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20230259738
Abstract: A memory device includes non-volatile memory cells, source regions, and drain regions arranged in rows and columns. Respective ones of the columns of drain regions include first drain regions and second drain regions that alternate with each other. Respective ones of first lines electrically connect together the source regions in one of the rows of the source regions and are electrically isolated from the source regions in other rows of the source regions. Respective ones of second lines electrically connect together the first drain regions of one of the columns of drain regions and are electrically isolated from the second drain regions of the one column of drain regions. Respective ones of third lines electrically connect together the second drain regions of one of the columns of drain regions and are electrically isolated from the first drain regions of the one column of drain regions.
Type: Application
Filed: April 28, 2023
Publication date: August 17, 2023
Inventors: Hieu Van Tran, Nhan Do, Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Vipin Tiwari, Mark Reiten
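For readers visualizing the wiring, the sketch below enumerates one possible reading of the line connectivity, with arbitrary row and column counts: each first line ties together the sources of one row, while a column's second and third lines pick up its alternating (here, even- and odd-row) drain regions:

```python
# Illustrative connectivity model only; region coordinates are (kind, row, col).
ROWS, COLS = 4, 4

first_lines = {r: [("src", r, c) for c in range(COLS)] for r in range(ROWS)}          # per row
second_lines = {c: [("drn", r, c) for r in range(0, ROWS, 2)] for c in range(COLS)}   # first drains of a column
third_lines = {c: [("drn", r, c) for r in range(1, ROWS, 2)] for c in range(COLS)}    # second drains of a column

print(second_lines[0])  # first (even-row) drain regions of column 0
print(third_lines[0])   # second (odd-row) drain regions of column 0
```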
-
Publication number: 20230252265
Abstract: A method of scanning N×N pixels using a vector-by-matrix multiplication array by (a) associating a filter of M×M pixels adjacent first vertical and horizontal edges, (b) providing values for the pixels associated with different respective rows of the filter to input lines of different respective N input line groups, (c) shifting the filter horizontally by X pixels, (d) providing values for the pixels associated with different respective rows of the horizontally shifted filter to input lines, of different respective N input line groups, which are shifted by X input lines, (e) repeating steps (c) and (d) until a second vertical edge is reached, (f) shifting the filter horizontally to be adjacent the first vertical edge, and shifting the filter vertically by X pixels, (g) repeating steps (b) through (e) for the vertically shifted filter, and (h) repeating steps (f) and (g) until a second horizontal edge is reached.
Type: Application
Filed: March 24, 2023
Publication date: August 10, 2023
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
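The lettered steps amount to a raster scan of filter positions with stride X, with each filter row routed to an input-line group offset by the current horizontal shift. A rough sketch of that enumeration, using arbitrary example values for N, M, and X:

```python
N, M, X = 8, 3, 1  # image size, filter size, shift step (illustrative values)

def scan_positions(n=N, m=M, x=X):
    """Yield (row_offset, col_offset, per-filter-row input-line assignments)."""
    for row in range(0, n - m + 1, x):          # vertical shifts (steps f and g)
        for col in range(0, n - m + 1, x):      # horizontal shifts (steps c through e)
            # Filter row r drives lines col..col+m-1 of its input-line group.
            groups = {r: list(range(col, col + m)) for r in range(m)}
            yield row, col, groups

positions = list(scan_positions())
print(len(positions), "filter positions; first position:", positions[0])
```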
-
Publication number: 20230229887
Abstract: Numerous examples are disclosed for an output block coupled to a non-volatile memory array in a neural network and associated methods. In one example, a circuit for converting a current in a neural network into an output voltage comprises a non-volatile memory cell comprising a word line terminal, a bit line terminal, and a source line terminal, wherein the bit line terminal receives the current; and a switch for selectively coupling the word line terminal to the bit line terminal; wherein when the switch is closed, the current flows into the non-volatile memory cell and the output voltage is provided on the bit line terminal.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20230229888
Abstract: Numerous examples of summing circuits for a neural network are disclosed. In one example, a circuit for summing current received from a plurality of synapses in a neural network comprises a voltage source; a load coupled between the voltage source and an output node; a voltage clamp coupled to the output node for maintaining a voltage at the output node; and a plurality of synapses coupled between the output node and ground; wherein an output current flows through the output node, the output current equal to a sum of currents drawn by the plurality of synapses.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20230206026
Abstract: Numerous examples are disclosed for verifying a weight programmed into a selected non-volatile memory cell in a neural memory. In one example, a circuit comprises a digital-to-analog converter to convert a target weight comprising digital bits into a target voltage, a current-to-voltage converter to convert an output current from the selected non-volatile memory cell during a verify operation into an output voltage, and a comparator to compare the output voltage to the target voltage during a verify operation.
Type: Application
Filed: March 10, 2023
Publication date: June 29, 2023
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20220147794
Abstract: An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses is disclosed. The synapses are configured to receive inputs and to generate outputs therefrom. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced-apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs.
Type: Application
Filed: January 21, 2022
Publication date: May 12, 2022
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Patent number: 11308383
Abstract: An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses is disclosed. The synapses are configured to receive inputs and to generate outputs therefrom. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced-apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs.
Type: Grant
Filed: May 12, 2017
Date of Patent: April 19, 2022
Assignee: Silicon Storage Technology, Inc.
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20210287065
Abstract: A number of circuits for use in an output block coupled to a non-volatile memory array in a neural network are disclosed. The embodiments include a circuit for converting an output current from a neuron in a neural network into an output voltage, a circuit for converting a voltage received on an input node into an output current, a circuit for summing current received from a plurality of neurons in a neural network, and a circuit for summing current received from a plurality of neurons in a neural network.
Type: Application
Filed: April 22, 2021
Publication date: September 16, 2021
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20210232893
Abstract: Numerous embodiments are disclosed for verifying a weight programmed into a selected non-volatile memory cell in a neural memory. In one embodiment, a circuit for verifying a weight programmed into a selected non-volatile memory cell in a neural memory comprises a converter for converting a target weight into a target current and a comparator for comparing the target current to an output current from the selected non-volatile memory cell during a verify operation. In another embodiment, a circuit for verifying a weight programmed into a selected non-volatile memory cell in a neural memory comprises a digital-to-analog converter for converting a target weight comprising digital bits into a target voltage, a current-to-voltage converter for converting an output current from the selected non-volatile memory cell during a verify operation into an output voltage, and a comparator for comparing the output voltage to the target voltage during a verify operation.
Type: Application
Filed: April 16, 2021
Publication date: July 29, 2021
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20210019609
Abstract: Building blocks for implementing Vector-by-Matrix Multiplication (VMM) are implemented with analog circuitry including non-volatile memory devices (flash transistors) and using in-memory computation. In one example, improved performance and more accurate VMM are achieved in arrays including multi-gate flash transistors when computation uses a control gate or the combination of control gate and word line (instead of using the word line alone). In another example, very fast weight programming of the arrays is achieved using a novel programming protocol. In yet another example, higher density and faster array programming are achieved when the gate(s) responsible for erasing devices, or the source line, are re-routed across different rows, e.g., in a zigzag form. In yet another embodiment, a neural network is provided with nonlinear synaptic weights implemented with nonvolatile memory devices.
Type: Application
Filed: April 27, 2018
Publication date: January 21, 2021
Applicant: The Regents of the University of California
Inventors: Dmitri Strukov, Farnood Merrikh Bayat, Mohammad Bavandpour, Mohammad Reza Mahmoodi, Xinjie Guo
-
Patent number: 10812084
Abstract: A security primitive for an integrated circuit comprises an array of floating-gate transistors monolithically integrated into the integrated circuit and coupled to one another in a crossbar configuration. The floating-gate transistors have instance-specific process-induced variations in analog behavior to provide one or more reconfigurable physically unclonable functions (PUFs).
Type: Grant
Filed: November 6, 2019
Date of Patent: October 20, 2020
Assignee: The Regents of the University of California
Inventors: Dmitri Strukov, Hussein Nili Ahmadabadi, Mohammad Reza Mahmoodi, Zahra Fahimi
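A purely behavioral sketch of how such a primitive can be exercised, where a random matrix stands in for the device-specific conductance variations and the response rule (pairwise comparison of column currents) is an assumption made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)                 # stands in for process variation
G = rng.normal(5e-6, 1e-6, size=(16, 16))       # hidden device-specific conductances

def puf_response(challenge_bits):
    """Drive the rows selected by the challenge and compare column currents pairwise."""
    v_in = np.array(challenge_bits, dtype=float)
    i_col = v_in @ G                            # analog column currents
    return [int(i_col[2 * k] > i_col[2 * k + 1]) for k in range(len(i_col) // 2)]

challenge = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
print(puf_response(challenge))
```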
-
Publication number: 20200151543
Abstract: An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses is disclosed. The synapses are configured to receive inputs and to generate outputs therefrom. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced-apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs. Various algorithms for tuning the memory cells to contain the correct weight values are disclosed.
Type: Application
Filed: January 18, 2020
Publication date: May 14, 2020
Inventors: Farnood Merrikh Bayat, Xinjie Guo, Dmitri Strukov, Nhan Do, Hieu Van Tran, Vipin Tiwari, Mark Reiten
-
Publication number: 20200145008
Abstract: A security primitive for an integrated circuit comprises an array of floating-gate transistors monolithically integrated into the integrated circuit and coupled to one another in a crossbar configuration. The floating-gate transistors have instance-specific process-induced variations in analog behavior to provide one or more reconfigurable physically unclonable functions (PUFs).
Type: Application
Filed: November 6, 2019
Publication date: May 7, 2020
Inventors: Dmitri Strukov, Hussein Nili Ahmadabadi, Mohammad Reza Mahmoodi, Zahra Fahimi