Patents by Inventor Geoffrey W. Burr

Geoffrey W. Burr has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11915132
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished with many simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to record the synaptic weight, with the important multiply-accumulate step performed in place, at the data. In this application, the non-idealities in the response of the NVM, such as nonlinearity, saturation, stochasticity, and asymmetry in response to programming pulses, lead to reduced network performance compared to an ideal network implementation.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: February 27, 2024
    Assignee: International Business Machines Corporation
    Inventor: Geoffrey W. Burr
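    Illustrative sketch: the abstract above attributes reduced network performance to nonlinearity, saturation, stochasticity, and asymmetry in the NVM programming response. The short Python model below sketches how such a non-ideal conductance update might be simulated; it is not code from the patent, and every name and constant in it is an assumption chosen for illustration.
      import numpy as np

      G_MIN, G_MAX = 0.0, 1.0   # assumed conductance bounds (arbitrary units)

      def apply_pulse(g, potentiate, rng, nonlinearity=3.0, sigma=0.02):
          """Apply one programming pulse to conductance g.

          Models saturation (bounded range), nonlinearity (step size shrinks
          as the bound is approached), asymmetry (potentiation and depression
          steps differ), and stochasticity (Gaussian pulse-to-pulse variation).
          """
          if potentiate:
              step = 0.05 * np.exp(-nonlinearity * (g - G_MIN) / (G_MAX - G_MIN))
          else:
              step = -0.08 * np.exp(-nonlinearity * (G_MAX - g) / (G_MAX - G_MIN))
          step += rng.normal(0.0, sigma)                 # stochastic variation
          return float(np.clip(g + step, G_MIN, G_MAX))  # saturation at the bounds

      rng = np.random.default_rng(0)
      g = 0.5
      for _ in range(20):                                # 20 potentiation pulses
          g = apply_pulse(g, potentiate=True, rng=rng)
      print(f"conductance after 20 potentiating pulses: {g:.3f}")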
  • Patent number: 11797833
    Abstract: Optimized synapses for neuromorphic arrays are provided. In various embodiments, first and second single-transistor current sources are electrically coupled in series. The first single-transistor current source is electrically coupled to both a first control circuit and a second control circuit, free of any intervening logic gate between the first single-transistor current source and either one of the control circuits. The second single-transistor current source is electrically coupled to both the first control circuit and the second control circuit, free of any intervening logic gate between the second single-transistor current source and either one of the control circuits. A capacitor is electrically coupled to the first and second single-transistor current sources. A read circuit is electrically coupled to the capacitor. The first and second single-transistor current sources are adapted to charge the capacitor only when concurrently receiving a control signal from both the first and second control circuits.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: October 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Geoffrey W. Burr, Pritish Narayanan
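    Illustrative sketch: the key behavior in the abstract above is that the capacitor charges only while both control circuits assert a signal at the same time. The Python model below is a hedged behavioral illustration of that coincidence condition, not the patented circuit; the function name, currents, and time step are assumptions.
      def simulate_synapse(ctrl_a, ctrl_b, i_charge=1e-6, cap=1e-12, dt=1e-9):
          """Integrate charge onto the capacitor over two streams of control signals."""
          v = 0.0
          for a, b in zip(ctrl_a, ctrl_b):
              if a and b:                 # charge only on concurrent control signals
                  v += i_charge * dt / cap
          return v

      row_ctrl = [1, 1, 0, 1, 0, 1]       # e.g. pulses from the first control circuit
      col_ctrl = [1, 0, 0, 1, 1, 1]       # e.g. pulses from the second control circuit
      print(f"capacitor voltage: {simulate_synapse(row_ctrl, col_ctrl):.3f} V")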
  • Publication number: 20230086636
    Abstract: Computations in artificial neural networks (ANNs) are accomplished using simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. Crossbar arrays may be used to represent one layer of the ANN with non-volatile memory (NVM) elements at each crosspoint, where the conductance of the NVM elements may be used to encode the synaptic weights, and a highly parallel current summation on the array achieves a weighted sum operation that is representative of the values of the output neurons. A method is outlined to transfer such neuron values from the outputs of one array to the inputs of a second array with no need for global clock synchronization, irrespective of the distances between the arrays, and to use such values at the next array, and/or to convert such values into digital bits at the next array.
    Type: Application
    Filed: November 28, 2022
    Publication date: March 23, 2023
    Inventors: Geoffrey W. Burr, Pritish Narayanan
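    Illustrative sketch: one way to picture the clock-free transfer described above is to encode each neuron value as a pulse duration that the receiving array measures with its own local timing and, if needed, converts to digital bits. The Python sketch below illustrates that idea only; it is not the patented scheme, and the tick counts and bit width are assumptions.
      def encode_as_duration(value, t_max=100):
          """Map a neuron activation in [0, 1] to a pulse length in local ticks."""
          return int(round(max(0.0, min(1.0, value)) * t_max))

      def decode_at_receiver(pulse_ticks, t_max=100, bits=4):
          """The receiving array counts the pulse locally and digitizes it."""
          level = min(pulse_ticks, t_max) / t_max
          return int(round(level * (2 ** bits - 1)))     # digital bits for the next array

      activation = 0.62
      pulse = encode_as_duration(activation)
      print(f"pulse of {pulse} ticks -> 4-bit code {decode_at_receiver(pulse)}")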
  • Patent number: 11580373
    Abstract: Computations in artificial neural networks (ANNs) are accomplished using simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. Crossbar arrays may be used to represent one layer of the ANN with non-volatile memory (NVM) elements at each crosspoint, where the conductance of the NVM elements may be used to encode the synaptic weights, and a highly parallel current summation on the array achieves a weighted sum operation that is representative of the values of the output neurons. A method is outlined to transfer such neuron values from the outputs of one array to the inputs of a second array with no need for global clock synchronization, irrespective of the distances between the arrays, and to use such values at the next array, and/or to convert such values into digital bits at the next array.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: February 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Geoffrey W. Burr, Pritish Narayanan
  • Patent number: 11436479
    Abstract: A system and method are shown for transferring weight information to analog non-volatile memory elements wherein the programming pulse duration is directly proportional to the difference in weights. Furthermore, the system and method avoid weight transfers when the weights are already well-matched.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: September 6, 2022
    Assignee: International Business Machines Corporation
    Inventors: Pritish Narayanan, Geoffrey W. Burr
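    Illustrative sketch: the abstract above describes programming pulses whose duration is proportional to the weight difference, with no transfer when the weights already match. The Python sketch below illustrates that rule; it is not the patented system, and the nanoseconds-per-unit scale and tolerance are assumptions.
      def program_weights(target, current, ns_per_unit=50.0, tolerance=0.01):
          """Return (index, pulse_ns, sign) operations for mismatched elements only."""
          ops = []
          for i, (t, c) in enumerate(zip(target, current)):
              delta = t - c
              if abs(delta) <= tolerance:        # already well-matched: skip transfer
                  continue
              ops.append((i, abs(delta) * ns_per_unit, 1 if delta > 0 else -1))
          return ops

      target = [0.30, 0.55, -0.20, 0.10]
      current = [0.295, 0.40, -0.45, 0.10]
      for idx, dur, sign in program_weights(target, current):
          kind = "potentiate" if sign > 0 else "depress"
          print(f"element {idx}: {kind} for {dur:.1f} ns")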
  • Patent number: 11270194
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished with many simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to record the synaptic weight, with the important multiply-accumulate step performed in place, at the data. In this application, the non-idealities in the response of the NVM, such as nonlinearity, saturation, stochasticity, and asymmetry in response to programming pulses, lead to reduced network performance compared to an ideal network implementation. A method is shown that improves performance by distributing the synaptic weight across multiple conductances of varying significance, implementing carry operations from less-significant signed analog conductance-pairs to more-significant analog conductance-pairs.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: March 8, 2022
    Assignee: International Business Machines Corporation
    Inventor: Geoffrey W. Burr
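    Illustrative sketch: the abstract above distributes each synaptic weight over conductance pairs of different significance, with carries from the less-significant pair into the more-significant pair. The Python sketch below shows one value-preserving carry under an assumed significance factor and conductance ceiling; it is not the patented implementation.
      G_MAX = 1.0        # assumed conductance ceiling
      F = 4.0            # assumed significance factor of the more-significant pair

      def weight(major_p, major_m, minor_p, minor_m):
          """Weight encoded as F*(G+ - G-) of the major pair plus (g+ - g-) of the minor pair."""
          return F * (major_p - major_m) + (minor_p - minor_m)

      def carry(major_p, major_m, minor_p, minor_m):
          """Fold the minor pair's value into the major pair, then reset the minor pair."""
          minor_value = minor_p - minor_m
          major_p = min(G_MAX, major_p + max(0.0, minor_value) / F)
          major_m = min(G_MAX, major_m + max(0.0, -minor_value) / F)
          return major_p, major_m, 0.0, 0.0

      state = (0.30, 0.10, 0.90, 0.05)           # minor_p is approaching saturation
      before = weight(*state)
      state = carry(*state)
      print(f"weight before carry: {before:.2f}, after carry: {weight(*state):.2f}")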
  • Patent number: 11182673
    Abstract: Single-shot learning and disambiguation of multiple predictions in hierarchical temporal memory is provided. In various embodiments an input sequence is read. The sequence comprises first, second, and third time-ordered components. Each of the time-ordered components is encoded in a sparse distributed representation. The sparse distributed representation of the first time-ordered component is inputted into a first portion of a hierarchical temporal memory. The sparse distributed representation of the second time-ordered component is inputted into a second portion of the hierarchical temporal memory. The second portion is connected to the first portion by a first plurality of synapses. A plurality of predictions as to the third time-ordered component is generated within a third portion of the hierarchical temporal memory. The third portion is connected to the second portion by a second plurality of synapses.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: November 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Geoffrey W. Burr, Pritish Narayanan
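    Illustrative sketch: the abstract above encodes each sequence component as a sparse distributed representation (SDR) and generates several candidate predictions for the third component. The Python fragment below only illustrates SDRs as sets of active bit indices and overlap-based ranking of candidates; it is a loose illustration, not the patented hierarchical temporal memory method, and all data are made up.
      def overlap(a, b):
          """Number of active bits shared by two SDRs (sets of bit indices)."""
          return len(a & b)

      first = frozenset({2, 17, 41, 63})         # SDR of the first component
      second = frozenset({5, 17, 29, 88})        # SDR of the second component
      context = first | second                   # context available to the third portion

      candidates = {                             # multiple predictions for the third component
          "sequence_A": frozenset({5, 29, 63, 90}),
          "sequence_B": frozenset({7, 11, 52, 77}),
      }
      best = max(candidates, key=lambda name: overlap(candidates[name], context))
      print(f"disambiguated prediction: {best}")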
  • Publication number: 20210287090
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished with many simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to record the synaptic weight, with the important multiply-accumulate step performed in place, at the data. In this application, the non-idealities in the response of the NVM, such as nonlinearity, saturation, stochasticity, and asymmetry in response to programming pulses, lead to reduced network performance compared to an ideal network implementation.
    Type: Application
    Filed: May 25, 2021
    Publication date: September 16, 2021
    Inventor: Geoffrey W. Burr
  • Patent number: 11074499
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished with many simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to record the synaptic weight, with the important multiply-accumulate step performed in place, at the data. In this application, the non-idealities in the response of the NVM, such as nonlinearity, saturation, stochasticity, and asymmetry in response to programming pulses, lead to reduced network performance compared to an ideal network implementation.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: July 27, 2021
    Assignee: International Business Machines Corporation
    Inventor: Geoffrey W. Burr
  • Patent number: 10896370
    Abstract: Triage of training data for acceleration of large-scale machine learning is provided. In various embodiments, training input from a set of training data is provided to an artificial neural network. The artificial neural network comprises a plurality of output neurons. Each output neuron corresponds to a class. From the artificial neural network, output values are determined at each of the plurality of output neurons. From the output values, a classification of the training input by the artificial neural network is determined. A confidence value of the classification is determined. Based on the confidence value, a probability of inclusion of the training input in subsequent training is determined. A subset of the set of training data is determined based on the probability. The artificial neural network is trained based on the subset.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: January 19, 2021
    Assignee: International Business Machines Corporation
    Inventor: Geoffrey W. Burr
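    Illustrative sketch: the abstract above keeps a training example for subsequent epochs with a probability derived from how confidently the network already classifies it. The Python sketch below shows one such confidence-to-probability rule; it is not the patented method, and the margin-based confidence and probability floor are assumptions.
      import random

      def confidence(outputs):
          """Margin between the two largest output-neuron values."""
          ranked = sorted(outputs, reverse=True)
          return ranked[0] - ranked[1]

      def keep_for_next_epoch(outputs, rng, floor=0.05):
          """Keep with a probability that shrinks as confidence grows."""
          p_keep = max(floor, 1.0 - confidence(outputs))
          return rng.random() < p_keep

      rng = random.Random(0)
      batch_outputs = [
          [0.97, 0.01, 0.02],     # classified with high confidence
          [0.40, 0.35, 0.25],     # ambiguous, low-confidence example
      ]
      subset = [i for i, out in enumerate(batch_outputs) if keep_for_next_epoch(out, rng)]
      print(f"indices retained for subsequent training: {subset}")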
  • Patent number: 10726331
    Abstract: Neural network circuits providing early integration before analog-to-digital conversion (ADC) are described. Comparators are adapted to compare a plurality of output analog voltages from a first synaptic array to a predetermined threshold to generate a vector of bits indicating whether the plurality of analog voltages exceed the predetermined threshold, and transmit the vector of bits via a network. At least one ADC is configured to convert the plurality of analog voltages to a vector of digital values, and transmit the vector of digital values via the network. At least one modulator is configured to receive the vector of bits from the network, provide pulses to each of a plurality of input wires of a second synaptic array based on the vector of bits, receive the vector of digital values from the network, and provide pulses to each of the plurality of input wires based on the vector of digital values.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: July 28, 2020
    Assignee: International Business Machines Corporation
    Inventor: Geoffrey W. Burr
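    Illustrative sketch: the abstract above describes two paths off the first synaptic array, a one-bit comparator path and a multi-bit ADC path, with a modulator at the second array turning either message into input pulses. The Python sketch below mimics those three roles; it is not the patented circuit, and the threshold, bit width, and pulse scale are assumptions.
      def comparator_path(voltages, threshold=0.5):
          """One bit per output line: does the analog voltage exceed the threshold?"""
          return [1 if v > threshold else 0 for v in voltages]

      def adc_path(voltages, bits=4, v_ref=1.0):
          """Multi-bit digital value per output line."""
          full_scale = 2 ** bits - 1
          return [int(round(min(max(v / v_ref, 0.0), 1.0) * full_scale)) for v in voltages]

      def modulate(values, ns_per_level=10):
          """Pulse duration (ns) applied to each input wire of the second array."""
          return [v * ns_per_level for v in values]

      v_out = [0.12, 0.73, 0.51, 0.95]            # analog outputs of the first array
      print("pulses from comparator bits:", modulate(comparator_path(v_out)))
      print("pulses from ADC values     :", modulate(adc_path(v_out)))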
  • Patent number: 10692573
    Abstract: High dynamic range resistive arrays are provided. An array of resistive elements provides a vector of current outputs equal to the analog vector-matrix product between (i) a vector of voltage inputs to the array encoding a vector of analog input values and (ii) a matrix of analog resistive weights within the array. First stage current mirrors are electrically coupled to a subset of the resistive elements through a local current accumulation wire. A second stage current mirror is electrically coupled to the first stage current mirrors through a global accumulation wire. Each of the first stage current mirrors includes at least one component having respective scaling factors selectable to scale up or down the current in the local current accumulation wire, thus controlling the aggregate current on the global accumulation wire.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: June 23, 2020
    Assignee: International Business Machines Corporation
    Inventors: Geoffrey W. Burr, Pritish Narayanan
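    Illustrative sketch: the abstract above scales each locally accumulated column current with a selectable mirror gain before a second stage sums the copies onto the global accumulation wire. The Python sketch below models only that arithmetic; it is not the patented circuit, and the conductances, voltages, and gains are assumptions.
      def local_accumulate(conductances, voltages):
          """Current summed on one local accumulation wire (I = sum of G*V)."""
          return sum(g * v for g, v in zip(conductances, voltages))

      def global_current(subsets, voltages, gains, second_stage_gain=1.0):
          """Aggregate current on the global wire after both current-mirror stages."""
          total = 0.0
          for conductances, gain in zip(subsets, gains):
              total += gain * local_accumulate(conductances, voltages)
          return second_stage_gain * total

      voltages = [0.2, 0.5, 0.1]                  # encoded analog input values
      subsets = [[1e-6, 2e-6, 4e-6],              # resistive weights, first subset
                 [8e-6, 1e-6, 2e-6]]              # resistive weights, second subset
      gains = [1.0, 0.25]                         # scale down the high-current subset
      print(f"global wire current: {global_current(subsets, voltages, gains):.2e} A")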
  • Publication number: 20200050928
    Abstract: A system and method are shown for transferring weight information to analog non-volatile memory elements wherein the programming pulse duration is directly proportional to the difference in weights. Furthermore, the system and method avoid weight transfers when the weights are already well-matched.
    Type: Application
    Filed: August 9, 2018
    Publication date: February 13, 2020
    Inventors: Pritish Narayanan, Geoffrey W. Burr
  • Publication number: 20200005865
    Abstract: High dynamic range resistive arrays are provided. An array of resistive elements provides a vector of current outputs equal to the analog vector-matrix product between (i) a vector of voltage inputs to the array encoding a vector of analog input values and (ii) a matrix of analog resistive weights within the array. First stage current mirrors are electrically coupled to a subset of the resistive elements through a local current accumulation wire. A second stage current mirror is electrically coupled to the first stage current mirrors through a global accumulation wire. Each of the first stage current mirrors includes at least one component having respective scaling factors selectable to scale up or down the current in the local current accumulation wire, thus controlling the aggregate current on the global accumulation wire.
    Type: Application
    Filed: September 10, 2019
    Publication date: January 2, 2020
    Inventors: Geoffrey W. Burr, Pritish Narayanan
  • Patent number: 10453528
    Abstract: High dynamic range resistive arrays are provided. An array of resistive elements provides a vector of current outputs equal to the analog vector-matrix product between (i) a vector of voltage inputs to the array encoding a vector of analog input values and (ii) a matrix of analog resistive weights within the array. First stage current mirrors are electrically coupled to a subset of the resistive elements through a local current accumulation wire. A second stage current mirror is electrically coupled to the first stage current mirrors through a global accumulation wire. Each of the first stage current mirrors includes at least one component having respective scaling factors selectable to scale up or down the current in the local current accumulation wire, thus controlling the aggregate current on the global accumulation wire.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: October 22, 2019
    Assignee: International Business Machines Corporation
    Inventors: Geoffrey W. Burr, Pritish Narayanan
  • Publication number: 20190156194
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished with many simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to record the synaptic weight, with the important multiply-accumulate step performed in place, at the data. In this application, the non-idealities in the response of the NVM, such as nonlinearity, saturation, stochasticity, and asymmetry in response to programming pulses, lead to reduced network performance compared to an ideal network implementation.
    Type: Application
    Filed: November 20, 2017
    Publication date: May 23, 2019
    Inventor: Geoffrey W. Burr
  • Publication number: 20190147328
    Abstract: Optimized synapses for neuromorphic arrays are provided. In various embodiments, first and second single-transistor current sources are electrically coupled in series. The first single-transistor current source is electrically coupled to both a first control circuit and a second control circuit, free of any intervening logic gate between the first single-transistor current source and either one of the control circuits. The second single-transistor current source is electrically coupled to both the first control circuit and the second control circuit, free of any intervening logic gate between the second single-transistor current source and either one of the control circuits. A capacitor is electrically coupled to the first and second single-transistor current sources. A read circuit is electrically coupled to the capacitor. The first and second single-transistor current sources are adapted to charge the capacitor only when concurrently receiving a control signal from both the first and second control circuits.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 16, 2019
    Inventors: Geoffrey W. Burr, Pritish Narayanan
  • Publication number: 20190034788
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished with many simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to record the synaptic weight, with the important multiply-accumulate step performed in place, at the data. In this application, the non-idealities in the response of the NVM, such as nonlinearity, saturation, stochasticity, and asymmetry in response to programming pulses, lead to reduced network performance compared to an ideal network implementation. A method is shown that improves performance by distributing the synaptic weight across multiple conductances of varying significance, implementing carry operations from less-significant signed analog conductance-pairs to more-significant analog conductance-pairs.
    Type: Application
    Filed: July 26, 2017
    Publication date: January 31, 2019
    Inventor: Geoffrey W. Burr
  • Publication number: 20180253645
    Abstract: Triage of training data for acceleration of large-scale machine learning is provided. In various embodiments, training input from a set of training data is provided to an artificial neural network. The artificial neural network comprises a plurality of output neurons. Each output neuron corresponds to a class. From the artificial neural network, output values are determined at each of the plurality of output neurons. From the output values, a classification of the training input by the artificial neural network is determined. A confidence value of the classification is determined. Based on the confidence value, a probability of inclusion of the training input in subsequent training is determined. A subset of the set of training data is determined based on the probability. The artificial neural network is trained based on the subset.
    Type: Application
    Filed: March 3, 2017
    Publication date: September 6, 2018
    Inventor: Geoffrey W. Burr
  • Publication number: 20180211162
    Abstract: Computations in artificial neural networks (ANNs) are accomplished using simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. Crossbar arrays may be used to represent one layer of the ANN with non-volatile memory (NVM) elements at each crosspoint, where the conductance of the NVM elements may be used to encode the synaptic weights, and a highly parallel current summation on the array achieves a weighted sum operation that is representative of the values of the output neurons. A method is outlined to transfer such neuron values from the outputs of one array to the inputs of a second array with no need for global clock synchronization, irrespective of the distances between the arrays, and to use such values at the next array, and/or to convert such values into digital bits at the next array.
    Type: Application
    Filed: January 20, 2017
    Publication date: July 26, 2018
    Inventors: Geoffrey W. Burr, Pritish Narayanan