Patents by Inventor Ram A. Krishnamurthy

Ram A. Krishnamurthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190205730
    Abstract: A neuromorphic computing system is provided which comprises: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synapse core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, and generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 4, 2019
    Applicant: Intel Corporation
    Inventors: Huseyin E. Sumbul, Gregory K. Chen, Raghavan Kumar, Phil Christopher Knag, Ram Krishnamurthy
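
The distinguishing feature of this entry is that the synapse core derives target addresses on demand rather than reading them from a stored connectivity table. A minimal Python sketch of that idea, with an ordinary hash standing in for the patented address-generation circuitry (the function, fanout parameter, and modulus are illustrative assumptions, not the claimed design):

```python
import hashlib

def synapse_addresses(pre_neuron_id: int, fanout: int, num_neurons: int) -> list[int]:
    """Derive post-synaptic neuron addresses on demand from the
    pre-synaptic neuron id, instead of storing an address table.
    The hash-based derivation here is purely illustrative."""
    addresses = []
    for synapse_index in range(fanout):
        digest = hashlib.sha256(f"{pre_neuron_id}:{synapse_index}".encode()).digest()
        addresses.append(int.from_bytes(digest[:4], "big") % num_neurons)
    return addresses

# A spike request from pre-synaptic neuron 7 yields two target addresses
# (the "first" and "second" post-synaptic neurons of the abstract).
print(synapse_addresses(pre_neuron_id=7, fanout=2, num_neurons=1024))
```
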
  • Publication number: 20190187208
    Abstract: An apparatus is provided which comprises: a multi-bit quad latch with internally coupled level-sensitive scan circuitry; and combinational logic coupled to an output of the multi-bit quad latch. Another apparatus is provided which comprises: a plurality of sequential logic circuitries; and clocking circuitry comprising inverters, wherein the clocking circuitry is shared by the plurality of sequential logic circuitries.
    Type: Application
    Filed: December 18, 2017
    Publication date: June 20, 2019
    Applicant: Intel Corporation
    Inventors: Amit Agarwal, Ram Krishnamurthy, Satish Damaraju, Steven Hsu, Simeon Realov
  • Publication number: 20190138893
    Abstract: An apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The memory array includes an embedded dynamic random access memory (eDRAM) memory array. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The mathematical computation circuit includes a switched capacitor circuit. The switched capacitor circuit includes a back-end-of-line (BEOL) capacitor coupled to a thin film transistor within the metal/dielectric layers of the semiconductor chip. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip.
    Type: Application
    Filed: September 28, 2018
    Publication date: May 9, 2019
    Inventors: Abhishek SHARMA, Jack T. KAVALIEROS, Ian A. YOUNG, Ram KRISHNAMURTHY, Sasikanth MANIPATRUNI, Uygar AVCI, Gregory K. CHEN, Amrita MATHURIYA, Raghavan KUMAR, Phil KNAG, Huseyin Ekin SUMBUL, Nazila HARATIPOUR, Van H. LE
  • Patent number: 10284368
    Abstract: Some implementations disclosed herein provide techniques and arrangements for provisioning keys to integrated circuits/processors. In one embodiment, a key provisioner/tester apparatus may include a memory device to receive a unique hardware key generated by a first logic of a processor. The key provisioner/tester apparatus may further include a cipher device to permanently store an encrypted first key in nonvolatile memory of the processor, detect whether the stored encrypted first key is valid, and isolate at least one of the first logic and the nonvolatile memory of the processor from all sources that are exterior to the processor in response to detecting that the stored encrypted first key is valid.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: May 7, 2019
    Assignee: Intel Corporation
    Inventors: Jiangtao Li, Anand Rajan, Roel Maes, Sanu K Mathew, Ram Krishnamurthy, Ernie Brickell
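
The flow the abstract describes, namely take a unique hardware key from the processor, store an encrypted first key in its nonvolatile memory, validate it, then isolate the key logic from the outside, can be modeled at a toy level. A hedged Python sketch, with a placeholder XOR standing in for the real cipher device and every class and field name invented for illustration:

```python
import os, hmac, hashlib

class ToyProcessor:
    """Toy model of the provisioning flow; every name here is
    illustrative, not the patented apparatus."""
    def __init__(self):
        self.hardware_key = os.urandom(32)   # stand-in for a unique on-die key
        self.nonvolatile = {}                # stand-in for on-die fuses/NVM
        self.isolated = False                # "fuse" that cuts external access

    def store_encrypted_key(self, first_key: bytes) -> None:
        # XOR is a placeholder for the real cipher device.
        enc = bytes(a ^ b for a, b in zip(first_key, self.hardware_key))
        mac = hmac.new(self.hardware_key, enc, hashlib.sha256).hexdigest()
        self.nonvolatile["key"] = (enc, mac)

    def validate_and_isolate(self) -> bool:
        enc, mac = self.nonvolatile["key"]
        expected = hmac.new(self.hardware_key, enc, hashlib.sha256).hexdigest()
        ok = hmac.compare_digest(mac, expected)
        if ok:
            self.isolated = True   # once valid, cut off all external access paths
        return ok

cpu = ToyProcessor()
cpu.store_encrypted_key(os.urandom(32))
assert cpu.validate_and_isolate() and cpu.isolated
```
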
  • Publication number: 20190102170
    Abstract: A compute-in-memory (CIM) circuit is described that enables a multiply-accumulate (MAC) operation based on a current-sensing readout technique. An operational amplifier is coupled with a bitline of a column of bitcells included in a memory array of the CIM circuit to cause the bitcells to act like ideal current sources for use in determining an analog voltage value outputted from the operational amplifier for given states stored in the bitcells and for given input activations for the bitcells. The analog voltage value is sensed by processing circuitry of the CIM circuit and converted to a digital value to compute the MAC value.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 4, 2019
    Inventors: Gregory K. CHEN, Raghavan KUMAR, Huseyin Ekin SUMBUL, Phil KNAG, Ram KRISHNAMURTHY, Sasikanth MANIPATRUNI, Amrita MATHURIYA, Abhishek SHARMA, Ian A. YOUNG
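
Because the op-amp holds the bitline at a virtual ground, each bitcell behaves as an ideal current source and the column current is the sum of the per-cell products. A behavioral Python model under assumed unit-current and feedback-resistor values (neither comes from the publication):

```python
# Behavioral model of a current-sensing multiply-accumulate readout.
# I_UNIT and R_FEEDBACK are made-up values for illustration only.
I_UNIT = 1e-6        # current of one active bitcell, amperes (assumed)
R_FEEDBACK = 10e3    # op-amp feedback resistor, ohms (assumed)

def mac_readout(stored_bits, input_activations):
    """Each bitcell acts as a current source: its current flows only when
    both the stored state and the input activation are 1. The op-amp sums
    the column current into an analog voltage; an ideal ADC then quantizes
    that voltage with a step of one product."""
    total_current = sum(w * x * I_UNIT for w, x in zip(stored_bits, input_activations))
    analog_voltage = total_current * R_FEEDBACK
    mac_value = round(analog_voltage / (I_UNIT * R_FEEDBACK))
    return analog_voltage, mac_value

v, mac = mac_readout([1, 0, 1, 1], [1, 1, 1, 0])
print(f"column voltage = {v*1e3:.2f} mV, MAC = {mac}")  # MAC = 2
```
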
  • Publication number: 20190103156
    Abstract: A full-rail digital-read CIM circuit enables a weighted read operation on a single row of a memory array. A weighted read operation captures a value of a weight stored in the single memory array row without having to rely on weighted row-access. Rather, using full-rail access and a weighted sampling capacitance network, the CIM circuit enables the weighted read operation even under process variation, noise and mismatch.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 4, 2019
    Inventors: Huseyin Ekin SUMBUL, Gregory K. CHEN, Raghavan KUMAR, Phil KNAG, Abhishek SHARMA, Sasikanth MANIPATRUNI, Amrita MATHURIYA, Ram A. KRISHNAMURTHY, Ian A. YOUNG
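
One way to read a multi-bit weight from a single row without weighted row-access is to sample each bit full-rail onto a weighted capacitor and let charge sharing produce a voltage proportional to the weight's binary value. A small Python model of that interpretation (the binary capacitor ratioing is a conventional assumption, not taken from the filing):

```python
V_RAIL = 1.0   # full-rail sampling voltage (assumed)

def weighted_read(weight_bits):
    """Sample each stored bit full-rail onto a binary-weighted capacitor,
    then share charge across the capacitor network. The settled voltage is
    proportional to the binary value of the stored weight."""
    caps = [2 ** i for i in range(len(weight_bits))]           # relative sizes
    charge = sum(b * c * V_RAIL for b, c in zip(weight_bits, caps))
    return charge / sum(caps)                                  # charge sharing

# Weight 0b101 = 5 stored LSB-first: voltage settles at 5/7 of the rail.
print(weighted_read([1, 0, 1]))   # 0.714...
```
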
  • Publication number: 20190102359
    Abstract: A binary CIM circuit enables all memory cells in a memory array to be effectively accessible simultaneously for computation using fixed pulse widths on the wordlines and equal capacitance on the bitlines. The fixed pulse widths and equal capacitance ensure that a minimum voltage drop in the bitline represents one least significant bit (LSB) so that the bitline voltage swing remains safely within the maximum allowable range. The binary CIM circuit maximizes the effective memory bandwidth of a memory array for a given maximum voltage range of bitline voltage.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 4, 2019
    Inventors: Phil KNAG, Gregory K. CHEN, Raghavan KUMAR, Huseyin Ekin SUMBUL, Abhishek SHARMA, Sasikanth MANIPATRUNI, Amrita MATHURIYA, Ram A. KRISHNAMURTHY, Ian A. YOUNG
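
The arithmetic behind the safety argument is simple: with fixed pulse widths and equal bitline capacitance, every matching cell drops the bitline by exactly one LSB, so the swing is the population count times the LSB voltage. A worked Python example with assumed voltage values:

```python
V_LSB = 0.05        # bitline drop per active cell, volts (assumed)
V_MAX_SWING = 0.4   # maximum allowable bitline swing, volts (assumed)

def bitline_swing(stored_bits, input_bits):
    """With fixed wordline pulse widths and equal bitline capacitance,
    every matching cell discharges the bitline by exactly one LSB, so the
    swing is the population count of the AND of stored and input bits."""
    active = sum(w & x for w, x in zip(stored_bits, input_bits))
    swing = active * V_LSB
    assert swing <= V_MAX_SWING, "row count exceeds the safe voltage range"
    return active, swing

count, swing = bitline_swing([1, 1, 0, 1, 1, 0, 1, 0], [1, 0, 1, 1, 1, 1, 0, 0])
print(count, swing)   # 3 active cells -> 0.15 V swing, safely below 0.4 V
```
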
  • Patent number: 10248906
    Abstract: A neuromorphic computing system is provided which comprises: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synapse core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, and generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: April 2, 2019
    Assignee: Intel Corporation
    Inventors: Huseyin E. Sumbul, Gregory K. Chen, Raghavan Kumar, Phil Christopher Knag, Ram Krishnamurthy
  • Publication number: 20190080731
    Abstract: Techniques and mechanisms for providing data to be used in an in-memory computation at a memory device. In an embodiment a memory device comprises a first memory array and circuitry, coupled to the first memory array, to perform a data computation based on data stored at the first memory array. Prior to the computation, the first memory array receives the data from a second memory array of the memory device. The second memory array extends horizontally in parallel with, but is offset vertically from, the first memory array. In another embodiment, a single integrated circuit die includes both the first memory array and the second memory array.
    Type: Application
    Filed: September 6, 2018
    Publication date: March 14, 2019
    Inventors: Jack KAVALIEROS, Ram KRISHNAMURTHY, Sasikanth MANIPATRUNI, Gregory CHEN, Van LE, Amrita MATHURIYA, Abhishek SHARMA, Raghavan KUMAR, Phil KNAG, Huseyin SUMBUL
  • Publication number: 20190065151
    Abstract: A memory device that includes a plurality of subarrays of memory cells to store static weights and a plurality of digital full-adder circuits between the subarrays of memory cells is provided. The digital full-adder circuits in the memory device eliminate the need to move data from a memory device to a processor to perform machine learning calculations. Rows of full-adder circuits are distributed between sub-arrays of memory cells to increase the effective memory bandwidth and reduce the time to perform matrix-vector multiplications in the memory device by performing bit-serial dot-product primitives in the form of accumulating m 1-bit x n-bit multiplications.
    Type: Application
    Filed: September 28, 2018
    Publication date: February 28, 2019
    Inventors: Gregory K. CHEN, Raghavan KUMAR, Huseyin Ekin SUMBUL, Phil KNAG, Ram KRISHNAMURTHY, Sasikanth MANIPATRUNI, Amrita MATHURIYA, Abhishek SHARMA, Ian A. YOUNG
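
The accumulating m 1-bit x n-bit multiplication primitive can be illustrated in software: process the activations one bit plane at a time, add the masked weights, and shift the running sum. A Python sketch of that bit-serial dot product (in the device, the distributed full-adder rows would perform the masked adds in place):

```python
def bit_serial_dot(weights, inputs, input_bits=4):
    """Bit-serial dot product: consume one bit plane of the inputs per
    step. Each step accumulates m 1-bit x n-bit products (a masked add of
    the stored n-bit weights), then the running sum is shifted."""
    acc = 0
    for bit in reversed(range(input_bits)):           # MSB first
        plane = [(x >> bit) & 1 for x in inputs]      # one 1-bit activation each
        acc = (acc << 1) + sum(w * p for w, p in zip(weights, plane))
    return acc

w = [3, 1, 2]
x = [5, 0, 9]
assert bit_serial_dot(w, x) == sum(a * b for a, b in zip(w, x))  # 33
print(bit_serial_dot(w, x))
```
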
  • Publication number: 20190057300
    Abstract: The present disclosure is directed to systems and methods of bit-serial, in-memory execution of at least an nth layer of a multi-layer neural network in a first on-chip processor memory circuitry portion, contemporaneous with prefetching and storing the layer weights associated with the (n+1)st layer of the multi-layer neural network in a second on-chip processor memory circuitry portion. The storage of layer weights in on-chip processor memory circuitry beneficially decreases the time required to transfer the layer weights upon execution of the (n+1)st layer of the multi-layer neural network by the first on-chip processor memory circuitry portion. In addition, the on-chip processor memory circuitry may include a third on-chip processor memory circuitry portion used to store intermediate and/or final input/output values associated with one or more layers included in the multi-layer neural network.
    Type: Application
    Filed: October 15, 2018
    Publication date: February 21, 2019
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
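
The scheduling idea, executing layer n from one on-chip buffer while layer n+1's weights land in another, is ordinary double buffering. A toy Python schedule that shows the ordering only (all names are invented, and real hardware would overlap the prefetch with execution rather than running it inline):

```python
# Toy schedule of the overlap: while layer n executes out of one on-chip
# buffer, layer n+1's weights are staged into the other.
def run_network(layer_weights_in_dram, execute_layer, prefetch):
    buffers = [None, None]                     # two on-chip weight regions
    buffers[0] = prefetch(layer_weights_in_dram[0])
    for n in range(len(layer_weights_in_dram)):
        nxt = (n + 1) % 2
        if n + 1 < len(layer_weights_in_dram):
            buffers[nxt] = prefetch(layer_weights_in_dram[n + 1])  # overlaps execution
        execute_layer(n, buffers[n % 2])

run_network(
    ["w0", "w1", "w2"],
    execute_layer=lambda n, w: print(f"execute layer {n} with {w}"),
    prefetch=lambda w: w,
)
```
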
  • Publication number: 20190056885
    Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory, bit-serial, mathematical operations performed by a pipelined SRAM architecture (bit-serial PISA) circuitry disposed in on-chip processor memory circuitry. The on-chip processor memory circuitry may include processor last level cache (LLC) circuitry. The bit-serial PISA circuitry is coupled to PISA memory circuitry via a relatively high-bandwidth connection to beneficially facilitate the storage and retrieval of layer weights by the bit-serial PISA circuitry during execution. Direct memory access (DMA) circuitry transfers the neural network model and input data from system memory to the bit-serial PISA memory and also transfers output data from the PISA memory circuitry to system memory circuitry.
    Type: Application
    Filed: October 15, 2018
    Publication date: February 21, 2019
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
  • Publication number: 20190057304
    Abstract: The present disclosure is directed to systems and methods of implementing an analog neural network using a pipelined SRAM architecture (“PISA”) circuitry disposed in on-chip processor memory circuitry. The on-chip processor memory circuitry may include processor last level cache (LLC) circuitry. One or more physical parameters, such as a stored charge or voltage, may be used to permit the generation of an in-memory analog output using a SRAM array. The generation of an in-memory analog output using only word-line and bit-line capabilities beneficially increases the computational density of the PISA circuit without increasing power requirements.
    Type: Application
    Filed: October 15, 2018
    Publication date: February 21, 2019
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
  • Publication number: 20190057036
    Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution.
    Type: Application
    Filed: October 15, 2018
    Publication date: February 21, 2019
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
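
The two-stage compilation can be sketched as ordinary Python functions: lower the model to an intermediate form, then emit one instruction set per layer, each bound to its own SRAM array. Everything below (op names, mnemonic strings, dictionary fields) is hypothetical, not the actual DSL or ISA:

```python
# Sketch of the two-stage flow the abstract describes: high-level model ->
# intermediate domain-specific form -> per-layer instruction sets.
def lower_to_dsl(model):
    return [{"op": layer["type"], "shape": layer["shape"]} for layer in model]

def compile_to_isa(dsl_ops):
    instruction_sets = []
    for i, op in enumerate(dsl_ops):
        instruction_sets.append({
            "target_sram_array": i,                  # one array per layer
            "instructions": [f"LOAD_WEIGHTS {op['shape']}",
                             f"EXEC_{op['op'].upper()}",
                             "STORE_ACTIVATIONS"],
        })
    return instruction_sets

model = [{"type": "conv", "shape": (3, 3, 16)}, {"type": "dense", "shape": (256, 10)}]
for iset in compile_to_isa(lower_to_dsl(model)):
    print(iset)
```
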
  • Publication number: 20190057727
    Abstract: Techniques and mechanisms for configuring a memory device to perform a sequence of in-memory computations. In an embodiment, a memory device includes a memory array and circuitry, coupled thereto, to perform data computations based on the data stored at the memory array. Based on instructions received at the memory device, control circuitry is configured to enable an automatic performance of a sequence of operations. In another embodiment, the memory device is coupled in an in-series arrangement of other memory devices to provide a pipeline circuit architecture. The memory devices each function as a respective stage of the pipeline circuit architecture, where the stages each perform respective in-memory computations. Some or all such stages each provide a different respective layer of a neural network.
    Type: Application
    Filed: October 15, 2018
    Publication date: February 21, 2019
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor W. Lee, Abhishek Sharma, Huseyin E. Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young
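
A minimal software analogue of the in-series arrangement: each memory device is a stage that performs its in-memory computation and feeds the next stage. The multiply-accumulate used here simply stands in for whatever computation a real stage would be configured to run:

```python
# Each MemoryStage models one device in the in-series pipeline; its stored
# weights stand in for "data stored at the memory array".
class MemoryStage:
    def __init__(self, weights):
        self.weights = weights
        self.next_stage = None

    def feed(self, vector):
        # the stage's in-memory computation: here, a multiply-accumulate
        result = sum(w * x for w, x in zip(self.weights, vector))
        if self.next_stage:
            self.next_stage.feed([result])
        else:
            print("pipeline output:", result)

s1, s2 = MemoryStage([1, 2, 3]), MemoryStage([5])
s1.next_stage = s2
s1.feed([1, 1, 1])    # (1+2+3) -> 6, then 6*5 -> 30
```
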
  • Publication number: 20190057050
    Abstract: Techniques and mechanisms for performing in-memory computations with circuitry having a pipeline architecture. In an embodiment, various stages of a pipeline each include a respective input interface and a respective output interface, distinct from said input interface, to couple to different respective circuitry. These stages each further include a respective array of memory cells and circuitry to perform operations based on data stored by said array. A result of one such in-memory computation may be communicated from one pipeline stage to a respective next pipeline stage for use in further in-memory computations. Control circuitry, interconnect circuitry, configuration circuitry or other logic of the pipeline precludes operation of the pipeline as a monolithic, general-purpose memory device. In other embodiments, stages of the pipeline each provide a different respective layer of a neural network.
    Type: Application
    Filed: October 15, 2018
    Publication date: February 21, 2019
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor W. Lee, Abhishek Sharma, Huseyin E. Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young
  • Publication number: 20190042928
    Abstract: An apparatus is described. The apparatus includes a compute-in-memory circuit. The compute-in-memory circuit includes a memory circuit and an encoder. The memory circuit is to provide 2^m voltage levels on a read data line, where m is greater than 1. The memory circuit includes storage cells sufficient to store a number of bits n, where n is greater than m. The encoder is to receive an m-bit input and convert it into an n-bit word that is to be stored in the memory circuit, where the m-bit-to-n-bit encoding performed by the encoder creates greater separation between those voltage levels that demonstrate wider voltage distributions on the read data line than the other voltage levels.
    Type: Application
    Filed: September 28, 2018
    Publication date: February 7, 2019
    Inventors: Ian A. YOUNG, Ram KRISHNAMURTHY, Sasikanth MANIPATRUNI, Gregory K. CHEN, Amrita MATHURIYA, Abhishek SHARMA, Raghavan KUMAR, Phil KNAG, Huseyin Ekin SUMBUL
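
To see how spending extra storage bits buys voltage margin, suppose the read voltage is proportional to the popcount of the stored word. An m = 2 to n = 4 code can then leave a wider gap around the levels assumed to have the widest distributions. The specific codebook below is invented for illustration:

```python
# Sketch of an m-to-n encoding (m = 2, n = 4 assumed) where the read
# voltage is taken to be proportional to the popcount of the stored word.
# The popcount spacing is deliberately non-uniform: the widened gap sits
# where the voltage distributions are assumed to be widest.
ENCODE = {0b00: 0b0000,   # popcount 0
          0b01: 0b0011,   # popcount 2 (extra gap below this level)
          0b10: 0b0111,   # popcount 3
          0b11: 0b1111}   # popcount 4

def read_voltage(word, v_per_cell=0.1):
    return bin(word).count("1") * v_per_cell   # one active cell = one step

for m_bits, n_word in ENCODE.items():
    print(f"input {m_bits:02b} -> stored {n_word:04b} -> {read_voltage(n_word):.1f} V")
```
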
  • Publication number: 20190042199
    Abstract: Compute-in memory circuits and techniques are described. In one example, a memory device includes an array of memory cells, the array including multiple sub-arrays. Each of the sub-arrays receives a different voltage. The memory device also includes capacitors coupled with conductive access lines of each of the multiple sub-arrays and circuitry coupled with the capacitors, to share charge between the capacitors in response to a signal. In one example, computing device, such as a machine learning accelerator, includes a first memory array and a second memory array. The computing device also includes an analog processor circuit coupled with the first and second memory arrays to receive first analog input voltages from the first memory array and second analog input voltages from the second memory array and perform one or more operations on the first and second analog input voltages, and output an analog output voltage.
    Type: Application
    Filed: September 28, 2018
    Publication date: February 7, 2019
    Inventors: Huseyin Ekin SUMBUL, Phil KNAG, Gregory K. CHEN, Raghavan KUMAR, Abhishek SHARMA, Sasikanth MANIPATRUNI, Amrita MATHURIYA, Ram KRISHNAMURTHY, Ian A. YOUNG
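
Charge sharing between sampling capacitors realizes a capacitance-weighted average: total charge is conserved, so the settled voltage is (C1*V1 + C2*V2 + ...) / (C1 + C2 + ...). A short Python model with illustrative values:

```python
def share_charge(caps_and_voltages):
    """Connect several sampling capacitors together: charge is conserved,
    so the settled voltage is the capacitance-weighted mean of the
    individual voltages. All values are illustrative."""
    total_charge = sum(c * v for c, v in caps_and_voltages)
    total_cap = sum(c for c, _ in caps_and_voltages)
    return total_charge / total_cap

# One capacitor sampled from each sub-array, e.g. an in-place average of
# two analog partial sums:
print(share_charge([(1e-15, 0.8), (1e-15, 0.2)]))   # 0.5 V
```
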
  • Publication number: 20190042160
    Abstract: A memory circuit has compute-in-memory (CIM) circuitry that performs computations based on time-to-digital conversion (TDC). The memory circuit includes an array of memory cells addressable with column addresses and row addresses. The memory circuit includes CIM sense circuitry to sense a voltage for multiple memory cells triggered together. The CIM sense circuitry includes a TDC circuit to convert the time for discharge of the multiple memory cells to a digital value. A processing circuit determines a value of the multiple memory cells based on the digital value.
    Type: Application
    Filed: September 28, 2018
    Publication date: February 7, 2019
    Inventors: Raghavan KUMAR, Phil KNAG, Gregory K. CHEN, Huseyin Ekin SUMBUL, Sasikanth MANIPATRUNI, Amrita MATHURIYA, Abhishek SHARMA, Ram KRISHNAMURTHY, Ian A. YOUNG
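
The sensing relation is that N cells triggered together discharge the shared bitline N times faster, so the threshold-crossing time is inversely proportional to the operand count and a time-to-digital converter can recover it. A behavioral Python model with assumed constants (none come from the publication):

```python
# Behavioral model of time-to-digital sensing: several cells triggered
# together discharge the shared bitline, so the crossing time is inversely
# proportional to how many are active.
C_BITLINE = 100e-15   # bitline capacitance, farads (assumed)
I_CELL = 1e-6         # per-cell discharge current, amperes (assumed)
DV = 0.5              # precharge-to-threshold drop, volts (assumed)
T_CLK = 50e-12        # TDC time step, seconds (assumed)

def tdc_sense(active_cells):
    t_cross = C_BITLINE * DV / (active_cells * I_CELL)        # discharge time
    ticks = round(t_cross / T_CLK)                            # time-to-digital step
    value = round(C_BITLINE * DV / (ticks * T_CLK * I_CELL))  # invert back to a count
    return ticks, value

for n in (1, 2, 5):
    print(n, tdc_sense(n))   # more active cells -> fewer ticks -> larger value
```
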
  • Publication number: 20190043560
    Abstract: A memory circuit has compute-in-memory circuitry that enables a multiply-accumulate (MAC) operation based on shared charge. Row access circuitry drives multiple rows of a memory array to multiply a first data word with a second data word stored in the memory array. The row access circuitry drives the multiple rows based on the bit pattern of the first data word. Column access circuitry drives a column of the memory array when the rows are driven. Accessed rows discharge the column line in an accumulative fashion. Sensing circuitry can sense voltage on the column line. A processor in the memory circuit computes a MAC value based on the voltage sensed on the column.
    Type: Application
    Filed: September 28, 2018
    Publication date: February 7, 2019
    Inventors: Huseyin Ekin Sumbul, Gregory K. Chen, Raghavan Kumar, Phil Knag, Abhishek Sharma, Sasikanth Manipatruni, Amrita Mathuriya, Ram Krishnamurthy, Ian A. Young