Patents by Inventor Ram A. Krishnamurthy

Ram A. Krishnamurthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11048434
    Abstract: A memory circuit has compute-in-memory (CIM) circuitry that performs computations based on time-to-digital conversion (TDC). The memory circuit includes an array of memory cells addressable by column address and row address, along with CIM sense circuitry to sense a voltage for multiple memory cells triggered together. The CIM sense circuitry includes a TDC circuit to convert the discharge time of the multiple memory cells into a digital value, and a processing circuit determines a value of the multiple memory cells based on that digital value (a behavioral sketch follows this entry).
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 29, 2021
    Assignee: Intel Corporation
    Inventors: Raghavan Kumar, Phil Knag, Gregory K. Chen, Huseyin Ekin Sumbul, Sasikanth Manipatruni, Amrita Mathuriya, Abhishek Sharma, Ram Krishnamurthy, Ian A. Young
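    A minimal behavioral sketch of this TDC readout, in Python, assuming an ideal RC discharge (the r_cell, c_line, and t_lsb values are made up): more cells storing a 1 discharge the column faster, and the quantized discharge time maps back to a cell count.

        # Behavioral sketch of TDC-based CIM sensing (hypothetical values).
        def discharge_time(stored_bits, r_cell=10e3, c_line=50e-15):
            """n conducting cells in parallel divide the pull-down resistance by n."""
            n_on = sum(stored_bits)
            return float("inf") if n_on == 0 else (r_cell / n_on) * c_line

        def tdc(t, t_lsb=1e-11):
            """Quantize a discharge time into a digital code; None models a timeout."""
            return None if t == float("inf") else round(t / t_lsb)

        def decode_count(code, n_cells):
            """Map a TDC code back to the number of stored 1s (nearest expected code)."""
            if code is None:
                return 0
            expected = {n: tdc(discharge_time([1] * n)) for n in range(1, n_cells + 1)}
            return min(expected, key=lambda n: abs(expected[n] - code))

        bits = [1, 0, 1, 1]
        print(decode_count(tdc(discharge_time(bits)), len(bits)))  # -> 3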
  • Publication number: 20210194468
    Abstract: A family of novel, low-power, minimum-drive-strength, double-edge triggered (DET) input-data-multiplexer (Mux-D) scan flip-flops (FF) is provided. The flip-flop takes advantage of the absence of a state node in the slave stage to remove the data inverters of a traditional DET FF, saving power without affecting flip-flop functionality under coupling/glitch scenarios (a functional model follows this entry).
    Type: Application
    Filed: December 23, 2019
    Publication date: June 24, 2021
    Applicant: Intel Corporation
    Inventors: Amit Agarwal, Steven Hsu, Anupama Ambardar Thaploo, Simeon Realov, Ram Krishnamurthy
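    As a purely functional illustration (not the circuit above), a double-edge triggered flip-flop captures data on both clock edges, which is what allows the clock to run at half frequency for the same data rate:

        class DETFlipFlop:
            """Functional model of a double-edge triggered FF: the output
            updates on both the rising and the falling clock edge."""
            def __init__(self):
                self.q = 0
                self._prev_clk = 0

            def tick(self, clk, d):
                if clk != self._prev_clk:  # any edge captures the data input
                    self.q = d
                self._prev_clk = clk
                return self.q

        ff = DETFlipFlop()
        print([ff.tick(clk, d) for clk, d in [(1, 1), (1, 0), (0, 0), (1, 1)]])
        # -> [1, 1, 0, 1]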
  • Publication number: 20210194469
    Abstract: A fast Mux-D scan flip-flop is provided that bypasses the scan multiplexer to a master keeper side path, removing the delay overhead of a traditional Mux-D scan topology. The design is compatible with the simple scan methodology of Mux-D scan while preserving a smaller area and a small number of inputs/outputs. Because the scan mux is not in the forward critical path, the circuit topology has performance similar to a level-sensitive scan flip-flop and can easily be converted into a bare pass-gate version. The new fast Mux-D scan flip-flop combines the advantages of conventional LSSD and Mux-D scan flip-flops without the disadvantages of either.
    Type: Application
    Filed: December 23, 2019
    Publication date: June 24, 2021
    Applicant: Intel Corporation
    Inventors: Amit Agarwal, Steven Hsu, Simeon Realov, Mahesh Kumashikar, Ram Krishnamurthy
  • Patent number: 11016701
    Abstract: Techniques and mechanisms for a memory device to perform in-memory computing based on a logic state which is detected with a voltage-controlled oscillator (VCO). In an embodiment, a VCO circuit of the memory device receives from a memory array a first signal indicating a logic state that is based on one or more currently stored data bits. The VCO provides a conversion from the logic state being indicated by a voltage characteristic of the first signal to the logic state being indicated by a corresponding frequency characteristic of a cyclical signal. Based on the frequency characteristic, the logic state is identified and communicated for use in an in-memory computation at the memory device. In another embodiment, a result of the in-memory computation is written back to the memory array.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: May 25, 2021
    Assignee: Intel Corporation
    Inventors: Ian Young, Ram Krishnamurthy, Sasikanth Manipatruni, Amrita Mathuriya, Abhishek Sharma, Raghavan Kumar, Phil Knag, Huseyin Sumbul, Gregory Chen
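    A sketch of the voltage-to-frequency sensing idea, assuming an idealized linear VCO with made-up f0/Kvco values: the cell voltage sets the oscillation frequency, and a cycle count over a fixed window identifies the logic state.

        def vco_freq(v, f0=1.0e9, kvco=2.0e9):
            """Idealized linear VCO: frequency rises with the control voltage."""
            return f0 + kvco * v

        def count_cycles(v, window=20e-9):
            """Digital counter clocked by the VCO output over a fixed window."""
            return int(vco_freq(v) * window)

        def sense_state(v, v_ref=0.5):
            """Compare the count against what a reference voltage would produce."""
            return 1 if count_cycles(v) > count_cycles(v_ref) else 0

        print(sense_state(0.9), sense_state(0.1))  # -> 1 0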
  • Patent number: 11009549
    Abstract: An apparatus is provided which comprises: a multi-bit quad latch with internally coupled level-sensitive scan circuitry; and combinational logic coupled to an output of the multi-bit quad latch. Another apparatus is provided which comprises: a plurality of sequential logic circuitries; and clocking circuitry comprising inverters, wherein the clocking circuitry is shared by the plurality of sequential logic circuitries.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: May 18, 2021
    Assignee: Intel Corporation
    Inventors: Amit Agarwal, Ram Krishnamurthy, Satish Damaraju, Steven Hsu, Simeon Realov
  • Publication number: 20210117197
    Abstract: Systems, apparatuses and methods identify a plurality of registers that are associated with a system-on-chip. The plurality of registers includes a first portion dedicated to write operations and a second portion dedicated to read operations. The technology writes data to the first portion of the plurality of registers, and transfers the data from the first portion to the second portion.
    Type: Application
    Filed: December 23, 2020
    Publication date: April 22, 2021
    Applicant: Intel Corporation
    Inventors: Steven Hsu, Amit Agarwal, Debabrata Mohapatra, Arnab Raha, Moongon Jung, Gautham Chinya, Ram Krishnamurthy
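    A minimal Python model of the described register split; the class and method names are hypothetical:

        class SplitRegisterBank:
            """Writers target one portion, readers target another, and an
            explicit transfer publishes the written data to readers."""
            def __init__(self, size):
                self._write_portion = [0] * size
                self._read_portion = [0] * size

            def write(self, idx, value):   # write operations use one portion
                self._write_portion[idx] = value

            def read(self, idx):           # read operations use the other
                return self._read_portion[idx]

            def transfer(self):            # copy write portion -> read portion
                self._read_portion = list(self._write_portion)

        bank = SplitRegisterBank(4)
        bank.write(2, 0xBEEF)
        print(hex(bank.read(2)))   # 0x0 until the transfer happens
        bank.transfer()
        print(hex(bank.read(2)))   # 0xbeef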
  • Patent number: 10956813
    Abstract: An apparatus is described. The apparatus includes a compute-in-memory circuit comprising a memory circuit and an encoder. The memory circuit is to provide 2^m voltage levels on a read data line, where m is greater than 1, and includes storage cells sufficient to store a number of bits n, where n is greater than m. The encoder is to receive an m-bit input and convert it into an n-bit word to be stored in the memory circuit, where the m-bit to n-bit encoding creates greater separation between those voltage levels that exhibit wider voltage distributions on the read data line than other voltage levels (a toy encoding follows this entry).
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: March 23, 2021
    Assignee: Intel Corporation
    Inventors: Ian A. Young, Ram Krishnamurthy, Sasikanth Manipatruni, Gregory K. Chen, Amrita Mathuriya, Abhishek Sharma, Raghavan Kumar, Phil Knag, Huseyin Ekin Sumbul
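    One way to picture the encoder, sketched with an assumed m = 2, n = 4 code: if the read-line level tracks the number of stored 1s, codewords whose 1-counts skip a crowded middle level gain margin exactly where the distributions are widest.

        # Hypothetical m=2 -> n=4 encoding. The sensed level is modeled as the
        # popcount of the stored word; the codewords use counts {0, 1, 3, 4},
        # leaving a double-width gap around the (assumed) noisiest middle level.
        ENCODE = {0b00: 0b0000, 0b01: 0b0001, 0b10: 0b0111, 0b11: 0b1111}
        DECODE = {word: value for value, word in ENCODE.items()}

        def read_level(word):
            """Idealized read: the sensed level is proportional to the 1-count."""
            return bin(word).count("1")

        for value, word in ENCODE.items():
            print(f"input {value:02b} -> stored {word:04b} -> level {read_level(word)}")
        assert all(DECODE[word] == value for value, word in ENCODE.items())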
  • Publication number: 20210043500
    Abstract: Embodiments disclosed herein include interconnect layers that include non-uniform interconnect heights and methods of forming such devices. In an embodiment, an interconnect layer comprises an interlayer dielectric (ILD), a first interconnect disposed in the ILD, wherein the first interconnect has a first height, and a second interconnect disposed in the ILD, wherein the second interconnect has a second height that is different than the first height.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Kevin Lai LIN, Mauro KOBRINSKY, Mark ANDERS, Himanshu KAUL, Ram KRISHNAMURTHY
  • Publication number: 20210043567
    Abstract: Embodiments disclosed herein include a semiconductor device with interconnects with non-uniform heights. In an embodiment, the semiconductor device comprises a semiconductor substrate, and a back end of line (BEOL) stack over the semiconductor substrate. In an embodiment, the BEOL stack comprises first interconnects and second interconnects in an interconnect layer of the BEOL stack. In an embodiment, the first interconnects have a first height and the second interconnects have a second height that is different than the first height.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Mark ANDERS, Himanshu KAUL, Ram KRISHNAMURTHY, Kevin Lai LIN, Mauro KOBRINSKY
  • Patent number: 10884957
    Abstract: Techniques and mechanisms for performing in-memory computations with circuitry having a pipeline architecture. In an embodiment, various stages of a pipeline each include a respective input interface and a respective output interface, distinct from said input interface, to couple to different respective circuitry. These stages each further include a respective array of memory cells and circuitry to perform operations based on data stored by said array. A result of one such in-memory computation may be communicated from one pipeline stage to a respective next pipeline stage for use in further in-memory computations. Control circuitry, interconnect circuitry, configuration circuitry or other logic of the pipeline precludes operation of the pipeline as a monolithic, general-purpose memory device. In other embodiments, stages of the pipeline each provide a different respective layer of a neural network.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: January 5, 2021
    Assignee: Intel Corporation
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor W. Lee, Abhishek Sharma, Huseyin E. Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young
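    A behavioral sketch of the pipeline idea, with each stage assumed to implement one matrix-vector layer (NumPy stands in for the in-memory compute):

        import numpy as np

        class CIMStage:
            """One pipeline stage: a weight array plus local compute; the
            distinct input/output interfaces are the argument and return."""
            def __init__(self, weights):
                self.weights = weights
            def compute(self, x):
                return np.maximum(self.weights @ x, 0.0)  # matvec + ReLU (assumed)

        def run_pipeline(stages, x):
            for stage in stages:   # each result feeds the next stage's input
                x = stage.compute(x)
            return x

        rng = np.random.default_rng(0)
        stages = [CIMStage(rng.standard_normal((8, 16))),
                  CIMStage(rng.standard_normal((4, 8)))]
        print(run_pipeline(stages, rng.standard_normal(16)).shape)  # (4,)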
  • Patent number: 10877752
    Abstract: A compute-in-memory (CIM) circuit enables a multiply-accumulate (MAC) operation based on a current-sensing readout technique. An operational amplifier is coupled with a bitline of a column of bitcells in the CIM circuit's memory array, causing the bitcells to act like ideal current sources; for given states stored in the bitcells and given input activations, the operational amplifier outputs an analog voltage value. Processing circuitry of the CIM circuit senses that analog voltage and converts it to a digital value to compute the MAC result (a behavioral model follows this entry).
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: Gregory K. Chen, Raghavan Kumar, Huseyin Ekin Sumbul, Phil Knag, Ram Krishnamurthy, Sasikanth Manipatruni, Amrita Mathuriya, Abhishek Sharma, Ian A. Young
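    A behavioral model of the current-sensing readout, with assumed unit-current and feedback-resistor values:

        def current_sense_mac(weights, activations, i_unit=1e-6, r_feedback=10e3):
            """Each bitcell sources one unit of current when its stored bit AND
            its input activation are both 1; the op-amp sums the currents, and
            an ADC step of i_unit*r_feedback volts recovers the MAC value."""
            i_total = sum(i_unit for w, a in zip(weights, activations) if w and a)
            v_out = i_total * r_feedback            # transimpedance readout
            return round(v_out / (i_unit * r_feedback))

        print(current_sense_mac([1, 0, 1, 1], [1, 1, 1, 0]))  # -> 2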
  • Patent number: 10860682
    Abstract: A binary CIM circuit enables all memory cells in a memory array to be effectively accessible simultaneously for computation using fixed pulse widths on the wordlines and equal capacitance on the bitlines. The fixed pulse widths and equal capacitance ensure that a minimum voltage drop in the bitline represents one least significant bit (LSB) so that the bitline voltage swing remains safely within the maximum allowable range. The binary CIM circuit maximizes the effective memory bandwidth of a memory array for a given maximum voltage range of bitline voltage.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: Phil Knag, Gregory K. Chen, Raghavan Kumar, Huseyin Ekin Sumbul, Abhishek Sharma, Sasikanth Manipatruni, Amrita Mathuriya, Ram Krishnamurthy, Ian A. Young
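    A sketch of the fixed-pulse readout, assuming each stored 1 drops the bitline by exactly one LSB (v_lsb is a made-up value):

        def binary_cim_read(column_bits, v_precharge=1.0, v_lsb=0.02):
            """Fixed wordline pulses and equal bitline capacitance make every
            stored 1 remove exactly one LSB of voltage from the bitline."""
            swing = sum(column_bits) * v_lsb
            assert swing <= v_precharge, "array too tall for the voltage range"
            v_bitline = v_precharge - swing
            return round((v_precharge - v_bitline) / v_lsb)  # recovered popcount

        print(binary_cim_read([1, 1, 0, 1, 1]))  # -> 4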
  • Patent number: 10831446
    Abstract: A memory device is provided that includes a plurality of subarrays of memory cells to store static weights and a plurality of digital full-adder circuits between the subarrays. The digital full-adder circuits eliminate the need to move data from the memory device to a processor to perform machine-learning calculations. Rows of full-adder circuits are distributed between subarrays of memory cells to increase the effective memory bandwidth and reduce the time to perform matrix-vector multiplications in the memory device, by performing bit-serial dot-product primitives in the form of accumulating m 1-bit×n-bit multiplications (sketched after this entry).
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: November 10, 2020
    Assignee: Intel Corporation
    Inventors: Gregory K. Chen, Raghavan Kumar, Huseyin Ekin Sumbul, Phil Knag, Ram Krishnamurthy, Sasikanth Manipatruni, Amrita Mathuriya, Abhishek Sharma, Ian A. Young
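    A Python rendering of the bit-serial dot-product primitive: a 1-bit × n-bit multiply reduces to a gated add, and multi-bit activations are handled one bit-plane at a time with shifts.

        def one_bit_dot(activation_bits, weights):
            """A 1-bit x n-bit multiply is just a gated add, which is what a
            row of full adders accumulates between subarrays."""
            return sum(w for a, w in zip(activation_bits, weights) if a)

        def bit_serial_dot(activations, weights, n_bits=4):
            """Process multi-bit activations one bit-plane at a time."""
            total = 0
            for b in range(n_bits):
                plane = [(a >> b) & 1 for a in activations]
                total += one_bit_dot(plane, weights) << b
            return total

        acts, wts = [3, 1, 2], [5, 7, 4]
        print(bit_serial_dot(acts, wts),
              sum(a * w for a, w in zip(acts, wts)))  # 30 30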
  • Patent number: 10825509
    Abstract: A full-rail digital-read CIM circuit enables a weighted read operation on a single row of a memory array. A weighted read operation captures a value of a weight stored in the single memory array row without having to rely on weighted row-access. Rather, using full-rail access and a weighted sampling capacitance network, the CIM circuit enables the weighted read operation even under process variation, noise and mismatch.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: November 3, 2020
    Assignee: Intel Corporation
    Inventors: Huseyin Ekin Sumbul, Gregory K. Chen, Raghavan Kumar, Phil Knag, Abhishek Sharma, Sasikanth Manipatruni, Amrita Mathuriya, Ram Krishnamurthy, Ian A. Young
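    One reading of the weighted sampling idea, assuming a binary-weighted capacitor per bit and ideal charge sharing:

        def weighted_read(weight_bits):
            """Each bit is read full-rail, then sampled onto a binary-weighted
            capacitor (C, 2C, 4C, ...); charge sharing across the network
            yields a voltage proportional to the stored value. LSB first."""
            caps = [2 ** k for k in range(len(weight_bits))]
            charge = sum(bit * c for bit, c in zip(weight_bits, caps))
            v_shared = charge / sum(caps)      # normalized shared-node voltage
            return v_shared, round(v_shared * sum(caps))  # voltage, value

        print(weighted_read([1, 0, 1]))  # bits of 5, LSB first -> (~0.714, 5)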
  • Publication number: 20200334161
    Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution.
    Type: Application
    Filed: July 6, 2020
    Publication date: October 22, 2020
    Applicant: Intel Corporation
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
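    A toy rendering of the two-stage compilation flow, with made-up layer and field names:

        # Front end lowers the model to a per-layer intermediate form ("DSL");
        # the back end emits one instruction set per layer, each bound to its
        # own SRAM array of the PISA circuitry.
        model = [("conv1", {"kernel": 3}), ("relu1", {}), ("fc1", {"units": 128})]

        def high_level_compile(layers):
            """High-level model -> one DSL node per layer."""
            return [{"layer": name, "params": params} for name, params in layers]

        def low_level_compile(dsl_nodes):
            """One ISA instruction set per layer, each assigned an SRAM array."""
            return [{"inst_set": f"{node['layer']}_insts", "sram_array": i}
                    for i, node in enumerate(dsl_nodes)]

        for prog in low_level_compile(high_level_compile(model)):
            print(prog)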
  • Publication number: 20200279850
    Abstract: Examples herein relate to a memory device comprising an eDRAM memory cell, which can include a write circuit formed at least partially over a storage cell and a read circuit formed at least partially under the storage cell; a compute-near-memory device bonded to the memory device; a processor; and an interface from the memory device to the processor. In some examples, circuitry comprising one or more of a controller, a multiplexer, or a register is included to provide an output of the memory device that emulates the output read rate of an SRAM memory device. A surface of the memory device can be bonded to a compute-near-memory device or other circuitry; in some examples, a layer with read circuitry can be bonded to a layer with storage cells, and any layers can be bonded together using the techniques described herein.
    Type: Application
    Filed: March 23, 2020
    Publication date: September 3, 2020
    Inventors: Abhishek SHARMA, Noriyuki SATO, Sarah ATANASOV, Huseyin Ekin SUMBUL, Gregory K. CHEN, Phil KNAG, Ram KRISHNAMURTHY, Hui Jae YOO, Van H. LE
  • Patent number: 10748603
    Abstract: A memory circuit has compute-in-memory circuitry that enables a multiply-accumulate (MAC) operation based on shared charge. Row access circuitry drives multiple rows of a memory array to multiply a first data word with a second data word stored in the memory array. The row access circuitry drives the multiple rows based on the bit pattern of the first data word. Column access circuitry drives a column of the memory array when the rows are driven. Accessed rows discharge the column line in an accumulative fashion. Sensing circuitry can sense voltage on the column line. A processor in the memory circuit computes a MAC value based on the voltage sensed on the column.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: August 18, 2020
    Assignee: Intel Corporation
    Inventors: Huseyin Ekin Sumbul, Gregory K. Chen, Raghavan Kumar, Phil Knag, Abhishek Sharma, Sasikanth Manipatruni, Amrita Mathuriya, Ram Krishnamurthy, Ian A. Young
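    A behavioral sketch of the shared-charge MAC, assuming each coincident 1 removes one fixed voltage step from the precharged column:

        def shared_charge_mac(input_bits, stored_bits, v_precharge=1.0, v_step=0.05):
            """Rows are driven wherever the input word has a 1; each driven row
            whose cell also stores a 1 pulls one step of charge off the shared
            column line, so the sensed droop encodes the dot product."""
            hits = sum(1 for a, w in zip(input_bits, stored_bits) if a and w)
            v_column = v_precharge - hits * v_step
            return round((v_precharge - v_column) / v_step)

        print(shared_charge_mac([1, 1, 0, 1], [1, 0, 1, 1]))  # -> 2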
  • Publication number: 20200242459
    Abstract: Techniques are provided for implementing a hybrid processing architecture comprising a general-purpose processor (CPU) and a neural processing unit (NPU) coupled to an analog in-memory artificial intelligence (AI) processor. According to an embodiment, the hybrid processor implements an AI instruction set that includes instructions to perform analog in-memory computations. The AI processor comprises one or more neural network (NN) layers, each including memory circuitry and analog processing circuitry: the memory circuitry stores the weighting factors and the input data, and the analog processing circuitry performs analog calculations on them as the NPU executes instructions from the AI instruction set. The AI instruction set includes instructions to perform dot products, multiplication, differencing, normalization, pooling, thresholding, transposition, and backpropagation training (a toy dispatch sketch follows this entry).
    Type: Application
    Filed: January 30, 2019
    Publication date: July 30, 2020
    Applicant: Intel Corporation
    Inventors: Sasikanth Manipatruni, Ram Krishnamurthy, Amrita Mathuriya, Dmitri Nikonov, Ian Young
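    A toy dispatch table for a few of the listed instruction types; the opcode names and semantics are illustrative, not the patent's actual ISA:

        import numpy as np

        AI_ISA = {
            "dot":       lambda w, x: w @ x,                        # dot product
            "pool_max":  lambda x, k: x.reshape(-1, k).max(axis=1), # len % k == 0
            "threshold": lambda x, t: (x > t).astype(x.dtype),
        }

        def npu_execute(opcode, *operands):
            """Decode an AI instruction and trigger the modeled in-memory op."""
            return AI_ISA[opcode](*operands)

        x = np.array([0.2, 0.9, 0.4, 0.7])
        print(npu_execute("pool_max", x, 2))     # [0.9 0.7]
        print(npu_execute("threshold", x, 0.5))  # [0. 1. 0. 1.]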
  • Publication number: 20200242458
    Abstract: Techniques are provided for implementing a hybrid processing architecture comprising a general-purpose processor (CPU) coupled to an analog in-memory artificial intelligence (AI) processor. A hybrid processor implementing the techniques according to an embodiment includes an AI processor configured to perform analog in-memory computations based on neural network (NN) weighting factors and input data provided by the CPU. The AI processor includes one or more NN layers. The NN layers include digital access circuits to receive data and weighting factors and to provide computational results. The NN layers also include memory circuits to store data and weights, and further include bit line processors and cross bit line processors to perform analog dot product computations between columns of the data memory circuits and the weight factor memory circuits. Some of the NN layers are configured as convolutional NN layers and others are configured as fully connected NN layers, according to some embodiments.
    Type: Application
    Filed: January 25, 2019
    Publication date: July 30, 2020
    Applicant: Intel Corporation
    Inventors: Sasikanth Manipatruni, Ram Krishnamurthy, Amrita Mathuriya, Dmitri Nikonov, Ian Young
  • Publication number: 20200233923
    Abstract: A binary CIM circuit enables all memory cells in a memory array to be effectively accessible simultaneously for computation using fixed pulse widths on the wordlines and equal capacitance on the bitlines. The fixed pulse widths and equal capacitance ensure that a minimum voltage drop in the bitline represents one least significant bit (LSB) so that the bitline voltage swing remains safely within the maximum allowable range. The binary CIM circuit maximizes the effective memory bandwidth of a memory array for a given maximum voltage range of bitline voltage.
    Type: Application
    Filed: April 2, 2020
    Publication date: July 23, 2020
    Inventors: Phil KNAG, Gregory K. CHEN, Raghavan KUMAR, Huseyin Ekin SUMBUL, Abhishek SHARMA, Sasikanth MANIPATRUNI, Amrita MATHURIYA, Ram KRISHNAMURTHY, Ian A. YOUNG