Patents by Inventor RAGHAVAN KUMAR

RAGHAVAN KUMAR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220058167
    Abstract: Techniques and mechanisms to facilitate Bitcoin mining operations which support version rolling. In an embodiment, Bitcoin mining circuitry comprises a first scheduler, a first digest, a second scheduler and a second digest arranged in a pipeline configuration. Hash circuitry calculates a first plurality of hashes each based on first bits of a Merkle root, and on a different respective identifier of a Bitcoin protocol version. The first scheduler generates first message schedules each based on second bits of the Merkle root, and on a different respective nonce value. In another embodiment, the first scheduler successively provides the first message schedules to the first digest, wherein, for each such providing of one of the first message schedules, the first digest, second scheduler and second digest successively generate second hashes each based on the provided one of the first message schedules, and on a different respective one of the first hashes.
    Type: Application
    Filed: December 21, 2020
    Publication date: February 24, 2022
    Applicant: Intel Corporation
    Inventors: Vikram Suresh, Sanu Mathew, Raghavan Kumar
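A minimal behavioral sketch of the search loop the pipeline in the entry above accelerates, assuming standard double-SHA-256 hashing of an 80-byte Bitcoin block header with both the version field ("version rolling") and the nonce varied. The field values, names, and search ranges are illustrative placeholders, and this plain software loop does not reproduce the patented scheduler/digest pipeline or its midstate reuse.

```python
# Behavioral sketch (not the patented circuit): Bitcoin double-SHA-256 search
# with "version rolling": the version field is varied alongside the nonce so
# hardware can reuse work across the first message block. Values are made up.
import hashlib
import struct

def block_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # 80-byte Bitcoin block header, little-endian integer fields.
    return (struct.pack("<I", version) + prev_hash + merkle_root +
            struct.pack("<III", timestamp, bits, nonce))

def double_sha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def search(versions, nonces, prev_hash, merkle_root, timestamp, bits, target):
    # The ASIC pipeline computes one midstate per version and streams nonces
    # through shared scheduler/digest stages; here we simply brute-force.
    for version in versions:            # "version rolling" dimension
        for nonce in nonces:            # classic nonce dimension
            digest = double_sha256(block_header(version, prev_hash, merkle_root,
                                                timestamp, bits, nonce))
            if int.from_bytes(digest[::-1], "big") < target:
                yield version, nonce, digest.hex()

if __name__ == "__main__":
    hits = search(versions=range(0x20000000, 0x20000004),
                  nonces=range(0, 1 << 16),
                  prev_hash=b"\x00" * 32, merkle_root=b"\x11" * 32,
                  timestamp=1_600_000_000, bits=0x1d00ffff,
                  target=1 << 238)
    print(next(hits, "no hit in this small search space"))
```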
  • Patent number: 11240039
    Abstract: In one example an apparatus comprises a computer readable memory, a signature logic to generate a signature to be transmitted in association with a message, the signature logic to apply a hash-based signature scheme to the message using a private key to generate the signature comprising a public key, or a verification logic to verify a signature received in association with the message, the verification logic to apply the hash-based signature scheme to verify the signature using the public key, and an accelerator logic to apply a structured order to at least one set of inputs to the hash-based signature scheme. Other examples may be described.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: February 1, 2022
    Assignee: Intel Corporation
    Inventors: Vikram Suresh, Sanu Mathew, Manoj Sastry, Santosh Ghosh, Raghavan Kumar, Rafael Misoczki
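For orientation, a toy Lamport-style one-time signature in Python: it shows the generic sign-with-private-key / verify-with-public-key shape of a hash-based scheme referenced in the entry above, but it is a textbook construction with invented names, not the patented signature logic or its accelerator.

```python
# Toy Lamport one-time signature (textbook construction, not the patented logic).
# Signs one 256-bit message hash; each key pair may be used only once.
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Private key: two random 32-byte values per message-hash bit.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    # Public key: the hash of each private value.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(message, sk):
    digest = H(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(message, signature, pk):
    digest = H(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig) == pk[i][bit]
               for i, (sig, bit) in enumerate(zip(signature, bits)))

sk, pk = keygen()
sig = sign(b"example message", sk)
print(verify(b"example message", sig, pk))   # True
print(verify(b"tampered message", sig, pk))  # False
```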
  • Publication number: 20220012581
    Abstract: An apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The memory array includes an embedded dynamic random access memory (eDRAM) memory array. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The mathematical computation circuit includes a switched capacitor circuit. The switched capacitor circuit includes a back-end-of-line (BEOL) capacitor coupled to a thin film transistor within the metal/dielectric layers of the semiconductor chip. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Abhishek SHARMA, Jack T. KAVALIEROS, Ian A. YOUNG, Ram KRISHNAMURTHY, Sasikanth MANIPATRUNI, Uygar AVCI, Gregory K. CHEN, Amrita MATHURIYA, Raghavan KUMAR, Phil KNAG, Huseyin Ekin SUMBUL, Nazila HARATIPOUR, Van H. LE
  • Patent number: 11223483
    Abstract: In one example an apparatus comprises a computer-readable memory, signature logic to compute a message hash of an input message using a secure hash algorithm, process the message hash to generate an array of secret key components for the input message, apply a hash chain function to the array of secret key components to generate an array of signature components, the hash chain function comprising a series of even-index hash chains and a series of odd-index hash chains, wherein the even-index hash chains and the odd-index hash chains generate a plurality of intermediate node values and a one-time public key component between the secret key components and the signature components and store at least some of the intermediate node values in the computer-readable memory for use in one or more subsequent signature operations. Other examples may be described.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 11, 2022
    Assignee: Intel Corporation
    Inventors: Rafael Misoczki, Vikram Suresh, Santosh Ghosh, Manoj Sastry, Sanu Mathew, Raghavan Kumar
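A small sketch of the caching idea in the abstract above: intermediate hash-chain nodes are memoized so later signature operations can resume from a stored node rather than re-hashing from the secret start value. The chain length, cache policy, and single-chain structure are simplifying assumptions, not the patented even/odd-indexed construction.

```python
# Sketch of Winternitz-style hash chains with memoized intermediate nodes.
# Caching lets a later operation start from a stored node instead of
# re-hashing from the secret start value (a simplification of the idea above).
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

class HashChain:
    def __init__(self, secret, length=16):
        self.length = length
        self.cache = {0: secret}           # iteration count -> node value

    def node(self, i):
        """Return the i-th chain node, reusing the deepest cached ancestor."""
        if i in self.cache:
            return self.cache[i]
        j = max(k for k in self.cache if k < i)
        value = self.cache[j]
        for k in range(j + 1, i + 1):
            value = H(value)
            self.cache[k] = value          # store intermediate nodes for reuse
        return value

chain = HashChain(b"\x42" * 32)
top = chain.node(15)                       # computes and caches nodes 1..15
mid = chain.node(7)                        # served straight from the cache
print(top.hex()[:16], mid.hex()[:16])
```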
  • Patent number: 11218320
    Abstract: In one example an apparatus comprises a computer readable memory, hash logic to generate a message hash value based on an input message, signature logic to generate a signature to be transmitted in association with the message, the signature logic to apply a hash-based signature scheme to a private key to generate the signature comprising a public key, and accelerator logic to pre-compute at least one set of inputs to the signature logic. Other examples may be described.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 4, 2022
    Assignee: Intel Corporation
    Inventors: Vikram Suresh, Sanu Mathew, Manoj Sastry, Santosh Ghosh, Raghavan Kumar, Rafael Misoczki
  • Patent number: 11205017
    Abstract: Embodiments are directed to post quantum public key signature operation for reconfigurable circuit devices. An embodiment of an apparatus includes one or more processors; and a reconfigurable circuit device, the reconfigurable circuit device including a dedicated cryptographic hash hardware engine, and a reconfigurable fabric including logic elements (LEs), wherein the one or more processors are to configure the reconfigurable circuit device for public key signature operation, including mapping a state machine for public key generation and verification to the reconfigurable fabric, including mapping one or more cryptographic hash engines to the reconfigurable fabric, and combining the dedicated cryptographic hash hardware engine with the one or more mapped cryptographic hash engines for cryptographic signature generation and verification.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: December 21, 2021
    Assignee: Intel Corporation
    Inventors: Vikram Suresh, Sanu Mathew, Rafael Misoczki, Santosh Ghosh, Raghavan Kumar, Manoj Sastry, Andrew H. Reinders
  • Patent number: 11195079
    Abstract: In one embodiment, a processor comprises a first neuro-synaptic core comprising first circuitry to configure the first neuro-synaptic core as a neuron core responsive to a first value specified by a configuration parameter; and configure the first neuro-synaptic core as a synapse core responsive to a second value specified by the configuration parameter.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: December 7, 2021
    Assignee: Intel Corporation
    Inventors: Huseyin E. Sumbul, Gregory K. Chen, Phil Knag, Raghavan Kumar, Ram K. Krishnamurthy
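A behavioral sketch of the configurable core described above, assuming a single mode parameter selects between neuron-style integration and synapse-style weight lookup; all class and parameter names are hypothetical.

```python
# Behavioral sketch: one core that acts as either a neuron core or a synapse
# core depending on a single configuration parameter (names are illustrative).
NEURON_MODE, SYNAPSE_MODE = 0, 1

class NeuroSynapticCore:
    def __init__(self, mode, weights=None, threshold=1.0):
        self.mode = mode
        self.weights = weights or {}
        self.threshold = threshold
        self.potential = 0.0

    def step(self, event):
        if self.mode == SYNAPSE_MODE:
            # Synapse core: look up a weight for the incoming spike and forward it.
            pre_id = event
            return ("weighted_spike", self.weights.get(pre_id, 0.0))
        else:
            # Neuron core: integrate the weighted input and fire on threshold.
            self.potential += event
            if self.potential >= self.threshold:
                self.potential = 0.0
                return ("spike", None)
            return ("idle", None)

synapse = NeuroSynapticCore(SYNAPSE_MODE, weights={3: 0.6})
neuron = NeuroSynapticCore(NEURON_MODE, threshold=1.0)
kind, weight = synapse.step(3)           # spike arrives from pre-synaptic neuron 3
print(kind, weight, neuron.step(weight), neuron.step(weight))
```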
  • Patent number: 11157799
    Abstract: A neuromorphic computing system is provided which comprises: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synapse core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Huseyin E. Sumbul, Gregory K. Chen, Raghavan Kumar, Phil Christopher Knag, Ram Krishnamurthy
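A sketch of the address-generation idea above, under the assumption that post-synaptic addresses are derived deterministically from the incoming request (here with a hash) so they never need to be stored; the derivation function is an invented stand-in for the patented logic.

```python
# Sketch: post-synaptic target addresses derived on demand from the request,
# so the synapse core never stores an explicit address list. The hash-based
# derivation is an assumed stand-in for the patented generation logic.
import hashlib

NUM_NEURONS = 1024

def derive_targets(pre_neuron_id, fanout=2):
    """Deterministically generate `fanout` post-synaptic addresses."""
    targets = []
    for k in range(fanout):
        seed = f"{pre_neuron_id}:{k}".encode()
        digest = hashlib.sha256(seed).digest()
        targets.append(int.from_bytes(digest[:4], "little") % NUM_NEURONS)
    return targets

# The same request always regenerates the same pair of addresses.
print(derive_targets(pre_neuron_id=17))
print(derive_targets(pre_neuron_id=17))
```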
  • Patent number: 11151046
    Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: October 19, 2021
    Assignee: Intel Corporation
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
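A toy two-stage lowering flow in the spirit of the abstract above: a layer description is compiled to a small intermediate form and then to one instruction set per layer, each bound to its own SRAM array. The DSL strings and instruction names are placeholders, not the patented compiler or instruction set architecture.

```python
# Toy two-stage lowering: high-level model -> intermediate DSL -> per-layer
# instruction sets, each assigned to one SRAM array for in-memory execution.
model = [
    {"layer": "fc1", "op": "matmul", "in": 784, "out": 128},
    {"layer": "fc2", "op": "matmul", "in": 128, "out": 10},
]

def high_level_to_dsl(layers):
    # Stage 1: framework-level graph -> intermediate domain-specific statements.
    return [f"{l['layer']}: {l['op']}({l['in']} -> {l['out']})" for l in layers]

def dsl_to_isa(dsl_statements):
    # Stage 2: one instruction set per layer, bound to its own SRAM array.
    programs = {}
    for array_id, stmt in enumerate(dsl_statements):
        name = stmt.split(":")[0]
        programs[name] = {
            "sram_array": array_id,
            "instructions": ["LOAD_WEIGHTS", "MAC_ROWS", "ACTIVATE", "STORE_OUT"],
        }
    return programs

for layer, prog in dsl_to_isa(high_level_to_dsl(model)).items():
    print(layer, prog)
```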
  • Patent number: 11138499
    Abstract: An apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The memory array includes an embedded dynamic random access memory (eDRAM) memory array. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The mathematical computation circuit includes a switched capacitor circuit. The switched capacitor circuit includes a back-end-of-line (BEOL) capacitor coupled to a thin film transistor within the metal/dielectric layers of the semiconductor chip. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: October 5, 2021
    Assignee: Intel Corporation
    Inventors: Abhishek Sharma, Jack T. Kavalieros, Ian A. Young, Sasikanth Manipatruni, Ram Krishnamurthy, Uygar Avci, Gregory K. Chen, Amrita Mathuriya, Raghavan Kumar, Phil Knag, Huseyin Ekin Sumbul, Nazila Haratipour, Van H. Le
  • Patent number: 11100385
    Abstract: Apparatus and method for a scalable, free running neuromorphic processor. For example, one embodiment of a neuromorphic processing apparatus comprises: a plurality of neurons; an interconnection network to communicatively couple at least a subset of the plurality of neurons; a spike controller to stochastically generate a trigger signal, the trigger signal to cause a selected neuron to perform a thresholding operation to determine whether to issue a spike signal.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: August 24, 2021
    Assignee: Intel Corporation
    Inventors: Raghavan Kumar, Gregory K. Chen, Huseyin E. Sumbul, Ram K. Krishnamurthy, Phil Knag
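A behavioral sketch of the stochastic trigger idea above: neurons integrate continuously, but the thresholding check runs only when a randomly generated trigger selects a neuron. Probabilities and parameter values are illustrative.

```python
# Behavioral sketch of a stochastic spike controller: neurons integrate inputs
# every step, but only perform the thresholding operation when a randomly
# generated trigger selects them (parameter values are illustrative).
import random

class Neuron:
    def __init__(self, threshold=1.0):
        self.potential = 0.0
        self.threshold = threshold

    def integrate(self, value):
        self.potential += value

    def maybe_spike(self):
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

random.seed(0)
neurons = [Neuron() for _ in range(4)]
for step in range(20):
    for n in neurons:
        n.integrate(random.uniform(0.0, 0.3))
    if random.random() < 0.25:                   # stochastic trigger signal
        target = random.randrange(len(neurons))  # select a neuron to evaluate
        if neurons[target].maybe_spike():
            print(f"step {step}: neuron {target} spiked")
```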
  • Patent number: 11062203
    Abstract: In one embodiment, a method comprises receiving a selection of a neural network topology type; identifying a synapse memory mapping scheme for the selected neural network topology type from a plurality of synapse memory mapping schemes that are each associated with a respective neural network topology type; and mapping a plurality of synapse weights to locations in a memory based on the identified synapse memory mapping scheme.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: July 13, 2021
    Assignee: Intel Corporation
    Inventors: Gregory K. Chen, Raghavan Kumar, Huseyin Ekin Sumbul, Phil Knag, Ram K. Krishnamurthy
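A sketch of topology-driven weight placement as described above: the selected topology type picks one of several mapping functions that lay synapse weights out in memory. The two schemes shown (dense row-major and shared-kernel) are generic examples, not the patented mapping schemes.

```python
# Sketch: choose a synapse-weight memory mapping scheme from the selected
# network topology type, then place weights accordingly.
def dense_row_major(weights):
    # Fully-connected layers: one contiguous row per post-synaptic neuron.
    return {(r, c): w for r, row in enumerate(weights) for c, w in enumerate(row)}

def shared_kernel(weights):
    # Convolutional layers: store the small kernel once and reuse it.
    return {("kernel", i): w for i, w in enumerate(weights[0])}

MAPPING_SCHEMES = {"fully_connected": dense_row_major, "convolutional": shared_kernel}

def map_synapses(topology_type, weights):
    scheme = MAPPING_SCHEMES[topology_type]      # scheme chosen by topology type
    return scheme(weights)

print(map_synapses("fully_connected", [[0.1, 0.2], [0.3, 0.4]]))
print(map_synapses("convolutional", [[0.5, -0.5, 0.25]]))
```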
  • Patent number: 11061646
    Abstract: Compute-in-memory circuits and techniques are described. In one example, a memory device includes an array of memory cells, the array including multiple sub-arrays. Each of the sub-arrays receives a different voltage. The memory device also includes capacitors coupled with conductive access lines of each of the multiple sub-arrays and circuitry coupled with the capacitors to share charge between the capacitors in response to a signal. In another example, a computing device, such as a machine learning accelerator, includes a first memory array and a second memory array. The computing device also includes an analog processor circuit coupled with the first and second memory arrays to receive first analog input voltages from the first memory array and second analog input voltages from the second memory array, perform one or more operations on the first and second analog input voltages, and output an analog output voltage.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: July 13, 2021
    Assignee: Intel Corporation
    Inventors: Huseyin Ekin Sumbul, Phil Knag, Gregory K. Chen, Raghavan Kumar, Abhishek Sharma, Sasikanth Manipatruni, Amrita Mathuriya, Ram Krishnamurthy, Ian A. Young
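A back-of-the-envelope model of the charge-sharing step in the entry above: shorting the access-line capacitors together yields the capacitance-weighted average of their voltages. Component values are illustrative.

```python
# Behavioral model of charge sharing: when capacitors on the access lines are
# shorted together, the shared voltage is the capacitance-weighted average of
# the individual voltages, combining the partial analog results.
def share_charge(caps_farads, volts):
    total_charge = sum(c * v for c, v in zip(caps_farads, volts))
    total_cap = sum(caps_farads)
    return total_charge / total_cap     # V = Q_total / C_total

# Four sub-arrays drive 1 fF capacitors to different analog partial sums.
caps = [1e-15, 1e-15, 1e-15, 1e-15]
volts = [0.8, 0.2, 0.6, 0.4]
print(f"shared voltage: {share_charge(caps, volts):.3f} V")   # 0.500 V
```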
  • Patent number: 11048434
    Abstract: A memory circuit has compute-in-memory (CIM) circuitry that performs computations based on time-to-digital conversion (TDC). The memory circuit includes an array of memory cells addressable with a column address and a row address. The memory circuit includes CIM sense circuitry to sense a voltage for multiple memory cells triggered together. The CIM sense circuitry includes a TDC circuit to convert a time for discharge of the multiple memory cells to a digital value. A processing circuit determines a value of the multiple memory cells based on the digital value.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 29, 2021
    Assignee: Intel Corporation
    Inventors: Raghavan Kumar, Phil Knag, Gregory K. Chen, Huseyin Ekin Sumbul, Sasikanth Manipatruni, Amrita Mathuriya, Abhishek Sharma, Ram Krishnamurthy, Ian A. Young
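A behavioral model of TDC-based sensing as summarized above: more conducting cells discharge the line faster, and quantizing the discharge time recovers a digital value. The RC constants and LSB are assumed for illustration.

```python
# Behavioral model of time-to-digital based sensing: the more cells in a column
# conduct, the faster the line discharges; a TDC quantizes the discharge time,
# and the digital code reflects the number of active cells.
def discharge_time(num_active_cells, r_cell=10e3, c_line=50e-15):
    # More active cells -> lower effective resistance -> shorter discharge time.
    if num_active_cells == 0:
        return float("inf")
    return (r_cell / num_active_cells) * c_line

def tdc(time_seconds, lsb=0.1e-9, num_bits=4):
    # Convert a time interval into a bounded digital code.
    if time_seconds == float("inf"):
        return (1 << num_bits) - 1
    return min(int(time_seconds / lsb), (1 << num_bits) - 1)

for active in range(0, 5):
    code = tdc(discharge_time(active))
    print(f"{active} active cells -> TDC code {code}")
```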
  • Patent number: 11016701
    Abstract: Techniques and mechanisms for a memory device to perform in-memory computing based on a logic state which is detected with a voltage-controlled oscillator (VCO). In an embodiment, a VCO circuit of the memory device receives from a memory array a first signal indicating a logic state that is based on one or more currently stored data bits. The VCO provides a conversion from the logic state being indicated by a voltage characteristic of the first signal to the logic state being indicated by a corresponding frequency characteristic of a cyclical signal. Based on the frequency characteristic, the logic state is identified and communicated for use in an in-memory computation at the memory device. In another embodiment, a result of the in-memory computation is written back to the memory array.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: May 25, 2021
    Assignee: Intel Corporation
    Inventors: Ian Young, Ram Krishnamurthy, Sasikanth Manipatruni, Amrita Mathuriya, Abhishek Sharma, Raghavan Kumar, Phil Knag, Huseyin Sumbul, Gregory Chen
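A behavioral model of the VCO readout path described above: the bit-line voltage sets an oscillation frequency, cycles are counted over a fixed window, and the count is thresholded back to a logic state. Gain, offset, and window are illustrative assumptions.

```python
# Behavioral model of VCO-based readout: the sensed voltage sets the VCO
# frequency, and counting oscillation cycles in a fixed window recovers the
# stored logic state.
def vco_frequency(voltage, f0=1e9, gain=2e9):
    return f0 + gain * voltage          # simple linear voltage-to-frequency model

def count_cycles(voltage, window=10e-9):
    return int(vco_frequency(voltage) * window)

def detect_state(voltage, threshold_cycles=20):
    # Higher stored value -> higher voltage -> more cycles in the window.
    return 1 if count_cycles(voltage) >= threshold_cycles else 0

for v in (0.1, 0.3, 0.7, 0.9):
    print(f"{v:.1f} V -> {count_cycles(v)} cycles -> state {detect_state(v)}")
```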
  • Patent number: 11017288
    Abstract: System and techniques for spike timing dependent plasticity (STDP) in neuromorphic hardware are described herein. A first spike may be received, at a first neuron at a first time, from a second neuron. The first neuron may produce a second spike at a second time after the first time. At a third time after the second time, the first neuron may receive a third spike from the second neuron. Here, the third spike is a replay of the first spike with a defined time offset. The first neuron may then perform long term potentiation (LTP) for the first spike using the third spike.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: May 25, 2021
    Assignee: Intel Corporation
    Inventors: Ram Kumar Krishnamurthy, Gregory Kengho Chen, Raghavan Kumar, Phil Christopher Knag, Huseyin Ekin Sumbul
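A sketch of the LTP step above: because the replayed pre-synaptic spike arrives with a known offset, the neuron can recover the original pre-before-post interval without having stored the first spike. The exponential kernel and constants are conventional STDP choices, not values from the patent.

```python
# LTP using a replayed pre-synaptic spike: the replay arrives at a defined
# offset after the original spike, so the original timing difference can be
# reconstructed at update time.
import math

def ltp_update(t_pre_replay, t_post, replay_offset, a_plus=0.05, tau=20.0):
    t_pre = t_pre_replay - replay_offset          # recover original spike time
    dt = t_post - t_pre                           # pre-before-post interval
    if dt <= 0:
        return 0.0                                # not a potentiation case
    return a_plus * math.exp(-dt / tau)           # classic exponential LTP kernel

w = 0.50
# Pre spike at t=5 (replayed at t=35), post spike at t=12, replay offset 30.
w += ltp_update(t_pre_replay=35.0, t_post=12.0, replay_offset=30.0)
print(f"updated weight: {w:.4f}")
```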
  • Publication number: 20210119766
    Abstract: Technologies for memory and I/O efficient operations on homomorphically encrypted data are disclosed. In the illustrative embodiment, a cloud compute device is to perform operations on homomorphically encrypted data. In order to reduce memory storage space and network and I/O bandwidth, ciphertext blocks can be manipulated as data structures, allowing operands for operations on a compute engine to be created on the fly as the compute engine is performing other operations, using orders of magnitude less storage space and bandwidth.
    Type: Application
    Filed: December 24, 2020
    Publication date: April 22, 2021
    Inventors: Vikram B. Suresh, Rosario Cammarota, Sanu K. Mathew, Zeshan A. Chishti, Raghavan Kumar, Rafael Misoczki
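A loose sketch of the on-the-fly operand idea in the entry above: ciphertext blocks are kept as compact structures and expanded into full operands lazily, as the compute engine consumes them. The "expansion" below is a trivial stand-in; it is not homomorphic encryption or the patented data layout.

```python
# Sketch: operands are materialized one at a time from a compact stored form
# while the compute engine runs, instead of being stored in full up front.
from typing import Iterator, List

def expand(block_template: List[int], rotation: int) -> List[int]:
    # Hypothetical expansion: derive a full operand from a stored template.
    return block_template[rotation:] + block_template[:rotation]

def operand_stream(block_template: List[int], count: int) -> Iterator[List[int]]:
    # Operands are generated lazily, as the compute engine asks for them.
    for i in range(count):
        yield expand(block_template, i % len(block_template))

def compute_engine(operands: Iterator[List[int]]) -> List[int]:
    accumulator = [0, 0, 0, 0]
    for op in operands:                      # each operand exists only briefly
        accumulator = [a + b for a, b in zip(accumulator, op)]
    return accumulator

print(compute_engine(operand_stream([1, 2, 3, 4], count=8)))
```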
  • Patent number: 10985903
    Abstract: A processing system includes a processing core and a hardware accelerator communicatively coupled to the processing core. The hardware accelerator includes a random number generator to generate a byte order indicator. The hardware accelerator also includes a first switching module communicatively coupled to the random number generator. The first switching module receives an input byte sequence in an encryption round of a cryptographic operation and feeds a portion of the input byte sequence to one of a first substitute box (S-box) module or a second S-box module in view of a byte order indicator value generated by the random number generator.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: April 20, 2021
    Assignee: Intel Corporation
    Inventors: Raghavan Kumar, Sanu K. Mathew, Sudhir K. Satpathy, Vikram B. Suresh
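A sketch of the randomized-ordering countermeasure described above: a random indicator bit decides which of two substitution-box units handles which half of the round's bytes, so the processing order varies run to run while the logical result is unchanged. The S-box is a toy permutation, not the AES S-box, and the routing is a simplified illustration.

```python
# Sketch: a random byte-order indicator routes the two halves of the state to
# two S-box units in a randomized order; the substituted output is identical
# either way, but the processing order differs from run to run.
import os

TOY_SBOX = bytes(((x * 7 + 3) % 256) for x in range(256))   # toy bijection

def sbox_unit(data):
    return bytes(TOY_SBOX[b] for b in data)

def substitute_with_random_order(state16):
    indicator = os.urandom(1)[0] & 1             # random byte-order indicator
    first, second = state16[:8], state16[8:]
    if indicator:
        out_second = sbox_unit(second)           # "second" half processed first
        out_first = sbox_unit(first)
    else:
        out_first = sbox_unit(first)
        out_second = sbox_unit(second)
    return out_first + out_second                # result is order-independent

state = bytes(range(16))
print(substitute_with_random_order(state).hex())
```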
  • Publication number: 20210110067
    Abstract: A method comprises generating, during an enrollment process conducted in a controlled environment, a dark bit mask comprising a plurality of state information values derived from a plurality of entropy sources at a plurality of operating conditions for an electronic device, and using at least a portion of the plurality of state information values to generate a set of challenge-response pairs for use in an authentication process for the electronic device.
    Type: Application
    Filed: December 23, 2020
    Publication date: April 15, 2021
    Applicant: Intel Corporation
    Inventors: Vikram Suresh, Raghavan Kumar, Sanu Mathew
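A sketch of the enrollment flow above: entropy-source bits are read at several operating conditions, any bit that flips is marked "dark", and challenge-response pairs are derived only from the stable positions. The simulated readouts and the hash-based response are illustrative assumptions.

```python
# Enrollment sketch: read the entropy sources at several operating conditions,
# mark any bit that ever flips as a "dark" bit, and build challenge-response
# pairs only from the stable positions.
import hashlib, random

random.seed(1)
GOLDEN = [random.randint(0, 1) for _ in range(64)]            # nominal PUF bits

def read_bits(flip_probability):
    # Simulated readout at one operating condition (voltage/temperature corner).
    return [b ^ (1 if random.random() < flip_probability else 0) for b in GOLDEN]

readouts = [read_bits(p) for p in (0.00, 0.05, 0.10)]          # three conditions
dark_mask = [1 if any(r[i] != GOLDEN[i] for r in readouts) else 0
             for i in range(len(GOLDEN))]                      # 1 = unstable bit

stable_bits = [GOLDEN[i] for i in range(len(GOLDEN)) if not dark_mask[i]]

def response(challenge: bytes):
    # Derive a response from the challenge and only the stable (non-dark) bits.
    return hashlib.sha256(challenge + bytes(stable_bits)).hexdigest()[:16]

print(f"dark bits: {sum(dark_mask)} of {len(dark_mask)}")
print("CRP:", ("challenge-01", response(b"challenge-01")))
```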
  • Patent number: 10956813
    Abstract: An apparatus is described. The apparatus includes a compute-in-memory circuit. The compute-in-memory circuit includes a memory circuit and an encoder. The memory circuit is to provide 2^m voltage levels on a read data line where m is greater than 1. The memory circuit includes storage cells sufficient to store a number of bits n where n is greater than m. The encoder is to receive an m-bit input and convert the m-bit input into an n-bit word that is to be stored in the memory circuit, where the m-bit to n-bit encoding performed by the encoder creates greater separation between those voltage levels that demonstrate wider voltage distributions on the read data line than the other voltage levels.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: March 23, 2021
    Assignee: Intel Corporation
    Inventors: Ian A. Young, Ram Krishnamurthy, Sasikanth Manipatruni, Gregory K. Chen, Amrita Mathuriya, Abhishek Sharma, Raghavan Kumar, Phil Knag, Huseyin Ekin Sumbul
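An illustrative sketch only, built on the assumption that the read-line voltage level tracks the number of set bits in the stored word (that proportionality is not stated in the abstract): a 2-bit input is mapped to a 4-bit codeword whose popcounts are spread apart, widening the spacing between the levels that would otherwise sit closest together.

```python
# Illustrative sketch: encode m = 2 input bits into an n = 4 bit stored word,
# assuming (for illustration only) that the read-line level tracks the number
# of set bits. Skipping the popcount-2 codeword spreads the middle levels apart.
ENCODE = {
    0b00: 0b0000,               # popcount 0
    0b01: 0b0001,               # popcount 1
    0b10: 0b0111,               # popcount 3  (popcount 2 deliberately skipped)
    0b11: 0b1111,               # popcount 4
}
DECODE = {v: k for k, v in ENCODE.items()}

def store(m_bits):
    return ENCODE[m_bits]

def sensed_level(n_bits):
    return bin(n_bits).count("1")      # proxy for the read-line voltage level

for value in range(4):
    word = store(value)
    print(f"input {value:02b} -> stored {word:04b} -> level {sensed_level(word)}")
    assert DECODE[word] == value
```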