Patents by Inventor RAGHAVAN KUMAR

RAGHAVAN KUMAR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11917053
    Abstract: In one example an apparatus comprises a computer readable memory, an XMSS operations logic to manage XMSS functions, a chain function controller to manage chain function algorithms, a secure hash algorithm-2 (SHA2) accelerator, a secure hash algorithm-3 (SHA3) accelerator, and a register bank shared between the SHA2 accelerator and the SHA3 accelerator. Other examples may be described.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: February 27, 2024
    Assignee: Intel Corporation
    Inventors: Santosh Ghosh, Vikram Suresh, Sanu Mathew, Manoj Sastry, Andrew H. Reinders, Raghavan Kumar, Rafael Misoczki
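
The chain function managed by the chain function controller in patent 11917053 can be sketched in software. A minimal sketch, assuming SHA-256 and SHA3-256 from Python's hashlib as stand-ins for the SHA2/SHA3 accelerators; the chain length and the single reused buffer (loosely mirroring the shared register bank) are illustrative assumptions, not the patented hardware.

```python
# Software analogy of a chain function that can run on either a SHA2 or SHA3 backend,
# with one reused working buffer loosely standing in for the shared register bank;
# chain length and digest choices are illustrative.
import hashlib

HASHES = {"sha2": hashlib.sha256, "sha3": hashlib.sha3_256}

def chain_fn(seed, steps, backend="sha2"):
    """Apply the selected hash function 'steps' times, overwriting one working buffer."""
    h = HASHES[backend]
    buf = bytearray(seed)                 # single working buffer shared by both backends
    for _ in range(steps):
        buf[:] = h(bytes(buf)).digest()   # next chain node replaces the buffer contents
    return bytes(buf)

node_sha2 = chain_fn(b"\x00" * 32, steps=15, backend="sha2")
node_sha3 = chain_fn(b"\x00" * 32, steps=15, backend="sha3")
```
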
  • Publication number: 20240007266
    Abstract: In one example an apparatus comprises a first input node to receive a first plaintext input, a second input node to receive a random mask, an advanced encryption standard (AES) engine configurable to operate in one of a first mode in which the random mask is added to the first plaintext input during one or more computations performed by the AES engine, or a second mode in which the random mask is not added to the first plaintext input during one or more computations performed by the AES engine. Other examples may be described.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 4, 2024
    Applicant: Intel Corporation
    Inventors: Raghavan Kumar, Vikram B. Suresh, Sanu K. Mathew
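
Publication 20240007266 describes a hardware AES engine with an optional random-mask mode. The sketch below shows the general first-order masking idea on a single substitution-table lookup; the toy table, mask handling, and function names are illustrative assumptions, not the patented datapath.

```python
# Sketch of first-order Boolean masking on a single substitution-table lookup; the toy
# table (a random byte permutation, not the real AES S-box) and mask handling are
# illustrative of the general masking idea, not the patented datapath.
import secrets

def build_masked_sbox(sbox, m_in, m_out):
    """Recompute the table so that masked_sbox[x ^ m_in] == sbox[x] ^ m_out for every byte x."""
    masked = [0] * 256
    for x in range(256):
        masked[x ^ m_in] = sbox[x] ^ m_out
    return masked

rng = secrets.SystemRandom()
toy_sbox = list(range(256))            # toy substitution table standing in for the AES S-box
rng.shuffle(toy_sbox)

m_in, m_out = rng.randrange(256), rng.randrange(256)
masked_sbox = build_masked_sbox(toy_sbox, m_in, m_out)

x = 0x3A                               # plain byte, never fed to the masked datapath directly
masked_out = masked_sbox[x ^ m_in]     # lookup operates only on the masked byte
assert masked_out ^ m_out == toy_sbox[x]   # removing the output mask recovers the plain result
```
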
  • Publication number: 20240007267
    Abstract: In one example an apparatus comprises a first input node to receive a first plaintext input, a second input node to receive a second plaintext input, a third input node to receive a random mask, and advanced encryption standard (AES) circuitry configurable to operate in one of a first mode in which the random mask is added to the first plaintext input during one or more computations to convert the first plaintext input to a first ciphertext output, or a second mode in which the first plaintext input is converted to a first ciphertext output and the second plaintext input is converted to a second ciphertext output without using the random mask. Other examples may be described.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 4, 2024
    Applicant: Intel Corporation
    Inventors: Raghavan Kumar, Sanu K. Mathew
  • Publication number: 20230401434
    Abstract: An apparatus is described. The apparatus includes a long short-term memory (LSTM) circuit having a multiply-accumulate (MAC) circuit. The MAC circuit has circuitry to rely on a stored product term rather than explicitly perform a multiplication operation to determine the product term if an accumulation of differences between consecutive, preceding input values has not reached a threshold.
    Type: Application
    Filed: August 24, 2023
    Publication date: December 14, 2023
    Applicant: Intel Corporation
    Inventors: Ram KRISHNAMURTHY, Gregory K. CHEN, Raghavan KUMAR, Phil KNAG, Huseyin Ekin SUMBUL
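
The multiply-skipping behaviour described in publication 20230401434 (and in the related grant, patent 11790217, below) can be illustrated in a few lines. A minimal software sketch with an invented class name and an arbitrary threshold; the real design is a hardware MAC inside an LSTM circuit.

```python
# Minimal software sketch of the multiply-skipping MAC; the class name,
# threshold value, and float arithmetic are illustrative, not the hardware design.
class SkippingMAC:
    """Reuses a cached product while the accumulated input drift stays small."""

    def __init__(self, weight, threshold=0.05):
        self.weight = weight
        self.threshold = threshold
        self.prev_input = None     # previous input, for consecutive differences
        self.cached_product = 0.0  # stored product term reused while drift is small
        self.drift = 0.0           # accumulated |x_t - x_{t-1}| since the last real multiply
        self.acc = 0.0             # running accumulator of the MAC

    def step(self, x):
        if self.prev_input is None:
            self.cached_product = self.weight * x       # first input: real multiply
        else:
            self.drift += abs(x - self.prev_input)
            if self.drift >= self.threshold:
                self.cached_product = self.weight * x   # drift reached threshold: real multiply
                self.drift = 0.0
        self.prev_input = x
        self.acc += self.cached_product                 # accumulate the (possibly stale) product
        return self.acc

mac = SkippingMAC(weight=0.8)
outputs = [mac.step(x) for x in (1.00, 1.01, 1.02, 1.50)]  # only the last step triggers a real multiply
```

For a slowly drifting input stream, most steps reuse the cached product; a large jump pushes the drift past the threshold, forcing a real multiply and resetting the accumulator of differences.
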
  • Publication number: 20230334006
    Abstract: A compute near memory (CNM) convolution accelerator enables a convolutional neural network (CNN) to use dedicated acceleration to achieve efficient in-place convolution operations with less impact on memory and energy consumption. A 2D convolution operation is reformulated as 1D row-wise convolution. The 1D row-wise convolution enables the CNM convolution accelerator to process input activations row-by-row, while using the weights one-by-one. Lightweight access circuits provide the ability to stream both weights and input rows as vectors to MAC units, which in turn enables modules of the CNM convolution accelerator to implement convolution for both [1×1] and chosen [n×n] sized filters.
    Type: Application
    Filed: June 20, 2023
    Publication date: October 19, 2023
    Inventors: Huseyin Ekin SUMBUL, Gregory K. CHEN, Phil KNAG, Raghavan KUMAR, Ram KRISHNAMURTHY
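
The 1D row-wise reformulation in publication 20230334006 (also granted as patent 11726950 below) amounts to accumulating one 1D convolution per filter row. A small NumPy sketch under that reading, using 'valid' cross-correlation as most neural-network stacks do; the array sizes are illustrative.

```python
# NumPy sketch of 2D convolution reformulated as accumulated 1D row-wise
# convolutions ('valid' cross-correlation); sizes are illustrative.
import numpy as np

def conv2d_rowwise(x, w):
    """Each filter row is slid along the matching input row; partial sums accumulate in place."""
    H, W = x.shape
    K, _ = w.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):              # output row
        for k in range(K):                     # one 1D row-wise convolution per filter row
            row = x[i + k]
            for j in range(out.shape[1]):      # slide the filter row along the input row
                out[i, j] += np.dot(row[j:j + K], w[k])
    return out

x = np.arange(25.0).reshape(5, 5)
w = np.arange(9.0).reshape(3, 3)
ref = np.array([[np.sum(x[i:i + 3, j:j + 3] * w) for j in range(3)] for i in range(3)])
assert np.allclose(conv2d_rowwise(x, w), ref)  # matches the direct 2D definition
```
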
  • Patent number: 11790217
    Abstract: An apparatus is described. The apparatus includes a long short-term memory (LSTM) circuit having a multiply-accumulate (MAC) circuit. The MAC circuit has circuitry to rely on a stored product term rather than explicitly perform a multiplication operation to determine the product term if an accumulation of differences between consecutive, preceding input values has not reached a threshold.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: October 17, 2023
    Assignee: Intel Corporation
    Inventors: Ram Krishnamurthy, Gregory K. Chen, Raghavan Kumar, Phil Knag, Huseyin Ekin Sumbul
  • Patent number: 11783160
    Abstract: Various systems, devices, and methods for operating on a data sequence. A system includes a set of circuits that form an input layer to receive a data sequence; first hardware computing units to transform the data sequence, the first hardware computing units connected using a set of randomly selected weights, a first hardware computing unit to: receive an input from a second hardware computing unit, determine a weight of a connection between the first and second hardware computing units using an identifier of the second hardware computing unit and a fixed random weight generator, and operate on the input using the weight to determine a state of the first hardware computing unit; and second hardware computing units to operate on states of the first computing units to generate an output based on the data sequence.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: October 10, 2023
    Assignee: Intel Corporation
    Inventors: Phil Knag, Gregory Kengho Chen, Raghavan Kumar, Huseyin Ekin Sumbul, Ram Kumar Krishnamurthy
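
In patent 11783160, connection weights are not stored but re-derived from the sending unit's identifier by a fixed random weight generator. A loose software sketch of that idea; the SHA-256-based derivation and the tanh update rule are purely illustrative choices.

```python
# Loose sketch of re-deriving fixed random connection weights from unit identifiers
# instead of storing them; the SHA-256 derivation and tanh update are illustrative choices.
import hashlib
import math

def fixed_random_weight(src_id, dst_id, scale=1.0):
    """Deterministically map a (source, destination) pair to a weight in [-scale, scale)."""
    digest = hashlib.sha256(f"{src_id}->{dst_id}".encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    return scale * (2.0 * u - 1.0)

def update_unit(dst_id, incoming):
    """New state of one computing unit: weighted sum of the states it receives."""
    s = sum(fixed_random_weight(src_id, dst_id) * state for src_id, state in incoming.items())
    return math.tanh(s)

state = update_unit(dst_id=7, incoming={0: 0.5, 1: -1.2, 2: 0.3})
```
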
  • Patent number: 11768966
    Abstract: A method comprises generating, during an enrollment process conducted in a controlled environment, a dark bit mask comprising a plurality of state information values derived from a plurality of entropy sources at a plurality of operating conditions for an electronic device, and using at least a portion of the plurality of state information values to generate a set of challenge-response pairs for use in an authentication process for the electronic device.
    Type: Grant
    Filed: September 7, 2022
    Date of Patent: September 26, 2023
    Assignee: INTEL CORPORATION
    Inventors: Vikram Suresh, Raghavan Kumar, Sanu Mathew
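
The enrollment flow in patent 11768966 (also published as 20230004681 below) can be paraphrased as: read the entropy sources under several operating conditions, mark the bits that flip as "dark", and build challenge-response pairs only from the stable bits. A hedged sketch, with simulated readouts standing in for real measurements and an assumed challenge layout.

```python
# Illustrative enrollment sketch: simulated readouts stand in for real entropy-source
# measurements; mask layout, challenge width, and bit counts are assumptions.
import secrets

def enroll(readouts):
    """readouts: equal-length bit-strings, one per operating condition."""
    n = len(readouts[0])
    dark_mask = [any(r[i] != readouts[0][i] for r in readouts) for i in range(n)]  # unstable bits
    stable_bits = {i: readouts[0][i] for i in range(n) if not dark_mask[i]}
    return dark_mask, stable_bits

def make_crps(stable_bits, num_pairs, width=4):
    """Challenges are random sets of stable bit positions; responses are those bit values."""
    rng = secrets.SystemRandom()
    positions = list(stable_bits)
    crps = []
    for _ in range(num_pairs):
        challenge = sorted(rng.sample(positions, width))
        response = "".join(stable_bits[i] for i in challenge)
        crps.append((challenge, response))
    return crps

# Three simulated readouts of a 16-bit source at different voltage/temperature corners.
readouts = ["1011001110001011", "1011001010001011", "1011001110001111"]
dark_mask, stable = enroll(readouts)
crps = make_crps(stable, num_pairs=2)
```
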
  • Patent number: 11770258
    Abstract: In one example an apparatus comprises a computer readable memory, hash logic to generate a message hash value based on an input message, signature logic to generate a signature to be transmitted in association with the message, the signature logic to apply a hash-based signature scheme to a private key to generate the signature comprising a public key, and accelerator logic to pre-compute at least one set of inputs to the signature logic. Other examples may be described.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: September 26, 2023
    Assignee: INTEL CORPORATION
    Inventors: Vikram Suresh, Sanu Mathew, Manoj Sastry, Santosh Ghosh, Raghavan Kumar, Rafael Misoczki
  • Patent number: 11770262
    Abstract: In one example an apparatus comprises a computer-readable memory, signature logic to compute a message hash of an input message using a secure hash algorithm, process the message hash to generate an array of secret key components for the input message, apply a hash chain function to the array of secret key components to generate an array of signature components, the hash chain function comprising a series of even-index hash chains and a series of odd-index hash chains, wherein the even-index hash chains and the odd-index hash chains generate a plurality of intermediate node values and a one-time public key component between the secret key components and the signature components and store at least some of the intermediate node values in the computer-readable memory for use in one or more subsequent signature operations. Other examples may be described.
    Type: Grant
    Filed: January 5, 2022
    Date of Patent: September 26, 2023
    Assignee: INTEL CORPORATION
    Inventors: Rafael Misoczki, Vikram Suresh, Santosh Ghosh, Manoj Sastry, Sanu Mathew, Raghavan Kumar
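
Patent 11770262 hashes secret-key components up chains and stores intermediate node values for reuse in later signature operations. The sketch below keeps only that caching idea; the even/odd split into two chain engines is omitted, and the chain length, digest choice, and cache layout are assumptions.

```python
# Sketch of caching intermediate hash-chain nodes for reuse across signature operations;
# chain length, digest choice, and cache layout are simplified assumptions, and the
# even/odd split into two parallel chain engines is omitted.
import hashlib

CHAIN_LEN = 16
_node_cache = {}   # (secret component, steps) -> chain node

def chain(sk_component, steps):
    """Hash sk_component 'steps' times, starting from the deepest cached node available."""
    start, node = 0, sk_component
    for s in range(steps, 0, -1):                       # find the longest cached prefix
        if (sk_component, s) in _node_cache:
            start, node = s, _node_cache[(sk_component, s)]
            break
    for s in range(start + 1, steps + 1):               # extend the chain, caching new nodes
        node = hashlib.sha256(node).digest()
        _node_cache[(sk_component, s)] = node
    return node

sk0 = b"\x00" * 32
signature_component = chain(sk0, 7)        # computes and caches nodes 1..7
public_component = chain(sk0, CHAIN_LEN)   # reuses nodes 1..7, only computes 8..16
```
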
  • Publication number: 20230297819
    Abstract: An apparatus is described. The apparatus includes a circuit to process a binary neural network. The circuit includes an array of processing cores, wherein processing cores of the array of processing cores are to process different respective areas of a weight matrix of the binary neural network. The processing cores each include add circuitry to add only those weights of an i layer of the binary neural network that are to be effectively multiplied by a non-zero nodal output of an i−1 layer of the binary neural network.
    Type: Application
    Filed: May 24, 2023
    Publication date: September 21, 2023
    Inventors: Ram KRISHNAMURTHY, Gregory K. CHEN, Raghavan KUMAR, Phil KNAG, Huseyin Ekin SUMBUL, Deepak Vinayak KADETOTAD
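
The add-only evaluation in publication 20230297819 (and the related grant, patent 11663452, below) follows from binary activations: with 0/1 outputs from layer i−1, each dot product in layer i reduces to summing the weights at the positions whose activation is 1. A minimal sketch with illustrative ±1 weights and a sign threshold.

```python
# Minimal sketch of the add-only layer evaluation; the +/-1 weights, 0/1 activations,
# and sign threshold are illustrative.
def binary_layer(prev_activations, weight_rows):
    """prev_activations: 0/1 ints from layer i-1; weight_rows: one list of +/-1 weights per neuron."""
    active = [j for j, a in enumerate(prev_activations) if a]        # positions with non-zero output
    pre_acts = [sum(row[j] for j in active) for row in weight_rows]  # additions only, no multiplies
    return [1 if p >= 0 else 0 for p in pre_acts]                    # re-binarise for layer i

out = binary_layer([1, 0, 1, 1], [[+1, -1, -1, +1], [-1, +1, +1, -1]])  # -> [1, 0]
```
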
  • Patent number: 11750402
    Abstract: In one example an apparatus comprises a computer readable memory, a signature logic to generate a signature to be transmitted in association with a message, the signature logic to apply a hash-based signature scheme to the message using a private key to generate the signature comprising a public key, or a verification logic to verify a signature received in association with the message, the verification logic to apply the hash-based signature scheme to verify the signature using the public key, and an accelerator logic to apply a structured order to at least one set of inputs to the hash-based signature scheme. Other examples may be described.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: September 5, 2023
    Assignee: INTEL CORPORATION
    Inventors: Vikram Suresh, Sanu Mathew, Manoj Sastry, Santosh Ghosh, Raghavan Kumar, Rafael Misoczki
  • Patent number: 11751404
    Abstract: Embodiments herein describe techniques for a semiconductor device including a RRAM memory cell. The RRAM memory cell includes a FinFET transistor and a RRAM storage cell. The FinFET transistor includes a fin structure on a substrate, where the fin structure includes a channel region, a source region, and a drain region. An epitaxial layer is around the source region or the drain region. A RRAM storage stack is wrapped around a surface of the epitaxial layer. The RRAM storage stack includes a resistive switching material layer in contact and wrapped around the surface of the epitaxial layer, and a contact electrode in contact and wrapped around a surface of the resistive switching material layer. The epitaxial layer, the resistive switching material layer, and the contact electrode form a RRAM storage cell. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: September 5, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek Sharma, Gregory Chen, Phil Knag, Ram Krishnamurthy, Raghavan Kumar, Sasikanth Manipatruni, Amrita Mathuriya, Huseyin Sumbul, Ian A. Young
  • Patent number: 11726950
    Abstract: A compute near memory (CNM) convolution accelerator enables a convolutional neural network (CNN) to use dedicated acceleration to achieve efficient in-place convolution operations with less impact on memory and energy consumption. A 2D convolution operation is reformulated as 1D row-wise convolution. The 1D row-wise convolution enables the CNM convolution accelerator to process input activations row-by-row, while using the weights one-by-one. Lightweight access circuits provide the ability to stream both weights and input rows as vectors to MAC units, which in turn enables modules of the CNM convolution accelerator to implement convolution for both [1×1] and chosen [n×n] sized filters.
    Type: Grant
    Filed: September 28, 2019
    Date of Patent: August 15, 2023
    Assignee: Intel Corporation
    Inventors: Huseyin Ekin Sumbul, Gregory K. Chen, Phil Knag, Raghavan Kumar, Ram Krishnamurthy
  • Patent number: 11727260
    Abstract: An apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The memory array includes an embedded dynamic random access memory (eDRAM) memory array. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip. The CIM circuit includes a mathematical computation circuit coupled to a memory array. The mathematical computation circuit includes a switched capacitor circuit. The switched capacitor circuit includes a back-end-of-line (BEOL) capacitor coupled to a thin film transistor within the metal/dielectric layers of the semiconductor chip. Another apparatus is described. The apparatus includes a compute-in-memory (CIM) circuit for implementing a neural network disposed on a semiconductor chip.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: August 15, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek Sharma, Jack T. Kavalieros, Ian A. Young, Ram Krishnamurthy, Sasikanth Manipatruni, Uygar Avci, Gregory K. Chen, Amrita Mathuriya, Raghavan Kumar, Phil Knag, Huseyin Ekin Sumbul, Nazila Haratipour, Van H. Le
  • Patent number: 11663452
    Abstract: An apparatus is described. The apparatus includes a circuit to process a binary neural network. The circuit includes an array of processing cores, wherein processing cores of the array of processing cores are to process different respective areas of a weight matrix of the binary neural network. The processing cores each include add circuitry to add only those weights of an i layer of the binary neural network that are to be effectively multiplied by a non-zero nodal output of an i−1 layer of the binary neural network.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: May 30, 2023
    Assignee: Intel Corporation
    Inventors: Ram Krishnamurthy, Gregory K. Chen, Raghavan Kumar, Phil Knag, Huseyin Ekin Sumbul, Deepak Vinayak Kadetotad
  • Patent number: 11625584
    Abstract: Examples described herein relate to a neural network whose weights from a matrix are selected from a set of weights stored in a memory on-chip with a processing engine for generating multiply and carry operations. The number of weights in the set of weights stored in the memory can be less than a number of weights in the matrix thereby reducing an amount of memory used to store weights in a matrix. The weights in the memory can be generated in training using gradients from back propagation. Weights in the memory can be selected using a tabulation hash calculation on entries in a table.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: April 11, 2023
    Assignee: Intel Corporation
    Inventors: Raghavan Kumar, Gregory K. Chen, Huseyin Ekin Sumbul, Phil Knag, Ram Krishnamurthy
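
Patent 11625584 selects each matrix weight from a small shared set using a tabulation hash over table entries. A hedged sketch of that selection step; the table sizes, the coordinate encoding, and the fixed seed are illustrative assumptions.

```python
# Hedged sketch of drawing per-position weights from a small shared set via tabulation
# hashing; table sizes, the coordinate encoding, and the fixed seed are assumptions.
import random

random.seed(0)
NUM_SHARED_WEIGHTS = 16
shared_weights = [random.uniform(-1.0, 1.0) for _ in range(NUM_SHARED_WEIGHTS)]

# Tabulation hash: split the key into bytes, look each byte up in its own random
# table, and XOR the results together.
TABLES = [[random.getrandbits(32) for _ in range(256)] for _ in range(4)]

def tab_hash(key):
    h = 0
    for t in range(4):
        h ^= TABLES[t][(key >> (8 * t)) & 0xFF]
    return h

def weight(i, j, num_cols=1 << 16):
    """Weight of matrix entry (i, j), selected from the shared set rather than stored."""
    return shared_weights[tab_hash(i * num_cols + j) % NUM_SHARED_WEIGHTS]

w = weight(3, 42)
```
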
  • Publication number: 20230017447
    Abstract: A mechanism is described for facilitating a unified accelerator for classical and post-quantum digital signature schemes in computing environments, according to one embodiment. A method of embodiments, as described herein, includes unifying classical cryptography and post-quantum cryptography through a unified hardware accelerator hosted by a trusted platform of the computing device. The method may further include facilitating unification of a first finite state machine associated with the classical cryptography and a second finite state machine associated with the post-quantum cryptography through one or more of a single hash engine, a set of register file banks, and a modular exponentiation engine.
    Type: Application
    Filed: September 23, 2022
    Publication date: January 19, 2023
    Applicant: Intel Corporation
    Inventors: SANU MATHEW, MANOJ SASTRY, SANTOSH GHOSH, VIKRAM SURESH, ANDREW H. REINDERS, RAGHAVAN KUMAR, RAFAEL MISOCZKI
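
Publication 20230017447 unifies classical and post-quantum signing behind one hash engine, shared register file banks, and a modular exponentiation engine. A loose software sketch of the sharing idea only, with textbook-style toy signing paths (no padding, no real parameters) rather than the patented finite state machines.

```python
# Loose sketch of routing both signing paths through one shared hash engine and one
# modular-exponentiation helper; the textbook-style schemes, key sizes, and the absence
# of padding are deliberate simplifications.
import hashlib

def hash_engine(data):                 # the single shared hash engine
    return hashlib.sha256(data).digest()

def modexp(base, exp, mod):            # the shared modular-exponentiation engine
    return pow(base, exp, mod)

def sign_classical(msg, d, n):
    """RSA-style signature over the message hash (no padding; toy parameters only)."""
    h = int.from_bytes(hash_engine(msg), "big") % n
    return modexp(h, d, n)

def sign_hash_based(msg, sk, chain_len=16):
    """Toy one-time signature: walk a hash chain a message-dependent number of steps."""
    node = sk
    for _ in range(hash_engine(msg)[0] % chain_len):
        node = hash_engine(node)
    return node
```
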
  • Publication number: 20230004681
    Abstract: A method comprises generating, during an enrollment process conducted in a controlled environment, a dark bit mask comprising a plurality of state information values derived from a plurality of entropy sources at a plurality of operating conditions for an electronic device, and using at least a portion of the plurality of state information values to generate a set of challenge-response pairs for use in an authentication process for the electronic device.
    Type: Application
    Filed: September 7, 2022
    Publication date: January 5, 2023
    Applicant: Intel Corporation
    Inventors: Vikram Suresh, Raghavan Kumar, Sanu Mathew
  • Patent number: 11522012
    Abstract: A DIMA semiconductor structure is disclosed. The DIMA semiconductor structure includes a frontend including a semiconductor substrate, a transistor switch of a memory cell coupled to the semiconductor substrate, and a computation circuit on the periphery of the frontend coupled to the semiconductor substrate. Additionally, the DIMA includes a backend that includes an RRAM component of the memory cell that is coupled to the transistor switch.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: December 6, 2022
    Assignee: Intel Corporation
    Inventors: Jack T. Kavalieros, Ian A. Young, Ram Krishnamurthy, Ravi Pillarisetty, Sasikanth Manipatruni, Gregory Chen, Hui Jae Yoo, Van H. Le, Abhishek Sharma, Raghavan Kumar, Huichu Liu, Phil Knag, Huseyin Sumbul