Patents by Inventor Gobikrishna Dhanuskodi

Gobikrishna Dhanuskodi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966480
    Abstract: Apparatuses, systems, and techniques for supporting fairness among multiple contexts sharing cryptographic hardware. An accelerator circuit includes a copy engine (CE) with AES-GCM hardware configured to perform both encryption and authentication of data transfers for multiple applications, or for multiple data streams in a single application or belonging to a single user. The CE splits a data transfer of a specified size into a set of partial transfers. The CE sequentially executes the set of partial transfers using a context for a period of time (e.g., a timeslice) for an application. The CE stores, in a secure memory for the application, one or more data items for encryption or decryption (e.g., a hash key, a block counter, etc.) computed from the last partial transfer. These data items are retrieved and used when data transfers for the application are resumed by the CE.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Adam Hendrickson, Vaishali Kulkarni, Gobikrishna Dhanuskodi, Naveen Cherukuri, Wish Gandhi, Raymond Wong
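    Illustrative sketch (not part of the patent): AES-GCM's bulk encryption is CTR mode, so a transfer split into partial transfers can be paused and resumed by saving the block counter, as the abstract describes. The snippet below, a minimal sketch assuming Python's `cryptography` package and invented sizes, shows that two partial transfers with the counter saved in between produce the same ciphertext as one uninterrupted transfer.

      # Sketch: resuming a split CTR-mode transfer by saving the block counter.
      # Assumes the `cryptography` package; names and sizes are illustrative only.
      import os
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      def ctr_encrypt(key: bytes, counter_block: bytes, data: bytes) -> bytes:
          enc = Cipher(algorithms.AES(key), modes.CTR(counter_block)).encryptor()
          return enc.update(data) + enc.finalize()

      def advance_counter(counter_block: bytes, n_blocks: int) -> bytes:
          # CTR treats the 16-byte counter block as a big-endian integer.
          value = (int.from_bytes(counter_block, "big") + n_blocks) % (1 << 128)
          return value.to_bytes(16, "big")

      key = os.urandom(32)
      counter0 = os.urandom(16)
      plaintext = os.urandom(64 * 1024)          # one "data transfer"

      # Whole transfer in one shot...
      one_shot = ctr_encrypt(key, counter0, plaintext)

      # ...versus two partial transfers with the counter saved in between,
      # standing in for the per-context state the copy engine stores.
      half = len(plaintext) // 2
      part1 = ctr_encrypt(key, counter0, plaintext[:half])
      saved_counter = advance_counter(counter0, half // 16)   # state saved at timeslice end
      part2 = ctr_encrypt(key, saved_counter, plaintext[half:])

      assert part1 + part2 == one_shot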
  • Publication number: 20230297696
    Abstract: In examples, a parallel processing unit (PPU) operates within a trusted execution environment (TEE) implemented using a central processing unit (CPU). A virtual machine (VM) executing within the TEE is provided access to the PPU by a hypervisor. However, data of an application executed by the VM is inaccessible to the hypervisor and other untrusted entities outside of the TEE. To protect the data in transit, the VM and the PPU may encrypt or decrypt the data for secure communication between the devices. To protect the data within the PPU, a protected memory region may be created in PPU memory where compute engines of the PPU are prevented from writing outside of the protected memory region. A write-protected memory region is generated in which access to the PPU memory is blocked from other computing devices and/or device instances.
    Type: Application
    Filed: March 17, 2023
    Publication date: September 21, 2023
    Inventors: Philip Rogers, Mark Overby, Vyas Venkataraman, Naveen Cherukuri, James Leroy Deming, Gobikrishna Dhanuskodi, Dwayne Swoboda, Lucien Dunning, Aruna Manjunatha, Aaron Jiricek, Mark Hairgrove, Michael Woodmansee
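    Illustrative sketch (not part of the patent): the abstract above says the VM and the PPU encrypt data in transit so the hypervisor cannot read it. The sketch below, assuming Python's `cryptography` package with simplified key and nonce handling (all names invented), shows the general authenticated-encryption pattern for a buffer crossing that untrusted boundary.

      # Sketch: authenticated encryption of a buffer crossing the untrusted boundary
      # between a VM in a CPU TEE and a PPU. All names are illustrative assumptions.
      import os
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      session_key = AESGCM.generate_key(bit_length=256)   # shared by VM and PPU out of band
      aead = AESGCM(session_key)

      def vm_send(buffer: bytes, transfer_id: int) -> tuple[bytes, bytes]:
          """VM side: encrypt before the buffer lands in hypervisor-visible memory."""
          nonce = os.urandom(12)
          aad = transfer_id.to_bytes(8, "big")             # binds ciphertext to this transfer
          return nonce, aead.encrypt(nonce, buffer, aad)

      def ppu_receive(nonce: bytes, ciphertext: bytes, transfer_id: int) -> bytes:
          """PPU side: decrypt and authenticate into the protected memory region."""
          aad = transfer_id.to_bytes(8, "big")
          return aead.decrypt(nonce, ciphertext, aad)      # raises on tampering

      nonce, wire = vm_send(b"model weights chunk", transfer_id=7)
      assert ppu_receive(nonce, wire, transfer_id=7) == b"model weights chunk"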
  • Publication number: 20230297406
    Abstract: In examples, trusted execution environments (TEEs) are provided for an instance of a parallel processing unit (PPU) as PPU TEEs. Different instances of a PPU correspond to different PPU TEEs and provide accelerated confidential computing to a corresponding TEE. The processors of each PPU instance have separate and isolated paths through the memory system of the PPU which are assigned uniquely to an individual PPU instance. Data in device memory of the PPU may be isolated and access controlled amongst the PPU instances using one or more hardware firewalls. A GPU hypervisor assigns hardware resources to runtimes and performs access control and context switching for the runtimes. A PPU instance uses a cryptographic key to protect data for secure communication. Compute engines of the PPU instance are prevented from writing outside of a protected memory region. Access to a write-protected region in PPU memory is blocked from other computing devices and/or device instances.
    Type: Application
    Filed: March 17, 2023
    Publication date: September 21, 2023
    Inventors: Philip Rogers, Mark Overby, Vyas Venkataraman, Naveen Cherukuri, James Leroy Deming, Gobikrishna Dhanuskodi, Dwayne Swoboda, Lucien Dunning, Aruna Manjunatha, Aaron Jiricek, Mark Hairgrove, Mike Woodmansee
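    Illustrative sketch (not part of the patent): the hardware firewalls described above confine each PPU instance to its own slice of device memory. The schematic below only illustrates that access-control idea in Python; the instance names, window sizes, and check function are invented, and the real enforcement is in hardware.

      # Schematic of the hardware-firewall idea: each PPU instance may only touch
      # its own device-memory window. Ranges and IDs are illustrative assumptions.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class MemoryWindow:
          base: int
          size: int

          def contains(self, addr: int, length: int) -> bool:
              return self.base <= addr and addr + length <= self.base + self.size

      # Disjoint windows assigned by the GPU hypervisor to each PPU instance.
      FIREWALL = {
          "instance0": MemoryWindow(base=0x0000_0000, size=0x4000_0000),
          "instance1": MemoryWindow(base=0x4000_0000, size=0x4000_0000),
      }

      def check_access(instance: str, addr: int, length: int) -> None:
          if not FIREWALL[instance].contains(addr, length):
              raise PermissionError(f"{instance}: access outside its firewalled window")

      check_access("instance0", 0x1000, 4096)          # allowed
      try:
          check_access("instance0", 0x4000_1000, 4096) # crosses into instance1's window
      except PermissionError as err:
          print(err)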
  • Publication number: 20230289453
    Abstract: Apparatuses, systems, and techniques for supporting fairness among multiple contexts sharing cryptographic hardware. An accelerator circuit includes a copy engine (CE) with AES-GCM hardware configured to perform both encryption and authentication of data transfers for multiple applications, or for multiple data streams in a single application or belonging to a single user. The CE splits a data transfer of a specified size into a set of partial transfers. The CE sequentially executes the set of partial transfers using a context for a period of time (e.g., a timeslice) for an application. The CE stores, in a secure memory for the application, one or more data items for encryption or decryption (e.g., a hash key, a block counter, etc.) computed from the last partial transfer. These data items are retrieved and used when data transfers for the application are resumed by the CE.
    Type: Application
    Filed: March 10, 2022
    Publication date: September 14, 2023
    Inventors: Adam Hendrickson, Vaishali Kulkarni, Gobikrishna Dhanuskodi, Naveen Cherukuri, Wish Gandhi, Raymond Wong
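    Illustrative sketch (not part of the patent): complementing the counter-saving sketch under the granted patent above, the snippet below illustrates the other half of this abstract, splitting each context's transfer into fixed-size partial transfers and servicing the contexts round-robin, with per-context state kept across timeslices. The partial-transfer size, names, and state dictionary are assumptions for illustration; no real cryptography is performed.

      # Schematic round-robin scheduler over per-context partial transfers.
      # PARTIAL_SIZE and all names are illustrative; no real crypto is performed here.
      from collections import deque

      PARTIAL_SIZE = 4096   # bytes moved per partial transfer (assumed)

      def split(transfer: bytes):
          return deque(transfer[i:i + PARTIAL_SIZE]
                       for i in range(0, len(transfer), PARTIAL_SIZE))

      def run_copy_engine(transfers: dict[str, bytes], partials_per_timeslice: int = 2):
          queues = {ctx: split(data) for ctx, data in transfers.items()}
          state = {ctx: {"bytes_done": 0} for ctx in transfers}    # saved/restored per context
          while any(queues.values()):
              for ctx, queue in queues.items():                    # one timeslice per context
                  for _ in range(min(partials_per_timeslice, len(queue))):
                      chunk = queue.popleft()
                      # stand-in for the hash key / block counter the CE would save
                      state[ctx]["bytes_done"] += len(chunk)
                  # timeslice over: context switch; this context's state stays in `state`
          return state

      print(run_copy_engine({"appA": bytes(10_000), "appB": bytes(6_000)}))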
  • Publication number: 20230267235
    Abstract: Apparatuses, systems, and techniques for handling faults by a direct memory access (DMA) engine. When a DMA engine detects an error associated with an encryption or decryption operation, the DMA engine reports the error to a CPU, which may be executing untrusted software directing the DMA operation, and to a secure processor. The DMA engine then waits for clearance from the secure processor before responding to further directions from the potentially untrusted software.
    Type: Application
    Filed: February 22, 2022
    Publication date: August 24, 2023
    Inventors: Anuj Rao, Adam Hendrickson, Vaishali Kulkarni, Gobikrishna Dhanuskodi, Naveen Cherukuri
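    Illustrative sketch (not part of the patent): the abstract above describes a fault sequence in which the DMA engine reports a crypto error to both the CPU and the secure processor, then refuses further directions until the secure processor clears it. The state machine below is a minimal schematic of that sequence; the states, callbacks, and method names are invented for illustration.

      # Schematic fault state machine for a DMA engine; names are illustrative.
      from enum import Enum, auto

      class EngineState(Enum):
          READY = auto()
          FAULTED = auto()

      class DmaEngine:
          def __init__(self, notify_cpu, notify_secure_processor):
              self.state = EngineState.READY
              self.notify_cpu = notify_cpu
              self.notify_secure_processor = notify_secure_processor

          def on_crypto_error(self, details: str) -> None:
              self.state = EngineState.FAULTED
              self.notify_cpu(details)                 # untrusted software sees the failure
              self.notify_secure_processor(details)    # trusted entity decides what happens next

          def submit(self, descriptor) -> None:
              if self.state is EngineState.FAULTED:
                  raise RuntimeError("engine faulted: waiting for secure-processor clearance")
              ...  # process the DMA descriptor

          def clear_fault(self, cleared_by_secure_processor: bool) -> None:
              if cleared_by_secure_processor:          # only the secure processor may clear
                  self.state = EngineState.READY

      engine = DmaEngine(notify_cpu=print, notify_secure_processor=print)
      engine.on_crypto_error("GCM tag mismatch on transfer 42")
      try:
          engine.submit({"src": 0x1000, "dst": 0x2000, "len": 4096})
      except RuntimeError as err:
          print(err)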
  • Patent number: 11698869
    Abstract: The subject application relates to computing an authentication tag for partial transfers scheduled across multiple direct memory access (DMA) engines. Apparatuses, systems, and techniques are described for computing an authentication tag for a data transfer when the data transfer is scheduled as partial transfers across a specified number of direct memory access (DMA) engines. An orchestration circuit stores partial authentication tags, computed by the DMA engines, and corresponding adjustment exponents during one or more rounds in which the partial transfers are scheduled and processed by the specified number of DMA engines. During a last round, a combined authentication tag can be computed based on the partial authentication tags and the corresponding adjustment exponents stored by the orchestration circuit during the rounds.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: July 11, 2023
    Assignee: NVIDIA Corporation
    Inventors: Vaishali Kulkarni, Naveen Cherukuri, Raymond Wong, Adam Hendrickson, Gobikrishna Dhanuskodi, Wish Gandhi
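    Illustrative sketch (not part of the patent): the combining step above rests on the fact that GHASH, the authentication function inside AES-GCM, is linear, so the tag over all blocks equals each engine's partial tag multiplied by the hash key raised to an adjustment exponent, XORed together. The sketch below demonstrates that identity in pure Python following NIST SP 800-38D; it is the textbook math, not the orchestration circuit's actual bookkeeping.

      # GHASH over GF(2^128) (NIST SP 800-38D) and the partial-tag combining identity:
      #   GHASH(H, A || B) == GHASH(H, A) * H^len(B)  XOR  GHASH(H, B)
      # which is what lets per-engine partial tags be merged with adjustment exponents.
      import os

      R = 0xE1 << 120   # GCM reduction constant

      def gf_mult(x: int, y: int) -> int:
          """Multiply two 128-bit field elements in GCM's GF(2^128)."""
          z, v = 0, y
          for i in range(127, -1, -1):          # bits of x, most significant first
              if (x >> i) & 1:
                  z ^= v
              v = (v >> 1) ^ R if v & 1 else v >> 1
          return z

      def ghash(h: int, blocks: list[int]) -> int:
          y = 0
          for block in blocks:
              y = gf_mult(y ^ block, h)
          return y

      def combine(h: int, partials: list[tuple[int, int]]) -> int:
          """Merge per-engine partial tags; each is adjusted by H^(# blocks after it)."""
          tag = 0
          for partial_tag, trailing_blocks in partials:
              adjusted = partial_tag
              for _ in range(trailing_blocks):   # multiply by H^trailing_blocks
                  adjusted = gf_mult(adjusted, h)
              tag ^= adjusted
          return tag

      h = int.from_bytes(os.urandom(16), "big")
      blocks = [int.from_bytes(os.urandom(16), "big") for _ in range(8)]
      a, b = blocks[:5], blocks[5:]              # split across two "DMA engines"

      full_tag = ghash(h, blocks)
      merged = combine(h, [(ghash(h, a), len(b)), (ghash(h, b), 0)])
      assert merged == full_tag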
  • Publication number: 20230153146
    Abstract: Various embodiments include a system for launching tasks in a computing system operating in a secure mode. The system includes a central processing unit (CPU) that has access to an unsecure memory and does not have access to a secure memory. The system further includes an accelerator (e.g., GPU) that has access to the unsecure memory and the secure memory. The CPU encrypts copy tasks and secure tasks for the accelerator and stores the copy tasks and secure tasks in the unsecure memory. Copy engines in the accelerator read, decrypt, and authenticate the copy tasks and store the decrypted copy tasks in the secure memory. The copy engines execute the decrypted copy tasks to read, decrypt, and authenticate the secure tasks and store the decrypted secure tasks in the secure memory. The accelerator schedules the decrypted secure tasks for execution in the secure mode.
    Type: Application
    Filed: November 12, 2021
    Publication date: May 18, 2023
    Inventors: Gobikrishna DHANUSKODI, Philip Browing JOHNSON, Muhammad Wasiur RASHID, Sahil SAPRE
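    Illustrative sketch (not part of the patent): in the launch flow above, the CPU places encrypted, authenticated tasks in unsecure memory, and a copy engine decrypts and authenticates them before they land in secure memory for scheduling. The sketch below models that flow with Python dicts as memory regions and the `cryptography` package; key handling and field names are simplified assumptions.

      # Sketch: encrypted task submission from an untrusted CPU side to an accelerator.
      # Memory regions are modeled as plain dicts; all names are illustrative.
      import os
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      task_key = AESGCM.generate_key(bit_length=256)   # known to CPU TEE side and copy engine
      unsecure_memory: dict[int, tuple[bytes, bytes]] = {}
      secure_memory: dict[int, bytes] = {}

      def cpu_submit(task_id: int, task_descriptor: bytes) -> None:
          """CPU: encrypt the task and drop it into memory the accelerator can DMA from."""
          nonce = os.urandom(12)
          ct = AESGCM(task_key).encrypt(nonce, task_descriptor, task_id.to_bytes(4, "big"))
          unsecure_memory[task_id] = (nonce, ct)

      def copy_engine_ingest(task_id: int) -> None:
          """Copy engine: read, decrypt, authenticate, then store into secure memory."""
          nonce, ct = unsecure_memory[task_id]
          task = AESGCM(task_key).decrypt(nonce, ct, task_id.to_bytes(4, "big"))
          secure_memory[task_id] = task            # only authenticated tasks get scheduled

      cpu_submit(1, b"launch kernel foo, grid=(128,1,1)")
      copy_engine_ingest(1)
      assert secure_memory[1].startswith(b"launch kernel foo")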
  • Publication number: 20230103518
    Abstract: Apparatuses, systems, and techniques to generate a trusted execution environment including multiple accelerators. In at least one embodiment, a parallel processing unit (PPU), such as a graphics processing unit (GPU), operates in a secure execution mode including a protected memory region. Furthermore, in an embodiment, a cryptographic key is utilized to protect data during transmission between the accelerators.
    Type: Application
    Filed: September 24, 2021
    Publication date: April 6, 2023
    Inventors: Philip John Rogers, Mark Overby, Michael Asbury Woodmansee, Vyas Venkataraman, Naveen Cherukuri, Gobikrishna Dhanuskodi, Dwayne Frank Swoboda, Lucien Burton Dunning, Mark Hairgrove, Sudeshna Guha
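    Illustrative sketch (not part of the patent): the abstract above only states that a cryptographic key protects data moving between accelerators. As one plausible illustration of establishing such a key, and purely an assumption rather than the patent's mechanism, the sketch below derives a shared AES key between two accelerator endpoints with X25519 plus HKDF using the `cryptography` package.

      # Illustrative only: establishing a shared transport key between two accelerators.
      # The patent does not specify this mechanism; X25519 + HKDF is an assumed example.
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF

      gpu0_priv = X25519PrivateKey.generate()
      gpu1_priv = X25519PrivateKey.generate()

      def derive_transport_key(my_priv: X25519PrivateKey, peer_pub) -> bytes:
          shared = my_priv.exchange(peer_pub)
          return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                      info=b"ppu-to-ppu transport").derive(shared)

      key0 = derive_transport_key(gpu0_priv, gpu1_priv.public_key())
      key1 = derive_transport_key(gpu1_priv, gpu0_priv.public_key())
      assert key0 == key1      # both accelerators now hold the same 256-bit key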
  • Publication number: 20230094125
    Abstract: Apparatuses, systems, and techniques to generate a trusted execution environment including multiple accelerators. In at least one embodiment, a parallel processing unit (PPU), such as a graphics processing unit (GPU), operates in a secure execution mode including a protected memory region. Furthermore, in an embodiment, a cryptographic key is utilized to protect data during transmission between the accelerators.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Philip John Rogers, Mark Overby, Michael Asbury Woodmansee, Vyas Venkataraman, Naveen Cherukuri, Gobikrishna Dhanuskodi, Dwayne Frank Swoboda, Lucien Burton Dunning, Mark Hairgrove, Sudeshna Guha