Patents by Inventor Mark HAIRGROVE

Mark HAIRGROVE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230409486
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Application
    Filed: August 25, 2023
    Publication date: December 21, 2023
    Inventors: Jerome F. DULUK, Jr., Cameron BUSCHARDT, Sherry CHEUNG, James Leroy DEMING, Samuel H. DUNCAN, Lucien DUNNING, Robert GEORGE, Arvind GOPALAKRISHNAN, Mark HAIRGROVE, Chenghuan JIA, John MASHEY
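
The abstract above describes a fault-driven flow: an MMU raises a page fault, a copy engine reads a command queue, resolves the missing mapping from a page state directory, and installs it in the page table. The C++ sketch below models that flow in plain host code for illustration only; every type and name (PageStateDirectory, CopyEngine, FaultCommand) is a hypothetical stand-in, not the hardware interface claimed in the filing.

```cpp
#include <cstdint>
#include <optional>
#include <queue>
#include <unordered_map>

// Hypothetical software model of the flow described in the abstract:
// an MMU page fault causes a copy engine to read a command queue,
// resolve the mapping from a page state directory, and patch the page table.

using VirtAddr = std::uint64_t;
using PhysAddr = std::uint64_t;

struct PageStateDirectory {                  // authoritative VA -> PA state
    std::unordered_map<VirtAddr, PhysAddr> entries;
    std::optional<PhysAddr> lookup(VirtAddr va) const {
        auto it = entries.find(va);
        if (it == entries.end()) return std::nullopt;
        return it->second;
    }
};

struct PageTable {                           // per-processor translations
    std::unordered_map<VirtAddr, PhysAddr> pte;
    bool has(VirtAddr va) const { return pte.count(va) != 0; }
};

struct FaultCommand { VirtAddr faulting_va; };

struct CopyEngine {
    std::queue<FaultCommand> command_queue;

    // Drain the command queue: for each fault, consult the page state
    // directory and install the missing mapping in the page table.
    void service(const PageStateDirectory& psd, PageTable& pt) {
        while (!command_queue.empty()) {
            FaultCommand cmd = command_queue.front();
            command_queue.pop();
            if (auto pa = psd.lookup(cmd.faulting_va))
                pt.pte[cmd.faulting_va] = *pa;   // update the page table
        }
    }
};

// MMU side: on a missing translation, enqueue a fault command.
inline void mmu_access(VirtAddr va, PageTable& pt, CopyEngine& ce) {
    if (!pt.has(va))
        ce.command_queue.push(FaultCommand{va});  // "first page fault"
}
```
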
  • Publication number: 20230315328
    Abstract: Various embodiments include techniques for accessing extended memory in a parallel processing system via a high-bandwidth path to extended memory residing on a central processing unit. The disclosed extended memory system extends the directly addressable high-bandwidth memory local to a parallel processing system and avoids the performance penalties associated with low-bandwidth system memory. As a result, execution threads that are highly parallelizable and access a large memory space execute with increased performance on a parallel processing system relative to prior approaches.
    Type: Application
    Filed: March 18, 2022
    Publication date: October 5, 2023
    Inventors: Hemayet HOSSAIN, Steven E. MOLNAR, Jonathon Stuart Ramsay EVANS, Wishwesh Anil GANDHI, Lacky V. SHAH, Vyas VENKATARAMAN, Mark HAIRGROVE, Geoffrey GERFIN, Jeffrey M. SMITH, Terje BERGSTROM, Vikram SETHI, Piyush PATEL
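
The entry above concerns giving highly parallel threads a high-bandwidth path to extended memory residing with the CPU. The claimed path is hardware; as a loose, public-API analogue only, the CUDA unified-memory sketch below keeps a buffer resident in system memory while leaving it directly addressable from a GPU. The buffer size, device id, and advice choices are assumptions.

```cpp
#include <cuda_runtime.h>

// Public CUDA calls that illustrate the general idea of a GPU directly
// addressing memory resident on the CPU side; this is NOT the patented
// extended-memory path, just a conceptually related host-side sketch.
int main() {
    const size_t bytes = size_t(1) << 30;          // 1 GiB working set
    void* buf = nullptr;

    // Allocate managed memory and advise that it should live in system
    // (CPU) memory while remaining directly addressable from the GPU.
    cudaMallocManaged(&buf, bytes);
    cudaMemAdvise(buf, bytes, cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);
    cudaMemAdvise(buf, bytes, cudaMemAdviseSetAccessedBy, /*device=*/0);

    // ... launch kernels that read and write buf directly ...

    cudaFree(buf);
    return 0;
}
```
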
  • Publication number: 20230297696
    Abstract: In examples, a parallel processing unit (PPU) operates within a trusted execution environment (TEE) implemented using a central processing unit (CPU). A virtual machine (VM) executing within the TEE is provided access to the PPU by a hypervisor. However, data of an application executed by the VM is inaccessible to the hypervisor and other untrusted entities outside of the TEE. To protect the data in transit, the VM and the PPU may encrypt or decrypt the data for secure communication between the devices. To protect the data within the PPU, a protected memory region may be created in PPU memory where compute engines of the PPU are prevented from writing outside of the protected memory region. A write protect memory region is generated where access to the PPU memory is blocked from other computing devices and/or device instances.
    Type: Application
    Filed: March 17, 2023
    Publication date: September 21, 2023
    Inventors: Philip Rogers, Mark Overby, Vyas Venkataraman, Naveen Cherukuri, James Leroy Deming, Gobikrishna Dhanuskodi, Dwayne Swoboda, Lucien Dunning, Aruna Manjunatha, Aaron Jiricek, Mark Hairgrove, Michael Woodmansee
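
One element of the abstract above is a protected region of PPU memory that compute engines are prevented from writing outside of. The short model below expresses that containment check in software purely for illustration; the filing describes hardware enforcement, and the ProtectedRegion type and bounds check are hypothetical.

```cpp
#include <cstdint>

// Hypothetical model of the write-protection rule described above: compute
// engine writes are only allowed inside a protected region of PPU memory.

struct ProtectedRegion {
    std::uint64_t base;
    std::uint64_t size;
    bool contains(std::uint64_t addr, std::uint64_t len) const {
        return addr >= base && addr + len <= base + size;
    }
};

// Returns true only when a compute-engine write stays within the region.
bool allow_compute_write(const ProtectedRegion& region,
                         std::uint64_t dst, std::uint64_t len) {
    return region.contains(dst, len);
}
```
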
  • Publication number: 20230297406
    Abstract: In examples, trusted execution environments (TEE) are provided for an instance of a parallel processing unit (PPU) as PPU TEEs. Different instances of a PPU correspond to different PPU TEEs, and provide accelerated confidential computing to a corresponding TEE. The processors of each PPU instance have separate and isolated paths through the memory system of the PPU which are assigned uniquely to an individual PPU instance. Data in device memory of the PPU may be isolated and access controlled amongst the PPU instances using one or more hardware firewalls. A GPU hypervisor assigns hardware resources to runtimes and performs access control and context switching for the runtimes. A PPU instance uses a cryptographic key to protect data for secure communication. Compute engines of the PPU instance are prevented from writing outside of a protected memory region. Access to a write protected region in PPU memory is blocked from other computing devices and/or device instances.
    Type: Application
    Filed: March 17, 2023
    Publication date: September 21, 2023
    Inventors: Philip Rogers, Mark Overby, Vyas Venkataraman, Naveen Cherukuri, James Leroy Deming, Gobikrishna Dhanuskodi, Dwayne Swoboda, Lucien Dunning, Aruna Manjunatha, Aaron Jiricek, Mark Hairgrove, Mike Woodmansee
  • Patent number: 11741015
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: August 29, 2023
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, Sherry Cheung, James Leroy Deming, Samuel H. Duncan, Lucien Dunning, Robert George, Arvind Gopalakrishnan, Mark Hairgrove, Chenghuan Jia, John Mashey
  • Publication number: 20230103518
    Abstract: Apparatuses, systems, and techniques to generate a trusted execution environment including multiple accelerators. In at least one embodiment, a parallel processing unit (PPU), such as a graphics processing unit (GPU), operates in a secure execution mode including a protect memory region. Furthermore, in an embodiment, a cryptographic key is utilized to protect data during transmission between the accelerators.
    Type: Application
    Filed: September 24, 2021
    Publication date: April 6, 2023
    Inventors: Philip John Rogers, Mark Overby, Michael Asbury Woodmansee, Vyas Venkataraman, Naveen Cherukuri, Gobikrishna Dhanuskodi, Dwayne Frank Swoboda, Lucien Burton Dunning, Mark Hairgrove, Sudeshna Guha
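
The abstract above mentions a cryptographic key used to protect data in transit between accelerators, without naming a cipher or library. As a hedged illustration only, the sketch below wraps a buffer with AES-256-GCM via OpenSSL's public EVP API before it would be handed to another device; the function name, key handling, and buffer layout are assumptions and not part of the filing.

```cpp
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <cstdint>
#include <vector>

// Illustrative host-side AES-256-GCM wrapping of a buffer before it is
// handed to an accelerator, using OpenSSL's public EVP API. The key
// exchange, the accelerator-side decryption, and all names here are
// assumptions; the patent itself does not specify this cipher or library.
std::vector<std::uint8_t> seal_for_transfer(const std::vector<std::uint8_t>& plaintext,
                                            const std::uint8_t key[32],
                                            std::uint8_t iv[12],
                                            std::uint8_t tag[16]) {
    std::vector<std::uint8_t> ciphertext(plaintext.size());
    RAND_bytes(iv, 12);                                   // fresh nonce per transfer

    EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), nullptr, key, iv);

    int len = 0;
    EVP_EncryptUpdate(ctx, ciphertext.data(), &len,
                      plaintext.data(), static_cast<int>(plaintext.size()));
    int total = len;
    EVP_EncryptFinal_ex(ctx, ciphertext.data() + len, &len);
    total += len;
    ciphertext.resize(total);

    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag);  // authentication tag
    EVP_CIPHER_CTX_free(ctx);
    return ciphertext;
}
```
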
  • Publication number: 20230094125
    Abstract: Apparatuses, systems, and techniques to generate a trusted execution environment including multiple accelerators. In at least one embodiment, a parallel processing unit (PPU), such as a graphics processing unit (GPU), operates in a secure execution mode including a protect memory region. Furthermore, in an embodiment, a cryptographic key is utilized to protect data during transmission between the accelerators.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Philip John Rogers, Mark Overby, Michael Asbury Woodmansee, Vyas Venkataraman, Naveen Cherukuri, Gobikrishna Dhanuskodi, Dwayne Frank Swoboda, Lucien Burton Dunning, Mark Hairgrove, Sudeshna Guha
  • Publication number: 20230032278
    Abstract: Apparatuses, systems, and techniques for memory management are disclosed. In at least one embodiment, memory management is provided for a heterogeneous system, for example, a system including a CPU and a GPU, in which redundant or unnecessary memory transfers are reduced.
    Type: Application
    Filed: July 19, 2022
    Publication date: February 2, 2023
    Inventors: Weixi Zhu, Guilherme Cox, Mark Hairgrove
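
The filing above targets redundant CPU-GPU memory transfers in a heterogeneous system. Its claimed mechanism is not shown here; the standard CUDA unified-memory hints below merely illustrate the general goal of staging a working set once instead of letting pages bounce between processors. The function and its parameters are illustrative assumptions.

```cpp
#include <cuda_runtime.h>

// A public-API sketch of the general goal described above (avoiding
// redundant CPU<->GPU transfers for managed memory); this uses standard
// CUDA unified-memory hints and is not the mechanism claimed in the filing.
void stage_once(float* data, size_t n, int device, cudaStream_t stream) {
    size_t bytes = n * sizeof(float);

    // Prefetch the working set to the GPU once, instead of letting repeated
    // page faults migrate (and possibly re-migrate) the same pages.
    cudaMemPrefetchAsync(data, bytes, device, stream);

    // Mark data that both processors keep reading as read-mostly so each
    // side can hold its own copy rather than bouncing pages back and forth.
    cudaMemAdvise(data, bytes, cudaMemAdviseSetReadMostly, device);
}
```
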
  • Publication number: 20220405211
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Application
    Filed: August 18, 2022
    Publication date: December 22, 2022
    Inventors: Jerome F. DULUK, Jr., Cameron BUSCHARDT, Sherry CHEUNG, James Leroy DEMING, Samuel H. DUNCAN, Lucien DUNNING, Robert GEORGE, Arvind GOPALAKRISHNAN, Mark HAIRGROVE, Chenghuan JIA, John MASHEY
  • Patent number: 11487673
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: November 1, 2022
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, Sherry Cheung, James Leroy Deming, Samuel H. Duncan, Lucien Dunning, Robert George, Arvind Gopalakrishnan, Mark Hairgrove, Chenghuan Jia, John Mashey
  • Patent number: 11210253
    Abstract: Techniques are disclosed for tracking memory page accesses in a unified virtual memory system. An access tracking unit detects a memory page access generated by a first processor for accessing a memory page in a memory system of a second processor. The access tracking unit determines whether a cache memory includes an entry for the memory page. If so, then the access tracking unit increments an associated access counter. Otherwise, the access tracking unit attempts to find an unused entry in the cache memory that is available for allocation. If so, then the access tracking unit associates the second entry with the memory page, and sets an access counter associated with the second entry to an initial value. Otherwise, the access tracking unit selects a valid entry in the cache memory; clears an associated valid bit; associates the entry with the memory page; and initializes an associated access counter.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: December 28, 2021
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, James Leroy Deming, Brian Fahs, Mark Hairgrove, John Mashey
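
The abstract above walks through three cases for the tracking cache: hit and increment, allocate an unused entry, or evict a valid entry and reuse it. The C++ model below mirrors those three cases in software; the cache size, the victim choice, and all names are hypothetical, and the real access tracking unit is hardware.

```cpp
#include <array>
#include <cstdint>

// Hypothetical software model of the access-tracking policy described in the
// abstract: a small cache of per-page access counters with hit, allocate,
// and evict-and-reuse cases. Sizes and names are illustrative only.

struct TrackEntry {
    std::uint64_t page = 0;
    std::uint32_t count = 0;
    bool valid = false;
};

class AccessTracker {
    std::array<TrackEntry, 64> cache_{};   // small tracking cache
public:
    void record_access(std::uint64_t page) {
        // Case 1: the page already has an entry -> bump its counter.
        for (auto& e : cache_)
            if (e.valid && e.page == page) { ++e.count; return; }

        // Case 2: an unused entry is available -> allocate it.
        for (auto& e : cache_)
            if (!e.valid) { e = {page, 1, true}; return; }

        // Case 3: no free entry -> evict a valid one and reuse it
        // (a real unit would pick a victim by policy; index 0 is arbitrary).
        cache_[0] = {page, 1, true};
    }
};
```
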
  • Publication number: 20200364821
    Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one exemplary implementation, an address allocation process comprises: establishing space for managed pointers across a plurality of memories, including allocating one of the managed pointers with a first portion of memory associated with a first one of a plurality of processors; and performing a process of automatically managing accesses to the managed pointers across the plurality of processors and corresponding memories. The automated management can include ensuring consistent information associated with the managed pointers is copied from the first portion of memory to a second portion of memory associated with a second one of the plurality of processors based upon initiation of an access to the managed pointers from the second one of the plurality of processors.
    Type: Application
    Filed: July 2, 2020
    Publication date: November 19, 2020
    Inventors: Stephen Jones, Vivek Kini, Piotr Jaroszynski, Mark Hairgrove, David Fontaine, Cameron Buschardt, Lucien Dunning, John Hubbard
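
The abstract above concerns managed pointers that are valid across multiple processors, with consistency handled automatically. The CUDA program below shows the closest public analogue, a single cudaMallocManaged pointer dereferenced by both the CPU and a GPU kernel; it demonstrates the usage model only, not the management machinery claimed in the filing.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* v, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= s;
}

// Public CUDA managed-memory usage illustrating one pointer that is valid on
// both CPU and GPU; the automatic-management details are not shown here.
int main() {
    const int n = 1 << 20;
    float* v = nullptr;
    cudaMallocManaged(&v, n * sizeof(float));    // one pointer for both processors

    for (int i = 0; i < n; ++i) v[i] = 1.0f;     // CPU writes through the pointer

    scale<<<(n + 255) / 256, 256>>>(v, n, 2.0f); // GPU reads/writes the same pointer
    cudaDeviceSynchronize();                     // make results visible to the CPU

    std::printf("v[0] = %f\n", v[0]);
    cudaFree(v);
    return 0;
}
```
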
  • Patent number: 10762593
    Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one exemplary implementation, an address allocation process comprises: establishing space for managed pointers across a plurality of memories, including allocating one of the managed pointers with a first portion of memory associated with a first one of a plurality of processors; and performing a process of automatically managing accesses to the managed pointers across the plurality of processors and corresponding memories. The automated management can include ensuring consistent information associated with the managed pointers is copied from the first portion of memory to a second portion of memory associated with a second one of the plurality of processors based upon initiation of an access to the managed pointers from the second one of the plurality of processors.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: September 1, 2020
    Assignee: NVIDIA CORPORATION
    Inventors: Stephen Jones, Vivek Kini, Piotr Jaroszynski, Mark Hairgrove, David Fontaine, Cameron Buschardt, Lucien Dunning, John Hubbard
  • Publication number: 20200265543
    Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one exemplary implementation, an address allocation process comprises: establishing space for managed pointers across a plurality of memories, including allocating one of the managed pointers with a first portion of memory associated with a first one of a plurality of processors; and performing a process of automatically managing accesses to the managed pointers across the plurality of processors and corresponding memories. The automated management can include ensuring consistent information associated with the managed pointers is copied from the first portion of memory to a second portion of memory associated with a second one of the plurality of processors based upon initiation of an access to the managed pointers from the second one of the plurality of processors.
    Type: Application
    Filed: December 31, 2018
    Publication date: August 20, 2020
    Inventors: Stephen Jones, Vivek Kini, Piotr Jaroszynski, Mark Hairgrove, David Fontaine, Cameron Buschardt, Lucien Dunning, John Hubbard
  • Patent number: 10546361
    Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one exemplary implementation, an address allocation process comprises: establishing space for managed pointers across a plurality of memories, including allocating one of the managed pointers with a first portion of memory associated with a first one of a plurality of processors; and performing a process of automatically managing accesses to the managed pointers across the plurality of processors and corresponding memories. The automated management can include ensuring consistent information associated with the managed pointers is copied from the first portion of memory to a second portion of memory associated with a second one of the plurality of processors based upon initiation of an access to the managed pointers from the second one of the plurality of processors.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: January 28, 2020
    Assignee: NVIDIA CORPORATION
    Inventors: Stephen Jones, Vivek Kini, Piotr Jaroszynski, Mark Hairgrove, David Fontaine, Cameron Buschardt, Lucien Dunning, John Hubbard
  • Publication number: 20190340145
    Abstract: Techniques are disclosed for tracking memory page accesses in a unified virtual memory system. An access tracking unit detects a memory page access generated by a first processor for accessing a memory page in a memory system of a second processor. The access tracking unit determines whether a cache memory includes an entry for the memory page. If so, then the access tracking unit increments an associated access counter. Otherwise, the access tracking unit attempts to find an unused entry in the cache memory that is available for allocation. If so, then the access tracking unit associates the second entry with the memory page, and sets an access counter associated with the second entry to an initial value. Otherwise, the access tracking unit selects a valid entry in the cache memory; clears an associated valid bit; associates the entry with the memory page; and initializes an associated access counter.
    Type: Application
    Filed: June 24, 2019
    Publication date: November 7, 2019
    Inventors: Jerome F. DULUK, JR., Cameron BUSCHARDT, James Leroy DEMING, Brian FAHS, Mark HAIRGROVE, John MASHEY
  • Patent number: 10445243
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: October 15, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, Sherry Cheung, James Leroy Deming, Samuel H. Duncan, Lucien Dunning, Robert George, Arvind Gopalakrishnan, Mark Hairgrove, Chenghuan Jia, John Mashey
  • Patent number: 10409730
    Abstract: One embodiment of the present invention includes a microcontroller coupled to a memory management unit (MMU). The MMU is coupled to a page table included in a physical memory, and the microcontroller is configured to perform one or more virtual memory operations associated with the physical memory and the page table. In operation, the microcontroller receives a page fault generated by the MMU in response to an invalid memory access via a virtual memory address. To remedy such a page fault, the microcontroller performs actions to map the virtual memory address to an appropriate location in the physical memory. By contrast, in prior-art systems, a fault handler would typically remedy the page fault. Advantageously, because the microcontroller executes these tasks locally with respect to the MMU and the physical memory, latency associated with remedying page faults may be decreased. Consequently, overall system performance may be increased.
    Type: Grant
    Filed: August 27, 2013
    Date of Patent: September 10, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Cameron Buschardt, Jerome F. Duluk, Jr., John Mashey, Mark Hairgrove, James Leroy Deming, Brian Fahs
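
The abstract above places fault remediation in a microcontroller next to the MMU rather than in a CPU-side fault handler, trading a remote round trip for local latency. The sketch below models such a local servicing loop; the fault buffer, the frame allocator, and the 4 KiB page shift are all illustrative assumptions, not the claimed design.

```cpp
#include <cstdint>
#include <queue>
#include <unordered_map>

// Hypothetical model of a fault-servicing loop running locally (as the
// abstract's microcontroller does) rather than in a remote CPU fault handler.
// All structures and names are illustrative.

struct PageFault { std::uint64_t va; };

struct LocalFaultServicer {
    std::queue<PageFault> fault_buffer;                    // filled by the MMU
    std::unordered_map<std::uint64_t, std::uint64_t> page_table;
    std::uint64_t next_free_frame = 0;

    // Service faults close to the MMU and physical memory to keep latency low.
    void run_once() {
        while (!fault_buffer.empty()) {
            PageFault f = fault_buffer.front();
            fault_buffer.pop();
            std::uint64_t frame = next_free_frame++;       // pick a free frame
            page_table[f.va >> 12] = frame;                // install the mapping (4 KiB pages)
            // ...signal the MMU to replay the faulting access...
        }
    }
};
```
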
  • Patent number: 10331603
    Abstract: Techniques are disclosed for tracking memory page accesses in a unified virtual memory system. An access tracking unit detects a memory page access generated by a first processor for accessing a memory page in a memory system of a second processor. The access tracking unit determines whether a cache memory includes an entry for the memory page. If so, then the access tracking unit increments an associated access counter. Otherwise, the access tracking unit attempts to find an unused entry in the cache memory that is available for allocation. If so, then the access tracking unit associates the second entry with the memory page, and sets an access counter associated with the second entry to an initial value. Otherwise, the access tracking unit selects a valid entry in the cache memory; clears an associated valid bit; associates the entry with the memory page; and initializes an associated access counter.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: June 25, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, James Leroy Deming, Brian Fahs, Mark Hairgrove, John Mashey
  • Patent number: 10310973
    Abstract: A technique for simultaneously executing multiple tasks, each having an independent virtual address space, involves assigning an address space identifier (ASID) to each task and constructing each virtual memory access request to include both a virtual address and the ASID. During virtual to physical address translation, the ASID selects a corresponding page table, which includes virtual to physical address mappings for the ASID and associated task. Entries for a translation look-aside buffer (TLB) include both the virtual address and ASID to complete each mapping to a physical address. Deep scheduling of tasks sharing a virtual address space may be implemented to improve cache affinity for both TLB and data caches.
    Type: Grant
    Filed: October 25, 2012
    Date of Patent: June 4, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Nick Barrow-Williams, Brian Fahs, Jerome F. Duluk, Jr., James Leroy Deming, Timothy John Purcell, Lucien Dunning, Mark Hairgrove
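
The abstract above keys translations by an address space identifier so that concurrently executing tasks with independent virtual address spaces do not collide: the ASID selects a per-task page table, and TLB entries carry both the ASID and the virtual address. The C++ model below mirrors that lookup structure; all types, the hash, and the 16-bit ASID width are assumptions made for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <optional>
#include <unordered_map>

// Hypothetical model of the translation scheme described above: each task's
// ASID selects its own page table, and TLB entries are keyed by
// (ASID, virtual page) so tasks with overlapping virtual addresses never alias.

using Asid = std::uint16_t;
using VirtPage = std::uint64_t;
using PhysPage = std::uint64_t;

struct Tlb {
    struct Key {
        Asid asid; VirtPage vp;
        bool operator==(const Key& o) const { return asid == o.asid && vp == o.vp; }
    };
    struct KeyHash {
        std::size_t operator()(const Key& k) const {
            return std::hash<std::uint64_t>()((std::uint64_t(k.asid) << 48) ^ k.vp);
        }
    };
    std::unordered_map<Key, PhysPage, KeyHash> entries;
};

struct Translator {
    // One page table per ASID, i.e. per independent virtual address space.
    std::unordered_map<Asid, std::unordered_map<VirtPage, PhysPage>> tables;
    Tlb tlb;

    std::optional<PhysPage> translate(Asid asid, VirtPage vp) {
        Tlb::Key key{asid, vp};
        if (auto it = tlb.entries.find(key); it != tlb.entries.end())
            return it->second;                         // TLB hit
        auto& pt = tables[asid];                       // ASID selects the page table
        if (auto it = pt.find(vp); it != pt.end()) {
            tlb.entries[key] = it->second;             // fill the TLB
            return it->second;
        }
        return std::nullopt;                           // translation fault
    }
};
```
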