Patents by Inventor James Leroy Deming

James Leroy Deming has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9792220
    Abstract: One embodiment of the present invention includes a microcontroller coupled to a memory management unit (MMU). The MMU is coupled to a page table included in a physical memory, and the microcontroller is configured to perform one or more virtual memory operations associated with the physical memory and the page table. In operation, the microcontroller receives a page fault generated by the MMU in response to an invalid memory access via a virtual memory address. To remedy such a page fault, the microcontroller performs actions to map the virtual memory address to an appropriate location in the physical memory. By contrast, in prior-art systems, a fault handler would typically remedy the page fault. Advantageously, because the microcontroller executes these tasks locally with respect to the MMU and the physical memory, latency associated with remedying page faults may be decreased. Consequently, overall system performance may be increased.
    Type: Grant
    Filed: August 27, 2013
    Date of Patent: October 17, 2017
    Assignee: NVIDIA Corporation
    Inventors: Cameron Buschardt, Jerome F. Duluk, Jr., John Mashey, Mark Hairgrove, James Leroy Deming, Brian Fahs
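To make the mechanism in the abstract above concrete, the following is a minimal C++ sketch, assuming hypothetical names (PageTable, Microcontroller, MMU::translate): the MMU hands a page fault to a microcontroller located next to it, which maps the faulting virtual page locally instead of deferring to a host-side fault handler. It is an illustration, not the patented hardware.

```cpp
// Minimal sketch: a page fault raised by the MMU is remedied by a local microcontroller.
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct PageTable {
    std::unordered_map<uint64_t, uint64_t> entries;   // virtual page -> physical page
};

class Microcontroller {
public:
    explicit Microcontroller(PageTable& pt) : pt_(pt) {}
    // Called by the MMU on a fault: map the virtual page to a free physical page.
    void remedyFault(uint64_t virtualPage) { pt_.entries[virtualPage] = nextFreePhysPage_++; }
private:
    PageTable& pt_;
    uint64_t nextFreePhysPage_ = 0x100;
};

class MMU {
public:
    MMU(PageTable& pt, Microcontroller& mcu) : pt_(pt), mcu_(mcu) {}
    uint64_t translate(uint64_t virtualAddr) {
        const uint64_t vpn = virtualAddr >> 12;        // 4 KiB pages
        auto it = pt_.entries.find(vpn);
        if (it == pt_.entries.end()) {                 // invalid access: page fault
            mcu_.remedyFault(vpn);                     // remedied locally, low latency
            it = pt_.entries.find(vpn);
        }
        return (it->second << 12) | (virtualAddr & 0xFFF);
    }
private:
    PageTable& pt_;
    Microcontroller& mcu_;
};

int main() {
    PageTable pt;
    Microcontroller mcu(pt);
    MMU mmu(pt, mcu);
    std::cout << std::hex << mmu.translate(0x40001234) << "\n";  // faults, then maps
    std::cout << std::hex << mmu.translate(0x40001abc) << "\n";  // hits the new mapping
}
```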
  • Publication number: 20170286198
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Application
    Filed: October 16, 2013
    Publication date: October 5, 2017
    Applicant: NVIDIA CORPORATION
    Inventors: Jerome F. DULUK, JR., Cameron BUSCHARDT, Sherry CHEUNG, James Leroy DEMING, Samuel H. DUNCAN, Lucien DUNNING, Robert GEORGE, Arvind GOPALAKRISHNAN, Mark HAIRGROVE, Chenghuan JIA, John MASHEY
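As a rough illustration of the copy-engine flow described in the abstract above, here is a minimal C++ sketch with hypothetical types (PageStateDirectory, CopyEngine, MapCommand): the copy engine drains a command queue, looks each faulting virtual page up in a page state directory, and installs the mapping in the local page table.

```cpp
// Minimal sketch: a copy engine updates a page table from a page state directory.
#include <cstdint>
#include <iostream>
#include <queue>
#include <unordered_map>

using VirtPage = uint64_t;
using PhysPage = uint64_t;

struct PageStateDirectory {                      // authoritative mapping state
    std::unordered_map<VirtPage, PhysPage> state;
};

struct PageTable {
    std::unordered_map<VirtPage, PhysPage> entries;
};

struct MapCommand { VirtPage vpn; };             // enqueued when the MMU reports a fault

class CopyEngine {
public:
    CopyEngine(PageStateDirectory& psd, PageTable& pt) : psd_(psd), pt_(pt) {}
    void process(std::queue<MapCommand>& commandQueue) {
        while (!commandQueue.empty()) {
            MapCommand cmd = commandQueue.front();
            commandQueue.pop();
            auto it = psd_.state.find(cmd.vpn);  // find the mapping in the directory
            if (it != psd_.state.end())
                pt_.entries[cmd.vpn] = it->second;  // update the page table
        }
    }
private:
    PageStateDirectory& psd_;
    PageTable& pt_;
};

int main() {
    PageStateDirectory psd;  psd.state[0x42] = 0x1000;
    PageTable pt;
    std::queue<MapCommand> q;  q.push({0x42});   // a faulting access enqueued this command
    CopyEngine ce(psd, pt);
    ce.process(q);
    std::cout << "vpn 0x42 -> ppn 0x" << std::hex << pt.entries[0x42] << "\n";
}
```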
  • Patent number: 9767036
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: September 19, 2017
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, Sherry Cheung, James Leroy Deming, Samuel H. Duncan, Lucien Dunning, Robert George, Arvind Gopalakrishnan, Mark Hairgrove, Chenghuan Jia, John Mashey
  • Patent number: 9754561
    Abstract: One embodiment of the present invention includes a memory management unit (MMU) that is configured to manage sparse mappings. The MMU processes requests to translate virtual addresses to physical addresses based on page table entries (PTEs) that indicate a sparse status. If the MMU determines that the PTE does not include a mapping from a virtual address to a physical address, then the MMU responds to the request based on the sparse status. If the sparse status is active, then the MMU determines the physical address based on whether the type of the request is a write operation and, subsequently, generates an acknowledgement of the request. By contrast, if the sparse status is not active, then the MMU generates a page fault. Advantageously, the disclosed embodiments enable the computer system to manage sparse mappings without incurring the performance degradation associated with both page faults and conventional software-based sparse mapping management.
    Type: Grant
    Filed: October 4, 2013
    Date of Patent: September 5, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Jonathan Dunaisky, Henry Packard Moreton, Jeffrey A. Bolz, Yury Y. Uralsky, James Leroy Deming, Rui M. Bastos, Patrick R. Brown, Amanpreet Grewal, Christian Amsinck, Poornachandra Rao, Jerome F. Duluk, Jr., Andrew J. Tao
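The sparse-status behavior described in the abstract above can be sketched as follows in C++, with hypothetical types (PTE, AccessType, AccessResult): an unmapped PTE marked sparse is acknowledged rather than faulting, while an unmapped, non-sparse PTE produces a page fault.

```cpp
// Minimal sketch of sparse-mapping handling in an MMU.
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>

struct PTE {
    std::optional<uint64_t> physPage;  // empty = no mapping
    bool sparse = false;
};

enum class AccessType { Read, Write };

struct AccessResult { bool acknowledged; bool pageFault; uint64_t data; };

class MMU {
public:
    std::unordered_map<uint64_t, PTE> pageTable;

    AccessResult access(uint64_t vpn, AccessType type) {
        const PTE& pte = pageTable[vpn];
        if (pte.physPage)                        // normal translation path
            return {true, false, 0xDEADBEEF};
        if (pte.sparse) {
            if (type == AccessType::Read)
                return {true, false, 0};         // sparse reads return zero
            return {true, false, 0};             // sparse writes are discarded but acknowledged
        }
        return {false, true, 0};                 // neither mapped nor sparse: page fault
    }
};

int main() {
    MMU mmu;
    mmu.pageTable[1] = {std::nullopt, /*sparse=*/true};
    auto r = mmu.access(1, AccessType::Read);
    std::cout << "sparse read acked=" << r.acknowledged << " fault=" << r.pageFault << "\n";
    auto f = mmu.access(2, AccessType::Write);   // vpn 2 is unmapped and not sparse
    std::cout << "unmapped write acked=" << f.acknowledged << " fault=" << f.pageFault << "\n";
}
```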
  • Publication number: 20170249254
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Application
    Filed: October 16, 2013
    Publication date: August 31, 2017
    Applicant: NVIDIA Corporation
    Inventors: Jerome F. DULUK, JR., Cameron BUSCHARDT, Sherry CHEUNG, James Leroy DEMING, Samuel H. DUNCAN, Lucien DUNNING, Robert GEORGE, Arvind GOPALAKRISHNAN, Mark HAIRGROVE, Chenghuan JIA, John MASHEY
  • Publication number: 20170199689
    Abstract: One embodiment of the present invention is a memory subsystem that includes a sliding window tracker that tracks memory accesses associated with a sliding window of memory page groups. When the sliding window tracker detects an access operation associated with a memory page group within the sliding window, the sliding window tracker sets a reference bit that is associated with the memory page group and is included in a reference vector that represents accesses to the memory page groups within the sliding window. Based on the values of the reference bits, the sliding window tracker causes a memory page in a memory page group that has fallen into disuse to be migrated from a first memory to a second memory. Because the sliding window tracker tunes the memory pages that are resident in the first memory to reflect memory access patterns, the overall performance of the memory subsystem is improved.
    Type: Application
    Filed: May 31, 2016
    Publication date: July 13, 2017
    Inventors: John MASHEY, Cameron BUSCHARDT, James Leroy DEMING, Jerome F. DULUK, JR., Brian FAHS
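A minimal C++ sketch of the sliding-window idea in the abstract above, assuming hypothetical names (SlidingWindowTracker, kWindowGroups): each page group in the window has a reference bit, and groups whose bit stays clear are candidates to migrate out of the first memory.

```cpp
// Minimal sketch of a sliding-window tracker over memory page groups.
#include <bitset>
#include <cstdint>
#include <iostream>
#include <vector>

constexpr size_t kWindowGroups = 8;              // page groups tracked in the window

class SlidingWindowTracker {
public:
    explicit SlidingWindowTracker(uint64_t firstGroup) : firstGroup_(firstGroup) {}

    void recordAccess(uint64_t group) {          // set the reference bit on access
        if (group >= firstGroup_ && group < firstGroup_ + kWindowGroups)
            referenceVector_.set(group - firstGroup_);
    }

    // Groups with a clear bit have fallen into disuse; select them for migration.
    std::vector<uint64_t> disusedGroups() const {
        std::vector<uint64_t> out;
        for (size_t i = 0; i < kWindowGroups; ++i)
            if (!referenceVector_.test(i)) out.push_back(firstGroup_ + i);
        return out;
    }

    void slide() {                               // advance the window, reset the bits
        ++firstGroup_;
        referenceVector_.reset();
    }

private:
    uint64_t firstGroup_;
    std::bitset<kWindowGroups> referenceVector_;
};

int main() {
    SlidingWindowTracker tracker(/*firstGroup=*/100);
    tracker.recordAccess(100);
    tracker.recordAccess(103);
    for (uint64_t g : tracker.disusedGroups())
        std::cout << "migrate a page from group " << g << " to the second memory\n";
}
```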
  • Publication number: 20170185526
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Application
    Filed: October 16, 2013
    Publication date: June 29, 2017
    Applicant: NVIDIA CORPORATION
    Inventors: Jerome F. DULUK, JR., Cameron BUSCHARDT, Sherry CHEUNG, James Leroy DEMING, Samuel H. DUNCAN, Lucien DUNNING, Robert GEORGE, Arvind GOPALAKRISHNAN, Mark HAIRGROVE, Chenghuan JIA, John MASHEY
  • Publication number: 20170161206
    Abstract: One embodiment of the present invention is a parallel processing unit (PPU) that includes one or more streaming multiprocessors (SMs) and implements a replay unit per SM. Upon detecting a page fault associated with a memory transaction issued by a particular SM, the corresponding replay unit causes the SM, but not any unaffected SMs, to cease issuing new memory transactions. The replay unit then stores the faulting memory transaction and any faulting in-flight memory transaction in a replay buffer. As page faults are resolved, the replay unit replays the memory transactions in the replay buffer—removing successful memory transactions from the replay buffer—until all of the stored memory transactions have successfully executed. Advantageously, the overall performance of the PPU is improved compared to conventional PPUs that, upon detecting a page fault, stop performing memory transactions across all SMs included in the PPU until the fault is resolved.
    Type: Application
    Filed: February 20, 2017
    Publication date: June 8, 2017
    Inventors: James Leroy DEMING, Jerome F. DULUK, JR., John MASHEY, Mark HAIRGROVE, Lucien DUNNING, Jonathon Stuart Ramsey EVANS, Samuel H. DUNCAN, Cameron BUSCHARDT, Brian FAHS
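The per-SM replay behavior described in the abstract above can be sketched as follows in C++, with hypothetical types (ReplayUnit, MemoryTransaction): on a fault, only the affected SM stops issuing, faulting transactions go into a replay buffer, and the buffer is replayed as faults are resolved.

```cpp
// Minimal sketch of a per-SM replay unit with a replay buffer.
#include <cstdint>
#include <deque>
#include <functional>
#include <iostream>
#include <utility>

struct MemoryTransaction { uint64_t virtualAddr; };

class ReplayUnit {
public:
    // 'translate' returns true when the address is mapped (i.e. no page fault).
    explicit ReplayUnit(std::function<bool(uint64_t)> translate)
        : translate_(std::move(translate)) {}

    bool smStalled = false;                      // only this SM stops issuing

    void issue(const MemoryTransaction& t) {
        if (smStalled || !translate_(t.virtualAddr)) {
            smStalled = true;                    // cease new issues from this SM
            replayBuffer_.push_back(t);          // buffer the faulting transaction
        }
    }

    // Called as page faults get resolved: replay buffered transactions.
    void replay() {
        for (size_t n = replayBuffer_.size(); n > 0; --n) {
            MemoryTransaction t = replayBuffer_.front();
            replayBuffer_.pop_front();
            if (!translate_(t.virtualAddr))
                replayBuffer_.push_back(t);      // still faulting, keep it buffered
        }
        if (replayBuffer_.empty()) smStalled = false;  // all succeeded, resume issuing
    }

private:
    std::function<bool(uint64_t)> translate_;
    std::deque<MemoryTransaction> replayBuffer_;
};

int main() {
    bool mapped = false;                         // stand-in for fault resolution
    ReplayUnit ru([&](uint64_t) { return mapped; });
    ru.issue({0x1000});                          // faults; stalls this SM only
    mapped = true;                               // "fault resolved"
    ru.replay();
    std::cout << "stalled=" << ru.smStalled << "\n";  // 0: the SM resumes issuing
}
```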
  • Publication number: 20170097896
    Abstract: One embodiment of the present invention includes a memory management unit (MMU) that is configured to efficiently process requests to access memory that includes protected regions. Upon receiving an initial request via a virtual address (VA), the MMU translates the VA to a physical address (PA) based on page table entries (PTEs) and gates the response based on page-specific secure state information. To thwart software-based attempts to illicitly access the protected regions, the secure state information is not stored in page tables. However, to expedite subsequent requests, after the MMU identifies the PTE and the corresponding secure state information, the MMU stores both the PTE and the secure state information as a cache line in a translation lookaside buffer.
    Type: Application
    Filed: October 2, 2015
    Publication date: April 6, 2017
    Inventors: Steven E. MOLNAR, James Leroy DEMING, Michael A. WOODMANSEE
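Here is a minimal C++ sketch of the caching arrangement in the abstract above, with hypothetical names (TlbLine, secureRegionInfo): the page table holds only translations, the secure state is kept elsewhere, and the TLB caches the PTE together with the secure state so repeated requests are expedited and gated.

```cpp
// Minimal sketch: secure state is cached alongside the PTE in a TLB line,
// but is never stored in the page table itself.
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>

struct PTE { uint64_t physPage; };               // page tables hold only translations

struct TlbLine { PTE pte; bool secure; };        // TLB caches PTE + secure state together

class MMU {
public:
    std::unordered_map<uint64_t, PTE> pageTable;
    std::unordered_map<uint64_t, bool> secureRegionInfo;  // kept outside the page table
    std::unordered_map<uint64_t, TlbLine> tlb;

    // Returns the physical page, or nullopt if the access to a protected page is gated.
    std::optional<uint64_t> access(uint64_t vpn, bool requestorIsSecure) {
        auto hit = tlb.find(vpn);
        if (hit == tlb.end()) {                  // miss: walk the page table, fetch secure state
            TlbLine line{pageTable.at(vpn), secureRegionInfo[vpn]};
            hit = tlb.emplace(vpn, line).first;  // subsequent requests are expedited
        }
        if (hit->second.secure && !requestorIsSecure)
            return std::nullopt;                 // gate the response
        return hit->second.pte.physPage;
    }
};

int main() {
    MMU mmu;
    mmu.pageTable[7] = {0xABC};
    mmu.secureRegionInfo[7] = true;              // page 7 lies in a protected region
    std::cout << (mmu.access(7, false) ? "allowed" : "gated") << "\n";   // gated
    std::cout << (mmu.access(7, true)  ? "allowed" : "gated") << "\n";   // allowed
}
```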
  • Patent number: 9588903
    Abstract: One embodiment of the present invention includes a microcontroller coupled to a memory management unit (MMU). The MMU is coupled to a page table included in a physical memory, and the microcontroller is configured to perform one or more virtual memory operations associated with the physical memory and the page table. In operation, the microcontroller receives a page fault generated by the MMU in response to an invalid memory access via a virtual memory address. To remedy such a page fault, the microcontroller performs actions to map the virtual memory address to an appropriate location in the physical memory. By contrast, in prior-art systems, a fault handler would typically remedy the page fault. Advantageously, because the microcontroller executes these tasks locally with respect to the MMU and the physical memory, latency associated with remedying page faults may be decreased. Consequently, overall system performance may be increased.
    Type: Grant
    Filed: August 27, 2013
    Date of Patent: March 7, 2017
    Assignee: NVIDIA Corporation
    Inventors: Cameron Buschardt, Jerome F. Duluk, Jr., John Mashey, Mark Hairgrove, James Leroy Deming, Brian Fahs
  • Patent number: 9575892
    Abstract: One embodiment of the present invention is a parallel processing unit (PPU) that includes one or more streaming multiprocessors (SMs) and implements a replay unit per SM. Upon detecting a page fault associated with a memory transaction issued by a particular SM, the corresponding replay unit causes the SM, but not any unaffected SMs, to cease issuing new memory transactions. The replay unit then stores the faulting memory transaction and any faulting in-flight memory transaction in a replay buffer. As page faults are resolved, the replay unit replays the memory transactions in the replay buffer—removing successful memory transactions from the replay buffer—until all of the stored memory transactions have successfully executed. Advantageously, the overall performance of the PPU is improved compared to conventional PPUs that, upon detecting a page fault, stop performing memory transactions across all SMs included in the PPU until the fault is resolved.
    Type: Grant
    Filed: December 17, 2013
    Date of Patent: February 21, 2017
    Assignee: NVIDIA Corporation
    Inventors: James Leroy Deming, Jerome F. Duluk, Jr., John Mashey, Mark Hairgrove, Lucien Dunning, Jonathon Stuart Ramsey Evans, Samuel H. Duncan, Cameron Buschardt, Brian Fahs
  • Patent number: 9569348
    Abstract: One embodiment of the present invention sets forth a technique for performing a method for compressing page table entries (PTEs) prior to storing the PTEs in a translation look-aside buffer (TLB). A page table entry (PTE) request is received for a PTE that is not stored in the TLB. The PTE as well as a plurality of PTEs that are adjacent to the PTE are retrieved from a memory. The PTE and the plurality of PTEs are compressed and then stored in the TLB.
    Type: Grant
    Filed: July 23, 2010
    Date of Patent: February 14, 2017
    Assignee: NVIDIA Corporation
    Inventors: James Leroy Deming, Mark Allen Mosley, William Craig McKnight
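The abstract above does not specify a compression format, so the delta-from-base scheme below is purely an illustrative assumption. The C++ sketch shows the general shape: a requested PTE and its adjacent PTEs are fetched, compressed into one line, and that compressed line is what the TLB stores.

```cpp
// Minimal sketch of compressing adjacent PTEs into a single TLB line.
#include <cstdint>
#include <iostream>
#include <vector>

struct PTE { uint64_t physPage; };

// Compressed TLB line: one base physical page plus small per-entry offsets.
struct CompressedTlbLine {
    uint64_t basePhysPage;
    std::vector<int16_t> offsets;                // offsets[i] = physPage[i] - base
};

CompressedTlbLine compress(const std::vector<PTE>& adjacentPtes) {
    CompressedTlbLine line{adjacentPtes.front().physPage, {}};
    for (const PTE& pte : adjacentPtes)
        line.offsets.push_back(static_cast<int16_t>(pte.physPage - line.basePhysPage));
    return line;
}

uint64_t decompress(const CompressedTlbLine& line, size_t index) {
    return line.basePhysPage + line.offsets[index];
}

int main() {
    // A requested PTE plus its neighbours, retrieved from memory on a TLB miss.
    std::vector<PTE> ptes = {{0x1000}, {0x1001}, {0x1002}, {0x1007}};
    CompressedTlbLine line = compress(ptes);     // this compressed line is stored in the TLB
    std::cout << std::hex << decompress(line, 3) << "\n";  // 0x1007
}
```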
  • Publication number: 20160357482
    Abstract: One embodiment of the present invention sets forth a computer-implemented method for migrating a memory page from a first memory to a second memory. The method includes determining a first page size supported by the first memory. The method also includes determining a second page size supported by the second memory. The method further includes determining a use history of the memory page based on an entry in a page state directory associated with the memory page. The method also includes migrating the memory page between the first memory and the second memory based on the first page size, the second page size, and the use history.
    Type: Application
    Filed: August 22, 2016
    Publication date: December 8, 2016
    Inventors: Jerome F. DULUK, JR., Cameron BUSCHARDT, James Leroy DEMING, Lucien DUNNING, Brian FAHS, Mark HAIRGROVE, Chenghuan JIA, John MASHEY, James M. VAN DYKE
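A minimal C++ sketch of the migration decision described in the abstract above; the thresholds and the PageStateDirectoryEntry fields are hypothetical stand-ins for the use history, chosen only to show how page sizes and history could feed one decision.

```cpp
// Minimal sketch: decide a migration from the two page sizes and the use history.
#include <cstdint>
#include <iostream>

struct PageStateDirectoryEntry {
    uint32_t recentAccessCount;                  // use history recorded for the page
    bool accessedByGpu;
};

struct MemoryDesc { uint64_t pageSize; };        // page size supported by each memory

// Returns true if the page should migrate from 'src' to 'dst'.
bool shouldMigrate(const MemoryDesc& src, const MemoryDesc& dst,
                   const PageStateDirectoryEntry& history) {
    if (!history.accessedByGpu || history.recentAccessCount < 4)
        return false;                            // not hot enough to justify the move
    // Moving to a memory with larger pages may require coalescing neighbours,
    // so require a stronger signal in that case (illustrative threshold).
    if (dst.pageSize > src.pageSize)
        return history.recentAccessCount >= 16;
    return true;
}

int main() {
    MemoryDesc sysmem{4 * 1024}, vidmem{128 * 1024};
    PageStateDirectoryEntry entry{20, true};
    std::cout << (shouldMigrate(sysmem, vidmem, entry) ? "migrate" : "stay") << "\n";
}
```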
  • Patent number: 9424201
    Abstract: One embodiment of the present invention sets forth a computer-implemented method for migrating a memory page from a first memory to a second memory. The method includes determining a first page size supported by the first memory. The method also includes determining a second page size supported by the second memory. The method further includes determining a use history of the memory page based on an entry in a page state directory associated with the memory page. The method also includes migrating the memory page between the first memory and the second memory based on the first page size, the second page size, and the use history.
    Type: Grant
    Filed: December 19, 2013
    Date of Patent: August 23, 2016
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, James Leroy Deming, Lucien Dunning, Brian Fahs, Mark Hairgrove, Chenghuan Jia, John Mashey, James M. Van Dyke
  • Patent number: 9355041
    Abstract: One embodiment of the present invention is a memory subsystem that includes a sliding window tracker that tracks memory accesses associated with a sliding window of memory page groups. When the sliding window tracker detects an access operation associated with a memory page group within the sliding window, the sliding window tracker sets a reference bit that is associated with the memory page group and is included in a reference vector that represents accesses to the memory page groups within the sliding window. Based on the values of the reference bits, the sliding window tracker causes a memory page in a memory page group that has fallen into disuse to be migrated from a first memory to a second memory. Because the sliding window tracker tunes the memory pages that are resident in the first memory to reflect memory access patterns, the overall performance of the memory subsystem is improved.
    Type: Grant
    Filed: December 12, 2013
    Date of Patent: May 31, 2016
    Assignee: NVIDIA Corporation
    Inventors: John Mashey, Cameron Buschardt, James Leroy Deming, Jerome F. Duluk, Jr., Brian Fahs
  • Patent number: 9245129
    Abstract: A system and method are provided for protecting data. In operation, a request to read data from memory is received. Additionally, it is determined whether the data is stored in a predetermined portion of the memory. If it is determined that the data is stored in the predetermined portion of the memory, the data and a protect signal are returned for use in protecting the data. In certain embodiments of the invention, data stored in the predetermined portion of the memory may be further processed and written back to the predetermined portion of the memory. In other embodiments of the invention, such processing may involve unprotected data stored outside the predetermined portion of the memory.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: January 26, 2016
    Assignee: NVIDIA Corporation
    Inventors: Jay Kishora Gupta, Jay S. Huang, Steven E. Molnar, Parthasarathy Sriram, James Leroy Deming
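A minimal C++ sketch of the protect-signal idea in the abstract above, using hypothetical names (ReadResult, Memory::read): reads from the predetermined region return the data together with a protect signal, and writes derived from protected data are only accepted back into that region.

```cpp
// Minimal sketch: reads return data plus a protect signal for a predetermined region.
#include <cstdint>
#include <iostream>
#include <vector>

struct ReadResult { uint8_t data; bool protectSignal; };

class Memory {
public:
    Memory(size_t size, uint64_t protectedBase, uint64_t protectedLimit)
        : bytes_(size, 0), protectedBase_(protectedBase), protectedLimit_(protectedLimit) {}

    ReadResult read(uint64_t addr) const {
        bool inProtectedRegion = addr >= protectedBase_ && addr < protectedLimit_;
        return {bytes_[addr], inProtectedRegion};  // the consumer uses the signal to protect the data
    }

    // Data derived from a protected read may only be written back into the protected region.
    bool write(uint64_t addr, uint8_t value, bool sourceWasProtected) {
        bool inProtectedRegion = addr >= protectedBase_ && addr < protectedLimit_;
        if (sourceWasProtected && !inProtectedRegion)
            return false;                          // refuse to leak protected data
        bytes_[addr] = value;
        return true;
    }

private:
    std::vector<uint8_t> bytes_;
    uint64_t protectedBase_, protectedLimit_;
};

int main() {
    Memory mem(1024, /*protectedBase=*/512, /*protectedLimit=*/1024);
    ReadResult r = mem.read(600);                  // read from the protected portion
    std::cout << "protect=" << r.protectSignal << "\n";
    std::cout << "write back ok=" << mem.write(700, r.data + 1, r.protectSignal) << "\n";
    std::cout << "write outside ok=" << mem.write(10, r.data, r.protectSignal) << "\n";
}
```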
  • Publication number: 20150199280
    Abstract: A system and method are provided for implementing multi-stage translation of virtual addresses. The method includes the steps of receiving, at a first memory management unit, a memory request including a virtual address in a first address space, translating the virtual address to generate a second virtual address in a second address space, and transmitting a modified memory request including the second virtual address to a second memory management unit. The second memory management unit is configured to translate the second virtual address to generate a physical address in a third address space. The physical address is associated with a location in a memory.
    Type: Application
    Filed: January 14, 2014
    Publication date: July 16, 2015
    Applicant: NVIDIA Corporation
    Inventors: Steven E. Molnar, Jay Kishora Gupta, James Leroy Deming, Samuel Hammond Duncan, Jeffrey Smith
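The two-stage translation in the abstract above can be sketched in a few lines of C++ (MMUStage is a hypothetical stand-in for each memory management unit): the first stage maps a virtual address into an intermediate address space, and the second stage maps that result to a physical address.

```cpp
// Minimal sketch of multi-stage virtual address translation.
#include <cstdint>
#include <iostream>
#include <unordered_map>

class MMUStage {
public:
    std::unordered_map<uint64_t, uint64_t> pageTable;  // page -> page in the next space
    uint64_t translate(uint64_t addr) const {
        uint64_t page = addr >> 12, offset = addr & 0xFFF;
        return (pageTable.at(page) << 12) | offset;
    }
};

int main() {
    MMUStage first, second;
    first.pageTable[0x10] = 0x200;    // first VA space -> second (intermediate) VA space
    second.pageTable[0x200] = 0x7F;   // second VA space -> physical address space
    uint64_t va = 0x10ABC;
    uint64_t intermediate = first.translate(va);      // modified request sent downstream
    uint64_t physical = second.translate(intermediate);
    std::cout << std::hex << va << " -> " << intermediate << " -> " << physical << "\n";
}
```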
  • Publication number: 20150097847
    Abstract: One embodiment of the present invention includes a memory management unit (MMU) that is configured to manage sparse mappings. The MMU processes requests to translate virtual addresses to physical addresses based on page table entries (PTEs) that indicate a sparse status. If the MMU determines that the PTE does not include a mapping from a virtual address to a physical address, then the MMU responds to the request based on the sparse status. If the sparse status is active, then the MMU determines the physical address based on whether the type of the request is a write operation and, subsequently, generates an acknowledgement of the request. By contrast, if the sparse status is not active, then the MMU generates a page fault. Advantageously, the disclosed embodiments enable the computer system to manage sparse mappings without incurring the performance degradation associated with both page faults and conventional software-based sparse mapping management.
    Type: Application
    Filed: October 4, 2013
    Publication date: April 9, 2015
    Applicant: NVIDIA CORPORATION
    Inventors: Jonathan DUNAISKY, Henry Packard MORETON, Jeffrey A. BOLZ, Yury Y. URALSKY, James Leroy DEMING, Rui M. BASTOS, Patrick R. BROWN, Amanpreet GREWAL, Christian AMSINCK, Poornachandra RAO, Jerome F. DULUK, JR., Andrew J. TAO
  • Publication number: 20150082001
    Abstract: One embodiment of the present invention includes techniques to support demand paging across a processing unit. Before a host unit transmits a command to an engine that does not tolerate page faults, the host unit ensures that the virtual memory addresses associated with the command are appropriately mapped to physical memory addresses. In particular, if the virtual memory addresses are not appropriately mapped, then the processing unit performs actions to map the virtual memory address to appropriate locations in physical memory. Further, the processing unit ensures that the access permissions required for successful execution of the command are established. Because the virtual memory address mappings associated with the command are valid when the engine receives the command, the engine does not encounter page faults upon executing the command. Consequently, in contrast to prior-art techniques, the engine supports demand paging regardless of whether the engine is involved in remedying page faults.
    Type: Application
    Filed: September 13, 2013
    Publication date: March 19, 2015
    Applicant: NVIDIA CORPORATION
    Inventors: Samuel H. DUNCAN, Jerome F. DULUK, JR., Jonathon Stuart Ramsay EVANS, James Leroy DEMING
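Finally, a minimal C++ sketch of the host-side preparation described in the abstract above, with hypothetical names (ProcessingUnit::prepare, Command): before a command is handed to an engine that cannot tolerate faults, every page it references is mapped and the required permissions are established, so the engine never encounters a page fault.

```cpp
// Minimal sketch: validate mappings and permissions before submitting to a
// fault-intolerant engine.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

struct Mapping { uint64_t physPage; bool writable; };

struct Command {
    std::vector<uint64_t> virtualPages;          // pages the command will touch
    bool needsWrite;
};

class ProcessingUnit {
public:
    std::unordered_map<uint64_t, Mapping> pageTable;

    // Map any missing page and establish required permissions before submission.
    void prepare(const Command& cmd) {
        for (uint64_t vpn : cmd.virtualPages) {
            auto it = pageTable.find(vpn);
            if (it == pageTable.end())
                it = pageTable.emplace(vpn, Mapping{nextPhysPage_++, false}).first;
            if (cmd.needsWrite)
                it->second.writable = true;      // upgrade permission ahead of time
        }
    }

    // The engine can now execute the command without ever seeing a page fault.
    void submitToEngine(const Command& cmd) {
        for (uint64_t vpn : cmd.virtualPages)
            std::cout << "engine touches vpn " << vpn << " -> ppn "
                      << pageTable.at(vpn).physPage << "\n";
    }

private:
    uint64_t nextPhysPage_ = 0x100;
};

int main() {
    ProcessingUnit host;
    Command cmd{{0x20, 0x21}, /*needsWrite=*/true};
    host.prepare(cmd);                           // host ensures mappings and permissions first
    host.submitToEngine(cmd);                    // the engine runs without faulting
}
```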
  • Publication number: 20140281256
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Application
    Filed: October 16, 2013
    Publication date: September 18, 2014
    Applicant: NVIDIA CORPORATION
    Inventors: Jerome F. DULUK, JR., Cameron BUSCHARDT, Sherry CHEUNG, James Leroy DEMING, Samuel H. DUNCAN, Lucien DUNNING, Robert GEORGE, Arvind GOPALAKRISHNAN, Mark HAIRGROVE, Chenghuan JIA, John MASHEY