Patents by Inventor Samuel Hammond Duncan

Samuel Hammond Duncan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11327900
    Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access confidential data belonging to a different client sharing the multiprocessor cluster, and an inadvertent programming error in the code for one client process may corrupt data belonging to another client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and to service only those access requests that present proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while sharing the hardware resources of the multiprocessor cluster.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: May 10, 2022
    Assignee: NVIDIA Corporation
    Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
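
This entry and the three related entries that follow share one abstract: each processing node virtualizes its local memory and services a request only when it carries a valid access credential for the client on whose behalf it claims to act. The C++ sketch below is a minimal, purely illustrative model of that credential gate; the names (ProcessingNode, AccessRequest, register_client) and the token-comparison check are assumptions chosen for clarity, not the hardware mechanism claimed in the patent.

```cpp
// Minimal, purely illustrative model (not the patented hardware): a processing
// node keeps a per-client credential table and services a remote memory access
// only when the request carries the matching credential.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

struct AccessRequest {
    uint32_t client_id;    // which tenant claims to issue the request
    uint64_t credential;   // token compared against the node's own table
    uint64_t virtual_addr; // address within that client's virtualized view
};

class ProcessingNode {
public:
    void register_client(uint32_t client_id, uint64_t credential, std::size_t words) {
        credentials_[client_id] = credential;
        memory_[client_id] = std::vector<uint64_t>(words, 0);
    }

    // Service the request only if the credential matches; otherwise reject it,
    // so one client's process can neither read nor corrupt another client's data.
    bool load(const AccessRequest& req, uint64_t& out_value) {
        auto it = credentials_.find(req.client_id);
        if (it == credentials_.end() || it->second != req.credential)
            return false;                                  // no valid credential: denied
        const auto& mem = memory_[req.client_id];
        if (req.virtual_addr >= mem.size()) return false;  // outside the client's region
        out_value = mem[req.virtual_addr];
        return true;
    }

private:
    std::unordered_map<uint32_t, uint64_t> credentials_;          // client -> token
    std::unordered_map<uint32_t, std::vector<uint64_t>> memory_;  // client -> local memory
};

int main() {
    ProcessingNode node;
    node.register_client(/*client_id=*/1, /*credential=*/0xA1B2, /*words=*/16);

    uint64_t value = 0;
    bool ok  = node.load({1, 0xA1B2, 3}, value);  // correct credential: allowed
    bool bad = node.load({1, 0xDEAD, 3}, value);  // wrong credential: denied
    std::cout << ok << " " << bad << "\n";        // prints "1 0"
}
```
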
  • Publication number: 20200356492
    Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access confidential data belonging to a different client sharing the multiprocessor cluster, and an inadvertent programming error in the code for one client process may corrupt data belonging to another client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and to service only those access requests that present proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while sharing the hardware resources of the multiprocessor cluster.
    Type: Application
    Filed: July 23, 2020
    Publication date: November 12, 2020
    Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
  • Patent number: 10769076
    Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access confidential data belonging to a different client sharing the multiprocessor cluster, and an inadvertent programming error in the code for one client process may corrupt data belonging to another client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and to service only those access requests that present proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while sharing the hardware resources of the multiprocessor cluster.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: September 8, 2020
    Assignee: NVIDIA Corporation
    Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
  • Publication number: 20200159669
    Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access confidential data belonging to a different client sharing the multiprocessor cluster, and an inadvertent programming error in the code for one client process may corrupt data belonging to another client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and to service only those access requests that present proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while sharing the hardware resources of the multiprocessor cluster.
    Type: Application
    Filed: November 21, 2018
    Publication date: May 21, 2020
    Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
  • Patent number: 10114760
    Abstract: A system and method are provided for implementing multi-stage translation of virtual addresses. The method includes the steps of receiving, at a first memory management unit, a memory request including a virtual address in a first address space, translating the virtual address to generate a second virtual address in a second address space, and transmitting a modified memory request including the second virtual address to a second memory management unit. The second memory management unit is configured to translate the second virtual address to generate a physical address in a third address space. The physical address is associated with a location in a memory.
    Type: Grant
    Filed: January 14, 2014
    Date of Patent: October 30, 2018
    Assignee: NVIDIA Corporation
    Inventors: Steven E. Molnar, Jay Kishora Gupta, James Leroy Deming, Samuel Hammond Duncan, Jeffrey Smith
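
The two-stage translation described in this abstract (and in the related publication 20150199280 below) composes naturally: the first memory management unit maps a virtual address in one address space to a second virtual address, and the second unit maps that to a physical address. The sketch below is a minimal software model of that composition under assumed names (Mmu, translate, PAGE_SIZE) and a flat page-table map; it is not the patented hardware design.

```cpp
// Illustrative two-stage address translation: MMU 1 maps VA space 1 -> VA space 2,
// MMU 2 maps VA space 2 -> physical addresses. Names and page size are assumptions.
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>

constexpr uint64_t PAGE_SIZE = 4096;

class Mmu {
public:
    void map(uint64_t in_page, uint64_t out_page) { table_[in_page] = out_page; }

    // Translate the page number, preserving the offset within the page.
    std::optional<uint64_t> translate(uint64_t addr) const {
        auto it = table_.find(addr / PAGE_SIZE);
        if (it == table_.end()) return std::nullopt;      // unmapped: page fault
        return it->second * PAGE_SIZE + addr % PAGE_SIZE;
    }

private:
    std::unordered_map<uint64_t, uint64_t> table_;        // input page -> output page
};

int main() {
    Mmu first, second;
    first.map(/*in_page=*/0x10, /*out_page=*/0x80);   // address space 1 -> address space 2
    second.map(/*in_page=*/0x80, /*out_page=*/0x3);   // address space 2 -> physical

    uint64_t va1 = 0x10 * PAGE_SIZE + 0x2A;
    if (auto va2 = first.translate(va1))              // stage 1 at the first MMU
        if (auto pa = second.translate(*va2))         // stage 2 at the second MMU
            std::cout << std::hex << "PA = 0x" << *pa << "\n";  // prints "PA = 0x302a"
}
```
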
  • Patent number: 9858221
    Abstract: A method for remotely synchronizing data communicated in an electronic computing system. A data set of discrete data packets (data) and a following associated semaphore packet (semaphore) is written in order from a source electronic device (source) to a bridge interface device (bridge). The data set is then written from the bridge to discrete target memory addresses (targets) of a data-consuming electronic device (consumer) under relaxed ordering, so the data and the semaphore may reach the targets in a different order than they were written to the bridge. The consumer monitors the relaxed writing of the semaphore to one of the targets and issues a synchronization command to the bridge upon detecting that the semaphore has been written to that target. The bridge sends a synchronization confirmation reply after all of the data has been written to the targets.
    Type: Grant
    Filed: February 15, 2016
    Date of Patent: January 2, 2018
    Assignee: NVIDIA Corporation
    Inventors: Mike Osborn, Mark Hummel, Jonathan Owen, Samuel Hammond Duncan
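
The synchronization scheme in this abstract (shared with publication 20170235690 below) hinges on the semaphore possibly arriving at the consumer before all of the data, which is why the consumer must still issue a sync command to the bridge before trusting the data. The following single-threaded C++ model is only a sketch of that handshake; the Bridge class, the drain_some/sync methods, and the shuffled delivery are assumptions chosen to make the relaxed ordering visible, not the patented mechanism.

```cpp
// Illustrative model: source writes data plus a trailing semaphore to a bridge in
// order; the bridge drains them to the consumer's memory in a relaxed (shuffled)
// order; the consumer trusts the data only after a sync handshake with the bridge.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <map>
#include <random>
#include <vector>

constexpr uint64_t SEM_ADDR = 100;   // target address the consumer polls

struct Write { uint64_t addr; uint64_t value; };

class Bridge {
public:
    void ordered_write(const Write& w) { pending_.push_back(w); }

    // Relaxed delivery: forward a few pending writes to the consumer's memory
    // in an arbitrary order (the semaphore may land before some of the data).
    void drain_some(std::map<uint64_t, uint64_t>& consumer_mem, int n, std::mt19937& rng) {
        std::shuffle(pending_.begin(), pending_.end(), rng);
        for (int i = 0; i < n && !pending_.empty(); ++i) {
            consumer_mem[pending_.back().addr] = pending_.back().value;
            pending_.pop_back();
        }
    }

    // Sync command: flush everything still buffered, then confirm.
    bool sync(std::map<uint64_t, uint64_t>& consumer_mem) {
        for (const Write& w : pending_) consumer_mem[w.addr] = w.value;
        pending_.clear();
        return true;                 // synchronization confirmation reply
    }

private:
    std::vector<Write> pending_;
};

int main() {
    std::mt19937 rng(42);
    Bridge bridge;
    std::map<uint64_t, uint64_t> consumer_mem;

    // Source: ordered writes of four data packets, then the semaphore.
    for (uint64_t i = 0; i < 4; ++i) bridge.ordered_write({i, 0xD0 + i});
    bridge.ordered_write({SEM_ADDR, 1});

    // Consumer: poll the semaphore target while the bridge drains lazily.
    while (consumer_mem[SEM_ADDR] != 1) bridge.drain_some(consumer_mem, 2, rng);

    // Semaphore seen, but data may still be buffered: issue the sync command.
    if (bridge.sync(consumer_mem))
        std::cout << "all " << consumer_mem.size() - 1 << " data packets visible\n";
}
```
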
  • Publication number: 20170235690
    Abstract: A method for remotely synchronizing data communicated in an electronic computing system. A data set of discrete data packets (data) and a following associated semaphore packet (semaphore) is written in order from a source electronic device (source) to a bridge interface device (bridge). The data set is then written from the bridge to discrete target memory addresses (targets) of a data-consuming electronic device (consumer) under relaxed ordering, so the data and the semaphore may reach the targets in a different order than they were written to the bridge. The consumer monitors the relaxed writing of the semaphore to one of the targets and issues a synchronization command to the bridge upon detecting that the semaphore has been written to that target. The bridge sends a synchronization confirmation reply after all of the data has been written to the targets.
    Type: Application
    Filed: February 15, 2016
    Publication date: August 17, 2017
    Inventors: Mike Osborn, Mark Hummel, Jonathan Owen, Samuel Hammond Duncan
  • Publication number: 20150199280
    Abstract: A system and method are provided for implementing multi-stage translation of virtual addresses. The method includes the steps of receiving, at a first memory management unit, a memory request including a virtual address in a first address space, translating the virtual address to generate a second virtual address in a second address space, and transmitting a modified memory request including the second virtual address to a second memory management unit. The second memory management unit is configured to translate the second virtual address to generate a physical address in a third address space. The physical address is associated with a location in a memory.
    Type: Application
    Filed: January 14, 2014
    Publication date: July 16, 2015
    Applicant: NVIDIA Corporation
    Inventors: Steven E. Molnar, Jay Kishora Gupta, James Leroy Deming, Samuel Hammond Duncan, Jeffrey Smith
  • Patent number: 8656117
    Abstract: An input/output unit for a computer system that is interfaced with a memory unit having a plurality of partitions manages completion of read requests in the order in which they were made. A read request buffer tracks the order in which the read requests were made so that the requests can be completed, and the read data returned to a requesting client, in that same order.
    Type: Grant
    Filed: October 30, 2008
    Date of Patent: February 18, 2014
    Assignee: NVIDIA Corporation
    Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
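
The in-order completion described here is essentially a reorder buffer keyed by issue order: completions from the memory partitions may arrive in any order, but nothing is returned to the client until everything issued before it has completed. The sketch below models that behavior under assumed names (ReadTracker, issue, complete, pop_ready); it is illustrative only, not the patented buffer design.

```cpp
// Illustrative in-order read completion: requests are tagged in issue order,
// completions may arrive out of order, and data is released to the client only
// when everything ahead of it in issue order is done.
#include <cstdint>
#include <deque>
#include <iostream>
#include <optional>
#include <string>
#include <unordered_map>

class ReadTracker {
public:
    // Record a new read request; returns its tag (issue order).
    uint64_t issue() { order_.push_back(next_tag_); return next_tag_++; }

    // A memory partition returned data for some tag, possibly out of order.
    void complete(uint64_t tag, std::string data) { done_[tag] = std::move(data); }

    // Return data to the client strictly in the order requests were made.
    std::optional<std::string> pop_ready() {
        if (order_.empty()) return std::nullopt;
        auto it = done_.find(order_.front());
        if (it == done_.end()) return std::nullopt;   // head-of-line request still outstanding
        std::string data = std::move(it->second);
        done_.erase(it);
        order_.pop_front();
        return data;
    }

private:
    uint64_t next_tag_ = 0;
    std::deque<uint64_t> order_;                        // issue order of tags
    std::unordered_map<uint64_t, std::string> done_;    // completed, not yet returned
};

int main() {
    ReadTracker tracker;
    uint64_t a = tracker.issue(), b = tracker.issue();

    tracker.complete(b, "second");                          // arrives first from a fast partition
    std::cout << tracker.pop_ready().has_value() << "\n";   // 0: request a still blocks it

    tracker.complete(a, "first");
    std::cout << *tracker.pop_ready() << " ";    // "first"  (request a, issued first)
    std::cout << *tracker.pop_ready() << "\n";   // "second" (request b)
}
```
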
  • Patent number: 8396993
    Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
    Type: Grant
    Filed: February 15, 2012
    Date of Patent: March 12, 2013
    Assignee: NVIDIA Corporation
    Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
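
This data-packer abstract (repeated in several related patents and publications below) describes accumulating narrow client writes into a full, aligned line before presenting it to the wide memory interface. The sketch below is a simplified software analogue under assumed names (DataPacker, LINE_BYTES) and an assumed 128-byte interface width; the actual hardware packer is not specified here.

```cpp
// Illustrative data packer: narrow writes from low-bandwidth clients accumulate
// in an aligned line-sized buffer, and a full line is emitted to the (assumed)
// wide memory interface with no further alignment needed.
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

constexpr std::size_t LINE_BYTES = 128;   // assumed width of the memory interface

class DataPacker {
public:
    explicit DataPacker(std::vector<uint8_t>& memory) : memory_(memory) {}

    // Accept a small client write destined for `addr`; pack it into the line
    // buffer at its aligned offset and flush once a whole line has accumulated.
    void write(uint64_t addr, const std::vector<uint8_t>& data) {
        uint64_t line = addr / LINE_BYTES;
        if (have_line_ && line != line_) flush();          // new line: emit the current one
        if (!have_line_) { line_ = line; have_line_ = true; filled_ = 0; }

        std::size_t offset = addr % LINE_BYTES;            // alignment handled at pack time
        std::memcpy(buffer_ + offset, data.data(), data.size());
        filled_ += data.size();
        if (filled_ >= LINE_BYTES) flush();                // full line: one wide write
    }

    void flush() {
        if (!have_line_) return;
        std::size_t end = static_cast<std::size_t>(line_ + 1) * LINE_BYTES;
        if (memory_.size() < end) memory_.resize(end);
        std::memcpy(memory_.data() + line_ * LINE_BYTES, buffer_, LINE_BYTES);
        std::memset(buffer_, 0, LINE_BYTES);               // clear for the next line
        have_line_ = false;
    }

private:
    std::vector<uint8_t>& memory_;       // stands in for the memory unit
    uint8_t buffer_[LINE_BYTES] = {};    // one aligned line being packed
    uint64_t line_ = 0;
    std::size_t filled_ = 0;
    bool have_line_ = false;
};

int main() {
    std::vector<uint8_t> memory;
    DataPacker packer(memory);

    // Four 32-byte client writes accumulate into one aligned 128-byte line.
    for (uint64_t offset = 0; offset < LINE_BYTES; offset += 32)
        packer.write(/*addr=*/256 + offset, std::vector<uint8_t>(32, uint8_t(offset)));

    std::cout << "memory bytes written: " << memory.size() << "\n";  // 384 (the line ends there)
}
```
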
  • Patent number: 8380895
    Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
    Type: Grant
    Filed: February 15, 2012
    Date of Patent: February 19, 2013
    Assignee: NVIDIA Corporation
    Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
  • Patent number: 8380896
    Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
    Type: Grant
    Filed: February 15, 2012
    Date of Patent: February 19, 2013
    Assignee: NVIDIA Corporation
    Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
  • Patent number: 8279231
    Abstract: Read completion buffer space is allocated in accordance with a preset limit. When a read request is received from a client, the sum of a current allocation of the read completion buffer space and a new allocation of the read completion buffer space required by the read request is compared with the preset limit. If the preset limit is not exceeded, read completion buffer space is allocated to the read request. If the preset limit is exceeded, the read request is suspended until sufficient data is read out from the read completion buffer.
    Type: Grant
    Filed: October 29, 2008
    Date of Patent: October 2, 2012
    Assignee: NVIDIA Corporation
    Inventors: Samuel Hammond Duncan, John H. Edmondson, Raymond Hoi Man Wong, Lukito Muliadi
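
The allocation policy in this abstract reduces to a counter compared against a preset limit: admit a read request if its buffer needs fit alongside what is already allocated, otherwise hold it until data is read out of the completion buffer. A minimal sketch under assumed names (CompletionBufferAllocator, try_allocate, release), not the patented allocator:

```cpp
// Illustrative completion-buffer admission control against a preset limit.
#include <cstddef>
#include <iostream>

class CompletionBufferAllocator {
public:
    explicit CompletionBufferAllocator(std::size_t limit) : limit_(limit) {}

    // Admit the read request only if current + new allocation stays within the limit.
    bool try_allocate(std::size_t bytes) {
        if (allocated_ + bytes > limit_) return false;   // request is suspended for now
        allocated_ += bytes;
        return true;
    }

    // Called as the client reads data out of the completion buffer.
    void release(std::size_t bytes) { allocated_ -= bytes; }

private:
    std::size_t limit_;
    std::size_t allocated_ = 0;
};

int main() {
    CompletionBufferAllocator alloc(/*limit=*/1024);

    std::cout << alloc.try_allocate(768) << "\n";   // 1: fits under the limit
    std::cout << alloc.try_allocate(512) << "\n";   // 0: would exceed 1024, request waits
    alloc.release(768);                             // client drains the buffer
    std::cout << alloc.try_allocate(512) << "\n";   // 1: space is available again
}
```
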
  • Publication number: 20120147024
    Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
    Type: Application
    Filed: February 15, 2012
    Publication date: June 14, 2012
    Applicant: NVIDIA Corporation
    Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
  • Publication number: 20120147019
    Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
    Type: Application
    Filed: February 15, 2012
    Publication date: June 14, 2012
    Applicant: NVIDIA Corporation
    Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
  • Publication number: 20120139928
    Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
    Type: Application
    Filed: February 15, 2012
    Publication date: June 7, 2012
    Applicant: NVIDIA Corporation
    Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
  • Patent number: 8135885
    Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
    Type: Grant
    Filed: October 30, 2008
    Date of Patent: March 13, 2012
    Assignee: NVIDIA Corporation
    Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
  • Patent number: 7685370
    Abstract: A data processing system can establish or maintain data coherency by issuing a data flush operation. An agent can initialize a first flush operation by writing to a flush register. The agent can determine that the flush operation is complete by reading a status indicator from a status register. Additional agents can independently issue flush operations during the pendency of the first flush operation. A second flush instruction and any additional flush instructions that issue during the pendency of the first flush operation set a flush pending indicator in a status register. Once the first flush operation completes, the host performs all pending flush operations in a single second flush operation. The status indicator does not indicate a completed flush operation for the first flush operation until all flush operations are complete. Multiple co-pending flush operations are collapsed into at most two flush operations.
    Type: Grant
    Filed: December 16, 2005
    Date of Patent: March 23, 2010
    Assignee: NVIDIA Corporation
    Inventors: Samuel Hammond Duncan, Lincoln G. Garlick
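
The collapsing behavior in this abstract can be captured with two bits of state: a flush in flight and a pending indicator that later requests set. When the in-flight flush finishes, one combined flush services everything pending, so any number of overlapping requests costs at most two flushes, and status does not read as complete until that combined flush is done. The sketch below models that state machine with assumed names (FlushController, request_flush, flush_done); it is not the register-level interface described in the patent.

```cpp
// Illustrative flush collapsing: the first request starts a flush, requests that
// arrive while it is in flight only set a pending flag, and a single follow-up
// flush services all of them.
#include <iostream>

class FlushController {
public:
    // An agent writes the flush register.
    void request_flush() {
        if (in_flight_) { pending_ = true; return; }  // collapse into the next flush
        start_flush();
    }

    // The hardware signals that the current flush finished.
    void flush_done() {
        ++flushes_performed_;
        in_flight_ = false;
        if (pending_) {                // one combined flush for everything queued meanwhile
            pending_ = false;
            start_flush();
        } else {
            complete_ = true;          // status register: flush operation complete
        }
    }

    bool status_complete() const { return complete_; }
    int flushes_performed() const { return flushes_performed_; }

private:
    void start_flush() { in_flight_ = true; complete_ = false; }

    bool in_flight_ = false;
    bool pending_ = false;
    bool complete_ = true;
    int flushes_performed_ = 0;
};

int main() {
    FlushController fc;
    fc.request_flush();              // agent A starts the first flush
    fc.request_flush();              // agents B and C arrive while it is pending:
    fc.request_flush();              // both merely set the pending indicator
    fc.flush_done();                 // first flush completes -> combined flush starts
    std::cout << fc.status_complete() << "\n";   // 0: not complete until all are done
    fc.flush_done();                 // combined flush completes
    std::cout << fc.status_complete() << " flushes=" << fc.flushes_performed() << "\n";  // 1 flushes=2
}
```
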
  • Patent number: 7685371
    Abstract: A data processing system can establish or maintain data coherency by issuing a data flush operation. The data processing system can be configured as a host executing one or more independent processes using one or more lower level devices. The lower level devices can be viewed as peer devices. Any of the host or the plurality of peer devices can be configured to initiate the flush operation. A device can determine whether the initiator of a flush operation is the host or a peer device. The device can perform a flush limited to local memory, or a subset of all available memory, if a peer device initiates the flush operation.
    Type: Grant
    Filed: April 19, 2006
    Date of Patent: March 23, 2010
    Assignee: NVIDIA Corporation
    Inventors: Samuel Hammond Duncan, Robert A. Alfieri, John H. Edmondson, David William Nuechterlein, Michael A. Woodmansee
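
The key decision in this abstract is scope: a flush initiated by the host covers all available memory, while a flush initiated by a peer device can be limited to local memory, a subset of all available memory. A deliberately small sketch of that branch, with an assumed Initiator enum and Device struct standing in for the real devices:

```cpp
// Illustrative scope-limited flush: the device checks who initiated the flush
// and, for a peer device, flushes only its local memory rather than everything.
#include <iostream>
#include <string>

enum class Initiator { Host, Peer };

struct Device {
    std::string name;

    void flush(Initiator who) const {
        if (who == Initiator::Host) {
            std::cout << name << ": flushing all available memory\n";
        } else {
            // A peer-initiated flush is limited to local memory, avoiding the
            // cost of a system-wide flush on every peer-to-peer transfer.
            std::cout << name << ": flushing local memory only\n";
        }
    }
};

int main() {
    Device gpu{"device0"};
    gpu.flush(Initiator::Host);   // host-initiated: full flush
    gpu.flush(Initiator::Peer);   // peer-initiated: local subset only
}
```
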
  • Patent number: 7469309
    Abstract: Methods and apparatus for peer-to-peer data transfers in a computing environment provide configurable control over the number of outstanding read requests by one peer device to another. A requesting peer device includes a control register that stores a high-water mark value associated with requests to a target peer device. Each time a read request to the target peer device is generated, the number of such requests already outstanding is compared to the high-water mark. The request is blocked if the number of outstanding requests exceeds the high-water mark and remains blocked until such time as the number of outstanding requests no longer exceeds the high-water mark. Different high-water marks can be associated with different combinations of requesting and target devices.
    Type: Grant
    Filed: December 12, 2005
    Date of Patent: December 23, 2008
    Assignee: NVIDIA Corporation
    Inventors: Samuel Hammond Duncan, Wei-Je Huang, Radha Kanekal
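
The throttle in this abstract is a per-target counter checked against a programmable high-water mark: a new peer-to-peer read is blocked once the number already outstanding exceeds the mark, and completions bring the count back down so blocked paths can issue again. The sketch below models that policy with assumed names (PeerReadThrottle, can_issue, on_completion) and a software map standing in for the per-target control registers.

```cpp
// Illustrative peer-to-peer read throttling against a per-target high-water mark.
#include <cstdint>
#include <iostream>
#include <map>

class PeerReadThrottle {
public:
    // Program the high-water mark (control register) for reads to `target`.
    void set_high_water_mark(uint32_t target, uint32_t mark) { mark_[target] = mark; }

    // A new read to `target` is blocked only if the number of requests already
    // outstanding exceeds the high-water mark.
    bool can_issue(uint32_t target) const {
        auto it = mark_.find(target);
        if (it == mark_.end()) return true;               // no mark configured: never block
        auto out = outstanding_.find(target);
        uint32_t count = (out == outstanding_.end()) ? 0 : out->second;
        return count <= it->second;
    }

    void on_issue(uint32_t target) { ++outstanding_[target]; }
    void on_completion(uint32_t target) { --outstanding_[target]; }

private:
    std::map<uint32_t, uint32_t> mark_;         // per-target high-water mark
    std::map<uint32_t, uint32_t> outstanding_;  // per-target outstanding read count
};

int main() {
    PeerReadThrottle throttle;
    throttle.set_high_water_mark(/*target=*/7, /*mark=*/1);

    for (int i = 0; i < 3; ++i) {
        if (throttle.can_issue(7)) { throttle.on_issue(7); std::cout << "issued\n"; }
        else                       { std::cout << "blocked\n"; }  // third read exceeds the mark
    }
    throttle.on_completion(7);                       // one read returns
    std::cout << throttle.can_issue(7) << "\n";      // 1: no longer exceeds the mark
}
```
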