Patents by Inventor Samuel Hammond Duncan
Samuel Hammond Duncan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11327900
Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access what might be confidential data that belongs to a different client sharing the multiprocessor cluster. Furthermore, an inadvertent programming error in the code for one client process may accidentally corrupt data that belongs to the different client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and only process access requests possessing proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while advantageously sharing the hardware resources of the multiprocessor cluster.
Type: Grant
Filed: July 23, 2020
Date of Patent: May 10, 2022
Assignee: NVIDIA Corporation
Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
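The abstract describes per-node validation of access credentials before a memory request is serviced. The sketch below is a minimal C illustration of that idea; the region table, credential fields, and values are illustrative assumptions, not the patented implementation.

```c
/* Minimal sketch: service a memory access only when the request's credential
 * matches the client that owns the targeted local region. All names and
 * structures are illustrative assumptions. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_REGIONS 4

typedef struct {
    uint64_t base;        /* start of the local memory region */
    uint64_t size;        /* length of the region in bytes */
    uint32_t client_key;  /* credential of the client that owns the region */
} region_t;

typedef struct {
    uint64_t addr;        /* target address within the node's local memory */
    uint32_t client_key;  /* credential presented by the requesting process */
} access_request_t;

static region_t regions[NUM_REGIONS] = {
    { 0x0000, 0x1000, 0xA1 },   /* client A */
    { 0x1000, 0x1000, 0xB2 },   /* client B */
    { 0x2000, 0x1000, 0xA1 },
    { 0x3000, 0x1000, 0xB2 },
};

/* Returns true only if the request's credential matches the owning client. */
static bool authorize(const access_request_t *req)
{
    for (int i = 0; i < NUM_REGIONS; i++) {
        const region_t *r = &regions[i];
        if (req->addr >= r->base && req->addr < r->base + r->size)
            return req->client_key == r->client_key;
    }
    return false;  /* address not mapped: reject */
}

int main(void)
{
    access_request_t ok  = { 0x0010, 0xA1 };  /* client A touching its own region */
    access_request_t bad = { 0x1010, 0xA1 };  /* client A touching client B's region */
    printf("ok request:  %s\n", authorize(&ok)  ? "granted" : "denied");
    printf("bad request: %s\n", authorize(&bad) ? "granted" : "denied");
    return 0;
}
```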
-
Publication number: 20200356492
Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access what might be confidential data that belongs to a different client sharing the multiprocessor cluster. Furthermore, an inadvertent programming error in the code for one client process may accidentally corrupt data that belongs to the different client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and only process access requests possessing proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while advantageously sharing the hardware resources of the multiprocessor cluster.
Type: Application
Filed: July 23, 2020
Publication date: November 12, 2020
Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
-
Patent number: 10769076
Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access what might be confidential data that belongs to a different client sharing the multiprocessor cluster. Furthermore, an inadvertent programming error in the code for one client process may accidentally corrupt data that belongs to the different client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and only process access requests possessing proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while advantageously sharing the hardware resources of the multiprocessor cluster.
Type: Grant
Filed: November 21, 2018
Date of Patent: September 8, 2020
Assignee: NVIDIA Corporation
Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
-
Publication number: 20200159669
Abstract: Multiprocessor clusters in a virtualized environment conventionally fail to provide memory access security, which is frequently a requirement for efficient utilization in multi-client settings. Without adequate access security, a malicious process may access what might be confidential data that belongs to a different client sharing the multiprocessor cluster. Furthermore, an inadvertent programming error in the code for one client process may accidentally corrupt data that belongs to the different client. Neither scenario is acceptable. Embodiments of the present disclosure provide access security by enabling each processing node within a multiprocessor cluster to virtualize and manage local memory access and only process access requests possessing proper access credentials. In this way, different applications executing on a multiprocessor cluster may be isolated from each other while advantageously sharing the hardware resources of the multiprocessor cluster.
Type: Application
Filed: November 21, 2018
Publication date: May 21, 2020
Inventors: Samuel Hammond Duncan, Sanjeev Jain, Mark Douglas Hummel, Vyas Venkataraman, Olivier Giroux, Larry Robert Dennison, Alexander Toichi Ishii, Hemayet Hossain, Nir Haim Arad
-
Patent number: 10114760
Abstract: A system and method are provided for implementing multi-stage translation of virtual addresses. The method includes the steps of receiving, at a first memory management unit, a memory request including a virtual address in a first address space, translating the virtual address to generate a second virtual address in a second address space, and transmitting a modified memory request including the second virtual address to a second memory management unit. The second memory management unit is configured to translate the second virtual address to generate a physical address in a third address space. The physical address is associated with a location in a memory.
Type: Grant
Filed: January 14, 2014
Date of Patent: October 30, 2018
Assignee: NVIDIA Corporation
Inventors: Steven E. Molnar, Jay Kishora Gupta, James Leroy Deming, Samuel Hammond Duncan, Jeffrey Smith
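The abstract describes chaining two translation stages: virtual address to an intermediate virtual address, then intermediate to physical. The C sketch below illustrates that flow with flat page tables; the page size, table layout, and mappings are assumptions for demonstration only.

```c
/* Minimal sketch: two chained translation stages. The first maps a virtual
 * address into an intermediate virtual address space, the second maps that
 * result to a physical address. Layout and sizes are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  16

/* stage1[i] = intermediate page number backing virtual page i */
static uint32_t stage1[NUM_PAGES] = { 3, 7, 1, 0, 5, 2, 4, 6,
                                      8, 9, 10, 11, 12, 13, 14, 15 };
/* stage2[j] = physical page number backing intermediate page j */
static uint32_t stage2[NUM_PAGES] = { 9, 4, 11, 2, 6, 0, 13, 1,
                                      3, 5, 7, 8, 10, 12, 14, 15 };

/* First MMU: virtual address -> second (intermediate) virtual address. */
static uint64_t translate_stage1(uint64_t va)
{
    uint64_t vpn = (va >> PAGE_SHIFT) % NUM_PAGES;
    return ((uint64_t)stage1[vpn] << PAGE_SHIFT) | (va & (PAGE_SIZE - 1));
}

/* Second MMU: intermediate virtual address -> physical address. */
static uint64_t translate_stage2(uint64_t iva)
{
    uint64_t ipn = (iva >> PAGE_SHIFT) % NUM_PAGES;
    return ((uint64_t)stage2[ipn] << PAGE_SHIFT) | (iva & (PAGE_SIZE - 1));
}

int main(void)
{
    uint64_t va  = 0x00002abc;               /* virtual page 2, offset 0xabc */
    uint64_t iva = translate_stage1(va);     /* -> intermediate page 1 */
    uint64_t pa  = translate_stage2(iva);    /* -> physical page 4 */
    printf("va=0x%llx -> iva=0x%llx -> pa=0x%llx\n",
           (unsigned long long)va, (unsigned long long)iva,
           (unsigned long long)pa);
    return 0;
}
```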
-
Patent number: 9858221
Abstract: Remotely synchronizing data communicated in an electronic computing system. Ordered writing of a data set of discrete data packets (data) and a following associated semaphore packet (semaphore) from a source electronic device (source) to a bridge interface device (bridge). Relaxed writing of the data set from the bridge to discrete target memory addresses (targets) of a data-consuming electronic device (consumer), wherein the order of the data and the semaphore written to the targets is different than the order of the data and semaphore written with the ordered writing. Monitoring, by the consumer, the relaxed writing of the semaphore to one of the targets. Issuing a synchronization command to the bridge upon detection of the semaphore having been written to the one target. Sending a synchronization confirmation reply from the bridge after all of the data has been written to the targets.
Type: Grant
Filed: February 15, 2016
Date of Patent: January 2, 2018
Assignee: NVIDIA Corporation
Inventors: Mike Osborn, Mark Hummel, Jonathan Owen, Samuel Hammond Duncan
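The abstract describes a flow in which the bridge may land the semaphore before all of the data, so the consumer must also ask the bridge for a synchronization confirmation. The single-threaded C sketch below only illustrates that ordering concern; the slot layout, drain order, and helper names are illustrative assumptions.

```c
/* Minimal sketch: data packets plus a trailing semaphore are handed to a
 * bridge in order, the bridge drains them to target slots in a relaxed
 * (shuffled) order, and the consumer trusts the data only after it has seen
 * the semaphore land AND received a sync confirmation from the bridge. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_PACKETS 4
#define SEMAPHORE_SLOT NUM_PACKETS        /* semaphore written to its own slot */

static int  targets[NUM_PACKETS + 1];     /* consumer-visible memory slots */
static bool written[NUM_PACKETS + 1];     /* which slots the bridge has drained */

/* Bridge: accept an ordered set, then drain it in a relaxed order. */
static void bridge_relaxed_write(const int *ordered, const int *drain_order)
{
    for (int i = 0; i <= NUM_PACKETS; i++) {
        int slot = drain_order[i];
        targets[slot] = ordered[slot];
        written[slot] = true;
    }
}

/* Bridge: sync confirmation is given only once every slot has been written. */
static bool bridge_sync_confirm(void)
{
    for (int i = 0; i <= NUM_PACKETS; i++)
        if (!written[i])
            return false;
    return true;
}

int main(void)
{
    /* Source writes data 10,20,30,40 followed by semaphore value 1, in order. */
    int ordered[NUM_PACKETS + 1]     = { 10, 20, 30, 40, 1 };
    /* Relaxed drain order: the semaphore (slot 4) lands before data slot 3. */
    int drain_order[NUM_PACKETS + 1] = { 1, 4, 0, 2, 3 };

    bridge_relaxed_write(ordered, drain_order);

    /* Consumer sees the semaphore, then issues a sync command to the bridge
     * and waits for confirmation before consuming the data. */
    if (targets[SEMAPHORE_SLOT] == 1 && bridge_sync_confirm())
        printf("data safe to consume: %d %d %d %d\n",
               targets[0], targets[1], targets[2], targets[3]);
    return 0;
}
```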
-
Publication number: 20170235690
Abstract: Remotely synchronizing data communicated in an electronic computing system. Ordered writing of a data set of discrete data packets (data) and a following associated semaphore packet (semaphore) from a source electronic device (source) to a bridge interface device (bridge). Relaxed writing of the data set from the bridge to discrete target memory addresses (targets) of a data-consuming electronic device (consumer), wherein the order of the data and the semaphore written to the targets is different than the order of the data and semaphore written with the ordered writing. Monitoring, by the consumer, the relaxed writing of the semaphore to one of the targets. Issuing a synchronization command to the bridge upon detection of the semaphore having been written to the one target. Sending a synchronization confirmation reply from the bridge after all of the data has been written to the targets.
Type: Application
Filed: February 15, 2016
Publication date: August 17, 2017
Inventors: Mike Osborn, Mark Hummel, Jonathan Owen, Samuel Hammond Duncan
-
Publication number: 20150199280
Abstract: A system and method are provided for implementing multi-stage translation of virtual addresses. The method includes the steps of receiving, at a first memory management unit, a memory request including a virtual address in a first address space, translating the virtual address to generate a second virtual address in a second address space, and transmitting a modified memory request including the second virtual address to a second memory management unit. The second memory management unit is configured to translate the second virtual address to generate a physical address in a third address space. The physical address is associated with a location in a memory.
Type: Application
Filed: January 14, 2014
Publication date: July 16, 2015
Applicant: NVIDIA Corporation
Inventors: Steven E. Molnar, Jay Kishora Gupta, James Leroy Deming, Samuel Hammond Duncan, Jeffrey Smith
-
Patent number: 8656117
Abstract: An input/output unit for a computer system that is interfaced with a memory unit having a plurality of partitions manages completions of read requests in the order that they were made. A read request buffer tracks the order in which the read requests were made so that read data responsive to the read requests can be completed and returned to a requesting client in the order the read requests were made.
Type: Grant
Filed: October 30, 2008
Date of Patent: February 18, 2014
Assignee: NVIDIA Corporation
Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
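The abstract describes returning read data to the client in issue order even though memory partitions may complete requests out of order. The C sketch below illustrates one way to track that ordering with a FIFO of request IDs; the buffer sizes and helper names are assumptions, not the patented design.

```c
/* Minimal sketch of in-order completion return: requests are logged in a FIFO
 * in issue order, completions may arrive out of order, and data is handed back
 * to the client only when the request at the FIFO head has completed. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_REQS 8

static int  fifo[MAX_REQS];        /* request IDs in the order they were issued */
static int  head = 0, tail = 0;
static bool done[MAX_REQS];        /* completion flag per request ID */
static int  data[MAX_REQS];        /* read data per request ID */

static void issue(int req_id)               { fifo[tail++] = req_id; }

static void complete(int req_id, int value) /* out-of-order arrival */
{
    data[req_id] = value;
    done[req_id] = true;
}

/* Drain every completion that is now at the head of the issue-order FIFO. */
static void return_in_order(void)
{
    while (head < tail && done[fifo[head]]) {
        printf("return req %d -> data %d\n", fifo[head], data[fifo[head]]);
        head++;
    }
}

int main(void)
{
    issue(0); issue(1); issue(2);

    complete(2, 300);  return_in_order();  /* nothing returned yet: req 0 pending */
    complete(0, 100);  return_in_order();  /* returns req 0, still holds req 2 */
    complete(1, 200);  return_in_order();  /* returns req 1, then req 2 */
    return 0;
}
```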
-
Patent number: 8396993
Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
Type: Grant
Filed: February 15, 2012
Date of Patent: March 12, 2013
Assignee: NVIDIA Corporation
Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
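The abstract describes accumulating narrow client writes into aligned, interface-width chunks before they hit the memory unit. The C sketch below illustrates that accumulation; the 32-byte line width, byte-granular writes, and function names are assumptions for illustration only.

```c
/* Minimal sketch: accumulate narrow client writes into an aligned wide line
 * and flush the line to the memory interface in one piece. Widths assumed. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define LINE_BYTES 32          /* width of the high-bandwidth memory interface */

typedef struct {
    uint8_t  line[LINE_BYTES]; /* accumulation buffer, already line-aligned */
    uint32_t fill;             /* bytes accumulated so far */
} packer_t;

/* Stand-in for the wide write issued to the memory interface. */
static void memory_write_line(const uint8_t *line, uint32_t nbytes)
{
    printf("flush %u aligned bytes: ", nbytes);
    for (uint32_t i = 0; i < nbytes; i++) printf("%02x", line[i]);
    printf("\n");
}

static void packer_flush(packer_t *p)
{
    if (p->fill) {
        memory_write_line(p->line, p->fill);
        p->fill = 0;
    }
}

/* Accept a narrow client write; flush whenever a full line has accumulated. */
static void packer_push(packer_t *p, const uint8_t *buf, uint32_t nbytes)
{
    while (nbytes) {
        uint32_t room = LINE_BYTES - p->fill;
        uint32_t take = nbytes < room ? nbytes : room;
        memcpy(p->line + p->fill, buf, take);
        p->fill  += take;
        buf      += take;
        nbytes   -= take;
        if (p->fill == LINE_BYTES)
            packer_flush(p);
    }
}

int main(void)
{
    packer_t p = { {0}, 0 };
    uint8_t a[20], b[20];
    memset(a, 0xaa, sizeof a);
    memset(b, 0xbb, sizeof b);
    packer_push(&p, a, sizeof a);   /* accumulates: 20 of 32 bytes */
    packer_push(&p, b, sizeof b);   /* completes one line, holds 8 bytes over */
    packer_flush(&p);               /* drain the partial remainder */
    return 0;
}
```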
-
Patent number: 8380896
Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
Type: Grant
Filed: February 15, 2012
Date of Patent: February 19, 2013
Assignee: NVIDIA Corporation
Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
-
Patent number: 8380895
Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
Type: Grant
Filed: February 15, 2012
Date of Patent: February 19, 2013
Assignee: NVIDIA Corporation
Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
-
Patent number: 8279231
Abstract: Read completion buffer space is allocated in accordance with a preset limit. When a read request is received from a client, the sum of a current allocation of the read completion buffer space and a new allocation of the read completion buffer space required by the read request is compared with the preset limit. If the preset limit is not exceeded, read completion buffer space is allocated to the read request. If the preset limit is exceeded, the read request is suspended until sufficient data is read out from the read completion buffer.
Type: Grant
Filed: October 29, 2008
Date of Patent: October 2, 2012
Assignee: NVIDIA Corporation
Inventors: Samuel Hammond Duncan, John H. Edmondson, Raymond Hoi Man Wong, Lukito Muliadi
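The abstract describes an admission check against a preset completion-buffer limit. The C sketch below illustrates that check; the limit value, byte granularity, and function names are assumptions made for the example.

```c
/* Minimal sketch: a read request is admitted only if its completion-buffer
 * allocation plus the space already allocated stays within a preset limit;
 * otherwise it waits until read-out frees enough space. */
#include <stdbool.h>
#include <stdio.h>

#define PRESET_LIMIT 256         /* bytes of read completion buffer available */

static unsigned allocated = 0;   /* completion buffer bytes currently in use */

/* Returns true if the request was admitted, false if it must be suspended. */
static bool try_admit_read(unsigned request_bytes)
{
    if (allocated + request_bytes > PRESET_LIMIT)
        return false;            /* suspend until data is read out */
    allocated += request_bytes;  /* reserve completion buffer space */
    return true;
}

/* Client reads data out of the completion buffer, freeing its allocation. */
static void read_out(unsigned bytes)
{
    allocated -= bytes;
}

int main(void)
{
    printf("req 200: %s\n", try_admit_read(200) ? "admitted" : "suspended");
    printf("req 100: %s\n", try_admit_read(100) ? "admitted" : "suspended");
    read_out(200);               /* space frees up as data is consumed */
    printf("req 100 retry: %s\n", try_admit_read(100) ? "admitted" : "suspended");
    return 0;
}
```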
-
Publication number: 20120147024
Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
Type: Application
Filed: February 15, 2012
Publication date: June 14, 2012
Applicant: NVIDIA Corporation
Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
-
Publication number: 20120147019
Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
Type: Application
Filed: February 15, 2012
Publication date: June 14, 2012
Applicant: NVIDIA Corporation
Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
-
Publication number: 20120139928
Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
Type: Application
Filed: February 15, 2012
Publication date: June 7, 2012
Applicant: NVIDIA Corporation
Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
-
Patent number: 8135885
Abstract: A data packer of an input/output hub of a computer system packs and formats write data that is supplied to it before the write data is written into a memory unit of the computer system. More particularly, the data packer accumulates write data received from lower bandwidth clients for delivery to a high bandwidth memory interface. Also, the data packer aligns the write data, so that when the write data is read out from the write data packer, no further alignment is needed.
Type: Grant
Filed: October 30, 2008
Date of Patent: March 13, 2012
Assignee: NVIDIA Corporation
Inventors: Raymond Hoi Man Wong, Samuel Hammond Duncan, Lukito Muliadi, Madhukiran V. Swarna
-
Patent number: 7685370
Abstract: A data processing system can establish or maintain data coherency by issuing a data flush operation. An agent can initialize a first flush operation by writing to a flush register. The agent can determine that the flush operation is complete by reading a status indicator from a status register. Additional agents can independently issue flush operations during the pendency of the first flush operation. A second flush instruction and any additional flush instructions that issue during the pendency of the first flush operation set a flush pending indicator in a status register. Once the first flush operation completes, the host performs all pending flush operations in a single second flush operation. The status indicator does not indicate a completed flush operation for the first flush operation until all flush operations are complete. Multiple co-pending flush operations are collapsed into at most two flush operations.
Type: Grant
Filed: December 16, 2005
Date of Patent: March 23, 2010
Assignee: NVIDIA Corporation
Inventors: Samuel Hammond Duncan, Lincoln G. Garlick
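The abstract describes collapsing flush requests that arrive while a flush is already in flight into a single follow-on flush. The C sketch below models that state machine with two flags; the register model and function names are assumptions, not the patented hardware interface.

```c
/* Minimal sketch: a flush request that arrives while one is in flight only
 * sets a pending bit, and all such pending requests are satisfied by a single
 * follow-on flush, so co-pending requests collapse to at most two flushes. */
#include <stdbool.h>
#include <stdio.h>

static bool flush_in_flight = false;  /* a flush operation is currently running */
static bool flush_pending   = false;  /* at least one request arrived meanwhile */

/* Agent writes the flush register. */
static void write_flush_register(void)
{
    if (flush_in_flight)
        flush_pending = true;         /* collapse into the follow-on flush */
    else
        flush_in_flight = true;       /* start the first flush */
}

/* Hardware signals that the running flush finished. */
static void flush_hardware_done(void)
{
    if (flush_pending) {
        flush_pending = false;        /* all collapsed requests run as one */
        printf("starting single follow-on flush for all pending requests\n");
        /* flush_in_flight stays set until the follow-on flush also finishes */
    } else {
        flush_in_flight = false;
        printf("flush complete, status register reports done\n");
    }
}

int main(void)
{
    write_flush_register();   /* agent 1 starts a flush */
    write_flush_register();   /* agents 2 and 3 request flushes while it runs */
    write_flush_register();
    flush_hardware_done();    /* first flush ends: one follow-on flush starts */
    flush_hardware_done();    /* follow-on flush ends: status reports done */
    return 0;
}
```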
-
Patent number: 7685371
Abstract: A data processing system can establish or maintain data coherency by issuing a data flush operation. The data processing system can be configured as a host executing one or more independent processes using one or more lower level devices. The lower level devices can be viewed as peer devices. Any of the host or the plurality of peer devices can be configured to initiate the flush operation. A device can determine whether the initiator of a flush operation is the host or a peer device. The device can perform a flush limited to local memory, or a subset of all available memory, if a peer device initiates the flush operation.
Type: Grant
Filed: April 19, 2006
Date of Patent: March 23, 2010
Assignee: NVIDIA Corporation
Inventors: Samuel Hammond Duncan, Robert A. Alfieri, John H. Edmondson, David William Nuechterlein, Michael A. Woodmansee
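The abstract describes scoping a flush by its initiator: a peer-initiated flush can be limited to local memory rather than all available memory. The short C sketch below illustrates only that decision; the enum names and scope policy are assumptions.

```c
/* Minimal sketch: a host-initiated flush covers all memory the device can
 * reach, while a peer-initiated flush is limited to local memory. */
#include <stdio.h>

typedef enum { INITIATOR_HOST, INITIATOR_PEER } initiator_t;
typedef enum { FLUSH_ALL, FLUSH_LOCAL_ONLY }    flush_scope_t;

/* Decide how much memory the flush must cover based on who asked for it. */
static flush_scope_t flush_scope_for(initiator_t who)
{
    return (who == INITIATOR_PEER) ? FLUSH_LOCAL_ONLY : FLUSH_ALL;
}

static void perform_flush(initiator_t who)
{
    flush_scope_t scope = flush_scope_for(who);
    printf("%s-initiated flush -> %s\n",
           who == INITIATOR_HOST ? "host" : "peer",
           scope == FLUSH_ALL ? "flush all reachable memory"
                              : "flush local memory only");
}

int main(void)
{
    perform_flush(INITIATOR_HOST);
    perform_flush(INITIATOR_PEER);
    return 0;
}
```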
-
Patent number: 7469309
Abstract: Methods and apparatus for peer-to-peer data transfers in a computing environment provide configurable control over the number of outstanding read requests by one peer device to another. A requesting peer device includes a control register that stores a high-water mark value associated with requests to a target peer device. Each time a read request to the target peer device is generated, the number of such requests already outstanding is compared to the high-water mark. The request is blocked if the number of outstanding requests exceeds the high-water mark and remains blocked until such time as the number of outstanding requests no longer exceeds the high-water mark. Different high-water marks can be associated with different combinations of requesting and target devices.
Type: Grant
Filed: December 12, 2005
Date of Patent: December 23, 2008
Assignee: NVIDIA Corporation
Inventors: Samuel Hammond Duncan, Wei-Je Huang, Radha Kanekal
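The abstract describes blocking new peer-to-peer reads while the count of outstanding reads to a target exceeds a configured high-water mark. The C sketch below illustrates that throttle; the per-target table, the exact comparison semantics, and the values are assumptions for illustration only.

```c
/* Minimal sketch: a new peer-to-peer read is blocked while the count of reads
 * already outstanding to that target exceeds its configured high-water mark. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_TARGETS 2

static unsigned high_water[NUM_TARGETS]  = { 2, 4 };  /* per-target marks */
static unsigned outstanding[NUM_TARGETS] = { 0, 0 };  /* reads still in flight */

/* Returns true if the read was issued, false if it must stay blocked. */
static bool issue_peer_read(unsigned target)
{
    if (outstanding[target] > high_water[target])
        return false;            /* block until completions drain the count */
    outstanding[target]++;
    return true;
}

/* A completion from the target peer releases one outstanding slot. */
static void peer_read_completed(unsigned target)
{
    outstanding[target]--;
}

int main(void)
{
    for (int i = 1; i <= 4; i++)
        printf("read %d to target 0: %s\n", i,
               issue_peer_read(0) ? "issued" : "blocked");
    peer_read_completed(0);      /* one read completes and returns data */
    printf("retry blocked read:  %s\n",
           issue_peer_read(0) ? "issued" : "blocked");
    return 0;
}
```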