Patents by Inventor Andrew Currid
Andrew Currid has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240143372
  Abstract: Apparatuses, systems, and techniques to determine that a first group including first hardware components is compatible with a second group including second hardware components based at least on a first label associated with the first group and a second label associated with the second group, and cause at least one workload to be migrated from the first group to the second group based at least on determining the first and second groups are compatible with one another.
  Type: Application
  Filed: October 31, 2022
  Publication date: May 2, 2024
  Inventors: Andrew Currid, Anshul Fadnavis, Chenghuan Jia, Ankit Agrawal
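The label-driven compatibility check described above can be sketched as follows. This is an illustrative model only, not the patented implementation; the group structure, label scheme, and function names are all assumptions.

```python
# Illustrative sketch: hardware groups carry labels, and a workload is
# migrated only when the source and destination labels indicate the
# groups are compatible with one another.

def compatible(group_a: dict, group_b: dict) -> bool:
    """Treat two groups as compatible when their labels match."""
    return group_a["label"] == group_b["label"]

def migrate_workloads(src: dict, dst: dict) -> list:
    """Move all workloads from src to dst only if the groups are compatible."""
    if not compatible(src, dst):
        return []
    moved = list(src["workloads"])
    dst["workloads"].extend(moved)
    src["workloads"].clear()
    return moved

# Example: two GPU groups sharing the same (hypothetical) topology label.
g1 = {"label": "nvlink-4x", "workloads": ["training-job"]}
g2 = {"label": "nvlink-4x", "workloads": []}
print(migrate_workloads(g1, g2))  # ['training-job']
```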
- Publication number: 20240143408
  Abstract: Apparatuses, systems, and techniques to determine metrics for paths connecting hardware components, select a plurality of groups of the hardware components based at least in part on the metrics, and perform at least a portion of a workload using a selected group of the plurality of groups.
  Type: Application
  Filed: November 1, 2022
  Publication date: May 2, 2024
  Inventors: Andrew Currid, Anshul Fadnavis, Chenghuan Jia, Ankit Agrawal
- Patent number: 11966765
  Abstract: Systems and methods are disclosed for throttling memory bandwidth accessed by virtual machines (VMs). A technique for dynamically throttling the virtual computer processing units (vCPUs) assigned to a VM (tenant) controls the memory access rate of the VM. When the memory is shared by multiple VMs in a cloud-computing environment, one VM increasing its memory access rate may cause another VM to suffer memory access starvation. This behavior violates the principle of VM isolation in cloud computing. In contrast to conventional systems, a software solution for dynamically throttling the vCPUs may be implemented within a hypervisor and is therefore portable across CPU families and doesn't require specialized server-class CPU capabilities or limit the system configuration.
  Type: Grant
  Filed: September 9, 2020
  Date of Patent: April 23, 2024
  Assignee: NVIDIA Corporation
  Inventors: Santosh Kumar Ravindranath Shukla, Andrew Currid, Chenghuan Jia, Arpit R. Jain, Shounak Santosh Deshpande
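One way such dynamic throttling can be modeled is a duty-cycle controller: when a VM's measured memory bandwidth exceeds its budget, the hypervisor shrinks the fraction of each period its vCPUs are allowed to run. This is a hedged sketch of that general idea; the patent's actual control algorithm, parameters, and names are not taken from the abstract and are assumptions here.

```python
# Hedged sketch (not the patented algorithm): adjust the fraction of
# each scheduling period that a VM's vCPUs may run, based on whether
# the VM's measured memory bandwidth exceeds its budget.

def next_duty_cycle(measured_bw, budget_bw, duty, step=0.1,
                    floor=0.1, ceiling=1.0):
    """Return the new fraction of time vCPUs may run in the next period."""
    if measured_bw > budget_bw:
        duty = max(floor, duty - step)      # over budget: throttle down
    else:
        duty = min(ceiling, duty + step)    # under budget: relax throttle
    return round(duty, 2)
```

Because the control loop lives entirely in hypervisor software, a sketch like this needs no CPU-specific bandwidth-limiting hardware, which matches the portability claim in the abstract.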
- Publication number: 20220075638
  Abstract: Systems and methods are disclosed for throttling memory bandwidth accessed by virtual machines (VMs). A technique for dynamically throttling the virtual computer processing units (vCPUs) assigned to a VM (tenant) controls the memory access rate of the VM. When the memory is shared by multiple VMs in a cloud-computing environment, one VM increasing its memory access rate may cause another VM to suffer memory access starvation. This behavior violates the principle of VM isolation in cloud computing. In contrast to conventional systems, a software solution for dynamically throttling the vCPUs may be implemented within a hypervisor and is therefore portable across CPU families and doesn't require specialized server-class CPU capabilities or limit the system configuration.
  Type: Application
  Filed: September 9, 2020
  Publication date: March 10, 2022
  Inventors: Santosh Kumar Ravindranath Shukla, Andrew Currid, Chenghuan Jia, Arpit R. Jain, Shounak Santosh Deshpande
- Patent number: 9734546
  Abstract: A computer system includes an operating system having a kernel and configured to launch a plurality of computing processes. The system also includes a plurality of graphics processing units (GPUs), a front-end driver module, and a plurality of back-end driver modules. The GPUs are configured to execute instructions on behalf of the computing processes subject to a GPU service request. The front-end driver module is loaded into the kernel and configured to receive the GPU service request from one of the computing processes. Each back-end driver module is associated with one or more of the GPUs and configured to receive the GPU service request from the front-end driver module and pass the GPU service request to an associated GPU.
  Type: Grant
  Filed: October 3, 2013
  Date of Patent: August 15, 2017
  Assignee: NVIDIA Corporation
  Inventors: Kirti Wankhede, Andrew Currid, Surath Raj Mitra, Chenghuan Jia
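The front-end/back-end split described above can be sketched as a simple routing layer: one front-end entry point receives every GPU service request and forwards it to the back-end driver registered for the target GPU. All class, method, and request-field names here are illustrative assumptions, not the driver's real API.

```python
# Hedged sketch of the front-end/back-end driver split: the front end
# receives GPU service requests and routes each to the back end that
# owns the target GPU. A back end may own several GPUs.

class BackendDriver:
    def __init__(self, gpu_ids):
        self.gpu_ids = set(gpu_ids)

    def service(self, request):
        return f"gpu{request['gpu']}: {request['op']} done"

class FrontendDriver:
    def __init__(self):
        self.routes = {}

    def register(self, backend):
        for gpu in backend.gpu_ids:      # one route entry per owned GPU
            self.routes[gpu] = backend

    def service(self, request):
        # Pass the request through to the back end for the target GPU.
        return self.routes[request["gpu"]].service(request)

fe = FrontendDriver()
fe.register(BackendDriver([0, 1]))
fe.register(BackendDriver([2]))
print(fe.service({"gpu": 2, "op": "alloc"}))  # gpu2: alloc done
```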
- Patent number: 9495723
  Abstract: A device for processing graphics data includes a plurality of graphics processing units. Each graphics processing unit may correspond to a virtualized operating system. Each graphics processing unit may include a configuration register indicating a 3D class code and a command register indicating that I/O cycle decoding is disabled. The device may be configured to transmit a configuration register value to a virtualized operating system indicating a VGA-compatible class code. The device may be configured to transmit a command register value to the virtualized operating system that indicates that I/O cycle decoding is enabled. In this manner, legacy bus architecture of the device may not limit the number of graphics processing units deployed in the device.
  Type: Grant
  Filed: September 13, 2013
  Date of Patent: November 15, 2016
  Assignee: NVIDIA Corporation
  Inventors: Andrew Currid, Franck Diard, Chenghuan Jia, Parag Kulkarni
- Publication number: 20150097844
  Abstract: A computer system includes an operating system having a kernel and configured to launch a plurality of computing processes. The system also includes a plurality of graphics processing units (GPUs), a front-end driver module, and a plurality of back-end driver modules. The GPUs are configured to execute instructions on behalf of the computing processes subject to a GPU service request. The front-end driver module is loaded into the kernel and configured to receive the GPU service request from one of the computing processes. Each back-end driver module is associated with one or more of the GPUs and configured to receive the GPU service request from the front-end driver module and pass the GPU service request to an associated GPU.
  Type: Application
  Filed: October 3, 2013
  Publication date: April 9, 2015
  Applicant: NVIDIA Corporation
  Inventors: Kirti Wankhede, Andrew Currid, Surath Raj Mitra, Chenghuan Jia
- Patent number: 9003000
  Abstract: One embodiment of the present invention sets forth a technique for automatically provisioning a diskless computing device and an associated server system. A diskless computing device client incorporates an iSCSI initiator that is used to access resources provided by an iSCSI target that is resident on a server computing device. The iSCSI initiator is implemented in the client firmware, providing INT13 disk services entry points, thereby enabling the client to transparently access virtual storage devices at boot time. The client device conducts an apparently local installation using the virtual storage devices provided by the server computing device. A short signature value is associated with the boot image, uniquely associating the boot image with the specific client hardware configuration. When the client device boots normally, the signature value of the client device is presented to the server computing device to automatically reference the appropriate boot image.
  Type: Grant
  Filed: July 25, 2006
  Date of Patent: April 7, 2015
  Assignee: NVIDIA Corporation
  Inventors: Andrew Currid, Mark A. Overby
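The signature mechanism above can be sketched as a short digest of the client's hardware configuration that keys the server's boot-image lookup. The configuration fields, hash choice, and signature length here are illustrative assumptions; the patent does not specify them in the abstract.

```python
import hashlib

# Hedged sketch: derive a short, stable signature from a client's
# hardware configuration, then use it to look up the boot image the
# server recorded for that configuration at install time.

def config_signature(hw_config: dict, length: int = 8) -> str:
    """Canonicalize the configuration and return a short hex digest."""
    canonical = ";".join(f"{k}={hw_config[k]}" for k in sorted(hw_config))
    return hashlib.sha256(canonical.encode()).hexdigest()[:length]

boot_images = {}

# Hypothetical client configuration, recorded during installation.
hw = {"mac": "00:1b:21:aa:bb:cc", "nic": "example-nic", "ram_mb": 2048}
sig = config_signature(hw)
boot_images[sig] = "/images/client-a.img"

# On a normal boot, the client presents the same signature and the
# server resolves it to the matching boot image.
print(boot_images[config_signature(hw)])  # /images/client-a.img
```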
- Publication number: 20150042664
  Abstract: A device for processing graphics data includes a plurality of graphics processing units. Each graphics processing unit may correspond to a virtualized operating system. Each graphics processing unit may include a configuration register indicating a 3D class code and a command register indicating that I/O cycle decoding is disabled. The device may be configured to transmit a configuration register value to a virtualized operating system indicating a VGA-compatible class code. The device may be configured to transmit a command register value to the virtualized operating system that indicates that I/O cycle decoding is enabled. In this manner, legacy bus architecture of the device may not limit the number of graphics processing units deployed in the device.
  Type: Application
  Filed: September 13, 2013
  Publication date: February 12, 2015
  Applicant: NVIDIA Corporation
  Inventors: Andrew Currid, Franck Diard, Chenghuan Jia, Parag Kulkarni
- Patent number: 8909746
  Abstract: One embodiment of the present invention sets forth a technique for automatically provisioning a diskless computing device and an associated server system. A diskless computing device client incorporates an iSCSI initiator that is used to access resources provided by an iSCSI target that is resident on a server computing device. The iSCSI initiator is implemented in the client firmware, providing INT13 disk services entry points, thereby enabling the client to transparently access virtual storage devices at boot time. The client device conducts an apparently local installation using the virtual storage devices provided by the server computing device. A short signature value is associated with the boot image, uniquely associating the boot image with the specific client hardware configuration. When the client device boots normally, the signature value of the client device is presented to the server computing device to automatically reference the appropriate boot image.
  Type: Grant
  Filed: July 25, 2006
  Date of Patent: December 9, 2014
  Assignee: NVIDIA Corporation
  Inventors: Andrew Currid, Mark A. Overby
- Patent number: 8713262
  Abstract: One embodiment of the present invention sets forth a technique for synchronization between two or more processors. The technique implements a spinlock acquire function and a spinlock release function. A processor executing the spinlock acquire function advantageously operates in a low power state while waiting for an opportunity to acquire spinlock. The spinlock acquire function configures a memory monitor to wake up the processor when spinlock is released by a different processor. The spinlock release function releases spinlock by clearing a lock variable and may clear a wait variable.
  Type: Grant
  Filed: September 2, 2011
  Date of Patent: April 29, 2014
  Assignee: NVIDIA Corporation
  Inventors: Mark A. Overby, Andrew Currid
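The lock-variable/wait-variable protocol above can be sketched as follows. On real hardware the waiter would arm a memory monitor (for example, x86 MONITOR/MWAIT) and sleep in a low-power state until the lock word is written; in this Python sketch a `threading.Condition` stands in for that wake-up mechanism, and the class and field names are assumptions.

```python
import threading

# Hedged sketch of the spinlock acquire/release protocol: acquire
# registers the caller as a waiter and blocks (a stand-in for a
# low-power monitored wait); release clears the lock variable and
# wakes a waiter if the wait variable is nonzero.

class Spinlock:
    def __init__(self):
        self.lock_var = 0                   # 1 while the lock is held
        self.wait_var = 0                   # count of waiting "processors"
        self._monitor = threading.Condition()

    def acquire(self):
        with self._monitor:
            while self.lock_var:            # lock held: wait to be woken
                self.wait_var += 1
                self._monitor.wait()        # stand-in for MWAIT
                self.wait_var -= 1
            self.lock_var = 1

    def release(self):
        with self._monitor:
            self.lock_var = 0               # clear the lock variable
            if self.wait_var:               # a waiter is monitoring: wake it
                self._monitor.notify()
```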
- Publication number: 20130311548
  Abstract: User inputs are received from end user devices. The user inputs are associated with applications executing in parallel on a computer system. Responsive to the user inputs, data is generated using a graphics processing unit (GPU) configured as multiple virtual GPUs that are concurrently utilized by the applications. The data is then directed to the proper end user devices for display.
  Type: Application
  Filed: December 26, 2012
  Publication date: November 21, 2013
  Applicant: NVIDIA Corporation
  Inventors: Jen-Hsun Huang, Franck R. Diard, Andrew Currid
- Patent number: 8549170
  Abstract: A system and method are provided for performing the retransmission of data in a network. Included is an offload engine in communication with system memory and a network. The offload engine serves for managing the retransmission of data transmitted in the network.
  Type: Grant
  Filed: December 19, 2003
  Date of Patent: October 1, 2013
  Assignee: NVIDIA Corporation
  Inventors: John Shigeto Minami, Michael Ward Johnson, Andrew Currid, Mrudula Kanuri
- Publication number: 20130061005
  Abstract: One embodiment of the present invention sets forth a technique for synchronization between two or more processors. The technique implements a spinlock acquire function and a spinlock release function. A processor executing the spinlock acquire function advantageously operates in a low power state while waiting for an opportunity to acquire spinlock. The spinlock acquire function configures a memory monitor to wake up the processor when spinlock is released by a different processor. The spinlock release function releases spinlock by clearing a lock variable and may clear a wait variable.
  Type: Application
  Filed: September 2, 2011
  Publication date: March 7, 2013
  Inventors: Mark A. Overby, Andrew Currid
- Patent number: 8296515
  Abstract: One embodiment of the present invention sets forth a technique for performing RAID-6 computations using simple arithmetic functions and two-dimensional table lookup operations. A set of threads within a multi-threaded processor are assigned to perform RAID-6 computations in parallel on a stripe of RAID-6 data. A set of lookup tables are stored within the multi-threaded processor for access by the threads in performing the RAID-6 computations. During normal operation of a related RAID-6 disk array, RAID-6 computations may be performed by the threads using a small set of simple arithmetic operations and a set of lookup operations to the lookup tables. Greater computational efficiency is gained by reducing the RAID-6 computations to simple operations that are performed efficiently on a multi-threaded processor, such as a graphics processing unit.
  Type: Grant
  Filed: December 16, 2009
  Date of Patent: October 23, 2012
  Assignee: NVIDIA Corporation
  Inventors: Nirmal Raj Saxena, Mark A. Overby, Andrew Currid
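The table-driven arithmetic behind RAID-6 parity can be sketched as follows: log/exp tables over GF(2^8) are built once, after which P/Q parity generation needs only XORs, integer adds, and table lookups. This sketch uses the common polynomial 0x11d with generator 2; the patent's specific table layout (including its two-dimensional tables) is not reproduced here.

```python
# Hedged sketch of table-driven GF(2^8) arithmetic as used for RAID-6
# P/Q parity. Tables are precomputed once; per-byte parity work then
# reduces to XORs and table lookups (polynomial 0x11d, generator 2).

GF_EXP, GF_LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:           # reduce modulo the field polynomial
        x ^= 0x11d
for i in range(255, 512):   # doubled table avoids a modulo on lookup
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a, b):
    """Multiply in GF(2^8) using only table lookups and one add."""
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def pq_parity(stripe):
    """P is the XOR of the data bytes; Q weights disk i by generator^i."""
    p = q = 0
    for i, d in enumerate(stripe):
        p ^= d
        q ^= gf_mul(GF_EXP[i], d)
    return p, q
```

Because each byte's contribution is independent, a stripe can be split across many threads that each consult the same shared tables, which is the parallelization the abstract describes.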
- Patent number: 8065439
  Abstract: A system, method, and related data structure are provided for transmitting data in a network. Included is a data object (i.e. metadata) for communicating between a first network protocol layer and a second network protocol layer. In use, the data object facilitates network communication management utilizing a transport offload engine.
  Type: Grant
  Filed: December 19, 2003
  Date of Patent: November 22, 2011
  Assignee: NVIDIA Corporation
  Inventors: Michael Ward Johnson, Andrew Currid, Mrudula Kanuri, John Shigeto Minami
- Patent number: 8037391
  Abstract: One embodiment of the present invention sets forth a technique for performing RAID-6 computations using simple arithmetic functions and two-dimensional table lookup operations. Four lookup tables are computed and saved prior to normal operation of a RAID-6 disk array. During normal operation of the RAID-6 disk array, all RAID-6 related computations may be performed using a small set of simple arithmetic operations and a set of lookup operations to three of the four previously saved lookup tables. Greater computational efficiency is gained by reducing the RAID-6 computations to simple operations that are performed efficiently on a typical central processing unit or graphics processing unit.
  Type: Grant
  Filed: May 22, 2009
  Date of Patent: October 11, 2011
  Assignee: NVIDIA Corporation
  Inventors: Cyndi Jung, Nirmal Raj Saxena, Mark A. Overby, Andrew Currid
- Patent number: 7971045
  Abstract: Embodiments of the invention provide a method for selecting a network boot device using a hardware class identifier. Generally, embodiments of the invention enable a diskless client to communicate a hardware class identifier in a network connection request. The hardware class identifier is used to determine the proper boot server to provide a boot image to the diskless client.
  Type: Grant
  Filed: December 15, 2006
  Date of Patent: June 28, 2011
  Assignee: NVIDIA Corporation
  Inventors: Andrew Currid, Mark A. Overby
- Patent number: 7925931
  Abstract: Embodiments of the present invention provide a method for handling errors in data servers. Generally, embodiments of the invention enable a data packet that is marked as erroneous to be handled so that it is not committed to permanent storage. One or more components are configured to recognize a poisoned data indicator, and to respond to the indicator by taking programmed actions to delete the data, to stop the data from being transmitted, to notify upstream components, and to purge related data from downstream components.
  Type: Grant
  Filed: December 13, 2006
  Date of Patent: April 12, 2011
  Assignee: NVIDIA Corporation
  Inventors: Michael John Sebastian Smith, Mark A. Overby, Andrew Currid
- Patent number: 7644205
  Abstract: One embodiment of the present invention sets forth a technique for mapping a small computer system interface (SCSI) architecture model-3 (SAM-3) task priority to an IEEE Standard 802.1q tag control information (TCI) field. Four bits that define a SAM-3 task priority are mapped to the three user priority bits within a standard 802.1q TCI field. By enabling the SAM-3 task priority of a given SCSI command to determine the user priority within a related IEEE 802.1q Ethernet frame, the Ethernet network is enabled to substantially honor the requested task priority for the SCSI command.
  Type: Grant
  Filed: December 15, 2006
  Date of Patent: January 5, 2010
  Assignee: NVIDIA Corporation
  Inventors: Mark A. Overby, Andrew Currid
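The 4-bit-to-3-bit mapping above can be sketched with simple bit operations. The standard 802.1Q TCI layout is 3 bits of user priority (PCP), 1 DEI/CFI bit, and a 12-bit VLAN ID; the particular fold used here (dropping the low priority bit) is an illustrative assumption, since the patent defines the actual mapping.

```python
# Hedged sketch of folding a 4-bit SAM-3 task priority into the 3-bit
# user-priority (PCP) field of a 16-bit 802.1Q TCI word.
# TCI layout: PCP (bits 15-13) | DEI (bit 12) | VID (bits 11-0).

def sam3_to_user_priority(task_priority: int) -> int:
    """Map a 4-bit SAM-3 priority (0-15) onto 3 bits (0-7).

    Dropping the least-significant bit is one simple fold; the
    patented mapping may differ.
    """
    assert 0 <= task_priority <= 15
    return task_priority >> 1

def build_tci(task_priority: int, dei: int, vlan_id: int) -> int:
    """Assemble a 16-bit TCI word from priority, DEI, and VLAN ID."""
    pcp = sam3_to_user_priority(task_priority)
    return (pcp << 13) | ((dei & 1) << 12) | (vlan_id & 0xFFF)
```

With the SCSI command's task priority carried in the frame's PCP field, 802.1Q-aware switches can queue the frame accordingly, which is how the Ethernet network "substantially honors" the requested task priority.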