Patents by Inventor Andrew Currid

Andrew Currid is a named inventor on the following patent filings. This listing includes patent applications that are still pending as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143372
    Abstract: Apparatuses, systems, and techniques to determine that a first group including first hardware components is compatible with a second group including second hardware components based at least on a first label associated with the first group and a second label associated with the second group, and cause at least one workload to be migrated from the first group to the second group based at least on determining the first and second groups are compatible with one another.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 2, 2024
    Inventors: Andrew Currid, Anshul Fadnavis, Chenghuan Jia, Ankit Agrawal
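The label-compatibility idea in this abstract can be sketched in a few lines: two hardware groups are considered migration-compatible when their labels match, and workloads move only between compatible groups. All names below are illustrative, not from the patent.

```python
# Hypothetical sketch of label-based group compatibility and workload
# migration, in the spirit of the abstract above.

def compatible(group_a: dict, group_b: dict) -> bool:
    """Groups are compatible when they carry the same label."""
    return group_a["label"] == group_b["label"]

def migrate_workloads(src: dict, dst: dict) -> bool:
    """Move all workloads from src to dst if the groups are compatible."""
    if not compatible(src, dst):
        return False
    dst["workloads"].extend(src["workloads"])
    src["workloads"].clear()
    return True
```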
  • Publication number: 20240143408
    Abstract: Apparatuses, systems, and techniques to determine metrics for paths connecting hardware components, select a plurality of groups of the hardware components based at least in part on the metrics, and perform at least a portion of a workload using a selected group of the plurality of groups.
    Type: Application
    Filed: November 1, 2022
    Publication date: May 2, 2024
    Inventors: Andrew Currid, Anshul Fadnavis, Chenghuan Jia, Ankit Agrawal
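As a rough sketch of this abstract's idea: score each candidate group of hardware components by the metrics of the paths connecting its members, then run the workload on the best-scoring group. The scoring rule below (lowest total path metric) is an assumption for illustration, not the patent's actual selection logic.

```python
# Illustrative group selection from path metrics. Component names and
# the metric structure are hypothetical.

def group_cost(group: list, path_metric: dict) -> float:
    """Sum the path metric over every pair of components in the group."""
    cost = 0.0
    for i, a in enumerate(group):
        for b in group[i + 1:]:
            cost += path_metric[frozenset((a, b))]
    return cost

def select_group(groups: list, path_metric: dict) -> list:
    """Pick the group whose internal paths have the lowest total metric."""
    return min(groups, key=lambda g: group_cost(g, path_metric))
```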
  • Patent number: 11966765
    Abstract: Systems and methods are disclosed for throttling memory bandwidth accessed by virtual machines (VMs). A technique for dynamically throttling the virtual computer processing units (vCPUs) assigned to a VM (tenant) controls the memory access rate of the VM. When the memory is shared by multiple VMs in a cloud-computing environment, one VM increasing its memory access rate may cause another VM to suffer memory access starvation. This behavior violates the principle of VM isolation in cloud computing. In contrast to conventional systems, a software solution for dynamically throttling the vCPUs may be implemented within a hypervisor and is therefore portable across CPU families and doesn't require specialized server-class CPU capabilities or limit the system configuration.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Santosh Kumar Ravindranath Shukla, Andrew Currid, Chenghuan Jia, Arpit R. Jain, Shounak Santosh Deshpande
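The dynamic throttling described in this abstract can be pictured as a rate controller in the hypervisor's scheduler: a vCPU that exceeds its memory-bandwidth budget is descheduled until the budget refills. The token-bucket sketch below is an analogy, not NVIDIA's hypervisor code; all names and parameters are hypothetical.

```python
# Illustrative token-bucket controller for capping a VM's memory access
# rate. Each scheduling tick refills tokens; a vCPU that exhausts its
# tokens is throttled (descheduled) until the next refill.

class VcpuThrottle:
    def __init__(self, bytes_per_tick: int, burst: int):
        self.rate = bytes_per_tick   # refill per tick (bandwidth cap)
        self.burst = burst           # bucket capacity
        self.tokens = burst

    def tick(self):
        """Called once per scheduling interval to refill the bucket."""
        self.tokens = min(self.burst, self.tokens + self.rate)

    def charge(self, bytes_accessed: int) -> bool:
        """Charge observed memory traffic; False means throttle the vCPU."""
        if bytes_accessed > self.tokens:
            self.tokens = 0
            return False
        self.tokens -= bytes_accessed
        return True
```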
  • Publication number: 20220075638
    Abstract: Systems and methods are disclosed for throttling memory bandwidth accessed by virtual machines (VMs). A technique for dynamically throttling the virtual computer processing units (vCPUs) assigned to a VM (tenant) controls the memory access rate of the VM. When the memory is shared by multiple VMs in a cloud-computing environment, one VM increasing its memory access rate may cause another VM to suffer memory access starvation. This behavior violates the principle of VM isolation in cloud computing. In contrast to conventional systems, a software solution for dynamically throttling the vCPUs may be implemented within a hypervisor and is therefore portable across CPU families and doesn't require specialized server-class CPU capabilities or limit the system configuration.
    Type: Application
    Filed: September 9, 2020
    Publication date: March 10, 2022
    Inventors: Santosh Kumar Ravindranath Shukla, Andrew Currid, Chenghuan Jia, Arpit R. Jain, Shounak Santosh Deshpande
  • Patent number: 9734546
    Abstract: A computer system includes an operating system having a kernel and configured to launch a plurality of computing processes. The system also includes a plurality of graphics processing units (GPUs), a front-end driver module, and a plurality of back-end driver modules. The GPUs are configured to execute instructions on behalf of the computing processes subject to a GPU service request. The front-end driver module is loaded into the kernel and configured to receive the GPU service request from one of the computing processes. Each back-end driver module is associated with one or more of the GPUs and configured to receive the GPU service request from the front-end driver module and pass the GPU service request to an associated GPU.
    Type: Grant
    Filed: October 3, 2013
    Date of Patent: August 15, 2017
    Assignee: NVIDIA Corporation
    Inventors: Kirti Wankhede, Andrew Currid, Surath Raj Mitra, Chenghuan Jia
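The split-driver architecture in this abstract can be sketched as a single front-end that routes each GPU service request to the back-end driver owning the target GPU. The class and method names below are hypothetical, not the actual driver interfaces.

```python
# Illustrative front-end / back-end driver routing for GPU service
# requests, in the spirit of the abstract above.

class BackendDriver:
    """Owns one GPU and services requests forwarded to it."""
    def __init__(self, gpu_id: int):
        self.gpu_id = gpu_id
        self.handled = []

    def handle(self, request: str) -> str:
        self.handled.append(request)
        return f"gpu{self.gpu_id}:{request}"

class FrontendDriver:
    """Kernel-resident entry point; forwards each request to the
    back end associated with the requested GPU."""
    def __init__(self):
        self.backends = {}

    def register(self, backend: BackendDriver):
        self.backends[backend.gpu_id] = backend

    def service(self, gpu_id: int, request: str) -> str:
        return self.backends[gpu_id].handle(request)
```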
  • Patent number: 9495723
    Abstract: A device for processing graphics data includes a plurality of graphics processing units. Each graphics processing unit may correspond to a virtualized operating system. Each graphics processing unit may include a configuration register indicating a 3D class code and a command register indicating that I/O cycle decoding is disabled. The device may be configured to transmit a configuration register value to a virtualized operating system indicating a VGA-compatible class code. The device may be configured to transmit a command register value to the virtualized operating system that indicates that I/O cycle decoding is enabled. In this manner, legacy bus architecture of the device may not limit the number of graphics processing units deployed in the device.
    Type: Grant
    Filed: September 13, 2013
    Date of Patent: November 15, 2016
    Assignee: NVIDIA CORPORATION
    Inventors: Andrew Currid, Franck Diard, Chenghuan Jia, Parag Kulkarni
  • Publication number: 20150097844
    Abstract: A computer system includes an operating system having a kernel and configured to launch a plurality of computing processes. The system also includes a plurality of graphics processing units (GPUs), a front-end driver module, and a plurality of back-end driver modules. The GPUs are configured to execute instructions on behalf of the computing processes subject to a GPU service request. The front-end driver module is loaded into the kernel and configured to receive the GPU service request from one of the computing processes. Each back-end driver module is associated with one or more of the GPUs and configured to receive the GPU service request from the front-end driver module and pass the GPU service request to an associated GPU.
    Type: Application
    Filed: October 3, 2013
    Publication date: April 9, 2015
    Applicant: NVIDIA Corporation
    Inventors: Kirti Wankhede, Andrew Currid, Surath Raj Mitra, Chenghuan Jia
  • Patent number: 9003000
    Abstract: One embodiment of the present invention sets forth a technique for automatically provisioning a diskless computing device and an associated server system. A diskless computing device client incorporates an iSCSI initiator that is used to access resources provided by an iSCSI target that is resident on a server computing device. The iSCSI initiator is implemented in the client firmware, providing INT13 disk services entry points, thereby enabling the client to transparently access virtual storage devices at boot time. The client device conducts an apparently local installation using the virtual storage devices provided by the server computing device. A short signature value is associated with the boot image, uniquely associating the boot image with the specific client hardware configuration. When the client device boots normally, the signature value of the client device is presented to the server computing device to automatically reference the appropriate boot image.
    Type: Grant
    Filed: July 25, 2006
    Date of Patent: April 7, 2015
    Assignee: NVIDIA Corporation
    Inventors: Andrew Currid, Mark A. Overby
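The signature mechanism in this abstract can be sketched as hashing the client's hardware configuration into a short value that the server uses to look up the matching boot image. The field names and hash choice below are assumptions for illustration, not the patented scheme.

```python
import hashlib

# Illustrative short-signature lookup associating a boot image with a
# specific client hardware configuration.

def hw_signature(config: dict) -> str:
    """Derive a short, stable signature from a hardware configuration."""
    blob = "|".join(f"{k}={config[k]}" for k in sorted(config))
    return hashlib.sha256(blob.encode()).hexdigest()[:8]

class BootServer:
    def __init__(self):
        self._images = {}

    def register(self, config: dict, image: str):
        self._images[hw_signature(config)] = image

    def lookup(self, signature: str):
        """Return the boot image for this signature, or None."""
        return self._images.get(signature)
```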
  • Publication number: 20150042664
    Abstract: A device for processing graphics data includes a plurality of graphics processing units. Each graphics processing unit may correspond to a virtualized operating system. Each graphics processing unit may include a configuration register indicating a 3D class code and a command register indicating that I/O cycle decoding is disabled. The device may be configured to transmit a configuration register value to a virtualized operating system indicating a VGA-compatible class code. The device may be configured to transmit a command register value to the virtualized operating system that indicates that I/O cycle decoding is enabled. In this manner, legacy bus architecture of the device may not limit the number of graphics processing units deployed in the device.
    Type: Application
    Filed: September 13, 2013
    Publication date: February 12, 2015
    Applicant: NVIDIA Corporation
    Inventors: Andrew CURRID, Franck DIARD, Chenghuan JIA, Parag KULKARNI
  • Patent number: 8909746
    Abstract: One embodiment of the present invention sets forth a technique for automatically provisioning a diskless computing device and an associated server system. A diskless computing device client incorporates an iSCSI initiator that is used to access resources provided by an iSCSI target that is resident on a server computing device. The iSCSI initiator is implemented in the client firmware, providing INT13 disk services entry points, thereby enabling the client to transparently access virtual storage devices at boot time. The client device conducts an apparently local installation using the virtual storage devices provided by the server computing device. A short signature value is associated with the boot image, uniquely associating the boot image with the specific client hardware configuration. When the client device boots normally, the signature value of the client device is presented to the server computing device to automatically reference the appropriate boot image.
    Type: Grant
    Filed: July 25, 2006
    Date of Patent: December 9, 2014
    Assignee: NVIDIA Corporation
    Inventors: Andrew Currid, Mark A. Overby
  • Patent number: 8713262
    Abstract: One embodiment of the present invention sets forth a technique for synchronization between two or more processors. The technique implements a spinlock acquire function and a spinlock release function. A processor executing the spinlock acquire function advantageously operates in a low power state while waiting for an opportunity to acquire spinlock. The spinlock acquire function configures a memory monitor to wake up the processor when spinlock is released by a different processor. The spinlock release function releases spinlock by clearing a lock variable and may clear a wait variable.
    Type: Grant
    Filed: September 2, 2011
    Date of Patent: April 29, 2014
    Assignee: NVIDIA Corporation
    Inventors: Mark A. Overby, Andrew Currid
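The acquire/release protocol in this abstract (a lock variable, a wait variable, and a memory monitor that wakes the parked waiter) can be sketched as follows. The hardware technique parks the waiter with a MONITOR/MWAIT-style memory monitor; pure Python has no such instruction, so a condition variable stands in for the monitor here, as an analogy rather than the patented code.

```python
import threading

# Illustrative spinlock whose waiter parks in a "low power" wait and is
# woken when the holder clears the lock variable, mirroring the
# lock-variable / wait-variable protocol described in the abstract.

class MonitoredSpinlock:
    def __init__(self):
        self._lock = 0
        self._wait = 0
        self._monitor = threading.Condition()

    def acquire(self):
        with self._monitor:
            while self._lock:
                self._wait = 1          # advertise that a waiter is parked
                self._monitor.wait()    # park until release wakes us
            self._lock = 1

    def release(self):
        with self._monitor:
            self._lock = 0              # clear the lock variable
            if self._wait:
                self._wait = 0          # clear the wait variable
                self._monitor.notify()  # wake the parked waiter
```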
  • Publication number: 20130311548
    Abstract: User inputs are received from end user devices. The user inputs are associated with applications executing in parallel on a computer system. Responsive to the user inputs, data is generated using a graphics processing unit (GPU) configured as multiple virtual GPUs that are concurrently utilized by the applications. The data is then directed to the proper end user devices for display.
    Type: Application
    Filed: December 26, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA Corporation
    Inventors: Jen-Hsun Huang, Franck R. Diard, Andrew Currid
  • Patent number: 8549170
    Abstract: A system and method are provided for performing the retransmission of data in a network. Included is an offload engine in communication with system memory and a network. The offload engine serves for managing the retransmission of data transmitted in the network.
    Type: Grant
    Filed: December 19, 2003
    Date of Patent: October 1, 2013
    Assignee: NVIDIA Corporation
    Inventors: John Shigeto Minami, Michael Ward Johnson, Andrew Currid, Mrudula Kanuri
  • Publication number: 20130061005
    Abstract: One embodiment of the present invention sets forth a technique for synchronization between two or more processors. The technique implements a spinlock acquire function and a spinlock release function. A processor executing the spinlock acquire function advantageously operates in a low power state while waiting for an opportunity to acquire spinlock. The spinlock acquire function configures a memory monitor to wake up the processor when spinlock is released by a different processor. The spinlock release function releases spinlock by clearing a lock variable and may clear a wait variable.
    Type: Application
    Filed: September 2, 2011
    Publication date: March 7, 2013
    Inventors: Mark A. Overby, Andrew Currid
  • Patent number: 8296515
    Abstract: One embodiment of the present invention sets forth a technique for performing RAID-6 computations using simple arithmetic functions and two-dimensional table lookup operations. A set of threads within a multi-threaded processor are assigned to perform RAID-6 computations in parallel on a stripe of RAID-6 data. A set of lookup tables are stored within the multi-threaded processor for access by the threads in performing the RAID-6 computations. During normal operation of a related RAID-6 disk array, RAID-6 computations may be performed by the threads using a small set of simple arithmetic operations and a set of lookup operations to the lookup tables. Greater computational efficiency is gained by reducing the RAID-6 computations to simple operations that are performed efficiently on a multi-threaded processor, such as a graphics processing unit.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: October 23, 2012
    Assignee: NVIDIA Corporation
    Inventors: Nirmal Raj Saxena, Mark A. Overby, Andrew Currid
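The table-lookup reduction described in this abstract (and in patent 8037391 below) rests on arithmetic over GF(2^8). A common way to turn that arithmetic into lookups is a pair of log/exp tables, sketched below; this follows the widely used field polynomial 0x11D and is an illustration of the general technique, not necessarily the patent's exact four-table scheme.

```python
# GF(2^8) multiply via log/exp lookup tables: the standard trick for
# reducing RAID-6 Q-parity math to table lookups and XORs.

EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D            # reduce modulo the field polynomial
for i in range(255, 512):
    EXP[i] = EXP[i - 255]     # double-length table avoids a modulo

def gf_mul(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def pq_parity(data: list) -> tuple:
    """P = XOR of data blocks; Q = sum over GF(2^8) of g^i * d_i,
    with generator g = 2."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(EXP[i], d)
    return p, q
```

In a real array the same lookups run per byte across each stripe; on a GPU, as the abstract notes, many threads can apply them to different bytes in parallel.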
  • Patent number: 8065439
    Abstract: A system, method, and related data structure are provided for transmitting data in a network. Included is a data object (i.e. metadata) for communicating between a first network protocol layer and a second network protocol layer. In use, the data object facilitates network communication management utilizing a transport offload engine.
    Type: Grant
    Filed: December 19, 2003
    Date of Patent: November 22, 2011
    Assignee: NVIDIA Corporation
    Inventors: Michael Ward Johnson, Andrew Currid, Mrudula Kanuri, John Shigeto Minami
  • Patent number: 8037391
    Abstract: One embodiment of the present invention sets forth a technique for performing RAID-6 computations using simple arithmetic functions and two-dimensional table lookup operations. Four lookup tables are computed and saved prior to normal operation of a RAID-6 disk array. During normal operation of the RAID-6 disk array, all RAID-6 related computations may be performed using a small set of simple arithmetic operations and a set of lookup operations to three of the four previously saved lookup tables. Greater computational efficiency is gained by reducing the RAID-6 computations to simple operations that are performed efficiently on a typical central processing unit or graphics processing unit.
    Type: Grant
    Filed: May 22, 2009
    Date of Patent: October 11, 2011
    Assignee: NVIDIA Corporation
    Inventors: Cyndi Jung, Nirmal Raj Saxena, Mark A. Overby, Andrew Currid
  • Patent number: 7971045
    Abstract: Embodiments of the invention provide a method for selecting a network boot device using a hardware class identifier. Generally, embodiments of the invention enable a diskless client to communicate a hardware class identifier in a network connection request. The hardware class identifier is used to determine the proper boot server to provide a boot image to the diskless client.
    Type: Grant
    Filed: December 15, 2006
    Date of Patent: June 28, 2011
    Assignee: NVIDIA Corporation
    Inventors: Andrew Currid, Mark A. Overby
  • Patent number: 7925931
    Abstract: Embodiments of the present invention provide a method for handling errors in data servers. Generally, embodiments of the invention enable a data packet that is marked as erroneous to be handled so that it is not committed to permanent storage. One or more components are configured to recognize a poisoned data indicator, and to respond to the indicator by taking programmed actions to delete the data, to stop the data from being transmitted, to notify upstream components, and to purge related data from downstream components.
    Type: Grant
    Filed: December 13, 2006
    Date of Patent: April 12, 2011
    Assignee: NVIDIA Corporation
    Inventors: Michael John Sebastian Smith, Mark A. Overby, Andrew Currid
  • Patent number: 7644205
    Abstract: One embodiment of the present invention sets forth a technique for mapping a small computer system interface (SCSI) architecture model-3 (SAM-3) task priority to an IEEE Standard 802.1q tag control information (TCI) field. Four bits that define a SAM-3 task priority are mapped to the three user priority bits within a standard 802.1q TCI field. By enabling the SAM-3 task priority of a given SCSI command to determine the user priority within a related IEEE 802.1q Ethernet frame, the Ethernet network is enabled to substantially honor the requested task priority for the SCSI command.
    Type: Grant
    Filed: December 15, 2006
    Date of Patent: January 5, 2010
    Assignee: NVIDIA Corporation
    Inventors: Mark A. Overby, Andrew Currid
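The mapping in this last abstract packs a 4-bit SAM-3 task priority into the 3-bit user-priority (PCP) field of an IEEE 802.1Q TCI word (PCP in bits 15..13, DEI in bit 12, VLAN ID in bits 11..0). The abstract does not spell out the exact 4-to-3-bit reduction, so the simple scaling below (dropping the low bit) is an assumption for illustration.

```python
# Illustrative packing of a SAM-3 task priority into an 802.1Q TCI word.

def sam3_to_tci(task_priority: int, vlan_id: int, dei: int = 0) -> int:
    assert 0 <= task_priority <= 0xF and 0 <= vlan_id <= 0xFFF
    pcp = task_priority >> 1            # 4-bit priority -> 3-bit PCP
    # TCI layout: PCP in bits 15..13, DEI in bit 12, VID in bits 11..0
    return (pcp << 13) | ((dei & 1) << 12) | vlan_id
```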