Patents by Inventor Nicholas Patrick Wilt
Nicholas Patrick Wilt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10181172
Abstract: Methods, systems, and computer-readable media for disaggregated graphics asset delivery for virtualized graphics are disclosed. A virtual compute instance with attached virtual GPU is provisioned in a multi-tenant provider network. The virtual compute instance is implemented using a physical compute instance, and the virtual GPU is implemented using a physical GPU. An application comprising identifiers of graphics assets is executed on the virtual compute instance. Executing the application comprises sending graphics instructions and the identifiers from the virtual compute instance to the virtual GPU. The graphics assets are obtained by the virtual GPU from a graphics asset repository using the identifiers. The graphics instructions are executed on the virtual GPU using the graphics assets corresponding to the identifiers.
Type: Grant
Filed: June 8, 2016
Date of Patent: January 15, 2019
Assignee: Amazon Technologies, Inc.
Inventor: Nicholas Patrick Wilt
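The identifier-based delivery scheme described in this abstract can be illustrated with a minimal sketch: the application ships only asset identifiers with its instructions, and the virtual GPU resolves them against a central repository, fetching each asset once. All names here (`AssetRepository`, `VirtualGpu`) are illustrative assumptions, not terms from the patent.

```python
class AssetRepository:
    """Central store of graphics assets, keyed by identifier."""
    def __init__(self, assets):
        self._assets = dict(assets)

    def fetch(self, asset_id):
        return self._assets[asset_id]


class VirtualGpu:
    """Resolves asset identifiers against the repository before executing
    graphics instructions, caching each asset after its first fetch."""
    def __init__(self, repository):
        self._repo = repository
        self._cache = {}
        self.executed = []

    def execute(self, instruction, asset_ids):
        assets = []
        for asset_id in asset_ids:
            if asset_id not in self._cache:        # fetch on first use only
                self._cache[asset_id] = self._repo.fetch(asset_id)
            assets.append(self._cache[asset_id])
        self.executed.append((instruction, assets))


repo = AssetRepository({"tex1": b"texture-bytes", "mesh1": b"mesh-bytes"})
gpu = VirtualGpu(repo)
gpu.execute("draw", ["tex1", "mesh1"])   # application sends identifiers, not asset data
```

The point of the disaggregation is visible in the last line: the compute instance never transmits the asset bytes themselves over the network, only the short identifiers.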
-
Patent number: 10169841
Abstract: Methods, systems, and computer-readable media for dynamic interface synchronization for virtualized graphics processing are disclosed. A GPU interface synchronization request is sent from a compute instance to a graphics processing unit (GPU) server via a network. The GPU server comprises a virtual GPU attached to the compute instance and implemented using at least one physical GPU. Based at least in part on the GPU interface synchronization request, a shared version of a GPU interface is determined for use with the compute instance and the GPU server. Program code of the shared version of the GPU interface is installed on the compute instance and on the GPU server. Using the shared version of the GPU interface, the compute instance sends instructions to the virtual GPU over the network, and the virtual GPU generates GPU output associated with the instructions.
Type: Grant
Filed: March 27, 2017
Date of Patent: January 1, 2019
Assignee: Amazon Technologies, Inc.
Inventors: Malcolm Featonby, Douglas Cotton Kurtz, Paolo Maggi, Umesh Chandani, John Merrill Phillips, Jr., Yuxuan Liu, Adithya Bhat, Mihir Sadruddin Surani, Andrea Curtoni, Nicholas Patrick Wilt
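A plausible core of the "shared version" determination above is a version negotiation: given the interface versions each side supports, pick one both can use. This is a hedged sketch of one such policy (highest common version); the abstract does not specify the selection rule.

```python
def negotiate_shared_version(instance_versions, server_versions):
    """Return the highest GPU-interface version supported by both the
    compute instance and the GPU server, or None if none is shared.
    Versions are (major, minor) tuples, so max() picks the newest."""
    common = set(instance_versions) & set(server_versions)
    return max(common) if common else None


shared = negotiate_shared_version(
    {(1, 0), (1, 1), (2, 0)},          # versions the compute instance supports
    {(1, 1), (2, 0), (2, 1)},          # versions the GPU server supports
)
```

Once a shared version is chosen, both sides would install the matching interface code before any instructions are sent over the network.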
-
Publication number: 20180204301
Abstract: Methods, systems, and computer-readable media for dynamic and application-specific virtualized graphics processing are disclosed. Execution of an application is initiated on a virtual compute instance. The virtual compute instance is implemented using a server. One or more graphics processing unit (GPU) requirements associated with the execution of the application are determined. A physical GPU resource is selected from a pool of available physical GPU resources based at least in part on the one or more GPU requirements. A virtual GPU is attached to the virtual compute instance based at least in part on initiation of the execution of the application. The virtual GPU is implemented using the physical GPU resource selected from the pool and accessible to the server over a network.
Type: Application
Filed: January 18, 2017
Publication date: July 19, 2018
Applicant: Amazon Technologies, Inc.
Inventors: Malcolm Featonby, Yuxuan Liu, Umesh Chandani, John Merrill Phillips, Jr., Nicholas Patrick Wilt, Adithya Bhat, Douglas Cotton Kurtz, Mihir Sadruddin Surani
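The pool-selection step above can be sketched as a simple best-fit policy: filter the pool to GPUs that satisfy the application's requirements, then pick the least capable one that qualifies so larger GPUs stay available. The field names and the best-fit rule are assumptions for illustration; the abstract only says selection is "based at least in part on" the requirements.

```python
def select_gpu(pool, requirements):
    """Return the smallest GPU in the pool that meets the application's
    requirements (best-fit), or None if no GPU qualifies."""
    candidates = [g for g in pool
                  if g["memory_gb"] >= requirements.get("memory_gb", 0)
                  and g["tflops"] >= requirements.get("tflops", 0)]
    if not candidates:
        return None
    # Best fit: prefer the least-capable qualifying GPU.
    return min(candidates, key=lambda g: (g["memory_gb"], g["tflops"]))


pool = [
    {"id": "gpu-a", "memory_gb": 4,  "tflops": 2},
    {"id": "gpu-b", "memory_gb": 16, "tflops": 8},
    {"id": "gpu-c", "memory_gb": 8,  "tflops": 4},
]
chosen = select_gpu(pool, {"memory_gb": 6, "tflops": 3})   # qualifies: gpu-b, gpu-c
```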
-
Publication number: 20180182061
Abstract: Methods, systems, and computer-readable media for placement optimization for virtualized graphics processing are disclosed. A provider network comprises a plurality of instance locations for physical compute instances and a plurality of graphics processing unit (GPU) locations for physical GPUs. A GPU location for a physical GPU or an instance location for a physical compute instance is selected in the provider network. The GPU location or instance location is selected based at least in part on one or more placement criteria. A virtual compute instance with attached virtual GPU is provisioned. The virtual compute instance is implemented using the physical compute instance in the instance location, and the virtual GPU is implemented using the physical GPU in the GPU location. The physical GPU is accessible to the physical compute instance over a network. An application is executed using the virtual GPU on the virtual compute instance.
Type: Application
Filed: February 26, 2018
Publication date: June 28, 2018
Applicant: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe
-
Publication number: 20180182062
Abstract: Methods, systems, and computer-readable media for application-specific virtualized graphics processing are disclosed. A virtual compute instance is provisioned from a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. A virtual GPU is attached to the virtual compute instance. The virtual GPU is selected based at least in part on requirements of an application. The virtual GPU is implemented using a physical GPU, and the physical GPU is accessible to the virtual compute instance over a network. The application is executed using the virtual GPU on the virtual compute instance.
Type: Application
Filed: February 26, 2018
Publication date: June 28, 2018
Applicant: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe, Nathan Lee Burns
-
Patent number: 9904975
Abstract: Methods, systems, and computer-readable media for scaling for virtualized graphics processing are disclosed. A first virtual GPU is attached to a virtual compute instance of a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. The first virtual GPU is replaced by a second virtual GPU based at least in part on a change in GPU requirements for the virtual compute instance. The first and second virtual GPUs are implemented using physical GPU resources that are accessible to the virtual compute instance over a network. Processing for the virtual compute instance is migrated from the first virtual GPU to the second virtual GPU. An application is executed using the second virtual GPU on the virtual compute instance.
Type: Grant
Filed: November 11, 2015
Date of Patent: February 27, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe
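The replace-and-migrate flow in this abstract reduces to: attach a new virtual GPU sized for the new requirements, copy the processing state across, and swap the attachment. A minimal sketch, assuming state migration is a plain copy (the patent does not specify the migration mechanism, and all class names are hypothetical):

```python
class VirtualGpu:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.state = {}          # processing state held on this virtual GPU


class ComputeInstance:
    """Compute instance whose attached virtual GPU can be replaced when
    the instance's GPU requirements change."""
    def __init__(self, gpu):
        self.gpu = gpu

    def scale_gpu(self, new_gpu):
        new_gpu.state = dict(self.gpu.state)   # migrate processing state
        self.gpu = new_gpu                     # swap the attachment


inst = ComputeInstance(VirtualGpu("small", capacity=4))
inst.gpu.state["frame"] = 42                   # work in progress on the small GPU
inst.scale_gpu(VirtualGpu("large", capacity=16))   # requirements grew: scale up
```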
-
Patent number: 9904974
Abstract: Methods, systems, and computer-readable media for placement optimization for virtualized graphics processing are disclosed. A provider network comprises a plurality of instance locations for physical compute instances and a plurality of graphics processing unit (GPU) locations for physical GPUs. A GPU location for a physical GPU or an instance location for a physical compute instance is selected in the provider network. The GPU location or instance location is selected based at least in part on one or more placement criteria. A virtual compute instance with attached virtual GPU is provisioned. The virtual compute instance is implemented using the physical compute instance in the instance location, and the virtual GPU is implemented using the physical GPU in the GPU location. The physical GPU is accessible to the physical compute instance over a network. An application is executed using the virtual GPU on the virtual compute instance.
Type: Grant
Filed: November 11, 2015
Date of Patent: February 27, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe
-
Patent number: 9904973
Abstract: Methods, systems, and computer-readable media for application-specific virtualized graphics processing are disclosed. A virtual compute instance is provisioned from a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. A virtual GPU is attached to the virtual compute instance. The virtual GPU is selected based at least in part on requirements of an application. The virtual GPU is implemented using a physical GPU, and the physical GPU is accessible to the virtual compute instance over a network. The application is executed using the virtual GPU on the virtual compute instance.
Type: Grant
Filed: November 11, 2015
Date of Patent: February 27, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe, Nathan Lee Burns
-
Patent number: 9886737
Abstract: Methods, systems, and computer-readable media for local-to-remote migration for virtualized graphics processing are disclosed. A virtual compute instance comprising a local GPU is provisioned from a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. A virtual GPU is attached to the virtual compute instance. The virtual GPU is implemented using a physical GPU, and the physical GPU is accessible to the virtual compute instance over a network. Graphics processing for the virtual compute instance is migrated from the local GPU to the virtual GPU. An application is executed using the virtual GPU on the virtual compute instance.
Type: Grant
Filed: November 11, 2015
Date of Patent: February 6, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe, Nathan Lee Burns
-
Patent number: 9836354
Abstract: A service provider system may implement ECC-like features when executing computations on GPUs that do not include sufficient error detection and recovery for computations that are sensitive to bit errors. During execution of critical computations on behalf of customers, the system may automatically instrument program instructions received from the customers to cause each computation to be executed using multiple sets of hardware resources (e.g., different host machines, processor cores, or internal hardware resources). The service may provide APIs with which customers may instrument their code for execution using redundant resource instances, or specify parameters for applying the ECC-like features. The service or customer may instrument code to perform (or cause the system to perform) checkpointing operations at particular points in the code, and to compare intermediate results produced by different hardware resources.
Type: Grant
Filed: April 28, 2014
Date of Patent: December 5, 2017
Assignee: Amazon Technologies, Inc.
Inventors: Nachiketh Rao Potlapally, John Merrill Phillips, Nicholas Patrick Wilt, Deepak Singh, Scott Michael Le Grand
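The redundant-execution-with-checkpoints idea above can be sketched in a few lines: run each computation step on several independent replicas (standing in for distinct hardware resources) and compare the intermediate results at each checkpoint, treating a mismatch as a probable bit error. This is a schematic analogue under that assumption, not the service's actual instrumentation.

```python
def run_with_redundancy(steps, x, n_replicas=2):
    """Run a sequence of computation steps on n_replicas independent
    replicas and compare intermediate results after every step
    (a checkpoint); a mismatch signals a probable bit error."""
    states = [x] * n_replicas
    for step in steps:
        states = [step(s) for s in states]   # each replica computes independently
        if len(set(states)) != 1:            # checkpoint: compare intermediate results
            raise RuntimeError("replica mismatch: possible bit error")
    return states[0]


result = run_with_redundancy([lambda v: v * 2, lambda v: v + 1], 10)
```

In the real system the replicas would run on different host machines or processor cores; here they are simply repeated function calls, which is enough to show where the comparison happens.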
-
Publication number: 20170132745
Abstract: Methods, systems, and computer-readable media for local-to-remote migration for virtualized graphics processing are disclosed. A virtual compute instance comprising a local GPU is provisioned from a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. A virtual GPU is attached to the virtual compute instance. The virtual GPU is implemented using a physical GPU, and the physical GPU is accessible to the virtual compute instance over a network. Graphics processing for the virtual compute instance is migrated from the local GPU to the virtual GPU. An application is executed using the virtual GPU on the virtual compute instance.
Type: Application
Filed: November 11, 2015
Publication date: May 11, 2017
Applicant: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe, Nathan Lee Burns
-
Publication number: 20170132746
Abstract: Methods, systems, and computer-readable media for placement optimization for virtualized graphics processing are disclosed. A provider network comprises a plurality of instance locations for physical compute instances and a plurality of graphics processing unit (GPU) locations for physical GPUs. A GPU location for a physical GPU or an instance location for a physical compute instance is selected in the provider network. The GPU location or instance location is selected based at least in part on one or more placement criteria. A virtual compute instance with attached virtual GPU is provisioned. The virtual compute instance is implemented using the physical compute instance in the instance location, and the virtual GPU is implemented using the physical GPU in the GPU location. The physical GPU is accessible to the physical compute instance over a network. An application is executed using the virtual GPU on the virtual compute instance.
Type: Application
Filed: November 11, 2015
Publication date: May 11, 2017
Applicant: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe
-
Publication number: 20170132744
Abstract: Methods, systems, and computer-readable media for application-specific virtualized graphics processing are disclosed. A virtual compute instance is provisioned from a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. A virtual GPU is attached to the virtual compute instance. The virtual GPU is selected based at least in part on requirements of an application. The virtual GPU is implemented using a physical GPU, and the physical GPU is accessible to the virtual compute instance over a network. The application is executed using the virtual GPU on the virtual compute instance.
Type: Application
Filed: November 11, 2015
Publication date: May 11, 2017
Applicant: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe, Nathan Lee Burns
-
Publication number: 20170132747
Abstract: Methods, systems, and computer-readable media for scaling for virtualized graphics processing are disclosed. A first virtual GPU is attached to a virtual compute instance of a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. The first virtual GPU is replaced by a second virtual GPU based at least in part on a change in GPU requirements for the virtual compute instance. The first and second virtual GPUs are implemented using physical GPU resources that are accessible to the virtual compute instance over a network. Processing for the virtual compute instance is migrated from the first virtual GPU to the second virtual GPU. An application is executed using the second virtual GPU on the virtual compute instance.
Type: Application
Filed: November 11, 2015
Publication date: May 11, 2017
Applicant: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe
-
Publication number: 20170047041
Abstract: Methods, systems, and computer-readable media for virtualizing graphics processing in a provider network are disclosed. A virtual compute instance is provisioned from a provider network. The provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances with multi-tenancy. A virtual GPU is attached to the virtual compute instance. The virtual GPU is implemented using a physical GPU, and the physical GPU is accessible to the virtual compute instance over a network. An application is executed using the virtual GPU on the virtual compute instance. Executing the application generates virtual GPU output that is provided to a client device.
Type: Application
Filed: August 10, 2015
Publication date: February 16, 2017
Applicant: Amazon Technologies, Inc.
Inventors: Nicholas Patrick Wilt, Ashutosh Tambe, Nathan Lee Burns, Nafea Bshara
-
Patent number: 9547535
Abstract: One or more embodiments of the invention set forth techniques to create a process in a graphical processing unit (GPU) that has access to memory buffers in the system memory of a computer system that are shared among a plurality of GPUs in the computer system. The GPU of the process is able to engage in Direct Memory Access (DMA) with any of the shared memory buffers, thereby eliminating additional copying steps that have been needed to combine data output of the various GPUs without such shared access.
Type: Grant
Filed: April 30, 2009
Date of Patent: January 17, 2017
Assignee: NVIDIA Corporation
Inventor: Nicholas Patrick Wilt
-
Patent number: 9542192
Abstract: A method for executing an application program using streams. A device driver receives a first command within an application program and parses the first command to identify a first stream token that is associated with a first stream. The device driver checks a memory location associated with the first stream for a first semaphore, and determines whether the first semaphore has been released. Once the first semaphore has been released, a second command within the application program is executed. Advantageously, embodiments of the invention provide a technique for developers to take advantage of the parallel execution capabilities of a GPU.
Type: Grant
Filed: August 15, 2008
Date of Patent: January 10, 2017
Assignee: NVIDIA Corporation
Inventors: Nicholas Patrick Wilt, Ian Buck, Philip Cuadra
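The stream-ordering mechanism described above can be approximated in a short sketch: each command tagged with a stream's token waits on that stream's semaphore, which the preceding command releases on completion, so commands in one stream execute in order. This uses Python's `threading.Semaphore` as a stand-in for the driver's per-stream semaphore memory location; the class and method names are illustrative.

```python
import threading

class Stream:
    """Per-stream ordering: a submitted command acquires the stream's
    semaphore (i.e., waits until the prior command has released it),
    runs, then releases the semaphore for the next command."""
    def __init__(self):
        self._sem = threading.Semaphore(1)   # released => stream is idle
        self.log = []

    def submit(self, name, work):
        self._sem.acquire()      # wait for the previous command's release
        try:
            self.log.append(name)
            return work()
        finally:
            self._sem.release()  # allow the next command on this stream


stream = Stream()
stream.submit("cmd1", lambda: None)
stream.submit("cmd2", lambda: None)   # runs only after cmd1 releases
```

Independent streams each have their own semaphore, which is what lets work on different streams proceed in parallel on the GPU.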
-
Patent number: 9513923
Abstract: One embodiment of the present invention sets forth a technique for associating arbitrary parallel processing unit (PPU) contexts with a given central processing unit (CPU) thread. The technique introduces two operators used to manage the PPU contexts. The first operator is a PPU context push, which causes a PPU driver to store the current PPU context of a calling thread on a PPU context stack and to associate a named PPU context with the calling thread. The second operator is a PPU context pop, which causes the PPU driver to restore the PPU context of a calling function to the PPU context at the top of the PPU context stack. By performing a PPU context push at the beginning of a function and a PPU context pop prior to returning from the function, the function may execute within a single CPU thread but operate on two distinct PPU contexts.
Type: Grant
Filed: March 30, 2012
Date of Patent: December 6, 2016
Assignee: NVIDIA Corporation
Inventor: Nicholas Patrick Wilt
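The push/pop operators described in this abstract behave like a per-thread stack of contexts, which a short sketch makes concrete. The `PpuDriver` class and context names here are hypothetical stand-ins for the driver state the patent describes.

```python
class PpuDriver:
    """Per-thread PPU context management: push saves the caller's current
    context on a stack and activates a named context; pop restores the
    context from the top of the stack."""
    def __init__(self, initial_context):
        self.current = initial_context
        self._stack = []

    def push_context(self, named_context):
        self._stack.append(self.current)   # save the caller's context
        self.current = named_context       # associate the named context

    def pop_context(self):
        self.current = self._stack.pop()   # restore the saved context


driver = PpuDriver("ctx-main")
driver.push_context("ctx-render")   # at function entry
# ... function body executes against ctx-render ...
driver.pop_context()                # just before the function returns
```

The bracketing discipline (push at entry, pop before return) is what lets a single CPU thread run a function against one context while the caller continues on another.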
-
Patent number: 9317452
Abstract: A virtual machine environment in which a hypervisor provides direct memory-mapped access by a virtual guest to a physical memory device. The hypervisor prevents reading from, writing to, or both, any individual register or registers while allowing unrestricted access to other registers, and without raising any abnormal condition in the guest's execution environment. For example, in one embodiment, the hypervisor can apply memory access protection to a memory page containing a restricted register so that a fault condition can be raised. When an instruction is executed, the hypervisor can intercept the fault condition and emulate the faulting guest instruction. When the emulation accesses the restricted address, the hypervisor can selectively decide whether or not to perform the access.
Type: Grant
Filed: November 18, 2013
Date of Patent: April 19, 2016
Assignee: Amazon Technologies, Inc.
Inventors: Kent David Forschmiedt, Nicholas Patrick Wilt, Matthew David Klein
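The fault-and-emulate pattern in this abstract can be sketched schematically: on real hardware, any guest access to the protected page faults into the hypervisor, which emulates the instruction and decides per register whether to perform the access. The sketch below models only the emulation path and its per-register decision; the class, offsets, and silent-drop policy are assumptions for illustration.

```python
class Hypervisor:
    """Fault-and-emulate sketch: accesses to a protected register page are
    emulated by the hypervisor, which permits or quietly suppresses each
    access depending on whether the register is restricted."""
    def __init__(self, restricted_offsets):
        self.restricted = set(restricted_offsets)
        self.registers = {}

    def emulate_guest_write(self, offset, value):
        # Entered when the guest's write to the protected page faults.
        if offset in self.restricted:
            return False                   # suppress, without faulting the guest
        self.registers[offset] = value     # unrestricted registers pass through
        return True


hv = Hypervisor(restricted_offsets={0x10})
ok_a = hv.emulate_guest_write(0x20, 7)   # unrestricted register: performed
ok_b = hv.emulate_guest_write(0x10, 9)   # restricted register: suppressed
```

Note that the guest sees no abnormal condition in either case; the restricted write simply has no effect, exactly as the abstract describes.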
-
Publication number: 20150339136
Abstract: A computing system providing virtual computing services may generate and manage remote computing sessions between client devices and virtual desktop instances (workspaces) hosted on the service provider's network. The system may implement a virtual private cloud for a workspaces service that extends out to gateway components in multiple, geographically distributed point of presence (POP) locations. In response to a client request for a virtual desktop session, the service may configure a virtual computing resource instance for the session and establish a secure, reliable, low-latency communication channel (over a virtual private network) between the resource instance and a gateway component at a POP location near the client for communication of a two-way interactive video stream for the session. The availability zone containing the POP location may be different than the one hosting the resource instance for the session. Client devices may connect to the gateway component over a public network.
Type: Application
Filed: May 20, 2014
Publication date: November 26, 2015
Applicant: Amazon Technologies, Inc.
Inventors: Deepak Suryanarayanan, Sheshadri Supreeth Koushik, Nicholas Patrick Wilt, Kalyanaraman Prasad