Patents by Inventor Lacky Vasant Shah
Lacky Vasant Shah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10430915
Abstract: One or more copy commands are scheduled for locating one or more pages of data in a local memory of a graphics processing unit (GPU) for more efficient access to the pages of data during rendering. A first processing unit that is coupled to a first GPU receives a notification that an access request count has reached a specified threshold. The first processing unit schedules a copy command to copy the first page of data to a first memory circuit of the first GPU from a second memory circuit of the second GPU. The copy command is included within a GPU command stream.
Type: Grant
Filed: January 24, 2018
Date of Patent: October 1, 2019
Assignee: NVIDIA Corporation
Inventors: Andrei Khodakovsky, Kirill A. Dmitriev, Rouslan L. Dimitrov, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
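The abstract above describes a driver that, when told a page's access count hit a threshold, schedules a copy command into a GPU command stream. A minimal sketch of that control flow, assuming entirely hypothetical names (`GpuCommandStream`, `Driver`, the `COPY_PAGE` opcode); this models the idea, not NVIDIA's implementation:

```python
# Illustrative model: on a threshold notification, the driver enqueues a
# page-copy command into a command stream consumed by a (simulated) GPU.
from collections import deque

class GpuCommandStream:
    """A FIFO of commands to be consumed by a simulated GPU."""
    def __init__(self):
        self.commands = deque()

    def enqueue(self, command):
        self.commands.append(command)

class Driver:
    def __init__(self, stream):
        self.stream = stream

    def on_threshold_notification(self, page_id, src_gpu, dst_gpu):
        # Schedule a copy of the hot page from the peer GPU's memory into
        # the local memory of the GPU that keeps requesting it.
        self.stream.enqueue(("COPY_PAGE", page_id, src_gpu, dst_gpu))

stream = GpuCommandStream()
driver = Driver(stream)
driver.on_threshold_notification(page_id=42, src_gpu=1, dst_gpu=0)
assert stream.commands[0] == ("COPY_PAGE", 42, 1, 0)
```

Placing the copy in the ordinary command stream (rather than issuing it out of band) lets it be ordered relative to the rendering work that needs the page.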
-
Patent number: 10402937
Abstract: A method for rendering graphics frames allocates rendering work to multiple graphics processing units (GPUs) that are configured to allow access to pages of data stored in locally attached memory of a peer GPU. The method includes the steps of generating, by a first GPU coupled to a first memory circuit, one or more first memory access requests to render a first primitive for a first frame, where at least one of the first memory access requests targets a first page of data that physically resides within a second memory circuit coupled to a second GPU. The first GPU requests the first page of data through a first data link coupling the first GPU to the second GPU and a register circuit within the first GPU accumulates an access request count for the first page of data. The first GPU notifies a driver that the access request count has reached a specified threshold.
Type: Grant
Filed: December 28, 2017
Date of Patent: September 3, 2019
Assignee: NVIDIA Corporation
Inventors: Rouslan L. Dimitrov, Kirill A. Dmitriev, Andrei Khodakovsky, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
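The companion mechanism above is the counting side: a register accumulates accesses to a remote page and notifies the driver at a threshold. A toy model, with all names (`AccessCounter`, the callback shape) invented for illustration:

```python
# Hypothetical model of the per-page access counter: accumulate remote-page
# accesses and fire a driver notification exactly once, at the threshold.
class AccessCounter:
    def __init__(self, threshold, notify):
        self.threshold = threshold
        self.notify = notify          # driver callback
        self.counts = {}              # page_id -> access count

    def record_access(self, page_id):
        self.counts[page_id] = self.counts.get(page_id, 0) + 1
        if self.counts[page_id] == self.threshold:
            self.notify(page_id)

notified = []
counter = AccessCounter(threshold=3, notify=notified.append)
for _ in range(3):
    counter.record_access(page_id=7)
assert notified == [7]
```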
-
Publication number: 20190206018
Abstract: One or more copy commands are scheduled for locating one or more pages of data in a local memory of a graphics processing unit (GPU) for more efficient access to the pages of data during rendering. A first processing unit that is coupled to a first GPU receives a notification that an access request count has reached a specified threshold. The first processing unit schedules a copy command to copy the first page of data to a first memory circuit of the first GPU from a second memory circuit of the second GPU. The copy command is included within a GPU command stream.
Type: Application
Filed: January 24, 2018
Publication date: July 4, 2019
Inventors: Andrei Khodakovsky, Kirill A. Dmitriev, Rouslan L. Dimitrov, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
-
Publication number: 20190206023
Abstract: A method for rendering graphics frames allocates rendering work to multiple graphics processing units (GPUs) that are configured to allow access to pages of data stored in locally attached memory of a peer GPU. The method includes the steps of generating, by a first GPU coupled to a first memory circuit, one or more first memory access requests to render a first primitive for a first frame, where at least one of the first memory access requests targets a first page of data that physically resides within a second memory circuit coupled to a second GPU. The first GPU requests the first page of data through a first data link coupling the first GPU to the second GPU and a register circuit within the first GPU accumulates an access request count for the first page of data. The first GPU notifies a driver that the access request count has reached a specified threshold.
Type: Application
Filed: December 28, 2017
Publication date: July 4, 2019
Inventors: Rouslan L. Dimitrov, Kirill A. Dmitriev, Andrei Khodakovsky, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
-
Patent number: 10217183
Abstract: A system, method, and computer program product are provided for allocating processor resources to process compute workloads and graphics workloads substantially simultaneously. The method includes the steps of allocating a plurality of processing units to process tasks associated with a graphics pipeline, receiving a request to allocate at least one processing unit in the plurality of processing units to process tasks associated with a compute pipeline, and reallocating the at least one processing unit to process tasks associated with the compute pipeline.
Type: Grant
Filed: December 20, 2013
Date of Patent: February 26, 2019
Assignee: NVIDIA Corporation
Inventors: Gregory S. Palmer, Jerome F. Duluk, Jr., Karim Maher Abdalla, Jonathon S. Evans, Adam Clark Weitkemper, Lacky Vasant Shah, Philip Browning Johnson, Gentaro Hirota
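The abstract describes three steps: allocate units to graphics, receive a request for compute capacity, and reallocate units to the compute pipeline. A toy allocator sketching that flow; the class and pipeline labels are hypothetical, not taken from the patent:

```python
# Illustrative allocator: processing units start on the graphics pipeline
# and individual units are moved to the compute pipeline on request.
class UnitAllocator:
    def __init__(self, num_units):
        # All units initially serve the graphics pipeline.
        self.assignment = {u: "graphics" for u in range(num_units)}

    def reallocate_to_compute(self, count):
        """Move up to `count` graphics units to the compute pipeline."""
        moved = []
        for unit, pipeline in self.assignment.items():
            if pipeline == "graphics" and len(moved) < count:
                self.assignment[unit] = "compute"
                moved.append(unit)
        return moved

alloc = UnitAllocator(num_units=4)
alloc.reallocate_to_compute(2)
# Two units now serve compute; the remaining two continue on graphics.
```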
-
Patent number: 9654548
Abstract: Installation of an application on a test bed machine is monitored to generate a streamed application set of a stream enabled version of the application. Execution of the application on the test bed machine is monitored to generate the streamed application set of the stream enabled version of the application. Stream enabled application pages and a stream enabled application install block that form the streamed application set are generated based on the monitoring of the installation and execution of the application on the test bed machine. The stream enabled application install block is provided to a client device. A request for a stream enabled application page of the stream enabled application pages is received from the client device. The stream enabled application page is provided to the client device for continued execution of the stream enabled version of the application.
Type: Grant
Filed: August 5, 2015
Date of Patent: May 16, 2017
Assignee: Numecent Holdings, Inc.
Inventors: Daniel T. Arai, Sameer Panwar, Manuel E. Benitez, Anne Marie Holler, Lacky Vasant Shah
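The flow above combines pages observed during installation and during execution into a streamed application set, with an install block delivered to the client first. A minimal sketch under assumed data shapes (page-keyed dicts, a `page_index` field); every name here is invented for illustration:

```python
# Sketch: pages touched while monitoring installation and execution on a
# test bed are merged into the streamed application set; install-time
# metadata forms the install block sent to clients at startup.
def build_streamed_application_set(install_pages, execution_pages):
    pages = dict(install_pages)
    pages.update(execution_pages)   # execution monitoring refines/extends the set
    install_block = {"page_index": sorted(pages)}  # minimal startup metadata
    return install_block, pages

block, pages = build_streamed_application_set(
    install_pages={"app.exe:0": b"MZ.."},
    execution_pages={"app.exe:1": b"\x90" * 4},
)
assert block["page_index"] == ["app.exe:0", "app.exe:1"]
```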
-
Publication number: 20150350311
Abstract: Installation of an application on a test bed machine is monitored to generate a streamed application set of a stream enabled version of the application. Execution of the application on the test bed machine is monitored to generate the streamed application set of the stream enabled version of the application. Stream enabled application pages and a stream enabled application install block that form the streamed application set are generated based on the monitoring of the installation and execution of the application on the test bed machine. The stream enabled application install block is provided to a client device. A request for a stream enabled application page of the stream enabled application pages is received from the client device. The stream enabled application page is provided to the client device for continued execution of the stream enabled version of the application.
Type: Application
Filed: August 5, 2015
Publication date: December 3, 2015
Applicant: Numecent Holdings, Inc.
Inventors: Daniel T. Arai, Sameer Panwar, Manuel E. Benitez, Anne Marie Holler, Lacky Vasant Shah
-
Patent number: 9130953
Abstract: An intelligent network streaming and execution system for conventionally coded applications provides a system that partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system and the user launches the application in the same ways that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client and the client stores the page segments in a cache. Page segments are requested by the client from the application server whenever a page fault occurs from the cache for the application program. The client prefetches page segments from the application server or the application server pushes additional page segments to the client based on the pattern of page segment requests for that particular application.
Type: Grant
Filed: November 18, 2014
Date of Patent: September 8, 2015
Assignee: Numecent Holdings, Inc.
Inventors: Daniel T. Arai, Sameer Panwar, Manuel E. Benitez, Anne Marie Holler, Lacky Vasant Shah
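The core loop above is page-fault-driven streaming: a cache miss triggers a fetch from the application server, and the access pattern drives prefetching. A minimal client-side model; the server interface is a stub and the sequential-prefetch policy is a deliberately naive stand-in for the pattern-based policy the abstract describes:

```python
# Sketch: a read that misses the cache "faults", fetches the segment from a
# (stubbed) server, and prefetches the next segment in sequence.
class StreamingClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def read_segment(self, seg):
        if seg not in self.cache:                             # page fault
            self.cache[seg] = self.server.fetch(seg)
            self.cache[seg + 1] = self.server.fetch(seg + 1)  # naive prefetch
        return self.cache[seg]

class FakeServer:
    def fetch(self, seg):
        return f"segment-{seg}"

client = StreamingClient(FakeServer())
client.read_segment(0)
assert 1 in client.cache   # the next segment was prefetched on the fault
```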
-
Publication number: 20150235015
Abstract: An optimized server for streamed applications provides a streamed application server optimized to provide efficient delivery of streamed applications to client systems across a computer network such as the Internet. The server persistently stores streamed application program sets that contain streamed application file pages. Client systems request streamed application file pages from the server using a unique set of numbers common among all servers that store the particular streamed application file pages. A license server offloads the streamed application server by performing client access privilege validations. Commonly accessed streamed application file pages are stored in a cache on the streamed application server, which attempts to retrieve requested streamed application file pages from the cache before retrieving them from persistent storage. Requested streamed application file pages are compressed before being sent to a client, as are those stored in the cache.
Type: Application
Filed: September 3, 2014
Publication date: August 20, 2015
Applicant: Numecent Holdings, Inc.
Inventors: Anne Marie Holler, Lacky Vasant Shah, Sameer Panwar, Amit Patel
-
Publication number: 20150178879
Abstract: A system, method, and computer program product are provided for allocating processor resources to process compute workloads and graphics workloads substantially simultaneously. The method includes the steps of allocating a plurality of processing units to process tasks associated with a graphics pipeline, receiving a request to allocate at least one processing unit in the plurality of processing units to process tasks associated with a compute pipeline, and reallocating the at least one processing unit to process tasks associated with the compute pipeline.
Type: Application
Filed: December 20, 2013
Publication date: June 25, 2015
Applicant: NVIDIA Corporation
Inventors: Gregory S. Palmer, Jerome F. Duluk, Jr., Karim Maher Abdalla, Jonathon S. Evans, Adam Clark Weitkemper, Lacky Vasant Shah, Philip Browning Johnson, Gentaro Hirota
-
Publication number: 20150142880
Abstract: An intelligent network streaming and execution system for conventionally coded applications provides a system that partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system and the user launches the application in the same ways that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client and the client stores the page segments in a cache. Page segments are requested by the client from the application server whenever a page fault occurs from the cache for the application program. The client prefetches page segments from the application server or the application server pushes additional page segments to the client based on the pattern of page segment requests for that particular application.
Type: Application
Filed: November 18, 2014
Publication date: May 21, 2015
Applicant: Numecent Holdings, Inc.
Inventors: Daniel T. Arai, Sameer Panwar, Manuel E. Benitez, Anne Marie Holler, Lacky Vasant Shah
-
Patent number: 8831995
Abstract: An optimized server for streamed applications provides a streamed application server optimized to provide efficient delivery of streamed applications to client systems across a computer network such as the Internet. The server persistently stores streamed application program sets that contain streamed application file pages. Client systems request streamed application file pages from the server using a unique set of numbers common among all servers that store the particular streamed application file pages. A license server offloads the streamed application server by performing client access privilege validations. Commonly accessed streamed application file pages are stored in a cache on the streamed application server, which attempts to retrieve requested streamed application file pages from the cache before retrieving them from persistent storage. Requested streamed application file pages are compressed before being sent to a client, as are those stored in the cache.
Type: Grant
Filed: November 6, 2001
Date of Patent: September 9, 2014
Assignee: Numecent Holdings, Inc.
Inventors: Anne Marie Holler, Lacky Vasant Shah, Sameer Panwar, Amit Patel
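The server-side lookup order described above is: try the cache of commonly accessed pages first, fall back to persistent storage, and compress pages before sending. A sketch of that order, with `zlib` standing in for whichever codec a real server would use; the class and storage layout are assumptions:

```python
# Sketch of the cache-then-storage lookup with compression on the way out.
import zlib

class StreamedAppServer:
    def __init__(self, storage):
        self.storage = storage   # page_id -> raw bytes (persistent store stub)
        self.cache = {}          # hot pages land here after first access

    def get_page(self, page_id):
        if page_id not in self.cache:
            # Cache miss: fall back to persistent storage, then populate cache.
            self.cache[page_id] = self.storage[page_id]
        return zlib.compress(self.cache[page_id])

server = StreamedAppServer({"p1": b"page-one-bytes"})
sent = server.get_page("p1")
assert zlib.decompress(sent) == b"page-one-bytes"
assert "p1" in server.cache   # subsequent requests will hit the cache
```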
-
Patent number: 7062567
Abstract: An intelligent network streaming and execution system for conventionally coded applications provides a system that partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system and the user launches the application in the same ways that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client and the client stores the page segments in a cache. Page segments are requested by the client from the application server whenever a page fault occurs from the cache for the application program. The client prefetches page segments from the application server or the application server pushes additional page segments to the client based on the pattern of page segment requests for that particular application.
Type: Grant
Filed: February 14, 2001
Date of Patent: June 13, 2006
Inventors: Manuel Enrique Benitez, Anne Marie Holler, Lacky Vasant Shah, Daniel Takeo Arai, Sameer Panwar
-
Patent number: 7043524
Abstract: A network caching system for streamed applications provides for the caching of streamed applications within a computer network that are accessible by client systems within the network. Clients request streamed application file pages from other client systems, proxy servers, and application servers as each streamed application file is stored in a cache and used. Streamed application file page requests are broadcast to other clients using a multicast packet. Proxy servers are provided in the network that store a select set of streamed application file pages and respond to client requests by sending a response packet containing the requested streamed application file page if the streamed application file page is stored on the proxy server. Streamed application servers store all of the streamed application file pages. Clients send requests to streamed application servers only as a last resort. Clients can concurrently send requests to other clients, to a proxy server, and to a streamed application server.
Type: Grant
Filed: November 6, 2001
Date of Patent: May 9, 2006
Assignee: OmniShift Technologies, Inc.
Inventors: Lacky Vasant Shah, Sridhar Ramakrishnan
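The hierarchy above resolves a page request against peers first (via multicast), then a proxy, and only as a last resort the streamed application server. A toy sequential model of that resolution order; real clients query tiers concurrently as the abstract notes, and all names and data shapes here are invented:

```python
# Sketch of the tiered lookup: peers -> proxy -> origin server (last resort).
def resolve_page(page_id, peers, proxy, origin_server):
    for peer in peers:                 # stands in for the multicast request
        if page_id in peer:
            return peer[page_id], "peer"
    if page_id in proxy:               # proxy holds a select set of pages
        return proxy[page_id], "proxy"
    return origin_server[page_id], "server"   # server holds all pages

peers = [{"a": b"A"}]
proxy = {"b": b"B"}
origin = {"a": b"A", "b": b"B", "c": b"C"}
assert resolve_page("a", peers, proxy, origin) == (b"A", "peer")
assert resolve_page("c", peers, proxy, origin) == (b"C", "server")
```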
-
Patent number: 6959320
Abstract: A client-side performance optimization system for streamed applications provides several approaches for fulfilling client-side application code and data file requests for streamed applications. A streaming file system or file driver is installed on the client system that receives and fulfills application code and data requests from a persistent cache or the streaming application server. The client or the server can initiate the prefetching of application code and data to improve interactive application performance. A client-to-client communication mechanism allows local application customization to travel from one client machine to another without involving server communication. Applications are patched or upgraded via a change in the root directory for that application. The client can be notified of application upgrades by the server, which can be marked as mandatory, in which case the client will force the application to be upgraded.
Type: Grant
Filed: May 15, 2001
Date of Patent: October 25, 2005
Assignee: Endeavors Technology, Inc.
Inventors: Lacky Vasant Shah, Daniel Takeo Arai, Manuel Enrique Benitez, Anne Marie Holler, Robert Curtis Wohlgemuth
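One detail above is worth unpacking: upgrading "via a change in the root directory" means the client resolves every application path through a root pointer, so an upgrade is a single pointer swap rather than a file-by-file patch. A sketch under that reading, with all names hypothetical:

```python
# Sketch: every path resolves through a root pointer, so an upgrade is just
# swapping the root; a mandatory upgrade ignores user acceptance.
class AppRoot:
    def __init__(self, root):
        self.root = root

    def resolve(self, relpath):
        return f"{self.root}/{relpath}"

    def upgrade(self, new_root, mandatory=False, user_accepts=True):
        if mandatory or user_accepts:
            self.root = new_root

app = AppRoot("/stream/app-v1")
app.upgrade("/stream/app-v2", mandatory=True, user_accepts=False)
assert app.resolve("bin/main") == "/stream/app-v2/bin/main"
```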
-
Publication number: 20030009538
Abstract: A network caching system for streamed applications provides for the caching of streamed applications within a computer network that are accessible by client systems within the network. Clients request streamed application file pages from other client systems, proxy servers, and application servers as each streamed application file is stored in a cache and used. Streamed application file page requests are broadcast to other clients using a multicast packet. Proxy servers are provided in the network that store a select set of streamed application file pages and respond to client requests by sending a response packet containing the requested streamed application file page if the streamed application file page is stored on the proxy server. Streamed application servers store all of the streamed application file pages. Clients send requests to streamed application servers only as a last resort. Clients can concurrently send requests to other clients, to a proxy server, and to a streamed application server.
Type: Application
Filed: November 6, 2001
Publication date: January 9, 2003
Inventors: Lacky Vasant Shah, Sridhar Ramakrishnan
-
Publication number: 20030004882
Abstract: An optimized server for streamed applications provides a streamed application server optimized to provide efficient delivery of streamed applications to client systems across a computer network such as the Internet. The server persistently stores streamed application program sets that contain streamed application file pages. Client systems request streamed application file pages from the server using a unique set of numbers common among all servers that store the particular streamed application file pages. A license server offloads the streamed application server by performing client access privilege validations. Commonly accessed streamed application file pages are stored in a cache on the streamed application server, which attempts to retrieve requested streamed application file pages from the cache before retrieving them from persistent storage. Requested streamed application file pages are compressed before being sent to a client, as are those stored in the cache.
Type: Application
Filed: November 6, 2001
Publication date: January 2, 2003
Inventors: Anne Marie Holler, Lacky Vasant Shah, Sameer Panwar, Amit Patel
-
Publication number: 20020161908
Abstract: An intelligent network streaming and execution system for conventionally coded applications provides a system that partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system and the user launches the application in the same ways that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client and the client stores the page segments in a cache. Page segments are requested by the client from the application server whenever a page fault occurs from the cache for the application program. The client prefetches page segments from the application server or the application server pushes additional page segments to the client based on the pattern of page segment requests for that particular application.
Type: Application
Filed: February 14, 2001
Publication date: October 31, 2002
Inventors: Manuel Enrique Benitez, Anne Marie Holler, Lacky Vasant Shah, Daniel Takeo Arai, Sameer Panwar
-
Publication number: 20020091763
Abstract: A client-side performance optimization system for streamed applications provides several approaches for fulfilling client-side application code and data file requests for streamed applications. A streaming file system or file driver is installed on the client system that receives and fulfills application code and data requests from a persistent cache or the streaming application server. The client or the server can initiate the prefetching of application code and data to improve interactive application performance. A client-to-client communication mechanism allows local application customization to travel from one client machine to another without involving server communication. Applications are patched or upgraded via a change in the root directory for that application. The client can be notified of application upgrades by the server, which can be marked as mandatory, in which case the client will force the application to be upgraded.
Type: Application
Filed: May 15, 2001
Publication date: July 11, 2002
Inventors: Lacky Vasant Shah, Daniel Takeo Arai, Manuel Enrique Benitez, Anne Marie Holler, Robert Curtis Wohlgemuth
-
Publication number: 20020087883
Abstract: An anti-piracy system for remotely served computer applications provides a client network filesystem that performs several techniques to prevent the piracy of application programs. The invention provides client-side fine-grained filtering of file accesses directed at remotely served files. Another technique filters file accesses based on where the code for the process that originated the request is stored. Yet another technique identifies crucial portions of remotely served files and filters file accesses depending on the portion targeted. A further technique filters file accesses based on the surmised purpose of the file access as determined by examining the program stack or flags associated with the request. A final technique filters file accesses based on the surmised purpose of the file access as determined by examining a history of previous file accesses by the same process.
Type: Application
Filed: May 1, 2001
Publication date: July 4, 2002
Inventors: Curt Wohlgemuth, Nicholas Ryan, Lacky Vasant Shah, Daniel Takeo Arai, Anne Marie Holler
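One of the filtering techniques above gates a file access on where the requesting process's own code is stored. A hypothetical predicate sketching that single check; the request fields and origin labels are invented for this illustration and the patent describes several additional filters not modeled here:

```python
# Sketch: allow a read of a remotely served file only when the requesting
# process's code itself comes from the streaming filesystem, denying reads
# driven by outside tools (e.g. a copier run from local disk).
def allow_access(request):
    # `request` is a dict with the origin of the requesting process's code
    # and the file being read; both fields are invented for this sketch.
    return (request["process_code_origin"] == "streamed"
            and request["target"].startswith("/stream/"))

assert allow_access(
    {"process_code_origin": "streamed", "target": "/stream/app.dll"})
assert not allow_access(
    {"process_code_origin": "local_disk", "target": "/stream/app.dll"})
```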