Patents by Inventor Lacky Vasant Shah

Lacky Vasant Shah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10430915
    Abstract: One or more copy commands are scheduled to locate one or more pages of data in the local memory of a graphics processing unit (GPU) for more efficient access to those pages during rendering. A first processing unit that is coupled to a first GPU receives a notification that an access request count for a first page of data has reached a specified threshold. The first processing unit schedules a copy command to copy the first page of data to a first memory circuit of the first GPU from a second memory circuit of a second GPU. The copy command is included within a GPU command stream.
    Type: Grant
    Filed: January 24, 2018
    Date of Patent: October 1, 2019
    Assignee: NVIDIA Corporation
    Inventors: Andrei Khodakovsky, Kirill A. Dmitriev, Rouslan L. Dimitrov, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
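
The copy-scheduling flow described in the abstract for patent 10430915 can be pictured as a small driver-side handler: a threshold notification arrives, and the driver responds by appending a copy command to a GPU command stream. The Python sketch below is a minimal illustration of that flow under assumed names; `CopyCommand`, `CommandStream`, and `on_access_threshold_reached` are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CopyCommand:
    """Hypothetical copy command: migrate one page between GPU memories."""
    page_id: int
    src_gpu: int   # GPU whose memory currently holds the page
    dst_gpu: int   # GPU that has been accessing the page remotely

@dataclass
class CommandStream:
    """Hypothetical per-GPU command stream the driver appends work to."""
    gpu_id: int
    commands: List[CopyCommand] = field(default_factory=list)

    def enqueue(self, cmd: CopyCommand) -> None:
        self.commands.append(cmd)

def on_access_threshold_reached(stream: CommandStream, page_id: int,
                                src_gpu: int, dst_gpu: int) -> None:
    """Driver-side handler: a page on a peer GPU crossed the access-count
    threshold, so schedule a copy into the requesting GPU's local memory
    by inserting a copy command into its command stream."""
    stream.enqueue(CopyCommand(page_id=page_id, src_gpu=src_gpu, dst_gpu=dst_gpu))

# Example: GPU 0 has been accessing page 42, which lives in GPU 1's memory.
stream0 = CommandStream(gpu_id=0)
on_access_threshold_reached(stream0, page_id=42, src_gpu=1, dst_gpu=0)
print(stream0.commands)
```
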
  • Patent number: 10402937
    Abstract: A method for rendering graphics frames allocates rendering work to multiple graphics processing units (GPUs) that are configured to allow access to pages of data stored in the locally attached memory of a peer GPU. The method includes the steps of generating, by a first GPU coupled to a first memory circuit, one or more first memory access requests to render a first primitive for a first frame, where at least one of the first memory access requests targets a first page of data that physically resides within a second memory circuit coupled to a second GPU. The first GPU requests the first page of data through a first data link coupling the first GPU to the second GPU, and a register circuit within the first GPU accumulates an access request count for the first page of data. The first GPU notifies a driver that the access request count has reached a specified threshold.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: September 3, 2019
    Assignee: NVIDIA Corporation
    Inventors: Rouslan L. Dimitrov, Kirill A. Dmitriev, Andrei Khodakovsky, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
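
Patent 10402937 centers on a per-page access-request counter that notifies a driver once a remotely resident page has been requested a threshold number of times. The sketch below models that behavior in Python; `AccessCounter` and `driver_notify` are hypothetical stand-ins for the register circuit and the driver interface, not names from the patent.

```python
from collections import defaultdict
from typing import Callable, Dict

class AccessCounter:
    """Hypothetical model of the register circuit that counts accesses to
    peer-resident pages and notifies a driver callback at a threshold."""

    def __init__(self, threshold: int, driver_notify: Callable[[int, int], None]):
        self.threshold = threshold
        self.driver_notify = driver_notify
        self.counts: Dict[int, int] = defaultdict(int)

    def record_access(self, page_id: int) -> None:
        """Called once per memory access request that targets a peer page."""
        self.counts[page_id] += 1
        if self.counts[page_id] == self.threshold:
            # Tell the driver this page is "hot" and a migration candidate.
            self.driver_notify(page_id, self.counts[page_id])

# Example: notify the driver once a peer page has been requested 4 times.
counter = AccessCounter(
    threshold=4,
    driver_notify=lambda page, n: print(f"page {page} reached {n} accesses"),
)
for _ in range(4):
    counter.record_access(page_id=42)
```
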
  • Publication number: 20190206018
    Abstract: One or more copy commands are scheduled to locate one or more pages of data in the local memory of a graphics processing unit (GPU) for more efficient access to those pages during rendering. A first processing unit that is coupled to a first GPU receives a notification that an access request count for a first page of data has reached a specified threshold. The first processing unit schedules a copy command to copy the first page of data to a first memory circuit of the first GPU from a second memory circuit of a second GPU. The copy command is included within a GPU command stream.
    Type: Application
    Filed: January 24, 2018
    Publication date: July 4, 2019
    Inventors: Andrei Khodakovsky, Kirill A. Dmitriev, Rouslan L. Dimitrov, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
  • Publication number: 20190206023
    Abstract: A method for rendering graphics frames allocates rendering work to multiple graphics processing units (GPUs) that are configured to allow access to pages of data stored in the locally attached memory of a peer GPU. The method includes the steps of generating, by a first GPU coupled to a first memory circuit, one or more first memory access requests to render a first primitive for a first frame, where at least one of the first memory access requests targets a first page of data that physically resides within a second memory circuit coupled to a second GPU. The first GPU requests the first page of data through a first data link coupling the first GPU to the second GPU, and a register circuit within the first GPU accumulates an access request count for the first page of data. The first GPU notifies a driver that the access request count has reached a specified threshold.
    Type: Application
    Filed: December 28, 2017
    Publication date: July 4, 2019
    Inventors: Rouslan L. Dimitrov, Kirill A. Dmitriev, Andrei Khodakovsky, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
  • Patent number: 10217183
    Abstract: A system, method, and computer program product are provided for allocating processor resources to process compute workloads and graphics workloads substantially simultaneously. The method includes the steps of allocating a plurality of processing units to process tasks associated with a graphics pipeline, receiving a request to allocate at least one processing unit in the plurality of processing units to process tasks associated with a compute pipeline, and reallocating the at least one processing unit to process tasks associated with the compute pipeline.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: February 26, 2019
    Assignee: NVIDIA Corporation
    Inventors: Gregory S. Palmer, Jerome F. Duluk, Jr., Karim Maher Abdalla, Jonathon S. Evans, Adam Clark Weitkemper, Lacky Vasant Shah, Philip Browning Johnson, Gentaro Hirota
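
Patent 10217183 describes reallocating processing units from a graphics pipeline to a compute pipeline when a compute allocation request arrives. As a rough, hypothetical illustration of that bookkeeping (the `UnitScheduler` class and its methods are invented for this sketch, not taken from the patent), the allocation step might look like:

```python
from typing import List

class UnitScheduler:
    """Hypothetical scheduler that moves processing units between a
    graphics pipeline and a compute pipeline on request."""

    def __init__(self, num_units: int):
        # Initially every processing unit is allocated to graphics work.
        self.graphics_units: List[int] = list(range(num_units))
        self.compute_units: List[int] = []

    def request_compute_units(self, count: int) -> List[int]:
        """Reallocate up to `count` units from the graphics pipeline
        to the compute pipeline and return the units that moved."""
        moved: List[int] = []
        while self.graphics_units and len(moved) < count:
            unit = self.graphics_units.pop()
            self.compute_units.append(unit)
            moved.append(unit)
        return moved

# Example: 8 units start on graphics; a compute request claims 2 of them.
sched = UnitScheduler(num_units=8)
print(sched.request_compute_units(2))   # units handed to the compute pipeline
print(sched.graphics_units)             # units still serving graphics work
```
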
  • Patent number: 9654548
    Abstract: Installation of an application on a test bed machine is monitored to generate a streamed application set of a stream enabled version of the application. Execution of the application on the test bed machine is also monitored to generate the streamed application set of the stream enabled version of the application. Stream enabled application pages and a stream enabled application install block, which together form the streamed application set, are generated based on the monitoring of the installation of the application and the monitoring of the execution of the application on the test bed machine. The stream enabled application install block is provided to a client device. A request for a stream enabled application page of the stream enabled application pages is received from the client device. The stream enabled application page is provided to the client device for continued execution of the stream enabled version of the application.
    Type: Grant
    Filed: August 5, 2015
    Date of Patent: May 16, 2017
    Assignee: Numecent Holdings, Inc.
    Inventors: Daniel T. Arai, Sameer Panwar, Manuel E. Benitez, Anne Marie Holler, Lacky Vasant Shah
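
Patent 9654548 builds a streamed application set, pages plus an install block, by monitoring an application's installation and execution on a test bed machine. The sketch below is a simplified, hypothetical reconstruction of that idea; `StreamedApplicationSet`, `build_streamed_set`, and the 4 KB page granularity are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

PAGE_SIZE = 4096  # assumed page granularity for the streamed set

@dataclass
class StreamedApplicationSet:
    """Hypothetical streamed application set: an install block plus pages."""
    install_block: dict = field(default_factory=dict)     # file layout, registry data, etc.
    pages: Dict[int, bytes] = field(default_factory=dict)

def build_streamed_set(install_trace: dict, file_bytes: bytes,
                       executed_offsets: List[int]) -> StreamedApplicationSet:
    """Combine what was observed during installation (install_trace) with
    the pages actually touched while the application ran on the test bed."""
    app_set = StreamedApplicationSet(install_block=dict(install_trace))
    for offset in executed_offsets:
        page_no = offset // PAGE_SIZE
        start = page_no * PAGE_SIZE
        app_set.pages[page_no] = file_bytes[start:start + PAGE_SIZE]
    return app_set

# Example: a tiny "binary" whose first and third pages were touched at runtime.
binary = bytes(3 * PAGE_SIZE)
app_set = build_streamed_set({"shortcut": "app.exe"}, binary, [0, 2 * PAGE_SIZE])
print(sorted(app_set.pages))  # pages that would be streamed on demand: [0, 2]
```
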
  • Publication number: 20150350311
    Abstract: Installation of an application on a test bed machine is monitored to generate a streamed application set of a stream enabled version of the application. Execution of the application on the test bed machine is also monitored to generate the streamed application set of the stream enabled version of the application. Stream enabled application pages and a stream enabled application install block, which together form the streamed application set, are generated based on the monitoring of the installation of the application and the monitoring of the execution of the application on the test bed machine. The stream enabled application install block is provided to a client device. A request for a stream enabled application page of the stream enabled application pages is received from the client device. The stream enabled application page is provided to the client device for continued execution of the stream enabled version of the application.
    Type: Application
    Filed: August 5, 2015
    Publication date: December 3, 2015
    Applicant: Numecent Holdings, Inc.
    Inventors: Daniel T. Arai, Sameer Panwar, Manuel E. Benitez, Anne Marie Holler, Lacky Vasant Shah
  • Patent number: 9130953
    Abstract: An intelligent network streaming and execution system for conventionally coded applications provides a system that partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system, and the user launches the application in the same way that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client, and the client stores the page segments in a cache. Page segments are requested by the client from the application server whenever a page fault occurs from the cache for the application program. The client prefetches page segments from the application server, or the application server pushes additional page segments to the client, based on the pattern of page segment requests for that particular application.
    Type: Grant
    Filed: November 18, 2014
    Date of Patent: September 8, 2015
    Assignee: Numecent Holdings, Inc.
    Inventors: Daniel T. Arai, Sameer Panwar, Manuel E. Benitez, Anne Marie Holler, Lacky Vasant Shah
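
Patent 9130953 describes a client that installs only a minimal portion of an application, satisfies page faults by requesting page segments from an application server, and prefetches further segments based on the request pattern. The following Python sketch shows one plausible shape of that client-side cache; `StreamingClientCache` and its naive sequential prefetch heuristic are assumptions for illustration, not details from the patent.

```python
from typing import Callable, Dict

class StreamingClientCache:
    """Hypothetical client-side cache: page faults trigger requests to the
    application server, and a simple sequential heuristic prefetches ahead."""

    def __init__(self, fetch_from_server: Callable[[int], bytes],
                 prefetch_depth: int = 1):
        self.fetch = fetch_from_server
        self.prefetch_depth = prefetch_depth
        self.cache: Dict[int, bytes] = {}

    def read_page(self, page_no: int) -> bytes:
        if page_no not in self.cache:            # page fault in the cache
            self.cache[page_no] = self.fetch(page_no)
            for ahead in range(1, self.prefetch_depth + 1):
                nxt = page_no + ahead            # naive sequential prefetch
                self.cache.setdefault(nxt, self.fetch(nxt))
        return self.cache[page_no]

# Example: the "server" just synthesizes page contents for the demo.
client = StreamingClientCache(lambda n: f"page-{n}".encode(), prefetch_depth=2)
client.read_page(0)          # faults, fetches page 0 and prefetches 1 and 2
print(sorted(client.cache))  # [0, 1, 2]
```
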
  • Publication number: 20150235015
    Abstract: An optimized server for streamed applications provides a streamed application server optimized to provide efficient delivery of streamed applications to client systems across a computer network such as the Internet. The server persistently stores streamed application program sets that contain streamed application file pages. Client systems request streamed application file pages from the server using a unique set of numbers common among all servers that store the particular streamed application file pages. A license server offloads the streamed application server by performing client access privilege validations. Commonly accessed streamed application file pages are stored in a cache on the streamed application server, which attempts to retrieve requested streamed application file pages from the cache before retrieving them from persistent storage. Requested streamed application file pages are compressed before being sent to a client, as are those stored in the cache.
    Type: Application
    Filed: September 3, 2014
    Publication date: August 20, 2015
    Applicant: Numecent Holdings, Inc.
    Inventors: Anne Marie Holler, Lacky Vasant Shah, Sameer Panwar, Amit Patel
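
Publication 20150235015 (and the related patent 8831995 below) describes a streamed application server that validates access through a license server, serves commonly accessed pages from a cache before falling back to persistent storage, and compresses pages before sending them. The sketch below is a minimal, hypothetical model of that serving path; `StreamedAppServer` and its callbacks are invented names, and `zlib` merely stands in for whatever compression the server actually uses.

```python
import zlib
from typing import Callable, Dict

class StreamedAppServer:
    """Hypothetical server: validate access via a license service, serve
    pages from an in-memory cache before persistent storage, and compress
    every page before it goes on the wire."""

    def __init__(self, load_from_storage: Callable[[int], bytes],
                 license_ok: Callable[[str], bool]):
        self.load = load_from_storage
        self.license_ok = license_ok
        self.page_cache: Dict[int, bytes] = {}   # holds already-compressed pages

    def serve_page(self, client_id: str, page_no: int) -> bytes:
        if not self.license_ok(client_id):        # access check offloaded to a license server
            raise PermissionError("license validation failed")
        if page_no not in self.page_cache:        # cache miss: read persistent storage
            self.page_cache[page_no] = zlib.compress(self.load(page_no))
        return self.page_cache[page_no]           # sent compressed

# Example with an in-memory "persistent store" and a permissive license check.
server = StreamedAppServer(lambda n: bytes(4096), lambda cid: cid == "licensed-client")
blob = server.serve_page("licensed-client", 7)
print(len(blob) < 4096)   # True: the page is compressed before being sent
```
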
  • Publication number: 20150178879
    Abstract: A system, method, and computer program product are provided for allocating processor resources to process compute workloads and graphics workloads substantially simultaneously. The method includes the steps of allocating a plurality of processing units to process tasks associated with a graphics pipeline, receiving a request to allocate at least one processing unit in the plurality of processing units to process tasks associated with a compute pipeline, and reallocating the at least one processing unit to process tasks associated with the compute pipeline.
    Type: Application
    Filed: December 20, 2013
    Publication date: June 25, 2015
    Applicant: NVIDIA Corporation
    Inventors: Gregory S. Palmer, Jerome F. Duluk, Jr., Karim Maher Abdalla, Jonathon S. Evans, Adam Clark Weitkemper, Lacky Vasant Shah, Philip Browning Johnson, Gentaro Hirota
  • Publication number: 20150142880
    Abstract: An intelligent network streaming and execution system for conventionally coded applications provides a system that partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system, and the user launches the application in the same way that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client, and the client stores the page segments in a cache. Page segments are requested by the client from the application server whenever a page fault occurs from the cache for the application program. The client prefetches page segments from the application server, or the application server pushes additional page segments to the client, based on the pattern of page segment requests for that particular application.
    Type: Application
    Filed: November 18, 2014
    Publication date: May 21, 2015
    Applicant: Numecent Holdings, Inc.
    Inventors: Daniel T. Arai, Sameer Panwar, Manuel E. Benitez, Anne Marie Holler, Lacky Vasant Shah
  • Patent number: 8831995
    Abstract: An optimized server for streamed applications provides a streamed application server optimized to provide efficient delivery of streamed applications to client systems across a computer network such as the Internet. The server persistently stores streamed application program sets that contain streamed application file pages. Client systems request streamed application file pages from the server using a unique set of numbers common among all servers that store the particular streamed application file pages. A license server offloads the streamed application server by performing client access privilege validations. Commonly accessed streamed application file pages are stored in a cache on the streamed application server, which attempts to retrieve requested streamed application file pages from the cache before retrieving them from persistent storage. Requested streamed application file pages are compressed before being sent to a client, as are those stored in the cache.
    Type: Grant
    Filed: November 6, 2001
    Date of Patent: September 9, 2014
    Assignee: Numecent Holdings, Inc.
    Inventors: Anne Marie Holler, Lacky Vasant Shah, Sameer Panwar, Amit Patel
  • Patent number: 7062567
    Abstract: An intelligent network streaming and execution system for conventionally coded applications provides a system that partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system, and the user launches the application in the same way that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client, and the client stores the page segments in a cache. Page segments are requested by the client from the application server whenever a page fault occurs from the cache for the application program. The client prefetches page segments from the application server, or the application server pushes additional page segments to the client, based on the pattern of page segment requests for that particular application.
    Type: Grant
    Filed: February 14, 2001
    Date of Patent: June 13, 2006
    Inventors: Manuel Enrique Benitez, Anne Marie Holler, Lacky Vasant Shah, Daniel Takeo Arai, Sameer Panwar
  • Patent number: 7043524
    Abstract: A network caching system for streamed applications provides for the caching, within a computer network, of streamed applications that are accessible by client systems within the network. Clients request streamed application file pages from other client systems, proxy servers, and application servers as each streamed application file is stored in a cache and used. Streamed application file page requests are broadcast to other clients using a multicast packet. Proxy servers are provided in the network that store a select set of streamed application file pages and respond to client requests by sending a response packet containing the requested streamed application file page if that page is stored on the proxy server. Streamed application servers store all of the streamed application file pages. Clients try to send requests to streamed application servers only as a last resort. Clients can concurrently send requests to other clients, to a proxy server, and to a streamed application server.
    Type: Grant
    Filed: November 6, 2001
    Date of Patent: May 9, 2006
    Assignee: OmniShift Technologies, Inc.
    Inventors: Lacky Vasant Shah, Sridhar Ramakrishnan
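
Patent 7043524 describes a lookup hierarchy in which clients ask peer clients (via multicast), proxy servers, and, only as a last resort, the streamed application servers that hold every page. The sketch below flattens that into a sequential lookup for clarity, although the abstract also allows the requests to be issued concurrently; `fetch_page` and its callbacks are hypothetical names invented for this illustration.

```python
from typing import Callable, Optional

def fetch_page(page_id: int,
               ask_peers: Callable[[int], Optional[bytes]],
               ask_proxy: Callable[[int], Optional[bytes]],
               ask_app_server: Callable[[int], bytes]) -> bytes:
    """Hypothetical lookup order: try peer clients (a stand-in for the
    multicast request) and the proxy server first, and only fall back to
    the streamed application server, which stores every page."""
    for source in (ask_peers, ask_proxy):
        page = source(page_id)
        if page is not None:
            return page
    return ask_app_server(page_id)

# Example: the proxy holds page 3; every other page comes from the app server.
peers = lambda n: None                                  # no peer has the page
proxy = lambda n: b"proxy-copy" if n == 3 else None     # proxy caches page 3 only
server = lambda n: b"server-copy"                        # server has everything
print(fetch_page(3, peers, proxy, server))   # b'proxy-copy'
print(fetch_page(9, peers, proxy, server))   # b'server-copy'
```
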
  • Patent number: 6959320
    Abstract: A client-side performance optimization system for streamed applications provides several approaches for fulfilling client-side application code and data file requests for streamed applications. A streaming file system or file driver installed on the client system receives and fulfills application code and data requests from a persistent cache or the streaming application server. The client or the server can initiate the prefetching of application code and data to improve interactive application performance. A client-to-client communication mechanism allows local application customization to travel from one client machine to another without involving server communication. Applications are patched or upgraded via a change in the root directory for that application. The server can notify the client of application upgrades, which can be marked as mandatory; in that case the client forces the application to be upgraded.
    Type: Grant
    Filed: May 15, 2001
    Date of Patent: October 25, 2005
    Assignee: Endeavors Technology, Inc.
    Inventors: Lacky Vasant Shah, Daniel Takeo Arai, Manuel Enrique Benitez, Anne Marie Holler, Robert Curtis Wohlgemuth
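
One distinctive element of patent 6959320 is that patches and upgrades are applied by changing the root directory associated with an application, with the server able to mark an upgrade as mandatory. The following is a minimal, hypothetical sketch of that mechanism; the `StreamedAppMount` class and its paths are invented for illustration.

```python
class StreamedAppMount:
    """Hypothetical client-side view of one streamed application: upgrades
    and patches are applied by switching the root directory the streaming
    file system resolves paths against."""

    def __init__(self, root_dir: str):
        self.root_dir = root_dir

    def resolve(self, relative_path: str) -> str:
        """Resolve an application-relative path against the current root."""
        return f"{self.root_dir}/{relative_path}"

    def notify_upgrade(self, new_root_dir: str, mandatory: bool) -> None:
        """Server-initiated upgrade notification; a mandatory upgrade is
        applied immediately, while an optional one could be deferred."""
        if mandatory:
            self.root_dir = new_root_dir

# Example: a mandatory upgrade repoints every later path lookup.
mount = StreamedAppMount("/stream/app/v1.0")
print(mount.resolve("bin/app.exe"))          # /stream/app/v1.0/bin/app.exe
mount.notify_upgrade("/stream/app/v1.1", mandatory=True)
print(mount.resolve("bin/app.exe"))          # /stream/app/v1.1/bin/app.exe
```
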
  • Publication number: 20030009538
    Abstract: A network caching system for streamed applications provides for the caching, within a computer network, of streamed applications that are accessible by client systems within the network. Clients request streamed application file pages from other client systems, proxy servers, and application servers as each streamed application file is stored in a cache and used. Streamed application file page requests are broadcast to other clients using a multicast packet. Proxy servers are provided in the network that store a select set of streamed application file pages and respond to client requests by sending a response packet containing the requested streamed application file page if that page is stored on the proxy server. Streamed application servers store all of the streamed application file pages. Clients try to send requests to streamed application servers only as a last resort. Clients can concurrently send requests to other clients, to a proxy server, and to a streamed application server.
    Type: Application
    Filed: November 6, 2001
    Publication date: January 9, 2003
    Inventors: Lacky Vasant Shah, Sridhar Ramakrishnan
  • Publication number: 20030004882
    Abstract: An optimized server for streamed applications provides a streamed application server optimized to provide efficient delivery of streamed applications to client systems across a computer network such as the Internet. The server persistently stores streamed application program sets that contain streamed application file pages. Client systems request streamed application file pages from the server using a unique set of numbers common among all servers that store the particular streamed application file pages. A license server offloads the streamed application server by performing client access privilege validations. Commonly accessed streamed application file pages are stored in a cache on the streamed application server, which attempts to retrieve requested streamed application file pages from the cache before retrieving them from persistent storage. Requested streamed application file pages are compressed before being sent to a client, as are those stored in the cache.
    Type: Application
    Filed: November 6, 2001
    Publication date: January 2, 2003
    Inventors: Anne Marie Holler, Lacky Vasant Shah, Sameer Panwar, Amit Patel
  • Publication number: 20020161908
    Abstract: An intelligent network streaming and execution system for conventionally coded applications provides a system that partitions an application program into page segments by observing the manner in which the application program is conventionally installed. A minimal portion of the application program is installed on a client system, and the user launches the application in the same way that applications on other client file systems are started. An application program server streams the page segments to the client as the application program executes on the client, and the client stores the page segments in a cache. Page segments are requested by the client from the application server whenever a page fault occurs from the cache for the application program. The client prefetches page segments from the application server, or the application server pushes additional page segments to the client, based on the pattern of page segment requests for that particular application.
    Type: Application
    Filed: February 14, 2001
    Publication date: October 31, 2002
    Inventors: Manuel Enrique Benitez, Anne Marie Holler, Lacky Vasant Shah, Daniel Takeo Arai, Sameer Panwar
  • Publication number: 20020091763
    Abstract: A client-side performance optimization system for streamed applications provides several approaches for fulfilling client-side application code and data file requests for streamed applications. A streaming file system or file driver installed on the client system receives and fulfills application code and data requests from a persistent cache or the streaming application server. The client or the server can initiate the prefetching of application code and data to improve interactive application performance. A client-to-client communication mechanism allows local application customization to travel from one client machine to another without involving server communication. Applications are patched or upgraded via a change in the root directory for that application. The server can notify the client of application upgrades, which can be marked as mandatory; in that case the client forces the application to be upgraded.
    Type: Application
    Filed: May 15, 2001
    Publication date: July 11, 2002
    Inventors: Lacky Vasant Shah, Daniel Takeo Arai, Manuel Enrique Benitez, Anne Marie Holler, Robert Curtis Wohlgemuth
  • Publication number: 20020087883
    Abstract: An anti-piracy system for remotely served computer applications provides a client network filesystem that performs several techniques to prevent the piracy of application programs. The invention provides client-side fine-grained filtering of file accesses directed at remotely served files. Another technique filters file accesses based on where the code for the process that originated the request is stored. Yet another technique identifies crucial portions of remotely served files and filters file accesses depending on the portion targeted. A further technique filters file accesses based on the surmised purpose of the file access, as determined by examining the program stack or flags associated with the request. A final technique filters file accesses based on the surmised purpose of the file access, as determined by examining a history of previous file accesses by the same process.
    Type: Application
    Filed: May 1, 2001
    Publication date: July 4, 2002
    Inventors: Curt Wohlgemuth, Nicholas Ryan, Lacky Vasant Shah, Daniel Takeo Arai, Anne Marie Holler
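
Publication 20020087883 lists several filters that together decide whether a file access against a remotely served file should be allowed: where the requesting process's code is stored, whether a crucial portion of the file is targeted, the surmised purpose of the access, and the process's access history. The sketch below combines those signals into a single, hypothetical decision function; the `FileAccess` class, the thresholds, and the trusted-path prefix are assumptions made only for illustration.

```python
from dataclasses import dataclass

@dataclass
class FileAccess:
    """Hypothetical description of one access against a remotely served file."""
    process_code_path: str     # where the requesting process's code is stored
    target_offset: int         # byte offset being read
    surmised_purpose: str      # e.g. "execute" vs "copy", inferred from stack/flags
    recent_reads: int          # how many reads this process issued recently

CRUCIAL_RANGES = [(0, 4096)]          # assumed "crucial portion" of the file
TRUSTED_CODE_PREFIX = "/stream/"      # assumption: code served by the stream is trusted

def allow_access(acc: FileAccess) -> bool:
    """Combine the filters described in the abstract into one allow/deny decision."""
    in_crucial = any(lo <= acc.target_offset < hi for lo, hi in CRUCIAL_RANGES)
    trusted_origin = acc.process_code_path.startswith(TRUSTED_CODE_PREFIX)
    looks_like_bulk_copy = acc.surmised_purpose == "copy" or acc.recent_reads > 1000
    if in_crucial and (not trusted_origin or looks_like_bulk_copy):
        return False
    return True

# Example: a foreign process scraping a crucial region is denied; normal
# execution from streamed code is allowed.
print(allow_access(FileAccess("/home/user/dumper", 0, "copy", 5000)))   # False
print(allow_access(FileAccess("/stream/app/bin", 0, "execute", 3)))     # True
```
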