Patents by Inventor Ashish Srivastava
Ashish Srivastava has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11960916
Abstract: The disclosed technology is generally directed to virtual machines. In one example of the technology, a network change from a first virtual network to a second virtual network is reconfigured for a first virtual machine that is executing on a first virtual machine host. The reconfiguring includes the following. In the first virtual machine host, a mapping change from the first virtual network to the second virtual network is configured by reprogramming drivers in the first virtual machine host for route mapping for the second virtual network. A Dynamic Host Configuration Protocol (DHCP) retrigger is caused without rebooting the first virtual machine. A configuration file is provided to the first virtual machine. The configuration file includes user-specific networking settings. The first virtual machine is reconfigured in accordance with the user-specific networking settings.
Type: Grant
Filed: April 19, 2021
Date of Patent: April 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sushant Pramod Rewaskar, Md. Daud Hossain Howlader, Ashish Bhargava, Nisheeth Srivastava, Naveen Prabhat, Jayesh Kumaran, Xinyan Zan, Abhishek Shukla, Rishabh Tewari
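The live-reconfiguration flow this abstract describes (remap routes on the host, retrigger DHCP without a reboot, then apply user-specific settings) can be sketched as a simple orchestration sequence. This is a minimal illustrative sketch with hypothetical class and method names, not the patented implementation.

```python
class Host:
    """Hypothetical virtual machine host that tracks what was done, in order."""
    def __init__(self):
        self.log = []

    def reprogram_route_mapping(self, old_vnet, new_vnet):
        # Stand-in for reprogramming the host's route-mapping drivers.
        self.log.append(f"remap {old_vnet}->{new_vnet}")


class VM:
    """Hypothetical guest VM; note it is never rebooted during the flow."""
    def __init__(self, host):
        self.host = host
        self.settings = None
        self.rebooted = False

    def retrigger_dhcp(self):
        self.host.log.append("dhcp-retrigger")

    def apply_config(self, settings):
        self.settings = settings
        self.host.log.append("apply-config")


def move_vm_to_network(host, vm, old_vnet, new_vnet, settings):
    host.reprogram_route_mapping(old_vnet, new_vnet)  # step 1: host-side mapping change
    vm.retrigger_dhcp()                               # step 2: DHCP retrigger, no reboot
    vm.apply_config(settings)                         # step 3: user-specific settings


host = Host()
vm = VM(host)
move_vm_to_network(host, vm, "vnet-a", "vnet-b", {"dns": "10.0.0.53"})
print(host.log)
```

The ordering matters: the host mapping must exist before the DHCP retrigger so the guest picks up addressing from the new virtual network.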
-
Patent number: 11893653
Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one embodiment, the presented new approach or solution uses Operating System (OS) allocation on the central processing unit (CPU) combined with graphics processing unit (GPU) driver mappings to provide a unified virtual address (VA) across both GPU and CPU. The new approach helps ensure that a GPU VA pointer does not collide with a CPU pointer provided by OS CPU allocation (e.g., one returned by the “malloc” C runtime API).
Type: Grant
Filed: May 9, 2019
Date of Patent: February 6, 2024
Assignee: NVIDIA Corporation
Inventors: Amit Rao, Ashish Srivastava, Yogesh Kini
-
Patent number: 11151120
Abstract: There are provided systems and methods for determining data validity during data processing for multiple processing stacks. During processing requests with a service provider, each request may go through a data flow that invokes multiple processing stacks, where the data is transmitted over a network to different data processing nodes. For example, a distributed computing architecture may invoke multiple disparate nodes to process data, which may become corrupted during data transmission and processing. To ensure data validity, a framework may be provided that provides data translators for each processing stack to convert data handled in a processing format for that stack into a base data format utilized by the framework. The framework may utilize checksums or other hash values of the data in the base data format to determine if the data has been altered at different processing nodes or stacks.
Type: Grant
Filed: March 29, 2019
Date of Patent: October 19, 2021
Assignee: PAYPAL, INC.
Inventors: Shanmugasundaram Alagumuthu, Vikas Prabhakar, Ashish Srivastava
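The key idea in this abstract, translating each stack's payload into one canonical base format and comparing checksums of that base form at each node, can be sketched in a few lines. The translator and field names below are hypothetical illustrations, not the patented framework.

```python
import hashlib
import json


def to_base_format(stack_payload):
    """Hypothetical translator: normalize a stack-specific payload into a
    canonical base format (sorted-key JSON bytes) so checksums are comparable
    regardless of how a given processing stack ordered or encoded the data."""
    return json.dumps(stack_payload, sort_keys=True).encode()


def checksum(stack_payload):
    return hashlib.sha256(to_base_format(stack_payload)).hexdigest()


# Each processing node recomputes the checksum over the base format;
# a mismatch against the baseline means the data was altered in transit.
original = {"amount": 100, "currency": "USD"}
baseline = checksum(original)

after_node_a = {"currency": "USD", "amount": 100}   # reordered, same content
after_node_b = {"currency": "USD", "amount": 101}   # corrupted value

print(checksum(after_node_a) == baseline)  # True: reordering is harmless
print(checksum(after_node_b) == baseline)  # False: corruption detected
```

Canonicalizing before hashing is what lets disparate stacks (each with its own native format) agree on a single validity check.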
-
Publication number: 20200311049
Abstract: There are provided systems and methods for determining data validity during data processing for multiple processing stacks. During processing requests with a service provider, each request may go through a data flow that invokes multiple processing stacks, where the data is transmitted over a network to different data processing nodes. For example, a distributed computing architecture may invoke multiple disparate nodes to process data, which may become corrupted during data transmission and processing. To ensure data validity, a framework may be provided that provides data translators for each processing stack to convert data handled in a processing format for that stack into a base data format utilized by the framework. The framework may utilize checksums or other hash values of the data in the base data format to determine if the data has been altered at different processing nodes or stacks.
Type: Application
Filed: March 29, 2019
Publication date: October 1, 2020
Inventors: Shanmugasundaram Alagumuthu, Vikas Prabhakar, Ashish Srivastava
-
Publication number: 20190266695
Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one embodiment, the presented new approach or solution uses Operating System (OS) allocation on the central processing unit (CPU) combined with graphics processing unit (GPU) driver mappings to provide a unified virtual address (VA) across both GPU and CPU. The new approach helps ensure that a GPU VA pointer does not collide with a CPU pointer provided by OS CPU allocation (e.g., one returned by the “malloc” C runtime API).
Type: Application
Filed: May 9, 2019
Publication date: August 29, 2019
Inventors: Amit Rao, Ashish Srivastava, Yogesh Kini
-
Patent number: 10333724
Abstract: The present disclosure provides a method, non-transitory computer-readable storage medium, and computer system that implement a latency monitoring and reporting service configured to collect and report latency of service transactions. In one embodiment, a chronicler object is generated and transmitted to a charging engine, where the chronicler object is configured to collect a set of time points as the chronicler object travels through one or more components of the charging engine. Upon return of the chronicler object, the set of time points is extracted from the chronicler object and added to one of a plurality of accumulator objects. Each accumulator object includes a plurality of sets of time points from a plurality of chronicler objects that are received during a reporting window. The plurality of sets of time points of each accumulator object is used to calculate the latency of service transactions.
Type: Grant
Filed: November 25, 2014
Date of Patent: June 25, 2019
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Louis Thomas Piro, Jr., Jens Kaemmerer, Ashish Srivastava, Diana Yuryeva
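The chronicler/accumulator pattern this abstract describes can be sketched as follows: a chronicler records a timestamp at each component it passes through, and an accumulator aggregates many chroniclers' point sets within a reporting window. Class names and component names below are hypothetical, and the latency math is a simple end-minus-start stand-in for whatever the patented service computes.

```python
import time


class Chronicler:
    """Hypothetical sketch: collects (component, timestamp) points as it
    travels through charging-engine components."""
    def __init__(self):
        self.points = []

    def mark(self, component):
        self.points.append((component, time.monotonic()))


class Accumulator:
    """Aggregates point sets from many chroniclers received in one
    reporting window, then derives a latency figure from them."""
    def __init__(self):
        self.point_sets = []

    def add(self, points):
        self.point_sets.append(points)

    def mean_latency(self):
        # End-to-end span per chronicler: last timestamp minus first.
        spans = [pts[-1][1] - pts[0][1] for pts in self.point_sets]
        return sum(spans) / len(spans)


acc = Accumulator()
for _ in range(3):
    c = Chronicler()
    c.mark("ingress")
    c.mark("rating")
    c.mark("egress")
    acc.add(c.points)

print(f"mean latency: {acc.mean_latency():.6f}s")
```

Because the chronicler rides along with the transaction, per-component timings come for free: any adjacent pair of points bounds the time spent in one component.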
-
Patent number: 10319060
Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one embodiment, the presented new approach or solution uses Operating System (OS) allocation on the central processing unit (CPU) combined with graphics processing unit (GPU) driver mappings to provide a unified virtual address (VA) across both GPU and CPU. The new approach helps ensure that a GPU VA pointer does not collide with a CPU pointer provided by OS CPU allocation (e.g., one returned by the “malloc” C runtime API).
Type: Grant
Filed: January 20, 2015
Date of Patent: June 11, 2019
Assignee: Nvidia Corporation
Inventors: Amit Rao, Ashish Srivastava, Yogesh Kini
-
Patent number: 9736034
Abstract: In accordance with various embodiments, systems and methods that provide unified charging across different network interfaces are provided. A system for small batch processing of usage requests can include a service broker, a plurality of servers wherein each server includes customer data, and a plurality of queues, each associated with a different server. When a usage request is received from a network entity, the service broker is configured to determine an internal ID associated with data requested by the usage request, determine on which particular server of the plurality of servers the data requested by the usage request is stored, enqueue the usage request in a particular queue associated with the particular server, and upon a trigger event, send all requests in the particular queue to the particular server in a batch.
Type: Grant
Filed: September 19, 2012
Date of Patent: August 15, 2017
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Jens Kaemmerer, Ashish Srivastava
-
Patent number: 9419958
Abstract: A system with a tenant aware in-memory data grid includes a data grid configured to store data in memory. A request manager is configured to receive a data grid label and a tenant identifier and to request a data grid entry based on the data grid label and tenant identifier. A data grid controller is configured to receive a request for data from the data grid based on a combined data grid label and tenant identifier. A security provider is configured to authenticate and authorize the request for data.
Type: Grant
Filed: November 25, 2014
Date of Patent: August 16, 2016
Assignee: Oracle International Corporation
Inventor: Ashish Srivastava
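The core of this abstract is that grid entries are addressed by a combined key of data grid label plus tenant identifier, with a security provider gating access. A minimal sketch, assuming hypothetical class names and a set-membership stand-in for real authentication/authorization:

```python
class TenantAwareGrid:
    """Hypothetical sketch: entries are keyed by (tenant_id, label), so two
    tenants using the same label never see each other's data."""
    def __init__(self):
        self._store = {}
        self._authorized = set()  # stand-in for the security provider

    def authorize(self, tenant_id):
        self._authorized.add(tenant_id)

    def put(self, tenant_id, label, value):
        self._check(tenant_id)
        self._store[(tenant_id, label)] = value  # combined key

    def get(self, tenant_id, label):
        self._check(tenant_id)
        return self._store.get((tenant_id, label))

    def _check(self, tenant_id):
        if tenant_id not in self._authorized:
            raise PermissionError(f"tenant {tenant_id!r} not authorized")


grid = TenantAwareGrid()
grid.authorize("acme")
grid.authorize("globex")
grid.put("acme", "invoice:42", {"total": 10})
grid.put("globex", "invoice:42", {"total": 99})

print(grid.get("acme", "invoice:42"))  # {'total': 10}: same label, isolated data
```

Folding the tenant ID into the key means isolation holds even if every tenant uses identical labels, with no per-tenant grid instances required.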
-
Publication number: 20160149882
Abstract: A system with a tenant aware in-memory data grid includes a data grid configured to store data in memory. A request manager is configured to receive a data grid label and a tenant identifier and to request a data grid entry based on the data grid label and tenant identifier. A data grid controller is configured to receive a request for data from the data grid based on a combined data grid label and tenant identifier. A security provider is configured to authenticate and authorize the request for data.
Type: Application
Filed: November 25, 2014
Publication date: May 26, 2016
Inventor: Ashish SRIVASTAVA
-
Publication number: 20150206277
Abstract: The present invention facilitates efficient and effective utilization of unified virtual addresses across multiple components. In one embodiment, the presented new approach or solution uses Operating System (OS) allocation on the central processing unit (CPU) combined with graphics processing unit (GPU) driver mappings to provide a unified virtual address (VA) across both GPU and CPU. The new approach helps ensure that a GPU VA pointer does not collide with a CPU pointer provided by OS CPU allocation (e.g., one returned by the “malloc” C runtime API).
Type: Application
Filed: January 20, 2015
Publication date: July 23, 2015
Inventors: Amit RAO, Ashish SRIVASTAVA, Yogesh KINI, Alban DOUILLET, Geoffrey GERFIN, Mayank KAUSHIK, Nikita SHULGA, Vyas VENKATARAMAN, David FONTAINE, Mark HAIRGROVE, Piotr JAROSZYNSKI, Stephen JONES, Vivek KINI
-
Publication number: 20150149625
Abstract: The present disclosure provides a method, non-transitory computer-readable storage medium, and computer system that implement a latency monitoring and reporting service configured to collect and report latency of service transactions. In one embodiment, a chronicler object is generated and transmitted to a charging engine, where the chronicler object is configured to collect a set of time points as the chronicler object travels through one or more components of the charging engine. Upon return of the chronicler object, the set of time points is extracted from the chronicler object and added to one of a plurality of accumulator objects. Each accumulator object includes a plurality of sets of time points from a plurality of chronicler objects that are received during a reporting window. The plurality of sets of time points of each accumulator object is used to calculate the latency of service transactions.
Type: Application
Filed: November 25, 2014
Publication date: May 28, 2015
Inventors: Louis Thomas Piro, Jr., Jens Kaemmerer, Ashish Srivastava, Diana Yuryeva
-
Publication number: 20140082170
Abstract: In accordance with various embodiments, systems and methods that provide unified charging across different network interfaces are provided. A system for small batch processing of usage requests can include a service broker, a plurality of servers wherein each server includes customer data, and a plurality of queues, each associated with a different server. When a usage request is received from a network entity, the service broker is configured to determine an internal ID associated with data requested by the usage request, determine on which particular server of the plurality of servers the data requested by the usage request is stored, enqueue the usage request in a particular queue associated with the particular server, and upon a trigger event, send all requests in the particular queue to the particular server in a batch.
Type: Application
Filed: September 19, 2012
Publication date: March 20, 2014
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventors: Jens Kaemmerer, Ashish Srivastava
-
Publication number: 20130152196
Abstract: Techniques for throttling of rogue entities to push notification servers are described. An apparatus may comprise a processor and a memory communicatively coupled to the processor. The memory may store an application, the application maintaining a monitored domain table, the application maintaining an offending domain table, the application operative to receive an incoming request from a client in a domain, to detect harmful activity based on the request, and to respond to the harmful activity based on one or both of the monitored domain table and the offending domain table. Other embodiments are described and claimed.
Type: Application
Filed: June 21, 2012
Publication date: June 13, 2013
Applicant: MICROSOFT CORPORATION
Inventors: Neeraj Garg, Suvarna Singh, Rahul Thatte, Amrut Kale, Ashish Srivastava, Devi J V, Poornima Siddabattuni, Rajesh Peddibhotla, Sukumar Rayan, Aidan Downes, Deepak Rao, Vadim Eydelman, Bimal Mehta
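The two-table scheme this abstract describes (a monitored domain table plus an offending domain table) can be sketched as an escalation ladder: harmful requests put a domain under monitoring, repeated harm promotes it to the offending table, and offending domains get their requests rejected outright. The promotion threshold, table shapes, and harm detection below are hypothetical placeholders, not the patented mechanism.

```python
class Throttler:
    """Hypothetical sketch of the monitored/offending domain tables."""
    PROMOTE_AFTER = 3  # assumed threshold for escalation

    def __init__(self):
        self.monitored = {}    # domain -> count of harmful requests observed
        self.offending = set() # domains whose requests are rejected

    def handle(self, domain, is_harmful):
        if domain in self.offending:
            return "rejected"              # offending table: refuse outright
        if is_harmful:
            self.monitored[domain] = self.monitored.get(domain, 0) + 1
            if self.monitored[domain] >= self.PROMOTE_AFTER:
                self.offending.add(domain) # escalate to offending table
            return "flagged"               # monitored table: watch, don't block yet
        return "accepted"


t = Throttler()
print(t.handle("good.example", False))     # accepted
for _ in range(3):
    t.handle("rogue.example", True)        # flagged three times -> promoted
print(t.handle("rogue.example", False))    # rejected
```

Keeping the two tables separate lets the server stay lenient with first offenders (monitoring costs nothing for well-behaved clients) while cheaply short-circuiting traffic from domains already judged rogue.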