Patents by Inventor Ashok Srinivasa Murthy
Ashok Srinivasa Murthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200322253

Abstract: Deploying a point of presence (PoP) changes traffic flow to a cloud service provider. To determine if the PoP improves the performance of a cloud service to a client, actual network latencies between the client and the cloud service are measured. In more complex scenarios, multiple PoPs are used. The client sends multiple requests for the same content to the cloud provider. The requests are sent via different routes. The cloud provider serves the requests and collates the latency information. Based on the latency information, a route for a future request is selected, resources are allocated, or a user interface is presented. The process of determining the latency for content delivered by different routes may be repeated for content of different sizes. A future request is routed along the network path that provides the lowest latency for the data being requested.

Type: Application
Filed: June 23, 2020
Publication date: October 8, 2020
Inventors: Ashok Srinivasa Murthy, Sunny Rameshkumar Gurnani
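The route-selection idea in this abstract can be sketched briefly: the same content is fetched over several candidate routes (for example, direct versus via a PoP), the measured latencies are collated per route, and future requests use the fastest route. All names and the use of the median here are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of latency-based route selection, assuming collated
# per-route latency samples for one content size.
from statistics import median

def select_route(latency_samples):
    """latency_samples: {route_name: [latency_ms, ...]}.
    Returns the route with the lowest median measured latency."""
    return min(latency_samples, key=lambda r: median(latency_samples[r]))

# Hypothetical collated measurements for a 1 MB object, in milliseconds.
samples = {
    "direct": [182, 175, 190],
    "via-pop-west": [96, 101, 94],
}
best = select_route(samples)
print(best)  # -> via-pop-west
```

In practice the patent also repeats this measurement per content size, so a fuller version would key the sample table by (route, size class) and select per request size.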
-
Patent number: 10735307

Abstract: Deploying a point of presence (PoP) changes traffic flow to a cloud service provider. To determine if the PoP improves the performance of a cloud service to a client, actual network latencies between the client and the cloud service are measured. In more complex scenarios, multiple PoPs are used. The client sends multiple requests for the same content to the cloud provider. The requests are sent via different routes. The cloud provider serves the requests and collates the latency information. Based on the latency information, a route for a future request is selected, resources are allocated, or a user interface is presented. The process of determining the latency for content delivered by different routes may be repeated for content of different sizes. A future request is routed along the network path that provides the lowest latency for the data being requested.

Type: Grant
Filed: January 10, 2019
Date of Patent: August 4, 2020
Assignee: eBay Inc.
Inventors: Ashok Srinivasa Murthy, Sunny Rameshkumar Gurnani
-
Publication number: 20200228437

Abstract: Deploying a point of presence (PoP) changes traffic flow to a cloud service provider. To determine if the PoP improves the performance of a cloud service to a client, actual network latencies between the client and the cloud service are measured. In more complex scenarios, multiple PoPs are used. The client sends multiple requests for the same content to the cloud provider. The requests are sent via different routes. The cloud provider serves the requests and collates the latency information. Based on the latency information, a route for a future request is selected, resources are allocated, or a user interface is presented. The process of determining the latency for content delivered by different routes may be repeated for content of different sizes. A future request is routed along the network path that provides the lowest latency for the data being requested.

Type: Application
Filed: January 10, 2019
Publication date: July 16, 2020
Inventors: Ashok Srinivasa Murthy, Sunny Rameshkumar Gurnani
-
Publication number: 20200177372

Abstract: Technologies are shown for HGM-based control of smart contract execution. HGM control rules control function calls at a system level by utilizing function boundary detection instrumentation in a kernel that executes smart contracts. The detection instrumentation generates a call stack that represents a chain of function calls in the kernel for a smart contract. The HGM control rules are applied to HGMs collected from the call stack to allow or prohibit specific HGMs observed in functions or function call chains. HGM control rules can use dynamic state data in the function call chain. If the dynamic state data observed in function call chains does not meet the requirements defined in the HGM control rules, then the function call can be blocked from executing or completing execution. The HGM control rules can be generated by executing known sets of acceptable or vulnerable smart contracts and collecting the resulting HGMs.

Type: Application
Filed: October 18, 2019
Publication date: June 4, 2020
Inventors: Venkata Siva Vijayendra BHAMIDIPATI, Michael CHAN, Derek CHAMORRO, Arpit JAIN, Ashok Srinivasa MURTHY
-
Publication number: 20200175155

Abstract: Technologies are shown for system level function based access control for smart contract execution on a blockchain. Access control rules control function calls at a system level by utilizing function boundary detection instrumentation in a kernel that executes smart contracts. The detection instrumentation generates a call stack that represents a chain of function calls in the kernel for execution of a smart contract. The access control rules are applied to the function call stack to allow or prohibit specific functions or function call chains. Access control rules can also define allowed or prohibited parameter data in the function call chain. If the function call chain or parameters do not meet the requirements defined in the access control rules, then the function call can be blocked from executing or completing execution. The access control rules can produce sophisticated access control policies based on complex function call chains.

Type: Application
Filed: June 3, 2019
Publication date: June 4, 2020
Inventors: Venkata Siva Vijayendra BHAMIDIPATI, Michael CHAN, Derek CHAMORRO, Arpit JAIN, Ashok Srinivasa MURTHY
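The mechanism this abstract (and the two related filings) describes can be sketched as follows: instrumentation yields a call stack for a smart contract, and rules allow or prohibit specific call chains, optionally checking parameter data observed in the chain. The rule shape, function names, and predicate style below are illustrative assumptions, not the patent's actual format.

```python
# Hedged sketch of call-chain-based access control: a rule names a
# prohibited ordered chain of functions, with optional per-function
# parameter predicates; a call is blocked if any rule matches.

def violates_rule(call_stack, rule):
    """call_stack: list of (function_name, params) frames in call order.
    rule: {'chain': [names...], 'param_check': {name: predicate}}."""
    names = [frame[0] for frame in call_stack]
    chain = rule["chain"]
    # Look for the prohibited chain as a contiguous run of calls.
    for i in range(len(names) - len(chain) + 1):
        if names[i:i + len(chain)] == chain:
            checks = rule.get("param_check", {})
            window = call_stack[i:i + len(chain)]
            # The rule fires only if every parameter predicate also holds.
            if all(checks.get(fn, lambda p: True)(params) for fn, params in window):
                return True
    return False

def allow_call(call_stack, rules):
    """Allow execution unless some prohibited chain (with matching params) appears."""
    return not any(violates_rule(call_stack, r) for r in rules)

rules = [{
    "chain": ["transfer", "external_call"],  # hypothetical prohibited chain
    "param_check": {"transfer": lambda p: p.get("amount", 0) > 1000},
}]
stack = [("entry", {}), ("transfer", {"amount": 5000}), ("external_call", {})]
print(allow_call(stack, rules))  # -> False (blocked)
```

The same skeleton covers the sibling filings by changing what the rules test: permissions policy per domain, user, or resource, or observed metrics instead of raw function names.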
-
Publication number: 20200175156

Abstract: Technologies are shown for function-level permissions control for smart contract execution to implement permissions policy on a blockchain. Permissions control rules control function calls at a system level by utilizing function boundary detection instrumentation in a kernel that executes smart contracts. The detection instrumentation generates a call stack that represents a chain of function calls in the kernel for a smart contract. The permissions control rules are applied to the call stack to implement permissions control policy. Permissions control rules can use dynamic state data in the function call chain. If the dynamic state data observed in function call chains does not meet the requirements defined in the permissions control rules, then the function call can be blocked from executing or completing execution. The permissions control rules can be generated for a variety of different entities, such as a domain, user, or resource.

Type: Application
Filed: November 27, 2019
Publication date: June 4, 2020
Inventors: Venkata Siva Vijayendra BHAMIDIPATI, Ashok Srinivasa MURTHY, Derek CHAMORRO, Michael CHAN, Arpit JAIN
-
Publication number: 20190342230

Abstract: A load balancer receives a sequence of requests for computing service and distributes the requests for computing service to a computing node in an ordered list of computing nodes until the computing node reaches its maximum allowable compute capability. Responsive to an indication that the computing node has reached its maximum allowable compute capability, the load balancer distributes subsequent requests for computing service to another computing node in the ordered list. If the computing node is the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to a computing node other than one of the computing nodes in the ordered list of computing nodes. If the computing node is not the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to another computing node in the ordered list of computing nodes.

Type: Application
Filed: July 16, 2019
Publication date: November 7, 2019
Inventors: Rema Hariharan, Sathyamangalam Ramaswamy Venkatramanan, Ashok Srinivasa Murthy, Rami El-Charif
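The distribution policy described here is a "fill first" strategy: keep sending requests to the current node in an ordered list until it reports it has reached its maximum allowable compute capability, then move to the next node; once the list is exhausted, spill to a node outside the list. A minimal sketch, with class and parameter names assumed for illustration:

```python
# Fill-first load balancing over an ordered node list, with overflow to a
# node outside the list once every listed node is at capacity.

class FillFirstBalancer:
    def __init__(self, ordered_nodes, overflow_node, max_capacity):
        self.nodes = ordered_nodes        # ordered list of node names
        self.overflow = overflow_node     # used when the list is exhausted
        self.max_capacity = max_capacity  # stand-in for "maximum allowable
                                          # compute capability" per node
        self.load = {n: 0 for n in ordered_nodes}
        self.current = 0

    def route(self):
        """Return the node that should serve the next request."""
        while self.current < len(self.nodes):
            node = self.nodes[self.current]
            if self.load[node] < self.max_capacity:
                self.load[node] += 1
                return node
            self.current += 1             # node is full: move down the list
        return self.overflow              # every listed node is at capacity

lb = FillFirstBalancer(["node-a", "node-b"], "spare-node", max_capacity=2)
print([lb.route() for _ in range(5)])
# -> ['node-a', 'node-a', 'node-b', 'node-b', 'spare-node']
```

Unlike round-robin, this policy concentrates load on as few nodes as possible, which can keep the remaining nodes idle enough to be powered down or reallocated.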
-
Publication number: 20190268410

Abstract: Systems and methods of cloud deployment optimization are disclosed. In some example embodiments, a method comprises running original instances of an application concurrently on original servers to implement an online service, receiving, by the original instances of the application, original requests for one or more functions of the online service, receiving a command to deploy a number of additional instances of the application, transmitting synthetic requests for the function(s) of the online service to one of the original servers according to predetermined optimization criteria, deploying the number of additional instances of the application on additional servers using a copy of the original instance of the application, and running the deployed additional instances of the application on their corresponding additional servers concurrently with the original instances of the application being run on their corresponding original servers.

Type: Application
Filed: April 8, 2019
Publication date: August 29, 2019
Inventors: Rami El-Charif, Ankit Khera, Ashok Srinivasa Murthy
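The flow this abstract describes can be sketched at a high level: one original server is driven with synthetic requests for the service's functions before scaling, so that the instance's optimized runtime state (for example, JIT-compiled code paths or warmed caches) is carried into the copies deployed on additional servers. The function names, the warm-up count, and the state being copied below are assumptions for illustration only.

```python
# Rough sketch: warm one original instance with synthetic requests, then
# deploy copies of the warmed instance on additional servers.

class Instance:
    def __init__(self):
        self.handled = {}  # stand-in for accumulated optimized state

    def handle(self, fn):
        self.handled[fn] = self.handled.get(fn, 0) + 1

    def copy(self):
        clone = Instance()
        clone.handled = dict(self.handled)  # state carried into the copy
        return clone

def optimize_and_scale(original, functions, num_additional, synthetic_per_fn=100):
    # Transmit synthetic requests for each service function to one
    # original server, per the chosen optimization criteria.
    for fn in functions:
        for _ in range(synthetic_per_fn):
            original.handle(fn)
    # Deploy copies of the now-warmed instance on additional servers.
    return [original.copy() for _ in range(num_additional)]

extra = optimize_and_scale(Instance(), ["search", "checkout"], num_additional=3)
print(len(extra), extra[0].handled["search"])  # -> 3 100
```

The point of the synthetic traffic is that a freshly booted copy would otherwise start cold; cloning after warm-up lets every additional server begin from the optimized state.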
-
Patent number: 10356004

Abstract: A load balancer receives a sequence of requests for computing service and distributes the requests for computing service to a computing node in an ordered list of computing nodes until the computing node reaches its maximum allowable compute capability. Responsive to an indication that the computing node has reached its maximum allowable compute capability, the load balancer distributes subsequent requests for computing service to another computing node in the ordered list. If the computing node is the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to a computing node other than one of the computing nodes in the ordered list of computing nodes. If the computing node is not the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to another computing node in the ordered list of computing nodes.

Type: Grant
Filed: November 12, 2015
Date of Patent: July 16, 2019
Assignee: PayPal, Inc.
Inventors: Rema Hariharan, Sathyamangalam Ramaswamy Venkatramanan, Ashok Srinivasa Murthy, Rami El-Charif
-
Patent number: 10284643

Abstract: Systems and methods of cloud deployment optimization are disclosed. In some example embodiments, a method comprises running original instances of an application concurrently on original servers to implement an online service, receiving, by the original instances of the application, original requests for one or more functions of the online service, receiving a command to deploy a number of additional instances of the application, transmitting synthetic requests for the function(s) of the online service to one of the original servers according to predetermined optimization criteria, deploying the number of additional instances of the application on additional servers using a copy of the original instance of the application, and running the deployed additional instances of the application on their corresponding additional servers concurrently with the original instances of the application being run on their corresponding original servers.

Type: Grant
Filed: September 24, 2015
Date of Patent: May 7, 2019
Assignee: eBay Inc.
Inventors: Rami El-Charif, Ankit Khera, Ashok Srinivasa Murthy
-
Publication number: 20170093970

Abstract: Systems and methods of cloud deployment optimization are disclosed. In some example embodiments, a method comprises running original instances of an application concurrently on original servers to implement an online service, receiving, by the original instances of the application, original requests for one or more functions of the online service, receiving a command to deploy a number of additional instances of the application, transmitting synthetic requests for the function(s) of the online service to one of the original servers according to predetermined optimization criteria, deploying the number of additional instances of the application on additional servers using a copy of the original instance of the application, and running the deployed additional instances of the application on their corresponding additional servers concurrently with the original instances of the application being run on their corresponding original servers.

Type: Application
Filed: September 24, 2015
Publication date: March 30, 2017
Inventors: Rami El-Charif, Ankit Khera, Ashok Srinivasa Murthy
-
Patent number: 9317393

Abstract: Methods and apparatus for memory leak detection using clustering and trend detection are disclosed. Performance metrics are collected from an executing process. A first statistical analysis of at least one metric is used to identify trending and non-trending workload periods for the process. A second statistical analysis on the metrics for the non-trending workload periods is used to determine clusters of metrics corresponding to stable workload levels. A third statistical analysis is performed on each of the clusters to determine whether an upward trend in memory usage occurred. If an upward trend in memory usage is detected, a notification of a potential memory leak is generated.

Type: Grant
Filed: June 13, 2013
Date of Patent: April 19, 2016
Assignee: Oracle International Corporation
Inventors: Thyagaraju Poola, Vladimir Volchegursky, Ashok Srinivasa Murthy
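The three-stage analysis in this abstract can be sketched compactly: split the metric series into trending and non-trending workload periods, cluster the non-trending samples into stable workload levels, then test each cluster's memory usage for an upward trend. The key insight is that rising memory under rising load is normal, but rising memory at a *stable* workload level suggests a leak. The statistics below (a least-squares slope, coarse rounding as the clustering step) are stand-ins for the patent's unspecified methods.

```python
# Simplified sketch: flag a potential leak if memory trends upward within
# any stable-workload cluster of samples.

def slope(ys):
    """Least-squares slope of ys against its sample index."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def leak_suspected(samples, workload_tolerance=5.0, mem_slope_threshold=0.5):
    """samples: list of (workload_level, memory_mb) collected over time."""
    # Stages 1-2: group samples into stable workload clusters (coarse
    # rounding stands in for real trend filtering plus clustering).
    clusters = {}
    for load, mem in samples:
        key = round(load / workload_tolerance)
        clusters.setdefault(key, []).append(mem)
    # Stage 3: an upward memory trend inside any stable-workload cluster
    # triggers a potential-leak notification.
    return any(len(ms) >= 3 and slope(ms) > mem_slope_threshold
               for ms in clusters.values())

steady_leak = [(50, 100 + 2 * i) for i in range(10)]  # same load, rising memory
print(leak_suspected(steady_leak))  # -> True
```

With flat memory at the same workload level (`[(50, 100)] * 10`), the in-cluster slope is zero and no notification is raised.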
-
Publication number: 20160065486

Abstract: A load balancer receives a sequence of requests for computing service and distributes the requests for computing service to a computing node in an ordered list of computing nodes until the computing node reaches its maximum allowable compute capability. Responsive to an indication that the computing node has reached its maximum allowable compute capability, the load balancer distributes subsequent requests for computing service to another computing node in the ordered list. If the computing node is the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to a computing node other than one of the computing nodes in the ordered list of computing nodes. If the computing node is not the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to another computing node in the ordered list of computing nodes.

Type: Application
Filed: November 12, 2015
Publication date: March 3, 2016
Inventors: Rema Hariharan, Sathyamangalam Ramaswamy Venkatramanan, Ashok Srinivasa Murthy, Rami El-Charif
-
Publication number: 20140372807

Abstract: Methods and apparatus for memory leak detection using clustering and trend detection are disclosed. Performance metrics are collected from an executing process. A first statistical analysis of at least one metric is used to identify trending and non-trending workload periods for the process. A second statistical analysis on the metrics for the non-trending workload periods is used to determine clusters of metrics corresponding to stable workload levels. A third statistical analysis is performed on each of the clusters to determine whether an upward trend in memory usage occurred. If an upward trend in memory usage is detected, a notification of a potential memory leak is generated.

Type: Application
Filed: June 13, 2013
Publication date: December 18, 2014
Inventors: Thyagaraju Poola, Vladimir Volchegursky, Ashok Srinivasa Murthy