Patents by Inventor Ashok Srinivasa Murthy

Ashok Srinivasa Murthy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200322253
    Abstract: Deploying a point of presence (PoP) changes traffic flow to a cloud service provider. To determine if the PoP improves the performance of a cloud service to a client, actual network latencies between the client and the cloud service are measured. In more complex scenarios, multiple PoPs are used. The client sends multiple requests for the same content to the cloud provider. The requests are sent via different routes. The cloud provider serves the requests and collates the latency information. Based on the latency information, a route for a future request is selected, resources are allocated, or a user interface is presented. The process of determining the latency for content delivered by different routes may be repeated for content of different sizes. A future request is routed along the network path that provides the lowest latency for the data being requested.
    Type: Application
    Filed: June 23, 2020
    Publication date: October 8, 2020
    Inventors: Ashok Srinivasa Murthy, Sunny Rameshkumar Gurnani
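The abstract above describes probing the same content over different network routes and steering future requests by observed latency, repeated per content size. A minimal sketch of that idea, assuming illustrative names and a simple average-latency criterion (none of which come from the patent):

```python
# Hypothetical sketch: a client fetches the same content over candidate
# routes (e.g., direct vs. via a PoP), latencies are collated per payload
# size bucket, and future requests take the route with the lowest latency.
from collections import defaultdict

class RouteSelector:
    def __init__(self):
        # (size_bucket, route) -> observed latencies in ms
        self.observations = defaultdict(list)

    @staticmethod
    def size_bucket(num_bytes):
        # Compare routes per size class, since the best route may differ
        # for small vs. large payloads.
        if num_bytes < 10_000:
            return "small"
        if num_bytes < 1_000_000:
            return "medium"
        return "large"

    def record(self, route, num_bytes, latency_ms):
        self.observations[(self.size_bucket(num_bytes), route)].append(latency_ms)

    def best_route(self, num_bytes, candidates):
        bucket = self.size_bucket(num_bytes)
        def avg_latency(route):
            samples = self.observations.get((bucket, route))
            # Unprobed routes sort last.
            return sum(samples) / len(samples) if samples else float("inf")
        return min(candidates, key=avg_latency)

sel = RouteSelector()
sel.record("direct", 5_000, 120.0)
sel.record("via-pop", 5_000, 45.0)
sel.record("via-pop", 5_000, 55.0)
print(sel.best_route(5_000, ["direct", "via-pop"]))  # via-pop
```

In practice the route decision could also drive resource allocation or a UI, as the abstract notes; this sketch covers only the route-selection step.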
  • Patent number: 10735307
    Abstract: Deploying a point of presence (PoP) changes traffic flow to a cloud service provider. To determine if the PoP improves the performance of a cloud service to a client, actual network latencies between the client and the cloud service are measured. In more complex scenarios, multiple PoPs are used. The client sends multiple requests for the same content to the cloud provider. The requests are sent via different routes. The cloud provider serves the requests and collates the latency information. Based on the latency information, a route for a future request is selected, resources are allocated, or a user interface is presented. The process of determining the latency for content delivered by different routes may be repeated for content of different sizes. A future request is routed along the network path that provides the lowest latency for the data being requested.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: August 4, 2020
    Assignee: eBay Inc.
    Inventors: Ashok Srinivasa Murthy, Sunny Rameshkumar Gurnani
  • Publication number: 20200228437
    Abstract: Deploying a point of presence (PoP) changes traffic flow to a cloud service provider. To determine if the PoP improves the performance of a cloud service to a client, actual network latencies between the client and the cloud service are measured. In more complex scenarios, multiple PoPs are used. The client sends multiple requests for the same content to the cloud provider. The requests are sent via different routes. The cloud provider serves the requests and collates the latency information. Based on the latency information, a route for a future request is selected, resources are allocated, or a user interface is presented. The process of determining the latency for content delivered by different routes may be repeated for content of different sizes. A future request is routed along the network path that provides the lowest latency for the data being requested.
    Type: Application
    Filed: January 10, 2019
    Publication date: July 16, 2020
    Inventors: Ashok Srinivasa Murthy, Sunny Rameshkumar Gurnani
  • Publication number: 20200177372
    Abstract: Technologies are shown for HGM based control for smart contract execution. HGM control rules control function calls at a system level utilizing function boundary detection instrumentation in a kernel that executes smart contracts. The detection instrumentation generates a call stack that represents a chain of function calls in the kernel for a smart contract. The HGM control rules are applied to HGMs collected from the call stack to allow or prohibit specific HGMs observed in functions or function call chains. HGM control rules can use dynamic state data in the function call chain. If the dynamic state data observed in function call chains does not meet the requirements defined in the HGM control rules, then the function call can be blocked from executing or completing execution. The HGM control rules can be generated by executing known sets of acceptable or vulnerable smart contracts and collecting the resulting HGMs.
    Type: Application
    Filed: October 18, 2019
    Publication date: June 4, 2020
    Inventors: Venkata Siva Vijayendra Bhamidipati, Michael Chan, Derek Chamorro, Arpit Jain, Ashok Srinivasa Murthy
  • Publication number: 20200175155
    Abstract: Technologies are shown for system level function based access control for smart contract execution on a blockchain. Access control rules control function calls at a system level by utilizing function boundary detection instrumentation in a kernel that executes smart contracts. The detection instrumentation generates a call stack that represents a chain of function calls in the kernel for execution of a smart contract. The access control rules are applied to the function call stack to allow or prohibit specific functions or function call chains. Access control rules can also define allowed or prohibited parameter data in the function call chain. If the function call chain or parameters do not meet the requirements defined in the access control rules, then the function call can be blocked from executing or completing execution. The access control rules can produce sophisticated access control policies based on complex function call chains.
    Type: Application
    Filed: June 3, 2019
    Publication date: June 4, 2020
    Inventors: Venkata Siva Vijayendra Bhamidipati, Michael Chan, Derek Chamorro, Arpit Jain, Ashok Srinivasa Murthy
  • Publication number: 20200175156
    Abstract: Technologies are shown for function level permissions control for smart contract execution to implement permissions policy on a blockchain. Permissions control rules control function calls at a system level utilizing function boundary detection instrumentation in a kernel that executes smart contracts. The detection instrumentation generates a call stack that represents a chain of function calls in the kernel for a smart contract. The permissions control rules are applied to the call stack to implement permissions control policy. Permissions control rules can use dynamic state data in the function call chain. If the dynamic state data observed in function call chains does not meet the requirements defined in the permissions control rules, then the function call can be blocked from executing or completing execution. The permissions control rules can be generated for a variety of different entities, such as a domain, user or resource.
    Type: Application
    Filed: November 27, 2019
    Publication date: June 4, 2020
    Inventors: Venkata Siva Vijayendra Bhamidipati, Ashok Srinivasa Murthy, Derek Chamorro, Michael Chan, Arpit Jain
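The three smart-contract entries above share a mechanism: function-boundary instrumentation in the executing kernel yields a call chain, and declarative rules over that chain (including parameter or state data) allow or block a call. An illustrative sketch of rule matching over a call chain, with rule shapes and names that are assumptions rather than the patented design:

```python
# Illustrative sketch, not the patented implementation: rules are
# (kind, pattern) pairs evaluated against the observed call chain;
# deny rules take precedence over allow rules.

def chain_matches(call_chain, pattern):
    """True if `pattern` appears as a contiguous subsequence of the chain."""
    n, m = len(call_chain), len(pattern)
    return any(call_chain[i:i + m] == pattern for i in range(n - m + 1))

def is_allowed(call_chain, rules):
    for kind, pattern in rules:
        if kind == "deny" and chain_matches(call_chain, pattern):
            return False  # a matched deny rule blocks execution
    allow_rules = [p for k, p in rules if k == "allow"]
    # With no allow rules, anything not denied passes.
    return not allow_rules or any(chain_matches(call_chain, p) for p in allow_rules)

rules = [
    ("deny", ["transfer", "selfdestruct"]),  # prohibited call chain
    ("allow", ["transfer"]),
]
print(is_allowed(["deposit", "transfer"], rules))       # True
print(is_allowed(["transfer", "selfdestruct"], rules))  # False
```

The permissions variant described in the last entry would key such rule sets by entity (domain, user, or resource), and the dynamic-state variants would extend the match predicate to inspect parameter values alongside function names.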
  • Publication number: 20190342230
    Abstract: A load balancer receives a sequence of requests for computing service and distributes the requests for computing service to a computing node in an ordered list of computing nodes until the computing node reaches its maximum allowable compute capability. Responsive to an indication that the computing node has reached its maximum allowable compute capability, the load balancer distributes subsequent requests for computing service to another computing node in the ordered list. If the computing node is the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to a computing node other than one of the computing nodes in the ordered list of computing nodes. If the computing node is not the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to another computing node in the ordered list of computing nodes.
    Type: Application
    Filed: July 16, 2019
    Publication date: November 7, 2019
    Inventors: Rema Hariharan, Sathyamangalam Ramaswamy Venkatramanan, Ashok Srinivasa Murthy, Rami El-Charif
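The load-balancing abstract above describes a fill-first strategy: route to each node in an ordered list until it reaches its maximum allowable compute capability, then move to the next, and overflow outside the list only when the last listed node is full. A minimal sketch under assumed names and a simple in-flight-count capacity model:

```python
# Hedged sketch of fill-first balancing over an ordered node list.
# The capacity model (max in-flight requests) is an illustrative stand-in
# for "maximum allowable compute capability".

class FillFirstBalancer:
    def __init__(self, nodes, capacity):
        self.nodes = list(nodes)        # ordered list of node names
        self.capacity = capacity        # max in-flight requests per node
        self.load = {n: 0 for n in self.nodes}

    def route(self, overflow_node="overflow"):
        for node in self.nodes:
            if self.load[node] < self.capacity:
                self.load[node] += 1
                return node
        # Every listed node is saturated: send outside the ordered list.
        return overflow_node

lb = FillFirstBalancer(["node-a", "node-b"], capacity=2)
print([lb.route() for _ in range(5)])
# ['node-a', 'node-a', 'node-b', 'node-b', 'overflow']
```

Compared with round-robin, this packing approach keeps later nodes idle until earlier ones saturate, which can allow idle nodes to be powered down or reclaimed.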
  • Publication number: 20190268410
    Abstract: Systems and methods of cloud deployment optimization are disclosed. In some example embodiments, a method comprises running original instances of an application concurrently on original servers to implement an online service, receiving, by the original instances of the application, original requests for one or more functions of the online service, receiving a command to deploy a number of additional instances of the application, transmitting synthetic requests for the function(s) of the online service to one of the original servers according to a predetermined optimization criterion, deploying the number of additional instances of the application on additional servers using a copy of the original instance of the application, and running the deployed additional instances of the application on their corresponding additional servers concurrently with the original instances of the application being run on their corresponding original servers.
    Type: Application
    Filed: April 8, 2019
    Publication date: August 29, 2019
    Inventors: Rami El-Charif, Ankit Khera, Ashok Srinivasa Murthy
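The deployment-optimization abstract above can be read as: exercise one original server with synthetic requests for the service's functions (per some optimization criterion, e.g. warming caches or compiled code paths), then deploy the additional instances from a copy of that instance. A sketch of that flow, where the `Instance` class, warm-up criterion, and method names are all assumptions for illustration:

```python
# Hedged sketch of the deploy-from-a-warmed-copy flow. The warm_paths set
# stands in for whatever state (caches, JIT-compiled code) the synthetic
# requests are meant to build up before cloning.

class Instance:
    def __init__(self):
        self.warm_paths = set()

    def handle(self, fn, synthetic=False):
        self.warm_paths.add(fn)  # stand-in for cache/JIT warm-up

    def copy(self):
        clone = Instance()
        clone.warm_paths = set(self.warm_paths)
        return clone

def deploy_additional(original, functions, num_copies, warmup_rounds=3):
    # Send synthetic (non-user) requests for each service function.
    for _ in range(warmup_rounds):
        for fn in functions:
            original.handle(fn, synthetic=True)
    # Deploy additional instances from a copy of the warmed original,
    # to run concurrently with the originals.
    return [original.copy() for _ in range(num_copies)]

orig = Instance()
clones = deploy_additional(orig, ["search", "checkout"], num_copies=2)
print(len(clones), sorted(clones[0].warm_paths))
# 2 ['checkout', 'search']
```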
  • Patent number: 10356004
    Abstract: A load balancer receives a sequence of requests for computing service and distributes the requests for computing service to a computing node in an ordered list of computing nodes until the computing node reaches its maximum allowable compute capability. Responsive to an indication that the computing node has reached its maximum allowable compute capability, the load balancer distributes subsequent requests for computing service to another computing node in the ordered list. If the computing node is the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to a computing node other than one of the computing nodes in the ordered list of computing nodes. If the computing node is not the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to another computing node in the ordered list of computing nodes.
    Type: Grant
    Filed: November 12, 2015
    Date of Patent: July 16, 2019
    Assignee: PayPal, Inc.
    Inventors: Rema Hariharan, Sathyamangalam Ramaswamy Venkatramanan, Ashok Srinivasa Murthy, Rami El-Charif
  • Patent number: 10284643
    Abstract: Systems and methods of cloud deployment optimization are disclosed. In some example embodiments, a method comprises running original instances of an application concurrently on original servers to implement an online service, receiving, by the original instances of the application, original requests for one or more functions of the online service, receiving a command to deploy a number of additional instances of the application, transmitting synthetic requests for the function(s) of the online service to one of the original servers according to a predetermined optimization criterion, deploying the number of additional instances of the application on additional servers using a copy of the original instance of the application, and running the deployed additional instances of the application on their corresponding additional servers concurrently with the original instances of the application being run on their corresponding original servers.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: May 7, 2019
    Assignee: eBay Inc.
    Inventors: Rami El-Charif, Ankit Khera, Ashok Srinivasa Murthy
  • Publication number: 20170093970
    Abstract: Systems and methods of cloud deployment optimization are disclosed. In some example embodiments, a method comprises running original instances of an application concurrently on original servers to implement an online service, receiving, by the original instances of the application, original requests for one or more functions of the online service, receiving a command to deploy a number of additional instances of the application, transmitting synthetic requests for the function(s) of the online service to one of the original servers according to a predetermined optimization criterion, deploying the number of additional instances of the application on additional servers using a copy of the original instance of the application, and running the deployed additional instances of the application on their corresponding additional servers concurrently with the original instances of the application being run on their corresponding original servers.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 30, 2017
    Inventors: Rami El-Charif, Ankit Khera, Ashok Srinivasa Murthy
  • Patent number: 9317393
    Abstract: Methods and apparatus for memory leak detection using clustering and trend detection are disclosed. Performance metrics are collected from an executing process. A first statistical analysis of at least one metric is used to identify trending and non-trending workload periods for the process. A second statistical analysis on the metrics for the non-trending workload periods is used to determine clusters of metrics corresponding to stable workload levels. A third statistical analysis is performed on each of the clusters to determine whether an upward trend in memory usage occurred. If an upward trend in memory usage is detected, a notification of a potential memory leak is generated.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: April 19, 2016
    Assignee: Oracle International Corporation
    Inventors: Thyagaraju Poola, Vladimir Volchegursky, Ashok Srinivasa Murthy
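The memory-leak-detection abstract above describes three stages: filter out trending workload periods, cluster the stable periods by workload level, and test each cluster for an upward memory trend. A sketch of that pipeline under stated assumptions (the tolerance thresholds, the rounding-based clustering, and a least-squares slope as the trend test are all illustrative simplifications, not the patented statistical analyses):

```python
# Illustrative three-stage leak check over (workload, memory) samples,
# in time order. Thresholds and the clustering scheme are assumptions.

def slope(ys):
    """Least-squares slope of ys against their index (time)."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(range(n), ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den if den else 0.0

def detect_leak(samples, workload_tol=0.1, trend_tol=0.01):
    # Stage 1: keep only non-trending workload periods (flat workload).
    stable = [s for i, s in enumerate(samples)
              if i == 0 or abs(s[0] - samples[i - 1][0]) <= workload_tol * max(s[0], 1)]
    # Stage 2: cluster stable samples by (rounded) workload level.
    clusters = {}
    for workload, mem in stable:
        clusters.setdefault(round(workload), []).append(mem)
    # Stage 3: flag an upward memory trend inside any stable cluster.
    return any(slope(mem) > trend_tol for mem in clusters.values() if len(mem) >= 3)

leaky = [(100, 500 + 2 * i) for i in range(10)]    # flat workload, rising memory
healthy = [(100, 500) for _ in range(10)]          # flat workload, flat memory
print(detect_leak(leaky))    # True
print(detect_leak(healthy))  # False
```

Checking for a memory trend only within stable-workload clusters is what separates a genuine leak from memory growth that merely tracks rising load.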
  • Publication number: 20160065486
    Abstract: A load balancer receives a sequence of requests for computing service and distributes the requests for computing service to a computing node in an ordered list of computing nodes until the computing node reaches its maximum allowable compute capability. Responsive to an indication that the computing node has reached its maximum allowable compute capability, the load balancer distributes subsequent requests for computing service to another computing node in the ordered list. If the computing node is the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to a computing node other than one of the computing nodes in the ordered list of computing nodes. If the computing node is not the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to another computing node in the ordered list of computing nodes.
    Type: Application
    Filed: November 12, 2015
    Publication date: March 3, 2016
    Inventors: Rema Hariharan, Sathyamangalam Ramaswamy Venkatramanan, Ashok Srinivasa Murthy, Rami El-Charif
  • Publication number: 20140372807
    Abstract: Methods and apparatus for memory leak detection using clustering and trend detection are disclosed. Performance metrics are collected from an executing process. A first statistical analysis of at least one metric is used to identify trending and non-trending workload periods for the process. A second statistical analysis on the metrics for the non-trending workload periods is used to determine clusters of metrics corresponding to stable workload levels. A third statistical analysis is performed on each of the clusters to determine whether an upward trend in memory usage occurred. If an upward trend in memory usage is detected, a notification of a potential memory leak is generated.
    Type: Application
    Filed: June 13, 2013
    Publication date: December 18, 2014
    Inventors: Thyagaraju Poola, Vladimir Volchegursky, Ashok Srinivasa Murthy