Patents by Inventor Rama Prasad Bodepu

Rama Prasad Bodepu has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11132218
    Abstract: Techniques are disclosed relating to task execution with non-blocking calls. A computer system may receive a request to perform an operation comprising a plurality of tasks, each of which corresponds to a node in a graph. A particular one of the plurality of tasks specifies a call to a downstream service. The computer system may maintain a plurality of task queues, each of which is associated with a thread pool. The computer system may enqueue, in an order specified by the graph, the plurality of tasks in one or more of the plurality of task queues. The computer system may then process the plurality of tasks. Such processing may include a thread of the task queue in which the particular task is enqueued performing a non-blocking call to the downstream service. After processing the plurality of tasks, the computer system may return a result of performing the operation. (A minimal Java sketch of this non-blocking, graph-ordered task-execution pattern appears after this listing.)
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: September 28, 2021
    Assignee: PayPal, Inc.
    Inventors: Prasad Saka, Jian Wan, Rama Prasad Bodepu
  • Patent number: 10884641
    Abstract: Systems and techniques for providing a low latency gateway for an asynchronous orchestration engine using direct memory are presented. A system can directly allocate an array memory space within a first data structure for transaction data associated with transaction requests for an online transaction system. The system can sequentially store respective data threads of the transaction data into respective memory blocks of the array memory space within the first data structure. The system can also sequentially separate the memory blocks of the array memory space within the first data structure into data channels for storage in a second data structure. Furthermore, the system can respectively format the data channels and convert them into communication pathways for the online transaction system based on at least one serialization technique for transmission to one or more memories of a virtual machine of the online transaction system. (A minimal Java sketch of this direct-memory staging pattern appears after this listing.)
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: January 5, 2021
    Assignee: PayPal, Inc.
    Inventors: Veera Saka, Jian Wan, Rama Prasad Bodepu
  • Publication number: 20200333956
    Abstract: Systems and techniques for providing a low latency gateway for an asynchronous orchestration engine using direct memory are presented. A system can directly allocate an array memory space within a first data structure for transaction data associated with transaction requests for an online transaction system. The system can sequentially store respective data threads of the transaction data into respective memory blocks of the array memory space within the first data structure. The system can also sequentially separate the memory blocks of the array memory space within the first data structure into data channels for storage in a second data structure. Furthermore, the system can respectively format the data channels and convert them into communication pathways for the online transaction system based on at least one serialization technique for transmission to one or more memories of a virtual machine of the online transaction system.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 22, 2020
    Inventors: Veera Saka, Jian Wan, Rama Prasad Bodepu
  • Publication number: 20200210223
    Abstract: Techniques are disclosed relating to task execution with non-blocking calls. A computer system may receive a request to perform an operation comprising a plurality of tasks, each of which corresponds to a node in a graph. A particular one of the plurality of tasks specifies a call to a downstream service. The computer system may maintain a plurality of task queues, each of which is associated with a thread pool. The computer system may enqueue, in an order specified by the graph, the plurality of tasks in one or more of the plurality of task queues. The computer system may then process the plurality of tasks. Such processing may include a thread of the task queue in which the particular task is enqueued performing a non-blocking call to the downstream service. After processing the plurality of tasks, the computer system may return a result of performing the operation.
    Type: Application
    Filed: December 28, 2018
    Publication date: July 2, 2020
    Inventors: Prasad Saka, Jian Wan, Rama Prasad Bodepu
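
The granted patent 11132218 and its published application 20200210223 describe graph-ordered task execution over task queues backed by thread pools, with non-blocking calls to a downstream service. The sketch below illustrates that general pattern with standard Java concurrency primitives (CompletableFuture and ExecutorService); the class, record, queue, and task names are hypothetical illustrations under those assumptions, not the patented implementation.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.function.Supplier;

    // Hypothetical sketch: graph-ordered tasks, per-queue thread pools, non-blocking chaining.
    public class NonBlockingOrchestrator {

        // Each task queue is backed by its own thread pool, as the abstract describes.
        private final ExecutorService cpuQueue = Executors.newFixedThreadPool(4);
        private final ExecutorService ioQueue = Executors.newFixedThreadPool(8);

        // A task is a node in a dependency graph; its dependencies must complete before it runs.
        record TaskNode(String name, List<String> dependsOn, Supplier<String> work, boolean isDownstreamCall) {}

        // Enqueue tasks in the order the graph allows and chain them without blocking any thread.
        public CompletableFuture<Map<String, String>> run(List<TaskNode> graph) {
            Map<String, CompletableFuture<String>> done = new ConcurrentHashMap<>();
            for (TaskNode node : graph) { // assumed topologically sorted
                // Wait, non-blockingly, for every dependency's future before starting this node.
                CompletableFuture<Void> deps = CompletableFuture.allOf(
                        node.dependsOn().stream().map(done::get).toArray(CompletableFuture[]::new));
                ExecutorService queue = node.isDownstreamCall() ? ioQueue : cpuQueue;
                // supplyAsync hands the work to the chosen queue's pool; nothing blocks on the result.
                CompletableFuture<String> result =
                        deps.thenCompose(v -> CompletableFuture.supplyAsync(node.work(), queue));
                done.put(node.name(), result);
            }
            // Combine all task results into the operation's overall result once everything completes.
            return CompletableFuture.allOf(done.values().toArray(new CompletableFuture[0]))
                    .thenApply(v -> {
                        Map<String, String> out = new HashMap<>();
                        done.forEach((k, f) -> out.put(k, f.join()));
                        return out;
                    });
        }

        public static void main(String[] args) {
            NonBlockingOrchestrator orch = new NonBlockingOrchestrator();
            List<TaskNode> graph = List.of(
                    new TaskNode("validate", List.of(), () -> "ok", false),
                    new TaskNode("riskCheck", List.of("validate"), () -> "low-risk", true),
                    new TaskNode("respond", List.of("riskCheck"), () -> "approved", false));
            System.out.println(orch.run(graph).join());
            orch.cpuQueue.shutdown();
            orch.ioQueue.shutdown();
        }
    }

Running the example executes validate, riskCheck (standing in for the downstream-service call), and respond in graph order; because the downstream call is chained with thenCompose on its own queue's pool, no caller thread blocks while waiting for it.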
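
The granted patent 10884641 and its published application 20200333956 describe a low-latency, direct-memory staging path: directly allocate an array-like memory space, store records sequentially in fixed-size memory blocks, separate the blocks into data channels, and serialize each channel for transmission. The sketch below assumes off-heap allocation via java.nio.ByteBuffer.allocateDirect; the block size, blocks-per-channel grouping, and sample transaction records are illustrative assumptions, not the patented design.

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: off-heap array space, sequential memory blocks, data channels, serialization.
    public class DirectMemoryGatewaySketch {

        static final int BLOCK_SIZE = 64;        // bytes per memory block (assumed)
        static final int BLOCKS_PER_CHANNEL = 2; // memory blocks grouped into one data channel (assumed)

        public static void main(String[] args) {
            List<String> transactions = List.of(
                    "txn-1001:9.99", "txn-1002:25.00", "txn-1003:3.50", "txn-1004:120.00");

            // 1. Directly allocate an array-like memory space (off-heap) for the transaction data.
            ByteBuffer arraySpace = ByteBuffer.allocateDirect(BLOCK_SIZE * transactions.size());

            // 2. Sequentially store each record into its own fixed-size memory block.
            List<ByteBuffer> blocks = new ArrayList<>();
            for (int i = 0; i < transactions.size(); i++) {
                arraySpace.position(i * BLOCK_SIZE).limit((i + 1) * BLOCK_SIZE);
                ByteBuffer block = arraySpace.slice(); // a view over one block of the array space
                block.put(transactions.get(i).getBytes(StandardCharsets.UTF_8));
                blocks.add(block);
            }

            // 3. Separate the memory blocks into data channels (a second structure grouping blocks).
            List<List<ByteBuffer>> channels = new ArrayList<>();
            for (int i = 0; i < blocks.size(); i += BLOCKS_PER_CHANNEL) {
                channels.add(blocks.subList(i, Math.min(i + BLOCKS_PER_CHANNEL, blocks.size())));
            }

            // 4. Serialize each channel into a contiguous byte payload ready for transmission.
            for (int c = 0; c < channels.size(); c++) {
                ByteBuffer payload = ByteBuffer.allocate(BLOCKS_PER_CHANNEL * BLOCK_SIZE);
                for (ByteBuffer block : channels.get(c)) {
                    block.flip(); // switch the block from writing to reading
                    payload.put(block);
                }
                payload.flip();
                System.out.println("channel " + c + " -> " + payload.remaining() + " bytes ready to send");
            }
        }
    }

Keeping the staged bytes in a directly allocated buffer holds them outside the garbage-collected heap, which is one common way such gateways reduce latency jitter; the actual patented gateway may organize its first and second data structures differently.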