Patents Examined by Camquy Truong
  • Patent number: 11500690
    Abstract: A method for dynamic load balancing between nodes in a network centric process control system. The network centric process control system includes a plurality of nodes, and each node includes control service components, where each control service component is a separate executable running in a separate operating system process as provided by a real time operating system of each node. The method is performed by a node manager of a node and includes: negotiating a load balancing master role between the plurality of nodes, wherein the negotiating is based on an indication of the plurality of nodes representing load balancing cluster nodes; subscribing, in the negotiated load balancing master role, to load balancing information from nodes of the load balancing cluster nodes; and reallocating, in the negotiated load balancing master role, one or more control logic tasks from one node to another node of the plurality of nodes based on the subscribed load balancing information.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: November 15, 2022
    Assignee: ABB Schweiz AG
    Inventor: Staffan Andersson
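A minimal sketch of the master-role negotiation and task reallocation described in the abstract above, assuming an in-memory view of cluster load reports; the class, its methods, and the lowest-node-id convention for choosing the master are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of master-role negotiation and task reallocation;
# names and the rebalancing rule are illustrative only.

class NodeManager:
    def __init__(self, node_id, cluster_node_ids):
        self.node_id = node_id
        self.cluster_node_ids = cluster_node_ids   # indication of load balancing cluster nodes
        self.load_info = {}                        # node_id -> {"load": float, "tasks": [...]}

    def negotiate_master_role(self):
        # One simple convention: the lowest node id in the cluster becomes master.
        return self.node_id == min(self.cluster_node_ids)

    def on_load_report(self, node_id, load, tasks):
        # Subscription callback: collect load balancing information from cluster nodes.
        self.load_info[node_id] = {"load": load, "tasks": list(tasks)}

    def rebalance(self, threshold=0.2):
        # Move one control logic task from the most loaded to the least loaded node
        # when the load gap exceeds a threshold.
        if not self.negotiate_master_role() or len(self.load_info) < 2:
            return None
        busiest = max(self.load_info, key=lambda n: self.load_info[n]["load"])
        idlest = min(self.load_info, key=lambda n: self.load_info[n]["load"])
        gap = self.load_info[busiest]["load"] - self.load_info[idlest]["load"]
        if gap > threshold and self.load_info[busiest]["tasks"]:
            task = self.load_info[busiest]["tasks"].pop()
            self.load_info[idlest]["tasks"].append(task)
            return (task, busiest, idlest)
        return None

mgr = NodeManager(node_id=1, cluster_node_ids=[1, 2, 3])
mgr.on_load_report(2, load=0.9, tasks=["pid_loop_7"])
mgr.on_load_report(3, load=0.3, tasks=[])
print(mgr.rebalance())   # ('pid_loop_7', 2, 3)
```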
  • Patent number: 11500673
    Abstract: A method for dynamically generating an optimized processing pipeline for tasks is provided. The method identifies one or more tasks to be executed from a set of tasks that are defined declaratively as a number of stages of input data, data transformations, and output data. The method processes the identified tasks to determine dependencies between them based on their defined stages, and creates one or more optimized data processing pipelines by performing a dependency resolution procedure on the stages of all tasks in parallel, using the task dependencies to determine the order of the stages and removing duplication of stages between tasks.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: November 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Luke Taher, Diogo Alexandre Ferreira Ramos, Vinh Tuan Thai
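A hedged sketch of how declaratively defined stages might be deduplicated across tasks and ordered by their data dependencies, loosely following the abstract above; the task/stage dictionary layout and the use of Python's graphlib are assumptions.

```python
# Illustrative sketch (not the patented implementation): tasks declared as stages
# with inputs and outputs; shared stages are deduplicated and ordered by the
# data dependencies between their inputs and outputs.
from graphlib import TopologicalSorter

def build_pipeline(tasks):
    """tasks: {task_name: [{"name": str, "inputs": set, "outputs": set}, ...]}"""
    # Deduplicate stages that appear in more than one task (keyed by stage name).
    stages = {}
    for stage_list in tasks.values():
        for stage in stage_list:
            stages.setdefault(stage["name"], stage)

    # A stage depends on every stage that produces one of its inputs.
    producers = {out: s["name"] for s in stages.values() for out in s["outputs"]}
    graph = {
        name: {producers[i] for i in stage["inputs"] if i in producers}
        for name, stage in stages.items()
    }
    return list(TopologicalSorter(graph).static_order())

pipeline = build_pipeline({
    "report": [
        {"name": "load", "inputs": set(), "outputs": {"raw"}},
        {"name": "clean", "inputs": {"raw"}, "outputs": {"tidy"}},
    ],
    "dashboard": [
        {"name": "load", "inputs": set(), "outputs": {"raw"}},   # duplicated stage, merged
        {"name": "aggregate", "inputs": {"tidy"}, "outputs": {"summary"}},
    ],
})
print(pipeline)  # e.g. ['load', 'clean', 'aggregate']
```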
  • Patent number: 11494239
    Abstract: Embodiments of the present disclosure relate to a method for allocating computing resources, an electronic device, and a corresponding computer program product. The method may include: obtaining an available resource list associated with a computing resource requester upon determining that a resource use request from the computing resource requester has been received, wherein the available resource list includes the number and available time periods of computing resources that can be provided by at least one computing resource provider. The method further includes: allocating computing resources to the computing resource requester based on the available resource list and the resource use request, so that the computing resource requester uses the allocated computing resources to run a workload. The embodiments of the present disclosure can flexibly allocate computing resources, thereby realizing full utilization of the computing resources.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: November 8, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Pedro Fernandez Orellana, Zhen Jia
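A small illustrative allocator in the spirit of the abstract above: match a resource use request against an available resource list that records counts and available time periods per provider. Field names and the window-coverage rule are assumptions, not the patented method.

```python
# Hypothetical sketch of matching a resource use request against an available
# resource list keyed by provider; names and the matching rule are illustrative.

def allocate(available, requested_count, start, end):
    """available: [{"provider": str, "count": int, "window": (t0, t1)}, ...]"""
    allocation, remaining = [], requested_count
    for entry in available:
        t0, t1 = entry["window"]
        if t0 <= start and end <= t1 and remaining > 0:   # window covers the request
            take = min(entry["count"], remaining)
            allocation.append((entry["provider"], take))
            remaining -= take
    return allocation if remaining == 0 else None   # None: request cannot be satisfied

print(allocate(
    [{"provider": "A", "count": 4, "window": (0, 100)},
     {"provider": "B", "count": 8, "window": (50, 200)}],
    requested_count=6, start=60, end=90,
))  # [('A', 4), ('B', 2)]
```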
  • Patent number: 11494240
    Abstract: The present disclosure relates to computer-implemented methods, software, and systems for dynamic rate limiting of operation execution. A request from a user account for execution of an operation by an application service is received. The total number of operations registered at an operations registry is determined. In response to determining that the total number of registered operations exceeds a first threshold value, the number of registered operations associated with a group account of the user account is determined. If it is determined that (i) the total number of registered operations exceeds the first threshold value and the number of registered operations associated with the group account does not exceed a second threshold value, or (ii) the total number of registered operations does not exceed the first threshold value, the operation is registered at the operations registry. An instruction to execute the registered operation is then sent.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: November 8, 2022
    Assignee: SAP SE
    Inventor: Stoyan Zhivkov Boshev
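A minimal sketch of the two-level registration check described above, assuming an in-memory operations registry; the threshold values, class name, and tuple layout are illustrative assumptions.

```python
# Illustrative two-level rate limiting check: a global threshold on all
# registered operations, and a per-group threshold applied only once the
# global threshold has been exceeded.

class OperationsRegistry:
    def __init__(self, global_limit=100, group_limit=10):
        self.global_limit = global_limit
        self.group_limit = group_limit
        self.registered = []   # list of (operation, group) tuples

    def try_register(self, operation, group):
        total = len(self.registered)
        if total >= self.global_limit:
            # Global threshold exceeded: only admit the operation if the caller's
            # group account is not already consuming more than its share.
            group_total = sum(1 for _, g in self.registered if g == group)
            if group_total >= self.group_limit:
                return False
        self.registered.append((operation, group))
        return True   # caller may now send the instruction to execute the operation
```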
  • Patent number: 11487587
    Abstract: An information processing system includes an information processing apparatus and a management apparatus. A first processor of the information processing apparatus controls resource allocation to a first virtual machine that operates on the information processing apparatus and executes a virtual load balancer that distributes a first load to a plurality of second virtual machines. When a second load of the virtual load balancer exceeds a predetermined first threshold value, the first processor notifies the management apparatus of the occurrence of an overload. The first processor receives and executes an addition command that adds a resource to the allocation of the first virtual machine. A second processor of the management apparatus, upon being notified of the occurrence of the overload, creates the addition command based on resource information of the information processing apparatus and management information of the virtual load balancer.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: November 1, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Junichi Matsuda, Keiji Miyauchi, Yuuichi Kobayashi
  • Patent number: 11481262
    Abstract: A scaling manager manages deques that track groups of preinitialized instances used to scale respective groups of active compute instances. Various techniques for deque management include a rate-based technique that uses a historical scale-up rate for a particular group and adjusts the size of the deque of preinitialized instances for that group based on the monitored scale-up rate and based on an instance preinitialization time for instances for that group. A total instance quantity may be bounded, in some examples, and an additional “buffer amount” of preinitialized instances may be implemented to provide a safety margin for burst scaling, which can be further enhanced by transferring instances between data structures of different groups of instances in some cases.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: October 25, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Nilesh Girish Telang, Abhishek Saha, Dhruven Nimesh Shah, Pratik Shilwant
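An illustrative sizing rule following the rate-based idea in the abstract above: keep enough preinitialized instances to cover the scale-up demand expected during one preinitialization period, plus a buffer, bounded by a total instance quantity. The function name and numbers are assumptions.

```python
# Rate-based pool sizing sketch: target = scale-up rate * preinitialization time
# + buffer, capped at a bounded total instance quantity.
from collections import deque

def target_pool_size(scale_up_rate_per_s, preinit_time_s, buffer_amount, max_total):
    needed = int(scale_up_rate_per_s * preinit_time_s) + buffer_amount
    return min(needed, max_total)   # total instance quantity may be bounded

warm_pool = deque()
target = target_pool_size(scale_up_rate_per_s=0.5, preinit_time_s=40,
                          buffer_amount=5, max_total=100)
while len(warm_pool) < target:
    warm_pool.append("preinitialized-instance")   # placeholder for a real launch call
print(target, len(warm_pool))   # 25 25
```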
  • Patent number: 11474856
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for generating information. The method may include: acquiring at least one to-be-processed service and at least one piece of relationship information, each piece of relationship information representing an execution order between two to-be-processed services among the at least one to-be-processed service; constructing a directed graph with each to-be-processed service as a vertex and each piece of relationship information as a directed edge; and generating execution order information for the to-be-processed services according to the directed graph.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: October 18, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xiaoxu Chen, Zhiyuan Zhang, Feng Liu, Tao Yang, Xiang Gao
  • Patent number: 11474877
    Abstract: An object of the present invention is to provide a computer system and a scale-out method for the computer system that reduce the possibility of performance deterioration when cloud bursting is executed. When scale-out is executed in a public cloud, a data processing system starts a data copy process that copies data stored in an on-premises first data storage area to a second data storage area of a storage cluster in the public cloud via a first network. When starting the data copy process, the data processing system executes the scale-out by increasing the number of processing nodes while still accessing the data stored in the first data storage area.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: October 18, 2022
    Assignee: HITACHI, LTD.
    Inventors: Kiyomi Wada, Shinichi Hayashi
  • Patent number: 11416302
    Abstract: A computer system determines an allocation of resources in a task formed of processes, where the task includes transitions between processes corresponding to rework. The computer system comprises: at least one predictor configured to calculate predicted values of the inflow amount and the outflow amount of items for each of the processes forming the task; and a resource allocation determining unit configured to determine an allocation of the resources to each of the processes. Upon receiving a request that includes a resource constraint condition and an optimization condition, the resource allocation determining unit uses the at least one predictor to form a simulator that calculates the predicted inflow and outflow amounts for each process under any allocation of the resources, and then determines the allocation of the resources to each of the processes.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: August 16, 2022
    Assignee: HITACHI, LTD.
    Inventors: Kunihiko Harada, Takeshi Uehara, Kazuaki Tokunaga, Toshiyuki Ukai
  • Patent number: 11392414
    Abstract: A node management protocol is disclosed herein. The protocol can be used for task distribution in multi-node systems. The node management protocol can implement a cooperation-based task distribution algorithm that does not rely on consensus. When a task is ingested into a cluster of nodes, the nodes can compete to handle the task. A transport layer helps coordinate among nodes and facilitates the handling of work. A session expiry protocol handles node failures with the remaining nodes reassigning work.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: July 19, 2022
    Assignee: Target Brands, Inc.
    Inventors: Christopher Fretz, Hrishikesh V. Prabhune, Luis F. Stevens
  • Patent number: 11385937
    Abstract: A method for managing data includes obtaining, by a management module, a redeployment request, wherein the redeployment request specifies a first workload and workload specifications for a second workload, and in response to the redeployment request: identifying, using a resource allocation master list, a plurality of resource devices, wherein each resource device in the plurality of resource devices is in an available status, selecting, from the plurality of resource devices, a first resource device based on the workload specifications, initiating a configuration of the first resource device, and updating the resource allocation master list to specify an allocation of the first resource device to the second workload.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: July 12, 2022
    Assignee: Dell Products L.P.
    Inventors: Rizwan Ali, Ravikanth Chaganti, Dharmesh M. Patel
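A hedged sketch of the redeployment flow described above: select an available device from a resource allocation master list that meets the workload specifications, then record the new allocation. Field names and the selection rule are assumptions.

```python
# Illustrative redeployment step: filter the master list for available devices
# that satisfy the workload specifications, allocate the first match, and
# update the master list.

def redeploy(master_list, workload_specs, new_workload):
    candidates = [d for d in master_list
                  if d["status"] == "available"
                  and d["cpus"] >= workload_specs["cpus"]
                  and d["memory_gb"] >= workload_specs["memory_gb"]]
    if not candidates:
        return None
    device = candidates[0]                      # could also rank by best fit
    device["status"] = "allocated"              # configuration is initiated elsewhere
    device["workload"] = new_workload           # update the master list entry
    return device["id"]

master = [{"id": "r1", "status": "available", "cpus": 8, "memory_gb": 64},
          {"id": "r2", "status": "in-use", "cpus": 16, "memory_gb": 128}]
print(redeploy(master, {"cpus": 4, "memory_gb": 32}, "workload-2"))   # r1
```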
  • Patent number: 11379268
    Abstract: A workflow service implements policies that leverage affinities to improve system performance. Example policy types include a parent activity affinity (e.g., data for a task already exists locally on a node), a resource group affinity (e.g., a task requires a particular type of resource capability), and a code affinity (e.g., code necessary for the task is already stored locally on a node). A node for a particular task is selected based on an affinity policy or policy statement (e.g., the type indicated in a policy of a workflow definition) and configuration information, and the task is routed to the selected node.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: July 5, 2022
    Assignee: Amazon Technologies, Inc.
    Inventor: Kuldeep Gupta
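An illustrative node-selection routine keyed on the three affinity policy types named in the abstract above; the node and task fields, and the fallback-to-least-loaded rule, are assumptions.

```python
# Sketch of affinity-based routing: narrow the candidate nodes according to the
# policy type, then fall back to the least loaded node when no affinity matches.

def select_node(task, nodes, policy):
    if policy == "parent_activity":       # prefer a node that already holds the task's data
        preferred = [n for n in nodes if task["data_key"] in n["local_data"]]
    elif policy == "resource_group":      # prefer nodes with the required capability
        preferred = [n for n in nodes if task["capability"] in n["capabilities"]]
    elif policy == "code":                # prefer nodes with the task's code cached
        preferred = [n for n in nodes if task["code_id"] in n["code_cache"]]
    else:
        preferred = []
    pool = preferred or nodes             # fall back to any node if no affinity match
    return min(pool, key=lambda n: n["load"])["name"]

nodes = [
    {"name": "n1", "load": 0.7, "local_data": {"order-42"}, "capabilities": {"gpu"}, "code_cache": set()},
    {"name": "n2", "load": 0.2, "local_data": set(), "capabilities": set(), "code_cache": {"img-v3"}},
]
print(select_node({"data_key": "order-42", "capability": "gpu", "code_id": "img-v3"},
                  nodes, policy="parent_activity"))   # n1
```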
  • Patent number: 11366703
    Abstract: Techniques for dynamic application management are provided. For example, an apparatus comprises at least one processing platform configured to: execute a portion of an application program in a first virtual computing element, wherein the application program comprises at least one portion of marked code; receive a request for execution of the portion of marked code; determine, based at least in part on the portion of marked code, one or more cloud platforms on which to execute the portion of marked code; and cause the portion of marked code identified in the request to be executed on the one or more cloud platforms.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: June 21, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Thinh Lam Truong Nguyen, Victor Fong, Xuebin He, Kenneth Durazzo, Orlando X. Nieves
  • Patent number: 11360778
    Abstract: Techniques for process execution trend prediction and visualization are disclosed. The disclosed system receives a process execution request to be executed on a set of targets. The request may include request characteristics, such as a request type and computations to be performed during execution. The system analyzes the request characteristics to determine the computations to execute and initiates request execution on the targets. Based on the analysis, the system generates predictions regarding the execution, including an estimated completion time. During execution, the system displays various attributes of the execution in a dynamically updating visualization. The system also provides real-time recommendations on how the process can be optimized, for example to reduce execution time and errors.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: June 14, 2022
    Assignee: Oracle International Corporation
    Inventor: Anadi Upadhyaya
  • Patent number: 11360801
    Abstract: Aspects of a workflow evaluation interface for validating, debugging, and evaluating virtual machine workflows are described. In one example, a method for displaying a workflow includes capturing a workflow for management of at least one virtual machine. The workflow can include a number of schema elements, among other attributes, along with input parameters and output parameters for tasks of the workflow. The method can also include evaluating a logical flow among the schema elements in the workflow and populating a flow panel in a workflow evaluation interface. The flow panel can include a hierarchical flow of tasks in the workflow and at least one nested multi-task sequence in the workflow, along with a caret to expand the nested multi-task sequence as a branch of the hierarchical flow of tasks. The method can also include rendering a graphical representation of the logical flow among the schema elements.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: June 14, 2022
    Assignee: VMWARE, INC.
    Inventors: Nikola Arnaudov, Valentin Likyov, Daniel Vatov
  • Patent number: 11360798
    Abstract: An illustrated embodiment disclosed herein is an apparatus including a processor having programmed instructions to receive, from a user device, a request to identify a service for which a first load capability correlates with a second load capability of the endpoint. The processor has programmed instructions to, for each of a plurality of services of the endpoint, send one or more I/O requests, determine a metric associated with the one or more I/O requests, and determine a load capability based on the metric. The processor has programmed instructions to identify a first service having a load capability that satisfies a threshold and send, to the user device, an indication of the first service.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: June 14, 2022
    Assignee: Nutanix, Inc.
    Inventors: Anirudha Narsinha Sonar, Dhruv Vijay Doshi, Rajkumar Arunkumar Joshi
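A hypothetical sketch of the probing step described above: send a few I/O requests to each service of an endpoint, derive a rough load-capability metric, and return the first service whose capability satisfies the threshold. The callable-per-service interface is an assumption.

```python
# Illustrative probing loop: measure latency of a handful of I/O requests per
# service and compare the derived throughput metric against a threshold.
import time

def identify_service(services, threshold_ops_per_s, probes=5):
    for name, handler in services.items():
        start = time.perf_counter()
        for _ in range(probes):
            handler()                          # issue an I/O request
        elapsed = time.perf_counter() - start
        capability = probes / elapsed          # rough ops/second estimate
        if capability >= threshold_ops_per_s:
            return name                        # indication sent back to the user device
    return None

services = {"slow": lambda: time.sleep(0.01), "fast": lambda: None}
print(identify_service(services, threshold_ops_per_s=1000))   # likely 'fast'
```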
  • Patent number: 11354155
    Abstract: A system and method for operating fewer servers near maximum capacity, as opposed to operating more servers at low capacity, are disclosed. Computational tasks are made as small as possible so that they can be completed within the available capacity of the servers. Computational tasks that are similar may be distributed to the same computing node (including a processor) to improve RAM utilization. Additionally, workloads may be scheduled onto multicore processors to maximize the average number of processing cores utilized per clock cycle.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: June 7, 2022
    Assignee: United Services Automobile Association (USAA)
    Inventors: Nathan Lee Post, Bryan J. Osterkamp, William Preston Culbertson, II, Ryan Thomas Russell, Ashley Raine Philbrick
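A simple first-fit-decreasing packing sketch in the spirit of the abstract above: place tasks onto as few servers as possible so that each active server runs near capacity. Capacities, task sizes, and the packing heuristic are illustrative assumptions.

```python
# First-fit-decreasing packing: sort tasks by size, place each on the first
# server with room, and open a new server only when nothing fits.

def pack(tasks, server_capacity):
    servers = []                                  # each entry is remaining capacity
    assignment = {}
    for name, size in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
        for i, free in enumerate(servers):
            if size <= free:
                servers[i] -= size
                assignment[name] = i
                break
        else:
            servers.append(server_capacity - size)   # open a new server only when needed
            assignment[name] = len(servers) - 1
    return assignment, len(servers)

print(pack({"a": 6, "b": 5, "c": 4, "d": 3, "e": 2}, server_capacity=10))
# ({'a': 0, 'b': 1, 'c': 0, 'd': 1, 'e': 1}, 2)
```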
  • Patent number: 11354170
    Abstract: A system, method, and computer-readable medium for performing a workload analysis operation. The workload analysis operation includes receiving workload data from a data source; defining a plurality of workload seeds, each of the plurality of workload seeds defining a particular type of workload; and identifying a particular infrastructure configuration using the workload data and the plurality of workload seeds.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: June 7, 2022
    Assignee: Dell Products L.P.
    Inventors: Prakash Sridharan, Bud Koch, Sivaram Lakshminarayanan, Shiek Mohammed Gulam Mohamed Hussain, Pronay Sarkar
  • Patent number: 11354152
    Abstract: Methods, systems, and apparatuses may provide for managing or operating self-evolving microservices. A method, system, computer readable storage medium, or apparatus may provide for receiving a message associated with a service, wherein the message is for a first microservice that contributes to the implementation of the service, wherein the first microservice comprises a plurality of rules; and subsequent to receiving the message, creating an evolved first microservice using a second rule of a plurality of rules of a second microservice.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: June 7, 2022
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Mohammad Nikain, Daniel Connolly
  • Patent number: 11321125
    Abstract: In a multitask computing system, there are multiple tasks, including a first task, a second task, and a third task, where the first task has a higher priority than the second task and the third task. A method includes raising the priority of the second task, which shares a first critical section with the first task and is accessing the first critical section, when the first task is blocked due to failure to access the first critical section; determining whether there is a third task that shares a second critical section with the second task and is accessing the second critical section; and raising, when such a third task is present, the priority of the third task. The techniques of the present disclosure prevent a low-priority third task from delaying the execution of the second task, thus avoiding the priority inversion that would otherwise delay execution of the high-priority first task.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: May 3, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Lingjun Chen, Bin Wang, Liangliang Zhu, Xu Zeng, Zilong Liu, Junjie Cai
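A sketch of the transitive priority-raising idea described above: when a high-priority task blocks on a critical section, raise the holder's priority, and keep following the chain if that holder is itself blocked on a further holder. The data structures are illustrative, not the patented implementation.

```python
# Transitive priority inheritance sketch: follow the chain of critical-section
# holders starting from the blocked high-priority task, raising each holder's
# priority to match.

def boost_priorities(blocked_task, tasks):
    """tasks: {name: {"priority": int, "blocked_on": holder_name or None}}"""
    current = blocked_task
    while True:
        holder = tasks[current]["blocked_on"]       # task holding the needed critical section
        if holder is None:
            break
        if tasks[holder]["priority"] < tasks[blocked_task]["priority"]:
            tasks[holder]["priority"] = tasks[blocked_task]["priority"]   # inherit priority
        current = holder                            # follow nested critical sections

tasks = {
    "first":  {"priority": 10, "blocked_on": "second"},   # high-priority task, blocked
    "second": {"priority": 5,  "blocked_on": "third"},    # shares section 1 with first
    "third":  {"priority": 1,  "blocked_on": None},       # shares section 2 with second
}
boost_priorities("first", tasks)
print(tasks["second"]["priority"], tasks["third"]["priority"])   # 10 10
```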