Patents Examined by Eric C. Wai
  • Patent number: 11521112
    Abstract: A method for database management is disclosed. The method may include receiving an algorithm from a user. Based on the algorithm, a hierarchical dataflow graph (hDFG) may be generated. The method may further include generating an architecture for a chip based on the hDFG. The architecture for a chip may retrieve a data table from a database. The data table may be associated with the architecture for a chip. Finally, the algorithm may be executed against the data table, such that an action included in the algorithm is performed.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: December 6, 2022
    Assignee: Georgia Tech Research Corporation
    Inventors: Hadi Esmaeilzadeh, Divya Mahajan, Joon Kyung Kim
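The entry above lowers a user-supplied algorithm into a hierarchical dataflow graph (hDFG) before generating a chip architecture. The patent listing does not describe its graph format, so the following is only a minimal sketch of what a hierarchical dataflow node might look like; the `HDFGNode` class and its field names are illustrative assumptions, not the patented representation.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: a node in a hierarchical dataflow graph (hDFG).
# A node is either a primitive operation or a sub-graph of child nodes,
# which is what makes the graph "hierarchical".
@dataclass
class HDFGNode:
    name: str                                                # e.g. "filter", "sum"
    inputs: List[str] = field(default_factory=list)          # names of producer nodes
    children: List["HDFGNode"] = field(default_factory=list) # nested sub-graph

    def is_leaf(self) -> bool:
        return not self.children

    def flatten(self) -> List["HDFGNode"]:
        """Walk the hierarchy and return leaf operations in definition order."""
        if self.is_leaf():
            return [self]
        ops = []
        for child in self.children:
            ops.extend(child.flatten())
        return ops

# Example: a small aggregation query expressed as a two-level hDFG.
scan = HDFGNode("scan_table")
filt = HDFGNode("filter", inputs=["scan_table"])
summ = HDFGNode("sum", inputs=["filter"])
root = HDFGNode("aggregate_query", children=[scan, filt, summ])
print([n.name for n in root.flatten()])   # ['scan_table', 'filter', 'sum']
```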
  • Patent number: 11520631
    Abstract: A plurality of processing entities in which a plurality of tasks are executed are maintained. Memory access patterns are determined for each of the plurality of tasks by dividing a memory associated with the plurality of processing entities into a plurality of memory regions, and for each of the plurality of tasks, determining how many memory accesses take place in each of the memory regions, by incrementing a counter associated with each memory region in response to a memory access. Each of the plurality of tasks is allocated among the plurality of processing entities, based on the determined memory access patterns for each of the plurality of tasks.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: December 6, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
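As a rough illustration of the region-counter idea in the abstract above, the sketch below divides an address space into fixed-size regions, counts accesses per region for each task, and then assigns each task to the processing entity whose regions it touches most. The region size, the region-to-entity mapping, and the greedy assignment are all assumptions made for the sketch; the patent listing does not prescribe them.

```python
from collections import defaultdict

REGION_SIZE = 1 << 20   # assume 1 MiB regions; the abstract does not fix a size
NUM_ENTITIES = 4        # assume 4 processing entities, each owning a range of regions

def region_of(address: int) -> int:
    return address // REGION_SIZE

def profile_task(accesses):
    """Increment a counter per memory region for every access a task makes."""
    counters = defaultdict(int)
    for addr in accesses:
        counters[region_of(addr)] += 1
    return counters

def allocate(task_profiles):
    """Greedy illustration: send each task to the entity that owns its hottest region."""
    placement = {}
    for task, counters in task_profiles.items():
        hottest_region = max(counters, key=counters.get)
        placement[task] = hottest_region % NUM_ENTITIES   # assumed region->entity mapping
    return placement

# Tiny example: task "a" mostly hits low addresses, task "b" mostly high ones.
profiles = {
    "a": profile_task([0x1000, 0x2000, 0x100000, 0x1500]),
    "b": profile_task([0x500000, 0x510000, 0x520000]),
}
print(allocate(profiles))   # tasks land on different entities
```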
  • Patent number: 11507465
    Abstract: The subject technology requests information regarding an instance identifier of a compute service manager instance assigned to a particular job. The subject technology retrieves information related to a set of instances of compute service managers in a set of virtual warehouses. The subject technology filters the information to determine a set of candidates from the set of instances of compute service managers. The subject technology sorts the set of candidates based at least in part on a workload. The subject technology selects a candidate compute service manager to issue a query restart by randomly selecting an execution node.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: November 22, 2022
    Assignee: Snowflake Inc.
    Inventors: Ata E. Husain Bohra, Daniel Geoffrey Karp
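The selection flow in the abstract above (retrieve instances, filter candidates, sort by workload, pick one) can be illustrated in a few lines of ordinary Python. The `Instance` fields, the filter condition, and the random pick among the least-loaded candidates are assumptions made for this sketch, not Snowflake's actual logic.

```python
import random
from dataclasses import dataclass

@dataclass
class Instance:
    instance_id: str
    healthy: bool
    workload: int      # e.g. number of jobs currently handled (assumed metric)

def pick_restart_target(instances, exclude_id):
    # Filter: drop the instance that owned the failed job and any unhealthy ones.
    candidates = [i for i in instances if i.healthy and i.instance_id != exclude_id]
    if not candidates:
        return None
    # Sort by workload, then choose randomly among the least-loaded candidates
    # so restarts do not all pile onto a single instance.
    candidates.sort(key=lambda i: i.workload)
    least_loaded = [i for i in candidates if i.workload == candidates[0].workload]
    return random.choice(least_loaded)

instances = [
    Instance("csm-1", True, 12),
    Instance("csm-2", True, 3),
    Instance("csm-3", False, 0),
    Instance("csm-4", True, 3),
]
target = pick_restart_target(instances, exclude_id="csm-1")
print(target.instance_id)   # "csm-2" or "csm-4"
```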
  • Patent number: 11507419
    Abstract: A task scheduling method comprises the steps of: in response to the reception of a request for processing a plurality of task sets, creating a current to-be-scheduled task queue in a task processing system based on priorities of the plurality of task sets and tasks in the plurality of task sets, where a plurality of to-be-scheduled tasks in the current to-be-scheduled task queue are scheduled in the same round of scheduling; allocating computing resources used for scheduling the plurality of to-be-scheduled tasks; and enabling the plurality of to-be-scheduled tasks in the current to-be-scheduled task queue to be scheduled by using the computing resources. In this manner, a plurality of tasks with different priorities and quotas can be scheduled according to SLA levels of users, and the efficiency and flexibility of parallel services of cloud computing deep learning models are improved by using a run-time load-balancing scheduling solution.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: November 22, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jin Li, Jinpeng Liu, Wuichak Wong
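The abstract above builds one scheduling round from several task sets that carry different priorities and quotas. Below is a minimal sketch of that idea, assuming a simple priority-plus-quota policy and invented field names; it is not the patented scheduler.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                      # lower number = higher priority (assumed convention)
    name: str = field(compare=False)

def build_round(task_sets, quota_per_set):
    """Take up to quota_per_set tasks from each set, highest priority first,
    and merge them into the queue for the current scheduling round."""
    round_queue = []
    for tasks in task_sets:
        for task in heapq.nsmallest(quota_per_set, tasks):
            heapq.heappush(round_queue, task)
    return [heapq.heappop(round_queue) for _ in range(len(round_queue))]

set_a = [Task(1, "train-model"), Task(3, "export-logs")]
set_b = [Task(2, "serve-requests"), Task(2, "batch-infer"), Task(5, "cleanup")]
for task in build_round([set_a, set_b], quota_per_set=2):
    print(task.priority, task.name)    # tasks come out in priority order
```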
  • Patent number: 11494223
    Abstract: In accordance with embodiments disclosed herein, there are provided mechanisms and methods for automating deployment of applications in a multi-tenant database environment. For example, in one embodiment, mechanisms include managing a plurality of machines operating as a machine farm within a datacenter by executing an agent provisioning script at a control hub, instructing the plurality of machines to download and instantiate a lightweight agent; pushing a plurality of URL (Uniform Resource Locator) references from the control hub to the instantiated lightweight agent on each of the plurality of machines specifying one or more applications to be provisioned and one or more dependencies for each of the applications; and loading, via the lightweight agent at each of the plurality of machines, the one or more applications and the one or more dependencies for each of the one or more applications into memory of each respective machine.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: November 8, 2022
    Assignee: salesforce.com, inc.
    Inventors: Pallav Kothari, Phillip Oliver Metting van Rijn
  • Patent number: 11487523
    Abstract: A method for hot updating a machine emulator includes requesting specified memory, which is used to store the virtual machine memory address and virtual machine status information and is not released when the machine emulator is updated, and restoring the virtual machine status information from the specified memory after the machine emulator is updated. Thus, the techniques of the present disclosure accelerate recovery and shorten updating time.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: November 1, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Xiantao Zhang, Junkang Fu
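The core trick in the abstract above is keeping guest state in memory that survives the emulator update, so the new emulator process can restore it instead of rebooting the guest. The sketch below imitates that with a memory-mapped file that outlives a process restart; the file path, state layout, and use of `mmap` are illustrative assumptions, not how production hot updates are implemented.

```python
import json
import mmap
import os

STATE_PATH = "/tmp/vm_state.bin"   # assumed location of the preserved region
STATE_SIZE = 4096

def save_state(state: dict) -> None:
    """Old emulator: write VM status into a region that is not released on update."""
    with open(STATE_PATH, "wb") as f:
        f.truncate(STATE_SIZE)
    with open(STATE_PATH, "r+b") as f:
        with mmap.mmap(f.fileno(), STATE_SIZE) as region:
            blob = json.dumps(state).encode()
            region[: len(blob)] = blob

def restore_state() -> dict:
    """New emulator: recover VM status from the preserved region after the update."""
    with open(STATE_PATH, "r+b") as f:
        with mmap.mmap(f.fileno(), STATE_SIZE) as region:
            raw = region[:].split(b"\x00", 1)[0]
            return json.loads(raw)

save_state({"vcpu0_rip": "0xffffffff81000000", "guest_ram_base": "0x7f0000000000"})
print(restore_state())
os.remove(STATE_PATH)
```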
  • Patent number: 11474876
    Abstract: A data processing device (10), which is connectable to another data processing device (20), includes a data collector (150) to collect data from a machine (31, 32), a data processor (120) to process input data, a coordination processor (160) to provide the data collector (150) or the data processor (120) with data transmitted from the another data processing device (20) and provide the another data processing device (20) with the data collected by the data collector (150) or the data processed by the data processor (120), and an execution controller (130) to control collection of data by the data collector (150), processing of data by the data processor (120), and transmission and reception of data to and from the another data processing device (20) by the coordination processor (160) in accordance with predetermined setting information for a data processing flow.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: October 18, 2022
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Osamu Nasu
  • Patent number: 11474872
    Abstract: Techniques for implementing an infrastructure orchestration service are described. In some examples, a declarative provisioner of the infrastructure orchestration service receives instructions for deployment of a resource. The declarative provisioner identifies that the deployment of the resource is a long-running task and stores state information corresponding to the deployment of the resource. In certain embodiments, upon identifying that the deployment of the resource is a long-running task, the declarative provisioner pauses its execution of the long-running task. Responsive to a trigger received from the infrastructure orchestration service, the declarative provisioner resumes execution of the deployment of the resource using the state information and transmits deployment information corresponding to the deployment of the resource to the infrastructure orchestration service.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: October 18, 2022
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Phillip Vassenkov, Nathaniel Martin Glass, Eric Tyler Barsalou, Caleb Dockter
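The pause/resume pattern in the abstract above can be mimicked with a Python generator: the provisioner yields when it detects a long-running deployment, its state is persisted, and it is driven forward again when the orchestration service sends a trigger. Everything here (the threshold, the state dict, the function names) is an assumption used to illustrate the pattern, not Oracle's implementation.

```python
LONG_RUNNING_THRESHOLD_S = 300   # assumed cutoff for "long-running"

def deploy(resource, estimated_duration_s):
    """Generator-based sketch of a declarative provisioner step."""
    if estimated_duration_s > LONG_RUNNING_THRESHOLD_S:
        # Pause: hand state back to the orchestration service and wait for a trigger.
        state = {"resource": resource, "phase": "creation_requested"}
        trigger = yield state
        print(f"resumed {resource} after trigger: {trigger}")
    # Finish the deployment and report back.
    yield {"resource": resource, "phase": "deployed"}

# The orchestration service drives the generator.
run = deploy("db-instance-1", estimated_duration_s=1800)
paused_state = next(run)                   # provisioner pauses; state is persisted
print("persisted:", paused_state)
final = run.send("work-request-complete")  # a later trigger resumes execution
print("result:", final)
```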
  • Patent number: 11442433
    Abstract: A receiver (131) receives an acquisition request to acquire a value of real resource information associated with a device (20) connected to a network or a value of virtual resource information associated with a calculation result of calculation performed using a value of the real resource information. A real resource information acquirer (1352) acquires the value of the real resource information by causing a collector (140) to collect a value from the device (20) associated with the real resource information. A virtual resource information acquirer (1353) acquires the value of the virtual resource information by causing a calculator (1354) to perform calculation using the value of the real resource information. A responder (136) returns a response including the value of the real resource information or a response including the value of the virtual resource information based on the received acquisition request.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: September 13, 2022
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Osamu Nasu, Kohei Kato
  • Patent number: 11442763
    Abstract: A virtual machine deployment system includes a plurality of processing subsystems, and at least one multi-endpoint adapter device including a plurality of endpoint subsystems. A plurality of communication couplings couple each of the plurality of endpoint subsystems to at least one of the plurality of processing subsystems in order to provide a respective subset of available communication resources to each of the plurality of processing subsystems. A virtual machine deployment engine receives an instruction to deploy a virtual machine, and determines at least one communication resource requirement for the virtual machine. The virtual machine deployment engine then identifies a first processing subsystem that is included in the plurality of processing subsystems and that is provided a first subset of the available communication resources that satisfies the at least one communication resource requirement for the virtual machine, and deploys the virtual machine on the first processing subsystem.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: September 13, 2022
    Assignee: Dell Products L.P.
    Inventors: Shyamkumar T. Iyer, Yogesh Varma, Timothy M. Lambert, William Price Dawkins, Kurtis John Bowman
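A toy version of the placement decision described above: each processing subsystem advertises the subset of communication resources wired to it through the multi-endpoint adapter, and the deployment engine picks the first subsystem whose subset satisfies the VM's requirement. The resource names, numbers, and the first-fit policy are assumptions made for this sketch.

```python
# Available communication resources per processing subsystem (assumed numbers).
subsystems = {
    "cpu0": {"bandwidth_gbps": 25, "queues": 8},
    "cpu1": {"bandwidth_gbps": 100, "queues": 32},
}

def place_vm(requirement):
    """Return the first processing subsystem whose resources satisfy the requirement."""
    for name, resources in subsystems.items():
        if all(resources.get(k, 0) >= v for k, v in requirement.items()):
            return name
    return None   # no subsystem can host the VM as-is

vm_requirement = {"bandwidth_gbps": 40, "queues": 16}
print(place_vm(vm_requirement))   # "cpu1"
```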
  • Patent number: 11436149
    Abstract: Distributed computing system (DCS) performance is enhanced by caching optimizations. The DCS includes nodes with local caches. Resource accessors such as users are clustered based on their similarity, and the clusters are assigned to nodes. Then processing workloads are distributed among the nodes based on the accessors the workloads implicate, and based on which nodes were assigned to those accessors' clusters. Clustering may place security peers together in a cluster, and hence place peers together on a node. Security peers tend to access the same resources, so those resources will more often be locally cached, improving performance. Workloads implicating peers also tend to access the same resources, such as peers' behavior histories, so those resources will likewise tend to be cached locally, thus optimizing performance as compared for example to randomly assigning accessors to nodes without clustering and without regard to security peer groupings.
    Type: Grant
    Filed: January 19, 2020
    Date of Patent: September 6, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Amir Harar, Tomer Haimovich, Itay Argoety
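To make the cache-locality argument above concrete, the sketch below clusters accessors by the overlap of the resources they touch (a stand-in for "security peers"), then routes each accessor's workloads to the node assigned to that accessor's cluster. The Jaccard-similarity clustering and the two-node assignment are simplifications; the abstract does not commit to a specific clustering algorithm.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Resources each accessor historically touches (assumed data).
access_history = {
    "alice": {"repo1", "repo2", "build-logs"},
    "bob":   {"repo1", "repo2"},
    "carol": {"billing-db", "invoices"},
    "dave":  {"billing-db", "invoices", "ledger"},
}

def cluster_accessors(history, threshold=0.3):
    """Greedy single-pass clustering: join the first similar-enough cluster, else start one."""
    clusters = []
    for user, resources in history.items():
        for cluster in clusters:
            if any(jaccard(resources, history[member]) >= threshold for member in cluster):
                cluster.append(user)
                break
        else:
            clusters.append([user])
    return clusters

clusters = cluster_accessors(access_history)
node_of_cluster = {i: f"node-{i % 2}" for i in range(len(clusters))}  # assume 2 nodes
accessor_to_node = {u: node_of_cluster[i] for i, c in enumerate(clusters) for u in c}
print(clusters)            # [['alice', 'bob'], ['carol', 'dave']]
print(accessor_to_node)    # peers share a node, so their resources stay in its local cache
```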
  • Patent number: 11436042
    Abstract: Embodiments of the present disclosure relate to migrating the in-memory state of a containerized application to a destination node. A processing device may identify a destination node on which a container currently running on a source node is to be migrated. The processing device may determine whether the destination node includes a replica of each base layer the container is comprised of, and may transmit a replica of each base layer the destination node is missing to the destination node. The processing device may halt the stream of data from the application to the client device, and transfer a replica of an in-memory layer of the container to the destination node so that the destination node further includes a second in-memory layer that is a replica of the in-memory layer.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: September 6, 2022
    Assignee: Red Hat, Inc.
    Inventors: Aiden Keating, Jason Madigan
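Reading the abstract above as a layer-diff protocol: the source sends only the base layers the destination is missing, then sends a replica of the in-memory layer so the migrated container resumes with its state. The sketch below shows that diffing step with plain sets; the layer identifiers and transfer step are placeholders, not Red Hat's migration API.

```python
def plan_migration(container_layers, destination_layers, in_memory_layer):
    """Return the layers that actually need to be copied to the destination node."""
    missing_base_layers = [l for l in container_layers if l not in destination_layers]
    # The in-memory layer always moves, since it holds the live application state.
    return missing_base_layers + [in_memory_layer]

container_layers = ["base-os", "python-runtime", "app-code"]   # image layers
destination_layers = {"base-os", "python-runtime"}             # already cached there
in_memory_layer = "app-state-snapshot"                         # placeholder identifier

for layer in plan_migration(container_layers, destination_layers, in_memory_layer):
    print("transfer ->", layer)   # only "app-code" and the in-memory layer move
```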
  • Patent number: 11429452
    Abstract: This disclosure includes an improvement to hashing methods, which can help achieve faster load balancing of computing resources (e.g., processors, storage systems, web servers or other computer systems, etc.). This improvement may be particularly beneficial when a quantity of the available resources changes. Such hashing methods may include assigning a data object associated with a key to a particular computing resource of the available computing resources by using two auxiliary functions that work together to uniformly distribute data objects across available computing resources and reduce an amount of time to assign the data object to the particular computing resource.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: August 30, 2022
    Assignee: PayPal, Inc.
    Inventor: Eric Leu
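The patent above targets the classic problem of reassigning keys when the number of available resources changes. Its specific two-auxiliary-function construction is not reproduced here; as a point of comparison, the sketch below shows Jump Consistent Hash (Lamping and Veach, 2014), a well-known alternative that also distributes keys uniformly and moves only a small fraction of them when a resource is added.

```python
def jump_consistent_hash(key: int, num_buckets: int) -> int:
    """Map a 64-bit key to one of num_buckets resources (Lamping & Veach, 2014)."""
    b, j = -1, 0
    while j < num_buckets:
        b = j
        key = (key * 2862933555777941757 + 1) & 0xFFFFFFFFFFFFFFFF
        j = int((b + 1) * (1 << 31) / ((key >> 33) + 1))
    return b

keys = range(20)
before = {k: jump_consistent_hash(k, 4) for k in keys}   # 4 servers
after = {k: jump_consistent_hash(k, 5) for k in keys}    # one server added
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved} of {len(keys)} keys moved")              # only a small fraction move
```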
  • Patent number: 11412035
    Abstract: Systems and methods for managing tasks in a multi-platform environment are provided. The methods include allocating a set of tasks to a server device; receiving a request for servicing one or more tasks; determining whether rules that are applicable to the tasks have been satisfied; and, based on the determination regarding the rules, either automatically servicing the tasks or transmitting the tasks to the server device for servicing and then receiving a notification of completion of the tasks. Additional tasks may be allocated to additional server devices.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: August 9, 2022
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Gary Welch, Frank Lee, Jeffrey Drew, Riad Mekmouche, Oleg Gerts, Whitney Greene
  • Patent number: 11366692
    Abstract: Tasks of a group are respectively assigned to devices for execution. For each task, a completion time is determined based on an associated cluster of the device to which the task has been assigned for execution. If the completion time of a task exceeds an execution window of the device to which the task has been assigned, the task is removed from the group. The tasks remaining in the group are executed on the devices to which the tasks have been assigned for execution.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: June 21, 2022
    Assignee: MICRO FOCUS LLC
    Inventors: Krishna Mahadevan Ramakrishnan, Venkatesh Ramteke, Shiva Prakash Sm
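A compact restatement of the grouping rule above: estimate each task's completion time from its device's cluster, and drop the task from the group if that estimate overruns the device's execution window. The per-cluster speed factors and window lengths below are invented numbers used only to show the filtering step.

```python
# Assumed per-cluster speed factors and per-device execution windows (seconds).
cluster_speed = {"fast": 1.0, "slow": 2.5}
devices = {
    "dev-1": {"cluster": "fast", "window_s": 600},
    "dev-2": {"cluster": "slow", "window_s": 600},
}

group = [
    {"task": "patch-os",  "device": "dev-1", "base_cost_s": 400},
    {"task": "full-scan", "device": "dev-2", "base_cost_s": 400},  # 400 * 2.5 > 600
]

def prune_group(tasks):
    kept = []
    for t in tasks:
        dev = devices[t["device"]]
        completion = t["base_cost_s"] * cluster_speed[dev["cluster"]]
        if completion <= dev["window_s"]:
            kept.append(t)   # stays in the group and will be executed
    return kept

print([t["task"] for t in prune_group(group)])   # ['patch-os']
```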
  • Patent number: 11368452
    Abstract: An analytics tool includes a network interface and an analytics engine. The network interface receives a request for job analytics of a job. The job comprises uploading a plurality of batches, each of the plurality of batches comprising a subset of information of a data table. A network node of a plurality of network nodes uploads a batch of the plurality of batches. The analytics engine determines the plurality of network nodes used to complete the job. The analytics engine retrieves network node data for each of the plurality of network nodes. The analytics engine generates the job analytics by aggregating the network node data for each of the plurality of network nodes.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: June 21, 2022
    Assignee: Bank of America Corporation
    Inventor: John Abraham
  • Patent number: 11354156
    Abstract: A master device that manages task processing is provided and includes a communication circuit and at least one processor to obtain first real-time resource information associated with resources that a first task processing device currently uses, obtain second real-time resource information associated with resources that a second task processing device currently uses, obtain information associated with processing of a distribution task to be distributed to at least one of the plurality of task processing devices, obtain an amount of resources required for processing the distribution task, identify the first task processing device to be a task processing device to which the distribution task is to be distributed on the basis of the first real-time resource information, the second real-time resource information, and the amount of resources required for processing the distribution task, and transmit the information associated with processing of the distribution task to the first task processing device.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: June 7, 2022
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Kyungrae Kim, Hyeonsang Eom, Seokwon Choi, Yoonsung Nam, Hyunil Shin, Youngmin Won
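The distribution rule in the abstract above reduces to: collect current resource usage from each task processing device, and give the distribution task to a device that still has enough headroom for it. Below is a minimal sketch under that reading, with invented capacity numbers and a simple first-fit choice.

```python
DEVICE_CAPACITY = {"cpu": 8.0, "mem_gb": 16.0}   # assume identical devices

def pick_device(realtime_usage, required):
    """Return the first device whose free resources cover what the task needs."""
    for device, used in realtime_usage.items():
        free = {k: DEVICE_CAPACITY[k] - used[k] for k in DEVICE_CAPACITY}
        if all(free[k] >= required[k] for k in required):
            return device
    return None

usage = {
    "device-1": {"cpu": 7.5, "mem_gb": 12.0},   # first real-time resource information
    "device-2": {"cpu": 2.0, "mem_gb": 4.0},    # second real-time resource information
}
task_requirement = {"cpu": 3.0, "mem_gb": 6.0}
print(pick_device(usage, task_requirement))     # "device-2"
```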
  • Patent number: 11347530
    Abstract: A method for unifying VMs comprises presenting, in a display device, a unified view that includes GUI elements for multiple applications that execute in respective VMs in a computing device. The operation of presenting the unified view may be performed by a unification console that executes in a dedicated VM. The method also comprises (a) after presenting the unified view, receiving, by the unification console, user input pertaining to a selected application; (b) redirecting the user input from the unification console in the dedicated VM to the selected application in its respective VM; (c) receiving, by the unification console outside of the VM for the selected application, application output from the selected application; and (d) rendering output for a user, based on the application output received by the unification console. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: May 31, 2022
    Assignee: Intel Corporation
    Inventors: Scott H. Robinson, Vijay Tewari, Robin C. Knauerhase
  • Patent number: 11340944
    Abstract: A load shedding system provides improved fault tolerance and resilience in message communications. The requesting service application may be configured to send data request(s) to a responding service application. A load shedding manager is programmed or configured to receive the data request(s) and determine, based on one or more configurable criteria and status information, whether to allow the data request(s) to proceed or not. The criteria for making the determination may include various configurable settings, including an error rate time window and threshold values.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: May 24, 2022
    Assignee: ATLASSIAN PTY LTD.
    Inventors: Iccha Sethi, Kevin Conway
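The load shedding decision above, allowing or rejecting a request based on the error rate observed inside a configurable time window and a threshold, has the same shape as a simple circuit breaker. The sketch below is that generic pattern with assumed parameter names; it is not Atlassian's load shedding manager.

```python
import time
from collections import deque

class LoadShedder:
    def __init__(self, window_s=60.0, error_rate_threshold=0.5, min_samples=10):
        self.window_s = window_s
        self.error_rate_threshold = error_rate_threshold
        self.min_samples = min_samples
        self.samples = deque()   # (timestamp, was_error)

    def record(self, was_error: bool, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, was_error))
        self._evict(now)

    def allow_request(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        self._evict(now)
        if len(self.samples) < self.min_samples:
            return True   # not enough data inside the window; fail open
        errors = sum(1 for _, e in self.samples if e)
        return errors / len(self.samples) < self.error_rate_threshold

    def _evict(self, now):
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()

shedder = LoadShedder(window_s=60, error_rate_threshold=0.5, min_samples=5)
for _ in range(6):
    shedder.record(was_error=True)
print(shedder.allow_request())   # False: recent error rate is above the threshold
```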
  • Patent number: 11340950
    Abstract: A service band management system includes management device(s) coupled to a workload provisioning infrastructure. The management device(s) identify a first workload provisioning system in a workload provisioning infrastructure, and determine its first workload provisioning capability. Based on the first workload provisioning capability, the management device(s) map the first workload provisioning system to a first service band, and provision a workload using the first workload provisioning system based on the first service band satisfying a workload requirement for the workload. Subsequently, the management device(s) identify that a second workload provisioning system has been added to the workload provisioning infrastructure, and determine its second workload provisioning capability.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: May 24, 2022
    Assignee: Dell Products L.P.
    Inventors: Ravikanth Chaganti, Dharmesh M. Patel, Rizwan Ali
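Finally, the service-band mapping above amounts to bucketing provisioning systems by measured capability and then provisioning each workload from a band that meets its requirement; newly added systems are mapped the same way. The band boundaries and capability scores below are invented for illustration.

```python
# Assumed capability score thresholds defining each service band.
BANDS = [("gold", 80), ("silver", 50), ("bronze", 0)]

def band_of(capability_score: int) -> str:
    for band, minimum in BANDS:
        if capability_score >= minimum:
            return band
    return "bronze"

provisioning_systems = {"rack-a": 85, "rack-b": 60}   # capability scores (assumed)
band_map = {name: band_of(score) for name, score in provisioning_systems.items()}

def provision(workload_required_band: str):
    order = [b for b, _ in BANDS]
    for name, band in band_map.items():
        # A system satisfies the requirement if its band is at least as good.
        if order.index(band) <= order.index(workload_required_band):
            return name
    return None

print(band_map)              # {'rack-a': 'gold', 'rack-b': 'silver'}
print(provision("silver"))   # 'rack-a' (gold also satisfies a silver requirement)

# When a second workload provisioning system is added later, it is mapped the same way:
band_map["rack-c"] = band_of(95)
```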