Patents Examined by Zujia Xu
  • Patent number: 11983575
    Abstract: The embodiments herein describe a virtualization framework for cache coherent accelerators where the framework incorporates a layered approach for accelerators in their interactions between a cache coherent protocol layer and the functions performed by the accelerator. In one embodiment, the virtualization framework includes a first layer containing the different instances of accelerator functions (AFs), a second layer containing accelerator function engines (AFEs) in each of the AFs, and a third layer containing accelerator function threads (AFTs) in each of the AFEs. Partitioning the hardware circuitry using multiple layers in the virtualization framework allows the accelerator to be quickly re-provisioned in response to requests made by guest operating systems or virtual machines executing in a host. Further, using the layers to partition the hardware permits the host to re-provision sub-portions of the accelerator while the remaining portions of the accelerator continue to operate as normal.
    Type: Grant
    Filed: September 6, 2022
    Date of Patent: May 14, 2024
    Assignee: XILINX, INC.
    Inventors: Millind Mittal, Jaideep Dastidar
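As a rough illustration of the layered AF/AFE/AFT partitioning described in the abstract above, the following Python sketch models the three layers and re-provisions a single engine while the rest of the accelerator state is untouched. The class names, the engine and thread counts, and the `reprovision_engine` helper are all illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class AFT:                      # accelerator function thread (layer 3)
    busy: bool = False

@dataclass
class AFE:                      # accelerator function engine (layer 2)
    threads: list = field(default_factory=lambda: [AFT() for _ in range(4)])

@dataclass
class AF:                       # accelerator function instance (layer 1)
    engines: list = field(default_factory=lambda: [AFE() for _ in range(2)])

def reprovision_engine(af, engine_idx):
    """Re-provision one AFE for a new guest while the AF's other
    engines keep running: only the chosen engine is replaced."""
    af.engines[engine_idx] = AFE()   # fresh engine, fresh idle threads
    return af
```

The point of the sketch is that re-provisioning touches exactly one sub-portion of the hierarchy; sibling engines (and their in-flight threads) are never reset.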
  • Patent number: 11972297
    Abstract: Systems and methods are provided for offloading a task from a central processor in a radio access network (RAN) server to one or more heterogeneous accelerators. For example, a task associated with one or more operational partitions (or a service application) associated with processing data traffic in the RAN is dynamically allocated for offloading from the central processor based on workload status information. One or more accelerators are dynamically allocated for executing the task, where the accelerators may be heterogeneous and may not comprise pre-programming for executing the task. The disclosed technology further enables generating specific application programs for execution on the respective heterogeneous accelerators based on a single set of program instructions.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: April 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Daehyeok Kim
  • Patent number: 11960937
    Abstract: A system and method of dynamically controlling a reservation of resources within a cluster environment to improve response time are disclosed. The method embodiment includes receiving, from a requestor, a request for a reservation of resources in the cluster environment; reserving a first group of resources; evaluating resources within the cluster environment to determine whether the response time can be improved; and, if it can, canceling the reservation for the first group of resources and reserving a second group of resources to process the request at the improved response time.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: April 16, 2024
    Assignee: III Holdings 12, LLC
    Inventor: David B. Jackson
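The reserve-then-improve loop in the abstract above can be sketched as follows. Here `clusters` and the `estimate_response_time` scorer are hypothetical stand-ins for the cluster environment's state and the scheduler's response-time metric:

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    nodes: tuple          # resources held by this reservation
    response_time: float  # estimated time to process the request

def reserve_best(request_size, clusters, estimate_response_time):
    """Reserve an initial group of resources, then cancel and
    re-reserve whenever a group with a better response time appears.

    `clusters` maps a group name to its list of free nodes;
    `estimate_response_time` scores a candidate group (lower is better).
    """
    current = None
    for name, nodes in clusters.items():
        if len(nodes) < request_size:
            continue                       # group cannot satisfy request
        group = tuple(nodes[:request_size])
        rt = estimate_response_time(group)
        if current is None:
            current = Reservation(group, rt)      # first reservation
        elif rt < current.response_time:
            # Better response time found: drop the old reservation
            # and reserve the faster group instead.
            current = Reservation(group, rt)
    return current
```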
  • Patent number: 11960938
    Abstract: The disclosed system specifies, based on measurements of the communication times taken to access a plurality of external databases, a relation between those communication times; when accepting an instruction to execute processing using at least one of the external databases, calculates the processing load of accessing that database on the basis of the relation; and controls access to data included in that database according to the calculated processing load.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: April 16, 2024
    Assignee: FUJITSU LIMITED
    Inventors: Takuma Maeda, Kazuhiro Taniguchi, Junji Kawai
  • Patent number: 11874758
    Abstract: Some embodiments are directed to logging within a software application executed over an assembly of information processing devices. More particularly, some embodiments relate to a method enabling process logging when a software application operates with several processes and/or threads.
    Type: Grant
    Filed: August 25, 2015
    Date of Patent: January 16, 2024
    Assignee: BULL SAS
    Inventor: Pierre Vigneras
  • Patent number: 11875198
    Abstract: At least one processing device comprises a processor and a memory coupled to the processor. The at least one processing device is configured to establish one or more groups of synchronization objects in a storage system based at least in part on object type, and for each of the one or more groups, to insert entries into a corresponding object type queue for respective objects of the group, to execute a monitor thread for the group, the monitor thread being configured to scan the entries of the corresponding object type queue, and responsive to at least one of the scanned entries meeting one or more designated conditions, to take at least one automated action for its associated object. The synchronization objects illustratively comprise respective locks, or other objects. The at least one processing device illustratively comprises at least a subset of a plurality of processing cores of the storage system.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: January 16, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Lior Kamran
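A toy version of the per-type queues and monitor scan from the abstract above might look like this. The dict-based objects and the `condition` and `action` callables are illustrative assumptions, not the patent's interfaces:

```python
from collections import defaultdict

def scan_queues(objects, condition, action):
    """Group synchronization objects by type into per-type queues,
    then, as each group's monitor thread would, scan every queue entry
    and run `action` on objects meeting `condition` (e.g. a lock held
    past some threshold)."""
    queues = defaultdict(list)
    for obj in objects:
        queues[obj["type"]].append(obj)     # one queue per object type
    fired = []
    for obj_type, queue in queues.items():
        for entry in queue:                 # monitor-thread scan
            if condition(entry):
                action(entry)               # automated action
                fired.append(entry["name"])
    return fired
```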
  • Patent number: 11842224
    Abstract: Client application (112) submits request (118) to resource status service (110) for resource status data (“data”) regarding one or more computing resources (108) provided in a service provider network (102). The resource status service submits requests to the resources for the data. The resource status service provides a reply to the client application that includes any data received from the resources within a specified time. If all of the requested data was not received from the resources within the specified time, the resource status service can also provide, in the reply, an identifier (“ID”) that identifies the request and can be utilized to identify and retrieve additional status data received at a later time. The client application can also submit additional requests for the status data, and may include the ID, may wait for additional data to be pushed to it, or may check a queue for the status data.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: December 12, 2023
    Assignee: Amazon Technologies, Inc.
    Inventor: Nima Sharifi Mehr
  • Patent number: 11829806
    Abstract: An arithmetic processor performs arithmetic processing, and a synchronization processor, including first registers, performs synchronization processing that includes a plurality of processing stages to be processed stepwise. The arithmetic processor sends, to the synchronization processor, setting information to be used in a predetermined processing stage of the synchronization processing, and instructs the synchronization processor to execute the predetermined processing stage for the arithmetic processing. Each of the first registers includes a setting information management area to manage the setting information received from the arithmetic processor, and a destination status area to store a usage state of each of the destination registers which are used in the next processing stage following the predetermined processing stage.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: November 28, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Kazuya Yoshimoto, Yuji Kondo
  • Patent number: 11769065
    Abstract: An output rule specified via a distributed system execution request data structure for a requested calculation is determined, and a current rule is initialized to the output rule. A rule lookup table data structure is queried to determine a set of matching rules, corresponding to the current rule. The best matching rule is selected. A logical dependency graph (LDG) data structure is generated by adding LDG nodes and LDG edges corresponding to the best matching rule, precedent rules of the best matching rule, and precedent rules of each precedent rule. An execution complexity gauge value and a set of distributed worker processes are determined. The LDG data structure is divided into a set of subgraphs. Each worker process is initialized with the subgraph assigned to it. Execution of the requested calculation is coordinated and a computation result of the LDG node corresponding to the output rule is obtained.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: September 26, 2023
    Assignee: Julius Technologies LLC
    Inventor: Yadong Li
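The graph-building step described in the abstract above, walking from the output rule through its precedent rules, can be sketched as a simple worklist traversal. Here `precedents` is a hypothetical stand-in for the rule lookup table, already reduced to a best-matching rule per name:

```python
def build_ldg(output_rule, precedents):
    """Build a logical dependency graph (nodes + edges) rooted at
    `output_rule`. `precedents` maps each rule to the rules it depends
    on; an edge points from a rule to each of its precedent rules."""
    nodes, edges, stack = set(), set(), [output_rule]
    while stack:
        rule = stack.pop()
        if rule in nodes:
            continue                       # already expanded
        nodes.add(rule)
        for dep in precedents.get(rule, []):
            edges.add((rule, dep))         # rule depends on dep
            stack.append(dep)              # expand precedents recursively
    return nodes, edges
```

In the patent's setting, the resulting graph would then be partitioned into subgraphs and distributed to worker processes; the sketch stops at graph construction.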
  • Patent number: 11768699
    Abstract: Systems and methods are provided for managing dynamic controls over access to computer resources and, even more particularly, for evaluating and re-evaluating dynamic conditions and changes associated with user sessions. The systems and methods are configured to automatically determine whether new or additional authentication credentials are required for a user who is already authorized to access resources in a user session, in response to triggering events such as the identification of a new or changed condition associated with the user session.
    Type: Grant
    Filed: October 5, 2019
    Date of Patent: September 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexander Esibov, Itamar Azulay
  • Patent number: 11755376
    Abstract: Tasks/resources are randomly assigned a number of times, and a score for each solution of the random assignment is calculated. Using machine learning and artificial intelligence, a subset of the solutions is selected. Task/resource assignments within the subset may then be randomly changed, e.g., a task/resource assignment may be swapped between two entities, or a task/resource within the selected subset may be replaced with another task/resource (without swapping). The additional solutions form a super solution with the selected subset, and the scores associated with the additional solutions are calculated. The process of selecting assignments, randomly changing them, and calculating the scores of the new solutions is repeated until a certain condition is met, e.g., a number of iterations, a timeout, or an improvement between two iterations smaller than a certain threshold. Once the condition is satisfied, a solution is selected.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: September 12, 2023
    Assignee: CALLIDUS SOFTWARE, INC.
    Inventors: Nick Pendar, Eric Christopher Hagen
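One way to read the procedure in the abstract above is as a randomized local search: sample solutions, keep a best subset, mutate by swap or replacement, and re-select until a budget is spent. The `score` callable and the fixed population sizes below are invented purely for illustration:

```python
import random

def optimize_assignment(tasks, resources, score, iters=300, keep=5, seed=0):
    """Randomly assign tasks to resources, keep the best `keep`
    solutions as the selected subset, then repeatedly mutate one of
    them (swap two tasks' resources, or re-assign one task without
    swapping) and re-select. `score` rates a solution; higher is
    better. The stopping condition here is the iteration budget."""
    rng = random.Random(seed)
    pool = [{t: rng.choice(resources) for t in tasks} for _ in range(20)]
    pool.sort(key=score, reverse=True)
    pool = pool[:keep]                                # selected subset
    for _ in range(iters):
        child = dict(rng.choice(pool))                # copy a solution
        if rng.random() < 0.5 and len(tasks) >= 2:
            t1, t2 = rng.sample(tasks, 2)
            child[t1], child[t2] = child[t2], child[t1]       # swap
        else:
            child[rng.choice(tasks)] = rng.choice(resources)  # replace
        pool.append(child)                 # super solution: subset + new
        pool.sort(key=score, reverse=True)
        pool = pool[:keep]                 # re-select the best subset
    return pool[0]
```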
  • Patent number: 11755377
    Abstract: A system to facilitate infrastructure management is described. The system includes one or more processors and a non-transitory machine-readable medium storing instructions that, when executed, cause the one or more processors to execute an infrastructure management controller to receive a request to provide infrastructure management services and generate a mapping between at least one instance of the infrastructure management controller and one or more resource instances at one or more on-premise infrastructure controller instances to provide the cloud-based infrastructure management services, wherein the request includes one or more configuration parameters.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: September 12, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Subhajit Dasgupta, Charles E. Fowler, Michelle Frolik, Charles Greenidge, Jerry Harrow, Sandesh V. Madhyastha, Clifford A. McCarthy, Abhay Padlia, Rajeev Pandey, Jonathan M. Sauer, Geoffery Schunicht, Latha Srinivasan, Gary L. Thunquest
  • Patent number: 11740868
    Abstract: Aspects of the disclosure relate to determining relevant content in response to a request for information. One or more computing devices (170) may load data elements into registers (385A-385B), wherein each register is associated with at least one parallel processor in a group of parallel processors (380A-380B). For each of the parallel processors, the data elements loaded in its associated registers may be sorted, in parallel, in descending order. The sorted data elements, for each of the parallel processors, may be merged with the sorted data elements of other processors in the group. The merged and sorted data elements may be transposed and stored.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: August 29, 2023
    Assignee: Google LLC
    Inventor: Allan Stuart Mackinnon, Jr.
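In ordinary Python (with lists standing in for per-processor registers), the sort, merge, and transpose steps described in the abstract above might look like the following sketch; the lane layout and the column-major transpose rule are illustrative assumptions:

```python
import heapq

def sort_merge_transpose(lanes):
    """Each inner list plays the role of one parallel processor's
    registers. Sort each lane in descending order, merge the sorted
    lanes into one descending sequence, then 'transpose' the merged
    sequence back across the lanes before storing."""
    width = len(lanes)
    sorted_lanes = [sorted(l, reverse=True) for l in lanes]   # per-lane sort
    merged = list(heapq.merge(*sorted_lanes, reverse=True))   # merge lanes
    # Transpose: element i goes to lane i % width (column-major store).
    out = [[] for _ in range(width)]
    for i, v in enumerate(merged):
        out[i % width].append(v)
    return out
```

On real hardware the per-lane sorts run in parallel and the merge is a register-level network; the sketch only mirrors the data movement.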
  • Patent number: 11734063
    Abstract: Execution end timing of a jobnet is predicted with stable accuracy. An information processing apparatus executes jobnet execution time prediction model generation processing, which generates an execution time prediction model for predicting the execution time of a jobnet to be executed on the basis of information associated with the execution times of previously executed jobnets; prediction model accuracy determination processing, which calculates the prediction accuracy of the generated execution time prediction model for the execution time of each jobnet; and delay determination processing, which determines, on the basis of the calculated prediction accuracy, whether to use the execution time prediction model to predict the execution end timing of a designated jobnet among a group of jobnets currently being executed or to be executed subsequently.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: August 22, 2023
    Assignee: HITACHI, LTD.
    Inventors: Jun Kitawaki, Takashi Tameshige, Yasuyuki Tamai, Kouichi Murayama, Mineyoshi Masuda, Yosuke Himura
  • Patent number: 11709701
    Abstract: A method includes receiving code of an application, the code structured as a plurality of instructions in a computation graph that corresponds to operational logic of the application. The method also includes processing the code according to an iterative learning process. The iterative learning process includes determining whether to adjust an exploration rate associated with the iterative learning process based on a state of a computing environment. Additionally, the process includes executing the plurality of instructions of the computation graph according to an execution policy that indicates certain instructions to be executed in parallel. The process also includes determining an execution time for executing the plurality of instructions of the computation graph according to the execution policy and based on the execution time and the exploration rate, adjusting the execution policy to reduce the execution time in a subsequent iteration.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: July 25, 2023
    Assignee: PAYPAL, INC.
    Inventor: David Williams
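The exploration-rate idea in the abstract above resembles an epsilon-greedy loop over execution policies. The sketch below assumes a hypothetical `run_time` measurement callable and a fixed policy set; it is not the patent's actual learning process:

```python
import random

def tune_policy(policies, run_time, iters=200, epsilon=0.3, seed=0):
    """Epsilon-greedy sketch of the iterative loop: with probability
    `epsilon` explore a randomly chosen execution policy, otherwise
    exploit the fastest policy seen so far; adopt a candidate whenever
    its measured execution time beats the current best."""
    rng = random.Random(seed)
    best = policies[0]
    best_time = run_time(best)
    for _ in range(iters):
        if rng.random() < epsilon:
            candidate = rng.choice(policies)   # explore
        else:
            candidate = best                   # exploit
        t = run_time(candidate)        # execute the graph, measure time
        if t < best_time:              # adjust policy for next iteration
            best, best_time = candidate, t
    return best
```

In the patent's framing, `epsilon` itself would also be adjusted from the state of the computing environment; the sketch keeps it fixed for brevity.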
  • Patent number: 11698820
    Abstract: Example implementations relate to a role-based autoscaling approach for scaling of nodes of a stateful application in a large scale virtual data processing (LSVDP) environment. Information is received regarding a role performed by the nodes of a virtual cluster of an LSVDP environment on which a stateful application is or will be deployed. Role-based autoscaling policies are maintained defining conditions under which the roles are to be scaled. A policy for a first role upon which a second role is dependent specifies a condition for scaling out the first role by a first step and a second step by which the second role is to be scaled out in tandem. When load information for the first role meets the condition, nodes in the virtual cluster that perform the first role are increased by the first step and nodes that perform the second role are increased by the second step.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: July 11, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Xiongbing Ou, Lakshminarayanan Gunaseelan, Joel Baxter, Swami Viswanathan
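The tandem scale-out rule described in the abstract above can be sketched as a pure function over role counts; the policy dictionary shape (`role`, `threshold`, `step`, `tandem`) is an invented stand-in for the patent's role-based autoscaling policies:

```python
def autoscale(counts, load, policies):
    """Apply role-based autoscaling policies. Each policy names a role,
    a load threshold, a scale-out step, and optional
    (dependent_role, step) pairs to scale in tandem when the
    condition fires."""
    for p in policies:
        if load.get(p["role"], 0) >= p["threshold"]:
            counts[p["role"]] += p["step"]        # scale out first role
            for dep_role, dep_step in p.get("tandem", []):
                counts[dep_role] += dep_step      # scale dependent role
    return counts
```

The design point the patent emphasizes is the tandem pair: a dependent role never falls behind the role it depends on, because both steps fire from the same condition.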
  • Patent number: 11675620
    Abstract: A disclosed example method to automate deployment of a software defined data center includes generating, by executing an instruction with at least one processor, a task list based on tasks provided in an automation plan to deploy the software defined data center; determining, by executing an instruction with the at least one processor, dependencies between the tasks prior to executing the tasks; determining, by executing an instruction with the at least one processor, whether a resource that is to be an output of a first one of the tasks exists before execution of the first one of the tasks; removing, by executing an instruction with the at least one processor, the first one of the tasks from the task list when the resource exists before execution of the first one of the tasks; generating an execution schedule, by executing an instruction with the at least one processor, based on the dependencies and ones of the tasks remaining in the task list; and executing, with the at least one processor, the ones of the tasks remaining in the task list based on the execution schedule.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: June 13, 2023
    Assignee: VMWARE, INC.
    Inventor: Pavel Mitkov Dobrev
  • Patent number: 11663051
    Abstract: Embodiments are provided for providing workflow pipeline optimization in a computing environment. Execution of a workflow containing dependencies between one or more subject nodes and one or more observer nodes may be dynamically optimized by determining a wait time between successive executions of the workflow for the one or more observer nodes.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: May 30, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vasileios Vasileiadis, Michael Johnston
  • Patent number: 11656902
    Abstract: Disclosed in the present invention are a distributed container image construction scheduling system and method. The system includes a construction node and a management node. The construction node includes an image constructor for executing construction tasks issued by the management node. The management node includes a console and a scheduler. The console is responsible for acquiring the relevant parameters required by a user, such as a development dependency library and system configuration, generating tasks from these parameters, and sending them to the scheduler. The scheduler receives the tasks sent by the console, checks their legitimacy, and sends them to the corresponding construction node to be run.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: May 23, 2023
    Inventors: Dengyin Zhang, Junjiang Li, Zijie Liu, Lin Zhu, Yi Cheng, Yingying Zhou, Zhaoxi Shi
  • Patent number: 11645122
    Abstract: The present disclosure relates to a method, device and computer program product for managing jobs in a processing system. The processing system comprises multiple client devices. In the method, based on a group of jobs from the multiple client devices, a current workload of the group of jobs is determined. A group of job descriptions associated with the group of jobs is determined based on configuration information of various jobs in the group of jobs. A future workload associated with the group of jobs is determined based on associations, comprised in a workload model, between job descriptions and future workloads associated with the job descriptions. The group of jobs in the processing system are managed based on the current workload and the future workload. With the foregoing example implementation, jobs in the processing system may be managed more effectively, and latency in processing jobs may be reduced.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: May 9, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jun Tang, Yi Wang, Qingxiao Zheng