Patents Examined by Abu Zar Ghaffari
  • Patent number: 11487585
    Abstract: An example method of managing a plurality of hardware accelerators in a computing system includes executing workload management software in the computing system configured to allocate a plurality of jobs in a job queue among a pool of resources in the computing system; monitoring the job queue to determine required hardware functionalities for the plurality of jobs; provisioning at least one hardware accelerator of the plurality of hardware accelerators to provide the required hardware functionalities; configuring a programmable device of each provisioned hardware accelerator to implement at least one of the required hardware functionalities; and notifying the workload management software that each provisioned hardware accelerator is an available resource in the pool of resources.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: November 1, 2022
    Assignee: XILINX, INC.
    Inventors: Spenser Gilliland, Andrew Mirkis, Fernando J. Martinez Vallina, Ambujavalli Kesavan, Michael D. Allen
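As an illustrative sketch of the provisioning flow in the entry above (not Xilinx's implementation; `Accelerator`, `WorkloadManager`, and the job fields are hypothetical stand-ins), the following Python scans the queue for required hardware functions, programs idle accelerators for the most-demanded functions, and reports each one to the workload manager as an available resource.
```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Accelerator:
    name: str
    function: str | None = None      # bitstream/kernel currently programmed

@dataclass
class WorkloadManager:
    resources: list = field(default_factory=list)
    def notify_available(self, acc: Accelerator) -> None:
        # The workload manager now treats the accelerator as a schedulable resource.
        self.resources.append(acc)

def provision(job_queue: list[dict], accelerators: list[Accelerator],
              wm: WorkloadManager) -> None:
    # 1. Monitor the queue: which hardware functions do the pending jobs need?
    demand = Counter(job["needs"] for job in job_queue)
    # 2. Provision idle accelerators for the most-demanded functions.
    idle = [a for a in accelerators if a.function is None]
    for function, _count in demand.most_common():
        if not idle:
            break
        acc = idle.pop()
        acc.function = function      # 3. Configure the programmable device.
        wm.notify_available(acc)     # 4. Notify the workload management software.

wm = WorkloadManager()
accs = [Accelerator("fpga0"), Accelerator("fpga1")]
provision([{"id": 1, "needs": "compress"}, {"id": 2, "needs": "encrypt"},
           {"id": 3, "needs": "compress"}], accs, wm)
print([(a.name, a.function) for a in wm.resources])
```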
  • Patent number: 11467858
    Abstract: A first instance is caused to execute software code to perform a first portion of a workflow in response to receipt of a workflow request, and performance of the first portion results in submission of an operation request to an entity. A resume workflow request is received from the entity, where the resume workflow request includes a handle to a snapshot that corresponds to a first state of execution of the software code and a response to the operation request to the entity. Using the handle to the snapshot and the response to the operation request, a second instance is caused to execute the software code from the first state to perform a second portion of the workflow. A workflow result is received from an instance that executes a last portion of the workflow, and the workflow result is provided in response to the workflow request.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Anthony Nicholas Liguori, Douglas Stewart Laurence
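A toy sketch of the snapshot-and-resume handshake described above, not Amazon's implementation: a Python generator stands in for an instance whose execution state can be captured, and a plain dictionary stands in for the snapshot store keyed by handle. All names are hypothetical.
```python
import uuid

snapshots = {}   # handle -> paused workflow state (stand-in for a real snapshot store)

def workflow(request):
    # First portion: do some work, then ask an external entity for something.
    operation_request = f"lookup:{request}"
    response = yield operation_request           # execution pauses here
    # Second portion: resumed (possibly on another instance) with the entity's response.
    return f"result({request}, {response})"

def start(request):
    wf = workflow(request)
    op = next(wf)                                # run the first portion
    handle = str(uuid.uuid4())
    snapshots[handle] = wf                       # snapshot the execution state
    return handle, op                            # the handle travels with the operation request

def resume(handle, response):
    wf = snapshots.pop(handle)                   # a second "instance" picks up the snapshot
    try:
        wf.send(response)                        # continue from the first state
    except StopIteration as done:
        return done.value                        # workflow result

handle, op = start("order-42")
print(resume(handle, f"answer-to-{op}"))
```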
  • Patent number: 11467876
    Abstract: An information processing apparatus for controlling a plurality of nodes mutually coupled via a plurality of cables, the apparatus includes: a memory; and a processor coupled to the memory, the processor being configured to cause a first node to execute first processing to extract a coupling relationship between the plurality of nodes, the first node being one of the plurality of nodes and being selected sequentially from among the plurality of nodes, the first processing including executing allocation processing that allocates unique coordinate information to the first node and allocates common coordinate information to nodes excluding the first node; executing transmission processing that causes the first node to transmit first information to each of the cables coupled to the first node; and executing identification processing that identifies a node having received the first information as a neighboring node coupled to one of the plurality of cables coupled to the first node.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: October 11, 2022
    Assignee: FUJITSU LIMITED
    Inventor: Akihiko Kasagi
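A minimal sketch of the coordinate-allocation and neighbor-discovery idea above, assuming a toy model in which nodes are strings and cables are unordered pairs (all names and data structures are hypothetical, not Fujitsu's implementation): the "first node" role rotates over every node, each first node gets a unique coordinate while the rest share a common one, and whichever node receives the probe sent over a cable is identified as the neighbor on that cable.
```python
nodes = ["n0", "n1", "n2", "n3"]
cables = [("n0", "n1"), ("n1", "n2"), ("n2", "n3"), ("n3", "n0")]

COMMON = (-1, -1)                       # common coordinate for all non-first nodes

def discover_neighbors(first):
    # Allocation processing: unique coordinate to the first node, common to the rest.
    coords = {n: COMMON for n in nodes}
    coords[first] = (nodes.index(first), 0)
    # Transmission processing: the first node sends its info over each attached cable.
    neighbors = {}
    for cable in cables:
        if first in cable:
            receiver = cable[0] if cable[1] == first else cable[1]
            # Identification processing: the probe's receiver is the neighbor
            # reached through this cable.
            neighbors[cable] = receiver
    return coords, neighbors

for first in nodes:                     # the first-node role is assigned sequentially
    _, nbrs = discover_neighbors(first)
    print(first, "->", nbrs)
```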
  • Patent number: 11455193
    Abstract: A system receives a request to deploy a virtual machine (VM) on one of a plurality of nodes running a plurality of VMs in a cloud computing system. The system receives a predicted lifetime for the VM and an indication of an average lifetime of VMs running on each of the plurality of nodes. The system allocates the VM to a first node when a first policy of collocating VMs with similar lifetimes on a node is adopted and the predicted lifetime is within a predetermined range of the average lifetime of VMs running on the first node. The system allocates the VM to a second node when a second policy of collocating VMs with dissimilar lifetimes on a node is adopted and the predicted lifetime is not within the predetermined range of the average lifetime of VMs running on the second node.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: September 27, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Ricardo Bianchini, Eli Cortez, Marcus Felipe Fontoura, Anand Bonde
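The placement decision above can be sketched in a few lines of Python. This is only an illustration of the two policies, not Microsoft's allocator; the 25% tolerance is an arbitrary stand-in for the patent's "predetermined range", and all names are hypothetical.
```python
def pick_node(predicted, nodes, policy, tolerance=0.25):
    """nodes: {name: average VM lifetime in hours}; policy: 'similar' or 'dissimilar'."""
    for name, avg in nodes.items():
        within = abs(predicted - avg) <= tolerance * avg   # "predetermined range"
        if policy == "similar" and within:
            return name          # collocate VMs with similar lifetimes
        if policy == "dissimilar" and not within:
            return name          # collocate VMs with dissimilar lifetimes
    return None                  # no node satisfies the adopted policy

nodes = {"node-a": 2.0, "node-b": 48.0}
print(pick_node(predicted=1.8, nodes=nodes, policy="similar"))     # -> node-a
print(pick_node(predicted=1.8, nodes=nodes, policy="dissimilar"))  # -> node-b
```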
  • Patent number: 11449365
    Abstract: Systems and methods for deploying computer application workload elements among a plurality of computing resources are described. An elastic workload orchestration architecture includes a workload queue that is configured to receive application workload elements for processing using one or more distributed hybrid application services. The workload elements are evaluated to confirm whether they are properly formatted for deployment among the distributed hybrid application services and, if such confirmation cannot be made, the workload elements are adapted into a proper format. An elastic workload operator then deploys the workload elements to the distributed hybrid application services.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: September 20, 2022
    Assignee: TRILIO DATA INC.
    Inventors: Aleksandr Biberman, Andrey Turovsky
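A hedged sketch of the validate-adapt-deploy loop described above (the required keys and adaptation rules are invented for illustration; this is not Trilio's format or code): workload elements are pulled from a queue, checked against an assumed deployment schema, adapted if malformed, and handed to a deploy callback.
```python
from queue import Queue

REQUIRED_KEYS = {"name", "image", "target"}      # assumed deployment format

def is_deployable(element: dict) -> bool:
    # Confirm the element is properly formatted for the hybrid application services.
    return REQUIRED_KEYS <= element.keys()

def adapt(element: dict) -> dict:
    # Adapt an improperly formatted element into the expected format.
    return {"name": element.get("name", "unnamed"),
            "image": element.get("image", "default:latest"),
            "target": element.get("target", "any")}

def orchestrate(workload_queue: Queue, deploy) -> None:
    while not workload_queue.empty():
        element = workload_queue.get()
        if not is_deployable(element):
            element = adapt(element)
        deploy(element)                          # elastic workload operator

q = Queue()
q.put({"name": "etl-job", "image": "etl:1.2", "target": "on-prem"})
q.put({"name": "report"})                        # malformed: will be adapted
orchestrate(q, deploy=lambda e: print("deploying", e))
```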
  • Patent number: 11436048
    Abstract: Hardware acceleration of task dependency management in parallel computing is described, in which solutions are proposed for hardware-based dependency management that supports nested tasks, resolves system deadlocks arising from memory-full conditions in the dedicated hardware memory, and enables synergetic operation of the software runtime and the hardware accelerator to resolve otherwise unsolvable deadlocks when nested tasks are processed. Buffered asynchronous communication for larger data exchanges is introduced, requiring less support from the multi-core processor elements than standard access through those elements. A hardware acceleration processor may be implemented in the same silicon die as the multi-core processor to gain performance, reduce fabrication cost, and save energy during operation.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: September 6, 2022
    Assignees: Barcelona Supercomputing Center—Centro Nacional De Supercomputacion, Universitat Politecnica De Catalunya
    Inventors: Xubin Tan, Carlos Alvarez Martinez, Jaume Bosch Pons, Daniel Jimenez Gonzalez, Mateo Valero Cortes
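The memory-full fallback idea can be illustrated with a software-only analogy. This toy Python sketch (all classes and capacities invented; it models neither the patented hardware nor its interface) shows a fixed-capacity "hardware" dependency table spilling to the software runtime instead of deadlocking when it fills up.
```python
class HardwareDependencyTable:
    """Toy stand-in for the accelerator's dedicated dependency memory."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.entries = {}                 # task -> set of unmet dependencies

    def try_add(self, task, deps):
        if len(self.entries) >= self.capacity:
            return False                  # memory full: would risk a deadlock
        self.entries[task] = set(deps)
        return True

class Runtime:
    """Software runtime that takes over when the hardware table is full."""
    def __init__(self, hw):
        self.hw = hw
        self.software_entries = {}

    def submit(self, task, deps=()):
        if self.hw.try_add(task, deps):
            print(f"{task}: tracked in hardware")
        else:
            # Synergetic operation: spill tracking to software instead of deadlocking.
            self.software_entries[task] = set(deps)
            print(f"{task}: tracked in software (hardware memory full)")

rt = Runtime(HardwareDependencyTable(capacity=2))
for i in range(4):
    rt.submit(f"task{i}", deps=[f"task{i-1}"] if i else [])
```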
  • Patent number: 11429407
    Abstract: "Cognitive radio you share, trust and access locally" (CRYSTAL) is a virtualized cognitive access point that may combine multiple wireless access applications on a single hardware platform. Radio technologies such as LTE (Long-Term Evolution), WiMAX (Worldwide Interoperability for Microwave Access), GSM (Global System for Mobile Communications), and the like can be supported. CRYSTAL platforms can be aggregated and managed as a cloud, which provides a model for access point sharing, control, and management. CRYSTAL may be used for scenarios such as neighborhood spectrum management. CRYSTAL security features allow for home/residential as well as private infrastructure implementations.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: August 30, 2022
    Assignee: The Trustees of the University of Pennsylvania
    Inventors: Jonathan M. Smith, Eric R. Keller, Thomas W. Rondeau, Kyle B. Super
  • Patent number: 11429439
    Abstract: Provided is a task scheduling method. The method may include: assigning a task to one of first processing units functionally connected to an electronic device; and migrating, at least partially on the basis of a performance control condition related to the task, the task to one of second processing units for processing.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: August 30, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dohyoung Kim, Joohwan Kim, Hyunjin Park, Changhwan Youn, Donghee Han
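A minimal sketch of the assign-then-migrate idea above, assuming a big.LITTLE-style split and a runtime budget as the "performance control condition" (both assumptions are mine, not Samsung's method): a task starts on a first processing unit and is migrated to a second processing unit once the condition triggers.
```python
import time

BIG_CORES, LITTLE_CORES = ["big0", "big1"], ["little0", "little1"]

def assign(task):
    # Initial assignment to one of the first processing units (e.g. little cores).
    return {"task": task,
            "cpu": LITTLE_CORES[hash(task) % len(LITTLE_CORES)],
            "started": time.monotonic()}

def maybe_migrate(placement, runtime_budget_s=0.5):
    # Performance control condition: the task has run longer than its budget,
    # so migrate it to one of the second processing units (e.g. big cores).
    elapsed = time.monotonic() - placement["started"]
    if elapsed > runtime_budget_s and placement["cpu"] in LITTLE_CORES:
        placement["cpu"] = BIG_CORES[hash(placement["task"]) % len(BIG_CORES)]
    return placement

p = assign("decode-video")
time.sleep(0.6)
print(maybe_migrate(p))        # migrated to a big core after exceeding its budget
```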
  • Patent number: 11429414
    Abstract: An opportunistic hypervisor determines that a guest virtual machine of a virtualization host has voluntarily released control of a physical processor. The hypervisor uses the released processor to identify and initiate a virtualization management task which has not been completed. In response to determining that at least a portion of the task has been performed, the hypervisor enters a quiescent state, releasing the physical processor to enable resumption of the guest virtual machine.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: August 30, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Anthony Nicholas Liguori, Jan Schoenherr, Karimallah Ahmed Mohammed Raslan, Konrad Jan Miller, Filippo Sironi
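A toy Python sketch of the opportunistic pattern above, not Amazon's hypervisor: when the guest voluntarily yields the CPU, any pending management task is worked on, and the hypervisor then quiesces so the guest can resume. The task names and time budget are invented for illustration.
```python
from collections import deque

pending_tasks = deque(["patch-network-config", "collect-metrics"])

def on_guest_yield(run_for_ms):
    """Called when the guest VM voluntarily releases the physical CPU."""
    if pending_tasks:
        task = pending_tasks.popleft()
        print(f"hypervisor: running '{task}' for up to {run_for_ms} ms")
        # ... perform (a portion of) the virtualization management task ...
    # Quiesce: release the CPU so the guest can resume immediately.
    print("hypervisor: quiescent, resuming guest")

on_guest_yield(run_for_ms=2)
on_guest_yield(run_for_ms=2)
on_guest_yield(run_for_ms=2)   # nothing pending: quiesce right away
```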
  • Patent number: 11429410
    Abstract: Systems, methods, and software to enhance the management of software defined networks. A controller is configured to maintain a data plane configuration for a virtual machine environment based on forwarding rules. The controller is further configured to identify a virtual machine group to be deployed in the computing environment, and identify tags associated with each virtual machine in the virtual machine group. Once the tags are identified, the controller may update the data plane forwarding configuration based on the identified tags and the forwarding rules.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: August 30, 2022
    Assignee: VMware, Inc.
    Inventors: Kaushal Bansal, Uday Masurekar
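The tag-driven update above can be sketched as a small rule lookup. The tags, rule format, and data-plane representation here are invented for illustration and are not VMware NSX's API: each VM in the deployed group gets the flow entries derived from its tags.
```python
# Forwarding rules keyed by tag: any VM carrying the tag gets these flow entries.
FORWARDING_RULES = {
    "web": [{"match": "dst_port=443", "action": "allow"}],
    "db":  [{"match": "src_tag=web",  "action": "allow"},
            {"match": "*",            "action": "drop"}],
}

def update_data_plane(vm_group, data_plane):
    """Apply tag-derived flow entries for every VM in the group being deployed."""
    for vm in vm_group:
        entries = []
        for tag in vm["tags"]:
            entries.extend(FORWARDING_RULES.get(tag, []))
        data_plane[vm["name"]] = entries        # push the per-VM forwarding config
    return data_plane

group = [{"name": "web-1", "tags": ["web"]},
         {"name": "db-1",  "tags": ["db"]}]
print(update_data_plane(group, data_plane={}))
```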
  • Patent number: 11422860
    Abstract: In one embodiment, an operating system (OS) or hypervisor running on a computer system can allocate a portion of the volatile memory of the computer system as a persistent memory allocation. The OS/hypervisor can further receive a signal from the computer system's Basic Input/Output System (BIOS) indicating an alternating current (AC) power loss or cycle event and, in response to the signal, can save data in the persistent memory allocation to a nonvolatile backing store. Then, upon restoration of AC power to the computer system, the OS/hypervisor can restore the saved data from the nonvolatile backing store to the persistent memory allocation.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: August 23, 2022
    Assignee: VMware, Inc.
    Inventors: Venkata Subhash Reddy Peddamallu, Kiran Tati, Rajesh Venkatasubramanian, Pratap Subrahmanyam
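The save/restore cycle above can be illustrated with an ordinary file standing in for the nonvolatile backing store. This is only an analogy of the described flow, not VMware's implementation; the file name and data layout are assumptions.
```python
import json, pathlib

BACKING_STORE = pathlib.Path("pmem_backing.json")   # stand-in for the nonvolatile store
persistent_alloc = {"journal": [1, 2, 3], "dirty_pages": 7}   # lives in volatile RAM

def on_ac_power_loss():
    # Signal from the BIOS: flush the persistent-memory allocation to nonvolatile media.
    BACKING_STORE.write_text(json.dumps(persistent_alloc))

def on_ac_power_restore():
    # After power returns, repopulate the allocation from the backing store.
    return json.loads(BACKING_STORE.read_text())

on_ac_power_loss()
print(on_ac_power_restore())    # {'journal': [1, 2, 3], 'dirty_pages': 7}
```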
  • Patent number: 11422851
    Abstract: Cloning a running computing system includes quiescing processes running on a source computing system, saving state data of the source computing system, configuring a target computing system using the state data from the source computing system, and resuming program execution at the source computing system and the target computing system. Quiescing processes running on the source computing system may include marking all of the processes on the source computing system as non-dispatchable. All external resources may be identified for the source computing system prior to quiescing the processes running on the source computing system. The external resources may include devices and files. The target computing system may access data that is also accessed by the source computing system. Data accessed by the source computing system may be cloned for access by the target computing system prior to resuming program execution. The data may be cloned using snapshot copies.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: August 23, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Douglas E. LeCrone, Paul A. Linstead
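The quiesce, save, configure, resume sequence above reads naturally as four steps; here is a toy Python sketch of that sequence using plain dictionaries as stand-ins for system state (an illustration only, not EMC's mainframe implementation).
```python
import copy

def clone_running_system(source):
    # 1. Quiesce: mark every process non-dispatchable so state stops changing.
    for proc in source["processes"]:
        proc["dispatchable"] = False
    # 2. Save state data (registers, memory map, open files, devices, ...).
    state = copy.deepcopy(source["state"])
    # 3. Configure the target from the saved state (data may be a snapshot copy).
    target = {"processes": copy.deepcopy(source["processes"]), "state": state}
    # 4. Resume program execution on both systems.
    for system in (source, target):
        for proc in system["processes"]:
            proc["dispatchable"] = True
    return target

src = {"processes": [{"pid": 12, "dispatchable": True}],
       "state": {"files": ["/data/a"]}}
print(clone_running_system(src))
```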
  • Patent number: 11416283
    Abstract: A method and apparatus for processing stream data are provided. The method may include: acquiring a to-be-adjusted number of target execution units, a target execution unit being a unit that executes a target program segment in a stream computing system; adjusting the number of target execution units in the stream computing system based on the to-be-adjusted number; determining, for each of at least one target execution unit remaining after the adjustment, an identifier set corresponding to that target execution unit, an identifier in the identifier set indicating to-be-processed data; and processing, through the target execution unit, the to-be-processed data indicated by the identifiers in the corresponding identifier set.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: August 16, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Weikang Gao, Yanlin Wang, Yue Xing, Jianwei Zhang, Yi Cheng
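A minimal sketch of the rescale-and-repartition step above (hashing identifiers onto units is my assumption for the repartitioning rule; this is not Baidu's implementation): the pool of execution units is resized to the to-be-adjusted number and the pending identifiers are redistributed so each unit has its own identifier set.
```python
def rebalance(identifiers, current_units, to_be_adjusted_number):
    """Resize the pool of target execution units, then repartition the pending
    identifiers so each unit knows exactly which data items it must process."""
    units = [f"unit-{i}" for i in range(to_be_adjusted_number)]
    id_sets = {u: set() for u in units}
    for ident in identifiers:
        # Deterministic assignment: hash each identifier onto one unit.
        id_sets[units[hash(ident) % len(units)]].add(ident)
    print(f"scaled {len(current_units)} -> {len(units)} execution units")
    return id_sets

pending = [f"msg-{i}" for i in range(10)]
print(rebalance(pending, current_units=["unit-0", "unit-1"], to_be_adjusted_number=4))
```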
  • Patent number: 11416285
    Abstract: This technology is directed to facilitating scalable and secure data collection. In particular, scalability of data collection is enabled in a secure manner by, among other things, abstracting one or more connectors to one or more pods and/or containers that execute separately from other data-collecting functionality. For example, an execution manager can initiate deployment of a collect coordinator on a first pod associated with a first job and deployment of a first connector on a second pod associated with a second job separate from the first job of a container-managed platform. The collect coordinator can provide a data collection task to the first connector deployed on the second pod of the second job. The first connector can then obtain a set of data from the data source and provide the set of data to the collect coordinator, which provides the set of data to a remote source.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: August 16, 2022
    Assignee: Splunk Inc.
    Inventors: Denis Vergnes, Zhimin Liang
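The coordinator/connector separation above can be illustrated with two plain classes standing in for the two pods. This is a structural sketch only (class names, task shape, and data source are invented; it is not Splunk's code): the coordinator only hands out tasks and forwards results, while the connector alone touches the data source.
```python
from dataclasses import dataclass

@dataclass
class ConnectorPod:
    """Runs in its own pod/job, isolated from the coordinator."""
    source: dict
    def collect(self, task):
        # Pull the requested records from the data source.
        return [r for r in self.source["records"] if r["kind"] == task["kind"]]

@dataclass
class CollectCoordinator:
    """Runs on a separate pod/job; hands out tasks and forwards results."""
    connector: ConnectorPod
    def run(self, task, remote_sink):
        data = self.connector.collect(task)   # delegate collection to the connector
        remote_sink(data)                     # forward to the remote destination

source = {"records": [{"kind": "log", "msg": "ok"}, {"kind": "metric", "v": 3}]}
coordinator = CollectCoordinator(ConnectorPod(source))
coordinator.run({"kind": "log"}, remote_sink=lambda d: print("shipped:", d))
```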
  • Patent number: 11416262
    Abstract: A system for assigning a workload to compute resources includes an interface and a processor. The interface is configured to receive a workload. The processor is configured to break the workload into a set of subproblems; and for a subproblem of the set of subproblems: determine whether the subproblem benefits from intersheet parallelism; determine whether the subproblem benefits from intrasheet parallelism; determine whether the subproblem benefits from directed acyclic graph (DAG) partitioning; and assign the subproblem, wherein assigning the subproblem utilizes optimization when appropriate based at least in part on benefits from the intersheet parallelism, the intrasheet parallelism, and the DAG partitioning.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: August 16, 2022
    Assignee: Workday, Inc.
    Inventors: Christof Bornhoevd, Neil Thombre
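A hedged sketch of the classification step above (the subproblem attributes and strategy names are invented for illustration; Workday's actual heuristics are not described here): each subproblem is tagged with the kind of parallelism it would benefit from and then assigned to a resource under that strategy.
```python
def assign_subproblems(workload, resources):
    """Break a workload into subproblems and pick a strategy for each one
    according to which kinds of parallelism it would benefit from."""
    assignments = []
    for sub in workload["subproblems"]:
        if sub.get("spans_sheets"):
            strategy = "intersheet-parallel"      # run independent sheets in parallel
        elif sub.get("wide_sheet"):
            strategy = "intrasheet-parallel"      # split one sheet across workers
        elif sub.get("deep_dag"):
            strategy = "dag-partitioned"          # cut the dependency DAG into stages
        else:
            strategy = "serial"
        assignments.append((sub["name"], strategy, resources[0]))
    return assignments

workload = {"subproblems": [{"name": "payroll", "spans_sheets": True},
                            {"name": "headcount", "wide_sheet": True},
                            {"name": "audit", "deep_dag": True}]}
print(assign_subproblems(workload, resources=["cluster-a"]))
```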
  • Patent number: 11409579
    Abstract: An apparatus to facilitate thread barrier synchronization is disclosed. The apparatus includes a plurality of processing resources to execute a plurality of execution threads included in a thread workgroup and barrier synchronization hardware to assign a first named barrier to a first set of the plurality of execution threads in the thread workgroup, assign a second named barrier to a second set of the plurality of execution threads in the thread workgroup, synchronize execution of the first set of execution threads via the first named barrier and synchronize execution of the second set of execution threads via the second named barrier.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: James Valerio, Vasanth Ranganathan, Joydeep Ray
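The named-barrier idea above has a rough software analogy: give each subset of a workgroup its own barrier object so only that subset rendezvouses at it. The sketch below uses Python threads purely as an analogy for GPU execution threads; it does not model Intel's barrier synchronization hardware.
```python
import threading

# Two "named" barriers, each synchronizing only its own subset of the workgroup.
barriers = {"A": threading.Barrier(2), "B": threading.Barrier(2)}

def worker(tid, barrier_name):
    print(f"thread {tid}: before barrier {barrier_name}")
    barriers[barrier_name].wait()   # only threads assigned this barrier rendezvous here
    print(f"thread {tid}: after barrier {barrier_name}")

threads = [threading.Thread(target=worker, args=(t, "A" if t < 2 else "B"))
           for t in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```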
  • Patent number: 11409560
    Abstract: In one embodiment, a processor includes a current protection controller to: receive instruction width information and instruction type information associated with one or more instructions stored in an instruction queue prior to execution of the one or more instructions by an execution circuit; determine a power license level for the core based on the corresponding instruction width information and the instruction type information; generate a request for a license for the core corresponding to the power license level; and communicate the request to a power controller when the one or more instructions are non-speculative, and defer communication of the request when at least one of the one or more instructions is speculative. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Krishnamurthy Jambur Sathyanarayana, Robert Valentine, Alexander Gendler, Shmuel Zobel, Gavri Berger, Ian M. Steiner, Nikhil Gupta, Eyal Hadas, Edo Hachamo, Sumesh Subramanian
  • Patent number: 11403131
    Abstract: Predictive scaling of containers is provided based on obtaining, by one or more processors, prior transaction data on one or more of a user's prior transactions on one or more transaction-centric applications, and analyzing, by the processor(s), the prior transaction data to predict whether a current transaction of the user on an application is deterministic or non-deterministic as to one or more functional layers of multiple functional layers of the application. Based on predicting that the current transaction is deterministic as to the functional layer(s) of the application, the processor(s) determines compute resource requirements for the deterministic transaction, and launches one or more containers in advance to run the functional layer(s) based, at least in part, on the compute resource requirements of the deterministic transaction.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: August 2, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Biswajit Mohapatra, Prasanta Kumar Pal, Venkata Vinay Kumar Parisa
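A minimal sketch of the deterministic-versus-non-deterministic prediction above, under assumptions of my own (a simple frequency threshold as the predictor and invented transaction/layer names; this is not IBM's model): layers the user reliably reaches after a given action are pre-launched, otherwise nothing is started in advance.
```python
from collections import Counter

def predict_layers(user_history, current_action, min_support=0.6):
    """Return the functional layers to pre-launch if the transaction looks
    deterministic, or an empty list if it looks non-deterministic."""
    follow_ups = [h["layers"] for h in user_history if h["action"] == current_action]
    if not follow_ups:
        return []                                   # no evidence: treat as non-deterministic
    counts = Counter(layer for layers in follow_ups for layer in layers)
    return [layer for layer, n in counts.items()
            if n / len(follow_ups) >= min_support]  # layers the user reliably hits

def prelaunch(layers):
    for layer in layers:
        print(f"launching container for layer '{layer}' ahead of demand")

history = [{"action": "checkout", "layers": ["cart", "payment"]},
           {"action": "checkout", "layers": ["cart", "payment", "invoice"]},
           {"action": "browse",   "layers": ["catalog"]}]
prelaunch(predict_layers(history, "checkout"))   # cart and payment are pre-launched
```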
  • Patent number: 11403144
    Abstract: A method of providing a service to a requesting Infrastructure Element belonging to a plurality of Infrastructure Elements interconnected as a data network is proposed. The method includes operating a computing system for receiving a service request requesting a service from the requesting Infrastructure Element. The service request includes an indication of one or more performance requirements. The method also includes converting the service request to a service graph, which includes at least one task to be accomplished by complying with the performance requirements to provide the service. At least one Infrastructure Element currently capable of accomplishing the task complying with the performance requirements is selected, and the selected Infrastructure Element for accomplishing the task is configured. The method further includes causing the selected Infrastructure Element to accomplish the task to provide the service to the requesting Infrastructure Element.
    Type: Grant
    Filed: July 9, 2015
    Date of Patent: August 2, 2022
    Assignee: TELECOM ITALIA S.p.A.
    Inventors: Luigi Artusio, Antonio Manzalini
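A toy sketch of the request-to-graph conversion and element selection above (the requirement fields, CPU/latency checks, and element names are assumptions for illustration, not Telecom Italia's method): the request becomes a list of tasks carrying the performance requirements, and the first Infrastructure Element currently able to meet them is selected.
```python
def to_service_graph(service_request):
    # Convert the request into tasks, each carrying the performance requirements.
    return [{"task": t, "requirements": service_request["requirements"]}
            for t in service_request["tasks"]]

def select_element(task, elements):
    # Pick an Infrastructure Element currently able to meet the requirements.
    for e in elements:
        if (e["free_cpu"] >= task["requirements"]["cpu"] and
                e["latency_ms"] <= task["requirements"]["max_latency_ms"]):
            return e
    return None

request = {"tasks": ["transcode"], "requirements": {"cpu": 2, "max_latency_ms": 20}}
elements = [{"name": "edge-1", "free_cpu": 1, "latency_ms": 5},
            {"name": "edge-2", "free_cpu": 4, "latency_ms": 12}]
for task in to_service_graph(request):
    chosen = select_element(task, elements)
    print(f"{task['task']} -> {chosen['name']}")   # configure and run on edge-2
```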
  • Patent number: 11397624
    Abstract: A data processing system including a data processor which is operable to execute programs to perform data processing operations and in which execution threads executing a program to perform data processing operations may be grouped together into thread groups. The data processor comprises a cross-lane permutation circuit which is operable to perform processing for cross-lane instructions which require data to be permuted (copied or moved) between the threads of a thread group. The cross-lane permutation circuit has plural data lanes between which data may be permuted (moved or copied). The number of data lanes is fewer than the number of threads in a thread group.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 26, 2022
    Assignee: Arm Limited
    Inventors: Luka Dejanovic, Mladen Wilder
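The fewer-lanes-than-threads point above can be illustrated in software: a permutation over a whole thread group is carried out in lane-sized passes. The sketch below is only an analogy of that scheduling idea, not Arm's circuit; the group size, lane count, and permutation are invented.
```python
def cross_lane_permute(values, permutation, num_lanes):
    """Permute one value per thread across a thread group using a permutation
    unit with fewer lanes than threads, processing the group in lane-sized passes."""
    result = [None] * len(values)
    for base in range(0, len(values), num_lanes):          # one pass per lane batch
        for lane in range(min(num_lanes, len(values) - base)):
            dst = base + lane
            result[dst] = values[permutation[dst]]          # copy from the source thread
    return result

thread_group = [10, 11, 12, 13, 14, 15, 16, 17]             # one value per thread
rotate_left = [(i + 1) % 8 for i in range(8)]               # desired cross-lane pattern
print(cross_lane_permute(thread_group, rotate_left, num_lanes=4))
# [11, 12, 13, 14, 15, 16, 17, 10]
```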