Patents Examined by Charles M Swift
  • Patent number: 11403136
    Abstract: A task execution application programming interface may include a pipeline execution service interface configured to communicate between a task image and a pipeline execution service adapter. The pipeline execution service adapter may be configured to receive, from a pipeline execution service, a request to execute the task image in a pipeline. The request may include arguments. The task image may include executable code and an execution environment. The pipeline execution service interface may be further configured to obtain results by executing the executable code using the arguments in the execution environment. The pipeline execution service adapter may be further configured to provide, to the pipeline execution service, access to the results. The pipeline execution service may control execution of the pipeline using the results.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: August 2, 2022
    Assignee: Intuit Inc.
    Inventors: Michael Willson, Gennadiy Ziskind
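
A minimal Python sketch of the adapter/interface split summarized in the abstract of 11403136. All class and method names (TaskImage, PipelineExecutionServiceAdapter, handle_request, etc.) are invented for illustration, not taken from the patent: the adapter accepts a request with arguments, the interface runs the task image's executable code in its execution environment, and the adapter exposes the results back to the pipeline execution service.

```python
# Hypothetical sketch of the adapter/interface pattern from patent 11403136.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class TaskImage:
    """Bundles executable code with its execution environment."""
    code: Callable[..., Any]                    # the executable code
    environment: Dict[str, str] = field(default_factory=dict)


class PipelineExecutionServiceInterface:
    """Runs the task image's code with the supplied arguments."""
    def execute(self, image: TaskImage, arguments: Dict[str, Any]) -> Any:
        # In a real system the environment would configure the runtime;
        # here it is simply passed through to the code.
        return image.code(environment=image.environment, **arguments)


class PipelineExecutionServiceAdapter:
    """Receives execution requests and exposes results to the pipeline service."""
    def __init__(self, interface: PipelineExecutionServiceInterface):
        self.interface = interface
        self.results: Dict[str, Any] = {}

    def handle_request(self, task_id: str, image: TaskImage,
                       arguments: Dict[str, Any]) -> None:
        self.results[task_id] = self.interface.execute(image, arguments)

    def get_results(self, task_id: str) -> Any:
        return self.results[task_id]


# Usage: the pipeline service would inspect the results to decide the next step.
adapter = PipelineExecutionServiceAdapter(PipelineExecutionServiceInterface())
image = TaskImage(code=lambda environment, x, y: x + y, environment={"PY": "3.11"})
adapter.handle_request("step-1", image, {"x": 2, "y": 3})
print(adapter.get_results("step-1"))  # 5
```
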
  • Patent number: 11403128
    Abstract: The disclosure relates to a method, executed in a Network Function Virtualization Infrastructure (NFVI) software modification manager, for coordination of NFVI software modifications of an NFVI providing at least one Virtual Resource (VR) hosting at least one Virtual Network Function (VNF), comprising receiving an NFVI software modifications request; sending a notification that a software modification procedure of the at least one VR is about to start to a VNF level manager, the VNF level manager managing a VNF hosted on the at least one VR provided by the NFVI; applying software modifications to at least one resource of the at least one VR; and notifying the VNF level manager about completion of the software modifications.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: August 2, 2022
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventor: Maria Toeroe
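
An illustrative sketch (not the patented implementation) of the coordination sequence in 11403128: the NFVI software modification manager notifies the VNF level manager before and after applying software modifications to the virtual resources that manager's VNF depends on. Class names and the per-VR loop are assumptions made for the example.

```python
# Illustrative sketch of the notify/apply/notify sequence described in 11403128.
from typing import Iterable


class VnfLevelManager:
    def on_modification_about_to_start(self, vr_id: str) -> None:
        print(f"VNF manager: preparing VNF hosted on {vr_id} (e.g. scale out / switch over)")

    def on_modification_completed(self, vr_id: str) -> None:
        print(f"VNF manager: VNF on {vr_id} restored to normal operation")


class NfviSoftwareModificationManager:
    def __init__(self, vnf_manager: VnfLevelManager):
        self.vnf_manager = vnf_manager

    def handle_modification_request(self, vr_ids: Iterable[str]) -> None:
        for vr_id in vr_ids:
            self.vnf_manager.on_modification_about_to_start(vr_id)
            self.apply_modifications(vr_id)
            self.vnf_manager.on_modification_completed(vr_id)

    def apply_modifications(self, vr_id: str) -> None:
        print(f"NFVI manager: upgrading resources backing {vr_id}")


NfviSoftwareModificationManager(VnfLevelManager()).handle_modification_request(["vr-1", "vr-2"])
```
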
  • Patent number: 11403147
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve cloud management. An example apparatus includes at least one processor, and memory including instructions that, when executed, cause the at least one processor to execute a cloud manager installer generated by a container platform manager, the cloud manager installer is to configure a cloud computing environment based on environment information, determine one or more virtual resources based on a blueprint, and deploy a cloud platform manager in the cloud computing environment to manage a lifecycle of an application executing in the cloud computing environment by provisioning the one or more virtual resources to the cloud computing environment, and installing the cloud platform manager in the cloud computing environment by storing the cloud manager installer and the blueprint in the cloud computing environment.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: August 2, 2022
    Assignee: VMWARE, INC.
    Inventors: Evgeny Aronov, Ivo Petkov, Diana Kovacheva, Anna Delcheva, Zahari Ivanov, Georgi Mitsov, Alexander Dimitrov
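
A rough sketch of the installer flow summarized in 11403147. The names here (CloudManagerInstaller, Blueprint, provision, store) are hypothetical: the installer configures the environment from environment information, provisions the virtual resources named in a blueprint, and persists the installer and blueprint as the cloud platform manager installation.

```python
# Hypothetical sketch of the cloud manager installer flow from 11403147.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Blueprint:
    virtual_resources: List[str]   # e.g. ["vm-small", "vm-large", "lb"]


class CloudEnvironment:
    def __init__(self):
        self.config: Dict[str, str] = {}
        self.provisioned: List[str] = []
        self.artifacts: Dict[str, object] = {}

    def provision(self, resource: str) -> None:
        self.provisioned.append(resource)

    def store(self, name: str, artifact: object) -> None:
        self.artifacts[name] = artifact


class CloudManagerInstaller:
    """Generated by a container platform manager; drives the deployment."""

    def run(self, env: CloudEnvironment, environment_info: Dict[str, str],
            blueprint: Blueprint) -> None:
        env.config.update(environment_info)           # configure the environment
        for resource in blueprint.virtual_resources:  # resources from the blueprint
            env.provision(resource)
        # install the cloud platform manager by persisting installer + blueprint
        env.store("cloud_manager_installer", self)
        env.store("blueprint", blueprint)


env = CloudEnvironment()
CloudManagerInstaller().run(env, {"region": "us-east"}, Blueprint(["vm-small", "lb"]))
print(env.provisioned, list(env.artifacts))
```
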
  • Patent number: 11403152
    Abstract: Embodiments of the disclosure provide a method and system for task orchestration. A method may include: providing, by a task master control unit, an execution instruction of a task related to a module in an application container to a node agent service unit in an auxiliary application container bound to the application container, the auxiliary application container sharing a file system with the application container; and executing, by the node agent service unit, a command for completing the task, in response to acquiring the execution instruction of the task.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: August 2, 2022
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventor: Haodong Chen
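
A minimal sketch of the sidecar-style orchestration in 11403152: a task master drops an execution instruction where the auxiliary container's node agent can see it (the two containers share a file system), and the agent runs the corresponding command. The file path, JSON format, and function names are illustrative assumptions only.

```python
# Sketch of the shared-filesystem handoff described in 11403152.
import json
import subprocess
import tempfile
from pathlib import Path

SHARED_FS = Path(tempfile.mkdtemp())          # stand-in for the shared volume
INSTRUCTION_FILE = SHARED_FS / "task_instruction.json"


def task_master_send_instruction(module: str, command: list[str]) -> None:
    """Task master control unit: publish an execution instruction."""
    INSTRUCTION_FILE.write_text(json.dumps({"module": module, "command": command}))


def node_agent_poll_and_execute() -> int:
    """Node agent service unit in the auxiliary container: run the command."""
    if not INSTRUCTION_FILE.exists():
        return -1
    instruction = json.loads(INSTRUCTION_FILE.read_text())
    return subprocess.run(instruction["command"]).returncode


task_master_send_instruction("log-rotation", ["echo", "rotating logs"])
print("exit code:", node_agent_poll_and_execute())
```
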
  • Patent number: 11392265
    Abstract: Provided is a method for controlling a plurality of work processes in a multitasking mobile terminal, and more particularly, a method for selecting a second work process during a first work process and controlling a predetermined function of the selected second work process. In the controlling method, icons corresponding to the respective work processes are displayed in response to a user command, and a desired work process is selected through the displayed icons. A predetermined function of the selected work process is controlled through a pop-up menu activated in response to the user command.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: July 19, 2022
    Inventors: Seul Ki Choi, Sang Jin Yoon
  • Patent number: 11385939
    Abstract: An application manager receives or defines a service specification for a first application that defines a set of required computing resources that are necessary to run each application component of the first application. A resource supply manager in communication with the application manager manages a plurality of computing resources in a shared computing environment.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: July 12, 2022
    Assignee: ServiceNow, Inc.
    Inventors: Wai Ming Wong, Michael C. Hui
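
A small, hypothetical data-model sketch of the 11385939 abstract: a service specification lists the computing resources each application component requires, and a resource supply manager checks them against the shared environment's pool. The schema and the can_satisfy check are invented for illustration.

```python
# Hypothetical service-specification model for the arrangement in 11385939.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ComponentSpec:
    name: str
    required: Dict[str, int]          # e.g. {"cpu": 2, "memory_gb": 4}


@dataclass
class ServiceSpecification:
    components: List[ComponentSpec]


class ResourceSupplyManager:
    def __init__(self, shared_pool: Dict[str, int]):
        self.shared_pool = shared_pool

    def can_satisfy(self, spec: ServiceSpecification) -> bool:
        demand: Dict[str, int] = {}
        for component in spec.components:
            for resource, amount in component.required.items():
                demand[resource] = demand.get(resource, 0) + amount
        return all(self.shared_pool.get(r, 0) >= amount for r, amount in demand.items())


spec = ServiceSpecification([ComponentSpec("web", {"cpu": 2, "memory_gb": 4}),
                             ComponentSpec("db", {"cpu": 4, "memory_gb": 16})])
print(ResourceSupplyManager({"cpu": 8, "memory_gb": 32}).can_satisfy(spec))  # True
```
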
  • Patent number: 11385930
    Abstract: Methods and systems for receiving an indication that an application running on a first device is ready to perform a task, determining a device capability associated with performing the task, determining one or more devices associated with a user of the first device, wherein each of the one or more devices is associated with the device capability, selecting, based on the task and one or more user preferences associated with the user, a second device from the one or more devices, and sending an instruction to the second device, wherein the instruction causes the second device to perform the task, are described herein.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: July 12, 2022
    Assignee: Citrix Systems, Inc.
    Inventor: Simon Frost
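
A hedged sketch of the device-selection logic described in 11385930. The capability model and the preference scoring below are invented: filter the user's devices to those with the capability the task needs, pick the most preferred one, and send it the instruction.

```python
# Illustrative device selection along the lines of 11385930.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Device:
    device_id: str
    capabilities: List[str] = field(default_factory=list)


def select_second_device(task_capability: str, user_devices: List[Device],
                         user_preferences: Dict[str, int]) -> Optional[Device]:
    """Pick the user's highest-preference device that has the needed capability."""
    candidates = [d for d in user_devices if task_capability in d.capabilities]
    if not candidates:
        return None
    return max(candidates, key=lambda d: user_preferences.get(d.device_id, 0))


def send_instruction(device: Device, task: str) -> None:
    print(f"instructing {device.device_id} to perform {task!r}")


devices = [Device("phone", ["camera", "audio"]), Device("tv", ["large-screen", "audio"])]
chosen = select_second_device("large-screen", devices, {"tv": 10, "phone": 5})
if chosen:
    send_instruction(chosen, "present slideshow")
```
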
  • Patent number: 11347544
    Abstract: In one embodiment, a method includes generating one or more queues by an application executing on a client system, wherein each queue is associated with one or more declarative attributes, wherein each declarative attribute declares a processing requirement or a processing preference, generating one or more work items to be processed, for each of the one or more work items enqueuing the work item into a selected one of the one or more queues based on the one or more declarative attributes associated with the selected queue, and providing the one or more queues to a scheduler of an operating system of the client system, wherein the scheduler is configured to schedule each of the one or more work items for processing based on one or more policies and the one or more declarative attributes of the selected queue for that work item.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: May 31, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Vadim Victor Spivak, Bernhard Poess
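
An illustrative sketch of 11347544's declarative queues: the application tags a queue with attributes, enqueues work items, and hands the queues to a scheduler that orders work using those attributes together with its own policies. The attribute names and the priority policy here are assumptions for the example.

```python
# Sketch of declarative-attribute queues handed to an OS scheduler (11347544).
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class DeclarativeQueue:
    attributes: Dict[str, object]            # e.g. {"priority": "high", "deadline_ms": 16}
    items: List[Callable[[], None]] = field(default_factory=list)

    def enqueue(self, work_item: Callable[[], None]) -> None:
        self.items.append(work_item)


class Scheduler:
    """OS-side scheduler: orders queues by a simple priority policy."""
    PRIORITY_ORDER = {"high": 0, "normal": 1, "low": 2}

    def run(self, queues: List[DeclarativeQueue]) -> None:
        for queue in sorted(queues, key=lambda q: self.PRIORITY_ORDER.get(
                q.attributes.get("priority", "normal"), 1)):
            for work_item in queue.items:
                work_item()


render_queue = DeclarativeQueue({"priority": "high", "deadline_ms": 16})
background_queue = DeclarativeQueue({"priority": "low"})
render_queue.enqueue(lambda: print("render frame"))
background_queue.enqueue(lambda: print("sync telemetry"))
Scheduler().run([background_queue, render_queue])   # render frame runs first
```
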
  • Patent number: 11347548
    Abstract: Methods, systems, and computer-readable media for a transformation specification format for multiple execution engines are disclosed. A transformation specification is expressed according to a transformation specification format. The transformation specification represents a polytree or graph linking one or more data producer nodes, one or more data transformation nodes, and one or more data consumer nodes. An execution engine is selected from among a plurality of available execution engines for execution of the transformation specification. The execution engine is used to acquire data from one or more data producers corresponding to the one or more data producer nodes, perform one or more transformations of the data corresponding to the one or more data transformation nodes, and output one or more results of the one or more transformations to one or more data consumers corresponding to the one or more data consumer nodes.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: May 31, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Fletcher Liverance, Chance Ackley, Dominic Corona
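
A rough sketch of 11347548's idea: one declarative transformation specification linking producers, transforms, and consumers, handed to whichever execution engine is selected. The spec schema, the engine registry, and the toy "square" operation are invented for illustration.

```python
# Toy transformation specification and a selectable execution engine (11347548).
from typing import Callable, Dict, List

transformation_spec = {
    "producers": [{"id": "p1", "source": [1, 2, 3, 4]}],
    "transforms": [{"id": "t1", "input": "p1", "op": "square"}],
    "consumers": [{"id": "c1", "input": "t1"}],
}

OPS: Dict[str, Callable[[int], int]] = {"square": lambda x: x * x}


class LocalEngine:
    """One of several possible execution engines; others might target Spark, etc."""

    def execute(self, spec: dict) -> Dict[str, List[int]]:
        data = {p["id"]: list(p["source"]) for p in spec["producers"]}
        for t in spec["transforms"]:
            data[t["id"]] = [OPS[t["op"]](x) for x in data[t["input"]]]
        return {c["id"]: data[c["input"]] for c in spec["consumers"]}


available_engines = {"local": LocalEngine()}
engine = available_engines["local"]           # engine selection step
print(engine.execute(transformation_spec))    # {'c1': [1, 4, 9, 16]}
```
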
  • Patent number: 11340934
    Abstract: A cloud oversubscription system including one or more processors and a memory coupled with the one or more processors. The one or more processors effectuate operations including obtaining a list of service level agreement (SLA) availability values for each of one or more virtual machines (VMs) of a host. The one or more processors further effectuate operations including analyzing the list to determine a maximum availability number for the host. The one or more processors further effectuate operations including identifying a probable overload condition value based on the SLA availability values. The one or more processors further effectuate operations including performing at least one recommended action when the probable overload condition value exceeds an SLA before an occurrence of an overload condition.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: May 24, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Imad Ahmad, Frederick M. Armanino, Raghvendra Savoor
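
A hedged, simplified sketch of the oversubscription check in 11340934. The abstract does not spell out the scoring, so here the "probable overload condition value" is approximated as the expected concurrent demand implied by the per-VM SLA availability values, compared against the host's capacity.

```python
# Simplified, assumption-heavy oversubscription check inspired by 11340934.
from typing import Dict, List


def probable_overload_value(sla_availabilities: List[float]) -> float:
    # Each value is the fraction of time the VM must be available per its SLA;
    # summing them approximates expected concurrent demand on the host.
    return sum(sla_availabilities)


def check_host(host: Dict[str, object], capacity_vms: float,
               overload_threshold: float) -> None:
    slas: List[float] = host["vm_slas"]            # type: ignore[assignment]
    max_availability = max(slas)
    overload = probable_overload_value(slas) / capacity_vms
    print(f"{host['name']}: max SLA {max_availability:.3f}, overload score {overload:.2f}")
    if overload > overload_threshold:
        print("recommended action: migrate a VM before an overload occurs")


check_host({"name": "host-1", "vm_slas": [0.999, 0.99, 0.95, 0.9]},
           capacity_vms=3.0, overload_threshold=1.0)
```
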
  • Patent number: 11340933
    Abstract: A method and system for managing dynamic runtime information provision for a container implementing a Session Management Function (SMF) executed by an electronic device in a 3rd generation partnership project (3GPP) 5th Generation (5G) mobile network core. The method includes starting a container image load, the container image including at least a secret sub unit and an application sub unit, the application sub unit providing the SMF, determining an input source to provide a secret value for the container, the input source identified by information in the secret sub unit in the container image, and providing the secret value to a destination sub unit of the container.
    Type: Grant
    Filed: July 28, 2020
    Date of Patent: May 24, 2022
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: James Donald Reno, Michael Brown, Akshay Rajesh Baheti, Michael Cameron
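
A sketch only of the secret-provisioning flow from 11340933, with invented structures: the secret sub unit of the container image names the input source, the runtime fetches the secret value from that source at load time, and the value is delivered to the destination sub unit (here, the application sub unit providing the SMF).

```python
# Illustrative secret-provisioning flow for a container image (11340933).
import os
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class ContainerImage:
    secret_sub_unit: Dict[str, str]          # e.g. {"source": "env", "key": "SMF_DB_PASSWORD"}
    application_sub_unit: Dict[str, str] = field(default_factory=dict)


# Possible input sources for secret values; a real system might add a vault,
# a mounted file, or an operator prompt.
INPUT_SOURCES: Dict[str, Callable[[str], str]] = {
    "env": lambda key: os.environ.get(key, ""),
    "static": lambda key: "demo-secret",
}


def load_container(image: ContainerImage) -> None:
    source = INPUT_SOURCES[image.secret_sub_unit["source"]]
    secret_value = source(image.secret_sub_unit["key"])
    # deliver the secret to the destination sub unit (the SMF application)
    image.application_sub_unit["secret"] = secret_value
    print("SMF application received a secret of length", len(secret_value))


load_container(ContainerImage(secret_sub_unit={"source": "static", "key": "SMF_DB_PASSWORD"}))
```
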
  • Patent number: 11340959
    Abstract: Provided is a control method of an electronic apparatus, the method including displaying content corresponding to a first application in a first area of a display, displaying content corresponding to a second application in a second area of the display, identifying resource allocation information associated with the first application and the second application, and running the first application and the second application based on the identified resource allocation information.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: May 24, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Kensin Noh, Dongwan Kang, Seungyong Lee
  • Patent number: 11334392
    Abstract: A method of deploying a task includes allocating nodes to the task; determining, in the network, a subnetwork, for interconnecting the allocated nodes, satisfying one or more predefined determination criteria including a first criterion according to which the determined subnetwork is the one, from among at least two subnetworks meeting the criteria other than the first criterion, using the most switches already allocated, each to at least one already deployed task; allocating the subnetwork, and in particular the links belonging to that subnetwork, to the task; and implementing inter-node communication routes in the allocated subnetwork.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: May 17, 2022
    Assignee: BULL SAS
    Inventor: Jean-Noël Quintin
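
A simplified sketch of the subnetwork choice in 11334392: among candidate subnetworks that interconnect the task's allocated nodes, prefer the one reusing the most switches already allocated to deployed tasks. Candidate generation is omitted and the data below is illustrative.

```python
# Switch-reuse preference when selecting a subnetwork for a task (11334392).
from dataclasses import dataclass
from typing import List, Set


@dataclass
class Subnetwork:
    name: str
    switches: Set[str]
    links: Set[str]


def choose_subnetwork(candidates: List[Subnetwork],
                      already_allocated_switches: Set[str]) -> Subnetwork:
    return max(candidates,
               key=lambda sn: len(sn.switches & already_allocated_switches))


candidates = [Subnetwork("A", {"sw1", "sw2"}, {"l1", "l2"}),
              Subnetwork("B", {"sw2", "sw3", "sw4"}, {"l3", "l4"})]
chosen = choose_subnetwork(candidates, already_allocated_switches={"sw3", "sw4"})
print("allocating links of", chosen.name, "->", sorted(chosen.links))
```
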
  • Patent number: 11327794
    Abstract: A computing system may run a recurring task, which may use resources, such as logic resources and time, to operate on and/or with a set of data. Accordingly, the frequency at which the recurring task is executed may limit the performance and/or efficiency of the computing system. As such, a scheduler routine may, based on configuration information associated with the recurring task and/or the set of data, schedule the recurring task with a periodicity that may improve the performance and/or efficiency of the computing system.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: May 10, 2022
    Assignee: ServiceNow, Inc.
    Inventors: Venkata Satya Sai Rama Murthy Manda, Peng Wang
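
A hedged sketch of 11327794's scheduler routine: derive a periodicity for the recurring task from configuration about the task and its data set rather than running on a fixed interval. The specific heuristic below (run roughly when a full batch of new data has accumulated) is invented for illustration.

```python
# Invented periodicity heuristic illustrating configuration-driven scheduling (11327794).
from dataclasses import dataclass


@dataclass
class RecurringTaskConfig:
    rows_processed_per_run: int
    average_new_rows_per_hour: int
    min_interval_minutes: int = 5


def choose_periodicity_minutes(config: RecurringTaskConfig) -> int:
    if config.average_new_rows_per_hour == 0:
        return 24 * 60                                  # nothing changes: run daily
    # run roughly when a full batch of new data has accumulated
    hours_per_batch = config.rows_processed_per_run / config.average_new_rows_per_hour
    return max(config.min_interval_minutes, int(hours_per_batch * 60))


print(choose_periodicity_minutes(RecurringTaskConfig(10_000, 2_000)))  # 300 minutes
```
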
  • Patent number: 11321111
    Abstract: The present disclosure provides systems, methods, and computer-readable media for managing graphics processing unit (GPU) allocation for a virtual machine (VM). A first GPU driver, associated with a first GPU, is offloaded from an operating system (OS) of the VM. Then, the first GPU is deallocated from the VM. A second GPU is allocated to the VM, and a second GPU driver, associated with the second GPU, is loaded in the OS of the VM. To restore a GPU context from the first GPU within the second GPU, a GPU command log from the first GPU is replayed to the second GPU.
    Type: Grant
    Filed: September 5, 2016
    Date of Patent: May 3, 2022
    Assignees: Huawei Technologies Co., Ltd., The Governing Council of the University of Toronto
    Inventors: Eyal de Lara, Daniel Kats, Graham Allsop, Weidong Han, Feng Xie
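
A step-by-step sketch, with invented function names, of the GPU swap described in 11321111: unload the old driver from the guest OS, swap the allocated device, load the new driver, and rebuild the GPU context by replaying the recorded command log on the new GPU.

```python
# Illustrative driver-swap and command-log replay sequence (11321111).
from typing import List


class VirtualMachine:
    def __init__(self):
        self.loaded_driver = None
        self.gpu = None

    def unload_driver(self):
        print(f"unloading driver {self.loaded_driver}")
        self.loaded_driver = None

    def load_driver(self, driver: str):
        self.loaded_driver = driver
        print(f"loaded driver {driver}")


def migrate_gpu(vm: VirtualMachine, new_gpu: str, new_driver: str,
                command_log: List[str]) -> None:
    vm.unload_driver()          # offload first GPU driver from the guest OS
    vm.gpu = None               # deallocate the first GPU
    vm.gpu = new_gpu            # allocate the second GPU
    vm.load_driver(new_driver)  # load the second GPU driver
    for command in command_log: # replay the log to restore the GPU context
        print(f"replaying on {new_gpu}: {command}")


vm = VirtualMachine()
vm.gpu, vm.loaded_driver = "gpu-A", "driver-A"
migrate_gpu(vm, "gpu-B", "driver-B", ["create_texture(1)", "bind_shader(7)"])
```
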
  • Patent number: 11321116
    Abstract: The electronic device with one or more processors and memory receives an input of a user. The electronic device, in accordance with the input, identifies a respective task type from a plurality of predefined task types associated with a plurality of third party service providers. The respective task type is associated with at least one third party service provider for which the user is authorized and at least one third party service provider for which the user is not authorized. In response to identifying the respective task type, the electronic device sends a request to perform at least a portion of a task to a third party service provider of the plurality of third party service providers that is associated with the respective task type.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: May 3, 2022
    Assignee: Apple Inc.
    Inventors: Thomas R. Gruber, Christopher D. Brigham, Adam J. Cheyer, Daniel Keen, Kenneth Kocienda
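
A sketch of the dispatch described in 11321116, with an invented provider registry and keyword matcher: the user's input is mapped to a predefined task type, and the request is sent to a third party service provider associated with that type (preferring one the user is authorized with, when available).

```python
# Task-type identification and third-party dispatch, illustrative only (11321116).
from typing import Dict, List, Optional

TASK_TYPE_PROVIDERS: Dict[str, List[str]] = {
    "ride_booking": ["rides-r-us", "quick-cab"],
    "food_order": ["pizza-place"],
}


def identify_task_type(user_input: str) -> Optional[str]:
    if "ride" in user_input or "taxi" in user_input:
        return "ride_booking"
    if "order" in user_input and "pizza" in user_input:
        return "food_order"
    return None


def dispatch(user_input: str, authorized_providers: List[str]) -> None:
    task_type = identify_task_type(user_input)
    if task_type is None:
        print("no matching task type")
        return
    providers = TASK_TYPE_PROVIDERS[task_type]
    chosen = next((p for p in providers if p in authorized_providers), providers[0])
    print(f"sending {task_type} request to {chosen}")


dispatch("get me a ride to the airport", authorized_providers=["quick-cab"])
```
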
  • Patent number: 11315014
    Abstract: A computer implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources; wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks, training the deep neural network by: executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, the said sets of tasks, their parameters and the obtained performance, optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric b
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: April 26, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Chandra Yeleshwarapu, Nagaraj Srinivasan
  • Patent number: 11314542
    Abstract: A multi-layer compute sizing correction stack may generate prescriptive compute sizing correction tokens for controlling sizing adjustments for computing resources. The input layer of the compute sizing correction stack may generate cleansed utilization data based on historical utilization data received via a network connection. A prescriptive engine layer may generate a compute sizing correction trajectory detailing adjustments to sizing for the computing resources. Based on the compute sizing correction trajectory, the prescriptive engine layer may generate the compute sizing correction tokens that may be used to control compute sizing adjustments prescriptively.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: April 26, 2022
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Madhan Kumar Srinivasan, Arun Purushothaman, Guruprasad PV, Michael S. Eisenstein, Vijay Desai
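
A simplified, hypothetical sketch of the layered flow in 11314542: cleanse historical utilization data, derive a sizing-correction trajectory from it, and emit tokens that downstream tooling would consume to apply the adjustments. The thresholds and token format are invented.

```python
# Invented cleanse -> trajectory -> token pipeline illustrating 11314542.
from typing import Dict, List


def input_layer_cleanse(historical_utilization: List[float]) -> List[float]:
    # drop obviously invalid samples (e.g. negative or >100% CPU)
    return [u for u in historical_utilization if 0.0 <= u <= 1.0]


def prescriptive_engine(cleansed: List[float]) -> Dict[str, str]:
    peak = max(cleansed)
    if peak < 0.3:
        trajectory = "downsize"
    elif peak > 0.8:
        trajectory = "upsize"
    else:
        trajectory = "hold"
    # the token is what downstream tooling would consume to apply the change
    return {"trajectory": trajectory, "token": f"sizing-correction:{trajectory}"}


raw = [0.12, 0.25, 1.7, 0.18, -0.1]        # includes two invalid samples
print(prescriptive_engine(input_layer_cleanse(raw)))
```
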
  • Patent number: 11301274
    Abstract: Disclosed is an improved approach to implement I/O and storage device management in a virtualization environment. According to some approaches, a Service VM is employed to control and manage any type of storage device, including directly attached storage in addition to networked and cloud storage. The Service VM implements the Storage Controller logic in the user space, and can be migrated as needed from one node to another. IP-based requests are used to send I/O request to the Service VMs. The Service VM can directly implement storage and I/O optimizations within the direct data access path, without the need for add-on products.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: April 12, 2022
    Assignee: Nutanix, Inc.
    Inventors: Mohit Aron, Dheeraj Pandey, Ajeet Singh
  • Patent number: 11301307
    Abstract: Systems, methods, and machine-readable instructions stored on machine-readable media are disclosed for selecting, based on an analysis of a first process executing on a first host, at least one of a plurality of other hosts to which to migrate the first process, the selecting being further based on an analysis of the plurality of the other hosts and an analysis of processes executing on the plurality of the other hosts. At least one predictive analysis technique is used to predict an amount of time to complete migrating the first process to the selected at least one of the plurality of other hosts and an end time of the second process. In response to determining that a current time incremented by the predicted amount of time to complete migrating the first process is later than or equal to the predicted end time of the second process, a migration time at which to migrate the first process from the first host to the selected at least one of the plurality of other hosts is scheduled.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: April 12, 2022
    Assignee: RED HAT, INC.
    Inventor: Steven Eric Rosenberg
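
A minimal sketch of the timing condition in 11301307: schedule the migration only if the current time plus the predicted migration duration is later than or equal to the predicted end time of the second process on the target host. The prediction functions below are stubs standing in for the patent's predictive analysis techniques, and the scheduling policy (migrate immediately) is an assumption.

```python
# Stubbed timing check for scheduling a process migration (11301307).
import time
from typing import Optional


def predicted_migration_seconds(process_id: str, target_host: str) -> float:
    return 120.0                      # stub: e.g. output of a regression model


def predicted_end_time(second_process_id: str) -> float:
    return time.time() + 90.0         # stub: second process ends in ~90 s


def maybe_schedule_migration(process_id: str, target_host: str,
                             second_process_id: str) -> Optional[float]:
    now = time.time()
    if now + predicted_migration_seconds(process_id, target_host) >= \
            predicted_end_time(second_process_id):
        migration_time = now          # could also be deferred; policy varies
        print(f"migration of {process_id} to {target_host} scheduled at {migration_time:.0f}")
        return migration_time
    return None


maybe_schedule_migration("proc-1", "host-B", "proc-2")
```
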