Patents Examined by Charles M Swift
  • Patent number: 11847500
    Abstract: A method can include receiving, at a workflow controller, a machine learning workflow, the machine learning workflow associated with a first task and a second task. The first task is training a machine learning model and the second task is deploying the model. The method can include segmenting, by the workflow controller, the machine learning workflow into a first sub-workflow associated with the first task and a second sub-workflow associated with the second task, assigning a first workflow agent to the first sub-workflow and assigning a second workflow agent to the second sub-workflow, selecting, by the first workflow agent and based on first resources needed to perform the first task, a first cluster for performing the first task and selecting, by the second workflow agent and based on second resources needed to perform the second task, a second cluster for performing the second task.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: December 19, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Johnu George, Sourav Chakraborty, Amit Kumar Saha, Debojyoti Dutta, Xinyuan Huang, Adhita Selvaraj
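    Illustrative sketch (editorial addition, not the patented implementation): a minimal Python outline of splitting a two-task ML workflow and letting a per-task agent pick a cluster by resource fit; all names, resource figures, and clusters are invented.
      from dataclasses import dataclass
      @dataclass
      class Cluster:
          name: str
          cpus: int
          gpus: int
      def pick_cluster(needed_cpus, needed_gpus, clusters):
          # A workflow agent picks the first cluster satisfying its task's resource needs.
          for c in clusters:
              if c.cpus >= needed_cpus and c.gpus >= needed_gpus:
                  return c
          raise RuntimeError("no cluster can satisfy the task")
      def run_workflow(clusters):
          # The controller segments the workflow into a training and a deployment
          # sub-workflow, then delegates cluster selection to one agent per task.
          sub_workflows = {"train": (16, 4), "deploy": (2, 0)}  # (cpus, gpus) needed
          return {task: pick_cluster(cpus, gpus, clusters).name
                  for task, (cpus, gpus) in sub_workflows.items()}
      print(run_workflow([Cluster("edge", 4, 0), Cluster("gpu-pool", 32, 8)]))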
  • Patent number: 11847493
    Abstract: A system may include a receiver to receive a task. The task may include a portion of an algorithm, and may include a task power level and a task precision. The system may also include a circuit including a circuit power level and a circuit precision. The system may include first software to identify the circuit, and second software to assign the task to the circuit to reduce total power. The circuit precision may be greater than the task precision.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: December 19, 2023
    Inventor: Yang Seok Ki
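    Illustrative sketch (editorial addition, not the patented implementation): among circuits whose precision meets the task's precision, pick the one that minimizes power; the circuit names, precisions, and power figures are invented.
      def assign(task_precision, circuits):
          # Keep only circuits whose precision is at least the task precision,
          # then choose the lowest-power one to reduce total power.
          eligible = [c for c in circuits if c["precision"] >= task_precision]
          if not eligible:
              return None
          return min(eligible, key=lambda c: c["power"])
      circuits = [
          {"name": "fp16-unit", "precision": 16, "power": 2.0},
          {"name": "fp32-unit", "precision": 32, "power": 5.0},
      ]
      print(assign(16, circuits))  # picks the lower-power fp16 unit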
  • Patent number: 11847494
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for allocating computing resources. In one aspect, a method includes receiving intent data specifying one or more computing services to be hosted by a computing network, requested characteristics of computing resources for use in hosting the computing service, and a priority value for each requested characteristic. A budget constraint is identified for each computing service. Available resources data is identified that specifies a set of available computing resources. A resource allocation problem for allocating computing resources for the one or more computing services is generated based on the intent data, each budget constraint, and the available resources data. At least a portion of the set of computing resources is allocated for the one or more computing services based on results of evaluating the resource allocation problem to meet a particular resource allocation objective.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: December 19, 2023
    Assignee: Google LLC
    Inventors: David J. Helstroom, Patricia Weir, Cameron Cody Smith, Zachary A. Hirsch, Ulric B. Longyear
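    Illustrative sketch (editorial addition, not the patent's actual solver): a greedy, priority-weighted allocation over the inputs the abstract names (intent characteristics with priorities, a budget, and available resources); all characteristics, costs, and pool names are invented.
      def allocate(intent, budget, available):
          # Score each available resource by priority-weighted characteristic match.
          scored = [(sum(w for charac, w in intent.items() if r["tags"].get(charac)), r)
                    for r in available]
          plan, spent = [], 0.0
          for score, r in sorted(scored, key=lambda s: -s[0]):
              if score > 0 and spent + r["cost"] <= budget:  # respect the budget constraint
                  plan.append(r["name"])
                  spent += r["cost"]
          return plan
      intent = {"ssd": 3, "low_latency": 2}  # requested characteristic -> priority value
      available = [
          {"name": "pool-a", "cost": 10, "tags": {"ssd": True}},
          {"name": "pool-b", "cost": 8, "tags": {"ssd": True, "low_latency": True}},
      ]
      print(allocate(intent, budget=15, available=available))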
  • Patent number: 11803414
    Abstract: Methods and systems for scaling computing processes within a serverless computing environment are provided. In one embodiment, a method is provided that includes receiving a request to execute a computing process in the serverless computing environment. A first node may be created within the serverless computing environment to execute the computing process. A first amount of computing resources may be assigned to the first node. It may later be determined that the first amount of computing resources is not sufficient to implement the first node. A second amount of computing resources may be determined with a vertical autoscaling process and a second node may be created within the serverless computing environment using a horizontal autoscaling process. The second node may be assigned the second amount of computing resources. The computing process may then be executed using both the first and second nodes within the serverless computing environment.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: October 31, 2023
    Assignee: Red Hat, Inc.
    Inventors: Huamin Chen, Roland Huss
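    Illustrative sketch (editorial addition, not the patented implementation): the vertical step sizes a larger allocation when the first node's resources fall short, and the horizontal step adds a second node carrying it; the headroom factor and resource numbers are invented.
      def scale(nodes, observed_usage, headroom=1.5):
          current = nodes[-1]["resources"]
          if observed_usage <= current:
              return nodes                                   # first allocation is sufficient
          vertical_amount = int(observed_usage * headroom)   # vertical autoscaling: new amount
          nodes.append({"name": f"node-{len(nodes) + 1}",    # horizontal autoscaling: new node
                        "resources": vertical_amount})
          return nodes
      nodes = [{"name": "node-1", "resources": 512}]
      print(scale(nodes, observed_usage=800))  # adds node-2 with a larger allotment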
  • Patent number: 11782751
    Abstract: A method of processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP) may include obtaining, from an NBMP source, a workflow having a workflow descriptor (WD) indicating a workflow descriptor document (WDD); based on the workflow, obtaining a task having a task descriptor (TD) indicating a task descriptor document (TDD); based on the task, obtaining, from a function repository, a function having a function descriptor (FD) indicating a function descriptor document (FDD); and processing the media content, using the workflow, the task, and the function.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: October 10, 2023
    Assignee: TENCENT AMERICA LLC
    Inventor: Iraj Sodagar
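    Illustrative sketch (editorial addition): a toy walk of the descriptor chain the abstract names, where a workflow descriptor document yields tasks, task descriptor documents name functions, and function descriptor documents come from a repository; the dictionaries stand in for real NBMP documents and are invented.
      FUNCTION_REPOSITORY = {"scale-video": {"fdd": {"op": "scale", "factor": 2}}}
      def process(workflow):
          wdd = workflow["wdd"]                  # workflow descriptor document
          for task in wdd["tasks"]:
              tdd = task["tdd"]                  # task descriptor document
              fdd = FUNCTION_REPOSITORY[tdd["function"]]["fdd"]  # function descriptor document
              print(f"applying {fdd['op']} x{fdd['factor']} to the media content")
      process({"wdd": {"tasks": [{"tdd": {"function": "scale-video"}}]}})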
  • Patent number: 11775348
    Abstract: In general, various aspects of the present invention provide methods, apparatuses, systems, computing devices, computing entities, and/or the like for generating and managing custom workflows for domain objects defined within microservices. In accordance with various aspects, a method is provided that comprises: receiving an attribute value for a custom workflow to include in a microservice, the attribute value corresponding to an attribute defined for a workflow component; accessing mapping data for the attribute; identifying, based on the mapping data, a corresponding field of a workflows table mapped to the attribute; and storing a record in the workflows table for the custom workflow with the attribute value stored in the corresponding field for the record to persist the custom workflow in the microservice.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: October 3, 2023
    Assignee: OneTrust, LLC
    Inventors: Subramanian Viswanathan, Milap Shah
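    Illustrative sketch (editorial addition, not OneTrust's implementation): the persistence step described above, where mapping data ties a workflow-component attribute to a column of a workflows table and the received value is stored in that column; the attribute, field, and table are invented.
      MAPPING = {"approval_stage": "stage_column"}  # attribute -> workflows table field
      workflows_table = []                          # stand-in for the microservice's table
      def persist_custom_workflow(attribute, value):
          field = MAPPING[attribute]                # identify the mapped field
          record = {field: value}                   # store the value in that field
          workflows_table.append(record)            # persist the custom workflow
          return record
      print(persist_custom_workflow("approval_stage", "legal-review"))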
  • Patent number: 11775350
    Abstract: A method of resource estimation for implementing processing functions is described. The method can include receiving a default value of a resource requirement parameter indicating a default resource requirement for instantiating a reference instance of a processing function on a computing platform with one or more default parameter values of configuration parameters and input parameters of the processing function, and estimating a current value of the resource requirement parameter indicating a current resource requirement for instantiating a current instance of the processing function on the computing platform with one or more current parameter values of the configuration parameters and input parameters of the processing function based on the default value of the resource requirement parameter, and the one or more default parameter values and the one or more current parameter values of the configuration parameters and input parameters of the processing function.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: October 3, 2023
    Assignee: TENCENT AMERICA LLC
    Inventor: Iraj Sodagar
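    Illustrative sketch (editorial addition): one possible reading of the estimation step, scaling the default resource requirement by how the current parameters differ from the defaults; the linear scaling rule, parameter names, and numbers are assumptions, not the patent's method.
      def estimate_requirement(default_requirement, default_params, current_params):
          factor = 1.0
          for name, default_value in default_params.items():
              factor *= current_params.get(name, default_value) / default_value
          return default_requirement * factor
      # e.g. a reference instance needed 4 GB at 1080p/30fps; estimate for 2160p/60fps
      print(estimate_requirement(4.0, {"pixels": 1920 * 1080, "fps": 30},
                                      {"pixels": 3840 * 2160, "fps": 60}))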
  • Patent number: 11768705
    Abstract: Methods, apparatus, systems and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: September 26, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Thomas Willhalm, Timothy Verrall
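    Illustrative sketch (editorial addition, not Intel's implementation): estimate completion time for the local and remote accelerators from telemetry and pick whichever meets the service level agreement, preferring local on ties; the time model and figures are invented.
      def choose_accelerator(local, remote, sla_ms):
          local_est = local["queue_ms"] + local["exec_ms"]
          remote_est = remote["queue_ms"] + remote["exec_ms"] + remote["network_ms"]
          if local_est <= sla_ms and local_est <= remote_est:
              return "local"
          if remote_est <= sla_ms:
              return "remote"
          return "local"  # neither meets the SLA; fall back to local execution
      print(choose_accelerator({"queue_ms": 5, "exec_ms": 40},
                               {"queue_ms": 1, "exec_ms": 10, "network_ms": 20},
                               sla_ms=35))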
  • Patent number: 11755369
    Abstract: The present disclosure relates generally to virtualization, and more particularly to techniques for deploying containers in a virtual environment. The container scheduling can be based on information determined by a virtual machine scheduler. For example, a container scheduler can receive a request to deploy a container. The container scheduler can send container information to the virtual machine scheduler. The virtual machine scheduler can use the container information along with resource utilization of one or more virtual machines to determine an optimal virtual machine for the container. The virtual machine scheduler can send an identification of the optimal virtual machine back to the container scheduler so that the container scheduler can deploy the container on the optimal virtual machine.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: September 12, 2023
    Assignee: VMware, Inc.
    Inventors: Thaleia Dimitra Doudali, Zhelong Pan, Pranshu Jain
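    Illustrative sketch (editorial addition, not VMware's implementation): the handshake described above, where the container scheduler forwards the container's needs and the VM scheduler answers with the least-utilized VM that can host it; the utilization figures and names are invented.
      def vm_scheduler_pick(container_cpu, vms):
          # The VM scheduler uses resource utilization to pick an optimal VM.
          candidates = [v for v in vms if v["free_cpu"] >= container_cpu]
          return min(candidates, key=lambda v: v["utilization"])["name"] if candidates else None
      def container_scheduler_deploy(container, vms):
          target = vm_scheduler_pick(container["cpu"], vms)  # ask the VM scheduler
          return f"deploy {container['name']} on {target}"
      vms = [{"name": "vm-1", "free_cpu": 2, "utilization": 0.7},
             {"name": "vm-2", "free_cpu": 4, "utilization": 0.3}]
      print(container_scheduler_deploy({"name": "web", "cpu": 2}, vms))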
  • Patent number: 11755358
    Abstract: A virtual machine (VM) management utility tool may deploy an object model that may persist one or more virtual machine dependencies and relationships. Through a web front-end interface, for example, the VMs may be started in a specific order or re-booted, and the tool automatically determines the additional VMs that need to be re-booted in order to maintain the integrity of the environment. Through the web interface, for example, the object model may be managed, and start-up orders or VM dependencies may be updated. For VMs that may not start under load, the object model may restrict access to the VM until the VM is fully initialized.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: September 12, 2023
    Assignee: Intel Corporation
    Inventors: Christopher Thomas Wilkinson, Neelsen Edward Cyrus
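    Illustrative sketch (editorial addition, not the patented tool): one way to compute the additional VMs to re-boot from a dependency object model, by walking every VM that transitively depends on the target; the example graph is invented.
      def vms_to_reboot(target, depends_on):
          # Invert the "VM depends on VM" edges: who depends on each VM?
          dependents = {}
          for vm, deps in depends_on.items():
              for d in deps:
                  dependents.setdefault(d, set()).add(vm)
          to_reboot, stack = set(), [target]
          while stack:
              vm = stack.pop()
              if vm not in to_reboot:
                  to_reboot.add(vm)
                  stack.extend(dependents.get(vm, ()))
          return to_reboot
      model = {"app-vm": {"db-vm"}, "report-vm": {"app-vm"}}
      print(vms_to_reboot("db-vm", model))  # {'db-vm', 'app-vm', 'report-vm'}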
  • Patent number: 11748161
    Abstract: A method and apparatus for job submission are described. In one embodiment, the jobs are submitted by a job submission service or gateway that schedules large-scale data processing jobs on remote infrastructure. In one embodiment, the method comprises: receiving a request at a proxy service from a first client, via a first network communication, to submit a first job to a cluster; and managing the first job externally to the first client, including sending a request to an orchestration system to launch an orchestration system job in a container to start the first job running on the cluster via a client process run on a job client in the container and provide state information back to the proxy service regarding the orchestration system job.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: September 5, 2023
    Assignee: Stripe, Inc.
    Inventors: Andrew Johnson, Daniel Snitkovskiy, Marti Motoyama, Jonathan Bender
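    Illustrative sketch (editorial addition): a toy version of the proxy flow, where the proxy accepts a submission, asks an orchestrator to launch a containerized job client, and records the state reported back; the classes are stand-ins, not Stripe's API.
      class Orchestrator:
          def launch(self, job_spec):
              # Launching the orchestration system job in a container would happen
              # here; report an initial state back to the caller.
              return {"job": job_spec["name"], "state": "RUNNING"}
      class ProxyService:
          def __init__(self, orchestrator):
              self.orchestrator = orchestrator
              self.state = {}
          def submit(self, job_spec):
              status = self.orchestrator.launch(job_spec)     # manage the job externally
              self.state[job_spec["name"]] = status["state"]  # track reported state
              return status
      print(ProxyService(Orchestrator()).submit({"name": "spark-etl"}))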
  • Patent number: 11748146
    Abstract: Implementations describe a computing system that implements a plurality of virtual machines inside a trust domain (TD), enabled via a secure arbitration mode (SEAM) of the processor. A processor includes one or more registers to store a SEAM range of memory and a TD key identifier of a TD private encryption key. The processor is capable of initializing a trust domain resource manager (TDRM) to manage the TD, and a virtual machine monitor within the TD to manage the plurality of virtual machines therein. The processor is further capable of exclusively associating a plurality of memory pages with the TD, wherein the plurality of memory pages associated with the TD is encrypted with a TD private encryption key inaccessible to the TDRM. The processor is further capable of using the SEAM range of memory, inaccessible to the TDRM, to provide isolation between the TDRM and the plurality of virtual machines.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: September 5, 2023
    Assignee: Intel Corporation
    Inventors: Ravi L. Sahita, Tin-Cheung Kung, Vedvyas Shanbhogue, Barry E. Huntley, Arie Aharon
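    Illustrative sketch (editorial addition): a toy software model of the isolation property only, in which pages bound to the TD's key identifier and the SEAM memory range are off-limits to the TDRM; the addresses, key id, and check are invented and do not model the hardware.
      TD_KEY_ID = "td-key-1"
      SEAM_RANGE = range(0x1000, 0x2000)
      pages = {0x5000: TD_KEY_ID, 0x3000: None}  # page -> key id (None = shared memory)
      def can_access(agent, page):
          if agent == "TDRM":
              # The TDRM may touch neither the SEAM range nor TD-private pages.
              return page not in SEAM_RANGE and pages.get(page) != TD_KEY_ID
          return True  # the TD's own VMs may access their private pages
      print(can_access("TDRM", 0x5000), can_access("TD-VM", 0x5000))  # False True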
  • Patent number: 11740986
    Abstract: The present invention is a method and system for automatedly producing at least one desktop analytics trigger. Upon receiving at least one type of data input, the system analyzes the data input and produces at least one desktop analytics trigger based on the results of the analysis of the data input. The data input can include data on the programs, applications, or information a user utilizes during a task, to allow use of desktop process analytics. This process may be used to either generate a new desktop analytics trigger or update an existing desktop analytics trigger.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: August 29, 2023
    Assignee: Verint Americas Inc.
    Inventors: Senan Burgess, Chris Schnurr
  • Patent number: 11740922
    Abstract: Systems and methods are provided for implementing a Virtual Switch (vSwitch) architecture that supports transparent virtualization and live migration. In one embodiment, a vSwitch is provided with prepopulated Local Identifiers (LIDs). Another embodiment provides for a vSwitch with dynamic LID assignment. Another embodiment provides for a vSwitch with prepopulated LIDs and dynamic LID assignment. Moreover, embodiments of the present invention provide scalable dynamic network reconfiguration methods which enable live migrations of VMs in network environments.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: August 29, 2023
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Bjørn Dag Johnsen, Evangelos Tasoulas, Ernst Gunnar Gran
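    Illustrative sketch (editorial addition, not Oracle's implementation): a contrast of the two LID schemes the abstract names, where prepopulation hands every virtual function a LID up front and dynamic assignment hands one out only when a VM attaches; the LID values are invented.
      def prepopulate_lids(num_vfs, first_lid=10):
          # Prepopulated scheme: every virtual function gets a LID immediately.
          return {f"vf{i}": first_lid + i for i in range(num_vfs)}
      class DynamicLidPool:
          # Dynamic scheme: a LID is assigned only when a VM attaches.
          def __init__(self, first_lid=10):
              self.next_lid = first_lid
              self.assigned = {}
          def attach_vm(self, vm):
              self.assigned[vm] = self.next_lid
              self.next_lid += 1
              return self.assigned[vm]
      print(prepopulate_lids(3))
      pool = DynamicLidPool()
      print(pool.attach_vm("vm-a"))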
  • Patent number: 11734072
    Abstract: Systems and techniques for managing and executing digital workflows are described. A technique described includes obtaining a job record from a job queue from a first server; assigning a node associated with a second server to handle a task indicated by the job record; operating, at the second server, a first action block in the node to produce output results in response to executing the task and to forward the output results to batch blocks; operating, at the second server, the batch blocks in the node to respectively accumulate different batch groups of the output results; operating, at the second server, the batch blocks in the node to respectively forward the different batch groups of the output results to respective second action blocks; and operating, at the second server, the second action blocks in the node to respectively process the different batch groups of the output results.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: August 22, 2023
    Assignee: Nuvolo Technologies Corporation
    Inventor: Collin Parker
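    Illustrative sketch (editorial addition, not Nuvolo's implementation): the block pipeline described above, where a first action block produces results, batch blocks accumulate them into groups, and second action blocks process each group; the batching key and task items are invented.
      def first_action_block(task_items):
          return [{"id": i, "group": i % 2} for i in task_items]  # produce output results
      def batch_blocks(results):
          groups = {}
          for r in results:                        # accumulate different batch groups
              groups.setdefault(r["group"], []).append(r)
          return groups
      def second_action_block(group):
          return f"processed {len(group)} results"  # process one batch group
      results = first_action_block(range(5))
      print({g: second_action_block(items) for g, items in batch_blocks(results).items()})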
  • Patent number: 11726817
    Abstract: Scheduling multiple processes with varying delay sensitivity is disclosed herein. In one example, a processor device iteratively executes a processing workload that includes a fixed-execution-time process and an adjustable-execution-time process. During each iteration of the processing workload, the processor device first determines, for that iteration, a maximum cycle time interval during which both the fixed-execution-time process and the adjustable-execution-time process will execute. The processor device further determines a maximum execution time interval for the adjustable-execution-time process, based on the maximum cycle time interval and a fixed execution time interval for the fixed-execution-time process. The processor device then modifies an adjustable execution time interval for the adjustable-execution-time process in the current iteration of the processing workload based on the maximum execution time interval.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: August 15, 2023
    Assignee: Red Hat, Inc.
    Inventors: Jered J. Floyd, Ali Ok
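    Illustrative sketch (editorial addition, not Red Hat's implementation): the per-iteration arithmetic described above, giving the adjustable-execution-time process whatever remains of the maximum cycle time after the fixed process is accounted for; the millisecond values are invented.
      def plan_iteration(max_cycle_ms, fixed_exec_ms, requested_adjustable_ms):
          max_adjustable_ms = max_cycle_ms - fixed_exec_ms  # budget left for the adjustable process
          return min(requested_adjustable_ms, max_adjustable_ms)
      for max_cycle in (33, 40, 50):  # the cycle budget can vary per iteration
          print(max_cycle, plan_iteration(max_cycle, fixed_exec_ms=12,
                                          requested_adjustable_ms=30))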
  • Patent number: 11720416
    Abstract: A computer's processes and/or threads generate and store in memory, data to reimplement or reverse a transaction on a database, so that the database can be recovered. This data is written to persistent memory storage (“persisted”) by another process, for which the processes and/or threads may wait. This wait includes at least a sleep phase, and additionally a spin phase which is entered if after awakening from sleep and checking (“on-awakening” check), the data to be persisted is found to not have been persisted. To sleep in the sleep phase, each process/thread specifies a sleep duration determined based at least partially on previous results of on-awakening checks. The previous results in which to-be-persisted data was found to be not persisted are indications the sleeps were insufficient, and these indications are counted and used to determine the sleep duration. Repeated determination of sleep duration makes the sleep phase adaptive.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: August 8, 2023
    Assignee: Oracle International Corporation
    Inventors: Graham Ivey, Yunrui Li
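    Illustrative sketch (editorial addition, not Oracle's implementation): an adaptive sleep-then-spin wait in which the sleep length grows with the count of past on-awakening checks that found the data not yet persisted; the base duration and deadline are invented.
      import time
      class PersistWaiter:
          # Counts on-awakening checks that found data not yet persisted and uses
          # that count to lengthen future sleeps (the adaptive sleep phase).
          def __init__(self, base_sleep_s=0.001):
              self.base_sleep_s = base_sleep_s
              self.insufficient_sleeps = 0
          def wait(self, is_persisted):
              time.sleep(self.base_sleep_s * (1 + self.insufficient_sleeps))  # sleep phase
              if not is_persisted():               # on-awakening check failed
                  self.insufficient_sleeps += 1    # record that the sleep was insufficient
                  while not is_persisted():        # spin phase
                      pass
      waiter = PersistWaiter()
      deadline = time.monotonic() + 0.005
      waiter.wait(lambda: time.monotonic() > deadline)
      print("persisted after", waiter.insufficient_sleeps, "insufficient sleep(s)")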
  • Patent number: 11714679
    Abstract: A system for reinforcement learning in a dynamic resource environment includes at least one memory and at least one processor configured to provide an electronic resource environment comprising: a matching engine and a resource generating agent configured for: obtaining from a historical data processing task database a plurality of historical data processing tasks, each historical data processing task including respective task resource requirement data; for a historical data processing task of the plurality of historical data processing tasks, generating layers of data processing tasks wherein a first layer data processing task has an incremental variant in its resource requirement data relative to resource requirement data for a second layer data processing task; and providing the layers of data processing tasks for matching by the matching engine.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: August 1, 2023
    Assignee: ROYAL BANK OF CANADA
    Inventors: Hasham Burhani, Zichang Long, Jonathan Cupillari
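    Illustrative sketch (editorial addition): one possible reading of the layering step, deriving task variants whose resource requirements differ by a fixed increment from a historical task before handing them to a matching engine; the increment, layer count, and task are invented.
      def generate_layers(historical_task, num_layers=3, increment=0.1):
          base = historical_task["cpu"]  # historical task resource requirement
          return [{"layer": i, "cpu": round(base * (1 + i * increment), 2)}
                  for i in range(num_layers)]
      layers = generate_layers({"task": "risk-batch", "cpu": 8})
      print(layers)  # these variants would be provided to the matching engine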
  • Patent number: 11704151
    Abstract: A method, system, and computer program product to plan and schedule executions of various utility tasks of a utility command during a maintenance window, the method including receiving a utility command. The method may also include identifying possible utility tasks used to execute the utility command. The method may also include determining preferred utility tasks. The method may also include calculating a degree of parallelism for the preferred utility tasks. The method may also include generating a utility execution plan for the utility command. The method may also include analyzing the utility execution plan against resource constraints of a time window and sub time windows of the time window. The method may also include generating a time window execution plan for each sub time window of the sub time windows. The method may also include updating the utility execution plan with the time window execution plans.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: July 18, 2023
    Assignee: International Business Machines Corporation
    Inventors: Hong Mei Zhang, Xiaobo Wang, Sheng Yan Sun, Shuo Li
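    Illustrative sketch (editorial addition, not IBM's implementation): two of the steps named above, choosing a degree of parallelism from available resources and splitting the plan across sub time windows under each sub window's resource constraint; the task names, costs, and capacities are invented.
      def degree_of_parallelism(task_cost, cpus_available):
          return max(1, min(task_cost, cpus_available))
      def plan_sub_windows(tasks, sub_window_capacity):
          windows, current, used = [], [], 0
          for t in tasks:
              if used + t["cost"] > sub_window_capacity:  # start the next sub time window
                  windows.append(current)
                  current, used = [], 0
              current.append(t["name"])
              used += t["cost"]
          if current:
              windows.append(current)
          return windows
      tasks = [{"name": "REORG", "cost": 3}, {"name": "COPY", "cost": 2},
               {"name": "RUNSTATS", "cost": 2}]
      print(degree_of_parallelism(task_cost=4, cpus_available=8))
      print(plan_sub_windows(tasks, sub_window_capacity=4))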
  • Patent number: 11693687
    Abstract: An example operation may include a method comprising one or more of receiving a VNFC module LCM request where the LCM request specifies a VNFC instance (VNFCI), a target VNFC module, and an LCM operation to be performed, retrieving a VNFCI data entry, determining a target OS installation of the VNFCI, establishing a secure connection to a target OS on a VNFCI hosting VM/container, determining a default command for the LCM operation, adapting the default command to the target OS, executing the adapted command, normalizing a response code, and sending a response to the VNFC module LCM request.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: July 4, 2023
    Assignee: International Business Machines Corporation
    Inventor: Keith William Melkild
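    Illustrative sketch (editorial addition, not IBM's implementation): the adapt-and-normalize steps named above, where a default lifecycle command is adapted to the target OS of the VNFC instance, notionally executed, and its OS-specific exit code normalized; the commands, OS names, and codes are invented.
      DEFAULT_COMMANDS = {"start": "systemctl start {module}"}
      OS_ADAPTATIONS = {"alpine": lambda cmd: cmd.replace("systemctl start", "rc-service") + " start"}
      def perform_lcm(operation, module, target_os):
          cmd = DEFAULT_COMMANDS[operation].format(module=module)   # default command
          cmd = OS_ADAPTATIONS.get(target_os, lambda c: c)(cmd)     # adapt to the target OS
          raw_code = 0                                              # pretend execution succeeded
          normalized = "SUCCESS" if raw_code == 0 else "FAILURE"    # normalize the response code
          return {"command": cmd, "result": normalized}
      print(perform_lcm("start", "vnfc-module-1", "alpine"))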