Patents Examined by Tammy E Lee
  • Patent number: 11599395
    Abstract: Some embodiments provide a method for updating a core allocation among processes of a gateway datapath executing on a gateway computing device having multiple cores. The gateway datapath processes include a first set of data message processing processes to which a first set of the cores are allocated and a second set of processes to which a second set of the cores are allocated in a first core allocation. Based on data regarding usage of the cores, the method determines a second core allocation that allocates a third set of the cores to the first set of processes and a fourth set of the cores to the second set of processes. The method updates a load balancing operation to load balance received data messages over the third set of cores rather than the first set of cores. The method reallocates the cores from the first allocation to the second allocation.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: March 7, 2023
    Assignee: VMWARE, INC.
    Inventors: Yong Wang, Mani Kancherla, Kevin Li, Sreeram Ravinoothala, Mochi Xue
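A minimal sketch of the usage-driven core reallocation summarized in the abstract above; the names (CoreAllocation, rebalance, the busy threshold) are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class CoreAllocation:
    datapath_cores: list[int]   # cores allocated to data message processing processes
    other_cores: list[int]      # cores allocated to the remaining datapath processes

def rebalance(current: CoreAllocation, usage: dict[int, float],
              busy_threshold: float = 0.8) -> CoreAllocation:
    """Derive a new allocation from per-core usage data, shifting one core toward
    the data message processing set when that set is saturated."""
    datapath_busy = sum(usage[c] for c in current.datapath_cores) / len(current.datapath_cores)
    if datapath_busy > busy_threshold and len(current.other_cores) > 1:
        moved = current.other_cores[-1]
        return CoreAllocation(current.datapath_cores + [moved], current.other_cores[:-1])
    return CoreAllocation(list(current.datapath_cores), list(current.other_cores))

# The load balancer is updated to spread received data messages over the new
# datapath core set before the cores themselves are reassigned.
alloc = rebalance(CoreAllocation([0, 1], [2, 3]), {0: 0.90, 1: 0.95, 2: 0.20, 3: 0.10})
print(alloc.datapath_cores)  # [0, 1, 3]
```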
  • Patent number: 11599394
    Abstract: Coordinated application processing. A method identifies processing engines available for coordinated application processing, distributes to the processing engines an application configured for execution to perform image processing, and distributes images to the processing engines.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: March 7, 2023
    Assignee: Illumina, Inc.
    Inventors: David Kimmel, Eunho Noh, Paul Smith
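A rough sketch of the two distribution steps in the abstract above (distribute the application, then the images); the engine names and the round-robin policy are assumptions, not details from the patent.

```python
from itertools import cycle

def coordinate(engines: list[str], app_code: bytes, images: list[bytes]):
    # Step 1: distribute the image-processing application to every engine (transport mocked).
    deployment = {engine: app_code for engine in engines}
    # Step 2: distribute the images across the engines, here simply round-robin.
    assignments: dict[str, list[bytes]] = {engine: [] for engine in engines}
    for engine, image in zip(cycle(engines), images):
        assignments[engine].append(image)
    return deployment, assignments

_, work = coordinate(["engine-a", "engine-b"], b"<app>", [b"img1", b"img2", b"img3"])
print(work)  # {'engine-a': [b'img1', b'img3'], 'engine-b': [b'img2']}
```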
  • Patent number: 11593154
    Abstract: The present disclosure is directed to dynamically prioritizing, selecting, or ordering a plurality of threads for execution by processor circuitry based on a quality of service and/or class of service value/indicia assigned to each thread by an operating system executed by the processor circuitry. As threads are executed by processor circuitry, the operating system dynamically updates/associates respective class of service data with each of the plurality of threads. The current quality of service/class of service data assigned to the thread by the operating system is stored in a manufacturer specific register (MSR) associated with the respective thread. Selection circuitry polls the MSRs on a periodic, aperiodic, intermittent, continuous, or event-driven basis and determines an execution sequence based on the current class of service value associated with each of the plurality of threads.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: February 28, 2023
    Assignee: Intel Corporation
    Inventors: Ahmad Samih, Rajshree Chabukswar, Russell Fenger, Shadi Khasawneh, Vijay Dhanraj, Muhammad Abozaed, Mukund Ramakrishna, Atsuo Kuwahara, Guruprasad Settuvalli, Eugene Gorbatov, Monica Gupta, Christine M. Lin
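A simplified sketch of the selection step described in the abstract above: the per-thread class-of-service value that the OS keeps current is read back and used to order execution. The register file is simulated with a dict; real hardware would poll MSRs.

```python
def execution_order(class_of_service: dict[int, int]) -> list[int]:
    """Return thread IDs ordered so the highest class of service runs first."""
    return sorted(class_of_service, key=lambda tid: class_of_service[tid], reverse=True)

# The OS updates these values as threads run; the selection logic re-reads them
# on a periodic, intermittent, or event-driven basis and re-derives the sequence.
msrs = {101: 2, 102: 0, 103: 3}   # thread id -> current class of service
print(execution_order(msrs))       # [103, 101, 102]
```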
  • Patent number: 11544104
    Abstract: A method for off-loading tasks between a set of wireless earpieces in an embodiment of the present invention may have one or more of the following steps: (a) monitoring battery levels of the set of wireless earpieces, (b) determining the first wireless earpiece battery level and the second wireless earpiece battery level, (c) communicating the battery levels of each wireless earpiece to the other wireless earpiece of the set of wireless earpieces, (d) assigning a first task involving one or more of the following: computing tasks, background tasks, audio processing tasks, and sensor data analysis tasks from one of the set of wireless earpieces to the other wireless earpiece if the battery level of the one of the set of wireless earpieces falls below a critical threshold, and (e) communicating data for use in performing a second task to the other wireless earpiece if the second task is communicated to the first wireless earpiece.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: January 3, 2023
    Assignee: BRAGI GMBH
    Inventor: Peter Vincent Boesen
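A toy illustration of the battery-driven off-loading decision in the abstract above; the threshold value and function name are hypothetical.

```python
CRITICAL_THRESHOLD = 0.15  # 15% battery, an assumed value

def assign_task(task: str, left_battery: float, right_battery: float) -> str:
    """Each earpiece reports its level to the other; a task is off-loaded away
    from an earpiece that has fallen below the critical threshold."""
    if left_battery < CRITICAL_THRESHOLD <= right_battery:
        return "right"   # off-load the computing/audio/sensor task to the right earpiece
    if right_battery < CRITICAL_THRESHOLD <= left_battery:
        return "left"
    return "left" if left_battery >= right_battery else "right"

print(assign_task("sensor data analysis", left_battery=0.10, right_battery=0.60))  # right
```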
  • Patent number: 11544118
    Abstract: One embodiment provides an information processing apparatus effective to execute a parallel job in coordination with other information processing apparatuses. In an example, the information processing apparatus includes: a memory configured to store computer readable instructions; and a processor configured to execute the computer readable instructions stored in the memory, the computer readable instructions including: providing an instruction to issue barrier communication of error information; and propagating the error information to each of the other information processing apparatuses based on the instruction for the barrier communication.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: January 3, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Nobutaka Ihara, Takahiro Kawashima
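A very rough in-process sketch of the error-propagation idea above: each apparatus contributes its error information to a barrier-style exchange, after which every participant holds the merged view. The patent operates across networked apparatuses; this only mimics the merge step.

```python
def barrier_propagate(per_node_errors: list[dict]) -> list[dict]:
    """Merge every node's error record and hand the merged view back to each node."""
    merged: dict = {}
    for errors in per_node_errors:
        merged.update(errors)
    return [dict(merged) for _ in per_node_errors]

nodes = [{"node0": "ok"}, {"node1": "ECC error"}, {"node2": "ok"}]
print(barrier_propagate(nodes)[0])   # every node now sees node1's error
```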
  • Patent number: 11526379
    Abstract: Embodiments of the present disclosure relate to a method for building an application. According to the method, a request is received from a building environment to acquire at least one component for executing at least one function of at least one feature of the application. The at least one feature is to be deployed to at least one target node in a distributed service platform comprising a plurality of nodes. The at least one target node and the at least one component are determined based on the request. The at least one component is acquired from the at least one target node. The at least one component is sent to the building environment for building the at least one feature.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: December 13, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ping Xiao, Peng Hui Jiang, Xin Peng Liu, Guang Han Sui
  • Patent number: 11526376
    Abstract: Embodiments of the present disclosure relate to a method for running an application, an electronic device, and a computer program product. The method includes determining, based on historical data associated with running of the application, a target time period and a computing resource to be used for running the application within the target time period, a load rate associated with the computing resource being higher than a threshold load rate in the target time period. The method further includes determining an interruption tolerance of the application based on a type of the application, determining costs for running the application by a plurality of types of virtual machines and determining a target type from the plurality of types based on the costs and the computing resource, to cause the application to be run by a virtual machine of the target type.
    Type: Grant
    Filed: May 31, 2020
    Date of Patent: December 13, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Yuting Zhang, Kaikai Jia
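A small sketch of the selection logic described in the abstract above: interruptible (spot-like) VM types are eligible only when the application tolerates interruption, and the cheapest eligible type for the target period wins. The type names and prices are made up.

```python
from dataclasses import dataclass

@dataclass
class VmType:
    name: str
    hourly_cost: float
    interruptible: bool

def choose_vm(types: list[VmType], hours: float, tolerates_interruption: bool) -> VmType:
    eligible = [t for t in types if tolerates_interruption or not t.interruptible]
    return min(eligible, key=lambda t: t.hourly_cost * hours)

catalog = [VmType("on-demand", 0.40, False), VmType("spot", 0.12, True)]
print(choose_vm(catalog, hours=6, tolerates_interruption=True).name)   # spot
```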
  • Patent number: 11513852
    Abstract: A data transferring apparatus and a method for transferring data with overlap are provided. The data transferring apparatus includes a command splitter circuit and a plurality of tile processing circuits. The command splitter circuit splits a block level transfer command into a plurality of tile transfer tasks. The command splitter circuit may issue the tile transfer tasks to the tile processing circuits in a plurality of batches. The tile processing circuits may execute the tile transfer tasks in a current batch, so as to read data of a plurality of corresponding tiles among a plurality of source tiles of a source block to the tile processing circuits. After all the tile transfer tasks in the current batch have been executed by the tile processing circuits, the command splitter circuit issues the tile transfer tasks in a next batch of the batches to the tile processing circuits.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: November 29, 2022
    Assignee: GlenFly Technology Co., Ltd.
    Inventors: Heng Que, Yuanfeng Wang, Deming Gu, Fengxia Wu
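A software-only sketch of the batching behaviour described above: the block-level transfer is split into tile tasks, and the next batch is issued only after the current batch has been fully executed. Tile size, batch width, and names are assumptions.

```python
def split_into_tiles(block: list[list[int]], tile_size: int) -> list[tuple[int, int]]:
    """Return the (row, col) origin of every tile in the source block."""
    return [(r, c) for r in range(0, len(block), tile_size)
                   for c in range(0, len(block[0]), tile_size)]

def transfer(block: list[list[int]], tile_size: int = 2, tile_processors: int = 4) -> int:
    tiles = split_into_tiles(block, tile_size)
    for batch_start in range(0, len(tiles), tile_processors):
        batch = tiles[batch_start:batch_start + tile_processors]
        for r, c in batch:
            _ = [row[c:c + tile_size] for row in block[r:r + tile_size]]  # read tile data
        # the next batch is issued only after every task in this batch has completed
    return len(tiles)

print(transfer([[0] * 8 for _ in range(8)]))   # 16 tile tasks issued in 4 batches
```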
  • Patent number: 11494211
    Abstract: An electronic device includes a processor that executes a guest operating system and a hypervisor, an input-output (IO) device, and an input-output memory management unit (IOMMU). The IOMMU handles communications between the IOMMU and the guest operating system by: replacing, in communications received from the guest operating system, guest domain identifiers (domainIDs) with corresponding host domainIDs and/or guest device identifiers (deviceIDs) with corresponding host deviceIDs before further processing the communications; replacing, in communications received from the IO device, host deviceIDs with guest deviceIDs before providing the communications to the guest operating system; and placing, into communications generated in the IOMMU and destined for the guest operating system, guest domainIDs and/or guest deviceIDs before providing the communications to the guest operating system. The IOMMU handles the communications without intervention by the hypervisor.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: November 8, 2022
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Maggie Chan, Philip Ng, Paul Blinzer
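A minimal sketch of the ID remapping the abstract above describes: guest-visible domain and device IDs are swapped for host IDs on the way in and back on the way out, without involving the hypervisor. The tables and field names here are invented.

```python
GUEST_TO_HOST_DOMAIN = {1: 41}
GUEST_TO_HOST_DEVICE = {0x10: 0xA0}
HOST_TO_GUEST_DEVICE = {host: guest for guest, host in GUEST_TO_HOST_DEVICE.items()}

def from_guest(command: dict) -> dict:
    """Replace guest domainID/deviceID with host IDs before processing the command."""
    out = dict(command)
    if "domain_id" in out:
        out["domain_id"] = GUEST_TO_HOST_DOMAIN[out["domain_id"]]
    if "device_id" in out:
        out["device_id"] = GUEST_TO_HOST_DEVICE[out["device_id"]]
    return out

def to_guest(event: dict) -> dict:
    """Replace the host deviceID with the guest deviceID before delivery to the guest."""
    out = dict(event)
    out["device_id"] = HOST_TO_GUEST_DEVICE[out["device_id"]]
    return out

print(from_guest({"domain_id": 1, "device_id": 0x10}))   # {'domain_id': 41, 'device_id': 160}
```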
  • Patent number: 11474859
    Abstract: A method for integrating infrastructure software functions and automotive applications on an automotive electronic control unit (ECU) device. The ECU device includes a hardware architecture and a software architecture, wherein the hardware architecture includes two or more system-on-chips, at least two of which each comprise two or more processing cores and means to communicate with at least one other system-on-chip. The hardware architecture includes memory and means to communicate with other ECU devices. The software architecture includes one, two, or more virtual machine monitors, each of which executes one, two, or more virtual machines. At least two of said virtual machines each execute an operating system, which executes one, two, or more tasks, and the execution of two or more of the tasks uses the time-triggered paradigm. The tasks are tasks of automotive applications from at least two different automotive domains and are tasks of infrastructure software functions.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: October 18, 2022
    Assignee: TTTECH AUTO AG
    Inventors: Stefan Poledna, Wilfried Steiner
  • Patent number: 11429361
    Abstract: Techniques for installing agents on host computing systems in data centers are disclosed. In one example, load information and resource capability associated with a host computing system in a data center may be determined. Further, a maximum number of concurrent installations to be performed on the host computing system may be determined based on the load information and the resource capability. Furthermore, a channel with the maximum number of concurrent installations may be configured for the host computing system and agents may be installed on the host computing system based on the configured channel.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: August 30, 2022
    Assignee: VMWARE, INC.
    Inventors: V Vimal Das Kammath, Zacharia George, Narendra Madanapalli, Rahav Vembuli, Aditya Sushilendra Kolhar
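A toy version of the sizing rule described above: the channel width (maximum concurrent installations) is derived from the host's load and capacity, and agents are then installed in waves of that width. The specific formula is an assumption.

```python
import math

def max_concurrent_installs(cpu_load: float, free_memory_gb: float,
                            memory_per_install_gb: float = 0.5) -> int:
    """Allow fewer concurrent installations on a loaded host, but always at least one."""
    headroom = max(0.0, 1.0 - cpu_load)
    return max(1, math.floor(headroom * free_memory_gb / memory_per_install_gb))

def install_in_waves(agents: list[str], channel_width: int) -> list[list[str]]:
    return [agents[i:i + channel_width] for i in range(0, len(agents), channel_width)]

width = max_concurrent_installs(cpu_load=0.6, free_memory_gb=4.0)   # -> 3
print(install_in_waves(["telemetry", "logging", "patching"], width))
```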
  • Patent number: 11416294
    Abstract: An example method includes receiving a resource management request associated with resources provided by at least one data center, creating, based on the resource management request, task data elements including at least first and second task data elements, adding the task data elements to a task data structure accessible at least by first and second worker processes, removing, by the first worker process, a first task data element from the task data structure and initiating execution of a first task, removing, by the second worker process, a second task data element from the task data structure and initiating execution of a second task, wherein the second worker process executes at least a portion of the second task while the first worker process executes at least a portion of the first task in parallel, and sending, to the client computing device, a response to the resource management request.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: August 16, 2022
    Assignee: Juniper Networks, Inc.
    Inventor: Dale Davis
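The abstract above is essentially the classic shared work-queue pattern; a compact sketch follows, with worker threads standing in for the patent's worker processes.

```python
import queue
import threading

def handle_request(task_payloads: list[str]) -> list[str]:
    tasks: queue.Queue = queue.Queue()
    for payload in task_payloads:
        tasks.put(payload)                      # create and add task data elements
    results: list[str] = []
    lock = threading.Lock()

    def worker() -> None:
        while True:
            try:
                payload = tasks.get_nowait()    # remove a task data element
            except queue.Empty:
                return
            with lock:
                results.append(f"done:{payload}")   # execute the task

    workers = [threading.Thread(target=worker) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results                               # basis of the response to the client

print(handle_request(["create-vm", "attach-disk"]))
```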
  • Patent number: 11403133
    Abstract: An apparatus, method, and computer program product are provided to translate request data objects into an ordered sequence of tasks to be performed by network response assets and related systems to allow for the efficient movement of network resources and other resources in high-volume network environments. In some example implementations, otherwise unrelated request data objects and related parameters are interleaved into an ordered sequence of tasks, and a renderable object associated therewith is provided to a user interface of a mobile system associated with a network response asset. Location information, such as triangulated position information associated with one or more mobile devices, along with other system characteristics, may be used to ascertain system status and otherwise effectuate request translation.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: August 2, 2022
    Assignee: GROUPON, INC.
    Inventors: Edward Schmalzle, Phillip Sasser, Nicholas Pellegrini, Ross Moulton
  • Patent number: 11360817
    Abstract: This application provides a method and a terminal for allocating a system resource to an application. The method includes: predicting, by a terminal based on a current status of the terminal, a target application to be used; reserving, by the terminal for the target application based on the prediction result, a system resource required for running the target application; and providing, by the terminal according to a resource allocation request of the target application, the reserved system resource for the target application to use.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: June 14, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Zhenkun Zhou, Yuqiong Xu, Wei Wu
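A toy illustration of the predict-then-reserve flow in the abstract above: the terminal guesses the next application from its current status, sets resources aside, and hands them over when that application asks for an allocation. The predictor and the resource figures are invented.

```python
def predict_target_app(status: dict) -> str:
    """Toy predictor: the camera usually follows an early-morning unlock."""
    if status["screen"] == "unlocked" and status["hour"] < 9:
        return "camera"
    return status.get("last_app", "launcher")

reserved: dict[str, dict] = {}

def reserve(app: str) -> None:
    reserved[app] = {"memory_mb": 256, "cpu_shares": 20}   # resources the app will need

def on_allocation_request(app: str) -> dict:
    # Reserved resources are handed over if the prediction was right; otherwise fall back.
    return reserved.pop(app, {"memory_mb": 64, "cpu_shares": 5})

target = predict_target_app({"screen": "unlocked", "hour": 8})
reserve(target)
print(on_allocation_request("camera"))   # receives the pre-reserved resources
```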
  • Patent number: 11347532
    Abstract: Systems and methods for hot-swapping storage pool backend functional modules of a host computer system. An example method may comprise: identifying, by a processing device of a host computer system executing a virtual machine managed by a virtual machine manager, a storage pool backend functional module; and activating the identified storage pool backend functional module by directing, to the identified storage pool backend functional module, backend storage function calls.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: May 31, 2022
    Assignee: Red Hat, Inc.
    Inventor: Federico Simoncelli
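A minimal sketch of the hot-swap idea above: every backend storage function call goes through one point of indirection, so activating a newly identified backend module is just re-pointing that indirection. The backend classes here are placeholders.

```python
class FileBackend:
    def read(self, volume: str) -> str:
        return f"file-backend read of {volume}"

class BlockBackend:
    def read(self, volume: str) -> str:
        return f"block-backend read of {volume}"

active_backend = FileBackend()          # currently active storage pool backend module

def storage_read(volume: str) -> str:
    return active_backend.read(volume)  # backend storage function calls are directed here

print(storage_read("vm-disk-1"))        # served by the file backend
active_backend = BlockBackend()         # hot swap: activate the identified module
print(storage_read("vm-disk-1"))        # subsequent calls hit the new backend
```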
  • Patent number: 11340928
    Abstract: The method includes performing virtual machine (VM) discovery on a transitioned VM to obtain secondary information, classifying, using a tag mapping, the transitioned VM using at least the secondary information to identify a tag, associating the transitioned VM with a backup policy based on the tag, and sending the backup policy and the tag to a production host hosting the transitioned VM.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: May 24, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Mahipat Rao Kulkarni, Gururaj Kulkarni, Preeti Sharma
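A small sketch of the classify-tag-protect chain in the abstract above; the tag mapping, policies, and discovery output are all invented examples.

```python
TAG_MAPPING = {"database": "tag:critical", "webserver": "tag:standard"}
POLICY_BY_TAG = {"tag:critical": "hourly-backup", "tag:standard": "daily-backup"}

def discover(vm_name: str) -> dict:
    # Stand-in for VM discovery; real secondary information would come from the host.
    return {"name": vm_name, "role": "database"}

def protect(vm_name: str) -> tuple[str, str]:
    info = discover(vm_name)                 # obtain secondary information
    tag = TAG_MAPPING[info["role"]]          # classify the transitioned VM via the tag mapping
    policy = POLICY_BY_TAG[tag]              # associate a backup policy with the tag
    return tag, policy                       # sent to the production host running the VM

print(protect("transitioned-vm-07"))   # ('tag:critical', 'hourly-backup')
```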
  • Patent number: 11327809
    Abstract: An aspect of the invention includes a method for receiving a request to reclaim a portion of a memory assigned to a virtual machine (VM), the memory including a plurality of increments. In response to receiving the request, an increment of the plurality of increments to vacate is selected. The selecting is based at least in part on failure counts corresponding to each of the plurality of increments. An attempt is made to vacate all contents of the selected increment. Based at least in part on determining that all contents of the selected increment were not vacated, a failure count corresponding to the selected increment is incremented. Based at least in part on determining that all contents of the selected increment were vacated, an assignment of the selected increment to the VM is removed.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: May 10, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Walter Church, IV, Ronald C. Pierson
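A compact sketch of the failure-count-guided selection described above: the increment with the fewest past vacate failures is tried first, and a failed attempt raises that increment's count so it is deprioritized next time. The data layout is illustrative.

```python
failure_counts = {0: 2, 1: 0, 2: 5}          # increment index -> past vacate failures

def reclaim_one(assigned: set[int], try_vacate) -> set[int]:
    """Pick the assigned increment with the lowest failure count and attempt to
    vacate it; un-assign it on success, bump its failure count on failure."""
    candidate = min(assigned, key=lambda i: failure_counts[i])
    if try_vacate(candidate):
        return assigned - {candidate}         # the increment's assignment to the VM is removed
    failure_counts[candidate] += 1            # remember that this increment resisted vacating
    return assigned

print(reclaim_one({0, 1, 2}, try_vacate=lambda i: i != 1))   # increment 1 fails; its count rises
```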
  • Patent number: 11327807
    Abstract: Methods, systems, and media for a platform for collaborative processing of computing tasks. The method includes sending, to client devices, one or more client applications including program code associated with an interactive application and a machine learning application. When executed, the program code causes the client devices to: generate a user interface for the interactive application; request, using the generated user interface, inputs from a user of the client devices; receive the requested inputs; process, using computing resources of the client devices, at least part of the machine learning application; and transmit data associated with results of the received inputs and the processing of at least part of the machine learning application. The method further includes receiving and processing the data associated with the results of the received inputs and the processing of at least part of the machine learning application to process the computing tasks.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: May 10, 2022
    Assignee: Balanced Media Technology, LLC
    Inventor: Corey Clark
  • Patent number: 11310113
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to improve cluster efficiency. An example apparatus includes a cluster manager to identify cluster resource details to execute a workload, a workload manager to parse the workload to identify services to be executed by cluster resources, and an optimization formula manager to identify service optimization formulas associated with respective ones of the identified services, and improve cluster resource efficiency by generating a cluster formula configuration to calculate cluster parameter values for the cluster resources.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: April 19, 2022
    Assignee: Intel Corporation
    Inventors: Rene O. Dorado, Abolfazl Shahbazi
  • Patent number: 11294712
    Abstract: Task management techniques, in a storage system, involve: dividing a task to be processed into a plurality of child tasks, so that the processing time required by each of the plurality of child tasks is the same, the number of the plurality of child tasks being a first number; dividing a progress to be reported and being associated with the processing of the task into a plurality of child progresses, the number of the plurality of child progresses being a second number, the second number being less than the first number, and each of the plurality of child progresses having a same value; and associating, based on the first and the second number, and according to a predetermined mapping between the plurality of child progresses and the plurality of child tasks, each of the plurality of child progresses with a respective child task of the plurality of child tasks.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: April 5, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Binbin Deng, Tianfang Xiong, Mancheng Xiong, Shaocong Liang, Zhipeng Zhang
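A short sketch of the progress-to-task mapping described above: a task is split into equal-cost child tasks, progress is split into fewer equal steps, and each progress step is reported when its mapped child task completes. The mapping rule shown is one plausible reading, not the patent's exact formula.

```python
def progress_checkpoints(num_child_tasks: int, num_progress_steps: int) -> dict[int, float]:
    """Map selected child-task indices to the progress value reported after them."""
    step = 100.0 / num_progress_steps
    return {
        round((i + 1) * num_child_tasks / num_progress_steps) - 1: (i + 1) * step
        for i in range(num_progress_steps)
    }

# 12 child tasks, 4 progress updates: report 25% after child task 2, 50% after task 5, ...
print(progress_checkpoints(12, 4))   # {2: 25.0, 5: 50.0, 8: 75.0, 11: 100.0}
```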