Patents Examined by Meng-Ai An
  • Patent number: 11630687
    Abstract: Embodiments of an invention related to compacted context state management are disclosed. In one embodiment, a processor includes instruction hardware and state management logic. The instruction hardware is to receive a first save instruction and a second save instruction. The state management logic is to, in response to the first save instruction, save context state in an un-compacted format in a first save area. The state management logic is also to, in response to the second save instruction, save a compaction mask and context state in a compacted format in a second save area and set a compacted-save indicator in the second save area. The state management logic is also to, in response to a single restore instruction, determine, based on the compacted-save indicator, whether to restore context from the un-compacted format in the first save area or from the compacted format in the second save area.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: April 18, 2023
    Assignee: Tahoe Research, Ltd.
    Inventors: Atul Khare, Leena Puthiyedath, Asit Mallick, Jim Coke, Michael Mishaeli, Gilbert Neiger, Vivekananthan Sanjeepan, Jason Brandt
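The save/restore dispatch this abstract describes can be sketched in a few lines. This is a hypothetical illustration only; the function names, save-area layout, and flag are invented and do not reflect the patent's claims or any ISA's actual instruction semantics.

```python
# Illustrative model of the compacted context-state flow: an un-compacted
# save, a compacted save with a compaction mask plus compacted-save
# indicator, and a single restore that dispatches on the indicator.

COMPACTED_FLAG = "compacted"

def save_uncompacted(context, save_area):
    # First save instruction: every state component stored in fixed form.
    save_area.clear()
    save_area.update(context)
    save_area[COMPACTED_FLAG] = False

def save_compacted(context, mask, save_area):
    # Second save instruction: only components selected by the compaction
    # mask are stored, and the compacted-save indicator is set.
    save_area.clear()
    save_area["mask"] = set(mask)
    save_area["state"] = {k: v for k, v in context.items() if k in mask}
    save_area[COMPACTED_FLAG] = True

def restore(save_area):
    # Single restore instruction: dispatch on the compacted-save indicator.
    if save_area[COMPACTED_FLAG]:
        return dict(save_area["state"])
    return {k: v for k, v in save_area.items() if k != COMPACTED_FLAG}
```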
  • Patent number: 11630698
    Abstract: This disclosure describes methods, devices, systems, and procedures in a computing system for capturing a configuration state of an operating system executing on a central processing unit (CPU), and offloading memory management tasks, based on the configuration state, to a resource management unit such as a system-on-a-chip (SoC). The resource management unit identifies a status of a resource requiring memory swapping based on the captured configuration state of the operating system. The resource management unit then performs the memory swap, relieving the CPU of that work and thereby improving overall computing system performance.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: April 18, 2023
    Assignee: Google LLC
    Inventors: Alex Levin, Todd Alan Broch
  • Patent number: 11625257
    Abstract: A managed object of a virtualized computing environment, which contains the runtime state of a parent virtual machine (VM) and can be placed in any host of the virtualized computing environment, is used for instantly cloning child VMs off that managed object. The managed object is not an executable object (i.e., the state of the managed object is static) and thus it does not require most of the overhead memory associated with a VM. As a result, this managed object can support instant cloning of VMs with a reduction in memory, storage, and CPU overhead relative to when a parent template VM is used.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: April 11, 2023
    Assignee: VMware, Inc.
    Inventors: Arunachalam Ramanathan, Li Zheng, Gabriel Tarasuk-Levin
  • Patent number: 11620154
    Abstract: In a computing system, an application thread is executed on a hardware thread. Based on a configuration of the computing system, a first threshold is determined comprising a threshold percentage of execution time spent servicing a set of interrupts to the application thread relative to a total execution time for the hardware thread. For the hardware thread, a length of a first time period spent servicing an interrupt in the set of interrupts and a length of a second time period spent executing the application thread are measured. A cumulative percentage of execution time spent in the first time period relative to execution time spent in the first time period and the second time period is calculated. Responsive to the cumulative percentage being above the threshold percentage, interrupt servicing on the hardware thread is disabled.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: April 4, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dirk Michel, Bret R. Olszewski, Matthew R. Ochs
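The accounting loop in this abstract maps directly to a small state machine. A minimal sketch, with invented names: track cumulative time servicing interrupts versus running the application thread, and disable interrupt servicing on the hardware thread once the cumulative interrupt share exceeds the configured threshold.

```python
# Per-hardware-thread accounting: interrupt time vs. application time,
# with interrupt servicing disabled past a threshold percentage.

class HardwareThread:
    def __init__(self, threshold_pct):
        self.threshold_pct = threshold_pct  # derived from system config
        self.interrupt_time = 0.0           # first time period(s), summed
        self.app_time = 0.0                 # second time period(s), summed
        self.interrupts_enabled = True

    def record(self, interrupt_seconds, app_seconds):
        self.interrupt_time += interrupt_seconds
        self.app_time += app_seconds
        total = self.interrupt_time + self.app_time
        pct = 100.0 * self.interrupt_time / total if total else 0.0
        # Disable interrupt servicing when the cumulative percentage of
        # execution time spent in interrupts exceeds the threshold.
        if pct > self.threshold_pct:
            self.interrupts_enabled = False
        return pct
```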
  • Patent number: 11614962
    Abstract: System, methods, and other embodiments described herein relate to improving scheduling of computing tasks in a mobile environment for a vehicle. In one embodiment, a method includes receiving an offloading request associated with a computing task from the vehicle, wherein the offloading request includes context information and a task descriptor related to the computing task. The method also includes scheduling the computing task to execute on a server if the context information and the task descriptor satisfy criteria for using computing resources associated with the server for the vehicle. The method also includes partitioning the computing task into subtasks if the context information satisfies the criteria. A machine learning module may decide partitions of the computing task according to the context information.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: March 28, 2023
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Qiang Liu, BaekGyu Kim
  • Patent number: 11609796
    Abstract: Systems, methods, devices, and other techniques for managing a computing resource shared by a set of online entities. A system can receive a request from a first online entity to reserve capacity of the computing resource. The system determines a relative priority of the first online entity and identifies a reservation zone that corresponds to the relative priority of the first online entity. The system determines whether to satisfy the request based on comparing (i) an amount of the requested capacity of the computing resource and (ii) an amount of the portion of unused capacity of the computing resource designated by the reservation zone that online entities having relative priorities at or below the relative priority of the first online entity are permitted to reserve.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: March 21, 2023
    Assignee: Google LLC
    Inventors: Jose Casillas, Ozan Demir, Brent Welch, Mikhail Basilyan, Roy Peterkofsky, Timothy Smith, Philipp Keller
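The zone comparison in this abstract can be made concrete with a small sketch. The zone table and priority encoding below are invented for illustration: each zone designates what fraction of unused capacity entities at or below a given priority may reserve, and a request succeeds only if it fits within that portion.

```python
# Reservation-zone check: compare requested capacity against the portion
# of unused capacity the requester's priority zone permits it to reserve.

def find_zone(zones, priority):
    # zones: list of (min_priority, fraction_of_unused_capacity),
    # ordered highest priority first; pick the requester's zone.
    for min_prio, fraction in zones:
        if priority >= min_prio:
            return fraction
    return 0.0

def can_reserve(requested, unused_capacity, zones, priority):
    permitted = unused_capacity * find_zone(zones, priority)
    return requested <= permitted
```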
  • Patent number: 11599393
    Abstract: Systems, methods, apparatuses, and computer-readable media for guaranteed quality of service (QoS) in cloud computing environments. A workload related to an immutable log describing a transaction may be received. A determination is made based on the immutable log that a first compute node stores at least one data element to process the transaction. Utilization levels of computing resources of the first compute node may be determined. Utilization levels of links connecting the first compute node to the fabric may be determined. A determination may be made, based on the utilization levels, that processing the workload on the first compute node satisfies one or more QoS parameters specified in a service level agreement (SLA). The workload may be scheduled for processing on the first compute node based on the determination that processing the workload on the first compute node satisfies the one or more QoS parameters specified in the SLA.
    Type: Grant
    Filed: March 22, 2022
    Date of Patent: March 7, 2023
    Assignee: State Street Corporation
    Inventors: Fadi Gebara, Ram Rajamony, Ahmed Gheith
  • Patent number: 11593177
    Abstract: Various examples are disclosed for placing virtual machine (VM) workloads in a computing environment. Ephemeral workloads can be placed onto reserved instances or reserved hosts in a cloud-based VM environment. If a request to place a guaranteed workload is received, ephemeral workloads can be evacuated to make way for the guaranteed workload.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: February 28, 2023
    Assignee: VMWARE, INC.
    Inventors: Dragos Victor Misca, Sahan Bamunavita Gamage, Pranshu Jain, Zhelong Pan
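The evacuation policy in this abstract can be sketched as a simple placement function. All names and the host model are invented; this is a sketch of the idea, not VMware's implementation: ephemeral workloads may occupy reserved capacity, but one is evicted when a guaranteed workload needs the slot.

```python
# Place a guaranteed workload, evacuating an ephemeral workload if needed.

def place_guaranteed(hosts, workload):
    # hosts: {name: {"capacity": int, "guaranteed": [...], "ephemeral": [...]}}
    for name, host in hosts.items():
        used = len(host["guaranteed"])
        free = host["capacity"] - used - len(host["ephemeral"])
        if free > 0:
            host["guaranteed"].append(workload)
            return name, []
        if host["capacity"] - used > 0:
            # No free slot, but evacuating an ephemeral workload makes way.
            evicted = host["ephemeral"].pop()
            host["guaranteed"].append(workload)
            return name, [evicted]
    return None, []
```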
  • Patent number: 11593168
    Abstract: Zero copy message reception for devices is disclosed. For example, a host has a memory, a processor, a supervisor, and a device with access to device memory addresses mapped in a device page table via an IOMMU. A first application has access to application memory addresses and is configured to identify a first page of memory addressed by an application memory address to share with the device as a receiving buffer to store data received by the device for the application, where the first page is mapped to a first device memory address in a first device page table entry (PTE). A supervisor is configured to detect that the first application has disconnected from the device, and in response to detecting the application disconnecting, to update the first device PTE to address a second page instead of the first page.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: February 28, 2023
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
  • Patent number: 11593134
    Abstract: An approach for a hypervisor to throttle CPU utilization based on a CPU utilization throttling request received for a data flow is presented. A method comprises receiving a request for a CPU utilization throttling. The request is parsed to extract a CPU utilization level and a data flow identifier of the data flow. Upon receiving a data packet that belongs to the data flow identified by the data flow identifier, a packet size of the data packet is determined, and a rate limit table is accessed to determine, based on the CPU utilization level and the packet size, a rate limit for the data packet. If it is determined, based at least on the rate limit, that the CPU utilization level for the data flow would be exceeded if the data packet is transmitted toward its destination, then a recommendation is generated to drop the data packet.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: February 28, 2023
    Assignee: NICIRA, INC.
    Inventor: Dexiang Wang
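The rate-limit lookup this abstract describes lends itself to a short sketch. The table shape, size buckets, and threshold semantics below are assumptions for illustration: the rate limit table is keyed by CPU utilization level and packet-size bucket, and a drop recommendation is generated when transmitting the packet would exceed the flow's CPU utilization level.

```python
# Throttling decision: look up a rate limit by (CPU level, size bucket)
# and recommend dropping the packet once the limit is reached.

SIZE_BUCKETS = [64, 512, 1500]  # illustrative packet-size buckets (bytes)

def bucket_for(packet_size):
    for b in SIZE_BUCKETS:
        if packet_size <= b:
            return b
    return SIZE_BUCKETS[-1]

def should_drop(rate_limit_table, cpu_level, packet_size, current_rate):
    # rate_limit_table: {(cpu_level, size_bucket): packets_per_second}
    limit = rate_limit_table[(cpu_level, bucket_for(packet_size))]
    return current_rate >= limit  # drop recommendation if limit reached
```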
  • Patent number: 11586468
    Abstract: The present invention relates to a Docker-container-oriented method for isolation of file system resources, which allocates host file system resources according to access requests from containers and checks lock resources corresponding to the access requests. The method creates a plurality of new containers; allocates the host file system resources according to file resource request parameters required by the new containers; and controls execution of the file system operation according to an amount of the file system resources that have been allocated to the new containers.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: February 21, 2023
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Song Wu, Hai Jin, Ximing Chen
  • Patent number: 11579921
    Abstract: Systems and methods for performing parallel computation are disclosed. The system can include: a task manager; and a plurality of cores coupled with the task manager and configured to respectively perform a set of parallel computation tasks based on instructions from the task manager, wherein each of the plurality of cores further comprises: a processing unit configured to generate a first output feature map corresponding to a first computation task among the set of parallel computation tasks; an interface configured to receive one or more instructions from the task manager to collect external output feature maps corresponding to the set of parallel computation tasks from other cores of the plurality of cores; a reduction unit configured to generate a reduced feature map based on the first output feature map and received external output feature maps.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: February 14, 2023
    Assignee: Alibaba Group Holding Limited
    Inventor: Liang Han
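The per-core flow in this abstract (processing unit, interface, reduction unit) can be modeled in software. This is purely an illustrative stand-in for the hardware design, with invented computation: each core produces its own output feature map, gathers the external maps from the other cores, and reduces them elementwise.

```python
# Software model of per-core compute + cross-core reduction.

def compute_local(core_id, inputs):
    # Stand-in for the processing unit: one output feature map per core.
    return [x * (core_id + 1) for x in inputs]

def reduce_maps(local_map, external_maps):
    # Stand-in for the reduction unit: elementwise sum of all maps.
    reduced = list(local_map)
    for ext in external_maps:
        reduced = [a + b for a, b in zip(reduced, ext)]
    return reduced

def run_parallel(num_cores, inputs):
    locals_ = [compute_local(c, inputs) for c in range(num_cores)]
    # Each core's interface collects the other cores' output feature maps.
    return [reduce_maps(locals_[c],
                        [m for i, m in enumerate(locals_) if i != c])
            for c in range(num_cores)]
```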
  • Patent number: 11579929
    Abstract: Disclosed herein are system, method, and computer program product embodiments for configuring a dynamic reassignment of an application flow across different computation layers based on various conditions. An embodiment operates by assigning a first rule of an application flow to a first computation layer of a plurality of computation layers. The embodiment assigns a second rule of the application flow to a second computation layer of the plurality of computation layers. The embodiment assigns a transition rule of the application flow to the first computation layer. The transition rule includes an action that causes the first rule of the application flow to be executed in the second computation layer of the plurality of computation layers based on a condition. The embodiment then transmits the application flow to the plurality of computation layers thereby causing the application flow to be configured for execution.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: February 14, 2023
    Assignee: Salesforce, Inc.
    Inventor: Charles Hart Isaacs
  • Patent number: 11579926
    Abstract: A request manager analyzes API calls from a client to a host application for state and performance information. If current utilization of host application processing or memory footprint resources exceeds predetermined levels, then the incoming API call is not forwarded to the application. If current utilization of the host application processing and memory resources does not exceed the predetermined levels, then the request manager quantifies the processing or memory resources required to report the requested information and determines whether projected utilization of the host application processing or memory resources inclusive of the resources required to report the requested information exceeds predetermined levels. If the predetermined levels are not exceeded, then the request manager forwards the API call to the application for processing.
    Type: Grant
    Filed: February 10, 2020
    Date of Patent: February 14, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Aidan Hally, Paul Mcsweeney, Kenneth Byrne
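The two-stage gating this abstract describes (reject on current utilization, else reject on projected utilization) reduces to a short admission check. The thresholds, cost model, and call shape below are invented for illustration.

```python
# Admission control for API calls: gate on current utilization first,
# then on projected utilization inclusive of the call's own cost.

CPU_LIMIT = 0.80      # predetermined utilization levels (assumed values)
MEM_LIMIT = 0.80

def admit(call_cost, cpu_now, mem_now):
    # Reject outright if current utilization already exceeds the limits.
    if cpu_now > CPU_LIMIT or mem_now > MEM_LIMIT:
        return False
    # Otherwise project utilization inclusive of the resources this API
    # call needs to report the requested information.
    cpu_proj = cpu_now + call_cost["cpu"]
    mem_proj = mem_now + call_cost["mem"]
    return cpu_proj <= CPU_LIMIT and mem_proj <= MEM_LIMIT
```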
  • Patent number: 11579906
    Abstract: Embodiments of systems and methods for managing performance optimization of applications executed by an Information Handling System (IHS) are described. In an illustrative, non-limiting embodiment, a method may include: identifying, by an IHS, a first application; assigning a first score to the first application based upon: (i) a user's presence state, (ii) a foreground or background application state, (iii) a power adaptor state, and (iv) a hardware utilization state, detected during execution of the first application; identifying, by the IHS, a second application; assigning a second score to the second application based upon: (i) another user's presence state, (ii) another foreground or background application state, (iii) another power adaptor state, and (iv) another hardware utilization state, detected during execution of the second application; and prioritizing performance optimization of the first application over the second application in response to the first score being greater than the second score.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: February 14, 2023
    Assignee: Dell Products, L.P.
    Inventors: Nikhil Manohar Vichare, Vivek Viswanathan Iyer
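A scoring function over the four signals this abstract lists can be sketched as a weighted sum. The weights and boolean encodings here are assumptions for illustration, not the assignee's actual model.

```python
# Score applications on the four detected states and prioritize the
# higher-scoring one for performance optimization.

WEIGHTS = {"user_present": 2, "foreground": 3, "on_ac_power": 1,
           "high_utilization": 2}

def score(app_state):
    # app_state: dict of the four boolean signals detected during execution.
    return sum(w for key, w in WEIGHTS.items() if app_state.get(key))

def prioritize(first_state, second_state):
    # Optimize the first application when its score is greater.
    return "first" if score(first_state) > score(second_state) else "second"
```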
  • Patent number: 11579924
    Abstract: Techniques are disclosed for scheduling artificial intelligence model partitions for execution in an information processing system. For example, a method comprises the following steps. An intermediate representation of an artificial intelligence model is obtained. A reversed computation graph corresponding to a computation graph generated based on the intermediate representation is obtained. Nodes in the reversed computation graph represent functions related to the artificial intelligence model, and one or more directed edges in the reversed computation graph represent one or more dependencies between the functions. The reversed computation graph is partitioned into sequential partitions, such that the partitions are executed sequentially and functions corresponding to nodes in each partition are executed in parallel.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: February 14, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Jin Li, Jinpeng Liu, Christopher S. MacLellan
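One way to realize the partitioning this abstract describes is to group nodes of the computation graph by dependency depth: partitions then execute sequentially, while nodes within a partition share no dependencies and can run in parallel. The sketch below is an assumed realization, not the patent's exact algorithm.

```python
# Partition a dependency graph into sequential levels; nodes within a
# level are mutually independent and may execute in parallel.

def partition_by_depth(edges, nodes):
    # edges: {node: set of nodes it depends on}
    depth = {}

    def depth_of(n):
        if n not in depth:
            deps = edges.get(n, set())
            depth[n] = 1 + max((depth_of(d) for d in deps), default=-1)
        return depth[n]

    for n in nodes:
        depth_of(n)
    partitions = [[] for _ in range(max(depth.values()) + 1)]
    for n in sorted(nodes):
        partitions[depth[n]].append(n)
    return partitions
```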
  • Patent number: 11573833
    Abstract: Allocating CPU cores to a thread running in a system that supports multiple concurrent threads includes training a first model to optimize core allocations to threads using training data that includes performance data, initially allocating cores to threads based on the first model, and adjusting core allocations to threads based on a second model that uses run time data and run time performance measurements. The system may be a storage system. The training data may include I/O workload data obtained at customer sites. The I/O workload data may include data about I/O rates, thread execution times, system response times, and Logical Block Addresses. The training data may include data from a site that is expected to run the second model. The first model may categorize storage system workloads and determine core allocations for different categories of workloads. Initially allocating cores to threads may include using information from the first model.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: February 7, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Edward P. Goodwin
  • Patent number: 11573831
    Abstract: Embodiments for optimizing resource usage in a distributed computing environment. Resource usage of each task in a set of running tasks associated with a job is monitored to collect resource usage information corresponding to each respective task. A resource unit size of at least one resource allocated to respective tasks in the set of running tasks is adjusted based on the resource usage information to improve overall resource usage in the distributed computing environment.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: February 7, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Jie Li, Zhimin Lin, Jinming Lv, Guang Han Sui, Hao Zhou
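The adjustment loop this abstract describes can be sketched as a function of observed usage. The headroom factor, shrink step, and names are invented; this is a minimal sketch of resizing an allocated resource unit toward observed demand.

```python
# Adjust a task's allocated resource unit size based on monitored usage.

def adjust_unit_size(allocated, observed_peak, headroom=1.2, step=0.5):
    target = observed_peak * headroom
    if allocated > target:
        # Over-allocated: release part of the unused capacity gradually.
        return max(target, allocated * step)
    if allocated < observed_peak:
        # Under-allocated: grow to cover observed demand plus headroom.
        return target
    return allocated
```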
  • Patent number: 11561823
    Abstract: In general, the disclosure describes techniques for lockless management of immutable objects by multi-threaded processes. A device comprising a processor may implement the techniques, where the processor executes a multi-threaded process including a producer thread and a consumer thread. The producer thread may instantiate an immutable object, and provide, to the consumer thread, a reference to the immutable object. The producer thread may also increment a reference counter to indicate that the reference has been provided to the consumer thread, where the reference counter is local to the producer thread and inaccessible to the consumer thread. The producer thread may receive, from the consumer thread, a notification that the consumer thread has finished processing the immutable object, and decrement, responsive to receiving the notification, the reference counter. The producer thread may then delete, based on the reference counter, the immutable object.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: January 24, 2023
    Assignee: Juniper Networks, Inc.
    Inventors: Jaihari V. Loganathan, Ashutosh K. Grewal, Sanjay Khanna
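The lockless lifecycle this abstract describes hinges on the counter being touched only by the producer, so no atomic or lock is needed. The sketch below is a single-threaded model of that protocol with invented names; consumer "done" notifications are modeled as calls handled in the producer's context.

```python
# Producer-local reference counting for an immutable object: count each
# hand-off, decrement on each "done" notification, delete at zero.

class Producer:
    def __init__(self):
        self.refcount = {}   # per-object counter, local to the producer

    def publish(self, obj, consumers):
        # Hand a reference to each consumer, counting every hand-off.
        self.refcount[id(obj)] = len(consumers)
        for c in consumers:
            c.append(obj)     # stand-in for the reference delivery

    def on_done(self, obj, live_objects):
        # A consumer notified us it finished processing obj.
        key = id(obj)
        self.refcount[key] -= 1
        if self.refcount[key] == 0:
            del self.refcount[key]
            live_objects.discard(key)  # safe to delete the immutable object
```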
  • Patent number: 11556386
    Abstract: Resource allocation problems involve identification of resources, selection by certain criteria, and offering of resources to the requester. Identification of required resources may involve matching the type of resource, selecting based on user requirements and policy criteria, and offering the resource through an assignment system. An apparatus and a method are provided that enable identification and selection of resources. The method includes receiving a resource allocation request for the allocation of a resource, the resource allocation request specifying a set of user requirements. The method includes receiving an operator policy associated with the resource, the operator policy including one or more policy requirements. The method includes synthesizing a resource request based on the resource allocation request and the operator policy.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: January 17, 2023
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Mahesh Babu Jayaraman, Ganapathy Raman Madanagopal, Ashis Kumar Roy
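The synthesis step this abstract describes can be sketched as a merge of the user's requirements with the operator policy's requirements. The schema and the cap-wins merge rule below are assumptions for illustration.

```python
# Synthesize a resource request from an allocation request and an
# operator policy: policy caps override more permissive user values,
# and policy-only requirements are carried through.

def synthesize_request(allocation_request, operator_policy):
    merged = dict(allocation_request["user_requirements"])
    for key, policy_value in operator_policy.items():
        if key in merged:
            # Policy caps override more permissive user requirements.
            merged[key] = min(merged[key], policy_value)
        else:
            merged[key] = policy_value
    return {"resource_type": allocation_request["resource_type"],
            "requirements": merged}
```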