Patents Examined by Sisley N Kim
  • Patent number: 11372686
    Abstract: A service provider network may provide one or more global cloud services across multiple regions. A client may submit a request to create multiple replicas of a service resource in respective instantiations of a service in the multiple regions. The receiving region of the request may determine the capacities of the multiple regions as to serving respective replicas of the service resource. The receiving region may provide a response to the client based on the determined capacities of the regions.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: June 28, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Akshat Vig, Somasundaram Perianayagam, Arijit Choudhury, Oren Yossef, Shitanshu Aggarwal, Sharatkumar Nagesh Kuppahally, Yang Nan, Arturo Hinojosa, Mark Roper, Wen Han Albert Huang, Sudhir Konduru, Alexander Richard Keyes
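The receiving region's capacity check described in the abstract can be sketched as follows; this is a minimal illustration with invented region names and capacity units, not the patented implementation.

```python
# Hypothetical sketch: the receiving region checks each requested region's
# spare capacity before confirming a multi-region replica request.

def handle_replica_request(requested_regions, capacity_by_region, replica_size):
    """Return which regions can host a replica of the given size."""
    accepted, rejected = [], []
    for region in requested_regions:
        # A region can serve the replica only if it has enough spare capacity.
        if capacity_by_region.get(region, 0) >= replica_size:
            accepted.append(region)
        else:
            rejected.append(region)
    # The response to the client reflects the per-region determination.
    return {"accepted": accepted, "rejected": rejected}

capacities = {"us-east": 100, "eu-west": 30, "ap-south": 5}
response = handle_replica_request(["us-east", "eu-west", "ap-south"], capacities, 20)
```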
  • Patent number: 11366694
    Abstract: A computer-implemented method and a computer program product for estimating attributes of running workloads on platforms in a system of multiple platforms as a service. A computer receives definitions of respective workloads and respective platforms that are eligible to run a set of the respective workloads. The computer maps the respective workloads and the respective platforms to attributes of running the respective workloads on the respective platforms. The computer estimates the attributes and stores the attributes in a matrix. The computer updates the attributes in the matrix in response to a triggering event for modifying the matrix.
    Type: Grant
    Filed: December 6, 2020
    Date of Patent: June 21, 2022
    Assignee: International Business Machines Corporation
    Inventor: Lior Aronovich
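The workload-by-platform attribute matrix can be sketched roughly as below; the class name, the stand-in estimator, and the trigger handling are all invented for illustration.

```python
# Hypothetical sketch: a matrix of estimated attributes (e.g. cost or
# runtime) keyed by (workload, platform), re-estimated on a triggering event.

class AttributeMatrix:
    def __init__(self, workloads, platforms, estimate):
        # estimate(workload, platform) -> estimated attribute value
        self.estimate = estimate
        self.matrix = {(w, p): estimate(w, p) for w in workloads for p in platforms}

    def on_trigger(self, workload, platform):
        # Update one cell in response to a triggering event.
        self.matrix[(workload, platform)] = self.estimate(workload, platform)

est = lambda w, p: len(w) * len(p)   # stand-in estimator, not the patented one
m = AttributeMatrix(["batch", "web"], ["vm", "k8s"], est)
```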
  • Patent number: 11360945
    Abstract: Systems and methods for file transfer and processing in a network environment are disclosed. In one embodiment, the system may comprise one or more processors. The one or more processors may be coupled to a first device. The one or more processors may be configured to retrieve a file from a file queue. The file may be stored in a local store of the first device. The file may be transferred from a second remote device via Remote Direct Memory Access. The one or more processors may further be configured to determine whether the file is complete. The one or more processors may further be configured to remove the file from the file queue if the file is determined to be complete.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: June 14, 2022
    Assignee: UMBRA TECHNOLOGIES LTD.
    Inventor: Joseph E. Rubenstein
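The queue-draining step of the abstract can be sketched as below; the completeness check (comparing bytes received against an expected size) is an invented stand-in, and the RDMA transfer itself is out of scope.

```python
# Hypothetical sketch: process a file queue, removing a file only once it is
# complete in the local store; incomplete files stay queued.
from collections import deque

def drain_queue(file_queue, local_store, expected_sizes):
    still_pending = deque()
    while file_queue:
        name = file_queue.popleft()
        received = len(local_store.get(name, b""))
        if received >= expected_sizes[name]:
            continue                      # complete: removed from the queue
        still_pending.append(name)        # incomplete: keep it queued
    file_queue.extend(still_pending)
    return file_queue

q = deque(["a.bin", "b.bin"])
store = {"a.bin": b"\x00" * 4, "b.bin": b"\x00" * 2}
drain_queue(q, store, {"a.bin": 4, "b.bin": 8})
```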
  • Patent number: 11347550
    Abstract: Techniques described herein can optimize usage of computing resources in a data system. Dynamic throttling can be performed locally on a computing resource in the foreground and autoscaling can be performed in a centralized fashion in the background. Dynamic throttling can lower the load without overshooting while minimizing oscillation and reducing the throttle quickly. Autoscaling may involve scaling in or out the number of computing resources in a cluster as well as scaling up or down the type of computing resources to handle different types of situations.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: May 31, 2022
    Assignee: Snowflake Inc.
    Inventors: Johan Harjono, Daniel Geoffrey Karp, Kunal Prafulla Nabar, Rares Radut, Arthur Kelvin Shi
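The foreground throttling behavior described (lower the load without overshooting, release the throttle quickly) can be sketched with a simple update rule; the constants and target are illustrative, not from the patent.

```python
# Hypothetical sketch: raise the throttle proportionally to the excess load
# to avoid overshoot, and halve it quickly once load is back under target.

def update_throttle(throttle, load, target=0.8):
    """throttle in [0, 1): fraction of requests rejected."""
    if load > target:
        # Proportional back-off dampens oscillation.
        throttle = min(0.95, throttle + 0.5 * (load - target))
    else:
        # Reduce the throttle quickly when load recovers.
        throttle = max(0.0, throttle * 0.5)
    return throttle

t = update_throttle(0.0, 1.0)   # overloaded: throttle rises
high = t
t = update_throttle(t, 0.5)     # recovered: throttle halves
```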
  • Patent number: 11347549
    Abstract: A notification for an application stack is received, where the application stack includes a plurality of resource types. At least one policy associated with the notification is obtained, with the first policy being a policy for scaling a first resource of a first resource type and a second resource of a second resource type of the application stack. A first capacity for the first resource and a second capacity for the second resource is determined based at least in part on the at least one policy. The first resource and the second resource are caused to be scaled according to the first capacity and the second capacity respectively.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: May 31, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Kai Fan Tang, Ahmed Usman Khalid
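The single-policy, two-resource scaling in the abstract can be sketched as below; the policy fields and the proportional formula are invented for illustration.

```python
# Hypothetical sketch: one policy yields target capacities for two resource
# types of an application stack, clamped to the policy's allowed range.

def scale_stack(notification, policy):
    metric = notification["metric"]            # e.g. queue depth
    first = policy["first_per_unit"] * metric
    second = policy["second_per_unit"] * metric
    first = min(max(first, policy["min"]), policy["max"])
    second = min(max(second, policy["min"]), policy["max"])
    return {"first_capacity": first, "second_capacity": second}

caps = scale_stack({"metric": 10},
                   {"first_per_unit": 2, "second_per_unit": 1,
                    "min": 1, "max": 50})
```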
  • Patent number: 11347566
    Abstract: Methods and systems are provided for supporting operation of a plurality of software plugins of an IHS (Information Handling System). Incoming plugin commands are received and stored to a queue of a plurality of progressively weighted queues. The weighted queue is selected for storing the incoming plugin command based on a time constraint associated with the command. A proximate command is selected for processing from a queue of the plurality of weighted queues based on a weighted time for processing the proximate command. A recipient plugin of the proximate command is determined. Any plugin groups that the recipient plugin is a member of are identified. The plugins of the identified plugin group, including the recipient plugin, are activated, and use of IHS resources is allocated to the activated plugins.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: May 31, 2022
    Assignee: Dell Products, L.P.
    Inventors: Vivek Viswanathan Iyer, Srikanth Kondapi, Abhinav Gupta
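The weighted-queue mechanics of the abstract can be sketched as below; the deadline thresholds and queue weights are invented values.

```python
# Hypothetical sketch: commands land in a queue chosen by time constraint,
# and the next command is the one with the smallest weighted time.

QUEUE_WEIGHTS = [1.0, 2.0, 4.0]          # progressively weighted queues

def enqueue(queues, command, deadline_ms):
    # Tighter deadlines go to lower-weighted (higher-priority) queues.
    idx = 0 if deadline_ms < 10 else 1 if deadline_ms < 100 else 2
    queues[idx].append((deadline_ms, command))

def next_command(queues):
    best = min(
        ((QUEUE_WEIGHTS[i] * dl, cmd, i, j)
         for i, q in enumerate(queues)
         for j, (dl, cmd) in enumerate(q)),
        default=None,
    )
    if best is None:
        return None
    _, cmd, i, j = best
    queues[i].pop(j)
    return cmd

qs = [[], [], []]
enqueue(qs, "render", 5)       # tight deadline -> queue 0
enqueue(qs, "telemetry", 500)  # loose deadline -> queue 2
```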
  • Patent number: 11340958
    Abstract: Disclosed are various embodiments of real-time simulation of the performance of a compute accelerator workload for distributed resource scheduling. The compute accelerator workload is executed on candidate hosts to select a destination host. Efficiency metrics are determined for the candidate hosts based on the execution of the compute accelerator workload on the candidate hosts. A destination host is selected from the candidate hosts based on the efficiency metrics, and the compute accelerator workload can be assigned to execute on the selected destination host.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: May 24, 2022
    Assignee: VMWARE, INC.
    Inventor: Matthew D. McClure
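The selection step in the abstract can be sketched as below; the efficiency metric (throughput per unit power) is an invented stand-in for whatever metric the patent actually computes.

```python
# Hypothetical sketch: run (or simulate) the accelerator workload on each
# candidate host, score the run, and pick the most efficient host.

def pick_destination(candidates, run_workload):
    """run_workload(host) -> (throughput, power); higher ratio wins."""
    scores = {}
    for host in candidates:
        throughput, power = run_workload(host)
        scores[host] = throughput / power
    # The destination host is the candidate with the best efficiency metric.
    return max(scores, key=scores.get)

sim = {"hostA": (100, 50), "hostB": (90, 30)}
dest = pick_destination(["hostA", "hostB"], lambda h: sim[h])
```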
  • Patent number: 11340949
    Abstract: A method and a hardware acceleration managing node for managing a request for hardware acceleration (HA). The hardware acceleration managing node receives, from a HA interfacing node, the request for hardware acceleration of processing of source data. The hardware acceleration managing node sends an indication of a source memory location(s) for storing of the source data. The hardware acceleration managing node selects one or more hardware acceleration devices. The hardware acceleration managing node receives a chunk of code to be accelerated. The hardware acceleration managing node sends, to the selected hardware acceleration device(s), a set of acceleration instructions related to the chunk of code and the indication of the source memory location. The hardware acceleration managing node receives an indication of a result memory location indicating result data. The hardware acceleration managing node sends an indication of completed hardware acceleration to the HA interfacing node.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: May 24, 2022
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Chakri Padala, Mozhgan Mahloo, Joao Monteiro Soares, Nhi Vo
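The managing node's message sequence can be compressed into a single-process sketch; each step stands in for a message between nodes, and all names and the toy "accelerator" are invented.

```python
# Hypothetical sketch of the sequence: store source data at an indicated
# location, select a device, send instructions, record the result location.

def manage_acceleration(source_data, code_chunk, devices, accelerate):
    memory = {}
    src_loc = "src-0"
    memory[src_loc] = source_data            # indicated source memory location
    device = devices[0]                      # select a hardware accel device
    instructions = ("compile", code_chunk)   # acceleration instructions
    result_loc = "res-0"
    memory[result_loc] = accelerate(device, instructions, memory[src_loc])
    return {"status": "complete", "result_location": result_loc, "memory": memory}

out = manage_acceleration(
    [1, 2, 3], "sum", ["fpga-0"],
    lambda dev, ins, data: sum(data) if ins[1] == "sum" else None)
```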
  • Patent number: 11327804
    Abstract: The disclosed embodiments generally relate to methods, systems and apparatuses for directing Autonomous Driving (AD) vehicles. In one embodiment, an upcoming condition is assessed to determine the computational needs for addressing the condition. A performance value and latency requirement value are assigned to the upcoming condition. A database of available nodes in the network is then accessed to select an optimal node to conduct the required computation. The database may be configured to maintain real time information concerning performance and latency values for all available network nodes. In certain embodiments, all nodes are synchronized to maintain substantially the same database.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: May 10, 2022
    Assignee: INTEL CORPORATION
    Inventors: Hassnaa Moustafa, Soo Jin Tan, Valerie Parker
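The node-selection step can be sketched as below; the node database, performance figures, and tie-breaking rule (lowest latency among eligible nodes) are invented for illustration.

```python
# Hypothetical sketch: pick a node from the real-time database that meets
# the condition's performance and latency requirements.

def select_node(nodes, min_perf, max_latency_ms):
    """nodes: {name: (performance, latency_ms)}, kept current in real time."""
    eligible = [(lat, name) for name, (perf, lat) in nodes.items()
                if perf >= min_perf and lat <= max_latency_ms]
    if not eligible:
        return None                 # e.g. fall back to on-vehicle compute
    return min(eligible)[1]         # optimal node: lowest-latency eligible

db = {"edge-1": (10, 5), "edge-2": (30, 20), "cloud": (100, 80)}
node = select_node(db, min_perf=25, max_latency_ms=50)
```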
  • Patent number: 11314559
    Abstract: A cloud management method and a cloud management device are provided. The cloud management method determines whether a plurality of pods are overloaded, identifies the current resource usage states of a cluster and a node, determines a method of scaling resources of a specific pod that is overloaded from among the plurality of pods according to those resource usage states, and scales the resources of the specific pod according to the determined method. Accordingly, scaling that uniformly extends resources of a node and a pod in a cluster both horizontally and vertically can be performed automatically.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: April 26, 2022
    Assignee: Korea Electronics Technology Institute
    Inventors: Jae Hoon An, Young Hwan Kim
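The horizontal-versus-vertical decision can be sketched as below; the overload threshold and resource fields are invented, not from the patent.

```python
# Hypothetical sketch: scale an overloaded pod vertically if its node has
# headroom, otherwise horizontally if the cluster does.

def scale_decision(pod_load, node_free, cluster_free, pod_request):
    if pod_load <= 0.8:                  # not overloaded
        return "none"
    if node_free >= pod_request:         # node can grow this pod in place
        return "vertical"
    if cluster_free >= pod_request:      # another node can host a replica
        return "horizontal"
    return "reject"

decision = scale_decision(pod_load=0.95, node_free=0.1,
                          cluster_free=2.0, pod_request=0.5)
```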
  • Patent number: 11314678
    Abstract: A computer implemented method of providing connectivity between two or more primary assets, each primary asset comprising a hardware component or a software component, the method comprising receiving a request to provide connectivity between the primary assets; accessing a repository that stores a list of assets and metadata associated with each asset; using the metadata in the repository to generate a hierarchical list of assets; rationalising the hierarchical list of assets by selecting a group of assets from the hierarchical list of assets to be used in providing connectivity between the primary assets; and generating, based on operating parameters of each asset in the group of assets, a deployment plan for the group of assets, wherein the deployment plan defines settings and/or connections to be used between the assets in the group of assets in order to provide connectivity between the two or more primary assets.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: April 26, 2022
    Assignee: THALES HOLDINGS UK PLC
    Inventors: James Williams, Louis Simpson, Simon Morris
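One way to read the rationalisation step is as selecting a minimal connecting group of assets; the sketch below stands in for that with a breadth-first search over an invented link graph, and the "deployment plan" is reduced to the pairwise connections to configure.

```python
# Hypothetical sketch: select the smallest chain of assets connecting two
# primary assets, and emit the connections to configure between them.
from collections import deque

def plan_connectivity(a, b, links):
    """links: {asset: iterable of directly connectable assets}."""
    parents, frontier = {a: None}, deque([a])
    while frontier:
        asset = frontier.popleft()
        if asset == b:
            chain = []
            while asset is not None:
                chain.append(asset)
                asset = parents[asset]
            chain.reverse()
            # Deployment plan: consecutive connections along the chain.
            return list(zip(chain, chain[1:]))
        for nxt in links.get(asset, ()):
            if nxt not in parents:
                parents[nxt] = asset
                frontier.append(nxt)
    return None

links = {"app": ["router"], "router": ["radio"], "radio": ["sensor"]}
plan = plan_connectivity("app", "sensor", links)
```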
  • Patent number: 11307885
    Abstract: Techniques for a service provider network to generate suitability scores that indicate how well VM instance types are performing given the workloads they are running. Using these suitability scores, users are able to easily determine the suitability of VM instance types for supporting their workloads, and diagnose potential issues with the pairings of VM instance types and workloads, such as over-utilization and under-utilization of VM instances. Further, the techniques include training a model to determine VM instance types recommended for supporting workloads. The model may receive utilization data representing resource-usage characteristics of the workload as input, and be trained to output one or more recommended VM instance types that are optimized or suitable to host the workload. Thus, the service provider network may provide users with easily-digestible suitability scores indicating the suitability of VM instance types for workloads along with VM instance types recommended for their workloads.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: April 19, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Lorenzo Luciano, Peter William Beardshear, Imre Attila Kiss, Esther Kadosh
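A suitability score of the kind described (penalizing both over- and under-utilization) can be sketched with an invented formula; the patent's actual scoring and the trained model are not reproduced here.

```python
# Hypothetical sketch: score how well a VM instance type fits a workload by
# penalizing each resource's distance from a target utilization.

def suitability_score(utilization, target=0.6):
    """utilization: {resource: fraction in [0, 1]}; score near 1 is best."""
    penalties = [abs(u - target) for u in utilization.values()]
    return round(1.0 - sum(penalties) / len(penalties), 3)

score = suitability_score({"cpu": 0.6, "memory": 0.2})   # memory under-used
```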
  • Patent number: 11307895
    Abstract: Improved techniques for dynamically responding to a fluctuating workload. Resources are reactively scaled for memory-intensive applications and automatically adapted to in response to workload changes without requiring pre-specified thresholds. A miss ratio curve (MRC) is generated for an application based on application runtime statistics. This MRC is then modeled as a hyperbola. An area on the hyperbola is identified as satisfying a flatten threshold. A resource allocation threshold is then established based on the identified area. This resource allocation threshold indicates how many resources are to be provisioned for the application. The resources are scaled using a resource scaling policy that is based on the resource allocation threshold.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: April 19, 2022
    Assignee: UNIVERSITY OF UTAH RESEARCH FOUNDATION
    Inventors: Joe H. Novak, Sneha K. Kasera, Ryan Stutsman
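The hyperbola-flattening idea can be sketched numerically. Assuming the MRC is modeled as m(c) = a/c + b (the offset b drops out of the slope), the resource allocation threshold is the smallest cache size where the slope magnitude a/c² falls below the flatten threshold; all constants below are illustrative.

```python
# Hypothetical sketch: find where the hyperbolic miss-ratio curve flattens.

def allocation_threshold(a, flatten=0.01, step=1.0):
    """Smallest cache size c where |m'(c)| = a / c**2 < flatten."""
    c = step
    while a / c**2 >= flatten:
        c += step
    return c            # provision this many cache units for the application

c = allocation_threshold(a=100.0)
```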
  • Patent number: 11301301
    Abstract: Embodiments of the present disclosure relate to a method, system and computer program product for offloading a workload between computing environments. According to the method, a workload of a target function of a service provisioned in a first computing environment is determined. A processing capacity of the service available for the target function in the first computing environment is determined. In accordance with a determination that the workload exceeds the processing capacity, at least one incoming request for the target function is caused to be routed to a target instance of the target function, the target instance of the target function being provisioned in a second computing environment different from the first computing environment.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: April 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Gang Tang, Yue Wang, Liang Rong, Wen Tao Zhang
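The routing rule in the abstract reduces to splitting requests against the first environment's capacity; the sketch below is a minimal illustration with invented names.

```python
# Hypothetical sketch: requests beyond the local processing capacity are
# routed to a target instance in a second computing environment.

def route_requests(n_requests, local_capacity):
    local = min(n_requests, local_capacity)
    offloaded = n_requests - local     # routed to the second environment
    return {"local": local, "offloaded": offloaded}

routing = route_requests(n_requests=120, local_capacity=100)
```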
  • Patent number: 11294735
    Abstract: A desktop cloud virtual machine access method includes receiving, by a desktop cloud controller, an access request from a desktop cloud client, where a target virtual machine (VM) specified in the access request is deployed on a target computing node, and then, when determining that a target virtual access gateway (VAG) is deployed on the target computing node, instructing, by the desktop cloud controller, the desktop cloud client to establish communication with the target virtual machine using the target virtual access gateway.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: April 5, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Mingdeng Li
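The controller's routing decision can be sketched as below; the data shapes (VM-to-node map, VAG-per-node map) are invented for illustration.

```python
# Hypothetical sketch: if a virtual access gateway (VAG) is deployed on the
# same computing node as the requested VM, tell the client to use it.

def handle_access(request, vm_to_node, vags_on_node):
    node = vm_to_node[request["vm"]]
    if node in vags_on_node:
        return {"connect_via": vags_on_node[node], "vm": request["vm"]}
    return {"connect_via": None, "vm": request["vm"]}   # no VAG on that node

reply = handle_access({"vm": "vm-7"}, {"vm-7": "node-3"}, {"node-3": "vag-3"})
```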
  • Patent number: 11288096
    Abstract: One embodiment provides a computer implemented method of balancing mixed workload performance including monitoring the compression and decompression workload at a hardware accelerator using the hardware accelerator quality of service (QoS) scheduler; monitoring the compression and decompression workload at a CPU using the CPU QoS scheduler; comparing the workload at the hardware accelerator and the workload at the CPU; and allocating tasks between the hardware accelerator and the CPU to obtain an optimal bandwidth at the hardware accelerator and the CPU.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: March 29, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Rahul Ugale, Colin Zou
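The balancing rule can be sketched as below; the QoS schedulers are abstracted into two load counters, and the speedup normalization is an invented detail.

```python
# Hypothetical sketch: compare queued compression work at the accelerator
# and the CPU, and send each new task to whichever is effectively less busy.

def assign_task(task_cost, accel_load, cpu_load, accel_speedup=4.0):
    # Normalize accelerator load by its speedup so the queues are comparable.
    if accel_load / accel_speedup <= cpu_load:
        return "accelerator", accel_load + task_cost, cpu_load
    return "cpu", accel_load, cpu_load + task_cost

target, accel, cpu = assign_task(task_cost=10, accel_load=20, cpu_load=8)
```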
  • Patent number: 11288070
    Abstract: A method for optimization of low-level memory operations in a distributed memory storage configuration that includes receiving, at a first processor, a request to migrate data from the first processor to a second processor, where the first processor and the second processor comprise a processor and memory, and identifying a command instruction associated with the requested data. The method also includes comparing a first performance metric associated with the first processor to a second performance metric associated with the second processor, where the first performance metric and the second performance metric are associated with executing the command instruction, and where, based on the comparing, a decision to move the command instruction to the second processor is formed, and migrating, responsive to the decision, the data and the command instruction to the second processor.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: March 29, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: William F. Quinn, Anil Kalavakolanu, Douglas Griffith, Sreenivas Makineedi, Mathew Accapadi
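The comparison step can be sketched as below; the performance metric (executions per second of the associated command) and the move rule are invented stand-ins.

```python
# Hypothetical sketch: migrate the data as requested, and migrate the
# associated command instruction only if the second processor executes it
# at least as well as the first.

def plan_migration(command, perf_first, perf_second):
    """perf_*: performance metric for `command` on each processor."""
    migrate_command = perf_second >= perf_first
    return {
        "data_to": "second",
        "command_to": "second" if migrate_command else "first",
    }

plan = plan_migration("scan", perf_first=100, perf_second=250)
```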
  • Patent number: 11281504
    Abstract: Particular embodiments described herein provide for an electronic device that can be configured to receive a plurality of thermal parameters for a device, identify one or more of the plurality of thermal parameters that affect a thermal response of the device, and create a thermal vector for the device using the one or more of the plurality of thermal parameters that affect the thermal response of the device, where the thermal vector can be used to predict a new thermal response of the device. In an example, the thermal vector includes weighted thermal parameters.
    Type: Grant
    Filed: October 13, 2017
    Date of Patent: March 22, 2022
    Assignee: Intel Corporation
    Inventor: Paul J. Gwin
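A weighted thermal vector of the kind described can be sketched as a weighted sum over the parameters identified as affecting the response; the parameters and weights below are invented.

```python
# Hypothetical sketch: only parameters identified as affecting the thermal
# response carry weights; the prediction is their weighted sum.

def predict_thermal_response(params, weights):
    return sum(params[name] * w for name, w in weights.items())

params = {"cpu_load": 0.9, "ambient_c": 25.0, "fan_rpm": 2000.0}
weights = {"cpu_load": 30.0, "ambient_c": 1.0}   # fan_rpm: no identified effect
predicted_c = predict_thermal_response(params, weights)
```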
  • Patent number: 11269689
    Abstract: A determination is made of values for measures that affect processing of data in a computing environment comprising a client communicatively coupled to a server, wherein the measures include data factors, client factors, and server factors. A determination is made as to whether a load on the client is greater than the load on the server by calculating a load on the client and a load on the server based on the data factors, the client factors and the server factors, and then comparing the load on the client to the load on the server. In response to determining that the load on the client is greater than the load on the server, the data is stored at a location in the server, and an indication is made in a data structure in the client of a pointer to the location in the server.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: March 8, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew Paul Gellai, Sr., Terry Wade Niemeyer, Mark Allen Sistrunk, Jiandong Tang, Navin Manohar, Lori Christine Simcox
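The placement rule can be sketched as below; how the data, client, and server factors combine into a load figure is an invented detail.

```python
# Hypothetical sketch: compute each side's load from the factors, and when
# the client is busier, keep the data on the server with only a pointer
# recorded in the client's data structure.

def place_data(data_factor, client_factors, server_factors):
    client_load = data_factor * sum(client_factors)
    server_load = data_factor * sum(server_factors)
    if client_load > server_load:
        return {"data_at": "server", "client_keeps": "pointer"}
    return {"data_at": "client", "client_keeps": "data"}

placement = place_data(2.0, client_factors=[0.7, 0.8],
                       server_factors=[0.2, 0.3])
```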
  • Patent number: 11263055
    Abstract: Methods and systems are described for balancing loads in distributed computer networks for computer processing requests with variable rule sets and dynamic processing loads. The methods and systems may include determining an initial allocation of the plurality of processing requests to the plurality of available domains that has a lowest initial sum excess processing load. The methods and systems may then retrieve an updated estimated processing load for at least one of the plurality of processing requests and determine a secondary allocation of the plurality of processing requests to the plurality of available domains.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: March 1, 2022
    Assignee: The Bank of New York Mellon
    Inventor: Qun Deng
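The initial-allocation step can be sketched with a greedy heuristic that minimizes excess processing load per assignment; this is an invented stand-in for the patent's actual optimization, and the secondary reallocation would simply rerun it with updated estimates.

```python
# Hypothetical sketch: assign each request to the domain where it adds the
# least excess load over that domain's capacity.

def allocate(requests, capacities):
    """requests: {name: estimated load}; capacities: {domain: capacity}."""
    load = {d: 0.0 for d in capacities}
    assignment = {}
    for name, need in sorted(requests.items(), key=lambda kv: -kv[1]):
        # Excess = how far over capacity the domain would go with this request.
        domain = min(load, key=lambda d: max(0.0, load[d] + need - capacities[d]))
        assignment[name] = domain
        load[domain] += need
    return assignment

alloc = allocate({"r1": 5, "r2": 4}, {"A": 5, "B": 5})
```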