Patents Examined by Sisley N Kim
  • Patent number: 11693668
    Abstract: A parallel processing apparatus includes a plurality of compute nodes, and a job management device that allocates computational resources of the plurality of compute nodes to jobs, the job management device including circuitry configured to determine a resource search time range based on respective scheduled execution time periods of a plurality of jobs including a job being executed and a job waiting for execution, and search for free computational resources to be allocated to a job waiting for execution that is a processing target among the plurality of jobs, from among computational resources of the plurality of compute nodes within the resource search time range, by backfill scheduling.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: July 4, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Akitaka Iwata
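A minimal sketch of the backfill idea the abstract describes: node reservations define busy intervals, and a waiting job is slotted into the earliest window inside a bounded search range where enough nodes stay free for the job's whole duration. All names and the interval representation are illustrative, not taken from the patent.

```python
# Toy backfill scheduler: find the earliest start where a waiting job fits
# into free node capacity without disturbing existing reservations.

def free_nodes_at(reservations, t):
    """Count nodes not reserved at time t (reservations: node -> [(start, end)])."""
    return sum(1 for node, ivals in reservations.items()
               if all(not (s <= t < e) for s, e in ivals))

def backfill(reservations, duration, nodes_needed, search_end):
    """Return the earliest start in [0, search_end) where `nodes_needed`
    nodes stay free for `duration` ticks, or None if no slot exists."""
    for start in range(0, search_end - duration + 1):
        if all(free_nodes_at(reservations, t) >= nodes_needed
               for t in range(start, start + duration)):
            return start
    return None

reservations = {
    "node0": [(0, 4)],   # busy until t=4
    "node1": [(2, 6)],   # busy t=2..6
    "node2": [],         # always free
}
slot = backfill(reservations, duration=2, nodes_needed=2, search_end=10)
```

The `search_end` parameter plays the role of the resource search time range: slots beyond it are never examined, which bounds the cost of the scan.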
  • Patent number: 11681665
    Abstract: Systems and methods for file transfer and processing in a network environment are disclosed. In one embodiment, the system may comprise one or more processors. The one or more processors may be coupled to a first device. The one or more processors may be configured to retrieve a file from a file queue. The file may be stored in a local store of the first device. The file may be transferred from a second remote device via Remote Direct Memory Access. The one or more processors may further be configured to determine if the file is complete. The one or more processors may further be configured to remove the file from the file queue, if the file is determined to be complete.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: June 20, 2023
    Assignee: UMBRA TECHNOLOGIES LTD.
    Inventor: Joseph E. Rubenstein
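The retrieve-check-remove loop in the abstract can be sketched as a queue drain: each file is popped, kept in the queue if its transfer (e.g. via RDMA into a local store) is still partial, and removed only once its full length has arrived. All names and the size-based completeness check are illustrative.

```python
from collections import deque

def drain(queue, local_store, expected_sizes):
    """Remove complete files from the queue; re-queue incomplete ones."""
    completed = []
    for _ in range(len(queue)):
        name = queue.popleft()
        if len(local_store.get(name, b"")) >= expected_sizes[name]:
            completed.append(name)   # fully transferred: drop from the queue
        else:
            queue.append(name)       # partial transfer: check again later
    return completed

queue = deque(["a.bin", "b.bin"])
local_store = {"a.bin": b"\x00" * 10, "b.bin": b"\x00" * 3}
expected = {"a.bin": 10, "b.bin": 8}
done = drain(queue, local_store, expected)
```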
  • Patent number: 11675613
    Abstract: Techniques and mechanisms provide a flexible mapping for physical functions and virtual functions in an environment including virtual machines.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: June 13, 2023
    Assignee: Altera Corporation
    Inventors: Jiefan Zhang, Abdel Hafiz Rabi, Allen Chen, Mark Jonathan Lewis
  • Patent number: 11669375
    Abstract: A multi-tenant load balancing system that includes an artificial intelligence (AI) based algorithm to dynamically route requests from one or more channels to the agent best suited to process each request. The AI based algorithm routes the request based on the company's business goals, agent attributes, and channel attributes. The AI based algorithm also predicts agent availability.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: June 6, 2023
    Assignee: Freshworks Inc.
    Inventors: Karthikeyan Marudhachalam, Rohit Agarwal, Hariharan Ganapathiraman, Abinaya K. Sarathi
  • Patent number: 11669373
    Abstract: A system for finding and identifying computer nodes in a network includes a network having multiple computer nodes and a planning module. The computer nodes are connected to one another by communication connections and are configured to perform a workload of one or more software application(s). The planning module includes at least one probe having a test code and is configured to send the probe with the test code to the computer nodes to test the properties of the computer nodes with respect to their ability to perform a specific workload of at least one software application. The planning module is configured to take the test results as a basis for selecting one or more computer nodes for performing the workload of at least one software application, and to start the performance of the workload of the at least one software application on the selected computer node.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: June 6, 2023
    Assignee: Siemens Aktiengesellschaft
    Inventor: Ludwig Andreas Mittermeier
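The probe-and-select flow above can be sketched as a scoring pass: a probe function tests each node's properties against the workload's requirements, unfit nodes are filtered out, and the workload is started on the best-scoring survivor. The property names and scoring rule here are hypothetical.

```python
def probe(node_props, requirements):
    """Score a node's fitness for a workload; None means unfit."""
    if node_props["mem_gb"] < requirements["mem_gb"]:
        return None
    return node_props["cpu_cores"]  # prefer more cores among fit nodes

def select_node(nodes, requirements):
    scores = {name: probe(props, requirements) for name, props in nodes.items()}
    fit = {n: s for n, s in scores.items() if s is not None}
    return max(fit, key=fit.get) if fit else None

nodes = {
    "edge-1": {"cpu_cores": 4, "mem_gb": 8},
    "edge-2": {"cpu_cores": 16, "mem_gb": 4},   # too little memory: filtered out
    "rack-1": {"cpu_cores": 8, "mem_gb": 32},
}
chosen = select_node(nodes, {"mem_gb": 8})
```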
  • Patent number: 11663491
    Abstract: An allocation system for machine learning, comprising a terminal server and a cloud server. The terminal server is used for: acquiring demand information; generating a control instruction according to the demand information, wherein the control instruction comprises a terminal control instruction and a cloud control instruction; parsing the terminal control instruction to obtain a terminal control signal; and calculating a terminal workload of a machine learning algorithm of each stage according to the terminal control signal to obtain a terminal computation result. The cloud server is used for parsing the cloud control instruction to obtain a cloud control signal, and calculating a cloud workload of the machine learning algorithm of each stage according to the cloud control signal to obtain a cloud computation result. The terminal computation result and the cloud computation result together compose an output result.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: May 30, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Xiaofu Meng, Yongzhe Sun, Zidong Du
  • Patent number: 11656918
    Abstract: A production cluster executes a workload, such that jobs associated with the executed workload are allocated, according to a first configuration. A cluster monitor extracts production cluster information from the production cluster, monitors configuration information during execution of the workload, and transmits each to a cluster tuner. The cluster tuner receives the information and determines a first recommended configuration for the production cluster. The cluster tuner causes a test cluster to execute a simulated workload according to the first recommended configuration. In response to determining that the first recommended configuration results in a decrease in resource consumption, the cluster tuner causes the production cluster to operate according to the first recommended configuration.
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: May 23, 2023
    Assignee: Bank of America Corporation
    Inventor: Anirudh Kumar Sharma
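The tune-test-promote loop described above reduces to: simulate the recommended configuration against the workload, and promote it to production only if it consumes fewer resources than the current one. The cost model below is a made-up stand-in for whatever the test cluster actually measures.

```python
def simulate(config, workload):
    """Toy cost model: resource use scales with workload over parallelism."""
    return workload / config["parallelism"] + config["overhead"]

def maybe_promote(current, recommended, workload):
    """Adopt the recommendation only if its simulated cost is lower."""
    baseline = simulate(current, workload)
    candidate = simulate(recommended, workload)
    return recommended if candidate < baseline else current

current = {"parallelism": 4, "overhead": 2.0}
recommended = {"parallelism": 8, "overhead": 2.5}
active = maybe_promote(current, recommended, workload=100.0)
```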
  • Patent number: 11656919
    Abstract: Disclosed are various embodiments of real-time simulation of the performance of a compute accelerator workload for distributed resource scheduling. A compute kernel of a compute accelerator workload is augmented to include instructions that increment an execution counter at artificial halting points. Execution of the compute accelerator workload is suspended at an artificial halting point. The compute accelerator workload is executed on a plurality of candidate hosts and a performance counter is incremented during the execution of the compute accelerator workload on the various hosts. The compute accelerator workload is migrated to a destination host selected using an efficiency metric that is identified using the performance counter.
    Type: Grant
    Filed: April 30, 2022
    Date of Patent: May 23, 2023
    Assignee: VMWARE, INC.
    Inventor: Matthew D. McClure
  • Patent number: 11650852
    Abstract: Techniques are disclosed for dynamically adjusting a throttling threshold in a multi-tenant virtualized computing environment. System health parameters are collected during a predetermined time interval. A system health status of the multi-tenant virtualized computing environment is determined. Based on the system health status, a throttling threshold for service requests for the multi-tenant virtualized computing environment is determined. The throttling threshold is applied for further service requests. During a subsequent time interval, an updated system health status of the multi-tenant virtualized computing environment is determined based on system health parameters received during the subsequent time interval. The throttling threshold is updated based on the updated system health status. The updated throttling threshold is applied for further service requests.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: May 16, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Deepak Bansal, Vaibhav Kumar, Xin Yan
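The interval-based adjustment above can be sketched as: health samples from one window are reduced to a status, and that status selects the throttling threshold applied to requests in the next window. The status cutoffs and threshold values are illustrative, not from the patent.

```python
def health_status(cpu_samples):
    """Reduce one interval's CPU samples to a coarse health status."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > 0.9:
        return "critical"
    if avg > 0.7:
        return "degraded"
    return "healthy"

THRESHOLDS = {"healthy": 1000, "degraded": 400, "critical": 100}  # req/s caps

def next_threshold(cpu_samples):
    """Threshold to apply to service requests in the next interval."""
    return THRESHOLDS[health_status(cpu_samples)]

t1 = next_threshold([0.5, 0.6, 0.55])   # healthy interval: generous cap
t2 = next_threshold([0.95, 0.92, 0.9])  # overloaded interval: tight cap
```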
  • Patent number: 11651251
    Abstract: Methods and systems for recommending one or more computing devices for accessing one or more applications are described herein. Resource requirements may be determined for at least one application. Such resource requirements may be, e.g., a display resolution. Computing device attributes may be determined for computing devices capable of executing the application. The resource requirements and/or the computing device attributes may be normalized and/or modified based on machine learning techniques. The machine learning techniques may modify the application resource requirements and/or computing device attributes based on user feedback. Distances between the resource requirements and the computing device attributes may be determined. A recommendation to use a particular preferred computing device may be transmitted based on the distance comparison. The recommendation may be based on the minimum or maximum distance calculated. User feedback regarding the recommendation may be received.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: May 16, 2023
    Inventors: Xiaolu Chu, Tie Liu, Jie Zhuang, Zongpeng Qiao
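The distance comparison in the abstract can be sketched directly: application requirements and device attributes are normalized to [0, 1] per dimension, and the device with the smallest Euclidean distance to the requirement vector is recommended. The dimensions and ranges below are hypothetical.

```python
import math

def normalize(vec, mins, maxs):
    """Scale each dimension into [0, 1] using its known min/max range."""
    return [(v - lo) / (hi - lo) for v, lo, hi in zip(vec, mins, maxs)]

def recommend(requirements, devices, mins, maxs):
    """Recommend the device whose normalized attributes lie closest to the
    normalized application requirements."""
    req_n = normalize(requirements, mins, maxs)
    def dist(attrs):
        return math.dist(req_n, normalize(attrs, mins, maxs))
    return min(devices, key=lambda name: dist(devices[name]))

# dimensions: (display height in px, RAM in GB)
mins, maxs = (720, 2), (2160, 32)
devices = {"laptop": (1080, 16), "tablet": (1440, 8), "phone": (720, 4)}
best = recommend((1080, 16), devices, mins, maxs)
```

User feedback would then adjust the requirement vector or per-dimension weights over time; that learning step is omitted here.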
  • Patent number: 11645119
    Abstract: Embodiments of the present disclosure provide methods, apparatus, systems, computing devices, computing entities, and/or the like for optimized resource transformation given a set of resource optimization parameters.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: May 9, 2023
    Assignee: OPTUM SERVICES (IRELAND) LIMITED
    Inventors: Vicente Ruben Del Pino Ruiz, Hendrik Kleine
  • Patent number: 11630686
    Abstract: Novel tools and techniques are provided for implementing virtual machine (“VM”) management, and, more particularly, to methods, systems, and apparatuses for implementing VM management using hardware compression. In various embodiments, a computing system might identify one or more first virtual machines (“VMs”) among a plurality of VMs that are determined to be currently inactive and might identify one or more second VMs among the plurality of VMs that are determined to be currently active. The computing system might compress a virtual hard drive associated with each of the identified one or more first VMs that are determined to be currently inactive. The computing system might also perform or continue to perform one or more operations using each of the identified one or more second VMs that are determined to be currently active.
    Type: Grant
    Filed: September 12, 2022
    Date of Patent: April 18, 2023
    Assignee: CenturyLink Intellectual Property LLC
    Inventor: Ronald A. Lewis
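A toy pass over a VM inventory illustrating the split described above: disks of inactive VMs are compressed while active VMs keep running untouched. `zlib` stands in for whatever hardware compression the platform actually provides, and the inventory layout is invented.

```python
import zlib

def sweep(vms):
    """vms: name -> {"active": bool, "disk": bytes}. Compress inactive disks."""
    for vm in vms.values():
        if not vm["active"] and not vm.get("compressed"):
            vm["disk"] = zlib.compress(vm["disk"])
            vm["compressed"] = True   # remember to decompress before reactivation
    return vms

vms = {
    "web-1":   {"active": True,  "disk": b"A" * 4096},
    "batch-7": {"active": False, "disk": b"B" * 4096},
}
sweep(vms)
```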
  • Patent number: 11625263
    Abstract: In one embodiment, a method for improved management of virtual machine clusters may include: determining a current utilization value for each of a plurality of virtual machines (VMs) in a cluster, the VMs associated with a plurality of applications; storing the current utilization values for each of the plurality of VMs in a utilization table; determining that a capacity threshold for the cluster has not been reached based on an aggregation of the current utilization values for the plurality of VMs; provisioning a new VM into the cluster; storing a default utilization value for the new VM in the utilization table; and re-determining the capacity threshold based on the aggregated stored current utilization values for the plurality of VMs and the stored default utilization value for the new VM until a maturity threshold for the new VM is reached.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: April 11, 2023
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Tommi Salli, Kirk A. Frey, David J. Sullivan
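The provisioning flow above can be sketched as: a utilization table holds a measured value per VM, a new VM is admitted only while the aggregate stays under the cluster's capacity threshold, and the new VM contributes a default value until enough samples exist for it to "mature". The numbers and sample count are illustrative.

```python
DEFAULT_UTIL = 0.30   # assumed utilization for a VM with no history yet
CAPACITY = 4.0        # aggregate utilization the cluster can absorb

def aggregate(table):
    return sum(row["util"] for row in table.values())

def try_provision(table, name):
    """Admit a new VM with the default utilization if capacity allows."""
    if aggregate(table) + DEFAULT_UTIL > CAPACITY:
        return False
    table[name] = {"util": DEFAULT_UTIL, "mature": False}
    return True

def mature(table, name, samples, samples_needed=3):
    """Replace the default with measured utilization once enough samples exist."""
    if len(samples) >= samples_needed:
        table[name] = {"util": sum(samples) / len(samples), "mature": True}

table = {"vm-a": {"util": 1.5, "mature": True}, "vm-b": {"util": 2.0, "mature": True}}
admitted = try_provision(table, "vm-c")   # 3.5 + 0.3 <= 4.0 -> admitted
```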
  • Patent number: 11625279
    Abstract: In general, an application executes on a compute unit, such as a central processing unit (CPU) or graphics processing unit (GPU), to perform some function(s). In some circumstances, improved performance of an application, such as a graphics application, may be provided by executing the application across multiple compute units. However, when using multiple compute units in this manner, synchronization must be provided between the compute units. Synchronization, including the sharing of the data, is typically accomplished through memory. While a shared memory may cause bottlenecks, employing local memory for each compute unit may itself require synchronization (coherence) which can be costly in terms of resources, delay, etc. The present disclosure provides read-write page replication for multiple compute units that avoids the traditional challenges associated with coherence.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: April 11, 2023
    Assignee: NVIDIA CORPORATION
    Inventors: Daniel Lustig, Oreste Villa, David Nellans
  • Patent number: 11599389
    Abstract: Techniques described herein can optimize usage of computing resources in a data system. Dynamic throttling can be performed locally on a computing resource in the foreground and autoscaling can be performed in a centralized fashion in the background. Dynamic throttling can lower the load without overshooting while minimizing oscillation and reducing the throttle quickly. Autoscaling may involve scaling in or out the number of computing resources in a cluster as well as scaling up or down the type of computing resources to handle different types of situations.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: March 7, 2023
    Assignee: Snowflake Inc.
    Inventors: Johan Harjono, Daniel Geoffrey Karp, Kunal Prafulla Nabar, Rares Radut, Arthur Kelvin Shi
  • Patent number: 11586478
    Abstract: Systems, computer-implemented methods and/or computer program products that facilitate management of resources are provided. In one embodiment, a computer-implemented method comprises: employing, by a system operatively coupled to a processor, at least one model to predict respective token needs by a set of processing elements during execution of a workload; and exchanging, by the system, one or more tokens between a subset of the processing elements as a function of the predicted token needs.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: February 21, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Augusto Vega, Alper Buyuktosunoglu, Pradip Bose, Vaidyanathan Srinivasan, Ranjal Gautham Shenoy
  • Patent number: 11586472
    Abstract: A method, system, and apparatus determines that one or more tasks should be relocated from a first processor to a second processor by comparing performance metrics to associated thresholds or by using other indications. To relocate the one or more tasks from the first processor to the second processor, the first processor is stalled and state information from the first processor is copied to the second processor. The second processor uses the state information and then services incoming tasks instead of the first processor.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: February 21, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexander J. Branover, Benjamin Tsien, Elliot H. Mednick
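The relocation decision above can be sketched as a threshold check followed by a stall-copy-redirect step: when a processor's metrics cross their limits, it is stalled, its state is copied to the target, and new tasks go to the target instead. The metric names, limits, and state fields are all made up for illustration.

```python
THRESHOLDS = {"temp_c": 85, "queue_depth": 64}

def should_relocate(metrics):
    """Relocate when any performance metric exceeds its associated threshold."""
    return any(metrics[k] > limit for k, limit in THRESHOLDS.items())

def relocate(src, dst):
    """Stall the source, copy its state, and redirect new tasks to dst."""
    src["stalled"] = True
    src["serving"] = False
    dst["state"] = dict(src["state"])   # copied task/architectural state
    dst["serving"] = True
    return dst

p0 = {"state": {"pending": 3}, "stalled": False, "serving": True}
p1 = {"state": {}, "stalled": False, "serving": False}
if should_relocate({"temp_c": 91, "queue_depth": 10}):
    relocate(p0, p1)
```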
  • Patent number: 11586471
    Abstract: A computer system is constituted by a plurality of physical computers including a first physical computer and a second physical computer. One or more application instances that perform an application service and a storage service instance that provides a storage service including a volume used by the application instance operate on the first physical computer. The computer system predicts a future resource usage status of the first physical computer, creates a plan to move the one or more application instances operating on the first physical computer to the second physical computer based on the predicted future resource usage status, and executes the created plan.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: February 21, 2023
    Assignee: Hitachi, Ltd.
    Inventors: Takaki Nakamura, Hitoshi Kamei, Yuki Sakashita, Yoshinori Ohira, Masakuni Agetsuma
  • Patent number: 11579944
    Abstract: In one embodiment, a processor includes: a plurality of cores each comprising a multi-threaded core to concurrently execute a plurality of threads; and a control circuit to concurrently enable at least one of the plurality of cores to operate in a single-threaded mode and at least one other of the plurality of cores to operate in a multi-threaded mode. Other embodiments are described and claimed.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: February 14, 2023
    Assignee: Intel Corporation
    Inventors: Daniel J. Ragland, Guy M. Therien, Ankush Varma, Eric J. DeHaemer, David T. Mayo, Ariel Gur, Yoav Ben-Raphael, Mark P. Seconi
  • Patent number: 11579937
    Abstract: A data model characterizing a plurality of resources is received. The data model associates a first resource within a first remote computing environment with a first tag and a second resource within a second remote computing environment with a second tag. The data model is received from a database that is separate from the first remote computing environment and the second remote computing environment. The plurality of resources is grouped based on the first tag and the second tag. The grouping can form a first group associated with the first tag and a second group associated with the second tag. A first list of resources characterizing the first group and a second list of resources characterizing the second group is provided. Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: February 14, 2023
    Assignee: Citrix Systems, Inc.
    Inventors: Sindy Giraldo, Sai Varun Prasanth Soundararajan
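The grouping step in the abstract reduces to bucketing tagged resource records, drawn from different remote environments, into per-tag lists. The record fields below are illustrative.

```python
from collections import defaultdict

def group_by_tag(resources):
    """Form one group per tag, regardless of which environment a resource is in."""
    groups = defaultdict(list)
    for res in resources:
        groups[res["tag"]].append(res["name"])
    return dict(groups)

resources = [
    {"name": "vm-east-1", "env": "cloud-a", "tag": "prod"},
    {"name": "db-west-2", "env": "cloud-b", "tag": "prod"},
    {"name": "vm-dev-9",  "env": "cloud-a", "tag": "dev"},
]
groups = group_by_tag(resources)
```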