Load Balancing Patents (Class 718/105)
  • Patent number: 11513861
    Abstract: Disclosed is a computer-implemented method to manage queue overlap in storage systems, the method comprising identifying, by a storage system, a plurality of queues including a first queue and a second queue. The storage system includes a plurality of cores, including a first core and a second core, and wherein the first queue is associated with a first host and the second queue is associated with a second host. The method also comprises determining the first queue and the second queue are being processed by the first core. The method further comprises monitoring the workload of each core and identifying a load imbalance, wherein the load imbalance is a difference between a first workload associated with the first core and a second workload associated with the second core. The method also comprises notifying the second host that the load imbalance is present.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: November 29, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ankur Srivastava, Kushal Patel, Sarvesh S. Patel, Subhojit Roy
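The imbalance check this abstract describes can be illustrated with a minimal sketch; the function names, the per-queue workload numbers, and the threshold are all hypothetical, not from the patent.

```python
# Hypothetical sketch: queues map to cores, each queue carries a measured
# workload, and an imbalance is the workload difference between two cores.

def core_workloads(queue_to_core, queue_workload):
    """Sum per-queue workloads onto the core each queue is assigned to."""
    loads = {}
    for queue, core in queue_to_core.items():
        loads[core] = loads.get(core, 0) + queue_workload[queue]
    return loads

def find_imbalance(loads, threshold):
    """Return (busy_core, idle_core, delta) if the spread exceeds threshold."""
    busy = max(loads, key=loads.get)
    idle = min(loads, key=loads.get)
    delta = loads[busy] - loads[idle]
    return (busy, idle, delta) if delta > threshold else None
```

In this sketch, the tuple returned by `find_imbalance` is what a storage system might include in the notification it sends to the affected host.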
  • Patent number: 11507434
    Abstract: Methods and systems are provided for the deployment of machine learning based processes to public clouds. For example, a method for deploying a machine learning based process may include developing and training the machine learning based process to perform an activity, performing at least one of identifying and receiving an identification of a set of one or more public clouds that comply with a set of regulatory criteria used to regulate the activity, selecting a first public cloud of the set of one or more public clouds that complies with the set of regulatory criteria used to regulate the activity, and deploying the machine learning based process to the first public cloud of the set of one or more public clouds.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: November 22, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Sagar Ratnakara Nikam, Mayuri Ravindra Joshi, Raj Narayan Marndi
  • Patent number: 11507381
    Abstract: Closed loop performance controllers of asymmetric multiprocessor systems may be configured and operated to improve performance and power efficiency of such systems by adjusting control effort parameters that determine the dynamic voltage and frequency state of the processors and coprocessors of the system in response to the workload. One example of such an arrangement includes applying hysteresis to the control effort parameter and/or seeding the control effort parameter so that the processor or coprocessor receives a returning workload in a higher performance state. Another example of such an arrangement includes deadline driven control, in which the control effort parameter for one or more processing agents may be increased in response to deadlines not being met for a workload and/or decreased in response to deadlines being met too far in advance. The performance increase/decrease may be determined by comparison of various performance metrics for each of the processing agents.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: November 22, 2022
    Assignee: Apple Inc.
    Inventors: Aditya Venkataraman, Bryan R. Hinch, John G. Dorsey
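The deadline-driven control described above can be sketched as a simple feedback rule; the function, the step size, and the slack/margin parameters are illustrative assumptions, not the patented controller.

```python
# Hypothetical sketch of deadline-driven control-effort adjustment: raise
# the control effort (and hence the DVFS state) when a deadline is missed,
# lower it when work finishes too far ahead of its deadline.

def adjust_control_effort(effort, slack, early_margin, step=0.1):
    """slack < 0 means the deadline was missed; slack > early_margin
    means the work completed too far in advance. Result is clamped to
    the [0, 1] control-effort range."""
    if slack < 0:
        effort += step
    elif slack > early_margin:
        effort -= step
    return min(1.0, max(0.0, effort))
```

Slack inside the dead band leaves the control effort unchanged, which plays the role of the hysteresis mentioned in the abstract.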
  • Patent number: 11487760
    Abstract: Disclosed aspects relate to query plan management associated with a shared pool of configurable computing resources. A query, which relates to a set of data located on the shared pool of configurable computing resources, is detected. A virtual machine includes the set of data. With respect to the virtual machine, a set of burden values of performing a set of asset actions is determined. Based on the set of burden values, a query plan to access the set of data is established. Using at least one asset action of the set of asset actions, the query plan is processed.
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: November 1, 2022
    Assignee: International Business Machines Corporation
    Inventors: Rafal P. Konik, Roger A. Mittelstadt, Brian R. Muras
  • Patent number: 11481020
    Abstract: In certain embodiments, an electronic device comprises a temperature sensor and a processor, wherein the processor is configured to: detect that a temperature of the electronic device exceeds a predetermined temperature; and, when the temperature exceeds the predetermined temperature, drive at least one process satisfying a predetermined condition for a proportion of time periods and not drive the at least one process during the remaining time periods.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: October 25, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungyong Bang, Hyunjin Noh, Byungsoo Kwon, Jongwoo Kim, Sangmin Lee, Hakryoul Kim, Mooyoung Kim
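The duty-cycle throttling this abstract describes can be sketched as a per-period run/skip schedule; the function name, the temperature limit, and the period layout are hypothetical.

```python
# Hypothetical sketch: above a temperature limit, a matching process runs
# in only a fraction of the time periods and is idle for the rest.

def run_schedule(temperature, limit, proportion, n_periods):
    """Return one run/skip flag per period; run every period below the limit."""
    if temperature <= limit:
        return [True] * n_periods
    run_every = max(1, round(1 / proportion))
    return [i % run_every == 0 for i in range(n_periods)]
```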
  • Patent number: 11467951
    Abstract: An embodiment of the present invention is directed to a Mainframe CI/CD design solution and pattern that provides a complete end-to-end process for Mainframe applications. This enables faster time to market by performing critical SDLC processes, including build, test, scan, and deployment, in an automated fashion on a regular basis. An embodiment of the present invention is directed to a CI/CD approach that spans from receiving requirements to final deployment. For any new application onboarding, teams may implement the CI/CD approach, which may be customized per the requirements of each LOB/Application.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: October 11, 2022
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Vinish Pillai, Monish Pingle, Ashwin Sudhakar Shetty, Dharmesh Mohanlal Jain
  • Patent number: 11461133
    Abstract: Embodiments of the present disclosure relate to a method for managing backup jobs, an electronic device, and a computer program product. The method includes: determining expected execution durations of a group of to-be-executed backup jobs; dividing the group of to-be-executed backup jobs into a plurality of backup job subsets based on the expected execution durations, wherein a difference between the expected execution durations of every two backup jobs in each backup job subset does not exceed a predetermined threshold duration; and adjusting an execution plan of the group of to-be-executed backup jobs to cause the backup jobs in at least one backup job subset in the plurality of backup job subsets to simultaneously begin to be executed.
    Type: Grant
    Filed: May 31, 2020
    Date of Patent: October 4, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Min Liu, Ming Zhang, Ren Wang, Xiaoliang Zhu, Jing Yu
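The grouping step this abstract describes can be sketched as a sort-and-cut pass; the function name and threshold value are hypothetical, and real execution-duration estimation is out of scope here.

```python
# Hypothetical sketch: sort jobs by expected duration, then cut a new
# subset whenever the gap to the subset's first job exceeds the threshold,
# so the jobs in each subset can begin execution simultaneously.

def group_backup_jobs(durations, threshold):
    """Partition expected durations into subsets whose internal spread
    (max - min within a subset) stays within `threshold`."""
    groups, current = [], []
    for d in sorted(durations):
        if current and d - current[0] > threshold:
            groups.append(current)
            current = []
        current.append(d)
    if current:
        groups.append(current)
    return groups
```

Because the input is sorted, checking each duration against the first element of the current subset is enough to bound every pairwise difference in that subset.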
  • Patent number: 11461622
    Abstract: Embodiments include techniques for enabling execution of N inferences on an execution engine of a neural network device. Instruction code for a single inference is stored in a memory that is accessible by a DMA engine, the instruction code forming a regular code block. A NOP code block and a reset code block for resetting an instruction DMA queue are stored in the memory. The instruction DMA queue is generated such that, when it is executed by the DMA engine, it causes the DMA engine to copy, for each of N inferences, both the regular code block and an additional code block to an instruction buffer. The additional code block is the NOP code block for the first N-1 inferences and is the reset code block for the Nth inference. When the reset code block is executed by the execution engine, the instruction DMA queue is reset.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: October 4, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Samuel Jacob, Ilya Minkin, Mohammad El-Shabani
  • Patent number: 11455024
    Abstract: Systems and methods for improving idle time estimation by a process scheduler are disclosed. An example method comprises calculating, by a process scheduler operating in a kernel space of a computing system, an estimated idle time for a processing core; responsive to detecting a transition of the processing core from an idle state to an active state, recording an actual idle time of the processing core; and making the estimated idle time and the actual idle time available to a user space process.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: September 27, 2022
    Assignee: Red Hat, Inc.
    Inventor: Michael S. Tsirkin
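The estimate-versus-actual bookkeeping this abstract describes can be sketched with a small tracker; the class and method names are hypothetical, and timestamps are passed explicitly to keep the sketch deterministic.

```python
# Hypothetical sketch: on each idle-to-active transition, pair the
# scheduler's idle-time estimate with the idle time actually observed,
# so a user-space consumer can compare the two.

class IdleTracker:
    def __init__(self):
        self.samples = []            # (estimated, actual) pairs
        self._idle_since = None
        self._estimate = None

    def enter_idle(self, estimated_idle, now):
        """Record the estimate when the core goes idle."""
        self._idle_since, self._estimate = now, estimated_idle

    def enter_active(self, now):
        """On wake-up, record the actual idle time and return it."""
        actual = now - self._idle_since
        self.samples.append((self._estimate, actual))
        return actual
```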
  • Patent number: 11429424
    Abstract: A method of selectively assigning virtual CPUs (vCPUs) of a virtual machine (VM) to physical CPUs (pCPUs), where execution of the VM is supported by a hypervisor running on a hardware platform including the pCPUs, includes determining that a first vCPU of the vCPUs is scheduled to execute a latency-sensitive workload of the VM and a second vCPU of the vCPUs is scheduled to execute a non-latency-sensitive workload of the VM and assigning the first vCPU to a first pCPU of the pCPUs and the second vCPU to a second pCPU of the pCPUs. A kernel component of the hypervisor pins the assignment of the first vCPU to the first pCPU and does not pin the assignment of the second vCPU to the second pCPU. The method further comprises selectively tagging or not tagging by a user or an automated tool, a plurality of workloads of the VM as latency-sensitive.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: August 30, 2022
    Assignee: VMware, Inc.
    Inventors: Xunjia Lu, Haoqiang Zheng
  • Patent number: 11422856
    Abstract: Techniques are disclosed relating to scheduling program tasks in a server computer system. An example server computer system is configured to maintain first and second sets of task queues that have different performance characteristics, and to collect performance metrics relating to processing of program tasks from the first and second sets of task queues. Based on the collected performance metrics, the server computer system is further configured to update a scheduling algorithm for assigning program tasks to queues in the first and second sets of task queues. In response to receiving a particular program task associated with a user transaction, the server computer system is also configured to select the first set of task queues for the particular program task, and to assign the particular program task to a particular task queue in the first set of task queues.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: August 23, 2022
    Assignee: PayPal, Inc.
    Inventors: Xin Li, Libin Sun, Chao Zhang, Xiaohan Yun, Jun Zhang, Frédéric Tu, Yang Yu, Lei Wang, Zhijun Ling
  • Patent number: 11416286
    Abstract: Aspects of the technology described herein can facilitate computing on transient resources. An exemplary computing device may use a task scheduler to access information of a computational task and instability information of a transient resource. Moreover, the task scheduler can schedule the computational task to use the transient resource based at least in part on the rate of data size reduction of the computational task. Further, a checkpointing scheduler in the exemplary computing device can determine a checkpointing plan for the computational task based at least in part on a recomputation cost associated with the instability information of the transient resource. Resultantly, the overall utilization rate of computing resources is improved by effectively utilizing transient resources.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: August 16, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ying Yan, Yanjie Gao, Yang Chen, Thomas Moscibroda, Narayanan Ganapathy, Bole Chen, Zhongxin Guo
  • Patent number: 11403220
    Abstract: An apparatus, a method, a method of manufacturing an apparatus, and a method of constructing an integrated circuit are provided. A processor of an application server layer detects a degree of a change in a workload in an input/output stream received through a network from one or more user devices. The processor determines a degree range, from a plurality of preset degree ranges, that the degree of the change in the workload is within. The processor determines a distribution strategy, from among a plurality of distribution strategies, to distribute the workload across one or more of a plurality of solid state devices (SSDs) in a performance cache tier of a centralized multi-tier storage pool, based on the determined degree range. The processor distributes the workload across the one or more of the plurality of solid state devices based on the determined distribution strategy.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: August 2, 2022
    Inventors: Zhengyu Yang, Morteza Hoseinzadeh, Thomas David Evans, Clay Mayers, Thomas Bolt
  • Patent number: 11399082
    Abstract: A client node may execute an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment. A load balancing component is coupled to the client node, and a first virtual provider entity for the first messaging service component is coupled to the load balancing component. The first virtual provider entity may represent a first HA message broker pair, including: (i) a first leader message broker entity, and (ii) a first follower message broker entity to take control when there is a problem with the first leader message broker entity. A shared database is accessible by the first broker node, the first HA message broker pair, and the second broker node, and includes an administration registry data store.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: July 26, 2022
    Assignee: SAP SE
    Inventor: Daniel Ritter
  • Patent number: 11397578
    Abstract: An apparatus such as a graphics processing unit (GPU) includes a plurality of processing elements configured to concurrently execute a plurality of first waves and accumulators associated with the plurality of processing elements. The accumulators are configured to store accumulated values representative of behavioral characteristics of the plurality of first waves that are concurrently executing on the plurality of processing elements. The apparatus also includes a dispatcher configured to dispatch second waves to the plurality of processing elements based on comparisons of values representative of behavioral characteristics of the second waves and the accumulated values stored in the accumulators. In some cases, the behavioral characteristics of the plurality of first waves comprise at least one of fetch bandwidths, usage of an arithmetic logic unit (ALU), and number of export operations.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: July 26, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Randy Ramsey, William David Isenberg, Michael Mantor
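The dispatch comparison this abstract describes can be sketched as a per-metric budget test; the function names, metric keys, and budget values are hypothetical stand-ins for the hardware accumulators.

```python
# Hypothetical sketch: a new wave is admitted only if adding its
# behavioral values (e.g. fetch bandwidth, ALU usage) to the running
# accumulators stays within each per-metric budget.

def can_dispatch(accumulators, wave_metrics, budgets):
    """True when every metric stays within budget after dispatch."""
    return all(accumulators[k] + wave_metrics[k] <= budgets[k]
               for k in budgets)

def dispatch(accumulators, wave_metrics, budgets):
    """Admit the wave and update the accumulators, or reject it."""
    if not can_dispatch(accumulators, wave_metrics, budgets):
        return False
    for k, v in wave_metrics.items():
        accumulators[k] += v
    return True
```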
  • Patent number: 11386351
    Abstract: A machine learning service implements programmatic interfaces for a variety of operations on several entity types, such as data sources, statistics, feature processing recipes, models, and aliases. A first request to perform an operation on an instance of a particular entity type is received, and a first job corresponding to the requested operation is inserted in a job queue. Prior to the completion of the first job, a second request to perform another operation is received, where the second operation depends on a result of the operation represented by the first job. A second job, indicating a dependency on the first job, is stored in the job queue. The second job is initiated when the first job completes.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: July 12, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Leo Parker Dirac, Nicolle M. Correa, Aleksandr Mikhaylovich Ingerman, Sriram Krishnan, Jin Li, Sudhakar Rao Puvvadi, Saman Zarandioon
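The dependency-aware job queue this abstract describes can be sketched as follows; the class, the job identifiers, and the single-dependency restriction are simplifying assumptions for illustration.

```python
# Hypothetical sketch: a job whose dependency has not completed is
# requeued; it runs only after the job it depends on finishes.

from collections import deque

class JobQueue:
    def __init__(self):
        self.queue = deque()
        self.done = set()

    def submit(self, job_id, depends_on=None):
        self.queue.append((job_id, depends_on))

    def run_next(self):
        """Run the first job whose dependency (if any) has completed."""
        for _ in range(len(self.queue)):
            job_id, dep = self.queue.popleft()
            if dep is None or dep in self.done:
                self.done.add(job_id)
                return job_id
            self.queue.append((job_id, dep))   # not ready; requeue
        return None
```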
  • Patent number: 11385929
    Abstract: Techniques are described for detecting failure of one or more virtual computing environments and causing a migration of workloads. In some examples, a computing system includes a storage medium and processing circuitry having access to the storage medium. The processing circuitry is configured to communicate with a plurality of virtual computing environments (VCEs), including a first VCE and a second VCE, wherein each of the plurality of VCEs is operated by a different public cloud provider. The processing circuitry is further configured to deploy a group of workloads to the first VCE, detect a failure of at least a portion of the first VCE, and output, to the first VCE and responsive to detecting the failure, an instruction to transfer a set of workloads of the group of workloads to the second VCE to thereby cause a migration of the set of workloads to the second VCE.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: July 12, 2022
    Assignee: Juniper Networks, Inc.
    Inventors: Sukhdev S. Kapur, Sanju C. Abraham
  • Patent number: 11385758
    Abstract: A gaming system and processor module are adapted to support simultaneous execution of two or more operating system instances. Program code provided for play of the game uses two or more cooperating component processes, partitioned such that at least one of the component processes executes using a first operating system instance and at least one other cooperating component process executes using a further operating system instance. Each operating system instance may execute in its own virtual machine.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: July 12, 2022
    Inventors: Victor Blanco, Zoran Tomicic, Drazen Lenger
  • Patent number: 11388630
    Abstract: An information processing method and device in a baseband processing split architecture, and a computer storage medium, the method includes: receiving, by a distributed unit (DU), load information of a plurality of central units (CUs), and determining, by the DU and according to the load information of the plurality of CUs, a first CU having a load greater than a target threshold value and a second CU having a load less than the target threshold value among the plurality of CUs; and, sending, by the DU, control signaling to the first CU and the second CU respectively to instruct to migrate cell data of the first CU from the first CU to the second CU to balance the load among the plurality of CUs.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: July 12, 2022
    Assignee: ZTE CORPORATION
    Inventors: Xiaxiang Yuan, Wei Zhang
  • Patent number: 11347563
    Abstract: A computing system includes an ISA identifier to identify an ISA (Instruction Set Architecture) of a task; a core selector to select a core having a highest power-performance efficiency among a plurality of cores based on the identified ISA; and a task allocator to allocate the task to the selected core.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: May 31, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jun Mo Park, Bum Gyu Park, Dae Yeong Lee, Lak-Kyung Jung, Dae Hyun Cho
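The core-selection step this abstract describes can be sketched directly; the dictionary layout, the field names, and the example ISA labels are hypothetical.

```python
# Hypothetical sketch: given the ISA a task needs, pick the core with the
# best power-performance efficiency among the cores supporting that ISA.

def select_core(cores, task_isa):
    """cores: list of dicts with 'id', 'isas', and 'perf_per_watt'."""
    candidates = [c for c in cores if task_isa in c["isas"]]
    if not candidates:
        raise ValueError(f"no core supports ISA {task_isa!r}")
    return max(candidates, key=lambda c: c["perf_per_watt"])["id"]
```

A task needing only the baseline ISA can land on an efficient little core, while a task needing an extension falls back to the cores that implement it.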
  • Patent number: 11347488
    Abstract: Systems and methods for programming a network device using a domain-specific language (DSL) are provided. According to one embodiment, source code in a form of a DSL, describing a slow-path task that is to be performed by a network device, is received by a processing resource. A determination is made regarding which one or more types of processors are available within the network device to implement the slow-path task. For each portion of the source code, a preferred type of processor is determined by which the portion of the source code would be most efficiently implemented. When the preferred type of processor is available within the network device, executable code is generated targeting the preferred type of processor based on the portion of the source code; otherwise, intermediate code is generated in a form of a high-level programming language, targeting a general purpose processor of the network device.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: May 31, 2022
    Assignee: Fortinet, Inc.
    Inventors: Zhi Guo, John Cortes, Hao Wang
  • Patent number: 11334422
    Abstract: A method for data redistribution of job data in a first datanode (DN) to at least one additional DN in a Massively Parallel Processing (MPP) Database (DB) is provided. The method includes recording a snapshot of the job data, creating a first data portion in the first DN and a redistribution data portion in the first DN, collecting changes to a job data copy stored in a temporary table, and initiating transfer of the redistribution data portion to the at least one additional DN.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: May 17, 2022
    Assignee: Futurewei Technologies, Inc.
    Inventors: Le Cai, Qingqing Zhou, Yang Sun
  • Patent number: 11334390
    Abstract: A resource reservation system includes a media module that includes a plurality of media devices and a media controller that is coupled to the plurality of media devices. The media controller retrieves media device attributes from each of the plurality of media devices that identify performance capabilities for each of the plurality of media devices and determines one or more media module partitions that are included in the media module. Each of the one or more media module partitions is provided by a subset of the plurality of media devices. The media controller then determines, for each of the media module partitions, a minimum partition performance for that media module partition based on the media device attributes for the subset of the one or more media devices that provide that partition and provides the minimum partition performance for each of the media module partitions to a resource reservation device.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: May 17, 2022
    Assignee: Dell Products L.P.
    Inventors: Yung-Chin Fang, Jingjuan Gong, Xiaoye Jiang
  • Patent number: 11336583
    Abstract: A computing resource service provider may provide computing instances organized into logical groups, such as auto-scale groups. Computing instances assigned to an auto-scale group may be associated with one or more load balancers configured to direct traffic to the computing instances. Furthermore, customers of the computing resource service provider may add or remove load balancers from the auto-scale groups. A background process may be used to add and remove computing instances of the auto-scale group from the load balancers customers are attempting to have added or removed.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: May 17, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Marcel Robert Guzman, Norman Jordan, Shawn Jones, Ahmed Usman Khalid
  • Patent number: 11314541
    Abstract: A task definition is received. The task definition indicates at least a location from which one or more software images can be obtained and information usable to determine an amount of resources to allocate to one or more software containers for the one or more software images. A set of virtual machine instances in which to launch the one or more software containers is determined, and the one or more software images are obtained from the location included in the task definition and are launched as the one or more software containers within the set of virtual machine instances.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: April 26, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Deepak Singh, Anthony Joseph Suarez, William Andrew Thurston, Anirudh Balachandra Aithal, Daniel Robert Gerdesmeier, Euan Skyler Kemp, Kiran Kumar Meduri, Muhammad Umer Azad
  • Patent number: 11301578
    Abstract: A computer-implemented method according to an aspect includes determining a sensitivity level for an instance of data, comparing the sensitivity level to one or more policies, and conditionally performing a backup of the instance of data, based on the comparing.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: April 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Nilesh P. Bhosale, Joseph W. Dain, Gregory T. Kishi, Sandeep R. Patil
  • Patent number: 11301151
    Abstract: A multi-die memory apparatus and identification method thereof are provided. The identification method includes: sending an identification initial command and a first start command to a plurality of memory devices by a controller for starting a first identification period; respectively generating a plurality of first target numbers by the memory devices; respectively performing first counting actions and comparing a plurality of first counting numbers with the first target numbers by a plurality of un-identified memory devices to set a first time-up memory device of the memory devices; and, setting an identification code of the first time-up memory device of the un-identified memory devices to be a first value.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: April 12, 2022
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Chun-Hsiung Hung, Su-Chueh Lo
  • Patent number: 11301301
    Abstract: Embodiments of the present disclosure relate to a method, system and computer program product for offloading a workload between computing environments. According to the method, a workload of a target function of a service provisioned in a first computing environment is determined. A processing capacity of the service available for the target function in the first computing environment is determined. In accordance with a determination that the workload exceeds the processing capacity, at least one incoming request for the target function is caused to be routed to a target instance of the target function, the target instance of the target function being provisioned in a second computing environment different from the first computing environment.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: April 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Gang Tang, Yue Wang, Liang Rong, Wen Tao Zhang
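The offload decision this abstract describes can be sketched as an overflow split; the function name and the request/capacity representation are hypothetical simplifications.

```python
# Hypothetical sketch: when a function's workload exceeds the local
# processing capacity, the overflow requests are routed to a target
# instance provisioned in another computing environment.

def route_requests(requests, capacity):
    """Split incoming requests into (handled_locally, offloaded)."""
    return requests[:capacity], requests[capacity:]
```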
  • Patent number: 11301307
    Abstract: Systems, methods, and machine-readable instructions stored on machine-readable media are disclosed for selecting, based on an analysis of a first process executing on a first host, at least one of a plurality of other hosts to which to migrate the first process, the selecting being further based on an analysis of the plurality of the other hosts and an analysis of processes executing on the plurality of the other hosts. At least one predictive analysis technique is used to predict an amount of time to complete migrating the first process to the selected at least one of the plurality of other hosts and an end time of the second process. In response to determining that a current time incremented by the predicted amount of time to complete migrating the first process is later than or equal to the predicted end time of the second process, a migration time at which to migrate the first process from the first host to the selected at least one of the plurality of other hosts is scheduled.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: April 12, 2022
    Assignee: RED HAT, INC.
    Inventor: Steven Eric Rosenberg
  • Patent number: 11297004
    Abstract: A novel method for dynamic network service allocation that maps generic services into specific configurations of service resources in a network is provided. An application that is assigned to be performed by computing resources in the network is associated with a set of generic services, and the method maps the set of generic services to the service resources based on the assignment of the application to the computing resources. The mapping of generic services is further based on a level of service that is chosen for the application, where the set of generic services are mapped to different sets of network resources according to different levels of services.
    Type: Grant
    Filed: February 10, 2020
    Date of Patent: April 5, 2022
    Assignee: NICIRA, INC.
    Inventors: Jayant Jain, Raju Koganty, Anirban Sengupta
  • Patent number: 11295747
    Abstract: A method and a voice processor that includes (i) an input that is configured to receive audio signals that represent audio, (ii) a wake word detection circuit, (iii) a first buffer that is configured to store at least wake word signals and prebuffer signals, and (iv) a communication module that is configured to (a) output, over an interrupt port, an interrupt request to an application processor, following a detection of the wake word signals, (b) following an acceptance of the application processor to receive content, access the first buffer and retrieve the prebuffer signals and the wake word signals, and (c) output the content, over the I2S port, to the application processor. The content includes the wake word signals, the prebuffer signals, and query or command signals.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: April 5, 2022
    Assignee: DSP GROUP LTD.
    Inventor: Avi Keren
  • Patent number: 11290528
    Abstract: The present invention relates to a system for automating processes, and in particular to a system for optimizing the distribution of work items among available processing resources within such a system. The system includes an active queue controller, executed on an application server, that manages the creation and deletion of virtual machines on available resources while querying a data store for work items and instructions for executing the automated processes.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: March 29, 2022
    Inventors: David Moss, Stuart Wood
  • Patent number: 11281510
    Abstract: In an approach to intelligent scaling in a cloud platform, an attribute template is stored for one or more target services based on one or more system data. One or more request metrics for each target service is stored, wherein the request metrics are based on an analysis of one or more incoming requests of one or more service call chains. Responsive to receiving a request for a target service in a service call chain, the target service is scaled based on the attribute template of the target service and the request metrics of the target service.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: March 22, 2022
    Assignee: Kyndryl, Inc.
    Inventors: Xu Hui Bai, Yue Wang, Wen Rui Zhao, Min Xiang, Li Long Chen
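The scaling decision this abstract describes can be sketched by combining a per-service attribute template with observed request metrics; the template fields and rate figures are hypothetical.

```python
# Hypothetical sketch: the attribute template supplies per-replica
# capacity and a floor, and the observed request rate drives the count.

import math

def target_replicas(requests_per_sec, template, max_replicas):
    """template: dict with 'capacity_rps' (per-replica capacity in
    requests/sec) and an optional 'min_replicas' floor."""
    needed = math.ceil(requests_per_sec / template["capacity_rps"])
    return max(template.get("min_replicas", 1), min(needed, max_replicas))
```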
  • Patent number: 11275991
    Abstract: Systems and methods for training neural networks. One embodiment is a system that includes a memory configured to store samples of training data for a Deep Neural Network (DNN), and a distributor. The distributor identifies a plurality of work servers provisioned for training the DNN by processing the samples via a model of the DNN, receives information indicating Graphics Processing Unit (GPU) processing powers at the work servers, determines differences in the GPU processing powers between the work servers based on the information, and allocates the samples among the work servers based on the differences.
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: March 15, 2022
    Assignee: Nokia Technologies Oy
    Inventors: Fangzhe Chang, Dong Liu, Thomas Woo
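The allocation step this abstract describes can be sketched as a proportional split; the function name and the tie-breaking rule for rounding remainders are assumptions of this sketch, not the patented method.

```python
# Hypothetical sketch: split the training samples among work servers in
# proportion to their measured GPU processing powers.

def allocate_samples(n_samples, gpu_powers):
    """Return per-server sample counts proportional to gpu_powers."""
    total = sum(gpu_powers)
    counts = [n_samples * p // total for p in gpu_powers]
    # Hand out the rounding remainder to the fastest servers first.
    remainder = n_samples - sum(counts)
    for i in sorted(range(len(gpu_powers)),
                    key=lambda i: gpu_powers[i], reverse=True)[:remainder]:
        counts[i] += 1
    return counts
```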
  • Patent number: 11275604
    Abstract: Mobility service providers and others can use cloud platforms to meet customer demand. Due to changing demand or changing technology numerous issues arise. For example, server utilization within the cloud platform can become less efficient over time. As another example, virtual machines and virtual network functions processed by the cloud platform typically need to be extensively tested and certified, which can be expensive. Moreover, intra-platform communication can play a significant role in the costs to operate a cloud platform. Techniques detailed herein can address many of these issues, e.g., by providing mechanisms for increasing host or server utilization in response to changing demand, introducing a container technique for virtual machines to mitigate testing costs, and modeling bandwidth resources.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: March 15, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Rosenberg, Kartik Pandit
  • Patent number: 11271858
    Abstract: A method for reassigning exit internet protocol (IP) addresses in a virtual private network (VPN), the method including receiving a message indicating an amount of communication data associated with a first exit IP address, determining that the amount of communication data satisfies a data threshold associated with the first exit IP address, and transmitting, based at least in part on the determining, a notification indicating that the amount of communication data satisfies the data threshold. Various other aspects are contemplated.
    Type: Grant
    Filed: July 25, 2021
    Date of Patent: March 8, 2022
    Assignee: UAB 360 IT
    Inventors: Karolis Pabijanskas, Kiril Mikulskij
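The threshold check at the heart of this abstract, comparing an exit IP's accumulated traffic against its data threshold and emitting a notification, reduces to a few lines. Function and dictionary names are illustrative only:

```python
def check_exit_ip(usage_bytes: dict, thresholds: dict, exit_ip: str):
    """Return a notification string when traffic on a VPN exit IP satisfies its
    data threshold, or None otherwise."""
    used = usage_bytes.get(exit_ip, 0)
    if used >= thresholds[exit_ip]:
        return f"{exit_ip}: {used} bytes reached threshold {thresholds[exit_ip]}"
    return None
```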
  • Patent number: 11269593
    Abstract: A system, a method, and a computer program product for generation and consumption of global numbers. A range of global numbers for consumption by a plurality of processes of a software application in a plurality of software applications is generated. The range of global numbers is generated in accordance with one or more requirements of the software application and includes a plurality of blocks of global numbers. The generated range of global numbers is provided to the software application for consumption by the plurality of processes. Each process is assigned a block of global numbers in the plurality of blocks of global numbers and consumes the assigned block of global numbers. A count of global numbers in the global number range consumed by each process in the plurality of processes is determined. Another range of global numbers is generated upon determination of the count being below a predefined threshold.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: March 8, 2022
    Assignee: SAP SE
    Inventor: Anbusivam S
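The block-per-process scheme in this abstract, a range carved into blocks, one block assigned to each consuming process, with a new range generated when the remaining count falls below a threshold, can be sketched as follows (class and method names are hypothetical):

```python
class GlobalNumberRange:
    """Hand out non-overlapping blocks of global numbers to processes and flag
    when the unconsumed count drops below a predefined threshold."""
    def __init__(self, start: int, block_size: int, num_blocks: int):
        self.block_size = block_size
        self.blocks = [range(start + i * block_size, start + (i + 1) * block_size)
                       for i in range(num_blocks)]
        self.next_block = 0

    def assign_block(self) -> range:
        block = self.blocks[self.next_block]
        self.next_block += 1
        return block

    def needs_new_range(self, threshold: int) -> bool:
        remaining = (len(self.blocks) - self.next_block) * self.block_size
        return remaining < threshold
```

Because each process consumes only its own block, processes never contend for individual numbers and no two processes can emit the same global number.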
  • Patent number: 11269517
    Abstract: Example implementations relate to determining a storage utilization attributable to object data stored in deduplicated form. The storage utilization attributable to the object data may be determined from an amount of object data not shared with other objects of the deduplication store and a portion of an amount of object data shared with other objects of the deduplication store. It is determined whether the storage utilization results in exceeding a storage threshold of a storage tier to which the object data is assigned. Where the storage utilization is determined to exceed the storage threshold, the object data may be reassigned to a different storage tier, or data of another object may be removed from the storage tier.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: March 8, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Aswin Jayaraman, Sijesh Thondapilly Balakrishnan, Sankar Ramasamy, Naveen Kumar Selvarajan
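The attribution rule, charging an object its unshared data plus a portion of the data it shares with other objects, and the tier-threshold comparison can be expressed directly. The even split across sharers is an assumption; the patent says only "a portion":

```python
def attributed_utilization(unshared_bytes: int, shared_bytes: int, sharers: int) -> float:
    """Charge an object its unshared data plus an even portion of the data it
    shares with other objects in the deduplication store."""
    return unshared_bytes + shared_bytes / sharers

def exceeds_tier(unshared_bytes: int, shared_bytes: int, sharers: int,
                 tier_threshold: int) -> bool:
    """True when attributed utilization exceeds the assigned tier's threshold,
    triggering reassignment of this object or eviction of another."""
    return attributed_utilization(unshared_bytes, shared_bytes, sharers) > tier_threshold
```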
  • Patent number: 11265227
    Abstract: Aspects of the present disclosure involve systems, methods, devices, and the like for obtaining real-time estimates. In one embodiment, a novel customizable, transferable system architecture is presented that enables the characterization of a task, designated for time estimation, into steps or processes. The characterization may occur using the novel architecture, which introduces an estimates service module that can provide an initial timing estimate by obtaining a composite time estimate of all steps. Timing of each of the steps can be predicted using a job scheduling enterprise designed to model the job or step. In another embodiment, delays are captured by time-estimate monitors, which can provide alerts and delta time differences. Such alerts and time differences can be updated and presented to the user.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: March 1, 2022
    Assignee: PAYPAL, INC.
    Inventors: Joshua Allen, Joshua Van Blake, Archana Murali
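The estimates-service behavior described above, an initial estimate as the composite of all per-step predictions, with monitors reporting delta time differences on delay, can be sketched with hypothetical names:

```python
def composite_estimate(step_estimates: dict) -> float:
    """Initial task estimate = composite of the per-step time predictions."""
    return sum(step_estimates.values())

def delay_delta(step_estimates: dict, step: str, actual: float) -> float:
    """A time-estimate monitor reports the delta when a step runs long."""
    return actual - step_estimates[step]
```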
  • Patent number: 11249817
    Abstract: To enhance the scaling of data processing systems in a computing environment, a number of data objects indicated in an allocation queue and a first attribute of the allocation queue are determined, where the allocation queue is accessible to a plurality of data processing systems. A number of data objects indicated in the allocation queue at a subsequent time is predicted based on the determined number of data objects and the first attribute. It is determined whether the active subset of the plurality of data processing systems satisfies a criterion for quantity adjustment based, at least in part, on the predicted number of data objects indicated in the allocation queue and a processing time goal. Based on determining that the active subset of data processing systems satisfies the criterion for quantity adjustment, a quantity of the active subset of data processing systems is adjusted.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: February 15, 2022
    Assignee: Palo Alto Networks, Inc.
    Inventor: Philip Simon Tuffs
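A minimal sketch of the prediction-and-adjustment loop: forecast the queue depth from the current count and a queue attribute (here assumed to be an arrival rate, which the patent does not specify), then size the active subset against a processing time goal:

```python
def predict_queue_depth(current_depth: int, arrival_rate: float, horizon: float) -> float:
    """Linear forecast of data objects in the allocation queue after `horizon` seconds."""
    return current_depth + arrival_rate * horizon

def desired_active_systems(predicted_depth: float, per_system_rate: float,
                           time_goal: float) -> int:
    """Smallest active subset that drains the predicted queue within the goal."""
    capacity = per_system_rate * time_goal  # objects one system handles within the goal
    return max(1, -(-int(predicted_depth) // int(capacity)))  # ceiling division
```

If the forecast says 400 objects will be queued and each system processes 2 objects/second against a 50-second goal, four systems should be active.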
  • Patent number: 11249876
    Abstract: A system and method for estimating execution time of an application on the Spark™ platform in a production environment. The application on the Spark™ platform is executed as a sequence of Spark jobs. Each Spark job is executed as a directed acyclic graph (DAG) consisting of stages. Each stage has multiple executors running in parallel, and each executor has a set of concurrent tasks. Each executor spawns multiple threads, one for each task. All jobs in the same executor share the same JVM memory. The execution time for each Spark job is predicted as the summation of the estimated execution times of all its stages. The execution time comprises scheduler delay, serialization time, de-serialization time, and JVM overheads. The JVM time estimation depends on the type of computation hardware system and the number of threads.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: February 15, 2022
    Assignee: Tata Consultancy Services Limited
    Inventors: Rekha Singhal, Praveen Kumar Singh
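The summation model in this abstract can be sketched directly: each stage estimate is built from the named components, and the job estimate is the sum over stages. Treating JVM overhead as a single pre-computed input (rather than deriving it from hardware type and thread count) is a simplifying assumption:

```python
def estimate_stage(scheduler_delay: float, serialization: float,
                   deserialization: float, jvm_overhead: float) -> float:
    """Stage estimate from the components the abstract names; jvm_overhead is
    assumed to already reflect the hardware type and thread count."""
    return scheduler_delay + serialization + deserialization + jvm_overhead

def estimate_job(stage_estimates: list) -> float:
    """Predicted Spark job time = summation of its stages' estimated times."""
    return sum(stage_estimates)
```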
  • Patent number: 11245571
    Abstract: A system and method for monitoring a plurality of servers by a monitoring server in a computer network. A list of servers and a plurality of services to monitor in the computer network is generated at the monitoring server. A status query is transmitted sequentially by the monitoring server to each of the plurality of servers, the status query including the plurality of services to monitor at each server. A status message report is received from each of the plurality of servers in response to each status query. An event is reported in an event log for each server that has an abnormal service status. The transmission of the status query to each server is performed by the monitoring server at a specified service time interval.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: February 8, 2022
    Assignee: OPEN INVENTION NETWORK LLC
    Inventors: Samuel Hendon, Colin Feeser
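The monitoring loop above, sequential status queries per server per service, with an event logged for any abnormal status, can be sketched as follows. `get_status` stands in for the actual status query/report exchange, and `"running"` is an assumed normal status value:

```python
def poll_servers(servers: list, services: list, get_status, event_log: list) -> None:
    """Sequentially query each server for each monitored service and record an
    event for any server reporting an abnormal service status."""
    for server in servers:
        for service in services:
            status = get_status(server, service)
            if status != "running":
                event_log.append((server, service, status))
```

In the patented system this loop would rerun at the specified service time interval; a scheduler or sleep loop around `poll_servers` is omitted here.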
  • Patent number: 11200088
    Abstract: An information processing system, an information processing method, and an information processing apparatus. The information processing system includes at least one memory configured to store a plurality of jobs in order, by type of processing to be executed, and a plurality of processors each assigned to a specific type of processing to be executed. A processor processes a job assigned to another processor and stored in the memory, in substitution for the other processor, based on a determination that no job of its own assigned type of processing is stored in the memory, and cancels the substitute processing of the job assigned to the other processor according to a processing status of at least one of the other processors.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: December 14, 2021
    Assignee: Ricoh Company, Ltd.
    Inventor: Tadashi Honda
  • Patent number: 11200096
    Abstract: A system is described that has a node and runtime logic. The node has a plurality of processing elements operatively coupled by interconnects. The runtime logic is configured to receive target interconnect bandwidth, target interconnect latency, rated interconnect bandwidth and rated interconnect latency. The runtime logic responds by allocating to configuration files defined by the application graph: (1) processing elements in the plurality of processing elements, and (2) interconnects between the processing elements. The runtime logic further responds by executing the configuration files using the allocated processing elements and the allocated interconnects.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: December 14, 2021
    Assignee: SambaNova Systems, Inc.
    Inventors: Raghunath Shenbagam, Ravinder Kumar
  • Patent number: 11201939
    Abstract: Techniques for using one or more satellites as a part of a content delivery network are described. For example, in some instances a satellite of a cluster of satellites is to receive a request for a resource hosted by the content delivery network; determine that the request for the resource cannot be served by the cluster of satellites; determine a first entity to ask for the resource; send a secondary request for the resource to the determined first entity; receive the resource from the determined first entity; and respond to the request from the user of the content delivery network using the received resource.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: December 14, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ronil Sudhir Mokashi, Prashant Verma, Karthik Uthaman
  • Patent number: 11201824
    Abstract: Embodiments of the present disclosure provide a method, an electronic device and a computer program product of load balancing. The method comprises collecting, at a target device in a distributed system, resource usage information of a plurality of devices in the distributed system. The method further comprises determining a first work task for the target device to be stopped based on the resource usage information, the target device having a first authority to execute the first work task. The method further comprises causing the first authority to be released. With the embodiments of the present disclosure, each node in the distributed system can individually balance different task loads and the use of resources by different operations of the task, thereby improving the performance of the distributed system.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: December 14, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Tianyu Ma, Lu Tian
  • Patent number: 11194815
    Abstract: Service interruptions in a multi-tenancy, network-based storage system can be mitigated by constraining the execution of queries. In various examples, a network-based storage system may receive a request to execute a query against data maintained by the network-based storage system. The network-based storage system may perform a unit of work to execute the query, progressing through some, but not all, of a set of operations that are to be completed for completing execution of the query. Upon completion of the unit of work, query execution may be paused, query state data may be saved, and query results may be generated for consumption by the requesting computing device. In some embodiments, tokens that are usable to resume query execution based on the saved query state data may be sent to customer computing devices for resuming query execution on-demand.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: December 7, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ankit Kumar, Alazel Acheson, Matthew William Berry, Ankul Rastogi, Amit Sahasrabudhe
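The pause/resume mechanism described above, execute one unit of work, save query state, and hand the client a token for on-demand resumption, can be sketched with a token-keyed state store. Modeling query state as a row offset is an illustrative simplification:

```python
import uuid

class QueryExecutor:
    """Run a query one unit of work at a time; pause, save state, return a token."""
    def __init__(self):
        self.saved = {}  # token -> saved query state (here, just an offset)

    def run_unit(self, rows: list, offset: int, unit_size: int):
        """Process one unit of work; return (partial results, resume token or None)."""
        partial = rows[offset:offset + unit_size]
        new_offset = offset + unit_size
        if new_offset >= len(rows):
            return partial, None          # query complete: no token issued
        token = str(uuid.uuid4())
        self.saved[token] = new_offset    # state needed to resume on demand
        return partial, token

    def resume(self, rows: list, token: str, unit_size: int):
        """Resume a paused query from the state saved under the token."""
        return self.run_unit(rows, self.saved.pop(token), unit_size)
```

Bounding each unit of work caps the resources any single tenant's query can hold at once, which is how the abstract's multi-tenancy interruptions are mitigated.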
  • Patent number: 11194353
    Abstract: A method for controlling a data center, comprising a plurality of server systems, each associated with a cooling system and a thermal constraint, comprising: receiving an input representing a concurrent physical condition of a first server system; predicting a future physical condition based on a set of future states of the first server system; dynamically controlling the cooling system in response to at least the input and the predicted future physical condition, to selectively cool the first server system sufficient to meet the predetermined thermal constraint; and controlling an allocation of tasks between the plurality of server systems to selectively load the first server system within the predetermined thermal constraint and selectively idle a second server system, wherein the idle second server system can be recruited to accept tasks when allocated to it, and wherein the cooling system associated with the idle second server system is selectively operated in a low power consumption state.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: December 7, 2021
    Assignee: The Research Foundation for The State University
    Inventor: Kanad Ghose
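The task-allocation policy in this abstract, load active servers up to their thermal constraint and recruit an idle server (whose cooling sits in a low-power state) only when needed, can be sketched as below. The data layout and the strict first-fit order are assumptions for illustration:

```python
def allocate_task(servers: list, predicted_temp: dict, thermal_limit: float):
    """Prefer loading an already-active server within its thermal constraint;
    recruit an idle one (waking its low-power cooling) only as a last resort."""
    for s in servers:
        if s["active"] and predicted_temp[s["name"]] < thermal_limit:
            return s["name"]
    for s in servers:
        if not s["active"]:
            s["active"] = True  # recruited: its cooling leaves the low-power state
            return s["name"]
    return None  # no server can accept the task within its constraint
```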
  • Patent number: 11188433
    Abstract: A data processing node includes a management environment, an application environment, and a shared memory segment (SMS). The management environment includes at least one management services daemon (MSD) running on one or more dedicated management processors thereof. One or more application protocols are executed by the at least one MSD on at least one of the dedicated management processors. The management environment has a management interface daemon (MID) running on one or more application central processing unit (CPU) processors thereof. The SMS is accessible by the at least one MSD and the MID for enabling communication of information of the one or more application protocols to be provided between the at least one MSD and the MID. The MID provides at least one of management service to processes running within the application environment and local resource access to one or more processes running on another data processing node.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: November 30, 2021
    Assignee: III Holdings 2, LLC
    Inventors: Niall Joseph Dalton, Trevor Robinson
  • Patent number: 11181571
    Abstract: An electronic device including a processor and a sensor may be provided. The processor obtains a first degree of degradation of a first core based on a first parameter value associated with a lifetime of the first core and a first operating level associated with an operation of the first core. The processor obtains a second degree of degradation of a second core based on a second parameter value associated with a lifetime of the second core and a second operating level associated with an operation of the second core. The processor schedules a task of the first core and the second core based on the first degree of degradation and the second degree of degradation. The sensor provides the first parameter value and the first operating level to the first core and the second parameter value and the second operating level to the second core.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: November 23, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dong-Uk Ryu, Seongbeom Kim, Janghyuk An
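The degradation-aware scheduling above can be sketched as: compute each core's degree of degradation from its lifetime parameter and operating level, then place the task on the least-degraded core. The product used as a degradation score is a toy assumption; the abstract does not specify how the two inputs combine:

```python
def degradation(parameter_value: float, operating_level: float) -> float:
    """Toy degradation score: wear grows with the lifetime parameter and the
    operating level (the real combination is not given in the abstract)."""
    return parameter_value * operating_level

def pick_core(cores: list) -> dict:
    """Schedule the next task on the least-degraded core."""
    return min(cores, key=lambda c: degradation(c["param"], c["level"]))
```

Steering work away from the more-degraded core evens out wear, extending the lifetime of the multi-core device as a whole.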