Load Balancing Patents (Class 718/105)
- Patent number: 12107740
  Abstract: Provided is an infrastructure for enforcing target service level parameters in a network. In one example, a network service level agreement (SLA) registry obtains one or more input service level parameters for at least one service offered by an application. Based on the one or more input service level parameters, the network SLA registry provides one or more target service level parameters to a plurality of network controllers. Each network controller of the plurality of network controllers is configured to enforce the one or more target service level parameters in a respective network domain configured to carry network traffic associated with the application.
  Type: Grant
  Filed: January 30, 2023
  Date of Patent: October 1, 2024
  Assignee: CISCO TECHNOLOGY, INC.
  Inventors: Fabio R. Maino, Saswat Praharaj, Alberto Rodriguez-Natal, Pradeep K. Kathail
- Patent number: 12107823
  Abstract: Multiple Anycast regions may be defined, and a separate Anycast address may be used for each region in order to localize client requests. In examples, when one or more Anycast servers in a first Anycast region fail or become overburdened (or are predicted to do so), one or more Anycast servers in another, geographically or logically separate Anycast region that has additional capacity to handle client service requests may be dynamically added to the first Anycast region.
  Type: Grant
  Filed: July 28, 2023
  Date of Patent: October 1, 2024
  Assignee: CenturyLink Intellectual Property
  Inventors: Dean Ballew, John R. B. Woodworth
- Patent number: 12093745
  Abstract: Various approaches for managing one or more computational commodities in a virtual desktop infrastructure (VDI) include receiving a collection of utilization records for a user utilizing a desktop resource supported by the computational commodity in a desktop pool, each utilization record corresponding to a utilization rate of the computational commodity by the user; and augmenting or reducing allocation of the computational commodity to the desktop resource utilized by the user based at least in part on the utilization rates.
  Type: Grant
  Filed: August 31, 2020
  Date of Patent: September 17, 2024
  Assignee: International Business Machines Corporation
  Inventors: Vivek Nandavanam, Shravan Sriram, Jerrold Leichter, Alexander Nish, Apostolos Dailianas, Dmitry Illichev
- Patent number: 12086652
  Abstract: Techniques described herein relate to a method for managing a computer vision (CV) environment. The method includes identifying, by a CV node of a plurality of CV nodes, a CV alert; in response to identifying the CV alert: making a first determination that the CV node is not participating in a distributed workload associated with a higher priority CV alert; in response to the first determination, the CV node: selects candidate CV nodes of the plurality of CV nodes; initiates performance of the distributed CV workload by the candidate CV nodes to generate CV data associated with the CV alert; generates a CV alert case associated with the CV alert; obtains CV data from the candidate CV nodes that are performing the distributed CV workload; updates the CV alert case using the CV data generated during the performance of the distributed CV workload; and provides the updated CV alert case to a VMS.
  Type: Grant
  Filed: January 21, 2022
  Date of Patent: September 10, 2024
  Assignee: DELL PRODUCTS L.P.
  Inventors: Ian Roche, Philip Hummel, Dharmesh M. Patel
- Patent number: 12068975
  Abstract: The present disclosure relates to the field of communication technology, and provides a resource scheduling method including: acquiring utilization rates of resources of a plurality of proxy servers, the plurality of proxy servers being deployed on a virtual machine; and using at least one first proxy server to share a utilization of resources of at least one second proxy server, where the utilization rate of resources of each of the at least one first proxy server is smaller than a first threshold, the utilization rate of resources of each of the at least one second proxy server is greater than a second threshold, and the first threshold is smaller than the second threshold.
  Type: Grant
  Filed: September 15, 2021
  Date of Patent: August 20, 2024
  Assignee: XI'AN ZHONGXING NEW SOFTWARE CO., LTD.
  Inventors: Yao Tong, Haixin Wang
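The two-threshold sharing scheme in this abstract can be illustrated with a minimal sketch. The function names, the 1:1 pairing policy, and the example utilization figures are hypothetical, not the patented implementation:

```python
# Hypothetical sketch of two-threshold proxy load sharing: proxies below
# `low` absorb load from proxies above `high`, with low < high.

def rebalance(utilization: dict[str, float], low: float, high: float) -> dict[str, str]:
    """Map each overloaded proxy to an underloaded helper proxy."""
    assert low < high, "the first threshold must be smaller than the second"
    helpers = [p for p, u in utilization.items() if u < low]
    overloaded = [p for p, u in utilization.items() if u > high]
    # Simplistic 1:1 pairing; a real scheduler would weigh spare capacity.
    return dict(zip(overloaded, helpers))

pairs = rebalance({"p1": 0.15, "p2": 0.92, "p3": 0.30}, low=0.4, high=0.8)
```

The gap between the two thresholds leaves a neutral band, so a proxy hovering around a single cutoff is not repeatedly reclassified.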
- Patent number: 12058056
  Abstract: Systems and methods for providing web service instances to support traffic demands for a particular web service in a large-scale distributed system are disclosed. An example method includes determining a peak historical service load for the web service. The service load capacity for each existing web service instance may then be determined. The example method may then calculate the remaining service load after subtracting the sum of the service load capacity of the existing web service instances from the peak historical service load for the web service. The number of web service instances necessary in the large-scale distributed system may be determined based on the remaining service load. The locations of the web service instances may be determined and changes may be applied to the large-scale system based on the number of web service instances necessary in the large-scale distributed system.
  Type: Grant
  Filed: June 3, 2021
  Date of Patent: August 6, 2024
  Assignee: Google LLC
  Inventors: Kamil Skalski, Elzbieta Czajka, Filip Grzadkowski, Krzysztof Grygiel
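The capacity arithmetic the abstract walks through (remaining load = peak historical load minus the summed capacity of existing instances) can be sketched as follows; the function name and example rates are assumptions for illustration only:

```python
import math

def instances_needed(peak_load: float, existing_capacities: list[float],
                     per_instance_capacity: float) -> int:
    """Number of new web service instances required for the remaining load."""
    remaining = peak_load - sum(existing_capacities)
    if remaining <= 0:
        return 0  # existing instances already cover the historical peak
    return math.ceil(remaining / per_instance_capacity)

# e.g. a 1000 rps peak, three existing 250 rps instances, new instances at 200 rps:
n = instances_needed(1000, [250, 250, 250], 200)  # remaining 250 rps -> 2 instances
```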
- Patent number: 12050944
  Abstract: Embodiments herein describe an interface shell in a SmartNIC that reduces data-copy overhead in CPU-centric solutions that rely on a hardware compute engine (which can include one or more accelerators). The interface shell offloads tag matching and address translation without CPU involvement. Moreover, the interface shell enables the compute engine to read messages directly from the network without extra data copy, i.e., without first copying the data into the CPU's memory.
  Type: Grant
  Filed: May 4, 2021
  Date of Patent: July 30, 2024
  Assignee: XILINX, INC.
  Inventors: Guanwen Zhong, Chengchen Hu, Gordon John Brebner
- Patent number: 12047439
  Abstract: Methods and systems for managing workloads are disclosed. The workloads may be supported by operation of workload components that are hosted by infrastructure. The hosted locations of the workload components by the infrastructure may impact the performance of the workloads. To manage performance of the workloads, an optimization process may be performed to identify a migration plan for migrating some of the workload components to other infrastructure such as shared edge infrastructure. Migration of the workload components may reduce the computing resource cost for performing various workloads.
  Type: Grant
  Filed: April 26, 2023
  Date of Patent: July 23, 2024
  Assignee: Dell Products L.P.
  Inventors: Ofir Ezrielev, Roman Bober, Lior Gdaliahu, Yonit Lopatinski, Eliyahu Rosenes
- Patent number: 12047273
  Abstract: A control system facilitates active management of a streaming data system. Given historical data traffic for each data stream processed by a streaming data system, the control system uses a machine learning model to predict future data traffic for each data stream. The control system selects a matching between data streams and servers for a future time that minimizes a cost comprising a switching cost and a server imbalance cost based on the predicted data traffic for the future time. In some configurations, the matching is selected using a planning window comprising a number of future time steps dynamically selected based on uncertainty associated with the predicted data traffic. Given the selected matching, the control system may manage the streaming data system by causing data streams to be moved between servers based on the matching.
  Type: Grant
  Filed: February 14, 2022
  Date of Patent: July 23, 2024
  Assignee: ADOBE INC.
  Inventors: Georgios Theocharous, Kai Wang, Zhao Song, Sridhar Mahadevan
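The cost the abstract describes, a switching cost plus a server imbalance cost, can be sketched with a simple objective function. The per-move unit cost and the squared-deviation imbalance measure are illustrative assumptions, not the patent's actual formulation:

```python
def matching_cost(current: dict[str, str], proposed: dict[str, str],
                  predicted_traffic: dict[str, float],
                  servers: list[str], switch_weight: float = 1.0) -> float:
    """Cost of a stream-to-server matching: moves plus load imbalance."""
    # Switching cost: one unit (scaled by switch_weight) per stream that moves.
    switching = switch_weight * sum(
        1 for stream in proposed if current.get(stream) != proposed[stream])
    # Imbalance cost: squared deviation of each server's predicted load from the mean.
    load = {srv: 0.0 for srv in servers}
    for stream, srv in proposed.items():
        load[srv] += predicted_traffic[stream]
    mean = sum(load.values()) / len(servers)
    imbalance = sum((l - mean) ** 2 for l in load.values())
    return switching + imbalance
```

A planner would evaluate this cost for candidate matchings over the prediction window and keep the cheapest one.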
- Patent number: 12039195
  Abstract: Techniques are provided for provisioning zoned storage devices to sequential workloads. One method comprises obtaining a sequentiality classification of at least one workload of an application associated with a storage system comprising a plurality of zoned storage devices; and provisioning at least one of the zoned storage devices for storing the data of the at least one workload in response to the at least one workload being classified as a sequential workload. A sequentiality classification of a workload (e.g., as a sequential workload or a random workload) can be determined by: (i) evaluating the application name and/or application type of an application, (ii) learning input-output workload patterns, such as sequential read/write operations or random read/write operations, and/or (iii) detecting the application access mode to persistent volumes, such as a sequential write access mode.
  Type: Grant
  Filed: March 30, 2021
  Date of Patent: July 16, 2024
  Assignee: EMC IP Holding Company LLC
  Inventors: Kurumurthy Gokam, Kundan Kumar, Remesh Parakunnath
- Patent number: 12039375
  Abstract: The processing performance of an entire system is enhanced by efficiently using CPU resources shared by a plurality of guests. A server 10 includes a host OS 104 and a plurality of guest OSs 110A and 110B running on a plurality of virtual machines 108A and 108B, respectively, which are virtually constructed on the host OS 104. The plurality of virtual machines 108A and 108B shares CPU resources implemented by hardware 102. A guest priority calculation unit 202 of a resource management device (resource management unit 20) calculates a processing priority of at least one of the guest OSs 110 based on at least one of a packet transfer rate from the host OS 104 to the guest OS 110 and an available capacity status of a kernel buffer of the host OS 104. A resource utilization control unit 204 controls allocation of a utilization time for CPU resources to be used by the plurality of guest OSs 110 based on the calculated processing priority.
  Type: Grant
  Filed: February 4, 2020
  Date of Patent: July 16, 2024
  Assignee: Nippon Telegraph and Telephone Corporation
  Inventors: Kei Fujimoto, Kohei Matoba, Makoto Araoka
- Patent number: 12039377
  Abstract: Load leveling between hosts (computes) is realized in a virtual infrastructure regardless of application restrictions on the virtualization technique, while reducing the impact on services.
  Type: Grant
  Filed: October 29, 2019
  Date of Patent: July 16, 2024
  Assignee: Nippon Telegraph and Telephone Corporation
  Inventors: Eriko Iwasa, Makoto Hamada
- Patent number: 12032545
  Abstract: Systems, methods, and computer-readable and executable instructions are provided for providing a device agnostic active/active data center. Providing a device agnostic active/active data center can include receiving user communication assigned from a content delivery network (CDN) provider. In addition, providing a device agnostic active/active data center can include determining a designated database for the user communication. Furthermore, providing a device agnostic active/active data center can include assigning a destination address to the designated database for the user communication.
  Type: Grant
  Filed: June 27, 2022
  Date of Patent: July 9, 2024
  Assignee: United Services Automobile Association (USAA)
  Inventors: Christopher T. Wilkinson, Shannon Thornton, Phillip C. Schwesinger, Jason P. Larrew, Tommy B. Lavelle
- Patent number: 12026178
  Abstract: System and techniques for determining an optimal number of regions in an IMS system include receiving a transaction report from a log dataset. A first table is generated from the transaction report, where the first table includes a class identified by a class identifier (ID), a number of regions the class is assigned, and a total percent region occupancy by the class. Classes ineligible to be shut down are identified based on a set of criteria and the classes ineligible to be shut down are eliminated. For each remaining class assigned to a threshold number of regions, candidate regions from the threshold number of regions eligible for shut down are identified and remaining regions from the threshold number of regions that can handle a workload from the candidate regions eligible for shut down are identified, where the remaining regions represent the optimal number of regions in the IMS system.
  Type: Grant
  Filed: March 30, 2022
  Date of Patent: July 2, 2024
  Assignee: BMC Software, Inc.
  Inventors: Sagar Rajendraprasad Bansal, Loc Dinh Tran, Graham Fox
- Patent number: 12013870
  Abstract: Technology for routing queries in a system with a plurality of nodes (for example online analytical processing sub-systems) where each node has an associated replicated local database and a local latency value and replication velocity values. The workload balancing for incoming received queries among and between the plurality of nodes is based, at least in part, on consideration of latency values and/or replication velocity values for the various nodes. The best node to handle a given query is thereby selected and the query is routed to the selected node for response.
  Type: Grant
  Filed: July 29, 2022
  Date of Patent: June 18, 2024
  Assignee: International Business Machines Corporation
  Inventors: Manogari Nogi Simanjuntak, Sowmya Kameswaran, Daniel Martin, Jia Heng Zhong
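The node-selection idea, ranking replicas by latency and replication velocity, can be sketched with a weighted score; the scoring formula and weights below are illustrative assumptions, not taken from the patent:

```python
def pick_node(nodes: list[str], latency: dict[str, float],
              velocity: dict[str, float],
              latency_weight: float = 1.0, lag_weight: float = 1.0) -> str:
    """Route a query to the node with the lowest combined score.

    A higher replication velocity means the replica catches up faster,
    so it contributes a smaller staleness penalty (we use its inverse).
    """
    def score(n: str) -> float:
        return latency_weight * latency[n] + lag_weight / max(velocity[n], 1e-9)
    return min(nodes, key=score)

best = pick_node(["n1", "n2"], latency={"n1": 10.0, "n2": 5.0},
                 velocity={"n1": 1.0, "n2": 1.0})
```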
- Patent number: 12014215
  Abstract: An active scheduling method performed with a master processor and a plurality of slave processors. The method includes determining whether a job to be performed has a dependency by referencing a job queue; in a case in which it is determined that the job to be performed has a dependency, updating a state of the job to be performed in a table in which information of each of a plurality of jobs is recorded; analyzing a state of a job preceding the job to be performed based on the table; and in a case in which the job preceding the job to be performed is determined to have been completed, performing the job to be performed by retrieving the job to be performed from the job queue.
  Type: Grant
  Filed: May 21, 2021
  Date of Patent: June 18, 2024
  Assignee: Samsung Electronics Co., Ltd.
  Inventors: Jieun Lee, Jin-Hong Kim, Jaehyung Ahn, Sungduk Cho
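The dependency check described above (run a job only once its predecessor has completed, otherwise defer it) can be sketched as a simple deferring queue. The data shapes and function names are assumptions for illustration, not the patented scheduler:

```python
from collections import deque

def run_jobs(jobs: dict[str, list[str]], execute) -> list[str]:
    """Run each job only after its predecessors complete; defer others.

    `jobs` maps a job id to the list of its predecessor job ids
    (a hypothetical shape). Assumes the dependency graph is acyclic;
    a cycle would make this loop forever.
    """
    done: set[str] = set()
    queue = deque(jobs)
    order: list[str] = []
    while queue:
        job = queue.popleft()
        if all(dep in done for dep in jobs[job]):
            execute(job)       # predecessors complete: perform the job
            done.add(job)
            order.append(job)
        else:
            queue.append(job)  # a predecessor is still pending: re-queue
    return order
```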
- Patent number: 12008674
  Abstract: Systems, apparatuses and methods may provide a way to monitor, by a process monitor, one or more processing factors of one or more client devices hosting one or more user sessions. More particularly, the systems, apparatuses and methods may provide a way to generate, responsively, a scene generation plan based on one or more of a digital representation of an N dimensional space or at least one of the one or more processing factors, and generate, by a global scene generator, a global scene common to the one or more client devices based on the digital representation of the space. The systems, apparatuses and methods may further provide for performing, by a local scene generator, at least a portion of the global illumination based on one or more of the scene generation plan, or application parameters.
  Type: Grant
  Filed: December 28, 2020
  Date of Patent: June 11, 2024
  Assignee: Intel Corporation
  Inventors: Balaji Vembu, David M. Cimini, Elmoustapha Ould-Ahmed-Vall, Jacek Kwiatkowski, Philip R. Laws, Abhishek R. Appu
- Patent number: 12001679
  Abstract: An apparatus comprises a processing device configured to detect an input-output (IO) pressure condition relating to at least one logical storage volume of a storage system, to receive IO operations directed to the at least one logical storage volume, to extract processing entity identifiers from respective ones of the received IO operations, and to perform IO throttling for the at least one logical storage volume based at least in part on the extracted processing entity identifiers. For example, a first group of one or more of the IO operations each having a first processing entity identifier may be subject to the IO throttling, while a second group of one or more of the IO operations each having a second processing entity identifier different than the first processing entity identifier is not subject to the IO throttling. Other differences in IO throttling can be implemented using the extracted processing entity identifiers.
  Type: Grant
  Filed: March 31, 2022
  Date of Patent: June 4, 2024
  Assignee: Dell Products L.P.
  Inventors: Sanjib Mallick, Vinay G. Rao, Arieh Don
- Patent number: 11991240
  Abstract: Methods and systems for managing distribution of inference models throughout a distributed system are disclosed. To manage distribution of inference models, a system may include a data aggregator and one or more data collectors. The data aggregator may obtain a threshold, the threshold indicating an acceptable inference error rate for an inference model. The data aggregator may obtain an inference model based on the threshold by training an inference model, performing a lookup in an inference model lookup table, or via other methods. The data aggregator may optimize the inference model to determine the minimum quantity of computing resources consumed by an inference model in order to generate inferences accurate within the threshold. In order to do so, the data aggregator may simulate the operation of more computationally-costly inference models and less computationally-costly inference models.
  Type: Grant
  Filed: June 27, 2022
  Date of Patent: May 21, 2024
  Assignee: Dell Products L.P.
  Inventors: Ofir Ezrielev, Jehuda Shemer
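The selection step, choosing the least resource-hungry model whose simulated error still meets the threshold, can be sketched as below. The candidate tuples and their cost/error figures are hypothetical, standing in for the simulation results the abstract describes:

```python
def cheapest_adequate_model(models: list[tuple[str, float, float]],
                            error_threshold: float):
    """Pick the lowest-cost (name, resource_cost, error_rate) candidate
    whose error rate stays within the acceptable threshold."""
    adequate = [m for m in models if m[2] <= error_threshold]
    if not adequate:
        return None  # no candidate meets the accuracy requirement
    return min(adequate, key=lambda m: m[1])

best = cheapest_adequate_model(
    [("large", 100.0, 0.01), ("medium", 40.0, 0.04), ("small", 10.0, 0.12)],
    error_threshold=0.05)
```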
- Patent number: 11985076
  Abstract: An example method for automated cluster configuration includes the operations of: receiving cluster configuration data identifying a plurality of nodes of a cluster; receiving a workload description characterizing a plurality of respective workloads of the plurality of nodes; analyzing the workload description to identify, among the plurality of nodes, a plurality of nodes of a first type and a plurality of nodes of a second type; configuring, on at least a subset of the plurality of nodes of the second type, respective node proxies, wherein each node proxy is configured to forward, over a second network, to a chosen node of the first type, incoming requests received over a first network; and configuring an endpoint proxy to forward, over a first network, to one of: a chosen node of the first type or a chosen node of the second type, incoming requests received over an external network.
  Type: Grant
  Filed: December 14, 2022
  Date of Patent: May 14, 2024
  Assignee: Red Hat, Inc.
  Inventors: Yehoshua Salomon, Gabriel Zvi BenHanokh
- Patent number: 11977926
  Abstract: Techniques are described for orchestrating a cohort deployment in a computing network comprising a plurality of computing nodes implementing a virtualized computing network managed by an orchestrator. The cohort deployment is managed by a deployment broker configured to coordinate the cohort deployment. The cohort deployment includes multiple deployments, where the cohort deployment comprises a parent deployment and a spawned deployment that includes a dependency on the parent deployment.
  Type: Grant
  Filed: June 26, 2023
  Date of Patent: May 7, 2024
  Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
  Inventors: Ajay Punreddy, Piotr Galecki, Dinesh Kumar Ramasamy, Thuy Phuong Fernandes, Huanglin Xiong
- Patent number: 11972321
  Abstract: Systems, computer-implemented methods, and computer program products to facilitate quantum computing job scheduling are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a scheduler component that can determine a run order of quantum computing jobs based on one or more quantum based run constraints. The computer executable components can further comprise a run queue component that can store the quantum computing jobs based on the run order. In an embodiment, the scheduler component can determine the run order based on availability of one or more qubits comprising a defined level of fidelity.
  Type: Grant
  Filed: March 11, 2021
  Date of Patent: April 30, 2024
  Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
  Inventors: John A. Gunnels, Mark Wegman, David Kaminsky
- Patent number: 11968237
  Abstract: A processing blade is assigned from a plurality of processing blades to a session of data packets. The load balancing engine manages a session table and an IPsec routing table by updating the session table with a particular security engine card assigned to the session and by updating the IPsec routing table for storing a remote IP address for a particular session. Outbound raw data packets of a particular session are parsed for matching cleartext tuple information prior to IPsec encryption, and inbound encrypted data packets of the particular session are parsed for matching cipher tuple information prior to IPsec decryption. Inbound data packets assigned to the processing blade from the session table are parsed and forwarded to the station.
  Type: Grant
  Filed: March 31, 2022
  Date of Patent: April 23, 2024
  Assignee: Fortinet, Inc.
  Inventors: Yita Lee, Sen Yang, Ting Liu
- Patent number: 11962474
  Abstract: A method (1000) for performance modeling of a plurality of microservices (215) includes deploying the plurality of microservices (215) within a network (1260). The plurality of microservices (215) are communicatively coupled to generate at least one service chain (310) for providing at least one service. Based on a resource allocation configuration, an initial set of training data for the plurality of microservices within the network (1260) is determined. At least a portion of data is excluded from the initial set of training data to generate a subset of training data. A Quality of Service (QoS) behaviour model is generated based on the subset of the training data.
  Type: Grant
  Filed: October 2, 2019
  Date of Patent: April 16, 2024
  Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
  Inventors: Michel Gokan Khan, Wenfeng Hu, Carolyn Cartwright, Huiyuan Wang
- Patent number: 11961070
  Abstract: Systems and methods for providing a resource-based distributed public crypto currency blockchain include system provider device(s) that receive first crypto currency transaction information for a first crypto currency transaction that is configured to provide for the transfer of a crypto currency to a payee via a primary distributed public crypto currency blockchain maintained by computing devices. The system provider device(s) identify resource information provided by each computing device and use the resource information to select a subset of the computing devices for processing the first crypto currency transaction. The system provider device(s) then broadcast, via the network to each computing device, the first crypto currency transaction information for the first crypto currency transaction in order to cause a first computing device to process the first crypto currency transaction as part of a first block that is added to the primary distributed public crypto currency blockchain.
  Type: Grant
  Filed: October 31, 2022
  Date of Patent: April 16, 2024
  Assignee: PayPal, Inc.
  Inventor: Pankaj Sarin
- Patent number: 11960913
  Abstract: A system for dynamically auto-scaling allocated capacity of a virtual desktop environment includes: base capacity resources and burst capacity resources and memory coupled to a controller; wherein, in response to executing program instructions, the controller is configured to: in response to receiving a log in request from a first user device, connect the first user device to a first host pool to which the first user device is assigned; execute a load-balancing module to determine a first session host virtual machine to which to connect the first user device; and execute an auto-scaling module comprising a user-selectable auto-scaling trigger and a user-selectable conditional auto-scaling action, wherein, in response to recognition of the conditional auto-scaling action, the controller powers on or powers off one or more base capacity resources or creates or destroys one or more burst capacity resources.
  Type: Grant
  Filed: March 16, 2022
  Date of Patent: April 16, 2024
  Assignee: Nerdio, Inc.
  Inventor: Vadim Vladimirskiy
- Patent number: 11947988
  Abstract: A process for ingesting raw machine data that reduces network and data intake and query system resources is described herein. For example, instead of routing the raw machine data to an intake ingestion buffer via a load balancer, a publisher may instead route metadata to the load balancer. The load balancer can use the metadata to identify an available virtual machine in the intake ingestion buffer. The load balancer can then provide to the publisher the public IP address of the available virtual machine. The publisher can communicate with the available virtual machine using the public IP address, and the available virtual machine can identify which virtual machine owns the topic related to the raw machine data. The publisher can then transmit raw machine data to the virtual machine that owns the topic.
  Type: Grant
  Filed: October 19, 2020
  Date of Patent: April 2, 2024
  Assignee: Splunk Inc.
  Inventors: Sanjeev Kulkarni, Matteo Merli, Boyang Peng
- Patent number: 11949760
  Abstract: In accordance with an embodiment, described herein is a system and method for receiving content to be parsed, and configuring a network of parsing devices for use in parsing the content in accordance with templates. The system comprises a management server in communication with the parsing network, and the management server is configured to determine a parsing assignment for one or more parsing devices within the parsing network. The parsing network comprises a plurality of parsing devices, each comprising or associated with an endpoint for enabling communication with the management server. The parsing assignment indicates content items to be parsed by the parsing devices and associated templates for use by the parsing devices.
  Type: Grant
  Filed: May 2, 2022
  Date of Patent: April 2, 2024
  Assignee: Utech, Inc.
  Inventor: Igor Fedyak
- Patent number: 11936722
  Abstract: Techniques are described for wireless communication. One method, for processing a request received via a first mesh network using resources of a second mesh network, includes receiving, at a first node, a request that was generated by a requesting node of the first mesh network. The method further includes determining at the first node, based on configuration information about the second mesh network that is different from the first mesh network, that a second node of the second mesh network has an available computing resources level to process data related to the request. In accordance with the determining, the method additionally includes instructing the second node of the second mesh network to process the data related to the request to create requested data. And the requested data is provided to the requesting node of the first mesh network.
  Type: Grant
  Filed: October 12, 2021
  Date of Patent: March 19, 2024
  Assignee: AERVIVO, INC.
  Inventor: Michael John Hart
- Patent number: 11936562
  Abstract: A method to offload network function packet processing from a virtual machine onto an offload destination is disclosed. In an embodiment, a method comprises: defining an application programming interface ("API") for capturing, in a packet processor offload, a network function packet processing for a data flow by specifying how to perform the network function packet processing on data packets that belong to the data flow. Based on capabilities of the packet processor offload and available resources, a packet processing offload destination is selected. Based at least on the API, the packet processor offload for the packet processing offload destination is generated. The packet processor offload is downloaded to the packet processing offload destination to configure the packet processing offload destination to provide the network function packet processing on the data packets that belong to the data flow. The packet processing offload destination is a PNIC or a hypervisor.
  Type: Grant
  Filed: July 19, 2018
  Date of Patent: March 19, 2024
  Assignee: VMware, Inc.
  Inventors: Boon Seong Ang, Yong Wang, Guolin Yang, Craige Wenyi Jiang
- Patent number: 11918896
  Abstract: An apparatus for managing an online game. The apparatus includes a processor and a memory. The processor is configured to identify a set of client devices engaged in an online game; identify game parameters associated with client devices in the set; define at least one group of client devices from the set of client devices, wherein the at least one group of client devices includes client devices with similar game parameters; determine communication latency between each of the client devices in the group of client devices and a server; define a subgroup of client devices from each group of client devices, wherein the subgroup of client devices includes client devices with a similar communication latency; and enable the client devices in the subgroup of client devices to engage in a game session of the online game.
  Type: Grant
  Filed: February 14, 2022
  Date of Patent: March 5, 2024
  Assignee: Supercell Oy
  Inventors: Robert Kamphuis, Jonne Loikkanen, Jon Franzas
- Patent number: 11924019
  Abstract: The present disclosure relates to a system comprising an alarm management module (AMM) that receives an alarm raised by an application running on a network function virtualization unit (NFVU) infrastructure, said NFVU infrastructure comprising a virtualization layer; and facilitates enrichment of the received alarm with NFVU infrastructure specific information based on a physical-and-virtual inventory associated with the NFVU infrastructure, said NFVU infrastructure specific information pertaining to hardware and virtual resources of the NFVU infrastructure that are involved in running said application.
  Type: Grant
  Filed: May 31, 2022
  Date of Patent: March 5, 2024
  Assignee: JIO PLATFORMS LIMITED
  Inventors: Dilip Krishnaswamy, Aayush Bhatnagar, Ankit Murarka
- Patent number: 11907764
  Abstract: Techniques regarding the management of computational resources based on clinical priority associated with one or more computing tasks are provided. For example, one or more embodiments described herein can regard a system comprising a memory that can store computer-executable components. The system can also comprise a processor, operably coupled to the memory, that executes the computer-executable components stored in the memory. The computer-executable components can include a prioritization component that can prioritize computer applications based on a clinical priority of tasks performed by the computer applications. The clinical priority can characterize a time sensitivity of the tasks. The computer-executable components can also include a resource pool component that can divide computational resources across a plurality of resource pools and can assign the computer applications to the plurality of resource pools based on the clinical priority.
  Type: Grant
  Filed: October 7, 2020
  Date of Patent: February 20, 2024
  Assignee: GE PRECISION HEALTHCARE LLC
  Inventors: Evgeny Drapkin, Michael Braunstein, Fausto Espinal, David Minor, Greg Ohme, Ben Dayan, David Chevalier, Manoj Unnikrishnan
- Patent number: 11907219
  Abstract: A node includes a plurality of processing core resources. Each processing core resource of the plurality of processing core resources includes a corresponding processing module, a corresponding memory interface module, a corresponding memory device, and a corresponding cache memory. The plurality of processing core resources of the node is operable to collectively perform corresponding operations of the node. Each processing core resource of the plurality of processing core resources of the node is operable to perform operations independently from other ones of the plurality of processing core resources of the node.
  Type: Grant
  Filed: March 3, 2023
  Date of Patent: February 20, 2024
  Assignee: Ocient Holdings LLC
  Inventors: George Kondiles, Jason Arnold
-
Patent number: 11909666
Abstract: A method for optimizing network device resources that includes receiving, by an optimizer, first resource utilization data, making a first determination, based on the first resource utilization data, that resource utilization exceeds an upper threshold, starting, based on the first determination, an optimization process, that includes identifying a resource optimization entry of a resource class optimization queue, and initiating optimization of a resource fragment specified by the resource optimization entry. After initiating optimization of the region of the memory, the method additionally includes receiving second resource utilization data, making a second determination, based on the second resource utilization data, that the resource utilization is below a lower threshold, and halting, based on the second determination, the optimization process.
Type: Grant
Filed: November 21, 2022
Date of Patent: February 20, 2024
Assignee: ARISTA NETWORKS, INC.
Inventors: Binglai Niu, Mayukh Saubhasik
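The start/stop behavior this abstract describes is a classic hysteresis loop: begin optimizing above an upper threshold, halt below a lower one. A minimal sketch, with the threshold values and utilization samples invented for illustration:

```python
# Hypothetical sketch of threshold-driven start/stop (hysteresis):
# start the optimization process when utilization crosses the upper
# threshold, halt it once utilization falls below the lower threshold.
# Threshold values are illustrative.

UPPER = 0.80
LOWER = 0.60

class Optimizer:
    def __init__(self):
        self.running = False

    def on_utilization(self, utilization):
        """Return True while the optimization process should run."""
        if not self.running and utilization > UPPER:
            self.running = True          # first determination: start
        elif self.running and utilization < LOWER:
            self.running = False         # second determination: halt
        return self.running

opt = Optimizer()
print([opt.on_utilization(u) for u in [0.5, 0.85, 0.7, 0.55]])
# [False, True, True, False]
```

The gap between the two thresholds prevents the optimizer from flapping on small utilization fluctuations around a single cutoff.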
-
Patent number: 11895181
Abstract: Examples of the disclosure include a microserver system comprising a plurality of microservers, a common hardware bus interconnecting the microservers, each microserver of the plurality of microservers being configured to execute one or more applications, and a controller coupled to the plurality of microservers, the controller being configured to determine, based on application-load data associated with the one or more applications, a first application load of a first set of one or more applications executed by a first microserver of the plurality of microservers and a second application load of a second set of one or more applications executed by a second microserver of the plurality of microservers, determine that a combination of the first application load and the second application load is below a maximum-application-load threshold of the second microserver, and migrate the first set of one or more applications from the first microserver to the second microserver.
Type: Grant
Filed: December 4, 2020
Date of Patent: February 6, 2024
Assignee: SCHNEIDER ELECTRIC IT CORPORATION
Inventor: Michael Kenneth Schmidt
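The consolidation test at the heart of this abstract can be sketched in a few lines. The load units and threshold value below are illustrative assumptions:

```python
# Hypothetical sketch of the consolidation check: migrate the first
# microserver's applications onto the second only when their combined
# load stays below the second server's maximum-application-load
# threshold. Loads are expressed as fractions of capacity (illustrative).

def try_migrate(first_load, second_load, second_max):
    """Return True (migrate) if the combined load fits on the second server."""
    return first_load + second_load < second_max

print(try_migrate(0.30, 0.45, 0.90))  # True  -> consolidate onto server 2
print(try_migrate(0.55, 0.45, 0.90))  # False -> leave applications in place
```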
-
Patent number: 11888756
Abstract: A load balancer receives a sequence of requests for computing service and distributes the requests for computing service to a computing node in an ordered list of computing nodes until the computing node reaches its maximum allowable compute capability. Responsive to an indication that the computing node has reached its maximum allowable compute capability, the load balancer distributes subsequent requests for computing service to another computing node in the ordered list. If the computing node is the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to a computing node other than one of the computing nodes in the ordered list of computing nodes. If the computing node is not the last computing node in the ordered list, the load balancer distributes a subsequent request for computing service to another computing node in the ordered list of computing nodes.
Type: Grant
Filed: June 8, 2021
Date of Patent: January 30, 2024
Assignee: PayPal, Inc.
Inventors: Rema Hariharan, Sathyamangalam Ramaswamy Venkatramanan, Ashok Srinivasa Murthy, Rami El-Charif
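This ordered-list dispatch (fill each node in order, then overflow past the end of the list) can be sketched as follows. Node names, capacities, and the "full" indicator are illustrative assumptions:

```python
# Hypothetical sketch of ordered-list dispatch: send each request to the
# first node in the ordered list that has not reached its maximum
# allowable compute capability; once the last node is full, overflow to
# a node outside the list. Capacities are illustrative.

def dispatch(ordered_nodes, overflow_node, request):
    """Send a request to the first non-full node; overflow past the end."""
    for node in ordered_nodes:
        if not node["full"]:
            node["served"] += 1
            if node["served"] >= node["capacity"]:
                node["full"] = True   # node reports it is at capacity
            return node["name"]
    return overflow_node              # last node full -> go outside the list

nodes = [{"name": "n1", "capacity": 2, "served": 0, "full": False},
         {"name": "n2", "capacity": 1, "served": 0, "full": False}]
targets = [dispatch(nodes, "spare", r) for r in range(5)]
print(targets)  # ['n1', 'n1', 'n2', 'spare', 'spare']
```

Unlike round-robin, this policy deliberately saturates nodes in order, which keeps later nodes in the list idle until the earlier ones are full.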
-
Patent number: 11875191
Abstract: Methods, systems, and computer-readable media for energy-optimizing placement of resources in data centers are disclosed. A resource placement manager determines information descriptive of energy usage by one or more data centers. The one or more data centers comprise a plurality of computing resources in a plurality of corresponding locations. The resource placement manager selects, from the plurality of computing resources in the plurality of corresponding locations, a particular computing resource in a particular location for performing one or more computing tasks. The particular computing resource in the particular location is selected based at least in part on reducing energy usage associated with the one or more data centers according to the information descriptive of energy usage. The particular computing resource in the particular location is used to perform the one or more computing tasks.
Type: Grant
Filed: March 2, 2020
Date of Patent: January 16, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Jamie Plenderleith, Brian Hayward, Monika Marta Gnyp, Sarah Rose Quigley, Suzie Cuddy
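In its simplest form, energy-aware selection is an argmin over candidate locations. A minimal sketch, with the location names and per-location energy figures invented for illustration:

```python
# Hypothetical sketch of energy-aware placement: choose the candidate
# resource whose location currently reports the lowest energy cost.
# Location names and energy figures are illustrative.

def place(task, candidates):
    """candidates: {resource_location: energy_cost}; pick the cheapest."""
    resource = min(candidates, key=candidates.get)
    return task, resource

energy = {"us-east-rack3": 1.7, "eu-west-rack1": 1.2, "us-west-rack9": 1.5}
print(place("batch-encode", energy))  # ('batch-encode', 'eu-west-rack1')
```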
-
Patent number: 11861272
Abstract: A system configured to implement Comprehensive Contention-Based Thread Allocation and Placement, may generate a description of a workload from multiple profiling runs and may combine this workload description with a description of the machine's hardware to model the workload's performance over alternative thread placements. For instance, the system may generate a machine description based on executing stress applications and machine performance counters monitoring various performance indicators during execution of a synthetic workload. Such a system may also generate a workload description based on profiling sessions and the performance counters. Additionally, behavior of a workload with a proposed thread placement may be modeled based on the machine description and workload description and a prediction of the workload's resource demands and/or performance may be generated.
Type: Grant
Filed: August 30, 2017
Date of Patent: January 2, 2024
Assignee: Oracle International Corporation
Inventors: Timothy L. Harris, Daniel J. Goodman
-
Patent number: 11863352
Abstract: Some embodiments of the invention provide a novel network architecture for deploying guest clusters (GCs) including workload machines for a tenant (or other entity) within an availability zone. The novel network architecture includes a virtual private cloud (VPC) deployed in the availability zone (AZ) that includes a centralized routing element that provides access to a gateway routing element of the AZ. In some embodiments, the centralized routing element provides a set of services for packets traversing a boundary of the VPC. The services, in some embodiments, include load balancing, firewall, quality of service (QoS) and may be stateful or stateless. Guest clusters are deployed within the VPC and use the centralized routing element of the VPC to access the gateway routing element of the AZ.
Type: Grant
Filed: February 25, 2021
Date of Patent: January 2, 2024
Assignee: VMWARE, INC.
Inventors: Jianjun Shen, Mark Johnson, Gaetano Borgione, Benjamin John Corrie, Derek Beard, Zach James Shepherd, Vinay Reddy
-
Patent number: 11853317
Abstract: Creating replicas using queries may be implemented for a time series database. A new host for a new copy of time series database data may be added and idempotent ingestion of additional data to be included in the new copy after a creation time for the new copy may be performed. Queries to other hosts that store the time series database data may be performed to obtain time series data prior to the creation time. Idempotent ingestion of the results of the queries may be performed at the new host after which performance of queries to the new copy of the time series database may be allowed at the new host.
Type: Grant
Filed: March 18, 2019
Date of Patent: December 26, 2023
Assignee: Amazon Technologies, Inc.
Inventor: Dumanshu Goyal
-
Patent number: 11853193
Abstract: An approach is provided for a program profiler to implement inverse performance driven program analysis, which enables a user to specify a desired optimization end state and receive instructions on how to implement the optimization end state. The program profiler accesses profile data from an execution of a plurality of tasks executed on a plurality of computing resources. The program profiler constructs a dependency graph based on the profile data. The program profiler causes a user interface to be presented that represents the profile data. The program profiler receives an input for a modification of one or more execution attributes of one or more target tasks. The program profiler determines that the modification is projected to improve a performance metric while maintaining a validity of the dependency graph. The program profiler presents, via the user interface, one or more steps to implement the modification.
Type: Grant
Filed: October 29, 2021
Date of Patent: December 26, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Budirijanto Purnomo, Chen Shen
-
Patent number: 11853394
Abstract: The present disclosure relates to a method for defining image classes and to a method for image multiprocessing, together with a related vision system, which implement said method for image classes definition.
Type: Grant
Filed: November 21, 2018
Date of Patent: December 26, 2023
Assignee: Datalogic IP Tech S.R.L.
Inventors: Francesco D'Ercoli, Francesco Paolo Muscaridola
-
Patent number: 11855847
Abstract: A method for managing a virtual desktop infrastructure (VDI) environment includes: obtaining a plurality of target resource specific pool specific configuration templates for a target resource, in which each of the plurality of target resource specific pool specific configuration templates is associated with one of a plurality of virtual desktop (VD) pools, in which the target resource is a network resource; obtaining a common configuration template set; generating a VD pool configuration for each of the plurality of VD pools using the plurality of target resource specific pool specific configuration templates and the common configuration template set to obtain a plurality of VD pool configurations; selecting a default VD pool from the plurality of VD pools; and deploying, based on the selection, a plurality of VDs into the default VD pool using a VD pool configuration associated with the default VD pool.
Type: Grant
Filed: April 18, 2022
Date of Patent: December 26, 2023
Assignee: Dell Products L.P.
Inventors: John Kelly, Dharmesh M. Patel
-
Patent number: 11831562
Abstract: Systems and methods for efficient database management of non-transitory readable media, including a memory configured to store information associated with service instance requests across a plurality of distributed network resources and a processor configured to receive a service instance request, determine the first native domain object associated with the service instance request, allocate the plurality of network resources to a plurality of distributed worker instances dependent upon a first native domain object, assign the first service instance request to a first worker instance that includes a microservice instance that define service instance blocks to execute the request, and a service instance block manager configured to manage the first service instance request in conjunction with subsequent service instance requests associated with the plurality of worker instances, track running and completed requests, and allocate resources for similar requests across the distributed network nodes.
Type: Grant
Filed: October 4, 2021
Date of Patent: November 28, 2023
Inventors: Ronald M. Parker, Jeremy Brown, Haibo Qian
-
Patent number: 11809953
Abstract: Embodiments include techniques for enabling execution of N inferences on an execution engine of a neural network device. Instruction code for a single inference is stored in a memory that is accessible by a DMA engine, the instruction code forming a regular code block. A NOP code block and a reset code block for resetting an instruction DMA queue are stored in the memory. The instruction DMA queue is generated such that, when it is executed by the DMA engine, it causes the DMA engine to copy, for each of N inferences, both the regular code block and an additional code block to an instruction buffer. The additional code block is the NOP code block for the first N−1 inferences and is the reset code block for the Nth inference. When the reset code block is executed by the execution engine, the instruction DMA queue is reset.
Type: Grant
Filed: September 2, 2022
Date of Patent: November 7, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Samuel Jacob, Ilya Minkin, Mohammad El-Shabani
-
Patent number: 11811862
Abstract: Methods and systems for managing workloads are disclosed. The workloads may be supported by operation of workload components that are hosted by infrastructure. The hosted locations of the workload components by the infrastructure may impact the performance of the workloads. To manage performance of the workloads, an optimization process may be performed to identify a migration plan for migrating some of the workload components to different infrastructure locations. Some of the different infrastructure locations may reduce computing resource cost for performance of the workloads.
Type: Grant
Filed: April 26, 2023
Date of Patent: November 7, 2023
Assignee: Dell Products L.P.
Inventors: Ofir Ezrielev, Lior Gdaliahu, Roman Bober, Yonit Lopatinski, Eliyahu Rosenes
-
Patent number: 11785117
Abstract: Embodiments described herein provide methods and apparatuses for providing processing functions by microservices in a service. A first microservice is capable of providing a first processing function in a service comprising a plurality of microservices. The method includes receiving a processing request to provide the first processing function; obtaining a sequence of a plurality of microservices associated with the processing request, wherein the sequence comprises the first microservice; obtaining a current latency requirement associated with the remaining microservices in the sequence; obtaining an estimated latency associated with the remaining microservices in the sequence; and placing the processing request in a processing queue based on a comparison between the current latency requirement and the estimated latency.
Type: Grant
Filed: June 26, 2019
Date of Patent: October 10, 2023
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Xuejun Cai, Zhang Fu, Kun Wang
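The queue-placement decision in this abstract reduces to comparing a remaining latency budget against an estimate for the rest of the microservice sequence. A minimal sketch, with the budget and estimate values invented for illustration:

```python
# Hypothetical sketch of the admission decision: place a request in the
# processing queue only if the estimated latency of the remaining
# microservices in the sequence fits within the current latency budget.
# Budget and estimate values are illustrative.

def place_in_queue(queue, request, remaining_budget_ms, estimated_ms):
    """Enqueue if the latency estimate fits the remaining budget."""
    if estimated_ms <= remaining_budget_ms:
        queue.append(request)
        return True
    return False   # would miss the deadline; reject or reroute instead

q = []
print(place_in_queue(q, "req-1", remaining_budget_ms=40, estimated_ms=25))  # True
print(place_in_queue(q, "req-2", remaining_budget_ms=40, estimated_ms=60))  # False
print(q)  # ['req-1']
```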
-
Patent number: 11775351
Abstract: A method for processing data on a programmable logic controller includes assigning a priority with a predetermined priority level to at least one parallel processing section of a program of a master-processor core of a control task. Respective priority levels are inserted into a data structure as the respective master-processor core arrives at the parallel processing section. A parallel-processor core examines whether entries are present in the data structure and processes partial tasks from a work package of the master-processor core whose priority level ranks first among the entries. A real-time condition of the control task is met by setting executing times of the programs for the master-processor core so that the master-processor core is capable of processing the partial tasks from the work packages without being supported by the parallel-processor core. The master-processor core further processes partial tasks not processed by the at least one parallel-processor core.
Type: Grant
Filed: December 4, 2018
Date of Patent: October 3, 2023
Assignee: Beckhoff Automation GmbH
Inventor: Robin Vogt
-
Patent number: 11762697
Abstract: The present disclosure discloses a method and apparatus for scheduling a resource for a deep learning framework. The method can comprise: querying statuses of all deep learning job objects from a Kubernetes platform at a predetermined interval; and submitting, in response to finding from the queried deep learning job objects a deep learning job object having a status conforming to a resource request submission status, a resource request to the Kubernetes platform to schedule a physical machine where the Kubernetes platform is located to initiate a deep learning training task. The method can completely automate the allocation and release of resources for the deep learning training task.
Type: Grant
Filed: January 15, 2019
Date of Patent: September 19, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Kun Liu, Kai Zhou, Qian Wang, Yuanhao Xiao, Lan Liu, Dongze Xu, Tianhan Xu, Jiangliang Guo, Jin Tang, Faen Zhang, Shiming Yin