Load Balancing Patents (Class 718/105)
-
Patent number: 12386666
Abstract: A working method and device for a deep learning training task. GPUs are allocated to multiple deep learning training tasks according to the remaining resources of the GPUs in a single server node or multiple server nodes, so that multiple deep learning training tasks can be accommodated while maintaining the utilization rate of the GPUs. The method comprises: obtaining a deep learning training task parameter input by a user; determining the type of the deep learning training task from the task parameter, the type comprising: single-model and multi-model; selecting GPUs by different policies according to the different deep learning training task types; and selecting, according to the position of the GPU, a CPU having the shortest communication distance from the GPU for working.
Type: Grant
Filed: December 30, 2019
Date of Patent: August 12, 2025
Assignee: GUANGDONG INSPUR SMART COMPUTING TECHNOLOGY CO., LTD.
Inventors: Renming Zhao, Pei Chen
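The last selection step above — pairing a chosen GPU with the CPU at the shortest communication distance — reduces to an argmin over a topology table. A minimal sketch, assuming a hypothetical distance map keyed by (gpu, cpu) pairs (none of these names or values come from the patent):

```python
def nearest_cpu(gpu_id, distance):
    """Return the CPU whose communication distance to gpu_id is smallest.

    distance: dict mapping (gpu_id, cpu_id) -> hop/latency cost.
    """
    cpus = {cpu for (g, cpu) in distance if g == gpu_id}
    return min(cpus, key=lambda cpu: distance[(gpu_id, cpu)])

# Toy topology: GPU 0 shares a NUMA node with CPU 0, GPU 1 with CPU 1.
topology = {(0, 0): 1, (0, 1): 4, (1, 0): 4, (1, 1): 1}
```

In a real server the distance values would come from the NUMA/PCIe topology rather than a hand-written table.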
-
Patent number: 12386828
Abstract: This disclosure provides a query task execution method, apparatus, computer device, and storage medium. The method includes: receiving a query task for a target data table; determining at least one candidate degree of parallelism for executing the query task based on a first number of child tables obtained by pre-partitioning the target data table; selecting a target degree of parallelism within a preset range of degrees of parallelism from the at least one candidate degree of parallelism; and determining required computing units according to the target degree of parallelism, and evenly allocating the child tables to the computing units, which concurrently execute the query task based on the allocated child tables.
Type: Grant
Filed: June 18, 2024
Date of Patent: August 12, 2025
Assignee: Beijing Volcano Engine Technology Co., Ltd.
Inventors: Wei Ding, Yuanjin Lin, Jianfeng Qian, Li Zhang, Jianjun Chen, Rui Shi
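The final allocation step can be read as round-robin placement of the pre-partitioned child tables onto the computing units chosen for the target degree of parallelism. A minimal sketch under that assumption (the abstract only says "evenly allocating"; round-robin is one way to achieve it):

```python
def allocate_child_tables(child_tables, num_units):
    """Spread pre-partitioned child tables evenly across computing units
    (round-robin), so the units can execute the query task concurrently."""
    units = [[] for _ in range(num_units)]
    for i, table in enumerate(child_tables):
        units[i % num_units].append(table)
    return units

# 7 child tables over 3 computing units: sizes differ by at most one.
assignment = allocate_child_tables([f"t{i}" for i in range(7)], 3)
```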
-
Patent number: 12389270
Abstract: A management apparatus in a radio access network including RUs (Radio Units) and vDUs (virtual Distributed Units), the management apparatus comprising: a monitoring unit configured to acquire load states of the respective vDUs; and a control unit configured to determine, among the vDUs, based on the load states that have been acquired, a first vDU in which a load should be lowered and a second vDU for receiving part of the load from the first vDU, to determine, among RUs connected to the first vDU, an RU for which a connection destination is to be changed to the second vDU, and to change the connection destination of the determined RU from the first vDU to the second vDU.
Type: Grant
Filed: November 26, 2021
Date of Patent: August 12, 2025
Assignee: RAKUTEN MOBILE, INC.
Inventors: Jin Nakazato, Saki Tanaka
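The control unit's decision can be sketched as: pick the most-loaded vDU as the source, the least-loaded as the target, and one of the source's RUs to re-home. The selection policy below (max/min load, first connected RU) is an illustrative guess; the abstract only states that such a first vDU, second vDU, and RU are determined:

```python
def pick_rebalance(vdu_loads, ru_map):
    """Choose the most-loaded vDU as source, the least-loaded as target,
    and one RU on the source whose connection destination will change."""
    first = max(vdu_loads, key=vdu_loads.get)   # vDU whose load should drop
    second = min(vdu_loads, key=vdu_loads.get)  # vDU to receive part of it
    ru = ru_map[first][0]                       # an RU connected to "first"
    return first, second, ru

loads = {"vdu-a": 0.9, "vdu-b": 0.3}
rus = {"vdu-a": ["ru-1", "ru-2"], "vdu-b": ["ru-3"]}
```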
-
Patent number: 12375552
Abstract: A system is provided that includes: a first load balancing device cluster, the first load balancing device cluster includes a first load balancing device pool and a second load balancing device pool; at least one first switch respectively coupled with each load balancing device in the first load balancing device pool via a routing protocol link; and at least one second switch respectively coupled with each load balancing device in the second load balancing device pool via a routing protocol link, the at least one first switch and the at least one second switch are configured to be connectable to the Internet; and one of the first load balancing device pool and the second load balancing device pool is configured as a standby load balancing device pool of the other.
Type: Grant
Filed: April 14, 2022
Date of Patent: July 29, 2025
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
Inventors: Fenghui Zhang, Feitong Wang, Lin Jiang, Aiyi Liang
-
Patent number: 12367074
Abstract: A resource controller module of a network management system receives a request for an allocation of threads to perform a job associated with a job category for a tenant associated with the network management system. The resource controller module determines, based on the request, a number of available threads associated with the job category of the system at a particular time and causes, based on the request and the number of available threads associated with the job category at the particular time, a group of threads associated with the job category to be allocated to perform the job.
Type: Grant
Filed: December 20, 2021
Date of Patent: July 22, 2025
Assignee: Juniper Networks, Inc.
Inventors: Chandrasekhar A, Jayanthi R
-
Patent number: 12353894
Abstract: An electronic device is provided. The electronic device includes a memory configured to store at least one application, a database storing determination criteria information to determine an abnormal operation of the at least one application, and an agent configured to control a process of the at least one application, and a processor, coupled to the memory, configured to execute the agent, in which, when executing the agent, the processor is configured to receive state information related to the abnormal operation of the at least one application from an operating system (OS), determine a target application from the at least one application in which an abnormal operation is expected based on the determination criteria information, and preload the target application in a background.
Type: Grant
Filed: July 14, 2023
Date of Patent: July 8, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Changho Lee, Byoungkug Kim, Kwangtaek Woo, Deukkyu Oh, Jinwan An
-
Patent number: 12355632
Abstract: The disclosure describes a method and system for dynamic frequency scaling of a multi-core processor in wireless communication networks. The method comprises: transmitting, by a network node (NN), core-load data and a plurality of key indicators of each core group of a plurality of core groups in the multi-core processor to a central management entity (CME); receiving, by the NN, a core-load prediction model associated with each core group from the CME; determining, by the NN, estimated core-load data for each core group using the associated core-load prediction model and determining, by the NN, a maximum estimated core-load data among the estimated core-load data of each core group; and determining, by the NN, an optimum multi-core processor frequency for the network node based on the maximum estimated core-load data.
Type: Grant
Filed: February 7, 2024
Date of Patent: July 8, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Vishal Murgai, Swaraj Kumar, Srihari Das Sunkada Gopinath, Gihyun Kim, Hyunho Lee
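The final step — mapping the maximum estimated core-group load to an optimum processor frequency — might look like the following sketch. The capacity-to-frequency table and the "lowest frequency that covers the peak load" rule are hypothetical; the abstract does not specify how the mapping is done:

```python
def optimum_frequency(estimated_loads, freq_table):
    """Pick the lowest frequency whose load capacity covers the maximum
    estimated core-group load (illustrative policy, not the patent's)."""
    peak = max(estimated_loads.values())
    for capacity, freq_mhz in sorted(freq_table):
        if capacity >= peak:
            return freq_mhz
    return max(freq_table)[1]  # fall back to the highest frequency

# Toy table of (load capacity, frequency in MHz) pairs.
table = [(0.5, 1200), (0.75, 1800), (1.0, 2400)]
```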
-
Patent number: 12340257
Abstract: Technologies are provided for a multi-cloud bursting service. The service generates a template for provisioning a compute environment in a selected cloud service provider. The template defines a software application to be used for a job to be processed by the compute environment and a hardware setting identifying hardware to be used in the compute environment. The service receives, from a local compute environment, a request to provision the compute environment and transmits the job and the template to the selected cloud service provider such that the selected cloud service provider, based on the cloud agnostic burst template: provisions the compute environment; and deploys the job on the compute environment in the selected cloud service provider.
Type: Grant
Filed: April 18, 2024
Date of Patent: June 24, 2025
Assignee: Adaptive Computing Enterprises, Inc.
Inventor: Arthur L. Allen
-
Patent number: 12333332
Abstract: In an application execution system having a plurality of application servers, each application server stores a plurality of applications, and has computational resources for executing applications in response to received requests. Each application server also includes instructions for loading a respective application into volatile storage and executing the application in response to a request from a client, and for returning a result. A generic application instance may be cloned, creating a pool of generic application instance clones that can be loaded with code for a requested application to produce an application instance. The application instance can then be stored in a cache to be used for a future application request.
Type: Grant
Filed: February 22, 2023
Date of Patent: June 17, 2025
Assignee: Google LLC
Inventors: Kenneth Ashcraft, Jon P. McAlister, Kevin A. Gibbs, Ryan C. Barrett
-
Patent number: 12326822
Abstract: A data processing method, applied to a server including a network interface card, a central processing unit and a storage medium. The network interface card performs traffic distribution processing on initial data, determines control data, index data and service data of the initial data, and stores the control data and the index data in the central processing unit. The central processing unit parses the control data, determines a data execution operator corresponding to the control data, issues the data execution operator to the network interface card, processes the index data of the initial data, and stores the processed index data in the medium. The network interface card performs calculation on the service data based on the data execution operator, and stores, in the medium, target service data, index data of the target service data, and metadata of the target service data and the index data determined through the calculation.
Type: Grant
Filed: July 19, 2022
Date of Patent: June 10, 2025
Assignee: Hangzhou AliCloud Feitian Information Technology Co., Ltd.
Inventors: Xuhui Li, Zhongjie Wu
-
Patent number: 12321781
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for scheduling tasks of ML workloads. A system receives requests to perform the workloads and determines, based on the requests, resource requirements to perform the workloads. The system includes multiple hosts and each host includes multiple accelerators. The system determines a quantity of hosts assigned to execute tasks of the workload based on the resource requirement and the accelerators for each host. For each host in the quantity of hosts, the system generates a task specification based on a memory access topology of the host. The specification specifies the task to be executed at the host using resources of the host that include the multiple accelerators. The system provides the task specifications to the hosts and performs the workloads when each host executes assigned tasks specified in the task specifications for the host.
Type: Grant
Filed: December 29, 2022
Date of Patent: June 3, 2025
Assignee: Google LLC
Inventors: Jue Wang, Hui Huang
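If the resource requirement is expressed in accelerators and each host contributes a fixed number of them, the quantity of hosts reduces to a ceiling division. A sketch under that (assumed) reading of the abstract:

```python
import math

def hosts_needed(required_accelerators, accelerators_per_host):
    """Quantity of hosts assigned to a workload, assuming the requirement
    is a count of accelerators and every host has the same number."""
    return math.ceil(required_accelerators / accelerators_per_host)
```

For example, a workload needing 10 accelerators on hosts with 4 accelerators each would be assigned 3 hosts, the last one only partially used.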
-
Patent number: 12288283
Abstract: Methods, systems and apparatuses may provide for technology that determines that a state of a plurality of primitives is associated with out-of-order execution. The plurality of primitives is associated with a raster order. The technology reorders the plurality of primitives from a raster order, and distributes one or more of pixel processing operations or rasterization operations associated with the plurality of primitives to load balance across one or more of a plurality of execution units of a graphics processor or a graphics pipeline of the graphics processor.
Type: Grant
Filed: June 24, 2021
Date of Patent: April 29, 2025
Assignee: Intel Corporation
Inventors: Prasoonkumar Surti, Jorge Garcia Pabon, John Gierach
-
Patent number: 12285694
Abstract: Systems, methods, and computer-readable media for providing a game to a user through a user interface. A gaming platform may include a game client, a game window, a message flow, a launcher, a game launch service, a game provider, a platform back end, and a testing module. The game client may provide the game. The game client may be embedded in the game window. The game window may provide the ability for a user to interact with the game through an input, as well as one or more interface elements agnostic to the game. The message flow may facilitate the translation and communication of a message from a first format associated with the game client and a second format associated with the game window.
Type: Grant
Filed: April 2, 2024
Date of Patent: April 29, 2025
Assignee: FanDuel Limited
Inventors: Vlad-Alexandru Cioflica, Dan Laslo-Faur
-
Patent number: 12282800
Abstract: Devices and techniques for thread replay to preserve state in a barrel processor are described herein. An apparatus includes a barrel processor, which includes a temporary memory; and a thread scheduling circuitry; wherein the barrel processor is configured to perform operations through use of the thread scheduling circuitry, the operations including those to: schedule a current thread to place into a pipeline for the barrel processor on a clock cycle, the barrel processor to schedule threads on each clock cycle; store the current thread in the temporary memory; detect that no thread is available on a clock cycle subsequent to the cycle that the current thread is scheduled; and in response to detecting that no thread is available on the subsequent clock cycle, repeat scheduling the current thread based on the contents of the temporary memory.
Type: Grant
Filed: October 20, 2020
Date of Patent: April 22, 2025
Assignee: Micron Technology, Inc.
Inventors: Chris Baronne, Dean E. Walker, John Amelio
-
Patent number: 12277019
Abstract: An electronic device may include a communication module, a temperature sensor, a memory and a processor operatively connected to the communication module, the temperature sensor, and the memory, wherein the processor is configured to identify whether the electronic device is in an overheating state, perform first scheduling by using a scheduling method designated for processes, when the electronic device is not in the overheating state, and control the processes based on the first scheduling, and, when the electronic device is in the overheating state, identify processor usage of at least one background process among the processes, identify at least one background process group based on the processor usage of the at least one background process, identify a first time interval, in which the at least one background process group operates, and a second time interval, in which the at least one background process group does not operate, perform second scheduling for the processes based on the first time interval a
Type: Grant
Filed: November 15, 2022
Date of Patent: April 15, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sungyong Bang, Jongwoo Kim, Hyunjin Noh, Hakryoul Kim, Mooyoung Kim
-
Patent number: 12271762
Abstract: A method may include allocating, based on a first load requirement of a first tenant, a first bin having a fixed capacity for handling the first load requirement of the first tenant. In response to the first load requirement of the first tenant exceeding a first threshold of the fixed capacity of the first bin, packing a second bin allocated to handle a second load requirement of a second tenant. The second bin may be packed by transferring, to the second bin, the first load requirement of the first tenant based on the transfer not exceeding the first threshold of the fixed capacity of the second bin. In response to the transfer exceeding the first threshold of the fixed capacity of the second bin, allocating a third bin to handle the first load requirement of the first tenant.
Type: Grant
Filed: June 25, 2021
Date of Patent: April 8, 2025
Assignee: SAP SE
Inventors: Vengateswaran Chandrasekaran, Sriram Narasimhan, Panish Ramakrishna, Vinay Santurkar, Venkatesh Iyengar, Amit Joshi
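The rule above — place a load into an existing bin only while staying within a threshold of its fixed capacity, otherwise allocate a new bin — can be sketched as first-fit placement. The 80% default threshold and integer load units below are illustrative stand-ins for the patent's unspecified "first threshold":

```python
def place_load(bins, load, capacity, threshold=0.8):
    """First-fit: add `load` to the first bin that stays within
    threshold * capacity, else allocate a new bin. Returns the bin index."""
    limit = threshold * capacity
    for i, used in enumerate(bins):
        if used + load <= limit:
            bins[i] = used + load
            return i
    bins.append(load)          # allocate a new bin for this tenant's load
    return len(bins) - 1

# Two tenants' bins at 70 and 20 units of a 100-unit capacity (limit 80).
tenant_bins = [70, 20]
slot = place_load(tenant_bins, 50, 100)   # does not fit in the first bin
```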
-
Patent number: 12271285
Abstract: A technique for generating component usage statistics involves associating components with blocks of a stream-enabled application. When the streaming application is executed, block requests may be logged by Block ID in a log. The frequency of component use may be estimated by analyzing the block request log with the block associations.
Type: Grant
Filed: August 28, 2023
Date of Patent: April 8, 2025
Assignee: Numecent Holdings, Inc.
Inventors: Jeffrey de Vries, Arthur S. Hitomi
-
Patent number: 12265854
Abstract: Terminating and serializing HTTP load is provided. The method comprises receiving, by a load balancer, a client request. An HTTP parser in the load balancer is invoked, which parses the client request. A lambda function in the load balancer is then invoked, wherein the lambda function specifies data format requirements for a language used in a backend server. The load balancer parses the client request according to the lambda function in a manner specific to the language used in the backend server. The load balancer then serializes the client request according to the lambda function in a manner specific to the language used in the backend server. The load balancer sends the serialized client request to the backend server.
Type: Grant
Filed: November 30, 2021
Date of Patent: April 1, 2025
Assignee: International Business Machines Corporation
Inventor: Gireesh Punathil
-
Patent number: 12259767
Abstract: Performance adaptation for an integrated circuit includes receiving, by a workload prediction system of a hardware processor, telemetry data for one or more systems of the hardware processor. A workload prediction is determined by processing the telemetry data through a workload prediction model executed by a workload prediction controller of the workload prediction system. A profile is selected, from a plurality of profiles, that matches the workload prediction. The selected profile specifies one or more operating parameters for the hardware processor. The selected profile is provided to a power management controller of the hardware processor for controlling an operational characteristic of the one or more systems.
Type: Grant
Filed: March 8, 2023
Date of Patent: March 25, 2025
Assignee: Advanced Micro Devices, Inc.
Inventor: Julian Daniel John
-
Patent number: 12261826
Abstract: A system of one embodiment allows for redirecting service and API calls for containerized applications in a computer network. The system includes a memory and a processor. The system processes a plurality of application workflows of a containerized application workload. The system then identifies at least one application workflow of the plurality of application workflows and at least one workflow-specific routing rule associated with the at least one application workflow. The system then determines at least one proxy server address for each identified application workflow based on the at least one associated workflow-specific routing rule. The system then may communicate the at least one identified application workflow to the at least one proxy server using the at least one determined proxy server address.
Type: Grant
Filed: July 5, 2022
Date of Patent: March 25, 2025
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Hendrikus G. P. Bosch, Alessandro Duminuco, Zohar Kaufman
-
Patent number: 12254331
Abstract: A system for dynamically auto-scaling allocated capacity of a virtual desktop environment includes: base capacity resources and burst capacity resources and memory coupled to a controller; wherein, in response to executing program instructions, the controller is configured to: in response to receiving a log in request from a first user device, connect the first user device to a first host pool to which the first user device is assigned; execute a load-balancing module to determine a first session host virtual machine to which to connect the first user device; and execute an auto-scaling module comprising a user-selectable auto-scaling trigger and a user-selectable conditional auto-scaling action, wherein, in response to recognition of the conditional auto-scaling action, the controller powers on or powers off one or more base capacity resources or creates or destroys one or more burst capacity resources.
Type: Grant
Filed: April 15, 2024
Date of Patent: March 18, 2025
Assignee: Nerdio, Inc.
Inventor: Vadim Vladimirskiy
-
Patent number: 12254357
Abstract: A method and a first agent controlling computing resources in a first edge cloud, for supporting a machine learning operation. When detecting that additional computing resources outside the first edge cloud are needed for the machine learning operation, the first agent obtains said additional computing resources from a second edge cloud. The machine learning operation is then performed by using computing resources in the first edge cloud and the additional computing resources obtained from the second edge cloud.
Type: Grant
Filed: September 26, 2018
Date of Patent: March 18, 2025
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Miljenko Opsenica, Joel Patrik Reijonen
-
Patent number: 12255870
Abstract: A method used by a domain name system (DNS) server is disclosed. The DNS server receives a DNS request containing a host name and a resource record specifying data. The DNS server resolves an internet protocol (IP) address based on the host name. The DNS server resolves a server address of a resource server containing the data specified in the resource record. The DNS server transmits a DNS response including the IP address and the server address.
Type: Grant
Filed: May 9, 2023
Date of Patent: March 18, 2025
Assignee: Huawei Technologies Co., Ltd.
Inventors: Michael McBride, Yingzhen Qu, James Neil Guichard
-
Patent number: 12248821
Abstract: Technologies are provided for a multi-cloud bursting service. An example method can include receiving, via a cloud bursting service associated with different clouds, a cloud bursting configuration enabling the cloud bursting service for a local compute environment; based on the cloud bursting configuration, determining a number of jobs in a jobs queue associated with one or more cloud environments from the different clouds; determining a number of nodes available to process the number of jobs in the jobs queue; based on the number of jobs in the jobs queue and number of nodes available, determining whether to spin up a new node, take offline an existing node, or shutdown the existing node to yield a determination; and based on the determination and cloud bursting configuration, performing a cloud bursting action including spinning up the new node, taking offline the existing node, or shutting down the existing node.
Type: Grant
Filed: April 16, 2024
Date of Patent: March 11, 2025
Assignee: Adaptive Computing Enterprises, Inc.
Inventor: Arthur L. Allen
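The spin-up / take-offline / shutdown decision driven by queue depth and node availability could be sketched as below. The specific policy (one surplus node goes offline, a larger surplus triggers shutdown) is invented for illustration, since the abstract names only the inputs and the three possible actions:

```python
def bursting_action(num_jobs, num_nodes):
    """Toy cloud-bursting policy: compare queued jobs to available nodes
    (one job per node assumed) and choose one of the three actions."""
    if num_jobs > num_nodes:
        return "spin_up"       # queue exceeds capacity: add a node
    if num_jobs < num_nodes:
        # Mild surplus: park one node; larger surplus: shut nodes down.
        return "take_offline" if num_nodes - num_jobs == 1 else "shutdown"
    return "no_change"
```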
-
Patent number: 12217088
Abstract: Techniques described herein can optimize usage of computing resources in a data system. Dynamic throttling can be performed locally on a computing resource in the foreground and autoscaling can be performed in a centralized fashion in the background. Dynamic throttling can lower the load without overshooting while minimizing oscillation and reducing the throttle quickly. Autoscaling may involve scaling in or out the number of computing resources in a cluster as well as scaling up or down the type of computing resources to handle different types of situations.
Type: Grant
Filed: October 30, 2023
Date of Patent: February 4, 2025
Assignee: Snowflake Inc.
Inventors: Johan Harjono, Daniel Geoffrey Karp, Kunal Prafulla Nabar, Rares Radut, Arthur Kelvin Shi
-
Patent number: 12219005
Abstract: A method used by an egress router is disclosed. The egress router obtains a capacity index of an application server attached to the egress router. The egress router further obtains a load index describing a load measurement between the egress router and the application server during a certain time period. The egress router encodes the capacity index and the load index into a packet. The egress router transmits the packet to one or more routers in an Internet protocol (IP) network.
Type: Grant
Filed: April 20, 2023
Date of Patent: February 4, 2025
Assignee: Huawei Technologies Co., Ltd.
Inventors: Linda Dunbar, Huaimo Chen
-
Patent number: 12217098
Abstract: Techniques are described for orchestrating a cohort deployment in a computing network comprising a plurality of computing nodes implementing a virtualized computing network managed by an orchestrator. The cohort deployment is managed by a deployment broker configured to coordinate the cohort deployment. The cohort deployment includes multiple deployments, where the cohort deployment comprises a parent deployment and a spawned deployment that includes a dependency on the parent deployment.
Type: Grant
Filed: April 3, 2024
Date of Patent: February 4, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ajay Punreddy, Piotr Galecki, Dinesh Kumar Ramasamy, Thuy Phuong Fernandes, Huanglin Xiong
-
Patent number: 12207099
Abstract: An example device includes one or more processors; an image capture device coupled to the one or more processors and configured to generate image capture data representative of a three-dimensional (3D) physical environment; an electronic display coupled to the one or more processors; and a memory coupled to the one or more processors, the memory storing instructions to cause the one or more processors to: obtain characteristics of a network associated with the device, generate overlay image data indicative of the one or more characteristics of the network, augment the image capture data with the overlay image data to create augmented image capture data, and output, to the electronic display, the augmented image capture data.
Type: Grant
Filed: March 7, 2022
Date of Patent: January 21, 2025
Assignee: JUNIPER NETWORKS, INC.
Inventors: Lyubov Nesteroff, Yelena Kozlova, Fatima Rafiqui, Arda Akman, Burcu Sahin
-
Patent number: 12206567
Abstract: Techniques for dynamically cloning application infrastructures are provided. In one embodiment, a computer system can monitor one or more metrics pertaining to an infrastructure for an application at a first site. If the one or more metrics exceed or fall below one or more corresponding thresholds, the computer system can clone the infrastructure at a second site distinct from the first site, thereby enabling the application to be hosted at the second site.
Type: Grant
Filed: August 30, 2022
Date of Patent: January 21, 2025
Assignee: Avago Technologies International Sales Pte. Limited
Inventors: Jeffrey P. Hartley, Atul Gosain
-
Patent number: 12197757
Abstract: Techniques for providing a virtual federation approach to increasing efficiency of processing circuitry utilization in storage nodes with a high number of cores. The techniques include, for each of two (2) physical nodes, logically partitioning a plurality of cores into a first domain of cores and a second domain of cores. The techniques include designating the first domain of cores of each physical node as belonging to a first virtual node. The techniques include designating the second domain of cores of each physical node as belonging to a second virtual node. The techniques include operating the first virtual nodes on the two (2) underlying physical nodes as a first virtual appliance, and operating the second virtual nodes on the two (2) underlying physical nodes as a second virtual appliance. In this way, scalability and speedup efficiency can be increased in a multi-core processing environment with a high number of cores.
Type: Grant
Filed: October 4, 2023
Date of Patent: January 14, 2025
Assignee: Dell Products L.P.
Inventors: Vladimir Shveidel, Amitai Alkalay, Steven A. Morley
-
Patent number: 12190103
Abstract: In some examples, a system represents tasks of a project as feature nodes of a force-directed graph, and connects sub-feature nodes, representing associated sub-features, by links to the feature nodes in the force-directed graph. The system sets a size of each respective sub-feature node of the sub-feature nodes based on an amount of resource usage expended on a respective sub-feature represented by the respective sub-feature node. The system causes display of the force-directed graph, and collapses or expands a portion of the force-directed graph responsive to user interaction with the force-directed graph.
Type: Grant
Filed: June 11, 2018
Date of Patent: January 7, 2025
Assignee: Micro Focus LLC
Inventor: Er-Xin Shang
-
Patent number: 12184759
Abstract: A computer-implemented method for controlling one or more devices of a first network. The first network comprises a set of bridging nodes and a set of devices controllable by one or more of the set of bridging nodes. Each bridging node is also a node of a blockchain network, and each device has a respective device identifier. The method is performed by a first one of the bridging nodes and comprises generating a first blockchain transaction. The first blockchain transaction comprises a first input comprising a signature linked to a first public key of the first node, and a first output comprising command data. The command data comprises a respective identifier of a first one of the devices controlled by a second one of the bridging nodes, and a command message for controlling the first device.
Type: Grant
Filed: October 5, 2020
Date of Patent: December 31, 2024
Assignee: nChain Licensing AG
Inventors: Alexander MacKay, Chloe Tartan, Jad Wahab, Antoaneta Serguieva, Craig Steven Wright
-
Patent number: 12175097
Abstract: An illustrative method includes a storage-aware serverless function management system determining a status of a serverless system that implements one or more serverless functions configured to access one or more components of a storage system, determining a utilization of the storage system, and requesting that the storage system adjust storage of data in the storage system based on the status of the serverless system and the utilization of the storage system.
Type: Grant
Filed: May 4, 2023
Date of Patent: December 24, 2024
Assignee: Pure Storage, Inc.
Inventors: Taher Vohra, Luis Pablo Pabón
-
Patent number: 12158830
Abstract: One example method includes identifying a source of a performance issue in a virtualized environment. Telemetry data is collected relative to the flow of a request/response in the virtualized environment. The collected telemetry data can be compared to normal data. A probability can be generated for each layer to identify which of the layers is the most likely source of the performance issue. The layers can be prioritized based on their probability. The most likely layer or virtual machine is recommended for analysis to determine the cause of the performance issue.
Type: Grant
Filed: April 6, 2021
Date of Patent: December 3, 2024
Assignee: EMC IP Holding Company LLC
Inventors: Parminder Singh Sethi, Bing Liu
-
Patent number: 12153951
Abstract: A system and a method for managing workload of an application in a cloud infrastructure is disclosed. The cloud infrastructure may include an existing cloud infrastructure (ECI) and an Elastic Machine Pool Infrastructure (EMPI). The method may include connecting the EMPI to the ECI by configuring the cloud control manager of the ECI. Further, the method may include receiving the workload from the application running on the cloud infrastructure. The workload may be allocated to an Elastic Virtual Machine (EVM) hosted by the EMPI or a VM hosted by the ECI based on at least one of an EMP profile of the application, status of the EVM, and workload characteristics of the EVM. Further, the one or more bare metal servers and the one or more EVMs may be managed based on at least one of the workload characteristics and the status of the EVM.
Type: Grant
Filed: February 1, 2024
Date of Patent: November 26, 2024
Assignee: Platform9, Inc.
Inventors: Roopak Parikh, Madhura Maskasky, Pushkar Acharya, Mayuresh Kulakarni, Ashutosh Tiwari, Anirudh Pokala, Omkar Deshpande, Shubham Agarwal
-
Patent number: 12147473Abstract: Graph data processing methods and systems are disclosed. One example method comprises obtaining, by a master node, graph data, wherein the graph data comprises M vertexes and a plurality of directional edges, each edge connecting two vertexes, the direction of each edge being from a source vertex to a destination vertex, and M being an integer greater than two. The master node divides the graph data into P non-overlapping shards, where each shard comprises at least one incoming edge directed to at least one vertex in the corresponding shard. The node schedules at least two edge sets comprised in a first shard of the P shards and an associated edge set comprised in a second shard of the P shards for processing by at least two worker nodes.Type: GrantFiled: January 20, 2022Date of Patent: November 19, 2024Assignee: Huawei Technologies Co., Ltd.Inventors: Yinglong Xia, Jian Xu, Mingzhen Xia
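One common way to realize the non-overlapping sharding this abstract describes is to assign each shard a contiguous range of destination vertices, so every incoming edge lands in exactly one shard. The sketch below assumes that interpretation; the partitioning rule is illustrative, not taken from the patent.

```python
def shard_graph(edges, num_vertices, p):
    """Divide a directed graph into p non-overlapping shards: each shard
    owns a contiguous range of destination vertices and holds every
    incoming edge directed to a vertex in that range."""
    per_shard = -(-num_vertices // p)  # ceiling division
    shards = [[] for _ in range(p)]
    for src, dst in edges:
        shards[dst // per_shard].append((src, dst))
    return shards

# 4 vertices, 2 shards: shard 0 owns destinations {0, 1},
# shard 1 owns destinations {2, 3}.
edges = [(0, 1), (1, 2), (2, 0), (3, 2), (0, 3)]
shards = shard_graph(edges, num_vertices=4, p=2)
print(shards[0])  # → [(0, 1), (2, 0)]
print(shards[1])  # → [(1, 2), (3, 2), (0, 3)]
```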
-
Patent number: 12130813Abstract: A node of a computing system includes a main memory and a plurality of processing core resources. The main memory includes a computing device section and a database section. The computing device section includes a computing device operating system area and a computing device general area. The database section includes a database operating system area, a disk area, a network area, and a database general area. The database operating system area allocates at least one portion of the main memory for database operations that is locked from access by the computing device operating system area.Type: GrantFiled: December 11, 2023Date of Patent: October 29, 2024Assignee: Ocient Holdings LLCInventors: George Kondiles, Jason Arnold
-
Patent number: 12112212Abstract: Methods, systems, and apparatus, including computer-readable storage media, for load balancing. A load balancer can input data to the plurality of computing devices configured to process the input data according to a load-balancing distribution. The load balancer can receive from a first computing device of the plurality of computing devices, data characterizing memory bandwidth for a memory device on the first computing device and over a period of time. The load balancer can determine, at least from the data characterizing the memory bandwidth and a memory bandwidth saturation point for the first computing device, that the first computing device can process additional data within a predetermined latency threshold. In response to the determining, the load balancer can send the additional data to the first computing device.Type: GrantFiled: February 26, 2021Date of Patent: October 8, 2024Assignee: Google LLCInventors: Dmytro Tymofieiev, Jaideep Singh, Kusum Kumar Madarasu
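The admission decision this abstract describes can be sketched as a comparison of observed memory bandwidth against the device's saturation point, with some headroom standing in for the latency threshold. Function name, the use of the worst-case sample, and the 0.9 headroom factor are all assumptions for illustration.

```python
def can_accept_more(bandwidth_samples, saturation_point, headroom=0.9):
    """Decide whether a device can take additional data: its observed
    memory bandwidth over the period must stay below a fraction of the
    saturation point (the headroom guards the latency threshold)."""
    observed = max(bandwidth_samples)  # worst case over the period
    return observed < headroom * saturation_point

# Device saturates at 100 GB/s; peaks of 70 GB/s leave room,
# peaks of 96 GB/s do not.
print(can_accept_more([55.0, 62.5, 70.0], 100.0))  # → True
print(can_accept_more([95.0, 96.0], 100.0))        # → False
```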
-
Patent number: 12112209Abstract: A system and method for facilitating management of application infrastructure for plurality of users is disclosed. The method includes identifying a set of infrastructure components corresponding to an application and determining configuration information associated with the application based on the set of infrastructure components. The method further includes identifying a plurality of patterns of the application based on the configuration information of the set of infrastructure components and generating one or more application fingerprints corresponding to the application based on the plurality of patterns. Further, the method includes determining one or more anomalies in the application based on the one or more application fingerprints and generating one or more recommendations for resolving the one or more anomalies based on the one or more application fingerprints and prestored information.Type: GrantFiled: November 13, 2021Date of Patent: October 8, 2024Assignee: MONTYCLOUD INCInventors: Kannan Parthasarathy, Venkatanathan Krishnamachari
-
Patent number: 12107823Abstract: Multiple Anycast regions may be defined, and a separate Anycast address may be used for each region in order to localize client requests. In examples, when one or more Anycast servers in a first Anycast region fail or become overburdened (or are predicted to do so), one or more Anycast server in another, geographically or logically separate Anycast region that has additional capacity to handle client service requests may be dynamically added to the first Anycast region.Type: GrantFiled: July 28, 2023Date of Patent: October 1, 2024Assignee: CenturyLink Intellectual PropertyInventors: Dean Ballew, John R. B. Woodworth
-
Patent number: 12107740Abstract: Provided is an infrastructure for enforcing target service level parameters in a network. In one example, a network service level agreement (SLA) registry obtains one or more input service level parameters for at least one service offered by an application. Based on the one or more input service level parameters, the network SLA registry provides one or more target service level parameters to a plurality of network controllers. Each network controller of the plurality of network controllers is configured to enforce the one or more target service level parameters in a respective network domain configured to carry network traffic associated with the application.Type: GrantFiled: January 30, 2023Date of Patent: October 1, 2024Assignee: CISCO TECHNOLOGY, INC.Inventors: Fabio R. Maino, Saswat Praharaj, Alberto Rodriguez-Natal, Pradeep K. Kathail
-
Patent number: 12093745Abstract: Various approaches for managing one or more computational commodities in a virtual desktop infrastructure (VDI) include receiving a collection of utilization records for a user utilizing a desktop resource supported by the computational commodity in a desktop pool, each utilization record corresponding to a utilization rate of the computational commodity by the user; and augmenting or reducing allocation of the computational commodity to the desktop resource utilized by the user based at least in part on the utilization rates.Type: GrantFiled: August 31, 2020Date of Patent: September 17, 2024Assignee: International Business Machines CorporationInventors: Vivek Nandavanam, Shravan Sriram, Jerrold Leichter, Alexander Nish, Apostolos Dailianas, Dmitry Illichev
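The augment-or-reduce loop this abstract describes can be sketched as a simple policy over a user's utilization records. The thresholds, step size, and function name below are hypothetical choices for illustration, not values from the patent.

```python
def adjust_allocation(current_alloc, utilization_records, low=0.3, high=0.8, step=0.25):
    """Augment or reduce a desktop's commodity allocation (e.g. a vCPU or
    memory share) based on the user's recorded utilization rates."""
    avg = sum(utilization_records) / len(utilization_records)
    if avg > high:
        return current_alloc * (1 + step)  # user is constrained: augment
    if avg < low:
        return current_alloc * (1 - step)  # commodity sits idle: reduce
    return current_alloc                   # utilization is healthy: keep

# A user averaging 90% utilization of 4 vCPUs gets 5; one averaging
# 15% gets scaled back to 3.
print(adjust_allocation(4.0, [0.9, 0.85, 0.95]))  # → 5.0
print(adjust_allocation(4.0, [0.1, 0.2]))         # → 3.0
```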
-
Patent number: 12086652Abstract: Techniques described herein relate to a method for managing a computer vision (CV) environment. The method includes identifying a CV alert; in response to identifying the CV alert: making a first determination that a CV node of a plurality of CV nodes is not participating in a distributed workload associated with a higher priority CV alert; in response to the first determination, the CV node: selects candidate CV nodes of the plurality of CV nodes; initiates performance of the distributed CV workload by the candidate CV nodes to generate CV data associated with the CV alert; generates a CV alert case associated with the CV alert; obtains CV data from the candidate CV nodes that are performing the distributed CV workload; updates the CV alert case using the CV data generated during the performance of the distributed CV workload; and provides the updated CV alert case to a VMS.Type: GrantFiled: January 21, 2022Date of Patent: September 10, 2024Assignee: DELL PRODUCTS L.P.Inventors: Ian Roche, Philip Hummel, Dharmesh M. Patel
-
Patent number: 12068975Abstract: The present disclosure relates to the field of communication technology, and provides a resource scheduling method including: acquiring utilization rates of resources of a plurality of proxy servers, the plurality of proxy servers being deployed on a virtual machine; and using at least one first proxy server to share the resource utilization of at least one second proxy server, where the utilization rate of resources of each of the at least one first proxy server is smaller than a first threshold, the utilization rate of resources of each of the at least one second proxy server is greater than a second threshold, and the first threshold is smaller than the second threshold.Type: GrantFiled: September 15, 2021Date of Patent: August 20, 2024Assignee: XI'AN ZHONGXING NEW SOFTWARE CO., LTD.Inventors: Yao Tong, Haixin Wang
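The two-threshold selection this abstract describes can be sketched as follows: proxies below the first threshold are candidate helpers, proxies above the second threshold need relief, and each overloaded proxy is paired with a helper. The pairing order and all names and thresholds are illustrative assumptions.

```python
def plan_offloading(utilizations, low_threshold, high_threshold):
    """Pair underloaded proxies (below the first threshold) with overloaded
    ones (above the second threshold) so the former share the latter's load."""
    assert low_threshold < high_threshold
    first = [p for p, u in utilizations.items() if u < low_threshold]
    second = [p for p, u in utilizations.items() if u > high_threshold]
    # one underloaded helper per overloaded proxy, in order
    return list(zip(second, first))

# p2 and p4 are overloaded; p1 and p3 have spare capacity.
plan = plan_offloading({"p1": 0.15, "p2": 0.92, "p3": 0.30, "p4": 0.88},
                       low_threshold=0.4, high_threshold=0.8)
print(plan)  # → [('p2', 'p1'), ('p4', 'p3')]
```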
-
Patent number: 12058056Abstract: Systems and methods for providing web service instances to support traffic demands for a particular web service in a large-scale distributed system are disclosed. An example method includes determining a peak historical service load for the web service. The service load capacity for each existing web service instance may then be determined. The example method may then calculate the remaining service load after subtracting the sum of the service load capacity of the existing web service instances from the peak historical service load for the web service. The number of web service instances necessary in the large-scale distributed system may be determined based on the remaining service load. The locations of the web service instances may be determined and changes may be applied to the large-scale system based on the number of web service instances necessary in the large-scale distributed system.Type: GrantFiled: June 3, 2021Date of Patent: August 6, 2024Assignee: Google LLCInventors: Kamil Skalski, Elzbieta Czajka, Filip Grzadkowski, Krzysztof Grygiel
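The arithmetic in this abstract (remaining load equals peak historical load minus the summed capacities of existing instances, then derive the instance count) can be sketched directly. The per-instance capacity parameter and the QPS figures are illustrative assumptions.

```python
import math

def instances_needed(peak_historical_load, existing_capacities, per_instance_capacity):
    """Return how many new web service instances are needed to cover the
    remaining service load after existing instances are accounted for."""
    remaining = peak_historical_load - sum(existing_capacities)
    if remaining <= 0:
        return 0  # existing instances already cover the peak load
    return math.ceil(remaining / per_instance_capacity)

# Peak of 1000 QPS, three existing instances handling 250 QPS each,
# new instances also handling 250 QPS: one more instance is needed.
print(instances_needed(1000, [250, 250, 250], 250))  # → 1
```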
-
Patent number: 12050944Abstract: Embodiments herein describe an interface shell in a SmartNIC that reduces data-copy overhead in CPU-centric solutions that rely on a hardware compute engine (which can include one or more accelerators). The interface shell offloads tag matching and address translation without CPU involvement. Moreover, the interface shell enables the compute engine to read messages directly from the network without extra data copy, i.e., without first copying the data into the CPU's memory.Type: GrantFiled: May 4, 2021Date of Patent: July 30, 2024Assignee: XILINX, INC.Inventors: Guanwen Zhong, Chengchen Hu, Gordon John Brebner
-
Patent number: 12047273Abstract: A control system facilitates active management of a streaming data system. Given historical data traffic for each data stream processed by a streaming data system, the control system uses a machine learning model to predict future data traffic for each data stream. The control system selects a matching between data streams and servers for a future time that minimizes a cost comprising a switching cost and a server imbalance cost based on the predicted data traffic for the future time. In some configurations, the matching is selected using a planning window comprising a number of future time steps dynamically selected based on uncertainty associated with the predicted data traffic. Given the selected matching, the control system may manage the streaming data system by causing data streams to be moved between servers based on the matching.Type: GrantFiled: February 14, 2022Date of Patent: July 23, 2024Assignee: ADOBE INC.Inventors: Georgios Theocharous, Kai Wang, Zhao Song, Sridhar Mahadevan
-
Patent number: 12047439Abstract: Methods and systems for managing workloads are disclosed. The workloads may be supported by operation of workload components that are hosted by infrastructure. The hosted locations of the workload components by the infrastructure may impact the performance of the workloads. To manage performance of the workloads, an optimization process may be performed to identify a migration plan for migrating some of the workload components to other infrastructure such as shared edge infrastructure. Migration of the workload components may reduce the computing resource cost for performing various workloads.Type: GrantFiled: April 26, 2023Date of Patent: July 23, 2024Assignee: Dell Products L.P.Inventors: Ofir Ezrielev, Roman Bober, Lior Gdaliahu, Yonit Lopatinski, Eliyahu Rosenes
-
Patent number: 12039377Abstract: Load leveling between hosts (computes) is realized in a virtual infrastructure regardless of application restrictions on the virtualization technique, while reducing the impact on services.Type: GrantFiled: October 29, 2019Date of Patent: July 16, 2024Assignee: Nippon Telegraph and Telephone CorporationInventors: Eriko Iwasa, Makoto Hamada
-
Patent number: 12039375Abstract: The processing performance of an entire system is enhanced by efficiently using CPU resources shared by a plurality of guests. A server 10 includes a host OS 104 and a plurality of guest OSs 110A and 110B running on a plurality of virtual machines 108A and 108B, respectively, which are virtually constructed on the host OS 104. The plurality of virtual machines 108A and 108B shares CPU resources implemented by hardware 102. A guest priority calculation unit 202 of a resource management device (resource management unit 20) calculates a processing priority of at least one of the guest OSs 110 based on at least one of a packet transfer rate from the host OS 104 to the guest OS 110 and an available capacity status of a kernel buffer of the host OS 104. A resource utilization control unit 204 controls allocation of a utilization time for CPU resources to be used by the plurality of guest OSs 110 based on the calculated processing priority.Type: GrantFiled: February 4, 2020Date of Patent: July 16, 2024Assignee: Nippon Telegraph and Telephone CorporationInventors: Kei Fujimoto, Kohei Matoba, Makoto Araoka
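The priority calculation this abstract describes combines the host-to-guest packet transfer rate with the kernel buffer's capacity status, then controls each guest's share of CPU time. The sketch below weights rate by buffer backlog and normalizes into proportional shares; the weighting formula, names, and numbers are assumptions for illustration only.

```python
def guest_priorities(guests):
    """Compute a processing priority per guest OS from its host-to-guest
    packet transfer rate and the free fraction of the host kernel buffer,
    then derive a proportional CPU-time share from the priorities."""
    priorities = {
        name: rate * (1.0 - buffer_free)  # busier, more backed-up guests rank higher
        for name, (rate, buffer_free) in guests.items()
    }
    total = sum(priorities.values())
    return {name: p / total for name, p in priorities.items()}

# guestA pushes 8000 pkt/s with only 20% buffer free, so it receives
# most of the shared CPU time.
shares = guest_priorities({
    "guestA": (8000.0, 0.2),
    "guestB": (2000.0, 0.8),
})
print(shares)
```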