Load Balancing Patents (Class 718/105)
-
Patent number: 11831562
Abstract: Systems and methods for efficient database management of non-transitory readable media, including a memory configured to store information associated with service instance requests across a plurality of distributed network resources, and a processor configured to receive a service instance request, determine a first native domain object associated with the service instance request, allocate the plurality of network resources to a plurality of distributed worker instances dependent upon the first native domain object, and assign the first service instance request to a first worker instance that includes a microservice instance that defines service instance blocks to execute the request, and a service instance block manager configured to manage the first service instance request in conjunction with subsequent service instance requests associated with the plurality of worker instances, track running and completed requests, and allocate resources for similar requests across the distributed network nodes.
Type: Grant
Filed: October 4, 2021
Date of Patent: November 28, 2023
Inventors: Ronald M. Parker, Jeremy Brown, Haibo Qian
-
Patent number: 11811862
Abstract: Methods and systems for managing workloads are disclosed. The workloads may be supported by operation of workload components that are hosted by infrastructure. The locations at which the infrastructure hosts the workload components may impact the performance of the workloads. To manage performance of the workloads, an optimization process may be performed to identify a migration plan for migrating some of the workload components to different infrastructure locations. Some of the different infrastructure locations may reduce the computing resource cost of performing the workloads.
Type: Grant
Filed: April 26, 2023
Date of Patent: November 7, 2023
Assignee: Dell Products L.P.
Inventors: Ofir Ezrielev, Lior Gdaliahu, Roman Bober, Yonit Lopatinski, Eliyahu Rosenes
-
Patent number: 11809953
Abstract: Embodiments include techniques for enabling execution of N inferences on an execution engine of a neural network device. Instruction code for a single inference is stored in a memory that is accessible by a DMA engine, the instruction code forming a regular code block. A NOP code block and a reset code block for resetting an instruction DMA queue are stored in the memory. The instruction DMA queue is generated such that, when it is executed by the DMA engine, it causes the DMA engine to copy, for each of N inferences, both the regular code block and an additional code block to an instruction buffer. The additional code block is the NOP code block for the first N-1 inferences and is the reset code block for the Nth inference. When the reset code block is executed by the execution engine, the instruction DMA queue is reset.
Type: Grant
Filed: September 2, 2022
Date of Patent: November 7, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Samuel Jacob, Ilya Minkin, Mohammad El-Shabani
-
Patent number: 11785117
Abstract: Embodiments described herein provide methods and apparatuses for providing processing functions by microservices in a service. A first microservice is capable of providing a first processing function in a service comprising a plurality of microservices. The method includes receiving a processing request to provide the first processing function; obtaining a sequence of a plurality of microservices associated with the processing request, wherein the sequence comprises the first microservice; obtaining a current latency requirement associated with the remaining microservices in the sequence; obtaining an estimated latency associated with the remaining microservices in the sequence; and placing the processing request in a processing queue based on a comparison between the current latency requirement and the estimated latency.
Type: Grant
Filed: June 26, 2019
Date of Patent: October 10, 2023
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Xuejun Cai, Zhang Fu, Kun Wang
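The placement step this abstract describes (compare the remaining chain's latency budget with its estimated latency, then queue accordingly) can be sketched as below. The function names, the two-level priority scheme, and the millisecond figures are illustrative assumptions, not taken from the patent:

```python
# Hedged sketch of latency-aware queue placement for a microservice chain.
def estimated_latency(remaining):
    """Sum per-microservice latency estimates for the rest of the chain."""
    return sum(ms["est_latency_ms"] for ms in remaining)

def place_request(current_budget_ms, remaining, queue):
    """Queue with high priority when the remaining chain risks missing its budget."""
    slack = current_budget_ms - estimated_latency(remaining)
    # Negative or zero slack -> process sooner (0 = highest priority).
    priority = 0 if slack <= 0 else 1
    queue.append((priority, slack))
    queue.sort(key=lambda item: item[0])
    return priority

queue = []
chain = [{"est_latency_ms": 30}, {"est_latency_ms": 25}]   # 55 ms estimated
p_urgent = place_request(40, chain, queue)    # 40 ms budget -> urgent
p_relaxed = place_request(100, chain, queue)  # ample slack -> normal priority
```

A real implementation would keep per-microservice estimates updated from measurements rather than static values.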
-
Patent number: 11775351
Abstract: A method for processing data on a programmable logic controller includes a priority with a predetermined priority level assigned to at least one parallel processing section of a program of a master-processor core of a control task. Respective priority levels are inserted into a data structure as the respective master-processor core arrives at the parallel processing section. A parallel-processor core examines whether entries are present in the data structure and processes partial tasks from a work package of the master-processor core the priority level of which ranks first among the entries. A real-time condition of the control task is met by setting executing times of the programs for the master-processor core so that the master-processor core is capable of processing the partial tasks from the work packages without being supported by the parallel-processor core. The master-processor core further processes partial tasks not processed by the at least one parallel-processor core.
Type: Grant
Filed: December 4, 2018
Date of Patent: October 3, 2023
Assignee: Beckhoff Automation GmbH
Inventor: Robin Vogt
-
Patent number: 11762697
Abstract: The present disclosure discloses a method and apparatus for scheduling a resource for a deep learning framework. The method can comprise: querying statuses of all deep learning job objects from a Kubernetes platform at a predetermined interval; and submitting, in response to finding from the queried deep learning job objects a deep learning job object having a status conforming to a resource request submission status, a resource request to the Kubernetes platform to schedule a physical machine where the Kubernetes platform is located to initiate a deep learning training task. The method can completely automate the allocation and release of resources for the deep learning training task.
Type: Grant
Filed: January 15, 2019
Date of Patent: September 19, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Kun Liu, Kai Zhou, Qian Wang, Yuanhao Xiao, Lan Liu, Dongze Xu, Tianhan Xu, Jiangliang Guo, Jin Tang, Faen Zhang, Shiming Yin
-
Patent number: 11748037
Abstract: A first storage node communicates with at least one second storage node. A physical disk included in the at least one second storage node is mapped as a virtual disk of the first storage node. The method may include: receiving a first write request, where the first write request carries first to-be-written data; striping the first to-be-written data to obtain striped data, and writing the striped data to a physical disk and/or the virtual disk of the first storage node; and recording a write location of the striped data. For example, the technical solution may be applied to a storage system that includes an NVMe SSD.
Type: Grant
Filed: July 25, 2022
Date of Patent: September 5, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Huawei Liu, Yu Hu, Can Chen, Jinshui Liu, Xiaochu Li, Chunyi Tan
-
Patent number: 11734172
Abstract: The present application discloses a data transmission method and apparatus. Multiple first data blocks of one service are received by a network interface card, and the card allocates the received multiple first data blocks to a same data queue. When a tuner generates scheduling information for the service, the multiple first data blocks are sent to a virtual machine by using a resource in a resource pool of a NUMA node designated in the scheduling information. When the tuner does not generate scheduling information, a resource pool corresponding to the data queue in which the multiple first data blocks are located is determined according to a correspondence between the data queue and a resource pool of a NUMA node, and the multiple first data blocks are sent to a virtual machine.
Type: Grant
Filed: April 27, 2021
Date of Patent: August 22, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Lei Zhou
-
Patent number: 11726684
Abstract: Distributed storage systems are implemented with rule-based rebalancing mechanisms. Methods include steps for creating a set of rules for rebalancing data storage space in a storage node cluster, as well as steps for performing a rebalance operation across the storage node cluster using the set of rules. The distributed storage systems include one or more labels for storage pools and storage volumes.
Type: Grant
Filed: February 26, 2021
Date of Patent: August 15, 2023
Assignee: Pure Storage, Inc.
Inventors: Ganesh Sangle, Harsh Desai, Vinod Jayaraman
-
Patent number: 11707324
Abstract: A spinal correction rod implant manufacturing process includes: estimating a targeted spinal correction rod implant shape based on a patient-specific spine shape correction and including spine 3D modeling; and one or more simulation loops, each including: a first simulation of an intermediate spinal correction rod implant shape from modeling mechanical interaction between the patient-specific spine and either, for the first simulation, the implant shape or, for subsequent simulations, if any, an overbent implant shape resulting from the previous simulation loop; and a second simulation of an implant shape overbending applied to the targeted spinal correction rod implant shape, producing an overbent spinal correction rod implant shape representing a difference between either, for the first loop, the targeted spinal correction rod implant shape or, for subsequent loops, if any, the overbent spinal correction rod implant shape resulting from the previous simulation loop, and the intermediate spinal correction rod implant shape.
Type: Grant
Filed: September 1, 2017
Date of Patent: July 25, 2023
Assignees: SPINOLOGICS INC., EOS IMAGING
Inventors: Joe Hobeika, David Invernizzi, Julien Clin
-
Patent number: 11709717
Abstract: Disclosed is a method for designing an application task architecture for an electronic control unit based on an AUTOSAR operating system that is adaptable to a plurality of microcontrollers. Prior to association with a microcontroller, the method involves developing the application task architecture by using at least one virtual core different from the one or more cores of the microcontroller, the various tasks being assigned respectively to the at least one virtual core, and associating the at least one virtual core with the one or more cores of the microcontroller so as to allocate tasks assigned to the at least one virtual core to the core or among the cores of the microcontroller.
Type: Grant
Filed: January 22, 2019
Date of Patent: July 25, 2023
Assignee: VITESCO TECHNOLOGIES GMBH
Inventors: Denis Claraz, André Goebel, Ralph Mader
-
Patent number: 11706290
Abstract: An edge server of an infrastructure service establishes a transport connection in user space with a client and in accordance with a transport layer network protocol. The edge server receives a packet over the transport connection with the client that comprises a request for an object. If the edge server cannot serve the object, it forwards the request to a cluster server with an intent indicated for the cluster server to reply directly to the client. The cluster server receives the forwarded request and determines whether to accept the intent indicated by the edge server. If so, the edge server conveys instructions to the cluster server for sending at least a portion of the object directly to the client. The cluster server then sends at least the portion of the object to the client in accordance with the instructions.
Type: Grant
Filed: October 15, 2021
Date of Patent: July 18, 2023
Assignee: Fastly, Inc.
Inventors: Kazuho Oku, Janardhan Iyengar, Artur Bergman
-
Patent number: 11698812
Abstract: In one embodiment, a processor includes a power controller having a resource allocation circuit. The resource allocation circuit may: receive a power budget for a first core and at least one second core and scale the power budget based at least in part on at least one energy performance preference value to determine a scaled power budget; determine a first maximum operating point for the first core and a second maximum operating point for the at least one second core based at least in part on the scaled power budget; determine a first efficiency value for the first core based at least in part on the first maximum operating point for the first core and a second efficiency value for the at least one second core based at least in part on the second maximum operating point for the at least one second core; and report a hardware state change to an operating system scheduler based on the first efficiency value and the second efficiency value. Other embodiments are described and claimed.
Type: Grant
Filed: August 29, 2019
Date of Patent: July 11, 2023
Assignee: Intel Corporation
Inventors: Praveen Kumar Gupta, Avinash N. Ananthakrishnan, Eugene Gorbatov, Stephen H. Gunther
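The budget-scaling arithmetic can be illustrated with a toy model. Everything here is invented for illustration: the patent does not specify how the energy performance preference (EPP) is encoded or how operating points derive from a power share, so this sketch assumes EPP in [0, 1] (0 = max performance, 1 = max efficiency) and a linear frequency-per-watt model:

```python
# Toy model of EPP-scaled power budgeting; all constants are assumptions.
def scaled_budget(power_budget_w, epp):
    """Scale the shared budget down as the EPP leans toward efficiency."""
    return power_budget_w * (1.0 - 0.5 * epp)

def max_operating_point(budget_share_w, ghz_per_watt):
    """Assumed linear frequency/power relationship."""
    return budget_share_w * ghz_per_watt

budget = scaled_budget(10.0, 0.4)                      # 10 W, mildly efficiency-biased
big_share, small_share = budget * 0.6, budget * 0.4    # assumed split between cores
big_core_ghz = max_operating_point(big_share, 0.3)     # big core: less efficient
small_core_ghz = max_operating_point(small_share, 0.5) # small core: more efficient
eff_big = big_core_ghz / big_share                     # efficiency = GHz per watt
eff_small = small_core_ghz / small_share
```

In the patent, the two efficiency values would feed a hardware state change reported to the OS scheduler, which could then prefer the more efficient core for background work.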
-
Patent number: 11687245
Abstract: An apparatus comprises at least one processing device that includes a processor coupled to a memory, and is configured to monitor latencies associated with processing of input-output operations in a plurality of storage nodes of a distributed storage system, to detect an unbalanced condition between the storage nodes based at least in part on the monitored latencies, and responsive to the detected unbalanced condition, to adjust an assignment of slices of a logical address space of the distributed storage system to the storage nodes. Adjusting the assignment of slices of the logical address space of the distributed storage system to the storage nodes responsive to the detected unbalanced condition illustratively comprises increasing a number of the slices assigned to one or more of the storage nodes having relatively low latencies and decreasing a number of slices assigned to one or more of the storage nodes having relatively high latencies.
Type: Grant
Filed: November 19, 2020
Date of Patent: June 27, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Vladimir Shveidel, Lior Kamran
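The adjustment the abstract describes (shift logical-address-space slices from high-latency nodes to low-latency ones) reduces to a small reassignment step. The node names, latency figures, and one-slice step size below are illustrative assumptions:

```python
# Hedged sketch: move slices from the slowest node to the fastest one.
def rebalance_slices(assignment, latencies, step=1):
    """`assignment` maps node -> number of logical-address-space slices;
    `latencies` maps node -> monitored I/O latency. Moves `step` slices
    from the highest-latency node to the lowest-latency node."""
    slow = max(latencies, key=latencies.get)
    fast = min(latencies, key=latencies.get)
    moved = min(step, assignment[slow])
    assignment[slow] -= moved
    assignment[fast] += moved
    return assignment

slices = {"node-a": 4, "node-b": 4, "node-c": 4}
latency_ms = {"node-a": 2.0, "node-b": 9.5, "node-c": 3.1}
rebalance_slices(slices, latency_ms)   # node-b sheds a slice to node-a
```

Repeating this step on each monitoring interval converges the slice counts toward the nodes' relative speeds without large one-shot moves.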
-
Patent number: 11683364
Abstract: A distributed device management system specifies a device capable of supplying request data used for providing a service, from among a plurality of devices connected to a network. Device management function units are disposed so as to be geographically distributed and manage the states of the devices located in deployed areas. A device specifying function unit has a device inquiry cache in which a response log including the type of data which was previously required for the service and an identifier of the device management function unit that manages the device which was capable of supplying the data is recorded. In a case where this request data coincides with the type of data included in the response log, an inquiry is transmitted to the device management function unit associated with the request data in the response log.
Type: Grant
Filed: February 13, 2019
Date of Patent: June 20, 2023
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Hirofumi Noguchi, Yoji Yamato, Tatsuya Demizu, Misao Kataoka
-
Patent number: 11681554
Abstract: A workload distribution scheme is provided for a multicore memory system. The memory system includes a memory device including blocks and a controller including cores. The controller receives multiple logical addresses from a host, determines a range of logical addresses among the multiple logical addresses to be allocated for the cores, and distributes multiple subsets of the logical addresses in the range to the cores, based on an operation of modulo and shuffling on the multiple logical addresses.
Type: Grant
Filed: November 6, 2019
Date of Patent: June 20, 2023
Assignee: SK hynix Inc.
Inventors: Aliaksei Tolstsikau, Maksim Skurydzin
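The modulo-and-shuffle distribution can be sketched directly: modulo maps each logical block address (LBA) to a core slot, and a deterministic shuffle of the slot order breaks up hot contiguous ranges. The seeding and bucket layout below are illustrative assumptions, not the patent's exact operation:

```python
import random

# Hedged sketch of modulo + shuffle LBA-to-core distribution.
def assign_lbas(logical_addresses, num_cores, seed=0):
    cores = list(range(num_cores))
    random.Random(seed).shuffle(cores)   # deterministic shuffle of core slots
    buckets = {c: [] for c in range(num_cores)}
    for lba in logical_addresses:
        # Modulo picks a slot; the shuffled table maps slot -> core.
        buckets[cores[lba % num_cores]].append(lba)
    return buckets

buckets = assign_lbas(range(16), 4)   # 16 LBAs spread evenly over 4 cores
```

Because the shuffle is seeded, the mapping is reproducible across reboots, which matters when the flash translation layer must find the same core for a given LBA later.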
-
Patent number: 11669397
Abstract: A storage network receives data and a corresponding task, selects storage units for the task, determines whether a data slice is locally available and, when the data slice is not locally available, determines whether a redundant data slice is available from another storage unit. When the redundant data slice is not available from another storage unit, the storage network facilitates rebuilding the data slice to produce a rebuilt data slice by retrieving a decode threshold number of data slices corresponding to the data slice, decoding the decode threshold number of data slices to reproduce a data segment, and re-encoding the data segment to produce a pillar width number of data slices that includes the rebuilt data slice.
Type: Grant
Filed: October 13, 2022
Date of Patent: June 6, 2023
Assignee: Pure Storage, Inc.
Inventors: Greg R. Dhuse, Jason K. Resch
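The decode-then-re-encode rebuild can be shown with the smallest possible erasure code. Dispersed-storage systems use general codes with decode threshold k out of pillar width n; the single-XOR-parity code below (n = k + 1, any k slices rebuild the segment) is a minimal stand-in chosen for brevity, not the patent's actual code:

```python
from functools import reduce

def xor_slices(slices):
    """Bytewise XOR across equal-length slices."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*slices))

def encode(segment, k):
    """Split a segment into k data slices plus one XOR parity slice."""
    size = len(segment) // k
    data = [segment[i * size:(i + 1) * size] for i in range(k)]
    return data + [xor_slices(data)]           # pillar width = k + 1

def decode(slices, k):
    """Reproduce the segment from any k of the k + 1 slices (one may be None)."""
    if all(s is not None for s in slices[:k]):
        return b"".join(slices[:k])
    missing = next(i for i, s in enumerate(slices) if s is None)
    data = list(slices[:k])
    # XOR of the k surviving slices recovers the missing data slice.
    data[missing] = xor_slices([s for s in slices if s is not None])
    return b"".join(data)

segment = b"load-balancing!!"                  # 16 bytes, k = 4 -> 4-byte slices
pillars = encode(segment, 4)
lost = pillars[2]
pillars[2] = None                              # slice unavailable, no redundant copy
# Rebuild path from the abstract: decode threshold slices -> segment -> re-encode.
rebuilt_pillars = encode(decode(pillars, 4), 4)
```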
-
Patent number: 11652720
Abstract: The present disclosure relates to systems, methods, and computer-readable media for predicting deployment growth on one or more node clusters and selectively permitting deployment requests on a per-cluster basis. For example, systems disclosed herein may apply a tenant growth prediction system trained to output a deployment growth classification indicative of a predicted growth of deployments on a node cluster. The systems disclosed herein may further utilize the deployment growth classification to determine whether a deployment request may be permitted while maintaining a sufficiently sized capacity buffer to avoid deployment failures for existing deployments previously implemented on the node cluster. By selectively permitting or denying deployments based on a variety of factors, the systems described herein can more efficiently utilize cluster resources on a per-cluster basis without causing a significant increase in deployment failures for existing customers.
Type: Grant
Filed: September 20, 2019
Date of Patent: May 16, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shandan Zhou, John Lawrence Miller, Christopher Cowdery, Thomas Moscibroda, Shanti Kemburu, Yong Xu, Si Qin, Qingwei Lin, Eli Cortez, Karthikeyan Subramanian
-
Patent number: 11647072
Abstract: The present invention relates to communications methods and apparatus for distributing Session Initiation Protocol (SIP) messages among SIP processing entities including during periods of failure recovery. An exemplary method embodiment includes the steps of: establishing a first connection oriented protocol connection between a first Session Initiation Protocol Load Balancer (SLB) of a plurality of SLBs and a client device; receiving via the first connection oriented protocol connection at the first SLB a first SIP REGISTER request message from the first client device; determining, by the first SLB, based on information received from the client device, a first Session Border Controller (SBC) from a plurality of SBCs to send the first SIP REGISTER request message, said information uniquely identifying the first SBC from other SBCs in the plurality of SBCs; and sending, by the first SLB, the first SIP REGISTER request message to the first SBC.
Type: Grant
Filed: January 11, 2022
Date of Patent: May 9, 2023
Assignee: Ribbon Communications Operating Company, Inc.
Inventors: Tolga Asveren, Subhransu S. Nayak, Aby Kuriakose
-
Patent number: 11645113
Abstract: In some examples, a system receives a first unit of work to be scheduled in the system that includes a plurality of collections of processing units to execute units of work, where each respective collection of processing units of the plurality of collections of processing units is associated with a corresponding scheduling queue. The system selects, for the first unit of work according to a first criterion, candidate collections from among the plurality of collections of processing units, and enqueues the first unit of work in a scheduling queue associated with a selected collection of processing units that is selected, according to a selection criterion, from among the candidate collections.
Type: Grant
Filed: April 30, 2021
Date of Patent: May 9, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Christopher Joseph Corsi, Prashanth Soundarapandian, Matti Antero Vanninen, Siddharth Munshi
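The two-stage placement (filter candidate collections by a first criterion, then select one by a selection criterion) can be sketched as follows. The concrete criteria used here (has spare capacity; shortest queue) and the pool names are illustrative assumptions:

```python
# Hedged sketch of candidate filtering followed by selection.
def schedule(work_item, collections):
    # First criterion: only collections with spare queue capacity are candidates.
    candidates = [c for c in collections if len(c["queue"]) < c["capacity"]]
    # Selection criterion: among candidates, pick the shortest scheduling queue.
    chosen = min(candidates, key=lambda c: len(c["queue"]))
    chosen["queue"].append(work_item)
    return chosen["name"]

collections = [
    {"name": "pool-a", "capacity": 2, "queue": ["w1", "w2"]},  # full, not a candidate
    {"name": "pool-b", "capacity": 4, "queue": ["w3"]},
    {"name": "pool-c", "capacity": 4, "queue": []},
]
placed_on = schedule("w4", collections)
```

Separating the filter from the selection lets each criterion evolve independently, e.g. swapping shortest-queue for weighted load without touching the candidacy test.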
-
Patent number: 11636379
Abstract: A distributed cluster training method and an apparatus thereof are provided. The method includes reading a sample set, the sample set including at least one piece of sample data; substituting the sample data and current weights into a target model training function for iterative training to obtain a first gradient before receiving a collection instruction, the collection instruction being issued by a scheduling server when a cluster system environment meets a threshold condition; sending the first gradient to an aggregation server if a collection instruction is received, wherein the aggregation server collects each first gradient and calculates second weights; and receiving the second weights sent by the aggregation server to update the current weights. The present disclosure reduces the amount of network communications and the impact on switches, and avoids affecting the use of the entire cluster.
Type: Grant
Filed: September 25, 2018
Date of Patent: April 25, 2023
Assignee: Alibaba Group Holding Limited
Inventor: Jun Zhou
-
Patent number: 11604682
Abstract: A resource usage platform is disclosed. The platform performs preemptive container load balancing, auto scaling, and placement in a computing system. Resource usage data is collected from containers and used to train a model that generates inferences regarding resource usage. The resource usage operations are performed based on the inferences and on environment data such as available resources, service needs, and hardware requirements.
Type: Grant
Filed: December 31, 2020
Date of Patent: March 14, 2023
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Xuebin He, Amy N. Seibel, Himanshu Arora, Victor Fong
-
Patent number: 11606409
Abstract: Provided is a method and system for a network-assisted Quality of Experience (QoE)-based smart and proactive video streaming framework to be deployed at Multi-access Edge Computing (MEC) servers. Quality of Experience (QoE) levels for a plurality of video sessions streamed from a cloud server to computing devices through a central edge server are estimated based on one or more metrics associated with the plurality of video sessions. Additionally, channel status of one or more neighboring edge servers proximate to the central edge server is determined. The QoE levels of the plurality of video sessions are then maximized based on the estimating, by employing a local optimization, a global optimization, or a combination of both the local optimization and the global optimization based on the one or more metrics and the channel status determined for the one or more neighboring edge servers.
Type: Grant
Filed: November 19, 2021
Date of Patent: March 14, 2023
Assignee: AMBEENT INC.
Inventors: Mustafa Ergen, Mehmet Fatih Tuysuz
-
Patent number: 11599302
Abstract: A storage device includes a feature information database configured to store feature information about a memory device; and a machine learning module configured to select, from a plurality of machine learning models, a machine learning model corresponding to an operation of the memory device based on the feature information, wherein the memory device is configured to operate according to the selected machine learning model.
Type: Grant
Filed: March 25, 2020
Date of Patent: March 7, 2023
Assignee: SAMSUNG ELECTRONIC CO., LTD.
Inventors: Jeong Woo Lee, Chan Ha Kim, Kang Ho Roh, Kwang Woo Lee, Hee Won Lee
-
Patent number: 11599389
Abstract: Techniques described herein can optimize usage of computing resources in a data system. Dynamic throttling can be performed locally on a computing resource in the foreground and autoscaling can be performed in a centralized fashion in the background. Dynamic throttling can lower the load without overshooting while minimizing oscillation and reducing the throttle quickly. Autoscaling may involve scaling in or out the number of computing resources in a cluster as well as scaling up or down the type of computing resources to handle different types of situations.
Type: Grant
Filed: August 31, 2021
Date of Patent: March 7, 2023
Assignee: Snowflake Inc.
Inventors: Johan Harjono, Daniel Geoffrey Karp, Kunal Prafulla Nabar, Rares Radut, Arthur Kelvin Shi
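"Lower the load without overshooting while minimizing oscillation and reducing the throttle quickly" maps naturally onto an additive-increase / multiplicative-decrease loop. The constants (0.1 step, 0.5 decay, 80% target) are invented for illustration, not Snowflake's actual values:

```python
# Hedged AIMD-style throttle sketch: ramp the throttle gently under overload,
# release it quickly once load falls back under the target.
def adjust_throttle(throttle, load, target, floor=0.0):
    if load > target:
        throttle = min(1.0, throttle + 0.1)    # additive increase: avoid overshoot
    else:
        throttle = max(floor, throttle * 0.5)  # multiplicative decrease: back off fast
    return round(throttle, 3)

t = 0.0
history = []
for load in [95, 98, 97, 60, 50]:   # percent utilization samples, target = 80
    t = adjust_throttle(t, load, 80)
    history.append(t)
```

The asymmetry is the point: slow tightening prevents oscillation around the target, while halving on recovery restores throughput quickly, matching the behavior the abstract claims.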
-
Patent number: 11601497
Abstract: A system can include a gateway, a plurality of network function nodes, and a distributed load balancer including load balancer nodes each having a flow table portion stored thereon. The load balancer nodes can form a node chain having a tail and head nodes. A load balancer node can receive a packet from the gateway. In response, the load balancer node can generate a query, directed to the tail node, that identifies the packet and a network function identifier associated with a network function node that is proposed to handle a connection. The tail node can determine whether an entry for the connection exists in a flow table portion associated with the tail node. If not, the tail node can initiate an insert request for writing the entry for the connection via the head node. The entry can then be written to all load balancer nodes in the node chain.
Type: Grant
Filed: October 8, 2021
Date of Patent: March 7, 2023
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Abhigyan, Kaustubh Joshi, Edward Scott Daniels
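The query-the-tail, write-via-the-head pattern is chain replication applied to a flow table. A minimal in-process sketch, with node names and the flow-key shape invented for illustration (a real system would propagate the write asynchronously across machines):

```python
# Hedged sketch of a chain-replicated flow table for a distributed load balancer.
class ChainedFlowTable:
    def __init__(self, node_names):
        self.nodes = {name: {} for name in node_names}
        self.order = list(node_names)            # head -> ... -> tail

    def lookup(self, flow_key):
        # Queries go to the tail: a hit means every node already has the entry.
        return self.nodes[self.order[-1]].get(flow_key)

    def insert(self, flow_key, nf_id):
        # Writes enter at the head and propagate node by node to the tail.
        for name in self.order:
            self.nodes[name][flow_key] = nf_id

chain = ChainedFlowTable(["lb1", "lb2", "lb3"])
miss = chain.lookup(("10.0.0.1", 443))           # tail miss -> new connection
chain.insert(("10.0.0.1", 443), "nf-7")          # bind connection to a network function
owner = chain.lookup(("10.0.0.1", 443))
```

Reading only from the tail is what makes the scheme consistent: an entry is visible only after the write has reached every node in the chain.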
-
Patent number: 11579942
Abstract: Disclosed are aspects of virtual graphics processing unit (vGPU) scheduling-aware virtual machine migration. Graphics processing units (GPUs) that are compatible with a current virtual GPU (vGPU) profile for a virtual machine are identified. A scheduling policy matching order for a migration of the virtual machine is determined based on a current vGPU scheduling policy for the virtual machine. A destination GPU is selected based on a vGPU scheduling policy of the destination GPU being identified as a best available vGPU scheduling policy according to the scheduling policy matching order. The virtual machine is migrated to the destination GPU.
Type: Grant
Filed: June 2, 2020
Date of Patent: February 14, 2023
Assignee: VMWARE, INC.
Inventors: Uday Pundalik Kurkure, Hari Sivaraman, Lan Vu
-
Patent number: 11575740
Abstract: There is provided a cloud management method and apparatus for performing load balancing so as to place a service in a cluster that is geographically close and has a good available-resource status in an associative container environment. The cloud management method according to an embodiment includes: monitoring, by a cloud management apparatus, the available-resource statuses of a plurality of clusters, and selecting a cluster that owns a first service supported by a first cluster whose available resource rate is less than a threshold value; calculating, by the cloud management apparatus, scores for the available-resource status and geographical proximity of each cluster; and performing, by the cloud management apparatus, load balancing of the first service based on a result of calculating the scores.
Type: Grant
Filed: September 7, 2021
Date of Patent: February 7, 2023
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Jae Hoon An, Young Hwan Kim
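Scoring clusters on available resources plus geographic proximity can be sketched as a weighted sum. The weights, the 1/(1+distance) proximity transform, and the cluster data are all invented for illustration, the patent does not disclose its scoring formula:

```python
# Hedged sketch of resource-plus-proximity cluster scoring.
def score(cluster, weight_resources=0.7, weight_proximity=0.3):
    proximity = 1.0 / (1.0 + cluster["distance_km"])   # closer -> higher, in (0, 1]
    return weight_resources * cluster["available_rate"] + weight_proximity * proximity

clusters = [
    {"name": "seoul",   "available_rate": 0.15, "distance_km": 0},    # close but busy
    {"name": "busan",   "available_rate": 0.60, "distance_km": 320},
    {"name": "daejeon", "available_rate": 0.55, "distance_km": 140},
]
target = max(clusters, key=score)   # best balance of free capacity and closeness
```

With these weights, the nearly full nearby cluster loses to a farther one with ample capacity, which is the trade-off the abstract is balancing.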
-
Patent number: 11568089
Abstract: In one embodiment, a method is provided. The method includes receiving a first set of data via a communication interface of a data storage system. The method also includes determining whether a primary processing device should be used for performing checksum computations for the first set of data. If the primary processing device should be used, the method further includes providing the first set of data to the primary processing device. If the primary processing device should not be used, the method further includes providing the first set of data to the secondary processing device.
Type: Grant
Filed: August 31, 2020
Date of Patent: January 31, 2023
Assignee: FRONTIIR PTE LTD.
Inventors: Changbin Liu, Boon Thau Loo
-
Patent number: 11567896
Abstract: In one embodiment, a processor includes a plurality of cores each including a first storage to store a physical identifier for the core and a second storage to store a logical identifier associated with the core; a plurality of thermal sensors to measure a temperature at a corresponding location of the processor; and a power controller including a dynamic core identifier logic to dynamically remap a first logical identifier associated with a first core to associate the first logical identifier with a second core, based at least in part on a temperature associated with the first core, the dynamic remapping to cause a first thread to be migrated from the first core to the second core transparently to an operating system. Other embodiments are described and claimed.
Type: Grant
Filed: June 30, 2020
Date of Patent: January 31, 2023
Assignee: Intel Corporation
Inventors: Ankush Varma, Krishnakanth V. Sistla, Guy G. Sotomayor, Andrew D. Henroid, Robert E. Gough, Tod F. Schiff
-
Patent number: 11561700
Abstract: Load balancing may include: receiving I/O workloads of storage server entities that service I/O operations received for logical devices, wherein each logical device has an owner that is one of the storage server entities and processes I/O operations directed to the logical device; determining normalized I/O workloads corresponding to the I/O workloads of the storage server entities; determining, in accordance with utilization criteria, imbalance criteria and the normalized I/O workloads, whether to rebalance the I/O workloads of the storage server entities; and responsive to determining to rebalance the I/O workloads of the storage server entities, performing processing to alleviate a detected I/O workload imbalance between two storage server entities. The processing may include moving a logical device from a first storage server entity to a second storage server entity, and transferring ownership of the logical device from the first to the second storage server entity.
Type: Grant
Filed: January 21, 2021
Date of Patent: January 24, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Shaul Dar, Gajanan S. Natu, Vladimir Shveidel
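Normalizing the workloads and applying an imbalance criterion before moving ownership can be sketched as below. The 0.2 threshold and the one-device move are illustrative assumptions standing in for the patent's utilization and imbalance criteria:

```python
# Hedged sketch: normalize workloads, then propose an ownership move only
# when the spread exceeds an imbalance threshold.
def normalized(workloads):
    total = sum(workloads.values())
    return {node: w / total for node, w in workloads.items()}

def rebalance_plan(workloads, imbalance_threshold=0.2):
    """Return (busiest, idlest) when rebalancing is warranted, else None."""
    norm = normalized(workloads)
    busiest = max(norm, key=norm.get)
    idlest = min(norm, key=norm.get)
    if norm[busiest] - norm[idlest] <= imbalance_threshold:
        return None                      # imbalance criterion not met
    return (busiest, idlest)             # migrate one logical device's ownership

plan = rebalance_plan({"node-a": 700, "node-b": 200, "node-c": 100})
```

Normalization is what makes the threshold meaningful across load levels: a 500-IOPS gap is severe at 1,000 total IOPS but noise at 100,000.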
-
Patent number: 11556254
Abstract: A plurality of logical storage segments of storage drives of a plurality of storage nodes are identified. At least one of the storage nodes includes at least a first logical storage segment and a second logical storage segment included in the plurality of logical storage segments. A distributed and replicated data store using a portion of the plurality of logical storage segments that excludes at least the second logical storage segment is provided. An available storage capacity metric associated with the plurality of logical storage segments is determined to meet a first threshold. In response to the determination that the available storage capacity metric meets the first threshold, at least the second logical storage segment is dynamically deployed for use in providing the distributed and replicated data store in a manner that increases a storage capacity of the data store while maintaining a fault tolerance policy of the distributed and replicated data store.
Type: Grant
Filed: February 28, 2020
Date of Patent: January 17, 2023
Assignee: Cohesity, Inc.
Inventors: Venkatesh Pallipadi, Sachin Jain, Deepak Ojha, Apurv Gupta
-
Patent number: 11556446
Abstract: A method, system, and computer program product are provided for performance anomaly detection. Velocity data is periodically received from a workload manager for one or more address spaces. An expected velocity value is created for each of the one or more address spaces. A factor of the expected velocity value is compared to a current velocity value from the velocity data. Based on the current velocity value being lower than that factor, a remedial action is generated indicating an anomaly.
Type: Grant
Filed: September 25, 2020
Date of Patent: January 17, 2023
Assignee: International Business Machines Corporation
Inventors: Robert M. Abrams, Karla Arndt, Friedrich Matthias Gubitz, Dieter Wellerdiek, Nicholas C. Matsakis
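The detection rule is a one-line comparison once the expected value exists. The 0.8 factor and the rolling-mean expectation below are illustrative assumptions; the patent does not specify either:

```python
# Hedged sketch of factor-of-expected-velocity anomaly detection.
def expected(velocity_samples):
    """A rolling mean stands in for the learned expected velocity."""
    return sum(velocity_samples) / len(velocity_samples)

def detect_anomaly(expected_velocity, current_velocity, factor=0.8):
    """Flag when current velocity falls below a factor of the expectation."""
    return current_velocity < factor * expected_velocity

history = [40, 42, 38, 40]          # periodic readings for one address space
anomalous = detect_anomaly(expected(history), current_velocity=25)
healthy = detect_anomaly(expected(history), current_velocity=39)
```

In the patent's flow, a True result would trigger the remedial action rather than just a flag.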
-
Patent number: 11546421Abstract: A connection management device is communicatively connected to a plurality of server devices. A receiver receives from a terminal device a request for connection to one of the plurality of server devices. A location information extractor extracts, from the request for connection, location information indicating a location where the terminal device exists. A region determiner determines, based on the extracted location information, a region where the terminal device exists. A connection destination determiner determines, based on the determined region, which of the plurality of server devices is the server device to which the terminal device is to connect. A transmitter transmits to the determined server device the request for connection received from the terminal device.Type: GrantFiled: April 3, 2019Date of Patent: January 3, 2023Assignee: Mitsubishi Electric CorporationInventors: Yoshinori Nakajima, Kanji Mizuno, Masayuki Komatsu
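The extract-location, determine-region, pick-server pipeline in this abstract can be sketched in a few lines. The region table, the longitude-based rule, and the server names are hypothetical.

```python
# Hypothetical sketch: route a terminal's connection request to the server
# device assigned to the region where the terminal exists.

REGION_SERVERS = {"east": "server-e.example", "west": "server-w.example"}

def region_of(location):
    """Determine the terminal's region from the extracted location info
    (an assumed longitude-based rule for illustration)."""
    return "east" if location["longitude"] >= 0 else "west"

def route_connection(request):
    """Extract location, determine the region, and return the server
    device to which the connection request should be transmitted."""
    region = region_of(request["location"])
    return REGION_SERVERS[region]

server = route_connection({"location": {"longitude": 139.7}})
```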
-
Patent number: 11534917Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to improve resource utilization for binary tree structures. An example apparatus to improve resource utilization for field programmable gate array (FPGA) resources includes a computation determiner to identify a computation capability value associated with the FPGA resources, a k-ary tree builder to build a first k-ary tree having a number of k-ary nodes equal to the computation capability value, and an FPGA memory controller to initiate collision computation by transferring the first k-ary tree to a first memory of the FPGA resources.Type: GrantFiled: March 29, 2018Date of Patent: December 27, 2022Assignee: Intel CorporationInventors: Ganmei You, Dawei Wang, Ling Liu, Xuesong Shi, Chunjie Wang
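Building a k-ary tree whose node count equals the computation capability value, as this abstract describes, can be sketched as an array of parent links. The choice of k and the complete-tree layout are assumptions.

```python
# Sketch: build a complete k-ary tree with exactly `capability` nodes,
# represented as parent indices, before transfer to FPGA memory.

def build_kary_tree(num_nodes, k=4):
    """Parent links for a complete k-ary tree: node i's parent is
    (i - 1) // k, and the root (node 0) has parent -1."""
    return [-1 if i == 0 else (i - 1) // k for i in range(num_nodes)]

capability = 9          # assumed computation capability value of the FPGA
tree = build_kary_tree(capability)
```

For a capability of 9 with k = 4, the root has four children (nodes 1 to 4) and node 1 has the remaining four (nodes 5 to 8).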
-
Patent number: 11513861Abstract: Disclosed is a computer implemented method to manage queue overlap in storage systems, the method comprising, identifying, by a storage system, a plurality of queues including a first queue and a second queue. The storage system includes a plurality of cores, including a first core and a second core, and wherein the first queue is associated with a first host and the second queue is associated with a second host. The method also comprises, determining the first queue and the second queue are being processed by the first core. The method further comprises, monitoring the workload of each core and identifying a load imbalance, wherein the load imbalance is a difference between a first workload associated with the first core, and a second workload associated with the second core. The method also comprises, notifying the second host that the load imbalance is present.Type: GrantFiled: August 29, 2019Date of Patent: November 29, 2022Assignee: International Business Machines CorporationInventors: Ankur Srivastava, Kushal Patel, Sarvesh S. Patel, Subhojit Roy
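The imbalance check this abstract describes can be sketched by summing queue workload per core and comparing the extremes. The queue-depth metric and the threshold are illustrative assumptions.

```python
# Hypothetical sketch: two hosts' queues overlap on one core; detect the
# resulting per-core workload imbalance so the affected host can be notified.

def core_workloads(queue_to_core, queue_depths):
    """Sum per-core workload over the queues each core processes."""
    loads = {}
    for q, core in queue_to_core.items():
        loads[core] = loads.get(core, 0) + queue_depths[q]
    return loads

def find_imbalance(loads, threshold=10):
    """Return (busy_core, idle_core) when their workload difference
    exceeds the threshold, else None."""
    busy = max(loads, key=loads.get)
    idle = min(loads, key=loads.get)
    if loads[busy] - loads[idle] > threshold:
        return busy, idle
    return None

# Queues from host1 and host2 both landed on core 0, overloading it.
loads = core_workloads({"q_host1": 0, "q_host2": 0, "q_host3": 1},
                       {"q_host1": 20, "q_host2": 25, "q_host3": 5})
imbalance = find_imbalance(loads)
```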
-
Patent number: 11507434Abstract: Methods and systems are provided for the deployment of machine learning based processes to public clouds. For example, a method for deploying a machine learning based process may include developing and training the machine learning based process to perform an activity, performing at least one of identifying and receiving an identification of a set of one or more public clouds that comply with a set of regulatory criteria used to regulate the activity, selecting a first public cloud of the set of one or more public clouds that complies with the set of regulatory criteria used to regulate the activity, and deploying the machine learning based process to the first public cloud of the set of one or more public clouds.Type: GrantFiled: January 28, 2020Date of Patent: November 22, 2022Assignee: Hewlett Packard Enterprise Development LPInventors: Sagar Ratnakara Nikam, Mayuri Ravindra Joshi, Raj Narayan Marndi
-
Patent number: 11507381Abstract: Closed loop performance controllers of asymmetric multiprocessor systems may be configured and operated to improve performance and power efficiency of such systems by adjusting control effort parameters that determine the dynamic voltage and frequency state of the processors and coprocessors of the system in response to the workload. One example of such an arrangement includes applying hysteresis to the control effort parameter and/or seeding the control effort parameter so that the processor or coprocessor receives a returning workload in a higher performance state. Another example of such an arrangement includes deadline driven control, in which the control effort parameter for one or more processing agents may be increased in response to deadlines not being met for a workload and/or decreased in response to deadlines being met too far in advance. The performance increase/decrease may be determined by comparison of various performance metrics for each of the processing agents.Type: GrantFiled: April 29, 2021Date of Patent: November 22, 2022Assignee: Apple Inc.Inventors: Aditya Venkataraman, Bryan R. Hinch, John G. Dorsey
-
Patent number: 11487760Abstract: Disclosed aspects relate to query plan management associated with a shared pool of configurable computing resources. A query, which relates to a set of data located on the shared pool of configurable computing resources, is detected. A virtual machine includes the set of data. With respect to the virtual machine, a set of burden values of performing a set of asset actions is determined. Based on the set of burden values, a query plan to access the set of data is established. Using at least one asset action of the set of asset actions, the query plan is processed.Type: GrantFiled: October 9, 2020Date of Patent: November 1, 2022Assignee: International Business Machines CorporationInventors: Rafal P. Konik, Roger A. Mittelstadt, Brian R. Muras
-
Patent number: 11481020Abstract: In certain embodiments, an electronic device comprises a temperature sensor; and a processor, wherein the processor is configured to: detect that a temperature of the electronic device exceeds a predetermined temperature; when the temperature exceeds the predetermined temperature, drive at least one process satisfying a predetermined condition for a proportion of time periods and not drive the at least one process during the remaining time periods.Type: GrantFiled: January 20, 2021Date of Patent: October 25, 2022Assignee: Samsung Electronics CO., LTD.Inventors: Sungyong Bang, Hyunjin Noh, Byungsoo Kwon, Jongwoo Kim, Sangmin Lee, Hakryoul Kim, Mooyoung Kim
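Driving a process only for a proportion of time periods once a temperature limit is exceeded amounts to duty-cycling. The period grid and the 50% proportion in this sketch are illustrative assumptions.

```python
# Minimal sketch: when over the temperature limit, run the process in only
# a proportion of the time periods; otherwise run it in every period.

def schedule_periods(temperature, limit, num_periods, proportion=0.5):
    """Return a run/skip flag per time period: run every period when cool,
    run only the given proportion of periods when over the limit."""
    if temperature <= limit:
        return [True] * num_periods
    run_every = round(1 / proportion)
    return [i % run_every == 0 for i in range(num_periods)]

hot = schedule_periods(temperature=48.0, limit=45.0, num_periods=6)
cool = schedule_periods(temperature=40.0, limit=45.0, num_periods=6)
```

At 48 degrees against a 45-degree limit, the process runs in alternating periods only; below the limit it runs continuously.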
-
Patent number: 11467951Abstract: An embodiment of the present invention is directed to a Mainframe CI/CD design solution and pattern that provides a complete end to end process for Mainframe application. This enables faster time to market by performing critical SDLC processes, including build, test, scan and deployment in an automated fashion on a regular basis. An embodiment of the present invention is directed to a CI/CD approach that journeys from receiving requirements to final deployment. For any new application onboarding, teams may implement the CI/CD approach that may be customized per requirements of each LOB/Application.Type: GrantFiled: November 6, 2019Date of Patent: October 11, 2022Assignee: JPMORGAN CHASE BANK, N.A.Inventors: Vinish Pillai, Monish Pingle, Ashwin Sudhakar Shetty, Dharmesh Mohanlal Jain
-
Patent number: 11461622Abstract: Embodiments include techniques for enabling execution of N inferences on an execution engine of a neural network device. Instruction code for a single inference is stored in a memory that is accessible by a DMA engine, the instruction code forming a regular code block. A NOP code block and a reset code block for resetting an instruction DMA queue are stored in the memory. The instruction DMA queue is generated such that, when it is executed by the DMA engine, it causes the DMA engine to copy, for each of N inferences, both the regular code block and an additional code block to an instruction buffer. The additional code block is the NOP code block for the first N-1 inferences and is the reset code block for the Nth inference. When the reset code block is executed by the execution engine, the instruction DMA queue is reset.Type: GrantFiled: June 28, 2019Date of Patent: October 4, 2022Assignee: Amazon Technologies, Inc.Inventors: Samuel Jacob, Ilya Minkin, Mohammad El-Shabani
-
Patent number: 11461133Abstract: Embodiments of the present disclosure relate to a method for managing backup jobs, an electronic device, and a computer program product. The method includes: determining expected execution durations of a group of to-be-executed backup jobs; dividing the group of to-be-executed backup jobs into a plurality of backup job subsets based on the expected execution durations, wherein a difference between the expected execution durations of every two backup jobs in each backup job subset does not exceed a predetermined threshold duration; and adjusting an execution plan of the group of to-be-executed backup jobs to cause the backup jobs in at least one backup job subset in the plurality of backup job subsets to simultaneously begin to be executed.Type: GrantFiled: May 31, 2020Date of Patent: October 4, 2022Assignee: EMC IP HOLDING COMPANY LLCInventors: Min Liu, Ming Zhang, Ren Wang, Xiaoliang Zhu, Jing Yu
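The grouping step in this abstract, splitting jobs into subsets whose expected durations differ by at most a threshold, can be sketched with a greedy pass over sorted durations. The greedy strategy itself is an illustrative assumption.

```python
# Sketch: group backup jobs so that within each subset the difference
# between any two expected durations never exceeds the threshold; each
# subset can then begin execution simultaneously.

def group_jobs(durations, threshold):
    """Greedily split jobs (sorted by expected duration) into subsets where
    max - min duration within a subset stays within the threshold."""
    groups, current = [], []
    for job, dur in sorted(durations.items(), key=lambda kv: kv[1]):
        if current and dur - durations[current[0]] > threshold:
            groups.append(current)
            current = []
        current.append(job)
    if current:
        groups.append(current)
    return groups

durations = {"db": 60, "logs": 5, "media": 55, "configs": 8}  # minutes
groups = group_jobs(durations, threshold=10)
```

The short jobs (`logs`, `configs`) form one subset and the long jobs (`media`, `db`) another, so neither subset leaves a nearly finished job waiting on a much longer one.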
-
Patent number: 11455024Abstract: Systems and methods for improving idle time estimation by a process scheduler are disclosed. An example method comprises calculating, by a process scheduler operating in a kernel space of a computing system, an estimated idle time for a processing core, responsive to detecting a transition of the processing core from an idle state to an active state, recording, an actual idle time of the processing core, and making the estimated idle time and the actual idle time available to a user space process.Type: GrantFiled: April 10, 2019Date of Patent: September 27, 2022Assignee: Red Hat, Inc.Inventor: Michael S. Tsirkin
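The estimate-then-record loop in this abstract can be sketched with a simple estimator. The exponentially weighted average and its weight are assumptions; the patent does not specify the estimator.

```python
# Sketch: on each idle->active transition, record the actual idle time and
# expose both the prior estimate and the actual value to user space, then
# update the estimate (assumed EWMA) for the next idle interval.

def estimate_idle(prev_estimate, last_actual, alpha=0.5):
    """Exponentially weighted estimate of the next idle interval (ms)."""
    return alpha * last_actual + (1 - alpha) * prev_estimate

estimate = 4.0                    # initial estimate, milliseconds
actual_idle_times = [2.0, 6.0]    # measured on wakeups
exposed = []                      # what user space would be able to read
for actual in actual_idle_times:
    exposed.append({"estimated": estimate, "actual": actual})
    estimate = estimate_idle(estimate, actual)
```

Exposing both values lets a user-space process judge how well the scheduler's idle predictions track reality.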
-
Patent number: 11429424Abstract: A method of selectively assigning virtual CPUs (vCPUs) of a virtual machine (VM) to physical CPUs (pCPUs), where execution of the VM is supported by a hypervisor running on a hardware platform including the pCPUs, includes determining that a first vCPU of the vCPUs is scheduled to execute a latency-sensitive workload of the VM and a second vCPU of the vCPUs is scheduled to execute a non-latency-sensitive workload of the VM and assigning the first vCPU to a first pCPU of the pCPUs and the second vCPU to a second pCPU of the pCPUs. A kernel component of the hypervisor pins the assignment of the first vCPU to the first pCPU and does not pin the assignment of the second vCPU to the second pCPU. The method further comprises selectively tagging or not tagging by a user or an automated tool, a plurality of workloads of the VM as latency-sensitive.Type: GrantFiled: July 22, 2020Date of Patent: August 30, 2022Assignee: VMware, Inc.Inventors: Xunjia Lu, Haoqiang Zheng
-
Patent number: 11422856Abstract: Techniques are disclosed relating to scheduling program tasks in a server computer system. An example server computer system is configured to maintain first and second sets of task queues that have different performance characteristics, and to collect performance metrics relating to processing of program tasks from the first and second sets of task queues. Based on the collected performance metrics, the server computer system is further configured to update a scheduling algorithm for assigning program tasks to queues in the first and second sets of task queues. In response to receiving a particular program task associated with a user transaction, the server computer system is also configured to select the first set of task queues for the particular program task, and to assign the particular program task in a particular task queue in the first set of task queues.Type: GrantFiled: June 28, 2019Date of Patent: August 23, 2022Assignee: PayPal, Inc.Inventors: Xin Li, Libin Sun, Chao Zhang, Xiaohan Yun, Jun Zhang, Frédéric Tu, Yang Yu, Lei Wang, Zhijun Ling
-
Patent number: 11416286Abstract: Aspects of the technology described herein can facilitate computing on transient resources. An exemplary computing device may use a task scheduler to access information of a computational task and instability information of a transient resource. Moreover, the task scheduler can schedule the computational task to use the transient resource based at least in part on the rate of data size reduction of the computational task. Further, a checkpointing scheduler in the exemplary computing device can determine a checkpointing plan for the computational task based at least in part on a recomputation cost associated with the instability information of the transient resource. Resultantly, the overall utilization rate of computing resources is improved by effectively utilizing transient resources.Type: GrantFiled: June 24, 2019Date of Patent: August 16, 2022Assignee: Microsoft Technology Licensing, LLCInventors: Ying Yan, Yanjie Gao, Yang Chen, Thomas Moscibroda, Narayanan Ganapathy, Bole Chen, Zhongxin Guo
-
Patent number: 11403220Abstract: An apparatus, a method, a method of manufacturing an apparatus, and a method of constructing an integrated circuit are provided. A processor of an application server layer detects a degree of a change in a workload in an input/output stream received through a network from one or more user devices. The processor determines a degree range, from a plurality of preset degree ranges, that the degree of the change in the workload is within. The processor determines a distribution strategy, from among a plurality of distribution strategies, to distribute the workload across one or more of a plurality of solid state devices (SSDs) in a performance cache tier of a centralized multi-tier storage pool, based on the determined degree range. The processor distributes the workload across the one or more of the plurality of solid state devices based on the determined distribution strategy.Type: GrantFiled: August 28, 2020Date of Patent: August 2, 2022Inventors: Zhengyu Yang, Morteza Hoseinzadeh, Thomas David Evans, Clay Mayers, Thomas Bolt
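Mapping a degree of workload change onto preset ranges, each bound to a distribution strategy, can be sketched as a lookup table. The range boundaries and strategy names below are illustrative assumptions.

```python
# Hypothetical sketch: compute the degree of workload change, find the
# preset range it falls within, and return that range's SSD distribution
# strategy for the performance cache tier.

DEGREE_RANGES = [
    (0.0, 0.1, "round_robin"),         # stable workload
    (0.1, 0.5, "least_loaded"),        # moderate change
    (0.5, float("inf"), "hot_spare"),  # bursty workload
]

def change_degree(previous_iops, current_iops):
    """Relative change in workload between two observations."""
    return abs(current_iops - previous_iops) / previous_iops

def pick_strategy(degree):
    """Map the degree of change onto a preset range and its strategy."""
    for lo, hi, strategy in DEGREE_RANGES:
        if lo <= degree < hi:
            return strategy
    raise ValueError("degree outside configured ranges")

strategy = pick_strategy(change_degree(1000, 1700))
```

A jump from 1000 to 1700 IOPS is a 0.7 degree of change, which falls in the bursty range and selects the assumed `hot_spare` strategy.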
-
Patent number: 11397578Abstract: An apparatus such as a graphics processing unit (GPU) includes a plurality of processing elements configured to concurrently execute a plurality of first waves and accumulators associated with the plurality of processing elements. The accumulators are configured to store accumulated values representative of behavioral characteristics of the plurality of first waves that are concurrently executing on the plurality of processing elements. The apparatus also includes a dispatcher configured to dispatch second waves to the plurality of processing elements based on comparisons of values representative of behavioral characteristics of the second waves and the accumulated values stored in the accumulators. In some cases, the behavioral characteristics of the plurality of first waves comprise at least one of fetch bandwidths, usage of an arithmetic logic unit (ALU), and number of export operations.Type: GrantFiled: August 30, 2019Date of Patent: July 26, 2022Assignee: Advanced Micro Devices, Inc.Inventors: Randy Ramsey, William David Isenberg, Michael Mantor
-
Patent number: 11399082Abstract: A client node may execute an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment. A load balancing component is coupled to the client node, and a first virtual provider entity for the first messaging service component is coupled to the load balancing component. The first virtual provider entity may represent a first HA message broker pair, including: (i) a first leader message broker entity, and (ii) a first follower message broker entity to take control when there is a problem with the first leader message broker entity. A shared database is accessible by the first broker node, the first HA message broker pair, and the second broker node, and includes an administration registry data store.Type: GrantFiled: March 24, 2021Date of Patent: July 26, 2022Assignee: SAP SEInventor: Daniel Ritter