Patents Examined by Eric C. Wai
-
Patent number: 11941450
Abstract: A system and method place an incoming workload within a data center having infrastructure elements (IEs) for execution. Instrumentation data are collected both for individual IEs in the data center and for workload instances executing on each of these IEs. These data are used to train a future load model according to machine learning techniques, especially supervised learning. Future loads, in turn, are used to train a ranking model that ranks IEs according to suitability to execute additional workloads. After receiving an incoming workload, the first model is used to predict, for each IE, the load on its computing resources if the workload were executed on that IE. The resulting predicted loads are then fed into the second model to predict the best ranking of IEs, and the workload is placed on the highest-ranked IE that is available to execute the workload.
Type: Grant
Filed: April 27, 2021
Date of Patent: March 26, 2024
Assignee: Dell Products L.P.
Inventors: Rômulo Teixeira De Abreu Pinho, Satyam Sheshansh, Hung Dinh, Bijan Mohanty
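The two-stage placement described in this abstract can be sketched roughly as follows. This is a minimal illustration, assuming trivial stand-ins (`predict_load`, `rank_ies`) for the two trained models; the real models would be learned from instrumentation data.

```python
# Illustrative two-stage placement: predict per-IE load, rank, place.
# The load formula and field names are assumptions, not from the patent.

def predict_load(ie, workload):
    # Stand-in for the trained future-load model: predicted load is
    # current load plus workload demand scaled by IE capacity.
    return ie["load"] + workload["demand"] / ie["capacity"]

def rank_ies(ies, workload):
    # Stand-in for the trained ranking model: lower predicted load
    # means a more suitable IE for the incoming workload.
    return sorted(ies, key=lambda ie: predict_load(ie, workload))

def place(ies, workload):
    # Place the workload on the highest-ranked IE that is available.
    for ie in rank_ies(ies, workload):
        if ie["available"]:
            return ie["name"]
    return None
```

Note how the highest-ranked IE (`c` below, if unavailable) is skipped in favor of the next available one, mirroring the final placement step in the abstract.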
-
Patent number: 11934887
Abstract: The present disclosure describes a distributed model compilation system. A master node of the system determines the logic calculation graph of the model based on model information, divides the logic calculation graph into multiple logic calculation sub-graphs, generates a distributing message for each logic calculation sub-graph, and then transmits the distributing message to a slave node. Each of the slave nodes allocates a local computing resource to compile the logic calculation sub-graph based on the received distributing message, and transmits compilation completion information to the master node. The master node determines the completion of model compilation based on the compilation completion information returned by each slave node, and executes the target work based on the compiled model.
Type: Grant
Filed: September 13, 2023
Date of Patent: March 19, 2024
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Fei Wu, Guang Chen, Feng Lin
-
Patent number: 11934879
Abstract: Presented herein are systems and methods for handling processing of data in cloud environments. A server receives a first dataset generated in response to a function of a first application. The server generates a set of identifiers defining a sequence of processing of the first dataset associated with the function. The identifiers include a first identifier indicating the first application as a predecessor for the first dataset and a second identifier indicating a second application as a successor for the first dataset. The server identifies the second application corresponding to the second identifier as the successor for processing the first dataset. The server communicates at least a portion of the first dataset with a second server hosting the second application to receive a second dataset generated by the second application. The server stores the second dataset in the cloud environment.
Type: Grant
Filed: November 7, 2023
Date of Patent: March 19, 2024
Assignee: CITIBANK, N.A.
Inventors: Hansraj Jain, Ma Jun, Rajagopalan Premkumar, Vidyalakshmi Pathai Ramakrishnan
-
Patent number: 11934864
Abstract: In one embodiment, a method includes empirically analyzing a set of active reservations and a current set of consumable resources belonging to a class of consumable resources. Each active reservation is of a managed task type and includes a group of one or more tasks requiring access to a consumable resource of the class. The method further includes, based on the empirical analysis, clocking the set of active reservations each clocking cycle. In addition, the method includes, responsive to the clocking, sorting a priority queue of the set of active reservations.
Type: Grant
Filed: March 29, 2022
Date of Patent: March 19, 2024
Assignee: MessageOne, Inc.
Inventor: Jon Franklin Matousek
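The clocking-and-sorting cycle described above can be sketched as follows. This is a minimal illustration assuming a simple aging rule (priority improves each cycle a reservation waits); the patent's empirical analysis of reservations and resources is abstracted away, and the class shape is hypothetical.

```python
import heapq

# Illustrative clocking of active reservations: each clocking cycle ages
# every reservation, then the priority queue is rebuilt (sorted) by
# effective priority. The aging rule is an assumption for illustration.

class Reservation:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.age = 0  # clocking cycles spent waiting

    def effective_priority(self):
        # Lower value is served sooner; aging prevents starvation.
        return self.base_priority - self.age

def clock_cycle(reservations):
    # One clocking cycle: age every active reservation, then rebuild
    # the priority queue ordered by effective priority.
    for r in reservations:
        r.age += 1
    queue = [(r.effective_priority(), r.name) for r in reservations]
    heapq.heapify(queue)
    return queue
```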
-
Systems and methods for establishing a user purpose class resource information computing environment
Patent number: 11922215
Abstract: Systems and methods for purposeful computing are disclosed that, among other things, include a user purpose class resource information computing environment. Such an environment supports resource purpose classes, and further supports resource identification information sets that characterize their respective subject matter resources. The computing environment can be used to identify and evaluate one or more purpose class subject matter resource members.
Type: Grant
Filed: November 13, 2020
Date of Patent: March 5, 2024
Assignee: Advanced Elemental Technologies, Inc.
Inventors: Victor Henry Shear, Peter Robert Williams, Jaisook Rho, Timothy St. John Redmond, James Jay Horning
-
Patent number: 11922198
Abstract: Systems and methods are provided for assigning and associating resources in a cloud computing environment. Virtual machines in the cloud computing environment can be assigned or associated with pools corresponding to users as dedicated, standby, or preemptible machines. The various states provide users with the ability to reserve a desired level of resources while also allowing the operator of the cloud computing environment to increase resource utilization.
Type: Grant
Filed: November 23, 2021
Date of Patent: March 5, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Bradley Gene Calder, Ju Wang, Vaman Bedekar, Sriram Sankaran, Marvin McNett, II, Pradeep Kumar Gunda, Yang Zhang, Shyam Antony, Kavitha Manivannan, Hemal Khatri
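The pool-and-state association above can be sketched as a small data model. This is a minimal illustration under assumed names (`Pool`, `State`); it only shows the bookkeeping idea (which VMs an operator could reclaim), not the patent's actual mechanism.

```python
from enum import Enum

# Illustrative model of VMs associated with a user's pool in dedicated,
# standby, or preemptible states. Names and the reclaim rule are assumed.

class State(Enum):
    DEDICATED = "dedicated"
    STANDBY = "standby"
    PREEMPTIBLE = "preemptible"

class Pool:
    def __init__(self, user):
        self.user = user
        self.vms = {}  # vm id -> State

    def assign(self, vm_id, state):
        # Assign or re-associate a VM with this user's pool.
        self.vms[vm_id] = state

    def preemptible_vms(self):
        # VMs the operator may reclaim to raise overall utilization.
        return [v for v, s in self.vms.items() if s is State.PREEMPTIBLE]
```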
-
Patent number: 11914335
Abstract: An industrial control system may receive processing information from at least two control systems associated with at least two components within an industrial automation system. The processing information may include a processing load value for each of the at least two control systems. The industrial control system may then distribute processing loads associated with the at least two control systems when a total processing load between the at least two control systems is unbalanced.
Type: Grant
Filed: January 29, 2021
Date of Patent: February 27, 2024
Assignee: Rockwell Automation Technologies, Inc.
Inventors: Charles M. Rischar, William Sinner, Michael Kalan, Haithem Mansouri, Subbian Govindaraj, Juergen Weinhofer, Andrew R. Stump, Daniel S. DeYoung, Frank Kulaszewicz, Edward A. Hill, Keith Staninger, Matheus Bulho
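The rebalancing idea can be sketched as follows. This is a minimal illustration assuming a threshold for "unbalanced" and a halve-the-difference redistribution rule; both are assumptions for illustration, not details from the patent.

```python
# Illustrative load redistribution between control systems: if reported
# loads differ by more than a threshold, shift half the difference from
# the heaviest to the lightest system. Threshold and rule are assumed.

def rebalance(loads, threshold=0.1):
    heavy = max(loads, key=loads.get)
    light = min(loads, key=loads.get)
    if loads[heavy] - loads[light] <= threshold:
        return loads  # balanced enough; leave loads unchanged
    shift = (loads[heavy] - loads[light]) / 2
    loads = dict(loads)  # copy so the caller's dict is untouched
    loads[heavy] -= shift
    loads[light] += shift
    return loads
```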
-
Patent number: 11900162
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to manage a computing resource allocation for a software application. In some implementations, a method may include executing a first test function on a distributed computing system at a first plurality of allocation setpoints for the computing resource; based on the execution, obtaining one or more performance metrics for the first test function for each setpoint of the first plurality of allocation setpoints; training a machine learning model based on the obtained one or more performance metrics; and utilizing the trained machine learning model to manage the computing resource for a second function.
Type: Grant
Filed: February 23, 2022
Date of Patent: February 13, 2024
Assignee: SEDAI, INC.
Inventors: Suresh Mathew, Nikhil Gopinath Kurup, Hari Chandrasekhar, Benjamin Thomas
-
Patent number: 11886929
Abstract: The present disclosure relates to systems, methods, and computer-readable media for deploying cloud-native services across a plurality of cloud-computing platforms. For example, systems disclosed herein identify resource identifiers associated with cloud-computing services (e.g., types of services) to be deployed on one or more resources capable of executing or otherwise providing cloud-native services. The systems disclosed herein further generate resource bindings including deployment specifications that include data for deploying cloud-native services on corresponding platform resources (e.g., cloud resources, edge resources). Using the resource bindings, the systems disclosed herein can deploy cloud-native services across multiple platforms via control planes configured to manage operation of resources on the different platforms.
Type: Grant
Filed: August 3, 2021
Date of Patent: January 30, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Haishi Bai, Mark Eugene Russinovich, Boris Markus Scholl, Yaron Schneider
-
Patent number: 11880712
Abstract: In a computing resource environment including at least one resource capable of being allocated to at least one of a plurality of tasks, techniques are disclosed for applying a taint to a resource, the taint being configured to prevent the resource from being claimed for a resource request without a toleration for that taint. Variations include receiving, at a resource scheduler in the resource environment, a request to allocate the resource to perform a particular task and determining whether the resource is subject to a taint. If the resource is subject to a taint, the request is analyzed to determine whether it includes a toleration for the taint. If it does, the resource is allocated to the task; if it does not, the resource is not allocated due to the taint.
Type: Grant
Filed: January 20, 2022
Date of Patent: January 23, 2024
Assignee: Google LLC
Inventors: John Wilkes, Brian Grant
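The taint/toleration check above can be sketched directly (this is the same general pattern familiar from Kubernetes node taints). A minimal illustration, assuming taints and tolerations are represented as plain sets of labels; the data model is an assumption, not from the patent.

```python
# Illustrative taint/toleration scheduling: a tainted resource may only
# be claimed by a request that tolerates every taint on it.

def can_allocate(resource_taints, request_tolerations):
    # The resource may be claimed only if each taint is tolerated.
    return all(t in request_tolerations for t in resource_taints)

def schedule(resources, request):
    # Allocate the first resource whose taints the request tolerates.
    for name, taints in resources.items():
        if can_allocate(taints, request.get("tolerations", set())):
            return name
    return None  # no resource allocated due to untolerated taints
```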
-
Patent number: 11880715
Abstract: Methods and systems for load balancing in a neural network system using metadata are disclosed. Any one or a combination of one or more kernels, one or more neurons, and one or more layers of the neural network system are tagged with metadata. A scheduler detects whether there are neurons that are available to execute. The scheduler uses the metadata to schedule and load balance computations across compute resources and available resources.
Type: Grant
Filed: April 5, 2021
Date of Patent: January 23, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Nicholas Malaya, Yasuko Eckert
-
Patent number: 11871540
Abstract: Method, system, and computer program product embodiments are described for heating a flow of liquid by transfer of heat from computing devices. Embodiments also include determining a dynamic cooling capacity index for each of the computing devices, and allocating processing workload among the computing devices based on their dynamic cooling capacity indexes. Embodiments further include allocating workload and/or regulating the flow rate of the flow of liquid to maintain a predetermined value or range of values of the temperature of the liquid.
Type: Grant
Filed: February 26, 2021
Date of Patent: January 9, 2024
Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
Inventors: Chunjian Ni, Vinod Kamath, Jeffrey Scott Holland, Bejoy Jose Kochuparambil, Andrew Thomas Junkins, Paul Artman
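The index-driven allocation can be sketched greedily. This is a minimal illustration assuming a linear cooling index (spare cooling capacity minus current heat load) and unit workload increments; the actual index computation in the patent is not specified here.

```python
# Illustrative workload allocation by dynamic cooling capacity index:
# place each unit of work on the device that can currently absorb the
# most additional heat, recomputing the index as load is added.
# The linear index model is an assumption for illustration.

def cooling_index(device):
    # Hypothetical index: spare cooling capacity minus current load.
    return device["cooling_capacity"] - device["load"]

def allocate(devices, units):
    # Greedily assign each workload unit to the device with the
    # highest remaining dynamic cooling capacity index.
    for _ in range(units):
        best = max(devices, key=cooling_index)
        best["load"] += 1
    return {d["name"]: d["load"] for d in devices}
```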
-
Patent number: 11868164
Abstract: Systems and methods are described for management of a coordinated environment for execution of on-demand code with a reduced memory footprint. A coordinator receives individual on-demand code execution requests or tasks from coordinated devices. The coordinator can process the on-demand code execution requests to associate at least a subset of the on-demand code executions with one or more groups sharing executable code. The coordinated device can implement the execution of the individual tasks without requiring a separate loading and execution of the on-demand executable code. Accordingly, the coordinated device may be implemented on computing devices having more limited computing resources by reducing the memory footprint required to execute the on-demand task.
Type: Grant
Filed: January 4, 2021
Date of Patent: January 9, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Arunachalam Sundararam, Erik Jacob Sipsma
-
Patent number: 11847506
Abstract: In some aspects, techniques may include monitoring a primary load of a datacenter and a reserve load of the datacenter. The primary load and reserve load can be monitored by a computing device. The primary load of the datacenter can be configured to be powered by one or more primary generator blocks having a primary capacity, and the reserve load of the datacenter can be configured to be powered by one or more reserve generator blocks having a reserve capacity. Also, the techniques may include detecting that the primary load of the datacenter exceeds the primary capacity. In addition, the techniques may include connecting the reserve generator blocks to at least one of the primary generator blocks and the primary load using a computing device switch.
Type: Grant
Filed: December 2, 2022
Date of Patent: December 19, 2023
Assignee: Oracle International Corporation
Inventors: Roy Mehdi Zeighami, Craig Alderson Pennington
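The monitoring-and-switching step can be sketched as follows. This is a minimal illustration under assumed data shapes; the switching action is modeled simply as returning which reserve blocks to connect, covering the capacity deficit in order.

```python
# Illustrative capacity check: when the primary load exceeds primary
# generator capacity, connect reserve generator blocks to the primary
# side until the deficit is covered. Data model is an assumption.

def check_and_connect(primary_load, primary_capacity, reserve_blocks):
    connected = []
    if primary_load > primary_capacity:
        # Connect reserve blocks one at a time until the deficit
        # between load and capacity is covered.
        deficit = primary_load - primary_capacity
        for block in reserve_blocks:
            connected.append(block["name"])
            deficit -= block["capacity"]
            if deficit <= 0:
                break
    return connected
```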
-
Patent number: 11842185
Abstract: A gateway device is connected via network(s) to electronic controllers on-board a vehicle, where at least one of the electronic controllers is implemented in a virtual machine. The gateway device includes one or more memories, and circuitry that acquires firmware update information. The circuitry determines whether a first electronic controller satisfies a second condition based on second information, which is whether the first electronic controller includes a firmware cache for performing a pre-update firmware cache operation. The circuitry also causes, when the second condition is not satisfied, the gateway device to execute a proxy process, where the gateway device requests the first electronic controller to transmit boot ROM data to the gateway device, creates updated boot ROM data with the updated firmware, and transmits the updated boot ROM data to the first electronic controller, which updates the boot ROM and resets itself with the updated firmware.
Type: Grant
Filed: January 10, 2023
Date of Patent: December 12, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Yoshihiro Ujiie, Hideki Matsushima, Jun Anzai, Toshihisa Nakano, Tomoyuki Haga, Manabu Maeda, Takeshi Kishikawa
-
Patent number: 11841698
Abstract: Arrangement and method for securely executing an automation program in a cloud computing environment, wherein the automation program is installed on computer hardware in a public IT infrastructure, and wherein the computer hardware is connected via a data connection to a cloud server, where the connection and a dedicated runtime environment of the computer hardware are configured such that the automation program is transferrable onto the computer hardware and its execution can be monitored via the server and data connection, such that the automation program and sensitive information, i.e.
Type: Grant
Filed: September 23, 2020
Date of Patent: December 12, 2023
Assignee: SIEMENS AKTIENGESELLSCHAFT
Inventors: Markus Höfele, Peter Kob, Rolf Schrey, Armin Zeltner
-
Patent number: 11836506
Abstract: A method and an apparatus are described that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices, such as CPUs or GPUs, concurrently. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices if the GPU is busy with graphics processing threads.
Type: Grant
Filed: December 29, 2022
Date of Patent: December 5, 2023
Assignee: APPLE INC.
Inventors: Aaftab Munshi, Jeremy Sandmel
-
Patent number: 11829799
Abstract: A method, a structure, and a computer system for predicting pipeline training requirements. The exemplary embodiments may include receiving one or more worker node features from one or more worker nodes, extracting one or more pipeline features from one or more pipelines to be trained, and extracting one or more dataset features from one or more datasets used to train the one or more pipelines. The exemplary embodiments may further include predicting an amount of one or more resources required for each of the one or more worker nodes to train the one or more pipelines using the one or more datasets based on one or more models that correlate the one or more worker node features, one or more pipeline features, and one or more dataset features with the one or more resources. Lastly, the exemplary embodiments may include identifying a worker node requiring a least amount of the one or more resources of the one or more worker nodes for training the one or more pipelines.
Type: Grant
Filed: October 13, 2020
Date of Patent: November 28, 2023
Assignee: International Business Machines Corporation
Inventors: Saket Sathe, Gregory Bramble, Long Vu, Theodoros Salonidis
-
Patent number: 11829804
Abstract: A processing system is described which assigns jobs to heterogeneous processing modules. The processing system assigns jobs to the processing modules in a manner that attempts to accommodate the service demands of the jobs, but without advance knowledge of the service demands. In one case, the processing system implements the processing modules as computing units that have different physical characteristics. Alternatively, or in addition, the processing system may implement the processing modules as threads that are executed by computing units. Each thread which runs on a computing unit offers a level of performance that depends on a number of other threads that are simultaneously being executed by the same computing unit.
Type: Grant
Filed: September 15, 2021
Date of Patent: November 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yuxiong He, Sameh Elnikety, Kathryn S. McKinley, Shaolei Ren
-
Methods and systems for automating deployment of applications in a multi-tenant database environment
Patent number: 11822954
Abstract: In accordance with embodiments disclosed herein, there are provided mechanisms and methods for automating deployment of applications in a multi-tenant database environment. For example, in one embodiment, mechanisms include managing a plurality of machines operating as a machine farm within a datacenter by executing an agent provisioning script at a control hub, instructing the plurality of machines to download and instantiate a lightweight agent; pushing a plurality of URL (Uniform Resource Locator) references from the control hub to the instantiated lightweight agent on each of the plurality of machines specifying one or more applications to be provisioned and one or more dependencies for each of the applications; and loading, via the lightweight agent at each of the plurality of machines, the one or more applications and the one or more dependencies for each of the one or more applications into memory of each respective machine.
Type: Grant
Filed: January 27, 2021
Date of Patent: November 21, 2023
Assignee: Salesforce, Inc.
Inventors: Pallav Kothari, Phillip Oliver Metting van Rijn
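The control-hub flow above can be sketched as follows. A minimal illustration under assumed names (`LightweightAgent`, `provision`); downloading and memory loading are modeled simply as recording the referenced applications and dependencies per agent, not as actual network or process operations.

```python
# Illustrative control-hub provisioning: push URL references naming
# applications and their dependencies to a lightweight agent on each
# machine, which "loads" them (recorded in a list standing in for
# memory). Class and field names are assumptions for illustration.

class LightweightAgent:
    def __init__(self, machine):
        self.machine = machine
        self.loaded = []  # stands in for apps loaded into memory

    def receive(self, url_refs):
        # Load each referenced application and its dependencies.
        for ref in url_refs:
            self.loaded.append(ref["app"])
            self.loaded.extend(ref["dependencies"])

def provision(machines, url_refs):
    # Control hub: instantiate an agent per machine, push the refs.
    agents = [LightweightAgent(m) for m in machines]
    for agent in agents:
        agent.receive(url_refs)
    return agents
```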