Abstract: An embodiment for speech-to-text auto-scaling of computational resources is provided. The embodiment may include computing a delta for each word in a transcript between a wall clock time and a time when the word is delivered to a client. The embodiment may also include submitting the deltas to a group of metrics servers. The embodiment may further include requesting from the group of metrics servers current values of the deltas. The embodiment may also include determining whether the current values of the deltas exceed a pre-defined max-latency threshold. The embodiment may further include adjusting the allocated computational resources based on a frequency of the current values of the deltas that exceed the pre-defined max-latency threshold. The embodiment may also include creating a histogram from the current values of the deltas and scaling up the allocated computational resources based on a percentage of data points that fall above the pre-defined max-latency threshold.
Type: Grant
Filed: September 3, 2020
Date of Patent: December 6, 2022
Assignee: International Business Machines Corporation
Abstract: A single instruction multiple data processor may accomplish register allocation by identifying live ranges that have incompatible write masks during compilation. Then, edges are added in an interference graph between live ranges that have incompatible masks so that those live ranges will not be assigned to the same physical register.
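The edge-adding step above can be sketched with a simple adjacency-set interference graph. The incompatibility criterion used here (masks that partially overlap) is a hypothetical stand-in, since the abstract does not define when two write masks are incompatible; the function and register names are likewise illustrative.

```python
from itertools import combinations

def masks_incompatible(mask_a: int, mask_b: int) -> bool:
    """Hypothetical rule: two write masks conflict when they share lanes
    but neither mask fully covers the other (a partial overlap)."""
    overlap = mask_a & mask_b
    return overlap != 0 and overlap not in (mask_a, mask_b)

def add_mask_edges(interference: dict, write_masks: dict) -> None:
    """Add an interference edge between every pair of live ranges whose
    write masks are incompatible, so that graph coloring cannot assign
    them to the same physical register."""
    for a, b in combinations(write_masks, 2):
        if masks_incompatible(write_masks[a], write_masks[b]):
            interference.setdefault(a, set()).add(b)
            interference.setdefault(b, set()).add(a)
```

A real allocator would merge these edges into the interference graph it already builds from liveness analysis; this sketch only shows the extra mask-driven edges.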
Abstract: Systems, methods, and computer media for scheduling vertices in a distributed data processing network and allocating computing resources on a processing node in a distributed data processing network are provided. Vertices, subparts of a data job including both data and computer code that runs on the data, are assigned by a job manager to a distributed cluster of process nodes for processing. The process nodes run the vertices and transmit computing resource usage information, including memory and processing core usage, back to the job manager. The job manager uses this information to estimate computing resource usage information for other vertices in the data job that are either still running or waiting to be run. Using the estimated computing resource usage information, each process node can run multiple vertices concurrently.
Type: Grant
Filed: April 23, 2009
Date of Patent: September 11, 2012
Assignee: Microsoft Corporation
Inventors: Bikas Saha, Ronnie Chaiken, James David Ryseff
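The scheduling idea above can be sketched as a process node that admits a vertex only when its estimated memory and core usage fit within the node's remaining capacity. Everything here is illustrative: the class and function names, the averaging estimator, and the capacity figures are assumptions, not the patented scheme.

```python
class ProcessNode:
    """A node that runs multiple vertices concurrently while estimated
    resource usage fits within its capacity."""

    def __init__(self, mem_mb: int, cores: int):
        self.free_mem = mem_mb
        self.free_cores = cores
        self.running = []

    def try_schedule(self, vertex_id: str, est_mem: int, est_cores: int) -> bool:
        """Admit the vertex only if its estimated usage fits; reserve on success."""
        if est_mem <= self.free_mem and est_cores <= self.free_cores:
            self.free_mem -= est_mem
            self.free_cores -= est_cores
            self.running.append(vertex_id)
            return True
        return False

def estimate_usage(completed_reports):
    """Estimate (memory, cores) for pending vertices from usage reports of
    completed ones. A plain average is used here; the actual estimator in
    the patent is not specified by the abstract."""
    n = len(completed_reports)
    mem = sum(mem for mem, _ in completed_reports) / n
    cores = sum(cores for _, cores in completed_reports) / n
    return mem, cores
```

In the described system the job manager would compute these estimates from usage reports sent back by the process nodes and attach them to the vertices it assigns.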