Patents by Inventor Alexandra Fedorova
Alexandra Fedorova has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8818760
Abstract: A modular dynamically re-configurable profiling core may be used to provide both operating systems and applications with detailed information about run time performance bottlenecks and may enable them to address these bottlenecks via scheduling or dynamic compilation. As a result, application software may be able to better leverage the intrinsic nature of the multi-core hardware platform, be it homogeneous or heterogeneous. The profiling functionality may be desirably isolated on a discrete, separate and modular profiling core, which may be referred to as a configurable profiler (CP). The modular configurable profiling core may facilitate inclusion of rich profiling functionality into new processors via modular reuse of the inventive CP. The modular configurable profiling core may improve a customer's experience and productivity when used in conjunction with commercial multi-core processors.
Type: Grant
Filed: October 29, 2010
Date of Patent: August 26, 2014
Assignee: Simon Fraser University
Inventors: Lesley Lorraine Shannon, Alexandra Fedorova
-
Patent number: 8539491
Abstract: A thread scheduling technique for assigning multiple threads on a single integrated circuit depends on the threads' CPI (cycles-per-instruction) values. The technique attempts to balance, to the extent possible, the loads among the processing cores by assigning threads of relatively long latency (high CPI) together with threads of relatively short latency (low CPI) on the same processing core.
Type: Grant
Filed: July 26, 2004
Date of Patent: September 17, 2013
Assignee: Oracle America, Inc.
Inventors: Christopher A. Small, Daniel S. Nussbaum, Alexandra Fedorova
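The pairing idea in this abstract can be illustrated with a small greedy scheduler. This is a hypothetical sketch, not the patented implementation: the function name, the (name, CPI) thread representation, and the round-robin pairing rule are all invented for illustration.

```python
def balance_threads(threads, num_cores):
    """Pair long-latency (high-CPI) threads with short-latency (low-CPI)
    threads so each core carries a roughly even load.

    `threads` is a list of (name, cpi) tuples; returns one thread group
    per core. Greedy sketch: sort by CPI, then repeatedly pair the
    highest-CPI thread with the lowest-CPI thread, rotating over cores.
    """
    ordered = sorted(threads, key=lambda t: t[1])  # ascending CPI
    cores = [[] for _ in range(num_cores)]
    lo, hi = 0, len(ordered) - 1
    core = 0
    while lo <= hi:
        cores[core].append(ordered[hi])        # long-latency thread
        if lo < hi:
            cores[core].append(ordered[lo])    # pair with short-latency thread
        lo += 1
        hi -= 1
        core = (core + 1) % num_cores
    return cores
```

With four threads of CPIs 0.5, 1.0, 3.0 and 4.0 on two cores, the sketch yields per-core CPI sums of 4.5 and 4.0, i.e. the loads end up close rather than one core holding both heavy threads.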
-
Patent number: 8533719
Abstract: The disclosed embodiments provide a system that facilitates scheduling threads in a multi-threaded processor with multiple processor cores. During operation, the system executes a first thread in a processor core that is associated with a shared cache. During this execution, the system measures one or more metrics to characterize the first thread. Then, the system uses the characterization of the first thread and a characterization of a second thread to predict a performance impact that would occur if the second thread were to simultaneously execute in a second processor core that is also associated with the cache. If the predicted performance impact indicates that executing the second thread on the second processor core will improve performance for the multi-threaded processor, the system executes the second thread on the second processor core.
Type: Grant
Filed: April 5, 2010
Date of Patent: September 10, 2013
Assignee: Oracle International Corporation
Inventors: Alexandra Fedorova, David Vengerov, Kishore Kumar Pusukuri
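The predict-then-schedule step might look roughly like the following. The interference model here (slowdown proportional to the partner's miss rate times the thread's own cache sensitivity) and the threshold are invented assumptions for illustration, not the model claimed in the patent.

```python
def predict_co_run_impact(first, second):
    """Hypothetical interference model: each thread's slowdown grows with
    its partner's cache miss rate, weighted by its own sensitivity to
    losing shared-cache space. Both metrics are assumed to come from
    hardware counters collected while the threads ran."""
    slowdown_first = 1.0 + second["miss_rate"] * first["cache_sensitivity"]
    slowdown_second = 1.0 + first["miss_rate"] * second["cache_sensitivity"]
    return slowdown_first * slowdown_second  # combined impact factor

def schedule_second_thread(first, second, threshold=1.1):
    """Run the second thread on the second core only when the predicted
    combined impact stays below an assumed tunable threshold."""
    return predict_co_run_impact(first, second) < threshold
```

A light pair (low miss rates) passes the check and is co-scheduled; a pair of cache-heavy threads is predicted to thrash the shared cache and is kept apart.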
-
Patent number: 8490101
Abstract: A computer system includes an integrated circuit that has a plurality of processing cores fabricated therein and configured to perform operations in parallel. Each processing core is configured to process multiple threads, where a thread is assigned to one of the plurality of processing cores dependent on a cache hit rate of the thread.
Type: Grant
Filed: November 29, 2004
Date of Patent: July 16, 2013
Assignee: Oracle America, Inc.
Inventors: Christopher A. Small, Alexandra Fedorova, Daniel S. Nussbaum
-
Publication number: 20120022832
Abstract: A modular dynamically re-configurable profiling core may be used to provide both operating systems and applications with detailed information about run time performance bottlenecks and may enable them to address these bottlenecks via scheduling or dynamic compilation. As a result, application software may be able to better leverage the intrinsic nature of the multi-core hardware platform, be it homogeneous or heterogeneous. The profiling functionality may be desirably isolated on a discrete, separate and modular profiling core, which may be referred to as a configurable profiler (CP). The modular configurable profiling core may facilitate inclusion of rich profiling functionality into new processors via modular reuse of the inventive CP. The modular configurable profiling core may improve a customer's experience and productivity when used in conjunction with commercial multi-core processors.
Type: Application
Filed: October 29, 2010
Publication date: January 26, 2012
Inventors: Lesley Lorraine Shannon, Alexandra Fedorova
-
Patent number: 8069444
Abstract: In a computer system with a multi-core processor having a shared cache memory level, an operating system scheduler adjusts the CPU latency of a thread running on one of the cores to equal the fair CPU latency that the thread would experience if the cache memory were equally shared, by adjusting the CPU time quantum of the thread. In particular, during a reconnaissance time period, the operating system scheduler gathers information about the threads via conventional hardware counters and uses an analytical model to estimate the fair cache miss rate that the thread would experience if the cache memory were equally shared. During a subsequent calibration period, the operating system scheduler computes the fair CPU latency from runtime statistics and the previously computed fair cache miss rate to determine the fair CPI value.
Type: Grant
Filed: August 29, 2006
Date of Patent: November 29, 2011
Assignee: Oracle America, Inc.
Inventor: Alexandra Fedorova
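The calibration step described above can be sketched numerically. Both formulas below are assumptions made for illustration: a fair CPI built from an ideal CPI plus miss-induced stall cycles, and a quantum scaled by the ratio of actual to fair CPI. The exact model and scaling rule in the patent may differ.

```python
def fair_cpi(ideal_cpi, miss_penalty_cycles, fair_miss_rate):
    """Fair CPI = CPI absent cache contention plus the stall cycles
    implied by the fair (equal-share) cache miss rate.
    (Assumed linear model, not the patented formula.)"""
    return ideal_cpi + miss_penalty_cycles * fair_miss_rate

def adjusted_quantum(base_quantum, actual_cpi, fair_cpi_value):
    """Scale a thread's CPU time quantum so its effective CPU latency
    matches equal cache sharing: a thread running slower than its fair
    CPI (a cache-starved victim) gets a longer quantum, and a thread
    running faster than fair gives time back."""
    return base_quantum * (actual_cpi / fair_cpi_value)
```

For example, a thread whose actual CPI is twice its fair CPI would receive a doubled quantum under this sketch, compensating for the cycles it lost to unfair cache sharing.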
-
Publication number: 20110246995
Abstract: The disclosed embodiments provide a system that facilitates scheduling threads in a multi-threaded processor with multiple processor cores. During operation, the system executes a first thread in a processor core that is associated with a shared cache. During this execution, the system measures one or more metrics to characterize the first thread. Then, the system uses the characterization of the first thread and a characterization of a second thread to predict a performance impact that would occur if the second thread were to simultaneously execute in a second processor core that is also associated with the cache. If the predicted performance impact indicates that executing the second thread on the second processor core will improve performance for the multi-threaded processor, the system executes the second thread on the second processor core.
Type: Application
Filed: April 5, 2010
Publication date: October 6, 2011
Applicant: Oracle International Corporation
Inventors: Alexandra Fedorova, David Vengerov, Kishore Kumar Pusukuri
-
Patent number: 8028286
Abstract: A thread scheduler identifies a thread operable to be scheduled by a scheduling policy for execution on the chip multiprocessor. The thread scheduler estimates, for the thread, a performance value that is based on runtime statistics of the thread for a shared resource on the chip multiprocessor. Additionally, the thread scheduler applies the performance value to the scheduling policy in order to reallocate processor time of the thread commensurate with the performance value under fair distribution of the shared resource on the chip multiprocessor. The thread scheduler also applies the performance value to the scheduling policy in order to reallocate processor time of at least one co-executing thread to compensate for the reallocation of processor time to the thread.
Type: Grant
Filed: November 30, 2006
Date of Patent: September 27, 2011
Assignee: Oracle America, Inc.
Inventor: Alexandra Fedorova
-
Patent number: 7818747
Abstract: A chip multithreading processor schedules and assigns threads to its processing cores dependent on estimated miss rates in a shared cache memory of the threads. A cache miss rate of a thread is estimated by measuring cache miss rates of one or more groups of executing threads, where at least one of the groups includes the thread of interest. Using a determined estimated cache miss rate of the thread, the thread is scheduled with other threads to achieve a relatively low cache miss rate in the shared cache memory.
Type: Grant
Filed: November 3, 2005
Date of Patent: October 19, 2010
Assignee: Oracle America, Inc.
Inventors: Alexandra Fedorova, Christopher A. Small
-
Patent number: 7689773
Abstract: A caching estimator process identifies a thread for determining the fair cache miss rate of the thread. The caching estimator process executes the thread concurrently on the chip multiprocessor with a plurality of peer threads to measure the actual cache miss rates of the respective threads while executing concurrently. Additionally, the caching estimator process computes the fair cache miss rate of the thread based on the relationship between the actual miss rate of the thread and the actual miss rates of the plurality of peer threads. As a result, the caching estimator applies the fair cache miss rate of the thread to a scheduling policy of the chip multiprocessor.
Type: Grant
Filed: November 30, 2006
Date of Patent: March 30, 2010
Assignee: Sun Microsystems, Inc.
Inventor: Alexandra Fedorova
-
Patent number: 7487317
Abstract: A chip multithreading processor schedules and assigns threads to its processing cores dependent on estimated miss rates in a shared cache memory of the threads. A cache miss rate of a thread is estimated by measuring cache miss rates of one or more groups of executing threads, where at least one of the groups includes the thread of interest. Using a determined estimated cache miss rate of the thread, the thread is scheduled with other threads to achieve a relatively low cache miss rate in the shared cache memory.
Type: Grant
Filed: November 3, 2005
Date of Patent: February 3, 2009
Assignee: Sun Microsystems, Inc.
Inventors: Alexandra Fedorova, Christopher A. Small
-
Patent number: 7457931
Abstract: An estimate of the throughput of a multi-threaded processor based on measured miss rates of a cache memory associated with the processor is adjusted to account for cache miss processing delays due to memory bus access contention. In particular, the throughput is initially calculated from the cache memory miss rates under the assumption that the memory bus between the cache memory and main memory has infinite bandwidth; this initial estimate is then used to estimate a request cycle time between memory access attempts for a typical thread. The request cycle time, in turn, is used to determine a memory bus access delay that is then used to adjust the initial processor throughput estimate. The adjusted estimate can be used for thread scheduling in a multiprocessor system.
Type: Grant
Filed: June 1, 2005
Date of Patent: November 25, 2008
Assignee: Sun Microsystems, Inc.
Inventor: Alexandra Fedorova
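The correction chain (infinite-bandwidth throughput → request cycle time → bus delay → adjusted throughput) might be sketched like this. The M/M/1-style queueing formula, the utilisation cap, and all parameter names are invented assumptions; the patent does not specify this particular bus model.

```python
def adjusted_throughput(initial_ipc, miss_rate, bus_service_cycles, num_threads):
    """Correct an infinite-bandwidth throughput estimate for memory-bus
    contention (illustrative model only).

    initial_ipc        -- throughput estimate ignoring bus contention
    miss_rate          -- cache misses per instruction
    bus_service_cycles -- cycles the bus is busy per miss
    num_threads        -- concurrently executing threads sharing the bus
    """
    # Average cycles between memory requests for a typical thread:
    request_cycle = 1.0 / (initial_ipc * miss_rate)
    # Bus utilisation from all threads issuing requests at that rate
    # (capped below 1.0 so the queueing formula stays finite):
    utilisation = min(0.99, num_threads * bus_service_cycles / request_cycle)
    # Extra waiting per request, M/M/1-style (an assumption):
    bus_delay = bus_service_cycles * utilisation / (1.0 - utilisation)
    # Each miss now costs bus_delay extra cycles; dilute IPC accordingly:
    return initial_ipc / (1.0 + initial_ipc * miss_rate * bus_delay)
```

With a low miss rate the bus is lightly loaded and the adjustment is small; as miss rate or thread count grows, utilisation rises and the estimate drops sharply, matching the intuition that contention delays compound.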
-
Publication number: 20080134184
Abstract: A caching estimator process identifies a thread for determining the fair cache miss rate of the thread. The caching estimator process executes the thread concurrently on the chip multiprocessor with a plurality of peer threads to measure the actual cache miss rates of the respective threads while executing concurrently. Additionally, the caching estimator process computes the fair cache miss rate of the thread based on the relationship between the actual miss rate of the thread and the actual miss rates of the plurality of peer threads. As a result, the caching estimator applies the fair cache miss rate of the thread to a scheduling policy of the chip multiprocessor.
Type: Application
Filed: November 30, 2006
Publication date: June 5, 2008
Inventor: Alexandra Fedorova
-
Publication number: 20080134185
Abstract: A thread scheduler identifies a thread operable to be scheduled by a scheduling policy for execution on the chip multiprocessor. The thread scheduler estimates, for the thread, a performance value that is based on runtime statistics of the thread for a shared resource on the chip multiprocessor. Additionally, the thread scheduler applies the performance value to the scheduling policy in order to reallocate processor time of the thread commensurate with the performance value under fair distribution of the shared resource on the chip multiprocessor. The thread scheduler also applies the performance value to the scheduling policy in order to reallocate processor time of at least one co-executing thread to compensate for the reallocation of processor time to the thread.
Type: Application
Filed: November 30, 2006
Publication date: June 5, 2008
Inventor: Alexandra Fedorova
-
Patent number: 7363450
Abstract: An estimate of the throughput of a multi-threaded processor having N threads is calculated from measured miss rates of a cache memory associated with the processor by computing, based on the cache miss rates, the probability that the processor is in a state with one thread running, the probability that it is in a state with two threads running, and so on up to the probability that it is in a state with N threads running; multiplying each probability by the measured throughput of the processor in the corresponding state; and summing the resulting products. This estimate may also be corrected for bus delays in transferring information between the cache memory and main memory. The estimate can be used for thread scheduling in a multiprocessor system.
Type: Grant
Filed: June 1, 2005
Date of Patent: April 22, 2008
Assignee: Sun Microsystems, Inc.
Inventor: Alexandra Fedorova
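The probability-weighted sum described above is straightforward to express directly. Only the final summation is taken from the abstract; how the state probabilities are derived from cache miss rates is the substance of the patent and is treated as an input here.

```python
def estimate_throughput(state_probs, state_throughputs):
    """Expected throughput as a weighted sum over processor states,
    where state i means i+1 threads are runnable (not stalled on a
    cache miss). `state_probs` come from the cache-miss model;
    `state_throughputs` are measured per state."""
    assert abs(sum(state_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * t for p, t in zip(state_probs, state_throughputs))
```

For a 3-thread processor with state probabilities [0.2, 0.3, 0.5] and measured per-state throughputs [1.0, 1.8, 2.4] instructions per cycle, the estimate is 0.2·1.0 + 0.3·1.8 + 0.5·2.4 = 1.94.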
-
Publication number: 20080059712
Abstract: In a computer system with a multi-core processor having a shared cache memory level, an operating system scheduler adjusts the CPU latency of a thread running on one of the cores to equal the fair CPU latency that the thread would experience if the cache memory were equally shared, by adjusting the CPU time quantum of the thread. In particular, during a reconnaissance time period, the operating system scheduler gathers information about the threads via conventional hardware counters and uses an analytical model to estimate the fair cache miss rate that the thread would experience if the cache memory were equally shared. During a subsequent calibration period, the operating system scheduler computes the fair CPU latency from runtime statistics and the previously computed fair cache miss rate to determine the fair CPI value.
Type: Application
Filed: August 29, 2006
Publication date: March 6, 2008
Applicant: Sun Microsystems, Inc.
Inventor: Alexandra Fedorova