Patents by Inventor Alexandra Fedorova

Alexandra Fedorova has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
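Short illustrative sketches of several of the listed techniques follow the listing below.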

  • Patent number: 8818760
    Abstract: A modular dynamically re-configurable profiling core may be used to provide both operating systems and applications with detailed information about run time performance bottlenecks and may enable them to address these bottlenecks via scheduling or dynamic compilation. As a result, application software may be able to better leverage the intrinsic nature of the multi-core hardware platform, be it homogeneous or heterogeneous. The profiling functionality may be desirably isolated on a discrete, separate and modular profiling core, which may be referred to as a configurable profiler (CP). The modular configurable profiling core may facilitate inclusion of rich profiling functionality into new processors via modular reuse of the inventive CP. The modular configurable profiling core may improve a customer's experience and productivity when used in conjunction with commercial multi-core processors.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: August 26, 2014
    Assignee: Simon Fraser University
    Inventors: Lesley Lorraine Shannon, Alexandra Fedorova
  • Patent number: 8539491
    Abstract: A thread scheduling technique for assigning multiple threads on a single integrated circuit is dependent on the CPIs of the threads. The technique attempts to balance, to the extent possible, the loads among the processing cores by assigning threads of relatively long latency (high CPIs) with threads of relatively short latency (low CPIs) to the same processing core.
    Type: Grant
    Filed: July 26, 2004
    Date of Patent: September 17, 2013
    Assignee: Oracle America, Inc.
    Inventors: Christopher A. Small, Daniel S. Nussbaum, Alexandra Fedorova
  • Patent number: 8533719
    Abstract: The disclosed embodiments provide a system that facilitates scheduling threads in a multi-threaded processor with multiple processor cores. During operation, the system executes a first thread in a processor core that is associated with a shared cache. During this execution, the system measures one or more metrics to characterize the first thread. Then, the system uses the characterization of the first thread and a characterization for a second, different thread to predict a performance impact that would occur if the second thread were to simultaneously execute in a second processor core that is also associated with the shared cache. If the predicted performance impact indicates that executing the second thread on the second processor core will improve performance for the multi-threaded processor, the system executes the second thread on the second processor core.
    Type: Grant
    Filed: April 5, 2010
    Date of Patent: September 10, 2013
    Assignee: Oracle International Corporation
    Inventors: Alexandra Fedorova, David Vengerov, Kishore Kumar Pusukuri
  • Patent number: 8490101
    Abstract: A computer system includes an integrated circuit that has a plurality of processing cores fabricated therein and configured to perform operations in parallel. Each processing core is configured to process multiple threads, where a thread is assigned to one of the plurality of processing cores dependent on a cache hit rate of the thread.
    Type: Grant
    Filed: November 29, 2004
    Date of Patent: July 16, 2013
    Assignee: Oracle America, Inc.
    Inventors: Christopher A. Small, Alexandra Fedorova, Daniel S. Nussbaum
  • Publication number: 20120022832
    Abstract: A modular dynamically re-configurable profiling core may be used to provide both operating systems and applications with detailed information about run time performance bottlenecks and may enable them to address these bottlenecks via scheduling or dynamic compilation. As a result, application software may be able to better leverage the intrinsic nature of the multi-core hardware platform, be it homogeneous or heterogeneous. The profiling functionality may be desirably isolated on a discrete, separate and modular profiling core, which may be referred to as a configurable profiler (CP). The modular configurable profiling core may facilitate inclusion of rich profiling functionality into new processors via modular reuse of the inventive CP. The modular configurable profiling core may improve a customer's experience and productivity when used in conjunction with commercial multi-core processors.
    Type: Application
    Filed: October 29, 2010
    Publication date: January 26, 2012
    Inventors: Lesley Lorraine Shannon, Alexandra Fedorova
  • Patent number: 8069444
    Abstract: In a computer system with a multi-core processor having a shared cache memory level, an operating system scheduler adjusts the CPU time quantum of a thread running on one of the cores so that the thread's CPU latency equals the fair CPU latency it would experience if the cache memory were equally shared. In particular, during a reconnaissance time period, the operating system scheduler gathers information about the threads via conventional hardware counters and uses an analytical model to estimate the fair cache miss rate the thread would experience if the cache memory were equally shared. During a subsequent calibration period, the operating system scheduler computes the fair CPU latency from runtime statistics and the previously computed fair cache miss rate to determine the fair CPI value.
    Type: Grant
    Filed: August 29, 2006
    Date of Patent: November 29, 2011
    Assignee: Oracle America, Inc.
    Inventor: Alexandra Fedorova
  • Publication number: 20110246995
    Abstract: The disclosed embodiments provide a system that facilitates scheduling threads in a multi-threaded processor with multiple processor cores. During operation, the system executes a first thread in a processor core that is associated with a shared cache. During this execution, the system measures one or more metrics to characterize the first thread. Then, the system uses the characterization of the first thread and a characterization for a second, different thread to predict a performance impact that would occur if the second thread were to simultaneously execute in a second processor core that is also associated with the shared cache. If the predicted performance impact indicates that executing the second thread on the second processor core will improve performance for the multi-threaded processor, the system executes the second thread on the second processor core.
    Type: Application
    Filed: April 5, 2010
    Publication date: October 6, 2011
    Applicant: Oracle International Corporation
    Inventors: Alexandra Fedorova, David Vengerov, Kishore Kumar Pusukuri
  • Patent number: 8028286
    Abstract: A thread scheduler identifies a thread operable to be scheduled by a scheduling policy for execution on the chip multiprocessor. The thread scheduler estimates, for the thread, a performance value that is based on runtime statistics of the thread for a shared resource on the chip multiprocessor. Additionally, the thread scheduler applies the performance value to the scheduling policy in order to reallocate processor time of the thread commensurate with the performance value under fair distribution of the shared resource on the chip multiprocessor. The thread scheduler also applies the performance value to the scheduling policy in order to reallocate processor time of at least one co-executing thread to compensate for the reallocation of processor time to the thread.
    Type: Grant
    Filed: November 30, 2006
    Date of Patent: September 27, 2011
    Assignee: Oracle America, Inc.
    Inventor: Alexandra Fedorova
  • Patent number: 7818747
    Abstract: A chip multithreading processor schedules and assigns threads to its processing cores dependent on estimated miss rates in a shared cache memory of the threads. A cache miss rate of a thread is estimated by measuring cache miss rates of one or more groups of executing threads, where at least one of the groups includes the thread of interest. Using a determined estimated cache miss rate of the thread, the thread is scheduled with other threads to achieve a relatively low cache miss rate in the shared cache memory.
    Type: Grant
    Filed: November 3, 2005
    Date of Patent: October 19, 2010
    Assignee: Oracle America, Inc.
    Inventors: Alexandra Fedorova, Christopher A. Small
  • Patent number: 7689773
    Abstract: A caching estimator process identifies a thread for determining the fair cache miss rate of the thread. The caching estimator process executes the thread concurrently on the chip multiprocessor with a plurality of peer threads to measure the actual cache miss rates of the respective threads while executing concurrently. Additionally, the caching estimator process computes the fair cache miss rate of the thread based on the relationship between the actual miss rate of the thread and the actual miss rates of the plurality of peer threads. As a result, the caching estimator applies the fair cache miss rate of the thread to a scheduling policy of the chip multiprocessor.
    Type: Grant
    Filed: November 30, 2006
    Date of Patent: March 30, 2010
    Assignee: Sun Microsystems, Inc.
    Inventor: Alexandra Fedorova
  • Patent number: 7487317
    Abstract: A chip multithreading processor schedules and assigns threads to its processing cores dependent on estimated miss rates in a shared cache memory of the threads. A cache miss rate of a thread is estimated by measuring cache miss rates of one or more groups of executing threads, where at least one of the groups includes the thread of interest. Using a determined estimated cache miss rate of the thread, the thread is scheduled with other threads to achieve a relatively low cache miss rate in the shared cache memory.
    Type: Grant
    Filed: November 3, 2005
    Date of Patent: February 3, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: Alexandra Fedorova, Christopher A. Small
  • Patent number: 7457931
    Abstract: An estimate of the throughput of a multi-threaded processor, based on measured miss rates of a cache memory associated with the processor, is adjusted to account for cache miss processing delays due to memory bus access contention. In particular, the throughput is initially calculated from the cache memory miss rates assuming that the memory bus between the cache memory and main memory has infinite bandwidth; this throughput estimate is then used to estimate a request cycle time between memory access attempts for a typical thread. The request cycle time, in turn, is used to determine a memory bus access delay that is then used to adjust the initial processor throughput estimate. The adjusted estimate can be used for thread scheduling in a multiprocessor system.
    Type: Grant
    Filed: June 1, 2005
    Date of Patent: November 25, 2008
    Assignee: Sun Microsystems, Inc.
    Inventor: Alexandra Fedorova
  • Publication number: 20080134184
    Abstract: A caching estimator process identifies a thread for determining the fair cache miss rate of the thread. The caching estimator process executes the thread concurrently on the chip multiprocessor with a plurality of peer threads to measure the actual cache miss rates of the respective threads while executing concurrently. Additionally, the caching estimator process computes the fair cache miss rate of the thread based on the relationship between the actual miss rate of the thread and the actual miss rates of the plurality of peer threads. As a result, the caching estimator applies the fair cache miss rate of the thread to a scheduling policy of the chip multiprocessor.
    Type: Application
    Filed: November 30, 2006
    Publication date: June 5, 2008
    Inventor: Alexandra Fedorova
  • Publication number: 20080134185
    Abstract: A thread scheduler identifies a thread operable to be scheduled by a scheduling policy for execution on the chip multiprocessor. The thread scheduler estimates, for the thread, a performance value that is based on runtime statistics of the thread for a shared resource on the chip multiprocessor. Additionally, the thread scheduler applies the performance value to the scheduling policy in order to reallocate processor time of the thread commensurate with the performance value under fair distribution of the shared resource on the chip multiprocessor. The thread scheduler also applies the performance value to the scheduling policy in order to reallocate processor time of at least one co-executing thread to compensate for the reallocation of processor time to the thread.
    Type: Application
    Filed: November 30, 2006
    Publication date: June 5, 2008
    Inventor: Alexandra Fedorova
  • Patent number: 7363450
    Abstract: The throughput of a multi-threaded processor having N threads is estimated from measured miss rates of a cache memory associated with the processor by calculating, based on the cache miss rates, the probability that the processor is in a state with one thread running, the probability that it is in a state with two threads running, and so on up to the probability that it is in a state with N threads running, then multiplying each probability by the measured throughput of the processor in the corresponding state and summing the resulting products. This estimate may also be corrected for bus delays in transferring information between the cache memory and main memory. The estimate can be used for thread scheduling in a multiprocessor system.
    Type: Grant
    Filed: June 1, 2005
    Date of Patent: April 22, 2008
    Assignee: Sun Microsystems, Inc.
    Inventor: Alexandra Fedorova
  • Publication number: 20080059712
    Abstract: In a computer system with a multi-core processor having a shared cache memory level, an operating system scheduler adjusts the CPU time quantum of a thread running on one of the cores so that the thread's CPU latency equals the fair CPU latency it would experience if the cache memory were equally shared. In particular, during a reconnaissance time period, the operating system scheduler gathers information about the threads via conventional hardware counters and uses an analytical model to estimate the fair cache miss rate the thread would experience if the cache memory were equally shared. During a subsequent calibration period, the operating system scheduler computes the fair CPU latency from runtime statistics and the previously computed fair cache miss rate to determine the fair CPI value.
    Type: Application
    Filed: August 29, 2006
    Publication date: March 6, 2008
    Applicant: Sun Microsystems, Inc.
    Inventor: Alexandra Fedorova
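
The sketches below illustrate several of the techniques summarized in the abstracts above. They are informal readings, not text from the patents: every function name, constant, and data value in them is a hypothetical stand-in.

Patent 8818760 (and application 20120022832) describes a configurable profiler (CP), a separate profiling core that gives the operating system and applications detailed bottleneck information they can act on through scheduling or dynamic compilation. A minimal sketch of how a scheduler might consume such per-thread profile data, assuming a hypothetical read_cp_counters() interface:

    # Hypothetical sketch: an OS-level consumer of a configurable profiler (CP).
    # The interface (read_cp_counters) and the event names are assumptions,
    # not part of the patent text.
    def read_cp_counters(thread_id):
        # Stand-in for reading the profiling core's counters for one thread.
        sample = {"t1": {"cache_misses": 12000, "instructions": 400000},
                  "t2": {"cache_misses": 900,   "instructions": 500000}}
        return sample[thread_id]

    def bottleneck_report(thread_ids):
        """Summarize which threads look memory-bound, for the scheduler or a JIT."""
        report = {}
        for tid in thread_ids:
            c = read_cp_counters(tid)
            report[tid] = c["cache_misses"] / c["instructions"]  # misses per instruction
        return report

    print(bottleneck_report(["t1", "t2"]))  # e.g. {'t1': 0.03, 't2': 0.0018}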
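
Patent 8539491 pairs long-latency (high-CPI) threads with short-latency (low-CPI) threads on the same core so that per-core load stays balanced. A minimal sketch of that pairing policy, using made-up CPI values:

    # Hypothetical sketch of CPI-based pairing: sort threads by CPI, then pair
    # the highest-CPI (long-latency) thread with the lowest-CPI (short-latency) one.
    def pair_by_cpi(cpis):
        """cpis: thread name -> measured CPI. Returns per-core thread pairs."""
        ordered = sorted(cpis, key=cpis.get)               # low CPI first
        pairs = []
        while len(ordered) >= 2:
            pairs.append((ordered.pop(0), ordered.pop(-1)))  # (short-latency, long-latency)
        if ordered:                                          # odd thread count
            pairs.append((ordered.pop(0),))
        return pairs

    print(pair_by_cpi({"a": 0.9, "b": 4.1, "c": 1.2, "d": 3.0}))
    # [('a', 'b'), ('c', 'd')]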
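
Patent 8533719 (and application 20110246995) characterizes a running thread, predicts the performance impact of co-running a second thread on another core that shares the same cache, and places the second thread there only if the prediction is favorable. A sketch under the assumption that the characterization is a single cache-pressure metric and that the predictor is a simple additive model (both are assumptions, not the patent's method):

    # Hypothetical sketch: decide whether to co-schedule thread B next to thread A
    # on a core sharing the same cache. The "characterization" here is a single
    # cache-pressure number and the predictor is a toy additive model.
    CACHE_PRESSURE_BUDGET = 1.0   # assumed capacity of the shared cache

    def predict_impact(pressure_a, pressure_b):
        """Predicted slowdown factor if both threads share the cache (toy model)."""
        overload = max(0.0, pressure_a + pressure_b - CACHE_PRESSURE_BUDGET)
        return 1.0 + overload            # 1.0 means no predicted degradation

    def should_coschedule(pressure_a, pressure_b, max_slowdown=1.2):
        return predict_impact(pressure_a, pressure_b) <= max_slowdown

    print(should_coschedule(0.4, 0.3))   # True: both fit in the shared cache
    print(should_coschedule(0.8, 0.7))   # False: predicted slowdown 1.5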
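
Patent 8490101 assigns a thread to one of several cores based on the thread's cache hit rate. One plausible (assumed) reading is to keep the aggregate cache demand of each core balanced, as in this sketch:

    # Hypothetical sketch: place each thread on the core whose accumulated cache
    # demand (1 - hit rate, summed over assigned threads) is currently lowest.
    def assign_by_hit_rate(hit_rates, num_cores):
        load = [0.0] * num_cores
        placement = {}
        # Place the most cache-hungry threads first (lowest hit rate first).
        for tid in sorted(hit_rates, key=hit_rates.get):
            core = load.index(min(load))
            placement[tid] = core
            load[core] += 1.0 - hit_rates[tid]
        return placement

    print(assign_by_hit_rate({"t1": 0.95, "t2": 0.60, "t3": 0.55, "t4": 0.90}, 2))
    # e.g. {'t3': 0, 't2': 1, 't4': 1, 't1': 0}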
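
Patent 8069444 (and application 20080059712) first estimates, during a reconnaissance period, the fair cache miss rate a thread would see under equal cache sharing, then derives a fair CPI and adjusts the thread's CPU time quantum so that its effective CPU latency matches the fair value. A sketch of the quantum-adjustment step, assuming the fair CPI has already been estimated and using a simple proportional rule (the proportional rule is a simplification, not the patent's formula):

    # Hypothetical sketch: adjust a thread's time quantum so its progress matches
    # what it would achieve under a fair (equally shared) cache. Estimating
    # fair_cpi is assumed to have happened during the reconnaissance phase.
    BASE_QUANTUM_MS = 10.0

    def adjusted_quantum(actual_cpi, fair_cpi):
        """If the thread runs slower than its fair CPI (actual > fair), give it
        proportionally more CPU time; if it runs faster, give it less."""
        return BASE_QUANTUM_MS * (actual_cpi / fair_cpi)

    print(adjusted_quantum(actual_cpi=3.0, fair_cpi=2.0))   # 15.0 ms: compensated
    print(adjusted_quantum(actual_cpi=1.5, fair_cpi=2.0))   # 7.5 ms: throttled back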
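
Patent 8028286 (and application 20080134185) reallocates processor time to a thread in proportion to a performance value derived from its runtime statistics for a shared resource, and compensates by adjusting the time of co-executing threads. A sketch of that redistribution step, with the performance value treated as already computed:

    # Hypothetical sketch: give one thread extra CPU time based on its performance
    # value, and compensate by scaling the co-runners down so totals still add up.
    def redistribute(shares, thread, performance_value):
        """shares: thread -> CPU share (fractions summing to 1.0).
        performance_value > 1.0 means the thread deserves more time under fair
        sharing of the contested resource; < 1.0 means less."""
        new = dict(shares)
        new[thread] = shares[thread] * performance_value
        taken = new[thread] - shares[thread]
        others = [t for t in shares if t != thread]
        for t in others:                        # spread the compensation evenly
            new[t] = shares[t] - taken / len(others)
        return new

    print(redistribute({"a": 0.25, "b": 0.25, "c": 0.5}, "a", 1.2))
    # roughly {'a': 0.3, 'b': 0.225, 'c': 0.475}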
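
Patents 7818747 and 7487317 estimate a thread's shared-cache miss rate by measuring the miss rates of one or more groups of co-running threads that include the thread of interest, then build schedules with relatively low overall miss rates. A sketch of one simple (assumed) way to back out a per-thread estimate from group measurements:

    # Hypothetical sketch: estimate a thread's miss rate from group measurements.
    # Assumption: within a measured group, the thread's contribution is taken as
    # the group rate minus the known solo rates of its peers, averaged over groups.
    def estimate_miss_rate(thread, group_runs, solo_rates):
        """group_runs: list of (members, measured_group_miss_rate), thread in members.
        solo_rates: known solo miss rates of the other members."""
        estimates = []
        for members, group_rate in group_runs:
            peers = [t for t in members if t != thread]
            estimates.append(group_rate - sum(solo_rates[p] for p in peers))
        return sum(estimates) / len(estimates)

    runs = [({"x", "y"}, 0.07), ({"x", "z"}, 0.09)]
    print(estimate_miss_rate("x", runs, {"y": 0.02, "z": 0.04}))  # about 0.05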
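
Patent 7689773 (and application 20080134184) runs a thread concurrently with peer threads, measures everyone's actual miss rates, and computes the thread's fair miss rate, the rate it would see if the cache were shared equally, from the relationship between its miss rate and the peers' miss rates. The abstract does not give the relationship; the sketch below uses a deliberately simple stand-in (an equal share of the total observed misses) purely to show where such a model would plug into a scheduling policy:

    # Hypothetical sketch: compute a "fair" miss rate for one thread from the
    # actual miss rates observed while it ran with peers. The formula is a
    # placeholder: it treats an equal share of the total misses as "fair".
    def fair_miss_rate(actual, peer_actuals):
        total = actual + sum(peer_actuals)
        return total / (1 + len(peer_actuals))

    def apply_to_policy(thread, actual, peer_actuals):
        fair = fair_miss_rate(actual, peer_actuals)
        # A scheduler could boost threads suffering more misses than their fair rate.
        return {"thread": thread, "actual": actual, "fair": fair,
                "suffering": actual > fair}

    print(apply_to_policy("t1", 0.08, [0.02, 0.02]))
    # roughly {'thread': 't1', 'actual': 0.08, 'fair': 0.04, 'suffering': True}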
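
Patent 7457931 starts from a throughput estimate that assumes an infinite-bandwidth memory bus, uses it to derive the request cycle time between memory accesses of a typical thread, turns that into a bus access delay, and then corrects the original throughput estimate with that delay. A sketch of that correction, with a toy queueing delay model (the specific formulas are placeholders, not the patent's):

    # Hypothetical sketch of the bus-contention correction described in the
    # abstract. The delay model (an M/D/1-style waiting time) is a toy stand-in.
    def adjusted_throughput(ideal_ipc, miss_rate, bus_service_cycles, n_threads):
        """ideal_ipc: throughput assuming an infinite-bandwidth bus.
        miss_rate: cache misses per instruction for a typical thread."""
        # Request cycle time: cycles between memory requests of one thread.
        request_cycle = 1.0 / (ideal_ipc * miss_rate)
        # Bus utilization from all threads, and a simple queueing delay from it.
        utilization = min(0.99, n_threads * bus_service_cycles / request_cycle)
        bus_delay = bus_service_cycles * utilization / (2 * (1 - utilization))
        # Each miss now costs extra bus_delay cycles; rescale the throughput.
        return ideal_ipc / (1.0 + ideal_ipc * miss_rate * bus_delay)

    print(round(adjusted_throughput(ideal_ipc=2.0, miss_rate=0.01,
                                    bus_service_cycles=4, n_threads=8), 3))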
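
Patent 7363450 estimates processor throughput as a probability-weighted sum: from the cache miss rates it computes the probability that 1, 2, and so on up to N threads are running, multiplies each probability by the throughput measured in that state, and sums the products. The weighted sum itself is straightforward, as in the sketch below (the probabilities and per-state throughputs are made-up numbers):

    # Hypothetical sketch: expected throughput as a probability-weighted sum over
    # "k threads running" states. state_probs[k] and state_throughput[k] would be
    # derived from measured cache miss rates and measured per-state throughput.
    def expected_throughput(state_probs, state_throughput):
        assert abs(sum(state_probs.values()) - 1.0) < 1e-9
        return sum(p * state_throughput[k] for k, p in state_probs.items())

    probs = {1: 0.1, 2: 0.3, 3: 0.4, 4: 0.2}    # P(k threads running), made up
    ipc   = {1: 0.8, 2: 1.4, 3: 1.8, 4: 2.0}    # measured IPC in each state, made up
    print(expected_throughput(probs, ipc))       # 0.1*0.8 + 0.3*1.4 + 0.4*1.8 + 0.2*2.0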