Patents by Inventor David Vengerov

David Vengerov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8533026
    Abstract: A method for maximizing revenue generated from a plurality of service level agreements (SLAs) that includes receiving a first subset of the plurality of SLAs for executing a first plurality of jobs, wherein each SLA in the first subset specifies a first maximum requested delay that is greater than an initial minimum offered delay, and wherein a price of each SLA in the first subset is defined by the maximum requested delay and an initial price/delay function, calculating a first expected revenue from executing the first subset, and optimizing a second subset of the plurality of SLAs by replacing the initial minimum offered delay on the initial price/delay function with a new minimum offered delay based on the first expected revenue, wherein each SLA in the second subset specifies a second maximum requested delay that is greater than the new minimum offered delay.
    Type: Grant
    Filed: October 17, 2006
    Date of Patent: September 10, 2013
    Assignee: Oracle America, Inc.
    Inventors: David Vengerov, Ilya Gluhovsky
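
    A minimal sketch of the delay-based SLA pricing idea in the abstract above, in Python. The linear price/delay function, the capacity-cost term, and all numbers are illustrative assumptions rather than the patented optimization:

        # Hedged sketch: toy price/delay model for SLA revenue. Everything here
        # (function form, constants, candidate delays) is a made-up illustration.
        def price(max_requested_delay, base_price=100.0, slope=2.0):
            # Price/delay function: an SLA that tolerates a longer delay pays less.
            return max(base_price - slope * max_requested_delay, 0.0)

        def expected_revenue(requested_delays, min_offered_delay, capacity_cost=30.0):
            # Only SLAs whose requested delay is at least the offered minimum are served;
            # a smaller offered minimum is assumed to cost more capacity to honor.
            served = [d for d in requested_delays if d >= min_offered_delay]
            return sum(price(d) for d in served) - capacity_cost / max(min_offered_delay, 0.1)

        first_subset = [2.0, 5.0, 10.0, 20.0]          # maximum requested delays (seconds)
        initial_min_offered_delay = 1.0
        baseline = expected_revenue(first_subset, initial_min_offered_delay)

        # Replace the initial minimum offered delay with the candidate that maximizes
        # expected revenue over a second subset of SLAs.
        second_subset = [4.0, 6.0, 12.0, 25.0]
        new_min_offered_delay = max((0.5, 1.0, 2.0, 4.0),
                                    key=lambda m: expected_revenue(second_subset, m))
        print(baseline, new_min_offered_delay)
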
  • Publication number: 20130166353
    Abstract: A price optimization system determines the pricing of a plurality of items. The system receives an initial price vector for the items and an objective function, and assigns the initial price vector as a current price vector. The system determines a first new price vector by randomly choosing a first set of allowed prices for the items, and assigning the first set of allowed prices as the current price vector when the objective function is improved. The system then determines a second new price vector by randomly choosing a second set of allowed prices for the items and assigning the second set of allowed prices as the current price vector when the objective function does not decrease by more than a predetermined value. The system sequentially repeats this functionality until a terminating criterion is reached, and then determines the pricing.
    Type: Application
    Filed: December 21, 2011
    Publication date: June 27, 2013
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Kresimir MIHIC, David VENGEROV, Andrew VAKHUTINSKY
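
    A minimal sketch of the two-phase accept/reject search described above; the objective function (a toy revenue model), the allowed price grid, and the tolerance are illustrative assumptions:

        import random

        ALLOWED_PRICES = [9.99, 10.99, 11.99, 12.99]

        def objective(prices):
            # Hypothetical objective: revenue under a toy linear demand model.
            return sum(p * max(100 - 6 * p, 0) for p in prices)

        def optimize(initial_prices, iters=10_000, tolerance=50.0):
            current = list(initial_prices)
            for _ in range(iters):
                # First step: accept a random candidate only if it improves the objective.
                candidate = [random.choice(ALLOWED_PRICES) for _ in current]
                if objective(candidate) > objective(current):
                    current = candidate
                # Second step: accept a random candidate unless it worsens the objective
                # by more than a predetermined value (helps escape local optima).
                candidate = [random.choice(ALLOWED_PRICES) for _ in current]
                if objective(candidate) >= objective(current) - tolerance:
                    current = candidate
            return current

        print(optimize([10.99, 10.99, 10.99]))
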
  • Patent number: 8468041
    Abstract: One embodiment of the present invention provides a system that allocates resources to projects in a computer system. During operation, the system determines a current demand by a project for a resource, and a current allocation of the resource to the project. The system also uses a computational model to compute an expected long-term utility of the project for the resource. Next, the system trades the resource between the project and other projects in the computer system to optimize expected long-term utilities. During this process, the system uses a reinforcement learning technique to update parameters of the computational model for the expected long-term utility of the project based on performance feedback.
    Type: Grant
    Filed: December 22, 2004
    Date of Patent: June 18, 2013
    Assignee: Oracle America, Inc.
    Inventor: David Vengerov
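
    A hedged sketch of the idea above: a linear model of a project's expected long-term utility for a resource, updated with a TD(0)-style reinforcement-learning rule. The feature choice, learning rate, and greedy trading rule are illustrative assumptions:

        class ProjectUtilityModel:
            """Linear estimate of a project's expected long-term utility for a resource."""
            def __init__(self, alpha=0.1, gamma=0.9):
                self.weights = [0.0, 0.0]          # weights for [demand, allocation]
                self.alpha, self.gamma = alpha, gamma

            def utility(self, demand, allocation):
                return self.weights[0] * demand + self.weights[1] * allocation

            def update(self, state, reward, next_state):
                # Temporal-difference update of the model from performance feedback.
                td_error = (reward + self.gamma * self.utility(*next_state)
                            - self.utility(*state))
                self.weights[0] += self.alpha * td_error * state[0]
                self.weights[1] += self.alpha * td_error * state[1]

        def trade(resource_units, models, demand=1.0):
            # Give each unit to the project with the highest estimated marginal utility.
            alloc = {name: 0 for name in models}
            for _ in range(resource_units):
                best = max(models, key=lambda n: models[n].utility(demand, alloc[n] + 1)
                                                 - models[n].utility(demand, alloc[n]))
                alloc[best] += 1
            return alloc

        models = {"db": ProjectUtilityModel(), "web": ProjectUtilityModel()}
        print(trade(4, models))
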
  • Patent number: 8356061
    Abstract: Some embodiments of the present invention provide a system that executes a garbage collector in a computing system. During operation, the system obtains a throughput model for the garbage collector and estimates a set of characteristics associated with the garbage collector. Next, the system applies the characteristics to the throughput model to estimate a throughput of the garbage collector. The system then determines a level of performance for the garbage collector based on the estimated throughput. Finally, the system adjusts a tunable parameter for the garbage collector based on the level of performance to increase the throughput of the garbage collector.
    Type: Grant
    Filed: June 23, 2008
    Date of Patent: January 15, 2013
    Assignee: Oracle America, Inc.
    Inventor: David Vengerov
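
    A hedged sketch of the tuning loop described above. The throughput model (mutator time over total time) and the tunable parameter (heap size) are illustrative assumptions rather than the patented model:

        def estimated_throughput(alloc_rate_mb_s, heap_mb, gc_pause_s=0.05):
            # Toy throughput model: collections per second scale with the allocation
            # rate divided by the heap size, so a larger heap means fewer pauses.
            collections_per_s = alloc_rate_mb_s / heap_mb
            gc_time_fraction = min(collections_per_s * gc_pause_s, 1.0)
            return 1.0 - gc_time_fraction          # fraction of time doing useful work

        def tune(heap_mb, alloc_rate_mb_s, target=0.95, step_mb=64):
            # Adjust the tunable parameter when the estimated performance level is low.
            throughput = estimated_throughput(alloc_rate_mb_s, heap_mb)
            if throughput < target:
                heap_mb += step_mb                  # grow the heap to raise throughput
            return heap_mb, throughput

        print(tune(256, 400.0))
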
  • Patent number: 8276143
    Abstract: Disclosed herein is a system and method for dynamic scheduling of application tasks in a distributed task-based system. The system and method employ a learning mechanism that observes and predicts overall application task costs across a networked system, taking into account how the states or loads of the applications are likely to change over time. The application task costs are defined in economic terms. The system and method allow continuous optimization of application response times as perceived by application users.
    Type: Grant
    Filed: March 10, 2008
    Date of Patent: September 25, 2012
    Assignee: Oracle America, Inc.
    Inventors: David Vengerov, Seth Proctor
  • Patent number: 8166269
    Abstract: Methods and apparatus are provided for adaptively triggering garbage collection. During relatively steady or decreasing rates of allocation of free memory, a threshold for triggering garbage collection is dynamically and adaptively determined on the basis of memory drops (i.e., decreases in free memory) during garbage collection. If a significant increase in the rate of allocation of memory is observed (e.g., two consecutive measurements that exceed a mean rate plus two standard deviations), the threshold is modified based on a memory drop previously observed in conjunction with the current memory allocation rate, or a memory drop estimated to be possible for the current allocation rate.
    Type: Grant
    Filed: November 5, 2009
    Date of Patent: April 24, 2012
    Assignee: Oracle America, Inc.
    Inventor: David Vengerov
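
    A hedged sketch of the adaptive trigger described above; the spike test (two consecutive measurements above the mean rate plus two standard deviations) follows the abstract, while the window handling and drop bookkeeping are illustrative assumptions:

        import statistics

        class AdaptiveGcTrigger:
            def __init__(self):
                self.alloc_rates = []      # recent free-memory allocation rates
                self.observed_drops = {}   # allocation-rate bucket -> drop seen during GC

            def threshold(self, rate):
                history = self.alloc_rates[:-2] or [rate]
                mean, std = statistics.mean(history), statistics.pstdev(history)
                spike = (len(self.alloc_rates) >= 2 and
                         all(r > mean + 2 * std for r in self.alloc_rates[-2:]))
                if spike:
                    # Use the drop previously seen at this rate, or an estimate for it.
                    return self.observed_drops.get(round(rate), rate * 0.5)
                # Steady or decreasing rate: base the threshold on typical observed drops.
                return max(self.observed_drops.values(), default=rate * 0.2)

            def should_collect(self, free_memory, rate):
                self.alloc_rates.append(rate)
                return free_memory <= self.threshold(rate)

        trigger = AdaptiveGcTrigger()
        print(trigger.should_collect(free_memory=120.0, rate=40.0))
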
  • Publication number: 20120054447
    Abstract: A method for removing cache blocks from a cache queue includes detecting a first cache miss for the cache queue, identifying, within the cache queue, a new cache block storing a value of a storage block, calculating an estimated cache miss cost for a storage container having the storage block, calculating a removal probability for the storage container based on a mathematical formula of the estimated cache miss cost, randomly selecting a probability number from a uniform distribution, and evicting, in response to the removal probability exceeding the probability number, the new cache block from the cache queue.
    Type: Application
    Filed: January 14, 2011
    Publication date: March 1, 2012
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Garret Frederick Swart, David Vengerov
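
    A hedged sketch of the probabilistic eviction step described above; the miss-cost estimate and the formula mapping that cost to a removal probability are illustrative assumptions:

        import random

        def estimated_miss_cost(container):
            # e.g. the container's average access latency weighted by its miss rate.
            return container["latency_ms"] * container["miss_rate"]

        def removal_probability(container, scale=10.0):
            # Blocks from containers that are cheap to re-fetch are more likely to go.
            return min(1.0, scale / (1.0 + estimated_miss_cost(container)))

        def maybe_evict(new_block, container, cache_queue):
            p = removal_probability(container)
            if p > random.random():            # probability number from a uniform distribution
                cache_queue.remove(new_block)  # evict the new cache block
                return True
            return False

        queue = ["blk_a", "blk_new"]
        print(maybe_evict("blk_new", {"latency_ms": 0.2, "miss_rate": 0.1}, queue), queue)
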
  • Publication number: 20120054445
    Abstract: A method of inserting cache blocks into a cache queue includes detecting a first cache miss for the cache queue, identifying a storage block receiving an access in response to the cache miss, calculating a first estimated cache miss cost for a first storage container that includes the storage block, calculating an insertion probability for the first storage container based on a mathematical formula of the first estimated cache miss cost, randomly selecting an insertion probability number from a uniform distribution, and inserting, in response to the insertion probability exceeding the insertion probability number, a new cache block corresponding to the storage block into the cache queue.
    Type: Application
    Filed: January 14, 2011
    Publication date: March 1, 2012
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Garret Frederick Swart, David Vengerov
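
    A hedged sketch of the companion insertion decision; here a higher estimated miss cost for the storage container makes insertion more likely, though the actual formula is not given in this listing and this mapping is an assumption:

        import random

        def insertion_probability(estimated_miss_cost, scale=10.0):
            # Costly-to-miss containers are more likely to have their blocks cached.
            return min(1.0, estimated_miss_cost / scale)

        def maybe_insert(storage_block, estimated_miss_cost, cache_queue):
            if insertion_probability(estimated_miss_cost) > random.random():
                cache_queue.append(storage_block)   # new cache block for the storage block
                return True
            return False

        queue = []
        print(maybe_insert("storage_blk_7", estimated_miss_cost=25.0, cache_queue=queue), queue)
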
  • Publication number: 20110246995
    Abstract: The disclosed embodiments provide a system that facilitates scheduling threads in a multi-threaded processor with multiple processor cores. During operation, the system executes a first thread in a processor core that is associated with a shared cache. During this execution, the system measures one or more metrics to characterize the first thread. Then, the system uses the characterization of the first thread and a characterization for a second thread to predict a performance impact that would occur if the second thread were to simultaneously execute in a second processor core that is also associated with the shared cache. If the predicted performance impact indicates that executing the second thread on the second processor core will improve performance for the multi-threaded processor, the system executes the second thread on the second processor core.
    Type: Application
    Filed: April 5, 2010
    Publication date: October 6, 2011
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Alexandra Fedorova, David Vengerov, Kishore Kumar Pusukuri
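
    A hedged sketch of the prediction step described above, reducing the thread characterization to a single metric (cache misses per instruction) and using a toy linear interference model; both simplifications are assumptions:

        def predicted_slowdown(mpi_a, mpi_b, pressure=5.0):
            # Two cache-hungry threads sharing one cache are assumed to slow each other.
            return pressure * mpi_a * mpi_b

        def should_coschedule(thread_a_mpi, thread_b_mpi):
            gain = 1.0                                   # one extra core's worth of work
            loss = predicted_slowdown(thread_a_mpi, thread_b_mpi)
            return gain > loss

        print(should_coschedule(0.01, 0.02))   # low-miss threads: predicted to help
        print(should_coschedule(0.80, 0.90))   # cache-hungry threads: predicted to hurt
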
  • Patent number: 8032888
    Abstract: A method for scheduling a thread on a plurality of processors that includes obtaining a first state of a first processor in the plurality of processors and a second state of a second processor in the plurality of processors, wherein the thread is last executed on the first processor, and wherein the first state of the first processor includes the state of a cache of the first processor, obtaining a first estimated instruction rate to execute the thread on the first processor using an estimated instruction rate function and the first state, obtaining a first estimated global throughput for executing the thread on the first processor using the first estimated instruction rate and the second state, obtaining a second estimated global throughput for executing the thread on the second processor using the second state, comparing the first estimated global throughput with the second estimated global throughput to obtain a comparison result, and executing the thread, based on the comparison result, on one selected from the first processor and the second processor.
    Type: Grant
    Filed: October 17, 2006
    Date of Patent: October 4, 2011
    Assignee: Oracle America, Inc.
    Inventors: David Vengerov, Savvas Gitzenis, Declan J. Murphy
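
    A hedged sketch of the comparison described above; the instruction-rate function (a warm-cache bonus on the processor the thread last ran on) and the way global throughput is estimated are illustrative assumptions:

        def estimated_instruction_rate(base_ips, cache_warmth):
            # Threads are assumed to run faster where their cache state is still warm.
            return base_ips * (1.0 + cache_warmth)

        def estimated_global_throughput(thread_ips, other_processor_load_ips):
            # This thread's rate plus the work the other processor keeps doing.
            return thread_ips + other_processor_load_ips

        def choose_processor(base_ips, first_state, second_state):
            # Each state is (cache warmth for this thread, current load in instructions/s).
            t_first = estimated_global_throughput(
                estimated_instruction_rate(base_ips, first_state[0]), second_state[1])
            t_second = estimated_global_throughput(
                estimated_instruction_rate(base_ips, second_state[0]), first_state[1])
            return "first" if t_first >= t_second else "second"

        print(choose_processor(1e9, first_state=(0.5, 0.0), second_state=(0.0, 2e9)))
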
  • Publication number: 20110161294
    Abstract: The disclosed embodiments provide a system that determines whether to dynamically replicate data segments on a node in a computing cluster that stores a collection of data segments. During operation, the system identifies a data segment from the collection that is predicted to be frequently accessed by future tasks executing in the cluster. The system then determines a slowdown that would result for the current workload of the node if the data segment were to be replicated to the node. The system also determines a predicted future benefit that would be associated with replicating the data segment to the node. If the predicted slowdown is less than the predicted future benefit, the replication system replicates the data segment to the node.
    Type: Application
    Filed: December 30, 2009
    Publication date: June 30, 2011
    Applicant: SUN MICROSYSTEMS, INC.
    Inventors: David Vengerov, George Porter
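
    A hedged sketch of the cost/benefit comparison described above; the slowdown and benefit estimates are placeholder models with made-up constants:

        def predicted_slowdown(node_load, segment_size_gb, bandwidth_gb_s=0.1):
            # Copying the segment competes with the node's current workload.
            return node_load * (segment_size_gb / bandwidth_gb_s)

        def predicted_future_benefit(expected_accesses, remote_read_penalty_s=2.0):
            # Each future access served locally avoids a remote read.
            return expected_accesses * remote_read_penalty_s

        def should_replicate(node_load, segment_size_gb, expected_accesses):
            return (predicted_slowdown(node_load, segment_size_gb)
                    < predicted_future_benefit(expected_accesses))

        print(should_replicate(node_load=0.3, segment_size_gb=1.0, expected_accesses=50))
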
  • Publication number: 20110107050
    Abstract: Methods and apparatus are provided for adaptively triggering garbage collection. During relatively steady or decreasing rates of allocation of free memory, a threshold for triggering garbage collection is dynamically and adaptively determined on the basis of memory drops (i.e., decreases in free memory) during garbage collection. If a significant increase in the rate of allocation of memory is observed (e.g., two consecutive measurements that exceed a mean rate plus two standard deviations), the threshold is modified based on a memory drop previously observed in conjunction with the current memory allocation rate, or a memory drop estimated to be possible for the current allocation rate.
    Type: Application
    Filed: November 5, 2009
    Publication date: May 5, 2011
    Applicant: SUN MICROSYSTEMS, INC.
    Inventor: David Vengerov
  • Publication number: 20110004882
    Abstract: A method for scheduling a thread on a plurality of processors that includes obtaining a first state of a first processor in the plurality of processors and a second state of a second processor in the plurality of processors, wherein the thread is last executed on the first processor, and wherein the first state of the first processor includes the state of a cache of the first processor, obtaining a first estimated instruction rate to execute the thread on the first processor using an estimated instruction rate function and the first state, obtaining a first estimated global throughput for executing the thread on the first processor using the first estimated instruction rate and the second state, obtaining a second estimated global throughput for executing the thread on the second processor using the second state, comparing the first estimated global throughput with the second estimated global throughput to obtain a comparison result, and executing the thread, based on the comparison result, on one selected from the first processor and the second processor.
    Type: Application
    Filed: October 17, 2006
    Publication date: January 6, 2011
    Applicant: Sun Microsystems, Inc.
    Inventors: David Vengerov, Savvas Gitzenis, Declan J. Murphy
  • Patent number: 7665089
    Abstract: One embodiment of the present invention provides a system that performs thread migration within an array of computing nodes, wherein computing nodes in the array contain central processing units (CPUs) and/or memories. During operation, the system identifies CPUs within the array of computing nodes that are available to accept a given thread. For each available CPU, the system computes an average communication distance between the CPU and memories which are accessed by the given thread. Next, the system determines whether to move the given thread to an available CPU based on the average communication distance for the available CPU.
    Type: Grant
    Filed: November 2, 2004
    Date of Patent: February 16, 2010
    Assignee: Sun Microsystems, Inc.
    Inventor: David Vengerov
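
    A hedged sketch of the placement rule described above; the hop-count distance matrix and per-memory access weights are illustrative assumptions:

        def average_communication_distance(cpu, accessed_memories, distance, weights):
            # Weighted mean distance between a CPU and the memories the thread accesses.
            total = sum(weights[m] * distance[cpu][m] for m in accessed_memories)
            return total / sum(weights[m] for m in accessed_memories)

        def best_cpu_for_thread(available_cpus, accessed_memories, distance, weights):
            return min(available_cpus,
                       key=lambda c: average_communication_distance(
                           c, accessed_memories, distance, weights))

        # Toy two-node example: CPU 0 is local to memory 0, CPU 1 to memory 1.
        distance = [[1, 3], [3, 1]]
        weights = {0: 10, 1: 2}            # the thread mostly touches memory 0
        print(best_cpu_for_thread([0, 1], [0, 1], distance, weights))   # -> 0
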
  • Patent number: 7665092
    Abstract: One embodiment of the present invention provides a system that performs load balancing between task queues in a multiprocessor system. During operation, the system conditionally requests load information from a number of neighboring CPUs in a neighborhood of a requesting CPU. In response to the request, the system receives load information from one or more neighboring CPUs. Next, the system conditionally requests one or more neighboring CPUs to transfer tasks to the requesting CPU based on the received load information, thereby balancing load between the CPUs in the neighborhood.
    Type: Grant
    Filed: December 15, 2004
    Date of Patent: February 16, 2010
    Assignee: Sun Microsystems, Inc.
    Inventor: David Vengerov
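
    A hedged sketch of the neighborhood exchange described above, with the message passing collapsed into direct queue access and the request thresholds chosen arbitrarily:

        def balance(requesting_cpu, neighbors, queues, request_threshold=2):
            # Only ask neighbors for load information if our own queue looks short.
            if len(queues[requesting_cpu]) >= request_threshold:
                return
            loads = {n: len(queues[n]) for n in neighbors}   # received load information
            for n, load in loads.items():
                # Ask a neighbor to transfer a task only if it is clearly busier.
                if load > len(queues[requesting_cpu]) + 1:
                    queues[requesting_cpu].append(queues[n].pop())

        queues = {0: [], 1: ["t1", "t2", "t3"], 2: ["t4"]}
        balance(0, neighbors=[1, 2], queues=queues)
        print(queues)
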
  • Publication number: 20090319255
    Abstract: Some embodiments of the present invention provide a system that executes a garbage collector in a computing system. During operation, the system obtains a throughput model for the garbage collector and estimates a set of characteristics associated with the garbage collector. Next, the system applies the characteristics to the throughput model to estimate a throughput of the garbage collector. The system then determines a level of performance for the garbage collector based on the estimated throughput. Finally, the system adjusts a tunable parameter for the garbage collector based on the level of performance to increase the throughput of the garbage collector.
    Type: Application
    Filed: June 23, 2008
    Publication date: December 24, 2009
    Applicant: SUN MICROSYSTEMS, INC.
    Inventor: David Vengerov
  • Patent number: 7606934
    Abstract: A method for routing an incoming service request is described wherein the service request is routed to a selected storage tier based on that selected storage tier having a predicted value indicating a state having greater utility as compared with the predicted value of the state associated with at least one other storage tier within the storage system. A computer system comprising a multi-tier storage system is described, the multi-tier storage system having a routing algorithm configured to adaptively tune functions which map variables describing the state of each storage tier of the storage system into the average latency experienced by incoming service requests associated with the storage tier.
    Type: Grant
    Filed: March 10, 2005
    Date of Patent: October 20, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: David Vengerov, Harriet G. Coverston, Anton B. Rang, Andrew B. Hastings
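
    A hedged sketch of the routing decision described above; the per-tier state variables and the tunable linear value function standing in for the latency-mapping functions are illustrative assumptions:

        def predicted_value(tier_state, weights=(-0.5, -2.0)):
            # Tunable linear map from tier state (queue length, device busy fraction)
            # to a utility score; lower predicted latency means higher utility.
            queue_len, busy = tier_state
            return weights[0] * queue_len + weights[1] * busy

        def route(request, tier_states):
            # Send the request to the tier whose predicted state has the greatest utility.
            return max(tier_states, key=lambda t: predicted_value(tier_states[t]))

        tiers = {"ssd": (8, 0.9), "disk": (1, 0.2)}
        print(route("read:/a/b", tiers))        # -> "disk" under this toy state
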
  • Publication number: 20090228888
    Abstract: Disclosed herein is a system and method for dynamic scheduling of application tasks in a distributed task-based system. The system and method employ a learning mechanism that observes and predicts overall application task costs across a networked system, taking into account how the states or loads of the applications are likely to change over time. The application task costs are defined in economic terms. The system and method allow continuous optimization of application response times as perceived by application users.
    Type: Application
    Filed: March 10, 2008
    Publication date: September 10, 2009
    Applicant: Sun Microsystems, Inc.
    Inventors: David Vengerov, Seth Proctor
  • Patent number: 7539709
    Abstract: A method and apparatus for managing data is described which includes determining the current state of a storage tier of a plurality of storage tiers within a storage system. Further, a prediction is made, using a prediction architecture comprising at least one predetermined variable, of the utilities of future expected states for at least two of a plurality of storage tiers involved with a data operation, wherein a future expected state of a corresponding storage tier is based on conditions expected to occur following the completion of the data operation. Finally, the data operation is performed if the predicted utility of the future expected state associated with the at least two of a plurality of storage tiers is more beneficial than the utility of the current state.
    Type: Grant
    Filed: June 15, 2005
    Date of Patent: May 26, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: David Vengerov, Harriet G. Coverston, Anton B. Rang, Andrew B. Hastings
  • Patent number: 7444316
    Abstract: One embodiment of the present invention provides a system that assigns jobs to a system containing a number of central processing units (CPUs). During operation, the system captures a current state of the system, which describes available resources on the system, characteristics of jobs currently being processed, and characteristics of jobs waiting to be assigned. The system then uses the current system state to estimate a long-term benefit to the system of not preempting any jobs currently being processed, as well as the corresponding benefit of preempting one or more of them. If the benefit from preempting one or more jobs exceeds the benefit from not preempting any jobs, the system preempts one or more currently running jobs in favor of a new job.
    Type: Grant
    Filed: January 28, 2005
    Date of Patent: October 28, 2008
    Assignee: Sun Microsystems, Inc.
    Inventor: David Vengerov
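
    A hedged sketch of the preemption decision described above; the job descriptions and both benefit estimates are placeholder models with made-up constants:

        def benefit_no_preemption(running_jobs, waiting_job):
            # Value of letting current jobs finish, minus the cost of making the
            # new job wait for the shortest remaining running job.
            shortest_wait = min(j["remaining_s"] for j in running_jobs)
            return (sum(j["value"] for j in running_jobs)
                    - waiting_job["value_per_s"] * shortest_wait)

        def benefit_with_preemption(running_jobs, waiting_job):
            # Value of preempting the least valuable running job and starting the new
            # job now, discounted by the work the preempted job loses.
            victim = min(running_jobs, key=lambda j: j["value"])
            others = sum(j["value"] for j in running_jobs) - victim["value"]
            return others + waiting_job["value"] - 0.5 * victim["value"]

        def should_preempt(running_jobs, waiting_job):
            return (benefit_with_preemption(running_jobs, waiting_job)
                    > benefit_no_preemption(running_jobs, waiting_job))

        running = [{"value": 10.0, "remaining_s": 30}, {"value": 50.0, "remaining_s": 5}]
        print(should_preempt(running, {"value": 40.0, "value_per_s": 1.0}))
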