Patents by Inventor Ludmila Cherkasova

Ludmila Cherkasova has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8626944
    Abstract: A method comprises distributing a plurality of descriptors of a file encoded with multiple description coding (MDC) from a first node to a first group comprising a plurality of recipient nodes, wherein at least one descriptor is distributed from the first node to each recipient node of the at least a portion of the first group. The at least a portion of the first group communicate their respective descriptors received from the first node to other nodes of the first group. A system comprises an origin node operable to distribute all of a plurality of descriptors of an MDC file to a first group of recipient nodes, wherein the origin node does not attempt to communicate all of the plurality of descriptors to all of the recipient nodes of the first group. The recipient nodes of the first group are each operable to communicate a descriptor that it receives from the origin node to other nodes of the first group.
    Type: Grant
    Filed: May 5, 2003
    Date of Patent: January 7, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Ludmila Cherkasova
  • Publication number: 20130339972
    Abstract: A performance model for a collection of jobs that make up a program is used to calculate a performance parameter based on a number of map tasks in the jobs, a number of reduce tasks in the jobs, and an allocation of resources, where the jobs include the map tasks and the reduce tasks, the map tasks producing intermediate results based on segments of input data, and the reduce tasks producing an output based on the intermediate results. The performance model considers overlap of concurrent jobs. Using a value of the performance parameter calculated by the performance model, a particular allocation of resources is determined to assign to the jobs of the program to meet a performance goal of the program.
    Type: Application
    Filed: June 18, 2012
    Publication date: December 19, 2013
    Inventors: Zhuoyao Zhang, Abhishek Verma, Ludmila Cherkasova
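
For intuition, a minimal sketch of a wave-based performance model in the spirit of the abstract above: it assumes average map and reduce task durations are known from a job profile and ignores the overlap of concurrent jobs that the application's model accounts for. The function names, numbers, and symmetric-slot search are illustrative assumptions, not the claimed method.

```python
# Hypothetical wave-based completion-time estimate for a MapReduce job.
# Assumes average task durations (avg_map, avg_reduce) come from a job profile
# and that the allocation is expressed as numbers of map and reduce slots.
import math

def estimate_job_duration(n_map, n_reduce, avg_map, avg_reduce,
                          map_slots, reduce_slots):
    """Duration = waves of map tasks + waves of reduce tasks, each wave
    taking roughly the average task duration of its stage."""
    map_waves = math.ceil(n_map / map_slots)
    reduce_waves = math.ceil(n_reduce / reduce_slots)
    return map_waves * avg_map + reduce_waves * avg_reduce

def smallest_allocation_for_deadline(n_map, n_reduce, avg_map, avg_reduce,
                                     max_slots, deadline):
    """Return the smallest symmetric (map_slots == reduce_slots) allocation
    whose estimated duration meets the deadline, or None."""
    for slots in range(1, max_slots + 1):
        d = estimate_job_duration(n_map, n_reduce, avg_map, avg_reduce,
                                  slots, slots)
        if d <= deadline:
            return slots, d
    return None

# Example: 200 map tasks of ~30 s, 50 reduce tasks of ~60 s, 40-slot ceiling.
print(smallest_allocation_for_deadline(200, 50, 30.0, 60.0, 40, 600.0))
```
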
  • Publication number: 20130318538
    Abstract: A job profile is received (302) that describes a job to be executed. A performance model is produced (304) based on the job profile and allocated amount of resources for the job, and a performance characteristic of the job is estimated (306) using the performance model.
    Type: Application
    Filed: February 2, 2011
    Publication date: November 28, 2013
    Inventors: Abhishek Verma, Ludmila Cherkasova
  • Publication number: 20130290972
    Abstract: A method of managing workloads in MapReduce environments with a system. The system receives job profiles of respective jobs, wherein each job profile describes characteristics of map and reduce tasks. The map tasks produce intermediate results based on the input data, and the reduce tasks produce an output based on the intermediate results. The jobs are ordered according to performance goals into a hierarchy. A minimum quantity of resources is allocated to each job to achieve its performance goal. A plurality of spare resources are allocated to at least one of the jobs. A new job profile having a new performance goal is then received. Next, it is determined whether the new performance goal can be met without deallocating spare resources. Spare resources are re-allocated from the other jobs to the new job to achieve its performance goal without compromising the performance goals of the other jobs.
    Type: Application
    Filed: April 27, 2012
    Publication date: October 31, 2013
    Inventors: Ludmila Cherkasova, Abhishek Verma
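
A rough illustration of the admission step described above, in which a new job is accepted only if its performance goal can be met from spare resources so that existing jobs keep their minimum allocations. The slot counts and class names are hypothetical, and this is not the claimed method.

```python
# Illustrative admission check: each running job keeps its minimum allocation,
# and only slots from the spare pool may be shifted to a newly arriving job.
# Class and method names are hypothetical.

class Cluster:
    def __init__(self, total_slots):
        self.total_slots = total_slots
        self.min_allocations = {}           # job_id -> slots needed for its goal

    def spare_slots(self):
        return self.total_slots - sum(self.min_allocations.values())

    def admit(self, job_id, min_slots_for_goal):
        """Admit the new job only if its goal can be met from spare capacity,
        so existing jobs keep the minimum allocations for their goals."""
        if min_slots_for_goal <= self.spare_slots():
            self.min_allocations[job_id] = min_slots_for_goal
            return True
        return False

cluster = Cluster(total_slots=100)
cluster.admit("job-1", 40)
cluster.admit("job-2", 30)
print(cluster.admit("job-3", 25))   # True: 30 spare slots were available
print(cluster.admit("job-4", 25))   # False: only 5 spare slots remain
```
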
  • Publication number: 20130290976
    Abstract: Determining a schedule of a batch workload of MapReduce jobs is disclosed. A set of multi-stage jobs for processing in a MapReduce framework is received, for example, in a master node. Each multi-stage job includes a duration attribute, and each duration attribute includes a stage duration and a stage type. The MapReduce framework is separated into a plurality of resource pools. The multi-stage jobs are separated into a plurality of subgroups corresponding with the plurality of pools. Each subgroup is configured for concurrent processing in the MapReduce framework. The multi-stage jobs in each of the plurality of subgroups are placed in an order according to increasing stage duration. For each pool, the multi-stage jobs in increasing order of stage duration are sequentially assigned from either a front of the schedule or a tail of the schedule by stage type.
    Type: Application
    Filed: April 30, 2012
    Publication date: October 31, 2013
    Inventors: Ludmila Cherkasova, Abhishek Verma
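
The front/tail placement can be illustrated with a short, Johnson-style sketch. It assumes the duration attribute is the job's shorter stage and that a short map stage sends a job to the front of the schedule while a short reduce stage sends it to the tail; this is one plausible reading of the abstract, with made-up job tuples, not the patented algorithm.

```python
# Schematic front/tail ordering: jobs in a subgroup are sorted by increasing
# stage duration, then a job whose short stage is the map stage is placed at
# the front of the schedule, while one whose short stage is the reduce stage
# is placed at the tail (filled from the back).

def order_subgroup(jobs):
    """jobs: list of (name, stage_duration, stage_type) tuples."""
    front, tail = [], []
    for name, duration, stage_type in sorted(jobs, key=lambda j: j[1]):
        if stage_type == "map":          # short map stage: run early
            front.append(name)
        else:                            # short reduce stage: run late
            tail.append(name)
    return front + tail[::-1]

jobs = [("j1", 4, "map"), ("j2", 3, "reduce"),
        ("j3", 1, "map"), ("j4", 2, "reduce")]
print(order_subgroup(jobs))              # ['j3', 'j1', 'j2', 'j4']
```
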
  • Publication number: 20130290538
    Abstract: At least one embodiment is for a method for estimating resource costs required to process a workload to be completed using at least two different cloud computing models. Historical trace data of at least one completed workload that is similar to the workload to be completed is received by the computer. The processing of the completed workload is simulated using a t-shirt cloud computing model and a time-sharing model. The t-shirt and time-sharing resource costs are estimated based on their respective simulations. The t-shirt and time-sharing resource costs are then compared.
    Type: Application
    Filed: April 27, 2012
    Publication date: October 31, 2013
    Inventors: Daniel Juergen Gmach, Jerome Rolia, Ludmila Cherkasova
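
A toy version of the comparison described above, with made-up instance sizes, prices, and an hourly demand trace; the simulation in the application is considerably richer.

```python
# Toy cost comparison driven by a historical hourly CPU-demand trace.
# T-shirt model: pick the smallest fixed-size instance that covers peak demand
# and pay its hourly price for every hour. Time-sharing model: pay per
# core-hour actually consumed. Sizes, prices, and the trace are made up.

TSHIRT_CORES = {"small": 2, "medium": 4, "large": 8}
TSHIRT_PRICE_PER_HOUR = {"small": 0.10, "medium": 0.20, "large": 0.40}
TIME_SHARING_PRICE_PER_CORE_HOUR = 0.06

def tshirt_cost(trace_cores):
    peak = max(trace_cores)
    size = min((s for s, c in TSHIRT_CORES.items() if c >= peak),
               key=lambda s: TSHIRT_CORES[s])
    return TSHIRT_PRICE_PER_HOUR[size] * len(trace_cores)

def time_sharing_cost(trace_cores):
    return TIME_SHARING_PRICE_PER_CORE_HOUR * sum(trace_cores)

trace = [0.5, 1.0, 3.5, 2.0, 0.5, 0.2] * 4      # 24 hourly samples of core demand
print("t-shirt cost:      %.2f" % tshirt_cost(trace))
print("time-sharing cost: %.2f" % time_sharing_cost(trace))
```
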
  • Patent number: 8561074
    Abstract: Systems and methods of enhanced backup job scheduling are disclosed. An example method may include determining a number of jobs (n) in a backup set, determining a number of tape drives (m) in the backup device, and determining a number of concurrent disk agents (maxDA) configured for each tape drive. The method may also include defining a scheduling problem based on n, m, and maxDA. The method may also include solving the scheduling problem using an integer programming (IP) formulation to derive a bin-packing schedule that minimizes makespan (S) for the backup set.
    Type: Grant
    Filed: November 19, 2010
    Date of Patent: October 15, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ludmila Cherkasova, Xin Zhang, Xiaozhou Li
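
For intuition about the scheduling problem (not the integer-programming formulation claimed in the patent), a longest-processing-time greedy assignment of jobs to drive/agent slots already yields a reasonable makespan; the durations below are hypothetical.

```python
# Greedy LPT heuristic for the same setting: n backup jobs, m tape drives,
# maxDA concurrent disk agents per drive, so m * maxDA concurrent "slots".
# Jobs are assigned longest-first to the least-loaded slot. This is only an
# intuition-building heuristic, not the IP-based schedule of the patent.
import heapq

def lpt_schedule(job_durations, m, maxDA):
    slots = [(0.0, i) for i in range(m * maxDA)]    # (current load, slot id)
    heapq.heapify(slots)
    assignment = {}
    for duration in sorted(job_durations, reverse=True):
        load, slot = heapq.heappop(slots)
        assignment.setdefault(slot, []).append(duration)
        heapq.heappush(slots, (load + duration, slot))
    makespan = max(load for load, _ in slots)
    return makespan, assignment

durations = [90, 75, 60, 45, 45, 30, 20, 15]        # job durations in minutes
makespan, plan = lpt_schedule(durations, m=2, maxDA=2)
print(makespan, plan)
```
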
  • Publication number: 20130268941
    Abstract: A performance model is used to calculate a performance parameter based on characteristics of a collection of jobs that make up a program, a number of map tasks in the jobs, a number of reduce tasks in the jobs, and an allocation of resources, where the jobs include the map tasks and the reduce tasks, the map tasks producing intermediate results based on segments of input data, and the reduce tasks producing an output based on the intermediate results. Using a value of the performance parameter calculated by the performance model, a particular allocation of resources is determined to assign to the jobs of the program to meet a performance goal of the program.
    Type: Application
    Filed: April 9, 2012
    Publication date: October 10, 2013
    Inventors: Ludmila Cherkasova, Abhishek Verma, Zhuoyao Zhang
  • Publication number: 20130268940
    Abstract: A system, and a corresponding method enabled by and implemented on that system, automatically calculates and compares costs for hosting workloads in virtualized or non-virtualized platforms. The system allows a service user (i.e., a customer) to decide how best to have workloads hosted by apportioning costs that are least sensitive to workload placement decisions and by providing robust and repeatable cost estimates. The system compares the costs of hosting a workload in virtualized and non-virtualized environments; separates workloads into categories including those that should be virtualized and those that should not, and determines the amount of physical resources to cost-effectively host a set of workloads.
    Type: Application
    Filed: April 4, 2012
    Publication date: October 10, 2013
    Inventors: Daniel Juergen Gmach, Jerome Rolia, Ludmila Cherkasova
  • Patent number: 8543711
    Abstract: A method comprises receiving, by pattern evaluation logic, a plurality of occurrences of a prospective pattern of resource demands in a representative workload. The method further comprises evaluating, by the pattern evaluation logic, the received occurrences of the prospective pattern of resource demands, and determining, by the pattern evaluation logic, based on the evaluation of the received occurrences of the prospective pattern of resource demands, how representative the prospective pattern is of resource demands of the representative workload.
    Type: Grant
    Filed: April 30, 2007
    Date of Patent: September 24, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jerome Rolia, Daniel Gmach, Ludmila Cherkasova
  • Patent number: 8489612
    Abstract: To identify similar files in an environment having multiple client computers, a first client computer receives, from a coordinator computer, a request to find files located at the first client computer that are similar to at least one comparison file, wherein the request has also been sent to other client computers by the coordinator computer to request that the other client computers also find files that are similar to the at least one comparison file. In response to the request, the first client computer compares signatures of the files located at the first client computer with a signature of the at least one comparison file to identify at least a subset of the files located at the first client computer that are similar to the at least one comparison file according to a comparison metric. The first client computer sends, to the coordinator computer, a response relating to the comparing.
    Type: Grant
    Filed: March 24, 2009
    Date of Patent: July 16, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ludmila Cherkasova, Charles B. Morrey, III, Vinay Deolalikar, Kimberly Keeton, Mark David Lillibridge, Craig A. Soules, Alistair Veitch
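
Per-file signatures of this kind are commonly built with min-hashing, which approximates the Jaccard similarity of the files' shingle sets. The sketch below illustrates that generic idea only; the shingle size, hash count, and comparison metric are arbitrary, and it is not the specific signature scheme of the patent.

```python
# Generic min-hash signatures: each file's bytes are shingled, every shingle is
# hashed with K salted hash functions, and the signature is the vector of
# per-function minima. The fraction of matching positions between two
# signatures estimates the Jaccard similarity of the files' shingle sets.
import hashlib

K = 32              # number of salted hash functions (arbitrary)
SHINGLE = 8         # shingle length in bytes (arbitrary)

def signature(data: bytes):
    shingles = {data[i:i + SHINGLE]
                for i in range(max(1, len(data) - SHINGLE + 1))}
    return [min(hashlib.sha1(k.to_bytes(2, "big") + s).digest()
                for s in shingles)
            for k in range(K)]

def similarity(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / K

doc = b"quarterly report: revenue grew in all regions this quarter " * 4
near_duplicate = doc.replace(b"grew", b"rose")
print(similarity(signature(doc), signature(near_duplicate)))           # high
print(similarity(signature(doc), signature(b"unrelated bytes " * 16))) # near 0
```
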
  • Publication number: 20130167151
    Abstract: A plurality of job profiles is received. Each job profile describes a job to be executed, and each job includes map tasks and reduce tasks. An execution duration for a map stage including the map tasks and an execution duration for a reduce stage including the reduce tasks of each job is estimated. The jobs are scheduled for execution based on the estimated execution duration of the map stage and the estimated execution duration of the reduce stage of each job.
    Type: Application
    Filed: December 22, 2011
    Publication date: June 27, 2013
    Inventors: Abhishek Verma, Ludmila Cherkasova, Vijay S. Kumar
  • Patent number: 8392499
    Abstract: A system and method are provided for relating aborted client accesses of server information to the quality of service provided to clients by a server in a client-server network. According to one embodiment, a method comprises determining performance data for at least one aborted client access of information from a server in a client-server network, and using the performance data to determine whether the aborted client access(es) relate to the quality of service provided to a client by the server.
    Type: Grant
    Filed: May 16, 2002
    Date of Patent: March 5, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ludmila Cherkasova, Yun Fu, Wenting Tang
  • Patent number: 8392927
    Abstract: According to one embodiment, a method comprises receiving into a planning tool a representative workload for a consumer. The method further comprises determining, by the planning tool, an allocation of demand of the consumer for each of a plurality of different classes of service (COSs). According to one embodiment, a method comprises defining a plurality of classes of service (COSs) for use by a scheduler in allocating capacity of a resource pool to a consumer, wherein the COSs each specify a different priority for accessing the capacity of the resource pool. The method further comprises evaluating, by a planning tool, a representative workload of the consumer, and determining, by the planning tool, a partitioning of resource demands of the representative workload between the plurality of COSs.
    Type: Grant
    Filed: May 19, 2005
    Date of Patent: March 5, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jerome Rolia, Ludmila Cherkasova
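
A percentile split is one simple way to partition a representative demand trace between two classes of service. The sketch below, with an arbitrary 95th-percentile breakpoint and a made-up trace, only illustrates the kind of partitioning such a planning tool might produce.

```python
# Partition a representative CPU-demand trace between two classes of service:
# demand up to a chosen percentile is assigned to a guaranteed, higher-priority
# COS; anything above the breakpoint spills into a best-effort COS.

def partition_demand(trace, percentile=0.95):
    ordered = sorted(trace)
    breakpoint_value = ordered[int(percentile * (len(ordered) - 1))]
    guaranteed = [min(d, breakpoint_value) for d in trace]
    best_effort = [max(0.0, d - breakpoint_value) for d in trace]
    return breakpoint_value, guaranteed, best_effort

trace = [0.2, 0.3, 0.4, 0.3, 1.6, 0.5, 0.4, 2.2, 0.3, 0.2]  # CPU shares per interval
bp, cos1_demand, cos2_demand = partition_demand(trace)
print(bp, sum(cos2_demand))   # capacity planned for COS1; demand spilled to COS2
```
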
  • Patent number: 8359463
    Abstract: There is provided a computer-implemented method for selecting from a plurality of full configurations of a storage system an operational configuration for executing an application. An exemplary method comprises obtaining application performance data for the application on each of a plurality of test configurations. The exemplary method also comprises obtaining benchmark performance data with respect to execution of a benchmark on the plurality of full configurations, one or more degraded configurations of the full configurations and the plurality of test configurations. The exemplary method additionally comprises estimating a metric for executing the application on each of the plurality of full configurations based on the application performance data and the benchmark performance data. The operational configuration may be selected from among the plurality of full configurations based on the metric.
    Type: Grant
    Filed: May 26, 2010
    Date of Patent: January 22, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Arif A. Merchant, Ludmila Cherkasova
  • Patent number: 8336054
    Abstract: A method comprises defining a scheduler parameter for a maximum allocation of capacity of a shared resource to a consumer for a scheduling interval. Utilization of an allocated capacity of the shared resource by the consumer during a given scheduling interval is measured, and when the allocated capacity of the shared resource is completely utilized by the consumer during the given scheduling interval, the scheduler increases the allocated capacity of the shared resource to the defined maximum allocation for the consumer for a next scheduling interval. Thus, rather than gradually increasing the allocation of capacity over many intervals, the scheduler immediately increases the allocation to a predefined maximum amount in response to an allocated amount of capacity being completely utilized during a scheduling interval.
    Type: Grant
    Filed: July 20, 2006
    Date of Patent: December 18, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ludmila Cherkasova, Jerome Rolia, Clifford A. McCarthy
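
The step-up behavior can be shown in a few lines. The gentle decay used when capacity is not fully consumed is an added assumption (the abstract only describes the immediate increase), and all numbers are hypothetical.

```python
# Step-up capacity control: if the consumer used *all* of its allocation in the
# last scheduling interval, jump straight to the configured maximum instead of
# ramping up gradually. The decay branch is an illustrative assumption.

def next_allocation(current_alloc, utilized, max_alloc, decay=0.9):
    if utilized >= current_alloc:       # allocation completely consumed
        return max_alloc                # immediate jump to the defined maximum
    return max(utilized, current_alloc * decay)

alloc = 2.0                             # CPU shares, hypothetical
for demand in [1.0, 2.0, 6.0, 3.0]:     # per-interval demand of the consumer
    utilized = min(demand, alloc)       # cannot use more than was allocated
    alloc = next_allocation(alloc, utilized, max_alloc=8.0)
    print(alloc)
```
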
  • Patent number: 8326970
    Abstract: According to an embodiment of the present invention, a method for deriving an analytic model for a session-based system is provided. The method comprises receiving, by a model generator, client-access behavior information for the session-based system, wherein the session-based system comprises a plurality of interdependent transaction types. The method further comprises deriving, by the model generator, from the received client-access behavior information, a stateless transaction-based analytic model of the session-based system, wherein the derived transaction-based analytic model models resource requirements of the session-based system for servicing a workload. According to certain embodiments, the derived transaction-based analytic model is used for performing capacity analysis of the session-based system.
    Type: Grant
    Filed: November 5, 2007
    Date of Patent: December 4, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ludmila Cherkasova, Qi Zhang
  • Patent number: 8321644
    Abstract: A method, a tangible non-transitory computer readable storage medium, and a tape library for backing up filesystems are provided. Historic job durations to back up data to a storage device are obtained. Objects to be backed up to multiple drives in the storage device are ordered based on the job durations. The objects are assigned to agents based on priorities that first back up an object having a longest job duration. The objects are backed up with the agents to the multiple drives according to the priorities.
    Type: Grant
    Filed: October 1, 2009
    Date of Patent: November 27, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ludmila Cherkasova, Roger K T Lau, Harald Burose, Bernhard Kappler
  • Publication number: 20120296852
    Abstract: A method of determining a workload cost is provided herein. The method includes determining a direct consumption of a resource pool by a workload. The method also includes determining a burstiness for the workload and the resource pool. The burstiness comprises a difference between a peak consumption of the resource pool by the workload, and the direct consumption of the resource pool. The method further includes determining an unallocated amount of the resource pool. Additionally, the method includes determining the workload cost based on the direct consumption, the burstiness, and the unallocated amount of the resource pool.
    Type: Application
    Filed: May 20, 2011
    Publication date: November 22, 2012
    Inventors: Daniel Juergen Gmach, Jerome Rolia, Ludmila Cherkasova
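
The three components combine into a per-workload cost. The sketch below assumes equal weighting of direct consumption and burstiness and apportions unallocated capacity in proportion to direct use, which is just one plausible reading of the abstract; the numbers are made up.

```python
# Apportion resource-pool cost to one workload from three components:
# its direct consumption, its burstiness (peak minus direct consumption),
# and a share of the pool's unallocated capacity proportional to direct use.

def workload_cost(direct, peak, pool_capacity, pool_direct_total,
                  price_per_unit=1.0):
    burstiness = max(0.0, peak - direct)
    unallocated = max(0.0, pool_capacity - pool_direct_total)
    share = direct / pool_direct_total      # workload's fraction of direct use
    return price_per_unit * (direct + burstiness + share * unallocated)

# Pool of 100 capacity units; all workloads together directly consume 70 units.
print(workload_cost(direct=20.0, peak=35.0,
                    pool_capacity=100.0, pool_direct_total=70.0))
```
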
  • Patent number: 8260603
    Abstract: Described herein is a method for scaling a prediction model of resource usage of an application in a virtual environment, comprising: providing a predetermined set of benchmarks, wherein the predetermined set of benchmarks includes at least one of: a computation-intensive workload, a network-intensive workload, and a disk-intensive workload; executing the predetermined set of benchmarks in a first native hardware system in which the application natively resides; executing the predetermined set of benchmarks in the virtual environment; generating at least one first prediction model that predicts a resource usage of the application running in the virtual environment based on the executions of the predetermined set of benchmarks in the first native hardware system and the virtual environment; determining a resource usage of the application running in a second native hardware system in which the application also natively resides; generating at least one second prediction model based on a scaling of the at least o
    Type: Grant
    Filed: September 30, 2008
    Date of Patent: September 4, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ludmila Cherkasova, Timothy W. Wood
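
Native-to-virtual prediction models of this kind are often expressed as regressions over paired benchmark runs. The least-squares sketch below, restricted to a single resource metric and made-up data points, illustrates the general idea rather than the patented two-step scaling.

```python
# Fit a simple linear model mapping resource usage observed on the native
# system to usage observed in the virtual environment, using paired benchmark
# runs, and use it to predict the virtualized usage at a new load level.

def fit_linear(native, virtual):
    """Ordinary least squares for: virtual ~= a * native + b."""
    n = len(native)
    mean_x = sum(native) / n
    mean_y = sum(virtual) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(native, virtual))
    var = sum((x - mean_x) ** 2 for x in native)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# CPU utilization (%) of the same benchmarks run natively and inside a VM.
native_cpu = [10, 20, 35, 50, 70]
virtual_cpu = [14, 27, 45, 63, 88]
a, b = fit_linear(native_cpu, virtual_cpu)
print("predicted VM CPU at 40%% native load: %.1f%%" % (a * 40 + b))
```
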