System and method for calibrating and extrapolating management-inherent complexity metrics and human-perceived complexity metrics of information technology management


The invention broadly and generally provides a method for calibrating the relationship between management-inherent complexity metrics deriving from the management structure and human-perceived complexity of information technology management, comprising: (a) obtaining a set of management-inherent complexity metrics; (b) obtaining a set of human-perceived complexity metrics; (c) constructing a control model identifying a set of dominant indicators selected from the aforesaid set of management-inherent complexity metrics; and (d) establishing a value model mapping from the aforesaid set of dominant indicators to the aforesaid set of human-perceived complexity metrics.

Description
FIELD OF THE INVENTION

The present invention relates generally to computing system evaluation and, more particularly, to techniques for quantitatively measuring and benchmarking complexity in information technology management.

BACKGROUND OF THE INVENTION

The complexity of managing computing systems and information technology (IT) processes represents a major impediment to efficient, high-quality, error-free, and cost-effective service delivery, from small-business servers to global-scale enterprise backbones. IT systems and processes with a high degree of complexity demand human resources and expertise to manage that complexity, increasing the total cost of ownership. Likewise, complexity increases the amount of time that must be spent interacting with a computing system, or between operators, to perform the desired function, decreasing efficiency and productivity. Furthermore, complexity challenges human reasoning and thus leads to erroneous decisions even by skilled operators.

Given the high complexity incurred in service delivery processes, service providers are actively seeking to reduce IT complexity by designing, architecting, implementing, and assembling systems and processes with a minimal complexity level. To do so, they must be able to quantitatively measure and benchmark the degree of IT management complexity exposed by particular computing systems or processes, so that global delivery executives, program managers, and project leaders can evaluate the prospective complexity before investing in them, and designers, architects, and developers can rebuild and optimize them for reduced complexity. Besides improving decision making for projects and technologies, quantitative complexity evaluation can help computing service providers and outsourcers quantify the amount of human management that will be needed to provide a given service, allowing them to more effectively evaluate costs and set price points. All of these scenarios require standardized, representative, accurate, easily compared quantitative assessments of IT management complexity, with metrics mapped to human-perceived complexity such as labor cost, efficiency, and error rate. This motivates the need for a system and methods for calibrating and extrapolating complexity metrics of information technology management.

The prior art of computing system evaluation includes no system or methods for calibrating and extrapolating complexity metrics of information technology management. Well-studied computing system evaluation areas include system performance analysis, software complexity analysis, human-computer interaction analysis, dependability evaluation, and basic complexity evaluation.

System performance analysis attempts to compute quantitative measures of the performance of a computer system, considering both hardware and software components. This is a well-established area rich in analysis techniques and systems. However, none of these methodologies and systems for system performance analysis considers complexity-related aspects of the system under evaluation, nor do they collect or analyze complexity-related data. Therefore, system performance analysis provides no insight into the complexity of the IT management being evaluated.

Software complexity analysis attempts to compute quantitative measures of the complexity of a piece of software code, considering both the intrinsic complexity of the code and the complexity of creating and maintaining the code. However, processes for software complexity analysis do not collect management-related statistics or data and therefore provide no insight into the management complexity of the computing systems and processes running the analyzed software.

Human-computer interaction (HCI) analysis attempts to identify interaction problems between human users and computer systems, typically focusing on identifying confusing, error-prone, or inefficient interaction patterns. However, HCI analysis focuses on detecting problems in human-computer interaction rather than performing an objective, quantitative complexity analysis of that interaction. HCI analysis methods are not designed specifically for measuring management complexity and typically do not operate on management-related data. In particular, HCI analysis collects human performance data from costly observations of many human users, and does not collect and use management-related data directly from a system under test. Additionally, HCI analysis typically produces qualitative results suggesting areas for improvement of a particular user interface or interaction pattern. Thus, it does not produce quantitative results that evaluate the overall complexity of managing a system, independent of the particular user interface experience. The Model Human Processor approach to HCI analysis does provide objective, quantitative results; however, these results quantify interaction time for motor-function tasks like moving a mouse or clicking an on-screen button, and thus do not provide insight into the complexity of managing computing systems and services.

Dependability evaluation combines aspects of objective, reproducible performance benchmarking with HCI analysis techniques, with a focus on configuration-related problems; see, e.g., Brown et al., “Experience with Evaluating Human-Assisted Recovery Processes,” Proceedings of the 2004 International Conference on Dependable Systems and Networks, Los Alamitos, Calif., IEEE, 2004. This approach includes a system for measuring configuration quality as performed by human users, but it does not measure configuration complexity and provides neither reproducibility nor objective measures.

Basic complexity evaluation quantitatively evaluates the complexity of computing system configuration; see, e.g., Brown et al., “System and methods for quantitatively evaluating complexity of computing system configuration,” Ser. No. 11/205,972, filed on Aug. 17, 2005, and Brown et al., “System and methods for integrating authoring with complexity analysis for computing system operation procedures.” However, these approaches do not provide metrics calibration that maps configuration-related data directly from a system under test to human-perceived complexity such as labor cost, efficiency, and error rate.

SUMMARY OF THE INVENTION

The invention broadly and generally provides a method for calibrating the relationship between management-inherent complexity metrics deriving from the management structure and human-perceived complexity of information technology management, comprising: (a) obtaining a set of management-inherent complexity metrics; (b) obtaining a set of human-perceived complexity metrics; (c) constructing a control model identifying a set of dominant indicators selected from the aforesaid set of management-inherent complexity metrics; and (d) establishing a value model mapping from the aforesaid set of dominant indicators to the aforesaid set of human-perceived complexity metrics.

The method may further comprise obtaining and validating the aforesaid control model and the aforesaid value model for quality assessment. This step may be repeated.

In some embodiments, the aforesaid set of management-inherent complexity metrics comprises at least one of: (a) execution complexity metrics; (b) parameter complexity metrics; and (c) memory complexity metrics.

In some embodiments, the aforesaid value model may be constructed using a statistical approach or linear regression.

In some embodiments, the aforesaid value model is constructed using machine learning, for example, an artificial neural network. This artificial neural network may be a radial basis function network.

Advantageously, the aforesaid steps of obtaining metrics may comprise at least one of: (a) obtaining management-inherent complexity metrics from a complexity analysis; and (b) acquiring human-perceived complexity metrics through controlled user studies.

The aforesaid step of constructing a control model may comprise at least one of: (a) obtaining a subset of management-inherent complexity metrics as a set of dominant indicators under study; (b) constructing a value model from the aforesaid set of dominant indicators and the aforesaid set of human-perceived complexity metrics based on a set of information technology management data; and (c) evaluating the quality of the aforesaid value model based on a different set of information technology management data.

The method may further comprise obtaining a different subset of management-inherent complexity metrics from the aforesaid set of dominant indicators under study. This step may be repeated until no better set of dominant indicators is found.

The invention further broadly and generally provides a method for extrapolating from management-inherent complexity metrics to human-perceived complexity of information technology management, the aforesaid method comprising: (a) collecting a set of management-inherent complexity metrics; (b) obtaining a value model; and (c) predicting human-perceived complexity based on the aforesaid set of management-inherent complexity metrics and the aforesaid value model.

The invention further broadly and generally provides a program storage device readable by a digital processing apparatus and having a program of instructions which are tangibly embodied on the storage device and which are executable by the processing apparatus to perform a method for calibrating the relationship between management-inherent complexity metrics deriving from the management structure and human-perceived complexity of information technology management, the aforesaid method comprising: (a) obtaining a set of management-inherent complexity metrics; (b) obtaining a set of human-perceived complexity metrics; (c) constructing a control model identifying a set of dominant indicators selected from the aforesaid set of management-inherent complexity metrics; and (d) establishing a value model mapping from the aforesaid set of dominant indicators to the aforesaid set of human-perceived complexity metrics.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating the overall architecture for complexity calibration and extrapolation.

FIG. 2 is a flow diagram illustrating the overall process for complexity calibration.

FIG. 3 is a flow diagram illustrating the overall process for complexity extrapolation.

FIG. 4 is a block diagram illustrating the logical structure of the value model.

FIG. 5 is a flow diagram illustrating the operation of the control model for identifying dominant indicators.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENT

Exemplary embodiments of the invention as described herein generally include systems and methods for calibrating and extrapolating complexity metrics of information technology management.

For illustrative purposes, exemplary embodiments of the invention will be described with specific reference, where needed, to calibrating and extrapolating complexity metrics of information technology management of a configuration procedure, wherein the management-inherent complexity metrics deriving from the management structure comprise one or more of execution complexity metrics, parameter complexity metrics, and/or memory complexity metrics, and the human-perceived complexity metrics comprise one or more of cost metrics, efficiency metrics, and quality metrics. It is to be understood, however, that the present invention is not limited to any particular kind of information technology management. Rather, the invention is more generally applicable to any information technology management in which it would be desirable to conduct complexity model calibration and extrapolation.

It is to be understood that the system and methods described herein in accordance with the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Preferably, the present invention is implemented in software comprising program instructions that are tangibly embodied on one or more program storage devices (e.g., hard disk, magnetic floppy disk, RAM, CD ROM, DVD, ROM and flash memory), and executable by any device or machine comprising suitable architecture.

It is to be further understood that because the constituent system modules and method steps depicted in the accompanying Figures can be implemented in software, the actual connections between the system components (or the flow of the process steps) may differ depending upon the manner in which the application is programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.

FIG. 1 is a block diagram illustrating the overall architecture for complexity calibration and extrapolation. FIG. 1 depicts one or more data processing systems (100) that collect and evaluate configuration-related data utilizing techniques taught in U.S. patent application Ser. No. 11/205,972, filed on Aug. 17, 2005. This comprises observing the configuration procedure (101) between the system administrator (103) and the managed system (105) based on configuration goals (102) and authoritative documentation sources (104), documenting the representation of the procedure (106), conducting analysis (107), and outputting the quantified results (108).

An exemplary embodiment of the present invention begins by obtaining (or collecting) a set of human-perceived complexity metrics (110) from the system administrator (103), through user studies, for example, and obtaining a set of management-inherent complexity metrics (111) from the quantified results (108) of the complexity evaluation. Thereafter, the calibration analysis (112) is conducted to generate calibration models (113), which quantify the relationship between the management-inherent complexity metrics and the human-perceived complexity of the configuration procedure.

FIG. 1 also depicts a different data processing system (120) that collects and evaluates configuration-related data utilizing the techniques taught in U.S. patent application Ser. No. 11/205,972, filed on Aug. 17, 2005. For this system, the present invention, without again collecting a set of human-perceived complexity metrics from the system administrator through user studies (which can be costly or even infeasible), conducts extrapolation analysis (132) based on the set of management-inherent complexity metrics (131) from the data processing system (120) and the calibration models (113) from the calibration analysis (112) to generate the human-perceived complexity metrics (133).

FIG. 2 is a flow diagram illustrating the overall process for complexity calibration. To calibrate the relationship between management-inherent complexity metrics and human-perceived complexity of information technology management, a system following a method consistent with the present invention collects a set of management-inherent complexity metrics (201), collects a set of human-perceived complexity metrics (202), and constructs a control model identifying a set of dominant indicators (203), which are selected from the set of management-inherent complexity metrics collected in (201) and are most related to the set of human-perceived complexity metrics collected in (202). After that, it establishes a value model that maps from the dominant indicators to the human-perceived complexity metrics (204). The above process is repeated when new data is available (205) and the constructed calibration models, including the control model from (203) and the value model from (204), are not yet valid.

FIG. 3 is a flow diagram illustrating the overall process for complexity extrapolation. FIG. 3 depicts the process of extrapolating from management-inherent complexity metrics to human-perceived complexity of information technology management when human-perceived complexity metrics are not available. A system following a method consistent with the present invention collects a set of management-inherent complexity metrics (301), obtains a value model (302), which is the model established in (204), and predicts the human-perceived complexity (303) based on this set of management-inherent complexity metrics and the value model.
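
As a concrete illustration of this extrapolation flow, consider the following minimal Python sketch. It uses a linear value model (one of the model types described below); the variable names, training data, and numeric values are hypothetical stand-ins, since the patent does not prescribe a particular implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# (302) Obtain a value model. Here a linear model is assumed to have been
# calibrated beforehand on (nActions, nCtxSw) -> execution-time data; the
# training values below are hypothetical.
value_model = LinearRegression().fit(
    np.array([[10, 2], [25, 6], [40, 9], [15, 3]]),  # calibration inputs
    np.array([18.0, 41.0, 63.0, 25.0]),              # observed execution times
)

# (301) Collect management-inherent complexity metrics for a new procedure.
new_metrics = np.array([[30, 5]])

# (303) Predict human-perceived complexity without further user studies.
print(value_model.predict(new_metrics))
```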

FIG. 4 is a block diagram illustrating the logical structure of the value model. The value model (400) has model inputs including one or more management-inherent complexity metrics (410) and one or more environment metrics (420), and has model outputs including one or more human-perceived complexity metrics (430). The management-inherent complexity metrics (410) comprise one or more of execution complexity metrics (411), parameter complexity metrics (412), and memory complexity metrics (413). The human-perceived complexity metrics (430) comprise one or more of metrics on labor cost (431), efficiency (432), and quality (433).
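
The logical structure of FIG. 4 can be summarized in the following Python sketch; the field names and types are illustrative assumptions, since the patent specifies only the categories of inputs and outputs, not a data layout.

```python
# A minimal sketch of the value model's interface (FIG. 4). Only the metric
# categories come from the text; the fields themselves are hypothetical.
from dataclasses import dataclass

@dataclass
class ManagementInherentMetrics:   # model inputs (410)
    execution_complexity: float    # (411) e.g. number of actions
    parameter_complexity: float    # (412)
    memory_complexity: float       # (413)

@dataclass
class EnvironmentMetrics:          # model inputs (420)
    goal: str                      # e.g. the configuration goal pursued

@dataclass
class HumanPerceivedMetrics:       # model outputs (430)
    labor_cost: float              # (431)
    efficiency: float              # (432)
    quality: float                 # (433)
```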

The value model can be constructed using statistical approaches or machine learning approaches. For example, a linear regression model can be constructed as
ET = b0 + b1*nActions + b2*nCtxSw
where the model inputs include explanatory variables such as the number of actions (nActions) and the number of context switches (nCtxSw), and the model output is the execution time (ET). The model coefficients b0, b1, and b2 can be obtained using a least-squares approach.
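
A minimal sketch of this least-squares fit follows, assuming numpy and hypothetical calibration observations (the patent gives the model form but no data):

```python
import numpy as np

# Hypothetical calibration observations: management-inherent metrics
# (nActions, nCtxSw) paired with execution times measured in user studies.
n_actions = np.array([10.0, 25.0, 40.0, 15.0, 30.0])
n_ctx_sw  = np.array([ 2.0,  6.0,  9.0,  3.0,  7.0])
exec_time = np.array([18.0, 41.0, 63.0, 25.0, 49.0])

# Design matrix with an intercept column for b0.
X = np.column_stack([np.ones_like(n_actions), n_actions, n_ctx_sw])

# Solve ET = b0 + b1*nActions + b2*nCtxSw in the least-squares sense.
b, residuals, rank, sv = np.linalg.lstsq(X, exec_time, rcond=None)
print("b0, b1, b2 =", b)
```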

Alternatively, a type of neural network called a radial basis function network can be constructed as
ET = RBF(nActions, nCtxSw, . . . , goal, . . . )
which can be used to build a nonlinear relationship and can further comprise environment variables that classify the different IT management types, yielding a higher-quality model.
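
The following is a minimal sketch of such a radial basis function network, with Gaussian basis functions, centers taken from the training points, and linearly solved output weights; the data, the width, and the center-selection rule are illustrative assumptions, as the patent does not fix these details.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian basis: phi_ij = exp(-||x_i - c_j||^2 / (2*width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical calibration data: (nActions, nCtxSw) -> execution time.
X = np.array([[10, 2], [25, 6], [40, 9], [15, 3], [30, 7]], dtype=float)
y = np.array([18.0, 41.0, 63.0, 25.0, 49.0])

centers = X[[0, 2, 4]]                       # a few training points as centers
Phi = rbf_design(X, centers, width=10.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output weights

# Predict the human-perceived metric for a new procedure.
x_new = np.array([[20, 4]], dtype=float)
print(rbf_design(x_new, centers, width=10.0) @ w)
```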

FIG. 5 is a flow diagram illustrating the operation of the control model for identifying dominant indicators. FIG. 5 depicts the step of constructing a control model identifying a set of dominant indicators, selected from the above set of management-inherent complexity metrics, that are most related to said set of human-perceived complexity metrics. A system performing a method consistent with the present invention obtains a subset of the management-inherent complexity metrics (511) as a set of dominant indicators (520) under study (501), and constructs a value model (502) from this set of dominant indicators (520) and the set of human-perceived complexity metrics (512) based on a set of information technology management data (510). Afterwards, the system evaluates the quality of the value model (503) based on a different set of information technology management data (530) including both management-inherent complexity metrics (531) and human-perceived complexity metrics (532). Depending on the quality of the value model (504), it may select a different subset of management-inherent complexity metrics as said set of dominant indicators under study; otherwise, it can perform the step of establishing a value model mapping from the dominant indicators to the human-perceived complexity metrics (204). One way to carry out this search is sketched below.
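
As referenced above, one plausible realization of this iterative search is greedy forward selection: the value model is fit on one data set (510) and scored on a held-out set (530), and the loop stops when no better set of dominant indicators is found. The sketch assumes a linear value model and hypothetical data matrices; the patent does not mandate this particular search strategy.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_select(X_fit, y_fit, X_eval, y_eval):
    """Greedily grow the set of dominant indicators while the value model's
    held-out error keeps improving; stop when no candidate metric helps."""
    remaining = list(range(X_fit.shape[1]))    # candidate metric columns
    selected, best_err = [], np.inf
    while remaining:
        errs = {}
        for j in remaining:
            cols = selected + [j]
            model = LinearRegression().fit(X_fit[:, cols], y_fit)
            errs[j] = np.mean((model.predict(X_eval[:, cols]) - y_eval) ** 2)
        j_best = min(errs, key=errs.get)
        if errs[j_best] >= best_err:
            break                              # no better set of dominant indicators
        best_err = errs[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_err

# Columns of X_fit/X_eval might be execution, parameter, and memory metrics;
# y might be execution time. The function returns the dominant indicator
# columns and the held-out error of the best value model found.
```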

While changes and variations to the embodiments may be made by those skilled in the art, the scope of the invention is to be determined by the appended claims.

Claims

1. A method for calibrating the relationship between management-inherent complexity metrics deriving from the management structure of a system and human-perceived complexity of information technology management, said method comprising:

obtaining a set of management-inherent complexity metrics from quantified results of a complexity analysis, the complexity analysis quantifying a complexity of a configuration procedure between the system and an administrator of the system;
obtaining a set of human-perceived complexity metrics from user studies from the administrator of the system;
constructing a control model on a first processor, said control model identifying a set of dominant indicators selected from said set of management-inherent complexity metrics;
establishing a value model on a second processor, said value model mapping from said set of dominant indicators selected from said set of management-inherent complexity metrics to said set of human-perceived complexity metrics.

2. The method as set forth in claim 1, further comprising:

obtaining and validating said control model and said value model for quality assessment; and
repeating said obtaining and validating said control model and said value model for quality assessment.

3. The method as set forth in claim 1, wherein said set of management-inherent complexity metrics comprise at least one of:

execution complexity metrics;
parameter complexity metrics; and
memory complexity metrics.

4. The method as set forth in claim 1, wherein said value model is constructed using at least one of:

a statistical approach;
linear regression;
machine learning; and
an artificial neural network, wherein said artificial neural network is a radial basis function.

5. The method as set forth in claim 1, wherein said step of constructing a control model comprises at least one of:

(a) obtaining a subset of management-inherent complexity metrics as a set of dominant indicators under study;
(b) constructing a value model from said set of dominant indicators and said set of human-perceived complexity metrics based on a set of information technology management data; and
(c) evaluating the quality of said value model based on a different set of information technology management data.

6. The method as set forth in claim 5, further comprising obtaining a different subset of management-inherent complexity metrics from said set of dominant indicators under study.

7. The method as set forth in claim 6, further comprising repeating said step of obtaining a different subset of management-inherent complexity metrics as said set of dominant indicators under study until no better set of dominant indicators is found.

8. A method for extrapolating from management-inherent complexity metrics to human-perceived complexity of information technology management, said method comprising:

collecting a set of management-inherent complexity metrics from quantified results of a complexity analysis, the complexity analysis quantifying a complexity of a configuration procedure between the system and an administrator of the system;
obtaining a value model;
predicting human-perceived complexity with a processor, the human-perceived complexity being based on said set of management-inherent complexity metrics and said value model, said predicting including: inputting the management-inherent complexity metrics into the value model; and outputting human-perceived complexity metrics from the value model.

9. A program storage device readable by a digital processing apparatus and having a program of instructions which are tangibly embodied on the storage device and which are executable by the processing apparatus to perform a method for calibrating the relationship between management-inherent complexity metrics deriving from the management structure of a system and human-perceived complexity of information technology management, said method comprising:

obtaining a set of management-inherent complexity metrics from quantified results of a complexity analysis, the complexity analysis quantifying a complexity of a configuration procedure between the system and an administrator of the system;
obtaining a set of human-perceived complexity metrics from user studies from the administrator of the system;
constructing a control model on a first processor, said control model identifying a set of dominant indicators selected from said set of management-inherent complexity metrics; and
establishing a value model on a second processor, said value model mapping from said set of dominant indicators to said set of human-perceived complexity metrics.

10. The method as set forth in claim 1, further comprising:

inputting the management-inherent complexity metrics into the value model; and
outputting a second set of human-perceived complexity metrics from the value model.

11. The method as set forth in claim 10, wherein at least one of the set of human-perceived complexity metrics and the second set of human-perceived complexity metrics each comprise at least one of cost metrics, efficiency metrics, and quality metrics.

12. The method as set forth in claim 10, further including inputting into the value model uncontrolled environment metrics to classify different information technology management types, the uncontrolled environment metrics comprising process-related metrics and process-independent metrics, the process-related metrics including at least one of goal metrics, prerequisite metrics, and scenario metrics, the process-independent metrics including at least one of preference metrics and fatigue metrics.

13. The method as set forth in claim 4, wherein inputs of said linear regression comprise a number of actions and a number of context switches, wherein outputs of said linear regression comprise an execution time, and wherein coefficients of said linear regression are obtained using a least squares approach.

14. The method as set forth in claim 4, wherein said radial basis function builds a nonlinear relationship, the radial basis function comprising environment variables to classify different information technology management types.

15. The method as set forth in claim 8, wherein said management-inherent complexity metrics comprise at least one of:

execution complexity metrics;
parameter complexity metrics; and
memory complexity metrics.

16. The method as set forth in claim 8, wherein the human-perceived complexity metrics comprise at least one of cost metrics, efficiency metrics, and quality metrics.

17. The method as set forth in claim 8, further including inputting into the value model uncontrolled environment metrics to classify different information technology management types, the uncontrolled environment metrics comprising process-related metrics and process-independent metrics, the process-related metrics including at least one of goal metrics, prerequisite metrics, and scenario metrics, the process-independent metrics including at least one of preference metrics and fatigue metrics.

18. The method as set forth in claim 8, wherein said obtaining of the value model comprises constructing a linear regression model, wherein inputs of said linear regression model comprise a number of actions and a number of context switches, wherein outputs of said linear regression model comprise an execution time, and wherein coefficients of said linear regression model are obtained using a least squares approach.

19. The method as set forth in claim 8, wherein said obtaining of the value model comprises constructing a radial basis function neural network for building a nonlinear relationship, the radial basis function neural network comprising environment variables to classify different IT management types.

20. The program storage device as set forth in claim 9, further comprising:

inputting the management-inherent complexity metrics into the value model, wherein the management-inherent complexity metrics comprises: execution complexity metrics, parameter complexity metrics, and memory complexity metrics; and
outputting a second set of human-perceived complexity metrics from the value model, wherein the first set of human-perceived complexity metrics and the second set of human-perceived complexity metrics each comprises: cost metrics, efficiency metrics, and quality metrics.
Referenced Cited
U.S. Patent Documents
4835372 May 30, 1989 Gombrich et al.
5504921 April 2, 1996 Dev et al.
5724262 March 3, 1998 Ghahramani
5734837 March 31, 1998 Flores et al.
5765138 June 9, 1998 Aycock et al.
5774661 June 30, 1998 Chatterjee et al.
5826239 October 20, 1998 Du et al.
5850535 December 15, 1998 Maystrovsky et al.
5870545 February 9, 1999 Davis et al.
5884302 March 16, 1999 Ho
5907488 May 25, 1999 Arimoto et al.
5937388 August 10, 1999 Davis et al.
6049776 April 11, 2000 Donnelly et al.
6131085 October 10, 2000 Rossides
6249769 June 19, 2001 Ruffin et al.
6259448 July 10, 2001 McNally et al.
6263335 July 17, 2001 Paik et al.
6308208 October 23, 2001 Jung et al.
6339838 January 15, 2002 Weinman, Jr.
6363384 March 26, 2002 Cookmeyer, II et al.
6453269 September 17, 2002 Quernemoen
6473794 October 29, 2002 Guheen et al.
6496209 December 17, 2002 Horii
6523027 February 18, 2003 Underwood
6526387 February 25, 2003 Ruffin et al.
6526392 February 25, 2003 Dietrich et al.
6526404 February 25, 2003 Slater et al.
6618730 September 9, 2003 Poulter et al.
6675149 January 6, 2004 Ruffin et al.
6738736 May 18, 2004 Bond
6789101 September 7, 2004 Clarke et al.
6810383 October 26, 2004 Loveland
6865370 March 8, 2005 Ho et al.
6879685 April 12, 2005 Peterson et al.
6907549 June 14, 2005 Davis et al.
6970803 November 29, 2005 Aerdts et al.
6988088 January 17, 2006 Miikkulainen et al.
6988132 January 17, 2006 Horvitz
7010593 March 7, 2006 Raymond
7039606 May 2, 2006 Hoffman et al.
7089529 August 8, 2006 Sweitzer et al.
7114146 September 26, 2006 Zhang et al.
7177774 February 13, 2007 Brown et al.
7236966 June 26, 2007 Jackson et al.
7260535 August 21, 2007 Galanes et al.
7293238 November 6, 2007 Brook et al.
7315826 January 1, 2008 Guheen et al.
7364067 April 29, 2008 Steusloff et al.
7403948 July 22, 2008 Ghoneimy et al.
7412502 August 12, 2008 Fearn et al.
7467198 December 16, 2008 Goodman et al.
7472037 December 30, 2008 Brown et al.
7562143 July 14, 2009 Fellenstein et al.
7580906 August 25, 2009 Faihe
7707015 April 27, 2010 Lubrecht et al.
7802144 September 21, 2010 Vinberg et al.
20010047270 November 29, 2001 Gusick et al.
20020019837 February 14, 2002 Balnaves
20020055849 May 9, 2002 Georgakopoulos et al.
20020091736 July 11, 2002 Wall
20020099578 July 25, 2002 Eicher et al.
20020111823 August 15, 2002 Heptner
20020140725 October 3, 2002 Horii
20020147809 October 10, 2002 Vinberg
20020161875 October 31, 2002 Raymond
20020169649 November 14, 2002 Lineberry et al.
20020186238 December 12, 2002 Sylor et al.
20030004746 January 2, 2003 Kheirolomoom et al.
20030018629 January 23, 2003 Namba
20030018771 January 23, 2003 Vinberg
20030033402 February 13, 2003 Battat et al.
20030065764 April 3, 2003 Capers et al.
20030065805 April 3, 2003 Barnes
20030097286 May 22, 2003 Skeen
20030101086 May 29, 2003 San Miguel
20030154406 August 14, 2003 Honarvar et al.
20030172145 September 11, 2003 Nguyen
20030187719 October 2, 2003 Brocklebank
20030225747 December 4, 2003 Brown et al.
20040024627 February 5, 2004 Keener
20040158568 August 12, 2004 Colle et al.
20040172466 September 2, 2004 Douglas et al.
20040181435 September 16, 2004 Snell et al.
20040186757 September 23, 2004 Starkey
20040186758 September 23, 2004 Halac et al.
20040199417 October 7, 2004 Baxter et al.
20050027585 February 3, 2005 Wodtke et al.
20050027845 February 3, 2005 Secor et al.
20050066026 March 24, 2005 Chen et al.
20050091269 April 28, 2005 Gerber et al.
20050114306 May 26, 2005 Shu et al.
20050114829 May 26, 2005 Robin et al.
20050136946 June 23, 2005 Trossen
20050138631 June 23, 2005 Bellotti et al.
20050159969 July 21, 2005 Sheppard
20050187929 August 25, 2005 Staggs
20050203917 September 15, 2005 Freeberg et al.
20050223299 October 6, 2005 Childress et al.
20050223392 October 6, 2005 Cox et al.
20050254775 November 17, 2005 Hamilton et al.
20060067252 March 30, 2006 John et al.
20060069607 March 30, 2006 Linder
20060112036 May 25, 2006 Zhang et al.
20060112050 May 25, 2006 Miikkulainen et al.
20060129906 June 15, 2006 Wall
20060168168 July 27, 2006 Xia et al.
20060178913 August 10, 2006 Lara et al.
20060184410 August 17, 2006 Ramamurthy et al.
20060190482 August 24, 2006 Kishan et al.
20060224569 October 5, 2006 DeSanto et al.
20060224580 October 5, 2006 Quiroga et al.
20060235690 October 19, 2006 Tomasic et al.
20060282302 December 14, 2006 Hussain
20060287890 December 21, 2006 Stead et al.
20070043524 February 22, 2007 Brown et al.
20070055558 March 8, 2007 Shanahan et al.
20070073576 March 29, 2007 Connors et al.
20070073651 March 29, 2007 Imielinski
20070083419 April 12, 2007 Baxter et al.
20070118514 May 24, 2007 Mariappan
20070168225 July 19, 2007 Haider et al.
20070219958 September 20, 2007 Park et al.
20070234282 October 4, 2007 Prigge et al.
20070282470 December 6, 2007 Hernandez et al.
20070282622 December 6, 2007 Hernandez et al.
20070282645 December 6, 2007 Brown et al.
20070282653 December 6, 2007 Bishop et al.
20070282655 December 6, 2007 Jaluka et al.
20070282659 December 6, 2007 Bailey et al.
20070282692 December 6, 2007 Bishop et al.
20070282776 December 6, 2007 Jaluka et al.
20070282876 December 6, 2007 Diao et al.
20070282942 December 6, 2007 Bailey et al.
20070288274 December 13, 2007 Chao et al.
20070292833 December 20, 2007 Brodie et al.
20080065448 March 13, 2008 Hull et al.
20080109260 May 8, 2008 Roof
20080213740 September 4, 2008 Brodie et al.
20080215404 September 4, 2008 Diao et al.
20090012887 January 8, 2009 Taub et al.
Foreign Patent Documents
2007143516 December 2007 WO
Other references
  • “A Capacity Planning Model of Unreliable Multimedia Service Systems”, by Kiejin Park and Sungsoo Kim, Department of Software, Anyang University, Kangwha, Incheon, South Korea, Jul. 2001.
  • “Tracking Your Changing Skills Inventory: Why It's Now Possible, and What It Means for Your Organization”, from CIO.com, Mid 2002 IT Staffing Update, Brainbench.
  • “Project MEGAGRID: Capacity Planning for Large Commodity Clusters”, An Oracle, Dell, EMC, Intel Joint White Paper, Dec. 2004.
  • Ganesarajah, Dinesh and Lupu Emil, 2002, Workflow-based composition of web-services: a business model or programming paradigm?, IEEE Computer Society.
  • M.D. Harrison, P.D. Johnson and P.C. Wright. “Relating the automation of functions in multi-agent control systems to a system engineering representation.” Department of Computer Science, University of York, Heslington, York. UK. Aug. 13, 2004.
  • “Self-Adaptive SLA-Driven Capacity Management for Internet Services”, by Bruno Abrahao et al., Computer Science Department, Federal University of Minas Gerais, Brazil, 2005.
  • zur Muehlen, Michael. “Resource Modeling in Workflow Applications”, 1999.
  • BEA Systems, Inc., “BEA White Paper—BEA AquaLogic Service Bus—IT's Direct Route to SOA,” printout from http://www.bea.com/content/newsevents/whitepapers/BEAAQLServiceBuswp.pdf, Jun. 2005.
  • Cape Clear Software, Inc., “Cape Clear 6.5”, printout from http://www.capeclear.com/download/CC65Broch.pdf, copyright notice 2005.
  • Cordys, “Cordys Enterprise Service Bus—Capabilities,” printout from http://www.cordys.com/en/Products/CordysESBcapabilities.htm, printed on Jun. 26, 2006.
  • Oracle, “Enterprise Service Bus,” printout from http://www.oracle.com/appserver/esb.html, printed on Jun. 27, 2006.
  • PolarLake Limited, “Enterprise Service Bus (ESB) Resource Center,” printout from http://www.polarlake.com/en/html/resources/esb/, printed on Jun. 27, 2006, copyright notice dated 2006.
  • Sonic Software, “ESB Architecture & Lifecycle Definition,” printout from http://www.sonicsoftware.com/products/sonicesb/architecturedefinition/index.ssp, printed on Jun. 26, 2006.
  • Mercury, Mercury Capacity Planning (Powered by Hyperformix), 2004.
  • Team Quest, Capacity Planning with TeamQuest Analytic Modeling Software, printed from http://www.teamquest.com/solutions-products/solutions/planning-provis . . . on Jun. 4, 2006.
Patent History
Patent number: 8001068
Type: Grant
Filed: Jun 5, 2006
Date of Patent: Aug 16, 2011
Patent Publication Number: 20070282644
Assignee: International Business Machines Corporation (Armonk, NY)
Inventors: Yixin Diao (White Plains, NY), Robert Filepp (Westport, CT), Robert D. Kearney (Yorktown Heights, NY), Alexander Keller (New York, NY)
Primary Examiner: David R Vincent
Attorney: Cahn & Samuels, LLP
Application Number: 11/422,195
Classifications
Current U.S. Class: Knowledge Processing System (706/45)
International Classification: G06F 17/00 (20060101);