AUTOMATIC BASELINING OF RESOURCE CONSUMPTION FOR TRANSACTIONS
An application monitoring system determines the health of one or more resources used to process a transaction, business application, or other computer process. Performance data is generated in response to monitoring application execution and processed to determine an actual value and a baseline value for resource usage data. Resource usage baseline data may be determined from previous resource usage data associated with a resource and a particular transaction (a resource-transaction pair). The baseline values are compared to actual values to determine a deviation for the actual value. Deviation information for the time series data can be reported through an interface or in some other manner.
The growing presence of the Internet and other computer networks such as intranets and extranets has brought about the development of applications in e-commerce, education and other areas. Organizations increasingly rely on such applications to carry out their business or other objectives, and devote considerable resources to ensuring that the applications perform as expected. To this end, various application management techniques have been developed.
One approach for managing an application involves monitoring the application, generating data regarding application performance and analyzing the data to determine application health. Some system management products analyze a large number of data streams to try to determine a normal and abnormal application state. Large numbers of data streams are often analyzed because the system management products do not have a semantic understanding of the data being analyzed. Accordingly, when an unhealthy application state occurs, many data streams will have abnormal data values because the data streams are causally related to one another. Because the system management products lack a semantic understanding of the data, they cannot assist the user in determining either the ultimate source or cause of a problem. Additionally, these application management systems may not know whether a change in data indicates that an application or server is actually unhealthy.
SUMMARY

The technology described herein determines the health of one or more computing resources used to process a request for an application. Performance data is generated in response to monitoring application execution and includes resource usage data. These resources may include central processing unit (CPU), memory, disk I/O bandwidth, network I/O bandwidth and other resources. The resource data is processed to determine a baseline for resource usage. The baseline data may include predicted or expected resource usage values that are compared to a time series of actual resource usage values. Based on the comparison, a deviation from the baseline value is determined for the actual resource usage. Deviation information for the time series data is then reported, for example to a user through an interface.
In one embodiment, the deviation information may be associated with a deviation range. A number of deviation ranges can be configured to extend from a predicted value of a data point. The actual data point value is contained in one of the ranges depending on how far the actual data point deviates from the predicted value. The deviation information for the actual data point with respect to the predicted data point may be communicated through an interface as an indication of deviation level (e.g., low, medium, high) and updated as additional data points in the time series are processed.
The deviation information may be provided through an interface as health information for a resource. In one embodiment, the interface may provide health and/or performance information associated with resources used by a business application, transaction, or some other computer process. A transaction is a process performed to generate a response to a request, and a business application is a set of transactions.
In an embodiment, an application performs transactions using one or more resources. A first usage of each resource by a first transaction is determined. The difference between the first usage and a predicted usage for each resource with respect to the first transaction is then determined, and health information for the resources is reported. The health information is reported for each resource with respect to the first transaction and is derived from the difference between the first usage and the predicted usage.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
An application monitoring system determines the health of one or more resources used to process a transaction, business application, or other computer process. An application is executed and monitored, and performance data associated with the application execution is generated. The performance data may include data obtained from an application server operating system, a Java virtual machine (JVM) or some other source. A portion of the performance data related to resources is accessed and processed to determine a current resource usage and a baseline resource usage. In some embodiments, the resource usage baseline is a predicted or expected resource usage value determined from previous resource usage data, such as a time series. In some embodiments, a resource usage time series of data may be associated with a resource and a particular transaction (a resource-transaction pair). The baseline values are compared to actual values to determine a deviation for the actual value. Deviation information for the time series data is reported, for example to a user through an interface. The user may then determine, based on the deviation information for a resource, whether that resource is unhealthy and whether application performance is being affected by the resource.
In one embodiment, the deviation information reported to a user is based on a deviation range for the actual data point value. A number of deviation ranges can be generated based on the predicted value. The actual data point will be contained in one of the ranges, wherein the deviation associated with the range is proportional to how far the range is from the predicted value. For example, a range that contains the predicted value may be associated with a low deviation, a range adjacent to the low deviation range may have a medium deviation, and another range adjacent to the medium deviation range may have a high deviation. An indication of which range contains the actual data point value may be presented to a user through an interface and updated as different data points in the time series are processed.
In some embodiments, a deviation range may be selected for a time series data point based on two or more predicted values for the data point. When predicting values, two or more functions may be fit to past time series values and used to calculate future data point values in the series data. Deviation ranges are configured for each predicted value, and each predicted value is contained in one of its corresponding ranges. The different predicted values and corresponding deviation ranges are processed to select an overall deviation range based on highest number of occurrences, degree of deviation, and/or other factors.
In some embodiments, the resource usage may be expressed in terms of a transaction, business application, or some other computer process. A transaction is a set of operations performed to process a request received by an application. The operations may be performed by application components, such as EJBs, servlets and other components, and computer processes invoked by the components, such as a backend, database or other process or system. A business application is a set of transactions defined by a user or in some other manner. In some embodiments, the amount of a resource used for a transaction is determined by monitoring execution of an application that performs the transaction in response to a URL or other request. A request may be a “real” customer request or a synthetic request.
To determine whether resource usage is acceptable, the usage is compared to a baseline. In some embodiments, the utilization of one or more resources can be determined for individual transactions. The utilization of each resource may be based on non-transaction specific usage and transaction specific usage for each resource. The non-transaction specific resource usage is not caused by any particular transaction, but is programmatically attributed to one or more transactions which can be associated with the usage. Once the resource usage for each transaction is known, the current resource usage may be determined and the deviation may be determined.
Resources may include hardware, software, or hardware-software hybrid components of a computing system. Examples of resources include a central processing unit (CPU), memory devices and systems such as RAM, DRAM, SRAM, or other memory, input and output bandwidth for a hard disk, network input and output bandwidth for communicating data (sending and receiving data) with a device or system external to a server (network bandwidth), and other computing system components. Resources are discussed in more detail below.
In some embodiments, an application may perform a transaction by associating the transaction with a thread. Once a thread is associated with a transaction, the resource utilization associated with the thread is determined. Performance data associated with application runtime may be generated based on monitoring classes, methods, objects and other code within the application. The thread may handle instantiating classes, calling methods and other tasks to perform the transaction associated with the thread. The performance data is then reported, in some embodiments the data is aggregated, and the resulting data is analyzed to determine resource usage of one or more resources for one or more individual transactions.
In some embodiments, the resource utilization for a transaction may include the utilization directly related to the transaction. The resource utilization directly related to a transaction may be the use of the resource that is directly caused or required by the transaction. For example, a conceptual transaction may require objects that are thirty bytes in length to be stored in RAM memory and require twenty-five CPU processing cycles to perform the entire transaction. The direct resource utilization for this conceptual transaction is thirty bytes of RAM memory and twenty-five CPU cycles.
Additional resource usage may be incurred indirectly from performance of one or more transactions or a process during which the transactions are performed. For example, a CPU may require thirty-five computer cycles to perform garbage collection. Though this CPU usage is not attributed to any one transaction, this non-transaction specific usage is indirectly associated with transactions that create and store data which eventually becomes “garbage” data. In some embodiments, non-transaction specific resource usage that is not a direct result of a transaction may be apportioned to the one or more transactions it is indirectly associated with. In some embodiments, the additional resource usage may be apportioned to one or more transactions based on the percentage load over a period of time in which the non-transaction specific usage occurred. In other embodiments, the non-transaction specific resource load may be apportioned by URL or in some other manner.
In some embodiments, non-transaction specific resource usage may be associated with a computer process which performs background or overhead resource usage while performing the transactions. For example, a CPU overhead usage may include garbage collection, managing thread and connection pools, time spent doing class loading and method compilation and de-compilation, and other tasks. The overhead may be spent as part of a process or in some other manner. In some embodiments, transactions may be performed as part of a Java Virtual Machine (JVM) process. However, transactions can be performed as part of other processes or on other platforms as well, such as Microsoft's .NET or any managed environment where transactions execute on threads and code can be instrumented. The discussion below references transaction monitoring as part of a JVM process for purposes of example only.
As discussed above, resource usage may be determined for transactions, business applications comprising any number of transactions, or some other process. In the example embodiments discussed below, references are made to determining resource usage for a transaction. It is intended that any references to transaction resource usage can easily be converted to business application (or some other process) resource usage, and such references to a transaction are for purposes of example only.
Client device 110 may be implemented as a server, computing device or some other machine that sends requests to network server 112. Network server 112 may provide a network service to client device 110 over network 111. In one embodiment, network server 112 may be implemented as a web server and implement a web service over the Internet. Network server 112 may receive a request from client device 110, process the request and send a response to client device 110. In processing requests, network server 112 may invoke an application on application server 113. The invoked application will process the request, provide a response to network server 112, and network server 112 will provide a corresponding response to client device 110.
Application server 113 includes application 114, application 115 and agent 208. Though not illustrated in
Agent 208 generates performance data in response to monitoring execution of application 115 and provides the performance data to application monitoring system 117. Generation of performance data is discussed in more detail below. Application monitoring system 117 processes performance data reported by agent 208. In some embodiments, the processing of performance data includes providing resource usage and/or performance information to a user through an interface. Application monitoring system 117 is discussed in more detail below with respect to
Backend server 120 may process requests from applications 114-115 of application server 113. Backend server 120 may be implemented as a database, another application server, or some other remote machine in communication with application server 113 that provides a service or processes requests from an application on application server 113. In some embodiments, backend server 120 may be implemented as a remote system that receives requests from an application, processes the request and provides a response. For example, the backend could be another network service.
CPU 142 may be implemented by one or more computer processors on application server 113. When processing a transaction, the level of use or utilization of CPU 142 may be measured in terms of processing time (in terms of seconds, milliseconds, microseconds, or some other unit) or computer cycles used to perform the transaction.
Memory 144 is a resource having a finite amount of memory space. Memory 144 may include one or more types of memory, such as RAM, DRAM, SRAM or some other type of memory. Memory 144 can be used to store objects and other data allocated while processing a transaction, to store data during a computer process (such as a Java Virtual Machine process), and to store other data.
Hard disk 146 is a resource implemented as hardware and code for reading from and writing to a hard disk. Hard disk 146 may include hard disk writing and reading mechanisms, code associated with the hard disk and optionally other code and hardware used to read and write to a hard disk on application server 113. Hard disk 146 has a finite reading and writing bandwidth and is utilized by read and write methods, and optionally other sets of code, which perform hard disk read and write operations. The usage of a hard disk resource may be expressed as a bandwidth for writing to and reading from the disk per second, such as seven thousand bytes per second.
Network I/O bandwidth 148 is implemented as code and hardware that operates to communicate with machines and devices external to server 120. For example, network I/O bandwidth 148 may use a number of sockets to communicate with data store 130 and/or other machines external to application server 113. There is a finite amount of network bandwidth for sending and receiving data over network 115 and a finite number of available sockets (i.e., network connections) for communicating with other devices. The usage of network I/O bandwidth may be expressed as a number of bytes sent and received per second, such as ten thousand kilobytes per second.
Resources 142-148 are just examples of elements that may be used to process a transaction. Other resources, computing components, and other hardware and software elements may be used to perform a transaction. The level of use and/or utilization of these other hardware and software elements (on one or more servers) may be determined as well.
Instruction processing module 140 communicates with operating system 149 and resources 141 to execute instructions. The instructions may be executed in response to receiving a request from a user or detecting some other event within application server 113. Operation of instruction processing module 140 is discussed in more detail below with respect to
Instruction processing module 140 includes threads 151, 152 and 153, dispatch unit 154 and execution pipeline 155, 156 and 157. Each of threads 151-153 may contain instructions to be processed as part of performing a transaction. In some embodiments, each thread is associated with a URL and implemented or controlled by a thread object. A thread class may be instantiated to generate the thread object. Dispatch unit 154 may dispatch instructions from one of threads 151-153 to one of available pipelines 155-157. Execution pipelines 155-157 may execute instructions provided by a thread as provided to the pipeline by dispatch unit 154. While executing instructions in an execution pipeline, the pipeline may access any of resources 142-148.
In one embodiment, the technology herein can be used to monitor behavior of an application on an application server (or other server) using bytecode instrumentation. The technology herein may also be used to access information from the particular application. To monitor the application, an application management tool may instrument the application's object code (also called bytecode).
Probe builder 204 instruments (e.g. modifies) the bytecode for Application 202 to add probes and additional code to Application 202 in order to create Application 115. The probes may measure specific pieces of information about the application without changing the application's business logic. Probe builder 204 also generates Agent 208. Agent 208 may be installed on the same machine as Application 115 or a separate machine. Once the probes have been installed in the application bytecode, the application is referred to as a managed application. More information about instrumenting bytecode can be found in U.S. Pat. No. 6,260,187, "System For Modifying Object Oriented Code" by Lewis K. Cirne, incorporated herein by reference in its entirety.
In one embodiment, the technology described herein doesn't actually modify source code. Rather, the present invention modifies object code. The object code is modified conceptually in the same manner that source code modifications are made. More information about such object code modification can be found in U.S. patent application Ser. No. 09/795,901, “Adding Functionality To Existing Code At Exits,” filed on Feb. 28, 2001, incorporated herein by reference in its entirety.
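By way of illustration only, the following Java sketch shows one way a monitoring agent could hook class loading so that probes may be added to application bytecode. The ProbeAgent class name and the "com/example/app/" package prefix are hypothetical, and a production probe builder would rewrite the class bytes (for example, with a bytecode engineering library) rather than merely observing them.

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    // Minimal illustration of a Java instrumentation agent; a real probe builder
    // would modify and return the class bytes instead of merely observing them.
    public class ProbeAgent {
        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    // A probe builder would insert monitoring probes into
                    // classfileBuffer and return the modified bytes. Returning
                    // null tells the JVM to keep the original bytes unchanged.
                    if (className != null && className.startsWith("com/example/app/")) {
                        System.out.println("Candidate for instrumentation: " + className);
                    }
                    return null;
                }
            });
        }
    }

Such an agent would be packaged with a Premain-Class manifest entry and loaded with the JVM's -javaagent option.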
Enterprise Manager 210 receives performance data from managed applications via Agent 208, runs requested calculations, makes performance data available to workstations 212-214 and optionally sends performance data to database 216 for later analysis. The workstations (e.g. 212 and 214) provide the graphical user interface for viewing performance data. The workstations are used to create custom views of performance data which can be monitored by a human operator. In one embodiment, the workstations consist of two main windows: a console and an explorer. The console displays performance data in a set of customizable views. The explorer depicts alerts and calculators that filter performance data so that the data can be viewed in a meaningful way. The elements of the workstation that organize, manipulate, filter and display performance data include actions, alerts, calculators, dashboards, persistent collections, metric groupings, comparisons, smart triggers and SNMP collections. In some embodiments, other tools can be implemented in the console window, explorer window and other windows within an interface.
In one embodiment of the system of
The computer system of
Portable storage medium drive 262 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, to input and output data and code to and from the computer system of
User input device(s) 260 provides a portion of a user interface. User input device(s) 260 may include an alpha-numeric keypad for inputting alpha-numeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. In order to display textual and graphical information, the computer system of
The components contained in the computer system of
Next, a request is received from client 110 by application server 113 to perform a transaction at step 330. In some embodiments, the request is received by another server or machine and causes an invocation of an application on application server 113. The request (or invocation) received at step 330 may be forwarded to managed application 206. In some embodiments, the request is a URL request received over network 115 from client 110. In some embodiments, the request is received from some other source external or internal to server 120.
Next, a transaction associated with the received request is performed using one or more resources at step 340. In some embodiments, performing a transaction may include initiating the transaction, assigning the transaction to a thread and performing the transaction using one or more resources. More detail regarding performing a transaction associated with a URL is discussed with respect to
Resource usage for the performed transaction is determined at step 350. Determining resource usage may involve processing data acquired from one or more probes or sets of tracer code, accessing a JVM, an operating system or some other computing process. Resource usage may be determined for CPU 142, memory 144, hard disk 146, network I/O bandwidth 148, and/or other resources or computing components. In some embodiments, resource usage is determined after performance data is provided to application monitoring system 117. Thus, step 350 may be performed after step 360 in the method of
Performance data is provided to application monitoring system 117 at step 360. Providing the performance data may include reporting performance data to enterprise manager 210 by agent 116 periodically, such as every fifteen seconds, or based on some other event. The performance data may include resource usage data received by agent 116 from probes and/or tracer code inserted into an application as well as resource usage data retrieved by agent 116 from operating system 149 or a JVM with respect to a resource.
In some embodiments, the performance data (application runtime data) received and reported may contain information for both transaction specific resource usage and data regarding non-transaction specific resource usage. For example, the non-transaction specific performance data reported at step 530 may include CPU time for a JVM process used to process transactions but not associated with a specific transaction (for example, total CPU time for the group of transactions which includes garbage collection CPU time, garbage CPU time only, and so forth). Additionally, the non-transaction specific resource usage reported may also include the total memory space, hard disk bandwidth and network bandwidth attributed to processing transactions by a JVM process for a period of time associated with one or more transactions. In some embodiments, non-transaction specific resource usage data may be retrieved from a JVM, operating system or some other code or process of a machine that contains or has information regarding the resource. This is discussed in more detail below.
The baseline resource usage and deviation from the baseline resource usage for each resource is determined for the performed transaction at step 370. The baseline resource usage for a transaction may be viewed as the actual resource usage by the transaction and non-transaction specific resource usage. For example, non-transaction specific CPU usage may include computer cycles used to collect and process garbage. In one embodiment, determining the baseline deviation includes determining how much an actual data point value differs from a predicted data point value for a resource-transaction pair. The deviation may be determined as a Boolean value, a range of values, or in some other manner. Determining a baseline resource usage and baseline deviation is discussed in more detail below with respect to
Deviation information is reported at step 380. Reporting deviation information may include storing the deviation information in memory, to a hard drive, a data store, or some other storage mechanism, providing the deviation information in a graphical grid, an information window, as an alert, email, page, in a file, to another application or process, or reporting the information in some other manner. In some embodiments, the deviation information is reported to a user through an interface provided through workstations 222 and 224. In some embodiments, the deviation information may provide health information for a particular resource with respect to a transaction or business application. For example, reported baseline deviation information may indicate that CPU usage is at a level of concern for a particular business application comprised of two transactions. For each transaction, the report may indicate that the first transaction has a normal usage of CPU and the second transaction has a higher than normal level of CPU usage. After reporting baseline deviation data, the method of
A thread fetches instructions associated with the transaction at step 420. Next, the thread executes instructions for the transaction using resources at step 430. The resources may include CPU 142, memory 144, hard disk 146, network I/O bandwidth 148 and/or other resources or computer components. The instructions may be executed by instruction processing module 140 in communication with resources 141. For example, instructions from one of threads 151-153 may be processed in one or more of execution pipelines 155-157.
Creation of a thread object associated with a transaction is detected at step 510. In some embodiments, code inserted into a thread class may report when the class is instantiated and a thread object is created. A first time stamp is then retrieved at step 520. The first time stamp is the time at which the thread object is instantiated at step 510.
The end of transaction processing by the created thread is detected at step 530. When a thread object completes processing of a transaction, code inserted into the thread object determines that the transaction is complete. A second time stamp is then accessed at step 540. The second time stamp is associated with the end of the transaction determined at step 530.
Data including a thread object identifier, a URL associated with the thread object, and time stamp data is reported at step 550. The data may be reported by code inserted into the thread object by the thread class (in which monitoring code was inserted at step 510) to agent 208. The time stamp data may be reported as the start and end time of the thread, the difference between the start and end times, or both. In some embodiments, a thread such as a Java thread may handle a single transaction. Thus, the CPU usage directly attributed to the transaction and corresponding URL is the difference between the thread start time and thread end time. In some embodiments, agent 208 may aggregate the data and forward the aggregated data to enterprise manager 210. In some embodiments, additional data may also be reported, such as the thread class, and method used to create the thread object, as well as other data.
CPU usage for a transaction performed in response to a URL request or other request can be measured in other ways as well. For example, monitoring code may retrieve CPU usage data from system calls that provide CPU consumption statistics to a thread. In some embodiments, a thread object may send or otherwise initiate a system call requesting CPU consumption statistics. In some embodiments, the system calls are initiated from a source other than a thread. The operating system, CPU or some other source may then provide CPU data to the thread through one or more system calls. The CPU data may include the percentage of CPU used by the particular thread, the number of CPU cycles used by the thread, and/or other CPU consumption statistics. In some embodiments, the monitoring code traces system calls in which the CPU consumption statistics are provided to a thread. In some embodiments, CPU usage may be measured in cycles, duration, or percentage utilization. With respect to percentage utilization, the total CPU capacity may be 100%, wherein the goal of an administrator may be to keep the percentage utilization at some level or lower, such as 60% or 80% or lower. In some embodiments, the percent utilization of a transaction may be determined by retrieving the percent utilization of the corresponding thread that executes a transaction from the operating system.
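As an illustrative sketch only, per-thread CPU consumption statistics of the kind described above can be obtained on a standard JVM through the ThreadMXBean management interface; the TransactionCpuTimer class name and the sample workload are assumptions, not a required implementation.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Illustrative sketch: measure the CPU time the current thread spends
    // processing one transaction, using the JVM's thread management interface.
    public class TransactionCpuTimer {
        private static final ThreadMXBean THREADS = ManagementFactory.getThreadMXBean();

        public static long timeTransaction(Runnable transaction) {
            if (!THREADS.isThreadCpuTimeSupported()) {
                transaction.run();
                return -1L;  // CPU timing not available on this JVM
            }
            if (!THREADS.isThreadCpuTimeEnabled()) {
                THREADS.setThreadCpuTimeEnabled(true);
            }
            long startCpuNanos = THREADS.getCurrentThreadCpuTime();  // CPU time consumed so far
            transaction.run();                                       // perform the transaction
            long endCpuNanos = THREADS.getCurrentThreadCpuTime();
            return endCpuNanos - startCpuNanos;                      // CPU nanoseconds attributed to it
        }

        public static void main(String[] args) {
            long cpuNanos = timeTransaction(() -> {
                // stand-in for real transaction work
                double x = 0;
                for (int i = 0; i < 1_000_000; i++) { x += Math.sqrt(i); }
            });
            System.out.println("Transaction CPU time (ns): " + cpuNanos);
        }
    }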
After reporting CPU usage directly attributed to the transaction, non-transaction specific CPU time data may be reported at step 560. The portion of the non-transaction specific CPU time includes CPU overhead attributed to the transaction that is not requested by the thread object. A level of “overhead” may be determined for each resource for which usage or bandwidth is attributed to one or more transactions. For example, CPU overhead may include performing garbage collection while processing a number of transactions, managing thread and connection pools, time spent in the JVM doing class loading and method compilation and de-compilation, and other tasks. The non-transaction specific CPU time may be apportioned to one or more transactions by the percent load of time required by each individual transaction. Determining and reporting a portion of non-transaction specific CPU time attributed to a particular transaction is discussed in more detail below with respect to
Next, the total CPU time for the JVM process used to process the transactions is determined at step 620. The total JVM process CPU time may be the time for the entire JVM process or for a selected period discussed with respect to step 610. The CPU usage during the JVM process may include CPU time used to perform the transactions and other tasks, such as garbage collection. When the CPU usage is a length of a JVM process, the time of the JVM process CPU time may be determined by retrieving the data from operating system 149, software associated with CPU 142 or from some other source. When the CPU usage is the number of computing cycles associated with a JVM process, the number of cycles may be retrieved from operating system 149 or some other module on server 120.
The difference between the total transaction CPU time and the JVM process CPU time is determined at step 630. To determine the difference, the total transaction CPU time is subtracted from the JVM process CPU time. Next, the difference in the CPU time determined at step 630 is distributed among transactions that occurred during the JVM process based on the load percentage of each transaction at step 640. The transaction load percentage is determined by dividing the transaction CPU time by the total transaction CPU time. The transaction's portion of the extra CPU time is then determined as the load percentage for the transaction multiplied by the extra CPU time. Step 640 is discussed in more detail with respect to the method of
First, the total transaction CPU time is accessed at step 710. The CPU load percentage for a transaction is determined by dividing the CPU time for the individual transaction by the total transaction CPU time at step 720. For example, consider transactions T1, T2, and T3. T1 uses five milliseconds of a CPU resource, T2 uses six milliseconds of a CPU resource and T3 uses four milliseconds of a CPU resource. The total transaction CPU time is determined as the sum of the CPU time for the three transactions, or fifteen milliseconds (5 ms+6 ms+4 ms=15 ms). The load percentage for transaction T1 is determined as five milliseconds divided by fifteen milliseconds, or 33% (5/15=⅓).
The additional CPU time to attribute to the transaction is determined by the product of the CPU load percentage for the transaction and the difference between the total transaction CPU time and the total JVM process time at step 730. Assume the JVM process has a total CPU time of eighteen milliseconds. Accordingly, the non-transaction specific CPU time is determined as fifteen milliseconds subtracted from the total JVM process CPU time of eighteen milliseconds, resulting in three milliseconds (18−15=3). The portion of CPU time apportioned to transaction T1 is 33% times three milliseconds, or one millisecond (⅓×3=1). The additional non-transaction specific CPU time is then reported at step 740. The non-transaction specific CPU time may be reported separately or together with the transaction specific CPU time reported in step 550 of the method of
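The apportionment arithmetic described above may be sketched as follows; the transaction names and millisecond values mirror the T1/T2/T3 example and are illustrative only.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of the apportionment described above: distribute non-transaction
    // specific (overhead) CPU time across transactions in proportion to each
    // transaction's share of the total transaction CPU time.
    public class OverheadApportionment {
        public static Map<String, Double> apportion(Map<String, Double> transactionCpuMs,
                                                    double jvmProcessCpuMs) {
            double totalTransactionCpuMs =
                    transactionCpuMs.values().stream().mapToDouble(Double::doubleValue).sum();
            double overheadMs = jvmProcessCpuMs - totalTransactionCpuMs;  // e.g. 18 - 15 = 3 ms

            Map<String, Double> overheadShare = new LinkedHashMap<>();
            for (Map.Entry<String, Double> entry : transactionCpuMs.entrySet()) {
                double loadPercentage = entry.getValue() / totalTransactionCpuMs;  // e.g. 5/15 = 1/3
                overheadShare.put(entry.getKey(), loadPercentage * overheadMs);     // e.g. 1/3 * 3 = 1 ms
            }
            return overheadShare;
        }

        public static void main(String[] args) {
            Map<String, Double> cpu = new LinkedHashMap<>();
            cpu.put("T1", 5.0);
            cpu.put("T2", 6.0);
            cpu.put("T3", 4.0);
            // Prints roughly {T1=1.0, T2=1.2, T3=0.8} for an 18 ms JVM process.
            System.out.println(apportion(cpu, 18.0));
        }
    }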
First, creation of a thread object associated with a transaction is detected at step 810. Detecting the creation of the thread object at step 810 is similar to detecting a thread object created via instantiation of a thread class discussed with respect to step 510 of
The size of the allocated objects generated while performing the transaction is determined at step 830. The size of the allocated objects may be determined manually or automatically. Determining a size of an allocated object manually involves determining the size of primitives and other object content within each generated object. An object may have a basic framework and one or more variables that require memory space when stored to memory 144. For example, the general object framework may require thirty-two bytes. Different types of primitives may require different sizes of memory, for example, four bytes for an integer type, eight bytes for a long type, two bytes per character in a string type, and other content may require other bytes of data. The data sizes given are for example only; other variables and variable sizes may be used. When an object instantiates another object, the instantiated object size is also considered part of the object measured as part of a chain of allocation at step 830.
The size of an allocated object may also be measured automatically. In some embodiments, a method may be invoked to automatically determine how much memory space an allocated object requires. For example, in Java 1.5 Platform, a method “getObjectSize” may be invoked with a parameter “objectToSize” specifying the allocated object. The method then returns the size of the allocated object. The method may be part of the “Instrumentation” interface of Java 1.5 Platform.
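For purposes of illustration, the following sketch shows how the Instrumentation interface's getObjectSize method could be exposed by an agent; the ObjectSizer class name and the agent wiring are assumptions rather than a required implementation.

    import java.lang.instrument.Instrumentation;

    // Sketch of automatic object sizing via the java.lang.instrument API.
    // The Instrumentation instance is only available to code loaded as a
    // -javaagent (premain) or attached agent (agentmain).
    public class ObjectSizer {
        private static volatile Instrumentation instrumentation;

        // Called by the JVM when the agent jar is loaded with -javaagent.
        public static void premain(String agentArgs, Instrumentation inst) {
            instrumentation = inst;
        }

        // Returns an implementation-specific approximation of the storage
        // consumed by the given object, as reported by the JVM.
        public static long sizeOf(Object objectToSize) {
            if (instrumentation == null) {
                throw new IllegalStateException("Agent not loaded; no Instrumentation available");
            }
            return instrumentation.getObjectSize(objectToSize);
        }
    }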
After determining the size of allocated objects generated by a thread, the thread class, method, thread object identifier, URL associated with the thread and size of all the allocated object content for the thread is reported at step 840. In some embodiments, monitoring code within the application may retrieve the URL from the thread object associated with the transaction and thread processing the transaction. The data may be reported to agent 208 which may then report performance data to enterprise manager 210. In some embodiments, the performance data may be aggregated or otherwise processed by agent 208 before being provided to enterprise manager 210.
Next, the portion of the non-transaction specific memory usage to attribute to a transaction is reported at step 850. Some objects may be created and stored in memory 144 while one or more transactions are processed but may not be associated with a particular transaction. The total size of these objects is apportioned among the transactions which are processed during the time the object was created. The time period may be determined by a user, associated with a computer process, such as a JVM process, or some other time period.
To calculate the non-transaction specific memory usage for a transaction, the difference between the total transaction memory space and the total JVM process memory space used, for a period of time shorter than the JVM process if applicable, is determined. The total JVM process memory space may be retrieved from operating system 149, software or some other system associated with memory 144, or some other source. The difference in memory space is then distributed among the transactions based on the memory size load for each transaction. The percentage load of total memory can be determined similar to that for CPU percentage load with respect to
First, the instantiation of a hard disk I/O class is detected at step 910. The hard disk I/O class is instantiated to create a hard disk I/O object. Creation of the object may be detected by monitoring code placed in the hard disk I/O class. Several classes and/or methods may be used to read and write to a hard disk. In some embodiments, monitoring code can be placed in several classes and methods associated with performing a write or read operation with respect to hard disk 146. The monitoring code may then add code to the hard disk I/O object when the class is instantiated. For example, one hard disk I/O class which may be modified to include monitoring code is "java.io.FileInputStream."
Hard disk I/O objects are then monitored to determine the amount of hard disk bandwidth used by the object at step 920. As discussed above, the hard disk bandwidth used is the amount of data read from or written to hard disk 146 per second. The hard disk I/O object is monitored by code placed in the object by monitoring code inserted into the object class. In some embodiments, a thread object may also be monitored to detect creation of the hard disk I/O object and the size of the data written to and from hard disk 146 from the thread.
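As an illustration of how the amount of data read from disk might be tallied, the following sketch wraps a file input stream and counts bytes; in the embodiments described above, equivalent counting code would instead be inserted into the hard disk I/O classes by the probe builder. The CountingFileInputStream class name is hypothetical.

    import java.io.FileInputStream;
    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch of a byte-counting wrapper around a file input stream, in the
    // spirit of the hard disk I/O monitoring described above.
    public class CountingFileInputStream extends FilterInputStream {
        private final AtomicLong bytesRead;

        public CountingFileInputStream(FileInputStream in, AtomicLong bytesRead) {
            super(in);
            this.bytesRead = bytesRead;
        }

        @Override
        public int read() throws IOException {
            int b = super.read();
            if (b != -1) {
                bytesRead.incrementAndGet();  // one byte read from disk
            }
            return b;
        }

        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            int n = super.read(buf, off, len);
            if (n > 0) {
                bytesRead.addAndGet(n);       // n bytes read from disk
            }
            return n;
        }
    }

Dividing the accumulated byte count by the elapsed time yields the bytes-per-second bandwidth figure discussed above.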
Next, the hard disk I/O class, method used to write the data, thread object identifier, URL and hard disk bandwidth used by the hard disk I/O object are reported at step 930. Optionally, other data may be reported as well, such as the data length of the read and write operations performed. The data may be reported to agent 208. Agent 208 receives the data, optionally aggregates the data, and provides performance data to enterprise manager 210. The data may be reported to enterprise manager 210 as it becomes available or periodically, such as every fifteen seconds or some other time period.
A portion of the non-transaction specific hard disk bandwidth attributed to the current transaction is determined and reported at step 940. Determining the non-transaction specific hard disk bandwidth utilized involves determining the hard disk bandwidth required for a process which includes transaction execution but not associated with the particular transaction itself. This hard disk bandwidth can be apportioned to one or more transactions by a percentage of the hard disk bandwidth load per transaction or in some other manner. Determining and reporting non-transaction specific hard disk bandwidth to attribute to a transaction involves calculating the total hard disk bandwidth usage and the JVM hard disk usage for a period of time. A portion of the difference in the two usages for the period of time is then allocated, for example by load percentage, to the transaction. For example, if a first transaction had a hard disk bandwidth usage of 8000 bytes per second and the total hard disk bandwidth usage for transactions of the JVM process was 40000 bytes per second, the first transaction would be attributed with twenty percent (8000/40000=20 percent) of the difference of hard disk bandwidth usage.
First, the instantiation of a network I/O class is detected at step 1010. The network I/O class is instantiated to create a network I/O object. Creation of the object may be detected by monitoring code placed in the network I/O class. Several classes and/or methods may be used to send and receive data over network 115. In some embodiments, monitoring code can be placed in several classes and methods associated with performing a network data send or receive operation. The monitoring code may then add code to the network I/O object when the class is instantiated.
Network I/O objects are then monitored to determine the network bandwidth used by the network I/O object at step 1020. The network I/O object is monitored by code placed in the object by monitoring code inserted into the object class. In some embodiments, a thread object may also be monitored to detect creation of the network I/O object and the amount of data sent to or received from a device over network 115 by the thread. As discussed above, the network bandwidth required for transaction execution is the amount of data received and sent over a network as a direct result of executing the transaction.
Next, the network I/O class, method used to send or receive the data, a thread object identifier, URL, the network bandwidth and optionally other data are reported at step 1030. The network bandwidth may be reported as length of data sent or received through a data stream or other network data. Agent 208 may receive the reported data, optionally aggregate the data, and provide performance data to enterprise manager 210. The data may be reported to enterprise manager 210 as it becomes available or periodically, such as every fifteen seconds.
The non-transaction specific network bandwidth attributed to the current transaction is determined and reported at step 1040. Determining the non-transaction specific network bandwidth utilized by a transaction is similar to the process for determining the non-transaction specific hard disk bandwidth used. In particular, determining and reporting non-transaction specific network bandwidth for a transaction involves calculating the total network bandwidth usage and the JVM network usage for a period of time and allocating a portion of the difference in the two usages to the transaction.
Performance data is received at step 1110. The performance data may be received by Enterprise manager 210 from agent 208 of application monitoring system 117. Input may then be received which identifies the transaction or business application for which to report baseline analysis at step 1120. The baseline analysis may indicate the resource usage with respect to a baseline for the identified transaction or business application. As discussed above, a business application is a set of transactions, and therefore can be defined as part of step 1120.
The identified data is aggregated into data sets by transaction and resource at step 1130. In some embodiments, there is one data set per transaction-resource pair. When data is reported for a defined business application, a data set may be reported for a business application-resource pair. In this case, the data for the transactions that comprise the business application are used to determine resource usage for the business application. For example, if there is aggregated data for four different transactions which use four different resources, there will be sixteen different data sets. Each data set may comprise a time series of data, such as a series of CPU cycle values that are determined over a set period of time.
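A minimal sketch of this aggregation, assuming an in-memory map keyed by transaction-resource pair, is shown below; the key format, class name and sample values are illustrative only.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of aggregating reported measurements into one time series per
    // transaction-resource pair, as described above.
    public class DataSetAggregator {
        // Maps "transaction|resource" to the time series of reported usage values.
        private final Map<String, List<Double>> dataSets = new HashMap<>();

        public void record(String transaction, String resource, double usage) {
            dataSets.computeIfAbsent(transaction + "|" + resource, k -> new ArrayList<>())
                    .add(usage);
        }

        public List<Double> dataSetFor(String transaction, String resource) {
            return dataSets.getOrDefault(transaction + "|" + resource, Collections.emptyList());
        }

        public static void main(String[] args) {
            DataSetAggregator aggregator = new DataSetAggregator();
            aggregator.record("Checkout", "CPU", 5.0);     // illustrative CPU ms per transaction
            aggregator.record("Checkout", "CPU", 6.2);
            aggregator.record("Checkout", "Memory", 3400); // illustrative bytes
            System.out.println(aggregator.dataSetFor("Checkout", "CPU"));  // [5.0, 6.2]
        }
    }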
A first data set is selected at step 1140. The selected data set may be one of several data sets corresponding to a transaction for a particular time period. Baseline deviation information is calculated and provided to a user for the selected data set at step 1150. In some embodiments, step 1150 includes predicting a value (i.e., a baseline) for each data point in the data set, determining a deviation of the actual data point value from the predicted data point value, providing the deviation information for the data point to a user and repeating the process for the remaining data points in the data set. Calculating and providing baseline deviation information to a user for a data set is discussed in more detail below with respect to
A determination is made as to whether more data sets exist for the identified transaction at step 1160. As discussed above, there may be several data sets for the transaction or business application identified at step 1120. If more data sets exist to be processed, the next data set is selected at step 1170 and the method of
First, a first data point is selected in the selected data set at step 1210. A baseline for the selected data point is then determined by predicting the value of the data point at step 1220. In this case, the data point value is predicted based on previous data values in the current data set or a previous data set. The baseline can be determined using one or more functions applied to previous or current performance data. Determining a baseline for a selected data point by predicting a data point value is discussed in more detail below with respect to the method of
The deviation of the current data point from the determined baseline is determined at step 1230. Determining the deviation includes comparing an expected baseline value for the data point to the actual value of the data point and characterizing the difference. For example, the difference may be identified within a normal range of deviation or outside the normal range of deviation. Determining deviation from a baseline value for a selected data point is discussed in more detail below with respect to
Next, a determination is made as to whether additional data points exist in the data set to be processed at step 1240. If no more data points exist in the current data set, the method of
A first function used to generate a baseline is loaded at step 1320. The function may be one of a set of several functions used to predict the data point value. The set of functions can include different types of functions, the same function type tuned with different constants, or a combination of these. In some embodiments, any of several functions which may be fit to a time series of data may be used to generate a baseline. In some embodiments, data set data points may be processed using two or more functions to determine a baseline. In some embodiments, once the functions are selected for the first data set, they may be used in subsequent data sets as well. In some embodiments, a different set of functions may be used for different data sets, such as data sets associated with a different application or a different resource.
Several types of functions providing statistical models of an application performance data time series may be used with the present technology. Examples of statistical models suitable for use may include simple moving average, weighted moving average, single exponential smoothing, double exponential smoothing, triple exponential smoothing, exponentially weighted moving average, Holt's linear exponential smoothing, Holt-Winters forecasting technique, and others. In some embodiments, selecting one or more functions may include selecting the functions from a group of functions. For example, the five (or some other number) functions which best fit the first data set may be selected from a group of ten functions. Selecting functions, fitting functions to data, and predicting a data point are discussed in U.S. Pat. No. 7,310,590, filed on Dec. 15, 2006, entitled “Time Series Anomaly Detection using Multiple Statistical Models,” having inventor Jyoti Bansal, which is hereby incorporated by reference.
A predicted value for the selected data point is computed for the selected function at step 1330. Computing the baseline value may be done using any of several functions as discussed above. For example, fitting functions to a data set may include determining function constants. The constants may be determined from the first data set and enable each function to be fit to the first data set.
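As one illustrative example of computing a predicted value, the following sketch applies single exponential smoothing, one of the statistical models listed above; the smoothing constant and sample values are assumptions for illustration, and fitting would tune the constant to the first data set.

    // Sketch of single exponential smoothing used to predict the next value
    // of a resource-usage time series.
    public class SingleExponentialSmoothing {
        private final double alpha;   // 0 < alpha <= 1; weight given to the newest observation
        private Double level = null;  // current smoothed level (the prediction for the next point)

        public SingleExponentialSmoothing(double alpha) {
            this.alpha = alpha;
        }

        // Returns the prediction for the next data point, then folds the actual
        // observation into the smoothed level.
        public double predictThenObserve(double actual) {
            double predicted = (level == null) ? actual : level;
            level = (level == null) ? actual : alpha * actual + (1 - alpha) * level;
            return predicted;
        }

        public static void main(String[] args) {
            SingleExponentialSmoothing model = new SingleExponentialSmoothing(0.3);
            double[] cpuMs = {5.0, 5.2, 4.9, 5.1, 7.8};  // illustrative CPU times
            for (double actual : cpuMs) {
                double predicted = model.predictThenObserve(actual);
                System.out.printf("predicted=%.2f actual=%.2f%n", predicted, actual);
            }
        }
    }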
After computing a baseline value for the data point using the current function, a determination is made as to whether more functions exist for predicting a baseline value at step 1340. If more functions exist for determining a baseline value, the next function is loaded at step 1360 and the method of
First, a predicted value for a function is accessed for the next data point at step 1410. The predicted value is the baseline value determined at step 1330 in the method of
If the deviation is not within the low range of deviation, a determination is made as to whether the difference between the actual data point value and the predicted data point value is within a medium range at step 1450. A medium range may be configured as between 10% and 20% deviation of the predicted value, between the standard deviation and twice the standard deviation, or some other range of values. If the deviation is within a medium range, the deviation for the data point is set to medium at step 1460 and the method of
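A minimal sketch of this range check, using the example thresholds of ten and twenty percent of the predicted value, is shown below; the class name and threshold choices are illustrative only.

    // Sketch of the range check described above: classify how far an actual
    // data point deviates from its predicted value. Assumes a non-zero
    // predicted value.
    public class DeviationClassifier {
        public enum Deviation { LOW, MEDIUM, HIGH }

        public static Deviation classify(double actual, double predicted) {
            double relativeDeviation = Math.abs(actual - predicted) / Math.abs(predicted);
            if (relativeDeviation <= 0.10) {
                return Deviation.LOW;     // within the low range around the prediction
            } else if (relativeDeviation <= 0.20) {
                return Deviation.MEDIUM;  // between 10% and 20% of the predicted value
            } else {
                return Deviation.HIGH;    // beyond the medium range
            }
        }

        public static void main(String[] args) {
            System.out.println(classify(5.3, 5.0));  // LOW (6% off)
            System.out.println(classify(5.8, 5.0));  // MEDIUM (16% off)
            System.out.println(classify(7.0, 5.0));  // HIGH (40% off)
        }
    }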
A determination is made as to whether the deviation between the actual data point value and the predicted data point value is within a threshold at step 1540. In one embodiment, the threshold may be the limit of a low deviation range, such as 10% of the predicted value, a standard deviation, or some other value. If the deviation is not within the threshold, the count is incremented at step 1550. After incrementing the count, the process continues to step 1560. If the deviation is within the threshold, the method of
A determination is made as to whether more functions are used to predict the current data point at step 1560. If more functions exist, a data point value predicted by the next function is accessed at step 1590. The method of
A determination is made as to whether the difference between the actual data point value and the predicted data point value are within a low deviation range at step 1640. The low deviation range may be configured as ten percent of the predicted value, a standard deviation from the predicted value, or in some other manner. If the deviation is within a low deviation range at step 1640, a low count is incremented at step 1650 and the method of
A determination is then made as to whether more functions were used to predict data points for the actual data point at step 1690. If more functions were used, a predicted value generated by the next function is accessed at step 1696. The method of
If no more functions were used to predict values for the current data point, the counts are processed to determine the overall deviation at step 1692. In some embodiments, the count (of the low, medium and high count) which has the largest value is selected as the corresponding range associated with the data point. Thus, if the low count has a value of one, the medium count has a value of three, and the high count has a value of one, the current data point will be associated with a medium deviation range. The method of
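The count processing described above may be sketched as follows: each fitted function's prediction votes for the deviation range containing the actual data point, and the range with the largest count is selected. The class names, thresholds and sample values are illustrative only.

    import java.util.EnumMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of combining per-function results: each prediction votes for a
    // range, and the range with the most votes becomes the overall deviation.
    public class OverallDeviation {
        public enum Range { LOW, MEDIUM, HIGH }

        // Same illustrative 10%/20% thresholds used earlier; predicted must be non-zero.
        static Range classify(double actual, double predicted) {
            double rel = Math.abs(actual - predicted) / Math.abs(predicted);
            return rel <= 0.10 ? Range.LOW : (rel <= 0.20 ? Range.MEDIUM : Range.HIGH);
        }

        public static Range overall(double actual, List<Double> predictionsFromEachFunction) {
            Map<Range, Integer> counts = new EnumMap<>(Range.class);
            for (double predicted : predictionsFromEachFunction) {
                counts.merge(classify(actual, predicted), 1, Integer::sum);
            }
            // Pick the range with the largest count (e.g. low=1, medium=3, high=1 -> MEDIUM).
            return counts.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElse(Range.LOW);
        }

        public static void main(String[] args) {
            // Five functions predicted these values for the same actual data point.
            System.out.println(overall(5.8, List.of(5.0, 5.7, 5.9, 6.0, 4.2)));
        }
    }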
In some embodiments, a count may be incremented by a value greater than one in the embodiments of
A time series may experience an increase or decrease in values over time that is not due to application or resource health. In some embodiments, different functions can be used to analyze a time series for different periods of time. For example, an application which generates a time series may experience more activity (for example, receive more traffic) during business hours, or more activity on weekdays than on weekends. The change from a recognized busy period to a less busy period (e.g., Friday to Saturday, or 5 p.m. to 6 p.m.) may cause a change in the time series data which could be mistaken for an anomaly. In this case, the change would be due to a change in application activity level, not due to an anomaly caused by degraded application health or performance. Thus, the anomaly detection system may be configured to utilize different functions during different activity periods or to adjust the functions to better approximate the changed time series as the application activity changes. This “normalization” of the system may be used to reduce false alarms that may appear as a deviation of concern but are actually just a reflection of expected increased activity or load on an application or the particular resource.
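By way of illustration, the following sketch selects a differently tuned smoothing constant depending on the activity period; the hour boundaries and constants are assumptions for illustration and are not specified by the description above.

    import java.time.DayOfWeek;
    import java.time.LocalDateTime;

    // Sketch of the "normalization" idea: choose a differently tuned model
    // depending on the expected activity period, so routine busy/quiet
    // transitions are not flagged as anomalies.
    public class PeriodAwareModelSelector {
        public static double smoothingAlphaFor(LocalDateTime when) {
            boolean weekend = when.getDayOfWeek() == DayOfWeek.SATURDAY
                    || when.getDayOfWeek() == DayOfWeek.SUNDAY;
            boolean businessHours = !weekend && when.getHour() >= 9 && when.getHour() < 17;
            // A more reactive model during busy business hours, a smoother one off-hours.
            return businessHours ? 0.5 : 0.2;
        }

        public static void main(String[] args) {
            System.out.println(smoothingAlphaFor(LocalDateTime.of(2024, 3, 4, 10, 0))); // weekday 10:00 -> 0.5
            System.out.println(smoothingAlphaFor(LocalDateTime.of(2024, 3, 9, 10, 0))); // Saturday -> 0.2
        }
    }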
Input selecting a transaction or business application to display in the interface is received at step 1720. The input may select a transaction from a list of transactions, or allow a user to define a business application as a set of transactions, or otherwise determine a transaction or business application. Information to display for the selected transaction or business application is accessed at step 1730. The accessed information may include deviation information, actual usage value and baseline usage values for the selected transaction (or business application). The information may be accessed from Enterprise Manager 220, database 222 or some other source.
The identification of the transaction or business application and the resources used is provided in the deviation interface at step 1740. The identifications may include the name, a nickname, or some other identifier for the transaction and resources. In the interface of
The deviation information, actual usage and baseline usage is displayed in the interface for each identified resource-transaction or resource-business application pair at step 1750. In some embodiments, only the deviation information is initially displayed, and the corresponding actual usage and baseline usage is provided upon a request received from a user. In the interface of
It should be noted that the deviation levels, icons, format, actual usage values and baseline usage values illustrated in
A determination is made as to whether deviation information should be displayed for a new transaction or business application at step 1760. The determination may be triggered by receiving a request from a user to display information from a new transaction or detecting some other event. If a new transaction or business application should be displayed, the method returns to step 1720 where input selecting the transaction or business application to display is received. If a new transaction or business application is not to be displayed, a determination is made as to whether new usage and deviation information is available at step 1770. New usage and/or deviation information may be available as an application is monitored and new performance data becomes available. The new information may be pushed to a workstation displaying the interface or pulled by the workstation from enterprise manager 220 or database 222. If new usage or deviation information is available, the method of
A determination is made as to whether new usage or deviation information differs from the currently displayed information at step 1780. If either the new usage or deviation information differs from the currently displayed information, the interface is updated with the new deviation, usage, and/or baseline information for changed resource-transaction and/or resource-business application pairs at step 1790. Updating the interface may include accessing the updated information and populating the interface with the updated information. The method of
Graphical data window 1830 may provide information for a particular transaction or business application in graphical form. The data may indicate relationships between components that perform a transaction, as well as the relationships of the transactions themselves, if any. Additional windows and other information may also be provided in user interface 1810 to provide more information for the resources used, usage and baseline data, the components, transactions and business applications, or other data.
The foregoing detailed description of the technology herein has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology be defined by the claims appended hereto.
Claims
1. A computer implemented method for monitoring a transaction, comprising:
- performing a plurality of transactions by an application using one or more resources;
- determining a first usage of each of the one or more resources by a first transaction of the plurality of transactions;
- determining a difference between the first usage and a predicted usage for each of the one or more resources with respect to the first transaction; and
- reporting health information for the one or more resources with respect to the first transaction, the health information derived from the difference between the first usage and the predicted usage.
2. The computer implemented method of claim 1, wherein said step of determining a first usage includes:
- querying a Java virtual machine which performs the first transaction.
3. The computer implemented method of claim 1, wherein said step of determining a first usage includes:
- querying an operating system that communicates with the resources.
4. The computer implemented method of claim 1, wherein said step of determining a first usage includes:
- determining a transaction specific use and a non-transaction specific use of one or more of the resources.
5. The computer implemented method of claim 1, wherein said step of determining a first usage includes:
- determining a resource usage associated with a thread.
6. The computer implemented method of claim 1, wherein said step of determining the difference includes:
- identifying a time window; and
- calculating the predicted usage based on a set of performance data associated with the resource, the first transaction and the time window.
7. The computer implemented method of claim 6, wherein said step of calculating includes:
- fitting one or more functions to the set of performance data; and
- calculating a predicted value using the fitted function.
8. The computer implemented method of claim 1, wherein the resources include a central processing unit, network bandwidth, hard disk bandwidth, and memory.
9. The computer implemented method of claim 1, further including:
- monitoring the application while the application is performing the transactions; and
- generating performance data in response to monitoring the application, the first usage determined from the performance data.
10. The computer implemented method of claim 1, wherein said step of reporting includes:
- reporting an alert if usage of one or more of the resources is determined to be abnormal.
11. One or more processor readable storage devices having processor readable code embodied on said processor readable storage devices, said processor readable code for programming one or more processors to perform a method comprising:
- accessing performance data generated from monitoring an application and associated with one or more resources used to process requests by the application;
- determining a level of use for each of the one or more resources used to process the requests;
- determining baseline deviation information for the resource use level with respect to a predicted resource usage while processing one or more requests, the resource use level determined from the performance data; and
- reporting the baseline deviation information derived from the difference between the actual resource usage and the predicted resource usage.
12. The one or more processor readable storage devices of claim 11, wherein said step of determining a level of use includes:
- determining a level of use for each of the one or more resources while processing a business application, the business application defined as a set of transactions.
13. The one or more processor readable storage devices of claim 11, wherein said step of determining a level of use includes:
- determining a resource level of use associated with a thread.
14. The one or more processor readable storage devices of claim 11, wherein said step of determining a level of use includes:
- determining a transaction specific resource usage and a non-transaction specific resource usage.
15. The one or more processor readable storage devices of claim 11, wherein said step of determining baseline deviation information includes:
- identifying a time window; and
- computing a predicted value for a first data point from data within the time window using a selected function.
16. The one or more processor readable storage devices of claim 15, wherein the time window is a dynamic time window.
17. The one or more processor readable storage devices of claim 11, wherein the baseline deviation information indicates one of two or more levels of health for two or more resources used to process the one or more requests.
18. A computer implemented method for monitoring an application, comprising:
- performing transactions by an application on an application server;
- determining the usage of one or more resources of the application server while performing a first transaction, the transactions including the first transaction;
- accessing a predicted value associated with usage of each resource while processing the first transaction;
- determining a difference between the usage and predicted values for each resource; and
- reporting deviation information for each resource if the difference is greater than a threshold.
19. The computer implemented method of claim 18, further comprising:
- accessing two or more baseline functions;
- applying each of the two or more baseline functions to previous resource data usage values to generate predicted resource data for each of the one or more resources; and
- deriving the predicted value associated with usage of each resource from the predicted resource data for each resource.
20. The computer implemented method of claim 18, further comprising:
- accessing performance data generated from monitoring the application, usage of the one or more resources derived from the performance data.
21. The computer implemented method of claim 20, wherein said step of accessing performance data includes:
- inserting monitoring code into the application by bytecode instrumentation; and
- generating the performance data using the monitoring code.
22. The computer implemented method of claim 21, wherein said step of generating the performance data includes:
- detecting that a class object is created by resource class instantiation;
- reporting object data about the class object; and
- detecting that the class object is closed.
23. An apparatus for processing data, comprising:
- a communication interface;
- a storage device;
- a hard disk; and
- one or more processors in communication with said storage device, hard disk and said communication interface, said one or more processors perform transactions, determine a resource usage level, and report deviation information, the usage level determined for each of the storage device, hard disk and one or more processors for each of the transactions, the deviation information indicating whether the resource usage level differs from an expected resource usage level by more than a threshold.
24. The apparatus of claim 23 further comprising:
- a dispatch unit that dispatches instructions from one or more threads to one or more execution pipelines, at least one usage level determined for at least one thread.
25. The apparatus of claim 23, further comprising:
- a network communication device in communication with said one or more processors, said one or more processors determine a usage level and deviation information for said network communication device for each transaction.
Type: Application
Filed: Feb 1, 2008
Publication Date: Aug 6, 2009
Patent Grant number: 8261278
Inventor: Zahur Peracha (Union City, CA)
Application Number: 12/024,783
International Classification: G06F 9/46 (20060101); G06F 17/30 (20060101);