Transactions matching for multi-tier architectures

A system and method for mapping transactions on both sides of a front-end server. One or more types of front-end transactions and one or more types of back-end transactions are identified. In addition, the times of occurrence of these transactions are identified. Possible associations between the identified front-end transaction types and the back-end transaction types are then built based on a time constraint. One or more of these associations are then eliminated such that each back-end transaction type is caused by one front-end transaction type and the number of associations between a given front-end transaction type and a given back-end transaction type remains constant.

Description
FIELD OF THE INVENTION

The present invention relates to the field of application performance analysis and more particularly to performance analysis of networked multi-tier applications.

BACKGROUND OF THE INVENTION

Network problems manifest themselves to end users as degradation of application performance. To detect such network problems, Sniffers™ and other devices are used to trace individual packets passing through the network to identify bottlenecks, routing problems, packet loss, etc. However, this approach for detecting network problems and analyzing application performance is limited to direct interactions between a client and a server. For multi-tier applications, such devices cannot be utilized to monitor and analyze interactions between different tiers.

As an example, a client sends a request to an application server. The application server processes the client request and, in response, sends one or more requests to a database server in order to prepare a response to the client. In this scenario, communications between the client and the application server are separate from the communications between the application server and the database server; that is, they do not share any network packets. Thus, if the application server is serving many concurrent client requests, it is difficult to ascertain which of the requests made to the database server were caused by a particular client request. However, determining which server request was caused by which client request is an important factor in analyzing application performance.

Some techniques used to analyze the performance of a multi-tier application include operating such an application in a controlled testing environment, where one client request is being processed by the application at a time. But such techniques may not be able to accurately ascertain that the downstream activity is actually caused by the observed client requests. Moreover, many problems associated with multi-tier applications do not appear except in a high-volume production environment, where many client requests are being processed at a time. Thus, processing one client request at a time may not be a feasible solution for analyzing a multi-tier application's performance. Other techniques establish the relationships between the transactions observed on both sides of an application based on knowledge of the application's complete model or internal structure. However, the feasibility of such techniques is usually limited to simple applications.

What is needed is a technique for effectively analyzing the performance of a multi-tier application without knowledge of the application's internal structure.

SUMMARY OF THE INVENTION

The present invention is a system and method for mapping transactions on both sides of a front-end server. According to an embodiment of the invention, one or more types of front-end transactions and one or more types of back-end transactions are identified. In addition, the times of occurrence of these transactions are identified. Possible associations between the identified front-end transaction types and the back-end transaction types are then built based on a time constraint. For example, a back-end transaction type is associated with a front-end transaction type if the back-end transaction type starts after the front-end transaction type has started and ends before the front-end transaction type has ended. One or more of these associations are then eliminated such that each back-end transaction type is caused by one front-end transaction type and the number of associations between a given front-end transaction type and a given back-end transaction type remains constant. For example, if a back-end transaction type occurs during an instance of a front-end transaction type but does not occur during another instance of the front-end transaction type, then the association between the back-end transaction type and the front-end transaction type is eliminated.

In an alternative embodiment of the invention, a frequency of occurrence of each front-end transaction type and a frequency of occurrence of each back-end transaction type with respect to the front-end server are determined. One or more linear relationships are then established to express the frequency of each back-end transaction type as a linear combination of frequencies of one or more front-end transaction types. The established one or more linear relationships represent a mapping of front-end transaction types and back-end transaction types.

The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the invention subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an illustration of one example of a network environment in which the present invention can operate.

FIG. 2 is an exemplary diagram that may result from observing transactions on both sides of a front-end server.

FIG. 3 is an exemplary graph depicting a relationship between front-end requests and back-end requests.

FIG. 4 is a flowchart illustrating a process implemented by one embodiment of the present invention for mapping transactions on both sides of a front-end server without knowledge of the front-end server's internal structure.

FIG. 5 is a timing diagram illustrating exemplary sequences of front-end and back-end requests.

FIG. 6 is a timing diagram illustrating other exemplary sequences of front-end and back-end requests.

FIG. 7 is a flowchart illustrating a process implemented by another embodiment of the present invention for mapping transactions on both sides of a front-end server without knowledge of the front-end server's internal structure.

FIG. 8 is an exemplary graph showing transaction frequencies of given front-end transaction types and back-end transaction types.

DETAILED DESCRIPTION OF THE INVENTION

A preferred embodiment of the present invention is now described with reference to the figures, where like reference numbers indicate identical or functionally similar elements. Also in the figures, the leftmost digit of each reference number corresponds to the figure in which the reference number is first used.

Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.

Some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.

Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems.

The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.

In addition, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

FIG. 1 is an illustration of one example of a network environment 100 in which the present invention can operate. As shown, a client 102 sends a front-end request 103 to a front-end server 104 such as an application server. Responsive to the received front-end request 103, the front-end server 104 submits one or more back-end requests to a back-end server 106 such as a database server. The front-end server 104 further receives from the back-end server 106 one or more responses to the submitted back-end requests. Based on the responses from the back-end server 106, the front-end server 104 prepares a response 107 to the front-end request 103 and provides the response 107 to the client 102.

In an embodiment of the invention, an agent residing on the front-end server 104 observes the front-end requests received from the client 102 and the back-end requests submitted to the back-end server 106. The agent is adapted to identify the type and time range of each transaction on either side of the front-end server 104. A transaction is a request-response pair. Thus, in the example illustrated in FIG. 1, the front-end request 103 and the response 107 constitute a single transaction, and the time range of the transaction is measured between the time the front-end server 104 receives the request 103 from the client 102 and the time the front-end server 104 sends the response 107 to the client 102.
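By way of a non-limiting illustration, such an observed transaction can be represented as a typed time interval. The following Python sketch shows one assumed representation; the class and field names are illustrative choices and are not prescribed by the embodiments described herein.

```python
from dataclasses import dataclass

# Illustrative only: one way an observing agent might record a transaction
# (a request-response pair). Field names are assumptions, not part of the
# described method.
@dataclass
class Transaction:
    txn_type: str   # e.g. a normalized URL (front-end) or an SQL statement (back-end)
    start: float    # time the request was received/sent by the front-end server
    end: float      # time the matching response was sent/received
```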

A transaction type is determined using an input defined by an end user. In particular, a transaction type may be defined with respect to a specific protocol. For example, for the hypertext transfer protocol (HTTP), the transaction type may be the uniform resource locator (URL), a part of the URL that defines a specific command, a set of key-value pairs from the headers of an HTTP POST request, etc. In another example, the transaction type for database servers may be described by a structured query language (SQL) statement. In an embodiment of the invention, there are a finite number of front-end request types and a finite number of back-end request types.

The request or transaction type may be derived from the content of the request or transaction. As an example, the client 102 may submit a request corresponding to the URL “http://<>/pd.php?pid=15&user_id=19” to the front-end server 104. In this example, the part of the URL “http://<>/pd.php?pid=15” would define a single request type if the user specified that the user_id argument should be ignored. Thus, URLs having the form “http://<>/pd.php?pid=15&user_id= . . . ” constitute the same request type, despite their differences in user_id. That is, the “user_id” part of the URL sent by the client 102 is irrelevant for defining the request's type.
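The following Python sketch illustrates one possible way to derive a request type from a URL by dropping user-specified irrelevant arguments such as user_id. The host name, the function name, the set of ignored keys, and the choice to keep only the path and remaining query are assumptions made purely for this example.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Assumed, user-supplied set of query arguments to ignore when typing requests.
IGNORED_QUERY_KEYS = {"user_id"}

def request_type_from_url(url: str) -> str:
    # Keep the path plus the query arguments that matter; drop the ignored ones.
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_QUERY_KEYS]
    return parts.path + ("?" + urlencode(kept) if kept else "")

# Both URLs map to the same request type, "/pd.php?pid=15",
# regardless of the user_id value ("example.host" is a placeholder host).
print(request_type_from_url("http://example.host/pd.php?pid=15&user_id=19"))
print(request_type_from_url("http://example.host/pd.php?pid=15&user_id=42"))
```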

In an embodiment of the invention, the front-end server 104 is assumed to work as a deterministic black box; that is, each front-end request of a given type generates a fixed sequence of back-end requests. The same sequence of back-end requests (having the same number of requests and the same types across different occurrences) will be generated by each occurrence of the given front-end request type. For example, if an instance of front-end request type B generates an instance of back-end request type a and an instance of back-end request type b in that order, other instances of front-end request type B will also generate instances of back-end request types a and b in that order. Thus, the agent residing on the front-end server 104 may observe multiple front-end requests, possibly overlapping in time, and the resulting back-end requests. In addition, some of the back-end requests may not have been caused by any of the front-end requests. An embodiment of the invention determines which front-end request, if any, generates a given back-end request.

FIG. 2 is an exemplary diagram that may result from observing the transactions on both sides of the front-end server 104. The solid arrows on the right of the diagram represent back-end requests that may have resulted from the front-end request represented by the solid arrow on the left of the diagram, based on timing considerations. Specifically, the back-end requests represented by the solid arrows start after the start of, and end before the end of, the front-end request represented by the solid arrow. However, some of these back-end requests may instead have resulted from other front-end requests (represented by the dashed arrows) in the same time range. The relationship shown in FIG. 2 may be represented by the graph depicted in FIG. 3. In FIG. 3, the front-end request types are shown as numbers and the back-end request types are shown as letters. To determine which front-end request type generates a given back-end request type, a system of constraints can be formulated as follows: remove some minimal number of the edges in the graph so that (1) no more than one edge comes into each node representing a back-end request type (because each back-end request type is generated by one front-end request type) and (2) for each front-end request type, the number and types of generated back-end request types remain the same.

FIG. 4 illustrates a process for mapping transactions on both sides of the front-end server 104 without knowledge of the internal structure of the front-end server 104, according to an embodiment of the invention. At 402, both front-end transactions and back-end transactions, along with their individual timings, are observed. At 404, possible associations between front-end transaction types and back-end transaction types are built based on time constraints. At 406, a system of constraints is built such that (1) each back-end transaction is caused by one front-end transaction and (2) the number of associations between front-end transactions of a given type and back-end transactions of a given type remains the same. At 408, one or more associations between the front-end transactions and the back-end transactions are eliminated so as to satisfy the system of constraints. For example, the least possible number of associations between the front-end transactions and the back-end transactions is eliminated to satisfy the system of constraints.

An algorithm is developed to determine which front-end request type generates a given back-end request type based on the system of constraints. According to an embodiment of the invention, N front-end requests and K back-end requests are observed. The n-th front-end request is denoted CRn, and the k-th back-end request is denoted SRk. The type of front-end request CRn is denoted CTn, and the type of back-end request SRk is denoted STk. A matrix of Boolean variables xnk is introduced. The Boolean variable xnk has the value 1 if front-end request n may have caused back-end request k; otherwise, xnk has the value 0. The matrix of Boolean variables xnk is initialized according to the temporal sequence; that is, xnk is set to 1 if the k-th back-end request starts after the start of, and ends before the end of, the n-th front-end request.
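A minimal Python sketch of this initialization step is shown below, assuming each observed request is available as a (type, start, end) tuple; the representation and the function name are illustrative assumptions rather than part of the described embodiment.

```python
def init_x(front_end_requests, back_end_requests):
    """x[n][k] == 1 iff the k-th back-end request starts after the start of,
    and ends before the end of, the n-th front-end request.
    Each request is assumed to be a (type, start, end) tuple."""
    x = [[0] * len(back_end_requests) for _ in front_end_requests]
    for n, (_, fe_start, fe_end) in enumerate(front_end_requests):
        for k, (_, be_start, be_end) in enumerate(back_end_requests):
            if fe_start <= be_start and be_end <= fe_end:
                x[n][k] = 1
    return x
```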

Since each back-end request is attributed to no more than one front-end request, a set of inequalities on the values xnk is imposed; namely, for each back-end request k, Σn xnk ≤ 1.
In addition, a matrix of integer variables ynk is defined, where the first index n ranges from 1 to the number of front-end request types, and the second index k ranges from 1 to the number of back-end request types. The value ynk provides the number of back-end requests of type STk generated by each front-end request of type CTn.

The Y matrix is initialized as follows: for each front-end request of type CTn, count the number zk of back-end requests of each type STk that may have been caused by that front-end request. If the n-th row of the Y matrix has not been initialized, the values zk are stored in the n-th row of the Y matrix. Otherwise, each ynk is replaced with the minimum of ynk and zk. Thus, if a front-end request of type A generates, for example, two back-end requests of type a, then another front-end request of type A will generate two back-end requests of type a as well. Accordingly, if more than two candidate back-end requests of type a are observed in response to a front-end request of type A, all but two of them will eventually be eliminated as candidates.
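The following sketch illustrates this Y-matrix initialization under the same assumed (type, start, end) request tuples and the X matrix from the previous sketch. Keeping Y as a dictionary keyed by (front-end type, back-end type) is an implementation choice made for clarity, not a requirement of the embodiment.

```python
from collections import Counter

def init_y(front_end_requests, back_end_requests, x):
    """y maps (front-end type CTn, back-end type STk) to the minimum, over all
    observed front-end requests of type CTn, of the count of candidate
    back-end requests of type STk."""
    y = {}
    initialized_rows = set()  # front-end types whose row of Y is already initialized
    for n, (fe_type, _, _) in enumerate(front_end_requests):
        # z counts, per back-end type, the candidates for this front-end request.
        z = Counter(be_type
                    for k, (be_type, _, _) in enumerate(back_end_requests)
                    if x[n][k] == 1)
        if fe_type not in initialized_rows:
            initialized_rows.add(fe_type)
            for be_type, count in z.items():
                y[(fe_type, be_type)] = count
        else:
            # Types absent from z count as zero, so the minimum drops them.
            for key in [key for key in y if key[0] == fe_type]:
                y[key] = min(y[key], z.get(key[1], 0))
    return y
```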

Since it is now known how many back-end requests of a given type each front-end request of a given type generates, more equations for the elements of the matrix X may be created. Thus, for each front-end request type CTn and back-end request type STk, the sum of xij, where CRi is of type CTn and SRj is of type STk, is equal to ynk.
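As an illustrative sketch, the grouping underlying these equations can be computed as follows, again assuming the request tuples and X matrix of the earlier sketches; the sum of the x variables in each group must equal the corresponding entry of Y.

```python
from collections import defaultdict

def group_equations(front_end_requests, back_end_requests, x):
    """groups[(CTn, STk)] lists the (i, j) index pairs that are still candidates;
    the sum of x[i][j] over each group must equal y[(CTn, STk)]."""
    groups = defaultdict(list)
    for i, (fe_type, _, _) in enumerate(front_end_requests):
        for j, (be_type, _, _) in enumerate(back_end_requests):
            if x[i][j] == 1:
                groups[(fe_type, be_type)].append((i, j))
    return groups
```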

The complete system of equations may then be solved by identifying those equations for the elements of the matrix X whose right-hand side ynk is 0 and setting the corresponding xij to 0. For those xij not determined in this way, trial and error may be used to solve the remaining equations. Even though the number of trials may be large, each step involves only a simple arithmetic operation (e.g., summation), so the execution time is expected to be reasonable. Further improvements to the solution process can be implemented to improve efficiency.
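A sketch of this first, purely deductive solving step is shown below: wherever the Y entry for a (front-end type, back-end type) pair is 0, every candidate x variable for that pair is set to 0. The trial-and-error phase for the remaining variables is not shown; the data structures are those assumed in the earlier sketches.

```python
def propagate_zero_constraints(front_end_requests, back_end_requests, x, y):
    """Set x[i][j] to 0 wherever Y says a front-end type generates no back-end
    requests of the corresponding type. Remaining ambiguities would be resolved
    by trial and error, which is omitted here."""
    for i, (fe_type, _, _) in enumerate(front_end_requests):
        for j, (be_type, _, _) in enumerate(back_end_requests):
            if x[i][j] == 1 and y.get((fe_type, be_type), 0) == 0:
                x[i][j] = 0
    return x
```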

FIG. 5 is a timing diagram illustrating exemplary sequences of front-end and back-end requests. A front-end request can cause many back-end requests, but a back-end request can be caused by at most one front-end request. In FIG. 5, front-end requests are represented by capital letters, and back-end requests are represented by lower-case letters. The numerals represent particular instances of requests.

As shown, at time period 1, it can be determined that back-end request type a may have been caused by front-end request type A because back-end request type a starts after the start of, and ends before the end of, front-end request type A. It can be determined at time period 1 that back-end request type a cannot have been caused by front-end request type B because back-end request type a starts before the start of front-end request type B. Furthermore, at time period 1, it cannot be determined if back-end request type b is caused by front-end request type A or front-end request type B. This is because back-end request type b starts after both front-end request types A and B have started and ends before both front-end request types A and B have ended.

At time period 2, it can be determined that back-end request type c may have been caused by front-end request type B. This is because back-end request type c starts after front-end request type A has ended (thus cannot be caused by front-end request type A), but starts after the start of, and ends before the end of, front-end request type B.

At time period 3, it can be confirmed that back-end request type c may have been caused by front-end request type B because it again starts after the start of, and ends before the end of, front-end request type B. In addition, it can be determined at time period 3 that back-end request type b cannot be caused by front-end request type B because an instance of back-end request type b does not occur during this particular instance of front-end request type B. As discussed, the same sequence of back-end requests will be generated by each occurrence of a given front-end request type. Thus, it can be determined at time period 3 that back-end request type b may have been caused by front-end request type A.

At time period 4, it can be confirmed that back-end request type b may have been caused by front-end request type A since it starts after the start of, and ends before the end of, this particular instance of front-end request type A. It can also be determined that back-end request type a cannot be caused by front-end request type A because an instance of back-end request type a does not occur during this instance of front-end request type A. Thus, back-end request type a is not caused by either front-end request type A or front-end request type B; it may have occurred due to activity of the server not related to any front-end request.

These observed sequences of front-end transactions and back-end transactions can be formulated into a set of equations according to the discussed algorithm. At time 1, the following equations may be established:
X(a1,A1)=1  (1)
X(b1,A1)+X(b1,B1)=1 (i.e., one of the variables is 1, and the other variable is 0)  (2)

At time 2, the following equation may be established:
X(c1,B1)=1  (3)

At time 3, the following equation may be established:
X(c2,B2)=1  (4)

And at time 4, the following equation may be established:
X(b2,A2)=1  (5)

From equations 3 and 4, it can be derived that X(c1,B1)=X(c2,B2). Thus, it can be determined that front-end request type B may cause back-end request type c. From equations 2 and 5, it can be derived that X(b1,A1)=X(b2,A2). The variable X(b1,B1) can be determined to be 0 because there is no occurrence of back-end request type b during the occurrence of front-end request type B2. Thus, it can be determined that front-end request type A may cause back-end request type b. Furthermore, it can be determined that front-end request type A cannot cause back-end request type a because there is no occurrence of back-end request type a during the occurrence of front-end request type A2.

FIG. 6 is a timing diagram illustrating other exemplary sequences of front-end and back-end requests. Again, front-end requests are represented by capitalized letters, and back-end requests are represented by lower case letters. As shown in FIG. 6, two instances of front-end request type B and two instances of back-end request type b occur during time period 5. It cannot be determined which instance of back-end request type b is caused by which instance of front-end request type B. However, since both instances of back-end request type b start after the start of, and end before the end of, both instances of front-end request type B, it can be determined that back-end request type b may have been caused by front-end request type B. For analyzing application performance, it is not important to determine which specific instance of a given front-end request type causes which specific instance of a given back-end request type.

FIG. 7 illustrates a process for mapping transactions on both sides of the front-end server 104 without knowledge of the internal structure of the front-end server 104, in accordance with an alternative embodiment of the invention. At 702, both front-end transactions and back-end transactions are observed. For example, statistics of both front-end transactions and back-end transactions are collected. At 704, a frequency of occurrence of each front-end transaction type and a frequency of occurrence of each back-end transaction type are established. At 706, a system of linear relationships is established to express the transaction frequency of each back-end transaction type as a linear combination of the transaction frequencies of the front-end transaction types. The linear combination of transaction frequencies thus expresses a mapping between a front-end transaction type and one or more back-end transaction types. This alternative embodiment of the invention, even though it may provide less accurate results because it is statistical in nature, has the advantages of providing an incremental model and incurring less computation.
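For illustration, the frequency determination at 704 can be approximated by counting transactions of each type within fixed-length time windows. The window length, data layout, and function name in the following Python sketch are assumptions made for this example only.

```python
from collections import defaultdict

def frequencies_per_window(transactions, window_seconds=60.0):
    """transactions: iterable of (txn_type, start_time) pairs.
    Returns {window_index: {txn_type: count}}, i.e. per-window frequencies."""
    freq = defaultdict(lambda: defaultdict(int))
    for txn_type, start in transactions:
        freq[int(start // window_seconds)][txn_type] += 1
    return freq
```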

FIG. 8 illustrates an exemplary graph showing transaction frequencies of front-end transaction types A and B and back-end transaction types a and b. Transaction frequency is the count of transactions of a given type within a unit of time. From the graph illustrated in FIG. 8, the following linear relationships may be established:
fa=a1fA+a2fB  (6)
fb=b1fA+b2fB  (7)
where fx represents the frequency of transaction type x.

Here, a1, b1, a2, and b2 are the values determined from the data and representing the mapping between front-end and back-end transactions. Thus, a1 determines how many transactions of type a result from each transaction of type A. These values, therefore, constitute the output (end result) of the algorithm.

For example, in linear relationship 6, if a1 is 0 and a2 is 1, then fa=fB. Thus, it can be determined that each front-end transaction of type B causes one back-end transaction of type a.
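As a non-limiting sketch, the coefficients of linear relationships (6) and (7) can be estimated from per-window frequency observations by ordinary least squares. The numerical data and the use of numpy below are illustrative assumptions; the embodiment does not prescribe a particular solver.

```python
import numpy as np

# Each row is one time window; columns are the observed frequencies of
# front-end types A and B (assumed example data).
F_front = np.array([[10, 0], [0, 8], [5, 5], [7, 3]], dtype=float)
# Observed frequency of back-end type a in the same windows (assumed data).
f_a = np.array([0, 8, 5, 3], dtype=float)

# Solve f_a ≈ a1*f_A + a2*f_B in the least-squares sense.
coeffs, *_ = np.linalg.lstsq(F_front, f_a, rcond=None)
a1, a2 = coeffs
print(a1, a2)  # with this assumed data, a1 ≈ 0 and a2 ≈ 1, i.e. f_a ≈ f_B
```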

While particular embodiments and applications of the present invention have been illustrated and described herein, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatuses of the present invention without departing from the spirit and scope of the invention as it is defined in the appended claims.

Claims

1. A method of mapping front-end transactions and back-end transactions of a front-end server, comprising the steps of:

identifying one or more types of front-end transactions and one or more types of back-end transactions and their times of occurrence with respect to the front-end server;
building possible associations between the identified front-end transaction types and the identified back-end transaction types based on a time constraint; and
eliminating one or more of the associations such that each back-end transaction type is caused by one front-end transaction type and the number of associations between a given front-end transaction type and a given back-end transaction type remains constant.

2. The method of claim 1, wherein building the possible associations between the identified front-end transaction types and back-end transaction types based on the time constraint includes the step of:

associating a back-end transaction type with a front-end transaction type if the back-end transaction type starts after the front-end transaction type has started and ends before the front-end transaction type has ended.

3. The method of claim 1, wherein eliminating the one or more associations includes the steps of:

identifying a back-end transaction type that occurs during an instance of a front-end transaction type but does not occur during another instance of the front-end transaction type; and
eliminating the association between the back-end transaction type and the front-end transaction type.

4. The method of claim 1, wherein a front-end transaction represents a request sent by a client to the front-end server and a response sent by the front-end server to the client responsive to the request.

5. The method of claim 1, wherein a back-end transaction represents a request sent by the front-end server to a back-end server and a response sent by the back-end server to the front-end server responsive to the request.

6. The method of claim 1, wherein identifying the one or more types of the front-end transactions and the one or more types of the back-end transactions includes the steps of:

identifying the types of the front-end transactions based on contents of the front-end transactions; and
identifying the types of the back-end transactions based on contents of the back-end transactions.

7. The method of claim 1, wherein eliminating the one or more associations includes the steps of:

eliminating the least possible number of the associations such that each back-end transaction type is caused by one front-end transaction type and the number of associations between a given front-end transaction type and a given back-end transaction type remains constant.

8. The method of claim 1, wherein eliminating the one or more associations includes the step of:

removing, when a back-end transaction type is associated with more than one front-end transaction type, the associations so that the back-end transaction type is associated with one front-end transaction type.

9. A system for mapping transactions on both sides of a front-end server, the system comprising a module adapted to:

identify one or more types of front-end transactions and one or more types of back-end transactions and their times of occurrence with respect to the front-end server;
build possible associations between the identified front-end transaction types and the identified back-end transaction types based on a time constraint; and
eliminate one or more of the associations such that each back-end transaction type is caused by one front-end transaction type and the number of associations between a given front-end transaction type and a given back-end transaction type remains constant.

10. The system of claim 9, wherein the module is adapted to build the possible associations between the identified front-end transaction types and back-end transaction types based on the time constraint by:

associating a back-end transaction type with a front-end transaction type if the back-end transaction type starts after the front-end transaction type has started and ends before the front-end transaction type has ended.

11. The system of claim 9, wherein the module is adapted to:

identify a back-end transaction type that occurs during an instance of a front-end transaction type but does not occur during another instance of the front-end transaction type; and
eliminate the association between the back-end transaction type and the front-end transaction type.

12. The system of claim 9, wherein a front-end transaction represents a request sent by a client to the front-end server and a response sent by the front-end server to the client responsive to the request.

13. The system of claim 9, wherein a back-end transaction represents a request sent by the front-end server to a back-end server and a response sent by the back-end server to the front-end server responsive to the request.

14. The system of claim 9, wherein the module is adapted to:

identify the types of the front-end transactions based on contents of the front-end transactions; and
identify the types of the back-end transactions based on contents of the back-end transactions.

15. The system of claim 9, wherein the module is adapted to:

eliminate the least possible number of the associations such that each back-end transaction type is caused by one front-end transaction type and the number of associations between a given front-end transaction type and a given back-end transaction type remains constant.

16. The system of claim 9, wherein if a back-end transaction type is associated with more than one front-end transaction type, the module is adapted to remove the associations so that the back-end transaction type is associated with one front-end transaction type.

17. A method of mapping front-end transactions and back-end transactions of a front-end server, comprising the steps of:

determining a frequency of occurrence of each front-end transaction type and a frequency of occurrence of each back-end transaction type with respect to the front-end server; and
expressing the frequency of each back-end transaction type as a linear combination of frequencies of one or more front-end transaction types.

18. The method of claim 17, wherein the expression represents a mapping of front-end transaction types and back-end transaction types.

19. The method of claim 17, wherein determining the frequency of occurrence of each front-end transaction type and the frequency of occurrence of each back-end transaction type includes the step of:

observing front-end transaction types and back-end transaction types to determine their occurrence statistics with respect to the front-end server.

20. The method of claim 17, wherein determining the frequency of occurrence of each front-end transaction type and the frequency of occurrence of each back-end transaction type includes the steps of:

identifying a type of a front-end transaction based on content of the front-end transaction; and
identifying a type of a back-end transaction based on content of the back-end transaction.

21. The method of claim 17, wherein a front-end transaction represents a request sent by a client to the front-end server and a response sent by the front-end server to the client responsive to the request.

22. The method of claim 17, wherein a back-end transaction represents a request sent by the front-end server to a back-end server and a response sent by the back-end server to the front-end server responsive to the request.

23. A system for mapping transactions on both sides of a front-end server, the system comprising a module adapted to:

determine a frequency of occurrence of each front-end transaction type and a frequency of occurrence of each back-end transaction type with respect to the front-end server; and
express the frequency of each back-end transaction type as a linear combination of frequencies of one or more front-end transaction types.

24. The system of claim 23, wherein the expression represents a mapping of front-end transaction types and back-end transaction types.

25. The system of claim 23, wherein the module is adapted to determine the frequency of occurrence of each front-end transaction type and the frequency of occurrence of each back-end transaction type by:

observing front-end transaction types and back-end transaction types to determine their occurrence statistics with respect to the front-end server.

26. The system of claim 23, wherein the module is adapted to:

identify a type of a front-end transaction based on content of the front-end transaction; and
identify a type of a back-end transaction based on content of the back-end transaction.

27. The system of claim 23, wherein a front-end transaction represents a request sent by a client to the front-end server and a response sent by the front-end server to the client responsive to the request.

28. The system of claim 23, wherein a back-end transaction represents a request sent by the front-end server to a back-end server and a response sent by the back-end server to the front-end server responsive to the request.

Patent History
Publication number: 20060265430
Type: Application
Filed: May 19, 2005
Publication Date: Nov 23, 2006
Inventor: Dmitrii Manin (Palo Alto, CA)
Application Number: 11/134,568
Classifications
Current U.S. Class: 707/201.000
International Classification: G06F 17/30 (20060101);