Statistical models for improving the performance of database operations

A method for the automatic, software-driven statistical evaluation of large amounts of data assigned to statistical variables in a database, the data being contained in at least one cluster. The method is characterized in that a statistical model is used to approximately describe the relative frequencies of the states of the statistical variables and the statistical dependencies between those states, and in that this model is then used to determine the approximate relative frequencies of the states of the variables, as well as the approximate relative frequencies and expected values of states of the variables that depend on predeterminable relative frequencies of states of the variables.

Description

This invention relates to a method for the automatic, software-driven statistical evaluation of large amounts of data that is to be assigned to statistical variables in a database. The data to be evaluated can, in particular, be contained in one or several clusters.

Databases nowadays are capable of storing immense amounts of data. Because of this data volume, evaluating the stored data and extracting useful information from it requires efficient, i.e. fast and targeted, database accesses.

In general, an evaluation must find all the data that conforms to a predeterminable condition. Often the located data itself is not of interest; only statistical information derived from that data is required.

If, for example, in a customer relationship management (CRM) system storing customer data it is to be determined what proportion of customers with specific features bought a certain product, a simple procedure would be to access all customer entries in the database, retrieve all features of each customer, and among these find and count the entries that match the desired features and for which the customer bought the specific product. Such a request to the database could, for example, be: how often were specific mobile telephones purchased by male customers who are at least 30 years old? All customer entries conforming to the conditions “male” and “at least 30 years old” must then be found, and for the matching entries it must be determined which mobile telephone was purchased most often.
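
For illustration, a minimal sketch of this brute-force procedure (the field names and entries are hypothetical, chosen only to mirror the example request):

```python
from collections import Counter

# Hypothetical customer table: one dict per database entry.
customers = [
    {"gender": "male", "age": 34, "phone": "model_a"},
    {"gender": "male", "age": 29, "phone": "model_b"},
    {"gender": "female", "age": 41, "phone": "model_a"},
    # ... potentially millions of further entries
]

# Full scan: every entry must be read to test the condition.
matches = [c for c in customers
           if c["gender"] == "male" and c["age"] >= 30]

# Determine which mobile telephone the matching customers bought most often.
phone_counts = Counter(c["phone"] for c in matches)
print(phone_counts.most_common(1))
```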

However, a disadvantage of this procedure is the fact that the entire database has to be read to find the matching entries. This can occasionally take a very long time in the case of very large databases.

The database can be searched more skillfully and more efficiently if all the variables are provided with selective indexes that can be queried. As a rule, the more exact and sophisticated the index technique of a database, the faster the database can be accessed. Correspondingly, statistical information about the database entries can also be provided more efficiently. This applies in particular if the database is specifically prepared, by a special index technique, for the requests to be expected.

Alternatively, or in combination with index techniques, the results of all the statistical requests to be expected can be pre-calculated, which has the disadvantage of the considerable effort required for the calculations and for storing the results.

The term “online analytical processing” (OLAP) characterizes a class of methods for extracting statistical information from the data of a database. In general, such methods can be subdivided into “relational online analytical processing” (ROLAP) and “multidimensional online analytical processing” (MOLAP).

A ROLAP method performs only minor pre-calculations. When statistics are requested, the data required to answer the request is accessed via index techniques and the statistics are then calculated from that data. The emphasis of ROLAP is therefore on suitable organization and indexing of the data, so that the required data can be found and loaded as quickly as possible. Nevertheless, the effort for large amounts of data can still be very great, and the selected indexing is sometimes not optimal for all requests.

In the MOLAP methods the focus is on pre-calculating the results for many possible requests. As a result, the response time for a pre-calculated request is very short. For requests that have not been pre-calculated, the pre-calculated values can sometimes also lead to an acceleration, namely if the desired quantities can be calculated from the pre-calculated results at lower cost than by directly accessing the data. However, the number of possible requests grows rapidly with the number of variables and the number of states of these variables, so that the pre-calculation runs up against the limits of present possibilities with regard to memory and turnaround time. Restrictions with regard to the variables considered, the states of these variables or the permissible requests must then be accepted.

Even though OLAP methods provide an increase in efficiency compared to simply accessing each database entry, it is disadvantageous that a great amount of redundant information has to be generated: statistics must be pre-calculated and extensive index lists created. In general, efficient application of an OLAP method also requires that the method is optimized for specific requests, in which case the OLAP method is then subject to the corresponding restrictions, i.e. arbitrary requests can no longer be made to the database.

In addition, it is also true for the OLAP methods that, the more quickly the information is to be provided and the more this information varies, the more structures must be pre-calculated and stored. OLAP systems can therefore become very large and are far less efficient than would be desired; response times of less than one second cannot in practice be achieved for arbitrary statistical requests to a large database. Often the response times are considerably more than one second.

Therefore, there is a need for more efficient methods for the statistical evaluation of data entries. In such cases the requests should not be subject to any restrictions if possible.

The object of this invention is to overcome the disadvantages of the methods known in the prior art, particularly, the OLAP method for the statistical evaluation of database entries.

The methods having the features of the independent claims achieve this object according to the invention. Advantageous developments of the invention are specified in the subclaims.

According to the invention, a method is provided for the automatic, software-driven statistical evaluation of large amounts of data assigned to statistical variables in a database, the data in particular being contained in one or more clusters. The method is characterized in that a statistical model for the approximate description of the relative frequencies of the states of the variables and of the statistical dependencies between these states is learnt from the data stored in the database, and in that this statistical model is used to determine the approximate relative frequencies of states of the variables, as well as the approximate relative frequencies and expected values of states of the variables that depend on predeterminable relative frequencies of states of the variables.

Unlike conventional methods for the statistical evaluation of data from databases, the model is not an exact image of the statistics of the data. In general, this procedure therefore yields not exact, but only approximate, statistical statements. However, the statistical models are subject to far fewer restrictions than, for example, the conventional OLAP methods.

In order to make approximate statistical statements, the entries in a database are thus “condensed” into a statistical model, the statistical model effectively representing an approximation of the joint probability distribution of the database entries. In practice, this takes place by learning the statistical model on the basis of the database entries, so that the relative frequencies of the states of the variables in the database entries are approximately described. The variables can assume many states with different relative frequencies. As soon as such a statistical model is available, it can be used to study the statistical dependencies between the states of the variables. In this way, relative frequencies of states of the variables can be specified as a predeterminable condition, and the relative frequencies of other states of the variables that depend on this condition can be determined.

A statistical request to the database can in this way be formulated as a condition on the relative frequencies of specific states of the variables; the response to the statistical request then consists of the relative frequencies of states of the variables that depend on the predetermined relative frequencies.

As the statistical model, a graphical probabilistic model is preferably used (see, e.g., Enrique Castillo, José Manuel Gutiérrez, Ali S. Hadi: Expert Systems and Probabilistic Network Models, Springer, New York). Graphical probabilistic models include, in particular, Bayesian networks (belief networks) and Markov networks.

A statistical model can, for example, be generated by structure learning in Bayesian networks (see, e.g., Reimar Hofmann: Lernen der Struktur nichtlinearer Abhängigkeiten mit graphischen Modellen (Learning the Structure of Nonlinear Dependencies with Graphical Models), Dissertation, Berlin; or David Heckerman: A Tutorial on Learning with Bayesian Networks, Technical Report MSR-TR-95-06, Microsoft Research).

A further possibility is to learn the parameters for a fixed structure (see e.g.: Martin A. Tanner: Tools for Statistical Inference, Springer N.Y. 1996).

Many learning methods use the likelihood function as the optimization criterion for the parameters of the model. A particular embodiment here is the expectation maximization (EM) learning method, which is explained below in detail on the basis of a special model. In principle, the generalization ability of the models is not the main concern here; it is only necessary to obtain a good adaptation of the models to the data.

As the statistical model, a statistical clustering model, preferably a Bayesian clustering model, is used, by means of which the data is subdivided into many clusters.

Similarly, a clustering model based on a distance measure can be used together with a statistical model, by means of which the data is likewise subdivided into many clusters.

By using clustering models, a very large database breaks down into smaller clusters that can, in turn, be interpreted as separate databases and, owing to their comparably smaller size, handled more efficiently. The statistical evaluation of the database then tests whether a predetermined condition can be mapped via the statistical model to one or more clusters. If this is the case, the evaluated data is restricted to that cluster or those clusters. Similarly, the restriction can be to those clusters in which the data conforming to the predetermined condition occurs with at least a specific relative frequency. The remaining clusters, which contain only a small proportion of the data conforming to the predetermined condition, can be ignored, because the procedure considered here aims only at approximate statements.

For example, a Bayesian clustering model (a model with a discrete latent variable) is used as a statistical clustering model.

This is described in further detail below:

Consider a set of statistical variables {A, B, C, D, …}, in other words a set of fields in a database table. The corresponding lower-case letters denote the states of the variables; variable A, for example, can assume the states {a_1, a_2, …}. The states are assumed to be discrete, but continuous (real-valued) variables are in general also permitted.

An entry in the database table consists of values for all the variables, the values belonging to one entry being combined into one data record. For example, x_Π = (a_Π, b_Π, c_Π, d_Π, …) denotes the Π-th data record. The table has M entries, i.e. D = {x_Π, Π = 1, …, M}.
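
In code, such a table could be represented, for example, as follows (a minimal sketch; the concrete state names are hypothetical):

```python
# Discrete state spaces of the variables A, B, C, ...
states = {
    "A": ["a1", "a2"],
    "B": ["b1", "b2", "b3"],
    "C": ["c1", "c2"],
}

# The data set D: M records, each a tuple with one state per variable.
D = [
    ("a1", "b3", "c2"),
    ("a2", "b1", "c1"),
    ("a1", "b3", "c1"),
]
M = len(D)
```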

In addition, there is a hidden variable (cluster variable), designated Ω. The cluster variable can assume the values {ω_i, i = 1, …, N}; i.e. there are N clusters.

Here, P(Ω | θ) denotes the a priori distribution of the clusters, the a priori weight of the i-th cluster being given by P(ω_i | θ), where θ represents the parameters of the model. The a priori distribution describes what proportion of the data is assigned to the respective clusters.

The expression P(A, B, C, D, … | ω_i, θ) describes the structure of the i-th cluster, i.e. the conditional distribution of the variables of the variable set {A, B, C, D, …} within the i-th cluster.

The a priori distribution and the conditional distributions of the individual clusters thus together parameterize a joint probabilistic model on {A, B, C, D, …} ∪ {Ω} or on {A, B, C, D, …}. The probabilistic model is given by the product of the a priori distribution and the conditional distribution,
P(A, B, C, …, Ω | Θ) = P(Ω | Θ) · P(A, B, C, … | Ω, Θ),
or by
P(A, B, C, … | Θ) = Σ_i P(ω_i | Θ) · P(A, B, C, … | ω_i, Θ).
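
The following sketch shows how such a mixture is evaluated; the per-cluster conditional distributions are left abstract here as callables (a concrete, factorized form is introduced below):

```python
def p_joint(x, i, priors, cluster_cond):
    """P(x, omega_i | theta) = P(omega_i | theta) * P(x | omega_i, theta)."""
    return priors[i] * cluster_cond[i](x)

def p_marginal(x, priors, cluster_cond):
    """P(x | theta) = sum_i P(omega_i | theta) * P(x | omega_i, theta)."""
    return sum(p_joint(x, i, priors, cluster_cond)
               for i in range(len(priors)))
```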

The log-likelihood function L of the parameters Θ given the data set D is now given by
L(Θ) = log P(D | Θ) = Σ_Π log P(x_Π | Θ).

Within the framework of expectation maximization (EM), a sequence of parameters Θ^(t) is now constructed according to the following general specification:
Θ^(t+1) = arg max_Θ Σ_Π Σ_i P(ω_i | x_Π, Θ^(t)) · log P(x_Π, ω_i | Θ).

This iteration specification maximizes the likelihood function step by step.

For the conditional distributions P(A, B, C, D, … | ω_i, θ), restrictive assumptions can (and, in general, must) be made. An example of such a restrictive assumption is the following factorization assumption:

If, for example, for the conditional distributions P(A, B, C, D, … | ω_i, θ) of the variables of the variable set {A, B, C, D, …} the factorization P(A, B, C, D, … | ω_i, θ) = P(A | ω_i, θ) · P(B | ω_i, θ) · P(C | ω_i, θ) · P(D | ω_i, θ) · … is assumed, the probabilistic model corresponds to a naive Bayesian network. Instead of one high-dimensional table, one is then only confronted with many one-dimensional tables (one table per variable).

The parameters of the distributions can, as shown above, be learnt from the data with an expectation maximization (EM) learning method. After the learning process, a cluster can be assigned to each data record x_Π = (a_Π, b_Π, c_Π, d_Π, …). The assignment takes place via the a posteriori distribution P(Ω | a_Π, b_Π, c_Π, d_Π, …, θ), the cluster ω_i with the highest weight P(ω_i | a_Π, b_Π, c_Π, d_Π, …, θ) being assigned to the data record x_Π.
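
The following is a compact sketch of such an EM learning method for the naive Bayesian clustering model, together with the a posteriori cluster assignment. It is an illustration under the factorization assumption above, not the patent's own implementation; all names are hypothetical, and it continues the small `states`/`D` example from the earlier sketch:

```python
import random
from collections import defaultdict

def em_naive_bayes_mixture(D, states, n_clusters, n_iter=50, seed=0):
    """Learn priors P(omega_i) and per-cluster tables P(v = s | omega_i)
    of a naive Bayesian clustering model by expectation maximization."""
    rng = random.Random(seed)
    variables = list(states)
    priors = [1.0 / n_clusters] * n_clusters
    # Random positive initialization of the one-dimensional tables,
    # normalized so that each P(. | omega_i) is a distribution.
    cond = [{v: {s: rng.random() for s in states[v]} for v in variables}
            for _ in range(n_clusters)]
    for i in range(n_clusters):
        for v in variables:
            z = sum(cond[i][v].values())
            for s in states[v]:
                cond[i][v][s] /= z

    def cluster_lik(x, i):
        # Naive factorization: P(x | omega_i) = prod_v P(x_v | omega_i).
        p = 1.0
        for v, s in zip(variables, x):
            p *= cond[i][v][s]
        return p

    for _ in range(n_iter):
        # E-step: responsibilities P(omega_i | x, theta(t)) per record.
        resp = []
        for x in D:
            w = [priors[i] * cluster_lik(x, i) for i in range(n_clusters)]
            z = sum(w)
            resp.append([wi / z for wi in w])
        # M-step: re-estimate priors and tables from expected counts
        # (this maximizes the EM auxiliary function shown above).
        for i in range(n_clusters):
            ni = sum(r[i] for r in resp)
            priors[i] = ni / len(D)
            for k, v in enumerate(variables):
                counts = defaultdict(float)
                for x, r in zip(D, resp):
                    counts[x[k]] += r[i]
                for s in states[v]:
                    cond[i][v][s] = counts[s] / ni
    return priors, cond

def assign_cluster(x, priors, cond, states):
    """Assign x to the cluster with the highest a posteriori weight."""
    variables = list(states)
    w = []
    for i in range(len(priors)):
        p = priors[i]
        for v, s in zip(variables, x):
            p *= cond[i][v][s]
        w.append(p)
    return max(range(len(w)), key=w.__getitem__)
```

With the `states` and `D` from above, `priors, cond = em_naive_bayes_mixture(D, states, n_clusters=2)` learns a two-cluster model, and `assign_cluster(D[0], priors, cond, states)` yields the cluster of the first record.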

The cluster affiliation of each entry in the database can be stored as an additional field in the database and corresponding indexes can be prepared to quickly access the data that belongs to a specific cluster.
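
Continuing the sketch, the additional cluster field and a simple cluster index could look as follows (the index is modeled here as a plain mapping from cluster number to row positions):

```python
# Additional field: the cluster affiliation of each record.
cluster_of = [assign_cluster(x, priors, cond, states) for x in D]

# Index from cluster number to the rows belonging to that cluster.
cluster_index = {}
for row, i in enumerate(cluster_of):
    cluster_index.setdefault(i, []).append(row)

# Reading the data of cluster 0 now touches only the indexed rows.
rows_in_cluster_0 = [D[r] for r in cluster_index.get(0, [])]
```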

If, for example, a statistical request of the type “give all data records with A = a_1 and B = b_3 as well as the corresponding distributions over C and D (i.e. P(C | a_1, b_3) and P(D | a_1, b_3))” is made to the database, one proceeds as follows:

First of all, the a posteriori distribution P(Ω | a_1, b_3) is determined. This distribution shows (approximately) what proportion of the data conforming to the set condition is to be found in which clusters of the database. All further processing can thus be restricted, depending on the desired accuracy, to those clusters of the database that have a high a posteriori weight according to P(Ω | a_1, b_3).

The ideal case occurs when P(ω_i | a_1, b_3) = 1 holds for one i, and accordingly P(ω_j | a_1, b_3) = 0 for all j ≠ i, i.e. all the data conforming to the set condition lies in a single cluster. In such a case, it is possible to restrict oneself to the i-th cluster without losing accuracy in the further evaluation.
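
A sketch of this step, again under the naive factorization and continuing the example above (the threshold value is a freely chosen illustration of the speed/accuracy trade-off discussed below):

```python
def posterior_over_clusters(evidence, priors, cond):
    """P(omega_i | evidence, theta) for partial evidence such as
    {"A": "a1", "B": "b3"}, under the naive factorization."""
    w = []
    for i in range(len(priors)):
        p = priors[i]
        for v, s in evidence.items():
            p *= cond[i][v][s]
        w.append(p)
    z = sum(w)
    return [wi / z for wi in w]

post = posterior_over_clusters({"A": "a1", "B": "b3"}, priors, cond)

# Restrict all further processing to clusters with high posterior weight;
# excluding only zero-weight clusters keeps the evaluation exact.
threshold = 0.05
relevant = [i for i, p in enumerate(post) if p > threshold]
```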

In order to obtain (approximate) distributions for C and D, one can either continue to use the model, i.e. approximately determine the desired distributions P(C | a_1, b_3) and P(D | a_1, b_3) from the parameters of the model:
P(C | a_1, b_3) ≈ Σ_i P(C | ω_i, a_1, b_3, Θ) · P(ω_i | a_1, b_3, Θ).
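
Under the naive factorization, P(C | ω_i, a_1, b_3, Θ) reduces to P(C | ω_i, Θ), so this approximation can be read directly from the learnt tables (continuing the sketch):

```python
def approx_conditional(target, evidence, priors, cond, states):
    """Approximate P(target | evidence) as
    sum_i P(target | omega_i) * P(omega_i | evidence)."""
    post = posterior_over_clusters(evidence, priors, cond)
    return {s: sum(post[i] * cond[i][target][s]
                   for i in range(len(priors)))
            for s in states[target]}

p_c = approx_conditional("C", {"A": "a1", "B": "b3"}, priors, cond, states)
```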

Alternatively, the model can be used only to determine the clusters that are relevant for the current request.

After restricting the request to these clusters, more exact methods can be used within the clusters: for example, the statistics within the clusters can be counted exactly (with the help of an additional index referring to the cluster affiliation, or by means of conventional database reporting or OLAP methods), or further statistical models adapted to the individual clusters can be used. A tight interlocking with OLAP is particularly advantageous because the so-called “sparsity” of the data in high dimensions is exploited by the statistical clustering models, while the OLAP methods are used only within the lower-dimensional clusters, where they are effective.
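
A sketch of this hybrid variant, continuing the example: the model selects the relevant clusters, and the statistics are then counted exactly, but only over the rows of those clusters (in practice, the inner count could equally be delegated to a conventional reporting or OLAP engine):

```python
from collections import Counter

def exact_count_in_clusters(relevant, cluster_index, D, variables,
                            evidence, target):
    """Count the distribution of `target` exactly, restricted to the
    rows of the clusters selected as relevant by the model."""
    idx = {v: k for k, v in enumerate(variables)}
    counts = Counter()
    for i in relevant:
        for row in cluster_index.get(i, []):
            x = D[row]
            if all(x[idx[v]] == s for v, s in evidence.items()):
                counts[x[idx[target]]] += 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()} if total else {}

p_c_exact = exact_count_in_clusters(relevant, cluster_index, D,
                                    list(states),
                                    {"A": "a1", "B": "b3"}, "C")
```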

The trade-off between speed and accuracy in the evaluation results from the amount of data excluded from the evaluation: the more clusters are excluded, the faster, but also the less exact, the response to a statistical request will be. The user can determine this trade-off between accuracy and speed himself. In addition, more exact methods can be initiated automatically if the accuracy obtained from evaluating the model appears insufficient.

In general, clusters that lie below a specific minimum weight are excluded from the evaluation. Exact results can be obtained by excluding from the evaluation only those clusters that have an a posteriori weight of zero; given an exact indexing of the clusters, this amounts to an exact “indexing” of the database, which accelerates the evaluation in many cases. In general, however, as many clusters as possible are used for the evaluation.

Overtraining of the clustering model is of no concern here; on the contrary, the aim is the most exact possible reproduction of the historical data, not a prognosis for the future. Indeed, strongly overtrained clustering models tend to supply the most unambiguous possible assignment of requests to clusters, which means that in further processing a request can very quickly be limited to a few small clusters of the database.

Advantageously, the data belonging to a cluster is stored on a data carrier in a manner reflecting the cluster affiliation. For example, the data belonging to one cluster can be stored in a contiguous section of the hard disk, so that data belonging together can be read in one block more quickly.

As already indicated, conventional methods for the statistical evaluation of data from databases can also be used in a supplementary way in the method according to the invention if approximate statements are deemed insufficient. In particular, conventional database reporting or OLAP methods can be used to determine the relative frequencies of the states of the variables exactly.

A supplementary application of conventional database techniques can, for example, be initiated automatically if a definable test variable reaches or exceeds a predetermined value.

According to the invention, a further method is provided for the automatic, software-driven statistical evaluation of large amounts of data assigned to statistical variables in a database, in particular contained in one or several clusters, which is characterized in that the data is subdivided into many clusters by a clustering model based on a distance measure and, if required, the considered data is restricted to the data contained in one or several clusters, database reporting methods or OLAP methods being used to determine the relative frequencies and expected values of the states of the variables.

The methods according to the invention can subdivide the data of the database into clusters and, if required, restrict the evaluation to one or several clusters. If the methods according to the invention are applied to data that is already contained in one or several clusters, those clusters are in turn subdivided into subclusters. If the restriction is to one or more subclusters, the methods according to the invention can be applied to the data contained therein, in which case, if required, more exactly adapted statistical models can be used. In general, this procedure can be repeated as often as desired: the clusters can be subdivided into subclusters, the subclusters into sub-subclusters, and so on, with the evaluation restricted, if required, to the data contained in the respective clusters and the methods according to the invention applied (more exactly adapted) to that data.

An embodiment of the invention in the Web reporting/Web mining area is described below in which case reference is made to the accompanying drawings.

FIG. 1 Shows different monitor windows in which variables for describing the visitors to a Web site are displayed.

FIG. 2 Shows different monitor windows of the variables of FIG. 1 in which the behavior of visitors coming from a specific referrer is investigated.

FIG. 3 Shows different monitor windows of the variables of FIG. 1 in which case the behavior of visitors that call up the homepage first, then read the news and subsequently again call up the homepage is investigated.

In general, large amounts of data have to be evaluated in the Web reporting/Web mining area. When a user visits a Web site, each action of the user is usually recorded in the Web log file. This is very data-intensive, because such Web log files can grow very rapidly to sizes in the region of several gigabytes.

To prepare the evaluation of the Web log files, “sessions”, i.e. visits by individual visitors, were extracted: all successive entries (page retrievals or clicks) belonging to one visitor are grouped together.

Each session by a visitor was characterized by a set of different variables, namely particularly “start time”, “session duration”, “number of requests”, “referrer”, “1st visited category”, “2nd visited category”, “3rd visited category” and “4th visited category”.

In addition, further variables (not shown in the figures) were specified such as “does the visitor accept cookies”, “number of sessions that the visitor had already had up to the current session”, “number of pages retrieved in the last session”, “interval in time to the last session”, “on which page did the last session end”, “time of the first session by the visitor” and “weekday”.

Altogether, each session was characterized in this way on the basis of 18 different variables.

In order to determine the relative frequencies of the states of the variables, a naive Bayesian clustering model, as described above, was used.

The specified variables were therefore integrated into the statistical model. The statistical model was then trained on the data contained in the Web log files in order to find good parameters for the model. The desired relative frequencies can subsequently be read off from the model.

The result of determining the relative frequencies of the states of the variables is displayed in FIG. 1. FIG. 1 shows different monitor windows in which the variables “start time”, “session duration”, “number of requests”, “referrer”, “1st visited category”, “2nd visited category”, “3rd visited category” and “4th visited category” to describe the visitors to a Web site are shown.

From FIG. 1 it can be seen in particular that

    • approximately 55% of the visitors visit the Web site during the afternoon or evening,
    • approximately 47% of the visitors only remain less than 1 minute on the Web site,
    • approximately 34% of the visitors only start one request,
    • approximately 56% of the visitors do not have a referrer,
    • approximately 45% of the visitors start on the homepage, and
    • approximately 57% of the visitors only visit 1 category, approximately 74% of the visitors only 2 categories and approximately 85% of the visitors only 3 categories.

After the statistical model had been trained by means of an EM learning method, the dependencies between the variables could also be studied.

As can be seen in FIG. 2, the behavior of, for example, those visitors who came from a specific referrer (referred to below as Endemann) was investigated. For this, the corresponding entry of the variable “referrer” was set to 100%. By using the statistical model, it could be determined within fractions of a second that approximately 99% of these visitors first visit the homepage and that the predominant majority of them (approximately 96%) then immediately leave the Web site again.

FIG. 3 illustrates a more complicated request to the database. FIG. 3 shows different monitor windows of the variables considered, in which the behavior of those visitors who call up the homepage first, then read the news and subsequently call up the homepage again is investigated. Here the corresponding entries of the variables “1st visited category”, “2nd visited category” and “3rd visited category” were set to 100%.

Again, it could be determined by means of the statistical model within fractions of a second that these visitors then predominantly either read the news again (approximately 37%) or left the Web site (approximately 36%). It can also be seen in FIG. 3 that approximately 89% of these visitors have no referrer.

In a corresponding way, a multitude of further requests to the database could be answered within a short period, i.e. in general within less than 1 second. For example, it could be determined which proportion of the visitors coming from a specific referrer make more than three page requests, how these visitors are distributed over the time of day, and which of these visitors are returning visitors. It could also be determined how the traffic of those visitors starting at the homepage is distributed, i.e. which proportion of the visitors continues the session in which way and which proportion aborts it.

Such a multitude of requests involving many different variables on data of this size can be handled far more efficiently with the method according to the invention than with conventional database techniques, particularly the OLAP methods. Conventional OLAP methods can additionally be used if the approximate statements gained by the statistical model are to be supplemented by exact statements; however, considerably longer response times must then be expected.

To summarize, it can be established that this invention, by using statistical models, can answer statistical requests made to extensive databases considerably more efficiently than conventional database techniques, particularly the database reporting and OLAP methods. This does not exclude the possibility of using conventional database evaluation techniques in a corresponding way to obtain exact statements, if required. By using a clustering model, by means of which the database can be broken up into smaller clusters, requests can very quickly be restricted (approximately or exactly) to the relevant clusters of the database. Once the evaluation has been restricted to certain clusters, a renewed statistical evaluation of these clusters can be carried out according to the invention, in the course of which, if required, a renewed restriction to subclusters contained in these clusters, as well as a renewed statistical evaluation of the data contained in the subclusters, can be made. In general, this procedure can be repeated as often as desired. In this way, statistics can be created, and statistical requests answered, considerably more efficiently.

Similarly, according to the invention, a clustering model based on a distance measure can be used to subdivide the data of a database into many clusters, with the evaluation restricted to the relevant cluster or clusters of the database. Conventional database reporting methods or OLAP methods are then used to determine the relative frequencies and expected values of the states of the variables.

In principle, this invention can be used everywhere where an efficient statistical evaluation of large amounts of data is required.

Therefore, a possible application is in the Web reporting/Web mining area as has already been shown in the embodiment.

Further possible applications can be found wherever data is obtained in large amounts, such as:

    • data from call centers,
    • data from operational customer relationship management systems,
    • data from the health area,
    • data from medical databases,
    • data from environmental databases,
    • data from genome databases,
    • data from the financial area.

Claims

1. Method for the automatic, software-driven statistical evaluation of large amounts of data that is to be assigned to statistical variables in a database, in particular, contained in one or several clusters which is characterized in that

a statistical model for the approximate description of the relative frequencies of the states of the variables and of the statistical dependencies between said states is learnt by means of the data stored in the database, and
this statistical model is used to determine the approximate relative frequencies of states of the variables, as well as the approximate relative frequencies and expected values of states of the variables that depend on predeterminable relative frequencies of states of the variables.

2. Method according to claim 1, characterized in that as the statistical model, a graphical probabilistic model, in particular a Bayesian network, is used.

3. Method according to claim 1, characterized in that a statistical clustering model, in particular a Bayesian clustering model, is used by means of which the data is subdivided into many clusters.

4. Method according to claim 1, characterized in that a clustering model based on a distance measure is used, by means of which the data is likewise subdivided into a plurality of clusters.

5. Method according to claim 3 or 4, characterized in that the considered data is restricted to the data contained in one cluster or a number of clusters.

6. Method according to claim 5, characterized in that the restriction can be to those clusters in which the data belonging to specific states of the variables occurs with at least a specific relative frequency.

7. Method according to one of the claims 4 to 6, characterized in that the data belonging to a cluster is stored on a data carrier in a way appropriate to the cluster affiliation.

8. Method according to one of the previous claims, characterized in that database reporting methods or OLAP methods are further used to determine the relative frequencies and expected values of the states of the variables.

9. Method according to claim 8, characterized in that database reporting methods or OLAP methods are used if a test variable assumes or exceeds a predetermined value.

10. Method for the automatic, software-driven statistical evaluation of large amounts of data that is to be assigned to statistical variables in a database, in particular, contained in one or several clusters which is characterized in that,

the data is subdivided into many clusters by a clustering model based on a distance measure and, if required, the considered data is restricted to the data contained in one cluster or several clusters, and
database reporting methods or OLAP methods are used to determine the relative frequencies and expected values of the states of the variables.

11. Application of the method according to one of the previous claims for the statistical evaluation of customer data, in particular, in the Web reporting/Web mining area and in customer relationship management systems.

12. Application of the method according to one of the previous claims for the statistical evaluation of environmental databases, medical databases or genome databases.

Patent History
Publication number: 20070083343
Type: Application
Filed: Oct 17, 2006
Publication Date: Apr 12, 2007
Applicant:
Inventors: Michael Haft (Zorneding), Reimar Hofmann (München)
Application Number: 11/581,452
Classifications
Current U.S. Class: 702/179.000
International Classification: G06F 19/00 (20060101);