METHOD AND SYSTEM FOR DETECTING CHANGES IN SENSOR SAMPLE STREAMS

A method detects a change in a stream of samples acquired by a sensor. A stream of samples acquired by a sensor over time is stored sequentially in a buffer in which an oldest sample is discarded and a newest sample is stored when the buffer is full such that the buffer forms a window of samples sliding forward in time. For each new sample, the buffer is partitioned into all possible pairs of contiguous sub-windows of samples including a first sub-window and a second sub-window such that the newest sample is stored in the second sub-window of the pair. A difference is determined between the first and second sub-window of each pair of the contiguous sub-windows of samples, and a maximum difference is assigned as a merit score. A change in the stream of samples is signaled if the merit score is greater than a predetermined threshold. The change can be abrupt or gradual.

Description
FIELD OF THE INVENTION

This invention relates generally to monitoring equipment and an environment with sensors, and more particularly to detecting changes in sensor samples.

BACKGROUND OF THE INVENTION

Sensor Data

Up to now, real-time detailed monitoring of equipment and/or an environment with sensors has only been economically feasible for large, expensive, safety- and mission-critical installations. However, the rapid progress of computer technology, and more specifically, the advent of low-cost sensor networks, cheap wireless communications, and powerful embedded processors, has made it possible to implement equipment condition monitoring (ECM) technology for much cheaper equipment, such as electrical motors, turbines, power switchgear, HVAC equipment, as well as for an ever expanding range of industrial processes, such as oil refining, food processing, product manufacturing, and large scale environments.

The resulting increase in the amount of sensor data that are constantly streaming from sensor networks would quickly overwhelm any human supervisor tasked with monitoring the data. The only viable solution to the problem of processing the sensor data quickly and accurately is to develop automated change detection (ACD) methods. Whereas such automated methods are not likely to reach the competence and versatility of a well-trained human supervisor, automated methods can still be very effective and accurate, when designed to look for specific events in the sensor data stream.

One of the most important among these events is an abrupt change in the sensor data. Detecting such abrupt changes is not a trivial problem, because all but the simplest data streams vary, even when no change in the process that generates the data has occurred. This might be caused either by the natural variability of the process, e.g., when the data come from a dynamical system, or by noise due to measurement errors, hidden variables, and the like. In such cases, the detection of abrupt changes is done in a statistical sense, i.e., the problem reduces to detecting a difference between the probability distributions from which the data are sampled before and after the change. In manufacturing applications, this task is often called statistical process control (SPC). In SPC, the objective is to detect a departure from the in-control distribution of the data to some other, out-of-control distribution.

CUSUM

When the in-control and out-of-control distributions have known parametric forms and the respective parameters are known, it has been shown that the cumulative sum (CUSUM) procedure is optimal. Page, “Continuous Inspection Schemes,” Biometrika 41, pp. 100-114, 1954, and Basseville et al., “Detection of Abrupt Changes: Theory and Application,” Englewood Cliffs, N.J.: Prentice Hall, 1993.

However, explicit modeling of the in-control and all possible out-of-control distributions is typically a laborious and expensive process, and might even be intractable. Therefore, it is desired to provide a method that can detect any changes only by inspecting the sensor data streams and reasoning about their probability distributions.

Abrupt Change Detection

At a current time t, a d-dimensional data vector from a sensor sample stream is $x_t$. The problem of abrupt change detection is to determine whether such a change has occurred at or before the current time t. An important assumption for this problem is that the change is permanent, i.e., after the change has occurred, all subsequent readings come from a new distribution. This is the typical situation for industrial equipment when the change is destructive, e.g., when the equipment fails.

All sensor samples before the change are assumed to be independent and identically-distributed (i.i.d.) random variables sampled from a distribution $p_0(x)$. Similarly, all samples after the change are assumed to be i.i.d. variables sampled from a distribution $p_1(x)$.

For cases when the distributions $p_0(x)$ and $p_1(x)$ are known, Page describes the CUSUM procedure that accumulates the log-likelihood of the current samples with respect to the two distributions, and makes a decision based on an auxiliary variable $g_t = S_t - m_t$, for

$m_t = \min_{1 \le j \le t} S_j, \qquad S_t = \sum_{i=1}^{t} s_i, \qquad s_i = \log \frac{p_1(x_i)}{p_0(x_i)}.$

A change is declared to have occurred if $g_t > h$ for a predetermined threshold $h$. This decision can be shown to be optimal with respect to maximizing the detection probability for a given false-positive rate.
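
For concreteness, the decision rule above can be sketched in a few lines of Python; this is a minimal illustration assuming the two densities are supplied as functions, and the function name and arguments are illustrative, not part of the original procedure.

    import numpy as np

    def cusum_detect(samples, p0, p1, h):
        """Classic CUSUM: signal a change when g_t = S_t - m_t exceeds h."""
        S = 0.0                      # cumulative sum S_t of log-likelihood ratios
        m = float('inf')             # running minimum m_t of S_j for 1 <= j <= t
        for t, x in enumerate(samples, start=1):
            S += np.log(p1(x) / p0(x))       # s_t = log(p1(x_t) / p0(x_t))
            m = min(m, S)
            if S - m > h:                    # g_t = S_t - m_t > h
                return t                     # time at which the change is declared
        return None                          # no change detected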

However, the CUSUM method has the significant disadvantage that both distributions $p_0(x)$ and $p_1(x)$ must be known beforehand. Specifying an accurate probability distribution $p_0(x)$ for the normal operation of industrial equipment or normal conditions in an environment is typically hard and laborious, even for the very engineers who designed the equipment. Specifying all possible distributions $p_1(x)$ for the out-of-control case might be outright impossible. Furthermore, the correct parametric forms for these distributions might not be available.

These limitations of CUSUM have spurred extensive research on alternative methods for ACD that are more data driven and do not rely on pre-specified distributions. An important direction of research is to use non-parametric statistics, such as rank statistics, Brodsky et al., "Nonparametric Methods in Change-Point Problems," Kluwer, 1991.

Machine Learning

Another line of research has focused on ACD methods based on machine learning. With machine learning, the fundamental idea is to 'learn' (fit) two probability distributions from the samples before and after a hypothesized change point, and then to test for differences between the two distributions, often using information-theoretic distance measures such as the Kullback-Leibler divergence and the Rényi divergence, Guha et al., "Streaming and sublinear approximation of entropy and information distances," Proceedings of SODA'06, pp. 733-742, ACM Press, 2006. However, there are a number of problems with such methods.

The first problem is to learn the two distributions from the samples. When the two distributions are known to be Gaussian, the sample means and variances for two sub-windows can be determined, and the two distributions can be compared using Student's t-statistic, Gosset, “The probable error of the mean,” Biometrika, 1908.

The much more important case is when the two probability density functions (pdfs) of the distributions are not Gaussian, for example, when the distributions are multi-modal because the system switches between several distinct modes. Gaussian mixture models, which are otherwise an excellent choice for modeling multi-modal distributions, are fairly poor solutions for this particular problem because they are parametric and their fitting requires multiple iterative adjustments of the respective parameters, Hastie et al., "The Elements of Statistical Learning," Springer, 2001. This is prohibitively time consuming when considering many possible change points. Thus, such methods are not suitable for time-critical applications.

A much better alternative is to use memory-based methods, such as Parzen's kernel density estimate, Parzen, "On estimation of a probability density function and mode," Ann. Math. Stat. 33, pp. 1065-1076, 1962, also known as the Nadaraya-Watson estimate of a probability density function, see Hastie et al. In that method, the probability density p(x) is represented as a normalized sum of kernel values:

$p(x) = \frac{1}{n} \sum_{i=1}^{n} w(x - x_i),$   (1)

where $w$ is a suitably selected kernel function, and $x_i$, $i = 1, \ldots, n$, are samples drawn from the distribution to be modeled. Popular choices for the kernel are Gaussian and tri-cubic functions.
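
For example, a one-dimensional Parzen estimate with a Gaussian kernel per Equation (1) can be written in a few lines; the bandwidth parameter is an assumption of this sketch, since the text leaves the kernel and its width open.

    import numpy as np

    def parzen_density(x, samples, bandwidth=1.0):
        """Parzen estimate of p(x): the mean of the kernel values w(x - x_i)."""
        samples = np.asarray(samples, dtype=float)
        u = (x - samples) / bandwidth
        # Gaussian kernel, normalized so that the estimate integrates to one
        w = np.exp(-0.5 * u * u) / (bandwidth * np.sqrt(2.0 * np.pi))
        return w.mean()                      # (1/n) * sum of w(x - x_i)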

The second problem is comparing the two distributions after they have been fit from the samples. Some popular methods employ information-theoretic distance measures, such as the well-known Kullback-Leibler (KL) divergence:

$D_{KL}(p_0 \parallel p_1) = \int_{\Omega} p_0(x) \log \frac{p_0(x)}{p_1(x)} \, dx.$

The main difficulty when using the KL divergence is the need to integrate over the entire domain Ω of the samples x. This can be time consuming, even in one-dimensional domains, and might be impossible in multivariate cases. Using other popular information-theoretic distance measures, such as the Rényi divergence, the Jensen-Shannon distance, the Bregman divergence, and the Hellinger-Matsushita-Bhattacharya distance, leads to similar integration-related difficulties.
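
As an illustration of the point, even a one-dimensional KL divergence requires numerical quadrature over a grid; the sketch below is an assumption-laden example (the integration limits, grid size, and clipping are choices of this illustration), and repeating such an integration for every candidate change point is what makes these measures expensive.

    import numpy as np

    def kl_divergence_1d(p0, p1, lo, hi, n=10001):
        """Approximate D_KL(p0 || p1) on [lo, hi] by the trapezoidal rule.

        Assumes both densities are effectively zero outside [lo, hi].
        """
        x = np.linspace(lo, hi, n)
        f0 = p0(x)
        f1 = np.maximum(p1(x), 1e-300)       # guard against log(0)
        g = np.where(f0 > 0.0, f0 * np.log(f0 / f1), 0.0)
        return float(np.sum((g[:-1] + g[1:]) * np.diff(x)) / 2.0)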

As a result, much research has focused on approximate computation of these distances. For example, Guha et al. describe several polynomial-time approximation schemes (PTAS) that can compute approximations to most of the above distances in polynomial time. While valuable from a theoretical point of view, such PTAS methods are not likely to result in practical methods that can be used for monitoring in real applications.

Another method based on machine learning uses two sub-windows of a buffer storing samples, from which the two pdfs are estimated. If the window size is made large, then the asymptotic fit to the true pdf from which the data were sampled is good. However, if the size of the window is large, new samples start to affect the estimated post-change distribution very slowly, increasing the detection times when an actual change in distributions occurs and making it difficult to detect abrupt changes.

SUMMARY OF THE INVENTION

The embodiments of the invention provide a method for detecting changes in a stream of sensor samples that resolves this contradiction. The invention uses sub-windows of varying sizes, both before and after a possible change point. This enables both fast reaction to drastic or abrupt changes, using post-change sub-windows of a small size, e.g., only one sample, as well as sensitivity to more subtle changes using distributions learned from larger sub-windows. A direct implementation of this idea results in an increase of the computational complexity to $O(N^4)$. However, the embodiments of the invention provide efficient implementations of the method that reduce this increase in complexity.

The embodiments of the invention provide a method for detecting any change in multivariate sensor sample streams. That is, the method can detect abrupt changes in real-time, as well as slowly evolving changes over longer time periods. The method can be applied when no explicit models of the data distributions, before and after the change, are available.

The methods operate on a sliding window of samples stored in a memory buffer. The buffer stores the most recent N samples. Each new sample causes the oldest sample to be removed from the buffer. The method considers all possible partitionings of the buffer into two contiguous sub-windows of samples before and after a time at which a possible change could have occurred. Differences are determined between each pair of the contiguous sub-windows of samples, and a maximum difference is assigned as a merit score. Then, a change in the stream of samples can be signaled if the merit score is greater than a predetermined threshold.

One embodiment of the method measures an average Euclidean distance between the pairs of sub-windows of samples as a decision variable. Another embodiment of the method is based on the conventional CUSUM method. However, in contrast with the conventional method, the probability density functions are unknown, and instead of using log-likelihood estimates of known distributions, the method estimates the distributions with Parzen's kernel density estimates.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a flow diagram of a method for detecting a change in a multivariate sensor data sample stream according to an embodiment of our invention;

FIG. 1B is a block diagram of a buffer storing sensor samples according to an embodiment of the invention;

FIG. 2A is a block diagram of the buffer of FIG. 1B partitioned into unequal sub-windows;

FIG. 2B is a block diagram of different distributions of sample values according to an embodiment of the invention;

FIG. 3 is a block diagram of a triangular data structure for determining differences of samples according to an embodiment of the invention;

FIG. 4A is a diagram of kernel functions according to an embodiment of the invention;

FIG. 4B is a block diagram of triangular data structures according to an embodiment of the invention;

FIG. 5 is a block diagram of pseudo code for a memory based, graph theoretic (MB-GT) method for detecting an abrupt change in sensor data according to an embodiment of the invention;

FIG. 6 is a block diagram of variables used by memory based cumulative sum (MB-CUSUM) method for detecting an abrupt change in sensor data according to an embodiment of the invention; and

FIG. 7 is a block diagram of pseudo code used by the memory-based cumulative sum method.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG. 1A shows a method 100 for detecting any change in a multivariate sensor data sample stream 101 acquired sequentially over time by a sensor 102 from equipment and/or an environment 103. The samples 101 are stored 110 in a buffer 170, in which an oldest sample is discarded and a newest sample is stored when the buffer is full, such that the buffer 170 forms a window of samples sliding forward in time.

Then, for each new sample, the buffer is partitioned 120 into all possible pairs of contiguous sub-windows 111 of samples before and after a time 171 at which a possible change could have occurred, such that the newest sample is stored in the second sub-window of the pair. Differences 131 between each pair of all possible contiguous sub-windows 111 of the samples are determined 130. A maximum difference is assigned as a merit score, which is thresholded 140, and a change 151 is signaled 150 if the merit score is greater than a predetermined threshold.
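
The overall detection loop can be sketched as follows; here merit_score stands in for either of the difference procedures described below, and the function and parameter names are illustrative rather than taken from the patent.

    from collections import deque

    def monitor(stream, merit_score, N, threshold):
        """Sliding-window change detection over a stream of samples.

        Keeps the most recent N samples; the deque discards the oldest
        sample automatically when a new one arrives.
        """
        buf = deque(maxlen=N)
        for t, x in enumerate(stream):
            buf.append(x)                    # newest sample enters the window
            if len(buf) == N and merit_score(list(buf)) > threshold:
                yield t                      # signal a change at time t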

All embodiments of the invention operate on pairs of contiguous sub-windows of samples that have variable sizes. That is, the method considers all possible partitions of the buffer 170 of size N samples. Both methods share a common computational structure that can be exploited to significantly reduce the computational cost of considering all possible partitions of the memory buffer. Both methods work on a buffer $\Gamma_t$, which contains the most recent sensor samples, always renumbered for convenience from $x_1$ to $x_N$, such that $x_N$ is the newest sample and $x_1$ is the oldest sample; i.e., the buffer is a window of samples sliding forward in time.

A difference procedure 130, α, determines the quantitative merit score $\Upsilon_t^{\alpha}$, which is proportional to the possibility that a change has occurred within the span of the samples in the buffer. In particular, the merit score $\Upsilon_t^{\alpha}$ can be, but is not limited to, a distance measure between two sub-windows of the buffer 170. Typically, conventional methods simply partition the buffer into two equal sub-windows of samples and test for a difference between the two equal parts. In contrast, we partition the buffer into all possible pairs of sub-windows, including sub-windows storing a single sample.

As shown in FIG. 1B, a temporally sequential window of samples 101, in the form of a vector x, is stored consecutively in the buffer 170. The methods described below consider all possible pairs of indices $(i, j)$ of sub-windows of the window of sensor samples x, such that $1 \le i < j \le N$, that partition the buffer $\Gamma_t$ 170 storing the samples x 152 into two adjacent sub-windows $\{x_i, \ldots, x_{j-1}\}$ and $\{x_j, \ldots, x_N\}$. Here, time t increases from left to right, and the notation $\{x_p, \ldots, x_q\}$ denotes the sub-window of samples with indices p through q.

FIG. 2A shows the buffer 170 with samples 1 to N. The sample with index $i$ defines the beginning of the first sub-window of the pair, and the sample with index $j$ defines the first sample of the second sub-window and, respectively, the hypothesized change point associated with the partitioning 171. The sample at the end of the first sub-window has index $j-1$, ensuring contiguity of the pair of sub-windows, whereas the end of the second sub-window is always the newest sample, $x_N$.

The two sub-sets of samples stored in the two sub-windows can have an unequal number of samples. A sub-window might even consist of a single sample. Therefore, a sub-window of samples can include one or more samples, i.e., the number of samples in a sub-window can be in the range [1, N−1].

If a difference or merit score $\Upsilon_t^{\alpha}(i,j)$ is determined for each possible pair of sub-windows, subject to the listed constraints, then the overall merit score is the maximum over all partitionings:

$\Upsilon_t^{\alpha} = \max_{1 \le i < j \le N} \Upsilon_t^{\alpha}(i,j).$

Then, the modeling challenge is to determine which merit score to use. The computational challenge is to evaluate this merit score efficiently for each new sample $x_t$.
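
A direct, unoptimized realization of this maximization simply scans every admissible index pair (i, j), and serves as the baseline that the procedures below improve on. The following sketch is illustrative; the per-partition score is passed in as a function, which is an abstraction of this example rather than of the patent.

    def overall_merit(buf, partition_score):
        """Maximize a per-partition score over all pairs 1 <= i < j <= N.

        buf holds the samples x_1..x_N (0-based in Python); the first
        sub-window is x_i..x_{j-1} and the second is x_j..x_N.
        """
        N = len(buf)
        best = float('-inf')
        for i in range(0, N - 1):            # 0-based i corresponds to x_{i+1}
            for j in range(i + 1, N):        # 0-based j marks the change point x_{j+1}
                best = max(best, partition_score(buf[i:j], buf[j:]))
        return best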

A Memory-Based Graph Theoretic Procedure (MB-GT)

As shown in FIG. 2B, one solution to the problem of determining the difference 200 (distance) between the two distributions 211-212 of the two sub-windows 221-222 of samples is to determine the average distance between the samples themselves. Because each sample is a data point in a multi-dimensional Euclidean space, a natural distance measure between two samples $x_k$ and $x_l$ is their Euclidean distance


$d_{k,l} = \lVert x_k - x_l \rVert.$

For a particular partitioning defined by the index pair (i,j) specified as above, we can determine the average distance between the two sub-windows of samples as

$C_{i,j} = \frac{\sum_{k=i}^{j-1} \sum_{l=j}^{N} d_{k,l}}{(j-i)(N-j+1)}.$

This memory-based graph theoretic method has a merit score


$\Upsilon_t^{\alpha} = \max_{1 \le i < j \le N} C_{i,j}.$

Because determining each $C_{i,j}$ has complexity $O(N^2)$, and there are $O(N^2)$ such terms to be considered, the overall complexity is $O(N^4)$. This complexity is unacceptable for practical applications.

However, the determination of the individual $C_{i,j}$ terms has a certain redundancy and repetitive structure that can be exploited to bring the computational complexity back down to $O(N^2)$.

If we define

$C'_{i,j} \doteq \sum_{k=i}^{j-1} \sum_{l=j}^{N} d_{k,l}, \qquad \beta_{i,j} \doteq \sum_{l=j}^{N} d_{i,l},$

so that $C_{i,j} = C'_{i,j} / \left( (j-i)(N-j+1) \right)$,

then the following recurrent relationships hold:


βi,j−1i,j+di,j−1,


with


βi,N+1=0,


And


C′i−1,j=C′i,ji−1,j,


with


C′j,j=0 for all 1≦j≦N.

As shown in FIG. 3, these recurrences suggest the following efficient computational process. The values $\beta_{i,j}$ and $C'_{i,j}$ are stored in data structures 301-302 that are conceptually similar to a matrix. This matrix is upper triangular due to the constraint $i < j$. Computation starts with the bottom row 310 of this matrix, which has a single element $C'_{N,N}$, which is zero by definition. For each row $i$, $1 \le i < N$, above the bottom row, proceeding from bottom to top, two steps are performed:

    • 1) All values $\beta_{i,j}$ are computed recurrently from their immediate neighbors to the right, proceeding right to left, using the recurrence $\beta_{i,j-1} = \beta_{i,j} + d_{i,j-1}$.

    • 2) All values $C'_{i,j}$ are computed from the respective values $\beta_{i,j}$ and the values $C'_{i+1,j}$ in the row immediately below the current one, using the recurrence $C'_{i,j} = C'_{i+1,j} + \beta_{i,j}$.

Determining the merit score $\Upsilon_t^{\alpha}$ can be done concurrently with the computation of the individual terms $C'_{i,j}$, because that only involves normalization and maximization. For this reason, it is not necessary to store all values $C'_{i,j}$ and $\beta_{i,j}$ in the buffer. It suffices to keep a buffer of size N for the current row i of values $\beta_{i,j}$, and two buffers of the same size for $C'_{i,j}$ and $C'_{i+1,j}$. Thus, the memory requirement of this procedure is only $O(N)$, and the complexity of the computation is only $O(N^2)$.
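
The two recurrences translate directly into code. The sketch below is a minimal illustration assuming scalar samples, so the Euclidean distance reduces to an absolute difference; for multivariate samples, a vector norm would take its place. Indices follow the 1-based notation of the text.

    import numpy as np

    def mb_gt_merit(buf):
        """MB-GT merit score, max over C_{i,j}, in O(N^2) time and O(N) memory."""
        x = np.asarray(buf, dtype=float)
        N = len(x)
        c_below = np.zeros(N + 1)            # row i+1 of C'; C'_{j,j} = 0
        best = float('-inf')
        for i in range(N - 1, 0, -1):        # rows i = N-1, ..., 1, bottom to top
            c_row = np.zeros(N + 1)
            beta = 0.0                       # beta_{i,N+1} = 0
            for j in range(N, i, -1):        # proceed right to left
                beta += abs(x[i - 1] - x[j - 1])          # beta_{i,j} = beta_{i,j+1} + d_{i,j}
                c_row[j] = c_below[j] + beta              # C'_{i,j} = C'_{i+1,j} + beta_{i,j}
                best = max(best, c_row[j] / ((j - i) * (N - j + 1)))   # normalize to C_{i,j}
            c_below = c_row
        return best

On small buffers, this can be verified against a brute-force evaluation of $C_{i,j}$ over all pairs, such as the overall_merit sketch above.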

A Memory-Based CUSUM Procedure (MB-CUSUM)

As shown in FIG. 4A, the second embodiment of the invention has a probabilistic foundation, which enables the method to achieve optimal change detection under certain modeling conditions. Because the distributions of the samples are unknown, we first learn the distributions by Parzen's kernel density estimates, e.g., with a Gaussian or tricubic kernel function. In FIG. 4A, the vertical axis is the probability density of the distributions, and the horizontal axis is time.

At the same time, despite its very different theoretical foundation, the second method has a computational structure similar to that of the first embodiment. We describe how this structure can be leveraged to achieve the same significant improvement in computational complexity.

Following the derivation of CUSUM, we consider the following hypotheses about a possible change within the N samples stored in the buffer 170:

$H_{i,0}: x_k \sim p_0 \text{ for } i \le k \le N$   (2)

and, for $1 \le i < j \le N$,

$H_{i,j}: x_k \sim p_0 \text{ for } i \le k \le j-1,$   (3)

$\qquad\ \ x_l \sim p_1 \text{ for } j \le l \le N.$   (4)

Here, we are considering the null hypothesis $H_{i,0}$ that no change has occurred while the most recent $N-i+1$ samples are acquired, versus multiple hypotheses $H_{i,j}$ that such a change has occurred. By introducing the starting index i, we are expanding the set of hypotheses to be tested to those that do not necessarily use all N samples in the window.

According to the Neyman-Pearson lemma, Hoel et al., "Testing Hypotheses," Ch. 3 in Introduction to Statistical Theory, New York: Houghton Mifflin, pp. 56-67, 1971, incorporated herein by reference, the best test that we can perform when testing each particular hypothesis $H_{i,j}$ vs. $H_{i,0}$, i.e., the test that has the highest probability of rejecting a false null hypothesis, is the likelihood ratio

$\Lambda_{i,j} = \frac{\prod_{k=i}^{j-1} p_0(x_k) \cdot \prod_{l=j}^{N} p_1(x_l)}{\prod_{k=i}^{N} p_0(x_k)}.$

For convenience, the log-likelihood ratio $S_{i,j} = \log(\Lambda_{i,j})$ is commonly used. In our method, we replace the true pdfs $p_0$ and $p_1$ with their kernel density estimates, as described by Equation (1), to result in

$S_{i,j} = \sum_{l=j}^{N} \log \frac{\frac{1}{N-j+1} \sum_{k=j}^{N} w_{l,k}}{\frac{1}{j-i} \sum_{k=i}^{j-1} w_{l,k}}, \qquad w_{l,k} \doteq w(x_l - x_k).$   (5)

Here $w_{l,k}$ is a kernel weight for the pair of samples $(x_l, x_k)$, and it also holds that the weights satisfy $w_{l,k} = w(d_{l,k})$. By using the maximum likelihood principle, the merit score for this procedure is


$\Upsilon_t^{\alpha} = \max_{1 \le i < j \le N} S_{i,j}.$

As above, a direct computation of the merit score has a computational complexity of $O(N^4)$. However, this merit score has a structure similar to that of the merit score for the first embodiment, which can again be exploited to reduce its computational complexity.
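
For reference, Equation (5) can be evaluated directly for a single pair (i, j), as in the sketch below (1-based indices, with the kernel supplied as a function, both choices of this example); scanning all O(N^2) pairs this way yields the O(N^4) cost quoted above.

    import numpy as np

    def s_ij_direct(x, i, j, kernel):
        """Direct evaluation of Equation (5) for one partitioning, 1 <= i < j <= N."""
        N = len(x)
        s = 0.0
        for l in range(j, N + 1):
            # kernel density estimates of p1(x_l) and p0(x_l) from the two sub-windows
            post = np.mean([kernel(x[l - 1] - x[k - 1]) for k in range(j, N + 1)])
            pre = np.mean([kernel(x[l - 1] - x[k - 1]) for k in range(i, j)])
            s += np.log(post / pre)
        return s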

As shown in FIG. 4B, again, we can conceptually organize the values $S_{i,j}$ and $v'_{i,j}$ 401-402 in triangular data structures, and define the following auxiliary variables:

$\mu_j^l \doteq \sum_{k=j}^{N} w_{l,k}, \qquad v_{i,j}^l \doteq \sum_{k=i}^{j-1} w_{l,k}.$   (6)

Although there appear to be $O(N^3)$ terms $v_{i,j}^l$ to be computed, the recurrent re-formulation of Equation (5)


$S_{i,j} = S_{i,j+1} + \log \mu_j^j - \log v_{i,j}^j + \log(j-i) - \log(N-j+1)$   (7)

convinces us that not all of the terms are needed. By further defining $\mu'_j = \mu_j^j$ and $v'_{i,j} = v_{i,j}^j$, we can use the following equations as a basis for an efficient procedure:

$\mu'_j \doteq \sum_{k=j}^{N} w_{j,k}, \qquad v'_{i,j} = v'_{i+1,j} + w_{j,i}.$   (8)

Note that only the term for $v'_{i,j}$ is recurrent; the term for $\mu'_j$ is computed directly. These equations suggest the following procedure:

S1: Compute $\mu'_j$ for $j = 1, \ldots, N$ directly, per Equation (8). This computation has complexity $O(N^2)$, but the results can be stored in $O(N)$ space.

S2: For each row $i = N, \ldots, 1$ of the matrix $S_{i,j}$, starting from the bottom row 410 ($i = N$) and moving upward to the first row ($i = 1$), perform the following two steps:

    • S2.1: For each value of $j$ between $i+1$ and $N$, compute $v'_{i,j}$ from the corresponding $v'_{i+1,j}$ in the row below and from $w_{j,i}$, per Equation (8).
    • S2.2: For each value of $j$ between $N$ and $i+1$, compute $S_{i,j}$ from the value $S_{i,j+1}$ immediately to the right, using the equation $S_{i,j} = S_{i,j+1} + \log \mu'_j - \log v'_{i,j} + \log(j-i) - \log(N-j+1)$, starting with $S_{i,N+1} = 0$ for all $i = 1, \ldots, N$. The computation in this step proceeds strictly from right to left ($j = N, \ldots, i+1$); a code sketch combining S1 and S2 follows.
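
Putting steps S1 and S2 together gives the following sketch in Python, which fuses S2.1 and S2.2 into a single right-to-left pass per row; this is valid because $v'_{i,j}$ depends only on the row below, not on neighboring columns. Scalar samples, a Gaussian kernel, and the bandwidth parameter are illustrative assumptions of this example, not requirements of the method.

    import numpy as np

    def mb_cusum_merit(buf, bandwidth=1.0):
        """MB-CUSUM merit score, max over S_{i,j}, in O(N^2) time, O(N) memory."""
        x = np.asarray(buf, dtype=float)
        N = len(x)

        def w(u):
            # Gaussian kernel weight w(u)
            return np.exp(-0.5 * (u / bandwidth) ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))

        # S1: mu'_j = sum over k = j..N of w_{j,k}, computed directly
        mu = np.zeros(N + 1)
        for j in range(1, N + 1):
            mu[j] = np.sum(w(x[j - 1] - x[j - 1:]))

        best = float('-inf')
        v_below = np.zeros(N + 1)            # v'_{i+1,j} from the row below
        for i in range(N - 1, 0, -1):        # S2: rows i = N-1, ..., 1, bottom to top
            v_row = np.zeros(N + 1)
            S = 0.0                          # S_{i,N+1} = 0
            for j in range(N, i, -1):        # strictly right to left
                v_row[j] = v_below[j] + w(x[j - 1] - x[i - 1])    # Equation (8)
                S += (np.log(mu[j]) - np.log(v_row[j])
                      + np.log(j - i) - np.log(N - j + 1))        # Equation (7)
                best = max(best, S)          # max over S_{i,j}
            v_below = v_row
        return best

On small buffers, the result can be checked against a direct evaluation of Equation (5), such as the s_ij_direct sketch above, using the same kernel and bandwidth.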

FIG. 5 shows pseudo code for the first memory-based graph theoretic (MB-GT) method that determines $\Upsilon_t^{MB-GT}$ in greater detail. FIG. 6 shows the variables for the second memory-based cumulative sum (MB-CUSUM) method that determines $\Upsilon_t^{MB-CUSUM}$, and FIG. 7 shows the pseudo code in greater detail.

Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Claims

1. A method for detecting a change in a stream of samples acquired by a sensor, comprising the steps of:

storing sequentially a stream of samples acquired by a sensor over time in a buffer in which an oldest sample is discarded and a newest sample is stored when the buffer is full, such that the buffer forms a window of samples sliding forward in time;
partitioning, for each new sample, the buffer into all possible pairs of contiguous sub-windows of samples including a first sub-window and a second sub-window such that the newest sample is stored in the second sub-window of the pair;
determining a difference between the first and second sub-window of each pair of the contiguous sub-windows of samples;
assigning a maximum difference as a merit score; and
signaling a change in the stream of samples if the merit score is greater than a predetermined threshold.

2. The method of claim 1, in which the change is abrupt in time.

3. The method of claim 1, in which the change evolves relatively slowly over time.

4. The method of claim 1, in which a distribution of the samples is unknown.

5. The method of claim 1, in which the difference is a Euclidean distance between values of the samples in the first sub-window and the second sub-window.

6. The method of claim 1, in which the first and second sub-windows have an unequal number of samples.

7. The method of claim 1, in which the number of samples in the buffer is N, and the number of samples in the sub-windows can be in the range of [1, N−1].

8. The method of claim 1, in which a complexity of the difference determination is $O(N^2)$, where N is a number of samples stored in the buffer.

9. The method of claim 1, in which values of the samples in the first and second sub-window are stored in respective triangular matrices, and the differences are determined recurrently from immediate adjacent samples.

10. The method of claim 1, further comprising:

estimating the distributions using Parzen's kernel estimates.

11. The method of claim 10, in which the differences are determined using a cumulative sum procedure of log-likelihoods of the kernel estimates.

Patent History
Publication number: 20090177443
Type: Application
Filed: Apr 23, 2007
Publication Date: Jul 9, 2009
Inventors: Daniel N. Nikovski (Cambridge, MA), Ankur Jain (Cambridge, MA)
Application Number: 11/738,562
Classifications
Current U.S. Class: Using Matrix Operation (702/196)
International Classification: G06F 17/16 (20060101);