MULTI-DISTANCE CLUSTERING

Systems, methods, and other embodiments associated with multi-distance clustering are described. In one embodiment, a method includes reading a multi-distance similarity matrix S that records pair-wise multi-distance similarities between respective pairs of data points in a data set. Each pair-wise similarity is based on distances between a pair of data points calculated using K different distance functions, where K is greater than one. The method includes clustering the data points in the data set into n clusters based on the similarity matrix S. The number of clusters n is not determined prior to the clustering.

DESCRIPTION
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to and is a continuation-in-part of U.S. patent application Ser. No. 14/669,792, filed on Mar. 26, 2015 and titled “MULTI-DISTANCE CLUSTERING”, which is incorporated herein by reference.

BACKGROUND

Data mining and decision support technologies use machine learning to identify patterns in data sets. Machine learning techniques include data classification, data clustering, pattern recognition, and information retrieval. Technology areas that utilize machine learning include merchandise mark-down services in retail applications, clinician diagnosis and treatment plan assistance based on similar patients' characteristics, and general purpose data mining. The various machine learning techniques rely, at their most basic level, on a distance between pairs of data points in a set of data as a measure of similarity or dissimilarity. Machine learning has become one of the most popular data analysis and decision-making support tools in recent years. A wide variety of data analysis software packages incorporate machine learning to discover patterns in large quantities of data.

Clustering or data grouping is one of the fundamental data processing activities. Clustering seeks to uncover otherwise hidden relationships between data objects with the goal of using the relationships to predict outcomes based on new data objects. For example, by identifying clusters in a set of patient data, an analyst can identify subgroups of patients with different success rates to specific treatments based on patients' data. The treatment plan for a new patient can then be based on the relationship between the new patient's data and the data for patients in the various subgroups, thus maximizing the success probability for the selected treatment regimen.

Clustering, as a data analysis tool, creates groups of data that are “close” together, where “close” implies a distance metric. Distance calculations used in clustering are defined by an analyst for the type of data based on the analyst's subjective intuition and/or experience about the similarity of the data. In some clustering techniques, the analyst selects a number of clusters to be created. Thus, the analyst's bias is present in some form in the resulting clustering, which may be overfit to existing data and produce arbitrarily uncertain results on new data.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate various systems, methods, and other embodiments of the disclosure. It will be appreciated that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one embodiment of the boundaries. In some embodiments, one element may be designed as multiple elements, or multiple elements may be designed as one element. In some embodiments, an element shown as an internal component of another element may be implemented as an external component and vice versa. Furthermore, elements may not be drawn to scale.

FIG. 1 illustrates an embodiment of a system associated with similarity analysis with tri-point data arbitration.

FIG. 2 illustrates an embodiment of a method associated with similarity analysis with tri-point data arbitration.

FIG. 3 illustrates results of one embodiment of a system associated with similarity analysis with multi-distance tri-point data arbitration.

FIG. 4 illustrates an embodiment of a method associated with similarity analysis using multi-distance tri-point data arbitration.

FIG. 5 illustrates results of one embodiment of a system associated with multi-distance clustering.

FIG. 6 illustrates an embodiment of a method associated with multi-distance clustering.

FIG. 7 illustrates an embodiment of a method associated with multi-distance clustering that is based on spectral theory.

FIG. 8 illustrates an embodiment of a computing system in which example systems and methods, and equivalents, may operate.

DETAILED DESCRIPTION

The basic building block of traditional similarity analysis in machine learning and data mining is categorizing data and their attributes into known and well-defined domains and identifying appropriate relations for handling the data and their attributes. For example, similarity analysis includes specifying equivalence, similarity, partial order relations, and so on. In trivial cases when all attributes are numeric and represented by real numbers, comparing data point attributes is done by using the standard less-than, less-than-or-equal, more-than, and more-than-or-equal relations, and comparing points by computing distances (e.g., Euclidean) between the two points. In this case, the distance between two data points serves as the measure of similarity between the data points. If the distance is small, the points are deemed similar. If the distance is large, the points are deemed dissimilar.

A matrix of pair-wise distances between all data points in a data set is a standard similarity metric that is input to a variety of data mining and machine learning tools for clustering, classification, pattern recognition, and information retrieval. Euclidean distance is one possible distance between data points for use in the pair-wise matrix. A variety of other distance-based measures may be used depending on the specific domain of the data set. However, the distance based measures used in traditional machine learning are understandably all based on two data points.

One of the deficiencies of the traditional two data point distance approach to similarity analysis is the subjectivity that is introduced into the analysis by an outside analyst. An outside analyst determines the threshold on distances that indicate similarity. This leads to non-unique outcomes which depend on the analyst's subjectivity in threshold selection.

Traditionally, a determination as to what constitutes “similarity” between data points in a data set is made by an analyst outside the data set. For example, a doctor searching for patients in a data set having “similar” age to a given patient specifies an age range in her query that, in her opinion, will retrieve patients with a similar age. However, the age range that actually represents “similar” ages depends upon the data set itself. If the data set contains patients that are all very similar in age to the given patient, the query may be under-selective, returning too many patients to analyze effectively. If the data set contains patients with a wide variety of ages, the query may be over-selective, missing the most similar patients in the data set.

Another deficiency in the traditional two point distance approach to similarity analysis is the conceptual difficulty of combining attributes of different types into an overall similarity of objects. The patient age example refers to a data point with a single numerical attribute. Most machine learning is performed on data points that have hundreds of attributes, possibly with non-numerical values. Note that the analyst will introduce their own bias in each dimension, possibly missing data points that are actually similar to a target data point. Some pairs of points may be close in distance for a subset of attributes of one type and far apart in distance for another subset of attribute types. Thus, the analyst may miss data points that are similar to the target data point for reasons that are as yet unappreciated by the analyst. Proper selection of the similarity metric is fundamental to the performance of clustering, classification, and pattern recognition methods used to make inferences about a data set.

The proper selection of the distance function used to determine the similarity metric plays a central role in similarity analysis. Hundreds of distance functions have been proposed and used in the analysis of various data types. For example, there are at least seventy-six different distance functions that can be used for simple binary data represented by sequences of 0's and 1's. Selecting the “right” one of these distance functions for a given dataset places a great deal of burden on the analyst. In addition, it is likely that there will be differences in the results obtained with different distance functions, and those differences will be difficult to understand. Selecting the proper distance function is even more difficult in the analysis of complex data types involving free text, graphics, and multimedia data.

Traditional approaches to similarity analysis that consider multiple different distance functions when determining similarity use a weighted sum of several relevant distances. This approach produces results that are highly dependent on the selected weights, meaning that it is important to select appropriate values for the individual weights. Therefore, the already complicated analysis of the data becomes even more complicated and prone to user bias, estimation errors and instabilities, and non-uniqueness of results.

U.S. patent application Ser. No. 13/680,417 filed on Nov. 19, 2012, invented by Urmanov and Bougaev, and assigned to the assignee of the present application provides a detailed description of tri-point arbitration. The '417 application is incorporated herein by reference in its entirety for all purposes. Tri-point arbitration addresses the problem of analyst bias in determining similarity. Rather than determining similarity by an external analyst, tri-point arbitration determines similarity with an internal arbiter that is representative of the data set itself. Thus, rather than expressing similarity based on distances between two points and forcing the analyst to determine a range of distances that is similar, tri-point arbitration uses three points to determine similarity, thereby replacing the external analyst with an internal arbiter point that represents the data set, i.e., introducing an internal analyst into similarity determination.

The present application describes a multi-distance extension of tri-point arbitration that allows for seamless combination of several distance functions for analysis of compound data. Thus, the systems and methods described herein address the problem of analyst bias in selecting distance functions and/or weighting of the distance functions to be used in similarity analysis. A brief overview of tri-point arbitration is next, which will be followed by a description of multi-distance tri-point arbitration.

Tri-Point Arbitration

Tri-point arbitration is realized through the introduction of an arbiter data point into the process of evaluation of the similarity of two or more data points. The term “data point” is used in the most generic sense and can represent points in a multidimensional metric space, images, sound and video streams, free texts, genome sequences, collections of structured or unstructured data of various types. Tri-point arbitration uncovers the intrinsic structure in a group of data points, facilitating inferences about the interrelationships among data points in a given data set or population. Tri-point arbitration has extensive application in the fields of data mining, machine learning, and related fields that in the past have relied on two point distance based similarity metrics.

With reference to FIG. 1, one embodiment of a tri-point arbitration learning tool 100 that performs similarity analysis using tri-point arbitration is illustrated. The learning tool 100 inputs a data set X of k data points and calculates a similarity matrix [S] using tri-point arbitration. The learning tool 100 includes a tri-point arbitration similarity logic 110. The tri-point arbitration logic 110 selects a data point pair (x1, x2) from the data set. The tri-point arbitration logic 110 also selects an arbiter point (a1) from a set of arbiter points, A, that is representative of the data set. Various examples of sets of arbiter points will be described in more detail below. The tri-point arbitration logic 110 calculates a per-arbiter tri-point arbitration similarity for the data point pair based, at least in part, on a distance between the first and second data points and the selected arbiter point a1.

FIG. 2 illustrates the basis of one embodiment of a tri-point arbitration technique that may be used by the tri-point arbitration logic 110 to compute the per-arbiter tri-point arbitration similarity for a single data point pair. A plot 200 illustrates a spatial relationship between the data points in the data point pair (x1, x2) and an arbiter point a. Recall that the data points and arbiter point will typically have many more dimensions than the two shown in the simple example plot 200. The data points and arbiter points may be points or sets in multi-dimensional metric spaces, time series or other collections of temporal nature, free text descriptions, and various transformations of these. A tri-point arbitration similarity for data points (x1, x2) with respect to arbiter point a is calculated as shown in 210, where ρD designates a two-point distance determined according to a chosen distance function D:

S(x1, x2 | a) = [min{ρD(x1, a), ρD(x2, a)} − ρD(x1, x2)] / max{ρD(x1, x2), min{ρD(x1, a), ρD(x2, a)}}  EQ. 1

Thus, the tri-point arbitration technique illustrated in FIG. 2 calculates the tri-point arbitration similarity based on a first distance between the first and second data points, a second distance between the arbiter point and the first data point, and a third distance between the arbiter point and the second data point.

Values for the per-arbiter tri-point arbitration similarity, S(x1, x2|a), range from −1 to 1. In terms of similarities, S(x1, x2|a) is greater than 0 when both distances from the arbiter to the data points are greater than the distance between the data points themselves. In this situation, the data points are closer to each other than to the arbiter. Thus a positive tri-point arbitration similarity indicates that the points are similar, and the magnitude of the positive similarity indicates a level of similarity. S(x1, x2|a) equal to one indicates the highest level of similarity, where the two data points are coincident with one another.

In terms of dissimilarity, S(x1, x2|a) is less than zero when the distance between the arbiter and one of the data points is less than the distance between the data points. In this situation, the arbiter is closer to one of the data points than the data points are to each other. Thus a negative tri-point arbitration similarity indicates dissimilarity, and the magnitude of the negative similarity indicates a level of dissimilarity. S(x1, x2|a) equal to negative one indicates a complete dissimilarity between the data points, when the arbiter coincides with one of the data points.

A tri-point arbitration similarity equal to zero results when the arbiter and data points are equidistant from one another. Thus S(x1, x2|a)=0 indicates complete neutrality with respect to the arbiter point, meaning that the arbiter point cannot determine whether the points in the data point pair are similar or dissimilar.
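
As a concrete illustration, the following is a minimal sketch of EQ. 1 in Python, assuming numeric data points held as NumPy arrays and Euclidean distance as the two-point distance ρD. The function names are illustrative and not taken from the embodiments described herein.

```python
import numpy as np

def euclidean(p, q):
    """Two-point distance rho_D; any other distance function could be substituted."""
    return np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))

def tripoint_similarity(x1, x2, a, dist=euclidean):
    """Per-arbiter tri-point similarity S(x1, x2 | a) per EQ. 1; range [-1, 1]."""
    d_pair = dist(x1, x2)                    # rho_D(x1, x2)
    d_arb = min(dist(x1, a), dist(x2, a))    # min of the arbiter-to-point distances
    denom = max(d_pair, d_arb)
    if denom == 0.0:                         # all three points coincide; treat as neutral
        return 0.0
    return (d_arb - d_pair) / denom
```

Coincident data points with a distinct arbiter yield +1, an arbiter coincident with one of the data points yields −1, and equidistant configurations yield 0, matching the boundary behavior described above.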

Aggregating Per-Arbiter Tri-Point Similarities

Returning to FIG. 1, the tri-point arbitration similarity logic 110 calculates additional respective per-arbiter tri-point arbitration similarities for the data point pair (x1, x2) based on the respective arbiter points (a2, . . . , am) and combines the per-arbiter tri-point arbitration similarities for each data pair in a selected manner to create a tri-point arbitration similarity, denoted S(x1, x2|A), for the data point pair. The tri-point arbitration logic 110 computes tri-point arbitration similarities for the other data point pairs in the data set. In this manner, the tri-point arbitration logic 110 determines a pair-wise similarity matrix [S], as illustrated in FIG. 1.

As already discussed above, the arbiter point(s) represent the data set rather than an external analyst. There are several ways in which a set of arbitration points may be selected to represent the data set. The set of arbiter points A may represent the data set based on an empirical observation of the data set. For example, the set of arbiter points may include all points in the data set. The set of arbiter points may include selected data points that are weighted when combined to reflect a contribution of the data point to the overall data set. The tri-point arbitration similarity calculated based on a set of arbitration points that are an empirical representation of the data set may be calculated as follows:

S(x1, x2 | A) = (1/m) Σ(i=1 to m) S(x1, x2 | ai)  EQ. 2

Variations of aggregation of arbiter points including various weighting schemes may be used. Other examples of aggregation may include majority/minority voting, computing median, and so on. For a known or estimated probability distribution of data points in the data set, the set of arbitration points corresponds to the probability distribution, f(a). The tri-point arbitration similarity can be calculated using an empirical observation of the data point values in the data set, an estimated distribution of the data point values in the data set, or an actual distribution of data point values in the data set. Using tri-point arbitration with an arbiter point that represents the data set yields more appealing and practical similarity results than using a traditional two point distance approach.
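
A short sketch of the empirical aggregation in EQ. 2, reusing the hypothetical tripoint_similarity() helper sketched above; weighted sums, voting, medians, or other aggregations could be substituted for the mean.

```python
def tripoint_similarity_over_arbiters(x1, x2, arbiters, dist=euclidean):
    """Average the per-arbiter similarities over the arbiter set A (EQ. 2)."""
    return sum(tripoint_similarity(x1, x2, a, dist) for a in arbiters) / len(arbiters)
```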

Per-Attribute Tri-Point Arbitration Similarity Analysis

In another embodiment that may be more suitable for data containing non-numeric attributes converted into numeric values, the arbiter and a pair of data points are compared in each attribute or dimension separately and then the results of the comparison for all arbiters in each dimension are combined to create an overall comparison. This approach is useful i) for non-numerical data, such as binary yes/no data or categorical data, ii) when the magnitude of the difference in a dimension doesn't matter, or iii) when some of the data attributes are more important than others. In this embodiment, the distances between attributes of the points and each given arbiter are not combined to compute per-arbiter similarities. Instead distances between attributes of the points and the arbiters are combined on a per attribute basis for all the arbiters to compute “per-attribute similarities.” The per-attribute similarities for each arbiter are combined to compute the tri-point arbitration similarity S for the data point pair. U.S. patent application Ser. No. 13/833,757 filed on Mar. 15, 2013, invented by Urmanov, Wood, and Bougaev, and assigned to the assignee of the present application provides a detailed description of per-attribute tri-point arbitration. The '757 application is incorporated herein by reference in its entirety for all purposes.

Distances between attributes of different types may be computed differently. A per-attribute similarity is computed based on the distances, in the attribute, between the arbiters and each member of the pair of data points. The per-attribute similarity is a number between −1 and 1. If the arbiter is farther from both of the data points in the pair than the data points in the pair are from each other, then the pair of data points is similar to each other, for this attribute, from the point of view of the arbiter. Depending on the distances between the arbiter and the data points, the per-attribute similarity will be a positive number less than or equal to 1.

Otherwise, if the arbiter is closer to either of the data points in the pair than the data points are to each other, then the pair of data points is not similar to each other, for this attribute, from the point of view of the arbiter. Depending on the distances between the arbiter and the data points, the per-attribute similarity will be a negative number greater than or equal to −1.

Per-attribute distances can be combined in any number of ways to create the tri-point arbitration similarity. Per-attribute tri-point arbitration similarities can be weighted differently when combined to create the tri-point arbitration similarity. Per-attribute tri-point arbitration similarities for a selected subset of arbiters may be combined to create the tri-point arbitration similarity. For example, all per-attribute tri-point arbitration similarities for a given numeric attribute for all arbiters can be combined for a pair of points to create a first per-attribute similarity, all per-attribute tri-point arbitration similarities for a given binary attribute can be combined for the pair of points to create a second per-attribute similarity, and so on. The per-attribute similarities are combined to create the tri-point arbitration similarity for the data point pair.

In one embodiment, a proportion of per-attribute similarities that indicate similarity may be used as the tri-point arbitration similarity metric. For example, if two data points are similar in 3 out of 5 attributes, then the data points may be assigned a tri-point arbitration similarity metric of 3/5.
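
A minimal sketch of that proportion-based metric, assuming the per-attribute similarities have already been computed; the function name is illustrative.

```python
def proportion_similarity(per_attribute_sims):
    """Fraction of per-attribute similarities (each in [-1, 1]) indicating similarity."""
    similar = sum(1 for s in per_attribute_sims if s > 0)
    return similar / len(per_attribute_sims)

# Similar in 3 out of 5 attributes -> similarity metric of 3/5.
assert proportion_similarity([0.4, -0.2, 0.9, 0.1, -0.7]) == 0.6
```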

Returning to FIG. 1, the illustrated pair-wise similarity matrix [S] arranges the tri-point arbitration similarities for the data points in rows and columns where rows have a common first data point and columns have a common second data point. When searching for data points that are similar to a target data point within the data set, either the row or column for the target data point will contain tri-point arbitration similarities for the other data points with respect to the target data point. High positive similarities in either the target data point's row or column may be identified to determine the most similar data points to the target data point. Further, the [S] matrix can be used for any number of learning applications, including clustering and classification based on the traditional matrix of pair-wise distances. The matrix [S] may also be used as a proxy for similarity/dissimilarity of the pairs.

Multi-Distance Tri-Point Arbitration

Often datasets are produced by compound data-generating mechanisms, meaning that the variation in the data points is produced by variations in more than one factor. Hereinafter this type of dataset will be referred to as a compound dataset. For example, data corresponding to a dimension of an orifice in a series of manufactured parts being measured for quality control purposes may vary because of both an offset of the orifice within the part as well as variations in the shape of the orifice. Using a single distance function to determine similarities in the data will likely not be able to identify orifices as similar that are similar in both shape and offset. Rather a single distance function will typically only identify as similar orifices that are similar in either shape or offset.

Many different distance functions can be used in similarity analysis. Probably the most basic and easily understood distance function is the Euclidean distance, which corresponds to a length of a line segment drawn between two points. Another distance function is the Pearson Correlation distance. The Pearson Correlation is a measure of the linear correlation between two data points. The Pearson Correlation distance is based on this correlation. The Cosine distance function produces a distance between two data points that is based on an angle between a first vector from the origin to the first data point and a second vector from the origin to the second data point. Hundreds of other distance functions have been theorized, any of which is suitable for use in multi-distance tri-point arbitration.
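
For concreteness, below are common textbook definitions of the Pearson Correlation distance and Cosine distance named above (Euclidean was sketched earlier); scipy.spatial.distance offers equivalent routines. These are standard formulas, not definitions specific to the embodiments described herein.

```python
import numpy as np

def pearson_distance(p, q):
    """1 minus the Pearson correlation; 0 when the points are perfectly correlated."""
    return 1.0 - np.corrcoef(p, q)[0, 1]

def cosine_distance(p, q):
    """1 minus the cosine of the angle between the vectors from the origin."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 1.0 - p.dot(q) / (np.linalg.norm(p) * np.linalg.norm(q))
```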

For compound datasets, it is important to utilize more than one distance function when determining similarity. Consider the orifice example from above. If tri-point arbitration similarity is determined between orifices based only on a Euclidean distance, orifices having similar offsets will be determined to be similar to one another. However, the pairs of orifices determined to be similar will include pairs of orifices that have similar offset but non-similar shapes as well as pairs of orifices that have similar offset and similar shape. Likewise, if tri-point arbitration similarity is determined between orifices based only on a Pearson Correlation distance, orifices having similar shapes will be determined to be similar to one another. However, the pairs of orifices determined to be similar will include pairs of orifices that have similar shape but non-similar offsets as well as pairs of orifices that have similar shape and similar offset.

As discussed above, traditional similarity analysis techniques that consider distances produced by more than one distance function utilize weighting to combine the different distances. The selection of the weights as well as the different distance functions introduces analyst bias into similarity analysis. Multi-distance tri-point arbitration allows for seamless combination of several distance functions for analysis of compound data.

FIG. 3 illustrates one example embodiment of a multi-distance tri-point arbitration learning tool 300. The multi-distance tri-point arbitration learning tool 300 is configured to implement multi-distance clustering upon features of an electronic communication attempting to access a user account or service. This is performed to classify the electronic communication as either safe/expected/normal electronic communication or malicious/anomalous electronic communication. Such features can correspond to a time at which the electronic communication was sent. A feature can correspond to a location from which the electronic communication was sent. A feature can correspond to various device features, such as device type and IP address, of the device that sent the electronic communication. In this way, the multi-distance tri-point arbitration learning tool 300 can extract and cluster features from an electronic communication. The features may be clustered in a cluster dissimilar from clusters of expected/safe features of electronic communications routinely received when the user account or service is accessed in a non-malicious and normal manner. Accordingly, the electronic communication is labeled as anomalous/malicious and is blocked, such as by modifying operation of computers and/or networking equipment to block the electronic communication. If the features are clustered within the clusters of expected/safe features, then the electronic communication is labeled as safe and is allowed. In this way, account security and service access security is improved.

The learning tool 300 includes the tri-point arbitration similarity logic 110 of FIG. 1 and multi-distance similarity logic 320. The tri-point arbitration similarity logic 110 inputs a data set X having k data points {x1, . . . , xk} and a set A having m arbiter points {a1, . . . , am}. The tri-point arbitration similarity logic 110 also inputs a set D having K distance functions {D1, . . . , DK}. For example, one of the distance functions could be Euclidean distance, another distance function could be Cosine distance, and so on. For each distance function, the tri-point arbitration similarity logic 110 calculates a per-distance similarity for each data point pair in X using the set of arbiter points A and the given distance function as described above with respect to FIG. 1.

Recall that any number of aggregation functions can be used to combine the per-arbiter similarities for a given data point pair and given distance function. Further, as also discussed above, per-attribute similarities may be computed for each arbiter and a pair of data points, and these per-arbiter per-attribute similarities can then be combined to create the tri-point arbitration similarity. The resulting per-distance similarities for each data point pair populate a per-distance similarity matrix [SD] for each distance function, resulting in K per-distance similarity matrices [SD1]-[SDK].

The multi-distance logic 320 inputs a rule set TD that specifies how to combine per-distance tri-point arbitration similarities SD1-SDK for a data point pair into a single multi-distance tri-point similarity S for the data point pair. In one embodiment, the rules combine SD1-SDK as follows. If a dominant number of the per-distance tri-point arbitration similarities SD1-SDK for a data point pair indicate that the data points are similar, S will be determined to indicate similarity. If a dominant number of the per-distance tri-point arbitration similarities SD1-SDK for a data point pair indicate that the data points are dissimilar, S will be determined to indicate dissimilarity.

In one particular embodiment, the rule set TD set forth above is evaluated iteratively such that the multi-distance tri-point similarity S for a data point pair is successively adjusted based on each per-distance tri-point arbitration similarity SD for the data point pair considered in turn. Note that the per-distance tri-point arbitration similarities SD1-SDK are readily obtained by reference to the K per-distance similarity matrices [SD1]-[SDK]. Recall that similarity values range from −1 to 1, with −1 corresponding to total dissimilarity, 0 corresponding to neutrality, and +1 corresponding to total similarity. The rule set TD is as follows:


Rule 1: If S >= 0 and SD >= 0, then S = S + SD − (S*SD)

This rule has the effect of increasing the level of similarity indicated by S when both the multi-distance tri-point similarity S and the per-distance tri-point arbitration similarity SD under consideration in the present iteration indicate that the data points are similar.


Rule 2: If S <= 0 and SD <= 0, then S = S + SD + (S*SD)

This rule has the effect of increasing the level of dissimilarity indicated by S when both the multi-distance tri-point similarity S and the per-distance tri-point arbitration similarity SD under consideration in the present iteration indicate that the data points are dissimilar.


Rule 3: If (S <= 0 and SD >= 0) or (S >= 0 and SD <= 0), then S = (S + SD) / (1 − min(abs(S), abs(SD)))

This rule has the effect of adjusting the level of similarity indicated by S toward neutral when one of the multi-distance tri-point similarity S and the per-distance tri-point arbitration similarity SD indicates that the data points are similar and the other indicates that the data points are dissimilar.

After the rule set is applied to a current value of S and SD to calculate a new value for S, the rule set is applied to the new S and the next SD, and so on, until all SD have been considered. The final value for S is returned as the multi-distance tri-point similarity S for the data point pair. Application of the rule set above will result in a multi-distance tri-point similarity S equal to 1 when all of the SD indicate total similarity, a multi-distance tri-point similarity S equal to −1 when all of the SD indicate total dissimilarity, and a multi-distance tri-point similarity S equal to 0 when all of the SD indicate complete neutrality.
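
The following is a minimal sketch of this iterative application of the rule set TD, assuming the per-distance similarities SD1-SDK for a data point pair have already been looked up in the K per-distance similarity matrices. The guard against a zero denominator in Rule 3 (a total conflict between +1 and −1) is an assumption of this sketch, as that boundary case is not specified above.

```python
def combine_multi_distance(per_distance_sims):
    """Fold per-distance similarities SD1..SDK into one multi-distance S in [-1, 1]."""
    s = 0.0                                   # start neutral; the first SD sets the sign
    for sd in per_distance_sims:
        if s >= 0 and sd >= 0:                # Rule 1: reinforce similarity
            s = s + sd - s * sd
        elif s <= 0 and sd <= 0:              # Rule 2: reinforce dissimilarity
            s = s + sd + s * sd
        else:                                 # Rule 3: conflicting evidence -> toward neutral
            denom = 1.0 - min(abs(s), abs(sd))
            s = 0.0 if denom == 0.0 else (s + sd) / denom
    return s

# All distance functions indicating total similarity yields S = 1.
assert combine_multi_distance([1.0, 1.0, 1.0]) == 1.0
```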

FIG. 4 illustrates one embodiment of a method 400 for performing multi-distance tri-point arbitration. The method 400 may be performed by the multi-distance tri-point arbitration learning tool 300 of FIG. 3. The method includes, at 410, determining whether another data point pair remains for similarity analysis. If not, the method ends. When an unanalyzed data point pair remains, the method includes, at 420, using K different distance functions D1-DK to calculate K per-distance tri-point arbitration similarities SD1-SDK between the pair of data points xi and xj with respect to an arbiter point a.

The method includes, at 430, computing a multi-distance tri-point arbitration similarity S between the data points based on a dominating number of the K per-distance tri-point arbitration similarities. Thus, the method determines that the data points are similar when a dominating number of the K per-distance tri-point arbitration similarities indicate that the data points are similar. The method determines that the data points are dissimilar when a dominating number of the K per-distance tri-point arbitration similarities indicate that the data points are dissimilar. At 440, the method includes associating the multi-distance tri-point arbitration similarity with the data points for use in future processing.

As can be seen from the foregoing description, the multi-distance tri-point arbitration disclosed herein is capable of performing similarity analysis of datasets produced by compound data-generating mechanisms. A plurality of distance functions can be combined in a non-trivial way to perform similarity analysis without any additional parameter tuning (e.g., weight selection). The results produced by multi-distance tri-point arbitration are superior to results obtained using a single distance function for compound data sets and are also competitive for non-compound datasets. Multi-distance tri-point arbitration can be used in a wide spectrum of data-mining applications such as health, e-commerce, insurance, retail, social networks, monitoring, analytics, and so on.

Multi-Distance Clustering

Clustering, as a data analysis tool, creates groups of data that are “close” together, where “close” implies a distance metric that is used as a proxy for similarity. Both unsupervised and supervised clustering are based on pair-wise comparison of data points in the data set. The comparison is done by computing distances defined for the type of data or by devising heuristic scores that capture the analyst's subjective intuition and/or experience about similarity of data objects. When the attributes are numeric or can be converted to numeric, distance metrics, such as the Euclidean distance between two points shown in Equation (3) below, are applicable. This distance is based on a certain attribute or on attribute combinations, represented by the differences ai − bi for k attributes in Equation (3). For example, subgroups in a group of patients can be identified based on attributes such as age, gender, results of a certain test, type of disease, disease progression level, and/or genetic characteristics.


d(a, b) = sqrt((a1 − b1)^2 + . . . + (ak − bk)^2)  EQ. 3

As an input to most clustering techniques, the distances between all pairs of points are calculated and stored, creating the distance matrix shown in Equation (4).

Md = | d(x1, x1) . . . d(x1, xk) |
     |   . . .    . . .   . . .  |
     | d(xk, x1) . . . d(xk, xk) |  EQ. 4
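
A short sketch of EQ. 3 and EQ. 4 together, assuming the data set is held as a NumPy array with one row per data point; scipy.spatial.distance.pdist with squareform would be an equivalent off-the-shelf alternative.

```python
import numpy as np

def distance_matrix(X):
    """Pairwise Euclidean distance matrix Md (EQ. 4) for the k points in X."""
    X = np.asarray(X, dtype=float)
    diff = X[:, None, :] - X[None, :, :]      # k x k x (num attributes) differences
    return np.sqrt((diff ** 2).sum(axis=-1))  # EQ. 3 applied to every pair
```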

Among the most notable and widely used clustering algorithms are K-means clustering, hierarchical clustering, density-based clustering, distribution-based clustering, and self-organizing clustering. Any of these methods may benefit from the use of tri-point arbitration to determine the distance or similarity between points.

In essence, for distance-based clustering, the distance between the two points serves as a proxy for the similarity of two points. During the clustering process, the analyst adjusts parameters of the clustering process based on what the analyst thinks is similar and what is not. For example, using K-means clustering, the analyst would select a number of clusters that seems to give good results; using density-based clustering, the analyst would select a distance that seems to give good results. While this subjective approach may work in some situations, it will most likely fail in other situations or for slight changes in the underlying structure of the data or the data-generating mechanism. The analyst, by adjusting the parameters, may achieve arbitrarily accurate results on the existing set of data points, but an algorithm overfit to the existing data will produce arbitrarily uncertain results on new data. Such sensitivity to slight changes in the assumptions makes the resulting diagnostics systems unstable and unreliable for predictions based on the clusters.

The disclosed data clustering is based on multi-distance similarity between the data points. Rather than an analyst artificially specifying a distance that is “close enough,” a number of clusters, a size of cluster, or a cluster forming property such as density of points, in the disclosed data clustering the clustering process itself determines the number of clusters. When multi-distance tri-point arbitration similarity is the basis for the multi-distance clustering, each data point contributes to the determination of the similarity of all other pairs of data points. Thus, the data, rather than the analyst, controls the cluster formation.

FIG. 5 illustrates one example of a multi-distance clustering tool 500 that performs clustering on the multi-distance similarity matrix S, which may have been computed using tri-point arbitration as described above with reference to FIG. 3. The multi-distance clustering tool 500 outputs a number n of clusters that are mutually dissimilar. The multi-distance clustering tool 500 includes multi-distance clustering logic 510 that performs clustering without requiring selection of the number of clusters prior to performing clustering.

FIG. 6 illustrates one embodiment of a method 600 that performs multi-distance clustering. At 620, a multi-distance similarity matrix S that records pair-wise multi-distance similarities between respective pairs of data points in a data set is read from an electronic data structure. The similarities in the similarity matrix may have been computed using any type of similarity analysis that combines multiple distance functions, including the multi-distance tri-point arbitration described above. Thus, each pair-wise similarity in the similarity matrix S is based on distances between a pair of data points calculated using K different distance functions, where K is greater than one.

The similarity matrix S may be stored in a database table or any other electronic data structure. The similarity matrix may be read by moving the similarity matrix into working memory or cache that is accessible to a processor and/or logic performing the clustering method 600. At 630, the data points in the data set are clustered into n clusters based on the similarity matrix S such that n is not determined prior to the clustering.

Recall that selecting the number of clusters prior to clustering greatly impacts the resulting clustering, such that selecting the wrong number of clusters may significantly degrade the quality of the clustering results. Because the multi-distance similarity used as the basis of the clustering combines numerous distance functions to capture interrelated factors that generate the data variations produced by a compound data-generating mechanism, it is unnecessary to pre-compute a number of clusters. Instead, the data itself can drive the clustering process.

When the multi-distance similarity is determined as described above from the perspective of non-biased arbiters (i.e., using tri-point arbitration), the clustering results become independent of the selection of weights or other methodology used to combine the different distances, insulating the clustering process from human error, and producing consistently accurate clustering. The clustering described herein will be based on a similarity matrix as determined using multi-distance tri-point arbitration as described above. The described clustering techniques can also be used with multiple per-distance similarities determined in other manners.

Returning to FIG. 5, in one embodiment, the multi-distance clustering logic 510 clusters the dataset having multi-distance pair-wise similarities recorded in the similarity matrix as follows. First, the multi-distance clustering logic 510 selects a pair of similar data points to create an initial cluster. In one embodiment, the pair of data points having the highest positive similarity (e.g., as evidenced by the highest value in the similarity matrix) can be selected as the initial pair. The cluster is grown by subsuming, into the cluster, data points that are similar to any point in the cluster. A threshold may be set on the level of similarity required to add a data point to a cluster. For example, given a similarity that ranges from −1 to 1, a similarity of +0.5 may be used as the threshold for adding a data point to a cluster.

When no un-clustered data points remain that are similar to data points in the cluster, a new pair of similar data points is selected to create a subsequent cluster. The subsequent cluster is grown by subsuming any data points that are similar to a data point in the subsequent cluster. This clustering is repeated until all points are in a cluster. Any data point that is not similar to any other data point is in a cluster by itself. In one embodiment, a given data point can be a member of more than one cluster. Note that the number of clusters is determined by the clustering process itself, which terminates when all points are in a cluster. The number of clusters does not need to be determined prior to clustering or otherwise input to the clustering process.
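
A hedged sketch of this cluster-growing procedure, assuming a dense symmetric similarity matrix S with values in [−1, 1] and the +0.5 threshold mentioned above. For simplicity the sketch assigns each point to a single cluster; the overlapping-membership variant described above would require only minor changes. All names are illustrative.

```python
import numpy as np

def grow_clusters(S, threshold=0.5):
    """Grow clusters from the similarity matrix S; the number of clusters n
    emerges from the data and is never supplied as an input."""
    k = S.shape[0]
    unclustered = set(range(k))
    clusters = []
    while unclustered:
        # Seed a new cluster with the most similar remaining pair, if any.
        idx = sorted(unclustered)
        best, seed = -np.inf, None
        for i in idx:
            for j in idx:
                if i < j and S[i, j] > best:
                    best, seed = S[i, j], (i, j)
        if seed is None or best < threshold:
            # No similar pairs remain: each leftover point is its own cluster.
            clusters.extend({p} for p in unclustered)
            break
        cluster = set(seed)
        unclustered -= cluster
        # Subsume points similar to any current member until none qualify.
        grew = True
        while grew:
            grew = False
            for p in list(unclustered):
                if any(S[p, m] >= threshold for m in cluster):
                    cluster.add(p)
                    unclustered.remove(p)
                    grew = True
        clusters.append(cluster)
    return clusters
```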

In another embodiment, the multi-distance clustering logic 510 uses a clustering algorithm that is based on the spectral theory of matrices. One embodiment of a clustering method 700 that uses spectral theory to cluster data is illustrated in FIG. 7. At 705, the multi-distance similarity matrix S, a binary version B of the similarity matrix, and a set C which specifies a set of indices in S that are to be clustered are input. These matrices may be input by placing them in working memory for access by a processor or logic. In one embodiment, an entry in B has a value of −1 when its value in S is negative and a value of +1 when its value in S is positive. The clustering algorithm works by splitting the matrix S into a number n of sub-matrices, where each of the sub-matrices corresponds to a cluster. The method continues clustering until no sub-matrices remain that can be split at 710, at which point, at 770, a set of clusters C1-Cn is output that corresponds to the set of sub-matrices that cannot be split.

For each clustering iteration, at 715 the sub-matrix of S, denoted hereinafter as sub-matrix s, corresponding to the indices in set C is determined. The sub-matrix of B, denoted hereinafter as sub-matrix b, corresponding to the indices in set C is also determined at 715. At 720, a Laplacian matrix Λ is computed using a desired objective function. In one embodiment, the objective function is Λ=D−b, where D is the diagonal matrix obtained from the sub-matrix b by summing its entries column wise and placing the resulting sums on the diagonal. This particular objective function is based on a MinCut objective function used in other spectral theory clustering. Other objective functions can be used.

At 725, the eigenvalues for the Laplacian matrix are computed. The Laplacian matrix will have at least one eigenvalue equal to zero, and under spectral theory, the number of zero-valued eigenvalues indicates the number of connected components that exist in the sub-matrix s. The eigenvalues are arranged in ascending order and the number m of zero-valued eigenvalues is counted. At 730, if m is greater than one, then there is more than one zero-valued eigenvalue. This means that the sub-matrix includes more than one connected component and the sub-matrix s should be split into a number m of sub-matrices. If m is not greater than one, the method continues at 740 as will be described below.

At 735, to split the sub-matrix s into m sub-matrices, the eigenvectors for each of the m zero-valued eigenvalues are computed. The sub-matrix s is split based on the eigenvectors for the zero-valued eigenvalues. That is, the indices of non-zero elements of each eigenvector correspond to indices in sub-matrix s that are assigned to the same sub-matrix. Each of the m sub-matrices from sub-matrix s is then input into the clustering algorithm at 710 to determine if further splitting of any of the sub-matrices should be done.

A sub-matrix is split when it has more than one zero-valued eigenvalue. When a sub-matrix has only one zero-valued eigenvalue, spectral theory holds that the sub-matrix has a single connected component and thus it may be that no further splitting needs to be performed on the sub-matrix. To confirm that a sub-matrix with a single zero-valued eigenvalue needs no further splitting, the following steps are performed. At 740, the eigenvector for the second smallest eigenvalue (the smallest non-zero eigenvalue) is computed for the sub-matrix and the sub-matrix is split into two sub-matrices based on the eigenvector. That is, indices in the sub-matrix that correspond to positive elements in the eigenvector are assigned to a first sub-matrix or cluster and indices in the sub-matrix that correspond to negative elements in the eigenvector are assigned to a second sub-matrix or cluster.

At 745, the cross-cluster similarity is determined between the first and second clusters. The cross-cluster similarity is determined by determining pairwise similarity between data point pairs that have one point in the first cluster C1 and a second point in the second cluster C2. The pairwise similarities are combined in some manner, such as averaging. If the cross-cluster similarity indicates that the two clusters C1 and C2 are similar (e.g., the combined similarity has a positive value), the two clusters C1 and C2 are discarded and the sub-matrix under consideration for splitting is not split any further. When the clustering terminates, a cluster that corresponds to the sub-matrix will be included at 770 in the output of the clustering algorithm. The method returns to 710 and the next sub-matrix is processed.

If at 745 the cross-cluster similarity indicates that the two clusters C1 and C2 are dissimilar (e.g., the combined similarity has a negative value), at 750 the two sub-matrices corresponding to the two clusters C1 and C2 are each input to the clustering algorithm. The algorithm terminates when no sub-matrices remain that need to be split. As with the other clustering methods described herein, the number of clusters n is not determined a priori or input to the clustering method. This represents a significant advantage over existing clustering techniques that typically require the number of clusters as an input.
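
The following is a condensed sketch of method 700, assuming S is symmetric with entries in [−1, 1] and using the MinCut-style objective Λ = D − b described above. The numerical tolerance, the averaging of cross-cluster similarities, and the grouping of components by the non-zero entries of the zero-eigenvalue eigenvectors are assumptions of this sketch; in practice, degenerate eigenvectors returned by a solver can mix components, so a production implementation might use an explicit connected-components pass instead.

```python
import numpy as np

def spectral_clusters(S, tol=1e-9):
    """Recursively split index sets of S per FIG. 7; returns a list of clusters."""
    B = np.where(S >= 0, 1.0, -1.0)            # binary (+1/-1) version of S
    clusters, stack = [], [np.arange(S.shape[0])]
    while stack:                               # 710: sub-matrices left to consider
        C = stack.pop()
        if len(C) == 1:
            clusters.append(C)
            continue
        b = B[np.ix_(C, C)]                    # 715: sub-matrix b for indices C
        L = np.diag(b.sum(axis=0)) - b         # 720: Laplacian, objective D - b
        vals, vecs = np.linalg.eigh(L)         # 725: eigenvalues in ascending order
        zero = np.where(np.abs(vals) < tol)[0]
        if len(zero) > 1:                      # 730/735: m connected components
            assigned = np.zeros(len(C), dtype=bool)
            for z in zero:
                members = (np.abs(vecs[:, z]) > tol) & ~assigned
                if members.any():
                    stack.append(C[members])
                    assigned |= members
            if not assigned.all():             # numerical safety net
                stack.append(C[~assigned])
            continue
        fiedler = vecs[:, 1]                   # 740: second-smallest eigenvalue
        c1, c2 = C[fiedler > 0], C[fiedler <= 0]
        if len(c1) == 0 or len(c2) == 0:
            clusters.append(C)
            continue
        cross = S[np.ix_(c1, c2)].mean()       # 745: averaged cross-cluster similarity
        if cross >= 0:
            clusters.append(C)                 # similar halves: keep C as one cluster
        else:
            stack.extend([c1, c2])             # 750: dissimilar halves: recurse
    return clusters                            # 770: the clusters C1-Cn
```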

Distance Function Learning

As discussed earlier, selecting the proper distance function for determining similarity in data produced by a compound data-generating mechanism is important to recognizing interrelated factors that affect the data. An additional useful result of introducing an arbiter into the similarity analysis that forms the basis for clustering is the ability to learn a proper distance function or a set of distance functions that are relevant to a given data set. To identify which distance function or functions are relevant to a particular data set, an initial clustering is performed on the data set using a similarity matrix that includes multi-distance similarity values based on multiple different distance functions D1-DK. The initial clustering results in n clusters.

A subsequent clustering is performed on the data set using a similarity matrix S′ that includes pairwise similarity values calculated using K−1 distance functions, such that a given distance function from the original K distance functions has not been used to calculate the pair-wise similarities in the similarity matrix. The resulting n′ clusters are compared to the n clusters, and when the n clusters and the n′ clusters are similar, the given distance function is determined not to be relevant to clustering for the data set. This distance function can be eliminated from future analysis of the data set, saving resources and increasing performance.

Each distance function is considered for elimination in the same manner, in turn, until all distance functions have been considered. Distance functions that are not eliminated are relevant and should be used as the basis for determining multi-distance similarity in future analysis of the data set. If the number of distance functions is greater than one, the data-generating mechanism responsible for generating the data set is compound and, hence, the data set should be analyzed using the multi-distance techniques described herein.
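
A hedged sketch of this elimination loop, written as a higher-order function so that the multi-distance similarity computation, the clustering step, and the clustering-comparison test (e.g., a Rand-index check) are supplied by the caller; all names are illustrative.

```python
def relevant_distance_functions(X, distance_functions, similarity_matrix, cluster, agree):
    """similarity_matrix(X, funcs) -> S; cluster(S) -> clusters; agree(a, b) -> bool.

    Returns the distance functions whose removal changes the clustering; more
    than one survivor suggests a compound data-generating mechanism."""
    baseline = cluster(similarity_matrix(X, distance_functions))
    relevant = []
    for d in distance_functions:
        others = [f for f in distance_functions if f is not d]
        reduced = cluster(similarity_matrix(X, others))
        if not agree(baseline, reduced):   # dropping d changed the clusters,
            relevant.append(d)             # so d is relevant to this data set
    return relevant
```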

As can be seen from the foregoing description, using multi-distance similarity to perform clustering results in clustering that recognizes interrelated factors produced by compound data-generating mechanisms. Such interrelated factors may not be recognized when a single distance function is used to calculate similarity. When multi-distance tri-point arbitration similarity is used for clustering, it is possible to obtain excellent clustering results on data from a compound data generating mechanism without determining a number of clusters prior to clustering. This simplifies clustering and removes human error from the clustering process.

Security Enhancement Using Multi-Distance Clustering

In one embodiment, multi-distance clustering is used to improve security of online services, storage environments, cloud computing environments, multi-tenant environments, and/or other computing environments. Such techniques can be implemented by a security module, such as hardware and/or software, configured to execute instructions for performing various computer-implemented actions to modify operation of computing devices to improve security.

Data points, analyzed using multi-distance clustering, can relate to information associated with entities, such as computing devices, users, user accounts, etc. In one embodiment, a data point can correspond to information relating to a computing device attempting to access a user account with an online service, such as an email service, a banking service, an online shopping service, etc. The information can comprise a location of the computing device, an IP address of the computing device, device information of the computing device (e.g., a MAC address, a device type, a manufacturer, a model, etc.), a time of day when the attempt was made, etc. Such information can be extracted as features from electronic communications, received over a network, attempting to access the user account of the online service.

The data points, such as features extracted from electronic communications, can be analyzed using multi-distance clustering. The data points are analyzed to detect an anomaly data point indicative of a computing device attempting to perform a malicious activity. In particular, multi-distance clustering is used to cluster data points into n clusters. The n clusters are evaluated to detect a cluster that comprises data points of features indicative of anomalous electronic communication attempting to access the service. The anomalous electronic communication is identified as having features corresponding to anomaly data points, within the cluster, indicative of malicious activity. The malicious activity can relate to the use of compromised credentials to gain unauthorized access to the user account with the online service. The malicious activity can relate to one or more computing devices, such as bots executing on the computing devices, performing a security attack.

A data point is determined by multi-distance clustering to be an anomaly data point, associated with anomalous electronic communication, when the data point is clustered into a cluster that is dissimilar from clusters of known data points that are associated with normal, expected, and safe features. The anomaly data point represents malicious activity because a feature represented by the anomaly data point is dissimilar from expected features of non-malicious activities. That is, the data point represents a feature of the electronic communication, such as a location of a device that sent the electronic communication in an attempt to access the user account.

The data point is compared to known data points of known locations of devices that routinely access the user account to perform non-malicious activities (e.g., locations from which an owner of the user account generally accesses the user account). If the data point, representing the location of the device, is dissimilar beyond a threshold amount from the known locations, then the data point is identified as the anomaly data point indicative of malicious activity. The data point is determined to be dissimilar beyond the threshold amount based upon the multi-distance clustering process clustering the data point into a cluster dissimilar from clusters of safe/normal/expected data points. That malicious activity may correspond to a malicious user attempting to access the user account from a strange location different from the known locations. Accordingly, an electronic communication having a feature represented by the anomaly data point is determined to be an anomalous electronic communication indicative of malicious activity.

The security module can perform various actions to improve security upon detecting an anomaly data point and thus an anomalous electronic communication representing malicious activity by a computing device. In one embodiment, the security module blocks further attempts by the computing device to access the online service. For example, the security module modifies a setting within a server or network device (e.g., a router, a gateway, etc.) to modify operation of the server or network device to block communication packets or authentication attempts received from the computing device. In this way, operation of the server or network device is modified in order to improve security against malicious computing devices.

In another embodiment, the security module modifies the user account to force the owner of the user account to reset an account password of the user account. In this way, the user account is protected against malicious attempts to access the user account. In another embodiment, the security module modifies the user account by implementing additional authentication verification or by changing to a different type of authentication verification. This can force multifactor authentication for subsequent attempts by computing devices to access the user account.

In another embodiment, the security module modifies a setting within the server or network device to block an IP address of the computing device. The IP address is a feature from the electronic communication that is represented by the anomaly data point. A state of a network firewall implemented by the server or network device can be modified to block network communication from the IP address of the computing device.

In another embodiment where the anomaly data point is indicative of a security attack, the security module modifies a setting within the server or network device to change an IP address used by the online service. The security module can also modify a setting within the server or network device to block a port used by the online service.

In one embodiment of utilizing multi-distance clustering to improve security of an online service from a malicious device, an electronic communication is received by a computing device. The computing device, such as a server or network device, may implement the security module. The electronic communication is parsed to identify user account credentials, such as an email address and password, and an online service address of the online service. The electronic communication is stored in an unscored communications pool within memory of the computing device. The unscored communications pool may be isolated from a blocked communications pool and an allowed communications pool stored within the memory of the computing device. Once electronic communications are determined to be safe or malicious, such electronic communications are transferred from the unscored communications pool to either the blocked communications pool or the allowed communications pool.

Features of the electronic communications within the allowed communications pool are clustered by multi-distance clustering into clusters of known/safe/expected data points of such features. Features of the electronic communications within the blocked communications pool are clustered by multi-distance clustering into clusters of malicious data points (anomaly data points) of such features. In this way, features of unscored communications can be clustered by multi-distance clustering to see whether such features are clustered into the safe clusters, into clusters dissimilar from the safe clusters (e.g., new malicious features), and/or into the malicious clusters (e.g., known malicious features) to determine whether the unscored communications are malicious or safe.

In particular, the electronic communication is evaluated to extract features as data points. The data points are evaluated using multi-distance clustering to identify an anomaly data point and thus an anomalous electronic communication. The anomalous electronic communication is indicative of a malicious attack or a malicious attempt to gain unauthorized access to the user account. The anomaly data point is identified as a data point having one or more features that are dissimilar from known expected features. In one embodiment, the anomaly data point is identified where the anomaly data point is clustered into a cluster that is dissimilar from clusters of data points of safe/known/expected features. For example, the data point may represent a current location of a device that sent the electronic communication. The data point, representing the current location of the device, may be clustered in a cluster that is different than clusters of data points of locations of devices routinely used by the owner to access the user account in non-malicious ways. For example, the anomaly data point represents an attempt to access the user account from a foreign country, whereas the owner of the user account usually accesses the user account from home and may have done so recently, such as within the last 20 minutes.

In another embodiment, the anomaly data point is identified where an IP address of the device that sent the electronic communication deviates a threshold amount from IP addresses of devices routinely used to access the user account. This deviation is identified based upon the IP address being clustered into a cluster that is dissimilar from clusters of IP addresses of devices routinely used by the owner to access the user account in non-malicious ways. In yet another embodiment, the anomaly data point is identified where device characteristics (e.g., a device model, a MAC address, a device type, etc.) of the device that sent the electronic communication deviate a threshold amount from device characteristics of devices routinely used to access the user account. This deviation is identified based upon the device characteristics being clustered into a cluster that is dissimilar from clusters of device characteristics of devices routinely used by the owner to access the user account in non-malicious ways.

Detection of the anomaly data point is performed using multi-distance clustering of features extracted from electronic communications and clusters of known data points representing features of electronic communications routinely used to non-maliciously access the user account. Such features can correspond to IP addresses, device characteristics, times of access, device locations, etc. In this way, a wide variety of features can be used as data points for detecting anomaly data points indicative of malicious activity. If the anomaly data point is identified, then computerized actions can be taken upon the anomalous electronic communication and/or to modify operation of a computing device such as a server or network device associated with the online service. Such actions can comprise blocking further communication from the device by modifying settings within the server and/or network device. The settings can be modified by transmitting an instruction over a network to the server and/or network device to modify a setting, such as a setting to block a port, a setting to block communication from an IP address, a firewall setting, etc.

In one embodiment, the user identification, such as an email address, is extracted from the electronic communication. An account verification is performed to verify the existence of the user account. Features of the electronic communication such as a current location, device information, device IP address, and/or other features are extracted as data points for multi-distance clustering. A modified electronic communication is created by appending the data points and account verification results to the electronic communication.
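A minimal sketch of this extraction-and-append step is shown below, assuming the electronic communication has already been parsed into a dictionary; the field names and the account-verification flag are hypothetical.

```python
def build_modified_communication(communication: dict, account_exists: bool) -> dict:
    """Append extracted data points and account verification results.

    The communication dictionary and its field names are hypothetical.
    """
    data_points = {
        "location": communication.get("geo_location"),
        "device_info": communication.get("device_info"),
        "ip_address": communication.get("source_ip"),
    }
    modified = dict(communication)          # leave the original intact
    modified["data_points"] = data_points
    modified["account_verified"] = account_exists
    return modified
```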

The modified electronic communication is scored to create a scored electronic communication. In one embodiment of scoring, recent user authentication behavior data is retrieved, such as from a database or other data structure within which user authentication behavior data is stored. The user authentication behavior data may comprise device login IP address data, device login location data, login times, device information, and/or other authentication behavior associated with the owner of the user account authenticating and accessing the user account for non-malicious activity. The recent user authentication behavior data is appended to the modified electronic communication. The recent user authentication behavior data can be used by multi-distance clustering as clusters of known data points of non-malicious behavior of the owner of the account.
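Continuing the sketch, retrieving and appending the recent user authentication behavior data might look as follows; fetch_recent_behavior stands in for whatever database query an implementation uses and is an assumption, as is the account_email field name.

```python
def append_behavior_data(modified: dict, fetch_recent_behavior) -> dict:
    """Attach the owner's recent non-malicious authentication behavior.

    fetch_recent_behavior is a caller-supplied callable (e.g., a database
    query) returning records such as login IP addresses, login locations,
    login times, and device information for the account owner.
    """
    owner = modified.get("account_email")   # hypothetical field name
    modified["recent_behavior"] = fetch_recent_behavior(owner)
    return modified
```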

The modified electronic communication is scanned to determine whether the modified electronic communication is a malicious communication by executing multi-distance clustering. The tri-point anomaly detection can use the recent user authentication behavior data as known data points to compare with data points of the features extracted from the electronic communication, determining similarity/dissimilarity between them for anomaly detection. If multi-distance clustering clusters the one or more data points of features extracted from the modified electronic communication in a cluster that is dissimilar from clusters of known/safe data points, then the modified electronic communication is flagged as a malicious/anomalous electronic communication. Detecting malicious communication can be performed for other electronic communications received by the online service. Scores are assigned to modified electronic communications based upon how similar features of the modified electronic communications are to known expected behavior of the owner.
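The per-distance tri-point arbitration similarity underlying this comparison is stated explicitly in claim 7 below. A direct Python transcription of that relationship, together with a claim-6 style majority vote across K distance functions, is sketched here using Euclidean distance as one example distance function.

```python
import math

def euclidean(p, q):
    """One example distance function: Euclidean distance between coordinate tuples."""
    return math.dist(p, q)

def tripoint_similarity(x1, x2, a, rho=euclidean):
    """Per-distance tri-point similarity of x1 and x2 with respect to arbiter a.

    Positive values indicate the pair is similar (closer to each other than
    to the arbiter); negative values indicate dissimilarity.
    """
    nearest_to_arbiter = min(rho(x1, a), rho(x2, a))
    return (nearest_to_arbiter - rho(x1, x2)) / max(rho(x1, x2), nearest_to_arbiter)

def multi_distance_similar(x1, x2, a, distance_functions):
    """One reading of claim 6: the pair is similar when a dominating (majority)
    number of the K per-distance similarities are positive."""
    votes = [tripoint_similarity(x1, x2, a, rho) > 0 for rho in distance_functions]
    return sum(votes) > len(votes) / 2

# Two nearby points judged against a distant arbiter come out similar (> 0).
print(tripoint_similarity((0.0, 0.0), (1.0, 0.0), (10.0, 10.0)))
```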

A new data file can be created by copying the modified electronic communication and a score into a new data file as a scored communication file. The scored communication file is transferred into a scored communications pool. Once scored communication files are created from electronic communications within the unscored communications pool and are transferred to the scored communications pool, remaining data within the unscored communications pool is deleted. Scores of the scored communication files within the scored communications pool are evaluated to determine whether such scored communication files are to be moved into the blocked communications pool or the allowed communications pool. If a score of a scored communication file exceeds a threshold indicative of malicious communication, then the scored communication file is moved to the blocked communications pool. Otherwise, the scored communication file is moved to the allowed communications pool. Scored communication files within the blocked communications pool are blocked from accessing the user account and/or from being normally processed by the online service. Scored communication files within the allowed communications pool are allowed to access the user account and/or are processed normally by the online service.
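The threshold test and pool transfer can be sketched as follows, reusing the CommunicationPools structure from the earlier sketch; the threshold value of 0.8 is an illustrative assumption.

```python
def route_scored_communication(pools, scored_file: dict, threshold: float = 0.8) -> None:
    """Move a scored communication file into the blocked or allowed pool.

    A score exceeding the threshold is treated as indicative of malicious
    communication.
    """
    if scored_file["score"] > threshold:
        pools.blocked.append(scored_file)   # blocked from accessing the user account
    else:
        pools.allowed.append(scored_file)   # processed normally by the online service
```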

General Computer Embodiment

FIG. 8 illustrates an example computing device in which example systems and methods described herein, and equivalents, may operate. The example computing device may be a computer 800 that includes a processor 802, a memory 804, and input/output ports 810 operably connected by a bus 808. In one example, the computer 800 may include a multi-distance clustering logic 830 configured to facilitate similarity analysis using multi-distance tri-point arbitration. In different examples, the multi-distance clustering logic 830 may be implemented in hardware, a non-transitory computer-readable medium with stored instructions, firmware, and/or combinations thereof. While the multi-distance clustering logic 830 is illustrated as a hardware component attached to the bus 808, it is to be appreciated that in one example, the multi-distance clustering logic 830 could be implemented in the processor 802.

In one embodiment, multi-distance clustering logic 830 is a means (e.g., hardware, non-transitory computer-readable medium, firmware) for performing multi-distance clustering.

The means may be implemented, for example, as an ASIC programmed to perform multi-distance tri-point arbitration. The means may also be implemented as stored computer executable instructions that are presented to computer 800 as data 816 that are temporarily stored in memory 804 and then executed by processor 802.

Multi-distance clustering logic 830 may also provide means (e.g., hardware, non-transitory computer-readable medium that stores executable instructions, firmware) for performing the methods illustrated in FIGS. 1-7 as well as the functions performed by the multi-distance clustering tool 500 of FIG. 5 and the tri-point arbitration learning tool 100 of FIG. 1.

Generally describing an example configuration of the computer 800, the processor 802 may be any of a variety of processors, including dual microprocessor and other multi-processor architectures. A memory 804 may include volatile memory and/or non-volatile memory. Non-volatile memory may include, for example, ROM, PROM, and so on. Volatile memory may include, for example, RAM, SRAM, DRAM, and so on.

A disk 806 may be operably connected to the computer 800 via, for example, an input/output interface (e.g., card, device) 818 and an input/output port 810. The disk 806 may be, for example, a magnetic disk drive, a solid state disk drive, a floppy disk drive, a tape drive, a Zip drive, a flash memory card, a memory stick, and so on. Furthermore, the disk 806 may be a CD-ROM drive, a CD-R drive, a CD-RW drive, a DVD ROM, and so on. The memory 804 can store a process 814 and/or a data 816, for example. The disk 806 and/or the memory 804 can store an operating system that controls and allocates resources of the computer 800.

The bus 808 may be a single internal bus interconnect architecture and/or other bus or mesh architectures. While a single bus is illustrated, it is to be appreciated that the computer 800 may communicate with various devices, logics, and peripherals using other busses (e.g., PCIE, 1394, USB, Ethernet). The bus 808 can be of various types including, for example, a memory bus, a memory controller, a peripheral bus, an external bus, a crossbar switch, and/or a local bus.

The computer 800 may interact with input/output devices via the i/o interfaces 818 and the input/output ports 810. Input/output devices may be, for example, a keyboard, a microphone, a pointing and selection device, cameras, video cards, displays, the disk 806, the network devices 820, and so on. The input/output ports 810 may include, for example, serial ports, parallel ports, and USB ports.

The computer 800 can operate in a network environment and thus may be connected to the network devices 820 via the i/o interfaces 818, and/or the i/o ports 810. Through the network devices 820, the computer 800 may interact with a network. Through the network, the computer 800 may be logically connected to remote computers. Networks with which the computer 800 may interact include, but are not limited to, a LAN, a WAN, and other networks.

Definitions and Other Embodiments

In another embodiment, the described methods and/or their equivalents may be implemented with computer executable instructions. Thus, in one embodiment, a non-transitory computer readable/storage medium is configured with stored computer executable instructions of an algorithm/executable application that when executed by a machine(s) cause the machine(s) (and/or associated components) to perform the method. Example machines include but are not limited to a processor, a computer, a server operating in a cloud computing system, a server configured in a Software as a Service (SaaS) architecture, a smart phone, and so on. In one embodiment, a computing device is implemented with one or more executable algorithms that are configured to perform any of the disclosed methods.

In one or more embodiments, the disclosed methods or their equivalents are performed by either: computer hardware configured to perform the method; or computer software embodied in a non-transitory computer-readable medium including an executable algorithm configured to perform the method.

While for purposes of simplicity of explanation, the illustrated methodologies in the figures are shown and described as a series of blocks of an algorithm, it is to be appreciated that the methodologies are not limited by the order of the blocks. Some blocks can occur in orders different from that shown and described, and/or concurrently with other blocks. Moreover, less than all the illustrated blocks may be used to implement an example methodology. Blocks may be combined or separated into multiple actions/components. Furthermore, additional and/or alternative methodologies can employ additional actions that are not illustrated in blocks. The methods described herein are limited to statutory subject matter under 35 U.S.C. § 101.

The following includes definitions of selected terms employed herein. The definitions include various examples and/or forms of components that fall within the scope of a term and that may be used for implementation. The examples are not intended to be limiting. Both singular and plural forms of terms may be within the definitions.

References to “one embodiment”, “an embodiment”, “one example”, “an example”, and so on, indicate that the embodiment(s) or example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element or limitation. Furthermore, repeated use of the phrase “in one embodiment” does not necessarily refer to the same embodiment, though it may.

ASIC: application specific integrated circuit.

CD: compact disk.

CD-R: CD recordable.

CD-RW: CD rewriteable.

DVD: digital versatile disk and/or digital video disk.

HTTP: hypertext transfer protocol.

LAN: local area network.

PCI: peripheral component interconnect.

PCIE: PCI express.

RAM: random access memory.

DRAM: dynamic RAM.

SRAM: static RAM.

ROM: read only memory.

PROM: programmable ROM.

EPROM: erasable PROM.

EEPROM: electrically erasable PROM.

SQL: structured query language.

OQL: object query language.

USB: universal serial bus.

XML: extensible markup language.

WAN: wide area network.

An “electronic data structure”, as used herein, is an organization of data in a computing system that is stored in a memory, a storage device, or other computerized system. A data structure may be any one of, for example, a data field, a data file, a data array, a data record, a database, a data table, a graph, a tree, a linked list, and so on. A data structure may be formed from and contain many other data structures (e.g., a database includes many data records). Other examples of data structures are possible as well, in accordance with other embodiments.

“Computer-readable medium” or “computer storage medium”, as used herein, refers to a non-transitory medium that stores instructions and/or data configured to perform one or more of the disclosed functions when executed. A computer-readable medium may take forms, including, but not limited to, non-volatile media, and volatile media. Non-volatile media may include, for example, optical disks, magnetic disks, and so on. Volatile media may include, for example, semiconductor memories, dynamic memory, and so on. Common forms of a computer-readable medium may include, but are not limited to, a floppy disk, a flexible disk, a hard disk, a magnetic tape, other magnetic medium, an application specific integrated circuit (ASIC), a programmable logic device, a compact disk (CD), other optical medium, a random access memory (RAM), a read only memory (ROM), a memory chip or card, a memory stick, solid state storage device (SSD), flash drive, and other media with which a computer, a processor, or other electronic device can function. Each type of media, if selected for implementation in one embodiment, may include stored instructions of an algorithm configured to perform one or more of the disclosed and/or claimed functions. Computer-readable media described herein are limited to statutory subject matter under 35 U.S.C. § 101.

“Logic”, as used herein, represents a component that is implemented with computer or electrical hardware, a non-transitory medium with stored instructions of an executable application or program module, and/or combinations of these to perform any of the functions or actions as disclosed herein, and/or to cause a function or action from another logic, method, and/or system to be performed as disclosed herein. Equivalent logic may include firmware, a microprocessor programmed with an algorithm, a discrete logic (e.g., ASIC), at least one circuit, an analog circuit, a digital circuit, a programmed logic device, a memory device containing instructions of an algorithm, and so on, any of which may be configured to perform one or more of the disclosed functions. In one embodiment, logic may include one or more gates, combinations of gates, or other circuit components configured to perform one or more of the disclosed functions. Where multiple logics are described, it may be possible to incorporate the multiple logics into one logic. Similarly, where a single logic is described, it may be possible to distribute that single logic between multiple logics. In one embodiment, one or more of these logics are corresponding structure associated with performing the disclosed and/or claimed functions. Choice of which type of logic to implement may be based on desired system conditions or specifications. For example, if greater speed is a consideration, then hardware would be selected to implement functions. If a lower cost is a consideration, then stored instructions/executable application would be selected to implement the functions. Logic is limited to statutory subject matter under 35 U.S.C. § 101.

While the disclosed embodiments have been illustrated and described in considerable detail, it is not the intention to restrict or in any way limit the scope of the appended claims to such detail. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the various aspects of the subject matter. Therefore, the disclosure is not limited to the specific details or the illustrative examples shown and described. Thus, this disclosure is intended to embrace alterations, modifications, and variations that fall within the scope of the appended claims, which satisfy the statutory subject matter requirements of 35 U.S.C. § 101.

To the extent that the term “includes” or “including” is employed in the detailed description or the claims, it is intended to be inclusive in a manner similar to the term “comprising” as that term is interpreted when employed as a transitional word in a claim.

To the extent that the term “or” is used in the detailed description or claims (e.g., A or B) it is intended to mean “A or B or both”. When the applicants intend to indicate “only A or B but not both” then the phrase “only A or B but not both” will be used. Thus, use of the term “or” herein is the inclusive, and not the exclusive use.

Claims

1. A non-transitory computer storage medium storing computer-executable instructions that when executed by a computer cause the computer to perform corresponding functions, the functions comprising:

reading a multi-distance similarity matrix S that records pair-wise multi-distance similarities between respective pairs of data points in a data set, where each pair-wise similarity is based on distances between a pair of data points calculated using K different distance functions, where K is greater than one, wherein the data set comprises data points corresponding to features extracted from electronic communications associated with a service;
clustering the data points in the data set into n clusters based on the similarity matrix S, where n is not determined prior to the clustering;
evaluating the n clusters to detect a cluster comprising data points of features indicative of anomalous electronic communication attempting to access the service; and
executing an action to block the anomalous electronic communication from accessing the service.

2. The non-transitory computer storage medium of claim 1, where the functions comprise clustering the data points in the data set by, until no un-clustered data points remain:

selecting a pair of data points having a relatively large multi-distance similarity as recorded in the similarity matrix S; and
creating a cluster that includes the selected pair of data points by adding data points to the cluster that are similar to any point in the cluster.

3. The non-transitory computer storage medium of claim 1, where the functions comprise clustering the data set by:

iteratively partitioning the similarity matrix S into n sub-matrices using spectral theory, where each sub-matrix corresponds to a cluster; and
ceasing partitioning when all sub-matrices are mutually dissimilar.

4. The non-transitory computer storage medium of claim 1, where the functions comprise iteratively clustering the data set by, starting with the similarity matrix as a sub-matrix:

clustering the sub-matrix by: using an objective function to compute a Laplacian matrix of the sub-matrix; computing eigenvalues and corresponding eigenvectors for the Laplacian matrix and ordering the eigenvalues in ascending order such that the first eigenvalue is equal to zero; identifying m eigenvalues that are equal to zero; and when m is greater than one, partitioning the sub-matrix into m sub-matrices based on the second through the mth eigenvectors; and
clustering each of the resulting m sub-matrices.

5. The non-transitory computer storage medium of claim 4, where the functions comprise, when a sub-matrix has a single eigenvalue equal to zero:

partitioning indices of the sub-matrix into two sub-matrices based on the second eigenvector, such that one of the two sub-matrices contains data vectors with indices corresponding to elements of the second eigenvector that indicate similarity and the other of the two sub-matrices contains data vectors with indices corresponding to elements of the second eigenvector that indicate dissimilarity;
determining a cross-cluster similarity between the two sub-matrices;
retaining the two sub-matrices when the cross-cluster similarity indicates dissimilarity; and
discarding the two sub-matrices when the cross-cluster similarity indicates that the two sub-matrices are similar.

6. The non-transitory computer storage medium of claim 1, where the functions comprise computing each pairwise similarity in the similarity matrix S by:

using K different distance functions D1-DK, calculating K per-distance tri-point arbitration similarities SD1-SDK between a pair of data points xi and xj with respect to an arbiter point a; and
computing a multi-distance tri-point arbitration similarity S between the data points by: determining that the data points are similar when a dominating number of the K per-distance tri-point arbitration similarities indicate that the data points are similar; and determining that the data points are dissimilar when a dominating number of the K per-distance tri-point arbitration similarities indicate that the data points are dissimilar.

7. The non-transitory computer storage medium of claim 6, where the functions comprise computing the per-distance tri-point similarity between points x1 and x2 with respect to arbiter a based on the following relationship, where ρ is the distance between points using the respective distance function:

$$S_D(x_1, x_2 \mid a) = \frac{\min\{\rho_D(x_1, a),\, \rho_D(x_2, a)\} - \rho_D(x_1, x_2)}{\max\{\rho_D(x_1, x_2),\, \min\{\rho_D(x_1, a),\, \rho_D(x_2, a)\}\}}$$

8. The non-transitory computer storage medium of claim 1, where the functions further comprise:

reading, from an electronic data structure, a different multi-distance similarity matrix S′ that records pair-wise multi-distance similarities between respective pairs of data points in a data set, where each pair-wise similarity is based on distances between a pair of data points calculated using K−1 different distance functions, such that a given distance function has not been used to calculate the pair-wise similarities in the similarity matrix;
clustering the data points in the data set into n′ clusters based on the similarity matrix S′; and
comparing the n clusters and the n′ clusters and when the n clusters and the n′ clusters are similar, determining that the given distance function is not relevant to clustering for the data set.

9. A computing system, comprising:

a processor;
multi-distance clustering logic configured to cause the processor to: read a multi-distance similarity matrix S that records pair-wise multi-distance similarities between respective pairs of data points in a data set, where each pair-wise similarity is based on distances between a pair of data points calculated using K different distance functions, where K is greater than one, wherein the data set comprises data points corresponding to features extracted from electronic communications associated with a service; cluster the data points in the data set into n clusters based on the similarity matrix S, where n is not determined prior to the clustering; evaluate the n clusters to detect a cluster comprising data points of features indicative of anomalous electronic communication attempting to access the service; and execute an action to block the anomalous electronic communication from accessing the service.

10. The computing system of claim 9, where the multi-distance clustering logic is configured to cause the processor to cluster the data points in the data set by, until no un-clustered data points remain:

selecting a pair of data points having a relatively large multi-distance similarity as recorded in the similarity matrix S; and
creating a cluster that includes the selected pair of data points by adding data points to the cluster that are similar to any point in the cluster.

11. The computing system of claim 9, where the multi-distance clustering logic is configured to cause the processor to cluster the data set by:

iteratively partitioning the similarity matrix S into n sub-matrices using spectral theory, where each sub-matrix corresponds to a cluster; and
ceasing partitioning when all sub-matrices are mutually dissimilar.

12. The computing system of claim 11 where the multi-distance clustering logic is configured to cause the processor to iteratively cluster the data set by, starting with the similarity matrix as a sub-matrix:

clustering the sub-matrix by: using an objective function to compute a Laplacian matrix of the sub-matrix; computing eigenvalues and corresponding eigenvectors for the Laplacian matrix and ordering the eigenvalues in ascending order such that the first eigenvalue is equal to zero; identifying m eigenvalues that are equal to zero; and when m is greater than one, partitioning the sub-matrix into m sub-matrices based on the second through the mth eigenvectors; and when a sub-matrix has a single eigenvalue equal to zero: partitioning indices of the sub-matrix into two sub-matrices based on the second eigenvector, such that one of the two sub-matrices contains data vectors with indices corresponding to elements of the second eigenvector that indicate similarity and the other of the two sub-matrices contains data vectors with indices corresponding to elements of the second eigenvector that indicate dissimilarity; determining a cross-cluster similarity between the two sub-matrices; when the cross-cluster similarity indicates dissimilarity retaining the two sub-matrices; and clustering each of the resulting m sub-matrices.

13. A computer-implemented method comprising, with a processor:

reading, from an electronic data structure, a multi-distance similarity matrix S that records pair-wise multi-distance similarities between respective pairs of data points in a data set, where each pair-wise similarity is based on distances between a pair of data points calculated using K different distance functions, where K is greater than one, wherein the data set comprises data points corresponding to features extracted from electronic communications associated with a service;
clustering the data points in the data set into n clusters based on the similarity matrix S, where n is not determined prior to the clustering;
evaluating the n clusters to detect a cluster comprising data points of features indicative of anomalous electronic communication attempting to access the service; and
executing an action to block the anomalous electronic communication from accessing the service.

14. The computer-implemented method of claim 13, further comprising, with the processor, clustering the data points in the data set by, until no un-clustered data points remain:

selecting a pair of data points having a relatively large multi-distance similarity as recorded in the similarity matrix S; and
creating a cluster that includes the selected pair of data points by adding data points to the cluster that are similar to any point in the cluster.

15. The computer-implemented method of claim 13, further comprising, with the processor, clustering the data set by:

iteratively partitioning the similarity matrix S into n sub-matrices using spectral theory, where each sub-matrix corresponds to a cluster; and
ceasing partitioning when all sub-matrices are mutually dissimilar.

16. The computer-implemented method of claim 13, further comprising, with the processor, iteratively clustering the data set by, starting with the similarity matrix as a sub-matrix:

clustering the sub-matrix by: using an objective function to compute a Laplacian matrix of the sub-matrix; computing eigenvalues and corresponding eigenvectors for the Laplacian matrix and ordering the eigenvalues in ascending order such that the first eigenvalue is equal to zero; identifying m eigenvalues that are equal to zero; and when m is greater than one, partitioning the sub-matrix into m sub-matrices based on the second through the mth eigenvectors; and
clustering each of the resulting m sub-matrices.

17. The computer-implemented method of claim 16, further comprising, with the processor, when a sub-matrix has a single eigenvalue equal to zero:

partitioning indices of the sub-matrix into two sub-matrices based on the second eigenvector, such that one of the two sub-matrices contains data vectors with indices corresponding to elements of the second eigenvector that indicate similarity and the other of the two sub-matrices contains data vectors with indices corresponding to elements of the second eigenvector that indicate dissimilarity;
determining a cross-cluster similarity between the two sub-matrices;
retaining the two sub-matrices when the cross-cluster similarity indicates dissimilarity; and
discarding the two sub-matrices when the cross-cluster similarity indicates that the two sub-matrices are similar.

18. The computer-implemented method of claim 13, further comprising, with the processor, computing each pairwise similarity in the similarity matrix S by:

using K different distance functions D1-DK, calculating K per-distance tri-point arbitration similarities SD1-SDK between a pair of data points xi and xj with respect to an arbiter point a; and
computing a multi-distance tri-point arbitration similarity S between the data points by: determining that the data points are similar when a dominating number of the K per-distance tri-point arbitration similarities indicate that the data points are similar; and determining that the data points are dissimilar when a dominating number of the K per-distance tri-point arbitration similarities indicate that the data points are dissimilar.

19. The computer-implemented method of claim 18, further comprising, with the processor, computing the per-distance tri-point similarity between points x1 and x2 with respect to arbiter a based on the following relationship, where ρ is the distance between points using the respective distance function:

$$S_D(x_1, x_2 \mid a) = \frac{\min\{\rho_D(x_1, a),\, \rho_D(x_2, a)\} - \rho_D(x_1, x_2)}{\max\{\rho_D(x_1, x_2),\, \min\{\rho_D(x_1, a),\, \rho_D(x_2, a)\}\}}$$

20. The computer-implemented method of claim 13, further comprising, with the processor, computing each pairwise similarity in the similarity matrix S by:

using K different distance functions D1-DK, calculating K per-distance tri-point arbitration similarities SD1-SDK between a pair of data points xi and xj with respect to an arbiter point a; and
computing a multi-distance tri-point arbitration similarity S between the data points by iteratively applying a rule set to the K per-distance tri-point arbitration similarities SD1-SDK, wherein the rule set comprises: i) a first rule to increase a level of similarity when the multi-distance tri-point similarity S and a per-distance tri-point similarity SD indicate that the pair of data points are similar; ii) a second rule to increase a level of dissimilarity when the multi-distance tri-point similarity S and the per-distance tri-point similarity SD indicate that the pair of data points are dissimilar; and iii) a third rule to adjust a level of similarity towards neutral when one of the multi-distance tri-point similarity S and the per-distance tri-point similarity SD indicates that the pair of data points are similar and the other indicates that the pair of data points are dissimilar.
Patent History
Publication number: 20180322363
Type: Application
Filed: Jul 17, 2018
Publication Date: Nov 8, 2018
Patent Grant number: 10956779
Inventors: Aleksey M. URMANOV (San Diego, CA), Alan Paul WOOD (San Jose, CA), Anton A. BOUGAEV (San Diego, CA)
Application Number: 16/037,116
Classifications
International Classification: G06K 9/62 (20060101); G06F 17/16 (20060101);