PREDICTION MODELS INCORPORATING STRATIFICATION OF DATA

A Prism Vote process can be used to make predictions based on a data set that may be heterogeneous. The data set can be stratified, e.g., using techniques adapted from principal component analysis. A prediction model can be trained independently on each stratum of the data set. To make a prediction for a “testing” data sample, the prediction model for each stratum can be used to provide a per-stratum prediction of an outcome, and a per-stratum probability that the testing data sample belongs to each stratum can be determined. The predicted outcome (e.g., a probability of a particular outcome) can be computed from the per-stratum predictions weighted according to the per-stratum probabilities.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/915,459, filed Oct. 15, 2019, the disclosure of which is incorporated herein by reference.

BACKGROUND

This disclosure relates generally to prediction of outcomes and in particular to prediction models that incorporate stratification of data.

The ability to predict outcomes can be useful in many contexts. For example, in the field of medicine, optimal recommendations related to cancer screenings (e.g., how often to perform screening and/or which screening tests to perform) may vary depending on the particular patient's risk of cancer. As another example, if a patient has a particular disease, the choice of treatment may vary depending on the likelihood of a favorable outcome.

Traditionally, statistical correlations have been used to generate predictions based on one or more variables, using techniques such as linear or logistic regression. In traditional methods, a research team designs a study to test a specific hypothesis that a particular variable (or set of variables) correlates with a particular outcome, then collects a number of samples sufficient to test the hypothesis, with the number being determined in advance based on the expected effect size, potential confounding variables that are to be controlled for, and so on.

More recently, machine learning has increased the potential for individualized predictions, particularly when confronted with large numbers of potentially relevant variables. A machine-learning classifier is given a (usually large) set of “training” data that represent cases where a set of variables and an outcome are known. The classifier can be trained using known training procedures to optimize a (mathematically complex) formula that predicts outcomes from the variables. Very often, the training of a machine-learning classifier is a dynamic process, with new samples being added to the training data set as information about outcomes for more cases becomes available. Training of the classifier is repeated from time to time to take advantage of the additional information.

SUMMARY

As data sets become larger, they also tend to become more heterogeneous. This increasing heterogeneity can decrease the accuracy of prediction algorithms that are based on treating the entire training data set as a single population. For instance, a variable that is a strong predictor for one segment of the population may have little effect on another segment.

Certain embodiments of the claimed invention relate to techniques for making predictions based on a data set that may be heterogeneous. The heterogeneity is handled using a systematic approach that stratifies the data, e.g., using techniques adapted from principal component analysis. A prediction model can be trained independently on each stratum of the data set. To make a prediction for a “testing” data sample, the prediction model for each stratum can be used to provide a per-stratum prediction of an outcome, and a per-stratum probability that the testing data sample belongs to each stratum can be determined. The predicted outcome (e.g., a probability of a particular outcome) can be computed from the per-stratum predictions weighted according to the per-stratum probabilities.

The techniques described herein are applicable to any data set that may represent a heterogeneous population. While examples described herein relate to disease prediction using genomic data, similar techniques may be applied in other contexts. For example, in the field of health care, the data may include biomarkers other than genomic data (e.g., blood chemistry data; medical imaging data; biometric parameters such as heart rate or blood pressure; family medical history; behavioral parameters such as diet or exercise), and the prediction may relate to a diagnosis (e.g., presence or absence of a particular disease), likelihood of developing a disease, expected response to a particular course of treatment, and so on. The techniques described herein can also be applied to other fields, such as finance (e.g., predicting future investment returns or likelihood of default on a loan), insurance (e.g., predicting likely value of future claims by an insured person), and so on.

The following detailed description, together with the accompanying drawings, will provide a better understanding of the nature and advantages of the claimed invention.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a flow diagram of a process for predicting likelihood of an outcome according to an embodiment of the present invention.

FIG. 2 shows a flow diagram of a process for segmenting, or stratifying, a training set that can be used with the process of FIG. 1 in some embodiments of the present invention.

FIG. 3 shows a flow diagram of a process for computing a predicted outcome that can be used with the process of FIG. 1 in some embodiments of the present invention.

FIGS. 4A-4D show four graphs illustrating results of applying a process according to an embodiment of the present invention to simulated data sets.

FIG. 5 is a bar chart illustrating results of applying a process according to an embodiment of the present invention to simulated data sets.

FIG. 6 is a graph showing receiver operating characteristic (ROC) curves for Alzheimer's disease data analyzed using a process according to an embodiment of the present invention and a global logistic regression.

FIG. 7 is a graph showing ROC curves for schizophrenia data analyzed using a process according to an embodiment of the present invention and a global logistic regression.

DETAILED DESCRIPTION

To provide an understanding of various features of the claimed invention, embodiments are described in which genomic data is used to predict likelihood of an individual developing a specific disease. It should be understood, however, that the same techniques can be applied to other types of data, and the invention is not limited to genomic data, to disease prediction, or to the field of health care.

Prism Vote Processes

FIG. 1 shows a flow diagram of a process 100 for predicting likelihood of an outcome according to an embodiment of the present invention. Process 100 can be implemented using a computer system of suitable design.

At block 102, a training set of data samples is identified. The training set can include any number (N) of individual data samples, which can be samples corresponding to different individual subjects or different instances of a phenomenon being studied. For each data sample xi, it is assumed that a set of p variables {xij} (for j=1, . . . , p) has been measured and that an outcome yi is known. For example, the set of variables {xij} can represent genomic data indicating presence or absence of each of p different single nucleotide polymorphisms (SNPs). For each SNP, the variable xij can have the value 0, 1, or 2, corresponding to the minor allele count in the genotype observed for a subject. For example, if G is the minor allele and the observed genotype is GG, the SNP value is coded as 2. If the observed genotype is CC, the SNP value is coded as 0. In the case of disease prediction, outcome yi can indicate presence (yi=1) or absence (yi=0) of a disease. In cases such as predicting a variable physical characteristic (e.g., blood glucose level or cholesterol level), outcome yi can be a continuously-valued variable. Other coding schemes can be used, depending on the particular information being represented in the data samples xi.
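By way of a concrete and purely illustrative example of this coding scheme, the following Python sketch maps an observed two-letter genotype to a minor allele count; the function name and the genotype string representation are assumptions for illustration, not part of any described embodiment:

```python
# Minimal sketch of the 0/1/2 SNP coding described above.
# The genotype string format and minor-allele argument are illustrative assumptions.

def snp_code(genotype: str, minor_allele: str) -> int:
    """Count occurrences of the minor allele in a two-letter genotype.

    E.g., with minor allele 'G': 'GG' -> 2, 'CG' -> 1, 'CC' -> 0.
    """
    return sum(1 for allele in genotype if allele == minor_allele)

assert snp_code("GG", "G") == 2
assert snp_code("CG", "G") == 1
assert snp_code("CC", "G") == 0
```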

At block 104, the training set of data samples is segmented into a number of strata. The number of strata can be chosen based on the size of the data set (i.e., the number N of samples) and a minimum average number of samples (C) for a stratum. In some embodiments, the number of strata (K) can be chosen within the range 2≤K≤N/C. As described below, a prediction model is trained for each stratum, and the minimum average number of samples C can be determined based in part on the particular prediction model that is to be trained. For example, many prediction models based on linear regression assume a data set of a minimum size (which depends on the number of variables in the model), and an appropriate choice of C may be 30, 50, 100, or the like, depending on the number of variables. Machine-learning classifiers may require even larger training sets to produce reliable prediction models, particularly if the number of variables is large. An example of a technique that can be used in some embodiments to optimize the number of strata for a given training data set is described below.
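For example, with N=1,000 training samples and a minimum average stratum size of C=50, the number of strata K could be chosen anywhere in the range 2≤K≤20.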

FIG. 2 shows a flow diagram of a process 200 for segmenting, or stratifying, a training set that can be implemented at block 104 of process 100. Process 200 involves using a matrix representation of the training data and elements of principal component analysis to define similarity.

At block 202, a matrix X is defined from the training data. In some embodiments, each row of matrix X can correspond to a data sample xi and each column to a variable. Thus, for a training data set of N samples, each having p variables, X can be an N×p matrix. Depending on the particular combination of variables, it may be desirable to perform a normalization operation on each column (e.g., normalizing each variable to a percentage of a maximum value) so that all variables are in a similar numerical range.

At block 204, eigenvalues and eigenvectors can be computed for the X matrix. Specifically, eigenvalues λj and eigenvectors vj (for j=1, . . . , N) can be computed according to:


$$XX' v_j = \lambda_j v_j, \tag{1}$$

where X′ is the matrix transpose of X. It is assumed that the eigenvalues are ordered by magnitude from largest to smallest.

At block 206, a subset consisting of a number q of the eigenvectors vj can be selected as the “top” (or most significant) eigenvectors. For example, the eigenvalues λj can be ordered according to magnitude, and the q largest eigenvalues can be identified as representing the most statistically significant components. (The choice of q in a given case can be determined using a scree plot or similar techniques.) The q eigenvectors vj corresponding to the q most statistically significant components are selected as the top eigenvectors.

At block 208, a g vector can be computed from the top q eigenvectors according to:

$$g = \sum_{j=1}^{q} a_j v_j, \tag{2}$$

where

$$a_j = \frac{\lambda_j}{\sum_{i=1}^{q} \lambda_i}. \tag{3}$$

The g vector is an N-component vector that summarizes the variation of the N data samples along the top q eigenvectors, weighted by the corresponding eigenvalues λj; each component of g corresponds to a different data sample xi.

At block 210, the g vector can be used to segment the training data into K strata. Specifically, the components of the g vector can be ordered by magnitude, and the data samples in each quantile of g can be assigned to the same stratum. For example, if K=2, the data samples corresponding to the first N/2 components of (ordered) g can be assigned to one stratum and the rest to the other stratum. If K=20, each quantile can include N/20 data samples. (Since each component of g, and each component of an eigenvector, corresponds to a data sample, this also implies that portions of the eigenvectors are assigned to each stratum.)

In some embodiments, the same number of data samples (e.g., N/K) can be assigned to each stratum. If N/K is not an integer, rounding techniques can be used, with the result that the number of data samples in different strata may differ by 1. In other instances, the segmentation can be unequal, e.g., based on patterns in the eigenvalues or in the components of the g vector that suggest a natural clustering of the data samples. As long as each stratum includes at least the minimum number of data samples to support training of a prediction model on that stratum, different strata can include different numbers of data samples without restriction.

In cases where the variables {xij} represent genomic variation (e.g., where the variables represent SNPs), the eigenvectors can be interpreted as an ancestry direction. Subjects whose variation along the top q eigenvectors is similar are genetically closer and are grouped together. In this context, process 200 can be understood as an approach to breaking out and integrating heritability as a spectrum. In other cases, the interpretation may be different, but the general approach is to form clustered cohorts within a set of data samples where the clustering reflects endogenous structure within the data set.

At block 212, a center of each stratum can be computed by averaging the components of the top q eigenvectors over the data samples in the stratum. Specifically, for a stratum k (where k=1, . . . , K), a center can be defined as a q-component vector ck having components

$$c_j^k = \frac{1}{n_k} \sum_{i=i_0}^{i_0+n_k} v_{ij}, \tag{4}$$

where i0 is an index corresponding to the first data sample in the stratum, vij is the ith component of the jth eigenvector, and j=1, . . . , q. As described below, the vectors ck can be used to determine a probability that a new data sample belongs to stratum k.
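The following Python sketch (using NumPy) illustrates one possible implementation of process 200 under the assumptions above; the function name, the use of equal-size quantiles, and the particular eigendecomposition routine are illustrative choices, not requirements of the process:

```python
import numpy as np

def stratify(X: np.ndarray, K: int, q: int):
    """Segment N training samples into K strata per process 200.

    X: (N, p) matrix of training samples (block 202).
    Returns per-sample stratum labels, the top-q eigenvectors, and
    the stratum centers of Eq. (4).
    """
    N = X.shape[0]
    # Block 204: eigendecompose the symmetric N x N matrix XX'.
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    order = np.argsort(eigvals)[::-1]                 # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Block 206: keep the top q eigenvectors.
    top_vals, top_vecs = eigvals[:q], eigvecs[:, :q]  # top_vecs: (N, q)
    # Block 208: g vector, Eqs. (2)-(3).
    a = top_vals / top_vals.sum()
    g = top_vecs @ a                                  # N-component vector
    # Block 210: order samples by g and cut into K quantiles.
    ranks = np.argsort(g)
    labels = np.empty(N, dtype=int)
    for k, chunk in enumerate(np.array_split(ranks, K)):
        labels[chunk] = k
    # Block 212: stratum centers, Eq. (4).
    centers = np.stack([top_vecs[labels == k].mean(axis=0) for k in range(K)])
    return labels, top_vecs, centers
```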

Referring again to FIG. 1, after the training data has been segmented into strata (also referred to as stratifying the training data), at block 106 a prediction model is trained independently for each stratum. In some embodiments, the same prediction model is used for all strata, but because the training data sets are different, the models for different strata may yield different predictions.

By way of example, the prediction model can be a linear regression model that predicts an outcome as a continuously-valued variable y. A linear regression model for a training set of N data samples having p variables can be expressed as:


$$y = X\beta + \varepsilon, \tag{5}$$

where

$$y = \begin{pmatrix} y_1 \\ \vdots \\ y_N \end{pmatrix}$$

is a vector of (known) outcomes,

$$X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ 1 & x_{N1} & \cdots & x_{Np} \end{pmatrix}$$

is a matrix of (known) observations, and

$$\beta = \begin{pmatrix} \beta_0 \\ \vdots \\ \beta_p \end{pmatrix} \quad\text{and}\quad \varepsilon = \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_N \end{pmatrix}$$

are parameters to be computed. The error terms $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_N$ are independent of each other, with mean 0 and variance $\sigma^2$. Techniques for computing the parameters of a linear regression model from training data are known in the art and may be applied in the context of process 100.

In accordance with block 106 of process 100, the linear regression model of Eq. (5) is applied to each stratum separately. That is, instead of a single model, there are K models of the form:

$$y^k = X^k \beta^k + \varepsilon^k, \qquad k = 1, \ldots, K, \tag{6}$$

where

$$y^k = \begin{pmatrix} y_1^k \\ \vdots \\ y_{n_k}^k \end{pmatrix}, \quad X^k = \begin{pmatrix} 1 & x_{11}^k & \cdots & x_{1p}^k \\ 1 & x_{21}^k & \cdots & x_{2p}^k \\ \vdots & \vdots & & \vdots \\ 1 & x_{n_k 1}^k & \cdots & x_{n_k p}^k \end{pmatrix}, \quad \beta^k = \begin{pmatrix} \beta_0^k \\ \vdots \\ \beta_p^k \end{pmatrix}, \quad \varepsilon^k = \begin{pmatrix} \varepsilon_1^k \\ \vdots \\ \varepsilon_{n_k}^k \end{pmatrix}.$$

The components of each $\varepsilon^k$ are independently and normally distributed, with mean 0 and variance $\sigma_0^2$ (in this example, the variance is assumed to be the same for all k).

While linear regression is used as an example, it should be understood that process 100 is not limited to any specific prediction model. Other prediction models, such as logistic regression models, nonlinear models, support vector machine (SVM) models, and deep learning models, can be used, provided that sufficient data is available to train the prediction model independently for each stratum. Depending on the particular prediction model, training can include computing a linear regression (e.g., using Eq. (6)), applying a machine-learning algorithm to train a deep-learning classifier, or any other technique for training a particular prediction model based on available training data.
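As an illustration of block 106, the following Python sketch (using NumPy) fits an independent least-squares linear regression of the form of Eq. (6) to each stratum; the function name and data layout are assumptions for illustration, and any other per-stratum prediction model could be substituted:

```python
import numpy as np

def fit_per_stratum(X: np.ndarray, y: np.ndarray, labels: np.ndarray, K: int):
    """Fit an independent linear regression (Eq. (6)) for each stratum.

    X: (N, p) variables; y: (N,) outcomes; labels: per-sample stratum index.
    Returns a list of K coefficient vectors beta^k (intercept first).
    """
    betas = []
    for k in range(K):
        Xk = X[labels == k]
        yk = y[labels == k]
        # Prepend the intercept column of ones, as in the design matrix X^k.
        Ak = np.column_stack([np.ones(len(Xk)), Xk])
        beta_k, *_ = np.linalg.lstsq(Ak, yk, rcond=None)
        betas.append(beta_k)
    return betas
```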

The trained prediction models can be used to make predictions. For example, at block 108 of process 100, a “testing” sample (s) can be obtained. As used herein, the testing sample s can be any data sample that was not used in training the prediction models and for which the relevant variables xs={xsj} are known. In some instances, the outcome (ys) for the testing sample s is not known (e.g., when using the trained prediction models to predict an outcome for a patient in clinical practice). In other instances, the outcome ys may be known (e.g., when testing a trained prediction model to assess its performance).

At block 110, a predicted outcome for the testing sample can be computed based on the prediction model for each stratum and a probability that the testing sample belongs to that stratum. A per-stratum prediction can be computed from the prediction model for each stratum, and the per-stratum predictions can be combined using weights based on the probability that the testing sample belongs to each stratum.

FIG. 3 shows a flow diagram of a process 300 for computing a predicted outcome that can be used at block 110. At block 302, for each stratum k, a per-stratum prediction (yk) is computed on the assumption that the testing sample belongs to stratum k. For example, the (known) variables {xsj} associated with the testing sample s can be provided as inputs to the trained prediction model for each stratum, and the prediction model can be applied to compute the predicted outcome.

At block 304, for each stratum k, a per-stratum probability that the testing sample s belongs to the stratum can be determined. In some embodiments, the probability of a testing sample belonging to a particular stratum can be based on a distance to a center of that stratum and a Bayesian analysis.

For example, the center of a stratum can be defined according to Eq. (4) above. Distance to the center can be computed as follows. First, eigenvectors are computed for a matrix XX′ built from a test set that includes test sample s. (The matrix may also include some or all of the training samples.) Accordingly, an eigenvector vs for test sample s can be determined. The distance between vs and the center ck of stratum k can be computed as:

$$d_s^k = (v_s - c^k)'\,(\Sigma^k)^{-1}\,(v_s - c^k) = \sum_{j=1}^{q} \frac{1}{\sigma_j^2(k)} (v_{sj} - c_j^k)^2 \sim \chi_q^2, \tag{7}$$

where

$$\Sigma^k = \begin{pmatrix} \sigma_1^2(k) & & 0 \\ & \ddots & \\ 0 & & \sigma_q^2(k) \end{pmatrix}_{q \times q} \tag{8}$$

and $\sigma_j^2(k) = \mathrm{var}(v_{ij})$, $i = 1, \ldots, n_k$. The off-diagonal terms are 0 because the eigenvectors are orthogonal. The (tail) probability of observing a sample s having variables xs={xsj} under the hypothesis that sample s belongs to stratum k is:


$$\Pr(x_s \mid s \in k) = \Pr\big[\chi_q^2 > d_s^k\big]. \tag{9}$$

The closer the sample is to the center of the stratum, the larger the tail probability, and Pr(xs|s ∈ k) can be used as a similarity measure of sample s to stratum k. The distance dsk is measured in the space spanned by the top q eigenvectors, which captures the endogenous structure of the sample. Eq. (9) can be used in a Bayesian analysis to determine the probability that sample s belongs to stratum k given the variables xs, as described below. Other techniques for determining distance from a test sample to a stratum center can also be used.
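The following Python sketch (using NumPy and SciPy) illustrates the distance and tail-probability computations of Eqs. (7)-(9); the function name and argument layout are illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2

def stratum_tail_probs(v_s, centers, variances, q):
    """Per-stratum similarity Pr(x_s | s in k) of Eqs. (7)-(9).

    v_s:       (q,) eigenvector components for the testing sample.
    centers:   (K, q) stratum centers c^k from Eq. (4).
    variances: (K, q) per-stratum component variances sigma_j^2(k).
    Returns an array of K tail probabilities Pr[chi^2_q > d_s^k].
    """
    d = np.sum((v_s - centers) ** 2 / variances, axis=1)  # Eq. (7)
    return chi2.sf(d, df=q)                               # Eq. (9)
```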

At block 306, the predicted outcome for sample s is computed based on the per-stratum predicted outcomes yk determined at block 302 and a per-stratum probability that sample s belongs to stratum k. (Note that training of the prediction model is assumed not to have included test sample s.) In some embodiments, a Bayesian model can be used to compute the prediction. For example, suppose that the prediction model has two outcomes: y=1 corresponds to disease positive and y=0 corresponds to disease negative. The predicted probability of disease for subject s is the aggregated prediction from each stratum:

$$\Pr(y_s = 1 \mid x_s) = \sum_{k=1}^{K} \Pr(y_s = 1 \mid s \in k,\, x_s)\, \Pr(s \in k \mid x_s), \tag{10}$$

where the stratum-specific predictor Pr(ys=1|s ∈ k, xs) is the result of the prediction model from that stratum. The probability Pr(s ∈ k|xs) can be given by:

$$\Pr(s \in k \mid x_s) = \frac{\Pr(s \in k;\, x_s)}{\Pr(x_s)} = \frac{\Pr(x_s \mid s \in k)\,\Pr(s \in k)}{\sum_{k'=1}^{K} \Pr(x_s \mid s \in k')\,\Pr(s \in k')}, \tag{11}$$

where Pr(s ∈ k)=nk/N is the fraction of the training data set that is in stratum k, which is also the prior probability that a data sample belongs to stratum k. Pr(xs|s ∈ k) is the probability of observing xs when the data sample belongs to stratum k, e.g., as defined in Eq. (9).
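A minimal Python sketch of the aggregation of Eqs. (10)-(11), assuming the per-stratum predictions, the tail probabilities of Eq. (9), and the stratum sizes have already been computed (the names are illustrative):

```python
import numpy as np

def prism_vote(per_stratum_preds, tail_probs, stratum_sizes):
    """Aggregate per-stratum predictions via Eqs. (10)-(11).

    per_stratum_preds: (K,) predictions Pr(y_s = 1 | s in k, x_s).
    tail_probs:        (K,) similarities Pr(x_s | s in k) from Eq. (9).
    stratum_sizes:     (K,) counts n_k; the priors are n_k / N.
    """
    priors = np.asarray(stratum_sizes) / np.sum(stratum_sizes)
    posterior = np.asarray(tail_probs) * priors
    posterior /= posterior.sum()                        # Eq. (11)
    return float(np.dot(per_stratum_preds, posterior))  # Eq. (10)
```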

Process 100 is illustrative, and variations and modifications are possible. For example, the number of data samples and variables can be chosen as desired. Any type of prediction model can be used, including but not limited to linear regression models. (It is assumed that the same type of prediction model and same set of input variables are used for all strata.) Process 100 is representative of a category of processes in which a training data set is stratified and prediction models are trained for different strata, and in which predictions for testing samples are made by combining predictions from different strata according to the probability that the testing sample is in a particular stratum. Such processes are referred to herein as “Prism Vote” processes.

It is contemplated that a Prism Vote process such as process 100 can be performed by a computer and may operate on data sets of any size, having any number of variables. A set of prediction models and associated parameters (e.g., eigenvalues, eigenvectors, stratum centers) can be generated, e.g., by performing blocks 102-106, and the prediction models and parameters can be stored for later use. Obtaining testing samples and computing predicted outcomes for the testing sample (e.g., blocks 108 and 110) can be performed at any time after the prediction models have been trained and stored, and the same set of prediction models can be applied to any number of testing samples. In addition, the trained prediction models and associated parameters can be provided to computer systems other than the system that was used to train the prediction models. For example, a Prism Vote process for predicting disease can be trained by a research team, and the trained prediction models and associated parameters can be distributed to clinical workers (e.g., at a laboratory). The clinical workers can apply the trained prediction models and associated parameters to data collected from individual patients, e.g., by using a computer to compute a predicted outcome for a given patient. The predicted outcome can be used to inform treatment decisions or other care recommendations (e.g., changes in diet or lifestyle).

In some embodiments, a testing sample may be classified as an outlier if, for every stratum, its probability of belonging to that stratum (e.g., as determined from Eq. (11)) is less than a threshold (e.g., 0.05). Outliers can be treated differently from other testing samples. For instance, in addition to the per-stratum prediction models, a “global” prediction model can be trained on all of the training data samples (without stratification or other segmentation), and the predicted outcome for an outlier can be determined using the global prediction model. Where a testing sample is identified as an outlier, any report of the predicted outcome can be annotated to indicate that the testing sample is an outlier.
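A minimal sketch of this outlier-handling logic, assuming the per-stratum membership probabilities of Eq. (11) and both a Prism Vote prediction and a global-model prediction are already available (the names and the 0.05 default are illustrative):

```python
def predict_with_outlier_check(posteriors, pv_prediction, global_prediction,
                               threshold: float = 0.05):
    """Fall back to a global model when no stratum claims the sample.

    posteriors: per-stratum membership probabilities from Eq. (11).
    Returns (prediction, is_outlier) so a report can be annotated.
    """
    if max(posteriors) < threshold:
        return global_prediction, True   # testing sample is an outlier
    return pv_prediction, False
```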

Performance of a Prism Vote Process

A Prism Vote process such as process 100 allows different prediction models to be developed from different strata of a heterogeneous training data set. A tradeoff is that segmentation of the data set provides fewer data samples for training each per-stratum prediction model, which may make each per-stratum prediction model less reliable. To understand when a Prism Vote process can be expected to provide more reliable predictions than a “global” prediction model (i.e., a single prediction model trained using all of the training data), one can consider the expected prediction error (EPE) for different approaches. For purposes of illustration, it is assumed that the prediction model is a linear regression model as described above. Similar analysis can be applied for other prediction models.

For a “global” linear regression model $\hat{f}_{LR}(x)$ trained on all training data samples, the EPE can be expressed as:

$$\begin{aligned} \mathrm{EPE}_{LR}(x_0) &= E\big[(Y - \hat{f}_{LR}(x_0))^2 \,\big|\, X = x_0\big] \\ &= \mathrm{Var}(Y \mid X = x_0) + E\big[\hat{f}_{LR}(x_0) - E\hat{f}_{LR}(x_0)\big]^2 \\ &= \mathrm{Var}(Y \mid X = x_0) + \mathrm{Var}\big(\hat{f}_{LR}(x_0)\big). \end{aligned} \tag{12}$$

For a K-stratum Prism Vote process with a linear regression model trained for each stratum, the EPE can be expressed as:

$$\mathrm{EPE}_{PV}(x_0) = E\big[(Y - \hat{f}_{PV}(x_0))^2 \,\big|\, X = x_0\big] = E\bigg[\Big(Y - \sum_{k=1}^{K} w_k(x_0)\,\hat{f}_k(x_0)\Big)^2 \,\bigg|\, X = x_0\bigg], \tag{13}$$

where $\hat{f}_k(x_0)$ is the prediction from the regression model of the kth stratum for test sample $x_0$ and $w_k(x_0)$ is the weight of the prediction for the kth stratum for test sample $x_0$.

Assuming a linear regression model, a Prism Vote process is expected to provide better accuracy than a global model when $\mathrm{EPE}_{PV}(x_0) < \mathrm{EPE}_{LR}(x_0)$. This inequality reduces to the condition PVI > 0, where the criterion PVI is given by:

$$\mathrm{PVI} = \min_k \left\{ \frac{\sigma^2}{\sigma_0^2} - \frac{\dfrac{1}{\ln(n)} + x_0'\,(X^{k\prime} X^k)^{-1} x_0}{\dfrac{1}{\ln(Kn)} + x_0'\,(X'X)^{-1} x_0},\ k = 1, 2, \ldots, K \right\}, \tag{14}$$

where

$$\hat{\beta}^k = \begin{pmatrix} \hat{\beta}_0^k \\ \vdots \\ \hat{\beta}_p^k \end{pmatrix}$$

is the least-squares estimate based on observations from stratum k according to process 100,

$$\hat{\beta} = \begin{pmatrix} \hat{\beta}_0 \\ \vdots \\ \hat{\beta}_p \end{pmatrix}$$

is the least-squares estimate from the global linear regression model (trained on all samples), and

$$\sigma^2 = \sigma_0^2 + \frac{1}{Kn - (p+1)} \sum_{k=1}^{K} (\hat{\beta}^k - \hat{\beta})'\, X^{k\prime} X^k\, (\hat{\beta}^k - \hat{\beta}). \tag{15}$$

When Eq. (14) yields PVI>0, using a Prism Vote process with a linear regression model to train each stratum independently can be expected to provide better prediction performance than a single linear regression model trained on all the training samples.

In addition, when linear regression is used as the prediction model, the optimum number K of strata can be computed as:

$$\hat{K} = \arg\max_K \left\{ \min_k \left[ \frac{\sigma^2}{\sigma_0^2} - \frac{\dfrac{1}{\ln(n)} + x_0'\,(X^{k\prime} X^k)^{-1} x_0}{\dfrac{1}{\ln(Kn)} + x_0'\,(X'X)^{-1} x_0} \right],\ K = 1, 2, \ldots \right\}. \tag{16}$$

It should be understood that values of K other than that indicated by Eq. (16) may be selected, even if performance is sub-optimal. Further, where the prediction model is a model other than linear regression, similar logic can be used to define conditions under which a Prism Vote process is expected to outperform a single model and/or to determine an optimum number of strata.
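The following Python sketch (using NumPy) illustrates a direct evaluation of Eqs. (14)-(15) for a given test point. It assumes equal stratum sizes n, design matrices that already include an intercept column, and an externally supplied estimate of the per-stratum residual variance σ0² (e.g., a pooled residual variance); all names are illustrative assumptions:

```python
import numpy as np

def pvi(X_strata, x0, sigma2_0, betas_k, beta_global):
    """Compute the PVI criterion of Eqs. (14)-(15) for K strata.

    X_strata:    list of K design matrices X^k, each n x (p+1), intercept included.
    x0:          (p+1,) design vector of the testing point (leading 1 included).
    sigma2_0:    estimate of the per-stratum residual variance sigma_0^2.
    betas_k:     list of per-stratum least-squares estimates beta-hat^k.
    beta_global: global least-squares estimate beta-hat.
    """
    K = len(X_strata)
    X_all = np.vstack(X_strata)
    n = X_strata[0].shape[0]     # assumes equal stratum sizes n
    p1 = X_all.shape[1]          # p + 1
    # Eq. (15): sigma^2 inflates sigma_0^2 by the between-stratum coefficient spread.
    spread = sum((bk - beta_global) @ (Xk.T @ Xk) @ (bk - beta_global)
                 for Xk, bk in zip(X_strata, betas_k))
    sigma2 = sigma2_0 + spread / (K * n - p1)
    # Eq. (14): minimum over strata of the variance-ratio criterion.
    denom = 1.0 / np.log(K * n) + x0 @ np.linalg.inv(X_all.T @ X_all) @ x0
    terms = [sigma2 / sigma2_0
             - (1.0 / np.log(n) + x0 @ np.linalg.inv(Xk.T @ Xk) @ x0) / denom
             for Xk in X_strata]
    return min(terms)
```

Evaluating this quantity over candidate values of K, as in Eq. (16), gives one way to select the number of strata.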

EXAMPLE 1

To illustrate selection of the number of strata, a simulation study of four data sets was performed. In each data set, a “true” number of strata was chosen (1, 2, 3, or 4), and data was generated assuming each stratum follows a different linear-regression prediction model. Prism Vote models were trained on each data set with different values of K (K=1, 2, 3, 4, 5).

FIGS. 4A-4D show graphs 410, 420, 430, 440 corresponding to the four data sets. In FIG. 4A, the true number of strata was 1 (a traditional linear regression); in FIG. 4B, 2 strata; in FIG. 4C, 3 strata; in FIG. 4D, 4 strata. For each data set, PVI as defined by Eq. (14) above (lines 411, 421, 431, 441) and a mean square error (MSE) measure, −1×MSE (lines 412, 422, 432, 442), are shown as functions of the number of strata K used in the Prism Vote process. In each case, PVI is maximized when K is chosen to be the true number of strata, and the K value that yields maximum PVI also provides the smallest MSE, as indicated by Eq. (16) above.

EXAMPLE 2

To illustrate performance of a Prism Vote process, a simulation study has been performed using two populations having different phenotypes. Data was generated for five different scenarios. Each scenario used the same set of predictors (variables) and linear regression models, but different effect sizes (expressed as mean β difference) for various predictors between the two populations. In Scenario 1, there was no difference between effect sizes (mean β difference 0) for the two populations; in Scenarios 2-4, there were increasing degrees of difference between effect sizes (mean β difference 0.18, 0.4, 0.67); and in Scenario 5, the effects were completely different (mean β difference 1). The resulting PVIs for Scenarios 1-5 were 0.27, 0.10, 0.76, 1.46 and 3.23. For each scenario, a mean squared error (MSE) was calculated for a global linear regression model (trained on all data samples) and for a Prism Vote process with K=2.

FIG. 5 is a bar chart comparing the MSE for the predictions in each scenario. For each scenario, the result for the global (traditional) linear regression model is on the left and the result for the Prism Vote process is on the right. It can be seen from FIG. 5 that when there is effect size difference, the PVI is greater than 0, and a Prism Vote process can provide a reduced MSE (better performance) compared to conventional processes.

EXAMPLE 3

A Prism Vote process using a logistic-regression prediction model for each stratum has been applied to two whole-genome data sets associated with Alzheimer's disease and schizophrenia, respectively. A global logistic regression prediction model was also applied to the same data sets for comparison. Each trained model was applied to testing data for which the outcome was known, in order to assess sensitivity and specificity.

FIG. 6 is a graph showing receiver operating characteristic (ROC) curves for the Alzheimer's disease data for a Prism Vote process (line 602) and global logistic regression (line 604). With the Prism Vote process, the average Area-Under-the-Curve (AUC) reached 74.36% in 5-group cross validation (5GCV), an improvement of 3.5% over the conventional logistic regression.

FIG. 7 is a graph showing ROC curves for the schizophrenia data for the Prism Vote process (line 702) and global logistic regression (line 704). With the Prism Vote process, the 5GCV average AUC was 68.2%, an improvement of 3.1% over the conventional logistic regression.

These examples show that accuracy of prediction can be improved using Prism Vote processes of the kind described herein. It should be understood that these examples are illustrative and not limiting. Performance may depend on the particular set of variables and outcomes being modeled, as well as on the prediction model used, the number of strata, and the size of the data set.

Computer System Implementation

Data analysis and computational operations of the kind described herein can be implemented in computer systems that may be of generally conventional design, such as a desktop computer, laptop computer, tablet computer, mobile device (e.g., smart phone), or the like. Such systems may include one or more processors to execute program code (e.g., general-purpose microprocessors usable as a central processing unit (CPU) and/or special-purpose processors such as graphics processors (GPUs) that may provide enhanced parallel-processing capability); memory and other storage devices to store program code and data; user input devices (e.g., keyboards, pointing devices such as a mouse or touchpad, microphones); user output devices (e.g., display devices, speakers, printers); combined input/output devices (e.g., touchscreen displays); signal input/output ports; network communication interfaces (e.g., wired network interfaces such as Ethernet interfaces and/or wireless network communication interfaces such as Wi-Fi); and so on. Computer programs incorporating various features of the claimed invention may be encoded and stored on various computer readable storage media; suitable media include magnetic disk or tape, optical storage media such as compact disk (CD) or DVD (digital versatile disk), flash memory, and other non-transitory media. (It should be understood that “storage” of data is distinct from propagation of data using transitory media such as carrier waves.) Computer readable media encoded with the program code may be packaged with a compatible computer system or other electronic device, or the program code may be provided separately from electronic devices (e.g., via Internet download or as a separately packaged computer-readable storage medium).

As described above, training of prediction models and application of trained prediction models to training data can be performed at different times and/or by different computer systems or the same computer system. Further, the training portion of a Prism Vote process can be repeated from time to time as additional training data becomes available.

Additional Embodiments

While the invention has been described with reference to specific embodiments, those skilled in the art will appreciate that variations and modifications are possible. All processes described above are illustrative and may be modified. Processing operations described as separate blocks may be combined, order of operations can be modified to the extent logic permits, processing operations described above can be altered or omitted, and additional processing operations not specifically described may be added. Particular definitions and data formats can be modified as desired.

In various embodiments, Prism Vote processes can be implemented using any type of prediction model and any number of strata, provided sufficient data is available to train the per-stratum prediction models. A Prism Vote process can stratify the data based on endogenous structure of the data set, e.g., as described above, and this may result in improved performance for a particular type of prediction model.

Further, while the foregoing examples make reference to disease prediction using genomic data, it should be understood that this is merely one use-case for a Prism Vote process. For example, genomic data may be used to predict any phenotypic characteristic, including disease (or absence thereof), response to treatment, expected physiological characteristics (e.g., blood sugar or cholesterol levels), effectiveness of preventive measures, and so on. Likewise, the input variables are not limited to genomic data. In a health-care context, any quantifiable information about an individual that may be correlated with a medical condition can be used as a variable. Examples include medical imaging data, blood test results, stress test results, and so on.

The applicability of Prism Vote processes is also not limited to the field of health care. For instance, in the financial sector, similar techniques can be used to predict performance of a publicly-traded stock based on a number of variables, where data sets may be heterogeneous in relation to factors such as industrial sector, dependence on weather or commodities, and so on.

Thus, although the invention has been described with respect to specific embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.

Claims

1. A method for predicting likelihood of an outcome based on a set of variables, the method comprising:

identifying a training set of data samples, wherein for each data sample the variables and outcomes are known;
segmenting the training set into a plurality of strata based on a measure of similarity of the data samples;
training a prediction model for each stratum, wherein the prediction model predicts a likelihood of an outcome based on the variables and wherein training of the prediction model is performed independently for each stratum;
obtaining a testing sample for which the variables are known; and
predicting the outcome for the testing sample, wherein predicting the outcome includes: determining, for each stratum, a likelihood of the outcome using the prediction model for that stratum; determining, for each stratum, a probability that the testing sample belongs to that stratum; and computing a predicted outcome for the testing sample based on the likelihood for each stratum weighted by the probability that the testing sample belongs to that stratum.

2. The method of claim 1 wherein segmenting the training set includes:

building a matrix for the training set of samples;
computing a set of eigenvalues and a set of eigenvectors from the matrix;
sorting the eigenvectors based on respective magnitudes of the eigenvalues; and
using the sorted eigenvectors to segment the training set.

3. The method of claim 2 wherein using the sorted eigenvectors to segment the training set includes:

selecting a subset of the sorted eigenvectors as significant eigenvectors;
computing a weighted average vector of the significant eigenvectors, wherein the average is weighted according to the eigenvalues;
sorting components of the weighted average vector; and
using quantiles of the weighted average vector to assign each data sample from the training set to one of the strata.

4. The method of claim 1 further comprising:

computing a center for each of the plurality of strata.

5. The method of claim 4 wherein determining, for each stratum, a probability that the testing sample belongs to that stratum includes computing a distance metric between the testing sample and the center of that stratum.

6. The method of claim 1 wherein the predicted outcome for the testing sample is computed based on a Bayesian model.

7. The method of claim 1 wherein the prediction model for each stratum is a linear regression model.

8. The method of claim 1 wherein the prediction model for each stratum is a logistic regression model.

9. The method of claim 1 wherein the variables include genomic information about a subject and the outcome corresponds to a health characteristic of the subject.

10. The method of claim 9 wherein the health characteristic is presence or absence of a disease.

11. A computer system comprising:

a memory; and
a processor coupled to the memory and configured to: identify a training set of data samples, wherein for each data sample the variables and outcomes are known; segment the training set into a plurality of strata based on a measure of similarity of the data samples; train a prediction model for each stratum, wherein the prediction model predicts a likelihood of an outcome based on the variables and wherein training of the prediction model is performed independently for each stratum; obtain a testing sample for which the variables are known; and predict the outcome for the testing sample, wherein predicting the outcome includes: determining, for each stratum, a likelihood of the outcome using the prediction model for that stratum; determining, for each stratum, a probability that the testing sample belongs to that stratum; and computing a predicted outcome for the testing sample based on the likelihood for each stratum weighted by the probability that the testing sample belongs to that stratum.

12. The computer system of claim 11 wherein the processor is further configured such that segmenting the training set includes:

building a matrix for the training set of samples;
computing a set of eigenvalues and a set of eigenvectors from the matrix;
sorting the eigenvectors based on respective magnitudes of the eigenvalues; and
using the sorted eigenvectors to segment the training set.

13. The computer system of claim 12 wherein the processor is further configured such that using the sorted eigenvectors to segment the training set includes:

selecting a subset of the sorted eigenvectors as significant eigenvectors;
computing a weighted average vector of the significant eigenvectors, wherein the average is weighted according to the eigenvalues;
sorting components of the weighted average vector; and
using quantiles of the weighted average vector to assign each data sample from the training set to one of the strata.

14. The computer system of claim 11 wherein the processor is further configured to:

compute a center for each of the plurality of strata,
wherein determining, for each stratum, a probability that the testing sample belongs to that stratum includes computing a distance metric between the testing sample and the center of that stratum.

15. The computer system of claim 11 wherein the predicted outcome for the testing sample is computed based on a Bayesian model.

16. The computer system of claim 11 wherein the prediction model for each stratum is a linear regression model.

17. The computer system of claim 11 wherein the prediction model for each stratum is a logistic regression model.

18. The computer system of claim 11 wherein the variables include genomic information about a subject and the outcome corresponds to a health characteristic of the subject.

19. A computer-readable storage medium having stored therein program code instructions that, when executed by a processor of a computer system, cause the computer system to perform a method comprising:

identifying a training set of data samples, wherein for each data sample the variables and outcomes are known;
segmenting the training set into a plurality of strata based on a measure of similarity of the data samples;
training a prediction model for each stratum, wherein the prediction model predicts a likelihood of an outcome based on the variables and wherein training of the prediction model is performed independently for each stratum;
obtaining a testing sample for which the variables are known; and
predicting the outcome for the testing sample, wherein predicting the outcome includes: determining, for each stratum, a likelihood of the outcome using the prediction model for that stratum; determining, for each stratum, a probability that the testing sample belongs to that stratum; and computing a predicted outcome for the testing sample based on the likelihood for each stratum weighted by the probability that the testing sample belongs to that stratum.

20. The computer-readable storage medium of claim 19 wherein segmenting the training set includes:

building a matrix for the training set of samples;
computing a set of eigenvalues and a set of eigenvectors from the matrix;
sorting the eigenvectors based on respective magnitudes of the eigenvalues; and
using the sorted eigenvectors to segment the training set.

21. The computer-readable storage medium of claim 20 wherein using the sorted eigenvectors to segment the training set includes:

selecting a subset of the sorted eigenvectors as significant eigenvectors;
computing a weighted average vector of the significant eigenvectors, wherein the average is weighted according to the eigenvalues;
sorting components of the weighted average vector; and
using quantiles of the weighted average vector to assign each data sample from the training set to one of the strata.

22. The computer-readable storage medium of claim 19 further comprising:

computing a center for each of the plurality of strata,
wherein determining, for each stratum, a probability that the testing sample belongs to that stratum includes computing a distance metric between the testing sample and the center of that stratum.

23. The computer-readable storage medium of claim 19 wherein the predicted outcome for the testing sample is computed based on a Bayesian model.

24. The computer-readable storage medium of claim 19 wherein the prediction model for each stratum is a linear regression model.

25. The computer-readable storage medium of claim 19 wherein the prediction model for each stratum is a logistic regression model.

26. The computer-readable storage medium of claim 19 wherein the variables include genomic information about a subject and the outcome corresponds to a presence or absence of a disease in the subject.

Patent History
Publication number: 20210110926
Type: Application
Filed: Oct 2, 2020
Publication Date: Apr 15, 2021
Inventors: Haitian Maggie WANG (Shatin), Xiaoxuan XIA (Weifang), Chung-Ying Benny ZEE (Homantin)
Application Number: 17/062,387
Classifications
International Classification: G16H 50/20 (20060101); G06N 7/00 (20060101); G06N 20/00 (20060101);