STRATIFYING CONDITIONAL CUMULATIVE DISTRIBUTION MODELS
A computing system comprises a processor configured to receive a plurality of simulations, a discrete distribution function, and one or more cumulative distribution models. One or more conditional cumulative distribution models are generated based at least in part on the discrete distribution function and the one or more cumulative distribution models. A range of the one or more cumulative distribution models and one or more conditional cumulative distribution models is stratified, and a sum of discrepancy scores is computed for the plurality of simulations. In each of one or more resampling iterations, one or more simulations are replaced with one or more resampled simulations based on a policy. An updated sum of discrepancy scores is generated for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations. The plurality of simulations are output subsequent to performing the one or more resampling iterations.
This invention relates generally to modeling stochastic processes. Particularly, but not exclusively, the invention relates to modeling correlated distributions. The invention may allow processor hardware to generate, based at least in part on a set of initial simulation results, a set of modified simulation results that is more representative of one or more cumulative distribution models and one or more conditional cumulative distribution models.
BACKGROUND
Models of correlated stochastic processes are used in fields such as weather forecasting, electrical grid management, finance, supply chain management, and insurance. In some instances, these models are based on Monte Carlo simulations. For example, in energy-related applications, estimates of renewable energy outputs, such as availability of wind, solar, and hydroelectric power resources, drive scheduling of other power generation resources to meet demand. However, random variables representing availability of the renewable energy outputs may be correlated with weather events. Hence, Monte Carlo methods may be used to estimate aggregate power production.
As another example, estimates of insurance claims over time drive decisions made by insurance companies to hold capital. Claims from different sources may be correlated due to underlying events (e.g., severe weather). Monte Carlo methods may be used to estimate aggregate losses due to such events.
As yet another example, in retail applications, businesses set inventory levels for various products at regional distribution centers to meet estimated customer demand. However, customer demand is correlated between nearby localities due to social effects. For example, a baseball team in Chicago winning a championship game may result in above-average demand for baseball memorabilia across other cities and towns near Chicago. Monte Carlo methods may be used to estimate how much baseball memorabilia will be sold across these localities.
Many models aim to predict an entire distribution of variables, rather than focusing primarily on so-called tail statistics of events that are relatively infrequent (e.g., financial losses due to multiple 500-year floods occurring in a single season). Such models may quantify the constituents of a risk profile that drive a risk measure or a capital allocation decision, and may capture selected results near the center region of the distribution as well as near its tails. For example, in insurance-related applications, the distribution may be used to predict average financial losses due to weather, which may be modeled near the center region of the distribution. The distribution may also be used to allocate risk capital based on a likelihood of an above-average financial loss, which may be modeled at a tail end of the distribution.
However, technical challenges exist in developing such complex models. For example, performing a sufficient number of Monte Carlo simulations to develop an accurate model may demand extensive processor time and memory utilization. Reducing the number of simulations may accelerate model generation but may also sacrifice accuracy relative to models based on a greater number of simulations. Aspects and embodiments of the invention have been devised with the foregoing in mind.
SUMMARY
To address these issues, computing systems and methods are provided for outputting a plurality of representative simulation results based on stratification. One example aspect provides a computing system comprising a processor configured to receive, for a plurality of correlated variables, a first predetermined number of simulations from a Monte-Carlo simulation sample, each simulation including a plurality of initial simulation results for the plurality of the variables, one or more target statistics, a second predetermined number of strata for each variable and each target statistic, and a cumulative distribution function for each variable and each target statistic. For each variable and for each of the one or more target statistics, a unit interval of the cumulative distribution function is segmented into the second predetermined number of strata and a support of the cumulative distribution function is segmented into a plurality of bins such that each bin of the plurality of bins corresponds to one of the strata. An initial discrepancy score is determined based upon a quantity of values in each bin, the first predetermined number of the simulations, and the second predetermined number of the strata for the variable. An initial sum of the initial discrepancy scores is determined. At least one of the plurality of the initial simulations is removed based upon a determination that the initial sum of the initial discrepancy scores is not within an optimization threshold. At least one other simulation is added to a remaining one or more initial simulations. For each variable and for each of the one or more target statistics, the quantity of the values in one or more bins corresponding to the at least one of the plurality of the initial simulation results and the quantity of the values in one or more bins corresponding to the at least one other simulation result are used to generate an updated discrepancy score. The computing system is further configured to determine an updated sum of the updated discrepancy scores, and to output a plurality of representative simulations that represent the cumulative distribution functions across the strata based upon the updated sum of the updated discrepancy scores.
According to another aspect of the present disclosure, a computing system is provided, including a processor configured to receive, for a plurality of correlated random variables, a simulation sample including a plurality of simulations. Each simulation may include a plurality of simulation results. The processor may be further configured to generate a surrogate cumulative distribution model at least in part by estimating a plurality of surrogate model parameters based at least in part on the plurality of simulation results. Based at least in part on the surrogate cumulative distribution model with the surrogate model parameters, the processor may be further configured to select one or more subsets of the plurality of simulations. In each of one or more resampling iterations, until a sum of one or more respective discrepancy scores of the one or more subsets is determined to meet an optimization threshold, the processor may be further configured to compute the one or more discrepancy scores of the one or more subsets. Based at least in part on the sum of the one or more discrepancy scores, the processor may be further configured to sample one or more resampled simulations for the plurality of correlated random variables from among the plurality of simulations that are included in the simulation sample and not already included in the one or more subsets. The processor may be further configured to replace one or more simulations included in the one or more subsets with the one or more resampled simulations. The processor may be further configured to output the simulations included in the one or more subsets subsequently to performing the one or more resampling iterations.
Another example aspect provides a computing system, comprising a processor configured to receive a plurality of simulations, a discrete distribution function, and one or more cumulative distribution models. Each simulation includes a plurality of simulation results. One or more conditional cumulative distribution models are generated based at least in part on the discrete distribution function and the one or more cumulative distribution models. A range of the one or more cumulative distribution models and one or more conditional cumulative distribution models is stratified into a number of strata. A sum of discrepancy scores is computed for the plurality of simulations based at least in part on the cumulative distribution models and conditional cumulative distribution models. One or more resampling iterations are performed until a sum of one or more respective discrepancy scores is determined to meet an optimization threshold. In each of the one or more resampling iterations, one or more resampled simulations are generated based at least in part on the one or more cumulative distribution models. One or more simulations are replaced with the one or more resampled simulations based on a policy. An updated sum of discrepancy scores is generated for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations. The plurality of simulations are output subsequent to performing the one or more resampling iterations.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Section 1. Introduction
As introduced above, technical challenges exist for the development and use of Monte-Carlo-based models for correlated stochastic processes. Generally, performing a larger number of Monte Carlo simulations results in more uniform sampling across a distribution than can be achieved with fewer simulations. This may result in a more accurate model. However, performing large numbers of Monte Carlo simulations may be computationally expensive, requiring both time and processing resources, and reducing the number of simulations may result in a less accurate model. Opportunities exist to address these countervailing technical challenges and improve accuracy while decreasing the processing time required.
To address these issues, systems and methods are disclosed herein that model correlated variables using quantum-based optimization routines to achieve both accurate and efficient performance. Outputs of the quantum-based optimization routines can be used to build a model based upon a relatively small number of Monte Carlo simulations while maintaining similar accuracy to a model based upon a larger number of simulations. The systems and methods disclosed herein potentially can achieve similar accuracy using only 1/10 to 1/1000 of the number of simulations used by conventional approaches. Briefly, the systems and methods disclosed herein minimize a discrepancy metric on a sequence of simulations to within an optimization threshold. This increases the predictive power of the simulations by decreasing sampling error. As a result, a full distribution may be obtained with up to several orders of magnitude fewer simulations.
At a high level,
The computing system 102 is configured to receive, as input marginal distributions, data representing a plurality of correlated variables. Some examples of suitable variables include, but are not limited to, sales of one or more predefined stock-keeping units (SKUs) in a geographic region, insurance losses in one or more business lines, and power generated at one or more power plants. These three specific examples are provided to aid the understanding of the invention; however, it will be appreciated that the data types are not so limited. In the example depicted in
In some examples, the event-driven model 125 comprises a frequency-severity model 126. The event-driven model 125 may include uncertainty in a number of events that occur and uncertainty in a value associated with each event. A dependent variable, such as the total of the values associated with each event (e.g., a collective risk model), is often of interest. The values associated with each event may additionally or alternatively be considered when developing models of insurance claim values, models of flood defense breaches, models of power surges, models of equity prices over time, models of operational risks, etc. The frequency-severity model 126 relates the frequency with which an event occurs to the severity of a value associated with the event. As an illustrative example, the frequency-severity model 126 may model weather events. For example, the frequency-severity model 126 may model first events 128 such as empirical measurements of the total output of wind power production over a period of time (e.g., a calendar year), given a discrete number of second events 130, such as windstorm events occurring during the year. As described in more detail below, in some examples, the systems and methods disclosed herein are configured to apply a policy to stratify the event-driven model 125, such as the frequency-severity model 126, into a plurality of strata, which are subgroups of the data that form a collectively exhaustive and mutually exclusive partition of the input data. Stratifying the input data in this manner results in lower sampling error than the use of a random sample of the same number of Monte-Carlo simulations that are not well stratified. It will be appreciated that the conditional cumulative distribution for each of the plurality of dependent variables may be conditional upon a predetermined quantity of the correlated events occurring. For example, in the above wind power example, consider a Monte-Carlo simulation that produces simulation results that are not well stratified, for example, because they contain few or no examples of power produced in years when fewer than three windstorms occurred. A model generated based on such simulation results will be less likely to properly reflect the correlations for one or more target statistics that are conditional upon fewer than three windstorms occurring, even though the model may fairly accurately reflect correlations in other strata or for an aggregate of all events in the frequency-severity model 126 (e.g., average power produced by all storms occurring in any year). In addition, the computing system 102 can achieve more uniform sampling across the distribution with fewer simulations. This in turn reduces processor and memory utilization, as a larger number of randomly selected simulations would otherwise be required to achieve similar accuracy. The stratification of the event-driven models is described in more detail below with reference to
The computing system 102 is further configured to receive a copula 118. The copula 118 describes the correlation between the first input data 104, the second input data 106, and the third input data 108. The copula 118 may have a distribution 120 of various forms. For example, the copula may be a Gaussian copula with a Gaussian distribution, or an Archimedean copula such as the Clayton copula or the Gumbel copula. In addition, the copula 118 may have an associated correlation matrix 122.
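For illustration, a minimal sketch of sampling correlated variables through a Gaussian copula with a given correlation matrix is shown below. The marginal distributions, parameter values, and function names are hypothetical and are not prescribed by this disclosure.

```python
import numpy as np
from scipy import stats

def sample_gaussian_copula(corr, marginals, n_samples, rng=None):
    """Draw correlated samples: multivariate normal -> uniform scores -> inverse marginal CDFs."""
    rng = np.random.default_rng(rng)
    dim = corr.shape[0]
    # Correlated standard normal draws with the copula's correlation matrix.
    z = rng.multivariate_normal(mean=np.zeros(dim), cov=corr, size=n_samples)
    # Map to uniform marginals via the standard normal CDF.
    u = stats.norm.cdf(z)
    # Apply each variable's inverse marginal CDF (percent point function).
    return np.column_stack([m.ppf(u[:, k]) for k, m in enumerate(marginals)])

# Hypothetical example: three correlated, lognormally distributed production variables.
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.2],
                 [0.3, 0.2, 1.0]])
marginals = [stats.lognorm(s=0.5), stats.lognorm(s=0.8), stats.lognorm(s=1.2)]
samples = sample_gaussian_copula(corr, marginals, n_samples=10_000, rng=42)
```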
The output data 116 represents a dependent variable that is a function of the first input data 104, the second input data 106, and the third input data 108. In some examples, the output data 116 represents an aggregate of the first input data 104, the second input data 106, and the third input data 108. Some examples of suitable aggregates include, but are not limited to, aggregate sales, aggregate insurance losses, or aggregate power generation. For example, the first input data 104 may represent power generated at a solar array, the second input data 106 may represent power generated at a wind farm by relatively steady, low-intensity wind, and the third input data 108 may represent power generated by relatively infrequent but more severe wind events than those reflected in the second input data 106. In these examples, the output data 116 represents aggregate power generation as a function of the first input data 104, the second input data 106, and the third input data 108. In other examples, the output data 116 may comprise any other suitable output produced based upon the first input data 104, the second input data 106, and the third input data 108. For example, the output data 116 may comprise disaggregated data.
The output data 116 is generated by a correlator 124, for example using an Iman-Conover method to generate correlated samples. The following paragraphs describe examples of systems and methods for reducing discrepancy of a sample of simulations to within an optimization threshold of a distribution. As described in more detail below with reference to
The computing system 102 is additionally configured to model aggregate distributions as indicated at 134 and/or to compress Monte-Carlo simulations as indicated at 136. This enables the computing system 102 to produce a representative sequence of simulations at the output stage that accurately resembles the CDF of the aggregate distribution without requiring additional simulations. These aspects are described in more detail below with reference to
In some examples, and with reference now to
Section 2. Stratification Based on Discrepancy Scores
Referring first to
Here, A(x1, . . . , xn) represents a quantity of interest (e.g., power transmitted, uninsured losses, or stock shortages), and dμ(x1, . . . , xn) represents a likelihood of each factor or variable x1, . . . , xn. A(x1(j), . . . , xn(j)) is the quantity of interest as a function of the sequence of simulations x1(j), . . . , xn(j).
In some instances, the initial Monte Carlo simulation data 206 is subject to simulation or sampling error. The Koksma-Hlawka inequality bounds the error in estimating a statistic in terms of the total variation of the statistic and a *-discrepancy of a sequence of samples. The *-discrepancy of samples S={x1, . . . , xN} in the n-dimensional unit cube [0,1]^n is represented as:
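A standard form of the *-discrepancy consistent with this description (an assumed reconstruction of the referenced equation (2)) is:

D*(S) = sup(u1, . . . , un)∈[0,1]^n |#(S∩([0, u1]× . . . ×[0, un]))/N − μ([0, u1]× . . . ×[0, un])| (2)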
Here, μ is the n-dimensional Lebesgue measure, and the supremum is taken over the collection of intervals of the form [0, u1]× . . . ×[0, un]. However, the *-discrepancy can be challenging to compute for large dimensions n (e.g., tens or hundreds of dimensions, or more).
In other instances, simplified forms of the *-discrepancy are used. For example, with n=1:
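A plausible form of the one-dimensional simplification referenced as equation (3), consistent with the description that follows, is:

D*(S) = supu∈[0,1] |#(S∩[0, u])/N − u| (3)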
In equation (3), the # term counts the number of elements of S that fall within the selected interval, which is compared to the total number of elements N. The first term, #(S∩[0,u])/N, is discontinuous and jumps when u crosses a point in S. Thus, the simplified discrepancy metric of equation (3) does not require evaluating a supremum over a continuous variable. Instead, the discrepancy may be computed by restricting the evaluation points to a finite set of stratum boundaries. The number of samples in each stratum is counted and compared to the number of samples that would fall in each stratum if the samples were evenly distributed. However, it is technologically challenging to use the *-discrepancy (including the 1-dimension simplification and discretization described above) as a metric for selecting representative samples from a pool of Monte Carlo simulation results in an optimization process, as it is computationally expensive to recompute the discrepancy at each step of the optimization process.
To address these issues, examples are disclosed that relate to selecting a representative sequence of simulations based on stratification. Briefly, an initial discrepancy score is determined based upon a quantity of values in each bin, a first predetermined number of the simulations (e.g., a quantity of the simulations), and the second predetermined number (e.g., a quantity) of the strata for the factor or variable. As described in more detail below, this expression of the discrepancy score is computationally tractable. The representative sequence of simulations is selected in an iterative optimization process in which at least one simulation result value is removed from a set of initial simulation result values, and at least one other simulation result value is added to the set. An updated discrepancy score is determined using the quantity of the values in one or more bins corresponding to the removed simulation result value and the added simulation result value. As described in more detail below, computing the updated discrepancy score in this manner is less computationally expensive than recomputing the initial discrepancy scores for the entire set. This enables a computing system to select and output a plurality of representative simulation result values, which may be utilized for downstream processing (e.g., Iman-Conover-based analysis of correlated variables).
The computing system 202 is configured to receive, for a plurality of correlated variables, a first predetermined number 212 of simulations 204 from a Monte Carlo simulation sample. Each simulation includes a plurality of initial simulation results 206 for the plurality of the variables (e.g., the initial simulation results include Monte Carlo simulation data for at least one dimension j). For simplicity, the initial simulation results 206 are described herein as one-dimensional. In other examples the initial simulation results 206 include two or more dimensions. In yet other examples, the initial simulation results 206 include 10 or more dimensions.
The computing system 202 is further configured to receive the CDF 208 for each variable, and a second predetermined number 210 of strata for the variable. A unit interval of the CDF 208 is segmented into the second predetermined number of strata and a support of the CDF 208 (e.g., elements of a domain of the CDF 208 which are not mapped to zero) is segmented into a plurality of bins such that each bin corresponds to one of the strata. In this manner, the quantity 210 of strata governs accuracy of an estimated distribution resulting from one or more representative simulation result values 218 output by the computing system. The CDF 208 is also stratified for each of one or more target statistics 213. For example, the target statistics 213 may include one or more of: an aggregate outcome or dependent variable value, a minimum value, a maximum value, or any other target statistic desired to be computed based upon the dependent variable. Similarly, the unit interval of the CDF 208 for each target statistic is segmented according to the second predetermined number 210 of strata.
The quantity 210 of strata may be greater than or equal to the first predetermined number 212 of the sample points 204. The computing system 202 may attempt to place at least one simulation into each bin. Thus, a larger quantity 210 of strata results in the CDF 208 being segmented into a larger number of bins 216, which may result in a more even distribution of the representative simulation results 218 across the unit interval 214 than a smaller quantity 210 of strata. In some examples, the quantity 210 of strata is 10,000-50,000 strata. In other examples, the quantity 210 of strata is greater than 50,000 strata (e.g., 250,000 strata). In yet other examples, the quantity 210 of strata is less than 50,000 strata (e.g., 1,000 strata).
For example, the CDF 208 of
The computing system 202 is further configured to place a value x for each initial simulation 204 into one of the plurality of bins 216A-216J. To place each value x, the computing system 202 is configured to determine a value of the CDF u=F(x)∈[0,1] of each value x. The value of the CDF u represents the cumulative likelihood at which the value x resides within the CDF. The computing system 202 then places the value x into one of the plurality of bins 216 based on the value of the CDF u according to equation (4):
bin(x)=└u·M┘ (4)
In equation (4), the value of the CDF u is multiplied by the quantity of strata (M). The product of the value of the CDF u and the quantity of strata M is truncated to the next lowest integer. This integer represents the bin in which the value x is placed. As a result of performing this procedure, the strata coincide with a division of the unit interval of the CDF (e.g., [0,1]) into M quantiles. However, it will also be appreciated that this example is not meant to be limiting, and that each value x may be placed into a bin in any other suitable manner.
Referring again to
An initial discrepancy score 220 is determined based upon a quantity of values in each bin 216, the first predetermined number 212 of the simulations 204, and the second predetermined number 210 of the strata for the variable or target statistic. As described in more detail below, the initial discrepancy score measures deviation between the initial Monte Carlo simulation data and the CDF.
In some examples, determining the initial discrepancy score 220 includes determining an initial bin-wise discrepancy metric 222 for each bin 216. In some examples, the initial bin-wise discrepancy metric 222 for a selected bin includes a difference between the quantity of values in the selected bin (bk(s)) and the first predetermined number 212 (N) of the simulations 204 divided by the second predetermined number 210 (M) of the strata for the variable:
With reference again to
Thus, the initial bin-wise discrepancy metric 222 measures the homogeneity of the initial Monte Carlo simulation data across bins 216A-216J.
In some examples, the initial discrepancy score 220 comprises a maximum bin-wise discrepancy metric from the plurality of bins 216:
In this manner, the initial discrepancy score 220 represents the maximum discrepancy between the initial simulation results 206 and the CDF 208.
In other examples, the initial discrepancy score 220 comprises a sum of the initial bin-wise discrepancy metrics 222 for the plurality of bins 216:
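Correspondingly, equation (7) may take the form:

D(s) = Σk dk(s) = Σk |bk(s) − N/M| (7)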
In this manner, the initial discrepancy score 220 represents an aggregate discrepancy for the initial simulation results 206.
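For illustration, the following is a minimal one-dimensional sketch of computing the bin counts and discrepancy scores. The function names are hypothetical, the absolute-difference form assumed above for equations (5)-(7) is used, and the clamping of u=1 into the top bin is an implementation assumption.

```python
import numpy as np
from scipy.stats import norm

def assign_bins(values, cdf, n_strata):
    """Equation (4): bin(x) = floor(F(x) * M), with u = 1 clamped into the top bin."""
    u = cdf(np.asarray(values))
    return np.minimum((u * n_strata).astype(int), n_strata - 1)

def bin_counts(values, cdf, n_strata):
    """Count how many simulation values fall into each of the M bins."""
    return np.bincount(assign_bins(values, cdf, n_strata), minlength=n_strata)

def discrepancy_scores(counts, n_sims, n_strata):
    """Bin-wise metrics of equation (5) and the max/sum scores of equations (6)/(7)."""
    binwise = np.abs(counts - n_sims / n_strata)
    return binwise.max(), binwise.sum()

# Hypothetical usage with a standard normal marginal CDF: N = 500 simulations, M = 50 strata.
rng = np.random.default_rng(0)
sims = rng.normal(size=500)
counts = bin_counts(sims, norm.cdf, n_strata=50)
d_max, d_sum = discrepancy_scores(counts, n_sims=sims.size, n_strata=50)
```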
Some applications require accuracy at one or both tails of the CDF that is greater than or equal to the accuracy of the initial Monte Carlo simulation data at or near (e.g., within one standard deviation of) the mean. Accordingly, in some examples, the computing system 202 is configured to weight the initial bin-wise discrepancy metric for each bin of the plurality of bins based upon a proximity of a stratum corresponding to the bin to a tail of the CDF. For example, in
The computing system 202 is further configured to determine an initial sum 226 of the initial discrepancy scores 220. In some examples, the initial discrepancy scores 220 are summed for all variables (e.g., the dimension j and any other dimensions of the initial simulation results 204). As described in more detail below, the initial sum 226 serves as an optimization metric to select the representative simulation result values 218 for output.
The initial sum 226 of the initial discrepancy scores 220 is compared to an optimization threshold 228. As described in more detail below, if a set of simulation results meets the optimization threshold 228, the set of simulation results is output as the representative simulation results 218. In some examples, “meeting the optimization threshold” refers to the initial sum 226 of the initial discrepancy scores 220 being less than or equal to the optimization threshold 228. In this manner, the computing system 202 ensures that the representative simulation results 218 closely resemble the CDF 208.
In some examples, the optimization threshold 228 is derived from the parameters of the simulations and statistics. For example, the optimization threshold 228 may be a user-specified parameter that is based upon the first predetermined number 212 (N) and the second predetermined number 210 of the strata (M) described above. In some examples, the optimization threshold is in the range of 0.001 N/M to 0.5 N/M, which represents a deviation of 0.1% to 50% from a homogeneous distribution. In other examples, the optimization threshold is in the range of 0.01 N/M to 0.25 N/M. In yet other examples, the optimization threshold is in the range of 0.01 N/M to 0.5 N/M. In this manner, optimization may be terminated upon the initial sum 226 of the initial discrepancy scores 220 being within a user-specified percentage of an optimal value (e.g., representing a homogeneous distribution across the strata M).
It will also be appreciated that the optimization threshold 228 may be defined in any other suitable manner. Another suitable example of the optimization threshold 228 includes a user-specified discrepancy value. In this manner, optimization may be terminated upon the initial sum 226 of the initial discrepancy scores 220 being less than or equal to the user-specified discrepancy value.
In yet other examples, the optimization process is additionally or alternatively terminated based upon reaching or exceeding a user-specified runtime duration or a user-specified number of iterations. In such examples, the computing system 202 may proceed as if the optimization threshold 228 is met to output a current set of simulation results as the plurality of representative simulation result values 218. The computing system 202 may additionally or alternatively output a notification to the user that the user-specified runtime duration or the user-specified number of iterations is reached or exceeded.
If the set of simulation results does not meet the optimization threshold, the set of simulation results is modified as indicated at 230. To generate the modified set of simulation results 230, the computing system 202 is configured to remove at least one of the plurality of the initial simulations 204 based upon a determination that the initial sum 226 of the initial discrepancy scores 220 is not within the optimization threshold 228. For example,
At least one other simulation result is added to a remaining one or more initial simulation results. It will be appreciated that “added” means that the at least one other simulation result is included in a set with the remaining one or more initial simulation results, rather than being numerically added to those results. For example, another simulated value 206B is added to the values that remain from the initial simulation results 206 after the value 206A is removed. In the example of
In some examples, the at least one other simulation result value 206B is derived from a precomputed Monte Carlo simulation 232. The precomputed simulation 232 may be selected from a pool of precomputed Monte Carlo simulation results that also includes the initial simulation results 206. Precomputing enables the computing system 202 to estimate the CDF 208 for an aggregate of all the precomputed simulation results (including the simulations that are selected as the representative simulations 218 and the simulations that are not selected) upfront. This enables the computing system 202 to compare the same CDF 208 to the initial simulation results 204 and to the modified simulation result values 230, rather than recomputing a new CDF 208 for the modified simulation result values 230.
Like the selection of the removed value 206A, in some examples, the computing system 202 generates or selects the at least one other simulation result value 206B via a randomized process. In this manner, the simulation result 206B may reduce simulation error in the modified data 230 via random sampling. In other examples, the simulation result value 206B is explicitly selected, either by the computing system 202 or a user, to drive the modified data 230 towards the optimization threshold 228.
As introduced above, the computing system 202 is configured to generate an updated discrepancy score 234 for each variable. The updated discrepancy score is generated using the quantity of the values in one or more bins from which one or more simulation results 206A are removed and the quantity of the values in one or more bins into which one or more other simulation result values 206B are added. As described in more detail below, this formulation of the updated discrepancy score 234 does not require the computing system 202 to recompute the initial bin-wise discrepancy metric 222 for each bin 216A-216J. Instead, the computing system 202 may reuse the initial bin-wise discrepancy metric 222 for bins that are not modified (e.g., bins 216A-216H), and an updated bin-wise discrepancy metric 236 is computed for bins in which one or more simulation results have been added or removed (e.g., bins 216J and 216I, respectively). This results in the discrepancy score 234 being updated more rapidly, while demanding less processor time and memory allocation, than recomputing the initial bin-wise discrepancy metric 222 for all of the bins 216A-216J each time a simulation result is added and/or removed.
For example, for a simulation value xk(j) that is removed (e.g., 206A), the computing system 202 is configured to compute its cumulative probability (u) and bin (bin(xk(j))) according to the following equations:
u=Fk(xk(j)) (8)
bin(xk(j))=└u·M┘ (9)
The computing system is further configured to decrement bin count bk of bin(xk(j)) in dimension k by the number of simulation values removed (e.g., 1).
For a simulation value xk(j′) that is added (e.g., 206B), the computing system 202 is configured to compute its cumulative probability (u′) and bin (bin(xk(j′))) according to the following equations in the same manner as described above:
u′=Fk(xk(j′)) (10)
bin(xk(j′))=└u′·M┘ (11)
The computing system is further configured to increment bin count b′k of bin(xk(j′)) in dimension k by the number of simulation values added (e.g., 1). The updated quantities of values in each of the bins 216I and 216J are used to generate the updated bin-wise discrepancy metric using equation (5). As described above, the initial bin-wise discrepancy metrics 222 are re-used for bins 216A-216H. The computing system is configured to use this set of the updated bin-wise discrepancy metrics (for bins 216I and 216J) and the initial bin-wise discrepancy metrics 222 (for bins 216A-216H) to generate the updated discrepancy score 234.
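A minimal sketch of this incremental update, building on the hypothetical helpers in the earlier sketch and again assuming the absolute-difference bin-wise metric, might look like the following. Only the touched bins are revisited, so the cost of each optimization step does not grow with the number of strata.

```python
def update_counts_on_swap(counts, removed_value, added_value, cdf, n_strata):
    """Decrement the bin of the removed value and increment the bin of the added value."""
    bin_removed = min(int(cdf(removed_value) * n_strata), n_strata - 1)
    bin_added = min(int(cdf(added_value) * n_strata), n_strata - 1)
    new_counts = counts.copy()
    new_counts[bin_removed] -= 1
    new_counts[bin_added] += 1
    return new_counts, (bin_removed, bin_added)

def update_sum_score(prev_sum, old_counts, new_counts, touched_bins, n_sims, n_strata):
    """Adjust the sum-of-|bk - N/M| score using only the bins that changed."""
    target = n_sims / n_strata
    new_sum = prev_sum
    for k in set(touched_bins):
        new_sum -= abs(old_counts[k] - target)  # remove the stale bin-wise term
        new_sum += abs(new_counts[k] - target)  # add the recomputed bin-wise term
    return new_sum
```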
In some examples, the updated discrepancy score 234 is determined using the same operation as the initial discrepancy score 220. For example, the updated discrepancy score 234 may be generated using equation (6) or (7). In this manner, the updated discrepancy score 234 may be comparable with the initial discrepancy score 220.
The computing system 202 is further configured to determine an updated sum 238 of the updated discrepancy scores 234.
In some examples, the computing system 202 is configured to accept or reject the at least one other simulation result value 206B for potential inclusion in an event-based model based upon the updated sum 238 of the discrepancy scores 234. For example, the computing system 202 may reject the at least one other simulation result value 206B if the updated sum 238 of the discrepancy scores 234 is greater than the initial sum 226 of the initial discrepancy scores 220. The computing system 202 may additionally or alternatively reject the removal of the value 206A if the other simulation result value 206B is rejected. In this manner, the computing system 202 is configured to drive the selection of the representative simulation result values 218 towards the optimization threshold 228.
The computing system 202 is configured to output the plurality of representative simulation result values 218 that represent the CDF 208 across the strata based upon the updated sum 238 of the discrepancy scores 234. For example, if the modified simulation results 230 meet the optimization threshold 228, the modified simulation results 230 are output as the representative simulation results 218. If the modified simulation results 230 do not meet the optimization threshold 228, the computing system 202 is configured to iteratively modify the simulation results and update the sum of the discrepancy scores until the optimization threshold is met.
With reference now to
It will be appreciated that the following description of method 600 is provided by way of example and is not meant to be limiting. It will be understood that various steps of method 600 can be omitted or performed in a different order than described, and that the method 600 can include additional and/or alternative steps relative to those illustrated in
Referring first to
At 604, the method 600 includes, for each variable and for each of one or more target statistics, segmenting a unit interval of the cumulative distribution function into the second predetermined number of strata and a support of the cumulative distribution function into a plurality of bins such that each bin of the plurality of bins corresponds to one of the strata. For example,
The method 600 further includes, at 606, determining an initial discrepancy score based upon a quantity of values in each bin, the first predetermined number of the simulations, and the second predetermined number of the strata for the variable. For example, the computing system 202 is configured to determine the initial discrepancy score 220 based upon the quantity of values in each bin 216, the first predetermined number 212 of the simulations 204, and the second predetermined number 210 of the strata. As described above, the initial discrepancy score measures deviation between the initial Monte Carlo simulation data and the CDF.
In some examples, at 608, determining the initial discrepancy score includes determining an initial bin-wise discrepancy metric for each bin of the plurality of bins. For example, the computing system 202 is configured to determine the initial bin-wise discrepancy metric 222 for each bin 216. The initial bin-wise discrepancy metric indicates how evenly the initial simulation results 206 are distributed between bins 216A-216J.
At 610, in some examples, determining the initial bin-wise discrepancy metric for a selected bin includes determining a difference between the quantity of values in the selected bin and the first predetermined number of the simulations divided by the second predetermined number of the strata for the variable or target statistic. For example, the initial bin-wise discrepancy metric 222 may be represented in the form of equation (5) described above. In this manner, the initial bin-wise discrepancy metric 222 measures discrepancy between the number of values in each bin and the number of values if the initial simulations were distributed evenly across the bins.
In some examples, at 612, determining the initial bin-wise discrepancy metric for the selected bin includes determining a maximum bin-wise discrepancy metric or determining a sum of the initial bin-wise discrepancy metrics for the plurality of bins. For example, the initial discrepancy score 220 may comprise the maximum bin-wise discrepancy metric described in equation (6) or the sum of the initial bin-wise discrepancy metrics described in equation (7). In this manner, the initial discrepancy score represents the largest discrepancy between the initial Monte Carlo simulation data and the CDF, or an aggregate discrepancy for the initial Monte Carlo simulation data, respectively.
At 614, in some examples, determining the initial discrepancy score includes weighting the initial bin-wise discrepancy metric for each bin of the plurality of bins based upon a proximity of a stratum corresponding to the bin to a tail of the cumulative distribution function. For example, in
Referring now to 616 of
For example, at 618, the method 600 includes removing at least one of the plurality of the initial simulations based upon a determination that the initial sum of the initial discrepancy scores is not within an optimization threshold. For example, one simulation result value 206A is removed from the plurality of the initial simulation result values in
At 620, the method 600 includes adding at least one other simulation to a remaining one or more initial simulations. For example, the simulation result value 206B is added to the values that remain after the value 206A is removed. Like the removal of the simulation result value 206A, adding the simulation result value 206B may reduce simulation error in the modified data 230 via random sampling or explicitly selecting a simulation value that is closer to the CDF than the removed value 206A. In addition, adding at least one other simulation result helps achieve or maintain a target number of representative simulation results for output.
In some examples, at 622, the at least one other simulation includes a precomputed simulation. For example, the computing system 202 is configured to precompute one or more Monte Carlo simulations 232. This enables the computing system 202 to estimate the CDF 208 upfront and enables re-use of the CDF 208 for the modified simulation result values 230, rather than recomputing a new CDF 208 at each step of the optimization process.
Referring now to 624 of
In some examples, at 626, generating the updated discrepancy score includes determining an updated bin-wise discrepancy metric for the one or more bins corresponding to the at least one of the plurality of the initial simulations and for the one or more bins corresponding to the at least one other simulation. The updated bin-wise discrepancy metric is used for the one or more bins corresponding to the at least one of the plurality of the initial simulations and for the one or more bins corresponding to the at least one other simulation. The initial bin-wise discrepancy metric is used for each of the remaining one or more initial simulations to generate the updated discrepancy score. For example, the computing system 202 may compute an updated bin-wise discrepancy metric 236 for bin(xk(j)) from which sample xk(j) is removed, and for bin(xk(j′)) to which a simulation value xk(j′) is added. The computing system 202 may re-use the initial bin-wise discrepancy metric 222 for any bins that are not modified. This results in the discrepancy score 234 being updated more rapidly and in a less computationally intensive manner than by recomputing the initial bin-wise discrepancy metric for each bin during the optimization process.
At 628, in some examples, generating the updated discrepancy score includes: decrementing the quantity of values in the one or more bins corresponding to the at least one of the plurality of the initial simulations; and incrementing the quantity of values in the one or more bins corresponding to the at least one other simulation. For example, the computing system is configured to decrement bin count bk of bin(xk(j)) in dimension k by the number of simulation values removed (e.g., 1). The computing system is further configured to increment bin count b′k of bin(xk(j′)) in dimension k by the number of simulation values added (e.g., 1). The updated quantities of values in each of the bins 216I and 216J are used to generate the updated bin-wise discrepancy metric.
The method 600 further includes, at 630, determining an updated sum of the updated discrepancy scores. For example, the computing system 202 is configured to calculate the updated sum 238. The updated sum serves as an aggregate discrepancy measure that can be compared to the same optimization threshold 228 as the initial sum 226. This comparison enables the computing system 202 to determine whether to initiate an additional optimization loop (e.g., by further modification of the modified simulation results 230) or whether the modified simulation results 230 satisfy the optimization threshold 228 for output as the representative simulation result values 218.
In some examples, at 632, the method 600 includes accepting or rejecting the at least one other simulation based upon the updated sum of the updated discrepancy scores. For example, the computing system 202 may reject the added simulation result value 206B if the updated sum 238 of the discrepancy scores 234 is greater than the initial sum 226 of the initial discrepancy scores 220. This may drive the modified simulation results 230 towards the optimization threshold 228.
At 634, the method 600 includes outputting a plurality of representative simulation results that represent the cumulative distribution function across the strata based upon the updated sum of the updated discrepancy scores. For example, the computing system 202 is configured to output the modified simulation results 230 as the representative simulation results 218 if the modified simulation results 230 meet the optimization threshold 228.
The above-described systems and methods may be used to select a representative sequence of simulations from a pool of Monte Carlo simulation data. At least one simulation is removed from a plurality of initial simulations based at least in part on an initial discrepancy score, and at least one other simulation is added to the values that remain. This may reduce the discrepancy of the selected sequence of simulations relative to the initial discrepancy score, either through statistical effects achieved via random sampling or by explicitly replacing an outlier from a CDF of the Monte Carlo simulation data values. In addition, adding at least one other simulation helps achieve or maintain a target number of simulations for output. An updated discrepancy score is generated and used to select a plurality of representative simulations for output. The updated discrepancy score is generated using an updated bin-wise discrepancy metric for any bins that are modified, and by re-using an initial bin-wise discrepancy metric for any bins that are not modified. This formulation of the updated discrepancy score may be updated more rapidly and in a less computationally expensive manner than by recomputing the initial bin-wise discrepancy metric for each bin. As a result, the above-described systems and methods increase the predictive power of models based upon a relatively small pool of representative samples by decreasing sampling error relative to the initial simulation results, while maintaining similar accuracy to a model based upon a larger number of samples.
As discussed above with reference to
When using the stratified sampling techniques discussed above, determining the locations of the boundaries between the bins 216 is one challenge that may arise. Depending upon the shape of the CDF 208, the locations within the unit interval of the boundaries between the bins 216 may vary between different sets of initial simulation results 204. In order to obtain a set of representative simulation results 218 with a reduced discrepancy relative to the initial simulation results 204, the computing system 202 may be configured to utilize a surrogate cumulative distribution model, as discussed in further detail below.
Section 3. Modeling Aggregate Distributions and Compressing Monte Carlo Simulations Using Surrogate Cumulative Distribution Model
In addition to modeling the shape of the CDF 208, the surrogate cumulative distribution model discussed below may also be used when modeling the dependent variables of the initial simulation results 204. One such target statistic of particular interest in applications such as insurance, inventory management, and energy production is the aggregate over the correlated random variables. The aggregate may, for example, be an aggregate loss by an insurer, an aggregate volume of a product sold, or an aggregate quantity of energy generated. As discussed below, the surrogate cumulative distribution model may be used when generating a low-discrepancy sample of an aggregate distribution.
The computing system 702 may be configured to receive a plurality of simulation results 712 for a plurality of correlated random variables 706. For example, as shown in
The plurality of simulation results 712 may, in some examples, include a plurality of aggregate values over the plurality of correlated random variables 706. In other examples, the plurality of simulation results 712 may include a plurality of minimum values or maximum values over the plurality of correlated random variables 706. Other target statistics of the correlated random variables 706 may additionally or alternatively be included among the plurality of simulation results 712.
In some examples, the computing system 702 may be configured to generate the plurality of simulation results 712 for the plurality of correlated random variables 706 at least in part by executing an Iman-Conover algorithm 750 at the event simulation module 704. When the Iman-Conover algorithm 750 is performed at the event simulation module 704, the computing system 702 may be configured to sample simulation results 712 for the plurality of correlated random variables 706 in a manner that preserves dependencies between those simulation results 712. Thus, the computing system 702 may be configured to generate the simulations 711. The dependencies between the correlated random variables 706 may be indicated by a copula 118 and/or a correlation matrix 122, as discussed above with reference to
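For reference, below is a compact sketch of a standard Iman-Conover reordering, which imposes an approximate target rank correlation on independently sampled marginal columns by reordering each column to match the ranks of a correlated score matrix. It is a generic formulation for illustration and is not necessarily how the Iman-Conover algorithm 750 is implemented in this disclosure.

```python
import numpy as np
from scipy.stats import norm, rankdata

def iman_conover(samples, target_corr, rng=None):
    """Reorder each column of `samples` (N x d) to approximate the target rank correlation."""
    rng = np.random.default_rng(rng)
    n, d = samples.shape
    # Van der Waerden scores, independently shuffled per column.
    base_scores = norm.ppf(np.arange(1, n + 1) / (n + 1))
    score_mat = np.column_stack([rng.permutation(base_scores) for _ in range(d)])
    # Remove the incidental correlation of the score matrix, then impose the target correlation.
    incidental = np.corrcoef(score_mat, rowvar=False)
    f = np.linalg.cholesky(incidental)
    p = np.linalg.cholesky(target_corr)
    t = score_mat @ np.linalg.inv(f).T @ p.T
    # Rank-order each marginal column to follow the ranks of the correlated scores.
    out = np.empty_like(samples, dtype=float)
    for j in range(d):
        ranks = rankdata(t[:, j], method="ordinal").astype(int) - 1
        out[:, j] = np.sort(samples[:, j])[ranks]
    return out
```

Because only the ranks within each column are rearranged, the marginal distributions of the input samples are preserved exactly.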
The event simulation module 704 may be configured to receive and/or compute a respective CDF 708 associated with a correlated random variable 706. The event simulation module 704 may be further configured to use the CDF 708 as input when computing the simulation results 712 included in the simulations 711. The CDF 708 may, for example, be the CDF 208 and may be estimated as shown in the example of
The computing system 702 may be further configured to generate a surrogate cumulative distribution model 714 configured to model the CDF 708. The surrogate cumulative distribution model 714 may, for example, be configured to model the CDF 708 of an aggregate value, a minimum value, or a maximum value of the plurality of variables, or some other statistic of the variables, which may be a user-defined custom function. As discussed below, the surrogate cumulative distribution model 714 may have a plurality of surrogate model parameters 718, and generating the surrogate cumulative distribution model 714 may include estimating the plurality of surrogate model parameters 718 based at least in part on the plurality of simulation results 712.
In some examples, the surrogate cumulative distribution model 714 may be a mixed Erlang model including a plurality of Erlang distributions 716. Thus, in such examples, the surrogate cumulative distribution model 714 may approximate the CDF 708 as a weighted sum of the plurality of Erlang distributions 716 in which each of the Erlang distributions 716 has a respective mixing weight included among the plurality of surrogate model parameters 718. Each of the plurality of Erlang distributions 716 may be parameterized by parameters k∈ℤ+ and λ∈ℝ+. The parameter k is the shape parameter of the Erlang distribution 716 and the parameter λ is the rate parameter of the Erlang distribution 716. Alternatively, the Erlang distributions 716 may each be parameterized in terms of k and β, where β=1/λ.
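Under this description, the mixed Erlang approximation of the CDF 708 may be written in the following assumed form, where the wi are the mixing weights and each component has its own shape ki and rate λi:

F(x) ≈ w1 FErlang(x; k1, λ1) + . . . + wm FErlang(x; km, λm), with wi ≥ 0 and w1 + . . . + wm = 1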
In some examples, the surrogate cumulative distribution model 714 may further include one or more substitute tail region distributions 720. The one or more substitute tail region distributions 720 may be configured to replace one or more respective tail regions of one or more of the plurality of Erlang distributions 716, and may differ from the one or more Erlang distributions 716 within the one or more respective tail regions. The one or more tail regions that are configured to be replaced with the one or more substitute tail region distributions 720 may include a lower tail of the mixed Erlang model and/or an upper tail of the mixed Erlang model. Thus, when the surrogate cumulative distribution model 714 includes one or more substitute tail region distributions 720, the surrogate cumulative distribution model 714 may be a piecewise function of the sum of the plurality of Erlang distributions 716 and the one or more substitute tail region distributions 720, with one or more respective threshold values that specify one or more cutoff points between the sum of the plurality of Erlang distributions 716 and the one or more substitute tail region distributions 720. In examples in which the surrogate cumulative distribution model 714 includes one or more substitute tail region distributions 720, the computing system 702 may be configured to normalize the surrogate cumulative distribution model 714 such that the integral of the surrogate cumulative distribution model 714 over the interval [0, ∞) is equal to 1.
In some examples, the surrogate cumulative distribution model 714 may be an empirical model. In such examples, the computing system 702 may be configured to estimate the surrogate model parameters 718 based at least in part on empirical data included in the plurality of simulations 711. The plurality of simulations 711 may, in such examples, include both empirically collected data and programmatically generated synthetic data.
By modeling the CDF 708 with a surrogate cumulative distribution model 714 expressed as a sum of a plurality of Erlang distributions 716, the computing system 702 may flexibly model the shape of the CDF 708 while only using a small number of parameters. In addition, including one or more substitute tail region distributions 720 in the surrogate cumulative distribution model 714 may allow the computing system 702 to more accurately represent a light-tailed or heavy-tailed distribution. Since heavy-tailed distributions are of particular interest in risk modeling applications (e.g., insurance against extreme weather events), a mixed Erlang model with substitute tail regions may allow for more accurate modeling of regions of the distribution that are particularly likely to be relevant to the user's decision-making.
In some examples, the computing system 702 may be configured to estimate the surrogate model parameters 718 at least in part by performing an expectation-maximization (EM) process. At an expectation step (E-step), the computing system 702 may be configured to compute an expectation value 734 of a log-likelihood of the surrogate model parameters 718 given the plurality of simulation results 712 included in the simulation sample 710.
At the M-step 740, the computing system 702 may be further configured to compute an estimated argmax of the expectation value 734 as a function of the surrogate model parameters 718. In some examples, the argmax may be estimated at least in part by executing a stochastic search algorithm such as simulated annealing, simulated quantum annealing, population annealing, or parallel tempering. Additionally or alternatively, the computing system 702 may be configured to compute the argmax of the expectation value 734 at least in part by iteratively performing a 3-optimal algorithm to update the shape parameters k of the Erlang distributions 716 to the estimated argmax values.
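As a concrete illustration, the following is a minimal sketch of a classical EM scheme for a mixed Erlang model in which the integer shape parameters are held fixed and a single scale parameter is shared across components. These are simplifying assumptions for illustration; the stochastic search over shape parameters described above is not shown.

```python
import numpy as np
from scipy.stats import gamma  # an Erlang distribution is a gamma distribution with integer shape

def fit_mixed_erlang(x, shapes, n_iter=200):
    """EM updates for the mixing weights w and a shared scale theta, with the shapes held fixed."""
    x = np.asarray(x, dtype=float)
    shapes = np.asarray(shapes)
    w = np.full(len(shapes), 1.0 / len(shapes))   # initial mixing weights
    theta = x.mean() / shapes.mean()              # rough initial scale
    for _ in range(n_iter):
        # E-step: responsibility z[i, j] of component j for observation i.
        dens = np.column_stack([gamma.pdf(x, a=k, scale=theta) for k in shapes])
        weighted = dens * w
        z = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of the weights and the shared scale.
        w = z.mean(axis=0)
        theta = x.sum() / (z @ shapes).sum()
    return w, theta

def mixed_erlang_cdf(x, shapes, w, theta):
    """Weighted sum of the component Erlang CDFs."""
    return sum(wi * gamma.cdf(x, a=k, scale=theta) for wi, k in zip(w, shapes))
```

Different candidate sets of shapes (and hence different numbers of components) could then be compared using the cross-validation procedure described below.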
At the cross-validation step 742, the computing system 702 may be further configured to select a reduced number of Erlang distributions 716 for inclusion in the surrogate cumulative distribution model 714 in order to avoid overfitting. Performing the cross-validation step 742 may include selecting a training set 710A and a validation set 710B that each include simulations 711 included in the simulation sample 710. The computing system 702 may use the simulations 711 included in the training set 710A as inputs when computing a fitted density function with the surrogate model parameters 718. The computing system 702 may be further configured to use the simulations 711 included in the validation set 710B as inputs, along with the fitted density function, when computing a cross-validation score. The computing system 702 may be further configured to compute respective cross-validation scores for different numbers of Erlang distributions 716 and select the number of Erlang distributions 716 that maximizes the cross-validation score. Accordingly, the computing system 702 may be configured to iteratively compute the parameters of a mixed Erlang model that accurately represents the CDF 708 while also including a small number of Erlang distributions 716.
Returning to the example of
The computing system 702 may be configured to stratify the image of the surrogate cumulative distribution model 714 into a number of strata. These strata may be quantiles. For example, a quantile may be a quintile, a decile, a percentile, or some other subset given by a partitioning of the range of the surrogate cumulative distribution model into equal-sized strata. The computing system 702 may be configured to estimate the locations of boundaries between the strata by computing the locations of boundaries between quantiles of the surrogate cumulative distribution model 714. In examples in which the locations of the boundaries are computed, the computing system 702 may be further configured to select one or more subsets 722 of the plurality of simulations 711 so that the simulation results 712 included in the simulations 711 included in the one or more subsets 722 are equally distributed across the strata.
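One simple realization, sketched below with hypothetical names and under the assumption that the surrogate cumulative distribution model 714 is available as a callable, buckets the precomputed simulation results by surrogate-CDF quantile and draws the same number of simulations from each stratum:

```python
import numpy as np

def select_stratified_subset(results, surrogate_cdf, n_strata, per_stratum, rng=None):
    """Pick `per_stratum` simulation indices from each quantile stratum of the surrogate CDF."""
    rng = np.random.default_rng(rng)
    u = surrogate_cdf(np.asarray(results))
    strata = np.minimum((u * n_strata).astype(int), n_strata - 1)
    chosen = []
    for s in range(n_strata):
        members = np.flatnonzero(strata == s)
        if members.size:  # a stratum may be empty if the precomputed sample is sparse there
            take = min(per_stratum, members.size)
            chosen.extend(rng.choice(members, size=take, replace=False))
    return np.asarray(chosen)
```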
The computing system 702 may be further configured to perform one or more resampling iterations 748 at a resampling module 724. During each of the resampling iterations 748, the computing system 702 may be configured to compute respective discrepancy scores 726 of the one or more subsets 722. Each discrepancy score 726 may be computed as discussed above with reference to
During each of the one or more resampling iterations 748, based at least in part on the discrepancy score 726, the computing system 702 may be further configured to sample one or more resampled simulations 730 for the plurality of correlated random variables 706. The resampled simulations 730 may be sampled from among the plurality of simulations 711 that are included in the simulation sample 710 and not already included in the one or more subsets 722. Thus, the computing system 702 may be configured to pre-compute the plurality of simulations 711 and resample the simulations 711 included in the one or more subsets 722 without having to generate additional simulations 711 at the event simulation module 704. The resampling iterations 748 may accordingly be executed more quickly in examples in which the event simulation module 704 takes substantial amounts of time to compute the simulations 711.
Subsequently to generating the one or more resampled simulations 730 in each resampling iteration 748, the computing system 702 may be further configured to replace one or more simulations 711 included in the one or more subsets 722 with the one or more resampled simulations 730. When one or more of the plurality of simulations 711 included in the one or more subsets 722 are resampled, the computing system 702 may be configured to select one or more of the simulations 711 included in the simulation sample 710 that are not currently included in the subset 722. Thus, one or more simulations 711 used to estimate the surrogate distribution model 714 may be reused, thereby reducing the amount of computation performed when generating the one or more resampled simulations 730. Accordingly, if the resampling iteration 748 in which the one or more resampled simulations 730 are generated is followed by a subsequent resampling iteration 748, the computing system 702 may be configured to use the updated subset 722 including the one or more resampled simulations 730 in the subsequent resampling iteration 748 when computing the one or more discrepancy scores 726.
In some examples, the computing system 702 may be configured to replace the one or more simulations 711 with the one or more resampled simulations 730 at least in part by performing a quantum-inspired algorithm. The quantum-inspired algorithm may, for example, be a Markov chain Monte Carlo algorithm such as simulated annealing, simulated quantum annealing, population annealing, or parallel tempering. The sum of the discrepancy scores 726 may, in such examples, be used as a loss for which the resampling module 724 may be configured to estimate a minimum value.
In some examples, when one or more of the plurality of simulations 711 are resampled, the computing system 702 may be configured to sample the one or more resampled simulations 730 at least in part by executing the Iman-Conover algorithm 750. The computing system 702 may be configured to perform the Iman-Conover algorithm 750 at the resampling module 724, with the simulations 711 included in the simulation sample 710 as input.
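A simplified sketch of the rank-reordering idea underlying the Iman-Conover procedure is shown below for illustration only; a production implementation would additionally correct for the sampling correlation of the score matrix, and the target correlation matrix is assumed to be supplied by the caller.

    import numpy as np
    from scipy.stats import norm

    def iman_conover(samples, target_corr, rng=None):
        """Reorder independently sampled marginals to approximate a target rank
        correlation (the core idea of the Iman-Conover procedure).

        samples: (n, d) array of marginal draws; target_corr: (d, d) desired
        correlation matrix. A simplified sketch, not a full implementation.
        """
        rng = np.random.default_rng() if rng is None else rng
        n, d = samples.shape
        # Van der Waerden (normal) scores, independently shuffled per column.
        scores = norm.ppf(np.arange(1, n + 1) / (n + 1))
        m = np.column_stack([rng.permutation(scores) for _ in range(d)])
        # Impose the target correlation on the score matrix via its Cholesky factor.
        t = m @ np.linalg.cholesky(target_corr).T
        # Reorder each column of the data to share the ranks of the reference column.
        out = np.empty_like(samples)
        for j in range(d):
            order = np.argsort(np.argsort(t[:, j]))  # ranks of the reference column
            out[:, j] = np.sort(samples[:, j])[order]
        return out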
When the resampling iteration 748 is performed, the computing system 702 may replace a simulation 711 with a resampled simulation 730. In this case, a simulation result 712 included in the simulation 711 is replaced with a resampled simulation result 731 included in the resampled simulation 730. This resampled simulation result 731 may be in the stratum 744 in which the simulation result 712 is located. Alternatively, the resampled simulation 730 may be in a different stratum from the stratum 744 in which the simulation result 712 is located. The simulation results 712 included in other simulations 711 of the simulation sample 710 may remain unchanged. Although
Returning to
In some examples, the additional computing process 732 may be a Monte Carlo algorithm 732A. In such examples, the one or more subsets 722 may be used as one or more compressed inputs to the Monte Carlo algorithm 732A with which the computing system 702 may compute an estimated solution to an optimization problem. The Monte Carlo algorithm 732A may be configured to compute the estimated solution to the optimization problem with a smaller set of inputs relative to the full simulation sample 710. However, since the subset 722 is resampled to have a sum of one or more discrepancy scores 726 below the predetermined discrepancy threshold 728, the accuracy of the Monte Carlo simulation may be maintained while reducing the number of inputs. Thus, compressing the simulation sample 710 using the surrogate cumulative distribution model 714 may allow the Monte Carlo simulation to be performed more efficiently.
In some examples, as shown in
At the GUI 800, the computing system 702 may be further configured to generate the surrogate cumulative distribution model 714 in response to receiving a surrogate model type selection 802 at the GUI 800. The surrogate model type selection 802 may include a selection of a type of function configured to be used as the surrogate cumulative distribution model 714. In the example of
At the GUI 800, the computing system 702 may be further configured to generate the one or more subsets 722 of the plurality of simulations 711 in response to receiving simulation generating instructions 804 at the GUI 800. The simulation generating instructions 804 may, as shown in the example of
The computing system 702 may be further configured to output the one or more subsets 722 of the simulations 711 to the GUI 800. The example GUI 800 of
The plurality of simulations 711 may be received from an event simulation module 704. The event simulation module 704 may be configured to receive a CDF 708 as input. In some examples, the plurality of simulation results 712 may be computed using an Iman-Conover algorithm 750.
At step 904, the method 900 may further include generating a surrogate cumulative distribution model 714 for the plurality of correlated random variables 706. Generating the surrogate cumulative distribution model 714 may include estimating a plurality of surrogate model parameters 718 based at least in part on the plurality of simulation results 712. The plurality of surrogate model parameters 718 may, for example, be estimated at least in part by performing iterative expectation maximization. For example, a GEM algorithm may be used. In some examples, the surrogate cumulative distribution model 714 may be an empirical model for which the surrogate model parameters 718 are estimated based at least in part on empirical data included in the plurality of simulations 711.
In some examples, the surrogate cumulative distribution model 714 may be a mixed Erlang model including a plurality of Erlang distributions 716. The plurality of surrogate model parameters 718 may include parameters of the Erlang distributions 716 and mixing weights for the Erlang distributions 716 in such examples. The surrogate cumulative distribution model 714 may further include one or more substitute tail region distributions 720 configured to replace one or more respective tail regions of one or more of the plurality of Erlang distributions 716. The one or more substitute tail region distributions 720 may replace an upper tail and/or a lower tail of the mixed Erlang model and may differ from the one or more Erlang distributions 716 within the one or more respective tail regions.
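By way of illustration, the following sketch evaluates a mixed Erlang CDF with a common rate parameter and splices in a generalized Pareto upper tail above a threshold u as one example of a substitute tail region distribution 720; the choice of tail family and splice point are assumptions, not limitations.

    import numpy as np
    from scipy.stats import erlang, genpareto

    def mixed_erlang_cdf(x, shapes, rate, weights):
        """CDF of a mixture of Erlang distributions sharing a common rate."""
        x = np.asarray(x, dtype=float)
        return sum(w * erlang.cdf(x, k, scale=1.0 / rate)
                   for k, w in zip(shapes, weights))

    def spliced_cdf(x, shapes, rate, weights, u, xi, beta):
        """Mixed Erlang body with an illustrative generalized Pareto upper tail
        spliced in above the threshold u (continuity is enforced at u)."""
        x = np.asarray(x, dtype=float)
        f_u = mixed_erlang_cdf(u, shapes, rate, weights)
        body = mixed_erlang_cdf(x, shapes, rate, weights)
        tail = f_u + (1.0 - f_u) * genpareto.cdf(x - u, xi, scale=beta)
        return np.where(x <= u, body, tail)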
At step 906, the method 900 may further include selecting one or more subsets 722 of the plurality of simulations 711 based at least in part on the surrogate cumulative distribution model 714 with the surrogate model parameters 718. In some examples, the one or more subsets may be selected to be of equal size. Alternatively, the plurality of subsets may have a plurality of different sizes.
Step 908, step 910, and step 912 of the method 900 may be performed in each of one or more resampling iterations 748. These steps may be performed until a sum of one or more respective discrepancy scores 726 of the one or more subsets 722 is determined to meet an optimization threshold. For example, the sum of the discrepancy scores 726 may meet the optimization threshold when the sum is below a predetermined discrepancy threshold 728. At step 908, the method 900 may include computing the one or more discrepancy scores 726 of the one or more subsets 722. At step 910, the method 900 may further include sampling one or more resampled simulations 730 for the plurality of correlated random variables 706 based at least in part on the sum of the one or more discrepancy scores 726. The one or more resampled simulations 730 may be sampled from among the plurality of simulations 711 that are included in the simulation sample 710 and not already included in the one or more subsets 722. At step 912, the method 900 may further include replacing one or more simulations 711 included in the one or more subsets 722 with the one or more resampled simulations 730. Thus, the sum of the one or more discrepancy scores 726 of the one or more subsets 722 may be reduced over the course of the plurality of resampling iterations 748.
At step 914, the method 900 may further include outputting the simulations 711 included in the one or more subsets 722 subsequently to performing the one or more resampling iterations 748. In some examples, the one or more subsets 722 may be output to a Monte Carlo algorithm 732A. The one or more subsets 722 may be one or more compressed subsets of inputs to the Monte Carlo algorithm 732A that have a reduced sum of one or more discrepancy scores 726 relative to the initial simulation sample 710. Thus, the Monte Carlo algorithm 732A may compute a solution to an optimization problem more efficiently by using the subset 722 as input.
Using the systems and methods discussed above, a compressed subset of Monte Carlo simulation inputs may be generated, thereby allowing a Monte Carlo simulation to be performed more efficiently without a large reduction in accuracy. For example, the variables indicated in the simulation results included in the compressed subset may be aggregate values of a dependent variable over multiple different types of events. The aggregate values may, for example, be values of an aggregate loss by an insurer, an aggregate amount of a product stocked, or an aggregate amount of energy generated. The systems and methods discussed above may accordingly facilitate the use of Monte Carlo methods to compute estimates of such quantities.
Section 4. Stratifying Event-Driven Models
Referring now to
In contrast to the case in which there is a fixed, known number of variables and given simulation results that require stratification, for a type of event-driven model called a frequency-severity model, the number of variables within each simulation varies according to a specified frequency distribution. Stratification therefore needs to be performed conditionally on the number of variables, and the information about the frequency distribution can be used to determine additional statistics to be stratified. This means that multiple conditional cumulative distribution functions need to be considered, corresponding to the distribution conditional on a given number of events. In the construction of the sum of discrepancy scores, terms are added for each of these conditional distributions.
The properties of the frequency distribution are used to determine how many terms are added. This determination will typically include a cutoff, since many frequency distributions do not have finite support. The cutoff may be based on a very low probability threshold, or may be configured to select one simulation from each stratum, so that, based on the number of simulations, there is a largest relevant number of events to condition on.
A term may also be added to the sum of the discrepancy scores to represent a dependent variable of simulation data, such as the aggregate amount under the frequency-severity model. The CDF of the dependent variable may be computed using techniques such as a Fast Fourier Transform (which may be done using tilting).
To address these issues, examples are disclosed that relate to stratifying event-driven models. Briefly, a conditional cumulative distribution model is approximated for a plurality of simulation results from a plurality of simulations. As described herein, the conditional cumulative distribution model refers to a distribution of a derived quantity for each simulation in an input dataset. The conditional cumulative distribution models are used to determine a sum of discrepancy scores for the plurality of simulations that satisfy the condition associated with the conditional cumulative distribution model. The plurality of simulation results within a selected simulation are replaced with another plurality of simulation results from a resampled simulation based upon a policy. An updated sum of discrepancy scores is generated for one or more remaining simulations and the resampled simulation. Accordingly, a result of one or more accepted simulations is more representative of the conditional cumulative distribution models. The result of the one or more accepted simulations can serve as an input to more general stratification and/or correlation methods (e.g., at the correlator 124 of
The computing system 1002 is configured to receive a simulation sample 1004 containing a plurality of simulations 1005. Each simulation may contain simulation results 1006, and the quantity of simulation results may vary between simulations. Simulation results may be correlated. Simulation results may represent variables that depend on other simulation results. For example, Table 1 shows an example of a plurality of simulations, in the form of wind storms, for which one simulation result, the total power generated by the wind farm, is the aggregate of the other simulation results, the power generated during individual storms.
In some examples, the computing system 1002 receives 50,000-250,000 samples resulting from Monte Carlo simulations. In other examples, and as described in more detail below, the simulation sample 1004 includes a smaller number of simulations (e.g., fewer than 50,000 simulations). As introduced above, a greater number of simulation results (e.g., 50,000-250,000) results in greater accuracy than a smaller number of simulation results. However, it can be computationally expensive to conduct such large numbers of simulations.
With continued reference to
Each row in Table 1 represents a simulation. Each simulation is tagged with a quantity of events that occur in that simulation, which is represented in the “# of Events” column in Table 1. The quantity of events follows a discrete distribution function (e.g., a Poisson distribution). The wind power is distributed according to a log-normal, Pareto, or beta distribution. An aggregate over all events is shown in the “Total Power” column. However, in each scenario in which only one event is modeled, the power produced by that event is relatively low compared to the maximum power produced by a single storm in Table 1 (e.g., 4355.40 MWh) and is not representative of an overall distribution of wind power.
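For illustration, a frequency-severity sample in the shape of Table 1 may be generated as in the following sketch, in which a Poisson number of storms is drawn per simulation and a log-normal power per storm; the parameter values are placeholders rather than values taken from Table 1.

    import numpy as np

    def simulate_wind_sample(n_sims, lam=2.0, mu=6.5, sigma=1.0, seed=0):
        """Generate an illustrative frequency-severity sample shaped like Table 1:
        a Poisson number of storms per simulation and log-normal power per storm
        (parameter values are placeholders, not taken from the disclosure)."""
        rng = np.random.default_rng(seed)
        sims = []
        for _ in range(n_sims):
            n_events = rng.poisson(lam)
            per_storm = rng.lognormal(mean=mu, sigma=sigma, size=n_events)
            sims.append({
                "n_events": int(n_events),
                "storm_power_mwh": per_storm.tolist(),
                "total_power_mwh": float(per_storm.sum()),
            })
        return sims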
In other examples, at least a portion of the simulations 1005 include empirical data (e.g., measured wind power produced by real-world storms). This may result in a more realistic starting point for the optimization of the simulation results, as a set of initial simulation results generated at random may include data that is unrealistically high (e.g., 10 GWh for 1 event) or low (e.g., 1 kWh for 5 events). As a result, the use of at least some empirical data may result in achieving a more representative sample of simulation results in fewer iterations of the optimization process than the use of randomly selected Monte Carlo simulations.
The computing system 1002 is further configured to receive one or more cumulative distribution models 1008. These cumulative distribution models may be for independent variables, such as the power produced in an individual wind storm given in Table 1. In some examples, the cumulative distribution model 1008 is based upon statistics, such as the “Total Power” given in Table 1. In this example, the cumulative distribution model 1008 may be computed based on the values of the variables from the simulation results in Table 1.
Other simulation results 1006 may be introduced leading to additional cumulative distribution models 1008. For example, in supply chain applications, stock levels of local sports paraphernalia are adjusted to account for demand spikes around sporting events. For example, sales of local sports paraphernalia are tied to wins as introduced above. If a number of wins rises above a threshold or a team enters post-season play, previously projected stock levels may no longer be accurate. Here, the number of wins may also represent a simulation result in addition to or as an alternative to aggregate sales.
The computing system 1002 is configured to approximate a conditional cumulative distribution model 1012 associated with the cumulative distribution model 1008 for the plurality of the simulation results 1006 in the plurality of simulations. In some examples, the conditional cumulative distribution models 1012 for each of the plurality of cumulative distribution models 1008 are conditional upon a predetermined quantity of the events occurring. In this manner, the conditional cumulative distribution models 1012 reflect simulation results 1006 that are based upon the predetermined quantity of the correlated events (e.g., mean power produced when at least 3 storms occur in Table 1).
In some examples, the conditional cumulative distribution model 1012 is approximated using a Fourier transform. The approximation of the conditional cumulative distribution model 1012 may additionally or alternatively include numerical techniques such as tilting. This allows the computing system 1002 to rapidly compute accurate conditional cumulative distribution models for target statistics.
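One non-limiting way to approximate the conditional distribution of the aggregate given exactly n events is an FFT-based n-fold convolution of a discretized severity distribution, optionally with exponential tilting to control aliasing, as in the following sketch (the discretization grid and the tilting parameter are assumptions supplied by the caller).

    import numpy as np

    def conditional_aggregate_pmf(severity_pmf, n_events, theta=0.0):
        """Approximate the distribution of the aggregate of exactly n_events
        i.i.d. severities on a discretized grid via FFT-based convolution.

        severity_pmf: probabilities of the discretized severity on grid points
        0, h, 2h, ... (zero-padded so the n-fold aggregate fits on the grid);
        theta: optional exponential tilting parameter (theta = 0 disables it).
        """
        f = np.asarray(severity_pmf, dtype=float)
        j = np.arange(len(f))
        # Exponentially tilt the severity PMF, convolve in the Fourier domain,
        # then undo the tilt on the result.
        f_tilt = f * np.exp(-theta * j)
        agg_tilt = np.fft.ifft(np.fft.fft(f_tilt) ** n_events).real
        agg = np.clip(agg_tilt * np.exp(theta * j), 0.0, None)
        return agg / agg.sum()

    def conditional_cdf(severity_pmf, n_events, theta=0.0):
        """Conditional cumulative distribution given exactly n_events events."""
        return np.cumsum(conditional_aggregate_pmf(severity_pmf, n_events, theta))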
In other examples, the conditional cumulative distribution model 1012 is approximated using a Monte Carlo method. In yet other examples, the conditional cumulative distribution model 1012 is approximated using a Fourier transform based upon Monte Carlo simulation data. For example, Monte Carlo methods may be used to provide boundaries that estimate how the cumulative distribution model 1008 supports the conditional cumulative distribution model 1012, while the conditional cumulative distribution model 1012 itself is computed using the Fourier transform. Although the use of Monte Carlo methods makes the approximation of the conditional cumulative distribution model 1012 non-deterministic, the use of Monte-Carlo-based support data may enable the computing system 1002 to compute the conditional cumulative distribution model 1012 more rapidly and with a similar or greater level of accuracy than via deterministic methods alone.
Accordingly, in some examples, the cumulative distribution models 1008 and the conditional cumulative distribution models 1012 are stratified into a number of strata. For example, the simulation results 1006 may include values over all events that occur, which are reflected by a cumulative distribution model 1008. The conditional cumulative distribution models 1012 may reflect the probability of values conditional upon a predetermined number of events occurring, values conditional upon at least a predetermined number of events occurring, or values conditional upon at most a predetermined number of events occurring. As described in more detail below, stratifying each of the cumulative distribution model 1008 and the conditional cumulative distribution models 1012 by the quantity of these events enables the computing system 1002 to optimize the simulation results to serve as a representative sample that accurately reflects a number of events and values associated with those events.
The one or more cumulative distribution models and one or more conditional cumulative distribution models 1012 associated with each are used to compute a sum of discrepancy scores 1014 for the plurality of the simulation results 1006 for each simulation 1004. For example, the sum of the discrepancy scores 1014 may be determined as described above with reference to
As described in more detail below, the computing system 1002 is operatively configured to perform one or more resampling iterations until the sum of one or more respective discrepancy scores is determined to meet an optimization threshold 1016. As described in more detail below, if a set of simulations meets the optimization threshold 1016, the computing system 1002 is configured to output an event-driven model including the accepted simulations as indicated at 1018. In some examples, “meeting the optimization threshold” refers to the sum of the discrepancy scores 1014 being less than or equal to the optimization threshold 1016. In this manner, the computing system 1002 ensures that the set of simulations closely approximates the conditional cumulative distribution models 1012.
If the sum of the discrepancy scores 1014 does not meet the optimization threshold, the set of simulations is modified as indicated at 1020. To generate the modified set of simulations 1020, the computing system 1002 is configured to replace one of the simulations 1004 with one or more resampled simulations. For example,
In the example depicted in
At least one resampled simulation result 1022B is added to the values that remain from the initial simulation results 1006 after the simulation result 1022A is removed. In the example of
In some examples, the resampled simulation result 1022B is contained in a precomputed simulation 1024. For example, the precomputed simulation 1024 may be selected from a pool of precomputed Monte Carlo simulations that also includes the simulations 1004. As described above, precomputing enables the computing system 1002 to estimate the cumulative distribution model 1008 for an aggregate of all the simulation results upfront.
Like the selection of the removed simulation result 1022A, in some examples, the computing system 1002 generates or selects the resampled simulation via a randomized process. In this manner, the resampled simulation result 1022B may reduce simulation error in the modified data 1020 via random sampling. In other examples, the resampled simulation result 1022B is explicitly selected, either by the computing system 1002 or a user, to drive the modified data 1020 towards the optimization threshold 1016.
The computing system 1002 is further configured to generate an updated sum of discrepancy scores 1026 for one or more remaining simulations and the resampled simulation 1022B. In some examples, the updated sum of the discrepancy scores 1026 is generated as described above with reference to
As described in more detail below, the computing system 1002 is configured to apply a policy 1028 to accept or reject the resampled simulation 1022B based upon the updated sum of the discrepancy scores 1026. For example, the computing system 1002 may reject the resampled simulation 1022B if the updated sum of the discrepancy scores 1026 is greater than the initial sum of the discrepancy scores 1014. The computing system 1002 may additionally or alternatively reject the removal of a simulation if the resampled simulation is rejected. In this manner, the computing system 1002 is configured to drive the discrepancy scores towards the optimization threshold 1016.
In some examples, the policy 1028 is implemented at a Markov Chain Monte Carlo (MCMC) agent 1030.
During the one or more iterations of the resampling loop 1032, the MCMC agent 1030 is configured to conditionally accept a set of modified simulations 1020 with a higher evaluated cost than a previous pass through the resampling loop 1032 more readily at higher temperatures 1034 and less readily at lower temperatures 1034.
In some examples, the computing system 1002 is configured to reduce the temperature parameter 1034 over a series of steps 1036. As the temperature 1034 is lowered on successive passes through the resampling loop 1032, the policy 1028 is further constrained to seek lower cost solutions, eventually trending toward a local minimum on the solution surface. This results in minimizing the updated sum 1026 of the discrepancy scores.
The MCMC agent 1030 is used to iteratively evaluate the modified simulations 1020 (including the one or more remaining simulations), and to adjust the temperature parameter 1034, when a simulation is replaced.
For each step 1036 through the resampling loop 1032, a value for the temperature parameter 1034 is determined by the MCMC agent 1030 according to a temperature function that trends lower over time (e.g., a temperature function for a quantum-inspired algorithm, as described in more detail below). In this example, the value for the temperature (K) of a first step I through the optimization loop is set to 5, the value (K) for a second step II is set to 3, the value (K) for a third step III is set to 2, and the value (K) for a fourth step IV is set to 1. The selected set of modified resampled simulations 1020 from the computing system 1002 are conditionally accepted by the MCMC agent 1030 at the value for the temperature parameter for each optimization loop. For instance, in the first step I through the optimization loop, the modified simulations 1020 are unconditionally accepted until reaching a local minimum, as the updated sum of the discrepancy scores 1026 decreases. On the other hand, the modified simulations 1020 are conditionally accepted after the local minima according to the value (K=5) for the temperature, as the updated sum of the discrepancy scores 1026 increases. In the second step II, the updated sum of the discrepancy scores 1026 decreases to a local minimum and then increases at the value (K=3) for the temperature. However, the increase in the updated sum of the discrepancy scores 1026 in the second step II is less than the increase in the updated sum of the discrepancy scores 1026 in the first step I since the value (K) for the temperature decreases from 5 to 3. In the same manner, the increase in the third step III is less than that of the second step II and the increase in the fourth step IV is less than that of the third step III. At the end of the fourth step IV, an estimate of a solution, which is the lowest point of the updated sum of the discrepancy scores 1026, is determined. In this manner, an optimized solution for a set of modified simulations 1020 having the lowest updated sum of the discrepancy scores 1026 can be computed with reasonable accuracy in an efficient number of optimization steps.
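The resampling loop 1032 described above may be sketched as follows, in which one simulation in the current subset is swapped with one from the precomputed pool and the swap is accepted according to the Metropolis rule at the current temperature; the temperature schedule (5, 3, 2, 1) mirrors the example above, while the score_fn callable is assumed to return the sum of the discrepancy scores for a candidate set.

    import math
    import random

    def resample_with_annealing(subset, pool, score_fn, temperatures=(5, 3, 2, 1),
                                steps_per_temp=500, rng=None):
        """Illustrative MCMC resampling loop: swap one simulation in the subset
        for one from the precomputed pool, and accept the swap with the
        Metropolis rule at the current temperature."""
        rng = rng if rng is not None else random.Random(0)
        subset, pool = list(subset), list(pool)
        cost = score_fn(subset)  # sum of discrepancy scores for the current subset
        for temp in temperatures:
            for _ in range(steps_per_temp):
                i = rng.randrange(len(subset))
                j = rng.randrange(len(pool))
                subset[i], pool[j] = pool[j], subset[i]  # propose a swap
                new_cost = score_fn(subset)
                delta = new_cost - cost
                if delta <= 0 or rng.random() < math.exp(-delta / temp):
                    cost = new_cost                          # accept: keep the swap
                else:
                    subset[i], pool[j] = pool[j], subset[i]  # reject: swap back
        return subset, cost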
In some examples, the policy 1028 is tuned according to a temperature parameter of a quantum-inspired algorithm 1050 to transition from a first optimization threshold to a second, updated optimization threshold. As used herein, the term “quantum-inspired algorithm” refers to an algorithm run on traditional computing hardware that emulates one or more features of quantum mechanics for a computational advantage. In particular, quantum-inspired optimization algorithms emulate quantum tunneling, an effect that provides an advantage to the adiabatic quantum optimization algorithm that runs on a quantum computer. It is common to include annealing algorithms among quantum-inspired algorithms, as additional randomness, whose strength is governed by a temperature that decreases over the course of the algorithm, provides additional computational advantage and is regularly exploited by practitioners in the field. Some examples of quantum-inspired algorithms include, but are not limited to, Quantum Monte Carlo, Substochastic Monte Carlo, Population Annealing, and Parallel Tempering. In some examples, the quantum-inspired algorithm 1050 is at least partially implemented at a classical computing device that simulates quantum behavior. In other examples, the quantum-inspired algorithm 1050 is implemented at least partially at a quantum computer. Quantum-inspired algorithms offer the ability to break out of local minima on the solution surface through tunneling-like effects. Thus, the quantum-inspired algorithm may enable the computing system 1002 to explore the solution surface more efficiently than through classical annealing and may prevent the modified simulations 1020 from becoming trapped in a local minimum.
The computing system 1002 is further configured to output the event-driven model 1018 including the accepted simulation results. Table 2 shows an example output of the computing system 1002 for the scenario presented earlier with reference to Table 1. Table 2 shows 10 accepted simulations for wind power (in MWh) generated by storms. Compared to Table 1, the values output in Table 2 are more representative of an overall distribution of wind power that can be produced by the given number of storms.
As introduced above, in some examples, the computing system 1002 is configured to output the event-driven model 1018 (e.g., Table 2) to the correlator 124 of
With reference now to
It will be appreciated that the following description of method 1300 is provided by way of example and is not meant to be limiting. It will be understood that various steps of method 1300 can be omitted or performed in a different order than described, and that the method 1300 can include additional and/or alternative steps relative to those illustrated in
With reference first to
In some examples, at 1304, the method 1300 includes computing the discrete distribution function based upon the quantity of simulation results contained in each simulation. For example, the computing system 1002 may compute the discrete distribution function for a range of numbers of storms as specified in Tables 1 and 2. For example, at 1306, the discrete distribution function may form a Poisson distribution. Stratifying the CDF by the quantity of these events enables the computing system to optimize the simulation results to serve as a representative sample for the target statistic.
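As a non-limiting illustration, the discrete distribution function may be estimated from the per-simulation event counts as in the following sketch, in which a Poisson fit with rate equal to the mean count is used and the support is truncated at a very low tail probability; an empirical probability mass function could be used instead.

    import numpy as np
    from scipy.stats import poisson

    def fit_frequency_distribution(event_counts, max_events=None):
        """Estimate a discrete frequency distribution from the number of
        simulation results per simulation. A Poisson fit (rate = mean count)
        is shown as one plausible choice; it is not the only option."""
        counts = np.asarray(event_counts)
        lam = counts.mean()
        # Truncate the support at a very low tail probability, as described above.
        if max_events is None:
            max_events = int(poisson.ppf(0.9999, lam))
        k = np.arange(max_events + 1)
        pmf = poisson.pmf(k, lam)
        return k, pmf / pmf.sum()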
At 1308, the method 1300 includes generating one or more conditional cumulative distribution models based at least in part on the discrete distribution function and the one or more cumulative distribution models. For example, the computing system 1002 is configured to generate the one or more conditional cumulative distribution models 1012. Approximating the conditional marginal distribution enables the computing device to generate a representative sample of simulation result values to fit that distribution.
In some examples, at 1310, the conditional cumulative distribution model is conditional upon a predetermined quantity of simulation results. For example, the conditional cumulative distribution models 1012 may reflect simulation results 1006 that are based upon the predetermined quantity of the correlated events (e.g., mean power produced when at least 3 storms occur in Table 1).
At 1312, in some examples, approximating the conditional cumulative distribution model includes using one or more of a Fourier transform, a Monte Carlo method, or a Fourier transform based upon Monte Carlo simulation data. For example, Monte Carlo methods may be used to provide boundaries that estimate how the cumulative distribution model 1008 supports the conditional cumulative distribution model 1012, while the conditional cumulative distribution model 1012 itself is computed using the Fourier transform. Although Monte Carlo methods are not deterministic, the use of Monte-Carlo-based support data may enable the computing system 1002 to compute the conditional cumulative distribution model 1012 more rapidly and with a similar or greater level of accuracy than via deterministic methods alone.
The method 1300 further includes, at 1314, stratifying a range of the one or more cumulative distribution models and one or more conditional cumulative distribution models into a number of strata. For example, the computing system 1002 is configured to stratify the cumulative distribution models 1008 and the conditional cumulative distribution models 1012. This enables the computing system to optimize the simulations to generate a representative sample for the target statistics, which may increase the accuracy of the target statistics for downstream processing.
At 1316, the method 1300 includes computing a sum of discrepancy scores for the plurality of simulations based at least in part on the cumulative distribution models and conditional cumulative distribution models. For example, the computing system 1002 is configured to determine the sum of discrepancy scores 1014 for the simulation results 1006 using the cumulative distribution models 1008 and the conditional cumulative distribution models 1012. As described above, the discrepancy scores measure deviation between the simulation results 1006 and the conditional cumulative distribution models 1012.
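The following sketch shows one way the sum of discrepancy scores may be computed from stratum boundaries, using the bin-wise metric described above (the absolute difference between each bin count and the number of contributing simulations divided by the number of strata), summed over bins and over the cumulative and conditional cumulative distribution models.

    import numpy as np

    def discrepancy_score(values, boundaries):
        """Bin-wise discrepancy summed over strata: the absolute difference
        between each bin count and the ideal count (number of contributing
        simulations divided by the number of strata)."""
        values = np.asarray(values, dtype=float)
        n_strata = len(boundaries) + 1
        counts = np.bincount(np.searchsorted(boundaries, values, side="right"),
                             minlength=n_strata)
        ideal = len(values) / n_strata
        return float(np.abs(counts - ideal).sum())

    def sum_of_discrepancy_scores(models):
        """Sum the per-model scores over every cumulative and conditional
        cumulative distribution model; `models` is a list of
        (values, boundaries) pairs, one per stratified model."""
        return sum(discrepancy_score(v, b) for v, b in models)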
The method 1300 further includes, at 1318, one or more resampling iterations. Each of the one or more resampling iterations may be performed until the sum of one or more respective discrepancy scores (e.g., the sum of the discrepancy scores 1014 or the updated sum of the discrepancy scores 1026) is determined to meet an optimization threshold (e.g., the optimization threshold 1016).
In each of the one or more resampling iterations, the method 1300 includes, at 1320, generating one or more resampled simulations based at least in part on the one or more cumulative distribution models. For example, at 1322, the resampled simulation may be derived from a precomputed simulation.
Each of the one or more resampling iterations further includes, at 1324, replacing one or more simulations of the plurality of simulations with the one or more resampled simulations based on a policy. For example,
In some examples, at 1326, the policy is implemented at a Markov Chain Monte Carlo (MCMC) agent. For example, the policy 1028 of
At 1328, each of the one or more resampling iterations further includes generating an updated sum of discrepancy scores for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations. For example, the computing system 1002 is configured to generate the updated sum of discrepancy scores 1026 for the modified simulation results 1020. This updated sum of discrepancy scores is compared to the optimization threshold to determine whether to proceed through another resampling iteration.
In some examples, at 1330, the resampling iterations are defined by a quantum-inspired algorithm. For example, at 1332, the quantum-inspired algorithm may include a Quantum Monte Carlo, Substochastic Monte Carlo, Population Annealing, or Parallel Tempering algorithm. These quantum-inspired algorithms may prevent the modified simulation results from becoming trapped in local minima of a solution surface and enable the computing system to explore solutions more efficiently than through other algorithms, such as classical annealing.
At 1334, the method 1300 includes outputting the plurality of simulations subsequent to performing the one or more resampling iterations. For example, the computing system 1002 is configured to output the plurality of simulations 1018 based upon the modified simulation results 1020 meeting the optimization threshold 1016. As the output result is based upon the optimized set of modified simulation results, the output event-driven model may have at least similar accuracy to a result produced using a substantially larger (e.g., at least 10-100 times larger) set of Monte Carlo simulations.
The above-described systems and methods may be used to stratify event-driven models. For example, a range of the one or more cumulative distribution models and one or more conditional cumulative distribution models are stratified. Stratifying the one or more cumulative distribution models and the one or more conditional cumulative distribution models enables the computing system to generate a representative sample of simulations. During an iterative resampling process, one or more simulations are replaced with one or more resampled simulations based on a policy. The policy enables a computing system to optimize the modified simulation results (e.g., by minimizing a discrepancy score), such that an event-driven model based upon one or more accepted simulation result values is more representative of the cumulative distribution models and conditional cumulative distribution models than an event-driven model based upon the initial simulation results. In some examples, the resampling iterations are defined by a quantum-inspired algorithm. This enables a computing system to explore the solution surface more efficiently than other algorithms, such as classical annealing, while also preventing the modified simulation results from becoming trapped in local minima of the solution surface. The computing system is configured to output the plurality of simulations subsequent to performing the one or more resampling iterations. As a result of the above-described system and methods, the output simulations may be a more representative sample than the initial simulations result values. This may also increase the accuracy of downstream processing (e.g., at the correlator 124) without requiring a larger sample of Monte Carlo simulations.
In some embodiments, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
Computing system 1400 includes a logic processor 1402, volatile memory 1404, and a non-volatile storage device 1406. Computing system 1400 may optionally include a display subsystem 1408, input subsystem 1410, communication subsystem 1412, and/or other components not shown in
Logic processor 1402 includes one or more physical devices configured to execute instructions. For example, the logic processor may be configured to execute instructions that are part of one or more applications, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic processor may include one or more physical processors (hardware) configured to execute software instructions. Additionally or alternatively, the logic processor may include one or more hardware logic circuits or firmware devices configured to execute hardware-implemented logic or firmware instructions. Processors of the logic processor 1402 may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic processor optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic processor may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration. In such a case, it will be understood that these virtualized aspects are run on different physical logic processors of various different machines.
Volatile memory 1404 may include physical devices that include random access memory. Volatile memory 1404 is typically utilized by logic processor 1402 to temporarily store information during processing of software instructions. It will be appreciated that volatile memory 1404 typically does not continue to store instructions when power is cut to the volatile memory 1404.
Non-volatile storage device 1406 includes one or more physical devices configured to hold instructions executable by the logic processors to implement the methods and processes described herein. When such methods and processes are implemented, the state of non-volatile storage device 1406 may be transformed—e.g., to hold different data.
Non-volatile storage device 1406 may include physical devices that are removable and/or built-in. Non-volatile storage device 1406 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., ROM, EPROM, EEPROM, FLASH memory, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), or other mass storage device technology. Non-volatile storage device 1406 may include nonvolatile, dynamic, static, read/write, read-only, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. It will be appreciated that non-volatile storage device 1406 is configured to hold instructions even when power is cut to the non-volatile storage device 1406.
Aspects of logic processor 1402, volatile memory 1404, and non-volatile storage device 1406 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The terms “module,” “program,” and “engine” may be used to describe an aspect of computing system 1400 typically implemented in software by a processor to perform a particular function using portions of volatile memory, which function involves transformative processing that specially configures the processor to perform the function. Thus, a module, program, or engine may be instantiated via logic processor 1402 executing instructions held by non-volatile storage device 1406, using portions of volatile memory 1404. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
When included, display subsystem 1408 may be used to present a visual representation of data held by non-volatile storage device 1406. The visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the non-volatile storage device, and thus transform the state of the non-volatile storage device, the state of display subsystem 1408 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 1408 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic processor 1402, volatile memory 1404, and/or non-volatile storage device 1406 in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem 1410 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity; and/or any other suitable sensor.
When included, communication subsystem 1412 may be configured to communicatively couple various computing devices described herein with each other, and with other devices. Communication subsystem 1412 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network, such as an HDMI over Wi-Fi connection. In some embodiments, the communication subsystem may allow computing system 1400 to send and/or receive messages to and/or from other devices via a network such as the Internet.
The following paragraphs discuss several aspects of the present disclosure. One aspect provides a computing system, comprising: a processor configured to receive, for a plurality of correlated variables, a first predetermined number of simulations from a Monte-Carlo simulation sample, each simulation including a plurality of initial simulation results for the plurality of the variables; one or more target statistics, a second predetermined number of strata for each variable and each target statistic, and a cumulative distribution function for each variable and each target statistic; for each variable and for each of the one or more target statistics, segment a unit interval of the cumulative distribution function into the second predetermined number of strata and a support of the cumulative distribution function into a plurality of bins such that each bin of the plurality of bins corresponds to one of the strata, determine an initial discrepancy score based upon a quantity of values in each bin, the first predetermined number of the simulations, and the second predetermined number of the strata for the variable; determine an initial sum of the initial discrepancy scores; remove at least one of the plurality of the initial simulations based upon a determination that the initial sum of the initial discrepancy scores is not within an optimization threshold; add at least one other simulation to a remaining one or more initial simulations; for each variable and for each of the one or more target statistics, use the quantity of the values in one or more bins corresponding to the at least one of the plurality of the initial simulations and the quantity of the values in one or more bins corresponding to the at least one other simulation to generate an updated discrepancy score; determine an updated sum of the updated discrepancy scores; and output a plurality of representative simulations that represent the cumulative distribution function across the strata based upon the updated sum of the updated discrepancy scores. A potential technical advantage of such a configuration is that a representative sequence of simulations are selected from a pool of Monte Carlo simulation data.
Further to this aspect, in some examples, the processor is additionally or alternatively configured to accept or reject the at least one other simulation based upon the updated sum of the updated discrepancy scores. A potential technical advantage of such a configuration is that the selection of the representative simulations is driven towards the optimization threshold.
Further to this aspect, in some examples, the processor is additionally or alternatively configured to, for each variable and for each of the one or more target statistics, determine a stratum of a value for each initial simulation result and place the value into one of the plurality of bins based upon stratum. A potential technical advantage of such a configuration is that the computing system identifies the bin in which the value is placed.
Further to this aspect, in some examples, the at least one other simulation additionally or alternatively includes a precomputed simulation. A potential technical advantage of such a configuration is that the CDF for an aggregate of all the precomputed simulations may be estimated upfront.
Further to this aspect, in some examples, the processor is additionally or alternatively configured to determine an initial bin-wise discrepancy metric for each bin of the plurality of bins. A potential technical advantage of such a configuration is that the computing system measures the homogeneity of the initial simulation results across the plurality of bins.
Further to this aspect, in some examples, the initial bin-wise discrepancy metric for a selected bin additionally or alternatively includes a difference between the quantity of values in the selected bin and the first predetermined number of the simulations divided by the second predetermined number of the strata for the variable or target statistic. A potential technical advantage of such a configuration is that the initial bin-wise discrepancy metric may be computed using arithmetic operations.
Further to this aspect, in some examples, the initial discrepancy score additionally or alternatively includes a maximum bin-wise discrepancy metric or a sum of the initial bin-wise discrepancy metrics for the plurality of bins. A potential technical advantage of such a configuration is that the initial discrepancy score represents the maximum discrepancy between the initial Monte Carlo simulation data and the CDF or an aggregate discrepancy for the initial Monte Carlo simulation data.
Further to this aspect, in some examples, the processor is additionally or alternatively configured to weight the initial bin-wise discrepancy metric for each bin of the plurality of bins based upon a proximity of a stratum corresponding to the bin to a tail of the cumulative distribution function. A potential technical advantage of such a configuration is that the initial discrepancy score places greater emphasis on accuracy at the tail than elsewhere in the distribution.
Further to this aspect, in some examples, the processor is additionally or alternatively configured to: determine an updated bin-wise discrepancy metric for the one or more bins corresponding to the at least one of the plurality of the initial simulations and for the one or more bins corresponding to the at least one other simulation; and use the updated bin-wise discrepancy metric for the one or more bins corresponding to the at least one of the plurality of the initial simulations and for the one or more bins corresponding to the at least one other simulation, and the initial bin-wise discrepancy metric for each of the remaining one or more initial simulations, to determine the updated discrepancy score. A potential technical advantage of such a configuration is that this formulation of the updated discrepancy score does not require the computing system to recompute the initial bin-wise discrepancy metric for each bin.
Further to this aspect, in some examples, the processor is additionally or alternatively configured to, for each variable and target statistic: decrement the quantity of values in the one or more bins corresponding to the at least one of the plurality of the initial simulations; and increment the quantity of values in the one or more bins corresponding to the at least one other simulation. A potential technical advantage of such a configuration is that the updated discrepancy score may be determined using the same operations as the initial discrepancy score.
Another aspect provides, at a computing device, a method comprising: receiving, for a plurality of correlated variables, a first predetermined number of simulations from a Monte-Carlo simulation sample, each simulation including a plurality of initial simulation results for the plurality of the variables, one or more target statistics, a second predetermined number of strata for each variable and each target statistic, and a cumulative distribution function for each variable and each target statistic; for each variable and for each of the one or more target statistics, segmenting a unit interval of the cumulative distribution function into the second predetermined number of the strata and a support of the cumulative distribution function into a plurality of bins such that each bin of the plurality of bins corresponds to one of the strata, determining an initial discrepancy score based upon a quantity of values in each bin, the first predetermined number of the simulations, and the second predetermined number of the strata for the variable, determining an initial sum of the initial discrepancy scores; removing at least one of the plurality of the initial simulations based upon a determination that the initial sum of the initial discrepancy scores is not within an optimization threshold; adding at least one other simulation to a remaining one or more initial simulation; for each variable and for each of the one or more target statistics, using the quantity of the values in one or more bins corresponding to the at least one of the plurality of the initial simulation results and the quantity of the values in one or more bins corresponding to the at least one other simulation result to generate an updated discrepancy score; determining an updated sum of the updated discrepancy scores; and outputting a plurality of representative simulations that represent the cumulative distribution function across the strata based upon the updated sum of the updated discrepancy scores. A potential technical advantage of such a configuration is that a representative sequence of simulation results are selected from a pool of Monte Carlo simulation data.
Further to this aspect, in some examples, the method additionally or alternatively includes accepting or rejecting the at least one other simulation based upon the updated sum of the updated discrepancy scores. A potential technical advantage of such a configuration is that the selection of the representative simulation results is driven towards the optimization threshold.
Further to this aspect, in some examples, the at least one other simulation additionally or alternatively includes a precomputed simulation. A potential technical advantage of such a configuration is that the CDF for an aggregate of all the precomputed simulation results may be estimated upfront.
Further to this aspect, in some examples, determining the initial discrepancy score additionally or alternatively includes determining an initial bin-wise discrepancy metric for each bin of the plurality of bins. A potential technical advantage of such a configuration is that the initial bin-wise discrepancy metrics measure the homogeneity of the initial simulation results across the plurality of bins.
Further to this aspect, in some examples, determining the initial bin-wise discrepancy metric for a selected bin additionally or alternatively includes determining a difference between the quantity of values in the selected bin and the first predetermined number of the simulations divided by the second predetermined number of the strata for the variable or target statistic. A potential technical advantage of such a configuration is that the initial bin-wise discrepancy metric may be computed using arithmetic operations.
Further to this aspect, in some examples, determining the initial bin-wise discrepancy metric for the selected bin additionally or alternatively includes determining a maximum bin-wise discrepancy metric or determining a sum of the initial bin-wise discrepancy metrics for the plurality of bins. A potential technical advantage of such a configuration is that the initial discrepancy score represents the maximum discrepancy between the initial Monte Carlo simulation data and the CDF or an aggregate discrepancy for the initial Monte Carlo simulation data.
Further to this aspect, in some examples, determining the initial discrepancy score additionally or alternatively includes weighting the initial bin-wise discrepancy metric for each bin of the plurality of bins based upon a proximity of a stratum corresponding to the bin to a tail of the cumulative distribution function. A potential technical advantage of such a configuration is that the initial discrepancy score places greater emphasis on accuracy at the tail than elsewhere in the distribution.
Further to this aspect, in some examples, generating the updated discrepancy score additionally or alternatively includes determining an updated bin-wise discrepancy metric for the one or more bins corresponding to the at least one of the plurality of the initial simulation results and for the one or more bins corresponding to the at least one other simulation; and using the updated bin-wise discrepancy metric for the one or more bins corresponding to the at least one of the plurality of the initial simulation results and for the one or more bins corresponding to the at least one other simulation result, and the initial bin-wise discrepancy metric for each of the remaining one or more initial simulation results, to determine the updated discrepancy score. A potential technical advantage of such a configuration is that this formulation of the updated discrepancy score does not require the initial bin-wise discrepancy metric to be recomputed for each bin.
Further to this aspect, in some examples, generating the updated discrepancy score additionally or alternatively includes decrementing the quantity of values in the one or more bins corresponding to the at least one of the plurality of the initial simulations; and incrementing the quantity of values in the one or more bins corresponding to the at least one other simulation. A potential technical advantage of such a configuration is that the updated discrepancy score may be determined using the same operations as the initial discrepancy score.
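A minimal sketch of such an incremental update, assuming the stratum boundaries and per-bin counts from the earlier bin-counting example are retained between iterations, is shown below; only the bins touched by the removed and added simulations are adjusted before the score is re-evaluated.

```python
import numpy as np

def swap_and_update(counts, removed_values, added_values, interior_edges,
                    n_simulations, n_strata):
    """Update per-bin counts and the discrepancy score after replacing one
    simulation with another, touching only the affected bins.

    Sketch under the same assumptions as the earlier bin-counting example:
    `interior_edges` are the K-1 stratum boundaries on the support.
    """
    counts = counts.copy()

    # Decrement bins occupied by the removed simulation's values.
    removed_bins = np.searchsorted(interior_edges, removed_values, side="right")
    np.subtract.at(counts, removed_bins, 1)

    # Increment bins occupied by the added simulation's values.
    added_bins = np.searchsorted(interior_edges, added_values, side="right")
    np.add.at(counts, added_bins, 1)

    # Re-evaluate the score from the updated counts using the same
    # arithmetic as the initial score, without re-binning untouched data.
    ideal = n_simulations / n_strata
    score = float(np.abs(counts - ideal).sum())
    return counts, score
```

Because only the touched bins change, the update costs time proportional to the number of affected bins rather than requiring a full re-binning of every simulation.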
Another aspect provides a computing system, comprising: a processor configured to receive, for a plurality of correlated variables, a first predetermined number of simulations from a Monte Carlo simulation sample, each simulation including a plurality of initial simulation results for the plurality of the variables; one or more target statistics; a second predetermined number of strata for each variable and each target statistic; and a cumulative distribution function for each variable and each target statistic; for each variable and for each of the one or more target statistics, segment a unit interval of the cumulative distribution function into the second predetermined number of strata and a support of the cumulative distribution function into a plurality of bins such that each bin of the plurality of bins corresponds to one of the strata, count a quantity of values in each bin of the plurality of bins, and determine an initial discrepancy score based upon a difference between the quantity of values in each bin and the quantity of the initial simulations divided by the second predetermined number of the strata for the variable or target statistic; determine an initial sum of the initial discrepancy scores; remove at least one of the plurality of the initial simulation results based upon a determination that the initial sum of the initial discrepancy scores is not within an optimization threshold; add at least one other simulation to a remaining one or more initial simulations; for each variable and target statistic, decrement the quantity of values in one or more bins corresponding to the at least one of the plurality of the initial simulation results, increment the quantity of values in the one or more bins corresponding to the at least one other simulation, and use the quantity of the values in the one or more bins corresponding to the at least one of the plurality of the initial simulation results and the quantity of the values in the one or more bins corresponding to the at least one other simulation to generate an updated discrepancy score; determine an updated sum of the updated discrepancy scores; and output a plurality of representative simulations that represent the cumulative distribution function across the strata based upon the updated sum of the updated discrepancy scores. A potential technical advantage of such a configuration is that a representative sequence of simulation results is selected from a pool of Monte Carlo simulation data.
According to one aspect of the present disclosure, a computing system is provided, including a processor configured to receive, for a plurality of correlated random variables, a simulation sample including a plurality of simulations. Each simulation may include a plurality of simulation results. The processor may be further configured to generate a surrogate cumulative distribution model at least in part by estimating a plurality of surrogate model parameters based at least in part on the plurality of simulation results. Based at least in part on the surrogate cumulative distribution model with the surrogate model parameters, the processor may be further configured to select one or more subsets of the plurality of simulations. In each of one or more resampling iterations, until a sum of one or more respective discrepancy scores of the one or more subsets is determined to meet an optimization threshold, the processor may be further configured to compute the one or more discrepancy scores of the one or more subsets. In each of the one or more resampling iterations, based at least in part on the sum of the one or more discrepancy scores, the processor may be further configured to sample one or more resampled simulations for the plurality of correlated random variables from among the plurality of simulations that are included in the simulation sample and not already included in the one or more subsets. In each of the one or more resampling iterations, the processor may be further configured to replace one or more simulations included in the one or more subsets with the one or more resampled simulations. The processor may be further configured to output the simulations included in the one or more subsets subsequently to performing the one or more resampling iterations. A potential technical advantage of such a configuration is that the one or more subsets may be compressed relative to the simulation sample, thereby allowing a Monte Carlo algorithm to be performed more efficiently when the one or more subsets are used as input.
According to this aspect, for a plurality of quantiles of the plurality of simulation results, the processor may be further configured to compute a plurality of strata of the surrogate cumulative distribution model. The processor may be further configured to select the one or more subsets of simulations such that the simulation results included in the simulations included in the one or more subsets are distributed equally among the plurality of strata. A potential technical advantage of such a configuration is that the distribution of the correlated random variables in sparse regions of the range of the simulation results may be accurately represented with a reduced number of simulations.
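One possible, purely illustrative way to realize such a stratified selection is sketched below: simulation results are mapped through the surrogate CDF, assigned to equiprobable strata, and an equal number of members is drawn from each stratum. The function signature and the per-stratum quota are assumptions for the example.

```python
import numpy as np

def stratified_subset(values, surrogate_cdf, n_strata, per_stratum, rng=None):
    """Select indices of simulation results so that the chosen values are
    spread equally across equiprobable strata of a surrogate CDF.

    Illustrative only: `surrogate_cdf` maps a value to its cumulative
    probability under the fitted surrogate model.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = surrogate_cdf(np.asarray(values))            # map values to [0, 1]
    stratum = np.minimum((u * n_strata).astype(int), n_strata - 1)

    chosen = []
    for s in range(n_strata):
        candidates = np.flatnonzero(stratum == s)
        # Take up to `per_stratum` members from each stratum.
        take = min(per_stratum, len(candidates))
        if take:
            chosen.extend(rng.choice(candidates, size=take, replace=False))
    return np.array(chosen)
```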
According to this aspect, the processor may be configured to replace the one or more simulations with the one or more resampled simulations at least in part by performing a quantum-inspired algorithm. A potential technical advantage of such a configuration is that the sum of the one or more discrepancy scores may be reduced in a manner that may quickly converge to a value below the optimization threshold.
According to this aspect, the processor may be configured to generate the plurality of simulation results for the plurality of correlated random variables at least in part by executing an Iman-Conover algorithm. A potential technical advantage of such a configuration is that the event simulation module may efficiently generate the simulation results.
According to this aspect, the processor may be configured to sample the one or more resampled simulations at least in part by executing an Iman-Conover algorithm. A potential technical advantage of such a configuration is that the processor may efficiently sample the one or more resampled simulations.
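For reference, the Iman-Conover construction reorders independently sampled marginals so that their rank correlation approximates a target correlation matrix, without changing the marginal distributions. The sketch below follows the standard construction with van der Waerden scores; it is illustrative only and omits edge cases such as near-singular correlation matrices.

```python
import numpy as np
from scipy.stats import norm, rankdata

def iman_conover(samples, target_corr, rng=None):
    """Reorder independently drawn marginal samples (n x d) so that their
    rank correlation approximates `target_corr` (d x d)."""
    rng = np.random.default_rng() if rng is None else rng
    samples = np.asarray(samples, dtype=float)
    n, d = samples.shape

    # Van der Waerden (normal) scores, independently shuffled per column.
    base = norm.ppf(np.arange(1, n + 1) / (n + 1))
    scores = np.column_stack([rng.permutation(base) for _ in range(d)])

    # Transform the scores so their correlation matches the target:
    # if corr(scores) = F F^T and target = P P^T, use scores F^{-T} P^T.
    e = np.corrcoef(scores, rowvar=False)
    f = np.linalg.cholesky(e)
    p = np.linalg.cholesky(np.asarray(target_corr, dtype=float))
    transformed = scores @ np.linalg.inv(f).T @ p.T

    # Reorder each marginal column to share the rank pattern of the
    # transformed scores; marginal distributions are left unchanged.
    out = np.empty_like(samples)
    for j in range(d):
        order = rankdata(transformed[:, j], method="ordinal").astype(int) - 1
        out[:, j] = np.sort(samples[:, j])[order]
    return out
```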
According to this aspect, the surrogate cumulative distribution model may be a mixed Erlang model including a plurality of Erlang distributions. A potential technical advantage of such a configuration is that the surrogate cumulative distribution model may accurately model the cumulative distribution function with a small number of parameters.
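An Erlang distribution is a gamma distribution with an integer shape parameter, so a mixed Erlang CDF with a common rate can be evaluated as a weighted sum of gamma CDFs, for example as in the following sketch. The common-rate, fixed-integer-shape parameterization is an assumption for illustration, not the exact surrogate-model specification of this disclosure.

```python
import numpy as np
from scipy.stats import gamma

def mixed_erlang_cdf(x, weights, shapes, rate):
    """Evaluate the CDF of a mixture of Erlang distributions with a common
    rate: F(x) = sum_i w_i * Erlang(k_i, rate).cdf(x)."""
    x = np.asarray(x, dtype=float)
    weights = np.asarray(weights, dtype=float)
    total = np.zeros_like(x)
    for w, k in zip(weights, shapes):
        # Erlang(k, rate) is a gamma distribution with shape k, scale 1/rate.
        total += w * gamma.cdf(x, a=k, scale=1.0 / rate)
    return total
```

For example, `mixed_erlang_cdf(x, weights=[0.6, 0.4], shapes=[1, 3], rate=2.0)` evaluates a two-component mixture at the points in `x`.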
According to this aspect, the surrogate cumulative distribution model may further include one or more substitute tail region distributions configured to replace one or more respective tail regions of one or more of the plurality of Erlang distributions. The one or more substitute tail region distributions may differ from the one or more Erlang distributions within the one or more respective tail regions. A potential technical advantage of such a configuration is that the surrogate cumulative distribution model may model heavy-tailed or light-tailed distributions more accurately.
According to this aspect, the surrogate cumulative distribution model may be an empirical model for which the processor may be configured to estimate the surrogate model parameters based at least in part on empirical data included in the plurality of simulations. A potential technical advantage of such a configuration is that the surrogate cumulative distribution model may accurately model empirical data.
According to this aspect, the processor may be configured to estimate the plurality of surrogate model parameters at least in part by performing iterative expectation maximization. A potential technical advantage of such a configuration is that the processor may set the values of the surrogate model parameters such that the surrogate cumulative distribution model accurately models the cumulative distribution function.
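A simplified expectation-maximization loop for a common-rate Erlang mixture with a fixed grid of integer shapes might look as follows; the shape grid, the initialization, and the fixed iteration count are illustrative assumptions rather than the estimation procedure of this disclosure.

```python
import numpy as np
from scipy.stats import gamma

def fit_mixed_erlang_em(data, shapes, n_iter=200):
    """Fit mixture weights and the common rate of a mixed-Erlang surrogate
    by iterative expectation maximization, with the integer shapes fixed."""
    data = np.asarray(data, dtype=float)
    shapes = np.asarray(shapes, dtype=int)
    m = len(shapes)
    weights = np.full(m, 1.0 / m)
    rate = shapes.mean() / data.mean()            # crude initial rate

    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation.
        dens = np.column_stack(
            [gamma.pdf(data, a=k, scale=1.0 / rate) for k in shapes])
        resp = weights * dens
        resp /= resp.sum(axis=1, keepdims=True) + 1e-300   # guard against 0

        # M-step: update mixture weights and the shared rate.
        weights = resp.mean(axis=0)
        rate = (resp @ shapes).sum() / data.sum()

    return weights, rate
```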
According to this aspect, the plurality of simulation results may include a plurality of aggregate values, minimum values, or maximum values over the plurality of correlated random variables. A potential technical advantage of such a configuration is that quantities that are likely to be of interest in areas such as energy production, inventory management, and insurance may be modeled.
According to this aspect, the processor may be further configured to generate the surrogate cumulative distribution model in response to receiving a surrogate model type selection at a graphical user interface (GUI). The processor may be further configured to generate the one or more subsets of the plurality of simulations in response to receiving simulation generating instructions at the GUI. The processor may be further configured to output the one or more subsets of the simulations to the GUI. A potential technical advantage of such a configuration is that the GUI may allow the user to specify properties of the surrogate cumulative distribution model and the one or more subsets, and to view the one or more subsets.
According to another aspect of the present disclosure, a method for use with a computing system is provided. The method may be computer-implemented. The method may include receiving, for a plurality of correlated random variables, a simulation sample including a plurality of simulations. Each simulation may include a plurality of simulation results. The method may further include generating a surrogate cumulative distribution model at least in part by estimating a plurality of surrogate model parameters based at least in part on the plurality of simulation results. Based at least in part on the surrogate cumulative distribution model with the surrogate model parameters, the method may further include selecting one or more subsets of the plurality of simulations. The method may further include, in each of one or more resampling iterations, until a sum of one or more respective discrepancy scores of the one or more subsets is determined to meet an optimization threshold, computing the one or more discrepancy scores of the one or more subsets. In each of the one or more resampling iterations, the method may further include, based at least in part on the sum of the one or more discrepancy scores, sampling one or more resampled simulations for the plurality of correlated random variables from among the plurality of simulations that are included in the simulation sample and not already included in the one or more subsets. The method may further include, in each of the one or more resampling iterations, replacing one or more simulations included in the one or more subsets with the one or more resampled simulations. The method may further include outputting the simulations included in the one or more subsets subsequently to performing the one or more resampling iterations. A potential technical advantage of such a configuration is that the one or more subsets may be compressed relative to the simulation sample, thereby allowing a Monte Carlo algorithm to be performed more efficiently when the one or more subsets are used as input.
According to this aspect, the method may further include, for a plurality of quantiles of the plurality of simulation results, computing a plurality of strata of the surrogate cumulative distribution model. The method may further include selecting the one or more subsets of simulations such that the simulation results included in the simulations included in the one or more subsets are distributed equally among the plurality of strata. A potential technical advantage of such a configuration is that the distribution of the correlated random variables in sparse regions of the range of the simulation results may be accurately represented with a reduced number of simulations.
According to this aspect, replacing the one or more simulations with the one or more resampled simulations may include performing a quantum-inspired algorithm. A potential technical advantage of such a configuration is that the sum of the one or more discrepancy scores may be reduced in a manner that may quickly converge to a value below the optimization threshold.
According to this aspect, sampling the one or more resampled simulations may further include executing an Iman-Conover algorithm. A potential technical advantage of such a configuration is that the one or more resampled simulations may be sampled efficiently.
According to this aspect, the surrogate cumulative distribution model may be a mixed Erlang model including a plurality of Erlang distributions. A potential technical advantage of such a configuration is that the surrogate cumulative distribution model may accurately model the cumulative distribution function with a small number of parameters.
According to this aspect, the surrogate cumulative distribution model may be an empirical model for which the surrogate model parameters are estimated based at least in part on empirical data included in the plurality of simulations. A potential technical advantage of such a configuration is that the surrogate cumulative distribution model may accurately model empirical data.
According to this aspect, estimating the plurality of surrogate model parameters may include performing iterative expectation maximization. A potential technical advantage of such a configuration is that the processor may set the values of the surrogate model parameters such that the surrogate cumulative distribution model accurately models the cumulative distribution function.
According to this aspect, the plurality of simulation results may include a plurality of aggregate values, minimum values, or maximum values over the plurality of correlated random variables. A potential technical advantage of such a configuration is that quantities that are likely to be of interest in areas such as energy production, inventory management, and insurance may be modeled.
According to another aspect of the present disclosure, a computing system is provided, including a processor configured to receive, for a plurality of correlated random variables, a simulation sample including a plurality of simulations. Each simulation may include a plurality of simulation results. Based at least in part on the plurality of simulation results, the processor may be further configured to generate a surrogate cumulative distribution model. Based at least in part on the surrogate cumulative distribution model, the processor may be further configured to select a compressed subset of the plurality of simulations. In each of one or more resampling iterations, until a discrepancy score of the compressed subset is determined to be below a predetermined discrepancy threshold, the processor may be further configured to compute the discrepancy score of the compressed subset. In each of the one or more resampling iterations, based at least in part on the discrepancy score, the processor may be further configured to sample one or more resampled simulations for the plurality of correlated random variables from among the plurality of simulations that are included in the simulation sample and not already included in the subset. In each of the one or more resampling iterations, the processor may be further configured to replace one or more simulations included in the compressed subset with the one or more resampled simulations. The processor may be further configured to output the simulations included in the compressed subset subsequently to performing the one or more resampling iterations. A potential technical advantage of such a configuration is that the one or more subsets may be compressed relative to the simulation sample, thereby allowing a Monte Carlo algorithm to be performed more efficiently when the one or more subsets are used as input.
Another aspect provides a computing system, comprising: a processor configured to receive a plurality of simulations, wherein each simulation includes a plurality of simulation results, a discrete distribution function, and one or more cumulative distribution models; generate one or more conditional cumulative distribution models based at least in part on the discrete distribution function and the one or more cumulative distribution models; stratify a range of the one or more cumulative distribution models and one or more conditional cumulative distribution models into a number of strata; compute a sum of discrepancy scores for the plurality of simulations based at least in part on the cumulative distribution models and conditional cumulative distribution models; in each of one or more resampling iterations, until the sum of one or more respective discrepancy scores is determined to meet an optimization threshold: generate one or more resampled simulations based at least in part on the one or more cumulative distribution models; replace one or more simulations of the plurality of simulations with the one or more resampled simulations based on a policy, and generate an updated sum of discrepancy scores for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations; and output the plurality of simulations subsequent to performing the one or more resampling iterations. A potential technical advantage of such a configuration is that a set of modified simulation results is generated that is more representative of the one or more cumulative distribution models and one or more conditional cumulative distribution models than the initial simulation results.
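The iterate-until-threshold behavior described in this aspect can be pictured with the following skeleton, in which the scoring function, the resampler, and the replacement policy are all caller-supplied placeholders (hypothetical names, not components defined herein).

```python
import numpy as np

def stratified_resampling_loop(simulations, score_fn, resample_fn,
                               accept_fn, threshold, max_iters=10_000,
                               rng=None):
    """Skeleton of the resample-replace-rescore loop described above.

    `score_fn(sims)` returns the sum of discrepancy scores against the
    (conditional) cumulative distribution models, `resample_fn(rng)` draws
    one candidate simulation, and `accept_fn(old, new, rng)` is the policy
    deciding whether to keep a proposed swap. All three are placeholders
    supplied by the caller; this is not the exact procedure of this
    disclosure.
    """
    rng = np.random.default_rng() if rng is None else rng
    sims = list(simulations)
    score = score_fn(sims)

    for _ in range(max_iters):
        if score <= threshold:                    # optimization threshold met
            break
        i = rng.integers(len(sims))               # simulation to replace
        candidate = resample_fn(rng)              # resampled simulation
        trial = sims.copy()
        trial[i] = candidate
        trial_score = score_fn(trial)             # updated sum of scores
        if accept_fn(score, trial_score, rng):    # policy-based replacement
            sims, score = trial, trial_score
    return sims
```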
Further to this aspect, in some examples, the discrete distribution function is additionally or alternatively computed based upon the quantity of simulation results contained in each simulation. A potential technical advantage of such a configuration is that a representative sample of Monte Carlo simulations is generated that is more representative of the conditional marginal distribution than the initial simulation results.
Further to this aspect, in some examples, the discrete distribution function additionally or alternatively forms a Poisson distribution. A potential technical advantage of such a configuration is that the Poisson distribution represents a distribution of discrete quantities of events contained in each simulation.
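As a simple illustration, a Poisson model of the per-simulation event counts can be fitted by taking the maximum-likelihood rate to be the mean count; the helper below is a sketch under the assumption that each simulation is a sequence of per-event results.

```python
import numpy as np
from scipy.stats import poisson

def fit_event_count_distribution(simulations):
    """Estimate a Poisson model for the discrete distribution of event
    counts per simulation (maximum-likelihood lambda = mean count)."""
    counts = np.array([len(sim) for sim in simulations])
    lam = counts.mean()
    return poisson(mu=lam)   # frozen discrete distribution over counts

# Example: probability that a simulation contains exactly 3 events:
#   dist = fit_event_count_distribution(simulations); dist.pmf(3)
```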
Further to this aspect, in some examples, the conditional cumulative distribution model is additionally or alternatively conditional upon a predetermined quantity of simulation results. A potential technical advantage of such a configuration is that the conditional marginal distribution reflects target statistics that are based upon the predetermined quantity of the events.
Further to this aspect, in some examples, the processor is additionally or alternatively configured to approximate the conditional cumulative distribution model using a Fourier transform, a Monte Carlo method, or a Fourier transform based upon Monte Carlo simulation data. A potential technical advantage of such a configuration is that accurate conditional marginal distributions may be rapidly computed for target statistics.
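For the Fourier-transform option, the distribution of an aggregate conditional on a fixed number of events is the n-fold convolution of the per-event (severity) distribution, which can be obtained by raising the discrete Fourier transform of a discretized severity distribution to the n-th power. The sketch below illustrates this on a discretized grid; the grid and the renormalization step are assumptions for the example.

```python
import numpy as np

def conditional_aggregate_pmf(severity_pmf, n_events):
    """Approximate the distribution of a sum of `n_events` i.i.d. event
    values via the FFT: the n-fold convolution of a discretized severity
    distribution."""
    severity_pmf = np.asarray(severity_pmf, dtype=float)
    # Pad so the n-fold convolution does not wrap around.
    size = len(severity_pmf) * n_events
    padded = np.zeros(size)
    padded[: len(severity_pmf)] = severity_pmf

    phi = np.fft.rfft(padded)                 # transform of the severity grid
    aggregate = np.fft.irfft(phi ** n_events, n=size)
    aggregate = np.clip(aggregate, 0.0, None)
    return aggregate / aggregate.sum()        # clean up numerical noise

# The conditional CDF follows as np.cumsum(conditional_aggregate_pmf(p, n)).
```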
Further to this aspect, in some examples, the resampled simulation result is additionally or alternatively derived from a precomputed simulation. A potential technical advantage of such a configuration is that the statistical properties of an aggregate of all the precomputed simulation results may be estimated upfront.
Further to this aspect, in some examples, the policy is additionally or alternatively implemented at a Markov Chain Monte Carlo agent. A potential technical advantage of such a configuration is that the updated sum of the discrepancy scores may be optimized.
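One common accept/reject rule of this kind is a Metropolis-style criterion, sketched below: a swap that lowers the sum of discrepancy scores is always kept, and a worsening swap is kept with a probability that decays with the size of the increase. The temperature parameter is an assumption for illustration and would typically follow a schedule chosen by the caller.

```python
import numpy as np

def metropolis_accept(current_score, proposed_score, temperature, rng):
    """Accept/reject policy in the style of a Markov Chain Monte Carlo
    agent: always accept a swap that lowers the sum of discrepancy scores,
    and accept a worsening swap with probability exp(-increase / T)."""
    if proposed_score <= current_score:
        return True
    return rng.random() < np.exp(-(proposed_score - current_score) / temperature)
```

Such a rule could, for instance, serve as the `accept_fn` placeholder in the resampling skeleton shown earlier.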
Further to this aspect, in some examples, the resampling iterations are additionally or alternatively defined by a quantum-inspired algorithm. A potential technical advantage of such a configuration is that a solution surface may be explored more efficiently than through classical annealing and may prevent the modified simulation results from becoming trapped in a local minimum.
Further to this aspect, in some examples, the quantum-inspired algorithm additionally or alternatively includes a Quantum Monte Carlo, Substochastic Monte Carlo, Population Annealing, or Parallel Tempering algorithm. A potential technical advantage of such a configuration is that a solution surface may be explored more efficiently than through classical annealing and may prevent the modified simulation results from becoming trapped in a local minimum.
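As an illustration of the Parallel Tempering option listed above, the sketch below performs one replica-exchange sweep: candidate simulation sets maintained at different temperatures may swap with the usual exchange probability, which helps low-temperature replicas escape local minima. The data layout (per-replica states and discrepancy sums maintained elsewhere) is an assumption for the example.

```python
import numpy as np

def tempering_swap(states, scores, temperatures, rng):
    """One replica-exchange sweep in the spirit of Parallel Tempering:
    adjacent temperature levels swap their candidate simulation sets with
    probability min(1, exp((1/T_i - 1/T_j) * (E_i - E_j)))."""
    for i in range(len(states) - 1):
        beta_i, beta_j = 1.0 / temperatures[i], 1.0 / temperatures[i + 1]
        log_p = (beta_i - beta_j) * (scores[i] - scores[i + 1])
        if log_p >= 0 or rng.random() < np.exp(log_p):
            states[i], states[i + 1] = states[i + 1], states[i]
            scores[i], scores[i + 1] = scores[i + 1], scores[i]
    return states, scores
```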
Further to this aspect, in some examples, the processor is additionally or alternatively configured to apply the policy to minimize the one or more respective discrepancy scores. A potential technical advantage of such a configuration is that a set of simulations is generated that closely approximates the conditional cumulative distribution models.
Another aspect provides, at a computing system, a method comprising: receiving a plurality of simulations, wherein each simulation includes a plurality of simulation results, a discrete distribution function, and one or more cumulative distribution models; generating one or more conditional cumulative distribution models based at least in part on the discrete distribution function and the one or more cumulative distribution models; stratifying a range of the one or more cumulative distribution models and one or more conditional cumulative distribution models into a number of strata; computing a sum of discrepancy scores for the plurality of simulations based at least in part on the cumulative distribution models and conditional cumulative distribution models; in each of one or more resampling iterations, until the sum of one or more respective discrepancy scores is determined to meet an optimization threshold, generating one or more resampled simulations based at least in part on the one or more cumulative distribution models, replacing one or more simulations of the plurality of simulations with the one or more resampled simulations based on a policy, and generating an updated sum of discrepancy scores for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations; and outputting the plurality of simulations subsequent to performing the one or more resampling iterations. A potential technical advantage of such a configuration is that a set of modified simulation results is generated that is more representative of the one or more cumulative distribution models and one or more conditional cumulative distribution models than the initial simulation results.
Further to this aspect, in some examples, the method additionally or alternatively includes computing the discrete distribution function based upon the quantity of simulation results contained in each simulation. A potential technical advantage of such a configuration is that a representative sample of Monte Carlo simulations is generated that is more representative of the conditional marginal distribution than the initial simulation results.
Further to this aspect, in some examples, the discrete distribution function additionally or alternatively forms a Poisson distribution. A potential technical advantage of such a configuration is that the Poisson distribution represents a distribution of discrete quantities of events contained in each simulation.
Further to this aspect, in some examples, the conditional cumulative distribution model is additionally or alternatively conditional upon a predetermined quantity of simulation results. A potential technical advantage of such a configuration is that the conditional marginal distribution reflects target statistics that are based upon the predetermined quantity of the events.
Further to this aspect, in some examples, the method additionally or alternatively includes approximating the conditional cumulative distribution model using a Fourier transform, a Monte Carlo method, or a Fourier transform based upon Monte Carlo simulation data. A potential technical advantage of such a configuration is that accurate conditional marginal distributions may be rapidly computed for target statistics.
Further to this aspect, in some examples, the method additionally or alternatively includes deriving the resampled simulation result from a precomputed simulation. A potential technical advantage of such a configuration is that the statistical properties of an aggregate of all the precomputed simulation results may be estimated upfront.
Further to this aspect, in some examples, the method additionally or alternatively includes implementing the policy at a Markov Chain Monte Carlo agent. A potential technical advantage of such a configuration is that the updated sum of the discrepancy scores may be optimized.
Further to this aspect, in some examples, the resampling iterations are additionally or alternatively defined by a quantum-inspired algorithm. A potential technical advantage of such a configuration is that a solution surface may be explored more efficiently than through classical annealing and may prevent the modified simulation results from becoming trapped in a local minimum.
Further to this aspect, in some examples, the quantum-inspired algorithm additionally or alternatively includes a Quantum Monte Carlo, Substochastic Monte Carlo, Population Annealing, or Parallel Tempering algorithm. A potential technical advantage of such a configuration is that a solution surface may be explored more efficiently than through classical annealing and may prevent the modified simulation results from becoming trapped in a local minimum.
Another aspect provides a computing system, comprising: a processor configured to receive a plurality of simulations, wherein each simulation includes a plurality of simulation results, a discrete distribution function, and one or more cumulative distribution models; generate one or more conditional cumulative distribution models based at least in part on the discrete distribution function and the one or more cumulative distribution models; stratify a range of the one or more cumulative distribution models and one or more conditional cumulative distribution models into a number of strata; compute a sum of discrepancy scores for the plurality of simulations based at least in part on the cumulative distribution models and conditional cumulative distribution models; in each of one or more resampling iterations defined by a quantum-inspired algorithm, until the sum of one or more respective discrepancy scores is determined to meet an optimization threshold: generate one or more resampled simulations based at least in part on the one or more cumulative distribution models; replace one or more simulations of the plurality of simulations with the one or more resampled simulations based on a policy configured to minimize the one or more respective discrepancy scores, and generate an updated sum of discrepancy scores for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations; and output the plurality of simulations subsequent to performing the one or more resampling iterations. A potential technical advantage of such a configuration is that a set of modified simulation results is generated that is more representative of the one or more cumulative distribution models and one or more conditional cumulative distribution models than the initial simulation results.
Features which are described in the context of separate aspects and embodiments of the invention may be used together and/or be interchangeable. Similarly, where features are, for brevity, described in the context of a single embodiment, these may also be provided separately or in any suitable sub-combination. Features described in connection with the system may have corresponding features definable with respect to the method(s), and vice versa, and these embodiments are specifically envisaged.
“And/or” as used herein is defined as the inclusive or (∨), as specified by the following truth table:

| A | B | A and/or B |
|---|---|---|
| true | true | true |
| true | false | true |
| false | true | true |
| false | false | false |
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Further, it will be appreciated that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words used in either the detailed description or the claims are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
Claims
1. A computing system, comprising:
- a processor configured to:
- receive a plurality of simulations, wherein each simulation includes a plurality of simulation results, a discrete distribution function, and one or more cumulative distribution models;
- generate one or more conditional cumulative distribution models based at least in part on the discrete distribution function and the one or more cumulative distribution models;
- stratify a range of the one or more cumulative distribution models and one or more conditional cumulative distribution models into a number of strata;
- compute a sum of discrepancy scores for the plurality of simulations based at least in part on the cumulative distribution models and conditional cumulative distribution models;
- in each of one or more resampling iterations, until the sum of one or more respective discrepancy scores is determined to meet an optimization threshold: generate one or more resampled simulations based at least in part on the one or more cumulative distribution models; replace one or more simulations of the plurality of simulations with the one or more resampled simulations based on a policy, and generate an updated sum of discrepancy scores for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations; and
- output the plurality of simulations subsequent to performing the one or more resampling iterations.
2. The computing system of claim 1, wherein the discrete distribution function is computed based upon the quantity of simulation results contained in each simulation.
3. The computing system of claim 2, wherein the discrete distribution function forms a Poisson distribution.
4. The computing system of claim 1, wherein the conditional cumulative distribution model is conditional upon a predetermined quantity of simulation results.
5. The computing system of claim 1, wherein the processor is configured to approximate the conditional cumulative distribution model using a Fourier transform, a Monte Carlo method, or a Fourier transform based upon Monte Carlo simulation data.
6. The computing system of claim 1, wherein the resampled simulation result is derived from a precomputed simulation.
7. The computing system of claim 1, wherein the policy to replace the one or more simulations with the one or more resampled simulations is implemented at a Markov Chain Monte Carlo agent.
8. The computing system of claim 1, wherein the resampling iterations are defined by a quantum-inspired algorithm.
9. The computing system of claim 8, wherein the quantum-inspired algorithm is a Quantum Monte Carlo, Substochastic Monte Carlo, Population Annealing, or Parallel Tempering algorithm.
10. The computing system of claim 1, wherein the processor is configured to apply the policy to minimize the one or more respective discrepancy scores.
11. At a computing system, a method comprising:
- receiving a plurality of simulations, wherein each simulation includes a plurality of simulation results, a discrete distribution function, and one or more cumulative distribution models;
- generating one or more conditional cumulative distribution models based at least in part on the discrete distribution function and the one or more cumulative distribution models;
- stratifying a range of the one or more cumulative distribution models and one or more conditional cumulative distribution models into a number of strata;
- computing a sum of discrepancy scores for the plurality of simulations based at least in part on the cumulative distribution models and conditional cumulative distribution models;
- in each of one or more resampling iterations, until the sum of one or more respective discrepancy scores is determined to meet an optimization threshold, generating one or more resampled simulations based at least in part on the one or more cumulative distribution models, replacing one or more simulations of the plurality of simulations with the one or more resampled simulations based on a policy, and generating an updated sum of discrepancy scores for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations; and
- outputting the plurality of simulations subsequent to performing the one or more resampling iterations.
12. The method of claim 11, further comprising computing the discrete distribution function based upon the quantity of simulation results contained in each simulation.
13. The method of claim 12, wherein the discrete distribution function forms a Poisson distribution.
14. The method of claim 11, wherein the conditional cumulative distribution model is conditional upon a predetermined quantity of simulation results.
15. The method of claim 11, further comprising approximating the conditional cumulative distribution model using a Fourier transform, a Monte Carlo method, or a Fourier transform based upon Monte Carlo simulation data.
16. The method of claim 11, further comprising deriving the resampled simulation result from a precomputed simulation.
17. The method of claim 11, further comprising implementing the policy at a Markov Chain Monte Carlo agent.
18. The method of claim 11, wherein the resampling iterations are defined by a quantum-inspired algorithm.
19. The method of claim 18, wherein the quantum-inspired algorithm includes a Quantum Monte Carlo, Substochastic Monte Carlo, Population Annealing, or Parallel Tempering algorithm.
20. A computing system, comprising:
- a processor configured to:
- receive a plurality of simulations, wherein each simulation includes a plurality of simulation results, a discrete distribution function, and one or more cumulative distribution models;
- generate one or more conditional cumulative distribution models based at least in part on the discrete distribution function and the one or more cumulative distribution models;
- stratify a range of the one or more cumulative distribution models and one or more conditional cumulative distribution models into a number of strata;
- compute a sum of discrepancy scores for the plurality of simulations based at least in part on the cumulative distribution models and conditional cumulative distribution models;
- in each of one or more resampling iterations defined by a quantum-inspired algorithm, until the sum of one or more respective discrepancy scores is determined to meet an optimization threshold: generate one or more resampled simulations based at least in part on the one or more cumulative distribution models; replace one or more simulations of the plurality of simulations with the one or more resampled simulations based on a policy configured to minimize the one or more respective discrepancy scores, and generate an updated sum of discrepancy scores for the plurality of simulations with the one or more simulations replaced by the one or more resampled simulations; and
- output the plurality of simulations subsequent to performing the one or more resampling iterations.
Type: Application
Filed: Jun 30, 2022
Publication Date: Jan 4, 2024
Applicant: Willis Group Limited (London)
Inventors: Andrew John MCGUINNESS (Thornhill), Bradley Curtis LACKEY (Redmond, WA)
Application Number: 17/810,276