METHOD FOR EXPLOITING A GEOLOGICAL RESERVOIR FROM A RESERVOIR MODEL MATCHED BY THE COMPUTATION OF AN ANALYTICAL LAW OF CONDITIONAL DISTRIBUTION OF UNCERTAIN PARAMETERS OF THE MODEL

- IFP ENERGIES NOUVELLES

The invention is a method for exploiting a geological reservoir to recover hydrocarbons or provide gas storage. The exploitation is defined on the basis of a reservoir model calibrated relative to dynamic data. The calibration is carried out by computing an objective function for a set of reservoir models. The objective function makes it possible to determine an analytical law of conditional distribution of the uncertain parameters of the model to generate new reservoir models added to the set of reservoir models.

Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to the technical field of the oil industry, and more particularly the exploitation of underground reservoirs, such as oil reservoirs or gas storage sites.

2. Description of the Prior Art

Optimization and exploitation of an oil deposit rely on a description that is as accurate as possible of the structure, the petrophysical properties, the fluid properties, and so on, of the deposit being studied. To do this, a software tool, the reservoir model, makes it possible to assess these aspects in an approximate manner. Such a model constitutes a model of the subsoil, representative of both its structure and its behavior. Generally, this type of model is represented on a computer, and it is then called a numerical model. A reservoir model comprises a mesh, or grid, generally three-dimensional, associated with one or more maps of petrophysical properties (porosity, permeability, saturation, etc.). The association assigns values of these petrophysical properties to each of the mesh cells of the grid.

These models, well known and widely used in the oil industry, make it possible to determine numerous technical parameters relating to the study or exploitation of a reservoir, of hydrocarbons for example. In actual fact, since the reservoir model is representative of the structure of the reservoir and of its behavior, it can be used for example to determine the areas which have the greatest chance of containing hydrocarbons, the areas in which it may be advantageous/necessary to drill an injection or production well to improve the recovery of the hydrocarbons, the type of tools to be used, the properties of the fluids used and recovered, and so on. These interpretations of reservoir models in terms of “technical exploitation parameters” are well known. Similarly, the modeling of CO2 storage sites makes it possible to monitor these sites, detect unexpected behaviors and predict the displacement of the injected CO2.

In order to make these various interpretations, the dynamic behavior of the reservoir model needs to be known. This is done by simulating the fluid flows within a reservoir by software called a flow simulator. This is what is called reservoir simulation. One or more reservoir simulations may be necessary. The PumaFlow® software (IFP Energies nouvelles, France) is an example of a flow simulator used in the field of exploitation of underground reservoirs.

These interpretations are carried out using a reservoir model that should be as representative as possible, that is to say consistent with all the available data. These data generally comprise:

measurements at certain points of the geological formation of the property modeled, for example in wells. These data are said to be static because they do not vary over time (on the time scale of the reservoir production) and are directly linked to the property of interest, and

“history data”, comprising production data, for example the fluid flow rates measured in the wells or the concentrations of tracers and data from seismic acquisition campaigns repeated at successive intervals, called 4D seismic data. These data are said to be dynamic because they change during exploitation and are indirectly linked to the properties assigned to the mesh cells of the reservoir model.

Since the number of static data available is very small compared to the number of mesh cells of the reservoir model, extrapolations are necessary. For example, probabilistic methods can be used to fill the model in terms of petrophysical properties. In this case, the available static data are used to define random functions for each petrophysical property such as the porosity or the permeability. A representation of the spatial distribution of a petrophysical property is an outcome of a random function. Generally, an outcome is generated from, on the one hand, an average, a variance and a covariance function which characterizes the spatial variability of the property studied and, on the other hand, a seed or a series of random numbers. There are many simulation techniques, called geostatistical simulations, such as the Gaussian sequential simulation method, the Cholesky method or the fast Fourier transform moving average (FFT-MA) method. These techniques are described notably in the following documents:

Goovaerts, P., 1997, Geostatistics for Natural Resources Evaluation, Oxford University Press, New York, 483 p.

Le Ravalec, M., Noetinger B., and Hu L.-Y., 2000, The FFT Moving Average (FFT-MA) Generator: An Efficient Numerical Method for Generating and Conditioning Gaussian Simulations, Mathematical Geology, 32(6), 701-723.
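
As an illustration of the FFT-MA idea cited above, the following sketch draws one realization of a stationary Gaussian field by filtering white noise in Fourier space (a circulant-embedding variant of the generator). It is a minimal two-dimensional sketch: the exponential covariance model, grid sizes and parameter values are assumptions for illustration, not the implementation of the cited paper.

```python
import numpy as np

def gaussian_field_2d(nx, ny, dx, dy, mean, var, corr_len, seed=None):
    """One realization of a stationary Gaussian field, FFT-MA style:
    white noise is filtered in Fourier space by the square root of the
    covariance spectrum. Changing only the seed yields a new outcome
    with the same spatial statistics."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((ny, nx))          # white-noise "seed"
    # Isotropic exponential covariance evaluated on circular grid lags.
    lx = dx * np.minimum(np.arange(nx), nx - np.arange(nx))
    ly = dy * np.minimum(np.arange(ny), ny - np.arange(ny))
    h = np.sqrt(lx[None, :] ** 2 + ly[:, None] ** 2)
    cov = var * np.exp(-3.0 * h / corr_len)    # practical range corr_len
    # Square root of the spectrum applied to the noise in Fourier space.
    sqrt_spec = np.sqrt(np.abs(np.fft.fft2(cov)))
    return mean + np.real(np.fft.ifft2(sqrt_spec * np.fft.fft2(z)))

# Hypothetical porosity map on a 100 x 100 grid of 50 m cells.
porosity = gaussian_field_2d(100, 100, 50.0, 50.0,
                             mean=0.20, var=0.0004, corr_len=500.0, seed=42)
```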

The construction of a representative reservoir model is done in steps. First of all, a reservoir model is constructed on the basis of the static data. Then, this model is updated in order to best reproduce the history data via the flow simulation while retaining consistency with the static data.

The techniques of integration of the dynamic data (production and/or 4D seismic data) in a reservoir model are well known: these are so-called “history-matching” techniques.

History matching modifies the parameters of a reservoir model, such as the permeabilities, the porosities or the skins of wells (representing damage around the well), fault connections, etc., to minimize the deviations between the measured history data and the corresponding responses simulated on the basis of the reservoir model. The parameters may be linked to geographic regions like the permeabilities or porosities around one or several wells. The deviation between the history data and simulated responses forms a functional, called the objective function. The history-matching problem is solved by minimizing this functional.

Several types of methods exist for minimizing this functional. Some supply only a single model corresponding to the minimum of the objective function. Others seek to estimate the law of distribution of the uncertain parameters conditional on the dynamic data, called a posteriori law, and thus estimate the uncertainty in the parameters. In actual fact, the uncertainty in the measured data and the flow simulator propagates to the estimation of the “best” reservoir model. It is then important to know the a posteriori law around the best model. In the general case, this law can be known only by generating a sample of values according to this law. For this, methods of Markov-Chain Monte Carlo (MCMC) type, which converge toward the desired law, can be used. Such a method is described in the document:

Geyer C J (1992) Practical Markov Chain Monte Carlo (with discussion). Statistical Science 7:473-511. Doi:10.1214/ss/1177011137.

The MCMC methods require a large number of flow simulations, which demands significant computation time. To limit the use of the flow simulator and therefore to reduce the computation time, other solutions have been devised. For example, it is possible to use a model of the objective function in the space of the parameters, called “response surface”. This model is constructed progressively in the course of iterations by interpolation of the simulated points. For example, the following document describes such a method:

Busby D, Feraille M (2008) Adaptive Design of Experiments for Bayesian Inversion—An Application to Uncertainty Quantification of a Mature Oil Field. Journal of Physics: Conference Series 135. Doi: 10.1088/1742-6596/135/1/012026.

This approach therefore involves two levels of approximation: the first relates to the convergence of the MCMC algorithm, for which it is very difficult to state whether it has converged toward the desired law, and the second relates to the approximation of the objective function by the response surface.

SUMMARY OF THE INVENTION

Thus, the invention relates to a method for exploiting a geological reservoir according to an exploitation scheme defined on the basis of a reservoir model. The reservoir model is obtained by a probabilistic “history-matching” technique that takes account of an analytical law of conditional distribution of uncertain parameters, associated with a given response surface. In the method according to the invention, this response surface does not approximate the objective function itself but another function determined from the objective function. This makes it possible to obtain better approximations of the objective function and therefore of the a posteriori law. Furthermore, the use of an analytical law makes it possible to reduce the computation times, to process the results much faster and to obtain a representative reservoir model while limiting the approximations.

The invention relates to a method for exploiting a geological reservoir according to an exploitation scheme defined on the basis of a reservoir model, said reservoir model comprising a grid associated with parameters θ of said reservoir. For this method, the following steps are carried out:

a) a reservoir model matched to data measured within the reservoir is constructed by the following steps:

i) an initial set of reservoir models is generated stochastically from laws of probability p(θ) of the parameters θ;

ii) an objective function F(θ) is determined that measures a deviation between dynamic data y1, . . . , yn acquired during exploitation and dynamic data simulated by a flow simulator with the reservoir models belonging to the set;

iii) from the objective function F(θ), an analytical law p(θ|y1, . . . , yn) of conditional probability of the parameters θ is determined from the knowledge of measured dynamic data y1, . . . , yn;

iv) at least one new reservoir model is generated by the analytical law p(θ|y1, . . . , yn) and the new model is added to the set of reservoir models;

v) the steps ii) to iv) are reiterated and the reservoir model which minimizes the objective function is determined;

b) an optimum exploitation scheme is determined for the reservoir by simulating the exploitation of the reservoir by the matched reservoir model and the flow simulator; and

c) the reservoir is exploited by implementing the optimum exploitation scheme.

According to the invention, the conditional analytical law p(θ|y1, . . . , yn) is determined by determining an approximation of a function G(θ) such that G(θ)=exp(−F(θ)), then by computing the analytical law from the following relationship:

$$p(\theta \mid y_1, \ldots, y_n) = \frac{G(\theta) \times p(\theta)}{\int G(\theta) \times p(\theta)\, d\theta}.$$

Preferably, the approximation of the function G(θ) is determined by computing G(θi)=exp(−F(θi)) for the models i belonging to said set, then by interpolating the values of G(θi).

According to an embodiment of the invention, the values of G(θi) are interpolated by a kriging technique.

According to the invention, in the step iv), at each iteration, a number of new reservoir models are generated, from which M1 models are chosen and added to the set; the set of reservoir models is initially made up, during the construction of the initial set, of a number M0 of models.

Advantageously, the M0 models of the initial set are sampled by an experimental design technique of latin hypercube type.

Advantageously, the M1 models added to the set are chosen using the approximation of the function G(θ).

According to a variant embodiment of the invention, at least some of the M1 models are models for which the approximation of the function G(θ) gives the maximum values.

Furthermore, at least some of the M1 models may be the models for which the kriging variance of the approximation of the function G(θ) is highest among the new models.

A sensitivity study can be carried out by computing a number of objective functions with each objective function being computed for some of the parameters or for a limited time interval relative to the available information.

The invention also relates to a computer program product that can be downloaded from a communication network and/or stored on a medium that can be read by computer and/or executed by a processor. It comprises program code instructions for implementing the method as described above, when the program is run on a computer.

BRIEF DESCRIPTION OF THE DRAWINGS

Other features and advantages of the method according to the invention will become apparent on reading the following description of nonlimiting exemplary embodiments, with reference to the appended figures and described hereinbelow.

FIG. 1 illustrates the steps of the method according to the invention.

FIG. 2 illustrates values of the objective function for the different reservoir models considered during the matching process of the method according to the invention.

FIGS. 3a)-3h) illustrate the distribution of different parameters.

FIGS. 4a)-4g) illustrate the distribution of different parameters for different matching periods.

FIGS. 5a)-5e) illustrate the distribution of different parameters when certain parameters are fixed.

FIGS. 6a)-6b) illustrate well production data.

FIGS. 7a)-7b), 8a)-8b), 9a)-9d), 10a)-10d), 11a)-11f) and 12a)-12f) illustrate the distribution of different parameters for example 2.

DETAILED DESCRIPTION OF THE INVENTION

The method of the invention is intended for the exploitation of an underground reservoir with the exploitation possibly involving the recovery of hydrocarbons but also the storage of gas, such as CO2, in the reservoir. Hereinafter in the description, we will essentially describe the use of the method in the case of the recovery of hydrocarbons.

FIG. 1 illustrates the method for exploiting an oil reservoir according to the invention. The method comprises the following main steps:

1) determination of uncertain parameters (X1)

2) construction of an initial set of reservoir models (EMRi)

3) probabilistic history-matching (CAL)

4) simulation of exploitation schemes (SE)

5) exploitation of the deposit (EX)

Step 1) Determination of Uncertain Parameters (X1)

This step is for determining a set of P uncertain parameters θ=(θ1 . . . θP) used in the reservoir model as well as laws of probability p(θi) (called “a priori” laws) for each of these parameters. These laws do not incorporate any information originating from the dynamic data. They are based on information originating from the knowledge of the reservoir, from static data (geological for example), etc. For example, the laws of probability p(θ) that are chosen may be of uniform, Gaussian, or other type.

The uncertain parameters may be linked to any part of the reservoir model. For example, they may be parameters that can be used to fill the reservoir model in terms of petrophysical properties (permeability, porosity), parameters characterizing the wells (skins), the fluids in place or injected (residual oil saturation after water or gas scavenging for example) or even the structure of the reservoir (fault connections). A set of values of the parameters θ=(θ1 . . . θP), which corresponds also to a point in the space of the parameters, therefore defines a reservoir model. A modification of each parameter therefore results in a modification of the reservoir model. In particular, if the uncertain parameter is linked to the spatial distribution of a petrophysical property, it may be necessary to use a geostatistical simulation technique to generate the associated reservoir model.

Step 2) Construction of an Initial Set of Reservoir Models (EMRi)

Geological formations are generally very heterogeneous environments. Modeling a reservoir, that is to say constructing a reservoir model that complies with the static data, entails making use of construction methods called “probabilistic” because of the limited amount of information available (limited number of wells, etc.). Because of this, the geological models constructed from these probabilistic methods are called “stochastic models”. The construction of a stochastic reservoir model must first of all depend on the environment of the geological deposit, which makes it possible to represent the major heterogeneities which control the flow of the fluids. Integrating static data in this model involves linear operations and can be done using well-known geostatistical techniques. For this, a geostatistical simulator can advantageously be used.

A reservoir model, that can be represented on a computer, comprises an N-dimensional grid (N>0 and generally equal to two or three), in which each of the mesh cells is assigned the value of a property characteristic of the area studied. It may be, for example, the distributed porosity or permeability in a reservoir. These values form maps. Thus, a model comprises a grid associated with at least one map. Other parameters characterizing for example the wells or the injected fluids are also involved in defining this model.

During this step, an initial set of reservoir models is generated stochastically by the a priori law of probability p(θ) of each uncertain parameter.

More specifically, for each uncertain parameter θi selected during the step 1), a sample (X1) of M0 values is generated according to the a priori distribution law p(θi), for example based on an experimental design method of latin hypercube type. A set of M0 sets of values of the uncertain parameters, and therefore M0 different reservoir models, is thus obtained.
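
A minimal sketch of this sampling step using SciPy's quasi-Monte Carlo module; the parameter names and bounds are invented for illustration, and uniform a priori laws are assumed (other laws can be obtained by mapping the unit-hypercube sample through their inverse cumulative distribution functions).

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical uncertain parameters with uniform a priori bounds.
names = ["MPH1", "MPH2", "SORW", "AQUI"]      # P = 4 parameters
l_bounds = [0.1, 0.1, 0.15, 0.5]
u_bounds = [10.0, 10.0, 0.35, 2.0]

M0 = 70                                       # size of the initial set
sampler = qmc.LatinHypercube(d=len(names), seed=0)
unit_sample = sampler.random(n=M0)            # M0 points in [0, 1]^P
theta_init = qmc.scale(unit_sample, l_bounds, u_bounds)
# Each row of theta_init is one set of parameter values, that is to
# say one reservoir model of the initial set.
```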

Step 3) Matching of the Reservoir Model (CAL)

The aim of the matching process according to the invention is to construct a reservoir model that is as representative as possible. Now, at this stage, the dynamic data have not been considered in constructing the initial reservoir model. Dynamic data are therefore acquired during the exploitation of the deposit (DD). These are production data, well test data, drilling time data, 4D seismic data, etc., that are particular in as much as they vary over time as a function of the fluid flows in the reservoir. These measurements are carried out by measurement tools such as flowmeters or seismic sensors. (y1, . . . , yn) denotes the vector of the observed dynamic data and (t1 . . . tn) denotes the corresponding acquisition times.

These dynamic data are then incorporated in the reservoir model through probabilistic matching. According to the invention, a history-matching of probabilistic type (CAL) is performed by carrying out the following steps:

i. a flow simulator f is used to compute the production response simulated with the reservoir models that are to be assessed (SIM);

ii. an objective function (OF) that measures the deviation between the simulated results (SIM) and the associated dynamic data (DD) is computed;

iii. an approximation of the exponential of the opposite of the objective function is constructed by kriging, and, for this response surface, a conditional analytical law of probability p̃(θ|y1, . . . , yn) of the uncertain parameters, knowing the dynamic data, is determined, this law also being called the a posteriori law (POST);

iv. at least one new reservoir model is generated, to be assessed to improve the quality of the response surface (EMR);

v. the steps i) to iv) are reiterated, in order to know as accurately as possible the a posteriori distribution of the objective function.

i. Simulation of the Dynamic Data (SIM)

Using a flow simulator, for example the PumaFlow® software (IFP Energies nouvelles, France), the flows are simulated in the reservoir models to be assessed, that is to say either the reservoir models of the initial set, or the reservoir models generated during the preceding iteration. The recovery of oil or the displacements of the fluids (for example the stored gases) in the reservoir can, for example, be simulated.

ii. Computation of an Objective Function (OF)

The objective function F(θ) represents the deviations between the measured dynamic data (DD) and the simulated dynamic data (SIM). According to one embodiment of the invention, the objective function F(θ) is computed by the least squares approach:

$$F(\theta) = \frac{1}{2} \sum_{i=1}^{n} \omega_i \left( y_i - f(t_i, \theta) \right)^2$$

with ωi being the weight associated with the datum i.
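
A direct transcription of this formula as a sketch; f_sim is a placeholder for a call to the flow simulator, and the weights anticipate the choice ωi = 1/σi² made further below.

```python
import numpy as np

def objective_function(theta, t_obs, y_obs, sigma, f_sim):
    """Weighted least-squares deviation F(theta) between observed and
    simulated dynamic data. f_sim(t, theta) stands in for the flow
    simulator; sigma[i] is the standard deviation of the error on
    datum i, so that omega_i = 1 / sigma_i**2."""
    y_sim = np.array([f_sim(t, theta) for t in t_obs])
    w = 1.0 / np.asarray(sigma) ** 2
    return 0.5 * np.sum(w * (np.asarray(y_obs) - y_sim) ** 2)
```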

According to the invention, the objective function is computed for each reservoir model of the set. It will be recalled that this set initially comprises M=M0 reservoir models. This set increases at each iteration by a number M1 of models. The selection of the M1 additional models, or, in an equivalent manner, of the M1 sets of values of the uncertain parameters, is described in paragraph iv) concerning selection of new reservoir models.

iii. Determination of the Conditional Analytical Law (POST)

The inverse problem in reservoir engineering is that of finding the value of uncertain parameters of the reservoir model which, when applied to the flow simulator, provide a simulated response that is as close as possible to the measured dynamic data. The simulated response at the wells is modeled as follows:


$y_i = f(t_i, \theta) + \varepsilon_i, \quad i = 1, \ldots, n$

where ti is the ith dynamic data acquisition time, yi is the vector of the observed dynamic data, θ is the vector of the uncertain parameters of the model, f is the flow simulator which models the relationship between the parameters and the observed data, εi is the error between the model and the dynamic data, and n is the number of measured dynamic data. The objective is to use the observed data to deduce information on the vector of the uncertain parameters θ.

It is assumed here that the errors εi between the flow simulation model f(ti, θ) and the observed data yi follow a given law, estimated as a function of the measurement errors. The conditional law of distribution of the vector θ knowing the measured data, or a posteriori law, is given by the Bayes formula:

$$p(\theta \mid y_1, \ldots, y_n) = \frac{p(y_1, \ldots, y_n \mid \theta) \times p(\theta)}{p(y_1, \ldots, y_n)}$$

where p(θ) is the a priori distribution law, p(y1, . . . , yn) is the distribution law of the data, p(y1, . . . , yn|θ) is the conditional distribution law of the dynamic data knowing θ and p(θ|y1, . . . , yn) is the a posteriori conditional distribution law which is sought. Finally, by assuming the errors εi to be Gaussian and independent, the a posteriori law can be rewritten:

$$p(\theta \mid y_1, \ldots, y_n) = \frac{\exp(-F(\theta)) \times p(\theta)}{\int \exp(-F(\theta)) \times p(\theta)\, d\theta}$$

In this case, the weight ωi used for the computation of the objective function is defined by ωi=1/σi2, with σi2 being the variance of the error εi.

According to the invention, an approximation of the law p(θ|y1, . . . , yn) of conditional distribution of the uncertain parameters knowing the measured dynamic data is determined by an analytical law p̃(θ|y1, . . . , yn) dependent on the objective function F(θ).

The conditional analytical law p̃(θ|y1, . . . , yn) is determined by considering a function G(θ) dependent on the objective function F(θ), defined by G(θ)=exp(−F(θ)). The a posteriori law is then rewritten:

$$p(\theta \mid y_1, \ldots, y_n) = \frac{G(\theta) \times p(\theta)}{\int G(\theta) \times p(\theta)\, d\theta}.$$

This relationship follows from the previous relationship:

$$p(\theta \mid y_1, \ldots, y_n) = \frac{\exp(-F(\theta)) \times p(\theta)}{\int \exp(-F(\theta)) \times p(\theta)\, d\theta}.$$

Since the function G(θ) is not known analytically, an approximation thereof, also called the response surface, is constructed by interpolation of the known values G(θj)=exp(−F(θj)) computed for each of the reservoir models j during the step ii). Unlike in the prior art, the response surface used by the invention does not approximate the objective function, but approximates a function dependent on this objective function. The function G(θ) lies between 0 and 1, and therefore varies within a more restricted interval than the function F(θ). Consequently, it is easier to construct a good approximation of the function G(θ) than of the function F(θ).

Preferably, the interpolation method used is a kriging technique or any other regression method belonging to the family of penalized empirical risk minimization in Reproducing Kernel Hilbert Spaces (RKHS). These interpolation methods make it possible to write the approximation of the function (or response surface) G̃(θ) in the form of a linear combination of known functions. The computation of the law p̃(θ|y1, . . . , yn) of conditional probability associated with G̃(θ) then involves different integrals of these known functions, which can be computed analytically.
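
A sketch of this construction with scikit-learn's Gaussian-process regressor standing in for the kriging model, for a single uncertain parameter. The analytical integration of the kriging basis functions described above is replaced here by a numerical normalization on a grid, so this shows the logic rather than the closed-form computation; the Matern kernel and all names are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def posterior_on_grid(theta_set, F_values, prior_pdf, grid):
    """Approximate p(theta | y1..yn) on a 1D grid from the
    objective-function values of the already simulated models."""
    G_values = np.exp(-np.asarray(F_values))       # G(theta) = exp(-F)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.asarray(theta_set).reshape(-1, 1), G_values)
    G_tilde = np.clip(gp.predict(grid.reshape(-1, 1)), 0.0, None)
    unnorm = G_tilde * prior_pdf(grid)             # G~(theta) * p(theta)
    return unnorm / (unnorm.sum() * (grid[1] - grid[0]))  # Riemann sum
```

With a kriging surrogate written as a linear combination of kernel functions, the normalizing integral decomposes into kernel integrals with closed forms, which is what makes the law analytical in the method described above.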

iv. Selection of New Reservoir Models (EMR)

New reservoir models, that is to say new sets of values of the uncertain parameters, are generated as a function of the analytical law p̃(θ|y1, . . . , yn) of conditional probability obtained in the preceding step. From this sample, M1 models are selected for which the objective function will be computed in order to improve knowledge of this function, and therefore the quality of the response surface.

According to the invention, the set of the reservoir models for which the production response is computed is increased at each iteration of the matching process by the addition of M1 new models. These models are chosen according to a number of criteria (preferably with an identical number of models for each of the criteria), notably:

(i) the points of the space of the parameters, that is to say the sets of values of the uncertain parameters, for which the quality of the approximation of the function G(θ) is low, are selected.

(ii) the points of the space of the parameters for which the response surface G̃(θ) predicts high values, that is to say low values of the objective function, are selected. It is possible, for example, to use the expected improvement technique (see the sketch after this list) described for example in the document: Schonlau M (1997) Computer Experiments and Global Optimization, Dissertation, University of Waterloo, Canada.

Other criteria can also be used, notably choosing the points where the kriging variance is high, the points which, if added, most reduce the integral of the model error, or the points of maximum entropy, etc.
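
A sketch of how candidates might be scored under criteria (i) and (ii): the standard expected-improvement formula (in its maximization form, from the Schonlau reference) for criterion (ii), and the kriging standard deviation for criterion (i). The half-and-half split mirrors the example given later in the text (5 points of each kind out of 10); gp is assumed to be a fitted kriging/Gaussian-process model returning predictive standard deviations.

```python
import numpy as np
from scipy.stats import norm

def select_new_models(gp, candidates, G_best, M1):
    """Pick M1 candidate parameter sets: half by expected improvement
    on G~ (high predicted G~, i.e. low objective function), half by
    kriging standard deviation (poorly approximated regions)."""
    mu, sd = gp.predict(candidates, return_std=True)
    sd_safe = np.where(sd > 0, sd, 1e-12)
    u = (mu - G_best) / sd_safe
    ei = (mu - G_best) * norm.cdf(u) + sd_safe * norm.pdf(u)
    by_ei = np.argsort(ei)[::-1][: M1 // 2]           # criterion (ii)
    rest = np.setdiff1d(np.arange(len(candidates)), by_ei)
    by_sd = rest[np.argsort(sd[rest])[::-1][: M1 - M1 // 2]]  # criterion (i)
    return np.concatenate([by_ei, by_sd])
```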

The reservoir models that correspond to each new set of values of the parameters selected previously are then generated. This may entail using a geostatistical simulator to generate the petrophysical properties of the reservoir model.

v. Repetition of the Steps

The steps of the matching process are reiterated, with an ever increasing number of reservoir models belonging to the set, until the maximum number of points that can be evaluated (simulated) has been reached.

On each construction by kriging of G̃(θ), the quality criterion Q2 of this response surface is computed on the basis of the reservoir models of the set used to define it:

$$Q_2 = 1 - \frac{\sum_{i=1}^{M} \left( G(\theta_i) - \tilde{G}_{-i}(\theta_i) \right)^2}{\sum_{i=1}^{M} \left( G(\theta_i) - \bar{G} \right)^2}.$$

In this equation, Ḡ denotes the average of the values G(θi), i=1 . . . M, and G̃−i(θi) denotes the prediction at θi of the response surface obtained by kriging using the M−1 sets of values θj, j=1 . . . M, j ≠ i. According to one embodiment of the invention, it is possible to define a criterion Q2 by removing groups of points rather than a single point. Other types of quality criteria can also be used. The quality criterion Q2 is better if large values of G are well predicted. In this case, these large values correspond to values of parameters for which the objective function is low, that is to say to values where the fit with the history data is good. Thus, a high criterion Q2 means that the objective function is correctly approximated for parameters corresponding to good history matching.
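
A leave-one-out sketch of this criterion; refitting the surrogate M times, as below, is the straightforward (if costly) way of obtaining G̃−i, with the scikit-learn regressor again standing in for the kriging model.

```python
import numpy as np
from sklearn.base import clone

def q2_criterion(gp, theta_set, G_values):
    """Leave-one-out Q2 of the response surface built on M models."""
    theta_set = np.asarray(theta_set)
    G_values = np.asarray(G_values, dtype=float)
    M = len(G_values)
    loo = np.empty(M)
    for i in range(M):
        mask = np.arange(M) != i
        gp_i = clone(gp).fit(theta_set[mask], G_values[mask])
        loo[i] = gp_i.predict(theta_set[i:i + 1])[0]   # G~_{-i}(theta_i)
    return 1.0 - (np.sum((G_values - loo) ** 2)
                  / np.sum((G_values - G_values.mean()) ** 2))
```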

In the deterministic case, matching a reservoir model with the dynamic data amounts to associating with the uncertain parameters (or model matching parameters) the value corresponding to the lowest value of the objective function. In the probabilistic case, matching a reservoir model with the dynamic data according to the method described previously amounts to associating with the uncertain parameters their a posteriori law of probability.

According to one embodiment of the invention, a normalized objective function $F_s(\theta) = F(\theta)/s$ is used, where s is a normalization constant. This entails constructing an approximation of

$$G_s(\theta) = \exp(-F(\theta)/s).$$

The preceding steps ii to iv are repeated to choose new models to be simulated using the approximation of Gs(θ). Then, the final a posteriori law of probability of the objective function is sampled by an MCMC method, after evaluation of a model of F(θ) by kriging.
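
The text does not specify how the constant s is chosen; as a sketch, one plausible choice (an assumption, not the patent's) is to scale by the smallest objective-function value evaluated so far, so that the values of Gs stay numerically well spread:

```python
import numpy as np

def normalized_g(F_values, s=None):
    """Gs(theta) = exp(-F(theta)/s). By default s is the smallest
    objective-function value seen so far (assumed choice); the
    response surface is then built on these values instead of
    exp(-F(theta))."""
    F_values = np.asarray(F_values, dtype=float)
    if s is None:
        s = max(F_values.min(), 1.0)
    return np.exp(-F_values / s), s
```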

Step 4) Simulation of Exploitation Schemes (SE)

From at least one probabilistically matched reservoir model, a number of exploitation schemes (SE) can be determined corresponding to different possible options for exploiting the underground reservoir: placement of the producing and/or injecting wells, target values for the flow rates per well and/or for the reservoir, the type of tools used, the fluids used and recovered, etc. For each of these schemes, the production forecasts after the matching period should be determined. These probabilistic production forecasts are obtained by flow simulator software (preferably the same as that used previously) and by the matched numerical reservoir model.

One or more possible exploitation schemes (SE) suited to the reservoir model are defined. For each of these schemes, the a posteriori uncertainty obtained after the probabilistic matching is propagated.

The propagation of the a posteriori law of the uncertain parameters can be done by constructing response surfaces on the results of interest from the simulation (aggregate oil flow rate or economic value of the reservoir, for example), with a view to avoiding running too many numerical simulations. The description of this step is given, for example, in the document:

Feraille M, Marrel A (2012) Prediction Under Uncertainty on a Mature Field. Oil & Gas Science and Technology. Doi: 10.2516/ogst/2011172.

From the probabilistic production forecast defined for each exploitation scheme (preceding step), a comparison can be used to choose the exploitation scheme which seems most relevant. For example:

by comparing the maximum of the volume of oil recovered, it is possible to determine the production scheme that is likely to provide the maximum recovery or be the most cost-effective.

by comparing the standard deviation of the volume of oil recovered, it is possible to determine the production scheme with the least risk.

Step 5) Exploitation of the Underground Reservoir (EX)

The reservoir is then exploited according to the selected exploitation scheme (EX) for example by drilling new wells (producing or injecting), by modifying the flow rates and/or the fluids injected, and so on.

Use of the Results

The analytical, and therefore quasi-instantaneous, computation of the a posteriori laws of probability of the parameters and of the objective function for a given response surface makes it possible to use the results of the probabilistic matching to better understand the impact of each uncertain parameter on the behavior of the reservoir.

For example, it is possible to decide to fix the value of one or more of the parameters and look at the impact on the uncertainty of the other parameters as well as on the chosen objective function or functions. In this case, all that is required is to analytically recompute the law of distribution of the parameters θj, j=1 . . . P, j ≠ i, conditional on the value chosen for θi.
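
A numerical sketch of this recomputation for two uncertain parameters, assuming the joint a posteriori density is available on a grid (as in the grid-based sketch above): fixing θ1 amounts to slicing the joint density and renormalizing the slice. All names are illustrative.

```python
import numpy as np

def conditional_on_fixed(post_grid, grid1, grid2, theta1_fixed):
    """post_grid[i, j] ~ p(grid1[i], grid2[j] | y1..yn). Returns the
    law of theta2 conditional on theta1 fixed at theta1_fixed."""
    i = np.argmin(np.abs(grid1 - theta1_fixed))    # nearest grid node
    slice_ = post_grid[i, :].astype(float)
    return slice_ / (slice_.sum() * (grid2[1] - grid2[0]))
```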

It is also possible to consider a number of objective functions according to the information sought. This is done by estimating the a posteriori distribution of the parameters for each of these functions via the construction of a response surface. The objective function F measures the overall deviation between the results of simulations and the measured dynamic data. This function is conventionally constructed in the least squares manner. From a trade viewpoint, an objective function computes the sum of the deviations between history properties linked to objects of the trade (wells P1 to P10, group of wells, trap, etc.), which are themselves linked to properties like the flow rate of oil QOS, of water QWS, of gas QGS, . . . , which are themselves a function of the time when the measurements were performed (for example, every month from 1980 to 2000). From the history data, it is therefore possible to compute an objective function for any subset made up of some of the objects, of the properties or of a certain number of time intervals (a sketch of such sub-objective functions follows the list below). For example, it is possible to determine an objective function linked:

to the well P1 for all the properties and all the times, or

to the period 1980-1990 for all the objects and all the properties.
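
As a sketch, if the history data are organized as records carrying the object, the property and the time of each measurement, both sub-objective functions just listed reduce to filtering before summing; the record layout and field names are assumptions.

```python
def sub_objective(records, wells=None, props=None, t_min=None, t_max=None):
    """Objective function restricted to a subset of wells, properties
    and/or a time window. Each record is a dict with keys 'well',
    'prop', 't', 'obs', 'sim', 'sigma' (assumed layout)."""
    total = 0.0
    for r in records:
        if wells is not None and r["well"] not in wells:
            continue
        if props is not None and r["prop"] not in props:
            continue
        if t_min is not None and r["t"] < t_min:
            continue
        if t_max is not None and r["t"] > t_max:
            continue
        total += 0.5 * ((r["obs"] - r["sim"]) / r["sigma"]) ** 2
    return total

# Objective function linked to the well P1, all properties and times:
#   sub_objective(records, wells={"P1"})
# Objective function linked to the period 1980-1990:
#   sub_objective(records, t_min=1980, t_max=1990)
```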

It may therefore be of interest to know the influence of the parameters on different objective functions, for example:

to identify the moment at which the different parameters become influential, it is sufficient to consider, for a number of times, the objective function that includes all the data up to each of these times, then to construct a response surface for each of the functions obtained. This gives access to the trend of the a posteriori distribution as a function of the time of matching in an interactive manner.

to know the influence of the parameters on each well, it is sufficient to estimate the a posteriori distributions of the objective functions associated with the properties of a given well.

Knowing the influence of the uncertain parameters on the “trade” objective functions thus defined enables better appreciation of the operation of the reservoir and supports decision making. For example, it is possible:

to hypothetically fix the value of the parameters seen as influential, and then check that the resulting reduction in the uncertainty in the reservoir model (in the other parameters and in one or more objective functions defined by the engineer) is sufficient to justify a better characterization of these influential parameters. In this case, new measurements on the oil field can be carried out (well tests, logs, etc.).

to know that parameters linked to one part of the reservoir begin, from a given date, to have an influence on one or more objective functions linked to another part of the reservoir, which makes it possible to better understand the dynamic operation of the reservoir. This understanding may be translated into different exploitation schemes suited to each of the parts of the reservoir.

Finally, it is possible to envisage studies of sensitivity to the weights of the objective function, which is extremely costly if the prior art MCMC method has to be used.

Exemplary Applications

In order to illustrate the method according to the invention, two series of trials were carried out. The first series relates to obtaining a posteriori distributions. The second series illustrates the good correlation between the analytical law associated with the response surfaces and the exact conditional distribution law.

EXAMPLE 1

To illustrate the method, an exemplary synthetic application is described. The reservoir considered contains gas, oil and water initially. It is sealed by faults on two sides, and by an aquifer on the other sides. Production is done by depletion from six wells (PRO1, PRO4, PRO5, PRO11, PRO12, PRO15) with an oil production flow rate of 150 m3/day imposed on each producing well for 10 years. During this production period, the aggregate of oil (V) and the gas/oil ratio (R) are collected for each producing well, as is the total aggregate of oil in the reservoir. These values are known with a relative noise of 4%. These data are illustrated by the curves of FIG. 6. FIG. 6a) corresponds to the volume (V) of the aggregate of oil for each producing well as a function of the time t expressed in years. FIG. 6b) corresponds to the gas/oil ratio (R) for each producing well as a function of the time t expressed in years.

The distribution of the petrophysical properties (porosity, permeability) in the reservoir is also assumed known. The uncertain parameters are the following seven parameters: four transmissivity multipliers (MPH2, MPH1, MPV2, MPV1), the residual oil saturation after water scavenging (SORW), the residual oil saturation after gas scavenging (SORG) and a parameter characterizing the aquifer (AQUI).

The goal is then to estimate, by using the method according to the invention, the distribution of the uncertain parameters and the distribution of the objective function conditional on the dynamic data.

Estimation of the a Posteriori Distributions

The probabilistic matching is carried out with an initial experimental design of latin hypercube type LHS defining a set made up of 70 reservoir models, then with an addition of models in series of 10 at each iteration of the matching process. The first 5 models added are chosen at the points where the response surface G̃(θ) predicts large values, and the next 5 where the quality of the approximation of G(θ) is low in terms of high kriging variance. The trend of the objective function during the procedure for adding models in the process of matching of the numerical model of the reservoir is given in FIG. 2. The darkest vertical bars mark the last simulation of the LHS and of each model addition series. It can then be seen that, after the first 70 exploration models (LHS), the points added in series of 10 indeed exhibit first 5 globally low values of the objective function, then 5 globally higher values.

FIGS. 3a)-3h) represent the curves of the a priori distribution, of the a posteriori distribution determined after the initial experimental design and of the a posteriori distribution at the end of the matching process for each uncertain parameter (MPH2, MPH1, MPV2, MPV1, SORW, SORG, AQUI) and for the objective function (FO). In these figures, the black dot indicates the reference value, the gray discontinuous curve corresponds to the a priori law of probability, the black continuous curve corresponds to the a posteriori law of probability with 70 models and the black discontinuous curve corresponds to the a posteriori law of probability at the end of the matching process with 210 models.

Use of the Results

By virtue of the quasi-immediate (because it is analytical) updating of the a posteriori conditional distribution for a given kriging model, it is possible to fix the value of one or more of the uncertain parameters and to look at the impact on the uncertainty of the other parameters. This is illustrated for example in FIGS. 5a)-5e) which represent the a posteriori conditional distributions at the end of the process of matching of the parameters MPH2, MPH1, MPV2, MPV1 and AQUI by fixing or not fixing the value of the parameters SORG and SORW at the most probable value. In these figures, the black dot corresponds to the reference value, the discontinuous curve corresponds to the a posteriori law of probability when no parameter is fixed, whereas the continuous curve corresponds to the law of probability when SORG and SORW are fixed. It can be seen here that this results in a modification of certain distributions.

It is also possible to envisage studies of sensitivity to the weights of the objective function. For example, it is possible to focus on the trend of the a posteriori conditional laws of the parameters when considering a matching interval of greater or lesser length in the computation of the objective function. In the case presented here, all the simulations performed during the probabilistic process have been considered and the kriging model associated respectively with a history of 10, 8, 6, 4 and 2 years has been reconstructed. The new a posteriori conditional laws of the parameters are given in FIGS. 4a)-4g). In these figures, the black dot designates the reference value, the gray discontinuous curve which remains horizontal (uniform) corresponds to the a priori law of probability, the discontinuous curve made up of a series of small dots corresponds to the a posteriori law of probability computed for two years, the discontinuous curve made up of a series of dashes corresponds to the a posteriori law of probability computed for four years, the continuous curve corresponds to the a posteriori law of probability computed for six years, the discontinuous curve made up of a series of rhomboids corresponds to the a posteriori law of probability computed for eight years, and the discontinuous curve made up of a series of large dots corresponds to the a posteriori law of probability computed for ten years. It will be noted that the main changes occur after 6 years.

EXAMPLE 2

This example makes it possible to detail the proposed methodology on an analytical example, which enables us to validate and study in depth the possible analytical forms of the a posteriori law for different cases. All the examples discussed here have been processed with the Matlab® software (MathWorks, United States) and the “Dacefit” toolbox to construct the kriging.

For this example, the flow simulator f is replaced by a linear model $f(t_i, \theta) = t_i^T \theta$, where θ=(θ1 . . . θD) is a vector of dimension D.

To test the matching method of the method according to the invention on this linear model, the following procedure is repeated for D=1, D=2 and D=3:

1. the reference value of θ is fixed

2. n=10 points are chosen in the interval [0,1]D according to a latin hypercube; these points are denoted ti, i=1, . . . , n, and the observations $y_i = t_i^T \theta + \varepsilon_i$ are generated by randomly generating Gaussian variables εi with a fixed value of the variance σε2. On the one hand, this step makes it possible to generate the “false” observed data, from which an objective function can be constructed. On the other hand, it makes it possible to deduce (only in this precise case) the exact a posteriori laws, so as to compare them with the approximate laws.

3. A set (θi)i=1, . . . , N of N parameters is generated, with N=10 D, 20 D, or 50 D, according to a latin hypercube matched to the a priori law considered, and (Gi)i=1, . . . , N is computed, where Gi=G(θi)=exp(−F(θi)) for i=1, . . . , N. The choices N=10 D and N=20 D correspond to situations that are common in practice. The choice N=50 D generally makes it possible to have an extremely predictive kriging model and serves here as a reference.

4. For each size of the sample of the parameters, the conditional law of distribution is determined on the basis of the results of the step 3 and compared to the exact a posteriori laws.
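
For the D=1 Gaussian-prior case below, the exact a posteriori law is available in closed form (linear model with Gaussian prior and noise), which is what makes the comparison possible. The following sketch reproduces steps 1 and 2 and computes that exact law; σε and the design are assumptions, while the prior moments match the values given below.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
theta_ref, sig_eps = 1.0, 0.1          # sig_eps is an assumed noise level
mu0, tau0 = 1.2, 0.4                   # Gaussian a priori law (see below)

# Steps 1-2: n = 10 design points in [0, 1] and noisy observations.
t = qmc.LatinHypercube(d=1, seed=0).random(n=10).ravel()
y = theta_ref * t + rng.normal(0.0, sig_eps, size=t.size)

# Exact a posteriori law (conjugate Gaussian / linear case).
precision = 1.0 / tau0**2 + np.sum(t**2) / sig_eps**2
post_var = 1.0 / precision
post_mean = post_var * (mu0 / tau0**2 + np.sum(t * y) / sig_eps**2)
# The kriging-based analytical law of the invention can then be
# compared against the normal law N(post_mean, post_var).
```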

    • Test for θ with one dimension (D=1)

θ=1 is chosen.

In the case of a Gaussian a priori law, the average μ=1.2 and a standard deviation of 0.4 are fixed. FIG. 7 shows the a priori law (gray discontinuous line), the approximate a posteriori law (made up of crosses) and the exact a posteriori law (made up of dashes) for N=10 D and for a Gaussian (FIG. 7a)) and Matern (FIG. 7b)) covariance. It will be noted that in both cases the approximation is good because the approximate and exact a posteriori distribution laws are superimposed. Consequently, there is no need to add additional points (N=20 D or N=50 D).

In the case of a uniform a priori law, [a1, b1]=[0,2] is fixed. FIG. 8 shows the a priori law (gray continuous line), the approximate a posteriori law (made up of crosses) and the exact a posteriori law (made up of dashes) for N=10 D and for a Gaussian (FIG. 8a)) and Matern (FIG. 8b)) covariance. As previously, the approximation is of good quality.

    • Test for θ with two dimensions (D=2)

θ=[1 1] is chosen.

In the case of a Gaussian a priori law, the average μ=[1.2; 1.2] and a standard deviation of 0.4 are fixed for each component, the two components being independent. FIG. 9 shows the a priori law (black continuous line), the approximate a posteriori law (made up of crosses) and the exact a posteriori law (made up of dashes) for N=10 D and for a Gaussian (FIGS. 9a) and 9c)) and Matern (FIGS. 9b) and 9d)) covariance. The first line (FIGS. 9a) and 9b)) corresponds to θ1 and the second (FIGS. 9c) and 9d)) to θ2. It will be noted that the approximate law for the first parameter is slightly off-center relative to the exact law. On the other hand, the approximate law for the second parameter is close to the exact law.

In the case of a uniform a priori law, [a1, b1]=[a2, b2]=[0,2] is fixed. In this case, it is the law of the first parameter which is best approximated, whereas the law of the second exhibits oscillations.

In terms of a posteriori law, reasonable approximations are obtained with only N=10 D.

When the number of samples is increased to reach N=20 D, the approximations are better. The case of the Gaussian a priori law is significantly improved (FIGS. 10a) to 10d)) and that of the uniform a priori law slightly improved. FIGS. 10a) to 10d) correspond to FIGS. 9a) to 9d) for N=20 D.

Finally, for N=50 D, very good approximations are obtained. The best results are obtained with the Gaussian a priori law, the first parameter of the uniform case again exhibiting some oscillations.

    • Test for θ with three dimensions (D=3)

θ=[1 1 1] is chosen.

In the case of a Gaussian a priori law, the average μ=[1.2; 1.2; 1.2] and a standard deviation of 0.4 are fixed for each component, all three being independent. FIG. 11 shows the a priori law (gray continuous line), the approximate a posteriori law (black continuous line) and the exact a posteriori law (made up of dashes) for N=10 D and for a Gaussian (FIGS. 11a), c) and e)) and Matern (FIGS. 11b), d) and f)) covariance. The first line (FIGS. 11a) and b)) corresponds to θ1, the second (FIGS. 11c) and d)) to θ2, the third (FIGS. 11e) and f)) to θ3.

In the case of a uniform a priori law, [a1, b1]=[a2, b2]=[a3, b3]=[0,2] is fixed. In this case, the approximations are less precise.

When the number of samples is increased to reach N=20 D, the approximations are better. The case of the Gaussian a priori law is significantly improved and that of the uniform a priori law is slightly improved.

Finally, for N=50 D, very good approximations are obtained. FIGS. 12a) to f) correspond to FIGS. 11a) to f) for N=50 D.

The different tests show that when the dimension D increases, the results are degraded. This is an expected result: the larger the space of the parameters, the more precision the approximation by kriging loses. However, the conventional choices of the number of points in the space of the parameters, N=10 D and N=20 D, provide correct approximations despite everything. For this example, the tests do not involve an iterative procedure for adding points. In practice, when the size of the set is increased, the points are placed in the regions of interest and the approximations become less sensitive to the dimension.

It will be noted that, for the simulations undertaken for these examples, the a posteriori analytical law obtained by virtue of the method according to the invention requires a computation time of the order of a hundredth of a second, whereas an MCMC sampling of the prior art takes several minutes.

Claims

1-11. (canceled)

12. A method for exploiting a geological reservoir according to an exploitation scheme defined on a basis of a reservoir model comprising a grid associated with parameters θ of the reservoir, the method comprising:

a) providing a reservoir model matched to data measured within the reservoir constructed by: i) generating an initial set of reservoir models stochastically from laws of probability p(θ) of the parameters θ; ii) determining an objective function F(θ) that measures a deviation between dynamic data y1,..., yn acquired during exploitation and dynamic data simulated by a flow simulator with the reservoir models belonging to the set; iii) determining from the objective function F(θ) an analytical law p(θ|y1,..., yn) of conditional probability of the parameters θ from knowledge of the measured dynamic data y1,..., yn; iv) generating at least one new reservoir model with the analytical law p(θ|y1,..., yn) and adding the at least one new model to the set of reservoir models; v) reiterating steps ii) to iv) and determining a reservoir model which minimizes the objective function;
b) determining an optimum exploitation scheme for the reservoir by simulating the exploitation of the reservoir with the matched reservoir model and the flow simulator; and
c) exploiting the reservoir by implementing the optimum exploitation scheme.

13. The method according to claim 12, comprising determining the conditional analytical law p(θ|y1,..., yn) by determining an approximation of a function G(θ) such that G(θ)=exp(−F(θ)) and then computing the analytical law from the following relationship:

$$p(\theta \mid y_1, \ldots, y_n) = \frac{G(\theta) \times p(\theta)}{\int G(\theta) \times p(\theta)\, d\theta}.$$

14. The method according to claim 13, comprising determining the approximation of the function G(θ) by computing G(θi)=exp(−F(θi)) for models i belonging to the set and then interpolating the values of G(θi).

15. The method according to claim 14, comprising interpolating the values of G(θi) by kriging.

16. The method according to claim 12, wherein during step iv) at each iteration new reservoir models are generated, from which M1 models are chosen and added to the set, the set of reservoir models being initially made up, during construction of the initial set, of a number M0 of models.

17. The method according to claim 13, wherein during step iv) at each iteration new reservoir models are generated, from which M1 models are chosen and added to the set, the set of reservoir models being initially made up, during construction of the initial set, of a number M0 of models.

18. The method according to claim 14, wherein during step iv) at each iteration new reservoir models are generated, from which M1 models are chosen and added to the set, the set of reservoir models being initially made up, during construction of the initial set, of a number M0 of models.

19. The method according to claim 15, wherein during step iv) at each iteration new reservoir models are generated, from which M1 models are chosen and added to the set, the set of reservoir models being initially made up, during construction of the initial set, of a number M0 of models.

20. The method according to claim 16, comprising sampling the M0 models of the initial set using a latin hypercube.

21. The method according to claim 16, wherein the M1 models added to the set are chosen using the approximation of the function G(θ).

22. The method according to claim 20, wherein the M1 models added to the set are chosen using the approximation of the function G(θ).

23. The method according to claim 21, wherein at least some of the M1 models are models for which approximation of the function G(θ) provides maximum values.

24. The method according to claim 22, wherein at least some of the M1 models are models for which approximation of the function G(θ) provides maximum values.

25. The method according to claim 21, wherein at least some of the M1 models are models for which a quality of the approximation of the function G(θ) has kriging variances which are highest for the new models.

26. The method according to claim 24, wherein at least some of the M1 models are models for which a quality of the approximation of the function G(θ) has kriging variances which are highest for the new models.

27. The method according to claim 12, comprising carrying out a sensitivity study by computing objective functions such that each objective function is computed for parameters or for a limited time interval relative to available information.

28. The method according to claim 13, comprising carrying out a sensitivity study by computing objective functions such that each objective function is computed for parameters or for a limited time interval relative to available information.

29. The method according to claim 14, comprising carrying out a sensitivity study by computing objective functions such that each objective function is computed for parameters or for a limited time interval relative to available information.

30. The method according to claim 15, comprising carrying out a sensitivity study by computing objective functions such that each objective function is computed for parameters or for a limited time interval relative to available information.

31. The method according to claim 16, comprising carrying out a sensitivity study by computing objective functions such that each objective function is computed for parameters or for a limited time interval relative to available information.

32. The method according to claim 20, comprising carrying out a sensitivity study by computing objective functions such that each objective function is computed for parameters or for a limited time interval relative to available information.

33. The method according to claim 23, comprising carrying out a sensitivity study by computing objective functions such that each objective function is computed for parameters or for a limited time interval relative to available information.

34. The method according to claim 25, comprising carrying out a sensitivity study by computing objective functions such that each objective function is computed for parameters or for a limited time interval relative to available information.

35. A computer program product that is downloadable from a communication network, and/or stored on a medium that can be read by a computer, and/or executed by a processor, comprising program code instructions for implementing, when run on a computer, a method comprising:

a) providing a reservoir model matched to data measured within the reservoir constructed by: i) generating an initial set of reservoir models stochastically from laws of probability p(θ) of the parameters θ; ii) determining an objective function F(θ) that measures a deviation between dynamic data y1,..., yn acquired during exploitation and dynamic data simulated by a flow simulator with the reservoir models belonging to the set; iii) determining from the objective function F(θ) an analytical law p(θ|y1,..., yn) of conditional probability of the parameters θ from knowledge of the measured dynamic data y1,..., yn; iv) generating at least one new reservoir model with the analytical law p(θ|y1,..., yn) and adding the at least one new model to the set of reservoir models; v) reiterating steps ii) to iv) and determining a reservoir model which minimizes the objective function;
b) determining an optimum exploitation scheme for the reservoir by simulating the exploitation of the reservoir with the matched reservoir model and the flow simulator; and
c) exploiting the reservoir by implementing the optimum exploitation scheme.
Patent History
Publication number: 20140019108
Type: Application
Filed: Jun 28, 2013
Publication Date: Jan 16, 2014
Applicant: IFP ENERGIES NOUVELLES (Rueil-Malmaison Cedex)
Inventors: Sebastien DA VEIGA (Paris), Veronique GERVAIS-COUPLET (Rueil-Malmaison), Mathieu FERAILLE (Houilles)
Application Number: 13/930,239
Classifications
Current U.S. Class: Well Or Reservoir (703/10)
International Classification: E21B 41/00 (20060101);