Computers & Geosciences 50 (2013) 4–15
Hierarchical benchmark case study for history matching, uncertainty quantification and reservoir characterisation

D. Arnold, V. Demyanov, D. Tatum, M. Christie, T. Rojas, S. Geiger, P. Corbett

Institute of Petroleum Engineering, Heriot-Watt University, UK
Abstract
Benchmark problems have been generated to test a number of issues related to predicting reservoir behaviour (e.g. Floris et al., 2001; Christie and Blunt, 2001; Peters et al., 2010). However, such cases are usually focused on a particular aspect of the reservoir model (e.g. upscaling, property distribution, history matching, uncertainty prediction), with the other decisions in constructing the model fixed: log values controlling the distribution of cell properties away from the wells, fixed grids and structural features, and fixed fluid properties. This is because all these features require an element of interpretation from indirect measurements of the reservoir, from noisy and incomplete data, and from judgments based on domain knowledge. There is therefore a need for a case study that considers interpretational uncertainty integrated throughout the reservoir modelling workflow. In this benchmark study we require the modeller to make interpretational choices as well as to select the techniques applied to the case study, namely the geomodelling approach, history matching algorithm and/or uncertainty quantification technique. The interpretational choices are in the following areas:
(1) Top structure interpretation from seismic and well picks.
(2) Fault location, dimensions and the connectivity of the fault network.
(3) Facies modelling approach.
(4) Facies interpretations from well log cutoffs.
(5) Petrophysical property prediction from the available well data.
(6) Grid resolution: the choice between number of iterations and model resolution to capture the reservoir features adequately.

This semi-synthetic study is based on real field data: production data, seismic sections from which to interpret the faults and top structure, wireline logs from which to identify facies correlations and the saturation profile, porosity and permeability data, and a host of other data. To make the problem usable in a manageable time period, multiple hierarchically related gridded models were produced for a range of different interpretational choices.

Keywords: uncertainty quantification; history matching; risk; geomodelling; geostatistics; benchmark
1. Introduction

Uncertainty is an issue we need to deal with in every reservoir, due to the sparse and/or low-resolution data available and our limited knowledge of the reservoir. Creating a simulation model to forecast petroleum reservoir production requires the modeller to make a number of choices about how to construct a representative model. These choices range from which numerical value to assign to a reservoir property based on measured data, to interpretational choices such as the depositional model.
Corresponding author: D. Arnold. Tel.: +44 131 451 8298. E-mail addresses: [email protected] (D. Arnold), [email protected] (V. Demyanov). http://dx.doi.org/10.1016/j.cageo.2012.09.011
Typically a geomodeller would produce many hundreds of realisations of the reservoir to assess the uncertainty in a pre-production field. Critically, these models attempt to cover the uncertainties in the reservoir through a range of possible numerical input parameters that produce variations in key reservoir features. Where we can miss out uncertainty is in the interpretational elements of modelling that are difficult to describe using numerical parameters, such as the choice of which faults are present, the picking of the top structure, the depositional model, cut-off selection and permeability prediction models. Different possible interpretations of the reservoir, for instance, would require different conceptual models for the facies distributions and different modelling approaches. The modelling choices made are related to how complex a model of the real system we choose to create. The complexity issue is
well known as the bias-variance trade-off, where over-complex models result in inflation of the uncertainty (variance) while over-simplistic ones lead to bias. A meta-model that subsumes specific candidate models would lead to considerable inflation of variance; therefore there is a need to find ways of mixing models that are consistent with what we believe is plausible (Christie et al., 2011). On the other hand, if the models in the ensemble come from the same code (are related), the variance may well be understated (Challenor and Tokmakian, in Christie et al., 2011). Therefore, it is important to include bias terms in the model corresponding to the effects not accounted for by the model.

Once it comes to choosing from different modelling algorithms and multiple interpretations of data and knowledge, uncertainty quantification inevitably becomes subject to that choice, which is consistent with the subjective nature of uncertainty (Caers, 2011). From a probabilistic point of view, uncertainty can be seen as a "correct" posterior PDF, which is unknown and needs to be sampled (Oliver et al., 2008). Furthermore, uncertainty can be seen as a relative term, subject to the measurement scale and the precision with which we define the things we want to measure or estimate. Uncertainty analysis based on evaluating the covariance matrix of the posterior probability is difficult for complex problems, where the posterior pdf is itself complex. A Bayesian approach implies there is one "true" posterior to characterise the uncertainty, defined as the product of the prior probability and the likelihood. The determination of a unique posterior pdf for a multi-scenario model can become quite difficult. A Bayesian approach joins the probabilities by means of "strong" conjunction; relaxing the way this conjunction is performed would possibly provide a wider evaluation of uncertainty. There are non-Bayesian approaches to combining prior and evidence, e.g. by means of conjunction (Tarantola, 2005). There have been attempts to use the Dempster-Shafer generalisation of the Bayesian theory of subjective probability to measure uncertainty in reservoir geological properties (Kim, 2002). Another approach was proposed in the perception-based theory of probabilistic reasoning, which uses fuzzy logic to describe perceptions and subjective probability (Zadeh, 2002).

The issue of multiple scenarios is compounded when we move to post-production fields and want to add production data to the uncertainty quantification process. Here the model is history matched to the measured well production data, the quality of fit defines the likelihood of the model, and this is typically now done using a range of automated approaches (Floris et al., 2001). Uncertainty in automated history matching approaches is usually described by inference from an ensemble of multiple models, which come from a prior range of the model parameters or state vectors updated through each iteration. There are several difficulties associated with this. The uncertainty in the model definitions, components and complexity is not easy to take into account. For instance, if there are uncertainties about the structure, it can become difficult to re-grid the model to new structural configurations automatically. Distinguishing between different geological interpretations also becomes an issue.
Data assimilation approaches, such as Ensemble Kalman filters (Evensen et al., 2007), can potentially handle interpretational uncertainty within the range of the initial ensemble of realisations; however, the consistency of the converged ensemble with geologically realistic distributions may be an issue. Particularly tricky aspects of the uncertainty to account for in history matching include the top structure, layering, faults (numbers, dimensions, displacements and locations) and other structural features. As a result we tend to keep these features fixed, preferring instead to change elements that are easy to parameterise in the simulation model. This means that significant uncertainties in the gross rock volume (top structure), net-to-gross (layering) and
partitioning (faults) of the reservoir are typically not accounted for by the uncertainty quantification process. These features are developed principally from seismic data and well correlations, so they are based on the interpretation of geologists/petrophysicists and are (a) subject to uncertainties in depth conversion and (b) subject to uncertainties due to the interpretation process, where the error cannot easily be quantified as we do not typically produce many independent seismic interpretations.

A number of studies have considered the uncertainty in seismic interpretation (e.g. Bond et al., 2007; Rankey and Mitchell, 2003) and show a potential for significant variation in the interpretation of the same data due to the background of the interpreter (Bond et al., 2007) and overconfidence in the quality of the interpretation (Rankey and Mitchell, 2003). In Rankey and Mitchell (2003), six interpreters were given the same seismic data from a carbonate reservoir. They all believed that, because the seismic data were apparently easy to interpret, their subsequent interpretations were very accurate, with only a small error in the location of the reservoir top. In fact, while most of the reservoir top was accurately identified by all the geologists studied, the edges of the reservoir were less well defined. A comparison of the six interpretations showed that the portions of the reservoir that were less well defined added considerable variation to the volumetric estimates, even though the other parts of the field were the same for all interpretations. There is therefore a need to develop techniques that account for both the uncertainties in measured reservoir inputs that are easy to handle, such as relative permeability uncertainties, and the interpretational uncertainties in the reservoir.

A further complication in accounting for the uncertainty is the grid resolution of the model. Most previous history matching case studies provide a single grid resolution model to match to real field production data, or to production data from a truth case model of the same or higher resolution. The resolution of the grid and the accuracy of the gridding approach are important in history matching, as the total run time available for the process is a limiting factor. In most cases we can either run many low-resolution models that have a higher solution error in their predictions, or run fewer high-resolution models with less error but potentially fail to identify the best history matched models due to insufficient iterations. Solution errors are the difference between the exact mathematical solution and the numerical solution used to represent it in the simulation model. Any assumptions and simplifications in the mathematical model, rounding errors in the simulator, or numerical errors due to the grid resolution all contribute to the solution errors. A solution error model was implemented to account for upscaling errors in O'Sullivan and Christie (2005) and Christie et al. (2006). There are also uncertainties associated with missing physics in the model, which may have an impact on the match to observations. This so-called model inadequacy, once accounted for with a random statistical process, improved quantification of the overall uncertainty through more consistent inference from the generated model response.
In this paper we describe a new case study, called the Watt Field (after James Watt, the founder of Heriot-Watt University), which we have developed to consider both the impact of how you choose to estimate the uncertainty and of what model you choose to do this with. The benchmark study mostly addresses the issues associated with an early stage of a brownfield development. Previous case studies have concentrated on the choice of uncertainty quantification method (PUNQ-S3 (Floris et al., 2001)) or facies/petrophysics modelling (such as the Brugge case study (Peters et al., 2010)). For models like PUNQ-S3, many parameterisations were developed by different authors (Floris
et al., 2001; Demyanov et al., 2004; Hajizadeh et al., 2010) to account for the model uncertainty within a given geological scenario. The Stanford VI case study provided a good example of a realistic synthetic full-scale model with different depositional environments and associated seismic (Castro et al., 2005). The Brugge case study gave the modeller a choice from 104 prior realisations of the geology, along with high-resolution log data and a choice of petrophysical correlations. SPE10 (Christie and Blunt, 2001) is a widely used model for upscaling studies which deals with the problems of grid resolution and solution errors. As yet no study has combined these choices with other uncertainties that need to be accounted for, such as the shape of the top structure and the fault network uncertainty. This paper proposes a new case study that includes the kinds of interpretational uncertainties present in a real reservoir as well as other uncertainties. Specifically, we have produced a number of realisations of the model based on different top structure interpretations, fault models, facies modelling/depositional environment choices and relative permeability/capillary pressure uncertainties, for a number of grid resolution choices. Our intention is to encourage researchers to look into accounting for uncertainties in their interpretations of the reservoir and to develop techniques that account for these uncertainties.
2. Uncertainty quantification of producing fields

Extensive work has been carried out on developing techniques for uncertainty quantification in petroleum reservoirs. Most common approaches develop a Bayesian methodology, as it allows the engineer to update an initial estimate of uncertainty to a new posterior estimate using production data. Bayes' theorem is described by the relationship between the posterior and prior:

$$p(m\mid O) = \frac{p(O\mid m)\,p(m)}{\int_m p(O\mid m)\,p(m)\,dm} \qquad (1)$$

where p(m|O) is the posterior probability of the model, an updated estimate of the reservoir uncertainty from the initial prior probability p(m) and the likelihood p(O|m). By calculating the likelihood we can update the initial prior probability to the improved posterior. The steps in this process require the calculation of the likelihood from the mismatch between the production data and the simulation model responses, where the likelihood increases with a closer match between the data and simulation predictions. The following areas are key components in the Bayesian inference process for integrating production data into uncertainty quantification:

1. Define an objective function that describes the closeness of fit between the history data and the simulation model response.
2. Automate the history matching process to find good matches to the data. This is done using an optimisation algorithm to find good results quickly, or through assimilation methods that perturb the model state vector (Evensen et al., 2007) (see Section 2.2).
3. The model is history matched using a set of parameters that encompass the uncertainty. The definition of these uncertain parameters is called the parameterisation. The choice of parameterisation is important in making sure the model is able to calibrate to the data adequately and produce good forecasts.
4. With many matched models found by history matching, we can calculate the likelihoods across the model parameter space and infer the posterior probability using Bayesian methodologies. There are many routes to calculating the Bayesian inference (see Section 2.4).
5. We may need to repeat steps 1-3 (or 4 if we are doing Bayesian inference) for multiple scenarios of the reservoir model and potentially use all scenarios in estimating uncertainty.
6. We balance run time with grid resolution to minimise solution errors while producing enough simulations to describe the uncertainty. Therefore, the grid resolution must be chosen appropriately and may require testing.

2.1. Objective function

An objective function is a mathematical expression that measures how close a problem has come to an optimal value. In the case of history matching, the objective function is a measure of the difference between the observed and simulated results, and we aim to minimise this value. The most commonly used objective function for history matching is the least-squares norm, which calculates a measure of the discrepancy between the simulated and historical values as a numerical value called the "misfit", M. The least-squares misfit formula is defined as

$$M = \sum_{i=1}^{N} \frac{\left(q_{obs}(t_i) - q_{sim}(t_i)\right)^2}{2\sigma_i^2} \qquad (2)$$

where q_obs is the observed or historical rate and q_sim is the simulated result at time t_i, N is the number of data points, and \sigma_i^2 represents the measurement error in the observed data, assuming the errors are independent and identically distributed. The use of the least-squares misfit is important when carrying out Bayesian inference, as by assuming Gaussian errors in our production data we can define the likelihood function from the misfit:

$$P(O\mid m) \propto e^{-M} \qquad (3)$$

Without the Gaussian assumption a rigorous likelihood definition may be difficult to produce; moreover, for many practical problems an objective function different from least squares may be needed. In the case of correlated errors, \sigma_i^2 transforms into the full covariance matrix. The likelihood function describes how likely it is that the model response explains the historical data, and thus how likely it is that the model represents the true reservoir configuration. A well-matched model will therefore result in a low misfit value and a correspondingly high likelihood value.
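To make the link between Eqs. (2) and (3) concrete, the minimal sketch below computes a least-squares misfit from observed and simulated rates and converts it to a relative likelihood. It is an illustration only, not the implementation used in this study; the example rates are invented, and the 15% error level mirrors the noise added to the Watt Field production data.

```python
import numpy as np

def misfit(q_obs, q_sim, sigma):
    """Least-squares misfit M (Eq. 2) for one data series.

    q_obs, q_sim : observed and simulated rates at the same times t_i
    sigma        : measurement error std dev (scalar or per point),
                   assuming independent Gaussian errors
    """
    return np.sum((q_obs - q_sim) ** 2 / (2.0 * np.asarray(sigma) ** 2))

# Invented example: three candidate models against "observed" oil rates.
q_obs = np.array([1000.0, 950.0, 880.0, 790.0])
sims = [np.array([990.0, 960.0, 870.0, 800.0]),    # model a: close match
        np.array([1100.0, 900.0, 700.0, 650.0]),   # model b: poor match
        np.array([1005.0, 945.0, 885.0, 795.0])]   # model c: very close match
sigma = 0.15 * q_obs                               # assumed 15% relative error

M = np.array([misfit(q_obs, s, sigma) for s in sims])
# Likelihood up to a constant: P(O|m) proportional to exp(-M) (Eq. 3).
# Subtract min(M) before exponentiating for numerical stability.
w = np.exp(-(M - M.min()))
print(M, w / w.sum())   # misfits and normalised relative likelihoods
```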
2.2. Optimisation algorithms

Optimisation methods were classified by Christie et al. (2005) as either calibration methods or data assimilation methods. Calibration methods are an automated version of the traditional history matching method, whereby a complete run of the simulation is carried out and the match quality to the production data is used to move the model towards a better solution. Data assimilation methods carry out a similar function, but the data are calibrated for each time step and the optimisation step adjusts the model before the next step is run. The main calibration methods are gradient and stochastic search algorithms, while the main data assimilation method used for history matching is the Ensemble Kalman Filter (EnKF). Gradient-based methods require the calculation of the derivative of the objective function with respect to the model parameters, typically as either gradients or sensitivity coefficients. Methods include Gauss-Newton, Levenberg-Marquardt and
steepest descent, and these are very efficient at finding locally optimal solutions. Stochastic optimisation methods such as Genetic Algorithms (GA) (Erbas and Christie, 2007), the Neighbourhood Algorithm (NA) (Sambridge, 1999a), Particle Swarm Optimisation (PSO) (Mohamed et al., 2006) and Differential Evolution (DE) (Hajizadeh et al., 2009) are global estimation methods designed to find many local minima, assuming a global minimum cannot be distinguished (probably a good assumption for reservoir history matching, where there is a limit on the number of simulations we can run).

Mariethoz et al. (2010) pointed out that importance sampling can lead to underestimation of uncertainty due to sampling bias, as the posterior is sampled not uniformly but preferentially in the regions of better fit. In MCMC sampling this bias is corrected by weighting the acceptance probability with a ratio between the entire prior and its subset. The stochastic evolutionary algorithms referred to in this study differ from MCMC adaptive sampling in that, while sampling from the prior, the acceptance probability is set to 1. Therefore, a further inference stage is required in this case to evaluate the posterior probability with an MCMC method (e.g. Gibbs sampling in NAB; see Section 2.4). The bias correction in NAB is ensured by normalising to the volume around each sample and multiplying by the prior to approximate the posterior.

Ensemble-based optimisation methods (Evensen et al., 2007), the most common of which is Ensemble Kalman Filtering (EnKF), differ from stochastic sampling and gradient-based methods in that the simulation models are run forward one timestep at a time, whereupon EnKF updates the model state vectors before moving on to the next timestep. EnKF has recently been applied to fairly complex reservoir models, including structural uncertainty. For instance, Seiler et al. (2010) demonstrated an uncertainty study of fault geometry; however, it is limited to a single structural scenario based on a given number of faults. This problem can be tackled within the EnKF approach using multipoint statistics simulation of the fault network, as used by Sun (2011). All these techniques have shown an ability to locate good history matched models quickly (e.g. Hajizadeh et al., 2009; Mohamed et al., 2009); however, the choice of algorithm has an impact on the resolved area of parameter space, and therefore on the estimation of the posterior probability, as shown by Erbas and Christie (2006) and the PUNQ project (Floris et al., 2001).
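As an illustration of the stochastic samplers discussed above, here is a bare-bones particle swarm optimiser minimising a misfit function over a box-constrained parameter space. This is a generic textbook PSO under assumed default inertia and acceleration constants, not the flexi-PSO or standard-PSO configurations of RAVEN used later in this paper.

```python
import numpy as np

def pso(misfit, lo, hi, n_particles=20, n_iters=100,
        w=0.7, c1=1.4, c2=1.4, seed=0):
    """Minimise `misfit` over the box [lo, hi]^d with a basic PSO."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, d))   # parameter vectors
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # per-particle best positions
    pbest_f = np.array([misfit(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()               # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                   # respect the prior ranges
        f = np.array([misfit(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy usage: a quadratic misfit standing in for a reservoir-simulation misfit.
best, best_f = pso(lambda p: ((p - 0.2) ** 2).sum(),
                   lo=np.zeros(3), hi=np.ones(3))
```

In a history matching setting, the misfit callback would run the simulator for the candidate parameter vector and evaluate Eq. (2); the visited samples would then be passed to an inference step such as NAB (Section 2.4).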
2.3. Parameterisation techniques

The choice of parameterisation defines the way natural dependencies in the model are described. The accuracy, complexity and realism of the model depend on the way it is parameterised. Furthermore, the efficiency of calibrating the model to data and its predictive power also depend on the number and nature of the parameters. Some parameterisations may be very efficient at achieving a good calibration match but do not actually address the key uncertainties. For instance, we might ignore geological uncertainties in favour of local factors, like skin, to tune the model around the wells. The difficulty arises when a model includes parameters that cannot be directly inferred from nature, or when this inference is non-unique and bears large uncertainty. For instance, connectivity has a major impact on flow, but full-field reservoir models are usually defined by correlation models and proportions of geological bodies. More recent training-image-based models give good descriptions of connectivity, although they are not controlled directly by connectivity measures. Engineering approaches often provide consistent control over the volume and connectivity of the reservoir compartments, but they typically have limited capability compared with full-field geological models when making infill drilling decisions.
A common approach for history matching is to parameterise the relative permeability curves (Okano et al., 2005). These are frequently uncertain due to the low number of available measurements and geological heterogeneity in the system. The relative permeability uncertainty increases further in three-phase flow cases. Geological parameterisation describes techniques that include parameters relating to geological features, such as the dimensions of facies bodies, their orientations and their relationships to petrophysical properties. Arnold (2009) and more recent work by Rojas et al. (2011) showed the impact of realistic geological priors on posterior inference, reducing the volume of realistic parameter space by avoiding unrealistic models and improving the validity of the posterior inference.

2.4. Bayesian inference

The Bayesian inference step evaluates the posterior probability from Eq. (3) from an ensemble of generated models. Bayesian inference is most simply applied by sampling from the prior distribution using a Monte Carlo technique such as Markov Chain Monte Carlo (MCMC) or rejection sampling (RS) from the posterior (Liu et al., 2001). These approaches sample randomly across parameter space, so many iterations may be required to find good local minima. Various types of MCMC algorithms have been developed and applied to inference problems in reservoir predictions, such as Population MCMC (Mohamed et al., 2010a) and Hamiltonian Monte Carlo (Mohamed et al., 2010b). NA-Bayes (NAB) is another algorithm used to infer uncertainty from the parameter space sampled in assisted history matching. It uses Gibbs sampling to estimate the posterior uncertainty based on the sampling from any optimisation algorithm (Sambridge, 1999b). EnKF gradually optimises a model after each assimilation step/datum and as a result provides an estimate of the posterior probability, based on the assumptions that the model is linear, the priors are Gaussian and the ensemble size is infinite.

2.5. Handling multiple scenarios

When presented with multiple geological scenarios, a better option than picking a single scenario would be either to parameterise the model so that we can sample across a parameter space between the scenarios identified, or to create separate parameterisations for each scenario, sample from each, and then recombine the results in a Bayesian framework. Scenarios are constructed through a hierarchy of choices based on uncertainties in the model inputs at each stage of the model build. Choices earlier in the model build, e.g. the shape of a top structure or the locations of faults, will propagate through the entire modelling workflow. Any combination of these choices can be produced to create a possible realisation of the reservoir. Prior probability estimates of each model to be used in Bayesian uncertainty quantification may well be qualitative rather than quantitative where we cannot describe the prior in terms of some kind of parameter space. Authors such as Wood and Curtis (2004) have formalised this difference between quantitative and qualitative uncertainty in terms of Bayes theory. The key issue therefore is to (a) account for the (qualitative) probability of each individual interpretation of the model in a systematic way and (b) handle potentially different numbers of parameters for different parameterisations of the model.

The option of sampling in a parameter space developed between scenarios has been tackled previously by Suzuki et al.
(2008), where a gradual deformation technique was applied to
effectively interpolate between six possible interpretations of the structural model to create new realisations. As this was a single parameterisation of the reservoir, adjusting only the gradual deformation parameters to move between the interpretations, the resulting ensemble of models could be used in Bayesian inference techniques, as the number of dimensions of parameter space does not change. A recently proposed metric space approach (Scheidt and Caers, 2008) provides a way to both explore the relationships between realisations and evaluate their similarity through their separation in the metric space.

An alternative approach is Bayesian model averaging (Pickup et al., 2008), where the posterior probability for each of a set of possible models is generated using standard assisted history matching and Bayesian inversion, and the posteriors are then combined to calculate an average posterior over all possible models. In Pickup et al. (2008) the posterior inference was influenced by the total number of parameters, where smaller numbers of parameters were considered to be more likely due to less chance of overfitting (Occam principle). Transdimensional sampling such as Reversible Jump Monte Carlo (RJMC) (Sambridge et al., 2006) can be applied to different parameterisations of the model simultaneously for history matching. The algorithm in essence jumps between parameterisations to sample from all the prior distributions and estimate the posterior for all possible models in one step. This approach requires RJMC to be applied at the initial history matching step rather than as a post-processor, so it will require more function evaluations than algorithms such as PSO or DE to find local minima. Overall, it is clear that when presented with multiple possible scenarios of the reservoir model we must account for the uncertainties arising from the differences between these models. This benchmark was created to test new techniques that address this issue.
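A hedged sketch of the model-averaging idea described above: given per-scenario ensembles with misfit values, weight each scenario by an approximate evidence and mix the forecasts. The evidence proxy used here (the mean likelihood of a scenario's samples) is a deliberately crude assumption for illustration; a real study would use NAB or another proper estimator, and equal scenario priors are assumed unless supplied.

```python
import numpy as np

def bma_weights(misfits_per_scenario, prior=None):
    """Approximate scenario weights for Bayesian model averaging.

    misfits_per_scenario : list of 1-D arrays of misfit values, one array
                           per geological scenario (e.g. per top structure)
    prior                : prior probability of each scenario (equal if None)
    """
    n = len(misfits_per_scenario)
    prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior)
    m0 = min(m.min() for m in misfits_per_scenario)   # stabilise exp()
    # Crude evidence proxy: mean likelihood of the sampled models.
    ev = np.array([np.exp(-(m - m0)).mean() for m in misfits_per_scenario])
    w = prior * ev
    return w / w.sum()

# Invented misfit ensembles for three hypothetical scenarios (e.g. TS-1..TS-3).
w = bma_weights([np.array([5.0, 6.0, 7.5]),
                 np.array([4.2, 4.8, 9.0]),
                 np.array([8.0, 9.5, 11.0])])
# A combined forecast percentile would then be the w-weighted mixture of
# the per-scenario posterior forecasts.
```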
2.6. Gridding and numerical solution errors

The choice of grid resolution presents the modeller with a dilemma: by reducing the dimensions of the grid cells we can reduce the numerical solution errors in the model response; however, the simulation run time will increase, depending on the available computer power. Realistically, for the purposes of history matching in the time frames available to engineers, models must be limited in their complexity. Christie et al. (2006) applied solution error models to account for solution errors in low-resolution models when history matching; in the absence of such techniques, however, the modeller must make a decision on the appropriate model to use given the time available.
3. Field description

3.1. Overview

The Watt Field is a synthetic study based on a mixture of real field and synthetic data, describing a realistic field example seen through appraisal into the early development life stage. The top structure and wireline data are based on real field data; however, the fluid properties, relative permeability and capillary data are synthetic. The field development plan is also synthetic, resulting in an artificial production response. The model spans a 12.5 km by 2.5 km surface area, elongated in the east/west direction, with a total modelled thickness of around 190 m, much of which is below the oil-water contact. The field is located around 1555 m subsurface, with an initial reservoir pressure of 2500 psi as measured from repeat formation testing (RFT) and well test data.
The water/oil contact (WOC) is identified from wireline and RFT data at a constant 1623.5 m subsurface. The interpreted depositional environment for the field is a braided river system, for which a number of different possible outcrop analogues and modelling techniques could be applied. Common facies types in this kind of environment are fluvial channel sands, overbank fine sands and background shales. The field was initially appraised through a set of six wells, Wells A-F, then subsequently developed through a set of 16 horizontal production wells located across the central part of the reservoir and 5 horizontal and 2 vertical (one of which is a recompletion of Well B) injection wells around the edges. Horizontal multi-lateral wells were selected in the original development plan to maximise the distance from the WOC, due to the low relief of the field, and to increase oil production rates.

An interpretation of the top structure and major faults was picked from the seismic in the time domain and then depth converted. Faults are identified by displacement in the seismic data and curvature observed in the top structure. The depth conversion develops a single depth surface realisation that is then tied to the wells for a number of possible well horizon picks identified from the wireline data provided as part of this case study. The major faults were identified where possible from the available seismic data, though there is ambiguity as to the number and extent of the faults in the reservoir. The general trend of the faults seen in the seismic data is east/west. Different structural realisations of the reservoir were possible due to uncertainty in the number and location of faults, specifically sub-seismic faults. Three possible top structures, along with the major faults and the WOC, are shown in Fig. 1 in (a) plan view and (b) cross section. Fault seal is represented simply by transmissibility multipliers that are constant along the fault surface. Figs. 1 and 2 show significant variations in the extent of the WOC and the thickness of the reservoir zone for each of the three produced realisations.

Data are provided in the appendix to this benchmark study on the possible structures present in this braided fluvial system and the probable inputs into the model for facies modelling. Wireline logs are collected from the six appraisal wells only, with no data coming from the horizontal production wells. We provide porosity, permeability and facies logs based on core plug data from Wells A and B, and porosity predictions from neutron density logs from wireline data in the other wells. Electrofacies identification could be made from any of the log data available. In this case we have developed a set of facies logs based on the Relative Porosity Difference (RPD) calculated from neutron and density porosity estimates from all six appraisal wells. RPD is expressed by

$$RPD = \frac{Por_{Neu} + 0.05\,Por_{Den}}{Por_{Den}} \qquad (4)$$

where Por_Neu is the neutron porosity and Por_Den is the density porosity. Cutoffs of 0.6, 0.7 and 0.8 were used to create three possible facies logs. Permeability/porosity from the cored Wells A and C is used to predict permeability in the other wells and outside the cored sections. Fig. 3 shows a cross plot for all cored data with a global poro-perm correlation for Well A. As can be seen, multiple interpretations of the poro-perm correlations are possible, with the prediction curve given here (red line) being used later in this paper as a permeability predictor.
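A small sketch of how the electrofacies flags could be generated from Eq. (4): compute RPD from the neutron and density porosity logs and threshold it at one of the three cutoffs. The log samples are illustrative, and coding values above the cutoff as sand is an assumption made here for demonstration, not a rule stated in the case study.

```python
import numpy as np

def rpd_facies(por_neu, por_den, cutoff=0.7):
    """Facies flag from the Relative Porosity Difference of Eq. (4).

    RPD = (por_neu + 0.05 * por_den) / por_den
    cutoff : one of the 0.6 / 0.7 / 0.8 cutoffs (CO-1..CO-3 in Table 1).
    Returns 1 where RPD exceeds the cutoff (assumed sand here), else 0.
    """
    rpd = (por_neu + 0.05 * por_den) / por_den
    return (rpd > cutoff).astype(int)

# Illustrative log samples (fractional porosities):
por_neu = np.array([0.18, 0.24, 0.30, 0.12])
por_den = np.array([0.25, 0.26, 0.31, 0.28])
print(rpd_facies(por_neu, por_den, cutoff=0.7))
```

Re-running the same thresholding with the 0.6 and 0.8 cutoffs yields the other two candidate facies logs, which is how the three interpretational choices propagate into different facies models.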
Fig. 1. Seismic top structure and major identified faults. Seismic lines, top structure and fault data is provided in the data pack.
Fig. 2. Pay zone thickness maps for the 3 top structure scenarios.
PVT and relative permeability data are supplied for the reservoir. PVT data were sampled across the six wells and show no variation in properties, therefore this is kept as a fixed known. SCAL data were sampled from a number of core plugs from Wells A, D and F, and show variation both between the braid bar and overbank sand facies and within those facies types. The ranges of uncertainty in the relative permeability curves are shown in Fig. 4 and can be described by a Corey model with variations in the exponents of 1.5-6. There is no significant capillary pressure in this reservoir, therefore no J-functions are provided for the model.

A corner-point simulation/modelling fine grid was developed for a "truth case" model to provide a production history. A truth case realisation of the reservoir facies, porosity and permeability was generated with particular assumptions about their distribution and spatial correlation. As the modelling choice, based on the provided data, is itself an uncertainty in the model, the particulars of the truth case
modelling approach and settings are not expanded on in this paper. Seismic was not used for conditioning the facies model due to its poor quality and confidentiality. The production history for the field is generated from the truth case model based on a synthetic development scenario. The field development plan consists of a set of horizontal production and injection wells drilled and completed over a 4-year period, as well as a recompletion of Well B as an injector. A total production history of 20 years was produced, but only 8 years of BHP, oil, gas and water rates for each of the production wells were made available for the history period. The field was initially controlled by a maximum fluid rate to account for topsides handling constraints, switching to a BHP constraint once the BHP falls below the limiting pressure of 1000 psi. Random Gaussian noise of 15% was added to all the production data to simulate the impact of measurement errors in the data.
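For readers reproducing this kind of synthetic history, the 15% Gaussian noise can be applied as below. This is a sketch of the general recipe only: the paper does not specify the exact noise model, so multiplicative per-point noise is an assumption made here.

```python
import numpy as np

def add_measurement_noise(rates, rel_std=0.15, seed=42):
    """Add zero-mean Gaussian noise proportional to each measurement.

    rel_std : relative standard deviation (15%, as in the Watt Field data).
    """
    rng = np.random.default_rng(seed)
    noisy = rates * (1.0 + rel_std * rng.standard_normal(rates.shape))
    return np.clip(noisy, 0.0, None)   # rates cannot be negative

# Illustrative truth-case oil rates for one well:
wopr_true = np.array([1200.0, 1150.0, 1080.0, 990.0])
wopr_obs = add_measurement_noise(wopr_true)
```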
3.2. Model uncertainties and modelling choices

The purpose of this paper is to create a case study that includes uncertainties in the interpretation of the reservoir. When presented with these kinds of uncertainties in real field studies, geologists and engineers typically make deterministic choices in creating models of the reservoir. A single model is often selected for simulation and subsequent history matching. Therefore, to recreate the same situation, we have generated a set of models based on several choices for each of the key interpretational elements, the modelling approach choices and the grid resolution. Users can choose a combination of predefined descriptions of the reservoir structure, grid resolution and saturation profile, or they can interpret their own from the data. The key uncertainties identified are:

(1) Top structure seismic interpretation
(2) Fault network definition
(3) Grid resolution
(4) Cutoffs for electrofacies identification
(5) Facies modelling approach
(6) Porosity/permeability prediction
(7) Relative permeability
Table 1. Uncertain choices to make in choosing a scenario for this model. The combination of grid resolution, top structure, fault model and facies cut-off uncertainty leads to 81 different possible combinations of the options.

| Model property             | Description                                      | File name |
|----------------------------|--------------------------------------------------|-----------|
| Grid                       | 100 m by 100 m by 5 m                            | G-1       |
|                            | 100 m by 100 m by 10 m                           | G-2       |
|                            | 200 m by 200 m by 5 m                            | G-3       |
| Top structure              | 1                                                | TS-1      |
|                            | 2                                                | TS-2      |
|                            | 3                                                | TS-3      |
| Fault model                | 1                                                | FM-1      |
|                            | 2                                                | FM-2      |
|                            | 3                                                | FM-3      |
| Facies model (cutoffs)     | 0.6                                              | CO-1      |
|                            | 0.7                                              | CO-2      |
|                            | 0.8                                              | CO-3      |
| Modelling approach         | Data are provided for different possible field depositional models based on different outcrop analogues. Data from these sources could be used in the construction of the geological model. | |
| Relative permeability data | Coarse sand relperms                             | RP_0_1, RP_0_2, RP_0_3 |
|                            | Fine sand relperms                               | RP_1_1, RP_1_2, RP_1_3 |
| Simulation model           | 100 m by 100 m by 5 m                            | 100_100_5 |
|                            | 100 m by 100 m by 10 m                           | 100_100_10 |
|                            | 200 m by 200 m by 5 m                            | 200_200_5 |

Fig. 3. Poro-perm cross plot for Facies 0 in Well A and prediction model used in case studies Case 1 and Case 2. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. Relative permeability data gathered from Wells A and C for (a) fine sand (facies 1) and (b) coarse sand (facies 0). A total of 6 samples are provided, 3 for each facies type. No data are provided for the shale facies.
For each of the first four uncertainties we have developed a range of possible descriptions and provide data so the user can choose an appropriate model from a set of possible models. We have created a set of 81 simulation grids based on every combination of our predefined interpretations; a sketch of this enumeration follows below. The truth case lies in the model space of these choices, though it is not necessarily any one of these models for all the uncertainties. In addition, we provide the raw surface and fault data and wireline logs so that other simulation models and facies interpretations can be developed. For the last three uncertainties we provide the necessary data (relative permeability curves, poro-perm data and descriptions of possible depositional models) for the researcher to develop a range of possible models based on the 81 grids provided. The key uncertainties are expanded upon in a separate Appendix document provided in the download pack for this field study. All scenarios are listed in Table 1 for each uncertainty, along with the relevant file name of the simulation input data.
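Since the 81 grids are simply the Cartesian product of the four three-way choices in Table 1, enumerating scenario identifiers is straightforward. The naming scheme below just concatenates the file-name codes from Table 1 and is an assumption about how one might index the download pack, not its actual layout.

```python
from itertools import product

grids = ["G-1", "G-2", "G-3"]       # 100x100x5, 100x100x10, 200x200x5 m cells
tops = ["TS-1", "TS-2", "TS-3"]     # top structure interpretations
faults = ["FM-1", "FM-2", "FM-3"]   # fault models
cutoffs = ["CO-1", "CO-2", "CO-3"]  # facies cutoffs 0.6 / 0.7 / 0.8

scenarios = ["_".join(c) for c in product(grids, tops, faults, cutoffs)]
assert len(scenarios) == 81         # every combination of the four choices
```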
4. Test of case study

From the 81 predefined scenarios (see the Appendix/download for all possible scenarios) created for this case study, we chose two at random to attempt history matching. We history matched a set of models with different parameterisations using the RAVEN assisted history matching software (www.Epistemy.com). A single set of history match results was produced for each case study using the built-in Particle Swarm Optimisation (PSO). NA-Bayes was applied to the ensemble of models to create estimates of the posterior probability, giving P10, P50 and P90 estimates of forecast production. A comparison of the P10-P90 estimates is given at the end of the paper to show any differences due to the choice of scenario, parameterisation and grid.

4.1. Case 1: Layered model parameterisation case study

The first attempt at history matching was done using a layered model with a simple parameterisation of porosity, permeability and relative permeability. For this attempt the 200_200_5 grid was used with structural model 2. The 40 layers of the model were grouped into 8 layers of constant properties, the top 6 layers being 4 cells thick and the lower 2 layers being 8 cells thick. A constant porosity parameter was assigned to each layer, and horizontal permeability was predicted from porosity using a porosity-permeability function estimated from Well A and Well C data. Initially a constant function was applied based on the main sand facies, as this is the predominant reservoir facies. The prediction function was

$$K_h = 10^{\,16.0\,\phi - 1.0} \qquad (5)$$

Vertical permeability was parameterised by 4 multiplier parameters, one for each of 4 layers 10 cells thick. Finally, we parameterised the relative permeability using an oil-water Corey function for the entire reservoir, parameterising the Corey exponents Cw and Co and the critical water saturation Swc. A total of 15 parameters were chosen for this parameterisation; they are given in Table 2. The model was history matched to WBHP, WWPR and WOPR for all wells using RAVEN's flexi-PSO algorithm, run for a total of 500 iterations. The results are shown in Fig. 5 for a subset of the history matched wells. The sigma values for each well are based on the 15% noise value for each data type.
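To make the Case 1 parameterisation concrete, the sketch below evaluates the porosity-permeability transform of Eq. (5) (as reconstructed above) and oil-water Corey curves for a sampled parameter vector. The Corey form used (normalised saturation raised to the exponent, with unit end points) is a common convention assumed here, as the paper does not give the exact formula, and the residual oil saturation `sor` is an invented placeholder.

```python
import numpy as np

def kh_from_porosity(phi):
    """Horizontal permeability (mD) from porosity, Eq. (5): Kh = 10^(16*phi - 1)."""
    return 10.0 ** (16.0 * phi - 1.0)

def corey_krw_kro(sw, swc, cw, co, sor=0.2):
    """Oil-water Corey curves with assumed unit end-point relative permeabilities."""
    s = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalised saturation
    return s ** cw, (1.0 - s) ** co                        # krw, kro

# One sample drawn from the Table 2 prior ranges:
phi, swc, cw, co = 0.22, 0.25, 2.5, 3.0
sw = np.linspace(0.0, 1.0, 11)
krw, kro = corey_krw_kro(sw, swc, cw, co)
print(kh_from_porosity(phi))   # about 330 mD for phi = 0.22
```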
The overall match quality for this parameterisation varies between wells. As can be seen in Fig. 5, the matches for some wells (Well 7) are good for water and oil rates; however, other wells predict either earlier (Well 5) or later (Well 3) water breakthrough. Field rates are approximately correct for all parameters, as the over- and under-predictions of water breakthrough times effectively cancel out.
4.2. Case 2: Zonal parameterisation case study

Based on the results from Case 1 we identified that the behaviour was different for each well; the layered parameterisation, with equal properties along each layer, therefore provided good matches in some wells and poor matches in others. A zonal parameterisation approach was developed to break the reservoir into regions around each well. In this case study we divided the model into 9 different sectors (see Fig. 6) around each group of wells to allow more freedom to tune individual wells. Each sector is parameterised separately with an individual porosity, Kz, critical water saturation and Corey oil and water exponents. The prior ranges applied to all sectors are given in Table 3, with a total of 45 parameters to describe the model uncertainty. As before, horizontal permeability was predicted from porosity using a function based on the main sand facies; we again used the same Kh prediction function as in Case 1, given in Eq. (5) above. Vertical permeability was predicted by a field-wide constant multiplier. The same 200 m by 200 m by 5 m grid/top structure 1 combination was used again in this case study to allow a more direct comparison with Case 1.

The model was history matched to WBHP, WWPR and WOPR for all wells using RAVEN's standard PSO algorithm, run for a total of 500 iterations. History matching results are presented in Fig. 7 for Wells 7, 5 and 3A. Overall the match quality is no better than for Case 1, with adequate matches for some wells (e.g. Well 7) and poor matches in others (Well 3A). Comparing the field rates for Cases 1 and 2 shows that both approaches manage a reasonable field match and produce similar quality matches (Fig. 8). This suggests that both models are scaled appropriately for the field, though heterogeneities in the real field are not captured in these simple models and may well be the cause of the discrepancies in the individual well matches. These results from Cases 1 and 2 suggest the simple models are missing some feature of the reservoir that is important in controlling the water breakthrough in individual wells.
Table 2. Parameters and prior ranges.

| Parameter                       | Prior range |
|---------------------------------|-------------|
| Porosity (Layers 1-4)           | 0.05-0.3    |
| Porosity (Layers 5-8)           | 0.05-0.3    |
| Porosity (Layers 9-12)          | 0.05-0.3    |
| Porosity (Layers 13-16)         | 0.05-0.3    |
| Porosity (Layers 17-20)         | 0.05-0.3    |
| Porosity (Layers 21-24)         | 0.05-0.3    |
| Porosity (Layers 25-32)         | 0.05-0.3    |
| Porosity (Layers 33-40)         | 0.05-0.3    |
| PermZ (Layers 1-10)             | 0.01-1      |
| PermZ (Layers 11-20)            | 0.01-1      |
| PermZ (Layers 21-30)            | 0.01-1      |
| PermZ (Layers 31-40)            | 0.01-1      |
| Corey water exponent (Cw)       | 1-6         |
| Corey oil exponent (Co)         | 1-6         |
| Critical water saturation (Swc) | 0.1-0.5     |
Fig. 5. History matching results for Case 1—layered parameterization of the Watt reservoir.
Fig. 6. Zonal divisions around each group of production wells in the model.
Table 3. Prior ranges for each of the 9 sectors/zones of the model.

| Parameter                       | Prior range |
|---------------------------------|-------------|
| Porosity                        | 0.05-0.3    |
| PermZ                           | 0.01-1      |
| Corey water exponent (Cw)       | 1-6         |
| Corey oil exponent (Co)         | 1-6         |
| Critical water saturation (Swc) | 0.1-0.5     |
We expect this is linked to the complexity of the real field case controlling the influx of water, which cannot be expressed by simple poro-perm representations of the reservoir. Additionally, there is a potential issue around the number of runs carried out. Only a single run of 500 iterations was carried out for each study; we could therefore (a) increase the number of iterations of the models and (b) run several repeated runs with different initial seeds to allow different areas of parameter space to be explored. Alternatively, a more exploratory algorithm could be employed, such as the standard PSO algorithm used in Case 2. A comparison of misfit vs. iteration for Cases 1 and 2 (Fig. 9) shows that the significantly faster convergence of the flexi-PSO (Case 1) towards good models is countered by a very rapid refinement of the models into a local minimum. So, over and above the differences in the model grid, parameterisation and prior definition, we see differences in the results arising from the sampling algorithm behaviour. Therefore, we may well want to run different algorithms, as well as different scenarios of the model, for appropriate numbers of iterations.
Fig. 7. Production history matches for Case 2 parameterisation for Wells 7, 5 and 3A.
Fig. 8. Comparison of field rates for Case 1 (bottom) and Case 2 (top).
Overall, the performance of our trial parameterisations is limited in predicting individual well performance, and the simple engineering approach may not lead to good forecasts; the model does, however, have the capacity to match field rates reasonably well. Improved parameterisations of the field may well lead to better history matches, possibly suggesting that more geological complexity should be included in the model to better capture its behaviour.
5. Conclusions

The Watt Field case study was designed to test the influence of different parameterisations, structural models, interpretations and model building methods on history matching and uncertainty quantification.
Fig. 9. Model convergence over 500 iterations for 2 different algorithm configurations for Cases 1 (red squares) and 2 (blue diamonds). Case 1 used flexi-PSO, which produces a better overall match but is more aggressive, as illustrated in the refined view in (b). Case 2 used a more exploratory PSO configuration and therefore refines less over the same 500 iterations. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
The scenarios developed were designed to cover key uncertainties in the reservoir model and require the modeller to make decisions about the appropriate model selection. There are many choices to make in this benchmark problem and many ways in which the reservoir properties can be modelled spatially; we therefore expect future work on this field to apply a large number of modelling techniques to a range of model scenarios.

Here we have attempted only two simple, non-geological parameterisations of the reservoir. For Case 1 we attempted a layer-based parameterisation, providing a general description of the porosity and permeability in 8 defined layers. The idea here is that the model has a high NTG and sand properties are fairly constant along the reservoir. The speed of water influx into the wells from below is controlled by vertical permeability values for 4 layers. For Case 2 we zoned the model into 9 sectors, one for each group of wells, each controlled by its own porosity, Kv and relative permeability parameters. Here we saw little improvement in the overall quality of the history match, with similar match results in the wells. For both cases the reservoir complexity makes simple parameterisations of the reservoir less useful in matching to individual wells, though the field matches were adequate. Individual well rates in both engineering cases did not provide good matches to all wells, though some wells matched quite well. This is something we observe in real fields, where regional complexity in the field is not captured by the model.

The work so far on matching this model is at a very early stage. At this point we have only made a couple of attempts at matching the model and have not yet carried out any Bayesian inference on the models, this being part of a planned future attempt at this problem. Our intention for this work is to revisit the study to investigate the differences between the different scenario choices in terms of predictions and uncertainty inferences. As there are a large number of possibilities with this benchmark, we have only scratched the surface of the modelling that could be done.

The aim of this case study is to promote new techniques for interpolating between scenarios and for dealing with regridding issues, to allow more geological variability to be included in the history matching and uncertainty quantification process. We expect a range of techniques could be applied (some referenced in this paper) to account for this range of possibilities. We hope that this will prompt a number of groups to attempt this problem and to create techniques that better cover the qualitative uncertainties in geological models. Techniques such as Bayesian Model Averaging (BMA) would be suitable for this problem and provide a possible route to bringing together a number of match attempts using different benchmark scenarios and different modelling/parameterisation approaches.

The benchmark case study could also be used to guide sensitivity studies for the identification of factors influencing development decisions. A sensitivity study is a good way to evaluate the robustness of the model response; however, the plausibility of the models obtained from a sensitivity study must be evaluated. Experimental design and response surface methods conventionally used in sensitivity analysis need to be applied across multiple scenarios. Furthermore, factors with more impact in one scenario may have less impact in another, especially if their impacts are of a compensating nature. For instance, the continuity of a shale barrier between the horizontal sand layers would have a major impact on vertical flow, whereas in the absence or fragmentary presence of the shale barrier, the spatial variation of the kv/kh ratio can have the major impact. Furthermore, the accurate definition of priors for some of these qualitative geological uncertainties will be a challenge: while we can assume that all scenarios are equally likely, geological knowledge may well lead us to believe that some models are more likely than others. Assigning a qualitative, judgment-based prior probability to the model may require input from several geologists/engineers to reduce bias in the estimate.
Acknowledgements

We would like to thank the JIP sponsors of the Uncertainty Project for their support in funding the ongoing work on uncertainty at Heriot-Watt; Schlumberger and Epistemy for providing the Petrel, Eclipse and Raven software; and the Heriot-Watt IPE MSc courses and the data sponsors of those courses for providing some of the data used in this field. We greatly appreciate the review comments from Jef Caers and the other reviewer, which helped to improve the paper and tie in ideas.
Appendix A. Supporting information

Supplementary data associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.cageo.2012.09.011.
References

Arnold, D., 2009. Geological Parameterisation of Petroleum Reservoir Models for Improved Uncertainty Quantification. Ph.D. Thesis, Heriot-Watt University, 198 pp.
Bond, C.E., Gibbs, A.D., Shipton, Z.K., Jones, S., 2007. What do you think this is? Conceptual uncertainty in geoscience interpretation. GSA Today 17 (11), 4-10.
Caers, J., 2011. Modeling Uncertainty in the Earth Sciences. Wiley, p. 6.
Castro, S.A., Caers, J., Mukerji, T., 2005. The Stanford VI Reservoir. 18th Annual Report, Stanford Center for Reservoir Forecasting, Stanford University, May 2005.
Challenor, P., Tokmakian, R., 2011. Modelling future climates. In: Christie, M., Cliffe, A., Dawid, P., Senn, S. (Eds.), Simplicity, Complexity and Modelling. Wiley, pp. 69-81 (Chapter 5).
Christie, M.A., Blunt, M.J., 2001. Tenth SPE comparative solution project: a comparison of upscaling techniques. SPE Reservoir Evaluation & Engineering 4, 308-317.
Christie, M.A., Glimm, J., Grove, J.W., Higdon, D.M., Sharp, D.H., Wood-Schultz, M.M., 2005. Error analysis and simulations of complex phenomena. Los Alamos Science 29, 6-25.
Christie, M., Cliffe, A., Dawid, P., Senn, S. (Eds.), 2011. Simplicity, Complexity and Modelling. Wiley, p. 31.
Christie, M., Demyanov, V., Erbas, D., 2006. Uncertainty quantification for porous media flows. Journal of Computational Physics 217 (1), 143-158.
Demyanov, V., Christie, M., Subbey, S., 2004. Neighbourhood algorithm with geostatistical simulations for uncertainty quantification reservoir modelling: PUNQ-S3 case study. Presented at the 9th European Conference on the Mathematics of Oil Recovery (ECMOR IX), Cannes, France, September 2004.
Erbas, D., Christie, M., 2006. How does sampling strategy affect uncertainty estimations? Oil & Gas Science and Technology.
Erbas, D., Christie, M.A., 2007. Effect of sampling strategies on prediction uncertainty estimation. SPE 106229.
Evensen, G., Hove, J., Meisingset, H.C., Reiso, E., Seim, K.S., Espelid, O., 2007. Using the EnKF for assisted history matching of a North Sea reservoir model. SPE 106184-MS, SPE Reservoir Simulation Symposium, 26-28 February 2007, Houston, Texas, USA.
Floris, F.J.T., Bush, M.D., Cuypers, M., Roggero, F., Syversveen, A.R., 2001. Methods for quantifying the uncertainty of production forecasts: a comparative study. Petroleum Geoscience 7, S87-S96.
Hajizadeh, Y., Christie, M., Demyanov, V., 2009. Application of differential evolution as a new method for automatic history matching. SPE 127251, Kuwait International Petroleum Conference and Exhibition, 14-16 December 2009, Kuwait City, Kuwait.
Hajizadeh, Y., Christie, M., Demyanov, V., 2010. Comparative study of novel population-based optimization algorithms for history matching and uncertainty quantification: PUNQ-S3 revisited. SPE 136861, Abu Dhabi International Petroleum Exhibition and Conference, Abu Dhabi, UAE, 1-4 November 2010.
Kim, Ch.-S., 2002. New uncertainty measures for predicted geological properties from seismic attribute calibration. In: Wong, P., Aminzadeh, F., Nikravesh, M. (Eds.), Soft Computing for Reservoir Characterization and Modeling.
Liu, N., Betancourt, S., Oliver, D.S., 2001. Assessment of uncertainty assessment methods. SPE 71624.
Mariethoz, G., Renard, P., Caers, J., 2010. Bayesian inverse problem and optimization with iterative spatial resampling. Water Resources Research 46, W11530, 17 pp. http://dx.doi.org/10.1029/2010WR009274.
Mohamed, L., Christie, M.A., Demyanov, V., 2006. Comparison of stochastic sampling algorithms for uncertainty quantification. SPE 119139.
Mohamed, L., Christie, M., Demyanov, V., 2009. Comparison of stochastic sampling algorithms for uncertainty quantification. SPE 119139, SPE Reservoir Simulation Symposium, The Woodlands, USA, 2-4 February.
Mohamed, L., Calderhead, B., Filippone, M., Christie, M., Girolami, M., 2010a. Population MCMC methods for history matching and uncertainty quantification, B012. In: Proceedings of the 12th European Conference on the Mathematics of Oil Recovery, 6-9 September, Oxford, UK.
Mohamed, L., Christie, M., Demyanov, V., 2010b. Comparison of stochastic sampling algorithms for uncertainty quantification. SPE Journal 15 (1), 31-38. SPE-119139-PA, March 2010.
Okano, H., Pickup, G., Christie, M., Subbey, S., Sambridge, M., Monfared, H., 2005. Quantification of uncertainty in relative permeability for coarse-scale reservoir simulation. SPE Journal 1, 11. SPE 94140.
Oliver, D.S., Reynolds, A.C., Liu, N., 2008. Inverse Theory for Petroleum Reservoir Characterization and History Matching. Cambridge University Press, p. 270.
O'Sullivan, A., Christie, M., 2005. Error models for reducing history match bias. Computational Geosciences 9, 125-153.
Peters, L., Arts, B., Brouwer, G.K., et al., 2010. Results of the Brugge benchmark study for flooding optimization and history matching. SPE Reservoir Evaluation & Engineering 13 (3), 391-405.
Pickup, G.E., Valjak, M., Christie, M.A., 2008. Model complexity in reservoir simulation. Presented at the 11th European Conference on the Mathematics of Oil Recovery, Bergen, Norway, September 2008.
Rankey, E.C., Mitchell, J.C., 2003. That's why it's called interpretation: impact of horizon uncertainty on seismic attribute analysis. The Leading Edge 22, 820.
Rojas, T., Demyanov, V., Christie, M., Arnold, D., 2011. Use of geological prior information in reservoir facies modelling. In: Marschallinger, R., Zobl, F. (Eds.), Proceedings of the IAMG 2011, Salzburg, Austria, pp. 266-285.
Sambridge, M., 1999a. Geophysical inversion with a neighbourhood algorithm-I. Searching a parameter space. Geophysical Journal International 138 (2), 479-494.
Sambridge, M., 1999b. Geophysical inversion with a neighbourhood algorithm-II. Appraising the ensemble. Geophysical Journal International 138 (3), 727-746.
Sambridge, M., Gallagher, K., Jackson, A., Rickwood, P., 2006. Trans-dimensional inverse problems, model comparison and the evidence. Geophysical Journal International 167 (2), 528-542.
Scheidt, C., Caers, J., 2008. A new method for uncertainty quantification using distances and kernel methods: application to a deepwater turbidite reservoir. SPE Journal 14 (4), 680-692.
Seiler, A., Aanonsen, S.I., Evensen, G., Lia, O., 2010. An elastic grid approach for fault uncertainty modelling and updating using the Ensemble Kalman Filter. SPE 130422-MS.
Sun, A.Y., 2011. Identification of geologic fault network geometry by using a grid-based ensemble Kalman filter. Journal of Hazardous, Toxic, and Radioactive Waste 15 (4), 228.
Suzuki, S., Caumon, G., Caers, J., 2008. Dynamic data integration for structural modeling: model screening approach using a distance-based model parameterization. Computational Geosciences 12, 105-119.
Tarantola, A., 2005. Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM, p. 13.
Wood, R., Curtis, A., 2004. Geological prior information, and its applications to geoscientific problems. In: Curtis, A., Wood, R. (Eds.), Geological Prior Information: Informing Science and Engineering. Geological Society, London, Special Publications, p. 239.
Zadeh, L.A., 2002. Toward a perception-based theory of probabilistic reasoning with imprecise probabilities. Journal of Statistical Planning and Inference 105, 233-264.