A new parameterization method for data assimilation and uncertainty assessment for complex carbonate reservoir models based on cumulative distribution function


Célio Maschio, Denis José Schiozer ⁎

Center for Petroleum Studies - CEPETRO, University of Campinas - Unicamp, Campinas, SP, Brazil

ARTICLE INFO

ABSTRACT

Keywords: Reservoir simulation; History matching; Data assimilation and uncertainty assessment; Cumulative distribution function; Complex geological systems; Super-K

Data assimilation (also known as history matching) and uncertainty assessment is the process of conditioning reservoir models to dynamic data to improve their production forecast capacity. One of the main challenges of the process is the representation and updating of spatial properties in a geologically consistent way. The process is even more challenging for complex geological systems such as highly channelized reservoirs, fractured systems and super-K layered reservoirs. Therefore, especially for highly heterogeneous reservoirs, a proper parameterization scheme is crucial to ensure an effective and consistent process. This paper presents a new approach based on the cumulative distribution function (CDF) for the parameterization of complex geological models, focused on layered reservoirs with the presence of high-permeability zones (super-K). The main innovative aspect of this work is a new sampling procedure based on a cut-off frequency. The proposed method is simple to implement and, at the same time, very robust. It is able to properly represent the super-K distribution throughout the reservoir during the data assimilation process, obtaining good data matches and reducing the uncertainty in the production forecast. The new method, which preserves the prior characteristics of the model, was tested in a complex carbonate reservoir model (the UNISIM-II-H benchmark case) built from a combination of Brazilian Pre-salt characteristics and Ghawar field information available in the literature. Promising results, which indicate the robustness of the method, are shown.

1. Introduction

The process of modifying the properties (such as porosity, absolute and relative permeability, among others) of reservoir simulation models in order to reproduce (within some tolerance) the historical data is known as history matching or data assimilation. The ultimate goal of the process is to improve the predictive capacity of reservoir simulation models. The traditional way of solving the history-matching problem consists of a manual trial-and-error approach by which a matched model is sought. As discussed in the work of Subbey et al. (2002), the traditional approach can be summarized in three steps: (1) construct an initial (deterministic) reservoir model, (2) change the reservoir model properties to perform the history matching, and (3) use the adjusted model to forecast future reservoir performance. However, due to measurement errors and modeling limitations, history matching is an inverse problem with multiple solutions. Therefore, ideally, we have to be able to find all possible solutions to properly assess the uncertainty in the production forecast.



Mathematically, history matching can be formulated as a minimization or a sampling problem. Minimization involves the use of one (or more) optimization algorithms to minimize an objective function (which measures the difference between the observed data and the simulation results) in order to find one or more adjusted models. Sampling involves the use of a sampling algorithm, such as Markov chain Monte Carlo (MCMC) (Yustres et al., 2012) or the Iterative Latin Hypercube (Maschio and Schiozer, 2016), to generate posterior samples by conditioning the sampling process to the observed data. Optimization algorithms can be divided into two main groups: deterministic, which normally involve gradient-based methods (Gomez et al., 2001), and stochastic, such as simulated annealing (Kirkpatrick et al., 1983), genetic algorithms (Maschio et al., 2008, 2015) and particle swarm optimization (Mohamed et al., 2011), among others. The sampling algorithms are stochastic by nature, because they are designed to sample a probability distribution. The term deterministic means that if we run the optimization method several times (without changing anything) the final solution is always the same. On the other hand, stochastic methods do not generate the

⁎ Corresponding author. E-mail addresses: [email protected] (C. Maschio), [email protected] (D.J. Schiozer).

https://doi.org/10.1016/j.petrol.2019.106400 Received 25 March 2019; Received in revised form 31 July 2019; Accepted 17 August 2019 Available online 20 August 2019 0920-4105/ © 2019 Elsevier B.V. All rights reserved.


same solution. The difference between the optimization and sampling processes is that in the first we seek one or more history-matched models in a minimization scheme. In the second, we are concerned with how to use the observed data to reduce uncertainties in the reservoir properties and in the production forecast, obtaining samples of the posterior distribution. Because the sampling process is conditioned to the observed data, the posterior samples have smaller variance than the prior samples, i.e., models with better match quality have a higher probability of being sampled. This means that we are reducing the uncertainty of the reservoir properties. Although it is possible to find multiple solutions using optimization methods and, in a certain way, assess uncertainty in the forecasting period, stochastic processes using sampling methods offer a more adequate framework for uncertainty assessment. The efficiency and effectiveness of a history-matching process depend on two key components: (1) the method used to explore the search space (optimization or sampling) and (2) the parameterization scheme. Most of the recent research efforts have focused on one of these two components. This paper focuses on a parameterization method for data assimilation and uncertainty assessment in complex geological models, more specifically carbonate reservoirs with super-K (high permeability) zones. We propose a new sampling procedure based on a cut-off frequency which properly represents the spatial distribution of super-K in the reservoir model.

2. Motivation and objective

Promising results have been reported in the literature regarding parameterization methods for data assimilation applied in channelized reservoirs. In general, synthetic and structurally simple models have been shown. However, there is a lack of methodology for data assimilation and uncertainty assessment focused on highly layered reservoirs and applicable to complex field-scale cases. Additionally, most of the works analyzed, focused on dynamic data integration in reservoirs with super-K zones, do not deal with uncertainty quantification. Normally, they present deterministic history matching. The main objective of this paper is to present an innovative parameterization method for data assimilation and uncertainty assessment based on the cumulative distribution function (CDF), focused on complex layered reservoirs. The goal is to propose a method suitable to reproduce super-K layers, which are the most influential feature (parameter) in this kind of reservoir. The application of the proposed method to a complex benchmark case (UNISIM-II-H) is another objective of the paper.

3. Theoretical background and literature review

In the following subsections, the key subjects connected to the present work are addressed. Subsection 3.1 addresses parameterization techniques for data assimilation. Some aspects related to super-K layered reservoirs are presented in 3.2. Finally, subsection 3.3 addresses aspects related to dynamic data integration in reservoirs with super-K.

3.1. Parameterization techniques for data assimilation

Parameterization is the representation of a reservoir model with a certain number of 'parameters'. A parameter can be represented by a scalar, a table or an array of values that represent petrophysical properties that are spatially distributed in the reservoir, such as porosity and permeability. The main challenge in the process of conditioning reservoir models to dynamic data is related to the representation and updating of spatial properties in a geologically consistent way. The difficulties increase when it is necessary to adjust property values near wells (especially in cases with a high number of wells) to perform local history matching. It is not trivial to change the properties locally while maintaining the spatial continuity and correlation.

According to the scheme used to change spatial petrophysical properties, there are three main types of workflow involved in a data assimilation process. In the first (and simplest) type, the changes are applied directly to the dynamic simulation models using multipliers. When the multipliers are applied over geometrical regions of the reservoir (around a producer well, for example) to update spatial properties, the scheme is known as zonation. The zonation method is easy to apply and in general facilitates well-by-well matching. However, its main disadvantage is that it yields discontinuous rock properties at the zonation boundaries and does not preserve the geostatistical characteristics and geological consistency of the reservoir model. Another possible workflow involves the integration of the geostatistical modeling and the reservoir simulation processes in a scheme known as big loop. In this kind of workflow, a commercial software package for geological modeling is connected (automatically or not) with the reservoir simulator. Normally, the geostatistical modeling is performed on a fine grid and the spatial properties of the model are upscaled to the reservoir simulation scale. The advantage of the big loop scheme is that it enables the incorporation of geological details and also the exploration of several modern geostatistical techniques. Examples of this workflow can be found in Maschio et al. (2015), Oliveira et al. (2017) and Costa et al. (2018). The disadvantages are: (1) complexity of implementation, because commercial modeling approaches rely on complex workflows that are hard to interact with and require multiple technical disciplines interacting in the history-matching process, and (2) the high extra computational effort involved in the fine-grid modeling and upscaling processes. These two aspects make this kind of workflow difficult to apply to practical cases. According to Arnold et al. (2019), most companies still keep (for simplicity) the history-matching workflow under the control of only the engineering parameters (applied directly to the dynamic simulation models). The third scheme to update spatial properties in a history-matching process is commonly known as re-parameterization. Under this scheme, a set of initial (prior) geostatistical realizations (images) is generated using commercial geostatistical software. Then, a given technique is applied to extract the features from the initial images (also known as training images) and new images are generated. Principal component analysis (PCA), also known as the Karhunen–Loève (K-L) expansion, has gained attention as an effective re-parameterization technique and several PCA-based methods have been proposed in recent years. Sarma et al. (2008) introduced Kernel Principal Component Analysis (KPCA), which is a nonlinear form of the K-L expansion. Chen et al. (2016) proposed a workflow to integrate cumulative distribution function (CDF) mapping with PCA (CDF/PCA) for assisted history matching on a two-facies channelized reservoir. Integer variables such as facies indicators were regenerated by truncating their corresponding PCA results with thresholds that honor the fraction of each facies. Continuous variables (such as porosity and permeability) resulting from PCA were mapped in each facies according to the CDF curves of each property. They compared the proposed method with standard PCA. Vo and Durlofsky (2014) formulated the PCA as a bound-constrained optimization problem and proposed a new method referred to as O-PCA. The method was successfully applied to synthetic reservoir models with channelized features. Babaei et al. (2015a) and Babaei et al. (2015b) applied O-PCA to represent channelized permeability fields in robust optimization problems (optimization under uncertainty). Emerick (2017) presented a detailed investigation of PCA-based schemes for parameterizing channelized facies models for history matching with ensemble-based methods. He presented several implementations of PCA-based methods combined with the ensemble smoother with multiple data assimilation (ES-MDA) and obtained promising results, showing that the channelized facies were properly updated. The re-parameterization methods (third scheme) mentioned above are, in general, applied to channelized reservoirs. Typically, structurally simple synthetic models are treated in those works. In the present work, our method focuses on complex layered reservoirs with the presence of super-K.


3.2. Super-K layered reservoirs

Super-K layered reservoirs are very heterogeneous reservoirs with stratified high-permeability intervals, or super-K zones. These geological features characterize the supergiant Ghawar field in Saudi Arabia. According to Alqam et al. (2001), the term "super-K" was defined by Saudi Aramco in the 1970s to describe thin layers of exceptional flow capacity. As pointed out by Swart et al. (2005), super-K zones are able to produce more than 500 barrels per day per foot of vertical interval. They reported that for one of the analyzed wells, the super-K zones contributed more than 70% of the production. Al-Dhafeeri and Nasr-El-Din (2007) reported even higher values: in one of the wells, the high-permeability zone contributed more than 80% of the total well production. They used core data and production logging to identify and characterize the high-permeability zones. However, according to Voelker et al. (2003), super-K may form high-conductivity conduits from water injection wells to adjacent production wells, causing early water breakthrough and potentially premature abandonment. Therefore, in principle, the presence of super-K is beneficial in areas effectively under primary production. Since the fluid flow within the reservoir is extremely sensitive to the spatial distribution of super-K, the success of the data assimilation and uncertainty assessment process is highly dependent on the correct parameterization and representation of the super-K in the reservoir simulation model.

3.3. Dynamic data integration in reservoirs with super-K zones

Liu et al. (1993) presented a geostatistical method (named Indicator Conditioned Estimation) designed to honor the spatial structure inherent in the source data and to allow for the prediction of super-K streaks in areas with no well data. The authors used the proposed technique to generate initial permeability distributions for a simulation model, serving as a better starting point for the history-matching process. According to the authors, the location, magnitude and extent of the high-flow intervals are critical to adequately define permeability distributions for the reservoir simulation. Gill and Al-Zayer (2004) used observed pressure derivative signatures (from transient test data) to characterize super-K intervals in the Ghawar field. They verified that the super-K intervals are not continuous, appearing as high-permeability patches. The authors analyzed vertical and horizontal wells. Li et al. (2010) integrated high-resolution image logs with dynamic well measurements (observations from production logging tools, PLT) to detect and characterize super-permeability zones (called thief zones by the authors). Borehole images were used to recognize potential thief zones associated with either fractures or vugs. Abdel-Ghani et al. (2011) used PLT profiles to improve the history-matching results in a giant, fractured carbonate reservoir with very high permeability and super-K streaks. Guerrero et al. (2012) applied comprehensive pressure transient analyses to identify super-K zones. In one of the wells, they captured a high-flow interval with a thickness of 3–5 ft, which accounted for more than 90% of the total production, revealing the presence of super-K. To match water cut in wells, Valle et al. (1993) used some super-K configurations, dividing the modeled area into three broad geographic regions (A, B and C) with similar geologic features and well performance. In Region A, they considered sporadic super-K occurrence, because in this region the wells showed good waterflood conformance and sweep efficiency. In Region B, dolomite occurrence in cores, early observed water cut in most wells and the flow profile behaviors supported the modeling of widespread occurrence of super-K. Region C, due to uniform flow profiles and sporadic occurrence of dolomite, was modeled with the least occurrence of super-K among the three regions. The authors emphasized that the understanding of the general behavior of the reservoir is important, and they found that super-K adjustments made in one region primarily affected wells in that region, while not severely affecting wells in the other regions. Al-Dhamen et al. (1998) discussed a quick procedure to identify super-K localization by using a simulation model in conjunction with flowmeters and production data in a large elongated carbonate anticline Middle-East oil reservoir with a high level of super-K occurrences. In order to identify the area where the super-K phenomenon was most prevalent, Dogru et al. (2001) carried out an initial screening of the geological features of the Ghawar field. They chose an area that showed premature water breakthrough and irregular flood front movement. They used a multimillion-cell description to capture the premature breakthrough and the complex reservoir behavior with minimum upscaling of the original geological description. Uba et al. (2007) applied a hybrid dual-porosity, dual-permeability system to represent large-scale fractures and super-K in the simulation of a giant carbonate reservoir. The method was initially tested in a conceptual slab model containing a fracture corridor crossing the entire model and one super-K body intersecting the corridor. This model was used to bring insights for the history matching of the whole field. According to Vargas-Guzmán et al. (2009), production data can be a vital source of information for identifying the spatial distribution and regional orientation of high-permeability zones in a reservoir. The authors presented a method using streamline-based travel-time inversion modeling to integrate a prior geological model with dynamic data. Discrepancies between the flow response from the prior geological model and the measured dynamic data (available in a significant number of wells) were identified and reconciled. A detailed stratigraphic model was studied in a reservoir 3D grid with more than one million cells. The role of the geometry and the vertical distribution of high-permeability streaks in the history matching of a giant offshore carbonate reservoir was discussed by Brantferger et al. (2012). During the history matching, they performed local modifications of the high-permeability streaks in a feedback loop between the simulation engineers and the geoscientists, keeping the simulation and geological models synchronized. Liu et al. (2016) evaluated water injection pilots using inverted nine-spots with different well spacings in the Mishrif carbonate reservoirs in southern Iraq with high-permeability zones. A numerical simulation sector model was matched with production history and well pressure data. One of the objectives of the work was to improve full-field reservoir modeling through history matching of the pilots, to support the modeling of reservoir properties and upscaling methods that preserve the key features that govern waterflooding in the target reservoirs. Ghadami et al. (2015) presented a workflow to model a giant carbonate gas-condensate reservoir characterized by several heterogeneous layers. They applied hierarchical dynamic modeling, starting with single-well modeling, in which well test data were used to evaluate formation capacity, skin factor and non-Darcy effects, deduced from a match to drill stem test (DST) results. In sequence, they built a full field model. According to the authors, the application of the scaling process improved the permeability distribution and minimized the history-matching efforts.

4. Methodology

The general workflow proposed in this work is shown in Fig. 1 and consists of an iterative process that generates a new set of images at each iteration. The steps of the methodology are described in sequence; the main contributions of the paper are in Steps 4 to 7, and a specific procedure to treat super-K is proposed in Step 6.

4.1. Step 1: generate prior images


Fig. 1. General workflow of the proposed methodology.

Before applying the approach proposed in this paper, it is necessary to generate a set of prior realizations (or images) using any geostatistical modeling tool. Each realization is composed of a set of spatial properties. The prior realizations can be generated by any geological-modeling method normally available in commercial packages, such as Sequential Gaussian Simulation (SGS), multipoint geostatistics or object-based modeling. The prior models should have enough variability to encompass, in terms of reservoir output, the historical data. If the production curves (oil and water well rates, for example) generated by the prior models are biased with respect to the observed data for the majority of the wells, the prior modeling process should be revised to correct any possible problems. Sometimes it may be necessary to re-interpret some geological data or to revisit some premises. This is a multidisciplinary step that normally involves intense interaction among geophysicists, geologists and reservoir engineers, which makes it difficult to automate the process. The quality of the prior images may be assessed by visual inspection of the simulated curves (compared with the observed data) or by using a numerical quality indicator (see NQDS in the next subsection) to help the analysis.

4.2. Step 2: run flow simulation and compute objective function (OF)

This step consists in performing the flow simulations corresponding to a given set of images (each image corresponds to a reservoir model). After the simulation, an objective function is computed. The objective function used in this paper is based on a normalized quadratic distance (NQD) computed according to Eq. (1):

NQD = QD / AQD                                         (1)

where

QD = Σ_{i=1}^{Nobs} (Sim_i − Hist_i)²                  (2)

and

AQD = Σ_{i=1}^{Nobs} (Tol × Hist_i + Cp)²              (3)

where Tol is a tolerance given by a percentage of the observed data (Hist) and Cp is a constant used to prevent division by zero, when a data series has observed data equal to zero, or to avoid excessive weight when the observed data are close to zero, which can occur with the produced water rate. Sim is the simulated result and Nobs is the number of observed data points for a given data series (water rate in a producer well, for example). Equations (1)–(3) return one value of NQD for each data series. The tolerance (represented by Tol and Cp) expresses the reliability (or the uncertainty) of the observed data and is used to prevent excessive weight, that is, to avoid the observed data exerting an excessive influence on the reservoir model conditioning process. In other words, as the observed data are uncertain, they should not be strictly assimilated. The tolerance can also be used to assign different weights to different data series. For diagnostic purposes, for instance, to analyze well behavior individually and to verify whether the solutions are biased or not, it is also interesting to know the position of the simulated curve with respect to the observed data. Thus, the NQDS (normalized quadratic distance with sign) is defined according to Eq. (4):

NQDS = (LD / |LD|) NQD                                 (4)

where

LD = Σ_{i=1}^{Nobs} (Sim_i − Hist_i)                   (5)
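For readers who prefer to follow the objective function in code, the short sketch below reproduces Eqs. (1)–(5) for a single data series. It is a minimal illustration, not the authors' implementation; the array names (sim, hist) and the function name are hypothetical.

```python
import numpy as np

def nqd_nqds(sim, hist, tol, cp):
    """Normalized quadratic distance (NQD) and its signed version (NQDS)
    for one data series, following Eqs. (1)-(5)."""
    sim = np.asarray(sim, dtype=float)
    hist = np.asarray(hist, dtype=float)
    qd = np.sum((sim - hist) ** 2)            # Eq. (2)
    aqd = np.sum((tol * hist + cp) ** 2)      # Eq. (3): acceptable quadratic distance
    nqd = qd / aqd                            # Eq. (1)
    ld = np.sum(sim - hist)                   # Eq. (5): signed deviation
    nqds = np.sign(ld) * nqd                  # Eq. (4)
    return nqd, nqds

# Hypothetical usage for the water rate series of a producer well,
# with Tol = 0.1 and Cp = 5 m3/d as in Table 1:
# nqd, nqds = nqd_nqds(sim_qw, hist_qw, tol=0.1, cp=5.0)
```

One NQD/NQDS pair is obtained per data series (per well and per variable), which is then compared against the selection cut-offs described in the next step.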


4.3. Step 3: selection of images by region

In this step, the reservoir model is divided into flux regions. For each region, a set of best images is selected based on a cut-off value (NQDR) applied to all OFs corresponding to the wells belonging to that region. If the number of selected images (n) is lower than a pre-defined minimum (nmin), NQDR is increased until n is equal to nmin. The reason for selecting images per region is the difficulty of finding, from a set generated a priori, images that match all OFs simultaneously. In general, a given image may be good, in terms of history-matching quality, for some wells in one region but not for wells in another region. This happens because the generation of the prior images (static modeling) is not constrained to the observed dynamic data.

4.4. Step 4: generate a CDF map based on the selected images

This step consists in generating a CDF map, which means that a cumulative distribution curve is generated for each grid block based on the set of images selected in the previous step. To illustrate the procedure, consider the hypothetical reservoir divided into two regions (R1 and R2) shown in Fig. 2. Suppose that 15 images were generated a priori (using a geostatistical modeling software). Suppose also that for Region 1, seven images (1, 3, 5, 8, 10, 12, 15) were selected according to Step 3. For each block pertaining to Region 1, a CDF curve is generated based on these seven images. Obviously, this number of images is insufficient in practical cases and is only used here for illustration purposes. The same procedure is applied for Region 2 using another set of selected images (2, 3, 5, 7, 9, 11, 13, 14). Note that images 3 and 5 are repeated in the set of images selected for Region 2, and that images 4 and 6 do not appear in either set of selected images. This means that images 4 and 6 are not good for any OF of either region and are therefore not selected. Two blocks are highlighted in the two regions and the corresponding theoretical CDFs are illustrated on the right part of Fig. 2. Note that each CDF is associated with a grid block.

Fig. 2. Illustration of the procedure used to generate the CDF map.

4.5. Step 5: generate frequency map

For each grid block, compute the frequency (f_i) of values of horizontal permeability (kx) higher than a cut-off value (kx_sk) above which a given block is considered a super-K block, according to the following equation:

f_i = n_i / NSI                                        (6)

where n_i is the number of images for which block i pertains to a super-K and NSI is the number of images selected for the corresponding region. Consider, for illustration purposes, the example in Fig. 3, where there are 5 selected images and the green region in each image represents a super-K. The block highlighted in the figure pertains to a super-K region in images 1, 2 and 5 (3 out of the 5 images), meaning that for that block f is equal to 0.6 (or 60%). The theoretical CDF for the block highlighted in Fig. 3 is illustrated in Fig. 4a.

Fig. 3. Illustration of the procedure proposed to generate the frequency map based on selected images of kx.

4.6. Step 6: generate new images of kx based on a cut-off value of f

The main innovative aspect of this work is focused on this step, where we propose a new sampling procedure based on a cut-off frequency (fc), used as a criterion to draw a new value of kx from the CDF generated in Step 4. A new image of kx is therefore generated by sampling, for each grid block i, a new value (kx_i^new) according to the following rule:

if f ≥ fc, draw kx_i^new from [kx_sk, kx_i^max], i.e., kx_sk ≤ kx_i^new ≤ kx_i^max
if f < fc, draw kx_i^new from [kx_i^min, kx_sk], i.e., kx_i^min ≤ kx_i^new < kx_sk

where kx_i^max and kx_i^min are the maximum and minimum kx values for block i. Fig. 4b illustrates the process. Note that the proposed procedure respects the limits established by the prior images. fc is an input parameter (chosen by the user of the methodology) which reflects how likely it is that a block will be part of a super-K in the new image. For example, if fc = 0.5, at least 50% of the selected images must have super-K in the block considered. Considering the example in Fig. 2, let f = 0.4 for the block b1 highlighted in Region R1. If, for example, fc = 0.3, the value of kx in the new image will be between kx_sk and kx_i^max, that is, the new value will correspond to a super-K block in the new image. On the other hand, if fc = 0.5, the value of kx in the new image will be between kx_i^min and kx_sk, meaning that the block will not belong to a super-K. The procedure proposed in this step is applied to prevent super-K discontinuities, as discussed later in subsection 6.1. In theory, fc could range from 0 to 1. As there is no pre-determined value for fc, the approach of this paper is to treat it as an uncertain parameter, defining a probability distribution function (PDF) for it. The choice of the range of fc is analyzed in subsection 6.2.1. In each iteration, we update the PDF of the parameter fc using the IDLHC (Iterative Discrete Latin Hypercube) method proposed by Maschio and Schiozer (2016). A basic description of the IDLHC method is given in Appendix A. As we combine the CDF with the cut-off frequency (fc), the proposed method to generate kx images is identified by the acronym CDFFC. In each iteration, the update of the PDF of the parameter fc is composed of the following sub-steps (for additional details on the PDF update using IDLHC, see Appendix A):

1) For each region, select n images (see Step 3). Each region has a corresponding vector of fc values: fc1 of length n1 (number of images selected for Region 1), fc2 of length n2 (number of images selected for Region 2), ..., fcr of length nr (number of images selected for Region r), where r is the number of regions.


Fig. 4. (a) Illustration of f computation, (b) sampling according to a super-K cut-off value.

2) Based on the n1 images selected for Region 1, generate a histogram of fc1 (using the n1 values of fc1) and compute the new probabilities of each uncertain level. Each uncertain level corresponds to a bin in the histogram (see Fig. A1 in Appendix A).
3) Repeat sub-step 2 for the other regions. The new PDF is used to generate new values of fc in the next iteration.
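To make Steps 5 and 6 concrete, the sketch below builds the per-block frequency f_i from a set of selected kx images and then samples a new kx image using the cut-off frequency fc. It is a simplified illustration under assumed data structures (each image flattened to a 1D array of blocks), not the authors' code; names such as kx_selected and the use of a uniform draw are assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def frequency_map(kx_selected, kx_sk):
    """Step 5 (Eq. 6): fraction of selected images in which each block is super-K.
    kx_selected: array (NSI, n_blocks) with kx values of the selected images."""
    return np.mean(kx_selected >= kx_sk, axis=0)

def sample_new_kx(kx_selected, kx_sk, fc):
    """Step 6: per-block sampling between the prior bounds, split by kx_sk.
    The draw inside each interval is uniform here, as a stand-in for sampling
    the per-block empirical CDF restricted to that interval."""
    f = frequency_map(kx_selected, kx_sk)
    kx_min = kx_selected.min(axis=0)
    kx_max = kx_selected.max(axis=0)
    # Blocks with f >= fc become super-K: draw in [kx_sk, kx_max];
    # otherwise draw in [kx_min, kx_sk].
    lo = np.where(f >= fc, np.maximum(kx_sk, kx_min), kx_min)
    hi = np.where(f >= fc, kx_max, np.minimum(kx_sk, kx_max))
    return rng.uniform(lo, hi)

# Hypothetical usage: 50 selected images, 800 mD super-K cut-off, fc = 0.3
# kx_new = sample_new_kx(kx_selected, kx_sk=800.0, fc=0.3)
```

Step 7 applies the same idea to the other properties, drawing each block value between the minimum and maximum observed over the selected images for that block.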

4.7. Step 7: generate new images for the other properties

For the other properties (except kx), the new value of each property (ppt_i^new) for the grid block i is generated according to the following criterion:

draw ppt_i^new from [ppt_i^min, ppt_i^max]

where ppt_i^min and ppt_i^max are the minimum and maximum values of the given property for block i. Thus, for each property, the algorithm scans the grid assigning one value to each grid block. After generating the complete set of images (Steps 6 and 7), return to Step 2 to run the flow simulations and to compute the objective function. The number of flow simulations at each iteration is equal to the number of images (defined by the user) to be generated at each iteration. The stopping criterion is the number of iterations, which may depend on the objectives and on the case studied. In general, the process can be stopped when a reasonable quantity of models (matched according to some quality criterion) is obtained to perform the production forecast under uncertainty. Note that Step 7 is applied independently of Steps 5 and 6, that is, the other properties are generated independently of kx. The property kx follows the specific rules described in subsections 4.5 (Step 5) and 4.6 (Step 6) to model the super-K. It is important to highlight that this independence is not an issue in terms of consistency. Suppose that there is a correlation between the permeability and the porosity. If the set of selected images (Step 3) has a trend of high permeability (and consequently high porosity, assuming that they are correlated) in a given region, the CDF curves generated from these images capture the trend, in such a way that the new images preserve this trend. It is important to highlight that the selected images take into account the history-match quality. Therefore, new images are generated by sampling new values conditioned to the match quality, such that the minimum and maximum values of a given property are constrained by the selected (matched) images. See an example of this, with additional explanation, at the end of subsection 6.1.

5. Case studied (UNISIM-II-H)

The proposed method was applied to the UNISIM-II-H benchmark case (UNISIM, 2018), a developed field with 11 producer wells and 9 water injector wells and 3257 days of history. This benchmark is a complex carbonate reservoir model, built based on a combination of Brazilian Pre-salt characteristics and Ghawar field information available in the literature. Before the creation of UNISIM-II-H, a fine-grid reference model (UNISIM-II-R) was created to serve as the real field. The reference geological model was built on a fine grid in order to ensure a high level of geological detail. It has a horizontal grid cell size of 50 × 50 m and approximately 1 m in the vertical direction. The vertical refinement is suitable to include thin zones (super-K). The reference model is characterized by four facies derived from different geological environments: High Energy (grainstone facies), Medium Energy (packstone facies), Low Energy with non-reservoir facies, and super-K. The super-K flow units, whose sizes are on the order of 1000 m in the horizontal direction and 2 m in the vertical direction, are stochastically distributed using an object-based modeling approach. Due to their post-depositional genesis related to diagenetic events, the super-K are modeled separately from the other facies. Apart from the super-K, the presence of fractures was also assumed, following trends observed in the Ghawar field. The fracture intensity is inversely proportional to the distance to the faults. More details about the reference model can be found in Correia et al. (2015). UNISIM-II-H represents a developed field. Firstly, a model representing an initial stage of field development (UNISIM-II-D) was created using the information of three exploration wells. These wells were used to build a geological model, assuming that the structural characteristics of the field were mapped from seismic data. After an upscaling process, the simulation model (UNISIM-II-D) was created by defining a grid cell size of 100 × 100 × 8 m and a corner-point grid with 46 × 69 × 8 cells in the x, y and z directions, respectively. This model was used to define a viable production and injection strategy composed of 11 producer and 9 injector wells. Well log information for these 20 wells was collected from the reference model. Using the same structural model of UNISIM-II-D and the well log information from the 20 wells, another geological model was built. After the upscaling process, the structural (same grid as UNISIM-II-D) and geostatistical model for UNISIM-II-H was created. UNISIM-II-R was simulated with the 20 wells defined for UNISIM-II-H and a history of 3257 days was generated. During the historical period, the producer wells are controlled by liquid rate and the injector wells are controlled by water injection rate. Using a commercial geological modeling software, 500 geostatistical realizations were generated. Each realization is composed of 13 spatial properties: 7 for the fractures (porosity, permeability in the x, y and z directions and fracture spacing in the x, y and z directions) and 6 for the matrix (porosity, permeability in the x, y and z directions, net-to-gross and facies indicator). For the matrix, the permeabilities in the x (kx) and y (ky) directions are very similar; thus, for practical purposes, we considered ky equal to kx.



Fig. 5. Images of UNISIM-II-H showing a horizontal (K) layer and two cross sections highlighting the super-K distributions.

As described in Correia et al. (2015), the facies indicator (rock type) is defined by applying a cutoff procedure. For matrix permeability below 800 mD, rock type 1 is assigned; otherwise, rock type 2 is assigned. For rock type 2 (super-K), the relative permeability curves are represented by two straight lines (based on Romm, 1966) with endpoints at zero and 100% saturation, the same curves normally used for fractures in flow simulation (Correia and Schiozer, 2018). For rock type 1 (matrix background), a typical relative permeability curve was used. Fig. 5 shows a horizontal (K) layer and two cross sections of UNISIM-II-H, highlighting the super-K distributions. It is important to highlight that the methodology used to create the benchmark enables the reproduction of the workflow, and also of the challenges, of a real case. Table 1 shows the tolerances (Tol and Cp) defined to compute the AQD (acceptable quadratic distance, Eq. (3)) used to compute the NQDS.

Table 1
Tolerances used to compute AQD (acceptable quadratic distance, Eq. (3)).

Function | Tol | Cp
Ql | 0.05 | 0
Qo | 0.1 | 0
Qw (PROD1, PROD2, PROD4, PROD6, PROD7) | 0.1 | 5 m3/d
Qw (PROD8, PROD9, PROD10) | 0.1 | 10 m3/d
Qw (WILDCAT, PROD3, PROD5) | 0.1 | 20 m3/d
Qwi | 0.05 | 0
BHP | 0.1 | 0

6. Results

Subsection 6.1 presents an analysis of the parameter fc as well as an assessment of the overall quality of the images generated with the proposed method. The results of the data assimilation and uncertainty assessment process using the proposed method are presented in subsection 6.2.

6.1. Analysis of fc and overall image consistency generated with the proposed method

Fig. 6a and b show the effect of using the proposed sampling procedure based on the frequency cut-off value (fc). Fig. 6a shows an image generated without using fc. This means that, for each grid block i, the new values of kx are sampled from the whole CDF curve, i.e., the new values of kx may assume any value between the minimum (kx_i^min) and the maximum (kx_i^max) values of kx for the block i (schematically shown in Fig. 4b). Clearly, it is possible to see that the image generated using the proposed criterion based on fc (Fig. 6b) preserves the super-K continuity, avoiding the pixelated effect observed in Fig. 6a. The consistency of our method, regarding the prior characteristics of the model, can be verified by comparing Fig. 6b and c. Fig. 6c shows one of the prior images generated in a geostatistical modeling software and Fig. 6b shows one of the images generated using the proposed methodology. Note that both images have similar characteristics. Additionally, Fig. 6d and e show cross sections depicting the facies indicator (rock type) for the same model shown in Fig. 6c (one of the prior images) and for the model shown in Fig. 6b (generated with the proposed method). Clearly, it is possible to observe that the super-K (indicated in red) have similar characteristics (in terms of size and distribution) in both models. Additionally, Fig. 7 shows that the continuity of super-K at the boundary of contiguous regions is maintained, as indicated by the red ellipses. Fig. 8a and b present two images of kx comparing one of the prior images (a) and one of the images generated with the proposed method (b). Firstly, it is possible to note that the image generated by the proposed method has similar characteristics when compared with the prior image. Note that the histograms corresponding to the prior image (c) and to the proposed method (d) are practically identical. This shows that the proposed method is consistent and preserves the characteristics of the prior model, in this case, a bi-modal distribution. Fig. 9 shows a comparison of fracture permeability (kfx) for a prior image (a) and an image generated with the proposed methodology according to Step 7 (b). The image generated with the proposed method has similar characteristics compared to the prior image. Note that the proposed method preserves the trends of high permeability in the regions close to the faults. These trends occur because the intensity of fractures is inversely proportional to the distance to the faults. Fig. 10 shows an example of porosity and permeability images generated with the proposed method, showing that the trend between the properties (in the matrix background) is maintained. The red ellipses highlight high-porosity/high-permeability trends and the black ellipse highlights a low-porosity/low-permeability trend. To show how the proposed method preserves the trends, cumulative probability curves of porosity and permeability were built for two blocks (blocks 1 and 2, depicted in the porosity image in Fig. 10). The red curves represent the 500 prior images. The blue curves correspond to the set of selected images (based on the match quality) in the first iteration. Note that for block 1 (which belongs to a region of low-porosity/low permeability



Fig. 6. (a) Image generated without using fc , (b) image generated using fc , (c) image generated in a geostatistical modeling software, (d) and (e) are cross sections showing rock type indicator for a prior (d) and for a model generated with the proposed method (e).

trend) the cumulative probability curves for permeability and porosity are in a range of low values. Also note that for the block 2 (which belongs to a region of high-porosity/high permeability) the cumulative probability curves for permeability and porosity are in a range of high

values. This happens because all properties (porosity, permeability, etc.) are selected together within a given image. Therefore, if in a given set of selected images the permeability is high in certain regions, and considering that there is a correlation between porosity and permeability in

Fig. 7. Images showing the continuity of super-K in the boundary of contiguous regions indicated by the red ellipses. The colors in the left image are only used to differentiate the regions domain. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)


Fig. 8. Horizontal permeability (kx) and ln(kx) histograms comparing: prior (a and c) and the proposed method (b and d).

the matrix background, the porosity is also high, and vice versa.

6.2. Application of the proposed method for data assimilation of UNISIM-II-H

As shown in Fig. 7 (left image), we defined 9 regions according to the well positions, in such a way that each region contains one injector. In this way, we capture the most important regions in terms of flow behavior (see Appendix B for more details about the definition of the regions). For each region, fc was defined as an uncertain attribute that controls the generation of new kx images (according to Step 6 of the methodology). The other properties are generated according to Step 7 of the methodology. Four iterations of the IDLHC method were carried out with 500 images generated at each iteration. Therefore, the whole process took 2500 reservoir simulations (2000 corresponding to the four iterations plus 500 corresponding to the prior images). In each iteration, the PDF of fc was updated.

6.2.1. Choice of the range of fc

The definition of the ranges of fc was based on the frequency map shown in Fig. 11. Note that just a few blocks have a percentage greater than 50%. This means that if fc is set to a value greater than 50%, practically no super-K will appear in the model. On the other hand, if fc is set to a very low value (close to zero), the quantity and dimension of the super-K may be too high. Therefore, we chose 5% as the minimum value and 50% as the maximum value for fc. This range was divided

Fig. 9. Comparison of fracture permeability (kfx) for a prior image (a) and an image generated with the proposed methodology (b).


Fig. 10. Example of porosity and permeability showing trends between the properties and cumulative probability curves showing the equivalence of the trends between porosity and permeability in the selected images.

into 20 intervals (a discrete probability distribution) and the initial distribution of fc was assumed to be uniform.

Fig. 11. Frequency map obtained from the images selected from the set of 500 prior realizations (truncated to 50%).

6.2.2. History matching quality

The distribution of NQDS related to water rate, oil rate and bottom-hole pressure for all producer wells and bottom-hole pressure for all injector wells is shown in Fig. 12 in the form of boxplots. Gray and blue boxes represent the prior and the posterior NQDS distributions, respectively. The top line of each box corresponds to the percentile P75 and the bottom line corresponds to the percentile P25. The lines inside the boxes represent the median. The red crosses are outliers according to the following criterion:

NQDS < [P25 − 1.5(P75 − P25)]   or   NQDS > [P75 + 1.5(P75 − P25)]              (7)

The first observation is that the produced water rate is the most critical function. Note the high amplitude (difference between P75 and P25) of the prior NQDS values for this function. The second observation is that, after the application of the proposed method, there was a significant reduction in the NQDS amplitude for the majority of the functions. The water rate, the most critical, is well matched for practically all producer wells. Compare the size of the boxes for the prior and posterior and note that most blue boxes (posterior) are centered around the zero line. Fig. 13 shows water rate curves for two wells (PROD2 and PROD3) comparing the prior and posterior models. Good data matches can be seen in these plots. As shown in Fig. 12 (NQDS plot), PROD6 is the most critical well, mainly for water rate. Fig. 14a shows water rate curves for this well. Note that the well has a strong tendency to produce less water than the historical data. In fact, this well is very challenging to match. This can be explained with the aid of Fig. 14b–d. Fig. 14b shows a cross section of the model (kx) highlighting the position of the wells PROD6 and INJ6 (see the red ellipse). Note that there is only one grid block sequence (in layer 19) connecting the injector and the producer, which makes it very difficult for the algorithm to find the exact position of super-K needed to adjust the water front (the water moves preferentially through the super-K). It is also possible to observe that the percentage of super-K (computed from the prior images) in the region connecting INJ6 and PROD6 (layer 19) is extremely low (Fig. 14c), meaning that the probability of a super-K appearing in this region is very low. Fig. 14d shows, on the other hand, a layer of the model (kfx) highlighting the position of the wells PROD6 and INJ6 (see the black ellipse). The white region indicates that there is no fracture between INJ6 and PROD6. This forces the displacement of the water (injected by INJ6) through the fractures concentrated in the northern part (above INJ6). Fig. 15 shows plots of the number of models vs. different values of NQD


Fig. 12. Distribution of NQDS related to water rate, oil rate and bottom-hole pressure for all producer wells and injected water rate for all injector wells. Gray represents the prior and blue represents the posterior (fourth iteration) models. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

for the same functions shown in Fig. 12. In these plots, the more the curves tend to the left, the better the results. An important improvement of the results can be seen from the prior to the fourth iteration, mainly for water rate. For example, in the fourth iteration there are 235 models for which the NQD of water rate is lower than or equal to 10 for all producer wells simultaneously (excluding PROD6). For the second iteration there are only 19 such models, and for the prior there is no model satisfying this value of NQD.
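The model counts quoted above come from a simple filtering of the per-well NQD values. A minimal sketch of that filtering is shown below; the variable names (an array nqd_per_well of shape (n_models, n_wells)) and the excluded-well index are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def count_models_within(nqd_per_well, threshold=10.0, exclude=()):
    """Count models whose NQD is <= threshold for all wells simultaneously.
    nqd_per_well: array (n_models, n_wells); exclude: indices of wells to ignore."""
    keep = np.ones(nqd_per_well.shape[1], dtype=bool)
    keep[list(exclude)] = False
    ok = np.all(nqd_per_well[:, keep] <= threshold, axis=1)
    return int(np.sum(ok))

# Hypothetical usage: water-rate NQD of the fourth iteration, excluding PROD6
# (here assumed to be column index 5):
# n_good = count_models_within(nqd_water_iter4, threshold=10.0, exclude=(5,))
```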

6.2.3. Analysis of the final images

One important aspect of a history-matching process is to assess whether the method respects the well data and the prior characteristics of the reservoir model. In Fig. 16, we show cross section maps (in the I and J directions) of f (fprior − fpost). To build these maps, a frequency map was generated using the 500 prior kx images (fprior) and another frequency map was generated using the 500 posterior kx images (fpost). Then, the difference was obtained by subtracting one map (fprior) from the other (fpost). Observe that in the vicinity of the wells, the difference is between −2% and 2% (gray regions), which clearly indicates that the proposed method preserves the trends close to the wells, respecting the well data. It is worth mentioning that the contrast between the gray and yellow regions in Fig. 16 does not represent discontinuities generated by the proposed method. It is a scale effect due to the range of the gray and yellow intervals. Fig. 17 shows the probability distributions of fc for Regions 8 and 9. Region 9 contains the producer PROD2 and Region 8 contains the producer PROD3. Fig. 18 shows, for two layers, the frequency map generated from the prior models, where Regions 8 and 9 are highlighted. Note that the percentage of super-K in Region 8 is higher than in Region 9, which is in agreement with Fig. 17. Now, observe the behavior of PROD2 and PROD3 in Fig. 12 regarding the NQDS of water rate. Note that for PROD3, all prior models produce much more water than the historical data, and for PROD2 the majority of the prior models produce much less water than the historical data. Fig. 17 suggests that in Region 8 the quantity (and size) of super-K should be reduced (keeping in mind that the higher fc, the less likely a block belongs to a super-K), which is in agreement with the behavior of PROD3, since super-K is the most influential parameter on the water production. Therefore, reducing the quantity of super-K in the vicinity of Region 8 improves the match of PROD3, as observed in Fig. 12. The opposite reasoning can be applied for PROD2 and Region 9, i.e., the quantity of super-K in the vicinity of Region 9 should be increased to match PROD2. This is in agreement with the PDF of fc for Region 9 in Fig. 17. Fig. 19 shows maps of f (the same as Fig. 16) for two layers, highlighting Regions 8 and 9. Note that the percentage of super-K in Region 8 decreased with respect to the prior models, while in Region 9

Fig. 13. Water rate curves for two wells comparing the prior and posterior models.


Fig. 14. Water rate curves for PROD6 (a), details of the model showing the region between INJ6 and PROD6 (b, c and d).

the percentage increased. Compare Fig. 6d and e and observe the part of the reservoir to the left of PROD3 (which corresponds to Region 8). The posterior model (Fig. 6e) has much less super-K than the prior model (Fig. 6d), which is in agreement with the previous analysis. Observe that the transition between Regions 8 and 9 is similar in Figs. 18 and 19, i.e., the proposed method did not create discontinuities between these regions. It correctly inverted the trend, increasing the frequency of super-K in Region 9 and decreasing it in Region 8. Fig. 20 shows cross sections depicting the rock type and the water saturation front (3257 days) for one of the prior models (a and c) and for one of the posterior models (b and d). It is possible to observe that in the

prior model, the water front does not reach the producer PROD2, while in the posterior model the water reaches PROD2 due to the presence of more super-K. A comparison of the water rate for these models is depicted in Fig. 20e, which shows that PROD2 is well matched in the posterior model and very far from the historical data in the prior model. The analyses presented in this subsection show that the proposed method automatically adjusts the distribution and size of the super-K, enabling proper data assimilation.

6.2.4. Production forecast analysis

The primary goal of the data assimilation process is to improve the

Fig. 15. Number of models vs. NQD comparing prior, Iter 2 and Iter 4.


Fig. 16. Cross section maps of fprior − fpost.

Fig. 17. Probability distributions of fc for Regions 8 and 9.

predictive capacity of reservoir simulation models and increase the confidence in the forecasted reservoir behavior. Additionally, it is important to show how the observed data enable the reduction of uncertainty in the production forecast. The best 47 models (in terms of match quality) were filtered using

the NQDS. These 47 models were extrapolated until 10957 days. Fig. 21 shows oil and water rate for the field for the prior models (before data assimilation) and for the posterior filtered models (after data assimilation). The red line in the forecast period corresponds to the reference case (UNISIM-II-R). Note that the posterior models encompass the

Fig. 18. Frequency map generated from the prior models.


Fig. 19. Maps of fprior − fpost for two layers of the reservoir model.

Fig. 20. Cross sections depicting rock type and water saturation front (3257 days) for one of the prior models (a and c) and for one of the posterior models (b and d). Water rate for PROD2 is shown in (e).

reference over the entire forecast period. The plateau observed in the water rate curves corresponds to the water-processing capacity of the platform system defined for the case. Fig. 22 shows the analysis of the oil rate for 4 producer wells. It is possible to see that the posterior models satisfactorily capture the reference model. The deviation observed in some wells (specifically at the beginning of the forecast) is caused by

some problems of well productivity that arise when we change the well controls from the end of the historical period to the beginning of the forecast. Despite this deviation observed at the beginning of the forecast, we can see that the reference is contained in the set of solutions for a considerable part of the forecast period. Due to the high complexity of the case studied in this paper, this is a good result.

Fig. 21. Comparison of oil (Qo) and water (Qw) field production rates resulting from the prior models (Prior) and from the selected models (Post) in the history and forecast periods. The red points are the observed data and the red line is the forecast of the reference solution. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)



Fig. 22. Comparison of oil rate (Qo) for 4 wells resulting from the prior models (Prior) and from the selected models (Post) in the history and forecast periods. The red points are the observed data and the red line is the forecast of the reference solution. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

Cumulative distribution curves showing the reduction of uncertainty in cumulative oil production (Np) at two times in the forecast period are depicted in Fig. 23. Note that there is a significant reduction of variability from the prior to the posterior models. It is also possible to see that the posterior models properly encompass the reference model. To summarize, the following points are worth mentioning: (1) as discussed in subsection 3.1, the integration of a geostatistical modeling software in the data assimilation process is complex and time consuming. For the case studied in this paper (UNISIM-II-H), the generation of the 500 prior realizations in a commercial software took approximately 3 days. Using our method, it took approximately 3 h to generate 500 realizations, preserving the prior characteristics. A personal computer (PC) with an Intel Core i7 3.4 GHz and 16 GB of memory (RAM) was used to generate the realizations. Another advantage is that we control the generation of the realizations using our own algorithm, while with commercial software one has to use only the available methods; (2) to capture the main flow regions, we divided the reservoir into sectors in such a way that each sector contains one injector, but other alternatives, such as the use of streamlines, may be studied in future works; (3) the application of the method proposed in this paper to

other kind of geology, such as channelized reservoirs, may be addressed in future works. 7. Conclusions An innovative parameterization method for data assimilation and uncertainty assessment was proposed, based on cumulative distribution function (CDF) focused on complex layered reservoirs. We proposed a new sampling procedure based on a cut-off frequency which properly represents the spatial distribution of super-K in the reservoir model. The proposed method is very simple and easy to implement. The robustness of the method was tested in a complex carbonate reservoir model (UNISIM-II-H) with the presence of super-K layers and promising results were obtained. We highlight the following specific findings: 1) The results showed that the new method preserves the prior characteristics of the model. It captured the bi-modal behavior of the permeability distribution, honoring the prior modeling and properly representing the spatial distribution of the super-K layers. 2) The proposed method preserves the trends close to the wells,

Fig. 23. Cumulative distribution curves showing the reduction of uncertainty in cumulative oil production (Np) at two times in the forecast period.



3) Proper data assimilation was possible with the parameterization method proposed in this paper. Good data matches were obtained, and the uncertainty in the production forecast was reduced in a consistent way. The filtered models obtained after data assimilation encompassed the reference model.

Acknowledgements

This work was conducted with the support of Petrobras (Grant Agreement No. 0050.0100204.16.9) and Energi Simulation within the ANP R&D tax as "commitment to research and development investments". The authors are grateful for the support of the Center of Petroleum Studies (CEPETRO-UNICAMP/Brazil), the Department of Energy (DE-FEM-UNICAMP/Brazil) and the Research Group in Reservoir Simulation and Management (UNISIM-UNICAMP/Brazil). In addition, a special thanks to CMG and Schlumberger Information Solutions for software licenses.

Appendix A. Iterative Discrete Latin Hypercube

The Iterative Discrete Latin Hypercube (IDLHC), proposed by Maschio and Schiozer (2016), is an iterative sampling process based on the Latin Hypercube. In the IDLHC, discretized probability density functions (PDF) are used. The algorithm starts by generating N combinations from the prior PDFs (an independent prior PDF is defined for each attribute), in such a way that each combination generates a simulation model. Then, the N models are simulated and the objective function is calculated. Next, a set of best models is selected based on the match quality (NQDS). In this paper, we select the models based on the NQDS corresponding to each region. Thus, the selection is made by taking into account one attribute at a time, and the number of models selected for each attribute depends on the match quality of the corresponding region. After selecting the models, a histogram is generated (using the discrete levels of the attribute) and a new PDF is built such that the new probability of each discrete level is directly proportional to the number of observations (frequency) of that level. To illustrate the idea, suppose that, for the attribute fc1, 100 models are selected and that the number of discrete levels is 5 (Fig. A1). Let yh = [10, 25, 30, 20, 15] be the vector containing the frequency of each level. As illustrated in Fig. A1a, the new probability vector (one value for each level) for this attribute is pnew = [0.10, 0.25, 0.30, 0.20, 0.15]. The new probabilities are used to generate new combinations for the next iteration, and the iterative process (Fig. A1b) continues until the stopping criterion is reached (normally, the maximum number of iterations). More details about the IDLHC can be found in Maschio and Schiozer (2016).
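For illustration only, the sketch below (ours, not the authors' code) shows the core of this probability update and a one-dimensional discrete Latin Hypercube draw, assuming a single attribute with integer-coded levels; the function names and the NumPy implementation are illustrative choices rather than part of the original IDLHC implementation.

import numpy as np

def update_pdf(selected_levels, n_levels):
    # New probability of each discrete level is proportional to its
    # frequency among the selected (best-matched) models.
    counts = np.bincount(selected_levels, minlength=n_levels)
    return counts / counts.sum()

def sample_discrete_lhs(pdf, n_samples, rng):
    # Stratify [0, 1) into n_samples equal bins, draw one uniform value
    # per bin, shuffle, and map each value to a level via the inverse CDF.
    cdf = np.cumsum(pdf)
    cdf[-1] = 1.0  # guard against floating-point round-off
    u = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
    rng.shuffle(u)
    return np.searchsorted(cdf, u, side="right")

rng = np.random.default_rng(0)

# Example from the text: 100 selected models, 5 discrete levels,
# with frequencies yh = [10, 25, 30, 20, 15].
selected = np.repeat(np.arange(5), [10, 25, 30, 20, 15])
p_new = update_pdf(selected, n_levels=5)
print(p_new)                                # [0.10 0.25 0.30 0.20 0.15]
print(sample_discrete_lhs(p_new, 10, rng))  # levels drawn for the next iteration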

Fig. A1. Illustration of the Iterative Discrete Latin Hypercube (IDLHC): probability update proportional to the frequency (a) and iterative process (b).

Appendix B. Considerations about the region definition

In this paper, we defined the regions according to the well positions, in such a way that each region contains one injector. This procedure was used to capture the most important regions in terms of flow behavior. Before defining the regions, we analyzed the positions of the wells (injectors and producers) spatially, i.e. horizontally and vertically, and we tried to group producers and injectors completed at similar depths. The premise is that, as the flow is mainly controlled by super-K in the reservoir studied in this paper, the water saturation front is more likely to reach the producer that is most vertically aligned with the injector. We illustrate this analysis in Fig. B1, taking into account Regions 1, 2 and 3 (the delimitation of these regions is shown in Fig. 7, left image). We chose these regions for illustration because of well PROD6, which is located on the boundary of these regions. Observe that INJ1 and PROD9 (Region 1) and INJ6 and PROD6 (Region 2) are completed at similar vertical positions (c and e). Analyzing parts c and d, it is possible to see that the water saturation front arriving at PROD6 in both models with high water production (a) is connected with INJ6. It is also possible to see that the water front arriving at PROD1 comes from INJ4 (Region 3) and that PROD9 is connected to INJ1 (Region 1). It is important to highlight that the procedure adopted in this paper was appropriate for the reservoir model studied. The region definition may depend on the configuration of the wells and the geological structures. Therefore, this step must be carefully analyzed for each case.
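As a rough illustration of this grouping idea (our sketch, not the authors' procedure, which in the paper relies on manual inspection of well positions and completion depths), the snippet below assigns each producer to the injector that is closest in a weighted 3D distance, where the vertical coordinate can be weighted more heavily to favor injector–producer pairs completed at similar depths; the well names, coordinates and weight are hypothetical.

import numpy as np

# Hypothetical coordinates (x, y, z of the completed interval), in meters.
injectors = {"INJ1": (1200.0, 800.0, -5100.0), "INJ6": (2500.0, 1500.0, -5150.0)}
producers = {"PROD6": (2300.0, 1400.0, -5160.0), "PROD9": (1300.0, 900.0, -5110.0)}

def assign_regions(producers, injectors, z_weight=3.0):
    # Assign each producer to the nearest injector; z_weight > 1 favors
    # pairs completed at similar depths (vertically aligned wells).
    regions = {}
    for pname, (px, py, pz) in producers.items():
        def dist(inj_xyz):
            ix, iy, iz = inj_xyz
            return np.sqrt((px - ix) ** 2 + (py - iy) ** 2 + (z_weight * (pz - iz)) ** 2)
        regions[pname] = min(injectors, key=lambda k: dist(injectors[k]))
    return regions

print(assign_regions(producers, injectors))  # {'PROD6': 'INJ6', 'PROD9': 'INJ1'}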



Fig. B1. Water rate for two models (a) and cross-section views (c, d, e and h) according to the positions indicated in (b), with Layer 23 shown in (f and g); parts (b) to (h) show the water saturation at 3257 days.

Appendix C. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.petrol.2019.106400.

References

Abdel-Ghani, R., Krinis, D., Nieto Camargo, J., 2011. Incorporating PLT-distributed dynamic permeability into reservoir simulation models improves and accelerates the history matching process (SPE-145416). In: SPE Reservoir Characterisation and Simulation Conference and Exhibition, 9-11 October, Abu Dhabi, UAE.
Al-Dhamen, A.A., Pham, T.R., Al-Khatib, M.R., 1998. A quick method of identifying and history matching a gravity dominated reservoir with localized super-permeabilities (SPE-49276). In: SPE Annual Technical Conference and Exhibition, 27-30 September, New Orleans, Louisiana.
Al-Dhafeeri, A.M., Nasr-El-Din, H.A., 2007. Characteristics of high-permeability zones using core analysis, and production logging data. J. Pet. Sci. Eng. 55, 18–36.
Alqam, M.H., Nasr-El-Din, H.A., Lynn, J.D., 2001. Treatment of super-K zones using gelling polymers (SPE-64989). In: SPE International Symposium on Oilfield Chemistry, 13-16 February, Houston, Texas.
Babaei, M., Pan, I., Alkhatib, A., 2015a. Robust optimization of well location to enhance hysteretical trapping of CO2: assessment of various uncertainty quantification methods and utilization of mixed response surface surrogates. Water Resour. Res. 51 (12), 9402–9424.
Arnold, D., Demyanov, V., Rojas, T., Christie, M., 2019. Uncertainty quantification in reservoir prediction: Part 1 – model realism in history matching using geological prior definitions. Math. Geosci. 51 (2), 209–240.
Babaei, M., Alkhatib, A., Pan, I., 2015b. Robust optimization of subsurface flow using polynomial chaos and response surface surrogates. Comput. Geosci. 19 (5), 979–998.
Brantferger, K.M., Kompanik, G., Al-Jenaibi, H., Dodge, S., Patel, H., 2012. Impact and lessons of using high permeability streaks in history matching a giant offshore Middle East carbonate reservoir (SPE-161426). In: Abu Dhabi International Petroleum Conference and Exhibition, 11-14 November, Abu Dhabi, UAE.
Chen, C., Gao, G., Gelderblom, P., Jimenez, E., 2016. Integration of cumulative-distribution-function mapping with principal-component analysis for the history matching of channelized reservoirs. SPE Reserv. Eval. Eng. 19 (2), 278–293.
Correia, M.G., Hohendorff Filho, J.C., Gaspar, A.T.F.S., Schiozer, D.J., 2015. UNISIM-II-D: benchmark case proposal based on a carbonate reservoir (SPE-177140). In: SPE Latin American and Caribbean Petroleum Engineering Conference, 18-20 November, Quito, Ecuador.
Correia, M.G., Schiozer, D.J., 2018. Scaling up highly permeable thin layers into flow simulation (SPE-185875). SPE Reserv. Eval. Eng. 21 (2), 503–520.
Costa, L.A.N., Maschio, C., Schiozer, D.J., 2018. A new methodology to reduce uncertainty of global attributes in naturally fractured reservoirs. Oil Gas Sci. Technol. 73, 1–16.
Dogru, A.H., Dreiman, W.T., Hemanthkumar, K., Fung, L.S., 2001. Simulation of super-K behavior in Ghawar by a multi-million cell parallel simulator (SPE-68066). In: SPE Middle East Oil Show, 17-20 March, Manama, Bahrain.
Emerick, A.A., 2017. Investigation on principal component analysis parameterizations for history matching channelized facies models with ensemble-based data assimilation. Math. Geosci. 49 (1), 85–120.
Ghadami, N., Rasaei, M.R., Hejri, S., 2015. Toward robust history matching of a giant carbonate gas-condensate reservoir. Asia Pac. J. Chem. Eng. 10 (2), 228–240.
Gill, H.S., Al-Zayer, R., 2004. Pressure transient derivative signatures in presence of stratiform super-K permeability intervals, Ghawar Arab-D reservoir (SPE-88704). In: Abu Dhabi International Conference and Exhibition, 10-13 October, Abu Dhabi, United Arab Emirates.
Gomez, S., Gosselin, O., Barker, J.W., 2001. Gradient-based history matching with a global optimization method (SPE-71307). SPE J. 6 (2), 200–208.
Guerrero, R.J.P., Lyngra, S., BinAkresh, S.A., Widjaja, D.R., Alhuthali, A.H., 2012. A pressure transient approach for dynamic characterization of a giant carbonate Middle Eastern oil field (SPE-160899). In: SPE Saudi Arabia Section Technical Symposium and Exhibition, 8-11 April, Al-Khobar, Saudi Arabia.
Kirkpatrick, S., Gelatt Jr., C.D., Vecchi, M.P., 1983. Optimization by simulated annealing. Science 220, 671–680.
Li, B., Najeh, H., Lantz, J., Rampurawala, M., Gok, I.M., Al-Khabbaz, M., 2010. Detecting thief zones in carbonate reservoirs by integrating borehole images with dynamic measurements: case study from the Mauddud reservoir, north Kuwait (SPE-116286). SPE Reserv. Eval. Eng. 13 (2), 193–202.
Liu, J.S., Enwall, R.E., Davis, R.R., Douglas, J.L., Barber, R.M., 1993. Integration of engineering data and geostatistics in simulation of a complex carbonate reservoir (SPE-25598). In: Middle East Oil Show, 3-6 April, Bahrain.
Liu, L., Zheng, X., He, E., Liu, F., Luo, H., 2016. Findings and challenges of high permeability zone on water injection pilots in Iraqi carbonate reservoirs (SPE-183485). In: Abu Dhabi International Petroleum Exhibition & Conference, 7-10 November, Abu Dhabi, UAE.
Maschio, C., Schiozer, D.J., 2016. Probabilistic history matching using discrete Latin Hypercube sampling and nonparametric density estimation. J. Pet. Sci. Eng. 147, 98–115.
Maschio, C., Davolio, A., Correia, M.G., Schiozer, D.J., 2015. A new framework for geostatistics-based history matching using genetic algorithm with adaptive bounds. J. Pet. Sci. Eng. 127, 387–397.
Maschio, C., Vidal, A.C., Schiozer, D.J., 2008. A framework to integrate history matching and geostatistical modeling using genetic algorithm and direct search methods. J. Pet. Sci. Eng. 63, 34–42.
Mohamed, L., Christie, M.A., Demyanov, V., 2011. History matching and uncertainty quantification: multiobjective particle swarm optimisation approach (SPE-143067). In: SPE EUROPEC/EAGE Annual Conference and Exhibition, 23-26 May, Vienna, Austria.
Oliveira, G.S., Schiozer, D.J., Maschio, C., 2017. History matching by integrating regional multi-property image perturbation methods with a multivariate sensitivity analysis. J. Pet. Sci. Eng. 153, 111–122.
Romm, E.S., 1966. Flow Characteristics of Fractured Rocks. Nedra Publishers, Moscow.
Sarma, P., Durlofsky, L.J., Aziz, K., 2008. Kernel principal component analysis for efficient differentiable parameterization of multipoint geostatistics. Math. Geosci. 40 (1), 3–32.
Subbey, S., Christie, M., Sambridge, M., 2002. Uncertainty reduction in reservoir modelling. Contemp. Math.-Am. Math. Soc. 285, 457–467.
Swart, P.K., Cantrell, D.L., Westphal, H., Handford, C.R., Kendall, C.G., 2005. Origin of dolomite in the Arab-D reservoir from the Ghawar field, Saudi Arabia: evidence from petrographic and geochemical constraints. J. Sediment. Res. 75 (3), 476–491.
Uba, H.M., Chiffoleau, Y., Pham, T., Divry, V., Kaabi, A., Thuwaini, J., 2007. Application of a hybrid dual-porosity dual-permeability representation of large scale fractures to the simulation of a giant carbonate reservoir (SPE-105560). In: SPE Middle East Oil and Gas Show and Conference, 11-14 March, Bahrain.
UNISIM, 2018. Research Group on Reservoir Simulation and Management. UNISIM-II-H: case study for field history matching and uncertainties reduction based on UNISIM-II. https://www.unisim.cepetro.unicamp.br/benchmarks/en/unisim-ii/unisim-ii-h.
Valle, A., Pham, A., Hsueh, P.T., Faulhaber, J., 1993. Development and use of a finely gridded window model for a reservoir containing super permeable channels (SPE-25631). In: SPE Middle East Oil Technical Conference & Exhibition, 3-6 April, Bahrain.
Vargas-Guzmán, J.A., Al-Gaoud, A., Datta-Gupta, A., Jiménez, E.A., Oyeriende, D., 2009. Identification of high permeability zones from dynamic data using streamline simulation and inverse modeling of geology. J. Pet. Sci. Eng. 69, 283–291.
Vo, H.X., Durlofsky, L.J., 2014. A new differentiable parameterization based on principal component analysis for the low-dimensional representation of complex geological models. Math. Geosci. 46 (7), 775–813.
Voelker, J., Liu, J., Caers, J., 2003. A geostatistical method for characterizing super-permeability from flow-meter data: application to Ghawar field (SPE-84279-MS). In: SPE Annual Technical Conference and Exhibition, 5-8 October, Denver, Colorado.
Yustres, A., Asensio, L., Alonso, J., Navarro, V., 2012. A review of Markov chain Monte Carlo and information theory tools for inverse problems in subsurface flow. Comput. Geosci. 16 (1), 1–20.
