Hydrocarbon reservoir model detection from pressure transient data using coupled artificial neural network—Wavelet transform approach


Behzad Vaferi, Reza Eslamloueyan*, Najmeh Ghaffarian
School of Chemical and Petroleum Engineering, Shiraz University, Shiraz, Iran

Article history: Received 27 October 2013; Received in revised form 29 June 2015; Accepted 31 May 2016; Available online 4 June 2016

Keywords: Well testing data; Reservoir model detection; Dimension reduction; Discrete wavelet coefficients; Multilayer perceptron network

Abstract

Well testing analysis is performed to detect the oil and gas reservoir model and to estimate its associated parameters from pressure transient data, which are often recorded by pressure down-hole gauges (PDGs). PDGs can record a huge volume of bottom-hole pressure data; the limited computer resources available for analyzing and handling these noisy data are among the challenges of PDG monitoring. Therefore, reducing the number of data points recorded by PDGs to a manageable size is an important step in well test analysis. In the present study, a discrete wavelet transform (DWT) is employed to reduce the amount of long-term reservoir pressure data obtained for eight different reservoir models. Then, a multi-layer perceptron neural network (MLPNN) is developed to recognize the reservoir models from the reduced pressure data. The developed algorithm has four steps: (1) generating pressure versus time data, (2) converting the generated data to log-log pressure derivative (PD) graphs, (3) calculating the multi-level discrete wavelet coefficients (DWC) of the PD graphs, and (4) using the approximate wavelet coefficients as the inputs of an MLPNN classifier. Sensitivity analysis confirms that the most accurate reservoir model predictions are obtained by the MLPNN with 17 hidden neurons. The proposed method has been validated using simulated test data and actual field information. The results show that the suggested algorithm is able to identify the correct reservoir models for the training and test data sets with total classification accuracies (TCA) of 95.37% and 94.34%, respectively.

© 2016 Published by Elsevier B.V.

1. Introduction

In recent years, pressure down-hole gauges (PDGs) have been widely installed in oil and gas fields during intelligent completions of production and injection wells. The main objectives of PDG installation are the continuous, real-time measurement of pressure, temperature, downhole flow rate, and phase fractions over a long period of time for well monitoring and for evaluating reservoir performance. Over such a long recording period the data acquire noise and outliers from different sources, and the gauge may even fail to record data on several occasions. These issues, along with the limited computer resources available for handling such massive and noisy data, are some of the challenges that can adversely affect the interpretation of PDG data. Therefore, it becomes imperative to develop a data processing algorithm that can correctly prepare the PDG data for further analysis.

* Corresponding author at: School of Chemical and Petroleum Engineering, Shiraz University, Mollasadra Ave., Shiraz, Iran. E-mail addresses: [email protected], [email protected] (R. Eslamloueyan).
http://dx.doi.org/10.1016/j.asoc.2016.05.052
1568-4946/© 2016 Published by Elsevier B.V.

Construction of a reliable dynamic model that can predict both the current and future transient behavior of a reservoir is a crucial stage in optimizing and managing the production policy of oil and gas reservoirs. Although direct identification of these heterogeneous hydrocarbon reservoirs is almost impossible, indirect techniques such as seismic surveys, well logging, and well testing have been developed to construct a reliable reservoir model. In contrast to the static description of reservoirs provided by well log and seismic techniques, well testing presents a dynamic view of these highly heterogeneous media. Since 1937, well testing has been among the most widely used tools in petroleum engineering for identifying hydrocarbon reservoirs [1]. Well testing is basically conducted by creating a flow disturbance in the wellbore and monitoring the pressure response at the bottom-hole. By analyzing the pressure signal recorded over time during well testing operations, the reservoir model and its boundary (the formation model) can be identified [2]. Moreover, reservoir parameters such as the initial reservoir pressure, the average conductivity of the matrix and fractures, the storativity ratio, the interporosity flow coefficient, and the degree of reservoir damage can be estimated from these signals [2-4]. It should be noted that, prior to starting the parameter estimation, a decision should be made on the formation model.


Nomenclature

b_j            Bias of the jth neuron
e              Exponential function
f              Activation or transfer function
i              Point of interest for pressure derivative calculation and normalization
k              Permeability (m2)
MTCA           Modified total classification accuracy
n_j            Output of the jth neuron
net_j          Net input of the transfer function
N              Number of inputs to the jth neuron
p              Pressure drop data
P'             Pressure derivative data
s              Skin factor
S              Sensitivity
t              Superposition time function (ln t and the modified Horner or superposition time for drawdown and build-up, respectively)
TCA            Total classification accuracy
w              Wellbore storage coefficient (m3/Pa)
w_jr           Synaptic weight corresponding to the rth synapse of the jth neuron
x_r            rth input
X_max          Maximum value of the pressure derivative data in each data set
X_min          Minimum value of the pressure derivative data in each data set
X_normalized   Normalized value of the pressure derivative data point of interest
Z_i            Fraction of the overall patterns that belong to the ith reservoir model
Δt             Elapsed time (h)
λ              Interporosity flow coefficient
ω              Storativity ratio

All of the aforementioned parameters and the formation model can be evaluated by knowing both the pressure and the production/injection flow rate over time. The pressure derivative plot, i.e., the log-log presentation of the rate of pressure change with respect to the superposition time function, is one of the most widely used techniques for detecting the reservoir model and its boundaries [5,6]. Since the present study focuses only on detecting the reservoir structure and its boundary model, type curve matching with the pressure derivative plot is employed for this task. Once the formation model is detected, its various parameters can be estimated from specific portions of the pressure derivative plot. The various types of flow regimes have been explained extensively in our previous research [5].

Artificial neural networks (ANN) are a branch of artificial intelligence methodologies that have played an important role in many scientific disciplines by replacing traditional analyses with computer-aided ones [5-8]. MLP and recurrent networks are among the most frequently used ANNs for the interpretation and recognition of complicated patterns in various fields, especially in well testing [5,8-11]. In our previous studies, two different automated models based on an MLPNN and a recurrent network were developed for detecting eight different oil reservoir models from synthetic PD patterns containing 33 sample points [5,9]. In 1995, Athichanagorn and Horne used an MLPNN to recognize characteristic parts, and their appearance times, in the pressure derivative plots of several candidate reservoir models [12]. The values obtained from the MLPNN were then used as initial guesses in a sequential predictive probability method for diagnosing the reservoir model and estimating its parameters [12].

In the last two decades, the wavelet transform has appeared repeatedly in petroleum engineering for detecting changes in flow rate, transient identification, de-noising, and up-scaling of reservoir properties [13-15]. Kikani and He used the wavelet transform for data de-noising and transient detection in synthetic pressure transient data [13]. Athichanagorn et al. developed a wavelet-based model for pre-processing and interpreting both long-term simulated and actual field data [14]. Olsen and Nordtvedt investigated the ability of the wavelet transform to filter and remove noise from production data [15]. They also proposed empirical rules and automatic methods for threshold approximation and for reducing the amount of reservoir production data [15]. In the next section, a brief explanation is presented of well testing, the discrete wavelet transform, the MLP network, and the procedures employed for generating pressure transient data and calculating the log-log pressure derivative graphs.

2. Method

2.1. Transient test operation and analysis

Since the geological formations hosting oil, gas, and water have complex dynamic behavior and contain different types of rocks, fluids, and barriers, their direct identification may not be possible. On the other hand, to decide on the best production strategy, the reservoir size and its parameters, such as deliverability (the ability to produce), have to be known. The reservoir pressure transient is probably the most important data that can be employed to describe the reservoir, forecast its performance, and develop recovery schemes. The pressure changes in the reservoir caused by altering the production policy reflect the reservoir's inherent characteristics. In terms of systems identification theory, the flow rate is the input and the variation in pressure can be considered the system's output. The inverse solution can be employed effectively to estimate the reservoir's properties by matching the observed pressure response to mathematical models. The dynamic behavior of the reservoir can then be predicted by the developed mathematical model for future reservoir management.

2.1.1. Drawdown well testing

In a drawdown test, the pressure is measured over time at the bottom-hole of a well that is producing at a constant rate after a shut-in period. Fig. 1 illustrates the schematic pressure change as a function of time and of radial distance from the well in an oil reservoir during a drawdown test. These pressure transient curves can be calculated from the governing equations of transient radial flow into a wellbore for a specific reservoir [16,17]. As soon as constant-rate production is imposed on the well after a shut-in period, the fluid near the well expands and moves toward the region where the pressure has fallen below the original reservoir pressure. This movement of reservoir fluid creates a pressure disturbance in other parts of the formation and results in fluid motion toward the producing well. The pressure transient response travels away from the well and moves through the reservoir. At early test times, the movement of the pressure signal is rapid and is controlled only by wellbore storage effects; it then slows down as it spreads further from the wellbore and senses a progressively larger reservoir volume. This process continues until the pressure transient signal propagates throughout the formation and senses the reservoir boundary.
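For illustration only, the pressure decline described above can be sketched with the classical line-source (exponential-integral) solution for an infinite-acting homogeneous reservoir. This is a textbook approximation, not necessarily the formulation used by the authors' simulator, and all parameter values below are arbitrary examples.

```python
import numpy as np
from scipy.special import expi  # exponential integral Ei(x)

def line_source_drawdown(r, t, q=0.002, mu=1e-3, k=1e-13, h=10.0,
                         phi=0.2, ct=1e-9, pi=2.75e7):
    """Pressure (Pa) at radius r (m) and time t (s) from the classical
    infinite-acting line-source solution, SI units; q is the constant
    production rate (m3/s). Parameter values are illustrative only."""
    x = phi * mu * ct * r**2 / (4.0 * k * t)
    return pi + q * mu / (4.0 * np.pi * k * h) * expi(-x)

# near-wellbore pressure over roughly 1-100 hours of constant-rate production
t = np.logspace(0, 2, 50) * 3600.0      # seconds
p = line_source_drawdown(0.1, t)        # r = 0.1 m, close to the wellbore
print(p[:3])                            # monotonically declining pressure
```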

Fig. 1. Typical 3D pressure profile in a cylindrical reservoir as a function of flowing time and radial distance from the wellbore [16].

2.2. Reservoir models and their related flow regimes

By analyzing the well testing data, the formation model (reservoir + boundary) and its associated parameters can be estimated. Prior to estimating the reservoir parameters, a decision should be made on the formation model; it is solely this second step that concerns us in this paper. In the present study, homogeneous and dual porosity reservoir models with different outer boundary conditions, namely no-flow, constant pressure, infinite acting, and single sealing fault boundaries, are considered. Dual porosity formations contain two different media: a fissure system with very low storage capacity and high fluid transmissibility, and matrix blocks with high storage capacity and low fluid transmissibility.

2.3. Artificial neural networks

Artificial neural networks are nonlinear mathematical techniques designed by simulating biological nervous systems. To date, this tool has been applied to information processing in many scientific disciplines [18-21]. These networks are able to correlate the inputs and outputs of most nonlinear multi-variable phenomena of any complexity. They consist of a large number of key processing elements, i.e., neurons, that are connected in a specific manner according to the type of network. The output neurons of the MLPNN compute the values of the dependent variables as follows:

$n_j = f\left(\sum_{r=1}^{N} w_{jr} x_r + b_j\right)$   (1)

where w_jr is the weight from neuron r to neuron j, and b_j and n_j are the bias and output of the jth neuron, respectively. The summation is called the net input of the transfer function and is usually denoted net_j. The biases are activation thresholds that are added to the weighted sum of the inputs. The net input of each neuron passes through a function called the activation or transfer function (f) of the neuron. Different types of transfer functions have been proposed for artificial neural networks, such as

linear, log-sigmoid, tan-sigmoid, and radial basis transfer functions [22,23]. In the present study, the functions defined by Eqs. (3) and (4), usually called tan-sigmoid and log-sigmoid, are used as the transfer functions in the hidden and output layers of the classifier, respectively. net_j can be expressed mathematically as:

$net_j = \sum_{r=1}^{N} w_{jr} x_r + b_j$   (2)

$f(net_j) = \dfrac{e^{net_j} - e^{-net_j}}{e^{net_j} + e^{-net_j}}$   (3)

$f(net_j) = \dfrac{1}{1 + e^{-net_j}}$   (4)

f(net_j) is the output of the neuron, which becomes an input of the next neuron or the network output. The tan-sigmoid and log-sigmoid transfer functions compress their inputs into [-1, 1] and [0, 1], respectively, as displayed schematically in Fig. 2.

2.3.1. Training the MLPNN

To yield proper categorization results, the weights and biases of the MLPNN have to be optimized with respect to some performance criteria. In this study, 3298 PD graphs have been employed for developing the proposed coupled DWT-MLPNN model. Seventy percent of the generated PD plots are utilized during the training step and the remaining thirty percent are used for validating the proposed model. Since each neuron of the MLPNN's output layer is responsible for identifying one of the considered oil formation models, the number of neurons in the output layer should be equal to the number of reservoir models, i.e., 8. During the training step, the output neurons take an integer value of 0 or 1 to indicate the probability that an input pattern belongs to a specific reservoir model. For instance, the target vector [0, 0, 0, 0, 0, 0, 0, 1] indicates that the input data belong to the 8th reservoir model.
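As a concrete illustration of Eqs. (1)-(4) and the one-hot output coding, the NumPy sketch below performs a forward pass through a single-hidden-layer MLP with tan-sigmoid hidden neurons and log-sigmoid output neurons. The weight and bias values are random placeholders, not the trained parameters reported later in Table 4.

```python
import numpy as np

def tansig(x):                       # Eq. (3): hidden-layer activation
    return np.tanh(x)

def logsig(x):                       # Eq. (4): output-layer activation
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W_h, b_h, W_o, b_o):
    """Eqs. (1)-(2): net input = weighted sum + bias, then transfer function."""
    hidden = tansig(W_h @ x + b_h)      # 17 hidden neurons
    return logsig(W_o @ hidden + b_o)   # 8 output neurons, one per reservoir model

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 16)                            # 16 normalized cA5 coefficients
W_h, b_h = rng.normal(size=(17, 16)), rng.normal(size=17)
W_o, b_o = rng.normal(size=(8, 17)), rng.normal(size=8)

scores = mlp_forward(x, W_h, b_h, W_o, b_o)
print("predicted reservoir model:", np.argmax(scores) + 1)   # e.g. model 8 -> DPCB
```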

Fig. 2. Schematic presentation of the log-sigmoid and tan-sigmoid activation functions.

Fig. 3. Filtering process of the discrete wavelet transform: the input signal passes through a low-pass and a high-pass filter, and each output is down-sampled by a factor of two to give the approximation and detail coefficients, respectively.

2.3.2. Selection of optimum configuration

The most important issues in developing any ANN model are specifying the optimal number of hidden layers and the number of neurons in each layer. Although back-propagation can be applied to networks with any number of layers, it has been mathematically proven that any multivariable function with arbitrary discontinuities can be approximated to the desired accuracy by an MLPNN with only one hidden layer, provided that its hidden units use non-linear transfer functions, e.g., sigmoids [24-27]. Cybenko [26] substantiated his theory using the Hahn-Banach theorem [28], the proof of Hornik et al. [24] is based on the Stone-Weierstrass theorem [28], and Funahashi [25] proved the same result using an integral formula. The proof of Xiang et al. [29] is the most elegant and simple, and is derived from a piecewise-linear approximation of the sigmoidal activation function.

2.4. Wavelet transform

The wavelet transform (WT) has recently achieved great popularity for de-noising, system identification, and the analysis of scientific signals and time series [30-33]. Interested readers can find more details about this transformation technique in the survey article by Akansu et al. [34]. Converting a long-term signal into a compressed set of parameters, usually called the wavelet coefficients, is one of the significant applications of the WT [35]. The discrete wavelet transform (DWT) converts a given signal into the wavelet basis and provides a combined time and frequency localization for it. To calculate the DWC of a given signal, it is passed through low-pass and high-pass discrete filters. Once a typical signal passes through the low-pass and high-pass filters, the outputs are down-sampled by a factor of two and comprise the approximation and detail coefficients; cA and cD are used to represent the approximation and detail information, respectively. The schematic of the DWT process is presented in Fig. 3. An interesting feature of the DWT is that the transformation can be performed at multiple levels. In a multi-level decomposition, the approximation and detail coefficients of the next level are obtained from the approximation information of the present level. Fig. 4 shows the procedure employed for decomposing an input signal IS to the fifth level of decomposition; the corresponding DWT is comprised of cD1, cD2, cD3, cD4, cD5, and cA5.

2.4.1. Computation of wavelet coefficients

The multi-level DWT uses the particular coarseness of the PD signals to decompose them into a specific number of coefficients.

Fig. 4. Workflow of the multi-level discrete wavelet transform: at each level the approximation coefficients (cA1 to cA5) are split into the next level's approximation and detail (cD1 to cD5) coefficients.

The schematic presentation of the multi-level DWT is shown in Fig. 4. Selecting a suitable wavelet type and the number of decomposition levels are crucial steps in decomposing signals with the wavelet transform. The number of decomposition levels is often chosen from the dominant frequency components of the available signals. In our study, the detail as well as the approximate coefficients of the PD signals have been calculated at various decomposition levels, and the selection is performed based on their classification accuracy. The sensitivity error analysis confirmed that the approximate coefficients of the fifth decomposition level are optimal for the considered task.
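A minimal sketch of this multi-level decomposition, assuming the PyWavelets package and a synthetic 512-sample stand-in for a normalized PD signal (the actual signals are generated by the well-test simulation of Section 2.5):

```python
import numpy as np
import pywt

# synthetic stand-in for a 512-sample normalized pressure-derivative signal
rng = np.random.default_rng(1)
signal = np.tanh(np.linspace(-3, 3, 512)) + 0.05 * rng.normal(size=512)

# five-level DWT with the first-order Daubechies (Haar) wavelet
coeffs = pywt.wavedec(signal, 'db1', level=5)   # [cA5, cD5, cD4, cD3, cD2, cD1]
cA5 = coeffs[0]

print(len(cA5))                       # 16 approximation coefficients, the MLPNN inputs
print([len(c) for c in coeffs[1:]])   # detail coefficients: 16, 32, 64, 128, 256
```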

Table 1. Distribution of the train and test data sets for the various reservoir models.

Reservoir formation model    Data    Train    Test
HI model                      357     250      107
HS model                      371     260      111
HCP model                     386     270      116
HCB model                     380     266      114
DPI model                     327     229       98
DPS model                     384     269      115
DPCP model                    552     386      166
DPCB model                    541     379      162
Total data set               3298    2309      989

2.5. Well testing simulation, data generation and pre-processing

2.5.1. Well testing simulation

In the present study, the pressure transient signals of the following eight reservoir models have been calculated by simulating the well testing operation for each individual model:

1. Homogeneous Reservoir, Infinite Acting Boundary (HI)
2. Homogeneous Reservoir, Single Sealing Fault Boundary (HS)
3. Homogeneous Reservoir, Constant Pressure Outer Boundary (HCP)
4. Homogeneous Reservoir, Closed Outer Boundary (HCB)
5. Dual Porosity Reservoir, Infinite Acting Boundary (DPI)
6. Dual Porosity Reservoir, Single Sealing Fault Boundary (DPS)
7. Dual Porosity Reservoir, Constant Pressure Outer Boundary (DPCP)
8. Dual Porosity Reservoir, Closed Outer Boundary (DPCB)

The number of pressure transient signals generated for each individual reservoir model, and their distribution between the training and test subsets, are presented in Table 1. The appropriate split between training and test subsets was evaluated using the validation procedure. In order to simulate the well testing operations and generate pressure transient signals, numerical values of formation parameters such as the permeability, skin factor, and distance from the boundary must be given. All of the parameters required for the well testing simulation of the considered oil reservoir models, and their ranges, are presented in Table 2; a sketch of sampling parameter sets within these ranges follows Table 2 below.

2.5.2. Pre-processing

2.5.2.1. Calculation of the pressure derivative. Since reservoir model identification is better carried out using pressure derivative plots, all of the pressure transient patterns are converted to PD graphs using the method of Bourdet et al. [36].

Table 2. Range of values of the parameters used for generating the well testing data.

Reservoir model   Parameter                             Minimum        Maximum        Unit
Homogeneous       Permeability (k)                      9.87 × 10^-16  4.93 × 10^-13  m2
                  Skin factor (s)                       -5             7              -
                  Wellbore storage (w)                  2.3 × 10^-9    4.61 × 10^-6   m3/Pa
Dual porosity     Permeability (k)                      9.87 × 10^-16  5.92 × 10^-13  m2
                  Skin factor (s)                       -5             7              -
                  Wellbore storage (w)                  2.3 × 10^-8    2.3 × 10^-7    m3/Pa
                  Storativity ratio (ω)                 0.01           0.07           -
                  Interporosity flow coefficient (λ)    10^-5          10^-10         -
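The paper does not state how individual parameter values were drawn within the Table 2 ranges; the sketch below assumes a simple uniform (log-uniform for the wide ranges) sampling scheme, and the function name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_homogeneous_case():
    """Draw one homogeneous-reservoir parameter set within the Table 2 ranges
    (sampling scheme assumed; not specified by the authors)."""
    return {
        'k': 10 ** rng.uniform(np.log10(9.87e-16), np.log10(4.93e-13)),  # permeability, m2
        's': rng.uniform(-5.0, 7.0),                                     # skin factor
        'w': 10 ** rng.uniform(np.log10(2.3e-9), np.log10(4.61e-6)),     # wellbore storage, m3/Pa
    }

print(sample_homogeneous_case())
```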


Fig. 5. Workflow of the performed analysis for reservoir model detection: general properties of the wellbore, fluid, hydrocarbon formation, and production policy (well radius, viscosity, oil formation factor, initial pressure, thickness, porosity, etc.) feed the Laplace-domain solution and numerical inversion for homogeneous (k, s, w) and natural fracture (k, s, w, λ, ω) reservoirs; the resulting bottom-hole pressure data are differentiated and normalized using Eqs. (5) and (6), transformed by the discrete wavelet transform, and passed to the artificial neural network for reservoir model detection.

This algorithm, which is expressed by Eq. (5), utilizes a weighted central-difference approximation to calculate the pressure derivative at each individual point i:

$P'_i = \dfrac{\dfrac{p_i - p_{i-1}}{t_i - t_{i-1}}\,(t_{i+1} - t_i) + \dfrac{p_{i+1} - p_i}{t_{i+1} - t_i}\,(t_i - t_{i-1})}{t_{i+1} - t_{i-1}}$   (5)

2.5.2.2. Scaling. In order to increase the convergence rate of the artificial neural network and avoid saturation of its weights, the pressure derivative graphs are normalized to the interval [-1, 1]. The correlation expressed by Eq. (6) is used to perform the normalization:

$X_{normalized} = \dfrac{2(X_i - X_{min})}{X_{max} - X_{min}} - 1$   (6)
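A compact sketch of the two pre-processing steps of Eqs. (5) and (6), assuming NumPy arrays of pressure drops and superposition times; the function names and the synthetic input record are illustrative only.

```python
import numpy as np

def bourdet_derivative(p, t):
    """Weighted central-difference pressure derivative of Eq. (5) at the
    interior points i = 1..n-2 (the two end points are not differentiated)."""
    dp_left  = (p[1:-1] - p[:-2])  / (t[1:-1] - t[:-2])
    dp_right = (p[2:]   - p[1:-1]) / (t[2:]   - t[1:-1])
    return (dp_left * (t[2:] - t[1:-1]) + dp_right * (t[1:-1] - t[:-2])) / (t[2:] - t[:-2])

def normalize(x):
    """Eq. (6): rescale a PD signal to the interval [-1, 1]."""
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0

t = np.logspace(-3, 2, 512)        # superposition time (example values)
p = np.log(1.0 + 50.0 * t)         # synthetic pressure-drop record
pd_signal = normalize(bourdet_derivative(p, t))
print(pd_signal.shape)             # (510,)
```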

A schematic of the processes employed for reservoir model identification, i.e., simulation of the well testing operation, generation of the bottom-hole pressure, the pre-processing stages (normalization and data reduction), and development of the coupled approach, is shown in Fig. 5. The normalized pressure derivative plots of homogeneous and dual porosity reservoirs with various outer boundaries are shown in Figs. 6 and 7, respectively.

2.6. Detailed steps of the proposed approach

In the present study, the discrete wavelet transform is employed to reduce the number of samples of the PD signals obtained from the synthetic long-term pressure data. Multi-level wavelet decomposition has been used for calculating the detail and approximate information of the PD signals. Since the approximate coefficients preserve the characteristic shape of the PD signals while reducing the number of data samples, they provide a better representation than the detail coefficients. Therefore, only the approximate coefficients of the PD signals have been used by the MLP classifier for discriminating among the various reservoir models.

3. Results and discussion

A schematic presentation of the detail and approximation wavelet coefficients of the DPCB pressure derivative signal at various decomposition levels is illustrated in Fig. 8. Selection of a suitable filter type for a particular application is often done by finding the wavelet that gives the maximum efficiency [8]. In the present study, decompositions with the first-order Daubechies wavelet show smaller misclassification than the other examined wavelet types, and hence it is selected as the best wavelet type for detecting the oil reservoir model [37].

Fig. 6. Normalized PD plots of the various homogeneous oil formation models (HI, HS, HCP, HCB).

Fig. 7. Normalized PD plots of the various dual porosity oil formation models (DPI, DPS, DPCP, DPCB).

The smoothing feature of the first-order Daubechies wavelet results in its high sensitivity for detecting the special properties of the PD signals. The normalized approximate coefficients of the optimal DWT (fifth decomposition level using the first-order Daubechies wavelet) of the homogeneous and dual porosity reservoirs with various outer boundaries are presented in Figs. 9 and 10, respectively. It can be seen from these figures that the DWT extracts 16 coefficients from the 512 samples of each individual PD pattern without removing their significant characteristics or distorting their shapes. This can be verified by comparing Fig. 6 with Fig. 9, and Fig. 7 with Fig. 10.

3.1. Implementation of classifiers

In this study, attention has been directed to developing a coupled multi-level DWT and MLP network for identifying the formation model from synthetic and field pressure transient signals. Finding the optimal structure of an MLPNN is a critical stage in developing a suitable classifier model [38]. Based on Cybenko's study, an MLP network with only one hidden layer is able to perform any type of nonlinear mapping [26]. Therefore, we employed an MLP network with only one hidden layer for recognizing the reservoir models in the present research. The appropriate number of neurons in the hidden layer depends mainly on three issues: (1) the complexity of the correlation between the input and output data, (2) the number of available training and test data, and (3) the severity of noise in the datasets. Too few neurons may leave the network unable to reach the desired error, while too many neurons may result in over-fitting. Therefore, the number of hidden neurons is often determined through an optimization procedure that maximizes some index, e.g., the percentage of correct diagnoses [8].
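The authors implement the classifier as a MATLAB-style MLP trained with scaled conjugate gradient; the scikit-learn sketch below only mirrors the 16-17-8 topology, the tan-sigmoid hidden layer, and the 70/30 split, so it is an approximation rather than a reproduction, and the data are random placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# placeholder data: rows = 16 approximation coefficients, labels = reservoir model 1..8
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(3298, 16))
y = rng.integers(1, 9, size=3298)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(17,), activation='tanh',
                    solver='adam', max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test TCA:", clf.score(X_te, y_te))   # meaningless here, data are random placeholders
```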


Fig. 8. The detail and approximation wavelet coefficients of the DPCB signal.

Fig. 9. Approximate wavelet coefficients (cA5) computed for the homogeneous oil formations (HI, HS, HCP, HCB) using the optimum DWT.

Fig. 10. Approximate wavelet coefficients (cA5) of the dual porosity formations (DPI, DPS, DPCP, DPCB) computed using the optimum DWT.

3.2. Evaluation of the optimal MLPNN topology and appropriate wavelet type

The optimal architecture of the coupled DWT-MLPNN model is determined by a trial-and-error procedure, examining the approximate coefficients of various decomposition levels, different types of filters, the number of hidden layers, and the number of neurons in each hidden layer. In order to assess the effect of the wavelet type on the classification accuracy of the MLPNN, various decompositions have been performed using the first-order Daubechies, 10th-order Symlet, 4th-order Coiflet, and 8th-order Daubechies wavelets [37]. Table 3 shows the results of the error sensitivity analysis performed to investigate the capabilities of the different coupled DWT-MLPNN models in detecting the correct reservoir model. The optimal configuration has been selected by finding the smallest structure that also has the maximum classification accuracy. It can be clearly seen that the first-order Daubechies wavelet gives smaller misclassification rates than the other considered filters. The various networks have been trained 3200 times, and only those topologies showing the minimum misclassification are reported in Table 3.
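A sketch of the kind of search summarized in Table 3, looping over candidate wavelets and hidden-layer sizes; it assumes the PyWavelets and scikit-learn tools and placeholder array names (pd_signals, labels), whereas the authors' actual selection used repeated MATLAB trainings.

```python
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def search_architecture(pd_signals, labels, wavelets=('db1', 'coif4', 'db8', 'sym10'),
                        hidden_sizes=range(1, 21), level=5):
    """Return (wavelet, n_hidden, test accuracy) of the best DWT-MLP combination found."""
    best = None
    for wav in wavelets:
        # level-5 approximation coefficients of every PD signal for this wavelet
        X = np.array([pywt.wavedec(s, wav, level=level)[0] for s in pd_signals])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
        for n in hidden_sizes:
            clf = MLPClassifier(hidden_layer_sizes=(n,), activation='tanh',
                                max_iter=2000, random_state=0).fit(X_tr, y_tr)
            acc = clf.score(X_te, y_te)
            if best is None or acc > best[2]:
                best = (wav, n, acc)
    return best
```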

Fig. 11. Variation of the mean squared error with epoch during the training stage.

The maximum classification accuracies of 95.37% and 94.34% for the training and testing data sets are obtained by the MLP network consisting of 17 hidden neurons, trained with seventy percent of the available signals. Therefore, the two-layer MLP network with 17 hidden neurons, trained with 16 input samples (the approximation coefficients of the fifth decomposition level of the PD patterns, calculated using the first-order Daubechies wavelet), has been considered the optimal configuration for discriminating among the various oil formation models from pressure transient data. The variation of the training error versus epoch for the MLPNN with the optimal configuration is shown in Fig. 11. The y-axis of Fig. 11 represents the mean squared error (MSE) observed between the MLPNN prediction and the real target during the training step, while its x-axis shows the epoch. The epoch indicates how many times the training datasets have been fed to the MLPNN model for adjusting its parameters. By increasing the number of epochs, the weights and biases of the MLPNN model converge to their optimal values and hence the MSE between the MLPNN prediction and the real target decreases. This is of special importance for allowing other researchers to reproduce the results and use our MLPNN model appropriately. To utilize our trained MLPNN and reproduce its results exactly, the weight and bias matrices and the transfer functions of the proposed model should be used. Therefore, the detailed information (weight and bias matrices) of the proposed MLPNN model is reported in Table 4. Other required information, such as the architecture and activation functions, can be found in Section 2.3. Since all of the input and output information in this study has been normalized, other researchers must perform the same procedure to achieve the reported results.

3.3. Validation of the optimal coupled model using synthetic data

Table 5 presents the number of correct and incorrect diagnoses of the coupled DWT-MLP network for the training and testing data sets. As can be seen from Table 5, the proposed model shows a remarkable ability to identify the correct reservoir models, especially HI, HS, HCB, DPI, DPS, and DPCB. The relatively low classification accuracy of the proposed approach for the HCP and DPCP models can be explained by the fact that the difference in the shapes of the PD plots of the reservoir models is the key factor that allows the MLPNN model to learn each one and discriminate among them.


Table 3. Results of the network architecture studies.

% training set                      Network architecture^a   Wavelet type    % TCA (training set)   % TCA (testing set)
Sixty percent of total data set     16 × 20 × 8^b            Daubechies 1    93.59                  94.4388
                                    23 × 16 × 8              Coiflets 4      85.97                  88.9788
                                    15 × 18 × 8              Daubechies 8    85.74                  86.0218
                                    19 × 19 × 8              Symlets 10      84.97                  88.0688
Seventy percent of total data set   16 × 17 × 8              Daubechies 1    95.37                  94.34
                                    23 × 20 × 8              Coiflets 4      89.22                  83.822
                                    15 × 19 × 8              Daubechies 8    88.05                  83.9232
                                    19 × 19 × 8              Symlets 10      90.26                  77.25

Bold values represent the best results obtained in the study.
a Best architecture among 400 trainings of each distinct network with 1 to 20 hidden neurons (400 trainings per wavelet type).
b MLPNN design: number of input coefficients × neurons in the hidden layer × output neurons, respectively.

Table 4. Weights and biases of the optimal MLPNN architecture.

Weight values of connections between the input and hidden layer (hidden neurons 1-17):

−2.31 6.11 −4.85 2.42 −0.28 −1.93 0.42 −0.69 −6.50 18.77 12.13 −9.21 −3.38 6.28 −3.37 −15.37 −2.31

−0.06 −5.82 4.22 0.96 −0.15 −0.21 −0.84 1.88 0.83 −13.24 31.94 −25.82 15.04 −15.65 2.80 4.46 −0.06

−2.82 3.04 −0.54 −1.35 3.76 3.79 −0.11 −1.28 −1.53 −3.01 0.51 2.78 0.55 −6.86 −0.12 8.76 −2.82

−4.07 2.27 −5.68 7.88 1.40 −3.73 3.00 −2.74 −6.19 10.87 6.03 7.44 −23.72 1.23 8.19 −2.93 −4.07

−1.42 3.24 −2.05 0.13 −2.73 1.68 2.77 −3.90 −4.57 4.59 10.56 −2.79 −8.22 −7.88 −5.99 10.37 −1.42

−2.87 −2.20 −5.28 −6.14 −2.60 −0.22 2.67 0.46 −3.16 −0.89 0.80 1.22 −2.76 −6.97 −9.24 5.74 −2.87

5.57 −3.23 1.18 5.85 −5.69 −1.15 4.37 2.15 0.80 −0.90 −3.40 2.77 6.66 −0.55 −5.27 3.67 5.57

−1.12 6.00 5.15 1.47 1.60 2.47 2.98 2.27 3.14 1.77 −2.64 −1.53 −1.19 −0.85 −2.42 −2.39 −1.12

3.44 −4.14 −2.81 −1.74 −1.36 −0.69 0.14 4.01 −0.79 −7.92 −3.28 6.57 5.35 −4.74 1.64 −9.91 3.44

0.06 −3.96 0.26 0.11 0.40 4.02 0.44 −3.33 1.64 0.58 0.90 1.04 0.44 −1.69 0.12 0.14 −0.31 1.23 −0.53 −2.57 −2.68 0.91 −1.59 2.84 0.36 2.02 0.64 −4.55 0.47 −14.25 −2.35 15.87 0.06 −3.96

−0.64 1.02 −0.19 2.96 1.78 0.55 0.87 0.57 −1.51 2.22 −0.91 −0.64 0.58 0.96 −1.78 0.50 −0.64

−0.81 −8.45 −2.33 5.18 1.45 −5.09 −5.18 3.66 −1.48 −0.05 −2.03 −5.29 −0.89 2.54 0.87 −4.74 −0.81

1.43 −0.68 −7.79 −1.76 2.96 2.50 0.53 −2.42 −4.10 0.26 2.30 −1.55 0.96 −7.32 −1.11 −1.56 −7.18 −5.04 2.20 0.34 17.96 1.05 2.91 3.45 −2.96 −6.23 −12.61 2.31 −11.98 −5.25 2.87 10.79 4.72 −0.65 −6.96 −2.64 6.18 −2.70 −5.76 9.81 5.10 3.98 6.01 1.99 3.35 −27.03 −3.47 −2.99 1.43 −0.68 −7.79

Weight values of connections between the hidden and output layer:
13.88 −20.72 2.85 −1.53 6.75 −13.28 −1.87 2.73

9.93 11.73 −1.68 28.1 1.92 −16.06 1.84 −23.74

−7.57 3.56 −1.42 4.56 3.1 4.82 8.21 −6.49 −1.83 −3.26 −10.73 9.89 5.08 −12.34 11.33 −5.70 1.78 2.81 8.10 4.36 0.45 −3.03 −17.47 3.19 −0.87 4.32 7.75 −0.40 −12.25 −0.68 10.65 3.70 −2.06 −6.50 −5.43 0.43 −9.33 3.33 −8.97 3.60 1.96 −10.89 −11.50 6.20 −4.04 −3.39 −7.74 −4.22 0.54 −11.85 14.86 −2.97 16.04 0.77 −5.91 0.37

−7.93 −11.45 −6.46 −2.88 −11.17 −3.59 −8.32 −4.82 3.01 −0.51 −6.83 −1.24 4.29 6.14 10.84 −0.52 15.17 −4.37 6.57 −2.87 −15.95 −18.01 3.22 3.86 −24.59 −14.82 −1.67 9.07 −3.93 −11.61 −6.17 −3.45 −4.28 −5.86 9.17 −17.48 18.81 3.63 −6.27 −5.29

11.84 5.82 5.24 18.83 9.24 4.70 −0.67 −2.35 10.21 16.47 −0.18 9.96 −8.79 −1.60 5.23 −14.61 −14.34 1.25 −1.80 2.33 −9.16 −14.51 4.34 −8.90

Biases of the hidden layer (neurons 1-17): 5.27, −1.43, 6.15, 0.66, −1.12, 1.41, −1.54, −1.31, −6.16, −3.49, 5.75, −4.34, 2.24, −0.87, −21.14, 2.35, −8.68

Biases of the output layer (neurons 1-8): −9.95, −15.11, −5.48, −2.41, −19.77, −16.98, −12.65, −17.42

Table 5. Classification accuracy for each reservoir formation using the optimal hybrid model, i.e., 16 × 17 × 8.

Data set       Diagnosis    HI    HS    HCP   HCB   DPI   DPS   DPCP   DPCB
Training set   Correct      250   260   228   265   228   266   330    375
               Incorrect      0     0    42     1     1     3    56      4
Testing set    Correct      107   109    95   114    96   114   137    161
               Incorrect      0     2    21     0     2     1    29      1

The characteristic shape of the dual porosity reservoir is the U-shaped pattern that often appears in the middle of its PD signal. Looking at the PD graphs of the dual porosity reservoirs presented earlier in Figs. 7 and 10, it is clear that the U-shaped pattern fails to appear only in the DPCP model. This makes the shapes of the HCP and DPCP responses nearly identical and therefore very similar. These similarities can confuse the classifier and decrease its classification accuracy.


Table 6. Statistical analysis of the correct detection of reservoir formations using the optimal hybrid model, i.e., 16 × 17 × 8.

Statistical parameter                 HI     HS     HCP     HCB    DPI     DPS     DPCP    DPCB
% sensitivity of training data set    100    100    84.44   99.6   99.6    99.89   85.49   98.94
% TCA of training data set            95.37
% sensitivity of testing data set     100    98.2   81.9    100    97.96   99.13   82.53   99.38
% TCA of testing data set             94.34

Fig. 12. Response of the developed approach (probability, %, per reservoir model) to the well-testing data of the Ahwaz reservoir.

It should be mentioned that, because reservoir model identification from PD signals is an inverse problem and often does not have a unique solution, the network outputs for a given PD pattern may indicate more than one reservoir model. In these cases, the model corresponding to the maximum output value is taken as the detected reservoir model. The quantitative assessment of the diagnostic performance of the proposed coupled DWT-MLPNN approach is accomplished with two statistical indexes, i.e., the sensitivity (S) and the TCA. The sensitivity is defined as the number of correct recognitions of an individual reservoir model divided by the total number of patterns of that model. The TCA is the fraction of all patterns that are recognized correctly. If Z_k is the fraction of patterns belonging to the kth reservoir model, the relation between the TCA and the sensitivity of each individual reservoir model is given by Eq. (7):

$TCA = \sum_{k=1}^{8} Z_k \times S_k$   (7)
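A short sketch of the sensitivity definition and Eq. (7), using the per-model test-set counts taken from Tables 1 and 5:

```python
import numpy as np

# correct recognitions and pattern counts per reservoir model (test set, Tables 1 and 5)
correct = np.array([107, 109,  95, 114,  96, 114, 137, 161])
totals  = np.array([107, 111, 116, 114,  98, 115, 166, 162])

sensitivity = correct / totals            # per-model sensitivity S_k
Z = totals / totals.sum()                 # fraction of patterns per model, Z_k
tca = float(np.sum(Z * sensitivity))      # Eq. (7)

print(np.round(100 * sensitivity, 2))     # 100. 98.2 81.9 100. 97.96 99.13 82.53 99.38
print(round(100 * tca, 2))                # 94.34, the test-set TCA reported in Table 6
```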

Table 6 presents the values of these statistical parameters multiplied by 100 to express them as percentages. The proposed model has classified the test datasets of the HI through DPCB models with sensitivities of 100, 98.2, 81.9, 100, 97.96, 99.13, 82.53, and 99.38, respectively, while the sensitivities of the coupled DWT-MLPNN model for the training datasets of the HI through DPCB reservoir models are 100, 100, 84.44, 99.6, 99.6, 99.89, 85.49, and 98.94, respectively. The proposed model shows overall classification accuracies of 95.37% and 94.34% for the synthetic training and test patterns, respectively.

The number of correct recognitions and the values of the sensitivity and classification accuracy of the proposed model, presented in Tables 5 and 6, confirm the excellent capability of the coupled DWT-MLPNN model for correctly detecting the oil formation model from pressure transient data. Based on our previous study [5], the back-propagation scaled conjugate gradient method [39] is the best training algorithm in terms of the CPU (central processing unit) time required for training the MLPNN [5]. Thus, in the present study, the scaled conjugate gradient algorithm is used for training the MLP network in the coupled DWT-MLPNN model.

3.4. Validation of the proposed model using actual field data

To examine the performance of the proposed model under real conditions, an actual field pressure transient pattern obtained from the Ahwaz reservoir (an Iranian reservoir) is utilized. The field data were obtained during a drawdown test on a dual porosity reservoir with a single sealing fault boundary, which corresponds to the DPS model in this study. First, the pressure-over-time information was converted to a normalized pressure derivative graph and then decomposed by the method described in the previous sections. The response of the developed coupled DWT-MLPNN model to this real field data is illustrated in Fig. 12. According to this figure, the proposed model detects the correct reservoir model (i.e., DPS) with a probability of nearly 85%, and the network outputs for all other models are below 6%.



Table 7. Comparison of the performance of the various developed approaches for reservoir model detection.

Model                                 Coupled MLPNN-WT   MLPNN [5]   Recurrent network [9]
Total number of signals               3298               960         960
Number of training signals (TRD^a)    2309 (70)          720 (75)    720 (75)
Number of test signals (TSD^b)        989 (30)           240 (25)    240 (25)
Number of signal samples              16                 33          33
Total number of neurons               25                 20          26
Number of weights and biases          433                512         476
Training TCA (%)                      95.37              96.03       98.51
Test TCA (%)                          94.34              95.83       98.39
Total TCA (%)                         95.06              95.98       98.48
MTCA                                  0.212              0.187       0.207

a (Number of training signals/total number of signals) × 100.
b (Number of test signals/total number of signals) × 100.

Indeed, the output of the proposed model implies that the field data belong to the DPS model with a probability of 85%.

3.5. Comparison of the coupled DWT-MLP with other approaches

The aim of this section is to compare the classification performance of the developed coupled DWT-MLPNN approach with other approaches constructed from recurrent and MLP networks [5,9]. As depicted in Table 7, the comparisons have been conducted based on the number of parameters of the classifiers and their classification accuracies for reservoir model detection. By analogy with curve fitting, a small classifier (here, a neural network) that uses fewer parameters usually has a better generalization capability. To achieve better generalization and avoid over-fitting, the flexibility of a developed ANN must be reduced [23]. Since the flexibility is directly related to the number of ANN parameters, increasing the number of network parameters increases its flexibility. To combine these effects in a unified criterion, the modified total classification accuracy (MTCA) is defined by Eq. (8):

$MTCA = \dfrac{\text{Total TCA}}{\text{Number of parameters of the classifier}}$   (8)

As can be seen from Eq. (8), this index weights the accuracy inversely by the number of parameters of the classifier; the approach with the higher value of this index offers the better combination of classification accuracy and generalization capability. It can be seen from Table 7 that the coupled DWT-MLPNN approach has the minimum number of parameters and hence the lowest flexibility and the highest generalization capability. Therefore, although the classification accuracy of the coupled DWT-MLP model is slightly lower than that of the MLPNN and the recurrent network, it has the smallest size and presents the best MTCA, i.e., 0.212.

4. Conclusions

In this paper, a coupled scheme based on the DWT and an MLPNN has been proposed for detecting oil reservoir models from long-term pressure transient data. Since a significant amount of a signal's information exists in its wavelet coefficients, the wavelet coefficients of the PD graphs are used as the inputs of the MLPNN. Oil reservoir model detection is performed in four stages: (1) the pressure over time is calculated by simulating the well testing operation, (2) the obtained pressure transient patterns are converted to PD signals, (3) the DWT coefficients of the PD signals are calculated, and (4) the optimal coupled approach, based on the approximate coefficients of the fifth decomposition level, is designed to identify the reservoir model. The excellent classification rates of the proposed coupled approach demonstrate that it can be applied effectively for reservoir model identification from PD signals. Although the proposed methodology is able to detect all of the considered reservoir models correctly, it cannot detect reservoir models that it was not trained to recognize. The performance of the proposed model in detecting other reservoir formation models, including hydraulically fractured models such as infinite conductivity, uniform flux, and finite conductivity fractures, and a comparison of its results with a coupled support vector machine-principal component analysis approach, will be investigated in our future study.

References

[1] M. Muskat, Use of data on the build-up of bottom-hole pressures, AIME 123 (1937) 44–48.
[2] R. Eslamloueyan, B. Vaferi, S. Ayatollahi, Fracture characterizations from well testing data using artificial neural networks, in: 72nd EAGE Conference and Exhibition, Barcelona, 2010.
[3] Z. Jeirani, A. Mohebbi, Estimating the pressure, permeability and skin factor of oil reservoirs using artificial neural networks, J. Pet. Sci. Eng. 50 (2006) 11–20.
[4] S. Akin, Integrated nonlinear regression analysis of tracer and well test data, J. Pet. Sci. Eng. 39 (2003) 29–44.
[5] B. Vaferi, R. Eslamloueyan, S. Ayatollahi, Automatic recognition of oil reservoir models from well testing data by using multi-layer perceptron networks, J. Pet. Sci. Eng. 77 (2011) 254–262.
[6] N. Ghaffarian, R. Eslamloueyan, B. Vaferi, Model identification for gas condensate reservoirs by using ANN method based on well test data, J. Pet. Sci. Eng. 123 (2014) 20–29.
[7] H.C. Yang, C.W. Chen, Potential hazard analysis from the viewpoint of flow measurement in large open-channel junctions, Nat. Hazards 61 (2012) 803–813.
[8] I. Güler, E.D. Übeyli, A recurrent neural network classifier for Doppler ultrasound blood flow signals, Pattern Recognit. Lett. 27 (2006) 1560–1571.
[9] B. Vaferi, R. Eslamloueyan, Sh. Ayatollahi, Application of recurrent networks to classification of oil reservoir models in well-testing analysis, Energy Sources A 37 (2015) 174–180.
[10] K.S. Swarp, Artificial neural network using pattern recognition for security assessment and analysis, Neurocomputing 71 (2008) 983–998.
[11] B.B. Chaudhuri, U. Bhattacharya, Efficient training and improved performance of multilayer perceptron in pattern classification, Neurocomputing 34 (2000) 11–27.
[12] S. Athichanagorn, R.N. Horne, Automatic parameter estimation from well test data using artificial neural network, in: SPE Annual Technical Conference and Exhibition, Dallas, Texas, 1995.
[13] J. Kikani, M. He, Multi-resolution analysis of long-term pressure transient data using wavelet methods, in: SPE Annual Technical Conference and Exhibition, New Orleans, 1998.
[14] S. Athichanagorn, R.N. Horne, J. Kikani, Processing and interpretation of long-term data acquired from permanent pressure gauges, SPE Reserv. Eval. Eng. 5 (2002) 384–391.
[15] S. Olsen, J.E. Nordtvedt, Improved wavelet filtering and compression of production data, in: Paper SPE 96800 Presented at the Offshore Europe, Aberdeen, 2005.
[16] B. Vaferi, V. Salimi, D. DehghanBaniani, A. Jahanmiri, S. Khedri, Prediction of transient pressure response in the petroleum reservoirs using orthogonal collocation, J. Pet. Sci. Eng. 98–99 (2012) 156–163.
[17] B. Vaferi, R. Eslamloueyan, Simulation of dynamic pressure response of finite gas reservoirs experiencing time varying flux in the external boundary, J. Nat. Gas Sci. Eng. 26 (2015) 240–252.
[18] C.W. Chen, Neural network-based fuzzy logic parallel distributed compensation controller for structural system, J. Vib. Control 19 (2013) 1709–1727.
[19] M. Lashkarbolooki, B. Vaferi, M.R. Rahimpour, Comparison the capability of artificial neural network (ANN) and EOS for prediction of solid solubilities in supercritical carbon dioxide, Fluid Phase Equilib. 308 (2011) 35–43.
[20] F. Fourati, M. Chtourou, M. Kamoun, Stabilization of unknown nonlinear systems using neural networks, Appl. Soft Comput. 8 (2008) 1121–1130.
[21] H. Rau, M.H. Tsai, C.W. Chen, W.J. Shiang, Learning-based automated negotiation between shipper and forwarder, Comput. Ind. Eng. 51 (2006) 464–481.
[22] J. Reyes, A. Morales-Esteban, F. Martínez-Álvarez, Neural networks to predict earthquakes in Chile, Appl. Soft Comput. 13 (2013) 1314–1328.
[23] S. Samarasinghe, Neural Networks for Applied Science and Engineering—From Fundamentals to Complex Pattern Recognition, Auerbach Publ., New York, 2007.
[24] K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Netw. 2 (1989) 359–366.
[25] K. Funahashi, On the approximate realization of continuous mappings by neural networks, Neural Netw. 2 (1989) 183–192.
[26] G.V. Cybenko, Approximation by superposition of a sigmoid function, Math. Control Signals Syst. 2 (1989) 303–314.
[27] E.J. Hartman, J.D. Keeler, J.M. Kowalski, Layered neural networks with Gaussian hidden units as universal approximations, Neural Comput. 2 (1990) 210–215.
[28] H.L. Royden, Real Analysis, 2nd ed., Macmillan, New York, 1968.
[29] C. Xiang, S.Q. Ding, T.H. Lee, Geometrical interpretation and architecture selection of MLP, IEEE Trans. Neural Netw. 16 (2005) 84–96.
[30] Z. Zainuddin, O. Pauline, Modified wavelet neural network in function approximation and its application in prediction of time-series pollution data, Appl. Soft Comput. 11 (2011) 4866–4874.
[31] C. Zanchettin, T.B. Ludermir, Wavelet filter for noise reduction and signal compression in an artificial nose, Appl. Soft Comput. 7 (2007) 246–256.
[32] S. Srivastava, M. Singh, M. Hanmandlu, A.N. Jha, New fuzzy wavelet neural networks for system identification and control, Appl. Soft Comput. 6 (2005) 1–17.
[33] O. Akyilmaz, H. Kutterer, C.K. Shum, T. Ayan, Fuzzy-wavelet based prediction of Earth rotation parameters, Appl. Soft Comput. 11 (2011) 837–841.
[34] A.N. Akansu, W.A. Serdijn, I.W. Selesnick, Emerging applications of wavelets: a review, Phys. Commun. 3 (2010) 1–18.
[35] I. Daubechies, The wavelet transform, time-frequency localization and signal analysis, IEEE Trans. Inf. Theory 36 (1990) 961–1005.
[36] D. Bourdet, J.A. Ayoub, Y.M. Pirard, Use of pressure derivative in well-test interpretation, SPE Formation Eval. 4 (1989) 293–302.
[37] I. Daubechies, Ten lectures on wavelets, in: CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, PA, 1992.
[38] V. Dua, A mixed-integer programming approach for optimal configuration of artificial neural networks, Chem. Eng. Res. Des. 88 (2010) 55–60.
[39] M.F. Moller, A scaled conjugate gradient algorithm for fast supervised learning, Neural Netw. 6 (1993) 525–533.