
Journal Pre-proof

Please cite this article as: Ara, M.M., Menad, N.A., Ghazanfari, M.H., Hemmati-Sarapardeh, A., Modeling relative permeability of gas condensate reservoirs: Advanced computational frameworks, Journal of Petroleum Science and Engineering (2020), https://doi.org/10.1016/j.petrol.2020.106929 (PII: S0920-4105(20)30028-0; Reference: PETROL 106929).
Received: 4 March 2019; Revised: 26 September 2019; Accepted: 6 January 2020.
© 2020 Published by Elsevier B.V.

Modeling Relative Permeability of Gas Condensate Reservoirs: Advanced Computational Frameworks

Mehdi Mahdavi Ara a, Nait Amar Menad b, Mohammad Hossein Ghazanfari c, Abdolhossein Hemmati-Sarapardeh d,e,*

a Department of Petroleum Engineering, Islamic Azad University, Omidieh, Iran
b Département Etudes Thermodynamiques, Division Laboratoires, Sonatrach, Boumerdes, Algeria
c Chemical and Petroleum Engineering Department, Sharif University of Technology, Tehran, Iran
d Department of Petroleum Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
e College of Construction Engineering, Jilin University, Changchun, China

Abstract: In recent years, appreciable effort has been directed toward developing empirical models that link the relative permeability of gas condensate reservoirs to interfacial tension and velocity as well as saturation. However, these models suffer from a lack of universality and from uncertainty in setting their tuning parameters. To alleviate these shortcomings, comprehensive modeling was carried out in this study by employing numerous smart computer-aided algorithms, namely Support Vector Regression (SVR), Least Square Support Vector Machine (LSSVM), Extreme Learning Machine (ELM), Multilayer Perceptron (MLP), Group Method of Data Handling (GMDH), and Gene Expression Programming (GEP) as predictors, and Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Levenberg-Marquardt Algorithm (LMA), Bayesian Regularization (BR), Scaled Conjugate Gradient (SCG), and Randomized Polynomial Time (RP) as optimizers. To this end, a wide-ranging and reliable databank encompassing more than 1000 data points from eight sets of experimental data was utilized in the training and testing steps of the modeling process. The predictors were integrated with the optimization algorithms to assign the optimum tuning parameters of each model. The modeling was implemented in two different manners from the standpoint of model inputs: (1) 2-input (saturation and capillary number); and (2) 3-input (saturation, interfacial tension, and capillary number). A comparison between these strategies demonstrates that the models are more accurate when three independent parameters are employed as inputs (3-input). Among the developed models, the MLP-LMA algorithm outperformed all others, with root mean square errors (RMSEs) of 0.035 and 0.019 for the gas and condensate phases, respectively. Finally, in a comparison between both the 2-input and 3-input MLP-LMA models and five traditional literature models, both smart modeling approaches established themselves as the most accurate techniques for estimating relative permeability in gas condensate reservoirs.

Keywords: Relative permeability; gas condensate; empirical correlation; artificial intelligence; comprehensive modeling

*Corresponding author: A. Hemmati-Sarapardeh ([email protected] & [email protected])


1. Introduction

Production from gas condensate reservoirs is associated with complexities in phase behavior, phase distribution in porous media, and modeling (Dandekar, 2015). Deliverability reduction as a result of condensate banking in near-wellbore regions lies at the heart of the problem. Gas condensate mixtures are mainly composed of light hydrocarbons and a rather small portion of heavier hydrocarbon components. When the reservoir pressure decreases in response to depletion, a condensate phase drops out of the gas phase, mainly in the near-well regions where the pressure falls below the fluid's dew point pressure. This phenomenon reduces gas deliverability; in effect, the most valuable components of the gas condensate become a hindrance to gas flow. Furthermore, the condensate begins to flow in the reservoir once its saturation reaches the critical value, so a two-phase system develops (or three phases in the presence of water), in which the phases compete to flow through the porous medium (Fig. 1).

During the last decade, modeling fluid flow in gas condensate and near-critical reservoirs has attracted considerable attention. Relative permeability is one of the most significant and widely used parameters in modeling such complex systems. However, calculating this parameter in gas condensate reservoirs is complicated by the multiplicity of inputs and the uncertainty in choosing optimal values for tuning parameters (Tani et al., 2014). For conventional reservoirs, several researchers have attempted to correlate relative permeability mainly as a function of phase saturation as the dominant variable (Brooks and Corey, 1964, Stone, 1973, Lomeland et al., 2005). In gas condensate/near-critical reservoirs, however, the interfacial tension (IFT) and velocity may be equally essential. The improvement of relative permeability due to a reduction in IFT has been widely accepted (Amaefule and Handy, 1982b, Chukwudeme et al., 2014). This relationship becomes prominent when the IFT is lower than its critical (base) value (Longeron, 1980, Asar and Handy, 1988, Kalla et al., 2014). Furthermore, an increase in velocity improves the relative permeability of both phases (Henderson et al., 1997, Mott et al., 1999, Henderson et al., 2000b, Jamiolahmady et al., 2008). The positive effect of reduced IFT and of low/moderate velocities on relative permeability later became known as the coupling effect (Henderson et al., 2000b, Jamiolahmady et al., 2009). Conversely, relative permeability decreases at high velocities as a result of the inertia effect (non-Darcy flow). Accordingly, the interaction of the coupling and inertia effects determines the shape and magnitude of the relative permeability curves (Henderson et al., 2000a).

Over the last decades, several researchers have proposed empirical models for estimating gas condensate relative permeability, differing in the parameters employed as inputs. Early models relied only on IFT and saturation, because at the time the effect of velocity on relative permeability was not understood (Coats, 1980, Nghiem et al., 1981, Betté et al., 1991). Later, most researchers captured the positive effect of IFT and velocity (the coupling effect) by employing the capillary number (Nc) (Whitson and Fevang, 1997, Shaidi, 1997, Jamiolahmady et al., 2009). This dimensionless parameter is defined as the ratio of viscous to capillary forces. The magnitude of the capillary forces is determined by IFT, wettability, and pore geometry (Delshad et al., 1986), while the magnitude of the viscous forces is set by the fluid viscosity, flow velocity, and flow path length (Fulcher Jr et al., 1985). Hence, the capillary number includes both the IFT and velocity effects simultaneously. Accordingly, an increase in Nc improves the relative permeability of the gas and condensate phases. At low Nc, traditional (immiscible) relative permeability behavior is observed; at high Nc, the relative permeability curves trend toward a straight line (miscible-like behavior) and the residual saturation tends toward zero (Whitson et al., 1999). However, Jamiolahmady et al. state that the presence of IFT in the denominator of the Nc equation is not enough to express the dependency on IFT (Jamiolahmady et al., 2006).
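For orientation, one common form of the capillary number used in gas condensate studies is given below; the exact definition varies among the works cited above, so this expression should be read as an illustrative convention rather than the form adopted by any particular author:

$$N_c = \frac{\mu_g \, v_g}{\sigma}$$

where $\mu_g$ is the gas viscosity, $v_g$ the gas velocity, and $\sigma$ the gas-condensate IFT; larger $N_c$ indicates viscous forces dominating capillary forces.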

The correlations proposed in the literature can be divided into two groups (Blom and Hagoort, 1998): (1) modifications of the Corey equation to include Nc (Fulcher Jr et al., 1985, Blom and Hagoort, 1998, App and Burger, 2009); and (2) interpolation between miscible and immiscible relative permeability curves using an Nc-dependent weighting factor (Coats, 1980). Existing studies and surveys, such as those conducted by Coats (1980), Amaefule and Handy (1982a), Betté et al. (1991), Whitson and Fevang (1997), Henderson et al. (2000b), and Jamiolahmady et al. (2009), recognize the Coats (1980) model as the preferred approach for determining Kr. Nevertheless, the aforementioned correlations are applicable only in the absence of experimental data, because they are associated with limitations. In addition, some of these models have a large number of regression parameters (constants). The value of each constant is not unique across different rock and fluid systems, because these models have been derived from a limited number of experimental data, so choosing the best values is not a simple task. Above all, there is uncertainty in selecting the most appropriate model for estimating relative permeability in a given reservoir, because the accuracy and precision of each model vary from case to case.

To deal with the aforementioned challenges, it is essential to apply robust predictive methods. Computer-based simulators and artificial intelligence approaches are breakthrough technologies that have recently attracted a great deal of attention. They are considered reliable methods for overcoming widespread challenges in different areas of science, including petroleum engineering, as they can connect input data to outputs in an unconventional manner and thereby generate improved predictive models (Velez-Langs and Engineering, 2005). Several authors have addressed modeling of reservoir absolute permeability using various machine learning approaches, including artificial neural networks (ANN), genetic algorithms (GA), and fuzzy logic (Ahmadi and Chen, 2018, Basbug and Karpyn, 2007, Cheng et al., 2009, Huang et al., 1996, Singh, 2005, Wiener et al., 1991, Soto et al., 1997, Aliouane et al., 2014). Moreover, some studies have predicted formation porosity using the same soft computing strategies (Wong et al., 1995, Leiphart and Hart, 2001, Soto et al., 1997, Ahmadi and Chen, 2018). A number of authors have used ANN methodologies to assess water saturation, irreducible water saturation, and fluid distribution in hydrocarbon deposits (Goda et al., 2007, Cvetković et al., 2009, Al-Bulushi et al., 2009). In addition to these static reservoir properties, a substantial and growing body of literature has investigated the application of artificial intelligence techniques for estimating dynamic reservoir properties such as two/three-phase relative permeability (Silpngarmlers and Ertekin, 2002, Silpngarmlers et al., 2001, Guler et al., 2003, Ahmadi, 2015, Arigbe et al., 2019). However, to the best of our knowledge, these studies are confined to the relative permeability of conventional oil and gas reservoirs.

In this study, we implement several artificial intelligence techniques to establish reliable models for estimating relative permeability in gas condensate reservoirs, considering two sets of input parameters. To this end, comprehensive modeling was carried out by applying various intelligent techniques, including Support Vector Regression (SVR), Least Square Support Vector Machine (LSSVM), Extreme Learning Machine (ELM), Group Method of Data Handling (GMDH), Multilayer Perceptron (MLP), and Gene Expression Programming (GEP) as predictors. Furthermore, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Levenberg-Marquardt Algorithm (LMA), Bayesian Regularization (BR), Scaled Conjugate Gradient (SCG), and Randomized Polynomial Time (RP) were used as optimizers of the aforementioned predictive methods. A wide-ranging dataset encompassing various rock and fluid properties and different experimental conditions was adopted from the literature for the modeling process (Longeron, 1980, Asar and Handy, 1988, Haniff and Ali, 1990, Chen et al., 1995, Henderson et al., 1997, Henderson et al., 1998, Calisgan and Akin, 2008, Blom et al., 1997). Two different strategies were considered for model construction: (1) saturation and capillary number as inputs (2-input modeling); and (2) saturation, IFT, and capillary number as inputs (3-input modeling). To assess the performance of the developed models, the root mean square error (RMSE) and the coefficient of determination (R²) were utilized as statistical quality measures. The validation process was implemented in two steps: first, the developed models were compared to each other and the best model was chosen; then, the best model was compared with traditional literature models. Finally, correlations were developed using the GEP and GMDH algorithms for predicting gas and condensate relative permeabilities under both of the above-mentioned strategies, namely 2-input and 3-input modeling.


2. Data Gathering


The value and shape of relative permeability curves can be affected by several factors, including reservoir rock and fluid properties, experimental conditions, and the measurement method. Therefore, employing a wide variety of reliable databanks is essential for comprehensive modeling. Hence, a widespread databank of gas/condensate relative permeability as a function of saturation, IFT, and capillary number was gathered from the published literature (Longeron, 1980, Asar and Handy, 1988, Haniff and Ali, 1990, Chen et al., 1995, Henderson et al., 1997, Henderson et al., 1998, Calisgan and Akin, 2008, Blom et al., 1997). This databank encompasses more than 1000 data points, including 576 and 441 points for the gas and condensate phases, respectively. Table 1 summarizes the experimental details of the datasets utilized in this study. As is evident, the databank covers a broad range of rock and fluid systems, experimental conditions, and experimental methods, such as steady-state (SS) and unsteady-state (USS) measurements. Accordingly, this multitude of data points measured under various experimental conditions supports the development of general models for predicting relative permeability in different gas condensate reservoirs.
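For readers reproducing this workflow, a minimal sketch of assembling such a databank is given below; the DataFrame rows, column names, and source labels are hypothetical placeholders for illustration only, not the authors' actual files or values.

```python
import pandas as pd

# Illustrative rows only; the real databank aggregates >1000 points from the cited sources.
# Column names (source, phase, saturation, ift, nc, kr) are hypothetical placeholders.
data = pd.DataFrame({
    "source":     ["source A", "source B"],
    "phase":      ["gas", "condensate"],
    "saturation": [0.55, 0.40],
    "ift":        [0.90, 0.05],      # mN/m, placeholder values
    "nc":         [2.1e-5, 8.3e-6],  # capillary number, placeholder values
    "kr":         [0.32, 0.12],      # relative permeability, placeholder values
})

# Inputs for the 2-input (saturation, Nc) and 3-input (saturation, IFT, Nc) schemes.
gas = data[data["phase"] == "gas"]
X2 = gas[["saturation", "nc"]].to_numpy()
X3 = gas[["saturation", "ift", "nc"]].to_numpy()
y = gas["kr"].to_numpy()
```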


3. Description of models

3.1 Predictive Methods

3.1.1 Support Vector Regression (SVR)

Support Vector Machine (SVM) (Cortes and Vapnik, 1995) is a set of related supervised machine learning algorithms which can be employed for classification (SVC) as well as regression (SVR) tasks. Support Vector Regression (SVR) has been widely adopted due to its ability to model non-linear relationships in many projects pertinent to petroleum and mining engineering (Gholami and Fakhari, 2017). The basic idea behind SVR has been widely described in the literature (Scholkopf and Smola, 2001, Gholami and Fakhari, 2017, Smola et al., 2004, Nasrabadi, 2007, Cristianini and Shawe-Taylor, 2000); therefore, for the sake of brevity, only a brief explanation of the SVR concept is provided here. For a given dataset $\{(x_1, y_1), \ldots, (x_N, y_N)\}$ with $x \in R^d$ as the d-dimensional input space and $y \in R$ as the output vector, SVR aims to acquire a regression function $f(x)$ based on the input data for predicting the output, as expressed below:

$$f(x) = w \cdot \varphi(x) + b \qquad (1)$$

where $w$ is a weight vector, $b$ is a bias, and $\varphi(x)$ is a high-dimensional mapping into the feature space. To do so, the following constrained minimization problem is formulated:

$$\min \ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \left(\xi_i + \xi_i^*\right) \qquad (2)$$

subject to

$$\begin{cases} y_i - w \cdot \varphi(x_i) - b \le \varepsilon + \xi_i^* \\ w \cdot \varphi(x_i) + b - y_i \le \varepsilon + \xi_i \\ \xi_i,\ \xi_i^* \ge 0, \qquad i = 1, 2, \ldots, N \end{cases} \qquad (3)$$

where $\varepsilon$ stands for the fixed precision of the function approximation, $\xi_i$ and $\xi_i^*$ are the slack variables, and $C$ denotes a positive regularization parameter which determines the amount of deviation from $\varepsilon$. Choosing too small a value for $\varepsilon$ leaves some data outside the specified precision, which makes the solution unattainable. Hence, the slack variables define the acceptable margin of error and guard against infeasible solutions (Esfahani et al., 2015, Hemmati-Sarapardeh et al., 2016). The final form of the SVR equation is as follows:

$$f(x) = \sum_{i=1}^{N} \left(\alpha_i - \alpha_i^*\right)\left[\varphi(x_i) \cdot \varphi(x)\right] + b \qquad (4)$$

where $\alpha_i \ge 0$ and $\alpha_i^* \le C$ are Lagrangian multipliers, and the constant $C > 0$ is one of the tuning parameters of SVR that determines the trade-off between the training error and model simplicity (Üstün et al., 2007). The entity $b$ stands for the bias, namely the regression function's offset. The entity $\varphi$ denotes the mapping function, and the non-linear mapping term $[\varphi(x_i) \cdot \varphi(x_j)]$ represents the mapping from the input space to the feature space. This term can often be approximated by a so-called Kernel function:

$$K(x_i, x_j) = \left[\varphi(x_i) \cdot \varphi(x_j)\right] \qquad (5)$$

The Kernel function models a non-linear relationship in a linear manner by transforming the primary input data into a high-dimensional feature space. Solving the constrained problem of Eqs. (2) and (3) directly requires dealing with a quadratic programming problem, which is immensely difficult and time-consuming, especially when the number of data points is large. Some of the most common Kernel functions are the linear and polynomial inner-product functions and the Radial Basis Function (RBF) (Üstün et al., 2007). In the current study, the RBF Kernel function was employed as follows (Esfahani et al., 2015):

$$K(x_i, x) = \exp\left(-\|x_i - x\|^2 / \sigma^2\right) \qquad (6)$$

where $\sigma^2$ is the control parameter of the RBF Kernel function that must be tuned. Accordingly, the parameters $C$ and $\varepsilon$, together with the RBF control parameter $\sigma^2$, are the hyper-parameters of the SVR model. To choose appropriate values of these parameters, genetic algorithm (GA) and Particle Swarm Optimization (PSO) were implemented in this study.
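A minimal sketch of an SVR predictor with an RBF kernel is given below, assuming scikit-learn as the implementation; the hyper-parameter values shown are placeholders standing in for the GA/PSO-optimized values reported in Table 4, not the authors' actual settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: columns (saturation, IFT, Nc) for the 3-input scheme; y: relative permeability.
# Placeholder random data so the sketch runs end to end.
X = np.random.rand(100, 3)
y = np.random.rand(100)

# C, epsilon and gamma (= 1/sigma^2) correspond to the SVR hyper-parameters of Eqs. (2)-(6);
# in the paper they are tuned by GA/PSO, here they are fixed placeholders.
model = make_pipeline(
    StandardScaler(),
    SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=1.0),
)
model.fit(X, y)
kr_pred = model.predict(X)
```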


3.1.2 Least Square Support Vector Machine (LSSVM)

In 1999, Suykens and Vandewalle (Suykens and Vandewalle, 1999) proposed an alternative formulation of the standard SVM known as the Least Square Support Vector Machine (LSSVM). In the LS version of SVM, a set of linear equations is solved instead of the quadratic programming problem (Suykens and Vandewalle, 1999), which leads to promising simplifications of the learning process. This algorithm modifies the cost function of the SVM as follows (Suykens and Vandewalle, 1999, Suykens et al., 2002):

$$\min \ J(w, e) = \frac{1}{2} w^T w + \frac{1}{2}\gamma \sum_{i=1}^{N} e_i^2 \qquad (7)$$

subject to the following constraint:

$$y_i = w^T \varphi(x_i) + b + e_i \qquad (8)$$

where $w$ is the regression weight and the superscript $T$ stands for the matrix transpose. The entities $\gamma$ and $e_i$ represent the tuning parameter and the error variable of the LSSVM algorithm, respectively. The parameters of the model can be obtained by differentiating the Lagrange function of the LSSVM with respect to $w$, $b$, $e_i$, and $\alpha_i$ and equating the results to zero, as follows (Suykens and Vandewalle, 1999, Suykens et al., 2002):

$$\begin{cases} \dfrac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^{N} \alpha_i \varphi(x_i) \\[4pt] \dfrac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^{N} \alpha_i = 0 \\[4pt] \dfrac{\partial L}{\partial e_i} = 0 \;\Rightarrow\; \alpha_i = \gamma e_i, \qquad i = 1, 2, \ldots, N \\[4pt] \dfrac{\partial L}{\partial \alpha_i} = 0 \;\Rightarrow\; w^T \varphi(x_i) + b + e_i - y_i = 0, \qquad i = 1, 2, \ldots, N \end{cases} \qquad (9)$$

Thus, the parameters of the LSSVM can be obtained by solving the above system, which encompasses 2N + 2 equations and 2N + 2 unknowns ($\alpha_i$, $e_i$, $w$, and $b$). It is noteworthy that the reliability of the LSSVM depends mainly on its control parameters, namely $\gamma$ and $\sigma^2$; therefore, GA and PSO are used to optimize these parameters in this investigation.
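A minimal numpy sketch of the LSSVM solution implied by Eq. (9), written as the usual dual linear system with an RBF kernel, is given below; the values of gamma and sigma2 are placeholders, whereas in the paper they are tuned by GA/PSO.

```python
import numpy as np

def rbf_kernel(X, Z, sigma2):
    """RBF kernel matrix K(x_i, z_j) = exp(-||x_i - z_j||^2 / sigma2), as in Eq. (6)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma2)

def lssvm_fit(X, y, gamma=100.0, sigma2=0.5):
    """Solve the linear system that replaces the SVR quadratic program (placeholder gamma/sigma2)."""
    n = len(y)
    K = rbf_kernel(X, X, sigma2)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma2=0.5):
    return rbf_kernel(X_new, X_train, sigma2) @ alpha + b
```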


3.1.3 Extreme Learning Machine (ELM)

In 2005, Huang et al. (Huang et al., 2006) proposed a new learning algorithm called Extreme Learning Machine (ELM), based on the Single Hidden Layer Feedforward Network (SLFN) architecture, which can be employed directly in classification and regression applications. According to (Huang et al., 2006, Huang and Siew, 2005), the ELM technique outperforms traditional gradient-based learning methods from the standpoints of learning speed, scalability, and generalization performance. In this algorithm, the hidden-node parameters, including the input weights and the neuron biases, are not tuned but are assigned randomly; the ELM then determines the output weights analytically (Huang and Siew, 2005). Given these advantages, the ELM reduces the time required for optimizing the model parameters. The mathematical expression of the SLFN is as follows (Yaseen et al., 2018):

$$f_L(x) = \sum_{i=1}^{L} \beta_i h_i(x) = h(x)\,\beta \qquad (10)$$

where $L$ represents the number of hidden neurons, $\beta = [\beta_1, \beta_2, \ldots, \beta_L]^T$ is the output weight matrix connecting the hidden and output neurons, and $h_i(x) = G(a_i, b_i, x)$ is the output of the $i$th hidden neuron, which represents the randomized hidden features of the predictor obtained through a non-linear piecewise continuous function $G$ (Yaseen et al., 2018). The employed non-linear function $G$, which embeds the two hidden-neuron parameters $(a, b)$, must satisfy the ELM approximation theorem (Huang et al., 2006, Huang et al., 2015). In the current study, the sigmoid equation is utilized for developing the ELM model (Yaseen et al., 2018):

$$G(a, b, x) = \frac{1}{1 + \exp\left[-(a\,x + b)\right]} \qquad (11)$$

In the subsequent stage of ELM learning, $\beta$ is obtained by minimizing the approximation error in the squared-error sense (Huang et al., 2015):

$$\min_{\beta \in R^{L \times m}} \ \|H\beta - T\|^2 \qquad (12)$$

where $H$ and $T$ stand for the randomized hidden-layer output matrix and the training-data target matrix, respectively (Huang et al., 2015). An optimal solution to Eq. (12) can be represented as follows:

$$\beta^* = H^{\dagger} T \qquad (13)$$

where $H^{\dagger}$ stands for the Moore-Penrose generalized inverse of the matrix $H$ (Huang et al., 2015). Consequently, an appropriate method such as an iterative scheme, the orthogonal projection method, Gaussian elimination, or singular value decomposition (SVD) is used to solve Eq. (13) for conducting an appropriate prediction (Huang et al., 2015).
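A compact numpy sketch of the ELM training step (Eqs. 10-13) is given below, assuming a sigmoid activation and a randomly initialized hidden layer; the number of hidden neurons is a placeholder, not the value used by the authors.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, rng=np.random.default_rng(0)):
    """Random hidden layer (Eq. 11) + least-squares output weights via pseudo-inverse (Eq. 13)."""
    n_features = X.shape[1]
    a = rng.standard_normal((n_features, n_hidden))   # random input weights (not tuned)
    b = rng.standard_normal(n_hidden)                  # random hidden biases (not tuned)
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))             # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ y                       # output weights, beta* = pinv(H) @ T
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))
    return H @ beta
```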


3.1.4 Group Method of Data Handling (GMDH)

The polynomial neural network (PNN), or so-called Group Method of Data Handling (GMDH) approach (Ivakhnenko and Cybernetics, 1971), is a heuristic self-organizing type of neural network which makes use of polynomial equations to deal with a wide variety of sophisticated non-linear problems in different areas, including petroleum engineering. This modeling approach attempts to establish a relationship between the input variables and a single target by developing a tree-like multilayer network structure embodying a set of quadratic neurons in different layers (Sadi et al., 2018). In fact, this method utilizes feedforward networks in which the nodes of each layer are determined from the neurons of the previous layer through quadratic transfer functions (Rostami et al., 2018, Farlow, 1984). For this purpose, the discrete form of the Volterra functional series, known as the Kolmogorov-Gabor polynomial, is used:

$$y = f(x_1, \ldots, x_n) = a_0 + \sum_{i=1}^{n} a_i x_i + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij} x_i x_j + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} a_{ijk} x_i x_j x_k + \cdots \qquad (14)$$

where $(x_1, x_2, \ldots, x_n)$ and $(a_0, a_1, \ldots, a_n)$ represent the vectors of input variables and polynomial coefficients, respectively. Quadratic polynomial functions are then employed to combine the nodes of the previous layer and generate new nodal parameters as follows (Dargahi-Zarandi et al., 2017, Hemmati-Sarapardeh and Mohagheghian, 2017):

$$\hat{y} = G(x_i, x_j) = a + b\,x_i + c\,x_j + d\,x_i^2 + e\,x_j^2 + f\,x_i x_j \qquad (15)$$

Tuning the constants of the above equation is carried out by minimizing the error over the training data:

$$E_j = \frac{1}{N_t}\sum_{i=1}^{N_t} \left(y_i - G_i\right)^2 < \varepsilon, \qquad j = 1, 2, \ldots \qquad (16)$$

where $N_t$ denotes the number of training data points. The following matrix formulation is used to solve the problem (Dargahi-Zarandi et al., 2017, Hemmati-Sarapardeh and Mohagheghian, 2017):

$$Y = A^T X \qquad (17)$$

or

$$A^T = Y X^T \left(X X^T\right)^{-1} \qquad (18)$$

where $Y = [y_1, y_2, \ldots, y_N]$ and $A = [a, b, c, d, e, f]$. As the algorithm iterates, new intermediate parameters substitute the previous ones until the calculation error reaches its minimum. For an ample mathematical description of the GMDH, readers are referred to the literature (Sadi et al., 2018, Farlow, 1984, Rostami et al., 2018, Ivakhnenko et al., 1995, Ivakhnenko and Cybernetics, 1971, Dargahi-Zarandi et al., 2017, Hemmati-Sarapardeh and Mohagheghian, 2017). Fig. 2 represents a schematic description of the GMDH methodology for establishing a relationship between the input variables and the target.
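The core building block is the quadratic neuron of Eq. (15), whose six coefficients follow from ordinary least squares as in Eqs. (17)-(18). A minimal numpy sketch of fitting one such neuron for a pair of inputs is shown below; it is illustrative only, since the full GMDH additionally selects and stacks the best neurons layer by layer.

```python
import numpy as np

def fit_quadratic_neuron(xi, xj, y):
    """Least-squares fit of y ~ a + b*xi + c*xj + d*xi^2 + e*xj^2 + f*xi*xj (Eq. 15)."""
    xi, xj, y = (np.asarray(v, float) for v in (xi, xj, y))
    X = np.column_stack([np.ones_like(xi), xi, xj, xi**2, xj**2, xi * xj])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # [a, b, c, d, e, f]

def quadratic_neuron(xi, xj, coeffs):
    a, b, c, d, e, f = coeffs
    return a + b * xi + c * xj + d * xi**2 + e * xj**2 + f * xi * xj
```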


3.1.5 Multilayer Perceptron (MLP)

The multilayer perceptron (MLP) is a popular type of artificial neural network (ANN) that comprises three kinds of layers: an input layer, an output layer, and one or more hidden layers. This feedforward network embeds several nonlinear elements (neurons) in each layer, with biases assigned to them and interconnections, known as weights, connecting the layers (Fath et al., 2018). Nodes of one layer are connected to all nodes in the subsequent layer (Lek and Park, 2008). The MLP carries out the training phase on the basis of the back-propagation (BP) algorithm, using a set of input/target pairs (Lek and Park, 2008, Fath et al., 2018, Haykin et al., 2009). The error value is then calculated to assess the performance of the network, after which the error is minimized by tuning the weights and biases of the network.
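A minimal sketch of an MLP regressor with the 11-9-9 hidden-layer architecture reported later in Section 4.1 is shown below, assuming scikit-learn; scikit-learn does not provide the Levenberg-Marquardt, Bayesian Regularization, SCG, or RP trainers used in the paper, so a generic quasi-Newton solver stands in purely for illustration, and the data are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: (saturation, IFT, Nc) for the 3-input scheme; y: relative permeability (placeholder data).
X = np.random.rand(200, 3)
y = np.random.rand(200)

mlp = make_pipeline(
    StandardScaler(),
    # Three hidden layers of 11, 9 and 9 neurons, mirroring the 3-11-9-9-1 architecture;
    # tanh plays the role of the Tansig transfer function, and 'lbfgs' stands in for the
    # second-order trainers (LMA/BR/SCG/RP) used in the paper.
    MLPRegressor(hidden_layer_sizes=(11, 9, 9), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0),
)
mlp.fit(X, y)
kr_pred = mlp.predict(X)
```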


3.1.6 Gene Expression Programming (GEP)

Gene Expression Programming (GEP) (Ferreira, 2001) is an adaptive evolutionary algorithm developed on the basis of the Genetic Algorithm (GA) and Genetic Programming (GP) techniques. One of the main advantages of this learning algorithm is that it generates accessible correlations between the input and output parameters. The GEP model can establish different computer programs, including conventional mathematical models and decision trees. GEP encompasses two main entities, namely chromosomes of fixed length and expression trees (ETs) of different shapes and sizes. The modeling process starts with the random creation of an initial population of chromosomes, each of which embeds multiple genes. Each gene is a fixed-length string comprising mathematical function symbols (e.g., addition, subtraction, multiplication, square root, sin) and the predictor variables (e.g., a, b, c), which constitute the function (head) and terminal (tail) domains, respectively (Zhong et al., 2017, Hong et al., 2018). The computer programs of GEP are encoded in these genes using the Karva expression, or K-expression (Ferreira, 2001). Subsequently, each chromosome can be expressed, or translated, into an expression tree. For example, consider a chromosome whose two genes decode to $\sin(a \times b)$ and a square-root expression of the terminals, the chromosome being the difference of the two genes:

$$\sin(a \times b) - \sqrt{(\,\cdot\,)} \qquad (19)$$

with the function domain $F = \{\sin, \times, -, \sqrt{\ }, /\}$ and the terminal domain $T = \{a, b\}$ of these genes.

As can be seen in Fig. 3, the aforementioned chromosome can be converted into an ET, and each gene can be expressed according to its own K-expression. Accordingly, the GEP modeling is developed in five steps (Hong et al., 2018, Hajirezaie et al., 2015): (1) Generation: randomly generating the chromosomes of the initial population; (2) Expression: converting the generated chromosomes into expression trees (ETs); (3) Execution: executing each program consisting of chromosomes; (4) Evaluation: evaluating the generated chromosomes by determining the fitness via an appropriate cost function; and (5) Reproduction: modifying the chromosomes through the main genetic operators, including replication, mutation, transposition, and recombination, and then iterating until the termination criterion is fulfilled.


3.2 Optimization Methods


In this study, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Levenberg-Marquardt Algorithm (LMA), Bayesian Regularization (BR), Scaled Conjugate Gradient (SCG), and Randomized Polynomial Time (RP) were coupled with the aforementioned prediction techniques for tuning the model parameters in an appropriate manner. To keep the work concise, and because these algorithms have been described in our previous works, the reader is referred to the literature for a broad outline of their theoretical background (Goldberg and Holland, 1988, Booker et al., 1989, Kennedy and Eberhart, 1995, Shi and Eberhart, 1998, Levenberg, 1944, Marquardt and Mathematics, 1963, Burden and Winkler, 2008, Møller, 1993, Andrei and Applications, 2007, Gasarch, 2014, Amar et al., 2018, Redouane et al., 2018, Hemmati-Sarapardeh et al., 2018). Fig. 4 to Fig. 6 illustrate the flowcharts of the SVR, LSSVM, and MLP models coupled with the aforementioned optimization algorithms, respectively.


4. Results and Discussion


4.1 Developing the models


In this study, a total of 1017 data points, including 576 gas points and 441 condensate points, were gathered to predict the gas condensate relative permeability over a wide range of rock and fluid systems using smart modeling approaches. These data sets encompass various input variables, including saturation, IFT, velocity, and capillary number. As mentioned above, the relative permeability primarily changes as a function of saturation. Furthermore, it increases with a decrease in IFT and with an increase in velocity (at low/moderate velocities). The capillary number, which captures the effects of both IFT and velocity, represents the competition between the capillary and viscous forces in governing the flow behavior. The capillary forces govern the pore-scale fluid flow at small capillary numbers (high IFT/low velocity). As the capillary number increases, the viscous forces strengthen and the relative permeability grows; consequently, the relative permeability of both the gas and condensate phases increases and its curve tends toward a straight line.


A sensitivity analysis was implemented to investigate the relevancy of the different variables, including saturation, IFT, velocity, and capillary number, to the relative permeability. For this purpose, the (Henderson et al., 1998) dataset, in which the relative permeability varies over a wide range of input values, was utilized. The relevancy factor of each input variable was assessed using the following expression:

$$r_k = \frac{\sum_{i=1}^{n}\left(x_{k,i} - \bar{x}_k\right)\left(y_i - \bar{y}\right)}{\sqrt{\sum_{i=1}^{n}\left(x_{k,i} - \bar{x}_k\right)^2 \; \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}} \qquad (20)$$

where $k$ and $i$ denote the type of input and the index of the data point, respectively; $x$ and $y$ stand for the input and output parameters, respectively; and $\bar{x}_k$ and $\bar{y}$ are the averages of the input and target, respectively. The relevancy factor ranges between -1 and +1, corresponding to a thoroughly inverse and a thoroughly direct relationship between the input and the target, respectively.
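A one-function numpy implementation of the relevancy factor of Eq. (20), which is equivalent to the Pearson correlation coefficient between an input column and the target, could look like the following sketch.

```python
import numpy as np

def relevancy_factor(x, y):
    """Relevancy factor of Eq. (20): Pearson correlation between input x and target y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    num = np.sum((x - x.mean()) * (y - y.mean()))
    den = np.sqrt(np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2))
    return num / den

# Example usage with placeholder arrays (the paper reports r = 0.8833 between Nc and
# gas relative permeability for the Henderson et al. (1998) data):
# r_nc = relevancy_factor(nc_values, krg_values)
```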

Results of the analysis are graphically depicted in Fig. 7. As can be seen, the analysis supports the experimental findings reported in the literature. The saturation parameter has high positive relevancy factors of 0.4798 and 0.6804 for the gas and condensate phases, respectively. Furthermore, the capillary number (Nc), which simultaneously encompasses the IFT and velocity parameters, strongly influences the relative permeability, with relevancy factors of 0.8833 and 0.6257 for the gas and condensate phases, respectively. In addition to the capillary number and saturation, the IFT and velocity also have a remarkable impact on the relative permeability of gas condensate reservoirs; however, a closer examination of the results reveals that IFT plays a stronger role than velocity. Hence, in line with the earlier work of (Jamiolahmady et al., 2006), utilizing IFT as an individual input parameter may offer better performance in predicting relative permeability in gas condensate reservoirs.


According to the above sensitivity analysis, the modeling was carried out in two different manners from the standpoint of input parameters, namely (1) saturation and Nc (2-input modeling), and (2) saturation, IFT, and Nc (3-input modeling). A statistical description of the employed data, including maximum, minimum, mean, and standard deviation (SD) values, is given in Table 2. Comprehensive modeling was carried out by employing the aforementioned prediction algorithms integrated with the optimization techniques. For developing the models, about 80% of the available data were randomly allocated to the training set, and the remaining 20% of the data points were used to investigate the accuracy and validity of the modeling technique. Accordingly, eleven models were delineated for each phase (gas and condensate). Four of the models are SVR and LSSVM integrated with the GA and PSO optimizers; the values of the GA and PSO setting parameters used in this study are given in Table 3, and the obtained hyper-parameters of the developed SVR models and the key parameters of the established LSSVM models are given in Table 4 and Table 5, respectively. Another four models are MLP algorithms coupled with the LMA, BR, SCG, and RP optimizers. A network with three hidden layers was considered in all of these MLP-based models. In these modeling approaches, the Tansig function was found to be the most efficient transfer function for the input and hidden layers, and the Pureline function was assigned to the output layer. The most appropriate architectures for the gas and condensate models of MLP-LMA, MLP-BR, MLP-SCG, and MLP-RP were found to be 3-11-9-9-1, 3-11-9-9-1, 3-11-11-8-1, and 3-11-10-9-1, respectively, for 2-input modeling, and 3-11-9-9-1, 3-11-10-9-1, 3-11-12-8-1, and 3-11-11-9-1, respectively, for the 3-input modeling approach. These architectures denote "the number of inputs - number of neurons in the first, second, and third hidden layers - number of outputs". The last three models are ELM, GMDH, and GEP, which benefit from their self-organizing capability. The control parameters of the GEP model, including the head size and the numbers of chromosomes, genes, and population members, the mutation and inversion rates, and the employed operators, are given in Table 6.


4.2 Accuracy analysis of the developed models


The accuracy and validity of each model for estimating the relative permeability of the gas and condensate phases were investigated using statistical parameters, namely the root mean square error (RMSE) and the coefficient of determination (R²), defined as follows:

$$\text{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(K_{r,\exp,i} - K_{r,\text{pred},i}\right)^2} \qquad (21)$$

$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(K_{r,\exp,i} - K_{r,\text{pred},i}\right)^2}{\sum_{i=1}^{N}\left(K_{r,\exp,i} - \overline{K_{r,\exp}}\right)^2} \qquad (22)$$

It should be mentioned, however, that R-squared is not a strictly valid measure of accuracy for non-linear problems, and its values are reported in this study only as a matter of convention.
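A minimal numpy sketch of the two quality measures in Eqs. (21)-(22) is given below.

```python
import numpy as np

def rmse(kr_exp, kr_pred):
    """Root mean square error, Eq. (21)."""
    kr_exp, kr_pred = np.asarray(kr_exp, float), np.asarray(kr_pred, float)
    return np.sqrt(np.mean((kr_exp - kr_pred) ** 2))

def r_squared(kr_exp, kr_pred):
    """Coefficient of determination, Eq. (22)."""
    kr_exp, kr_pred = np.asarray(kr_exp, float), np.asarray(kr_pred, float)
    ss_res = np.sum((kr_exp - kr_pred) ** 2)
    ss_tot = np.sum((kr_exp - kr_exp.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```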


The values of the aforementioned statistical parameters for all of the models developed in the current study are summarized in Table 7 and Table 8 for the gas and condensate phases, respectively. As is evident, all of the employed models in both modeling approaches, namely 2-input and 3-input, provide acceptable accuracy for estimating the gas and condensate relative permeability. A visual representation of these results is given in the cross plots of Fig. 8 to Fig. 11, in which the predicted relative permeability is plotted against the corresponding actual data of the training and testing steps for all models employed in this study. As can be seen, the training and test data points of each model lie near the unit-slope line. However, a number of scattered points appear in some cross plots, such as those of ELM and GMDH, revealing the weakness of the corresponding models in fitting the experimentally measured relative permeability data.


A visual comparison between the two employed approaches, namely 2-input and 3-input, is presented in Fig. 12 for the gas and condensate phases. Higher accuracy is achieved for all of the models when three independent parameters are used as model inputs, although the ELM model is an exception when estimating Krg. Furthermore, in the 3-input modeling method, the MLP predictor integrated with the LMA optimizer outperforms all other models, with RMSE values of 0.035 and 0.019 for the gas and condensate phases, respectively.


A comparison between the actual experimental relative permeability points employed in the training and testing steps and the predictions made using the MLP-LMA model is presented in Fig. 13 and Fig. 14 for the gas and condensate data, respectively. The plots demonstrate the high accuracy and reliability of this model for estimating the gas and condensate relative permeability in both the training and testing steps. In addition, the distributions of the training and test dataset errors obtained in constructing the MLP-LMA model are depicted in the histogram plots of Fig. 15 and Fig. 16 for the gas and condensate phases, respectively. As is evident, the most frequent errors associated with the training and testing procedures fall consistently in the range of -0.06 to 0.06, with the peak located at zero. In addition, the histograms show a symmetric distribution of the errors without any skewness. Hence, the accumulation of the errors in a symmetric and limited domain confirms the high accuracy of the MLP-LMA model.


In addition to the accuracy assessment, the generated models were evaluated in terms of their time and memory requirements. The results of this analysis are reported in Table 9, and a visual comparison of the time/memory attributed to each model is shown in Fig. 17. It is worth noting that the calculations were made using an Intel® Core™ i7-7700HQ 2.80 GHz CPU and 16 GB of RAM. As can be seen, the 2-input and 3-input strategies show only a moderate difference in terms of running time and required memory. Furthermore, the results indicate that the MLP-based models require the lowest running time and memory, while LSSVM, SVR, and GEP have the highest running times and need more memory.


In statistics, outliers are unlikely extreme points which are distinguishable from the rest of the data. An outlier may have detrimental effects on the accuracy of the developed models; therefore, detecting these abnormal points is an essential step in the modeling procedure (Hemmati-Sarapardeh et al., 2018). Several methods and algorithms are capable of detecting and excluding outliers from the rest of the points. In this study, the Leverage approach (Goodall, 1993, Gramatica, 2007, Rousseeuw and Leroy, 2005) was utilized within the William's plot framework to identify the outliers, namely the points which differ from the bulk of the data. Fig. 18 and Fig. 19 show the William's plots of the gas and condensate phases, respectively, in which the standardized residual (R) of the MLP-LMA model is plotted versus the Hat indices (H) to survey the reliability of the model. The entity $H^*$ denotes the warning Leverage, defined as $H^* = 3(f + 1)/p$, where $f$ is the number of model parameters and $p$ stands for the number of data points (Hemmati-Sarapardeh et al., 2016). Points located outside the range $-3 \le R \le 3$ are known as "Bad High Leverage" points and are considered suspected data or outliers (Hemmati-Sarapardeh et al., 2018). In addition, "Good High Leverage" points are the data situated at $H \ge H^*$ and $-3 \le R \le 3$; although these data can be predicted well, they are not located in the applicability domain of the model, and thus employing the model to predict other data in that range is associated with doubtful accuracy (Hemmati-Sarapardeh et al., 2018). The squared area within $\pm 3$ standardized residuals and below the Leverage threshold $H^*$ is considered the applicability domain of the model (Khosrokhavar et al., 2010). As is evident, the majority of the data points are situated in the applicability domain of the MLP-LMA model, namely $-3 \le R \le 3$, $0 \le H \le 0.0272$ for the gas phase and $-3 \le R \le 3$, $0 \le H \le 0.0209$ for the condensate phase. Only 1.4% of the gas data and 3.6% of the condensate data are located in the Bad High Leverage area. In addition, there are no gas points in the Good High Leverage domain, and only about 0.45% of the condensate points are situated in this area. Accordingly, this is further evidence of the high reliability and statistical soundness of the aforementioned model.
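A minimal numpy sketch of the quantities underlying the William's plot (Hat indices, standardized residuals, and the warning Leverage $H^* = 3(f + 1)/p$) is given below; it illustrates the Leverage approach described above and is not the authors' exact post-processing script.

```python
import numpy as np

def williams_plot_quantities(X, kr_exp, kr_pred):
    """Hat indices, standardized residuals and warning leverage for a Williams plot."""
    X = np.asarray(X, float)                      # input matrix (e.g. saturation, IFT, Nc)
    H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat matrix computed from the input matrix
    hat = np.diag(H)                               # Hat (leverage) index of each point
    residuals = np.asarray(kr_exp, float) - np.asarray(kr_pred, float)
    std_res = (residuals - residuals.mean()) / residuals.std(ddof=1)
    f, p = X.shape[1], X.shape[0]                  # f: model parameters, p: data points
    h_star = 3.0 * (f + 1) / p                     # warning leverage H*
    return hat, std_res, h_star

# Points with |std_res| > 3 are flagged as suspected outliers ("Bad High Leverage");
# points with hat >= h_star but |std_res| <= 3 are "Good High Leverage".
```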


Finally, the performance of the proposed MLP-LMA model in estimating gas condensate relative permeability curves was appraised by overlaying the predicted points on the exact experimental data (Fig. 20). As demonstrated in the figure, the predicted relative permeability-saturation curves fit the actual measured values closely for both the gas and condensate phases, which indicates the high performance and reliability of the MLP-LMA model.


4.3 Comparison between the smart and traditional modeling techniques


A comparison between the MLP-LMA model and five traditional literature models (Coats, 1980, Nghiem et al., 1981, Shaidi, 1997, Whitson and Fevang, 1997, Jamiolahmady et al., 2009) was carried out in order to provide a further evaluation of the smart modeling approaches. These traditional models use a direct interpolation between the immiscible and miscible relative permeabilities to estimate the gas condensate relative permeability at different capillary numbers:

$$K_{r\alpha} = f_{\alpha}\, K_{r\alpha,I} + \left(1 - f_{\alpha}\right) K_{r\alpha,M} \qquad (23)$$

where $\alpha$ denotes the gas or condensate phase. $K_{r\alpha,I}$ and $K_{r\alpha,M}$ are the relative permeability values at the immiscible (minimum Nc) and miscible (maximum Nc) conditions, respectively, between which all other relative permeability points are allocated. The entity $f_{\alpha}$ stands for the weighting factor, which varies between zero and one for the fully miscible and immiscible states, respectively. The weighting factors employed in the five aforementioned literature models are listed in Table 10.

12

aforementioned literature models are represented in Table 10. Regression analysis was carried

13

out to find the good fitness of these traditional models on aforesaid experimental data points used

14

in this study. The accuracy of the models was investigated using the statistical parameter of

15

RMSE. The constant parameters, as well as RMSE values for each model, are represented in

16

Table 11. A visual comparison between theses traditional models and the MLP-LMA model is

17

demonstrated in Fig. 21 for gas and condensate phases. As it is evident, both 2-input and 3-input

18

modeling strategies of MLP-LMA outperform all traditional models from the standpoint of

19

accuracy. Therefore, considering the more accuracy of the smart modeling approaches together

20

with difficulties associated with the traditional models including additional calculations for

21 22

determination of the 4y„ and 4y` , choosing the best function for weighting factors, and the

tough task of assigning the best constant parameters for the models, endorse the more 23

1

applicability of the smart modeling to predict the relative permeability in gas condensate

2

reservoirs.


4.4 GEP and GMDH derived mathematical expressions


Finally, the GEP and GMDH models were employed to formulate optimum explicit relationships between the inputs and the corresponding relative permeabilities in the form of user-accessible mathematical expressions.

The GEP-based expressions were developed according to the two aforementioned modeling approaches, namely 2-input and 3-input, as follows:

1. Considering the saturation and Nc as the inputs (2-input):

$$K_{rg} = F\left(A_{g,2} + B_{g,2} + C_{g,2}\right) \qquad (24)$$

$$K_{rc} = F\left(A_{c,2} + B_{c,2} + C_{c,2}\right) \qquad (25)$$

2. Considering the saturation, IFT, and Nc as the inputs (3-input):

$$K_{rg} = F\left(A_{g,3} + B_{g,3} + C_{g,3} + D_{g,3} + E_{g,3}\right) \qquad (26)$$

$$K_{rc} = F\left(A_{c,3} + B_{c,3} + C_{c,3} + D_{c,3}\right) \qquad (27)$$

where $F$ denotes the GEP-derived outer function of each correlation; A, B, C, D, and E are saturation-, IFT-, and Nc-dependent parameters; the subscripts g and c denote the gas and condensate phases, respectively; and the subscripts 2 and 3 stand for the 2-input and 3-input modeling approaches, respectively. The aforesaid parameters of the correlations can be calculated using the equations represented in Table 12. To evaluate the estimation capability of these correlations, the RMSE values represented in Table 7 and Table 8 were utilized. As shown in the tables, in the 2-input approach the gas and condensate RMSE values are 0.1148 and 0.0594, respectively, and in the 3-input approach these values are 0.0719 and 0.0349, respectively. Accordingly, the proposed GEP models are accurate enough to estimate the relative permeability in gas condensate reservoirs.


The GMDH-derived mathematical expressions for the 2-input and 3-input modeling approaches are as follows:

1. 2-input modeling approach:

$$K_{r,i} = a_{2,i} + b_{2,i}\,N_c - c_{2,i}\,S - d_{2,i}\,N_c^2 + e_{2,i}\,S^2 + f_{2,i}\,N_c\,S \qquad (28)$$

2. 3-input modeling approach:

$$K_{r,i} = a_{3,i} - b_{3,i}\,N_1 \mp c_{3,i}\,N_2 + d_{3,i}\,N_1^2 + e_{3,i}\,N_2^2 - f_{3,i}\,N_1 N_2 \qquad (29)$$

where the subscript $i$ denotes the gas or condensate phase; a, b, c, d, e, and f are constant parameters; $N_1$ and $N_2$ stand for two GMDH neurons, which are saturation/IFT and saturation/Nc dependent, respectively; and in the minus-plus sign of Eq. (29) the (-) and (+) operators are employed in the calculation of the relative permeability of the gas and condensate phases, respectively. A complete illustration of the GMDH-derived mathematical expressions is given in Table 13. According to Table 7 and Table 8, in the 2-input GMDH modeling the RMSE values are 0.1349 and 0.0888, and in the 3-input approach they are 0.1224 and 0.0840, for the gas and condensate phases, respectively. Hence, the comparison between the RMSE values of the GEP and GMDH models demonstrates that the former outperforms the latter from the standpoints of accuracy and reliability.


5. Conclusions


In this study, various smart models were developed to predict the relative permeability of gas condensate reservoirs, employing more than 1000 data points from eight sets of experimental data. To this end, two different modeling approaches, namely 2-input (saturation and Nc) and 3-input (saturation, IFT, and Nc) modeling, were considered. A comparison between the developed models was carried out to identify the most reliable algorithm; the optimum model was then compared with five traditional literature models using the RMSE as the statistical criterion. These assessments led to the following conclusions:


1. All of the smart models employed in the current study successfully estimate the relative permeability of the gas and condensate phases with sufficiently high accuracy.

2. Both the 2-input and 3-input modeling approaches are reliable for predicting the relative permeability of both the gas and condensate phases. For instance, in modeling the gas relative permeability with the MLP-LMA algorithm, the RMSE is 0.0717 for the 2-input strategy and 0.0350 for the 3-input strategy. Furthermore, both approaches show approximately similar time/memory requirements. Accordingly, the 3-input strategy yields higher accuracy for the scope of the current study.

3. The MLP predictor integrated with the LMA optimization algorithm (3-input) outperforms the other smart models, with RMSE values of 0.035 and 0.019 for the gas and condensate phases, respectively.

4. Both MLP-LMA models, developed using the 2-input and 3-input approaches, are more accurate than the traditional literature models. For instance, in the case of gas relative permeability modeling, the RMSE values are 0.035 for the MLP-LMA (3-input) model and 0.0922 for the Jamiolahmady et al. (2009) correlation (the best among the literature models). Accordingly, the smart modeling approach has established itself as the most reliable technique for estimating relative permeability in gas condensate reservoirs.

5. Two sets of user-accessible correlations were developed using the GEP and GMDH methods. RMSE values of 0.0719, 0.1224, 0.0349, and 0.084 are attributed to the GEP (gas), GMDH (gas), GEP (condensate), and GMDH (condensate) models, respectively. Hence, the GEP model surpasses the GMDH approach from the standpoint of reliability in developing mathematical expressions.


References AHMADI, M. A. & CHEN, Z. J. P. 2018. Comparison of machine learning methods for estimating permeability and porosity of oil reservoirs via petro-physical logs. AHMADI, M. A. J. F. 2015. Connectionist approach estimates gas–oil relative permeability in petroleum reservoirs: application to reservoir simulation. 140, 429-439. AL-BULUSHI, N., KING, P. R., BLUNT, M. J., KRAAIJVELD, M. J. J. O. P. S. & ENGINEERING 2009. Development of artificial neural network models for predicting water saturation and fluid distribution. 68, 197-208. ALIOUANE, L., OUADFEUL, S.-A., DJARFOUR, N. & BOUDELLA, A. 2014. Permeability prediction using artificial neural networks. A comparative study between back propagation and Levenberg– Marquardt learning algorithms. Mathematics of Planet Earth. Springer. AMAEFULE, J. O. & HANDY, L. L. 1982a. The Effect of Interfacial Tensions on Relative Oil/Water Permeabilities of Consolidated Porous Media. Society of Petroleum Engineers Journal, 22, 371381. AMAEFULE, J. O. & HANDY, L. L. J. S. O. P. E. J. 1982b. The effect of interfacial tensions on relative oil/water permeabilities of consolidated porous media. 22, 371-381. AMAR, M. N., ZERAIBI, N., REDOUANE, K. J. A. J. F. S. & ENGINEERING 2018. Optimization of WAG Process Using Dynamic Proxy, Genetic Algorithm and Ant Colony Optimization. 1-14. ANDREI, N. J. C. O. & APPLICATIONS 2007. Scaled conjugate gradient algorithms for unconstrained optimization. 38, 401-416. APP, J. F. & BURGER, J. E. 2009. Experimental Determination of Relative Permeabilities for a Rich Gas/Condensate System Using Live Fluid. SPE Reservoir Evaluation & Engineering, 12, 263-269. ARIGBE, O. D., OYENEYIN, M., ARANA, I., GHAZI, M. J. J. O. P. E. & TECHNOLOGY, P. 2019. Real-time relative permeability prediction using deep learning. 9, 1271-1284. ASAR, H. & HANDY, L. L. J. S. R. E. 1988. Influence of interfacial tension on gas/oil relative permeability in a gas-condensate system. 3, 257-264. BASBUG, B. & KARPYN, Z. T. Estimation of permeability from porosity, specific surface area, and irreducible water saturation using an artificial neural network. Latin American & Caribbean Petroleum Engineering Conference, 2007. Society of Petroleum Engineers. 27


BETTÉ, S., HARTMAN, K., HEINEMANN, R. J. J. O. P. S. & ENGINEERING 1991. Compositional modeling of interfacial tension effects in miscible displacement processes. 6, 1-14. BLOM, S. & HAGOORT, J. How to include the capillary number in gas condensate relative permeability functions? SPE Annual Technical Conference and Exhibition, 1998. Society of Petroleum Engineers. BLOM, S., HAGOORT, J. & SOETEKOUW, D. Relative permeability at near-critical conditions. SPE Annual Technical Conference and Exhibition, 1997. Society of Petroleum Engineers. BOOKER, L. B., GOLDBERG, D. E. & HOLLAND, J. H. 1989. Classifier systems and genetic algorithms. BROOKS, R. & COREY, A. J. C. S. U., HYDRO PAPER 1964. Hydraulic properties of porous media. 3, 27. BURDEN, F. & WINKLER, D. 2008. Bayesian regularization of neural networks. Artificial neural networks. Springer. CALISGAN, H. & AKIN, S. J. T. O. P. E. J. 2008. Near critical gas condensate relative permeability of carbonates. 1. CHEN, H., WILSON, S. & MONGER-MCCLURE, T. Determination of relative permeability and recovery for North Sea gas condensate reservoirs. SPE Annual Technical Conference and Exhibition, 1995. Society of Petroleum Engineers. CHENG, G.-J., CAI, L. & PAN, H.-X. Comparison of extreme learning machine with support vector regression for reservoir permeability prediction. 2009 International Conference on Computational Intelligence and Security, 2009. IEEE, 173-176. CHUKWUDEME, E. A., FJELDE, I., ABEYSINGHE, K. P., LOHNE, A. J. S. R. E. & ENGINEERING 2014. Effect of interfacial tension on water/oil relative permeability on the basis of history matching to coreflood data. 17, 37-48. COATS, K. H. 1980. An Equation of State Compositional Model. Society of Petroleum Engineers Journal, 20, 363-376. CORTES, C. & VAPNIK, V. J. M. L. 1995. Support-vector networks. 20, 273-297. CRISTIANINI, N. & SHAWE-TAYLOR, J. 2000. An introduction to support vector machines and other kernelbased learning methods, Cambridge university press. CVETKOVIĆ, M., VELIĆ, J. & MALVIĆ, T. J. G. C. 2009. Application of neural networks in petroleum reservoir lithology and saturation prediction. 62, 115-121. DANDEKAR, A. 2015. Critical evaluation of empirical gas condensate correlations. Journal of Natural Gas Science and Engineering, 27, 298-305. DARGAHI-ZARANDI, A., HEMMATI-SARAPARDEH, A., HAJIREZAIE, S., DABIR, B. & ATASHROUZ, S. J. J. O. M. L. 2017. Modeling gas/vapor viscosity of hydrocarbon fluids using a hybrid GMDH-type neural network system. 236, 162-171. DELSHAD, M., BHUYAN, D., POPE, G. & LAKE, L. Effect of capillary number on the residual saturation of a three-phase micellar solution. SPE Enhanced Oil Recovery Symposium, 1986. Society of Petroleum Engineers. ESFAHANI, S., BASELIZADEH, S., HEMMATI-SARAPARDEH, A. J. J. O. N. G. S. & ENGINEERING 2015. On determination of natural gas density: least square support vector machine modeling approach. 22, 348-358. FARLOW, S. J. 1984. Self-organizing methods in modeling: GMDH type algorithms, CrC Press. FATH, A. H., MADANIFAR, F. & ABBASI, M. J. P. 2018. Implementation of multilayer perceptron (MLP) and radial basis function (RBF) neural networks to predict solution gas-oil ratio of crude oil systems. FERREIRA, C. 2001. Gene Expression Programming: A New Adaptive Algorithm for Solving Problems. FULCHER JR, R. A., ERTEKIN, T. & STAHL, C. J. J. O. P. T. 1985. Effect of capillary number and its constituents on two-phase relative permeability curves. 37, 249-260. GASARCH, W. 2014. Classifying problems into complexity classes. 
Advances in computers. Elsevier.


GHOLAMI, R. & FAKHARI, N. 2017. Chapter 27 - Support Vector Machine: Principles, Parameters, and Applications. In: SAMUI, P., SEKHAR, S. & BALAS, V. E. (eds.) Handbook of Neural Computation. Academic Press. GODA, H. M., MAIER, H. & BEHRENBRUCH, P. Use of artificial intelligence techniques for predicting irreducible water saturation-Australian hydrocarbon basins. Asia Pacific Oil and Gas Conference and Exhibition, 2007. Society of Petroleum Engineers. GOLDBERG, D. E. & HOLLAND, J. H. J. M. L. 1988. Genetic algorithms and machine learning. 3, 95-99. GOODALL, C. R. 1993. 13 Computation using the QR decomposition. Handbook of Statistics. Elsevier. GRAMATICA, P. 2007. Principles of QSAR models validation: internal and external. 26, 694-701. GULER, B., ERTEKIN, T. & GRADER, A. J. J. O. C. P. T. 2003. An artificial neural network based relative permeability predictor. 42. HAJIREZAIE, S., HEMMATI-SARAPARDEH, A., MOHAMMADI, A. H., POURNIK, M., KAMARI, A. J. J. O. N. G. S. & ENGINEERING 2015. A smooth model for the estimation of gas/vapor viscosity of hydrocarbon fluids. 26, 1452-1459. HANIFF, M. & ALI, J. Relative permeability and low tension fluid flow in gas condensate systems. European Petroleum Conference, 1990. Society of Petroleum Engineers. HAYKIN, S. S., HAYKIN, S. S., HAYKIN, S. S. & HAYKIN, S. S. 2009. Neural networks and learning machines, Pearson Upper Saddle River. HEMMATI-SARAPARDEH, A., AMELI, F., DABIR, B., AHMADI, M. & MOHAMMADI, A. H. J. F. P. E. 2016. On the evaluation of asphaltene precipitation titration data: modeling and data assessment. 415, 88-100. HEMMATI-SARAPARDEH, A. & MOHAGHEGHIAN, E. J. F. 2017. Modeling interfacial tension and minimum miscibility pressure in paraffin-nitrogen systems: Application to gas injection processes. 205, 80-89. HEMMATI-SARAPARDEH, A., VARAMESH, A., HUSEIN, M. M., KARAN, K. J. R. & REVIEWS, S. E. 2018. On the evaluation of the viscosity of nanofluid systems: Modeling and data assessment. 81, 313329. HENDERSON, G., DANESH, A., TEHRANI, D. & AL-KHARUSI, B. The relative significance of positive coupling and inertial effects on gas condensate relative permeabilities at high velocity. SPE Annual Technical Conference and Exhibition, 2000a. Society of Petroleum Engineers. HENDERSON, G., DANESH, A., TEHRANI, D., AL-SHAIDI, S., PEDEN, J. J. S. R. E. & ENGINEERING 1998. Measurement and correlation of gas condensate relative permeability by the steady-state method. SPE Journal, 1, 134-140. HENDERSON, G., DANESH, A., TEHRANI, D., PEDEN, J. J. J. O. P. S. & ENGINEERING 1997. The effect of velocity and interfacial tension on relative permeability of gas condensate fluids in the wellbore region. 17, 265-273. HENDERSON, G. D., DANESH, A., AL-KHARUSI, B., TEHRANI, D. J. J. O. P. S. & ENGINEERING 2000b. Generating reliable gas condensate relative permeability data used to develop a correlation with capillary number. 25, 79-91. HONG, T., JEONG, K. & KOO, C. J. A. E. 2018. An optimized gene expression programming model for forecasting the national CO2 emissions in 2030 using the metaheuristic algorithms. 228, 808820. HUANG, G.-B. & SIEW, C.-K. J. I. J. O. I. T. 2005. Extreme learning machine with randomly assigned RBF kernels. 11, 16-24. HUANG, G.-B., ZHU, Q.-Y. & SIEW, C.-K. J. N. 2006. Extreme learning machine: theory and applications. 70, 489-501. HUANG, G., HUANG, G.-B., SONG, S. & YOU, K. J. N. N. 2015. Trends in extreme learning machines: A review. 61, 32-48. 29


HUANG, Z., SHIMELD, J., WILLIAMSON, M. & KATSUBE, J. J. G. 1996. Permeability prediction with artificial neural network modeling in the Venture gas field, offshore eastern Canada. 61, 422436. IVAKHNENKO, A., IVAKHNENKO, G. J. P. R. & IZOBRAZHENII, I. A. C. C. O. R. O. I. A. 1995. The review of problems solvable by algorithms of the group method of data handling (GMDH). 5, 527-535. IVAKHNENKO, A. G. J. I. T. O. S., MAN, & CYBERNETICS 1971. Polynomial theory of complex systems. 364-378. JAMIOLAHMADY, M., DANESH, A., TEHRANI, D. H. & SOHRABI, M. 2006. Variations of Gas/Condensate Relative Permeability With Production Rate at Near-Wellbore Conditions: A General Correlation. SPE Reservoir Evaluation & Engineering, 9, 688-697. JAMIOLAHMADY, M., SOHRABI, M. & IRELAND, S. Gas-Condensate Relative Permeabilities in Propped Fracture Porous Media: Coupling vs. Inertia. SPE Annual Technical Conference and Exhibition, 2008. Society of Petroleum Engineers. JAMIOLAHMADY, M., SOHRABI, M., IRELAND, S., GHAHRI, P. J. J. O. P. S. & ENGINEERING 2009. A generalized correlation for predicting gas–condensate relative permeability at near wellbore conditions. 66, 98-110. KALLA, S., LEONARDI, S. A., BERRY, D. W., POORE, L. D., SAHOO, H., KUDVA, R. A. & BRAUN, E. M. Factors That Affect Gas-Condensate Relative Permeability. IPTC 2014: International Petroleum Technology Conference, 2014. KENNEDY, J. & EBERHART, R. Particle swarm optimization. Proceedings of ICNN'95 - International Conference on Neural Networks, 27 Nov.-1 Dec. 1995 1995. 1942-1948 vol.4. KHOSROKHAVAR, R., GHASEMI, J. B. & SHIRI, F. J. I. J. O. M. S. 2010. 2D quantitative structure-property relationship study of mycotoxins by multiple linear regression and support vector machine. 11, 3052-3068. LEIPHART, D. J. & HART, B. S. J. G. 2001. Comparison of linear regression and a probabilistic neural network to predict porosity from 3-D seismic attributes in Lower Brushy Canyon channeled sandstones, southeast New Mexico. 66, 1349-1358. LEK, S. & PARK, Y. S. 2008. Multilayer Perceptron. In: JØRGENSEN, S. E. & FATH, B. D. (eds.) Encyclopedia of Ecology. Oxford: Academic Press. LEVENBERG, K. J. Q. O. A. M. 1944. A method for the solution of certain non-linear problems in least squares. 2, 164-168. LOMELAND, F., EBELTOFT, E. & THOMAS, W. H. A new versatile relative permeability correlation. International Symposium of the Society of Core Analysts, Toronto, Canada, 2005. LONGERON, D. J. S. O. P. E. J. 1980. Influence of very low interfacial tensions on relative permeability. 20, 391-401. MARQUARDT, D. W. J. J. O. T. S. F. I. & MATHEMATICS, A. 1963. An algorithm for least-squares estimation of nonlinear parameters. 11, 431-441. MØLLER, M. F. J. N. N. 1993. A scaled conjugate gradient algorithm for fast supervised learning. 6, 525533. MOTT, R., CABLE, A. & SPEARING, M. A new method of measuring relative permeabilities for calculating gas-condensate well deliverability. SPE Annual Technical Conference and Exhibition, 1999. Society of Petroleum Engineers. NASRABADI, N. M. J. J. O. E. I. 2007. Pattern recognition and machine learning. 16, 049901. NGHIEM, L. X., FONG, D. & AZIZ, K. J. S. O. P. E. J. 1981. Compositional Modeling With an Equation of State (includes associated papers 10894 and 10903). 21, 687-698. REDOUANE, K., ZERAIBI, N. & NAIT AMAR, M. 2018. Automated Optimization of Well Placement via Adaptive Space-Filling Surrogate Modelling and Evolutionary Algorithm. Abu Dhabi International Petroleum Exhibition & Conference. 
Abu Dhabi, UAE: Society of Petroleum Engineers.


ROSTAMI, A., KAMARI, A., PANACHAROENSAWAD, E. & HASHEMI, A. J. J. O. T. T. I. O. C. E. 2018. New empirical correlations for determination of Minimum Miscibility Pressure (MMP) during N2contaminated lean gas flooding. 91, 369-382. ROUSSEEUW, P. J. & LEROY, A. M. 2005. Robust regression and outlier detection, John wiley & sons. SADI, M., SHAHRABADI, A. J. J. O. P. S. & ENGINEERING 2018. Evolving robust intelligent model based on group method of data handling technique optimized by genetic algorithm to predict asphaltene precipitation. 171, 1211-1222. SCHOLKOPF, B. & SMOLA, A. J. 2001. Learning with kernels: support vector machines, regularization, optimization, and beyond, MIT press. SHAIDI, S. M. A. 1997. Modelling of gas-condensate flow in reservoir at near wellbore conditions. HeriotWatt University. SHI, Y. & EBERHART, R. A modified particle swarm optimizer. 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), 4-9 May 1998 1998. 69-73. SILPNGARMLERS, N. & ERTEKIN, T. Artificial neural network architectures for predicting two-phase and three-phase relative permeability characteristics. SPE Annual Technical Conference and Exhibition, 2002. Society of Petroleum Engineers. SILPNGARMLERS, N., GULER, B., ERTEKIN, T. & GRADER, A. Development and testing of two-phase relative permeability predictors using artificial neural networks. SPE Latin American and Caribbean Petroleum Engineering Conference, 2001. Society of Petroleum Engineers. SINGH, S. Permeability prediction using artificial neural network (ANN): a case study of Uinta Basin. SPE Annual Technical Conference and Exhibition, 2005. Society of Petroleum Engineers. SMOLA, A. J., SCHÖLKOPF, B. J. S. & COMPUTING 2004. A tutorial on support vector regression. 14, 199222. SOTO, B., ARDILA, J., FEMEYNES, H. & BEJARANO, A. Use of Neural Networks to Predict the Permeability and Porosity of Zone" C" of the Cantagallo Field in Colombia. SPE Petroleum Computer Conference, 1997. Society of Petroleum Engineers. STONE, H. J. J. O. C. P. T. 1973. Estimation of three-phase relative permeability and residual oil data. 12. SUYKENS, J. A., VAN GESTEL, T. & DE BRABANTER, J. 2002. Least squares support vector machines, world scientific. SUYKENS, J. A. & VANDEWALLE, J. J. N. P. L. 1999. Least squares support vector machine classifiers. 9, 293-300. TANI, K., YAMADA, T. & IKEDA, S. 2014. Application of Velocity-Dependent Relative Permeability for Modelling Gas-Condensate Reservoirs: Field Example. SPE Asia Pacific Oil & Gas Conference and Exhibition. Adelaide, Australia: Society of Petroleum Engineers. ÜSTÜN, B., MELSSEN, W. J. & BUYDENS, L. M. C. 2007. Visualisation and interpretation of Support Vector Regression models. Analytica Chimica Acta, 595, 299-309. VELEZ-LANGS, O. J. J. O. P. S. & ENGINEERING 2005. Genetic algorithms in oil industry: An overview. 47, 15-22. WHITSON, C. H. & FEVANG, Ø. Generalized pseudopressure well treatment in reservoir simulation. Proc. IBC Conference on Optimisation of Gas Condensate Fields, 1997. WHITSON, C. H., FEVANG, Ø. & SÆVAREID, A. Gas condensate relative permeability for well calculations. SPE Annual Technical Conference and Exhibition, 1999. Society of Petroleum Engineers. WIENER, J. M., ROGERS, J. A., ROGERS, J. R. & MOLL, R. F. 1991. Predicting carbonate permeabilities from wireline logs using a back-propagation neural network. SEG Technical Program Expanded Abstracts 1991. Society of Exploration Geophysicists. WONG, P. 
M., GEDEON, T. D., TAGGART, I. J. J. I. T. O. G. & SENSING, R. 1995. An improved technique in porosity prediction: a neural network approach. 33, 971-980.


YASEEN, Z. M., DEO, R. C., HILAL, A., ABD, A. M., BUENO, L. C., SALCEDO-SANZ, S. & NEHDI, M. L. J. A. I. E. S. 2018. Predicting compressive strength of lightweight foamed concrete using extreme learning machine model. 115, 112-125. ZHONG, J., FENG, L. & ONG, Y. 2017. Gene Expression Programming: A Survey [Review Article]. IEEE Computational Intelligence Magazine, 12, 54-72.


Table 1 Summary of the experimental conditions of the eight datasets employed in the modeling. The datasets are those of Bardon & Longeron 1980, Asar & Handy 1988, Haniff & Ali 1990, Chen et al. 1995, Henderson et al. 1997, Blom et al. 1997, Henderson et al. 1998, and Calisgan et al. 2006. For each dataset the table reports the experiment method (steady-state, S.S, or unsteady-state, U.S.S), the rock type (Fontainebleau, Berea, and Spynie sandstones, a North Sea gas condensate core, glass porous media, and a carbonate), porosity (9.9-36%), absolute permeability (18.56-972.72 md), irreducible water saturation (0-26.4%), fluid composition, temperature (°C), IFT (0.001-12.6 mN/m), and capillary number Nc (approximately 0.05e-6 to 15300e-6). ( - ) means that data were not reported.

Table 2 Statistical description of the employed data. The condensate-phase databank comprises 441 data points (output: Krc; inputs: Nc and Sc, plus IFT in mN/m for Modeling 2) and the gas-phase databank comprises 574 data points (output: Krg; inputs: Nc and Sg, plus IFT in mN/m for Modeling 2). The minimum, mean, maximum, and standard deviation (SD) of each parameter are reported for both Modeling 1 (2-input) and Modeling 2 (3-input).

Table 3 GA and PSO setting parameters used in the study.
GA: population size = 50; crossover probability = 85%; type of selection = linear ranking; coding = binary; maximum number of generations = 100.
PSO: size of the swarm = 50; maximum number of iterations = 100; acceleration coefficients C1 = C2 = 2.05; inertia weight ω = 0.729.

Table 4 Obtained SVR hyper-parameters for the developed models. For each modeling approach (2-input and 3-input) and fluid phase, the tuned SVR-GA and SVR-PSO hyper-parameters are listed: the penalty parameter C ranges from 755.5664 to 15043, the tube size ε from 0.00147 to 0.0097, and the kernel parameter γ from 0.7820 to 4.43.

Table 5 Key parameters of the established LSSVM models for estimation of Kr. For each modeling approach and fluid phase, the tuning parameters of the LSSVM-GA and LSSVM-PSO models (the regularization parameter γ and the RBF kernel parameter σ²) are listed; the reported γ values range from 4955.08 to 3243406.2 and the σ² values from 0.0589 to 1.75.
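To illustrate how the Table 4 hyper-parameters enter the model, the following minimal Python sketch fits an RBF-kernel SVR with hand-picked values of C, ε, and γ. The synthetic inputs and the specific hyper-parameter values are placeholders for illustration, not the study's tuned settings.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder inputs: capillary number Nc and gas saturation Sg (2-input modeling).
Nc = rng.uniform(1e-6, 1e-3, 200)
Sg = rng.uniform(0.1, 0.9, 200)
X = np.column_stack([Nc, Sg])
# Synthetic stand-in for measured Krg, bounded between 0 and 1.
krg = np.clip(Sg**2 + 0.1 * np.log10(Nc / 1e-6) / 3, 0, 1)

# C (penalty), epsilon (tube size), and gamma (RBF width) play the roles of the
# hyper-parameters reported in Table 4; the numbers below are illustrative only.
model = SVR(kernel="rbf", C=1000.0, epsilon=0.005, gamma=1.5)
model.fit(X, krg)
print("training RMSE:", np.sqrt(np.mean((model.predict(X) - krg) ** 2)))
```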

Table 6 GEP setting parameters used in the study.

Parameter | Value/setting
Head size | 8
Chromosome | 50
Gene | 8-12
Population | 500-900
Mutation rate | 0.25
Inversion rate | 0.1
Operators used | +, −, *, /, EXP, X2, INV, LOG, SQRT

Table 7 Statistical indexes of the established models for Krg (training: 459 points; test: 115 points; all: 574 points).

Modeling 1 (2-input):
Model | Train RMSE | Train R2 | Test RMSE | Test R2 | All RMSE | All R2
LSSVM-GA | 0.0917 | 0.9429 | 0.1270 | 0.8916 | 0.0988 | 0.9326
LSSVM-PSO | 0.0854 | 0.9507 | 0.1270 | 0.8927 | 0.0937 | 0.9391
SVR-GA | 0.1099 | 0.9194 | 0.1445 | 0.8645 | 0.1168 | 0.9084
SVR-PSO | 0.1083 | 0.9228 | 0.1398 | 0.8732 | 0.1146 | 0.9129
ELM | 0.1148 | 0.9090 | 0.1435 | 0.8590 | 0.1205 | 0.8990
GMDH | 0.130 | 0.8797 | 0.1507 | 0.8413 | 0.1349 | 0.8720
MLP-LMA | 0.0611 | 0.9751 | 0.1139 | 0.9131 | 0.0717 | 0.9626
MLP-BR | 0.0899 | 0.9454 | 0.1106 | 0.9182 | 0.0941 | 0.9400
MLP-SCG | 0.1113 | 0.9150 | 0.1242 | 0.8948 | 0.1139 | 0.9109
MLP-RP | 0.0992 | 0.9330 | 0.1185 | 0.9051 | 0.1031 | 0.9274
GEP | 0.1086 | 0.9190 | 0.1396 | 0.8679 | 0.1148 | 0.9087

Modeling 2 (3-input):
Model | Train RMSE | Train R2 | Test RMSE | Test R2 | All RMSE | All R2
LSSVM-GA | 0.0723 | 0.9651 | 0.0751 | 0.9612 | 0.0729 | 0.9643
LSSVM-PSO | 0.0438 | 0.9874 | 0.0829 | 0.9590 | 0.0517 | 0.9817
SVR-GA | 0.0782 | 0.9594 | 0.0743 | 0.9620 | 0.0774 | 0.9599
SVR-PSO | 0.0776 | 0.9601 | 0.0761 | 0.9600 | 0.0773 | 0.9601
ELM | 0.0683 | 0.9690 | 0.5831 | 0.3041 | 0.1714 | 0.8358
GMDH | 0.1259 | 0.8903 | 0.1086 | 0.9184 | 0.1224 | 0.8959
MLP-LMA | 0.0314 | 0.9935 | 0.0494 | 0.9841 | 0.0350 | 0.9916
MLP-BR | 0.0683 | 0.9690 | 0.0514 | 0.9823 | 0.0649 | 0.9717
MLP-SCG | 0.0904 | 0.9451 | 0.0779 | 0.9587 | 0.0879 | 0.9478
MLP-RP | 0.0818 | 0.9552 | 0.0669 | 0.9698 | 0.0788 | 0.9582
GEP | 0.0708 | 0.9668 | 0.0766 | 0.9594 | 0.0719 | 0.9653
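The statistical indexes of Tables 7 and 8 (RMSE and R2 over the training, test, and whole data sets) can be reproduced from measured and predicted values as in the short sketch below; the arrays are placeholders.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, as reported in Tables 7 and 8."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination R2."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Placeholder example values
measured = np.array([0.10, 0.35, 0.60, 0.85])
predicted = np.array([0.12, 0.33, 0.58, 0.88])
print("RMSE =", rmse(measured, predicted), "| R2 =", r2(measured, predicted))
```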

Table 8 Statistical indexes of the established models for Krc (training: 353 points; test: 88 points; all: 441 points).

Modeling 1 (2-input):
Model | Train RMSE | Train R2 | Test RMSE | Test R2 | All RMSE | All R2
LSSVM-GA | 0.0565 | 0.9671 | 0.0832 | 0.9375 | 0.0618 | 0.9611
LSSVM-PSO | 0.0501 | 0.9741 | 0.0979 | 0.9078 | 0.0597 | 0.9609
SVR-GA | 0.0684 | 0.9517 | 0.0825 | 0.9363 | 0.0712 | 0.9486
SVR-PSO | 0.0609 | 0.9623 | 0.0819 | 0.9384 | 0.0651 | 0.9575
ELM | 0.0638 | 0.9578 | 0.0803 | 0.9373 | 0.0671 | 0.9537
GMDH | 0.0856 | 0.9226 | 0.1020 | 0.8996 | 0.0888 | 0.9180
MLP-LMA | 0.0394 | 0.9841 | 0.0634 | 0.9604 | 0.0442 | 0.9794
MLP-BR | 0.0401 | 0.9836 | 0.0594 | 0.9651 | 0.0439 | 0.9799
MLP-SCG | 0.0609 | 0.9618 | 0.0726 | 0.9500 | 0.0633 | 0.9594
MLP-RP | 0.0519 | 0.9724 | 0.0659 | 0.9584 | 0.0547 | 0.9696
GEP | 0.0562 | 0.9674 | 0.0722 | 0.9488 | 0.0594 | 0.9637

Modeling 2 (3-input):
Model | Train RMSE | Train R2 | Test RMSE | Test R2 | All RMSE | All R2
LSSVM-GA | 0.0337 | 0.9879 | 0.0465 | 0.9815 | 0.0363 | 0.9866
LSSVM-PSO | 0.0313 | 0.9896 | 0.0430 | 0.9842 | 0.0336 | 0.9885
SVR-GA | 0.0345 | 0.9875 | 0.0509 | 0.9781 | 0.0378 | 0.9857
SVR-PSO | 0.0357 | 0.9872 | 0.0477 | 0.9806 | 0.0381 | 0.9858
ELM | 0.0336 | 0.9880 | 0.0753 | 0.9517 | 0.0419 | 0.9808
GMDH | 0.0817 | 0.9268 | 0.0932 | 0.9231 | 0.0840 | 0.9260
MLP-LMA | 0.0164 | 0.9971 | 0.0293 | 0.9927 | 0.0190 | 0.9963
MLP-BR | 0.0295 | 0.9908 | 0.0291 | 0.9928 | 0.0294 | 0.9912
MLP-SCG | 0.0374 | 0.9851 | 0.0381 | 0.9876 | 0.0376 | 0.9856
MLP-RP | 0.0374 | 0.9852 | 0.0354 | 0.9893 | 0.0370 | 0.9860
GEP | 0.0316 | 0.9894 | 0.0482 | 0.9804 | 0.0349 | 0.9876

Table 9 A comparison between the time and memory requirements of each established model in the current study.

Modeling 1 (2-input):
Model | Time (s) | Memory (Kb)
LSSVM-GA | 22.864 | 23308
LSSVM-PSO | 21.154 | 22478
SVR-GA | 24.847 | 25451
SVR-PSO | 22.881 | 23232
ELM | 12.52 | 40
GMDH | 2.78 | 3084
MLP-LMA | 2.172 | 1584
MLP-BR | 2.563 | 68
MLP-SCG | 1.44 | 64
MLP-RP | 1.586 | 68
GEP | 311.764 | 40568

Modeling 2 (3-input):
Model | Time (s) | Memory (Kb)
LSSVM-GA | 23.395 | 23448
LSSVM-PSO | 22.174 | 22656
SVR-GA | 25.146 | 25789
SVR-PSO | 23.708 | 23256
ELM | 1.156 | 64
GMDH | 3.432 | 8510
MLP-LMA | 2.297 | 1652
MLP-BR | 2.35 | 68
MLP-SCG | 1.784 | 68
MLP-RP | 1.464 | 68
GEP | 923 | 43768
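The time and memory figures of Table 9 depend on the implementation and hardware. A minimal way to record comparable quantities in Python, assuming the training call is wrapped as shown, is sketched below; the "training" function here is a trivial placeholder.

```python
import time
import tracemalloc

def profile_training(train_fn, *args, **kwargs):
    """Return (result, elapsed seconds, peak memory in Kb) of a training call."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = train_fn(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()   # peak traced allocation in bytes
    tracemalloc.stop()
    return result, elapsed, peak / 1024.0

# Example with a trivial placeholder "training" function:
model, seconds, peak_kb = profile_training(lambda: sum(range(1_000_000)))
print(f"time = {seconds:.3f} s, peak memory = {peak_kb:.1f} Kb")
```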

Table 10 The weighting functions of the five traditional models employed for comparison with the MLP-LMA models: Coats 1980, Nghiem et al. 1981, Al-Shaidi 1997, Whitson and Fevang 1997, and Jamiolahmady et al. 2009, each of which expresses the gas/condensate relative permeability as a function of saturation together with IFT and/or capillary number.

Table 11 The RMSE and the constant parameters of the five traditional models obtained from the curve-fitting analysis.

Model | Gas RMSE | Condensate RMSE
Coats 1980 | 0.1030 | 0.0911
Nghiem et al. 1981 | 0.1328 | 0.1229
Al-Shaidi 1997 | 0.1123 | 0.0691
Whitson and Fevang 1997 | 0.1032 | 0.0908
Jamiolahmady et al. 2009 | 0.0922 | 0.0795

The table also reports the fitted constant parameters (n1, n2) of each correlation for the gas and condensate phases.
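The constant parameters in Table 11 were obtained by curve fitting each traditional correlation to the experimental data. Because the exact functional forms belong to the cited authors (Table 10), the sketch below only illustrates the fitting step with SciPy, using a generic capillary-number weighting f = 1/(1 + (n1*Nc)^n2) between an immiscible and a miscible relative permeability curve; this form, the Corey-type curve, and the data are assumptions for illustration, not the correlations of the cited papers.

```python
import numpy as np
from scipy.optimize import curve_fit

def kr_model(X, n1, n2):
    """Generic interpolation between an immiscible and a miscible kr curve,
    weighted by a capillary-number-dependent factor (illustrative form only)."""
    S, Nc = X
    kr_immiscible = S ** 3            # placeholder Corey-type curve
    kr_miscible = S                   # straight-line (miscible) limit
    f = 1.0 / (1.0 + (n1 * Nc) ** n2)
    return f * kr_immiscible + (1.0 - f) * kr_miscible

# Placeholder "measured" data
rng = np.random.default_rng(1)
S = rng.uniform(0.2, 0.9, 100)
Nc = 10 ** rng.uniform(-6, -3, 100)
kr_meas = kr_model((S, Nc), 2000.0, 0.8) + rng.normal(0, 0.01, 100)

popt, _ = curve_fit(kr_model, (S, Nc), kr_meas, p0=[1000.0, 1.0],
                    bounds=([1e-3, 0.01], [1e6, 5.0]))
print("fitted n1, n2:", popt)
```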

Table 12 The explicit expressions obtained using GEP. For Modeling 2 (3-input), Krc is given as the sum of four sub-expressions (Krc = A + B + C + D) and Krg as the sum of five (Krg = A + B + C + D + E); for Modeling 1 (2-input), both Krc and Krg are given as the sum of three sub-expressions (A + B + C). Each sub-expression is a closed-form function of the inputs (Nc and saturation for the 2-input models, plus IFT for the 3-input models) built from the operator set listed in Table 6.

Table 13 The explicit expressions obtained using the GMDH model. For each modeling approach (2-input and 3-input) and fluid phase (condensate and gas), the correlation is built from quadratic (Ivakhnenko-type) polynomial terms in the inputs, i.e. constant, linear, squared, and cross terms in capillary number and saturation (plus IFT for the 3-input models).

Fig. 1. Condensate banking phenomenon in a gas condensate reservoir (schematic of gas flow, krg, and condensate flow, krc, around the sand grains).

Fig. 2. A schematic description of the GMDH structure (input layer, hidden layers, and output layer).
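Each node of the GMDH network in Fig. 2 is conventionally a second-order (Ivakhnenko) polynomial of two inputs fitted by least squares, which is also the form of the final correlations summarized in Table 13. A minimal sketch of one such node, on placeholder data, is given below.

```python
import numpy as np

def fit_gmdh_node(x1, x2, y):
    """Fit y = a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2 by least squares."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def eval_gmdh_node(coeffs, x1, x2):
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1*x1 + a2*x2 + a3*x1**2 + a4*x2**2 + a5*x1*x2

# Placeholder data: a node combining capillary number and saturation.
rng = np.random.default_rng(2)
Nc = rng.uniform(1e-5, 1e-3, 300)
S = rng.uniform(0.1, 0.9, 300)
kr = np.clip(0.9 * S**2 + 50 * Nc, 0, 1)
coeffs = fit_gmdh_node(Nc, S, kr)
print("node coefficients:", np.round(coeffs, 4))
```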

Fig. 3. A scheme of converting an algebraic expression to the expression tree (ET) and the Karva expression (k-expression), for a chromosome composed of two genes.
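The k-expression in Fig. 3 is read level by level to rebuild the expression tree: the first symbol is the root, and each subsequent layer consumes as many symbols as the total arity of the previous layer. A minimal decoder and evaluator, with an illustrative gene rather than the one shown in the figure, is sketched below.

```python
import math

# Operator arities and implementations; 'Q' denotes the square-root operator, 'C' cosine.
ARITY = {'+': 2, '-': 2, '*': 2, '/': 2, 'Q': 1, 'C': 1}
FUNCS = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    '/': lambda a, b: a / b,
    'Q': lambda a: math.sqrt(a),
    'C': lambda a: math.cos(a),
}

def karva_to_tree(gene):
    """Decode a k-expression (string of symbols) into a nested (symbol, children) tree."""
    nodes = [(s, []) for s in gene]
    level, i = [nodes[0]], 1
    while level and i < len(nodes):
        nxt = []
        for symbol, children in level:
            need = ARITY.get(symbol, 0)       # terminals take no children
            children.extend(nodes[i:i + need])
            nxt.extend(nodes[i:i + need])
            i += need
        level = nxt
    return nodes[0]

def evaluate(node, terminals):
    symbol, children = node
    if symbol in FUNCS:
        return FUNCS[symbol](*(evaluate(c, terminals) for c in children))
    return terminals[symbol]

# Illustrative gene "CQ*ab" decodes to cos(sqrt(a*b)).
tree = karva_to_tree("CQ*ab")
print(evaluate(tree, {'a': 0.5, 'b': 2.0}))
```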

Fig. 4. A typical flowchart of the SVR-GA/PSO algorithms employed for estimation of gas condensate relative permeability: the input and output data are randomly split into training and testing sets, the GA or PSO optimizer proposes the SVR features (C, ε, γ), the SVR model is trained and evaluated, and the loop is repeated until the stopping criterion is met, yielding the optimum features and the final relative permeability model.
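A compact sketch of the Fig. 4 loop is given below, using a simple PSO (with acceleration coefficients and inertia weight in the spirit of Table 3) to search the SVR hyper-parameters (C, ε, γ) against a held-out test set. The data, the search bounds, and the swarm size are placeholders for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Placeholder data (inputs: Nc and Sg; output: Krg)
X = rng.uniform([1e-6, 0.1], [1e-3, 0.9], size=(300, 2))
y = np.clip(X[:, 1] ** 2 + 100 * X[:, 0], 0, 1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def fitness(p):
    C, eps, gamma = p
    m = SVR(kernel="rbf", C=C, epsilon=eps, gamma=gamma).fit(X_tr, y_tr)
    return np.sqrt(np.mean((m.predict(X_te) - y_te) ** 2))   # test RMSE

lo = np.array([1.0, 1e-4, 0.01])      # lower bounds for (C, epsilon, gamma)
hi = np.array([2e4, 1e-2, 10.0])      # upper bounds
n_particles, n_iter, w, c1, c2 = 20, 30, 0.729, 2.05, 2.05

pos = rng.uniform(lo, hi, (n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best (C, epsilon, gamma):", gbest, "| test RMSE:", pbest_f.min())
```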

Fig. 5. A typical flowchart of the LSSVM-GA/PSO algorithms employed for estimation of gas condensate relative permeability; the workflow is analogous to that of Fig. 4, with the GA or PSO optimizer tuning the LSSVM parameters (σ², γ).
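LSSVM is not part of scikit-learn; under the standard least-squares SVM formulation (Suykens and Vandewalle, 1999), training reduces to a single linear system in the bias and the dual variables. The following minimal sketch solves that system with an RBF kernel; the data and the (γ, σ²) values are placeholders.

```python
import numpy as np

def rbf_kernel(A, B, sigma2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma2))

def lssvm_train(X, y, gamma, sigma2):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] for the LSSVM regressor."""
    n = len(y)
    K = rbf_kernel(X, X, sigma2)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, dual coefficients alpha

def lssvm_predict(X_new, X_train, b, alpha, sigma2):
    return rbf_kernel(X_new, X_train, sigma2) @ alpha + b

# Placeholder usage
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (100, 2))
y = X[:, 0] ** 2 + 0.1 * X[:, 1]
b, alpha = lssvm_train(X, y, gamma=1e4, sigma2=0.5)
pred = lssvm_predict(X, X, b, alpha, 0.5)
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```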

Fig. 6. A typical flowchart of the MLP-LMA/BR/SCG/RP algorithms employed for estimation of gas condensate relative permeability: the data are randomly split into training and testing sets, the number of layers is selected and the weights and biases are randomly initialized, and the chosen optimizer (LMA, BR, SCG, or RP) trains the MLP until the stopping criterion is met, yielding the final relative permeability model.
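scikit-learn's MLPRegressor does not offer the Levenberg-Marquardt, Bayesian-regularization, scaled-conjugate-gradient, or resilient-propagation trainers used in this study (those are typical of MATLAB's neural network toolbox), so the sketch below substitutes the L-BFGS solver simply to show the Fig. 6 workflow of splitting, training, and evaluating an MLP; all data and settings are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Placeholder 3-input data: Nc, saturation, and IFT mapped to a relative permeability.
X = rng.uniform([1e-6, 0.1, 0.001], [1e-3, 0.9, 12.6], size=(500, 3))
y = np.clip(X[:, 1] ** 2 + 50 * X[:, 0] - 0.01 * X[:, 2], 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Two hidden layers; the solver is a stand-in for the LMA/BR/SCG/RP trainers of the paper.
mlp = MLPRegressor(hidden_layer_sizes=(10, 10), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    pred = mlp.predict(Xs)
    print(name, "RMSE:", np.sqrt(np.mean((pred - ys) ** 2)))
```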

Fig. 7. Relevancy of the various parameters (saturation, IFT, velocity, and Nc) to the relative permeability of the (a) gas and (b) condensate phases in the Henderson et al. (1998) data set.
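The relevancy factor plotted in Fig. 7 is commonly taken as the Pearson correlation coefficient between each input variable and the target relative permeability; under that assumption it can be computed as below, using placeholder arrays.

```python
import numpy as np

def relevancy_factor(x, y):
    """Pearson correlation coefficient between an input variable x and the output y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    num = np.sum((x - x.mean()) * (y - y.mean()))
    den = np.sqrt(np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2))
    return float(num / den)

# Placeholder example: saturation relates positively to kr, IFT negatively.
rng = np.random.default_rng(6)
S = rng.uniform(0.1, 0.9, 200)
ift = rng.uniform(0.01, 12.0, 200)
kr = np.clip(S ** 2 - 0.02 * ift + rng.normal(0, 0.02, 200), 0, 1)
print("r(S, kr)   =", round(relevancy_factor(S, kr), 3))
print("r(IFT, kr) =", round(relevancy_factor(ift, kr), 3))
```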

Fig. 8. Cross plots of the established models for Krg (Model 1 with two inputs): predicted versus measured Krg for the training and test data, together with the unit-slope line.
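A cross (parity) plot such as those of Figs. 8 to 11 simply scatters predicted against measured values and overlays the unit-slope line. A minimal matplotlib sketch with placeholder arrays is shown below.

```python
import numpy as np
import matplotlib.pyplot as plt

measured = np.array([0.05, 0.18, 0.33, 0.47, 0.62, 0.80])    # placeholder measured Krg
predicted = measured + np.array([0.02, -0.01, 0.03, -0.02, 0.01, -0.03])

fig, ax = plt.subplots(figsize=(4, 4))
ax.plot([0, 1], [0, 1], "k-", label="Slope 1")                # unit-slope reference line
ax.scatter(measured, predicted, c="tab:blue", label="Data")
ax.set_xlabel("Measured Krg")
ax.set_ylabel("Predicted Krg")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.legend()
plt.tight_layout()
plt.show()
```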

Fig. 9. Cross plots of the established models for Krg (Model 2 with three inputs).

Fig. 10. Cross plots of the established models for Krc (Model 1 with two inputs).

Fig. 11. Cross plots of the established models for Krc (Model 2 with three inputs).

Fig. 12. Comparison of the RMSE of the established models for (a) Krg and (b) Krc under Modeling 1 (2-input) and Modeling 2 (3-input).

Fig. 13. Comparison between the Krg values predicted by the MLP-LMA model (3-input) and the measured Krg values: (a) training data and (b) testing data.

Fig. 14. Comparison between the Krc values predicted by the MLP-LMA model (3-input) and the measured Krc values: (a) training data and (b) testing data.

Fig. 15. Histogram plot of the datasets applied in constructing the MLP-LMA model for Krg: (a) train and (b) test.

Fig. 16. Histogram plot of the datasets applied in constructing the MLP-LMA model for Krc: (a) train and (b) test.

Fig. 17. A visual comparison between the (a) time and (b) memory requirements of each established model for the 2-input and 3-input modeling strategies.

Fig. 18. The Williams plot of the Krg dataset for the MLP-LMA model (standardized residuals versus Hat values, with the leverage and suspected limits, valid data, vertically suspected points, and out-of-leverage points indicated).

Fig. 19. The Williams plot of the Krc dataset for the MLP-LMA model.
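The Williams plots of Figs. 18 and 19 place standardized residuals against the leverage (Hat) values; the leverage is the diagonal of H = X(XᵀX)⁻¹Xᵀ and the warning limit is usually taken as H* = 3(p + 1)/n (Gramatica, 2007), with residual limits of ±3. A minimal sketch with placeholder arrays is shown below.

```python
import numpy as np

def williams_quantities(X, y_true, y_pred):
    """Return leverage values, standardized residuals, and the leverage limit H*."""
    X = np.asarray(X, float)
    H = X @ np.linalg.inv(X.T @ X) @ X.T            # hat matrix
    leverage = np.diag(H)
    residuals = np.asarray(y_true) - np.asarray(y_pred)
    std_res = residuals / residuals.std(ddof=1)     # standardized residuals
    n, p = X.shape
    h_star = 3.0 * (p + 1) / n                      # leverage (applicability-domain) limit
    return leverage, std_res, h_star

# Placeholder usage: inputs Nc, saturation, IFT and a model's predictions.
rng = np.random.default_rng(7)
X = rng.uniform([1e-6, 0.1, 0.001], [1e-3, 0.9, 12.6], size=(200, 3))
y = np.clip(X[:, 1] ** 2 + 50 * X[:, 0], 0, 1)
y_hat = y + rng.normal(0, 0.02, 200)
lev, sr, h_star = williams_quantities(X, y, y_hat)
outside = (lev > h_star) | (np.abs(sr) > 3)
print("leverage limit H* =", round(h_star, 4), "| points outside domain:", int(outside.sum()))
```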

Fig. 20. Comparison between Krg and Krc obtained via measurements (Bardon & Longeron 1980, Asar & Handy 1988, and Haniff & Ali 1990) and those generated by the MLP-LMA model, as functions of (a) Sg and (b) Sc, respectively.

Fig. 21. Comparison of the accuracy of the MLP-LMA model and five traditional literature models for (a) Krg and (b) Krc, using the RMSE values.

1- Advanced computational frameworks were developed for predicting relative permeability in gas condensate reservoirs.
2- A reliable databank encompassing more than 1000 data points from eight sets of experimental data was utilized.
3- Two different sets of inputs were used for modeling: 2-input (saturation and capillary number) and 3-input (saturation, interfacial tension, and capillary number).
4- The 3-input models were found to be more accurate than the 2-input models.
5- The 3-input MLP-LMA paradigm is the best implemented model.
6- The best established model largely outperforms the prior correlations.