Back-propagation neural network modeling on the load–settlement response of single piles


Chapter 19

Back-propagation neural network modeling on the load–settlement response of single piles

Zhang Wengang 1,2,3, Anthony Teck Chee Goh 4, Zhang Runhong 2, Li Yongqin 2, Wei Ning 2

1 Key Laboratory of New Technology for Construction of Cities in Mountain Area, Chongqing University, Chongqing, China; 2 School of Civil Engineering, Chongqing University, Chongqing, China; 3 National Joint Engineering Research Center of Geohazards Prevention in the Reservoir Areas, Chongqing University, Chongqing, China; 4 School of Civil and Environmental Engineering, Nanyang Technological University, Singapore

1. Introduction

As an important type of deep foundation, piles are long, slender structural elements used to transfer loads from the superstructure through weak strata onto more suitable bearing strata such as stiffer soils or rock. The safety and stability of pile-supported structures therefore depend largely on the behavior of the piles. The evaluation of the load–settlement performance of a single pile is one of the main aspects in the design of piled foundations. Consequently, an important design consideration is to check the load–settlement characteristics of piles under the influence of several factors, such as the nonlinear mechanical behavior of the surrounding soil, the characteristics of the pile itself, and the installation method (Berardi and Bovolenta, 2005). With regard to the settlement analysis of piles, Poulos and Davis (1980) demonstrated that the immediate settlement contributes the major part of the final settlement, even when the consolidation settlement of saturated clay soils is taken into account, as also presented by Murthy (2002) for piles in clay. For piles in sandy soils, immediate settlement accounts for almost the entire final settlement. Vesic (1977) suggested a semiempirical method to compute the immediate settlement of piles.

Handbook of Probabilistic Models. https://doi.org/10.1016/B978-0-12-816514-0.00019-9 Copyright © 2020 Elsevier Inc. All rights reserved.


There are also theoretical and experimental methods for predicting the settlement of piles. More recently, soft computing methods, including the commonly used artificial neural networks (ANNs), have been adopted with varying degrees of success to predict the axial and lateral bearing capacities of pile foundations in compression and uplift, including driven piles (Chan et al., 1995; Goh, 1996; Lee and Lee, 1996; Teh et al., 1997; Abu-Kiefa, 1998; Goh et al., 2005; Das and Basudhar, 2006; Shahin and Jaksa, 2006; Ahmad et al., 2007; Ardalan et al., 2009; Shahin, 2010; Alkroosh and Nikraz, 2011a,b; Tarawneh and Imam, 2014; Shahin, 2014; Zhang and Goh, 2016). Shahin (2014) developed a model for load–settlement analysis of axially loaded driven steel piles using recurrent neural networks (RNNs). These models were calibrated and validated using 23 in situ, full-scale pile load tests together with cone penetration test (CPT) data. Nevertheless, Shahin's model focused solely on driven steel piles and included only a single soil input, the average CPT cone tip resistance qc, to account for the variability of soil strength along the pile shaft. Nejad and Jaksa (2017) developed an ANN model to predict pile behavior from CPT data, based on approximately 500 data sets compiled from published articles, and compared the results with those of a number of traditional methods. They regarded the ANN model with the full 21 input variables as optimal and used it to examine the complete load–settlement behavior of concrete, steel, and composite piles, either bored or driven. However, they did not consider alternative input parameter combinations or the descriptive uncertainty caused by redundant parameter information; consequently, they did not examine the submodels, i.e., models developed with fewer input variables.
The aims of this chapter are to (1) develop a back-propagation neural network (BPNN) model for accurately estimating the load–settlement behavior of single, axially loaded piles over a wide range of applied loads, pile characteristics, installation methods, and soil and ground conditions; (2) examine the influence of the selection of descriptive factors, categorical or numerical, on modeling accuracy; and (3) explore the relative importance of the factors affecting pile behavior through sensitivity analyses.

2. Back-propagation neural network methodologies

A three-layer, feed-forward neural network topology, shown in Fig. 19.1, is adopted in this study. The back-propagation algorithm involves two phases of data flow. In the first phase, the input data are propagated forward from the input layer to the output layer to produce an actual output. In the second phase, the error between the target values and the actual values is propagated backward from the output layer to the previous layers, and the connection weights are updated to reduce the errors between the actual and target outputs. No effort is made to keep track of the


FIGURE 19.1 Back-propagation neural network architecture used in this study.

characteristics of the input and output variables. The network is first trained using the training data set. The objective of network training is to map the inputs to the output by determining the optimal connection weights and biases through the back-propagation procedure. The number of hidden neurons is typically determined by trial and error; normally, the smallest number of neurons that yields satisfactory results (judged by the network performance in terms of the coefficient of determination R2 of the testing data set) is selected. In the present study, a MATLAB-based back-propagation algorithm with the Levenberg–Marquardt (LM) algorithm (Demuth and Beale, 2003) was adopted for neural network modeling.
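The two-phase flow described above can be sketched in code. The following is a minimal NumPy illustration of plain gradient-descent back-propagation for a single logsig hidden layer and a linear output; it is not the Levenberg–Marquardt routine used in the chapter, and all function names are illustrative:

```python
import numpy as np

def logsig(z):
    # logsig transfer function, as used between the input and hidden layers
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    # Phase 1: inputs flow forward through the hidden layer to the output
    H = logsig(X @ W1 + b1)
    return H, H @ W2 + b2

def backprop_step(X, y, W1, b1, W2, b2, lr=0.01):
    # Phase 2: the output error is propagated backward and the connection
    # weights are updated to reduce a squared-error loss
    n = len(X)
    H, y_hat = forward(X, W1, b1, W2, b2)
    err = y_hat - y
    dW2, db2 = H.T @ err / n, err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)      # derivative of logsig
    dW1, db1 = X.T @ dH / n, dH.mean(axis=0)
    return W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2
```

Repeated calls to `backprop_step` reduce the error between actual and target outputs, which is the essence of the training phase described above.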

3. Development of the back-propagation neural network model

The development of BPNN models requires determination of the model inputs and outputs, division and preprocessing of the available data, determination of the appropriate network architecture, stopping criteria, and model verification. Nejad and Jaksa (2017) used NEUFRAME, version 4.0, to simulate the ANN operation, and the database used to calibrate and validate their neural network model was compiled from pile load tests in the published literature. To obtain accurate predictions of the pile responses, including settlement and capacity, an understanding of the factors affecting pile behavior is essential. Conventional methods include the following fundamental parameters: pile geometry, pile material properties, soil properties, and applied load for the estimation of settlement, as well as additional factors including the pile installation method, the type of load test, and


whether the pile tip is closed or open. Given that pile behavior depends on soil strength and compressibility, and that the CPT is one of the most commonly used in situ tests for quantifying these soil characteristics, the CPT results in terms of qc and fs along the embedded length of the pile are used.

3.1 The database

Suitable case studies were those involving pile load tests that include field measurements of full-scale pile settlements, as well as the corresponding information on the pile and soil characteristics. The database compiled by Nejad and Jaksa (2017) contains a total of 499 cases from 56 individual pile load tests. The 21 descriptive variables, comprising 5 parameters on pile information, 11 parameters on soil information, the applied load, the type of test, the type of pile, the type of installation, and the type of pile end, are used as inputs to estimate the target pile settlement. The references used to compile the database are given in Table 19.1. The details of each pile load test are given in Table 19.2, a summary of the input variables and the output is listed in Table 19.3, and the numerical values for the corresponding categorical variables are listed in Table 19.4. The full database is available in the supplementary material of Nejad and Jaksa (2017). The applied load, P, and the corresponding pile settlement, dm, were obtained by selecting a series of points from the load–settlement curve associated with each pile load test.

3.2 Data division

The cross-validation method suggested by Stone was implemented by Nejad and Jaksa (2017) to divide the data into three sets: training, testing, and validation. The training set is used to adjust the connection weights, whereas the testing set is used to check the performance of the model at various stages of training and to determine when to stop training to avoid overfitting. The validation set is used to estimate the performance of the trained network in the deployed environment. This study used the three data sets chosen by Nejad and Jaksa (2017). To eliminate data bias and any possibility of extrapolation beyond the range of the training data, several random combinations of the training, testing, and validation sets were assessed until statistical consistency, in terms of the mean, standard deviation, minimum, maximum, and range, as suggested by Shahin et al., was obtained.
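The statistical consistency check described above can be sketched as follows. This is a hypothetical Python helper, not the authors' code; the 20% relative tolerance is an assumed threshold for illustration:

```python
import numpy as np

def split_statistics(values):
    """Summary statistics used to compare candidate data splits."""
    v = np.asarray(values, dtype=float)
    return {"mean": v.mean(), "std": v.std(), "min": v.min(),
            "max": v.max(), "range": v.max() - v.min()}

def splits_consistent(train, test, valid, rtol=0.2):
    # A candidate random split is accepted when each statistic of the
    # testing and validation sets lies close to that of the training set.
    ref = split_statistics(train)
    for subset in (test, valid):
        s = split_statistics(subset)
        for key, r in ref.items():
            scale = max(abs(r), 1e-9)
            if abs(s[key] - r) / scale > rtol:
                return False
    return True
```

In practice, random splits would be redrawn until `splits_consistent` returns True for every input variable and for the output.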

3.3 Back-propagation neural network model architecture

Determining the network architecture is one of the most important and most difficult tasks in BPNN model development. In this study, only one hidden layer is adopted, for simplicity; beyond that, selecting the optimal number of nodes (neurons) in this hidden layer and determining the proper number of


TABLE 19.1 Database references.

References | Location of test(s) | No. of pile load tests
Nottingham (1975) | USA | 2
Tumay and Fakhroo (1981) | Louisiana, USA | 5
Viergever (1982) | Almere, the Netherlands | 1
Campanella et al. (1981) | Vancouver, Canada | 1
Gambini (1985) | Milan, Italy | 1
Horvitz et al. (1986) | Seattle, USA | 1
CH2M Hill (1987) | Los Angeles, USA | 1
Briaud and Tucker (1988) | USA | 12
Haustoefer and Plesiotis (1988) | Victoria, Australia | 1
Tucker and Briaud (1988) | USA | 1
Reese et al. (1988) | Texas, USA | 1
O'Neil (1988) | California, USA | 1
Ballouz et al. (1991) | Texas A&M University, USA | 2
Avasarala et al. (1994) | Florida, USA | 1
Harris and Mayne (1994) | Georgia, USA | 1
Matsumoto et al. (1995) | Noto Island, Japan | 1
Florida Department of Transportation (FDOT) (2003) | USA | 8
Paik and Salgado (2003) | Indiana, USA | 2
Fellenius et al. (2004) | Idaho, USA | 1
Poulos and Davis (2005) | UAE | 1
Brown et al. (2006) | Grimsby, UK | 2
McCabe and Lehane (2006) | Ireland | 1
Omer et al. (2006) | Belgium | 4
U.S. Department of Transportation (2006) | Virginia, USA | 3
South Aust. Dept. of Transport, Energy and Infrastructure (a) | Adelaide, Australia | 1

(a) Unpublished; based on Nejad and Jaksa (2017).
Adapted from Nejad, F.P., Jaksa, M.B., 2017. Load-settlement behavior modeling of single piles using artificial neural networks and CPT data. Computers and Geotechnics 89, 9–21.

TABLE 19.2 Details of pile load test database.

References | Test type | Pile type | Method | Pile end | EA (MN) | Atip (10^3 mm^2) | O (mm) | L (m) | Lembed (m) | Max. load (kN) | Sm (mm)
Nottingham (1975) | ML | Conc. | Driven | C | 4263 | 203 | 1800 | 8 | 8 | 1140 | 24.5
Nottingham (1975) | ML | Steel | Driven | C | 797 | 59 | 858 | 22.5 | 22.5 | 1620 | 37
Tumay and Fakhroo (1981) | ML | Conc. | Driven | C | 13,356 | 636 | 2830 | 37.8 | 37.8 | 3960 | 12
Tumay and Fakhroo (1981) | ML | Conc. | Driven | C | 4263 | 203 | 1800 | 36.5 | 36.5 | 2950 | 18.5
Tumay and Fakhroo (1981) | ML | Steel | Driven | C | 1302 | 126 | 1257 | 37.5 | 37.5 | 2800 | 20
Tumay and Fakhroo (1981) | ML | Steel | Driven | C | 1138 | 96 | 1010 | 31.1 | 31.1 | 1710 | 11.5
Tumay and Fakhroo (1981) | ML | Conc. | Driven | C | 11,823 | 563 | 3000 | 19.8 | 19.8 | 2610 | 7.5
Campanella et al. (1981) | ML | Steel | Driven | C | 1000 | 82 | 1018 | 13.7 | 13.7 | 290 | 18.8
Viergever (1982) | ML | Conc. | Driven | C | 1323 | 63 | 1000 | 9.25 | 9.25 | 700 | 100
Gambini (1985) | ML | Steel | Driven | C | 1072 | 86 | 1037 | 10 | 10 | 625 | 19
Horvitz et al. (1986) | ML | Conc. | Bored | C | 2016 | 96 | 1100 | 15.8 | 15.8 | 900 | 37
CH2M Hill (1987) | ML | Conc. | Driven | C | 6468 | 308 | 2020 | 25.8 | 25.8 | 5785 | 66
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 2583 | 123 | 1400 | 5.5 | 5.5 | 1050 | 58
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 3360 | 160 | 1600 | 8.4 | 8.4 | 1240 | 52.5
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 3360 | 160 | 1600 | 21 | 21 | 1330 | 28
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 4263 | 203 | 1800 | 10.3 | 10.3 | 1250 | 27
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 4263 | 203 | 1800 | 15 | 15 | 1420 | 28
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 4263 | 203 | 1800 | 10.4 | 10.4 | 1070 | 34
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 3360 | 160 | 1600 | 11.3 | 11.3 | 870 | 31
Briaud and Tucker (1988) | ML | Steel | Driven | O | 2100 | 10 | 1210 | 19 | 19 | 1370 | 46.5
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 2583 | 123 | 1400 | 25 | 25 | 1560 | 18
Briaud and Tucker (1988) | ML | Conc. | Driven | C | 3360 | 160 | 1600 | 19.2 | 19.2 | 1780 | 18
Briaud and Tucker (1988) | ML | Steel | Driven | O | 2100 | 10 | 1210 | 9 | 9 | 2100 | 12
Briaud and Tucker (1988) | ML | Conc. | Bored | C | 2016 | 96 | 1100 | 12.5 | 12.5 | 1100 | 27
Haustoefer and Plesiotis (1988) | ML | Conc. | Driven | C | 2646 | 126 | 1420 | 10.2 | 10.2 | 1300 | 60
O'Neil (1988) | ML | Steel | Driven | C | 805 | 59 | 585 | 9.2 | 9.2 | 490 | 84
Reese et al. (1988) | ML | Conc. | Bored | C | 10,563 | 503 | 2510 | 24.1 | 24.1 | 5850 | 50
Tucker and Briaud (1988) | ML | Steel | Driven | C | 1081 | 96 | 1100 | 14.4 | 14.4 | 1300 | 75
Ballouz et al. (1991) | ML | Conc. | Bored | C | 16,493 | 785.4 | 3142 | 10.7 | 10 | 4130 | 137.9
Ballouz et al. (1991) | ML | Conc. | Bored | C | 13,809 | 657.56 | 2875 | 10.7 | 10 | 3000 | 68.43
Avasarala et al. (1994) | ML | Conc. | Driven | C | 2016 | 96 | 1100 | 16 | 16 | 1350 | 33
Harris and Mayne (1994) | ML | Conc. | Bored | C | 9073 | 453.65 | 2388 | 16.8 | 16.8 | 2795 | 20.94
Matsumoto et al. (1995) | ML | Steel | Driven | O | 3167 | 41 | 2510 | 11 | 8.2 | 4700 | 40
FDOT (2003) | CRP | Conc. | Driven | O | 32,387 | 729.66 | 9576 | 39.78 | 33.5 | 9810 | 15.98
FDOT (2003) | CRP | Conc. | Driven | O | 25,909 | 583.73 | 7661 | 56.08 | 42.58 | 4551 | 7.87
FDOT (2003) | CRP | Conc. | Driven | O | 25,909 | 583.73 | 7661 | 44.78 | 31.97 | 6000 | 4.8
FDOT (2003) | CRP | Conc. | Driven | O | 33,106 | 745.87 | 7341 | 24.38 | 15.39 | 11,000 | 66.04
FDOT (2003) | CRP | Conc. | Driven | O | 33,106 | 745.87 | 7341 | 24.38 | 14.02 | 16,000 | 9.4
FDOT (2003) | CRP | Conc. | Driven | O | 32,387 | 729.66 | 9576 | 56.39 | 28.65 | 10,000 | 10.47
FDOT (2003) | CRP | Conc. | Driven | O | 25,909 | 583.73 | 7661 | 44.87 | 23.52 | 7500 | 10.31
FDOT (2003) | CRP | Conc. | Driven | O | 32,387 | 729.66 | 9576 | 53.34 | 32 | 10,000 | 13.49
Paik and Salgado (2003) | ML | Steel | Driven | O | 6840 | 32.572 | 2036 | 8.24 | 7.04 | 1140 | 57.5
Paik and Salgado (2003) | ML | Steel | Driven | C | 2876 | 99.538 | 1118 | 8.24 | 6.87 | 1620 | 62.5
Fellenius et al. (2004) | ML | Comp. | Driven | C | 7005 | 129.46 | 1276 | 45.86 | 45 | 1915 | 13
Poulos and Davis (2005) | ML | Conc. | Bored | C | 19,085 | 636.17 | 2827 | 40 | 40 | 30,000 | 32.52
McCabe and Lehane (2006) | ML | Conc. | Driven | C | 2110 | 62.5 | 1000 | 6 | 6 | 60 | 8.21
Omer et al. (2006) | ML | Conc. | Driven | C | 2773 | 132 | 1288 | 10.66 | 8.45 | 2670 | 35.94
Omer et al. (2006) | ML | Conc. | Driven | C | 2773 | 132 | 1288 | 10.63 | 8.45 | 2796 | 41.74
Omer et al. (2006) | ML | Conc. | Driven | C | 2773 | 132 | 1288 | 10.74 | 8.52 | 2257 | 40.68
Omer et al. (2006) | ML | Conc. | Driven | C | 2773 | 132 | 1288 | 10.64 | 8.53 | 2475 | 47.65
Brown et al. (2006) | ML | Conc. | Bored | C | 8101 | 282.74 | 1885 | 12.76 | 9.96 | 1800 | 23.05
Brown et al. (2006) | CRP | Conc. | Bored | C | 8101 | 282.74 | 1885 | 12.76 | 9.96 | 2205 | 26.78
U.S. Department of Transportation (2006) | ML | Conc. | Driven | C | 8200 | 372.1 | 2440 | 18 | 16.76 | 3100 | 15.52
U.S. Department of Transportation (2006) | ML | Comp. | Driven | C | 7360 | 303.86 | 1954 | 18.3 | 17.22 | 2572 | 35.84
U.S. Department of Transportation (2006) | CRP | Comp. | Driven | C | 3200 | 275.25 | 1860 | 18.3 | 17.27 | 2500 | 80.6
SA DPTI (unpublished) | ML | Conc. | Bored | C | 11,874 | 282.74 | 1885 | 16.8 | 7.2 | 518 | 2.31

C, closed; Comp., composite; Conc., concrete; CRP, constant rate of penetration; ML, maintained load; O, open.

TABLE 19.3 Summary of input variables and output.

Input variables | Parameters and parameter descriptions
Pile information:
Variable 1 (x1) | Axial rigidity of pile, EA (MN)
Variable 2 (x2) | Cross-sectional area of pile tip, Atip (m2)
Variable 3 (x3) | Perimeter of pile, O (mm)
Variable 4 (x4) | Length of pile, L (m)
Variable 5 (x5) | Embedded length of pile, Lembed (m)
Soil information from CPT:
Variable 6 (x6) | fs1 (kPa)
Variable 7 (x7) | qc1 (MPa)
Variable 8 (x8) | fs2 (kPa)
Variable 9 (x9) | qc2 (MPa)
Variable 10 (x10) | fs3 (kPa)
Variable 11 (x11) | qc3 (MPa)
Variable 12 (x12) | fs4 (kPa)
Variable 13 (x13) | qc4 (MPa)
Variable 14 (x14) | fs5 (kPa)
Variable 15 (x15) | qc5 (MPa)
Variable 16 (x16) | qctip (MPa)
Load:
Variable 17 (x17) | Applied load, P (kN)
Categorical information for piles and the testing methods:
Variable 18 (x18) | Type of test (TT)
Variable 19 (x19) | Type of pile (TP)
Variable 20 (x20) | Type of installation (TI)
Variable 21 (x21) | Type of pile end (PE)

Output | Measured pile settlement, dm (mm)

inputs out of the full 21 variables is an essential task because there is no unified rule for determining an optimal BPNN architecture. In view of these two issues, a trial-and-error procedure was carried out to determine the optimal BPNN model architecture in terms of both the number of inputs and the number of nodes in the hidden layer, as listed in Table 19.5.


TABLE 19.4 Numerical values for the corresponding categorical variables.

Categorical factor | Description | Input value
Type of test (x18, TT) | Maintained load | 0
Type of test (x18, TT) | Constant rate of penetration | 1
Type of pile (x19, TP) | Steel | 0
Type of pile (x19, TP) | Concrete | 1
Type of pile (x19, TP) | Composite | 2
Type of installation (x20, TI) | Driven | 0
Type of installation (x20, TI) | Bored | 1
Type of pile end (x21, PE) | Open | 0
Type of pile end (x21, PE) | Closed | 1
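The coding of the four categorical inputs in Table 19.4 can be captured in a small lookup table. This is a hypothetical Python sketch; the names are illustrative, not part of the original model:

```python
# Numerical coding of the four categorical inputs, following Table 19.4.
CATEGORY_CODES = {
    "TT": {"maintained load": 0, "constant rate of penetration": 1},
    "TP": {"steel": 0, "concrete": 1, "composite": 2},
    "TI": {"driven": 0, "bored": 1},
    "PE": {"open": 0, "closed": 1},
}

def encode_pile_record(tt, tp, ti, pe):
    """Map the categorical description of one pile load test to the
    numerical input values used for x18..x21."""
    return (CATEGORY_CODES["TT"][tt.lower()],
            CATEGORY_CODES["TP"][tp.lower()],
            CATEGORY_CODES["TI"][ti.lower()],
            CATEGORY_CODES["PE"][pe.lower()])
```

For example, a maintained-load test on a driven, closed-ended concrete pile encodes to (0, 1, 0, 1).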

TABLE 19.5 Selection of the optimal BPNN model architecture.

Case no. | Combination of numerical and categorical variables | Number of hidden nodes tried
1 | 17 (x1, ..., x17) | 1, 2, ..., 33, 34
2 | 17 + TT | 1, 2, ..., 35, 36
3 | 17 + TP | 1, 2, ..., 35, 36
4 | 17 + TI | 1, 2, ..., 35, 36
5 | 17 + PE | 1, 2, ..., 35, 36
6 | 17 + TT + TP (optimal) | 1, 2, ..., 37, 38
7 | 17 + TP + TI | 1, 2, ..., 37, 38
8 | 17 + TI + PE | 1, 2, ..., 37, 38
9 | 17 + TT + PE | 1, 2, ..., 37, 38
10 | 17 + TT + TI | 1, 2, ..., 37, 38
11 | 17 + TP + PE | 1, 2, ..., 37, 38
12 | 17 + TT + TP + TI | 1, 2, ..., 39, 40
13 | 17 + TP + TI + PE | 1, 2, ..., 39, 40
14 | 17 + TT + TI + PE | 1, 2, ..., 39, 40
15 | 17 + TT + TP + PE | 1, 2, ..., 39, 40
16 | 17 + TT + TP + TI + PE | 1, 2, ..., 41, 42

BPNN, back-propagation neural network.


3.4 Training and stopping criteria for back-propagation neural network models

Training, or learning, is the process of optimizing the connection weights based on first-order gradient descent. Its aim is to identify a global solution to what is typically a highly nonlinear optimization problem. The BPNN model has the ability to escape local minima in the error surface and thus produce optimal or near-optimal solutions. Stopping criteria determine whether the model has been optimally or suboptimally trained, and various methods can be used to decide when to stop training. The training set is used to adjust the connection weights, whereas the testing set measures the ability of the model to generalize; using this set, the performance of the model is checked at many stages during training, and training is stopped when the testing set error begins to increase. The preset rules for the transfer functions, the maximum epoch, and the stopping criteria are as follows, in MATLAB terms: logsig transfer function from the input layer to the hidden layer; tansig transfer function from the hidden layer to the output layer; maxepoch = 500; learning rate = 0.01; min_grad = 1e-15; mu_dec = 0.7; mu_inc = 1.03.
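The stopping rule described above (halt as soon as the testing-set error stops improving) can be sketched as a generic loop. This is an illustrative Python helper, not the MATLAB toolbox routine; the callback names are hypothetical:

```python
def train_with_early_stopping(train_step, testing_error, max_epoch=500):
    """Run training epochs, stopping when the testing-set error begins
    to increase; returns the epoch at which training stopped."""
    best = float("inf")
    for epoch in range(1, max_epoch + 1):
        train_step()                 # one weight-update pass on the training set
        err = testing_error()        # error measured on the testing set
        if err >= best:              # error no longer improving: stop
            return epoch
        best = err
    return max_epoch
```

`train_step` would perform one epoch of weight updates and `testing_error` would evaluate the current network on the testing set; `max_epoch=500` mirrors the maxepoch setting above.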

3.5 Validation

Once model training has been successfully accomplished, the performance of the trained model should be validated against data that have not been used in the learning process, known as the validation set. This ensures that the model can generalize robustly within the limits set by the training data, i.e., to new situations, rather than simply memorizing the input–output relationships contained in the training set.

4. The optimal back-propagation neural network model

Following the trial-and-error procedure of Section 3.3, different BPNN model architectures were tried, and the BPNN model with the highest coefficient of determination R2 for the testing data sets is considered the optimal model. Table 19.6 lists the testing-set R2 values for the BPNN models with different numbers of inputs and hidden nodes. It can be observed that the BPNN model with the 17 numerical variables and the categorical TT + TP variables as inputs, with two hidden nodes, is the optimal one.

5. Modeling results

Fig. 19.2A and B show the BPNN predictions for the training and testing data patterns, respectively. For pile settlement prediction, a considerably high R2 (approximately 0.9) is obtained for both the training (R2 = 0.856) and testing


TABLE 19.6 The optimal BPNN model selection.

Case no. | Combination of numerical and categorical variables | R2 for the testing sets
1 | 17 (x1, ..., x17) | 0.773
2 | 17 + TT | 0.888
3 | 17 + TP | 0.761
4 | 17 + TI | 0.833
5 | 17 + PE | 0.824
6 | 17 + TT + TP (optimal) | 0.908
7 | 17 + TP + TI | 0.816
8 | 17 + TI + PE | 0.738
9 | 17 + TT + PE | 0.827
10 | 17 + TT + TI | 0.788
11 | 17 + TP + PE | 0.789
12 | 17 + TT + TP + TI | 0.785
13 | 17 + TP + TI + PE | 0.743
14 | 17 + TT + TI + PE | 0.829
15 | 17 + TT + TP + PE | 0.801
16 | 17 + TT + TP + TI + PE | 0.763

BPNN, back-propagation neural network.
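The selection in Table 19.6 amounts to picking the input combination with the highest testing-set R2. A small Python sketch using the tabulated values (the function and dictionary names are illustrative):

```python
# Testing-set R2 for the 16 input combinations of Table 19.6
testing_r2 = {
    "17": 0.773, "17 + TT": 0.888, "17 + TP": 0.761, "17 + TI": 0.833,
    "17 + PE": 0.824, "17 + TT + TP": 0.908, "17 + TP + TI": 0.816,
    "17 + TI + PE": 0.738, "17 + TT + PE": 0.827, "17 + TT + TI": 0.788,
    "17 + TP + PE": 0.789, "17 + TT + TP + TI": 0.785,
    "17 + TP + TI + PE": 0.743, "17 + TT + TI + PE": 0.829,
    "17 + TT + TP + PE": 0.801, "17 + TT + TP + TI + PE": 0.763,
}

def select_optimal_model(results):
    # The architecture with the highest testing-set R2 is taken as optimal
    return max(results, key=results.get)
```

Applied to the table, this returns the 17 + TT + TP combination, consistent with case no. 6 being marked optimal.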

(R2 = 0.908) patterns. The plots also show that the developed BPNN model is less accurate in predicting small pile settlements, mainly as a result of bias (errors).

6. Parametric relative importance

The parametric relative importance determined by the BPNN is based on the method of Garson (1991), as discussed by Das and Basudhar (2006). Fig. 19.3 plots the relative importance of the input variables for the BPNN models. It can be observed that pile settlement is most strongly influenced by input variable x17 (applied load, P), followed by x8 (fs2) and x4 (length of pile, L). It is only marginally influenced by x15 (qc5) and x19 (type of pile), which also explains why input variable x19 only slightly enhances the predictive capacity of the BPNN model, from 0.888 for case no. 2 to 0.908 for case no. 6.
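Garson's method partitions the absolute input-to-hidden weights, scaled by the hidden-to-output weights, into a relative-importance share for each input. A compact NumPy sketch of this weight-partitioning calculation (an illustration of the general scheme, not the authors' exact implementation):

```python
import numpy as np

def garson_importance(W1, W2):
    """Relative importance of each input by Garson's weight partitioning.
    W1: (n_inputs, n_hidden) input-to-hidden weights.
    W2: (n_hidden,) hidden-to-output weights.
    Returns importances that sum to 1."""
    # Contribution of input i routed through hidden neuron k
    c = np.abs(W1) * np.abs(W2).reshape(1, -1)
    # Share of each input within each hidden neuron
    r = c / c.sum(axis=0, keepdims=True)
    # Total share per input, normalized across all inputs
    imp = r.sum(axis=1)
    return imp / imp.sum()
```

For the chapter's optimal model, W1 and W2 would be the two-hidden-neuron weight matrices tabulated in Appendix B.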

7. Model interpretability

For brevity, only the developed optimal 17 + TT + TP model is interpreted. The BPNN model is expressed through the trained connection weights, the


FIGURE 19.2 Prediction of pile settlements using BPNN. BPNN, back-propagation neural network.


FIGURE 19.3 Relative importance of the input variables for the optimal BPNN model. BPNN, back-propagation neural network.

bias, and the transfer functions. The mathematical expression for pile settlement obtained from the optimal 17 + TT + TP analysis is shown in Appendix A. In addition, Appendix B provides the weights and bias values used for the partitioning of BPNN weights for pile settlement. The specific procedures are given in Zhang (2013) and, for simplicity, are omitted here.

8. Summary and conclusion

A database containing 499 pile load test data sets with a full set of 21 variables was adopted to develop a BPNN model for predicting the load–settlement characteristics of piles. The predictive accuracy, model interpretability, and parametric sensitivity of the developed BPNN pile settlement model have been demonstrated. The performance measures indicate that the BPNN model provides reasonable predictions of pile settlement and can thus be used for this purpose.

Appendix A BPNN pile settlement model

The transfer functions used for the BPNN pile settlement output are the logsig function for the hidden layer and the tansig function for the output layer. The calculation of the BPNN output proceeds as follows. From the connection weights of a trained neural network, a mathematical equation relating the input parameters and the single output parameter Y can be developed:

Y = fsig{ b0 + Σ(k=1..h) [ wk · fsig( bhk + Σ(i=1..m) wik · Xi ) ] }   (A.1)


in which b0 is the bias at the output layer, wk is the weight connection between neuron k of the hidden layer and the single output neuron, bhk is the bias at neuron k of the hidden layer (k = 1, ..., h), wik is the weight connection between input variable i (i = 1, ..., m) and neuron k of the hidden layer, Xi is input parameter i, and fsig is the sigmoid (logsig or tansig) transfer function. Using the connection weights of the trained neural network, the BPNN model can be expressed mathematically through the following steps.

Step 1: Normalize the input values x1, x2, ..., x19 linearly using

x_norm = 2(x_actual - x_min)/(x_max - x_min) - 1   (A.2)

Letting the actual xi = Xia and the normalized xi = Xi, this gives

X1 = -1 + 2(X1a - 796.74)/(33106.34 - 796.74)
X2 = -1 + 2(X2a - 100)/(7854 - 100)
X3 = -1 + 2(X3a - 58.5)/(957.56 - 58.5)
X4 = -1 + 2(X4a - 5.5)/(56.39 - 5.5)
X5 = -1 + 2(X5a - 5.5)/(45 - 5.5)
X6 = -1 + 2(X6a - 0)/(10.38 - 0)
X7 = -1 + 2(X7a - 0)/(274 - 0)
X8 = -1 + 2(X8a - 0.05)/(17.16 - 0.05)
X9 = -1 + 2(X9a - 1.83)/(275.5 - 1.83)
X10 = -1 + 2(X10a - 0.3)/(31.54 - 0.3)
X11 = -1 + 2(X11a - 1.615)/(618.7 - 1.615)
X12 = -1 + 2(X12a - 0.25)/(33.37 - 0.25)
X13 = -1 + 2(X13a - 4.421)/(1293 - 4.421)
X14 = -1 + 2(X14a - 0.25)/(53.82 - 0.25)
X15 = -1 + 2(X15a - 7.99)/(559 - 7.99)
X16 = -1 + 2(X16a - 0.25)/(70.29 - 0.25)
X17 = -1 + 2(X17a - 0)/(30000 - 0)
X18 = -1 + 2(X18a - 0)/(1 - 0)
X19 = -1 + 2(X19a - 0)/(2 - 0)

Step 2: Calculate the normalized output Y1 using the following expressions:

A1 = 0.0796 + 5.0087 logsig(X1) - 3.3926 logsig(X2) + 6.8371 logsig(X3) - 75.6342 logsig(X4) + 45.8013 logsig(X5) + 13.0191 logsig(X6) + 24.0145 logsig(X7) - 96.1639 logsig(X8) - 41.1331 logsig(X9) + 14.57 logsig(X10) + 24.0111 logsig(X11) + 58.357 logsig(X12) - 23.5117 logsig(X13) - 21.0635 logsig(X14) - 2.6677 logsig(X15) + 36.8799 logsig(X16) + 18.098 logsig(X17) - 15.3542 logsig(X18) + 1.7168 logsig(X19)   (A.21)

A2 = -61.9379 - 7.165 logsig(X1) + 9.7258 logsig(X2) + 4.0935 logsig(X3) + 9.7937 logsig(X4) + 1.3488 logsig(X5) + 8.2361 logsig(X6) + 0.1617 logsig(X7) - 18.4019 logsig(X8) + 0.705 logsig(X9) + 4.9512 logsig(X10) + 1.7347 logsig(X11) + 3.1179 logsig(X12) - 1.1133 logsig(X13) - 0.4005 logsig(X14) + 0.5711 logsig(X15) + 4.4941 logsig(X16) - 84.7805 logsig(X17) + 0.9767 logsig(X18) + 1.6406 logsig(X19)   (A.22)

B1 = 2.6304 tanh(A1)   (A.23)

B2 = -3.0709 tanh(A2)   (A.24)

C1 = -1.3496 + B1 + B2   (A.25)

Y1 = C1   (A.26)

Step 3: Denormalize the output to obtain the pile settlement:

dm = 0 + (137.88 - 0)(Y1 + 1)/2   (A.27)

Note: logsig(x) = 1/(1 + exp(-x)) and tanh(x) = 2/(1 + exp(-2x)) - 1.
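The normalization of Eq. (A.2) and the denormalization of Eq. (A.27) translate directly into code. A small Python sketch, with illustrative function names and the bounds taken from the equations above:

```python
def normalize(x, x_min, x_max):
    # Eq. (A.2): linear scaling of an input to the interval [-1, 1]
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def denormalize_settlement(y, d_min=0.0, d_max=137.88):
    # Eq. (A.27): map the normalized network output back to settlement in mm
    return d_min + (d_max - d_min) * (y + 1.0) / 2.0
```

For example, for x1 (axial rigidity EA) the bounds are 796.74 and 33106.34 MN, so an actual value at either bound normalizes to -1 or +1 exactly, and a network output of +1 denormalizes to the maximum settlement of 137.88 mm.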

Appendix B Weights and bias values for the BPNN pile settlement model

See Tables B.1–B.3.

TABLE B.1 Weights for the input layer to the hidden layer.

Input | Hidden neuron 1 | Hidden neuron 2
1 | 5.009 | -7.165
2 | -3.393 | 9.7258
3 | 6.837 | 4.0935
4 | -75.63 | 9.7937
5 | 45.801 | 1.3488
6 | 13.019 | 8.2361
7 | 24.015 | 0.1617
8 | -96.16 | -18.40
9 | -41.13 | 0.705
10 | 14.57 | 4.9512
11 | 24.011 | 1.7347
12 | 58.357 | 3.1179
13 | -23.51 | -1.113
14 | -21.06 | -0.401
15 | -2.668 | 0.5711
16 | 36.88 | 4.4941
17 | 18.098 | -84.78
18 | -15.35 | 0.9767
19 | 1.717 | 1.6406

TABLE B.2 Bias for the input layer to the hidden layer.

Theta | Hidden neuron 1: 0.0796 | Hidden neuron 2: -61.9379

TABLE B.3 Weights for the hidden layer to the output layer.

Weight | Hidden neuron 1: 2.6304 | Hidden neuron 2: -3.0709

The bias value for the hidden layer to the output layer is -1.3496.

Acknowledgments

The authors would like to express their appreciation to Nejad and Jaksa (2017) for making their pile load–settlement database available for this work.

References Abu-Kiefa, M.A., 1998. General regression neural networks for driven piles in cohesionless soils. Journal of Geotechnical and Geoenvironmental Engineering 124 (12), 1177e1185. Ahmad, I., El Naggar, H., Kahn, A.N., 2007. Artificial neural network application to estimate kinematic soil pile interaction response parameters. Soil Dynamics and Earthquake Engineering 27 (9), 892e905. Alkroosh, I., Nikraz, H., 2011a. Correlation of pile axial capacity and CPT data using gene expression programming. Geotechnical and Geological Engineering 29, 725e748. Alkroosh, I., Nikraz, H., 2011b. Simulating pile load-settlement behavior from CPT data using intelligent computing. Central European Journal of Engineering 1 (3), 295e305. Ardalan, H., Eslami, A., Nariman-Zahed, N., 2009. Piles shaft capacity from CPT and CPTu data by polynomial neural networks and genetic algorithms. Computers and Geotechnics 36, 616e625. Avasarala, S.K.V., Davidson, J.L., McVay, A.M., 1994. An evaluation of predicted ultimate capacity of single piles from spile and unpile programs. In: Proc Int Conf on Design and Construction of Deep Foundations, 2, pp. 12e723. Orlando: FHWA. Ballouz, M., Nasr, G., Briaud, J.-L., 1991. Dynamic and Static Testing of Nine Drilled Shafts at Texas A&M University Geotechnical Sites. Res Rep. Civil Engineering, Texas A&M University, College Station, Texas, p. 127. Berardi, R., Bovolenta, R., 2005. Pile settlement evaluation using field stiffness nonlinearity. Geotechnical Engineering 158, 35e44. Briaud, J.L., Tucker, L.M., 1988. Measured and predicted axial capacity of 98 piles. J Geotech Eng 114 (9), 984e1001. Brown, M.J., Hyde, A.F.L., Anderson, W.F., 2006. Analysis of a rapid load test on an instrumented bored pile in clay. Ge´otechnique 56 (9), 627e638.

Campanella, R.G., Gillespie, D., Robertson, P.K., 1981. Pore pressure during cone penetration testing. In: Proc of 2nd European Symp on Penetration Testing, Amsterdam, vol. 2, pp. 507–512.
Chan, W.T., Chow, Y.K., Liu, L.F., 1995. Neural network: an alternative to pile driving formulas. Computers and Geotechnics 17, 135–156.
CH2M Hill, 1987. Geotechnical Report on Indicator Pile Testing and Static Pile Testing, Berths 225–229 at Port of Los Angeles. CH2M Hill, Los Angeles.
Das, S.K., Basudhar, P.K., 2006. Undrained lateral load capacity of piles in clay using artificial neural network. Computers and Geotechnics 33 (8), 454–459.
Demuth, H., Beale, M., 2003. Neural Network Toolbox for MATLAB, User Guide Version 4.1. The MathWorks Inc.
Fellenius, B.H., Harris, D.E., Anderson, D.G., 2004. Static loading test on a 45 m long pile in Sandpoint, Idaho. Canadian Geotechnical Journal 41, 613–628.
Florida Department of Transportation (FDOT), 2003. Large Diameter Cylinder Pile Database. Research Management Center.
Gambini, F., 1985. Experience in Italy with centricast concrete piles. In: Proc Int Symp on Penetrability and Drivability of Piles, San Francisco, vol. 1, pp. 97–100.
Garson, G.D., 1991. Interpreting neural-network connection weights. AI Expert 6 (7), 47–51.
Goh, A.T.C., 1996. Pile driving records reanalyzed using neural networks. Journal of Geotechnical Engineering 122 (6), 492–495.
Goh, A.T.C., Kulhawy, F.H., Chua, C.G., 2005. Bayesian neural network analysis of undrained side resistance of drilled shafts. Journal of Geotechnical and Geoenvironmental Engineering 131 (1), 84–93.
Harris, E., Mayne, P., 1994. Axial compression behavior of two drilled shafts in Piedmont residual soils. In: Proc Int Conf on Design and Construction of Deep Foundations, vol. 2. FHWA, Washington, DC, pp. 352–367.
Haustoefer, I.J., Plesiotis, S., 1988. Instrumented dynamic and static pile load testing at two bridges. In: Proc 5th Australia New Zealand Conf on Geomechanics: Prediction versus Performance, Sydney, pp. 514–520.
Horvitz, G.E., Stettler, D.R., Crowser, J.C., 1986. Comparison of predicted and observed pile capacity. In: Proc Symp on Cone Penetration Testing. ASCE, St. Louis, pp. 413–433.
Lee, I.M., Lee, J.H., 1996. Prediction of pile bearing capacity using artificial neural networks. Computers and Geotechnics 18 (3), 189–200.
Matsumoto, T., Michi, Y., Hirono, T., 1995. Performance of axially loaded steel pipe piles driven in soft rock. Journal of Geotechnical and Geoenvironmental Engineering 121 (4), 305–315.
McCabe, B.A., Lehane, B.M., 2006. Behavior of axially loaded pile groups driven in clayey silt. Journal of Geotechnical and Geoenvironmental Engineering 132 (3), 401–410.
Murthy, V.N.S., 2002. Principles and Practices of Soil Mechanics and Foundation Engineering. Marcel Dekker Inc.
Nejad, F.P., Jaksa, M.B., 2017. Load-settlement behavior modeling of single piles using artificial neural networks and CPT data. Computers and Geotechnics 89, 9–21.
Nottingham, L.C., 1975. Use of Quasi-Static Friction Cone Penetrometer Data to Predict Load Capacity of Displacement Piles. PhD Thesis. Department of Civil Engineering, University of Florida.
Omer, J.R., Delpak, R., Robinson, R.B., 2006. A new computer program for pile capacity prediction using CPT data. Geotechnical and Geological Engineering 24, 399–426.
O'Neill, M.W., 1988. Pile Group Prediction Symposium: Summary of Prediction Results. FHWA draft report.


Paik, K.H., Salgado, R., 2003. Determination of bearing capacity of open-ended piles in sand. Journal of Geotechnical and Geoenvironmental Engineering 129 (1), 46–57.
Poulos, H.G., Davis, E.H., 1980. Pile Foundation Analysis and Design. Wiley.
Poulos, H.G., Davis, A.J., 2005. Foundation design for the Emirates Twin Towers, Dubai. Canadian Geotechnical Journal 42, 716–730.
Reese, J.D., O'Neill, M.W., Wang, S.T., 1988. Drilled Shaft Tests, Interchange of West Belt Roll Road and US290 Highway, Texas. Lymon C. Reese and Associates, Austin, Texas.
Shahin, M.A., 2010. Intelligent computing for modelling axial capacity of pile foundations. Canadian Geotechnical Journal 47 (2), 230–243.
Shahin, M.A., 2014. Load-settlement modeling of axially loaded steel driven piles using CPT-based recurrent neural networks. Soils and Foundations 54 (3), 515–522.
Shahin, M.A., Jaksa, M.B., 2006. Pullout capacity of small ground anchors by direct cone penetration test methods and neural networks. Canadian Geotechnical Journal 43 (6), 626–637.
Tarawneh, B., Imam, R., 2014. Regression versus artificial neural networks: predicting pile setup from empirical data. KSCE Journal of Civil Engineering 18 (4), 1018–1027.
Teh, C.I., Wong, K.S., Goh, A.T.C., Jaritngam, S., 1997. Prediction of pile capacity using neural networks. Journal of Computing in Civil Engineering 11 (2), 129–138.
Tucker, L.M., Briaud, J.L., 1988. Analysis of Pile Load Test Program at Lock and Dam 26 Replacement Project. Final Report. US Army Corps of Engineers.
Tumay, M.Y., Fakhroo, M., 1981. Pile capacity in soft clays using electric QCPT data. In: Proc Conf on Cone Penetration Testing and Experience. ASCE, St. Louis, pp. 434–455.
U.S. Department of Transportation, 2006. A Laboratory and Field Study of Composite Piles for Bridge Substructures. FHWA-HRT-04-043.
Vesic, A.S., 1977. Design of Pile Foundations. National Cooperative Highway Research Program, Synthesis of Practice No. 42. Transportation Research Board, Washington, DC.
Viergever, M.A., 1982. Relation between cone penetration and static loading of piles on locally strongly varying sand layers. In: Proc of 2nd European Symp on Penetration Testing, Amsterdam, vol. 2, pp. 927–932.
Zhang, W.G., 2013. Probabilistic Risk Assessment of Underground Rock Caverns. PhD Thesis. Nanyang Technological University.
Zhang, W.G., Goh, A.T.C., 2016. Multivariate adaptive regression splines and neural network models for prediction of pile drivability. Geoscience Frontiers 7, 45–52.