Computers and Structures 230 (2020) 106171
A fuzzy weighted relative error support vector machine for reverse prediction of concrete components

Zongwen Fan a, Raymond Chiong a,*, Zhongyi Hu b, Yuqing Lin a

a School of Electrical Engineering and Computing, The University of Newcastle, Callaghan, NSW 2308, Australia
b School of Information Management, Wuhan University, Wuhan 430072, PR China
Article history: Received 15 July 2019; Accepted 15 November 2019.

Keywords: Relative error support vector machine (RE-SVM); Fuzzy weighted RE-SVM; Reverse prediction of concrete components; Multi-input, multi-output prediction

Abstract

Concrete is one of the most commonly used construction materials in civil engineering. Being able to accurately predict concrete components based on concrete strength, slump and flow is crucial for saving manpower and financial resources. The reverse prediction nature of this task, however, makes it a very difficult problem to solve. Relative error support vector machines (RE-SVMs) have been successfully applied to tackle this problem using relative errors as equality constraints. Nevertheless, RE-SVMs are sensitive to noise, and their target values cannot be zero. In this paper, we present a fuzzy weighted RE-SVM (FW-RE-SVM) to address the limitations of RE-SVMs. A fuzzy weighted operation is first utilised to improve the robustness of RE-SVMs by assigning weights to the relative error constraints. A small value is further added to the denominators of the relative error constraints, in case their values are equal to zero. This helps to generalise the approach. Experimental results confirm that our proposed model performs very well for reverse prediction of concrete components under both multi-input, one-output and multi-input, multi-output scenarios.

© 2019 Elsevier Ltd. All rights reserved.
* Corresponding author. E-mail addresses: [email protected] (Z. Fan), [email protected] (R. Chiong), [email protected] (Z. Hu), [email protected] (Y. Lin).
https://doi.org/10.1016/j.compstruc.2019.106171

1. Introduction

As one of the most widely used construction materials, concrete is commonly utilised for the construction of homes, roads and dams, among others [1]. Concrete strength is a very important measure for assessing the performance of concrete in designing buildings and other structures [2], as well as for engineering judgement and quality assurance of the produced concrete [3]. However, concrete strength is also a highly non-linear function of concrete components such as cement, blast furnace slag (Slag), fly ash, water, age, superplasticiser (SP), coarse aggregate (CA), and fine aggregate (FA) [4]. Concrete is a very complex construction material [5]: its forming process is determined not only by the mixture of its components (complex physical and chemical processes at different stages) but also by its environment (e.g., temperature, humidity).

To date, forward prediction of concrete strength, slump and flow has been widely studied. The three most commonly used mathematical regression models in civil engineering are Scheffe's regression model [6], Osadebe's regression model [7], and Ibearugbulem's regression model [8]. Although these models can, to some extent, approximate concrete strength, it is rather difficult for them to measure concrete strength accurately, both qualitatively and quantitatively. Machine learning algorithms have thus been introduced as alternatives for concrete strength prediction, given their success in practical applications [9]. For example, artificial neural networks (ANNs) were applied to predict the compressive strength of concrete in multiple studies [10-15], with results showing that ANNs perform well in predicting concrete strength. Support vector machines (SVMs) were also used to model concrete strength with good estimation accuracy [16,17].

While researchers have devoted much attention to predicting concrete strength, slump and flow from concrete components (i.e., forward prediction), only limited work has attempted to predict certain concrete components based on concrete strength, slump, flow, and the remaining concrete components. Unlike forward prediction, which predicts the target outputs (concrete strength, slump, or flow) from inputs (concrete components), reverse prediction (or inverse prediction) predicts inputs (concrete components) from the target outputs (concrete strength, slump, and flow) [18]. Accurate prediction of concrete components is very important in practical engineering applications to reduce the cost of manpower, concrete components, and financial resources [4]. For example, if an engineer needs to build a house, and he or she already knows the concrete strength (required for the house) and
has some concrete components (e.g., water, fly ash), making the best use of the remaining concrete components (e.g., cement, slag) to obtain the same or similar quality and quantity of concrete can be financially beneficial. However, the reverse prediction of concrete components is an inverse problem, which is much more difficult than forward prediction, because its solution is often ill-posed or ill-conditioned [19,20], and its existence, uniqueness, and certainty are difficult to verify [21,22]. Even a small change (e.g., noise or measurement error) in the data can produce an extremely large error in the solution [23]. In addition, there may exist many solutions for inverse problems [24]. For example, given a simple linear function c = f(a, b) = a + b, it is easy to get c given a and b. However, given c, we cannot easily recover the right a and b, because too many possible combinations satisfy the equation. A well-posed problem needs to satisfy three requirements: given a dataset, a solution exists; the solution is unique; and the solution depends continuously on the data [25]. In other words, the problem is ill-posed if any one of these constraints is violated.

A relative error SVM (RE-SVM) optimised by particle swarm optimisation (PSO) [26,4] has shown the ability to reversely predict concrete components, with results better than ANNs and the least-square SVM (LS-SVM). However, the RE-SVM is not robust to outliers and/or noise, and the denominator of any relative error constraint cannot be zero. To address these limitations, we present a fuzzy weighted RE-SVM (FW-RE-SVM) in this paper. A fuzzy weighted technique is applied to improve the robustness of the RE-SVM. In addition, a small value, η, is added to the denominator of the relative error to avoid the denominator being zero (note that for our problem, the target values, i.e., concrete components, are greater than or equal to zero). The remainder of this paper is organised as follows.
In Section 2, RE-SVMs for multi-input, one-output and multi-input, multi-output scenarios are presented. In Section 3, the fuzzy weighted operation is first described, followed by details of our proposed FW-RE-SVM for reverse prediction of one-output and multi-output concrete components. Experimental results and analyses for simultaneous reverse prediction of concrete components are discussed in Section 4. Finally, we draw conclusions in Section 5.

2. RE-SVMs

The SVM, proposed by Vapnik and Chervonenkis [27], is based on the structural risk minimisation principle and statistical learning theory [28,29]. By solving quadratic programming problems, it can obtain globally optimal solutions, and it has thus been widely used in many applications [30-35]. Unlike traditional SVMs, which use inequality constraints to optimise convex problems [36], LS-SVMs utilise equality constraints [37] that simplify the calculation by transforming the quadratic programming problem into a linear equation problem [38]. Instead of using the error constraints of LS-SVMs, shown in Eq. (1), RE-SVMs apply the relative error constraints expressed in Eq. (2):

$$\arg\min_{w,b,\xi} \frac{1}{2} w^T w + \frac{1}{2} c \sum_{i=1}^{n} \xi_i^2 \quad \text{s.t.} \quad \xi_i = y_i - (w^T \phi(x_i) + b), \tag{1}$$

where $n$ is the number of training samples $\{(x_i, y_i)\}_{i=1}^{n}$, $w$ is a weight vector, $w^T$ is the transpose of $w$, $c > 0$ is a penalty (or regularisation) parameter, $\xi_i$ is the relaxation factor, $b$ is the bias of the linear function $f(x) = w\phi(x) + b$, and $\phi(x_i)$ is a function that maps the input space to a higher-dimensional space.

$$\arg\min_{w,b,e} \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} e_i^2 \quad \text{s.t.} \quad e_i = \frac{w^T \phi(x_i) + b - y_i}{y_i}, \tag{2}$$

where $C > 0$ is a penalty parameter, and $e_i$ is the relative error between the target and prediction values for the $i$th sample. To address the optimisation problem in Eq. (2), the Lagrangian function for the RE-SVM is expressed as:

$$L(w, b, e, \alpha) = \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} e_i^2 - \sum_{i=1}^{n} \alpha_i \left[ w^T \phi(x_i) + b - y_i e_i - y_i \right], \tag{3}$$

where $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_n]$ are Lagrange multipliers. With the use of the Karush-Kuhn-Tucker (KKT) conditions [39], the matrix solution for the optimisation problem can be expressed as:

$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + y^T y C^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \tag{4}$$

where $y = [y_1, y_2, \ldots, y_n]^T$, $\mathbf{1} = [1, 1, \ldots, 1]^T$, $I = \mathrm{diag}(1, 1, \ldots, 1)$ is the identity matrix, $\Omega_{ij} = \phi(x_i)\phi(x_j) = K(x_i, x_j)$, and $K(x_i, x_j)$ is a kernel function [40], i.e., a symmetric function satisfying Mercer's condition [41]. Based on matrix inversion in Eq. (4), the parameters $\alpha$ and $b$ are obtained for the RE-SVM in Eq. (5):

$$f(x) = \sum_{i=1}^{n} \alpha_i K(x_i, x) + b. \tag{5}$$

So far, the multi-input, one-output RE-SVM has been presented. It is easy to extend RE-SVMs to a multi-input, multi-output scenario (i.e., RE-MIMO-SVMs). Given an $m$-input, $l$-output problem, Eq. (2) can be reformulated as:

$$\arg\min_{w,b,e} \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} e_i e_i^T \quad \text{s.t.} \quad e_i = (y_i^T y_i)^{-1} y_i^T \left( y_i - (w^T \phi(x_i) + b) \right), \tag{6}$$

where $e_i = [e_i^1, e_i^2, \ldots, e_i^l]$ is the relative error vector between the $i$th sample's target and prediction values, $y_i = [y_i^1, y_i^2, \ldots, y_i^l]$ is the output vector of the $i$th sample, and $b = [b^1, b^2, \ldots, b^l]$ is a bias vector. To address the optimisation problem in Eq. (6), the Lagrangian function is defined as:

$$L(w, b, e, \alpha) = \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} e_i e_i^T - \sum_{i=1}^{n} \alpha_i \left[ w^T \phi(x_i) + b + y_i \circ e_i - y_i \right], \tag{7}$$

where $\alpha_i = [\alpha_i^1, \alpha_i^2, \ldots, \alpha_i^l]$ are Lagrangian multipliers and $\circ$ denotes element-wise multiplication. According to the KKT conditions [42], the conditions for optimality are:

$$\begin{cases} \dfrac{\partial L}{\partial \alpha_i} = 0 \rightarrow y_i = w^T \phi(x_i) + b + y_i \circ e_i, \\[4pt] \dfrac{\partial L}{\partial e_i} = 0 \rightarrow C e_i = y_i \circ \alpha_i, \\[4pt] \dfrac{\partial L}{\partial b} = 0 \rightarrow \sum_{i=1}^{n} \alpha_i = 0, \\[4pt] \dfrac{\partial L}{\partial w} = 0 \rightarrow w = \sum_{i=1}^{n} \alpha_i \phi(x_i). \end{cases} \tag{8}$$

Then, after eliminating the variables $w$ and $e_i$, the optimisation problem can be transformed into solving a linear matrix equation:

$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + Y Y^T C^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ Y \end{bmatrix}, \tag{9}$$

where $Y = [y_1, y_2, \ldots, y_n]^T$ is the output matrix, $y_i = [y_i^1, y_i^2, \ldots, y_i^l]$ is the output vector of the $i$th sample, and $\mathbf{0} = [0, 0, \ldots, 0]$ is a zero vector with $l$ zeros.
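To make the construction above concrete, the following is a minimal NumPy sketch of the one-output RE-SVM, not the authors' code. It reads the diagonal term $y^T y C^{-1} I$ of Eq. (4) as per-sample entries $y_i^2 / C$, which is what eliminating $w$ and $e_i$ from the relative error constraint of Eq. (2) yields; the Gaussian kernel anticipates Eq. (16).

```python
import numpy as np

def gaussian_kernel(X, Z, sigma2):
    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 * sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma2))

def fit_re_svm(X, y, C, sigma2):
    """Solve the RE-SVM saddle-point system of Eq. (4) for (alpha, b)."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                 # first row enforces 1^T alpha = 0
    A[1:, 0] = 1.0                 # bias column
    # Omega + diag(y_i^2) / C: the relative errors scale the regularisation
    A[1:, 1:] = gaussian_kernel(X, X, sigma2) + np.diag(y ** 2) / C
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]         # alpha, b

def re_svm_predict(X_train, alpha, b, X_new, sigma2):
    # f(x) = sum_i alpha_i K(x_i, x) + b, Eq. (5)
    return gaussian_kernel(X_new, X_train, sigma2) @ alpha + b
```

The multi-output system of Eq. (9) follows the same pattern, with one right-hand-side column per output.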
By solving the matrix equation in Eq. (9), the RE-MIMO-SVM can be expressed as follows:

$$F(x) = \sum_{i=1}^{n} \alpha_i K(x_i, x) + b. \tag{10}$$

To further improve the predictive performance of the RE-SVM and RE-MIMO-SVM models, PSO [43] can be applied to optimise the penalty parameter $C$ and the variance of the Gaussian kernel function, $\sigma^2$. For the rest of this paper, we use "PSO-RE-SVM" to denote the PSO-based RE-SVM and "PSO-RE-MIMO-SVM" to represent the PSO-based RE-MIMO-SVM.

3. The proposed model

In this section, we describe the details of our proposed FW-RE-SVM. The fuzzy weighted operation is first introduced, and detailed processes of the FW-RE-SVM are then given for both multi-input, one-output and multi-input, multi-output scenarios. After that, the reverse prediction of concrete components using our proposed model is presented.

3.1. Fuzzy weighted operation

Fuzzy set theory was originally proposed by Zadeh [44]. It is capable of handling problems with vagueness and uncertainty [45]. Considering the uncertainties in practical applications, a fuzzy weighted operation is introduced to improve the robustness of the RE-SVM to noise or outliers by assigning weights to the relative error constraints. This fuzzy weighted operation also reflects the different contributions of samples to the learning of the decision surface [46,47]. The fuzzy weight for the $i$th sample can be expressed as:

$$\omega_i = \prod_{p=1}^{m} \mu_{A_p^i}(x_i^p), \tag{11}$$

where $m$ is the number of input features, $p$ is the index of each input feature, $\omega_i$ is the weight for the $i$th data sample, and $\mu(\cdot)$ is a Gaussian membership function given as follows:

$$\mu_{A_{ij}^p}(x_i^p) = \exp\left( -\frac{(c_{ij} - x_i^p)^2}{2\sigma_{ij}^2} \right). \tag{12}$$

Here, $c_{ij}$ and $\sigma_{ij}$ are the centre and bandwidth of the $j$th fuzzy set $A_j$ for the $p$th input feature of the $i$th sample, respectively.

3.2. FW-RE-SVM

We extend the RE-SVM by using the fuzzy weighted operation to assign weights to the relative error constraints, and we allow the target values to be any non-negative real number by adding a small value, $\eta$ (e.g., $\eta = 10^{-4}$), to the denominators of the relative error constraints. The optimisation problem for the proposed method can then be expressed as follows:

$$\arg\min_{w,b,e} \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} \omega_i e_i^2 \quad \text{s.t.} \quad e_i = \frac{w^T \phi(x_i) + b - y_i}{y_i + \eta}, \tag{13}$$

where $C > 0$ is a penalty parameter, $w$ is a weight vector, $b$ is a bias term, $\phi(\cdot)$ is a mapping function, and $e_i$ is the $i$th sample's relative error between the target value and its prediction value (note that the output value $y_i$ can be zero in Eq. (13)). The second term, $\frac{1}{2} C \sum_{i=1}^{n} \omega_i e_i^2$, makes our proposed FW-RE-SVM robust to outliers and noisy data. The reason for using relative errors as constraints here is that, in civil engineering, the relative error is a good indicator for measuring experimental performance [48-50].

To solve Eq. (13), the Lagrangian function is introduced as:

$$L(w, b, e, \alpha) = \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} \omega_i e_i^2 - \sum_{i=1}^{n} \alpha_i \left[ w^T \phi(x_i) + b - y_i e_i - y_i - \eta e_i \right]. \tag{14}$$

With the use of the KKT conditions [51], the matrix solution of Eq. (14) can be expressed as:

$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + C^{-1} \Phi I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \tag{15}$$

where $\Phi = (y + \eta)(y + \eta)^T ./ \omega$ is a matrix with $\omega = \mathrm{diag}[\omega_1, \omega_2, \ldots, \omega_n]$, and the symbol "./" means that the element-wise division applies only to the diagonal values; $y = [y_1, y_2, \ldots, y_n]^T$ is an output vector, $\mathbf{1} = [1, 1, \ldots, 1]^T$ is the identity vector with $n$ ones, $I$ is the identity matrix, $\Omega_{ij} = \phi(x_i)\phi(x_j) = K(x_i, x_j)$, and $K(x_i, x_j)$ is the kernel function. We use the Gaussian kernel, shown in Eq. (16), for its good performance [52]:

$$K(x_i, x_j) = \exp\left( -\frac{\|x_i - x_j\|^2}{2\sigma^2} \right), \tag{16}$$

where $\sigma$ is the kernel's bandwidth. Based on matrix inversion in Eq. (15), the parameters $\alpha$ and $b$ can be obtained for the FW-RE-SVM as follows:

$$g(x) = \sum_{i=1}^{n} \alpha_i K(x_i, x) + b. \tag{17}$$

To extend the FW-RE-SVM to a multi-input, multi-output scenario, the optimisation problem can be expressed as follows:

$$\arg\min_{w,b,e} \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} \omega_i e_i e_i^T \quad \text{s.t.} \quad e_i = \left( (y_i + \eta)^T (y_i + \eta) \right)^{-1} (y_i + \eta)^T \left( w^T \phi(x_i) + b - y_i \right), \tag{18}$$

where $w$ and $b = [b^1, b^2, \ldots, b^l]$ are weight and bias vectors, respectively, $w^T$ is the transpose of $w$, and $e_i = [e_i^1, e_i^2, \ldots, e_i^l]$ is the relative error vector between the $i$th sample's target and prediction values. Then, the Lagrangian function of the optimisation problem can be expressed as:

$$L(w, b, e, \alpha) = \frac{1}{2} w^T w + \frac{1}{2} C \sum_{i=1}^{n} \omega_i e_i e_i^T - \sum_{i=1}^{n} \alpha_i \left[ w^T \phi(x_i) + b - y_i \circ e_i - y_i - \eta e_i \right], \tag{19}$$

where $\alpha_i = [\alpha_i^1, \alpha_i^2, \ldots, \alpha_i^l]$ are Lagrangian multipliers and $\circ$ denotes element-wise multiplication. According to the KKT conditions, the conditions for optimality are:

$$\begin{cases} \dfrac{\partial L}{\partial w} = 0 \rightarrow w = \sum_{i=1}^{n} \alpha_i \phi(x_i), \\[4pt] \dfrac{\partial L}{\partial b} = 0 \rightarrow \sum_{i=1}^{n} \alpha_i = 0, \\[4pt] \dfrac{\partial L}{\partial e_i} = 0 \rightarrow C e_i \omega_i + (y_i + \eta) \circ \alpha_i = 0, \\[4pt] \dfrac{\partial L}{\partial \alpha_i} = 0 \rightarrow y_i = w^T \phi(x_i) + b - y_i \circ e_i - \eta e_i. \end{cases} \tag{20}$$

After eliminating the parameters $w$ and $e_i$ by substitution, the matrix solution for the optimisation problem can be expressed as follows:

$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + C^{-1} \Phi I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ Y \end{bmatrix}, \tag{21}$$

where $\Phi = (Y + \eta)(Y + \eta)^T ./ \omega$ is a symmetric matrix, $\omega = \mathrm{diag}[\omega_1, \omega_2, \ldots, \omega_n]$ and the remaining elements are equal to
one; the symbol "./" in $(Y + \eta)(Y + \eta)^T ./ \omega$ means that the element-wise division applies only to the diagonal values; $Y = [y_1, y_2, \ldots, y_n]^T$ is the output matrix, $y_i = [y_i^1, y_i^2, \ldots, y_i^l]$ is an $l$-output vector, $\mathbf{1} = [1, 1, \ldots, 1]^T$ is an identity vector, $I = \mathrm{diag}(1, 1, \ldots, 1)$ is the identity matrix, $b = [b^1, b^2, \ldots, b^l]$ is a bias vector, and $\Omega_{ij} = \phi(x_i)^T \phi(x_j) = K(x_i, x_j)$. By applying the matrix operation in Eq. (21), the parameters $\alpha$ and $b$ can be obtained for the multi-output FW-RE-SVM as follows:

$$G(x) = \sum_{i=1}^{n} \alpha_i K(x_i, x) + b. \tag{22}$$
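The one-output FW-RE-SVM can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the fuzzy-set centres and bandwidths of Eq. (12) are set here to the per-feature mean and standard deviation (the paper leaves their design open), and the diagonal term of Eq. (15) is read as $(y_i + \eta)^2 / (C \omega_i)$, which follows from eliminating $w$ and $e_i$ under the constraint of Eq. (13).

```python
import numpy as np

def gaussian_kernel(X, Z, sigma2):
    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 * sigma^2)), Eq. (16)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma2))

def fuzzy_weights(X):
    """Fuzzy weight of each sample, Eqs. (11)-(12): the product over input
    features of Gaussian memberships. Centres/bandwidths are the per-feature
    mean and standard deviation (an assumed, illustrative choice)."""
    c = X.mean(axis=0)
    s = X.std(axis=0) + 1e-12          # guard against zero-variance features
    mu = np.exp(-((X - c) ** 2) / (2.0 * s ** 2))
    return mu.prod(axis=1)

def fit_fw_re_svm(X, y, C, sigma2, eta=1e-4):
    """Solve the FW-RE-SVM linear system of Eq. (15) for (alpha, b)."""
    n = len(y)
    w = fuzzy_weights(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    # Diagonal term (y_i + eta)^2 / (C * omega_i): low-weight (outlying)
    # samples are allowed larger relative errors, and eta keeps the
    # relative error defined when y_i = 0.
    A[1:, 1:] = gaussian_kernel(X, X, sigma2) + np.diag((y + eta) ** 2 / w) / C
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]             # alpha, b
```

Prediction then follows Eq. (17): $g(x) = \sum_i \alpha_i K(x_i, x) + b$.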
The main difference from the RE-SVM is that we make the method robust to outliers/noise and allow the target values to be zero.

3.3. FW-RE-SVM for reverse prediction of concrete components

Reverse (or inverse) problems arise in many practical applications [53-57], ranging from environmental sciences [58] and communication studies [59] to geophysics [60]. Addressing these reverse problems is not only very important but also very difficult [61], mainly because they cannot guarantee a mathematically unique solution [62]. In addition, they are often ill-posed, and their solutions can oscillate wildly even with a small change or noise [63].

The transformation from a forward problem to a reverse problem is depicted in Fig. 1. The left side is an m-input, l-output problem, while its corresponding l-input, m-output reverse problem is shown on the right side. Usually, for forward prediction, m is larger than l (i.e., the number of input features is greater than the number of output features), which means reverse prediction will have more output features than input features. In this case, we cannot obtain a satisfying solution, because such an ill-defined problem can lead to more than one solution, making prediction very difficult or impossible [64]. Some changes to the dataset are thus needed. For our problem of reverse prediction of concrete components, we have l (l < m) outputs, which means we can reversely predict at most l concrete components, leaving m - l components. To make the problem solvable, we add i known components to the inputs to predict the remaining components, subject to the constraint m - i <= l, as described in Fig. 2. A flowchart of the FW-RE-SVM for reverse prediction of multiple concrete components is given in Fig. 3, where d is the number of components we want to predict and l is the output number of the original data samples. In this case, we will have $C_m^d = \frac{m!}{d!(m-d)!}$ different combinations of concrete components, where ! is the factorial function.

Fig. 1. Transformation from a forward problem to a reverse problem.

4. Experiments and results

In this section, we report on the performance of our proposed FW-RE-SVM, first on a small concrete dataset with multiple outputs for both multi-input, one-output and multi-input, multi-output reverse predictions, and then on a bigger concrete dataset for multi-input, one-output reverse prediction. We compare results obtained by our FW-RE-SVM only to the PSO-RE-SVM [26] and PSO-RE-MIMO-SVM [4], since these two models have shown better performance than other methods such as the LS-SVM, back-propagation neural network, and radial basis function neural network.

4.1. Parameter and experimental settings

Our proposed FW-RE-SVM has three main parameters: $C$, $\sigma^2$, and $\eta$. For each experiment, we used grid search [9] to decide on the best parameter settings. Values of $5 \times 10^3$, $5 \times 10^4$, $5 \times 10^5$, $5 \times 10^6$, $5 \times 10^7$ and $5 \times 10^8$ were considered for the penalty parameter, $C$, with the Gaussian kernel selected; values of 0.4, 4, 40, 400 and 4000 were considered for the Gaussian kernel variance, $\sigma^2$. The best combination of these values was selected adaptively via grid search based on 30 simulation runs.

To ensure fair comparisons with the PSO-RE-SVM [26] and PSO-RE-MIMO-SVM [4], and also for consistency's sake, leave-one-out cross-validation (LOOCV) [65,66] was applied to assess the performance of the prediction models (target values equal to zero were removed before training). Given a dataset with n samples, we traversed all the samples; each time, n - 1 samples were used to train a prediction model, while the remaining one was used as the test sample. This way, LOOCV can derive a more accurate estimate of the prediction performance [67].

4.2. Performance measures

For performance analysis, we considered the mean absolute relative error (MARE), standard deviation (SD), and root mean squared error (RMSE), defined as follows:

$$\text{MARE}^j = \frac{1}{n} \sum_{i=1}^{n} \frac{|y_i^j - \hat{y}_i^j|}{y_i^j}, \tag{23}$$

$$\text{SD}^j = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( e_i^j - \bar{e}^j \right)^2}, \tag{24}$$
Fig. 2. Simultaneous reverse prediction of concrete components.
Fig. 3. A flowchart of the FW-RE-SVM for simultaneous reverse prediction of concrete components.
$$\text{RMSE}^j = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i^j - y_i^j \right)^2}, \tag{25}$$
where $\hat{y}_i^j$ and $y_i^j$ are the prediction and target values of the $i$th sample for the $j$th output, respectively; $e_i^j$ is the absolute relative error of the $i$th sample's $j$th output, and $\bar{e}^j$ is the $j$th output's MARE. Specifically, each reference dataset includes data for the $n$ samples tested. For each sample, we have the concrete parameters and a set of output parameters measured experimentally. Hence, we can fit the prediction model, evaluate the error made for the $j$th concrete parameter, and take its average over the entire sample population. $\text{SD}^j$ in Eq. (24) represents the SD of the error made for the $j$th concrete parameter. For a $d$-output reverse prediction problem, we take all $d$ prediction results into account for simultaneous prediction. In this case, the evaluation is based on the smallest average of the $d$ MARE and RMSE values. Our objective is to minimise the average MARE and RMSE over all $d$ outputs:

$$\min \frac{1}{d} \sum_{j=1}^{d} \text{MARE}^j, \tag{26}$$

$$\min \frac{1}{d} \sum_{j=1}^{d} \text{RMSE}^j. \tag{27}$$
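The evaluation protocol above can be sketched as follows. This is an illustrative sketch, not the authors' code; `fit` and `predict` are hypothetical placeholders for any regressor (e.g., the FW-RE-SVM), and the metrics follow Eqs. (23) and (25).

```python
import numpy as np

def mare(y_true, y_pred):
    # Mean absolute relative error for one output, Eq. (23)
    return float(np.mean(np.abs(y_true - y_pred) / y_true))

def rmse(y_true, y_pred):
    # Root mean squared error for one output, Eq. (25)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def loocv_predictions(X, y, fit, predict):
    """Leave-one-out cross-validation: sample i is predicted by a model
    trained on the other n - 1 samples. `fit(X, y) -> model` and
    `predict(model, X) -> array` are placeholder callables."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        preds[i] = np.ravel(predict(model, X[i:i + 1]))[0]
    return preds
```

For a d-output problem, applying these metrics per output and averaging gives the objectives of Eqs. (26) and (27).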
4.3. Simultaneous reverse prediction of concrete components

4.3.1. Dataset description

The dataset used for our first series of experiments consists of 103 samples with seven input concrete components (cement, blast furnace slag, fly ash, water, superplasticiser, coarse aggregate, and fine aggregate) and three outputs (concrete strength, flow, and slump). This dataset was downloaded from the UCI database [68]. Its statistical properties are described in Table 1. For reverse prediction of concrete components, however, some adjustments need to be made.

4.3.2. Dataset organisation

If we directly reverse the problem to predict the concrete components from this dataset, we obtain a three-input, seven-output problem, in which the number of outputs exceeds the number of inputs. Considering that there are seven concrete components, if three of them are to be predicted, the other four components would be left over. These remaining components can be added to the inputs, resulting in a seven-input, three-output problem, which can then be solved as a multi-objective inverse problem [69]. The same applies to two-output and one-output predictions. If our problem is to simultaneously predict three concrete components, we have C(7, 3) = 35 groups of experiments to conduct; if two concrete components, C(7, 2) = 21 groups; and if only one concrete component, C(7, 1) = 7 groups. The inputs of the dataset are normalised between 0 and 1 using Eq. (28):

$$x_i' = \frac{x_i - x_i^{\min}}{x_i^{\max} - x_i^{\min}}, \tag{28}$$

where $x_i^{\max}$ and $x_i^{\min}$ are the maximum and minimum values of the $i$th input column vector $x_i$, respectively.
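The experiment-group counts and the normalisation of Eq. (28) can be checked with a short sketch (component names are paraphrased from the dataset description):

```python
from itertools import combinations

import numpy as np

# The seven concrete components of the UCI slump dataset
components = ["cement", "slag", "fly ash", "water", "SP", "CA", "FA"]

# C(7, d) experiment groups for simultaneously predicting d components
print(len(list(combinations(components, 3))))  # 35
print(len(list(combinations(components, 2))))  # 21
print(len(list(combinations(components, 1))))  # 7

def minmax_normalise(X):
    # Eq. (28): scale each input column of X to [0, 1]
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    return (X - xmin) / (xmax - xmin)
```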
4.3.3. Results of simultaneous reverse prediction of concrete components

Figs. 4-6 show direct comparisons between the PSO-RE-MIMO-SVM/PSO-RE-SVM and our proposed FW-RE-SVM in terms of MARE and RMSE. In Fig. 4, there are 35 different combinations of experiments for predicting three concrete components. In Fig. 5, there are 21 different combinations of experiments for predicting two
Table 1. Statistical properties of the UCI concrete slump test dataset.

Variable | Data type | Unit | Description | Symbol | Minimum | Maximum | Mean | Standard deviation
Cement | Real | kg/m³ | Input | Cement | 137 | 374 | 229.894 | 78.877
Blast furnace slag | Real | kg/m³ | Input | Slag | 0 | 193 | 77.974 | 60.461
Fly ash | Real | kg/m³ | Input | Fly ash | 0 | 260 | 149.015 | 85.418
Water | Real | kg/m³ | Input | Water | 160 | 240 | 197.168 | 20.208
Superplasticizer | Real | kg/m³ | Input | SP | 4.4 | 19 | 8.540 | 2.808
Coarse aggregate | Real | kg/m³ | Input | CA | 708 | 1049.9 | 883.979 | 88.391
Fine aggregate | Real | kg/m³ | Input | FA | 640.6 | 902 | 739.605 | 63.342
Slump | Real | cm | Output | Slump | 0 | 29 | 18.049 | 8.751
Flow | Real | cm | Output | Flow | 20 | 78 | 49.611 | 17.569
28-day concrete compressive strength | Real | MPa | Output | Concrete strength | 17.19 | 58.53 | 36.039 | 7.838
Fig. 4. Comparison of results in terms of (a) MARE and (b) RMSE obtained by the PSO-RE-MIMO-SVM and FW-RE-SVM for 35 different combinations of concrete component prediction.
concrete components. In Fig. 6, there are 7 groups of experiments, each reversely predicting one concrete component. As we can see from these figures, our FW-RE-SVM outperforms both the PSO-RE-MIMO-SVM (Figs. 4 and 5) and the PSO-RE-SVM (Fig. 6), in the multi-input, multi-output and multi-input, one-output scenarios, respectively.

While Figs. 4-6 depict the overall performance, Tables 2-4 present detailed results obtained by the PSO-RE-MIMO-SVM, PSO-RE-SVM and FW-RE-SVM for simultaneous reverse prediction of concrete components in terms of MARE, SD, and RMSE. The results in Table 2 concern reverse prediction of three concrete components, whereas results for reverse prediction of two concrete components and one concrete component can be found in Tables 3 and 4, respectively. The smallest averages of MARE and RMSE for each group of experiments in these tables are highlighted in bold, and the smallest average of SD is marked in italic. As we can see from Table 2, for all 35 different combinations, our proposed FW-RE-SVM outperforms the PSO-RE-MIMO-SVM on both MARE and RMSE. This indicates that the fuzzy weighted operation, which assigns different weights to the error constraints, is effective. The slag predictions of both the PSO-RE-MIMO-SVM and the FW-RE-SVM are not as good as those for the other concrete components in terms of MARE. This is acceptable, though, since it is difficult to guarantee good performance for predicting all
Fig. 5. Comparison of results in terms of (a) MARE and (b) RMSE obtained by the PSO-RE-MIMO-SVM and FW-RE-SVM for 21 different combinations of concrete component prediction.
Fig. 6. Comparison of results in terms of (a) MARE and (b) RMSE obtained by the PSO-RE-MIMO-SVM and FW-RE-SVM for 7 different combinations of concrete component prediction.
three concrete components. For some groups, like (cement, SP, CA) and (slag, fly ash, SP), although our FW-RE-SVM has better overall results, its SP predictions are not as good as those of the PSO-RE-MIMO-SVM (though the results are very close). From Table 3, we see that in most cases our FW-RE-SVM performs better than the PSO-RE-MIMO-SVM in terms of MARE and RMSE. For the seven combinations of concrete components on which the PSO-RE-MIMO-SVM outperformed our proposed model, the results are very similar. From Table 4, we see that our proposed model also outperforms the PSO-RE-SVM for almost all of the predictions, except SP and FA. For SP prediction, the MARE, SD and RMSE of the PSO-RE-SVM are 0.171, 0.180 and 1.820, while for our proposed model they are 0.208, 0.153 and 1.908, respectively. For FA prediction, the results of the PSO-RE-SVM are 0.005, 0.006 and 5.810, while for our method they are 0.006, 0.006 and 6.355.
8 Table 2 Experimental results for simultaneous reverse prediction of three concrete components obtained by the PSO-RE-MIMO-SVM and FW-RE-SVM. Best results of each group, in terms of MARE and RMSE, are highlighted in bold; while for SD they are marked in italic. Ingredient
Method PSO-RE-MIMO-SVM
Ingredient FW-RE-SVM
Method PSO-RE-MIMO-SVM
SD
RMSE
MARE
SD
RMSE
Cement Slag Fly ash
0.175 11.134 0.211
0.132 48.904 0.158
40.281 14.255 42.761
0.119 4.419 0.153
0.099 21.692 0.129
27.523 9.852 33.214
Cement Slag Water
0.106 14.809 0.035
0.094 61.499 0.025
25.118 27.299 8.706
0.085 3.726 0.031
0.081 17.994 0.023
Cement Slag SP
0.105 7.884 0.236
0.094 41.746 0.218
25.087 15.797 2.641
0.058 4.972 0.246
Cement Slag CA
0.100 21.905 0.037
0.097 85.848 0.031
24.845 36.988 41.680
Cement Slag FA
0.102 21.363 0.042
0.097 85.832 0.036
Cement Fly ash Water
0.169 0.220 0.026
Cement Fly ash SP
Method PSO-RE-MIMO-SVM
MARE
SD
RMSE
MARE
SD
RMSE
Cement SP CA
0.076 0.280 0.018
0.086 0.199 0.016
21.944 2.675 21.008
0.044 0.315 0.012
0.054 0.260 0.011
12.828 3.082 13.798
21.593 24.848 7.681
Cement SP FA
0.144 0.272 0.030
0.118 0.184 0.020
33.066 2.700 27.143
0.037 0.344 0.016
0.040 0.332 0.016
0.050 26.583 0.192
14.750 11.425 2.503
Cement CA FA
0.164 0.059 0.067
0.125 0.037 0.041
37.582 59.723 59.795
0.076 0.054 0.065
0.062 9.491 0.028
0.074 39.579 0.026
17.512 33.884 32.278
Slag Fly ash Water
23.036 0.127 0.037
80.550 0.090 0.027
28.471 28.621 9.333
24.520 34.743 41.231
0.075 10.618 0.035
0.086 59.414 0.031
22.426 24.871 33.503
Slag Fly ash SP
9.834 0.118 0.242
47.282 0.090 0.192
0.150 0.178 0.021
42.772 43.045 6.951
0.119 0.120 0.027
0.097 0.131 0.028
32.931 30.780 7.672
Slag Fly ash CA
34.093 0.139 0.041
0.144 0.161 0.288
0.119 0.113 0.223
34.553 34.368 2.854
0.103 0.135 0.257
0.105 0.110 0.214
27.609 30.187 2.663
Slag Fly ash FA
Cement Fly ash CA
0.371 0.308 0.050
0.283 0.190 0.039
76.374 72.495 54.455
0.114 0.144 0.010
0.124 0.126 0.011
29.350 34.546 12.824
Cement Fly ash FA
0.359 0.301 0.043
[Table 2. Experimental results for simultaneous reverse prediction of three concrete components obtained by the PSO-RE-MIMO-SVM and FW-RE-SVM, reported as MARE, SD and RMSE for each ingredient triple (e.g., Cement/Water/SP, Fly ash/SP/CA, Slag/Water/FA). The tabulated values are not recoverable from the extracted text.]
Z. Fan et al. / Computers and Structures 230 (2020) 106171
Table 3. Experimental results for simultaneous reverse prediction of two concrete components obtained by the PSO-RE-MIMO-SVM and FW-RE-SVM. Best results of each group, in terms of MARE and RMSE, are highlighted in bold; for SD they are marked in italic. [The per-pair MARE, SD and RMSE values are not recoverable from the extracted text.]
Table 4. Experimental results for reverse prediction of one concrete component obtained by the PSO-RE-SVM and FW-RE-SVM. Best results in terms of MARE and RMSE are highlighted in bold, while for SD they are marked in italic.

                PSO-RE-SVM                  FW-RE-SVM
Ingredient      MARE     SD       RMSE      MARE     SD       RMSE
Cement          0.021    0.029    6.868     0.020    0.021    5.464
Slag            1.448    9.589    2.821     0.454    1.885    2.773
Fly ash         0.024    0.021    5.598     0.025    0.021    5.473
Water           0.009    0.007    2.272     0.008    0.008    2.174
SP              0.171    0.180    1.820     0.208    0.153    1.908
CA              0.005    0.006    6.661     0.005    0.005    6.398
FA              0.005    0.006    5.810     0.006    0.006    6.355
Table 5. Statistical properties of the UCI concrete compressive strength dataset.

Variable            Data type  Role    Description            Minimum  Maximum  Mean     Standard deviation
Cement              Real       Input   kg in one m^3 mixture  102      540      281.168  104.506
Slag                Real       Input   kg in one m^3 mixture  0        359.4    73.896   86.279
Fly ash             Real       Input   kg in one m^3 mixture  0        200.1    54.188   63.997
Water               Real       Input   kg in one m^3 mixture  121.8    247      181.567  21.354
SP                  Real       Input   kg in one m^3 mixture  0        32.2     6.205    5.974
CA                  Real       Input   kg in one m^3 mixture  801      1145     972.919  77.754
FA                  Real       Input   kg in one m^3 mixture  594      992.6    773.580  80.176
Age                 Real       Input   Day (1–365)            1        365      45.662   63.170
Concrete strength   Real       Output  MPa                    2.33     82.6     35.818   16.706
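The four summary statistics reported in Table 5 can be reproduced with a few lines of NumPy. The sketch below omits the dataset-loading step; the three mixes shown are purely hypothetical illustration values, not rows of the UCI dataset.

```python
import numpy as np

def column_stats(X, names):
    """Per-column minimum, maximum, mean and sample standard deviation,
    i.e. the four summary statistics reported in Table 5."""
    X = np.asarray(X, dtype=float)
    return {
        name: {
            "min": float(col.min()),
            "max": float(col.max()),
            "mean": float(col.mean()),
            "std": float(col.std(ddof=1)),  # sample (n-1) standard deviation
        }
        for name, col in zip(names, X.T)
    }

# Three hypothetical mixes (cement, water, age) purely for illustration:
mixes = [[540.0, 162.0, 28.0], [332.5, 228.0, 270.0], [198.6, 192.0, 90.0]]
stats = column_stats(mixes, ["cement", "water", "age"])
print(stats["water"]["mean"])  # -> 194.0
```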
Table 6. Experimental results for reverse prediction of one concrete component obtained by the PSO-RE-SVM and FW-RE-SVM. Best results in terms of MARE and RMSE are highlighted in bold, while for SD they are marked in italic. Symbol "–" implies that the value is not available.

                PSO-RE-SVM                   FW-RE-SVM
Ingredient      MARE     SD       RMSE       MARE     SD       RMSE
Cement          0.0471   0.0469   –          0.0387   0.0760   20.9739
Slag            0.0492   0.1373   –          0.0353   0.1202   15.2946
Fly ash         0.0114   0.0471   –          0.0110   0.0567   7.1051
Water           0.0160   0.0252   –          0.0127   0.0293   5.7837
SP              0.0725   0.2428   –          0.0365   0.1342   0.9255
CA              0.0103   0.0195   –          0.0103   0.0244   25.9116
FA              0.0144   0.0256   –          0.0116   0.0207   23.4419
Age             0.8646   1.9531   –          1.2581   3.7574   41.7060
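The evaluation protocol behind Tables 4 and 6 — leave-one-out cross-validation scored with MARE and RMSE — can be sketched as follows. The FW-RE-SVM itself is not reproduced here; a 1-nearest-neighbour regressor stands in for the trained model, and the small `g` offset in `mare` mirrors the paper's guard against zero denominators.

```python
import numpy as np

def mare(y_true, y_pred, g=1e-4):
    """Mean absolute relative error; the small offset g keeps the
    denominator away from zero, echoing the FW-RE-SVM constraints."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred) / (np.abs(y_true) + g)))

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def loocv_predictions(X, y, fit_predict):
    """Leave-one-out cross-validation: each sample is predicted by a
    model trained on all remaining samples."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        preds[i] = fit_predict(X[mask], y[mask], X[i])
    return preds

def nn_fit_predict(X_tr, y_tr, x):
    # 1-nearest-neighbour stand-in for the actual SVM model
    return y_tr[np.argmin(np.linalg.norm(X_tr - x, axis=1))]

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([10.0, 11.0, 12.0, 13.0])
p = loocv_predictions(X, y, nn_fit_predict)
print(rmse(y, p))  # -> 1.0
```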
Fig. 7. Comparison of results obtained by the PSO-RE-MIMO-SVM and FW-RE-SVM in terms of MARE, (a) with 8 different concrete components as inputs, and (b) with 7 different concrete components as inputs, by removing age for clearer presentation of results.
Fig. 8. The relationship between forecast values and target values based on reverse prediction of cement.
Fig. 9. Forecast values and targets for the reverse prediction of cement.
Fig. 10. Sensitivity analysis on C and σ² of our proposed FW-RE-SVM. The x-axis is the value of σ², while the y-axis is the MARE with fixed C.
Fig. 11. Sensitivity analysis on C and σ² of our proposed FW-RE-SVM. The x-axis is the value of σ², while the y-axis is the RMSE with fixed C.
Fig. 12. Sensitivity analysis on g of our proposed FW-RE-SVM. The x-axis is the value of g, while the y-axis represents the MARE on the left and RMSE on the right.
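The parameter sweeps plotted in Figs. 10–12 amount to evaluating the model over a grid of (C, σ²) pairs. The sketch below illustrates that loop; `demo_score` is a hypothetical stand-in for a full training-and-LOOCV run, and the kernel shown is one common parameterisation of the Gaussian (RBF) kernel, not necessarily the exact form used in the paper.

```python
import numpy as np
from itertools import product

def rbf_kernel(x1, x2, sigma2):
    """Gaussian (RBF) kernel with width parameter sigma^2;
    larger sigma2 gives a smoother decision surface."""
    diff = np.asarray(x1, float) - np.asarray(x2, float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma2)))

def sensitivity_grid(score_fn, Cs, sigma2s):
    """Evaluate score_fn (e.g., the LOOCV MARE of a retrained model)
    over every (C, sigma^2) pair, as in the sweeps of Figs. 10 and 11."""
    return {(C, s2): score_fn(C, s2) for C, s2 in product(Cs, sigma2s)}

# Hypothetical score standing in for a full training-and-evaluation run:
demo_score = lambda C, s2: 1.0 / (1.0 + np.log10(C) + np.log10(s2))
grid = sensitivity_grid(demo_score, Cs=[5e4, 5e6, 5e8], sigma2s=[1000.0, 2000.0, 4000.0])
best_C, best_s2 = min(grid, key=grid.get)  # the pair with the lowest (best) score
```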
Furthermore, our proposed FW-RE-SVM outperforms the PSO-RE-MIMO-SVM and PSO-RE-SVM in terms of SD, according to the results in Tables 2–4. This means that our FW-RE-SVM is more reliable than the other two models. We also observe from Tables 2–4 that the prediction performance of our FW-RE-SVM on one component is better than on two components, and its performance on two components is better than on three components. This is not surprising, as problems with more outputs are often more complex than problems with fewer outputs [70]. To predict multiple concrete components simultaneously, a prediction model needs to balance its performance across all outputs, thereby sacrificing accuracy to some extent.

Summing up, the results above confirm that our proposed model can outperform the PSO-RE-MIMO-SVM and PSO-RE-SVM for reverse prediction of concrete components in both multi-output and one-output scenarios. This implies that the fuzzy weighted operation is useful, and that the proposed model is effective for simultaneous reverse prediction of concrete components.

4.4. Reverse prediction of one concrete component

4.4.1. Dataset description
To further evaluate the performance of our proposed FW-RE-SVM, a bigger concrete dataset, consisting of 1030 samples with eight input components (cement, blast furnace slag, fly ash, age, water, superplasticiser, coarse aggregate, and fine aggregate) and one output feature (concrete strength), was downloaded from the UCI database [71]. Statistical properties of the dataset are listed in Table 5.

4.4.2. Dataset organisation
Since the dataset has only one output, only one concrete component can be reversely predicted at a time. In this case, C(8,1) = 8 groups of experiments had to be conducted. To be consistent with previous work [26] as well as the experiments reported in Section 4.3, the LOOCV technique was used for validating the performance of the prediction models. Output values equal to zero were removed before training.

4.4.3. Results of reverse prediction of one concrete component
Table 6 shows detailed results obtained by the PSO-RE-SVM and FW-RE-SVM in terms of MARE and RMSE, with the symbol "–" denoting missing values (not provided in [26]). As can be seen from Table 6, for reverse prediction of one concrete component, our proposed FW-RE-SVM outperforms the PSO-RE-SVM in terms of MARE in all cases except the prediction of age. This is likely because this bigger concrete dataset may contain outliers or noisy data, and the fuzzy weighted operation on the relative error constraints can alleviate their influence to some extent. It is necessary to point out that, in some cases, the PSO-RE-SVM is better than our FW-RE-SVM in terms of SD (e.g., reverse prediction of age, where the MARE and SD for the PSO-RE-SVM are 0.8646 and 1.9531, but for our FW-RE-SVM they are 1.2581 and 3.7574).

Fig. 7(a) presents the results obtained by the PSO-RE-SVM and FW-RE-SVM in terms of MARE for the 8 groups of experiments with different concrete components (RMSE results are omitted because of the missing values). To better visualise these results, the result for the eighth concrete component (age) is removed in Fig. 7(b). From Fig. 7, it is clear that the performance of our proposed FW-RE-SVM is promising. We also take reverse prediction of cement as an example to depict the relationship between forecast values and target values, as shown in Figs. 8 and 9. From Fig. 8, we see that the relationship between forecast values and targets is approximately linear, except for a few samples; that is, our forecast values are very close to the target values. From Fig. 9, again we see that our forecast results (the blue line) are very close to the target values.

4.4.4. Sensitivity analysis of FW-RE-SVM parameters
Finally, we use the same cement example for a sensitivity analysis of the parameters C, σ² and g of our proposed FW-RE-SVM. As can
be seen from Figs. 10 and 11, with fixed C, the MARE and RMSE decrease as σ² increases. This means that, with fixed C, σ² should be set to a larger value for better performance. For the RMSE, however, if C is too large (e.g., C = 5×10⁸), the results become unstable as σ² increases. This is because, as C increases, the Lagrange multipliers α also become larger (see Eq. (15)). The performance of our FW-RE-SVM is most stable when σ² > 2000 and C < 5×10⁶. With the best C and σ² obtained (C = 5×10⁴, σ² = 4000), Fig. 12 shows the effect of g on our proposed FW-RE-SVM. As we can see from the figure, the performance of our FW-RE-SVM is not overly sensitive to g in terms of either MARE or RMSE. This is because a large C value results in a much smaller (2yg + g²)/C term in U in Eq. (15). Nevertheless, it is better to use a small g (e.g., 10⁻⁴) in practice, especially when y is large, since a large y increases the noise term (2yg + g²)/C.

5. Conclusion

In this paper, we have proposed a fuzzy weighted RE-SVM for reverse prediction of concrete components in both multi-input, one-output and multi-input, multi-output scenarios. Making accurate (reverse) predictions in this context is not only very difficult but also very important for civil engineering, as it can reduce the costs of manpower, concrete components, and other resources. To balance the performance on different concrete components and improve the robustness of the RE-SVM, a fuzzy weighted operation was applied by assigning weights to the relative error constraints as an indication of their different contributions to the learning of the decision surface. In addition, a small value, g, was added to the denominator of each relative error constraint, to avoid the denominator being equal to zero. With the proposed model, we can reversely predict specified concrete components based on concrete strength, slump and flow, which may be very helpful for different construction requirements (like homes, roads, or dams).
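The two ingredients of the model summarised above — fuzzy weights on the relative error constraints and the small offset g in their denominators — can be illustrated in a few lines. This is a minimal sketch assuming a simple distance-to-mean membership function; the paper's actual weighting scheme may differ.

```python
import numpy as np

def fuzzy_weights(y, delta=1e-6):
    """Illustrative fuzzy membership: targets far from the sample mean
    (more likely to be outliers or noise) receive smaller weights.
    This follows the general fuzzy-SVM idea; the paper's exact
    membership function may differ."""
    y = np.asarray(y, float)
    d = np.abs(y - y.mean())
    return 1.0 - d / (d.max() + delta)

def relative_errors(y_true, y_pred, weights, g=1e-4):
    """Fuzzy-weighted relative errors; the small offset g keeps the
    denominator away from zero, as in the FW-RE-SVM constraints."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return weights * (y_true - y_pred) / (y_true + g)

y = np.array([10.0, 10.5, 9.8, 60.0])   # 60.0 behaves like an outlier
w = fuzzy_weights(y)                     # the outlier gets a near-zero weight
e = relative_errors(y, np.array([9.5, 11.0, 9.8, 30.0]), w)
```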
Experimental results have clearly demonstrated that our proposed model can outperform the PSO-RE-MIMO-SVM and PSO-RE-SVM for reverse prediction of concrete components in the multi-output and one-output scenarios, respectively. While the results are promising, it is worth noting that they may vary across different practical engineering applications, because the forming process of concrete is influenced not only by the mixture of its components (complex physical and chemical processes at different stages) but also by the environment (e.g., temperature, humidity). Nevertheless, our proposed FW-RE-SVM provides a good reference for simultaneous reverse prediction of concrete components.

For future work, we would like to investigate how to generate better weights for the FW-RE-SVM (e.g., iteratively updating the weights using a gradient-based approach may be one possible option). We also plan to explore how the constraints of the loss function for predicting multiple concrete components can be improved, and how the relative error constraints themselves can be refined. Finally, different optimisation approaches could be explored to optimise the parameters of our FW-RE-SVM.

Declaration of Competing Interest

The authors declare no conflict of interest.

Acknowledgements

The first author would like to acknowledge the support of an Australian Government Research Training Program scholarship to study a PhD degree in Computer Science at the University of Newcastle, Australia.

Appendix A. Supplementary material

Supplementary data associated with this article can be found, in the online version, at https://doi.org/10.1016/j.compstruc.2019.106171.

References

[1] Begum SJ, Anjaneyulu P, Ratnam M. A study on effect of steel fiber in fly ash based self compacting concrete. IJIRST – Int J Innovat Res Sci Technol 2018;5:95–9.
[2] Erdal HI, Karakurt O, Namli E. High performance concrete compressive strength forecasting using ensemble models based on discrete wavelet transform. Eng Appl Artif Intell 2013;26(4):1246–54.
[3] Khademi F, Akbari M, Jamal SM, Nikoo M. Multiple linear regression, artificial neural network, and fuzzy logic prediction of 28 days compressive strength of concrete. Front Struct Civil Eng 2017;11(1):90–9.
[4] Gou J, Fan Z-W, Wang C, Guo W-P, Lai X-M, Chen M-Z. A minimum-of-maximum relative error support vector machine for simultaneous reverse prediction of concrete components. Comput Struct 2016;172:59–70.
[5] Aydogmus HY, Erdal H, Karakurt O, Namli E, Turkan YS, Erdal H. A comparative assessment of bagging ensemble models for modeling concrete slump flow. Comput Concr 2015;16(5):741–57.
[6] Okere C, Onwuka D, Onwuka S, Arimanwa J. Simplex-based concrete mix design. IOSR J Mech Civil Eng 2013;5(2):46–55.
[7] Okere C, Onwuka D. Mathematical model for optimisation of modulus of rupture of concrete using Osadebe's regression theory. Res Inventy: Int J Eng Sci 2013;2(5):1–12.
[8] Ibearugbulem O, Ettu L, Ezeh J, Anya U. A new regression model for optimizing concrete mixes. Int J Eng Sci Res Technol 2013;2(7):1–8.
[9] Witten IH, Frank E, Hall MA, Pal CJ. Data mining: Practical machine learning tools and techniques. Morgan Kaufmann; 2016.
[10] Lai S, Serra M. Concrete strength prediction by means of neural network. Constr Build Mater 1997;11(2):93–8.
[11] Ni H-G, Wang J-Z. Prediction of compressive strength of concrete by neural networks. Cem Concr Res 2000;30(8):1245–50.
[12] Lee S-C. Prediction of concrete strength using artificial neural networks. Eng Struct 2003;25(7):849–57.
[13] Nikoo M, Torabian Moghadam F, Sadowski Ł. Prediction of concrete compressive strength by evolutionary artificial neural networks. Adv Mater Sci Eng 2015;2015:1–8.
[14] Asteris P, Kolovos K, Douvika M, Roinos K. Prediction of self-compacting concrete strength using artificial neural networks. Eur J Environ Civil Eng 2016;20(sup1):s102–22.
[15] Naderpour H, Rafiean AH, Fakharian P. Compressive strength prediction of environmentally friendly concrete using artificial neural networks. J Build Eng 2018;16:213–9.
[16] Abd AM, Abd SM. Modelling the strength of lightweight foamed concrete using support vector machine (SVM). Case Stud Constr Mater 2017;6:8–15.
[17] Qian C, Kang W, Ling H, Dong H, Liang C, Chen H. Combination of support vector machine and k-fold cross-validation for prediction of long-term degradation of the compressive strength of marine concrete. Int J Comput Phys Ser 2018;1(1):120–30.
[18] Xu L, White M, Schuurmans D. Optimal reverse prediction: a unified perspective on supervised, unsupervised and semi-supervised learning. In: Proceedings of the 26th Annual International Conference on Machine Learning. ACM; 2009. p. 1137–44.
[19] Bertero M, Poggio TA, Torre V. Ill-posed problems in early vision. Proc IEEE 1988;76(8):869–89.
[20] Johnson J, Campeau P. Reverse prediction. Berlin Heidelberg: Springer; 2005.
[21] Engl HW, Kügler P. Nonlinear inverse problems: theoretical aspects and some industrial applications. Multidiscip Methods Anal Optim Control Complex Syst 2005:3–47.
[22] Engl HW, Groetsch CW. Inverse and ill-posed problems, Vol. 4. Elsevier; 2014.
[23] Kabanikhin SI. Inverse and ill-posed problems: Theory and applications, Vol. 55. Walter De Gruyter; 2011.
[24] Dashti M, Stuart AM. The Bayesian approach to inverse problems. Handbook of Uncertainty Quantification 2016:1–118.
[25] Jahromi OS. Multirate statistical signal processing. Springer Science & Business Media; 2007.
[26] Fan ZW, Gou J, Wang C, Guo WP. Reverse prediction of concrete component based on particle swarm optimization for the minimum of maximum relative error support vector machine. J Comput Inform Syst 2015;11:5673–80.
[27] Burges CJ. A tutorial on support vector machines for pattern recognition. Data Min Knowl Discov 1998;2(2):121–67.
[28] Vapnik VN, Vapnik V. Statistical learning theory, Vol. 1. New York: Wiley; 1998.
[29] Vapnik VN. An overview of statistical learning theory. IEEE Trans Neural Networks 1999;10(5):988–99.
[30] Lo SL, Chiong R, Cornforth D. Using support vector machine ensembles for target audience classification on Twitter. PLoS One 2015;10(4):1–20.
[31] Hu Z, Bao Y, Chiong R, Xiong T. Mid-term interval load forecasting using multi-output support vector regression with a memetic algorithm for feature selection. Energy 2015;84:419–31.
[32] Hu Z, Bao Y, Xiong T, Chiong R. Hybrid filter–wrapper feature selection for short-term load forecasting. Eng Appl Artif Intell 2015;40:17–27.
[33] Lo SL, Cambria E, Chiong R, Cornforth D. A multilingual semi-supervised approach in deriving Singlish sentic patterns for polarity detection. Knowl-Based Syst 2016;105:236–47.
[34] Lo SL, Chiong R, Cornforth D. Ranking of high-value social audiences on Twitter. Decis Supp Syst 2016;85:34–48.
[35] Hu Z, Bao Y, Chiong R, Xiong T. Profit guided or statistical error guided? A study of stock index forecasting using support vector regression. J Syst Sci Complex 2017;30(6):1425–42.
[36] Smola AJ, Schölkopf B. A tutorial on support vector regression. Stat Comput 2004;14(3):199–222.
[37] Suykens JA, Lukas L, Vandewalle J. Sparse approximation using least squares support vector machines. In: Proceedings of the International Symposium on Circuits and Systems, 2. IEEE; 2000. p. 757–60.
[38] Li K, Cheng G, Sun X, Yang Z. A nonlinear flux linkage model for bearingless induction motor based on GWO-LSSVM. IEEE Access 2019;7:36558–67.
[39] Shawe-Taylor J, Sun S. A review of optimization methodologies in support vector machines. Neurocomputing 2011;74(17):3609–18.
[40] Scholkopf B, Smola AJ. Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT Press; 2001.
[41] Zhang L, Zhang B. Relationship between support vector set and kernel functions in SVM. J Comput Sci Technol 2002;17(5):549–55.
[42] Shevade SK, Keerthi SS, Bhattacharyya C, Murthy KRK. Improvements to the SMO algorithm for SVM regression. IEEE Trans Neural Networks 2000;11(5):1188–93.
[43] Bao Y, Hu Z, Xiong T. A PSO and pattern search based memetic algorithm for SVMs parameters optimization. Neurocomputing 2013;117:98–106.
[44] Zadeh LA. Fuzzy sets. Inform Control 1965;8(3):338–53.
[45] Esposito C, Ficco M, Palmieri F, Castiglione A. Smart cloud storage service selection based on fuzzy logic, theory of evidence and game theory. IEEE Trans Comput 2015;65(8):2348–62.
[46] Lin C-F, Wang S-D. Fuzzy support vector machines. IEEE Trans Neural Networks 2002;13(2):464–71.
[47] Fan Z, Gou J, Wang C, Luo W. Fuzzy model identification based on fuzzy-rule clustering and its application for airfoil noise prediction. J Intell Fuzzy Syst 2017;33(3):1603–11.
[48] Antunes P, Lima H, Varum H, André P. Optical fiber sensors for static and dynamic health monitoring of civil engineering infrastructures: Abode wall case study. Measurement 2012;45(7):1695–705.
[49] Duan Z-H, Kou S-C, Poon C-S. Prediction of compressive strength of recycled aggregate concrete using artificial neural networks. Constr Build Mater 2013;40:1200–6.
[50] Li Y, Li J. Capillary tension theory for prediction of early autogenous shrinkage of self-consolidating concrete. Constr Build Mater 2014;53:511–6.
[51] Bo L, Jiao L, Wang L. Working set selection using functional gain for LS-SVM. IEEE Trans Neural Networks 2007;18(5):1541–4.
[52] Amari S-I, Wu S. Improving support vector machine classifiers by modifying kernel functions. Neural Networks 1999;12(6):783–9.
[53] Gulrajani RM. The forward and inverse problems of electrocardiography. IEEE Eng Med Biol Magaz 1998;17(5):84–101.
[54] Isernia T, Crocco L, D'Urso M. New tools and series for forward and inverse scattering problems in lossy media. IEEE Geosci Remote Sens Lett 2004;1(4):327–31.
[55] Kaipio J, Somersalo E. Statistical inverse problems: discretization, model reduction and inverse crimes. J Comput Appl Math 2007;198(2):493–504.
[56] Gregson J, Ihrke I, Thuerey N, Heidrich W. From capture to simulation: connecting forward and inverse problems in fluids. ACM Trans Graph (TOG) 2014;33(4):139.
[57] Jin KH, McCann MT, Froustey E, Unser M. Deep convolutional neural network for inverse problems in imaging. IEEE Trans Image Process 2017;26(9):4509–22.
[58] Krasnopolsky VM, Schiller H. Some neural network applications in environmental sciences. Part I: forward and inverse problems in geophysical remote measurements. Neural Networks 2003;16(3–4):321–34.
[59] Chakraborty A, Gopalakrishnan S. Wave propagation in inhomogeneous layered media: solution of forward and inverse problems. Acta Mech 2004;169(1–4):153–85.
[60] Zhdanov MS. Inverse theory and applications in geophysics, Vol. 36. Elsevier; 2015.
[61] Aster RC, Borchers B, Thurber CH. Parameter estimation and inverse problems. Elsevier; 2018.
[62] Uhl T. The inverse identification problem and its technical application. Arch Appl Mech 2007;77(5):325–37.
[63] Arridge SR, Schotland JC. Optical tomography: forward and inverse problems. Inverse Probl 2009;25(12):123010.
[64] Neto FDM, da Silva Neto AJ. An introduction to inverse problems with applications. Springer Science & Business Media; 2012.
[65] Cawley GC, Talbot NL. On over-fitting in model selection and subsequent selection bias in performance evaluation. J Mach Learn Res 2010;11(Jul):2079–107.
[66] Vehtari A, Gelman A, Gabry J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat Comput 2017;27(5):1413–32.
[67] Seni G, Elder JF. Ensemble methods in data mining: improving accuracy through combining predictions. Synth Lect Data Min Knowl Discov 2010;2(1):1–126.
[68] Yeh IC. Concrete slump test data set. ; 2009.
[69] Richter F, Rein G. Pyrolysis kinetics and multi-objective inverse modelling of cellulose at the microscale. Fire Saf J 2017;91:191–9.
[70] Borchani H, Varando G, Bielza C, Larrañaga P. A survey on multi-output regression. Wiley Interdiscip Rev Data Min Knowl Discov 2015;5(5):216–33.
[71] Yeh IC. Concrete compressive strength data set. ; 2007.