Core axial power shape reconstruction based on radial basis function neural network


Annals of Nuclear Energy 73 (2014) 339–344


Xingjie Peng a,b,*, Qing Li b, Kan Wang a

a Department of Engineering Physics, Tsinghua University, Beijing, China
b Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu, China

Article history: Received 12 March 2014; Received in revised form 29 June 2014; Accepted 30 June 2014

Keywords: Radial basis function neural network; Alternating conditional expectation; Axial power shape reconstruction

Abstract

A core axial power shape reconstruction method based on a radial basis function neural network (RBFNN) is proposed; the 18-node axial core power shape can be reconstructed from 6-segment in-core detector signals or 6-segment ex-core detector signals. The alternating conditional expectation (ACE) algorithm is also used to validate the effectiveness of the RBFNN algorithm. The results show that, regardless of which kind of detector is used, the RBFNN algorithm performs better than the ACE algorithm. The RBFNN algorithm has good anti-noise ability when noisy in-core detector signals are used, while the correct axial power shape cannot be reconstructed from noisy ex-core detector signals. By analyzing the condition number of the ex-core axial spatial weighting function, the determination of the axial power shape from ex-core detector signals is shown to be a typical ill-posed inverse problem. A regularized RBFNN algorithm is used to eliminate this ill-posedness and obtain a physically meaningful axial power shape.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Core power distribution monitoring is very important in the surveillance of operating power reactors. The power distribution is one of the basic operation parameters and determines many other important parameters, such as the power peaking factor, the enthalpy rise factor and the quadrant tilt ratio, which are used to evaluate the operating condition of the reactor and its safety margin. The economy of the reactor can be improved if the real-time power distribution is accurately obtained and used for surveillance and regulation. Most commercial power reactors in operation are equipped with in-core or ex-core neutron detectors to obtain power distribution information. A digital on-line core monitoring system based on in-core detectors, the Core Operating Limit Supervisory System (COLSS), is used in the Korean Standard Nuclear Power Plants (KSNPs). To compute axial power shapes, COLSS uses the Fourier series synthesis method, i.e. the Fourier fitting method (FFM), but the accuracy of FFM tends to decrease when power shapes are deeply saddled or strongly shifted to the top or bottom of the core (Lee et al., 1999). To improve the accuracy, the alternating conditional expectation (ACE) algorithm (Lee et al., 1999) was applied to obtain an

* Corresponding author at: Department of Engineering Physics, Tsinghua University, Beijing, China. E-mail addresses: [email protected], [email protected] (X. Peng).
http://dx.doi.org/10.1016/j.anucene.2014.06.055

optimal correlation between each plane power and the detector powers. The ACE method is a generalized regression algorithm that yields an optimal relationship between a dependent variable and multiple independent variables. The results show that the ACE algorithm is far superior to FFM; its average root mean square error is just 35% of that of FFM. Another digital core protection system, the Core Protection Calculator (CPC) system, is also used in the KSNPs. The system is capable of on-line real-time calculation of fuel limiting parameters from measurable plant parameters, and the axial power shape is obtained by applying fitting methods to the available ex-core detector signals. The current CPC uses least-squares fitting and cubic spline curve fitting to reconstruct the axial power shape (Lee et al., 1999). The axial power reconstruction error tends to increase as core burnup increases, because the core conditions drift away from the start-up test conditions (Lee et al., 2002). Kim et al. (1997) applied the ACE algorithm to calculate the axial shape index (ASI) for the 3-level ex-core detector. Their numerical results show that simple correlations exist between the three ex-core signals and the ASI, and that the accuracy of the ACE algorithm is much better than that of the current CPC algorithm. Given the complicated mathematical transformations of the ACE algorithm, an easily implemented stochastic regression algorithm can be used instead to reconstruct the axial power shape. The radial basis function neural network (RBFNN) (Broomhead and Lowe, 1988) has its origin in performing exact interpolation of a set of data points in a multidimensional space. It can approximate any


multivariate continuous function on a compact domain to arbitrary accuracy given a sufficient number of units, and it has the best-approximation property since the unknown coefficients enter linearly. In this study, a core axial power shape reconstruction method based on the RBFNN algorithm is proposed; both in-core detector signals and ex-core detector signals can be used as inputs.

2. Methodology

2.1. Radial basis function neural network regression method

The RBF network (Broomhead and Lowe, 1988) is a standard three-layer (J1–J2–J3) neural network, as shown in Fig. 1, with an input layer of d input nodes, one hidden layer of m radial basis functions, and a linear output layer. Each hidden node has an activation function \phi(\cdot). The hidden layer performs a nonlinear transform of the input, and the output layer is a linear combiner mapping the nonlinearity into a new space. Usually the same RBF is applied at all nodes; that is, the RBF nodes have the nonlinearity \phi_i(x) = \phi(\|x - c_i\|), i = 1, \ldots, J_2, where c_i is the center of the i-th node and \phi(\cdot) is an RBF. The biases of the output-layer neurons can be modeled by an additional hidden neuron with a constant activation function. For an input x, the output of the RBF network is given by

y_i(x) = \sum_{k=1}^{J_2} w_{ki} \, \phi(\|x - c_k\|), \quad i = 1, \ldots, J_3,  (1)

where y_i(x) is the i-th output, w_{ki} is the connection weight from the k-th hidden unit to the i-th output unit, and \|\cdot\| denotes the Euclidean norm. The RBF \phi(\cdot) is typically selected as the Gaussian function, \phi(r) = \exp(-r^2 / (2\sigma^2)), where \sigma is known as the width.

For a set of N samples \{x_k, y_k\}_{k=1}^{N}, Eq. (1) can be expressed in the matrix form

Y = W^T \Phi,  (2)

where W = [w_1, \ldots, w_{J_3}] is a J_2 \times J_3 matrix with w_i = (w_{1,i}, \ldots, w_{J_2,i})^T; \Phi = [\phi_1, \ldots, \phi_N] is a J_2 \times N matrix whose p-th column \phi_p = (\phi_{p,1}, \ldots, \phi_{p,J_2})^T is the hidden-layer output for the p-th sample, i.e. \phi_{p,k} = \phi(\|x_p - c_k\|); and Y = [y_1, \ldots, y_N] is a J_3 \times N matrix with y_p = (y_{p,1}, \ldots, y_{p,J_3})^T. Learning in the RBF network is done in two stages. First, the centers and widths are fixed: the centers c_i can be selected by clustering, and the width is usually held constant. Then the weights that minimize the output error are computed from the linear pseudo-inverse solution

W = (\Phi^T)^{\dagger} Y^T = (\Phi \Phi^T)^{-1} \Phi Y^T.  (3)
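The two-stage training just described (choose the centers, fix the width, then solve Eq. (3) for the output weights) can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the centers are assumed to have been chosen beforehand (e.g. by clustering), and `numpy.linalg.lstsq` stands in for the explicit pseudo-inverse of Eq. (3).

```python
import numpy as np

def gaussian_rbf(r, sigma):
    """Gaussian RBF phi(r) = exp(-r^2 / (2 sigma^2))."""
    return np.exp(-r**2 / (2.0 * sigma**2))

def fit_rbf(X, Y, centers, sigma):
    """X: (N, d) inputs, Y: (N, J3) targets, centers: (J2, d).

    Returns the (J2, J3) output-weight matrix W of Eq. (3)."""
    # Phi[p, k] = phi(||x_p - c_k||): hidden-layer output per sample.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = gaussian_rbf(dists, sigma)                 # (N, J2)
    # With Phi stored row-per-sample, the pseudo-inverse solve of Eq. (3)
    # is the ordinary least-squares solution of Phi W = Y.
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)      # (J2, J3)
    return W

def predict_rbf(X, centers, sigma, W):
    """Evaluate Eq. (1) for a batch of inputs."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return gaussian_rbf(dists, sigma) @ W
```

With enough well-spread centers, the linear solve recovers a smooth target function closely, which is the best-approximation property mentioned above.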

2.2. Alternating conditional expectation regression method

The ACE method (Lee et al., 1999) is a generalized regression algorithm that yields an optimal relationship between a dependent variable y and multiple independent variables \{x_n, n = 1, \ldots, p\}. The ACE transformation imposes the following relationships between x and y:

\phi_l(x_l) = E\left[ h(y) - \sum_{j \neq l} \phi_j(x_j) \,\Big|\, x_l \right], \quad l = 1, \ldots, p,  (4)

h(y) = \frac{E\left[ \sum_{j=1}^{p} \phi_j(x_j) \,\big|\, y \right]}{\left\| E\left[ \sum_{j=1}^{p} \phi_j(x_j) \,\big|\, y \right] \right\|},  (5)

where E is the mathematical expectation, and h(y) and \phi_j(x_j) are the transformations of y and x_j, respectively. The objective of the ACE algorithm is to find optimal transformations h^*(y) and \{\phi_n^*(x_n), n = 1, \ldots, p\} that maximize the statistical correlation between h(y) and \sum_{n=1}^{p} \phi_n(x_n), i.e. that minimize

e^2\left(h^*, \phi_1^*, \ldots, \phi_p^*\right) = \min_{h, \phi_1, \ldots, \phi_p} \frac{1}{N} \sum_{i=1}^{N} \left[ h(y_i) - \sum_{j=1}^{p} \phi_j(x_{ji}) \right]^2.  (6)

Because h(y) and the \phi_j(x_j) are coupled with each other, an iterative method must be used to obtain the optimal transformations. The traditional ACE algorithm then uses a simple nonlinear deterministic regression to describe the relationship between the optimal transformations and the original variables. The only difference between our ACE algorithm and the traditional one is that the nonlinear deterministic regression model is replaced by an RBFNN, which eliminates the inversion of the polynomial function and simplifies the calculation; this simplification does not have a significant effect on the results. The procedure of the ACE algorithm in this paper can be decomposed into the following steps:

(1) Use the iterative calculation to obtain the optimal transformations, and apply RBFNNs to construct the relationship between the optimal transformations and the original variables.
(2) Obtain the optimal transformations \phi_l^*(x_l) (l = 1, \ldots, p) via the corresponding RBFNNs constructed in step 1.
(3) Since h(y) = \sum_{l=1}^{p} \phi_l^*(x_l), obtain the dependent variable y via the RBFNN that describes the inverse function of h.

2.3. Regression data sets

To apply the regression methods described above, enough data sets relating the normalized plane powers and the detector signals must be known a priori. The neutronics code SMART in the SCIENCE code package is used to obtain three-dimensional power distributions for varied core conditions. The small modular reactor ACP100, developed by the Nuclear Power Institute of China, is used to test these regression methods. A total of 7740 power distributions for ACP100 cycle 1 are generated by varying the core power, control rod position, xenon condition and burnup within the range of the operating diagram shown in Fig. 2.
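As an illustration of the alternating loop of Section 2.2, the sketch below implements Eqs. (4)–(6) with a simple binned-conditional-mean smoother standing in for the expectation operator E[·|·]. This is a hypothetical minimal variant: the paper couples the loop with RBFNN regressors rather than binned means, and the bin count and iteration count here are arbitrary.

```python
import numpy as np

def cond_mean(target, given, bins=20):
    """Crude smoother: estimate E[target | given] by quantile-bin averages."""
    edges = np.quantile(given, np.linspace(0.0, 1.0, bins + 1)[1:-1])
    idx = np.digitize(given, edges)
    out = np.empty_like(target)
    for b in np.unique(idx):
        out[idx == b] = target[idx == b].mean()
    return out

def ace(y, X, n_iter=30, bins=20):
    """y: (N,), X: (N, p). Returns h(y) samples and phi_l(x_l) samples."""
    N, p = X.shape
    h = (y - y.mean()) / y.std()          # initial transformation of y
    phi = np.zeros((N, p))
    for _ in range(n_iter):
        for l in range(p):                # Eq. (4): update each phi_l in turn
            resid = h - phi.sum(axis=1) + phi[:, l]
            phi[:, l] = cond_mean(resid, X[:, l], bins)
        s = phi.sum(axis=1)               # Eq. (5): update h, then normalize
        h = cond_mean(s, y, bins)
        h = (h - h.mean()) / h.std()
    return h, phi
```

On an additive model such as y = x_1 + x_2^2 the loop drives the correlation between h(y) and the sum of the \phi_l(x_l) close to one, which is the maximization objective of Eq. (6).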
Six axial core powers are extracted from the 18-node axial core power shapes to simulate the six-segment in-core detector signals, i.e.

D_i^{incore} = \frac{P_{3i-2}}{\sum_{j=1}^{6} P_{3j-2}}, \quad i = 1, \ldots, 6,  (7)

where P_i is the i-th axial node power and D_i is the i-th normalized in-core detector signal. Six in-core detector signals and an 18-node axial power shape form an in-core data set.

Fig. 1. Architecture of the RBF network.
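Eq. (7) amounts to picking every third node power (nodes 1, 4, 7, 10, 13, 16 in 1-based numbering) and normalizing over the six readings. A minimal sketch, assuming the 18 node powers are held in a NumPy array:

```python
import numpy as np

def incore_signals(p_axial):
    """p_axial: (18,) node powers -> (6,) normalized in-core signals, Eq. (7)."""
    picked = p_axial[0::3][:6]       # nodes 3i-2, i = 1..6 (0-based 0, 3, ..., 15)
    return picked / picked.sum()     # normalize over the six detector readings
```

By construction the six simulated signals sum to one, matching the normalization in Eq. (7).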


Fig. 2. The operating diagram of ACP100.

The six-segment ex-core detector signals can be simulated by

D_i^{excore} = \frac{\int_V P(r) h_i(r) \, dr}{\sum_{j=1}^{6} \int_V P(r) h_j(r) \, dr}, \quad i = 1, \ldots, 6,  (8)

where P(r) is the three-dimensional core power distribution, h_i(r) is the three-dimensional spatial weighting function of the i-th ex-core detector, and V denotes the whole-core volume. The spatial weighting functions can be calculated by the adjoint mode of a transport code such as TORT or MCNP. Six ex-core detector signals and an 18-node axial power shape form an ex-core data set.
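Once the volume integral of Eq. (8) is discretized over core nodes, the simulated signals are a weighted sum of node powers followed by a normalization. In the sketch below the weighting factors `h` are illustrative placeholders; in the paper they come from adjoint transport calculations (TORT or MCNP).

```python
import numpy as np

def excore_signals(p_nodes, h, dv):
    """Discretized Eq. (8).

    p_nodes: (n,) node powers; h: (n_det, n) weighting factors;
    dv: (n,) node volumes. Returns (n_det,) normalized ex-core signals."""
    responses = h @ (p_nodes * dv)       # one quadrature of int P(r) h_i(r) dr
    return responses / responses.sum()   # normalize as in Eq. (8)
```

As a sanity check, a uniform power shape seen through identical weighting functions produces equal signals on every detector.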

3. Numerical results

3.1. Reevaluation of data sets

To test the validity and accuracy of the RBFNN method, we reevaluate the 7740 axial power shapes using the in-core detector signals and the ex-core detector signals separately. 1105 data sets are extracted randomly from the in-core data sets to train the RBFNN model, and the same data sets are employed to generate the nonlinear regression model of the ACE algorithm. The same procedure is applied to the ex-core data sets. Two quantities are used to compare the two axial power reconstruction algorithms, i.e., the average RMS error and the RMS error of each axial node:

Avg.RMS = \frac{1}{N_{data}} \sum_{k=1}^{N_{data}} \sqrt{ \frac{1}{N_{nodal}} \sum_{j=1}^{N_{nodal}} \left( \frac{P_{j,k}^{RECON} - P_{j,k}^{REF}}{P_{j,k}^{REF}} \right)^2 },  (9)

RMS(j) = \sqrt{ \frac{1}{N_{data}} \sum_{k=1}^{N_{data}} \left( \frac{P_{j,k}^{RECON} - P_{j,k}^{REF}}{P_{j,k}^{REF}} \right)^2 }, \quad j = 1, \ldots, N_{nodal},  (10)

where N_{data} is the number of test sets, N_{nodal} is the number of axial nodes, P_{j,k}^{REF} is the target power from the SMART output at the k-th node of the j-th power shape, and P_{j,k}^{RECON} is the power from the reconstruction output at the k-th node of the j-th power shape.

Table 1 shows the comparison of the average RMS errors: the RBFNN algorithm performs better than the ACE algorithm regardless of which kind of detector is used. Table 2 shows the comparison of the RMS errors of each axial node of the ACP100 core and leads to the same conclusion. The ACE algorithm has a large reconstruction error due to its poor generalization ability.

In real core detector measurements, the signals are contaminated by random fluctuations, which can degrade the accuracy of the power distribution reconstruction. To examine this effect, we assume that each simulated detector signal has a relative standard error \delta, fixed at 1% in this study:

D^{measure} = D^{theoretic} (1 + \delta \varepsilon).  (11)

Assuming a normal distribution of the signal uncertainties about the mean value, the detector signals were sampled 50 times for each power shape of the total 7740 data sets. To evaluate the anti-noise ability of the RBFNN method, the RMS error of each axial node is redefined as

RMS_{noise}(j) = \sqrt{ \frac{1}{N_{noise} N_{data}} \sum_{l=1}^{N_{noise}} \sum_{k=1}^{N_{data}} \left( \frac{P_{j,k,l}^{RECON} - P_{j,k}^{REF}}{P_{j,k}^{REF}} \right)^2 }, \quad j = 1, \ldots, N_{nodal},  (12)

where N_{data} is the number of power shapes (7740 in this study), N_{noise} is the number of detector signal perturbation samples (50 in this study), P_{j,k}^{REF} is the target power from the SMART output at the k-th node of the j-th power shape, and P_{j,k,l}^{RECON} is the power from the reconstruction

Table 1
The comparison of the average RMS errors (%).

Algorithm   In-core data sets   Ex-core data sets
RBFNN       0.1064              0.1371
ACE         1.89                4.85


output at the k-th node of the j-th power shape for the l-th detector signal perturbation sample.

The redefined RMS errors of each axial node are given in Table 3; the ACE algorithm is not considered here because of its poor generalization capability. The results show that the RBFNN algorithm has good anti-noise ability when the noisy in-core detector signals are used, while the correct axial power shape cannot be reconstructed from the noisy ex-core detector signals.
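The error metrics of Eqs. (9)–(12) and the 1% multiplicative noise model of Eq. (11) vectorize directly. In this sketch `p_ref` and `p_recon` are (N_data, N_nodal) arrays of reference and reconstructed node powers; the function names and array layout are illustrative, not the paper's code.

```python
import numpy as np

def avg_rms(p_recon, p_ref):
    """Eq. (9): mean over power shapes of the per-shape nodal RMS."""
    rel = (p_recon - p_ref) / p_ref
    return np.mean(np.sqrt(np.mean(rel**2, axis=1)))

def rms_per_node(p_recon, p_ref):
    """Eq. (10): RMS over power shapes, separately for each axial node."""
    rel = (p_recon - p_ref) / p_ref
    return np.sqrt(np.mean(rel**2, axis=0))

def perturb_signals(d, delta=0.01, n_samples=50, rng=None):
    """Eq. (11): D_measure = D_theoretic * (1 + delta * eps), eps ~ N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal((n_samples,) + d.shape)
    return d * (1.0 + delta * eps)
```

A uniform 2% overestimate, for example, yields an average RMS of exactly 2% under both Eq. (9) and Eq. (10), which is a convenient consistency check.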

Table 2
The comparison of the RMS errors (%) of each axial node.

Node   In-core RBFNN   In-core ACE   Ex-core RBFNN   Ex-core ACE
1      0.0984          12.8097       0.2096          26.9981
2      0.0180          5.5165        0.1245          13.6138
3      0.0201          5.1347        0.1214          13.7719
4      0.0179          4.8033        0.1195          11.0628
5      0.0170          4.1357        0.1083          28.1460
6      0.0193          4.8667        0.0876          12.5561
7      0.0197          3.3932        0.0621          14.6226
8      0.0165          7.0901        0.0432          18.7385
9      0.0141          1.1198        0.0454          12.1509
10     0.0248          0.9769        0.0852          12.1254
11     0.0167          2.6066        0.1185          27.9339
12     0.1154          1.8348        0.2552          12.9828
13     0.1180          1.4848        0.2256          12.5498
14     0.0167          2.7564        0.1470          12.3022
15     0.0890          3.2677        0.1924          13.1353
16     0.1043          3.1005        0.1630          12.2195
17     0.0174          3.9957        0.1868          15.9991
18     0.4820          8.3851        0.3614          20.8505

3.2. Ill-posedness of the ex-core detector system

To explain why the RBFNN algorithm fails to reconstruct the axial power shape when noisy ex-core detector signals are used, we analyze the mathematical properties of the ex-core detector system. In linear algebra, for a linear system Ax = b, the solution perturbation satisfies an inequality of the form

\frac{\|\Delta x\|}{\|x\|} \le \frac{\mathrm{cond}(A)}{1 - \|A^{-1}\| \, \|\Delta A\|} \left( \frac{\|\Delta A\|}{\|A\|} + \frac{\|\Delta b\|}{\|b\|} \right),  (13)

where cond(A) is the condition number of the matrix A. The condition number characterizes the ill-posedness of the linear system and describes the sensitivity of x to noise contained in A or b. To demonstrate the ill-posedness of the ex-core detector system, we approximate the ex-core detector signals in linear-system form:

Table 3
The comparison of RMS errors (%) of each axial node with noise in the detector signals.

Node   In-core data sets   Ex-core data sets
1      2.3974              23.7402
2      0.9738              7.0216
3      0.8042              7.1877
4      0.9245              10.3780
5      0.8928              10.9334
6      0.7425              9.0337
7      0.7128              5.5952
8      0.9019              2.2049
9      1.0657              2.6259
10     0.9764              4.2217
11     0.8589              10.2107
12     1.7363              12.9828
13     1.5134              7.8894
14     0.9258              17.4970
15     1.6082              25.8490
16     1.8153              28.8239
17     0.9917              19.1620
18     4.9499              45.7797

D_j = \sum_{i=1}^{N_{axial}} P_i^{axial} h_{ij}^{axial}, \quad j = 1, \ldots, M,  (14)

where P_i^{axial} is the power of the i-th axial segment, h_{ij}^{axial} is the axial spatial weighting factor of the i-th axial segment with respect to the j-th ex-core detector, N_{axial} is the number of core axial segments, and M is the total number of ex-core detectors. The reconstruction of the axial power shape from ex-core detector signals is a typical inverse problem, and well-posedness is not always guaranteed for inverse problems. Under the assumption that there is no error in the spatial weighting factors, the reconstruction error of the axial power satisfies

\frac{\|\Delta P\|}{\|P\|} \le \frac{\mathrm{cond}(H)}{1 - \|H^{-1}\| \, \|\Delta H\|} \, \frac{\|\Delta D\|}{\|D\|},  (15)

where H is the axial spatial weighting factor matrix.
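The trend that Eq. (15) describes can be checked numerically: the more the axial responses of the detectors overlap, the worse the conditioning of H. The Gaussian-shaped weighting factors below are purely hypothetical stand-ins for the adjoint-transport (TORT/MCNP) values used in the paper, so the computed condition numbers only illustrate the trend, not the paper's figures of 1.7954 and 15.5931.

```python
import numpy as np

def weighting_matrix(n_axial, n_det, width=4.0):
    """Hypothetical smooth axial response of each ex-core detector.

    Rows are normalized so each detector's weights sum to one."""
    z = np.arange(1, n_axial + 1, dtype=float)
    centers = np.linspace(1.0, float(n_axial), n_det)
    H = np.exp(-((z[None, :] - centers[:, None]) / width) ** 2)
    return H / H.sum(axis=1, keepdims=True)      # (n_det, n_axial)

H3 = weighting_matrix(18, 3)   # 3-segment system: well-separated responses
H6 = weighting_matrix(18, 6)   # 6-segment system: strongly overlapping responses
cond3 = np.linalg.cond(H3)     # singular-value ratio of the rectangular matrix
cond6 = np.linalg.cond(H6)
```

With these placeholder responses, `cond6` comes out larger than `cond3`, mirroring the paper's observation that adding axial detectors worsens the ill-posedness.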

Fig. 3. The axial spatial weighting factor of the 3-segment ex-core detector system (axial weighting factor, 0–1.6, vs. core axial segment, 0–18).


Fig. 4. The axial spatial weighting factor of the 6-segment ex-core detector system (axial weighting factor, 0–1.6, vs. core axial segment, 0–18).

The condition number of the axial spatial weighting factor matrix can be examined to show the ill-posedness of the ex-core detector system. For a 3-segment ex-core detector system, the core axial spatial weighting factors are shown in Fig. 3, and the condition number of the axial spatial weighting factor matrix is 1.7954. For the 6-segment ex-core detector system used in this study, the core axial spatial weighting factors are shown in Fig. 4, and the condition number of the axial spatial weighting factor matrix is 15.5931. The ill-posedness of the ex-core detector system increases as the number of ex-core detectors located along the axial direction increases, so a regularization technique is needed to handle this problem.

3.3. Regularized radial basis function neural network approach

In the field of neural networks, the network inversion method for solving inverse problems is useful but does not resolve the ill-posedness of the inverse problem. To overcome this difficulty, a regularized neural network is introduced (Aires et al., 2002). The learning algorithm of a neural network is an optimization technique that estimates the optimal network parameters W = {w_ij} by minimizing a loss function C(W), so that the neural mapping approaches the desired function as closely as possible. The most frequently used criterion for adjusting W is the mean-square error of the network outputs,

C(W) = \frac{1}{2} \sum_{k=1}^{m_L} \iint \left[ y_k(x; W) - t_k \right]^2 P(t_k | x) P(x) \, dt_k \, dx,  (16)

where t_k is the k-th desired output component, y_k is the k-th neural output component, and P(x) is the probability density function of the input data x. To reduce the sensitivity of the estimate to input noise, the input perturbation technique is used. It is a heuristic method for controlling the effective complexity of the neural network mapping. The technique consists of adding to each input, during the learning step, a random vector representing the instrumental noise. It has been demonstrated (Bishop, 1996) that, under certain conditions (a low-noise assumption), training with noise is closely related to regularization. With the input perturbation technique, the usual error function C(W) takes the form

\tilde{C}(W) = \frac{1}{2} \sum_{k=1}^{m_L} \iiint \left[ y_k(x + \eta; W) - t_k \right]^2 P(t_k | x) P(x) P(\eta) \, dt_k \, dx \, d\eta.  (17)

If the noise \eta is sufficiently small, the network function y_k(x + \eta; W) can be expanded to first order, which yields

\tilde{C}(W) = C(W) + \nu \, \Omega(W),  (18)

where \nu is the noise variance and

\Omega(W) = \frac{1}{2} \sum_{i} \sum_{k} \int \left( \frac{\partial y_k}{\partial x_i} \right)^2 P(x) \, dx  (19)

is a Tikhonov penalty term that avoids solutions with high gradients. Minimizing the new criterion \tilde{C}(W) constrains the solutions to be smooth. This regularization technique limits the number of degrees of freedom of the neural network and brings its complexity nearer to that of the desired function; this limitation reduces the class of possible solutions and makes the solution of the problem unique.
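In practice the input perturbation technique of Eq. (17) reduces to replicating each training sample with noisy copies of its inputs before the usual weight solve. The sketch below shows that data-augmentation step in isolation; the helper name and the default of 10 copies per sample (the number used in Section 3.3) are illustrative.

```python
import numpy as np

def jitter_training_set(X, Y, noise_std, n_copies=10, rng=None):
    """Input perturbation for Eq. (17): replicate (X, Y) with noisy inputs.

    X: (N, d) inputs, Y: (N, J3) targets. Targets are NOT perturbed --
    only the inputs carry the simulated instrumental noise."""
    if rng is None:
        rng = np.random.default_rng()
    Xs = [X] + [X + noise_std * rng.standard_normal(X.shape)
                for _ in range(n_copies)]
    Ys = [Y] * (n_copies + 1)
    return np.concatenate(Xs), np.concatenate(Ys)
```

For small noise, training on the jittered set is approximately equivalent to minimizing Eq. (18), i.e. adding the Tikhonov penalty of Eq. (19) with \nu equal to the noise variance (Bishop, 1996).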

Table 4
The comparison of RMS errors (%) of each node with noise in the ex-core detector signals.

Node   RBFNN     rRBFNN
1      23.7402   2.5889
2      7.0216    1.8044
3      7.1877    1.2429
4      10.3780   0.8948
5      10.9334   0.8205
6      9.0337    0.9042
7      5.5952    0.9965
8      2.2049    1.0253
9      2.6259    0.9750
10     4.2217    0.9349
11     10.2107   1.2587
12     12.9828   1.3633
13     7.8894    1.0889
14     17.4970   0.8525
15     25.8490   1.0803
16     28.8239   1.5662
17     19.1620   2.2987
18     45.7797   3.2214


Table 5
The node reconstruction error distribution for the rRBFNN algorithm (18 × 7740 × 50 nodes).

Relative error (%)   Number      Ratio (%)
<1                   4,296,229   61.67
<2                   5,976,922   85.80
<3                   6,544,234   93.95
<4                   6,778,550   97.31
<5                   6,881,250   98.78
<6                   6,926,723   99.44
<7                   6,947,513   99.73
<8                   6,957,257   99.87
<9                   6,961,946   99.94
<10                  6,964,223   99.97

The regularized RBFNN (rRBFNN) algorithm is then used to handle the ill-posedness of the 6-segment ex-core detector system, with 10 input perturbations for each training data set. A comparison between the rRBFNN algorithm and the RBFNN algorithm is given in Table 4, which lists the RMS errors of each node. It shows that the regularized RBFNN algorithm has good anti-noise ability when noisy ex-core detector signals are used. Further evidence of this anti-noise ability is given in Table 5, which shows the node reconstruction error distribution over all data points when noisy ex-core detector signals are used: with the regularized RBFNN algorithm, 99.97% of the data points have a relative error below 10%.

4. Conclusion and future work

Accurate reconstruction of the axial power distribution from in-core or ex-core detector signals is important because it is strongly related to the thermal margin of the reactor core. This paper presents the concept of using the RBFNN algorithm to reconstruct reactor axial power distributions. The validity and accuracy of the algorithm were demonstrated over a very wide range of axial power shapes, a total of 7740 cases for ACP100 cycle 1. The following conclusions can be drawn from this study:

(1) The reconstruction results show that the RBFNN algorithm provides credible results, performs better than the ACE algorithm, and has good generalization capability.
(2) The RBFNN algorithm has good anti-noise ability when noisy in-core detector signals are used, while correct results cannot be obtained when noisy ex-core detector signals are used.
(3) Training with noise constructs a regularized neural network. The regularized RBFNN algorithm can handle noisy ex-core detector signals and reconstruct the correct axial power shape.

More testing is needed to ensure that the RBFNN algorithm performs well over the entire operating cycle. There is also potential for coupling the RBFNN algorithm with radial power reconstruction methods, such as the ordinary kriging method (Peng et al., 2014), to form a three-dimensional power reconstruction code system when in-core detector signals are used.

References

Aires, F., et al., 2002. A regularized neural net approach for retrieval of atmospheric and surface temperatures with the IASI instrument. J. Appl. Meteorol. 41, 144–159.
Bishop, C., 1996. Neural Networks for Pattern Recognition. Clarendon Press, pp. 482–483.
Broomhead, D., Lowe, D., 1988. Multivariable functional interpolation and adaptive networks. Complex Syst. 2, 321–355.
Kim, H., et al., 1997. Axial shape index calculation for the 3-level ex-core detector. Korean Nuclear Society Autumn Meeting, Taejon, Korea.
Lee, E., et al., 1999. Reconstruction of core axial power shapes using the alternating conditional expectation algorithm. Ann. Nucl. Energy 26, 983–1002.
Lee, G., et al., 2002. Improved methodology for generation of axial flux shapes in digital core protection systems. Ann. Nucl. Energy 29, 805–819.
Peng, X., et al., 2014. A new power mapping method based on ordinary kriging and determination of optimal detector location strategy. Ann. Nucl. Energy 68, 118–123.