A novel algorithm for wavelet neural networks with application to enhanced PID controller design

Neurocomputing 158 (2015) 257–267


Yuxin Zhao a, Xue Du a,*, Genglei Xia b, Ligang Wu a

a College of Automation, Harbin Engineering University, Harbin 150001, China
b Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, Harbin Engineering University, Harbin 150001, China

Article info

Article history: Received 7 October 2014; received in revised form 24 November 2014; accepted 7 January 2015; communicated by Xiaojie Su; available online 7 February 2015.

Abstract

This paper presents a variable step-size updating algorithm for the wavelet neural network (WNN) used to set the parameters of an enhanced PID controller. Compared with the iterative method with constant step size, the most innovative feature of the proposed algorithm is its capability to shorten tracking time and improve convergence of the weight-updating process for complex systems or large-scale networks. Building on the relationship among the WNN, the Kalman filter and the normalized least mean square (NLMS) algorithm, we introduce a T–S fuzzy inference mechanism for the activation derived functions. Furthermore, a once-through steam generator (OTSG) model is established to validate practicability and reliability in a real complicated system. Finally, simulation results exhibit the effectiveness of the proposed variable step-size algorithm. © 2015 Elsevier B.V. All rights reserved.

Keywords: Wavelet neural network (WNN); Normalized least mean square (NLMS); Variable step size; Parameter tuning

1. Introduction

The majority of marine nuclear power plants adopt an integrated design, in which the once-through steam generator (OTSG) demands an efficient control system to ensure the safe operation of the nuclear reactor. PID and related controllers have played a significant role in industrial manufacturing across various complex dynamic systems [1,2], many of them involving the nuclear reactor and the OTSG [3–5]. The essential question for a PID controller is parameter tuning, i.e., controlling the system adaptively by setting its parameters online. In recent years, the artificial neural network (ANN) has opened a new avenue for complex nonlinear uncertain problems through its strong learning ability and highly parallel computation, and neural networks have drawn attention in process control, system identification and other domains [6–9]; the PID parameter tuning problem has accordingly been addressed by many neural-network algorithms [10–12]. Within neural networks, the back-propagation (BP) network is the most widely applied. However, some technical issues remain open; one of them is that the error arising from a complex nonlinear function with a high-dimensional vector easily causes local minima.

☆ This work was partially supported by the National Natural Science Foundation of China (60904087, 61174126, 51379049 and 61222301) and the Fundamental Research Funds for the Central Universities (HEUCFX41302).
* Corresponding author. E-mail address: [email protected] (X. Du).

http://dx.doi.org/10.1016/j.neucom.2015.01.015
0925-2312/© 2015 Elsevier B.V. All rights reserved.

The wavelet neural network (WNN) is a novel neural network built on wavelet theory: it replaces the traditional activation function in the hidden layer with a wavelet function and connects wavelet theory with the network coefficients by affine transformation. Integrating the advantages of the wavelet and the ANN, this network suits dynamic processing through improved learning capacity and generalization ability, which has broadened WNN applications in many nonlinear, strongly coupled systems [13–15]. Yet the WNN still faces long tracking times in complex systems or large-scale networks. This paper points out that the problem exists generally in algorithms derived from recursive schemes such as stochastic gradient (SG) descent or recursive least squares (RLS), whose main research achievements have been produced in adaptive filtering and system identification [16–18]. It is well known that the stability and convergence of this type of algorithm are governed by the step-size parameter, whose choice reflects a compromise between the dual requirements of rapid convergence and small misadjustment. To satisfy these conflicting requirements, researchers have studied means of adjusting the step-size parameter continually, many of them arising from the normalized least mean square (NLMS) algorithm [19–22] in adaptive filtering. Adaptive filtering has been utilized frequently in system control, system identification, signal processing and other applications for its robustness and simplicity of implementation. As one adaptive filter algorithm, NLMS is a good candidate for applications with a high degree of input power uncertainty [23–25]. In fact, adaptive filtering and system identification can be considered as a linear single neuron


Nomenclature

In: WNN input
Z: WNN output
φ(·): wavelet function
f(·): sigmoid function
f_L(·): piecewise-linear form of f(·)
w: WNN weight
ŵ: estimate of w
v(n): induced local field
x(n): input signal of neuron / state vector
η: step size
η(n): step-size function
δ(n): local gradient
ρ(n): linear form of δ(n)
I: number of neurons in input layer / unit matrix

of a MISO unknown system [26], and the WNN can also be regarded as a self-adaptive filter with a more complex topological structure. Since the NLMS algorithm has already been applied in various identification and adaptive filter algorithms, the motivation of this paper is to introduce NLMS into WNN variable step-size adjustment. For boundedness, NLMS applies to linear structures, which is difficult for the WNN because of its nonlinear activation functions. This simplification issue has received great attention in the field of hardware implementation. In [27], a fuzzy-based activation function for artificial neural networks was proposed; the approach eases hardware implementation through a straightforward interpretation based on If–Then rules. Besides this, the approximation performance of linear T–S fuzzy systems was analyzed in [28,29]. To further improve the performance of the variable step-size algorithm, we present a novel algorithm from the state-space perspective. This idea originates in the state-space relationship among the NLMS, the WNN and the unscented Kalman filter (UKF) [30,31]. Although [31] applies the UKF for its nonlinear capability, the Kalman filter is more convenient for computing the neural network under T–S fuzzy linearization. We therefore fully consider the relationship between the WNN, the Kalman filter and the NLMS, and provide the variable step-size algorithm.

The rest of this paper is organized as follows. In Section 2, we connect the wavelet function and the BP network, then analyze the weight updating in theory and illustrate the necessity of introducing the variable step size and the T–S inference mechanism. Section 3 develops a T–S fuzzy inference mechanism for the activation derived function, describing the membership function and relevant parameter settings. In Section 4, the variable step size is derived from the perspectives of the output layer and the hidden layer; we then propose a variable step-size updating algorithm and analyze its convergence. The OTSG model is structured with the RELAP5 program in Section 5, where the enhanced PID controller is also designed. The performance of the proposed algorithm in the WNN is analyzed in Section 6, which displays a faster convergence rate and smaller dynamic error in controlling a dynamic nonlinear system. Regarding the efficacy-loss problem of some real controllers, Section 6 also simulates and analyzes the control process on the basis of the OTSG model. Finally, the conclusion is drawn in Section 7.

Nomenclature (continued)

J: number of neurons in hidden layer
K: number of neurons in output layer
e_wnn: network output error
E_wnn(n): network output error energy
norm: ideal output
ξ(n): fuzzy logic value of derived function
ϕ(n): state transition matrix
H(n): observed matrix
u(n): system input signal
y(n): system observed signal
z(n): observed vector
θ(n): coefficient vector
ν(n): process noise
P(n): estimate of the error correlation matrix
R(n): correlation matrix of ν₂(n)
Q(n): correlation matrix of ν₁(n)

2. Problem description

This paper utilizes a WNN with three layers. Input and output are denoted by In and Z, respectively; φ(·) represents the wavelet function in the hidden layer, and f(·) denotes the sigmoid function in the output layer. The weight between input layer and hidden layer is w_ji, and the one between hidden layer and output layer is w_kj (i = 1, 2, …, I; j = 1, 2, …, J; k = 1, 2, …, K). The output Z_k(n) of the WNN can then be expressed as

Z_k(n) = f(v_k(n)),   (2.1)

v_k(n) = x_j(n) w_jk(n) = φ(v_j(n)) w_jk(n) = [φ(x_i(n) w_ij(n))]^T w_jk(n),   (2.2)

where v_k(·) and v_j(·) are the induced local fields of the output layer and hidden layer, respectively, and x_k(n) and x_j(n) are the input signals of neurons k and j, respectively. Thus online PID parameter tuning is the process of updating w_kj and w_ji. Let w(n) represent a generalized weight such as w_ji(n) or w_kj(n), and let ŵ(n) be the estimate of w at iteration n. The weight-updating process can be displayed through ŵ(n) as

ŵ(n) = ŵ(n−1) + η δ(n) x(n)
     = ŵ_kj(n−1) + η f′(v_k(n)) (norm − f(v_k(n))) x_j(n),   (2.3)

or

ŵ(n) = ŵ_ji(n−1) + η φ′(v_j(n)) [Σ f′(v_k(n)) (norm − f(v_k(n))) w_kj(n)] x_i(n),   (2.4)

where (2.3) displays the output-layer updating, (2.4) presents the hidden-layer adjusting, and η is the step size. Define the network output error e_wnn and the network output error energy E_wnn(n) as

E_wnn(n) = ½ (e_wnn)².   (2.5)

Consider z = {z_1, z_2, …, z_k, …} as the group of output signals, and let norm = {norm_1, norm_2, …, norm_k, …} be the group of ideal outputs in the WNN which drives the system to steady state. Taking k as the index of an output neuron, the error signal of the k-th output neuron follows from (2.1):

e_wnnk = norm_k − z_k = norm_k − f(v_k(n)).   (2.6)

On the basis of [26], the instantaneous error energy of output neuron k is defined by

E_wnnk(n) = ½ e²_wnnk(n),   (2.7)

and the total instantaneous error energy is obtained by

E_wnn(n) = ½ Σ_{k=1}^{K} e²_wnnk(n).   (2.8)
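As a concrete illustration of (2.1)–(2.8), the following sketch implements a minimal three-layer WNN forward pass and the constant step-size updates (2.3)–(2.4). The Mexican-hat wavelet for φ(·), the layer sizes and all initial values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
I_n, J, K = 3, 5, 1                           # neurons per layer (illustrative)
w_ij = rng.normal(scale=0.5, size=(I_n, J))   # input -> hidden weights
w_jk = rng.normal(scale=0.5, size=(J, K))     # hidden -> output weights
eta = 0.1                                     # constant step size

def phi(v):        # Mexican-hat wavelet, a common WNN choice for phi(.)
    return (1 - v**2) * np.exp(-v**2 / 2)

def phi_prime(v):
    return (v**3 - 3 * v) * np.exp(-v**2 / 2)

def f(v):          # sigmoid activation in the output layer
    return 1.0 / (1.0 + np.exp(-v))

def f_prime(v):
    s = f(v)
    return s * (1 - s)

def step(x, norm):
    """One forward pass plus the updates (2.3)-(2.4)."""
    global w_ij, w_jk
    v_j = x @ w_ij                 # hidden induced local field
    x_j = phi(v_j)                 # hidden-layer output
    v_k = x_j @ w_jk               # output induced local field
    z = f(v_k)                     # network output, eqs. (2.1)-(2.2)
    e = norm - z                   # error signal, eq. (2.6)
    delta_k = f_prime(v_k) * e                       # output local gradient
    delta_j = phi_prime(v_j) * (delta_k @ w_jk.T)    # hidden local gradient
    w_jk += eta * np.outer(x_j, delta_k)             # eq. (2.3)
    w_ij += eta * np.outer(x, delta_j)               # eq. (2.4)
    return 0.5 * float(e @ e)                        # error energy, eq. (2.8)

x = np.array([0.2, -0.1, 0.4])
E = [step(x, np.array([0.6])) for _ in range(200)]
print(E[0], E[-1])   # error energy shrinks over training
```

With the constant step size, the error energy decreases only gradually; the slow tracking this produces for larger networks is exactly what the variable step-size algorithm of Section 4 addresses.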


η is the step size controlling the convergence and steady-state behavior of network learning, so the choice of η represents a balance between adaptation and misadjustment. The majority of papers on neural networks examine the learning algorithm with a constant step size; a variable step-size algorithm is therefore required to adjust the step size adaptively with the error. The step size η(n) of the WNN consists of the output-layer part η_out(n) and the hidden-layer part η_hid(n). g(η_out(n)) is defined by the negative-gradient search (steepest-descent method) minimizing the network output error energy E_output(n):

g(η_out(n)) = E_output(n) = ½ Σ_{k=1}^{K} (norm_k − f(v_k(n)))².   (2.9)

When the best step size exists, it satisfies g′(η_out(n)) = 0. Given (2.9) and [26], it is obvious that (2.9) reduces to the variable step-size issue of NLMS when f(·) is a linear function with gradient equal to one. As both f(·) and f′(·) are nonlinear functions, the computational process becomes complicated; moreover, the structure is more complex when utilizing (2.4) in (2.9), which further increases the updating burden of the WNN. As a result, f(·) and φ(·), as well as their derived functions, are difficult to use directly in adaptive step-size adjustment. Referring to the achievements for the sigmoid in the realm of hardware implementation [27], the first step in this paper is to simplify the derived functions of f(·) and φ(·) by T–S fuzzy inference, which also simplifies g(η_out(n)) and g(η_hid(n)) and makes it possible to calculate the variable step size adaptively.

3. T–S fuzzy inference mechanism for activation derived functions

Motivated by the reasons stated above, a novel form of the activation function can be drawn by means of the T–S fuzzy logic methodology. The derived functions of the activation functions (sigmoid function or wavelet function) are continuously differentiable, and both types of activation functions can be described as piecewise-linear continuous functions by [27]

f(x) = A1,            when x is low;
       ξ x(n) + a0,   when x is medium;
       A2,            when x is high,   (3.1)

where x(n) is the function input, and A1 and A2 are the boundaries as x tends to negative or positive infinity. Differentiating f(x), the derived function can be displayed as

f′(x) = 0,   when x is low or high;
        ξ,   when x is medium,   (3.2)

where (3.2) illustrates that the value of ξ directly influences the results of (2.9) and the later calculations. A T–S fuzzy inference mechanism is therefore introduced to solve for ξ, which can be considered as a fuzzy logic output, inspired by the theories of both the WNN and T–S fuzzy systems. The T–S fuzzy model was proposed by Takagi and Sugeno in 1985 to deal effectively with multi-variable complex nonlinear systems. Unlike the Mamdani model, the consequent of a T–S fuzzy rule is a linear function of the inputs. Each rule thus carries much information, which produces the desired effect with fewer rules and achieves the linear regression of a nonlinear function more easily. Before fuzzy regression, step functions are used to section the derived functions; a piecewise-linear strategy approximates the activation derived functions through the fuzzy algorithm. Divide a derived function into M subsections, and regard b^m as the left border of the m-th subsection. Then the fuzzy algorithm can calculate


the gradient of every subsection based on the actual shape of the function and match the derived function. In this paper, the T–S fuzzy system consists of dual inputs x = [x1, x2] and a single output ξ(n), with the inputs

x1(n) = x(n),  x2(n) = x(n) − b^m.   (3.3)

A typical derived-function value ξ(n) can then be described by a set of T–S fuzzy rules R_i:

R_i: If x1(n) is A^i_1 and x2(n) is A^i_2, then ξ̂_i(n) = p_i x1(n) + q_i x2(n) + r_i  (i = 1, 2, …, M),

where A^i_j (j = 1, 2) represents the fuzzy set of the i-th rule. It is obvious that A^i_2 ⊆ A^i_1, and x(n) is the input signal of the neuron, namely the input of the activation function. p_i, q_i and r_i are constants related to the fuzzy set; they are inherent characteristics of the function. Each fuzzy set A^i_j is characterized by the following Gaussian-type membership function, where A^i_j(x_j) is the grade of membership of x_j in A^i_j:

A^i_j(x_j) = exp[ −((x_j − c_ij)/σ_ij)² ],  j = 1, 2;  i = 1, 2, …, M,   (3.4)

which contains two parameters c_ij and σ_ij, each with a physical meaning: c_ij determines the center, and σ_ij is the half-width. By applying the T–S fuzzy inference mechanism, the output ξ(n) is calculated as

ξ(n) = Σ_{i=1}^{M} μ̂_i(n) ξ̂_i(n),   (3.5)

where μ̂_i(n) = μ_i(n) / Σ_{i=1}^{M} μ_i(n) and μ_i(n) = A^i_1(n) · A^i_2(n). Equation (3.5) can be rewritten as

ξ(n) = ξ̂(n)^T μ(n),   (3.6)

where ξ̂(n) = [ξ̂_1(n), ξ̂_2(n), …, ξ̂_M(n)]^T and μ(n) = [μ̂_1(n), μ̂_2(n), …, μ̂_M(n)]^T. In (3.6), ξ̂(n) is the value of the derived function, and μ(n) determines the degree of contribution of each rule, so that each consequent of the fuzzy inference mechanism corresponds to only one adaptation value to be adapted online. Through the T–S fuzzy inference mechanism, the WNN built on the BP network can be translated into a piecewise-linear continuous structure. Then, combining with the weight-updating algorithm of [26], f′(v_k(n)) in (2.3) and φ′(v_j(n)) Σ f′(v_k(n)) (norm − f(v_k(n))) w_kj(n) in (2.4) can be piecewise-simplified as linear functions. Hence (2.3) can be rewritten as

ŵ(n) = ŵ(n−1) + η ρ(n) x(n),   (3.7)

where ρ(n) is the linear form of δ(n). This equation is proposed only for the step-size recursion and has no effect on the weight updating itself.
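A minimal sketch of the inference (3.3)–(3.6) for a sigmoid derived function follows, assuming M = 3 illustrative rules with hand-picked borders b^m, centers c_ij, half-widths σ_ij and consequent coefficients (p_i, q_i, r_i); none of these numerical values come from the paper.

```python
import numpy as np

# Illustrative rule parameters for M = 3 rules (assumed, not from the paper).
M = 3
b = np.array([-4.0, -1.0, 1.0])                          # left borders b^m
c = np.array([[-2.0, 0.5], [0.0, 0.5], [2.0, 0.5]])      # centers c_i1, c_i2
sigma = np.array([[1.5, 1.0], [1.5, 1.0], [1.5, 1.0]])   # half-widths
p = np.array([0.005, 0.0, -0.005])                       # consequent slopes on x1
q = np.array([0.0, 0.0, 0.0])                            # consequent slopes on x2
r = np.array([0.02, 0.25, 0.02])                         # consequent offsets

def gauss(x, cc, ss):
    """Gaussian membership, eq. (3.4)."""
    return np.exp(-((x - cc) / ss) ** 2)

def xi(x_n, m):
    """T-S inference (3.3)-(3.6): fuzzy value of the derived function at x(n)."""
    x1, x2 = x_n, x_n - b[m]                             # inputs, eq. (3.3)
    mu = gauss(x1, c[:, 0], sigma[:, 0]) * gauss(x2, c[:, 1], sigma[:, 1])
    mu_hat = mu / mu.sum()                               # normalized firing strengths
    xi_hat = p * x1 + q * x2 + r                         # linear rule consequents
    return float(mu_hat @ xi_hat)                        # weighted sum, (3.5)/(3.6)

# Near zero the inferred gradient is large (the sigmoid is steepest there),
# while far from zero it is small, mimicking f'(x).
print(xi(0.0, 1), xi(3.0, 2))
```

The returned value plays the role of ξ(n) in (3.7): a cheap, piecewise-linear stand-in for the true activation gradient.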

4. Proposed algorithm and convergence analysis

4.1. Relationship analysis

A state-space model can be presented as

x(n+1) = ϕ(n) x(n) + ν₁(n),
z(n) = H(n) x(n) + ν₂(n),   (4.1)

where x(n+1) and x(n) are state vectors, z(n) is the observed vector, ν₁(n) and ν₂(n) are process noises, and ϕ(n) and H(n) are the state transition matrix and observed matrix at time n, respectively. As a method to estimate the system state under noise, the Kalman filter equations can be displayed as

e(n) = z(n) − H(n) x(n|n−1),   (4.2)


K(n) = P(n|n−1) H^T(n) [H(n) P(n|n−1) H^T(n) + R(n)]^{−1},   (4.3)

x(n|n) = x(n|n−1) + K(n) e(n),   (4.4)

x(n+1|n) = ϕ(n) x(n|n),   (4.5)

P(n) = [I − K(n) H(n)] P(n|n−1),   (4.6)

P(n+1|n) = ϕ(n) P(n) ϕ^T(n) + Q(n).   (4.7)

In the equations above, I is a unit matrix, x(n|n−1) is estimated from x(n) based on the sequence composed of z(n), P(n) is the estimate of the error correlation matrix, P(n|n−1) is a preliminary estimate of P(n), and R(n) and Q(n) are the correlation matrices of the noises ν₂(n) and ν₁(n), respectively. The adaptive filter equation is

y(n) = u^T(n) θ(n) + ν(n),   (4.8)

where u(n) is the input signal, y(n) is the observed signal, θ(n) is the coefficient vector to be updated, and ν(n) is again the process noise. Now consider the state vector in (4.8) as the coefficient vector, set P(n) = I, and ignore the update equations (4.6) and (4.7). Then, referring to [23,25], the update equation of θ(n) can be displayed as

θ̂(n+1) = θ̂(n) + η(n) u(n) [y(n) − u^T(n) θ̂(n)],   (4.9)

η(n) = [u^T(n) u(n) + ε]^{−1},   (4.10)

where η(n) is regarded as the variable step size of the adaptive filter, and ε is a regularization factor that keeps the equation well behaved when u^T(n) u(n) is too small. Comparing (4.9) with (3.7), it can be observed that the parameter-updating processes in these equations are the same, while the situation of ρ(n) in the hidden layer will be analyzed further in the next section. Based on the discussion above, the relationship among the linear neural network, the Kalman filter and the NLMS is obtained, and this conclusion will be utilized in the variable step-size algorithm for the WNN.

4.2. Algorithm formulation

As stated in the previous section, let the piecewise-linear form of f(·) be f_L(·); then f_L(n) = ξ(n) v_k(n) + b(n), where b(n) is the intercept of f_L(·). According to NLMS theory, when ξ(n) = 1 and b(n) = 0, f_L(·) is equal to (4.8) without process noise. For a better approximation of η_out(n) in (2.9), f_L(·) is applied instead of f(x), with its gradient ξ(n) calculated at each time instant n. Although f_L(·) is a relatively poor substitute for the sigmoid (wavelet) function, it provides a more efficient solution than the WNN with constant step size. NLMS was born in the linear single neuron with a scalar step, whereas the WNN has a nonlinear multi-layer structure with a vectorial step; hence the step of the WNN must be calculated layer by layer. Assume the WNN has a three-layer structure, with I, J and K neurons in the input, hidden and output layers, respectively.

4.2.1. Output layer
By utilizing f_L(·) with (3.6), (2.9) can be developed as follows:

h(η_f(n)) = norm − ξ_f W^T_JK(n) X_J(n) − b_f(n),   (4.11)

g(η_f(n)) = ½ [h(η_f(n))]^T [h(η_f(n))],   (4.12)

where norm = [norm_1, norm_2, …, norm_K]^T is the desired output of the network, W_JK (J × K) is the weight matrix between output layer and hidden layer, X_J = [x_1, x_2, …, x_J]^T is the output vector of the hidden layer (amounting to the input of the NLMS algorithm), ξ_f is a diagonal matrix containing the K gradients of f(·), and b_f = [b_f1, b_f2, …, b_fK]^T is the intercept vector. Define E_K(n) = [e_wnn1, e_wnn2, …, e_wnnK]^T; 𝐄_K(n) is the diagonal matrix of e_wnn1, e_wnn2, …, e_wnnK, and η_f(n) is the diagonal matrix of η_f1(n), η_f2(n), …, η_fK(n). According to (2.3), every w_kj(n) to be updated corresponds to an x_j(n) as the input of the activation. To satisfy the multiplication principle of matrices and the update condition of (2.3), X_J(n) is expanded to the J × K matrix X_KJ(n), whose every row is X_J(n). Then substituting (2.3) into (4.11) yields

h(η_f(n)) = norm − ξ_f [Ŵ_KJ(n−1) + X_KJ(n) η_f(n) ξ_f(n) 𝐄_K(n)]^T X_J(n) − b_f(n)
         = E_K(n) − η_f(n) ξ_f²(n) 𝐄_K(n) X^T_KJ(n) X_J(n)
         = [h_1(n), h_2(n), …, h_K(n)]^T.   (4.13)

Substituting (4.13) into (4.12) and rewriting g(η_f(n)) as G1(η_f(n)) gives

G1(η_f(n)) = ½ diag(h_1²(n), …, h_K²(n)).   (4.14)

Let G1′(η_f(n)) = 0; then the best step size is

η_f(n) = diag(η_f1(n), …, η_fK(n)),   (4.15)

where η_fk(n) = 1 / [ξ_fk²(n) (x_1²(n) + x_2²(n) + ⋯ + x_J²(n))]. When Σ_{j=1}^{J} x_j²(n) is too small, the problem caused by numerical calculation has to be considered. To overcome this difficulty, (4.15) is altered to

η_fk(n) = 1 / [ξ_fk²(n) (x_1²(n) + x_2²(n) + ⋯ + x_J²(n)) + σ_v²]
        = P(n) / [P(n) ξ_fk²(n) (x_1²(n) + x_2²(n) + ⋯ + x_J²(n)) + σ_v²],   (4.16)

where σ_v² > 0 is set to a fairly small value. Based on (4.16), when ξ(n) = 1 and P(n) = 1, (4.16) is the typical NLMS algorithm; when ξ(n) = 1 and P(n) ≠ 1, (4.16) is a form of Kalman filter. From this conclusion, (4.16) can be further optimized from the perspectives of NLMS and the Kalman filter. As inferred in the previous section, assuming P(n) = I for all instants n is a rough approximation. To evaluate η_out(n) more precisely, the constant P(n) = 1 is replaced by λ(n), which is updated at every instant n. Since the precision of SG is lower than that of recursive least squares (RLS) [27], the introduction of λ(n) drives the precision toward RLS; moreover, the proposed treatment has less computational complexity than the RLS algorithm. By utilizing λ(n), the following update equation is obtained from (2.3):

ŵ_jk(n) = ŵ_jk(n−1) + f′(v_k(n)) (norm − f(v_k(n))) x_j(n) λ_k(n) / D_λ,   (4.17)

where D_λ = λ_k(n) ξ_fk²(n) (x_1²(n) + x_2²(n) + ⋯ + x_J²(n)) + σ_v². Given the relationship between the proposed algorithm and Kalman filter theory, λ(n) can be determined through

E{(norm − f(v_k(n)))²} = x_k^T(n) P_k(n) x_k(n) + σ_v² = λ_k(n) x_k^T(n) x_k(n) + σ_v²,   (4.18)
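Under the piecewise-linear simplification, the output-layer step size (4.16) and the λ-weighted update (4.17) reduce to a few lines. The sketch below uses scalar illustrative values for the gradient ξ_fk, the hidden outputs x_j and λ_k (assumptions for demonstration, not values from the paper), and checks the stated special case: with ξ = 1 and P(n) = 1, (4.16) collapses to the classical NLMS step.

```python
import numpy as np

def eta_fk(xi_fk, x_hidden, sigma_v2=1e-6, P=1.0):
    """Variable output-layer step size, eq. (4.16)."""
    energy = float(np.dot(x_hidden, x_hidden))     # x_1^2 + ... + x_J^2
    return P / (P * xi_fk**2 * energy + sigma_v2)

def update_w(w_jk, xi_fk, x_hidden, x_j, err, lam_k, fprime_vk, sigma_v2=1e-6):
    """lambda-weighted output-layer weight update, eq. (4.17)."""
    energy = float(np.dot(x_hidden, x_hidden))
    D_lambda = lam_k * xi_fk**2 * energy + sigma_v2
    return w_jk + fprime_vk * err * x_j * lam_k / D_lambda

x_hidden = np.array([0.3, -0.2, 0.5])
# Special case: xi = 1, P = 1 gives the NLMS step 1 / (||x||^2 + eps)
eta_nlms = eta_fk(1.0, x_hidden)
print(eta_nlms)
```

Because the step is normalized by the input energy, large hidden-layer outputs automatically shrink the step, which is the misadjustment control the paper attributes to the NLMS connection.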

and λ(n) can be produced by

λ(n) = [E{(norm − f(v_k(n)))²} − σ_v²] / [x_k^T(n) x_k(n)].   (4.19)

4.2.2. Hidden layer
Assume that η_φ(n) is a diagonal matrix composed of {η_φ1(n), η_φ2(n), …, η_φJ(n)}, and that X_J(n) is expanded to X_KJ(n), whose every row is X_J(n). Then the following equation is obtained:

h(η_φ(n)) = norm − ξ_f Ŵ^T_JK(n) [ξ_φ (Ŵ_IJ(n) X_I(n) − b_φ(n))] − b_f(n).   (4.20)

Assuming C(n) = W_JK(n) ξ_f(n) 𝐄_K(n), it follows from the definitions above that

C(n) = [ξ_fk(n) w_jk(n) e_wnnk(n)]_{J×K} = [C_1(n); C_2(n); …; C_J(n)],   (4.21)

with C_jk(n) = ξ_fk(n) w_jk(n) e_wnnk(n), D_j(n) = Σ_{k=1}^{K} C_jk(n) and D(n) = diag(D_1(n), …, D_J(n)). Combining with (2.4) and substituting the results above into (4.20),

h(η_φ(n)) = E_K(n) − A(n) η_φ(n) B(n) = [h_1(n), h_2(n), …, h_K(n)]^T,   (4.22)

where

A(n) = [A_kj(n)]_{K×J} = [ξ_φj(n) ξ_fk(n) w_jk(n)]_{K×J},   (4.23)

B(n) = [B_j(n)]_{J×1} = [D_j(n) x_ij]_{J×I} [x_i]_{I×1}.   (4.24)

Then g(η_φ(n)) can be rewritten as G2(η_φ(n)) according to (4.12) and (4.22):

G2(η_φ(n)) = ½ [E_K(n) − A(n) η_φ(n) B(n)] [E_K(n) − A(n) η_φ(n) B(n)]^T.

Let G2′(η_φ(n)) = 0; the following equation is obtained:

A(n) η_φ(n) B(n) B^T(n) A^T(n) = ½ [E_K(n) B^T(n) A^T(n) + (E_K(n) B^T(n) A^T(n))^T].

When K = J, A(n) is a square matrix and the following steps can be carried out as in the output layer. When K ≠ J, A(n) is not invertible; the solution of η_φ(n) can then be characterized by the solution identification theorem, which distinguishes the situations of infinitely many solutions and no solution. If η_φ(n) has no solution, the best step size simply cannot be determined by this recursion algorithm. In conclusion, the hidden-layer result is still more complex than the traditional Kalman filter even after piecewise linearization, so the step size cannot be further optimized as in the Kalman filter. The proposed algorithm is built on the WNN, infers the activation derived functions by the T–S fuzzy method, and then solves the step sizes in order. Since the weight updating in the hidden layer is more complicated than that in the output layer, even when A(n) is a square matrix the solution procedure would add considerable burden. According to (2.3) and (2.4), the updating of the hidden layer is based on the output layer; meanwhile, the existence of Σ f′(v_k(n)) (norm − f(v_k(n))) w_kj(n) means that every neuron of the output layer takes part in the updating process between the i-th neuron of the hidden layer and the previous layer. In other words, every neuron of the output layer contributes to η_φ(n). Therefore this paper adopts the same step size in η_φ(n), based on η_f(n):

η_φ(n) = (1/K) Σ_{k=1}^{K} η_f(n).   (4.25)

The WNN process is presented in Table 1.

Table 1
Algorithm process.

Input: input signal In; ideal output norm
Output: observed output Z(n); weights W(n)
Initialize: the number of layers in the WNN structure N_layer; the number of neurons in every layer M_1, M_2, …; the activation function of every layer f_1(·), f_2(·), …; loop count Num
Begin:
  Initialize weights W_0
  Repeat
    Output layer: Z(n) = f(v_k(n))
    T–S fuzzy inference: activation derived function
    Proposed algorithm: output layer η_f(n)
    Hidden layer: η_φ = (1/K) Σ_{k=1}^{K} η_f(n)
    Update W(n)
  Until n > Num
End

4.3. Convergence of the algorithm proposed

The WNN solves problems by updating weights, and the covariance of the weight matrices is directly related to the mean-square error. When the output of the WNN varies slightly and the state remains steady, the relationship

lim_{n→∞} E[e(n)²] = lim_{n→∞} E[e_a(n)²] + σ_v²   (4.26)

is met. In the equation above, e_a(n) is the a priori error, which can be obtained [32] by

e_a(n) = u(n)^T [W − Ŵ(n)],   (4.27)

where W denotes the optimal weight which Ŵ(n) is estimated to approach; thus lim_{n→∞} E[e(n)²] represents the excess mean-square error. Unlike the NLMS algorithm and the Kalman filter algorithm, the step size is calculated more than once in the WNN, and weights are also updated in different layers. Given the significance of the main calculation steps in the output layer and the close link between the error and the output-layer weights for WNN back-propagation, the convergence of the output-layer steps is decisive for the whole step set. Therefore the convergence of the output-layer steps is discussed principally in this section. Referring to [32], assume the step size of the proposed algorithm


in the steady state to be η_f(∞), which can be illustrated by [25]

η(∞) = η_f(∞) ‖X_J‖² = 1 − σ_v² / lim_{n→∞} E[e(n)²].   (4.28)

Based on [32], and with R_{X_J} the correlation matrix of X_J, the following equation is satisfied:

lim_{n→∞} E[e(n)²] = [η(∞) σ_v² / (2 − η(∞))] Tr(R_{X_J}) E[1/‖X_J‖²] + σ_v².   (4.29)

From the viewpoint of [32], (4.29) was summarized for a fixed step size, while it is introduced in this section for the variable step size; the reason is that η_f(∞) remains unchanged when n approaches infinity and the state stays stable. Substituting (4.28) into (4.29),

lim_{n→∞} E[e(n)²] = [(E[e(n)²] − σ_v²)/(E[e(n)²] + σ_v²)] σ_v² Tr(R_{X_J}) E[1/‖X_J‖²] + σ_v²,   (4.30)

which can be rearranged as

lim_{n→∞} E[e(n)²] − σ_v² = (E[e(n)²] − σ_v²) [σ_v² Tr(R_{X_J}) / (E[e(n)²] + σ_v²)] E[1/‖X_J‖²].   (4.31)

Based on the statistical characteristics of X_J, the factor σ_v² Tr(R_{X_J}) / (E[e(n)²] + σ_v²) can be considered an arbitrary value. Thus, for (4.31) to hold, lim_{n→∞} E[e(n)²] must equal σ_v², which suggests that E[e(n)²] is governed by σ_v². When the WNN approaches stability and σ_v² decreases to zero, η_f(∞) and η(∞) can be calculated as

η_f(∞) = (1/‖X_J‖²) η(∞) = (1/‖X_J‖²) (1 − σ_v² / lim_{n→∞} E[e(n)²]) → 0.   (4.32)

As η_f(∞) comes from the error of the WNN, the results above reflect a class of desired weights for the neural network and imply the stable state of the WNN. Hence the convergence of η_f(∞) influences η_φ(∞) and the step sizes in other layers, if any.
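The overall procedure of Table 1 (forward pass, derived-function value, output-layer step (4.16), shared hidden-layer step (4.25), weight update) can be sketched as follows. To keep the sketch short, the derived-function values are taken as the true analytic gradients rather than the T–S fuzzy-inferred ones, the per-neuron step is damped by a factor of 0.5 for robustness, and all dimensions and targets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
I_n, J, K = 3, 4, 2
w_ij = rng.normal(scale=0.3, size=(I_n, J))
w_jk = rng.normal(scale=0.3, size=(J, K))
sigma_v2 = 1e-6

phi = lambda v: (1 - v**2) * np.exp(-v**2 / 2)        # Mexican-hat wavelet
dphi = lambda v: (v**3 - 3 * v) * np.exp(-v**2 / 2)
f = lambda v: 1.0 / (1.0 + np.exp(-v))
df = lambda v: f(v) * (1 - f(v))

def train_step(x, norm):
    """One pass of the Table 1 loop with variable step sizes."""
    global w_ij, w_jk
    v_j = x @ w_ij; x_j = phi(v_j)
    v_k = x_j @ w_jk; z = f(v_k)
    e = norm - z
    xi_f = df(v_k)                 # stand-in for the T-S inferred gradients
    energy = float(x_j @ x_j)
    # eq. (4.16) with P(n) = 1, damped by 0.5 for this sketch
    eta_f = 0.5 / (xi_f**2 * energy + sigma_v2)
    eta_phi = float(eta_f.mean())  # shared hidden-layer step, eq. (4.25)
    delta_k = xi_f * e
    delta_j = dphi(v_j) * (delta_k @ w_jk.T)
    w_jk += np.outer(x_j, eta_f * delta_k)   # per-neuron output-layer step
    w_ij += eta_phi * np.outer(x, delta_j)   # shared hidden-layer step
    return 0.5 * float(e @ e)

x = np.array([0.1, 0.3, -0.2]); norm = np.array([0.7, 0.4])
E = [train_step(x, norm) for _ in range(100)]
print(E[0], E[-1])
```

Because the output-layer step is energy-normalized, each iteration removes a fixed fraction of the current error, so the error energy falls far faster than with the small constant step of the Section 2 scheme.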

5. The model and PID controller of OTSG

The OTSG with double-sided heat transfer has a more compact structure and stronger heat-exchange capability; it is an efficient heat-transfer steam generator whose heat-exchange capability can reach 2.6 times that of the spiral-tube steam generator. The OTSG with concentric annuli tubes can reduce the scale of the reactor pressure vessel (RPV) and improve the mobility of the device. Compared with the natural-circulation U-tube steam generator, the OTSG is designed with a simpler structure but less water volume in the heat-transfer tubes. Since it is difficult to measure the water level in the OTSG, especially during variable-load operation, the OTSG needs a high-efficiency control system to ensure safe operation. OTSG research shows that the outlet steam pressure can be influenced by the feed-water and steam flows. A change of steam pressure affects the transfer of reactor heat from the primary loop to the secondary loop, which impacts the average temperature of the primary coolant; this transformation is a synchronous and complicated process. The argument above illustrates that the outlet steam pressure of the OTSG is a significant variable, and a strategy to control the outlet steam pressure should be adopted. With a small water capacity, the steam pressure changes easily during steam-flow transients, and a high-efficiency feed-water control system is demanded to keep the steam pressure stable. In this section, the OTSG model is established with the RELAP5 transient analysis program. The WNN-enhanced PID system keeps the outlet steam pressure invariant by controlling the secondary feed-water system (Fig. 1).
In the OTSG system, the essential equations of the thermal-hydraulic model are:

(1) Mass continuity equations

\frac{\partial}{\partial t}(\alpha_g \rho_g) + \frac{1}{A}\frac{\partial}{\partial x}(\alpha_g \rho_g v_g A) = \Gamma_g,  (5.1)

\frac{\partial}{\partial t}(\alpha_f \rho_f) + \frac{1}{A}\frac{\partial}{\partial x}(\alpha_f \rho_f v_f A) = \Gamma_f.  (5.2)

(2) Momentum conservation equations

\alpha_g \rho_g A \frac{\partial v_g}{\partial t} + \frac{1}{2}\alpha_g \rho_g A \frac{\partial v_g^2}{\partial x} = -\alpha_g A \frac{\partial P}{\partial x} + \alpha_g \rho_g B_x A - (\alpha_g \rho_g A)\,\mathrm{FWG}(v_g) + \Gamma_g A (v_{gI} - v_g) - (\alpha_g \rho_g A)\,\mathrm{FIG}(v_g - v_f) - C \alpha_g \alpha_f \rho_m A \left[\frac{\partial (v_g - v_f)}{\partial t} + v_f \frac{\partial v_g}{\partial x} - v_g \frac{\partial v_f}{\partial x}\right],  (5.3)

\alpha_f \rho_f A \frac{\partial v_f}{\partial t} + \frac{1}{2}\alpha_f \rho_f A \frac{\partial v_f^2}{\partial x} = -\alpha_f A \frac{\partial P}{\partial x} + \alpha_f \rho_f B_x A - (\alpha_f \rho_f A)\,\mathrm{FWF}(v_f) - \Gamma_g A (v_{fI} - v_f) - (\alpha_f \rho_f A)\,\mathrm{FIF}(v_f - v_g) - C \alpha_f \alpha_g \rho_m A \left[\frac{\partial (v_f - v_g)}{\partial t} + v_g \frac{\partial v_f}{\partial x} - v_f \frac{\partial v_g}{\partial x}\right].  (5.4)

(3) Energy conservation equations

\frac{\partial}{\partial t}(\alpha_g \rho_g U_g) + \frac{1}{A}\frac{\partial}{\partial x}(\alpha_g \rho_g U_g v_g A) = -P\frac{\partial \alpha_g}{\partial t} - \frac{P}{A}\frac{\partial}{\partial x}(\alpha_g v_g A) + Q_{wg} + Q_{ig} + \Gamma_{ig} h_g^{*} + \Gamma_w h_g' + \mathrm{DISS}_g,  (5.5)

\frac{\partial}{\partial t}(\alpha_f \rho_f U_f) + \frac{1}{A}\frac{\partial}{\partial x}(\alpha_f \rho_f U_f v_f A) = -P\frac{\partial \alpha_f}{\partial t} - \frac{P}{A}\frac{\partial}{\partial x}(\alpha_f v_f A) + Q_{wf} + Q_{if} - \Gamma_{ig} h_f^{*} - \Gamma_w h_f' + \mathrm{DISS}_f.  (5.6)

(4) Noncondensables in the gas phase

\frac{\partial}{\partial t}(\alpha_g \rho_g X_n) + \frac{1}{A}\frac{\partial}{\partial x}(\alpha_g \rho_g v_g X_n A) = 0.  (5.7)

Fig. 1. Control system of reactor load following.
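These field equations are solved by RELAP5's own semi-implicit scheme. As a much simpler illustration of how a phasic continuity equation such as (5.1) is advanced on a one-dimensional mesh, the sketch below performs one explicit donor-cell (upwind) finite-volume step; the grid layout, time step, and closed-end boundary treatment are illustrative assumptions, not RELAP5's actual discretization.

```python
import numpy as np

def mass_continuity_step(alpha_g, rho_g, v_g, A, Gamma_g, dx, dt):
    """One explicit finite-volume update of the vapor mass continuity
    equation (5.1): d/dt(a*rho) + (1/A) d/dx(a*rho*v*A) = Gamma.
    alpha_g, rho_g, A, Gamma_g are cell-centred; v_g lives on faces
    (len = cells + 1).  Purely illustrative -- RELAP5 itself uses a
    semi-implicit staggered-grid scheme."""
    U = alpha_g * rho_g                      # conserved density per cell
    # face areas by simple averaging; end faces reuse boundary cells
    A_face = np.empty(len(v_g))
    A_face[1:-1] = 0.5 * (A[:-1] + A[1:])
    A_face[0], A_face[-1] = A[0], A[-1]
    # donor-cell (upwind) mass flux a*rho*v*A at interior faces
    donor = np.where(v_g[1:-1] >= 0.0, U[:-1], U[1:])
    flux = np.zeros(len(v_g))                # closed ends: zero flux
    flux[1:-1] = donor * v_g[1:-1] * A_face[1:-1]
    # explicit update: U_new = U + dt * (Gamma - (1/A) dF/dx)
    U_new = U + dt * (Gamma_g - (flux[1:] - flux[:-1]) / (A * dx))
    return U_new / rho_g                     # recover void fraction (rho fixed)
```

With zero phase-change source and closed ends, the scheme conserves total mass exactly, since the interior fluxes telescope.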

(5) Boron concentration in the liquid field.

The RELAP5 thermal-hydraulic model solves eight field equations for eight primary dependent variables: pressure (P), phasic specific internal energies (U_g, U_f), vapor volume fraction (void fraction) (\alpha_g), phasic velocities (v_g, v_f), noncondensable quality (X_n), and boron density (\rho_b). The independent variables are time (t) and distance (x). The secondary dependent variables used in the equations are the phasic densities (\rho_g, \rho_f), phasic temperatures (T_g, T_f), saturation temperature (T_s), and the noncondensable mass fraction in the noncondensable gas phase (X_{ni}) [33].

To meet the operation demands of nuclear power equipment, the steam power should be maintained within a certain range by OTSG feed-water control. Based on theoretical analysis and design experience, an enhanced PID controller is adopted that comprises a reactor power control system and a secondary feed-water control system, the two subsystems running simultaneously. The outlet steam power of the OTSG is controlled through the secondary feed-water system, and the reactor power through the average temperature of the primary coolant. When the load alters, the feed-water flow and reactor power are determined by

G_w = K_1 G_s + K_{Pp}\left[\Delta P_s(n) - \Delta P_s(n-1)\right] + K_{Pi}\,\Delta P_s(n) + K_{Pd}\left[\Delta P_s(n) - 2\Delta P_s(n-1) + \Delta P_s(n-2)\right].  (5.8)

Here G_s represents the new steam flow, K_1 is the conversion coefficient, K_{Pp}, K_{Pi} and K_{Pd} are the control coefficients, and \Delta P_s is the pressure deviation of the OTSG.

To tune the PID parameters adaptively, the algorithm proposed in the previous section is adopted to improve the performance of the WNN. A three-layer structure is applied with system error e_{sys}, and the WNN inputs are

in_1(n) = e_{sys}(n) - e_{sys}(n-1), \quad in_2(n) = e_{sys}(n), \quad in_3(n) = e_{sys}(n) - 2e_{sys}(n-1) + e_{sys}(n-2),  (5.9)

u(n) = u(n-1) + \Delta u,  (5.10)

\Delta u = K_{Pp}\left[\Delta P_s(n) - \Delta P_s(n-1)\right] + K_{Pi}\,\Delta P_s(n) + K_{Pd}\left[\Delta P_s(n) - 2\Delta P_s(n-1) + \Delta P_s(n-2)\right],  (5.11)
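Equations (5.8), (5.10) and (5.11) form an incremental (velocity-form) PID law acting on the pressure-deviation history. A minimal sketch, with hypothetical gain values since the paper does not list numeric coefficients:

```python
class IncrementalPID:
    """Incremental (velocity-form) PID of Eqs. (5.10)-(5.11).
    The gains and the conversion coefficient K1 below are placeholders."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # deviation at step n-1
        self.e2 = 0.0   # deviation at step n-2

    def delta_u(self, e):
        """Control increment for the current pressure deviation e = dPs(n)."""
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e       # shift the deviation history
        return du

def feedwater_flow(k1, g_s, pid, dp_s):
    """Eq. (5.8): feed-water demand = scaled steam flow + PID correction."""
    return k1 * g_s + pid.delta_u(dp_s)
```

Note that the three bracketed differences in (5.11) are exactly the three WNN inputs of (5.9) evaluated on the pressure deviation, which is why the same difference terms feed both the controller and the network.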

where the number of output neurons equals the number of input neurons. Owing to the boundedness of the sigmoid function, the WNN output parameters for the enhanced PID remain in the range [-1, 1]. To address this, an expertise improvement unit is added between the WNN and the flow-demand counter in Fig. 1, which corrects the order of magnitude of the parameters. The WNN weights for setting the PID are then updated as \hat{w}(n) = \hat{w}(n-1) + \eta \delta(n) x(n), i.e.

\hat{w}_{jk}(n) = \hat{w}_{jk}(n-1) + \eta\,\mathrm{sgn}\!\left(\frac{\partial P_s}{\partial \Delta u}\right) f'(v_k(n))\, e_{sys}(n)\, in_k(n)\, x_j(n)  (5.12)

or

\hat{w}_{ij}(n) = \hat{w}_{ij}(n-1) + \eta\,\mathrm{sgn}\!\left(\frac{\partial P_s}{\partial \Delta u}\right)\left[\sum_k f'(v_k(n))\, e_{sys}(n)\, in_k(n)\, \hat{w}_{jk}(n)\right]\varphi'(v_j(n))\, x_i(n).  (5.13)

By substituting (2.3) into (5.12), the update can be written as

\hat{W}_{JK}(n) = \hat{W}_{JK}(n-1) + \mathrm{sgn}\!\left(\frac{\partial P_s}{\partial \Delta u}\right)\frac{\eta_f(n)\,\xi_f(n)}{\mathrm{norm}_{X_{JK}}(n)}\, X_J(n)\, E_K^{T}(n)\, In(n) - b_f(n),  (5.14)

where P_s is the secondary outlet pressure of the OTSG, and In(n) is a diagonal matrix composed of in_1(n), in_2(n) and in_3(n).
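The normalized weight update of (5.12)–(5.14) can be sketched as follows. This is an illustrative NLMS-style step for the hidden-to-output weights only: the step-size is normalized by the regressor energy, and the unknown plant Jacobian \partial P_s / \partial \Delta u is replaced by its sign, as in the paper. The constants eta0 and eps and the exact grouping of the terms are assumptions, not the paper's precise formula.

```python
import numpy as np

def wnn_output_update(W, x_hidden, f_prime, e_sys, pid_inputs,
                      plant_sign, eta0=0.5, eps=1e-6):
    """NLMS-style update of the hidden-to-output weights, in the spirit
    of Eqs. (5.12)-(5.14).

    W          : (J, K) weight matrix, J hidden nodes, K = 3 PID outputs
    x_hidden   : (J,)   hidden-layer outputs (the regressor)
    f_prime    : (K,)   derivatives of the output activations
    e_sys      : float  system error
    pid_inputs : (K,)   in_1(n), in_2(n), in_3(n) of Eq. (5.9)
    plant_sign : +1/-1  sign of dPs/dDelta_u, replacing the Jacobian
    """
    # local gradient for each output neuron k, as in Eq. (5.12)
    delta = plant_sign * f_prime * e_sys * pid_inputs        # shape (K,)
    # step-size normalised by regressor energy (NLMS); eps avoids /0
    eta = eta0 / (eps + np.dot(x_hidden, x_hidden))
    return W + eta * np.outer(x_hidden, delta)               # shape (J, K)
```

The normalization makes the effective step shrink when the hidden activations are large, which is the stabilizing mechanism the variable step-size scheme relies on.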

6. Simulation results and discussion

In this section, simulations of the T–S fuzzy inference mechanism and of the enhanced PID controller are carried out to examine the performance and assess the accuracy of the proposed algorithm. For clarity, the simulations are divided into two parts. The first consists of numerical examples in a Matlab program, covering the T–S fuzzy method and the controller applied to nonlinear control systems. The second is written in FORTRAN and coupled with RELAP5, which implements the OTSG model.

6.1. Numerical examples

This paper applies the T–S fuzzy inference mechanism to approximate nonlinear continuously differentiable functions piecewise-linearly, treating the sigmoid function and the Morlet function by linear regression respectively. Table 2 lists the fuzzy sets and related rules. As the sigmoid derived function is axisymmetric and the Morlet derived function is centrosymmetric, Table 2 only lists the parameters of the functions over the positive, rapidly varying part of the domain. The sigmoid and Morlet functions are continuously differentiable and bounded, and their derived functions share the same properties, as Figs. 2–5 show; these properties make the T–S fuzzy inference mechanism applicable. Based on the membership functions in Section 3 and the rules in Table 2, the derived functions of the sigmoid and the Morlet wavelet are calculated in Figs. 6 and 7, which display the linear regression of the derived functions by T–S fuzzy inference. The proposed T–S fuzzy algorithm is compared with a piecewise-step approach and shows a better fit, especially in the rapidly varying regions (see the detail views). T–S fuzzy resolves the rule consequents adaptively through the fuzzy sets, which yields more precise results. The inference of the derived functions is the theoretical basis of the step-size updating, so its accuracy influences the value of k_f in


Table 2
Consequent parameters of T–S fuzzy rules.

                       Sigmoid derived function        Morlet derived function
x1        x2           p(i)      q(i)      r(i)        p(i)      q(i)      r(i)
0–0.5     0–0.167      -0.2136   0.15      0.5         -2.9364   -1.4      0.00001
0–0.5     0.167–0.333  -0.2136   0.00001   0.524       -2.9364   0.00001   -0.23
0–0.5     0.333–0.5    -0.2136   -0.11     0.555       -2.9364   1.4       -0.7
0.5–1.0   0–0.167      -0.3664   -0.015    0.5764      -1.0638   -1.35     -2.0001
0.5–1.0   0.167–0.333  -0.3664   0.00001   0.5814      -1.0638   0.00001   -2.2
0.5–1.0   0.333–0.5    -0.3664   0.02      0.5664      -1.0638   1.2       -2.6
1.0–1.5   0–0.167      -0.2392   -0.04     0.4492      2.1582    0.4       -3.0945
1.0–1.5   0.167–0.333  -0.2392   0.00001   0.4412      2.1582    0.00001   3.0145
1.0–1.5   0.333–0.5    -0.2392   0.04      0.4292      2.1582    -0.4      3.2945
1.5–2.0   0–0.167      -0.1102   -0.04     0.2557      0.3874    0.6       -0.4383
1.5–2.0   0.167–0.333  -0.1102   0.00001   0.2497      0.3874    0.00001   -0.3383
1.5–2.0   0.333–0.5    -0.1102   0.04      0.2357      0.3874    -0.6      -0.1383
2.0–2.5   0–0.167      -0.044    -0.02     0.1233      -0.4552   0.05      1.2469
2.0–2.5   0.167–0.333  -0.044    0.00001   0.1203      -0.4552   0.00001   1.2499
2.0–2.5   0.333–0.5    -0.044    0.02      0.1133      -0.4552   0.01      1.242
2.5–3.0   0–0.167      -0.0168   -0.002    0.0553      -0.2186   -0.12     0.6554
2.5–3.0   0.167–0.333  -0.0168   0.00001   0.0548      -0.2186   0.00001   0.6354
2.5–3.0   0.333–0.5    -0.0168   0.002     0.0543      -0.2186   0.12      0.5954
3.0–3.5   0–0.167      -0.0062   -0.002    0.0235      -0.0132   -0.04     0.0392
3.0–3.5   0.167–0.333  -0.0062   0.00001   0.023       -0.0132   0.00001   0.0342
3.0–3.5   0.333–0.5    -0.0062   0.002     0.0225      -0.0132   0.02      0.0292
3.5–4.0   0–0.167      -0.0023   0.0005    0.0097      0.0112    0.0005    -0.0462
3.5–4.0   0.167–0.333  -0.0023   0.00001   0.0097      0.0112    0.00001   -0.0462
3.5–4.0   0.333–0.5    -0.0023   -0.0005   0.00995     0.0112    -0.0005   0.046
4.0–4.5   0–0.167      -0.0008   0.0005    0.0041      0.0027    0.00001   -0.0121
4.0–4.5   0.167–0.333  -0.0008   0.00001   0.0041      0.0027    0.00001   -0.0121
4.0–4.5   0.333–0.5    -0.0008   -0.0005   0.00435     0.0027    0.00001   -0.0121
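With first-order T–S rules, each row of Table 2 contributes a linear consequent of the (assumed) form p(i)·x1 + q(i)·x2 + r(i) on its (x1, x2) interval. The sketch below evaluates such a rule base; the paper's actual membership functions from Section 3 are not reproduced here, so a crisp interval match stands in for the fuzzy weighting, and only two sample rows of Table 2 (sigmoid derived function) are included.

```python
# Each rule: (x1_lo, x1_hi, x2_lo, x2_hi, p, q, r); sample rows from Table 2.
RULES = [
    (0.0, 0.5, 0.000, 0.167, -0.2136, 0.15,    0.5),
    (0.0, 0.5, 0.167, 0.333, -0.2136, 0.00001, 0.524),
]

def ts_infer(x1, x2, rules=RULES):
    """Weighted T-S output.  Here every matching rule fires with weight 1;
    in the paper the weights come from the Section 3 membership functions."""
    num = den = 0.0
    for lo1, hi1, lo2, hi2, p, q, r in rules:
        if lo1 <= x1 < hi1 and lo2 <= x2 < hi2:
            num += p * x1 + q * x2 + r      # first-order consequent
            den += 1.0
    return num / den if den else 0.0        # 0 outside the tabulated domain
```

With genuinely fuzzy (overlapping) membership functions, `den` becomes the sum of firing strengths and the output blends neighbouring consequents smoothly, which is what gives the T–S curve its advantage over the piecewise-step approach in Figs. 6 and 7.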

Fig. 2. Sigmoid function.
Fig. 3. Morlet function.
Fig. 4. Sigmoid derived function.
Fig. 5. Morlet derived function.
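For reference, the activation and derived functions plotted in Figs. 2–5 can be written out directly. The forms below are the common logistic sigmoid and the real Morlet wavelet with the usual modulation frequency 1.75; the paper does not state its exact parameter choices, so these are assumptions.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid, bounded in (0, 1) (Fig. 2)."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    """Derived function s(1 - s), bounded and peaking at x = 0 (Fig. 4)."""
    s = sigmoid(x)
    return s * (1.0 - s)

def morlet(x, w0=1.75):
    """Real Morlet wavelet cos(w0 x) exp(-x^2/2) (Fig. 3)."""
    return np.cos(w0 * x) * np.exp(-0.5 * x * x)

def morlet_prime(x, w0=1.75):
    """Analytic derivative of the real Morlet wavelet (Fig. 5)."""
    return (-w0 * np.sin(w0 * x) - x * np.cos(w0 * x)) * np.exp(-0.5 * x * x)
```

Both derived functions are continuously differentiable and bounded, which is the property the T–S piecewise-linear approximation of Table 2 relies on.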

later calculation. Figs. 6 and 7 indicate that the error between the original functions and the T–S consequents is slight, and the T–S fuzzy model is linear and therefore imposes little computational burden; the T–S fuzzy inference thus proves practicable. Next, to examine the effectiveness of the proposed algorithm, the enhanced PID controller with updating parameters is applied

to control known nonlinear systems, given by

norm = a_1 \sin(b_1 \pi n) + c_1 \log_d n,  (6.1)

norm = a_2 \cos(b_2 \pi n) + c_2 n.  (6.2)

Fig. 6. Sigmoid derived function regression.
Fig. 7. Morlet derived function regression.
Fig. 8. The output results in the first system.
Fig. 10. The output results in the second system.
Fig. 11. The error in the second system.
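The two benchmark references of (6.1)–(6.2) can be generated as below; the coefficients a_i, b_i, c_i and the logarithm base d are placeholders, since the paper leaves them unstated.

```python
import numpy as np

def reference_signals(n, a1=1.0, b1=0.05, c1=1.0, d=10.0,
                      a2=1.0, b2=0.05, c2=0.01):
    """Benchmark references of Eqs. (6.1)-(6.2) on sample indices n.
    All coefficient values here are illustrative assumptions."""
    n = np.asarray(n, dtype=float)
    # Eq. (6.1): sinusoid plus a base-d logarithmic drift
    norm1 = a1 * np.sin(b1 * np.pi * n) + c1 * np.log(n) / np.log(d)
    # Eq. (6.2): cosine plus a linear ramp
    norm2 = a2 * np.cos(b2 * np.pi * n) + c2 * n
    return norm1, norm2
```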

Figs. 8–11 show the control performance of the variable step-size versus the constant step-size. The constant step-size WNN and the proposed algorithm are each used to control the nonlinear systems, and both bring a relatively large system error in the initial period. In Figs. 8 and 9, the two algorithms show roughly the same error during 0–30 s; after 30 s the curve of the variable step-size lies below that of the constant step-size, and the error amplitude is also smaller after 50 s. Figs. 10 and 11 show that with the proposed algorithm the nonlinear system approaches its steady state within 30 s, whereas with a constant step-size WNN the system is not steady until 50 s. Where the system error varies dramatically, the two algorithms perform differently: during 10–12 s the errors of both methods are roughly the same; the adaptive adjustment of the step-size then takes effect at 12 s, and the error amplitude becomes smaller than with the constant step-size. This comparison indicates that the enhanced PID controller with a variable step-size WNN follows the nonlinear system with better performance. In summary, the variable step-size of the WNN is adjusted actively by the system error, which reduces the amplitude of variation, raises the weight-updating rate, and controls the nonlinear system adaptively.

6.2. RELAP5 operation example

Fig. 9. The error in the first system.

This paper establishes the Integrated Pressurized Water Reactor (IPWR) model with the RELAP5 transient analysis program, and the control strategy in Fig. 1 is applied to control the outlet pressure of the


Fig. 12. Operating characteristic A.
Fig. 13. Operating characteristic B.
Fig. 14. Operating characteristic C.
Fig. 15. Operating characteristic D.
Fig. 16. Operating characteristic E.

OTSG. Through analysis of the load-shedding limiting condition of the nuclear power unit, the reliability and availability of the proposed control method have been demonstrated. In the load-shedding process, the secondary steam demand drops from 100%FP to 20%FP within 5 s, and the resulting operating characteristics are displayed in Figs. 12–16. Fig. 12 illustrates that when the load reduces, the steam flow descends quickly. Because of hysteresis in the feed-water control system, the OTSG produces more steam than the secondary demand in the initial stage of load shedding, which causes a rapid increase of the OTSG outlet steam pressure, whose maximum even reaches

4.7 MPa. Under this large deviation, the feed-water flow of the control system is adjusted downward, and the outlet steam pressure then falls as steam production decreases. Since the steam-pressure transient evolves quickly, a large deviation is produced and creates a certain overshoot. After adjusting for about 20 s, the feed-water flow eventually matches the steam flow, and the steam pressure is kept steady, as shown in Fig. 13. Fig. 14 displays the variation of the outlet and inlet temperatures of the OTSG, based on the average temperature of the primary coolant, during load shedding. The reduced heat absorption on the OTSG secondary side causes the secondary feed-water flow to decrease. With the coolant flow remaining stable, the temperature difference lessens while the primary-coolant average temperature rises. By adjusting the reactor power downward, the power is matched to the secondary-side feed-water flow; the variation of reactor power during load shedding is displayed in Fig. 15. Throughout the load shedding, the steam pressure remains steady and the secondary saturation temperature stays stable, which accomplishes the load-following control and ensures the superheat degree of the OTSG outlet steam.

7. Conclusions

In this paper, a variable step-size updating algorithm has been investigated for the wavelet neural network. The algorithm takes advantage of T–S fuzzy inference and the NLMS method, adapting them to the variable step-size of the WNN, which endows the algorithm with availability and reliability. Based on the proposed algorithm, the enhanced PID with updating parameters can control nonlinear systems with smaller error and a faster convergence rate. Motivated by the practicality of the algorithm, the model of the OTSG is established. Meanwhile, the algorithm is utilized in a

certain simulation of load shedding, which can be regarded as an unknown complicated nonlinear process. Simulation results have validated the effectiveness of the proposed variable step-size algorithm.

References

[1] S. Bennett, The past of PID controller, Annu. Rev. Control 25 (2001) 43–53.
[2] W.K. Ho, C.C. Hang, J. Ballmann, Tuning of PID controllers based on gain and phase margin specifications, Automatica 31 (3) (1995) 497–502.
[3] C. Liu, F.Y. Zhao, P. Hu, S. Hou, C. Li, P controller with partial feed forward compensation and decoupling control for the steam generator water level, Nucl. Eng. Des. 240 (1) (2010) 181–190.
[4] M.V. de Oliveira, J.C.S. de Almeida, Application of artificial intelligence techniques in modeling and control of a nuclear power plant pressurizer system, Prog. Nucl. Energy 63 (2013) 71–85.
[5] C. Liu, J.F. Peng, F.Y. Zhao, C. Li, Design and optimization of fuzzy-PID controller for the nuclear reactor power control, Nucl. Eng. Des. 239 (11) (2009) 2311–2316.
[6] K.J. Hunt, B. Sbarbaro, R. Zbikowski, P.J. Gawthrop, Neural networks for control system: a survey, Automatica 28 (6) (1992) 1083–1112.
[7] S. Bouhouche, M. Lahreche, J. Bast, Control of heat transfer in continuous casting process using neural networks, Acta Autom. Sin. 34 (6) (2008) 701–706.
[8] R. Artit, M. Milos, T. Akira, EBaLM-THP – a neural network thermohydraulic prediction model of advanced nuclear system components, Nucl. Eng. Des. 239 (2) (2009) 308–319.
[9] A.M. Schaefer, U. Steffen, G.Z. Hans, A recurrent control neural network for data efficient reinforcement learning, in: IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, 2007, pp. 151–157.
[10] J. Chen, T.C. Huang, Applying neural networks to on-line updated PID controllers for nonlinear process control, J. Process Control 14 (2) (2004) 211–230.
[11] H. Shu, Y. Pi, PID neural networks for time-delay system, Comput. Chem. Eng. 24 (2) (2000) 859–862.
[12] M.C. Fang, Y.Z. Zhuo, Z.Y. Lee, The application of the self-tuning neural network PID controller on the ship roll reduction in random waves, Ocean Eng. 37 (7) (2010) 529–538.
[13] B. Delyon, A. Juditsky, A. Benveniste, Accuracy analysis for wavelet approximations, IEEE Trans. Neural Netw. 6 (2) (1995) 332–348.
[14] Q. Zhang, Using wavelet network in nonparametric estimation, IEEE Trans. Neural Netw. 8 (2) (1997) 227–236.
[15] Q. Zhang, A. Benveniste, Wavelet networks, IEEE Trans. Neural Netw. 3 (6) (1992) 889–898.
[16] F. Ding, Several multi-innovation identification methods, Digital Signal Process. 20 (4) (2010) 1027–1039.
[17] K. Shi, X. Ma, A variable-step-size NLMS algorithm using statistics of channel response, Signal Process. 90 (6) (2010) 2107–2111.
[18] K. Mayyas, F. Moani, An LMS adaptive algorithm with a new step-size control equation, J. Frankl. Inst. 348 (4) (2011) 589–605.
[19] S. Zhang, J. Zhang, Transient analysis of zero attracting NLMS algorithm without Gaussian inputs assumption, Signal Process. 97 (2014) 100–109.
[20] J.C. Liu, Y. Xia, H.R. Li, A nonparametric variable step-size NLMS algorithm for transversal filters, Appl. Math. Comput. 217 (2011) 7365–7371.
[21] H.C. Shin, A.H. Sayed, W.J. Song, Variable step-size NLMS and affine projection algorithms, IEEE Signal Process. Lett. 11 (2) (2004) 132–135.
[22] K. Mayyas, Performance analysis of the selective coefficient update NLMS algorithm in an undermodeling situation, Digital Signal Process. 23 (6) (2013) 1967–1973.
[23] S. Haykin, Adaptive Filter Theory, 4th ed., Prentice Hall, Upper Saddle River, NJ, 2002.
[24] E. Eweda, A new approach for analyzing the limiting behavior of the normalized LMS algorithm under weak assumptions, Signal Process. 89 (11) (2009) 2143–2151.
[25] H. Cho, S.W. Kim, Variable step-size normalized LMS algorithm by approximating correlation matrix of estimation error, Signal Process. 90 (9) (2010) 2792–2799.
[26] S. Haykin, Neural Networks and Learning Machines, 3rd ed., Prentice Hall, Upper Saddle River, NJ, 2008.
[27] E. Soria-Olivas, J.D. Martín-Guerrero, G. Camps-Valls, et al., A low-complexity fuzzy activation function for artificial neural networks, IEEE Trans. Neural Netw. 14 (6) (2003) 1576–1579.
[28] K. Zeng, N. Zhang, W. Xu, Typical T–S fuzzy systems are universal approximators, Control Theory Appl. 18 (2) (2001) 293–297.
[29] K. Zeng, N. Zhang, W. Xu, Sufficient condition for linear T–S fuzzy systems as universal approximators, Acta Autom. Sin. 27 (5) (2001) 606–612.
[30] M.B. Malik, M. Salman, State-space least mean square, Digital Signal Process. 18 (3) (2008) 334–345.
[31] X. Gan, J. Duanmu, W. Cong, SSUKF-WNN algorithm and its applications in aerodynamic modeling of flight vehicle, Control Decis. 26 (2) (2011) 187–190.
[32] H.C. Shin, A.H. Sayed, Mean-square performance of a family of affine projection algorithms, IEEE Trans. Signal Process. 52 (7) (2004) 90–102.
[33] S.M. Sloan, R.R. Schultz, G.E. Wilson, RELAP5/MOD3 Code Manual, USNRC: INEL, 1998.

Yuxin Zhao received the B.S. degree in Automation from Harbin Engineering University, China, in 2001, and the Ph.D. degree in Navigation Guidance and Control from Harbin Engineering University, China, in 2005, and completed post-doctoral research in Control Science and Engineering at Harbin Institute of Technology, China, in 2008. From September 2004 to January 2005, he was a Visiting Scholar at the State University of New York, USA. From March 2012 to March 2013, he was a Research Associate in the Centre for Transport Studies, Department of Civil and Environmental Engineering, Imperial College London, London, UK. In 2001, he joined Harbin Engineering University, China, and was promoted to Professor in 2013. Dr. Zhao is a member of the Royal Institute of Navigation, a senior member of the China Navigation Institute, and a member of the Mission Planning Committee of the Chinese Society of Astronautics. His current research interests include complex hybrid dynamical systems, optimal filtering, multi-objective optimization techniques, and intelligent vehicle navigation systems.

Xue Du received the B.S. and M.S. degrees from the College of Automation, Harbin Engineering University, in 2010 and 2012, respectively. She is currently pursuing her Ph.D. degree in Control Science and Engineering at Harbin Engineering University. Her research interests include adaptive and learning control, neural network control, industrial systems control, and their applications.

Genglei Xia received the B.S. degree from the Department of Thermal Engineering, Shandong Jianzhu University, Jinan, China, in 2007, and the M.S. degree from the College of Nuclear Science and Technology, Harbin Engineering University, Harbin, China, in 2010. He is currently pursuing the Ph.D. degree in Nuclear Science and Technology at Harbin Engineering University. His research interests include thermal analysis of nuclear reactors, two-phase flow instability, and the simulation of nuclear power plants.

Ligang Wu received the B.S. degree in Automation from Harbin University of Science and Technology, China, in 2001; the M.E. degree in Navigation Guidance and Control from Harbin Institute of Technology, China, in 2003; and the Ph.D. degree in Control Theory and Control Engineering from Harbin Institute of Technology, China, in 2006. From January 2006 to April 2007, he was a Research Associate in the Department of Mechanical Engineering, The University of Hong Kong, Hong Kong. From September 2007 to June 2008, he was a Senior Research Associate in the Department of Mathematics, City University of Hong Kong, Hong Kong. From December 2012 to December 2013, he was a Research Associate in the Department of Electrical and Electronic Engineering, Imperial College London, London, UK. In 2008, he joined the Harbin Institute of Technology, China, as an Associate Professor, and was promoted to Professor in 2012. Dr. Wu currently serves as an Associate Editor for a number of journals, including IEEE Transactions on Automatic Control, Information Sciences, Signal Processing, and IET Control Theory and Applications. He is also an Associate Editor for the Conference Editorial Board, IEEE Control Systems Society. Dr. Wu has published more than 100 research papers in international refereed journals. He is the author of the monographs Sliding Mode Control of Uncertain Parameter-Switching Hybrid Systems (John Wiley & Sons, 2014) and Fuzzy Control Systems with Time-Delay and Stochastic Perturbation: Analysis and Synthesis (Springer, 2015). His current research interests include switched hybrid systems, computational and intelligent systems, sliding mode control, optimal filtering, and model reduction.