IFAC MCPL 2007 The 4th International Federation of Automatic Control Conference on Management and Control of Production and Logistics September 27-30, Sibiu - Romania
NEURAL-ADAPTIVE CONTROL BASED ON ADALINE NEURONS WITH APPLICATION TO A POWER SYSTEM Ioan Filip, Octavian Prostean, Florin Dragan, Cristian Vasar Department of Control System Engineering, "Politehnica" University of Timisoara, Faculty of Automation and Computer Science, 300223 Timisoara, Romania, Phone: (+40) 0256-403238, E-Mail:
[email protected]
Abstract: A neural-adaptive control solution is presented in this paper. The control strategy is based on the adaptive linear neuron called ADALINE. Unlike other neural control solutions, based on perceptron neurons and characterized by a long learning process and a difficult on-line tuning of the weights, this approach uses a fast algorithm that adapts the neuron's weights on-line. The non-linear character of the control law is therefore induced by the permanent changes of the neuron weights, which are the variable parameters of the controller. A set of case studies is presented, with application to the excitation control of a synchronous generator. Copyright © 2007 IFAC
Keywords: adaptive control, neural networks, ADALINE, modelling, simulation, synchronous generator.
1. INTRODUCTION
Using artificial neural networks to design viable process control strategies represents an alternative to classical control approaches. However, the neural network training process, due to its relatively long duration, is in most cases an impediment that limits the applicability domain. This observation concerns the application of artificial neural networks based on perceptron neurons (having nonlinear activation functions and distributed in one or more hidden layers) to the real-time control of nonlinear complex processes, where the time factor is critical. In such cases, the computationally intensive training methods take a long time, and the training results often depend on the initial values chosen for the network weights (Ardalani et al., 2005). Therefore, an on-line control strategy based on such an artificial neural network becomes difficult to implement in many cases concerning the control of plants characterized by a complex functionality. Moreover, if such a control method is nevertheless chosen, then after a successful off-line training of a neural network (which learns only a limited desired functionality), classical control components (proportional, integrative) are often attached to the neural kernel of the controller in order to reach the objectives of the control strategy and to extend the functionality domain (somewhat similar to a fuzzy controller with external dynamics) (Filip et al., 2006a). In this context, an artificial neural network based on the ADALINE neuron (Adaptive Linear Neuron) can be chosen as an efficient and viable solution for designing and implementing adaptive control strategies for nonlinear processes with complex dynamics. Unlike neural networks based on perceptrons, which involve a long adaptation time of the weights, the main characteristic of the ADALINE neuron consists in the possibility of adapting the neuron weights on-line, using a relatively simple computational algorithm, easy to implement and relatively similar to the process parameter estimator used in a classical self-tuning control strategy. The aim of this paper is to analyse the possibility of using such an adaptive control system based on an ADALINE network, with application to the excitation control system of a synchronous generator: a process with complex dynamics, time-varying parameters, stochastically
perturbed. Also, due to the similarities with a self-tuning controller, some comparative remarks on the performance of the two control structures will be presented. Self-tuning control already represents a classical solution with outstanding performance for the case of a synchronous generator connected to an electric power system (Filip et al., 2006b). The inconvenience of this classical approach consists in the high computational effort, together with a potential numerical instability of the parameter estimation algorithm caused by the varying regimes of such a process; this problem is addressed by a control strategy based on ADALINE (Ardalani et al., 2005).
2. ADALINE NEURAL NETWORKS

The power of the ADALINE consists in its on-line training capacity, through a permanent adjustment of its weights performed by a supplementary adaptation mechanism externally attached to the neuron. Considering a certain evolution of the desired neuron output y(t), the aim of the training (adaptation) process is a continuous tuning of the neuron weights so as to minimize the learning error e(t) as fast as possible. This adaptation method is very often called "supervised learning", involving the presence of a "trainer" able to provide the desired output during the whole on-line learning process (Benedito and Eduardo, 1998; Derrick et al., 1990; Widrow and Stearns, 1985). The mathematical model of an ADALINE neuron is practically identical to that of a McCulloch-Pitts neuron (perceptron); the differences are the type of the activation function and the on-line weight adaptation mechanism structurally attached to the neuron. The ADALINE neuron structure is presented in figure 1.

Fig. 1. ADALINE neuron structure

Unlike the McCulloch-Pitts neuron, for which training is an off-line preliminary phase, for the ADALINE neuron the training phase takes place on-line. Even if the name of the neural element indicates that its activation function is linear, its input-output transfer characteristic becomes non-linear due to the permanent variation of the weights (which are adapted on-line). The ADALINE input-output function is described by a mathematical relationship of the following form:

ŷ(t) = δ[ŵ_1(t) x_1(t) + ŵ_2(t) x_2(t) + ... + ŵ_n(t) x_n(t)]   (1)

where: x_1(t), x_2(t), ..., x_n(t) – neuron inputs; ŵ_1(t), ŵ_2(t), ..., ŵ_n(t) – on-line tuned weights, calculated by a learning mechanism; ŷ(t) – trained neuron output; δ – slope of the linear activation function (usually lower- and upper-limited). From this point on it will be assumed that δ = 1 (without affecting the generality of the considered problem). Also, two basic variables can be noticed in figure 1:
- y(t) – plant output (the desired neuron output);
- e(t) = y(t) - ŷ(t) – learning error.
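As an illustration of relation (1), a minimal sketch of the ADALINE forward computation is given below (Python is used here purely for illustration; the function name, the default slope δ = 1 and the saturation limits are assumptions of this sketch, not values fixed by the paper).

    import numpy as np

    def adaline_output(weights, inputs, delta=1.0, limit=10.0):
        # Relation (1): weighted sum of the inputs passed through a linear
        # activation with slope delta; the linear characteristic is usually
        # lower- and upper-limited, here by an assumed symmetric bound.
        s = delta * float(np.dot(weights, inputs))
        return max(-limit, min(limit, s))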
3. ADALINE CONTROLLER

The ADALINE controller design strategy is based mostly on the fundamental principles of classical adaptive self-tuning controllers (Hunt et al., 1992). The on-line parameter estimation algorithm is substituted, in this case, by an on-line learning mechanism (computing the weights of the neuron which models the controlled process). This neural model, implemented using an ADALINE neuron, has the role of learning as accurately as possible the dynamics of the real process. In closed loop, the real process practically represents a reference model, whose dynamics must be learned on-line by the neural model (Marei et al., 2004). The learning phase consists practically in a continuous adjustment of the weights of the ADALINE neuron that models the real process, in order to minimize, as fast as possible, the error between the real process output and the neural model output (the learning error). The weights of the identified neural model represent the primary information from which, based on an adequate computing strategy (implemented by the computing block in figure 2), the controller's adaptable parameters are obtained (practically the weights of the second ADALINE neuron, which implements the controller). Based on the presented design and functioning strategy, the parallelism between classical adaptive controllers and the considered neural-adaptive controller is evident. The general structure of such a neural-adaptive control system is depicted in figure 2.

Fig. 2. The structure of the neural-adaptive control system

We assume the following notation: Ŵ(t) = [ŵ_1(t), ŵ_2(t), ..., ŵ_p(t)]^T – the weights vector of the neural model identifying the real process (on-line tuneable), and Ŵ'(t) = [ŵ'_1(t), ŵ'_2(t), ..., ŵ'_p(t)]^T – the neural controller's weights vector. The calculation of the controller's weights Ŵ'(t) is done by a supplementary calculus block, starting from the weights of the process neural model Ŵ(t); the indirect character of the adaptation thus becomes obvious (first the weights of the neural process model are estimated and, based on them, the neural controller weights are computed). Further, both the structure and the functions implemented by each component block of the neural-adaptive control system (depicted in figure 2) will be presented. In order to outline the functions implemented by each block, the controlled process is supposed to be described by the linear equation (2) – an ARMA discrete model:

A(z^{-1}) y(t) = z^{-1} B(z^{-1}) u(t)   (2)

where: A(z^{-1}) = 1 + a_1 z^{-1} + a_2 z^{-2} + ... + a_n z^{-n}
B(z^{-1}) = b_0 + b_1 z^{-1} + b_2 z^{-2} + ... + b_m z^{-m}
y(t) – process output; u(t) – process input; z^{-1} – one-step delay operator (z^{-1} y(t) = y(t-1)); t = 0, 1, 2, ... – discrete time. The following requirements are assumed: the maximum degrees (n, m) of the polynomials A(z^{-1}) and B(z^{-1}) are known; the polynomial B(z^{-1}) is stable; the coefficient b_0 ≠ 0.
3.1 Process's neural model

The process's neural model weights estimation subsystem consists of two component blocks treated as a unit: the neural identified model of the process and the weights adaptation mechanism. The neural model of the process contains an ADALINE neuron, whose role is to learn on-line the real process dynamics through a continuous tuning of the neuron weights by the adaptation mechanism. The input vector of the neural model is a vector of measurements of the real process inputs and output. Such a measurements vector (also called regression vector) contains the previous values of the process inputs and output required to compute the current output. Using the process model described by relation (2), the process output at moment t results:

y(t) = -a_1 y(t-1) - a_2 y(t-2) - ... - a_n y(t-n) + b_0 u(t-1) + b_1 u(t-2) + ... + b_m u(t-m-1)   (3)

Developed by Widrow and Hoff, the delta rule, also called the Least Mean Square (LMS) method, is one of the most commonly used learning rules for training ADALINE neural networks (Widrow and Stearns, 1985):

Ŵ(t) = Ŵ(t-1) + α e(t) X(t-1) / (ε + X^T(t-1) X(t-1))   (4)

where: Ŵ(t), Ŵ(t-1) – weights vectors at moments t and t-1, respectively; X(t) – inputs (measurements) vector; ε – a close-to-zero constant, introduced to avoid a null denominator; α – reduction factor or learning parameter (constant or variable in the interval (0…2)); e(t) = y(t) - ŷ(t) – learning error (the difference between the process output and the neural model output). Relation (4) practically represents a recursive parameter estimation algorithm. The two component blocks compose a neural estimator, whose structure is presented in figure 3.

Fig. 3. Neural estimator

Taking into consideration the measurement vector dimension n+m+1 (relation (3)), it results that the estimated weights vector has the same size:

Ŵ(t) = [ŵ_1(t), ŵ_2(t), ..., ŵ_n(t), ŵ_{n+1}(t), ŵ_{n+2}(t), ..., ŵ_{n+m+1}(t)]^T   (5)
and the neural model output is given by relation:
ŷ(t) = -ŵ_1 y(t-1) - ŵ_2 y(t-2) - ... - ŵ_n y(t-n) + ŵ_{n+1} u(t-1) + ŵ_{n+2} u(t-2) + ... + ŵ_{n+m+1} u(t-m-1)   (6)
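A compact sketch of the neural estimator described by relations (3)-(6) is given below (Python, illustrative only; the class and variable names are assumptions of this sketch). The regression vector collects the past outputs and inputs of relation (3), with the minus signs of relation (6) folded into the regressor (an implementation choice made here), the weights are adapted with the normalized delta rule (4), and the model output is computed as in relation (6).

    import numpy as np

    class AdalineEstimator:
        """Neural model of the process (relations (3)-(6)), trained with the delta rule (4)."""

        def __init__(self, n, m, alpha=0.2, eps=1e-6, w0=None):
            self.n, self.m = n, m
            self.alpha, self.eps = alpha, eps
            # weights vector of dimension n+m+1 (relation (5)); w0 plays the role of W(0)
            self.w = np.zeros(n + m + 1) if w0 is None else np.asarray(w0, dtype=float)

        def regressor(self, y_past, u_past):
            # X(t-1) = [-y(t-1),...,-y(t-n), u(t-1),...,u(t-m-1)], so that y_hat = w^T X
            return np.concatenate((-np.asarray(y_past[:self.n], dtype=float),
                                   np.asarray(u_past[:self.m + 1], dtype=float)))

        def predict(self, x):
            # relation (6): neural model output
            return float(self.w @ x)

        def update(self, y, x):
            # relation (4): normalized LMS (Widrow-Hoff) weight adaptation
            e = y - self.predict(x)                       # learning error e(t)
            self.w += self.alpha * e * x / (self.eps + float(x @ x))
            return e

One training step of the neural estimator then amounts to est.update(y_measured, est.regressor(past_outputs, past_inputs)), applied at every sampling instant.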
3.2 Neural controller and calculus block

The starting point of the adaptive neural controller design is the linear discrete model of the controlled process (relation (2)). The design methodology supposes determining an inverse model of the process (Marei et al., 2004). The neural controller structure contains one ADALINE neuron (figure 4), whose weights (the controller parameters) are adapted on-line. At moment t, the controller's input vector has the following form:
X'(t) = [y*(t+1), y(t), ..., y(t-n+1), u(t-1), u(t-2), ..., u(t-m)]^T   (7)
and the weights vector is described by the following relation:
Ŵ'(t) = [ŵ'_1(t), ŵ'_2(t), ..., ŵ'_n(t), ŵ'_{n+1}(t), ŵ'_{n+2}(t), ..., ŵ'_{n+m+1}(t)]^T   (8)
Taking into consideration relations (7) and (8), at moment t the neural controller output is:
u(t) = ŵ'_1 y*(t+1) + ŵ'_2 y(t) + ... + ŵ'_n y(t-n+2) + ŵ'_{n+1} y(t-n+1) + ŵ'_{n+2} u(t-1) + ... + ŵ'_{n+m+1} u(t-m)   (9)
where y*(t+1) is the (known) reference value for moment t+1.
Fig. 4. Neural controller

The neural controller weights vector is obtained by processing the weights of the neural model. A calculus block implements this processing operation, described by relation (10):

Ŵ'(t) = (1/ŵ_{n+1}(t)) [1, ŵ_1(t), ŵ_2(t), ..., ŵ_n(t), ŵ_{n+2}(t), ..., ŵ_{n+m+1}(t)]^T   (10)

Comparing relations (8) and (10), the explicit relations (11) between the computed neural controller weights and the estimated weights of the process neural model can be written:

ŵ'_1(t) = 1/ŵ_{n+1}(t)
ŵ'_2(t) = ŵ_1(t)/ŵ_{n+1}(t)
ŵ'_3(t) = ŵ_2(t)/ŵ_{n+1}(t)
...
ŵ'_{n+1}(t) = ŵ_n(t)/ŵ_{n+1}(t)
ŵ'_{n+2}(t) = ŵ_{n+2}(t)/ŵ_{n+1}(t)
...
ŵ'_{n+m+1}(t) = ŵ_{n+m+1}(t)/ŵ_{n+1}(t)   (11)
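The calculus block of relations (10)-(11) and the control law (9) can be sketched as follows (Python, illustrative only; the relations are applied exactly as written above, and the function names are assumptions of this sketch).

    import numpy as np

    def controller_weights(w, n, m):
        # relations (10)-(11): controller weights W'(t) derived from the model weights W(t)
        b0_hat = w[n]                      # w_{n+1}(t), the weight at the denominator
        wp = np.empty(n + m + 1)
        wp[0] = 1.0 / b0_hat               # w'_1 = 1 / w_{n+1}
        wp[1:n + 1] = w[:n] / b0_hat       # w'_2 ... w'_{n+1} = w_1 ... w_n / w_{n+1}
        wp[n + 1:] = w[n + 1:] / b0_hat    # w'_{n+2} ... w'_{n+m+1} = w_{n+2} ... w_{n+m+1} / w_{n+1}
        return wp

    def control_law(wp, y_ref_next, y_past, u_past):
        # relation (9): u(t) = W'(t)^T [y*(t+1), y(t),...,y(t-n+1), u(t-1),...,u(t-m)]
        x_c = np.concatenate(([y_ref_next], np.asarray(y_past, dtype=float),
                              np.asarray(u_past, dtype=float)))
        return float(wp @ x_c)

With y_past holding the n most recent process outputs and u_past the m most recent controller outputs, this reproduces the indirect adaptation scheme of figure 2: estimate the model weights, map them through relations (10)-(11), then evaluate relation (9).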
4. NEURAL-ADAPTIVE CONTROL APPLIED TO A SYNCHRONOUS GENERATOR

Particularising the neural-adaptive control structure presented in the previous paragraph for the case of a synchronous generator connected to a power system through a transmission grid requires prior information regarding the controlled plant. The specialized literature states that such a process can be fully described by a 6th order linearized model (Dao and Heydt, 1983; Filip and Prostean, 2005). Based on simplifying assumptions regarding the generator operation, the model order can be reduced to 4, resulting in n = 4, m = 3 for the polynomials A(z^{-1}) and B(z^{-1}). The number of ADALINE network inputs is practically established on this consideration, and therefore also the number of network weights that must be adapted on-line. The control law implemented by the neural controller (general relation (9)) becomes:

u(t) = ŵ'_1 y*(t+1) + ŵ'_2 y(t) + ŵ'_3 y(t-1) + ŵ'_4 y(t-2) + ŵ'_5 y(t-3) + ŵ'_6 u(t-1) + ŵ'_7 u(t-2) + ŵ'_8 u(t-3)   (12)

Taking into consideration relations (11), which describe the controller weights, particularized for the considered plant the neural controller output results:

u(t) = (1/ŵ_5) [y*(t+1) + ŵ_1 y(t) + ŵ_2 y(t-1) + ŵ_3 y(t-2) + ŵ_4 y(t-3) + ŵ_6 u(t-1) + ŵ_7 u(t-2) + ŵ_8 u(t-3)]   (13)

This relation practically implements a time-varying inverse dynamics of the controlled plant (Ardalani et al., 2005). The weight ŵ_5 (as a controller parameter) appears at the denominator of the control law. Its time variation is very important for the controller output dynamics and, implicitly, for the dynamics and performance of the entire adaptive control system. Analysing relation (4), which defines the Widrow-Hoff rule used to estimate the weights of the process neural model, it can be concluded, for the moment, that the only tuning parameter of the control structure is α ∈ (0...2). In addition, the Widrow-Hoff estimator being a locally convergent recursive algorithm, the initial value of the weights vector Ŵ(0), required for the initialisation of the estimation algorithm, has an important influence on the adaptive control system performance (Yang et al., 2000). Customizing for the considered process, a possible initial form of the weights vector is Ŵ(0) = [0 0 0 0 ŵ_5(0) 0 0 0]^T, where ŵ_5(0) ≠ 0. The initial value ŵ_5(0) significantly affects the controller output characteristics. Regarding the ŵ_5 weight initialisation methodology (validated through the simulation studies presented in the next paragraph), the following considerations can be drawn:
- A too high initial value chosen for ŵ_5 leads to a controller output with a slow dynamic and, as a consequence, a long control time.
- A too small initial value chosen for ŵ_5 leads to an increased control dynamic, so the controller output variance can be too high and the control system can become unstable.
For the neural network training process, the literature recommends that the initial weight values used to start up the learning process be relatively small, belonging to the [-1…1] interval (Nerrand et al., 1994). For the ADALINE neuron that implements the process model (identifying the plant), a convenient initial value for the weight ŵ_5 is ŵ_5(0) ∈ (0...1]. A first choice is recommended to be ŵ_5(0) = 1, and based on the control system
performance (especially analysing the control time), the value of ŵ_5(0) is then successively reduced (the whole selection process being performed off-line). Therefore, together with the reduction factor α, the initial value of the ŵ_5 weight becomes the second neural controller parameter that must be properly chosen before the training process start-up. The implementation of the neural-adaptive control system was made in Simulink.

5. SIMULATION STUDIES OF THE ADAPTIVE CONTROL BASED ON ADALINE

A set of case studies is carried out, with application to the excitation control of a synchronous generator: a nonlinear process with complex dynamics, whose characteristics fluctuate with varying loads and varying generation schedules, and whose operating point changes as well. The goal of these studies is to determine an optimal combination of initial values for some fixed parameters of the neural controller (reduction factor, weights), providing good control performance for all the considered functioning regimes of the synchronous generator. The conducted simulation studies highlight aspects regarding the controller's tuning and design, as well as the controller's behaviour when embedded into the excitation control system of a synchronous generator. A first issue is the optimal initial tuning of the neural controller, by a proper choice of two parameter values: the reduction factor α (learning algorithm gain) and the initial value of the ŵ_5 weight, which is placed at the denominator of the control law. A choice of the neural controller's order (the n and m coefficients) is also required, taking into consideration the order of a linearised model of the controlled plant. A study regarding a possible reduction of the controller's order will also be done, such a simplified structure leading to a smaller computational demand and obviously to a faster response of the controller. On the other side, the controller's behaviour and performance will be analysed in two specific functioning regimes of the synchronous generator: the reactive power load regime (corresponding to a set point variation) and the active power load regime (as a result of mechanical torque changes). Tests that highlight some aspects regarding the learning phase of the neural network are also conducted. The goal of these studies is to find a proper pair of values (α, ŵ_5(0)) (reduction factor and initial value of the weight ŵ_5), generally valid for all the considered functioning regimes of the synchronous generator.

The first case assumes the process parameters unknown, highlighting some aspects concerning the learning phase specific to a neural-adaptive control. Starting with null operating conditions (step variations of the set point from zero to an imposed value), the controlled process output and the variation of the estimated weights are analysed. The chosen values for the neural controller parameters are: α = 0.2, ŵ_5(0) = 0.2. Figures 5.a, b, c describe the controlled process output, the controller output and the variation of the neural weights. During the learning phase (first step variation of the set point), the convergence of the neural weights to a stationary set of values and a high overshoot of the controlled output can be observed. For the next step variations of the set point (at time moments t = 50, 100, …), the controlled output overshoot, the controller output variance and the control time decrease significantly (the learning phase identifying a process neural model being already finished). All the variations considered in the following study cases are expressed in per unit (p.u.).

Fig. 5.a. Process output variation (terminal voltage)

Fig. 5.b. Controller output (excitation voltage)

Fig. 5.c. Controller weights

For the next study cases we consider, as initial operating condition, a trained neural network implementing the neural controller (using as initial
weights, excepting ŵ_5(0), the values estimated for a specific set point in the previous learning phase). It will be shown that different initial values chosen for the weight ŵ_5(0) lead to different control performance, due to the local convergence of the delta rule algorithm: starting from different initial values of the weights, the estimated weights converge to different final values. The value of the reduction factor α also affects the control performance. A set point variation (equivalent to a reactive power variation regime), forcing a tracking control, and a mechanical torque change considered as an external disturbance (equivalent to an active power variation regime) will be taken into consideration in the next study cases.
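To make the sequencing of these study cases concrete, a structural sketch of one simulation loop is given below, reusing the AdalineEstimator, controller_weights and control_law sketches from the previous sections (Python; the low-order stable test plant, its coefficients and the step instants are assumptions standing in for the Simulink generator model, while n = 4, m = 3, α = 0.1 and ŵ_5(0) = 0.2 follow one of the parameter pairs discussed in the text; the sketch illustrates the loop structure, not the reported performance).

    import numpy as np

    n, m, steps = 4, 3, 200
    est = AdalineEstimator(n, m, alpha=0.1,
                           w0=[0, 0, 0, 0, 0.2, 0, 0, 0])  # W(0) with w_5(0) = 0.2
    y_hist = [0.0] * n          # y(t-1), y(t-2), ...
    u_hist = [0.0] * (m + 1)    # u(t-1), u(t-2), ...
    y = 0.0

    for t in range(steps):
        y_ref = 1.0 if t >= 10 else 0.0               # step variation of the set point
        # controller: weights from the current model estimate, then relation (9)
        wp = controller_weights(est.w, n, m)
        u = control_law(wp, y_ref, [y] + y_hist[:n - 1], u_hist[:m])
        # hypothetical stable second-order plant standing in for the generator model
        y_next = 0.8 * y - 0.1 * y_hist[0] + 0.5 * u
        # estimator: one delta-rule step on the new measurement
        x = est.regressor([y] + y_hist[:n - 1], [u] + u_hist[:m])
        est.update(y_next, x)
        # shift the measurement histories and advance the loop
        y_hist = [y] + y_hist[:n - 1]
        u_hist = [u] + u_hist[:m]
        y = y_next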
5.1 Active power load regime
A relative step variation of the mechanical torque (0.2 p.u.) occurs at time moment t = 10. The following study cases are performed, taking into consideration different pairs of values for the reduction factor and the initial value of the weight ŵ_5(0). First, we consider a fixed value ŵ_5(0) = 0.2 and two different values α = 0.2 and α = 0.1, respectively. The simulation results are depicted in figure 6 for the pair (α = 0.2, ŵ_5(0) = 0.2) and in figure 7 for the pair (α = 0.1, ŵ_5(0) = 0.2).

Fig. 6. Process output variation (α = 0.2, ŵ_5(0) = 0.2)

Fig. 7. Process output variation (α = 0.1, ŵ_5(0) = 0.2)

Comparatively analysing the controlled process output in these two cases (fig. 6, 7), a smaller value of the reduction factor leads to a smaller output overshoot, assuring better control performance. The next case considers a fixed value α = 0.1 and two different values, ŵ_5(0) = 1 and ŵ_5(0) = 0.11, respectively. The results are depicted in fig. 8 and fig. 9. For ŵ_5(0) = 1 a smaller output overshoot can be observed, but an unacceptably long control time. Better performance regarding the control time is obtained in the second case, using the smaller initial value ŵ_5(0) = 0.11, but an increase of the controlled output overshoot occurs. Also, compared with the result obtained for the pair (α = 0.1, ŵ_5(0) = 0.2), presented in fig. 7, for the same control time the output overshoot is higher. Therefore, if the chosen performance indicators are the control time and the output overshoot, for this functioning regime the pair (α = 0.1, ŵ_5(0) = 0.2) assures better performance.

Fig. 8. Process output variation (α = 0.1, ŵ_5(0) = 1)

Fig. 9. Process output variation (α = 0.1, ŵ_5(0) = 0.11)

5.2 Reactive power variation regime

Starting from a fixed nominal value of the set point (the terminal voltage of the synchronous generator), at time moments 10 and 40 a positive and a negative
reference change occurs (0.05 p.u. step variation). For a first case, we choose the value of the reduction factor α = 0.1 and, keeping all the other estimated weights previously calculated, we impose for the weight ŵ_5 the initialization value ŵ_5(0) = 0.2. The simulation results are depicted in figure 10.

Fig. 10. Process output variation (α = 0.1, ŵ_5(0) = 0.2)

Changing the value to ŵ_5(0) = 0.11, a significant improvement of the control performance can be observed: the overshoot and especially the control time decrease (fig. 11, compared with fig. 10). The reason is an increased dynamic of the controller, directly influenced by the initial value of the parameter ŵ_5, which practically represents the gain of the controller output. Many other simulation studies were conducted, with the same results, the paper presenting only the most significant ones from the performance point of view.

Fig. 11. Process output variation (α = 0.1, ŵ_5(0) = 0.11)

As a conclusion, for such a complex nonlinear process, if the main functionality of the control system is reference tracking (reactive power regime), an optimal pair (α, ŵ_5(0)) can be determined. If the main functionality of the control system is to reject external disturbances (active power regime), another optimal pair (α, ŵ_5(0)) provides the best result. Therefore, a compromise between these two functioning regimes must be achieved. In this case, for the chosen application (excitation control of a synchronous generator), the pair of values (α = 0.1, ŵ_5(0) = 0.2) can be considered to make such a compromise concerning the control performance for these two functioning regimes. For all the presented study cases, the neural-adaptive control results are comparable in performance with the results of a classical self-tuning control strategy (Dao and Heydt, 1983; Filip et al., 2006a).

The last study regards a possible reduction of the controller's order, such a simplified control structure leading to a smaller computational demand. For the pair of values (α = 0.1, ŵ_5(0) = 0.2), figures 12 and 13 describe the control system responses for a 4th order controller (n = 4, m = 3) and for a controller reduced to 2nd order (n = 2, m = 1), superposed on the same graphics. It can be observed that the results are very similar, even the 2nd order controller providing satisfactory control performance, due to the adaptive mechanism which tracks all the functioning changes of the controlled process.

Fig. 12. Process output variation (active power load)

Fig. 13. Process output variation (reactive power load)

6. CONCLUSION

This paper presents an application of the adaptive linear neuron (ADALINE), combined with the delta rule algorithm used to train such a neural network. The goal was to show that a neural-adaptive controller based on ADALINE is able to control a nonlinear complex process such as a synchronous generator. Identifying a process model by training a neural network (the first
ADALINE neuron), the tuning strategy practically estimates on-line the weights of a second ADALINE (using the delta rule algorithm), which implements the core of the neural controller. The great advantage of this adaptive controller is provided by the ADALINE neural network, which presents a lower complexity and a shorter training time, being appropriate for the on-line control of fast nonlinear processes with complex dynamics and permanent disturbances. To maintain optimal performance, the control system must continuously self-tune to the process changes, a task very well accomplished by such a neural network. For the excitation control of a synchronous generator, solving the problem of finding optimal initialization values for the pre-tuned controller parameters (the reduction factor and the initial value of the weight placed at the denominator of the control law) can provide very good control performance, even for different functioning regimes.

REFERENCES

Ardalani, N., Khoogar, A., Rophi, H. (2005). A Comparison of Adaline and MLP Neural Network-based Predictors in SIR Estimation in Mobile DS/CDMA Systems, Transactions on Engineering and Technology, Vol. 9, Nov. 2005, pp. 145-150.
Benedito, D.B., Eduardo, L. (1998). A New Approach to Artificial Neural Networks, IEEE Transactions on Neural Networks, Vol. 9, No. 6, November 1998, pp. 1167-1179.
Dao, X., Heydt, G.T. (1983). Self-Tuning Controller for Generator Excitation Control, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-102, No. 6, June 1983.
Derrick, H.N., Widrow, B. (1990). Neural Networks for Self-learning Control Systems, IEEE Control Systems Magazine, April 1990, pp. 18-23.
Filip, I., Prostean, O., Balas, V., Prostean, G. (2006a). Design and Simulation of a Neural Controller for Excitation Control of a Synchronous Generator. Proceedings of the 6th International Conference on Recent Advances in Soft Computing (RASC 2006), Canterbury, United Kingdom, July 10-12, 2006, pp. 361-366.
Filip, I., Prostean, O., Szeidert, I., Prostean, G., Vasar, C. (2006b). Self-tuning Control Using External Integrator Loop for a Synchronous Generator Excitation System. Proceedings of the 11th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2006), Prague, Sept. 2006, pp. 997-1000.
Filip, I., Prostean, O. (2005). Modelling, Parameters Estimation and Adaptive Control of a Synchronous Generator. Control Engineering and Applied Informatics Journal (CEAI), Vol. 7, No. 1, Bucharest, 2005, pp. 20-30.
Hunt, K.J., Sbarbaro, D., Gawthrop, P.J. (1992). Neural Networks for Control Systems - A Survey, Automatica, Vol. 28, 1992, pp. 1083-1112.
Marei, M.I., El-Saadany, E.F., Salama, M.M.A. (2004). Estimation Techniques for Voltage Flicker Envelope Tracking, Electric Power Systems Research, Vol. 70, Issue 1, June 2004, pp. 30-37.
Nerrand, O., Roussel-Ragot, P., Urbani, D., Personnaz, L., Dreyfus, G. (1994). Training Recurrent Neural Networks: Why and How? An Illustration in Dynamical Process Modeling, IEEE Transactions on Neural Networks, Vol. 5, No. 2, 1994.
Widrow, B., Stearns, S. (1985). Adaptive Signal Processing. Prentice Hall, 1985.
Yang, J.G., Wang, K., Zhang, J. (2000). A Real-Time Adaptive Control Algorithm Using Neural Nets with Perturbation, Journal of Zhejiang University Science, Vol. 1, No. 1, pp. 61-65.