THEORY AND APPLICATIONS OF ADAPTIVE REGULATORS BASED ON RECURSIVE PARAMETER ESTIMATION
K. J. Astrom L. Ljung U. Borisson B. Wittenmark Dept. of Automatic Control, Lund Inst. of Technology, Lund, Sweden
ABSTRACT
The motivation for this work has been to simplify the design of regulators for industrial processes. The regulators designed can be regarded as self-tuning regulators. They will serve as a substitute for the time-consuming process of plant experiments, parameter estimation and control design. The regulators can also be used as adaptive regulators for processes with slowly varying parameters. The paper describes such regulators, outlines their theory and reviews industrial applications.

1. INTRODUCTION

Stochastic control theory has proved to be a useful tool for the design of controllers for industrial processes. In many practical applications it is, however, difficult to determine the parameters of the controller, since the dynamics of the process and its disturbances are unknown. The parameters of the process thus have to be estimated. For stationary processes it is possible to determine the unknown parameters through system identification. The experiment and the evaluation can, however, be rather time consuming. It could thus be desirable to have regulators which tune their parameters on-line. The motivation for this work has been to design such regulators for control of industrial processes. The regulators can also be used as adaptive regulators for processes with slowly varying parameters. The regulators discussed can be thought of as composed of three parts: a parameter estimator, a linear controller and a block which determines the controller parameters from the estimated parameters. There are many different possibilities depending on the control and estimation scheme used. Regulators of this type have been considered before. The contribution of this paper is to review analysis which can be used to understand how the regulators work, and to give examples of applications of the algorithms to the control of real industrial processes. The paper is based on experiences from using the algorithms to control several industrial processes including a paper machine, an ore crusher, a heat exchanger and a super tanker.

The regulators considered are described by the block diagram of Fig. 1. The regulator can be thought of as being composed of three parts: a parameter estimator (block 1), a controller (block 3) and a third part (block 2), which relates the controller parameters to the estimated parameters. The parameter estimator acts on the process inputs and outputs and produces estimates of certain process parameters. The controller is simply a linear filter characterized, for example, by the coefficients of its transfer function. These coefficients are in general a non-linear function of the estimated parameters. This function is frequently not injective. This way of describing the regulator is convenient when explaining how it works. The subdivision is, however, largely arbitrary, and the regulator can equally well be regarded simply as one non-linear regulator. The functions of the blocks 1, 2 and 3 are each simple, but the interconnection of these blocks represents a system with a rather complex input-output relation. The partitioning of the regulator as indicated in Fig. 1 is also convenient from the point of view of implementation, because the parameter estimator and the controller parameter calculation can often be time shared between several loops.
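The interconnection can be pictured with a structural sketch in Python (stubs and names are purely illustrative, not the authors' implementation); the three functions correspond to blocks 1-3 of Fig. 1.

    import numpy as np

    def estimate(theta, u_prev, y_new):
        """Block 1: recursive parameter estimator (stub)."""
        return theta                       # a real estimator updates theta from (u, y)

    def design(theta):
        """Block 2: map estimated process parameters to controller parameters;
        in general a non-linear, often non-injective map (stub)."""
        return theta

    def control(nu, y_new):
        """Block 3: linear feedback filter characterized by nu (stub)."""
        return -float(nu[0]) * y_new

    rng = np.random.default_rng(0)
    theta, u, y = np.zeros(1), 0.0, 0.0
    for _ in range(10):                    # one pass of the loop per sampling instant
        y = 0.9 * y + u + rng.standard_normal()   # stand-in for the real process
        theta = estimate(theta, u, y)
        nu = design(theta)
        u = control(nu, y)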
2. A CLASS OF REGULATORS
There are many different ways to estimate the parameters θ and to calculate the regulator parameters ν. This leads to different types of regulators. The class of regulators considered will now be described in more detail.

Process Models

Some of the results are valid without any specific assumptions about the process model. Other results are based on the assumption that the process is actually governed by the single input - single output model
    A(q^{-1}) y(t) = B(q^{-1}) u(t-k) + C(q^{-1}) e(t)                     (2.1)

where A(q^{-1}), B(q^{-1}) and C(q^{-1}) are polynomials in the backward shift operator q^{-1} and {e(t)} is white noise.
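As a small illustration of the model class (2.1), the following sketch simulates such a process for first-order polynomials; the numerical coefficients are assumed for the example only.

    import numpy as np

    # Simulation of a process of the form (2.1) with A = 1 + a1 q^-1, B = b1,
    # C = 1 + c1 q^-1 and delay k = 1 (assumed example values).
    rng = np.random.default_rng(0)
    a1, b1, c1, k = -0.8, 1.0, 0.5, 1
    N = 500
    e = rng.standard_normal(N)        # {e(t)}: white noise
    u = rng.standard_normal(N)        # some known input sequence
    y = np.zeros(N)
    for t in range(k + 1, N):
        # (1 + a1 q^-1) y(t) = b1 u(t-k) + (1 + c1 q^-1) e(t)
        y[t] = -a1 * y[t - 1] + b1 * u[t - k] + e[t] + c1 * e[t - 1]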
Parameter Estimation
The regulators discussed in this paper are all based on different recursive schemes of estimating the parameters of the prediction model

    y(t) = -A(q^{-1}) y(t-1) + B(q^{-1}) u(t-1) + C(q^{-1}) ε(t-1)         (2.2)

where u is the process input, y the process output and ε the prediction errors. A(q^{-1}), B(q^{-1}) and C(q^{-1}) are polynomials in the backward shift operator q^{-1}, whose coefficients are the unknown parameters. Introducing the vector

    θ = [α_1 ... α_p  β_1 ... β_r  γ_1 ... γ_s]^T                           (2.3)

whose elements are the unknown parameters, and a corresponding vector φ(t) of delayed outputs, inputs and prediction errors, the prediction model can be written y(t) = φ^T(t)θ + ε(t). The prediction error is then

    ε(t,θ) = y(t) - φ^T(t)θ                                                 (2.4)

In some applications it is of interest to consider some parameters as being known. This is easily done by slight modifications of the vectors φ(t), θ and the equation (2.4). See e.g. Astrom and Wittenmark (1973). All estimation schemes considered in this paper are described by the equations

    θ(t+1) = θ(t) + μ(t) S(t+1) φ(t) ε(t, θ(t))                             (2.5)

    S^{-1}(t+1) = S^{-1}(t) + μ(t) [φ(t) φ^T(t) - S^{-1}(t)]                (2.6)

The scalar function μ expresses the way in which past data are discounted. Standard least squares, where equal weight is given to all measurements, corresponds to μ(t) = 1/t. The least squares method with exponential forgetting of past data corresponds to μ(t) = μ_0, where the forgetting factor is 1 - μ_0. A recursive equation can also be given for the matrix S(t) itself. This equation is more complicated than (2.6). The more complicated version must, of course, be used when S(t) is singular. The vector φ(t) depends on the estimation scheme. Three different cases are considered.

Least Squares LS

In this case the parameters γ_i are not estimated. Furthermore

    φ(t) = [-y(t-1) ... -y(t-p)  u(t-1) ... u(t-r)]^T                       (2.7)

Extended Least Squares ELS

In this case the parameters γ_i are also estimated, Young (1970). The vector φ(t) is defined by (2.7), extended with the prediction errors,

    φ(t) = [-y(t-1) ... -y(t-p)  u(t-1) ... u(t-r)  ε(t-1) ... ε(t-s)]^T

and the matrix S(t) by (2.6).

Recursive Maximum Likelihood RML

In this case the vector φ(t) is defined by

    φ(t) = -grad ε(t) = -[∂ε(t)/∂α_1 ... ∂ε(t)/∂α_p  ∂ε(t)/∂β_1 ... ∂ε(t)/∂β_r  ∂ε(t)/∂γ_1 ... ∂ε(t)/∂γ_s]^T          (2.8)

where the derivatives are given by the usual sensitivity equations. Notice that this case is closely related to the model reference method. Using the performance index

    V = ∫_0^∞ ε^2(t) dt

and adjusting the parameters in the direction of the negative gradient, i.e. "the MIT rule", Whitaker et al (1958), the parameter adjustment becomes

    dθ/dt = -k ε(t) grad ε(t)                                               (2.9)

which is similar to (2.5). The analysis of model reference adaptive regulators in a stochastic environment is thus included as a special case.

Many other recursive estimation schemes can also be represented by (2.5) through appropriate definition of S(t) and φ(t); for example, stochastic approximation, generalized least squares, instrumental variables, etc. The analysis of adaptive regulators obtained by using these methods is completely analogous to the cases discussed in this paper. A detailed discussion of recursive estimation methods is given in Soderstrom-Ljung-Gustavsson (1974), which also contains many references.
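As an illustration of how the recursion (2.5)-(2.6) can be organized in the extended least squares case, here is a minimal sketch; the class name, initial values and the covariance form of the update are illustrative choices, not the paper's implementation, and p, r, s >= 1 is assumed.

    import numpy as np

    class RecursiveELS:
        """Recursive estimation of theta = [alpha_1..alpha_p, beta_1..beta_r,
        gamma_1..gamma_s] in the prediction model (2.2), extended least
        squares case."""

        def __init__(self, p, r, s, forgetting=1.0):
            n = p + r + s
            self.lam = forgetting            # corresponds to 1 - mu_0 in the text
            self.theta = np.zeros(n)
            self.S = 1e3 * np.eye(n)         # large initial "covariance"
            self.y_hist = np.zeros(p)        # y(t-1) ... y(t-p)
            self.u_hist = np.zeros(r)        # u(t-1) ... u(t-r)
            self.e_hist = np.zeros(s)        # eps(t-1) ... eps(t-s)

        def update(self, u_t, y_t):
            # regression vector built from old outputs, inputs and prediction errors
            phi = np.concatenate([-self.y_hist, self.u_hist, self.e_hist])
            eps = y_t - phi @ self.theta                    # prediction error, cf. (2.4)
            gain = self.S @ phi / (self.lam + phi @ self.S @ phi)
            self.theta = self.theta + gain * eps            # parameter update, cf. (2.5)
            self.S = (self.S - np.outer(gain, phi @ self.S)) / self.lam
            # shift the regressor histories one step
            self.y_hist = np.concatenate([[y_t], self.y_hist[:-1]])
            self.u_hist = np.concatenate([[u_t], self.u_hist[:-1]])
            self.e_hist = np.concatenate([[eps], self.e_hist[:-1]])
            return self.theta, eps

    # usage: est = RecursiveELS(p=1, r=1, s=1, forgetting=0.99)
    #        theta, eps = est.update(u_t, y_t)   # called once per sampling instant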
Control Strategies

When the prediction model and its parameters are known, there are many possibilities to determine the control strategies. In this paper controllers of the form

    u(t) = [G(q^{-1}) / F(q^{-1})] y(t)                                     (2.10)

where F(q^{-1}) and G(q^{-1}) are polynomials whose coefficients are elements of the vector ν, will be considered. Furthermore, this paper discusses only non-dual strategies.

Three different types of controllers are considered: dead beat (DB), minimum variance (MV) and a linear quadratic (LQ) controller which minimizes the criterion

    V = lim_{N→∞} E (1/N) Σ_{i=1}^{N} [y^2(i) + ρ u^2(i)]                   (2.11)

These control strategies are standard. See e.g. Astrom (1970). The case LQ includes DB and MV as special cases. Further, if C = 0 in (2.2) the controllers DB and MV are identical.
The complexity of the relation between the controller parameters and the estimated parameters varies significantly in the different cases, from no more than simple substitutions for DB and MV to solution of steady state Riccati equations or spectral factorization for LQ. The analysis which follows is not restricted to the cases given above. The crucial features are that the estimates can be characterized by equations like (2.5) or (2.9). The regulators discussed are not new; they have been studied many times before. The structure of Fig. 1 was a popular starting point for many of the early approaches to adaptive control. Regulators of this type were investigated in the late fifties and early sixties under the name of self-optimizing regulators, see Kalman (1958). Regulators of this structure are obtained from separation hypotheses or from model reference arguments. Other references where the same type of regulators are discussed are Peterka (1970), Astrom-Wittenmark (1971), Peterka-Astrom (1973). Notice that the regulators discussed include adaptive prediction as a special case. This is easily seen from (2.2), which actually is a prediction model. Hence if B(q^{-1}) = 0 or if u(t) is a known signal, the model (2.2) reduces to a pure prediction model. This observation was first made in Wittenmark (1974). It will not be pursued further in this paper.
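The simple substitution in the MV case can be made concrete with a small sketch (illustrative code, not from the paper) for a first-order model with two steps of delay; the closed-form expressions follow from the standard identity C = A F + q^{-2} G (Astrom (1970)), and the same numbers reappear in Table 4.1 in Section 4.

    def mv_controller_first_order(a, b1, c):
        """Minimum variance controller parameters for the model
        y(t+1) + a y(t) = b1 u(t-1) + e(t+1) + c e(t)  (two steps of delay).
        From C = A F + q^{-2} G with F = 1 + f1 q^{-1}: f1 = c - a and
        G = -a(c - a); the control law is u(t) = g1 y(t) / (1 + f1 q^{-1})
        with g1 = -G/b1 = a(c - a)/b1."""
        f1 = c - a
        g1 = a * (c - a) / b1
        return f1, g1

    print(mv_controller_first_order(-0.95, 1.0, -0.7))   # (0.25, -0.2375); cf. Table 4.1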
3. ANALYSIS
The properties of the regulators discussed in the previous section will now be analysed. It is assumed that the process to be regulated is governed by (2.1). The process is time invariant. The analysis will basically deal with the case μ(t) → 0 as t → ∞, i.e. when it is assumed known that the parameters of (2.1) are constants. It is reasonable to assume that such stationary analysis is valid also in the case where the system parameters vary slowly in comparison with the system dynamics, and μ(t) → μ_0, where μ_0 is a small, positive number. This has also to a limited extent been demonstrated by simulation. It is, however, easy to find examples with drifting parameters where the regulators of the class discussed here do not perform well. See e.g. Astrom-Wittenmark (1971). The major problems of interest for analysis are:

o   Overall stability of the closed loop system
o   Convergence of the regulator
o   The properties of the possible limiting regulators.
The analysis is far from trivial because the closed loop system is a nonlinear, time varying stochastic system. Even if the recursive identification schemes used are well known, their convergence properties are largely unknown except for the least squares case. In the present case the input is also generated by a time varying feedback. This introduces additional difficulties. If the noise is correlated, C(q^{-1}) ≠ 1, the least squares estimates will be biased and the bias will depend on the feedback used. Consequently, even if the regulators discussed are motivated using the hypothesis of separation of identification and control, the analysis cannot be based on such an assumption. In the following the available analytical results will be summarized. Even if the results obtained so far are far from complete, they do give considerable insight.
Stability is perhaps the most important property from the point of view of applications. Lacking stability the regulators would be useless. There are many different ways to define and analyse stability for a stochastic system. The easiest approach is perhaps to consider small perturbations from an equilibrium and make a linear perturbation analysis. This is, however, not of great value from the practical point of view because the system may still depart from the region where the linearization is valid. A global stability concept which guarantees that the solution remains bounded with probability one is far more useful. This may still not be sufficient for practical purposes because the bounds obtained may be larger than can be accepted. Extensive simulations have indicated that the closed loop systems are stable in many circumstances. It is fairly difficult to show this formally. The special case of a regulator based on least squares estimation and minimum variance control applied to a minimum phase process is analysed in Ljung-Wittenmark (1974a). It is shown that under fairly weak conditions, w.p.1,
    lim sup_{N→∞} (1/N) Σ_{t=1}^{N} [y^2(t) + u^2(t)] < ∞                   (3.1)
This result is interesting because it implies that this particular regulator will stabilize any linear time invariant process provided some weak conditions are fulfilled. The stabilizing property (3.1) is for instance obtained if the controlled process is stable and if the input to the process is limited. Extensive simulations have indicated that many of the other regulators also have the overall stability property, although no formal proofs are yet available. It has also been found empirically that the regulators based on LS identification can recover more quickly after a large disturbance than the regulators based on ELS and RML identification.
Convergence analysis can either relate to convergence of the estimated parameters θ or convergence of the controller parameters ν. There are some rather powerful analytical results, which give considerable insight into the convergence properties. The key result is that convergence of the estimated
parameters is closely related to an ordinary differential equation. Introduce

    f(θ) = E φ(t) ε(t,θ)                                                    (3.2)

    g(θ) = E φ(t) φ^T(t)

The mathematical expectation is taken with respect to the distribution of {e(t)} when {y(t)}, {u(t)} and {ε(t)} are the stationary processes obtained from (2.1), (2.10) and (2.4), when the parameter θ in (2.4) and the regulator parameters ν(θ) are constant. Under certain assumptions the differential equations

    dθ/dτ = S f(θ)                                                          (3.3)

    d S^{-1}/dτ = g(θ) - S^{-1}                                             (3.4)

will describe the development of the estimates. Precise statements and proofs are given in Ljung (1974), Ljung-Wittenmark (1974a), Ljung-Soderstrom-Gustavsson (1974). The differential equations (3.3) and (3.4) have been used to show the convergence of the minimum variance regulator when C = 1 in (2.1) and when the LS method is used for the estimation. For C ≠ 1 the differential equations have been used to show that there exist systems for which the estimation does not converge.
In view of these results, the ODEs (3.3) and (3.4) play an important role when analysing the adaptive regulators. If S exists, the stationary solutions are given by

    f(θ) = 0                                                                (3.5)

Equation (3.5) will, together with the requirement that the linearized ODE be stable, give the possible convergence points. The global stability properties of the ODE can be investigated by integrating the equations. The differential equations are nonlinear, and it is frequently difficult to give closed form expressions of the right hand sides. Analysis is therefore often hard, but it is always possible to use numerical solutions. Examples are found in Wittenmark (1973), Astrom-Wittenmark (1974) and Ljung-Wittenmark (1974a, 1974b). It should also be emphasized that it is much easier to find the steady state solution of (3.3) and (3.4) than to find a possible limit point by simulating the stochastic system (2.1) with the adaptive regulator.

If the parameter estimates converge, the adaptive regulator will in the limit reduce to a constant parameter linear controller. The situation when this limiting controller is the same as the controller that could be designed if the process parameters were known a priori is particularly interesting. Regulators with this property are called self-tuning or self-adjusting regulators. If the prediction model (2.2) has sufficiently many parameters, the true parameters θ_0 will always be a stationary solution to (3.5) for regulators based on extended least squares and recursive maximum likelihood identification. For regulators based on least squares it must in addition be required that C(q^{-1}) = 1. Even if θ = θ_0 is a stationary solution there is no guarantee that θ = θ_0 is a globally stable solution. The solution is globally asymptotically stable for least squares estimation if {e(t)} is a sequence of uncorrelated random variables. See Ljung-Wittenmark (1974a). For extended least squares it is possible to construct examples where the solution θ = θ_0 is unstable. See Ljung-Soderstrom-Gustavsson (1974). The solution θ = θ_0 is stable for small perturbations if the recursive maximum likelihood method is used, and if the parameters in θ are identifiable. See Soderstrom-Ljung-Gustavsson (1974). There may also be other solutions to the equation (3.5) than θ = θ_0. It is not a trivial problem to find all stationary solutions because the equations are nonlinear. When the estimation algorithm LS is used and when the disturbances are correlated, the true parameter vector θ_0 is not a stationary solution of (3.5). For this case it is shown in Astrom-Wittenmark (1973) that if the number of parameters in the prediction model (2.2) is appropriate, then there is only one stationary solution θ* to the equation (3.5). The corresponding regulator parameters are such that ν(θ*) gives the minimum variance regulator. The regulator thus may be self-tuning in spite of the fact that the estimates are biased.
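To make the remark above about numerical solution of the ODEs concrete, here is a minimal sketch (illustrative code, not the authors' programs) of Euler integration of (3.3)-(3.4); it assumes that the right hand sides f(θ) and g(θ) can be evaluated, for instance as Monte Carlo estimates obtained by simulating the closed loop with the parameters held constant.

    import numpy as np

    def integrate_ode(f, g, theta0, S_inv0, tau_end, d_tau=0.01):
        """Euler integration of  d(theta)/d(tau) = S f(theta)  and
        d(S^-1)/d(tau) = g(theta) - S^-1,  cf. (3.3)-(3.4)."""
        theta = np.array(theta0, dtype=float)
        S_inv = np.array(S_inv0, dtype=float)
        for _ in range(int(tau_end / d_tau)):
            S = np.linalg.inv(S_inv)
            theta = theta + d_tau * S @ f(theta)
            S_inv = S_inv + d_tau * (g(theta) - S_inv)
        return theta

    # toy usage with made-up right hand sides, purely to show the mechanics:
    theta_end = integrate_ode(
        f=lambda th: -(th - np.array([-0.24, 0.25])),  # a single stationary point
        g=lambda th: np.eye(2),
        theta0=[0.0, 0.0], S_inv0=np.eye(2), tau_end=10.0)
    print(theta_end)   # approaches the stationary point, i.e. a solution of f(theta) = 0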
It is perhaps of more practical interest to consider the convergence of the regulator parameters, ν(θ). Since in many cases there are fewer regulator parameters than estimated parameters, it may happen that f(θ) vanishes and g(θ) is singular in a subspace, but the regulator converges. The present analysis has to be modified to cover such cases. The parameter estimates may converge to a particular point (which may depend on the realization), or they may meander. Preliminary analysis indicates that the parameter estimates in many cases actually do converge, but that the convergence rate is so small that the estimates from a practical point of view appear to meander.

4. SIMULATION EXAMPLES
The properties of the regulators will now be illustrated using simulation examples. A couple of different regulators based on different models and identification methods will be investigated. The processes to be controlled are given by

    y(t+1) + a y(t) = b_1 u(t-1) + e(t+1) + c e(t)                          (P1)

    y(t+1) + a y(t) = b_1 u(t-1) + b_2 u(t-2) + e(t+1) + c e(t)             (P2)

The numerical values are chosen as a = -0.95, b_1 = 1, b_2 = 2 and c = -0.7. The processes are selected in such a way that they illustrate the limitations of the different regulators. The process P1 is minimum phase and P2 is non-minimum phase. The criterion considered is to minimize the quadratic criterion (2.11) with ρ = 0. The optimal control laws for the two processes are given in Table 4.1. Three different structures of the prediction model (2.2) will be used:

    y(t) = -α_1 y(t-2) + u(t-2) + β_1 u(t-3) + β_2 u(t-4)                   (4.3)

    y(t) = -α_1 y(t-1) + β_1 u(t-2) + β_2 u(t-3)                            (4.4)

    y(t) = -α_1 y(t-1) + β_1 u(t-2) + β_2 u(t-3) + γ_1 ε(t-1)               (4.5)

The model (4.3) has the advantage that the estimated parameters are the same as the parameters in the minimum variance controller for the processes (P1) and (P2). The parameters of the models (4.3) and (4.4) can be determined by LS, but the model (4.5) requires ELS or RML since it contains prediction errors.
Table 4.1 - Optimal control laws for the processes P1 and P2.

    Process P1:
        Control law:    u(t) = g_1 y(t) / (1 + f_1 q^{-1})
        Parameters:     f_1 = c - a = 0.25
                        g_1 = a(c - a)/b_1 = -0.238
        Expected loss:  1.06 per step

    Process P2:
        Control law:    u(t) = g_1 y(t) / (1 + f_1 q^{-1} + f_2 q^{-2})
        Parameters:     f_1 = [b_1 + b_2(c - a)]/b_2 = 0.75
                        f_2 = (c - a)(b_1 - a b_2)/(b_2 - a b_1) = 0.246
                        g_1 = a(c - a)(b_1 - a b_2)/[b_2(b_2 - a b_1)] = -0.117
        Expected loss:  1.08 per step

The different regulators are defined by the following table.

Table 4.2 - Definition of the regulators.

    Regulator   Prediction model   Identification method   Control strategy
    REG1        (4.3)              LS                       MV
    REG2        (4.4)              LS                       MV
    REG3        (4.5)              ELS                      LQ
    REG4        (4.5)              RML                      LQ
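As a consistency check of the expected-loss entry for P1 (a sketch based on standard minimum variance arguments, Astrom (1970), not a computation reproduced from the paper): under minimum variance control the output equals the two-step prediction error of the disturbance, e(t) + (c - a)e(t-1), so with unit noise variance

    min E y^2(t) = 1 + (c - a)^2 = 1 + 0.25^2 = 1.0625 ≈ 1.06

which agrees with the value 1.06 per step in Table 4.1.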
For the process (P1) the parameter β_2 in the prediction models (4.3) and (4.5) is fixed to zero. The simulated examples are shown in Figs. 2-5. All the regulators REG1-REG4 will be self-tuning for the process (P1). The regulators REG1 and REG2 have the advantage that there are only two parameters to identify. For REG3 and REG4 there is not a one-to-one correspondence between the estimated parameters and the controller parameters. REG1 does not work when it is applied to the non-minimum phase process P2. In that case the control signal will oscillate with increasing amplitude. If the control signal is limited strongly, it will oscillate between the bounds. The regulator can be made to work by artificially increasing the time delay in the prediction model. See Wittenmark (1973). The regulators REG2, REG3 and REG4 are possible to use for non-minimum phase systems. With the model (4.4) used here, REG2 was not self-tuning. By increasing the order of the model, REG2 will be self-tuning too.
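To make the behaviour of the simplest regulator concrete, here is a minimal simulation sketch (illustrative code, not the original programs) of REG1 applied to process P1: least squares estimation of the parameters of model (4.3), with β_2 fixed to zero, combined with minimum variance control.

    import numpy as np

    # Process P1:  y(t+1) + a y(t) = b1 u(t-1) + e(t+1) + c e(t)
    rng = np.random.default_rng(0)
    a, b1, c = -0.95, 1.0, -0.7
    N = 3000
    y = np.zeros(N); u = np.zeros(N); e = rng.standard_normal(N)

    theta = np.zeros(2)          # [alpha_1, beta_1]; beta_2 is fixed to zero for P1
    P = 100.0 * np.eye(2)        # "covariance" of the least squares estimate
    loss = 0.0

    for t in range(3, N):
        # simulate the process one step
        y[t] = -a * y[t - 1] + b1 * u[t - 2] + e[t] + c * e[t - 1]
        loss += y[t] ** 2

        # least squares update of model (4.3):
        #   y(t) = -alpha_1 y(t-2) + u(t-2) + beta_1 u(t-3)
        phi = np.array([-y[t - 2], u[t - 3]])
        z = y[t] - u[t - 2]                    # the u(t-2) term has a fixed unit coefficient
        k = P @ phi / (1.0 + phi @ P @ phi)
        theta = theta + k * (z - phi @ theta)
        P = P - np.outer(k, phi @ P)

        # minimum variance control: make the model's predicted y(t+2) zero
        alpha_1, beta_1 = theta
        u[t] = alpha_1 * y[t] - beta_1 * u[t - 1]

    print("estimates [alpha_1, beta_1]:", theta)           # should approach -0.238 and 0.25
    print("average loss per step     :", loss / (N - 3))   # the optimal value is about 1.06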
5. PRACTICAL APPLICATIONS

Industrial processes are increasingly being controlled by process computers. The regulators synthesized in the computers are mainly discrete time versions of PID regulators, sometimes with the addition of dead-time compensation. These standardized regulators have simple structures, which can be characterized by few parameters. In many applications it may, however, be advantageous to use more complex regulators. Such regulators are seldom implemented, mainly because they need a large amount of knowledge of the process dynamics and the characteristics of the noise. It is thus desirable to have some kind of automatic tuning of the regulator parameters. One way is to use the type of adaptive regulators discussed in this work. The self-tuning algorithms are very well suited for industrial use due to the attractive properties discussed above and the moderate computational requirements. A self-tuning regulator can be used in several ways depending on the characteristics of the controlled process:

o   A self-tuning regulator can be used at the installation or retuning of a regulator loop. It can be removed when a proper parameter set has been obtained.

o   A self-tuning regulator can be installed among the system programs in the computer and periodically serve different control loops.

o   If the process has time varying parameters, it may be desirable to have the self-tuning regulator connected to the regulator loop all the time.
When implementing the algorithm on a process computer, it is sometimes advantageous to divide the algorithm into two parts, one for the estimation and one for the control, as indicated in Fig. 1. If the algorithm is implemented on a process computer having a DDC package, then the control part can in many cases be implemented using the standard set of regulators defined in the DDC package. The tuning part then delivers the regulator parameters to the data used by the DDC package. If a regulator structure is used that is not available among the standard routines, it is necessary to write a special routine for the control part. The tuning part must be specially written and included among the system programs. This routine can be used for many different loops if special care is taken concerning the storage of data. The simplest version of a self-tuning regulator is based on least squares estimation of the parameters in the regulator (compare REG1 in Section 4). The other versions of the self-tuning algorithms include a routine for solving the Riccati equation. This will increase the memory requirements and execution times, but the algorithms will still be quite reasonable to implement even on a small computer. Even if the discussed regulators automatically tune their parameters, it is necessary to determine some parameters in advance. These are for instance:
o   The number of parameters in the prediction model (p, r and s).

o   The initial values of the parameter estimates.

o   Values of any fixed parameters in the model.

o   The rate of exponential forgetting of past data in the estimation algorithm.

o   The sampling rate.
Experience has shown that it is fairly easy to make the proper choices in practice. These parameters are also much easier to choose than the coefficients of a complex control law determined directly. It is our experience that system engineers without previous experience of this type of algorithms have been able to learn how to use them after only a short training period. In the practical applications of the self-tuning algorithm the performance of the controllers can be checked by analyzing the autocovariance of the output, r_y(τ), and the cross-covariance between output and input, r_yu(τ). In the optimal case these covariances will be zero for all values of τ greater than the time delay in the process. This can be used to determine if the regulator contains enough parameters and if the parameters have been properly tuned.
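A small sketch of this check (illustrative; the data below are placeholders, in practice logged u and y from the loop would be used):

    import numpy as np

    def normalized_covariances(y, u, max_lag):
        """Sample autocovariance r_y(tau) (normalized so r_y(0) = 1) and the
        normalized cross-covariance r_yu(tau) = (1/N) sum y(t+tau) u(t)."""
        y = np.asarray(y, dtype=float) - np.mean(y)
        u = np.asarray(u, dtype=float) - np.mean(u)
        N = len(y)
        r_y = np.array([y[k:] @ y[: N - k] / N for k in range(max_lag + 1)])
        r_yu = np.array([y[k:] @ u[: N - k] / N for k in range(max_lag + 1)])
        return r_y / r_y[0], r_yu / np.sqrt(r_y[0] * (u @ u) / N)

    # placeholder data; with a well tuned loop both sequences should stay
    # inside the band for all lags greater than the process time delay
    rng = np.random.default_rng(1)
    y, u = rng.standard_normal(2000), rng.standard_normal(2000)
    r_y, r_yu = normalized_covariances(y, u, max_lag=10)
    band = 2.0 / np.sqrt(len(y))                 # approximate 95% band around zero
    print("lags with |r_y| outside the band :", [k for k in range(1, 11) if abs(r_y[k]) > band])
    print("lags with |r_yu| outside the band:", [k for k in range(1, 11) if abs(r_yu[k]) > band])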
The self-tuning algorithms have been applied successfully to many different industrial processes, e.g.

o   paper machine (Cegrell-Hedqvist (1973) and Borisson-Wittenmark (1974))

o   digester (Cegrell-Hedqvist (1974))

o   ore crusher (Borisson-Syding (1974))

o   enthalpy exchanger (Jensen-Hansel (1974))

o   supertanker (Kallstrom (1974))
The simplest version of the self-tuning regulators, least squares estimation of the controller parameters of the minimum variance regulator, has been used in all the applications listed above. A brief summary of two of the applications will now be given. In the paper machine application, Borisson-Wittenmark (1974), the method of periodic tuning was used. The self-tuning regulator controlled the moisture content on a machine of the Billerud company in Sweden, Fig. 6. It is important to have good control of the moisture content loop since it directly improves the economy of the production as well as the quality of the paper. The couch vacuum was measured in the paper machine. This signal was used as feed-forward. The advantages of including tuning of a feed-forward compensator in the self-tuning algorithm could then be demonstrated practically. The self-tuning regulator program is now implemented in the program package of the IBM 1800 computer, which is controlling the different parts of the paper machine. The systems engineers at the paper mill can now easily select any loop for tuning of the regulator parameters. An example of self-tuning control is given in Fig. 7. Concerning the ore crusher belonging to the company LKAB in Sweden, Fig. 8, one of the main problems was that the operating conditions were heavily dependent on the incoming raw material, see Borisson-Syding (1974). To maintain a high constant power output from the crusher, and thus a high production, it is necessary to control the input of ore to compensate for variations in crushability and lump size, as well as for changes in the crusher depending on wear of the jackets. In this application the algorithm was tuning the regulator parameters all the time. During the study of the ore crusher a temporary digital control loop was set up. A process computer in the Control Laboratory at the University of Lund was connected to the crushing plant at about 1800 kilometers' distance. The data were transmitted by an ordinary public telephone line and low speed modems. In Fig. 9 an example of self-tuning control is given. Estimated correlations for input and output signals are shown in Fig. 10. The study of the crusher showed that self-tuning regulators can in practical use work well in a truly adaptive environment with continually changing process variables, provided exponential forgetting is included in the parameter estimation. Compared with a conventional PI regulator with fixed parameters it was demonstrated
that the more sophisticated self-tuning algorithm could increase the production in the plant by about 10%. The industrial experiments and simulations show that the type of self-tuning algorithms used has good transient, as well as stationary, properties. After very few steps the controller achieves very good control even if the parameters have not reached their final values. The great advantage of the self-tuning regulators is that it is possible to obtain good tuning of many regulator parameters. Manually it is perhaps only possible to make a good tuning if the regulator contains 2 - 3 parameters. The self-tuning regulator can tune 6 - 8 parameters without any difficulties.

Manual tuning is best done on the basis of transient responses in the control loop. It is, however, in many cases desirable to have a good stationary control in order to minimize the influence of stochastic disturbances. The self-tuning regulator is designed to tune the parameters in such a way that a good stationary control is obtained. Furthermore, if a forgetting factor is used, it is possible to follow slow changes in the process characteristics and all the time have a properly tuned regulator. All these properties make the self-tuning algorithms well suited for many control loops in the process industry.

REFERENCES

(1) Astrom, K. J., Introduction to Stochastic Control Theory, Academic Press, (1970).

(2) Astrom, K. J. and P. Eykhoff, System Identification - A Survey, Automatica, 7, 123-162, (1971).

(3) Astrom, K. J. and B. Wittenmark, Problems of Identification and Control, Journal of Mathematical Analysis and Applications, 34, 90-113, (1971).

(4) Astrom, K. J. and B. Wittenmark, On Self-Tuning Regulators, Automatica, 9, 185-199, (1973).

(5) Astrom, K. J. and B. Wittenmark, Analysis of a Self-Tuning Regulator for Non-minimum Phase Systems, IFAC Symp. on Stochastic Control, Budapest, (1974).

(6) Borisson, U. and R. Syding, Self-Tuning Control of an Ore Crusher, IFAC Symp. on Stochastic Control, Budapest, (1974).

(7) Borisson, U. and B. Wittenmark, An Industrial Application of a Self-Tuning Regulator, IFAC Symp. on Digital Computer Applications to Process Control, Zürich, (1974).

(8) Cegrell, T. and T. Hedqvist, Successful Adaptive Control of Paper Machines, IFAC Symp. on Identification and System Parameter Estimation, The Hague, (1973).

(9) Jensen, L. and R. Hansel, Computer Control of an Enthalpy Exchanger, Report 7417, Department of Automatic Control, Lund Inst. of Technology, (1974).

(10) Kalman, R. E., Design of a Self-Optimizing Control System, Transactions of the American Society of Mechanical Engineers, 80, No. 2, 468-478, (1958).

(11) Kallstrom, C., Private Communication, (1973).

(12) Ljung, L., Convergence of Recursive Stochastic Algorithms, Report 7403, Department of Automatic Control, Lund Institute of Technology, (1974).

(13) Ljung, L., T. Soderstrom and I. Gustavsson, Counterexamples to General Convergence of a Commonly Used Identification Method, submitted to IEEE Trans. on Automatic Control, (1974).

(14) Ljung, L. and B. Wittenmark, Asymptotic Properties of Self-Tuning Regulators, Report 7404, Department of Automatic Control, Lund Institute of Technology, (1974a).

(15) Ljung, L. and B. Wittenmark, Analysis of a Class of Adaptive Regulators, IFAC Symp. on Stochastic Control, Budapest, (1974b).

(16) Peterka, V., Adaptive Digital Regulation of Noisy Systems, IFAC Symp. on Identification and Process Parameter Estimation, Prague, (1970).

(17) Peterka, V. and K. J. Astrom, Control of Multivariable Systems with Unknown but Constant Parameters, IFAC Symp. on Identification and System Parameter Estimation, The Hague, (1973).

(18) Soderstrom, T., L. Ljung and I. Gustavsson, A Comparative Analysis of Recursive Identification Methods, Report 7427, Department of Automatic Control, Lund Institute of Technology, (1974).

(19) Whitaker, H. P., J. Yamron and A. Kezer, Design of Model Reference Adaptive Control Systems for Aircraft, MIT Instrumentation Laboratory, Report R-164, (1958).

(20) Wittenmark, B., A Self-Tuning Regulator, Report 7311, Department of Automatic Control, Lund Institute of Technology, (1973).

(21) Wittenmark, B., A Self-Tuning Predictor, IEEE Trans. on Automatic Control, AC-19, 848-851, (1974).

(22) Young, P., An Extension of the Instrumental Variable Method for Identification of Noisy Dynamic Processes, Report CN/70/1, University of Cambridge, Department of Engineering, (1970).
Fig. 1 - Block diagram of the regulator.

Fig. 2 - The estimated parameters for REG1 when controlling the process P1. Notice that the regulator parameters in this case are equal to the estimated parameters. The loss is 1.05 in this simulation, and the expected value is 1.06.

Fig. 3 - The regulator parameters when REG2 is applied to the non-minimum phase process P2.

Fig. 4 - The regulator parameters when REG3 is applied to the process P2.

Fig. 5 - The estimated parameters when REG3 is applied to the process P2. Notice that the regulator parameters are virtually constant after 600 steps (Fig. 4), but that the estimated parameters vary significantly. This is not surprising in view of the lacking identifiability for P2 with a constant regulator of the chosen structure. A forgetting factor λ = 0.999 was used in the estimation. The loss is 1.07 in this simulation, and the expected value is 1.08.

Fig. 6 - The self-tuning algorithm was applied to moisture content control on this paper machine. The machine is producing about 130 000 tons of fluting per year with a basis weight between 112 g/m2 and 150 g/m2. The steam pressure in the drying cylinders was used as control signal and the couch vacuum as feed-forward signal.

Fig. 7 - Process variables and parameter estimates. The self-tuning regulator had been controlling the process for about four hours when this registration was started. From the couch vacuum registration it follows that there was a slow change in the pulp quality during this period. A small change in machine speed was also made. Then a disturbance was introduced in the moisture content. After a short time the moisture level is satisfactory again. Notice the quick adaption of the estimated parameters to the speed change.

Fig. 8 - For the self-tuning control of the ore crusher a temporary digital control loop was set up. A process computer at the University of Lund was connected to the crushing plant at about 1800 kilometers' distance. In the crushing plant the ore is crushed to lumps smaller than a specified dimension. The ore enters the crushing line on an electromechanical feeder. A conveyor belt takes it to the first screen, where small lumps are separated. The rest of the ore proceeds to the crusher. Some part of the ore leaving the crusher and not passing through the second screen will be recycled to the crusher. (Self-tuning regulator in the process computer: about 35 FORTRAN statements, memory requirements about 500 memory cells, execution time for the algorithm with 8 parameters to tune about 31 ms.)

Fig. 9 - An example of self-tuning control of the ore crusher. The set point of the crusher power is 200 kW. In this experiment the estimated standard deviation was 19.7 kW, which is a good result compared with conventional PI control. The thin line indicates the normal set point with PI control, 170 kW, which gives a considerably lower production of ore. The dashed line shows an upper limit of the crusher power, which should not be exceeded for longer periods of time because the crusher motor might then be overloaded.

Fig. 10 - Estimated correlations for the signals in Fig. 9. In this case there are three time delays in the system, totally 60 seconds. It is then expected that r_y(τ) and r_yu(τ) will be zero for τ > 3. The dashed lines indicate a 95% confidence interval in which the covariances can be regarded as zero.