Journal of Microcomputer Applications (1988) 11, 95-106

Personal-computer based self-tuning controller
K. J. Hunt and R. W. Jones, Industrial Control Unit, Department of Electronic and Electrical Engineering, University of Strathclyde, Royal College Building, 204 George Street, Glasgow G1 1XW, Scotland, UK

Self-tuning control is an area of control theory which has developed rapidly in the last decade. The emergence of self-tuning control has been facilitated by the great increase in power and decrease in cost of processing hardware, which now allows advanced process control techniques to be implemented with relative ease and at low cost. This paper describes a portable self-tuning controller package which has been written in FORTRAN and implemented on an IBM portable personal computer. The package can be used in both a simulation mode and in a real-time mode for control of physical processes. A brief introduction to self-tuning control, a control algorithm based upon Linear-Quadratic-Gaussian (LQG) optimal control theory, and an estimation routine based upon least-squares techniques are presented. The LQG algorithm and the estimation routine are computationally intensive procedures and the special precautions which are necessary to ensure numerical robustness are described. The key features of the package, and the software and hardware organization used to implement it, are presented.

1. Introduction

In conventional controllers the control parameters are manually tuned by an operator based upon subjective analysis of the system response. Although adequate control of the process can often be achieved using this technique, the tuning procedure relies heavily upon the skill of the operator and is often very time-consuming and difficult. Consequently, many loops are often poorly tuned in relation to what is potentially possible. Self-tuning controllers attempt to overcome the tuning problem by automatically adjusting the controller parameters, based upon an estimate of a process model, to achieve a desired performance objective. A self-tuning controller is a digital controller which during each sample interval performs three major steps (see Figure 1):

(i) Identifies (estimates) the parameters of a discrete plant model, using the sampled input-output data.
(ii) Calculates the controller parameters using the estimated plant model parameters.
(iii) Calculates and implements the new control signal.

Provided the estimation routine obtains an accurate model of the plant the controller will automatically tune, and should remain tightly tuned even when the process dynamics change (assuming the estimation routine can track any such changes). A recent review of self-tuning control is given by Åström [1].


Figure 1. Self-tuning system.

Notice that the three steps of the self-tuning algorithm outlined above imply a wide diversity of potential self-tuning controllers since a number of different identification methods may be used for step (i) and any one of a range of control design philosophies can be employed as step (ii). This paper describes a FORTRAN program package which was designed to provide a facility for the evaluation of a wide range of self-tuning algorithms by both simulation and real-time control. In Section 2 one specific self-tuning algorithm is described in order to demonstrate the way in which the identification and control stages are combined. Section 3 gives a detailed description of the facilities provided by the package, and the hardware used is listed in Section 4. Critical numerical aspects of the self-tuning algorithm of Section 2 are discussed in Section 5. In Section 6 a simple example is given which demonstrates the performance of the self-tuning algorithm. Section 7 concludes the paper.

2. Self-tuning techniques

This section describes one particular estimation routine and a control technique and shows how they are combined to form a self-tuning algorithm.

2.1 Identification

The process model structure is assumed to be given by the single-input single-output difference equation model (the so-called ARMAX model):

y(t) + a_1 y(t-1) + ... + a_na y(t-na) = b_0 u(t-k) + ... + b_nb u(t-k-nb) + ξ(t) + c_1 ξ(t-1) + ... + c_nc ξ(t-nc)   (1)

where y(t) and u(t) are the output and input signals, and ξ(t) is a zero mean white noise disturbance signal. The integer part of the system time delay is represented by k ≥ 1. The model (1) may be written in the more compact polynomial form (see Figure 2):

A(z^-1) y(t) = z^-k B(z^-1) u(t) + C(z^-1) ξ(t)   (2)


Figure 2. ARMAX model.

where the system polynomials are defined by:

A(z^-1) = 1 + a_1 z^-1 + ... + a_na z^-na
B(z^-1) = b_0 + b_1 z^-1 + ... + b_nb z^-nb   (3)
C(z^-1) = 1 + c_1 z^-1 + ... + c_nc z^-nc

z^-1 is the delay operator, such that:

z^-1 y(t) = y(t-1).   (4)

The model (1) must now be modified and re-written in a form suitable for implementation in a recursive parameter estimation routine, where the parameter vector θ̂(t) and regression vector φ(t) are defined by:

θ̂(t) = (â_1 ... â_na; b̂_0 ... b̂_nb; ĉ_1 ... ĉ_nc)   (5)

φ(t-1) = (-y(t-1) ... -y(t-na); u(t-k) ... u(t-k-nb); ε(t-1) ... ε(t-nc))   (6)

The elements ε(t-1) ... ε(t-nc) of the regression vector serve as proxies for the unmeasurable signal ξ(t). Notice that the ^ symbol in (5) signifies estimates of the polynomial coefficients in (3). The recursive estimation technique of extended least-squares (ELS) may now be applied to obtain estimates of the polynomial coefficients in the parameter vector θ(t). The ELS algorithm is defined by (see Ljung & Söderström [2] for details):

θ̂(t) = θ̂(t-1) + P(t) φ(t-1) ε(t)

ε(t) = y(t) - θ̂^T(t-1) φ(t-1)   (7)

P(t) = P(t-1) - [P(t-1) φ(t-1) φ^T(t-1) P(t-1)] / [1 + φ^T(t-1) P(t-1) φ(t-1)]
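The package implements this estimation routine in FORTRAN. Purely as an illustration of equations (6) and (7), a minimal Python/NumPy sketch of one ELS update is given below; the function names, buffer layout and the assumption of unit residual weighting are choices made for this example, not details taken from the paper.

```python
import numpy as np

def build_phi(y_hist, u_hist, eps_hist, na, nb, nc, k):
    """Assemble the regression vector phi(t-1) of equation (6).

    y_hist[i], u_hist[i], eps_hist[i] hold y(t-1-i), u(t-1-i) and eps(t-1-i)
    respectively (most recent value first).
    """
    return np.concatenate([
        -np.asarray(y_hist[:na]),            # -y(t-1) ... -y(t-na)
        np.asarray(u_hist[k - 1:k + nb]),    # u(t-k) ... u(t-k-nb)
        np.asarray(eps_hist[:nc]),           # eps(t-1) ... eps(t-nc)
    ])

def els_update(theta, P, phi, y):
    """One extended least-squares update, equations (7).

    Returns the new estimate theta(t), covariance P(t) and residual eps(t);
    eps(t) is fed back into the regression vector at the next sample.
    """
    eps = y - phi @ theta                        # prediction error eps(t)
    denom = 1.0 + phi @ P @ phi                  # 1 + phi' P(t-1) phi
    P = P - np.outer(P @ phi, phi @ P) / denom   # covariance update P(t)
    theta = theta + P @ phi * eps                # theta(t) = theta(t-1) + P(t) phi eps(t)
    return theta, P, eps
```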


Updating of equations (7) during each sample interval constitutes step (i) of the self-tuning algorithm outlined in Section 1. The main computational difficulty in this step is the updating of the covariance matrix P(t). Numerical aspects of this operation requiring special care are discussed in a later section.

2.2 Controller design

The control design technique presented in this section is based upon the Linear-Quadratic-Gaussian (LQG) method proposed by Kučera [3]. The LQG optimal regulator is chosen to minimize the steady-state cost function:

J = E[Q_1 y^2(t) + R_1 u^2(t)]   (8)

Q_1 and R_1 are the output and control weighting functions respectively. The optimal control signal u(t) is calculated from (see Figure 3):

(9)

The polynomials N(z^-1) and D(z^-1) are the minimal degree solutions with respect to F of the coupled diophantine equations:†

D_1* N + F A = B* Q_1 C
D_1* D - F B = A* R_1 C   (10)

where the polynomial D_1(z^-1) is obtained from the spectral factorization:

D_1* D_1 = B* Q_1 B + A* R_1 A.   (11)

Figure 3. Controller structure.

†The adjoint of a polynomial X(z^-1) is denoted by X*(z^-1), such that X*(z^-1) = X(z).


The LQG controller thus requires two major computations during each sample interval, i.e. solution of equations (10) and (11). Possible numerical difficulties resulting from these calculations are discussed in a later section.

2.3 Self-tuning algorithm

The self-tuning philosophy outlined in Section 1 may now be used to formulate a specific self-tuning algorithm based upon the identification routine of Section 2.1 and the control design of Section 2.2. The algorithm is:

Data: Choose cost function weighting elements Q_1 and R_1.
Step 1: Sample process output y(t).
Step 2: Estimate the A(z^-1), B(z^-1) and C(z^-1) polynomials using ELS as defined by equations (7).
Step 3: Calculate the controller polynomials N(z^-1) and D(z^-1) by solving equations (10) and (11).
Step 4: Calculate the new control signal u(t) using equation (9), and implement it.
Step 5: Update data vectors.
Step 6: Go to Step 1 at the next sample instant.

Full details of this algorithm may be found in Hunt [4].
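To make the estimate/design/control cycle of Steps 1-6 concrete, the sketch below runs a simulation-mode loop in Python for a deliberately simple first-order plant, with a one-step (deadbeat) design standing in for the LQG design of Step 3. The plant coefficients, noise level and set-point are invented for this illustration and are not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented first-order plant: y(t) = a*y(t-1) + b*u(t-1) + e(t)
a_true, b_true, noise_std = 0.9, 0.5, 0.05

theta = np.array([0.0, 0.1])          # initial guesses for [a, b]
P = 100.0 * np.eye(2)                 # large initial covariance
y_prev, u_prev = 0.0, 0.0

for t in range(200):
    r = 1.0 if (t // 25) % 2 == 0 else -1.0   # square-wave set-point

    # Step 1: sample the process output (here simulated internally)
    y = a_true * y_prev + b_true * u_prev + noise_std * rng.standard_normal()

    # Step 2: recursive least-squares update of the plant model
    # (phi here is simply [y(t-1), u(t-1)], a plainer convention than equation (6))
    phi = np.array([y_prev, u_prev])
    eps = y - phi @ theta
    denom = 1.0 + phi @ P @ phi
    P = P - np.outer(P @ phi, phi @ P) / denom
    theta = theta + P @ phi * eps

    # Steps 3-4: certainty-equivalence design and control
    # (a one-step deadbeat law stands in for the paper's LQG design)
    a_hat, b_hat = theta
    u = (r - a_hat * y) / b_hat if abs(b_hat) > 1e-6 else 0.0

    # Step 5: update the data vectors; Step 6: next sample
    y_prev, u_prev = y, u

print("estimated [a, b]:", theta)     # converges towards [0.9, 0.5]
```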

3. The self-tuning system

The scale of the hardware and software system which is used to implement a self-tuning controller can vary widely. Two distinct types of system can be mentioned:

(i) At the smallest scale is a low-cost microprocessor-based self-tuner which is dedicated to one specific control task and employs a single fixed self-tuning algorithm. Such a self-tuning device will be low-cost but inflexible. The development costs for such a device are, however, likely to be high since access to a microprocessor development system is required.
(ii) Higher up the scale is a self-tuning system based upon a high-level language program package implemented on a microcomputer. While the basic hardware costs for such a system are higher, the resulting self-tuning system will be widely applicable and have far greater flexibility.

The remainder of this paper describes a self-tuning system based upon the second of the above philosophies. The package is written in FORTRAN and implemented on an IBM portable personal computer. The motivation behind the development of the more flexible system was that it should be capable of providing the following two functions:

(i) As previously mentioned, a wide range of possible self-tuning algorithms exist. In industrial situations, therefore, there is no single algorithm which could be considered uniformly best. A fundamental requirement of the package was to provide a facility for real-time on-site evaluation of the suitability of a wide range of algorithms for a specific control problem. Clearly, when a specific algorithm has been selected most of the code in the package becomes redundant and the chosen algorithm should then be used as the basis of a smaller dedicated self-tuning device.


(ii) The package was required to provide a simulation facility whereby the features and performance of newly developed algorithms could be evaluated in a research environment.

3.1 System facilities and functions

The facilities provided by the system software may be divided into several distinct units, which are described in the following sub-sections.

3.1.1 Set-up interface. A major part of the package is that section of code which is necessary for commissioning the self-tuner before a run. This part of the program allows the user to define the specific form of the self-tuning algorithm to be used together with other operational details. The set-up interface interacts with the user by means of a menu-driven dialogue. The menus are arranged in a tree structure with any particular option within a given submenu being selected by means of a numerical key-press, each submenu containing up to nine options. Selection of each option results in the modification of a disk file which contains all the commissioning data, and which is subsequently accessed by the self-tuning program. Some of the more important options which are available can be mentioned:

(i) The particular combination of identification method and control technique to be used can be selected.
(ii) The parameters of a simulated process can be defined for use when the package is run as a simulation facility.
(iii) The parameters of a fixed conventional controller can be defined. This controller can be used when the self-tuner is 'starting up' or as a safety feature which operates when failure of the self-tuner is detected.
(iv) Several miscellaneous parameters may be chosen. These include parameters concerned with the identification and control stages, and various engineering constraints such as limits on the physical range of control signal which can be applied.

3.1.2 Run-time interface. The run-time user interface presents the user with a range of data including the key process variables and estimation parameters. The interface also allows the user to change various system parameters during control of the process. The run-time interface operates in three distinct modes, each of which may be selected by a single key-press:

(i) Graphics mode. The user may obtain a graphical display of several system variables, the most important of which are the system set-point, output and control signal. Limited alpha-numeric data is also presented in this mode for the particular system variable on display. The display is updated during each sample interval.
(ii) Data mode. Detailed information on a range of system variables is presented in alpha-numeric form. Again, the display is updated during each sample interval.
(iii) Alteration mode. This mode allows the user to make changes to the mode of operation of the controller (as described in a later section) and to alter various system parameters such as the set-point or control variables. The changes to be made are selected by means of a menu structure.


3.1.3 Library routines. The program package contains a set of library routines which perform a wide range of functions. The modular nature of the library, in which the routines are written as user-callable subroutines, allows the user to easily configure new self-tuning algorithms. The library is divided into three main areas:

(i) Identification. Although the most commonly used identification routines are based upon the least-squares type procedure, the identification library allows the performance of new algorithms or ideas to be evaluated.
(ii) Control. As previously mentioned, the wide range of control problems appearing in industrial situations requires the availability of a range of different control techniques. New control theories may also be easily tested by incorporating a newly written routine into the library.
(iii) Mathematical operations. Although a wide range of control algorithms exist, they often require the use of common mathematical operations. It is therefore more convenient to have these operations available as library routines. Using this technique the amount of code which must be written to produce a new self-tuning algorithm can be kept to a minimum. Some of the functions available in the mathematical library include polynomial addition and multiplication, spectral factorization, and diophantine equation solution (two such operations are sketched below).
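The library itself consists of FORTRAN subroutines. Purely as a rough illustration of the kind of operations involved, the Python sketch below shows polynomial multiplication and the solution of a single diophantine equation X·A + Y·B = C by writing it as a Sylvester-type linear system. The function names, the single-equation form (the paper's design uses the coupled pair (10), solved by Ježek's algorithm [5]) and the degree assumptions are choices made for this example only.

```python
import numpy as np

def poly_mul(a, b):
    """Multiply two polynomials in z^-1 given as coefficient arrays [p0, p1, ...]."""
    return np.convolve(a, b)

def dioph_solve(a, b, c):
    """Solve X*A + Y*B = C for the minimal-degree X, Y (deg X < deg B, deg Y < deg A).

    a, b, c are coefficient arrays in ascending powers of z^-1; A and B are
    assumed coprime and deg C < deg A + deg B.
    """
    na, nb = len(a) - 1, len(b) - 1
    n = na + nb                                   # number of unknowns and equations
    S = np.zeros((n, n))
    for j in range(nb):                           # columns multiplying X coefficients
        S[j:j + na + 1, j] = a
    for j in range(na):                           # columns multiplying Y coefficients
        S[j:j + nb + 1, nb + j] = b
    rhs = np.zeros(n)
    rhs[:len(c)] = c
    sol = np.linalg.solve(S, rhs)
    return sol[:nb], sol[nb:]                     # X coefficients, Y coefficients
```

For instance, dioph_solve(np.array([1.0, -0.5]), np.array([1.0, 0.3]), np.array([1.0])) returns X = [0.375] and Y = [0.625], and poly_mul confirms that X·A + Y·B reproduces C.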

3.1.4 Data logging. A very important facility provided by the self-tuning system is that of mass data storage during a process control run. The data can later be used to check on the performance of the self-tuner, for fault diagnosis, or for off-line identification experiments to assess the performance of the estimation routine. The mass storage medium used is floppy disk (or hard disk in the laboratory environment) and a typical record of information contains process input/output data and parameter estimation data.

3.1.5 Monitoring and alarms. A crucial requirement for any control instrument is the detection of control failure and the appropriate sounding of an alarm to warn the operator. In a conventional control situation the operator's probable response to a control alarm would be to switch to manual control where the control signal is under direct command. In self-tuning control, however, the key problem is usually that of obtaining an accurate model from the identification stage. The alarm and safety system operated to safeguard the self-tuner is as follows: the output which is predicted by the current estimated model is compared to the actual output, and if the difference between these two quantities exceeds a pre-specified level an erroneous model is inferred. This condition results in a switch to the conventional fixed controller mentioned previously and the operator is advised of the current state of the controller by both audible and visual warnings.
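A minimal sketch of this monitoring rule is given below; the function name and threshold handling are invented for the example, and the package's actual supervision logic is more elaborate.

```python
import numpy as np

def model_alarm(theta, phi, y, threshold):
    """Return True if the output predicted by the current estimated model differs
    from the measured output y(t) by more than the pre-specified level.  On an
    alarm the supervisor would switch to the fixed conventional controller and
    raise the audible and visual warnings."""
    y_pred = float(phi @ theta)      # one-step prediction from the estimated model
    return abs(y - y_pred) > threshold
```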

3.2 Run-time modes

The self-tuning package can be operated in one of three ways, depending on where the algorithm receives the sampled process output from:

(i) Simulation mode. The system output is obtained from an internal representation of the system model defined by equation (2). This provides a useful tool for the evaluation and comparison of different self-tuning algorithms (a minimal simulation of this kind is sketched after this list).
(ii) Real-time simulation mode. In this mode the self-tuning algorithm obtains the process output from a process simulation running on another computer. Communication between computers is via a serial data link or D/A, A/D converters. This technique facilitates a much more comprehensive and realistic process simulation. The process is simulated in continuous time and practical features such as non-linearity and actuator saturation may be included in the model.
(iii) Real-time mode. In this mode the computer controls a real physical process by communicating through D/A and A/D converters.
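Purely for illustration, a minimal internal process representation of the form of equation (2) might be generated as below; the coefficient conventions follow equations (1) and (3), while the noise level and any example values are placeholders rather than the package's own model.

```python
import numpy as np

def simulate_armax(a, b, c, k, u, noise_std=0.05, rng=None):
    """Simulate A(z^-1) y(t) = z^-k B(z^-1) u(t) + C(z^-1) xi(t).

    a = [1, a1, ..., a_na], b = [b0, ..., b_nb], c = [1, c1, ..., c_nc];
    u is the input sequence and k >= 1 the integer time delay.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(u)
    y = np.zeros(n)
    xi = noise_std * rng.standard_normal(n)
    for t in range(n):
        acc = xi[t]
        for i in range(1, len(c)):          # + c_i * xi(t-i)
            if t - i >= 0:
                acc += c[i] * xi[t - i]
        for j in range(len(b)):             # + b_j * u(t-k-j)
            if t - k - j >= 0:
                acc += b[j] * u[t - k - j]
        for i in range(1, len(a)):          # - a_i * y(t-i)
            if t - i >= 0:
                acc -= a[i] * y[t - i]
        y[t] = acc                          # a_0 is assumed to be 1
    return y
```

For example, simulate_armax([1.0, -0.9], [0.5], [1.0], k=1, u=np.ones(50)) produces the step response of a simple first-order plant.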

The mode required is selected during system set-up. At a lower level, the self-tuner can be operated in one of three further modes:

(i) Identify. With the process under the control of an external controller or the internal fixed conventional controller, the identification routine updates the process model but the estimates are not passed to the controller calculation stage. This technique is useful during the start-up phase of a self-tuner as it helps avoid large tuning-in transients. When the model obtained is sufficiently accurate the controller can be switched to one of the following modes.
(ii) Tune. In this mode the controller is updated using the estimated model parameters for only a fixed length of time. After this time the controller is fixed, as ideally it should then be correctly tuned. It is often useful to switch to the tune mode periodically so that any dynamic changes in the process can be tracked.
(iii) Adapt. In this mode the controller is continuously updated using the latest parameter estimates.

It is possible for the operator to switch between these three modes; a minimal sketch of this mode logic is given below.
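The mode names below mirror the text; the function signature and the tune-timer handling are assumptions made for this sketch. In all three modes the identification routine continues to run; the modes only determine whether its estimates reach the controller-design stage.

```python
from enum import Enum

class Mode(Enum):
    IDENTIFY = 1   # estimate only; controller parameters are left untouched
    TUNE = 2       # pass estimates to the design stage for a fixed period only
    ADAPT = 3      # redesign the controller at every sample interval

def design_enabled(mode, samples_in_tune, tune_length):
    """Decide whether the controller-design stage runs this sample interval."""
    if mode is Mode.IDENTIFY:
        return False
    if mode is Mode.TUNE:
        return samples_in_tune < tune_length   # controller frozen after the period
    return True                                # Mode.ADAPT
```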

3.3 Software organization

The different self-tuning algorithms which have been constructed by linking the appropriate library routines off-line are stored on hard disk. The required algorithm is loaded prior to run-time after being selected during execution of the set-up program. At this time the self-tuning algorithm is linked to the general management software (runtime interface, i/o software etc.) which is common to all of the complete controller programs.

4. Hardware

The self-tuning program package described in the previous section has been written in FORTRAN and implemented on an IBM portable personal computer. The specifications for the system unit in use can be summarized as follows:

Intel 8088 microprocessor.
Intel 8087 arithmetic coprocessor.
512 K RAM.
Two floppy disk drives (360 K per disk).
9-inch amber composite video monitor.
Asynchronous communications adapter card (RS 232).
Multi-channel D/A, A/D data acquisition card.


The availability of all the above features in a single self-contained unit makes the IBM an ideal machine for both laboratory and on-site experimentation. In the research environment the computer is also connected to the following hardware:

Hard disk unit (10 Mbyte storage).
Printer with graphics capability.

The hard disk is particularly used in the program development stage, completed programs being transferred to floppy disk for portability.

5. Numerical aspects

There are three key computation steps in the self-tuning algorithm presented in Section 2. Although solution of the diophantine equations (10) is computationally intensive it does not present any dangerous numerical conditions. At present, the algorithm of Ježek [5] is implemented. In contrast, update of the covariance matrix of the parameter estimates [equation (7)] and the spectral factorization [equation (11)] require special care.

5.1 Spectral factorization

Many efficient algorithms for the solution of equation (11) are now available (see Kučera [3]). The spectral factorization does not present any major numerical difficulties for polynomial orders up to approximately six. The only point of caution in a real-time implementation is that the algorithm is iterative. This possible difficulty can be easily overcome by terminating the iteration when one of the two following conditions occurs:

(i) The error in the iteration becomes less than a specified accuracy level.
(ii) The number of iterations becomes equal to a specified maximum.

Note that the result returned on termination of the algorithm is guaranteed to be stable, regardless of the number of iterations which have been performed. This is a necessary condition for stability of the control loop.
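The package uses the algorithms referred to above (see Kučera [3]). Purely for illustration, the sketch below implements one simple iterative scheme, Bauer's method, which recovers the stable spectral factor from a growing Cholesky factorization and stops under exactly the two conditions listed: a specified accuracy level or a maximum number of iterations. The routine and its parameters are assumptions for this example, not the package's code.

```python
import numpy as np

def spectral_factor(r, tol=1e-9, max_iter=50):
    """Bauer-type iterative spectral factorization.

    r = [r0, r1, ..., rn] holds the coefficients of the symmetric spectrum
    S(z) = r0 + sum_k rk (z^k + z^-k), here B*Q1B + A*R1A.  Returns
    d = [d0, ..., dn] such that D(z^-1) = d0 + d1 z^-1 + ... is stable and
    D* D = S.  The iteration stops when the coefficient change falls below
    tol (the accuracy level) or when max_iter is reached.
    """
    n = len(r) - 1
    d_prev = None
    for it in range(1, max_iter + 1):
        m = (n + 1) * (it + 1)                       # grow the Toeplitz section
        col = np.zeros(m)
        col[: n + 1] = r
        T = np.array([[col[abs(i - j)] for j in range(m)] for i in range(m)])
        L = np.linalg.cholesky(T)                    # T = L L'
        d = L[-1, m - 1 - n:][::-1]                  # last row -> factor coefficients
        if d_prev is not None and np.max(np.abs(d - d_prev)) < tol:
            break
        d_prev = d
    return d
```

For example, spectral_factor([1.09, 0.3]) returns approximately [1.0, 0.3], i.e. the stable factor D(z^-1) = 1 + 0.3 z^-1 of S = 1.09 + 0.3 z + 0.3 z^-1.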

5.2 Covariance matrix update

Update of the covariance matrix P(t) [equation (7)] is the key step in least-squares based recursive estimation routines. P(t) is a symmetric positive-definite matrix and it is well known that the estimation routine will become unstable should P(t) become non-positive. This is a condition which can occur after a large number of samples due to the finite word length of computers. The most common solution to this problem has been to use the UD factorization technique of Bierman [6]. The matrix P(t) is factorized into the form:

P(t) = U(t) D(t) U^T(t)

where U(t) is upper triangular and D(t) is diagonal. Using this technique P(t) is guaranteed to be positive definite and possible numerical difficulties are avoided.


6. Example

Performance of the LQG self-tuning controller algorithm is now demonstrated by a simple example. Consider a process of the form of equation (2) with the following parameters:

y(t) = [z^-2 (1 + 0.9 z^-1) / (1 - 0.95 z^-1)] u(t) + [1 / (1 - 0.95 z^-1)] ξ(t).

For the LQG design outlined in Section 2.2 the values Q_1 = R_1 = 1.0 are chosen for the cost-function weights. The true LQG controller for the above process may then be calculated as:

N(z^-1) / D(z^-1) = 0.88 / (1.76 + 1.41 z^-1 + 0.46 z^-2).

However, in the self-tuning case it is assumed initially that the process parameters are unknown and are given arbitrary initial values. The set-point applied to the system was a square wave of magnitude ±20. The output response is shown in Figure 4 along with the control signal u(t). It is seen that initially the step-response has a small overshoot and that after the parameters have converged the overshoot is eliminated. Convergence of the estimated process parameters is shown in Figure 5 and convergence of the controller parameters in Figure 6. It can be seen that although the process has a time-delay of k = 2, the step-response is still very tight.

Figure 4. Step response (set-point and output response, and control signal).

Figure 5. Parameter estimation (a and b parameter estimates).

Figure 6. Controller convergence (controller n and d parameters).

7. Conclusions

The self-tuning controller package described in this paper was designed to provide a facility for the investigation and evaluation of a range of self-tuning algorithms. The modular structure of the software enables new algorithms to be implemented with relative ease. The package can be most effectively used as the first stage in the implementation of a self-tuner for an industrial problem, the most appropriate algorithm being subsequently implemented using less sophisticated hardware. An invaluable simulation facility is also provided. The IBM portable personal computer has proven to have all the hardware requirements necessary for the package, and is very easily transportable. The only possible drawback of the IBM is that its construction may not be suitable for more hostile industrial environments.

References

1. K. J. Åström 1983. Theory and applications of adaptive control - a survey. Automatica, 19, 471-486.
2. L. Ljung & T. Söderström 1983. Theory and Practice of Recursive Identification. London: MIT Press.
3. V. Kučera 1979. Discrete Linear Control. New York: Wiley.
4. K. J. Hunt 1985. Self-tuning controller with optimised pole-positioning. Industrial Control Unit internal report.
5. J. Ježek 1982. New algorithm for minimal solution of linear polynomial equations. Kybernetika, 18, 505-516.
6. G. J. Bierman 1977. Factorization Methods for Discrete Sequential Estimation. New York: Academic Press.

Kenneth Hunt was educated at the University of Strathclyde where he received the BSc degree in electrical and electronic engineering in 1984 and the PhD degree in 1987 for a thesis entitled 'Stochastic Optimal Control Theory with Application in Self-Tuning Control'. He is currently a staff scientist with BBN Laboratories Ltd, Edinburgh.

Richard Jones received the BSc degree in chemical engineering from the University of Newcastle-Upon-Tyne in 1981. He has since been working towards a PhD degree on self-tuning control. He is currently with the Department of Mathematics, University of Strathclyde.