FIT - Filtering and Identification Tool


Copyright © IFAC System Identification Santa Barbara, California, USA, 2000

FIT - FILTERING AND IDENTIFICATION TOOL

Olaf Moseler, Michael Vogt

Darmstadt University of Technology
Institute of Automatic Control
Laboratory of Control Engineering and Process Automation
Landgraf-Georg-Strasse 4, D-64283 Darmstadt, Germany
Phone: +49-6151-167404, Fax: +49-6151-293604
E-mail: [email protected]

Abstract: The program "Filtering and Identification Tool" (FIT), designed for MATLAB, provides an easy-to-handle tool for the linear system identification of continuous time domain systems; discrete time models may be identified as well. The only property the process model must fulfill is that it has to be linear in the parameters. A graphical user interface guides the research engineer through all steps. As most of the time-consuming algorithms are implemented as C-functions, the identification takes only a few seconds even with a huge amount of data. To identify continuous time process models, derivatives of the input and output signals are required. However, these signals often cannot be measured. Therefore, digital filters such as state variable filters (SVF) and differentiating finite impulse response (FIR) filters are integrated into FIT to provide the necessary derivatives. For the identification task, various recursive parameter estimation methods are included as well, such as recursive least squares (RLS), the discrete square root filter in information form (DSFI), and normalized least mean squares (NLMS). Recursive algorithms with exponential fading memory (a variable step size in the case of NLMS) were chosen in order to identify time variant systems. A variable forgetting factor is also important for the offline design of real-time fault detection or adaptive control schemes using parameter estimation methods, in order to be able to track varying parameters. Thus, the main advantage of the Filtering and Identification Tool in comparison to existing software tools for system identification is the integration of both parameter estimation methods and differentiating filters. Copyright © 2000 IFAC

Keywords: system identification, differentiating filters, continuous time models

1. INTRODUCTION

System identification with parameter estimation methods is a commonly used approach for determining the parameters of process models (Ljung and Söderström, 1983; Ljung, 1987). Since microcomputers have become fast enough, recursive estimation methods are also used in online adaptive control to track the changing parameters of time variant systems in order to adapt the controller's parameters (Isermann et al., 1992). Similarly, they are applied in online fault detection schemes to detect impermissible changes of the process parameters (Isermann, 1984).

Usually these tasks are performed on a microcomputer, so discrete time models are preferred. This approach, however, has the drawback that the identified coefficients do not directly represent physical parameters. Here, continuous time models provide better interpretability. As the coefficients of the differential equation result from physical modeling, the identified parameters have a physical meaning, such as an electrical resistance, a mechanical friction coefficient, etc. On the other hand, the identification of continuous time dynamic processes requires the derivatives of the input and output signals. Often they cannot be accessed by measurements. Thus, they have to be provided by algorithms such as state variable filters (SVF) (Young, 1970) or differentiating finite impulse response (FIR) filters (Filbert, 1990; Sagara et al., 1991; Wolfram and Moseler, 2000).
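The recursive estimation idea referred to above can be sketched in a few lines. The following is a generic recursive least squares (RLS) update with a forgetting factor, shown in Python/NumPy purely as an illustration; it is not FIT's implementation, and the simulated first-order model, parameter values, and variable names are invented for the example.

```python
import numpy as np

def rls_step(theta, P, psi, y, lam=0.98):
    """One recursive least squares update with exponential forgetting.

    theta : current parameter estimate, shape (n,)
    P     : current covariance-like matrix, shape (n, n)
    psi   : new data (regression) vector, shape (n,)
    y     : new output sample
    lam   : forgetting factor, 0 < lam <= 1 (lam < 1 fades old data)
    """
    gain = P @ psi / (lam + psi @ P @ psi)   # gain vector
    e = y - psi @ theta                      # a priori equation error
    theta = theta + gain * e
    P = (P - np.outer(gain, psi @ P)) / lam
    return theta, P

# Track a first-order ARX model y(k) = -a1*y(k-1) + b0*u(k) whose
# parameter a1 jumps halfway, to show the fading-memory property.
rng = np.random.default_rng(0)
N = 2000
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    a1 = -0.7 if k < N // 2 else -0.5        # parameter change at N/2
    y[k] = -a1 * y[k - 1] + 0.5 * u[k] + 0.01 * rng.standard_normal()

theta = np.zeros(2)
P = 1e4 * np.eye(2)
for k in range(1, N):
    psi = np.array([-y[k - 1], u[k]])
    theta, P = rls_step(theta, P, psi, y[k], lam=0.98)

print(theta)   # close to the final true parameters [-0.5, 0.5]
```

With lam = 1 the update reduces to ordinary RLS and would average over all data; lam < 1 discounts old samples exponentially, which is what allows the estimate to follow the parameter jump.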

Software tools for the identification of discrete time systems are available on the market. MATLAB even provides a graphical user interface, ident.m, for its "System Identification Toolbox". But no comprehensive software has been offered for continuous time system identification so far. Therefore, this paper presents a "Filtering and Identification Tool" that takes these special requirements into account.

In section 2 the basics of parameter estimation are explained. Section 3 gives a short introduction to filters that can generate the derivatives of time signals. Finally, section 4 gives an overview of the graphical user interface and explains the different steps when working with FIT.

2. PARAMETER ESTIMATION

The task consists of estimating process parameters using only measured input and output signals. Starting from a continuous process model represented by the differential equation

y(t) + a_1 y^(1)(t) + ... + a_n y^(n)(t) = b_0 u(t) + b_1 u^(1)(t) + ... + b_m u^(m)(t)   (1)

with the input u(t) and the output y(t), where y^(n)(t) = d^n y(t)/dt^n, the coefficients a_i and b_i represent physical parameters. They can be determined by parameter estimation algorithms such as the least squares method, which can be derived as follows. Writing eq. (1) in vector form

y(t) = ψ^T(t) θ   (2)

with the parameter vector and the continuous-time data vector

θ^T = [a_1 ... a_n  b_0 ... b_m]
ψ^T(t) = [-y^(1)(t) ... -y^(n)(t)  u(t) ... u^(m)(t)],

the equation error e(t) is introduced for parameter estimation:

e(t) = y(t) - ψ^T(t) θ.   (3)

After sampling at the discrete time instants k = t/T_0 = 0, 1, 2, ..., with T_0 the sampling time, minimization of the sum of squared errors

V = Σ_{k=1}^{N} e^2(k) = e^T e   (4)

by setting

dV/dθ = 0   (5)

leads to the least squares (LS) estimate

θ̂(N) = (Ψ^T(N) Ψ(N))^{-1} Ψ^T(N) y(N),   (6)

where Ψ(N) stacks the data vectors ψ^T(k) and y(N) the output samples y(k), k = 1, ..., N. Similarly, discrete time systems represented by the equation

y(k) + a_1 y(k-1) + ... + a_n y(k-n) = b_0 u(k) + b_1 u(k-1) + ... + b_m u(k-m)   (7)

may be treated. This equation yields the discrete time data vector

ψ^T(k) = [-y(k-1) ... -y(k-n)  u(k) ... u(k-m)].

The least squares estimate again follows from eq. (6). FIT can treat both types of model representations; the only assumption the process models have to fulfill is the linear-in-the-parameters property.

A one-step solution of eq. (6) cannot provide the parameter tracking ability that is required in fault detection and adaptive control. Thus, recursive algorithms for solving eq. (6) are required. These algorithms calculate a new estimate θ̂(k) for each sample, based on the previous data vector ψ(k-1) and parameter vector θ̂(k-1) and on the new data vector ψ(k). A variable adaptation (forgetting) factor is used to control the adaptation speed.

There exist several recursive estimation algorithms. The simplest one is the recursive least squares algorithm (RLS). But there are other algorithms with better numerical behaviour, such as the discrete square-root filter in information form (DSFI) (Kaminski et al., 1971) or the recursive modified Gram-Schmidt algorithm (RMGS). For real-time parameter estimation on finite word-length processors a gradient-based algorithm, the normalized least mean squares (NLMS), is often favourable because it requires little computation time. One has to bear in mind, however, that the NLMS, being a gradient-based approach, is more sensitive to noise and converges more slowly to the true parameters. All of these algorithms, which are implemented in FIT, are provided with a forgetting factor λ or a step size parameter μ, respectively, to adjust the fading memory in order to track time variant systems.

3. DIFFERENTIATING FILTERS

As seen in the previous section, identifying continuous dynamic processes requires the derivatives of the input and output signals. If they cannot be measured, they have to be generated on the microcomputer. Due to noise on the measured signals, however, discrete differentiation amplifies the noise and is therefore inappropriate for calculating the derivatives.
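This noise amplification is easy to reproduce numerically. The following snippet (a generic illustration, not taken from the paper; signal and noise levels are arbitrary) differentiates a noisy sine by first-order differences:

```python
import numpy as np

# Sampled sine with a small amount of measurement noise.
T0 = 1e-3                        # sampling time [s]
t = np.arange(0.0, 1.0, T0)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * t) + 1e-3 * rng.standard_normal(t.size)

# Naive discrete differentiation: (x[k] - x[k-1]) / T0.
dx = np.diff(x) / T0
err = dx - 2 * np.pi * np.cos(2 * np.pi * t[1:])

# The white noise is amplified roughly by sqrt(2)/T0, so the 0.1 %
# measurement noise produces derivative errors of order one, i.e.
# comparable to the true derivative amplitude of 2*pi.
print(np.std(err))
```

The shorter the sampling time T_0, the worse the effect, which is why a low-pass characteristic has to be combined with the differentiation.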

So, digital filters were developed that combine a differentiating property with low-pass characteristics. Considering a linear process

y(s)/u(s) = G(s) = B(s)/A(s),   (8)

the filtering has no influence on the estimated parameters if both the input signal u(t) and the output signal y(t) are filtered by the same filter G_f(s):

G(s) = y_f(s)/u_f(s) = (G_f(s) · y(s)) / (G_f(s) · u(s)) = y(s)/u(s).   (9)

There exist several digital differentiating filters. Among the recursive filters, the so-called state variable filter (SVF) is very popular. Because of its recursive structure, it has some drawbacks if finite word-length processors are used for online calculation. In this respect, FIR filters provide better properties. Both types of filters are introduced in the following sections.

3.1 State Variable Filter

The basic idea of a state variable filter is a design such that the internal states x_i(t) represent the filtered derivatives x_f^(i)(t) of the signal to be filtered (see Fig. 1). The output signal x_f(t) is the filtered input signal. Given the transfer function

G_SVF(s) = l_0 / (l_0 + l_1 s + ... + l_n s^n),   (10)

the coefficients l_i can be determined by applying a Butterworth design. The cut-off frequency should be at least the highest frequency of the process to be identified. For the implementation on a microprocessor the algorithm has to be discretized; the approach is described in (Peter and Isermann, 1989) and is omitted here. When applying the filter, all measured input signals have to be filtered by the same filter. The order of the filter should be one higher than the highest required derivative.

Fig. 1. Structure of the state variable filter

3.2 FIR Filters

In comparison to the state variable filter described above, FIR filters have a feed-forward structure and thus provide better numerical properties when the calculation is carried out on processors with finite word-length. The idea of using FIR filters becomes obvious when considering the derivative of the convolution of a signal x(t) with the impulse response g(t) of an arbitrary system:

d/dt (x(t) * g(t)) = dx/dt * g(t) = x(t) * dg/dt.   (11)

Thus, the differentiation of the filtered signal can also be evaluated by filtering the signal x(t) with the differentiated impulse response dg(t)/dt. However, the filter characteristic is not defined by this, and therefore many kinds of filters are conceivable. In order to decrease the influence of noise, it is useful to choose a low-pass filter characteristic that reduces the effect of differentiation on the high-frequency parts of the signal x(t).

A very common approach for the design of an FIR filter with differentiating characteristic is the approximation of the ideal low-pass filter's impulse response. By differentiating the filter's impulse response, the output of the filter provides the first derivative of the input signal (Oppenheim and Schafer, 1998). Several tests have shown that this approach works well up to the first derivative. Derivatives of higher order can also be achieved, but for this purpose a large filter length (number of coefficients greater than 100) has to be chosen to provide a reasonably good approximation of the ideal differentiator. Another type of FIR filter for generating derivatives of time signals uses modulating functions, i.e. window functions such as Blackman, Hamming, etc. (see e.g. (Filbert, 1990)). Besides the window-type modulating functions, (Wolfram and Moseler, 2000) present a comprehensive approach for the design of differentiating FIR filters with arbitrary filter length and order. These modulating functions need fewer coefficients for a good approximation of the derivatives, but for this type of filter the cut-off frequency depends on the filter length.

When applying FIR filters, one filter calculation has to be carried out for the input/output signal and for each required derivative (see Fig. 2), whereas an SVF automatically provides all derivatives of each filtered input signal. Thus, an SVF usually needs less computation time.

Fig. 2. Generation of derivatives using a FIR filter
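The low-pass differentiator idea of this section can be sketched with a simple FIR kernel. The example below uses a derivative-of-Gaussian kernel, which is one convenient low-pass differentiator; it is an illustration only, not one of the paper's own designs (windowed ideal differentiators, modulating functions), and all parameter values are arbitrary.

```python
import numpy as np

# Derivative-of-Gaussian FIR kernel: by eq. (11), convolving with the
# differentiated impulse response yields d/dt of the low-pass filtered
# signal, d/dt (x * g).
T0 = 1e-3                         # sampling time [s]
sigma = 0.01                      # Gaussian width [s], sets the cut-off
k = np.arange(-40, 41)            # 81 taps, covering +/- 4 sigma
tau = k * T0
g = np.exp(-tau**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
h = T0 * (-tau / sigma**2) * g    # sampled dg/dt, scaled by T0

# Apply it to a noisy sine; the filtered derivative stays close to the
# true derivative 2*pi*cos(2*pi*t) despite the measurement noise.
t = np.arange(0.0, 1.0, T0)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * t) + 1e-3 * rng.standard_normal(t.size)
dxf = np.convolve(x, h, mode="same")

core = slice(100, -100)           # ignore boundary transients
err = dxf[core] - 2 * np.pi * np.cos(2 * np.pi * t[core])
print(np.max(np.abs(err)))
```

Compared with the naive difference quotient of the previous section, the low-pass characteristic suppresses the high-frequency noise before it is differentiated, at the price of a small attenuation of the signal band.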

variable filter, e.g. this is depicted in Fig. 6. The respective impulse response and the Bode plot (see Fig. 7) may be visualized graphically. Processing the filter algorithm results in a new matrix FDATA containing the filtered measurements and their derivatives.

Fig. 7. Resulting frequency plot for the first derivative using a SVF

(3) Estimation of parameters: Next, a parameter estimation method has to be selected. In Fig. 8 the DSFI was chosen. According to the figure, a forgetting factor may be adjusted, and the recording of signals of interest during the estimation process can be activated. Finally, the parameter estimation yields the parameter matrix PARA, i.e. θ̂(N), which contains the recursively estimated parameters at each sample step.

Fig. 8. Setup of the DSFI algorithm

(4) Result check: At the end, the results may be plotted. In Fig. 9, e.g., a drifting parameter of an identified PT2 process is depicted. As can be seen, the parameter estimation algorithm can easily follow the changing parameter. Furthermore, the signal plots may be saved to hard disk. If the results are not satisfactory, one can go back to step 2 or 3 in order to change the parameters of the filter design or of the estimation algorithm.

Fig. 9. Estimated, time varying parameter

As shown, the graphical user interface is easy to handle. Results of the identification task may be achieved within only a few steps.

5. CONCLUSIONS

The presented software tool provides a fast graphical user interface for system identification. With special regard to continuous time models, differentiating filters were integrated to provide the parameter estimation algorithms with the necessary derivatives of time signals. Recursive parameter estimation algorithms were chosen to be able to track the parameters of time variant systems. The tool can also be used for the offline adjustment of the algorithms' parameters, which may afterwards be implemented on a DSP or a microcontroller for adaptive online control or online fault detection.

6. ACKNOWLEDGEMENT

The development of the software tool for system identification and early fault detection in electromechanical actuators was supported by the Deutsche Forschungsgesellschaft für die Anwendung der Mikroelektronik e.V. (DFAM) and the Bundesministerium für Wirtschaft (BMWi, AIF-No. 10703 N). The authors are grateful for the financial support and the discussions within the sponsoring committee.

7. REFERENCES

Filbert, D. (1990). A contribution to the test of electric motors by parameter estimation methods. In 'Proceedings of the 25th Universities Power Engineering Conference'. pp. 687-690.

Isermann, R. (1984). 'Process fault detection based on modelling and estimation methods - a survey'. Automatica 20(4), 387-404.

Isermann, R., K.-H. Lachmann and D. Matko (1992). Adaptive Control Systems. Prentice Hall.

Kaminski, P., A.E. Bryson and S.F. Schmidt (1971). 'Discrete square root filtering - a survey of current techniques'. IEEE Transactions on Automatic Control AC-16, 727-736.

Ljung, L. (1987). System Identification - Theory for the User. Prentice Hall.

Ljung, L. and T. Söderström (1983). Theory and Practice of Recursive Identification. MIT Press, Cambridge.

Oppenheim, A. and R. Schafer (1998). Discrete-Time Signal Processing. Prentice Hall.

Peter, K. and R. Isermann (1989). Parameter-adaptive PID-control based on continuous-time process models. In 'Proceedings of the 3rd IFAC Symposium on Adaptive Systems in Control and Signal Processing'.

Sagara, S., Z.J. Yang and K. Wada (1991). 'Identification of continuous systems using digital low-pass filters'. International Journal of Systems Science 22(7), 1159-1176.

Wolfram, A. and O. Moseler (2000). Design and application of digital FIR differentiators using modulating functions. In 'Proceedings of the IFAC SYSID Conference'. Santa Barbara.

Young, P. (1970). 'An instrumental variable method for real-time identification of a noisy process'. Automatica 6, 271-287.