Output Sensitivity Analysis of Time-invariant Linear Systems Using Polynomial Series

by PANAGIOTIS D. SPARIS and SPYRIDON G. MOUROUTSOS

Department of Electrical Engineering, Democritus University of Thrace, Xanthi, Greece
ABSTRACT: A new approximate method is proposed for the determination of the output sensitivity function of linear time-invariant systems using polynomial series expansions. The novelties of the proposed method are the use of the operational matrix of differentiation for the derivation of the algebraic equations approximating the differential equation, and the use of the operational matrix of polynomial series transformation for the simplification of the computer code. The approach appears to be more direct and computationally simple than other presently known techniques.
I. Introduction

The present paper refers to an approximate solution of the problem of sensitivity analysis of time-invariant linear systems with the use of polynomial series. Consider the following time-invariant system described by the differential equation:

    a_n y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = x(t)    (1)

where x(t) is the system's driving function. The differential equation is also accompanied by the appropriate initial or boundary conditions that correctly pose the problem. In the case of an initial value problem, these conditions may have the form

    y^(k)(t = 0) = c_k   for   k = 0, 1, ..., n-1.    (2)
If the coefficient a_j is disturbed about its nominal value a_{j0} by Δa_j, and obtains the value a_j = a_{j0} + Δa_j, the output sensitivity problem is to find the effect of this disturbance on the system's output function y. If the system is well behaved, one may assume that the disturbance Δy of the output will satisfy a relation of the form

    Δy ≈ (∂y/∂a_j)|_{a_{j0}} Δa_j.    (3)

In the above relation, the function (∂y/∂a_j)|_{a_{j0}} is called a nominal output sensitivity function and is denoted by s_j = s(t, a_j), i.e.

    s_j = s(t, a_j) = (∂y/∂a_j)|_{a_{j0}}.    (4)
Therefore, the sensitivity analysis of the system leads to the determination of the function s_j. It has been shown (1) that for this particular problem, the sensitivity function s_j is the solution of the following differential equation:

    a_n s_j^(n) + a_{n-1} s_j^(n-1) + ... + a_0 s_j = -y_0^(j)(t)    (5)
with zero initial conditions, i.e. s_j^(k)(t = 0) = 0 for k = 0, 1, ..., n-1, where y_0^(j) is the jth derivative of the solution of Eq. (1) corresponding to the nominal value of a_j, i.e. a_{j0}. Therefore, to solve the output sensitivity problem with the Frank approach, one has to determine the solution of Eq. (1) with the boundary conditions (2), evaluate the jth derivative y_0^(j)(t), and solve the differential equation (5). Due to the similarity of Eqs. (1) and (5), normally the same method would be used for the solution of both problems. Despite this relative simplification, the Frank approach is rather complicated for consideration as a basis for a numerical solution procedure of the sensitivity problem. Another limitation of the method is the restriction imposed by the boundary conditions, which limits its application to initial value problems only.

An alternative approach to the solution of the sensitivity problem is the estimation of s_j with the use of definition (4) directly. In a recent paper (2) this alternative approach is followed for the development of an approximate method of solution based on orthogonal polynomial series. In that paper the operational matrix of integration is used for the derivation of the algebraic equations that approximate the differential equation (1). The operational matrix of integration P is defined by the relation

    ∫_0^t f(x) dx ≈ P f(t)    (6)

for any type of orthogonal polynomial series, with f(t) the corresponding base vector. The operational matrix of integration P has been derived for various types of orthogonal functions, such as the Walsh (3), the block-pulse (4), the Laguerre (5), the Chebyshev (6), the Fourier (7) and the Hermite (8). It has been shown (9) that for the non-orthogonal Taylor polynomials a similar operational matrix of integration may also be defined.

In the present paper, an alternative approach based on the operational matrix of differentiation is developed. The operational matrix of differentiation is defined by the equation

    (d/dt) f(t) = D f(t)    (7)
where f(t) is the base vector of the series. The operational matrix of differentiation was initially introduced (10) in connection with the Chebyshev series. The main advantage of the operational matrix of differentiation (11) is that it can separate the equations approximating the differential equation from the equations approximating the boundary conditions. In this way one may apply this operational matrix not only to initial value problems
but also to boundary value problems. Both operational matrices offer a similar order of accuracy for the same number of terms in the series expansion. In the present paper it will be shown that the application of the operational matrix of differentiation simplifies considerably the solution of the sensitivity problem of linear time-invariant differential equations using polynomial series approximations. Additionally, the method is not restricted to initial value problems, but may also be applied to boundary value problems. To familiarize the reader with the properties of the operational matrix of differentiation, the following introductory section is included.

1. The operational matrix of differentiation

The operational matrix of differentiation is defined by Eq. (7) for a given polynomial series with a base vector f(t). One may easily show that

    (d^n/dt^n) f(t) = D^n f(t).    (8)

If a function y = y(t) may be approximated by a truncated polynomial series expansion with r terms, such as

    y = y(t) ≈ y^T f(t)    (9)

where

    f(t) = {f_0(t), f_1(t), ..., f_{r-1}(t)}^T    (10)

is the truncated series base vector, and

    y = {y_0, y_1, ..., y_{r-1}}^T    (11)

is the coefficient vector of the function y, then the nth derivative of y may be approximated by the relation

    (d^n/dt^n) y(t) = y^(n) ≈ y^T D^n f(t)    (12)
where D is the r x r matrix of differentiation. Therefore, the differential equation (1) may be approximated by the following matrix equation:

    a_n y^T D^n f(t) + a_{n-1} y^T D^(n-1) f(t) + ... + a_1 y^T D f(t) + a_0 y^T I f(t) = x^T f(t)    (13)

where I is the r x r identity matrix, and x is the coefficient vector of the driving function x(t) expanded in a polynomial series using r terms. Equation (13) may easily yield the following linear system:

    Q y = x    (14)

where

    Q^T = a_n D^n + a_{n-1} D^(n-1) + ... + a_1 D + a_0 I.    (15)
The boundary conditions of the problem may be easily approximated using the matrix D:

    y^(j)(t = t_j) ≈ y^T D^j f(t = t_j) = c_j   for   j = 0, 1, ..., n-1.    (16)
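To make Eq. (16) concrete, each condition contributes one row that acts on the coefficient vector y. The short sketch below (Python/NumPy, ours rather than the paper's; the Taylor basis and the sample points t_a = 0 and t_b = 2 are chosen purely for illustration) builds two such rows for a second-order problem, one fixing y(t_a) and one fixing y^(1)(t_b):

```python
import numpy as np

r = 10
D = np.diag(np.arange(1, r), k=-1).astype(float)   # Taylor-basis D, cf. Eq. (21) below

def f_T(t, r=r):
    """Monomial base vector f_T(t) = (1, t, ..., t^{r-1})^T."""
    return t ** np.arange(r)

# Eq. (16): y^(j)(t_j) ~ y^T D^j f(t_j), i.e. the row (D^j f(t_j))^T acting on y
row_a = np.linalg.matrix_power(D, 0) @ f_T(0.0)    # enforces y(0)  = c_a
row_b = np.linalg.matrix_power(D, 1) @ f_T(2.0)    # enforces y'(2) = c_b

# These two rows replace the last n = 2 equations of (14) to form Q* of (17).
print(row_a)          # (1, 0, 0, ..., 0)
print(row_b)          # (0, 1, 2*2, 3*2^2, ..., (r-1)*2^{r-2})
```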
It is clear that the use of the matrix D allows the application of the boundary conditions to the function y and its derivatives at different points in the solution domain. For example, in the case of a second-order differential equation, one may apply either initial conditions to the function and the first derivative, or two conditions to the function at different times, or a condition on the function at a time t = t_1 and a condition on the derivative at a time t = t_2. Following this approach, the analysis of a linear time-invariant system via the operational matrix of differentiation for polynomial series leads to the simultaneous solution of Eqs. (14) and (16). In the case of an nth order differential equation approximated by a polynomial expansion with r terms (r >> n), the system will have n + r equations with r unknowns. It has been demonstrated (12) that this system may be reduced to a system involving r equations with r unknowns of the form:

    Q* y = x*    (17)
where Q*, x* are arrays formed by replacing the last n equations of the system (14) by the boundary conditions (16). Now Q* is square, and the use of square matrices simplifies considerably the computational aspects of the approach, since the matrix (Q*)^{-1} exists. The main reason for this small complication is the fact that the operational matrix of differentiation is singular, and should not be confused with P^{-1}. This is a natural consequence of the fact that the operation of differentiation cannot be inverted without the introduction of an unknown constant.

The operational matrix of differentiation may be easily found for all polynomial series using the definition equation (7). Consider the simplest form of polynomial series, the Taylor series. An analytic function y = y(t) may be expanded in a Taylor series around the point t = 0:

    y = y(t) = y_0 + y_1 t + y_2 t^2 + ... + y_{r-1} t^{r-1} + ...    (18)

where y_0 = y(0), y_1 = (1/1!)(d/dt)y(0), ..., y_n = (1/n!)(d^n/dt^n)y(0), ... This expansion will converge to the function y(t) in the interval where y is analytic. If we truncate the series expansion at the rth term, we may represent the resulting approximation as y ≈ y^T f_T(t), where

    f_T(t) = {1, t, t^2, ..., t^{r-1}}^T    (19)

and

    y = {y_0, y_1, y_2, ..., y_{r-1}}^T.    (20)

From the definition of the matrix D, one may easily find that the operational matrix of differentiation of the Taylor series is

          | 0   0   0  ...   0    0 |
          | 1   0   0  ...   0    0 |
    D_T = | 0   2   0  ...   0    0 |    (21)
          | .   .   .        .    . |
          | 0   0   0  ...  r-1   0 |
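As a quick illustration (a minimal Python/NumPy sketch of ours, not part of the original paper), the matrix D_T of Eq. (21) can be filled in directly and used to check the approximation (12) for the monomial basis:

```python
import numpy as np

r = 8  # number of terms in the truncated Taylor (monomial) basis

# Operational matrix of differentiation for f_T(t) = (1, t, ..., t^{r-1})^T,
# Eq. (21): (D_T)_{k,k-1} = k, all other entries zero.
D_T = np.zeros((r, r))
for k in range(1, r):
    D_T[k, k - 1] = k              # d/dt t^k = k t^{k-1}

# Check Eq. (12): y'(t) ~ y^T D_T f_T(t) for y(t) = 1 + 2t - 3t^2 + 0.5t^3
y = np.zeros(r); y[:4] = [1.0, 2.0, -3.0, 0.5]
t = 0.7
f_T = t ** np.arange(r)            # base vector evaluated at t
approx = y @ D_T @ f_T             # y^T D_T f_T(t)
exact = 2.0 - 6.0 * t + 1.5 * t**2 # analytic derivative
print(approx, exact)               # both -1.465
```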
In a similar way the operational matrices of differentiation for other types of polynomial series may be compiled. However, an alternative approach is also possible. It has been demonstrated (11, 12) that the operational matrices of integration of all polynomial series are interrelated via the operational matrix of polynomial series transformation. This matrix is defined by the equation

    f(t) = T f_T(t)    (22)

where f(t) is the base vector of any polynomial series, and f_T(t) is the base vector of the Taylor series, defined by (19). Using the matrix T one may easily show the following very useful relations:

    P = T P_T T^{-1}    (23)

    D = T D_T T^{-1}.    (24)

The matrix T may also be used for the computation of powers of P, D, since

    P^n = T P_T^n T^{-1}    (25)

    D^n = T D_T^n T^{-1}    (26)

where P, D are the integration and differentiation matrices of any polynomial series, and P_T, D_T are the corresponding matrices for the Taylor series. The use of the operational matrix of polynomial series transformation T simplifies considerably the computational effort required for the analysis of a system using more than one polynomial series for comparison. For example, if we compute the matrix Q in Eq. (15) via Taylor series, the corresponding matrix Q for other types of polynomial series may be found simply as

    Q = T Q_T T^{-1}.    (27)
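The relation (24) is easy to exercise numerically. The following sketch (again Python/NumPy and ours; it assumes that the rows of T hold the monomial coefficients of each Chebyshev polynomial, which is one concrete way of realizing Eq. (22)) builds T with the standard cheb2poly conversion and recovers the Chebyshev differentiation matrix from D_T:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

r = 8

# Taylor (monomial) differentiation matrix, as in Eq. (21)
D_T = np.diag(np.arange(1, r), k=-1).astype(float)

# Transformation matrix T of Eq. (22): f(t) = T f_T(t); here f(t) is the
# Chebyshev base vector, so row k holds the monomial coefficients of T_k(t).
T = np.zeros((r, r))
for k in range(r):
    e_k = np.zeros(k + 1); e_k[k] = 1.0
    T[k, :k + 1] = C.cheb2poly(e_k)

# Eq. (24): the Chebyshev differentiation matrix
D_cheb = T @ D_T @ np.linalg.inv(T)

# Spot check: d/dt T_3(t) = 12 t^2 - 3 should match component 3 of D_cheb f(t)
t = 0.3
f_cheb = np.array([C.chebval(t, np.eye(r)[k]) for k in range(r)])
print((D_cheb @ f_cheb)[3], 12 * t**2 - 3)    # both -1.92
```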
Having established the matrices D, T as useful tools in system analysis, we may now proceed with the sensitivity problem in question.

2. Sensitivity analysis

In the previous section the application of the operational matrix of differentiation transformed the differential equation (1) and the corresponding boundary conditions into a linear system, namely Eq. (17). The solution of this system yields the coefficient vector y of the polynomial series expansion of the solution. The unknown function y(t) can be determined by Eq. (9).
If we now combine the definition of the sensitivity function given by Eq. (4) with the results of the system's analysis expressed by Eqs. (9) and (17), we obtain the following approximate expression for s_j:

    s_j = (∂y/∂a_j)|_{a_{j0}} ≈ (∂/∂a_j)(y^T f(t)) = (∂/∂a_j)(f^T(t) y) = f^T(t) (∂y/∂a_j).    (28)

However, if we differentiate Eq. (17), we have

    (∂Q*/∂a_j) y + Q* (∂y/∂a_j) = 0    (29)

since the driving function x(t) and the boundary conditions are independent of a_j. Therefore, we finally obtain the following approximate expression for the nominal output sensitivity function:

    s_j ≈ f^T(t) (-(Q*)^{-1}) (∂Q*/∂a_j) y.    (30)
Since the boundary conditions are independent of a_j, we may assume that

    ∂Q*/∂a_j = ∂Q/∂a_j = D^j.    (31)

So finally the nominal sensitivity function s_j may be determined in the form of a polynomial series expansion as

    s_j = s(t, a_j) ≈ s^T f(t) = f^T(t) s    (32)

where

    s = -(Q*)^{-1} D^j y = -(Q*)^{-1} D^j (Q*)^{-1} x*.    (33)

The above relation shows that, if a linear time-invariant system is analysed via the operational matrix of differentiation D, then the output sensitivity analysis of the solution y(t) ≈ y^T f(t) may be easily performed by a multiplication of the solution coefficient vector y by the matrix Z, where

    Z = -(Q*)^{-1} D^j.    (34)
This procedure appears to be much simpler than the classical Frank approach or the operational matrix of integration approximate approach. To illustrate the accuracy of the proposed method a number of characteristic examples were computed. In these examples, a number of polynomial series are used for the purpose of comparison, namely the Chebyshev, the Legendre, the Laguerre, the Hermite and the Taylor. This comparison is considered necessary in view of the "sensitive" nature of the problem. As Eq. (4) clearly shows, the nominal sensitivity function is a partial derivative of the solution, and therefore its estimation requires a high degree of accuracy. This fact is also evident in the approximate approach from the presence of the matrix (Q*)^{-1} in Eq. (33).
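Before turning to the examples, a compact sketch of the whole procedure may be helpful. The following Python/NumPy fragment is illustrative only (the variable names, the monomial basis and the unscaled interval 0 <= t <= 1 are our assumptions, not the paper's); it assembles Q* for the first-order equation a_1 y' + a_0 y = 1 of Example 1 below, solves Eq. (17), and then applies Eq. (33). The transposition of (a_1 D + a_0 I) reflects the coefficient matching in Eq. (13), as expressed by Eq. (15).

```python
import numpy as np

r = 20                                   # number of series terms
a1, a0 = 1.0, 1.0                        # nominal coefficients of a1*y' + a0*y = 1

# Taylor-basis operational matrix of differentiation, Eq. (21)
D = np.diag(np.arange(1, r), k=-1).astype(float)

# Eqs. (14)-(15): equating coefficients of a1*y' + a0*y = x gives (a1*D + a0*I)^T y = x
Q = (a1 * D + a0 * np.eye(r)).T
x = np.zeros(r); x[0] = 1.0              # driving function x(t) = 1

# Eq. (17): replace the last (n = 1) equation by the initial condition y(0) = 0,
# i.e. by the row f_T(0)^T = (1, 0, ..., 0)
Q_star, x_star = Q.copy(), x.copy()
Q_star[-1, :] = 0.0; Q_star[-1, 0] = 1.0
x_star[-1] = 0.0

y = np.linalg.solve(Q_star, x_star)      # coefficients of the nominal solution

# Eq. (33): s = -(Q*)^{-1} (dQ*/da1) y; in this transposed formulation
# dQ*/da1 = D^T with the boundary-condition row (independent of a1) zeroed out
dQ_star = D.T.copy()
dQ_star[-1, :] = 0.0
s = -np.linalg.solve(Q_star, dQ_star @ y)

# Compare with the exact sensitivity s1(t) = -t exp(-t) on 0 <= t <= 1
t = np.linspace(0.0, 1.0, 5)
f = np.vander(t, r, increasing=True).T   # columns are f_T(t_i)
print(s @ f)                             # approximate s1(t_i)
print(-t * np.exp(-t))                   # exact values; the two rows agree closely
```

With r = 20 terms the two printed rows agree to roughly machine precision on 0 <= t <= 1; the paper's Example 1 instead works on 0 <= t <= 5 through the substitution t* = t/5.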
II. Examples

Example 1

Consider the following differential equation

    a y^(1) + y = 1

with boundary condition y(0) = 0 and the nominal value a = a_{10} = 1. The exact solution of this equation is

    y = y(t) = 1 - exp(-t/a).

Thus, the nominal solution is y_0 = 1 - exp(-t). The nominal output sensitivity equation (5) takes the form

    s_1^(1) + s_1 = -y_0^(1) = -exp(-t)
with boundary condition s_1(0) = 0, according to the Frank approach. The solution of this equation is s_1 = -t exp(-t). This solution could also be found from the definition of the nominal sensitivity function (4):

    s_1 = ∂y/∂a = -(t/a^2) exp(-t/a) = -t exp(-t)   for   a = 1.
This simple sensitivity problem is solved by the developed method using Taylor, Chebyshev, Legendre, Laguerre and Hermite polynomial series, in the interval 0 <= t <= 5. For this purpose, the interval of solution for the Taylor, Chebyshev and Legendre polynomials is transformed to 0 <= t* <= 1, to lie within the domain of convergence of these series, via the transformation t* = t/5. The Laguerre and Hermite polynomials do not require a similar transformation, since they converge in the intervals (0, ∞) and (-∞, ∞), respectively.

The results of the system's analysis for the output function y(t) using r = 20 terms are indicated in Table I. These results indicate that, with the exception of the Hermite series, which presents a considerable error for t > 3.5, the other polynomial series give very accurate results. Especially good is the performance of the Chebyshev and the Legendre series, whereas the performance of the non-orthogonal Taylor polynomials is understandably inferior but still very good. In general, considerable improvement in the performance of Taylor series expansions may be expected if the solution interval is transformed to the interval -1 <= t <= 1, where the series converges absolutely.

Using the results in Table I for the system's solution y(t), the values shown in Table II were obtained for the nominal sensitivity function using Eq. (33).
TABLE I
Values of the exact and the approximate solution of the differential equation

  t     Exact      Taylor     Chebyshev   Legendre   Laguerre   Hermite
 0.0   0.000000   0.000000   0.000000    0.000000   0.000000   0.000000
 0.5   0.393469   0.393469   0.393469    0.393469   0.393469   0.393469
 1.0   0.632120   0.632120   0.632120    0.632120   0.632120   0.632120
 1.5   0.776869   0.776869   0.776869    0.776869   0.776870   0.776870
 2.0   0.864664   0.864664   0.864664    0.864664   0.864665   0.864665
 2.5   0.917915   0.917915   0.917915    0.917915   0.917915   0.917915
 3.0   0.950213   0.950213   0.950213    0.950213   0.950212   0.950213
 3.5   0.969802   0.969802   0.969802    0.969802   0.969803   0.969803
 4.0   0.981684   0.981684   0.981684    0.981684   0.981685   0.981748
 4.5   0.988891   0.988895   0.988891    0.988891   0.988891   0.990194
 5.0   0.993262   0.993293   0.993262    0.993263   0.993261   1.007808
These results indicate that the maximum errors of the approximate solutions of the sensitivity equation are:

    e_Taylor = O(1E-3),  e_Chebyshev < O(1E-6),  e_Legendre = O(1E-6),  e_Laguerre = O(1E-5),  e_Hermite = O(1E-2).

Therefore, for a given problem, if maximum accuracy is essential, the orthogonal polynomial series based on the Chebyshev or Legendre polynomials must be used.

TABLE II
Values of the exact and approximate solution of the nominal output sensitivity equation
  t     Exact       Taylor      Chebyshev    Legendre     Laguerre     Hermite
 0.0    0.000000    0.000000    0.000000     0.000000     0.000000     0.000000
 0.5   -0.303265   -0.303265   -0.303265    -0.303265    -0.303265    -0.303265
 1.0   -0.367879   -0.367879   -0.367879    -0.367879    -0.367878    -0.367879
 1.5   -0.334695   -0.334695   -0.334695    -0.334695    -0.334697    -0.334695
 2.0   -0.270670   -0.270670   -0.270670    -0.270670    -0.270672    -0.270670
 2.5   -0.205212   -0.205212   -0.205212    -0.205212    -0.205208    -0.205212
 3.0   -0.149361   -0.149361   -0.149361    -0.149361    -0.149357    -0.149361
 3.5   -0.105690   -0.105691   -0.105690    -0.105691    -0.105694    -0.105693
 4.0   -0.073262   -0.073270   -0.073262    -0.073263    -0.073273    -0.073349
 4.5   -0.049990   -0.050068   -0.049990    -0.049993    -0.049996    -0.051322
 5.0   -0.033689   -0.034316   -0.033689    -0.033698    -0.033680    -0.046132
These computations should preferably be carried out using double precision variables, due to the errors involved in the computation of (Q*)^{-1}. An interesting comparison may also be made between these results and the results obtained in (8) using the operational matrix of integration and the Hermite series with 20 terms (Table III). These results indicate that overall, for the specific example under consideration, the present method is more accurate than the P approach. Nevertheless, this is an isolated example and therefore general conclusions should not be drawn, especially in view of the fact that the Hermite series performs rather poorly in this example compared to the other series.
Example 2

Consider now the following second-order differential equation

    a y^(2) + y^(1) + y = 0   for   0 <= t <= 4π

with boundary conditions y(0) = 1, y^(1)(0) = 0 and the nominal value a = a_{20} = 1. The exact nominal solution of this equation is

    y = y(t) = exp(-t/2) (cos(√3 t/2) + √3 sin(√3 t/2)/3).

The exact solution of the differential equation is

    y = y(t) = exp(-t/2a) (cos(mt) + sin(mt)/√(4a - 1))

where m = √(4a - 1)/(2a) and a > 1/4.

To solve this differential equation by the developed method via Taylor, Chebyshev and Legendre series, the interval of solution must be transformed to -1 <= t* <= 1. For this purpose the simple transformation t* = (t/2π) - 1 is adequate. The results of the system's analysis for the output function y(t) using r = 20 terms are indicated in Table IV. These results indicate that the Chebyshev and the Legendre series have excellent accuracy, whereas the Hermite series starts to diverge for t > 8π/5.
TABLE III
Comparison of the sensitivity analysis results using the operational matrix of differentiation D and integration P

  t     Exact y     y via D     y via P      Exact s      s via D      s via P
 0.0   0.000000    0.000000    0.000000     0.000000     0.000000     0.000000
 0.5   0.393469    0.393469    0.393691    -0.303265    -0.303265    -0.303264
 1.0   0.632120    0.632120    0.632128    -0.367879    -0.367879    -0.367879
 1.5   0.776869    0.776870    0.776894    -0.334695    -0.334695    -0.334692
 2.0   0.864664    0.864665    0.864704    -0.270670    -0.270670    -0.270676
 2.5   0.917915    0.917915    0.917951    -0.205212    -0.205212    -0.205256
 3.0   0.950213    0.950213    0.950212    -0.149361    -0.149361    -0.149749
 3.5   0.969802    0.969803    0.969809    -0.105690    -0.105693    -0.105213
 4.0   0.981684    0.981748    0.981653    -0.073262    -0.073273    -0.079254
TABLE IV
Values of the exact and the approximate solution of the differential equation

   t       Exact       Taylor      Chebyshev    Legendre     Laguerre     Hermite
   0      1.000000    1.000000    1.000000     1.000000     1.000000     1.000000
  π/5     0.843218    0.843334    0.843218     0.843218     0.843219     0.843218
  2π/5    0.520388    0.520600    0.520388     0.520388     0.520386     0.520388
  3π/5    0.200547    0.200760    0.200547     0.200547     0.200549     0.200547
  4π/5   -0.026972   -0.026818   -0.026972    -0.026972    -0.026970    -0.026972
   π     -0.140700   -0.140621   -0.140700    -0.140700    -0.140706    -0.140700
  6π/5   -0.161461   -0.161445   -0.161461    -0.161461    -0.161466    -0.161455
  7π/5   -0.126736   -0.126758   -0.126736    -0.126736    -0.126723    -0.126738
  8π/5   -0.072261   -0.072296   -0.072261    -0.072261    -0.072240    -0.076269
  9π/5   -0.022701   -0.022734   -0.022701    -0.022701    -0.022708    -0.097655
  2π      0.010178    0.010155    0.010178     0.010178     0.010125    -0.771272
 11π/5    0.024831    0.024821    0.024831     0.024831     0.024777    -5.698621
 12π/5    0.025605    0.025604    0.025605     0.025605     0.025633    -32.621198
 13π/5    0.018755    0.018759    0.018755     0.018755     0.018905    -153.830227
 14π/5    0.009780    0.009786    0.009780     0.009780     0.009971    -622.208884
  3π      0.002218    0.002223    0.002218     0.002218     0.002252    -2217.06205
 16π/5   -0.002445   -0.002442   -0.002445    -0.002445    -0.002757    -7092.28805
 17π/5   -0.004240   -0.004238   -0.004240    -0.004240    -0.004887    -20652.8780
 18π/5   -0.003994   -0.003992   -0.003994    -0.003994    -0.004642    -55308.2345
 19π/5   -0.002730   -0.002711   -0.002730    -0.002730    -0.002783    -137209.968
  4π     -0.001281   -0.001123   -0.001281    -0.001281    -0.001896    -316832.142

The performance of the Taylor series is quite good, with a maximum error of O(1E-4).

The sensitivity function s may be easily found from the expression of the exact solution y = y(t; a) by differentiation:

    ∂y/∂a = (t/2a^2) exp(-t/2a) (cos mt + sin mt/√(4a - 1))
            + exp(-t/2a) { t (∂m/∂a) [cos mt/√(4a - 1) - sin mt] - 2 sin mt/√(4a - 1)^3 }

where m = √(4a - 1)/(2a) and ∂m/∂a = (1 - 2a)/(2a^2 √(4a - 1)). The nominal sensitivity function is determined by setting a = 1, therefore

    s = (∂y/∂a)|_{a=1} = 0.5t exp(-0.5t) (cos(√3 t/2) + √3 sin(√3 t/2)/3)
        + exp(-0.5t) { (√3 t/6) [sin(√3 t/2) - √3 cos(√3 t/2)/3] - 2 sin(√3 t/2)/√27 }.
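The differentiated expression above is easy to mistype, so it is worth checking numerically. The sketch below (Python/NumPy, ours, used only as a verification aid) compares it at a = 1 with a central finite difference of the exact solution y(t; a) in the parameter a:

```python
import numpy as np

def y_exact(t, a):
    """Exact solution of a*y'' + y' + y = 0, y(0) = 1, y'(0) = 0, for a > 1/4."""
    m = np.sqrt(4 * a - 1) / (2 * a)
    return np.exp(-t / (2 * a)) * (np.cos(m * t) + np.sin(m * t) / np.sqrt(4 * a - 1))

def s_formula(t):
    """The differentiated expression evaluated at the nominal value a = 1."""
    w = np.sqrt(3) * t / 2
    return (0.5 * t * np.exp(-0.5 * t) * (np.cos(w) + np.sin(w) / np.sqrt(3))
            + np.exp(-0.5 * t) * (np.sqrt(3) * t / 6 * (np.sin(w) - np.cos(w) / np.sqrt(3))
                                  - 2 * np.sin(w) / np.sqrt(27)))

t = np.linspace(0.0, 4 * np.pi, 9)
h = 1e-6
fd = (y_exact(t, 1.0 + h) - y_exact(t, 1.0 - h)) / (2 * h)   # central difference in a
print(np.max(np.abs(s_formula(t) - fd)))   # small (around 1e-9), confirming the formula
```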
TABLE V
Values of the exact and the approximate solutions of the nominal output sensitivity equation

   t       Exact       Taylor      Chebyshev    Legendre     Laguerre     Hermite
   0      0.000000    0.000000    0.000000     0.000000     0.000000     0.000000
  π/5     0.122510    0.121135    0.122510     0.122510     0.122510     0.122510
  2π/5    0.264661    0.262211    0.264661     0.264661     0.264661     0.264661
  3π/5    0.258482    0.256125    0.258482     0.258482     0.258477     0.258482
  4π/5    0.113681    0.112075    0.113681     0.113681     0.113690     0.113681
   π     -0.077329   -0.078070   -0.077329    -0.077329    -0.077321    -0.077329
  6π/5   -0.222798   -0.222877   -0.222798    -0.222798    -0.222825    -0.222804
  7π/5   -0.275586   -0.275307   -0.275586    -0.275586    -0.275616    -0.275628
  8π/5   -0.238598   -0.238225   -0.238598    -0.238598    -0.238550    -0.236724
  9π/5   -0.146987   -0.146680   -0.146987    -0.146987    -0.146869    -0.107239
  2π     -0.044224   -0.044046   -0.044224    -0.044224    -0.044202     0.349630
 11π/5    0.036145    0.036204    0.036145     0.036145     0.035908     2.583858
 12π/5    0.078512    0.078493    0.078512     0.078512     0.078121    12.003149
 13π/5    0.084086    0.084033    0.084086     0.084086     0.083988    41.145539
 14π/5    0.064561    0.064508    0.064561     0.064561     0.065229    92.909584
  3π      0.034726    0.034690    0.034726     0.034726     0.036090    20.876614
 16π/5    0.006856    0.006839    0.006856     0.006856     0.079379   -1115.2387
 17π/5   -0.011996   -0.011998   -0.011996    -0.011996    -0.012766   -7388.1600
 18π/5   -0.020077   -0.020085   -0.020077    -0.020077    -0.023758   -33314.298
 19π/5   -0.019388   -0.019532   -0.019388    -0.019388    -0.025132   -124379.06
  4π     -0.013629   -0.014829   -0.013629    -0.013629    -0.017921   -410011.08
The application of the proposed method for the computation of the values of the nominal sensitivity function in the interval 0 <= t <= 4π gives the values shown in Table V. The results in Table V again demonstrate the superiority of the Chebyshev and Legendre polynomial series. As expected, due to the deterioration of the Hermite approximation for the output function y(t), the results of the sensitivity analysis using this type of orthogonal polynomial series are very poor. On the other hand, the Taylor series, defined in the interval -1 <= t* <= 1, presents a maximum error of O(1E-3) in spite of its non-orthogonality, i.e. an accuracy that is quite good for most practical purposes.
III. Conclusions

In the present paper a new approximate method is introduced for the solution of the output sensitivity problem of linear time-invariant differential equations, using polynomial series. The novel aspects of the proposed approach are the use of the operational matrix of differentiation for the derivation of the algebraic equations that approximate the differential equation, and the use of the operational matrix of polynomial series transformation for the transformation of a given polynomial series expansion to another, based on a different base vector.
The use of these operational matrices simplifies considerably the computational procedure compared to other presently known methods. The method is also well suited to be used as a basis for the compilation of a general program for any type of polynomial series. A similar program in BASIC is presently in operation, and has produced the numerical examples of the present paper.

In all cases tested, the Chebyshev and the Legendre series have yielded results that are accurate to the sixth significant digit. The results obtained using the Laguerre, the Taylor and the Hermite series are generally less accurate. The errors involved for a given number of terms in the series expansion increase with t, since all truncated power series expansions diverge for large values of t. In general, if maximum accuracy is desirable, the Chebyshev or the Legendre polynomials should be used.

If a large number of terms is used in the expansion, it is advisable to use double precision variables for a more accurate computation of (Q*)^{-1}. This is a good rule to follow in all numerical work involving inversion of large-order matrices. However, in most practical cases a 20-term series expansion is usually adequate, and in this case a single precision computation is satisfactory.

References

(1) P. M. Frank, "Introduction to System Sensitivity Theory", Academic Press, New York, 1978.
(2) P. N. Paraskevopoulos and G. Kekkeris, "Output sensitivity analysis using orthogonal functions", Int. J. Control, Vol. 40, No. 4, pp. 763-772, 1984.
(3) C. F. Chen and C. H. Hsiao, "Design of piecewise constant gains for optimal control via Walsh functions", IEEE Trans. Aut. Control, Vol. 20, pp. 596-603, 1975.
(4) P. Sannuti, "Analysis and synthesis of dynamical systems via block-pulse functions", Proc. IEE, Vol. 124, pp. 569-571, 1977.
(5) R. W. King and P. N. Paraskevopoulos, "Parametric identification of discrete-time SISO systems", Int. J. Control, Vol. 30, pp. 1023-1029, 1979.
(6) P. N. Paraskevopoulos, "Chebyshev series approach to system identification, analysis and optimal control", J. Franklin Inst., Vol. 316, pp. 135-157, 1983.
(7) P. N. Paraskevopoulos, P. D. Sparis and S. G. Mouroutsos, "The Fourier series approach to system identification, analysis and optimal control", Proc. AMSE '83 Conference on Modelling and Simulation, Nice, France, Vol. 1, pp. 31-50, 1983.
(8) P. N. Paraskevopoulos and G. Kekkeris, "Hermite series approach to system identification, analysis and optimal control", Proc. MECO 83 Conf., Athens, Greece, pp. 146-149, 1983.
(9) S. G. Mouroutsos and P. D. Sparis, "Taylor series approach to system identification, analysis and optimal control", J. Franklin Inst., Vol. 319, pp. 359-371, 1985.
(10) P. D. Sparis and S. G. Mouroutsos, "Linear system analysis using the operational matrix of differentiation via Chebyshev series", Proc. Int. 84 AMSE Summer Conference, Athens, Greece, pp. 137-151, 1984.
(11) P. D. Sparis and S. G. Mouroutsos, "A comparative study of the operational matrices of integration and differentiation for orthogonal polynomial series", Int. J. Control, Vol. 42, No. 3, pp. 621-638, 1985.
(12) P. D. Sparis and S. G. Mouroutsos, "The operational matrix of polynomial series transformation", Int. J. Syst. Sci., Vol. 16, No. 9, pp. 1173-1184, 1985.