Journal of Sound and Vibration (1976) 45(1), 69-89

RANDOM VIBRATION ANALYSIS OF STOCHASTIC TIME-VARYING SYSTEMS

R. N. IYENGAR, Department of Civil Engineering, and P. K. DASH, Department of Aeronautical Engineering, Indian Institute of Science, Bangalore 560012, India

(Received 13 May 1975, and in revised form 7 August 1975)
The analysis of the free and forced vibration of a randomly time-varying system is the subject matter of this paper. This is a complicated problem which has received relatively little discussion in the literature. Herein two methods are presented, apart from the digital simulation technique, of finding the response moments. The first one is a series technique which can be considered as a generalization of the well known Galerkin method. The second method belongs to the class of closure techniques. Upon presuming some of the joint distributions to be Gaussian, equations are derived for the first two response moments. It is shown further that the non-Gaussian output density can be approximately predicted by a simple transformation. Detailed numerical results are obtained and compared with computer simulated response statistics. It is demonstrated that the methods developed here are highly efficient. In particular it is found that the Gaussian closure approximation has a wide range of application.
1. INTRODUCTION

In the literature available on the random vibration of systems, most of the investigations are concerned with the analysis of deterministic systems under random excitations. However, the study of systems with random coefficients arises in a wide variety of boundary value and initial value problems. The notable contributions in this direction have been the study of random eigenvalue problems by Soong and Bogdanoff [1] and Boyce [2], and the investigations on stochastic stability by Kozin [3, 4], Infante and Plaut [5], Caughey and Gray [6] and Ariaratnam [7]. It is observed that the stability aspect has attracted more attention. This may be due to the fact that there are methods available in stability analysis wherein one need not know the explicit solution to the problem. The state of the art of stochastic stability has been very well represented in the proceedings of an International Symposium on the subject held in 1972 [8]. In contrast to this the results available on the quantitative response analysis of stochastic systems are very limited. For an explicit determination of the response moments, the methods available at present are the small perturbation and the closure methods based on correlation or cumulant discard [9-11]. The system tackled widely in this connection has been a second order linear system governed by the equation

ẍ + 2ηωẋ + [ω² + εa(t)]x(t) = f(t).  (1)
Even though the above methods lead to an approximate description of the moments, there are no techniques available for even a rough estimation of the probability densities. The organization of the present paper is as follows. In section 2, the currently available techniques are briefly reviewed. This is mainly for the purpose of completeness and later reference. This is followed by the solution of equation (1) in great detail by three different methods. The first method is a series technique, wherein the response is expressed as a series in the powers of the process a(t). The coefficients of the series are taken as deterministic constants to be found by a mean square error minimization procedure. The second method is a new closure approximation, wherein from the beginning the joint probability density of a(t) and x(t) is taken to be Gaussian so that one arrives at an integro-differential equation for the mean and a simple differential equation for the auto-correlation. As a further development of this, a transformation is suggested to estimate the non-Gaussian response density from the known Gaussian density function. A somewhat similar method has been suggested and found to be successful by Iyengar [12], in the analysis of non-linear random vibration problems. The third method is the numerical sample solution of the system for the various realizations of a(t) and f(t) simulated on a digital computer. The digital simulation has been undertaken only with the view of verifying the theoretical predictions and hence is not exhaustive. In particular the level crossing and first passage probabilities are not studied. Detailed numerical results are presented for a particular combination of a(t) and f(t). A parametric study also has been conducted by varying η and ε to understand the effect of these on the validity of the solutions. The paper ends with a discussion on the results obtained and a set of important conclusions.
2. ANALYSIS

Equation (1) can be recast in the form

x(t) = ∫₀ᵗ [f(τ) − εa(τ)x(τ)] h(t − τ) dτ.  (2)

Here it is assumed that the system starts from rest with the initial conditions

x(0) = ẋ(0) = 0.  (3)

h(t) is the impulse response of the system when ε = 0. For an under-damped system, which is of great importance in vibration problems, it follows that

h(t) = (1/ω_d) e^{−ηωt} sin ω_d t,  (4)
ω_d = ω(1 − η²)^{1/2}.  (5)

In some problems the interest may be in the fundamental solutions of the homogeneous equation

ẍ + 2ηωẋ + (ω² + εa)x = 0.  (6)

The two fundamental solutions u and v are the ones with the initial conditions

u(0) = 0,  u̇(0) = 1,  (7)

v(0) = 1,  v̇(0) = 0.  (8)

u and v may also be considered as the solutions of the integral equations

u(t) = h(t) − ε ∫₀ᵗ a(τ) u(τ) h(t − τ) dτ,  (9)

v(t) = g(t) − ε ∫₀ᵗ a(τ) v(τ) h(t − τ) dτ,  (10)

where

g(t) = e^{−ηωt} [cos ω_d t + η sin ω_d t/(1 − η²)^{1/2}].  (11)

The above integral equation formulation will be useful for some later analysis.
2.1. SMALL PERTURBATION
For very small values of ε, equations (1) and (6) can be solved by a straightforward power series expansion:

x(t) = Σ_{i=0}^∞ ε^i z_i(t).  (12)

For the forced response, this leads to the solution

z₀(t) = ∫₀ᵗ f(τ) h(t − τ) dτ,  (13)

z_i(t) = −∫₀ᵗ a(τ) z_{i−1}(τ) h(t − τ) dτ.  (14)
Similarly the fundamental solutions also can be determined. Even though the complete solutions can be expressed in terms of the z_i(t), it will not be practicable in this method either to consider terms of order higher than ε or to find moments of order higher than two. To order ε, the first two response moments can be expressed as

m = ⟨x⟩ = ⟨z₀⟩ + ε⟨z₁⟩,  (15)

⟨x(t₁) x(t₂)⟩ = ⟨z₀(t₁) z₀(t₂)⟩ + ε[⟨z₀(t₁) z₁(t₂)⟩ + ⟨z₁(t₁) z₀(t₂)⟩],  (16)
where the angular brackets denote ensemble averages. It may be observed here that to order ε the perturbation formalism leads to the moments easily. However if one is looking for an estimation of the response probability density function the approach may not be of much use. If one presumes that a(t) and f(t) are Gaussian random processes, the first term of the series in equation (12) is Gaussian. But this does not include the effect of a(t). The distribution of the next term, z₁(t), is highly complicated since equation (14) involves the product of random processes on the right-hand side. Added to this is the limitation of the validity of the moments to very small values of ε. The difficulties involved in the solution of even deterministic time-varying differential equations are too well known to be detailed here. The handling of stochastic equations is even more complicated, particularly when one asks for probability densities or better estimates of the moments.

2.2. ITERATION TECHNIQUE
A method which is theoretically more sound and more general than the perturbation method is the iterative technique [13]. In this method one has to start from an initial solution w(t) and iterate equation (2). This would lead to successive approximations for x(t) as

x₀(t) = w(t),  (17)

x_i(t) = z₀(t) − ε ∫₀ᵗ a(τ) x_{i−1}(τ) h(t − τ) dτ  (i = 1, 2, 3, ...).  (18)
Under some conditions on the functions involved, it would be possible to show [13] that x_i(t) tends to the exact solution x(t) as i → ∞. But it is easy to observe that in random vibration problems this would be useful if only one or two iterations are sufficient to yield reasonably good results. This in turn would depend essentially on the initial approximation w(t). Suppose the starting solution is taken as the zeroth order solution

w(t) = z₀(t) = ∫₀ᵗ f(τ) h(t − τ) dτ.  (19)
Then the iterates will be

x_i(t) = z₀(t) − ε ∫₀ᵗ a(τ) x_{i−1}(τ) h(t − τ) dτ  (i = 1, 2, 3, ...).  (20)
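The iterates (20) can be sketched numerically for one simulated realization; the white-noise stand-in for a(t), the forcing and the step sizes below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

# Picard-style iteration of equation (2): x_i = z0 - eps * int a*x_{i-1}*h.

def impulse_response(t, w=2 * np.pi, eta=0.05):
    wd = w * np.sqrt(1.0 - eta ** 2)
    return np.exp(-eta * w * t) * np.sin(wd * t) / wd

def convolve_lower(g, h, dt):
    """Discretized int_0^t g(tau) h(t - tau) dtau."""
    return np.array([np.sum(g[:k + 1] * h[k::-1]) for k in range(len(g))]) * dt

dt, eps = 0.01, 0.1
t = np.arange(0.0, 5.0, dt)
h = impulse_response(t)
rng = np.random.default_rng(1)
a = rng.standard_normal(len(t))   # stand-in sample of a(t)
f = np.cos(t)                     # arbitrary forcing sample
z0 = convolve_lower(f, h, dt)     # starting solution, equation (19)

x = z0.copy()
for i in range(5):                # iterates of equation (20)
    x_new = z0 - eps * convolve_lower(a * x, h, dt)
    change = np.max(np.abs(x_new - x))
    x = x_new
# when eps times the integral operator is a contraction, `change` shrinks
# geometrically from one iteration to the next
```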
Since the above solution is the same as the perturbation series of equation (12) this initial choice may lead to poor convergence. Also the determination of the probability density even for i = 1 is tedious as already pointed out.

2.3. HIERARCHY CLOSURE

The hierarchy closure techniques have been used in the literature for both initial value and eigenvalue problems [2, 10]. The method consists in deriving differential equations for a particular order response moment by using an approximation on the higher order moments. As an example, upon averaging equation (1) it is seen that the equation for m(t) is not closed, as it contains the higher order unknown correlation ⟨a(t)x(t)⟩. A typical closure approximation at this stage could be the "correlation discard" wherein a(t) and x(t) are taken to be uncorrelated. Alternatively one could go a step further to derive an equation for ⟨ax⟩. But this would contain a still higher order moment of the form ⟨a(t₁)a(t)x(t)⟩. Thus an approximation would be needed at some stage or other to close the equations of the moments. Apart from the somewhat arbitrary nature of the decision as to where one has to curtail the hierarchy, that is inherent in the technique, the solution of systems with random damping and the analysis of coupled stochastically varying systems would be rather difficult by this method. The above discussion points to the necessity of developing approaches which could lead to better initial approximations for use in the iteration technique and predict reasonably well the response probability distributions. With this in view two methods are proposed in the next two sections, which seem to fulfil these objectives to a large extent.
3. SERIES WITH MINIMUM ERROR

In deterministic analysis the unknown functions often are expanded in series of known functions. The coefficients of the series are found by an error minimization procedure. This is generally known as the Galerkin method. In this section an attempt is made to extend this to random vibration problems. It seems plausible to presume that for the two random processes x(t) and a(t) of equation (2), which are mutually dependent, an estimate for x(t) may be expressed in terms of the other process as

x̃(t) = z₀(t) − Σ_{i=0}^n C_i a^i(t),  (21)

in which the C_i's are deterministic coefficients. It is presumed that a(t) is non-white. This representation is heuristic since at present it is not known whether or not two dependent random processes can be expressed in the above form. Nevertheless, as an approximation the form of equation (21) seems to be appropriate. Similarly an estimate for u(t) of equation (9) can be presumed in the form

ũ(t) = h(t) − Σ_{i=0}^n B_i a^i(t).  (22)
The constants C_i are found in the following manner. If x̄(t) is the exact solution, its substitution into equation (2) would lead to an identity. On the contrary, here one gets

z₀(t) − Σ_i C_i a^i(t) ≠ z₀(t) − ε ∫₀ᵗ a(τ)[z₀(τ) − Σ_i C_i a^i(τ)] h(t − τ) dτ.  (23)
The resulting error is

E(t) = −Σ_i C_i[a^i(t) + εH_i(t)] + εF(t).  (24)

Here

H_i(t) = ∫₀ᵗ a^{i+1}(τ) h(t − τ) dτ,  (25)

F(t) = ∫₀ᵗ a(τ) z₀(τ) h(t − τ) dτ.  (26)
For the fundamental solutions u or v, the above expressions are still valid if z₀(t) is replaced by h(t) or g(t). So far the method is very similar to the classical Galerkin technique. To find the unknown coefficients, in the classical approach, one minimizes a measure of the error: namely,

(1/T) ∫₀^T E²(t) dt.

Even though this is called the mean square error it is not a mean in the strict probabilistic sense. Here, as an extension of this concept, the ensemble mean of the above quantity,

e = (1/T) ∫₀^T ⟨E²(t)⟩ dt,  (27)

will be minimized. The minimization conditions

∂e/∂C_i = 0,  i = 0, 1, 2, ...,  (28)

lead to the system of simultaneous equations

Σ_j C_j ∫₀^T [⟨a^i a^j⟩ + ε(⟨H_i a^j⟩ + ⟨H_j a^i⟩) + ε²⟨H_i H_j⟩] dt = ε ∫₀^T ⟨a^i F⟩ dt + ε² ∫₀^T ⟨H_i F⟩ dt,  i = 0, 1, 2, ....  (29)
Here (0, T) is the interval in which the solution is sought. These equations contain not only terms of order ε² but also the higher order moments of a(t). Thus, it could be expected that the above estimate for either the forced response or the fundamental solutions will be better than the initial approximations z₀(t) or h(t). Choosing x̃(t) or ũ(t) as the starting solution, one can hope that the iterative technique of section 2.2 will lead to a good solution after one or two steps. Knowing the C_i's from equation (29), one can easily find the moments of the response at any stage of the iteration. An example to illustrate the method in detail is considered later.
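The normal equations (29) can also be assembled by Monte Carlo averaging, replacing the exact moments of a(t) with sample averages. The sketch below does this for the free response (z₀ replaced by h); the AR(1) sample generator, sample count and parameter values are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

# Ensemble-averaged least squares for the coefficients B_i of section 3:
# u~(t) = h(t) - sum_i B_i a^i(t), with the error measure of equation (27).

def impulse_response(t, w=2 * np.pi, eta=0.05):
    wd = w * np.sqrt(1.0 - eta ** 2)
    return np.exp(-eta * w * t) * np.sin(wd * t) / wd

def convolve_lower(g, h, dt):
    return np.array([np.sum(g[:k + 1] * h[k::-1]) for k in range(len(g))]) * dt

dt, eps, alpha = 0.02, 0.5, 2.01 * np.pi
n_terms, n_samples = 3, 200
t = np.arange(0.0, 3.0, dt)
h = impulse_response(t)
rng = np.random.default_rng(2)
rho = np.exp(-alpha * dt)          # AR(1) step correlation, R_aa = exp(-alpha*|tau|)

A = np.zeros((n_terms, n_terms))   # time-and-ensemble averages, left side of (29)
b = np.zeros(n_terms)              # right side of (29)
for _ in range(n_samples):
    a = np.empty(len(t))
    a[0] = rng.standard_normal()
    for k in range(1, len(t)):     # correlated Gaussian sample of a(t)
        a[k] = rho * a[k - 1] + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    F = convolve_lower(a * h, h, dt)   # equation (26) with z0 replaced by h
    basis = [a ** i + eps * convolve_lower(a ** (i + 1), h, dt)  # a^i + eps*H_i
             for i in range(n_terms)]
    for i in range(n_terms):
        b[i] += eps * np.sum(basis[i] * F) * dt
        for j in range(n_terms):
            A[i, j] += np.sum(basis[i] * basis[j]) * dt

B = np.linalg.solve(A / n_samples, b / n_samples)   # estimates of B_0, B_1, B_2
```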
4. GAUSSIAN PROBABILITY CLOSURE

All the methods considered above lead to the response moments. If one is interested in the probability densities also, the only method would be to arrive at them by making use of the estimated moments. Since the determination of the moments of order higher than two is very tedious, the easiest estimate of the distribution would be through the Gaussian density function. In this estimation it may be observed that there are two major approximations. The first one is in the estimation of the moments. The second approximation is in the assumed probability structure: namely, the normal density function. At this stage it would be interesting to ask the following question. Is it possible to make an initial approximation for the
response in terms of the functional form of a probability density with unknown parameters? The unknown parameters in turn will have to be chosen so as to be consistent with the equation of motion. Previously [12] this approach has been investigated in detail with respect to a non-linear system under random excitation. It has been found to be highly efficient in yielding the response moments. In this section, this method, which amounts to a closure technique, is shown to be useful for non-autonomous systems also. Since in the present problem a(t) is Gaussian with mean zero, and the response takes both negative and positive values, it is reasonable to choose the initial probabilistic description to be Gaussian. Thus the joint density of x and a would be

p(x, a) = [1/(2πσ_x σ_a (1 − r²)^{1/2})] exp{−[1/(2(1 − r²))][(x − m)²/σ_x² − 2r(x − m)a/(σ_x σ_a) + a²/σ_a²]},  (30)

where

m = ⟨x⟩,  σ_a² = ⟨a²⟩,  σ_x² = ⟨x²⟩ − m²,  r(t) = ⟨a(x − m)⟩/(σ_a σ_x).  (31)

4.1. RESPONSE MEAN
Since the mean of x(t) may not be zero, it would be simpler to express equation (1) in terms of a process z(t) defined as

x(t) = m(t) + z(t).  (32)

This leads to

m̈ + 2ηωṁ + ω²m + εam = f(t) − (z̈ + 2ηωż + ω²z + εaz).  (33)

Taking expectations throughout on this equation, with a(t) and f(t) of zero mean, one obtains

m̈ + 2ηωṁ + ω²m + εR_az(t, t) = 0,  (34)

where

R_az(t₁, t) = ⟨a(t₁) z(t)⟩.  (35)

This is an exact equation in which both m and R_az are unknowns. Multiplying equation (33) by a(t₁) and averaging gives

R̈_az(t₁, t) + 2ηωṘ_az(t₁, t) + ω²R_az(t₁, t) + εR_aaz(t₁, t, t) = R_af(t₁, t) − εm(t)R_aa(t₁, t).  (36)
Now, upon invoking the Gaussian nature of p(x, a), it follows that z and a are also jointly Gaussian and hence R_aaz in the above equation vanishes. With this simplification equation (36) can be explicitly solved to give

R_az(t₁, t) = ∫₀ᵗ [R_af(t₁, τ) − εm(τ)R_aa(t₁, τ)] h(t − τ) dτ.  (37)

Substitution of this in equation (34) leads to the integro-differential equation

m̈ + 2ηωṁ + ω²m − ε² ∫₀ᵗ m(τ) R_aa(t, τ) h(t − τ) dτ = −ε ∫₀ᵗ R_af(t, τ) h(t − τ) dτ.  (38)

A general solution to this equation can be obtained when a(t) and f(t) are jointly stationary. In this case R_aa and R_af are functions of the time difference only and hence taking Laplace transforms gives

L.T.[m(t)] = m̄(s) = [m(0)(s + 2ηω) + ṁ(0) − (ε/s)G₁(s)]/[s² + 2ηωs + ω² − ε²G(s)],  (39)

where

G₁(s) = L.T.[R_af(t)h(t)],  (40)

G(s) = L.T.[R_aa(t)h(t)].  (41)
However the inversion of m̄(s) may be too difficult and hence it would be simpler to solve equation (38) by a numerical procedure. When a and f are non-stationary processes even the Laplace transform of m(t) cannot be obtained and hence one is compelled to look for numerical schemes for the determination of m(t). Equation (38) could also have been derived by the correlation discard technique of simplifying the triple correlation in equation (36) as

R_aaz(t₁, t, t) = ⟨a(t₁) a(t) z(t)⟩ = 0.  (42)

However, this should not be taken to mean that both the methods are identical. The method developed in the present paper is more general and also unambiguous. The fact that it is more general can be appreciated when one observes that even non-linear stochastic or deterministic [12] systems can be handled by this method. It is only necessary to calculate the various averages involving the non-linear and/or non-autonomous terms by using the Gaussian joint probability density function. In case the time-varying coefficient a(t) is not Gaussian but a non-linear function of the Gaussian process b(t) of the form

a(t) = N[b(t)],  (43)

it would be necessary to calculate the averages of functions such as ⟨xN(b)⟩ under the initial approximation that p(x, b) is Gaussian. Due to limitation of space, problems of this type will not be discussed here.

4.2. RESPONSE AUTO-CORRELATION

Multiplying equation (33) by z(t₁) and averaging it gives

R̈_zz(t₁, t) + 2ηωṘ_zz(t₁, t) + ω²R_zz(t₁, t) = R_fz(t, t₁) − εm(t)R_az(t, t₁).  (44)

Here m and R_az are known functions. To find R_fz it would be necessary to consider again equation (33). Multiplying this by f(t₁) and averaging throughout, one obtains

R̈_fz(t₁, t) + 2ηωṘ_fz(t₁, t) + ω²R_fz(t₁, t) = R_ff(t₁, t) − εm(t)R_af(t, t₁).  (45)

In deriving this it is to be observed that p(a, z, f) has been taken to be Gaussian. These two equations can be easily solved to obtain
R_fz(t₁, t) = ∫₀ᵗ [R_ff(t₁, τ) − εm(τ)R_af(τ, t₁)] h(t − τ) dτ,  (46)

R_zz(t₁, t) = ∫₀ᵗ [R_fz(τ, t₁) − εm(τ)R_az(τ, t₁)] h(t − τ) dτ.  (47)
Once m(t) and R_zz(t, t₁) are known the approximate probabilistic description of x(t) is complete, since these are the only two quantities needed to define a Gaussian process. Further probabilities such as p(x, ẋ) can be easily estimated which in turn lead to the level crossing statistics. However, before this is undertaken it is necessary to compare the predicted moments and probabilities with the results obtained by other techniques. In particular it is essential to verify the predictions with respect to digitally simulated results. Prior to presenting such a numerical study, it would be more interesting to enquire whether the above approximations can be improved further to arrive at a better estimate of the exact non-Gaussian response distribution.

5. THE NON-GAUSSIAN ESTIMATE

The method of the probability density closure in terms of a Gaussian distribution also amounts to an initial approximation, which is probably better and certainly more elegant than equation (21). Let this Gaussian process which is an approximation to x(t) be denoted
as y(t). When y(t) is taken as the starting approximation in the iteration technique of section 2.2 it follows that

x₀(t) = y(t),  (48)

x₁(t) = z₀(t) − ε ∫₀ᵗ a(τ) y(τ) h(t − τ) dτ.  (49)
Again the distribution of even x₁(t) is very difficult to find. However, the previous approximations to the mean and variance hopefully can be improved by considering x₁(t). The difficulty in finding the non-Gaussian density by the above approach can be attributed to the convolution involved with products of random processes. The only way to avoid this difficulty seems to be by approximating x(t) in terms of y(t) through a memory-less transformation. Such an approximate transformation can indeed be established as follows. Since y(t) is a Gaussian process, it is strictly the solution of a linear time invariant differential equation under a Gaussian input. In fact, the equation satisfied by y(t) is

ÿ + 2ηωẏ + ω²y = f(t) − ε[a(t)m(t) + R_az(t, t)],  (50)

where m(t) and R_az(t, t) are obtained from equations (38) and (37). It can be easily verified that

⟨y(t)⟩ = −ε ∫₀ᵗ R_az(τ, τ) h(t − τ) dτ = m(t)  (51)

and

⟨y(t₁) y(t₂)⟩ = R_zz(t₁, t₂) + m(t₁) m(t₂).  (52)
Thus in attempting to solve equation (1), we have only solved equation (50). This fact may be made use of in arriving at a relation between y(t) and a new estimate of x(t). Comparing equations (1) and (50), since the same f(t) appears in both the expressions, shows that it is necessary that at any time t

ẍ + 2ηωẋ + ω²x + εax = ÿ + 2ηωẏ + ω²y + ε(am + R_az).  (53)

A set of sufficient conditions for this equality to be valid are

ẋ = ẏ,  (54)

ẍ = ÿ,  (55)

x(ω² + εa) = ω²y + ε(am + R_az).  (56)

These define a memory-less transformation between (y, ẏ, ÿ, a) and (x, ẋ, ẍ). The non-Gaussian joint density function p(x, ẋ, ẍ; t) can be easily obtained since p(y, ẏ, ÿ, a; t) is Gaussian. But it should be noted that this new density function is also an approximation since equations (54)-(56) are only sufficient but not necessary conditions for the validity of equation (53).

6. EXAMPLE

To illustrate the various approaches and results of the previous sections, a system with the process a(t) chosen to be Gaussian with zero mean and auto-correlation

R_aa(t₁, t₂) = exp(−α|t₂ − t₁|)
(57)
is considered in this section. Both the free and forced vibration are studied for this system. In the free case only the fundamental solution u(t) is found. v(t) can be determined on similar lines. The Gaussian forcing function f(t) is taken to be equal to a(t). When a and f are totally uncorrelated, then, from the perturbation analysis of section 2.1, to order ε, the mean response would be zero.
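The memory-less transformation (56) itself is easy to exercise numerically: given jointly Gaussian samples of (y, a) at a fixed time, the non-Gaussian samples of x follow algebraically. All numbers below (m, R_az and the second moments) are illustrative placeholders, not results from the paper.

```python
import numpy as np

# Sampling sketch of equation (56): x*(w^2 + eps*a) = w^2*y + eps*(a*m + R_az).

rng = np.random.default_rng(3)
w, eps = 2 * np.pi, 2.0
m, R_az = 0.3, 0.2                 # assumed closure outputs at the chosen time
sig_y, sig_a = 0.5, 1.0            # assumed standard deviations of y and a
r = R_az / (sig_a * sig_y)         # correlation coefficient, cf. equation (96)

n = 100_000
a = sig_a * rng.standard_normal(n)
y = (m + r * (sig_y / sig_a) * a
     + sig_y * np.sqrt(1.0 - r ** 2) * rng.standard_normal(n))  # y given a

x = (w ** 2 * y + eps * (a * m + R_az)) / (w ** 2 + eps * a)    # equation (56)

# x is no longer Gaussian; its sample skewness is generally non-zero
skew = np.mean(((x - x.mean()) / x.std()) ** 3)
```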
With this in view, in the problem considered here a and f are taken fully correlated so that a significantly non-zero mean value may be obtained. This would help in evaluating the various methods in comparison with simulated results.

6.1. HEURISTIC SERIES SOLUTION
From equations (21) and (22) three term approximations to the free and forced response are taken in the forms

x̃(t) = z₀(t) − (C₀ + C₁a + C₂a²),  (58)

ũ(t) = h(t) − (B₀ + B₁a + B₂a²).  (59)

Instead of using the real time t, it is more convenient to introduce a non-dimensional time

θ = ωt/2π.  (60)

This transforms equation (1) into

d²x/dθ² + 4πη dx/dθ + [4π² + βa(θ)]x = 4π²f(θ)/ω²,  (61)

where

β = 4π²ε/ω².  (62)

This shows that the effective small parameter for the perturbation technique is β. The auto-correlation of the process a(θ) is

R_aa(θ₂ − θ₁) = exp(−λ|θ₂ − θ₁|),  λ = 2πα/ω.  (63)
From equation (29), the various moments needed in the determination of the B_i's and C_i's are as follows:

⟨H_i H_j⟩ = ∫₀^θ ∫₀^θ ⟨a^{i+1}(θ₁) a^{j+1}(θ₂)⟩ h(θ − θ₁) h(θ − θ₂) dθ₁ dθ₂,  (64)

⟨H_i a^j⟩ = ∫₀^θ ⟨a^j(θ) a^{i+1}(θ₁)⟩ h(θ − θ₁) dθ₁,  (65)

⟨H_j a^i⟩ = ∫₀^θ ⟨a^i(θ) a^{j+1}(θ₁)⟩ h(θ − θ₁) dθ₁,  (66)

⟨a^i F⟩ = ∫₀^θ ⟨a^i(θ) a(θ₁)⟩ h(θ₁) h(θ − θ₁) dθ₁,  (67)

⟨H_i F⟩ = ∫₀^θ ∫₀^θ ⟨a^{i+1}(θ₁) a(θ₂)⟩ h(θ₂) h(θ − θ₁) h(θ − θ₂) dθ₁ dθ₂.  (68)

In these equations

h(θ) = (1/2πχ) e^{−2πηθ} sin(2πχθ),  χ = (1 − η²)^{1/2}.  (69)
In the forced case, instead of equations (67) and (68) one obtains

⟨a^i F⟩ = (4π²/ω²) ∫₀^θ ∫₀^{θ₁} ⟨a^i(θ) a(θ₁) a(θ₂)⟩ h(θ₁ − θ₂) h(θ − θ₁) dθ₂ dθ₁,  (70)

⟨H_i F⟩ = (4π²/ω²) ∫₀^θ ∫₀^θ ∫₀^{θ₂} ⟨a(θ₂) a(θ₃) a^{i+1}(θ₁)⟩ h(θ − θ₁) h(θ − θ₂) h(θ₂ − θ₃) dθ₃ dθ₂ dθ₁.  (71)
It is seen that the basic information needed is that concerning the various moments of the process a(θ). For a Gaussian process these can be found completely in terms of the auto-correlation function. Further all the above integrals can be expressed in terms of the following eleven basic integrals:

I₁(θ) = ∫₀^θ ∫₀^θ h(θ − θ₁) h(θ − θ₂) dθ₁ dθ₂,  (72)

I₂(θ) = ∫₀^θ ∫₀^θ exp(−λ|θ₁ − θ₂|) h(θ − θ₁) h(θ − θ₂) dθ₁ dθ₂,  (73)

I₃(θ) = ∫₀^θ exp(−λ|θ − θ₁|) h(θ₁) h(θ − θ₁) dθ₁,  (74)

I₄(θ) = ∫₀^θ exp(−λ|θ − θ₁|) h(θ − θ₁) dθ₁,  (75)

I₅(θ) = ∫₀^θ h(θ₁) dθ₁,  (76)

I₆(θ) = ∫₀^θ ∫₀^θ exp(−λ|θ₁ − θ₂|) h(θ₁) h(θ − θ₁) h(θ − θ₂) dθ₁ dθ₂,  (77)

I₇(θ) = ∫₀^θ ∫₀^θ exp(−λ|θ₁ − θ₂|) h(θ₁) h(θ₂) h(θ − θ₁) h(θ − θ₂) dθ₁ dθ₂,  (78)

I₈(θ) = ∫₀^θ ∫₀^{θ₁} exp(−λ|θ₁ − θ₂|) h(θ₁ − θ₂) h(θ − θ₁) dθ₂ dθ₁,  (79)

I₉(θ) = ∫₀^θ ∫₀^{θ₁} exp(−λ|2θ − θ₁ − θ₂|) h(θ₁ − θ₂) h(θ − θ₁) dθ₂ dθ₁,  (80)

I₁₀(θ) = ∫₀^θ ∫₀^θ ∫₀^{θ₁} exp(−λ|θ₂ − θ₃|) h(θ − θ₁) h(θ − θ₂) h(θ₁ − θ₃) dθ₃ dθ₂ dθ₁,  (81)

I₁₁(θ) = ∫₀^θ ∫₀^θ ∫₀^{θ₁} exp(−λ|θ₁ − θ₃|) exp(−λ|θ₂ − θ₃|) h(θ − θ₁) h(θ − θ₂) h(θ₁ − θ₃) dθ₃ dθ₂ dθ₁.  (82)
TABLE 1

Expressions for the compound moments ⟨H_iH_j⟩, ⟨H_ia^j⟩, ⟨a^iF⟩ and ⟨H_iF⟩ (i, j = 0, 1, 2) in terms of the integrals I₁ to I₁₁ (the individual entries are not recoverable from the scanned original)

These integrals can be evaluated after some tedious algebra. In Table 1, for easy reference, the moments of equations (64)-(68), (70) and (71) are expressed in terms of the above integrals.
6.2. MEAN AND VARIANCE OF u AND x

After the moments of equations (64)-(68), (70) and (71) are computed, the solution of equation (29) is quite simple. When one knows the coefficients B_i and C_i, the mean and variance of the response can be written as

m_u1(θ) = ⟨ũ(θ)⟩ = h(θ) − B₀ − B₂,  (83)

σ_u1²(θ) = ⟨ũ²(θ)⟩ − m_u1²(θ) = B₁² + 2B₂²,  (84)

m_x1(θ) = ⟨x̃(θ)⟩ = −C₀ − C₂,  (85)

σ_x1²(θ) = ⟨x̃²(θ)⟩ − m_x1²(θ) = (4π²/ω²)² I₂(θ) + C₁² + 2C₂² − (4π²/ω²) C₁ I₁₂(θ)/πχ,  (86)

where

I₁₂(θ) = ∫₀^θ exp[−(2πη + λ)θ₁] sin(2πχθ₁) dθ₁.  (87)
These expressions may be called the first estimates. One could try an improvement of these by an iteration procedure. With one iteration the second estimates for the mean and variance of u(θ) become

m_u2(θ) = h(θ) + βB₁ I₅(θ),  (88)

σ_u2²(θ) = β²[I₇(θ) + (B₀² + 9B₂² + 3B₀B₂) I₂(θ) + 2B₁² I₁₀(θ) + 6B₁² I₁₁(θ) − 2(B₀ + 3B₂) I₆(θ)].  (89)

For the forced response, the iterated averages can be easily found to be

m_x2(θ) = −β[(4π²/ω²) I₈(θ) − C₁ I₅(θ)],  (90)

m_x3(θ) = −β[(4π²/ω²) I₈(θ) + (C₀ + 3C₂) I₅(θ)].  (91)
For the variance, the computation of even the second estimate is very laborious and hence is not considered here.

6.3. PROBABILITY DENSITY FUNCTION OF u(t) AND x(t)

As explained previously one could approximate the density function of u and x as Gaussian with the estimates of mean and variance obtained above. Alternatively the method of section 5 can be used to arrive at a non-Gaussian approximation. By making use of equation (56) and the Gaussian joint density function p(y, a) given by

p(y, a) = A₁ exp{−A₂[(y − m)²/σ_y² − 2ρ(y − m)a/(σ_y σ_a) + a²/σ_a²]},  (92)

it can be easily shown that

p(x, a) = A₁|1 + ξa| exp{−A₂[(x(1 + ξa) − ξ(ma + R_az) − m)²/σ_y² − 2ρa(x(1 + ξa) − ξ(ma + R_az) − m)/(σ_y σ_a) + a²/σ_a²]},  (93)

where

A₁ = 1/[2πσ_y σ_a (1 − ρ²)^{1/2}],  (94)

A₂ = 1/[2(1 − ρ²)],  (95)

ρ = ⟨a(y − m)⟩/(σ_a σ_y) = R_az/(σ_a σ_y),  (96)

ξ = ε/ω².  (97)
Equation (93) can be used for both the free and forced response by substituting the corresponding values of σ_y and R_az. The marginal density p(x) is given by

p(x) = ∫ p(x, a) da = exp(−X₂){X₅(π/X₁)^{1/2} exp(X₃²/4X₁) Erf(√X₁/ξ − X₃/2√X₁) + 2X₄ exp[−(X₁/ξ² − X₃/ξ)]},  (98)

where

X₂ = (A₂/σ_y²)(x − ξR_az − m)²,  (99)

X₁ = (A₂/σ_y²)[ξ²(x − m)² − 2ρξσ_y(x − m)/σ_a + σ_y²/σ_a²],  (100)

X₃ = (2A₂/σ_y²)(x − ξR_az − m)[ξ(x − m) − ρσ_y/σ_a],  (101)

X₄ = A₁ + ξX₃,  (102)

X₅ = A₁ − ξX₃,  (103)

Erf(x) = (2/π^{1/2}) ∫₀ˣ e^{−t²} dt.  (104)
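The closed form (98) can be cross-checked by direct quadrature of equation (93) over a. The sketch below does this with placeholder values of m, R_az and the second moments (assumptions for illustration, not the paper's numbers).

```python
import numpy as np

# Marginal density p(x) = int p(x, a) da by trapezoidal quadrature of (93).

w, eps = 2 * np.pi, 2.0
xi = eps / w ** 2                          # equation (97)
m, R_az, sig_y, sig_a = 0.3, 0.2, 0.5, 1.0
rho = R_az / (sig_a * sig_y)               # equation (96)
A1 = 1.0 / (2 * np.pi * sig_y * sig_a * np.sqrt(1 - rho ** 2))   # (94)
A2 = 1.0 / (2 * (1 - rho ** 2))                                  # (95)

def p_xa(x, a):
    """Joint density (93): the Gaussian p(y, a) transformed through (56)."""
    yv = x * (1 + xi * a) - xi * (m * a + R_az)   # y as a function of (x, a)
    q = ((yv - m) ** 2 / sig_y ** 2
         - 2 * rho * (yv - m) * a / (sig_y * sig_a)
         + a ** 2 / sig_a ** 2)
    return A1 * np.abs(1 + xi * a) * np.exp(-A2 * q)

def trap(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

a_grid = np.linspace(-8.0, 8.0, 4001)
x_grid = np.linspace(-3.0, 3.0, 241)
p_x = np.array([trap(p_xa(xv, a_grid), a_grid) for xv in x_grid])
total = trap(p_x, x_grid)   # should be close to 1 for a well-resolved grid
```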
7. NUMERICAL RESULTS

In this section extensive numerical results are presented for a system with ω = 2π rad/s. Mean and variance obtained by various methods have been compared for several values of the parameter β. Both the free and forced vibration results are computed, in the transient regime only. The possibility of the existence of a steady state is not considered here. The response samples, mean, variance and density functions are generally considered for the first 10 seconds. In some cases, however, for saving computer time the results are obtained and presented here for a shorter duration of 3 to 5 cycles of the response.
7.1. DIGITAL SIMULATION OF a(t)

The response of the system has been obtained by a numerical integration of the differential equation in which samples of the random processes a(t) and f(t) have been used. For the generation of the samples of the random processes a Fourier series representation [14] has been used. According to this the random process is expressed in the form

a(t) = Σ_{i=1}^N [D_i sin(iΔωt) + E_i cos(iΔωt)].  (105)
Here Δω = 2π/T, and T is the length of the interval of expansion. N, the number of terms to be used in the series, is equal to f_cT, where f_c is the cut-off frequency, beyond which the process is assumed to have very little energy. D_i and E_i are mutually independent Gaussian random variables having zero mean and variances given by

⟨D_i²⟩ = ⟨E_i²⟩ = G(ω_i)Δω,  i = 1, 2, ..., N.  (106)

For the process a(t) considered here the power spectral density is

G(ω) = 2α/[π(α² + ω²)].  (107)

For purposes of simulation f_c and T have been taken as 10 Hz and 10 seconds, respectively. The value of α has been fixed on the basis that the ratio of the maximum of G(ω) to G(2πf_c) should be 100. This gives the value of α as 2.01π. The response mean, variance, probability density and distribution function have been computed by digital simulation and the two other
analytical methods developed previously. Results have been obtained for η = 0.001, 0.025, 0.05 and β = 0.1, 5, 10 and 15. However, due to limitations of space some representative results only are reported here.

7.2. MEAN AND VARIANCE OF u
In Figures 1(a) and (b) the simulated mean response is presented for η = 0.05 and β = 5 and 15, respectively. The figures also show the results as predicted by the heuristic series method, the Gaussian closure technique and the small perturbation method. Figure 2 shows the mean for η = 0.025 and β = 10. The results denoted as "series" refer to the improved second estimate of equation (88). The results referred to as the "closure" are the solutions of the integro-differential equation (38) which has been solved by an iterative procedure. In the free vibration case equation (38) becomes

m̈ + 2ηωṁ + ω²m = ε² ∫₀ᵗ m(τ) R_aa(t − τ) h(t − τ) dτ.  (108)
The iterative solution is started by taking m(t) on the right-hand side to be h(t). At every step of iteration the equation has been solved numerically by a Runge-Kutta scheme. The convergence of the sequence of the solutions has been found to be quite fast for low values of β. It has been found that, even for a value of β as high as 50, the number of iterations needed
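That iterative scheme can be sketched as follows: start with m = h, evaluate the convolution on the right-hand side of (108), re-integrate the linear oscillator equation (here with a simple RK4 step), and repeat. The step sizes, ε and the iteration count below are illustrative assumptions.

```python
import numpy as np

# Iterative solution sketch for equation (108) (free response u, so m(0) = 0,
# m'(0) = 1 per the initial conditions (7)).

w, eta, eps, alpha = 2 * np.pi, 0.05, 2.0, 2.01 * np.pi
dt = 0.005
t = np.arange(0.0, 3.0, dt)
wd = w * np.sqrt(1.0 - eta ** 2)
h = np.exp(-eta * w * t) * np.sin(wd * t) / wd
R_aa = np.exp(-alpha * t)

def integrate(rhs):
    """RK4 for m'' + 2*eta*w*m' + w^2*m = rhs(t) with m(0) = 0, m'(0) = 1."""
    m, v = np.zeros(len(t)), np.zeros(len(t))
    v[0] = 1.0
    for k in range(len(t) - 1):
        def f(y, r):
            return np.array([y[1], r - 2 * eta * w * y[1] - w ** 2 * y[0]])
        y = np.array([m[k], v[k]])
        rm = 0.5 * (rhs[k] + rhs[k + 1])
        k1 = f(y, rhs[k])
        k2 = f(y + 0.5 * dt * k1, rm)
        k3 = f(y + 0.5 * dt * k2, rm)
        k4 = f(y + dt * k3, rhs[k + 1])
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        m[k + 1], v[k + 1] = y
    return m

m = h.copy()                        # starting approximation m(t) = h(t)
for _ in range(8):                  # iterate until the solution settles
    conv = np.array([np.sum(m[:k + 1] * R_aa[k::-1] * h[k::-1])
                     for k in range(len(t))]) * dt
    m = integrate(eps ** 2 * conv)
```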
Figure 1. Mean of u(θ); ω = 2π rad/s, η = 0.05. (a) β = 5; -, simulation (100 samples); (b) β = 15; -, simulation (200 samples). ----, closure; ○ ○ ○, series; · · ·, perturbation.
Figure 2. Mean of u(θ); ω = 2π rad/s, η = 0.025, β = 10. -, simulation (200 samples); ----, closure; ○ ○ ○, series; · · ·, perturbation.
is of the order of 10. The small perturbation solutions shown are of the order p’. A comparative study similar to that of the mean is presented for the variance of u(t) in Figures 3(a), (b) and (c). When a simulation study is conducted it is essential to know the convergence of the sample moments. In the course of the numerical work it was found that, for q = 0.05 and B = 5,lO to 20 samples were sufficient for the convergence of the mean. However for q = O-025 and /? = lo,60 samples were needed and for q = 0.05 and /3 = 15 as many as 200 samples were found necessary for good convergence in the first five cycles. A similar study for the variance indicates that the convergence of the sample variance is slower than that of the sample mean. In Figure 4 the convergence of the mean is studied for q = 0.05 and p = 15. Figures 5(a) and (b) present the convergence of the variance in two different cases. 7.3.
MEAN AND VARIANCE
OF X
The digital simulation of the forced response is presented here only for r] = 0.05 and /3 = 5. From the free vibration results it is seen that digital simulation would be highly timeconsuming since large number of samples are needed for good sample convergence. With this in view for the forced vibration problem only a single value of B has been considered. In Figures 6(a) and (b) the simulated mean and variance of the forced response are compared with the theoretical estimates. The sample convergence of the mean and variance is shown in Figures 7(a) and (b). 7.4.
7.4. RESPONSE PROBABILITY DISTRIBUTION
The simulated probability density functions and distribution functions of u(t) and x(t) are presented in Figures 8 through 13. Three different analytical predictions of the density and distribution functions are also shown in these figures. Two of the predictions are Gaussian; these are based on the heuristic series and the closure methods. The non-Gaussian estimate of section 6.3 is also presented along with the other solutions.
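Simulated density and distribution functions of this kind are, in essence, a histogram estimate and an empirical distribution function built from the sample values of the response at one fixed time θ. A generic sketch is given below; the sample data are synthetic placeholders, not values from the paper's simulation.

```python
import numpy as np

def empirical_pdf_cdf(samples, bins=20):
    """Histogram estimate of the density, plus the empirical distribution
    function, from sample values of the response at one fixed time."""
    samples = np.sort(samples)
    density, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # ECDF: fraction of samples not exceeding each sorted sample value
    cdf = np.arange(1, len(samples) + 1) / len(samples)
    return centers, density, samples, cdf

rng = np.random.default_rng(0)
u_theta = rng.normal(0.0, 0.04, size=200)   # placeholder for 200 simulated u(θ) values
x, pdf, xs, cdf = empirical_pdf_cdf(u_theta)
print(pdf.sum() * (x[1] - x[0]))            # density estimate integrates to one
```

The analytical Gaussian and non-Gaussian predictions would then be overlaid on these empirical curves, as in Figures 8 through 13.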
8. DISCUSSION

For small values of β, the numerical results indicate that all three approximations, namely, the perturbation, the heuristic series and the probability closure methods, lead to
Figure 3. Variance of u(θ); ω = 2π rad/s. (a) η = 0.05, β = 5, simulation (100 samples); (b) η = 0.05, β = 15, simulation (200 samples); (c) η = 0.025, β = 10, simulation (200 samples). ——, simulation; ----, closure; ○ ○ ○, series; · · ·, perturbation.
very good estimates of the mean and variance. However, for large values of β it is seen that the probability closure method yields more acceptable estimates of the moments. This conclusion is based on the free vibration results, which form the bulk of the numerical work reported here. The forced vibration results presented in Figures 6(a) and (b) indicate that the series method is as useful as the closure technique. But, because the value of β used in this case has been quite small and a(t) and f(t) were chosen to be fully correlated, it can only be said that both techniques are equally acceptable. Further numerical work with different a(t) and f(t) is needed to draw a firm conclusion. The convergence results of the sample moments presented in Figures 5(b), 7(a) and 7(b) illustrate the difficulty one faces in an extensive digital simulation study of a stochastic system. Such a study may be very uneconomical simply because a large number of samples may be needed for good convergence. This observation in turn demonstrates the need for developing analytical methods to estimate the response statistics. The probability density and distribution functions of u(t) presented in
Figure 4. Convergence of the mean of u(θ); ω = 2π rad/s, η = 0.05, β = 15. -·-·-, 50 samples; ○ ○ ○, 100 samples; ----, 150 samples; ——, 200 samples.
Figures 8 through 12 also lead to the conclusion that the closure method is perhaps better than the other method. For the forced case, as shown in Figure 13, both methods give almost the same values. In random vibration problems one would be more interested in the probability distributions and the level crossing statistics than in the simple response moments. As a step in this direction, the non-Gaussian estimates were obtained and presented in Figures 8 through 13. With the available results it can safely be said that there is very little difference between the Gaussian estimate of the closure method and the non-Gaussian prediction. Thus, the extra effort to find the non-Gaussian distribution of u(t) or x(t) may not be necessary. Further investigations are in progress to study this aspect in detail.
Figure 5. Convergence of the variance of u(θ); ω = 2π rad/s. (a) η = 0.05, β = 5: -·-·-, 20 samples; ○ ○ ○, 40 samples; ----, 60 samples; · · ·, 80 samples; ——, 100 samples. (b) η = 0.025, β = 10: ----, 50 samples; ○ ○ ○, 100 samples; -·-·-, 150 samples; ——, 200 samples.
Figure 6. (a) Mean and (b) variance of x(θ); ω = 2π rad/s, η = 0.05, β = 5. ——, simulation (500 samples); ----, closure; ○ ○ ○, series; · · ·, perturbation.
Figure 7. (a) Convergence of the mean and (b) convergence of the variance of x(θ); ω = 2π rad/s, η = 0.05, β = 5. ----, 30 samples; -·-·-, 60 samples; · · ·, 350 samples; ○ ○ ○, 450 samples; ——, 500 samples.
Figure 8. Probability density and distribution function of u(θ); ω = 2π rad/s, η = 0.05, β = 5, θ = 3. ——, non-Gaussian; ----, Gaussian (series); · · ·, Gaussian (closure).
Figure 9. Probability density function of u(θ); ω = 2π rad/s, η = 0.05, β = 15, θ = 3. ——, non-Gaussian; ----, Gaussian (series); · · ·, Gaussian (closure).
When a(t) and f(t) are totally uncorrelated, then, in equation (38),

$$R_{af}(t, \tau) = 0. \tag{109}$$

For a system starting from rest, this would mean that the mean response is strictly zero. The same conclusion can be drawn by the perturbation formalism. However, the response auto-correlation as given by equation (47) would be the same as that of a time invariant system with ε = 0. The extension of the closure method to such cases has yet to be undertaken. The series of equation (21) leads to a system of homogeneous equations for the C_n's if a and f are uncorrelated as in equation (109). However, it is possible to overcome this difficulty by taking the solution in terms of the powers of f(t) instead of a(t). The details and results of this type of solution will be reported in a future publication.
Figure 10. Probability distribution function of u(θ); ω = 2π rad/s, η = 0.05, β = 15, θ = 3. ——, non-Gaussian; ----, Gaussian (series); · · ·, Gaussian (closure).
Figure 11. Probability density function of u(θ); ω = 2π rad/s, η = 0.025, β = 10, θ = 3. ——, non-Gaussian; ----, Gaussian (series); · · ·, Gaussian (closure). The simulation (200 samples) is also shown.
An attractive feature of the closure technique is that the error committed in using this approach can be characterized neatly as follows. Let the difference between the exact solution x(t) and the approximate solution y(t) of equation (50) be

$$E = x - y. \tag{110}$$

From equations (1) and (50), it follows that

$$(\ddot{x} - \ddot{y}) + 2\eta\omega(\dot{x} - \dot{y}) + \omega^{2}(x - y) + \varepsilon a x = \varepsilon(a\bar{y} + R_{ay}) \tag{111}$$

and

$$\ddot{E} + 2\eta\omega\dot{E} + \omega^{2} E + \varepsilon a E = \varepsilon[a(\bar{y} - y) + R_{ay}], \qquad E(0) = \dot{E}(0) = 0. \tag{112}$$
Figure 12. Probability distribution function of u(θ); ω = 2π rad/s, η = 0.025, β = 10, θ = 3. ——, non-Gaussian; ----, Gaussian (series); · · ·, Gaussian (closure).
Figure 13. Probability density and distribution function of x(θ); ω = 2π rad/s, η = 0.05, β = 5, θ = 3. ——, non-Gaussian; ----, Gaussian (series); · · ·, Gaussian (closure).
By taking expectations, the average error at any time can be shown to be zero. The mean square and other moments of the error can be found approximately by one of the methods discussed previously.
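The zero-mean property can be made explicit as follows, if one reads $R_{ay}$ as the cross-correlation $E[a(y - \bar{y})]$ and neglects the correlation between $a$ and $E$ at the closure level; both readings are assumptions here, since equation (50) lies outside this excerpt.

```latex
% Expectation of equation (112), writing \bar{E}(t) for the mean error.
% Since E[a(\bar{y} - y)] = -E[a(y - \bar{y})] = -R_{ay}, the right-hand
% side averages to zero, and with E[aE] neglected one is left with
\ddot{\bar{E}} + 2\eta\omega\,\dot{\bar{E}} + \omega^{2}\bar{E} = 0,
\qquad \bar{E}(0) = \dot{\bar{E}}(0) = 0
\;\;\Longrightarrow\;\; \bar{E}(t) \equiv 0 .
```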
9. CONCLUSION

Two methods have been developed for the approximate solution of stochastically time-varying systems. Both the free and forced vibration problems can be handled by these methods. The first of these, namely, the series technique, is very similar to the Galerkin method of
minimizing the mean square error in an interval of time. The second approach belongs to the class of methods called closure techniques. In the closure method a suitable response joint density function (usually of Gaussian form) with unknown parameters is presumed at the outset. The equation of motion and the chosen density function lead to equations for the response mean and auto-correlation. Both methods have been explored in detail in the present paper by considering a single degree-of-freedom system with a Gaussian time-varying coefficient and a Gaussian external excitation. The predicted mean and variance of the response in the transient regime have been compared with digital simulation results to assess the utility of the proposed approximations. It has generally been found that the probability closure solution is more elegant and accurate than the other solution. The probability density and distribution functions of the response, studied at a representative point on the time axis, also support this conclusion. This suggests that the method would also be useful in estimating the level crossing and first passage statistics of stochastic systems. The possibility of extending the method to find the stability and response of systems with non-Gaussian coefficient processes is yet to be studied.

ACKNOWLEDGMENTS
This work was supported by the Ministry of Defence through a Defence Grant-in-Aid scheme. The authors are grateful for this support. The authors are extremely thankful to Professor C. V. Joga Rao, Department of Aeronautical Engineering, and Chairman of the Mechanical Sciences Division, Indian Institute of Science, for his valuable discussions and constant encouragement.

REFERENCES
1. T. T. SOONG and J. L. BOGDANOFF 1963 International Journal of Mechanical Sciences 5, 237-265. On the natural frequencies of a disordered linear chain of N degrees of freedom.
2. W. E. BOYCE 1968 Probabilistic Methods in Applied Mathematics (edited by A. T. Bharucha-Reid) 1, 1-73. New York: Academic Press. Random eigenvalue problems.
3. F. KOZIN 1972 Stability of Stochastic Dynamical Systems (edited by R. F. Curtain), 186-229. Berlin: Springer-Verlag. Stability of the linear stochastic system.
4. F. KOZIN 1969 Automatica 5, 95-112. A survey of stability of stochastic systems.
5. E. F. INFANTE and R. H. PLAUT 1969 American Institute of Aeronautics and Astronautics Journal 7, 766-768. Stability of a column subjected to a time dependent axial load.
6. T. K. CAUGHEY and A. H. GRAY, JR. 1965 Journal of Applied Mechanics, Transactions of the American Society of Mechanical Engineers 32, 365-372. On the almost sure stability of linear dynamic systems with stochastic coefficients.
7. S. T. ARIARATNAM 1965 International Conference on Dynamic Stability of Structures, Evanston, Illinois (edited by G. Herrmann), 255-265. Dynamic stability of a column under random loading.
8. R. F. CURTAIN (Editor) 1972 Stability of Stochastic Dynamical Systems. Berlin: Springer-Verlag.
9. J. M. RICHARDSON 1964 Symposium on Applied Mathematics 16, 290-302. Providence, R.I.: American Mathematical Society. The application of truncated hierarchy techniques in the solution of a stochastic differential equation.
10. R. H. KRAICHNAN 1962 Symposium on Applied Mathematics 13, 199-225. Providence, R.I.: American Mathematical Society. The closure problem of turbulence theory.
11. T. T. SOONG 1973 Random Differential Equations in Science and Engineering. New York and London: Academic Press.
12. R. NARAYANA IYENGAR 1975 Journal of Sound and Vibration 40, 155-165. Random vibration of a second order non-linear elastic system.
13. R. BELLMAN 1969 Stability Theory of Differential Equations. New York: Dover Publications, Inc.
14. J. S. BENDAT 1958 Principles and Applications of Random Noise Theory. New York: John Wiley and Sons, Inc.