A COMPUTER-ASSISTED LEARNING SYSTEM FOR RANDOM VIBRATIONS

A. DER KIUREGHIAN† and C.-D. WUNG‡

†Department of Civil Engineering, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
‡PMB Engineering Inc., San Francisco, CA 94111, U.S.A.

Computers & Structures, Vol. 43, No. 5, pp. 975-993, 1992. 0045-7949/92 $5.00 + 0.00. © 1992 Pergamon Press Ltd. Printed in Great Britain.

(Received 18 April 1991)

Abstract—The computer-assisted learning system described in this paper is based on the idea that solutions to problems in random vibrations consist of sequences of well-defined logical steps, each requiring a certain amount of routine calculations. The student is engaged in the solution of the problem by selecting the proper sequence of steps, and the computer facilitates the solution by carrying out the routine calculations necessary for each step. The system enhances the learning process by (a) freeing the student from routine calculations that are not central to a fundamental understanding of the subject, (b) providing the means for application of the theory to realistic problems, thus helping the student gain experience and confidence, (c) providing the means for immediate visualization and interpretation of data at all levels, and (d) facilitating experimentation and exploration with the results of the theory. The system described includes capabilities for stationary and nonstationary response analysis as well as analysis of crossings and local and extreme peaks of processes.

1. INTRODUCTION

The topic of random vibrations has been taught in U.S. universities as a graduate course in mechanical, aerospace, or structural engineering programs since the early 1950s. The pioneering text by Crandall and Mark [1] and the authoritative book by Lin [2] played pivotal roles in the development of the field and its inclusion as a classical subject within these graduate programs. More recent texts [3-10], as well as numerous research papers on the subject, have further contributed to the growing interest in this field. The rapidly increasing power and versatility of personal computers and their wide availability on university campuses have presented another opportunity to further the teaching of random vibrations. This is particularly so since problem solutions in random vibrations are usually characterized by long analytical expressions or repeated, tedious calculations, which cannot be evaluated without the aid of a computer, even for the simplest of problems. Furthermore, these solutions usually require graphical representations for their full understanding, and personal computers provide excellent means for this purpose. This paper describes the pedagogical as well as theoretical concepts behind the development of instructional software, STOCAL-II, which is a tool for teaching or learning the subject of random vibrations. Presently, the program is restricted to problems dealing with stationary or nonstationary response of linear structures. It works on IBM-PC or compatible microcomputers and employs the IBM

Graphics Toolkit for color graphics in an interactive mode. STOCAL-II can be used as a teaching tool in a classroom environment for demonstration or problem-solving purposes, or as a self-learning tool by the individual student or researcher. The program is a vastly improved version of an earlier program, STOCAL [11], which was limited to stationary random vibration problems and did not have interactive or graphics capabilities. Both programs are developed as extensions to the deterministic structural analysis program CAL [12].

2. THE PEDAGOGY OF STOCAL-II

The computer can aid the learning process of a theoretical subject, such as random vibrations, by four means: (a) by freeing the student from routine calculations that are not central to a fundamental understanding of the basic concepts, (b) by providing a means for the building of experience and confidence through application of the theory to realistic problems, (c) by providing a means for immediate visualization and interpretation of data at all levels (i.e., input data, results of intermediate calculations, final results), and (d) by facilitating experimentation and exploration with the results of the theory.

In order to impart the above attributes to STOCAL-II, careful planning and consideration of various options were necessary. It was obvious that personal computers, rather than mainframe or minicomputers, were the proper environment, since they


provide maximum flexibility and accessibility, as well as excellent means for graphics. As an instructional code, it was decided that STOCAL-II must act as an 'open box' (as opposed to a 'black box'), giving the user access to all stored data during the course of an analysis, and must require the active involvement of the user in solving each problem. Furthermore, the program had to be flexible and general in order to solve as large a class of problems as possible. To achieve these goals, certain compromises had to be made. We compromised on the efficiency of data storage (all data are stored in core), on the efficiency and speed of computations (certain computations are repeated, and PC speeds are limited), and on the power of the program (i.e., the maximum problem size that can be handled). Thus, STOCAL-II is not appropriate as a production program in large-scale engineering applications; but it provides the transparency, flexibility, and generality that are essential for instructional software.

Solutions to most problems in random vibrations (indeed most problems in engineering) can be seen as sequences of logical steps, with each step requiring the selection of appropriate formulae or procedures and a certain amount of routine computations. For example, to determine the mean-square response of a linear structure, one needs first to select either a normal mode or a direct approach. If the normal mode approach is selected, one needs to form the appropriate mass, stiffness and damping matrices, perform eigenvalue analysis to determine the natural frequencies and mode shapes, and compute the modal participation factors for each type of input and response of interest. Depending on the specified form of the input excitation, one then has to decide between a frequency-domain or a time-domain approach for evaluating the response statistics.
If the frequency-domain approach is selected, one needs to determine the modal and cross-modal spectral densities (either stationary or evolutionary) and superimpose them in accordance with an appropriate modal combination rule to form the response power spectral density. This must then be integrated over the frequency domain to compute the mean-square response. Alternatively, one may integrate the individual modal or cross-modal spectral densities to determine the mean-square modal or cross-modal responses, and then superimpose them to compute the total mean-square response. The selection of the sequence of steps (e.g., steps for a time-domain versus a frequency-domain approach) and the procedure or rule used in each step (e.g., type of modal combination rule used) require an understanding of the fundamental principles of the theory of random vibrations. However, the routine calculations (e.g., numerical integration in the frequency domain, summation in modal superposition) usually are not central to such an understanding. In fact, these calculations even for the simplest of problems are tedious, repetitive, and a hindrance to a full understanding of the theory and its application. The basic

idea behind STOCAL-II, therefore, is to free the student from these tedious calculations, while requiring his/her active involvement in selecting the proper sequence of steps and the proper procedure in each step for solving the problem. In STOCAL-II, the solution to a problem is synthesized by issuing a sequence of commands, each performing the routine calculations necessary for one step of the solution process. A typical command has the form

COMMANDNAME M1 M2 ... M+ (C) P=p1,p2,... I=i1,i2,...

where the Mi are (alphanumeric) names of input matrices, M is the name of the output matrix containing the results of calculations performed by the command, with the + sign indicating that it is a newly created matrix, C is an input matrix which is conditionally required depending on parameters of the problem, and P, I, etc., are identifiers that define sets of real- or integer-valued parameters required for the solution. The input matrices can themselves be outcomes of previous commands. Each matrix can be accessed by the user for viewing, printing, or plotting purposes in an interactive mode. Furthermore, the user can alter the contents of each matrix or replace it with another matrix during the course of the analysis.

To master a theoretical subject such as random vibrations, it is essential to gain experience through repeated applications of the theory to a wide range of practical problems. Such experience is vital to gain confidence and a foothold in the subject. Without it, many a student is left with the impression that the subject is of only academic interest and irrelevant to engineering practice. To facilitate such applications, a series of commands have been developed in STOCAL-II that allow the synthesis of solutions to a variety of problems based on the approach described above. Examples throughout this paper will illustrate these commands and show the breadth of applications that can be studied with the program.

STOCAL-II enhances the learning process by experimentation, since it provides flexibility for parametric studies, or for exploring alternative solution approaches. For example, in the middle of a solution sequence, the user may replace the off-diagonal elements of the modal covariance matrix with zeros in order to investigate the effect of ignoring cross-modal correlations. Or, the user may consider two sequences of steps to compare the results of solutions based on two alternative approaches.
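The cross-modal experiment just described can also be sketched outside STOCAL-II. The following Python fragment uses hypothetical modal covariance values (not the paper's example) to contrast the exact mean-square response with the approximation obtained by zeroing the off-diagonal (cross-modal) terms:

```python
import numpy as np

# Hypothetical two-mode illustration (not the paper's example): lam[i, j]
# holds the modal covariance E[S_i S_j]; the off-diagonal terms carry the
# cross-modal correlation, which matters when modes are closely spaced.
lam = np.array([[4.0, 1.8],
                [1.8, 3.0]])
a = np.array([0.9, -0.7])                      # effective participation factors

exact = float(a @ lam @ a)                     # full modal superposition
approx = float(a @ np.diag(np.diag(lam)) @ a)  # off-diagonal terms zeroed

print(exact, approx)
```

With these numbers, ignoring the cross terms nearly doubles the mean-square estimate because the two modal responses are strongly correlated; for well-separated modes the off-diagonal covariances, and hence the discrepancy, would be small.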
Furthermore, parameter values at each step may be changed to investigate sensitivities. Through such experimentation, the student gains insight and a deeper understanding of the concepts behind the theory of random vibrations. Because of the flexibility in the choice of parameters for each command, it is possible for the


user of STOCAL-II to issue commands for theoretically inadmissible calculations. For example, the user may issue a command involving the second derivative of an oscillator response to a white-noise input, which does not exist. In such instances, the program is designed to issue an instructive message explaining why the command cannot be executed. Thus, the program allows the student to learn from his/her mistakes.

It is clear that no computer program can be a substitute for a rigorous, analytical study of the subject of random vibrations. In his teaching of the subject, the senior author uses an analytical approach involving full derivation of the important results and makes assignments to the students that require derivations and calculations by hand. STOCAL-II is used to demonstrate the results of the theory in class and is used by students to solve supplementary assignments which involve applications of the theory to realistic problems. Many students also use it in their research. Thus, STOCAL-II is used as a complement rather than a substitute for a traditional analytical teaching of the subject.
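The white-noise example mentioned above can be checked directly: the mean square of the mth derivative of an oscillator response to white noise is \int_0^\infty \omega^{2m} |H(\omega)|^2 \Phi_0 \, d\omega (times 2), which diverges for m = 2 since \omega^4 |H(\omega)|^2 \to 1 as \omega \to \infty. A short numerical sketch with hypothetical oscillator values:

```python
import numpy as np

def truncated_ms(m, wc, w0=10.0, zeta=0.05, phi0=1.0, n=200001):
    """Truncated integral 2*phi0 * int_0^wc w^(2m) |H(w)|^2 dw for an
    oscillator with H(w) = [w0^2 - w^2 + 2i*zeta*w0*w]^(-1)."""
    w = np.linspace(0.0, wc, n)
    dw = w[1] - w[0]
    h2 = 1.0 / ((w0**2 - w**2)**2 + (2.0*zeta*w0*w)**2)
    return 2.0 * phi0 * np.sum(w**(2*m) * h2) * dw

# Displacement (m = 0) converges; the second derivative (m = 2) does not:
# its truncated integral keeps growing roughly linearly with the cutoff.
for wc in (200.0, 2000.0, 10000.0):
    print(wc, truncated_ms(0, wc), truncated_ms(2, wc))
```

Raising the cutoff leaves the m = 0 result essentially unchanged while the m = 2 result grows without bound, which is why STOCAL-II refuses such a command.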


3. SYNTHESIS OF PROBLEMS IN RANDOM VIBRATIONS

In order to develop a computer-assisted learning system with the above attributes, it is necessary to formulate problems in random vibrations in terms of sequences of well-defined logical steps, such that a solution through a synthesis process can be achieved. In this section, we present such a formulation for multi-degree-of-freedom linear systems. The normal-mode approach is used for this purpose, since it is an effective and popular method in structural engineering. Most results reported here and in the following sections are not new. However, the manner of their formulation, which allows the solution by synthesis, is novel.

The equations of motion of an n degree-of-freedom linear system are given by

M\ddot{U}(t) + C\dot{U}(t) + KU(t) = F(t),  (1)

where M, C, and K are the n x n mass, damping and stiffness matrices, respectively, U, \dot{U}, and \ddot{U} are the n-vectors of generalized nodal displacements, velocities and accelerations, respectively, and F(t) is the n-vector function of external nodal forces. In general, F(t) can be expressed in the form

F(t) = \sum_{j=1}^{k} P_j F_j(t),  (2)

where P_j and F_j(t), j = 1, ..., k, are load coefficient vectors and load functions, respectively, and k denotes the number of separate load functions acting. In random vibrations, the F_j(t) are stochastic processes.

The preceding formulation also applies to a system excited by the motion of its base. In that case, P_j = -M I_j, where I_j is the influence vector representing the nodal displacements of the structure for a unit displacement of the base in direction j, and F_j(t) = A_j(t), where A_j(t) is the base (translational or rotational) acceleration in direction j. Note that for this case U denotes the generalized nodal displacements relative to the base of the structure.

In the present development, we consider only a single load function, i.e., F(t) = P F(t). However, if the F_j(t) are statistically independent, the response statistics can be obtained by simply adding the mean and variance terms arising from the individual load processes. Commands available in STOCAL-II for matrix operations can be used to carry out the addition. Correlated load processes can also be handled if information on the cross-correlations or cross-power spectral densities is provided. These generalizations, however, will not be further discussed here.

Employing the conventional normal-mode approach, the equations of motion are transformed into the decoupled form

\ddot{Y}_i(t) + 2\zeta_i \omega_i \dot{Y}_i(t) + \omega_i^2 Y_i(t) = \gamma_i F(t),  i = 1, 2, ..., n,  (3)

where Y^T = [Y_1, Y_2, ..., Y_n] are the normal-mode coordinates defined by the transformation U = \Phi Y, in which \Phi = [\phi_1, \phi_2, ..., \phi_n] is the matrix of mode shapes, \omega_i and \zeta_i are the modal frequencies and damping ratios (assuming classical damping), respectively, and \gamma_i = \phi_i^T P / (\phi_i^T M \phi_i) are the modal participation factors, in which the superposed T denotes the transpose of a vector or matrix. It is convenient to define S_i(t) = Y_i(t)/\gamma_i. Equation (3) then takes the normalized form

\ddot{S}_i(t) + 2\zeta_i \omega_i \dot{S}_i(t) + \omega_i^2 S_i(t) = F(t),  i = 1, 2, ..., n.  (4)
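The decomposition in eqns (1)-(4) can be sketched numerically. The following Python fragment uses a hypothetical 2-DOF system (not the paper's example structure) to compute modal frequencies, mass-normalized mode shapes, and participation factors for base excitation:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF shear building (NOT the paper's example structure).
M = np.diag([2.0, 1.0])                 # mass matrix
K = np.array([[600.0, -200.0],
              [-200.0,  200.0]])        # stiffness matrix
iota = np.ones(2)                       # influence vector for base motion
P = -M @ iota                           # load coefficient vector, P = -M*iota

# Generalized eigenproblem K*phi = w^2 * M*phi; eigh returns mass-normalized
# mode shapes (Phi^T M Phi = I) and eigenvalues in ascending order.
w2, Phi = eigh(K, M)
omega = np.sqrt(w2)                     # modal frequencies, rad/sec

# Modal participation factors gamma_i = phi_i^T P / (phi_i^T M phi_i)
gamma = (Phi.T @ P) / np.einsum('ij,ij->j', Phi, M @ Phi)

print(omega)
print(gamma)
```

The eigenvectors returned by `scipy.linalg.eigh` are already mass-orthonormal, so the denominator in the participation factors equals one; it is kept to mirror the general formula.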

S_i(t) can be interpreted as the response of a single-degree-of-freedom oscillator of unit mass, frequency \omega_i and damping ratio \zeta_i, to the forcing function F(t). Hereafter, we denote this as the ith normalized modal response function, or simply the ith modal response. A structural response quantity of interest, X(t) (e.g., a story drift, internal force, stress or strain component), in general can be expressed as a linear function of the nodal displacements, i.e.,

X(t) = q^T U(t),  (5)

where q^T = [q_1, ..., q_n] is a response transfer vector with constant elements. For example, for the relative displacement between degrees of freedom 1 and 2, q^T = [1, -1, 0, ..., 0]. More generally, when an internal force, stress or strain component is of interest, q contains the stiffness and geometric properties


Fig. 1. Example primary-secondary structure: (a) geometry and member properties, (b) assigned degrees of freedom.

of the relevant structural member. Using these definitions, the response can be written as

X(t) = q^T \Phi Y(t) = q^T \Phi \Gamma S(t) = a^T S(t),  (6)

where \Gamma = diag[\gamma_i] is an n x n diagonal matrix with the modal participation factors as its diagonal elements, S^T = [S_1, ..., S_n] and a^T = q^T \Phi \Gamma. Generalizing this formulation for an m-vector of response quantities X(t) = Q^T U(t), where Q is an n x m response transfer matrix, one has

X(t) = A^T S(t),  (7)

where A^T = Q^T \Phi \Gamma. In an expanded form, the expression for the kth response quantity is

X_k(t) = \sum_{i=1}^{n} a_{ik} S_i(t),  (8)

where the coefficient a_{ik}, the (i, k)th element of A, is associated with the ith mode and the kth response quantity. Hereafter, we denote A as the modal effective participation matrix. The advantage of the above formulation is that it separates the time-invariant system parameters contained in A from the time-dependent modal response functions S_i(t). In particular note that, while the elements of A change from response to response, S(t) remains fixed for all response quantities. A single calculation of S(t), therefore, is sufficient for determining all responses.

In STOCAL-II, the user is required to construct the matrix of modal effective participation factors by multiplication of the relevant structural matrices. Commands for formation of stiffness and mass matrices, for static condensation, and for eigenvalue analysis are available (as a part of CAL [12]) to facilitate these operations.

In the following two sections we present formulations and synthesis for second-moment random vibration analysis of stationary and nonstationary structural responses. Statistics of level crossings and extremes for Gaussian processes are described in a subsequent section. Specific commands available in STOCAL-II for each kind of analysis are described through example applications.

For the purpose of illustration, throughout this paper we analyze the example primary-secondary structure shown in Fig. 1. Using commands available in STOCAL-II from CAL [12], the stiffness matrix of the combined system is formed, the rotational degrees of freedom are eliminated by static condensation, and eigenvalue analysis is performed. The results for the modal frequencies and mode shapes, as well as the assumed modal damping ratios, are listed in Table 1. Note that the first two modes of the system are closely spaced due to tuning between the primary and secondary modes. The input excitation is assumed to be a translational acceleration process applied at the base of the structure. Its particular form is described

Table 1. Modal properties of example structure

                                              Mode shapes
Mode  Frequency (rad/sec)  Damping ratio  DOF  Mode 1   Mode 2   Mode 3    Mode 4   Mode 5
 1         10.17               0.05        1    0.677   -0.648    0.066    -2.029   -0.002
 2         11.08               0.05        2    2.066   -1.984    0.167     1.329   -0.003
 3         15.18               0.05        3   14.039   17.325   31.455     0.147   22.656
 4         54.54               0.05        4   15.676   16.357   -0.116     0.021  -22.061
 5         93.58               0.05        5   16.562   14.460  -31.674    -0.134   22.656


Table 2. Effective modal participation matrix

               Response
Mode    X1        X2        X3         X4
 1      0.771     5.847     14.390    -0.622
 2      0.587    -4.839     10.740     0.559
 3      0.005    -0.003      0.201     0.108
 4     -0.363    -0.006    221.526    -0.049
 5      0.000     0.001      0.001     0.000

for each example separately. Four response quantities are considered for the subsequent analysis: X_1, representing the horizontal displacement of the primary structure at node C; X_2, representing the horizontal displacement of the secondary system at node E; X_3, representing the shear force at the base of the structure; and X_4, representing the axial force in secondary spring CF. The corresponding matrix of modal effective participation factors is shown in Table 2.
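Forming the modal effective participation matrix amounts to the matrix product A^T = Q^T \Phi \Gamma of eqn (7). A small Python sketch with random placeholder matrices (purely illustrative, not the Table 2 values) verifies that the modal route A^T S reproduces the nodal route Q^T U:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
Phi = rng.normal(size=(n, n))        # mode shape matrix (columns), placeholder
gamma = rng.normal(size=n)           # modal participation factors, placeholder
Q = rng.normal(size=(n, m))          # response transfer matrix, placeholder

# Modal effective participation matrix: A^T = Q^T Phi Gamma, per eqn (7);
# a_ik is associated with mode i and response k.
A = np.diag(gamma) @ Phi.T @ Q

# Consistency check against the nodal route X = Q^T U, with U = Phi Y and
# Y_i = gamma_i S_i:
S = rng.normal(size=n)               # modal responses at some instant
X_modal = A.T @ S
X_nodal = Q.T @ (Phi @ (gamma * S))
print(np.allclose(X_modal, X_nodal))  # True
```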

4. STATIONARY RANDOM VIBRATION ANALYSIS

In the most general case, the second-moment statistical quantity of interest in stationary random vibration analysis is the cross-correlation function between derivatives of order m_k and m_l of two response quantities X_k(t) and X_l(t), respectively. Using the modal superposition rule in eqn (8), this can be written as

R_{X_k X_l}^{(m_k, m_l)}(\tau) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ik} a_{jl} R_{S_i S_j}^{(m_k, m_l)}(\tau),  (9)

where \tau = t_2 - t_1 is the time lag and R_{S_i S_j}^{(m_k, m_l)}(\tau) is the cross-correlation function between derivatives of order m_k and m_l of the modal responses S_i(t) and S_j(t), respectively. In stationary analysis, most typically the input excitation is specified in terms of a power spectral density \Phi_F(\omega), where \omega denotes the circular frequency. In terms of this function, the modal cross-correlation is

R_{S_i S_j}^{(m_k, m_l)}(\tau) = \int_{-\infty}^{\infty} (-i\omega)^{m_k} (i\omega)^{m_l} H_i(\omega) \bar{H}_j(\omega) \Phi_F(\omega) \exp(i\omega\tau) \, d\omega  (10)

in which H_i(\omega) = [\omega_i^2 - \omega^2 + 2i\zeta_i\omega_i\omega]^{-1} is the frequency-response function of mode i and the superposed bar indicates the complex conjugate. Observe that this generic term is specified solely in terms of the input excitation and the modal frequency and damping values. Closed-form solutions of the above integral for admissible values of m_k + m_l have been derived by the

authors for the four types of input power spectral densities described in Fig. 2 (see [13]). These are the white noise (I = 1), the banded linear noise (I = 2), the piecewise linear noise (I = 3), and the filtered white noise (I = 4) models. These solutions, which are long algebraic expressions, are coded in STOCAL-II and are accessed through the identifier I assigned to each model. The set of parameters, P = p_1, p_2, ..., and the admissible values of m_k + m_l for each model are also described in Fig. 2. Note that I = 3 signifies a piecewise linear function that can be used to approximately represent any general power spectral density shape. The coordinates (\omega_i, \Phi_i) of this function are stored in a matrix denoted P, and the parameter p_1 describes a scalar multiplier of the function. It is noted that the integral in eqn (10) diverges for values of m_k + m_l outside the specified range for the white noise and filtered white noise models.

The primary command in STOCAL-II to carry out the preceding analysis is SCF (Stationary Correlation Function). It has the format

SCF F D A1 A2 CF+ (P) [M=m1,m2 I=i P=p1,... TA=t1,t2 N=n L=l IC=i1,i2]

where F and D define vectors that contain the modal frequencies \omega_i and damping ratios \zeta_i, respectively, A1 and A2 are matrices containing modal effective participation factors (could be the same or two different matrices), CF is the output correlation function, the parameter identifiers I and P are as defined in Fig. 2, M defines the derivative orders m_k and m_l, TA defines the range \tau_1 \le \tau \le \tau_2 of the time lag, N defines the number of equally spaced values of \tau within the specified range, L defines the number of modes to be included in the analysis (i.e., less than or equal to the total number of modes), and IC defines the column numbers of A1 and A2 that store the effective modal participation factors for any two response quantities of interest. The conditional matrix P is applicable only to the case I = 3, and for that case it contains the set of coordinates (\omega_i, \Phi_i), i = 1, 2, ..., defining the piecewise linear power spectral density in the manner described in Fig. 2. The square brackets define optional parameters, i.e., those which have default values and may be deleted from the command line. More details on the specification of the matrices and parameters are given in the user's manual of STOCAL-II [14].

The command SCF can be used to compute the modal cross-correlation function in eqn (10) for any pair of modes i and j. For that purpose, it is necessary to define A1 and A2 as an n x n identity matrix I and set IC = i, j. Naturally, one obtains a modal autocorrelation function if m_k = m_l and i = j are specified, and a modal cross-correlation function otherwise.
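For the white noise model (I = 1), the modal cross-correlation of eqn (10) with m_k = m_l = 0 can also be evaluated by brute-force numerical integration. The sketch below (a numerical stand-in, not STOCAL-II's closed-form solution) recovers the classical white-noise variance \pi\Phi_0/(2\zeta\omega_0^3) at zero lag:

```python
import numpy as np

def modal_cross_corr(tau, wi, zi, wj, zj, phi0=100.0, wmax=200.0, n=400001):
    """Brute-force evaluation of eqn (10) for m_k = m_l = 0 and a
    white-noise input of (two-sided) intensity phi0."""
    w = np.linspace(-wmax, wmax, n)
    dw = w[1] - w[0]
    Hi = 1.0 / (wi**2 - w**2 + 2j*zi*wi*w)
    Hj = 1.0 / (wj**2 - w**2 + 2j*zj*wj*w)
    integrand = Hi * np.conj(Hj) * phi0 * np.exp(1j*w*tau)
    return float(np.real(np.sum(integrand) * dw))

# At zero lag with i = j this equals the classical white-noise variance
# pi*phi0 / (2*zeta*w0^3):
w0, zeta, phi0 = 10.0, 0.05, 100.0
print(modal_cross_corr(0.0, w0, zeta, w0, zeta, phi0))
print(np.pi * phi0 / (2.0*zeta*w0**3))
```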



Fig. 2. PSD models for stationary process: (a) white noise, (b) banded linear noise, (c) piecewise linear

noise, (d) filtered white noise.

As an example, Fig. 3 shows the cross-correlation function between the modal responses S_1 and \dot{S}_2 for a white noise input of intensity \Phi_0 = 100, obtained by issuing the commands:

SCF F D I I RS12+ I=1 P=100 TA=-5,5 N=201 M=0,1 IC=1,2
PLOT RS12.

Here we have used the default value of L (equal to the total number of modes). The command PLOT has a

number of subcommands for text or graph editing and for producing hard copies of the plots. Figure 3 and all subsequent figures in this paper are produced by this command, except for the mathematical expressions included therein.

Fig. 3. Cross-correlation function of modal responses S_1(t) and \dot{S}_2(t).

After repeating the SCF command for all mode pairs, one can use matrix operations available in STOCAL-II to perform the modal superposition in eqn (9) for any pair of responses. A more direct approach, however, is to use the command SCF with the proper modal effective participation matrix. One obtains an auto-correlation function if the same

response quantity is specified by the M and IC parameters, and a cross-correlation function if two different response quantities or two different derivatives of the same response are specified. In this application of the command, the individual modal auto- or cross-correlation functions R_{S_i S_j}^{(m_k, m_l)}(\tau) are not stored. Thus, these modal quantities are recomputed each time the SCF command is used in this direct manner. As an example, Fig. 4 shows the auto- and cross-correlation functions of X_3 (the base shear) and X_4 (the secondary spring force) produced by the commands

SCF F D A A RX33+ I=1 P=100 TA=0,5 N=401 IC=3,3
SCF F D A A RX44+ I=1 P=100 TA=0,5 N=201 IC=4,4
SCF F D A A RX34+ I=1 P=100 TA=-5,5 N=201 IC=3,4

where A represents the modal effective participation matrix in Table 2. Note that the default values M = 0,0 and L = 5 are used in this application. It is interesting to note in Fig. 4 that the primary response is strongly correlated with the secondary response at later times, i.e., for negative values of the time lag \tau. A third approach for computing the above correlation functions will be described shortly.

A second command for stationary random vibration analysis is SPSD (Stationary PSD), which computes the auto- or cross-power spectral density function of any pair of response derivatives. The mathematical expression is given by

Fig. 4. Auto- and cross-correlation functions of responses X_3(t) and X_4(t).

\Phi_{X_k X_l}^{(m_k, m_l)}(\omega) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ik} a_{jl} \Phi_{S_i S_j}^{(m_k, m_l)}(\omega),  (11)

in which

\Phi_{S_i S_j}^{(m_k, m_l)}(\omega) = (-i\omega)^{m_k} (i\omega)^{m_l} H_i(\omega) \bar{H}_j(\omega) \Phi_F(\omega)  (12)

represents the cross-power spectral density between the m_kth and m_lth derivatives of the modal responses S_i(t) and S_j(t), respectively. The command has the format

SPSD F D A1 A2 PSD+ (P) [M=m1,m2 I=i P=p1,... W=w1,w2 N=n L=l IC=i1,i2]

in which all the terms are as defined earlier, except that PSD defines the output PSD function and W and N define the frequency range \omega_1 \le \omega \le \omega_2 and the number of frequency points, respectively. As before, the

command can be used to compute the pairwise modal auto- or cross-power spectral density functions, or the same functions directly for any pair of response quantities. As an example, Fig. 5 shows the real and imaginary parts of the cross-power spectral density of the responses X_3 and X_4 to a filtered white noise input having the parameters \Phi_0 = 100, \omega_f = 15.7 rad/sec and \zeta_f = 0.6, which is generated by issuing the command

SPSD F D A A PX34+ I=4 P=100,15.7,0.6 W=-20,20 N=401 IC=3,4

in which PX34 is the matrix storing the real and imaginary parts of the computed cross PSD. It is seen that the contributions of modes higher than the second mode are negligible.
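The superposition in eqns (11) and (12) is easy to mimic numerically. The sketch below assumes a Kanai-Tajimi form for the filtered white noise model (the paper's I = 4 parameters \Phi_0, \omega_f, \zeta_f; the exact STOCAL-II filter definition is given in Fig. 2) and uses the first three modal frequencies of Table 1 with the X_3 and X_4 columns of Table 2:

```python
import numpy as np

def kanai_tajimi(w, phi0, wf, zf):
    """Filtered white noise PSD (assumed Kanai-Tajimi form, an
    approximation of the paper's I = 4 model)."""
    num = wf**4 + (2.0*zf*wf*w)**2
    den = (wf**2 - w**2)**2 + (2.0*zf*wf*w)**2
    return phi0 * num / den

def response_cross_psd(w, omega, zeta, ak, al):
    """Eqns (11)-(12) with m_k = m_l = 0: superpose modal cross-PSDs."""
    H = 1.0 / (omega[:, None]**2 - w[None, :]**2
               + 2j*zeta[:, None]*omega[:, None]*w[None, :])
    phiF = kanai_tajimi(w, 100.0, 15.7, 0.6)
    # sum_i sum_j a_ik a_jl H_i(w) conj(H_j(w)) phiF(w)
    return np.einsum('i,j,iw,jw,w->w', ak, al, H, np.conj(H), phiF)

omega = np.array([10.17, 11.08, 15.18])   # first three frequencies, Table 1
zeta = np.full(3, 0.05)                   # damping ratios, Table 1
ak = np.array([14.390, 10.740, 0.201])    # column X3 of Table 2
al = np.array([-0.622, 0.559, 0.108])     # column X4 of Table 2
w = np.linspace(-20.0, 20.0, 401)
psd = response_cross_psd(w, omega, zeta, ak, al)
print(psd[200].real)                      # value at w = 0 (imaginary part ~ 0)
```

A cross-PSD is Hermitian in frequency, so its real part is even and its imaginary part is odd, which matches the symmetry visible in Fig. 5.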

Fig. 5. Cross-power spectral density of responses X_3(t) and X_4(t).

Several commands are available in STOCAL-II for Fourier transform analysis, including one employing the FFT algorithm and one for a piecewise linear function. Together with the SPSD command, these offer an alternative approach for computing response correlation functions. Namely, for any pair of desired responses, one first computes the auto- or cross-power spectral density, and then uses the inverse Fourier transform to compute the corresponding auto- or cross-correlation function. For example, the command IFTD PX34 RX34+ generates the cross-correlation function of X_3 and X_4 for the filtered white noise input shown at the bottom of Fig. 4. The accuracy of the results generated by the SCF command can be verified through this alternative approach.

For determining the statistics of crossings and extremes of a stationary response process X_k(t) (to be discussed in Sec. 6), the primary quantities of interest are the spectral moments, which are defined by [15, 16]

\lambda_{m,kk} = 2 \int_0^{\infty} \omega^m \Phi_{X_k X_k}(\omega) \, d\omega = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ik} a_{jk} \lambda_{m,ij},  m = 0, 1, 2, ...,  (13)

where

\lambda_{m,ij} = 2 \, \mathrm{Re} \int_0^{\infty} \omega^m H_i(\omega) \bar{H}_j(\omega) \Phi_F(\omega) \, d\omega  (14)

is a modal cross-spectral moment of order m. Introducing the dimensionless coefficients

\rho_{m,ij} = \frac{\lambda_{m,ij}}{\sqrt{\lambda_{m,ii} \lambda_{m,jj}}},  (15)

equation (13) can be written in the form

\lambda_{m,kk} = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ik} a_{jk} \rho_{m,ij} \sqrt{\lambda_{m,ii} \lambda_{m,jj}}.  (16)

We note that for even m, \lambda_{m,kk} and \lambda_{m,ii} represent the mean squares of derivatives of order m/2 of X_k(t) and S_i(t), respectively, whereas \rho_{m,ij} represents the correlation coefficient between derivatives of order m/2 of S_i(t) and S_j(t). For odd m, these quantities are related to the Hilbert transform of the process that is employed in defining the envelope process (see Sec. 6). The advantage gained from the formulation in eqn (16) is that it allows using simple approximations of \rho_{m,ij}, together with exact results for \lambda_{m,ii}, which are much easier to compute than the cross terms \lambda_{m,ij} for i \ne j, to obtain good approximations of the spectral moments. An example below illustrates this point.

Several commands are available in STOCAL-II to compute the spectral moments. The command SMSM (Stationary Modal Spectral Moments) has the format

SMSM F D SLAM+ SRHO+ (P) I=i P=p1,p2,... [M=m L=l]

in which all terms are as defined earlier, except that M now defines the order m of the spectral moment and SLAM and SRHO respectively define an l-vector and an l x l matrix that store the \lambda_{m,ii} and \rho_{m,ij} values for the first l modes. A second command, SMR, with the format

SMR SLAM SRHO A1 A2 RLAM+ [IC=i1,i2]

is available that combines these modal values in accordance with eqn (16) to compute the spectral moment \lambda_{m,kk} (or the cross-spectral moment for two different responses), which is stored in RLAM. A


further command, SRSM, directly computes the spectral moments \lambda_0, \lambda_1, \lambda_2, and \lambda_4 of the response in accordance with eqns (13) and (14). The command has the format

SRSM F D A1 A2 RLAM+ (P) I=i P=p1,... [L=l IC=i1,i2]

where all the terms are as defind earlier, except that RLAM now is a 4 x 1 vector that contains the four spectral moments. As shown in Sec. 6.1, these four moments define most statistics of the stationary response that are of engineering interest. As an example application of the above we examine the effect of using commands, approximate expressions for the modal crosscorrelations P,,,~~,i #j, or the effect of entirely ignoring them. The set of commands SlMSM F D SLO+ SRO+ I=4

computes an approximation of the spectral moment & (stored in RUNO) that neglects the cross-modal correlations. This process is repeated for spectral moments R, and ,i, and also for a broader band filtered white noise input with I$= 0.6. The resuhs are listed in Table 3 and show that the white-noise approximation of P,,,~~is quite acceptable when the input is broad band and that ignoring the modal cross-correlations leads to gross errors. This example serves to demonstrate one of the many facilities that STOCAL-II has for numerical experimentation and parametric studies. Several additional commands are available in STOCAL-II for computing spectral moments of stationary response. These include a command SM for computing the spectral moments for a given power spectral density, and two commands RCQC and RSM for computing the spectral moments based on the response spectrum definition of the input (used in earthquake engineering), which employ a fo~ulation described in Der Kiureghian 1171.

P = 100,6.280.2 M = 0 SMR

SLO SRO A A RFWO+

5. NONSTATIONARY RANDOM VIRRATION ANALYSIS

IC= I,1

computes the modal &and P,,,~~ values (stored in SLO and SRO, respectively) and the spectral moment & of the response X, (stored in RFWO) for a filtered white noise input with parameters 4p0= 100, o,= 6.28 rad/sec, and t;/= 0.2. The command SMSM

F D SLWO+

983

A variety of models are available for describing nonstationary processes. For implementation in STOCAL-II, we have chosen the evolutionary process model of Priestly [18], which has a p~ticularly convenient input~utput relation for linear systems. A process F(t) of this type is defined in terms of a Fourier-Stieltjes integral m

SRWO+

I= 1

r;(t) =

A (a, t) exp(iot) dS(w) s -m

P=lOO

M=O

computes the modal & and pO+uvalues for response to white noise. An approximate estimate of & (stored in RWNO) now can be obtained by issuing the command SMR

SLkl SRWO A A RWNO+

IC= 1,l.

Note that the approximate correlation matrix SRWO has been used in place of the exact one, SRO. Furthermore, let I define a 3 x 3 identity matrix, The command SMR

SLO I A A RUNO+

in which A(w, I) is a frequency-time modulating function and dS(w) is an orthogonal-increment process having the property E[dS(w,)d&+)J

= @( 0,)6(w, - w2) dw, dwz (18)

in which ‘P(a) is a non-negative even function and S ( .) denotes the Dirac delta function. The autocorrelation function of the process is given by

IC= 1,1.

Table 3. Comparison of spectral moments for response X, to filtered white noise r/= 0.2 sDectra1 moment

Exact

t

l,= 0.6

Appr0x.t

A.pprox.t

Anurox.?

ADnr0x.t

25.65 3.23

23.74 2.92

15.95 1.95

28.88 3.12

28.61 3.05

18.94 2.01

223.22

213.84

141.86

287.20

290.00

189.35

Exact

I r,

tApproximation SApproximation

(17)

employing P,,,~,based on response to white noise. by neglecting cross-modal correlations, i.e., setting P,,,~)= 0.

A. DERKI~~IAN

984

which for t1 = t2 = t yields

E[F^2(t)] = R_{FF}(t, t) = \int_{-\infty}^{\infty} |A(\omega, t)|^2 \Phi(\omega) \, d\omega   (20)

This defines the time-dependent power spectral density function of the process as

\Phi_{FF}(\omega, t) = |A(\omega, t)|^2 \Phi(\omega)   (21)

A special form of the above definition is obtained when A(ω, t) is considered to be a real function of time only. In that case, eqn (17) reduces to

F(t) = A(t) Y(t)   (22)

where Y(t) is a stationary process having the power spectral density Φ(ω), and eqn (21) becomes

\Phi_{FF}(\omega, t) = A^2(t) \Phi(\omega)   (23)

which shows that the spectral shape remains constant but its amplitude is modulated in time. This class of nonstationary processes, first used in structural engineering by Shinozuka [18], is said to be uniformly modulated.

The cross-correlation and cross-power spectral density functions of the derivatives of order mk and ml of two generic responses Xk(t) and Xl(t), respectively, are given by the superposition rules

R^{(m_k, m_l)}_{X_k X_l}(t_1, t_2) = \sum_i \sum_j a_{ki} \, a_{lj} \, R^{(m_k, m_l)}_{ij}(t_1, t_2)   (24)

\Phi^{(m_k, m_l)}_{X_k X_l}(\omega, t) = \sum_i \sum_j a_{ki} \, a_{lj} \, \Phi^{(m_k, m_l)}_{ij}(\omega, t)   (25)

in which R^{(m_k, m_l)}_{ij}(t_1, t_2) and \Phi^{(m_k, m_l)}_{ij}(\omega, t) are the corresponding modal functions. Generalizing a formulation used by Hammond [19] and Howell and Lin [20], the cross-power spectral density of the modal derivatives can be written in the form

\Phi^{(m_k, m_l)}_{ij}(\omega, t) = M_i^{(m_k)}(\omega, t) \, \bar{M}_j^{(m_l)}(\omega, t) \, \Phi(\omega)   (26)

in which

M_i^{(m_k)}(\omega, t) = \exp(-i\omega t) \, \frac{\partial^{m_k}}{\partial t^{m_k}} \left[ M_i(\omega, t) \exp(i\omega t) \right]   (27)

M_i(\omega, t) = \int_0^t A(\omega, \tau) \, h_i(t - \tau) \exp[-i\omega(t - \tau)] \, d\tau   (28)

In the preceding equation, hi(·) is the unit-impulse-response function for mode i, which for the system under consideration is given by

h_i(t) = \frac{1}{\omega_{Di}} \exp(-\zeta_i \omega_i t) \sin(\omega_{Di} t) \, H(t)   (29)

where ωDi = ωi(1 − ζi²)^{1/2} is the damped natural frequency of mode i and H(t) is the Heaviside function. Analogous to eqn (19), the modal cross-correlation function is given by

R^{(m_k, m_l)}_{ij}(t_1, t_2) = \int_{-\infty}^{\infty} M_i^{(m_k)}(\omega, t_1) \, \bar{M}_j^{(m_l)}(\omega, t_2) \, \Phi(\omega) \exp[i\omega(t_1 - t_2)] \, d\omega   (30)

It is clear that the generic term for nonstationary response analysis is the integral in the preceding equation, which itself involves two single-fold integrals as defined in eqns (27) and (28). For implementation in STOCAL-II, closed-form solutions of these integrals are derived for the special case where Φ(ω) is a piecewise linear function of ω (i.e., cases I = 1, 2, or 3 in Fig. 2) and A(ω, t) is a piecewise linear and real function of ω and t (including the product term ωt). These solutions are long algebraic expressions reported in Wung and Der Kiureghian [13] and will not be presented here. It is noted that, while the function A(ω, t) in general can be complex, in the present implementation it is restricted to having only real values.

The primary commands in STOCAL-II for nonstationary random vibration analysis are ECF, TCF, EPSD, and TPSD. The first two compute the auto- or cross-correlation function of a pair of response quantities, and the latter two compute the corresponding evolutionary power spectral density functions. Commands starting with the letter E are for the general case of an evolutionary input; commands starting with the letter T are for the special case of a uniformly modulated input, i.e., when A(ω, t) = A(t). Although the more general commands ECF and EPSD can be used for this special case, the commands TCF and TPSD employ simpler expressions and are more efficient to compute. The command ECF has the format

ECF F D A1 A2 AWT R+ (P) I = i P = p1, ... T1 = t1b, t1e [T2 = t2b, t2e N = n M = mk, ml L = l IC = i1, i2]

in which AWT is a matrix storing the coordinate points for the piecewise linear modulating function A(ω, t), T1 and T2 define the ranges t1b ≤ t1 ≤ t1e and t2b ≤ t2 ≤ t2e for the two time coordinates, respectively, N defines the number n of equally spaced points

Fig. 6. Modulating function for evolutionary input process.

within the two ranges, and the remaining terms are as defined earlier. The output matrix R stores the auto- or cross-correlation values for the n pairs of t1 and t2. The command EPSD has a similar format

EPSD F D A1 A2 AWT PSD+ (P) I = i P = p1, ... T1 = t11, t12, ... [T2 = t21, t22, ... W = w1, w2 N = n M = mk, ml L = l IC = i1, i2]

in which all the terms are as defined earlier, except that T1 and T2 now define selected values of the two time coordinates t1 and t2, respectively, W defines the frequency range w1 ≤ ω ≤ w2, and N defines the number n of equally spaced frequency points. The output matrix PSD stores the evolutionary auto- or cross-power spectral density values (both real and imaginary parts) for the selected pair of responses at all specified time and frequency points. The commands TCF and TPSD have similar formats, except that instead of AWT a matrix AT describing the piecewise linear function A(t) is used. As in the case of the SCF and SPSD commands, the above four commands can be used either to compute the pairwise modal responses [which may then be superimposed to compute the response values according to eqns (24) and (25)], or to directly compute the auto- or cross-correlation function or the evolutionary power spectral density function for any selected pair of responses.

For illustration of the above commands, consider an evolutionary input process defined in terms of the piecewise linear modulating function A(ω, t) in Fig. 6 and Φ(ω) = 100. A sample function of this process, shown in Fig. 7 and generated by the STOCAL-II command GEGP, clearly shows the nonstationary character of the process both in time and in frequency content. In the following, we give various results generated for the responses X3 and X4 to this evolutionary input. Figure 8 shows the evolutionary auto- and cross-power spectral densities at times

t = 5, 10, 15, and 20 sec, which have been generated by repeatedly issuing the command

EPSD F D A A AWT EPkl+ I = 1 P = 100 W = 0,20 N = 301 T1 = 5,10,15,20 IC = k,l

for k = l = 3, k = l = 4, and k = 3, l = 4. Figure 9 shows the cross-correlation function E[X3(t1)Ẋ3(t2)] for 0 ≤ t1 ≤ 20 and t2 = 10 sec, which has been generated by issuing the command

ECF F D A A AWT CC33 I = 1 P = 100 T1 = 0,20 T2 = 10,10 N = 303 M = 0,1 IC = 3,3

and Fig. 10 shows the cross-correlation function E[X3(t)X4(t)] for 0 ≤ t ≤ 20, generated by issuing the command

ECF F D A A AWT CC34 I = 1 P = 100 T1 = 0,20 N = 103 IC = 3,4.

Note that the default value T2 = T1 of the second time parameter is used in the EPSD and the last ECF commands.

Two more commands, ERMS and TRMS, are available for nonstationary random vibration analysis. These commands compute the variances and cross-correlation coefficient functions of a selected response X(t) and of its first two derivatives, Ẋ(t) and Ẍ(t). These results can be generated by repeated applications of commands ECF or TCF by specifying T1 = T2, i1 = i2, and mk = ml = 0, 1, 2. However, commands ERMS and TRMS are more direct and employ more efficient computations. The command ERMS has the format

ERMS F D A AWT VR+ (P) I = i P = p1, ... T = t1, t2 [N = n L = l IC = i1]

in which T and N define the range t1 ≤ t ≤ t2 and the number n of time points, respectively, IC defines the column number of the effective participation matrix A for the response quantity of interest, and VR is an n × 7 matrix containing the time points t, the variance functions σX²(t), σẊ²(t), and σẌ²(t), and the correlation coefficient functions ρXẊ(t), ρXẌ(t), and ρẊẌ(t). The command TRMS has a similar format but employs a modulating function of time only. As an example, Fig. 11 shows the variance and cross-correlation coefficient functions for the response X3 and its two derivatives, obtained by issuing the command

ERMS F D A AWT VR3+ I = 1 P = 100 T = 0,20 N = 203 IC = 3.

As described in the following section, these functions are essential for computing the statistics of crossings and extremes of nonstationary processes.

Fig. 7. Sample function of evolutionary input process.

Fig. 8. Evolutionary auto- and cross-PSD functions of responses X3(t) and X4(t).

Fig. 9. Cross-correlation function of X3(t1) and Ẋ3(t2).

Fig. 10. Cross-correlation function of X3(t) and X4(t).
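For the special case of a uniformly modulated input, a sample function such as the one in Fig. 7 can be imitated with the standard spectral-representation method. The sketch below is illustrative only and is not the GEGP implementation; the flat spectrum Φ(ω) = φ, the cutoff frequency, and the modulating shape A(t) are all assumptions made for this example (the actual input of Fig. 7 uses the frequency-time modulating function of Fig. 6).

```python
import numpy as np

# Illustrative sketch (not the GEGP implementation): one realization of the
# uniformly modulated process F(t) = A(t) Y(t) of eqn (22).  Y(t) is a
# stationary Gaussian process with constant two-sided PSD Phi(w) = phi over
# |w| <= wmax, simulated by a spectral-representation sum of cosines with
# independent uniform random phases.
rng = np.random.default_rng(0)
phi, wmax, K = 100.0, 20.0, 256           # assumed PSD level, cutoff, no. of terms
dw = wmax / K
wk = (np.arange(K) + 0.5) * dw            # one-sided frequency grid

def A(t):
    # Assumed modulating shape for the example (not the A(w, t) of Fig. 6).
    return (t / 4.0) * np.exp(1.0 - t / 4.0)

def sample(t):
    """One realization of F on the time grid t."""
    theta = rng.uniform(0.0, 2.0 * np.pi, K)
    # each cosine carries variance 2*Phi(wk)*dw, so Var[Y] = 2*phi*wmax
    Y = np.sqrt(4.0 * phi * dw) * np.cos(np.outer(t, wk) + theta).sum(axis=1)
    return A(t) * Y

t = np.linspace(0.0, 20.0, 2001)
F = sample(t)   # E[F(t)^2] tracks A(t)^2 * 2*phi*wmax, consistent with eqn (20)
```

Averaging F(t)² over many such realizations reproduces the evolutionary variance A²(t)·Var[Y], i.e., eqn (20) for this uniformly modulated case.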

Fig. 11. Variance and cross-correlation functions of X3(t), Ẋ3(t), and Ẍ3(t).
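The kind of variance build-up plotted in Fig. 11 can be sketched for a single mode by brute force: setting mk = ml = 0 and t1 = t2 = t in eqn (30) gives σ²(t) = ∫ |Mi(ω, t)|² Φ(ω) dω, with Mi(ω, t) from eqn (28). The sketch below evaluates both integrals by simple quadrature for an assumed single mode and a suddenly applied white noise, i.e., A(ω, τ) = 1 and Φ(ω) = Φ0; STOCAL-II itself uses the closed-form solutions mentioned above. For this input, σ²(t) must grow toward the stationary value πΦ0/(2ζiωi³).

```python
import numpy as np

# Brute-force sketch of eqns (28) and (30) with m_k = m_l = 0 and t1 = t2 = t,
# for one mode and a suddenly applied white-noise input (parameters assumed).
phi0, wi, zi = 100.0, 6.28, 0.2
wDi = wi * np.sqrt(1.0 - zi**2)

def h(u):                                   # modal unit-impulse response, eqn (29)
    return np.exp(-zi * wi * u) * np.sin(wDi * u) / wDi

def M(w, t, n=4000):                        # eqn (28) with A(w, tau) = 1,
    du = t / n                              # midpoint quadrature in u = t - tau
    u = (np.arange(n) + 0.5) * du
    return np.sum(h(u) * np.exp(-1j * w * u)) * du

def var(t, wmax=60.0, nw=1200):             # sigma^2(t) = int |M(w,t)|^2 phi0 dw
    dw = 2.0 * wmax / nw
    w = -wmax + (np.arange(nw) + 0.5) * dw
    return sum(phi0 * abs(M(wk, t)) ** 2 for wk in w) * dw

var_stationary = np.pi * phi0 / (2.0 * zi * wi**3)   # exact stationary variance
print(var(8.0), var_stationary)             # close once the transient has decayed
```

With ζiωi ≈ 1.26, the transient factor exp(−2ζiωit) is negligible by t = 8 sec, so the quadrature result essentially equals the stationary variance.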


6. STATISTICS OF CROSSINGS AND EXTREMES

From a safety analysis viewpoint, the structural response quantities that are of primary interest are the crossings of the process at specified thresholds and the local and extreme peaks. In this section, we describe formulae and commands that are available in STOCAL-II for such analyses. Throughout this section we assume that the input process and, hence, the response are Gaussian. This is because available solutions are mostly restricted to this class of process. Furthermore, the processes are assumed to have zero means, unless specifically noted.

Two well-known results due to Rice [21] are fundamental to the analysis in this section. One is the formula for the mean rate of upcrossings of X(t) above the level a at time t, which is given by

\nu_X(a+, t) = \int_0^{\infty} \dot{x} \, f_{X\dot{X}}(a, \dot{x}; t) \, d\dot{x}   (31)

and the other is the formula for the mean rate of local peaks of X(t) above the level a at time t, which is given by

\mu_X(a, t) = -\int_a^{\infty} \int_{-\infty}^{0} \ddot{x} \, f_{X\dot{X}\ddot{X}}(z, 0, \ddot{x}; t) \, d\ddot{x} \, dz   (32)

We note that a local peak occurs when the derivative process Ẋ(t) down-crosses the zero level. In the above expressions, f_{XẊ}(z, ẋ; t) and f_{XẊẌ}(z, ẋ, ẍ; t) denote the joint probability density functions of X(t) and its derivatives at time t. The cumulative distribution function of the local peaks is given by the fraction of peaks below the level a [22], i.e.

F_P(a, t) = 1 - \frac{\mu_X(a, t)}{\mu_X(-\infty, t)}   (33)

The corresponding probability density function can be obtained by differentiation. For a Gaussian process, X(t), Ẋ(t), and Ẍ(t) are jointly Gaussian. As a result, closed-form solutions for the above integrals can be derived. These and other results are described in the following two sections together with the corresponding STOCAL-II commands for stationary and nonstationary analysis.

6.1. Stationary Gaussian process

For a stationary Gaussian process X(t), the statistics of interest are given in terms of the four spectral moments λ0, λ1, λ2, and λ4. For a response process, these can be computed by using the STOCAL-II commands SMSM, SMR, SRSM, or RSM in the manner described in Sec. 4. Moreover, for a stationary process with its power spectral density specified by one of the four types listed in Fig. 2, these moments can be computed by using a command named SM. We recall that λ0 = σX², λ2 = σẊ², and λ4 = σẌ² are the variances. Furthermore, it can be shown [23] that λ1 = −E[Ẋ(t)X̂(t)], where X̂(t) is the Hilbert transform of X(t).

The mean rate of upcrossing of level a by a zero-mean stationary Gaussian process is given by [21]

\nu_X(a+) = \frac{1}{2\pi} \left( \frac{\lambda_2}{\lambda_0} \right)^{1/2} \exp\left( -\frac{a^2}{2\lambda_0} \right)   (34)

This result is independent of t due to stationarity. The cumulative distribution function of the local peaks is given by [24]

F_P(a) = \Phi\left[ \frac{r}{(1 - \alpha^2)^{1/2}} \right] - \alpha \exp\left( -\frac{r^2}{2} \right) \Phi\left[ \frac{\alpha r}{(1 - \alpha^2)^{1/2}} \right]   (35)

in which Φ(·) denotes the standard normal cumulative probability, r = a/λ0^{1/2}, and α = λ2/(λ0 λ4)^{1/2}. The coefficient α lies between 0 and 1 and is a measure of the bandwidth of the process: α near 1 indicates a narrow-band process, and α near zero indicates a wide-band process.

In order to determine the distribution of the extreme peak (i.e., the largest peak over a specified duration) of a narrow-band process, the concept of an envelope process is introduced. The envelope of a narrow-band process X(t) is defined as a smoothly varying process E(t) such that E(t) ≥ X(t) for all t and E(t) ≈ X(t) at or near the local peaks of X(t). In particular, the envelope due to Cramer and Leadbetter [25] is defined by

E^2(t) = X^2(t) + \hat{X}^2(t)   (36)

It can be shown that this definition of E(t) satisfies the requirements just stated. Furthermore, it can be shown that when X(t) is stationary and Gaussian, E(t) and Ė(t) are statistically independent and have the first-order probability density functions

f_E(u) = \frac{u}{\lambda_0} \exp\left( -\frac{u^2}{2\lambda_0} \right)   (37)

f_{\dot{E}}(v) = \frac{1}{\delta (2\pi \lambda_2)^{1/2}} \exp\left( -\frac{v^2}{2\delta^2 \lambda_2} \right)   (38)

where δ = [1 − λ1²/(λ0 λ2)]^{1/2}. The coefficient δ is also bounded by 0 and 1 and it offers an alternative measure of the bandwidth of the process: δ near zero indicates a narrow-band process, and δ near 1 indicates a wide-band process [26]. Using the preceding results in eqn (31), the mean rate of upcrossing of level a by the envelope process is

\nu_E(a+) = (2\pi)^{1/2} \delta r \, \nu_X(a+)   (39)

where r = a/λ0^{1/2} as defined earlier. The ratio νX(a+)/νE(a+) = [(2π)^{1/2} δ r]^{−1}, which equals the average number of upcrossings of X(t) per average upcrossing of E(t), is known as the average 'clump size' [27]. Excluding the upcrossings of E(t) that contain no upcrossing of X(t), Vanmarcke [26] has derived a modified measure of the clump size for 'qualified' upcrossings of E(t), which is given by

E[\text{clump size}] = \frac{1}{1 - \exp[-(2\pi)^{1/2} \delta r]}   (40)

The average clump size is a measure of dependence between the crossings of X(t).

The extreme peak of the process X(t) over a duration (0, τ) is defined as

X_e = \max_{0 \le t \le \tau} X(t)   (41)

However, for a zero-mean process, the definition

X_e = \max_{0 \le t \le \tau} |X(t)|   (42)

is of more interest. The distribution of Xe is closely related to the first-passage time probability. Exact solutions of this problem are not available, but several approximate solutions based on simple models of the crossing events are available. The simplest of these models assumes the level crossings of X(t) to be Poisson events. This, however, is not a good assumption for a narrow-band process, where level crossings tend to occur in clumps. Furthermore, the Poisson process does not account for the time spent above a level, during which no upcrossing can occur. A better approximation that accounts for these effects is developed by Vanmarcke [26], based on the assumption of independent crossings by the envelope process and employing the clump size expression in eqn (40). The resulting distribution of Xe is

F_{X_e}(a) = \left[ 1 - \exp\left( -\frac{r^2}{2} \right) \right] \exp\left\{ -\nu_e \tau \, \frac{1 - \exp[-(\pi/2)^{1/2} \delta_e r]}{\exp(r^2/2) - 1} \right\}   (43)

in which δe = (2δ)^{1.2} and νe = νX(0+) for the peak in eqn (41), and δe = δ^{1.2} and νe = 2νX(0+) for the peak in eqn (42). Approximate expressions for the mean and standard deviation of this distribution are given in Der Kiureghian [16] in the form μXe = p σX and σXe = q σX, where p = p(νe τ, δ) and q = q(νe τ, δ) are the peak factors.

The primary STOCAL-II command for the above analysis is SSGP (Statistics of Stationary Gaussian Process). It has the format

SSGP LAM X = x1, x2 N = n T = τ MU = μ

in which LAM is a vector containing the spectral moments λ0, λ1, λ2, and λ4, X defines the range x1 ≤ a ≤ x2 of levels of interest, N defines the number of equally spaced levels within the specified range, T defines the duration τ, and MU defines the mean μ of the process. The extreme peak definition in eqn (41) is used if MU is specified (even if μ = 0), and the definition in eqn (42) with μ = 0 is used if MU is not specified. For non-zero mean, the above formulae are adjusted by replacing the level a by a − μ.

As an example, Fig. 12 shows various statistics of the response X1 to a filtered white noise input with parameters Φ0 = 100, ωf = 15.7, and ζf = 0.6 that are generated by issuing the commands

SRSM F D A A LAM+ I = 4 P = 100,15.7,0.6 IC = 1,1
SSGP LAM X = 3,9 N = 3 T = 10.

As described in Sec. 4, the first command computes the spectral moments λ0, λ1, λ2, and λ4, which are then used in the SSGP command to compute all the relevant statistics for the specified thresholds and time duration that are shown in Fig. 12.

Two other commands are available for computing the distributions of the peaks for plotting purposes. They are

LPKD LAM LP+ [R = r1, r2 N = n]
EXTD LAM EP+ T = τ [R = r1, r2 N = n MU = μ].

The first command computes the probability density and cumulative distribution functions of the local peak (stored in LP), and the second command computes the same functions for the extreme peak over the duration T (stored in EP). In the above commands, R denotes the range of levels of interest in units of standard deviation of the process (i.e., square root of λ0) and N denotes the number of equally spaced levels within the range. LAM, T, and MU are as defined earlier. As an example, Fig. 13 shows distributions of the local and extreme peaks for the responses X1 and X2 to the above filtered white noise input with τ = 10 sec, which are generated by issuing the above two commands.
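The chain from spectral moments to crossing and peak statistics is short enough to sketch in a few lines of code. The following is illustrative only (it is not the SSGP implementation); when fed the four spectral moments printed in Fig. 12, it reproduces the tabulated statistics to within the rounding of those printed inputs.

```python
from math import sqrt, pi, exp, erf

# Sketch of eqns (34), (35), (40) and (43) for a stationary Gaussian process
# defined by its spectral moments l0, l1, l2, l4 (values taken from Fig. 12).
l0, l1, l2, l4 = 2.5549**2, 8.1711**2, 26.732**2, 314.96**2

delta = sqrt(1.0 - l1**2 / (l0 * l2))       # bandwidth measure for the envelope
alpha = l2 / sqrt(l0 * l4)                  # bandwidth measure for the peaks
nu0 = sqrt(l2 / l0) / (2.0 * pi)            # mean zero-upcrossing rate

def Phi(x):                                 # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def nu(a):                                  # eqn (34): mean upcrossing rate
    return nu0 * exp(-a**2 / (2.0 * l0))

def F_peak(a):                              # eqn (35): CDF of local peaks
    r, b = a / sqrt(l0), sqrt(1.0 - alpha**2)
    return Phi(r / b) - alpha * exp(-r**2 / 2.0) * Phi(alpha * r / b)

def clump(a):                               # eqn (40): mean qualified clump size
    r = a / sqrt(l0)
    return 1.0 / (1.0 - exp(-sqrt(2.0 * pi) * delta * r))

def F_extreme(a, T, double_barrier=True):   # eqn (43), Vanmarcke's approximation
    r = a / sqrt(l0)
    de = delta**1.2 if double_barrier else (2.0 * delta)**1.2
    ne = 2.0 * nu0 if double_barrier else nu0
    q = (1.0 - exp(-sqrt(pi / 2.0) * de * r)) / (exp(r**2 / 2.0) - 1.0)
    return (1.0 - exp(-r**2 / 2.0)) * exp(-ne * T * q)

# Fig. 12 lists nu(0+) = 1.6653, nu(3+) = .83575, clump(3) = 2.16595,
# F_peak(6) = .94366 and F_extreme(6, 10) = .41147:
print(nu0, nu(3.0), clump(3.0), F_peak(6.0), F_extreme(6.0, 10.0))
```

The default `double_barrier=True` corresponds to the extreme of |X(t)| in eqn (42), which is the case tabulated in Fig. 12.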

6.2. Nonstationary Gaussian process

For a nonstationary process, the statistics of interest are given in terms of the variance functions σX²(t), σẊ²(t), and σẌ²(t) and the cross-correlation coefficient functions ρXẊ(t), ρXẌ(t), and ρẊẌ(t). For any selected response process, these can be computed by using the commands ERMS or TRMS as described in Sec. 5. Additional commands are available for computing these functions for a nonstationary process directly defined in terms of its evolutionary power spectral density function.

________ STATISTICS OF ZERO-MEAN STATIONARY GAUSSIAN PROCESS ________

Process X(t):
    Standard deviation of X(t)                sqrt(l0) = 2.5549
    E[ X(t) d(Hilbert tran X)/dt ]**1/2       sqrt(l1) = 8.1711
    Standard deviation of dX/dt               sqrt(l2) = 26.732
    Standard deviation of d2X/dt2             sqrt(l4) = 314.96

Cramer-Leadbetter envelope E(t):
    Mean of E(t)                    = 3.2020
    Standard deviation of E(t)      = 1.6738
    Standard deviation of dE(t)/dt  = .53758

Regularity factors (measures of bandwidth):
    delta = (1 - l1*l1/l0/l2)**1/2  = .21041
    alpha = (l2*l2/l0/l4)**1/2      = .88805

Mean zero upcrossing rate (apparent frequency):
    nu(0+) = (1/2pi)*(l2/l0)**1/2   = 1.6653

Threshold crossings:
    Threshold     Mean Upcrossing Rates      Mean Clump Size
    Level         X(t)        E(t)           (qualified crossings)
    3.0000        .83575      .51760         2.16595
    6.0000        .10565      .13086         1.40801
    9.0000        .00336      .00625         1.18482

Probability distributions:
                         PDF                                CDF
    Level     X          E          Xpeak       X          E          Xpeak
    3.0000    .7837E-01  .2307      .2052       .87985     .49813     .55419
    6.0000    .9906E-02  .5832E-01  .5179E-01   .99057     .93656     .94366
    9.0000    .3154E-03  .2795E-02  .2473E-02   .99979     .99798     .99821

Statistics of max[|X(t)|] for duration T = 10.00
    Peak factors:        p = 2.5559    q = .49624
    Mean                   = 6.5300
    Standard deviation     = 1.2678
    Probability distribution:
        Level     PDF            CDF
        3.0000    .26014E-02     .00055
        6.0000    .31320         .41147
        9.0000    .44569E-01     .96533

Fig. 12. Statistics of stationary Gaussian response process X1(t) generated by command SSGP.
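Several envelope-related entries of the printout above can be checked by hand from the formulas of Sec. 6.1: eqn (37) is a Rayleigh density, whose mean is (π/2)^{1/2} σX and whose standard deviation is (2 − π/2)^{1/2} σX, and eqn (39) scales the upcrossing rates of X(t) into those of E(t). A minimal check (illustrative only), using the printed values as inputs:

```python
from math import sqrt, pi

# Cross-checking the envelope lines of Fig. 12 from eqns (37) and (39),
# using the printed sqrt(l0), delta and nu(a+) values as inputs.
s_x, delta = 2.5549, 0.21041

mean_E = sqrt(pi / 2.0) * s_x        # mean of the Rayleigh density of eqn (37)
std_E = sqrt(2.0 - pi / 2.0) * s_x   # its standard deviation

def nu_E(a, nu_X):                   # eqn (39): envelope upcrossing rate
    r = a / s_x
    return sqrt(2.0 * pi) * delta * r * nu_X

print(mean_E, std_E)                 # 3.2020 and 1.6738, as printed
print(nu_E(3.0, 0.83575))            # .51760, as printed
```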

Three commands are available for analyzing the statistics of a nonstationary process. They are

NCR VR NU+ [X = x1, x2 N = n]
NDLP VR LP+ [X = x1, x2 N = n]
NDEP VR EP+ [X = x1, x2 N = n]

in which VR is the matrix that contains the variance and correlation coefficient values for a selected sequence of time points, X defines the range x1 ≤ a ≤ x2 of the levels of interest, and N defines the number n of equally spaced levels within the range.

The command NCR computes the mean upcrossing rate and stores it in NU for all specified levels and time points according to the formula

\nu_X(a+, t) = \frac{\sigma_{\dot{X}}(t) [1 - \rho_{X\dot{X}}^2(t)]^{1/2}}{(2\pi)^{1/2} \sigma_X(t)} \exp\left[ -\frac{a^2}{2\sigma_X^2(t)} \right] [\phi(r) + r \, \Phi(r)]   (44)

where φ(·) and Φ(·) respectively denote the standard normal probability density and cumulative distribution functions, and

r = \frac{\rho_{X\dot{X}}(t) \, a}{[1 - \rho_{X\dot{X}}^2(t)]^{1/2} \sigma_X(t)}   (45)
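Equations (44) and (45) translate directly into code. The sketch below is illustrative (it is not the NCR implementation) and includes a built-in consistency check: for ρXẊ(t) = 0 the expression collapses to the stationary rate of eqn (34).

```python
from math import sqrt, pi, exp, erf

# Sketch of eqns (44)-(45): mean upcrossing rate of level a for a nonstationary
# Gaussian process, given sigma_X(t), sigma_Xdot(t) and rho(t) = rho_{X Xdot}(t).
def phi(x):                                  # standard normal PDF
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def Phi(x):                                  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def nu_up(a, s_x, s_xd, rho):
    r = rho * a / (sqrt(1.0 - rho**2) * s_x)                      # eqn (45)
    return (s_xd * sqrt(1.0 - rho**2) / (sqrt(2.0 * pi) * s_x)
            * exp(-a**2 / (2.0 * s_x**2)) * (phi(r) + r * Phi(r)))  # eqn (44)

# Consistency check: rho = 0 recovers the stationary rate of eqn (34).
s_x, s_xd, a = 2.0, 8.0, 3.0                 # assumed illustrative values
stationary = s_xd / (2.0 * pi * s_x) * exp(-a**2 / (2.0 * s_x**2))
print(nu_up(a, s_x, s_xd, 0.0) - stationary)   # -> 0 (up to roundoff)
```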


Fig. 13. Probability density functions of local and extreme peaks of X1(t) and X2(t).

Equation (44) is obtained by solving eqn (31) for the jointly Gaussian distribution of X(t) and Ẋ(t). As an example, Fig. 14 shows the mean upcrossing rates of the response X3 for levels a = 30, 40, and 50 for the evolutionary input with the modulating function in Fig. 6, which are obtained by issuing the command NCR. For this application, the matrix VR3 obtained from the command ERMS in Sec. 5 is used in place of VR.

The command NDLP computes the probability density function of the local peak, conditioned on its occurrence, and stores it in LP for all specified levels and time points according to the formula

f_P(a, t) = \frac{\Delta(t)}{[1 - \rho_{X\dot{X}}^2(t)][1 - \rho_{\dot{X}\ddot{X}}^2(t)]^{1/2} \sigma_X(t)} [\phi(s) + s \, \Phi(s)] \exp\left\{ -\frac{a^2}{2\sigma_X^2(t)[1 - \rho_{X\dot{X}}^2(t)]} \right\}   (46)

in which

s = \frac{[\rho_{X\dot{X}}(t)\rho_{\dot{X}\ddot{X}}(t) - \rho_{X\ddot{X}}(t)] \, a}{\Delta(t) [1 - \rho_{X\dot{X}}^2(t)]^{1/2} \sigma_X(t)}   (47)

\Delta(t) = [1 + 2\rho_{X\dot{X}}(t)\rho_{X\ddot{X}}(t)\rho_{\dot{X}\ddot{X}}(t) - \rho_{X\dot{X}}^2(t) - \rho_{X\ddot{X}}^2(t) - \rho_{\dot{X}\ddot{X}}^2(t)]^{1/2}   (48)

This formula is derived by solving eqn (33) for the joint Gaussian distribution. As an example, Fig. 15 shows the probability density functions of the local peaks of X3 at times t = 5, 10, 15, and 20 sec for the above evolutionary input, which are generated by using the command NDLP.

Finally, the command NDEP computes the cumulative distribution function of the extreme peak, Xe = max over 0 ≤ t ≤ τ of X(t), according to the formula

F_{X_e}(a) = \exp\left[ -\int_0^{\tau} \nu_X(a+, t) \, dt \right]   (49)

where τ is the duration of the input, which is specified through VR. The corresponding probability density function is derived by differentiation. The two functions are stored in EP. This formula is based on the simple assumption of Poisson crossings. Although more accurate formulas are available (e.g., [28]), at the present time only this approximation is implemented in STOCAL-II. As an example application, Fig. 15 shows the probability density function of the extreme peak of X3 over the duration of the evolutionary excitation, generated by the command NDEP. It is interesting to compare the distribution of the extreme peak with the distributions of the local peaks, which are also shown in the figure.

Fig. 14. Mean upcrossing rate of X3(t) at selected levels.

Fig. 15. Probability density functions of local and extreme peaks of X3(t) for evolutionary input.

7. SUMMARY AND CONCLUSIONS

The computer-assisted learning system described in this paper is based on the idea that solutions to problems in random vibrations consist of sequences of well-defined logical steps each requiring a certain amount of routine calculations. The student is engaged in the solution of the problem by selecting the proper sequence of steps, and the computer facilitates the solution by carrying out the routine calculations necessary for each step. The system enhances the learning process by (a) freeing the student from routine calculations that are not central to the fundamental understanding of the subject, (b) providing the means for application of the theory to realistic problems, thus helping the student gain experience and confidence, (c) providing the means for immediate visualization and interpretation of data at all levels, and (d) facilitating experimentation and exploration with the results of the theory. Within the limited pages of this paper, only some of these capabilities were demonstrated through the numerical examples. In this paper, STOCAL-II commands dealing with stationary and nonstationary random vibration

analysis, including the analysis of crossings and local and extreme peaks, were presented. A number of other commands are available that were not presented. These include commands for generation of sample functions for stationary or nonstationary processes and for temporal or ensemble estimation of the autocorrelation function of stationary processes. While these capabilities are sufficient for the analysis of many problems in structural engineering, the scope of problems in random vibrations is far broader. In particular, such topics as response of nonlinear systems, response to parametric excitation, uncertain systems, and non-Gaussian response are not covered. Capabilities for such analysis, as well as simpler extensions such as treatment of nonclassically damped systems, analysis of distributed-mass systems, or alternative approaches such as the state-space approach, can be incorporated in the program or added to it as extensions. The architecture of STOCAL-II is designed in a modular form so as to facilitate such extensions in the future. The long-term contribution of an instructional system such as STOCAL-II must be seen in its helping to bring the methods of random vibrations to the realm of engineering practice. The authors hope that through this development they have taken a step in this direction. Acknowledgements-The

work presented in this paper has been supported by grants from the College of Engineering Dean’s Office and from the Council on Educational Development at the University of California, Berkeley.

REFERENCES

1. S. H. Crandall and W. D. Mark, Random Vibration in Mechanical Systems. Academic Press, New York (1963).
2. Y. K. Lin, Probabilistic Theory of Structural Dynamics. McGraw-Hill, New York (1967).
3. D. E. Newland, An Introduction to Random Vibrations and Spectral Analysis, 2nd Edn. Longman, London (1984).


4. R. W. Clough and J. Penzien, Dynamics of Structures. McGraw-Hill, New York (1975).
5. N. C. Nigam, Introduction to Random Vibrations. MIT Press, Cambridge, MA (1983).
6. I. Elishakoff, Probabilistic Methods in the Theory of Structures. John Wiley, New York (1983).
7. V. V. Bolotin, Random Vibration of Elastic Systems. Martinus Nijhoff, The Hague (1984).
8. R. A. Ibrahim, Parametric Random Vibration. Research Studies Press, Taunton (1985).
9. C. Y. Yang, Random Vibration of Structures. John Wiley, New York (1986).
10. J. B. Roberts and P. D. Spanos, Random Vibration and Statistical Linearization. John Wiley, New York (1991).
11. M. R. Button, A. Der Kiureghian and E. L. Wilson, STOCAL user information manual. Report UCB/SESM-81/02, Department of Civil Engineering, University of California, Berkeley, CA (1981).
12. E. L. Wilson, CAL86: computer assisted learning of structural analysis and the CAL/SAP development system. Report UCB/SEMM-86/05, Department of Civil Engineering, University of California, Berkeley, CA (1986).
13. C.-D. Wung and A. Der Kiureghian, STOCAL-II: computer-assisted learning system for stochastic dynamic analysis of structures. Part I: theory and development. Report UCB/SEMM-89/10, Department of Civil Engineering, University of California, Berkeley, CA (1989).
14. C.-D. Wung and A. Der Kiureghian, STOCAL-II: computer-assisted learning system for stochastic dynamic analysis of structures. Part II: user's manual. Report UCB/SEMM-89/11, Department of Civil Engineering, University of California, Berkeley, CA (1989).
15. E. H. Vanmarcke, Properties of spectral moments with applications to random vibration. J. Engng Mech., ASCE 98, 425-445 (1972).

16. A. Der Kiureghian, Structural response to stationary excitation. J. Engng Mech., ASCE 106, 1195-1213 (1980).
17. A. Der Kiureghian, A response spectrum method for random vibration analysis of MDOF systems. Earthquake Engng Struct. Dyn. 9, 419-435 (1981).
18. M. B. Priestley, Evolutionary spectra and non-stationary processes. J. Royal Statistical Soc. B27, 204-237 (1965).
19. J. K. Hammond, On the response of single and multidegree of freedom systems to non-stationary random excitations. J. Sound Vibr. 7, 393-419 (1968).
20. L. J. Howell and Y. K. Lin, Response of flight vehicles to nonstationary atmospheric turbulence. AIAA Jnl 9, 2201-2207 (1971).
21. S. O. Rice, Mathematical analysis of random noise. Bell System Technical Journal 23, 283-332 (1944) and 24, 46-156 (1945).
22. W. B. Huston and T. H. Skopinski, Probability and frequency characteristics of some flight buffet loads. NACA TN 3733 (1956).
23. M. Grigoriu, Discussion of 'Structural response to stationary excitation,' by A. Der Kiureghian. J. Engng Mech., ASCE 107, 1255-1257 (1982).
24. D. E. Cartwright and M. S. Longuet-Higgins, The statistical distribution of maxima of a random function. Proc. Royal Society of London A237, 212-232 (1956).
25. H. Cramer and M. R. Leadbetter, Stationary and Related Stochastic Processes. John Wiley, New York (1967).
26. E. H. Vanmarcke, On the distribution of first-passage time for normal stationary processes. J. Appl. Mech., ASME 42, 215-220 (1975).
27. R. H. Lyon, On the vibration statistics of a randomly excited hard-spring oscillator, Part I. J. Acoust. Soc. Am. 32, 716-719 (1960); Part II, 33, 1395-1403 (1961).
28. J.-N. Yang, First excursion probability in nonstationary random vibration. J. Sound Vibr. 27, 165-182 (1973).