Computers and Structures 69 (1998) 339–347
Vibrations of a beam-mass system using artificial neural networks

Bekir Karlik a, Erdoğan Özkaya b, Serkan Aydin a, Mehmet Pakdemirli b,*

a Department of Electrical Engineering, Celal Bayar University, TR-45140, Muradiye, Manisa, Turkey
b Department of Mechanical Engineering, Celal Bayar University, TR-45140, Muradiye, Manisa, Turkey
Received 20 January 1997; accepted 28 April 1998
Abstract

The nonlinear vibrations of an Euler–Bernoulli beam with a concentrated mass attached to it are investigated. Five different sets of boundary conditions are considered. The transcendental equations yielding the exact values of the natural frequencies are presented. Using the Newton–Raphson method, natural frequencies are calculated for different boundary conditions, mass ratios and mass locations. The corresponding nonlinear correction coefficients are also calculated for the fundamental mode. The calculated natural frequencies and nonlinear corrections are used in training a multi-layer, feed-forward, backpropagation artificial neural network (ANN) algorithm. The algorithm produces results within 0.5 and 1.5% error limits for the linear and nonlinear cases, respectively. By employing the ANN algorithm, computational time is drastically reduced compared with conventional numerical techniques. © 1998 Published by Elsevier Science Ltd. All rights reserved.
1. Introduction

Beam-mass systems are frequently used as design models in engineering. Natural frequencies of beam-mass systems were calculated for various end conditions [1–6]. Under the small-amplitude assumption, a linear analysis is valid and the natural frequencies are independent of the amplitudes of vibration. When the amplitudes are not small enough, linear analysis ceases to be valid and the nonlinearities come into effect to limit the response to a finite value. In the case of immovable end conditions, the nonlinearities arise due to the axial stretching of the beam during the vibrations. Woinowsky-Krieger [7] and Burgreen [8] were the first to study the effects of axial stretching on the vibrations of beams. Srinivasan [9] applied the Ritz–Galerkin technique to analyze the large-amplitude free vibrations of beams and plates with stretching. Hou and Yuan [10] investigated the design sensitivity
* Corresponding author. Tel.: 00-90-236-241-2144; Fax: 00-90-236-241-2143.
of a stretched beam with immovable ends. McDonald [11] studied the dynamic mode couplings of a hinged beam with uniformly distributed loading. This work is closely related to the recent studies [12, 13]. Pakdemirli and Nayfeh [12] investigated the effects of stretching and a nonlinear spring on the vibrations of a simply supported beam-mass-spring system. Özkaya et al. [13] treated the same problem under five different sets of boundary conditions, but did not include the effect of the nonlinear spring so that the stretching phenomenon could be analyzed more precisely. In both studies [12, 13], the method of multiple scales (a perturbation technique) is employed to find approximate analytical solutions to the nonlinear problem. In this work, we treat the linear and nonlinear free vibrations of a beam-mass system under immovable end conditions. The first five natural frequencies are calculated using the Newton–Raphson method for different mass ratios, mass locations and boundary conditions. This technique requires numerical iterations for each frequency, mass ratio, mass location and boundary condition, a relatively long computational
time. The calculated key values are then used in training a multi-layer, feed-forward, backpropagation artificial neural network (ANN) algorithm. After the training, any natural frequency can be calculated immediately, within 0.5% error on average. The nonlinear correction coefficients obtained from the multiple scales analysis are also used in training the ANN algorithm. Results of the ANN and of multiple scales are compared, and it is found that the ANN calculates the coefficients within an average of 1.5% error. Results show that ANN modeling can be effectively used as a supplementary technique to conventional numerical procedures in vibration problems. For some other examples of ANN applications to structural mechanics, the reader is referred to Refs [14–16].
2. Formulation of the problem

The beam-mass system and the different boundary conditions considered are shown in Fig. 1. The dimensionless equations of motion are derived using Hamilton's principle [12, 13]:

$$\ddot{w}_1 + w_1^{iv} = \frac{1}{2}\left(\int_0^{\eta} w_1'^2\,dx + \int_{\eta}^{1} w_2'^2\,dx\right) w_1'', \qquad (1)$$

$$\ddot{w}_2 + w_2^{iv} = \frac{1}{2}\left(\int_0^{\eta} w_1'^2\,dx + \int_{\eta}^{1} w_2'^2\,dx\right) w_2'', \qquad (2)$$

Fig. 1. Different end supporting conditions for a beam-mass system.
where w_1 and w_2 are the dimensionless left and right deflections with respect to the concentrated mass M. The dot denotes differentiation with respect to the non-dimensional time t and the prime denotes differentiation with respect to the spatial variable x. The dimensionless quantities are related to the dimensional ones (denoted by an asterisk) through the following relations

$$w_{1,2} = \frac{w_{1,2}^*}{r}, \quad x = \frac{x^*}{L}, \quad \eta = \frac{x_s}{L}, \quad t = \frac{1}{L^2}\sqrt{\frac{EI}{\rho A}}\, t^*, \qquad (3)$$

where L is the length of the beam, r is the radius of gyration of the beam cross section, x_s is the position of the concentrated mass, η is the non-dimensional position parameter (0 ≤ η ≤ 1), E is Young's modulus, A is the cross-sectional area and ρ is the density of the beam. I is the moment of inertia of the beam cross section with respect to the neutral axis of the beam. The non-dimensional quantity

$$\alpha = \frac{M}{\rho A L}$$

represents the ratio of the concentrated mass to the beam mass. The end boundary conditions are given in Fig. 1. The intermediate boundary conditions at the location of the concentrated mass are:

$$w_1(\eta, t) = w_2(\eta, t), \quad w_1'(\eta, t) = w_2'(\eta, t), \qquad (4)$$

$$w_1''(\eta, t) = w_2''(\eta, t), \qquad (5)$$

$$w_1'''(\eta, t) - w_2'''(\eta, t) - \alpha\, \ddot{w}_1(\eta, t) = 0. \qquad (6)$$

3. Conventional analysis

We present briefly the results obtained using the conventional analysis of Ref. [13]. Neglecting the nonlinear terms in Eqs. (1) and (2), and assuming solutions of the type

$$w_1(x, t) = (A\cos\omega t + B\sin\omega t)\, Y_1(x), \qquad (7)$$

$$w_2(x, t) = (A\cos\omega t + B\sin\omega t)\, Y_2(x), \qquad (8)$$

we obtain the following eigenvalue–eigenfunction problem,

$$Y_1^{iv} - \omega^2\, Y_1 = 0, \qquad (9)$$

$$Y_2^{iv} - \omega^2\, Y_2 = 0, \qquad (10)$$

$$Y_1(\eta) = Y_2(\eta), \quad Y_1'(\eta) = Y_2'(\eta), \quad Y_1''(\eta) = Y_2''(\eta), \qquad (11)$$

$$Y_1'''(\eta) - Y_2'''(\eta) + \alpha\, \omega^2\, Y_1(\eta) = 0. \qquad (12)$$

The end boundary conditions are given in Table 1. Exact solutions of Eqs. (9)–(12) together with the end boundary conditions yield the mode shapes and transcendental frequency equations listed in Table 1, where

$$\beta = \omega^{1/2}. \qquad (13)$$

The transcendental equations are solved using the Newton–Raphson procedure for different α and η values, and for different boundary conditions. Only the first five natural frequencies are calculated. Using the method of multiple scales, the nonlinear amplitude-dependent frequencies are calculated approximately:

$$\omega_{nl} = \omega + \lambda\, a_0^2, \qquad (14)$$

where a_0 is the amplitude of vibration and λ is the nonlinear correction coefficient defined as

$$\lambda = \frac{3\, b^2}{16\, \omega\, k}, \qquad (15)$$

where

$$b = \int_0^{\eta} Y_1'^2\, dx + \int_{\eta}^{1} Y_2'^2\, dx, \quad k = 1 + \alpha\, Y_1^2(\eta). \qquad (16)$$

Note that the arbitrary coefficients C of the mode shapes are to be calculated from the following normalization condition

$$\int_0^{\eta} Y_1^2\, dx + \int_{\eta}^{1} Y_2^2\, dx = 1. \qquad (17)$$

4. Artificial neural network approach
An alternative to the conventional numerical techniques is presented in this section by employing an artificial neural network algorithm. Artificial neural systems are physical cellular systems which can acquire, store and utilize experimental knowledge. The distinguishing characteristics of neural networks have played an important role in a wide variety of applications. Powerful learning algorithms and self-organizing rules allow an ANN to adapt itself to the requirements of a continually changing environment (adaptability property). The ability to perform tasks involving nonlinear relationships, together with noise immunity, makes an ANN a good candidate for classification and prediction (nonlinear processing property). Finally, architectures with a large number of processing units enhanced by extensive interconnectivity provide for concurrent processing as well as parallel distributed information storage (parallel processing property).
Table 1
Mode shapes and transcendental frequency equations for different end conditions

Case I
Mode shapes:
Y₁(x) = C {Tanhβ (Cosβη − Cotβ Sinβη) Sinβx + (Sinhβη − Tanhβ Coshβη) Sinhβx}
Y₂(x) = C {Tanhβ Sinβη (Cosβx − Cotβ Sinβx) + Sinhβη (Sinhβx − Tanhβ Coshβx)}
Transcendental frequency equation:
2 Tanhβ Tanβ − αβ {Tanhβ Tanβ (Cosβη Sinβη − Coshβη Sinhβη) − Tanhβ Sin²βη + Tanβ Sinh²βη} = 0

Case II
Mode shapes:
Y₁(x) = C {(Cotβ Cosβη + Sinβη) Sinβx − Cotβ (Coshβη − Tanhβ Sinhβη) Sinhβx}
Y₂(x) = C {Sinβη (Cotβ Cosβx + Sinβx) − Cotβ Sinhβη (Coshβx − Tanhβ Sinhβx)}
Transcendental frequency equation:
2 Cotβ − αβ {(Cotβ Cosβη + Sinβη) Sinβη − Cotβ (Coshβη − Tanhβ Sinhβη) Sinhβη} = 0

Case III
Mode shapes:
Y₁(x) = C {(Tanhβ Cosβη + Tanhβ Tanβ Sinβη) Cosβx + (Tanβ Coshβη − Tan²β Tanβη Coshβη) Coshβx}
Y₂(x) = C {(Cosβx + Tanβ Sinβx) Tanhβ Cosβη + Tanβ Coshβη (Coshβx − Tanhβ Sinhβx)}
Transcendental frequency equation:
2 Tanβ Tanhβ + αβ {(1 + Tanβ Tanβη) Tanβ Cos²βη + Tanβ Cosh²βη (1 − Tanhβ Tanhβη)} = 0

Case IV
Mode shapes:
Y₁(x) = C {(Cosβη (Sinhβ Cosβ − Coshβ Sinβ) + Sinβη (Sinhβ Sinβ + Coshβ Cosβ) − Sinhβη) Sinβx + (Sinhβη (Coshβ Cosβ − Sinhβ Sinβ) − Sinβη − Coshβη (Sinhβ Cosβ − Coshβ Sinβ)) Sinhβx}
Y₂(x) = C {Sinβη (Sinhβ Cosβ − Coshβ Sinβ) Cosβx + (Sinβη (Coshβ Cosβ + Sinhβ Sinβ) − Sinhβη) Sinβx − Sinhβη (Sinhβ Cosβ − Coshβ Sinβ) Coshβx + (Sinhβη Coshβ Cosβ − Sinhβη Sinhβ Sinβ − Sinβη) Sinhβx}
Transcendental frequency equation:
2 (Sinhβ Cosβ − Coshβ Sinβ) + αβ {(Sinhβ Cosβ − Coshβ Sinβ)(Coshβη Sinhβη − Cosβη Sinβη) − Sin²βη (Sinhβ Sinβ + Coshβ Cosβ) − Sinh²βη (Coshβ Cosβ − Sinhβ Sinβ) + 2 Sinhβη Sinβη} = 0

Case V
Mode shapes:
Y₁(x) = C {(Coshβη + (Sinβ Cosβη − Cosβ Sinβη) Sinhβ − (Cosβ Cosβη + Sinβ Sinβη) Coshβ) Cosβx + (Cosβη − Cosβ (Coshβ Coshβη − Sinhβ Sinhβη) − Sinβ (Sinhβ Coshβη − Coshβ Sinhβη)) Coshβx}
Y₂(x) = C {(Cosβη (Coshβ Cosβ − Sinhβ Sinβ) − Coshβη) Cosβx + Cosβη (Coshβ Sinβ + Sinhβ Cosβ) Sinβx + (Coshβη (Sinhβ Sinβ + Coshβ Cosβ) − Cosβη) Coshβx − Coshβη (Coshβ Sinβ + Sinhβ Cosβ) Sinhβx}
Transcendental frequency equation:
2 (Coshβ Sinβ + Sinhβ Cosβ) + αβ {(Coshβ Sinβ + Sinhβ Cosβ)(Sinβη Cosβη − Coshβη Sinhβη) + Cos²βη (Coshβ Cosβ − Sinhβ Sinβ) + Cosh²βη (Sinhβ Sinβ + Coshβ Cosβ) − 2 Coshβη Cosβη} = 0
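The frequency equations of Table 1 have no closed-form roots, so each β must be located iteratively. The sketch below shows the Newton–Raphson step with a numerical derivative; since the Table 1 equations depend on α and η, the classical clamped-free characteristic equation 1 + cos β cosh β = 0 (first root β₁ ≈ 1.8751) stands in here purely as an illustration.

```python
import math

def newton_raphson(f, beta0, tol=1e-10, max_iter=100, h=1e-7):
    """Find a root of f near beta0 with Newton-Raphson, using a
    central-difference derivative in place of an analytic one."""
    beta = beta0
    for _ in range(max_iter):
        df = (f(beta + h) - f(beta - h)) / (2 * h)
        step = f(beta) / df
        beta -= step
        if abs(step) < tol:
            return beta
    raise RuntimeError("Newton-Raphson did not converge")

# Illustrative characteristic equation (clamped-free beam, not one of
# the Table 1 cases): 1 + cos(beta) cosh(beta) = 0.
f = lambda b: 1.0 + math.cos(b) * math.cosh(b)
beta1 = newton_raphson(f, beta0=1.8)
omega1 = beta1 ** 2   # beta = omega**(1/2), Eq. (13)
```

As the concluding remarks note, convergence hinges on the initial guess; in practice one would scan β in small increments to bracket each of the first five roots before polishing them with the iteration above.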
The ANN architecture used in this study is a multi-layer, feed-forward, backpropagation architecture. A multi-layer perceptron (MLP) has an input layer, hidden layers and an output layer. The input vector representing the pattern to be recognized is incident on the input layer and is distributed to subsequent hidden layers, and finally to the output layer, via weighted connections. Each neuron in the network operates by taking the sum of its weighted inputs and passing the result through a nonlinear activation function (transfer function). Generally, the sigmoid function is chosen as the nonlinear activation function. If o_j^m represents the output of the jth neuron in the mth layer, and W_ij^m the weight on the connection joining the ith neuron in the (m−1)th layer to the jth neuron in the mth layer, then:

$$o_j^m = f\left(\sum_i W_{ij}^m\, o_i^{m-1}\right), \quad m \ge 2, \qquad (18)$$

where the function f(.) can be any differentiable function. Usually the sigmoid function, as defined below, is used:

$$f(x) = \frac{1}{1 + e^{-x}}. \qquad (19)$$
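Equations (18) and (19) amount to a chain of weighted sums passed through the sigmoid, layer by layer. A minimal forward-pass sketch follows; the 2:3:1 layer sizes and random weights are illustrative only (the paper's trained networks are 2:9:9:9:5 and 2:3:3:3:1, and a threshold term may also be added as noted below).

```python
import math
import random

def sigmoid(x):
    # Eq. (19): confines every neuron output to the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(weight_matrices, x):
    """Eq. (18): o_j^m = f(sum_i W_ij^m * o_i^(m-1)) for each layer m >= 2.
    `weight_matrices` holds one weight matrix per connection stage."""
    out = x
    for W in weight_matrices:
        out = [sigmoid(sum(w_ij * o_i for w_ij, o_i in zip(row, out)))
               for row in W]
    return out

random.seed(0)
# Illustrative 2:3:1 network: two inputs (alpha and eta), one output.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
W2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)]
y = forward([W1, W2], [0.8, 0.2])   # e.g. alpha = 0.8, eta = 0.2
```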
This function limits the outputs o_j^m to values between 0 and 1. It is possible to shift the function f(.) along the x-axis by adding a threshold value to the summation term of Eq. (18) before the function f(.) is applied [17]. To achieve the required mapping capability, the neural network is trained by repeatedly presenting a representative set of input/output patterns, with a backpropagation error and weight adjustment calculation to minimize the global error E_p of the network, i.e.

$$E_p = \frac{1}{2}\sum_{j=1}^{n_0}\left(t_{pj} - o_{pj}^m\right)^2, \qquad (20)$$

where t_pj is the target output of neuron j and o_pj^m is the computed output from the neural network corresponding to that neuron. Subscript p indicates that the error is considered for all the input patterns. Minimization of this average sum-squared error is carried out over the entire set of training patterns. As the outputs o_pj^m are functions of the connection weights w^m and of the outputs o_pj^{m−1} of the neurons in layer m−1, which in turn are functions of the connection weights w^{m−1}, the global error E_p is a function of w^m and w^{m−1}. Here, w with a superscript refers to the connection matrix. To accomplish the minimization, the algorithm evaluates the partial derivative ∂E/∂W_ij and supplies a constant of proportionality as follows:

$$\Delta W_{ij} = \varepsilon\, \delta_{pj}\, o_{pi}, \qquad (21)$$

where ε refers to the learning rate; δ_pj refers to the error signal at neuron j in layer m; and o_pi refers to the output of neuron i in layer m−1. δ_pj is given by:

$$\delta_{pj} = \left(t_{pj} - o_{pj}\right) o_{pj}\left(1 - o_{pj}\right) \quad \text{for output neurons}, \qquad (22)$$

$$\delta_{pj} = o_{pj}\left(1 - o_{pj}\right) \sum_k \delta_{pk}\, w_{kj} \quad \text{for hidden neurons}, \qquad (23)$$

where o_pj refers to layer m; o_pi refers to layer m−1; and δ_pk refers to layer m+1. In practice, a momentum term (μ) is frequently added to Eq. (21) as an aid to more rapid convergence in certain problem domains. The weights are adjusted in the presence of momentum by:

$$\Delta W_{ij}(n+1) = \varepsilon\, \delta_{pj}\, o_{pi} + \mu\, \Delta W_{ij}(n). \qquad (24)$$

Fig. 2. The ANN architecture used in the linear analysis.

Fig. 3. The ANN architecture used in the nonlinear analysis.
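The update rules of Eqs. (21)–(24) can be sketched for a single sigmoid output neuron. The learning rate 0.7 and momentum 0.9 are the values the paper reports using; the one-weight, constant-input task is only a stand-in to make the loop self-contained.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

eps, mu = 0.7, 0.9        # learning rate and momentum used in the paper
w, dw_prev = 0.0, 0.0     # single weight and previous update, Eq. (24)
x, target = 1.0, 0.8      # illustrative constant input and target output

first_err = None
for _ in range(500):
    o = sigmoid(w * x)
    # Eq. (22): error signal for an output neuron
    delta = (target - o) * o * (1.0 - o)
    # Eqs. (21) and (24): weight change with momentum
    dw = eps * delta * x + mu * dw_prev
    w += dw
    dw_prev = dw
    if first_err is None:
        first_err = (target - o) ** 2   # Eq. (20) for one neuron

final_err = (target - sigmoid(w * x)) ** 2
```

With momentum this high, the weight overshoots and oscillates before settling, which is why the paper reports finding ε and μ by trial and error.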
In this study, the momentum and learning rate values are taken as 0.9 and 0.7, respectively. These values were found by trial and error. A backpropagation algorithm is used in the optimization in which the weights are modified. In all computations, a Pentium 90 PC is used. For the linear part, 15 000 iterations, and for the nonlinear part, 40 000 iterations are performed in training the algorithm. The ANN architecture used in the linear part is a 2:9:9:9:5 multi-layer architecture, as shown in Fig. 2. For the nonlinear part, a 2:3:3:3:1 ANN architecture is used (Fig. 3). The problem of finding the frequencies of the system can be treated as an input/output process with an unknown transfer function. For the linear part, the input parameters are the α and η values, whereas the output values are the first five natural frequencies. For the nonlinear part, the input parameters are again the α and η values and the outputs are the λ values. In all computations, the ranges of α and η are taken as 0 ≤ α ≤ 10 and 0 ≤ η ≤ 1. In Table 2, for case V, we compare the results of the ANN and Newton–Raphson methods. In general, good agreement is observed. For the sake of brevity, we do not present the tables for the other cases. Instead, as an example, for case II, we present one comparison in the form of a bar chart (Fig. 4). The α and η values are indicated on the figure. It can be seen that the results are very close to each other. The conventional method for calculating the nonlinear correction coefficients is as follows: calculate the relevant natural frequency, calculate the coefficient C of the mode shapes such that the normalization condition of Eq. (17) holds, numerically integrate Eq. (16) to find b, calculate k, and finally calculate λ from Eq. (15). After using the key values in training the ANN algorithm, we find the other λ values immediately. Exact and ANN values of λ are compared in Table 3 for the fundamental modes for each different end condition and various α and η values. A sample comparison of λ for case III is also given in the form of a bar chart in Fig. 5. Results are again close to each other.
Table 2
Comparison of Newton–Raphson and ANN frequencies for case V

α    η         ω1         ω2          ω3          ω4           ω5
0.8  0.2  NR   3.401770   28.586901   74.057404   118.705399   188.219193
          ANN  3.421992   28.555229   74.116722   118.991096   188.358795
0.8  0.4  NR   3.979000   26.112200   59.942600   138.792007   187.972305
          ANN  3.993120   26.074520   60.000000   138.287994   190.826706
0.8  0.6  NR   4.861600   19.415600   70.558296   123.643303   196.832504
          ANN  4.883490   19.503960   70.135681   123.919998   196.253296
0.8  0.8  NR   5.512800   23.919201   52.254902   112.536003   204.032593
          ANN  5.521656   24.098049   51.612480   112.410599   203.277206
4    0.2  NR   1.811300   27.972700   73.785301   112.230698   183.369507
          ANN  1.825716   28.093260   73.747360   112.456200   182.814697
4    0.4  NR   2.296800   23.978500   56.148602   138.792007   181.562103
          ANN  2.315514   24.045191   56.671200   138.994995   182.483002
4    0.6  NR   3.373100   14.368800   69.341599   119.906898   192.993195
          ANN  3.388524   14.614380   69.434883   120.293999   192.707703
4    0.8  NR   5.186300   14.751400   45.097301   108.716103   201.841705
          ANN  5.204922   14.964680   44.893200   109.815201   202.528305
8    0.2  NR   1.315500   27.862200   73.732597   111.167503   182.516800
          ANN  1.313370   27.919390   73.793518   111.653397   183.042206
8    0.4  NR   1.695400   23.558800   55.550499   138.791901   180.609406
          ANN  1.686894   23.804649   55.621521   138.667404   181.384293
8    0.6  NR   2.609100   13.401400   69.139900   119.320702   192.446198
          ANN  2.605632   13.484200   69.374397   118.885803   192.725998
8    0.8  NR   4.792600   11.719200   44.110600   108.193298   201.534897
          ANN  4.781712   11.379690   44.894878   107.890800   201.283203
Fig. 4. Newton–Raphson vs ANN values for the first five natural frequencies (case II, α = 0.8, η = 0.1).
Table 3
Comparison of exact and ANN nonlinear correction coefficients for the fundamental mode

Case      α    η    λ (exact)  λ (ANN)
Case I    0.8  0.1  1.7086     1.7085
          0.8  0.3  1.2571     1.2560
          0.8  0.5  1.1392     1.1395
          8    0.1  0.9808     0.9843
          8    0.3  0.5031     0.5001
          8    0.5  0.4329     0.4392
Case II   0.8  0.2  0.4298     0.4297
          0.8  0.4  0.3692     0.3700
          0.8  0.6  0.3228     0.3225
          0.8  0.8  0.2950     0.2951
          8    0.2  0.2763     0.2752
          8    0.4  0.1755     0.1755
          8    0.6  0.1359     0.1345
          8    0.8  0.1169     0.1173
Case III  0.8  0.1  0.9907     0.9826
          0.8  0.3  1.4131     1.4108
          0.8  0.5  0.0015     0.0015
          8    0.1  0.4444     0.4426
          8    0.3  0.9161     0.9285
          8    0.5  0.0003     0.0003
Case IV   0.8  0.2  1.2243     1.2225
          0.8  0.4  0.9839     0.9858
          0.8  0.6  0.9731     0.9884
          0.8  0.8  1.3463     1.3455
          8    0.2  0.7041     0.6806
          8    0.4  0.5206     0.5281
          8    0.6  0.5025     0.5086
          8    0.8  0.7269     0.7374
Case V    0.8  0.2  0.1983     0.1978
          0.8  0.4  0.2168     0.2177
          0.8  0.6  0.2554     0.2547
          0.8  0.8  0.3072     0.3096
          8    0.2  0.0776     0.0775
          8    0.4  0.0889     0.0875
          8    0.6  0.1116     0.1108
          8    0.8  0.2211     0.2208

Fig. 5. Exact vs ANN values for the nonlinear correction coefficients (case III, α = 4, η = 0.1, 0.3).
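The agreement in Table 3 can be summarized as a mean absolute percentage error. The pairs below are the Case I rows of Table 3; the helper itself is just a sketch of the error measure used throughout the paper.

```python
# Case I rows of Table 3: (lambda_exact, lambda_ANN)
case_i = [
    (1.7086, 1.7085), (1.2571, 1.2560), (1.1392, 1.1395),  # alpha = 0.8
    (0.9808, 0.9843), (0.5031, 0.5001), (0.4329, 0.4392),  # alpha = 8
]

def mean_abs_pct_error(pairs):
    """Average of |exact - ann| / |exact|, in percent."""
    return 100.0 * sum(abs(e - a) / abs(e) for e, a in pairs) / len(pairs)

err = mean_abs_pct_error(case_i)   # well under the reported 1.5% bound
```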
In Table 4, the number of data used in training the algorithm is given for each end condition for the linear frequencies. For these training values, the ANN algorithm produced results with an average error of less than 0.5%. Then, for some other test values, the results of both methods are compared and it is found that the average error is again less than 0.5%, as shown in Table 5. In Table 6, the number of input data used in training the algorithm and the average percentage error values of those data are given for the nonlinear problem. In Table 7, the number of test values and the corresponding average percentage errors are indicated for the nonlinear problem. In both tables, the error is less than 1.5%. From an engineering point of view, these errors are considerably low.

Table 4
Average percentage errors for training values of ANN for the linear problem

          Input data number  Average error (%)
Case I    48                 0.494172
Case II   80                 0.491376
Case III  46                 0.170264
Case IV   70                 0.402501
Case V    79                 0.427118

Table 5
Average percentage errors for test values of ANN for the linear problem

          Input data number  Average error (%)
Case I    18                 0.298561
Case II   20                 0.379373
Case III  18                 0.120634
Case IV   20                 0.385934
Case V    31                 0.4573

Table 6
Average percentage errors for training values of ANN for the nonlinear problem

          Input data number  Average error (%)
Case I    24                 0.288706
Case II   32                 0.335944
Case III  24                 0.774889
Case IV   32                 1.339636
Case V    31                 0.670423

Table 7
Average percentage errors for test values of ANN for the nonlinear problem

          Input data number  Average error (%)
Case I    9                  0.381519
Case II   8                  0.274351
Case III  9                  1.423081
Case IV   8                  1.175880
Case V    12                 0.525184

In Fig. 6(a), for case I, the mean square errors (MSE) in training vs iteration numbers are shown for the linear problem. In Fig. 6(b), again for case I, the MSE vs iteration numbers are given for the nonlinear problem. In both figures, after 4000 iterations the MSE drops drastically. Fig. 7 is a comparison of the total errors of different ANN architectures for the nonlinear problem for iteration numbers from 4000 up to 15 000. Note that the ANN architectures compared in the figure have different numbers of hidden layers. Among the tested architectures, 2:3:3:3:1 and 2:4:4:4:1 are the best ones. For more than 15 000 iterations, our architecture 2:3:3:3:1, used in the nonlinear analysis, possesses the lowest total error values.

Fig. 6. (a) Iteration number vs MSE for training natural frequencies (case I). (b) Iteration number vs MSE for training nonlinear correction coefficients (case I).
5. Concluding remarks

In this work, we treat the vibrations of a beam-mass system under five different supporting conditions. The exact values of the frequencies are calculated using the Newton–Raphson method. The correction coefficients to the linear frequencies in the case of nonlinearities are also calculated from the formulas given in a previous paper [13]. For each end condition, mass location, mass ratio and frequency, the numerical analysis must be repeated, a lengthy process which requires the convergence of iterations. When the initial guesses are not close enough, the algorithm may also diverge. Some key values obtained using the conventional analysis are then used in training an ANN algorithm. After half an hour of training for the linear and nonlinear cases, the frequencies become available almost instantly. For the linear case, the algorithm yielded results with 0.5% error on average, whereas for the nonlinear case the error is below 1.5%. These errors are considerably low. ANN algorithms cannot, of course, totally replace conventional numerical techniques, since they need some key values for training. However, for involved problems in structural vibrations where excessive iterations are needed for convergence, they can be implemented as an efficient supplementary tool, reducing the computational cost drastically.

Fig. 7. Comparison of total errors of various ANN architectures for the nonlinear problem.

References

[1] Srinath LS, Das YC. Vibrations of beams carrying mass. ASME J Appl Mech 1967;Series E:784–85.
[2] Goel RP. Free vibrations of a beam-mass system with elastically restrained ends. J Sound Vibration 1976;47:9–14.
[3] Saito H, Otomi K. Vibration and stability of elastically supported beams carrying an attached mass under axial and tangential loads. J Sound Vibration 1979;62:257–66.
[4] Lau JH. Fundamental frequency of a constrained beam. J Sound Vibration 1981;78:154–57.
[5] Laura PAA, Filipich C, Cortinez VH. Vibrations of beams and plates carrying concentrated masses. J Sound Vibration 1987;117:459–65.
[6] Maurizi MJ, Belles PM. Natural frequencies of the beam-mass system: comparison of the two fundamental theories of beam vibrations. J Sound Vibration 1991;150:330–34.
[7] Woinowsky-Krieger S. The effect of an axial force on the vibration of hinged bars. ASME J Appl Mech 1950;17:35–6.
[8] Burgreen D. Free vibrations of pin-ended column with constant distance between pin ends. ASME J Appl Mech 1951;18:135–39.
[9] Srinivasan AV. Large amplitude-free oscillations of beams and plates. AIAA J 1965;3:1951–53.
[10] Hou JW, Yuan JZ. Calculation of eigenvalue and eigenvector derivatives for nonlinear beam vibrations. AIAA J 1988;26:872–80.
[11] McDonald PH. Nonlinear dynamics of a beam. Comput Struct 1991;40:1315–20.
[12] Pakdemirli M, Nayfeh AH. Nonlinear vibrations of a beam-spring-mass system. ASME J Vibration Acoustics 1994;166:433–38.
[13] Özkaya E, Pakdemirli M, Öz HR. Nonlinear vibrations of a beam-mass system under different boundary conditions. J Sound Vibration 1997;199:679–96.
[14] Berke L, Hajela P. Applications of artificial neural nets in structural mechanics. Struct Optimization 1992;4:90–8.
[15] Avdelas AV, Panagiotopoulos PD, Kortesis S. Neural networks for computing in the elastoplastic analysis of structures. Meccanica 1995;30:1–15.
[16] Abdalla KM, Stavroulakis GE. A backpropagation neural network model for semi-rigid steel connections. Microcomput Civil Engng 1995;10:77–87.
[17] Lee Y, Oh SH, Hong HK, Kim MW. Design rules of multi-layer perceptron. Proceedings of SPIE, Science of Artificial Neural Networks 1992;1710:329–39.