14 DISTORTIONS AND ACCURACY IN MODELING

At several places in this book, models have been described which differed in some way from the prototypes that they were meant to simulate. These models are, in general, called distorted models. We can distinguish two types of distortions. In the first, a known physical effect in the prototype is not simulated correctly because some scaling requirement can not be met. In the second, the model looks different from the prototype, or some parameter is treated differently in the model than in the prototype, but the model response is nonetheless a faithful simulation of the prototype response. The first type of distortion is serious and can cause misleading conclusions when the modeler is not aware of the implications of the distortion; we will not, however, discuss such distortions further. The second type is not actually a distortion of the physics, since the desired response parameter is not distorted; the apparent distortion is usually a result of the modeler's experience or physical insight about the factors that influence the response under investigation. Judicious use of these "non-distortions" can sometimes eliminate the first, more serious type of distortion.

Model Analysis and Distortions in the Drop Test Experiment

The drop test experiment described in Chapter 1 is an excellent example of a "distortion" which does not distort the desired response parameter. The response to be investigated is the structural damage (permanent deflection) of a cantilever beam subjected to a rapid acceleration pulse; a set of aluminum beams served as the prototype, and a set of smaller steel beams served as the models. To conduct an experiment, a beam was clamped to a heavy weight and dropped down a guideway onto a stiff spring, which rapidly stopped the weight and propelled it back upwards at nearly the same velocity as the impact velocity.
A beginning modeler might start an analysis by requiring the model to be a geometric scale model of the prototype, and so he would list drop height and beam length and width as important dimensions. The yield stress and stress-strain function of the model material would also be listed to incorporate material properties. The mass of the clamping weight, the density of the beam material, and the spring constant would complete the list. (A really inexperienced modeler might also include the shape of the weight; in fact, this parameter would be relevant if the velocities were so high that air drag is important.) Table 14.1 lists the selected parameters. By any of the procedures described in Chapter 2, the pi terms and model law can be found to be:

X/l = f( ρgH/σ, K/σl, m/ρl³, H/l, l_i/l, σ_i/σ )    (1)
For the beams used in the tests, the scaling factors for material properties are λρ = 0.351 and λσ = 0.283, and the geometric scale factor is λ = 2.0. Then from the fourth term on the right of Eq. (1), we find that λH = 2.0, which states that the drop height must be scaled geometrically. From the second pi term on the right, we conclude that λK = λσλ = 0.566. From the third pi term, the clamping mass must be scaled as λm = λρλ³ = 2.81. Finally, from the first pi term, the scaling law for gravity is found to be λg = λσ/λρλ, or λg = 0.403. Since gravity can not be scaled easily, we must do some deeper thinking about the physics to determine a better way to conduct model tests. The deflection of the beam is ultimately due to the rapid change in velocity when the weight impacts the spring. The inertia force moment caused by the sudden change in velocity induces bending stresses that
TABLE 14.1 Parameters in Demonstration Drop Test

Parameter | Symbol | Type | Dimension
Beam length | l | Geometric | L
Other beam dimensions | l_i | Geometric | L
Beam drop height | H | Geometric | L
Beam density | ρ | Physical property | ML⁻³
Yield point of beam | σ | Physical property | ML⁻¹T⁻²
Other non-dimensional stresses | σ_i | Physical property | ML⁻¹T⁻²
Drop weight mass | m | Inertia | M
Gravity | g | Acceleration | LT⁻²
Spring constant of anvil | K | Foundation property | MT⁻²
Deformation of beam tip | X | Response | L
exceed the elastic limit of the beam material. Consequently, we can perhaps develop a different model law for the beam response if we understand the physics of the acceleration and the resulting bending moment and stresses. Upon striking the spring, the beam reverses its velocity in a time that is determined by the total impacting mass and the spring constant. We know from vibration theory that this characteristic period is proportional to √(m/K). (This assumes that the mass of the beam is negligible compared to the mass of the clamping weight, which was the case in the tests.) The total change in velocity of the system is twice the impacting velocity V_i; hence, the acceleration pulse of the beam is 2V_i divided by the characteristic period, or 2V_i/√(m/K). (The impacting velocity is √(2gH), neglecting friction, but for the moment we will not consider this relation.) Since the mass of the beam is ρlwt (where w is the width of the beam and t is the thickness), the bending moment T of the inertia force about the clamping point is (except for a numerical factor):

T = (ρl²wt/2)(2V_i/√(m/K))    (2)
This moment is resisted by the moment of the bending stresses at the root of the beam. Since the beam yields plastically, the stress above the beam centerline is roughly constant at +σ, and below the centerline it is roughly constant at −σ. The moment T of the bending stresses is thus:

T = σwt²/2    (3)
Since the two moments are equal:

(ρl²wt/2)(2V_i/√(m/K)) = σwt²/2    (4)
But t, w, and l are all scaled geometrically. Thus, after cancelling the geometric scale factors, Eq. (4) can be written in terms of scaling factors in the form:
λV = (λσ/λρλ)(λm/λK)^(1/2)    (5)
This equation shows us how to scale the model tests. First, we see that gravity does not have anything to do with the beam response; all gravity does for us is to accelerate the mass and beam to the desired impact velocity V_i. We need to scale the impact velocity, not gravity and drop height; and the velocity is a parameter that can be scaled. Second, we see that the mass m and the spring constant K do not each have to be scaled, so long as their ratio is scaled. For that reason, we can use the same spring for both the model and the prototype (i.e., λK = 1), and scale the mass as the ratio of the original mass and spring constant scale factors, or λm = 2.81/0.566 = 4.97. This value was, in fact, the ratio of the two clamping weights used in the tests. (Formally, we have combined the second and third pi terms of Eq. (1) to give a new pi term (m/K)(σ/ρl²); from this pi term, we conclude that the ratio m/K is required to scale as λρλ²/λσ = 4.97.) Now, from Eq. (5), the scaling factor for impact velocity is λV = 0.897. We must select drop heights to obtain scaled velocities that agree with this scale factor. With this new-found knowledge, we can see that our original list of parameters should have been l, l_i, ρ, σ, σ_i, V_i, and √(m/K), since these are the factors that actually influence the tip deflection X. With these parameters, the modeling law is:
X/l = f( ρV_i²/σ, (m/K)(σ/ρl²), l_i/l, σ_i/σ )    (6)
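The scale factors quoted in this derivation can be cross-checked numerically; the short sketch below simply recomputes them from the material and geometric factors given in the text.

```python
# Scale factors for the drop-test beams, as quoted in the text.
lam_rho = 0.351   # density ratio
lam_sig = 0.283   # yield-stress ratio
lam = 2.0         # geometric scale factor

# Original model law, Eq. (1):
lam_K = lam_sig * lam                 # from K/(sigma*l)
lam_m = lam_rho * lam**3              # from m/(rho*l^3)
lam_g = lam_sig / (lam_rho * lam)     # from rho*g*H/sigma

# Revised law: reuse the spring (lam_K = 1) and scale only the ratio m/K.
lam_m_over_K = lam_rho * lam**2 / lam_sig   # from (m/K)(sigma/(rho*l^2))

# Impact-velocity scale factor, Eq. (5).
lam_V = (lam_sig / (lam_rho * lam)) * lam_m_over_K ** 0.5

print(lam_K, lam_m, lam_g, lam_m_over_K, lam_V)
```

The printed values reproduce the 0.566, 2.81, 0.403, 4.97, and 0.897 quoted in the text to within round-off.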
The modeling relations deduced from Eq. (4) can be derived easily from this form of the law. It ought to be noted, in addition, that the impact velocities were not chosen in accordance with the scaling law. We never actually compared a given prototype test to a scaled model test. Instead, the aggregate of model test data was shown to correlate with (i.e., predict) the aggregate of prototype test data. Nonetheless, we could have conducted a completely scaled model test if that had been our aim, and the velocity would have been scaled for that particular model and prototype test. The reader might claim that the drop test experiment contained no distortions. Instead, it could be said that the original model analysis included parameters that were extraneous. Had we possessed enough physical insight to list the duration of the loading √(m/K) and the impact velocity V_i, instead of m, K, H, and g, we would never have thought there was a distortion in the first place. In that sense, what amounts to a distortion to one person may not be a distortion to another.

Other Examples of Distortion

There are many types of models that do not look like the prototype, or in which some parameter is treated differently in the model than in the prototype, but for which, nonetheless, the model does represent the physics of the prototype response. In Chapter 5 on rigid body response, the first similitude analysis scaled both pressure and impulse but distorted standoff distance R by scaling it as λ^(1/2). This distortion was allowed because the standoff distance was so great that the shock front was essentially plane by the time it arrived at the target. Hence, the standoff distance was insignificant so long as the shock impulse was scaled. Without this realization, our similitude analysis would have had to include atmospheric conditions, charge weight, and standoff distance, rather than the simplification of characterizing the blast wave as plane with a known P and I.
We would have concluded that simulation was impossible. Our physical knowledge gave us the insight to distort the standoff distance and thus make a model possible.
In structural problems, a common geometric distortion comes from the realization that to model a bending response, the second moment of area must be scaled but the thickness can be distorted. If one makes this distortion, however, the ability to simulate membrane action is lost. Nonetheless, the distortion can allow a simplified fabrication of the model when strict geometric similarity would lead to excessively thin members. In addition, it can sometimes permit the use of the same material in the model as in the prototype when strict geometric scaling would otherwise dictate a change of material. In honeycomb structures, the product Eh, where h is the skin thickness, is the parameter that must be modeled, not E and h separately. The product EA, which represents membrane stiffness, is Eh times a width, and the product EI, which represents bending stiffness, is EhH²/2, where H is the distance between the honeycomb skins. Thus, both the bending stiffness and membrane stiffness can be scaled by modeling only the product Eh. Naturally, if h is distorted, one must compensate by adjusting E. These distortions are made possible by the physical knowledge of the modeler about structural responses. Not all distortions have to be geometric. The structural response of underground structures to blast loads can be modeled even though it is not possible to simulate the duration of loading very well. The response time of an underground structure is long compared to the duration of even a poorly scaled loading. In other words, the structural response lies in the quasi-static loading realm. On the other hand, for the overturning of rigid bodies by blast, the response is usually in the impulsive loading realm. Under these conditions, the model can be created without scaling peak pressures or maximum thrust exactly. For models in which failure by large plastic deformation is of concern, the elastic properties of the material can be ignored.
For example, the ratio of the yield points of the materials in the drop-test experiments was 13/46 = 0.283, whereas the ratio of their elastic moduli was 10/30 = 0.333. The lack of elastic property scaling did not introduce any significant errors because the response was dominated by plastic bending well beyond the elastic limit. Sometimes we can make a model simulation possible by observing (perhaps by preliminary experiments or analysis) that a combination of pi terms governs the response of interest, rather than each pi term separately. The drop test experiment illustrated, for example, that a combination of the term m/ρl³ and the term K/σl in the form (m/K)(σ/ρl²) controlled the loading duration of the beams. This distortion allowed us to create a model in which the spring constant and the clamping mass could be distorted without introducing any error. In explosive cratering, we were able to create a model only by observing that the term W_m/ρ_m c²d³ combined with the term W_m/ρ_m g d⁴ to form the combination term W_m^(7/24)/(ρ_m^(7/24) c^(1/3) g^(1/8) d). While it may not have been apparent at the start that this combination, and not the two separate terms, was the important term, it was only through this distortion that we were able to make a successful model. In other cases, we may be able to model the response of interest only over a limited range of the variables. That is, within this range, the physics are accurately simulated, but outside it we will have distorted the physics (rather than just made the model look different). For example, the static deflection at the mid-span of a simply supported beam under a concentrated load at mid-span is:

X = PL³/48EI    (7)
This equation can be rearranged to display the three pi terms that govern the deflection as:

X/L = (1/48)(P/EL²)(L⁴/I)    (8)
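As a small numerical sketch (the load, span, modulus, and section values below are arbitrary assumptions, not data from the text), the dimensional form and the pi-term form of the mid-span deflection give identical results:

```python
# Mid-span deflection of a simply supported beam under a central load P:
# X = P*L^3/(48*E*I), the standard bending solution.
P = 1.0e3    # concentrated load, N (assumed)
L = 2.0      # span, m (assumed)
E = 200.0e9  # elastic modulus, Pa (assumed)
I = 8.0e-6   # second moment of area, m^4 (assumed)

X = P * L**3 / (48.0 * E * I)

# Pi-term form: X/L = (1/48)(P/(E*L^2))(L^4/I), an algebraic rearrangement.
X_over_L = (1.0 / 48.0) * (P / (E * L**2)) * (L**4 / I)

print(X, X_over_L)
```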
Readers with a knowledge of structural mechanics will realize that Eq. (8) is a bending solution with a limited range of validity of the pi term I/L⁴. If I/L⁴ becomes too small, membrane forces rather than bending forces become predominant. If I/L⁴ becomes too large, shear forces will become predominant. There is a great deal of truth to the assertion that what we have been calling distortions are not distortions at all, since we have always simulated the physics accurately. If we can make the model look different, or treat some parameter of the model differently than it is treated in the prototype, without distorting the physics, we should do so if this distortion makes a model possible that otherwise would not be possible. In those cases, our original list of parameters included some parameters that were actually not needed to represent the physics of the response. Perhaps we needed to include only the structural bending stiffness rather than both the modulus of elasticity and the thickness. Perhaps we only needed the applied impulse rather than the complete pressure-time history of the loading. In that sense, these distortions are really corrections to an over-defined list of parameters. One can safely apply such corrections or distortions when something is known about the physics of the problem.

Accuracy in Modeling and Engineering

A criticism sometimes made about modeling is that the accuracy of the results can not be established unless the model is compared to a prototype. Throughout this book we have attempted to use only illustrations for which both model and prototype data were available so that comparisons could be made, but that does not mean that we think that such comparisons must always be made. The criticism concerning accuracy is not limited or unique to modeling. The accuracy of an analytical solution can not be assessed, either, unless tests are conducted to evaluate the analysis.
To illustrate the point that a mathematical analysis can have unknown accuracy, we will compare the analytical solution for the dynamic response of simple structural elements to experimental data. Locklin and Mills¹ report results for the transient elastic response of thin cantilever beams subjected to air blast pressure. Table 14.2 compares the maximum tip deflection measured in the tests to that predicted by the analytical model. The best agreement is 10% difference and the worst is 83%. For a second example, we will compare the maximum elastic strain measured at the root of blast-loaded cantilever beams to the corresponding strain predicted from a small-deflection analytical model with distributed mass². (The analytical model is similar to the one used to predict tip deflections for Table 14.2.) The comparisons are given in Table 14.3. The agreement here ranges from excellent (2% difference) to quite poor (245% difference).

TABLE 14.2 Comparison of Maximum Tip Deflection for an Elastic Cantilever Beam Under Blast Loading

Test No. | Measured Deflection (in.) | Predicted Deflection (in.) | Percent Difference
1 | 0.270 | 1.070 | -75
2 | 0.090 | 0.235 | -62
3 | 0.060 | 0.075 | -20
4 | 0.046 | 0.042 | +10
5 | 0.600 | 3.500 | -83
6 | 0.280 | 0.740 | -62
7 | 0.200 | 0.260 | -23
8 | 0.072 | 0.141 | -49
9 | 0.460 | 0.360 | +31

TABLE 14.3 Comparison of Maximum Strain at the Root for an Elastic Cantilever Beam Under Blast Loading

Test No. | Measured Strain × 10⁶ | Predicted Strain × 10⁶ | % Difference
1 | 3800 | 5000 | -24
2 | 1670 | 2020 | -17
3 | 930 | 1170 | -20
4 | 3180 | 3060 | +4
5 | 870 | 950 | -8
6 | 340 | 470 | -18
7 | 3500 | 2250 | +56
8 | 1000 | 680 | +47
9 | 480 | 390 | +23
10 | 1540 | 1570 | -2
11 | 640 | 260 | +146
12 | 270 | 135 | +100
13 | 840 | 550 | +53
14 | 535 | 155 | +245
15 | 155 | 90 | +72

These two examples illustrate that a mathematical analysis, which is supposedly a good representation of physical reality, can be rather inaccurate. In fairness, it should be pointed out that beam frequencies are predicted more accurately and that better predictions of strain and deflection could be made for thicker beams. On the other hand, the examples might cause one to question the accuracy of an analytical model of the dynamic responses of a complex structure such as an aircraft, ship, or building, especially if plastic rather than elastic deformation was of concern. We can conclude that the question of accuracy is applicable to all engineering analysis techniques. Those who use modeling techniques should not feel more doubtful than those who solve a differential equation or use a computer code. Modeling is an analysis technique, a method for obtaining engineering answers. There is no need to apologize for an analysis technique as long as the user understands the assumptions and evaluates their implications. A model-prototype comparison is not needed just because the technique used is modeling, if the assumptions can be evaluated in some other way, say by reference to an analogous problem already solved.

Method of Conducting a Model-to-Prototype Comparison

Assuming that the assumptions made in the model analysis are justified, the most meaningful statement about the correlation between a model and a prototype is obtained from the probability that the statistical means of the two data sets are within a given tolerance.
For the probability to have significance, the probability that other model measurements and other prototype measurements are within the same tolerance must be compared with the probability that the model and the prototype data are correlated. For example, one should not expect model and prototype data to correlate within ±5% if the probability is low that additional model data (or additional prototype data) will correlate with the available model data (or prototype data) within this same tolerance. Likewise, if the probability is high that additional model data or additional prototype data will correlate within, say, ±30%, then the probability should also be high that the model and prototype data will
correlate within this tolerance; otherwise, the model is not similar to the prototype. The probability that model and prototype correlate within a specified tolerance will always be smaller than the probability that either other prototypes or other models correlate among themselves within this same tolerance. If the probability that model and prototype correlate approaches the lower of the other probabilities, the model-prototype correlation can be considered as good. Provided that model and prototype do correlate, a comparison between the probabilities that (1) other model results compare well with the model statistical mean, and (2) other prototype results compare well with the prototype statistical mean is a measure of the influence of scale or size on the scatter in experimental results.

We will give an illustration of a model-prototype comparison for the drop-test results described in Chapter 1. The aluminum beams are called the model and the steel beams are called the prototype. From Figure 1.4, we can infer that the model and prototype data do overlay fairly well when scaled, but we want to make definite statements about this correlation. Because there are only 2 to 4 data points for each drop height, we will first fit a curve to each set of raw data (Figure 1.3) so that all of the data for each sample is used. A standard deviation will then be computed for each family of data around the respective curve fits. This computation allows us to determine the probability that additional, repeat tests would fall within a given tolerance. Similar calculations for the scaled data will permit us to make definite conclusions about the degree of correlation between model and prototype. Each family of data can be defined by a straight line through the data points, using a least-squared-error fitting procedure. The equation of the proposed straight line is:

δ = AH + B    (9)

where A and B are coefficients to be determined from the curve fit. We can develop the least-squares fit by rearranging Eq. (9) in matrix form:

[H  1.0][A  B]ᵀ = [δ]    (10a)

or in shorthand notation:

[H][A] = [δ]    (10b)

The [H] matrix consists of two long columns of data, with the first column being all the drop height values and the second column being an equal number of 1.0's. The [δ] matrix is a single column of the residual deformation data. Both sides of Eq. (10) are multiplied by the transpose of [H] and then inverted to obtain:

[A] = [HᵀH]⁻¹[Hᵀ][δ]    (11)
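The normal-equation solution of Eq. (11) is easy to sketch in code. The (H, δ) pairs below are illustrative stand-ins (the raw Fig. 1.3 data is not reproduced here), chosen to lie exactly on a known line so the recovered coefficients can be checked:

```python
# Least-squares fit of delta = A*H + B via the normal equations of Eq. (11):
# [A] = ([H]^T [H])^-1 [H]^T [delta], where [H] has the drop heights in its
# first column and 1.0's in its second.
H_vals = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]      # drop heights (illustrative)
delta = [0.2 * h - 0.15 for h in H_vals]           # a perfect line, for checking

n = len(H_vals)
# [H]^T [H] is the 2x2 matrix [[sum(H^2), sum(H)], [sum(H), n]],
# and [H]^T [delta] is the vector [sum(H*delta), sum(delta)].
shh = sum(h * h for h in H_vals)
sh = sum(H_vals)
shd = sum(h * d for h, d in zip(H_vals, delta))
sd = sum(delta)

# Invert the 2x2 matrix explicitly and apply it.
det = shh * n - sh * sh
A = (n * shd - sh * sd) / det
B = (shh * sd - sh * shd) / det
print(A, B)  # slope and intercept of the fitted line
```

Since the sample data lie on the line δ = 0.2H − 0.15, the fit recovers those coefficients.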
For the data of Fig. 1.3, the computed results are:

δ_steel = 0.08305 H_steel − 0.1565    (12a)
δ_aluminum = 0.20891 H_aluminum − 0.1384    (12b)

For the combined data of Fig. 1.4, the same procedure gives:

y = 0.13748 × 10³ (2ρgH/σ_y) − 0.02805    (13)

The negative B coefficients imply that some drop height greater than zero is required to cause permanent deformation. The correlations do not apply for drop heights less than this threshold height. Standard deviations of each data set about its respective correlation are computed by creating a new distribution. Each computed tip deflection from the correlations is divided into the observed deflection for the same drop height. Table 14.4 illustrates the procedure for the aluminum beam data.

TABLE 14.4 Computation of Distribution About a Value of Unity for the Aluminum Model Data

H (in.) | δ calculated (in.) | δ measured (in.) | (δ obs)/(δ calc)
5 | 0.9064 | 0.135 | 0.1495
5 | 0.9064 | 0.800 | 0.8864
10 | 1.9513 | 2.170 | 1.1120
10 | 1.9513 | 1.700 | 0.8710
15 | 2.9961 | 3.100 | 1.0360
15 | 2.9961 | 2.900 | 0.9692
20 | 4.0410 | 4.890 | 1.2100
20 | 4.0410 | 3.980 | 0.9849
25 | 5.0858 | 5.440 | 1.0704
25 | 5.0858 | 5.620 | 1.1058
30 | 6.1307 | 6.300 | 1.0289
30 | 6.1307 | 6.140 | 1.0028
35 | 7.1755 | 7.440 | 1.0385
35 | 7.1755 | 7.040 | 0.9827
40 | 8.2204 | 7.320 | 0.8922
40 | 8.2204 | 8.040 | 0.9799

Table 14.4 is now a sample of 16 rather than the inadequate 2 for each drop height. The standard deviation for this sample of 16 is:

S = { Σ [(δ observed)/(δ calculated) − 1.0]² / n }^(1/2)    (14)

where n is the sample size.
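Eq. (14) can be applied directly to the ratio column of Table 14.4; the sketch below recomputes S with n = 16 in the denominator. It yields roughly 23%; the small difference from the quoted 22.55% presumably reflects rounding in the tabulated ratios.

```python
import math

# (delta observed)/(delta calculated) ratios from Table 14.4.
ratios = [0.1495, 0.8864, 1.1120, 0.8710, 1.0360, 0.9692, 1.2100, 0.9849,
          1.0704, 1.1058, 1.0289, 1.0028, 1.0385, 0.9827, 0.8922, 0.9799]

# Eq. (14): standard deviation of the ratios about a value of unity.
S = math.sqrt(sum((r - 1.0) ** 2 for r in ratios) / len(ratios))
print(f"S = {100.0 * S:.1f}%")
```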
The standard deviation S for the aluminum model is 22.55%; a similar calculation for the steel prototype gives S = 20.17%, and for the scaled data it gives S = 25.72%. Using these values of S, we can compute the probabilities that: (1) additional model results would fall within a specified tolerance of the present model results; (2) additional prototype results would fall within a specified tolerance of the present prototype results; and (3) the scaled, non-dimensional model and prototype results are correlated within the same tolerance (in statistical language, that the two sets of distributions are
TABLE 14.5 Tolerance and Probabilities for Model-Prototype Correlation

Tolerance (%) | Probability that steel correlates with steel | Probability that aluminum correlates with aluminum | Probability that all data correlates | Degree of Association (%)
±1 | 3.94 | 3.54 | 3.11 | 87.85
±5 | 19.57 | 17.55 | 15.41 | 87.80
±10 | 37.98 | 34.24 | 30.25 | 88.34
±20 | 67.84 | 62.48 | 56.32 | 90.14
±30 | 86.30 | 81.65 | 75.60 | 92.59
±40 | 95.25 | 92.38 | 88.00 | 95.25
±50 | 98.68 | 97.34 | 94.81 | 97.40
±60 | 99.70 | 99.22 | 98.04 | 98.81
part of the same family). To do so, we will use the Student's T method, which is a statistical measure of correlation. The three probabilities from the Student's T method are listed in Table 14.5 for eight different tolerances. (A tolerance of ±5% means that the data falls between 0.95 and 1.05.) As has already been stated, the probability that model and prototype correlate within a specific tolerance will always be lower than the probability that other prototypes or other models will correlate within the same tolerance. The lower of these last two probabilities is the upper limit on the probability that the model and prototype distributions belong to the same family. If the probability of correlation of the two sets of scaled data (the fourth column in Table 14.5) approaches the probability of correlation of the aluminum model (the lower of the model and prototype probabilities), it is reasonable to claim that the model and prototype data do come from the same family. The last column in Table 14.5 is the probability of all the data correlating, divided by the probability of aluminum correlating with aluminum. For lack of a better name, we will call this number the degree of association. From Table 14.5, it can be concluded that the model and prototype data do come from the same family. No matter what tolerance we choose (for the standard deviations found in the test) even as small as ±1%, the degree of association is at least about 88%. If we are satisfied to have the model predict the prototype results to within ±30%, the degree of association is 92.6%, which means it is almost certain that the model does correlate with the prototype. Note that the scatter in the drop-test data is fairly large. 
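The probabilities in Table 14.5 can be reproduced closely by treating each distribution as normal with the standard deviations quoted above; that treatment is my assumption here (the text cites the Student's t method, which approaches the normal distribution for samples of this size), so the sketch is a cross-check, not the book's stated procedure:

```python
import math

def prob_within(tol_pct, s_pct):
    # Probability (in percent) that a normally distributed ratio lies within
    # +/- tol of its mean, given standard deviation s (both in percent).
    return 100.0 * math.erf((tol_pct / s_pct) / math.sqrt(2.0))

S_steel, S_alum, S_all = 20.17, 22.55, 25.72  # standard deviations from Eq. (14)
for tol in (1, 5, 10, 20, 30, 40, 50, 60):
    p_steel = prob_within(tol, S_steel)
    p_alum = prob_within(tol, S_alum)
    p_all = prob_within(tol, S_all)
    assoc = 100.0 * p_all / p_alum  # degree of association
    print(f"+/-{tol:2d}%  {p_steel:6.2f}  {p_alum:6.2f}  {p_all:6.2f}  {assoc:6.2f}")
```

The ±30% row, for example, reproduces the tabulated 86.30, 81.65, 75.60, and 92.59 to within about 0.1 percentage point.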
The data is also unusual in that the physically larger system (the aluminum beams) has the greater scatter; normally, smaller systems (i.e., the model in most cases) have the greater scatter because of poorer control over manufacturing tolerances, greater influence of small irregularities, and so on. If one accepts the evidence in Table 14.5 as indicating that prototype and model are from the same distribution, Table 14.5 is also a measure of the influence of dimensional size on scatter.

How to Use Models

With all the illustrations given in the previous chapters, the reader might think that the question of how to use models has been answered. In fact, only a few of the ways were illustrated because the emphasis was on examples for which model and prototype comparisons could be made. Table 14.6 lists the six ways that models can be used.
TABLE 14.6 How Models Are Used

1. Collect data to evaluate an analysis procedure
2. Obtain quantitative data for a prototype design
3. Generate a functional relationship to solve a general problem empirically
4. Evaluate limitations of an existing (expensive) system
5. Explore fundamental behavior of new phenomena
6. No other method of analysis is possible
One uses models to collect data for an evaluation of an analysis, which is the first use of models listed in Table 14.6, because such an evaluation is less expensive or quicker than other methods. The only way to evaluate predictions from the analysis is to obtain experimental results for comparison. The second use of models is to obtain quantitative data for a prototype design. The aerospace industry makes good use of models in this way; virtually every airplane or flight vehicle has been tested as a model in a wind tunnel to determine its aerodynamic characteristics or flight loads. Hydraulic engineers also make effective use of models for the design of dam spillways and flood control measures. Other engineers should learn from these practitioners and use models in the early stages of design to discover problems and fixes. The third reason to use a model is to generate a functional relationship empirically to solve a general problem. Hopkinson-Sachs blast scaling is an excellent example of this use. This scaling "law" shows that scaled or normalized pressure is a function of scaled stand-off distance. Relatively few experimental tests were needed to fill in the entire two-dimensional space of blast pressure vs. stand-off distance. Hopkinson-Sachs scaling is the basis for virtually all air blast predictions. The explosive cratering illustration in Chapter 11 is another example of obtaining an entire solution, in this case for crater size as a function of charge weight, depth of burial, and soil type; the solution was made possible because a model analysis showed that test data could be normalized using similitude theory. The development of nomographs or design charts to assist in determining loads, section properties, and other design inputs is a less obvious variation of this use of models. Such charts can be made more compact when the variables are non-dimensional. The fourth reason for using models is to evaluate the limitations of an existing system.
Models become appealing when an evaluation of a prototype might result in its destruction. Such model applications include many of the vulnerability assessments prevalent throughout this book. How vulnerable is an oil tanker to leakage when it collides with a floating object? How much thermal protection is required to make the reentry of a space vehicle safe for its inhabitants? How should a tank be designed to make it resistant to land mine attack? Many other examples can be cited for this use of models. Models can also be used to explore the fundamental behavior of new phenomena. Is gravity, or inertia, or surface tension, or thermal effects, or constitutive behavior significant in some poorly understood system? Build a model that simulates certain phenomena and distorts or ignores others. Make a model-prototype comparison to determine if the distorted phenomena are significant. If the comparison is poor, build another model based on another group of assumptions to determine whether other phenomena can be ignored. One good example of this use of models is in soil mechanics, where such an approach has shown that gravity and inertial properties should be simulated in granular soils while constitutive relationships can be ignored, and constitutive and inertial properties should be simulated in cohesive soils while gravity can be ignored. Another example is the use of geometrically larger models to obtain data when the prototype is too small to be instrumented.
The final reason listed for using models is when no other method is possible. There are many systems for which testing the prototype, or some other "real" thing, would be dangerous to the experimenters or to the public; one would not enjoy examining, for example, the failure modes of an actual operating nuclear power plant! The effect of nuclear explosions in the air on ground or air structures is another such example. Many safety-related problems fall in this category of the use of models, such as models that examine the propagation of fires on offshore oil platforms or through hotels, and the effects of earthquakes on buildings. For these applications, full-scale tests are out of the question, and computer simulations, because of lack of complete knowledge, may not incorporate all the relevant physics. Model tests are the only real way to find answers to such problems.

Conclusions

Although this book has been restricted to the use of models to obtain solutions of engineering dynamics problems, no such limitation exists in general. Models can also be used to solve statics problems, for example. Sedov³ has a fascinating discussion of using models for astrophysical studies such as the variation of brightness of Cepheids and the dynamics of supernovas. Esnault-Pelterie⁴ describes the use of models in meteorology. Models can be used to speed up reaction rates in chemical process studies. The Lovelace Foundation has used models to note similarities between animal and human responses. Similitude theory can be used in all these fields because of its generality and because it is an analysis technique. A model can be a valuable and versatile analysis technique. The success of the technique depends to a degree on the user's imagination and creativity. Early in Chapter 1, we claimed that modeling was an art as well as a science. Unlike other arts, however, modeling can be learned without a high degree of natural talent.
Being talented helps, but experience is of far more importance, and experience can be gained by study and observation. Not all experience has to arise from personal practice. The last question to be answered is "When should similitude theory be applied?" The answer is: models should be used whenever their application provides reasonable and economical answers to a physical problem or when other methods are not feasible. All physical problems begin with a definition of the problem. Then the engineer must decide on a technique for seeking a solution to the problem by the use of available engineering tools. The tools include: (1) mathematical analysis (closed form answers and computer simulations); (2) analogs such as electrical or hydraulic analogs of mechanical systems; and (3) experiments, which include full-scale "boiler plate" tests as well as scaled-down model tests. (In earlier chapters, we emphasized that dimensional analysis should be applied to full-scale tests as well as to model tests because of the resulting reduction in the number of variables to be investigated.) Which of the engineering tools we should apply is decided by the cost and the time schedule - how much is the cost to obtain the answer, and how soon do we need the answer? Models should be used when they are the cheapest or quickest alternative. The only exception to this rule is when a model, regardless of cost, is the only method available.
References

1. R. G. Locklin and S. N. Mills, Jr., Dynamic Response of Thin Beams to Air Blast, Ballistic Research Laboratories Report No. 787, Aberdeen Proving Ground, Maryland, September 1965.
2. W. E. Baker, W. O. Ewing, Jr., J. W. Harma, and G. E. Bunnewith, The Elastic and Plastic Response of Cantilevers to Air Blast Loading, Proc. 4th U.S. National Congress of Applied Mechanics, ASME, New York, 1962.
3. L. I. Sedov, Similarity and Dimensional Methods in Mechanics, Academic Press, New York, 1959.
4. R. Esnault-Pelterie, Dimensional Analysis and Meteorology (The Giorgi System), F. Rouge, Lausanne, 1950.