Planet. Space Sci., Vol. 44, No. 12, pp. 1563-1578, 1996
Pergamon
Copyright © 1996 Elsevier Science Ltd. Printed in Great Britain. All rights reserved 0032-0633/96 $15.00+0.00
PII: S0032-0633(96)00064-5
Catastrophic fragmentation as a stochastic process: sizes and shapes of fragments

A. La Spina and P. Paolicchi
Dipartimento di Fisica, Universita di Pisa, Piazza Torricelli 2, I-56127 Pisa, Italy

Received 21 July 1995; revised 10 April 1996; accepted 10 April 1996
Abstract. It is rather difficult to understand theoretically and to analyse the experimental data concerning the mass and shape distributions of fragments created by catastrophic collisions. The fragmentation process is discussed as being a stochastic phenomenon; the size and shape distributions obtained in this way are compared with the results of laboratory experiments. The results are presented of some computer simulations of random volume fragmentation processes; they are a 3-D generalization of the numerical experiments described in Grady and Kipp (J. Appl. Phys. 58, 1210-1222, 1985). The problem of defining the shape of a fragment, since it is not obvious in which order to define the dimensions A, B and C, is also discussed. In this light, the shape results (the mean values corresponding to different assumptions regarding the fragmentation properties) are presented; the main features of the shape distributions obtained in laboratory experiments cannot be satisfactorily reproduced. Comparison of the results with the outcomes of the semiempirical fragmentation model by Paolicchi et al. (Icarus 121, 126-157, 1996), as well as with some results coming out from hydrodynamical simulations, shows how only a "global" and physical model, not a purely statistical one (neither global nor "local"), can afford to reproduce the observed data. Copyright © 1996 Elsevier Science Ltd

Correspondence to: P. Paolicchi
1. Introduction

For a long time, the study of impact fragmentation has been an important issue, connected to several applications in the fields of engineering, geophysics and astrophysics. In particular many of the observed properties of asteroids (size distribution, shapes, rotational properties and so on) can be understood as direct or indirect consequences of a complex collisional evolution in which the catastrophic events (i.e. those leading to a significant fragmentation of both impacting bodies) play a dominant role (see, for essential references, Farinella et al. (1982), Davis et al. (1989), Chapman et al. (1989) and Paolicchi (1994)). The problem of impact fragmentation has been studied per se both from a theoretical point of view and by means of laboratory experiments (for these, see the reviews by Fujiwara et al. (1989) and Martelli et al. (1994)). The theoretical approach to the problem presents itself as dramatically difficult: the involved physics is extremely complex; the strongest challenge probably concerns the range of scales involved: the fractures can start as quasi-molecular flaws in the structure of the original bodies, and then, due to the impact, grow to macroscopic scales, up to (in the case of asteroids) the order of magnitude of many kilometres. Moreover, it is not easy to understand the propagation of the effect of the collision within the various parts of the impacted bodies. Difficulties arise from the geometry (compression and rarefaction waves, their reflection at the boundaries, their interferences) and from the physics (properties of the material; structure of the body before the impact: porosity, pre-existing fractures and so on; existence of flaws, birth and growth of fracture surfaces). Another difficult problem concerns the formation of fragments and their properties (mass distribution, shapes, dynamical properties and so on). To build a complete theoretical picture for these processes seems a troublesome task.
At the moment we have only semiempirical models (Verlicchi et al., 1994; Paolicchi et al., 1996) or numerical hydrocodes, based on a modified version of the Grady-Kipp theory of fracture (Grady and Kipp, 1980), giving noteworthy but (at least for the moment) partial results (Melosh et al., 1992; Benz and Asphaug, 1994; Benz et al., 1994). In this context, it seems to be worthwhile to resort to a purely statistical approach, to check whether some of the observed properties can be understood in terms of simple probabilities. In this work we will examine the size distribution and the shapes of the fragments. In principle, we may be doubtful about the feasibility of such an approach as applied to the size distribution. Laboratory experiments show that a strong correlation is present between the size of the fragment and its distance from the impact point, and this suggestion is validated also by theoretical considerations concerning the strain rate. Its variation within the target causes a different increase of the damage with time and thus, at the end, a different degree of fragmentation (Melosh et al., 1992). In this sense, the size distribution should originate from the combined effect of a "global" distribution, due to the overall impact geometry, and of a "local" (and maybe stochastical) term (giving the distribution of sizes in a given zone, where the general physical conditions are approximately the same). The previous considerations do not apply to the shape distribution, which is independent of the size, and thus can put together regions where the fragments are generally big, and those where small bodies originate. A stochastical model of fragmentation is implicitly present in the classical, time-honoured Weibull (1939) theory. The first explicit models were later introduced by Mott and Linfoot (1943), and Mott (1943a,b, 1947). Far later, Grady and Kipp (1985) thoroughly discussed various cases, performing 2-D simulations of stochastical fragmentation to obtain the size distribution of fragments. The Grady and Kipp approach is the starting point of our numerical computations. For a few cases, we extend the simulations to a 3-D treatment and discuss the properties of the size distribution.
We compare our results with the Mott-Linfoot and Grady-Kipp forecasts, with the results of the experiments and with the results of both semiempirical and hydrodynamical models. Moreover, we extend our analysis to the (less explored) problem of shapes. A few possible statistical models of fragmentation are introduced; our computations show that a truly satisfactory fit with the experiments cannot easily be achieved. In particular, regarding the fragment shapes, with some specific (and ad hoc) prescriptions on the fragmentation statistics, we were able to reproduce the mean values of the axial ratios resulting from the laboratory experiments, but not the shape distributions.
2. Stochastical models of fragmentation: the size distribution

The problem of fragmentation as a stochastical process has been studied by various authors, with applications to different problems, ranging from astrophysics to geophysics, but, generally, limited to the analysis of the resulting size distribution. Our main goal is the analysis of the shape distribution; however we will also compare our results
with some of the main issues, present in the literature, concerning the mass distribution of fragments. Mott and Linfoot (1943) simulated a fragmentation process in which an infinite area was divided by means of a grid of horizontal and vertical lines. They obtained a cumulative fragment distribution following the exponential law:

N(a) = N0 exp[-(4 N0 a)^(1/2)]     (1)

where N(a) represents the cumulative number of fragments (per unit area) whose area exceeds a and N0 is the total number of fragments per unit area. For volume fragmentation, the corresponding distribution would be:

N(v) = N0 exp[-(27 N0 v)^(1/3)]     (2)
In 1985 Grady and Kipp, with a different approach, suggested an alternative distribution:

N(a) = N0 exp(-N0 a)     (3)

for area fragmentation and

N(v) = N0 exp(-N0 v)     (4)
for volume fragmentation. Several algorithms for area fragmentation were presented in the Grady and Kipp paper (1985), while no model was computed in three dimensions. Their main result was the dependence of the distribution on the algorithm, sometimes obtaining a good fit to the Mott-Linfoot prediction, sometimes a better fit to their own, sometimes also (as in the Voronoi-cells model) a distribution rather different from both. Our task has been essentially to test this analysis and extend it to 3-D. We performed a few 2-D simulations, dividing a unit square either with horizontal and vertical lines or with randomly oriented lines, obtaining results (Figs 1 and 2) not too different from those presented in the above quoted Grady-Kipp paper. We refer to the Grady-Kipp paper for a visual representation of these cutting procedures. Besides the simple methods mentioned above, alternative algorithms for the fragmentation process (Voronoi, Johnson-Mehl) were discussed by Grady and Kipp (1985), but none of them can be extended to 3-D in a simple way. We therefore considered only the Grady and Kipp (1985) models (referring to a square divided by orthogonal or randomly oriented lines), which allowed a good 3-D representation, and invented a few different 2-D algorithms, containing the same immediate possibility of extension. The basic model for our simulations is the slicing of a square (cube), of side 1, by means of two (three) sets of lines (planes), parallel within each set and orthogonal to its sides (faces). Hence the fragments obtained are rectangles (parallelepipeds). This trivial model will be referred to as "clear". If a completely random fragmentation is to be excluded, the model just described can be modified: a slicing will be allowed only in the largest part previously obtained. We have termed this case "hierarchical". This model tends to avoid very odd shapes; nevertheless
Fig. 1. Cumulative number-size relation in the case: 2-D, parallel cuttings, 400 fragments, "clear" case (see Section 6), compared with Mott-Linfoot and Grady-Kipp distributions. The dashed curves correspond to the theoretical distributions: Mott-Linfoot (long dashes) and Grady-Kipp (short dashes)
Fig. 2. Cumulative number-size relation in the case: 2-D, non-parallel cuttings, 225 fragments, compared with Mott-Linfoot and Grady-Kipp distributions. The dashed curves correspond to the theoretical distributions: Mott-Linfoot (long dashes) and Grady-Kipp (short dashes)
very thin fragments can be obtained, because divisions arbitrarily close to the sides are still allowed. We have defined a "hierarchical with cut-off" model in the following way: a threshold T is established, slices closer than T to the sides of the square (cube) are discarded and the calculation of the random data is repeated. We cannot forget the "most random" model: dividing the square (cube) by a set of randomly oriented lines (planes). It will give the worst fit to the experiments, at least regarding the shapes. In Section 5 we will present a couple of additional cutting models, which are not suitable for unambiguous estimates of the size distribution, but will be adopted only to extend the analysis of the shape distributions:
1. A "shape with cut-off" model. A threshold T is also requested as input in this case; the fragments are created as in the "clear" model, but a parallelepiped is "cut" along its largest dimension A if the ratio C/A is less than T. This additional cutting is not completely artificial: as many laboratory experiments take place in closed chambers, the fragments can undergo secondary breakups as a consequence of impacts with the walls. On probability grounds we assumed that this cutting would involve only the longest dimension A.
2. The "local" model. Parallelepipedal fragments are generated separately, without the constraint of lying inside an empty solid figure: obviously it is very unlikely that, at the end of creating a lot of such figures, we should be able to fill an "original parent body", as in a jigsaw puzzle.
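As an illustration, the "clear" and "hierarchical with cut-off" prescriptions can be sketched in a few lines of code. This is our own minimal reconstruction, not the authors' program: in particular, the choice of slicing the largest fragment along its longest side, and the use of a relative (rather than absolute) threshold T, are our assumptions.

```python
import random

def clear_model_3d(n_cuts_per_axis, seed=0):
    """'Clear' model: slice the unit cube with three sets of axis-parallel
    planes placed at random positions; fragments are parallelepipeds."""
    rng = random.Random(seed)
    axes = []
    for _ in range(3):
        cuts = sorted([0.0, 1.0] + [rng.random() for _ in range(n_cuts_per_axis)])
        axes.append([hi - lo for lo, hi in zip(cuts, cuts[1:])])  # interval widths
    # Each fragment volume is the product of one interval width per axis.
    return sorted((dx * dy * dz for dx in axes[0]
                   for dy in axes[1] for dz in axes[2]), reverse=True)

def hierarchical_cutoff_3d(n_fragments, T=0.0, seed=0):
    """'Hierarchical' model: each cut splits the largest fragment obtained so
    far (here, along its longest side). With T > 0 ('hierarchical with
    cut-off'), relative cut positions closer than T to a face are redrawn."""
    rng = random.Random(seed)
    boxes = [(1.0, 1.0, 1.0)]          # fragments as side-length triples
    while len(boxes) < n_fragments:
        boxes.sort(key=lambda b: b[0] * b[1] * b[2])
        box = list(boxes.pop())        # largest-volume fragment
        axis = max(range(3), key=lambda i: box[i])
        f = rng.random()
        while f < T or f > 1.0 - T:    # cut-off: reject cuts too close to a face
            f = rng.random()
        left, right = box[:], box[:]
        left[axis] *= f
        right[axis] *= 1.0 - f
        boxes += [tuple(left), tuple(right)]
    return boxes
```

With 6 cuts per axis the "clear" model yields 7 x 7 x 7 = 343 fragments, the number used in Figs 3 and 4; both routines conserve the unit volume of the parent cube.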
The obtained size distributions are presented in Figs 1-5. For the 3-D case, the parallel cuttings, "clear" model gives results (Fig. 3) relatively close to the Mott-Linfoot theory, while the size distribution from the "hierarchical-with-cut-off" model (Fig. 4) is rather similar to the Grady and Kipp one; finally the case of randomly oriented planes (Fig. 5) gives a result intermediate between the Mott-Linfoot and the Grady-Kipp distributions. We do not represent the results of other simulations: the pure "hierarchical" model gives results similar to the "clear" one, and the results presented in Fig. 4 are not too critically dependent on the choice of the cut-off parameter. The results in the non-parallel case are possibly biased by poor statistics (the number of fragments is small, and the model is limited by computer time and memory allocation constraints). In the case of parallel cuttings we present outcomes with almost the same number of fragments, for the sake of homogeneity, but we found that an increased number of fragments does not alter, qualitatively, the results. The interpretation of these results is not easy. Qualitatively, one can suggest that the introduction of a hierarchical and of a cut-off procedure makes the formation of many large fragments more difficult, and consequently the available volume is distributed more equitably among the outcomes. This results in a final distribution similar to the Grady-Kipp prediction, while the pure stochastical choice, with no constraints, enhances the division of the available volume into both large and very small fragments, as in the case of the Mott-Linfoot dis-
tribution. We recall that, at least in the parallel cuttings cases, the total number of fragments is fixed a priori. Even if a final conclusion is not possible at this stage, the discrepancy among the various cases suggests that the main Grady and Kipp conclusion, i.e. that the distribution depends critically on the assumed model of fragmentation, retains its validity also in 3-D. We will now compare these results from stochastical fragmentation processes with the outcomes both of the laboratory experiments and of the various theoretical models (hydrocodes: see Melosh et al. (1992), Benz et al. (1994); semiempirical models: Verlicchi et al. (1994), Paolicchi et al. (1996)). We recall that the experiments as well as the theories show an approximate power law behaviour of the mass distribution. The experimental cumulative mass distribution has been widely discussed; most of the experimenters (see Section 3 for the references concerning the experiments) found some evidence for a change in the slope at some intermediate size: the cumulative number increases more rapidly from the largest to the intermediate-size fragments, then changes to a slower increase in the intermediate-to-small range. The theories reproduce qualitatively the features of the mass distribution, showing sometimes the change in the slope. In particular Melosh et al. (1992) exhibit various two-slope cases while Benz et al. (1994) find more ambiguous results. Paolicchi et al. (1996) maintain that the change in slope is strictly connected with the location of the largest fragment: if it is close to the antipodes, the change in slope is generally present; when a central core is formed, it is not (or, at least, it is far less evident). They give a geometrical interpretation of these results, consistent with the conclusions by Melosh et al. (1992). In general, theories and experiments seem to agree sufficiently well.
The interpretation of astronomical data to obtain asteroid size distributions is very difficult: we do not observe the outcomes of an individual phenomenon, but the final result of a multi-collision entangled process, for which the direct or indirect effects of gravitational forces work even at moderate sizes. Even the dynamical families of asteroids, presumably outcomes of an individual catastrophic impact, are affected by the collisional evolution following their formation (see, for a reference, Marzari et al. (1995)). Consequently we can only conclude that their qualitative features are similar to those observed in the experiments (see Cellino et al., 1991). Our stochastical mass distributions, as well as those which could be directly derived from the analytical Mott-Linfoot or Grady-Kipp formulae, pass from an approximately vertical slope for large fragments to a horizontal one for small bodies. The effect is especially pronounced for the Grady-Kipp linear exponential distribution, and slightly more moderate in the case of Mott-Linfoot. Our 2-D and 3-D simulations exhibit approximately the same trend. In Fig. 6 we represent a "Kresak-plot", i.e. a plot of the logarithm of the mass of the jth fragment (in units of the total mass) versus 2j-1, comparing the outcomes of two of our "parallel cases" (the same represented in Figs 3 and 4: the "clear case", labelled as KRE.OUT, and the "hierarchical-with-cut-off" one, labelled as KRE10.OUT) with one of the cases from the semiempirical model pre-
Fig. 3. Cumulative number-size relation in the case: 3-D, parallel cuttings, 343 fragments, "clear case", compared with Mott-Linfoot and Grady-Kipp distributions. The dashed curves correspond to the theoretical distributions: Mott-Linfoot (long dashes) and Grady-Kipp (short dashes)
Fig. 4. Cumulative number-size relation in the case: 3-D, parallel cuttings, 343 fragments, "hierarchical with cut-off (T = 0.1) case", compared with Mott-Linfoot and Grady-Kipp distributions. The dashed curves correspond to the theoretical distributions: Mott-Linfoot (long dashes) and Grady-Kipp (short dashes)
Fig. 5. Cumulative number-size relation in the case: 3-D, non-parallel cuttings, 261 fragments, compared with Mott-Linfoot and Grady-Kipp distributions. The dashed curves correspond to the theoretical distributions: Mott-Linfoot (long dashes) and Grady-Kipp (short dashes)
Fig. 6. "Kresak-plots" (i.e. the plot of the mass of the jth largest fragment, in units of the total mass, versus 2j-1) corresponding to our 3-D parallel cases (where the label "KRE.OUT" corresponds to the "clear" case, and the label "KRE10.OUT" to the "hierarchical-with-cut-off (T = 0.1)" one), compared with one of the cases generated by the semiempirical model described in Verlicchi et al. (1994) and Paolicchi et al. (1996) (labelled as "N30"), exhibiting the usual change in slope obtained in most laboratory experiments and hydrodynamical simulations, and with the outcome of one of the experiments by Giblin et al. (1994) (labelled as "G2")
sented by Verlicchi et al. (1994) and Paolicchi et al. (1996). This model, labelled as "N30", exhibits the change in slope common both to many experiments and to some numerical simulations. We represent in the figure also one of the experiments reported by Giblin et al. (1994). The representation of data with the "Kresak-plot" (essentially equivalent to a cumulative-number vs. size plot, with the interchange of the axes) has been chosen to enhance the comparison with many similar figures present in the literature, both concerning experiments (see, e.g. Capaccioni et al., 1986) and asteroid families (Chapman et al. (1989) and previous papers referenced therein). The figure shows that the stochastical curves are not able to reproduce the experiments in a satisfactory way, or as well as the theoretical models do. The knee they exhibit is too steep and is followed by a quasi-plateau. The "clear case" gives a slightly better fit, which could even be considered as acceptable. Unfortunately, as we will see later, it predicts values for the shapes which are completely wrong. In conclusion, we interpret these results in the sense that a "purely stochastical" fragmentation theory is presumably not adequate to explain the physical process. The results we are going to show for the shapes will strengthen this conclusion.
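For reference, the coordinates of such a "Kresak-plot" are straightforward to generate from any list of fragment masses. The short sketch below is our own illustration (the mass list is hypothetical); it returns the (2j-1, log10(m_j/M_tot)) pairs for the ranked fragments.

```python
import math

def kresak_points(masses):
    """Coordinates for a 'Kresak-plot': for the jth largest fragment
    (j = 1, 2, ...) return (2j - 1, log10(m_j / M_tot))."""
    total = sum(masses)
    ranked = sorted(masses, reverse=True)
    return [(2 * j - 1, math.log10(m / total))
            for j, m in enumerate(ranked, start=1)]

# Hypothetical fragment masses: the largest carries half of the total mass,
# so its point is (1, log10(0.5)).
pts = kresak_points([8.0, 4.0, 2.0, 1.0, 1.0])
```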
3. Shapes of fragments produced in laboratory experiments

Main belt asteroid shapes are rarely known, but their general characteristics can be understood through analysis of their light curves at various phases. Laboratory experiments performed during the last 10 years showed that the shape distributions of fragments are qualitatively similar to those of asteroids (see, e.g. Capaccioni et al., 1984). The comparison involves only small and intermediate-size asteroids, where the shape regularization introduced by self-gravitation is not relevant (see, for a discussion of this point, Farinella et al. (1981, 1982)). A seminal work on laboratory impact experiments was compiled by Fujiwara et al. (1977, 1978). They used cylindrical polycarbonate projectiles and cubic basalt targets, and a light gas gun which could achieve impact velocities of 1-4 km s-1. Taking into account fragments >4 mm, they measured the shape parameters, from smallest to largest size (the order of measurement is not negligible, see the next section), obtaining mean values of 0.50 for C/A and 0.73 for B/A. It is intriguing to note that the average ratio A:B:C is approximated well by the geometric ratio 2:√2:1. They found normal distributions centred around the average values both for B/A and for C/A. A correlation among the shape of fragments, the temperature and the degree of fragmentation was found by Lange and Ahrens (1981), who impacted ice targets at velocities of the order of 1 km s-1, finding that, at 257 K, the mean values for the axial ratios were 0.6 (B/A) and 0.4 (C/A), being larger for smaller temperatures (81 K). Capaccioni et al. (1984) confirmed the results obtained in 1978 by Fujiwara et al. They carried out six impacts using basalt and concrete targets, finding average values
for B/A in the range 0.64-0.72 and for C/A in the range 0.37-0.48. In the same year they performed two experiments impacting, with aluminium projectiles, targets made of fused alumina cement mixed with marble powder. Again, they found for A:B:C the general trend quoted above and normal distributions centred around the mean values of B/A (0.72 for both experiments) and of C/A (0.49, 0.48).
Recent experiments performed in open air (i.e. not in a closed impact chamber) by Giblin et al. (1994) show smaller mean values for both B/A (~0.6) and C/A (~0.4). This discrepancy may be (according to the suggestion of the authors) due to the fact that fragments measured in closed chambers, by hitting the walls, suffer secondary and artificial breakups, taking place preferentially along the longest axis, increasing the mean values of the measured ratios. When the experiment is performed in open air, this effect is less important, since the air friction slows the fragments during their fall. According to this suggestion, the smaller ratios found in these last experiments should be the original ones, resulting from the impact process. In spite of these moderate discrepancies, it is possible to state that the shapes derived from experiments carried out in different environmental conditions, and using different compositions and sizes for targets and projectiles, are sufficiently consistent amongst themselves. The ratios range around 0.6-0.7 and 0.4-0.5, and a relation of the kind (B/A)^2 = C/A approximately holds. The distributions are distinctly peaked around the mean value.
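The quoted relation can be checked with one line of arithmetic; the three B/A values below are illustrative points in the experimental range, not new data.

```python
# Rough check of (B/A)^2 = C/A against the mean axial ratios quoted above:
# B/A around 0.6-0.7 implies C/A around 0.36-0.49, consistent with the
# observed 0.4-0.5 range.
for b_over_a in (0.6, 0.66, 0.7):
    print(b_over_a, round(b_over_a ** 2, 3))
```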
4. Problems concerning the definition of the shape

Before focusing on the possibility of reproducing the shape distribution of fragments by means of a statistical model we should introduce a standard definition of shape. Fragment shapes are usually defined in terms of the ratios B/A and C/A; in the particular case of ellipsoidal fragments A, B, C represent the axes of the ellipsoid, while in the other cases (i.e. when the fragments are not ellipsoidal) this definition has to be extended, and the generalization is neither obvious nor unique. This is a point first raised by Verlicchi et al. (1994), which we will now discuss in more detail. We identified two different methods employed in the literature to analyse the experimental results, in which the order used in measuring the dimensions of a fragment is reversed. We also define two additional methods, never used before, but which sound equally reasonable.
1. We called TOP-DOWN (TD from now on) the method which starts by measuring the largest dimension present in the fragment: A; B is defined as the largest dimension orthogonal to A, and C as the largest dimension orthogonal to both A and B. This method has been used by Capaccioni et al. (1984) and Giblin et al. (1994).
2. The BOTTOM-UP (BU) method starts by measuring the smallest dimension in the fragment: C; B is defined as the smallest dimension orthogonal to C, and A as the dimension orthogonal to both C and B. This method
Fig. 7. A simple 2-D picture of the TOP-DOWN, BOTTOM-UP and PROJECTION methods
corresponds essentially to the procedure adopted by Fujiwara et al. (1978).
3. We identified the PROJECTION method as the method which starts by measuring the largest dimension in the fragment: A; it defines B as the largest dimension projected along A, and C as the largest dimension projected along B. This method has never been used in the analysis of the experiments, but sounds reasonable and could be easily implemented by image processing software.
By a simple example in 2-D we underline the differences in defining A, B, C using these three methods (see Fig. 7). There is a completely different way to define the shape of a fragment: to compute its tensor of inertia, diagonalize it and find the eigenvalues and the associated eigenvectors. The eigenvalues represent A, B, C and the eigenvectors the principal inertia axes. This is, in principle, the "optimal" method to define shape. Unfortunately it is a purely mathematical one: it is completely unrealistic to perform such a computation on fragments resulting from laboratory experiments. The indirect methods used to obtain A, B and C from observations of asteroids, by means of the light-curve
analysis, cannot easily be transformed into an explicit procedure that can be discussed here. Nevertheless, it is possible to rationalize that, due to difficulties concerning the analysis of data, they tend, in the case of irregular bodies, to underestimate the ratio B/A with respect to both the TD and BU methods (Zappala, private communication). Are the results coming out from these various methods similar? Obviously, they are exactly the same in all cases when fragments are ellipsoidal. In 2-D, for rectangular fragments, as in the example presented in Fig. 7, the values of B/A and C/A computed with the TD and with the BU methods are exactly coincident. This remains true for a more general set of 2-D figures, as could be shown by elementary geometrical arguments. In 3-D, things behave in a very different manner. First let us remark that the PROJECTION method leads mostly to far higher values of the ratios B/A and C/A. Even in the simple case represented in Fig. 7 this result is evident. In our numerical simulations, we often observed a marked increase of B/A, from 0.7 to 0.9 or so. In our opinion this behaviour was not so obvious a priori. The differences arising from the other methods are less severe. Nevertheless they are not negligible: in particular the mean value of B/A can differ, between the TD and the BU cases, by about 0.1, at least in the case of parallelepipedal fragments (see Verlicchi et al., 1994). Moreover, for individual fragments, the variation can be even larger. In Fig. 8, for example, we plot the differences between the C/A and B/A values obtained with the TD and BU methods for parallelepipedal fragments, with ratios of the sides regularly spaced. The fourth method (based on the computation of moments of inertia) has been, at the moment, introduced only in some numerical simulations (Paolicchi et al., 1996).
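The discrepancy between the TD and BU prescriptions can be made concrete for a single parallelepiped. The sketch below is our own: it takes the TD "largest dimension" to be the longest chord of the box (so A is the space diagonal) and finds B by a brute-force grid search over directions orthogonal to A, while for a box the BU values reduce to the sorted side lengths. This is an approximation for illustration, not the procedure used in the paper.

```python
import math

def chord(sides, u, eps=1e-12):
    """Longest segment with (unit) direction u that fits inside a
    rectangular box with the given side lengths."""
    return min(s / abs(c) for s, c in zip(sides, u) if abs(c) > eps)

def top_down(sides, n_theta=3600):
    """TOP-DOWN A, B, C for a box: A is the space diagonal; B the longest
    chord orthogonal to A (grid search over directions in the orthogonal
    plane); C the longest chord orthogonal to both."""
    a, b, c = sides
    A = math.sqrt(a * a + b * b + c * c)
    d = (a / A, b / A, c / A)                    # direction of A
    # Orthonormal basis (e1, e2) of the plane orthogonal to d.
    n1 = math.hypot(d[1], d[0])
    e1 = (-d[1] / n1, d[0] / n1, 0.0)
    e2 = (d[1] * e1[2] - d[2] * e1[1],
          d[2] * e1[0] - d[0] * e1[2],
          d[0] * e1[1] - d[1] * e1[0])           # e2 = d x e1
    best_B, u_B = 0.0, e1
    for k in range(n_theta):
        t = math.pi * k / n_theta
        u = tuple(math.cos(t) * x + math.sin(t) * y for x, y in zip(e1, e2))
        L = chord(sides, u)
        if L > best_B:
            best_B, u_B = L, u
    w = (d[1] * u_B[2] - d[2] * u_B[1],
         d[2] * u_B[0] - d[0] * u_B[2],
         d[0] * u_B[1] - d[1] * u_B[0])          # w = d x u_B
    return A, best_B, chord(sides, w)

def bottom_up(sides):
    """BOTTOM-UP A, B, C for a box: simply the sorted side lengths."""
    c, b, a = sorted(sides)
    return a, b, c
```

Under these definitions even a cube gives different answers: TD yields B/A = sqrt(2/3) ~ 0.82 and C/A = sqrt(2)/2 ~ 0.71, while BU trivially gives B/A = C/A = 1.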
The axial ratios coming out from these simulations (ellipsoids cut on one side by a spherical surface) are not very different from those obtained with the TD method. Since the extension of the definition of shape, for fragments which are not ellipsoids, is not unequivocal, we wish to stress the importance of specifying explicitly the method used to analyse the results from numerical simulations or from laboratory experiments. Obviously the comparison between groups of data analysed in different ways requires some caution; nevertheless the discrepancies introduced by the TD and BU methods appear not to exceed those observed among various experiments: consequently the scenario presented in the previous section remains valid. The presence of concave fragments (which are often created in the experiments, especially in connection with spallation processes) should in principle introduce additional difficulties and uncertainties. From what we know (even if not explicitly stated anywhere) the computation of ratios usually works as if the concavities were "filled up".

5. Shapes from stochastical fragmentation: algorithm and models
We are now going to discuss the various methods and models adopted for the computation of shapes in our
Fig. 8. The differences among the ratios B/A and C/A computed according to the TOP-DOWN and BOTTOM-UP methods, for individual parallelepipeds characterized by side ratios (counted with the BU method) equally spaced. Different symbols represent cases with both side ratios smaller than 1/3 (diamonds), C/A smaller than 1/3 and B/A larger but smaller than 2/3 (+) or B/A larger than 2/3 (squares), both side ratios larger than 1/3 and smaller than 2/3 (crosses), C/A larger than 1/3 and smaller than 2/3 and B/A larger than 2/3 (triangles), or both side ratios larger than 2/3 (stars)
simulations; we will present them directly in 3-D; the 2-D equivalent cases are immediately derivable. The BU method corresponds to defining A, B and C as
the dimensions, or the sides, of a parallelepipedal fragment, and is easily implemented for parallel cutting planes. It has not been implemented for non-parallel cuttings, where its numerical treatment is rather troublesome. The TD method has been adopted for all the cases, because generalization from regular to arbitrarily irregular bodies is straightforward, both from a conceptual point of view and regarding a numerical implementation. A classical Monte Carlo method has been adopted, choosing random points inside the cube, which has been previously cut by a set of planes into several solid figures. Choosing random points within the whole volume would lead to severe inaccuracy of the results, due to the low probability of randomly generating points close to the sharp corners (which are extremely critical for an accurate computation of the size), as they correspond to a very small volume fraction. In this way one would presumably underestimate the maximum dimension A, often wrongly evaluate also B and C, and then, almost always, the ratios B/A and C/A. The problem is obviously more serious when the fracture planes are non-parallel. In the simpler model of parallel planes this problem could be overcome by dealing with each fragment separately, with a large number of random points. This simplified method is possible as a consequence of our a priori knowledge of the parallelepipedal shape of the produced fragments. Nevertheless, we adopted a general-purpose algorithm: we generated the random points on the planes cutting the
cube; those among the created points which belong to the same figure are identified by comparing their positions with respect to the cutting planes. The n equations of the cutting planes can be written in the form

ax + by + cz - d = 0    (5)

where the coefficients a, b, c, d were chosen so that the plane intersects the cube. The n sets of random points, one for each plane, were generated in this way: a point P in 3-D space is identified by three coordinates x, y, z; two of them were generated in a completely random way (obviously within the range [0, 1], consistent with the size of the cube), and the third one was computed by using equation (5). The point is discarded when it is external to the cube. Six further sets of random points were generated on the faces of the cube. We associated each generated random point with a binary string. The generic binary number has n digits, one for each cutting plane, plus six digits for the planes delimiting the cube. The ith position is set to 1 if the point x, y, z satisfies the inequality referring to the ith plane

ax + by + cz - d > 0    (6)

otherwise, if the following holds,

ax + by + cz - d < 0    (7)

a 0 is assigned to the ith position. These assignments correspond to the situation in
which the point lies on the right of the plane (equation (6)) or on the left (equation (7)). We recall that each set of points is chosen to lie on the ith plane; in this case the point P satisfies equation (5) and two strings are defined, the former having 1 in the ith position, the latter having 0. Of course the other digits are common to the two strings. At the end we have two binary strings for each internal point (lying on the cutting planes), and one for each external point (lying on the faces of the cube). For easier handling the strings are converted into decimal numbers; we can then identify points having the same number as belonging to the same figure. It is well known that within Monte Carlo algorithms the choice of the number of random points to be generated is important. For our purposes it was necessary to find the minimum number N* for which, when N > N*, the results no longer change in a significant way. In fact, the most time-consuming part of our code concerns the computation of the distances between couples of points, and even a moderate increase in the number of points causes a rapid increase in machine time. After some simulations, we found that an acceptable convergence of the mean values was obtained by generating 30,000 random points. For every figure, A is defined as the maximum distance between a couple of the points belonging to it. For the computation of B, all vectors orthogonal (with the usual scalar product in R3) to the vector corresponding to A were considered, and among them the longest one was assigned to B. C was defined in a similar way. This method overcomes most difficulties, but small problems still exist.
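The point-classification and size-estimation steps described above can be sketched as follows. This is a minimal illustration, not the authors' original code: the function names (sign_key, principal_dimensions) are ours, and a fixed orthogonality tolerance stands in for the "dynamical" threshold discussed below.

```python
import numpy as np

def sign_key(point, planes, tol=1e-9):
    """Binary string with one digit per cutting plane: '1' when
    a*x + b*y + c*z - d > 0 (point on the right of the plane,
    equation (6)), '0' otherwise (equation (7))."""
    x, y, z = point
    return ''.join('1' if a * x + b * y + c * z - d > tol else '0'
                   for a, b, c, d in planes)

def principal_dimensions(pts, ortho_tol=0.1):
    """TOP-DOWN estimate of A, B, C for one fragment's point cloud:
    A is the longest chord between two points; B is the longest chord
    nearly orthogonal to A; C is the longest chord nearly orthogonal
    to both.  ortho_tol bounds |cos| of the angle between chords
    (a fixed stand-in for the paper's dynamical threshold)."""
    pts = np.asarray(pts, dtype=float)
    i, j = np.triu_indices(len(pts), k=1)
    v = pts[i] - pts[j]                        # all chords, one per pair
    lengths = np.linalg.norm(v, axis=1)
    unit = v / lengths[:, None]
    k = np.argmax(lengths)
    A, ua = lengths[k], unit[k]
    mask_b = np.abs(unit @ ua) < ortho_tol     # chords ~orthogonal to A
    if not mask_b.any():
        return float(A), 0.0, 0.0
    kb = np.argmax(np.where(mask_b, lengths, 0.0))
    B, ub = lengths[kb], unit[kb]
    mask_c = mask_b & (np.abs(unit @ ub) < ortho_tol)
    C = np.max(np.where(mask_c, lengths, 0.0))
    return float(A), float(B), float(C)
```

For a point cloud filling the unit cube, A approaches the space diagonal (about 1.73), B a face diagonal (about 1.41), and C the longest chord roughly orthogonal to both (about 1.2); this also illustrates why the TD ratios of a parallelepiped differ from its BU side ratios, as shown in Fig. 8.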
The former is intrinsic to a Monte Carlo algorithm: random points are more numerous in larger zones, and we cannot handle very small fragments coming out of our statistical models. Sometimes only very few points (of the order of 3-10, out of a total of several tens of thousands) fall within such fragments, and hence the estimate of A, B and C is not reliable. These situations are irretrievable, as a realistic increase in the number of generated random points does not lead to any significant improvement of the results. The solution we adopted is to neglect these "pathologic" (and, fortunately, very rare) fragments. The latter problem is connected to the orthogonality condition which we employ to look for B and C: when can we agree that a given scalar product is zero? Obviously a too low threshold would get rid of too many "candidates": we adopted a "dynamical value", which was increased whenever the resulting selection was too thin, before repeating the search for the longest orthogonal vector. With the above-described algorithm we performed several simulations of stochastic fragmentation, according to the models described in Section 2. The computations were mostly performed on a RISC-6000 workstation. Concerning the "local" model introduced in Section 2, we implemented it (in 3-D) by simply choosing B/A and C/A inside a right-angled triangle of side 1 (see Fig. 9). We obtained, after some elementary computations, mean values of 2/3 for B/A and 1/3 for C/A. As is evident from geometrical considerations, both distributions of ratios are monotonic over the interval from 0 to 1, that of C/A decreasing and that of B/A increasing.
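The quoted means for the "local" model are easy to check numerically: sampling (B/A, C/A) uniformly in the triangle 0 <= C/A <= B/A <= 1 (for instance as the maximum and minimum of two independent uniform deviates) reproduces the means 2/3 and 1/3, as well as the monotonic marginal distributions. The sketch below is our own illustration, not part of the original code.

```python
import random

def sample_local_shape(rng):
    """One (B/A, C/A) pair, uniform in the triangle C/A <= B/A <= 1:
    the max and min of two independent uniform deviates."""
    u, v = rng.random(), rng.random()
    return max(u, v), min(u, v)

rng = random.Random(42)
n = 100_000
pairs = [sample_local_shape(rng) for _ in range(n)]
mean_ba = sum(ba for ba, _ in pairs) / n   # expected value: 2/3
mean_ca = sum(ca for _, ca in pairs) / n   # expected value: 1/3
```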
We have quoted this case only for completeness: it is unrealistic, and the results regarding shapes are very dissimilar to those from experiments.
6. Analysis of results

Before analysing the 3-D simulations we would like to present some results from the 2-D models, comparing the simple "parallel" and "non-parallel" models. Since, at least in the case of "parallel" cuttings, both the BU and TD methods give the same results for the shape distribution, we limit ourselves to comparing these results with those coming from the TD model (non-parallel case). Within the "clear" model, we cut a square with 40 lines. For the case of parallel cuttings (i.e. taking two groups of parallel lines, the lines belonging to different groups being perpendicular to each other) we obtained a mean value for B/A of about 0.43. In the case of non-parallel cuttings (i.e. taking 40 randomly oriented lines) the resulting mean value of B/A is smaller (about 0.37). It is interesting to note that this is in contrast to the suggestion by Grady and Kipp (1985), which states that almost square figures (corresponding to larger B/A) are more frequent in a non-parallel model than in a parallel one. We devoted a larger number of simulations to the 3-D cases. Once we had established the possible models, we performed various simulations, varying also the number of planes cutting our cube. When required, a threshold T was introduced, and the computations were repeated for different values of T. Our goal was to check whether it was possible to reproduce the shape results of laboratory fragments, that is both the mean values (⟨B/A⟩ = 0.6-0.7, ⟨C/A⟩ = 0.4-0.5) and the general features of the distributions (always peaked around their mean values, never monotonically increasing or decreasing). The results we are going to present, concerning 3-D simulations, were obtained using 27 planes, 9 per dimension, to cut a cube of side 1. When a parallel model was used, with cutting planes orthogonal to the faces of the cube, the process generated 1000 parallelepipedal fragments.
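The 2-D "parallel" case is simple enough to simulate in a few lines: cutting the unit square with randomly placed vertical and horizontal lines produces rectangular fragments, and B/A for each fragment is just the ratio of its shorter to its longer side. The sketch below is our own illustration (the paper does not specify the line-placement recipe beyond randomness); it yields a mean B/A of around 0.4, consistent in magnitude with the value of about 0.43 quoted above.

```python
import random

def parallel_2d_mean_ratio(n_cuts=20, n_trials=200, seed=1):
    """Cut the unit square with n_cuts random vertical and n_cuts
    random horizontal lines (40 lines in total, as in the text);
    every fragment is a rectangle, with B/A = min(w, h) / max(w, h)."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n_trials):
        xs = sorted([0.0, 1.0] + [rng.random() for _ in range(n_cuts)])
        ys = sorted([0.0, 1.0] + [rng.random() for _ in range(n_cuts)])
        widths = [b - a for a, b in zip(xs, xs[1:])]
        heights = [b - a for a, b in zip(ys, ys[1:])]
        for w in widths:
            for h in heights:
                total += min(w, h) / max(w, h)
                count += 1
    return total / count

mean_ba = parallel_2d_mean_ratio()
```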
For the non-parallel model, both the number and the shape of the fragments were obviously unpredictable, but the number is generally smaller. A large number of simulations has shown that increasing the number of generated fragments does not lead to significant changes in ⟨B/A⟩ and ⟨C/A⟩. In Figs 10 and 11 a general plot of the results, including all the models, is presented; different points represented by the same symbol refer to models requiring a choice of the threshold T, and correspond to the results obtained using three different values of the parameter: 0.1, 0.15, 0.2. Since the experiments give, as the most frequent result, ratios C/A and B/A for which (B/A)² = C/A approximately holds, we represent in the figures C/A vs (B/A)². The experimental points should lie close to the diagonal line, in the region where both C/A and (B/A)² range between 0.4 and 0.5. The mean values coming out of our simulations are also shown in Table 1. A careful examination of the differences between the
Fig. 9. The figure explains the "local" model. Points are arbitrarily chosen in the upper left triangle, each resulting in a couple of ratios B/A and C/A

Fig. 10. The mean values resulting from our simulations with various models, in a (B/A)², C/A plot; 3-D case, BOTTOM-UP method
TD and BU methods shows that, while for an individual fragment B/A and C/A can often vary by 0.2 from one method to the other (see Fig. 8), ⟨B/A⟩ has a maximum variation of about 0.1 and ⟨C/A⟩ of about 0.05. In the BU case, with a hierarchical with cut-off model and T set to 0.1, we can obtain values of ⟨B/A⟩ (0.73) and ⟨C/A⟩ (0.49) very close to the results for laboratory fragments. However, the corresponding model using the TD method gives a value of ⟨B/A⟩ (0.84) that is too large with respect to the experimental value.
Fig. 11. The mean values resulting from our simulations with various models, in a (B/A)², C/A plot; 3-D case, TOP-DOWN method
A comparison of results from the same model, but with different values of T, shows a range of T values (0.08-0.15) for which the BU case produces mean values rather close to the experimental ones, while larger values of T lead to exaggeratedly cubic fragments. We can consider surprisingly negative the results regarding the "shape with cut-off" model. The results do not depend, in a significant way, on the parameter T.

Table 1. Mean values from 3-D model simulations

BOTTOM-UP method:
Model                   B/A     C/A
Clear                   0.57    0.21
Hier.                   0.65    0.31
Hier.-cut (T = 0.1)     0.73    0.49
Hier.-cut (T = 0.15)    0.72    0.47
Hier.-cut (T = 0.2)     0.80    0.63
Shape-cut (T = 0.1)     0.54    0.24
Shape-cut (T = 0.15)    0.54    0.26
Shape-cut (T = 0.2)     0.52    0.29

TOP-DOWN method:
Model                   B/A     C/A
Clear                   0.63    0.19
Hier.                   0.72    0.28
Hier.-cut (T = 0.1)     0.84    0.45
Hier.-cut (T = 0.15)    0.83    0.44
Hier.-cut (T = 0.2)     0.92    0.57
Shape-cut (T = 0.1)     0.61    0.22
Shape-cut (T = 0.15)    0.61    0.25
Shape-cut (T = 0.2)     0.61    0.27
Also, both ⟨B/A⟩ and, even more evidently, ⟨C/A⟩ are too low with regard to the mean values found in laboratory experiments. This conclusion is valid for both computation methods. On the other hand, a severe increase in T would translate into an artificial second breakup, involving not very elongated fragments. We conclude that this simple model with artificial cuttings cannot give realistic results. A general glance at the plots showing the results obtained with the two methods also indicates how both the "clear" and the "non-parallel" models give results completely inconsistent with the experiments. It is disappointing to note that the extension from the parallel to the non-parallel case gives, now as in the 2-D case, even more elongated fragments, i.e. a more severe misfit. Since both of them represent a pure stochastic model, with no constraint or additional hypothesis, we can conclude that such an approach to understanding fragmentation processes has no hope at all of working. It is possible that a suitably corrected stochastic model may give a good fit: only the "hierarchical with cut-off" models gave a good fit for ⟨B/A⟩ and ⟨C/A⟩. This is not sufficient evidence to state that the model works, since we have not yet presented the distributions of B/A and C/A. Figure 12a and b shows the typical behaviour of B/A and C/A obtained in laboratory experiments: the common feature of distributions peaked around the mean values can be observed. Figure 13a and b shows the same histograms for the fragments obtained in our simulation, with the hierarchical with cut-off model (T = 0.1) and the BU method.
Fig. 12. The typical distributions of B/A (a, mean = 0.72) and C/A (b, mean = 0.48) for fragments formed in laboratory experiments (adapted from Capaccioni et al. (1984))
The distribution of C/A is still peaked around the mean value of 0.49, but the distribution of B/A does not exhibit a similar trend. The situation does not improve if we employ the TD method: the B/A distribution is simply shifted to the right, as a consequence of a larger mean value (0.84) (see Fig. 14a and b).
7. Conclusions

We performed many numerical simulations of stochastic fragmentation, extending our computations also to 3-D. The size distributions arising from our simulations exhibit a qualitative resemblance to those coming from laboratory experiments; nevertheless this similarity is not very accurate, and some coincidental features are probably affected by various selection effects, so we can come to no firm conclusion on the suitability of these models.

Fig. 13. The distributions of B/A (a, mean = 0.73) and C/A (b, mean = 0.49) for fragments obtained in our simulations (case hierarchical with cut-off, T = 0.1, BOTTOM-UP method)

The analysis of shapes, performed after a thorough discussion of the method used to define and to compute C/A and B/A, allows us to derive a sharper conclusion. It is not possible, with any simple, purely statistical model, to reproduce the features of the shape distributions; under very special (and somewhat artificial) assumptions it is possible to obtain mean values close to the experimental ones, but it is never possible to obtain distributions similar to those observed, which are always distinctly peaked around the mean values, and almost symmetric. We recall that the semiempirical approach introduced
1577
A. La Spina and P. Paolicchi : Catastrophic fragmentation as a stochastic process DISTRIBUTION 600
r
OF B/A,N=lOOO,HIERCUT.
MODEL,T=.l
(MEAN=.84)
a)
500
400
300
200
J-
100
0
0.2 DISTRIBUTION
0.4 OF
0.6
C/A,N=lOOO,HIERCUT.
0.8
1
MODEL,T=.l(MEAN=.45)
250
b)
200
150
100
50
0 0
0.2
0.4
0.6
0.8
1
Fig. 14. The distributions of B/A (a) and C/A (b) for fragments obtained in our simulations (the same case as Fig. 13, but with the TOP-DOWN model for the shape computation) by Paolicchi et al. (1989) recently improved (Verlicchi et al., 1994 ; Paolicchi et al., 1996), is able, with a suitable choice of its parameters, to reproduce many of the observed features of both size distributions and shapes. The present analysis shows that this fit cannot be the consequence of hidden stochastical properties, and cannot be obtained in terms of local properties + statistics, but must derive from a global model of fragmentation, taking
account of the overall geometry of fragmentation, as well as of the properties of the "fragmentation field" (connected with the derivative of the velocity field in the above-quoted models, and with the strain rate in the hydrodynamical models; see Melosh et al., 1992). While detailed physical models will, hopefully, in the near future explain all the properties of fragmentation, including sizes and shapes, the present work excludes the possibility of understanding these features in terms of a probabilistic phenomenon.
Acknowledgements. We are grateful, for a critical reading of a preliminary draft of the paper, to A. Cellino, P. Cerroni, P. Farinella and V. Zappalà. We are also grateful to the referees E. V. Ryan and I. Giblin. One of the authors (P.P.) has been supported by MURST, CNR, ASI and EEC funds.
References

Benz, W. and Asphaug, E., Impact simulation with fracture: I. Method and tests. Icarus 107, 98-116, 1994.
Benz, W., Asphaug, E. and Ryan, E. V., Numerical simulations of catastrophic disruption: recent results. Planet. Space Sci. 42, 1053-1066, 1994.
Capaccioni, F., Cerroni, P., Coradini, M., Farinella, P., Flamini, E., Martelli, G., Paolicchi, P., Smith, P. N. and Zappalà, V., Shapes of asteroids compared with fragments from hypervelocity impact experiments. Nature 308, 832-834, 1984.
Capaccioni, F., Cerroni, P., Coradini, M., Di Martino, M., Farinella, P., Flamini, E., Martelli, G., Paolicchi, P., Smith, P. N., Woodward, W. and Zappalà, V., Asteroidal catastrophic collisions simulated by hypervelocity impact experiments. Icarus 66, 487-514, 1986.
Cellino, A., Zappalà, V. and Farinella, P., The size distribution of asteroids from IRAS data. Mon. Not. R. Astron. Soc. 253, 561-574, 1991.
Chapman, C. R., Paolicchi, P., Zappalà, V., Binzel, R. P. and Bell, J. F., Asteroid families: physical properties and evolution, in Asteroids II (edited by R. P. Binzel, T. Gehrels and M. S. Matthews), pp. 386-415. University of Arizona Press, Tucson, 1989.
Davis, D. R. and Ryan, E. V., On collisional disruption: experimental results and scaling laws. Icarus 83, 156-182, 1990.
Davis, D. R., Weidenschilling, S. J., Farinella, P., Paolicchi, P. and Binzel, R. P., Asteroid collisional history: effects on sizes and spins, in Asteroids II (edited by R. P. Binzel, T. Gehrels and M. S. Matthews), pp. 805-826. University of Arizona Press, Tucson, 1989.
Farinella, P., Paolicchi, P., Tedesco, E. F. and Zappalà, V., Triaxial equilibrium ellipsoids among the asteroids? Icarus 46, 114-123, 1981.
Farinella, P., Paolicchi, P. and Zappalà, V., The asteroids as outcomes of catastrophic collisions. Icarus 52, 409-433, 1982.
Fujiwara, A., Energy partitioning into translational and rotational motion of fragments in catastrophic disruption by impact: an experiment and asteroid cases. Icarus 70, 536-545, 1987.
Fujiwara, A., Kamimoto, G. and Tsukamoto, A., Destruction of basaltic bodies by high-velocity impact. Icarus 31, 277-288, 1977.
Fujiwara, A., Kamimoto, G. and Tsukamoto, A., Expected shape distribution of asteroids obtained from laboratory impact experiments. Nature 272, 602-603, 1978.
Fujiwara, A., Cerroni, P., Davis, D. R., Ryan, E., Di Martino, M., Holsapple, K. and Housen, K., Experiments and scaling laws on catastrophic collisions, in Asteroids II (edited by R. P. Binzel, T. Gehrels and M. S. Matthews), pp. 240-265. University of Arizona Press, Tucson, 1989.
Giblin, I., Martelli, G., Smith, P. N., Cellino, A., Di Martino, M., Zappalà, V., Farinella, P. and Paolicchi, P., Field fragmentation of macroscopic targets simulating asteroidal catastrophic collisions. Icarus 110, 203-224, 1994.
Grady, D. E. and Kipp, M. E., Continuum modelling of explosive fracture in oil shale. Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 17, 147-157, 1980.
Grady, D. E. and Kipp, M. E., Geometric statistics and dynamic fragmentation. J. Appl. Phys. 58(3), 1210-1222, 1985.
Hartmann, W. K., Continued low-velocity impact experiments at Ames vertical gun facility: miscellaneous results. Lunar Planet. Sci. XI, 404-406, 1980.
Lange, A. and Ahrens, T. J., Fragmentation of ice by low-velocity impact. Proc. Lunar Planet. Sci. XII, 1667-1687, 1981.
Martelli, G., Ryan, E. V., Nakamura, A. M. and Giblin, I., Catastrophic disruption experiments: recent results. Planet. Space Sci. 42, 1013-1026, 1994.
Marzari, F., Davis, D. R. and Vanzani, V., Collisional evolution of asteroid families. Icarus 113, 168-187, 1995.
Melosh, H. J., Ryan, E. V. and Asphaug, E., Dynamic fragmentation in impacts: hydrocode simulation of laboratory impacts. J. Geophys. Res. 97, 14735-14759, 1992.
Mott, N. F., Ministry of Supply, AC 3642, March, 1943a.
Mott, N. F., Ministry of Supply, AC 4035, May, 1943b.
Mott, N. F., Proc. R. Soc. London A 189, 300-308, 1947.
Mott, N. F. and Linfoot, E. H., Ministry of Supply, AC 3348, January, 1943.
Nakamura, A. and Fujiwara, A., Velocity distribution of fragments formed in simulated collisional disruption. Icarus 92, 132-146, 1991.
Paolicchi, P., Rushing to equilibrium: a simple model for the collisional evolution of asteroids. Planet. Space Sci. 42, 207-211, 1994.
Paolicchi, P., Verlicchi, A. and Cellino, A., An improved semiempirical model of catastrophic impact processes. I. Theory and laboratory experiments. Icarus 121, 126-157, 1996.
Takagi, Y., Mitzutani, H. and Kawakami, S., Impact fragmentation experiments of basalts and pyrophyllites. Icarus 59, 462-477, 1984.
Verlicchi, A., La Spina, A., Paolicchi, P. and Cellino, A., The interpretation of laboratory experiments in the framework of an improved semi-empirical model. Planet. Space Sci. 42, 1031-1041, 1994.
Weibull, W., A statistical theory of the strength of materials. Ingeniörsvetenskapsakademiens Handlingar, No. 151. Stockholm, 1939.