Microelectronics Reliability 42 (2002) 779–786 www.elsevier.com/locate/microrel

Logarithmic distributions in reliability analysis

B.K. Jones *

Department of Physics, School of Physics and Chemistry, Lancaster University, Lancaster LA1 4YB, UK

Received 8 January 2002

Abstract

Real-life systems are complex, with many independent parameters that can affect the system, so their behaviour can be very variable. This is especially true of the more involved processes that occur in reliability and degradation. However, some observed characteristics can be understood simply because they are characteristic of complex systems. These include distributions that are very often logarithmic rather than uniform, log–normal failure distributions and 1/f noise. A wide variety of diverse examples is given to illustrate the common occurrence of such observations, together with the underlying unifying themes. There are several basic reasons for the origin of logarithmic distributions. One is that they arise from multiplicative processes. Another is that although basic science is often introduced as linear, with non-linear effects added as a correction, complex systems are often inherently non-linear. This produces multiplicative effects, such as harmonic generation and fractal behaviour. © 2002 Elsevier Science Ltd. All rights reserved.

1. Introduction

In introductory texts science is presented through linear systems (e.g. simple harmonic motion) and statistical results through normal (i.e. Gaussian) distributions. However, in real life we very often find logarithmic distributions, and this very different regime is usually not explained or even recognised. Here we aim to show that quantities in the real world are often logarithmically distributed and that there is no single reason for this [1]. The systems described include those from the natural world and the manmade social world, as well as the world of microelectronic systems with the statistics of their reliability and their non-ideal behaviour such as 1/f noise. There has been a realisation that there are simple mathematical ways of describing the behaviour of complex systems. These involve the ideas of self-similarity, scale invariance and fractals [2]. A general description of many of these effects has been given by Schroeder [3].

* Tel.: +44-1524-593657; fax: +44-1524-844037. E-mail address: [email protected] (B.K. Jones).

We will first describe some general observations on the behaviour of many systems in the natural and social world which illustrate that a very large number of complex systems show power law behaviour, for various reasons. We then explain that this is a result of scale invariance and why this may occur. A particular example is given of the anomalous law of first digits, named after Benford, which is a general property of some scale-invariant data sets. A particular scale-invariant phenomenon, the 1/f noise of electronic systems, is described and shown to be a result of general principles rather than of a universal origin. Finally, the reason for the frequent occurrence of log–normal distributions in complex phenomena such as failure statistics is explained.

2. Pareto and Zipf

Pareto was an economist who had an engineering training, so he was familiar with collecting and analysing data. In about 1895 he published a 'law' of income distribution [4]. This was the observation that if an individual's income is plotted against 'rank', a power law is observed. That is, the income is plotted against the number of individuals with this or a higher income.



If the data are plotted on log–log axes then a straight line with a slope of 2/3 is found. Fifty years later a similar analysis found a slope of 1/2 [5]. At each time the data were found to be fairly consistent between countries and tax regimes, and the difference between these results seems to be due simply to the development of the practice of taxation and of societies. Strictly, the law applies only to the high-income part of the population. Such a clear variation gives rise to the ability to make mathematical predictions about the effect of different economic and tax regimes, and perhaps to deduce how societies operate and can be manipulated. Since the experimental data on which this analysis is based are not very good, certainly compared with acceptable data in the physical sciences, the analysis that followed the original observation seems to have been much overdone; but that is the nature of the subject. Such a simple result also gives rise to speculation about the underlying cause of the variation. In such a complex social system much speculation is possible, and we will reserve such analysis for other systems. An interesting observation is that the statistical distribution of incomes, the basis of the integral analysis above, appears to be nearer log–normal (in income) than normal [6]. We will see that this is found elsewhere also. This early ranking analysis has also led to various tools for analysis and management. Thus a Pareto plot is a very general ranking presentation of, say, profitability by product type. This demonstrates clearly the important items and leads to simple aphorisms such as '20% of the products produce 80% of the revenue'. This is a numerical exaggeration for the distributions, which are frequently found to be of the 1/(rank) type, even if only a few products are involved. It should be noted that a ranking procedure will inevitably produce a monotonically decreasing distribution, so a power law variation is not a complete surprise. Ranking procedures are also used in reliability analysis to demonstrate the weakest link. Zipf came to use a similar analysis, but from a different direction. He was a linguist at Harvard who studied the distribution of words. The ranking of words by frequency of occurrence produces a 1/(rank) variation very exactly, and this became Zipf's law for a large data set in English. The analysis has been extended to other languages and then to many other aspects of writing and speech. Power law variations, although sometimes with different exponents, are nearly always found. Thus the result is universal, or a general law. An example is shown in Fig. 1 for the frequency of occurrence of words in (A) a James Joyce book and (B) American newspapers. Curve (C) has a slope of 1 for comparison. Here we use Zipf's original graphs, which may seem out of date, but plotting new data sets will show the same behaviour.

Fig. 1. The frequency of occurrence of words in (A) a James Joyce book and (B) American newspapers. Curve (C) has a slope of 1 for comparison [5].
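The ranking procedure itself is easy to reproduce. The short Python sketch below is illustrative only; the file name corpus.txt and the simple word-splitting rule are assumptions, not part of the original analysis. It counts word frequencies, ranks them, and estimates the log–log slope, which Zipf's law predicts to be close to 1 in magnitude for a large English text.

```python
# Minimal sketch of a Zipf rank-frequency check (hypothetical corpus file).
from collections import Counter
import numpy as np

def zipf_slope(text: str) -> float:
    # Word frequencies, sorted into rank order (most frequent first).
    counts = sorted(Counter(text.lower().split()).values(), reverse=True)
    ranks = np.arange(1, len(counts) + 1)
    # Least-squares fit of log(frequency) against log(rank).
    slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    return slope

# Usage (any sufficiently large text will do):
# print(zipf_slope(open("corpus.txt").read()))   # expect a value near -1
```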

The linguistic interest here is that the very complex social interaction which results in language has developed in a similar way in many cultures to produce very similar and well-described mathematical behaviour. What is it about the development and form of a mature language that produces this result? One reason is that the evolutionary process has produced an efficient form of coding, as enunciated in mathematical information theory. Thus frequently used words are short and infrequent words are longer. Similarly, the Morse code symbols for the frequent letters are shorter than those for the less common ones. Extensions to this argument and the addition of similar logical and natural laws of behaviour can lead us towards an explanation. It has also been shown that purely random processes produce similar results. Zipf appreciated that there could be more general application of this evolutionary process, which could give general rules of behaviour for complex social systems. In 1949 he published 'Human Behaviour and the Principle of Least Effort'. The first half of this book discusses the possible reasons underlying the linguistic observations. The second half widens the discussion to other areas. Social systems and human behaviour are governed by the application of 'least effort': it is natural that the easiest, most natural or sustainable solution is eventually reached in the evolution of any process or system. The examples given by Zipf are not always in the form of ranked numbers but may involve two variables. In most cases the analysis needs large numbers of points in the data sets. Fig. 2 shows the ranking by population of metropolitan districts in the USA in 1940; the slope is 0.98 ± 0.04. The ranking of the number of manufacturing businesses by their type (for example clothing, shoes, furniture) is shown in Fig. 3; the slope is 2/3.


Fig. 2. The ranking by population of metropolitan districts in the USA in 1940. The slope is 0.98 ± 0.04 [5].

Fig. 3. The ranking of the number of manufacturing businesses by their type, for example clothing, shoes, furniture. The slope is 2/3 [5].

An explanation of the reason for such variations would require considerable assumption and discussion, but perhaps indicates the essence of the operation of the business world. An easier analysis is perhaps possible for the data of Fig. 4. Here the number of marriages in Philadelphia is plotted against the number of blocks separating the couples. The slope is 0.82. The number of people living at a distance R increases as R, but for each member of the couple the likelihood of going a distance R decreases as R^-2, so that an R^-1 variation might be expected. In practice the population is not uniformly distributed, so a slower variation may be expected.


Fig. 4. The number of marriages in Philadelphia plotted against the number of blocks separating the couples. The slope is 0.82 [5].

A ranked income distribution, as initiated by Pareto, is shown in Fig. 5 for Germany in different years between 1926 and 1936. A slope of 1/2 is also shown. A final example is shown in Fig. 6 for the size frequency of insurance claims. The slope is 0.47. It should be noted that the data of Pareto's income variation are consistent enough that they were used some time ago to look for anomalies in the distribution of tax returns and hence to indicate tax evasion. The data in a Zipf analysis usually fit a power law better if plotted as log(y + c) vs log(x), where c is a small constant. This is to be expected. For example, the actual distribution for all incomes is likely to be nearly normal (or log–normal) but with a sharp cut-off at the low end. The analysis works only for the large-income part of the distribution.

Fig. 5. A ranked income distribution, as initiated by Pareto, shown for Germany in different years between 1926 and 1936. A slope of 1/2 is also shown [5].


Fig. 6. The frequency of insurance claims of different sizes. The slope is 0.47 [5].

For a nearly normal distribution the top 25% of the population, or rank, will cover perhaps over a third of the income scale, so for a distribution with a cut-off it could be well over half. The constant c lifts the distribution where its rise slows as it approaches the peak from above. Because of the variety of systems involved, it is obvious that the physical reason for the good power law variation must be different in each example of a Zipf analysis. One very obvious general conclusion is that large events occur infrequently. However, there must be some other underlying principles involved, and we will investigate these next.
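The offset fit described above can be checked numerically. The following Python sketch uses toy data, and the grid of trial offsets c is an arbitrary choice; it scans a small constant c and keeps the value that makes log(y + c) most linear in log(x).

```python
# Sketch of the log(y + c) adjustment: find the offset that best
# straightens the rank plot on log-log axes. Hypothetical data.
import numpy as np

def best_offset_fit(x, y, c_grid):
    best = None
    for c in c_grid:
        slope, intercept = np.polyfit(np.log(x), np.log(y + c), 1)
        resid = np.log(y + c) - (slope * np.log(x) + intercept)
        sse = float(np.sum(resid ** 2))
        if best is None or sse < best[0]:
            best = (sse, c, slope)
    return best  # (residual, offset c, power-law slope)

x = np.arange(1, 200)              # rank
y = 1000.0 / x - 4.0               # toy power law with a low-end distortion
y = np.clip(y, 1e-3, None)         # keep values positive for the logarithm
print(best_offset_fit(x, y, np.linspace(0.0, 10.0, 101)))
```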

3. Scale invariance

In the search for order and laws in complex systems there has been the realisation that much of life is scale invariant. That is, if one studies the properties of a complex system on different scales, similarities can be seen. The scale may be in distance or time. We accept without much thought that objects are translationally invariant; that is, they behave the same if they are moved spatially to a different (but similar) position. Similarly, some are rotationally invariant, either totally or with symmetry. Consider the family of bowed string instruments, from violin to bass. These are almost identical as the scale is changed to accommodate the need for different ranges of pitch, in accordance with the physics of vibrating wires. Similarly, the mammals from mouse to elephant have a general similarity as the linear scale is changed. In complex systems the fundamental similarities, rather than the small differences, are important. These objects are said to be self-similar. They are invariant to multiplicative operations. A general consequence of scale invariance is that power law variations are observed [2,7]. In the living world the reason for such self-similarity was studied by D'Arcy Thompson [8,9].

As a snail or sea shell grows it keeps the same shape by adding larger sections. This results in the shape of a logarithmic spiral, r = a^θ, which has a radial generator with a constant angular velocity and a linear velocity proportional to r. This results in self-similarity and a specific mathematical shape. Once this scaling is accepted, further analysis is possible. Thus adaptation among the range of warm-blooded animals can be understood, since the heat loss is proportional to the surface area, (dimension)^2, while the energy production is proportional to the volume, (dimension)^3. This works over three orders of magnitude, where (energy dissipated) ∝ (body mass)^(2/3). As an example of a modification to the simple scale invariance, the elephant's legs have to become thicker and straighter than those of smaller quadrupeds because the strength of the bone is fixed, while the weight increases as the volume and the leg cross-section as the area. In a more complex and less regular system, the branching of the limbs of a tree from the main branches to the twigs has regularity with scale. The scale dependence is expressed as fractals [3]. In this example the scaling is derived from the requirements of uniform flow of sap through the plant, together with the needs of strength and rigidity and the basic properties of that particular species. It has been found that some systems will naturally adopt fractal and scale-invariant properties. This is self-organised criticality [3,10], and the best known example is the frequency and size of the 'landslides' which occur as sand is run uniformly on to the top of a sand pile. Power laws are observed for many properties in the approach to the critical point in self-organised systems such as a phase change. The power exponent depends only on the general class of the system and is independent of the particular system; that is, all liquid–gas transitions behave in the same way in this respect [2]. The basic reason for the scale invariance is different for each system, but each implies a wide range of characteristic dimensions or times. The wide range often results from processes derived from the multiplication of several steps, rather than their addition, or from a process governed by a variable which has a wide distribution and occurs as an exponent.
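The self-similarity of the logarithmic spiral above can be verified directly: advancing θ by a fixed step multiplies r by a constant factor, so rotation is equivalent to a pure change of scale. A minimal Python check follows; the base a = 1.2 and the step size are arbitrary choices for illustration.

```python
# Numeric check that the logarithmic spiral r = a**theta is self-similar:
# each fixed rotation step scales the radius by the same constant factor.
import numpy as np

a = 1.2
theta = np.linspace(0.0, 8.0 * np.pi, 9)   # nine points, step pi each
r = a ** theta
ratios = r[1:] / r[:-1]                    # growth factor per rotation step
print(ratios)                              # constant: a**pi for every step
assert np.allclose(ratios, ratios[0])
```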

4. Benford

Benford's 'law of first digits' has a history over very many decades and has produced a literature which is remarkable in that it shows a lack of understanding that the law is fundamental and general rather than specific to the properties of a particular data set. A large data set is chosen and the relative frequency of occurrence, P(n), of the first digit, n, is counted. Of the nine possibilities (1–9) it is found that the distribution is heavily weighted towards the lower values and follows the law


P(n) = log(1 + n) − log(n) = log(1 + 1/n)    (1)

This distribution is also that of the lengths of the scales on slide rules, and it was observed very early that it corresponded to the amount of usage of the pages in printed tables of logarithms. This result is very surprising when first encountered, and a basic lack of belief has enlarged the extensive literature. A uniform distribution between the digits is so widely expected that a wager on the result can bring reward to the expert. The initial digit is 130% (0.699/0.301) more probable to be among the four digits 1–4 than the five digits 5–9, and even 50% (0.602/0.398) more probable to be 1–3 than 4–9 [11,12]. Not every data set will do. It has to be approximately scale invariant. That is, it must extend over many decades and be 'natural', so that any particular number within the set has an equal probability of occurring in any decade and has no other bias. Thus lengths of rivers, areas of oceans, lakes and ponds, resistivities of elements, dielectric constants of substances, populations of towns, incomes of countries, etc. are all acceptable. Examples of unacceptable data sets are telephone numbers, which are constrained to a fixed length and hence tend to a uniform probability distribution of the first digit, and street numbers, which are usually biased towards the smaller numbers because of the short street lengths. Because the result is general and independent of the particular data set, it must be a consequence of pure mathematics and scale invariance. A host of theorems stem from the basic observation. The distribution remains valid under multiplication of each term by the same factor, or under transposition into another number system with a different base. A uniform distribution among the digits rapidly approaches the Benford distribution after successive multiplications of the set by numbers, and once this distribution has been attained it is stable. On reflection these are obvious requirements for a universal law, and they lead to the derivation of the distribution [13–15]. One way of generating a data set with suitable properties is to perform successive random divisions of a mass of a substance. This multiplicative (divisive) process produces a set of particles with an appropriate distribution of size [16,17]. As with the Pareto income analysis, the universality of the result has been used to discover financial fraud, since a fraudster may not realise what distribution of first digits would be expected in a series of artificial account entries and may adjust the figures to give a uniform distribution, which then stands out [18]. Note that P(n) = log(1 + n) − log(n) = log(1 + 1/n) tends to 1/n for n ≫ 1. This is seen in Fig. 7, where the Benford distribution is plotted on log–log scales together with a line with a 1/n variation.

Fig. 7. Benford probability distribution of the first significant digit, with a line of 1/n variation.
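The convergence under multiplication noted above is easy to demonstrate numerically. The Python sketch below (the sample size, number of multiplications and multiplier range are arbitrary choices) starts from numbers whose first digits are roughly uniform and multiplies them repeatedly by random factors; the first-digit histogram approaches Benford's law.

```python
# Sketch: successive random multiplications drive first digits to Benford.
import numpy as np

rng = np.random.default_rng(0)

def first_digits(x):
    # Leading significant digit of each (positive) value.
    exponents = np.floor(np.log10(x))
    return (x / 10.0 ** exponents).astype(int)

x = rng.uniform(1.0, 10.0, 100_000)       # roughly uniform first digits
for _ in range(10):                       # successive random multiplications
    x *= rng.uniform(0.5, 5.0, x.size)

observed = np.bincount(first_digits(x), minlength=10)[1:] / x.size
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
print(np.round(observed, 3))              # close to the Benford values
print(np.round(benford, 3))
```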

For the Benford analysis to be appropriate, the basic requirement is that the numbers have an equal probability of occurring in any decade (or similar span in a system with another base). This also requires that the distribution is unchanged by most simple mathematical operations [14,15]. This results in a probability distribution P(x) ∝ 1/x, which is a special case of the Zipf distributions [13,16]. In the general case of a P(x) ∝ 1/x^a distribution, the distribution of first digits becomes [1/(1 − a)][(n + 1)^(1−a) − n^(1−a)] for a ≠ 1. For another special case, a = 2, the distribution of first digits becomes 1/n^2. The Zipf ranking law P(R) ∝ 1/R, where R is the rank, corresponds to P(x) ∝ 1/x^2. In general, if P(x) ∝ 1/x^a then P(R) ∝ 1/R^(1/(a−1)), for a > 1. Note that the distribution has to be limited at the high end for ranking purposes. We thus see that scale invariance can give the purely mathematical result of Benford's law, although specific reasons are needed to determine why scale invariance exists, even though it is very common in complex systems.
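The general first-digit formula quoted above can be tabulated directly. A small Python sketch follows; normalising the probabilities over the digits 1–9 is assumed.

```python
# First-digit distribution for P(x) proportional to 1/x**a:
# p(n) ~ [(n+1)**(1-a) - n**(1-a)] / (1-a) for a != 1,
# with Benford's log law as the a = 1 limit.
import numpy as np

def first_digit_dist(a):
    digits = np.arange(1, 10)
    if a == 1.0:
        p = np.log10(1.0 + 1.0 / digits)   # Benford limit
    else:
        p = ((digits + 1.0) ** (1.0 - a) - digits ** (1.0 - a)) / (1.0 - a)
    return p / p.sum()

print(np.round(first_digit_dist(1.0), 3))  # Benford
print(np.round(first_digit_dist(2.0), 3))  # = 1/(n(n+1)), close to 1/n**2
```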

5. 1/f noise

Electrical noise is the random fluctuation within an electrical system. Johnson, kT or thermal noise is a consequence of the thermal agitation within the lossy part of a resistor and is well understood. Another understood noise is the shot noise fluctuation in a current due to the random passage of the discrete, quantised charge packets. These both have a white spectral density, which has the same intensity in each Hz interval at any frequency. Excess noise is a resistance fluctuation which depends on the physical properties of the material of the electronic component and is hence a diagnostic tool with which to investigate the properties of the material.


It comes in two types: generation–recombination (g–r) noise, which has a Lorentzian spectral density with a characteristic time, τ, and 1/f noise, which has a spectral density varying as 1/f^γ with γ near 1. The g–r noise is understood and is a random change in the resistivity due to the random trapping and de-trapping of free carriers. The 1/f noise is generally believed to have the same basic origin, but it occurs very generally in many diverse systems, so that a more general explanation is sought by some. We will demonstrate here that a generally applicable mechanism is not required, since the spectral shape is a natural consequence of scale invariance, which, as we have seen, is quite common. Excess noise is basically caused by imperfections or defects in the sample and is hence a good indicator of quality or reliability. For the physical systems in which a cause has been identified it is apparent that there is a very wide variety of sources [19–21]. One of the reasons that progress in understanding 1/f noise has been so slow is that it is featureless. It is invariant under translation on both the intensity and frequency axes, so that one cannot tell whether a change in an experimental variable has altered the size or the rate. Its form is scale invariant [17,22–25]. It contains the same 'energy' per decade of frequency, whereas white noise has the same energy per Hz. The property of the 1/f spectrum that it has equal probability (energy) in each decade is a complete parallel with the need for a 1/x distribution in order for the Benford law to appear, as we saw above. It is this property which suggests that wavelet transformations, which use a geometrical frequency sum, might be a more appropriate analysis method than the Fourier transform, which uses a linear sum of frequencies [26]. An obvious property is that large fluctuations occur infrequently. A commonly used model to generate 1/f noise is an appropriately weighted summation of spectra with characteristic times. This produces the required invariance if the range of characteristic times is large. For a basis spectrum with a Lorentzian spectral density, such as that of g–r noise, a weighting g(τ) ∝ 1/τ is needed. The Lorentzian itself can be generated in various ways, such as with a random set of exponentially decaying pulses or the statistics of a random telegraph two-level system. This particular weighting over the experimentally required wide range of characteristic times is not very special, and is obtained readily if the characteristic time is controlled exponentially by a parameter which has an approximately uniform distribution. Examples are the thermally activated de-trapping of carriers, where τ ∝ exp(E/kT) and there is a uniform distribution of activation energy, E, among the traps, and quantum mechanical tunnelling through a barrier with a uniform distribution of barrier thickness, h, where the tunnelling rate varies as exp(−Ch).
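The weighted-superposition argument can be checked numerically: drawing characteristic times log-uniformly (equivalent to g(τ) ∝ 1/τ) and summing Lorentzians gives a spectrum close to 1/f over the corresponding frequency range. A Python sketch follows; the range of τ and the number of samples are arbitrary choices.

```python
# Sketch: a 1/tau-weighted sum of Lorentzians produces a ~1/f spectrum.
import numpy as np

rng = np.random.default_rng(1)
# log-uniform tau is equivalent to a g(tau) ~ 1/tau weighting
taus = np.exp(rng.uniform(np.log(1e-4), np.log(1e2), 2000))
f = np.logspace(-1, 2, 200)        # frequencies well inside the 1/tau range

S = np.zeros_like(f)
for tau in taus:
    # g-r-like Lorentzian: tau / (1 + (2*pi*f*tau)**2)
    S += tau / (1.0 + (2.0 * np.pi * f * tau) ** 2)

slope, _ = np.polyfit(np.log(f), np.log(S), 1)
print(slope)                        # close to -1, i.e. a 1/f spectrum
```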

It should be noted that the uniform range in the exponent need not be large to obtain many decades in the characteristic time; also, the exact requirement on the form of the distribution is not strong, since a range of frequency exponents, γ, is acceptable in 1/f noise. It can be seen that a normal distribution of the trap depth or barrier thickness leads to a log–normal distribution in τ. If the width, or deviation, of the distributions is large, this leads to a 1/τ variation in the distribution over a wide range, and hence to 1/f noise. This log–normal distribution is more satisfying than the simple 1/τ variation, since it has natural limits at both ends which remove the logarithmic singularities that are characteristic of an infinite-range 1/f variation [17]. The unusual frequency exponent of the 1/f noise has attracted a large number of statistical models. Their basic characteristic is that they are multiplicative [17,27,28]. Consider a random train of pulses. Carson's theorem gives

S(f) ∝ T ∫_0^∞ P(τ) A²(τ) |F(ω; τ)|² dτ    (2)

where T is the average total rate of random perturbations per unit time, P(τ) is the normalised probability of occurrence of an event with characteristic time τ, and F(ω; τ) is the Fourier transform of the time pulse H(t; τ). The spectrum is thus made up from a multiplication process. A 1/f spectrum is obtained at low frequencies from a uniform array of pulses if the pulse shape is t^(−1/2) [24]. The t^(−1/2) variation suggests a diffusion process, and many models of this sort have been developed. The wide range of frequencies over which the spectrum is smoothly varying, or the wide range of characteristic times, is given here by the slow decay of the transient together with the random occurrence of the events. Alternatively, for 'any reasonable perturbation' the 1/f spectrum is obtained [27] if the distribution of the probability and intensity with characteristic time is appropriate, P(τ)A²(τ) ∝ τ^(−2). In general, impulses must decay with time. The resulting spectrum therefore must have a steeper decrease at higher frequencies than at lower frequencies, and for this intensity spectrum the exponents must be even ordered. Thus a Lorentzian spectrum is white at low frequencies and drops off quadratically at high frequencies. The weighting of the spectra of the individual events, based on their characteristic time, is therefore arranged to compensate for any deviation of the spectrum from the desired 1/f variation. Another possible variable to influence the spectrum of pulses is to invoke correlation between pairs of pulses [28].
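The weighting in Eq. (2) can likewise be checked by direct numerical integration, taking |F(ω; τ)|² as a Lorentzian τ²/(1 + (ωτ)²) and P(τ)A²(τ) ∝ τ^(−2). This is an illustrative sketch under those assumptions (SciPy is assumed available), not the paper's own calculation.

```python
# Numeric check: the tau**-2 weighting of Lorentzian pulse spectra in
# Eq. (2) integrates to a spectrum close to 1/f within the tau range.
import numpy as np
from scipy.integrate import quad

def S(f, tau_min=1e-4, tau_max=1e2):
    omega = 2.0 * np.pi * f
    # P(tau)*A(tau)**2 ~ tau**-2 times the Lorentzian tau**2/(1+(omega*tau)**2)
    integrand = lambda tau: tau ** -2 * tau ** 2 / (1.0 + (omega * tau) ** 2)
    value, _err = quad(integrand, tau_min, tau_max, limit=200)
    return value

f = np.logspace(-1, 2, 20)
spec = np.array([S(fi) for fi in f])
slope, _ = np.polyfit(np.log(f), np.log(spec), 1)
print(slope)   # close to -1
```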


Another multiplicative model is based on intermittency. A particular case is the suppression of a random g–r noise source by a slower random two-level (on–off) signal which also has a Lorentzian spectrum [29]. Another possible class of models uses a well-established source of noise, such as white thermal or shot noise, which is then filtered in some way to obtain the required spectrum. Since filtering acts on the amplitude, the resultant intensity has an even-powered spectrum. In order to obtain a spectrum such as 1/f over a wide frequency range, the parameters of the filter need to be wide ranging, so that there is again some weighting, with the characteristic time, of the filtering process. The filtering can best be performed by a modification to the processes producing the white noise, such as extra terms in the diffusion equation or correlation between pulses. Many examples have been quoted of 1/f noise in non-electronic systems. These are very varied and include spectral analysis of music [30] and spatial spectral analysis of art and landscapes [31]. In these analyses the music or art is treated as a random pattern. The landscapes chosen are those that are considered pleasing to man, although real landscapes are also generated by natural processes which produce scale-invariant distributions and so may be related to 1/f noise in a different way. These manmade systems suggest that the human mind likes a wide distribution of scales. An interesting observation has been made that the fluctuations in the natural world which affect population dynamics and species extinction are likely to have a broad range of time constants, as with 1/f noise [32]. Another example appropriate to the present discussion is the observation that the insulin uptake data of an unstable diabetic have a 1/f spectrum. The probability distribution of the doses which generate the spectrum has what appears to be a log–normal distribution [33].

6. Log–normal distributions

A good description of the properties and use of the log–normal distribution is given in the NIST Handbook [34]. Simple error distributions are normal, i.e. Gaussian. These arise because they are derived from several independent additive error sources. This distribution is most often used in analysis because it is taught in elementary error analysis, which is usually applied to linear systems that are well behaved in this respect. However, log–normal distributions are very frequently observed in real systems [35]. Note that for a small standard deviation the log–normal looks like a normal distribution, so the normal distribution is often used out of familiarity. If the data are really log–normal, the normal plot will appear slightly skewed. A distribution may be log–normal for various different reasons; it depends on the details of each system.


The essential feature is that it is generated by multiplicative random processes [36]. That is, a disturbance multiplies the quantity by a certain amount (the logs add), so that the quantity is normal in log x. The log–normal distribution is the stable shape for a succession of random multiplicative operations, as the normal distribution is for random additive processes. As we saw earlier, if a quantity depends on an exponent which has a normal distribution, then the quantity has a log–normal variation. The variance, or width, tends to increase with time for a random system, or with the addition of more random error terms, so that the normal distribution becomes more uniform and the resulting log–normal distribution tends to uniform in log(x). Thus

∫ P(log x) d(log x) = C ∫ d(log x) = C ∫ (1/x) dx    (3)

which has the appropriate variation needed for a Benford distribution.
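The chain of reasoning in Eq. (3) can be illustrated in a few lines: a product of many independent positive factors is log–normal, its logarithm is normal, and as the spread grows the first digits approach the Benford distribution. A Python sketch follows; the factor range, number of steps and sample size are arbitrary choices.

```python
# Sketch: products of random positive factors -> log-normal -> Benford digits.
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_samples = 50, 100_000
# Each sample is a product of n_steps independent positive factors.
x = np.prod(rng.uniform(0.5, 2.0, (n_steps, n_samples)), axis=0)

logs = np.log(x)
print(logs.mean(), logs.std())      # log(x) is approximately normal

# Leading significant digits of the products.
digits = (x / 10.0 ** np.floor(np.log10(x))).astype(int)
observed = np.bincount(digits, minlength=10)[1:] / n_samples
print(np.round(observed, 3))        # close to log10(1 + 1/n)
```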

7. Conclusions

We have seen that scale invariance, or self-similarity, occurs everywhere and is natural. It occurs for different reasons in different systems, but the reasons are all basically due to multiplicative processes. Scale invariance in a system is a primary requirement for a logarithmic distribution. It is better to confirm the existence of scale invariance and to establish its cause than to be concerned about the details of the noise/error distribution. In this respect 1/f noise is less general than scale invariance.

References

[1] Jones BK. In: Abbott D, Kish LB, editors. Proceedings of the International Conference on Unsolved Problems of Noise and Fluctuations (UPON), Adelaide, 1999. Bristol: IOP Publishing; 2000. p. 115–23.
[2] Wiesenfeld K. Resource letter: ScL-1: scaling laws. Am J Phys 2001;69:938–42.
[3] Schroeder M. Fractals, chaos, power laws. New York: W.H. Freeman and Co.; 1992.
[4] Pareto V. Cours d'economie politique. Lausanne; 1897.
[5] Zipf GK. Human behaviour and the principle of least effort. New York and London: Hafner Publishing Co.; 1949 [reprinted 1965].
[6] Mandelbrot B. The Pareto–Levy law and the distribution of income. Int Econom Rev 1960;1:79–106.
[7] Troll G, beim Graben P. Zipf's law is not a consequence of the central limit theorem. Phys Rev E 1998;57:1347–55.
[8] Thompson DW. On growth and form. Cambridge: Cambridge University Press; 1961.


[9] Ball P. The self-made tapestry. Oxford: Oxford University Press; 1999.
[10] Turcotte DL. Self-organised criticality. Rep Prog Phys 1999;62:1377–429.
[11] Raimi RA. The peculiar distribution of first digits. Sci Am 1969;121:109–22.
[12] Lines ME. A number for your thoughts. Bristol: Adam Hilger; 1986.
[13] Pietronero L, Tosatti E, Tosatti V, Vespignani A. Explaining the uneven distribution of numbers in nature: the laws of Benford and Zipf. Physica A 2001;293:297–304.
[14] Turner PR. The distribution of leading significant digits. IMA J Numer Anal 1982;2:407–12.
[15] Turner PR. Further revelations on L.S.D. IMA J Numer Anal 1984;4:225–31.
[16] Lemons DS. On the number of things and the distribution of first digits. Am J Phys 1986;54:816–7.
[17] West BJ, Shlesinger MF. On the ubiquity of 1/f noise. Int J Modern Phys B 1989;3:795–819.
[18] Matthews R. The power of one. New Scientist, 10 July 1999.
[19] Weissman MB. 1/f noise and other slow, nonexponential kinetics in condensed matter. Rev Mod Phys 1988;60:537–71.
[20] Kogan Sh. Electronic noise and fluctuations in solids. Cambridge: Cambridge University Press; 1996. p. 203.
[21] Jones BK. Electrical noise as a measure of quality and reliability in electronic devices. Adv Electron Electr Phys 1993;87:201–57.
[22] Machlup S, Hoshiko T. Scale invariance implies 1/f spectrum. In: Second International Symposium on 1/f Noise, Orlando, March 1980.
[23] Mandelbrot BB, Wallis JR. Noah, Joseph, and operational hydrology. Water Resour Res 1968;4:909–18.
[24] Buckingham MJ. Noise in electronic devices and systems. Chichester: Ellis Horwood Ltd.; 1983. p. 155.
[25] Keshner MS. 1/f noise. Proc IEEE 1982;70:212–8.
[26] Wornell GW. Wavelet-based representations for the 1/f family of fractal processes. Proc IEEE 1993;81:1428–50.
[27] Halford D. A general mechanical model for f^α spectral density random noise with special reference to flicker, 1/f, noise. Proc IEEE 1968;56:251–9.
[28] Heiden C. Power spectrum of stochastic pulse sequences with correlation between the pulse parameters. Phys Rev 1969;188:319–26.
[29] Gruneis F. 1/f noise and intermittency due to diffusion of point defects in a semiconductor material. Physica A 2000;282:108–22.
[30] Voss RF, Clarke J. 1/f noise in music: music from 1/f noise. J Acoust Soc Am 1978;63:258–63.
[31] Peitgen HO, Saupe D, editors. The science of fractal images. Springer; 1988.
[32] Halley JM, Inchausti P. 1/f noise: an appropriate stochastic process for ecology. In: Proceedings of the 16th International Conference on Noise in Physical Systems and 1/f Noise (ICNF 2001), Gainesville, 2001. Singapore: World Scientific; 2001. p. 797–800.
[33] Campbell MJ, Jones BW. Cyclic changes in insulin needs of an unstable diabetic. Science 1972;177:889–91.
[34] NIST/SEMATECH Engineering Statistics Internet Handbook. http://www.itl.nist.gov/div898/handbook/.
[35] Limpert E, Stahel WA, Abbt M. Log–normal distributions across the sciences: keys and clues. Bioscience 2001;51:341–52.
[36] Holden AJ, Allen RW, Beasley K, Parker DR. Death by a thousand cuts. Qual Reliab Eng Int 1988;4:247–54.

[23] Mandelbrot BB, Wallis JR. Noah, Joseph, and operational hydrology. Water Res Res 1968;4:909–18. [24] Buckingham MJ. Noise in electronic devices and systems. Chichester: Ellis Horwood Ltd.; 1983. p. 155. [25] Keshner MS. 1=f noise. Proc IEEE 1982;70:212–8. [26] Wornell GW. Proc IEEE 1993;81:1428–50. [27] Halford D. A general mechanical model for f a spectral density random noise with special reference to flicker, 1=f , noise. Proc IEEE 1968;56:251–9. [28] Heiden C. Power spectrum of stochastic pulse sequences with correlation between the pulse parameters. Phys Rev 1969;188:319–26. [29] Gruneis F. 1=f noise and intermittency due to diffusion of point defects in a semiconductor material. Physica A 2000; 282:108–22. [30] Voss RF, Clarke J. 1=f noise in music: music from 1=f noise. J Acoust Soc Am 1978;63:258–63. [31] Peitgen HO, Saupr D, editors. The science of fractal images. Springer; 1988. [32] Halley JM, Inchausti P. 1=f noise: an appropriate stochastic process for ecology. In: Proceedings of the 16th International Conference ‘Noise in physical systems and 1=f noise’ (ICNF 2001), Gainsville, 2001. Singapore: World Scientific; 2001. p. 797–800. [33] Campbell MJ, Jones BW. Cyclic changes in insulin needs of an unstable diabetic. Science 1972;177:889–91. [34] NIST/SEMATECH Engineering Statistics Internet Handbook. http://www.itl.nist.gov/div898/handbook/. [35] Limpert E, Stahel WA, Abbt M. Log–normal distributions across the sciences: keys and clues. Bioscience 2001;51: 341–52. [36] Holden AJ, Allen RW, Beasley K, Parker DR. Death by a thousand cuts. Qual Reliab Eng Int 1988;4:247–54.