Journal of Geodynamics 43 (2007) 248–261

A new approach to the acquisition of potential field data

K.A. Galybin a,∗, F. Boschetti b, M.C. Dentith a

a School of Earth and Geographical Sciences, The University of Western Australia, Australia
b Exploration and Mining, CSIRO, Western Australia

∗ Corresponding author. Tel.: +61 8 6488 1873; fax: +61 8 6488 1178. E-mail address: [email protected] (K.A. Galybin).

doi:10.1016/j.jog.2006.09.006

Received 30 September 2005; received in revised form 26 July 2006; accepted 18 September 2006

Abstract

This study presents a new strategy for the design of geophysical surveys based on non-uniform sampling across a survey area. The strategy comprises several stages. The first stage is the design and implementation of an initial, large station spacing, survey that defines the geophysical response in areas where there is little variation, and also allows identification of areas of interest where more data are required. The station spacing in this first-stage survey can be selected based on the probability of detecting areas of interest of a given size. Using a gravity survey as an example, areas of interest are identified within the first-stage dataset based on the occurrence of variations likely to be due to source 'edges'. The data are used to prepare a source edge density map, thus identifying the regions within the study area that require further sampling. The density of edges is used to design the next phase of data acquisition, station/sampling frequency being proportional to edge density. As a result, a non-uniform sampling array is produced. If necessary the process can be repeated based on an edge-density analysis of the combined first- and second-stage data. To improve the logistics of data collection for this non-uniform sampling array, a simulated annealing algorithm is used to determine a close-to-optimal travel path for the collection of the data. This strategy results in data being acquired in areas where it provides the most information, producing an accurate dataset, whilst minimising the cost of collecting the data.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Data acquisition; Potential fields; Sampling interval; Gravity

1. Introduction

Ideally, the number and location of measurements comprising a geophysical survey depend only on the intended outcomes of the survey. In practice, logistical factors and cost are of equal importance. The intended outcomes of a survey can be conveniently described in terms of two end-members. Firstly, there is the 'detection' objective, where the aim is to identify areas of interest. In designing surveys of this type the problem is one of probabilities: for a given distribution of measurements, what are the chances of detecting a response with some defined dimensions? The alternative survey objective is to 'delineate' any geophysical responses. Delineation requires that the responses in areas of interest are accurately mapped in terms of their gradients, spatial-frequency content, etc.




Proper delineation is essential if a response is to be "modelled" to determine, for example, the depth and orientation of the source. Achieving this requires consideration of sampling theory.

In a resource-exploration context, the areas of interest are those whose characteristics are consistent with the geological environment thought to host natural resources. In the case of data acquired to map the local geology in general, the areas of interest are those where the geophysical response is most complex and where more data are needed to understand the underlying geology. In common geophysical terminology, areas of interest when mapping are where the residual component of the data is greatest; in other areas regional variations predominate.

The delineation objective requires a much greater sampling density than the detection objective. The cost of a survey is roughly proportional to the number of samples taken, so economic considerations dictate that, early in the investigation of a region, large-area, low-resolution reconnaissance surveys are undertaken to identify areas of potential interest (detection objective). Surveys of smaller extent but higher resolution are then used to follow up responses of potential significance, which may then partly or fully fulfil the delineation objective. This leads to an overall non-uniform sampling of the exploration area, with widespread, relatively sparse sampling supplemented by denser sampling in selected areas. Moreover, this sampling is achieved through a series of separate phases of data acquisition, each with its own crew-mobilisation costs.

The most advantageous use of the available budget would occur if the samples were optimally placed and acquired in effectively one phase of data acquisition. For example, the reconnaissance survey would use the minimum sampling density necessary to define regional variations and also identify areas of interest, and based on these data the remainder of the budget would be assigned to measurements designed to better define responses in the areas of interest. Identifying these features as the reconnaissance data become available would enable the reconnaissance and follow-up surveys to be carried out effectively simultaneously, saving time and money.

Here we present a novel approach to geophysical survey design and data acquisition that combines elements of both reconnaissance/detection and follow-up/delineation surveys and which aims to optimise the number and spatial distribution of the measurements within the constraints of a given budget. The method considers both the most useful locations for the individual samples and the costs associated with the measurement itself and the travel between measurement sites. The principles of the method are applicable to any kind of sample-based survey, but in its current form it is most relevant to ground geophysical methods, in particular gravity surveys.

2. Survey specifications

Geophysical data are normally acquired along parallel, equally spaced, straight-line traverses, with along-line measurements taken at stations spaced at equal intervals. The station interval is often many times smaller than the spacing between the traverses, here called the line spacing. The survey traverses are usually oriented parallel to the direction of maximum variation in the measured parameter, which is expected to be perpendicular to the local geological strike.
The lesser variation expected parallel to the strike direction justifies the larger sample spacing in this direction.

2.1. Criteria for detection

In the absence of detailed a priori knowledge of the local geology the criteria for detecting areas of interest must be probabilistic. Sambuelli and Strobbia (2002) describe how to determine the probability (P) that a rectangular target will be sampled a specified number of times by a uniform grid of measurements (Eq. (1)):

P = \frac{1}{\pi/2} \int_{\theta_1}^{\theta_2} \max\left(0, \min\left(1, \frac{L\cos\theta + l\sin\theta - 2S\sin\theta\cos\theta - (n-1)d}{d}\right)\right) \mathrm{d}\theta,
\qquad \theta_1 = \cos^{-1}\left(\min\left(1, \frac{l}{S}\right)\right), \quad \theta_2 = \sin^{-1}\left(\min\left(1, \frac{L}{S}\right)\right)    (1)

where l, L are the lengths of the short and long sides of the rectangle, respectively, n the number of survey lines, d the survey line spacing, S the minimum length of intersection of the survey lines with the target and θ is the angle between the long side of the rectangle (L) and the perpendicular to the survey line direction.


Fig. 1. Definition of the parameters required to determine the probability of an ellipse being intersected by two parallel lines of length at least S.

A rectangle is an unlikely shape for a geophysical anomaly and Galybin and Dentith (in press) derive an equivalent expression for an elliptical target. The elliptical target has a major axis 2a and minor axis 2b, and detection is based on intersection by n line segments of length at least S during a survey with line spacing d. The effective width of an ellipse rotated by θ (Fig. 1), with respect to two line segments of length S, is given by

H_s = \left[ 4a^2 + 4(b^2 - a^2)\cos^2\theta - \frac{S^2\left(a^2\tan^2\theta + b^2\right)^2}{a^2 b^2 \sec^4\theta} \right]^{1/2}    (2)

and the probability that n lines, a distance d apart, intersect the ellipse is given as

P(n, d) = \frac{1}{\pi/2} \int_{\theta_1}^{\theta_2} \max\left(0, \min\left(1, \frac{H_s - (n-1)d}{d}\right)\right) \mathrm{d}\theta,
\qquad \theta_1 = \cos^{-1}\left(\min\left(1, \frac{l}{S}\right)\right), \quad \theta_2 = \sin^{-1}\left(\min\left(1, \frac{L}{S}\right)\right), \quad 2a = L, \ 2b = l    (3)

The overlap parameter S is important because it represents the amount of sampling required to constitute detection: there is a danger that one, or even several, anomalous measurements will be dismissed as noise. The choice of S is thus a function of the perceived data quality. Also to be borne in mind is that the target anomaly covers a larger area than its source. In the case of potential field data, the areal extent of the anomaly increases with the distance between the source and the detector, albeit with a decrease in the amplitude of the response. Data acquired at a greater source-detector separation will therefore contain responses that cover a larger area, but there is a greater risk of the anomalous responses being indistinguishable from the background noise.

Eq. (3) can be implemented numerically using any available mathematical package (MatLab, Maple, Mathematica, MathCad, Octave, etc.); a minimal sketch of such an implementation is given below. Fig. 2 shows the likelihood of detecting elliptical anomalies with major axes (2a = L) of 1000, 500, 250 and 125 m and major to minor axis ratios of 1:1, 2:1 and 4:1. The data in Fig. 2 are appropriate for aeromagnetic data collected using a fixed-wing aircraft. It is assumed that the station interval is 5 m and that at least five readings (corresponding to an overlap (S) of 20 m) on a single line are required to produce a recognisable response. The commonly used reconnaissance line spacing of 400 m is shown to have about a 31% chance of resulting in a recognisable response from a circular target with a diameter of 125 m. Doubling the size of the target doubles the probability of detection. The probability of detecting elongate targets is, as expected, lower than that of circular targets of equivalent maximum dimension.
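As an illustration, the following Python sketch evaluates Eq. (3) by numerical quadrature. SciPy is assumed to be available; the function name and argument handling are ours rather than part of the original method. The integrand uses an algebraically simplified form of Eq. (2), in which the tan and sec terms cancel, to avoid numerical problems near θ = π/2.

import numpy as np
from scipy.integrate import quad

def detection_probability(n, d, L, l, S):
    """Probability (Eq. (3)) that n survey lines, spaced d apart, each cross an
    elliptical target (major axis L = 2a, minor axis l = 2b) over a length of
    at least S.  All lengths must be in the same units."""
    a, b = L / 2.0, l / 2.0
    theta1 = np.arccos(min(1.0, l / S))
    theta2 = np.arcsin(min(1.0, L / S))

    def integrand(theta):
        s, c = np.sin(theta), np.cos(theta)
        # Simplified form of Eq. (2): the S^2 (a^2 tan^2 + b^2)^2 / (a^2 b^2 sec^4)
        # term reduces to S^2 (a^2 sin^2 + b^2 cos^2)^2 / (a^2 b^2).
        hs = np.sqrt(4.0 * a**2 + 4.0 * (b**2 - a**2) * c**2
                     - S**2 * (a**2 * s**2 + b**2 * c**2)**2 / (a**2 * b**2))
        return max(0.0, min(1.0, (hs - (n - 1) * d) / d))

    integral, _ = quad(integrand, theta1, theta2)
    return integral / (np.pi / 2.0)

# Fig. 2 example: a single line at 400 m spacing (n = 1), a circular target of
# diameter 125 m and an overlap S of 20 m gives a probability of about 0.31.
print(detection_probability(n=1, d=400.0, L=125.0, l=125.0, S=20.0))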


Fig. 2. Probability of detecting an ellipse encased by a rectangle. Different ellipse aspect ratios are shown: 1:1, 2:1 and 4:1. The long axis 2a = {125, 250, 500, 1000} m.

2.2. Criteria for delineation

As with any scenario where a continuously varying parameter is sampled at discrete points, an important consideration is the well-known phenomenon of aliasing (Fig. 3a). A full description of aliasing is provided in most geophysical textbooks, e.g. Telford et al. (1990), and so is not repeated here. A crucial parameter is the Nyquist frequency (fN); this is the minimum sampling frequency required to properly represent the variations and is equal to twice the highest frequency of variation within the data. Put simply, if the sample spacing is too large when compared with the wavelength of the variations, the results from the survey are irreparably degraded because the short wavelength components appear as spurious longer wavelength variations (Fig. 3a and b). The concept of aliasing is also important in that it shows that there is no advantage in acquiring data at a sampling rate greater than the Nyquist frequency, since no extra information is obtained through this oversampling. Although oversampling is not detrimental to the quality of the data, it is a waste of resources since it represents a cost without a return.

Fig. 3. (a) Illustration of aliasing using sampling of a single frequency periodic signal. To avoid aliasing the sampling rate must be at least twice the frequency of the signal. The Nyquist frequency (fN) is the minimum sampling frequency required to properly represent the actual variations in the data. When the sampling interval is too large the result is apparent periodic variations with lower frequencies than the input. (b) Graph showing the relationship between the frequency of the input, the sampling interval and the frequency of the output. For a sampling frequency of 200 Hz, the Nyquist frequency is 100 Hz. Inadequate sampling rates cause the output to be folded back into the region between 0 frequency and fN, referred to as the Nyquist interval (0–100 Hz).

2.3. Implications for surveys using constant sampling intervals

It is very unlikely, especially in surveys of large areal extent, that the sampling requirements for detection and delineation will remain constant throughout. This is especially true for delineation, since there will almost certainly be areas where there are greater rates of variation in the measured parameter. Denser sampling is required in such areas, but extending this to the entire survey area results in unnecessary oversampling. Thus, a uniform sampling scheme will almost certainly result in both aliasing and oversampling. To state the obvious, it would be sensible to modify the sampling interval according to the wavelengths of the variations being measured.

3. New methodology for survey design

The above illustrates some of the factors that should be taken into account if the available resources are to be optimally used in acquiring geophysical data. What should also be taken into account is the cost of acquiring the data. Not only should the minimum number of data be collected, the time and travel involved in collecting the data should also be minimised. Based on these observations an idealised planning scheme is proposed:

1. Assign some portion of the survey budget to the cost of an initial reconnaissance survey, leaving the remainder for follow-up surveys (there may be several phases of follow-up and these occur effectively immediately after the reconnaissance survey).



2. Determine the shortest path between sampling locations to minimise travel and carry out the reconnaissance survey.
3. Analyse the data to identify areas of potential interest. Assign a proportion of the remaining budget to the first phase of follow-up surveys. Identify additional measurement locations based on the areas of interest.
4. Determine the shortest path between measurement locations to minimise travel. This is a more complex problem than in step 2. There are likely to be several spatially discrete regions where more detailed sampling is required. Also, some data have already been acquired in these regions, and for maximum efficiency measurements at these locations should not be repeated. Note that some repeat measurements could be made to evaluate instrument drift, repeatability, etc. and hence help to determine the accuracy of the acquired data.
5. Carry out the first phase of the follow-up survey along the path selected in step 4.
6. Analyse the follow-up data and repeat steps 3–5 if required and if the budget allows.


There are three conceptual problems to be addressed in implementing this approach.

The first problem concerns the specifications of the reconnaissance component of data acquisition. The probability-of-detection calculations described above are a suitable basis for designing this part of the survey. Note that the probabilities are based on a regular sampling distribution. Deciding what constitutes an acceptable probability implicitly determines how much of the budget will be retained for follow-up surveys, because a higher-resolution reconnaissance survey has a greater chance of detecting features of interest, but will leave less funds for follow-up.

The second problem concerns identifying what exactly constitutes a response of potential interest in the reconnaissance survey. As noted previously, a major cost saving is achieved by making such decisions quickly, so that the acquisition is nearly continuous and the cost of remobilising a crew is avoided. If time permits, an analysis involving all available geological factors could be used. An alternative is to use the reconnaissance data itself as a basis for designing subsequent phases of data acquisition. At this stage we do not argue the case for, or against, particular selection criteria for areas of interest, instead seeking only to illustrate the principle of our survey design method.

The third problem is how to determine the optimum way to collect the required data. The overall cost of data acquisition can be expressed as the sum of the cost of travelling to each measurement location and the cost of making the required measurements. The cost of ground surveys primarily depends on the number of measurements made, which translates into time. Time is saved by collecting the readings as quickly as possible, and by minimising the time spent travelling between stations. For airborne surveys, measurements are made continuously during travel, so distance travelled is the dominant cost parameter.

To illustrate the solution to these problems, and the proposed methodology as a whole, we use a gravity dataset from the Stuart Shelf in South Australia (taken from the Primary Industries and Resources South Australia (PIRSA) Gravity Database). Geophysical exploration on the Stuart Shelf, including the use of regional gravity data, is described by Esdale et al. (2003). The gravity data, in the form of Bouguer anomaly, are presented in Fig. 4. These data are referred to as the real data (G) from here onwards. The data (6500 measurements) were mostly collected on a regular grid with a station spacing of 5 km. The size of the survey area is 325 km × 500 km. In this exercise the test data are taken as the actual variations in the gravity field, although of course in reality they are an incomplete representation of the true variation. We seek to reproduce the test data using a two-stage process (reconnaissance and a follow-up survey) in an optimal fashion, i.e. with as few measurements and as little travel as possible.

3.1. Reconnaissance survey

The targets sought by the reconnaissance survey were taken to be elliptical with 2a = L = 48 km and 2b = l = 32 km (terms as for Eq. (3)). The area of such a feature corresponds to 0.75% of the whole survey area. A 90% probability of detecting this target is achieved with a regular grid with a 28 km station spacing (where the edge of the survey area is closer than 28 km, a measurement is made at the edge), making for 216 measurements in the primary survey.
The most efficient way to acquire uniformly spaced data such as these is line by line.

3.2. Recognition of areas of significance and production of a non-uniform sampling array

After each phase of data acquisition, decisions must be made as to which regions warrant follow-up. Different types of geophysical data will naturally require different criteria. Time-domain electromagnetic data might rely on decay rates of the secondary field, whilst induced polarisation/resistivity data might use areas of higher conductivity corresponding with higher chargeability values. Here we use a source edge-based criterion, which is suitable for potential field data. Geologically, source edges characterise faults and contacts and can be taken as evidence of geological complexity, a possible characteristic of areas of interest whether exploring for mineralisation or seeking to provide the best possible 'map' of the local geology. From the data analysis point of view, source edges coincide with distinct changes in response characteristics, in particular amplitude. This change may be defined in a number of ways, e.g. amplitude thresholding, inflection points or gradient maxima.

The data from the primary survey were projected on to a grid using a minimum-curvature algorithm (Briggs, 1974; Fig. 5a), then analysed using a Canny edge-detecting algorithm (Canny, 1986; Marr and Hildreth, 1980). The resultant edge map (Fig. 5b) has the same array size as the input (gridded) data and contains 1's at points deemed to be an edge within the data and 0's otherwise. This edge map is analysed by centring a user-specified square template at every position in the edge map and counting the number of edges within it (Fig. 5b).


Fig. 4. Original test data, G, comprising 6500 points on a nearly uniform grid. Data are Bouguer gravity in gravity units (gu). Contour interval is 50 gu.

Fig. 5. (a) Dataset produced by the primary survey. (b) The Canny edge map derived from (a) and averaging window. (c) The continuous edge density map. (d) The non-uniform sampling array designed using the edge density map.


The number of edges within the template is divided by the area of the square template. The resulting edge density count at each position is then allocated to a bin, representing a low, medium or high density of edges, thus producing an edge density distribution function, EDDF (Fig. 5c). Note that the choice of bin allocations is user-controlled; here we use cut-offs of 0.55 and 0.85 for the low and medium density bins, respectively. The EDDF is the basis for identifying areas of interest, higher edge densities correlating with greater interest.

The EDDF is also the basis for designing the next phase of data acquisition. If Ni is the mean edge density in bin i = {1, 2, 3}, corresponding to a low, medium or high density of edges, respectively, and Ai is the area covered by bin i, then the number of measurements (Mi) may be assigned as follows:

M_i = k N_i A_i    (4)

where k is a scaling constant. The cost of the primary survey can be calculated once the specifications of that survey (line and station spacing) are determined from Eq. (3). Hence, the budget available for a secondary, follow-up survey is known and the total number of measurements it can support, Mt, can be estimated, giving

k = \frac{M_t}{\sum_{i=1}^{3} N_i A_i}    (5)
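The workflow of this section can be sketched in Python as follows. The function and parameter names are ours, SciPy and scikit-image are assumed to be available, and simple cubic interpolation stands in for the minimum-curvature gridding of Briggs (1974); the sketch is illustrative rather than a reproduction of the implementation used here.

import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import uniform_filter
from skimage.feature import canny

def design_nua(x, y, g, grid_x, grid_y, window=5, cuts=(0.55, 0.85), m_total=1000):
    """Grid the primary-survey readings, build an edge density distribution
    function (EDDF) and share m_total follow-up measurements between the
    low/medium/high edge-density bins via Eqs. (4) and (5)."""
    # Grid the scattered Bouguer anomaly values (stand-in for minimum curvature),
    # fill any cells outside the data hull and rescale to [0, 1] for the detector.
    grid = griddata((x, y), g, (grid_x, grid_y), method='cubic')
    grid = np.where(np.isnan(grid), np.nanmean(grid), grid)
    grid = (grid - grid.min()) / (grid.max() - grid.min() + 1e-12)

    # Canny edge map: True where a source 'edge' is flagged.
    edges = canny(grid, sigma=2.0).astype(float)

    # Edge density: fraction of edge pixels inside a square template, scaled
    # to [0, 1] so the bin cut-offs are comparable between surveys.
    density = uniform_filter(edges, size=window)
    if density.max() > 0:
        density = density / density.max()

    # Allocate each grid cell to a bin: 0 = low, 1 = medium, 2 = high.
    bins = np.digitize(density, list(cuts))
    N = np.array([density[bins == i].mean() if np.any(bins == i) else 0.0
                  for i in range(3)])                      # mean edge density per bin
    A = np.array([(bins == i).sum() for i in range(3)],
                 dtype=float)                              # cell counts as an area proxy

    k = m_total / np.sum(N * A)                            # Eq. (5)
    M = np.rint(k * N * A).astype(int)                     # Eq. (4): measurements per bin
    return bins, M

Stations for each bin can then be scattered within the cells belonging to that bin, giving a non-uniform sampling array of the kind shown in Fig. 5d.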

Fig. 6. Solution of a Travelling Salesman Problem by the SAA (1000 loci from Fig. 5d).


Thus, the number of measurements can be calculated for each bin, producing the non-uniform sampling array (NUA), i.e. the positions of the next generation of measurements. The NUA comprises three levels, corresponding to the three bins described above (Fig. 5c). Within each level the allotted measurements are distributed uniformly, but each level has a different density of measurements (Fig. 5d), assigned according to the density of edges. This task is fully automated and the only operator inputs required are the thresholds for the high, medium and low bins of the EDDF and control of measurement locations (by eliminating repeated loci), to avoid two measurements being made in the same place (unless required for quality control, etc.). Note that in the example considered here, repeat measurements have not been carried out.

3.3. Costs of measurements and travel

Determining the minimum length of the path required to make the measurements in the secondary survey is an example of a Travelling Salesman Problem (Appendix A). This problem does not have a closed-form solution, but a close-to-optimal solution may be found using a simulated annealing algorithm (SAA). The SAA is applied here mostly for ease of implementation, and can be replaced by a genetic algorithm if required. The resultant solution for our example is shown in Fig. 6; a minimal sketch of the annealing search is given below.
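As an indication of how the travel-path optimisation might be implemented, the following Python sketch applies simulated annealing with segment reversal to a set of station coordinates. The cooling schedule follows Eq. (A.1) and the acceptance rule Eq. (A.2) of Appendix A; the function name, the move set restricted to reversals, and the control parameters t0, eps and n_iter are our illustrative choices, not those used for Fig. 6.

import numpy as np

def anneal_path(points, t0=10.0, eps=1e-3, n_iter=200000, seed=None):
    """Close-to-optimal visiting order for the follow-up stations, treated as a
    Travelling Salesman Problem (Section 3.3 / Appendix A)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    order = rng.permutation(len(pts))              # random starting path P0

    def length(o):
        # Total length of the open path visiting the points in order o.
        dxy = np.diff(pts[o], axis=0)
        return np.sum(np.hypot(dxy[:, 0], dxy[:, 1]))

    cur_len = length(order)
    t = t0
    for _ in range(n_iter):
        i, j = sorted(rng.integers(0, len(pts), size=2))
        if i == j:
            continue
        trial = order.copy()
        trial[i:j + 1] = trial[i:j + 1][::-1]      # reverse a random section
        trial_len = length(trial)
        # Accept improvements always; accept worsenings with probability
        # exp(-(L1 - L0)/T), cf. Eq. (A.2).
        if trial_len < cur_len or rng.random() < np.exp(-(trial_len - cur_len) / t):
            order, cur_len = trial, trial_len
        t = t / (1.0 + eps * t)                    # cooling schedule, Eq. (A.1)
    return order, cur_len

Applied to the 1000 secondary-survey loci, a search of this kind produces a path of the sort shown in Fig. 6.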

Fig. 7. Dataset produced from 1216 points comprising a non-uniform sampling array, GNUA . Contour interval is 50 gu.


Fig. 8. Non-optimised dataset produced from 1216 points on a uniform grid, GU . Contour interval is 50 gu.

3.4. Results

Gridding the data after just two phases of data acquisition produced a result (Fig. 7) that is very similar to the real data (G, Fig. 4). The resulting gravity dataset, called here the optimised data (GNUA), consists of 1216 measurements: 216 measurements from the primary survey and 1000 measurements from the secondary survey. This is less than a fifth of the number of measurements in the original test data. It is instructive to compare the optimised data with the test data and also with a uniform grid comprising 1216 measurements (Fig. 8), the same number as in the non-uniform grid; this uniform dataset is referred to as the non-optimised data (GU) from here onwards.

Simple subtraction is an effective means of summarising the differences between the datasets. The following results are obtained for two difference sets, D1 and D2 (Figs. 9 and 10), where D1 = G − GNUA and D2 = G − GU; summary statistics for the three datasets and for the difference sets are given in Tables 1 and 2.

Table 1
The range, mean, median and standard deviation of the test (G), NUA (GNUA) and non-optimised (GU) data

                        G (gu)       GNUA (gu)    GU (gu)
Minimum                 −435.2665    −391.3780    −438.0923
Maximum                  180.8098     180.8098      10.0606
Mean                    −175.3815    −174.3603    −178.6574
Median                  −170.2550    −167.8922    −172.0066
Standard deviation        72.4726      69.4583      72.2683


Fig. 9. D1 , difference between the test (G) and NUA (GNUA ) data. Contour interval is 5 gu.

Visual inspection of the various datasets shows that the short wavelength variations are poorly represented in the non-optimised dataset. Referring to the local gravity 'highs' in the north centre of the study area, the wavelengths are comparable in the real and NUA data, but are overestimated in the non-optimised data. The amplitudes of these features are also underestimated in the non-optimised data. The successful reproduction of amplitude variations by the NUA data, but not by the non-optimised data, is confirmed by the variance statistics in Table 1. The average values of all datasets are comparable. The summary statistics of the difference data in Table 2 show quite large extreme values, but reference to Fig. 9 shows that in most areas the differences are small; only locally do larger differences occur, notably at the edges of the study area.

Table 2
The range, mean, median and standard deviation of the difference sets D1 and D2

                        D1 = G − GNUA (gu)    D2 = G − GU (gu)
Minimum                 −132.4201              −88.5494
Maximum                   94.6926              288.5517
Mean                       1.0211               −3.2759
Median                     0                     4.0062
Standard deviation        15.3846               20.7501


Fig. 10. D2 , difference between the test (G) and non-optimised data (GU ). Contour interval is 5 gu.

The distribution of differences in Fig. 9 has a random appearance, but those in Fig. 10 show more systematic variations, related to the poor definition of shorter wavelength variations.

Analysis of singular values (d) is an alternative means of comparing the three datasets (Golub and Kahan, 1965). Since all three gridded datasets are matrices, singular value decomposition is possible. Fig. 11 shows the plot of the logarithms of the singular values for each matrix, sj representing the row of the diagonal matrix D in which the singular value occurs. The singular values represent the detail of the matrix. In Fig. 11 the solid line represents the singular values of the real data (G), the dashed line represents those of the optimised data (GNUA) and the dotted line those of the non-optimised data (GU). The singular values of the original and NUA data are almost identical, suggesting that, apart from looking similar, G and GNUA are almost identical in their frequency content; a minimal numerical sketch of this comparison is given below.

The number of measurements in the optimised dataset has been reduced by 81% compared with the real data, and the associated cost is reduced accordingly. The cost of travelling is linearly proportional to the distance travelled. For a regular grid of 6500 measurements (65 × 100), with each measurement 5 km apart, 33,150 km needs to be travelled. On the other hand, the two phases of acquisition (216 measurements in the primary survey and 1000 measurements in the secondary survey) leading to the non-uniform grid add up to only approximately 14,500 km: a reduction of more than 57%.
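The singular-value comparison of Fig. 11 is straightforward to reproduce with NumPy; the naming in this sketch is ours.

import numpy as np

def singular_value_spectra(grids):
    """Return the base-10 logarithms of the singular values of each gridded
    dataset, as plotted in Fig. 11.  'grids' maps a label (e.g. 'G', 'GNUA',
    'GU') to a 2-D array; similar spectra indicate similar frequency content."""
    spectra = {}
    for label, m in grids.items():
        sv = np.linalg.svd(np.asarray(m, dtype=float), compute_uv=False)
        # Floor at the smallest positive float so that log10 never sees zero.
        spectra[label] = np.log10(np.maximum(sv, np.finfo(float).tiny))
    return spectra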


Fig. 11. Logarithms of singular values d(sj ) (sj is row of diagonal matrix D where the singular value, d(sj ), occurs).

4. Discussion and conclusions

An efficient way of collecting geophysical data using the NUA algorithm has been presented. This algorithm divides the survey into two parts, a reconnaissance and a secondary survey. The reconnaissance survey has a consistent station spacing based on an estimate of the dimensions of the features of interest and the probability of detecting them. Secondary surveys provide supplementary data in the areas where they are needed, creating an overall non-uniform sampling pattern. Based on a test using gravity data from the Stuart Shelf in South Australia, a gridded dataset created from 6500 regularly spaced measurements has been adequately reproduced with 1216 irregularly spaced measurements.

It should be noted that the approach described here represents a proof of concept. Many unknowns remain whose influence needs to be explored: for example, the number of phases of data acquisition, the criteria for selecting areas of interest, and how additional stations are assigned according to perceived variations in the significance of the different areas of interest. An obvious development for practical use in the field is to integrate the survey design algorithm into a geographical information system (GIS). The GIS could contain information on local accessibility. In the discussion above it was assumed that all stations were easily accessible and that it was possible to travel in a straight line between any two points. Travelling Salesman algorithms exist that account for barriers (say a river) and preferred travel paths (for example, a freeway). In vehicle-based gravity data acquisition these translate into fences, creeks, etc. and tracks and cut lines, respectively. A semi-automated means of designing an optimal acquisition strategy tuned to the local field conditions would be a very useful tool.

Acknowledgement

The authors would like to thank PIRSA for allowing the use of their gravity data.

Appendix A

Over the last 70 years a number of methods have been suggested to solve the Travelling Salesman Problem; however, the better-performing approaches, simulated annealing and genetic algorithms, have only been developed in the last 20 years. The simulated annealing algorithm (SAA) is applied here mostly for ease of implementation, and can be replaced by a more powerful genetic algorithm if required.

The simulated annealing algorithm (SAA; Press et al., 1985; Lundy and Mees, 1986) produces a close-to-optimal path by joining all loci in the given area in a specific order so as to minimise the path length. The starting point for the SAA is a random path (P0), of length L0, which simply connects all the loci. Next, a random section of this path is chosen and it is either reversed or transported (the choice of whether to reverse or transport is made at random).


If the order is reversed, the new section is returned into the place of the original section. If, on the other hand, the order is kept unchanged, the section is transported to a different location in the path. Thus, a new path (P1) is produced and its length, L1, is calculated. At the same time the so-called temperature of the SAA is calculated. This is a measure of how easily the new path P1 is accepted. Temperature is defined as an iterative function

T_{n+1} = \frac{T_n}{1 + \varepsilon T_n}    (A.1)

where ε is a small number and T0 is prescribed by the operator. If the length of the path has decreased, the changes are accepted and the cycle is repeated with P0 = P1. If the length increases, the changes are accepted with probability

f(L_1) = e^{-(L_1 - L_0)/T_n}    (A.2)

That is, if f(L1) > r, where r is a random number between 0 and 1, the changes are accepted and the cycle is repeated with P0 = P1. If f(L1) < r, the changes are rejected and the cycle is repeated with the unmodified P0. This process is repeated until the length of the path no longer changes.

References

Briggs, I.C., 1974. Machine contouring using minimum curvature. Geophysics 39 (1), 39–48.
Canny, J., 1986. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8 (6), 679–698.
Esdale, D., Pridmore, D.F., Coggon, J., Muir, P., Williams, P., Fritz, F., 2003. The Olympic Dam copper–uranium–gold–silver–REE deposit, South Australia. A geophysical case history. In: Dentith, M.C. (Ed.), Geophysical Signatures of South Australian Mineral Deposits. Centre for Global Metallogeny, Publication No. 31, Australian Society of Exploration Geophysicists, Special Publication No. 12, pp. 147–168.
Galybin, K.A., Dentith, M.C., in press. Targeting elliptical anomalies.
Golub, G., Kahan, W., 1965. Calculating the singular values and pseudo-inverse of a matrix. J. SIAM Numer. Anal. 2 (2), 205–224.
Lundy, M., Mees, A., 1986. Convergence of the annealing algorithm. Math. Programming 34 (1), 111–124.
Marr, D., Hildreth, E., 1980. Theory of edge detection. Proc. R. Soc. Lond. B 207, 187–217.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1985. Numerical Recipes. Cambridge University Press, New York.
Sambuelli, L., Strobbia, C., 2002. The Buffon's needle problem and the design of a geophysical survey. Geophys. Prospecting 50, 403–409.
Telford, W.M., Geldart, L.P., Sheriff, R.E., 1990. Applied Geophysics, second ed. Cambridge University Press.