An algorithm and Pascal program for geostatistical mapping

Computers & Geosciences Vol. 17, No. 4, pp. 489-503, 1991. Printed in Great Britain. All rights reserved. 0098-3004/91 $3.00 + 0.00. Copyright © 1991 Pergamon Press plc

AN ALGORITHM AND PASCAL PROGRAM FOR GEOSTATISTICAL MAPPING

B. BERKOWITZ and M. BEN-ZVI

Hydrological Service, P.O. Box 6381, Jerusalem, Israel 91063

(Received 27 February 1990; accepted 13 September 1990)

Abstract--A general model for generating statistically optimal estimates is presented, together with an accompanying Pascal program developed for solution on a microcomputer. The model, based on the theory of objective analysis, is a powerful tool for estimating mapped data fields from scattered point observations. The special feature of the method is its ability to quantify the accuracy of the estimates, allowing determination of an associated error field. The Pascal code for the model and sample input and output are provided. The code is easily implemented and modified.

Key Words: Mapping techniques, Objective analysis, Kriging.

INTRODUCTION

Studies in the geosciences may require the ability to predict (or interpolate) the values of spatial or temporal points on the basis of available observed point data. These data usually are distributed unevenly in space and time. Problems of this nature include analysis of hydrogeological data (such as groundwater levels or transmissivity values), and data from atmospheric pollution and oceanic fields. Numerous stochastic and deterministic methods for prediction are available in the literature. One estimation method which has become widely used in recent years is that of kriging [see, for example, Delhomme (1978) and de Marsily (1986) for extensive discussion of the method]. The simple kriging method (Journel and Huijbregts, 1978) is identical to an estimation theory known as "objective analysis" (Gandin, 1965), which was developed concurrently, and since has been applied in an oceanographic context by, among others, Bretherton, Davis, and Fandry (1976) and Carter and Robinson (1987). In fact, similar derivations of optimal linear predictions are present earlier in the literature; an extensive review with historical observations is given in Cressie (1989).

In general, the spatial and temporal variability of the controlling parameters of such systems is not purely random, but, rather, correlated. The underlying assumption is that the "nearer" data are (in space or time), the more they are correlated. The problem is to identify appropriate expressions for the spatial or temporal correlations, and then apply objective analysis as a method for optimizing estimates of the variables throughout the field. The model presented here assumes that the semivariogram is known. In practice, the estimation and fitting stage of the spatial/temporal analysis must be carried out first.

EQUATIONS

A detailed treatment of the relevant objective analysis equations, with extensive discussion of the theoretical aspects of the theory, is given in Bretherton, Davis, and Fandry (1976) and Carter and Robinson (1987). The code developed here is based on the following objective analysis model. It is assumed that the fields are stationary and homogeneous. A measurement, φ_r, at location r, is assumed to consist of a true value, θ_r, and a random noise error (with expectation equal to zero), ε_r:

φ_r = θ_r + ε_r.    (1)

It also is assumed that the error is uncorrelated with the error at other measurement points at location s, and also with the true field:

E[ε_r θ_s] = 0    (2)

and

E[ε_r ε_s] = E² δ_rs,    (3)

where E² is the error variance (which is independent of location because the field is assumed to be homogeneous). Then the least-squares optimal estimator of θ at a point x is

θ̂_x = Σ_{r,s=1}^n B_xr (A⁻¹)_rs φ_s.    (4)

The confidence level (expected error variance) is

E[(θ_x − θ̂_x)²] = B_xx − Σ_{r,s=1}^n B_xr (A⁻¹)_rs B_xs,    (5)

where n is the number of observations,

A_rs = E[φ_r φ_s] = C(x_r − x_s) + E² δ_rs,    (6)

B_xr = C(x − x_r),    (7)
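The pointwise estimator (4) and its error variance (5) can be checked numerically. The following is a minimal Python sketch (not the paper's Pascal) for a one-dimensional field with a unit exponential correlation; the function names and the tiny Gauss-Jordan inverse are illustrative choices, not part of the original program.

```python
import math

def covariance(dx, B=1.0):
    # Exponential correlation C(r) = exp(-|r|/B), as in the paper's example
    return math.exp(-abs(dx) / B)

def invert(M):
    # Small Gauss-Jordan inverse with partial pivoting (sufficient for a sketch)
    n = len(M)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        d = aug[c][c]
        aug[c] = [v / d for v in aug[c]]
        for r in range(n):
            if r != c:
                f = aug[r][c]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[c])]
    return [row[n:] for row in aug]

def objective_analysis(x_obs, phi, x_est, B=1.0, E2=0.0):
    """Estimator (4) and expected error variance (5).

    A[r][s] = C(x_r - x_s) + E2*delta_rs   -- eq. (6)
    Bx[r]   = C(x - x_r)                   -- eq. (7)
    """
    n = len(x_obs)
    A = [[covariance(x_obs[r] - x_obs[s], B) + (E2 if r == s else 0.0)
          for s in range(n)] for r in range(n)]
    Bx = [covariance(x_est - x_obs[r], B) for r in range(n)]
    Ainv = invert(A)
    theta = sum(Bx[r] * Ainv[r][s] * phi[s]
                for r in range(n) for s in range(n))       # eq. (4)
    err = covariance(0.0, B) - sum(Bx[r] * Ainv[r][s] * Bx[s]
                                   for r in range(n) for s in range(n))  # eq. (5)
    return theta, err
```

With noise-free data (E2 = 0), estimating at an observation point reproduces the measurement with zero error variance, while estimating far from all data returns an error variance approaching B_xx, exactly as equations (4)-(5) require.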

and C is the correlation function for the field φ. The correlation function is assumed homogeneous and stationary, but not isotropic, that is, of the form

C(r) = C(Δx, Δy, Δτ).    (8)

Defining C_θ to be the autocorrelation for the vector of observations, and C_xθ to be the cross-correlation between the vectors of estimates and observations, that is,

C_θ = A_θθ    (9)

C_xθ = A_xθ,    (10)

then the equations can be written in the general matrix form:

Linear estimate for θ:

θ̂ = C_xθ C_θ⁻¹ φ.    (11)

Associated error matrix:

C_e = C_x − C_xθ C_θ⁻¹ C_xθᵀ.    (12)

Note that the observation variable, φ, and the estimated quantity, θ, can be considered as scalar or vector quantities. Equations (11) and (12) constitute the Gauss-Markov theorem for an unbiased linear minimum mean-square estimate of a random variable matrix (B.L.U.E. estimate). In practice, the entries of the covariance matrix C_xθ depend on the relative locations (in space and time) of the observations and of the estimated values. The field error then can be introduced by employing the concept of normalized field error, E, which is the variance of the field error divided by the sum of the interpolation and field errors:

E = var(ε_r) / [var(θ_r) + var(ε_r)].    (13)

Figure 1. Program flowchart. (The flowchart shows the sequence: Start; Call OAParameters: read in all relevant parameters; Call ReadData: read in all measured data; Call MakeGrid: determine grid locations where mapped values are to be calculated; choose grid point at which mapped and error values are to be calculated; Call Select: choose data to be used in calculating mapped and error values at the grid point; Call ObjAn: calculate mapped and error values at the grid point; repeat for remaining grid points; Call Diag: calculate and print "diagnostics"; Call Results: print out mapped values and error fields.)
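The matrix form (11)-(12) can be illustrated with two observations and a closed-form 2x2 inverse. This Python sketch is not from the paper; the data values and the helper `estimate` are hypothetical, chosen so the B.L.U.E. properties are easy to verify by hand.

```python
import math

def C(r, B=1.0):
    # Exponential correlation, playing the role of the paper's C(.)
    return math.exp(-abs(r) / B)

def estimate(x, x_obs=(0.0, 1.0), phi=(1.0, 2.0), E2=0.0):
    """Scalar instance of eqs. (11)-(12) for two observations."""
    a = C(0.0) + E2                  # diagonal entry of C_theta (eq. 9)
    b = C(x_obs[0] - x_obs[1])       # off-diagonal entry of C_theta
    det = a * a - b * b
    Cinv = ((a / det, -b / det), (-b / det, a / det))   # 2x2 inverse of C_theta
    Cx = (C(x - x_obs[0]), C(x - x_obs[1]))             # C_xtheta (eq. 10)
    w = (Cx[0] * Cinv[0][0] + Cx[1] * Cinv[1][0],
         Cx[0] * Cinv[0][1] + Cx[1] * Cinv[1][1])       # C_xtheta C_theta^{-1}
    theta_hat = w[0] * phi[0] + w[1] * phi[1]           # eq. (11)
    err = C(0.0) - (w[0] * Cx[0] + w[1] * Cx[1])        # eq. (12), scalar case
    return theta_hat, err
```

With E2 = 0 the estimator interpolates exactly (zero error variance at each observation point), and between the observations the error variance lies strictly between 0 and C(0), which is the behavior the error fields of Table 4 display.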

SOLUTION ALGORITHM

The solution algorithm is most easily explained by the flowchart in Figure 1. The program OBJECTIVE_ANALYSIS has been written for

Table 1. Sample input: mapping parameters

Value for B -- covariance function: F = exp(-sqrt(x**2 + y**2)/B)
6300.0
Value for E (field error) -- when value is 999.999 in input data
0.3
Dist -- spatial radius of influential points (meters)
4000.0
Time -- temporal radius of influential points (days)
50.0
DateYear -- central date (point in time) at which mapped values are calc.
89
DateMonth -- central date (point in time) at which mapped values are calc.
10
DateDay -- central date (point in time) at which mapped values are calc.
10
Limit -- maximum number of influential points
15
NPX -- number of grid points in x-direction
10
NPY -- number of grid points in y-direction
10

Table 2. Sample input: measurement data

x       y       day month year Level Error
142010  200510  02  10    89   1.38  0.3
142420  200040  02  10    89   1.56  0.3
143890  200600  02  10    89   4.49  0.3
142260  201110  02  10    89   0.41  0.3
142680  201140  02  10    89   1.43  0.3
142970  201150  02  10    89   2.10  0.3
143220  201070  02  10    89   2.81  0.3
143270  201590  02  10    89   1.62  0.3
144800  201880  23  10    89   7.88  0.3
141240  202420  24  10    89  -0.68  0.3
141270  202190  22  10    89  -0.55  0.3
141390  202800  24  10    89  -1.13  0.3
141830  202740  22  10    89  -1.10  0.3
141890  202120  22  10    89  -0.37  0.3
141970  202450  22  10    89  -0.88  0.3
142570  202590  23  10    89  -0.81  0.3
142830  202470  23  10    89  -0.47  0.3
143100  202950  23  10    89  -1.05  0.3
143370  202890  23  10    89  -0.79  0.3
143700  202940  23  10    89   1.28  0.3
141030  203340  24  10    89  -1.09  0.3
141350  203780  24  10    89  -1.48  0.3
141470  203170  24  10    89  -1.29  0.3
141680  203440  22  10    89  -1.55  0.3
142210  203090  22  10    89  -1.29  0.3
142770  203400  23  10    89  -1.35  0.3
143330  203770  23  10    89  -2.12  0.3
143510  203120  23  10    89  -0.66  0.3
143680  203450  23  10    89   0.23  0.3
144220  203000  23  10    89   2.10  0.3
141280  204160  24  10    89  -1.64  0.3
141410  204550  24  10    89  -2.08  0.3
142030  204740  24  10    89  -2.34  0.3
143090  204410  24  10    89  -5.40  0.3
143500  204470  23  10    89  -1.99  0.3
143860  204370  02  10    89   2.03  0.3
144830  204440  23  10    89   6.80  0.3
141520  205700  23  10    89  -2.37  0.3
142210  205840  24  10    89  -2.69  0.3
142620  205840  23  10    89  -3.74  0.3
143140  205980  02  10    89  -4.00  0.3
143230  205940  02  10    89  -3.97  0.3
143610  205920  23  10    89  -4.83  0.3
143680  205060  02  10    89  -2.56  0.3
144250  205520  24  10    89   6.46  0.3
144830  205130  24  10    89   6.65  0.3

microcomputers operating under MS-DOS (trademark of Microsoft, Inc.), and was compiled with Borland Turbo Pascal Version 5.5 (trademark of Borland International). The code contains a main program with calls to seven main procedures. Two data files are required as input: one file contains the actual measurement data, whereas the other contains user-specified parameters relevant to the mapping procedure. User specification of parameters allows for flexibility in producing the mapped data and error fields. Values for the following parameters are required by the program: (1) coefficient for the relevant covariance function; (2) default value for the normalized variance of measurement; (3) spatial radius of influential points; (4) temporal radius of influential points; (5) central date (day, month, year) at which mapped values are to be calculated; (6) maximum number of influential points in space and time, used in calculating values at grid points; and (7) number of grid points in the x and y directions. In addition, the covariance function is defined as a separate function within the program, and can be modified easily as desired to fit the relevant spatial/temporal correlations for any data set under consideration.
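The separation of the covariance function into one routine (Function F in the listing) is the main customization point. The following Python sketch, not part of the paper's code, shows the exponential form from Table 1 with the example's decay coefficient; swapping in a different model would only mean replacing this one function.

```python
import math

def F(del_x, del_y, B):
    # Exponential covariance from Table 1: F = exp(-sqrt(x**2 + y**2)/B)
    # (the listing's Function F also receives DelT but, in this form,
    # uses only the spatial separation)
    return math.exp(-math.sqrt(del_x**2 + del_y**2) / B)

B = 6300.0   # spatial decay coefficient from the numerical example

# Correlation between two wells 4000 m apart (the Dist radius in Table 1)
rho = F(4000.0, 0.0, B)
```

At zero separation the correlation is 1, and at one decay length (6300 m) it has fallen to exp(-1), which motivates the choice of a 4000 m radius of influential points in Table 1.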

NUMERICAL EXAMPLE

In order to provide a simple illustration of the method, the code developed here was used to analyze a set of groundwater level measurements taken from a section of the coastal aquifer of Israel. The two required input data files are reproduced in Tables 1 and 2. From prior analysis of the data (Atzmon and Ben-Zvi, 1989), the covariance function was determined to be exponential, with a spatial decay coefficient of 6300 m. Tables 3 and 4 present the mapped data and error fields, respectively. The values are printed at their corresponding locations on the grid. The error field indicates regions of lower accuracy of the estimates, which is a function both of scarcity of data and variance of the available measurements.

Table 3. Sample output: mapped data field

Data Field:
-2.460 -2.221 -1.956 -1.579 -1.241 -0.954 -0.463  0.080  0.487  0.895
-2.627 -2.410 -2.128 -1.740 -1.304 -0.968 -0.399  0.210  0.684  1.084
-2.761 -2.655 -2.330 -1.951 -1.286 -0.973 -0.297  0.404  0.979  1.340
-3.016 -2.933 -2.642 -2.133 -1.277 -0.922 -0.055  0.696  1.308  1.618
-3.191 -3.016 -2.865 -2.102 -1.417 -0.679  0.254  1.199  1.779  2.151
-3.135 -2.640 -2.587 -1.926 -1.031 -0.329  0.571  1.723  2.523  2.687
-2.149 -1.388 -1.047 -0.583 -0.167  0.733  1.868  2.711  3.209  3.217
-0.043  0.887  1.081  1.129  1.272  1.819  2.853  3.505  3.830  3.659
 1.740  3.054  3.050  2.627  2.743  3.137  3.863  4.257  4.192  3.974
 2.777  4.151  4.443  4.268  3.833  4.005  4.692  4.757  4.455  4.163

Table 4. Sample output: mapped error field

Error Field:
 0.252  0.206  0.164  0.128  0.097  0.123  0.156  0.231  0.279  0.332
 0.185  0.153  0.114  0.098  0.084  0.082  0.116  0.183  0.211  0.265
 0.152  0.134  0.114  0.124  0.097  0.077  0.101  0.144  0.143  0.199
 0.121  0.131  0.128  0.137  0.109  0.092  0.116  0.113  0.118  0.147
 0.110  0.131  0.134  0.129  0.094  0.087  0.119  0.099  0.121  0.169
 0.082  0.124  0.112  0.107  0.093  0.093  0.126  0.096  0.129  0.203
 0.104  0.112  0.098  0.103  0.082  0.102  0.140  0.128  0.140  0.223
 0.150  0.114  0.112  0.123  0.110  0.124  0.160  0.165  0.154  0.249
 0.190  0.124  0.129  0.151  0.151  0.155  0.166  0.192  0.222  0.297
 0.240  0.149  0.137  0.176  0.202  0.204  0.170  0.224  0.284  0.353

SUMMARY AND CONCLUSIONS

This paper briefly describes a model and a computer program for generating statistically optimal estimates based on scattered point observations. The model is based on the theory of objective analysis, and provides both a mapped data field and an associated error field which quantifies the accuracy of the estimates. These fields can be analyzed directly, or used as input to graphics packages to produce contour and three-dimensional surface plots. The error field is particularly useful both as a method of indicating where additional information is most valuable (in terms of reducing the estimation error), and in quantifying uncertainty for assessing the system and for designing management strategies. The code can be loaded and compiled easily on a microcomputer, and the structure allows for quick comprehension and ease in entering modifications.

Acknowledgment--The authors thank E. Tziperman of the Weizmann Institute of Science for providing the initial motivation and background for this work.

REFERENCES

Atzmon, B., and Ben-Zvi, M., 1989, A method to determine the relationship between the location of observation wells and the accuracy of estimation of water levels: Hydrological Service of Israel, Tech. Rept. (in Hebrew), 18 p.
Bretherton, F. P., Davis, R. E., and Fandry, C. B., 1976, A technique for objective analysis and design of oceanographic experiments applied to MODE-73: Deep-Sea Research, v. 23, p. 559-582.
Carter, E. F., and Robinson, A. R., 1987, Analysis models for the estimation of oceanic fields: Jour. Atmos. Oceanic Tech., v. 4, no. 1, p. 49-74.
Cressie, N., 1989, Geostatistics: Am. Statist., v. 43, no. 4, p. 197-202.
Delhomme, J. P., 1978, Kriging in the hydrosciences: Adv. Water Resources, v. 1, no. 5, p. 251-266.
Gandin, L. S., 1965, Objective analysis of meteorological fields: Israel Program for Scientific Translations, Jerusalem, 242 p.
Journel, A. G., and Huijbregts, C. J., 1978, Mining geostatistics: Academic Press, New York, 632 p.
de Marsily, G., 1986, Quantitative hydrogeology: Academic Press, New York, 440 p.

APPENDIX

Program Listing

Program Objective_Analysis;
{$N+}   (* uses 8087 numerical co-processor, if available *)
{$R+}
{$M 26520,0,655360}
{
  Program to do "objective analysis" of data scattered in both time and
  space. Estimates of data - together with error estimates - are
  calculated for points on a specified regular grid.

  The program requires the following two input files:

  1. Objective analysis parameters: in file 'OAParam.Dat'
     This file includes the following parameters:
       B:     correlation distance, used in calculations of the
              covariance function
       E:     default value of the normalized field error, in
              dimensionless form, 0.0 < E < 1.0
       LIMIT: maximum number of influential points (in time and space)
              used in calculating values at grid points
       DIST:  spatial radius of influential points
       TIME:  temporal radius of influential points
       DATE:  central date (point in time) at which objectively mapped
              values are to be calculated (consists of day, month, year,
              and converted to absolute number)
       NPX:   number of grid points in x direction
       NPY:   number of grid points in y direction

  2. Data file of measurements: in file 'Measure.Dat'
     Each line consists of (1) (x,y) coordinates, (2) time (year, month,
     day), (3) measurement, and (4) normalized field error.
     Specifically, the normalized field error E is defined as follows:
     Assuming the model h = H(x) + epsilon + delta, where
       'epsilon' = interpolation error (at the point of measurement),
       'delta'   = measurement error,
     then E is defined as:
       E = var(delta) / [var(epsilon) + var(delta)]
     Alternatively, the program allows specification of a single
     "default" value of E, when it is not supplied in particular lines
     in the data file. When E is unknown, specify a field error of
     999.999 in the data file.

  The appropriate covariance function (specific to the data being
  mapped) is specified below in 'Function F'.
}
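The 999.999 sentinel convention described above is easy to mirror outside Pascal. This small Python sketch (illustrative only; the function name and tuple layout are not from the paper) parses one Measure.Dat-style line and substitutes the default normalized field error:

```python
DEFAULT_E = 0.3     # default normalized field error E (from OAParam.Dat)
MISSING = 999.999   # sentinel used in Measure.Dat when E is unknown

def parse_measurement(line, default_e=DEFAULT_E):
    # Column order as read by ReadData: x, y, day, month, year, level, E
    x, y, day, month, year, level, err = (float(t) for t in line.split())
    if err == MISSING:
        err = default_e
    return (x, y, int(day), int(month), int(year), level, err)
```

The exact equality test against the sentinel is safe here because the literal 999.999 parses to the identical floating-point value that was written into the file.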

uses crt, Dos;

const
  MaxGridPoints = 100;  { Maximum number of grid points; equals NPX * NPY,
                          which are defined in input file }
  MaxData       = 100;  { Maximum number of data points in input data file }
  LimitM        = 20;   { This is the maximum LIMIT which can be defined }

type
  DataArray   = Array [1..MaxData] of real;
  DataMatrix  = Array [1..2] of DataArray;        { [2 x MaxData] matrix }
  GridArray   = Array [1..MaxGridPoints] of real;
  GridMatrix  = Array [1..2] of GridArray;        { [2 x MaxGridPoints] matrix }
  GridArray2  = Array [1..2] of real;
  GridMatrix2 = Array [1..MaxGridPoints] of GridArray2;
                                                  { [MaxGridPoints x 2] matrix }
  ArrayNobsR  = Array [1..MaxData] of real;
  ArrayNobsI  = Array [1..MaxData] of integer;
  ArrayA      = Array [1..LimitM] of real;
  MatrixA     = Array [1..LimitM] of ArrayA;
  MatrixA2    = Array [1..2] of ArrayA;

var
  Title : string[128];
  MObs,i,j,k,l,kx,ier,NObs : integer;
  B,E,Dist,Date,Time,Obj,Err : real;
  DateDay,DateMonth,DateYear,tday,tmonth,tyear : integer;
  Limit,NPX,NPY : integer;
  Measure,VarMeasure,TObs,C : DataArray;
  XYObs : DataMatrix;
  ObjField,ErrorField : GridArray;
  XYGrid : GridMatrix2;
  Cor : ArrayNobsR;
  Index,IP : ArrayNobsI;
  NearXYObs : MatrixA2;
  NearMeasure,NearVarMeasure,NearT : ArrayA;
  AA,V : MatrixA;

{---------------------------------------------------------------------------}
{                                PROCEDURES                                 }
{---------------------------------------------------------------------------}
Function F(DelX,DelY,DelT,B : real) : real;
{ Definition of (USER-SPECIFIED) covariance function }
var R : real;
begin
  R := sqrt(DelX*DelX + DelY*DelY);
  F := exp(-R/B);
end;
{---------------------------------------------------------------------------}
Function Abs_Time(day,month,year : integer) : real;
{ Convert date to absolute time in units of days }
{ Date consists of: YY (year, last two digits), MM (month), DD (day) }
{ Warning: no range checking for validity of day/month/year is performed! }
var YY,CT,LL,LE,II : integer;
begin
  YY := Year + 1900;
  CT := 365*YY + Day + 31*(Month-1);
  II := (YY-1) div 4;
  LL := YY div 4;
  LE := trunc(Month*0.4 + 2.3);
  LL := LL - LE;
  if (Month < 3) then CT := CT + II
  else CT := CT + LL;
  Abs_Time := CT;
end;
{---------------------------------------------------------------------------}
Function Power(Number,Exponent : real) : real;
{ Calculates powers }
begin
  if (Number > 0.0) then
    Power := exp(Exponent * ln(Number))
  else if (Number = 0.0) and (Exponent > 0.0) then
    Power := 0.0
  else if (Number = 0.0) and (Exponent = 0.0) then
    Power := 1.0
  else
  begin
    writeln('*** Cannot calculate power of a negative number ***');
    writeln('*** Power assigned default value of zero        ***');
    Power := 0.0;
  end;
end;
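The day-counting scheme in Abs_Time (365 days per year, 31 days per month, then leap-year and short-month corrections) is compact but not obviously correct. A quick Python port, written here only to check the arithmetic against the standard calendar, confirms that date differences come out right for the 19xx years the program assumes:

```python
import datetime

def abs_time(day, month, year):
    # Python port of the listing's Abs_Time; `year` is the two-digit
    # year used throughout the program (19xx assumed)
    yy = year + 1900
    ct = 365 * yy + day + 31 * (month - 1)
    ii = (yy - 1) // 4              # leap days completed before this year
    le = int(month * 0.4 + 2.3)     # overcount correction for short months
    ll = yy // 4 - le
    return ct + (ii if month < 3 else ll)

def calendar_days(d1, m1, y1, d2, m2, y2):
    # Reference difference computed with the standard library calendar
    return (datetime.date(1900 + y2, m2, d2)
            - datetime.date(1900 + y1, m1, d1)).days
```

The leap-day branch on Month < 3 is what makes February boundaries come out correctly in both leap and non-leap years.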

{---------------------------------------------------------------------------}
Procedure OAParameters(var B,E,Dist,Date,Time : real;
                       var Limit,NPX,NPY,DateDay,DateMonth,DateYear : integer);
{ Read objective analysis parameters and grid specifications }
var DataFile1 : text;
begin
  assign(DataFile1,'OAParam.dat');
  reset(DataFile1);
  readln(DataFile1,Title);  readln(DataFile1,B);
  readln(DataFile1,Title);  readln(DataFile1,E);
  readln(DataFile1,Title);  readln(DataFile1,Dist);
  readln(DataFile1,Title);  readln(DataFile1,Time);
  readln(DataFile1,Title);  readln(DataFile1,DateYear);
  readln(DataFile1,Title);  readln(DataFile1,DateMonth);
  readln(DataFile1,Title);  readln(DataFile1,DateDay);
  Date := Abs_Time(DateDay,DateMonth,DateYear);
  readln(DataFile1,Title);  readln(DataFile1,Limit);
  readln(DataFile1,Title);  readln(DataFile1,NPX);
  readln(DataFile1,Title);  readln(DataFile1,NPY);
  close(DataFile1);
end;
{---------------------------------------------------------------------------}

Procedure ReadData(var XYObs : DataMatrix;
                   var Measure,VarMeasure,TObs : DataArray;
                   var MObs : integer);
{ Read measurement data: x, y, t, measurement, normalized field error }
var DataFile2 : text;
begin
  assign(DataFile2,'Measure.dat');
  reset(DataFile2);
  i := 0;
  readln(DataFile2,Title);
  if (eof(DataFile2)) then
  begin
    ClrScr;
    writeln('*** No data were found in the data file !! ***');
    halt;
  end
  else
  begin
    repeat
      i := i + 1;
      readln(DataFile2,XYObs[1,i],XYObs[2,i],tday,tmonth,tyear,Measure[i],
             VarMeasure[i]);
      TObs[i] := Abs_Time(tday,tmonth,tyear);
      if (VarMeasure[i]=999.999) then VarMeasure[i] := E;
    until eof(DataFile2);
    MObs := i;   { MObs is the total number of observations }
  end;
  close(DataFile2);
end;
{---------------------------------------------------------------------------}

Procedure MakeGrid(NPX,NPY,MObs : integer; XYObs : DataMatrix;
                   var XYGrid : GridMatrix2);
{ Calculate grid locations where mapped values are to be calculated }
var
  MinDatX,MaxDatX,MinDatY,MaxDatY : real;
  NumDX,NumDY : integer;
begin
  { Identify maximum and minimum coordinates for which data exist }
  MinDatX := XYObs[1,1];  MaxDatX := XYObs[1,1];
  MinDatY := XYObs[2,1];  MaxDatY := XYObs[2,1];
  for i := 2 to MObs do
  begin
    if (XYObs[1,i]<MinDatX) then MinDatX := XYObs[1,i];
    if (XYObs[1,i]>MaxDatX) then MaxDatX := XYObs[1,i];
    if (XYObs[2,i]<MinDatY) then MinDatY := XYObs[2,i];
    if (XYObs[2,i]>MaxDatY) then MaxDatY := XYObs[2,i];
  end;
  { Define grid within these limits }
  NumDX := NPX - 1;
  NumDY := NPY - 1;
  i := 0;
  for j := 0 to NumDY do
    for k := 0 to NumDX do
    begin
      inc(i,1);
      XYGrid[i,1] := MinDatX + k*(MaxDatX-MinDatX)/NumDX;
      XYGrid[i,2] := MinDatY + j*(MaxDatY-MinDatY)/NumDY;
    end;
end;
{---------------------------------------------------------------------------}
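MakeGrid spans the bounding box of the data with an NPX-by-NPY lattice, ordered row by row. A minimal Python equivalent (illustrative only, not the paper's code) makes the node ordering explicit:

```python
def make_grid(xs, ys, npx, npy):
    # Regular npx-by-npy grid over the data's bounding box, with nodes
    # ordered row by row (j outer, k inner) as in the listing's MakeGrid
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    return [(min_x + k * (max_x - min_x) / (npx - 1),
             min_y + j * (max_y - min_y) / (npy - 1))
            for j in range(npy) for k in range(npx)]
```

This ordering is what lets the main program address node (i, j) as index (j-1)*NPX + i when printing the fields in Results.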

Procedure Select(var kx : integer; Limit : integer;
                 Dist,Time,Date : real; XYObs : DataMatrix;
                 var Cor : ArrayNobsR; var Index : ArrayNobsI;
                 var MObs,NObs : integer; XYGrid : GridMatrix2;
                 Measure,TObs,VarMeasure : DataArray;
                 var NearXYObs : MatrixA2;
                 var NearMeasure,NearT,NearVarMeasure : ArrayA;
                 var C : DataArray);
{ Select observations to be used for Objective Analysis at each grid point. }
{ The procedure selects a maximum of LIMIT neighboring points to each       }
{ interpolation point.                                                      }
var delX,delY,delT,R : real;

{--------------------- sub-procedure SORT ---------------------}
Procedure Sort(var Cor : ArrayNobsR; var Index : ArrayNobsI; NObs : integer);
{ A shell sort routine to sort INDEX and COR down according to the values }
{ of COR. The swap body was illegible in the source and is reconstructed  }
{ here as a standard descending shell sort. }
var
  igap,imax,iex,IplusG,isave : integer;
  save : real;
begin
  igap := NObs;
  Repeat
    igap := trunc(igap/2);
    imax := NObs - igap;
    repeat
      iex := 0;
      for i := 1 to imax do
      begin
        IplusG := i + igap;
        if (Cor[i] < Cor[IplusG]) then
        begin
          save := Cor[i];  Cor[i] := Cor[IplusG];  Cor[IplusG] := save;
          isave := Index[i];  Index[i] := Index[IplusG];  Index[IplusG] := isave;
          iex := iex + 1;
        end;
      end;
    until (iex = 0);
  until (igap <= 1);
end;
{--------------------- end of sub-procedure SORT ---------------------}

begin { procedure Select }
  NObs := 0;
  for i := 1 to MObs do
  begin
    delX := XYGrid[kx,1] - XYObs[1,i];
    delY := XYGrid[kx,2] - XYObs[2,i];
    delT := DATE - TObs[i];
    R := sqrt((delX*delX) + (delY*delY));
    if (abs(delT)<=TIME) and (R<=DIST) then
    begin
      NObs := NObs + 1;
      Index[NObs] := i;
      Cor[NObs] := F(delX,delY,delT,B);
    end;
  end;
  if (NObs<>0) then
  begin
    if (NObs>Limit) then
    begin
      Sort(Cor,Index,NObs);
      NObs := Limit;
    end;
    for i := 1 to NObs do
    begin
      j := Index[i];
      NearXYObs[1,i] := XYObs[1,j];
      NearXYObs[2,i] := XYObs[2,j];
      NearT[i] := TObs[j];
      NearMeasure[i] := Measure[j];
      NearVarMeasure[i] := VarMeasure[j];
      C[i] := Cor[i];
    end;
  end;
end;
{---------------------------------------------------------------------------}
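Select's policy (keep observations inside the spatial and temporal radii, rank them by correlation with the grid point, truncate to LIMIT) can be condensed into a few lines. This Python sketch is an illustration of that policy, not the listing itself; the tuple layout and function name are invented for the example:

```python
import math

def select_influential(grid_pt, obs, date, dist, time_radius, limit, B):
    # obs: (x, y, t, value) tuples. Keep observations within the spatial
    # radius `dist` and temporal radius `time_radius` of the grid point,
    # ranked by correlation (descending), truncated to `limit` - the same
    # policy as Select together with its shell sort.
    gx, gy = grid_pt
    chosen = []
    for (x, y, t, v) in obs:
        r = math.hypot(gx - x, gy - y)
        if abs(date - t) <= time_radius and r <= dist:
            chosen.append((math.exp(-r / B), (x, y, t, v)))
    chosen.sort(key=lambda c: c[0], reverse=True)
    return chosen[:limit]
```

Ranking by correlation rather than raw distance means that, with an anisotropic or space-time covariance, the "nearest" points in the statistical sense are kept, which is exactly why the sort operates on COR.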

Procedure ObjAn(var ier,NObs : integer; Limit : integer;
                XYGrid : GridMatrix2; NearXYObs : MatrixA2;
                NearMeasure,NearT,NearVarMeasure : ArrayA;
                var Obj,Err : real);
{ Procedure to carry out the space-time objective analysis routine.     }
{   NearXYObs   is the array of nearest observation positions           }
{   NearT       is the array of nearest observation times               }
{   NearMeasure is the array of nearest observation values              }
{   XYGrid      is the array of interpolation positions                 }
{   Date        is the central interpolation time                       }
{   Obj         is the interpolated value                               }
{   Err         is the interpolation error                              }
{   NObs        is the number of observation points                     }
{   IER         is an error flag, where                                 }
{     =  0, for no error                                                }
{     >  0, for matrix inversion errors (see matrix inversion routine)  }
{     = -1, for no data (warning)                                       }
{     = -3, for poor matrix inversion (warning)                         }
var
  Ave,H,DumC,DumY,P,P2 : real;

{--------------------- sub-procedure SETA ---------------------}
Procedure Seta(var AA : MatrixA; var NearXYObs : MatrixA2;
               var NearT,NearVarMeasure : ArrayA;
               var ier,NObs : integer; Limit : integer);
{ This procedure calculates the autocorrelation matrix for the observations }
{ given the positions, NearXYObs, and times, NearT. It returns the          }
{ inverted matrix.                                                          }
var
  NA,NV : integer;
  DelT,DelX,DelY,D : real;

{--------------------- sub-procedure INVMTX ---------------------}
Procedure InvMtx(var AA : MatrixA; var NA : integer;
                 var V : MatrixA; var NV : integer;
                 var NObs,ier : integer; var D : real;
                 var IP : ArrayNobsI);
{ A routine to invert matrices }
var
  IEX,IEXMax,m,npm : integer;
  VMax,VH,PVT,PVTMx,Hold : real;

{--------------------- sub-function IERINV ---------------------}
Function IERINV(var NObs,NA,NV : integer) : integer;
begin
  IERInv := 0;
  if (NObs<1) then
  begin
    IERInv := 34;
    writeln('*** NObs < 1 in Procedure InvMtx !! ***');
  end
  else if (NA<NObs) then
  begin
    IERInv := 35;
    writeln('*** NA < NObs in Procedure InvMtx !! ***');
  end
  else if (NV<NObs) then
  begin
    IERInv := 36;
    writeln('*** NV < NObs in Procedure InvMtx !! ***');
  end
end;
{--------------------- end of sub-function IERINV ---------------------}
begin { s u b - p r o c e d u r e InvMtx ) IER := IERINV(NObs,NA,NV); if (IER<>0) then halt; for j := I to NObs do begin IP[j] := 0; for i := I to NObs do V[i,j] end;

IERINV . . . . . . . . . . . . . . . . . . . . . )

:= AA[i,j];

D := 1.0; IEX := 0; for m := I to NObs do begin VMax := 0.0; for j := I to NObs do begin if (IP[j]=0) then for i := I to NObs do begin if (IP[i]=0) then begin VH := abs(V[i,j]); if (VMax<=VH) then begin VMax := VH; k := i; 1 := j; end; end; end; end; IP[I] := k; npm := NObs + m; IP[npm] := i; D := D * V[k,l]; repeat D:=D*0.1; IEX := IEX + I; until (abs(D)<=1.0); PVT := V[k,l]; if (m=1) then PVTMx := abs(PVT); if (abs(PVT/M)+PVTMx = PVTMx) then begin IER := 33; w r i t e l n ( ' * * * Matrix is singular halt; end;

in Procedure

InvMtx

If ***');

Algorithm and Pascal program for geostatistical mapping

499

V[k,l] := 1.0; for J := I to NObs do begin Hold := V[k,j]; V[k,j] := V[I,J]; V[I,J] := Hold/PVT; end; for i := I to NObs do if (i<>l) then begin Hold := V[i,l]; V[i,l] := 0.0; for j := I to NObs do V[i,j] := V[i,j] - V[I,j]*Hold; end; end; M := MObs + NObs ÷ I; for j := I to NObs do begin M := M-I; L := IP[M]; K := IP[L]; if (K(>L) then begin D := -D; for i := I to NObs do begin Hold := V[i,1]; V[i,l] := V[i,k]; V[i,k] := Hold; end; end; end; if (IEX(=IEXMax) then D := D * Power(10.0,IEX) else begin IER := I; D := IEX; writeln('*** Determinant halt; end;

is too large in Procedure

InvMtx

!! ***');

end; { .....................

end of sub-procedure

INVMTX ...................... }

begin { sub-procedure SETA }
  { Test the size of the observation arrays }
  NA := Limit;
  NV := Limit;
  if (NA<NObs) then NObs := NA;
  { Build the autocorrelation matrix for the observations; this segment  }
  { was illegible in the source and is reconstructed from equation (6)   }
  { of the text: AA[i,j] = C(x_i - x_j) + E**2 * delta_ij                }
  for i := 1 to NObs do
    for j := 1 to NObs do
    begin
      DelX := NearXYObs[1,i] - NearXYObs[1,j];
      DelY := NearXYObs[2,i] - NearXYObs[2,j];
      DelT := NearT[i] - NearT[j];
      AA[i,j] := F(DelX,DelY,DelT,B);
      if (i=j) then AA[i,j] := AA[i,j] + NearVarMeasure[i];
    end;
  { Invert the NObs*NObs matrix AA - the inverted matrix is named V }
  InvMtx(AA,NA,V,NV,NObs,ier,D,IP);
  { Transfer inverted matrix (V) to 'original' matrix (AA) }
  for i := 1 to NObs do
  begin
    for j := 1 to NObs do AA[i,j] := V[i,j];
  end;
  if (D<1.0e-12) then
  begin
    writeln('*** Warning!! - Determinant is very small: ',D,' !! ***');
    IER := -3;
  end;
  if (IER<>0) then writeln('*** Matrix A is singular !! ***');
end;
{--------------------- end of sub-procedure SETA ---------------------}

{--------------------- sub-procedure GETAVE ---------------------}
Procedure GetAve(var AA : MatrixA; var NearMeasure : ArrayA;
                 var NObs : integer; var Ave : real);
var
  sum1,sum2 : real;
  C,D : DataArray;
begin
  for i := 1 to NObs do
  begin
    C[i] := 0.0;
    D[i] := 0.0;
    for k := 1 to NObs do
    begin
      C[i] := C[i] + AA[i,k]*NearMeasure[k];
      D[i] := D[i] + AA[i,k];
    end;
  end;
  sum1 := 0.0;
  sum2 := 0.0;
  for i := 1 to NObs do
  begin
    sum1 := sum1 + C[i];
    sum2 := sum2 + D[i];
  end;
  Ave := sum1/sum2;
  { Remove the weighted mean from the observations }
  for i := 1 to NObs do NearMeasure[i] := NearMeasure[i] - Ave;
end;
{--------------------- end of sub-procedure GETAVE ---------------------}

begin { procedure ObjAn }
  Obj := 0.0;
  Err := 1.0;
  IER := -1;
  if (NObs>0) then

  Begin
    { Calculate the inverted autocorrelation matrix for the observations }
    Seta(AA,NearXYObs,NearT,NearVarMeasure,ier,NObs,Limit);
    if (IER<=0) then
    begin
      GetAve(AA,NearMeasure,NObs,Ave);
      { Calculate the RMS interpolation error, Err, and the
        interpolated value, Obj }
      Err := 0.0;
      Obj := 0.0;
      for i := 1 to NObs do
      begin
        H := 0.0;
        DumC := C[i];
        for j := 1 to NObs do
        begin
          P := DumC * C[j] * AA[i,j];
          Err := Err + P;
          P2 := AA[i,j] * NearMeasure[j];
          H := H + P2;
        end;
        DumY := DumC * H;
        Obj := Obj + DumY;
      end;
    end;
    Obj := Obj + Ave;
    Err := abs(1.0-Err);
  End;
end;
{---------------------------------------------------------------------------}

Procedure Diag(Measure : DataArray; MObs : integer);
{ This procedure calculates the 'diagnostics' of the input data }
var
  MeasureMin,MeasureMax,Variance,averg,RMS,SDV,temp : real;
  DataFile4 : text;
begin
  assign(DataFile4,'Output.lst');
  rewrite(DataFile4);
  averg := 0.0;
  Variance := 0.0;
  writeln(DataFile4,' Diagnostics: Characteristics of the input data ');
  writeln(DataFile4,' ============================================== ');
  writeln(DataFile4,' Number of data points: ',MObs);
  if (MObs>0) then
  begin
    MeasureMax := Measure[1];
    MeasureMin := Measure[1];
    averg := Measure[1];
    if (MObs<>1) then
    begin
      for i := 2 to MObs do
      begin
        Variance := ((i-2)*Variance + (i-1)*(Measure[i]-averg)*
                     (Measure[i]-averg)/i) / (i-1);
        averg := ((i-1)*averg + Measure[i])/i;
        if (Measure[i]>MeasureMax) then MeasureMax := Measure[i];
        if (Measure[i]<MeasureMin) then MeasureMin := Measure[i];
      end;
    end;
    { Tail of the procedure reconstructed (illegible in the source):
      report the summary statistics }
    SDV := sqrt(Variance);
    writeln(DataFile4,' Average of data:    ',averg:10:3);
    writeln(DataFile4,' Variance of data:   ',Variance:10:3);
    writeln(DataFile4,' Standard deviation: ',SDV:10:3);
    writeln(DataFile4,' Minimum, maximum:   ',MeasureMin:10:3,MeasureMax:10:3);
  end;
  close(DataFile4);
end;
{---------------------------------------------------------------------------}
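The one-pass update used in Diag deserves a check, since running-variance recurrences are easy to get wrong. This Python sketch (illustrative, not from the paper) applies the same recurrence and compares it with the standard library's sample statistics:

```python
import statistics

def running_stats(data):
    # One-pass update as used in Diag:
    #   var_i  = ((i-2)*var + (i-1)*(x_i - mean)^2 / i) / (i-1)
    #   mean_i = ((i-1)*mean + x_i) / i
    avg = data[0]
    var = 0.0
    for i in range(2, len(data) + 1):
        x = data[i - 1]
        var = ((i - 2) * var + (i - 1) * (x - avg) ** 2 / i) / (i - 1)
        avg = ((i - 1) * avg + x) / i
    return avg, var
```

The recurrence reproduces the sample (n-1 denominator) variance exactly, so the diagnostics printed by the program agree with the usual definitions.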
Procedure Results(NPX,NPY : integer; ObjField,ErrorField : GridArray);
{ This procedure prints out results - mapped data and error fields }
var DataFile3 : text;
begin
  { write to screen }
  writeln(' ');
  writeln('Data Field:');
  for j := NPY downto 1 do
    for i := 1 to NPX do
      if (i<NPX) then write(ObjField[(j-1)*NPX + i]:7:3,' ')
      else writeln(ObjField[(j-1)*NPX + i]:7:3,' ');
  writeln(' ');
  writeln(' ');
  writeln('Error Field:');
  for j := NPY downto 1 do
    for i := 1 to NPX do
      if (i<NPX) then write(ErrorField[(j-1)*NPX + i]:7:3,' ')
      else writeln(ErrorField[(j-1)*NPX + i]:7:3,' ');
end;
{---------------------------------------------------------------------------}
{---------------------------- END OF PROCEDURES ----------------------------}
{---------------------------------------------------------------------------}

{------------------------------ MAIN PROGRAM -------------------------------}

Begin
  { --- Read objective analysis parameters --- }
  OAParameters(B,E,Dist,Date,Time,Limit,NPX,NPY,DateDay,DateMonth,DateYear);

  { --- Read measurement data --- }
  ReadData(XYObs,Measure,VarMeasure,TObs,MObs);
  ClrScr;
  Writeln('Number of observations (data points) is: ',MObs);

  { --- Calculate grid locations where mapped values should be calculated.
        Matrix XYGrid contains the coordinates --- }
  MakeGrid(NPX,NPY,MObs,XYObs,XYGrid);

  { --- Loop over grid locations where objective analysis values are to be
        calculated: SELECT observations to be used for OA at each grid
        point, and call OBJAN to do the objective analysis at the point --- }
  for kx := 1 to NPX*NPY do
  Begin
    Select(kx,Limit,Dist,Time,Date,XYObs,Cor,Index,MObs,NObs,
           XYGrid,Measure,TObs,VarMeasure,NearXYObs,NearMeasure,
           NearT,NearVarMeasure,C);
    ObjAn(ier,NObs,Limit,XYGrid,NearXYObs,NearMeasure,NearT,
          NearVarMeasure,Obj,Err);
    { Save calculated results for each node in an array }
    ObjField[kx] := Obj;
    ErrorField[kx] := Err;
    Writeln('kx = ',kx,'  Obj = ',Obj,'  Err = ',Err);
  End; { End for }

  { --- Print diagnostics --- }
  Diag(Measure,MObs);

  { --- Print results - maps of calculated values and variances --- }
  Results(NPX,NPY,ObjField,ErrorField);
  writeln;
  writeln('*** Calculations completed - normal termination ***');
  writeln('*** Press <Enter> to continue...                ***');
  readln;
End.