Nucl. Tracks Radiat. Meas., Vol. 13, No. 4, pp. 177-184, 1987
Int. J. Radiat. Appl. Instrum., Part D
Printed in Great Britain
0191-278X/88 $3.00 + 0.00
Pergamon Press plc
REGRESSION AND ERROR ANALYSIS APPLIED TO THE DOSE-RESPONSE CURVES IN THERMOLUMINESCENCE DATING

G. W. BERGER
Geology Department, Western Washington University, Bellingham, WA 98225, U.S.A.

R. A. LOCKHART
Statistics Department, University of Toronto, Toronto, Ontario, Canada M5S 1A1

and

J. KUO
Mathematics Department, Simon Fraser University, Burnaby, B.C., Canada V5A 1S6

(Received 20 August 1987)
Abstract--In the dating of Quaternary geological materials by thermoluminescence (TL), one frequently encounters sublinear TL vs applied dose curves for which accurate extrapolation is required. For dating the last exposure to sunlight of unheated Quaternary sediments, the intersection of two sublinear dose-response curves is usually sought. Adequate methods for estimating the uncertainty in this intersection have not been previously described. Here we outline an iteratively reweighted least-squares procedure for calculating the coefficients of second-order polynomials as well as saturating exponentials, and a variance-covariance method of error analysis appropriate for the intersection of two dose-response curves. Applications to several data sets are presented. The procedures described here are equally applicable, with trivial changes, to extrapolations of single sublinear dose-response curves (the additive dose technique).
1. INTRODUCTION

WITHIN the past several years thermoluminescence (TL) techniques (Wintle and Huntley, 1982; Aitken, 1985; Berger, 1986) have been applied to the dating of unheated Quaternary sediments with variable success. The event being dated is the last exposure to sunlight. Three of the principal causes of erroneous results have been: (1) uncertainty in the degree of resetting of the different TL mineral clocks at deposition; (2) failure to correct for a type of TL instability called anomalous fading; and (3) inappropriate TL data processing. Means for dealing with the first two factors have been demonstrated (see Berger, 1986, 1987; Berger et al., 1987). In particular, these studies have shown that the preferred TL method for measuring the equivalent dose (TL apparent age = equivalent dose/effective dose-rate) is the partial bleach or R-gamma procedure introduced by Wintle and Huntley (1980). With this procedure, accurate extrapolation of two dose-response curves to their intersection is required (Fig. 1). This is straightforward for young samples (<10-20 x 10^3 yr (ka)), which often exhibit linear response curves, and routine weighted least-squares fitting of a straight line (e.g. York, 1966) can be used. For young marine sediments, Berger et al. (1984) used such straight-line least-squares fits and an ad hoc
procedure for estimating the variance in the intersection point. However, depending upon several factors, many sediments greater than 10-20 ka in age will exhibit sublinear dose-response curves. For these, a rigorous and accurate, yet simple, technique for calculating the variance in the intersection is required. Here we describe such a technique. For very old samples (e.g. greater than 100 ka) or at high applied doses (greater than a few kGy), the electron traps contributing to the TL signal generally become filled, and the simplest physically realistic model (first-order kinetics) describing the dose-response leads to a saturating exponential form (Huntley et al., 1987). Indeed, Berger (1987) has demonstrated that this model satisfies the data for a young loess given large artificial doses. Thus, for many samples, saturating exponential functions are generally appropriate for the required extrapolations and error analysis. Nevertheless, there are also many samples for which the dose-response curves are only somewhat sublinear over the range of useful doses. That is, in the expansion

    exp(-x) = 1 - x + x²/2! - x³/3! + ...,
x is small and therefore sufficiently accurate extrapolation and error analysis can be obtained by using a
Fig. 1. An idealized illustration of the partial bleach (R-gamma) method for measuring the equivalent dose (De) in an unheated sediment, at one particular temperature in the laboratory readout of TL (glow curves). N represents the TL intensity of the "natural" or unirradiated and unbleached sample. Subsamples given a laboratory gamma (or beta) dose (filled circles) define the additive dose curve. Subsamples of these illuminated (open circles) for an arbitrary time define the optical bleach curve. Accurate extrapolation to the intersection point is required.
quadratic polynomial to represent the early build-up curves.
The quadratic would be appropriate when sublinearity begins to set in at low applied doses (e.g. Berger et al., 1984; Berger, 1984, 1985a,b), whereas the saturating exponential could be used when sublinearity becomes more extreme (e.g. Berger, 1987; Berger et al., 1987). The advantage of the quadratic function is that computation is simpler and more rapid (e.g. convergence is assured for all data sets). We stress that the quadratic polynomial ought not to be used when the required extrapolations represent more than about 20% of the applied dose range. The purpose of this paper is, therefore, to outline an iteratively reweighted least-squares (or quasi-likelihood, e.g. McCullagh and Nelder, 1983) regression procedure for the partial bleach method using either second-order polynomials or saturating exponentials in place of the straight lines of Fig. 1. Also, a suitably accurate and simple means (the delta or Taylor expansion method) for estimating the variance of the intersection is outlined. This error analysis takes account of the estimates of variances and covariances for the calculated coefficients in the functions.
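As a numerical aside (our illustration, not part of the original paper), the adequacy of a quadratic stand-in for the saturating exponential can be checked directly: truncating the exponential's expansion at the second-order term reproduces the full curve closely at small doses but diverges as the dose grows. The parameter values below are arbitrary:

```python
import math

# Saturating exponential f(D) = a*(1 - exp(-b*(d + D))) and its quadratic
# Taylor truncation about D = 0; illustrative parameter values only.
a, b, d = 100.0, 1.5, 0.3

def f_exp(D):
    return a * (1.0 - math.exp(-b * (d + D)))

def f_quad(D):
    # exp(-b*(d+D)) = exp(-b*d) * (1 - b*D + (b*D)**2/2 - ...), truncated
    e0 = math.exp(-b * d)
    return a * (1.0 - e0) + a * e0 * b * D - a * e0 * (b * D) ** 2 / 2.0

for D in (0.1, 0.5, 1.0):
    rel = abs(f_quad(D) - f_exp(D)) / f_exp(D)
    print(f"D={D}: exp={f_exp(D):.2f}, quad={f_quad(D):.2f}, rel. diff={100*rel:.2f}%")
```

The relative difference is well under 1% at small D but grows rapidly once bD is no longer small, which is the quantitative content of the 20% extrapolation guideline above.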
2. ASSUMPTION OF ERRORS

In these partial bleach experiments, the primary cause of imprecision in each data point is probably subsample inhomogeneity, rather than instrumental error. Consequently, it is expedient to assume all the error is in the TL (reproducibility is a few per cent) and to ignore the smaller errors in the applied dose (for which relative errors are less than a few tenths of a per cent). This greatly simplifies the curve fitting. Subsample inhomogeneity is difficult to control because of the sensitivity of the TL process to grain-to-grain variations in defect structures and trace impurities, and because of the polymineralic nature
of most samples (usually when <10 μm grains are used). Typically, hundreds of grains (if >100 μm) or tens of thousands of grains (if <10 μm) occur in each subsample (each data point in Fig. 1). Therefore, in the face of this intrinsic complexity, the most satisfactory assumption that one can presently make about the errors in each TL measurement is that they represent a constant, unknown-in-advance, per cent error throughout each set of subsamples. A simple model can be constructed which leads to approximately constant per cent errors. Suppose the total TL emission of any sample is produced by adding together the TL emissions of a random number of grains. If the TL emission of each grain is Poisson distributed with the same intensity, ε, then a probability identity (Appendix A) shows that the variance of the total TL emission is the sum of two terms: one proportional to ε² and one proportional to ε. Realistic assumptions (Appendix A) suggest that the ε² term will dominate. Thus, the variance of the total TL emission should be proportional to the square of its mean; in other words, the per cent error will be constant. The assumption of constant per cent error (suggested to G.W.B. by D. J. Huntley) was used in a study of young marine sediments (Berger et al., 1984) to assign weights to each data point in straight-line, least-squares fits. Because straight lines were used, the variance in the intersection point was estimated without explicit reference to any per cent error factor, so it made no difference whether the per cent factor was assumed to differ between the two lines. However, for the models of sublinear TL growth curves treated here, the per cent error factors appear explicitly in our expressions for computation of the variance, and we must ask whether one should distinguish between these factors for the two curves. Arguments can be made for and against a distinction.
For example, the fact that both curves represent the same subsample would suggest a single per cent error factor. On the other hand, the lower curve (e.g. Fig. 1) represents a portion of the subsample given a different treatment (optical bleaching) than the upper curve. The detailed physical processes involved in this bleaching are unknown, and this ambiguity, coupled with the polymineralic nature of most samples (different minerals have different bleaching rates), suggests that a distinction between the per cent error factors is not unreasonable. However, in this paper, in the absence of contrary evidence, we take the view that a single per cent error factor describes both curves in Fig. 1. As a final remark, the procedure described below for obtaining a point estimate (the intersection) and standard error is preferred for TL dating over methods which yield interval estimates, when the point estimates have a normal distribution. This condition is likely to be met when large subsample populations are used or when the per cent error factors are small,
as they are for all the data sets discussed here. If, on the other hand, the per cent error factors are large and/or the distribution of point estimates is asymmetric, then the (intrinsically complex) methods of interval estimation (e.g. Fieller, 1954) are more appropriate. However, such data manifest poor experimental technique, and consequently have little value in TL dating. Specifically, the methods of interval estimation (likelihood ratio methods) are numerically fierce and, when per cent error factors are small, do not necessarily produce better approximations to error estimates than the speedier, simpler delta method described here.
3. COEFFICIENTS BY ITERATIVELY REWEIGHTED LEAST-SQUARES

Standard matrix methods are available in texts (e.g. Seber, 1977) for weighted regression to polynomials, though these generally require different assumptions about errors in the data points than are appropriate for TL dating. In this section we summarize these methods and employ our assumed weighting scheme to obtain nearly fully efficient estimates of function coefficients. As well, we obtain an estimate of the constant per cent error in each TL measurement in a standard way from the observed scatter in the data about the regression curve. Ideally, were more known about the error in each TL data point, the data scatter could be used to obtain a chi-squared estimate and compute a goodness-of-fit parameter. Unfortunately, that is presently not possible. We represent the observed TL by iY and the applied dose by iD, where the presubscript i denotes one temperature point on a glow curve (TL intensity vs laboratory heating temperature). For successful dating, the intersection point (Fig. 1) must be computed at each glow-curve temperature and then plotted against temperature. A constant value or plateau in this plot is called the equivalent dose in the age equation, and is thought to represent those electron traps that have been thermally stable over the time span of interest.
3.1. The model

We wish to fit a model of the form

    iY = f(iD, θ)(1 + σ iξ)    (1)

where: σ is the relative error in a single measurement; iξ is the random error and is Gaussian with mean 0 and variance 1; θ is a vector of p parameters (or coefficients); and

    f(D, θ) = a + bD + cD²

or

    f(D, θ) = a(1 - exp[-b(d + D)]).

Here d is the dose-axis intercept, or the equivalent dose if only a single dose-response curve is to be fitted. This single-curve approach is called the additive dose procedure (Aitken, 1985) and has a special usefulness for dating pottery or heated geological material (e.g. tephra). The associated regression and error analysis follow straightforwardly as a special case of the treatment we present here for the intersection of two curves (Fig. 1).

There are three ways to estimate the parameters (a, b, c/d): least squares, maximum likelihood, and quasi-likelihood. When σ is small (e.g. <5%), all three methods would yield nearly identical results. However, the least-squares method is unsuitable because it leads to inconsistent estimates for the saturating exponential model. The maximum likelihood technique has several disadvantages. For example, when σ is assumed to be the same for both curves, we need to work with a seven-parameter problem and to fit both curves simultaneously, producing relatively lengthy computations. On the other hand, the quasi-likelihood method has several advantages. It is computationally simple and fast. It is almost as efficient as other methods when σ is <5%. The formulae are correct even if the errors (ξ) are not Gaussian, unlike maximum likelihood, for which the assumption of a Gaussian distribution of errors is important. That is, for the quasi-likelihood method we need to assume only that the forms of the mean and variance are known. Below, we consider first the quadratic, then the saturating exponential. We also drop the presubscript to Y and D because all statements apply only to one temperature point i.

3.2. The quadratic

In matrix notation (when the curves in Fig. 1 are labelled 1 and 2, top to bottom), the estimated parameter vector θ̂₁ is a solution to the following normal equations for unequal weights (e.g. Draper and Smith, 1981, p. 109; Seber, 1977, p. 317):

    θ̂₁ = (D₁ᵀ W₁ D₁)⁻¹ (D₁ᵀ W₁ Y₁)    (2)

where D₁ is the n x 3 design matrix whose jth row is (1, D₁ⱼ, D₁ⱼ²), Y₁ = (Y₁₁, ..., Y₁ₙ)ᵀ, and W₁ = diag(w₁₁, ..., w₁ₙ),
for n subsamples or data points on curve 1. Here the weight w₁ⱼ is the inverse of the variance in the data point. The best estimate of this variance is

    [σ̂(â₁ + b̂₁D₁ⱼ + ĉ₁D₁ⱼ²)]².

In equations (2) the constant 1/σ̂² will cancel out, so that the weighting elements can be represented by

    w₁ⱼ = (â₁ + b̂₁D₁ⱼ + ĉ₁D₁ⱼ²)⁻²  or  f⁻²(D₁ⱼ; θ̂₁).    (3)

To compute the coefficients, we have adopted a quasi-likelihood or iteratively reweighted least-squares procedure, as follows:

(i) begin with w₁ⱼ = 1, find ₁θ̂ from (2);
(ii) recalculate w₁ⱼ from (3);
(iii) find new estimates ₂θ̂ from (2);
(iv) iterate to the desired convergence precision, obtaining the estimates θ̂ᵣ for the rth iteration, where w₁ⱼ is held fixed by θ̂ᵣ₋₁.

These steps are then repeated for curve 2, and an estimate of σ is calculated from the pooled weighted sums of squares of residuals, as follows:

    σ̂² = [Σⱼ w₁ⱼ(Y₁ⱼ - f₁ⱼ)² + Σⱼ w₂ⱼ(Y₂ⱼ - f₂ⱼ)²] / (n₁ + n₂ - 6)    (4)

for n₁ data points in curve 1 and n₂ in curve 2.

The above estimate θ̂ᵣ minimizes the weighted squared residual

    R₁(θ) = Σⱼ w₁ⱼ(Y₁ⱼ - f₁ⱼ)²,  j = 1, ..., n₁    (5)

when w₁ⱼ is treated as a constant (since θ̂ᵣ₋₁ is treated as a constant). The final estimate θ̂∞ is thus a solution to the equations

    Uₖ(θ) = Σⱼ [(Y₁ⱼ - f(D₁ⱼ, θ)) / f(D₁ⱼ, θ)] ∂ln f(D₁ⱼ, θ)/∂θₖ = 0    (6)

for curve 1.

3.3. The saturating exponential

In this case f is a non-linear function of the parameters a, b and d. It can be linearized using the Gauss-Newton algorithm. To do this we replace the weighted squared residual in (5) with

    R₁(θ) = Σⱼ w₁ⱼ[Y₁ⱼ - f₁ⱼ - Σₖ₌₁³ βₖ Gₖ(D₁ⱼ)]²    (7)

where f₁ⱼ = f₁(D₁ⱼ, θ̂ᵣ); βₖ = θₖ - θ̂ᵣₖ for the kth parameter; Gₖ(D₁ⱼ) = ∂f₁ⱼ/∂θₖ; and θ̂ᵣ is the estimate of θ from the rth iteration (equation (8) below). Since Σₖ βₖGₖ(D₁ⱼ) is linear in βₖ, R(θ) = R(β) may be minimized by the iteratively reweighted least-squares procedure to get estimates β̂. Then

    θ̂ᵣ₊₁ = θ̂ᵣ + β̂    (8)

and the iteration is repeated to the desired convergence precision. Here β̂ is a solution to the following normal equations, for curve 1:

    β̂₁ = (X₁ᵀ W₁ X₁)⁻¹ (X₁ᵀ W₁ Y₁*)    (9)

where the elements of the design matrix X₁ are

    xⱼₖ = ∂f₁(D₁ⱼ, θ̂)/∂θₖ  for j = 1, ..., n and k = 1, ..., 3,

and the weights are w₁ⱼ = f₁⁻²(D₁ⱼ; θ̂) as in equation (3). In equations (9) the response vector Y₁* has elements Y₁ⱼ - f₁ⱼ (see (7), and note the difference with (2), where Y₁ has elements Y₁ⱼ). Expressions for the elements of X₁ are given in Appendix B.

To compute the estimates of the coefficients a, b, d for the saturating exponential, we have used the following procedure:

(i) Obtain initial estimates a₀, b₀, d₀. In practice, we estimate these for the first temperature point i either manually or, for moderately sublinear data, by fitting an unweighted quadratic. That is, we set w₁ⱼ = 1 in equations (2) and obtain

    a₀ = a - b²/2c,  b₀ = -2c/b  and  d₀ = -ln(b/b₀a₀)/b₀.

Other procedures for obtaining initial estimates can be used.
(ii) Calculate the weights w₁ⱼ using the exponential form of f in equation (1).
(iii) Find new estimates of β̂ from (9).
(iv) Obtain new estimates of θ̂ from (8).
(v) Repeat from step (ii) to the desired convergence precision, that is, until β̂ is small.

These steps are then repeated for curve 2, and a pooled estimate of σ is calculated from (4). From this it follows that in the limit of θ̂ᵣ₋₁ = θ̂ᵣ, the final estimate θ̂∞ is again a solution to equation (6). To summarize, the essential difference between the procedures for the quadratic and exponential models is that in the latter the estimates β̂ᵣ minimize exactly the approximate expression (7) at each step, whereas for the quadratic the limiting estimate θ̂∞ does not minimize expression (5). However, θ̂ᵣ does minimize (5) when θ̂ᵣ₋₁ (in w₁ⱼ) is treated as a constant.
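The iteration (i)-(iv) for the quadratic can be sketched in a few lines. This is our own illustrative translation (function and variable names are ours), assuming the constant relative error model of Section 2:

```python
import numpy as np

def irls_quadratic(D, Y, n_iter=20):
    """Iteratively reweighted least-squares fit of Y ~ a + b*D + c*D**2
    with weights w_j = f(D_j)**-2 (constant per cent error model).
    Sketch only: names and convergence handling are our choices."""
    X = np.column_stack([np.ones_like(D), D, D * D])  # design matrix of eq. (2)
    w = np.ones_like(Y)                               # step (i): start unweighted
    for _ in range(n_iter):
        W = np.diag(w)
        theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)  # normal equations (2)
        f = X @ theta
        w = 1.0 / f ** 2                              # step (ii): reweight, eq. (3)
    return theta, X @ theta, w

def sigma_hat(Y1, f1, w1, Y2, f2, w2):
    """Pooled relative-error estimate from both curves, eq. (4)."""
    rss = np.sum(w1 * (Y1 - f1) ** 2) + np.sum(w2 * (Y2 - f2) ** 2)
    return np.sqrt(rss / (len(Y1) + len(Y2) - 6))
```

In practice the loop converges in a handful of iterations because the weights change only weakly once the fitted curve is near the data.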
4. THE VARIANCE-COVARIANCE MATRIX

As we have already obtained estimates of the parameters θₖ, we are now interested in finding estimates of the variances of these estimated parameters, and ultimately the variance of the intersection point for curves 1 and 2 in Fig. 1. Essentially, (θ̂ - θ) has approximately a multivariate normal distribution with mean 0 and variance-covariance matrix

    Φ₁ = σ²[I(θ)]⁻¹    (10)

where I is the quasi-likelihood symmetric information matrix (McCullagh and Nelder, 1983). The diagonal elements of σ̂²[I(θ̂)]⁻¹ give the variances of θ̂, while the covariances are given by the off-diagonal elements. The matrix I is constructed simply. Let Ψ(θ) be the matrix ∂Uₖ/∂θₛ (Uₖ is defined in (6)); then

    I(θ) = -E[Ψ(θ)]

where E denotes the mathematical expectation. The k, s element of I is thus (the expectation of the (Y - f)/f terms being zero):

    Iₖₛ = Σⱼ [1/f²(Dⱼ, θ)] [∂f(Dⱼ, θ)/∂θₖ] [∂f(Dⱼ, θ)/∂θₛ].    (11)

In large samples, -Ψ and I will be close together. The elements of the matrix I for each of the quadratic and the saturating exponential models are given in Appendix C.

5. THE INTERSECTION POINT

The value of the dose at the intersection point is the equivalent-dose value De that we seek. De is a solution to the equation

    f₁(De, θ̂₁) - f₂(De, θ̂₂) = 0.    (12)

For the quadratic model, the negative De value is simply

    De = {-(b₁ - b₂) + [(b₁ - b₂)² - 4(a₁ - a₂)(c₁ - c₂)]^(1/2)} / [2(c₁ - c₂)]    (13)

where all values are the estimates of the parameters. For the saturating exponential model, the solution De of equation (12) can be obtained by using the Newton-Raphson numerical method (e.g. McCalla, 1967).

6. ESTIMATE OF VARIANCE IN THE INTERSECTION

From the regressions on the two curves we have obtained seven parameter estimates, four for each curve (e.g. â₁, b̂₁, ĉ₁, σ̂), one of which (σ̂) is common to both. The estimate D̂e is a function of only six of these parameters, namely â₁, b̂₁, ĉ₁, â₂, b̂₂, ĉ₂, so that we must extend the variance-covariance matrix to the multi-dimensional case (Serfling, 1980, pp. 148 and 152). Thus the extended matrix has the form

    Φ = [Φ₁  0 ]
        [0   Φ₂]    (14)

where Φ₁ and Φ₂ are 3 x 3 matrices (from (10)). The variance of D̂e is thus nearly

    Δ² = (V₁ᵀ, V₂ᵀ) Φ (V₁ᵀ, V₂ᵀ)ᵀ / (∂f₁/∂D - ∂f₂/∂D)²    (15)

where V₁ and V₂ are vectors of length 3 and

    V₁ᵀ = (∂f₁/∂a₁, ∂f₁/∂b₁, ∂f₁/∂c₁)    (16)

(note that ∂f₁/∂c becomes ∂f₁/∂d for the exponential model). Equation (15) simplifies to

    Δ² = (V₁ᵀ Φ₁ V₁ + V₂ᵀ Φ₂ V₂) / (∂f₁/∂D - ∂f₂/∂D)²    (17)

where the derivatives in the denominator are evaluated at De. Expressions for the elements of the vectors V and the denominator in (17) are given in Appendix D.
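To make the use of equations (13) and (17) concrete, here is a brief sketch (ours, with hypothetical variable names) of the intersection and its delta-method variance for the quadratic case, where the covariance matrices would come from σ̂²[I(θ̂)]⁻¹ of equation (10):

```python
import numpy as np

def intersection_quadratic(t1, t2):
    """Equivalent dose from eq. (13): solve f1(De) = f2(De) for two
    quadratics t = (a, b, c).  Sketch only; the root returned is the
    '+' branch of eq. (13)."""
    a1, b1, c1 = t1
    a2, b2, c2 = t2
    da, db, dc = a1 - a2, b1 - b2, c1 - c2
    return (-db + np.sqrt(db * db - 4.0 * da * dc)) / (2.0 * dc)

def delta_variance(De, t1, t2, cov1, cov2):
    """Delta-method variance of the intersection, eq. (17) with the
    Appendix D simplification V1 = V2 = (1, De, De**2):
    Delta**2 = V (cov1 + cov2) V / (2*De*(c1 - c2) + (b1 - b2))**2.
    cov1, cov2 are the 3x3 parameter covariance matrices."""
    V = np.array([1.0, De, De * De])
    denom = (2.0 * De * (t1[2] - t2[2]) + (t1[1] - t2[1])) ** 2
    return V @ (cov1 + cov2) @ V / denom
```

The denominator is simply the squared difference of the slopes of the two quadratics at De, so the variance inflates as the two curves approach tangency, as intuition suggests.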
7. APPLICATION TO WATERLAID SEDIMENTS

The computational procedures outlined above for the quadratic have been applied to glaciolacustrine silts and silty clays (Berger, 1985a,b; Berger et al., 1987) and to loess (Berger, 1987). In the course of developing the methods described here for the saturating exponential model, we also developed a method for cubic polynomials as an intermediate step toward the solution of the error analysis problem for the exponential model. Thus, although some applications of the cubic polynomial model have already been presented elsewhere (Berger et al., 1987; Berger, 1987), the details of the procedures will not be described here because they are no longer useful. Rather, some results for the cubic model will be included here only to illustrate how closely they can approximate those from the quadratic and exponential models, and in this way to justify our previous use of the cubic model. Here we will present calculated equivalent doses and estimates of their standard deviations at single temperature points of the glow curves for three samples from the study of Berger et al. (1987). The distinction between the results presented here for the saturating exponential model and those presented by Berger et al. (1987) and Berger (1987) is that here a weighted regression and error analysis is used, whereas previously only unweighted regressions without error analysis were employed for this model. Our first example compares the results obtained from cubics, quadratics, and exponentials (Fig. 2).
Fig. 2. A comparison of the weighted least-squares quadratics, cubics and exponentials for the partial bleach data of 2-4 μm sized grains from the glaciolacustrine silt sample QNL84-3.5W, at 260°C of the glow curves. The data in the lower half are the same as those on the left of the upper half. Some data points are not shown because of overlap.

Cubic polynomials have been applied to the same data as the quadratics (lower part of Fig. 2), as well as to these data extended to higher applied doses (upper part of Fig. 2). In the lower example, the computed least-squares cubics and quadratics are visually indistinguishable on the scale shown, and only the quadratics are shown. In Table 1 the computed intersections (46.6 and 50 Gy) are also indistinguishable within one standard deviation. The standard deviation estimated with the use of cubics (±14 Gy) is much greater than that for the quadratics (±3.3 Gy), as expected.

Table 1. Computed equivalent-dose values for weighted quadratic, cubic and exponential functions in the partial bleach method, at 260°C of the glow curves*

Sample        Bleach†        Dose (kGy)   Fit‡   De§ (Gy)     σ̂‖ (%)
QNL84-3.5W    3-67/40 min    0-0.6        Q      46.6 ± 3.3   3.3
              3-67/40 min    0-0.6        C      50 ± 14      3.5
              3-67/40 min    0-1.2        C      45.6 ± 6.2   3.3
              3-67/40 min    0-1.2        E      40.9 ± 5.9   4.1
WL84-5        3-67/40 min    0-0.5        Q      68.9 ± 9.7   4.3
              3-67/40 min    0-1.0        E      69.3 ± 7.6   4.2
QNL84-2       3-67/40 min    0-1.6        C      160 ± 24     3.9
              3-67/40 min    0-1.6        E      140 ± 16     3.9
              3-67/24 hr     0-1.6        C      175 ± 15     3.0
              3-67/24 hr     0-1.6        E      170 ± 11     3.0

*This temperature denotes the integrated 251-260°C region of the glow curves. This particular temperature is representative of a plateau in De values for these samples.
†This is the Corning sharp-cut optical filter and the illumination time for the optical bleaching.
‡Q = quadratic, C = cubic, E = saturating exponential.
§The equivalent dose value and estimate of the standard deviation.
‖Estimate of the per cent standard deviation in each TL measurement (equation (4)).

The upper part of Fig. 2 illustrates the application of cubics and exponentials to the extended data of the lower part of Fig. 2. The resulting intersection for the cubic (45.6 ± 6.2 Gy, Table 1) agrees more closely with that obtained with the quadratics (46.6 ± 3.3 Gy) than it did for the lower half of Fig. 2. Here the inflexion points in the cubics are clearly visible. For comparison, the intersection obtained with weighted saturating exponentials (40.9 ± 5.9 Gy) is also given in Table 1. Generally, the use of either polynomial approximation will tend to overestimate the theoretically correct result obtained from the weighted exponential model. However, the extent of this overestimation may not be significant at the level of one standard deviation (Table 1) unless the extrapolation represents a large fraction (say >15-20%) of the applied dose range.

No difference in De values was observed in the second example (WL84-5) using both the quadratic and exponential models. The extent of extrapolation of the quadratics was about 14%.

Fig. 3. A comparison of weighted least-squares cubics with weighted quasi-likelihood saturating exponentials for the partial bleach data of 2-4 μm grains from the glaciolacustrine silt sample QNL84-2, at 260°C of the glow curves. Some data points are not shown because of overlap. The middle curve represents an illumination of 40 min; the lower, 24 h.

Our third example compares the results for only the cubics and saturating exponentials (Fig. 3). Here the least-squares fits are only slightly distinguishable by eye, and the computed intersections (sample QNL84-2 in Table 1) appear to agree. This example demonstrates that these methods of regression and error analysis for cubic polynomials can provide reasonable accuracy in extrapolations when the TL dose-response curves begin to show evidence of saturation at high doses. This example therefore justifies the previous use of the weighted cubic polynomial (Berger, 1987; Berger et al., 1987), but as emphasized above, the weighted exponential model is preferred
on theoretical grounds, and is thus certainly more accurate in general. With the cubic, unlike with the quadratic, underestimation as well as overestimation of the theoretically preferred De value can occur. Such underestimation can occur when too few data define the bleached curve and an upward inflexion appears toward the lower end of the applied dose range. This behaviour is another limitation to the use of polynomials above order two.
8. CONCLUSIONS

When the dose-response curves of the partial bleach method of TL dating are sublinear, accurate extrapolations to compute the equivalent dose value can be carried out using an iteratively reweighted least-squares, or quasi-likelihood, procedure applied to a saturating exponential model. A constant, unknown relative error is assumed for each TL measurement point and no error is assumed in the applied dose. Accurate estimates of the standard deviations in these equivalent dose values can be computed with ease and rapidity using the delta method, which takes into account estimates of both the variances and covariances in function parameters.

When these dose-response curves are only slightly sublinear and when the required extrapolations are less than 15-20%, say, of the applied dose range, then the saturating exponential model can be replaced by a quadratic polynomial. This substitution has the advantage of computational speed, since the problems of convergence and of choosing initial estimates of the parameters do not arise. The limitation of this polynomial extrapolation is that the estimates of equivalent dose values will be larger than those obtained with the exponential model, but this difference will likely be less than one standard deviation.

These methods of regression and error analysis are readily applicable, with transparent modifications, to the additive-dose method of TL dating. For this, extrapolations of only single dose-response curves are required, rather than the intersection of two such curves.

Finally, the methods described here will provide statistically accurate estimates only when the subsample size is "large" (e.g. >10-15 per curve) and when the per cent errors per measurement are "small" (e.g. ≤5%). These limitations are not, however, debilitating, because data sets with a small number of subsamples and/or large per cent errors manifest poor experimental technique, and are therefore of little value in TL dating applications.

Acknowledgements--G.W.B. gratefully acknowledges fruitful discussions over the years with D. J. Huntley and R. Divigalpitiya.

REFERENCES

Aitken M. J. (1985) Thermoluminescence Dating. Academic Press, New York.
Berger G. W. (1984) Thermoluminescence dating studies of glacial silts from Ontario. Can. J. Earth Sci. 21, 1393-1399.
Berger G. W. (1985a) Thermoluminescence dating studies of rapidly deposited silts from south-central British Columbia. Can. J. Earth Sci. 22, 704-710.
Berger G. W. (1985b) Thermoluminescence dating applied to a thin winter varve of the late glacial South Thompson silt, south-central British Columbia. Can. J. Earth Sci. 22, 1736-1739.
Berger G. W. (1986) Dating Quaternary deposits by luminescence--recent advances. Geosci. Canada 13, 15-21.
Berger G. W. (1987) Thermoluminescence dating of the Pleistocene Old Crow tephra and adjacent loess, near Fairbanks, Alaska. Can. J. Earth Sci.
Berger G. W., Clague J. J. and Huntley D. J. (1987) Thermoluminescence dating applied to glaciolacustrine sediments from central British Columbia. Can. J. Earth Sci. 24, 425-434.
Berger G. W., Huntley D. J. and Stipp J. J. (1984) Thermoluminescence studies on a 14C-dated marine core. Can. J. Earth Sci. 21, 1145-1150.
Bickel P. J. and Doksum K. A. (1977) Mathematical Statistics. Holden-Day, San Francisco.
Draper N. R. and Smith H. (1981) Applied Regression Analysis (2nd edn). John Wiley & Sons, New York.
Fieller E. C. (1954) Some problems in interval estimation. J. R. Stat. Soc. B 16, 175-185.
Huntley D. J., Berger G. W. and Bowman S. G. E. (1987) Thermoluminescence responses to alpha and beta irradiations, and age determinations when the high dose response is non-linear. Radiat. Effects.
McCalla T. R. (1967) Introduction to Numerical Methods and Programming. John Wiley & Sons, New York.
McCullagh P. and Nelder J. A. (1983) Generalized Linear Models. Chapman & Hall, New York.
Seber G. A. F. (1977) Linear Regression Analysis. John Wiley & Sons, New York.
Serfling R. J. (1980) Approximation Theorems of Mathematical Statistics. John Wiley & Sons, New York.
Wintle A. G. and Huntley D. J. (1980) Thermoluminescence dating of ocean sediments. Can. J. Earth Sci. 17, 348-360.
Wintle A. G. and Huntley D. J. (1982) Thermoluminescence dating of sediments. Quat. Sci. Rev. 1, 31-53.
York D. (1966) Least-squares fitting of a straight line. Can. J. Phys. 44, 1079-1088.

APPENDIX A: A MODEL FOR ERRORS

Suppose that in each subsample (each data point in Fig. 1) the number of grains emitting any TL signal is a random quantity N (the total number of grains in each subsample is ≥ N). Suppose also that each emitting grain produces a Poisson-distributed number of photons with the same mean, or intensity, ε. Of course ε will change according to the applied dose and/or optical bleaching. Then the total TL emitted is the sum of a random number of Poisson emissions. A well-known identity of probability (Bickel and Doksum, 1977, equation 1.6.12, p. 36) then states that

    V(total TL) = V[E(total TL | N)] + E[V(total TL | N)].

Here the notation "| N" means "for a given N". For the situation described, the r.h.s. is just

    V(Nε) + E(Nε) = ε²V(N) + εE(N).    (A1)

Now the mean of the total TL signal is simply εE(N), so that the relative error is

    √V(total TL)/εE(N) = √[V(N) + E(N)/ε]/E(N).

This relative error is effectively constant, as a function of ε, provided that E(N)/ε is small compared to V(N). Typically, for <10 μm grains, we estimate that the mean number of emitting grains, E(N), is of the order of 10²-10⁴, ε is of the order of 10², and the variability in N, that is √V(N)/E(N), is about 1%. In this situation V(N) ≈ 10²-10⁴ and E(N)/ε ≈ 1-10². Therefore, the simple model above can lead to a roughly constant per cent error in the TL signal.

This simple model shows that the smallest reasonable standard deviation of TL signals would be the square root of the mean (that is, V(N) = 0 in (A1)). In practice, there is significant variation in N from subsample to subsample, which increases the errors above this square root of the mean. Additionally, any significant variations in grain size or grain-to-grain sensitivity (or efficiency), changing the signal strength, will also increase the standard deviation, bringing it closer to our constant-per-cent-error model.
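The grain-counting argument above is easy to verify by simulation. The following sketch (ours; parameter values are illustrative, chosen to match the orders of magnitude quoted above) checks identity (A1) empirically:

```python
import numpy as np

# Total TL = sum of N Poisson(eps) grain emissions, with N itself random.
# Identity (A1) predicts V(total) = eps**2 * V(N) + eps * E(N).
rng = np.random.default_rng(1)
eps = 100.0                  # mean photons per emitting grain (illustrative)
mean_N, sd_N = 1000.0, 10.0  # ~1% variability in the number of emitting grains
reps = 20000

N = np.maximum(np.rint(rng.normal(mean_N, sd_N, reps)), 0.0)
# A sum of N independent Poisson(eps) variables is Poisson(N * eps):
total = rng.poisson(N * eps)

predicted = eps ** 2 * sd_N ** 2 + eps * mean_N  # eq. (A1)
print("empirical/predicted variance ratio:", total.var() / predicted)
print("relative error of total TL: %.2f%%" % (100 * total.std() / total.mean()))
```

With these values the ε² term (10⁶) dominates the ε term (10⁵), and the simulated relative error sits near 1%, in line with the constant per cent error assumption of Section 2.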
APPENDIX B: ELEMENTS OF MATRIX X

In equation (9) the elements of the design matrix X are

    xⱼₖ = ∂f(Dⱼ, θ̂)/∂θₖ

with f given by equation (1). Setting eⱼ = exp[-b̂ᵣ(d̂ᵣ + Dⱼ)] for the rth iteration, X is the n x 3 matrix with jth row equal to

    [1 - eⱼ,  âᵣ(d̂ᵣ + Dⱼ)eⱼ,  âᵣb̂ᵣeⱼ].

APPENDIX C: ELEMENTS OF MATRIX I

The quadratic

Applying equation (11) to f = a + bD + cD² gives

    I_aa = Σⱼ 1/f²(Dⱼ),  I_ab = Σⱼ Dⱼ/f²(Dⱼ),  I_ac = I_bb = Σⱼ Dⱼ²/f²(Dⱼ),
    I_bc = Σⱼ Dⱼ³/f²(Dⱼ),  I_cc = Σⱼ Dⱼ⁴/f²(Dⱼ),

where the sums run over j = 1, ..., n, with n = n₁, n₂ for growth curves 1, 2 respectively, and the matrix is symmetric (I_ba = I_ab, etc.).

The saturating exponential

Applying equation (11) to f = a{1 - exp[-b(d + D)]} and letting Sⱼ = exp[b(d + Dⱼ)] - 1, then

    I_aa = Σⱼ 1/a²,  I_ab = Σⱼ (d + Dⱼ)/(aSⱼ),  I_ad = Σⱼ b/(aSⱼ),
    I_bb = Σⱼ (d + Dⱼ)²/Sⱼ²,  I_bd = Σⱼ b(d + Dⱼ)/Sⱼ²,  I_dd = Σⱼ b²/Sⱼ²,

again with I_ba = I_ab, etc.

APPENDIX D: ELEMENTS OF VECTOR V AND THE VARIANCE ESTIMATE

The quadratic

    V₁ = V₂ = (1, De, De²)ᵀ

where De is the estimated intersection point or equivalent dose value from equation (13). In the case of the additive dose method, all expressions for curve 2 disappear from equation (17) and De is the estimated dose-axis intercept (see also (13)). In the case of the partial bleach method discussed here, because V₁ = V₂, equation (17) simplifies to

    Δ² = V₁ᵀ(Φ₁ + Φ₂)V₁ / [2De(ĉ₁ - ĉ₂) + (b̂₁ - b̂₂)]².

The saturating exponential

Letting e₁ = exp[-b̂₁(d̂₁ + De)], then

    V₁ = (1 - e₁, â₁(d̂₁ + De)e₁, â₁b̂₁e₁)ᵀ

and similarly for V₂. In the case of the additive dose method, De = -d̂₁ so that e₁ = 1, with consequent simplification of V₁ (again, terms for curve 2 are dropped from equation (17)). For the partial bleach method discussed here, the denominator in (17) becomes

    [â₁b̂₁e₁ - â₂b̂₂e₂]².
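For completeness, the Gauss-Newton/IRLS update of equations (8) and (9), using the Appendix B design-matrix rows, can be sketched as follows. This is our illustration; it assumes a reasonable starting estimate and omits the explicit convergence test of step (v):

```python
import numpy as np

def gauss_newton_satexp(D, Y, theta0, n_iter=50):
    """Fit f(D) = a*(1 - exp(-b*(d + D))) by the Gauss-Newton/IRLS
    scheme of eqs (8)-(9).  Rows of the design matrix X are those of
    Appendix B: [1 - e_j, a*(d + D_j)*e_j, a*b*e_j] with
    e_j = exp(-b*(d + D_j)); weights are w_j = f(D_j)**-2.
    Sketch only: names and the fixed iteration count are our choices."""
    a, b, d = theta0
    for _ in range(n_iter):
        e = np.exp(-b * (d + D))
        f = a * (1.0 - e)
        X = np.column_stack([1.0 - e, a * (d + D) * e, a * b * e])
        W = np.diag(1.0 / f ** 2)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ (Y - f))  # eq. (9)
        a, b, d = np.array([a, b, d]) + beta                    # eq. (8)
    return np.array([a, b, d])
```

In a production fit one would stop once the step β̂ is small, as in step (v), and guard against poor starting values, for which the unweighted-quadratic initial estimates of Section 3.3 are intended.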
\ ajb,e, / and similarly for V~. In the case of the additive dose method. D, = - d , so that e, = I, with consequent simplification of V~(again. terms for curve 2 are dropped from equation (17)). For the lamiai bleach method discussed here, the denominator in (17) becomes lal bl e, - a 2b~ e2[2.