Atmospheric Environment Vol. 22, No. 6, pp. 1221-1227, 1988. Printed in Great Britain.
INHERENT UNCERTAINTY IN AIR QUALITY MODELING

AKULA VENKATRAM
ERT, Inc., 1220 Avenida Acaso, Camarillo, CA 93010, U.S.A.

(First received 25 May 1987 and in final form 29 October 1987)
Abstract. This paper summarizes the discussions of a working group that was charged with the task of examining inherent uncertainty in air quality modeling. The major topics of the paper are: 1. Definition of inherent uncertainty in air quality models; 2. Determination of inherent uncertainty; 3. Role of inherent uncertainty in model evaluation. The concepts introduced here are illustrated through a numerical simulation with Gifford's fluctuating plume model.

Key word index: Inherent uncertainty, model evaluation, numerical simulation.
1. INTRODUCTION

Our experience with air quality modeling shows that the expected deviation between model predictions and observations is usually as large as the model prediction itself. The regulatory implications of this fact have motivated the modeling community to examine the sources of this relatively large uncertainty in model predictions. The 1982 AMS workshop (Fox, 1984) on model uncertainty, which was one of the outgrowths of this activity, identified the causes of model uncertainty as (1) errors in model inputs, (2) errors in model formulation, and (3) inherent uncertainty associated with the stochastic nature of turbulence. In principle, contributions (1) and (2) can be reduced to improve model performance. However, it was recognized by the workshop participants that inherent uncertainty could constitute a major fraction of the total uncertainty and that it was necessary to account for it explicitly in both the application and evaluation of models.

Inherent uncertainty in air quality models raises two questions. First, how does one predict an observation if a large fraction of the observation is stochastic? Second, if the deviations are inherently large, how does one examine the performance of a model against observations? The associated question is: how does one compare the performance of two models? This is important because several studies show that statistical tests alone cannot distinguish between a 'good' model (one that is based on sound physical principles) and a 'bad' model (one that is not).

Because the AMS workshop (Fox, 1984) did not provide much guidance on addressing these questions, the modeling community has paid little attention to them. The impression that inherent uncertainty in models is largely an academic issue has also contributed to this neglect. In order to revive interest in the subject, the Electric Power Research Institute (EPRI) contracted ERT,
Inc. to develop a position paper that reflected the views of experts in air pollution modeling. The review group consisted of the following scientists: B. Hicks (Atmospheric Turbulence and Diffusion Laboratories), J. Weil (Martin Marietta Corporation), J. Wyngaard (National Center for Atmospheric Research), C. Hirtzel (Syracuse University), P. K. Misra (Environment Ontario), J. Tikvart (U.S. Environmental Protection Agency), and A. Venkatram (ERT, Inc.). Dr Glenn Hilst of EPRI monitored the project.

The actual discussion that led to this paper consisted of written reviews of a 'strawman' document prepared by ERT. These reviews were then incorporated into a revised report, which was again sent out to the project participants for a second round of reviews. A meeting was then held to discuss these reviews and reach consensus on the major issues relevant to inherent uncertainty in modeling. This paper represents a condensation of tens of pages of lively written discussions from the reviewers.

Some of the views expressed in this paper have been echoed in a recent editorial by Benarie (1987), who stresses the need to understand the limits of air pollution modeling. While Benarie treats a wide variety of topics to illustrate uncertainty in models, we have restricted our attention to the small-scale dispersion problem. By doing so, we hope to provide specific guidance on the subject to the practitioner of modeling.
2. NATURE OF UNCERTAINTY
The group considered two levels of uncertainty with the following characteristics: (1) an irreducible uncertainty, which is due to the stochastic nature of turbulence and is flow dependent but model independent; and (2) an inherent uncertainty, which is also associated with the stochastic character of turbulence but is dependent on a model and its limitations: incomplete physics, space and time resolution, etc. It was assumed that the inherent uncertainty could not be made smaller than the irreducible uncertainty. Although there were mixed opinions on the existence of an irreducible uncertainty, it was felt that this component should be included in general. Hopefully, future research will determine the necessity (or not) and appropriateness of this term.

The main focus of the project was on inherent uncertainty. In general terms, inherent uncertainty is caused by the practical impossibility of including certain processes in our description of the problem at hand. What is practical depends on the scale of the problem. If we are interested in scales of the order of kilometers, inherent uncertainty is primarily related to our inability to describe the details of the turbulent motion governing a specific concentration measurement. So we are forced to treat the effects of turbulence using statistical rather than deterministic models. At scales of hundreds of kilometers, the uncertainty in model predictions of air quality is primarily caused by mesoscale motions such as sea/land breezes that are not explicitly resolved by the grid system used to describe (model) the phenomena of interest. This uncertainty is different from that at the kilometer scale because these mesoscale circulations can be modeled deterministically in principle; it is computational constraints that force us to treat them statistically. However, the uncertainty associated with this subgrid-scale parameterization is as real and serious as that caused by boundary layer turbulence.

Our group felt that the problem of inherent uncertainty could best be tackled by first concentrating on the small-scale problem. There were several reasons for this decision.
First, we could draw upon our considerable experimental and theoretical experience with small-scale boundary layer turbulence to attack the problem. Our experience with the large-scale problem is very much more limited. The second reason for the small-scale focus is the immediate application to current regulatory problems that deal primarily with scales of the order of kilometers. It was recognized that with the rapidly growing interest in regional transport of pollutants (the acid deposition problem, for example), the inherent uncertainty at large scales is likely to become more prominent. However, we felt that at this stage we could stimulate interest in the overall problem of inherent uncertainty in models by concentrating on the more familiar small-scale problem.
3. METHODS TO STUDY INHERENT UNCERTAINTY
Our group spent some time on defining the term "inherent uncertainty". Following the suggestion of the AMS workshop (Fox, 1984), we stressed the concept of an ensemble in formulating the definition. It is recalled that the ensemble refers to a collection of events governed by a similar set of externally imposed conditions. In a later section, we will suggest one way of imposing these conditions. Once the ensemble is defined, the inherent uncertainty can be related to the average deviation between the concentration measured during any one event and the ensemble-averaged concentration. Formally, one can write

    σ_c = ⟨(C_o − ⟨C_o⟩)²⟩^{1/2}    (1)

where the angle brackets refer to the ensemble average, C_o is the concentration observed during any one event of the ensemble, and σ_c is the standard deviation. For convenience, let us define σ_c/⟨C_o⟩ as the inherent uncertainty. To interpret the uncertainty in familiar terms, let us assume that the concentrations are lognormally distributed. Then the logarithmic standard deviation can be written as (Csanady, 1973):

    s = exp{[ln(1 + i²)]^{1/2}} ≈ 1 + i  (for small i)    (2a)

where

    i = σ_c/⟨C_o⟩.    (2b)
What does s tell us? Approximately 95% of the observations are expected to lie within a factor s² of the ensemble mean prediction. Thus, s² is an easily understood measure of model uncertainty, which will be used in the subsequent discussion. It is important to point out that we cannot predict C_o because the details of the turbulent motion that govern C_o are inaccessible to us. The best we can do is to predict the ensemble average, which corresponds to the prediction from a 'perfect' model. These concepts will be formalized in a later section.

We can estimate inherent uncertainty from: (1) a group of concentration observations; (2) the residuals between mean model predictions and corresponding observations; (3) models of inherent uncertainty.

In principle, we can group observations using a chosen set of criteria and carry out the operations described in Equation (1). The main problem with this procedure is that unless the definition of our ensemble is rather vague, we cannot construct a sample of observations that is large enough for a reliable estimate of the uncertainty. If we are able to obtain an estimate, it is still difficult to relate it to the uncertainty of a specific model prediction.

If we are reasonably convinced that the residuals between model predictions and observations are inherent, we can use them to estimate the uncertainty. However, as we will see later, each residual carries the signature of the corresponding model inputs. This means that some sort of binning of residuals is required before we can calculate the uncertainty estimates. Then we run into the same type of data sparsity problem that we encountered in analyzing concentration observations only. In spite of these problems, we can obtain first-cut estimates of inherent
uncertainty by using the preceding methods. In this connection, it is suggested that Bayesian statistics could be used to facilitate this process.

In our opinion, the most reliable estimates of inherent uncertainty can only come from appropriate models that have been tested against observations. The formulation of such models has to be preceded by careful study of the physics of inherent uncertainty. This can best be accomplished through laboratory experiments and numerical simulations. The remarkable results obtained by Deardorff and Willis (1984) from their water tank experiments suggest the application of similar experiments to study inherent uncertainty for special situations. Over the past few years, computing power has increased to the extent that we can actually simulate the most important features of turbulent convective flows (see Wyngaard, 1984). In a recent study, Nieuwstadt and Van Haren (1987) have been able to simulate dispersion in these numerically produced turbulent flows. Their results are very similar to those obtained in the Deardorff-Willis tank experiments. This type of rapid progress in numerical experimentation suggests that we should soon be in a position to study inherent uncertainty using brute-force computer simulation.

Meanwhile, we can take advantage of less computationally demanding techniques to simulate turbulent dispersion. One such technique is based on the Langevin equation, which represents a simple model for the motion of a particle in a turbulent flow. A later section will illustrate the application of the equation. In the next section, we will develop a formalism that will help us to be more precise about the concepts we have discussed thus far. We will then go on to apply the formalism to a concrete problem.
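The grouped-sample estimate of Equations (1) and (2) is easy to sketch in code. The following Python fragment is our own illustration, not part of the original study; the function name and the sample values are hypothetical:

```python
import numpy as np

def uncertainty_measure(c_obs):
    """Estimate i = sigma_c/<C_o> [Eqs (1), (2b)] and the lognormal
    spread s [Eq. (2a)] from concentrations grouped into one ensemble."""
    c_obs = np.asarray(c_obs, dtype=float)
    sigma_c = c_obs.std(ddof=1)            # sample standard deviation, Eq. (1)
    i = sigma_c / c_obs.mean()             # relative inherent uncertainty, Eq. (2b)
    s = np.exp(np.sqrt(np.log(1.0 + i * i)))   # Eq. (2a)
    return i, s

# Hypothetical 1-h concentrations assigned to one ensemble:
i, s = uncertainty_measure([3.1, 4.6, 2.2, 5.0, 3.8, 2.9])
```

Roughly 95% of the observations are then expected to lie within a factor s² of the ensemble mean.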
4. DEFINITION OF INHERENT UNCERTAINTY
Our definition of inherent uncertainty depends on the air quality model used to study the relevant observations of concentration. Let us assume that α represents the set of input variables required by the model. Then inherent uncertainty is related to the deviation between the best possible prediction from the model and the corresponding observation. To understand the meaning of the term "best possible prediction", let us conduct the following thought experiment. Think of a set of experiments for which the values of α are fixed. We now observe the concentrations for each of these experiments. In general, these observations can be written as

    C_o = C_o(α, β).    (3)

Here, α is the set of model inputs that are fixed, and β represents the set of unknown variables that also affect the observation. The observations vary from experiment to experiment because the set β varies. The fixed
values of α then define an ensemble of possible observations C_o(α, β). We can always define an average over this ensemble and express C_o(α, β) as

    C_o(α, β) = ⟨C_o(α, β)⟩ + ε(α, β).    (4)

Here, ε(α, β) is the residual between a realization C_o(α, β) and the ensemble average. Because this average (the first term on the right-hand side of the equation) is over all possible values of β, it can only be a function of the model inputs α, and we can write

    C_o(α, β) = C_p(α) + ε(α, β).    (5)
We see immediately that the best possible model prediction is the average C_p(α) over the ensemble of observations C_o(α, β) defined by fixed values of α. The deviation ε(α, β) then represents the inherent uncertainty between the model prediction C_p(α) and an observation C_o(α, β). It is implied here that the unknown stochastic component ε(α, β) of the observation can be reduced, in principle, by expanding the model input set α to include more of β. This is equivalent to increasing the deterministic component C_p(α) relative to the stochastic component ε(α, β) of the observation. This, in fact, is the process of model improvement.

We use the term "inherent" to describe the uncertainty ε(α, β) because turbulence very quickly limits the process of model improvement. The unknown set β, which consists of the variables that describe the turbulent flow field, is practically inaccessible to us. Although, in principle, ε(α, β) is model dependent, for all practical purposes it is inherent to the process of dispersion. We are thus forced to admit that a large component of the observed concentration is stochastic: we cannot predict its value for any given realization. We can only hope to estimate the statistics of the inherent uncertainty.

The formalism we have developed points out that the residual between model prediction and observation is a function of the model inputs α. Because actual observations of concentrations usually correspond to different values of α, each residual between model prediction and observation has to be treated as a unique member of a population described by α. This suggests caution in attaching too much meaning to model performance statistics derived from residuals drawn from different ensembles. We recommend careful study of the residuals between model predictions and observations before calculating 'simple' or complex measures of model performance. Our thoughts on residual analysis will be described in a later section.
Because α cannot be held constant in practice, it is difficult to derive the statistics of inherent uncertainty from observations alone. We believe that these statistics can be most reliably estimated from models, which, of course, have to be tested with observations. This proposal becomes obvious if we think of modeling as an attempt to explain/predict observations. Then, we not only have to predict the ensemble mean but also
explain the deviation between the ensemble mean and the corresponding observation. The explanation for the deviation is the model for inherent uncertainty. This suggests an expanded view of an air quality model; it consists of models for the ensemble mean and the statistics of inherent uncertainty. We do realize that the formulation of such an expanded model is likely to be a long-term proposition except for the simplest of situations. The major point that we wish to convey is that the residual between model prediction and observation has to be studied even if we do not model it explicitly.

5. AN EXAMPLE

Several of the ideas discussed in the last section can be illustrated through the so-called fluctuating plume model proposed by Gifford (1959). In his idealized plume, dispersion is controlled by two widely separated scales of motion. Large-scale (slowly varying) turbulence is responsible for the meandering of the centerline of an instantaneous plume. The spread of the instantaneous plume about the moving centerline is controlled by small-scale turbulence. The concentration at a specified receptor is controlled by: (1) large-scale meandering and (2) the concentration distribution within the instantaneous plume. In Gifford's original model, the instantaneous distribution was specified to be Gaussian; there were no internal fluctuations. The centerline of the plume was taken to move randomly with its coordinates drawn from a normal distribution. The plume was essentially described by specifying the distribution of the plume centerline position and the spread of the instantaneous plume about the centerline. In our previous notation, Gifford's model is described by
    α = [σ_yi, σ_zi, σ_y, σ_z, y, z, u, Q]    (6a)
    β = [positions of plume centerline y_p, z_p].    (6b)

Here, σ_yi and σ_zi are the instantaneous plume spreads, and y and z are the coordinates of the receptor of interest. The parameters σ_y and σ_z describe the normal distributions that govern the possible positions of the plume centerline. They are not standard deviations of plume centerline positions for any given averaging time; rather, they represent averages of standard deviations over time periods that share common properties that determine plume spread. For example, these time periods might have the same values of surface heat flux and mixed layer height. These variables govern the turbulence in the boundary layer and, hence, eventually the plume spread. Equation (6) states that our lack of knowledge of the plume position at any given instant of time is responsible for the inherent uncertainty in the model prediction of the ensemble mean, which is the time average (infinite time) of the concentration time series at (y, z).

Gifford (1959) used this model to derive simple
expressions for the ensemble mean and variance. This work was extended by Sykes (1984), who derived expressions for the ensemble-averaged variance of time-averaged concentrations about the ensemble mean. In our notation, the ensemble considered by Sykes can be described by

    α = [σ_yi, σ_zi, σ_y, σ_z, y, z, u, Q, T]    (7a)
    β = [position of plume centerline y_p, z_p]    (7b)
where T is the averaging time of interest. This ensemble, which is now constrained by the averaging time T, is relevant to practical problems that require predictions of time-averaged concentrations. It is important to note again that the sigmas measured over any time period of length T will deviate from the overall sigmas (σ_y and σ_z) in Equation (7a).

We will now show how Gifford's fluctuating plume model can be used to investigate the properties of the ensemble described by Equation (6). The basic idea here is to actually simulate the observations by computing possible positions of the plume centerline. The coordinates of the plume centerline are computed through a Langevin-type equation:

    y_p(t + Δt) = y_p(t)(1 − a) + (2a)^{1/2} θ′    (8a)
    z_p(t + Δt) = z_p(t)(1 − a) + (2a)^{1/2} θ″    (8b)
where a = Δt/T_E. The Lagrangian time scale is taken to be infinite, which implies that plume positions are controlled by the Eulerian time scale T_E of the velocity fluctuations at the source. In Equation (8), θ′ and θ″ are random numbers chosen from normal distributions described by σ_y and σ_z. For convenience, we will assume that σ_y = σ_z = 1.0 and that the Eulerian time scales for the horizontal and vertical velocity fluctuations are equal. In the simulations to be described here, we take T_E = 60 s, and the instantaneous spreads are assumed to be equal. These spreads σ_i are specified as fractions of the plume centerline spreads. The time step Δt is taken to be 0.1 T_E. As in Gifford (1959), the concentration caused by the instantaneous plume at the receptor (y, z) is given by:

    C_i(y, z) = [Q/(2π u σ_i²)] exp{−[(y − y_p)² + (z − z_p)²]/(2σ_i²)}.    (9)
Equations (8) and (9) can be used to generate a simulated concentration time series at the receptor (y, z), which can then be used to compute time averages. The time average over a long period, taken to be 100 h in our case, is assumed to be the ensemble average. The prediction from a perfect model corresponds to this ensemble average, while averages over consecutive time periods correspond to possible observations. For the simple problem considered here, it is possible to derive the analytical expression for the ensemble mean C_p(α) (Gifford, 1959),

    C_p(α) = [Q/(2π u σ_ye σ_ze)] exp[−(y²/(2σ_ye²) + z²/(2σ_ze²))]    (10a)
where the effective sigmas are

    σ_ye = (σ_yi² + σ_y²)^{1/2},  σ_ze = (σ_zi² + σ_z²)^{1/2}    (10b)

and we have taken Q = 1 and u = 1 for convenience.

Figure 1 shows 20 samples of the ratio of the observed to predicted concentrations (ensemble means) for an averaging time of 1 h and an instantaneous plume spread of 0.2. As expected, these ratios vary around the ideal ratio of unity. Notice that at Y = 2, the ratios deviate substantially from unity, although the model is 'perfect'. We also see the tendency of the inherent uncertainty to increase with distance from the plume centerline. It is clear that the residuals between model predictions and observations are drawn from distributions whose parameters are functions of variables such as distance from plume centerline.

The variation of the inherent uncertainty, as measured by s², with distance from the plume centerline is illustrated in Fig. 2. Recall that s² [Equation (2)] provides an estimate of the 95% confidence interval. At one standard deviation from the plume, s² is 1.8, while at two standard deviations, s² is 3.3. Because ground-level concentrations from an elevated pollutant release correspond to observations away from the plume centerline, these uncertainty estimates suggest that we should not expect much better than a factor of two precision in predicting the maximum concentration caused by an elevated release.

The effect of averaging time on model uncertainty is illustrated in Fig. 3. Notice that the uncertainty does not drop below 1.5 even at Y = 0.0. Beyond T = 0.25 h (= 15 Eulerian time scales) the uncertainty drops off with the square root of the averaging time, as predicted by Sykes (1984).
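The simulation outlined above is straightforward to reproduce. The following Python sketch is our own implementation rather than the original code; the parameters follow the text (σ_y = σ_z = 1, T_E = 60 s, Δt = 0.1 T_E, Q = u = 1), and the function names are ours:

```python
import numpy as np

def simulate_concentration(y, z, sigma_i=0.2, hours=100.0, T_E=60.0, rng=None):
    """Concentration time series at receptor (y, z) from Eqs (8)-(9)."""
    if rng is None:
        rng = np.random.default_rng(0)
    dt = 0.1 * T_E
    a = dt / T_E                               # Eq. (8): a = dt/T_E
    n = int(hours * 3600.0 / dt)
    yp = zp = 0.0
    c = np.empty(n)
    for k in range(n):
        # Langevin-type update of the plume centerline, Eqs (8a)-(8b)
        yp = yp * (1.0 - a) + np.sqrt(2.0 * a) * rng.normal()
        zp = zp * (1.0 - a) + np.sqrt(2.0 * a) * rng.normal()
        # Instantaneous Gaussian plume, Eq. (9), with Q = u = 1
        c[k] = np.exp(-((y - yp) ** 2 + (z - zp) ** 2)
                      / (2.0 * sigma_i ** 2)) / (2.0 * np.pi * sigma_i ** 2)
    return c, dt

def s_squared(y, z, sigma_i=0.2, avg_hours=1.0):
    """Uncertainty s^2 [Eq. (2)] of averages over consecutive periods."""
    c, dt = simulate_concentration(y, z, sigma_i)
    per = int(avg_hours * 3600.0 / dt)
    obs = c[: (len(c) // per) * per].reshape(-1, per).mean(axis=1)
    i = obs.std(ddof=1) / obs.mean()
    return float(np.exp(np.sqrt(np.log(1.0 + i * i))) ** 2)
```

Evaluating s_squared at receptors increasingly far from the centerline reproduces the qualitative behavior of Fig. 2: the uncertainty grows with distance from the plume centerline even though the "model" here is perfect.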
Figure 4 illustrates the effect of the instantaneous plume spread on model uncertainty. A decrease in plume spread is associated with an increase in the intermittency of the concentration time series, which, in turn, causes the higher uncertainty. Because the instantaneous plume spread increases relative to the average plume spread with downwind distance from the source, the figure also illustrates the downwind behavior of inherent uncertainty. We see that the uncertainty drops to small values for σ_i > 0.3. This result is misleading because Gifford's model does not account for internal fluctuations within the instantaneous plume, which would lead to an increase in inherent uncertainty. We have to adapt a model such as that proposed by Sawford (1985) to examine the effect of internal fluctuations on inherent uncertainty.

We had remarked earlier that σ_y and σ_z represent average values corresponding to the relevant boundary layer variables. Now what happens if we can actually measure the plume statistics σ_y^T, σ_z^T, ȳ_p and z̄_p for any specified hour? It is clear that this extra information should lead to a decrease in the inherent uncertainty. Let us use Gifford's plume model to demonstrate this result. The new ensemble is now defined by

    α = [σ_yi, σ_zi, σ_y^T, σ_z^T, ȳ_p, z̄_p, y, z, u, Q, T]    (11a)
    β = [positions of plume centerline y_p, z_p].    (11b)

As before, we simulate the plume positions using Equation (8). The plume statistics σ_y^T, σ_z^T, ȳ_p, z̄_p calculated from this information will now vary from one averaging period to another. This means that we cannot keep α fixed even in our numerical experiment. However, this can be done in principle, and we can readily write down the expression for the ensemble mean:

    C_p(α) = [Q/(2π u σ′_ye σ′_ze)] exp{−[(y − ȳ_p)²/(2σ′_ye²) + (z − z̄_p)²/(2σ′_ze²)]}    (12a)
Fig. 1. Realizations of the ratio of simulated observation to predicted concentration. Averaging time = 1 h and σ_i = 0.2. Distances Y from the plume centerline are normalized by plume spread.
where

    σ′_ye² = (σ_y^T)² + σ_i²    (12b)
    σ′_ze² = (σ_z^T)² + σ_i².    (12c)
and ‘0
05
1.5
I
2
Distance from plume centerline
Fig. 2. Variation of s² with distance from plume centerline. Distance is in units of the standard deviation of y_p and z_p. The instantaneous plume spread σ_i = 0.2 and the averaging time is 1 h.
I ‘0.2
0.4
I
I
I
I
0.6 0.6 Averaging time 1h )
J
1.2
Fig. 3. Variation of s2 (uncertainty) with averaging time. Instantaneous bi = 0.2. 2.5 r
Fig. 4. Variation of s² with instantaneous plume spread. Averaging time is 1 h.
In order to calculate the ensemble variance of the time-averaged concentrations, we will assume that the variance is most sensitive to the variables σ_i, y, z and T that can be held fixed. In effect, we are assuming that the variation of the plume statistics is much smaller than that of the controllable variables. This then allows us to calculate the statistics of the residual between the simulated observation and the ensemble mean prediction [Equation (12)] by simulating a set of observations corresponding to fixed values of σ_i, y, z and T. The ensemble mean prediction for each of these observations is given by Equation (12). As before, we calculate the ratio σ_o/C̄_p, where σ_o is estimated from roughly 50 samples, and C̄_p is the average over the ensemble means corresponding to each of the simulated observations C_o(α, β). If our assumption about the similarity of these ensemble variances is reasonable, we should find that σ_o ≃ σ_c.

Table 1 presents selected results from our simulations. As expected, the uncertainty (s²) increases as the instantaneous plume width decreases. Near the plume centerline (Y < 1), s² ≈ 2, which indicates that 95% of the observations are expected to lie within a factor of two of the model prediction. At Y = 2, the uncertainty is much larger. Notice that the extra information embodied in the plume statistics of ensemble 1 does lead to a decrease in the uncertainty. However, this decrease is not noticeable until Y = 2. This suggests that extra information on plume width is likely to be more important for estimating the impact of elevated releases, whose ground-level impacts correspond to off-centerline concentrations. It should be stressed that the uncertainty estimates presented here will be smaller than in practice, where information on either the average sigmas or the actual sigmas is likely to be absent.
Table 1. Comparison of uncertainties (s²) of two different ensembles

         T = 1 h, σ_i = 0.1       T = 1 h, σ_i = 0.2       T = 15 min, σ_i = 0.2
    Y    Ensemble 1  Ensemble 2   Ensemble 1  Ensemble 2   Ensemble 1  Ensemble 2
    0       1.7         1.8          1.5         1.5          1.9         2.3
    1       2.2         2.3          1.6         1.7          2.3         2.8
    2       3.0         4.4          2.0         3.0          3.8         9.6

Ensemble 1 corresponds to 'measured' plume statistics [see Equation (11)] while Ensemble 2 refers to 'average' plume statistics [see Equation (7)]. Our calculation showed that σ_o ≃ σ_c, justifying our assumption about the similarity of ensemble variances.
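The Ensemble 1 calculation can be sketched as follows. This is our own reconstruction under the same assumed parameters (σ_i = 0.2, T_E = 60 s, Δt = 6 s, Q = u = 1, T = 1 h); the conditional prediction follows the form of Equation (12), with hourly plume statistics estimated from the simulated centerline positions:

```python
import numpy as np

rng = np.random.default_rng(1)
T_E, dt, sigma_i = 60.0, 6.0, 0.2
a = dt / T_E
n = int(100 * 3600 / dt)                 # 100 h of centerline positions
yp = np.empty(n)
zp = np.empty(n)
y = z = 0.0
for k in range(n):                       # Eqs (8a)-(8b)
    y = y * (1 - a) + np.sqrt(2 * a) * rng.normal()
    z = z * (1 - a) + np.sqrt(2 * a) * rng.normal()
    yp[k], zp[k] = y, z

ry, rz = 2.0, 0.0                        # receptor two sigma off the centerline
conc = np.exp(-((ry - yp) ** 2 + (rz - zp) ** 2)
              / (2 * sigma_i ** 2)) / (2 * np.pi * sigma_i ** 2)   # Eq. (9)

per = int(3600 / dt)                     # 1-h averaging periods
m = n // per
C_obs = conc[: m * per].reshape(m, per).mean(axis=1)

# Measured hourly plume statistics enter the effective sigmas, Eqs (12b)-(12c)
yph = yp[: m * per].reshape(m, per)
zph = zp[: m * per].reshape(m, per)
s2ye = yph.var(axis=1) + sigma_i ** 2
s2ze = zph.var(axis=1) + sigma_i ** 2
C_pred = (np.exp(-((ry - yph.mean(axis=1)) ** 2 / (2 * s2ye)
                   + (rz - zph.mean(axis=1)) ** 2 / (2 * s2ze)))
          / (2 * np.pi * np.sqrt(s2ye * s2ze)))                    # Eq. (12a)
```

Comparing the spread of C_obs about C_pred with its spread about the unconditional ensemble mean shows the reduction in uncertainty that the measured plume statistics buy, which, as in Table 1, is most pronounced off the centerline.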
6. SUMMARY
Inherent uncertainty makes a significant contribution to the observed deviations between model predictions and observations. This suggests that further progress in air quality modeling will require understanding of the processes that govern the statistics of inherent uncertainty. The rapid increase in computing power during the past few years has made it practical to use computer simulation to gain this understanding. The use of Gifford's fluctuating plume model to estimate uncertainty illustrates this application of numerical simulation. Over the past few years, several investigators (see Wyngaard, 1984) have simulated important features of selected turbulent flows. The next step is to use these simulated flows to understand dispersion and, thus, inherent uncertainty.

What are the recommendations that we can make in the short term? For a start, the examination of residuals should precede the current practice of calculating performance measures. The residual carries important information about the model. In general, the residual ε(α, β) can be written as

    ε(α, β) = c(α, β) + i(α) + f(α).    (13)

The first term on the right is the inherent uncertainty. The second term represents the effect of model input error, and the third term is related to errors in the formulation of the ensemble mean. One expects c(α, β) and i(α) to be random and f(α) to be systematic. Thus, a plot of ε(α, β) against model inputs will convey useful information on the performance of the model. A systematic trend in the residual plot at certain values of α might indicate problems with the ensemble mean prediction. A random distribution of ε(α, β) about zero might indicate acceptance of the model.
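As a toy illustration of this residual analysis (every number here is invented for the example), one can simulate residuals with a random inherent part, a random input-error part, and a systematic formulation error f(α), then bin them by the model input; a trend in the bin means exposes f(α):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = rng.uniform(0.0, 1.0, 500)   # a single model input
c = rng.normal(0.0, 0.3, 500)        # inherent part c(alpha, beta): random
ie = rng.normal(0.0, 0.1, 500)       # input-error part i(alpha): random
f = 0.5 * (alpha - 0.5)              # formulation error f(alpha): systematic
resid = c + ie + f                   # Eq. (13)

# Bin the residuals by alpha; systematic variation of the bin means with
# alpha points at the ensemble-mean formulation, not at inherent scatter.
edges = np.linspace(0.0, 1.0, 6)
idx = np.clip(np.digitize(alpha, edges) - 1, 0, 4)
bin_means = np.array([resid[idx == b].mean() for b in range(5)])
```

A roughly flat set of bin means about zero would instead be consistent with purely random residuals, i.e. with acceptance of the ensemble-mean model.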
In principle, we can model i(α) and c(α, β) to find out whether the sample of observed residuals belongs to the population of simulated residuals. In practice, this might be difficult except for the simplest of cases. However, even a partial understanding of the statistics of the inherent uncertainty as a function of the model inputs will help us to determine whether a given model is realistic.

Acknowledgements. This work was supported by the Electric Power Research Institute. We would like to thank Dr Glenn Hilst, the project monitor, for his encouragement during this study.

REFERENCES
Benarie M. M. (1987) The limits of air pollution modeling. Atmospheric Environment 21, 1-5.
Csanady G. T. (1973) Turbulent Diffusion in the Environment. Reidel, Dordrecht, Holland.
Deardorff J. W. and Willis G. E. (1984) Ground-level concentration fluctuations from a buoyant and non-buoyant source within a convectively mixed layer. Atmospheric Environment 18, 1297-1309.
Fox D. (1984) Uncertainty in air quality modeling. Bull. Am. Met. Soc. 65, 27-36.
Gifford F. (1959) Statistical properties of a fluctuating plume dispersion model. Adv. in Geophys. 6, 117-137.
Nieuwstadt F. T. M. and Van Haren L. (1987) A large eddy simulation of buoyant and non-buoyant plume dispersion in the atmospheric boundary layer. Proceedings of the 16th International Technical Meeting on Air Pollution Modeling and its Applications, 6-10 April 1987, Lindau, F.R.G.
Sawford B. L. (1985) Lagrangian statistical simulation of concentration mean and fluctuation fields. J. Clim. appl. Met. 24, 1152-1166.
Sykes R. I. (1984) The variance in time-averaged samples from an intermittent plume. Atmospheric Environment 18, 121-123.
Wyngaard J. C. (1984) Large eddy simulation: guidelines for its application to planetary boundary layer research. Final report to U.S. Army Research Office, Research Triangle Park, NC.