
Ann. Nucl. Energy, Vol. 25, No. 13, pp. 1055-1068, 1998 © 1998 Elsevier Science Ltd. All rights reserved PII: S0306-4549(98)00010-3 Printed in Great Britain 0306-4549/98 $19.00 + 0.00


Variationally Processed Monte Carlo Transport Theory

Mabruk O. Allagi, Jeffery D. Lewins and Geoffrey T. Parks
Department of Engineering, University of Cambridge

(Received 21 November 1997)

Abstract--Monte Carlo methods are widely used in neutron and reactor studies. They are commonly accelerated by a variety of variance reduction techniques, including adjoint weighting. We propose a new use of the adjoint: in a variational principle to process the Monte Carlo results. In many cases an estimate of the adjoint function is available from the Monte Carlo simulation without further sampling. In this paper we explore a simple model, chosen for its known exact results, to compare the benefit of such variational processing over straight analogue Monte Carlo simulation. The variational Monte Carlo results, for this system, are shown to yield improved accuracy for the same computational cost. Although variational results used during Monte Carlo runs can be biased, they are associated with large reductions in variance. Comparing the ways in which variational expressions are processed, the greatest gain in accuracy is obtained by completing the Monte Carlo simulation before post-processing with the variational principle. © 1998 Elsevier Science Ltd.

1 Introduction

Monte Carlo methods are widely used in solving neutral particle transport equations when the problem must be represented in three dimensions as well as in many energy groups or in continuous energy. Despite the computational costs, little else is available. To promote efficiency and the reduction of error or variance in this stochastic modelling, various techniques are used, not least adjoint weighting. In principle, if the solution to the adjoint linear Boltzmann equation were known, exact results would follow. In practice the exact adjoint solution is no more available than the exact forward solution, and the effect of the adjoint biasing is not always obvious or understood.


We explore in this paper an alternative way of employing the adjoint function or importance, by using it in conjunction with the forward field or flux estimate in a variational principle of classical form, designed to provide answers more accurate than those given by either field alone. In this first account, we use a simple model, to which the exact answer is known, to demonstrate the concept and to show not only that it has advantages in providing greater accuracy for comparable computing cost, but also some disadvantages that need to be understood in using variationally processed Monte Carlo (VPMC).

Variational methods have a long classical history. We use the approach of Lagrange in which a particular numerical question is posed for a distributed system, i.e. one single number characteristic of the system is to be calculated. This number will generally depend on a field function such as the neutron flux, which is not known exactly, as well as upon a selected and known distribution of detectors sampling the system. By adding the side conditions, the equation to be satisfied by the flux, weighted and integrated over the system, and making this Lagrangian stationary to small errors in the flux, we find the stationary condition to be the adjoint equation for the adjoint flux, or the importance of a unit source to the total detector reaction.

Our method starts by noting that conventional Monte Carlo (Dubi 1986) follows samples drawn from the source to their termination in capture or escape. The forward or flux field is then estimated by taking the total reactions of all the samples in a small region and dividing by the known cross-section to yield the flux, or by analogous methods. We observe that if the detector distributions are represented in the model, as they must be for accuracy, a direct Monte Carlo answer to our question is available, by noting how much the source sampling leads to the detector response of interest. This then provides additional information, the importance of each unit source as wanted in the Lagrangian, as well as a direct dual estimate. This information on the adjoint function arises since we can in principle know where the neutron started that caused the detector reaction; such information is generally lost in the process of estimating the flux from all the source samples. In addition, and here we have a potential advantage over conventional adjoint biasing, the variational process is analogous to the calculation of one further term in a von Neumann series designed to give the right answer. It costs little to do compared with the cost of what is often highly complex geometric modelling for Monte Carlo studies, yet adds appreciably to the accuracy.

The model taken is that of a two-energy-group neutron production problem in infinite space, so that energy, not position, needs to be studied. In the first problem posed in this model, two real sources are present and Monte Carlo methods are used to find a stochastic simulation. This problem is used to demonstrate that the best use is made of the variational functional if it is used to post-process the field estimates, i.e. once and once only. Just as we would expect direct analogue Monte Carlo to improve in accuracy with larger samples, the same is shown for the variationally processed Monte Carlo. The model and the problem are chosen for their simplicity so that exact results are available for comparison and the principles of the method are evident in application.
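As a compact sketch of this construction in our notation (anticipating the concrete form used later as equation (1)), the Lagrangian adds to the detector reading the importance-weighted residual of the transport equation, and stationarity with respect to errors in the flux recovers the adjoint equation:

```latex
% Sketch of the Lagrangian construction described above (our notation):
% H is the detector distribution, M the transport operator with source S,
% and \Phi^{+} the Lagrange multiplier, i.e. the importance.
\begin{align*}
  L[\Phi,\Phi^{+}] &= \int H^{T}\Phi\,\mathrm{d}x
                      + \int \Phi^{+T}\left[M\Phi + S\right]\mathrm{d}x ,\\
  \delta L &= \int \left[M^{*}\Phi^{+} + H\right]^{T}\delta\Phi\,\mathrm{d}x .
\end{align*}
```

First-order flux errors δΦ then leave L unchanged precisely when M*Φ+ + H = 0, which is the adjoint equation for the importance described above.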


In the second problem, we suppose there is only a fast source, no thermal source. Now there is no estimate of the thermal importance available to us from the usual Monte Carlo method. In the present paper we make the simple step of introducing additional or fictitious (as opposed to real) source sampling and show that, despite this overhead, more accurate results are available from the variationally processed Monte Carlo than from the standard analogue Monte Carlo.

The two-energy-group system of infinite extent with no spatial variation is chosen subcritical and maintained in steady state with a source. Fission is present and modelled here for simplicity as occurring with two fast neutrons for every thermal neutron absorbed in fission. Table I gives the cross-sections assumed.¹

¹The detector absorption cross-section is included in the thermal group absorption cross-section.

Fast group:      Σ1a = 0.005542     Σ1s2 = 0.004933
Thermal group:   Σ2a = 0.013920     Σ2f  = 0.058300

Table I: Two-group cross-sections (cm⁻¹)

The same 'question' is posed in each of the subsequent problems applied to this model. That is, a part of the thermal absorption cross-section is taken as describing a thermal detector whose total reaction to the flux of thermal neutrons induced by the source(s) is to be estimated. Detector properties are given in Table II.

H_fast    = Σ1d = 0
H_thermal = Σ2d = 0.00816

Table II: Detector properties (cm⁻¹)

2 Model

Exact solutions to this model are readily available and are given in Table III. Two problems are anticipated: in Problem 1, two equal sources (each 1000 neutrons s⁻¹), fast and thermal, are present, leading to the fluxes and consequently the detector reaction. In Problem 2, only the same fast source is present and there is no thermal source. The exact solutions to both problems are provided.
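For concreteness, the balance equations implied by this description may be written out as below; the grouping of the removal and fission terms is our reading of the text rather than something stated explicitly, so the sketch should be taken as illustrative only:

```latex
% Two-group, infinite-medium, steady-state balances as we read the model:
% fast neutrons are removed by capture and by slowing down, thermal neutrons
% by absorption (which includes the detector), and each thermal neutron
% absorbed in fission returns two fast neutrons.
\begin{align*}
  (\Sigma_{1a} + \Sigma_{1s2})\,\phi_1 &= S_1 + 2\,\Sigma_{2f}\,\phi_2 , &
  \Sigma_{2a}\,\phi_2 &= S_2 + \Sigma_{1s2}\,\phi_1 ,
\end{align*}
% with detector response C = \Sigma_{2d}\phi_2, and the corresponding adjoint
% (importance) equations, driven by the detector distribution:
\begin{align*}
  (\Sigma_{1a} + \Sigma_{1s2})\,\phi_1^{+} &= \Sigma_{1s2}\,\phi_2^{+} , &
  \Sigma_{2a}\,\phi_2^{+} &= \Sigma_{2d} + 2\,\Sigma_{2f}\,\phi_1^{+} .
\end{align*}
```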


• Sample from S1
• Tally:
  - No. of fast neutron captures in the system, (NF)_S1:   (φ1)_S1 = (NF)_S1/Σ1a
  - No. of slow neutron captures in the system, (NS)_S1:   (φ2)_S1 = (NS)_S1/Σ2a
  - No. of slow neutron captures in the detector, (NDS)_S1:   direct C_S1 = (NDS)_S1
• Sample from S2
• Tally:
  - No. of fast neutron captures in the system, (NF)_S2:   (φ1)_S2 = (NF)_S2/Σ1a
  - No. of slow neutron captures in the system, (NS)_S2:   (φ2)_S2 = (NS)_S2/Σ2a
  - No. of slow neutron captures in the detector, (NDS)_S2:   direct C_S2 = (NDS)_S2
◇ φ1 = (φ1)_S1 + (φ1)_S2
◇ φ2 = (φ2)_S1 + (φ2)_S2
◇ direct C = C_S1 + C_S2
◇ flux-based C = Σ2d φ2

Figure 1: Algorithm 1: sampling from S1 and S2 to obtain the forward fields and the detector response.
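A minimal sketch of this analogue sampling and tallying might look as follows in Python. The routine, and the way the thermal absorption is split into detector, capture and fission channels, are our own illustration of Algorithm 1, not code from the paper; the cross-section symbols are those of Tables I and II.

```python
import random

# Cross-sections of Tables I and II (cm^-1); how the thermal fission channel
# combines with the tabulated absorption is an assumption of this sketch.
SIG_1A, SIG_1S2 = 0.005542, 0.004933                   # fast capture, slowing down
SIG_2A, SIG_2F, SIG_2D = 0.013920, 0.058300, 0.00816   # thermal absorption, fission, detector part

def follow_history(group, rng=random.random):
    """Follow one source neutron and all its fission progeny to capture.

    Returns the tallies (NF, NS, NDS): fast captures, slow captures, and
    slow captures occurring in the detector, credited to this history.
    """
    nf = ns = nds = 0
    stack = [group]                                    # neutrons still to follow
    while stack:
        g = stack.pop()
        if g == "fast":
            # fast neutron: capture or slow down, in proportion to cross-sections
            if rng() < SIG_1A / (SIG_1A + SIG_1S2):
                nf += 1                                # fast capture ends the track
            else:
                stack.append("thermal")                # enters the thermal group
        else:
            # thermal neutron: pick an absorption channel by cross-section
            xi = rng() * (SIG_2A + SIG_2F)
            if xi < SIG_2D:
                ns += 1; nds += 1                      # slow capture, in the detector
            elif xi < SIG_2A:
                ns += 1                                # slow capture elsewhere
            else:
                stack.extend(["fast", "fast"])         # fission: two fast secondaries

    return nf, ns, nds

# e.g. one trial of 1000 fast-source and 1000 thermal-source histories:
trial = [follow_history(g) for g in ["fast"] * 1000 + ["thermal"] * 1000]
```

Summing these tallies over the histories drawn from S1 and from S2, and dividing by the appropriate cross-sections, gives the flux estimates of Figure 1; the detector tallies (NDS)_S1 and (NDS)_S2, kept per source, are exactly what Figure 3 below turns into importance estimates.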

We note that a direct analogue estimate of the 'question', based on source and source importance, is available, but that this will not prove identical to the estimate based on flux and detector cross-section because of statistical fluctuations. For small detector cross-sections the statistics are better from the forward estimate than from this dual. Study of the Lagrangian shows that there is no difference in the result using the forward or backward dual.

[Figure 2 schematic: FAST GROUP and THERMAL GROUP columns. A single neutron sampled from the fast source is followed through generations 0, 1 and 2, with scattering, fission, fast capture events incrementing (NF)_S1 and slow capture events incrementing (NS)_S1.]

Figure 2: The forward field measured by, e.g., fast capture; importance measured by thermal capture as a result of one neutron sampled from a fast source.

2.1 Problem 1

Equal fast and thermal sources are present. The Monte Carlo simulation follows the conventional route, Figure 1, of drawing from the source distribution. This drawing does not have to be at random but is constrained so that equal numbers are drawn from the two equal sources before any assessment is made. Each sampled source neutron has an interaction in its group, its fate being determined by a computer-generated (pseudo) random number compared with the relative cross-sections for the available channels, such as capture or fission of a thermal neutron, or capture or slowing down of a fast neutron. The neutron is followed to final capture, and any progeny from fission on the way are also followed and credited to the sample, Figure 2. At the end, the number of reactions can be used to estimate the fluxes, fast and thermal, by dividing by the known cross-sections, and an appropriate proportion of the thermal captures is a direct or dual estimate of the contribution made by the source neutron to the detectors and the 'question'.


BACKWARD FIELDS

• Sample from S1
• Tally:
  - No. of captures of fast neutrons from S1 by the fast detector, (NDF)_S1
  - No. of captures of slow neutrons from S1 by the slow detector, (NDS)_S1
• Sample from S2
• Tally:
  - No. of captures of fast neutrons from S2 by the fast detector, (NDF)_S2
  - No. of captures of slow neutrons from S2 by the slow detector, (NDS)_S2

In our case we have a slow detector only:
◇ φ1+ = [(NDF)_S1 + (NDS)_S1]/S1 = (NDS)_S1/S1
◇ φ2+ = [(NDF)_S2 + (NDS)_S2]/S2 = (NDS)_S2/S2

Figure 3: Algorithm 2: sampling from S1 and S2 to obtain the backward fields. In this system the backward fields are the importances of fast and slow neutrons to the slow capture process.

The averages of the sampled contributions, from the fast and thermal sources, provide the importances of fast and thermal neutrons respectively, Figure 3. In addition, the total provides the dual estimate of the 'question'. The variational functional or Lagrangian takes the form (Lewins 1965)

L[Φ, Φ+] = ∫ H^T Φ dx + ∫ Φ+T [MΦ + S] dx                              (1)

where C is the question of interest, ∫ H^T Φ dx, and MΦ + S = 0 is the Boltzmann equation, in operator form, for the flux, driven by a source S. Φ+ is the adjoint function or importance satisfying the adjoint Boltzmann equation driven by the detector distribution as a source, M*Φ+ + H = 0, with M* the operator adjoint to M. Suitable vectors and operators are to be understood. In our case, the model is a 2 × 2 matrix M representing the Boltzmann operator, S the two-component vector source, and H the two-component (one null) detector distribution.
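In the 2 × 2 model this post-processing amounts to a few lines of arithmetic. The sketch below is our illustration, not the authors' code: it evaluates equation (1) from Monte Carlo estimates of the two fields, and the way the matrix M is assembled from the tabulated cross-sections follows our reading of the model, so the numbers should be treated as placeholders.

```python
import numpy as np

def vpmc_estimate(phi, phi_adj, M, S, H):
    """Evaluate the Lagrangian (1): L = H.phi + phi_adj.(M phi + S).

    phi, phi_adj : Monte Carlo estimates of the forward and adjoint fields;
    M, S, H      : operator, source vector and detector distribution, with
                   M phi + S = 0 holding for the exact flux.
    L reduces to the exact detector reading C if either field is exact, and
    is accurate to second order in the errors of both.
    """
    return H @ phi + phi_adj @ (M @ phi + S)

# Hypothetical 2-group assembly from Tables I and II (our reading of the model)
SIG_1A, SIG_1S2, SIG_2A, SIG_2F, SIG_2D = 0.005542, 0.004933, 0.013920, 0.058300, 0.00816
M = np.array([[-(SIG_1A + SIG_1S2), 2.0 * SIG_2F],
              [ SIG_1S2,            -SIG_2A     ]])
S = np.array([1000.0, 1000.0])      # Problem 1: equal fast and thermal sources
H = np.array([0.0, SIG_2D])         # thermal detector only

phi_mc = np.array([4.5e5, 2.9e5])   # illustrative (made-up) forward estimates
adj_mc = np.array([0.93, 1.45])     # illustrative (made-up) importance estimates
print(vpmc_estimate(phi_mc, adj_mc, M, S, H))
```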


Quantity                        Units                  Problem 1     Problem 2
φ1                              neutrons cm⁻² s⁻¹      455382.7      358651.3
φ2                              neutrons cm⁻² s⁻¹      291913.7      228084.3
φ1+                             counts/neutron         0.9305841     0.9305841
φ2+                             counts/neutron         1.45143       1.45143
C = ∫ H^T Φ dx = Σ2d φ2         counts s⁻¹             2382.016      1861.168
C_dual = ∫ Φ+ S dx              counts s⁻¹             2382.016      1861.168

Table III: Exact solutions

[Figure 4 plot: histograms of the thermal detector response with and without variational processing, N = 1000, with the analytic value marked for comparison.]

Figure 4: Normal distribution fit to the histograms of C, L_1^1000 and L_10^100.

2.1.1 Results

Suppose we take a total of N trials, each consisting of H histories, of which half are drawn from the fast source and half from the slow. We have to consider at what stage the variational processing should be applied. It could range from immediate processing, once each trial has been followed and recorded, to processing delayed until all N trials have been completed. Except in this latter extreme case, the average of the multiple processings would then be formed in the expectation of getting a better answer; the exception is obviously when there is only a single VPMC-processed result determined. Our first task is to determine whether there is a best choice of batches.


              Analogue Monte Carlo                VPMC
Trials N      accuracy    standard deviation      estimator    accuracy     standard deviation
100           0.2143      16.3353                 L_1^100      0.4629       7.6656
                                                  L_10^10      0.1793       2.6666
                                                  L_100^1      0.0343       Not available
1000          0.1841      5.0806                  L_1^1000     -0.14215     2.6047
                                                  L_10^100     0.02823      0.4082
                                                  L_1000^1     0.00675      Not available
10,000        0.08834     1.6558                  L_1^10000    -0.3246      0.8043
                                                  L_100^100    0.00722      0.07106
                                                  L_10000^1    -0.000482    Not available

Table IV: Accuracy and sample standard deviation with increasing sample size

Since we have in this problem 1000 fast neutrons and 1000 slow neutrons, the number of histories followed in one trial is H = 2000. We have therefore studied the breakdown of N = 100, 1000 and 10,000 trials between a range from N batches averaged to estimate C and a single batch (total post-processing). Our notation is that L_n^m represents the Lagrangian averaged over m batches each containing n trials, so that N = nm is the total number of trials in all. Hence L_N^1 represents one extreme and L_1^N the other. The results are presented in Table IV and shown in Figure 4 for N = 1000. We know by analysis the exact result sought, and we see that there is a steady improvement in accuracy as we move to post-processing. By accuracy we mean a comparison with the exact result (estimate minus exact), in contrast to the standard deviation (square root of the variance) from the sampling process. Because of the processing of data, we do not have simple sampling theory in which the batch mean is an unbiased estimate of the population mean (as would be the case with conventional analogue Monte Carlo). Indeed we can use the batch data to estimate the variance, and Table IV compares, for different numbers of total trials N, the batch mean with the achieved accuracy at each stage and enables the following conclusions to be drawn:²


1. The variational processing introduces a bias which is smaller as the sample size increases, as shown by the accuracy;

2. Simultaneously, the variational processing reduces the variance of the estimate in a way that seems advantageous compared to the biasing at every stage, as shown by the standard deviation.

²Similar results are available analytically in the variational treatment of elementary Kolmogorov equations (Lewins, Parks & Chang 1997).
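The batching notation can be made concrete with a short sketch, using the same M, S and H as in the illustration after equation (1); how each trial's tallies are normalized into field estimates is left to the caller, since that normalization is only schematic here.

```python
import numpy as np

def batched_lagrangian(phi_trials, adj_trials, M, S, H, n):
    """Compute L_n^m: the Lagrangian averaged over m batches of n trials each.

    phi_trials, adj_trials : arrays of shape (N, 2) of per-trial estimates of
    the forward and adjoint two-group fields, with N = n*m total trials.
    Returns the batch-averaged estimate and, when m > 1, the sample standard
    deviation of the batch values (the internal evidence of accuracy).
    """
    phi_trials = np.asarray(phi_trials)
    adj_trials = np.asarray(adj_trials)
    N = len(phi_trials)
    m = N // n
    values = []
    for b in range(m):
        sl = slice(b * n, (b + 1) * n)
        phi_b = phi_trials[sl].mean(axis=0)     # batch-averaged forward field
        adj_b = adj_trials[sl].mean(axis=0)     # batch-averaged adjoint field
        values.append(H @ phi_b + adj_b @ (M @ phi_b + S))
    values = np.array(values)
    return values.mean(), (values.std(ddof=1) if m > 1 else None)
```

Setting n = N gives the single post-processed estimate L_N^1, which Table IV shows to be the most accurate but which returns no internal spread; smaller n trades some accuracy for a variance estimate, the compromise discussed around Figure 6 below.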


Figure 5: (a) Normal probability plots for C_AnalogueMC and L_1^1000; (b) normal probability plot for L_10^100. (Data from Problem 1.)

Note that a single post-processing has the advantage of reducing the computing costs, which in this problem become trivial. However, the single batch does not allow any estimate of the variance, and hence no internal evidence of accuracy can be provided. One might rely on the corresponding error estimates of the conventional analogue Monte Carlo, or choose to sub-divide the total number of trials and repeat the variational processing simply to estimate the variance, while undertaking in addition the post-processing of all the trials in one batch to obtain the most accuracy. We can also check from the batch data what the distribution is; we see from Figure 5 that it is not normal;³ one might go on therefore to determine such information as the spread, in standard deviations, of the 95% distribution. We conclude from Problem 1 that the variational principle should be applied as post-processing, that is once and once only, to the best estimates that are available for the forward and adjoint fields from sampling.

³The normal probability plots have three graphic elements. The plus signs show the empirical probability versus the data value for each point in the sample. The solid line connects the 25th and 75th percentiles of the data and represents a robust linear fit (i.e., insensitive to the extremes of the sample). The dashed line extends the solid line to the ends of the sample (Jones 1993).


Figure 6: (a) Accuracy of L_n^m versus the accuracy of C from analogue Monte Carlo, for different numbers of trials per batch in Problem 1. (b) Standard deviation of L_n^m versus that of C from analogue Monte Carlo, for different numbers of trials per batch in Problem 1. The number of trials in both cases is 1000.

When the size of the singly processed sample is increased, Table IV, we see that, as we might expect, both analogue and variationally processed Monte Carlo improve in accuracy, the variational processing maintaining its advantage. The actual accuracy achieved is considerably better than the sample standard deviation available from batch processing suggests. In Problem 2, therefore, we shall consider only post-processing.

Figure 6 shows the achieved accuracy in two forms. Figure 6(a) uses the known exact result to plot the error against trials per batch. This is seen to decrease, subject to statistical fluctuations, and suggests entire post-processing to achieve the best results. But more generally the exact result is not known a priori and we have to account for biasing. Although the most accurate result may come from processing all trials in one single batch, a division into multiple batches would allow production of the sample variance and hence the expected accuracy. Figure 6(b) shows such an assessment. Note that no variance can be estimated from a single batch such as we propose to use to get the best accuracy. If such an estimate is essential, therefore, we can propose two routes:

a. to use the variance, and hence standard deviation, available from the pure analogue Monte Carlo result. This is seen to greatly overestimate the possible inaccuracy.

b. to undertake some multiple batch processing solely intended to estimate a better variance.


We see that in this problem, for N = 1000, ten batches each of 100 trials is a compromise that allows a better estimate of the variance, although it is still crude compared to the accuracy actually achieved by single-batch post-processing.

2.2 Problem 2

In the previous problem both sources were present and the importance of both fast and thermal neutrons was therefore estimated simultaneously with the fast and thermal fluxes, an obvious advantage. We consider in Problem 2 a case where this advantage is reduced by setting the thermal source to zero. Some effort is now essential to estimate the thermal importance for use in the variational principle. To do this we must undertake estimates not only of the real source pertinent to Problem 2, but also a sample from a fictitious source. The principal question is then how much effort to apply to the fictitious sampling, which must be at the expense of the real sampling that provides the required estimates of both forward fields and one of the two backward fields.

We have considered two empirical approaches. The first is simply to say that real sampling provides for three of the four necessary estimates and so should have 75% of the effort. A second approach might be to argue that we need to know the importance of a neutron accurately if there are many such neutrons, and not at all if there are none in the problem. Equally we could hazard that we need to know the number of neutrons (or associated flux) accurately where their importance is high, less accurately where their importance is low. We have not yet developed this second approach, which would seem to require running estimates of both fields to drive the subsequent sampling to an ideal result, a sort of intelligent 'hoisting by one's own bootstraps' which we think to have considerable promise.

What is the best distribution of effort in the present case? A figure of merit is needed to compare the results. The general figure of merit adopted in Monte Carlo work is the reciprocal of the variance of the mean multiplied by the total CPU time; this is equally the reciprocal of the variance of the population multiplied by the CPU cost of calculating one sample (Lewis & Miller 1984), an approach founded on supposing the samples are normally distributed. In view of the biasing, we use the square of the analytically known error rather than the sample variance; in production problems, where the result is not known a priori, the sample variance would necessarily be used of course. Figure 7 shows that, for the same total sampling, the optimum distribution that raises the figure of merit to a maximum in Problem 2 is some 90% real and 10% fictitious sampling, rather than our 75:25 prescription.⁴ Nevertheless, this empirical prescription, while not optimum, yields a marked improvement over analogue Monte Carlo sampling.

⁴Some statistical fluctuation is evident about T_real/T_total = 0.95 without affecting the evident improvement in the figure of merit around T_real/T_total = 0.9.
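In symbols, writing σ²_mean for the variance of the mean, σ²_pop for the population variance, T for the total CPU time and t for the CPU cost of one sample (so that T = Nt), the figure of merit as we read it here is

```latex
% Figure of merit (Lewis & Miller 1984); in this paper the square of the
% analytically known error replaces the sample variance because of the bias.
\mathrm{FOM} \;=\; \frac{1}{\sigma_{\mathrm{mean}}^{2}\,T}
             \;=\; \frac{1}{\sigma_{\mathrm{pop}}^{2}\,t},
\qquad \sigma_{\mathrm{mean}}^{2} \;\rightarrow\; \bigl(\hat{C}-C_{\mathrm{exact}}\bigr)^{2}.
```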


Figure 7: Figures of merit for the analogue MC and VPMC estimates, as a function of the ratio of real to total sampling costs. The FOM for L_1000^1 is multiplied by 0.05.

3 Discussion

Our model is elementary but provides a check of the results by analysis. Problem 1 demonstrates that the variational processing should be undertaken after all sampling has been taken for the most accurate results, albeit this accuracy cannot be estimated from internal evidence unless multiple batch processing is used or the analogue results, including the variance, are calculated. We think it important to emphasise that VPMC introduces a bias, but the reduction of error it achieves consistently outweighs this bias.

Our significant finding in both problems is that variational processing can increase the accuracy of Monte Carlo estimates at no additional sampling cost. In Problem 1, indeed, no additional sampling was necessary. From Table IV, it can be seen that when the number of trials is 1000, the standard deviation obtained by analogue Monte Carlo methods is 5.0806, compared to 0.4082 achieved by variational processing with 100 batches. To achieve this value of standard deviation by analogue Monte Carlo methods, the number of trials would have to be more than 100 times larger, since the standard deviation falls only as the square root of the number of trials and (5.0806/0.4082)² is roughly 155. The CPU time is consequently increased by the same order of magnitude. Hence the figure of merit of VPMC is also some 100 times greater, an indication of the power of the VPMC technique in reducing the variance as well as increasing the accuracy.

In Problem 2, fictitious sampling was used when real sampling did not of itself provide the estimates of the importance function required for the question at hand. The demonstration of an optimum distribution between real and fictitious sampling throws back a question on Problem 1, of course.


Is it best to undertake real sampling in proportion to the real source distribution, or would an improvement in the accuracy of the importances (calling for a departure from the source distribution and a subsequent correction of the bias) lead to better results? We have not pursued this question in its present form because we have embarked, in the next stage of research, on a method which replaces fictitious sampling with what we call virtual sampling, at no additional sampling cost. We can illustrate the concept here, although the present model is not sufficiently challenging to make it suitable for illustrating the value of virtual sampling. We note that a fast neutron from the source is likely to be slowed down to a thermal neutron. If we identify this thermal neutron as coming from a virtual thermal source and interpret the original sample sequence accordingly, we have virtually sampled from a thermal source on sampling from a fast source. We are aware of the complications of correlation this introduces, matters to be addressed in a further paper employing a more challenging model.

Finally, we remind the reader that the use made here of the adjoint function or importance has apparent advantages over conventional adjoint biasing:

1. The present method introduces bias, but then so does the use of what are at best approximate adjoint fields in conventional biasing.

2. The results of such bias and the improvement in accuracy can be demonstrated readily in the VPMC method.

3. The application of the operators of the Lagrangian introduces a further improvement in the result, analogous to a further iteration or term in a von Neumann series, not available in conventional adjoint biasing.

Inherently, we seek to use information associated with individual sources that is available in the Monte Carlo simulation but generally ignored in the final averaging of the neutron distribution or flux, where it is forgotten from whence these neutrons came. This information provides the adjoint field or importance for the variational processing. One further weakness is that the particular question to be calculated must be known in advance if the importance of neutrons to that question, i.e. to that particular detector distribution, is to be determined. We could readily in the present model overcome this deficiency by calculating separately (i.e. keeping the necessary records during sampling) for a fast detector as well as the thermal detector. Then any subsequent question would involve some proportion of fast and some proportion of thermal detectors, with a pre-computed answer having been variationally post-processed for this linear problem. This would double the storage required and increase the record keeping, but answer all questions.


If we anticipate extending this approach to more general transport problems, we are in effect saying that we shall estimate the Green's function of the problem, the response at any field point to a source at any field point. Such information being available, any problem and any question can be answered. We feel that the problem of storage capacity for the Green's function for any realistic model is large but not insoluble, anticipating ever cheaper storage media. Perhaps more challenging is the cost of writing the detailed information to storage at every point in what will inevitably be a complex Monte Carlo model. Even this may not be insoluble in terms of the re-introduction of virtual memory combined with our virtual sampling. We note that if conventional variational methods are efficient using Monte Carlo field estimates, then all the related techniques of perturbation theory (known to be challenging in Monte Carlo theory) and higher-order variational and perturbation methods become accessible. Much of course remains to be done, but we believe the present results to be encouraging.

References

Dubi, A. (1986), Monte Carlo Calculations for Nuclear Reactors, in Y. Ronen, ed., 'CRC Handbook of Nuclear Reactor Calculations', Vol. II, CRC Press, Inc.

Jones, B. (1993), Statistics Toolbox User's Guide, Math Works, Inc.

Lewins, J. D. (1965), Importance: The Adjoint Function, Pergamon Press.

Lewins, J. D., Parks, G. T. & Chang, M. (1997), Private communication.

Lewis, E. E. & Miller, W. F. (1984), Computational Methods of Neutron Transport, John Wiley & Sons.