The evaluation of limiting dilution assays

Journal of Immunological Methods, 49 (1982) R11-R23
Elsevier Biomedical Press

Review article

THE EVALUATION OF LIMITING DILUTION ASSAYS

S. FAZEKAS DE ST.GROTH
Basel Institute for Immunology, Grenzacherstrasse 487, CH-4058 Basel, Switzerland

(Received 28 October 1981, accepted 31 October 1981)

Theoretical and practical aspects of limiting dilution assays are considered, and the maximum likelihood solution is illustrated by two worked examples. These examples show up the inferiority of the conventional regression analysis, while the advantages of maximum likelihood over minimum chi-square methods are covered in the discussion. A program which will perform all the necessary calculations on a pocket calculator is listed in the Appendix.

Key words: limiting dilution; killer assays; regression analysis; maximum likelihood; minimum chi-square

Limiting dilution assays are not a monopoly of immunologists: extinction processes in physics and a variety of biological assays rest on the same principle. The problem of evaluating such assays has attracted the attention of mathematicians, and a complete solution was offered some 60 years ago (Fisher, 1922), with scores of special developments added since. Immunologists, though, still hold the monopoly on managing to ignore these developments and treating their data in a fashion that is both inefficient and misleading. This nonchalant attitude will be somewhat more difficult to maintain after a paper by Taswell (1981), addressed to immunologists and demonstrating the error of their ways on immunological material. It is a pity that he advocated a solution which, although asymptotically correct, is both more cumbersome than some alternatives and of questionable validity when applied to small numbers such as are commonly encountered in experimental work. Both aspects will be discussed, once the theory and the practice of evaluating limiting dilution assays have been outlined.

It should be said right at the outset that the results of a theory can be efficiently applied even without mastering the theory, just as with a complex piece of equipment it is enough to know where to put the sample and which button to press. The complex piece of machinery in this case is a pocket calculator, and a program which will perform all the necessary calculations is given in the Appendix.

PRINCIPLES

Dilution assays demand a clear distinction between a response and the absence of a response. This condition is readily met in cloning tests (where a group of cells is either present or not), but in killer assays (where an arbitrary threshold has to be set, separating spontaneous release of 51Cr from release by killed target cells) it may lead to serious bias. As practised, dilution assays imply also that killing is a single-hit process, i.e., the presence of a single killer precursor cell will start a sequence that leads to the eventual lysis of the attacked target cells. Multi-hit and cooperative mechanisms are conceivable, and experiments can be devised which will distinguish between them. But these are experiments and not assays. Assays exploit an already established and presumably correct model.

If a suspension of cells is sampled, identical volumes will not contain the same number of cells, however well the suspension was stirred. Random distribution means that a sample may contain 0, 1, 2, ..., i, ... cells, with a probability given by the Poisson formula

p_i = e^(-m) m^i / i! ,

where m is the average multiplicity, and the probabilities add up to 1. If the cells in a sample are not counted but their presence assessed by some side effect, such as 51Cr-release from killed target cells, the exact number of responders remains unknown and the only information we obtain is whether a sample had no responders in it (and was hence negative), or had at least one responding cell in it (and was hence positive). In such quantal dilution assays the proportion of negative responders is still represented by the zero term of the Poisson distribution

p_0 = e^(-m) m^0 / 0! = e^(-m) ,

while the positive responders are represented by the sum of all the other terms. And, since the probabilities add up to 1, this sum equals

Σ_{i=1}^{∞} e^(-m) m^i / i! = 1 - p_0 = 1 - e^(-m) .
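The single-hit reading of the zero term can be checked by simulation. The sketch below (Python, not part of the original paper) draws a Poisson-distributed number of cells per culture using Knuth's multiplication method and scores a culture negative only when it received no cells; the negative fraction approaches e^(-m), so -ln(r/n) recovers the multiplicity.

```python
import math
import random

def poisson_sample(m, rng):
    # Knuth's method: multiply uniforms until the product drops below e^-m;
    # the number of extra factors needed is Poisson(m) distributed.
    limit = math.exp(-m)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def fraction_negative(m, wells, rng):
    # A culture is negative only if it received zero precursor cells.
    return sum(poisson_sample(m, rng) == 0 for _ in range(wells)) / wells

rng = random.Random(42)
m = 1.0
p0 = fraction_negative(m, 20000, rng)
print(p0, math.exp(-m))   # observed negative fraction vs. the Poisson zero term
print(-math.log(p0))      # recovered multiplicity, close to 1.0
```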

It seems intuitively clear that by observing the fraction of negative responses we can estimate the multiplicity, since m = -ln p_0. But this estimation is not as straightforward as one could wish. To begin with, we do not observe p_0 but r/n, the fraction of negative responders. By regarding the cell input as an independent (i.e., error-free) variable and setting up a regression of ln(r/n) on m (which is the most common way of presenting and 'evaluating' the data), all the variation is allotted to r/n. This is an elementary blunder that would not be tolerated in any introductory statistics course. Apart from yielding an incorrect numerical answer, it gives also the false impression that m, as estimated, has no error. Various cosmetic treatments, such as weighting the r/n values by their binomial expectations or by their inverse variance (both considered and rejected by Taswell, 1981), do not make good the basic mistake.

Mathematically, r/n = e^(-m) is a function of both the observed number r and the hypothetical frequency m. Regarded as a function of r, this expression is the probability with which r/n is observed. Regarded as a function of m, it is not a probability. Fisher (1922) termed it a likelihood and developed a theory of estimation which is independent of the theory of probability. Probability and likelihood have different mathematical properties. The most important of these properties is that the information, I, contained in a set of data can be defined independently of the way a particular parameter is estimated. Further, maximizing the likelihood function by adjustment of the parameter value will always provide an estimate which has the property that the limiting value of its variance equals 1/nI. The efficiency of other methods can thus be evaluated: unless their variances equal 1/nI they are not extracting the full amount of information available in the data.

The likelihood function, for discrete distributions, is the multinomial term in which the expected frequencies are raised to the power of the corresponding observed numbers. Thus, for a quantal dilution assay with only two response classes, we have the binomial term

l = [n! / (r!(n - r)!)] (e^(-m))^r (1 - e^(-m))^(n-r) .

Maximization means differentiating with respect to the parameter and equating the differential to 0, in order to obtain an equation of estimation. As differentiating multiple products is bothersome, recourse is made to a device: an expression and its logarithm will show a maximum at the same value, and therefore conventionally the logarithm of the likelihood, denoted by L, is maximized. We have then

L = ln l = ln[n! / (r!(n - r)!)] + r(-m) + (n - r) ln(1 - e^(-m))

and

dL/dm = -r + (n - r) e^(-m) / (1 - e^(-m)) = 0 ,

the factorial term vanishing on differentiation. Solving, we obtain m = -ln(r/n). Thus, provided multiplicity is tested at a single dilution, the intuitive and the maximum likelihood estimates coincide. But it would be a rare exception that the amount of prior knowledge allowed testing at one dilution only. When multiplicity is tested over a series of cell concentrations and the information has to be combined, intuition (i.e., fitting a linear regression) and the maximum likelihood solution do not give the same answer. Here we have

L = Σ_{i=1}^{k} [ ln(n_i! / (r_i!(n_i - r_i)!)) - r_i c_i f + (n_i - r_i) ln(1 - e^(-c_i f)) ] ,
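The single-dilution result can be confirmed numerically: maximizing the binomial log-likelihood lands on -ln(r/n). A minimal sketch (the counts n = 60, r = 25 are hypothetical; ternary search is valid because the log-likelihood is concave in m):

```python
import math

def log_likelihood(m, n, r):
    # Binomial log-likelihood of r negative cultures out of n, with p0 = e^-m.
    # The constant ln(n!/(r!(n-r)!)) is dropped: it does not affect the maximum.
    return r * (-m) + (n - r) * math.log(1.0 - math.exp(-m))

def ml_multiplicity(n, r, lo=1e-6, hi=20.0):
    # Ternary search for the maximum of a concave function of m.
    for _ in range(200):
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        if log_likelihood(a, n, r) < log_likelihood(b, n, r):
            lo = a
        else:
            hi = b
    return (lo + hi) / 2

n, r = 60, 25                      # hypothetical single-dilution counts
m_hat = ml_multiplicity(n, r)
print(m_hat, -math.log(r / n))     # the two estimates coincide
```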

where k stands for the number of dilution steps, and c_i, n_i, r_i for the concentration of cells, the number of replicate cultures and the number of negative responses, respectively, at dilution i, and m_i = c_i f. In general, this equation does not reduce to any simple form capable of explicit solution. It is solved by an iterative procedure: a provisional value of m is guessed and sequential adjustments are calculated from a first-order Taylor-Maclaurin expansion of the first differential coefficients, the iteration continuing as long as is necessary for adequate convergence of the solutions. Convenient routines have been worked out for several kinds of assay, including limiting dilution (for review and detailed treatment see Finney, 1952-1964-1980). But these solutions still suffer from the fact that the sampling distribution of the estimate is likely to be far from normal. This and some other difficulties can be overcome by estimating ln m instead of m, by use of the log-log transformation of Mather (1949). Accordingly, we shall not stop at transforming r/n = e^(-cf) into -ln(r/n) = cf, but go one step further and take logarithms once more, making ln(-ln(r/n)) = ln c + ln f. This is the equation of a straight line with unit slope. Because of the constancy of the slope the arithmetic can be considerably simplified by arranging that each cycle of iteration gives an adjustment to the previous estimate of ln m, rather than a new version of the equation of estimation (Finney, 1951). This is the treatment proposed for the evaluation of limiting dilution assays.

PRACTICE

The most informative range

The method of maximum likelihood, by assessing the amount of information contained in the data, will also tell in which part of a dilution series the information is concentrated. Fig. 1, based on Mather's (1949) calculations and standardized to have a maximum at 1, illustrates two important features of quantal dilution assays. Unlike the usual dose-response curves, which give maximum information at, and are also symmetric about, their median, a limiting dilution assay is asymmetric in respect of the information it provides, and the peak is at a multiplicity of 1.59, corresponding to a fraction of 0.203 negative responders. The most informative range (comprising levels of information above 90% of the maximum) falls between multiplicities of 1 and 2.5, corresponding to negative fractions 0.1-0.37. Information obtainable outside this range drops precipitously at multiplicities above 3 (1.8% at m = 4, 0.1% at m = 5), and gradually at multiplicities below 1 (59.5% at m = 0.5, 34% at m = 0.25, 18.1% at m = 0.125). If there is any prior knowledge of the multiplicity, an informative test should cover the range 1 < m < 2.5. If such knowledge is not available, it is better to err on the side of low multiplicities, i.e., having a larger fraction of negative cultures.

Fig. 1. The information yielded by quantal assays. The function I = n(ln p)^2 p/(1 - p) is plotted, with n = 1.544 to give a maximum of 1. The range of >90% information is bounded by the dashed lines.
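The location of the peak can be recovered from the information function quoted in the legend of Fig. 1. The sketch below (not from the paper) grid-searches I(m) = (ln p)^2 p/(1 - p) with p = e^(-m); it reproduces the maximum near m = 1.59 (p about 0.203) and the relative information at the low multiplicities quoted above.

```python
import math

def info(m):
    # Information per culture in the log-log metric: I = (ln p)^2 * p / (1 - p),
    # with p = e^-m the expected fraction of negative cultures (Fig. 1 legend).
    p = math.exp(-m)
    return (math.log(p) ** 2) * p / (1.0 - p)

# Locate the maximum by a fine grid search over the multiplicity.
grid = [i / 10000 for i in range(1, 100000)]
m_best = max(grid, key=info)
peak = info(m_best)

print(m_best, math.exp(-m_best))   # peak multiplicity and negative fraction
for m in (0.5, 0.25, 0.125):
    print(m, info(m) / peak)       # relative information at low multiplicities
```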

The number of replicates

As the information yielded by one dilution in a quantal assay is

I = n [ (1/p)(dp/dy)^2 + (1/(1 - p))(d(1 - p)/dy)^2 ]

and in the log-log transformation y = ln(-ln p), so that dp/dy = p ln p and d(1 - p)/dy = -p ln p, we have

I = n (p ln p)^2 [ 1/p + 1/(1 - p) ] = n (ln p)^2 p/(1 - p) .

Thus the information is directly proportional to the number of cultures tested at each level and, obviously, the larger the number of replicates the more precise the estimate will be. This would suggest setting up all cultures at the dilution that gives maximal information. Such ordainments are useless in practice: if the multiplicity is accurately known beforehand, there is no great point in performing further assays. But it suggests also, and that is a practical suggestion, using any prior knowledge and allocating different numbers of cultures to different dilutions, more to those expected to fall in the most informative range and fewer to the low-information limbs of the curve in Fig. 1.

Numbers of replicates suggested for dilution assays by Lefkovits and Waldmann (1979), viz. 180 per dilution and not less than 60, are unrealistic for killer assays. Neither are they necessary, as within the informative range the binomial variance is stable enough to give reliable estimates with replicates of the order of 20. The boundaries of the most informative range being only 2.5-fold apart, it is wasteful to set up doubling dilutions, as not more than two could fall within that range. Any prior knowledge of the multiplicity should prompt small dilution steps within the limits 0.6 < m < 3 (covering the >75% information level) or, if such knowledge is not available, linear dilution steps over a 10-20-fold range.

Estimation of the multiplicity

The experimental data comprise the number of dilution steps (k), the number of cells tested at dilution i (c_i), the number of replicate cultures set up at dilution i (n_i), and the number of negative cultures at that dilution (r_i). The required calculations are given in detail, so that they can be performed without the use of a calculator. But iterative computations, especially if many digits have to be carried, are awkward; the program given in the Appendix will provide the same answers rather more accurately and within seconds.

TABLE 1

Evaluation of limiting dilution assays. I. Provisional estimation of the multiplicity.

Experiment 1 (k = 6)

c x 10^-4   n    r    p       ln c     log-log p   -ln m (a)
1           60   42   0.700    9.210   -1.031      10.241
2           60   37   0.617    9.903   -0.727      10.630
3           60   25   0.417   10.309   -0.133      10.442
4           60   14   0.233   10.597    0.375      10.221
5           60   15   0.250   10.820    0.327      10.493
6           60    5   0.083   11.002    0.910      10.092

sum = 62.118;  ln m_0 = 10.353

Experiment 2 (k = 4)

c x 10^-4   n    r    p       ln c     log-log p   -ln m (a)
1           60   60   1.000
2           60   57   0.950    9.903   -2.970      12.874
3           60   36   0.600   10.309   -0.672      10.981
4           60    6   0.100   10.597    0.834       9.763

sum = 33.618;  ln m_0 = 11.206

(a) -ln m = ln c - log-log p.
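The Table 1 arithmetic is easy to reproduce without the log-log tables. In the sketch below the cell inputs are assumed to be 1-6 (resp. 1-4) x 10^4 cells per culture, as implied by the ln c column; dilutions where every culture scores negative (p = 1) contribute no estimate and are skipped.

```python
import math

def provisional_ln_m(cells, n, r):
    # Average of ln c - ln(-ln p) over informative dilutions (0 < p < 1),
    # i.e. Mather's log-log transformation applied point by point (Table 1).
    estimates = []
    for c, ni, ri in zip(cells, n, r):
        p = ri / ni
        if 0.0 < p < 1.0:
            estimates.append(math.log(c) - math.log(-math.log(p)))
    return sum(estimates) / len(estimates)

# Experiment 1: 1-6 x 10^4 cells per culture (assumed), 60 replicates each.
exp1 = provisional_ln_m([10000 * i for i in range(1, 7)], [60] * 6,
                        [42, 37, 25, 14, 15, 5])
# Experiment 2: 1-4 x 10^4 cells per culture (assumed).
exp2 = provisional_ln_m([10000 * i for i in range(1, 5)], [60] * 4,
                        [60, 57, 36, 6])
print(exp1, exp2)   # provisional ln m_0 for the two experiments, cf. Table 1
```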


The two paradigmatic examples illustrating the Lefkovits-Waldmann (1979) way of evaluating dilution assays will provide the raw data. This choice then allows direct comparison of their estimates with the maximum likelihood solution.

The first step, calculation of a provisional m value, is set out in Table 1. Each experimental point provides an estimate, since ln m = ln(-ln p) - ln c. Thus the numerical work involves (1) dividing r_i by n_i to obtain p_i; (2) looking up the logarithm of c_i; (3) looking up the value of log-log p in the tables provided by Mather (1949) or Finney (1952-1964-1980); and (4) subtracting log-log p_i from ln c_i, to obtain -ln m_i. The average of the latter will be the provisional estimate of ln m.

TABLE 2

Evaluation of limiting dilution assays. II. The first cycle of iteration.

Experiment 1 (ln m_0 = 10.353)

p       Y        y        nw        nwy       nwy^2
0.700   -1.143   -0.116    16.243    -1.890    0.220
0.617   -0.450    0.262    27.351     7.155    1.872
0.417   -0.044    0.088    34.255     3.026    0.267
0.233    0.243   -0.129    37.827    -4.881    0.630
0.250    0.466    0.145    38.857     5.636    0.817
0.083    0.649   -0.228    38.031    -8.656    1.970

sums: 192.564, 0.390, 5.776;  y-bar = 0.00202

Experiment 2 (ln m_0 = 11.206)

p       Y        y        nw        nwy       nwy^2
1.000   -1.995    1.071     7.616     8.158     8.739
0.950   -1.302    0.908    14.198    12.890    11.702
0.600   -0.897   -0.240    19.821    -4.752     1.139
0.100   -0.609   -1.522    24.559   -37.376    56.883

sums: 66.194, -21.080, 78.463;  y-bar = -0.31845

The second step, the iterative solution of the maximum likelihood equation, follows the routine devised by Finney (1951) and set out in detail in his classic on bioassay (Finney, 1952-1964-1980). The calculations are so arranged that each cycle of iteration provides an adjustment to the provisional ln m. The first cycle, based on ln m_0, is illustrated in Table 2. We require (1) the expected log-log Y_i = ln c_i - ln m_0; (2) the working deviate y_i = p_i A - Y_0, where A (the 'range') and Y_0 (the 'minimum working deviate') corresponding to a particular value of Y may be looked up in the tables of Mather or Finney (either source provides also a derivation of these statistics); (3) w_i, the weight relative to the expected log-log, is found in the same tables, allowing calculation of n_i w_i; (4) the last two columns, n_i w_i y_i and n_i w_i y_i^2, are computed from entries of the previous two. The adjustment provided by this cycle is the weighted mean of the working deviates, y-bar = Σnwy/Σnw. Thus the new provisional estimate of the multiplicity will be ln m_1 = ln m_0 + y-bar, and the second cycle of iteration is entered with the improved estimate ln m_1 instead of ln m_0. Iteration continues until the improvement on the previous estimate is negligible, say < 10^-6 in ln m.

Table 3 lists the starting estimates and corrections obtained in successive cycles. With the more extensive as well as more regular data of experiment 1 (cf. the last column in Table 1), already the provisional estimate of the multiplicity is within 0.2% of the final maximum likelihood estimate. Iteration in this case serves only to define the confidence limits of the estimate more accurately. In experiment 2, with only 3 scattered points on which to base calculations, the provisional estimate differs from the final by 36%. But even here a single cycle reduces the difference to 0.7%. Comparing the maximum likelihood estimates with those obtained by fitting a linear regression constrained to pass through the origin (Lefkovits and Waldmann, 1979), we find a discrepancy of 9% (28,818 vs. 31,426) in the results of experiment 1 and 94% (27,777 vs. 53,857) for experiment 2.

TABLE 3

Evaluation of limiting dilution assays. III. Iterative solution of the maximum likelihood equation.

Experiment 1

Cycle   m        ln m      y-bar
0       31 361   10.3533    0.00202
1       31 425   10.3554    0.00002
2       31 426   10.3554    0.00000

m = 31 426;  95% range: 27 285-36 194;  chi^2 (5) = 5.767, p = 0.329

Experiment 2

Cycle   m        ln m      y-bar
0       73 545   11.2057   -0.31845
1       53 487   10.8872    0.00642
2       53 832   10.8936    0.00044
3       53 855   10.8941    0.00003
4       53 857   10.8941    0.00000

m = 53 857;  95% range: 43 463-66 737;  chi^2 (3) = 65.384, p < 10^-9
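The same estimates can be reached without Finney's working-deviate tables by maximizing the joint likelihood directly. The sketch below is not the paper's routine (and the cell inputs are again assumed to be multiples of 10^4); it bisects the score equation dL/df = 0 for the precursor frequency f and reports m = 1/f, landing on the Table 3 values.

```python
import math

def score(f, cells, n, r):
    # dL/df for L = sum r_i ln p_i + (n_i - r_i) ln(1 - p_i), p_i = exp(-c_i f),
    # where f is the frequency of precursors per cell.
    s = 0.0
    for c, ni, ri in zip(cells, n, r):
        u = math.exp(-c * f)
        s += -ri * c + (ni - ri) * c * u / (1.0 - u)
    return s

def ml_cells_per_precursor(cells, n, r, lo=1e-8, hi=1e-2):
    # The log-likelihood is concave in f, so the score crosses zero once;
    # bisection on f therefore finds the maximum likelihood estimate.
    for _ in range(200):
        mid = (lo + hi) / 2
        if score(mid, cells, n, r) > 0:
            lo = mid
        else:
            hi = mid
    return 2.0 / (lo + hi)   # report m = 1/f, cells per precursor

c1 = [10000 * i for i in range(1, 7)]
m1 = ml_cells_per_precursor(c1, [60] * 6, [42, 37, 25, 14, 15, 5])
c2 = [10000 * i for i in range(1, 5)]
m2 = ml_cells_per_precursor(c2, [60] * 4, [60, 57, 36, 6])
print(round(m1), round(m2))   # maximum likelihood estimates, cf. Table 3
```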

Reliability of the estimate

As the Poisson distribution has only one parameter, solution of the maximum likelihood equation defines also the variance of the log multiplicity as 1/Σnw. This formula applies to the final cycle and will give incorrect answers if used in earlier stages of the iteration, especially since Σnw converges more slowly than m. Note also that the variance refers to ln m, and will thus set an asymmetric confidence interval about the estimated multiplicity. The confidence range is calculated as exp(ln m ± z/√(Σnw)), where z is the appropriate normal deviate chosen. The maximum likelihood estimates of multiplicity, together with their 95% confidence limits, are shown at the bottom of Table 3.

The variance, and hence the fiducial limits, are essentially a measure of the information contained in the data. They do not tell us whether the observations are also compatible with the hypothesis, in the present case with the Poisson model. To this end an additional statistic is required, testing the goodness of fit as

chi^2 (k-1) = Σnwy^2 - y-bar Σnwy .
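Both the asymmetric confidence range and the goodness-of-fit statistic can be checked numerically. In the sketch below the cell inputs are again assumed to be multiples of 10^4, and the Pearson chi-square on the fitted zero terms is used as an asymptotically equivalent stand-in for the working-deviate formula; it reproduces the experiment 1 entries of Table 3.

```python
import math

def ci_and_chi2(m_hat, cells, n, r, z=1.96):
    # m_hat: cells per precursor; expected negative fraction p_i = exp(-c_i/m).
    sum_nw = 0.0
    chi2 = 0.0
    for c, ni, ri in zip(cells, n, r):
        p = math.exp(-c / m_hat)
        # Weight of the log-log transformation (Mather, 1949): w = (ln p)^2 p/(1-p).
        sum_nw += ni * (math.log(p) ** 2) * p / (1.0 - p)
        # Pearson chi-square on the number of negative cultures (df = k - 1).
        chi2 += (ri - ni * p) ** 2 / (ni * p * (1.0 - p))
    half = z / math.sqrt(sum_nw)
    # Asymmetric interval: the variance refers to ln m, not to m itself.
    return m_hat * math.exp(-half), m_hat * math.exp(half), chi2

cells = [10000 * i for i in range(1, 7)]
lo, hi, chi2 = ci_and_chi2(31426, cells, [60] * 6, [42, 37, 25, 14, 15, 5])
print(round(lo), round(hi), round(chi2, 2))   # 95% range and chi^2, cf. Table 3
```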

Here, again, the values computed in the last cycle of iteration are to be inserted in the formula. The probabilities attached to particular chi^2 values are found in Finney's book, but also in practically any set of statistical tables.

Comparing the fiducial limits shown in Table 3 with those obtained by Lefkovits and Waldmann (1979) is not straightforward, as their formula (reducing Σy^2 by the component accounting for the regression) implies that all the error lies along the y-axis. The consequences are perplexing. Thus the 95% confidence limits drawn for experiment 1 (their Fig. 7.5) exclude 2 out of 6 experimental points, with a third falling on the borderline. Conclusions like this are liable to shake the confidence of the well-intentioned reader more than somewhat. In their second example, on the other hand, the upper limit lies above the abscissa (their Fig. 7.6), i.e., we are to expect about 1 in 13 sets to give more negative responses than there are cultures in that set. Like the thirteenth stroke of A.P. Herbert's crazy clock, this is not only itself immediately dismissed but casts a shade of doubt on all previous assertions.

Turning to the chi^2 values, we see that experiment 1 conforms with the model, a probability of 0.33 being well above 0.05, the conventional level of rejection. For experiment 2 the probability attached to a chi^2 of 65.4 with 3 df is so low that the single-hit model is clearly inappropriate. Note that the rejection of the hypothesis has nothing to do with the precision with which the multiplicity was estimated: the fiducial ranges for the two sets of data are comparable, only the chi^2 values are vastly different. The chi^2 values for goodness of fit to the linear regressions of Lefkovits and Waldmann are consistently higher than those obtained by maximum likelihood (7.48 vs. 5.77 and 103.2 vs. 65.4, resp.) but, for the two examples, arrive at the same conclusions. The trend, however, may lead to occasional rejection of inherently valid data, and that without specifying the reason: the test of the hypothesis is there confounded with assessing the precision of the estimate.

This, then, is the procedure statisticians call a convenient arrangement of computations, a routine microbiologists learnt to live with in the pre-transistor days. To opt for doing such numerical work by hand today takes a modicum of masochism, especially since the price of a pocket calculator-cum-printer is rather less than what the average laboratory spends on fetal calf serum in a week.

DISCUSSION

The popular approach of forcing a linear regression through the origin deserves no discussion. What remains is to compare the efficient ways of evaluating limiting dilution assays, and to see whether there are any permissible short-cuts.

The method of maximum likelihood, together with other principles of estimation, is presented with the required rigor in texts on statistical theory (e.g., Kendall and Stuart, 1961). Here it is sufficient simply to list some of its main properties. (1) The maximum likelihood estimate is sufficient: it utilizes all the information in a particular set of observations. (2) The estimate is efficient: if observations are increased without limit, the variance of the estimate will be the minimum possible. In finite samples, if a theoretically calculable minimum variance exists, the method of maximum likelihood will give it. (3) The estimate is consistent: if observations are increased without limit, the estimate will fall within any specified narrow interval that includes the true value of the parameter. Consistency is to be distinguished from lack of bias. An unbiassed estimate is one whose mean value is equal to the true value of the parameter, even in samples of fixed finite size. A consistent estimate, such as the maximum likelihood, need not be unbiassed. The obvious flaws of the method of maximum likelihood are its possible bias and the lack of formal proof for efficiency in small samples. No alternative principle, however, has yet been proved to be free from these disadvantages.

Another method of evaluating limiting dilution assays, that favored by Taswell (1981), is minimizing chi^2.
R20 cum-printer is rather less than what the average laboratory spends on fetal calf serum in a week. DISCUSSION The popular approach of forcing a linear regression through the origin deserves no discussion. What remains is to compare the efficient ways of evaluating limiting dilution assays, and to see w het her there are any permissible short-cuts. The m e t h o d of m a x i m u m likelihood, together with ot her principles of estimation, is presented with the required rigor in texts on statistical t h e o r y (e.g., Kendall and Stuart, 1961). Here it is sufficient to simply list some of its main properties. (1) The m a x i m u m likelihood estimate is sufficient: it utilizes all the in for m a t i on in a particular set of observations. (2) The estimate is efficient: if observations are increased w i t h o u t limit, the variance of the estimate will be the m i ni m um possible. In finite samples, if a theoretically calculable minimum variance exists, the m e t h o d of m a x i m u m likelih o o d will give it. (3) The estimate is consistent: if observations are increased w i t h o u t limit, the estimate will fall within any specified narrow interval that includes the true value of the parameter. Consistency is to be distinguished from lack of bias. An unbiassed estimate is one whose mean value is equal to the true value of the parameter, even in samples o f fixed finite size. A consistent estimate, such as the maximum likelihood, need n o t be unbiassed. The obvious flaws of the m e t h o d o f m a x i m u m likelihood are its possible bias and the lack of formal p r o o f for efficiency in small samples. No alternative principle, however, has y e t been proved to be free from these disadvantages. A n o th er m e t h o d of evaluating limiting dilution assays, that favored by Taswell (1981), is minimizing Xz. 
This method is also consistent and efficient, but efficiency for small samples or sufficiency of estimates for the quantal response problem have not been conjectured, still less proved. If observations are increased without limit, the minimum chi^2 tends to 2L, and thus the two methods are asymptotically equivalent. The discrepancy at the practical level 'arises from the fact that chi^2 is itself an approximation, applicable only when the number of responders and the expected number of responses are both large, and the difference between them of a lower order of magnitude' (Fisher, 1922). This is a serious objection to the general use of minimum chi-square for data of the extent usual in limiting dilution assays. And, as Kendall and Stuart (1961) prove, the loss of information inherent in estimation by minimum chi^2 increases without limit as the number of classes increases, even though the total number of observations is large.

Taswell's (1981) preference rests on the advocacy of Berkson, whose latest paper (1980), 'Minimum chi-square, not maximum likelihood!', is a powerful piece of pleading, just the kind that would sway the opinion of twelve-men-and-true. But the professionals at the symposium were not so easily mesmerized, and Berkson's thesis suffered devastating criticism by some of the most respected names in mathematical statistics (Efron, p. 469; Ghosh, p. 471; Pfanzagl, p. 479; Rao, p. 482). One point, made by Pfanzagl, bears directly on Taswell's way of comparing the two methods and is worth quoting in its essentials: '... efficient estimators differ mainly by a median-bias of order n^-1/2 (resp. a bias of order 1/n). Hence it is meaningless to compare different efficient estimators without eliminating their difference in bias prior to the comparison. This point was already raised by Ghosh and Subramanyam (1974), but not fully recognized by Berkson (1980). ... the maximum likelihood estimator, as well as the minimum chi^2 estimator, have a regular stochastic expansion with

Q_1 = (2π - 1)/12 - (k/2) n(p-bar - π)^2

for the maximum likelihood estimator, and

Q_2 = (k/12) Σ_{i=1}^{k} (p_i - π)^2 - (k/2) Σ_{i=1}^{k} (p_i - π)

for the minimum chi^2 estimator, where k is the number of dosages, p_i the fraction of responders under dosage i, and p-bar the average of p_i, i = 1, ..., k. Alone the fact that the minimum chi^2 estimator depends on p_1, ..., p_k (and not on p-bar only, even though p-bar is sufficient) generates bad feelings. If the maximum likelihood estimator is corrected so that its median-bias matches the median-bias of the minimum chi^2 estimator up to order n^-1/2, its risk falls short of the risk of the minimum chi^2 estimator by an amount of order 1/n. (On the other hand, the minimum chi^2 estimator, lacking property S [= not becoming independent of x by setting √n(p-bar - π) equal to zero], cannot be adjusted so as to match, let alone underbid, the maximum likelihood estimator.)'

A possible counter-argument that, since minimum chi^2 has been proven inferior in theory, it must be superior in practice, is not altogether compelling. While theory favors the maximum likelihood solution, it is still legitimate to ask what price we had to pay for using a simpler albeit less efficient method. Mather's log-log transformation (1949), as illustrated in Table 1, gives a provisional estimate of the multiplicity that comes very close to the true value, provided the data are regular. Clearly, classes where all responses are negative or all are positive do not contribute to the estimate, and hence the log-log transformation by itself cannot be sufficient. Nor is it unbiassed, as it takes no account of the varying weights attached to particular sets of observations. Such results do not lead to reliable estimates either of the variance or of conformity with a hypothesis. And even the simple computations require repeated looking up of tables, so that an inherently invalid short-cut takes actually more time and effort than feeding the same data into a small calculator and obtaining the correct answer within seconds.

APPENDIX

Program for evaluating quantal dilution assays

The program is written for a TI-59 calculator, fitted with the Statistics Module. (The keystroke listing printed at this point in the original is too degraded in this copy to reproduce.)

After accepting the data (instructions 103-144), the program computes a provisional estimate of the multiplicity by Mather's log-log transformation (instructions 145-191, cf. Table 1), iteratively solves the maximum likelihood equation (instructions 280-405, cf. Table 2), works out the best estimate for the multiplicity and its 95% confidence range (instructions 194-243, cf. Table 3), and evaluates the goodness of fit, listing the chi^2 value together with the attached probability (instructions 244-274 and 406-428, cf. Table 3). The program is run as follows:

(1) Enter the number of dilution steps, press key A.
(2) Enter the number of cultures for dilution i, press key B.


(3) Enter the number of cells at dilution i, press key R/S.
(4) Enter the number of negative cultures at dilution i, press key R/S.

For each dilution, repeat steps 2, 3 and 4 (if the number of cultures is the same as at the previous entry, step 2 need not be repeated). The printout comprises the input data, estimates of the log multiplicity and the appropriate corrections for each cycle of iteration, the final estimate of the multiplicity together with its lower and upper 95% confidence limits, and the chi^2 for goodness of fit with its degrees of freedom and the corresponding probability.

REFERENCES

Berkson, J., 1980, Ann. Stat. 8, 457.
Finney, D.J., 1951, J. Hyg. 49, 26.
Finney, D.J., 1952-1964-1980, Statistical Method in Biological Assay (Griffin, London).
Fisher, R.A., 1922, Phil. Trans. Roy. Soc. A 222, 309.
Kendall, M.G. and A. Stuart, 1958-1961-1966, The Advanced Theory of Statistics, 3 Vols. (Griffin, London).
Lefkovits, I. and H. Waldmann, 1979, Limiting Dilution Analysis of Cells in the Immune System (Cambridge University Press, Cambridge).
Mather, K., 1949, Biometrics 5, 127.
Taswell, C., 1981, J. Immunol. 126, 1614.