
Fisheries Research, 11 (1991) 197-223


Elsevier Science Publishers B.V., Amsterdam

The importance of noise in fish population models

Jon T. Schnute

Department of Fisheries and Oceans, Biological Sciences Branch, Pacific Biological Station, Nanaimo, B.C., V9R 5K6, Canada

ABSTRACT

Schnute, J.T., 1991. The importance of noise in fish population models. Fish. Res., 11: 197-223.

This paper describes simple, realistic circumstances in which the noise component of a population model dramatically influences conclusions from the analysis. The first example, restricted to five data values, illustrates geometrically why different statistical assumptions can lead to fundamentally different results. Such diverse conclusions can also be a problem in realistic data analysis, as illustrated by a second example involving catch per unit effort (CPUE) data from a lingcod (Ophiodon elongatus) stock. Using methods adapted from Kalman filter theory, I show that a high or low estimate of survival depends on whether or not the model allows for measurement error. Furthermore, additional data suggest that the more complex model with measurement error is more realistic. I conclude that earlier work on deterministic models may be somewhat misdirected, because statistics is treated almost as an afterthought.

INTRODUCTION

This paper demonstrates that the statistical component of a fishery model can dramatically influence the resulting analysis. A simple example with five data points illustrates the problem geometrically. Furthermore, the issue raised here has considerable practical importance, as shown by analyses of catch per unit effort (CPUE) data from a lingcod (Ophiodon elongatus) stock. Depending on the assumed error structure, a survival estimate ranges from 34 to 62%. The most complex statistical model gives the highest estimate, and additional data suggest this estimate to be most realistic. The paper's arguments are based on likelihood theory, which in some cases relates closely to Kalman filter theory. For instance, the methods here provide recursive estimates of CPUE corrected for measurement error, as well as recursive predictions. The paper's emphasis on statistics reflects a recent shift in the development of fishery models. To illustrate how this change has occurred, I can cite my own experience in fisheries research. When I started in 1975, I regarded differential equations as the principal mathematical structure for modeling fish


populations. Although this view came partly from my background in applied mathematics, it was reinforced by the science of the time. For example, Schaefer (1954, 1957) based his models on the logistic differential equation, and Pella and Tomlinson (1969) later generalized Schaefer's equation. Ecology courses commonly used the Lotka-Volterra equations (e.g. Begon and Mortimer, 1981, p. 77) to model cyclic predator-prey relationships. Beverton and Holt (1957) used differential equations to describe population mortality and animal growth. One problem with this approach stems from the fact that fishery data, such as catch and effort statistics, are often compiled annually. Consequently, time must be modeled as a discrete (rather than continuous) variable. Such an analysis requires difference (not differential) equations. For example, Schnute (1977) integrated Schaefer's logistic equation to obtain a corresponding difference equation. Similarly, Schnute (1987a) converted continuous model equations to discrete equations for application to annual fishery data. These two papers (Schnute, 1977, 1987a) typify the philosophy of my early work in fisheries research. I concentrated on building an appropriate population model and added statistics almost as an afterthought. Given the deterministic model in a form appropriate to the data, it seemed adequate to include a normal error term at some convenient place. The parameters could then be estimated by least squares. If the model could be linearized, then ordinary regression would suffice. In recent years I have realized that the statistical component of a model is at least as important as its deterministic component. Schnute (1989), for example, proposed a much simpler formulation of the Schaefer model. Although this revised model has all the necessary deterministic properties, the method of introducing error has a radical effect on the analysis. The same would be true of the model of Schnute (1977).
Far from being a matter for afterthought, statistics emerges as one of the model's most essential features. The present paper documents specific examples that have contributed to this altered perspective. First, the main ideas are illustrated with a geometrically simple example. A more complex model with measurement error is then described, and the corresponding likelihood function is related to recursive Kalman filter estimates. The practical implications of these ideas are then examined for CPUE data from a lingcod stock off Vancouver Island, British Columbia, Canada. Using a simple population model, I demonstrate that analysis of this stock depends substantially on whether or not the model includes measurement error. Additional size-structure data suggest that the model with measurement error is more realistic. For easy reference, models and likelihood functions discussed here are compiled into tables. Appendices A-C give proofs of key mathematical results, particularly those related to Kalman filter theory. Readers who wish to implement and test these methods can use the lingcod data tabulated in Appendix D. Finally, Appendix E describes a simulation experiment to check the likelihood analysis in the main text.

SIMPLE EXAMPLE

I begin with a simple model for an index x_t of the size of a stable fish population over a series of years t. As the population in year t consists of new recruits plus survivors from the previous year, a model for x_t might be

x_t = r + s x_{t-1} + σ ε_t    (1)

where r denotes an index of recruitment, s represents the interannual survival coefficient, and σ measures the standard error in computing x_t from x_{t-1}. The sequence ε_t in (1) is assumed to be a series of independent normal random variables with mean 0 and variance 1. Thus, model (1) describes a process of constant recruitment and survival with an additional random error term. It should be noted that x_t indexes (i.e. is proportional to) the entire population in year t. The constant r, which has the same units as x_t, indexes the mean recruitment. Similarly, σ indexes the random (as opposed to systematic) variation in x_t. The constant s is dimensionless with 0 < s < 1. Suppose that estimates of r and s are to be obtained from model (1), based on the data

(x_1, x_2, x_3, x_4, x_5) = (1, 1, 3, 3, 1)    (2)

These five observations x_t determine four points (x_{t-1}, x_t):

(1,1), (1,3), (3,3), (3,1)    (3)

for t = 2, ..., 5. If the initial observation x_1 is not subject to error, then estimation from (1) corresponds to linear regression through the points (3). As illustrated in Fig. 1, these lie at the corners of a square. The result of this regression can be anticipated by a simple argument. The first two points (3) show that, when x_{t-1} is 1, the subsequent point x_t can be either 1 or 3. Thus, on the left side of the square, the mean prediction is 2. Similarly, the two points on the right side of the square show that, when x_{t-1} is 3, the subsequent point can be either 1 or 3. The mean prediction is again 2. Consequently, the data indicate that x_t is independent of x_{t-1} and that the best overall prediction is 2. This logic leads to the estimates

(r̂, ŝ) = (2, 0)    (4)

corresponding to the horizontal solid line drawn in Fig. 1. Indeed, a rigorous calculation of least-squares estimates from (1), based on the data (2), gives the result (4). A second model, apparently equivalent to (1), is

x_t = q + s⁻¹ x_{t+1} − θ ε_{t+1}    (5)

where q = −r/s and θ = σ/s. This focuses on back-calculation, rather than prediction. Logic similar to that in the previous paragraph shows that, for the data (2), x_t appears to be independent of x_{t+1}. The argument now involves the bottom and top of the square; the mean of x_t is 2, independent of the value x_{t+1} = 1 or x_{t+1} = 3. Estimates from (5) thus become

(q̂, ŝ⁻¹) = (2, 0)    (6)

corresponding to the vertical broken line in Fig. 1. It should be noted that a vertical line has infinite slope and that ŝ = ∞ in (6). Incidentally, this analysis ignores the constraint 0 < s < 1. Although models (1) and (5) appear equivalent, they are not. The first assumes forecast error, starting from the known point x_1 = 1. The second is based on backcast error, starting from the known point x_5 = 1. A regression line through the four points (3) depends on which variable, vertical or horizontal, is presumed to include statistical error. The example here illustrates a well-known feature of regressions. If error lies in the vertical coordinate, the

Fig. 1. Four points (x_{t-1}, x_t) determined by the series (2). Solid and broken lines are estimated from these data using forecast and backcast regression models A′ and B′, respectively. Dotted and dash-dot lines are estimated from forward and backward AR(1) models A and B, respectively.


slope estimate tends to be low. Conversely, error in the horizontal coordinate leads to a high slope estimate. (For further details, see Schnute, 1987b, pp. 161-165.) The regression approach above assumes that either x_1 or x_5 is not subject to error, depending on the choice of model (1) or (5). More realistically, population data x_t would be part of a continuing stochastic process. From this point of view, model (1) would be identified with an autoregressive process of lag 1, commonly denoted AR(1). The AR(1) maximum likelihood estimates from the data (2) and the two models (1) and (5), respectively, are

(r̂, ŝ) = (1.72, 0.04)    (7)

(q̂, ŝ⁻¹) = (1.72, 0.04)    (8)

These correspond to the dotted and dash-dot lines in Fig. 1, both of which nearly pass through the point (x̄, x̄), where x̄ = 1.8 is the sample mean of the data (2). The estimates (7) and (8) illustrate a general feature of linear time series models. Both forward and backward processes have the same covariance structure (Box and Jenkins, 1976, pp. 197-198). Consequently, they lead to symmetric parameter estimates, as in (7) and (8). The four distinct estimates ŝ in (4), (6), (7), and (8) clearly demonstrate that the assumed error structure influences the results of statistical analysis. In particular, as a line necessarily has slope between zero and infinity, the estimates (4) and (6) show that different error assumptions can lead to completely different conclusions. A model acts as a filter, and the signal that passes through the filter depends on the definition of noise. The modeler must therefore pay close attention to this aspect of model development. To make these arguments analytically precise, Tables 1 and 2 list detailed models and corresponding likelihood functions for all examples discussed in this paper. It should be noted that brackets [ ] denote model equations, braces { } indicate likelihood functions, and the usual parentheses ( ) represent equations in the body of the paper. Functions labeled L in Table 2 denote twice the negative log likelihood, except possibly an additive constant independent of parameters. Thus, maximum likelihood estimates for a model can be obtained by minimizing the corresponding function L. Frequently, I refer to L as a "likelihood", leaving implicit the connection between L and the natural logarithm of the true likelihood. In particular, model A in Table 1 defines the forward AR(1) process described above. It involves the unconditional mean μ and variance α of each observation x_t, i.e. the mean and variance without reference to previous observations.
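The contrast between the regression estimates (4) and (6) is easy to confirm numerically. The following sketch (my own illustration, not part of the original paper) fits the forecast regression A′ and the backcast regression B′ to the data (2) by ordinary least squares:

```python
import numpy as np

# The five observations (2) and the four transition points (3).
x = np.array([1.0, 1.0, 3.0, 3.0, 1.0])
x_prev, x_next = x[:-1], x[1:]

# Model A': least-squares fit of x_t = r + s x_{t-1} (forecast error).
s_hat, r_hat = np.polyfit(x_prev, x_next, 1)

# Model B': least-squares fit of x_t = q + s^{-1} x_{t+1} (backcast error).
s_inv_hat, q_hat = np.polyfit(x_next, x_prev, 1)

print(r_hat, s_hat)      # estimates (4): approximately (2, 0)
print(q_hat, s_inv_hat)  # estimates (6): approximately (2, 0), so the slope estimate s-hat is infinite
```

Because the four points sit at the corners of a square, both fitted slopes vanish, reproducing the horizontal and vertical lines of Fig. 1.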
To understand [A2], it should be noted from [A1] that the series mean μ satisfies

μ = r + sμ    (9)


TABLE 1

Equations defining five models (A-E) developed in this paper

Model A (forward AR(1)):
  x_t = r + s x_{t-1} + σ ε_t;  t = 2, ..., n    [A1]
  μ = r/(1 − s)    [A2]
  α = σ²/(1 − s²)    [A3]
  x_1 = μ + √(α) ε_1    [A4]
  x_1 = specified constant    [A′4]

Model B (backward AR(1)):
  x_t = q + s⁻¹ x_{t+1} − θ ε_{t+1};  t = 1, ..., n − 1    [B1]
  μ = q/(1 − s⁻¹)    [B2]
  β = θ²/(1 − s⁻²)    [B3]
  x_n = μ + √(β) ε_n    [B4]
  x_n = specified constant    [B′4]

Model C (Kalman):
  y_t = r + s y_{t-1} + σ ε_t;  t = 2, ..., n    [C1]
  μ = r/(1 − s)    [C2]
  α = σ²/(1 − s²)    [C3]
  y_1 = μ + √(α) ε_1    [C4]
  x_t = y_t + τ ξ_t;  t = 1, ..., n    [C5]

Model D (ARMA(1,1)):
  x_t = r + s x_{t-1} + κ(η_t − λ η_{t-1});  t = 2, ..., n    [D1]
  μ = r/(1 − s)    [D2]
  γ = κ²(1 − 2sλ + λ²)/(1 − s²)    [D3]
  x_1 = μ + √(γ) η_1    [D4]

Model E (equilibrium size structure):
  Θ(w) = 1 − [(w_∞ − w)/(w_∞ − w_F)]^{log(s/g)/log(ρ)}    [E1]

The dynamic models A-D involve data x_t (t = 1, ..., n), a CPUE time series. The equilibrium model E involves observed weights w, where Θ(w) denotes the cumulative probability. Models A and B have variants A′ and B′ in which one value of x_t is prescribed (not random). In model C, x_t results from measuring with error the unobserved value y_t.

as both x_t and x_{t-1} have unconditional mean μ and as ε_t has mean 0. Solving (9) for μ gives [A2]. Similarly, from [A1], the series variance α satisfies

α = s²α + σ²    (10)

as x_{t-1} and ε_t are independent and as the constant r has variance 0. Solving for α in (10) gives [A3]. The meaning of [A4] is now also clear: the first observation x_1, which cannot be conditioned on a previous observation, has mean μ and variance α. Model A′ is the variant of A in which x_1 is presumed specified. Similarly, model B is the backward AR(1) process discussed earlier, and its variant B′ presumes that the last observation x_n is specified. From {A′1} and {B′1}, the minima of L_A′ and L_B′ correspond to least-squares estimates. Thus, the regression estimates (4) and (6) are also maximum likelihood estimates for models A′ and B′, respectively. The likelihood L_A in {A1} for the AR(1) model A differs from L_A′ by additional terms related to x_1. These involve the unconditional mean μ and variance α. Schnute (1987b, pp. 204-207) explained this point in greater detail. An analogous distinction exists between L_B and L_B′.
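The unconditional moments [A2]-[A3] can also be checked by simulation. This sketch (my own, with arbitrary parameter values) generates a long AR(1) series and compares its sample moments with μ and α:

```python
import numpy as np

# Simulate x_t = r + s x_{t-1} + sigma * eps_t  [A1] with arbitrary parameters,
# then compare sample moments with the unconditional values [A2]-[A3].
rng = np.random.default_rng(1)
r, s, sigma, n = 2.0, 0.5, 1.0, 200_000

mu = r / (1 - s)               # [A2]: unconditional mean
alpha = sigma**2 / (1 - s**2)  # [A3]: unconditional variance

x = np.empty(n)
x[0] = mu + np.sqrt(alpha) * rng.standard_normal()  # [A4]
for t in range(1, n):
    x[t] = r + s * x[t - 1] + sigma * rng.standard_normal()

print(x.mean(), mu)    # sample mean near r/(1-s) = 4
print(x.var(), alpha)  # sample variance near sigma^2/(1-s^2) = 4/3
```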


TABLE 2

Likelihood functions for the models in Table 1, where L denotes twice the negative log likelihood (except possibly an additive constant independent of parameters)

  L_A′ = (n − 1) log(σ²) + Σ_{t=2}^{n} (x_t − r − s x_{t-1})²/σ²    {A′1}
  L_A = L_A′ + log(α) + (x_1 − μ)²/α    {A1}
  L_B′ = (n − 1) log(θ²) + Σ_{t=1}^{n−1} (x_t − q − s⁻¹ x_{t+1})²/θ²    {B′1}
  L_B = L_B′ + log(β) + (x_n − μ)²/β    {B1}
  L_C = Σ_{t=1}^{n} [log(v_t) + (x_t − u_t)²/v_t]    {C1}
  u_1 = μ    {C2}
  v_1 = α + τ²    {C3}
  u_{t+1} = r + s x_t − s τ²(x_t − u_t)/v_t;  t = 1, ..., n − 1    {C4}
  v_{t+1} = σ² + τ² + s² τ²(v_t − τ²)/v_t;  t = 1, ..., n − 1    {C5}
  L_D = L_C    {D1}
  u_1 = μ    {D2}
  v_1 = γ    {D3}
  u_{t+1} = r + s x_t − κ² λ(x_t − u_t)/v_t;  t = 1, ..., n − 1    {D4}
  v_{t+1} = κ²(1 + λ² − κ² λ²/v_t);  t = 1, ..., n − 1    {D5}
  L_E = −2 Σ_{i=1}^{N} n_i log[Θ(w_i; s) − Θ(w_{i−1}; s)]    {E1}

In summary, models A′ and B′ relate directly to classical regression and provide geometric evidence for the importance of model error. Models A and B are more realistic statistically, because they account for error in all observations, including the first and last. As Fig. 1 illustrates, even these models can lead to extreme results similar to those suggested by A′ and B′.

EXTENDED MODEL

The previous section uses an artificial example, with simple models and data, to illustrate potential ambiguities from model error. Here I extend model A to allow more complete analysis of realistic fishery data. The extended model involves two assumptions: (i) the population can be indexed by a true (but unknown) sequence y_t that conforms to model A, and (ii) the actual observations x_t correspond to measurements of y_t with error. These assumptions are stated mathematically as model C in Table 1. Thus, equations [C1]-[C4] state that y_t conforms to A, and [C5] describes x_t as an erroneous measurement of y_t. The variates ξ_t and ε_t are assumed to be independent and normal with mean 0 and variance 1. Model C has four parameters: r, s, σ, and τ. The first three pertain to the AR(1) process for y_t, and


τ represents the standard error in measuring x_t. As in model A, the series y_t has unconditional mean μ and variance α. From [C5], the series x_t also has mean μ, but its variance

γ = α + τ²    (11)

includes the extra component τ² from measurement error. I refer to model C as a Kalman model, since Kalman filters pertain to random transition processes measured with error (Kalman, 1960). As shown in Appendix A and discussed in a different context with greater generality by Meinhold and Singpurwalla (1983), concepts related to the Kalman filter can be used to derive the likelihood function for model C. Let

X_t = (x_1, ..., x_t)

denote the vector of observations up to time t. Define u_t as the expected value of x_t for given values of x_1, ..., x_{t-1}:

u_t = E[x_t | X_{t-1}]    (12a)

Similarly, let v_t be the variance of x_t conditioned on previous observations:

v_t = Var[x_t | X_{t-1}]    (12b)

By {C2}-{C5}, the pair (u_t, v_t) can be computed recursively, starting from the unconditional mean u_1 = μ and variance v_1 = γ = α + τ². The likelihood L_C in {C1} can then be evaluated using the resulting sequences u_t and v_t. Conceptually, the expected value u_t predicts the next observation x_t from the vector X_{t-1} of past observations. Furthermore, u_t also gives a prediction of the unknown true value y_t, as

u_t = E[y_t | X_{t-1}]    (13)

This follows from [C5], where E[ξ_t | X_{t-1}] = 0. Although the likelihood L_C can be computed from the sequence (u_t, v_t), the classical Kalman filter deals with a different sequence (u′_t, v′_t), defined by

u′_t = E[y_t | X_t]    (14a)

v′_t = Var[y_t | X_t]    (14b)

In contrast to the prediction u_t, the corrected value u′_t estimates the unknown true y_t from data up to and including time t. In Appendix B it is shown that (u′_t, v′_t) can be computed recursively from

u′_1 = μ,  v′_1 = α    (15a)

u′_{t+1} = [(s² v′_t + σ²) x_{t+1} + τ²(r + s u′_t)] / (s² v′_t + σ² + τ²)    (15b)

v′_{t+1} = (s² v′_t + σ²) τ² / (s² v′_t + σ² + τ²)    (15c)


where μ and α in (15a) are given by [C2] and [C3]. It should be noted that the estimate u′_{t+1} in (15b) is a weighted average of the observation x_{t+1} and the prediction r + s u′_t from the previous estimate u′_t. For example, absence of measurement error implies that x_t = y_t, so that the best estimate of y_t is the observation x_t itself. Thus, if τ = 0 in (15b), then u′_{t+1} = x_{t+1}. When τ > 0, the recursion (15) uses past data to correct x_t for measurement error. Model C involves both the observed series x_t and the unknown series y_t. As proved in Appendix C, the single series x_t is equivalently determined by model D, where η_t is a series of independent normal random variables with mean zero and variance one. Both models C and D have four parameters; the pair (r, s) is shared, and the pair (σ, τ) in C is related to (κ, λ) in D by

σ² = κ²(λ − s)(λ − s⁻¹)    (16a)

τ² = (λ/s) κ²    (16b)

Similarly, (κ, λ) can be computed from (σ, τ) by solving the quadratic equation

s τ² λ² − [σ² + τ²(1 + s²)] λ + s τ² = 0    (17a)

for the root λ with |λ| < 1 and then setting

κ² = s τ² / λ    (17b)
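The conversions (16) and (17) are simple enough to script. In the sketch below (the function names are my own), a model D pair (κ, λ) is mapped to the model C pair (σ, τ) and then recovered; with the C/D estimates of Table 3 the round trip is exact up to rounding:

```python
import math

def c_from_d(kappa, lam, s):
    """Model C error parameters (sigma, tau) from model D's (kappa, lambda), eqn (16)."""
    sigma = math.sqrt(kappa**2 * (lam - s) * (lam - 1.0 / s))  # (16a)
    tau = math.sqrt((lam / s) * kappa**2)                      # (16b)
    return sigma, tau

def d_from_c(sigma, tau, s):
    """Model D parameters (kappa, lambda) from model C's (sigma, tau), eqn (17);
    assumes tau > 0 so the quadratic (17a) is nondegenerate."""
    a = s * tau**2
    b = sigma**2 + tau**2 * (1 + s**2)
    disc = math.sqrt(b**2 - 4 * a * a)
    lam = next(z for z in ((b - disc) / (2 * a), (b + disc) / (2 * a)) if abs(z) < 1)
    kappa = math.sqrt(s * tau**2 / lam)                        # (17b)
    return kappa, lam

sigma, tau = c_from_d(86.0, 0.39, 0.71)
print(round(sigma), round(tau))    # near the Table 3 values 49 and 64
print(d_from_c(sigma, tau, 0.71))  # recovers approximately (86, 0.39)
```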

Model D has an autoregressive (AR) component because of the dependence of x_t on x_{t-1}, as well as a moving average (MA) component because of the combined pair of random errors η_t and η_{t-1}. As each component has lag one, this model is commonly labeled ARMA(1,1) (Box and Jenkins, 1976, pp. 73-80). The equivalence of models C and D implies that the likelihood L_D can be obtained from L_C. In particular, the recursions {D2}-{D5} follow from {C2}-{C5}, based on the parameter transformations (16) and (17). These simple recursions achieve an apparently more difficult computational objective. In principle, computing the exact likelihood for an ARMA(1,1) model requires inverting an n × n covariance matrix, where n is the length of the time series. This insight is relatively recent. Harvey and Phillips (1979) derived the likelihood for a general ARMA model using Kalman filter techniques. The algorithms here essentially specialize their result to the ARMA(1,1) case, although I derive them independently (Appendices A-C). This simple case was not given by Box and Jenkins (1976), presumably because the result was unknown at the time of publication. Furthermore, I am unaware of other publications that give a unified derivation of the Kalman filter (15) and the likelihoods L_C and L_D. These concepts have, however, been applied in a fisheries context. For example, Sullivan (1991) used Kalman filter methods to compute the likelihood for general age-structured models. The regression model A′, the AR(1) model A, the Kalman model C, and


the ARMA(1,1) model D are closely related. Model A extends A′ by allowing for error in the first observation x_1. Model C extends A by including measurement error. As model D is equivalent to C, it follows that adding measurement error to an AR(1) process converts it to an ARMA(1,1) process. Recursive calculations play an important role in defining and applying all of these models. For example, the reader may wish to verify that L_C reduces to L_A when τ = 0 and that L_D reduces to L_A when λ = 0. Although the recursions for u_t and u′_t involve the data x_t, the recursions for v_t and v′_t do not. Typically, the sequences v_t and v′_t converge rapidly to the constants

v = κ²    (18)

v′ = κ² (λ/s) [1 − (λ/s)]    (19)

respectively. Thus, these sequence limits can be expressed simply in terms of model D parameters. For example, the value v = κ² is obviously a fixed point for {D5} (and thus also for {C5}). Similarly, (19) is a fixed point for (15c), where the proof involves the parameter conversions (16) and (17).

LINGCOD EXAMPLE

Models A′, A, C, and D all describe processes moving forward in time. In this section I use these forward models to analyze data from an active fishery. The results show that realistic data can lead to ambiguities similar to those described geometrically for the simple example. The illustrative data come from a lingcod (Ophiodon elongatus) fishery off Vancouver Island, British Columbia, Canada. Cass et al. (1988) and Schnute et al. (1989a,b) provided details about this fishery and described methods of data collection. The solid curve in Fig. 2 represents the catch per unit effort x_t (CPUE, kg h⁻¹) for years t = 1956-1988. This series is compiled from trawl fishery data, based on the 25% qualification method (Cass et al., 1988, p. 3) restricted to fully recruited fish with weights above 2.25 kg (Schnute et al., 1989a, section 3B). The series here includes 2 years (1987 and 1988) of data not available for the earlier analysis by Schnute et al. (1989b), but reported by Richards (1991). (See also Appendix D.) The forward models (A′, A, C, D) in Table 1 all have biological meaning for the series x_t in Fig. 2. Model parameters must, however, be interpreted in light of the fact that x_t indexes stock biomass, rather than numbers. In particular, r represents an index of recruited biomass, and

s = g s*    (20)

is the product of a survival coefficient s* and a growth factor g. More precisely, s* denotes the fraction of animals that survive from one year to the

Fig. 2. Annual lingcod catch per unit effort data x_t (kg h⁻¹) for years t = 1956-1988 (solid curve, diamonds). Dotted and broken curves indicate predicted and corrected estimates u_t and u′_t, respectively, computed from maximum likelihood estimates for model C. Similar dotted and broken curves near the bottom of the figure represent the corresponding standard errors √v_t and √v′_t, respectively. These converge rapidly to constants. A thinly dotted horizontal line indicates the mean x̄ = 324 kg h⁻¹.

next, and g is the factor by which biomass is multiplied to account for annual growth. As discussed in the next section, size and growth data from this stock lead to the estimate

g = 1.14    (21)

Consequently, s* can be estimated from s and vice versa. Each forward model assumes that the fish stock experiences constant recruitment and survival, subject to random variations from unknown sources. In effect, the parameters r and s* correspond to long-term levels of recruitment and survival. The coefficient s* necessarily reflects both fishing and natural mortality, so that the model is appropriate only in circumstances where total mortality is relatively stable. If natural mortality is stable, then fishing mortality must either be stable or dominated by natural mortality. Biologically, s* and g must satisfy the constraints 0 < s* < 1 and g ≥ 1. A stock with high survival and rapid growth could have s > 1 in a forward model. The model would then be inappropriate, however, as the series x_t would not be stationary. When 0 < s < 1, all forward models have mean

μ = r/(1 − s)    (22)

which is fairly well estimated by the series mean x̄. From (22), the parameters r and s partition the mean stock index μ into two parts: an index r = (1 − s)μ of newly recruited biomass plus an index sμ of surviving biomass. For example, the series x_t in Fig. 2 has mean

x̄ = 324 kg h⁻¹    (23)

and analysis from a forward model would partition this index between recruitment and survival. The data in Fig. 2 contain no obvious trends, although values x_t appear autocorrelated. In particular, data for years earlier than 1970 lie above the mean more frequently than data after 1970. As time series models allow for such autocorrelations, the forward models in Table 1 seem appropriate. Incidentally, Schnute et al. (1989b) proposed even more complex models, but those considered here serve to illustrate the importance of different error hypotheses. For each forward model, Table 3 presents corresponding maximum likelihood parameter estimates obtained from the data in Fig. 2. A single column represents the equivalent models C and D, denoted as model C/D. The table reports the six forward model parameters (r, s, σ, τ, κ, λ), as well as g and s* from (20)-(22). The column associated with model E is discussed in the next section. Estimates from models A′ and A are almost identical. These differ substantially, however, from estimates according to model C/D. Figure 3 illustrates the geometric significance of these results in relation to data points (x_{t-1}, x_t) plotted from the time series in Fig. 2. The solid line

x_t = r + s x_{t-1}    (24)

TABLE 3

Maximum likelihood parameter estimates for the lingcod data, obtained from various models

Parameter   Model A′   Model A   Model C/D   Model E
r           195        197       94          -
s           0.40       0.39      0.71        0.88
s*          0.35       0.34      0.62        0.77
μ           324        322       320         -
σ           89         87        49          -
τ           0          0         64          -
κ           89         87        86          -
λ           0          0         0.39        -

Parameters r, μ, σ, τ and κ have units kg h⁻¹, whereas s, s*, and λ are dimensionless. Estimates from models A′, A, C, and D are based on the series x_t in Fig. 2. Estimates from model E depend on the size structure data in Fig. 5.
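Given the growth factor (21), the survival coefficients s* in Table 3 follow directly from (20); a short check (my own illustration):

```python
g = 1.14  # growth factor, eqn (21)
# Recompute s* = s/g from eqn (20) for the s estimates in Table 3.
for model, s in [("A'", 0.40), ("A", 0.39), ("C/D", 0.71), ("E", 0.88)]:
    print(model, round(s / g, 2))  # 0.35, 0.34, 0.62, 0.77, matching Table 3
```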

Fig. 3. Scatter plot of points (x_{t-1}, x_t) determined by the time series in Fig. 2. Broken and solid lines are estimated from these data using models A and C, respectively.

determined by model C/D has a steeper slope than the corresponding broken line from model A. Indeed, the steeper line gives a better visual fit to the plotted data. These features concur with the simple example in Fig. 1, where model A suggests a line with shallow slope. Biologically, the two lines in Fig. 3 lead to distinctly different conclusions. Model A suggests high recruitment and low survival, whereas model C/D indicates low recruitment and high survival. Thus, management strategies based on these models could differ markedly. Both models lead to similar estimates of μ, which agree closely with the series mean (23); the two lines intersect near the mean (x̄, x̄) of the data in Fig. 3. The question remains, however, whether this mean CPUE is maintained primarily by recruitment or survival. The parameters σ, τ, and κ in Table 3 help explain the difference between models A and C/D. Both models indicate similar levels of overall error κ (86-87 kg h⁻¹). Model A assumes no measurement error (τ = 0) and ascribes all error to the process (σ = 87 kg h⁻¹). In contrast, model C/D indicates substantial measurement error (τ = 64 kg h⁻¹) and a lower level of process error (σ = 49 kg h⁻¹). The possibility of measurement error leads to a different interpretation of the data scatter in Fig. 3 and to a different estimate of the trend line (24).
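The recursions {C1}-{C5} behind these fits are short enough to sketch. The following is my own illustration (the lingcod series itself appears in Appendix D, not reproduced here, so a simulated series stands in); parameter values approximate the model C/D column of Table 3:

```python
import numpy as np

def model_c_likelihood(x, r, s, sigma, tau):
    """Twice the negative log likelihood {C1} for model C, computed with the
    prediction recursions {C2}-{C5}; also returns the sequences u_t and v_t."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = r / (1.0 - s)               # [C2]
    alpha = sigma**2 / (1.0 - s**2)  # [C3]
    u, v = np.empty(n), np.empty(n)
    u[0], v[0] = mu, alpha + tau**2  # {C2}, {C3}
    for t in range(n - 1):
        u[t + 1] = r + s * x[t] - s * tau**2 * (x[t] - u[t]) / v[t]            # {C4}
        v[t + 1] = sigma**2 + tau**2 + s**2 * tau**2 * (v[t] - tau**2) / v[t]  # {C5}
    L = float(np.sum(np.log(v) + (x - u)**2 / v))                              # {C1}
    return L, u, v

# Stand-in data: simulate model C at roughly the C/D estimates of Table 3.
rng = np.random.default_rng(0)
r, s, sigma, tau = 94.0, 0.71, 49.0, 64.0
y = [r / (1 - s)]
for _ in range(32):
    y.append(r + s * y[-1] + sigma * rng.standard_normal())  # process [C1]
x = np.array(y) + tau * rng.standard_normal(33)              # measurement [C5]

L, u, v = model_c_likelihood(x, r, s, sigma, tau)
# v_t converges rapidly toward a constant, as in eqn (18); with tau = 0 the
# recursion collapses to the AR(1) case (v_t = sigma^2 for t >= 2).
```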


This interpretation can be seen also in Fig. 2, where broken and dotted curves show, respectively, the series u′_t and u_t calculated at the maximum likelihood estimates for model C/D. The corrected values u′_t (which theoretically estimate the true CPUE without measurement error) follow a pattern similar to the series x_t, but with less variability. The predictions u_t also follow this pattern, but appear slightly out of phase by one time step. This phase shift results from a bias toward the last observation x_t in making the prediction u_{t+1}; see eqn. {C4}. Curves near the bottom of Fig. 2 represent the standard errors √v_t and √v′_t, which converge rapidly to the constants 86 kg h⁻¹ and 43 kg h⁻¹ determined by (18) and (19), respectively. The discussion so far has avoided the problem of uncertainty. Each model leads to an estimate ŝ of the key parameter s. Table 4 lists the standard error of ŝ, as estimated from the inverse Hessian of the likelihood function for each case. (For details of the method, see Kendall and Stuart, 1979, p. 59.) It should be noted that the estimated standard error of ŝ is almost twice as large from model C as from model A (0.28 vs. 0.16). Thus, by allowing for extra measurement error, model C introduces greater uncertainty into the outcome of the analysis. Figure 4 represents this uncertainty graphically. Here, the χ² distribution and a likelihood ratio test (Kendall and Stuart, 1979, chapter 24) are used to assign a confidence level p for rejecting a given value of s. For example, the V-shaped broken curve in Fig. 4 corresponds to model A. The low point on the curve (the base of the "V") indicates that the estimate ŝ = 0.39 can be rejected with confidence p = 0; i.e. this value of s is most likely. Values of s more distant from ŝ can be rejected with greater confidence. For example, p = 50% when s = 0.28 or 0.50; this explains the 50% confidence interval for s cited for model A in Table 4.
Similarly, p = 95% when s = 0.06 or 0.70, corresponding to the 95% confidence interval (model A, Table 4). The solid curve in Fig. 4 represents uncertainty in s according to model C. The inflection point in the left limb of the curve results from the constraint τ ≥ 0, which applies here when s < 0.36. A comparison of solid and broken V-curves shows three features that distinguish models A and C. First, the solid "V" is further to the right, corresponding to a higher estimate of s from model C. Second, the solid "V" is generally wider than the broken "V", corresponding to greater uncertainty from model C. Finally, the solid "V" is comparatively asymmetric; it rises steeply on the right, but only gradually on the left. Thus, although confidence intervals from model A are nearly symmetric about ŝ, confidence intervals from C are not. For example, the 50% s-interval from C is (0.45, 0.85), which extends much further to the left of ŝ = 0.71 than to the right.

TABLE 4

Uncertainty in s, according to various models

Model   ŝ      SE(ŝ)   50% interval    95% interval
A′      0.40   0.16    (0.29, 0.51)    (0.07, 0.73)
A       0.39   0.16    (0.28, 0.50)    (0.06, 0.70)
C/D     0.71   0.28    (0.45, 0.85)    (0.10, 0.98)
E       0.88   0.02    (0.86, 0.89)    (0.83, 0.92)

The standard error (SE) of ŝ comes from the inverse Hessian of the negative log-likelihood function, evaluated at the maximum likelihood estimates. The 50 and 95% confidence intervals are determined from likelihood ratio tests, and thus from the curves in Fig. 4.

Fig. 4. Confidence level p for rejecting a specified value of s, according to various models. Broken and solid curves correspond to models A and C, respectively, based on the data xt in Fig. 2. The dotted curve corresponds to model E, based on the data πi in Fig. 5. Thinly dotted horizontal lines indicate levels p = 50% and 95%. Points where these lines intersect the curves determine the confidence intervals listed in Table 4.

Since model C reduces to model A when τ = 0, a hypothesis test could be used to decide whether or not the extra complexity of model C is justified. In the example here, minimum values of L for models A and C differ by 0.692; also, the parameter τ introduces one extra degree of freedom into model C. As 0.692 corresponds to the level p = 59% for χ² with 1 df, the null hypothesis A cannot be rejected at the traditional 95% level against the alternative C. This conclusion at first seems satisfying, as model A suggests greater confidence in s. Such a formal analysis could, however, be highly deceptive. In reality, the CPUE series in Figs. 2 and 3 almost certainly includes a large


component of measurement error. Thus, model A may give not only a low estimate ŝ but also an unrealistically small estimate of uncertainty. In particular, the estimate ŝ = 0.71 from model C is actually excluded by the model A 95% confidence interval (0.06, 0.70). In the next section, evidence is presented to substantiate model C's suggestion of a higher, less certain value of s.

The methods employed here depend on theoretical asymptotic properties of the likelihood function. It is reasonable to ask if the data set (33 values of xt) is large enough to justify this approximation. Furthermore, the analysis here imposes the constraints

0 ≤ s ≤ 1, σ ≥ 0, τ ≥ 0    (25)

on model C. These are equivalent to the conditions

0 ≤ s ≤ 1, κ ≥ 0, 0 ≤ λ ≤ s    (26)

on model D. Likelihood theory may not apply on the boundaries of the regions (25) and (26). For example, the left limb of the solid curve in Fig. 4 may be inaccurate below s = 0.36, where the constraint τ = 0 applies. Indeed, the Kalman model C is not strictly equivalent to the classical ARMA(1,1) model D, because the latter has a parameter space in which both s and λ are unrestricted between −1 and 1 (Box and Jenkins, 1976, p. 77, Fig. 3.10a). (See Appendix C for further details.)

To test the small sample properties of LC with the constraint (25), I performed the simulation experiment described in Appendix E. I conclude that likelihood methods for model C, although potentially biased, give a reasonable approximation to the substantial uncertainty in the model parameters. Furthermore, when model C is true, a hypothesis test to reject model A in favor of C has a high probability of failure (i.e. acceptance of A). These results reinforce the importance of the fact that models A and C lead to fundamentally different interpretations of the lingcod data.

SECOND ANALYSIS

To help resolve the model ambiguities described in the previous section, I examine a second data set for the same lingcod stock. This consists of individual fish weights taken from catch samples over the period (1956-1988) of the CPUE series xt in Fig. 2. Intuitively, the sample weight data contain information on survival because the presence or absence of large fish indicates a low or high level of mortality, respectively. More precisely, if the growth rate is known, then big fish can be related to old fish, and the proportion of old fish indicates the survival rate.

Figure 5 summarizes the entire sample data set in histogram form. Weight intervals (wi−1, wi) are defined by the sequence of weights

wi = w0 + i·Δw    (27)

where w0 = 2.25 kg and Δw = 0.5 kg. The histogram represents the proportion πi of fish with weight in the interval (wi−1, wi) for i = 1,...,27, where the final proportion π27 represents all fish with weight above 15.25 kg. The proportions πi in Fig. 5 are actually means of the annual proportions πti available from each year t of sample data. Fish with weights less than w0 are excluded, as these are considered to be pre-recruits. The analysis assumes that the fishery is not size selective for fish weighing more than w0, so that the size structure in Fig. 5 represents both the catch and the underlying fish population.

The data in Fig. 5 are closely related to earlier data reported by Schnute et al. (1989a, Fig. 1), although Fig. 5 includes data from the two additional sample years 1987 and 1988 (Richards, 1991). Schnute et al. (1989a, Section 3B) justified the choice of minimum recruitment weight w0 in (27). They also used growth data to obtain the growth law (their eqn. (2.1a))

w′ = V′ + p(w − V)    (28)


Fig. 5. Histogram summarizing the lingcod catch weight structure from fish samples taken during 1956-1988. The height of each histogram bar represents the proportion πi of fish within a 0.5-kg weight interval, where the interval end-points are defined by the sequence wi in (27). Broken, solid, and dotted curves represent the predicted functions π(w; s) in (34), where s is estimated by models A, C, and E, respectively. In particular, the dotted curve is estimated from the histogram data shown here.


where

V = 2.25 kg, V′ = 2.88 kg, p = 1.025    (29)

and w′ denotes the weight to which a fish with initial weight w grows in 1 year. In particular, it should be noted that V is the minimum recruitment weight and that w′ = V′ when w = V. Thus, newly recruited fish of weight V grow to weight V′ in 1 year.

In a second paper, Schnute et al. (1989b, eqn. (4.5)) derived model E, i.e. the theoretical formula [E1] for the proportion Φ(w) of fish with weights up to weight w. This describes a sigmoidal curve that starts at 0 when w = V and approaches 1 as w becomes large. The parameter g in [E1] can be interpreted as a growth factor, as in the discussion of (20) and (21). Schnute et al. (1989b, eqns. (1.32a) and (2.5)) showed that

g = w̄′/w̄    (30)

where

w̄ = Σ_{i=1}^{27} πi (wi−1 + wi)/2    (31)

is the mean fish weight from the data in Fig. 5 and w̄′ is computed from w̄ by (28). Thus, g represents the factor by which an average fish increases its biomass in one year. For the example here, w̄ = 4.79 kg and g = 1.14, giving the value of g in (21). Incidentally, because of the two additional years of data, the estimate (21) differs slightly from the earlier estimate g = 1.15 by Schnute et al. (1989b, eqn. (7.3)).

Models C and E share the parameter s, a coefficient of biomass growth and survival. Both models originate from a common theory of stable population dynamics that implies the dynamic law [C1], the growth law (28), and the cumulative distribution [E1]. For example, although model C is dynamic, it is derived from equilibrium conditions of constant recruitment and survival. The errors δt and εt lead to departures from equilibrium and give the data xt sufficient contrast for estimating r and s. When σ = 0 and τ = 0, model C reduces to x = r/(1 − s), which is the limit value of the recursion xt = r + s·xt−1.

The likelihood {E1} for model E comes from the multinomial distribution (Kendall and Stuart, 1979, p. 50) with an assumed sample size N = 100. Actually, the annual samples leading to Fig. 5 typically have many more than 100 fish, but I choose this low value of N to be conservative. (The choice of N has no effect on the maximum likelihood estimate ŝ, although it does influence confidence interval calculations. The estimated standard error of ŝ is proportional to 1/√N.) Incidentally, {E1} needs minor adjustment to accommodate the final interval with i = 27. Because this interval includes all fish with weights above w26 = 15.25 kg, the final term in {E1} should use

Φ(w27; s) = 1    (32)

A related correction theoretically pertains to the 27th term in the sum (31), as the mean weight in the 27th interval is not precisely (w26 + w27)/2. This correction is, however, so slight that it can be ignored.

Tables 3 and 4 show the results of likelihood analysis from model E. The estimate ŝ = 0.88 is larger than that obtained from model C/D. Also, the standard error of ŝ and the 95% confidence interval for s are extremely small for model E. In fact, these error estimates are probably unrealistically small. The distribution [E1] assumes that the fishery is not size selective for fish with weights above the minimum recruitment weight V. As with model A, assumptions that are too restrictive may lead to a deceptively small estimate of error.

Figure 4 reinforces this point. The dotted V-curve represents p levels for rejecting values of s, based on model E. This "V" is much narrower than the similar curves from models A and C. Indeed, models A and E give contradictory messages about s; the broken and dotted V-curves are almost exclusive of each other. Although model C would not reject the equilibrium estimate ŝ = 0.88 with 95% confidence, model E would definitively eliminate the model C estimate ŝ = 0.71. Simulations of model C (Appendix E) do, however, suggest that this estimate may have a negative bias (E2); this bias could perhaps explain the difference between the model C and E estimates. In any case, the size structure data strongly suggest that model A both underestimates s and gives an exaggerated sense of confidence in the estimate.

This statistical analysis can be interpreted by comparing the observed proportions πi with the predictions

πi(s) = Φ(wi; s) − Φ(wi−1; s)    (33)

calculated from the distribution [E1]. The discrete proportions πi(s) in (33) are approximately determined by the continuous curve

π(w; s) = [dΦ(w; s)/dw]·Δw    (34)

where dΦ/dw denotes the derivative of Φ(w). Since Δw is small, the proportion πi(s) is very nearly equal to π(w; s) evaluated at the interval midpoint (wi−1 + wi)/2. Schnute et al. (1989b, eqn. (4.4)) gave an analytical expression for dΦ/dw, so that π(w; s) can be computed explicitly. Figure 5 shows the curves π(w; s) corresponding to the estimates ŝ from models A (broken curve), C (solid), and E (dotted). It should be noted that the dotted curve conforms reasonably to the observed histogram. The high estimate ŝ from model E indicates that fish as large as 15 kg would occur noticeably within the population. The model C estimate suggests that fish above 12 kg would rarely occur. The very low model A estimate indicates that fish above 7 kg would be extremely rare.
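The mean-weight calculation in (28)-(31) can be checked directly against the histogram proportions listed in Appendix D. The following sketch reproduces w̄ = 4.79 kg and g = 1.14 (variable names are illustrative):

```python
# Check of eqns. (28), (30) and (31): mean fish weight wbar and growth
# factor g, computed from the histogram proportions (percent) in Appendix D.
pi = [13.960, 15.363, 13.460, 8.745, 11.554, 4.756, 7.118,
      5.540, 4.283, 2.606, 2.881, 1.203, 1.534, 1.692,
      0.669, 0.852, 0.581, 0.600, 0.348, 0.374, 0.374,
      0.272, 0.210, 0.205, 0.162, 0.186, 0.472]

w0, dw = 2.25, 0.5                                  # weight grid (27)
mid = [w0 + (i - 0.5) * dw for i in range(1, 28)]   # midpoints (w_{i-1}+w_i)/2

wbar = sum(p / 100.0 * m for p, m in zip(pi, mid))  # eqn. (31)
V, Vprime, p_growth = 2.25, 2.88, 1.025             # growth constants (29)
wbar_prime = Vprime + p_growth * (wbar - V)         # growth law (28)
g = wbar_prime / wbar                               # eqn. (30)

print(round(wbar, 2), round(g, 2))   # → 4.79 1.14
```

As the text notes, the midpoint approximation is slightly wrong for the open-ended 27th interval, but the effect on w̄ and g is negligible.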


This discussion provides a simple biological argument that the model A estimates of s and s* (Table 3) must be fundamentally wrong. Such low survivals would never permit fish as large as 15 kg to occur in the population. Since fish of this size do occur, survival must be much higher. Consequently, the apparently reasonable decision to treat (24) as a regression model must also be wrong. Least squares cannot be adopted as a universal tool for parameter estimation, and the modeler is forced to give close attention to the choice of error structure.

I conclude, then, that statistics plays an important role in building realistic fish population models. Figure 4 illustrates this point particularly well. The solid and broken curves correspond to the same data set and deterministic model. The different outcomes can be attributed entirely to different error structures. The dotted curve corresponds to a related data set and a deterministic model consistent with that for the other two curves. It lends support to results from the solid curve and eliminates those from the broken curve. Even if the models here are over-simplified (for example, by assuming constant recruitment), these conclusions remain valid. The essential fact is that the same data and deterministic model can produce different results, based on different error structures. Furthermore, for the lingcod stock, even a more detailed analysis with additional data leads to estimates of s consistent with those from models C and E here (Richards, 1991).

In this paper I have deliberately confined my discussion to simple analyses, where the results can be interpreted easily. These examples, however, have intuitive value for anticipating similar problems with much more complex fishery models.

ACKNOWLEDGMENTS

I thank T. Quinn II and J. Heifetz, whose scientific and organizational efforts led me to undertake this work. Conversations with D. Noakes and P. Sullivan stimulated my interest in Kalman filter analysis and improved my technical understanding. Comments by reviewers (J. Pella, M. Masuda, and J. Clark) and the Associate Editor (T. Quinn II) led to substantial improvements from an earlier draft. My colleague L. Richards provided valuable scientific and editorial suggestions throughout this project.

REFERENCES

Begon, M. and Mortimer, M., 1981. Population Ecology, A Unified Study of Animals and Plants. Blackwell Scientific, Oxford, 200 pp.
Beverton, R.J.H. and Holt, S.J., 1957. On the dynamics of exploited fish populations. UK Min. Agric. Fish. Food, Fish. Invest., Ser. 2, 19, 533 pp.
Box, G.E.P. and Jenkins, G.M., 1976. Time Series Analysis: Forecasting and Control, revised edn. Holden-Day, San Francisco, 575 pp.


Cass, A.J., Richards, L.J. and Schnute, J.T., 1988. A summary of fishery and biological data for lingcod off the southwest coast of Vancouver Island. Can. Tech. Rep. Fish. Aquat. Sci., 1618, 33 pp.
Harvey, A.C. and Phillips, G.D., 1979. Maximum likelihood estimation of regression models with autoregressive-moving average disturbances. Biometrika, 66: 49-58.
Kalman, R.E., 1960. A new approach to linear filtering and prediction problems. Am. Soc. Mech. Eng., Ser. D (Basic Eng.), 82: 35-45.
Kendall, M. and Stuart, A., 1979. The Advanced Theory of Statistics, Vol. 2: Inference and Relationship, 4th edn. Macmillan, New York, 738 pp.
Meinhold, J.M. and Singpurwalla, N.D., 1983. Understanding the Kalman filter. Am. Stat., 37: 123-127.
Mood, A.M. and Graybill, F.A., 1963. Introduction to the Theory of Statistics, 2nd edn. McGraw-Hill, New York, 443 pp.
Pella, J.J. and Tomlinson, P.K., 1969. A generalized stock production model. Inter-Am. Trop. Tuna Comm. Bull., 13: 421-496.
Richards, L.J., 1991. Use of contradictory data sources in stock assessments. Fish. Res., 11: 225-238.
Schaefer, M.B., 1954. Some aspects of the dynamics of populations important to the management of commercial marine fisheries. Inter-Am. Trop. Tuna Comm. Bull., 1: 25-56.
Schaefer, M.B., 1957. A study of the dynamics of the fishery for yellowfin tuna in the eastern tropical Pacific Ocean. Inter-Am. Trop. Tuna Comm. Bull., 2: 245-285.
Schnute, J., 1977. Improved estimates from the Schaefer production model: theoretical considerations. J. Fish. Res. Board Can., 34: 583-603.
Schnute, J., 1987a. A general fishery model for a size-structured fish population. Can. J. Fish. Aquat. Sci., 44: 924-940.
Schnute, J., 1987b. Data uncertainty, model ambiguity, and model identification. Nat. Resour. Modeling, 2: 1-55.
Schnute, J., 1989. The influence of statistical error on stock assessment: illustrations from Schaefer's model. In: R.J. Beamish and G.A. McFarlane (Editors), Effects of Ocean Variability on Recruitment and an Evaluation of Parameters Used in Stock Assessment Models. Can. Spec. Publ. Fish. Aquat. Sci., 108: 101-109.
Schnute, J.T., Richards, L.J. and Cass, A.J., 1989a. Fish growth: investigations based on a size-structured model. Can. J. Fish. Aquat. Sci., 46: 730-742.
Schnute, J.T., Richards, L.J. and Cass, A.J., 1989b. Fish survival and recruitment: investigations based on a size-structured model. Can. J. Fish. Aquat. Sci., 46: 743-769.
Sullivan, P.J., 1991. A Kalman filter approach to catch-at-age analysis. Biometrics (in press).

APPENDIX A: MODEL C LIKELIHOOD

In this Appendix the function LC in {C1}-{C5} is derived from the assumptions of model C. The likelihood itself is the probability density p(Xn) of the observed vector Xn. This can be expressed as a product of conditional probabilities

p(Xn) = p(x1) ∏_{t=2}^{n} p(xt | Xt−1)    (A1)

based on a recursive application of the definition of conditional probability. The observations xt are multinormal, as they are generated by linear combinations of the normal variates δt and εt. Consequently, the conditional distributions xt | Xt−1 are normal with, say, mean ut and variance vt:

xt | Xt−1 ~ N[ut, vt]    (A2)

As discussed in connection with (11), the series xt has unconditional mean μ and variance a + τ²; thus,

x1 ~ N[u1, v1]    (A3)

where u1 and v1 are given by {C2} and {C3}. It follows from (A1)-(A3) that

p(Xn) = ∏_{t=1}^{n} (2π·vt)^(−1/2) exp[−(xt − ut)²/(2vt)]    (A4)

Comparing (A4) with {C1} shows that

LC = −2·log[p(Xn)] − n·log(2π)    (A5)

thus, LC is indeed twice the negative log likelihood, except for an additive constant.

It remains to prove the recursions {C4} and {C5}. The proof uses the theorem that statements A and B below are equivalent (Mood and Graybill, 1963, p. 202; Meinhold and Singpurwalla, 1983, p. 125).

A. The binormal variates (Z1, Z2) have means (M1, M2), variances (V11, V22), and covariance V12.

B. The binormal variates (Z1, Z2) have the following unconditional and conditional distributions:

Z1 ~ N[M1, V11]    (A6)

Z2 | Z1 ~ N[M2 + (V12/V11)(Z1 − M1), V22 − V12²/V11]    (A7)

To apply this theorem, suppose that Xt−1 is given and consider the variates

(Z1, Z2) = (xt, xt+1)    (A8)

where

Z1 ~ N[ut, vt]    (A9a)

Z2 | Z1 ~ N[ut+1, vt+1]    (A9b)

From [C5] and [C1], it follows that

xt = yt + τεt    (A10a)

xt+1 = r + s·yt + σδt+1 + τεt+1    (A10b)

The variates yt, εt, δt+1, εt+1 in (A10) are all independent and normal. In particular, let u″t and v″t denote the mean and variance of yt (where Xt−1 is still presumed given). From the identifications (A8) and the notation of statement A, a straightforward calculation shows that

M1 = u″t    (A11a)
M2 = r + s·u″t    (A11b)
V11 = v″t + τ²    (A11c)
V12 = s·v″t    (A11d)
V22 = s²·v″t + σ² + τ²    (A11e)

Thus, using (A6) and (A7) in statement B, it follows from (A9) and (A11) that

ut = u″t    (A12a)
vt = v″t + τ²    (A12b)
ut+1 = r + s·u″t + s·v″t(xt − u″t)/(v″t + τ²)    (A12c)
vt+1 = s²·v″t + σ² + τ² − (s·v″t)²/(v″t + τ²)    (A12d)

Eliminating u″t and v″t from eqns. (A12) completes the proof of {C4} and {C5}.

APPENDIX B: KALMAN RECURSION

In this Appendix the recursion (15) is proved for model C, given the definitions (14). Let Xt be given, and apply the theorem in Appendix A to the variates

(Z1, Z2) = (yt+1, xt+1)    (B1)

From (14) and [C1] it follows that

yt+1 ~ N[r + s·u′t, s²·v′t + σ²]    (B2)

similarly, from [C5],

xt+1 | yt+1 ~ N[yt+1, τ²]    (B3)

Given the identifications (B1), eqns. (B2) and (B3) correspond to (A6) and (A7) when

M1 = M2 = r + s·u′t    (B4a)
V11 = V12 = s²·v′t + σ²    (B4b)
V22 = s²·v′t + σ² + τ²    (B4c)

Thus, statement B in the theorem allows one to identify the moments (B4) in statement A. The proof of (15) is completed by applying the theorem again to the variates

(Z1, Z2) = (xt+1, yt+1)    (B5)

This reverses the role of indices 1 and 2 in (B1) and (B4), with the understanding that V21 = V12. Consequently, in the notation (B4), (A7) implies that

yt+1 | xt+1 ~ N[u′t+1, v′t+1]    (B6)

where

u′t+1 = M1 + (V12/V22)(xt+1 − M2)    (B7a)
v′t+1 = V11 − V12²/V22    (B7b)

Combining (B4) and (B7) completes the proof of (15).

APPENDIX C: EQUIVALENCE OF MODELS C AND D

In this Appendix the equivalence of models C and D is proved, given the parameter transformations (16) and (17). Eliminating yt from [C1] and [C5] gives

xt = r + s·xt−1 + σδt + τεt − sτ·εt−1    (C1)

This is equivalent to [D1] if the residual errors

αt = σδt + τεt − sτ·εt−1    (C2)

βt = κ(ηt − λ·ηt−1)    (C3)

in (C1) and [D1], respectively, are equal. It should be noted that both αt and βt have mean 0. Furthermore, equating the variances Var[αt] and Var[βt] gives

σ² + τ²(1 + s²) = κ²(1 + λ²)    (C4)

Similarly, equating the covariances Cov[αt, αt−1] and Cov[βt, βt−1] gives

−sτ² = −λκ²    (C5)

As the normal random series {αt} and {βt} are both determined by their covariance structure, and as both covariances vanish for lags greater than one, conditions (C4) and (C5) are necessary and sufficient for the statistical equivalence of (C2) and (C3). Solving (C4) and (C5) for (σ², τ²) gives (16). Similarly, solving (C5) for κ² gives (17b), and eliminating κ from (C4) and (C5) gives (17a). This completes the proof, as both (16) and (17) are equivalent to (C4) and (C5).

Incidentally, if 0 < s < 1, (17) always defines suitable values (κ, λ) from positive values (σ, τ). (Note that the left side of (17a) is positive when λ = 0 and negative when λ = s. Thus, the quadratic eqn. (17a) has a root λ with 0 < λ < s.) The converse is not true for the transformation (16). When 0 < s < 1, (16) gives positive values (σ, τ) only if 0 < λ < s. Consequently, every model C has a corresponding model D, but model D requires 0 < λ < s for correspondence with model C. (See also the discussion of (25) and (26).)
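The transformation can be illustrated numerically. The sketch below (an illustration, assuming the lingcod model C/D estimates s = 0.71, σ = 49, τ = 64 from Table 3) eliminates κ between (C4) and (C5), solves the resulting quadratic for λ, and recovers the values κ = 86 and λ = 0.39 reported in Table E1:

```python
# Numerical sketch of the transformation (17): given model C values
# (s, sigma, tau), solve (C4) and (C5) for the model D values (kappa, lambda).
import math

s, sigma2, tau2 = 0.71, 49.0 ** 2, 64.0 ** 2   # lingcod C/D estimates (Table 3)

# (C5): s*tau^2 = lambda*kappa^2; (C4): sigma^2 + tau^2(1+s^2) = kappa^2(1+lambda^2).
# Eliminating kappa gives a quadratic in lambda:
#   s*tau^2*lam^2 - [sigma^2 + tau^2*(1+s^2)]*lam + s*tau^2 = 0
A = s * tau2
B = sigma2 + tau2 * (1.0 + s ** 2)
lam = (B - math.sqrt(B ** 2 - 4.0 * A ** 2)) / (2.0 * A)  # root with 0 < lam < s
kappa = math.sqrt(s * tau2 / lam)                         # from (C5)

assert 0.0 < lam < s                 # the correspondence condition noted above
print(round(lam, 2), round(kappa))   # → 0.39 86
```

The smaller quadratic root is taken because, as shown above, it is the one lying in (0, s); the larger root exceeds 1 and is rejected.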

For users interested in implementing and testing models A-E, the lingcod data analyzed are presented here. The table below lists the series x, (kg h - i ) for years t = 1956-1988, as shown in Fig. 2. Values should be read from left to right; thus, the first row corresponds to years 1956-1962. 291.87 461.72 291.04 236.48 391.42

306.82 436.90 346.17 183.95 468.78

442.38 397.86 277.51 219.76 325.23

408.31 329.45 363.78 241.71 189.46

293.22 408.25 240.15 179.24 231.35

425.95 583.57 291.87 330.99

290 6~ 297.8il 204.2!7 314.42

Similarly, the table below lists the observed proportions πi (i = 1,...,27) in Fig. 5, corresponding to the weight intervals defined by (27). Values are given as percentages, so that the entries in the table sum to 100.

13.960 15.363 13.460  8.745 11.554  4.756  7.118
 5.540  4.283  2.606  2.881  1.203  1.534  1.692
 0.669  0.852  0.581  0.600  0.348  0.374  0.374
 0.272  0.210  0.205  0.162  0.186  0.472

APPENDIX E: MODEL C SIMULATION

This Appendix describes the results of 500 simulations to test the small sample properties of LC with the constraint (25). In each simulation, I used the lingcod parameter estimates for model C (Table 3) as the actual parameter values. Based on these, I generated 33 points (xt, yt) recursively from [C1] and [C5], starting with y1 from [C4]. I then used the simulated sequence xt to compute maximum likelihood estimates (r̂, ŝ, σ̂, τ̂), subject to (25). I also computed the difference

Δ(s) = min_{r,σ,τ} L(r, s, σ, τ) − L(r̂, ŝ, σ̂, τ̂)    (E1)

at s = 0.71, i.e. the actual value of s used for simulation. The Monte Carlo


TABLE E1

Results from 500 simulations of model C (or D), based on model C/D parameter estimates from the lingcod data

            Lingcod data       Simulations
Parameter   Estimate   SE      Mean    SE     2.5% Q   97.5% Q
r           94         91      149     73     34       284
s           0.71       0.28    0.53    0.22   0.13     0.89
s*          0.62       0.25    0.47    0.19   0.11     0.78
μ           320        30      320     29     265      375
σ           49         32      53      27     0        98
τ           64         22      42      31     0        86
κ           86         11      81      10     60       103
λ           0.39       0.38    0.27    0.24   0.00     0.71

The estimates and their standard errors (SE) computed from the inverse Hessian of the negative log-likelihood function are shown. For each parameter, the table also reports the following statistics from the 500 simulated maximum likelihood estimates: mean, standard error (SE), and quantiles (Q) at 2.5% and 97.5%.

experiment resulted in 500 estimates of the parameter vector and 500 values Δ(s = 0.71).

Table E1 summarizes simulation results for maximum likelihood estimates of all parameters listed in Table 3. Comparing simulation means with the actual lingcod estimates shows little or no bias in (μ, σ, κ), but substantial bias in (r, s, s*, τ, λ). For the latter parameters, however, bias is dominated by the lack of precision evident in the standard errors and in quantiles that include 95% of the estimates. Furthermore, standard errors computed from the inverse Hessian of LC (or LD) for the lingcod data give reasonable approximations to the standard errors found in the simulated estimates. Although maximum likelihood estimates of s have a downward bias

0.53 − 0.71 = −0.18    (E2)

the likelihood function does correctly indicate a high level of uncertainty in s. The 95% likelihood interval (0.10, 0.98) for s in Table 4 is actually wider than the interval (0.13, 0.89) in Table E1 that includes 95% of the simulated estimates.

Analysis of the simulation experiment can be carried one step further. The function Δ(s) in (E1) gives confidence intervals for s, based on the theory that Δ(s) should be asymptotically χ² distributed with 1 df at the actual value s = 0.71. The table below shows various observed and theoretical quantiles:

               50%    60%    70%    80%    90%    95%
Δ(s = 0.71)    0.34   0.50   0.71   1.08   1.60   2.02
χ² (1 df)      0.45   0.71   1.07   1.64   2.70   3.84
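The theoretical row of this comparison can be reproduced from the identity χ²(1) = Z², so that the p-quantile of χ² with 1 df equals [Φ⁻¹((1 + p)/2)]². The sketch below (an illustration; the original presumably used standard χ² tables, so the last digit may differ by rounding) also confirms that every observed Δ quantile falls below its theoretical counterpart:

```python
# Reproducing the chi-square (1 df) quantiles in the table above via the
# identity chi2(1) = Z^2: the p-quantile equals [Phi^{-1}((1+p)/2)]^2.
from statistics import NormalDist

observed = {0.50: 0.34, 0.60: 0.50, 0.70: 0.71,
            0.80: 1.08, 0.90: 1.60, 0.95: 2.02}   # quantiles of Delta(s=0.71)

for p, delta in observed.items():
    q = NormalDist().inv_cdf((1.0 + p) / 2.0) ** 2   # chi-square(1) quantile
    assert delta < q   # observed quantiles fall below the theoretical ones
    print(f"{p:.0%}: Delta {delta:.2f}  vs  chi-square {q:.2f}")
```

That the observed quantiles are systematically smaller is exactly the point made in the next paragraph: likelihood-ratio intervals for model C are conservative.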


As large values of Δ(s) occur less frequently than predicted, it appears that the confidence intervals for model C in Fig. 4 are too wide. Thus, intervals constructed by this method will include the true value of s more frequently than predicted by χ² with 1 df.

I complete this Appendix with a comment on the role of constraints in parameter estimation. In 187 of the 500 simulations, maximum likelihood estimates occurred on the boundary of the region (25). The table below lists the frequencies with which the various constraints were applied.

Constraint   s = 0   σ = 0   τ = 0
Frequency    1       38      148

The constraint s = 1 never applied. It should be noted that the most probable constraint (τ = 0) is the condition that restricts model C to model A. Thus, the simulation indicates roughly a 30% (148/500) probability of obtaining model A estimates from data generated by model C (using parameters estimated from the lingcod data).
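For readers who wish to repeat the experiment, the data-generating step of each replicate can be sketched as follows (a hypothetical helper, assuming the Table 3 estimates; the fitting step, minimizing LC subject to (25), is omitted):

```python
# Sketch of the data-generating step in each simulation replicate:
# 33 points from model C ([C1] and [C5]) at the lingcod estimates of Table 3,
# with y_1 drawn from its stationary distribution.
import random

def simulate_model_c(n=33, r=94.0, s=0.71, sigma=49.0, tau=64.0, seed=1):
    rng = random.Random(seed)
    # stationary start: mean r/(1-s), s.d. sigma/sqrt(1-s^2)
    y = rng.gauss(r / (1.0 - s), sigma / (1.0 - s ** 2) ** 0.5)
    xs = []
    for _ in range(n):
        xs.append(y + tau * rng.gauss(0.0, 1.0))      # [C5]: x_t = y_t + tau*eps_t
        y = r + s * y + sigma * rng.gauss(0.0, 1.0)   # [C1]: advance y_t
    return xs

xs = simulate_model_c()
print(len(xs))   # → 33
```

The function name and the fixed seed are illustrative only; in the experiment described above, 500 such series were generated and refitted by maximum likelihood.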