Fitting a Mejzler distribution to extreme value data


Journal of Statistical Planning and Inference 61 (1997) 39-47

Ishay Weissman
Faculty of Industrial Engineering & Management, Technion-Israel Institute of Technology, 32000 Haifa, Israel

Received 5 March 1996; revised 3 May 1996

Dedicated to the memory of David Mejzler, who died in Jerusalem on 11 January 1996

Abstract

Extremes of independent observations are considered when the assumption of identical distributions is not justified. In this case, one might fit a distribution function from the Mejzler class, which is much wider than the class of extreme value distributions.

AMS classification: 60G70; 62E20

Keywords: Asymptotic distributions; Estimation methods

1. Introduction

Let {Y_1, ..., Y_n} be a sample of independent observations, where F_i is the distribution function (df) of Y_i, assumed to be continuous. Let Y_{1n} ≥ Y_{2n} ≥ ... ≥ Y_{nn} be the ordered sample. Our main interest here is in the upper-sample extremes, where the assumption of identical F_i is not justified. In the world of sports, for instance, suppose Y_i is the best result of athlete i in a given year, in a given event (here, the higher, the better). Each athlete has his own distribution. Suppose the best k_0 results entitle their holders to be on the Olympic team of a certain country. Then the Olympic committee might be interested in the distribution of (Y_{1n}, ..., Y_{k_0 n}). It might be the case that a threshold level, y_0, is preassigned, and all athletes whose Y_i > y_0 enter the Olympic team. In this case, K = K(n, y_0) = Σ_{i=1}^n 1{Y_i > y_0} and (Y_{1n}, ..., Y_{Kn}) are of main concern. (For events where "best" is lowest, e.g. the 100 m dash, take Y_i to be minus the time measured.) Another example is a reliability system which consists of n units in parallel. The life of unit i is Y_i and the life of the whole system is Y_{1n}. A maintenance policy might be to take some action when only k_0 units are still alive or after y_0 hours of operation. Accordingly, the interest is either in (Y_{1n}, ..., Y_{k_0 n}) or in (K, Y_{1n}, ..., Y_{Kn}).

0378-3758/97/$17.00 © 1997 Elsevier Science B.V. All rights reserved. PII S0378-3758(96)00137-1


In order to estimate quantities of interest consistently, one must assume some structure on the {F_i}; otherwise the situation is one observation per distribution. One possibility is to assume that Y_i = φ(Y_i* | i, θ), where the {Y_i*} are independent and identically distributed (iid), θ is a (finite-dimensional) parameter and φ is a known function. Ballerini and Resnick (1987) assume a linear-trend model, i.e., φ(Y_i* | i, θ) = Y_i* + θi for some unknown θ, and apply it to sport data. Smith (1988) assumes a quadratic model, namely φ(Y_i* | i, θ) = Y_i* + θ_1 i + θ_2 i². Another approach is to assume that as n → ∞,

P{(Y_{1n} − b_n)/a_n ≤ x} = ∏_{i=1}^n F_i(a_n x + b_n) → G(x)    (1.1)

for some normalizing constants a_n > 0, b_n and some nondegenerate df G. To avoid trivial cases where a finite subset of {Y_i} determines Y_{1n} (e.g. Y_i > y_0 for i ≤ k_0 and Y_i < y_0 for i > k_0), we will require that as n → ∞,

min_{1 ≤ i ≤ n} F_i(a_n x + b_n) → 1    (x > x_L),    (1.2)

where x_L = inf{x: G(x) > 0}. Condition (1.2) is called uniform right negligibility (URN). For a df G, let λ(x) = λ_G(x) = −log G(x). With complete analogy to the iid case, Mejzler (1956) showed that under (1.1) and (1.2), G must belong to one of the following classes:

M^0 = {G: λ(x) is convex},

M^+ = {G: x_L > −∞ and λ(x_L + e^x) is convex}

and

M^− = {G: x_R < ∞ and λ(x_R − e^x) is convex}.

Here, x_R = sup{x: G(x) < 1}. The converse is also true, namely, if G ∈ M = M^0 ∪ M^+ ∪ M^−, then there exist sequences {F_i}, {a_n > 0}, {b_n} which satisfy (1.1) and (1.2) (see Galambos, 1987, Section 3.10). Note that the classical extreme value distributions (EVD) Λ(x) = exp(−e^{−x}), Φ_α(x) = exp(−x^{−α}) (x > 0) and Ψ_α(x) = exp(−|x|^α) (x < 0) belong, respectively, to M^0, M^+ and M^−. The df G(x) = x^α (0 ≤ x ≤ 1) for α > 0 belongs to M^0 ∩ M^+ ∩ M^−. It follows that when the iid assumption is not justified, one can fit a df for the sample extremes from a much larger class than the EVD. In this paper, we will demonstrate how to fit a df G ∈ M to extreme value data. In Section 2 we review briefly some asymptotic results. In Section 3, we discuss some estimation methods and in Section 4 we present some examples.
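The three convexity conditions are easy to check numerically. The following sketch (our illustration; the grid, the choice α = 2 and the helper `is_convex` are ours) verifies that λ = −log G for each classical EVD is convex in the appropriate variable:

```python
import math

def is_convex(vals, tol=1e-9):
    """Check convexity of values sampled on an even grid via second differences."""
    return all(vals[i-1] - 2*vals[i] + vals[i+1] >= -tol for i in range(1, len(vals)-1))

alpha = 2.0
xs = [(-3 + 6*i/200) for i in range(201)]   # even grid on [-3, 3]

# Gumbel: lambda(x) = e^{-x} should be convex (class M^0)
gumbel = [math.exp(-x) for x in xs]

# Frechet: x_L = 0, so lambda(x_L + e^x) = e^{-alpha x} should be convex (class M^+)
frechet = [math.exp(-alpha*x) for x in xs]

# Weibull: x_R = 0, so lambda(x_R - e^x) = e^{alpha x} should be convex (class M^-)
weibull = [math.exp(alpha*x) for x in xs]

print(is_convex(gumbel), is_convex(frechet), is_convex(weibull))   # True True True
```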

2. Asymptotic theory

Let m_{in} = (Y_{in} − b_n)/a_n be the normalized values and let

K_n(B) = Σ_{i=1}^n 1{m_{in} ∈ B}    (B ⊂ ℝ¹)


be a counting measure on ℝ¹. It is known (see Weissman, 1975) that under (1.1) and (1.2), as n → ∞, K_n →_D K, where K is a Poisson process whose mean measure at B = (x, ∞) is λ(x) = −log G(x). It follows that if m_1 ≥ m_2 ≥ ... are the points of K in decreasing order, then, for each fixed k, as n → ∞,

(m_{1n}, ..., m_{kn}) →_D (m_1, ..., m_k).    (2.1)

The joint density function of (m_1, ..., m_k) is given by

f_k(x_1, ..., x_k) = G(x_k) ∏_{i=1}^k (−λ′(x_i))    (x_1 > x_2 > ... > x_k),    (2.2)

where λ′(x) = dλ(x)/dx. Note that G determines (2.2), so in order to specify a model for the {m_{in}}, one has to choose a df G ∈ M. The points {T_i = λ(m_i): i ≥ 1} are the points of a standard homogeneous Poisson process, hence

{m_i: i ≥ 1} =_D {λ^{−1}(T_i): i ≥ 1}    (2.3)

and {T_i = Z_1 + Z_2 + ... + Z_i: i ≥ 1}, where the {Z_i: i ≥ 1} are iid exponential. Results (2.1)-(2.3) are the same as for iid {Y_i}, except that in the present case G ∈ M, a much larger class than the EVD class.
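Representation (2.3) is straightforward to simulate. A minimal sketch (our code): for G(x) = x one has λ^{−1}(y) = e^{−y}, so m_i = e^{−T_i} = U_1 U_2 ... U_i is a product of iid uniforms, and the sample means of the m_i should be close to 2^{−i}:

```python
import random, math

random.seed(1)

def top_points(k):
    """One draw of (m_1, ..., m_k) via (2.3) with lambda^{-1}(y) = e^{-y}, i.e. G(x) = x."""
    t = 0.0
    out = []
    for _ in range(k):
        t += random.expovariate(1.0)   # T_i = Z_1 + ... + Z_i, iid standard exponentials
        out.append(math.exp(-t))       # m_i = lambda^{-1}(T_i)
    return out

n_rep, k = 20000, 3
sums = [0.0]*k
for _ in range(n_rep):
    for i, m in enumerate(top_points(k)):
        sums[i] += m
means = [s/n_rep for s in sums]
print([round(m, 3) for m in means])   # approximately [0.5, 0.25, 0.125]
```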

3. Estimation methods

Suppose now that a df G ∈ M is a candidate for the limiting df of Y_{1n}, as in (1.1). Then, based on (2.1) and (2.3), for large n and each fixed k, one would expect that

{Y_{in}: i = 1, ..., k} ≈_D {a_n λ^{−1}(T_i) + b_n: i = 1, ..., k}.    (3.1)

Hence, a graphical method to verify that G is indeed the right df is to plot Y_{in} vs. λ^{−1}(i) for i = 1, ..., k. (We have chosen the plotting positions λ^{−1}(i) = λ^{−1}(ET_i) for simplicity; one can, of course, use Eλ^{−1}(T_i) or λ^{−1}(v_i), where v_i is the median of T_i.) In view of (3.1), if the points are scattered around a straight line, then one has statistical evidence in favor of this G. Moreover, the slope and intercept of that line can serve as estimates for a_n and b_n, respectively. However, this plot should be used primarily as a diagnostic tool. In a parametric approach, one wants to fit G_θ ∈ M, where θ is a parameter to be estimated. The likelihood function (approximately, for large n) is obtained from (2.2), namely

L(θ, a, b) = a^{−k} G_θ((Y_{kn} − b)/a) ∏_{i=1}^k (−λ′_θ((Y_{in} − b)/a)),    (3.2)

where (a, b) stand for (a_n, b_n). A triple (θ̂, â, b̂) which maximizes L can be used as maximum likelihood estimates (MLE) for (θ, a, b). Bias corrections might be needed.


One has to bear in mind, however, that the maximum likelihood method may fail to produce reasonable estimates in irregular models (particularly when the support depends on the parameters). Another possible estimation method is best linear unbiased estimation (BLUE). Let Y = (Y_{1n}, ..., Y_{kn})ᵀ, μ = (μ_1(θ), ..., μ_k(θ))ᵀ and 1 = (1, ..., 1)ᵀ be vectors in ℝᵏ, where μ_i(θ) = Eλ_θ^{−1}(T_i). Further, let Σ(θ) = (σ_{ij}(θ)) be the variance-covariance matrix of (λ_θ^{−1}(T_1), ..., λ_θ^{−1}(T_k))ᵀ. Taking approximation (3.1) as equality, one has

Y = b·1 + a·μ(θ) + ε,    (3.3)

where ε ∈ ℝᵏ is a vector of errors with mean 0 and covariance matrix proportional to Σ(θ). In principle, one could employ BLUE to estimate the parameters, but depending on λ_θ, the computations might be complex or intractable. A nonparametric approach is as follows. Suppose we want to fit a df G = exp(−λ), where λ is convex (and decreasing). Hence, φ = λ^{−1} is also convex and decreasing. Based on (3.1), the distribution of {(Y_{in} − Y_{kn})/(Y_{1n} − Y_{kn}): 1 ≤ i ≤ k − 1} does not depend on a and b. The plot of these points vs. i should be a reasonable estimate for the function φ. One can take the closest (in some sense) convex fit, with some smoothing, as the final φ̂. Once φ̂ is determined, a linear regression of {Y_{in}} on φ̂(i) will give estimates â and b̂.
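The generalized least squares computation behind (3.3) can be sketched as follows (our code; the function name and the toy μ, Σ and noise-free Y are ours). With Y exactly linear in μ, the BLUE recovers (b, a) whatever positive-definite Σ is supplied:

```python
import numpy as np

def blue(y, mu, sigma):
    """BLUE of (b, a) in the model Y = b*1 + a*mu + eps, Cov(eps) proportional to sigma."""
    X = np.column_stack([np.ones_like(mu), mu])    # design matrix [1, mu]
    Si = np.linalg.inv(sigma)
    D = np.linalg.inv(X.T @ Si @ X) @ X.T @ Si     # 2 x k coefficient matrix
    return D @ y                                   # (b_hat, a_hat)

# Noise-free check: with Y exactly linear in mu, BLUE recovers (b, a) exactly.
mu = np.array([0.5, 0.25, 0.125, 0.0625])
sigma = np.diag([1.0, 0.5, 0.25, 0.125])           # any positive-definite matrix works here
y = 2.0 + 3.0*mu                                   # b = 2, a = 3
b_hat, a_hat = blue(y, mu, sigma)
print(round(b_hat, 6), round(a_hat, 6))            # 2.0 3.0
```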

4. Examples

(i) G is completely specified. In this case, the only parameters to be estimated are a and b. Without loss of generality, assume G(x) = x (0 ≤ x ≤ 1). Here, λ^{−1}(y) = e^{−y} (y ≥ 0) and λ^{−1}(T_i) =_D U_1 U_2 ... U_i, where the {U_j} are iid uniform on (0, 1). It follows that

μ_i = Eλ^{−1}(T_i) = 2^{−i}    (i ≥ 1)    (4.1)

and

σ_{ij} = 2^{−j}{(2/3)^i − 2^{−i}}    (i ≤ j).    (4.2)
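Formulas (4.1) and (4.2) follow from moments of products of uniforms: for i ≤ j, E[m_i m_j] = (1/3)^i (1/2)^{j−i}. A quick check of this algebra (our code):

```python
# For G(x) = x, m_i = U_1 * ... * U_i, so E[m_i] = 2^{-i} and, for i <= j,
# E[m_i m_j] = E[U_1^2 ... U_i^2] * E[U_{i+1} ... U_j] = (1/3)^i * (1/2)^(j-i).
# The resulting covariance should match (4.2).
for i in range(1, 8):
    for j in range(i, 8):
        second_moment = (1/3)**i * (1/2)**(j - i)     # E[m_i m_j]
        cov = second_moment - (1/2)**i * (1/2)**j     # minus E[m_i] E[m_j]
        sigma_ij = (1/2)**j * ((2/3)**i - (1/2)**i)   # formula (4.2)
        assert abs(cov - sigma_ij) < 1e-12
print("(4.2) agrees with the direct moment computation")
```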
Let β = (b, a)ᵀ and X = [1, μ], a k × 2 matrix; then the BLUE of β is β̂ = DY, where D, 2 × k, is given by

D = (XᵀΣ^{−1}X)^{−1}XᵀΣ^{−1},    (4.3)

where Σ = (σ_{ij}). Using (4.1) and (4.2), one can evaluate D explicitly; up to a common normalizing factor of order 3^{−k}, its entries are integer multiples of powers of 3.
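For k = 2 the 2 × 2 design is invertible, so the BLUE reduces to the exact solve through the two points; a numerical check of (4.3) (our code) gives D = [[−1, 2], [4, −4]], i.e. b̂ = 2Y_{2n} − Y_{1n} and â = 4(Y_{1n} − Y_{2n}):

```python
import numpy as np

# D = (X^T S^-1 X)^-1 X^T S^-1 for example (i) with k = 2, using (4.1)-(4.2).
k = 2
mu = np.array([(1/2)**i for i in range(1, k+1)])
S = np.array([[(1/2)**max(i, j) * ((2/3)**min(i, j) - (1/2)**min(i, j))
               for j in range(1, k+1)] for i in range(1, k+1)])
X = np.column_stack([np.ones(k), mu])
Si = np.linalg.inv(S)
D = np.linalg.inv(X.T @ Si @ X) @ X.T @ Si
print(np.round(D, 8))   # [[-1.  2.] [ 4. -4.]]
```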


The covariance matrix of β̂ is Cov(β̂) = a²(XᵀΣ^{−1}X)^{−1}, which again can be evaluated explicitly. Note that the variance of b̂ tends to 0 (as k → ∞) exponentially fast, while the variance of â converges to a²/3. The likelihood function, based on (3.2), is given by

L(a, b) = a^{−1}(Y_{kn} − b) ∏_{i=1}^k (Y_{in} − b)^{−1}    (b ≤ Y_{kn}, Y_{1n} ≤ a + b),

which is maximized at â = Y_{1n} − Y_{kn} and b̂ = Y_{kn}. Bias-corrected estimates are a* = 2^k(2^{k−1} − 1)^{−1}â and b* = Y_{kn} − 2^{−k}a*. The asymptotic (as k → ∞, k = o(n)) relative efficiencies of a* and b* relative to the BLUE are 1 and 3, respectively.

(ii) One-parameter example. Suppose G(x) = x^α (0 ≤ x ≤ 1) for some α > 0. The likelihood (3.2) becomes here

L(α, a, b) = α^k ((Y_{kn} − b)/a)^α ∏_{i=1}^k (Y_{in} − b)^{−1}    (b ≤ Y_{kn}, Y_{1n} ≤ a + b).

It follows that the MLE for a is â = Y_{1n} − b̂, and (α̂, b̂) is the solution of

α = 1 + (Y_{kn} − b) Σ_{i=1}^{k−1} (Y_{in} − b)^{−1}    (4.4)

and

α = k / log{(Y_{1n} − b)/(Y_{kn} − b)}.    (4.5)

Note that (4.4) is greater than or equal to 1, so the MLE cannot be consistent if α < 1. A method that works equally well for all α > 0 is as follows. First, note that m_i =_D (U_1 U_2 ... U_i)^{1/α}, and thus, if c_1 = α/(α + 1) and c_2 = (α + 1)/(α + 2),

μ_i(α) = Em_i = c_1^i    (i ≥ 1)    (4.6)

and

σ_{ij}(α) = cov(m_i, m_j) = c_1^j (c_2^i − c_1^i)    (i ≤ j).    (4.7)

Hence, asymptotically, for large n,

E(Y_{1n} − Y_{kn}) ≈ a(c_1 − c_1^k).
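The moment formulas (4.6) and (4.7) can be verified the same way as in example (i), now with m_i = (U_1 ... U_i)^{1/α}; a quick algebra check (our code, with an arbitrary α):

```python
# For general alpha, m_i = (U_1 ... U_i)^(1/alpha), so for i <= j
# E[m_i m_j] = (alpha/(alpha+2))^i * (alpha/(alpha+1))^(j-i) = (c1*c2)^i * c1^(j-i).
# The code checks the resulting covariance against (4.6)-(4.7).
alpha = 6.4
c1 = alpha / (alpha + 1)
c2 = (alpha + 1) / (alpha + 2)
for i in range(1, 8):
    for j in range(i, 8):
        second_moment = (alpha/(alpha + 2))**i * (alpha/(alpha + 1))**(j - i)
        cov = second_moment - c1**i * c1**j        # minus mu_i(alpha) * mu_j(alpha)
        sigma_ij = c1**j * (c2**i - c1**i)         # formula (4.7)
        assert abs(cov - sigma_ij) < 1e-12
print("(4.6)-(4.7) agree with the direct moment computation")
```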

Thus, for large k (but k = o(n)), a plot of log(Y_{in} − Y_{kn}) vs. i should be roughly linear (using only the upper points, so that c_1^k = o(c_1^i) for i ≤ k/2). Once we approve the model, we can run a simple linear regression, and the resulting slope and intercept estimate log c_1 and log a, respectively. It is preferable, however, to estimate β = α^{−1} by

β̂ = k_1^{−1} log{(Y_{1n} − Y_{k_1+1,n})/(Y_{k_1+1,n} − Y_{kn})}    (4.8)

with k = 2k_1 + 1. It can be shown that for large k_1, β̂ is unbiased with variance (approximately) β²/k_1. Use α̂ = 1/β̂ to estimate (4.6) and (4.7) and then use BLUE to estimate a and b.

(iii) 100 m dash data. We have chosen the 100 m dash event to demonstrate the use of some of the methods discussed above. It does make sense to assume that the running time is bounded above and below. In Table 1, we list the 19 best sprinters, each one appearing with his best officially approved time. The data, updated to 31 December 1993, are taken from Matthews (1993) and Track and Field News (1994). The star on Johnson's name refers to some doubts whether his time was drug-assisted or not (his 1988 time of 9.79 s was disqualified for that reason). We would like first to estimate the distribution function G(y) = ((y − b)/a)^α (b ≤ y ≤ a + b) based on these data (without Johnson's). Since these times have been obtained as a world effort of 25 yr (1968-1993), if conditions remain the same, this distribution can be thought of as the distribution of the best time in the next 25 yr. Then, we could check Johnson's time relative to this distribution.

Table 1
Best 100 m sprinters of all time

Rank  Name        Year  Time (s)  Y (score) = 100(10 − Time)
0     Johnson*    87    9.83      17
1     Lewis       91    9.86      14
2     Christie    93    9.87      13
3     Burrell     91    9.88      12
4     Mitchell    91    9.91       9
5     Cason       93    9.92       8
6     Smith       83    9.93       7
7     Marsh       92    9.93       7
8     Hines       68    9.95       5
9     Fredericks  91    9.95       5
10    Lattany     84    9.96       4
11    Stewart     91    9.96       4
12    Ezinwa      92    9.96       4
13    Adeniken    92    9.97       3
14    Leonard     77    9.98       2
15    Effiong     93    9.98       2
16    Woronin     84    10.00      0
17    Imo         86    10.00      0
18    da Silva    86    10.00      0
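Estimator (4.8) can be applied directly to the scores in Table 1 (Johnson excluded). A short sketch (our code), using the best k = 15 scores, so k_1 = 7; the standard error uses Var(β̂) ≈ β²/k_1 and the delta method:

```python
import math

# Scores Y = 100*(10 - time) for the best 15 sprinters, Johnson excluded (Table 1).
scores = [14, 13, 12, 9, 8, 7, 7, 5, 5, 4, 4, 4, 3, 2, 2]

# Estimator (4.8): with k = 2*k1 + 1,
# beta_hat = (1/k1) * log((Y_1 - Y_{k1+1}) / (Y_{k1+1} - Y_k)).
k = len(scores)          # 15
k1 = (k - 1) // 2        # 7
beta_hat = math.log((scores[0] - scores[k1]) / (scores[k1] - scores[-1])) / k1
alpha_hat = 1.0 / beta_hat
se = alpha_hat / math.sqrt(k1)   # from var(beta_hat) ~ beta^2/k1, by the delta method
print(round(alpha_hat, 2), round(se, 2))   # 6.37 2.41
```

This reproduces the estimates α̂ = 6.37 and S.E. 2.41 reported below.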


Fig. 1. log Y_i vs. i for the best 10 times.

The first step is to plot the values of log Y_i vs. i (i = 1, ..., 10). The plot (Fig. 1) looks quite linear (R² = 0.9767), and the slope −0.1408 gives ĉ_1 = 0.8687 and α̂ = 6.62. The intercept 2.8205 gives â = 16.79. (Here it was implicitly assumed that b = 0.) Since Y is a continuous r.v., it makes sense to separate ties. A slightly improved fit (R² = 0.9844) is obtained by equally spacing tied values at rank i over the interval (i − ½, i + ½) (see Fig. 2). The resulting estimates are α̂ = 6.40 and â = 16.85. Using 15 values (rather than 10) gives very close results. Next we estimate α via (4.8), based on the best 15 scores, and get α̂ = 6.37 with an approximate S.E. of 2.41. With this value of α̂, we estimate (4.6) and (4.7) and apply BLUE to obtain b̂ = −0.34 and â = 16.59, with covariance matrix

a² [  0.0041  −0.0047
     −0.0047   0.0242 ].

Hence, our final estimate for the distribution of the best score is

Ĝ(y) = ((y + 0.34)/16.59)^{6.37}    (−0.34 ≤ y ≤ 16.25).

Johnson's score Y = 17 is beyond the point estimate of â + b̂ = 16.25, but since the standard error of â + b̂ is 2.7, it will certainly fall within any reasonable confidence interval for a + b. On the one hand, Johnson's score is quite far (0.03 s) from Lewis', compared with the gap between Lewis and Christie (0.01 s). However, as we have seen, Johnson's score is not significantly extreme so as to declare it "an outlier".
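As an illustration of how the fitted df might be used (our sketch; this computation does not appear in the paper), one can read off tail probabilities from Ĝ:

```python
# Fitted df from the BLUE estimates: G_hat(y) = ((y + 0.34)/16.59)^6.37 on [-0.34, 16.25].
b_hat, a_hat, alpha_hat = -0.34, 16.59, 6.37

def G_hat(y):
    z = (y - b_hat) / a_hat
    return min(max(z, 0.0), 1.0) ** alpha_hat

# Estimated probability that the best score of the next 25 yr exceeds 16
# (i.e. a time below 9.84 s):
p = 1.0 - G_hat(16.0)
print(round(p, 3))   # roughly 0.09
```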

Fig. 2. log Y_i vs. i for the best 10 times with continuity corrections.

Remark. The maximum likelihood method failed to yield reasonable estimates. In all the examples we have tried, the solution gave α̂ ≈ 1, b̂ ≈ Y_{kn} and â ≈ Y_{1n} − Y_{kn}. For comparison, we may fit the generalized extreme value distribution (GEVD)

G(y) = exp[−{1 − γ(y − b)/a}^{1/γ}].    (4.9)

We have used the program GEV4 (written by R.L. Smith) to compute numerical maximum likelihood estimates. When the input was the best 10 and then the best 15 scores (among a sample of 50), the program failed to converge, which indicates a lack of fit. So, we tried the best 17 scores and received

γ̂ = 0.6155,    â = 1.4412,    b̂ = 12.6844.

Note that γ̂ > 0, which supports the assumption that the right end-point of (4.9), y* = b + a/γ, is finite. The point estimate of y* is ŷ* = 15.03, with S.E. of 2.5. Again, Johnson's score is beyond ŷ* but not beyond ŷ* plus one S.E. Finally, we note that if (4.9) were the right model, then the points (log(ŷ* − Y_i), log i) should scatter around a straight line. For a fair comparison with Fig. 2, we plot in Fig. 3 the best 10 scores. The plot looks quite linear (R² = 0.9724), though Fig. 2 exhibits a better fit.
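The reported end-point estimate follows directly from the quoted MLEs (our check):

```python
# Right end-point of the fitted GEVD (4.9): y* = b + a/gamma, using the MLEs quoted above.
gamma_hat, a_hat, b_hat = 0.6155, 1.4412, 12.6844
y_star = b_hat + a_hat / gamma_hat
print(round(y_star, 2))   # 15.03
```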

Fig. 3. log(15 − Y_i) vs. log i for the best 10 times with continuity corrections.

Acknowledgements

This research was supported by grant No. 92-00008 from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel. Part of the research was carried out during a visit to the Center for Stochastic Processes at the University of North Carolina, May 1994.

References

Ballerini, R. and S. Resnick (1987). Records in the presence of a linear trend. Adv. in Appl. Probab. 19, 801-828.
Galambos, J. (1987). The Asymptotic Theory of Extreme Order Statistics, 2nd ed. Robert E. Krieger Pub. Co., Malabar, FL.
Mejzler, D. (1956). On the problem of the limit distributions for the maximal term of a variational series. Lvov. Politech. Inst. Naucn. Zap. Ser. Fiz.-Mat. 38, 90-109.
Smith, R.L. (1988). Forecasting records by maximum likelihood. J. Amer. Statist. Assoc. 83, 331-338.
Weissman, I. (1975). Multivariate extremal processes generated by independent nonidentically distributed rv's. J. Appl. Probab. 12, 477-487.
Matthews, P. (Ed.) (1993). Athletics. Association of Track and Field Statisticians.
Track and Field News (1994).