Statistics & Probability Letters 21 (1994) 385-394

Maxima of bivariate random vectors: Between independence and complete dependence

J. Hüsler
Department of Math. and Statistics, University of Bern, Sidlerstr. 5, CH-3012 Bern, Switzerland

Received December 1993
Abstract

We analyse the asymptotic dependence structure of bivariate maxima in a triangular array of independent random vectors. This extends the analysis of the classical case of i.i.d. random vectors and the known relationship in the Gaussian case. We apply the general results to a special model and discuss some examples.

Keywords: Maxima; Triangular arrays; Bivariate random vector; Dependence; Bivariate extreme value distributions
1. Introduction
Let $\{(X_{in}, Z_{in}),\ 1 \le i \le n,\ n \ge 1\}$ be a triangular array of bivariate random vectors, which are independent for each fixed $n$. Define the sequence of bivariate maxima $M_n = (M_{n1}, M_{n2})$, where

$$M_{n1} = \max_i X_{in} \quad\text{and}\quad M_{n2} = \max_i Z_{in}.$$

We consider the general asymptotic behaviour of $M_n$ as $n \to \infty$, assuming that the limit distribution of $M_n$ exists and is nondegenerate, with some linear normalization $u_n(x,z) = (u_{n1}(x), u_{n2}(z))$, where $u_{nj}(x) = b_{nj} + x/a_{nj}$, $j = 1,2$, which means

$$P\{M_{n1} \le u_{n1}(x),\ M_{n2} \le u_{n2}(z)\} \to G(x,z) \quad\text{as } n \to \infty. \tag{1}$$

(Here and in the following we always consider weak convergence.) Of interest is the characterization of the limit distribution $G(x,z)$, mainly whether $G$ is such that $G(x,z) = \min(G_1(x), G_2(z))$, which means that $M_n$ has asymptotically completely dependent components, or such that $G(x,z) = G_1(x)G_2(z)$, which means that $M_n$ has asymptotically independent components. Here $G_j$ denotes the marginal distribution of $G$, $j = 1,2$. We give necessary and sufficient conditions for these two properties and some further results on $G$.

0167-7152/94/$7.00 © 1994 Elsevier Science B.V. All rights reserved
SSDI 0167-7152(94)00036-1
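As a concrete illustration of the convergence in (1) (not part of the original paper; a minimal simulation sketch with i.i.d. standard exponential variables, for which the normalization is $u_n(x) = \log n + x$ and the marginal limit is the Gumbel law):

```python
import math
import random

# Empirical check that P{max_i X_i <= log n + x} -> exp(-exp(-x))
# for n i.i.d. standard exponential X_i (the type-3 / Gumbel limit).
random.seed(1)
n, trials, x = 1000, 2000, 0.5
level = math.log(n) + x
hits = sum(max(random.expovariate(1.0) for _ in range(n)) <= level
           for _ in range(trials))
empirical = hits / trials
gumbel = math.exp(-math.exp(-x))  # limiting value of (1) in one dimension
print(empirical, gumbel)
```

The sample size and seed are arbitrary; the empirical frequency agrees with the Gumbel value up to Monte Carlo error.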
This situation was discussed for unit normal random variables $X_{in}$ and $Z_{in}$ by Hüsler and Reiss (1989), dealing with the question what behaviour of the correlation coefficient $\rho_n^* = \mathrm{Corr}(X_{1n}, Z_{1n})$ implies asymptotic independence or complete dependence (as $n \to \infty$) of the components of $M_n$, normalized with $u_n(x,z)$. It is well known that $M_n$ has asymptotically independent components in the case of i.i.d. random vectors with $\rho_n^* = \rho^* < 1$. But the conditions are as follows for the triangular array case with normal variables and a correlation coefficient $\rho_n^*$ depending on $n$:

$$(1 - \rho_n^*)\log n \to \infty \quad\text{implies asymptotic independence},$$
$$(1 - \rho_n^*)\log n \to 0 \quad\text{implies asymptotic complete dependence}.$$
If $(1 - \rho_n^*)\log n \to \lambda^2$, some positive constant, then the asymptotic joint distribution is a max-stable (or a bivariate extreme value) distribution, with some dependence between the components. The explicit formula of these bivariate extreme value distributions is as follows. $\Phi(\cdot)$ and $\phi(\cdot)$ denote the standard normal distribution and density.

Proposition 0 (Hüsler and Reiss, 1989). Assume that $X_{1n}$ and $Z_{1n}$ are normal random variables with means 0 and variances $\sigma_{x,n}^2$ and $\sigma_{z,n}^2$, respectively. If $(1 - \rho_n^*)\log n \to \lambda^2 \in [0, \infty]$, then

$$\lim_{n\to\infty} P\{M_{n1} \le u_{n1}(x),\ M_{n2} \le u_{n2}(z)\} = \lim_{n\to\infty}\big[P\{X_{1n} \le u_{n1}(x),\ Z_{1n} \le u_{n2}(z)\}\big]^n = H_\lambda(x,z),$$

where

$$H_\lambda(x,z) = \exp\Big[-\Phi\Big(\lambda + \frac{z-x}{2\lambda}\Big)e^{-x} - \Phi\Big(\lambda + \frac{x-z}{2\lambda}\Big)e^{-z}\Big],$$

$u_{n1}(x) = \sigma_{x,n}(b_n + x/b_n)$ and $u_{n2}(z) = \sigma_{z,n}(b_n + z/b_n)$, with $b_n$ defined by $b_n = n\phi(b_n)$.
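The limit $H_\lambda$ is easy to evaluate numerically; the following sketch (not from the paper, standard library only) computes it via the error function and checks the two boundary cases $\lambda \to 0$ (complete dependence) and $\lambda \to \infty$ (independence):

```python
import math

def Phi(t):
    """Standard normal distribution function via math.erf."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def H(lam, x, z):
    """Hüsler-Reiss bivariate extreme value distribution H_lambda(x, z)."""
    return math.exp(-Phi(lam + (z - x) / (2 * lam)) * math.exp(-x)
                    - Phi(lam + (x - z) / (2 * lam)) * math.exp(-z))

x, z = 0.0, 1.0
print(H(1e-4, x, z))  # ~ exp(-exp(-min(x, z))): complete dependence
print(H(1e4, x, z))   # ~ exp(-exp(-x) - exp(-z)): independence
```

For intermediate $\lambda$ the value interpolates monotonically between these two extremes.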
Note that

$$H_0(x,z) = \lim_{\lambda\to 0} H_\lambda(x,z) = \exp[-e^{-\min(x,z)}] \quad\text{and}\quad H_\infty(x,z) = \lim_{\lambda\to\infty} H_\lambda(x,z) = \exp[-e^{-x} - e^{-z}].$$

The purpose of this paper is to extend these results to more general random variables. A particular case can be defined by

$$Z_i = X_i + \rho_n Y_i, \tag{2}$$

with independent $X_i$ and $Y_i$ and some sequence $\rho_n$, $n \ge 1$, $\rho_n > 0$. The asymptotic dependence structure of $G(x,z)$ depends naturally on the asymptotic behaviour of $\rho_n$. If $X_1$ and $Y_1$ are normal variables with mean 0 and variances $\sigma_X^2$ and $\sigma_Y^2$, then the correlation $\rho_n^*$ of $X_{1n}$ and $Z_{1n}$ in model (2) is given by

$$\rho_n^* = \big(1 + \rho_n^2\sigma_Y^2/\sigma_X^2\big)^{-1/2}.$$

If $\rho_n \to 0$, then $1 - \rho_n^* \sim \rho_n^2\sigma_Y^2/(2\sigma_X^2)$.
This relates the following general results to Proposition 0.
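A quick Monte Carlo check of this correlation formula for model (2) (an illustrative sketch, not from the paper; sample size, parameter values and seed are arbitrary):

```python
import math
import random

random.seed(7)
rho, sx, sy, n = 0.5, 1.0, 2.0, 200_000

xs = [random.gauss(0.0, sx) for _ in range(n)]
zs = [x + rho * random.gauss(0.0, sy) for x in xs]  # model (2): Z = X + rho*Y

mx, mz = sum(xs) / n, sum(zs) / n
cov = sum((a - mx) * (b - mz) for a, b in zip(xs, zs)) / n
vx = sum((a - mx) ** 2 for a in xs) / n
vz = sum((b - mz) ** 2 for b in zs) / n
sample_corr = cov / math.sqrt(vx * vz)

theory = (1.0 + rho**2 * sy**2 / sx**2) ** -0.5  # rho_n^* from the text
print(sample_corr, theory)
```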
The particular model (2) can be generalized by dealing with nonindependent random variables $X_i$, $Y_i$. In addition, the joint distribution $F_{X,Y}$ of $X_i$, $Y_i$ might depend on $n$. To include these different extensions, we treat the general model of a triangular array of i.i.d. $(X_{in}, Z_{in})$, $i \le n$, with joint distribution $F_{X,Z,n}$. For the general model we discuss in the next section the asymptotic properties of the possible limits for the distributions of the partial maxima $M_n$. Mainly we state necessary and sufficient conditions for the asymptotic dependence structure, in particular the independence or complete dependence of the components $M_{nj}$. Other properties are also of interest, such as the max-stability or max-infinite divisibility of the limit distribution. The general results are applied to some particular cases, such as the model (2) with independent random variables $X_i$ and $Y_i$, in particular if their distributions belong to a domain of attraction of an extreme value law. A large variety of limit laws is possible already for the simple model (2).

Our problem is related to the work of Brown and Resnick (1977) and Penrose (1990), who dealt with the maxima of normalized and rescaled stochastic processes. They discussed the structure of the limiting process, called max-stable (min-stable) or semi-max-stable (semi-min-stable) processes, respectively. Our limit distributions give explicitly the bivariate distributions of these limiting processes. The generalization to higher-dimensional distributions is obvious and is therefore not dealt with.
2. The general model

We deal with the general model and discuss the limit law of $M_n$, where the limit distribution $G$ of (1) is nondegenerate. Note that the univariate marginal distributions $G_j(x)$ are in general not extreme value distributions, but $G$ is a max-infinitely divisible (max-i.d.) distribution (Balkema and Resnick, 1977). We assume that $G_1$ is continuous. This simplifies the following general discussion, without restricting our applications. Eq. (1) is equivalent to the weak convergence of

$$nP\{X_{1n} > u_{n1}(x),\ Z_{1n} > u_{n2}(z)\} \to H(x,z). \tag{3}$$
Eq. (1) implies that the univariate limit distributions of the components $M_{nj}$ exist. Thus the univariate distributions of $X_{1n}$ and $Z_{1n}$ are such that there exist normalizations $u_{nj}(x)$, $j = 1,2$, with

$$P\{M_{n1} \le u_{n1}(x)\} = [F_{X,n}(u_{n1}(x))]^n \to G_1(x)$$

and

$$P\{M_{n2} \le u_{n2}(z)\} = [F_{Z,n}(u_{n2}(z))]^n \to G_2(z)$$

as $n \to \infty$. This is equivalent to

$$n\bar F_{X,n}(u_{n1}(x)) \to \tau_1(x) = \log(1/G_1(x))$$

and

$$n\bar F_{Z,n}(u_{n2}(z)) \to \tau_2(z) = \log(1/G_2(z))$$

as $n \to \infty$, with $\bar F(x) = 1 - F(x)$. Note that $\tau_j(\cdot)$ is a nonincreasing function, $j = 1,2$. Complete dependence means, as mentioned, that

$$G(x,z) = \min(G_1(x), G_2(z)),$$

or equivalently

$$H(x,z) = \min(\tau_1(x), \tau_2(z)). \tag{4}$$
Theorem 1 (Complete dependence). Assume that $G_1$ and $G_2$ exist and $G_1$ is continuous. If for all $x$ and $z$

$$P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(x)\} \to \begin{cases} 1 & \text{if } \tau_1(x) < \tau_2(z) < \infty, \\ 0 & \text{if } \tau_2(z) < \tau_1(x) < \infty, \end{cases} \tag{5}$$

then (1) and (4) hold. Conversely, if (1) and (4) hold and in addition $P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(x)\}$ is monotone nondecreasing in $x$ for all $n$ sufficiently large, then (5) holds.

Proof. Sufficiency.
Note that by conditioning, for fixed $z$ and $x$ with $\tau_1(x) < \infty$,

$$nP\{X_{1n} > u_{n1}(x),\ Z_{1n} > u_{n2}(z)\} = \int_x^\infty P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(u)\}\, d\nu_n, \tag{6}$$

where the finite measure $\nu_n$ is generated by $nP\{X_{1n} > u_{n1}(\cdot)\}$ and converges weakly to $\nu$, generated by $\tau_1(\cdot)$. By Theorem 5.5 of Billingsley (1968), $H(x,z)$ exists and we get that for continuity points $(x,z)$ of $H$: if $\tau_2(z) \ge \tau_1(x)$ (hence $\tau_2(z) > \tau_1(u)$ for all $u > x$)

$$H(x,z) = \int_x^\infty 1\{\tau_2(z) > \tau_1(u)\}\, d\nu = \tau_1(x),$$

and if $\tau_2(z) < \tau_1(x)$ there exists $v$ such that $\tau_2(z) = \tau_1(v) \ge \tau_1(u)$ for $u > v > x$ (since $\tau_1(\cdot)$ is continuous); then in the same way $H(x,z) = \tau_1(v) = \tau_2(z)$. Thus, $H(x,z) = \min(\tau_1(x), \tau_2(z))$.

Necessity. By (4) and the assumption we have, for continuity points $(x,z)$ with $\tau_1(x) < \tau_2(z)$, hence $\tau_1(u) < \tau_2(z)$ for $u \ge x$,

$$\int_x^\infty \big[1 - P\{Z_{1n} \ge u_{n2}(z) \mid X_{1n} = u_{n1}(u)\}\big]\, d\nu_n \to 0.$$

For $y > x$ with $\nu((x,y]) > 0$ we have

$$\int_x^y \big[1 - P\{Z_{1n} \ge u_{n2}(z) \mid X_{1n} = u_{n1}(u)\}\big]\, d\nu_n \to 0.$$

By the monotonicity, $[1 - P\{Z_{1n} \ge u_{n2}(z) \mid X_{1n} = u_{n1}(y)\}]\,\nu_n((x,y]) \to 0$, which implies the convergence (5) for all $y$ with $\tau_1(y) < \tau_2(z)$. In the case with $x < y$ and $z$ with $\tau_1(x) > \tau_1(y) > \tau_2(z)$ we get in the same way

$$\int_x^y P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(u)\}\, d\nu_n \to 0,$$

which implies the remaining statement $P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(x)\} \to 0$. □
Remark. Note that the converse can be proved only under some additional assumption on the conditional probabilities (cf. Topsøe, 1967). We used the monotonicity in this theorem, since it is convenient for the following applications. The same remark applies also to Theorem 2 on the asymptotic independence of the components.

Theorem 2 (Independence). Assume that $G_1$ and $G_2$ exist and $G_1$ is continuous. Eq. (1) holds with $G(x,z) = G_1(x)G_2(z)$, if for every continuity point $(x,z)$

$$P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(x)\} \to 0. \tag{7}$$
Conversely, if (1) holds with $G(x,z) = G_1(x)G_2(z)$ and in addition $P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(x)\}$ is monotone nondecreasing in $x$ for all $n$ sufficiently large, then (7) holds.

Proof. By Eq. (6) we have to prove again that $H(x,z) = 0$ for every continuity point $(x,z)$, which follows immediately by (7). Conversely, (6) implies (7) by the monotonicity, as in the proof of Theorem 1. □

In the other cases, we get a limit distribution which is neither independent nor completely dependent.
Theorem 3. Assume that $G_1$ and $G_2$ exist and $G_1$ is continuous. Eq. (1) holds if for every continuity point $z$ of $G_2$

$$\lim_{n\to\infty} P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(x)\} = g(x,z) \tag{8}$$

for all points $x$ with $\tau_1(x) < \infty$, except a $\nu$-nullset $E = E(z)$:

$$E = \{x:\ \tau_1(x) < \infty \text{ and } P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(x_n)\} \not\to g(x,z) \text{ for some sequence } x_n \to x\}.$$

Proof. In the same way as before, the assumptions imply by Theorem 5.5 of Billingsley (1968) that for every continuity point $z$ of $G_2$

$$\int_x^\infty P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(u)\}\, d\nu_n \to \int_x^\infty g(u,z)\, d\nu = H(x,z)$$

for all continuity points $(x,z)$, except the $\nu$-nullset $E$. Hence,

$$G(x,z) = G_1(x)G_2(z)\exp\Big(\int_x^\infty g(u,z)\, d\nu\Big)$$

exists as limit of (1). □
These results can be easily extended to triangular arrays of random vectors which are stationary for each $n$. We need in addition some mixing type conditions, for instance the conditions $D(u_n(x))$ and $D'(u_n(x))$ of Hsing (1989) or Hüsler (1990). In the same way we could define the point processes of exceedances and analyse their asymptotic behaviour. We would find the expected fact that the point process of exceedances is asymptotically a Poisson process, and that this asymptotic point process can be represented by components which are related to the exceedances in one of the two components, and which are either completely dependent, dependent or independent. Since this is a straightforward extension, we turn now to some applications and to the max-stability property.
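The Poisson character of the exceedances is easy to see in simulation (an illustrative sketch, not part of the paper): for $n$ i.i.d. standard exponential variables and $u_n(x) = \log n + x$, the number of exceedances of $u_n(x)$ is Binomial$(n, e^{-x}/n)$, which is approximately Poisson with intensity $\tau(x) = e^{-x}$.

```python
import math
import random

# Count exceedances of u_n(x) = log n + x among n i.i.d. Exp(1) variables;
# the count is Binomial(n, e^{-x}/n), approximately Poisson(tau(x)).
random.seed(3)
n, trials, x = 1000, 2000, 0.0
level = math.log(n) + x
counts = [sum(random.expovariate(1.0) > level for _ in range(n))
          for _ in range(trials)]
mean_count = sum(counts) / trials
tau = math.exp(-x)             # Poisson intensity tau(x) = e^{-x}
p0 = counts.count(0) / trials  # empirical P{no exceedance}
print(mean_count, tau, p0, math.exp(-tau))
```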
3. The special model

We discuss now the special model (2), where the $X_i$ are i.i.d. with $F_X$ not depending on $n$. (1) holds only if $F_X$ belongs to a domain of attraction of an extreme value distribution. Thus $G_1$, $\tau_1$ and $\nu$ are continuous. In addition, we assume that the $Y_i$ are also i.i.d. with $F_Y$ not depending on $n$, and independent of the $X_i$. Denote by $\alpha(Y)$ and $\omega(Y)$ the endpoints of the support of $Y$, $\alpha(Y) < \omega(Y)$.
Theorem 4 (Complete dependence). Assume that in model (2) $G_1$ and $G_2$ exist and are nondegenerate. Then (1) and (4) hold iff

$$\liminf_{n\to\infty}\, (u_{n2}(z) - u_{n1}(x))/\rho_n \ge \omega(Y)$$

for $x, z$ such that $0 < \tau_2(z) < \tau_1(x) < \infty$, and

$$\limsup_{n\to\infty}\, (u_{n2}(z) - u_{n1}(x))/\rho_n \le \alpha(Y)$$

for $x, z$ such that $0 < \tau_1(x) < \tau_2(z) < \infty$.

Proof. The statement follows immediately by Theorem 1 and the following relations:

$$P\{Z_{1n} > u_{n2}(z) \mid X_1 = u_{n1}(x)\} = P\{Y_1 > (u_{n2}(z) - u_{n1}(x))/\rho_n \mid X_1 = u_{n1}(x)\} = P\{Y_1 > (u_{n2}(z) - u_{n1}(x))/\rho_n\} \to 0$$

iff

$$\liminf_{n\to\infty}\, (u_{n2}(z) - u_{n1}(x))/\rho_n \ge \omega(Y),$$

respectively $\to 1$ iff

$$\limsup_{n\to\infty}\, (u_{n2}(z) - u_{n1}(x))/\rho_n \le \alpha(Y).$$

In addition, $P\{Z_{1n} > u_{n2}(z) \mid X_{1n} = u_{n1}(x)\} = P\{Y_1 > (u_{n2}(z) - u_{n1}(x))/\rho_n\}$ is monotone nondecreasing in $x$ for all $n$, since $\rho_n > 0$ and $a_{n1} > 0$. □

For model (2) we find also a simple sufficient condition for the complete dependence.
Corollary 5. Assume in model (2) that $G_1$ and $G_2$ exist with norming values $a_{n1} > 0$ and $b_{n1}$, and that $u_{n2}(\cdot)$ can be selected as $u_{n2}(\cdot) = u_{n1}(\cdot)$. Then

$$G(x,z) = \min(G_1(x), G_2(z)) = G_1(\min(x,z))$$

iff $\rho_n a_{n1} \to 0$.

Proof. Since $u_{n2}(z) = u_{n1}(z)$, it follows easily that $(u_{n2}(z) - u_{n1}(x))/\rho_n = (z - x)/(\rho_n a_{n1}) \to +\infty$ iff $\tau_1(x) \ge \tau_2(z)$, and $\to -\infty$ iff $\tau_1(x) < \tau_2(z)$, provided $\rho_n a_{n1} \to 0$; hence the conditions of Theorem 4 hold. □
The existence of $G_2$ and $u_{n2}(z) = u_{n1}(z)$ hold, if

$$nP\{X_1 + \rho_n Y_1 > u_{n1}(z)\} = \int nP\{X_1 > b_{n1} + (z - \rho_n a_{n1} y)/a_{n1}\}\, dF_Y(y) \to \tau_2(z).$$

The crucial part in this convergence is that the subintegral on $\{y > z/(\rho_n a_{n1})\}$ is asymptotically negligible. It is rather easy to construct examples where this does not hold. However, it holds in all the following examples of Section 4 as well as for the normal example.
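The complete-dependence statement of Corollary 5 is visible in a small simulation (an illustrative sketch, not from the paper): with exponential $X_i$, $Y_i$ (so $a_{n1} = 1$), the maxima $M_{n1}$ and $M_{n2}$ of model (2) are attained at the same index with high probability when $\rho$ is small, but not when $\rho$ is large.

```python
import random

def argmax_match(rho, n, trials, seed):
    """Fraction of samples where max X_i and max Z_i = X_i + rho*Y_i
    occur at the same index i (model (2) with Exp(1) variables)."""
    rng = random.Random(seed)
    hit = 0
    for _ in range(trials):
        xs = [rng.expovariate(1.0) for _ in range(n)]
        zs = [x + rho * rng.expovariate(1.0) for x in xs]
        hit += xs.index(max(xs)) == zs.index(max(zs))
    return hit / trials

small = argmax_match(0.01, 1000, 300, seed=5)  # rho*a_n1 -> 0: same index
large = argmax_match(5.0, 1000, 300, seed=5)   # large rho: Z driven by Y
print(small, large)
```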
In the same way we find for the simple model (2) necessary and sufficient conditions for the independence, by using Theorem 2.

Theorem 6 (Independence). Assume that in model (2) $G_1$ and $G_2$ exist and are nondegenerate. Then (1) holds with $G(x,z) = G_1(x)G_2(z)$ for all $x, z$ iff for all $x, z$ with $0 < \tau_1(x), \tau_2(z) < \infty$

$$\liminf_{n\to\infty}\, (u_{n2}(z) - u_{n1}(x))/\rho_n \ge \omega(Y). \tag{9}$$

Finally, we turn to the intermediate dependence case.

Theorem 7. Assume that in model (2) $G_1$ and $G_2$ exist and are nondegenerate. Then (1) holds if

$$P\{Y_1 > (u_{n2}(z) - u_{n1}(x))/\rho_n\} \to g(x,z) \tag{10}$$

holds for points $(x,z)$, except a $\nu$-nullset $E = E(z)$, defined in Theorem 3.
It is interesting to know whether these limit distributions are extreme value distributions, i.e. max-stable. Obviously, if $G$ has completely dependent components, then $G$ is max-stable, since $G_1$ is max-stable. If $G$ has independent components, then $G$ is max-stable if and only if also $G_2$ is max-stable.
4. Applications

These general results are now applied to some concrete cases of model (2), in relation to the extreme value situation. It is well known that the extreme value distributions can be of three types, called type 1, 2 or 3:

Type 1: $G^*_{1,\alpha}(x) = \exp(-x^{-\alpha})$ for $x > 0$, and $0$ else.
Type 2: $G^*_{2,\alpha}(x) = \exp(-(-x)^{\alpha})$ for $x \le 0$, and $1$ else.
Type 3: $G^*_3(x) = \exp(-e^{-x})$ for all $x$

(cf. e.g. Leadbetter et al., 1983; Reiss, 1989). In the first two distributions, $\alpha > 0$ is a shape parameter. Many combinations could be discussed, but we deal only with a few interesting cases, where each of $F_X$ and $F_Y$ belongs to a domain of attraction of an extreme value distribution $G^*$, denoted by $F \in D(G^*)$. Of interest is the situation where $F_X$ and $F_Y$ belong to the domain of attraction of the same extreme value distribution type, generalizing the mentioned case with normal variables. Note that the normal law $\Phi \in D(G^*_3)$.

(A) We begin by discussing the case where both random variables $X$ and $Y$ belong to some domain of attraction of type 1. It implies that there exists a sequence $a_{n1} \to 0$ such that $n\bar F_X(x/a_{n1}) \to x^{-\alpha}$ as $n \to \infty$ for positive $x$ and some $\alpha > 0$. Similarly, let the sequence $a^*_{n2} \to 0$ be such that $n\bar F_Y(y/a^*_{n2}) \to y^{-\alpha'}$ as $n \to \infty$ for positive $y$ and some $\alpha' > 0$. This means that

$$F_X \in D(G^*_{1,\alpha}) \quad\text{and}\quad F_Y \in D(G^*_{1,\alpha'}). \tag{11}$$

Corollary 8. Suppose that (11) holds and that

$$\lim_{n\to\infty} a^*_{n2}/(a_{n1}\rho_n) = \lambda \in \{0, \infty\}.$$

If $\lambda = \infty$, then the limit distribution $G$ in model (2) is completely dependent, and if $\lambda = 0$, then $G$ is independent.
Proof. (1) Let $\lambda = \infty$. It implies that $a_{n1}\rho_n \to 0$ and that

$$nP\{X_1 + \rho_n Y_1 > z/a_{n1}\} \to z^{-\alpha}$$

by dominated convergence. Hence, $u_{n2}(z) = u_{n1}(z)$. By the same method, we also get that

$$nP\{X_1 > x/a_{n1},\ X_1 + \rho_n Y_1 > z/a_{n1}\} \to \min(x^{-\alpha}, z^{-\alpha}) = \min(\tau_1(x), \tau_2(z)).$$

This implies the statement directly. Naturally, we also have that

$$(u_{n2}(z) - u_{n1}(x))/\rho_n = (z - x)/(a_{n1}\rho_n) \to \pm\infty$$

if $z > x$ or $z < x$, respectively, since $a_{n1}\rho_n \to 0$. Note also that $G_1 = G_2 = G^*_{1,\alpha}$.

(2) Let $\lambda = 0$. We get $\rho_n/a^*_{n2} \to \infty$ and by the same method

$$nP\{X_1 + \rho_n Y_1 > z\rho_n/a^*_{n2}\} \to z^{-\alpha'}.$$

Thus, $u_{n2}(z) = z\rho_n/a^*_{n2}$. It follows that

$$(u_{n2}(z) - u_{n1}(x))/\rho_n = \big(z - x\, a^*_{n2}/(\rho_n a_{n1})\big)/a^*_{n2} \to +\infty$$

for all positive $x, z$. Hence, the conditions of Theorem 2 are satisfied. □
This result raises the question what dependence occurs if $\lambda > 0$, finite. We observe (Corollary 9) a rather interesting fact, showing that the marginal limit distribution $G_2$ is in general not an extreme value distribution and thus also $G$ is in general not max-stable. However, if $\alpha = \alpha'$, then $G$ and $G_2$ are max-stable.

Corollary 9. Suppose that in model (2) condition (11) holds and that

$$\lim_{n\to\infty} a^*_{n2}/(a_{n1}\rho_n) = \lambda \in (0, \infty).$$

Then the limit distribution $G$ is given by

$$G(x,z) = K_\lambda(x,z) = \exp\big(-x^{-\alpha} - z^{-\alpha} - z^{-\alpha'}\lambda^{-\alpha'} + \min(x^{-\alpha}, z^{-\alpha})\big). \tag{12}$$

The condition of the theorem does not imply that $\alpha = \alpha'$.

Proof. By the same method as in Corollary 8 we find directly that

$$nP\{X_1 + \rho_n Y_1 > z/a_{n1}\} \to z^{-\alpha} + z^{-\alpha'}\lambda^{-\alpha'}$$

and

$$nP\{X_1 > x/a_{n1},\ X_1 + \rho_n Y_1 > z/a_{n1}\} \to \min(x^{-\alpha}, z^{-\alpha}).$$

The max-stability in case $\alpha = \alpha'$ is easily seen. □

Example 1. We consider as special example of (11) the case that $F_X$ and $F_Y$ are Pareto, which means

$$F_X(x) = 1 - x^{-\alpha} \quad\text{and}\quad F_Y(y) = 1 - y^{-\alpha'}$$

for $x, y > 1$ and $\alpha, \alpha' > 0$. The normalization for the first component is given by $u_{n1}(x) = xn^{1/\alpha}$, hence $a_{n1} = n^{-1/\alpha}$. Similarly, $a^*_{n2} = n^{-1/\alpha'}$. Obviously, (11) holds: $F_X \in D(G^*_{1,\alpha})$ and $F_Y \in D(G^*_{1,\alpha'})$. Thus, $a^*_{n2}/(a_{n1}\rho_n) = (n^{1/\alpha' - 1/\alpha}\rho_n)^{-1}$. Corollaries 8 and 9 imply the results for the special case: complete dependence if $n^{1/\alpha' - 1/\alpha}\rho_n \to 0$, independence if $n^{1/\alpha' - 1/\alpha}\rho_n \to \infty$, and dependence with limit distribution $K_\lambda$ given in (12) if $n^{1/\alpha' - 1/\alpha}\rho_n \to 1/\lambda$.

(B) We discuss now examples where the marginal distributions $F_X$ and $F_Z$ both belong to $D(G^*_3)$, as in the normal case treated by Hüsler and Reiss (1989). This example will exhibit some similarities to the normal case, but another class of max-stable distributions.
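The limit $K_\lambda$ of (12) can be checked numerically; the following sketch (not from the paper; parameter values are arbitrary) evaluates $K_\lambda$, verifies the max-stability identity $K_\lambda^k(k^{1/\alpha}x, k^{1/\alpha}z) = K_\lambda(x,z)$ when $\alpha = \alpha'$, and checks the complete-dependence limit as $\lambda \to \infty$.

```python
import math

def K(lam, x, z, a, ap):
    """Limit distribution K_lambda of (12), Frechet-type margins."""
    return math.exp(-x**-a - z**-a - z**-ap * lam**-ap
                    + min(x**-a, z**-a))

a = ap = 2.0
lam, x, z, k = 1.5, 1.2, 0.8, 7
s = k ** (1.0 / a)                       # scaling for type-1 margins
print(K(lam, s * x, s * z, a, ap) ** k)  # max-stability when a == ap
print(K(lam, x, z, a, ap))

# lambda -> infinity: the completely dependent limit min(G1(x), G2(z))
print(K(1e9, x, z, a, ap), math.exp(-max(x**-a, z**-a)))
```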
Example 2. Let us assume that $X$, $Y$ have an exponential distribution: $F_X(x) = F_Y(x) = 1 - \exp(-x)$, $x \ge 0$. Then $b_{n1} = \log n$ and $a_{n1} = 1$. We derive first $u_{n2}(z)$: if $\rho_n \ne 1$, then

$$nP\{Z_1 \ge u_{n2}(z)\} = n\int_0^{u_{n2}(z)} P\{Y_1 \ge (u_{n2}(z) - x)/\rho_n\}\, e^{-x}\, dx + n\exp(-u_{n2}(z))$$
$$= n\rho_n\big[\exp(-u_{n2}(z)/\rho_n) - \exp(-u_{n2}(z))\big]/(\rho_n - 1) + n\exp(-u_{n2}(z)) \to e^{-z},$$

if we use

$$u_{n2}(z) = \begin{cases} \rho_n\big[\log n + \log(\rho_n/(\rho_n - 1)) + z\big] & \text{if } \lim_n \rho_n > 1, \\ \log n - \log(1 - \rho_n) + z & \text{if } \lim_n \rho_n < 1. \end{cases}$$

We note that we get $\tau_1(x) = \tau_2(x) = \exp(-x)$ and we can select $u_{n2}(\cdot) \equiv u_{n1}(\cdot)$ if $\rho_n \to 0$. If $\rho_n a_{n1} = \rho_n \to 0$, it implies by Corollary 5 that $M_{n1}$ and $M_{n2}$ are completely dependent. In the case $\lim_n \rho_n > 1$,

$$(u_{n2}(z) - u_{n1}(x))/\rho_n \to \infty \quad\text{for all } x, z.$$

Hence by Theorem 6 we have

$$G(x,z) = G_1(x)G_2(z) = \exp(-e^{-x} - e^{-z}).$$

In the case $\lim_n \rho_n = \Lambda \in (0,1)$, we find the following continuous limit distribution $G = L_\Lambda$:

$$L_\Lambda(x,z) = \begin{cases} \exp\big(-e^{-x} - \Lambda\big[(1-\Lambda)^{1-\Lambda}\, e^{-(z - x(1-\Lambda))}\big]^{1/\Lambda}\big) & \text{if } z - x \ge \log(1-\Lambda), \\ \exp(-e^{-z}) & \text{if } z - x < \log(1-\Lambda). \end{cases}$$

It is easily verified that $L_\Lambda$ is a max-stable distribution, since $L_\Lambda^{1/k}(x,z) = L_\Lambda(x + \log k, z + \log k)$. In the remaining case $\lim_n \rho_n = 1$ it is also possible to show that $G(x,z) = G_1(x)G_2(z) = \exp(-e^{-x} - e^{-z})$. We conclude that if $\Lambda = \lim_n \rho_n \ge 1$ we have independent components, and only if $\Lambda = 0$ do we have completely dependent components. Hence the simple sufficient condition $\rho_n a_{n1} \to 0$ is necessary in this example also.
Example 3. Instead of the exponential distribution we can consider the family of gamma distributions $\Gamma_\alpha(\cdot)$ with density $f(x) = \exp(-x)x^{\alpha-1}/\Gamma(\alpha)$, $x \ge 0$ and $\alpha > 0$. Thus let us assume that $F_X(\cdot) = \Gamma_\alpha(\cdot)$ and $F_Y(\cdot) = \Gamma_{\alpha'}(\cdot)$. A similar calculation as above, but more tedious, gives that if $\lim_n \rho_n > 1$,

$$u_{n2}(z) = \rho_n\Big[\log n + (\alpha' - 1)\log\log n + \alpha\log\frac{\rho_n}{\rho_n - 1} - \log\Gamma(\alpha') + z\Big]$$

and

$$nP\{X_1 + \rho_n Y_1 > u_{n2}(z)\} \to e^{-z}$$

as $n \to \infty$, for all $z \in \mathbb{R}$. Theorem 6 implies the asymptotic independence of the components of the bivariate maxima, since

$$(u_{n2}(z) - u_{n1}(x))/\rho_n \sim \frac{\rho_n - 1}{\rho_n}\log n \to \infty$$

for all $z, x$, using

$$u_{n1}(x) = \log n + (\alpha - 1)\log\log n + x - \log\Gamma(\alpha).$$

The complete dependence follows if $\rho_n \to 0$ by Corollary 5 again, since $u_{n2}(\cdot) = u_{n1}(\cdot)$ in this case also. Comparing these results with the special case of exponentially distributed random variables (Example 2), we
note that the conditions for complete dependence and independence are still the same for the class of gamma distributions.
Acknowledgement

We would like to thank R.D. Reiss for helpful and stimulating discussions of this work and the referee for some useful comments.
References

Balkema, A.A. and S.I. Resnick (1977), Max-infinite divisibility, J. Appl. Probab. 14, 309-313.
Billingsley, P. (1968), Weak Convergence of Probability Measures, Wiley, New York.
Brown, B.M. and S.I. Resnick (1977), Extreme values of independent stochastic processes, J. Appl. Probab. 14, 732-739.
Hsing, T. (1989), Extreme value theory for multivariate stationary sequences, J. Multivariate Anal. 29, 274-291.
Hüsler, J. (1990), Extremes of multivariate stationary random sequences, Stoch. Proc. Appl. 35, 99-108.
Hüsler, J. and R.D. Reiss (1989), Maxima of normal random vectors: Between independence and complete dependence, Statist. Probab. Lett. 7, 283-286.
Leadbetter, M.R., G. Lindgren and H. Rootzén (1983), Extremes and Related Properties of Random Sequences and Processes, Series in Statistics, Springer, Berlin.
Penrose, M.D. (1990), Semi-min-stable processes, Tech. Report 133, Univ. of Santa Barbara.
Reiss, R.D. (1989), Order Statistics, Series in Statistics, Springer, Berlin.
Topsøe, F. (1967), Preservation of weak convergence under mappings, Ann. Math. Statist. 38, 1661-1665.