Statistics & Probability Letters 6 (1987) 87-96 North-Holland
November 1987
THE ALMOST SURE BEHAVIOR OF THE OSCILLATION MODULUS OF THE MULTIVARIATE EMPIRICAL PROCESS
J.H.J. EINMAHL
and F.H. RUYMGAART
Katholieke Universiteit Nijmegen, The Netherlands
Received February 1987; revised May 1987
Abstract: Let $\omega_n$ denote the oscillation modulus of the uniform multivariate empirical process, defined as the variation of the process over multi-dimensional intervals with Lebesgue measure not exceeding $a_n \in (0, 1)$. The a.s. limiting behavior of $\omega_n$ is established for sequences $\{a_n\}$ of five different orders of magnitude that constitute an essentially complete spectrum of possibilities. Extensions to processes with underlying d.f. more general than the uniform are indicated. The results may have applications in density estimation and in the theory of multivariate spacings. For related results in the univariate case we refer in particular to Mason, Shorack and Wellner (1983) and Mason (1984), and for a general setting to Alexander (1984).

Keywords: multivariate empirical process, oscillation modulus, almost sure limits.
1. Introduction
The oscillation modulus of an empirical process is not only a theoretically interesting concept, but has direct applications in e.g. density estimation and the theory of spacings. In this paper we establish the a.s. behavior of the oscillation modulus for multivariate empirical processes of arbitrary finite dimension $d$: the complete spectrum of interesting possibilities is considered. Even when specializing to $d = 1$, part (e) of Theorem 3.1 seems to be new, and part (d) of that theorem solves open question 2 on p. 545 in the monograph by Shorack and Wellner (1986). Interesting related results in a general setting may be found in Alexander (1984). In Section 3 we present the main result. Section 4 is devoted to a brief outline of certain applications. In the remainder of this section some notation and definitions are introduced.

Let $X_1, X_2, \ldots$ be a sequence of i.i.d. random vectors on the same probability space taking values in $I^d$, with $I = [0, 1]$ and $d \in \mathbb{N}$. To display the coordinates of $t \in \mathbb{R}^d$ we write $t = (t_1, \ldots, t_d) = (t_j)$ and, given $s \in \mathbb{R}^d$, we define $s \leq t$ to mean that $s_j \leq t_j$ for $1 \leq j \leq d$. For $s \leq t$ we write

$$[s_1, t_1] \times \cdots \times [s_d, t_d] = [s, t], \qquad |t - s| = \prod_{j=1}^{d} |t_j - s_j|. \tag{1.1}$$
The empirical d.f. at stage $n \in \mathbb{N}$ is as usual defined by

$$F_n(t) = \frac{1}{n}\#\{1 \leq i \leq n: X_i \leq t\}, \qquad t \in I^d, \tag{1.2}$$

and the empirical process with underlying d.f. $F$ by

$$U_{n,F}(t) = n^{1/2}(F_n(t) - F(t)), \qquad t \in I^d. \tag{1.3}$$
For the definition of the oscillation modulus it will be convenient to also introduce the notation

$$U_{n,F}([s, t]) = n^{1/2}\big(F_n([s, t]) - F([s, t])\big), \tag{1.4}$$
where $F([s, t]) = P(X_i \in [s, t])$ and $F_n([s, t]) = n^{-1}\#\{1 \leq i \leq n: X_i \in [s, t]\}$. In this paper we will focus on the oscillation modulus

$$\omega_{n,F}(a) = \sup_{s \leq t:\, |t - s| \leq a} |U_{n,F}([s, t])|, \qquad a \in (0, 1), \tag{1.5}$$
and on $\omega^{+}_{n,F}$ and $\omega^{-}_{n,F}$, which are defined as $\omega_{n,F}$ with the absolute value replaced by $+$ and $-$ respectively. Although the oscillation modulus will be considered for fairly general $F$, undue technicalities may and will be avoided by occasionally restricting to the special case where $F$ is the uniform distribution, i.e.
$$F(t) = |t|, \qquad t \in I^d. \tag{1.6}$$

We will simply write

$$U_{n,F} = U_n \ \text{ and } \ \omega_{n,F} = \omega_n, \quad \text{if } F \text{ satisfies (1.6)}. \tag{1.7}$$
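To make the definition concrete, the following small Python sketch (not part of the original paper; the sample size, grid resolution and all identifiers are arbitrary choices) approximates $\omega_n(a)$ of (1.5) for the uniform d.f. (1.6) by brute force, restricting the rectangle corners $s$ and $t$ to a finite grid. It is meant only as an illustration of the quantity being studied, not as an efficient algorithm.

    # Brute-force grid approximation of the oscillation modulus omega_n(a) for
    # uniform data on I^d; for uniform F we have F([s, t]) = |t - s| by (1.1) and (1.6).
    import itertools
    import numpy as np

    def oscillation_modulus(X, a, m=10):
        """Approximate sup over rectangles [s, t] with |t - s| <= a of |U_n([s, t])|."""
        n, d = X.shape
        grid = np.linspace(0.0, 1.0, m + 1)
        best = 0.0
        for s in itertools.product(grid, repeat=d):       # lower-left corners on the grid
            for t in itertools.product(grid, repeat=d):   # upper-right corners on the grid
                if all(si <= ti for si, ti in zip(s, t)):
                    vol = float(np.prod([ti - si for si, ti in zip(s, t)]))  # |t - s|
                    if vol <= a:
                        inside = np.all((X >= np.array(s)) & (X <= np.array(t)), axis=1)
                        emp = inside.mean()               # F_n([s, t])
                        best = max(best, np.sqrt(n) * abs(emp - vol))
        return best

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(500, 2))                        # d = 2, uniform on the unit square
    print(oscillation_modulus(X, a=0.05))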
Some variations on the definition of the oscillation modulus are possible. First one might consider

$$\tilde{\omega}_{n,F}(a) = \sup_{s \leq t:\, |t - s| = a} |U_{n,F}([s, t])|, \qquad a \in (0, 1). \tag{1.8}$$
The corresponding $\tilde{\omega}^{+}_{n,F}$ and $\tilde{\omega}^{-}_{n,F}$ are considered by Mason (1984) for $d = 1$, who derives interesting results for spacings from the properties of these moduli; the modulus $\tilde{\omega}_{n,F}$ has applications in the theory of density estimation. See also Section 4. A second variation, which boils down to (1.5) for $d = 1$, is given by

$$\omega'_{n,F}(h) = \sup_{s \leq t:\, |t_j - s_j| \leq h\ \forall j} |U_{n,F}([s, t])|, \qquad h \in (0, 1); \tag{1.9}$$
thanks to the condition on the lengths of the sides of the intervals, the problem is essentially reduced to a one-dimensional problem. Therefore the present modulus (1.5) is mathematically somewhat harder to handle but seems to be more natural. For arbitrary $d$, Stute (1984) considers the a.s. behavior of $\omega'_{n,F}(h_n)$ for the particular sequence $\{h_n\}$ corresponding to case (b) of Theorem 3.1, and Einmahl and Ruymgaart (1985) consider the a.s. behavior of $\omega'_{n,F}(h_n)$ for $F$ uniform and the particular sequence $\{h_n\}$ corresponding to case (c) of Theorem 3.1. Since properties of $\tilde{\omega}_{n,F}$, $\tilde{\omega}^{+}_{n,F}$, $\tilde{\omega}^{-}_{n,F}$ and $\omega'_{n,F}$ are immediate from those of $\omega_{n,F}$, $\omega^{+}_{n,F}$ and $\omega^{-}_{n,F}$, we will not pay further attention to these moduli here.

We conclude this section with a preparatory remark. It will be convenient to rewrite $U_n([s, t])$ as an ordinary empirical process over the unit cube $I^{2d}$ of doubled dimension $2d$. For this purpose we introduce the sequence of $2d$-dimensional random vectors

$$\tilde{X}_i = (1 - X_{i1}, \ldots, 1 - X_{id}, X_{i1}, \ldots, X_{id}), \qquad i \in \mathbb{N}. \tag{1.10}$$
The common d.f. of the $\tilde{X}_i$ will be denoted by $\tilde{F}$, the empirical d.f. of the first $n$ of them by $\tilde{F}_n$, and the corresponding empirical process by

$$\tilde{U}_{n,F}((u, t)) = n^{1/2}\big(\tilde{F}_n((u, t)) - \tilde{F}((u, t))\big), \qquad (u, t) \in I^d \times I^d. \tag{1.11}$$
Note that if $F$ is the uniform d.f. in (1.6), the d.f. $\tilde{F}$ reduces to

$$\tilde{F}((u, t)) = \begin{cases} |t - s|, & \text{for } (u, t) \in I^d \times I^d \text{ with } (1 - u_j) = (s_j) \leq (t_j), \\ 0, & \text{otherwise}. \end{cases} \tag{1.12}$$
We will simply write

$$\tilde{U}_{n,F} = \tilde{U}_n, \quad \text{if } \tilde{F} \text{ satisfies (1.12)}. \tag{1.13}$$
Let us now observe that

$$U_n([s, t]) = \tilde{U}_n((u, t)) \quad \text{with } (s_j) = (1 - u_j), \tag{1.14}$$

so that the process $U_n$ indexed by rectangles, as needed in definition (1.5), behaves as the ordinary empirical process $\tilde{U}_n$ indexed by points. Some fundamental tools that might be of independent interest are collected in Section 2. Since we derive these tools for arbitrary $F$ and $d$, they apply in particular to the empirical process in (1.13).
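The identity (1.14) is immediate from the definitions: writing $(s_j) = (1 - u_j)$ we have, for each $i$,

$$\{X_i \in [s, t]\} = \bigcap_{j=1}^{d}\{s_j \leq X_{ij} \leq t_j\} = \bigcap_{j=1}^{d}\{1 - X_{ij} \leq u_j,\ X_{ij} \leq t_j\} = \{\tilde{X}_i \leq (u, t)\},$$

so that $F_n([s, t]) = \tilde{F}_n((u, t))$ and $F([s, t]) = \tilde{F}((u, t))$, and (1.14) follows.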
2. Fundamental inequalities

Throughout this section the underlying d.f. $F$ is completely arbitrary. Fix $v \in \mathbb{N}^d$ and, for $t \in \mathbb{N}^d$ with $t \leq v$, let $\xi_t$ be a real-valued r.v. with mean 0 and variance $\sigma_t^2 < \infty$. We assume these r.v.'s to be independent. Let us next define the partial sums

$$S_u = \sum_{t \leq u} \xi_t, \qquad \text{for } u \in \mathbb{N}^d \text{ with } u \leq v, \tag{2.1}$$

and let us write

$$\sigma^2 = \operatorname{Var}(S_v) = \sum_{t \leq v} \sigma_t^2. \tag{2.2}$$
The following inequality may be found in Klesov (1983) and is essentially contained in Orey and Pruitt (1973, proof of Lemma 1.2). For a simple proof we refer to Einmahl (1987).

Inequality 2.1. For all $\lambda \in \mathbb{R}$ we have

$$P\Big(\max_{u \leq v} S_u \geq \lambda\Big) \leq 2^{d} P\big(S_v \geq \lambda - d\,2^{1/2}\sigma\big). \tag{2.3}$$
This inequality is particularly useful when path properties are studied for processes with independent increments; in particular it applies to Poisson processes, since these do have independent increments. Let $N_{n,F}$ be a Poisson process on $I^d$ with $E(N_{n,F}(t)) = nF(t)$ for all $t \in I^d$, and consider the standardized process

$$Z_{n,F}(t) = n^{-1/2}\big(N_{n,F}(t) - nF(t)\big), \qquad t \in I^d. \tag{2.4}$$
The following is immediate from Inequality 2.1.

Inequality 2.2. For any $t \in I^d$, $\lambda \in \mathbb{R}$ and either choice of sign we have

$$P\Big(\sup_{s \leq t} \pm Z_{n,F}(s) \geq \lambda\Big) \leq 2^{d} P\Big(\pm Z_{n,F}(t) \geq \lambda - d\,2^{1/2}\big(F(t)\big)^{1/2}\Big). \tag{2.5}$$
Conditioned on $Z_{n,F}((1, \ldots, 1)) = 0$ the process $Z_{n,F}$ is equal in law to $U_{n,F}$ in (1.3). By a standard argument (see e.g. Pyke and Shorack (1968) or Ruymgaart and Wellner (1984)) we find the following result. For any $t \in I^d$ with $F(t) \leq \tfrac12$, all $\lambda \in \mathbb{R}$ and either choice of sign we have

$$P\Big(\sup_{s \leq t} \pm U_{n,F}(s) \geq \lambda\Big) \leq 2 P\Big(\sup_{s \leq t} \pm Z_{n,F}(s) \geq \lambda\Big). \tag{2.6}$$
It remains to find an upper bound for the tail probabilities of the standardized Poisson r.v. $Z_{n,F}(t)$ for a given $t \in I^d$. For this purpose let us introduce the function

$$\psi(\lambda) = 2\lambda^{-2}\{(1 + \lambda)\log(1 + \lambda) - \lambda\}, \quad \lambda \in (-1, 0) \cup (0, \infty), \qquad \psi(0) = 1, \quad \psi(-1) = 2. \tag{2.7}$$
For $t \in I^d$ with $F(t) > 0$ the usual moment generating function technique yields

$$P\big(\pm Z_{n,F}(t) \geq \lambda\big) \leq \exp\Big(-\frac{\lambda^2}{2F(t)}\,\psi\Big(\frac{\pm\lambda}{n^{1/2}F(t)}\Big)\Big), \tag{2.8}$$

for $\lambda \geq 0$ in the case of the $+$ sign and $0 \leq \lambda \leq n^{1/2}F(t)$ in the case of the $-$ sign.
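For completeness, the derivation of (2.8) for the $+$ sign is the standard Chernoff bound for the Poisson distribution: with $N = N_{n,F}(t)$, $\mu = nF(t)$ and $x = n^{1/2}\lambda$,

$$P(N - \mu \geq x) \leq \inf_{\tau > 0} e^{-\tau x}\,E\,e^{\tau(N - \mu)} = \exp\Big(-\mu\Big[\Big(1 + \frac{x}{\mu}\Big)\log\Big(1 + \frac{x}{\mu}\Big) - \frac{x}{\mu}\Big]\Big) = \exp\Big(-\frac{x^2}{2\mu}\,\psi\Big(\frac{x}{\mu}\Big)\Big),$$

where the last equality is just the definition (2.7); substituting $x = n^{1/2}\lambda$ and $\mu = nF(t)$ gives (2.8), and the $-$ sign is treated analogously with the lower Poisson tail.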
Combining (2.5)-(2.8) we eventually obtain the inequalities that are of basic importance in the study of empirical processes.

Inequality 2.3. Let $t \in I^d$ with $0 < F(t) \leq \tfrac12$. For any $\varepsilon \in (0, 1)$ we have

$$P\Big(\sup_{s \leq t} U_{n,F}(s) \geq \lambda\Big) \leq C^{+} \exp\Big(-\frac{(1 - \varepsilon)\lambda^2}{2F(t)}\,\psi\Big(\frac{\lambda}{n^{1/2}F(t)}\Big)\Big), \qquad \lambda \geq 0, \tag{2.9}$$

$$P\Big(\sup_{s \leq t} -U_{n,F}(s) \geq \lambda\Big) \leq C^{-} \exp\Big(-\frac{(1 - \varepsilon)\lambda^2}{2F(t)}\,\psi\Big(\frac{-\lambda}{n^{1/2}F(t)}\Big)\Big), \qquad 0 \leq \lambda \leq n^{1/2}F(t), \tag{2.10}$$

$$P\Big(\sup_{s \leq t} |U_{n,F}(s)| \geq \lambda\Big) \leq C \exp\Big(-\frac{(1 - \varepsilon)\lambda^2}{2F(t)}\,\psi\Big(\frac{\lambda}{n^{1/2}F(t)}\Big)\Big), \qquad \lambda \geq 0, \tag{2.11}$$

where $C^{+}, C^{-}, C \in (0, \infty)$ depend only on $d$ and $\varepsilon$.
3. Application to the oscillation modulus
We start this section with two inequalities in the case of the uniform distribution. First we will give a global version of (2.11) and next a convenient maximal inequality. The following representation of the oscillation modulus,

$$\omega_n(a) = \sup_{\tilde{F}((u, t)) \leq a} |\tilde{U}_n((u, t))|, \tag{3.1}$$

which is immediate from (1.7), (1.12) and (1.14), will be used.

Inequality 3.1. Let $\varepsilon \in (0, \tfrac12]$ and $a \in (0, \tfrac14)$. Then
$$P\big(\omega_n(a) \geq \lambda a^{1/2}\big) \leq \frac{C}{a}\Big(\log\frac{1}{a}\Big)^{d-1} \exp\Big(-(1 - \varepsilon)\frac{\lambda^2}{2}\,\psi\Big(\frac{\lambda}{(na)^{1/2}}\Big)\Big), \qquad \lambda \geq 0, \tag{3.2}$$

where $C = C(d, \varepsilon) \in (0, \infty)$.

Proof. Let us define $\theta$ by $\theta^{2d+1} = 1 - \varepsilon$ and $l \in \mathbb{N}$ by $\theta^{l+1} < a \leq \theta^{l}$, and observe that $l \leq \log(1/a)/\log(1/\theta)$. For $k \in \mathbb{N}_0$ let us introduce
$$A(\theta, k) = \Big\{v \in I^d: (v_j) = (\theta^{k(j)}) \text{ with } k(j) \in \mathbb{N}_0 \text{ and } \sum_{j=1}^{d} k(j) = k\Big\}, \tag{3.3}$$
and for $\beta \in (1, 2)$ and $x \in (0, 1)$ define

$$B(\beta, x) = \begin{cases} \Big\{\big(1 - k(\beta - 1)x,\ \beta x + k(\beta - 1)x\big) \in I^2: k \in \mathbb{N}_0,\ k \leq \Big[\dfrac{1 - \beta x}{(\beta - 1)x}\Big]\Big\} \cup \{(\beta x, 1)\}, & \text{if } \beta x < 1, \\[2mm] \{(1, 1)\}, & \text{if } \beta x \geq 1, \end{cases} \tag{3.4}$$

where $[z]$ denotes the greatest integer $\leq z$. Note that $\# A(\theta, k) = \binom{k + d - 1}{d - 1}$ and $\# B(\beta, x) \leq 1/((\beta - 1)x)$.
According to (2.11) and the fact that the function $\lambda \mapsto \lambda^2\psi(\lambda)$ is increasing for $\lambda \geq -1$, we have for $(\tilde{u}, \tilde{t}) \in I^d \times I^d$ with $\tilde{F}((\tilde{u}, \tilde{t})) \leq a/(1 - \varepsilon)$

$$P\Big(\sup_{(u, t) \leq (\tilde{u}, \tilde{t})} |\tilde{U}_n((u, t))| \geq \lambda a^{1/2}\Big) \leq C \exp\Big(-(1 - \varepsilon)^2\frac{\lambda^2}{2}\,\psi\Big(\frac{\lambda}{(na)^{1/2}}\Big)\Big). \tag{3.5}$$
Using (3.1) and (3.3)-(3.4) we find

$$P\big(\omega_n(a) \geq \lambda a^{1/2}\big) \leq P\Big(\sup_{\tilde{F}((u, t)) \leq \theta^{l}} |\tilde{U}_n((u, t))| \geq \lambda a^{1/2}\Big) \leq \sum_{v \in A(\theta,\, l - d)} P\Big(\sup_{(t_j + u_j - 1) \leq v} |\tilde{U}_n((u, t))| \geq \lambda a^{1/2}\Big)$$
$$\leq \sum_{v \in A(\theta,\, l - d)}\ \sum_{(\tilde{u}, \tilde{t}):\, ((\tilde{u}_j, \tilde{t}_j))_{j=1}^{d} \in \prod_{j=1}^{d} B(1/\theta,\, v_j)} P\Big(\sup_{(u, t) \leq (\tilde{u}, \tilde{t})} |\tilde{U}_n((u, t))| \geq \lambda a^{1/2}\Big), \tag{3.6}$$
where $\prod_{j=1}^{d} B(1/\theta, v_j)$ is the Cartesian product of the $B(1/\theta, v_j)$, noting that $1/\theta \in (1, 2)$. For $(\tilde{u}, \tilde{t}) \in I^d \times I^d$ we take the $j$-th coordinate $\tilde{u}_j$ of $\tilde{u}$ and the $j$-th coordinate $\tilde{t}_j$ of $\tilde{t}$ pairwise together to form a point of $I^2$, for each $1 \leq j \leq d$. For such $(\tilde{u}, \tilde{t})$ we have

$$\tilde{F}((\tilde{u}, \tilde{t})) = \prod_{j=1}^{d} \big((v_j/\theta) \wedge 1\big) \leq \theta^{l - 2d} \leq a/(1 - \varepsilon).$$
Hence by (3.5) the last expression in (3.6) is in turn bounded by

$$\binom{l - 1}{d - 1}\Big(\frac{\theta}{1 - \theta}\Big)^{d}\theta^{d - l}\, C \exp\Big(-(1 - \varepsilon)^2\frac{\lambda^2}{2}\,\psi\Big(\frac{\lambda}{(na)^{1/2}}\Big)\Big) \leq C\Big(\log\frac{1}{\theta}\Big)^{1-d}\Big(\frac{\theta}{1 - \theta}\Big)^{d}\theta^{d}\,\frac{1}{a}\Big(\log\frac{1}{a}\Big)^{d-1}\exp\Big(-(1 - \varepsilon)^2\frac{\lambda^2}{2}\,\psi\Big(\frac{\lambda}{(na)^{1/2}}\Big)\Big).$$

Relabeling $(1 - \varepsilon)^2$ by $(1 - \varepsilon)$, which entails $\theta^{4d+2} = 1 - \varepsilon$, and replacing $C(\log(1/\theta))^{1-d}(\theta/(1 - \theta))^{d}\theta^{d}$ by $C$, we obtain the right hand side of (3.2). $\Box$
In order to obtain precise a.s. limits we use a standard blocking argument. The success of this method hinges on a maximal inequality of the following form. For $d = 1$ we refer to James (1975, Lemma 2.3) and Shorack (1980, Lemma 3).

Inequality 3.2. Let $\varepsilon \in (0, 1)$, $a \in (0, 1]$ and write $n_k = [(1 + \tfrac12\varepsilon)^k]$, $k \in \mathbb{N}$. For all $k \in \mathbb{N}$ and $\lambda > 2(a/\varepsilon)^{1/2}$ we have

$$P\Big(\max_{n_k < n \leq n_{k+1}} \omega_n(a) \geq \lambda\Big) \leq 2 P\big(\omega_{n_{k+1}}(a) \geq (1 - \varepsilon)\lambda\big). \tag{3.7}$$
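In a typical application of (3.7), as in the proof of Theorem 3.1 below, one chooses $a = a_{n_k}$ and a threshold $\lambda_k$ and combines the maximal inequality with the Borel-Cantelli lemma: if

$$\sum_{k=1}^{\infty} P\big(\omega_{n_{k+1}}(a_{n_k}) \geq (1 - \varepsilon)\lambda_k\big) < \infty,$$

then almost surely $\omega_n(a_{n_k}) < \lambda_k$ for all $n \in (n_k, n_{k+1}]$ and all sufficiently large $k$, which yields an upper bound along the full sequence and not only along $\{n_k\}$.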
Proof. Note that it suffices to prove (3.7) with $\geq \lambda$ replaced by $> \lambda$. Let $\{r_i: i \in \mathbb{N}\}$ be an enumeration of the set

$$\{(u, t) \in I^d \times I^d: \tilde{F}((u, t)) \leq a\} \cap \mathbb{Q}^{2d}$$

and let us write

$$S_n((u, t)) = n^{1/2}\tilde{U}_n((u, t)), \qquad (u, t) \in I^d \times I^d. \tag{3.8}$$
According to the representation (3.1) the probability on the left in (3.7) (with $\geq \lambda$ replaced by $> \lambda$) is less than or equal to

$$P\Big(\sup_{n_k < n \leq n_{k+1}}\ \sup_{i \in \mathbb{N}} |S_n(r_i)| > \lambda(n_k + 1)^{1/2}\Big). \tag{3.9}$$
For $n_k < n \leq n_{k+1}$ and $i \in \mathbb{N}$ let us introduce the sets

$$A_{ni} = \big\{|S_n(r_i)| > \lambda(n_k + 1)^{1/2}\big\}, \tag{3.10}$$

$$B_{ni} = \big\{|S_{n_{k+1}}(r_i) - S_n(r_i)| < \lambda(n_k + 1)^{1/2}\cdot\tfrac12\varepsilon\big\}. \tag{3.11}$$
An application of the "lemma for events", see e.g. Loève (1977, Section 18.1, p. 258), generalized to the countable well-ordered index set $\{(n, i): n_k < n \leq n_{k+1},\ i \in \mathbb{N}\}$ with the lexicographical ordering, cf. Einmahl (1987), yields

$$\{\text{the probability in (3.9)}\} \cdot \inf_{n_k < n \leq n_{k+1},\, i \in \mathbb{N}} P(B_{ni}) \leq P\Big(\sup_{i \in \mathbb{N}} |S_{n_{k+1}}(r_i)| \geq (1 - \tfrac12\varepsilon)\lambda(n_k + 1)^{1/2}\Big). \tag{3.12}$$
It is easily seen that

$$P\Big(\sup_{i \in \mathbb{N}} |S_{n_{k+1}}(r_i)| \geq (1 - \tfrac12\varepsilon)\lambda(n_k + 1)^{1/2}\Big) \leq P\Big(\sup_{\tilde{F}((u, t)) \leq a} |\tilde{U}_{n_{k+1}}((u, t))| \geq (1 - \varepsilon)\lambda\Big), \tag{3.13}$$
so that it remains to prove that $P(B_{ni}) \geq \tfrac12$ for all $n_k < n \leq n_{k+1}$ and all $i \in \mathbb{N}$. Application of Chebyshev's inequality yields

$$1 - P(B_{ni}) \leq \frac{4\tilde{F}(r_i)\big(1 - \tilde{F}(r_i)\big)\big(n_{k+1} - (n_k + 1)\big)}{\lambda^2(n_k + 1)\varepsilon^2} \leq \frac{2a}{\lambda^2\varepsilon} \leq \tfrac12, \tag{3.14}$$

for $\lambda > 2(a/\varepsilon)^{1/2}$, as required. $\Box$
Theorem 3.1. Suppose that $F$ has a continuous density $f$ on $I^d$ with maximum $M < \infty$. In each of the following five cases $\{a_n\}$ ($a_n \in (0, 1)$) is a sequence such that $a_n \downarrow 0$ as $n \to \infty$.

(a) If $\log(1/a_n)/\log\log n \to c \in [0, \infty)$ and if $na_n \uparrow$ as $n \to \infty$, we have

$$\limsup_{n \to \infty} \frac{\omega_{n,F}(a_n)}{(a_n \log\log n)^{1/2}} = \big(2(1 + c)M\big)^{1/2}, \qquad \text{a.s.} \tag{3.15}$$

(b) If $\log(1/a_n)/\log\log n \to \infty$, $na_n/\log n \to \infty$ and $na_n \uparrow$ as $n \to \infty$, we have

$$\lim_{n \to \infty} \frac{\omega_{n,F}(a_n)}{\big(a_n \log(1/a_n)\big)^{1/2}} = (2M)^{1/2}, \qquad \text{a.s.} \tag{3.16}$$

(c) If $na_n/\log n \to c \in (0, \infty)$ as $n \to \infty$ we have

$$\lim_{n \to \infty} \frac{n^{1/2}\omega_{n,F}(a_n)}{\log n} = cM\big(\beta^{+}_{cM} - 1\big), \qquad \text{a.s.}, \tag{3.17}$$

where $\beta^{+}_{cM}$ is defined by the conditions $\beta^{+}_{cM}(\log \beta^{+}_{cM} - 1) + 1 = 1/cM$ and $\beta^{+}_{cM} > 1$.

(d) If $na_n/\log n \to 0$, $\log(1/a_n)/\log n \to 1$ and $n^2 a_n \uparrow$ as $n \to \infty$ we have

$$\lim_{n \to \infty} \frac{n^{1/2}\omega_{n,F}(a_n)\,\log\big((\log n)/(na_n)\big)}{\log n} = 1, \qquad \text{a.s.} \tag{3.18}$$

(e) If $\log(1/a_n)/\log n = c \in (1, \infty)$ (i.e. $a_n = n^{-c}$) we have

$$\limsup_{n \to \infty} n^{1/2}\omega_{n,F}(a_n) = [c/(c - 1)], \qquad \text{a.s.} \tag{3.19}$$

If in addition $c/(c - 1) \notin \mathbb{N}$, then $c$ may be replaced by $c_n \to c$ and $\limsup$ by $\lim$.
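As a numerical illustration of case (c) (not part of the original paper), the constant $\beta^{+}_{cM}$ can be computed with any one-dimensional root finder. The Python sketch below uses SciPy's brentq; the bracket endpoints are arbitrary choices that suffice because $\beta \mapsto \beta(\log\beta - 1) + 1$ is increasing on $(1, \infty)$.

    # Solve beta*(log(beta) - 1) + 1 = 1/(cM) for beta > 1, as in case (c) of Theorem 3.1.
    from math import log
    from scipy.optimize import brentq

    def beta_plus(cM):
        """Root beta > 1 of beta*(log(beta) - 1) + 1 = 1/(cM)."""
        f = lambda b: b * (log(b) - 1.0) + 1.0 - 1.0 / cM
        return brentq(f, 1.0 + 1e-12, 1e6)   # f < 0 near 1, f > 0 for large beta

    # For cM = 1 the equation gives beta = e, so the a.s. limit in (3.17) is
    # cM * (beta_plus(cM) - 1) = e - 1, roughly 1.72.
    print(1.0 * (beta_plus(1.0) - 1.0))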
Proof. Without loss of generality we restrict ourselves in the proof to the case where $F$ is uniform. Let us first prove that the numbers on the right are upper bounds for the respective expressions on the left almost surely. These results follow easily by an application of Inequalities 3.1 and 3.2. Because of the similarity of the proofs for all the cases, we will restrict ourselves to a proof of case (a). By the Borel-Cantelli lemma and Inequality 3.2 it suffices to prove that for every small $\varepsilon > 0$ we have $\sum_{k=1}^{\infty} P(A_k) < \infty$, where

$$A_k = \Big\{\omega_{n_{k+1}}(a_{n_k}) \geq \big(2(1 + 2\varepsilon)(1 - \varepsilon)^{-1}(1 + c)\,a_{n_{k+1}}\log\log n_k\big)^{1/2}\Big\}, \tag{3.21}$$

and $n_k = [(1 + \tfrac12\varepsilon)^k]$. Application of Inequality 3.1 yields

$$P(A_k) \leq \frac{C}{a_{n_k}}\Big(\log\frac{1}{a_{n_k}}\Big)^{d-1}\exp\bigg(-(1 + 2\varepsilon)(1 + c)\frac{a_{n_{k+1}}}{a_{n_k}}(\log\log n_k)\,\psi\bigg(\frac{\big(2(1 + 2\varepsilon)(1 - \varepsilon)^{-1}(1 + c)\,a_{n_{k+1}}a_{n_k}^{-1}\log\log n_k\big)^{1/2}}{\big(n_{k+1}a_{n_k}\big)^{1/2}}\bigg)\bigg). \tag{3.22}$$
The conditions in (a) entail that the argument of the function $\psi$ tends to 0 as $k \to \infty$, so that, for large $k$, we have

$$P(A_k) \leq \frac{C}{a_{n_k}}\Big(\log\frac{1}{a_{n_k}}\Big)^{d-1}\exp\big(-(1 + \varepsilon)(1 + c)\log\log n_k\big) = C\Big(\log\frac{1}{a_{n_k}}\Big)^{d-1}\frac{1}{a_{n_k}}\Big(\frac{1}{\log n_k}\Big)^{(1+\varepsilon)(1+c)}. \tag{3.23}$$
Using once more the conditions under (a), it follows from (3.23) that the $P(A_k)$ are summable, which completes this part of the proof.

Let us next turn to the proof that these same numbers are lower bounds for the expressions on the left almost surely. Due to the fact that all the results in the theorem are independent of the dimension, we can use the one-dimensional lower bounds as far as available. These lower bounds can indeed be found in the literature for the cases (a), (b), (c) and (d), namely in Mason, Shorack and Wellner (1983, Theorem 3), Stute (1982, Lemma 2.9), Komlós, Major and Tusnády (1975, (3.11)) and Alexander (1984, Theorem 3.1(C) and its proof) respectively. Hence it remains to consider the lower bounds in case (e). Let us first consider the case $c/(c - 1) \in \mathbb{N}$. For $n, m \in \mathbb{N}$ with $m \geq n$ let us write $(m - n)F_{n,m} = mF_m - nF_n$. Since $na_n \to 0$ it suffices to prove

$$\limsup_{k \to \infty}\ \sup_{|t - s| \leq a_{n_{k+1}}} (n_{k+1} - n_k)F_{n_k, n_{k+1}}([s, t]) \geq c/(c - 1), \qquad \text{a.s.}, \tag{3.24}$$

with $n_k = 2^k$. For $a \in (0, 1)$ let $\mathcal{R}_a$ be a family of $[1/a]$ rectangles $[s, t] \subset I^d$ with disjoint interiors and with $|t - s| = a$. By straightforward computation it may be seen that for a Binomial $(n, p)$ r.v. $Y$ with $np \leq \tfrac12$ and $n \geq 2m$ for some $m \in \mathbb{N}_0$, there exists a number $C(m) \in (0, \infty)$ depending only on $m$ such that

$$P(Y \geq m) \geq C(m)(np)^{m}. \tag{3.25}$$
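One way to verify (3.25), sketched here with one admissible choice of the constant: for $m \geq 1$,

$$P(Y \geq m) \geq \binom{n}{m}p^{m}(1 - p)^{n - m} \geq \frac{(n - m + 1)^{m}}{m!}\,p^{m}(1 - p)^{n} \geq \frac{(n/2)^{m}}{m!}\,p^{m}(1 - np) \geq \frac{(np)^{m}}{2^{m+1}\,m!},$$

using $n \geq 2m$ (so that $n - m + 1 > n/2$), Bernoulli's inequality $(1 - p)^{n} \geq 1 - np$ and $np \leq \tfrac12$; for $m = 0$ the claim is trivial. Thus one may take $C(m) = 2^{-m-1}/m!$.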
Hence, for large $k$, we have

$$P\Big(\sup_{|t - s| \leq a_{n_{k+1}}} (n_{k+1} - n_k)F_{n_k, n_{k+1}}([s, t]) \geq \frac{c}{c - 1}\Big) \geq P\Big(\max_{[s,t] \in \mathcal{R}_{a_{n_{k+1}}}} (n_{k+1} - n_k)F_{n_k, n_{k+1}}([s, t]) \geq \frac{c}{c - 1}\Big)$$
$$\geq 1 - \prod_{[s,t] \in \mathcal{R}_{a_{n_{k+1}}}} \Big(1 - P\Big((n_{k+1} - n_k)F_{n_k, n_{k+1}}([s, t]) \geq \frac{c}{c - 1}\Big)\Big) \geq 1 - \Big(1 - C\Big(\frac{c}{c - 1}\Big)2^{(k - (k+1)c)\,c/(c - 1)}\Big)^{[2^{(k+1)c}]}$$
$$\to 1 - \exp\Big(-C\Big(\frac{c}{c - 1}\Big)2^{-c/(c - 1)}\Big), \qquad \text{as } k \to \infty. \tag{3.26}$$

In the second inequality we use a result in Mallows (1968) concerning the negative dependence of the components of a multinomial random vector. Now (3.24) follows by an application of the Borel-Cantelli lemma.

Finally we consider the case $c/(c - 1) \notin \mathbb{N}$. Then we have $\log(1/a_n)/\log n \to c$, which is equivalent to $a_n = n^{-c + o(1)}$. Since again $na_n \to 0$ it suffices to prove

$$\liminf_{n \to \infty}\ \sup_{|t - s| \leq a_n} nF_n([s, t]) \geq [c/(c - 1)], \qquad \text{a.s.} \tag{3.27}$$

This follows if we can prove that

$$\sum_{n=1}^{\infty} P\Big(\sup_{|t - s| \leq a_n} nF_n([s, t]) \leq [c/(c - 1)] - 1\Big) < \infty. \tag{3.28}$$

Defining $\mu \in (0, 1)$ by $\mu = c/(c - 1) - [c/(c - 1)]$, and using again $\mathcal{R}_{a_n}$, Mallows (1968) and (3.25), we have for large $n$ that

$$P\Big(\sup_{|t - s| \leq a_n} nF_n([s, t]) \leq [c/(c - 1)] - 1\Big) \leq P\Big(\max_{[s,t] \in \mathcal{R}_{a_n}} nF_n([s, t]) \leq [c/(c - 1)] - 1\Big)$$
$$\leq \prod_{[s,t] \in \mathcal{R}_{a_n}} \Big(1 - P\big(nF_n([s, t]) \geq [c/(c - 1)]\big)\Big) \leq \Big\{1 - C\big([c/(c - 1)]\big)\,n^{-(c - 1)[c/(c - 1)] + o(1)}\Big\}^{n^{c + o(1)}} \leq \exp\big(-n^{\frac12(c - 1)\mu}\big). \tag{3.29}$$

These last numbers in (3.29) are summable since $\tfrac12(c - 1)\mu > 0$. This proves (3.27). $\Box$
Theorem 3.2. Under the conditions of the previous theorem, $\omega^{+}_{n,F}$ has the same a.s. limits as $\omega_{n,F}$ in each of the five cases. In cases (a) and (b) also $\omega^{-}_{n,F}$ has the same a.s. limits as $\omega_{n,F}$, but in case (c) we have to replace the limit by $cM(1 - \beta^{-}_{cM})$ ($\beta^{-}_{cM}$ is defined by the conditions $\beta^{-}_{cM}(\log \beta^{-}_{cM} - 1) + 1 = 1/cM$ and $\beta^{-}_{cM} < 1$ for $cM > 1$, and $\beta^{-}_{cM} = 0$ for $cM \leq 1$). In cases (d) and (e) the modulus $\omega^{-}_{n,F}$ is degenerate.

Proof. Since $\omega^{+}_{n,F} \leq \omega_{n,F}$ and because the lower bounds for $\omega_{n,F}$ are based on $\omega^{+}_{n,F}$ anyway, the assertions about $\omega^{+}_{n,F}$ are immediate. Applying the same method of proof along with inequality (2.10) rather than (2.11) proves the statements concerning $\omega^{-}_{n,F}$. $\Box$
4. Brief outline of applications

4.1. Multivariate spacings

For $d = 1$ Mason (1984) exploited a certain duality between the oscillation modulus $\tilde{\omega}_{n,F}$ (see (1.8)) and spacings to derive strong convergence properties for the latter from those obtained for the moduli. When we restrict to spacings of the multidimensional interval type, a similar approach is possible for arbitrary $d \in \mathbb{N}$ using the results of Section 3. In Deheuvels et al. (1987), however, results for spacings of more general shape can be obtained as well, using a different method.
4.2. Multivariate density estimation

In proving uniform a.s. convergence of density estimators, in particular naive density estimators, to the true density, inequality (3.2) is a key result. We will not dwell on this application, since it is well known: see e.g. Stute (1984).
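As a minimal illustration of the naive estimator referred to above (our own sketch, with arbitrary bandwidth and sample size; it is not taken from Stute (1984)), one evaluates the empirical measure of a small cube around the point $t$ and divides by its volume; inequality (3.2) controls the deviation of exactly such empirical rectangle counts from their expectations, uniformly over the location of the cube.

    # Naive multivariate density estimator: empirical mass of a cube of side h around t,
    # divided by the cube's volume h**d.
    import numpy as np

    def naive_density_estimator(X, t, h):
        """f_hat(t) = F_n([t - h/2, t + h/2]) / h**d for a cube of side h around t."""
        n, d = X.shape
        inside = np.all(np.abs(X - t) <= h / 2.0, axis=1)
        return inside.mean() / h**d

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(2000, 2))            # true density f = 1 on the unit square
    print(naive_density_estimator(X, t=np.array([0.5, 0.5]), h=0.2))   # should be close to 1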
4.3. Relation with the empirical difference process

For $d = 1$, a highly technical proof regarding the strong limiting behavior of the difference of the empirical and the quantile process in Kiefer (1970) can be reduced to a few lines when results for the
oscillation modulus are used (Stute (1982), Shorack and Wellner (1986, pp. 589, 590)). A result by Watts (1980) on the tails of this so-called empirical difference process will be further extended and generalized along these lines in a forthcoming paper by Einmahl and Mason (1987).
References

Alexander, K.S. (1984), Rates of growth and sample moduli for weighted empirical processes indexed by sets, Tech. Rept., Dept. of Mathematics, University of Washington, Seattle.
Deheuvels, P., J.H.J. Einmahl, D.M. Mason and F.H. Ruymgaart (1987), The almost sure behaviour of maximal and minimal multivariate k_n-spacings, to appear in J. Multivariate Anal.
Einmahl, J.H.J. (1987), Multivariate Empirical Processes, CWI Tract 32, Amsterdam.
Einmahl, J.H.J. and D.M. Mason (1987), Strong limit theorems for weighted quantile processes, Tech. Report No. 87-2, Dept. of Mathematical Sciences, University of Delaware.
Einmahl, J.H.J. and F.H. Ruymgaart (1985), A strong law for the oscillation modulus of the multivariate empirical process, Statistics & Decisions 3, 357-362.
James, B.R. (1975), A functional law of the iterated logarithm for weighted empirical distributions, Ann. Probability 3, 762-772.
Kiefer, J. (1970), Deviations between the sample quantile process and the sample DF, in: M.L. Puri, ed., Nonparametric Techniques in Statistical Inference (Cambridge), 299-319.
Klesov, O.I. (1983), The law of the iterated logarithm for multiple sums, Th. Probability Math. Stat. 27, 65-72.
Komlós, J., P. Major and G. Tusnády (1975), Weak convergence and embedding, in: Colloquia Mathematica Societatis János Bolyai, Limit Theorems of Probability Theory (North-Holland, Amsterdam), 149-165.
Loève, M. (1977), Probability Theory, Part I (Springer, New York, 4th ed.).
Mallows, C.L. (1968), An inequality involving multinomial probabilities, Biometrika 55, 422-424.
Mason, D.M. (1984), A strong limit theorem for the oscillation modulus of the uniform empirical quantile process, Stoch. Processes Appl. 17, 127-136.
Mason, D.M., G.R. Shorack and J.A. Wellner (1983), Strong limit theorems for oscillation moduli of the uniform empirical process, Probability Th. Rel. Fields 65, 83-97.
Orey, S. and W.E. Pruitt (1973), Sample functions of the N-parameter Wiener process, Ann. Probability 1, 138-163.
Pyke, R. and G.R. Shorack (1968), Weak convergence of a two-sample empirical process and a new approach to Chernoff-Savage theorems, Ann. Math. Stat. 39, 755-771.
Ruymgaart, F.H. and J.A. Wellner (1984), Some properties of weighted multivariate empirical processes, Statistics & Decisions 2, 199-223.
Shorack, G.R. (1980), Some law of the iterated logarithm type results for the empirical process, Austral. J. Stat. 22, 50-59.
Shorack, G.R. and J.A. Wellner (1986), Empirical Processes with Applications to Statistics (Wiley, New York).
Stute, W. (1982), The oscillation behaviour of empirical processes, Ann. Probability 10, 86-107.
Stute, W. (1984), The oscillation behaviour of empirical processes: the multivariate case, Ann. Probability 12, 361-379.
Watts, V. (1980), The almost sure representation of intermediate order statistics, Probability Th. Rel. Fields 54, 281-285.