Statistics & Probability Letters 27 (1996) 375-384
Some properties of the Lynden-Bell estimator with truncated data

Ao Yuan

Department of Statistics, University of British Columbia, Vancouver, B.C., Canada V6T 1Z2

Received January 1995; revised May 1995
Abstract
Strong consistency of the Lynden-Bell estimator is obtained; by making use of a martingale integral representation, weak convergence results are proved and bootstrapping of the estimator is studied.

AMS classification: Primary 62C10, 62C20; secondary 62F12, 62F15

Keywords: Bootstrap; Brownian bridge; Censored observation; Gaussian process; Martingale; Product-limit estimator; Strong consistency; Truncated observation; Weak consistency
1. Introduction
Let $X_1, X_2, \ldots, X_N$ be independent and identically distributed positive random variables with a common distribution function $F(\cdot)$, and let $Y_1, Y_2, \ldots, Y_N$ be a sequence of independent and identically distributed positive random variables with a common distribution function $G(\cdot)$. Here $X_i$ is left-truncated by $Y_i$; that is, $X_i$ is observable only if $X_i \ge Y_i$. We denote the $n$ truncated observations by $(X_i^o, Y_i^o)$, $i = 1, \ldots, N(n)$, where
$$N(n) = \inf\Big\{N: \sum_{i=1}^{N} I_{[X_i^o \ge Y_i^o]} = n\Big\}.$$
In his study of truncated data from an application in astronomy, Lynden-Bell (1971) used non-parametric maximum likelihood arguments to derive the product-limit estimators $\hat F_n(\cdot)$ of $F(\cdot)$ and $\hat G_n(\cdot)$ of $G(\cdot)$, which he defined from the truncated data by
$$1 - \hat F_n(t) = \prod_{s \le t}\Big(1 - \frac{\Delta L_N(s)}{R_N(s)}\Big),$$
where
$$L_N(s) = \sum_{i=1}^{N} I_{[Y_i \le X_i \le s]}, \qquad R_N(s) = \sum_{i=1}^{N} I_{[Y_i \le s \le X_i]},$$
0167-7152/96/$12.00 © 1996 Elsevier Science B.V. All rights reserved. SSDI 0167-7152(95)00102-6
and
$$\hat G_n(t) = \prod_{s > t}\Big(1 - \frac{\Delta Q_N(s)}{R_N(s)}\Big),$$
where
$$Q_N(s) = \sum_{i=1}^{N} I_{[Y_i \le s,\ Y_i \le X_i]},$$
and for a right-continuous function $f(\cdot)$ with left-hand limits define $\Delta f(s) = f(s) - f(s-)$; we use the convention $0/0 = 0$. For any distribution function $f(\cdot)$ on $[0, \infty)$, let $a_f = \inf\{s: f(s) > 0\}$ and $b_f = \sup\{s: f(s) < 1\}$. Assuming $F(\cdot)$ and $G(\cdot)$ are continuous, Woodroofe (1985) showed that
$$\sup_{t \ge a_G} |\hat F_n(t) - F_G(t)| \to 0 \quad \text{if } F(a_G) < 1,$$
where $F_G(t) = P(X_1 \le t \mid X_1 > a_G)$. Assuming furthermore that
$$\int \frac{dF(t)}{G(t)} < \infty,$$
he also showed that $n^{1/2}(\hat F_n(\cdot) - F_G(\cdot))$ converges weakly to a Gaussian process on $[a_G, t]$ for every $t > a_G$ with
$F(t) < 1$. Let $\tilde X_i = X_i \wedge Y_i$ and $\delta_i = I_{[X_i \le Y_i]}$, $i = 1, \ldots, n$; here we use $\wedge$ to denote minimum and $\#S$ to denote the number of elements of a set $S$. Based on these censored observations, a classical estimator of $F(\cdot)$ is the product-limit estimator $\tilde F_n(\cdot)$, introduced by Kaplan and Meier, which is defined by
$$1 - \tilde F_n(t) = \prod_{s \le t}\Big(1 - \frac{\Delta H_n(s)}{K_n(s)}\Big),$$
where
$$H_n(s) = \#\{i \le n: \tilde X_i \le s,\ \delta_i = 1\} = \sum_{i=1}^{n} I_{[X_i \le s \wedge Y_i]},$$
$$K_n(s) = \#\{i \le n: \tilde X_i \ge s\} = \sum_{i=1}^{n} I_{[X_i \wedge Y_i \ge s]}.$$
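As an illustration (mine, not the paper's), the Kaplan-Meier product-limit estimate can be computed from the censored pairs in a few lines; the function name and array conventions are my own:

```python
import numpy as np

def kaplan_meier_F(x, y):
    """Kaplan-Meier estimate of F from censored data: one observes
    x_tilde = min(x, y) and delta = 1[x <= y]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xt, d = np.minimum(x, y), x <= y
    s = np.unique(xt[d])                               # uncensored event times
    K = np.array([np.sum(xt >= v) for v in s])         # at-risk counts K_n(s)
    dH = np.array([np.sum((xt == v) & d) for v in s])  # jumps of H_n
    return s, np.cumprod(1.0 - dH / K)                 # 1 - F_tilde at each s
```

At each uncensored event time the at-risk count $K_n$ is at least the number of events there, so the ratios stay in $[0, 1]$.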
Let $\tilde G_n(\cdot)$ be the product-limit estimator of $G(\cdot)$. In order to obtain confidence bands for $F(\cdot)$, we can use Efron's approach: take a sample $X_1^{o*}, \ldots, X_m^{o*}$, say, iid $\tilde F_n(\cdot)$, and a corresponding sample $Y_1^{o*}, \ldots, Y_m^{o*}$, say, iid $\tilde G_n(\cdot)$, set $\tilde X_i^* = \min\{X_i^{o*}, Y_i^{o*}\}$ and $\delta_i^* = I_{[\tilde X_i^* = X_i^{o*}]}$, $i = 1, \ldots, m$, and consider the Kaplan-Meier estimator $\tilde F_m^*(\cdot)$ for $\tilde F_n(\cdot)$ based on $(\tilde X_i^*, \delta_i^*)$, $i = 1, \ldots, m$. In his bootstrapping study, Akritas (1986) proved that as $n$ and $m$ tend to infinity,
$$m^{1/2}(\tilde F_m^*(\cdot) - \tilde F_n(\cdot)) \Rightarrow B^0[E(\cdot)](1 - F(\cdot))$$
on $[0, t]$ for any $t < \tau = \sup\{s: H(s) < 1\}$, for almost all sample sequences $X_1, \ldots, X_n; Y_1, \ldots, Y_n$, where $\Rightarrow$ denotes convergence in distribution, $B^0(\cdot)$ denotes the Brownian bridge, $1 - H(s) = (1 - F(s))(1 - G(s))$ and $E(s) = C(s)/(1 + C(s))$ with
$$C(s) = \int_0^s \frac{I_{[0 \le \Delta\Lambda(t) < 1]}}{(1 - \Delta\Lambda(t))(1 - H(t))}\, d\Lambda(t),$$
where $\Lambda(t)$ is the cumulative hazard function corresponding to $F(t)$.
In this paper, I study the strong and weak consistency of $\hat F_n(\cdot)$. Let $X_1^*, \ldots, X_m^*$ be an iid sample from $\hat F_n(\cdot)$, let $Y_1^*, \ldots, Y_m^*$ be an iid sample from $\hat G_n(\cdot)$, and let $\hat F_m^*(\cdot)$ be the Lynden-Bell estimator of $\hat F_n(\cdot)$ based on $(X_i^*, Y_i^*)$, $i = 1, \ldots, m$. Modifying the methods of Akritas (1986), I study the bootstrapping of $\hat F_n(\cdot)$.
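For concreteness, the Lynden-Bell product-limit estimator $\hat F_n$ reviewed above can be computed directly from the truncated pairs by evaluating $\Delta L_N$ and $R_N$ at the distinct observed $X$-values. The following minimal sketch is my own illustration (not from the paper) and uses the convention $0/0 = 0$:

```python
import numpy as np

def lynden_bell_F(x, y):
    """Lynden-Bell product-limit estimate of F from left-truncated
    pairs (x_i, y_i); only pairs with y_i <= x_i are observed.
    Returns the jump points s and the survival values 1 - F_hat(s)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s = np.unique(x)                                   # jump points of L_N
    # R_N(v) = #{i: y_i <= v <= x_i},  Delta L_N(v) = #{i: x_i = v}
    R = np.array([np.sum((y <= v) & (v <= x)) for v in s])
    dL = np.array([np.sum(x == v) for v in s])
    factors = np.where(R > 0, 1.0 - dL / np.maximum(R, 1), 1.0)  # 0/0 := 0
    return s, np.cumprod(factors)                      # 1 - F_hat at each s
```

Each observed $x_i$ is a jump point at which the pair $(x_i, y_i)$ itself is at risk, so $R_N \ge \Delta L_N \ge 1$ there and the $0/0$ guard only matters at other evaluation points. ($\hat G_n$ can be computed analogously, with the product taken over $s > t$.)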
2. Main results

Let
$$D_{1N}(s) = L_N(s)/N, \qquad D_{2N}(s) = Q_N(s)/N, \qquad D_N(s) = R_N(s)/N, \quad (1)$$
$$D_1(s) = P(Y_1 \le X_1 \le s) = (1 - F(a_G)) \int_{-\infty}^{s} G(x)\, dF_G(x), \quad (2)$$
$$D_2(s) = P(Y_1 \le s,\ Y_1 \le X_1) = G(b_F) \int_{-\infty}^{s} (1 - F(x-))\, dG_F(x), \quad (3)$$
where $G_F(s) = P(Y_1 \le s \mid Y_1 \le b_F)$, and let
$$D(s) = P(Y_1 \le s \le X_1) = (1 - F(a_G))\, G(s)(1 - F_G(s-)) = G(b_F)\, G_F(s)(1 - F(s-)). \quad (4)$$
For functions $f_1(\cdot)$, $f_2(\cdot)$, let
$$\Phi(f_1, f_2)(t) = \prod_{s \le t}\Big(1 - \frac{\Delta f_1(s)}{f_2(s)}\Big) \exp\Big(-\int_{-\infty}^{t} \frac{df_1^c(s)}{f_2(s)}\Big),$$
where $f^c(\cdot)$ denotes the continuous part of a function $f(\cdot)$, and let
$$S_n(t) = 1 - \hat F_n(t) = \prod_{s \le t}\Big(1 - \frac{\Delta L_N(s)}{R_N(s)}\Big);$$
then $S_n(\cdot) = \Phi(D_{1N}, D_N)(\cdot)$. By the method of Gill (1981), we have the following lemma.

Lemma 1. Let $\rho_a(\cdot, \cdot)$ be the supremum metric on $(-\infty, a]$, and let $a > 0$ satisfy $D(a) > 0$; then $\rho_a(\Phi(D_{1N}, D_N), \Phi(D_1, D)) \to 0$ as $\max(\rho_a(D_{1N}, D_1), \rho_a(D_N, D)) \to 0$.

Proof. Let $\Lambda(t) = \int_{-\infty}^{t} D^{-1}(s)\, dD_1(s)$ and $\Lambda_N(t) = \int_{-\infty}^{t} D_N^{-1}(s)\, dD_{1N}(s)$; then
$$\Phi(D_1, D)(t) = \exp\Big\{-\Lambda(t) + \sum_{s \le t} \big(\Delta\Lambda(s) + \log(1 - \Delta\Lambda(s))\big)\Big\}.$$
We first show that $\Lambda(\cdot)$ is a continuous function of $D_1(\cdot)$ and $D(\cdot)$. In fact, for $a_G < t_\varepsilon < t$,
$$|\Lambda(t) - \Lambda_N(t)| \le \int_{-\infty}^{t_\varepsilon} D^{-1}\, dD_1 + \int_{-\infty}^{t_\varepsilon} D_N^{-1}\, dD_{1N} + \Big|\int_{t_\varepsilon}^{t} D^{-1}\, dD_1 - \int_{t_\varepsilon}^{t} D_N^{-1}\, dD_{1N}\Big| =: I_1 + I_2 + I_3.$$
If we take $0 < t_\varepsilon - a_G$ small enough, we can have
$$I_1 = \int_{a_G}^{t_\varepsilon} \frac{(1 - F(a_G))\, G(s)\, dF_G(s)}{(1 - F(a_G))\, G(s)(1 - F_G(s-))} \le \frac{1}{1 - F_G(t_\varepsilon -)} \int_{a_G}^{t_\varepsilon} dF_G(s) \le \varepsilon.$$
Since $R_N(s) = 0$ implies $\Delta L_N(s) = 0$, $I_2$ is well defined; and since $\rho_a(D_N, D) \to 0$, for large $N$ we have $D_N(s) \ge D(s)/2$ on $\{s: D_N(s) > 0\} \cap (a_G, t_\varepsilon]$. So
$$I_2 \le 2 \int_{a_G}^{t_\varepsilon} D^{-1}\, dD_{1N} \to 2 I_1 \le 2\varepsilon.$$
Now, by Helly's theorem and the fact that
$$I_3 \le \Big|\int_{t_\varepsilon}^{t} \frac{D_N(s) - D(s)}{D(s) D_N(s)}\, dD_1(s)\Big| + \Big|\int_{t_\varepsilon}^{t} \frac{dD_{1N}(s) - dD_1(s)}{D_N(s)}\Big|,$$
and that on $(t_\varepsilon, t]$, $D(s) \ge (1 - F(a_G))\, G(t_\varepsilon)(1 - F_G(t))$, we may take $0 < \varepsilon < (1 - F(a_G))\, G(t_\varepsilon)(1 - F_G(t))/2$, so that for large $N$, $D_N(s) \ge D(s) - \varepsilon$; thus
$$D(s)^{-1} D_N(s)^{-1} \le 2\, (1 - F(a_G))^{-2} G(t_\varepsilon)^{-2} (1 - F_G(t))^{-2},$$
so
$$\Big|\int_{t_\varepsilon}^{t} \frac{D_N(s) - D(s)}{D(s) D_N(s)}\, dD_1(s)\Big| \le \frac{2\, \rho_a(D_N, D)\, D_1(t)}{(1 - F(a_G))^2 G(t_\varepsilon)^2 (1 - F_G(t))^2} \to 0.$$
Let $S = \{s: D_{1N}(s) > D_1(s)\}$; then
$$\Big|\int_{t_\varepsilon}^{t} \frac{dD_{1N}(s) - dD_1(s)}{D_N(s)}\Big| \le \int_{(t_\varepsilon, t] \cap S} \frac{dD_{1N}(s) - dD_1(s)}{D(s) - \varepsilon} + \int_{(t_\varepsilon, t] \cap S^c} \frac{dD_1(s) - dD_{1N}(s)}{D(s) - \varepsilon}$$
$$\le \frac{2}{(1 - F(a_G))\, G(t_\varepsilon)(1 - F_G(t))} \Big[\int_{(t_\varepsilon, t] \cap S} \big(dD_{1N}(s) - dD_1(s)\big) + \int_{(t_\varepsilon, t] \cap S^c} \big(dD_1(s) - dD_{1N}(s)\big)\Big],$$
and each bracketed integral tends to zero as $\rho_a(D_{1N}, D_1) \to 0$; thus $I_3 \to 0$. Next we show that $\Phi$ is continuous as a function of $\Lambda(\cdot)$. Note
$$\Lambda(t) = \int_{a_G}^{t} \frac{d\big(\int_{a_G}^{s} G(x)\, dF_G(x)\big)}{G(s)(1 - F_G(s-))} \le \frac{1}{1 - F_G(t-)} \int_{a_G}^{t} dF_G(s) \le \frac{1}{1 - F_G(t-)},$$
and $\Lambda(\cdot)$ is non-decreasing with
$$\Delta\Lambda(t) = \frac{G(t)\, \Delta F_G(t)}{G(t)(1 - F_G(t-))} = \frac{F_G(t) - F_G(t-)}{1 - F_G(t-)} < 1 \quad \text{for } F_G(t) < 1,$$
so the proof is the same as Gill (1981). $\square$
Theorem 1. Suppose $a_G < b_F$, $F(a_G) < 1$ and $F_G(b_F-) < 1$; then for any $t > a_G$,
$$\sup_{t \le x \le b_F} |\hat F_n(x) - F_G(x)| \to 0, \quad \text{a.s. as } n \to \infty.$$
Proof.
Since
$$\Phi(D_1, D)(t) = \prod_{s \le t}\Big(1 - \frac{G(s)\, \Delta F_G(s)}{G(s)(1 - F_G(s-))}\Big) \exp\Big(-\int_{-\infty}^{t} \frac{dF_G^c(s)}{1 - F_G(s-)}\Big) = 1 - F_G(t)$$
as in Gill (1981), by Lemma 1 we need only prove
$$\rho_a(D_{1N}, D_1) \to 0, \quad \text{a.s.}, \quad (5)$$
$$\rho_a(D_N, D) \to 0, \quad \text{a.s.} \quad (6)$$
We only prove (5); the proof of (6) is the same. Note $E(D_{1N}) = D_1$; apply Corollary 1.3 of Alexander (1985) to the empirical measure defined by the independent random vectors $(Y_1, X_1), \ldots, (Y_N, X_N)$, noting that the $L_2$ metric entropy with bracketing of the class of sets of the form $\{Y_i \le s_1 \text{ (or } Y_i < s_1),\ X_i \ge s_2 \text{ (or } X_i > s_2)\}$ is of logarithmic order, so that for any $\varepsilon, \delta > 0$ and any $0 < r < 2$,
$$P\{\rho_a(D_{1N}, D_1) > \varepsilon\} \le O\{\exp(-\delta N^{r/(r+2)})\},$$
so (5) follows from the Borel-Cantelli lemma. $\square$
Theorem 2. Suppose $a_G < b_F$; then for any $t > a_G$,
$$\sup_{x \ge t} |\hat G_n(x) - G_F(x)| \to 0, \quad \text{a.s.}$$

Proof. For any functions $f_1(\cdot)$, $f_2(\cdot)$, let
$$\Phi'(f_1, f_2)(t) = \prod_{s > t}\Big(1 - \frac{\Delta f_1(s)}{f_2(s)}\Big) \exp\Big(-\int_{t}^{\infty} \frac{df_1^c(s)}{f_2(s)}\Big).$$
Then, for the same reason as before, we have $\Phi'(D_2, D)(t) = G_F(t)$, $\Phi'(D_{2N}, D_N)(t) = \hat G_n(t)$ and
$$\sup_{x \ge t} |\Phi'(D_{2N}, D_N)(x) - \Phi'(D_2, D)(x)| \to 0, \quad \text{a.s.},$$
since, in the same way as the proofs of (5) and (6), we have
$$\max\Big(\sup_{x \ge t} |D_{2N}(x) - D_2(x)|,\ \sup_{x \ge t} |D_N(x) - D(x)|\Big) \to 0, \quad \text{a.s.} \qquad \square$$
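Theorem 2 concerns $\hat G_n$, which can be computed analogously to $\hat F_n$: its jumps sit at the distinct $Y$-values and the product runs over $s > t$. A sketch of mine (the function name and conventions are not from the paper):

```python
import numpy as np

def lynden_bell_G(x, y):
    """Lynden-Bell product-limit estimate of G from truncated pairs
    (x_i, y_i) with y_i <= x_i: G_hat(t) = prod_{s > t} (1 - dQ(s)/R(s))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s = np.unique(y)                                   # jump points of Q_N
    R = np.array([np.sum((y <= v) & (v <= x)) for v in s])
    dQ = np.array([np.sum(y == v) for v in s])
    factors = np.where(R > 0, 1.0 - dQ / np.maximum(R, 1), 1.0)
    rev = np.cumprod(factors[::-1])[::-1]              # product over s >= v
    G = np.concatenate((rev[1:], [1.0]))               # product over s > v
    return s, G                                        # G_hat at each jump
```

Since every pair with $y_i = v$ satisfies $y_i \le v \le x_i$, we have $R_N \ge \Delta Q_N$ at each jump point, so the factors stay in $[0, 1]$ and $\hat G_n$ is non-decreasing.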
Let $\alpha = P(Y_1 \le X_1) = \int_0^{\infty} G(x)\, dF(x)$. Consistent estimators of $\alpha$ and the population size $N$ are given by Woodroofe (1985) as
$$\hat\alpha_n = \int_0^{\infty} \hat G_n(x)\, d\hat F_n(x), \qquad \hat N_n = n/\hat\alpha_n.$$
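Since $\hat F_n$ and $\hat G_n$ are step functions, Woodroofe's estimator $\hat\alpha_n$ reduces to a finite sum over the jump points of $\hat F_n$. A sketch of mine (the helper names are not the paper's; the inputs are assumed to be the jump points of $\hat F_n$, its survival values there, and $\hat G_n$ evaluated at those points):

```python
import numpy as np

def woodroofe_alpha(s, F_surv, G_at_s):
    """alpha_hat = integral of G_n dF_n: weight G_n at each jump point
    of F_n by the jump of F_n there."""
    F = 1.0 - np.asarray(F_surv, float)        # F_hat at the jump points s
    dF = np.diff(np.concatenate(([0.0], F)))   # jumps of F_hat
    return float(np.sum(np.asarray(G_at_s, float) * dF))

def woodroofe_N(n, alpha_hat):
    """n / alpha_hat estimates the untruncated population size N."""
    return n / alpha_hat
```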
Let $\mathcal{M}_0$ be the collection of all pairs of distribution functions $(F(\cdot), G(\cdot))$ with $a_G \le a_F$ and $b_G \le b_F$. Then by Theorems 1 and 2 we have the following corollary.
Corollary 1. If $(F, G) \in \mathcal{M}_0$, $F(a_G) < 1$ and $F(b_F-) < 1$, then for any $t > a_G$,
$$\sup_{t \le x \le b_F} |\hat F_n(x) - F(x)| \to 0, \quad \text{a.s.}, \qquad \sup_{x \ge t} |\hat G_n(x) - G(x)| \to 0, \quad \text{a.s.},$$
and $\hat\alpha_n \to \alpha$, a.s., $\hat N_n / N \to 1$, a.s.
Now we study the bootstrapping of $\hat F_n$ by Efron's resampling scheme, and we assume that the conditions of Theorem 2 are satisfied. Let
$$L_m^*(s) = \sum_{i=1}^{m} I_{[Y_i^* \le X_i^* \le s]}, \qquad Q_m^*(s) = \sum_{i=1}^{m} I_{[Y_i^* \le s,\ Y_i^* \le X_i^*]}, \qquad R_m^*(s) = \sum_{i=1}^{m} I_{[Y_i^* \le s \le X_i^*]},$$
and define the estimators of $\hat F_n(\cdot)$ and $\hat G_n(\cdot)$, respectively, by
$$\hat F_m^*(t) = 1 - \prod_{s \le t}\Big(1 - \frac{\Delta L_m^*(s)}{R_m^*(s)}\Big), \quad (7)$$
$$\hat G_m^*(t) = \prod_{s > t}\Big(1 - \frac{\Delta Q_m^*(s)}{R_m^*(s)}\Big). \quad (8)$$
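Drawing the resamples $X_i^*$ and $Y_i^*$ from the step functions $\hat F_n$ and $\hat G_n$ only requires sampling a discrete distribution supported on the jump points. A sketch of this step (the helper name, and the renormalisation of any mass the estimate leaves beyond its largest jump, are my own choices):

```python
import numpy as np

def sample_from_steps(s, surv, size, rng):
    """Draw iid values from the distribution whose survival function
    1 - F takes the value surv[k] at the jump point s[k]."""
    F = 1.0 - np.asarray(surv, float)
    p = np.diff(np.concatenate(([0.0], F)))    # point masses at the s[k]
    p = np.clip(p, 0.0, None)
    p = p / p.sum()                            # renormalise if F(max s) < 1
    return rng.choice(np.asarray(s, float), size=size, p=p)

rng = np.random.default_rng(0)
x_star = sample_from_steps([1.0, 2.0], [0.5, 0.0], 1000, rng)  # X* iid F_hat
```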
First we derive the central limit theorem for $\hat F_n(\cdot)$. Let $\Lambda_N(t) = \int_0^t R_N^{-1}(s)\, dL_N(s)$, $\Lambda(t) = \int_0^t (1 - F(s-))^{-1}\, dF(s)$, $M_N(t) = N^{1/2}\big(D_{1N}(t) - \int_0^t D_N(s)\, d\Lambda(s)\big)$ and $J(t) = I_{[R_N(t) > 0]}$; then we have:

Lemma 2.
$$\hat F_n(t) - F(t) = (1 - F(t))\, N^{-1/2} \int_0^t \frac{1 - \hat F_n(s-)}{1 - F(s)}\, \frac{J(s)}{D_N(s)}\, dM_N(s).$$

Proof. Since $\Lambda_N(t)$ and $\Lambda(t)$ are nondecreasing and right-continuous, with $\Lambda_N(0) = \Lambda(0) = 0$,
$$\prod_{s \le t}(1 - \Delta\Lambda(s)) \exp(-\Lambda^c(t)) = \prod_{s \le t}\Big(1 - \frac{\Delta F(s)}{1 - F(s-)}\Big) \exp\Big(-\int_0^t \frac{dF^c(s)}{1 - F(s-)}\Big) = 1 - F(t).$$
In the same way,
$$\prod_{s \le t}(1 - \Delta\Lambda_N(s)) \exp(-\Lambda_N^c(t)) = \prod_{s \le t}\Big(1 - \frac{\Delta L_N(s)}{R_N(s)}\Big) = 1 - \hat F_n(t).$$
Thus, by Proposition A.4.1 of Gill (1980),
$$Z(t) = 1 - \frac{\prod_{s \le t}(1 - \Delta\Lambda_N(s)) \exp(-\Lambda_N^c(t))}{\prod_{s \le t}(1 - \Delta\Lambda(s)) \exp(-\Lambda^c(t))}$$
is a solution of
$$Z(t) = \int_0^t \frac{1 - Z(s-)}{1 - \Delta\Lambda(s)}\, \big(d\Lambda_N(s) - d\Lambda(s)\big),$$
i.e.
$$1 - \frac{1 - \hat F_n(t)}{1 - F(t)} = \int_0^t \frac{1 - \hat F_n(s-)}{1 - F(s)}\, \big(d\Lambda_N(s) - d\Lambda(s)\big),$$
or
$$\frac{\hat F_n - F}{1 - F}(t) = N^{-1/2} \int_0^t \frac{1 - \hat F_n(s-)}{1 - F(s)}\, \frac{J(s)}{D_N(s)}\, dM_N(s). \quad (9) \qquad \square$$
Lemma 3. $M_N(t)$ is a square integrable martingale with
$$\langle M_N \rangle(t) = \int_0^t (1 - \Delta\Lambda(s))\, D_N(s)\, d\Lambda(s).$$

Proof. Let $n_i(s) = I_{[Y_i \le X_i \le s]}$, $r_i(s) = I_{[Y_i \le s \le X_i]}$ and $m_i(t) = n_i(t) - \int_0^t r_i\, d\Lambda$, $1 \le i \le N$. We show
$$\langle m_i \rangle = \int (1 - \Delta\Lambda)\, r_i\, d\Lambda, \quad (10)$$
$$\langle m_i, m_j \rangle = 0 \quad \text{for all } 1 \le i \ne j \le N; \quad (11)$$
then, since $M_N = N^{-1/2} \sum_{i=1}^{N} m_i$, the bilinearity of $\langle \cdot, \cdot \rangle$ will complete the proof of the lemma. The method is a modification of Gill (1980). Fix $t < \infty$ such that $\Lambda(t) < \infty$. Let $I_1$ and $I_2$ be disjoint subsets of $\{1, \ldots, N\}$ such that $I_1$ is nonempty; let $j_0$ be a fixed member of $I_1$, and let $I_0 = I_1 \setminus \{j_0\}$. Consider the counting process
$$v(t) = \int_{(0, t]} \prod_{j \in I_0} \Delta n_j \prod_{j \in I_2} (1 - \Delta n_j)\, dn_{j_0},$$
which counts 1 at the single time instant $t$, if it exists, for which $Y_j \le X_j = t$ for all $j \in I_1$, and for no $j \in I_2$. Define $t_{m,i} = i\, 2^{-m} t$, $i = 0, \ldots, 2^m$; $m = 1, 2, \ldots$, and define the events $B_{m,i}$ by
$$B_{m,i} = \{\forall j \in I_1:\ Y_j \le t_{m,i} < X_j \le t_{m,i+1};\ \forall j \in I_2:\ Y_j > t_{m,i} \text{ or } X_j \notin (t_{m,i}, t_{m,i+1}]\}.$$
Now we approximate the increment of $v$ over the interval $(t_{m,i}, t_{m,i+1}]$ by $I_{B_{m,i}}$. If $Y_j > X_j$ for some $j$, or $X_j \notin (t_{m,i}, t_{m,i+1}]$ for some $j \in I_1$, then $v(t_{m,i+1}) - v(t_{m,i}) = 0 = I_{B_{m,i}}$, so we only consider the case $Y_j \le X_j$ for all $j$ and $X_j \in (t_{m,i}, t_{m,i+1}]$ for all $j \in I_1$. Then
$$\{v(t_{m,i+1}) - v(t_{m,i}) = 0\} \subset \{\exists j, l \in I_1: X_j \ne X_l\} \cup \{\exists j \in I_1, l \in I_2: X_j, X_l \in (t_{m,i}, t_{m,i+1}],\ X_j = X_l\}$$
and
$$\{v(t_{m,i+1}) - v(t_{m,i}) = 1,\ I_{B_{m,i}} = 0\} \subset \bigcup_{j \in I_2} \{t_{m,i} < Y_j \le X_j \le t_{m,i+1}\},$$
so
$$\big|(v(t_{m,i+1}) - v(t_{m,i})) - I_{B_{m,i}}\big| \le \sum_{j \in I_2} I_{\{t_{m,i} < Y_j \le X_j \le t_{m,i+1}\}} + \sum_{j \ne l \in I_1} I_{\{X_j,\, X_l \in (t_{m,i}, t_{m,i+1}]\}}.$$
Let $\{\mathcal{F}_t\}$ be the family of $\sigma$-algebras to which $v$, $Y_i$, $X_i$ are adapted; then
$$E\big(I_{\{Y_j \le t_{m,i} < X_j \le t_{m,i+1}\}} \mid \mathcal{F}_{t_{m,i}}\big) = r_j(t_{m,i})\, E\big[I_{\{X_j \le t_{m,i+1}\}} \mid X_j > t_{m,i}\big] = r_j(t_{m,i})\, \frac{F(t_{m,i+1}) - F(t_{m,i})}{1 - F(t_{m,i})} = r_j(t_{m,i}) \int_{(t_{m,i}, t_{m,i+1}]} \frac{dF(s)}{1 - F(t_{m,i})}. \quad (12)$$
So
$$E\big(I_{B_{m,i}} \mid \mathcal{F}_{t_{m,i}}\big) = \prod_{j \in I_0} r_j(t_{m,i})\, \frac{F(t_{m,i+1}) - F(t_{m,i})}{1 - F(t_{m,i})} \prod_{j \in I_2}\Big(1 - r_j(t_{m,i})\, \frac{F(t_{m,i+1}) - F(t_{m,i})}{1 - F(t_{m,i})}\Big) \int_{(t_{m,i}, t_{m,i+1}]} \frac{r_{j_0}(t_{m,i})}{1 - F(t_{m,i})}\, dF(s);$$
thus
$$\sum_{i=0}^{2^m - 1} E\big(I_{B_{m,i}} \mid \mathcal{F}_{t_{m,i}}\big) = \int_0^t w_m\, dF, \quad (13)$$
where $0 \le w_m(s) \le (1 - F(t-))^{-1} < \infty$ for all $m$ and $s$, and
$$w_m(s) \xrightarrow{\ \text{a.s.}\ } \prod_{j \in I_0} (r_j \Delta\Lambda)(s) \prod_{j \in I_2} \big(1 - (r_j \Delta\Lambda)(s)\big)\, \frac{r_{j_0}(s)}{1 - F(s-)},$$
so we have
$$\sum_{i=0}^{2^m - 1} E\big(I_{B_{m,i}} \mid \mathcal{F}_{t_{m,i}}\big) \to \int_0^t \prod_{j \in I_0} (r_j \Delta\Lambda) \prod_{j \in I_2} (1 - r_j \Delta\Lambda)\, r_{j_0}\, d\Lambda.$$
For the error terms, we have, for each $j \in I_2$,
$$E\Big(\sum_{i=0}^{2^m - 1} E\big(I_{\{t_{m,i} < Y_j \le X_j \le t_{m,i+1}\}} \mid \mathcal{F}_{t_{m,i}}\big)\Big) \le P\big(0 \le X_j - Y_j \le t\, 2^{-m}\big) \to 0$$
as $m \to \infty$. Similarly, the expectations of the sums over $i$ of the conditional expectations of the other error terms tend to zero as $m$ tends to infinity. So let $a(\cdot)$ be the compensator of $v(\cdot)$; by the argument of Theorem 3.1.1 of Gill (1980), $a(\cdot)$ and
$$\int_0^t \prod_{j \in I_0} (r_j \Delta\Lambda) \prod_{j \in I_2} (1 - r_j \Delta\Lambda)\, r_{j_0}\, d\Lambda$$
are indistinguishable. Taking $I_1 = \{j\}$, $I_2 = \emptyset$, $v = n_j$ has compensator $a_j = \int r_j\, d\Lambda$; hence by Theorem 2.3 of Gill (1980),
$$\langle m_j \rangle = \int (1 - r_j \Delta\Lambda)\, r_j\, d\Lambda = \int (1 - \Delta\Lambda)\, r_j\, d\Lambda. \qquad \square$$
Note that $D_1(t) = \int_{(0, t]} D\, d\Lambda$, and define the process $M(t) = B^0(D_1(t))$, where $B^0(\cdot)$ is the Brownian bridge.

Lemma 4. If $F(\tau) < 1$, then $\|M_N - M\| \to 0$, a.s., where $\|\cdot\|$ denotes the supremum norm on $[0, \tau]$.

Proof. The proof is a modification of Lemma 1 of Akritas (1986). Let $\delta_i = I_{[Y_i \le X_i]}$ ($i = 1, \ldots, N$). Let $U_1, \ldots, U_N$ be iid $U(0, 1)$, let $U_{11}, \ldots, U_{1N_1}$ be those $U$'s that are $\le D_1(\infty)$, and let $U_{01}, \ldots, U_{0N_0}$ be those $U$'s that are $> D_1(\infty)$ ($N_0 + N_1 = N$); then the sample $(X_i, \delta_i)$, $i = 1, \ldots, N$, may equivalently be obtained by setting $X_i = D_1^{-1}(U_{1i})$, $\delta_i = 1$ for $i = 1, \ldots, N_1$, and $X_i = D_0^{-1}(U_{0i} - D_1(\infty))$, $\delta_i = 0$ for $i = N_1 + 1, \ldots, N$, where $D_0(t) = P(X_1 \le t,\ X_1 < Y_1)$. Next, set $\Gamma_N(u) = N^{1/2}\big(N^{-1} \sum_{i=1}^{N} (I_{[U_i \le u]} - u)\big)$, $0 \le u \le 1$, and let $\Gamma$ be the Brownian bridge that satisfies $\|\Gamma_N - \Gamma\| \to 0$ a.s. (Skorohod construction). Now note that $M_N(t) = \Gamma_N(D_1(t))$; set $B^0(D_1(t)) = \Gamma(D_1(t))$; then following the arguments of Shorack (1982, Theorem 4.1) we have
$$\|M_N - B^0(D_1)\| = \|\Gamma_N(D_1) - \Gamma(D_1)\| \to 0, \quad \text{a.s.} \qquad \square$$
Theorem 3. Let $(F, G) \in \mathcal{M}_0$, $\int G^{-1}\, dF < \infty$, and let $\tau$ satisfy $F(\tau) < 1$; then
$$N^{1/2}(\hat F_n - F)(\cdot) \Rightarrow B(K(\cdot))(1 - F(\cdot)) \quad \text{on } [0, \tau],$$
where $B(\cdot)$ denotes standard Brownian motion, $F_-(s) = F(s-)$, and
$$K(t) = \int_0^t \frac{dF}{(1 - F)(1 - F_-)\, G}.$$

Proof. For $t < \tau$, let
$$Z_N(t) = N^{1/2}\, \frac{\hat F_n - F}{1 - F}(t) = \int_0^t \frac{1 - \hat F_n(s-)}{1 - F(s)}\, \frac{J(s)}{D_N(s)}\, dM_N(s);$$
then $Z_N(t)$ is a square integrable martingale with
$$\langle Z_N \rangle(t) = \int_0^t \Big(\frac{1 - \hat F_n(s-)}{1 - F(s)}\Big)^2\, \frac{J(s)}{D_N(s)}\, (1 - \Delta\Lambda(s))\, d\Lambda(s).$$
Let $V_N = (1 - F)^{-1}(1 - \hat F_{n-})\, J\, D_N^{-1}$, let $b > \sup_{0 \le s \le \tau} (1 - F(s))^{-1}(1 - F(s-))\, D^{-1}(s)$, and let $\underline{J}_N = I_{[V_N \le b]}$; define
$$\underline{Z}_N(t) = \int_0^t \underline{J}_N\, V_N\, dM_N, \qquad \bar Z_N(t) = \int_0^t (1 - \underline{J}_N)\, V_N\, dM_N;$$
then $Z_N = \underline{Z}_N + \bar Z_N$, and $\underline{Z}_N$, $\bar Z_N$ are both square integrable martingales. If $F$ is continuous, then by Lemma 4,
$$\|\Delta \underline{Z}_N\| \le b\, \|\Delta M_N\| \to 0, \quad \text{a.s.}$$
Also,
$$\langle \bar Z_N \rangle(t) = \int_0^t (1 - \underline{J}_N)\, V_N^2\, (1 - \Delta\Lambda)\, D_N\, d\Lambda \to 0, \quad \text{a.s.},$$
since $\underline{J}_N \to 1$, a.s. on $[0, \tau]$, and
$$\langle \underline{Z}_N \rangle(t) \to \int_0^t \frac{d\Lambda}{D} = \int_0^t \frac{dF}{(1 - F)(1 - F_-)\, G} = K(t).$$
So the conclusion follows by Theorem 4.1 of Gill (1980) in the case $F$ is continuous. If $F$ is not continuous, a treatment similar to Akritas (1986) yields the same results. $\square$

Corollary 2. $\|M_m^* - M\| \to 0$, a.s. on $[0, \tau]$ as $m$ and $n \to \infty$.

Proof. By the same arguments as in Lemma 4, $M_m^*(t) = \Gamma_m(D_{1N}(t))$, and
$$\|M_m^* - B^0(D_1)\| \le \|\Gamma_m(D_{1N}) - \Gamma(D_{1N})\| + \|\Gamma(D_{1N}) - \Gamma(D_1)\| \to 0,$$
conditionally almost surely. $\square$
Theorem 4. Under the conditions of Theorem 3, we have
$$m^{1/2}(\hat F_m^* - \hat F_n)(\cdot) \Rightarrow B(K(\cdot))(1 - F(\cdot)) \quad \text{on } [0, \tau]$$
conditionally almost surely, as $m$ and $n \to \infty$.
Proof. Let $J_m^* = I_{[R_m^* > 0]}$, $\Lambda_n(t) = \int_0^t (1 - \hat F_n(s-))^{-1}\, d\hat F_n(s)$, $D_{1m}^*(s) = L_m^*(s)/m$, $D_m^*(s) = R_m^*(s)/m$ and $M_m^*(t) = m^{1/2}\big(D_{1m}^*(t) - \int_0^t D_m^*\, d\Lambda_n\big)$; in the same way as before we have, for $t \le \tau$,
$$Z_m^*(t) := m^{1/2}\, \frac{\hat F_m^* - \hat F_n}{1 - \hat F_n}(t) = \int_0^t \frac{1 - \hat F_m^*(s-)}{1 - \hat F_n(s)}\, \frac{J_m^*(s)}{D_m^*(s)}\, dM_m^*(s),$$
and $M_m^*$ is a square integrable martingale with
$$\langle M_m^* \rangle(t) = \int_0^t (1 - \Delta\Lambda_n)\, D_m^*\, d\Lambda_n$$
and
$$\langle Z_m^* \rangle(t) = \int_0^t \Big(\frac{1 - \hat F_m^*(s-)}{1 - \hat F_n(s)}\Big)^2\, \frac{J_m^*(s)}{D_m^*(s)}\, (1 - \Delta\Lambda_n(s))\, d\Lambda_n(s),$$
so the proof is similar to that of Theorem 3. $\square$
Also, a direct application of Theorems 1 and 2 of Doss and Gill (1992) yields the following result for the quantile processes $\{\hat F_n^{-1}(\cdot)\}$ and $\{\hat F_m^{*-1}(\cdot)\}$.

Theorem 5. Under the conditions of Theorem 3, assume that $F$ has a positive continuous density $f$; then we have
$$n^{1/2}(\hat F_n^{-1} - F^{-1})(t) \Rightarrow -B\big(K(F^{-1}(t))\big)(1 - t)\, f^{-1}(F^{-1}(t)) \quad \text{on } [0, F(\tau)]$$
and
$$m^{1/2}(\hat F_m^{*-1} - \hat F_n^{-1})(t) \Rightarrow -B\big(K(F^{-1}(t))\big)(1 - t)\, f^{-1}(F^{-1}(t)) \quad \text{on } [0, F(\tau)]$$
conditionally in probability. Theorems 4 and 5 can be used to construct confidence bands for $F(t)$ and $F^{-1}(t)$, respectively, in the same way as Doss and Gill (1992).
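To illustrate how Theorem 4 can be turned into a confidence band, one may resample pairs, recompute the Lynden-Bell estimate, and take a bootstrap quantile of $\sup_s |\hat F_m^*(s) - \hat F_n(s)|$ as the half-width. The sketch below is schematic Monte Carlo code of mine, not the paper's procedure; for brevity it resamples from the empirical distributions of the observed values rather than from $\hat F_n$ and $\hat G_n$ themselves, and discards resampled pairs violating the truncation constraint:

```python
import numpy as np

def lb_surv(x, y, grid):
    """Lynden-Bell survival estimate 1 - F_hat evaluated on a grid."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s = np.unique(x)
    if s.size == 0:
        return np.ones_like(grid, dtype=float)
    R = np.array([np.sum((y <= v) & (v <= x)) for v in s])
    dL = np.array([np.sum(x == v) for v in s])
    surv = np.cumprod(np.where(R > 0, 1.0 - dL / np.maximum(R, 1), 1.0))
    # step-function evaluation: value at the last jump point <= t
    idx = np.searchsorted(s, grid, side="right") - 1
    return np.where(idx >= 0, surv[np.clip(idx, 0, None)], 1.0)

def bootstrap_band(x, y, grid, B=200, level=0.95, seed=0):
    """Half-width of a sup-norm bootstrap band for F_hat on `grid`."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    base = lb_surv(x, y, grid)
    n, sups = x.size, np.empty(B)
    for b in range(B):
        xs = rng.choice(x, size=n, replace=True)   # X* resampled
        ys = rng.choice(y, size=n, replace=True)   # Y* resampled
        keep = ys <= xs                            # truncation constraint
        sups[b] = np.max(np.abs(lb_surv(xs[keep], ys[keep], grid) - base))
    return float(np.quantile(sups, level))
```

The band $\hat F_n \pm$ half-width is then clipped to $[0, 1]$; its asymptotic justification on $[0, \tau]$ is exactly the content of Theorem 4.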
References

Akritas, M.G. (1986), Bootstrapping the Kaplan-Meier estimator, J. Amer. Statist. Assoc. 81, 1032-1038.
Alexander, K.S. (1984), Probability inequalities for empirical processes and a law of the iterated logarithm, Ann. Probab. 12, 1041-1067.
Alexander, K.S. (1985), Rates of growth for weighted empirical processes, Proc. Berkeley Conf. in Honor of Jerzy Neyman and Jack Kiefer, Vol. II, pp. 475-493.
Bickel, P.J. and D.A. Freedman (1981), Some asymptotic theory for the bootstrap, Ann. Statist. 9, 1196-1217.
Doss, H. and R.D. Gill (1992), An elementary approach to weak convergence for quantile processes, with applications to censored survival data, J. Amer. Statist. Assoc. 87, 869-877.
Gill, R.D. (1980), Censoring and Stochastic Integrals, Mathematical Centre Tract 124 (Mathematisch Centrum, Amsterdam).
Gill, R.D. (1981), Testing with replacement and the product limit estimator, Ann. Statist. 9, 853-860.
Lai, T.L. and Z. Ying (1991), Estimating a distribution function with truncated and censored data, Ann. Statist. 19, 417-442.
Lynden-Bell, D. (1971), A method of allowing for known observational selection in small samples applied to 3CR quasars, Monthly Notices Roy. Astron. Soc. 155, 95-118.
Shorack, G.R. (1982), Bootstrapping robust regression, Comm. Statist. Theory Methods 11, 961-972.
Woodroofe, M. (1985), Estimating a distribution function with truncated data, Ann. Statist. 13, 163-177.