Journal of Statistical Planning and Inference 136 (2006) 1572–1587

Copulas, degenerate distributions and quantile tests in competing risk problems

Tim Bedford

University of Strathclyde, Department of Management Science, Strathclyde Business School, 40 George Street, Glasgow G1 1QE, Scotland

Received 8 July 2004; accepted 15 October 2004; available online 15 August 2005

Abstract

It is well known that, given observable data for a competing risk problem, there is always an independent model consistent with the data. It has been pointed out, however, that this independent model does not necessarily have to be one with proper marginals. One purpose of this paper is to explore the extent to which one might try to use the non-parametric assumption that the marginals are proper in order to test whether or not independence holds. This leads naturally to a closely related estimation problem: how to estimate the marginals given that a certain quantile of one variable is reached at the same time as a given quantile of the other variable. The problem is considered using the copula-based approach of Zheng and Klein. Two different methods are discussed. One is a non-parametric maximum likelihood method. The other is a consistent estimator, called the bilinear adjustment estimator, that can be computed quickly and thus lends itself more readily to simulation methods such as bootstrapping. The ultimate objective of the work in this paper is to provide an additional analytical tool to understand the effectiveness of preventive maintenance. Preventive maintenance censors component failure data and thus provides an important example of competing risk within the reliability area.

Keywords: Competing risk; Right censoring; Copula; Constrained estimator; Reliability; Preventive maintenance

1. Introduction

Data arising in reliability analysis are often "dirty". One of the problems that arises is that the failure data one is really interested in have been censored in some way. The way we interpret those data depends critically on what we assume about the censoring.


Conversely, there may well be a practical interpretation to the censoring. For example, if failure times are censored by preventive maintenance then asset managers will be interested in whether that maintenance is effective (positively correlated with failure times) or ineffective (independent of failure times).

The competing risk approach models a sequence of i.i.d. lifetime variables Y_1, Y_2, .... Each observable Y is considered to be the minimum of two variables X and Z, so Y = min(X, Z). In addition, we observe which of X and Z was equal to Y and denote this information by an indicator variable. Hence the observable data consist of pairs (t_1, I_1), (t_2, I_2), ..., (t_n, I_n). The observable data allow us to estimate the sub-survival functions
$$S_X^*(t) = P(X > t,\ X < Z) \quad\text{and}\quad S_Z^*(t) = P(Z > t,\ Z < X),$$

but not the true distribution functions of X and Z, F_X and F_Z. Hence we are not able to estimate the underlying failure distribution for X without making additional, non-testable, model assumptions. A characterization of those distributions for X that are possible for given subsurvival functions was given in Bedford and Meilijson (1997). The usual assumption made is that the competing variables are statistically independent. It is well known that under this assumption the marginals of X and Z can be identified, and are estimated by the Kaplan–Meier (KM) estimator. This was generalized to other forms of dependency, defined in terms of copulae, in Zheng and Klein (1995).
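Before turning to copulae, it may help to see how the estimable quantities look in practice. The following minimal sketch (our own illustration, not from the paper; the function name and the use of numpy are our choices) computes empirical sub-survival functions from observed pairs (t_i, I_i):

```python
import numpy as np

def empirical_sub_survival(times, indicators, t_grid):
    """Empirical sub-survival functions S*_X(t) = P(X > t, X < Z) and
    S*_Z(t) = P(Z > t, Z < X) estimated from observed pairs (t_i, I_i),
    where indicator 1 means an X was observed and 2 means a Z was observed."""
    times = np.asarray(times, dtype=float)
    indicators = np.asarray(indicators)
    sx = np.array([np.mean((times > t) & (indicators == 1)) for t in t_grid])
    sz = np.array([np.mean((times > t) & (indicators == 2)) for t in t_grid])
    return sx, sz

# Example with the first few observations of Table 1
times = [0.025, 0.055, 0.071, 0.158, 0.253]
indicators = [1, 1, 2, 1, 2]
print(empirical_sub_survival(times, indicators, t_grid=[0.0, 0.1, 0.2]))
```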

2. Copulae and the copula-graphical estimator

The copula of two random variables (X, Z) is the distribution of (F_X(X), F_Z(Z)), which is supported on the unit square. The independent copula is the uniform distribution on the unit square, and is the one that is obtained for independent variables. However, any distribution on the unit square that has uniform marginals is a copula. Zheng and Klein showed the general result that, if the copula of (X, Z) is known, then the marginal distribution functions of X and Z are uniquely determined by the competing risk data. This result is captured in the following theorem.

Theorem 2.1. Suppose the marginal distribution functions of (X, Z) are continuous and strictly increasing on (0, ∞). Suppose the copula C is known and the corresponding probability measure of any open set of the unit square is positive. Then F_X and F_Z, the marginal distribution functions of X and Z, are uniquely determined by the sub-distribution functions.

This result includes the "usual" result that the distribution functions are identifiable when the variables are independent. (Sometimes the condition that F_X and F_Z are strictly increasing on (0, ∞) is not satisfied. If X and Z have a finite upper bound, say F_X is strictly increasing on (0, t_1) and F_Z is strictly increasing on (0, t_2) with F_X(t_1) = 1 and F_Z(t_2) = 1, then F_X and F_Z are uniquely determined on (0, min(t_1, t_2)).)


Fig. 1. The transformation to the copula: the point (x, z) on the diagonal x = z is mapped to (u, v) = (F_X(x), F_Z(z)) in the unit square.

We shall now try to explain both why this theorem holds and why we can get degenerate marginals. (The theorem assumes that the marginals are proper, and goes on to identify them. However, in practice we are not guaranteed that such marginals will exist for given copula and subsurvival functions.) The issue of degenerate marginals was raised in van der Weide and Bedford (1998). Writing F_X^*(t) = P(X ≤ t, X < Z) = P(X < Z) − S_X^*(t) for the sub-distribution function of X, a straightforward calculation gives
$$F_X^*(t) = F_X(t) - \int_0^t C_u(F_X(x), F_Z(x))\, f_X(x)\, \mathrm{d}x,$$

and an analogous formula holds for F_Z^*. From this formula it follows that the marginal distribution functions F_X and F_Z are solutions of the following system of ordinary differential equations:
$$\{1 - C_u(F_X(t), F_Z(t))\}\,\frac{\mathrm{d}}{\mathrm{d}t}F_X(t) = \frac{\mathrm{d}}{\mathrm{d}t}F_X^*(t),$$
$$\{1 - C_v(F_X(t), F_Z(t))\}\,\frac{\mathrm{d}}{\mathrm{d}t}F_Z(t) = \frac{\mathrm{d}}{\mathrm{d}t}F_Z^*(t),$$
$$F_X(0) = F_Z(0) = 0,$$

where C_u(F_X(t), F_Z(t)) and C_v(F_X(t), F_Z(t)) denote the first order partial derivatives (∂/∂u)C(u, v) and (∂/∂v)C(u, v) evaluated at (F_X(t), F_Z(t)). The solution to this system of differential equations is a line in the unit square. This line, together with its parameterization by time, is equal to the image of the diagonal (t, t) (t > 0) under the transformation (x, z) ↦ (u, v) = (F_X(x), F_Z(z)). Fig. 1 shows the relation between the diagonal and its image in the unit square. The time parameter of the image of the diagonal enables us to determine the time corresponding to each value of u and v and hence to determine the transformations F_X^{-1} and F_Z^{-1}. This obviously determines F_X and F_Z. An estimation method to obtain them from data is given in Zheng and Klein (1995).

The copula-graphical estimator is a non-parametric maximum likelihood estimator. This estimator uses the identity
$$F_X(t) + F_Z(s) = C(F_X(t), F_Z(s)) + F_X^*(t) + F_Z^*(s)$$
(see Fig. 2) and can be numerically implemented as follows. Suppose our data consist of points (t_1, I_1), (t_2, I_2), ..., (t_n, I_n), and define t_0 = 0. Our estimators F̂_X and F̂_Z are step functions such that F̂_X(t_0) = F̂_Z(t_0) = 0, and the functions are constant except possibly at the times t_i.


Fig. 2. Illustration of the identity used in the copula-graphical estimator: regions of the unit square around the point (F_X(t), F_Z(s)) with areas F_X(t), F_Z(s), C(F_X(t), F_Z(s)), F_X^*(t) and F_Z^*(s).

Fig. 3. Copula-graphical estimator example.

F̂_X jumps at those times for which an X was observed, and F̂_Z jumps at those times for which a Z was observed. The heights of the jumps are determined iteratively for i = 1, ..., n by numerically solving the equation
$$\hat F_X(t_i) + \hat F_Z(t_i) = C(\hat F_X(t_i), \hat F_Z(t_i)) + \hat F_X^*(t_i) + \hat F_Z^*(t_i)$$
for either F̂_X(t_i) or F̂_Z(t_i) (depending on which function jumps at t_i). The algorithm recursively determines the points (F̂_X(t_i), F̂_Z(t_i)), i = 1, ..., n. Fig. 3 shows the operation of the algorithm with the independent copula for the example data of 20 observations given in Table 1. The smallest observation is of an X, or more formally of the event {X = 0.025, Z > 0.025}.


Table 1
Ordered data (failure type 1 = X observed, 2 = Z observed)

Time          0.025  0.055  0.071  0.158  0.253  0.445  0.546  0.546  0.551  0.582
Failure type  1      1      2      1      2      1      2      2      2      1

Time          0.583  0.880  1.211  1.220  1.254  1.260  1.473  1.627  2.035  3.223
Failure type  1      2      2      1      1      2      2      2      1      1

Starting from (0, 0) we move up to find the value of ε so that 1/20 of the mass on the square is in the rectangle with corners (0, 0), (0, ε), (1, 0) and (1, ε). This gives ε = 1/20, our estimate of F̂_X(0.025). The current point on the square is now (0, 1/20). The second observation is also of an X, and starting from the current point we move up until 1/20 of the mass is in a rectangle. This gives a new current point of (0, 2/20), and estimates F̂_X(0.055) as 2/20. The next observation is of a Z. Moving from the current point we now find a vertically oriented rectangle with mass 1/20. This is the rectangle with corners (0.1, 0), (0.1, 0.05556), (1, 0.05556) and (1, 0). The new current point is (0.1, 0.05556) and our estimate of F̂_Z(0.071) is 0.05556. The algorithm proceeds in this way until all the estimates have been made.
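For readers who prefer code, here is a minimal sketch of the copula-graphical estimator specialised to the independent copula (our own implementation, not the author's; under independence each strip of mass 1/n can be found in closed form, so the general numerical solution of the identity is not needed). Applied to the Table 1 data it reproduces the first steps of the worked example above.

```python
def copula_graphical_independent(times, indicators):
    """Copula-graphical estimator for the independent copula.

    times, indicators: ordered observations (t_i, I_i), with I_i = 1 for an X
    observation and 2 for a Z observation.  Returns the step-function values
    (t_i, Fx_hat(t_i), Fz_hat(t_i)).  Under independence each strip of mass
    1/n can be solved in closed form."""
    n = len(times)
    u = v = 0.0                       # current point (Fx_hat, Fz_hat)
    path = []
    for t, ind in zip(times, indicators):
        if ind == 1:                  # X observed: vertical strip of mass 1/n
            u = u + (1.0 / n) / (1.0 - v)
        else:                         # Z observed: horizontal strip of mass 1/n
            v = v + (1.0 / n) / (1.0 - u)
        path.append((t, u, v))
    return path

# Table 1 data (failure type 1 = X, 2 = Z)
times = [0.025, 0.055, 0.071, 0.158, 0.253, 0.445, 0.546, 0.546, 0.551, 0.582,
         0.583, 0.880, 1.211, 1.220, 1.254, 1.260, 1.473, 1.627, 2.035, 3.223]
inds  = [1, 1, 2, 1, 2, 1, 2, 2, 2, 1,
         1, 2, 2, 1, 1, 2, 2, 2, 1, 1]
for t, fx, fz in copula_graphical_independent(times, inds)[:3]:
    print(f"t={t:.3f}  Fx={fx:.4f}  Fz={fz:.4f}")
# the first three steps reproduce 1/20, 2/20 and 0.05556 from the worked example
```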

3. Defective marginals

From Fig. 1 it is possible to see what might go wrong when using the independent model (or any other copula). There is no particular reason why the solution to the differential equation, with starting point (0, 0), should go up to the top right-hand corner (1, 1). If that does not occur then it must actually run up to one of the sides of the square. The meaning of this is that one of the variables in the competing risk model is defective, i.e. there is a positive probability that it will be infinite. For most applications of competing risk models, the idea that the component lifetime might be infinite does not seem very plausible. Hence the independent competing risk model may be rather less widely applicable than we think.

Another way to see how degenerate marginals can arise for the independent competing risk model is through the formula given by Tsiatis (1975),
$$S_X(t) = \exp\left(\int_0^t \frac{\mathrm{d}S_X^*(s)/\mathrm{d}s}{S_X^*(s) + S_Z^*(s)}\,\mathrm{d}s\right),$$
where S_X denotes the survivor function of X. In principle, the integral in this expression might not diverge as t → ∞; in that case S_X(t) has a positive limit, which corresponds to a degenerate marginal (this formula was used in van der Weide and Bedford (1998) to produce examples). This phenomenon can often be seen (although not proven to exist) when using the KM estimator. This is an estimator of the survivor function of a competing risk variable, and hence is a non-increasing function taking the value 1 at 0. In general, the estimator does not reach the value 0 for large values of t, this being partly due to the fact that the last observation may have been of the other variable. However, in practice we may see that the KM estimator seems to be converging nicely pointwise but without looking as if it will descend to 0. This is consistent with the underlying distribution being defective, that is, with the existence of an atom at infinity.
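As a concrete numerical illustration (our own example, not taken from the paper), take the subsurvivor pair S*_X(t) = 0.5 e^{-2t} and S*_Z(t) = e^{-t} - 0.5 e^{-2t}. The sketch below evaluates the Tsiatis formula for this pair and shows that the integral stays bounded, so the independence assumption would force an atom at infinity for X:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative subsurvivor pair (an assumed example, not from the paper):
# S*_X(t) = 0.5*exp(-2t),  S*_Z(t) = exp(-t) - 0.5*exp(-2t)
def dSx_star(s):            # derivative of S*_X
    return -np.exp(-2.0 * s)

def S_Y(s):                 # S*_X(s) + S*_Z(s) = P(Y > s)
    return np.exp(-s)

def tsiatis_marginal_survivor(t):
    """Survivor function of X implied by the independence assumption."""
    integral, _ = quad(lambda s: dSx_star(s) / S_Y(s), 0.0, t)
    return np.exp(integral)

for t in (1.0, 5.0, 20.0, 100.0):
    print(t, tsiatis_marginal_survivor(t))
# The values level off near exp(-1) ~ 0.368: the implied marginal of X is
# defective, with an atom of mass about 0.37 at infinity.
```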


Of course, it might simply be that we have not observed enough data to be able to estimate the tail. In a parametric setting, where we effectively make an assumption about how the tail will depend on the rest of the data, we might have enough evidence to say that there is an atom at infinity, provided we had thought to include the possibility of an atom at infinity in our parametric model (which, of course, is rarely done). In a non-parametric setting one might want to test the hypothesis of independence under the assumption that the marginals are proper. However, this is difficult, as defectiveness is a property of the tails of the subsurvivor functions, and proper and improper models can be found arbitrarily close to one another.

Theorem 3.1. (1) Given a pair of continuously differentiable subsurvivor functions which give rise to a defective marginal distribution under the independence assumption, there are pairs of continuously differentiable subsurvivor functions which are arbitrarily close (in the induced C^0 topology) which give rise to proper marginals under independence.
(2) Given a pair of continuously differentiable subsurvivor functions which give rise to proper marginal distributions under the independence assumption, there are pairs of continuously differentiable subsurvivor functions which are arbitrarily close (in the C^0 topology) which give rise to a defective marginal under independence.

Proof. The proof can be made easily if we consider that the differential equation construction of the marginals from the subsurvivors and the copula can be reversed. Suppose that a copula is given (the independent one, for example), and that we specify a smooth function d from the non-negative reals to the unit square such that d is increasing in both coordinates, d(0) = (0, 0), and at least one of the coordinates of d(t) converges to 1 as t → ∞. We can then define two subsurvivor functions as follows:
$$S_X^*(t) = \int_{u=d_1(t)}^{1} \int_{v=d_2(d_1^{-1}(u))}^{1} \mathrm{d}v\, \mathrm{d}u,$$
$$S_Z^*(t) = (1 - d_1(t))(1 - d_2(t)) - S_X^*(t).$$

The construction is illustrated in Fig. 4. It is straightforward to check that, for this pair of subsurvivor functions, d is the solution to the differential equation discussed above. Hence the marginal survivor functions corresponding to the subsurvivor functions under the given copula are simply
$$S_X(t) = 1 - d_1(t), \qquad S_Z(t) = 1 - d_2(t).$$

Now we can prove the two statements in the theorem.
(1) Suppose that X has the defective distribution. In this case we have d_2(t) → 1 and d_1(t) → d^* < 1 as t → ∞. Make a change to d_1(t) for large t (greater than some t^*, say) so that it increases to 1. The corresponding subdistribution functions for the new d are unchanged for t < t^*. This implies that the subdistribution functions for the new d are close to the original ones in the C^0 sense, and by construction the marginals are proper.


Fig. 4. Constructing sub-survivor pairs: regions of area S_X^*(t) and S_Z^*(t) relative to the point (d_1(t), d_2(t)) on the curve d.

Since the change can be made at arbitrarily large t^*, we are able to find subdistribution functions with proper marginals arbitrarily close to the original subdistribution functions.
(2) In this case we make a small change to d_1(t) for large t so that its limit is less than 1. The subdistribution functions for the new d are close to the original ones, but the marginal of X is defective. □
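The reversed construction used in this proof is straightforward to compute. The sketch below (our own illustration; the particular curve d is an assumed example) builds a subsurvivor pair for the independent copula from a curve whose first coordinate has limit 0.7, so that the implied marginal of X is defective:

```python
import numpy as np
from scipy.integrate import quad

# An assumed illustrative curve d(t) = (d1(t), d2(t)) for the independent copula;
# d1 has limit 0.7 < 1, so the implied marginal of X is defective.
def d1(t): return 0.7 * (1.0 - np.exp(-t))
def d2(t): return 1.0 - np.exp(-t)
def d1_inv(u): return -np.log(1.0 - u / 0.7)   # inverse of d1 on [0, 0.7)

def sub_survivors(t):
    """S*_X(t) and S*_Z(t) constructed from the curve d (independent copula).
    u-values at or beyond the limit of d1 contribute nothing to the integral."""
    sx, _ = quad(lambda u: 1.0 - d2(d1_inv(u)), d1(t), 0.7)
    sz = (1.0 - d1(t)) * (1.0 - d2(t)) - sx
    return sx, sz

for t in (0.0, 1.0, 3.0):
    print(t, sub_survivors(t))
# Closed forms for this particular curve: S*_X(t) = 0.35 exp(-2t),
# S*_Z(t) = 0.3 exp(-t) + 0.35 exp(-2t).  The implied marginal survivor of X
# is 1 - d1(t) -> 0.3, i.e. X has an atom of mass 0.3 at infinity.
```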

4. Estimation constrained on quantile information

In the above, we have seen that small perturbations at high enough t values in the subsurvivor functions can change an independent model from having defective to having proper marginals. For this reason it is not possible to test the independence assumption directly under the assumption of proper marginals. On the other hand, for most practical modelling situations, very large times (near to infinity) are usually numbers that do not have a credible physical interpretation. For example, if we are looking at the reliability of a pacemaker subject to random replacements, then although our reliability model may cover times larger than 1000 years (in fact any time t > 0), we may find that the patients have died by then. The question of what is too large a number to be relevant for a particular application is obviously very application dependent.

Because of this it is appropriate to look at high quantiles of the model. A degenerate marginal means that there is a quantile of one marginal which is not reached before all quantiles of the other variable have been reached. For practical applications the modeller might believe that some particular quantile of one variable should occur beyond a particular quantile of the other variable. If this did not happen then the dependence assumption might be questioned. This leads us naturally to the question of whether we can build constrained estimators for the two marginals so that given quantiles occur at the same time.


Fig. 5. Constrained image of the diagonal, passing through the quantile point (q_X, q_Z).

Such constrained estimators could be used in hypothesis testing. Note that constrained estimators for single variables have been studied for some time (see Thomas and Grunkemeier, 1975).

The problem has a natural interpretation in the copula. It asks if we can make an adjustment of the copula-graphical estimator to ensure that we meet the quantile constraints. Recall that in building the copula-graphical estimator we construct an estimator of the image of the diagonal in the copula space. If we want to build estimators for the two marginals that simultaneously pass through given quantile points, then this corresponds to the image of the diagonal in the copula space passing through the point whose coordinates are those quantiles; see Fig. 5. There is a natural constraint that we have to take account of: the probability mass of the quadrant in the copula whose south-west point is (q_X, q_Z) is equal to the probability that both X and Z (and hence Y) are greater than the common time point at which they reach the quantile constraints. For the independent copula this probability is (1 − q_X)(1 − q_Z). Hence the proportion of observations of Y above the (as yet undetermined) time should be (1 − q_X)(1 − q_Z) in the independent model. In order to construct an estimator we therefore have to divide the data into two parts: the first 1 − (1 − q_X)(1 − q_Z) portion of the ordered Y observations, and the remainder. The conditional estimator for the data above the quantile values is not a problem, as the copula-graphical estimator can be applied. For the lower part of the data we have a problem. If we applied the copula-graphical estimator to the lower part of the data then there is no reason for it to meet the quantile constraints, for the corner point of the L-shape produced by the algorithm does not need to be (q_X, q_Z); see Fig. 6. Hence we have to find an alternative way of ensuring that the quantile constraint is met. We propose two different methods of solving this problem. One is theoretically satisfactory and gives a non-parametric maximum likelihood estimator. The other method is a consistent estimator that can be computed much more quickly, in particular for non-independent copulae, and is therefore more suitable for simulation-based statistical methods.


Fig. 6. Estimator not meeting the constraints: the corner point produced by the copula-graphical estimator differs from the required corner point (q_X, q_Z).

4.1. Maximum likelihood estimation

In this part we shall only deal with the independent copula. Other copulae are much more complex, due to the fact that the probabilities of the different events have to be evaluated using integration. For the independent copula the probability is just the same as the area for the events we are looking at.

As remarked above, the problem in finding a constrained estimator is to ensure that the values of the estimator at times below that corresponding to the constrained quantiles are found in such a way that both the X and Z estimators can simultaneously meet the requirement. For the non-parametric maximum likelihood estimator we follow the original estimator of Zheng and Klein, in the sense of building horizontal or vertical strips in the copula corresponding to the ordered observations made. We now have to ensure that these are chosen with maximum likelihood given that they end up forming a corner point at (q_X, q_Z). This can be formulated as an optimization problem.

In order to state it simply we have to introduce some notation. Firstly, the copula variable corresponding to X will be denoted by u, and that corresponding to Z by v. In other words, U = F_X(X) and V = F_Z(Z). An ordered data set of Y observations will be denoted by y_1, ..., y_n. Within this set of observations there are n_X observations of X, namely x_1, ..., x_{n_X}, and n_Z observations of Z, namely z_1, ..., z_{n_Z}. In order to keep track of how the observations are interspersed with each other in the data set we define the X-position of a Z observation, xpos(i), to be the index of the X observation immediately before z_i. The Z-position of an X observation, zpos(i), is defined similarly. If there was no previous observation the position is defined as 0. For example, if the ordered observations are x_1, z_1, x_2, x_3, z_2, z_3 then we have
xpos(1) = 1, xpos(2) = 3, xpos(3) = 3,
zpos(1) = 0, zpos(2) = 1, zpos(3) = 1.
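A small sketch (our own helper with hypothetical names) showing how these position indices can be computed from the ordered indicator sequence:

```python
def positions(indicators):
    """Given the ordered indicator sequence (1 = X observed, 2 = Z observed),
    return (xpos, zpos): xpos[i] is the number of X observations before the
    (i+1)-th Z observation (0 if there are none), and vice versa for zpos."""
    xpos, zpos = [], []
    n_x = n_z = 0
    for ind in indicators:
        if ind == 1:
            zpos.append(n_z)   # Z observations seen before this X
            n_x += 1
        else:
            xpos.append(n_x)   # X observations seen before this Z
            n_z += 1
    return xpos, zpos

# The example sequence x1, z1, x2, x3, z2, z3 from the text:
print(positions([1, 2, 1, 1, 2, 2]))   # -> ([1, 3, 3], [0, 1, 1])
```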


Fig. 7. Partition example: the unit square partitioned by the values u_1, u_2, u_3 and v_1, v_2, v_3.

In order to find the marginal estimators we use a partition of the unit square with the same geometric form as that used by the copula-graphical estimator, but we adjust the widths of the partition elements to ensure that the constraint is met while maximizing the likelihood. The partition for the above example is shown in the solid lines of Fig. 7. In order to see how to proceed further we now formulate the optimization problem to determine the constrained estimator. For this we have to choose a sequence of values u_1, ..., u_{n_X} and v_1, ..., v_{n_Z} between 0 and 1 defining a partition that maximizes the expected likelihood.

The partition element of the copula corresponding to observation x_i is the vertical strip with lower corner points at (u_{i-1}, v_{zpos(i)}) and (u_i, v_{zpos(i)}). Hence its probability (area) is equal to (u_i − u_{i-1})(1 − v_{zpos(i)}). Similarly, the partition element of the copula corresponding to observation z_i is the horizontal strip with left corner points at (u_{xpos(i)}, v_{i-1}) and (u_{xpos(i)}, v_i). Hence its probability (area) is equal to (1 − u_{xpos(i)})(v_i − v_{i-1}).

It will be slightly more convenient to introduce new variables for the differences between the u_i and v_j. Define du_i = u_i − u_{i-1} (with u_0 = 0 by convention) and dv_i = v_i − v_{i-1} (with v_0 = 0). These new variables are simply required to be non-negative. Then the probability of the vertical strip mentioned above is
$$p_{x,i} = \mathrm{d}u_i \Bigl(1 - \sum_{j \le zpos(i)} \mathrm{d}v_j\Bigr).$$


Similarly, the probability of the horizontal strip corresponding to the ith Z observation is
$$p_{z,i} = \mathrm{d}v_i \Bigl(1 - \sum_{j \le xpos(i)} \mathrm{d}u_j\Bigr).$$

The optimization problem that finds the maximum likelihood partition for the non-constrained estimator is simply
$$\max \sum_{i=1}^{n_X} \ln p_{x,i} + \sum_{i=1}^{n_Z} \ln p_{z,i}$$
$$\text{s.t.}\quad \sum_{i=1}^{n_X} \mathrm{d}u_i = 1,\qquad \sum_{i=1}^{n_Z} \mathrm{d}v_i = 1,\qquad \mathrm{d}u_i, \mathrm{d}v_j \ge 0.$$

Since we know that the copula-graphical estimator gives each strip the same mass of 1/n, and since this is the optimum value of the problem stated, we obtain confirmation that the copula-graphical estimator is a non-parametric maximum likelihood estimator.

If there are quantile constraints then we have to add in two conditions. Let
$$i_X = \max\{i : i/n < 1 - (1 - q_X)(1 - q_Z),\ Y_i = X_i\},$$
$$i_Z = \max\{i : i/n < 1 - (1 - q_X)(1 - q_Z),\ Y_i = Z_i\}.$$
These give the indices of the last X and the last Z observations below (roughly) the {1 − (1 − q_X)(1 − q_Z)} quantile of the empirical Y distribution. The new optimization problem is as follows:
$$\max \sum_{i=1}^{n_X} \ln p_{x,i} + \sum_{i=1}^{n_Z} \ln p_{z,i}$$
$$\text{s.t.}\quad \sum_{i=1}^{n_X} \mathrm{d}u_i = 1,\qquad \sum_{i=1}^{n_Z} \mathrm{d}v_i = 1,\qquad \sum_{i=1}^{i_X} \mathrm{d}u_i = q_X,\qquad \sum_{i=1}^{i_Z} \mathrm{d}v_i = q_Z,\qquad \mathrm{d}u_i, \mathrm{d}v_j \ge 0.$$
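For concreteness, the objective of this problem is easy to evaluate for the independent copula. The sketch below (our own code; the function name and array layout are our choices) computes the log-likelihood of a given tiling; a general-purpose routine such as scipy.optimize.minimize (SLSQP, say) can then be applied to its negative under the linear constraints above, starting from the feasible point supplied by the bilinear adjustment estimator of the next section.

```python
import numpy as np

def tiling_log_likelihood(du, dv, zpos, xpos):
    """Log-likelihood of a tiling of the unit square under the independent copula.

    du[i] = u_{i+1} - u_i   (widths of the vertical strips, one per X observation)
    dv[i] = v_{i+1} - v_i   (heights of the horizontal strips, one per Z observation)
    zpos[i] = number of Z observations preceding the (i+1)-th X observation
    xpos[i] = number of X observations preceding the (i+1)-th Z observation
    """
    du, dv = np.asarray(du, float), np.asarray(dv, float)
    v = np.concatenate(([0.0], np.cumsum(dv)))      # v_0, v_1, ..., v_{nZ}
    u = np.concatenate(([0.0], np.cumsum(du)))      # u_0, u_1, ..., u_{nX}
    p_x = du * (1.0 - v[zpos])                      # areas of the vertical strips
    p_z = dv * (1.0 - u[xpos])                      # areas of the horizontal strips
    return np.sum(np.log(p_x)) + np.sum(np.log(p_z))

# Example: the x1, z1, x2, x3, z2, z3 sequence from the text, with an
# arbitrary (assumed) tiling du = dv = (0.2, 0.2, 0.2)
print(tiling_log_likelihood([0.2, 0.2, 0.2], [0.2, 0.2, 0.2],
                            zpos=[0, 1, 1], xpos=[1, 3, 3]))
```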


The solution to this optimization problem gives us a new tiling of the copula space and enables us to construct marginal estimators that meet the quantile requirements at the same time. To see this, suppose for definiteness that i_X > i_Z, which means that the last Y observation before proportion 1 − (1 − q_X)(1 − q_Z) was actually an X. The constrained copula-graphical estimator for F_X will take the value q_X on the interval [u_{i_X}, u_{i_X+1}), while that for F_Z will take the value q_Z on the interval [v_{i_Z}, v_{i_Z+1}). By definition though, v_{i_Z} < u_{i_X} < v_{i_Z+1}. Hence at time u_{i_X} we have F̂_X(u_{i_X}) = q_X and F̂_Z(u_{i_X}) = q_Z. The optimization problem is easily seen to be concave as the objective function is a sum of concave terms. It can therefore be solved using gradient methods. A feasible solution (though not usually the optimal feasible solution) is provided by the estimator of the next section.

5. Bi-linear adjustment estimation

The optimization problem discussed above clearly gives us a solution that is theoretically most appropriate for the independent copula. However, as discussed above, the solution to this type of problem can be very difficult, either because of the large number of variables involved or because the copula is not independent. This motivates the need to find a simpler solution that can be applied more easily in practice.

The solution proposed here is that of bi-linear adjustment. As we saw in Fig. 6, the problem can be construed geometrically as the problem of transforming one L-shaped figure onto another. The constraints that such a transformation should have are:
• The origin is preserved.
• The point (1, 1) is preserved.
• Vertical lines are mapped to vertical lines, and horizontal lines are mapped to horizontal lines.
This last constraint ensures that we retain a tiling of the form used in the copula-graphical estimator, and implies that the transformation should be of the form (u, v) ↦ (φ(u), ψ(v)) for some functions φ, ψ. The simplest choice for φ and ψ is a bilinear (piecewise linear) function. Given that 0 and 1 are to be preserved, we simply have to specify what the image should be of the change-point of the function. The algorithm proposed here is (see the sketch following this list):
(1) Construct the unconstrained copula-graphical estimator.
(2) Determine the indices i_X and i_Z as above.
(3) For φ choose the bilinear map that maps Σ_{i=1}^{i_X} du_i onto q_X, and for ψ that which maps Σ_{i=1}^{i_Z} dv_i onto q_Z.
(4) Transform the tiling determined for the copula-graphical estimator by the bi-linear maps to a new tiling.
(5) Construct the constrained copula-graphical estimators from the new tiling.
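A minimal sketch of this algorithm for the independent copula (our own code, not the author's implementation; it reuses the closed-form strips from the estimator sketch in Section 2, and q_X, q_Z are the target quantiles for X and Z):

```python
def bilinear_adjusted_estimator(times, indicators, qx, qz):
    """Bilinear adjustment estimator sketch for the independent copula.
    indicators: 1 = X observed, 2 = Z observed (ordered data).
    Returns (t_i, Fx_hat(t_i), Fz_hat(t_i)) with the q_X quantile of X and
    the q_Z quantile of Z reached at the same time."""
    n = len(times)
    # Step 1: unconstrained copula-graphical estimator (independent copula).
    u = v = 0.0
    path = []
    for t, ind in zip(times, indicators):
        if ind == 1:
            u += (1.0 / n) / (1.0 - v)
        else:
            v += (1.0 / n) / (1.0 - u)
        path.append((t, u, v))
    # Step 2: last X and last Z observation below the 1-(1-qx)(1-qz) quantile.
    cut = 1.0 - (1.0 - qx) * (1.0 - qz)
    below = [k for k in range(n) if (k + 1) / n < cut]
    u_break = max((path[k][1] for k in below if indicators[k] == 1), default=0.0)
    v_break = max((path[k][2] for k in below if indicators[k] == 2), default=0.0)
    # Step 3: bilinear maps fixing 0 and 1, sending the break values to qx, qz.
    def phi(x, brk, q):
        if x <= brk:
            return q * x / brk if brk > 0 else 0.0
        return q + (1.0 - q) * (x - brk) / (1.0 - brk)
    # Steps 4-5: transform the tiling and read off the adjusted estimators.
    return [(t, phi(uu, u_break, qx), phi(vv, v_break, qz)) for t, uu, vv in path]

# e.g. bilinear_adjusted_estimator(times, inds, qx=0.4, qz=0.75) with the
# Table 1 data gives constrained estimates of F_X and F_Z.
```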


We remark that this algorithm is extremely simple to apply and therefore lends itself well to situations, such as bootstrapping, where we want to make computations repeatedly. Furthermore, since this method always gives us a tiling that satisfies the constraints of the optimization problem discussed above, it actually provides us with a feasible solution from which we can apply gradient methods, or others, as required.

Theorem 5.1. The bilinear adjustment estimator is consistent.

Proof. If the quantile constraints hold for the population from which the data are drawn, then by consistency of the copula-graphical estimator the copula-graphical estimators converge (in the usual sense) to the true distribution functions. Hence the image of the diagonal converges to the true image of the diagonal. Therefore the transformation required to adjust the estimator to make it meet the constraints converges to the identity. This implies that the estimate of the image of the diagonal produced by the copula-graphical estimator and that produced by the bilinear adjustment estimator converge to each other, so consistency of the copula-graphical estimator implies consistency of the bilinear adjustment estimator. □

6. Hypothesis testing

The first important reference to the use of constrained estimators for hypothesis testing is Thomas and Grunkemeier (1975). A recent review can be found in Barber and Jennison (2003). It is not the primary purpose of this paper to explore the numerical behaviour of the estimators we have developed. However, we do want to discuss the ways in which these estimators could be applied. The two techniques that we discuss are the likelihood ratio and bootstrapping.

6.1. Likelihood ratio

The likelihood ratio test was first applied to right-censoring applications in Thomas and Grunkemeier (1975). The LR statistic is computed by taking the difference of the log-likelihoods for the constrained and unconstrained estimators. The log-likelihoods can be computed quite simply in both cases (as the sum of the logarithms of the areas of the partition elements). Typically, the limiting distribution for the LR test is chi-squared. This is an asymptotic result, though, and more modern numerical techniques are available that might produce better estimates for smaller sample sizes.
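A small sketch of this computation (our own code; the choice of two degrees of freedom, reflecting the two quantile constraints, is an assumption rather than a result from the paper, and the log-likelihoods could be obtained from the tiling_log_likelihood helper sketched earlier):

```python
from scipy.stats import chi2

def lr_test(loglik_unconstrained, loglik_constrained, df=2):
    """Likelihood ratio statistic and asymptotic chi-squared p-value for the
    constrained versus unconstrained tilings."""
    lr = 2.0 * (loglik_unconstrained - loglik_constrained)
    return lr, chi2.sf(lr, df)

print(lr_test(-35.2, -37.9))   # illustrative numbers only
```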

6.2. Bootstrapping

Simulation methods such as bootstrapping provide a ready alternative to asymptotic methods. However, they are computationally intensive and as such require fast estimation methods. The bilinear adjustment estimator has clear potential for such a technique.
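A sketch of such a bootstrap loop (our own illustration; bilinear_adjusted_estimator refers to the sketch in Section 5, and the choice of statistic is an assumption):

```python
import numpy as np

def bootstrap_statistics(times, indicators, qx, qz, statistic, n_boot=1000, seed=0):
    """Resample the (t_i, I_i) pairs with replacement and recompute a chosen
    statistic of the bilinear-adjusted estimate for each resample."""
    rng = np.random.default_rng(seed)
    times = np.asarray(times, float)
    indicators = np.asarray(indicators)
    stats = []
    for _ in range(n_boot):
        idx = np.sort(rng.integers(0, len(times), size=len(times)))  # keep time order
        path = bilinear_adjusted_estimator(times[idx], indicators[idx], qx, qz)
        stats.append(statistic(path))
    return np.array(stats)

# e.g. the bootstrap distribution of the estimated time at which F_X reaches q_X:
# bootstrap_statistics(times, inds, 0.4, 0.75,
#                      statistic=lambda path: min(t for t, fx, _ in path if fx >= 0.4))
```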


7. Numerical examples

We have taken a data set with 20 observations, given in Table 1. Fig. 3 shows the tiling that is obtained when generating the copula-graphical estimator. Now suppose that we want to find the constrained estimator such that the 75% quantile for Z occurs at the same time as the 40% quantile for X. The probability of an observation of Y above that time is (1 − 0.75) × (1 − 0.4) = 0.15, so out of 20 observations we should expect 3 to be above that time. Fig. 8 shows the tiling produced by the bilinear adjustment. In order to find the constrained maximum likelihood estimator, the optimization problem described above was solved numerically; the bilinear adjustment estimator gives a feasible starting point. The solution is shown in Fig. 9. The marginal estimators produced by the three methods (unconstrained, constrained maximum likelihood and bilinear adjustment) are shown in Figs. 10 and 11.

Fig. 8. Partition example: tiling produced by the bilinear adjustment.

Fig. 9. Partition example: tiling produced by the constrained maximum likelihood estimator.


Fig. 10. Estimates of the marginal for X (unconstrained, constrained MLE and bilinear (BL) adjustment estimators).

Fig. 11. Estimates of the marginal for Z (unconstrained, constrained MLE and bilinear (BL) adjustment estimators).

8. Conclusions

We have presented two ways to determine constrained non-parametric estimators ensuring that the copula-graphical estimators for the two competing risk variables achieve given quantiles at the same time. The maximum likelihood method is theoretically the best, but may be difficult to compute, especially when the copula is not independent. The bilinear adjustment estimator is simple to calculate and provides a good starting point for numerical solution of the maximum likelihood optimization problem. The motivation for considering such a constrained estimator arises from the observation that the marginal distributions (for given subsurvivor functions and copula) could be degenerate under the assumption of a given copula. Although a test of the copula based on degeneracy is not possible, we have shown that hypotheses about quantiles of the marginals can be used to explore the appropriateness of different underlying copulae. One possible application for this work is in understanding the quality of preventive maintenance being applied in a plant.


Acknowledgements

I would like to thank John Quigley, Sangita Kulathinal, Cornel Bunea, Hans van der Weide and Catalina Mesina for their helpful conversations about this subject.

References

Barber, S., Jennison, C., 2003. A review of inferential methods for the Kaplan–Meier estimator. Preprint, University of Bristol.
Bedford, T., Meilijson, I., 1997. A characterisation of marginal distributions of (possibly dependent) lifetime variables which right censor each other. Ann. Statist. 25, 1622–1645.
Thomas, D.R., Grunkemeier, G.L., 1975. Confidence interval estimation of survival probabilities for censored data. J. Amer. Statist. Assoc. 70, 865–871.
Tsiatis, A., 1975. A non-identifiability aspect in the problem of competing risks. Proc. Nat. Acad. Sci. USA 72, 20–22.
van der Weide, J.A.M., Bedford, T., 1998. Competing risks and eternal life. In: Lydersen, S., Hansen, G.K., Sandtorv, H.A. (Eds.), Safety and Reliability: Proceedings of ESREL'98, vol. 2. Balkema, Rotterdam, pp. 1359–1364.
Zheng, M., Klein, J.P., 1995. Estimates of marginal survival for dependent competing risks based on an assumed copula. Biometrika 82, 127–138.