Computer Methods and Programs in Biomedicine 129 (2016) 63–70
Covariate adjustment of cumulative incidence functions for competing risks data using inverse probability of treatment weighting

Anke Neumann, Cécile Billionnet

French National Health Insurance Fund for Salaried Employees (CNAMTS), Department of Public Health Studies, Avenue du Professeur André Lemierre 50, 75020 Paris, France

Article history: Received 2 October 2015; Received in revised form 14 January 2016; Accepted 4 March 2016

Keywords: Competing risks; Multiple-outcome data; Cumulative incidence function; Subdistribution function; Inverse probability of treatment weighting; Propensity score

Abstract

In observational studies without random assignment of the treatment, the unadjusted comparison between treatment groups may be misleading due to confounding. One method to adjust for measured confounders is inverse probability of treatment weighting. This method can also be used in the analysis of time to event data with competing risks. Competing risks arise if for some individuals the event of interest is precluded by a different type of event occurring before, or if only the earliest of several times to event, corresponding to different event types, is observed or is of interest. In the presence of competing risks, time to event data are often characterized by cumulative incidence functions, one for each event type of interest. We describe the use of inverse probability of treatment weighting to create adjusted cumulative incidence functions. This method is equivalent to direct standardization when the weight model is saturated. No assumptions about the form of the cumulative incidence functions are required. The method allows studying associations between treatment and the different types of event under study, while focusing on the earliest event only. We present a SAS macro implementing this method and we provide a worked example.

© 2016 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

In observational studies without random assignment of the treatment, the unadjusted comparison between treatment groups may be misleading due to confounding. One method to adjust for measured confounders is inverse probability of treatment weighting [1,2]. This general approach has been applied in many settings, e.g., to adjust survival curves for time to event data [3].

In time to event analysis, competing risks may be a concern. They arise if for some individuals the event of interest is precluded by a different type of event occurring before, or if only the earliest of several times to event, corresponding to different event types, is observed or is of interest. In the presence of competing risks, the data may be characterized by the cumulative incidence functions, one for each event type. In this paper, we describe the use of inverse probability of treatment weighting to create adjusted cumulative incidence functions, we present a SAS macro implementing this method and provide a worked example.

2. Methodology

2.1. An example of competing risks

As a motivating example, we consider a study on the long-term outcome of patients with early-stage Hodgkin's disease. The data are from Pintilie's book on competing risks [4]. They were chosen to illustrate statistical issues that arise in practice and not for purposes of drawing medical conclusions. The data set contains 865 patients treated for Hodgkin's disease between 1968 and 1986 at the Princess Margaret Cancer Center in Toronto [5]. All patients had early stage disease (I or II) and were treated either with radiation (only) or with radiation and chemotherapy. The recorded event types were first relapse, second malignancy (the first malignancy after diagnosis of Hodgkin's disease) and death. Time to event was calculated from the date of diagnosis (in days). The data set contains the following covariates: age (in years), sex, size of mediastinum involvement (no, small or large), extranodal disease (yes or no), and clinical stage (I or II).

Clearly, death is a competing event when interest is in relapse or second malignancy. On the other hand, in real life, relapse and second malignancy do not preclude later death, and relapse does not exclude a later second malignancy, and vice versa. Nevertheless, all three types of event must be considered as competing with each other when only the event occurring first, i.e., the composite event, is under study. That is, for instance, death occurring after relapse or second malignancy is not of interest to the underlying research question. If, on the contrary, interest were in mortality regardless of relapse and second malignancy, the competing risk framework would not be helpful. In Section 4, we will be interested in the event occurring first, both in its type and in its time to event.

2.2. Cumulative incidence functions in competing risk analysis

Survival analysis deals with the time elapsed from an initiating event, e.g., the onset of some disease, to a well-defined event. Let T denote the random variable representing the time to event of interest, considered as continuous. It is usually characterized by the survivor function, or survival, defined as the probability S(t) := P(T > t), or, equivalently, by the distribution function F(t) := P(T ≤ t) = 1 − S(t). If the event was observed for each individual in the study population, then F(t) can be estimated as the relative frequency of times to event less than or equal to t. However, in clinical research practice, time to event is most often observed only for a fraction of the individuals included in the study. For all others, what is known is that the time to event is greater than the observation time. This is referred to as censoring. Fortunately, F(t) can also be estimated in the presence of censored observations, but under the condition that censoring is ‘independent’, that is, the average risk of event among individuals not censored is the same as what the average risk would be among all individuals if no censoring occurred. In this sense, an individual censored at time t should be representative of those still ‘at risk’ (i.e., uncensored and having not yet experienced the event) at that time. Concretely, under independent censoring, F(t) may be estimated by 1 minus the Kaplan–Meier estimator Ŝ(t). The Kaplan–Meier estimator at time t is a product with one factor for each observed time to event up to t. For time s, this factor is 1 minus the observed proportion of individuals who experienced the event at time s among all individuals still ‘at risk’ at time s.

These definitions can be extended to the competing risk situation where the event may be of any one of m ≥ 2 distinct types or causes denoted by J [6]. The cumulative incidence function, or subdistribution, for an event of type j (j = 1, 2, . . ., m) is defined as the joint probability Fj(t) := P(T ≤ t, J = j). Analogously, the subsurvivor function is defined as Sj(t) := P(T > t, J = j). Alternatively, T may be considered as the earliest of m latent, possibly unobserved, event times T1, T2, . . ., Tm, one for each of the m event types: T = min(T1, T2, . . ., Tm). This has been called the latent failure time approach to competing risks. In this setting, Fj(t) is the probability that the event of type j occurs up to time t and this before any other type of event, or, in other words, the probability that the earliest event occurs up to time t and this event is of type j. Note that Fj(t) depends not only on Tj, the time to event for type j, but also on the times to event for all other event types. The cumulative incidence function Fj can take values only up to P(J = j), typically smaller than 1, hence the term ‘subdistribution’. Note that the overall distribution function is equal to the sum of the cumulative incidence functions over all event types:

F(t) = P(T ≤ t) = Σ_{j=1}^{m} P(T ≤ t, J = j) = Σ_{j=1}^{m} Fj(t).

Fj(t) can be estimated by a sum with one term for each observed time to event up to t. For time s, this term is the estimated probability of being free of any event just before time s (i.e., the Kaplan–Meier estimator at the preceding observed time to event) multiplied by the observed proportion of individuals who experienced the event of type j at time s among all individuals still ‘at risk’ at time s. Note that 1 minus the Kaplan–Meier estimator calculated from events of type j, while treating the events of all other types as ‘independent’ censoring, is an upwards biased estimate of Fj. As a consequence, summing up the estimated cumulative incidence (1 minus the estimated survival) over the m event types yields a value which is larger than the estimated cumulative incidence for the composite event. The inappropriateness of this ‘naïve’ Kaplan–Meier estimator for competing risks data is explained in greater detail in the literature [6–8].
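To restate the two estimators just described in formulas, write n(s) for the number of individuals still ‘at risk’ just before an observed event time s, d(s) for the number of events of any type observed at s, and dj(s) for the number of events of type j observed at s; this notation is ours and serves only to summarize the verbal description above. A sketch in LaTeX notation:

  \hat{S}(t)   = \prod_{s \le t} \Bigl( 1 - \frac{d(s)}{n(s)} \Bigr), \qquad
  \hat{F}_j(t) = \sum_{s \le t} \hat{S}(s^{-}) \, \frac{d_j(s)}{n(s)},

where the product and the sum run over the observed event times s up to t, and \hat{S}(s^{-}) denotes the Kaplan–Meier estimate just before s. Summing \hat{F}_j(t) over j recovers 1 minus the Kaplan–Meier estimator for the composite event.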

2.3. Inverse probability of treatment weighting

The method of inverse probability of treatment weighting can be used to adjust for measured confounding [1,2]. Let X be the random variable denoting treatment and Z the covariate vector. With inverse probability of treatment weighting, each individual i is assigned a weight wi, which is equal to the inverse of the probability of receiving his or her own treatment xi conditional on the observed covariate vector zi, i.e., wi = 1/P(X = xi | Z = zi). Usually, the weights are estimated from the data, most often using a logistic regression model. In practice, the weights wi may be highly variable, with very large values for small values of P(X = xi | Z = zi), and in some settings stabilization is sought. It may be achieved by multiplying the weight by the marginal probability of receiving the observed treatment, leading to stabilized weights swi = P(X = xi)/P(X = xi | Z = zi), where P(X = xi) can be estimated by the frequency of treatment xi. Weighting creates a pseudo-population in which the probability of being treated does not depend on the measured covariates. In Table 1, this is illustrated for our example using the disease stage as the only covariate for inverse probability of treatment weighting. Finally, the parameter or function of interest is estimated from this pseudo-population, most often by using standard statistical methods. However, variance estimation must account for the weighted nature of the pseudo-population. Cole and Hernán [9] discuss in detail the construction of weights, including weight stabilization and weight truncation, within the more general context of a time-varying treatment variable and with time-varying covariates.

Note that the weights wi are closely related to the propensity score, which was originally defined as P(X = 1 | Z = zi) for a binary treatment variable with values 1 (active treatment) and 0 (control) [10,11]. We point out that for a given individual i the weight wi depends on her/his treatment xi, whereas the propensity score P(X = 1 | Z = zi) does not. When the weight model is correctly specified, using estimates of the weights rather than the true weights has been shown to reduce the variance of the estimator of interest (see e.g. [1]). This is because both systematic and chance imbalances between treatment groups are corrected. Assessing whether the weight model has been adequately specified involves a comparison of the treatment groups in the pseudo-population, in particular by checking the value of the standardized difference for all covariates [11,12].

Table 1 – Illustration of inverse probability of treatment weighting by a numeric example. In the pseudo-population created by weighting, treatment is unrelated to disease stage.

Disease stage I (zi = 1)
  Group of patients                     Number of patients   Weight wi            Stabilized weight swi      Number weighted by wi   Number weighted by swi
  Radiation (xi = 1)                    266                  1/(266/296) = 1.11   1.11 × 0.71 = 0.79         1.11 × 266 = 296        0.79 × 266 = 210.8
  Chemotherapy and radiation (xi = 2)   30                   1/(30/296) = 9.87    9.87 × (1 − 0.71) = 2.84   9.87 × 30 = 296         2.84 × 30 = 85.2
  Total                                 296                                                                  592                     296
  Proportion of radiation               90%                                                                  50%                     71%

Disease stage II (zi = 2)
  Group of patients                     Number of patients   Weight wi            Stabilized weight swi      Number weighted by wi   Number weighted by swi
  Radiation (xi = 1)                    350                  1/(350/569) = 1.63   1.63 × 0.71 = 1.16         1.63 × 350 = 569        1.16 × 350 = 405.2
  Chemotherapy and radiation (xi = 2)   219                  1/(219/569) = 2.60   2.60 × (1 − 0.71) = 0.75   2.60 × 219 = 569        0.75 × 219 = 163.8
  Total                                 569                                                                  1138                    569
  Proportion of radiation               62%                                                                  50%                     71%
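As an illustration of the weight calculation described in this section, the following SAS sketch fits a logistic weight model and derives non-stabilized and stabilized weights. It is only a sketch under assumptions: it is not the code of the supplied macros, and the data set and variable names (hd, cmt, age, sex, mediast, extranod, clinstg) are hypothetical, with cmt coded 1 for radiation and chemotherapy and 0 for radiation only.

/* Hypothetical sketch: estimate P(X = xi | Z = zi) by logistic regression   */
/* and derive non-stabilized (w) and stabilized (sw) weights.                */
proc logistic data=hd;
   class sex mediast extranod clinstg / param=ref;
   model cmt(event='1') = age sex mediast extranod clinstg;
   output out=ps p=p_cmt;                /* p_cmt = estimated P(cmt = 1 | Z) */
run;

proc sql noprint;                        /* marginal probability P(cmt = 1)  */
   select avg(cmt) into :p_marg from hd;
quit;

data weights;
   set ps;
   if cmt = 1 then do;
      w  = 1 / p_cmt;                    /* non-stabilized weight            */
      sw = &p_marg / p_cmt;              /* stabilized weight                */
   end;
   else do;
      w  = 1 / (1 - p_cmt);
      sw = (1 - &p_marg) / (1 - p_cmt);
   end;
run;

Balance in the resulting pseudo-population can then be checked by comparing the weighted covariate distributions of the two treatment groups, e.g., via standardized differences as discussed above.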

2.4. Adjusted cumulative incidence functions with inverse probability of treatment weighting

The application of inverse probability of treatment weighting to the adjustment of cumulative incidence functions is straightforward. First, the weights are calculated as indicated in Section 2.3. Second, for each treatment x and each type of event j, the cumulative incidence function for the event of type j is estimated according to Section 2.2 from the weighted sub-population of all individuals having received treatment x. Confidence bands can be estimated using the bootstrap, with newly calculated weights for each bootstrap replication. Note that non-stabilized and stabilized weights result in the same estimates of the adjusted cumulative incidence functions. This is because for a given treatment x stabilization consists in multiplying the non-stabilized weight by a constant value, which does not modify the cumulative incidence. When the weight model is saturated (i.e., the estimator of the probability P(X = x | Z = z) is identical to the frequency of treatment x observed among all individuals with the covariate vector z, for all x and z), this method is equivalent to direct standardization of the cumulative incidence functions to the overall study population. In this sense, the cumulative incidence functions estimated for treatment x represent the experience of the entire study population had all individuals received treatment x. For the graphical representation, the cumulative incidence functions may be grouped per treatment or per event type.
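To make explicit how the weights enter the estimation, the counts in the estimators of Section 2.2 can be replaced by weighted counts; the following display is a sketch in our own notation, not a reproduction of the macro code. For a treatment group x, let n^w_x(s) be the sum of the weights of the individuals of group x still ‘at risk’ just before s, d^w_x(s) the sum of the weights of those experiencing an event of any type at s, and d^w_{j,x}(s) the sum of the weights of those experiencing an event of type j at s. Then

  \hat{S}^{w}_{x}(t)   = \prod_{s \le t} \Bigl( 1 - \frac{d^{w}_{x}(s)}{n^{w}_{x}(s)} \Bigr), \qquad
  \hat{F}^{w}_{j,x}(t) = \sum_{s \le t} \hat{S}^{w}_{x}(s^{-}) \, \frac{d^{w}_{j,x}(s)}{n^{w}_{x}(s)}

are the adjusted cumulative incidence functions, one for each treatment x and event type j. Replacing the non-stabilized weights by the stabilized ones multiplies all weighted sums within group x by the same constant and therefore, as noted above, leaves these estimates unchanged.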

3. Implementation with SAS

SAS code which implements the presented method is provided in the Supplementary Material. It is organized in three SAS macros briefly described in the following. The SAS Institute supplies the autocall macro %CumIncid (version 9.2 and later) [13] for estimating the unadjusted cumulative incidence function for a given type of event. We generalized the estimation to weighted observations and named the resulting macro %CumIncid2. All modifications are explicitly described in comments in the SAS code. The macro %CumIncidIPTW calculates the adjusted cumulative incidence functions. First, the (non-stabilized) weights are calculated. Second, for each type of event the macro %CumIncid2 is executed using the calculated weights and with the treatment variable as stratum variable. That is, the cumulative incidence functions are estimated separately for each treatment x based on the weighted sub-population of all individuals having received the treatment x. Third, in order to obtain bootstrap confidence bands, this estimation procedure is repeated for each bootstrap replication. The weights are newly calculated for each bootstrap replication. Finally, the macro %CumIncidIPTWPlot can be used to display the estimated cumulative incidence functions, grouped together either by treatment or by type of event. The invocation of the macros %CumIncidIPTW and %CumIncidIPTWPlot is illustrated (a) at the end of the SAS code for artificial data and (b) in Appendix A for the data of the worked example.
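To illustrate the bootstrap step described above, each replication resamples individuals with replacement and then re-runs the weight estimation and the weighted estimation of the cumulative incidence functions on the resampled data. A minimal, hypothetical sketch of the resampling step (the data set name hd is assumed as in the earlier sketch; the supplied macros organize this loop internally):

/* One bootstrap replication: draw a sample of the same size with            */
/* replacement; weights and cumulative incidence functions are then          */
/* re-estimated on the resampled data set.                                   */
proc surveyselect data=hd out=hd_boot seed=20160304
                  method=urs samprate=1 outhits;
run;

The REPS= option of PROC SURVEYSELECT can generate all replications at once, but the weights must be re-estimated within each replicate, as emphasized above.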

4. Illustration by a worked example

In this section, we illustrate the proposed method by analyzing the Hodgkin's disease study data described before. Except for age and sex, all covariates were related to treatment (Table 2): patients treated with radiation and chemotherapy more often had mediastinum involvement, extranodal disease and clinical stage II. Among the 616 patients treated with radiation, 58% had at least one of the three events considered. For the 249 patients treated with chemotherapy and radiation, this proportion was 49%.

Table 2 – Patient characteristics.

Covariate                                  All patients   Initial population                                                   Pseudo-population (non-stabilized weights)
                                           (n = 865)      Radiation     Radiation and    P value*   Standardized difference**   Radiation     Radiation and    Standardized difference**
                                                          (n = 616)     chemotherapy                                            (n = 872.1)   chemotherapy
                                                                        (n = 249)                                                             (n = 853.7)
Mean age, in years (standard deviation)    35.3 (15.5)    35.9 (16.4)   33.8 (12.9)      0.5320     0.15                        35.2 (19.2)   35.7 (24.7)      −0.02
Men (%)                                    54%            54%           53%              0.8472     0.01                        55.8%         53.5%            0.04
Extranodal disease (%)                     9%             5%            20%              <0.0001    −0.48                       9.7%          9.4%             0.01
Size of mediastinum involvement (%)                                                      <0.0001
  Large                                    13%            4%            36%                         −0.89                       13.3%         13.6%            0.01
  Small                                    33%            34%           31%                         0.07                        31.9%         32.6%            0.01
  No                                       54%            62%           33%                         0.61                        54.8%         53.8%            −0.02
Clinical stage (%)                                                                       <0.0001
  I                                        34%            43%           12%                         0.74                        33.3%         33.6%            0.01
  II                                       66%            57%           88%                         −0.74                       66.7%         66.4%            −0.01

* P values are based on the Wilcoxon test for age and on the Chi-square test for the other covariates.
** Standardized difference according to Austin [11].


Fig. 1 – Crude cumulative incidence functions for treatment with radiation (A) and radiation and chemotherapy (B), respectively. 95% confidence bands are shown as dotted lines.

The crude cumulative incidence functions are shown in Fig. 1, one for each treatment group and each type of event. For example, for radiation the crude cumulative incidence at 15 years from diagnosis is 37%, 2%, and 10% for relapse, second malignancy, and death, respectively. Summing up these three values yields 49%, the crude cumulative incidence for the composite event. The functions in Fig. 1 were obtained using the SAS autocall macro %CumIncid. For comparison, we calculated the naïve estimators, i.e., 1 minus the Kaplan–Meier estimator based only on the considered event type and treating the other types of event as independent censoring. For radiation, we obtained at 15 years from diagnosis 38%, 4%, and 15% for relapse, second malignancy, and death, respectively, summing up to 57%, which is noticeably higher than the crude cumulative incidence for the composite event (49%).

We applied inverse probability of treatment weighting to adjust the cumulative incidence functions for age, sex, size of mediastinum involvement, extranodal disease, and clinical disease stage. The invocation of the SAS macro %CumIncidIPTW is detailed in Appendix A. The non-stabilized weights varied between 1.06 and 15.6, with a mean value of 2.00. Note that the mean value of the non-stabilized weights is close to the number of treatment groups under study, which corresponds to the requirement that the mean value of the stabilized weights be close to one [9]. In the resulting pseudo-population, the standardized difference between the two treatment groups was less than 0.1 for all covariates (Table 2), a value that has been taken by some authors to indicate a negligible difference in the mean (see reference in Ref. [11] and further discussion in Ref. [12]).
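The remark above that the mean of the non-stabilized weights should be close to the number of treatment groups follows from a short calculation (a sketch, assuming the weight model is correct and m treatment groups):

  E[w] = E\Bigl[ \sum_{x} \frac{1\{X = x\}}{P(X = x \mid Z)} \Bigr]
       = \sum_{x} E\Bigl[ \frac{E[1\{X = x\} \mid Z]}{P(X = x \mid Z)} \Bigr]
       = \sum_{x} 1 = m.

With m = 2 treatment groups the expected mean weight is 2, in line with the observed mean of 2.00; the analogous calculation for the stabilized weights, which carry the extra factor P(X = x), gives an expected mean of 1.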

The adjusted cumulative incidence functions are presented in Fig. 2. Confidence bands were obtained by bootstrap with 1000 replications. For example, for relapse, adjustment increases the difference in estimated cumulative incidence between the two treatment groups. After adjustment, the cumulative incidence of the composite event at 25 years is 66% (95% confidence interval: 62–70%) for radiation and 55% (47–63%) for radiation and chemotherapy. Compared to radiation, the event occurring first with radiation and chemotherapy is more often second malignancy and less often relapse. We recall that this illustration of methodological issues does not aim at conveying medical conclusions.

Fig. 2 – The impact of adjustment on the estimated cumulative incidence for the event types relapse (A), second malignancy (B), and death (C), respectively.

5. Discussion

We described a straightforward method to perform covariate adjustment for time to event data in the presence of competing risks, and we developed a SAS macro to make the method readily usable. With the choice of the cumulative incidence function as the quantity to be estimated, inference for the event of interest is made ‘in the presence of competing risks’. The presented approach requires no assumption about the form of the cumulative incidence function. When the weight model is saturated, the method is equivalent to direct standardization to the entire study population. Interpretation of the estimated cumulative incidence functions is therefore simple and intuitively appealing. The method helps to visually depict associations between treatment and the different types of event under study, while focusing on the earliest event only. This focus may be motivated by the research question under study and/or by a practical constraint in studies where follow-up of individuals is stopped at the first event, in particular if each of the considered event types virtually prevents the others from happening.

To the best of our knowledge, so far there are no methodological papers proposing covariate adjustment of cumulative incidence functions for competing risks data with inverse probability of treatment weighting. However, the presented method could be considered as a special case of more recent, rather sophisticated approaches such as the one proposed by Bekaert et al. [14]. The general approach of inverse probability of treatment weighting is widely accepted as one method for covariate adjustment [1–3,9–11], and its differences from alternative methods, in particular propensity score matching and adjustment by regression modeling, are well understood [11,15]. Direct adjustment of cumulative incidence functions has been proposed by Zhang and Zhang [16]. Their approach is based on the Fine and Gray proportional subdistribution hazards regression model [17,18] and assumes constant subdistribution hazard ratios for each covariate. When, in a comparison between treated and untreated patients, the interest is in standardization to the population of treated individuals rather than in standardization to the entire study population, the presented approach can be modified by using propensity score matching instead of weighting [11,15]. However, propensity score matching involves several decisions concerning the matching method, and the resulting estimates may depend on them. Also note that with matching some data are thrown away, whereas with weighting all data are used.

The presented method is subject to limitations. First, inverse probability of treatment weighting requires that, for every covariate combination, each of the treatment groups under study is chosen with strictly positive probability. Second, estimation of the confidence intervals by bootstrap may be time-consuming for large data sets. Third, we only used a logistic regression model to estimate the weights. In certain situations, other approaches such as decision trees or boosting may provide an improved specification of the weight model. Fourth, due to the non-parametric description of the time to event data, the presented approach does not provide a simple relationship between treatment and the cumulative incidence of the different types of event under study. Alternatively, the cumulative incidence function could be modeled by the Fine and Gray proportional subdistribution hazards regression model depending only on treatment, and inverse probability of treatment weighting could be applied to adjust for the covariates. However, this approach comes with model assumptions.

An alternative approach in competing risk analysis is to estimate the cause-specific hazard function for each type of event, most often using Cox regression models [8]. For a given type of event j, the cause-specific hazard function at time t quantifies the instantaneous risk at t of developing an event of type j among all individuals ‘at risk’. The cumulative incidence function for event type j can be calculated as a function of the cause-specific hazard functions of all event types. Inverse probability of treatment weighting could also be applied within this approach by using a Cox regression model depending only on treatment and a weight model depending on the covariates. The discrete-time analogue of this approach is a special case of the one proposed by Moodie et al. [19], which is also suited for a time-varying treatment variable and time-varying covariates. Relying on Cox models, their approach also involves proportional hazard assumptions. Finally, only the event which occurs first is considered with the described method. When interest is also in what happens after the first event, multi-state models could be applied [20]. These models describe the overall event history, and again the use of inverse probability of treatment weighting could be considered for adjustment.

Conflict of interest

None.

Appendix A.

In this appendix, the invocation of the SAS macros %CumIncidIPTW and %CumIncidIPTWPlot is illustrated using the data of the worked example.

A.1. Data preparation

The Hodgkin's disease study data from Pintilie's book on competing risks [4] are available under http://www.uhnres.utoronto.ca/labs/hill/People Pintilie.htm.

A.2. Estimation of the adjusted cumulative incidence functions

Here, the calculation of confidence bands is requested (ic=yes) using 100 bootstrap replications (nb bootstrap=100). The default value of 0.05 was used for the complement of the confidence level for the calculation of the confidence bands (parameter alpha). The macro %CumIncidIPTW calculates confidence bands based on bootstrap standard deviations as well as confidence bands based on percentiles. For the latter to be valid, the number of bootstrap replications must be sufficient to give satisfactory estimates of the percentiles.
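The exact call is provided in the Supplementary Material. Purely as an illustration, a call might look as follows; only the parameters named in the text (ic, nb bootstrap, alpha) are taken from the article (written here with underscores, which is an assumption), and all other argument names and values (data, time, event, treatment, covariates and the variable names) are hypothetical and may differ from the real macro interface:

%CumIncidIPTW(data=hd,                       /* hypothetical argument names   */
              time=survtime, event=evtype,   /* and variable names            */
              treatment=cmt,
              covariates=age sex mediast extranod clinstg,
              ic=yes, nb_bootstrap=100, alpha=0.05);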

A.3. Display of the estimated functions

Here, the grouping of the adjusted cumulative incidence functions by type of event is requested (grouping=event). The inclusion of confidence bands into the figure is requested, more precisely the inclusion of the confidence bands based on bootstrap standard deviations (ic mode=std).
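Again purely as an illustration (the exact call is in the Supplementary Material), a corresponding call of the plotting macro using the two options named in the text might look like this, with the underscore in ic_mode being an assumption:

%CumIncidIPTWPlot(grouping=event, ic_mode=std);   /* grouped by event type, bootstrap-SD bands */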

Appendix B. Supplementary material

The SAS code which implements the presented method is provided in the Supplementary Material associated with this article and can be found at http://dx.doi.org/10.1016/j.cmpb.2016.03.008.

References

[1] P.R. Rosenbaum, Model-based direct adjustment, J. Am. Stat. Assoc. 82 (1987) 387–394, http://dx.doi.org/10.1080/01621459.1987.10478441.
[2] J.M. Robins, M.A. Hernán, B. Brumback, Marginal structural models and causal inference in epidemiology, Epidemiology 11 (2000) 550–560, http://dx.doi.org/10.1097/00001648-200009000-00011.
[3] S.R. Cole, M.A. Hernán, Adjusted survival curves with inverse probability weights, Comput. Methods Programs Biomed. 75 (2004) 45–49, http://dx.doi.org/10.1016/j.cmpb.2003.10.004.
[4] M.M. Pintilie, Competing Risks – A Practical Perspective, John Wiley & Sons, Chichester, 2006.
[5] P. Petersen, R. Tsang, M. Gospodarowicz, M. Pintilie, W. Wells, et al., Stage I and II Hodgkin's disease: long term outcome and second cancer risk, Radiother. Oncol. 72 (2004) S23, http://dx.doi.org/10.1016/S0167-8140(04)80575-9.
[6] J. Kalbfleisch, R. Prentice, The Statistical Analysis of Failure Time Data, John Wiley & Sons, New York, 1980.
[7] K.J. Rothman, S. Greenland, T.L. Lash, Modern Epidemiology, 3rd ed., Lippincott Williams & Wilkins, Philadelphia, 2008.
[8] P.K. Andersen, R. Geskus, T. de Witte, H. Putter, Competing risks in epidemiology: possibilities and pitfalls, Int. J. Epidemiol. 41 (2012) 861–870, http://dx.doi.org/10.1093/ije/dyr213.
[9] S.R. Cole, M.A. Hernán, Constructing inverse probability weights for marginal structural models, Am. J. Epidemiol. 168 (2008) 656–664, http://dx.doi.org/10.1093/aje/kwn164.
[10] P.R. Rosenbaum, D.B. Rubin, The central role of the propensity score in observational studies for causal effects, Biometrika 70 (1983) 41–55, http://dx.doi.org/10.1093/biomet/70.1.41.
[11] P.C. Austin, An introduction to propensity score methods for reducing the effects of confounding in observational studies, Multivar. Behav. Res. 46 (2011) 399–424.
[12] P.C. Austin, E.A. Stuart, Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies, Stat. Med. 34 (2015) 3661–3679, http://dx.doi.org/10.1002/sim.6607.
[13] http://support.sas.com/kb/30/addl/fusion 30511 1 cumincid.sas.txt (accessed 07.05.15).
[14] M. Bekaert, S. Vansteelandt, K. Mertens, Adjusting for time-varying confounding in the subdistribution analysis of a competing risk, Lifetime Data Anal. 16 (2010) 45–70, http://dx.doi.org/10.1007/s10985-009-9130-8.
[15] T. Stürmer, K.J. Rothman, R.J. Glynn, Insights into different results from different causal contrasts in the presence of effect–measure modification, Pharmacoepidemiol. Drug Saf. 15 (2006) 698–709, http://dx.doi.org/10.1002/pds.1231.
[16] X. Zhang, M.J. Zhang, SAS macros for estimation of direct adjusted cumulative incidence curves under proportional subdistribution hazards models, Comput. Methods Programs Biomed. 101 (2011) 87–93, http://dx.doi.org/10.1016/j.cmpb.2010.07.005.
[17] J. Fine, R. Gray, A proportional hazards model for the subdistribution of a competing risk, J. Am. Stat. Assoc. 94 (1999) 496–509, http://dx.doi.org/10.1080/01621459.1999.10474144.
[18] M.M. Kohl, M. Plischke, K. Leffondré, G. Heinze, PSHREG: a SAS macro for proportional and nonproportional subdistribution hazards regression, Comput. Methods Programs Biomed. 118 (2015) 218–233, http://dx.doi.org/10.1016/j.cmpb.2014.11.009.
[19] E.E.M. Moodie, D.A. Stephens, M.B. Klein, A marginal structural model for multiple-outcome survival data: assessing the impact of injection drug use on several causes of death in the Canadian Co-infection Cohort, Stat. Med. 33 (2014) 1409–1425, http://dx.doi.org/10.1002/sim.6043.
[20] H. Putter, M. Fiocco, R.B. Geskus, Tutorial in biostatistics: competing risks and multi-state models, Stat. Med. 26 (2007) 2389–2430, http://dx.doi.org/10.1002/sim.2712.