Journal Pre-proof

A literature review of treatment-specific clinical prediction models in patients with breast cancer

Natansh D. Modi, Michael J. Sorich, Andrew Rowland, Jessica M. Logan, Ross A. McKinnon, Ganessan Kichenadasse, Michael D. Wiese, Ashley M. Hopkins

PII: S1040-8428(20)30046-9
DOI: https://doi.org/10.1016/j.critrevonc.2020.102908
Reference: ONCH 102908
To appear in: Critical Reviews in Oncology / Hematology
Received Date: 23 December 2019
Accepted Date: 16 February 2020
Please cite this article as: Modi ND, Sorich MJ, Rowland A, Logan JM, McKinnon RA, Kichenadasse G, Wiese MD, Hopkins AM, A literature review of treatment-specific clinical prediction models in patients with breast cancer, Critical Reviews in Oncology / Hematology (2020), doi: https://doi.org/10.1016/j.critrevonc.2020.102908
This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2020 Published by Elsevier.
A literature review of treatment-specific clinical prediction models in patients with breast cancer

AUTHORS
Natansh D Modi1, Michael J Sorich1#, Andrew Rowland1, Jessica M Logan2, Ross A McKinnon1, Ganessan Kichenadasse1, Michael D Wiese2 and Ashley M Hopkins1#

#-Ashley M Hopkins and Michael J Sorich contributed equally to this study

AFFILIATIONS
1) College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia.
2) School of Pharmacy and Medical Sciences, University of South Australia, Adelaide, South Australia, Australia.

CORRESPONDING AUTHOR
Natansh D Modi
Email: [email protected]
Address: 5D317, Flinders Medical Centre, Bedford Park, SA, 5042
Phone: +61 8 8201 5647
CONTENTS
Abstract
Keywords
1. Introduction
2. Treatment-specific clinical prediction models
2.1. Discrimination
2.2. Validation
2.3. Risk scoring and online tools
3. Literature Search
Table 1: Study inclusion criteria
4. Overview of identified treatment-specific clinical prediction models
4.1. Discrimination
4.2. Validation
4.3. Risk scoring and online tools
Figure 1: Performance rankings of identified models
5. Prediction models for therapeutic outcomes
Table 2: Treatment-specific clinical prediction models for therapeutic outcomes in patients with breast cancer based upon clinicopathological data
6. Prediction models for adverse outcomes
Table 3: Treatment-specific clinical prediction models for adverse outcomes in patients with breast cancer based upon clinicopathological data
7. Future Directions
8. Conclusion
References
HIGHLIGHTS
- Treatment-specific clinical prediction models aim to reduce the uncertainty in therapeutic and adverse outcomes by providing personalised estimates of the likely benefits, harms, and prognosis following treatment commencement.
- To achieve these objectives, models need to be of a clinical grade, with evidence of accuracy, reproducibility, generalizability, and user-friendliness.
- The structured search identified seventeen treatment-specific clinical prediction models for therapeutic or adverse outcomes in patients with breast cancer.
- Significant gaps in the availability of validated models for both therapeutic and adverse outcomes were identified.
ABSTRACT
Despite advances in breast cancer treatment, significant variability in patient outcomes remains. This results in significant stress to patients and clinicians. Treatment-specific clinical prediction models allow patients to be matched against historical outcomes of patients with similar characteristics, thereby reducing uncertainty by providing personalised estimates of benefits, harms, and prognosis. To achieve this objective, models need to be of a clinical grade, with evidence of accuracy, reproducibility, and generalizability, and be user-friendly. A structured search was undertaken to identify treatment-specific clinical prediction models for therapeutic or adverse outcomes in breast cancer using clinicopathological data. Significant gaps in the availability of validated models for current treatments were identified, along with gaps in the prediction of therapeutic and adverse outcomes. Most models did not have user-friendly tools available. With the aim of facilitating the selection of the best medicine for a specific patient and shared decision making, future research will need to address these gaps.
KEYWORDS
Clinical prediction model; treatment-specific; breast cancer; literature review.
1. INTRODUCTION
Over the last decade, there have been significant advances in the treatment of breast cancer; however, it remains one of the leading causes of death in women [1, 2]. Depending on the cancer subtype, stage, histopathology, patient preference, and gene expression, current treatment options for breast cancer consist of locoregional surgery, radiotherapy, systemic chemotherapy, endocrine therapy, and targeted therapy [3]. Despite these advances, significant variability and uncertainty in therapeutic and adverse outcomes remain between treated patients [4]. This uncertainty results in significant distress to patients, their significant others, and their clinicians.
Clinicopathological data can be used to develop clinical prediction models that allow patients to be matched against historical outcomes of patients with similar characteristics [5, 6]. Treatment-specific clinical prediction models aim to reduce the uncertainty in therapeutic and adverse outcomes by providing personalised estimates of the likely benefits, harms, and prognosis following treatment commencement [5]. Effective communication of personalised and well-validated predictions of an individual's expected therapeutic and adverse outcomes can improve shared decision making, empower patients, and enable patients and clinicians to make better decisions regarding whether to commence and continue medicines [5, 7, 8].

A structured review of studies that present or validate treatment-specific clinical prediction models of therapeutic or adverse outcomes for patients with breast cancer based upon clinicopathological data was conducted. This is followed by a comprehensive comparison of the included predictors, performance, and extent of validation for the identified treatment-specific clinical prediction models.

2. TREATMENT-SPECIFIC CLINICAL PREDICTION MODELS
A treatment-specific clinical prediction model must clearly outline the treatment and time point in care at which the model is to be used [9]. It should also define the specific therapeutic or adverse outcome being predicted and the patient population on which the model was built [9]. Clinical prediction models may undergo performance assessment, validation, and then refinement into risk scoring tools [9].

2.1. Discrimination
Model discrimination refers to the ability of a prediction model to differentiate between patients of varying risk [10]. The ability of a model to discriminate will be dependent upon
the distribution of patient characteristics in the utilized data and the correlation of those characteristics with the predicted outcomes [10]. Well-discriminated models produce a wider spread of risk predictions than poorly discriminated models [11]. The discrimination of logistic regression or time-to-event models is commonly represented by the concordance statistic (c-stat), also known as the area under the receiver operating characteristic (ROC) curve [11].

2.2. Validation
Prediction models may be assessed for internal validity or external validity. Internal validity demonstrates model reproducibility and is assessed using a sample of the model development data [12]. Internal validity also demonstrates the stability of the variables selected within a model [11]. Internal validity can be assessed by apparent, split-sample, cross-validation, and bootstrapping procedures [12]. While split-sample methods (i.e. splitting data into development and internal validation cohorts) are commonly utilized, bootstrapping is a more efficient means of estimating internal validity [11, 12].
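To make the two concepts above concrete, the sketch below (not drawn from any model in this review; the cohort data are invented) computes a c-statistic as the proportion of concordant event/non-event pairs and then bootstraps it to gauge the stability that internal validation is intended to demonstrate:

```python
import random

def c_statistic(risks, outcomes):
    """Concordance: fraction of (event, non-event) pairs in which the
    event patient received the higher predicted risk (ties count 0.5)."""
    pairs = concordant = 0.0
    for r1, y1 in zip(risks, outcomes):
        for r2, y2 in zip(risks, outcomes):
            if y1 == 1 and y2 == 0:
                pairs += 1
                if r1 > r2:
                    concordant += 1
                elif r1 == r2:
                    concordant += 0.5
    return concordant / pairs

# Toy cohort: predicted risks and observed binary outcomes (hypothetical).
risks    = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
outcomes = [1,   1,   0,   1,   0,   1,   0,   0]
apparent = c_statistic(risks, outcomes)  # apparent (in-sample) performance

# Bootstrap sketch: resample patients with replacement and examine the
# spread of the re-estimated c-statistic across resamples.
random.seed(0)
boot = []
for _ in range(200):
    idx = [random.randrange(len(risks)) for _ in range(len(risks))]
    sample_y = [outcomes[i] for i in idx]
    if 0 < sum(sample_y) < len(sample_y):  # need both classes present
        boot.append(c_statistic([risks[i] for i in idx], sample_y))
print(round(apparent, 3), round(min(boot), 3), round(max(boot), 3))
```

A full optimism-corrected bootstrap would refit the model within each resample; this sketch only resamples the predictions to illustrate the mechanics.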
External validity is the process of evaluating model performance and generalizability/transportability in populations intrinsically different from the development data [13]. Models are considered externally valid where performance is maintained within a new dataset [13]. External validation provides a more robust assessment of model prediction performance than internal validation, as internal validation provides no appraisal of performance in external data [11, 13].

2.3. Risk scoring and online tools
Clinical prediction models can be integrated into tools that allow the calculation and presentation of personalized risks for individuals [14]. Clinical prediction tools such as nomograms, scoring schematics, and online applications are important means of allowing end-users to interpret and use clinical prediction models [14]. The key to facilitating the clinical use of prediction models is that the developed tools are accessible, interpretable, and user-friendly [15].
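As a minimal sketch of how a fitted model can be packaged into a calculator-style tool, the snippet below maps a patient's predictor profile to a personalised risk via the logistic function. The intercept, coefficients, and predictor names are hypothetical and are not taken from any model identified in this review:

```python
import math

# Hypothetical coefficients from a fitted logistic model (illustration
# only; NOT from any model identified in this review).
INTERCEPT = -2.0
COEFFS = {"ecog_ps_ge1": 0.8, "metastatic_sites_gt2": 0.6, "age_ge60": 0.4}

def predicted_risk(patient):
    """Map a patient's binary predictor profile to a personalised risk
    via the inverse-logit of the linear predictor."""
    lp = INTERCEPT + sum(COEFFS[k] for k, v in patient.items() if v)
    return 1.0 / (1.0 + math.exp(-lp))

patient = {"ecog_ps_ge1": True, "metastatic_sites_gt2": False, "age_ge60": True}
print(f"predicted risk: {predicted_risk(patient):.1%}")
```

An online tool is essentially this calculation behind a form; manual risk scores additionally round coefficients to integer points for pen-and-paper use.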
3. LITERATURE SEARCH
A structured literature search of Embase, PubMed, and Google Scholar was undertaken in December 2018 to identify studies that developed treatment-specific clinical prediction models for therapeutic or adverse outcomes in patients with breast cancer based upon clinicopathological data (Table 1). The review strategy included searching the name of FDA
approved breast cancer drugs (e.g. trastuzumab, pertuzumab, paclitaxel, eribulin and doxorubicin) AND ‘Breast Cancer’ AND ‘Clinical Prediction Model’ OR ‘Prediction model’ OR ‘Prediction tool’ OR ‘Prognostic model’ OR ‘Prognostic score’ AND ‘Survival’ OR ‘mortality’ OR ‘life expectancy’ OR ‘progression-free survival’ OR ‘adverse events’ OR ‘toxicity’ OR ‘toxic potential’. The reference lists of publications presenting treatment-specific clinical prediction models were also examined.

Table 1: Study inclusion criteria
- Publications outlining the development, validation or updating of a pre-treatment / baseline clinical prediction model of a therapeutic (overall, progression-free, disease-free or failure-free survival) or adverse outcome in patients with breast cancer treated with specified treatment options, i.e. a pre-treatment, treatment-specific clinical prediction model
- The models must be built using routinely collected clinicopathologic data
- The publication must clearly outline the treatment and time point in care at which the model is of use
- Published in the last 10 years and in English
4. OVERVIEW OF IDENTIFIED TREATMENT-SPECIFIC CLINICAL PREDICTION MODELS
The structured search identified seventeen treatment-specific clinical prediction models for therapeutic or adverse outcomes in patients with breast cancer based upon clinicopathological data. Identified models were developed using both real-world (n=7) and clinical trial (n=10) datasets. Ten models predicted therapeutic outcomes (Table 2), and seven predicted adverse outcomes (Table 3). Identified models were diverse in treatment-specificity, stage of disease, ethnicity, outcomes, predictors, performance, and level of validation (Table 2, Table 3 & Figure 1).
4.1. Discrimination
Of the 17 identified treatment-specific clinical prediction models for therapeutic or adverse outcomes in patients with breast cancer, six models did not report discrimination performance. Only one model demonstrated a ‘very strong’ discrimination performance [14], whilst five models had a ‘strong’ discrimination performance (Figure 1).
4.2. Validation
The majority (n=9) of identified models did not report internal validation metrics, limiting confidence in their discrimination performance and in the stability of the selected predictors (Figure 1). Only one of the identified models had external validity metrics presented in the literature (Figure 1), and it is a model currently used in several clinical practices.

4.3. Risk scoring and online tools
The majority (n=14) of identified models had manual risk scoring tools (a risk score calculator or a classification tree) presented within the identified publications. One model was available within an online risk application (Figure 1), facilitating its use within several clinical practices.
Figure 1: Performance rankings of identified models. [Bar chart summarizing the 17 identified models. Discrimination (AUC/ROC/c-stat): not reported, n=6; poor (< 0.6), n=1; moderate (0.6-0.7), n=4; strong (0.7-0.8), n=5; very strong (> 0.8), n=1. Validation: not reported, n=9; bootstrapping, n=6; 1:1 random split, n=1; external validity, n=1. Risk scoring tool: developed, n=14; not developed, n=3. Online tool: exists, n=1; does not exist, n=16.]
5. PREDICTION MODELS FOR THERAPEUTIC OUTCOMES
Ten treatment-specific clinical prediction models for therapeutic outcomes in patients with breast cancer were identified, with the majority (n=6) built using observational real-world data. Three models aimed to predict therapeutic outcomes in patients with early breast cancer, whilst the remainder focused on patients with advanced breast cancer (Table 2).
The early-stage models focused on predicting outcomes to first-line therapies. ‘PREDICT v2.0’ [16] predicted overall survival with first-line systemic chemotherapy, trastuzumab, bisphosphonate, and hormonal therapy options. Guarneri et al [17] aimed to predict outcomes to first-line anthracycline therapy. PREDICT was originally developed in 2010 and, of the identified models, is the most likely to be used in clinical practice [18]. PREDICT has undergone several major updates, and in 2017 PREDICT v2.0 was released. Updates include the incorporation of additional biomarkers (e.g. HER-2 status, Ki-67 status) to improve discrimination and reduce underestimation in rarer patient subsets [16, 19, 20]. Of the identified models, PREDICT is the only one to have reported external validity, internal validity, and discrimination, and to have presented an online tool [16]. The model development data was also large (n = 5738) [16]. Further, PREDICT v2.0 has been externally validated in patient cohorts from the Netherlands and Scotland [21, 22] and is commonly used in Australia by surgical and medical oncologists.

The advanced-stage models largely focused on HER2-targeting therapies. Hopkins et al [23, 24] presented a model to predict therapeutic outcomes in patients initiating first-line trastuzumab, pertuzumab, and docetaxel, and a model for later-line use of ado-trastuzumab emtansine. The Hopkins et al [23, 24] models were built using clinical trial data. Blanchette et al [25] used observational data, aiming to predict therapeutic outcomes in patients with HER2-positive advanced breast cancer initiating trastuzumab for the first time. The observational cohort included patients initiating trastuzumab as both a first-line (± pertuzumab) and later-line therapy [25]. Finally, De Sanctis et al [26] aimed to predict outcomes in patients with advanced breast cancer initiating eribulin as a later-line therapy (≥2 prior lines of therapy for advanced disease; a cohort of mixed HER2 status).

In the 2000s, the ‘Adjuvant!’ clinical prediction model was widely used in clinical practice to provide estimates of overall survival for recently diagnosed patients with early breast cancer [27]. Over time, poor prediction accuracy across different populations (French, Dutch & United Kingdom) became apparent, particularly compared to the now-utilized PREDICT [16, 28, 29]. Adjuvant! is an example of how a lack of model updating/recalibration ultimately leads to poor performance and disparities with contemporary practice. For example, Adjuvant! did not include key biological markers such as HER-2 and Ki-67 [28].

Most of the identified models predicting therapeutic outcomes used Cox proportional hazards analysis as the model construction process. Hopkins et al [24] utilized a recursive partitioning
analysis. Recursive partitioning analysis allows the streamlined development of a risk scoring tool (i.e. a decision tree); however, the methodology requires large datasets to achieve precise predictions [30]. De Sanctis et al [26] employed a discriminant function analysis (DFA). DFA estimates linear combinations of variables to maximize discrimination and outlier identification [26].

There was significant heterogeneity in the variables analysed and selected between the identified models. For example, Blanchette et al [25] was the only model to include the platelet-to-lymphocyte ratio (PLR), an important prognostic variable [31, 32]. The model developed by Hopkins et al [23] was the only model to include lactate dehydrogenase, another important prognostic variable [33].

Model discrimination was most commonly presented through visual representation of divergent survival outcomes between groups using a Kaplan-Meier survival figure and numerically via concordance statistics.
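As an illustrative aside (the follow-up times, event flags, and risk scores below are invented, not from any model in this review), a concordance statistic for censored time-to-event outcomes, in the spirit of Harrell's c-statistic, counts only "comparable" pairs, i.e. those in which the earlier observed time is an actual event:

```python
def harrell_c(times, events, risks):
    """Concordance for right-censored data: among comparable pairs
    (the earlier time is an observed event), count how often the
    patient who failed earlier carried the higher predicted risk."""
    comparable = concordant = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair usable only if patient i's event precedes time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: follow-up months, event flags (1 = death,
# 0 = censored) and model-predicted risk scores.
times  = [5, 8, 12, 20, 24]
events = [1, 1, 0, 1, 0]
risks  = [0.9, 0.6, 0.5, 0.7, 0.2]
print(round(harrell_c(times, events, risks), 3))  # → 0.875
```

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking of event times by predicted risk.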
Table 2: Treatment-specific clinical prediction models for therapeutic outcomes in patients with breast cancer based upon clinicopathological data

Candido dos Reis et al. 2017 [16] (PREDICT v2.0)
- Stage / Drug: Early breast cancer; chemotherapy alone, endocrine therapy alone, or combined chemoendocrine therapy
- Outcome: Overall survival
- Patient population: n = 5738; observational real-world data
- Model parameters: 1) Age at diagnosis; 2) Post-menopausal (yes / no / unknown); 3) ER status (positive / negative); 4) HER2 status (positive / negative / unknown); 5) Ki-67 status (positive / negative / unknown); 6) Tumour size (mm); 7) Tumour grade (1 / 2 / 3); 8) Detected by (screening / symptoms / unknown); 9) Positive nodes; 10) Micrometastases (yes / no / unknown)
- Model: Constructed using Cox proportional hazards analysis; a recalibration of PREDICT v1. Discrimination: AUC 0.726 (ER negative), 0.796 (ER positive). Internal validation process: NA. External validation: Scotland validation study, AUC 0.74-0.77; Netherlands validation study, AUC 0.79 (ER positive), 0.75 (ER negative)
- User-friendliness: Includes only routinely collected clinic data; online tool developed based on the model

Guarneri et al. 2009 [17]
- Stage / Drug: Stage II-III; 99% of patients on an anthracycline
- Outcome: Overall survival
- Patient population: n = 221; observational real-world data
- Model parameters: Ki-67 ≥ 15%; nodal positivity
- Model: Constructed using Cox proportional hazards analysis. Discrimination represented via Kaplan-Meier and log-rank test. Internal validation process: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; risk scoring tool developed based on the model; no online tool

Guarneri et al. 2009 [17]
- Stage / Drug: Stage II-III; 99% of patients on an anthracycline
- Outcome: Disease-free survival
- Patient population: n = 221; observational real-world data
- Model parameters: Ki-67 ≥ 15%; nodal positivity
- Model: Constructed using Cox proportional hazards analysis. Discrimination represented via Kaplan-Meier and log-rank test. Internal validation process: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; risk scoring tool developed based on the model; no online tool

Blanchette et al. 2018 [25]
- Stage / Drug: HER2-positive metastatic breast cancer; trastuzumab
- Outcome: Overall survival
- Patient population: n = 154; observational real-world data
- Model parameters: Age; number of distant sites of metastasis; CNS metastasis; ER status; PLR; ALP
- Model: Constructed using Cox proportional hazards analysis. Discrimination represented via Kaplan-Meier only. Internal validation: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; no risk classification tool developed; no online tool

Blanchette et al. 2018 [25]
- Stage / Drug: HER2-positive metastatic breast cancer; trastuzumab
- Outcome: Failure-free survival
- Patient population: n = 154; observational real-world data
- Model parameters: Number of distant sites of metastasis (≥ 2 versus 0-1); CNS metastasis (yes versus no); PLR (log-transformed)
- Model: Constructed using Cox proportional hazards analysis. Discrimination represented via Kaplan-Meier only. Internal validation process: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; no risk classification tool developed; no online tool

De Sanctis et al. 2018 [26]
- Stage / Drug: Metastatic breast cancer; eribulin
- Outcome: Progression-free survival
- Patient population: n = 71; observational real-world data
- Model parameters: 1) Number of metastatic sites (1 / 2 / 3-4); 2) ECOG PS (0 / 1 / 2-3)
- Model: Constructed using Cox proportional hazards analysis and discriminant function analysis. Discrimination represented via Kaplan-Meier only. Internal validation: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; no risk classification tool developed; no online tool

Hopkins et al. 2019 [24]
- Stage / Drug: HER2-positive advanced breast cancer; ado-trastuzumab emtansine
- Outcome: Overall survival
- Patient population: n = 1593; clinical trial data
- Model parameters: 1) Number of metastatic sites (≤ 2 / > 2); 2) ECOG PS (0 / ≥ 1)
- Model: Constructed using recursive partitioning analysis. Discrimination represented via concordance statistic and Kaplan-Meier; c-stat = 0.62. Internal validation: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; risk scoring tool developed based on the model; no online tool

Hopkins et al. 2019 [24]
- Stage / Drug: HER2-positive advanced breast cancer; ado-trastuzumab emtansine
- Outcome: Progression-free survival
- Patient population: n = 1593; clinical trial data
- Model parameters: 1) Number of metastatic sites (≤ 2 / > 2); 2) ECOG PS (0 / ≥ 1)
- Model: Constructed using recursive partitioning analysis. Discrimination represented via concordance statistic and Kaplan-Meier; c-stat = 0.59. Internal validation: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; risk scoring tool developed based on the model; no online tool

Hopkins et al. 2019 [23]
- Stage / Drug: HER2-positive advanced breast cancer; pertuzumab, trastuzumab & docetaxel
- Outcome: Overall survival
- Patient population: n = 408; clinical trial data
- Model parameters: 1) Number of metastatic sites (< 3 / ≥ 3); 2) LDH (≤ ULN / > ULN); 3) ECOG PS (0 / ≥ 1)
- Model: Constructed using Cox proportional hazards analysis. Discrimination represented via concordance statistic and Kaplan-Meier; c-stat = 0.65. Internal validation: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; risk scoring tool developed based on the model; no online tool

Hopkins et al. 2019 [23]
- Stage / Drug: HER2-positive advanced breast cancer; pertuzumab, trastuzumab & docetaxel
- Outcome: Progression-free survival
- Patient population: n = 408; clinical trial data
- Model parameters: 1) Number of metastatic sites (< 3 / ≥ 3); 2) LDH (≤ ULN / > ULN); 3) NLR (< 2.5 / ≥ 2.5)
- Model: Constructed using Cox proportional hazards analysis. Discrimination represented via concordance statistic and Kaplan-Meier; c-stat = 0.60. Internal validation: NR. External validation: does not exist
- User-friendliness: Includes only routinely collected clinic data; risk scoring tool developed based on the model; no online tool

Footnotes: ECOG PS = Eastern Cooperative Oncology Group Performance Status, ER = Estrogen Receptor, HER2 = Human Epidermal growth factor Receptor 2, AUC = Area Under the ROC Curve, ROC = Receiver Operating Characteristic curve, CNS = Central Nervous System, PLR = Platelet-To-Lymphocyte Ratio, NLR = Neutrophil-To-Lymphocyte Ratio, ALP = Serum Alkaline Phosphatase, LDH = Lactate Dehydrogenase, ULN = Upper Limit of Normal, C-stat = Concordance statistic, NR = Not Reported.
6. PREDICTION MODELS FOR ADVERSE OUTCOMES
Seven treatment-specific clinical prediction models for adverse outcomes in patients with breast cancer were identified. The majority (n=6) were built using clinical trial data, possibly a reflection of the complexities of curating toxicity data within electronic health record databases. Three models predicted adverse event outcomes in early breast cancer, whilst the remainder were for advanced breast cancer. Significant heterogeneity in predictor variables within the models was observed; however, this was expected, as the toxicities induced by medicines are heterogeneous with respect to presentation and biological aetiology (e.g. rash with lapatinib and cardiovascular adverse events with trastuzumab) (Table 3).

Regarding the early breast cancer models, Ezaz et al [34] aimed to predict heart failure and cardiomyopathy with trastuzumab; Upshaw et al [35] aimed to predict cardiotoxicity with doxorubicin; and Romond et al [36] aimed to predict cardiac dysfunction with doxorubicin, cyclophosphamide, paclitaxel, and trastuzumab.

For patients with advanced breast cancer, Dranitsaris and Lacouture [37] developed a model to predict rash and diarrhoea with lapatinib and capecitabine combination therapy. Further, Dranitsaris et al [38, 39] developed models to predict neutropenia and cardiotoxicity with doxorubicin or pegylated liposomal doxorubicin.

The majority (n=6) of the identified models used the concordance statistic to present discrimination. Ezaz et al [34] presented discrimination solely through 3-year risks of heart failure/cardiomyopathy by identified risk groups.

In contrast to the modelling approaches used for therapeutic outcomes, a diverse set of statistical methods was utilized to model adverse event outcomes. For example, Dranitsaris and Lacouture [37] and Dranitsaris et al [38, 39] utilized generalized estimating equation (GEE) logistic regression methodology; Ezaz et al [34] used Cox proportional hazards analysis; Romond et al [36] used Cox cause-specific proportional hazards analysis; and Upshaw et al [35] utilized logistic regression. GEE logistic regression differs from standard logistic regression in that it allows adjustment for treatment-cycle clustering, facilitating cycle-based as well as baseline predictions [38].

All identified adverse outcome models reported internal validation. The majority (n=6) used bootstrapping, and one model used a split-sample process. None of the models had reports of external validity identified in the literature. All identified publications for the adverse outcome models reported a manual risk scoring tool; however, none presented an online application.
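The cycle-based prediction idea behind the GEE models discussed above can be illustrated with a small data-reshaping sketch: each treatment cycle becomes one row, a lagged predictor (e.g. toxicity in the prior cycle) is derived per row, and a patient identifier marks the cluster within which a GEE model would adjust for correlation. All field names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Cycle:
    patient_id: str
    cycle: int
    grade2_toxicity: bool  # outcome observed in this cycle

def to_long_format(history):
    """One row per treatment cycle, with a lagged prior-cycle predictor;
    the patient_id column defines the clusters a GEE model would use."""
    rows = []
    last_seen = {}
    for c in sorted(history, key=lambda c: (c.patient_id, c.cycle)):
        prior = last_seen.get(c.patient_id)
        rows.append({
            "patient_id": c.patient_id,
            "cycle": c.cycle,
            "toxicity_prior_cycle": bool(prior and prior.grade2_toxicity),
            "outcome": c.grade2_toxicity,
        })
        last_seen[c.patient_id] = c
    return rows

history = [
    Cycle("A", 1, False), Cycle("A", 2, True), Cycle("A", 3, True),
    Cycle("B", 1, False), Cycle("B", 2, False),
]
rows = to_long_format(history)
print(rows[2])  # patient A, cycle 3: prior-cycle toxicity is True
```

A GEE fit would then regress `outcome` on `toxicity_prior_cycle` (and baseline covariates) while accounting for the repeated rows per `patient_id`.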
Table 3: Treatment-specific clinical prediction models for adverse outcomes in patients with breast cancer based upon clinicopathological data Drug
Outcome
Ezaz et al. 2014 [34]
Early Breast Cancer (Stage I-III)
Trastuzumab
Heart Failure and Cardiomyopathy
Patient Population n = 1664; observational real-world data
Model Parameters 1) Age - 67 to 74 - 75 to 79 - 80 to 94
f
Stage
oo
Reference
Pr
e-
pr
2) Prior Adjuvant Therapy - No adjuvant therapy - Anthracycline therapy - Non-anthracycline therapy
Romond et al. 2012 [36]
Doxorubicin
HER2 Positive, Non-metastatic
Cardiotoxicity
na l
Early Breast Cancer
Jo ur
Upshaw et al. 2019 [35]
Doxorubicin, Cyclophosphamide, Paclitaxel & Trastuzumab
n = 967; Clinical trial data
Model
User-friendliness
Model constructed via using Cox proportional hazards analysis
- Includes only routinely collected clinic data - Risk scoring tool developed based on the model - No online tool
Model discrimination represented via 3-Year Risk of HF/CM by Risk Score Internal Validation Process: Random 50/50 split of cohort
3) Pre-existing CV Risk Factors - No pre-existing CV risk factors - Coronary Artery Disease - Renal Failure - Atrial Fibrillation/Flutter - Diabetes Mellitus - Hypertension
External Validation: Does not exist
1) Increasing Age 2) Higher Body Mass Index 3) Lower baseline LVEF
(Table continued from previous page)

(Row continued from previous page)
- Model: logistic regression; discrimination (concordance statistic): C-stat = 0.701
- Internal validation: bootstrapping; external validation: none
- Presentation: includes only routinely collected clinical data; risk scoring tool developed from the model; no online tool

(Citation on previous page)
- Outcome: cardiac dysfunction
- Sample: n = 937; clinical trial data
- Predictors: 1) Age (<50, 50-59, ≥60); 2) Baseline LVEF (≥65%, 55%-64%, 50%-54%)
- Model: Cox cause-specific proportional hazards analysis; discrimination (concordance statistic): C-stat = 0.72
- Internal validation: bootstrapping; external validation: none
- Presentation: includes only routinely collected clinical data; risk scoring tool developed from the model; no online tool

Dranitsaris & Lacouture 2014 [37]
- Population: advanced stage HER2-positive cancer
- Treatment: lapatinib & capecitabine
- Outcome: grade ≥2 diarrhoea
- Sample: n = 197; clinical trial data
- Predictors: 1) Age (26-80); 2) Multiple cycles of therapy; 3) Presence of skin metastases; 4) Grade 1 diarrhoea in the prior cycle; 5) Therapy started in spring
- Model: generalized estimating equations; discrimination (ROC): 0.78
- Internal validation: nonparametric bootstrapping; external validation: none
- Presentation: includes only routinely collected clinical data; risk scoring tool developed from the model; no online tool

Dranitsaris & Lacouture 2014 [37]
- Population: advanced stage HER2-positive cancer
- Treatment: lapatinib & capecitabine
- Outcome: grade ≥2 rash
- Sample: n = 197; clinical trial data
- Predictors: 1) Presence of brain metastases; 2) Planned dose of capecitabine per cycle; 3) Concomitant use of 5HT3 antiemetics
- Model: generalized estimating equations; discrimination (ROC): 0.67
- Internal validation: nonparametric bootstrapping; external validation: none
- Presentation: includes only routinely collected clinical data; risk scoring tool developed from the model; no online tool

Dranitsaris et al. 2008 [38]
- Population: metastatic breast cancer
- Treatment: doxorubicin or pegylated liposomal doxorubicin
- Outcome: neutropenic complications
- Sample: n = 509; clinical trial data
- Predictors: 1) Group (doxorubicin vs PLD); 2) Age ≥59 years; 3) WHO performance status (PS 1 vs 0; PS 2 vs 0); 4) Cycle 1 vs > cycle 1; 5) Neutrophils ≤2 × 10^9 cells/L in the previous cycle
- Model: generalized estimating equations; discrimination (ROC): 0.74
- Internal validation: nonparametric bootstrapping; external validation: none
- Presentation: includes only routinely collected clinical data; risk scoring tool developed from the model; no online tool

Dranitsaris et al. 2008 [39]
- Population: metastatic breast cancer
- Treatment: doxorubicin or pegylated liposomal doxorubicin
- Outcome: cardiotoxicity
- Sample: n = 509; clinical trial data
- Predictors: 1) Group (doxorubicin vs PLD); 2) Cycle^2; 3) Interaction effect: doxorubicin × cycle^2; 4) Age <50 years; 5) Weight ≥70 kg; 6) Baseline anthracycline exposure ≥100 mg/m^2; 7) WHO performance status ≥1 vs 0
- Model: generalized estimating equations; discrimination (ROC): 0.84
- Internal validation: nonparametric bootstrapping; external validation: none
- Presentation: includes only routinely collected clinical data; risk scoring tool developed from the model; no online tool

Footnotes: HF = heart failure, CM = cardiomyopathy, CV = cardiovascular, 5HT3 = serotonin (5-HT3) receptor, PLD = pegylated liposomal doxorubicin, WHO = World Health Organization, ROC = receiver operating characteristic curve, PS = performance status, LVEF = left ventricular ejection fraction.
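The tabulated models report discrimination as a concordance statistic (C-statistic; for binary outcomes, equivalent to the area under the ROC curve) and internal validation via bootstrapping. As an illustrative sketch only (not the authors' code; stdlib Python, hypothetical function names), the C-statistic and a percentile bootstrap interval can be computed as:

```python
import random

def c_statistic(scores, labels):
    """Concordance statistic: probability that a randomly chosen case
    (label 1) has a higher predicted score than a randomly chosen
    control (label 0); ties count as 0.5."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    concordant = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                concordant += 1.0
            elif c == k:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))

def bootstrap_c_interval(scores, labels, n_boot=500, seed=1):
    """Percentile bootstrap interval for the C-statistic, in the spirit
    of the bootstrap internal validation reported for these models."""
    rng = random.Random(seed)
    n = len(scores)
    stats = []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:  # resample must contain both cases and controls
            stats.append(c_statistic([scores[i] for i in idx], ys))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]
```

A model with a C-statistic of 0.5 discriminates no better than chance; the tabulated values of 0.67-0.84 correspond to modest-to-good discrimination.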
7. FUTURE DIRECTIONS
High between-patient variability in the therapeutic and adverse outcomes of breast cancer treatment strategies causes significant stress to patients considering the initiation of a new treatment. There is therefore significant potential for treatment-specific clinical prediction models to guide the selection of the best medicine for a specific patient and to facilitate shared decision making. To achieve these objectives, models need to be of clinical grade, with evidence of accuracy, reproducibility, generalizability, and user-friendliness; they must also be available for the range of treatment options and present information on prognosis, treatment benefit, and adverse outcomes [9]. User-friendliness includes the development of tools that allow rapid calculation in clinical practice, as well as the presentation of predictions in a format interpretable to patients and clinicians. Mechanisms to facilitate the use of prediction tools in clinical practice include incorporating only variables that are collected in routine practice. One hope is that emerging real-world cancer databases (e.g. CancerLinQ Discovery and Project GENIE) will facilitate the development of models based on a greater range of routinely collected clinicopathological data, ultimately providing more precise predictions. For example, PREDICT [16] has yet to test or incorporate the significance of common laboratory data such as lactate dehydrogenase.
It is also anticipated that emerging real-world databases will facilitate the longitudinal validation and recalibration of treatment-specific clinical prediction models in response to practice changes, as well as the development of tools that present predictions for the range of available treatment options [40-42]. Currently, there are large gaps in the availability of treatment-specific clinical prediction models; for example, no therapeutic outcome prediction models were identified for patients with HER2-negative advanced breast cancer considering available first-line treatment options. In addition, no tools were identified that presented both therapeutic and adverse outcomes for treatment options.
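Recalibration of an existing model typically starts with "recalibration-in-the-large": re-estimating the model's intercept in a new cohort so that the mean predicted risk matches the observed event rate, while leaving the ranking of patients (and hence discrimination) unchanged. A minimal sketch of that step, assuming access to the old model's predicted probabilities and the new cohort's outcomes (function names are hypothetical, not from any cited tool):

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def recalibration_intercept(pred_probs, outcomes, iters=25):
    """Fit the offset 'a' in updated_p = sigmoid(a + logit(old_p)) by
    Newton's method, so updated predictions match the new cohort's
    observed event rate (recalibration-in-the-large)."""
    lp = [math.log(p / (1.0 - p)) for p in pred_probs]  # logit of old predictions
    a = 0.0
    for _ in range(iters):
        grad = sum(y - _sigmoid(a + x) for x, y in zip(lp, outcomes))
        hess = sum(p * (1.0 - p) for p in (_sigmoid(a + x) for x in lp))
        a += grad / hess  # Newton step on the log-likelihood in 'a'
    return a

def recalibrated(pred_probs, a):
    """Apply the fitted intercept shift to the original predictions."""
    return [_sigmoid(a + math.log(p / (1.0 - p))) for p in pred_probs]
```

A negative fitted intercept indicates the original model over-predicts risk in the new cohort; a full recalibration would additionally re-estimate a calibration slope or individual coefficients.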
8. CONCLUSION This study identified significant variability in the methods, validation, and presentation of treatment-specific clinical prediction models for breast cancer within the literature. PREDICT [16] was identified as a model that aims to inform likely therapeutic outcomes for patients with early breast cancer considering first-line systemic chemotherapy, trastuzumab, and hormonal therapy options; it was the only identified model to report recalibration and external validation and to be presented within an online tool. Significant gaps were identified in the availability of validated models for both therapeutic and adverse outcomes, as well as in models covering the full range of breast cancer treatment options. Given that the aim of treatment-specific clinical prediction models in breast cancer is to facilitate the selection of the best medicine for a specific patient and to support shared decision making, future research will need to address these gaps.
Declaration of competing interests: Associate Professor Rowland, Professor Sorich and Professor McKinnon report grants from Pfizer, outside the submitted work. The authors have no other conflicts of interest to disclose.

Funding: Research undertaken with the financial support of Cancer Council South Australia's Beat Cancer Project on behalf of its donors and the State Government through the Department of Health (Grant IDs: 1159924 and 1127220). R.A.M. receives financial support from the Cancer Council's Beat Cancer Project with support from its donors and the South Australian Department of Health. A.R. is supported by a Beat Cancer Mid-Career Research Fellowship from Cancer Council SA. A.M.H. is a researcher funded by a Postdoctoral Fellowship from the National Breast Cancer Foundation, Australia (PF-17-007).

Acknowledgements: Not applicable
REFERENCES
1. Torre, L.A., et al., Global cancer statistics, 2012. CA: A Cancer Journal for Clinicians, 2015. 65(2): p. 87-108.
2. Tong, C.W.S., et al., Recent Advances in the Treatment of Breast Cancer. Frontiers in Oncology, 2018. 8: p. 227.
3. McDonald, E., et al., Clinical Diagnosis and Management of Breast Cancer. Journal of Nuclear Medicine, 2016. 57: p. 9S.
4. McKinlay, J.B., et al., Physician Variability and Uncertainty in the Management of Breast Cancer: Results from a Factorial Experiment. 1998. 36(3): p. 385-396.
5. Vogenberg, F.R., Predictive and Prognostic Models: Implications for Healthcare Decision-Making in a Modern Recession. American Health & Drug Benefits, 2009. 2(6): p. 218-222.
6. Han, K., K. Song, and B.W. Choi, How to Develop, Validate, and Compare Clinical Prediction Models Involving Radiological Parameters: Study Design and Statistical Methods. Korean Journal of Radiology, 2016. 17(3): p. 339-350.
7. Harris, A.H.S., Path From Predictive Analytics to Improved Patient Outcomes: A Framework to Guide Use, Implementation, and Evaluation of Accurate Surgical Predictive Models. Annals of Surgery, 2017. 265(3): p. 461-463.
8. Santos, J. and I. Fonseca, Incorporating Scoring Risk Models for Care Planning of the Elderly with Chronic Kidney Disease. Current Gerontology and Geriatrics Research, 2017. 2017: p. 8067094.
9. Collins, G.S., et al., Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): The TRIPOD Statement. Circulation, 2015. 131(2): p. 211-219.
10. Alba, A.C., et al., Discrimination and Calibration of Clinical Prediction Models: Users' Guides to the Medical Literature. JAMA, 2017. 318(14): p. 1377-1384.
11. Steyerberg, E.W. and Y. Vergouwe, Towards better clinical prediction models: seven steps for development and an ABCD for validation. European Heart Journal, 2014. 35(29): p. 1925-1931.
12. Steyerberg, E.W., et al., Internal validation of predictive models: Efficiency of some procedures for logistic regression analysis. Journal of Clinical Epidemiology, 2001. 54(8): p. 774-781.
13. Terrin, N., et al., External validity of predictive models: a comparison of logistic regression, classification trees, and neural networks. Journal of Clinical Epidemiology, 2003. 56(8): p. 721-729.
14. Révész, D., et al., Decision support systems for incurable non-small cell lung cancer: a systematic review. 2017. 17(1): p. 144.
15. Austin, P.C., et al., Developing points-based risk-scoring systems in the presence of competing risks. Statistics in Medicine, 2016. 35(22): p. 4056-4072.
16. Candido dos Reis, F.J., et al., An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation. Breast Cancer Research, 2017. 19(1).
17. Guarneri, V., et al., A prognostic model based on nodal status and Ki-67 predicts the risk of recurrence and death in breast cancer patients with residual disease after preoperative chemotherapy. Annals of Oncology, 2009. 20(7): p. 1193-1198.
18. Wishart, G.C., et al., PREDICT: a new UK prognostic model that predicts survival following surgery for invasive breast cancer. Breast Cancer Research, 2010. 12(1): p. R1.
19. Wishart, G.C., et al., Inclusion of KI67 significantly improves performance of the PREDICT prognostication and prediction model for early breast cancer. BMC Cancer, 2014. 14(1): p. 908.
20. Wishart, G.C., et al., PREDICT Plus: development and validation of a prognostic model for early breast cancer that includes HER2. British Journal of Cancer, 2012. 107(5): p. 800-807.
21. van Maaren, M.C., et al., Validation of the online prediction tool PREDICT v. 2.0 in the Dutch breast cancer population. European Journal of Cancer, 2017. 86: p. 364-372.
22. Gray, E., et al., Independent validation of the PREDICT breast cancer prognosis prediction tool in 45,789 patients using Scottish Cancer Registry data. British Journal of Cancer, 2018. 119(7): p. 808-814.
23. Hopkins, A.M., et al., Predictors of Long-Term Disease Control and Survival for HER2-Positive Advanced Breast Cancer Patients Treated With Pertuzumab, Trastuzumab, and Docetaxel. 2019. 9(789).
24. Hopkins, A.M., et al., Primary predictors of survival outcomes for HER2-positive advanced breast cancer patients initiating ado-trastuzumab emtansine. The Breast, 2019. 46: p. 90-94.
25. Blanchette, P., et al., Factors influencing survival among patients with HER2-positive metastatic breast cancer treated with trastuzumab. Breast Cancer Research and Treatment, 2018. 170(1): p. 169-177.
26. De Sanctis, R., et al., Predictive Factors of Eribulin Activity in Metastatic Breast Cancer Patients. Oncology, 2018. 94 Suppl 1: p. 19-28.
27. Ravdin, P.M., et al., Computer Program to Assist in Making Decisions About Adjuvant Therapy for Women With Early Breast Cancer. Journal of Clinical Oncology, 2001. 19(4): p. 980-991.
28. Hajage, D., et al., External Validation of Adjuvant! Online Breast Cancer Prognosis Tool. Prioritising Recommendations for Improvement. PLoS ONE, 2011. 6(11): p. e27446.
29. Campbell, H.E., et al., An investigation into the performance of the Adjuvant! Online prognostic programme in early breast cancer for a cohort of patients in the United Kingdom. British Journal of Cancer, 2009. 101(7): p. 1074.
30. James, K.E., R.F. White, and H.C. Kraemer, Repeated split sample validation to assess logistic regression and recursive partitioning: an application to the prediction of cognitive impairment. Statistics in Medicine, 2005. 24(19): p. 3019-3035.
31. Templeton, A.J., et al., Prognostic Role of Platelet to Lymphocyte Ratio in Solid Tumors: A Systematic Review and Meta-Analysis. Cancer Epidemiology, Biomarkers & Prevention, 2014. 23(7): p. 1204.
32. Templeton, A.J., et al., Prognostic Role of Neutrophil-to-Lymphocyte Ratio in Solid Tumors: A Systematic Review and Meta-Analysis. JNCI: Journal of the National Cancer Institute, 2014. 106(6).
33. Liu, D., et al., Prognostic significance of serum lactate dehydrogenase in patients with breast cancer: a meta-analysis. Cancer Management and Research, 2019. 11: p. 3611-3619.
34. Ezaz, G., et al., Risk Prediction Model for Heart Failure and Cardiomyopathy After Adjuvant Trastuzumab Therapy for Breast Cancer. Journal of the American Heart Association, 2014. 3(1).
35. Upshaw, J.N., et al., Personalized Decision Making in Early Stage Breast Cancer: Applying Clinical Prediction Models for Anthracycline Cardiotoxicity and Breast Cancer Mortality Demonstrates Substantial Heterogeneity of Benefit-Harm Trade-off. Clinical Breast Cancer, 2019. 19(4): p. 259-267.e1.
36. Romond, E.H., et al., Seven-year follow-up assessment of cardiac function in NSABP B-31, a randomized trial comparing doxorubicin and cyclophosphamide followed by paclitaxel (ACP) with ACP plus trastuzumab as adjuvant therapy for patients with node-positive, human epidermal growth factor receptor 2-positive breast cancer. Journal of Clinical Oncology, 2012. 30(31): p. 3792-3799.
37. Dranitsaris, G. and M.E. Lacouture, Development of prediction tools for diarrhea and rash in breast cancer patients receiving lapatinib in combination with capecitabine. Breast Cancer Research and Treatment, 2014. 147(3): p. 631-638.
38. Dranitsaris, G., et al., Identifying Patients at High Risk for Neutropenic Complications During Chemotherapy for Metastatic Breast Cancer With Doxorubicin or Pegylated Liposomal Doxorubicin: The Development of a Prediction Model. American Journal of Clinical Oncology, 2008. 31(4).
39. Dranitsaris, G., et al., The development of a predictive model to estimate cardiotoxic risk for patients with metastatic breast cancer receiving anthracyclines. Breast Cancer Research and Treatment, 2008. 107(3): p. 443-450.
40. Goldstein, B.A., A.M. Navar, and M.J. Pencina, Risk Prediction With Electronic Health Records: The Importance of Model Validation and Clinical Context. JAMA Cardiology, 2016. 1(9): p. 976-977.
41. Vickers, A.J., Prediction models in cancer care. CA: A Cancer Journal for Clinicians, 2011. 61(5): p. 315-326.
42. Wellcome Trust, Sharing research data to improve public health: full joint statement by funders of health research. 2011; Available from: https://wellcome.ac.uk/what-we-do/our-work/sharing-research-data-improve-public-health-full-joint-statement-funders-health.
VITAE
Natansh D Modi - Bachelor of Pharmacy (Honours); Bachelor of Pharmaceutical Science - Flinders University
Michael J Sorich - Bachelor of Pharmacy (Honours); Graduate Diploma in Medical Statistics; PhD; Professor in Pharmacology - Flinders University
Andrew Rowland - Bachelor of Science (Honours); PhD; Associate Professor in Clinical Pharmacology - Flinders University
Jessica M Logan - Bachelor of Laboratory Medicine (Honours); PhD - University of South Australia
Ross A McKinnon - Bachelor of Pharmacy; Bachelor of Science (Honours); PhD; Dean (Research) and Beat Cancer Professor - Flinders University
Ganessan Kichenadasse - Bachelor of Medicine and Bachelor of Surgery; Fellow of the Royal Australasian College of Physicians - Flinders University
Michael D Wiese - Bachelor of Pharmacy (Honours); PhD; Associate Professor in Pharmacotherapeutics - University of South Australia
Ashley M Hopkins - Bachelor of Pharmacy (Honours); PhD; National Breast Cancer Foundation Early Career Fellow (Australia) - Flinders University