A746
VALUE IN HEALTH 20 (2017) A399–A811
Objectives: Hepatitis C virus (HCV) infection is a chronic, life-threatening disease that is substantially under-diagnosed. Accelerating time to diagnosis can lead to earlier treatment and improved patient outcomes. This was a retrospective database study to develop an algorithm that could be used to identify undiagnosed patients with HCV based on routinely collected patient data. The effectiveness of non-parametric machine learning methods was also compared with that of more conventional parametric methods. Methods: Data were extracted from US prescription and open-source medical claims between 2010 and 2016. Outcomes for HCV patients were coded as 1; outcomes for non-HCV patients were set to 0. The index date for HCV patients was the first observed date of diagnosis, ensuring only pre-diagnosis predictors were used; the most recent activity was used as the index date for non-HCV patients. Features captured information on demographics, treatments, procedures and symptomatology, including temporal associations between the timing of events and the index date. Binary classifiers were estimated based on conventional parametric methods (unconstrained logistic regression and logistic regression with penalty) and non-parametric machine learning methods (random forest, gradient boosting and an ensemble of classifiers based on logistic regression). Five-fold cross-validation was used to identify optimal hyperparameters, which included a differential misclassification penalty. Positive predictive value (PPV) at 50% sensitivity was used to evaluate model performance on hold-out data. Results: The sample comprised 120,000 HCV and 60,000,000 non-HCV patients. PPV (based on an HCV to non-HCV ratio of 1:34) was 72.3%, 70.8%, 65.0%, 52.1% and 51.7% for the ensemble, gradient boosting, random forest, logistic regression with penalty and unconstrained logistic regression, respectively.
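The evaluation metric can be illustrated with a short sketch. This is not the study's code (the authors' cross-validation, ensemble and differential misclassification penalty are omitted); it simply reports precision at the score cut-off where sensitivity first reaches the target:

```python
def ppv_at_sensitivity(y_true, scores, target_sensitivity=0.5):
    """Positive predictive value at the highest score threshold whose
    sensitivity reaches the target (illustrative sketch only)."""
    # Scan candidate thresholds from the highest score downwards,
    # accumulating true and false positives as patients are "flagged".
    pairs = sorted(zip(scores, y_true), reverse=True)
    n_pos = sum(y_true)
    tp = fp = 0
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        if tp / n_pos >= target_sensitivity:
            return tp / (tp + fp)
    return 0.0
```

Because PPV depends on prevalence, comparisons such as those reported here only make sense at a fixed case-to-control ratio (1:34 in this study).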
Conclusions: The evidence suggests that algorithms leveraging routinely collected real-world data could be an effective way to screen for undiagnosed HCV patients. State-of-the-art machine learning approaches also substantially outperformed conventional approaches, highlighting the potential value of these methods.

PRM86 The Impact Of Human Papillomavirus Vaccine In Taiwan: A Transmission Dynamic Modelling Approach
Wen Y1, Chang S1, Chou T1, Wu H1, Lin YJ1, Fann SJ2, Chou H3, Chang CJ1
1Chang Gung University, Taoyuan, Taiwan, 2Academia Sinica, Taipei, Taiwan, 3Chang Gung Memorial Hospital, Taoyuan, Taiwan
Objectives: Human papillomavirus (HPV) is the most common sexually transmitted infection. This study aimed to predict infected cases after the HPV vaccination policy was implemented in Taiwan. Methods: The 5-year epidemiologic consequences of HPV vaccination were estimated via a transmission dynamic approach based on a Susceptible-Infectious-Recovered (SIR) model. The required epidemiological parameters were obtained from the database of Taiwan’s Health and Welfare Data Science Center and from available public sources. The moderate-term epidemiologic consequences of the non-vaccinated and vaccinated scenarios were estimated and compared. The differential equations were solved with the fourth-order Runge-Kutta method as implemented in R Statistical Software. Results: Compared with no vaccination, the 9-valent vaccine is projected to decrease the incidence of HPV infection across age groups (15-19, 20-29, 30-39, and 40-59 years) in the highly sexually active female population, with the trend more pronounced in the younger groups. The cumulative reductions in incidence over a 5-year horizon were 54.5, 64.2, 60.7, and 56.6 per 100,000 females, respectively. More females in the non-vaccinated group than in the vaccinated group were projected to develop cervical intraepithelial neoplasia 1 (CIN1) in each age group, although incidence increased only moderately over time; in the vaccinated group, CIN1 incidence decreased in each age group. For CIN2/3, however, incidence increased over time in the 30-60-year-old highly sexually active female population regardless of whether the HPV vaccine was given, with 5th-year incidences 2.2 to 22.0 times those of the 1st year across age groups. No difference in cervical cancer incidence between the non-vaccinated and vaccinated groups was observed in any age group. Conclusions: The results from this model suggest that HPV vaccination can protect females from HPV infection and CIN1 in Taiwan.
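The numerical machinery described in Methods, a compartmental SIR model integrated with fourth-order Runge-Kutta, can be sketched as follows. The study used R; this Python sketch shows only the scheme, and the transmission rate, recovery rate and 0.1% initial prevalence are placeholder assumptions, not the Taiwanese estimates. Vaccination could be represented by moving a fraction of the susceptible compartment directly to the immune (recovered) compartment.

```python
def sir_derivatives(state, beta, gamma):
    """dS/dt, dI/dt, dR/dt for a basic SIR model in proportions."""
    s, i, r = state
    return (-beta * s * i, beta * s * i - gamma * i, gamma * i)

def rk4_step(state, dt, beta, gamma):
    """Advance the system one step with fourth-order Runge-Kutta."""
    f = lambda st: sir_derivatives(st, beta, gamma)
    k1 = f(state)
    k2 = f(tuple(x + 0.5 * dt * k for x, k in zip(state, k1)))
    k3 = f(tuple(x + 0.5 * dt * k for x, k in zip(state, k2)))
    k4 = f(tuple(x + dt * k for x, k in zip(state, k3)))
    return tuple(x + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

def simulate(beta=0.3, gamma=0.1, days=160, dt=0.5):
    """Integrate the SIR model from an assumed 0.1% initial prevalence."""
    state = (0.999, 0.001, 0.0)
    for _ in range(int(days / dt)):
        state = rk4_step(state, dt, beta, gamma)
    return state
```

Because the three derivatives sum to zero, S + I + R is conserved at every step, which is a useful sanity check on the integrator.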
PRM87 Bayesian Methodology Considering Historical Data To Predict Overall Survival From Immature Trial Data In Oncology
Soikkeli F1, Hashim M2, Postma M3, Heeg B2
1University of Eastern Finland, Kuopio, Finland, 2Ingress Health, Rotterdam, The Netherlands, 3University of Groningen, Groningen, The Netherlands
Objectives: At the time of health technology assessment, overall survival (OS) data can be immature, resulting in greater uncertainty in long-term extrapolations and, consequently, in decision-making. The aim was to assess whether Bayesian methods using historical data could be used to predict long-term survival from immature trial data. Methods: First, two reports on different data cuts (30 and 50 months) of the VISTA trial were retrieved. In that trial, bortezomib (V) combined with melphalan and prednisone (MP) was compared with MP in patients with multiple myeloma (MM) in the first-line setting. Additionally, we retrieved another report in which mature OS data for MP in the first-line setting were reported (the historical trial). Second, individual participant data were extracted from the reported Kaplan-Meier curves. Third, we fitted parametric survival models in WinBUGS to the MP arm of the historical trial. The resulting shape coefficient was used as an informative prior for the shape coefficient of the combined parametric fit to the OS data from the VISTA trial. Non-informative to very informative priors were assessed. The analysis was based on the 30-month follow-up data, and fit was assessed against the 50-month follow-up data; the best fit was defined as the one minimizing the area under the curve (AUC) difference between the observed and fitted curves. Results: Compared with extrapolations using non-informative priors, extrapolations with the informative priors resulted in a 13% and 14% decrease in the AUC difference for the MP and VMP arms, respectively. With the very informative priors, an 18% and 16% decrease in the AUC difference for the MP and VMP arms, respectively, was observed. Conclusions: The Bayesian approach using historical trial data can be used to more accurately predict overall survival from immature survival data. In future, the findings need to be confirmed in other datasets.

PRM88 Probabilistic Sensitivity Analysis In Health Economic Models: How Many Simulations Should We Run?
Hatswell AJ1, Bullement A1, Paulden M2, Stevenson M3
1BresMed Health Solutions, Sheffield, UK, 2University of Alberta, Edmonton, AB, Canada, 3University of Sheffield, Sheffield, UK
Objectives: Probabilistic sensitivity analysis (PSA) addresses the parameter uncertainty inherent in a decision problem and provides the most accurate incremental cost-effectiveness ratio (ICER) for non-linear models. The literature on the number of simulations required suggests a “sufficient number”, or running until “convergence”, which is seldom defined. Some methods to assess convergence of the mean ICER (such as “jackknifing”) exist, but are rarely used. In this study, we aim to define convergence and use empirical data to propose simulation numbers for different outcomes. Methods: 250,000 individual simulations were drawn from a variety of cohort models (n=30) constructed in different disease areas. Random samples were drawn to simulate 1,000 PSAs using up to 25,000 simulations per PSA. We identified the number of simulations required for the mean values of the PSAs to be reasonably close to the ‘true’ results, defined as the mean results of the 250,000 individual simulations. Convergence at the edges of the distribution was also analysed. The outcomes considered were costs, quality-adjusted life years (QALYs), incremental net monetary benefit (INMB) and the ICER. Results: Costs, QALYs and INMB were within 1% of the mean by 1,000 simulations in over 98% of scenarios. At 1,000 simulations, the ICER was within ±£500 of the ‘true’ ICER in 94.7% of scenarios, increasing to 98.6% at 5,000 simulations and 99.3% at 10,000 simulations. The edges of the distributions took longer to stabilise in all cases, not stabilising within 1% until over 10,000 simulations in nearly 90% of scenarios. Conclusions: Health economic models have sufficient complexity that low numbers of samples cannot guarantee accurate results. We recommend 10,000 simulations be used for decision-making, which gives suitably stable mean results without excessive computational burden, unless convergence before this point has been demonstrated, either via repeated sampling or jackknifing.
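The repeated-sampling convergence check mentioned in the conclusions can be sketched as follows. This is an illustrative implementation, not the authors' code: `psa_mean_icer` computes the ICER of the mean increments (the appropriate PSA summary for non-linear models), and `converged` resamples the PSA draws with replacement to ask whether the mean ICER stays within a tolerance, set here to ±£500 to match the threshold used in the Results.

```python
import random

def psa_mean_icer(inc_costs, inc_qalys):
    """ICER of the mean increments: mean incremental cost divided by
    mean incremental QALYs (not the mean of per-draw ratios)."""
    return (sum(inc_costs) / len(inc_costs)) / (sum(inc_qalys) / len(inc_qalys))

def converged(inc_costs, inc_qalys, n_resamples=200, tolerance=500.0, seed=0):
    """Repeated-sampling convergence check: resample the PSA draws with
    replacement and test whether the mean ICER stays within +/- tolerance."""
    rng = random.Random(seed)
    point = psa_mean_icer(inc_costs, inc_qalys)
    n = len(inc_costs)
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        icer = psa_mean_icer([inc_costs[i] for i in idx],
                             [inc_qalys[i] for i in idx])
        if abs(icer - point) > tolerance:
            return False
    return True
```

With only a few dozen PSA draws the resampled ICER typically wanders well beyond such a tolerance, which is the behaviour behind the recommendation of 10,000 simulations above.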
PRM89 Approaches To Modelling The Cost-Effectiveness Of Interventions For Heart Failure: A Systematic Review
Sadler S1, Watson L1, Crathorne L1, Green C2
1University of Exeter, Exeter, UK, 2Exeter University, Exeter, UK
Objectives: To review modelling methods used to assess the cost-effectiveness of interventions for heart failure (HF). Methods: A systematic search of the literature up to September 2016 was conducted across the Medline, Embase, Cochrane Library, EconLit and CINAHL databases. We included studies that reported a model-based evaluation, including both costs and health impacts, of an HF intervention. Studies reporting only cost-effectiveness analyses alongside a clinical trial were excluded. Results: We identified 54 publications describing 52 economic models associated with HF interventions. The model-based evaluations comprised surgical (n=20), medical (n=16), service-level (e.g. telehealth, specialist clinics) (n=9) or screening-/monitoring-type interventions (n=4), or assessed disease management (n=2); one study compared multiple interventions. The most common modelling framework was a Markov cohort method (n=41), with models predominantly representing disease progression via New York Heart Association (NYHA) grade or using a simple two-state (alive/dead) structure. Several studies additionally included transition states for hospitalisation events. Two studies adapted the Markov cohort approach for sub-group analyses using risk equations. Eight studies reported a patient-level discrete event simulation approach, and four studies were decision trees. Key structural inputs to model development were the data used to model mortality and to predict hospital admissions. Conclusions: A range of modelling approaches have been used successfully to assess the cost-effectiveness of HF interventions. Whilst the simple Markov cohort approach appears appropriate for the decision problem stated in most cases (i.e. estimating cost-effectiveness), other methods have been used to good effect. To date, modelling has not addressed the specific nature of the impact of HF on quality of life/wellbeing, other than via use of NYHA grade and/or the impact of hospital admissions.
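The simple two-state (alive/dead) Markov cohort structure described in the Results can be sketched as follows; the cycle length, discount rate, transition probability, cost and utility values are illustrative placeholders, not values from any reviewed model.

```python
def markov_cohort(p_die, cycle_cost, cycle_utility, n_cycles=40, discount=0.035):
    """Minimal alive/dead Markov cohort model: accumulate discounted
    costs and QALYs over yearly cycles for a cohort starting alive."""
    alive = 1.0
    total_cost = total_qalys = 0.0
    for cycle in range(n_cycles):
        df = 1.0 / (1.0 + discount) ** cycle   # discount factor this cycle
        total_cost += alive * cycle_cost * df
        total_qalys += alive * cycle_utility * df
        alive *= 1.0 - p_die                   # transition to the dead state
    return total_cost, total_qalys
```

Evaluating the model under intervention and comparator parameter sets and dividing the incremental cost by the incremental QALYs yields the ICER; the NYHA-based models reviewed above extend this structure by splitting the alive state into one state per NYHA grade, with hospitalisation sometimes added as a further transition state.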
Future modelling may further consider this through the use of natural history health states informed by outcome measures commonly used in HF.

PRM90 Transparency Matters: Use Of Digital Software Platform In Replicating Systematic Reviews
Husain F, Sayre T, Feinman T, Fazeli MS, Kuang Y, del Aguila MA
Doctor Evidence, Santa Monica, CA, USA
Objectives: Organizations that submit systematic reviews (SRs) and network meta-analyses (NMAs) to regulatory agencies, health technology assessment bodies, and guideline groups often employ manual methods and technologies that obscure the decisions made and limit replicability. This research assessed the transparency of manual methods in comparison with software-assisted replication and representation of SRs. Methods: We considered four SRs, from three non-governmental agencies and one governmental agency in the US and Europe, covering multiple myeloma (MM), plaque psoriasis (PPs), type 2 diabetes mellitus (T2DM) and multiple sclerosis (MS). Text descriptions of the inclusion criteria (patients, interventions, comparators, outcomes) and meta-analysis methods were used to recreate each SR with software-assisted extraction and analysis tools that record and store the decisions made at each step of the analyses. We then compared our results with those of each SR. Results: Seventy-seven studies were included in the original SRs: MM=9, MS=33, PPs=28, T2DM=7. The replication SR analysis identified 80 studies: MM=11, MS=34, PPs=28, T2DM=7. Efficacy rankings, point estimates and numbers of studies differed between the published and recreated SRs. For example, in the PPs NMA, the PASI 75 ranking for infliximab