Contemporary Clinical Trials 33 (2012) 364–368
doi:10.1016/j.cct.2011.10.012
Use of interactive telephone technology for longitudinal data collection in a large trial

Charlotte Russell a,b,⁎, Denise Howel c, Martin P. Ward-Platt d, Helen L. Ball a,b

a Parent-Infant Sleep Lab, Durham University, Queen's Campus, University Boulevard, Thornaby, Stockton-on-Tees, TS17 6BH, United Kingdom
b Medical Anthropology Research Group, Department of Anthropology, Durham University, Dawson Building, South Road, Durham, DH1 3LE, United Kingdom
c Institute of Health & Society, Newcastle University, Newcastle upon Tyne, NE2 4AX, United Kingdom
d Newcastle Neonatal Service, Ward 35 Royal Victoria Infirmary, Newcastle upon Tyne, NE1 4LP, United Kingdom

⁎ Corresponding author at: Parent-Infant Sleep Lab, Durham University, Queen's Campus, University Boulevard, Thornaby, Stockton-on-Tees, TS17 6BH, United Kingdom. Tel.: +44 191 334 0794; fax: +44 191 334 0249. E-mail address: [email protected] (C. Russell).


Article history: Received 16 August 2011; Revised 19 October 2011; Accepted 31 October 2011; Available online 11 November 2011.

Keywords: Interactive telephone technology; Data collection; Patient-reported data; Longitudinal data; Survey methods

Abstract

We report here on the use of interactive telephone technology for collecting longitudinal data in a large randomized non-blinded parallel trial. Data were primarily collected via an automated interactive telephone system which enabled data to be downloaded by researchers periodically via a secure website. Alternative methods were used by some participants to provide data; here we analyze the demographic profiles of groups by preferred data provision method, and consider the cost-effectiveness and efficiency of such a system. The automated telephone system was used to provide the majority of the data obtained (75.7%); however, the group preferring to use this system to provide the majority of their data was, on the whole, older, more likely to be married, university educated, on a higher income, and white compared with participants preferring to submit their data via personal phone call or post. We conclude that interactive telephone technology provides a means by which large quantities of longitudinal data may be collected efficiently. Depending on the target population, however, considerable staff time may be required to manage attrition and consequent data loss, and alternative strategies should be considered to minimize this.

1. Introduction

Longitudinal research methods enable clinicians and academics to conduct health surveillance programs and to study behavior and the ongoing effects of health-promoting interventions. When participants are asked to report events on a daily or weekly basis over a number of months, the data collection method needs to maximize accuracy and response rates while minimizing costs and the burden on participants. Unfortunately these are often mutually contradictory aims. Potential methods include home or telephone interviews, postal or web-based questionnaires at varying intervals, or diaries collected at the end of a given period.


Where the data required are restricted to a limited number of closed questions, an automated interactive telephone-to-web data collection system may be efficient and cost-effective. We designed and tested such a system for collecting follow-up data in a large randomized trial in the North East of England. Having previously assessed its feasibility [1], we report here on the usability and practicality of the system, the characteristics of the respondents, and the quality of the data obtained.

The primary outcome measures for the North-East Cot Trial (NECOT) required us to obtain prospective data on infant feeding and sleeping practices each week for 26 consecutive weeks. With 1071 trial participants, this potentially involved the administration of almost 28,000 infant care questionnaires, with each participant answering up to 11 questions per questionnaire.


We required a low-cost and efficient method to capture and organize this large amount of data. It is becoming increasingly common to offer alternative ways of collecting data: mixed-mode surveys [2] offer alternatives either initially or after non-response. Although mixed-mode surveys typically result in a higher overall response rate, there are potential limitations if participants respond differently to different survey modes [3]. Although we aimed to obtain the majority of responses via the interactive telephone system, we offered participants alternative ways of providing follow-up data, and we have compared the characteristics of participants using each method.

2. Methods

NECOT participants were recruited at routine antenatal clinics at the Royal Victoria Infirmary (RVI), Newcastle. Women were approached in person by trial staff recruiting at the clinic between the hours of 9.30 am and 4.30 pm, Monday to Friday. As far as possible all women were approached; recruiters introduced themselves and the NECOT trial and provided women attending their routine 12 week scan with a patient information leaflet (PIL). Women were approached again at the time of their 20 week routine scan and asked if they would be willing to participate in the trial. Women who had been missed by recruiters previously, or who had not previously attended an appointment at the clinic, were at this stage provided with an enrollment pack containing the PIL, enrollment form and consent form, along with a freepost envelope enabling them to return the forms if they wished to participate.

Between January 2008 and March 2009, 3453 women were assessed for eligibility; 2221 were excluded between this stage and randomization owing to failure to meet the inclusion criteria (sufficient English comprehension to understand patient information materials; singleton pregnancy; intention to deliver at the RVI; not having decided against breastfeeding), declining to participate, non-return of enrollment forms, or withdrawal before randomization. Further checks took place just prior to randomization, at 32–34 weeks gestation, to confirm that the participant was still in the area and that the pregnancy was ongoing. Participants were randomized into intervention and control groups prior to delivery and received standard care or the intervention condition on the postnatal ward [4]. The intervention condition involved provision of a 'side-car', or 'clip-on', crib [bassinet] upon arrival on the postnatal ward following delivery, for use until discharge from the postnatal ward. The control condition was provision of a stand-alone cot [bassinet], as per standard care at the RVI.

Following hospital discharge, participants received weekly postal delivery of a question-card to their home address (Fig. 1) prompting them to call the automated telephone service. Printed on the card were the free-phone number for the service, an individual study ID number and week number (1–26), and the study questions, which included the option to request that a member of research staff contact them. Upon calling the free-phone number, participants were guided through data provision by an automated pre-recorded voice response system, and responses (all yes/no) were entered via the telephone keypad.

Fig. 1. Question postcard.
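The hosted voice-response service is not described beyond what is stated above, so the following is purely an illustrative sketch of the weekly call flow. The function names, prompts, and question wording are our assumptions, not the trial's.

```python
def take_call(read_digits, read_key):
    """Illustrative weekly IVR call: the caller keys in the study ID and
    week number printed on their postcard, then answers yes/no questions
    on the telephone keypad (here 1 = yes, 2 = no). `read_digits` and
    `read_key` stand in for the telephony platform's prompt-and-capture
    primitives, which the paper does not describe."""
    record = {
        "participant_id": read_digits("Please enter your study ID."),
        "week": int(read_digits("Please enter the week number.")),
    }
    questions = [
        # Up to 11 yes/no infant care items; this wording is invented.
        "Did your baby receive any breast milk this week?",
        "Would you like a member of the research team to contact you?",
    ]
    for question in questions:
        record[question] = read_key(question) == "1"
    return record
```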

Our previous pilot [1] and testing of the system indicated that calls took approximately one minute to complete. Postcards were mailed weekly by research staff, divided into two mailings per week, timed so that all participants would receive the card just prior to the target day for data provision. Calls (responses) made to the system were automatically sorted into daily databases by the service provider and made available to research staff via a secure website. Staff were free to download data at any time; in practice, data were downloaded weekly and copied into a master spreadsheet database.

Downloaded data were scrutinized by staff in order to identify missing weeks of data for individual participants. If a participant failed to provide data for two or more weeks, efforts were made by telephone or postal contact to re-engage them in the follow-up. If these efforts failed, and four or more consecutive weeks of data were missing, participants were deemed to have dropped out of the study and no further attempts were made to contact them.

Changes made subsequent to our pilot study [1] included the removal of two questions from the questionnaire, about contact with a health professional and return to work. Additionally, one part of a question was added (baby slept at side of bed) at the request of the ethics committee. We also provided alternative methods for submitting data, as described below.
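In practice this monitoring was done by staff scrutinizing the master spreadsheet; as a minimal sketch of the two-week re-engagement and four-consecutive-week dropout rules just described (the function and label names are ours):

```python
def follow_up_status(reported_weeks, current_week):
    """Classify one participant's follow-up by the rules described above:
    four or more consecutive missing weeks -> dropped out (no further
    contact attempts); two or more missing weeks -> attempt telephone or
    postal re-engagement; otherwise on track."""
    missing = {w for w in range(1, current_week + 1) if w not in reported_weeks}
    longest_run = run = 0
    for week in range(1, current_week + 1):
        run = run + 1 if week in missing else 0
        longest_run = max(longest_run, run)
    if longest_run >= 4:
        return "dropped out"
    if len(missing) >= 2:
        return "re-engage"
    return "on track"

# Example: by week 8, data received for weeks 1-3 and 6 only.
print(follow_up_status({1, 2, 3, 6}, current_week=8))  # -> "re-engage"
```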


2.1. Use of alternative methods for reporting data

In designing the data collection system it was recognized that not all participants would have access to a landline from which to provide their weekly data. Because of mobile network charges we were unable to provide a free-phone service to mobile phone users, so participants who provided no landline number at enrollment were offered the opportunity to record their weekly data via an alternative method. Participants could choose to: a) receive a weekly telephone call from a member of staff; b) indicate their responses on the question-postcard itself and return it in a freepost envelope provided weekly; or c) email their data directly to the research team. Although few participants chose an alternative method for data provision prior to commencing follow-up, all three of these options (particularly weekly staff calls) played an important role in reducing participant attrition and non-compliance during the follow-up phase.

The demographic characteristics of participants were compared across the three most common survey modes. Some women provided data by more than one method over the period of the study; we have therefore split participants into subgroups based on the method they used most often ('majority method': Auto System, Staff Phone, and Post). Those who provided fewer than 13 weeks of data have not been classified by method and are included in the 'Poor response' group, which comprises both those who never engaged with the data collection process and those who only managed to do so for a few weeks.

2.2. Statistical methods

Mean age was compared using an unpaired t-test, and categorical variables were compared via the chi-squared test.

3. Results

3.1. Automated system use and cost

A total of 1071 women were eligible for inclusion in the trial and 870 of these provided some follow-up data; 749 responded for 13 weeks or longer. Of a total of 19,183 weeks of valid data provided by all methods, 75.7% were submitted via the automated system. The cost of providing the system over the life of the follow-up period (21 months) was £6410 inclusive of VAT. Using this method we obtained 14,563 weeks of usable data (£0.44 per usable data week). In total, 775 participants used the system to provide at least one week of data, at a mean cost of £8.27 per participant and providing an average of 18.8 weeks of data. Staff time required for compiling and mailing postcards, and for downloading, sorting and checking data, amounted to 2 days per week.

3.2. Data quality

While the automated system was designed to be straightforward to use and thus reduce the potential for error, all datasets were checked by staff to identify incorrect or duplicate entries during the data sorting process. Owing to an error in the system which provided these data via a web interface, a large number of 'system duplicates' (n = 1178) occurred; a small software application was developed in-house to identify these so that they could be removed from the final database. Overall, the nature of the data ultimately extracted for analysis in this study (e.g. cessation of a particular behavior for two or more consecutive weeks) conferred a high tolerance for participant error, and indeed scrutiny of the dataset by research staff revealed a remarkably small number of anomalous entries: there were 34 instances of an incorrect ID or week number being provided (approximately 0.2% of the total entries submitted via the system) and 92 participant-duplicated week entries. However, the potential for, and implications of, participant entry error should be considered on a case-by-case basis in future studies employing this method.
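Neither the in-house de-duplication application nor the outcome extraction is described in implementation detail. The sketch below shows one way both steps could look; the CSV layout and all column names are assumed for illustration, and the handling of missing weeks is omitted for brevity.

```python
import pandas as pd

# Assumed export layout: one row per call, with a participant ID, a week
# number (1-26), a timestamp, and yes/no answer columns such as 'breastfed'.
calls = pd.read_csv("downloaded_weeks.csv")

# Collapse system duplicates and participant-duplicated weeks: keep the
# earliest submission for each participant-week pair.
calls = (calls.sort_values("timestamp")
              .drop_duplicates(subset=["participant_id", "week"], keep="first"))

def cessation_week(rows):
    """First week opening a run of two or more consecutive reported weeks
    without the behavior (here 'breastfed' == 0); NaN if no such run."""
    run_start, run_len = None, 0
    for _, row in rows.sort_values("week").iterrows():
        if row["breastfed"]:
            run_start, run_len = None, 0
        else:
            run_start = row["week"] if run_start is None else run_start
            run_len += 1
            if run_len >= 2:
                return run_start
    return float("nan")

outcomes = calls.groupby("participant_id").apply(cessation_week)
```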

3.3. Participant perspectives

To assess the usability of the system, a completion questionnaire was sent to all participants who completed follow-up (n = 764). We received completed questionnaires from 537 (a response rate of 70%), although not all respondents answered all questions. Most participants were happy to use the automated system, finding it easy to use (498 of 514 respondents, 97%), and appreciated receiving postcards each week as a prompt to call the service (520 of 523 respondents, 99%). Of 489 respondents, 191 (39%) indicated that they had received one or more calls from research staff reminding them to call their data in to the automated system. To assess whether these calls were considered helpful or intrusive, we asked participants to indicate their opinion on a Likert scale. The majority of respondents appreciated these calls, with 84% finding them helpful, 8% finding them less helpful or intrusive, and 7% feeling neutral.

Over 17 months of follow-up (October 2008 to February 2010), 597 participants (56% of the total eligible to participate in the follow-up phase) required some level of staff input, due to missing data, on 3179 individual occasions. The disparity between the number of participants who returned completion questionnaires indicating that they had been contacted by us and the number actually contacted by us reveals a large group of participants who did not complete the follow-up phase, mainly through never engaging or through withdrawal after providing some data. Unfortunately these participants remain unrepresented in our evaluation based on completion questionnaires.

3.4. Use of alternative methods for reporting data

Of a total of 19,183 weeks of data provided by all methods, 75.7% were submitted via the automated system, 19.9% were given in person to staff members over the telephone, 3.9% were sent in the post and 0.4% were sent via email. Note that a substantial proportion of participants required an initial contact by the research team at the start of their follow-up period (at which point early missing data were collected in person) to prompt their engagement with the automated system. As expected on the basis of our earlier pilot study [1], participants who preferred to engage in the follow-up process using the automated system differed from those preferring the alternative data collection methods. Table 1 summarizes demographic and trial allocation data for participants who used one of the three major methods (Auto System, Staff Phone, Post) to provide at least 13 weeks of data. Data from email submissions are not discussed owing to the small number of participants who used this method for the majority of their data (n = 3).


Table 1
Comparison of demographic characteristics by majority method of response.

                                  Auto System   Staff Phone   Post       Poor response   P value b   P value c   P value d
                                  (n = 609)     (n = 112)     (n = 25)   (n = 322) a
Mother's age — mean (SD)          32.3 (5)      28.9 (6)      28.4 (6)   28.8 (6)        <0.0001     0.0002      <0.0001
Lives alone (%)                   7             20            20         23              <0.001      0.014       <0.001
University level education (%)    59            43            36         31              0.002       0.03        <0.001
Ethnic group — white (%)          92            88            78         88              0.15        0.02        0.08
Household income (%)                                                                     <0.001      0.003       <0.001
  <£20k                           20            53            50         54
  £20–40k                         36            14            27         24
  >£40k                           44            33            23         22
First baby (%)                    47            46            56         42              0.84        0.70        0.075
Intervention arm (%)              49            53            52         49              0.44        0.79        0.899

a Women providing <13 weeks of follow-up data.
b P-value from comparison of the Staff Phone and Auto System subgroups.
c P-value from comparison of the Post and Auto System subgroups.
d P-value from comparison of the combined (Auto System + Staff Phone + Post) and 'Poor response' subgroups.
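As a rough check, the comparisons in Table 1 can be approximately reproduced from the summary statistics shown. This sketch back-calculates counts from the reported percentages and is not the trial's analysis code.

```python
from scipy import stats

# Unpaired t-test on mother's age (Auto System vs Staff Phone), run
# directly from the summary statistics in Table 1.
t, p = stats.ttest_ind_from_stats(mean1=32.3, std1=5, nobs1=609,
                                  mean2=28.9, std2=6, nobs2=112)
print(f"age: t = {t:.2f}, p = {p:.2g}")  # p < 0.0001, as reported

# Chi-squared test on 'lives alone', with counts approximated from the
# reported percentages (7% of 609 ~ 43; 20% of 112 ~ 22).
observed = [[43, 609 - 43], [22, 112 - 22]]
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"lives alone: chi2 = {chi2:.2f}, p = {p:.2g}")  # p < 0.001, as reported
```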

For comparison, details are also provided for participants who provided <13 weeks of data ('Poor response'). Participants using the automated system (Auto System) to submit the majority of their weekly data were more likely than those using the Staff Phone or Post option to be older, not living alone, university educated, on a higher income, and white. Those providing a poor response were generally younger, more likely to live alone, and on lower incomes. The highest proportion of non-white participants was seen in the Post group (22%, compared with 12% in the Staff Phone group and 8% in the Auto System group). The reason for this difference is unclear; it might be that the options using a phone presented a greater language barrier to participants from non-white backgrounds. There was no significant difference between the data collection methods in the proportion of women having their first vs a subsequent baby, or in allocation to the intervention vs control trial arm.

4. Discussion

The provision of an automated interactive telephone-to-web data collection system is an efficient method for collecting detailed longitudinal data over an extended time period, where the data can be expressed as simple categorical variables. For the majority of participants the system was easy and convenient to use; however, a large proportion of participants required some level of personal attention from the research team, predominantly because of lapsed calls or missing data. The time taken to complete the questionnaire via the automated system was minimal (approximately one minute per week) and it would be reasonable to conclude that this contributed to the acceptability of the system overall; a longer questionnaire might therefore result in reduced compliance.

It appears from these data that the provision of alternative methods for providing data was instrumental in reducing attrition within the younger age group. The difference in average age was quite small, but it reflects the fact that 8% of those in the Auto System group were aged <24 years, compared with 27% in the Staff Phone group and 11% of those who used Post as their majority method. A practical explanation for this is lack of access to free-phone calls from landlines, and the prevalence of mobile phones within younger cohorts.

Anecdotally, an additional factor appears to be a greater proportion of younger participants living in atypical or non-traditional households (for example, with parents or in-laws), possibly without consistent access to a landline and/or residence at the address to which their weekly postcard was sent. Recent research shows that younger age and lower socioeconomic status are clearly associated with mobile (cell) telephone-only households in the UK [5]. This supports our suggestion that lack of access to landlines and, by extension, free-phone calls may at least partially explain the demographic characteristics of our Staff Phone group. Recent work in the US [6] demonstrates that mobile-only adults differ from adults who also have landline phones in engaging in higher-risk behaviors (e.g. smoking and drinking), and concern about the impact of these differences on the representativeness of samples is increasing globally [7,8].

We were able to reduce attrition by offering alternative methods of providing follow-up data. However, this also introduced challenges. There is some evidence that participants may answer questions differently depending on the mode of administration [2,3,9], while others find minimal technology effects on data quality [10]. Since all the questions in this survey were simple, with yes/no responses, any differences in responses between survey modes due to interpreting the options visually rather than aurally, or to giving the more recently heard option, are likely to be small. However, the Auto System and Staff Phone modes may have differed in the responses gathered because of the social desirability of answers to questions relating to breastfeeding. Unfortunately, the relatively small numbers in the subgroups made it impossible to disentangle differences in responses to specific questions by survey mode from those associated with demographic characteristics.

Compared with in-person survey methods, the use of an automated system (and also of the alternative methods provided in this study) means that the identity of the responder may remain unverified. In the present study we did not check the identity of responders; neither did we test/retest, because a later retest would have been affected by recall bias.


Given our findings regarding the demographic characteristics of the participants within our sample who preferred to use alternative methods to report their data, we would recommend that future studies build in specifically targeted strategies (perhaps involving compensation for call costs, or other incentives) to engage this group. Considering this sub-sample is especially important given that its members may already be less likely to engage in research programs, and given the nature of the group in terms of health/risk behaviors. This study also had a reasonably high response rate, with 70% of eligible participants providing 13 or more weeks of data. Two factors may have been particularly influential here: the personal contact from research staff, both at recruitment and when participants were contacted following failure to submit weekly data; and the nature of the participant group, which was typical of UK breastfeeding women in terms of higher income, education and age. We recommend that researchers using this technology consider how best to use personal contact to enhance response rates, based on the particular characteristics of the population targeted.

We conclude that in studies involving the collection of a large quantity of longitudinal data, automated interactive telephone technology provides a cost-effective solution and facilitates a prospective design, incorporating a short recall interval, which would not be feasible using traditional survey methods. Such a system is straightforward for research staff to manage; however, considerable time might be required to minimize missing data, and this should be considered and factored into the project budget at the outset.

Acknowledgments

The NECOT project team comprised the authors plus Dawn Mee (data manager), Lyn Robinson and Catherine Taylor (recruitment and trial admin), and Lynne MacDonald (RVI postnatal ward manager).

References

[1] Ball HL. Bed-sharing practices of initially breastfed infants in the first 6 months of life. Infant Child Dev 2007;16:387–401.
[2] Dillman DA. Mail and internet surveys: the tailored design method. 2nd ed. New York: John Wiley & Sons, Inc.; 2000.
[3] Greene J, Speizer H, Wiitala W. Telephone and web: mixed-mode challenge. Health Serv Res 2008;43:230–48.
[4] Ball HL, Ward-Platt MP, Howel D, Russell C. Randomised trial of sidecar crib use on breastfeeding duration (NECOT). Arch Dis Child 2011;96:630–4.
[5] Ofcom. The communications market 2010: UK; 2010.
[6] Lee S, Brick JM, Brown ER, Grant D. Growing cell-phone population and noncoverage bias in traditional random digit dial telephone health surveys. Health Serv Res 2010;45:1121–39.
[7] O'Cathain A, Knowles E, Nicholl J. Testing survey methodology to measure patients' experiences and views of the emergency and urgent care system: telephone versus postal survey. BMC Med Res Methodol 2010;10.
[8] Dal Grande E, Taylor AW. Sampling and coverage issues of telephone surveys used for collecting health information in Australia: results from a face-to-face survey from 1999 to 2008. BMC Med Res Methodol 2010;10.
[9] Bowling A. Mode of questionnaire administration can have serious effects on data quality. J Public Health 2005;27:281–91.
[10] Bexelius C, Merk H, Sandin S, Nyren O, Kuhlmann-Berenzon S, Linde A, et al. Interactive Voice Response and web-based questionnaires for population-based infectious disease reporting. Eur J Epidemiol 2010;25:693–702.