Evaluation tools for assistive technologies: a scoping review


Journal Pre-proof

Evaluation tools for assistive technologies: a scoping review
Gordon Tao, MSc; Geoffrey Charm, BSc; Katarzyna Kabacińska, BSc; William C. Miller, PhD; Julie M. Robillard, PhD

PII: S0003-9993(20)30083-6
DOI: https://doi.org/10.1016/j.apmr.2020.01.008
Reference: YAPMR 57767
To appear in: ARCHIVES OF PHYSICAL MEDICINE AND REHABILITATION
Received Date: 12 December 2019
Accepted Date: 2 January 2020

Please cite this article as: Tao G, Charm G, Kabacińska K, Miller WC, Robillard JM, Evaluation tools for assistive technologies: a scoping review, ARCHIVES OF PHYSICAL MEDICINE AND REHABILITATION (2020), doi: https://doi.org/10.1016/j.apmr.2020.01.008. This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2020 Published by Elsevier Inc. on behalf of the American Congress of Rehabilitation Medicine


Gordon Tao, MSc1,2; Geoffrey Charm, BSc1,3; Katarzyna Kabacińska, BSc4; William C. Miller, PhD1,2; Julie M. Robillard, PhD4,5

1 GF Strong Rehabilitation Research Lab, Vancouver Coastal Research Institute, Vancouver, BC, Canada
2 Department of Occupational Science and Occupational Therapy, The University of British Columbia, Vancouver, BC, Canada
3 Department of Integrated Sciences, The University of British Columbia, Vancouver, BC, Canada
4 Division of Neurology, Department of Medicine, The University of British Columbia, Vancouver, BC, Canada
5 BC Women's and Children's Hospital, Vancouver, BC, Canada

Acknowledgements We thank Charlotte Beck, Health Sciences Librarian at the University of British Columbia, for her instrumental guidance in developing our search strategy. This scoping review was supported by AGE-WELL NCE Inc. (GT, GC, KK, WCM, JMR), a member of the Networks of Centres of Excellence program, and the Canadian Consortium on Neurodegeneration in Aging (JMR).

Corresponding Author
Assistant Professor of Neurology, The University of British Columbia
Scientist, Patient Experience, BC Children's & Women's Hospitals
4480 Oak Street, Vancouver BC V6H 3N1 CANADA
[T] 604-875-3923
[E] [email protected]


Abstract

Objective: Assistive technologies (ATs) support independence and well-being in people with cognitive, perceptual, and physical limitations. Given the increasing availability and diversity of ATs, evaluating the usefulness of current and emerging ATs is crucial for informed comparison. We aimed to chart the landscape and development of AT evaluation tools (ETs) across disparate fields in order to improve the process of AT evaluation and development.

Data Sources: We performed a scoping review of AT-ETs through database searching of MEDLINE, Embase, CINAHL, HaPI, PsycINFO, Cochrane Reviews, and Compendex, as well as citation mining.

Study Selection: Articles explicitly referencing AT-ETs were retained for screening. We included ETs if they were designed specifically to evaluate ATs.

Data Extraction: We extracted five attributes of AT-ETs: AT-category, construct evaluated, conceptual framework, type of end-user input used for AT-ET development, and presence of validity testing.

Data Synthesis: From screening 23 434 records, we included 159 AT-ETs. Specificity of tools ranged from single to general ATs across 40 AT-categories. Satisfaction, functional performance, and usage were the most common of the 103 constructs identified. We identified 34 conceptual frameworks across 53 ETs. Finally, 36% of tools incorporated end-user input and 80% showed validation testing.

Conclusions: We characterized a wide range of AT-categories with diverse approaches to their evaluation based on varied conceptual frameworks. Combining these frameworks in future AT-ETs may provide more holistic views of AT usefulness. AT-ET selection may be improved with guidelines for conceptually reconciling results of disparate AT-ETs. Future AT-ET development may benefit from more integrated approaches to end-user engagement.

Key Words: Self-Help Devices; Health Care Quality, Access, and Evaluation; Outcome Assessment (Health Care)

List of Abbreviations
AT – Assistive Technology
ET – Evaluation Tool
ISO – International Organization for Standardization
MeSH – Medical Subject Headings
ICF – International Classification of Functioning, Disability and Health
WHO – World Health Organization

Introduction

Assistive technologies (ATs) are products that assist with managing a broad range of health conditions and improve quality of life for end-users, e.g. older adults and persons with disabilities (Figure 1). ATs have the potential to facilitate self-care (1,2), reduce healthcare costs (3), and empower end-users (4). As such, ATs have diverse purposes and functions, ranging from devices that compensate for body function impairments to those that help with participation in social activities. Examples include sensory aids for sensory impairments (5), prostheses (6) and wheelchairs for mobility restrictions (7), telerehabilitation systems for barriers to care access (8), and social robots for psychological wellbeing (9).

Despite the rise in AT options, it remains challenging to ensure end-users gain awareness of available options, can make informed choices about AT, and are able to navigate the services that provide ATs. Even when AT is successfully obtained by end-users, AT abandonment remains a prevalent problem (10,11), leading to reduced benefits from AT provision. AT abandonment often involves low satisfaction with AT design or inability of the AT to meet end-users' specific needs or chosen activities (10,12). Therefore, evaluating the overall usefulness of AT from an end-user perspective is fundamental to ensuring that end-users benefit from their ATs and to avoiding abandonment. This imperative formed the principal motivation for this study.

Nielsen describes usefulness as the combination of utility, the degree to which a product meets one's needs, and usability, the easiness and pleasantness of using the product (13). For the purposes of this review, we used the term AT usefulness to represent an overarching construct by which end-users ascribe value to ATs. Therefore, uptake and ongoing use of ATs is contingent not only on their efficacy, but also on the myriad ways they may or may not be considered useful to end-users (14), e.g. in terms of satisfaction (15), quality (16), or associated stigma (17).

Such considerations are important for matching AT to individuals, for service delivery, and throughout the development process of ATs. Moreover, these design considerations are particularly important when developing ATs for people with multiple disabilities or comorbidities, as their needs may be complex and multidimensional. To address these needs, greater emphasis has been placed on ensuring end-user input is meaningfully incorporated into the design of ATs, with the belief that better AT evaluation during development leads to greater benefits, e.g. improved AT mechanics or user experience (18–20). This trend is consistent with broader calls for the integration of end-user perspectives in AT development through methodologies such as user-centered design and participatory research (21–23).

In both prototyping research and with final products, researchers often use structured interviews, focus groups, and ad-hoc questionnaires to evaluate end-user perspectives on AT usefulness (15,24,25). While these methods offer important insights, their lack of standardization limits comparisons across technologies. Moreover, these assessments can be time-consuming and resource-intensive, and often occur at only one time point in AT development; consequently, such approaches do not necessarily lead to successful integration of end-user perspectives when AT development requires multiple iterations.

Addressing these challenges, standardized evaluation tools (ETs) are a common approach to assessing aspects of AT usefulness. These tools have the advantage of being readily administered and allowing quantitative comparison across time points. However, ETs of ATs (AT-ETs) across AT fields can be heterogeneous; they may evaluate varied or overlapping constructs and may draw on theories, models, and frameworks from different research fields. Moreover, there is limited guidance on selecting the most appropriate ETs for AT development, especially for emerging ATs. Whereas fit and comfort may be of particular interest in prosthetics (26) and accessibility may be of greater concern in e-health (27), usability relates to all ATs. Furthermore, ETs generally require validation to clarify their suitability; some ETs may be thoroughly validated while others are not.

If ETs are to be used for the development of ATs, they too should be grounded in end-user priorities to ensure ATs are developed in alignment with end-users' needs and values. Such alignment may also support better service delivery post-development. However, the extent to which end-users' input has been incorporated into existing ETs is unclear, as is which of these needs and values may be transferable across AT fields. These considerations are particularly salient for older adults, as they represent a large population of AT users who may use multiple and diverse combinations of ATs. There also remains ambiguity in distinguishing concepts and considerations worth evaluating in ATs generally from those particular to specific ATs.

These gaps represent barriers to assessing the overall usefulness of ATs. As such, they hinder the advancement of knowledge necessary for reducing AT abandonment, addressing end-users' unmet needs, and enhancing the benefits gained by AT end-users. To date, the AT literature lacks a comprehensive overview of AT-ETs across the disparate AT fields that could raise, define, and ameliorate these gaps.

The objective of this scoping review was to expand the understanding of how ATs are evaluated across AT fields by characterizing the landscape of existing tools used to evaluate ATs in the peer-reviewed literature. In doing so, we aimed to describe how the overarching construct of AT usefulness is constituted in AT evaluation. We also aimed to examine the ways in which end-user perspectives are integrated into these ETs. Fulfilling these objectives would help us to better understand and operationalize the assessment of AT usefulness throughout the AT development process in a way that considers the layered needs and contexts of end-users.

To address these objectives, we asked the following research questions:
1) What tools have been specifically developed for the evaluation of assistive technologies?
2) What constructs are evaluated by these tools, and what conceptual frameworks do they rely on to do so?
3) To what extent do these tools include end-user input and demonstrate validation testing?

Methods

Study Design

To examine AT-ETs, we undertook a scoping review of the literature applying the framework of Arksey and O'Malley (28) and guidelines from the Joanna Briggs Institute (29). Considering the broad nature of ATs and the diverse ways and contexts in which they can be evaluated, a scoping review provided the flexibility to iteratively develop our search strategy. We based our search strategy on the International Organization for Standardization (ISO) 9999 definition of AT (Figure 1) and the following definition of ET:

Evaluation Tool: A quantitative instrument that directly assesses one or several of: (1) qualities of the AT, e.g. usability, ergonomics, aesthetics; (2) perceptions of users with respect to AT, e.g. satisfaction, confidence; (3) AT-specific skills, e.g. wheelchair skills; or (4) impacts of AT on users, e.g. quality of life, abilities, limitations.

While evaluation of ATs regularly involves using instruments that measure users' functional capacity, mental state, symptoms, and quality of life, we considered only those instruments explicitly developed for the context of using ATs.

Search Strategy

We developed a search strategy using Ovid MEDLINE (Supplemental S1). Our search logic targeted the intersection of the concepts ATs, ETs, rehabilitation, and older adults or people with disabilities. To construct the search strategy, we mapped keywords related to these concepts to Medical Subject Headings (MeSH) terms in MEDLINE. Associated keywords were added based on "scope notes". We iterated the search strategy through MeSH tree exploration and as we encountered new terms that fit our concepts. This process continued until no new MeSH terms were added. We then translated the MEDLINE search strategy into Embase (Ovid), CINAHL (EBSCOhost), PsycINFO (EBSCOhost), HaPI (Ovid), Cochrane Reviews (Ovid), and Compendex (Engineering Village). Search strategies across databases were iteratively adjusted as new terms arose. There were no restrictions on the search timeframe; however, search results were limited to English records.

Screening

Records from all databases were downloaded into EndNote (Clarivate Analytics, Jersey) for deduplication and screening. We removed duplicates matched on Title, Year, Author, or Volume. With verification, we removed remaining duplicates based on a Title and Year match.
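One reading of this two-pass matching can be sketched in a few lines. This is an illustrative sketch only: the record fields and the exact matching logic are assumptions, not EndNote's actual deduplication behaviour.

```python
def dedupe(records):
    """Two-pass duplicate removal: strict field match, then Title + Year."""
    seen, pass1 = set(), []
    for r in records:  # pass 1: match on Title, Year, Author, and Volume
        key = (r["title"].lower(), r["year"], r["author"], r["volume"])
        if key not in seen:
            seen.add(key)
            pass1.append(r)
    seen, unique = set(), []
    for r in pass1:  # pass 2 (verified manually in the review): Title + Year
        key = (r["title"].lower(), r["year"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

# Hypothetical records, for illustration only
records = [
    {"title": "Tool A", "year": 2010, "author": "Smith", "volume": 1},
    {"title": "Tool A", "year": 2010, "author": "Smith", "volume": 1},  # strict duplicate
    {"title": "Tool A", "year": 2010, "author": "Jones", "volume": 2},  # Title+Year duplicate
    {"title": "Tool B", "year": 2012, "author": "Lee", "volume": 3},
]
deduped = dedupe(records)  # 2 unique records remain
```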

For screening of titles and abstracts, three researchers each screened one-third of the database. A screening guide was created by the first author based on initial screening of 400 records. The screening guide was reviewed and discussed with the entire research team. Initial screening by the second and third researchers was conducted together with the first screener until >95% agreement was achieved. As screening continued, any ambiguities were resolved by the research team through discussion.

For each record, we screened the abstract only if the title referenced at least one type of AT. Records were retained for full-text screening if the abstract named or indicated use of a formal tool to evaluate AT. We used this criterion to exclude the majority of studies using ad-hoc questionnaires or interview methods to evaluate AT.

In the full text of retained records, we identified formally developed tools used to evaluate AT. For these tools, we used citation mining and targeted searches to access the tool itself, records detailing the development of the tool, and records demonstrating validation testing (validity or reliability) of the tool. Using these "source" documents, two researchers independently screened each tool for inclusion in our scoping review according to the following inclusion criteria: 1) met our definition of evaluation tool, 2) specifically developed to evaluate AT or in the context of AT, and 3) readily administered. We excluded tools if they: 1) were ad-hoc questionnaires, 2) focused on evaluating user characteristics instead of the AT, or 3) were surveys. We also included checklists and guidelines if they could be readily used as a list of criteria either met or unmet (i.e. could be easily scored). For tools where screeners disagreed, inclusion was determined in discussion with the entire team.

Data Extraction and Coding

To address the research questions of this scoping review, we extracted and coded data from the source documents of included ETs according to five a priori attributes: 1) AT-category, 2) construct evaluated, 3) conceptual framework, 4) end-user input, and 5) validation testing (Table 1). Extraction and coding (Supplemental S2A) were performed according to an extraction guide by the first author and a second researcher for 30% of ETs; codes were progressively compared, with initial differences and ambiguities discussed and resolved as they arose. Remaining ETs were extracted and coded by the first author and reviewed by the second researcher. The senior author also reviewed the coding scheme. Where possible, we used the ET's own terms as codes for each attribute. For articles using multiple words interchangeably to describe the constructs evaluated, we gave preference to the predominant term in the article. As the extraction process progressed, we relied more heavily on existing codes where terms were theoretically or practically identical, e.g. "upper extremity prosthesis" and "prosthetic arm" were both coded as "upper limb prostheses" while "limb prosthesis" remained separate.

For AT-categories, we also reviewed tools at the item level to determine whether ETs were applicable to broader AT categories or specific to narrower categories. For example, "I like how my prosthesis looks" applies to prostheses generally, but "I can easily put my shoe on my prosthesis" applies only to lower-limb prostheses.

We used conceptual framework as an umbrella term to capture the models, formal frameworks, and theories used to develop ETs. These conceptual frameworks were included if they were explicitly used to inform the conceptual content generation for the ET. Conversely, frameworks used primarily for guiding the structural organization of ET items (as opposed to content generation), e.g. Rasch analysis or Classical Test Theory, were not included.

End-user input was identified as any instance where end-users were provided with the opportunity to contribute to or provide opinions about the conceptual content, i.e. items, of the ET. Instances where data were collected from participants but did not allow for participant feedback were not considered as input.

Data Summary

Data were descriptively summarized and conceptually mapped to illustrate the landscape of ETs. We statistically summarized each attribute according to the frequency of ETs using Excel (Microsoft, Redmond WA, USA). We conceptually mapped end-user input vs validation testing by cross-tabulation. The remaining attributes were also cross-tabulated; notable trends were identified via visual inspection of a chord diagram.
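The end-user input vs validation cross-tabulation amounts to counting ETs in each cell of a 2x2 table. A minimal sketch of that counting step follows; the records and values are hypothetical, not the review's extracted data, and the review's own tabulation was done in Excel.

```python
from collections import Counter

# Hypothetical (input_type, validated) pairs, one per ET
ets = [
    ("interviews", True), ("none", True), ("focus groups", True),
    ("none", False), ("interviews", True),
]

# Cross-tabulate presence of end-user input against validation testing
crosstab = Counter((input_type != "none", validated) for input_type, validated in ets)
```

Each cell, e.g. `crosstab[(True, True)]`, then gives the number of ETs with both end-user input and validation evidence.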

We also categorized the identified AT-categories, conceptual frameworks, and constructs evaluated into larger thematic domains (Supplemental S2B) in order to chart broad groupings of AT-ETs. For AT-categories, we acknowledge that formal classification systems for AT exist (30,31); however, these taxonomies have not seen widespread adoption in the development of AT-ETs. Our thematic categorization aimed to chart broad groupings as they exist in the literature, without adhering to a specific taxonomy. Therefore, for AT-categories and conceptual frameworks, we made groupings that approximate fields of study. For constructs evaluated, we made groupings according to broadly related constructs.

Results

Our final database search identified 30 625 articles (Figure 2) on September 15, 2018. Figure 2 details the screening results. In total, we identified 308 unique ETs as candidates for inclusion. The first author evaluated the primary source(s) of each ET for inclusion. A second researcher independently reviewed each ET for inclusion, resulting in 93 (30%) ambiguous ETs being reviewed by four researchers on the team. Of the original evaluations, 18 (5.8%) inclusion decisions were changed. A total of 159 AT-ETs (Supplemental S3) were included in this scoping review (32–186).

Throughout the screening process, we encountered articles studying translations, adaptations, and modifications of some tools; we considered these as part of the same ET. However, we considered tools based on existing tools but given a different name as independent tools.

For complete lists and counts of extracted data and thematic domain organization, see Supplemental S4-6.

AT-Categories

Across all AT-ETs, we identified 40 categories of AT (Figure 3). These ranged from narrowly specific categories, e.g. manual wheelchairs (140), to broader categories, e.g. mobility aids (161), to ATs in general (74).

The most common categories of AT evaluated were Hearing Aid (20 ETs), General AT (19 ETs), and Lower Limb Prosthesis (17 ETs). Among the least common, 14, 9, and 5 AT-categories were evaluated by one, two, and three ETs, respectively. The AT-categories evaluated by one ET also ranged from narrow to general, e.g. Electronic Mobile Shower Commode (61) or Medical Devices (154).

Conceptual Frameworks

In total, we identified 34 conceptual frameworks (Figure 4), with 51 ETs (32%) explicitly pointing to one or more conceptual frameworks as the primary basis for the development of the tool. The most common framework, used in 11 ETs, was the International Classification of Functioning, Disability and Health (ICF) developed by the World Health Organization (187). Conversely, 18 frameworks each appeared in the development of a single ET, and 8 frameworks appeared twice.

We identified 12 ETs that cited particular outcomes, e.g. Health-Related Quality of Life (HRQL), as their conceptual basis. Concept definitions from the ISO were also used in the development of 3 ETs. Additionally, we identified 6 ETs developed using methodologies that inform the conceptual makeup of the tool, i.e. Grounded Theory, Delphi process, and Participatory Action Research. The development of one ET also referenced a past study as its conceptual framework (77).

A total of 69 (43%) ETs did not explicitly specify a particular conceptual framework as the basis for tool development. These tools often used a generalized literature review (185), clinical experience (88), or expert panels (123). Moreover, we coded 40 ETs as being based on existing ETs to develop their conceptual content. For example, some ETs selected items from multiple existing ETs (42,56,114) while others adapted more general tools to suit AT evaluation (78,188). Of these ETs, 7 were also explicitly aligned with at least one specific conceptual framework. We also identified 2 ETs provided by the AT industry, i.e. ETs developed by companies involved in AT production and development.

For 4 ETs, we were unable to determine whether a specific framework was defined in their development; the best sources we could access for these tools described only their content or validation.

253

End-user Input and Validation

254

Across all tools included in our scoping review, 57 (36%) involved some form of end-user

255

input during ET development while the remaining ETs did not specify any such input (Figure 5).

256

We identified 10 categories of end-user input, with the most common input involving

257

Interviews (18 ETs) and Focus Groups (10 ETs). Four types of end-user input appeared only

258

once.

259

For 127 (80%) ETs, our targeted searches found quantitative evidence of validation

260

(Figure 5). While we did not fully assess the degree or type of validation, the most common

261

evidence supporting validity was in the form of Cronbach’s alpha and test-retest reliability.

262

While identifying evidence for validation, we observed that some tools were only tested via

263

Cronbach’s alpha in a single context; other tools were thoroughly validated using multiple

264

measures and across multiple populations and languages.

265

For 5 ETs, measurement validation was not applicable; these ETs were in the form of

266

checklists or guidelines that could be used as checklists.

267

Constructs

In total, we identified 103 constructs evaluated by ETs (Figure 6). For 66 (42%) ETs, 2-9 constructs were coded. The most common constructs evaluated were Satisfaction (31 ETs), Functional Performance (28 ETs), and Usage (17 ETs); these constructs appeared independently in some ETs and together in others. For Functional Performance, we collapsed all instances where the performance of AT-related tasks, e.g. wheelchair transfer, was evaluated.

Among the least common, 58 constructs were evaluated once; 13 twice; 10 three times; and 6 four times. Of the singletons, some were novel constructs specific to particular AT-categories, e.g. Sense of Presence for the AT-category Virtual Rehabilitation. Other constructs were similar to more frequently coded constructs but remained conceptually distinct. For example, Discontinuance is closely related to Usage, yet is its conceptual inverse.

Conceptual Mapping

To explore the interrelationships between AT-categories, frameworks, and constructs evaluated, we charted the data in a chord diagram (Figure 7). This representation provides a visual means of identifying particularly frequent linkages between concepts. For visual clarity, we used a cut-off count of 4 or more pairings. The ICF and Matching Person and Technology were the only conceptual frameworks meeting this cut-off. Of 31 ETs evaluating Satisfaction, 17 pointed to no specific framework. Of 28 tools evaluating Functional Performance, 11 pointed to no specific framework and 9 were based on existing tools. For the complete cross-tabulation and chord diagram, see Supplemental S7-10.
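The cut-off filtering behind this kind of chord diagram amounts to counting attribute pairings per ET and keeping those appearing four or more times. The following is a sketch under assumed data structures, with hypothetical coded values, not the analysis actually performed for this review.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded ETs, for illustration only
ets = 4 * [{"category": "Hearing Aid", "framework": None, "constructs": ["Satisfaction"]}]
ets.append({"category": "Manual Wheelchair", "framework": "ICF", "constructs": ["Usage"]})

def pairing_counts(ets):
    """Count how many ETs link each pair of coded attributes."""
    counts = Counter()
    for et in ets:
        labels = {et["category"], *et["constructs"]}
        if et["framework"]:
            labels.add(et["framework"])
        for pair in combinations(sorted(labels), 2):
            counts[pair] += 1
    return counts

def frequent_pairs(counts, cutoff=4):
    """Keep only linkages meeting the chord-diagram cut-off."""
    return {pair: n for pair, n in counts.items() if n >= cutoff}

frequent = frequent_pairs(pairing_counts(ets))
```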

Hearing Aid appeared to have the most frequently consistent linkages. Common constructs evaluated included AT-Skills, Benefit, Functional Performance, Satisfaction, and Usage. Of the 20 Hearing Aid ETs, 13 pointed to no specific framework. Furthermore, all ETs evaluating the construct Benefit were associated with Hearing Aid ETs, save for one ET evaluating Cochlear Implant.

Thematic Categorization

To summarize each ET attribute, we categorized data into thematic domains (Table 2). AT-categories were grouped into 8 domains, with the majority of ETs evaluating "Mobility Aids". Conceptual frameworks were grouped into 8 domains; of the ETs that explicitly incorporated a framework, the most common domain was "Health Frameworks". Moreover, ETs for "Mobility Aids" were most commonly developed using "Health Frameworks" (10 ETs). Frameworks related to "Technology Design" were exclusively associated with "Assistive Robots" (5 ETs) and "Assistive Media" (4 ETs), while "AT Frameworks" primarily informed "General AT" (7 ETs) and "Mobility Aids" (4 ETs). Constructs were grouped into 9 domains; the most commonly evaluated constructs related to "Functioning & Activities".

Discussion

AT evaluation is the process of determining the degree to which ATs successfully promote participation, remedy impairments, and mitigate health-related limitations, i.e. determining overall AT usefulness. In this process, AT-ETs provide quantitative evidence that supports the innovation of novel AT products, identifies limitations and potential improvements for existing ATs, and aids the matching to and procurement of ATs by end-users. As such, readily and appropriately administered AT-ETs benefit all AT stakeholders, including AT developers, researchers, clinicians, and end-users.

Our scoping review characterized ETs specifically developed for AT contexts according to five attributes: construct(s) evaluated, applicable AT-category, explicit conceptual framework, end-user input, and validation testing. Our results demonstrate that AT-ETs occupy a diverse landscape of AT-categories, evaluate a wide range of constructs, and draw on disparate conceptual frameworks. Moreover, our scoping review includes ETs from the earlier periods of AT evaluation. While this perspective lends greater representation to AT fields with longer histories, it allows us to highlight the overall conceptual footprint of AT-ETs across time.

Conceptual Frameworks

Conceptual frameworks underpin particular approaches to AT evaluation. They highlight the constructs to be evaluated as well as the relationships between constructs; they determine how these constructs and ET results are interpreted; and they influence how end-user input is incorporated into ET development. Awareness of these conceptual frameworks helps inform our understanding of how overall AT usefulness is characterized in the AT literature. We identified conceptual frameworks representing a variety of perspectives on AT evaluation, from Psychology and Social Science theories (142) to end-user-oriented methodologies (109) to frameworks specific to AT (14). However, these frameworks were sporadically employed. Two frameworks stood out as approaches for holistically evaluating and selecting appropriate AT for end-users: the Human Activity Assistive Technology model (8,189) and the Matching Person and Technology assessment process (190). These frameworks (models, more specifically) have proven useful in conceptually guiding research and practice (189,191) and appear broadly applicable across AT-categories. Yet, only 8 ETs were based on these frameworks. Similarly, the ICF, which aims to provide a comprehensive framework for health and disability, was the most common conceptual framework, but formed the basis of only 11 ETs spanning two AT domains: "General AT" and "Mobility Aids". Half of the identified frameworks were employed only once. These results indicate a potential gap between theoretical conceptions of AT and applied methods for assessing overall AT usefulness.

Conversely, the diverse domains of conceptual frameworks we identified highlight opportunities for applying new and improved approaches to AT evaluation. For example, "Technology Design" frameworks such as the Technology Acceptance Model (192) and the Unified Theory of Acceptance and Use of Technology (193), used to develop ETs for "Assistive Robots" and "Assistive Media", may be useful for informing future ETs of limb prostheses as they become more technologically sophisticated. Such opportunities may be leveraged to better support holistic research and development of ATs as multi- or transdisciplinary endeavours aimed at meeting end-users' unmet needs. Moreover, these opportunities for conceptual cross-application may apply across multiple AT domains we identified.

AT-Categories

For ATs, we identified 7 distinct thematic domains (and an “Other AT” domain). However, within these domains, four specific AT-categories represented over 40% of the 159 AT-ETs identified: Hearing Aid (domain of “Hearing Amplification Devices”), General AT, Lower Limb Prosthesis (domain of “Orthotics & Limb Prosthetics”), and Manual Wheelchair (domain of “Mobility Aids”). As such, these AT-categories appear to be nodal points of reference in AT evaluation. They indicate particularly influential sub-fields of AT literature that shape how AT usefulness is broadly examined. ETs for more recently emergent ATs, i.e. the domain of “Assistive Digital Media and Communication”, appeared more widely distributed. This may simply reflect the broad applications of digital media as AT. Moreover, the definitions and boundaries of these AT-categories continue to evolve as digital technology rapidly advances (194,195). Ensuring that tools used for assessing AT usefulness can keep pace with such a dynamic domain may then be an important avenue of future research.

Of the 7 thematic domains for AT-categories, “Hearing Amplification Devices” emerged as particularly distinct. ETs in this domain represented a fifth of AT-ETs identified, yet also appear relatively homogenous. This may be explained by a long and prodigious history, with the oldest included ET dating from 1984 (91). These ETs consistently evaluate AT in similar ways, i.e. most commonly in terms of Satisfaction, Usage, Functional Performance, and Benefit. While these ETs were usually based on existing tools (99) and literature review (85), they were rarely situated in an explicit conceptual framework. This combination of long history and consistency is highlighted by the strong representation and linkages related to Hearing Aid in Figure 3. Moreover, these characteristics underscore the historical contingencies of AT evaluation, where historically distinct fields may conceptualize comparable constructs according to their own traditions. For example, Benefit as a construct capturing the holistic impact of AT appears primarily in the domain of “Hearing Amplification Devices” and originates from early ET development in the 1980s–90s (89,91,104), whereas ETs in other domains consider the combination of “Quality of Life” and “Functional Performance” (122,196). While this well-established approach to assessing the overall AT usefulness of “Hearing Amplification Devices” may be sufficient, the field may benefit from other evaluation approaches that integrate better with the broader ecosystem of ATs. For example, User-Experience (159) may be a more relevant aspect of AT usefulness now that hearing aids have smartphone integration.

Constructs

The most frequently evaluated construct, Satisfaction, was also evaluated in a wide range of AT-categories, from Wheelchairs (148) to Telehealth (185), and scopes, from Orthopaedic Shoes (167) to General AT (76). However, the vast majority of AT-ETs evaluating Satisfaction appeared unassociated with any formal conceptual framework. This disconnect suggests the construct of Satisfaction may not be equivalent across AT domains. User satisfaction has been widely studied across multiple fields and is conceptually integrated into frameworks such as Nielsen’s usability heuristics (13). Considering this hierarchy of satisfaction as a component of usability, it was interesting to see Satisfaction (31 ETs) represented far more often than Usability (11 ETs) as the primary construct evaluated, even when compared to the closely related constructs of Usability, User-Experience, and Ease of Use combined (21 ETs total). This disconnect further highlights a gap between usability research and AT evaluation. Future AT-ETs aimed at user-centred design may benefit from directly incorporating usability frameworks.

Usage, Functional Performance, and AT-Skills were also common constructs, with the latter two being consolidations of various AT-specific tasks and skills. The construct of Usefulness itself was evaluated in only 5 ETs. Less common constructs (1-10 ETs) were also widely distributed across the identified thematic domains. Some of these were distinct, e.g. Stigma in the domain “Attitudes” or Learnability in the domain “User Experience”. Other constructs were closely related, e.g. Discontinuance and Usage, or Utility, Practicality, and Convenience. The latter examples highlight ambiguity in the extent to which ETs evaluating similar constructs may or may not be interchangeable.

Considering the diverse conceptual frameworks that inform many of the evaluated constructs, there is little guidance on the structural relationships between constructs evaluated across different ETs and how we may amalgamate such evidence. For example, how might we compare the overall AT Usefulness of two Lower Limb Prostheses when the evidence describes, in one instance, Functional-Mobility and Use-related-factors according to the PRECEDE framework (127) and, in another instance, Functional-Performance and Utility according to a HRQL framework (124)? Such ambiguity in the interchangeability of and relationships amongst evaluated constructs can make ET selection challenging. This challenge is compounded by the heterogeneity with which constructs are evaluated across ETs even within a single domain: some ETs measure a single construct, e.g. Satisfaction (129), while others measure multiple sub-constructs as part of a broader construct, e.g. Satisfaction, Activity Restriction, and Psychosocial Adjustment (130). Selecting and interpreting ETs of different scopes, i.e. “General AT” vs more specific domains, faces similar challenges. The former may be more appropriate for broader contexts where multiple types of AT are considered, while the latter can provide more AT-specific detail. However, combining both may require clearly situating overlapping constructs, e.g. Quality of Life (124,190). While addressing such considerations will be contingent on the conventions of a given AT domain, e.g. “Assistive Media” vs “Oral Health”, greater consolidation and standardization of constructs may support easier selection of ETs and interpretation of their results. The taxonomy of AT outcomes by Jutai and colleagues aims to provide clarity in these respects (14). However, it has not seen widespread adoption.

End-user Input and Validation

Across the 159 ETs included, the vast majority demonstrated some degree of validation testing. While we made no systematic assessment of these tests, standard measures of internal consistency and reliability were commonly used (197). Conversely, only a little over a third of ETs involved end-user input in their development. This low proportion may be partly explained by a quarter of ETs being based on existing tools. Furthermore, not all AT evaluation contexts require end-user input, e.g. measuring Ambulation.

However, for assessment of end-user perspectives on ATs (existing or during development), it may be desirable to use AT-ETs developed in explicit alignment with end-user priorities. Of the end-user input we observed, Interviews and Focus Groups were the most popular. Other forms of input ranged from less involved General Feedback to more integrated Delphi (129) and Mixed Methods (148) studies. End-user input for ET development exists on a broader spectrum of end-user engagement in the process of AT development, from tokenistic participation (198) on one end to participatory research (199) and co-creation (200) on the other. The latter approaches not only support the moral imperative of democratizing research (201) but also allow for novel opportunities to address end-users’ unmet assistive needs and preempt potential issues leading to AT abandonment. This is echoed in the growing body of guidelines for such user-centred approaches (202–204). For example, recent work by Robillard and colleagues outlines a framework for ethical adoption of AT for dementia (205). AT stakeholders should consider the strengths and limitations of these approaches to incorporating end-user perspectives when considering AT-ETs. Our results suggest the landscape of AT-ETs has considerable room for engaging end-users through more integrated approaches to AT-ET development.

Study Limitations

Our screening strategy required abstracts to explicitly reference AT-ETs, meaning some ETs may have been overlooked. However, more popular ETs were likely captured in other records. Caution should be used in interpreting the representativeness of AT-specific ETs for AT evaluation generally. Some domains may tend to use non-AT-specific ETs for AT evaluation. For example, the System Usability Scale is commonly used to evaluate assistive robots (206). Moreover, the search strategy was aimed at identifying AT-ETs over a large timeframe. As such, the proportional representation of ETs and their attributes should also be interpreted with caution, e.g. the Hearing Aid ETs discussed earlier. Some ETs may have fallen out of use due to newer ETs or changing AT contexts such as evolving technology. Our results do not describe the extent of use or preference of ETs in current research or clinical practice. Furthermore, our data do not distinguish the propensity of an AT field to generate ETs from the size of the field in terms of overall research activity. Rather, our scoping review creates a transdisciplinary landscape within which guidance regarding the above considerations may be addressed in future work.

While we made efforts to precisely define and operationalize the attributes we extracted from ETs, their codings can be open to interpretation. For closely related constructs, the choice to retain separate codes or to collapse them into a single code is subjective. Therefore, the construct codings of this scoping review should be interpreted as indicative of trends in AT evaluation.

As this scoping review focused on peer-reviewed literature, it did not necessarily capture ETs used by AT professionals from other sources such as books and manuals, e.g. for augmentative and alternative communication (207,208). Future work may address this gap by better integrating such tools into the peer-reviewed literature.

Conclusions

By characterizing key attributes of AT-ETs across the landscape of various AT fields, this scoping review creates a conceptual nexus for understanding how ATs are evaluated in the literature. It provides a consolidated database of AT-ETs that represents a wide range of AT-categories and the multitudinous aspects by which overall AT usefulness may be constituted. These approaches delineate a rich landscape of inquiry and the many potential ways ATs can benefit people with disabilities. However, navigating the evidence for such benefits across AT-categories can be a daunting task, making it difficult to compare and judge AT usefulness. Moreover, the selection of appropriate AT-ETs remains a challenge. These challenges must also be considered when developing ATs and setting AT policy.

We identified opportunities to address these challenges by providing guidance on and advancing approaches to AT evaluation. Given the sporadic and heterogeneous application of conceptual frameworks in ET development, the conceptual positioning of evaluated constructs can be ambiguous. Future research should provide guidance on how to better situate such disparate ETs in contemporary contexts; this would ease the selection and interpretation of existing ETs. Such guidance may also help streamline AT selection and service delivery for both AT professionals and clients. Considering the low rate of ETs that include end-user input, bridging this gap through more integrated approaches to end-user engagement will help better align AT-ETs with end-user values. Moreover, consolidating multiple conceptual frameworks in developing such ETs may lead to a more holistic perspective of AT usefulness. Future ET development in this vein may better highlight end-user needs and perspectives that prevail across AT populations and categories. Likewise, ensuring linkage of AT evaluation with service delivery evaluation would further support more holistic AT interventions. Such a new generation of AT-ETs would help simplify the process of comparing, servicing, and developing ATs, thereby reducing AT abandonment and improving the benefit derived from ATs. The results of this scoping review provide a comprehensive starting point for pursuing these endeavours.

References

1. Freedman VA, Agree EM, Martin LG, Cornman JC. Trends in the Use of Assistive Technology and Personal Care for Late-Life Disability, 1992–2001. Gerontologist. 2006 Feb 1;46(1):124–7.

2. Forducey PG, Glueckauf RL, Bergquist T, Maheu MM, Yutsis M. Telehealth for Persons with Severe Functional Disabilities and their Caregivers: Facilitating Self-care Management in the Home Setting. Psychol Serv. 2012 May;9(2):144–62.

3. Mann WC, Ottenbacher KJ, Fraas L, Tomita M, Granger CV. Effectiveness of assistive technology and environmental interventions in maintaining independence and reducing home care costs for the frail elderly. A randomized controlled trial. Arch Fam Med. 1999 Jun;8(3):210–7.

4. Hutzler Y, Fliess O, Chacham A, Van den Auweele Y. Perspectives of Children with Physical Disabilities on Inclusion and Empowerment: Supporting and Limiting Factors. Adapt Phys Activ Q. 2002 Jul;19(3):300–17.

5. Dillon H. Hearing aids. Hodder Arnold; 2008 [cited 2018 Nov 14]. Available from: https://dspace.nal.gov.au/xmlui/handle/123456789/773

6. Lusardi MM, Jorge M, Nielsen CC. Orthotics and Prosthetics in Rehabilitation. Elsevier Health Sciences; 2013. 865 p.

7. Lange ML, Minkel J. Seating and wheeled mobility: a clinical resource guide. 2018.

8. Cook AM, Polgar JM. Assistive Technologies: Principles and Practice. Elsevier Health Sciences; 2014. 497 p.

9. Broekens J, Heerink M, Rosendal H. Assistive social robots in elderly care: a review. Gerontechnology. 2009;8(2):94–103.

10. Biddiss E, Chau T. Upper-Limb Prosthetics: Critical Factors in Device Abandonment. American Journal of Physical Medicine & Rehabilitation. 2007 Dec;86(12):977.

11. Federici S, Borsci S. Providing assistive technology in Italy: the perceived delivery process quality as affecting abandonment. Disability & Rehabilitation: Assistive Technology. 2016 Jan;11(1):22–31.

12. Kittel A, Marco AD, Stewart H. Factors influencing the decision to abandon manual wheelchairs for three individuals with a spinal cord injury. Disability and Rehabilitation. 2002 Jan 1;24(1–3):106–14.

13. Nielsen J. Usability Engineering. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.; 1993.

14. Jutai JW, Fuhrer MJ, Demers L, Scherer MJ, DeRuyter F. Toward a Taxonomy of Assistive Technology Device Outcomes. American Journal of Physical Medicine & Rehabilitation. 2005 Apr;84(4):294.

15. McCreadie C, Tinker A. The acceptability of assistive technology to older people. Ageing & Society. 2005 Jan;25(1):91–110.

16. Wessels R, Dijcks B, Soede M, Gelderblom GJ, De Witte L. Non-use of provided assistive technology devices, a literature overview. Technology and Disability. 2003 Jan 1;15(4):231–8.

17. Parette P, Scherer M. Assistive Technology Use and Stigma. Education and Training in Developmental Disabilities. 2004;39(3):217–26.

18. Cowan RE, Fregly BJ, Boninger ML, Chan L, Rodgers MM, Reinkensmeyer DJ. Recent trends in assistive technology for mobility. Journal of NeuroEngineering and Rehabilitation. 2012 Apr 20;9(1):20.

19. Orpwood R, Chadd J, Howcroft D, Sixsmith A, Torrington J, Gibson G, et al. Designing technology to improve quality of life for people with dementia: user-led approaches. Univ Access Inf Soc. 2010 Aug 1;9(3):249–59.

20. LeRouge C, Ma J, Sneha S, Tolle K. User profiles and personas in the design and development of consumer health technologies. International Journal of Medical Informatics. 2013 Nov 1;82(11):e251–68.

21. Stojmenova E, Imperl B, Žohar T, Dinevski D. Adapted User-Centered Design: A Strategy for the Higher User Acceptance of Innovative e-Health Services. Future Internet. 2012 Aug 27;4(3):776–87.

22. Poulson D, Richardson S. USERfit – a framework for user centred design in assistive technology. Technology and Disability. 1998 Jan 1;9(3):163–71.

23. Magnier C, Thomann G, Villeneuve F, Zwolinski P. Methods for designing assistive devices extracted from 16 case studies in the literature. Int J Interact Des Manuf. 2012 May 1;6(2):93–100.

24. Broadbent E, Stafford R, MacDonald B. Acceptance of Healthcare Robots for the Older Population: Review and Future Directions. International Journal of Social Robotics. 2009;1(4):319–30.

25. Eysenbach G, Köhler C. How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews. BMJ. 2002 Mar 9;324(7337):573–7.

26. Berke GM, Fergason J, Milani JR, Hattingh J, McDowell M, Nguyen V, et al. Comparison of satisfaction with current prosthetic care in veterans and servicemembers from Vietnam and OIF/OEF conflicts with major traumatic limb loss. J Rehabil Res Dev. 2010;47(4):361–71.

27. Goldberg L, Lide B, Lowry S, Massett HA, O’Connell T, Preece J, et al. Usability and Accessibility in Consumer Health Informatics: Current Trends and Future Challenges. American Journal of Preventive Medicine. 2011 May 1;40(5, Supplement 2):S187–97.

28. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005 Feb 1;8(1):19–32.

29. The Joanna Briggs Institute. Joanna Briggs Institute Reviewers’ Manual: 2015 edition/supplement. Australia: The Joanna Briggs Institute; 2015. 1–24 p.

30. International Organization for Standardization. ISO 9999:2016. Assistive products for persons with disability: classification and terminology [Internet]. 2016. Available from: https://www.iso.org/standard/60547.html

31. Lenker JA, Shoemaker LL, Fuhrer MJ, Jutai JW, Demers L, Tan CH, et al. Classification of assistive technology services: Implications for outcomes research. Technology and Disability. 2012 Jan 1;24(1):59–70.

32. Hanspal RS, Fisher K, Nieveen R. Prosthetic socket fit comfort score. Disability and Rehabilitation. 2003 Nov 18;25(22):1278–80.

33. Smarr C-A, Mitzner TL, Beer JM, Prakash A, Chen TL, Kemp CC, et al. Domestic Robots for Older Adults: Attitudes, Preferences, and Potential. International Journal of Social Robotics. 2014 Apr;6(2):229–47.

34. Heerink M, Kröse B, Evers V, Wielinga B. Assessing Acceptance of Assistive Social Agent Technology by Older Adults: the Almere Model. International Journal of Social Robotics. 2010 Dec;2(4):361–75.

35. Louie W-YG, McColl D, Nejat G. Acceptance and Attitudes Toward a Human-like Socially Assistive Robot by Older Adults. Assistive Technology. 2014 Jul 3;26(3):140–50.

36. Heerink M, Krose B, Evers V, Wielinga B. Measuring acceptance of an assistive social robot: a suggested toolkit. In: RO-MAN 2009 – The 18th IEEE International Symposium on Robot and Human Interactive Communication. Toyama, Japan: IEEE; 2009 [cited 2019 Apr 5]. p. 528–33. Available from: http://ieeexplore.ieee.org/document/5326320/

37. de Witte AMH, Hoozemans MJM, Berger MAM, van der Slikke RMA, van der Woude LHV, Veeger DHEJ. Development, construct validity and test–retest reliability of a field-based wheelchair mobility performance test for wheelchair basketball. Journal of Sports Sciences. 2018 Jan 2;36(1):23–32.

38. Ghossaini S, Spitzer J, Borik J. Use of the Bone-Anchored Cochlear Stimulator (Baha) and Satisfaction among Long-Term Users. Seminars in Hearing. 2010 Feb;31(01):003–14.

39. Dutt SN. The Birmingham bone anchored hearing aid programme: some audiological and quality of life outcomes. Nijmegen, Netherlands: Radboud University Nijmegen; 2002.

40. Dockx K, Alcock L, Bekkers E, Ginis P, Reelick M, Pelosin E, et al. Fall-Prone Older People’s Attitudes towards the Use of Virtual Reality Technology for Fall Prevention. Gerontology. 2017;63(6):590–8.

41. Bennett RJ, Jayakody DMP, Eikelboom RH, Taljaard DS, Atlas MD. A prospective study evaluating cochlear implant management skills: development and validation of the Cochlear Implant Management Skills survey. Clinical Otolaryngology. 2016;41(1):51–8.

42. Palmieri M, Berrettini S, Forli F, Trevisi P, Genovese E, Chilosi AM, et al. Evaluating Benefits of Cochlear Implantation in Deaf Children With Additional Disabilities. Ear and Hearing. 2012;33(6):721–30.

43. Esser-Leyding B, Anderson I. EARS® (Evaluation of Auditory Responses to Speech): An Internationally Validated Assessment Tool for Children Provided with Cochlear Implants. ORL. 2012;74(1):42–51.

44. Amann E, Anderson I. Development and validation of a questionnaire for hearing implant users to self-assess their auditory abilities in everyday communication situations: the Hearing Implant Sound Quality Index (HISQUI). Acta Oto-Laryngologica. 2014 Sep;134(9):915–23.

45. Hinderink JB, Krabbe PFM, Van Den Broek P. Development and application of a health-related quality-of-life instrument for adults with cochlear implants: The Nijmegen Cochlear Implant Questionnaire. Otolaryngol Head Neck Surg. 2000 Dec 1;123(6):756–65.

46. O’Neill C, Lutman ME, Archbold SM, Gregory S, Nikolopoulos TP. Parents and their cochlear implanted child: questionnaire development to assess parental views and experiences. International Journal of Pediatric Otorhinolaryngology. 2004 Feb;68(2):149–60.

47. Samuel V, Gamble C, Cullington H, Bathgate F, Bennett E, Coop N, et al. Brief Assessment of Parental Perception (BAPP): Development and validation of a new measure for assessing paediatric outcomes after bilateral cochlear implantation. International Journal of Audiology. 2016 Nov;55(11):699–705.

48. King N, Nahm EA, Liberatos P, Shi Q, Kim AH. A New Comprehensive Cochlear Implant Questionnaire for Measuring Quality of Life After Sequential Bilateral Cochlear Implantation. Otology & Neurotology. 2014 Mar;35(3):407–13.

49. Berman MI, Buckey JC Jr, Hull JG, Linardatos E, Song SL, McLellan RK, et al. Feasibility Study of an Interactive Multimedia Electronic Problem Solving Treatment Program for Depression: A Preliminary Uncontrolled Trial. Behav Ther. 2014 May;45(3):358–75.

50. Corrigan PJ, Basket RM, Farrin AJ, Mulley GP, Heath MR. The development of a method for functional assessment of dentures. Gerodontology. 2002;19(1):41–5.

51. Komagamine Y, Kanazawa M, Kaiba Y, Sato Y, Minakuchi S, Sasaki Y. Association between self-assessment of complete dentures and oral health-related quality of life. Journal of Oral Rehabilitation. 2012;39(11):847–57.

52. Pace-Balzan A, Cawood JI, Howell R, Lowe D, Rogers SN. The Liverpool Oral Rehabilitation Questionnaire: a pilot study. Journal of Oral Rehabilitation. 2004 Jun;31(6):609–17.

53. Heikkinen K, Suomi R, Jääskeläinen M, Kaljonen A, Leino-Kilpi H, Salanterä S. The Creation and Evaluation of an Ambulatory Orthopedic Surgical Patient Education Web Site to Support Empowerment. CIN: Computers, Informatics, Nursing. 2010 Sep;28(5):282–90.

54. Boyer C, Selby M, Scherrer J-R, Appel RD. The Health On the Net Code of Conduct for medical and health Websites. Computers in Biology and Medicine. 1998 Sep 1;28(5):603–10.

55. Minervation. LIDA Instrument [Internet]. 2006 [cited 2019 Apr 5]. Available from: http://www.minervation.com/category/lida/

56. Nahm ES, Resnick B, Mills ME. Development and pilot-testing of the perceived health Web Site usability questionnaire (PHWSUQ) for older adults. Stud Health Technol Inform. 2006;122:38–43.

57. Caboral-Stevens M, Whetsell MV, Evangelista LS, Cypress B, Nickitas D. U.S.A.B.I.L.I.T.Y. Framework for Older Adults. Res Gerontol Nurs. 2015 Dec;8(6):300–6.

58. Bol N, van Weert JCM, de Haes HCJM, Loos EF, de Heer S, Sikkel D, et al. Using Cognitive and Affective Illustrations to Enhance Older Adults’ Website Satisfaction and Recall of Online Cancer-Related Information. Health Communication. 2014 Aug 9;29(7):678–88.

59. Quamar A. Systematic Development and Test-Retest Reliability of The Electronic Instrumental Activities of Daily Living Satisfaction Assessment (EISA) Outcome Measure [Internet]. Pennsylvania, USA: University of Pittsburgh; 2018 [cited 2019 Apr 6]. Available from: http://d-scholarship.pitt.edu/34211/

655 656 657

60. Tam C, Rigby P, Ryan SE, Campbell KA, Steggles E, Cooper BA, et al. Development of the measure of control using electronic aids to daily living. Technology & Disability. 2003 Dec;15(3):181–90.

658 659 660

61. Friesen EL, Theodoros DG, Russell TG. Development, Construction, and Content Validation of a Questionnaire to Test Mobile Shower Commode Usability. Top Spinal Cord Inj Rehabil. 2015 Winter;21(1):77–86.

661 662

62. Riemer-Reiss ML, Wacker RR. Factors associated with assistive technology discontinuance among individuals with disabilities. Journal of Rehabilitation. 2000 Jul;66(3):44–50.

663 664 665

63. Scherer MJ, Cushman LA. Measuring subjective quality of life following spinal cord injury: a validation study of the assistive technology device predisposition assessment. Disability and Rehabilitation. 2001 Jan;23(9):387–93.

666 667

64. Agree EM, Freedman VA. A Quality-of-Life Scale for Assistive Technology: Results of a Pilot Study of Aging and Technology. Phys Ther. 2011 Dec;91(12):1780–8.

668 669 670

65. Jalil S, Myers T, Atkinson I. A Meta-Synthesis of Behavioral Outcomes from Telemedicine Clinical Trials for Type 2 Diabetes and the Clinical User-Experience Evaluation (CUE). J Med Syst. 2015 Feb 13;39(3):28.

671 672 673

66. Roelands M, Van Oost P, Depoorter A, Buysse A. A social-cognitive model to predict the use of assistive devices for mobility and self-care in elderly people. Gerontologist. 2002 Feb;42(1):39–50.

674 675 676

67. Mortenson WB, Demers L, Fuhrer MJ, Jutai JW, Lenker J, DeRuyter F. Development and Preliminary Evaluation of The Caregiver Assistive Technology Outcome Measure. Journal of Rehabilitation Medicine (Stiftelsen Rehabiliteringsinformation). 2015 May;47(5):412–8.

677 678

68. Andrich R, Ferrario M, Moi M. A model of cost-outcome analysis for assistive technology. Disability and Rehabilitation. 1998 Jan;20(1):1–24.

679 680 681

69. Ryan SE, Campbell KA, Rigby PJ. Reliability of the Family Impact of Assistive Technology Scale for Families of Young Children With Cerebral Palsy. Archives of Physical Medicine and Rehabilitation. 2007 Nov;88(11):1436–40.

682 683 684

70. Wessels R, de Witte L, Andrich R, Ferrario M, Persson J, Oberg B, et al. IPPA, a user-centred approach to assess effectiveness of assistive technology provision. Technology & Disability. 2000 Oct;13(2):105–15.

685 686 687 688

71. Desideri L, Roentgen U, Hoogerwerf E-J, de Witte L. Recommending assistive technology (AT) for children with multiple disabilities: A systematic review and qualitative synthesis of models and instruments for AT professionals. Technology & Disability. 2013 Feb;25(1):3– 13.

689 690 691

72. Sund T, Brandt A, Anttila H, Iwarsson S. Psychometric properties of the NOMO 1.0 tested among adult powered-mobility users. Canadian Journal of Occupational Therapy. 2017 Feb;84(1):34–46.

692 693

73. Smith RO. OTFACT: multi-level performance-oriented software with an assistive technology outcomes assessment protocol. Technology & Disability. 2002 Jun;14(3):133–9.

694 695

74. Jutai J, Day H. Psychosocial Impact of Assistive Devices Scale (PIADS). Technology and Disability. 2002;14:107–11.

696 697 698 699

75. Koumpouros Y, Papageorgiou E, Karavasili A. Development of a New Psychometric Scale (PYTHEIA) to Assess the Satisfaction of Users with Any Assistive Technology. In: Duffy, VG and Lightner, N, editor. Advances in Human Factors and Ergonomics in Healthcare. 2017. p. 343–53. (Advances in Intelligent Systems and Computing; vol. 482).

700 701 702

76. Demers L, Weiss-Lambrou R, Ska B. The Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0): An overview and recent progress. Technology and Disability. 2002 Jan 1;14(3):101–5.

703 704

77. Andrich R. The SCAI instrument: measuring costs of individual assistive technology programmes. Technology & Disability. 2002 Jun;14(3):95–9.

705 706 707

78. Jette AM, Slavin MD, Ni P, Kisala PA, Tulsky DS, Heinemann AW, et al. Development and initial evaluation of the SCI-FI/AT. The Journal of Spinal Cord Medicine. 2015 May;38(3):409–18.

708 709 710

79. Arthanat S, Bauer SM, Lenker JA, Nochajski SM, Wu YWB. Conceptualization and measurement of assistive technology usability. Disability and Rehabilitation: Assistive Technology. 2007 Jan 1;2(4):235–48.

711 712 713

80. Ryan SE, Klejman S, Gibson BE. Measurement of the product attitudes of youth during the selection of assistive technology devices. Disability & Rehabilitation: Assistive Technology. 2013 Jan;8(1):21–9.

714 715

81. Cox RM, Alexander GC. The Abbreviated Profile of Hearing Aid Benefit: Ear and Hearing. 1995 Apr;16(2):176–86.

716 717 718

82. Wong LLN, Hang N. Development of a Self-Report Tool to Evaluate Hearing Aid Outcomes Among Chinese Speakers. Journal of Speech, Language, and Hearing Research. 2014 Aug;57(4):1548–63.

719 720 721

83. Dillon H, James A, Ginis J. Client Oriented Scale of Improvement (COSI) and its relationship to several other measures of benefit and satisfaction provided by hearing aids. J Am Acad Audiol. 1997 Feb;8(1):27–43.

722 723

84. Cox RM, Alexander GC, Xu J. Development of the Device-Oriented Subjective Outcome (DOSO) Scale. Journal of the American Academy of Audiology. 2014 Sep 1;25(8):727–36.

724 725 726 727

85. Cienkowski KM, McHugh MS, McHugo GJ, Musiek FE, Cox RM, Baird JC. A computer method for assessing satisfaction with hearing aids: Un método computacional para evaluar la satisfacción en el uso de auxiliares auditivos. International Journal of Audiology. 2006 Jan;45(7):393–9.

728 729 730

86. Yueh B, McDowell JA, Collins M, Souza PE, Loovis CF, Deyo RA. Development and Validation of the Effectiveness of Auditory Rehabilitation Scale. Archives of Otolaryngology–Head & Neck Surgery. 2005 Oct 1;131(10):851.

87. Cox RM, Alexander GC. Expectations About Hearing Aids and Their Relationship to Fitting Outcome. Journal of the American Academy of Audiology. 2000;11(7):17.

88. Driscoll C, Chenoweth L. Amplification Use in Young Australian Adults with Profound Hearing Impairment. Asia Pacific Journal of Speech, Language and Hearing. 2007 Mar;10(1):57–70.

89. Gatehouse S. Glasgow Hearing Aid Benefit Profile: Derivation and Validation of a Client-Centered Outcome Measure for Hearing Aid Services. Journal of the American Academy of Audiology. 1999;10(2):24.

90. Vreeken HL, van Rens GHMB, Kramer SE, Knol DL, van Nispen RMA. Effects of a Dual Sensory Loss Protocol on Hearing Aid Outcomes: A Randomized Controlled Trial. Ear and Hearing. 2015 Aug;36(4):e166.

91. Walden BE, Demorest ME, Hepler EL. Self-Report Approach to Assessing Benefit Derived from Amplification. Journal of Speech, Language, and Hearing Research. 1984 Mar;27(1):49–56.

92. Korkmaz MH, Bayır Ö, Er S, Işık E, Saylam G, Tatar EÇ, et al. Satisfaction and compliance of adult patients using hearing aid and evaluation of factors affecting them. European Archives of Oto-Rhino-Laryngology. 2016 Nov;273(11):3723–32.

93. Gil-Gómez J-A, Manzano-Hernández P, Albiol-Pérez S, Aula-Valero C, Gil-Gómez H, Lozano-Quilis J-A. USEQ: A Short Questionnaire for Satisfaction Evaluation of Virtual Rehabilitation Systems. Sensors. 2017 Jul 7;17(7):1589.

94. Gil-Gómez J-A, Gil-Gómez H, Lozano-Quilis J-A, Manzano-Hernández P, Albiol-Pérez S, Aula-Valero C. SEQ: Suitability Evaluation Questionnaire for Virtual Rehabilitation Systems. Application in a Virtual Rehabilitation System for Balance Rehabilitation. In: Proceedings of the 7th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth ’13). Brussels, Belgium: ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering); 2013 [cited 2019 Apr 6]. p. 335–8. Available from: http://dx.doi.org/10.4108/icst.pervasivehealth.2013.252216

95. Gaine WJ, Smart C, Bransby-Zachary M. Upper Limb Traumatic Amputees: Review of prosthetic use. Journal of Hand Surgery. 1997 Feb;22(1):73–6.

96. Mills TL, Holm MB, Schmeler M. Test-Retest Reliability and Cross Validation of the Functioning Everyday With a Wheelchair Instrument. Assistive Technology. 2007 Jun 30;19(2):61–77.

97. Kazi R, Singh A, Cordova JD, Clarke P, Harrington K, Rhys-Evans P. A new self-administered questionnaire to determine patient experience with voice prostheses (Blom-singer valves). Journal of Postgraduate Medicine. 2005 Oct 1;51(4):253.

98. Dillon H, Birtles G, Lovegrove R. Measuring the Outcomes of a National Rehabilitation Program: Normative Data for the Client Oriented Scale of Improvement (COSI) and the Hearing Aid User’s Questionnaire (HAUQ). J Am Acad Audiol. 1999;10(2):67–79.

99. Hallam RS, Brooks DN. Development of the Hearing Attitudes in Rehabilitation Questionnaire (HARQ). British Journal of Audiology. 1996 Jan;30(3):199–213.

100. Cox R, Hyde M, Gatehouse S, Noble W, Dillon H, Bentler R, et al. Optimal outcome measures, research priorities, and international cooperation. Ear Hear. 2000;21(4 Suppl):106S-115S.

101. Kearns NT, Peterson JK, Smurr Walters L, Jackson WT, Miguelez JM, Ryan T. Development and Psychometric Validation of Capacity Assessment of Prosthetic Performance for the Upper Limb (CAPPFUL). Archives of Physical Medicine and Rehabilitation. 2018 Sep;99(9):1789–97.

102. West RL, Smith SL. Development of a hearing aid self-efficacy questionnaire. International Journal of Audiology. 2007 Jan;46(12):759–71.

103. Desjardins JL, Doherty KA. Do Experienced Hearing Aid Users Know How to Use Their Hearing Aids Correctly? American Journal of Audiology. 2009 Jun;18(1):69–76.

104. Cox RM, Gilmore C. Development of the Profile of Hearing Aid Performance (PHAP). Journal of Speech, Language, and Hearing Research. 1990 Jun;33(2):343–57.

105. Cox RM, Alexander GC. Measuring Satisfaction with Amplification in Daily Life: The SADL Scale. Ear and Hearing. 1999 Aug;20(4):306.

106. Korman M, Weiss PL, Kizony R. Living Labs: overview of ecological approaches for health promotion and rehabilitation. Disability and Rehabilitation. 2016 Mar 26;38(7):613–9.

107. Jerram JC, Purdy SC. Evaluation of hearing aid benefit using the Shortened Hearing Aid Performance Inventory. J Am Acad Audiol. 1997 Feb;8(1):18–26.

108. Ching TYC, Hill M. The Parents’ Evaluation of Aural/Oral Performance of Children (PEACH) Scale: Normative Data. Journal of the American Academy of Audiology. 2007 Mar 1;18(3):220–35.

109. Hayward DV, Ritter K, Grueber J, Howarth T. Outcomes That Matter for Children With Severe Multiple Disabilities who use Cochlear Implants: The First Step in an Instrument Development Process. Canadian Journal of Speech-Language Pathology & Audiology. 2013;37(1):58–69.

110. Pruitt SD, Varni JW, Setoguchi Y. Functional status in children with limb deficiency: Development and initial validation of an outcome measure. Archives of Physical Medicine and Rehabilitation. 1996 Dec;77(12):1233–8.

111. Pruitt SD, Varni JW, Seid M, Setoguchi Y. Prosthesis satisfaction outcome measurement in pediatric limb deficiency. Archives of Physical Medicine and Rehabilitation. 1997 Jul;78(7):750–4.

112. Callaghan B, Johnston M, Condie M. Using the theory of planned behaviour to develop an assessment of attitudes and beliefs towards prosthetic use in amputees. Disability and Rehabilitation. 2004 Jul 22;26(14–15):924–30.

113. Cairns N, Murray K, Corney J, McFadyen A. Satisfaction with cosmesis and priorities for cosmesis design reported by lower limb amputees in the United Kingdom: Instrument development and results. Prosthetics and Orthotics International. 2014 Dec;38(6):467–73.

114. Gailey RS, Roach KE, Applegate EB, Cho B, Cunniffe B, Licht S, et al. The Amputee Mobility Predictor: An instrument to assess determinants of the lower-limb amputee’s ability to ambulate. Archives of Physical Medicine and Rehabilitation. 2002 May;83(5):613–27.

115. Resnik L, Baxter K, Borgia M, Mathewson K. Is the UNB test reliable and valid for use with adults with upper limb amputation? Journal of Hand Therapy. 2013 Oct;26(4):353–9.

116. Virginia Wright F, Hubbard S, Jutai J, Naumann S. The prosthetic upper extremity functional index: Development and reliability testing of a new functional status questionnaire for children who use upper extremity prostheses. Journal of Hand Therapy. 2001 Apr;14(2):91–104.

117. Fisher K, Hanspal R. Body image and patients with amputations: does the prosthesis maintain the balance? Int J Rehabil Res. 1998 Dec;21(4):355–63.

118. Gardiner MD, Faux S, Jones LE. Inter-observer reliability of clinical outcome measures in a lower limb amputee population. Disability and Rehabilitation. 2002 Jan 1;24(4):219–25.

119. Houghton A, Allen A, Luff R, McColl I. Rehabilitation after lower limb amputation: A comparative study of above-knee, through-knee and Gritti—Stokes amputations. British Journal of Surgery. 1989;76(6):622–4.

120. Deathe AB, Miller WC. The L Test of Functional Mobility: Measurement Properties of a Modified Version of the Timed “Up & Go” Test Designed for People With Lower-Limb Amputations. Phys Ther. 2005 Jul 1;85(7):626–35.

121. Resnik L, Adams L, Borgia M, Delikat J, Disla R, Ebner C, et al. Development and Evaluation of the Activities Measure for Upper Limb Amputees. Archives of Physical Medicine and Rehabilitation. 2013 Mar;94(3):488-494.e4.

122. Hart DL. Orthotics and Prosthetics National Office Outcomes Tool (OPOT): Initial Reliability and Validity Assessment for Lower Extremity Prosthetics. JPO: Journal of Prosthetics and Orthotics. 1999 Oct;11(4):101.

123. Abu Osman NA, Eshraghi A, Gholizadeh H, Wan Abas WAB, Lechler K. Prosthesis donning and doffing questionnaire: Development and validation. Prosthetics and Orthotics International. 2017 Dec;41(6):571–8.

124. Legro MW, Reiber GD, Smith DG, del Aguila M, Larsen J, Boone D. Prosthesis evaluation questionnaire for persons with lower limb amputations: Assessing prosthesis-related quality of life. Archives of Physical Medicine and Rehabilitation. 1998 Aug;79(8):931–8.

125. Hafner BJ, Gaunaurd IA, Morgan SJ, Amtmann D, Salem R, Gailey RS. Construct Validity of the Prosthetic Limb Users Survey of Mobility (PLUS-M) in Adults With Lower Limb Amputation. Archives of Physical Medicine and Rehabilitation. 2017 Feb;98(2):277–85.

126. Franchignoni F, Monticone M, Giordano A, Rocca B. Rasch validation of the Prosthetic Mobility Questionnaire: A new outcome measure for assessing mobility in people with lower limb amputation. Journal of Rehabilitation Medicine. 2015;47(5):460–5.

127. Grisé M-CL, Gauthier-Gagnon C, Martineau GG. Prosthetic profile of people with lower extremity amputation: Conception and design of a follow-up questionnaire. Archives of Physical Medicine and Rehabilitation. 1993 Aug;74(8):862–70.

128. Hagberg K, Brånemark R, Hägg O. Questionnaire for Persons with a Transfemoral Amputation (Q-TFA): Initial validity and reliability of a new outcome measure. The Journal of Rehabilitation Research and Development. 2004;41(5):695.

129. Bilodeau S, Hébert R, Desrosiers J. Questionnaire sur la satisfaction des personnes amputées du membre inférieur face à leur prothèse: Développement et validation. Canadian Journal of Occupational Therapy. 1999 Feb;66(1):23–32.

130. Gallagher P, MacLachlan M. Development and psychometric evaluation of the Trinity Amputation and Prosthesis Experience Scales (TAPES). Rehabilitation Psychology. 2000;45(2):130–54.

131. Yodpijit N, Jongprasithporn M, Khawnuan U, Sittiwanchai T, Siriwatsopon J. Human-Centered Design of Computerized Prosthetic Leg: A Questionnaire Survey for User Needs Assessment. In: Ahram TZ, Falcão C, editors. Advances in Usability, User Experience and Assistive Technology. Springer International Publishing; 2019. p. 994–1005. (Advances in Intelligent Systems and Computing).

132. Boswell-Ruys CL, Harvey LA, Delbaere K, Lord SR. A Falls Concern Scale for people with spinal cord injury (SCI-FCS). Spinal Cord. 2010 Sep;48(9):704–9.

133. Gagnon D, Décary S, Charbonneau M-F. The Timed Manual Wheelchair Slalom Test: A Reliable and Accurate Performance-Based Outcome Measure for Individuals With Spinal Cord Injury. Archives of Physical Medicine and Rehabilitation. 2011 Aug;92(8):1339–43.

134. Sauret C, Bascou J, de Saint Rémy N, Pillet H, Vaslin P, Lavaste F. Assessment of field rolling resistance of manual wheelchairs. The Journal of Rehabilitation Research and Development. 2012;49(1):63.

135. Fliess-Douer O, van der Woude L, Vanlandewijck Y. Development of a new scale for perceived self-efficacy in manual wheeled mobility: A pilot study. Journal of Rehabilitation Medicine. 2011;43(7):602–8.

136. Harris F, Sprigle S, Sonenblum SE, Maurer CL. The Participation and Activity Measurement System: An example application among people who use wheeled mobility devices. Disability and Rehabilitation: Assistive Technology. 2010 Jan;5(1):48–57.

137. Auger C, Miller WC, Jutai JW, Tamblyn R. Development and feasibility of an automated call monitoring intervention for older wheelchair users: the MOvIT project. BMC Health Serv Res. 2015;15.

138. Kumar A, Schmeler MR, Karmarkar AM, Collins DM, Cooper R, Cooper RA, et al. Test-retest reliability of the functional mobility assessment (FMA): a pilot study. Disability & Rehabilitation: Assistive Technology. 2013 May;8(3):213–9.

139. Gollan EJ, Harvey LA, Simmons J, Adams R, McPhail SM. Development, reliability and validity of the Queensland Evaluation of Wheelchair Skills (QEWS). Spinal Cord. 2015 Oct;53(10):743–9.

140. Fliess-Douer O, Van Der Woude LHV, Vanlandewijck YC. Reliability of the Test of Wheeled Mobility (TOWM) and the Short Wheelie Test. Archives of Physical Medicine and Rehabilitation. 2013 Apr;94(4):761–70.

141. DiGiovine MM, Cooper RA, Boninger ML, Lawrence BM, VanSickle DP, Rentschler AJ. User assessment of manual wheelchair ride comfort and ergonomics. Archives of Physical Medicine and Rehabilitation. 2000 Apr;81(4):490–4.

142. Rushton PW, Miller WC, Kirby RL, Eng JJ, Yip J. Development and content validation of the Wheelchair Use Confidence Scale: A mixed-methods study. Disabil Rehabil Assist Technol. 2011;6(1):57–66.

143. Crytzer TM, Dicianno BE, Robertson RJ, Cheng Y-T. Validity of a Wheelchair Perceived Exertion Scale (Wheel Scale) for Arm Ergometry Exercise in People with Spina Bifida. Perceptual and Motor Skills. 2015 Feb;120(1):304–22.

144. Vereecken M, Vanderstraeten G, Ilsbroukx S. From “Wheelchair Circuit” to “Wheelchair Assessment Instrument for People With Multiple Sclerosis”: Reliability and Validity Analysis of a Test to Assess Driving Skills in Manual Wheelchair Users With Multiple Sclerosis. Archives of Physical Medicine and Rehabilitation. 2012 Jun;93(6):1052–8.

145. Dharne MG. Reliability and validity of Assistive Technology Outcome Measure (ATOM) version 2.0 for adults with physical disability using wheelchairs [Internet]. 2006 [cited 2019 Apr 6]. Available from: http://ubir.buffalo.edu/xmlui/handle/10477/49085

146. Kilkens OJ, Post MW, van der Woude LH, Dallmeijer AJ, van den Heuvel WJ. The wheelchair circuit: Reliability of a test to assess mobility in persons with spinal cord injuries. Archives of Physical Medicine and Rehabilitation. 2002 Dec;83(12):1783–8.

147. Kirby RL, Dupuis DJ, Macphee AH, Coolen AL, Smith C, Best KL, et al. The wheelchair skills test (version 2.4): measurement properties. Archives of Physical Medicine and Rehabilitation. 2004;85(5):794–804.

148. Mortenson WB, Miller WC, Miller-Pogar J. Measuring Wheelchair Intervention Outcomes: Development of the Wheelchair Outcome Measure. Disabil Rehabil Assist Technol. 2007 Sep;2(5):275–85.

149. Crane BA, Holm MB, Hobson D, Cooper RA, Reed MP, Stadelmeier S. Test-Retest Reliability, Internal Item Consistency, and Concurrent Validity of the Wheelchair Seating Discomfort Assessment Tool. Assistive Technology. 2005 Sep 30;17(2):98–107.

150. Barks L, Luther SL, Brown LM, Schulz B, Bowen ME, Powell-Cope G. Development and initial validation of the Seated Posture Scale. Journal of Rehabilitation Research and Development. 2015;52(2):201–10.

151. Cress ME, Kinne S, Patrick DL, Maher E. Physical Functional Performance in Persons Using a Manual Wheelchair. Journal of Orthopaedic & Sports Physical Therapy. 2002 Mar;32(3):104–13.

152. Askari S, Kirby RL, Parker K, Thompson K, O’Neill J. Wheelchair Propulsion Test: Development and Measurement Properties of a New Test for Manual Wheelchair Users. Archives of Physical Medicine and Rehabilitation. 2013 Sep;94(9):1690–8.

153. Stanley RK, Stafford DJ, Rasch E, Rodgers MM. Development of a functional assessment measure for manual wheelchair users. The Journal of Rehabilitation Research and Development. 2003;40(4):301.

154. Lesén E, Björholt I, Ingelgård A, Olson FJ. Exploration and preferential ranking of patient benefits of medical devices: a new and generic instrument for health economic assessments. International Journal of Technology Assessment in Health Care. 2017;33(04):463–71.

155. Loy JS, Ali EE, Yap KY-L. Quality Assessment of Medical Apps that Target Medication-Related Problems. Journal of Managed Care & Specialty Pharmacy. 2016 Oct;22(10):1124–40.

156. Anderson K, Burford O, Emmerton L. App Chronic Disease Checklist: Protocol to Evaluate Mobile Apps for Chronic Disease Self-Management. JMIR Research Protocols. 2016 Nov 4;5(4):e204.

157. Schnall R, Cho H, Liu J. Health Information Technology Usability Evaluation Scale (HealthITUES) for Usability Assessment of Mobile Health Technology: Validation Study. JMIR mHealth and uHealth. 2018 Jan 5;6(1):e4.

158. Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR mHealth and uHealth. 2015;3(1):e27.

159. Martínez-Pérez B, de la Torre-Díez I, Candelas-Plasencia S, López-Coronado M. Development and Evaluation of Tools for Measuring the Quality of Experience (QoE) in mHealth Applications. Journal of Medical Systems [Internet]. 2013 Oct [cited 2019 Apr 5];37(5). Available from: http://link.springer.com/10.1007/s10916-013-9976-x

160. Jutai JW. End-user Participation in Developing the Assistive Technology Outcomes Profile for Mobility (ATOP/M) (Thematic Session: End-user Participation in the Development of Assistive Device Assessments and Outcome Measures). In: Gelderblom, GJ and Soede, M and Adriaens, L and Miesenberger, K, editor. EVERYDAY TECHNOLOGY FOR INDEPENDENCE AND CARE. Assoc Advancement Assist Technol Europe; 2011. p. 1026–32. (Assistive Technology Research Series; vol. 29).

161. Magasi S, Wong A, Miskovic A, Tulsky D, Heinemann AW. Mobility Device Quality Affects Participation Outcomes for People With Disabilities: A Structural Equation Modeling Analysis. Archives of Physical Medicine and Rehabilitation. 2018 Jan;99(1):1–8.

162. Gray DB, Hollingsworth HH, Stark S, Morgan KA. A subjective measure of environmental facilitators and barriers to participation for people with mobility limitations. Disability and Rehabilitation. 2008 Jan;30(6):434–57.

163. Prajapati C, Watkins C, Cullen H, Orugun O, King D, Rowe J. The “S” test - a preliminary study of an instrument for selecting the most appropriate mobility aid. Clinical Rehabilitation. 1996 Nov;10(4):314–8.

164. Hermansson L, Fisher A, Bernspång B, Eliasson A-C. Assessment of Capacity for Myoelectric Control: a new Rasch-built measure of prosthetic hand control. Journal of Rehabilitation Medicine. 2004 Jan 1;1(1):1–1.

165. Preciado A, Del Río J, Lynch CD, Castillo-Oyagüe R. A new, short, specific questionnaire (QoLIP-10) for evaluating the oral health-related quality of life of implant-retained overdenture and hybrid prosthesis wearers. Journal of Dentistry. 2013 Sep;41(9):753–63.

166. van Netten J, Hijmans J, Jannink M, Geertzen J, Postema K. Development and reproducibility of a short questionnaire to measure use and usability of custom-made orthopaedic shoes. Journal of Rehabilitation Medicine. 2009;41(11):913–8.

167. Jannink M, de Vries J, Stewart R, Groothoff J, Lankhorst G. Questionnaire for usability evaluation of orthopaedic shoes: construction and reliability in patients with degenerative disorders of the foot. Journal of Rehabilitation Medicine. 2004 Nov 1;36(6):242–8.

168. Pröbsting E, Kannenberg A, Zacharias B. Safety and walking ability of KAFO users with the C-Brace® Orthotronic Mobility System, a new microprocessor stance and swing control orthosis. Prosthetics and Orthotics International. 2017 Feb;41(1):65–77.

169. Swinnen E, Lafosse C, Van Nieuwenhoven J, Ilsbroukx S, Beckwée D, Kerckhofs E. Neurological patients and their lower limb orthotics: An observational pilot study about acceptance and satisfaction. Prosthetics and Orthotics International. 2017 Feb;41(1):41–50.

170. Heinemann AW, Bode RK, O’Reilly C. Development and measurement properties of the Orthotics and Prosthetics Users’ Survey (OPUS): A comprehensive set of clinical outcome instruments. Prosthetics and Orthotics International. 2003 Dec;27(3):191–206.

171. Nilsson L, Eklund M, Nyberg P. Driving to Learn in a powered wheelchair: Inter-rater reliability of a tool for assessment of joystick-use. Australian Occupational Therapy Journal. 2011 Dec;58(6):447–54.

172. Hasdai A, Jessel AS, Weiss PL. Use of a computer simulator for training children with disabilities in the operation of a powered wheelchair. Am J Occup Ther. 1998 Mar;52(3):215–20.

173. Kenyon LK, Farris JP, Cain B, King E, VandenBerg A. Development and content validation of the power mobility training tool. Disability and Rehabilitation: Assistive Technology. 2018 Jan 2;13(1):10–24.

174. Larsson P, John MT, Nilner K, Bondemark L, et al. Development of an Orofacial Esthetic Scale in Prosthodontic Patients. The International Journal of Prosthodontics. 2010;23(3):8.

175. Perea C, Preciado A, Río JD, Lynch CD, Celemín A, Castillo-Oyagüe R. Oral aesthetic-related quality of life of muco-supported prosthesis and implant-retained overdenture wearers assessed by a new, short, specific scale (QoLDAS-9). Journal of Dentistry. 2015 Nov;43(11):1337–45.

176. Flores E, Tobon G, Cavallaro E, Cavallaro FI, Perry JC, Keller T. Improving patient motivation in game development for motor deficit rehabilitation. In: Proceedings of the 2008 International Conference in Advances on Computer Entertainment Technology - ACE ’08 [Internet]. Yokohama, Japan: ACM Press; 2008 [cited 2019 Apr 5]. p. 381. Available from: http://portal.acm.org/citation.cfm?doid=1501750.1501839

177. Boucher P, Atrash A, Kelouwani S, Honoré W, Nguyen H, Villemure J, et al. Design and validation of an intelligent wheelchair towards a clinically-functional outcome. Journal of NeuroEngineering and Rehabilitation. 2013;10(1):58.

178. McDonald R, Surtees R, Wirz S. A comparison between parents’ and therapists’ views of their child’s individual seating systems. International Journal of Rehabilitation Research. 2003 Sep;26(3):235–43.

179. Fife SE, Roxborough LA, Armstrong RW, Harris SR, Gregson JL, Field D. Development of a Clinical Measure of Postural Control for Assessment of Adaptive Seating in Children with Neuromotor Disabilities. Phys Ther. 1991 Dec 1;71(12):981–93.

180. Mills T, Holm MB, Trefler E, Schmeler M, Fitzgerald S, Boninger M. Development and consumer validation of the Functional Evaluation in a Wheelchair (FEW) instrument. Disability and Rehabilitation. 2002 Jan 1;24(1–3):38–46.

181. Moir L. Evaluating the Effectiveness of Different Environments on the Learning of Switching Skills in Children with Severe and Profound Multiple Disabilities. British Journal of Occupational Therapy. 2010 Oct;73(10):446–56.

182. Hirani SP, Rixon L, Beynon M, Cartwright M, Cleanthous S, Selva A, et al. Quantifying beliefs regarding telehealth: Development of the Whole Systems Demonstrator Service User Technology Acceptability Questionnaire. Journal of Telemedicine and Telecare. 2017 May;23(4):460–9.

183. Demiris G, Speedie S, Finkelstein S. A questionnaire for the assessment of patients’ impressions of the risks and benefits of home telecare. Journal of Telemedicine and Telecare. 2000 Oct;6(5):278–84.

184. Bakken S, Grullon-Figueroa L, Izquierdo R, Lee N-J, Morin P, Palmas W, et al. Development, Validation, and Use of English and Spanish Versions of the Telemedicine Satisfaction and Usefulness Questionnaire. Journal of the American Medical Informatics Association. 2006 Nov 1;13(6):660–7.

185. Yip MP, Chang AM, Chan J, MacKenzie AE. Development of the Telemedicine Satisfaction Questionnaire to evaluate patient satisfaction with telemedicine: a preliminary study. Journal of Telemedicine and Telecare. 2003 Feb;9(1):46–50.

186. Finkelstein SM, MacMahon K, Lindgren BR, Robiner WN, Lindquist R, VanWormer A, et al. Development of a remote monitoring satisfaction survey and its use in a clinical trial with lung transplant recipients. Journal of Telemedicine and Telecare. 2012 Jan;18(1):42–6.

187. World Health Organization. International classification of functioning, disability and health: ICF. Geneva: World Health Organization; 2001.

188. Egilson ST, Coster WJ. School function assessment: performance of icelandic students with special needs. Scandinavian Journal of Occupational Therapy. 2004 Nov 1;11(4):163–70.

189. Giesbrecht E. Application of the Human Activity Assistive Technology model for occupational therapy research. Australian Occupational Therapy Journal. 2013 Aug 1;60(4):230–40.

190. Scherer MJ, Craddock G. Matching Person & Technology (MPT) assessment process. Technology and Disability. 2002 Jan 1;14(3):125–31.

191. Brandt A, Iwarsson S, Ståhle A. Older people’s use of powered wheelchairs for activity and participation. J Rehabil Med. 2004 Mar;36(2):70–7.

192. Chuttur M. Overview of the Technology Acceptance Model: Origins, Developments and Future Directions. Sprouts: Working Papers on Information Systems. 2009;9(37):23.

193. Venkatesh V, Morris MG, Davis GB, Davis FD. User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly. 2003;27(3):425–78.

194. Shaw T, McGregor D, Brunner M, Keep M, Janssen A, Barnet S. What is eHealth (6)? Development of a Conceptual Model for eHealth: Qualitative Study with Key Informants. Journal of Medical Internet Research. 2017;19(10):e324.

195. Olla P, Shimskey C. mHealth taxonomy: a literature survey of mobile health applications. Health Technol. 2015 Apr 1;4(4):299–308.

196. Heinemann AW, Bode RK, O’Reilly C. Development and measurement properties of the Orthotics and Prosthetics User’s Survey (OPUS): A comprehensive set of clinical outcome instruments. 2003.

197. Sullivan GM. A Primer on the Validity of Assessment Instruments. J Grad Med Educ. 2011 Jun;3(2):119–20.

198. Hahn DL, Hoffmann AE, Felzien M, LeMaster JW, Xu J, Fagnan LJ. Tokenism in patient engagement. Fam Pract. 2017 Jun 1;34(3):290–5.

199. Jagosh J, Macaulay AC, Pluye P, Salsberg J, Bush PL, Henderson J, et al. Uncovering the Benefits of Participatory Research: Implications of a Realist Review for Health Research and Practice. The Milbank Quarterly. 2012;90(2):311–46.

200. Couvreur LD, Goossens R. Design for (every)one: co-creation as a bridge between universal design and rehabilitation engineering. CoDesign. 2011 Jun 1;7(2):107–21.

201. Domecq JP, Prutsky G, Elraiyah T, Wang Z, Nabhan M, Shippee N, et al. Patient engagement in research: a systematic review. BMC Health Services Research. 2014 Feb 26;14(1):89.

202. Ocloo J, Matthews R. From tokenism to empowerment: progressing patient and public involvement in healthcare improvement. BMJ Qual Saf. 2016 Aug 1;25(8):626–32.

203. Velsen LV, Geest TVD, Klaassen R, Steehouder M. User-centered evaluation of adaptive and adaptable systems: a literature review. The Knowledge Engineering Review. 2008 Sep;23(3):261–81.

204. Boger J, Jackson P, Mulvenna M, Sixsmith J, Sixsmith A, Mihailidis A, et al. Principles for fostering the transdisciplinary development of assistive technologies. Disability and Rehabilitation: Assistive Technology. 2017 Jul 4;12(5):480–90.

205. Robillard JM, Cleland I, Hoey J, Nugent C. Ethical adoption: A new imperative in the development of technology for dementia. Alzheimer’s & Dementia. 2018 Sep 1;14(9):1104–13.

206. Koumpouros Y. A Systematic Review on Existing Measures for the Subjective Assessment of Rehabilitation and Assistive Robot Devices. J Healthc Eng. 2016;2016.

207. Reed PR, Lahm EA. A Resource Guide for Teachers and Administrators about Assistive Technology. 2005;26.

208. Beukelman DR, Mirenda P. Augmentative and Alternative Communication: Management of Severe Communication Disorders in Children and Adults. 2nd edition. Baltimore: Brookes Publishing Company; 1998. 624 p.

Figure Legend

Figure 1. International Organization for Standardization (ISO) definition for Assistive Products (Technology).

Figure 2. Flow diagram detailing article counts from initial search to inclusion.

Figure 3. The most common AT-categories evaluated. Counts represent the number of ETs for each AT-category. AT-categories evaluated by ≥ 4 ETs were included. In this review, digital health technology appeared differentiated as eHealth (web-based health services), mHealth (health applications focused on mobile technology platforms), and Telehealth (technology focused on providing remote client-clinician interaction) according to our code list (Supplemental S2).

Figure 4. The most common conceptual frameworks explicitly used for ET development. Counts represent the number of ETs related to each conceptual framework. Frameworks used by ≥ 3 ETs were included.

Figure 5. Number of ETs where validation evidence was not found (left) and was found (right). The vertical axis categorizes the types of end-user input that were explicitly used in ET development.

Figure 6. The most common constructs evaluated. Counts represent the number of ETs evaluating each construct. Constructs evaluated by ≥ 5 ETs were included. Several ETs evaluated multiple constructs; these ETs were counted once for each construct they evaluated.

Figure 7. Chord diagram indicating paired linkages between attributes; pairings with ≥ 3 linkages were included. Thicker bands indicate a greater number of linkages. Bands flow from AT-category to Conceptual Framework to Constructs Evaluated. These bands may be interpreted as a combination of both quantity and consistency of ETs, representing their overall footprint in the AT-ET literature. For example, strong linkages appear between the heavily represented Hearing Aid AT-category and Satisfaction, Benefit, and Usage. Aside from Existing Tool, the ICF was the most heavily represented conceptual framework, with consistent linkages to General AT, Mobility Aids, and Participation. Moreover, Assistive Social Robots, UTAUT, and Acceptance were consistently interlinked. Unsurprisingly, Functional Performance was assessed in ETs for a variety of AT-categories.

EVALUATION TOOLS FOR ASSISTIVE TECHNOLOGIES

Table 1. Attributes extracted from ETs.

Data                   Description
AT Category            The specific type or scope of AT that the tool was developed to evaluate.
Construct Evaluated    The primary construct(s) that the tool measures. Constructs measured by subscales were also recorded.
Conceptual Framework   The defined framework, model, or theory according to which the conceptual content of the tool was developed.
End-user Input         The type of user input used for tool development.
Validation Testing     Reliability or validity studies conducted (Yes/No).

Table 2. Categorization of AT categories, conceptual frameworks, and evaluated constructs into thematic domains. ET counts are sum totals of constituent attributes; e.g., an ET evaluating two different “Attitudes” constructs is counted twice, as is an ET evaluating both a “User Experience” construct and an “Impacts” construct.

| AT Category | # ET | Conceptual Framework | # ET | Construct Evaluated | # ET |
| --- | --- | --- | --- | --- | --- |
| Mobility Aids | 36 | Health Frameworks | 18 | Functioning & Activities | 71 |
| Hearing Amplification Devices | 32 | AT Frameworks | 12 | Attitudes | 63 |
| Orthotics & Limb Prosthetics | 31 | Outcomes | 9 | User Experience | 56 |
| Assistive Digital Media and Communication | 26 | Technology Design | 6 | AT Design Factors | 40 |
| General AT | 22 | Methodologies | 6 | Impacts | 34 |
| Oral Health | 6 | Psychosocial Frameworks | 6 | Status/Well-being | 25 |
| Assistive Robots | 4 | Standardized Definitions | 4 | Personal Factors | 6 |
| Other AT | 2 | Other Frameworks | 4 | Participation | 8 |
| | | | | Needs | 6 |

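The counting convention in Table 2 (one count per constituent attribute, so an ET can contribute twice to the same domain) can be sketched as below. This is an illustrative example only; the record format and construct names are hypothetical, not the authors' extraction schema.

```python
from collections import Counter

# Hypothetical extracted records: each ET lists the specific constructs
# it evaluates, prefixed by their thematic domain (names illustrative).
ets = [
    {"id": "ET-001", "constructs": ["Attitudes: stigma", "Attitudes: confidence"]},
    {"id": "ET-002", "constructs": ["User Experience: satisfaction", "Impacts: participation"]},
]

def domain(construct: str) -> str:
    """Map a specific construct to its thematic domain (text before the colon)."""
    return construct.split(":")[0].strip()

# An ET evaluating two constructs in the same domain is counted twice,
# matching the Table 2 convention.
counts = Counter(domain(c) for et in ets for c in et["constructs"])
print(counts)  # Counter({'Attitudes': 2, 'User Experience': 1, 'Impacts': 1})
```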

ISO 9999:2016, 2.3 assistive product: any product (including devices, equipment, instruments and software), especially produced or generally available, used by or for persons with disability (2.12)
- for participation (2.13),
- to protect, support, train, measure or substitute for body functions (2.4)/structures and activities, or
- to prevent impairments (2.11), activity limitations (2.2) or participation restrictions (2.14).

[Flow diagram of study selection:]
- Records identified through database searching: n = 30 625
- Records identified through other sources: n = 146
- Records after duplicates removed: n = 23 256
- Records screened: n = 23 256
- Records excluded: n = 21 907
- ETs assessed for eligibility: n = 308
- ETs excluded, with reasons: n = 149
- ETs included in review: n = 159

[Figure: Common AT-Categories (bar chart of ET counts). Categories shown: Hearing Aid, General AT, Lower Limb Prosthesis, Manual Wheelchair, eHealth, Wheelchair, Cochlear Implant, Wheeled Mobility Device, Upper Limb Prosthesis, Telehealth, mHealth, Mobility Aids.]

[Figure: Common Conceptual Frameworks (bar chart of ET counts). Frameworks shown: None Specified, Existing Tool, ICF, MPT, TAM, HRQL, UTAUT, Social Cognitive Theory, PROMIS, Delphi (comprehensive). MPT – Matching Person and Technology; TAM – Technology Acceptance Model; HRQL – Health-related Quality of Life; UTAUT – Unified Theory of Acceptance and Usage of Technology; PROMIS – Patient-Reported Outcomes Measurement Information System.]

[Figure: End-user Input and Validation (diverging bar chart; ET counts with “No Validation Found” vs “Validation Found” by type of end-user input). Input types shown: Development Information Inaccessible, None Specified, Interview, Focus Group, General Feedback, Qualitative Study, Expert Panel, Questionnaire Feedback, Survey, Mixed Methods, Grounded Theory, Delphi.]

[Figure: Common Constructs Evaluated (bar chart of ET counts). Constructs shown: Satisfaction, Functional-Performance, Usage, AT-Skills, Usability, Mobility, Quality-of-Life, Benefit, Comfort, Acceptance, Ease-of-Use, Attitudes, Usefulness, Participation, HQoL. HQoL – Health-related Quality of Life.]

[Figure 7 segment labels (garbled in extraction). Recoverable labels — Constructs Evaluated: Mobility, Participation, Quality of Life, Functional Performance, Ease of Use, Benefit, Acceptance, AT Skills, Satisfaction, Usability, Usage. Conceptual Frameworks: UTAUT, MPT, ICF, Existing Tool, None Specified. AT Categories: Assistive Social Robots, General AT, Hearing Aid, Manual Wheelchair, Wheeled Mobility Device, LL Prosthesis, UL Prosthesis, Telehealth, eHealth, Virtual Rehabilitation, Mobility Aids. Legend: LL – Lower Limb; UL – Upper Limb; ICF – International Classification of Functioning, Disability and Health; MPT – Matching Person and Technology; UTAUT – Unified Theory of Acceptance and Use of Technology.]