Technological developments in assessment


20

Robert L. Kane (Cognitive Consults and Technology LLC, Washington Neuropsychology Research Group, Neuropsychology Associates of Fairfax, Fairfax, VA, United States) and Thomas D. Parsons (Director, NetDragon Digital Research Centre; Director, Computational Neuropsychology and Simulation (CNS) Lab, Department of Psychology; and Professor, College of Information, University of North Texas, Denton, TX, United States)

Introduction

In medicine and related fields, examinations are in essence assessments or interrogations of functional systems. A complete physical examination may include assessing the integrity of biological and organ systems whose functioning is critical for a patient's health. In psychology, this review may involve cognitive systems relating to areas such as affective self-regulation, language, memory, various components of executive functioning, aspects of attention, and problem-solving. Psychologists also assess behavioral and symptom patterns that relate to different clinical presentations. Data obtained from the review of systems can be used to establish whether or not an individual is showing problems within one or more cognitive and affective domains, may help inform diagnostic considerations, may be used to assess capacity related to occupational or legal considerations, and can provide information related to treatment planning. In neuropsychology, the once-important objective of lesion localization has become less relevant with advances in neuroimaging (Bigler, 1991; Baxendale & Thompson, 2010). However, assessing the cognitive status of an individual, as well as the integrity of various cognitive and affective systems, remains an important psychological contribution to patient evaluation, and traditional assessment methods and tests have functioned admirably in making psychology a valued part of modern health care. While it is important to acknowledge the success of psychological assessment to date, no profession can afford to stand still, and in an era marked by rapid advances in technology in almost every area of life, it is critical to address to what degree technology can enhance the psychological examination and potentially expand uses for cognitive assessment. The goal of this chapter is to provide an overview of the current and potential impact of technology on cognitive assessment and to provide a roadmap for future developments making use of technology to enhance the assessment and rehabilitation process. An underlying theme is that technology should be used not because it is there but because it can introduce capabilities and options that at minimum enhance efficiency and more importantly augment the assessment and interpretation processes. For this chapter, we have identified seven main areas where technology can have a substantial impact on the practice of psychological assessment. These include: (1) enhancing the efficiency and reliability of the assessment process; (2) expanding the types of behaviors that can be assessed, including the implementation of scenario-based assessment; (3) increasing access to care; (4) improving assessment models by linking cognitive domains and biological systems; (5) refining diagnosis and prediction using analytics; (6) augmenting cognitive rehabilitation and self-monitoring; and (7) expanding research methods.

Assessment: enhancing efficiency and reliability

Once the microcomputer became available in the 1970s, it was inevitable that psychologists would begin exploring the potential of this device for both research and clinical assessment. Much of the early work was accomplished by researchers addressing specific problems or by individual clinicians whose goal was to develop tests for specific purposes. For example, Acker and Acker (1982) in England developed a brief battery that could be used to assess individuals with alcohol problems. Their battery ran on an Apple II computer. Branconnier (1986) developed the Alzheimer's Disease Assessment Battery. It also ran on the Apple II and was constructed so that the patient and the examiner worked on separate computer terminals. For general clinical assessment, Swiercinsky (1984) developed the SAINT-II. This system was composed of 10 tests, many of which were based on the author's experience with the Halstead-Reitan Test Battery. The SAINT-II included tests assessing spatial orientation, motor persistence, verbal and visual memory, visual search speed, vocabulary, sequencing, rhythm discrimination, numerical skill, and set shifting (Kane & Reeves, 1997). A great deal of early work took place in laboratories run by psychologists in the Department of Defense. These individuals were engaged in human performance research and were interested in expanding the tests available for cognitive assessment and in developing batteries where test measures could be repeated in order to assess the effects of various environmental stressors. Much of this early work took place outside the awareness of most psychologists, and publications were often in the form of technical reports. The fact that much of this early work was not visible or easily discoverable by most psychologists led Kane and Kay (1992) to publish a review of the work that had been accomplished in the area of computerized testing at that point in time. In this review they discussed basic issues in computerized assessment, outlined and categorized the types of tasks that had been implemented by different test batteries, and reviewed available batteries with respect to technical considerations, psychometric properties, and supporting research. Since that time there has been an expansion in both the use and availability of computerized test measures.


Advantages and challenges in early adoption

Dating from the first attempts to implement computers as cognitive assessment devices, there was an appreciation of the potential advantages as well as the challenges and cautions that surrounded computerized assessment. The advantages included standardization in test administration superior to that of the human examiner, scoring accuracy, the ability to integrate response timing into a variety of tasks in order to better assess processing speed and efficiency, expanded test metrics that capture variability in performance, the ability to integrate adaptive testing for both test and item selection, and the ability to incorporate tests not easily done with booklets or pieces of paper. The cautions included that computers could be deceptive with respect to timing accuracy, and developers of automated tests had to be aware of issues related to response input methods, operating systems, program coding, and computer architecture that could affect response timing and the measurement of test performance. In the early years of automated testing, computers were relatively new devices not yet fully integrated into daily life. Hence, there were also concerns about how an individual would react to being tested on a computer. These concerns were supported by studies addressing the effects of computer familiarity on test performance (Johnson & Mihal, 1973; Iverson, Brooks, Ashton, Johnson, & Gualtieri, 2009). The biggest limitation in adopting computerized testing for clinical assessment was that available technology did not permit the assessment of important language-based skills, including verbal memory. Algorithms for analyzing speech patterns for indications of anxiety, depression, or unusual syntax or word usage were also not available or were in the early stages of development. Constraints related to speech recognition and language analysis have limited the types of tests and methods that could be implemented on computers for clinical assessment.
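To make the timing concern concrete, the short sketch below records a response latency with a monotonic, high-resolution clock. It is a minimal illustration under stated assumptions rather than the code of any published battery, and it deliberately ignores display-refresh and input-device latency, which production test software must calibrate and correct for.

```python
# Minimal sketch (not any published battery's code): recording a response time
# with a monotonic, high-resolution clock. Real computerized tests must also
# account for display refresh and input-device polling latency, which this
# console example ignores.
import random
import time

def simple_reaction_trial(min_delay=1.0, max_delay=3.0):
    """Run one simple reaction-time trial and return latency in milliseconds."""
    time.sleep(random.uniform(min_delay, max_delay))  # unpredictable foreperiod
    onset = time.perf_counter()                       # stimulus "onset"
    input("Press Enter as quickly as possible!")      # response collection
    return (time.perf_counter() - onset) * 1000.0

if __name__ == "__main__":
    rts = [simple_reaction_trial() for _ in range(3)]
    print(f"Mean RT: {sum(rts) / len(rts):.1f} ms (n={len(rts)})")
```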

Technological advances

While early adopters of computerized testing faced some challenges, it is unlikely that these limitations will persist for much longer. There are two thrusts in technology that will have a substantial impact on the ability to implement clinically relevant tasks via computer. These are speech (and language) recognition and adaptive computational models (i.e., artificially intelligent algorithms). Computational models will be discussed later as they relate to clinical decision making. However, improvements in the ability of automated systems to recognize spoken language will dramatically affect how computers can be integrated into cognitive and behavioral assessment. At present, it takes a human examiner to assess verbal memory, confrontation naming, generative fluency, and expository speech. Advances in speech and language recognition will eventually make it possible to assess a full range of cognitive and affective functions using automated methods. Also, it is likely that by implementing artificial neural networks, the computer will be able to pick up additional data informative to the assessment process (Mitchell & Xu, 2015). Humans are skilled at assessing emotional pauses, facial expression, and vocal intonations. Computers continue to have limitations for assessing mood and affect. However, the gap between humans and computers in assessing emotion, while wide, continues to narrow through the development and implementation of artificial neural network architectures constructed to handle the fusion of different modalities (facial features, prosody, and lexical content in speech; Fragopanagos & Taylor, 2005; Wu, Parsons, & Narayanan, 2010). While early computational approaches to speech recognition and language processing focused on automated analyses of linguistic structures and the development of lab-based technologies (machine translation, speech recognition, and speech synthesis), current approaches can be implemented in real-world speech-to-speech translation engines that can identify sentiment and emotion (Hirschberg & Manning, 2015). As language processing improves, and as feature detection becomes more sophisticated, the range of clinically relevant aspects of how patients present will continue to expand in ways that will both enhance and challenge current approaches to psychological assessment. Studies are already underway to assess temporal characteristics of spontaneous speech in acquired brain injury and neurodegenerative disease (Szatloczki, Hoffmann, Vincze, Kalman, & Pakaski, 2015). Computer-automated language analysis may help the early diagnosis of cognitive decline. The temporal characteristics of spontaneous speech (e.g., speech tempo, number and length of pauses in speech) are markers of cognitive disorders (Roark, Mitchell, Hosom, Hollingshead, & Kaye, 2011). Future capabilities aside, there are a number of more immediate advantages to computerized testing that have yet to be fully exploited. The ability of computers to permit the implementation of tasks not well done, or in some cases not possible, through traditional means will be discussed later in this chapter. However, there are a number of other ways in which the computer can enhance the testing process and make it more efficient. Booklet-based tests are inherently clumsy. Test materials fray, making the process of presenting stimuli at exact intervals even more unreliable by compounding normal human error. Booklet-based testing is inherently inefficient. It requires a constant shifting of materials along with hand scoring. Scores are later tabulated and either entered into a scoring program or require the examiner to refer to tables in order to record standard scores. Even assuming that no scoring error has taken place, a substantial amount of examination time is taken up in what are essentially administrative activities. Current standards allow for this time to be billed as time spent testing and scoring. If present health care trends persist, professional activities will be reimbursed based on outcomes and proven utility rather than time spent. Depending on how changes in health care reimbursement affect psychology, failing to embrace technology as a way of modernizing the assessment process may well have economic consequences.
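As a rough illustration of the temporal speech measures cited above, the sketch below estimates the number and average length of pauses in a recorded waveform using a simple energy threshold. The threshold, frame size, and synthetic test signal are assumptions chosen for the example; research systems rely on far more robust voice-activity detection.

```python
# Illustrative sketch only: a crude energy-threshold approach to temporal speech
# measures (number and mean length of pauses). The threshold, frame size, and
# synthetic signal below are arbitrary assumptions for demonstration.
import numpy as np

def pause_statistics(signal, sample_rate, frame_ms=25, silence_db=-35.0, min_pause_s=0.25):
    """Return (pause_count, mean_pause_seconds) from a mono waveform."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    silent = 20 * np.log10(rms / np.max(rms)) < silence_db   # frames below threshold

    runs, run = [], 0
    for is_silent in silent:
        if is_silent:
            run += 1
        else:
            if run:
                runs.append(run)
            run = 0
    if run:
        runs.append(run)

    pause_secs = [r * frame_ms / 1000 for r in runs if r * frame_ms / 1000 >= min_pause_s]
    return len(pause_secs), (float(np.mean(pause_secs)) if pause_secs else 0.0)

if __name__ == "__main__":
    sr = 16000
    speech = np.random.randn(sr)      # 1 s of "speech" (noise stand-in)
    silence = np.zeros(sr // 2)       # 0.5 s pause
    count, mean_len = pause_statistics(np.concatenate([speech, silence, speech]), sr)
    print(count, round(mean_len, 2))
```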

Expanding tasks and scenario-based assessment

Task-based assessment is focused on having the patient demonstrate skills in specific cognitive domains. While most measures require more than one skill to complete, specific tests attempt to target a specific skill. In life, individuals must implement a number of skills and integrate a number of abilities to manage demands in their environment. Some traditional measures such as the Wisconsin Card Sorting Test (WCST; Heaton, 2003) and Part B of the Trail Making Test (Reitan, 1955) are used by psychologists because of the degree of cognitive flexibility needed to complete the task. Furthermore, tests like the WCST and the Halstead Category Test (HCT; Reitan & Wolfson, 1985) are used because of their apparent ability to assess problem-solving based on feedback.

Computer-automated assessment of multitasking

Traditional tests do not capture divided attention and how well an individual is able to allocate cognitive resources when having to tackle different tasks during a specific period of time. Early on, test developers appreciated that computers could be used to implement complex tasks in addition to those used in traditional assessment approaches such as the WCST and the HCT. Developers also appreciated that computers could be used to implement actual tests of divided attention where the person being tested would have to allocate resources between two or more tests running concurrently. The Complex Cognitive Assessment Battery (CCAB; Kane & Kay, 1992) was developed by the DoD as a group of tasks that involved a number of complex problem-solving skills. While no longer available, the CCAB was a model for using computers to expand the assessment of complex reasoning skills. SynWork, developed by Tim Elsmore (Kane & Kay, 1992), was an early demonstration of a true divided attention or multitasking test. SynWork presented up to four tasks that ran in different quadrants of the computer screen. Scores were produced for each task individually, and there was an overall score that captured how well an individual was able to allocate their cognitive resources among tasks. The motivation for SynWork (synthetic work) was that day-to-day demands rely on an integration of skills and on having the ability to effectively switch one's focus between and among competing demands. The total score produced by SynWork increased as the test taker effectively completed a task and was reduced when the test taker failed to respond correctly to a task within a given timeframe or needed to request reminders. SynWork, while attempting to model day-to-day demands, was task based. The person taking the test performed different cognitive tasks that were presented simultaneously rather than individually.
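The sketch below illustrates the general logic of a multitasking composite of this kind: credit for correct, timely responses on each concurrent subtask and deductions for timeouts and requested reminders. The subtask names and point values are illustrative assumptions, not the published SynWork scoring rules.

```python
# Hedged sketch of a SynWork-style composite: points are added for correct,
# timely responses on each concurrent subtask and deducted for timeouts and
# requested reminders. Weights and subtask names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SubtaskLog:
    name: str
    correct: int      # responses made correctly within the allowed window
    timeouts: int     # trials with no correct response in time
    reminders: int    # times the test taker asked to re-display instructions

def composite_score(logs, gain=10, timeout_cost=5, reminder_cost=2):
    """Sum subtask scores into one index of resource allocation across tasks."""
    per_task = {
        log.name: gain * log.correct - timeout_cost * log.timeouts - reminder_cost * log.reminders
        for log in logs
    }
    return per_task, sum(per_task.values())

if __name__ == "__main__":
    session = [
        SubtaskLog("memory", correct=18, timeouts=2, reminders=1),
        SubtaskLog("arithmetic", correct=22, timeouts=1, reminders=0),
        SubtaskLog("visual monitoring", correct=30, timeouts=4, reminders=0),
        SubtaskLog("auditory monitoring", correct=25, timeouts=3, reminders=2),
    ]
    per_task, total = composite_score(session)
    print(per_task, total)
```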

Virtual environments for ecologically valid assessments

There is an apparent need for psychology to expand beyond its current conceptual and experimental frameworks. Burgess and colleagues (Burgess et al., 2006) argued that most cognitive assessments in use today fail to represent the actual functional capacities inherent in cognitive (e.g., executive) functions. These authors suggest that traditional psychological assessments like the WCST assess a hypothetical cognitive construct that can be inferred from research findings (e.g., correlation between two variables). Cognitive construct measures like the WCST were not originally designed to be used as clinical measures (Burgess et al., 2006). Instead, these measures were useful tools for cognitive assessment in normal populations which later found their way into the clinical realm to aid in assessing constructs that are important to carrying out real-world activities. Goldstein (1996) questioned this approach because it is difficult to ascertain the extent to which performance on measures of basic constructs translates to functional capacities within the varying environments found in the real world. Psychologists need assessments that further our understanding about the ways in which the brain enables persons to interact with their environment and organize everyday activities. Virtual environments (VEs) are increasingly considered as potential aids in enhancing the ecological validity of psychological assessments (Campbell et al., 2009; Renison, Ponsford, Testa, Richardson, & Brownfield, 2012; Parsons, 2015). This increased interest is at least in part due to recent enhancements in 3D rendering capabilities that have accelerated graphics considerably and allowed for greatly improved texture and shading in computer graphics. Earlier virtual reality equipment suffered a number of limitations, such as being large and unwieldy, difficult to operate, and very expensive to develop and maintain. Over the past decade, researchers have steadily progressed in making VE hardware and software more reliable, cost effective, and acceptable in terms of size and appearance (Bohil, Alicea, & Biocca, 2011). Today VEs offer advanced computer interfaces that allow patients to become immersed within a computer-generated simulation of everyday activities. As with any test, there is a need for validation of these measures (see Parsons, McMahan, & Kane, in press; Parsey & Schmitter-Edgecombe, 2013). Like other computerized psychological assessments, VEs have computational capacities that allow for enhanced administration: reliable and controlled stimulus presentation, automated response logging, database development, and data analytic processing. Given that VEs allow for precise presentation and control of dynamic perceptual stimuli, psychologists can use them to provide assessments that combine the veridical control and rigor of laboratory measures with a verisimilitude that reflects real-life situations (Parsons, 2015). Additionally, the enhanced computational power allows for increased accuracy in the recording of cognitive and emotional responses in a perceptual environment that systematically presents complex stimuli. VE-based psychological assessments can provide a balance between naturalistic observation and the need for exacting control over key variables. In sum, there is a growing body of evidence supporting the position that VE-based psychological assessments allow for real-time evaluation of multifarious cognitive and affective responses in order to measure complex sets of skills and behaviors that may more closely resemble real-world functional abilities (see Bohil et al., 2011; Kane & Parsons, 2017; Parsey & Schmitter-Edgecombe, 2013; Parsons, 2016, 2017; Parsons & Phillips, 2016; Parsons, Carlew, Magtoto, & Stonecipher, 2017; Parsons, Gagglioli, & Riva, 2017).
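The fragment below sketches what automated response logging of this sort might look like: each stimulus presentation, response, or movement through the environment is time-stamped relative to session start and exported for analysis. The event names and fields are hypothetical and do not reflect the schema of any particular VE platform.

```python
# Minimal sketch of the automated response logging a virtual-environment
# assessment affords: every stimulus presentation and participant response is
# time-stamped and stored for later analysis. Field names are illustrative.
import csv
import time
from dataclasses import dataclass, asdict

@dataclass
class VEEvent:
    timestamp: float      # seconds since session start
    event_type: str       # e.g., "stimulus_onset", "response", "zone_entered"
    detail: str           # stimulus id, response key, region name, etc.

class SessionLogger:
    def __init__(self):
        self._start = time.perf_counter()
        self.events = []

    def log(self, event_type, detail):
        self.events.append(VEEvent(time.perf_counter() - self._start, event_type, detail))

    def to_csv(self, path):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["timestamp", "event_type", "detail"])
            writer.writeheader()
            writer.writerows(asdict(e) for e in self.events)

if __name__ == "__main__":
    log = SessionLogger()
    log.log("stimulus_onset", "virtual_kitchen_task_1")   # hypothetical scenario id
    log.log("response", "picked_up_kettle")
    log.to_csv("ve_session.csv")
```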


Access to care and telehealth

Three key barriers to receiving psychological assessment are geographic distance from a qualified provider, the cost of assessment, and wait times between requesting an appointment and actually being seen. The growth of telehealth in medicine has far outpaced the exploration of remote assessment in psychology. According to the website eVisit (eVisit, 2016), 7 million patients were projected to receive services through telemedicine in 2018, and as of August 2015, 29 states required health insurers to pay for telemedicine services. Unfortunately, clinical neuropsychology has fallen well behind in addressing this need. A new medium for delivering psychological assessments has emerged as a result of the Internet. Recent surveys have revealed that over 3.1 billion people now have access to the Internet. The distribution of this number by country reveals the following: China = 642 million; United States = 280 million; India = 243 million; Japan = 109 million; Brazil = 108 million; Russia = 84 million, among others (Stats, 2015). In the United States, 86.75% of residents have access to the Internet. Telemedicine is an area that has developed for the use and exchange of medical information from one site to another via electronic communications, information technology, and telecommunications. When researchers discuss “telemedicine,” they typically mean synchronous (interactive) technologies such as videoconferencing or telephony used to deliver patient care. When the clinical services involve mental health or psychiatric services, the terms “telemental health” and “telepsychiatry” are generally used (Yellowlees, Shore, & Roberts, 2010).

Remote psychological assessment

Remote psychological assessment is a recent development in telemedicine, in which psychologists remotely administer behavioral and cognitive assessments to expand the availability of specialty services (Cullum & Grosch, 2012). Evaluation of the patient is performed via a personal computer, digital tablet, smartphone, or other digital interface used to administer, score, and aid interpretation of these assessments (Cullum, Hynan, Grosch, Parikh, & Weiner, 2014). Preliminary evaluation of patient acceptance of this methodology has revealed that it appears to be well accepted by consumers. For example, in the area of cognitive assessment, Parikh and colleagues (Parikh et al., 2013) found 98% satisfaction, and approximately two-thirds of participants reported no preference between assessment via video teleconferencing and traditional in-person assessment. Remote behavioral assessment is often done by way of interview and may include the administration of short questionnaires to assess pertinent symptoms. Remote cognitive assessment is just beginning to develop, and for many the concept seems challenging at best. However, four models have emerged for remote cognitive assessment that have the potential to increase access to care for patients and potentially reduce costs, including those for travel. Model 1 is a minor variation of employing a technician for test administration. It involves the interview being done remotely by a psychologist, with tests administered by a technician collocated with the patient. While statistics are not available, this method likely represents the current, or at least most frequent, implementation of remote cognitive assessment. In Model 2, both the clinical interview and test administration are accomplished remotely. This model has some limitations: it may require some tests to be renormed, and may also involve an assistant to help the patient sign on and set up certain test materials. Nevertheless, research done to date supports the viability of this model for both short screening tests such as the Mini Mental State Examination (Loh, Ramesh, Maher, Saligari, Flicker, & Goldswain, 2004; Loh, Donaldson, Flicker, Maher, & Goldswain, 2007; McEachern, Kirk, Morgan, Crossley, & Henry, 2008) as well as for more extensive cognitive assessment batteries (Cullum, Weiner, Gehrmann, & Hynan, 2006; Cullum et al., 2014; Jacobsen, Sprenger, Andersson, & Krogstad, 2002). The attractiveness of this model is that it addresses the reality that a trained technician may not always be available at sites distant from the location of the examining psychologist. Model 3 takes advantage of the fact that there are a number of computerized tests that can be set up for remote administration and that require minimal verbal input and guidance from an examiner. Tests can be downloaded to run locally on the computer used by the person taking the test, with data securely transferred back to the examiner. In some cases tests can be Internet-based. A recently published pilot study demonstrated the viability of this model using the Automated Neuropsychological Assessment Metrics system (ANAM; Settle, Robinson, Kane, Maloni, & Wallin, 2015). This study compared test scores when patients with multiple sclerosis (MS) were assessed in person, in a different hospital room from the examiner, and at home. Results demonstrated that scores obtained when the same patients were tested remotely in different locations were comparable to those obtained with traditional in-person test administration. To preserve timing accuracy, the cognitive tests ran locally on the patient's computer while the examiner monitored and communicated with the patient remotely. Model 4 is essentially a hybrid model that acknowledges that different approaches to remote cognitive assessment can be combined when assessing patients who are not collocated with the examining psychologist. Computer-based tests have expanded access to care by permitting data to be obtained from various groups of individuals potentially at risk for injury. A subset of the ANAM battery (Reeves, Winter, Bleiberg, & Kane, 2007) was implemented by NASA as the Spaceflight Cognitive Assessment Tool for Windows (WinSCAT; Kane, Short, Sipes, & Flynn, 2005). To date, WinSCAT has been used on 47 expeditions to the International Space Station (K.A. Seaton, personal communication, May 16, 2016). As a result of concerns about brain injury occurring during combat, Congress mandated baseline testing for all deploying Service members as part of the 2008 National Defense Authorization Act (Congress, 2008). As of this writing, baseline testing has been obtained on 1,140,445 Service members using a subset of ANAM tests. The database for individuals tested includes over 2 million assessments (D. Marion, personal communication, May 16, 2016).
The ability to test individuals in space, along with the ability to obtain baseline and post-injury information on large numbers of individuals performing in hazardous environments, was possible only through using technology to expand models for cognitive assessment. The ImPACT test system (https://impacttest.com/research/) has been used, along with other computerized test systems, to gather baseline and post-injury data on athletes. While these uses have been selective and focused on specific populations, they are also models for obtaining and storing data that may be useful throughout the life span when assessing the effects of disease or injury. These systems have made it possible to treat cognition as an additional medical endpoint for longitudinal health monitoring. Test instruments used for longitudinal health monitoring should be carefully developed and validated.

Linking cognitive domains and biological systems

If a psychological evaluation involves the systematic exploration of pertinent cognitive and affective systems, then it is incumbent to define these systems and to determine the specific aspects of these systems that should be assessed. Current domains and subdomains have emerged through studies in cognitive psychology, the examination of patients with different lesions and pathologies, and through factor analytic studies. Larrabee (2014) has argued for the development of an ability-focused battery based on factor analytic studies in order to produce a research-based approach to the systematic investigation of clinically relevant cognitive domains. While there is substantial merit to his approach, there is also an argument for expanding Halstead's (1947) original goal of trying to capture biological intelligence. That is, it is important to understand what the brain is actually doing, how it processes information, how neural networks are organized, and how best to measure this behaviorally. An example of this approach is that used by Posner (2011, 2016) (see also Posner & Rothhart, 2007) to help define the structure of attention networks and to tie behavioral tasks into these networks.
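As a toy illustration of the factor-analytic approach, the sketch below fits a two-factor model to a simulated six-test battery generated from two latent abilities. The battery, loadings, and sample are invented for demonstration and do not correspond to any real normative data or to Larrabee's analyses.

```python
# Hedged illustration of a factor-analytic approach to defining ability domains:
# observed test scores are modeled as arising from a smaller set of latent
# factors. The simulated "battery" below is invented for demonstration only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_examinees = 500

# Two latent abilities: verbal memory and processing speed
verbal = rng.normal(size=n_examinees)
speed = rng.normal(size=n_examinees)

# Six observed test scores, each loading mainly on one latent ability
tests = np.column_stack([
    0.8 * verbal + 0.2 * rng.normal(size=n_examinees),   # list learning
    0.7 * verbal + 0.3 * rng.normal(size=n_examinees),   # story recall
    0.6 * verbal + 0.4 * rng.normal(size=n_examinees),   # paired associates
    0.8 * speed + 0.2 * rng.normal(size=n_examinees),    # symbol coding
    0.7 * speed + 0.3 * rng.normal(size=n_examinees),    # simple reaction time
    0.6 * speed + 0.4 * rng.normal(size=n_examinees),    # visual search
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(tests)
print(np.round(fa.components_, 2))   # loadings: rows = factors, columns = tests
```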

Neuroimaging

Neuroimaging has gained widespread use in clinical research and practice. As a result, some objectives previously within the expertise of psychology, principally lesion localization and laterality of function, have been almost completely replaced by neuroimaging. While neuroimaging has taken advantage of advances in computerization and neuroinformatics, psychological assessments have not kept pace with advances in neuroscience and reflect nosological attempts at classification that occurred prior to contemporary neuroimaging. This lack of development in psychological assessment makes it very difficult to develop clinical psychological models. The call for advances in psychological assessment is not new. Twenty years ago Dodrill (1997) argued that psychological assessments by clinical psychologists had made much less progress than would be expected, both in absolute terms and in comparison with the progress made in other clinical neurosciences. When one compares progress in psychological assessment with progress in assessments found in clinical neurology, it is apparent that while the difference may not have been that great prior to the appearance of computerized tomographic scanning (in the 1970s), advances since then (e.g., magnetic resonance imaging) have given clinical neurologists a dramatic edge. Neuroimaging, with its rapidly increasing capabilities, will continue to play a role in our understanding of the functional organization of the brain. Technological advances in neuroimaging of brain structure and function offer great potential for revolutionizing psychological assessment (Bilder, 2011). Bigler (1991) has also made a strong case for integrating cognitive assessment with neuroimaging as a more potent method for understanding brain pathology and its effects. This integration can be aided by the use of computerized metrics time-locked to imaging sequences.

Advancing innovative neurotechnologies

The National Institute of Mental Health's Research Domain Criteria (RDoC) is a research framework for developing and implementing novel approaches to the study of mental disorders. The RDoC integrates multiple levels of information to enhance understanding of basic dimensions of functioning underlying the full range of human behavior from normal to abnormal. The RDoC framework consists of functional constructs (i.e., concepts that represent a specified dimension of functional behavior), each characterized in aggregate by the genes, molecules, circuits, and other units of analysis used to measure it. In turn, the constructs are grouped into higher-level domains of functioning that reflect current knowledge of the major systems of cognition, affect, motivation, and social behavior (see Insel et al., 2010). The BRAIN Initiative represents an ambitious but achievable set of goals for advances in science and technology. Since the announcement of the BRAIN Initiative, dozens of leading academic institutions, scientists, technology firms, and other important contributors to neuroscience have responded to this call. A group of prominent neuroscientists has developed a 12-year research strategy for the National Institutes of Health to achieve the goals of the initiative. The BRAIN Initiative may do for neuroscience what the Human Genome Project did for genomics. It supports the development and application of innovative technologies to enhance our understanding of brain function. Moreover, the BRAIN Initiative endeavors to aid researchers in uncovering the mysteries of brain disorders [e.g., Alzheimer's, Parkinson's, depression, and traumatic brain injury (TBI)]. It is believed that the initiative will accelerate the development and application of new technologies for producing dynamic imaging of the brain that expresses the ways in which individual brain cells and complex neural circuits interact at the speed of thought.

Enhancing diagnosis and behavioral prediction: computational assessment/neuropsychology

As noted above, two major thrusts in computer-based technology are language recognition and artificial intelligence (AI). There are different approaches to AI, including developing decision algorithms, deep machine learning, and training neural networks. Algorithms are typically developed by humans who review research data and develop decision rules. Machine learning lets the computer analyze data for patterns, some of which may prove important though not be immediately intuitive. Neural networks require training to discern patterns important for whatever determination is to be made. In all cases, the better and more abundant the data, the better the classification or prediction. In contributing to the diagnostic process, the psychologist looks for patterns that conform to the known literature about various clinical conditions. By and large, as a profession, psychologists do a good job at this. However, the psychologist's ability to refine pattern detection can be enhanced through the accumulation and analysis of large data sets. The challenge in doing this is more organizational than technical. Computers have been able to defeat champions on the quiz show Jeopardy! and at strategic games such as chess and Go. Technologies for accumulating and analyzing data and for exploring rules and relationships exist and are becoming increasingly powerful and available. It is now up to the profession of psychology to pool resources and to take full advantage of data analytics to bring about advances in defining disease patterns and in making predictions regarding the ecological consequences of cognitive and behavioral test performance.
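A minimal sketch of the machine-learning side of this argument appears below: a logistic regression classifier is cross-validated on simulated test scores from two groups. The simulated data, effect sizes, and model choice are assumptions made for illustration; real diagnostic models require large, representative clinical samples and rigorous validation against independent criteria.

```python
# Hedged sketch of using machine learning to classify groups from test data.
# The simulated scores and group effects are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_per_group = 200

# Four test scores (z-scores); the "patient" group is shifted down on some of them
controls = rng.normal(0.0, 1.0, size=(n_per_group, 4))
patients = rng.normal([-0.8, -0.5, 0.0, -0.1], 1.0, size=(n_per_group, 4))

X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)   # 0 = control, 1 = patient

model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, y, cv=5)          # 5-fold cross-validation
print(f"Mean classification accuracy: {accuracy.mean():.2f}")
```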

Cognitive rehabilitation and self-monitoring

Psychologists spend a substantial amount of time characterizing performance patterns, including identifying deficit areas. There have also been substantial efforts over the years to find ways to adjust and/or train cognitive skills. Data supporting cognitive training have often been disappointing. There have been positive results noted, although there has been a trend for effect sizes to lessen when improvements in performance in single-arm studies are measured against those obtained in studies using control groups. Despite the difficulty of the task, clinicians and researchers continue to explore ways to train and/or rehabilitate cognitive processes. It is unlikely that technology will produce an easy fix with respect to developing approaches to cognitive training. However, there are data to indicate that technology can offer helpful tools and approaches for enhancing cognitive processes. The jury is still out regarding the efficacy of commercially available Internet-based cognitive enrichment programs (Simons et al., 2016). In 2014 a group of leading cognitive psychologists and neuroscientists was brought together at the Stanford Center on Longevity and the Berlin Max Planck Institute for Human Development to assess the research pertaining to the use of brain games for cognitive enhancement (Longevity, 2014). The group concluded that more research was required to judge the effectiveness of these programs and couched its recommendations in terms of opportunity costs. That is, if one had other meaningful ways to remain engaged, the group felt the data supporting online cognitive enhancement programs were insufficient to justify pulling back from those other activities to focus on computer training. Absent other ways of remaining active and stimulated, engaging in online cognitive activities might be a reasonable way to spend time. Other scientists have gone on record supporting the efficacy of cognitive rehabilitation (www.cognitivetrainingdata.org). The Simons et al. (2016) review concluded there was extensive evidence that brain-training interventions enhance performance on the tasks used for training, less evidence that people get better when performing closely related tasks, and little evidence to support generalization to distantly related tasks or day-to-day functioning. The fundamental issue seems to be the effectiveness of training rather than the method of delivery. The Internet has the capability of bringing interventions to more people once the efficacy of the rehabilitation approach has been established.

Computers for cognitive training

There are a number of approaches to using computers for cognitive training. Reviewing each of these is beyond the scope of this chapter. Two programs will be mentioned here as examples of systematic efforts to start from data-based theory, develop a computer-based approach grounded in theory and previous data, and then undertake the rigorous work of attempting to validate the method. Recently Chiaravalloti and her colleagues published randomized controlled trials of a method called the modified story memory technique to improve memory in patients with MS and TBI (Chiaravalloti, Moore, Nikelshpur, & DeLuca, 2013; Chiaravalloti, Dobryakova, Wylie, & DeLuca, 2015). These researchers were able to define the nature of the memory problem experienced by both MS and TBI patients as primarily one of acquiring information. They developed a computer-based method guided by previous studies to teach acquisition strategies and then validated the method through randomized controlled studies. Another example is a program being developed by Chen and colleagues (2011) that uses a video game-like presentation of various scenarios to provide guided experimental learning and to teach performance-enhancing techniques. In addition to being research-based, the approach by Chen et al. was designed from its initial stages to focus on the transfer of skills to the real world and to be adaptable to telehealth. While a number of rehabilitation programs and exercises have been implemented on the computer, the use of well-crafted scenarios designed to work hand-in-hand with structured training offers the potential to substantially enhance rehabilitation methods. Developing these methods for telehealth will result in more patients being offered these services.

Smartphones for psychological assessment

Smartphones offer psychologists mobile computing capabilities, and given their mobility and ubiquity in the general population, they offer new options for research in cognitive science (Dufau et al., 2011). Brouillette and colleagues (2013) developed a new application that utilizes touch screen technology to assess attention and processing speed. Initial validation was completed using an elderly nondemented population. Findings revealed that their color shape test was a reliable and valid tool for the assessment of processing speed and attention in the elderly. These findings support the potential of smartphone-based assessment batteries for attentional processing in geriatric cohorts. From a mental health perspective, smartphone applications have been developed for the assessment and treatment of various conditions including depression, anxiety, substance use, sleep disturbance, suicidality, psychosis, eating disorders, stress, and gambling (Donker et al., 2013). While supporting data are limited, initial findings indicate potential uses for these apps for addressing behavioral conditions (Donker et al., 2013). One expects the development of smartphone apps in mental health to continue in view of their potential to extend interventions beyond the therapist's office and to reinforce patients' engagement in their treatment.

Ecological momentary assessments

Psychologists are often interested in the everyday real-world behavior of their patients because brain injury and its functional impairments are expressed in real-world contexts. An unfortunate limitation is that many psychological assessments do little to tap into activities of daily living, quality of life, affective processing, and life stressors. These aspects of the patient's life are surveyed using global, summary, or retrospective self-reports. The prominence of global questionnaires can keep psychologists from observing and studying dynamic fluctuations in behavior over time and across situations. Further, these questionnaires may obscure the ways in which a patient's behavior varies and is governed by context. In reaction to the frequent reliance of psychologists on global, retrospective reports (and the serious limits they place on accurately characterizing, understanding, and changing behavior in real-world settings), some psychologists are turning to Ecological Momentary Assessment (EMA; Cain, Depp, & Jeste, 2009; Waters & Li, 2008; Waters et al., 2014; Schuster, Mermelstein, & Hedeker, 2015). EMA is characterized by a series of (often computer- and/or smartphone-based) repeated assessments of cognitive, affective (including physiological), and contextual experiences of participants as they take part in everyday activities (Shiffman, Stone, & Hufford, 2008; Jones & Johnston, 2011). EMA uses modern methods to capture performance and behavioral characteristics in the course of everyday functioning. It makes it possible to capture information in more depth and in broader contexts than would be obtained from naturalistic observation alone. Useful data can also be gathered from directly observing individuals performing everyday tasks. Some observational approaches are task-specific (e.g., Goverover, Chiaravalloti, & DeLuca, 2010; Goverover & DeLuca, 2015; Goverover, Chiaravalloti, & DeLuca, 2015) and others involve capturing broader samples of behavior, as with the Multiple Errands Task (Dawson et al., 2009). However, EMA can be used to capture a range of behaviors in multiple environments without the observer having to be collocated with the patient or subject.
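The sketch below shows one simple way an EMA protocol might schedule prompts: a fixed number of signals per day at random times within a waking-hours window, with a minimum spacing between them. The window, number of prompts, and spacing are illustrative assumptions rather than parameters from any published EMA study.

```python
# Simple sketch of an EMA sampling schedule: several prompts per day delivered
# at random times within a waking-hours window (a signal-contingent design).
# The window, prompt count, and minimum gap are assumptions for illustration.
import random
from datetime import datetime, timedelta

def daily_prompt_times(day, n_prompts=5, start_hour=9, end_hour=21, min_gap_minutes=60):
    """Return sorted prompt datetimes for one day, at least min_gap_minutes apart."""
    window_start = datetime(day.year, day.month, day.day, start_hour)
    window_minutes = (end_hour - start_hour) * 60
    while True:
        offsets = sorted(random.sample(range(window_minutes), n_prompts))
        gaps = [b - a for a, b in zip(offsets, offsets[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return [window_start + timedelta(minutes=m) for m in offsets]

if __name__ == "__main__":
    for t in daily_prompt_times(datetime.now()):
        print(t.strftime("%H:%M"))
```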


Expanding research options

Some early developments in computerized testing were driven and shaped by research needs. Within the DoD there was a need to expand testing capabilities to better study the effects of environmental stressors on performance and to study medication side effects (Kane & Kay, 1992). This implementation of cognitive testing meant that individuals being evaluated would have to be tested on multiple occasions, under different conditions, and that tests and methods would have to be implemented for repeated administrations. The microcomputer was an obvious tool for developing tests designed for repeated-measures assessment and for implementing test measures potentially sensitive to various factors that might affect human performance. The need to assess groups of people at risk was also behind the development of the Neurobehavioral Evaluation System 2 (NES 2; Letz & Baker, 1988). Tests in the NES 2, like those developed by the DoD, were designed for repeated-measures assessment, and in addition were consistent with recommendations made by the World Health Organization for assessing the effects of environmental toxins (Kane & Kay, 1992). Another test battery that set the stage for the use of computerized tests in pharmaceutical studies was the Memory Assessment Clinics battery (Larrabee & Crook, 1988). This computerized test system used technology that existed during that time period (CD-ROM) to present tests that attempted to mirror everyday tasks such as telephone dialing, name-face association, learning a grocery list, and remembering in which room the test taker placed various objects, among other measures. Currently, computerized tests designed for repeated measures are used extensively in pharmaceutical research. In addition, computer-driven technologies are permitting the measurement of critical performance skills in ecologically relevant ways. Driving safety is an important concern when evaluating the effects and side effects of some medications. A recent development has been to employ driving simulators in pharmaceutical studies. For example, Kay and Feldman (2013) reported the results of a study that employed a desktop computer-based simulator to investigate whether the use of armodafinil could improve driving performance in patients with obstructive sleep apnea who demonstrated excessive daytime sleepiness. The use of simulators and virtual reality (VR) scenarios to augment current approaches in cognitive and behavioral research is likely to experience growth in the coming years. Clinical outcomes in dementia include not only the measurement of basic cognitive skills but also an assessment of day-to-day functioning. Everyday skills are typically measured via rating scales or tests that combine knowledge-based questions with a limited sampling of skills. Using validated VR scenarios to measure functional behavior is likely the next logical step in making use of advances in technology to expand assessment capabilities in psychological research. Using scenario-based assessment in this context has the potential advantage of producing functional metrics that are not dependent on subjective report. Two challenges in conducting successful clinical trials are recruitment and retention of subjects. Challenges to retention may involve subjects moving or finding returning to a research facility expensive, difficult, or just tiresome. Having the ability to accomplish at least some aspects of follow-up in a person's home will likely reduce subject dropout in research investigations. Remote psychological assessment was discussed earlier in this chapter with regard to its clinical implications. There is also a research role for remote cognitive assessment as a method for following subjects for longer periods of time despite obstacles imposed by travel. The technologies needed for this capability (e.g., Internet connection, cell phone) are increasingly ubiquitous, and technologies exist for secure communication. The implementation of a remote assessment system will need to include a method of verifying that the intended responder is the person answering questions or performing tasks.
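One practical issue in the repeated-measures testing discussed above is avoiding item-specific practice effects. The sketch below shows one simple approach, drawing non-overlapping alternate forms from a larger item pool with a reproducible seed; the pool size, form length, and seeding scheme are assumptions for illustration, not any published battery's method.

```python
# Hedged sketch of building alternate forms for repeated-measures testing by
# partitioning a shuffled item pool with a reproducible seed. All parameters
# and item names are hypothetical.
import random

def build_alternate_forms(item_pool, n_forms, items_per_form, seed=2024):
    """Partition a shuffled item pool into non-overlapping alternate forms."""
    if n_forms * items_per_form > len(item_pool):
        raise ValueError("Item pool too small for the requested forms.")
    pool = list(item_pool)
    random.Random(seed).shuffle(pool)            # reproducible order
    return [pool[i * items_per_form:(i + 1) * items_per_form] for i in range(n_forms)]

if __name__ == "__main__":
    pool = [f"word_{i:03d}" for i in range(120)]          # hypothetical stimulus pool
    forms = build_alternate_forms(pool, n_forms=4, items_per_form=15)
    for n, form in enumerate(forms, start=1):
        print(f"Form {n}: {form[:3]} ...")
```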

Conclusions

In this chapter, we discussed the ability of technology to both enhance and expand current approaches to psychological assessment. In some cases, the potential contributions of technology involve streamlining current assessment models and making them more efficient. In other cases, technology can bring about paradigm shifts in how we conceptualize and measure cognitive processes, design interventions, and expand the reach of psychological services. The area of psychological assessment has been criticized for the sluggishness with which it embraces change (Sternberg, 1997). Dodrill (1997) noted that psychologists had made much less progress in the way in which we as a profession approach assessment than would be expected, both in absolute terms and in comparison with other clinical neurosciences. Meehl (1987) commented on how perplexing it would be if clinical psychologists lagged behind professions such as medicine and investment analysis, not to mention functions such as controlling operations in factories, in making full use of the power of the computer. The underlying theme of this chapter is that technology needs to be embraced by psychology not because it is there or because it enhances our credibility in the 21st century, but because it substantially expands our capabilities to assess and treat patients and to engage in research. To date, clinical psychologists have underused technology. This underuse has been due in part to resistance and in part to the fact that those developing technology have not placed sufficient emphasis on clinical relevance. For clinical psychology to continue to prosper, both of these impediments need to be overcome.

References

Acker, W., & Acker, C. (1982). Bexley Maudsley automated psychological screening and Bexley Maudsley category sorting test: Manual. Windsor, Great Britain: NFER-Nelson. Baxendale, S., & Thompson, P. (2010). Beyond localization: The role of traditional neuropsychological tests in an age of imaging. Epilepsia, 51(11), 2225–2230.


Bigler, E. D. (1991). Neuropsychological assessment, neuroimaging, and clinical neuropsychology: A synthesis. Archives of Clinical Neuropsychology, 6(3), 113 132. Bilder, R. M. (2011). Neuropsychology 3.0: Evidenced-based science and practice. Journal of the International Neuropsychological Society, 17(1), 7 13. Bohil, C. J., Alicea, B., & Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. National Review of Neurosciences, 12(12), 752 762. Branconnier, R. J. (1986). A computerized battery for behavioral assessment in Alzheimer’s disease. In L. W. Poon, T. Cook, K. L. Davis, et al. (Eds.), Handbook for clinical memory assessment of older adults (pp. 189 196). Washington, D.C: American Psychological Association. Brouillette, R. M., Foil, H., Fontenot, S., Correro, A., Allen, R., Martin, C. K., . . . Keller, J. N. (2013). Feasibility, reliability, and validity of a smartphone base application for the assessment of cognitive function in the elderly. PLoS One, 8(6), e65925. Burgess, P. W., Alderman, N., Forbes, C., Costello, A., Coates, L. M., Dawson, D. R., . . . Channon, S. (2006). The case for the development and use of “ecologically valid” measures of executive function in experimental and clinical neuropsychology. Journal of the International Neuropsychological Society, 12(2), 194 209. Cain, A. E., Depp, C. A., & Jeste, D. V. (2009). Ecological momentary assessment in aging research: A critical review. Journal of Psychiatric Research, 43, 987 996. Campbell, Z., Zakzanis, K. K., Jovanovski, D., Joordens, S., Mraz, R., & Graham, S. J. (2009). Utilising virtual reality to improve the ecological validity of clinical neuropsychology: An fMRI case study elucidating the neural basis of planning by comparing the Tower of London with a three-dimensional navigation task. Applied Neuropsychology, 16, 295 306. Chen, A. J.-W., Novakovic-Agopian, T., Nycum, T. J., Song, S., Turner, G. R., Hills, N. K., . . . D’Esposito, M. (2011). Training of goal-directed attention regulation enhances control over neural processing for individuals with brain injury. Brain, 134(Pt 5), 1541 1554. Chiaravalloti, N. D., Dobryakova, E., Wylie, G. R., & DeLuca, J. (2015). Examining the of the efficacy modified story memory technique (mSMT) in persons with TBI using functional magnetic resonance imaging (fMRI): The TBI-MEM trial. Journal of Head Trauma Rehabilitation, 30(4), 261 269. Chiaravalloti, N. D., Moore, N. B., Nikelshpur, O. M., & DeLuca, J. (2013). An RCT to treat learning impairment in multiple sclerosis: The MEMREHAB trial. Neurology, 81(24), 2066 2072. Congress, U. S. (2008). National Defense Authorization Act: 4986; Section 1618. ,https:// www.govtrack.us.congress/bills/110/hr4986.. Cullum, C. M., & Grosch, M. G. (2012). Teleneuropsychology. In K. Myers, & C. Turvey (Eds.), Telemental health: Clinical, technical, and administrative foundations for evidence-based practice (pp. 275 294). Amsterdam: Elsevier. Cullum, C. M., Hynan, L. S., Grosch, M., Parikh, M., & Weiner, M. F. (2014). Teleneuropsychology: Evidence for video conference-based neuropsychological assessment. Journal of the International Neuropsychological Society, 20, 1 6. Cullum, C. M., Weiner, M. F., Gehrmann, H. R., & Hynan, L. S. (2006). Feasibility of telecognitive assessment in dementia. Assessment, 13(4), 385 390. Dawson, D. R., Anderson, N. D., Burgess, P., Cooper, E., Krpan, K. M., & Stuss, D. T. (2009). 
Further development of the Multiple Errands Test: Standardization, scoring, reliability, and ecological validity for the Baycrest version. Archives of Physical Medicine and Rehabilitation, 90(11), S41 S51.


Dodrill, C. B. (1997). Myths of neuropsychology. The Clinical Neuropsychologist, 11, 1 17. Donker, T., Petrie, K., Proudfoot, J., Clarke, J., Birch, M. R., & Christensen, H. (2013). Smartphones for smarter delivery of mental health programs: A systematic review. Journal of Medical Internet Research, 15(11), e247. Dufau, S., Dun˜abeitia, J. A., Moret-Tatay, C., McGonigal, A., Peeters, D., Alario, F. X., . . . Grainger, J. (2011). Smart phone, smart science: How the use of smartphones can revolutionize research in cognitive science. PLoS One, 6(9), e24974. eVisit (2016). 36 Telemedicine statistics you should know. Retrieved from https://evisit.com/ 36-telemedicine-statistics-know/. Fragopanagos, N., & Taylor, J. G. (2005). Emotion recognition in human computer interaction. Neural Networks, 18(4), 389 405. Goldstein, G. (1996). Functional considerations in neuropsychology. In R. J. Sbordone, & C. J. Long (Eds.), Ecological validity of neuropsychological testing (pp. 75 89). Delray Beach, FL: GR Press/St. Lucie Press. Goverover, Y., Chiaravalloti, N. D., & DeLuca, J. (2010). Actual reality: A new approach to functional assessment in persons with multiple sclerosis. Archives of Physical Medicine and Rehabilitation, 91, 252 260. Goverover, Y., & DeLuca, J. (2015). Actual reality: Using the internet to assess everyday functioning after traumatic brain injury, E-pub Brain Injury. Available from https://doi. org/10.3109/02699052.2015.1004744. Goverover, Y., Chiaravalloti, N., & DeLuca, J. (2015). Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS) and performance of everyday life tasks: Actual reality, E-pub; 2016 Multiple Sclerosis Journal, 22(4), 544 550. Available from https://doi.org/10.1177/1352458515593637. Halstead, W. C. (1947). Brain and intelligence: A quantitative study of the frontal lobes. Chicago, IL: University of Chicago Press. Heaton, R. K. (2003). Wisconsin card sorting test computer version 4.0. Odessa, FL: Psychological Assessment Resources. Hirschberg, J., & Manning, C. D. (2015). Advances in natural language processing. Science, 349(6245), 261 266. Insel, T., Cuthbert, B., Garvey, M., Heinssen, R., Pine, D. S., Quinn, K., . . . Wang, P. (2010). Research domain criteria (RDoC): Toward a new classification framework for research on mental disorders. American Journal of Psychiatry, 167(7), 748 751. Iverson, G. L., Brooks, B. L., Ashton, V. L., Johnson, L. G., & Gualtieri, C. T. (2009). Does familiarity with computers affect computerized neuropsychological test performance. Journal of Clinical and Experimental Neuropsychology, 31, 594 604. Jacobsen, S. E., Sprenger, T., Andersson, S., & Krogstad, J. M. (2002). Neuropsychological assessment and telemedicine: A preliminary study examining the reliability of neuropsychological services performed via telecommunication. Journal of the International Neuropsychological Society, 9, 472 478. Johnson, J. H., & Mihal, W. L. (1973). The performance of blacks and whites in computerized versus manual testing environments. American Psychologist, 28, 694 699. Jones, M., & Johnston, D. (2011). Understanding phenomena in the real world: The case for real time data collection in health services research. Journal of Health Services Research & Policy, 16, 172 176. Kane, R. L., & Kay, G. G. (1992). Computerized assessment in neuropsychology: A review of tests and test batteries. Neuropsychology Review, 3(1), 1 117. Kane, R. L., & Reeves, D. L. (1997). Computerized test batteries. In A. M. Horton, D. Wedding, & J. 
Webster (Eds.), The neuropsychology handbook (Vol. 1). New York, NY: Springer.


Kane, R. L., Short, P., Sipes, W., & Flynn, C. F. (2005). Development and validation of the spaceflight cognitive assessment tool for Windows (WinSCAT). Aviation, Space, and Environmental Medicine, 76(6, Suppl.), B183 B191. Kane, R. L., & Parsons, T. D. (Eds.), (2017). The role of technology in clinical neuropsychology. Oxford: Oxford University Press. Kay, G. G., & Feldman, N. (2013). Effects of Armodafinil on simulated driving and selfreport measures in obstructive sleep apnea patients prior to treatment with continuous positive airway pressure. Journal of Clinical Sleep Medicine, 9(5), 445 454. Larrabee, G. J. (2014). Test validity and performance validity: Considerations in providing a framework for development of an ability-focused neuropsychological test battery. Archives of Clinical Neuropsychology, 29, 695 714. Larrabee, G. J., & Crook, T. (1988). A computerized everyday memory battery for assessing treatment effects. Psychopharmacology Bulletin, 24(4), 695 697. Letz, R., & Baker, E. L. (1988). Neurobehavioral evaluation system 2: Users manual (version 4.2). Winchester, MA: Neurobehavioral Systems. Loh, P. K., Donaldson, M., Flicker, L., Maher, S., & Goldswain, P. (2007). Development of a telemedicine protocol for the diagnosis of Alzheimer’s disease. Journal of Telemedicine and Telecare, 13(2), 90 94. Loh, P. K., Ramesh, P., Maher, S., Saligari, J., Flicker, L., & Goldswain, P. (2004). Can patients with dementia be assessed at a distance? The use of telehealth and standardized assessments. Internal Medicine Journal, 34, 239 242. Longevity SCO. (2014) A consensus of the brain training industry fom the scientific community. McEachern, W., Kirk, A., Morgan, D. G., Crossley, M., & Henry, C. (2008). Reliability of the MMSE administered in-person and by telehealth. Canadian Journal of Neurological Sciences, 35(5), 643 646. Meehl, P. E. (1987). Theory and practice: Reflections of an academic clinician. In E. F. Bourg, R. J. Bent, J. E. Callan, N. F. Jones, J. McHolland, & G. Stricker (Eds.), Standards and evaluation in the education and training of professional psychologists: Knowledge, attitudes, and skills (pp. 7 23). Transcript Press. Mitchell, R. L., & Xu, Y. (2015). What is the value of embedding artificial emotional prosody in human computer interactions. Frontiers in Psychology, 6. Parikh, M., Grosch, M. C., Graham, L. L., Hynan, L. S., Weiner, M., Shore, J. H., . . . Cullum, C. M. (2013). Consumer acceptability of brief videoconference-based neuropsychological assessment in older individuals with and without cognitive impairment. The Clinical Neuropsychologist, 27(5), 808 817. Parsey, C. M., & Schmitter-Edgecombe, M. (2013). Applications of technology in neuropsychology. The Clinical Neuropsychologist, 27(8), 1328 1361. Parsons, T. D. (2015). Virtual reality for enhanced ecological validity and experimental control in clinical, affective, and social neurosciences. Frontiers in Human Neuroscience, 9, 1 19. Parsons, T. D. (2016). Clinical neuropsychology and technology: What’s new and how we can use it. New York, NY: Springer Press. Parsons, T. D., & Phillips, A. (2016). Virtual reality for psychological assessment in clinical practice. Practice Innovations, 1, 197 217. Parsons, T. D., Carlew, A. R., Magtoto, J., & Stonecipher, K. (2017). The potential of function-led virtual environments for ecologically valid measures of executive function in experimental and clinical neuropsychology. Neuropsychological Rehabilitation, 37 (5), 777 807.


Parsons, T. D., Gagglioli, A., & Riva, G. (2017). Virtual environments in social neuroscience. Brain Sciences, 7(42), 1 21. Parsons, T. D. (2017). Cyberpsychology and the brain: The interaction of neuroscience and affective computing. Cambridge: Cambridge University Press. Parsons, T. D., McMahan, T., & Kane, R. (in press). Practice parameters facilitating adoption of advanced technologies for enhancing neuropsychological assessment paradigms. The Clinical Neuropsychologist. Posner, M. I. (2011). Attention in a social world. Oxford: Oxford University Press. Posner, M. I. (2016). Orienting of attention: then and now. The Quarterly Journal of Experimental Psychology, 69, 1864 1875. Posner, M. I., & Rothhart, M. K. (2007). Research on attention networks as a model for the integration of psychological science. Annual Review of Psychology, 58, 1 23. Reeves, D. L., Winter, K., Bleiberg, J., & Kane, R. L. (2007). ANAM genogram: Historical perspectives, description, and current endeavors. Archives of Clinical Neuropsychology, 22S, S15 S37. Reitan, R. M. (1955). The relation of the trail making test to organic brain damage. Journal of Consulting Psychology, 195, 393 394. Reitan, R. M., & Wolfson, D. (1985). The Halstead-Reitan neuropsychological test battery: Theory and clinical interpretation. Tucson, AZ: Neuropsychology Press. Renison, B., Ponsford, J., Testa, R., Richardson, B., & Brownfield, K. (2012). The ecological and construct validity of a newly developed measure of executive function: The virtual library task. Journal of the International Neuropsychological Society, 18, 440 450. Roark, B., Mitchell, M., Hosom, J.-P., Hollingshead, K., & Kaye, J. (2011). Spoken language derived measures for detecting mild cognitive impairment. IEEE Transactions on Audio, Speech, and Language Processing, 19(7), 2081 2090. Schuster, R. M., Mermelstein, R. J., & Hedeker, D. (2015). Acceptability and feasibility of a visual working memory task in an ecological momentary assessment paradigm. Psychological Assessment, 27(4), 1463. Settle, J. R., Robinson, S. A., Kane, R., Maloni, H. W., & Wallin, M. T. (2015). Remote cognitive assessments for patients with multiple sclerosis: A feasibility study. Multiple Sclerosis Journal, 21, 1072 1079. Available from https://doi.org/10.1177/ 1352458514559296. Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment. Annual Review of Clinical Psychology, 4, 1 32. Simons, D. J., Boot, W. R., Charness, N., Gathercole, S. E., Chabris, C. F., Hambrick, D. Z., . . . Stine-Morrow, E. A. (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17(3), 103 186. Stats, I. L. (2015). Retrieved from ,http://www.internetlivestats.com/internet-users-by-country/. Accessed 28.05.15. Sternberg, R. J. (1997). Intelligence and lifelong learning: What’s new and how can we use it? American Psychologist, 52(10), 1134. Swiercinsky, D. (1984). Computerized neuropsychological assessment. Houston, TX: International Neuropsychological Society. Szatloczki, G., Hoffmann, I., Vincze, V., Kalman, J., & Pakaski, M. (2015). Speaking in Alzheimer’s disease, is that an early sign? Importance of changes in language abilities in Alzheimer’s disease. Frontiers in Aging Neuroscience, 7, 195. Waters, A. J., & Li, Y. (2008). Evaluating the utility of administering a reaction time task in an ecological momentary assessment study. Psychopharmacology, 197, 25 35.


Waters, A. J., Szeto, E. H., Wetter, D. W., Cinciripini, P. M., Robinson, J. D., & Li, Y. (2014). Cognition and craving during smoking cessation: An ecological momentary assessments study. Nicotine & Tobacco Research, 16(Suppl. 2), S111 S118. Wu, D., Parsons, T. D., & Narayanan, S. S. (2010). Acoustic feature analysis in speech emotion primitives estimation. Interspeech, 785 788. Yellowlees, P., Shore, J., & Roberts, L. (2010). Practice guidelines for videoconferencingbased telemental health-October 2009. Telemedicine and e-Health, 16(10), 1074 1089.