Neuroscience and Biobehavioral Reviews 74 (2017) 98–114
Application of technology to social communication impairment in childhood and adolescence
Andrea Trubanova Wieckowski*, Susan W. White
Department of Psychology, Virginia Polytechnic Institute and State University, United States
Article history: Received 24 May 2016; Received in revised form 13 December 2016; Accepted 26 December 2016; Available online 14 January 2017.
Keywords: Social communication; Technology; Childhood; Adolescence
Abstract
Social communication impairment has been implicated in various mental health disorders. The primary aim of this review paper is to summarize the extant research on the development and application of technologies to address social communication deficits, conceptualized according to the four constructs outlined by the NIMH's Research Domain Criteria (RDoC), transdiagnostically in children and adolescents. An exhaustive and systematic search yielded 69 peer-reviewed articles meeting all inclusion criteria (i.e., used technology, applied the technology to target impairment in at least one of the four constructs of social communication, and included a child or adolescent sample). We found limited use of technology for exploration of impairment in reception of non-facial communication, compared to the other social communication constructs. In addition, there has been an overwhelming focus on social communication impairment in children and adolescents with Autism Spectrum Disorder (ASD), with relatively few studies evaluating technology application in other clinical populations. Implications for future directions for technological interventions to treat social communication impairments transdiagnostically are discussed. © 2017 Elsevier Ltd. All rights reserved.
Contents
1. Introduction
2. Social communication
3. Social communication in childhood and adolescence
4. Methods
   4.1. Selection criteria
   4.2. Search methods
   4.3. Coding and variable definitions
5. Results
   5.1. Reception of facial communication
      5.1.1. Computer-based applications for reception of facial communication
      5.1.2. Mobile applications for reception of facial communication
      5.1.3. Virtual reality-based applications for reception of facial communication
      5.1.4. Robotics and reception of facial communication
      5.1.5. Reception of facial communication with other technology
      5.1.6. Reception of facial communication summary
   5.2. Production of facial communication
      5.2.1. Computer-based applications for production of facial communication
      5.2.2. Mobile applications for production of facial communication
      5.2.3. Virtual reality-based applications for production of facial communication
      5.2.4. Robotics for production of facial communication
      5.2.5. Production of facial communication with other technology
      5.2.6. Production of facial communication summary
   5.3. Reception of non-facial communication
      5.3.1. Computer-based applications for reception of non-facial communication
      5.3.2. Virtual reality-based applications for reception of non-facial communication
      5.3.3. Reception of non-facial communication summary
   5.4. Production of non-facial communication
      5.4.1. Computer-based applications for production of non-facial communication
      5.4.2. Mobile-based applications for production of non-facial communication
      5.4.3. Virtual reality-based applications for production of non-facial communication
      5.4.4. Robotics for production of non-facial communication
      5.4.5. Video modeling and motion capture for production of non-facial communication
      5.4.6. Production of non-facial communication summary
6. Discussion
7. Conclusion
References

* Corresponding author at: 460 Turner Street, Collegiate Square 207, Child Study Center, Blacksburg, VA 24060, United States. E-mail address: [email protected] (A.T. Wieckowski).
http://dx.doi.org/10.1016/j.neubiorev.2016.12.030
0149-7634/© 2017 Elsevier Ltd. All rights reserved.
1. Introduction

There has been a recent upsurge in the use of technology for assessment and treatment of mental health and behavioral disorders, due largely to practical benefits such as speed of delivery, convenience, and accessibility. Technology-based interventions may be especially germane to remediation of social communication deficits because of the nature of such deficits. For instance, relative to observational or questionnaire measures, technology may decrease or even eliminate the need for direct contact with a clinician, which may improve validity (e.g., if a client behaves differently owing to therapist presence) as well as decrease the time burden on human assessors. Although a wide variety of technologies have been proposed and developed to treat social communication impairments in children, there has not yet been a systematic review of this research base. The purpose of this review is to synthesize research on technology, regardless of specific type or application, developed for remediation of social communication impairments, in order to identify promising directions for future development of technologies for children and adolescents.

We focused on the social communication domain primarily because of its transdiagnostic nature. Impairments in social communication are continuously distributed and are seen in typically developing children without mental health disorders (e.g., during times of stress or transition) as well as across a range of disorders (e.g., autism). Heterogeneity within psychiatric disorders, as well as comorbidity and symptom-sharing across diagnoses, has prompted a call for increased attention to identification of core processes that lead to psychopathology and impairment, as well as interventions that have transdiagnostic impact (Insel et al., 2010). Accordingly, the National Institute of Mental Health (NIMH) has taken steps to promote research on functions that cut across disorders as traditionally defined (Insel et al., 2010).
The NIMH proposed the Research Domain Criteria (RDoC) framework to lay the foundation for a new classification system by defining basic dimensions of functioning that cut across disorders and can be studied across multiple units of analysis. The 'Systems for Social Processes' domain comprises processes that mediate responses in interpersonal settings, including perception and interpretation of others' actions. This domain is subdivided into four constructs: affiliation and attachment, social communication, perception and understanding of self, and perception and understanding of others. The social communication construct is the focus of this review.

Recent advances in technology have greatly affected delivery of assessment and treatment across diagnostic groups. For example, technology-based functional assessment, which uses virtual reality (VR) to simulate real-world situations and determine what a person would do in a specific situation, has been utilized to assess maintaining factors underlying problem behaviors. Parsons et al. (2007) employed a VR classroom to differentiate between children with Attention Deficit Hyperactivity Disorder (ADHD) and typically developing children based on errors made. Although technology has a longer history in the assessment of mental health problems, it has also been extended to intervention. White et al. (2014) surmised that neurotechnologies, or technology-based tools used for both understanding and remediating neural processes underlying psychopathology, hold great promise in the future of intervention science, and offered a framework for evaluation of novel neurotechnologies.
2. Social communication

Social communication is a dynamic process that involves the exchange of information within an interpersonal context. It comprises both receptive (i.e., information perception/interpretation) and productive (i.e., information conveyance) aspects used in the exchange of socially relevant information. Social communication, per the RDoC, is further organized into four sub-constructs: (1) reception of facial communication, the capacity to perceive emotional states via facial expression; (2) production of facial communication, the capacity to convey or express emotional states via facial expression; (3) reception of non-facial communication, the capacity to perceive emotional states via modalities other than facial expression; and (4) production of non-facial communication, the capacity to express emotional states via modalities other than facial expression.

Social communication is a complex, dynamic process in which children need to attend to multiple cues, often simultaneously, and learn to alter their communication style depending on both the context and the interactive partner. Social communication underlies interpersonal initiation as well as responsivity to social bids. Difficulty in these processes affects one's ability to infer the emotions of others as well as one's ability to express emotions, both of which are crucial in social interactions (Nuske et al., 2013). Impairment in emotional competence during social communication may lead to several undesirable outcomes, including social withdrawal and isolation, which in turn may cause stress and other health problems (Seeman, 1996). In addition to negative sequelae for social connectedness and wellbeing, social communication impairment affects academic skills (Izard et al., 2001) and success in the workplace (e.g., Butterworth and Strauch, 1994). The diverse and often long-lasting implications of social communication impairment underscore the need to address these deficits in childhood, when these processes are arguably more malleable (Weisz and Kazdin, 2010), and to try to minimize the social, academic, and health impacts later in life.
3. Social communication in childhood and adolescence

Adequately recognizing others' socio-emotional expressions is crucial for successful interpersonal interaction in everyday life. Research has documented impaired recognition of facial emotion expression across disorders in children and adolescents, including children with externalizing disorders (Aspan et al., 2013; Blair and Coles, 2000; Marsh et al., 2008), children and adolescents at risk for severe psychiatric disorders, specifically bipolar disorder (e.g., Wegbreit et al., 2015) and schizophrenia (Dickson et al., 2014), children with depression (e.g., Jenness et al., 2014), children and adolescents with Autism Spectrum Disorder (ASD; Evers et al., 2015), and adolescents with eating disorders (e.g., Zonnevijlle-Bender et al., 2002). Similarly, difficulties with expression, or production, of facial communication have been found in childhood and adolescence across mental health disorders, including Post-Traumatic Stress Disorder (e.g., Fujiwara et al., 2015), ASD (Loveland et al., 1994), eating disorders (Rhind et al., 2014), and externalizing disorders (e.g., de Wied et al., 2006). Although less studied than reception of facial communication, there is evidence for impairment in both reception (Corbett and Glidden, 2000; Emerson et al., 1999; Hobson et al., 1989; Wang et al., 2006; Zonnevylle-Bender et al., 2004) and production (Ben Shalom et al., 2006; Davies et al., 2012; Fussner et al., 2015) of non-facial communication across a range of disorders, including ASD, ADHD, depression, and eating disorders. Such difficulties not only occur across multiple diagnoses, they also have a pronounced impact on the ability to communicate in social interactions during childhood and adolescence (Brozgold et al., 1998). As such, deficits in all four constructs subsumed within the social communication domain likely lead to, or exacerbate, impairments seen across diagnoses.
Given the broad reach of these impairments and promising emerging research on technological approaches to deficits in specific populations (e.g., autism), applying technology to social communication impairment allows these processes to be targeted transdiagnostically. Although the majority of the research on social communication deficits has been conducted with clinical samples, the current review addresses the use of technology for social communication in psychiatric samples as well as in typically developing children and adolescents. The purpose of this review is to assess the extant research on the use of technology to address social communication impairment in youth. Our focus is on technology designed for, or implemented with, children or adolescents, irrespective of whether they have a diagnosis and irrespective of the type of diagnosis, in order to take an RDoC approach to assessing potential treatments for social communication impairments transdiagnostically.
4. Methods

4.1. Selection criteria

In order to be included in the review, an article needed to meet three inclusion criteria. First, the study used technology, broadly defined as any tool, device, or procedure using electronics. Second, the technology was applied to target impairment in at least one of the four constructs of social communication. Third, the sample's minimum participant age did not exceed 18 years or, if there was no identified sample, the technology was developed specifically for use with children or adolescents, as some of the reviewed technology has not been evaluated with either the target or an analogue sample.
Table 1
Keyword search terms for each construct of social communication.

Construct | Keywords
Reception of Facial Communication | EMOTION RECOGNI*; FAC* RECOGNI*; SOCIAL PERCEPT*; FAC* PERCEPT*; FAC* AFFECT and PERCEPT*; FAC* COMMUNICATION
Production of Facial Communication | EMOTION EXPRESS*; FACE and EXPRESS*; SOCIAL RESPONS*; FAC* PRODUC*; FACIAL AFFECT and PRODUC*; FAC* and COMMUNICATION
Reception of Non-Facial Communication | EMOTION RECOGNI*; SOCIAL PERCEPT*; NONVERBAL RECOGNI*; NONVERBAL PERCEPT*; EMOTION* and PROSOD*; EMOTION* and GESTURE; EMOTION* and VOCALI*; EMOTION* and BODY
Production of Non-Facial Communication | EMOTION EXPRESS*; EMOTION* PROSOD*; NONVERBAL EXPRESS*; NONVERBAL PRODUC*; EMOTION* and GESTURE; SOCIAL RESPONS*; EMOTION* and VOCALI*; EMOTION* and BODY
4.2. Search methods

Electronic databases (PubMed, IEEE Xplore Digital Library, ACM Digital Library, and Google Scholar) were searched to identify relevant, peer-reviewed articles written in English and published between January 2005 and December 2015. Both peer-reviewed journal articles and peer-reviewed conference papers were included. The keywords specified for each construct in Table 1 were combined with the text word CHILD* or ADOLESCEN* and the search term TECHNOLOG* or COMPUT* to identify studies on the application of technology to social communication constructs. After the database search, the references of relevant articles were reviewed for articles that were not identified in the initial database search. Identified articles were excluded if (1) the article did not use technology, (2) the article did not apply the technology to target social communication impairment, or (3) the technology was not developed for or tested with children.

4.3. Coding and variable definitions

For all articles, sample characteristics are reported, including the age of the participants, the number of participants included in the study, and participant characteristics such as mental health or medical diagnoses. In addition, characteristics of the technology are reported. Technology 'type' was coded as falling into one of five groups: (1) computer-based, which included applications run on a computer or laptop; (2) mobile-based, which included any hand-held electronics, including phones and tablets, that use wireless computing; (3) VR, defined as computer-simulated reality that replicates an environment with which the user can interact; (4) robotics, in which a humanoid robot interacts directly with the participant; and (5) other, which includes technology that could not be grouped under the above categories, either for lack of information or because it formed its own category (e.g., a motion capture system). The number of identified studies for each technology type is reported in Table 2.
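The crossing of construct keywords (Table 1) with the population and technology terms can be sketched programmatically. This is an illustrative reconstruction only; the review does not specify the exact query syntax used for each database, and `build_queries` is a hypothetical helper, not part of the authors' procedure:

```python
from itertools import product

# Keywords for one construct (reception of facial communication; see Table 1).
construct_terms = [
    "EMOTION RECOGNI*", "FAC* RECOGNI*", "SOCIAL PERCEPT*",
    "FAC* PERCEPT*", "FAC* AFFECT and PERCEPT*", "FAC* COMMUNICATION",
]
population_terms = ["CHILD*", "ADOLESCEN*"]
technology_terms = ["TECHNOLOG*", "COMPUT*"]

def build_queries(construct, population, technology):
    """Cross each construct keyword with each population and technology term."""
    return [
        f"({c}) AND ({p}) AND ({t})"
        for c, p, t in product(construct, population, technology)
    ]

queries = build_queries(construct_terms, population_terms, technology_terms)
print(len(queries))   # 6 keywords x 2 population terms x 2 technology terms = 24
print(queries[0])     # (EMOTION RECOGNI*) AND (CHILD*) AND (TECHNOLOG*)
```

The same crossing applies to the other three constructs, substituting their keyword lists from Table 1.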
Table 2
Number of identified studies for each technology type.

Construct | Technology Type | Number of Studies
Reception of Facial Communication (n = 35) | Computer-based | 12
 | Mobile applications | 7
 | Virtual Reality | 11
 | Robotics | 2
 | Other | 3
Production of Facial Communication (n = 32) | Computer-based | 7
 | Mobile applications | 5
 | Virtual Reality | 12
 | Robotics | 6
 | Other | 2
Reception of Non-Facial Communication (n = 7) | Computer-based | 3
 | Mobile applications | 0
 | Virtual Reality | 4
 | Robotics | 0
 | Other | 0
Production of Non-Facial Communication (n = 19) | Computer-based | 4
 | Mobile applications | 4
 | Virtual Reality | 2
 | Robotics | 5
 | Other | 4
In addition to technology type, its intended application (i.e., what the technology was designed to address), modality (or brief description), and stage of study in terms of development and testing are indicated. Stage of technology was coded as: 1) research and development (RD), stage in which no participants have yet interacted with the technology; 2) usability (U), in which participants interacted with the system and provided feedback regarding the technology, but the outcome of treatment in terms of improving social communication has not been explored; 3) initial evaluation (IE), including pilot testing, in which improvement in social communication following the technological intervention has been explored but not yet in a controlled manner (i.e., descriptive outcomes only) or the data were not reported; and 4) evaluation (E), where the effectiveness of the technology to improve an aspect of social communication has been evaluated, and reported, in children or adolescents. These study characteristics for all identified articles are presented in Table 3.
5. Results

The search resulted in a total of 14,727 articles (2992 for reception of facial communication, 4263 for production of facial communication, 3576 for reception of non-facial communication, and 3896 for production of non-facial communication). After titles and abstracts were screened against the inclusion criteria, 794 articles remained (198 for reception of facial communication, 248 for production of facial communication, 184 for reception of non-facial communication, and 164 for production of non-facial communication). Articles were excluded either because the technology was used with adults only, or because the technology was used for assessment only, and not intervention. After examination of the full papers, 69 unique articles were identified, with eighteen articles falling under two constructs and three articles falling under three constructs. Thus, 35 articles were identified for reception of facial communication, 32 for production of facial communication, 7 for reception of non-facial communication, and 19 for production of non-facial communication. The search and identification flow is presented in Fig. 1.
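The per-construct totals reconcile with the 69 unique articles once multiply coded studies are counted: each article under two constructs is counted twice, and each under three constructs is counted three times. A quick arithmetic check (variable names are ours, not from the review):

```python
# Articles identified per construct (Section 5 of the review).
per_construct = {
    "reception of facial": 35,
    "production of facial": 32,
    "reception of non-facial": 7,
    "production of non-facial": 19,
}

unique_articles = 69
in_two_constructs = 18   # each contributes one extra count
in_three_constructs = 3  # each contributes two extra counts

total_with_overlap = unique_articles + in_two_constructs + 2 * in_three_constructs
assert total_with_overlap == sum(per_construct.values()) == 93
print(total_with_overlap)  # 93
```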
While some technological interventions allow for treatment of multiple constructs within the social communication subdomain, the majority have focused on a single construct. In order to review the technology used to address specific impairments in social communication, the sections below present the identified articles for each construct separately. When a study addresses more than one construct, it is described in all relevant sections. Effect sizes are provided for studies that examined change with the intervention and included sufficient data to calculate them; different indexes of effect size are used depending on the study's design. Table 3 presents a summary of the main characteristics of the included studies.

5.1. Reception of facial communication

Several different applications have been used to provide direct intervention to improve reception of facial communication in children and adolescents. Thirty-five articles were identified, seventeen of which provided at least a preliminary evaluation of the effectiveness of the technology for improving some aspect of reception of facial communication. Deployment modalities included computer-based applications, mobile (e.g., tablet, iPhone) applications, VR applications, and robot-based interventions, as well as a DVD series presented on a television screen.

5.1.1. Computer-based applications for reception of facial communication

Computer-based interventions are leading the field, representing about a third of the research within this construct (34.29%). Of these studies, the majority (75%) evaluated the effectiveness of the intervention, through either an open pilot study or a randomized controlled trial. Additionally, most of the studies targeted facial emotion recognition, with few targeting broader face processing and attention.
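Where between-group Cohen's d values are reported in this review, d is the standardized mean difference scaled by the pooled standard deviation. A minimal sketch of that computation, using invented illustrative numbers rather than data from any reviewed study:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    )
    return (m1 - m2) / pooled_sd

# Hypothetical post-treatment emotion-recognition scores for two groups of 20.
d = cohens_d(m1=24.0, s1=4.0, n1=20, m2=20.0, s2=4.0, n2=20)
print(round(d, 2))  # 1.0, a "large" effect by conventional benchmarks
```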
Several of the studies explored the use of computer-based interventions to teach facial emotion recognition skills, and evaluated these interventions in small to moderately sized samples (e.g., n = 4 to 49) of children with ASD. Together, these studies found that computer-based interventions produced moderate to strong pre-treatment to post-treatment effects (η² = 0.29; d = 0.34–3.05), as well as significant between-group differences with medium to large
Table 3
Study characteristics for articles utilizing technology to address impairment in each social communication construct.

Reception of Facial Communication (n = 35)

Age M (Range) | n | Char* | Type | Application | Description | Stage | Reference
NK (4–11) | 9 | ASD | Virtual Reality | Train facial recognition and expression | LIFEisGAME, real-time automatic facial expression analysis and virtual character synthesis | U | Abirached et al. (2011)
10.20 (NK); 10.0 (NK) | 18; 16 | SAD; SAD | Computer-based | Modify attention bias | Computerized attention training | E | Bar-Haim et al. (2011)
9.64 (7–11); 9.81 (8–11) | 26; 23 | ASD; ASD | Computer-based | Enhance social skills, including facial emotion recognition | The Junior Detective Training Program, social skills program with a computer component | E | Beaumont and Sofronoff (2008)
14.7 (13–17); 14.6 (13–17) | 10; 10 | ASD; TD | Virtual Reality | Enhance facial affect recognition | Facial affect recognition task within VR environment | U | Bekele et al. (2014)
12.2 (10–13) | 3 | ASD | Virtual Reality | Promote emotional expression & social skills | Augmented reality-based self-facial modeling | E | Chen et al. (2015)
9 (8–11) | 3 | IDD | Virtual Reality | Teach socially based emotions capability in social contexts | 3D-emotion system intervention program | E | Cheng and Chen (2010)
7.6 (7–8) | 3 | ASD | Virtual Reality | Enhance social competence (understand facial expressions) | Collaborative virtual learning environment with 3D expressive avatar & animated social situation | E | Cheng and Ye (2010)
NA | NA | NA | Computer-based | Train identification and recognition of facial expressions | Educational computer game with natural user interface | RD | Christinaki et al. (2014)
15 (14–16) | 2 | ASD | Robotics | Promote labeling of emotions | Robot-assisted play using ZECA (Zeno Engaging Children with Autism) | IE | Costa et al. (2014)
NA | NA | NA | Virtual Reality | Improve facial expression perception and production | Emotion Mirror: cartoon character responds dynamically to child's facial expression of emotion | RD | Deriso et al. (2012)
19.6 (12–32); 19.8 (12–32) | 5; 5 | ASD; ASD | Computer-based | Improve face processing strategies | Computerized face-training program | E | Faja et al. (2008)
NK (NK) | 7 | NK | Mobile-based | Enhance recognition of emotional facial expressions | Interactive mobile technology using 3D animation | U | Fergus et al. (2014)
NK (NK) | 2 | ASD | Virtual Reality | Teach recognition and expression of emotions | LIFEisGAME, a facial character animation system, with "Build a Face" game mode | U | Fernandes et al. (2011)
NA | NA | NA | Virtual Reality | Teach recognition of facial expressions | cMotion, a game with virtual characters to reinforce emotion recognition and problem solving | RD | Finkelstein et al. (2009)
5.6 (4–7); 6.2 (4–8); 5.4 (4–7) | 20; 18; 18 | ASD; ASD; TD | Other | Enhance recognition of emotions | The Transporters, an animated series | E | Golan et al. (2010)
NK (8–10) | 2; 4 | ASD; TD | Mobile-based | Teach facial emotions | CopyMe iPad game with instant feedback on performance | U | Harrold et al. (2014)
10.17 (6–15) | 49 | ASD | Computer-based | Teach facial recognition skills | FaceSay, computer-based social skills intervention | E | Hopkins et al. (2011)
NK (5–14) | 26 | ASD | Mobile-based | Enhance understanding, detection, and prediction of others' facial emotions | Set of activities, including emotion modeling, run on multitouch tables | U | Hourcade et al. (2012)
NK (4–4) | 11 | TD | Mobile-based | Teach emotion recognition | Interactive game on multi-touch tablet | U | Humphries and McDonald (2011)
10.27 (8–11) | 8 | ASD | Computer-based | Teach emotion recognition skills | Mind Reading: The Interactive Guide to Emotions, computer software | E | Lacava et al. (2007)
8.6 (7–10) | 4 | ASD | Computer-based | Teach emotion recognition skills | Mind Reading: The Interactive Guide to Emotions, computer software | E | LaCava et al. (2010)
NK (NK) | 3 | ASD | Mobile-based | In-situation learning of facial affect | Wearable camera and facial analysis software | U | Madsen et al. (2008)
NK (6–12) | 5; 15 | ASD; TD | Robotics | Observe, label, and imitate facial expressions | Social robot-based treatment with multi-parametric acquisition system and therapeutic protocol | IE | Mazzei et al. (2012)
9 (6–12) | 2 | ASD | Virtual Reality | Encourage affective and social behavior | Multimodal human-computer interaction system with embodied conversational agent | U | Mower et al. (2011)
23.0 (NK) | 10 | TD | Mobile-based | Train facial expression recognition | FEET: Facial Expression Expert Program presented on a tablet | U | Nunez et al. (2015)
7.67 (NA); 8.58 (NA) | 1; 1 | ASD; TD | Mobile-based | Enhance social skills and emotion recognition | Interactive life-like facial display (FACE) presented on an android | U | Pioggia et al. (2005)
NK (NK) | NK | ID | Computer-based | Facial emotion recognition and imitation | KIDEA, interactive learning media solution using Kinect sensor | RD | Puspitasari et al. (2013)
7.68 (5–11); 7.87 (5–11) | 16; 15 | ASD; ASD | Computer-based | Teach facial emotion recognition skills | FaceSay™, computer-based social skills intervention | E | Rice et al. (2015)
NK (8–14) | 3 | ASD | Computer-based | Attention to relevant facial emotion cues | Computer-assisted facial emotion training program | E | Russo-Ponsaran et al. (2014)
11.4 (6–17) | 33 | ASD | Virtual Reality | Teach emotion recognition skills | JeStiMulE, interactive and multi-sensory computer-based game | E | Serret et al. (2014)
12.6 (11–14)
11
ASD
Virtual Reality
Enhance social competence (e.g. facial recognition)
Social Competence Intervention through iSocial, 3D virtual learning environment
E
Stichter et al. (2014)
10.5 (NK) 11.4 (NK)
42 37
ASD ASD
Computer-based
Teach facial recognition skills
Let’s Face It! Computer program
E
Tanaka et al. (2010)
NA
NA
NA
Computer-based
Improve facial expression recognition
Facial Expression Wonderland computer game training program
RD
Tseng and Do (2010)
5.23 (4–7) 5.16 (4–7)
28 27
ASD ASD
Other
Enhance recognition of emotions
The Transporters, an animated series
E
Williams et al. (2012)
NK (4–8) NK (4–8)
13 12
PDD PDD
Other
Enhance recognition of emotions
The Transporters, an animated series
E
Young and Posselt (2012)
Production of Facial Communication (n = 32)

References | Age M (Range) | n | Char* | Type | Application | Description | Stage
Abirached et al. (2011) | NK (4–11) | 9 | ASD | Virtual Reality | Train facial recognition and expression | LIFEisGAME, real-time automatic facial expression analysis and virtual character synthesis | U
Baur et al. (2015) | 14.4 (13–16) | 20 | TD | Virtual Reality | Train social cues including facial expressions | Nonverbal behavior analyzer system facilitating analysis of social signals | E
Bekele et al. (2013) | 4.7 (3–5); 4.3 (3–5) | 6; 6 | ASD; TD | Robotics | Promote skills in early social orienting | Adaptive Robot-Mediated Intervention Architecture with real-time head tracking | U
Bernardini et al. (2014) | 8.41 (4–14) | 29 | ASD | Virtual Reality | Foster social communication (including joint attention) | ECHOES: serious game in which children interact with a virtual character in a social situation | E
Charlop et al. (2010) | 9.08 (7–11) | 3 | ASD | Other | Promote socially expressive behaviors (e.g., facial expressions) | Videos eliciting socially expressive behaviors with actor as child's model | E
Chen et al. (2015) | 12.2 (10–13) | 3 | ASD | Virtual Reality | Promote emotional expression & social skills | Augmented reality-based self-facial modeling | E
Cheng and Ye (2010) | 7.6 (7–8) | 3 | ASD | Virtual Reality | Enhance social competence (eye contact) | Collaborative virtual learning environment with 3D expressive avatar & animated social situation | E
Cockburn et al. (2008) | NA | NA | NA | Computer-based | Improve expression production skills | SmileMaze tutoring system with Computer Expression Recognition Toolbox | RD
Deriso et al. (2012) | NA | NA | NA | Virtual Reality | Improve facial expression perception and production | Emotion Mirror: cartoon character responds dynamically to child's facial expression of emotion | RD
Duquette et al. (2008) | 4.96 (4–5) | 4 | ASD | Robotics | Increase reciprocal interaction, including joint attention | Tito, a predictable mobile robot, with small wireless microphone-camera to measure eye gaze | E
Escobedo et al. (2012) | 10.8 (8–11) | 9; 3 | TD; ASD | Mobile-based | Practice of social skills in real-life situations (smiles, eye contact) | MOSOCO, a mobile assistive application that uses augmented reality and visual supports | IE
Feng et al. (2013) | 11.5 (7–17) | 12 | ASD | Robotics | Improve eye-gaze attention and joint attention skills | Series of games with humanoid robot (NAO) encouraging eye contact during interaction | E
Fernandes et al. (2011) | NK (NK) | 2 | ASD | Virtual Reality | Teach recognition and expression of emotions | LIFEisGAME, a facial character animation system, with "Build a Face" game mode | U
Gal et al. (2009) | NK (8–10) | 6 | ASD | Virtual Reality | Enhance social communication including eye gaze and affect | StoryTable, a multi-user touchable interface | E
Gordon et al. (2014) | 10.89 (6–18) | 30 | ASD | Computer-based | Train facial expression production | FaceMaze automated computer recognition system that analyzes expression in real time | E
Harrold et al. (2012) | NA | NA | NA | Mobile-based | Aid emotional development (facial expression) | Expression recognition game which will ask child to mimic onscreen photographed facial expression | RD
Harrold et al. (2014) | NK (8–10) | 2; 4 | ASD; TD | Mobile-based | Teach facial emotions | CopyMe iPad game with instant feedback on performance | U
Jain et al. (2012) | NK (5–12) | 9 | ASD | Virtual Reality | Influence emotional behavior by teaching facial expressions | Interactive computer game that tracks facial features of the participant | U
Kaliouby and Goodwin (2008) | NA | NA | NA | Other | Enhance awareness of social signals, initiation and turn-taking | Interactive Social-Emotional Toolkit: suite of wearable technologies | RD
Lahiri et al. (2013) | 16.1 (13–18) | 8 | ASD | Virtual Reality | Improve engagement level during social interaction | VR-based interactive system with gaze-sensitive adaptive response technology | E
Lahiri et al. (2011) | 15.6 (13–17) | 6 | ASD | Virtual Reality | Alter dynamic gaze patterns through individualized feedback | Virtual Interactive system with Gaze-sensitive Adaptive Response Technology | IE
López-Mencía et al. (2010) | NK (4–14) | 4 | Cerebral palsy | Virtual Reality | Reinforce learning of several areas, including emotions | Computer educational software tool (Aprendiendo) with virtual agent | U
Mazzei et al. (2012) | NK (6–12) | 5; 15 | ASD; TD | Robotics | Observe, label, and imitate facial expressions | Social robot-based treatment composed of a social robot and multi-parametric acquisition system | IE
Puspitasari et al. (2013) | NK (NK) | NK | ID | Computer-based | Facial emotion recognition and imitation | KIDEA, interactive learning media solution using Kinect sensor | RD
Ribeiro and Raposo (2014) | 8.0 (5–11) | 4 | ASD | Mobile-based | Encourage communicative intentions including gesture | ComFiM: Picture Exchange Communication for Multitouch Devices | IE
Rice et al. (2015) | 7.68 (5–11); 7.87 (5–11) | 16; 15 | ASD; ASD | Computer-based | Teach social skills including eye gaze and joint attention | FaceSay™, computer-based social skills intervention | E
Russo-Ponsaran et al. (2014) | NK (8–14) | 3 | ASD | Computer-based | Increase self-expression of facial emotion | Computer-assisted facial emotion training program | E
Tentori and Hayes (2010) | NK (8–10) | 14 | ASD | Mobile-based | Enhance social engagement, including eye contact | Social Compass, behavioral and educational intervention through computing technology | U
Tsai and Lin (2011) | NA | NA | NA | Computer-based | Express facial emotion and communicate with others | "FaceFlower", interactive computer game using expressions to control the game | RD
Wainer et al. (2014) | 8.5 (8–9) | 6 | ASD | Robotics | Facilitate collaborative play (eye gaze) | Triadic games using humanoid robot KASPAR | E
Warren et al. (2015) | 3.46 (2–4) | 6 | ASD | Robotics | Improve joint attention skills | Robotic interaction system that administers and adjusts joint attention prompts | E
Whalen et al. (2006) | 3.75 (3–4); 4.50 (4–5) | 4; 4 | ASD; DD | Computer-based | Teach variety of skills, including eye gaze and positive affect | TeachTown, an ABA-based, computer-assisted instruction program | E
Reception of Non-Facial Communication (n = 6)

References | Age M (Range) | n | Char* | Type | Application | Description | Stage
Beaumont and Sofronoff (2008) | 9.64 (7–11); 9.81 (8–11) | 26; 23 | ASD; ASD | Computer-based | Enhance social skills, including body posture emotion recognition | The Junior Detective Training Program, social skills program with a computer component | E
Chen et al. (2015) | 12.2 (10–13) | 3 | ASD | Virtual Reality | Promote emotional expression & social skills | Augmented reality-based self-facial modeling | E
Cheng and Ye (2010) | 7.6 (7–8) | 3 | ASD | Virtual Reality | Enhance social competence (non-verbal communication) | Collaborative virtual learning environment with 3D expressive avatar & animated social situation | E
McHugh et al. (2011) | 5.38 (5–5) | 3 | ASD | Computer-based | Teach recognition of situation-based emotions | Video based scenarios presented on a computer screen | E
Mower et al. (2011) | 9 (6–12) | 2 | ASD | Virtual Reality | Encourage affective and social behavior (including from emotional situations) | Multimodal human-computer interaction system with embodied conversational agent | U
Park et al. (2012) | NA | NA | NA | Computer-based | Teach emotions from simple sentences | Theory-driven assistive technology game design | RD
Zhang (2010) | NK (11–14) | 24 | ASD | Virtual Reality | Learn emotional expression (body language and verbal) | Interactive emotional social virtual framework with role-play | U
Production of Non-Facial Communication (n = 19)

References | Age M (Range) | n | Char* | Type | Application | Description | Stage
Axe and Evans (2012) | 5.5 (5–5) | 3 | PDD | Other | Respond to facial expression | Video modeling with adult modeling a response to expression | IE
Bauminger et al. (2007) | NK (9–11) | 6 | ASD | Computer-based | Improve social interaction | Multi-user, touch- and gesture-activated device allowing for group collaboration | IE
Bauminger-Zviely et al. (2013) | 9.83 (NK) | 22 | ASD | Computer-based | Enhance collaboration and social conversation | Computer program "Join-In" to teach collaboration and "No-Problem" to teach conversation | E
Baur et al. (2015) | 14.4 (13–16) | 20 | TD | Virtual Reality | Train social cues including posture and gestures | Nonverbal behavior analyzer system facilitating analysis of social signals | E
Charlop et al. (2010) | 9.08 (7–11) | 3 | ASD | Other | Promote socially expressive behaviors (gestures and intonation) | Video modeling through videos eliciting socially expressive behaviors | E
De Silva et al. (2006) | NK (NK) | 20 | TD | Motion-capture | Improve emotional expression (body gestures) | Interactive multi-agent based game with motion capture system | IE
Duquette et al. (2008) | 4.96 (4–5) | 4 | ASD | Robotics | Increase reciprocal interaction, including gestures and actions | Tito, a predictable mobile robot, with small wireless microphone-camera to measure eye gaze | E
Escobedo et al. (2012) | 10.8 (8–11) | 9; 3 | TD; ASD | Mobile-based | Practice of social skills in real-life situations including tone of voice | MOSOCO, a mobile assistive application that uses augmented reality and visual supports | IE
Gal et al. (2009) | NK (8–10) | 6 | ASD | Virtual Reality | Enhance social communication including comforting and sharing | StoryTable, a multi-user touchable interface | E
Hourcade et al. (2013) | 12.5 (10–14) | 8 | ASD | Mobile-based | Increase social interaction (initiation and response) | Set of iPad tablet applications from Open Autism Software | E
Jeong et al. (2015) | 10.14 (6–12) | 7 | ASD | Robotics | Increase ability to use words to express their emotions | Emotional story intervention using smart media | E
Kaliouby and Goodwin (2008) | NA | NA | NA | Other | Enhance verbal communication | Interactive Social-Emotional Toolkit: suite of wearable technologies | RD
Rice et al. (2015) | 7.68 (5–11); 7.87 (5–11) | 16; 15 | ASD; ASD | Computer-based | Teach social skills including approach and conversation | FaceSay™, computer-based social skills intervention | E
Ribeiro and Raposo (2014) | 8.0 (5–11) | 4 | ASD | Mobile-based | Encourage communicative intentions including gesture | ComFiM: Picture Exchange Communication for Multitouch Devices | IE
Tentori and Hayes (2010) | NK (8–10) | 14 | ASD | Mobile-based | Enhance social engagement and social skills | Social Compass, behavioral and educational intervention through computing technology | U
Wainer et al. (2014) | 8.5 (8–9) | 6 | ASD | Robotics | Facilitate collaborative play | Triadic games using humanoid robot KASPAR | E
Weiss et al. (2011) | NK (9–13); NK (9–13) | 8; 12 | ASD; ASD | Computer/Virtual Reality | Enhance collaboration and social competence | "Join-In Suite" 3-user touch-based application and "TalkAbout" collaborative computer program | U
Zheng et al. (2015a) | 3.83 (NK); 3.61 (NK) | 8; 8 | ASD; TD | Robotics | Teach imitation social skills (gestures) | Robot-mediated Imitation Skill Training Architecture (RISTA) | E
Zheng et al. (2015b) | 4.61 (NK); 4.63 (NK) | 4; 2 | ASD; TD | Robotics | Increase mixed gesture imitation | Robot-mediated mixed gesture imitation skill training | U
Note: For sample characteristics, NA indicates data is not applicable (i.e., no sample was used). NK indicates that data is not known (i.e., not reported). Column “n” indicates number of participants for each group. Char* indicates characteristics of the sample. TD: non-clinical, typically developing sample; ASD: autism spectrum disorder; PDD: pervasive developmental disorder; ID: intellectual disability; DD: developmental disability; IDD: intellectual developmental disability. For technology, “Type” specifies category of technology as defined within text, “Application” notes what behavior technology aims to target; “Description” provides a brief overview of the technology, and “Stage” designates stage of technology in the study. E: Evaluation stage (evaluated effectiveness), IE: Initial evaluation stage (descriptive outcome only), U: usability stage (targeted outcome not evaluated), RD: Research and development stage (not applied to a human sample).
Fig. 1. Flow diagram of identification of articles for review for each construct.
effects (ηp² = 0.42; d = 0.69–1.50; Beaumont and Sofronoff, 2008; Lacava et al., 2007; Hopkins et al., 2011; Rice et al., 2015; Tanaka et al., 2010). In addition, LaCava et al. (2010) indicated that emotion recognition improved after treatment, although the change was not significant. Aside from targeting facial and emotion recognition skills, several studies addressed broader facial processing skills (e.g., speed of recognition) and attention (i.e., Russo-Ponsaran et al., 2014; Faja et al., 2008). These pilot studies suggest that computerized intervention for facial emotion recognition impairment is promising, with results indicating the interventions to be effective for some aspects of facial processing (pre to post d = 0.06–5.1), yet the extant research has primarily focused on ASD, and replication with larger and more diverse samples is needed. Bar-Haim et al. (2011) presented the only study assessing use of computer-based technology to address impairment in reception of facial communication in children with mental health disorders other than ASD. The authors used a computer-based emotional attention spatial cuing task to train adolescents with social anxiety to disengage attention from threat (d = 0.89). In addition to the studies that reported effectiveness in terms of treatment outcome, three studies were identified that were in the research and development stage and have therefore not evaluated the proposed
computer-based technology (Christinaki et al., 2014; Puspitasari et al., 2013; Tseng and Do, 2010).

5.1.2. Mobile applications for reception of facial communication
Smaller handheld computers, cell phones, and tablets (i.e., mobile devices) were utilized in seven of the thirty-five identified studies targeting reception of facial communication. All seven studies applied mobile technology to target facial emotion recognition, and all reported usability results only, with no data on change in the targeted behavior. Five of the seven mobile-based applications used a tablet to teach facial emotion recognition. Of these five studies, three sampled children with ASD (Fergus et al., 2014; Harrold et al., 2014; Hourcade et al., 2012). The remaining two studies, which evaluated usability of a tablet for recognition of facial emotion, tested the application with typically developing individuals (Humphries and McDonald, 2011; Nunez et al., 2015). Although proposed for use with children, the tablet application designed by Nunez et al. was evaluated with adults only. All of the studies utilizing tablets for facial emotion recognition indicated the need for, and the usefulness of, the intervention. While these studies found the mobile-based technology to be usable, it is as yet unknown whether the interventions will improve facial emotion recognition in children. In addition to tablets, an ultra-mobile PC and an android have been explored for application deployment. Madsen et al. (2008) developed and tested, with three adolescents, a wearable camera system connected to an ultra-mobile PC with software that tracks, captures, interprets, and presents various interpretations of facial-head movement. The system aimed to increase interest in and willingness to comprehend facial emotions, and Madsen et al. found that the technology helped participants understand how different expressions produce different results. Similarly, Pioggia et al.
(2005) explored usability of an interactive, life-like facial display, presented on an android, to help children with ASD learn, identify, interpret, and use emotional information during social interaction. Their initial results showed that children with autism are able to interact with an android. While all of these studies tested the usability of proposed mobile-based technology, there are no data on the effectiveness of mobile-based technology for improving reception of facial communication.

5.1.3. Virtual reality-based applications for reception of facial communication
Eleven of the studies addressing reception of facial communication (31.4%) used VR, or computer-simulated reality, to improve emotion recognition. Of these studies, five evaluated use of a virtual learning environment to improve facial emotion recognition, in addition to other general social skills, via a three-dimensional expressive avatar manipulated either through the game or through the child's own facial expressions (Chen et al., 2015; Cheng and Chen, 2010; Cheng and Ye, 2010; Serret et al., 2014; Stichter et al., 2014). Results from these studies suggest that VR-based intervention can improve facial emotion recognition abilities in children; however, the magnitude of effect was not assessed in most of these studies, aside from those of Serret et al. and Stichter et al. Serret et al. found VR to significantly increase emotion recognition (d = 0.95–2.15). Stichter et al. found that parent report of child social competence increased from pre to post intervention, although children's scores on facial emotion recognition measures did not change with intervention. Notably, all but one (Cheng and Chen, 2010) of these five studies tested the VR environment with children with ASD. In addition, four of the identified studies evaluated usability of VR to improve reception of facial communication in children with ASD.
The VR was either presented as an expressive avatar in the game (Bekele et al., 2014; Mower et al., 2011) or as a virtual face modeled after the child user’s facial movements
(Abirached et al., 2011; Fernandes et al., 2011). On the whole, these studies suggest the potential of VR for addressing facial emotion recognition deficits, although efficacy data are minimal. While not yet tested with healthy or clinical samples, two studies have proposed the use of VR to help improve reception of facial communication. Finkelstein et al. (2009) used a virtual environment to allow children to manipulate an interactive virtual character, and Deriso et al. (2012) designed a system with a cartoon character that makes expressions that children are asked to copy. These technological designs require testing to determine usability, feasibility, and efficacy in improving facial emotion recognition.

5.1.4. Robotics and reception of facial communication
Interventions utilizing robotics have also been implemented to attenuate facial recognition deficits. Specifically, Costa et al. (2014) and Mazzei et al. (2012) both described an initial evaluation of humanoid robot-assisted intervention to promote labeling of facial expressions. These exploratory studies reported positive outcomes in terms of increasing facial emotion recognition with use of robotics.

5.1.5. Reception of facial communication with other technology
A DVD series, The Transporters, utilized animated vehicles with human faces grafted onto them (Golan et al., 2010; Williams et al., 2012; Young and Posselt, 2012). These studies evaluated the use of the DVD series, watched at home, in large samples of children with ASD, and found The Transporters to be effective in increasing facial emotion recognition relative to control groups (ηp² = 0.45–0.56; d = 0.26–1.60).

5.1.6. Reception of facial communication summary
Combined, computer-based games and programs, mobile-based applications, VR characters, humanoid robots, and a DVD series have been explored, in various ways, to improve reception of facial communication in children and adolescents.
Notably, the majority of the studies (n = 30, 85.7%) have evaluated the use of technology to address such deficits in individuals with ASD, and only a handful of studies have included other clinical populations. In addition, fewer than half of the studies evaluated the effectiveness of the technological interventions, and most of the evaluation studies (n = 9, 53%) fell into the computer-based category. Across these fairly diverse approaches, the effects of computer-based technology on recognition and perception of facial information ranged widely, from small to large. However, most of the studies reported statistically significant improvements in facial emotion recognition in participants receiving computer-based interventions. Effects of using VR for reception of facial communication similarly differed widely; of note, however, only four studies utilizing VR reported effect sizes or provided the data necessary to compute them. The DVD series likewise showed effectiveness, although the strength of the effects varied widely depending on the targeted outcome. Unfortunately, no studies of mobile-based technology or robotics for increasing reception of facial communication have reported effect sizes for the explored interventions. Therefore, the ability to evaluate the effectiveness of technology outside of computer-based interventions to improve reception of facial communication remains limited.

5.2. Production of facial communication
Several technological applications have also been used to provide direct intervention to improve production of facial communication. Thirty-two articles were identified, fifteen of which
evaluated the effectiveness of the technology to improve some aspect of production of facial communication.
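The effect sizes quoted throughout this review (Cohen's d for pre-to-post change, partial eta-squared ηp² for group designs) follow standard formulas. A minimal sketch of the arithmetic, using hypothetical score values rather than data from any reviewed study, and assuming the common pooled-SD convention for d (reviewed papers may use other denominators):

```python
from math import sqrt

def cohens_d(mean_pre, mean_post, sd_pre, sd_post):
    """Standardized pre-to-post change, using the pooled SD
    (one common convention; the reviewed papers vary in the denominator)."""
    sd_pooled = sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    return (mean_post - mean_pre) / sd_pooled

def eta_sq_to_d(eta_sq):
    """Approximate conversion from (partial) eta-squared to d for a
    two-group comparison: d = 2 * sqrt(eta² / (1 - eta²))."""
    return 2 * sqrt(eta_sq / (1 - eta_sq))

# Hypothetical emotion-recognition scores (illustrative only)
print(round(cohens_d(20.0, 26.0, 5.0, 4.0), 2))  # standardized gain
print(round(eta_sq_to_d(0.42), 2))               # eta² = 0.42 converts to a large d
```

By these conventions an ηp² in the 0.4–0.6 range, as reported for several interventions above, corresponds to a d well above the usual "large" threshold of 0.8.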
5.2.1. Computer-based applications for production of facial communication
Seven of the identified articles addressing impairment in production of facial communication utilized computer-based applications. Of these, most (four) presented effectiveness data related to attenuation of deficits in production of facial communication. Two of these four studies targeted facial emotion expression (Gordon et al., 2014; Russo-Ponsaran et al., 2014) and found computer-based applications to increase facial self-expression. Russo-Ponsaran et al. tested the system with only three children, so effect sizes could not be calculated; Gordon et al. found small to medium effects of the technology in increasing facial emotion production (ηp² = 0.01–0.07). The other two studies targeted joint attention and appropriate eye gaze, in addition to facial emotion expression (Rice et al., 2015; Whalen et al., 2006), and found that results differed by targeted outcome. Whalen et al. found the intervention to enhance social communication from pre to post. Rice et al. reported an increase in positive interaction and a decrease in negative interaction; however, these changes were not statistically significant. Of the discussed studies, only one used a sample of children with developmental disability in addition to a sample of children with ASD (Whalen et al., 2006). The other three studies utilizing computer-based applications fall in the research and development stage, as these technologies have not yet been tested for usability or effectiveness with children or adolescents. The studies differ in the task in which the children engage. Puspitasari et al. (2013) described a proposed technology to enhance social skills, including reception and production of facial emotions, in children with intellectual disability, through an interactive learning media solution aimed at improving life skills. Cockburn et al.
(2008) proposed a computer-based game in which obstacles make facial expressions that the player must produce and maintain for a fixed duration in order to proceed. Tsai and Lin (2011), on the other hand, proposed asking children to produce positive and negative emotional expressions to manipulate the game environment. These studies highlight the potential of computer-based technology; however, the feasibility, much less the effect, of these computer-based interventions in improving facial emotion production is unknown.
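The "expression as game controller" mechanic shared by these proposals can be sketched as a simple gating loop over classifier output. Everything here is a hypothetical stand-in, not the actual API of SmileMaze, FaceFlower, or the Computer Expression Recognition Toolbox:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Obstacle:
    target_expression: str   # e.g., "smile"
    hold_frames: int         # consecutive frames the expression must be held

def passes_obstacle(frames: Iterable[str], obstacle: Obstacle) -> bool:
    """Return True once the player has held the target expression for
    `hold_frames` consecutive video frames. `frames` yields one expression
    label per frame from some facial-expression classifier (hypothetical;
    the reviewed systems describe real-time expression recognition)."""
    streak = 0
    for label in frames:
        streak = streak + 1 if label == obstacle.target_expression else 0
        if streak >= obstacle.hold_frames:
            return True
    return False

# Simulated per-frame classifier output (purely illustrative)
frames = ["neutral", "smile", "smile", "neutral", "smile", "smile", "smile"]
print(passes_obstacle(frames, Obstacle("smile", 3)))  # True
```

The design point is that progress in the game is contingent on sustained production of the target expression, not on a single detected frame, which is what makes the mechanic a practice task rather than a one-shot recognition check.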
5.2.2. Mobile applications for production of facial communication
Five of the thirty-two articles used mobile device applications to target production of facial communication skills, none of which presented outcome evaluation results. Three of the studies, however, were initial pilot studies to evaluate usability. Ribeiro and Raposo (2014) explored multitouch devices for collaboration between children to enhance interactive facial and non-facial expression, and found the technology to stimulate children's communicative intentions. Escobedo et al. (2012) and Tentori and Hayes (2010) employed a tablet application to encourage facial and non-facial production in real-life social situations. While these studies highlight the potential of the technology for increasing production of facial communication, the effectiveness of such intervention has not yet been explored. The other two identified studies both aimed to target facial emotion expression in children with ASD. These studies used tracking and classification of the child's facial expressions as children are asked to mimic a presented expression (Harrold et al., 2012; Harrold et al., 2014). The touch device has not yet been evaluated with respect to its ability to increase children's imitation of facial expressions.
5.2.3. Virtual reality-based applications for production of facial communication
Twelve of the thirty-two identified studies targeting production of facial communication utilized VR applications, of which six evaluated the technology's effectiveness in improving the targeted behavior (i.e., facial emotion expression and eye contact). Chen et al. (2015) used a system with three-dimensional animations of facial expressions, overlaid on the participant's face, to facilitate practice of emotion expression and social skills in a pilot study with three children with ASD. They found that the system was associated with improved response to facial emotional expression. In addition to targeting facial affect, four studies (Baur et al., 2015; Bernardini et al., 2014; Gal et al., 2009; Cheng and Ye, 2010) used avatars to improve eye contact in order to enhance joint attention. Although these studies generally reported statistically significant change, the clinical magnitude of effect could not be determined. Moreover, results from one study did not reach statistical significance, although change in facial affect in the intended direction was reported (Bernardini et al., 2014). Eye gaze was also targeted by a VR-based adaptive response technology in studies by Lahiri et al. (2011) and Lahiri et al. (2013). The technology integrates VR-based tasks with eye tracking to facilitate engagement in social communication tasks. The usability study by Lahiri et al. (2011) suggests that the technology is acceptable to adolescents with ASD, while Lahiri et al. (2013) provide preliminary evidence that their VR-based technology promotes improved social task performance (e.g., d = 0.48 for improvement of viewing pattern). In addition, several VR programs have been developed but as of yet have not been tested.
In three such studies, a virtual avatar mimics participants' expressions in order to influence children's emotional behavior (Abirached et al., 2011; Fernandes et al., 2011; Jain et al., 2012). A similar educational software tool to reinforce emotion expression includes an avatar with the physical appearance of a child, who provides feedback and reinforcement (López-Mencía et al., 2010). That technology is designed for use with individuals with educational needs, including children with cerebral palsy and severe motor problems. The last identified study utilizing VR falls within the research and development phase. Deriso et al. (2012) described a VR-based tool designed to improve facial expression production through interaction with a cartoon character. When a child makes a facial expression, the expression is "mirrored" by the cartoon character in real time, so the user can see what their expression looks like. Collectively, these studies suggest usability and feasibility of VR-based intervention, along with preliminary evidence for efficacy.

5.2.4. Robotics for production of facial communication
Six of the thirty-two identified studies addressing impairment in production of facial communication have utilized robotics, the majority of which (four) assessed effectiveness of the technology to improve expression of facial communication (Duquette et al., 2008; Feng et al., 2013; Wainer et al., 2014; Warren et al., 2015). These studies utilized a robot as an agent to enhance interaction, including joint attention and facial expressions directed toward the robot, and found the robot-assisted technology to improve production of facial communication. All four of these studies evaluated the robot-based interactions with children or adolescents with ASD only. Joint attention was also a targeted skill in a study evaluating usability of robotics in children with ASD (Bekele et al., 2013). Bekele et al.
used an adaptive robot-mediated intervention, augmented by a network of cameras for real-time head tracking, to address deficits in joint attention. Their initial results indicated that the robotic system was usable and feasible, and that children with ASD even preferentially attended to it over a human-mediated intervention (Bekele et al., 2013).
Only one of the identified studies focused solely on improving facial emotion expression through human–robot interaction (Mazzei et al., 2012). The experimental therapeutic protocol involves the robot and the therapist performing a set of facial expressions, which the participant is asked to label and imitate. In this pilot study, Mazzei et al. found that the treatment was associated with greater production of affect among participants with ASD than among control participants. 5.2.5. Production of facial communication with other technology Two studies used technology outside of these four categories. Charlop et al. (2010) used videos presented on a television screen to teach facial expressions to children with ASD. Video modeling was used, with videos of humans in situations in which it is appropriate for the child to respond with a verbal comment, along with intonation, gesture, and facial expression (Charlop et al., 2010). The authors found that video modeling led to a rapid increase in facial expressions that generalized across settings in three children, highlighting the initial effectiveness of such technology, which needs to be explored further. The other identified study utilized a suite of wearable technologies, the Interactive Social-Emotional Toolkit, designed to teach social-emotional communication by enabling individuals with ASD to capture, quantify, analyze, and share aspects of their own and others’ social interactions, including emotion expression skills (Kaliouby and Goodwin, 2008). To our knowledge, this system has not yet been tested. 5.2.6. Production of facial communication summary Multiple types of technological interventions have been proposed for children and adolescents who show deficits in production of facial communication, including facial emotion expression and joint attention.
Similar to the findings for studies targeting reception of facial communication, the majority of the studies (n = 28, 87.50%) have evaluated the use of technology with children or adolescents with ASD, and only a few have evaluated the technology with children with other, non-ASD disorders (Baur et al., 2015; López-Mencía et al., 2010; Puspitasari et al., 2013; Whalen et al., 2006). In addition, similar to the pattern seen for studies applying technology to impairments in reception of facial communication, less than half of the studies (n = 15, 46.88%) evaluated the effectiveness of the intervention to improve production of facial communication. Across studies focused on facial communication production, effect sizes vary considerably and, like the research on reception of facial communication, not all intervention studies reported data from which to compute effect sizes. 5.3. Reception of non-facial communication While there is quite a well-developed research base on the use of technology to assess and modify reception of facial communication, only seven studies were identified that utilized technology with the intent to improve reception of non-facial communication. Three studies looked at the use of computer-based intervention tools, while the other four described use of VR to help children and adolescents perceive non-facial communication. 5.3.1. Computer-based applications for reception of non-facial communication Two of the three studies exploring potential use of computer-based interventions to address impairment in reception of non-facial communication focused on teaching emotion recognition using situational cues. Park et al. (2012), in their description of a system that has not yet been tested, proposed to enhance understanding of situation-based emotions from verbal contextual
sentences. McHugh et al. (2011) utilized video-based scenarios to teach three children with ASD situation-based emotion recognition. They found that, using this technology, children with ASD could be taught to respond to emotional situations. The other study utilizing computer-based applications explored potential improvements in emotion recognition by training perception of body gesture cues, in addition to facial emotion recognition (Beaumont and Sofronoff, 2008). This is the only large controlled study evaluating technology applied to impairments in non-facial communication, with 49 children with ASD taking part in the randomized study. Results from this study suggest that computer-based technology can be used to teach emotion recognition from body gestures (η² = 0.11).
5.3.2. Virtual reality-based applications for reception of non-facial communication Four of the seven identified studies explored use of an avatar in a VR environment to enhance reception of non-facial communication. Emotional narratives were utilized in two of the studies to enhance recognition of emotions from situations (Chen et al., 2015; Mower et al., 2011). While Mower et al. assessed usability, not efficacy, results from the pilot study by Chen et al. suggest that VR can be used to improve reception of non-facial communication. The other two studies focused on enhancing verbal and non-verbal communication of children with ASD during a social interaction. A virtual learning environment was set up to increase understanding of verbal and non-verbal communication (e.g., body language), in addition to enhancing reception and production of facial communication. Effectiveness has been evaluated in only one study, with a sample of three children with ASD (Cheng and Ye, 2010). Zhang (2010) similarly used VR to enhance verbal communication through an animated agent that derives an expression (i.e., using body gesture and facial expression) based on written text. These studies highlight the potential of technology in this domain and the need for further evaluation to determine efficacy.
5.3.3. Reception of non-facial communication summary No published studies were identified on the use of robotics or mobile-based technology to increase reception of non-facial communication. Most of the identified studies focused on emotion identification in simple social situations, presented either in written or video form, or on emotion identification from body language. Use of technology to address deficits in other, perhaps more nuanced forms of social communication (e.g., irony, interpersonal posture) has not been explored. It is important to note that while four of the studies presented data to evaluate efficacy of the technology to improve targeted behaviors (Beaumont and Sofronoff, 2008; Chen et al., 2015; Cheng and Ye, 2010; McHugh et al., 2011), only the study by Beaumont and Sofronoff reported on efficacy of intervention with a larger sample in a controlled experimental design (η² = 0.11). In addition, none of the studies explored the use of these technological interventions outside of ASD. The small number of studies reporting effectiveness of technological intervention limits our ability to evaluate efficacy of such interventions.
5.4. Production of non-facial communication Nineteen studies were identified that address deficits in production of non-facial communication, ten of which report on the effectiveness of technology to improve children’s and adolescents’ production of non-facial communication. As with the other constructs, the technology modalities are diverse, and there is variability in the targeted non-facial behavior.
5.4.1. Computer-based applications for production of non-facial communication Four studies used computer-assisted instruction. Two of the identified studies targeted social interaction and conversation (Bauminger et al., 2007; Rice et al., 2015), while the other two focused on collaboration between individuals, specifically to teach rules of social conversation (Bauminger-Zviely et al., 2013; Weiss et al., 2011). Two of the studies reported on effectiveness of the computer-based intervention to improve production of non-facial communication. Results from Bauminger-Zviely et al. suggest that the computer-based interventions are effective at increasing collaboration (d = 0.90) and social conversation (d = 1.72). Although Rice et al. reported that their computer-based intervention was associated with increased positive interactions and decreased negative interactions, the changes were not statistically significant (Rice et al., 2015).
5.4.2. Mobile-based applications for production of non-facial communication Four studies were identified that used mobile-based technology to improve non-facial communication skills. Only one of these four studies, however, evaluated effectiveness of the application (Hourcade et al., 2013). This study evaluated the impact of a set of applications from the Open Autism Software (Hourcade et al., 2013) on social interaction, including verbal interactions, supportive comments, and physical interactions (e.g., turn taking). Results suggest effectiveness of the intervention, as children had more interactions and were more physically engaged when using the mobile-based application. Ribeiro and Raposo (2014) conducted an initial evaluation of an adaptation of the Picture Exchange Communication System (PECS; Bondy and Frost, 1994) for mobile-based technology, to increase gestures and verbal communication, in addition to production of facial communication skills, in a pilot study with four children with ASD. They found that the technology stimulated the children’s communication intentions, such as gestures and verbal communication. Tentori and Hayes (2010) and Escobedo et al. (2012) have extended the use of mobile-based applications to enhance social communication skills in real-life situations, by notifying the children of missteps during social interactions and providing guidance on correcting them. Initial evaluation of this system suggests great potential; however, efficacy evaluation awaits further study.
5.4.3. Virtual reality-based applications for production of non-facial communication Only two studies utilized VR to address children’s impairment in production of non-facial communication (Baur et al., 2015; Gal et al., 2009), both of which evaluated effectiveness of the technology. The targeted skills, as well as the sample characteristics, differ between the two studies. Baur et al. evaluated the Nonverbal behavior Analyzer (NovA) system, which automatically analyzes and facilitates interpretation of social signals through a bidirectional interaction with an avatar. They found that system users had better overall interaction, including smiles, eye contact, and gestures, compared to the control group of non-users. Gal et al. (2009), on the other hand, described an evaluation of virtual environments targeting multiple non-facial interactions, including verbal responses, sharing of emotions, expression of interest in others, providing comfort, and providing encouragement to others. They found that the six tested children in their study improved in rate of initiation of positive social interactions, suggesting potential of VR interventions in addressing production of non-facial communication.
5.4.4. Robotics for production of non-facial communication Intervention utilizing a humanoid robot was explored in five of the studies, four of which evaluated the effectiveness of the robotics-based intervention (Duquette et al., 2008; Jeong et al., 2015; Wainer et al., 2014; Zheng et al., 2015a), all in the context of small pilot studies with children or adolescents with ASD. The targeted skills differ across the studies, with most targeting use of gestures. Zheng et al. (2015a,b) used robot-mediated intervention to address deficits in imitation skills, gestures in particular. The usability study (Zheng et al., 2015b) evaluated gesture imitation in four children with ASD and two typically developing children. The evaluation study (Zheng et al., 2015a) tested the robot-based system with eight children with ASD and eight typically developing children, and found that children with ASD performed better in robot-administered sessions than in human-administered sessions. Imitation training has also been explored in four children with ASD with the use of a mobile robot, referred to as ‘Tito’ (Duquette et al., 2008). In addition to exploring use of the robot for production of facial communication, Duquette et al. used the mobile robot to enhance shared conventions such as imitation of gestures, actions, and words, directed toward the mediator. Contrary to their hypothesis, Duquette et al. found that children exhibited shared conventions at a higher rate when paired with a human than when paired with the robot. The humanoid robot KASPAR has also been utilized to modify gestures, in addition to facial cues (Wainer et al., 2014). The ability of the robot to engage, motivate, and encourage users to collaborate was evaluated with six children with ASD. Results suggest that robotic intervention was successful in increasing social interaction and play with peers. Aside from gestures, robot-assisted intervention has also focused on verbal expression of emotion. Jeong et al.
(2015) found that children with ASD used significantly more emotional words when using a robot than when using tablet PCs. 5.4.5. Video modeling and motion capture for production of non-facial communication Four of the identified studies used technology outside of the four identified types. Two of the studies evaluated use of video modeling to promote either verbal responses to a non-verbal expression (Axe and Evans, 2012) or socially expressive behaviors, such as gestures and intonation, in addition to production of facial expressions (Charlop et al., 2010). In both of these studies, videos were used to model the targeted behavior. Axe and Evans (2012) found that two out of the three children responded to treatment, while Charlop et al. found that all three children in their study reached criterion for all target behaviors, suggesting potential of video modeling in increasing social communication. The other two studies described a motion capture system (De Silva et al., 2006), which helped to develop participants’ emotional expression through affective body gestures, and a suite of wearable technologies (Kaliouby and Goodwin, 2008), which is in the research and development phase and has not yet been evaluated. 5.4.6. Production of non-facial communication summary Multiple technological interventions have been proposed for children and adolescents who show deficits in production of non-facial communication. Similar to the findings from studies targeting the other social communication constructs, the majority of studies included samples composed of children and adolescents with ASD (n = 17, 89.47%), and only two studies explored the use of technology to increase production of non-facial communication in typically developing samples (Baur et al., 2015; De Silva et al., 2006). In addition, similar to the pattern seen for studies applying technology to the other social communication constructs, a little over half of the studies (n = 10, 52.63%) across technology type
evaluated the effectiveness of the technological interventions to improve production of non-facial communication. Many of these studies, however, did not report information regarding effect sizes. Three studies explored effects of VR interventions and found medium to large effects. Only one study, however, evaluated the effect of robotics on production of non-facial communication, and this study found only a small effect. All remaining evaluation studies used small pilot samples of children with ASD and, as such, generalizability to other disorders or types of social communication deficit is limited.
6. Discussion This review was undertaken to synthesize the extant research on technology developed to address social communication deficits, across all four subsumed constructs (cf. RDoC), in children and adolescents. Many different technologies have been utilized to address social communication impairments, including computer-based, mobile, VR, and robotics, highlighting the diversity of the technologies developed to date. Across studies, results suggest the promise of technology as an intervention tool, with many studies reporting the experimental interventions to be feasible to implement (usable) and acceptable to the children and adolescents. While the studies differ with regard to technology type and targeted population, on the whole this body of research suggests that technology is feasible and acceptable to users, and the available evaluation data suggest that technology may be promising in targeting social communication deficits. As such, further research on technology-based interventions for social communication is warranted. Of the conclusions that can be drawn based on this review, perhaps the foremost finding is the relative scarcity of research concerning effectiveness of the technological interventions in remediation of social communication impairments. Across the social communication constructs and technology types, less than half of the published studies evaluated treatment outcome. Of these, the majority were preliminary uncontrolled trials with small samples. The notable exception is in studies utilizing computer-based applications for impairment in reception of facial communication. Nine studies within this construct conducted randomized controlled trials in order to assess the effectiveness of the computer-based interventions.
These studies found small to large effects across a diverse group of interventions (d = 0.06–1.80), suggesting variable effects of computer-based interventions on social communication, in terms of recognition of facial emotions. Across the other social communication constructs and technology types, however, methodologically rigorous evaluations of impact (efficacy) are lacking. Effects, based on the limited outcome data, vary from small to large, depending on the targeted behavior. At this point, it would be premature to draw any conclusions regarding which technology type appears to be most effective for which social communication construct or for which individuals, given the limited amount of data available. Outside of computer-based intervention for reception and production of facial expressions, less than a handful of studies provide such data. Therefore, comparison of different modalities in terms of clinical effect is not possible. This gap in evaluation of the technological interventions may be at least partially explained by the interdisciplinary nature of this research. This body of research is largely spearheaded by scientists in fields other than clinical psychology (e.g., computer science, electrical engineering), though often with collaborators in psychology. Many of the studies proposing technology to address impairments in social communication constructs have focused on design and potential application, and sometimes the usability, of the technology but have not progressed to the point of evaluation
of impact on targeted problems. The focus of the engineering and computer science fields (and their primary funding agencies) is generally, and understandably, on the technology and its development, rather than on clinical application and evaluation of that technology via rigorous, and expensive, trials. As such, much of the extant research is characterized by samples too small to draw conclusions or infer generalizability, insufficient participant characterization, and limited consideration of actual clinical impact or use (e.g., experimental therapeutics). The primary goal of many of the studies reviewed, as such, was to provide a description of the technology and evaluation of the technology’s abilities, with a secondary (and often aspirational) goal of evaluating change in children’s behavior. In addition to the limited examination of clinical impact, this review found that a great deal of the research has been with children and adolescents with ASD, to the exclusion of youth with other disorders, despite the transdiagnostic nature of social communication impairment. The focus of technological interventions to address social communication impairments specifically in individuals with ASD is sensible, as deficits in social communication and social interaction are chief diagnostic criteria for this disorder (American Psychiatric Association, 2013). However, social communication impairments are seen across diagnostic categories, for each of the four subsumed constructs. Of the 69 studies reviewed, 90% were ASD-focused (either in development, usability, or efficacy); one study used a sample of adolescents with social anxiety, three studies had samples of children with intellectual or other developmental disorders, and one study included a sample of children with cerebral palsy and motor deficits. In addition, few studies assessed the technology with typically developing, non-disordered youth.
Consistent with the RDoC-espoused approach of studying the mechanism(s) behind impairments, irrespective of diagnostic categories, researchers need to place more focus on designing technologies for all individuals with social communication impairments, irrespective of categorical diagnoses. Although we identified studies with proposed interventions across all social communication constructs, there are more than twice as many studies of interventions focusing on reception and production of facial communication than on reception or production of non-facial communication. While there is a relative shortage of studies utilizing technology for both reception and production of non-facial communication impairments, the discrepancy is most evident in the number of studies targeting reception of non-facial communication (9 studies total). This inequity is consistent with research on emotions, which was until recently almost entirely based on investigation of the perception of facial expressions (Adolphs, 2002). However, increased attention is being paid to expression of affect through other, non-facial modalities, including gestures and body expressions (e.g., Kleinsmith and Bianchi-Berthouze, 2013). Therefore, with heightened focus on non-facial displays of affect broadly in emotion research, there needs to be an increased focus on designing technologies to address impairments in this domain. In light of the variable landscape of the current literature in this area, future technology development for remediation of social communication impairment should consider the seven principles proposed by White et al. (2014), which serve as aspirational goals to improve translation of technology to clinical application. These seven principles (i.e., that the technology is verifiable, useful, consistent, reproducible, mechanism driven, complete, and deployable) provide a framework for ensuring that the technology is both usable and deliverable.
With this framework in mind, it will be possible to evaluate the use of technology in clinical science in terms of efficacy, as well as usability and deployment of the technology, which is currently not possible given the state of research. To move the field forward in terms of providing useful and effective
treatments, the provided principles should be taken into account during evaluation and development of technology-based interventions. The review’s conclusions should be considered in light of several limitations. First, we restricted ourselves to articles published within an eleven-year period, in order to focus the review on the more recent technological advancements. This restriction led to exclusion of articles utilizing technology published before the year 2005. Also, the list of keywords used to search the databases was not exhaustive, and therefore it is possible that some studies fitting the criteria were not identified, given the specific search terms. However, this limitation was minimized by the use of several different key search terms across four different databases that spanned psychology, computer science, and engineering. Finally, the state of this research makes it impossible to draw a more satisfying conclusion as to the impact of technology on remediation of social communication impairments. As the research matures, we look forward to integrative reviews and meta-analyses that will allow us to answer the important questions of differential impact for whom and under what conditions.
7. Conclusion Impairment in social communication is prevalent across diagnostic categories as well as among youth without a formal psychiatric diagnosis. Moreover, such impairment exerts a significant impact on youths’ ability to interact with others and is associated with downstream adverse developmental outcomes. Clinical science generally recognizes the importance of addressing social communication impairment, as evidenced by the inclusion of the social communication domain within the RDoC. It is furthermore evident that there is momentum and interest in the development and application of technology to address social communication deficits, given the increasing rate of publications in this area. From the pool of reviewed articles, eleven were published between 2005 and 2008, while thirty-six were published between 2012 and 2015, a three-fold increase. Given the relative lack of published articles on reception of non-facial communication abilities in children or adolescents, further research on the use of technology to improve skills in this area is warranted. Since impairments in social communication are truly a transdiagnostic problem, future research needs to expand beyond the specific impairments seen in individuals with ASD, in order to develop technology-based interventions that can address social communication impairments across diagnoses. In addition, the identified articles report on technology at various stages of development and testing. Slightly less than half of the articles reported effects of the intervention on the targeted behavior, and the majority of these were pilot studies with as few as three participants. It is therefore difficult at this stage to evaluate the efficacy of the proposed technologies as intervention tools.
As this research base further develops, intensified interdisciplinary communication will become increasingly important, as will, perhaps, consideration of new models of intervention development (e.g., to include stages for initial usability testing and refinement prior to feasibility/fidelity studies). We posit that technology offers a unique modality for the treatment of transdiagnostic pathological processes, such as social communication impairment, and that its potential has yet to be fully realized. As the field’s appreciation of personalized medicine and the role of core, transdiagnostic processes evolves, these technologies may come to be seen as a complement to the more traditional biological (e.g., pharmacological) and psychosocial (e.g., therapeutic) approaches (cf. White et al., 2014). This review of the existing technological techniques and applications
highlights the promise of technology to address social communication impairment in children and adolescents.
References Abirached, B., Yan, Z., Aggarwal, J.K., Tamersoy, B., Fernandes, T., Miranda, J.C., Orvalho, V., 2011. Improving communication skills of children with ASDs through interaction with virtual characters. Paper Presented at the IEEE 1st International Conference on Serious Games and Applications for Health (SeGAH). Adolphs, R., 2002. Neural systems for recognizing emotion. Curr. Opin. Neurobiol. 12 (2), 169–177. American Psychiatric Association, 2013. Diagnostic and Statistical Manual of Mental Disorders, 5th ed. Author, Washington, DC. Aspan, N., Vida, P., Gadoros, J., Halasz, J., 2013. Conduct symptoms and emotion recognition in adolescent boys with externalization problems. Sci. World J. 2013, http://dx.doi.org/10.1155/2013/826108. Axe, J.B., Evans, C.J., 2012. Using video modeling to teach children with PDD-NOS to respond to facial expressions. Res. Autism Spectrum Disorders 6 (3), 1176–1185. Bar-Haim, Y., Morag, I., Glickman, S., 2011. Training anxious children to disengage attention from threat: a randomized controlled trial. J. Child Psychol. Psychiatry 52 (8), 861–869. Bauminger, N., Goren-Bar, D., Gal, E., Weiss, P.L., Kupersmitt, J., Pianesi, F., . . . Zancanaro, M., 2007. Enhancing social communication in high-functioning children with autism through a co-located interface. Paper Presented at the IEEE 9th Workshop on Multimedia Signal Processing. Bauminger-Zviely, N., Eden, S., Zancanaro, M., Weiss, P.L., Gal, E., 2013. Increasing social engagement in children with high-functioning autism spectrum disorder using collaborative technologies in the school environment. Autism 17 (3), 317–339. Baur, T., Mehlmann, G., Damian, I., Lingenfelser, F., Wagner, J., Lugrin, B., . . . Gebhard, P., 2015. Context-aware automated analysis and annotation of social human-agent interactions. ACM Trans. Interact. Intell. Syst. 5 (2), 1–33, http://dx.doi.org/10.1145/2764921. Beaumont, R., Sofronoff, K., 2008.
A multi-component social skills intervention for children with Asperger syndrome: the Junior Detective Training Program. J. Child Psychol. Psychiatry 49 (7), 743–753, http://dx.doi.org/10.1111/j.1469-7610.2008.01920.x. Bekele, E.T., Lahiri, U., Swanson, A.R., Crittendon, J.A., Warren, Z.E., Sarkar, N., 2013. A step towards developing adaptive robot-mediated intervention architecture (ARIA) for children with autism. IEEE Trans. Neural Syst. Rehabil. Eng. 21 (2), 289–299, http://dx.doi.org/10.1109/TNSRE.2012.2230188. Bekele, E., Crittendon, J., Zheng, Z., Swanson, A., Weitlauf, A., Warren, Z., Sarkar, N., 2014. Assessing the utility of a virtual environment for enhancing facial affect recognition in adolescents with autism. J. Autism Dev. Disord. 44 (7), 1641–1650, http://dx.doi.org/10.1007/s10803-014-2035-8. Ben Shalom, D., Mostofsky, S.H., Hazlett, R.L., Goldberg, M.C., Landa, R.J., Faran, Y., . . . Hoehn-Saric, R., 2006. Normal physiological emotions but differences in expression of conscious feelings in children with high-functioning autism. J. Autism Dev. Disord. 36 (3), 395–400, http://dx.doi.org/10.1007/s10803-006-0077-2. Bernardini, S., Porayska-Pomsta, K., Smith, T.J., 2014. ECHOES: an intelligent serious game for fostering social communication in children with autism. Inf. Sci. 264, 41–60. Blair, R.J.R., Coles, M., 2000. Expression recognition and behavioural problems in early adolescence. Cognit. Dev. 15 (4), 421–434. Bondy, A., Frost, L., 1994. The picture exchange communication system. Focus Autist. Behav. 9, 1–19. Brozgold, A.Z., Borod, J.C., Martin, C.C., Pick, L.H., Alpert, M., Welkowitz, J., 1998. Social functioning and facial emotional expression in neurological and psychiatric disorders. Appl. Neuropsychol. 5 (1), 15–23. Butterworth Jr., J., Strauch, J.D., 1994. The relationship between social competence and success in the competitive work place for persons with mental retardation. Educ. Train. Mental Retard. Dev. Disabil. 29 (2), 118–133.
Charlop, M.H., Dennis, B., Carpenter, M.H., Greenberg, A.L., 2010. Teaching socially expressive behaviors to children with autism through video modeling. Educ. Treat. Child. 33 (3), 371–393. Chen, C.-H., Lee, I.-J., Lin, L.-Y., 2015. Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders. Res. Dev. Disabil. 36, 396–403. Cheng, Y., Chen, S., 2010. Improving social understanding of individuals of intellectual and developmental disabilities through a 3D-facial expression intervention program. Res. Dev. Disabil. 31 (6), 1434–1442, http://dx.doi.org/10.1016/j.ridd.2010.06.015. Cheng, Y., Ye, J., 2010. Exploring the social competence of students with autism spectrum conditions in a collaborative virtual learning environment: the pilot study. Comput. Educ. 54 (4), 1068–1077. Christinaki, E., Vidakis, N., Triantafyllidis, G., 2014. A novel educational game for teaching emotion identification skills to preschoolers with autism diagnosis. Comput. Sci. Inf. Syst. 11 (2), 723–743. Cockburn, J., Bartlett, M., Tanaka, J., Movellan, J., Pierce, M., Schultz, R., 2008. Smilemaze: a tutoring system in real-time facial expression perception and production in children with autism spectrum disorder. Paper Presented at the
ECAG 2008 Workshop Facial and Bodily Expressions for Control and Adaptation of Games. Corbett, B., Glidden, H., 2000. Processing affective stimuli in children with attention-deficit hyperactivity disorder. Child Neuropsychol: J. Normal Abnormal Dev. Childhood Adolesc. 6 (2), 144–155, http://dx.doi.org/10.1076/chin.6.2.144.7056. Costa, S., Soares, F., Pereira, A.P., Santos, C., Hiolle, A., 2014. Building a game scenario to encourage children with autism to recognize and label emotions using a humanoid robot. Paper Presented at the 23rd IEEE International Symposium on Robot and Human Interactive Communication. Davies, H., Swan, N., Schmidt, U., Tchanturia, K., 2012. An experimental investigation of verbal expression of emotion in anorexia and bulimia nervosa. Eur. Eat. Disord. Rev. 20 (6), 476–483, http://dx.doi.org/10.1002/erv.1157. De Silva, P.R., Madurapperuma, A.P., Lambacher, S.G., Osano, M., 2006. Therapeutic tool for develop child nonverbal communication skills through interactive game. Paper Presented at the International Conference on Computational Intelligence for Modelling, Control and Automation, 2006 and International Conference on Intelligent Agents, Web Technologies and Internet Commerce. de Wied, M., van Boxtel, A., Zaalberg, R., Goudena, P.P., Matthys, W., 2006. Facial EMG responses to dynamic emotional facial expressions in boys with disruptive behavior disorders. J. Psychiatr. Res. 40 (2), 112–121, http://dx.doi.org/10.1016/j.jpsychires.2005.08.003. Deriso, D., Susskind, J., Krieger, L., Bartlett, M., 2012. Emotion mirror: a novel intervention for autism based on real-time expression recognition. Paper Presented at the Computer Vision – ECCV 2012. Workshops and Demonstrations. Dickson, H., Calkins, M.E., Kohler, C.G., Hodgins, S., Laurens, K.R., 2014. Misperceptions of facial emotions among youth aged 9–14 years who present multiple antecedents of schizophrenia.
Schizophr. Bull. 40 (2), 460–468, http://dx.doi.org/10.1093/schbul/sbs193. Duquette, A., Michaud, F., Mercier, H., 2008. Exploring the use of a mobile robot as an imitation agent with children with low-functioning autism. Auton. Robots 24 (2), 147–157. Emerson, C.S., Harrison, D.W., Everhart, D.E., 1999. Investigation of receptive affective prosodic ability in school-aged boys with and without depression. Neuropsychiatry Neuropsychol. Behav. Neurol. 12 (2), 102–109. Escobedo, L., Nguyen, D.H., Boyd, L., Hirano, S., Rangel, A., Garcia-Rosas, D., . . . Hayes, G., 2012. MOSOCO: a mobile assistive tool to support children with autism practicing social skills in real-life situations. Paper Presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Evers, K., Steyaert, J., Noens, I., Wagemans, J., 2015. Reduced recognition of dynamic facial emotional expressions and emotion-specific response bias in children with an autism spectrum disorder. J. Autism Dev. Disord. 45 (6), 1774–1784, http://dx.doi.org/10.1007/s10803-014-2337-x. Faja, S., Aylward, E., Bernier, R., Dawson, G., 2008. Becoming a face expert: a computerized face-training program for high-functioning individuals with autism spectrum disorders. Dev. Neuropsychol. 33 (1), 1–24, http://dx.doi.org/10.1080/87565640701729573. Feng, H., Gutierrez, A., Zhang, J., Mahoor, M.H., 2013. Can NAO robot improve eye-gaze attention of children with high functioning autism? Paper Presented at the 2013 IEEE International Conference on Healthcare Informatics (ICHI). Fergus, P., Abdulaimma, B., Carter, C., Round, S., 2014. Interactive mobile technology for children with autism spectrum condition (ASC). Paper Presented at the IEEE 11th Consumer Communications and Networking Conference (CCNC). Fernandes, T., Alves, S., Miranda, J., Queirós, C., Orvalho, V., 2011. LIFEisGAME: A Facial Character Animation System to Help Recognize Facial Expressions. In: Enterprise Information Systems. Springer, pp.
423–432. Finkelstein, S.L., Nickel, A., Harrison, L., Suma, E.A., Barnes, T., 2009. CMotion: a new game design to teach emotion recognition and programming logic to children using virtual humans. Paper Presented at the IEEE Virtual Reality Conference. Fujiwara, T., Mizuki, R., Miki, T., Chemtob, C., 2015. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study. Front. Psychol. 6 (1534), http://dx.doi.org/10.3389/fpsyg.2015.01534. Fussner, L.M., Luebbe, A.M., Bell, D.J., 2015. Dynamics of positive emotion regulation: associations with youth depressive symptoms. J. Abnorm. Child Psychol. 43 (3), 475–488, http://dx.doi.org/10.1007/s10802-014-9916-3. Gal, E., Bauminger, N., Goren-Bar, D., Pianesi, F., Stock, O., Zancanaro, M., Weiss, P.L.T., 2009. Enhancing social communication of children with high-functioning autism through a co-located interface. AI Soc. 24 (1), 75–84. Golan, O., Ashwin, E., Granader, Y., McClintock, S., Day, K., Leggett, V., Baron-Cohen, S., 2010. Enhancing emotion recognition in children with autism spectrum conditions: an intervention using animated vehicles with real emotional faces. J. Autism Dev. Disord. 40 (3), 269–279, http://dx.doi.org/10.1007/s10803-009-0862-9. Gordon, I., Pierce, M.D., Bartlett, M.S., Tanaka, J.W., 2014. Training facial expression production in children on the autism spectrum. J. Autism Dev. Disord. 44 (10), 2486–2498, http://dx.doi.org/10.1007/s10803-014-2118-6. Harrold, N., Tan, C.T., Rosser, D., 2012. Towards an expression recognition game to assist the emotional development of children with autism spectrum disorders. In: Paper Presented at the Proceedings of the Workshop at SIGGRAPH, Asia. Harrold, N., Tan, C.T., Rosser, D., Leong, T.W., 2014. CopyMe: an emotional development game for children. In: Paper Presented at the CHI ‘14 Extended Abstracts on Human Factors in Computing Systems, Toronto, Canada.
Hobson, R.P., Ouston, J., Lee, A., 1989. Naming emotion in faces and voices: abilities and disabilities in autism and mental retardation. Br. J. Dev. Psychol. 7 (3), 237–250. Hopkins, I.M., Gower, M.W., Perez, T.A., Smith, D.S., Amthor, F.R., Wimsatt, F.C., Biasini, F.J., 2011. Avatar assistant: improving social skills in students with an ASD through a computer-based intervention. J. Autism Dev. Disord. 41 (11), 1543–1555, http://dx.doi.org/10.1007/s10803-011-1179-z. Hourcade, J.P., Bullock-Rest, N.E., Hansen, T.E., 2012. Multitouch tablet applications and activities to enhance the social skills of children with autism spectrum disorders. Person. Ubiquitous Comput. 16 (2), 157–168. Hourcade, J.P., Williams, S.R., Miller, E.A., Huebner, K.E., Liang, L.J., 2013. Evaluation of tablet apps to encourage social interaction in children with autism spectrum disorders. Paper Presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Humphries, L., McDonald, S., 2011. Emotion faces: the design and evaluation of a game for preschool children. In: Paper Presented at the CHI ‘11 Extended Abstracts on Human Factors in Computing Systems, Vancouver, BC, Canada. Insel, T., Cuthbert, B., Garvey, M., Heinssen, R., Pine, D.S., Quinn, K., . . . Wang, P., 2010. Research domain criteria (RDoC): toward a new classification framework for research on mental disorders. Am. J. Psychiatry 167 (7), 748–751. Izard, C., Fine, S., Schultz, D., Mostow, A., Ackerman, B., Youngstrom, E., 2001. Emotion knowledge as a predictor of social behavior and academic competence in children at risk. Psychol. Sci. 12 (1), 18–23. Jain, S., Tamersoy, B., Zhang, Y., Aggarwal, J.K., Orvalho, V., 2012. An interactive game for teaching facial expressions to children with Autism Spectrum Disorders. Paper Presented at the 5th International Symposium on Communications Control and Signal Processing (ISCCSP). Jenness, J.L., Hankin, B.L., Young, J.F., Gibb, B.E., 2014.
Misclassification and identification of emotional facial expressions in depressed youth: a preliminary study. J. Clin. Child Adolesc. Psychol. 44 (4), 559–565, http://dx.doi.org/10.1080/15374416.2014.891226. Jeong, M., Kim, Y., Yim, D., Yeon, S., Song, S., Kim, J., 2015. Lexical representation of emotions for high functioning autism (HFA) via emotional story intervention using smart media. In: Paper Presented at the Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, Seoul, Republic of Korea. Kaliouby, R.E., Goodwin, M.S., 2008. iSET: interactive social-emotional toolkit for autism spectrum disorder. In: Paper Presented at the Proceedings of the 7th International Conference on Interaction Design and Children, Chicago, Illinois. Kleinsmith, A., Bianchi-Berthouze, N., 2013. Affective body expression perception and recognition: a survey. IEEE Trans. Affective Comput. 4 (1), 15–33. López-Mencía, B., Pardo, D., Hernández-Trapote, A., Hernández, L., Relano, J., 2010. A collaborative approach to the design and evaluation of an interactive learning tool for children with special educational needs. Paper Presented at the Proceedings of the 9th International Conference on Interaction Design and Children. LaCava, P.G., Rankin, A., Mahlios, E., Cook, K., Simpson, R.L., 2010. A single case design evaluation of a software and tutor intervention addressing emotion recognition and social interaction in four boys with ASD. Autism 14 (3), 161–178, http://dx.doi.org/10.1177/1362361310362085. LaCava, P.G., Golan, O., Baron-Cohen, S., Myles, B.S., 2007. Using assistive technology to teach emotion recognition to students with Asperger syndrome: a pilot study. Remed. Special Educ. 28 (3), 174–181. Lahiri, U., Warren, Z., Sarkar, N., 2011. Design of a gaze-sensitive virtual social interactive system for children with autism. IEEE Trans. Neural Syst. Rehabil. Eng. 19 (4), 443–452, http://dx.doi.org/10.1109/TNSRE.2011.2153874.
Lahiri, U., Bekele, E., Dohrmann, E., Warren, Z., Sarkar, N., 2013. Design of a virtual reality based adaptive response technology for children with autism. IEEE Trans. Neural Syst. Rehabil. Eng. 21 (1), 55–64. Loveland, K.A., Tunali-Kotoski, B., Pearson, D.A., Brelsford, K.A., Ortegon, J., Chen, R., 1994. Imitation and expression of facial affect in autism. Dev. Psychopathol. 6 (3), 433–444, http://dx.doi.org/10.1017/S0954579400006039. Madsen, M., Kaliouby, R.E., Goodwin, M., Picard, R., 2008. Technology for just-in-time in-situ learning of facial affect for persons diagnosed with an autism spectrum disorder. In: Paper Presented at the Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, Halifax, Nova Scotia, Canada. Marsh, A.A., Finger, E.C., Mitchell, D.G., Reid, M.E., Sims, C., Kosson, D.S., Pine, D.S., . . . Blair, R.J.R., 2008. Reduced amygdala response to fearful expressions in children and adolescents with callous-unemotional traits and disruptive behavior disorders. Am. J. Psychiatry 165 (6), 712–720, http://dx.doi.org/10.1176/appi.ajp.2007.07071145. Mazzei, D., Greco, A., Lazzeri, N., Zaraki, A., Lanata, A., Igliozzi, R., Muratori, F., 2012. Robotic social therapy on children with autism: preliminary evaluation through multi-parametric analysis. Paper Presented at the 2012 International Conference on Privacy, Security, Risk and Trust (PASSAT), and 2012 International Conference on Social Computing. McHugh, L., Bobarnac, A., Reed, P., 2011. Brief report: teaching situation-based emotions to children with autistic spectrum disorder. J. Autism Dev. Disord. 41 (10), 1423–1428. Mower, E., Black, M.P., Flores, E., Williams, M., Narayanan, S., 2011. Rachel: design of an emotionally targeted interactive agent for children with autism. Paper Presented at the 2011 IEEE International Conference on Multimedia and Expo (ICME).
Nunez, E., Matsuda, S., Hirokawa, M., Suzuki, K., 2015. Humanoid robot assisted training for facial expressions recognition based on affective feedback. Soc. Robot., 492–501. Nuske, H.J., Vivanti, G., Dissanayake, C., 2013. Are emotion impairments unique to, universal, or specific in autism spectrum disorder? A comprehensive review. Cognit. Emot. 27 (6), 1042–1061. Park, J.H., Abirached, B., Zhang, Y., 2012. A framework for designing assistive technologies for teaching children with ASDs emotions. In: Paper Presented at the CHI ‘12 Extended Abstracts on Human Factors in Computing Systems, Austin, Texas, USA. Parsons, T.D., Bowerly, T., Buckwalter, J.G., Rizzo, A.A., 2007. A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods. Child Neuropsychol. 13 (4), 363–381. Pioggia, G., Igliozzi, R., Ferro, M., Ahluwalia, A., Muratori, F., De Rossi, D., 2005. An android for enhancing social skills and emotion recognition in people with autism. IEEE Trans. Neural Syst. Rehabil. Eng. 13 (4), 507–515. Puspitasari, W., Ummah, K., Pambudy, A.F., 2013. KIDEA: An innovative computer technology to improve skills in children with intellectual disability using Kinect sensor. Paper Presented at the IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). Rhind, C., Mandy, W., Treasure, J., Tchanturia, K., 2014. An exploratory study of evoked facial affect in adolescent females with anorexia nervosa. Psychiatry Res. 220 (1–2), 711–715, http://dx.doi.org/10.1016/j.psychres.2014.07.057. Ribeiro, P.C., Raposo, A.B., 2014. ComFiM: a game for multitouch devices to encourage communication between people with autism. In: Paper Presented at the IEEE 3rd International Conference on Serious Games and Applications for Health (SeGAH), (14–16 May 2014). Rice, L.M., Wall, C.A., Fogel, A., Shic, F., 2015.
Computer-assisted face processing instruction improves emotion recognition, mentalizing, and social skills in students with ASD. J. Autism Dev. Disord. 45 (7), 2176–2186. Russo-Ponsaran, N.M., Evans-Smith, B., Johnson, J.K., McKown, C., 2014. A pilot study assessing the feasibility of a facial emotion training paradigm for school-age children with autism spectrum disorders. J. Mental Health Res. Intell. Disabil. 7 (2), 169–190. Seeman, T.E., 1996. Social ties and health: the benefits of social integration. Ann. Epidemiol. 6 (5), 442–451. Serret, S., Hun, S., Iakimova, G., Lozada, J., Anastassova, M., Santos, A., . . . Askenazy, F., 2014. Facing the challenge of teaching emotions to individuals with low- and high-functioning autism using a new serious game: a pilot study. Mol. Autism 5 (1), 37. Stichter, J.P., Laffey, J., Galyen, K., Herzog, M., 2014. iSocial: delivering the social competence intervention for adolescents (SCI-A) in a 3D virtual learning environment for youth with high functioning autism. J. Autism Dev. Disord. 44 (2), 417–430. Tanaka, J.W., Wolf, J.M., Klaiman, C., Koenig, K., Cockburn, J., Herlihy, L., . . . Schultz, R.T., 2010. Using computerized games to teach face recognition skills to children with autism spectrum disorder: the Let’s Face It! program. J. Child Psychol. Psychiatry 51 (8), 944–952, http://dx.doi.org/10.1111/j.1469-7610.2010.02258.x. Tentori, M., Hayes, G.R., 2010. Designing for interaction immediacy to enhance social skills of children with autism. In: Paper Presented at the Proceedings of the 12th ACM International Conference on Ubiquitous Computing, Copenhagen, Denmark. Tsai, T.W., Lin, M.Y., 2011. An application of interactive game for facial expression of the autisms. In: Edutainment Technologies. Educational Games and Virtual Reality/Augmented Reality Applications. Springer, pp. 204–211.
Tseng, R.Y., Do, E.Y.L., 2010. Facial expression wonderland (FEW): a novel design prototype of information and computer technology (ICT) for children with autism spectrum disorder (ASD). In: Paper Presented at the Proceedings of the 1st ACM International Health Informatics Symposium, Arlington, Virginia, USA. Wainer, J., Robins, B., Amirabdollahian, F., Dautenhahn, K., 2014. Using the humanoid robot KASPAR to autonomously play triadic games and facilitate collaborative play among children with autism. IEEE Trans. Auton. Ment. Dev. 6 (3), 183–199, http://dx.doi.org/10.1109/TAMD.2014.2303116. Wang, A.T., Lee, S.S., Sigman, M., Dapretto, M., 2006. Neural basis of irony comprehension in children with autism: the role of prosody and context. Brain 129 (4), 932–943, http://dx.doi.org/10.1093/brain/awl032. Warren, Z.E., Zheng, Z., Swanson, A.R., Bekele, E., Zhang, L., Crittendon, J.A., . . . Sarkar, N., 2015. Can robotic interaction improve joint attention skills? J. Autism Dev. Disord. 45 (11), 3726–3734. Wegbreit, E., Weissman, A.B., Cushman, G.K., Puzia, M.E., Kim, K.L., Leibenluft, E., Dickstein, D.P., 2015. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults. Bipolar Disord. 17 (5), 471–485, http://dx.doi.org/10.1111/bdi.12312. Weiss, P.L., Gal, E., Zancanaro, M., Giusti, L., Cobb, S., Millen, L., Eden, S., 2011. Usability of technology supported social competence training for children on the autism spectrum. Paper Presented at the 2011 International Conference on Virtual Rehabilitation (ICVR). Weisz, J.R., Kazdin, A.E. (Eds.), 2010. Evidence-based Psychotherapies for Children and Adolescents. Guilford Press. Whalen, C., Liden, L., Ingersoll, B., Dallaire, E., Liden, S., 2006. Behavioral improvements associated with computer-assisted instruction for children with developmental disabilities. J. Speech Lang. Pathol. Appl. Behav. Anal. 1 (1), 11. 
White, S.W., Richey, J.A., Gracanin, D., Bell, M.A., LaConte, S., Coffman, M., Kim, I., 2014. The promise of neurotechnology in clinical translational science. Clin. Psychol. Sci. (2167702614549801). Williams, B.T., Gray, K.M., Tonge, B.J., 2012. Teaching emotion recognition skills to young children with autism: a randomised controlled trial of an emotion training programme. J. Child Psychol. Psychiatry 53 (12), 1268–1276. Young, R.L., Posselt, M., 2012. Using the transporters DVD as a learning tool for children with autism spectrum disorders (ASD). J. Autism Dev. Disord. 42 (6), 984–991. Zhang, L., 2010. Affect sensing in an affective interactive e-theatre for autistic children. Paper Presented at the 2010 International Conference on the Natural Language Processing and Knowledge Engineering (NLP-KE). Zheng, Z., Young, E.M., Swanson, A.R., Weitlauf, A.S., Warren, Z.E., Sarkar, N., 2015a. Robot-mediated imitation skill training for children with autism. IEEE Trans. Neural Syst. Rehabil. Eng., http://dx.doi.org/10.1109/TNSRE.2015.2475724. Zheng, Z., Young, E.M., Swanson, A., Weitlauf, A., Warren, Z., Sarkar, N., 2015b. Robot-mediated mixed gesture imitation skill training for young children with ASD. Paper Presented at the 2015 International Conference on Advanced Robotics (ICAR). Zonnevijlle-Bender, M.J.S., van Goozen, S.H.M., Cohen-Kettenis, P.T., van Elburg, A., van Engeland, H., 2002. Do adolescent anorexia nervosa patients have deficits in emotional functioning? Eur. Child Adolesc. Psychiatry 11 (1), 38–42, http://dx.doi.org/10.1007/s00787-004-0351-9. Zonnevylle-Bender, M.J.S., van Goozen, S.H.M., Cohen-Kettenis, P.T., van Elburg, T.A., van Engeland, H., 2004. Emotional functioning in adolescent anorexia nervosa patients. Eur. Child Adolesc. Psychiatry 13 (1), 28–34, http://dx.doi.org/10.1007/s00787-004-0351-9.