The effect of human engagement depicted in contextual photographs on the visual attention patterns of adults with traumatic brain injury


Journal of Communication Disorders 69 (2017) 58–71





Amber Thiessen a, Jessica Brown b, David Beukelman c, Karen Hux c

a Department of Communication Sciences and Disorders, University of Houston, 114 Clinical Research Services, Houston, TX 77004, United States
b Department of Speech-Language-Hearing Sciences, University of Minnesota, 164 Pillsbury Dr. SE, 115 Shevlin Hall, Minneapolis, MN 55455, United States
c Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, 301 Barkley Memorial Center, Lincoln, NE 68583, United States

Corresponding author. E-mail address: [email protected] (A. Thiessen).

http://dx.doi.org/10.1016/j.jcomdis.2017.07.001
Received 22 August 2016; Received in revised form 3 July 2017; Accepted 17 July 2017; Available online 30 July 2017
0021-9924/© 2017 Elsevier Inc. All rights reserved.

ARTICLE INFO

ABSTRACT

Keywords: Traumatic brain injury; Eye-tracking; Contextual photographs; Engagement

Photographs are a frequently employed tool for the rehabilitation of adults with traumatic brain injury (TBI). Speech-language pathologists (SLPs) working with these individuals must select photos that are easily identifiable and meaningful to their clients. In this investigation, we examined the visual attention response to camera-engaged (i.e., depicted human figure looking toward the camera) and task-engaged (i.e., depicted human figure looking at and touching an object) contextual photographs for a group of adults with TBI and a group of adults without neurological conditions. Eye-tracking technology served to measure visual fixations accurately and objectively. Although differences were hypothesized given the cognitive deficits associated with TBI, study results revealed little difference in the visual fixation patterns of adults with and without TBI. Specifically, both groups of participants tended to fixate rapidly on the depicted human figure and to fixate more on objects with which a human figure was task-engaged than on the same objects when the human figure was camera-engaged. These results indicate that strategic placement of human figures in a contextual photograph may modify the way in which individuals with TBI visually attend to and interpret photographs. In addition, task-engagement appears to have a guiding effect on visual attention that may benefit SLPs hoping to select more effective contextual photographs for their clients with TBI. Finally, the limited differences in visual attention patterns between individuals with TBI and their age- and gender-matched peers without neurological impairments indicate that these two groups find similar photograph regions worthy of visual fixation.

Learning outcomes: Readers will gain knowledge regarding the photograph selection process for individuals with TBI. In addition, readers will be able to identify camera- and task-engaged photographs and to explain why task-engagement may be a beneficial component of contextual photographs.

1. Introduction




Speech-language pathologists (SLPs) and other rehabilitation professionals regularly employ contextual images (i.e., images depicting objects in natural, holistic environments; Hux, Buechter, Wallace, & Weissling, 2010) to act both as tools for the assessment (e.g., standardized and informal testing) and treatment of cognitive-communication deficits (e.g., memory books, speech-generating devices) associated with severe traumatic brain injury (TBI; Carlomagno, Giannotti, Vorano, & Marini, 2011; Hanson, Yorkston, & Beukelman, 2013; Hux, Wallace, Evans, & Snell, 2008; Wallace, 2010). Given the important roles contextual images play in the rehabilitation process and the unique cognitive deficits experienced by individuals with TBI (e.g., attention deficits), careful consideration must be given to the selection and design of these images. Specifically, attention must be given to the effects of various design features on cognitive processing and interpretation to ensure that the images selected are as meaningful and transparent as possible.

To this end, researchers have begun to examine the preferences and identification patterns of individuals with TBI regarding contextual images (Brown, Hux, Knollman-Porter, & Wallace, 2016; Thiessen, Brown, Beukelman, Hux, & Myers, 2017; Wallace, Hux, & Beukelman, 2010). These investigations reveal that adults with TBI prefer contextual photographs over icons and photographs devoid of context for the representation of certain message types (Thiessen et al., 2017). In addition, increased context has been shown to result in greater identification accuracy for photographs (Wallace et al., 2010). However, individuals with TBI may not be as adept as their peers without neurological conditions at utilizing background context to answer questions related to image content or theme (Brown et al., 2016), and they tend to require increased time to process high-context photographs as compared to low-context photographs (Wallace et al., 2010). Contextual photographs may well be a viable choice for message representation in the cognitive and communication supports of adults with TBI; however, care must still be taken to ensure that the content depicted within these photographs is meaningful to those who employ them. As such, identification of design features that improve the transparency and usability of contextual photographs is needed. Research determining how image design features influence the perception, interpretation, and use of contextual photographs will assist in making this determination, and methods must be sought to objectively examine the responses of individuals with TBI to various design features.

Eye-tracking technology is becoming an increasingly viable tool for examining the effects of various image design features on cognitive processing. Because it allows for the accurate and objective examination of viewers' visual attention patterns across image regions (Rayner, 2009), eye-tracking has had a relatively prominent presence in the cognitive sciences and psychological research for decades (Buswell, 1935; Duchowski, 2002; Yarbus, 1967). Given the strong relation between visual attention and cognitive processing (Just & Carpenter, 1976), examination of the location, duration, and order of visual fixations on specific image regions can shed light on the cognitive processing of images without the need for verbal explanation (Rayner, 2009). For example, visual fixation has been shown to correlate positively with the relative importance, visual interest, and informativeness of image regions (Jacob & Karn, 2003; Poole & Ball, 2006).
Specifically, image regions that are more rapidly fixated upon or that receive lengthier fixations are typically rated as more informative, appealing, or interesting to viewers (Buswell, 1935; Rayner, 2009; Yarbus, 1967). Therefore, examining which elements depicted within contextual images most readily capture visual attention, and which are overlooked, could serve to inform the image design and selection process.

Although direct examination of the visual attention response patterns of individuals with TBI is desired, consideration must also be given to relevant research findings regarding the visual attention patterns of adults without neurological conditions. It has been argued that contextual images, by design, provide a certain level of support for viewers engaged in visual processing tasks (Biederman, 1981; Biederman, Mezzanotte, & Rabinowitz, 1982). Specifically, because contextual images depict real-world environments, viewers without neurological conditions tend to use their world knowledge to guide their search and identification processes (Neider & Zelinsky, 2006). As such, during visual search tasks, viewers tend to focus rapidly on image regions in which targets appear in real-world settings (e.g., looking for birds in the upper half of a contextual image). In addition to using world knowledge to guide image viewing, adults without neurological conditions tend to fixate most on visually informative image regions (Antes, 1974; Buswell, 1935; De Graef, Christiaens, & d'Ydewalle, 1990; Henderson, Weeks, & Hollingworth, 1999; Loftus & Mackworth, 1978; Mackworth & Morandi, 1967; Yarbus, 1967). Ratings of an image region's level of informativeness tend to depend upon the task (e.g., search, description; Yarbus, 1967); however, one element that consistently captures viewers' attention is the human figure (e.g., Fletcher-Watson, Findlay, Leekam, & Benson, 2008). This effect remains consistent regardless of the figures' size and position within contextual photographs (Wilkinson & Light, 2011). As such, it is likely that people find human figures informative during contextual photograph processing regardless of task.

Human figures depicted in contextual photographs can also act as guides to illustrated objects and photo regions. Specifically, when human figures presented in contextual photographs are touching and looking at an object (i.e., task-engaged), the number of visual fixations on the touched/viewed object tends to increase relative to when the figures are looking forward toward the camera (i.e., camera-engaged; Fletcher-Watson et al., 2008). These results indicate that adults without neurological conditions can be guided by depicted engagement cues to fixate visually on objects that might otherwise receive little visual fixation time.

These findings are not unique to adults without neurological conditions. A recent series of studies examining the effects of two types of human engagement represented in contextual photographs (i.e., camera- and task-engagement) has been conducted with adults with aphasia (Thiessen, Beukelman, Ullman, & Longenecker, 2014; Thiessen, Beukelman, Hux, & Longenecker, 2016). Camera-engaged photographs contain one or more human figures looking forward toward the camera with their hands in a neutral position; these figures are neither looking at nor touching objects presented in the photograph context.
Task-engaged photographs contain one or more human figures who are both looking at and touching an object depicted in the background of a photograph. Participants in both of these studies viewed camera- and task-engaged photographs on an eye-tracker monitor during a free-viewing task. Results indicated that adults with aphasia responded to task-engagement cues in a manner similar to that of their peers without neurological conditions, increasing their visual attention to objects that depicted human figures were touching and looking at.


Although the existing literature indicates that task-engagement cues presented in photographs have a guiding effect on the visual attention of both individuals with aphasia and individuals without neurological conditions, it cannot be assumed that adults with TBI will exhibit a similar response pattern, because TBI results in unique cognitive deficits not associated with aphasia. For example, impairments in selective attention and reasoning, which are commonly associated with TBI (Ben-David, Nguyen, & van Lieshout, 2011; Vas, Chapman, Cook, Elliott, & Keebler, 2011; Ziino & Ponsford, 2006), could affect an individual's ability to discern which image regions are most important as well as the ability to focus on those regions visually. In addition, deficits in social cognition often limit the ability of individuals with TBI to perceive and interpret gestures and body language in real-world situations (Evans & Hux, 2011; Lezak, 1995; Milders, Fuchs, & Crawford, 2003; Rousseaux, Vérigneaux, & Kozlowski, 2010), and it is possible that these deficits could influence the perception of gestures and cues within contextual photographs as well. Therefore, the ability to follow task-engagement cues, a feat that requires perception and interpretation of gestures and body language as well as the ability to attend selectively to relevant image regions, could be affected for individuals with TBI.

To explore this possibility, we compared visual fixation patterns to photographs depicting camera-engagement or task-engagement for adults with TBI. Results from participants with TBI were compared to those from a group of adults without neurological conditions to measure the effect of TBI on visual attention to these two types of photographs. We hypothesized that all participants would demonstrate increased visual attention to objects of engagement depicted in task-engaged photographs as compared to camera-engaged photographs, but that adults with TBI would be less effective at processing these cues, as demonstrated by slower responses to engagement cues and less fixation time spent on the object of engagement depicted in camera-engaged photographs as compared to their peers without neurological conditions.

2. Method

This study was approved by the Institutional Review Boards at both the University of Nebraska-Lincoln and the University of Houston. Approved protocols and procedures were utilized throughout the course of this investigation.

2.1. Participants

2.1.1. Adults with TBI

Two females and seven males between the ages of 20 and 46 years (M = 30.78; SD = 8.83) with histories of severe TBI participated in this investigation. We recruited one additional adult with TBI; however, we were unable to achieve a successful eye-tracker calibration for this participant. As such, he could not complete the experimental task and was disqualified from the study. No participants in this study reported prior experience with eye-tracking or with eye gaze as an augmentative and alternative communication (AAC) access method. Participants who wore corrective lenses were allowed to wear their lenses throughout all portions of the study. All participants were native speakers of American English and had completed high school.

We used the criteria established by Fortuny, Briggs, Newcombe, Ratcliff, and Thomas (1980) to ensure that all participants had sustained a severe TBI. As such, participants had sustained a TBI resulting in a loss of consciousness for at least one day and/or had experienced post-traumatic amnesia for at least one week.
In addition to initial injury severity, we also assessed outcome severity by examining the long-term effects of TBI on participants' everyday functioning using the Mayo-Portland Adaptability Inventory, 4th edition (MPAI-4; Malec & Lezak, 2008). Participant scores on the MPAI-4 ranged from 39 to 56 (M = 44.89; SD = 6.55). According to the MPAI-4 guidelines, scores of 30–40 indicate mild limitations as a result of brain injury, scores of 40–50 indicate moderate limitations, and scores greater than 50 indicate moderate to severe limitations. MPAI-4 scores indicate that the participants with TBI demonstrated long-term effects and, as a result, lived either in long-term supported environments such as assisted living or at home with family support and supervision. The standard score results for the MPAI-4 as well as additional demographic information for each participant appear in Table 1.

Table 1
Demographic Information for Participants with Traumatic Brain Injury.

Participant   Gender   Age (Years)   TPO (Months)   MPAI-4   Cause of Injury
1             M        20            7              39       MVA
2             M        34            207            41       MVA
3             M        23            9              53       MVA
4             M        37            49             40       Blow to Head
5             M        38            288            56       Assault
6             M        46            178            41       Fall
7             M        39            126            43       MVA
8             F        20            7              51       MVA
9             F        28            91             40       MVA

Note: MVA = motor vehicle accident; TPO = time post-onset; MPAI-4 = Mayo-Portland Adaptability Inventory, 4th edition.

2.1.2. Adults without neurological conditions

Two females and seven males between the ages of 19 and 55 years (M = 33.11; SD = 14.52) with no known history of neurological impairment formed a second participant group. To ensure that these individuals had no history of a neurological event or condition, they completed a self-reported neurological history questionnaire. Like participants in the TBI group, participants in this group who wore corrective lenses were allowed to wear their lenses throughout the study. All participants without neurological conditions were native speakers of American English and had completed high school.


Computation of a t-test revealed no significant difference in age between the two participant groups, t(8) = −0.738, p = 0.482.

2.2. Equipment

2.2.1. Tobii T60™

The Tobii T60 is an infrared eye-tracker designed as a noninvasive means of measuring visual fixations, that is, periods of relative immobility of the eyes during which visual processing occurs. The T60 operates by emitting an infrared beam that reflects off an individual's corneas, allowing precise measurement and recording of visual fixation location and time. Data are recorded at a 60 Hz sampling rate (i.e., 60 samples per second) and stored within the accompanying Tobii Studio software files on a connected computer. The T60 is a binocular recorder with monocular capabilities, meaning that it records the movements of both eyes but can be adjusted to record the movements of only one eye if necessary. For the purposes of this investigation, all participants were binocularly calibrated and tracked, and data were averaged across both eyes.

The T60 is a robust system that allows a moderate amount of participant head movement; as such, participants are not required to remain completely immobile during testing. We did not require the use of a chin rest, as this might have been difficult for our participants with physical disabilities; however, we did take precautions to minimize movement to the extent possible. Specifically, ambulatory participants sat in stationary chairs, and participants who were not ambulatory sat in their wheelchairs with the brakes engaged. In addition, we encouraged participants to maintain a steady posture throughout testing.

In addition to storing fixation data, Tobii Studio software allows researchers to set fixation filters that sort raw data to reflect only fixations in which cognitive processing is thought to be occurring. We employed a 100-millisecond fixation filter with a velocity threshold of 50 pixels/sample, as this setting has been shown to reliably distinguish intended fixations from other oculomotor activity (Manor & Gordon, 2003). As such, fixations shorter than 100 ms in duration were filtered out and excluded from the data set, as they were considered too brief for individuals to truly be engaged in cognitive processing. (A minimal code sketch of this filtering step appears below.)

2.3. Stimuli

2.3.1. Experimental photographs

We captured a total of 38 colored contextual photographs with a Canon Rebel T1i camera. Nineteen of these photographs contained camera-engaged human figures, and 19 contained task-engaged human figures, as shown in Figs. 1 and 2, respectively. Camera-engaged photographs contained a single human figure facing forward and looking toward the camera (Thiessen et al., 2014, 2016). Human figures in camera-engaged photographs were neither looking at nor touching any elements depicted within the image.
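To make the filtering step described in Section 2.2.1 concrete, the following minimal Python sketch applies the duration criterion to a list of candidate fixations. It is an illustration only, not Tobii Studio's implementation: the input format (fixations with screen coordinates and durations) is a hypothetical stand-in for the software's internal representation, and the velocity-based classification that precedes duration filtering is omitted.

```python
# Minimal sketch (not Tobii Studio's implementation) of the 100 ms
# duration filter described above. The input format is hypothetical.

MIN_FIXATION_MS = 100  # fixations briefer than this are excluded

def filter_fixations(fixations):
    """Keep only fixations long enough to plausibly reflect cognitive
    processing. Each fixation is a dict with hypothetical keys 'x' and
    'y' (screen pixels) and 'duration_ms'."""
    return [f for f in fixations if f["duration_ms"] >= MIN_FIXATION_MS]

# Example: at a 60 Hz sampling rate (one sample every ~16.7 ms), a
# 100 ms fixation spans roughly six consecutive samples.
raw = [
    {"x": 350, "y": 233, "duration_ms": 250},  # kept
    {"x": 120, "y": 400, "duration_ms": 83},   # dropped as oculomotor noise
]
print(filter_fixations(raw))  # -> only the 250 ms fixation remains
```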

Fig. 1. Camera-engaged photograph example.


Fig. 2. Task-engaged photograph example.

Task-engaged photographs contained a human figure both touching and looking at an object of engagement that was also visible in the image (Thiessen et al., 2014, 2016). All human figures occupied between 3.7% and 14.0% of the associated photograph, and the object of engagement occupied 0.5%–5.5%. Camera- and task-engaged photographs were paired such that one photograph of each type contained the same content (i.e., human figure, object of engagement, and background context), with the only difference being the type of engagement displayed by the human figure. The faces of depicted human figures were completely visible in all camera-engaged photographs and partially visible in task-engaged photographs. This difference occurred because the human figures depicted in task-engaged photographs were facing the object of engagement, whereas the human figures in camera-engaged photographs were facing the camera.

Table 2
List of Photograph Themes.

Playing a game
Reading a cookbook
Completing homework
Getting into a vehicle
Checking blood pressure
Drinking water
Turning on a lamp
Changing television channel
Washing hands
Putting on lotion
Paying bills
Eating lunch
Tying shoes
Vacuuming
Straightening a picture
Emptying dishwasher
Typing on computer
Reading
Working in an office


We captured contextual photographs that depicted human figures who were either performing everyday life tasks relevant to adult activities and responsibilities (i.e., task-engaged; e.g., typing on a computer, vacuuming; see Table 2) or standing and facing the camera within matched contextual environments (i.e., camera-engaged). The photographs were taken in both indoor and outdoor settings that varied in the level of background context depicted. In addition, we varied the locations of the human figures and objects of engagement such that they appeared in all regions across photographs. This served to minimize the potential for visual bias toward particular image regions (e.g., left, right, and center). In addition, the object of engagement appeared both on the left and on the right side of the human figure. None of the photographs selected contained animals.

The focus of the experimental task was not on identifying depicted themes/actions; however, we did feel it necessary to validate the experimental stimuli to ensure that none of the photographs represented concepts or activities that were highly abstract or difficult to identify. To validate our photographs, ten adults without neurological conditions viewed each photograph and were instructed to select, from five plausible choices, the main event that was either occurring (i.e., task-engaged photos) or about to occur (i.e., camera-engaged photos) in each photograph. We counterbalanced the order of presentation of our stimuli so that, for each matched pair, half of the viewers saw the task-engaged photograph and the other half saw the camera-engaged photograph. To qualify for inclusion in the study, each photo was required to be correctly identified during 80% of opportunities. All study stimuli met this criterion, and overall, our photographs were correctly identified during 96.3% of opportunities.

2.3.2. Foil photographs

In addition to the 38 experimental photographs, we inserted five foil photos into the presentation order. These photographs were varied such that three contained two or more human figures and two contained no human figures. Of the three foil photos depicting human figures, two were task-engaged photos in which the human figures were touching and looking at objects depicted in the photograph, and one was camera-engaged such that the human figures were looking forward toward the camera. Two of these photographs were taken in indoor settings and one was taken outdoors. Of the two foil photos without human figures, one was captured in an indoor location and one in an outdoor setting. Unlike the experimental photos, foil photos did not appear as matched pairs. We included the foil images to minimize participant success in deducing the investigative purpose, which could have affected their visual attention behaviors. Given that the purpose of these photos was not data analysis, the size of the depicted human figures was not held to the same constraints as in the experimental photographs.

In addition to inserting foils, we varied the presentation order of stimuli across participants to reduce the potential for order effects influencing study findings. As such, we randomly assigned participants to one of four quasi-randomized photograph presentation orders. These presentation orders were designed so that one photograph from each of the matched image pairs (i.e., task- and camera-engaged) appeared in the first half of the image set and the other appeared in the second half. We counterbalanced presentation order across participants such that the task-engaged photograph from an image pair appeared in the first half of the stimuli set for nine participants and in the second half for the remaining participants. One way to construct such orders is sketched below.
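As a concrete illustration of this counterbalancing scheme, the sketch below builds quasi-randomized orders in which each matched pair contributes one photograph to each half of the set, and the half in which the task-engaged member appears alternates across orders. The file names, seeds, and number of orders are illustrative assumptions, not the study's actual materials.

```python
import random

def build_order(pairs, task_first, seed):
    """pairs: list of (task_photo, camera_photo) tuples. Returns one
    quasi-randomized order in which each matched pair contributes one
    member to each half of the stimulus set."""
    rng = random.Random(seed)
    first_half = [t if task_first else c for t, c in pairs]
    second_half = [c if task_first else t for t, c in pairs]
    rng.shuffle(first_half)
    rng.shuffle(second_half)
    return first_half + second_half

# 19 hypothetical matched pairs (file names are placeholders).
pairs = [(f"task_{i:02d}.jpg", f"camera_{i:02d}.jpg") for i in range(1, 20)]

# Four orders; the task-engaged members lead in half of them.
orders = [build_order(pairs, task_first=(k % 2 == 0), seed=k) for k in range(4)]

# Foil photographs would then be inserted at the same fixed positions
# in every order, as described above.
```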
We inserted the foil stimuli in the same location within all image presentation orders. We formatted all stimulus photographs to 700 × 466 pixels using Microsoft Paint™. This was the only modification made to any photos; it served to ensure that all photographs appearing on the Tobii T60 monitor were of equal size (i.e., 23.18 × 16.51 cm).

2.4. Procedures

2.4.1. Screening procedures and training

Each participant completed the appropriate consent/assent procedure prior to completing the study. All participants then underwent a two-part screening procedure to ensure their ability to complete the experimental task. First, participants performed a calibration procedure to confirm their eyes were compatible with the eye-tracker. Tobii Studio software comes equipped with two-, five-, and nine-point calibration setting options. We selected the nine-point procedure because a greater number of calibration points generally allows for more precise calibration (Tobii Technology, 2011). During this screening, participants sat at an appropriate distance from the eye-tracker (i.e., approximately 64 cm) while visually fixating on a ball (i.e., diameter approximately 1.91 cm) moving around the monitor. The ball paused at nine locations on the monitor, during which the eye-tracker mapped information related to eye shape and structure as well as distance from the T60 monitor. Upon completion of the procedure, we used two methods to ensure adequate calibration. First, we relied on the Tobii Studio software's calibration review, which objectively measures whether a calibration was successful immediately after completion of the procedure. Calibrations deemed successful according to Tobii Studio software standards were then visually inspected by the first author to ensure that they were adequate for study inclusion. Specifically, the researcher examined each of the nine points on the eye-tracker screen for each eye. If all nine points were fixated accurately, calibration was judged adequate for participation. These procedures are consistent with those set forth in the Tobii Studio User's Manual, and they have been utilized in previous investigations to ensure adequate calibration (Brown, Thiessen, Beukelman, & Hux, 2015; Thiessen et al., 2014, 2016, 2017).

We also completed informal screening to ensure that our participants were able to visually process the photographs utilized in this investigation. The second screening procedure was thus a visual scanning task designed to confirm participants' ability to fixate visually on all quadrants of the eye-tracker monitor. To complete this task, participants remained seated in front of the eye-tracker monitor while viewing a series of ten black X symbols (i.e., approximately size 60 Times New Roman font) appearing sequentially at varying monitor locations. Participants had to fixate upon all X symbols within five seconds of appearance to meet the qualification criterion.

A training activity followed the two screening activities.


For training, we provided participants with a verbal explanation of the experimental task procedures. Specifically, participants were informed that they would be viewing a series of photographs presented on the eye-tracker monitor and that, prior to viewing each photo, a fixation dot would appear on the screen. They were told to fixate upon that dot until a photograph appeared, at which time they were free to look at the photo naturally. Following this verbal explanation, participants viewed three sample stimulus trials on the T60 monitor and asked clarifying questions as necessary. We asked participants whether they were experiencing any difficulty visually processing the training images to ensure that they were able to view images of this size and nature without known difficulty. Immediately prior to beginning the experimental task, we assured participants that they would not have to answer any questions related to study stimuli. This assurance was provided to reduce the likelihood of participants trying to memorize stimulus content. Previous research has established that visual attention is influenced by task (e.g., search, description; Buswell, 1935; Henderson et al., 1999; Yarbus, 1967). Because the goal of this investigation was to examine visual response behavior irrespective of task, we wanted participants to avoid any inclination to memorize stimulus content so as to minimize this potential confound. Following the training period, we initiated the experimental task.

2.4.2. Experimental procedures

Prior to experimental task performance, we recalibrated the eye-tracking equipment for each participant. Participants then remained seated in front of the T60 monitor to view the experimental stimuli. Each experimental contextual photograph was preceded by a fixation dot page consisting of a single red dot appearing in the upper central position of a plain, black screen. The fixation dot page served to align participants' eye gaze to a consistent location prior to stimulus presentation. The fixation dot appeared for two seconds and was then replaced with a single contextual photograph presented for seven seconds. The seven-second viewing time has been used frequently in research focused on scene viewing for AAC purposes (Thiessen et al., 2014, 2016; Wilkinson & Light, 2011), as it provides a controlled, lengthy window from which to examine individuals' attention patterns. Then, the next stimulus set (i.e., the fixation dot followed by the next contextual photograph) appeared on the eye-tracker monitor. This procedure repeated until a participant had viewed all photographs.

2.5. Data analysis

We defined and designated specific areas of interest (AOIs) prior to having participants perform the experimental procedures. AOIs are image regions that correspond with specific research questions. In this investigation, our goal was to analyze visual fixations across three AOIs: person, object of engagement, and background. We defined the person AOI as the human figure depicted within each task-engaged and camera-engaged experimental photograph. We defined the object of engagement AOI as the object that the human figure was looking at and touching in the task-engaged contextual photographs. This AOI was consistent across the camera- and task-engaged contextual photographs; thus, regardless of whether the human figure was actually looking at and touching the object in the stimulus photograph, the object of engagement AOI remained the same. The background AOI included all remaining portions of an experimental photo exclusive of the person and object of engagement AOIs. A simplified sketch of how individual fixations map onto these AOIs follows.
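Conceptually, assigning a fixation to one of these AOIs is a point-in-region test. The sketch below illustrates the logic with rectangular regions; the study's actual AOIs were irregular, hand-drawn shapes and the coding was performed manually (described later in this section), so the rectangles and coordinates here are purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned stand-in for an AOI region (pixel coordinates)."""
    left: int
    top: int
    right: int
    bottom: int

    def contains(self, x: int, y: int) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def code_fixation(x, y, object_aoi, person_aoi):
    """Assign a fixation to an AOI. The object AOI is checked first
    because it was drawn to include the hands touching the object; the
    background AOI is everything not covered by the other two."""
    if object_aoi.contains(x, y):
        return "object"
    if person_aoi.contains(x, y):
        return "person"
    return "background"

# Hypothetical regions on a 700 x 466 photograph.
person = Rect(300, 90, 430, 440)
obj = Rect(420, 250, 510, 330)
print(code_fixation(455, 280, obj, person))  # -> "object"
print(code_fixation(60, 60, obj, person))    # -> "background"
```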
For task-engaged photographs, the human figure was touching the object, so a portion of the object was occluded by the presence of the human figure's hands. In these instances, the AOI was drawn to include the portion of the human figure's hands that was touching the object of engagement.

We selected three dependent variables for analysis purposes: (a) domain relative score, (b) latency to first fixation on each AOI, and (c) average fixation duration on each AOI.

Domain relative score is a calculation that allows researchers to examine the percentage of time viewers fixated on an AOI relative to the size of that AOI. Given that larger AOIs account for greater regions of image space than smaller AOIs, it is assumed that larger AOIs will receive greater fixation time than smaller AOIs unless a smaller AOI is for some reason more interesting, more complex, or otherwise deemed more worthy of attention by viewers (Fletcher-Watson et al., 2008; Wilkinson & Light, 2011). For example, an AOI that accounts for 80% of an image would likely receive approximately 80% of fixation time if all AOIs were deemed equally interesting or complex. As such, measuring percent of time fixated across AOIs without accounting for AOI size could be misleading. We specifically chose domain relative score as a dependent variable because it allowed us to control for size discrepancies across AOIs (Fletcher-Watson et al., 2008), and it has been shown to be an effective index of visual attention in free-viewing scene tasks (Thiessen et al., 2016; Wilkinson & Light, 2011). The sizes of the AOIs in the photographs used in this investigation were highly variable, as the background content accounted for a larger portion of our photographs than did the person or object AOIs. Because of this, we opted to examine domain relative score as opposed to percent of time fixated. To calculate domain relative score, we divided the total percent of time an AOI was fixated upon by the percent of image space occupied by that AOI. Therefore, if the percent of time fixated and the percent of space occupied were relatively equal, the result was a score of one. Scores greater than one indicated that an AOI was fixated upon more than would be expected based on that AOI's size, and scores less than one indicated that an AOI received less fixation than expected given its size. Because more time fixated on an AOI generally indicates that an image region is more important (Jacob & Karn, 2003) or complex (Fitts, Jones, & Milton, 1950) than other depicted AOIs, scores greater than one reflect more important or complex image regions than those with scores below one.

Latency to first fixation was the average time in milliseconds that participants took to fixate visually on each AOI for the first time. Higher latency to first fixation scores indicate that participants required more time on average to locate and visually fixate on an AOI, whereas lower scores indicate that participants fixated on a particular AOI relatively rapidly upon photo appearance. Latency to first fixation correlates with the visual appeal of a particular AOI relative to other identified AOIs (Poole & Ball, 2006). Specifically, the more rapidly an individual visually fixates on an AOI, the greater the attention-grabbing properties of that AOI (Poole & Ball, 2006).
We specifically chose this variable because it has been shown to be an effective measure of visual attention for individuals with neurological conditions (Thiessen et al., 2014) and adults without neurological conditions (Wilkinson & Light, 2011). A brief sketch of the domain relative score computation follows.
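Under the definition above, the domain relative score reduces to a simple ratio. The following minimal sketch computes it with illustrative numbers rather than study data.

```python
def domain_relative_score(aoi_fixation_ms: float, total_fixation_ms: float,
                          aoi_area_pct: float) -> float:
    """Percent of fixation time on an AOI divided by the percent of
    image space the AOI occupies. Scores above 1 mean the AOI drew more
    attention than its size alone would predict."""
    pct_time = 100 * aoi_fixation_ms / total_fixation_ms
    return pct_time / aoi_area_pct

# Hypothetical trial: an object AOI covering 2% of the image receives
# 1400 ms of a 7000 ms viewing window (20% of fixation time).
print(domain_relative_score(1400, 7000, aoi_area_pct=2.0))  # -> 10.0
```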


We calculated the third dependent variable, average fixation duration on each AOI, by summing the total fixation duration on each AOI (i.e., fixations with durations below 100 ms were excluded from this calculation) and dividing that number by the total number of fixations within the AOI. Lengthier fixation durations correlate positively with the level of difficulty an individual has in understanding the content presented within a region of an image (Goldberg & Kotval, 1999) and with the visual interest associated with a particular image region (Poole & Ball, 2006). Thus, when individuals visually attend and fixate for longer durations, it indicates that an image region is more difficult to interpret or is of greater interest to the viewer than regions receiving shorter fixation durations. We selected this variable because it has been shown to effectively measure visual attention differences in adults with and without neurological conditions during free scene-viewing tasks (Thiessen et al., 2016).

The Tobii T60 interfaces with a computer hosting Tobii Studio software, and the visual fixation data collected by the T60 are sent directly to the Tobii Studio software program during testing. Because this software was specifically designed for data analysis purposes, we relied on it to gather the quantitative data necessary for this investigation (e.g., fixation duration). Although we relied on Tobii Studio software to collect the fixation data, we employed a manual coding protocol to sort data into specific AOIs (Brown et al., 2015; Thiessen et al., 2014, 2016, 2017). Manual coding was completed by visually inspecting a participant's fixations on each experimental photo and determining whether each individual fixation should be coded as a fixation on the person, object, or background AOI. We selected this procedure in lieu of the Tobii Studio software's automated coding to minimize the potential for errors in data coding that could have occurred as a result of drift. Drift is a relatively common issue in eye-tracking research that occurs as a result of minor postural adjustments and head movements over the course of an experiment (Holmqvist et al., 2011; Hornof & Halverson, 2002). These adjustments in position can cause the calibration of the eye-tracker to become less precise in its measurement of fixation location. To ensure accuracy of manual coding, a trained research assistant independently coded 20% of the data for comparison with our coding. Calculation of interrater reliability revealed 95.2% agreement across coders.

We assessed both skewness and kurtosis of the data to ensure that use of parametric tests was appropriate for statistical analysis. Because results of Shapiro-Wilk testing (Razali & Wah, 2011; Shapiro & Wilk, 1965) revealed that a majority of the distributions were within the normal range, we opted to utilize the ANOVA procedure. We also conducted Mauchly's sphericity testing (Mauchly, 1940) to ensure that data variance was within acceptable limits, given that violation of this assumption could have resulted in an increase in Type I errors (Greenhouse & Geisser, 1959). Greenhouse-Geisser corrections were applied where necessary to ensure this assumption was not violated. In addition, we utilized Tukey's Honestly Significant Difference (HSD) procedure (Abdi & Williams, 2010) to examine the simple effects of all observed interactions.
Tukey's HSD test allows for comparison of all simple effects in a data set while controlling for potential alpha inflation (Abdi & Williams, 2010). Use of the HSD procedure renders a critical value to which mean differences for simple effects are compared: mean differences greater than the critical value are significantly different, whereas those lower than the critical value have not reached significance. Tukey's HSD critical values are provided for each of the significant effects involving three or more pairwise comparisons. A sketch of the critical value computation appears at the end of this section.

We conducted a series of 2 × 2 × 3 mixed-groups analyses of variance (ANOVAs), one for each dependent variable, to compare the visual fixation patterns of the two groups of adult participants (i.e., one with TBI and one without TBI) across contextual photographs depicting two types of engagement (i.e., camera-engaged and task-engaged) and the three areas of interest (i.e., person, object, and background). We also calculated effect sizes using Pearson's r to further describe significant main effects.
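To illustrate how an HSD critical value is obtained, the sketch below computes it for a balanced design from an ANOVA error term, using the studentized range quantile available in SciPy (version 1.7 or later). The mean square error, degrees of freedom, and cell size shown are placeholders, not values from this study.

```python
import math
from scipy.stats import studentized_range  # requires SciPy >= 1.7

def tukey_hsd_critical_value(ms_error: float, df_error: int,
                             n_means: int, n_per_mean: int,
                             alpha: float = 0.05) -> float:
    """HSD = q(1 - alpha; k, df_error) * sqrt(MS_error / n), assuming a
    balanced design. Pairs of means differing by more than this value
    are declared significantly different."""
    q_crit = studentized_range.ppf(1 - alpha, n_means, df_error)
    return q_crit * math.sqrt(ms_error / n_per_mean)

# Placeholder values (not the study's): three AOI means, 32 error
# degrees of freedom, and 18 observations per mean.
print(tukey_hsd_critical_value(ms_error=2.5, df_error=32,
                               n_means=3, n_per_mean=18))
```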

3. Results

3.1. Domain relative score

Mean and standard deviation results for domain relative score for participants with and without TBI appear in Fig. 3. Computation of a 2 × 2 × 3 mixed-groups ANOVA revealed a non-significant three-way interaction of Group × Engagement × AOI, F(1.03, 16.52) = 0.53, p = 0.484. As a result, we performed no follow-up testing on this interaction, instead opting to examine the relevant two-way interactions. Neither the interaction of Group × AOI, F(1.11, 17.69) = 0.89, p = 0.368, nor Group × Engagement, F(1, 16) = 0.48, p = 0.498, was statistically significant; however, the interaction of Engagement × AOI was significant, F(1.03, 16.52) = 81.54, p < 0.001, r = 0.91, HSD = 4.20. Further examination of the simple effects of this interaction revealed that participants exhibited significantly greater domain relative scores on the object AOI than on the background AOI for camera-engaged photographs; however, no significant differences were noted between the person AOI and the object AOI. For task-engaged photos, participants exhibited significantly greater domain relative scores on the object AOI than on the person or background AOIs. Also, greater domain relative scores were noted on the person AOI than the background AOI. Comparison between the engagement conditions revealed no significant differences between task-engaged and camera-engaged photographs for the person or background AOIs; however, significantly greater domain relative scores were noted on the object AOI for task-engaged photos than for camera-engaged photos.

Examination of the main effects revealed that the main effect of AOI was significant, F(1.11, 17.69) = 115.13, p < 0.001, r = 0.78, HSD = 5.76. Further inspection revealed that the object AOI garnered significantly greater domain relative scores than the background or person AOIs; no significant differences were noted between the person and background AOIs. The main effect of Engagement was also statistically significant, F(1, 16) = 71.27, p < 0.001, r = 0.90, indicating that greater overall domain relative scores were noted for task-engaged images than camera-engaged photographs. The main effect of Group was not significant, F(1, 16) = 2.96, p = 0.105.


Fig. 3. Mean and standard deviation results for domain relative scores. Note. Error bars denote standard deviation.

3.2. Latency to first fixation

Mean and standard deviation results for latency to first fixation appear in Fig. 4. Computation of a 2 × 2 × 3 mixed-groups ANOVA revealed a non-significant three-way interaction of Group × Engagement × AOI, F(2, 32) = 0.40, p = 0.675. As a result, we performed no follow-up testing on the three-way interaction. The two-way interaction of Group × Engagement, F(1, 16) = 2.40, p = 0.141, was not significant; however, the interactions of Engagement × AOI, F(2, 32) = 21.31, p < 0.001, r = 0.76, HSD = 298.06, and Group × AOI, F(2, 32) = 4.77, p = 0.015, r = 0.48, HSD = 800.95, were statistically significant.

Follow-up testing on the Engagement × AOI interaction revealed that when participants viewed camera-engaged photographs, they fixated significantly more rapidly (i.e., lower latency to first fixation) on the person AOI than on the object or background AOIs; they also fixated more rapidly on the background AOI than on the object AOI. When viewing task-engaged photographs, participants fixated on the person AOI significantly more rapidly than on the object or background AOIs, and they fixated on the object AOI significantly more rapidly than on the background AOI.

Fig. 4. Mean and standard deviation results for latency to first fixation. Note. Error bars denote standard deviation.



Examination of the effects between camera- and task-engaged photos revealed no significant differences in latency to first fixation for either the background or person AOIs; however, participants fixated significantly more rapidly on the object AOI when viewing task-engaged than camera-engaged photos.

Examination of the Group × AOI interaction revealed no significant differences among the person, object, or background AOIs for individuals with TBI. For individuals without neurological conditions, fixation on the person AOI was more rapid than fixation on the object or background AOIs; however, no significant differences were noted between the object and background AOIs. No significant differences were noted between participants with and without TBI.

Examination of the main effects revealed no significant differences for Group, F(1, 16) = 0.57, p = 0.463, or Engagement, F(1, 16) = 2.92, p = 0.141; however, the main effect of AOI was statistically significant, F(2, 32) = 37.30, p < 0.001, r = 0.84, HSD = 459.67. Inspection of this effect revealed that, overall, participants fixated significantly more rapidly on the person AOI than on the object and background AOIs. No significant differences were noted between the object and background AOIs.

3.3. Average fixation duration

Mean and standard deviation results for average fixation duration appear in Fig. 5.

Fig. 5. Mean and standard deviation results for average fixation duration. Note. Error bars denote standard deviation.

Computation of a 2 × 2 × 3 mixed-groups ANOVA revealed that the three-way interaction of Group × Engagement × AOI was not statistically significant, F(2, 32) = 0.49, p = 0.618; therefore, we performed no follow-up testing of this interaction. Examination of the two-way effects revealed that neither the interaction of Group × AOI, F(2, 32) = 0.32, p = 0.729, nor Group × Engagement, F(1, 16) = 0.50, p = 0.488, reached significance; however, the two-way interaction of Engagement × AOI was statistically significant, F(2, 32) = 25.64, p < 0.001, r = 0.78, HSD = 116.86. Simple effects testing revealed that when participants viewed camera-engaged photographs, their fixations on the person AOI were significantly lengthier than those on the object and background AOIs. In addition, their fixations were significantly lengthier on the object AOI than the background AOI for camera-engaged photographs. When viewing task-engaged photos, participant fixations were significantly shorter on the background AOI than on the person or object AOIs; no significant differences were noted in fixation duration between the person and object AOIs.

Neither the main effect of Group, F(1, 16) = 0.40, p = 0.534, nor the main effect of Engagement, F(1, 16) = 2.36, p = 0.144, reached significance; however, the main effect of AOI was significant, F(2, 32) = 23.76, p < 0.001, r = 0.77, HSD = 200.36. Follow-up testing revealed that participants' average fixation duration was significantly longer on the person AOI than on the object or background AOIs. No significant difference was noted between the object and background AOIs.

4. Discussion

4.1. Summary of findings

The purpose of this preliminary study was to examine the effects of the type of engagement (i.e., camera- or task-engagement) depicted in contextual photographs on the visual attention patterns of adults with TBI and adults without a history of neurological impairment. Results revealed several important findings. First, little difference was evident between the visual fixation patterns of individuals with TBI and adults without neurological conditions for both camera- and task-engaged photographs.


Second, participants in both groups tended to fixate more rapidly on human figures depicted in contextual photographs than on other AOIs when viewing both photograph types. Third, both groups exhibited increased visual fixation time on, and fixated more rapidly on, object AOIs when viewing task-engaged images than camera-engaged images. Interpretation of the results, examination of relevant literature, and potential clinical implications are discussed in the following sections.

4.2. Examination of existing literature

The results of this study add to a growing body of literature focused on how adults with neurological conditions visually attend to engagement cues presented in contextual photographs. Specifically, studies have emerged examining the responses of people with aphasia to engagement cues (Thiessen et al., 2014, 2016). The results of these investigations indicate that adults with aphasia do respond to engagement cues illustrated in contextual photographs by increasing their visual attention to objects with which a depicted human figure is task-engaged. Much as in the current study, the differences between individuals with aphasia and their peers without neurological conditions were quite minimal (Thiessen et al., 2016). Taken together, these results indicate that engagement cues have the potential to guide the visual attention of individuals with a variety of neurological conditions to important image regions.

4.3. Study interpretations

Contrary to our initial hypothesis, little difference was noted in the visual response patterns to engagement cues between individuals with and without TBI. These findings were unexpected given the nature of the cognitive deficits commonly experienced following brain injury. Specifically, deficits in social cognition and selective attention, both of which are common among individuals with TBI (Ben-David et al., 2011; Lezak, 1995; Milders et al., 2003; Vas et al., 2011; Ziino & Ponsford, 2006), could have limited participants' ability both to attend and to respond to engagement cues depicted in contextual photographs. However, this did not appear to be the case, as participants from both groups exhibited similar responses to human figures and to task-engagement cues. Although it is difficult to speculate on the reasons for the lack of differences between these two groups given the recency of this area of research, a few possible explanations exist.

One possible reason for the lack of differences noted between participant groups could be the simple and controlled nature of the experimental task. Specifically, participants viewed photographs containing task-engagement cues in a free-viewing manner within a controlled clinical environment with limited distractions. Although one could argue that task-engagement cues are similar to the social engagement cues found in everyday life, this may not be the case when one considers factors unrelated to the cues themselves. For example, although typical social interactions often involve joint attention to a particular object or task, much like our task-engaged photographs, real-world social interactions likely require greater processing skills as a result of extraneous, attention-grabbing stimuli (e.g., noise, communication partner conversation). In addition, the rapid processing requirements of live interactions could pose additional challenges to individuals with decreased attention and processing abilities.
As such, it is conceivable that the low complexity of the experimental task sufficiently minimized cognitive processing requirements, in turn allowing participants to allocate their attention to the depicted human figures and adequately process the task-engagement cues present in the experimental stimuli.

An additional explanation for the lack of differences between participant groups may also be related to task requirements. The purpose of this investigation was solely to examine visual attention to engagement cues found in photographs through a free-viewing paradigm; no additional measures of behavioral responses were collected. Although we expected differences in how participant groups visually attended to engagement cues, perhaps the distinction between the two groups is not fully captured at the level of visual attention but rather in each individual's perception of the viewed content. For instance, it is commonly accepted that individuals with TBI experience deficits in interpreting and responding to social cues such as gestures and body language; however, the majority of the research examining this deficit has focused on the physical response to, or interpretation of, social cues present in simulated real-world situations rather than on visual interpretation of such cues. For example, research conducted by Evans and Hux (2011) revealed that adults with TBI tended to be less accurate than their peers without neurological conditions at interpreting gestures, both when provided solely with a gesture and when provided with a gesture paired with verbal information. Given the nature of this and other studies, it is possible that people with TBI, like those without neurological conditions, do visually attend to engagement cues and other social cues but differ in their subsequent responses to these cues. For example, if an individual looks at their watch during a conversation, it likely implies that they do not have the time or desire to continue talking. It is possible that an individual with TBI would notice this behavior and even visually attend to the watch in response to the communication partner's gesture, yet not act in the expected manner, that is, by excusing themselves from the situation. As such, they may have attended to the cues provided by their communication partner without applying or initiating the appropriate behavioral response. Although this interpretation is speculative, it does indicate the need for further research to determine whether individuals with brain injury do in fact visually attend to social cues yet fail to initiate the appropriate behaviors in response.

4.4. Clinical implications

Photographs and other image types are well-established methods for supporting the rehabilitation of adults with acquired neurological conditions.


With the emergence of digital cameras, cellular technology, and online sharing capabilities, it is likely that photographs will become increasingly prevalent in the rehabilitation world. As such, it is essential that we examine how those who rely on photographs perceive, interpret, and attend to their various design features. Several relevant clinical implications are evident from this investigation.

First, given that participants increased their attention to engaged objects, engagement cues likely have a guiding effect on visual attention. Because contextual photographs contain high levels of context and could even be described as 'busy', it may be difficult for individuals with deficits in selective attention to sort relevant content from irrelevant content. The guiding effect noted in this investigation could be a powerful tool for steering viewers toward relevant content depicted in a photograph. As such, when selecting photographs with high levels of context as supports for individuals with TBI, SLPs should consider the potential benefits of task-engagement.

A second relevant clinical implication relates to the focus on human figures in contextual photographs. Specifically, both participant groups tended to fixate rapidly on human figures depicted in contextual photographs and to fixate on them for greater durations than would be expected given the relative size of these AOIs. Although the focus on human figures could be seen as a positive factor, the implications of this tendency likely depend upon the intended use of the photograph. For example, if a photograph is meant to represent a particular person or action, having a human figure contained within the photograph context may be beneficial; however, if a photograph is intended to represent a particular setting or location, the presence of human figures may be somewhat distracting. Although emerging evidence indicates that individuals with TBI demonstrate preferences for specific design features based on message type (i.e., nouns versus verbs; Thiessen et al., 2017), further research is necessary to describe fully how the presence of human figures in contextual photographs influences the interpretation of those images.

Third, the finding of little difference in the visual attention behaviors of people with and without histories of TBI has implications for the capturing of photographs for use as supports. Individuals with TBI often experience physical and cognitive challenges reducing their ability to capture and select photographs for these purposes; hence, clinicians, family members, and friends serve as proxies during the image selection process. In such cases, those serving as proxies bear the onus of selecting identifiable and meaningful photographs, a role that is especially important given that the cognitive deficits of individuals with TBI may hinder their use of supports that are less than optimally designed. The current results, which suggest that age- and gender-matched peers of adults with TBI visually attend to photo regions similar to those attended to by people with TBI, serve as an initial step toward making this determination; however, this statement should be interpreted with caution, as responses to many other design features have yet to be examined (e.g., number of human figures).

4.5. Study limitations and future research directions

One limitation of the research presented herein was the inclusion of only nine adults with TBI and nine adults without neurological conditions as study participants.
Although a larger number of participants would have been desirable, our sample size was consistent with those of other research examining the visual attention of individuals with disabilities (Brown et al., 2015; Thiessen et al., 2014, 2016; Wilkinson & Light, 2011, 2014; Wilkinson, O’Neill, & McIlvane, 2014). We also included a relatively large number of stimuli compared to those typically used in visual attention research (Brown et al., 2015; Wilkinson & Light, 2011, 2014) to strengthen our findings. Given the relatively small sample size, it is difficult to draw definitive conclusions from the study; however, examination of the variability among participants helps to establish the stability of the findings. Inspection of our data revealed relative consistency of results across the dependent variables (i.e., latency to first fixation, average fixation duration, and domain relative score; see the illustrative sketch at the end of this section) for individuals with TBI. Specifically, only one of these participants did not demonstrate decreased latency to first fixation on the object AOI in task-engaged photographs as compared to camera-engaged photographs, and only one participant did not demonstrate increased average fixation duration on the object AOI for task-engaged photographs as compared to camera-engaged photographs. All participants demonstrated a similar pattern of domain relative scores for camera-engaged photographs. Slightly more variability was noted among participants without neurological conditions: three of these participants did not demonstrate increased fixation durations on the object AOI for task-engaged photographs as compared to camera-engaged photographs; however, the consistency levels for adults without neurological conditions mirrored those discussed above for the latency to first fixation and domain relative score variables. Although variability in the response to engagement was relatively minimal, individuals with TBI present with heterogeneous strengths and deficits. As such, future researchers should consider performing studies with larger sample sizes and examining whether subgroups of people with TBI displaying different types or severities of specific deficits perform differentially on visual attention tasks.

A second limitation of this investigation was the lack of personalized stimuli. Rather, the photos were generic in that they contained people and depicted places unfamiliar to each participant. The use of generic photographs, although not uncommon in rehabilitation settings, is not considered best practice, because researchers have found that individuals with neurological disorders typically prefer their own personalized photos for message representation (McKelvey, Hux, Dietz, & Beukelman, 2010). The decision to use generic photographs as stimuli allowed for greater internal validity and control, because all participants viewed the same photographs; however, this high level of control came at the expense of external validity. Specifically, practitioners and researchers should exercise caution in generalizing the study results to the personalized and personally relevant photos often employed in therapy settings. Future researchers should consider examining whether visual attention patterns differ when individuals with TBI view personalized versus generic photographs.

Finally, the controlled nature of the experimental task could have influenced the generalizability of our findings.
Although free-viewing tasks are common when examining how individuals allocate their attention to images (e.g., Armstrong & Olatunji, 2012; Frank, Vul, & Saxe, 2012; Judd, Ehinger, Durand, & Torralba, 2009), this paradigm may not adequately represent the activities performed by people with TBI when they search, identify, and reference photos during actual communication interactions. Future researchers need to examine how individuals with TBI allocate their visual attention to photographs when engaged in functional communication tasks to further inform the image preparation and selection process.
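For readers who wish to derive comparable gaze measures from their own eye-tracking exports, the following minimal sketch illustrates how the three dependent measures discussed above could be computed from a list of fixations. This is not the analysis code used in the study: the fixation record, the rectangular AOI representation, and the treatment of the relative score as the proportion of total fixation time falling within an AOI are all simplifying assumptions made for illustration.

```python
# Illustrative sketch only (not the study's analysis pipeline). Assumes
# fixations have already been extracted from raw gaze samples and that
# each AOI can be approximated by an axis-aligned rectangle in pixels.
from dataclasses import dataclass
from typing import List, Optional, Tuple

AOI = Tuple[float, float, float, float]  # (left, top, right, bottom)

@dataclass
class Fixation:
    onset_ms: float     # time from stimulus onset to start of this fixation
    duration_ms: float  # length of this fixation
    x: float            # fixation centroid, screen pixels
    y: float

def in_aoi(fix: Fixation, aoi: AOI) -> bool:
    left, top, right, bottom = aoi
    return left <= fix.x <= right and top <= fix.y <= bottom

def latency_to_first_fixation(fixes: List[Fixation], aoi: AOI) -> Optional[float]:
    """Onset of the earliest fixation landing in the AOI; None if never fixated."""
    hits = [f.onset_ms for f in fixes if in_aoi(f, aoi)]
    return min(hits) if hits else None

def average_fixation_duration(fixes: List[Fixation], aoi: AOI) -> float:
    """Mean duration of the fixations falling within the AOI."""
    durations = [f.duration_ms for f in fixes if in_aoi(f, aoi)]
    return sum(durations) / len(durations) if durations else 0.0

def relative_score(fixes: List[Fixation], aoi: AOI) -> float:
    """Share of the trial's total fixation time spent within the AOI
    (used here as a simplified stand-in for the domain relative score)."""
    total = sum(f.duration_ms for f in fixes)
    in_time = sum(f.duration_ms for f in fixes if in_aoi(f, aoi))
    return in_time / total if total else 0.0

# Hypothetical object AOI in a task-engaged photograph, with two fixations.
object_aoi: AOI = (400, 300, 700, 550)
trial = [Fixation(180, 220, 512, 400), Fixation(430, 310, 120, 90)]
print(latency_to_first_fixation(trial, object_aoi))  # 180.0
print(average_fixation_duration(trial, object_aoi))  # 220.0
print(relative_score(trial, object_aoi))             # ~0.415
```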

4.6. Conclusions

Results of this preliminary investigation indicate that individuals with TBI respond to task-engagement cues depicted in contextual photographs in a manner similar to their peers without neurological conditions. These results are relevant to SLPs and other rehabilitation professionals who select images for the cognitive and communication supports of individuals with TBI. Specifically, the guiding effect of task-engagement cues could prove beneficial when prompting individuals with TBI to attend to important photo regions that may otherwise go unnoticed. Further research is necessary to assess whether individuals’ interpretations of photographs are modified as a result of their responses to the engagement cues presented in those photographs.

Declaration of interest

The authors report no conflicts of interest.

Acknowledgements

The contents of this paper were developed under a grant from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR grant #90RE5017-02-01). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The contents of this paper do not necessarily represent the policy of NIDILRR, ACL, or HHS, and you should not assume endorsement by the Federal Government. The authors wish to thank the residents and staff at QLI in Omaha, Nebraska for their participation in the research activities and Tobii Technologies for their support. The authors report no conflicts of interest and are solely responsible for the content and writing of the article.

References

Abdi, H., & Williams, L. J. (2010). Tukey’s honestly significant difference (HSD) test. In Encyclopedia of research design (pp. 1–5). Thousand Oaks, CA: Sage.
Antes, J. R. (1974). The time course of picture viewing. Journal of Experimental Psychology, 103, 62–70.
Armstrong, T., & Olatunji, B. O. (2012). Eye tracking of attention in the affective disorders: A meta-analytic review and synthesis. Clinical Psychology Review, 32(8), 704–723.
Ben-David, B. M., Nguyen, L. L., & van Lieshout, P. H. (2011). Stroop effects in persons with traumatic brain injury: Selective attention, speed of processing, or color-naming? A meta-analysis. Journal of the International Neuropsychological Society, 17(2), 354–363.
Biederman, I., Mezzanotte, R. J., & Rabinowitz, J. C. (1982). Scene perception: Detecting and judging objects undergoing relational violations. Cognitive Psychology, 14(2), 143–177.
Biederman, I. (1981). Do background depth gradients facilitate object identification? Perception, 10(5), 573–578.
Brown, J., Thiessen, A., Beukelman, D., & Hux, K. (2015). Noun representation in AAC grid displays: Visual attention patterns of people with traumatic brain injury. Augmentative and Alternative Communication, 31, 15–26.
Brown, J. A., Hux, K., Knollman-Porter, K., & Wallace, S. E. (2016). Use of visual cues by adults with traumatic brain injuries to interpret explicit and inferential information. The Journal of Head Trauma Rehabilitation, 31(3), 32–41.
Buswell, G. T. (1935). How people look at pictures: A study of the psychology of perception in art. Chicago: University of Chicago Press.
Carlomagno, S., Giannotti, S., Vorano, L., & Marini, A. (2011). Discourse information content in non-aphasic adults with brain injury: A pilot study. Brain Injury, 25(10), 1010–1018.
De Graef, P., Christiaens, D., & d’Ydewalle, G. (1990). Perceptual effects of scene context on object identification. Psychological Research, 52(4), 317–329.
Duchowski, A. T. (2002). A breadth-first survey of eye-tracking applications. Behavior Research Methods, Instruments, & Computers, 34(4), 455–470.
Evans, K., & Hux, K. (2011). Comprehension of indirect requests by adults with severe traumatic brain injury: Contributions of gestural and verbal information. Brain Injury, 25, 767–776.
Fitts, P. M., Jones, R. E., & Milton, J. L. (1950). Eye movements of aircraft pilots during instrument landing approaches. Aeronautical Engineering Review, 9, 24–29.
Fletcher-Watson, S., Findlay, J. M., Leekam, S. R., & Benson, V. (2008). Rapid detection of person information in a naturalistic scene. Perception, 37, 571–583.
Fortuny, L. A., Briggs, M., Newcombe, F., Ratcliff, G., & Thomas, C. (1980). Measuring the duration of post traumatic amnesia. Journal of Neurology, Neurosurgery, and Psychiatry, 43, 377–379.
Frank, M. C., Vul, E., & Saxe, R. (2012). Measuring the development of social attention using free-viewing. Infancy, 17(4), 355–375.
Goldberg, J. H., & Kotval, X. P. (1999). Computer interface evaluation using eye movements: Methods and constructs. International Journal of Industrial Ergonomics, 24(6), 631–645.
Greenhouse, S. W., & Geisser, S. (1959). On methods in the analysis of profile data. Psychometrika, 24(2), 95–112.
Hanson, E. K., Beukelman, D. R., & Yorkston, K. M. (2013). Communication support through multimodal supplementation: A scoping review. Augmentative and Alternative Communication, 29, 310–321.
Henderson, J. H., Weeks, P. A., & Hollingsworth, A. (1999). The effects of semantic consistency on eye movements during complex scene viewing. Journal of Experimental Psychology: Human Perception and Performance, 25, 210–228.
Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. (2011). Eye tracking: A comprehensive guide to methods and measures. Oxford, UK: Oxford University Press.
Hornof, A. J., & Halverson, T. (2002). Cleaning up systematic error in eye-tracking data by using required fixation locations. Behavior Research Methods, Instruments, and Computers, 34, 592–604.
Hux, K., Wallace, S. E., Evans, K., & Snell, J. (2008). Performing cookie theft picture content analyses to delineate cognitive-communication impairments. Journal of Medical Speech-Language Pathology, 16(2), 83–103.
Hux, K., Buechter, M., Wallace, S., & Weissling, K. (2010). Using visual scene displays to create a shared communication space for a person with aphasia. Aphasiology, 24(5), 643–660.
Jacob, R. J. K., & Karn, K. S. (2003). Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind’s eye: Cognitive and applied aspects of eye movement research (pp. 573–605). Oxford, England: Elsevier Science.
Judd, T., Ehinger, K., Durand, F., & Torralba, A. (2009). Learning to predict where humans look. In 2009 IEEE 12th International Conference on Computer Vision (pp. 2106–2113).
Just, M. A., & Carpenter, P. A. (1976). Eye fixations and cognitive processes. Cognitive Psychology, 8(4), 441–480.
Lezak, M. D. (1995). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.
Loftus, G. R., & Mackworth, N. H. (1978). Cognitive determinants of fixation location during picture viewing. Journal of Experimental Psychology, 4, 565–572.
Mackworth, N. H., & Morandi, A. J. (1967). The gaze selects informative details within pictures. Perception and Psychophysics, 2, 547–552.
Malec, J. F., & Lezak, M. D. (2008). Manual for the Mayo-Portland Adaptability Inventory (MPAI-4) for adults, children and adolescents. Santa Clara, CA: The Center for Outcome Measurement in Brain Injury.
Manor, B. R., & Gordon, E. (2003). Defining the temporal threshold for ocular fixation in free-viewing visuocognitive tasks. Journal of Neuroscience Methods, 128(1), 85–93.
Mauchly, J. W. (1940). Significance test for sphericity of a normal n-variate distribution. The Annals of Mathematical Statistics, 11(2), 204–209.
McKelvey, M., Hux, K., Dietz, A., & Beukelman, D. R. (2010). Impact of personal relevance and contextualization on word-picture matching by people with aphasia. American Journal of Speech-Language Pathology, 19, 22–33.
Milders, M., Fuchs, S., & Crawford, J. R. (2003). Neuropsychological impairments and changes in emotional and social behaviour following severe traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 25(2), 157–172.
Neider, M. B., & Zelinsky, G. J. (2006). Scene context guides eye movements during visual search. Vision Research, 46(5), 614–621.
Poole, A., & Ball, L. J. (2006). Eye tracking in human-computer interaction and usability research. In C. Ghaoui (Ed.), Encyclopedia of human computer interaction (pp. 211–219). Pennsylvania: Idea Group, Inc.
Rayner, K. (2009). Eye movements and attention in reading, scene perception, and visual search. The Quarterly Journal of Experimental Psychology, 62(8), 1457–1506.
Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21–33.
Rousseaux, M., Vérigneaux, C., & Kozlowski, O. (2010). An analysis of communication in conversation after severe traumatic brain injury. European Journal of Neurology, 17(7), 922–929.
Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3/4), 591–611.
Thiessen, A., Beukelman, D., Ullman, C., & Longenecker, M. (2014). Measurement of the visual attention patterns of people with aphasia: A preliminary investigation of two types of human engagement in photographic images. Augmentative and Alternative Communication, 30, 120–129.
Thiessen, A., Beukelman, D. R., Hux, K., & Longenecker, M. (2016). A comparison of the visual attention patterns of people with aphasia and adults without neurological conditions for camera-engaged and task-engaged visual scenes. Journal of Speech, Language, and Hearing Research, 60, 1–12.
Thiessen, A., Brown, J., Beukelman, D., Hux, K., & Myers, A. (2017). Effect of message type on the visual attention of adults with traumatic brain injury. American Journal of Speech-Language Pathology, 26(2), 428–442.
Tobii Technology (2011). Accuracy and precision test method for remote eye trackers. Retrieved from http://www.tobiipro.com/siteassets/tobii-pro/accuracy-and-precision-tests/tobii-accuracy-and-precision-test-method-version-2-1-1.pdf
Vas, A. K., Chapman, S. B., Cook, L. G., Elliott, A. C., & Keebler, M. (2011). Higher-order reasoning training years after traumatic brain injury in adults. The Journal of Head Trauma Rehabilitation, 26(3), 224–239.
Wallace, S. E., Hux, K., & Beukelman, D. R. (2010). Navigation of a dynamic screen AAC interface by survivors of severe traumatic brain injury. Augmentative and Alternative Communication, 26, 242–254.
Wallace, S. E. (2010). AAC use by people with TBI: Affects of cognitive impairments. Perspectives on Augmentative and Alternative Communication, 19(3), 79–86.
Wilkinson, K. M., & Light, J. (2011). Preliminary investigation of visual attention to human figures in photographs: Potential considerations for the design of aided AAC visual scene displays. Journal of Speech, Language, and Hearing Research, 54, 1644–1657.
Wilkinson, K. M., & Light, J. (2014). Preliminary study of gaze toward humans in photographs by individuals with autism, Down syndrome, or other intellectual disabilities: Implications for design of visual scene displays. Augmentative and Alternative Communication, 30, 130–146.
Wilkinson, K. M., O’Neill, T., & McIlvane, W. J. (2014). Eye-tracking measures reveal how changes in the design of aided AAC displays influence the efficiency of locating symbols by school-age children without disabilities. Journal of Speech, Language, and Hearing Research, 57, 455–466.
Yarbus, A. (1967). Eye movements and vision. New York: Plenum Press.
Ziino, C., & Ponsford, J. (2006). Selective attention deficits and subjective fatigue following traumatic brain injury. Neuropsychology, 20(3), 383–390.
