Computers in Human Behavior 80 (2018) 331–343
Full length article
Navigation in virtual environments using head-mounted displays: Allocentric vs. egocentric behaviors
Hadziq Fabroyir, Wei-Chung Teng*
Article history: Received 6 May 2017; Received in revised form 13 November 2017; Accepted 21 November 2017; Available online 24 November 2017

Abstract
User behaviors while navigating virtual environments (VEs) using head-mounted displays (HMDs) were investigated. In particular, spatial behaviors were observed and analyzed with respect to virtual navigation preferences and performance. For this, two distinct navigation strategies applying allocentric and egocentric spatial perspectives were used. Participants utilized two different user interfaces (i.e., a multitouch screen and a gamepad) to employ the aforementioned strategies in a series of rotation, surge motion, and navigation tasks. Two allocentric and two egocentric metaphors for motion techniques (digital map, canoe paddle, steering wheel, and wheelchair) were established. User preferences for these motion techniques across the tasks were then observed, and task performance on the two given interfaces was compared. Results showed that the participants preferred to apply egocentric techniques to orient and move within the environment. The results also demonstrated that the participants performed faster and were less prone to errors while using the gamepad, which embodies egocentric navigation. Results from workload measurements with the NASA-TLX and a brain-computer interface showed the gamepad to be superior to the multitouch screen. The relationships among spatial behaviors (i.e., allocentric and egocentric behaviors), gender, video gaming experience, and user interfaces in virtual navigation were also examined. It was found that female participants tended to navigate the VE allocentrically, while male participants were likely to navigate the VE egocentrically, especially while using a non-natural user interface such as the gamepad. © 2017 Elsevier Ltd. All rights reserved.
Keywords: Virtual reality; HMD; Spatial behavior; Virtual navigation; NASA-TLX; EEG
1. Introduction

Virtual environments (VEs), also known as virtual reality (VR), have gained much popularity and been employed in various applications. Many of these applications allow users to travel in the environment. Applications such as touring applications, 3D adventure games, and teleoperation systems employ specific user interfaces (UIs) to enable users to navigate their environments. For instance, a touring application such as Google Street View features click-to-go for transporting users virtually to any point within the view (Anguelov et al., 2010), while 3D adventure games and teleoperation systems utilize various user interfaces, such as a mouse, keyboard, joystick, or gamepad, for virtual navigation. It is also possible to leverage head motion as a user interface in head-mounted display (HMD) teleoperation systems (Martins & Ventura, 2009).
* Corresponding author. Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 43, Section 4, Keelung Road, Taipei, 106, Taiwan, ROC. E-mail addresses: [email protected] (H. Fabroyir), [email protected] (W.-C. Teng).
The combination of VEs with HMDs has developed over the last two decades. Myriad industries have been adopting VE technology with HMDs for their needs (Berg & Vance, 2017), and HMDs are being manufactured and distributed to consumers at affordable costs. This technology is likely to be ready for mass-market adoption soon. This adoption will require further studies in cognitive psychology, particularly regarding the presentation of an intuitive user interface for VE navigation that can also benefit users' spatial cognition.

1.1. Aims of study

A previous study modeled the concept of a traveler holding a paper map as a metaphor for VE navigation (Fabroyir, Teng, Wang, & Tara, 2013, 2014). The concept adopted a proposal from earlier research (Darken & Cevik, 1999; Pausch, Burnette, Brockway, & Weiblen, 1995), except that it was implemented in a non-HMD VE system and physically split across two separate views. One of the views was displayed on a multitouch screen, and the other was presented on a curved monitor. In addition, the system supplied a user interface
overlaid on the multitouch map view to assist the physical (motion) element of navigation. In the previous study, the user interface was a control variable: the users could only apply the motion techniques that had been specified. Hence, it seemed worthwhile to reproduce the research as a pilot experiment to see how users would behave in VE navigation if the techniques were not specified. In the experiment, the users were instructed to perform any gestures on the multitouch screen to move forward in the VE. Surprisingly, some of them used two fingers, swiping in opposite directions on the screen. This gesture behavior represented the users leveraging their allocentric perspective to shift the map view backward in order to move the VE viewpoint forward. However, when the users were instructed to perform rotation in the VE, they executed either steering motion gestures or typical rotation gestures on the multitouch screen.

The phenomenon observed during the pilot experiment suggested a need for a deeper investigation into users' spatial behaviors, especially in HMD VE navigation. Unlike a desktop, curved, or large projection screen, an HMD occupies the entire visual field of the user; thus, it appeared important to determine whether the allocentric perspective would dominate when there was no way for the users to perceive the physical map view visually. Along with the multitouch screen from the previous study, a gamepad was added as an alternative interface. In contrast to the multitouch screen, the gamepad is commonly used as an egocentric navigation interface for controlling the movement of avatars in virtual environments. The aim of this addition was to observe users' spatial behavior preferences and navigation performance on these disparate user interfaces. Notably, no motion controllers were incorporated in the current study, so as to preserve the metaphor of holding a paper map from real-world navigation: while holding a map, the gap between the user's hands stays fixed, and motion controllers would break this metaphor.

In summary, the research question of this study was "How do users behave and perform while navigating VEs on HMDs?" This question was further detailed as follows.
(a) Do users behave more allocentrically or egocentrically while navigating VEs on HMDs?
(b) How does user performance in VE navigation vary across spatial behaviors? Which spatial behavior results in better performance in terms of time, error, and workload?
(c) Does spatial behavior correlate with users' gender, video gaming experience, and attitude towards the user interfaces (e.g., multitouch screen, gamepad)?
1.2. Allocentric vs. egocentric navigation

Basically, navigation is a combination of mental (i.e., wayfinding) and physical (i.e., motion) elements (Darken & Peterson, 2014, ch. 19). Both elements should be considered carefully in the construction of a user interface for VE navigation. Furthermore, this construction should consider spatial representation, namely the allocentric and egocentric representations shown in Fig. 1, so that users may leverage the interface to improve their navigation performance and spatial cognition. To illustrate, the mapping applications used in the aforementioned pilot experiment leveraged allocentric spatial representation for navigation. The pilot participants navigated the applications by means of object-to-object perspective transformations (see Fig. 1a) such as map panning, zooming, and rotation (Klatzky, 1998; Kozhevnikov, Motes, Rasch, & Blajenkova, 2006; Münzer & Zadeh, 2016).
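To make the distinction concrete, the following sketch (ours, not part of the original system; a 2-D simplification with invented names) converts an object's allocentric, world-frame coordinates into the viewer's egocentric frame:

```typescript
// Allocentric coding: positions in a fixed world frame (object-to-object).
// Egocentric coding: positions relative to the viewer (self-to-object).

interface Vec2 { x: number; y: number; }

interface Viewer {
  position: Vec2;     // viewer location in the world frame
  headingRad: number; // heading in radians; 0 = facing world north (+y)
}

// Translate by the viewer's position, then rotate by the negative heading
// so that "straight ahead" becomes the egocentric +y axis.
function toEgocentric(worldPoint: Vec2, viewer: Viewer): Vec2 {
  const dx = worldPoint.x - viewer.position.x;
  const dy = worldPoint.y - viewer.position.y;
  const c = Math.cos(-viewer.headingRad);
  const s = Math.sin(-viewer.headingRad);
  return { x: c * dx - s * dy, y: s * dx + c * dy };
}

// A landmark due east of a north-facing viewer lies to the viewer's right:
const viewer: Viewer = { position: { x: 0, y: 0 }, headingRad: 0 };
console.log(toEgocentric({ x: 10, y: 0 }, viewer)); // { x: 10, y: 0 }
```

A track-up map display performs the inverse alignment: it rotates the map by the negative heading so that the egocentric forward direction always points up, which is why the mental rotation cost discussed in section 1.3 stays minimal.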
Fig. 1. Spatial representation in navigation: (a) allocentric navigation locates objects with respect to other objects (i.e., object-to-object), (b) egocentric navigation locates objects relative to the self (i.e., self-to-object).
In addition, the applications also supported egocentric representation for navigation. Fig. 1b illustrates how egocentric representation leverages self-to-object spatial coding to perform navigation. In fact, the interface in the pilot experiment allowed the users to mimic steering motion gestures, reflecting the egocentric behavior of a car driver (Fabroyir et al., 2013).

1.3. Related work

The way people navigate the real world is applicable to VEs. There are several design principles for VE navigation based on cognitive psychology and environmental methodologies (Darken & Sibert, 1996). One of the principles is the presence of a virtual map. For users to navigate the VE effectively, the map orientation should be congruent with the environment, that is, parallel to the floor rather than in a vertical position. According to the real-world metaphor, this principle mimics travelers holding paper maps in front of their bodies (Fabroyir et al., 2014). Moreover, the virtual map should always show a viewpoint or a you-are-here (YAH) marker whose position changes dynamically across the map. As users travel through the VE, this viewpoint minimizes the positional transformation between egocentric and allocentric frames of reference (Darken & Cevik, 1999). Users need to perform mental rotations continuously during navigation to align these two frames for map localization, but the cost of this operation is minimal as long as the map and the viewpoint are in a track-up arrangement (Aretz & Wickens, 1992).

Mental rotation is important for establishing navigation awareness. The performance of mental rotation and navigation relies on the individual's spatial ability, which varies from one user to another. Some studies have examined whether this performance is gender-specific, and their findings have favored males over females (Castelli, Latini Corazzini, & Geminiani, 2008; Lawton, 1994; Lawton, 2010; Saucier et al., 2002). This performance difference across gender, which emerges at early ages (Merrill, Yang, Roskos, & Steele, 2016), influences users' strategies for navigating VEs. Users with good spatial ability encode landmarks and routes to build a mental representation of the VE directly, whereas users who lack this ability have to depend on verbal strategies (Wen, Ishikawa, & Sato, 2013).

Previous researchers developed systems to examine navigation strategies in VEs. Some of the studies set up the systems on desktop displays and together yielded a number of findings. First, the mouse was the best one-handed interface for navigating desktop VEs (Lapointe, Savard, & Vinson, 2011). Second, more experience in gaming led to superior performance in virtual navigation (Murias, Kwok, Castillejo, Liu, & Iaria, 2016). Third, both egocentric and allocentric views of the VE should be available for users as self-directed learning aids (Münzer & Zadeh, 2016). The first
view accommodated local-scope navigation, whereas the second view facilitated far-space navigation (Brunyé, Gardony, Mahoney, & Taylor, 2012). Fourth, navigation performance was no better with wide field-of-view (FoV) displays than with medium-FoV displays, although this last finding prompted extra observations on different display form factors (e.g., curved or vertical FoV) (Richardson & Collaer, 2011).

Other studies utilized more immersive displays to investigate navigation behaviors in VEs further. One such study employed a large projection screen and a joystick to analyze factors affecting VE navigation. The analysis revealed that besides navigation strategies, personality and computer proficiency also affected users' performance in navigation (Walkowiak, Foulsham, & Eardley, 2015). Other researchers offered greater immersion in VEs by making use of HMDs in their systems, commonly equipped with a joystick or gamepad as the user interface (Henriksen & Midtbø, 2015; Richardson, Powers, & Bousquet, 2011). Still others presented head motion gestures as the navigation technique. Although this technique was more intuitive and yielded fewer errors, it was rather impractical (Martins & Ventura, 2009).

2. Material and methods

2.1. Participants

Forty graduate students from various departments were recruited by advertising on social media. They consisted of twenty males and twenty females ranging from 22 to 42 years old (M = 25.18, SD = 3.59). Twenty-three participants had normal vision, and the rest were nearsighted. The apparatus was adjusted so that differences in visual acuity would not compromise performance. In terms of experience with electronic systems, eighty-five percent of the participants were comfortable with multitouch devices. Nine of them had never held a gamepad, twelve claimed to be gamepad experts, and the rest had operated a gamepad before but were not well-versed in its use. Five of them had never played any video games, half played video games less than 1 h daily, and the rest spent more than 1 h every day on video gaming. Among all the participants, nine had previously worn virtual reality (VR) displays (i.e., HMDs), and six had done so more than twice.
2.2. Apparatus

Virtual environment. The virtual environment (VE) was an urban touring system. The system comprised a street view running on an iMac (21.5-inch, late 2013) and a map view running on an iOS device. As shown in Fig. 2, the iMac was coupled with an HMD (i.e., an Oculus Development Kit 2), which displayed the same street view as presented on the iMac. The street view was developed for this study as a web application based on WebVR technology, and the map view was written in Swift as an iOS application. The map view included a viewpoint located in the center, applying track-up alignment (Darken & Cevik, 1999; Fabroyir et al., 2014). Data for establishing both the map and the street views were requested from the Internet through the Google Maps APIs.

User interfaces. Two types of user interfaces were provided for the experiment. The first was a multitouch screen, specifically an iPad Mini 3, which provided a map view for the participants and received motion inputs from them. The second was a gamepad, a FlashFire Bluetooth ACTION PAD BT-3000, which acquired motion inputs from the participants. Attached to the top of the gamepad was an iPhone 5s presenting the map view.

EEG headband. Muse, a non-intrusive brain-sensing headband, was used to measure brainwaves through a series of electroencephalography (EEG) sensors. The headband was also able to detect muscle activity such as eye blinks and jaw clenches. The participants wore the headband along with the HMD during the experiment. The iMac received the brainwave and muscle activity data via Bluetooth and logged them for further analysis.

WebSocket interconnections. WebSocket technology was used to link the map and the street views. A protocol was developed specifically to sync the viewpoint's location and heading between the map and the street views. The protocol also duplicated the map view on the iOS device and transferred it, together with the device's attitude, to the street view in real time. Additionally, the WebSocket bridged both views with an observer app, which was built to control and monitor tasks in the experiment. The participants could not execute any tasks until the app allowed them to begin.
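The paper does not publish the wire format of this protocol; the snippet below is a minimal sketch of how such a WebSocket sync might look, with the endpoint, message shape, and updateStreetView callback all being hypothetical names of ours:

```typescript
// Hypothetical sync message; field names are assumptions, not the authors' protocol.
interface ViewpointSync {
  type: "viewpoint";
  lat: number;        // viewpoint latitude
  lng: number;        // viewpoint longitude
  headingDeg: number; // viewpoint heading in degrees
}

const socket = new WebSocket("ws://observer.local:8080"); // placeholder endpoint

// Map-view side: push the viewpoint whenever a gesture moves or rotates it.
function sendViewpoint(lat: number, lng: number, headingDeg: number): void {
  const msg: ViewpointSync = { type: "viewpoint", lat, lng, headingDeg };
  socket.send(JSON.stringify(msg));
}

// Street-view side: apply incoming updates so both views stay in lockstep.
declare function updateStreetView(lat: number, lng: number, headingDeg: number): void;

socket.onmessage = (event: MessageEvent) => {
  const msg = JSON.parse(event.data as string) as ViewpointSync;
  if (msg.type === "viewpoint") {
    updateStreetView(msg.lat, msg.lng, msg.headingDeg);
  }
};
```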
Fig. 2. System architecture including the interconnections through the Internet (green), Bluetooth (blue), and WebSocket (orange). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 3. Rotation techniques on the interfaces: (a) allocentric (i.e., digital map), (b) egocentric (i.e., steering wheel).
2.3. Motion techniques

In the experiment, observation of motion techniques focused on two degrees of freedom: rotational motion around the vertical axis (i.e., yaw) and linear longitudinal motion (i.e., forward/backward, or surge). Regardless of interface type, participants were required to use two fingers (thumbs preferred) for both motions. The interfaces required different techniques due to their specific natures: on the multitouch screen, the gestures of both fingers were interpreted as input commands, whereas on the gamepad, the two joysticks were used for the gestures, as shown in Figs. 3 and 4.

The rotation task featured two possible gestures, referred to by their metaphors: digital map and steering wheel. The digital map gesture takes as its metaphor the way pedestrians rotate maps on their smartphones or tablets. The gesture uses allocentric navigation to orient the map and the VE. Fig. 3a demonstrates the gesture on the two interfaces. The steering wheel gesture behaves in the opposite way: it depicts a driver's behavior in steering a car and employs egocentric navigation to orient the viewpoint on the map view and in the VE. Fig. 3b illustrates the gesture on both the multitouch screen and the gamepad.

The surge motion task involved two viable gestures, designated canoe paddle and wheelchair. Metaphorically, in the first gesture, the fingers move like the hands while paddling a canoe. This metaphor applies allocentric navigation to shift the map and the VE, as shown in Fig. 4a. In the second gesture, both fingers mimic hand propulsion of a wheelchair. This gesture employs egocentric navigation to move the viewpoint, as demonstrated in Fig. 4b.
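As a rough illustration of how the four metaphors can be distinguished from raw input, the sketch below classifies a two-finger sample by its rotation or travel direction. The sign conventions and the threshold-free logic are our assumptions for illustration, not the study's implementation:

```typescript
type Metaphor = "digital map" | "steering wheel" | "canoe paddle" | "wheelchair";

interface TwoFingerSample {
  rotationDeg: number;  // signed rotation of the finger pair (+ = clockwise)
  translationY: number; // mean finger travel (+ = away from the user)
}

// For an instructed right turn, rotating the pair clockwise matches turning a
// steering wheel (egocentric), while rotating counter-clockwise drags the map
// around the viewpoint (allocentric digital map). For an instructed forward
// move, pushing away mimics wheelchair rims (egocentric), while pulling toward
// the body strokes the map past the viewpoint like a paddle (allocentric).
function classify(
  sample: TwoFingerSample,
  instructed: "turnRight" | "moveForward"
): Metaphor {
  if (instructed === "turnRight") {
    return sample.rotationDeg > 0 ? "steering wheel" : "digital map";
  }
  return sample.translationY > 0 ? "wheelchair" : "canoe paddle";
}
```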
2.4. Task

Table 1 lists the specified tasks by their task conditions and detailed instructions. There were three different task conditions (1–3) covering ten rotation tasks (1.1–1.4, 2.1–2.4, 3.1, 3.2) and two surge motion tasks (2.5, 3.3) for the multitouch screen and the gamepad. The conditions determined where participants should focus their attention, as illustrated in Fig. 5. All the above tasks were demonstrated in the video material.

Participants had to finish a series of navigation tasks for each interface. The tasks consisted of multiple rotation, surge motion, and navigation tasks. Although the tasks varied, they shared the same objective: to bring the target ball in the VE, which appeared as a circle on the map, in front of the viewpoint (see Fig. 6). Once the participants achieved the objective, a notification text (e.g., "ball is caught") appeared in both the map and the street views.

In addition to the motion tasks, the participants also completed two navigation tasks within the VE. Each task featured two balls, one at the start and one at the end position. In the first task, the participants were provided with the map view. They used the map to track the balls' positions and memorize the directions. They were notified (e.g., the ball appeared) when they reached the end position. Afterwards, they ended the task by orienting the VE and locating the ball in the center of the view. The second task required the participants to perform the same navigation task and directions, but with the map view hidden, so that they had to recall the directions to reach the objective.

2.5. Measurements

To evaluate user preferences, two dependent variables were assigned. The first was the number of movements using the steering wheel technique executed by the participants in the rotation tasks. The second was the number of movements using the wheelchair technique executed by the participants in the surge motion tasks. User performance was also observed by measuring completion time (seconds), angular error (degrees), distance error (steps), and workload. The workload measurement was based on the NASA-TLX (Hart, 2006). The measurements were later compared with the brainwave data, namely, the alpha wave (Smith, Gevins, Brown, Karnik, & Du, 2001), and muscle activities, specifically, eye blink (Faure, Lobjois, & Benguigui, 2016; Iwanaga, Saito, Shimomura, Harada, & Katsuura, 2000; Rendon-Velez et al., 2016; Rosenfield, Jahan, Nunez, & Chan, 2015; Veltman & Gaillard, 1996; Zheng et al., 2012)

Table 1
Motion instructions grouped by task condition.
Fig. 4. Surge motion techniques on the interfaces: (a) allocentric (i.e., canoe paddle), (b) egocentric (i.e., wheelchair).
Task  Instruction
1     Look at the map view on the interface
1.1   Rotate the map, face the circle on the right
1.2   Rotate the map, face the circle on the left
1.3   Turn right 90°, face the circle
1.4   Turn left 90°, face the circle
2     Look at the map view within the HMD
2.1   Rotate the map, face the circle on the right
2.2   Rotate the map, face the circle on the left
2.3   Turn right 90°, face the circle
2.4   Turn left 90°, face the circle
2.5   Move forward ten steps, then face the circle
3     Look at the street view within the HMD
3.1   Turn right 90°, face the ball
3.2   Turn left 90°, face the ball
3.3   Move forward ten steps, then face the ball
Fig. 5. Task conditions: (a) Look at the map view on the physical user interfaces (e.g., iPhone, iPad), (b) Look at the map view in the virtual environments, (c) Look at the street view in the virtual environments.
Fig. 6. "Ball is caught" indicating task completion.
and jaw clench (Huang, Chou, Chen, & Chiou, 2014; de Morree & Marcora, 2010), from the Muse headband (Abujelala, Abellanoza, Sharma, & Makedon, 2016; Wiechert et al., 2016). In particular, the alpha (α) values presented throughout the experimental results were produced by subtracting the average alpha absolute band power recorded while the participants were relaxing from the average alpha absolute band power recorded while they were using the interfaces in the navigation tasks.
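This alpha measure reduces to a single subtraction over two sample averages; a minimal sketch (function and variable names are ours):

```typescript
// Average of a list of absolute alpha band power samples (in Bels).
function mean(samples: number[]): number {
  return samples.reduce((sum, v) => sum + v, 0) / samples.length;
}

// Alpha value as described above: task-time average minus the eyes-closed
// resting baseline. Negative values mean less alpha power than at rest.
function alphaValue(taskAlpha: number[], baselineAlpha: number[]): number {
  return mean(taskAlpha) - mean(baselineAlpha);
}
```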
2.6. Design

A within-subjects design was used and counterbalanced by separating the participants into four groups: A, B, C, and D, with each group consisting of the same number of males and females. To nullify learning effects due to the user interfaces, groups A and C initially performed the tasks using the gamepad; conversely, groups B and D first completed the tasks using the multitouch screen. The condition in the motion tasks was also considered a confounding factor for user preferences. Therefore, groups A and B began from task condition 3 (i.e., looking at the street view in the HMD environment), and groups C and D began with task condition 1 (i.e., looking at the map view directly on the interfaces).

2.7. Procedure

Participants completed a consent form along with an online questionnaire on their personal information and related experience prior to the experiment. Important questions included comfort with multitouch devices, experience in using gamepads, and frequency of video gaming. On the day of the experiment, they first wore the Muse headband and closed their eyes while they relaxed for 2 min. Meanwhile, brainwave baseline data were collected by capturing their alpha absolute band power. Subsequently, the virtual environment, user interfaces, and HMD system were introduced to them, followed by the task objective. They were instructed to use only two fingers (e.g., their thumbs) on the user interfaces. They also viewed a demonstration of the four types of finger movements allowed on the interfaces, as illustrated in Figs. 3 and 4. However, the effects of the movements on the system navigation were not demonstrated, so as to avoid influencing their later technique preferences.

The experiment was carried out with the task conditions and user interface sequences according to the aforementioned task and design. All trials were video-recorded. In each task, the system automatically logged the completion time, angular error, total steps, brain waves, muscle activities, and technique preferences. The system recorded the angular error when the viewpoint was idle within the tolerance range of forty degrees around the target. The distance error was calculated by subtracting the least possible steps for the task from the total steps taken by the participant. To avoid VR sickness, the participants were allowed a 1-min break after each task condition. After the experiment with each user interface was completed, participants filled out the NASA-TLX form.

The technique preferences were detected with a specific strategy. For example, in task 1.3 (see Table 1), the participants were instructed to turn right, which would produce two effects at the same time: the map would rotate counter-clockwise (Fig. 3a), and the viewpoint would pivot oppositely against the map (Fig. 3b). Since the effects were fixed, the system only needed to detect the participant's initial gesture, either the steering wheel (Fig. 3b) or the digital map (Fig. 3a) technique, and register that gesture as the user's rotation preference. A similar strategy was applied to detect the technique preference in the surge motion tasks: either the canoe paddle (Fig. 4a) or the wheelchair (Fig. 4b) metaphor. A sketch of this detection logic is given below.
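A minimal sketch of that logic, assuming hypothetical gesture-recognition callbacks and our own names throughout:

```typescript
type Technique = "steering wheel" | "digital map" | "canoe paddle" | "wheelchair";

// Only the participant's first recognized gesture per task is diagnostic,
// because both techniques converge to the same final map/viewpoint state.
const preferences = new Map<string, Technique>(); // taskId -> first technique

function onGestureRecognized(taskId: string, technique: Technique): void {
  if (!preferences.has(taskId)) {
    preferences.set(taskId, technique); // register; ignore later gestures
  }
}

// Distance error as described above: steps taken minus the task's minimum.
function distanceError(totalSteps: number, leastPossibleSteps: number): number {
  return totalSteps - leastPossibleSteps;
}
```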
3. Results and discussion

3.1. Motion preferences

Tables 2 and 3 summarize the participants' preference ratios for each rotation and surge motion technique, respectively, on the unit interval (i.e., from 0 to 1) across the tasks. The tasks consisted of ten
rotation and two surge motion tasks for each type of interface. Red-shaded cells in both tables indicate that the allocentric ratio dominates its egocentric counterpart; blue-shaded cells indicate the opposite.

3.1.1. Rotation
An independent-samples two-tailed t-test was conducted to compare the participants' preference ratios for the two rotation techniques. The steering wheel technique (M = 0.71, SD = 0.36) was preferred significantly more often than the digital map technique (M = 0.29, SD = 0.38), t(78) = 10.59, p < .01. These results suggest that users preferred to behave egocentrically (e.g., like steering a car) while orienting map views and VEs, regardless of interface type.

3.1.2. Surge motion
Another t-test was conducted to compare the participants' preference ratios for the two surge motion techniques. The wheelchair technique (M = 0.86, SD = 0.34) was favored significantly more than the canoe paddle technique (M = 0.21, SD = 0.4), t(78) = 10.96, p < .01. Regardless of interface type, these results suggest that users also preferred to behave egocentrically (e.g., like propelling a wheelchair) while moving in VEs.

3.2. Correlation on navigation behaviors
Pearson product-moment correlation was used to analyze whether relationships existed between navigation behavior and the confounding variables: gender, interface order, viewing order, multitouch convenience, gamepad proficiency, and video gaming experience. Table 4 shows the correlation coefficients of navigation behavior across variables and interface types in the rotation and surge motion tasks.

3.2.1. Gender
Females tended to rotate allocentrically and males tended to rotate egocentrically, especially while using the gamepad (rpb = 0.286, p < .1, rb = 0.354). Furthermore, females also tended to surge allocentrically and males to surge egocentrically, regardless of interface type (rpb = 0.372, p < .05, rb = 0.46).

3.2.2. Interface order
There was no correlation between rotation preference and
interface order. However, there was a statistically significant correlation between surge motion preference and interface order regardless of interface type (rpb = 0.443, p < .1, rb = 0.548). When the participants were given the gamepad as the first controller, they surged egocentrically, carrying the egocentric behavior from the earlier gamepad tasks into the later tasks while surging in the VE using the multitouch screen.

3.2.3. Viewing order
No statistically significant correlation was found between viewing order and rotation behavior or surge motion behavior. This finding suggests that exposing the participants to an egocentric view such as a street view did not guarantee that they would behave egocentrically; likewise, exposing them to an allocentric view such as a map view did not guarantee that any allocentric behaviors would be observed.

3.2.4. Multitouch convenience
There was a significant correlation between rotation behavior and multitouch convenience regardless of interface type (rpb = 0.32, p < .05, rb = 0.484). Participants who were comfortable with multitouch technology behaved egocentrically while rotating the VE, especially while using the multitouch screen. Nevertheless, no statistically significant correlation was found between surge motion behavior and multitouch convenience.

3.2.5. Gamepad proficiency
There was no statistically significant correlation between rotation preference and the participants' experience and proficiency in using gamepads. However, there was a statistically significant correlation between surge motion behavior and gamepad proficiency (rpb = 0.305, p < .1, rb = 0.397). The participants who were gamepad experts tended to surge in the VE egocentrically regardless of interface type, indicating that the egocentric characteristics embodied by the gamepad influenced the participants to behave egocentrically while surging in the VE.

3.2.6. Video gaming frequency
There was also a statistically significant correlation between surge motion behavior and video gaming frequency (rpb = 0.317, p < .05, rb = 0.403). The participants who were frequent game players (i.e., playing video games more than 1 h daily) tended to surge in the VE egocentrically regardless of interface type. It also
Table 2 Mean preference ratios (standard deviations): allocentric vs. egocentric rotation, for each interface type, separated by participant group and gender.
Table 3 Mean preference ratios (standard deviations): allocentric vs. egocentric surge motion, for each interface type, separated by participant group and gender.
Table 4
Pearson product-moment correlations between motion preferences and gender (female = 0, male = 1), interface order (multitouch screen first = 0, gamepad first = 1), multitouch convenience (not convenient = 0, convenient = 1), gamepad proficiency (not proficient = 0, proficient = 1), and video gaming frequency (less than 1 h = 0, more than 1 h = 1).
indicated that the egocentric characteristics brought by most video games influenced the participants to behave egocentrically while surging in the VE.
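For reference, the rpb values reported throughout section 3.2 are point-biserial correlations, i.e., Pearson correlations in which one variable is dichotomous (coded 0/1 as in Table 4). With that coding, the coefficient reduces to the standard form below (notation ours):

```latex
r_{pb} = \frac{M_1 - M_0}{s_n} \sqrt{\frac{n_1 n_0}{n^2}}
```

where M1 and M0 are the mean preference ratios of the groups coded 1 and 0, n1 and n0 are the group sizes, n = n1 + n0, and s_n is the standard deviation of the preference ratios over all n participants.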
3.3. Task performance
Table 5 presents the participants' performance in the rotation and surge motion tasks with spatial behavior as the independent
variable. In addition, Table 6 sums up the participants' performance with interface type as the independent variable in the above tasks and the navigation tasks. The better performance for each independent variable on each dependent variable is marked with shading. A parametric method (i.e., Student's t-test) was employed to evaluate the significance of performance differences between the spatial behaviors and between the interface types. Each statistically significant p-value is also marked with shading.

3.3.1. Rotation
Completion time. The mean completion time of egocentric rotation was shorter than that of allocentric rotation (4.36 s < 5.21 s; t(61) = 1.76, p < .01) regardless of interface type. The mean for the gamepad was better than that for the multitouch screen (3.38 s < 5.46 s; t(78) = 7.62, p < .01). Both differences were statistically significant.

Angular error. Overall, the mean angular error of allocentric rotation was smaller than that of its egocentric counterpart (6.27° < 8.17°; t(61) = 3.94, p < .01). Furthermore, the participants rotated with less angular error while using the multitouch screen (7.71°) than while using the gamepad (7.73°), but the difference was not statistically significant.

3.3.2. Surge motion
Completion time. The difference in mean completion time between the two interfaces in the surge motion tasks was statistically significant (t(78) = 5.56, p < .01). The mean for the gamepad was 42.01 s, about 1.5 times faster than the mean of 60.66 s observed for the multitouch screen. Moreover, regardless of interface type, egocentric surge motion resulted in faster performance than its allocentric counterpart (23.85 s < 30.48 s); this difference was statistically significant (t(45) = 3.25, p < .01).

Distance error. The difference in distance error between the two spatial behaviors was not statistically significant. Nevertheless, the difference in distance error between the two interfaces was statistically significant (t(78) = 3.52, p < .01): the distance errors in the surge motion tasks were 2.63 steps for the gamepad and 8 steps for the multitouch screen.
3.3.3. Navigation
Completion time. Across the two subsequent navigation tasks, the mean completion times were 231.06 s for the multitouch screen and 206.52 s for the gamepad. In other words, the participants performed the navigation tasks faster while using the gamepad. However, the difference between these mean completion times was not statistically significant.

Distance error. The distance errors in the navigation tasks were 26.23 steps for the multitouch screen and 12.25 steps for the gamepad. The gamepad yielded less error, and the difference was statistically significant (t(78) = 4.02, p < .01).

3.4. Workload
To supplement Table 6, two more tables, Tables 7 and 8, detail the descriptive statistics of the workload measurements from the NASA-TLX and the Muse data, respectively. Specifically, Table 7 presents the NASA-TLX workload measurements overall as well as on three of its weighted subscales: mental demand, physical demand, and performance. These results were paired with the Muse results in Table 8, comprising alpha value, eye blink rate, and jaw clench rate. Shaded cells in the tables indicate the superior values across the interface types. Table 9 lists the Pearson product-moment correlations between the two sets of measurements.

3.4.1. NASA-TLX
As there was substantial variation among the results, the differences in all the following measurements were not statistically significant, except for the overall workload (t(78) = 2.33, p < .05).

Overall workload. The mean and median overall workloads for the multitouch screen were 48 and 49.5, whereas those for the gamepad were 38.4 and 37, respectively. These numbers show that the gamepad was better because it required less workload from the participants than the multitouch screen.

Mental demand. The mean and median mental demand scores for the gamepad were 127 and 106.5, whereas those for the multitouch screen were 158.6 and 125, respectively. These scores indicate that the gamepad was mentally less demanding than the multitouch screen.

Physical demand. The mean and median physical demand
Table 5 Mean scores, standard deviations, and inference test results per spatial behavior for all performance measures grouped by task and interface type.
Table 6 Mean scores, standard deviations, and inference test results per interface type for all performance measures grouped by task.
Table 7 NASA-TLX overall and subscale scores for each interface type, separated by participant group and gender.
scores for the multitouch screen were 113.4 and 75, higher than the scores of 94.6 and 59 for the gamepad. This indicates that the participants perceived the multitouch screen as physically more demanding than the gamepad.

3.4.2. Muse
As there was substantial variation among the results, the differences in all the following measurements were not statistically significant.

Alpha (α) value. The mean α value for the multitouch screen was −0.024 Bels, lower than the mean α value for the gamepad, which was 0.014 Bels. Nevertheless, the difference was not statistically significant.

Eye blink rate. The participants blinked 0.26 times per second while running the navigation tasks using the multitouch screen. This blink rate was lower than the participants' blink rate while conducting the navigation tasks using the gamepad (0.314 times per second). However, the rate difference was not statistically significant according to the Student's t-test.

Jaw clench rate. The participants clenched their jaws 0.472
times per second while running the navigation tasks using the gamepad. This jaw clench rate was higher than the participants' jaw clench rate while performing the navigation tasks using the multitouch screen (0.296 times per second). Nevertheless, the rate difference was not statistically significant in the hypothesis test.

3.5. Correlation on workload measurements
According to the Muse API reference (Muse Developers, 2015), the alpha value corresponds to the user's mellowness level. However, when the alpha value was paired with the overall workload and mental demand scores of the NASA-TLX, the Pearson product-moment correlations in Table 9 showed that the connections between these pairs were not statistically significant.

Nevertheless, based on the Pearson product-moment correlation, we found that the participants' blink rates were statistically correlated with their corresponding overall workloads (r = −0.274, p < .05) and mental demand scores (r = −0.26, p < .05) regardless of interface type. This negative correlation indicated that when the overall workload or mental demand increased, the
Table 8 Muse brainwave and muscle activity measurements for each interface type, separated by participant group and gender.
Table 9 Pearson product-moment correlation on workload measurements between the NASA-TLX and Muse data.
participants would blink less frequently.
In addition, the correlation test also revealed a small correlation between the participants' jaw clench rates and their corresponding physical demand scores on the NASA-TLX. The correlation indicated that when the physical demand of the tasks increased, the participants clenched their jaws more frequently. However, the correlation was statistically significant only while using the gamepad (r = 0.27, p < .1). This was possibly because the actuation
on the gamepad (i.e., joystick propulsion) was physically less ambiguous and resulted in more explicit haptic feedback than the actuation on the multitouch screen (i.e., finger gestures). Nevertheless, we found that the blink rate from Muse was relatively consistent with the overall workload and mental demand scores of the NASA-TLX. The jaw clench rate from Muse also matched up notably with the physical demand scores of the NASA-TLX. Hence, these results uphold the findings of the previous
related studies (Huang et al., 2014; Iwanaga et al., 2000; Rendon-Velez et al., 2016; Rosenfield et al., 2015; Veltman & Gaillard, 1996; Zheng et al., 2012; de Morree & Marcora, 2010).
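As context for the magnitudes in Table 7: weighted NASA-TLX subscale scores can exceed 100 (e.g., the mental demand means above) because each 0-100 rating is multiplied by a pairwise-comparison weight from 0 to 5, while the overall workload is normalized back to 0-100. A minimal sketch, assuming the standard weighted scoring procedure rather than any project-specific variant:

```typescript
interface TlxSubscale {
  rating: number; // raw subscale rating, 0-100
  weight: number; // pairwise-comparison weight, 0-5 (all six weights sum to 15)
}

// Weighted subscale score, range 0-500 (e.g., mental demand in Table 7).
function subscaleScore(s: TlxSubscale): number {
  return s.rating * s.weight;
}

// Overall workload, range 0-100: weighted sum over the 15 paired comparisons.
function overallWorkload(subscales: TlxSubscale[]): number {
  const weightedSum = subscales.reduce((sum, s) => sum + subscaleScore(s), 0);
  return weightedSum / 15;
}
```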
3.6. Performance across behavior and gender
Pearson product-moment correlation was used to analyze whether relationships existed between spatial navigation performance and gender. Two statistically significant relationships were found. First, allocentric males rotated with less angular error than allocentric females while using the gamepad (r = 0.484, p < .05). Second, egocentric females oriented more accurately than egocentric males (r = 0.365, p < .1), especially while using the multitouch screen (r = 0.359, p < .1).

Several previous studies had confirmed the superior performance of males over females in spatial navigation (Castelli et al., 2008; Lawton, 1994; Lawton, 2010; Saucier et al., 2002). However, our findings revealed that males were not always better than females; it depended on the participants' spatial behavior and the performance metric. For example, as mentioned above, females whose behavior was egocentric rotated more accurately than egocentric males.

3.7. Effects of viewpoint design
The current study was initiated by replicating the previous map touring system (Fabroyir et al., 2013, 2014) as a pilot experiment. In that experiment, the pilot participants were expected to move through the desktop VE as it had been designed initially (i.e., through wheelchair finger gestures). Surprisingly, most of the participants exhibited the opposite behavior (i.e., canoe paddle finger gestures). Because of this unexpected behavior, the investigation was continued empirically on different types of displays.

To observe whether the aforementioned behavior was prevalent, the system was used with a more immersive display, the HMD. The viewpoint design of the system's map view (i.e., the allocentric view) was altered to suit the new display's specifications. The previous viewpoint design consisted of an arrow and a cone with its origin at the viewpoint. The cone approximated the view volume of the street view (i.e., the egocentric view) on the desktop or the curved display. As the view volume of an HMD is quite narrow, the approximation added little value; consequently, the cone was removed, and only the arrow was retained. The new viewpoint design was intended to emphasize the track-up direction instead of the view volume.

Unexpectedly, the main experiment on the HMD revealed behaviors unlike those observed in the pilot experiment: the allocentric behavior was no longer dominant. As described in section 3.1, the participants in the main experiment preferred to behave egocentrically in both the rotation (section 3.1.1) and the surge motion tasks (section 3.1.2). It appeared that the difference was due to the viewpoint design. Previous studies had clearly indicated two map configurations: north-up (i.e., allocentric) and forward-up (i.e., egocentric) (Darken & Cevik, 1999; Darken & Peterson, 2014, ch. 19). Those configurations specified the viewpoint design similarly to this study, except that a sole circle was set as a you-are-here (YAH) indicator for the egocentric treatment.

Participants from both the pilot and main experiments were interviewed, and the interviews confirmed that the viewpoint design had affected their motion behaviors. The viewing-cone viewpoint encouraged an object-to-object perspective (i.e., the width of the view angle users can perceive); as a result, it made the participants think allocentrically. The directional-arrow viewpoint, in contrast, emphasized a self-to-object perspective (i.e., which direction users currently face); consequently, it caused them to navigate egocentrically: to rotate as if they were steering a car and to move as if they were propelling a wheelchair. These navigation behaviors were prevalent for both the desktop and HMD VEs.
3.8. Factors influencing navigation behaviors
Besides the viewpoint, interface type and view exposure were also hypothesized to influence the navigation behaviors; that is why, in the experiment design (section 2.6), the groups were counterbalanced with respect to these two confounding factors. However, the experimental results in section 3.2 showed a statistically significant correlation only for interface type (section 3.2.2) and none for view exposure (section 3.2.3). Fig. 7 illustrates the factors that influenced navigation behaviors in virtual environments, especially in HMD VEs.

More confounding factors were attributed to the participants, namely their gender, their skill in using the interfaces, and their frequency of and experience in playing video games. Males were statistically correlated with egocentric navigation behaviors (section 3.2.1), whereas females were correlated with allocentric navigation behaviors. Moreover, participants who were well-versed with the user interfaces (sections 3.2.4 and 3.2.5) or more experienced in playing video games (section 3.2.6) were more likely to exhibit egocentric navigation behaviors than allocentric ones.

3.9. Switch between spatial behaviors
In the surge motion tasks, the rotation effect was purposefully altered to be the opposite of the participant's preference. This alteration was implemented to observe the participant's response to a spatial behavior switch. For example, if the participant's dominant technique in the rotation tasks was the steering wheel (i.e., egocentric), then we applied the digital map effect (i.e., allocentric) in the surge motion tasks. Thus, once the participants had to orient the VE to target the ball, they would realize that the rotation effect was not what they expected. As a result, the participants whose spatial behavior was allocentric felt annoyed and found it
Fig. 7. Factors influencing spatial navigation behaviors: viewpoint design, user interface type, gender, video gaming experience, and proficiency towards user interface.
difficult to switch to the egocentric behaviors. The egocentric participants, however, encountered no difficulty in switching their spatial behavior.

3.10. Limitation and future work
This study on navigation behaviors focused only on two main degrees of freedom: yaw and surge. Additional studies may be necessary to explore the behaviors for the remaining degrees of freedom: pitch, roll, strafe, and elevate. Moreover, the current study used only the street view (i.e., a discrete view) as the VE. Therefore, further observations may be required on other types of VEs that embody a continuous view, such as 3D adventure games and teleoperation systems.

4. Conclusions
The primary goal of this study was to investigate navigation preferences and performance in HMD VEs across two disparate types of spatial behaviors and user interfaces. Both allocentric and egocentric techniques and designs were developed, and the two were compared to identify the one that was dominant in navigation.

With respect to navigation preferences, we found that the majority of participants chose to behave in an egocentric manner while rotating and surging in the HMD VEs. However, although allocentric behaviors were observed less often than egocentric ones, they should not be neglected: allocentric participants reported difficulty switching to egocentric behaviors. As an implication, future VE systems should consider supporting both allocentric and egocentric spatial behaviors during navigation.

With regard to navigation performance, the experimental results showed that participants performed significantly faster with the gamepad than with the multitouch screen in the rotation, surge motion, and navigation tasks. Furthermore, in terms of accuracy, allocentric motions were found to be less prone to errors than egocentric ones, whereas in terms of efficiency, egocentric motions resulted in faster performance than allocentric motions. Thus, future VE systems should consider optimizing allocentric navigation strategies for better accuracy and leveraging egocentric navigation strategies for more time-efficient performance.

Finally, statistically significant correlations were found between spatial navigation behaviors and user attributes such as gender, video gaming experience, and proficiency with UI technology. Future development of VE systems should consider these attributes when selecting suitable spatial behaviors for their users to achieve optimal performance.

Appendix A. Supplementary data
Supplementary data related to this article can be found at https://doi.org/10.1016/j.chb.2017.11.033.

References
Abujelala, M., Abellanoza, C., Sharma, A., & Makedon, F. (2016). Brain-EE: Brain enjoyment evaluation using commercial EEG headband. In Proceedings of the 9th ACM international conference on PErvasive technologies related to assistive environments - PETRA '16 (pp. 1-5). New York, New York, USA: ACM Press. https://doi.org/10.1145/2910674.2910691.
Anguelov, D., Dulong, C., Filip, D., Frueh, C., Lafon, S., Lyon, R., et al. (2010). Google Street View: Capturing the world at street level. Computer, 43(6), 32-38. https://doi.org/10.1109/MC.2010.170.
Aretz, A. J., & Wickens, C. D. (1992). The mental rotation of map displays. Human Performance, 5(4), 303-328. https://doi.org/10.1207/s15327043hup0504_3.
Berg, L. P., & Vance, J. M. (2017). Industry use of virtual reality in product design and
manufacturing: A survey. Virtual Reality, 21(1), 1-17. https://doi.org/10.1007/s10055-016-0293-9.
Brunyé, T. T., Gardony, A., Mahoney, C. R., & Taylor, H. A. (2012). Going to town: Visualized perspectives and navigation through virtual environments. Computers in Human Behavior, 28(1), 257-266. https://doi.org/10.1016/j.chb.2011.09.008.
Castelli, L., Latini Corazzini, L., & Geminiani, G. C. (2008). Spatial navigation in large-scale virtual environments: Gender differences in survey tasks. Computers in Human Behavior, 24(4), 1643-1667. https://doi.org/10.1016/j.chb.2007.06.005.
Darken, R., & Cevik, H. (1999). Map usage in virtual environments: Orientation issues. In Proceedings IEEE virtual reality (Cat. No. 99CB36316) (pp. 133-140). IEEE Comput. Soc. https://doi.org/10.1109/VR.1999.756944.
Darken, R. P., & Peterson, B. (2014). Spatial orientation, wayfinding, and representation. In K. S. Hale, & K. M. Stanney (Eds.), Handbook of virtual environments: Design, implementation, and applications (2nd ed., pp. 467-492). CRC Press. https://doi.org/10.1201/b17360.
Darken, R. P., & Sibert, J. L. (1996). Navigating large virtual spaces. International Journal of Human-Computer Interaction, 8(1), 49-71. https://doi.org/10.1080/10447319609526140.
Fabroyir, H., Teng, W.-C., & Lin, Y.-C. (2014). An immersive and interactive map touring system based on traveler conceptual models. IEICE Transactions on Information and Systems, E97.D(8), 1983-1990. https://doi.org/10.1587/transinf.E97.D.1983.
Fabroyir, H., Teng, W.-C., Wang, S.-L., & Tara, R. Y. (2013). MapXplorer Handy: An immersive map exploration system using handheld device. In 2013 international conference on cyberworlds (pp. 101-107). Yokohama, Japan: IEEE. https://doi.org/10.1109/CW.2013.64.
Faure, V., Lobjois, R., & Benguigui, N. (2016). The effects of driving environment complexity and dual tasking on drivers' mental workload and eye blink behavior. Transportation Research Part F: Traffic Psychology and Behaviour, 40, 78-90. https://doi.org/10.1016/j.trf.2016.04.007.
Hart, S. G. (2006). NASA-task load index (NASA-TLX); 20 years later. In Human factors and ergonomics society annual meeting (pp. 904-908). https://doi.org/10.1037/e577632012-009.
Henriksen, S. P., & Midtbø, T. (2015). Investigation of map orientation by the use of low-cost virtual reality equipment. In C. Robbi Sluter, C. B. Madureira Cruz, & P. M. Leal de Menezes (Eds.), Cartography - maps connecting the world, lecture notes in geoinformation and cartography (pp. 75-88). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-17738-0_6.
Huang, D.-H., Chou, S.-W., Chen, Y.-L., & Chiou, W.-K. (2014). Frowning and jaw clenching muscle activity reflects the perception of effort during incremental workload cycling. Journal of Sports Science & Medicine, 13(4), 921-928.
Iwanaga, K., Saito, S., Shimomura, Y., Harada, H., & Katsuura, T. (2000). The effect of mental loads on muscle tension, blood pressure and blink rate. Journal of Physiological Anthropology and Applied Human Science, 19(3), 135-141. https://doi.org/10.2114/jpa.19.135.
Klatzky, R. L. (1998). Allocentric and egocentric spatial representations: Definitions, distinctions, and interconnections. In Spatial cognition - an interdisciplinary approach to representation and processing of spatial knowledge (pp. 1-17). https://doi.org/10.1007/3-540-69342-4_1.
Kozhevnikov, M., Motes, M. A., Rasch, B., & Blajenkova, O. (2006). Perspective-taking vs. mental rotation transformations and how they predict spatial navigation performance. Applied Cognitive Psychology, 20(3), 397-417. https://doi.org/10.1002/acp.1192.
Lapointe, J.-F., Savard, P., & Vinson, N. (2011).
A comparative study of four input devices for desktop virtual walkthroughs. Computers in Human Behavior, 27(6), 2186-2191. https://doi.org/10.1016/j.chb.2011.06.014.
Lawton, C. A. (1994). Gender differences in way-finding strategies: Relationship to spatial ability and spatial anxiety. Sex Roles, 30(11-12), 765-779. https://doi.org/10.1007/BF01544230.
Lawton, C. A. (2010). Gender, spatial abilities, and wayfinding. In J. C. Chrisler, & D. R. McCreary (Eds.), Handbook of gender research in psychology (pp. 317-341). New York, NY: Springer New York. https://doi.org/10.1007/978-1-4419-1465-1_16.
Martins, H., & Ventura, R. (2009). Immersive 3-D teleoperation of a search and rescue robot using a head-mounted display. In 2009 IEEE conference on emerging technologies & factory automation (pp. 1-8). IEEE. https://doi.org/10.1109/ETFA.2009.5347014.
Merrill, E. C., Yang, Y., Roskos, B., & Steele, S. (2016). Sex differences in using spatial and verbal abilities influence route learning performance in a virtual environment: A comparison of 6- to 12-year old boys and girls. Frontiers in Psychology, 7, 258. https://doi.org/10.3389/fpsyg.2016.00258.
de Morree, H. M., & Marcora, S. M. (2010). The face of effort: Frowning muscle activity reflects effort during a physical task. Biological Psychology, 85(3), 377-382. https://doi.org/10.1016/j.biopsycho.2010.08.009.
Münzer, S., & Zadeh, M. V. (2016). Acquisition of spatial knowledge through self-directed interaction with a virtual model of a multi-level building: Effects of training and individual differences. Computers in Human Behavior, 64, 191-205. https://doi.org/10.1016/j.chb.2016.06.047.
Murias, K., Kwok, K., Castillejo, A. G., Liu, I., & Iaria, G. (2016). The effects of video game use on performance in a virtual navigation task. Computers in Human Behavior, 58, 398-406. https://doi.org/10.1016/j.chb.2016.01.020.
Muse Developers. (2015). Interaxon, Available data - muse developers. http://developer.choosemuse.com/research-tools/available-data.
Pausch, R., Burnette, T., Brockway, D., & Weiblen, M. E. (1995). Navigation and locomotion in virtual worlds via flight into hand-held miniatures. In Proceedings of the 22nd annual conference on computer graphics and interactive techniques - SIGGRAPH '95 (pp. 399-400). New York, New York, USA: ACM Press. https://
doi.org/10.1145/218380.218495.
Rendon-Velez, E., van Leeuwen, P. M., Happee, R., Horváth, I., van der Vegte, W., & de Winter, J. (2016). The effects of time pressure on driver performance and physiological activity: A driving simulator study. Transportation Research Part F: Traffic Psychology and Behaviour, 41, 150-169. https://doi.org/10.1016/j.trf.2016.06.013.
Richardson, A. E., & Collaer, M. L. (2011). Virtual navigation performance: The relationship to field of view and prior video gaming experience. Perceptual and Motor Skills, 112(2), 477-498. https://doi.org/10.2466/22.24.PMS.112.2.477-498.
Richardson, A. E., Powers, M. E., & Bousquet, L. G. (2011). Video game experience predicts virtual, but not real navigation performance. Computers in Human Behavior, 27(1), 552-560. https://doi.org/10.1016/j.chb.2010.10.003.
Rosenfield, M., Jahan, S., Nunez, K., & Chan, K. (2015). Cognitive demand, digital screens and blink rate. Computers in Human Behavior, 51(PA), 403-406. https://doi.org/10.1016/j.chb.2015.04.073.
Saucier, D. M., Green, S. M., Leason, J., MacFadden, A., Bell, S., & Elias, L. J. (2002). Are sex differences in navigation caused by sexually dimorphic strategies or by differences in the ability to use the strategies? Behavioral Neuroscience, 116(3), 403-410. https://doi.org/10.1037/0735-7044.116.3.403.
Smith, M. E., Gevins, A., Brown, H., Karnik, A., & Du, R. (2001). Monitoring task
loading with multivariate EEG measures during complex forms of human-computer interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 43(3), 366-380. https://doi.org/10.1518/001872001775898287.
Veltman, J., & Gaillard, A. (1996). Physiological indices of workload in a simulated flight task. Biological Psychology, 42(3), 323-342. https://doi.org/10.1016/0301-0511(95)05165-1.
Walkowiak, S., Foulsham, T., & Eardley, A. F. (2015). Individual differences and personality correlates of navigational performance in the virtual route learning task. Computers in Human Behavior, 45, 402-410. https://doi.org/10.1016/j.chb.2014.12.041.
Wen, W., Ishikawa, T., & Sato, T. (2013). Individual differences in the encoding processes of egocentric and allocentric survey knowledge. Cognitive Science, 37(1), 176-192. https://doi.org/10.1111/cogs.12005.
Wiechert, G., Triff, M., Liu, Z., Yin, Z., Zhao, S., Zhong, Z., et al. (2016). Identifying users and activities with cognitive signal processing from a wearable headband. In 2016 IEEE 15th international conference on cognitive informatics & cognitive computing (ICCI*CC) (pp. 129-136). IEEE. https://doi.org/10.1109/ICCICC.2016.7862025.
Zheng, B., Jiang, X., Tien, G., Meneghetti, A., Panton, O. N. M., & Atkins, M. S. (2012). Workload assessment of surgeons: Correlation between NASA TLX and blinks. Surgical Endoscopy, 26(10), 2746-2750. https://doi.org/10.1007/s00464-012-2268-6.