Exam performance and attitudes toward multitasking in six, multimedia–multitasking classroom environments


Computers & Education 86 (2015) 250–259


Edward Downs*, Angela Tran, Robert McMenemy, Nahom Abegaze
University of Minnesota Duluth, United States

Article history: Received 6 January 2015; Received in revised form 12 August 2015; Accepted 13 August 2015; Available online 15 August 2015.

Abstract

Although many colleges and universities have implemented laptop programs, the use of these technologies in the classroom does not guarantee increases in exam performance. Used improperly, these technologies can hinder the learning process. An experiment was conducted comparing how the use or non-use of technology affected exam performance across six different classroom environments. Consistent with predictions based on theories of multitasking and multimedia processing, participants performed worst on an exam when distracted with social media. Moreover, having had this experience, participants in five of the six conditions showed a decrease in perceptions of their abilities to efficiently multitask from pre-test to post-test. Results are discussed in terms of theory, and recommendations are made for the integration of experiential learning sessions into orientation programs to help promote a healthy classroom learning environment. © 2015 Elsevier Ltd. All rights reserved.

Keywords: Multimedia; Multitasking; Cognition; Attitudes; Experiential learning

1. Introduction

Imagine sitting in a modern-day college lecture hall. Your attention is divided between listening to the professor, taking notes, an obnoxiously loud granola bar wrapper from somewhere close by, and the many glowing laptop screens in front of you. Some of the more studious classmates are typing notes, while others are shopping for the newest trends or catching the latest scores on ESPN. Almost everyone has a window open to some sort of social-media site: Twitter, Flickr, Reddit, Instagram, or the ubiquitous Facebook.

This is the technological environment that the current generation of college students occupies. Referred to as the Net Generation, or N-Gen (Tapscott, 1998), these students were born into a world of technological access and full-time connectivity. They are described as tech savvy, interested in multimedia, creators of Internet content, and avid multitaskers (Berk, 2010). With wireless connectivity on college campuses, televisual media displaying messages in college hallways, and smartphones, laptops, tablets, and MP3 players always at hand, today's college students certainly know how to navigate today's media-saturated environment (Roberts, Foehr, & Rideout, 2005).

Given that the N-Gen crowd is used to staying connected and that multitasking is normal for them (Frand, 2000), one would expect this familiarity to translate into efficiency. That is to say, given the practice they have in their day-to-day lives, one would think that multitasking during class would not present many barriers to learning. However, the body of literature on multitasking and multimedia processing suggests otherwise. The following paragraphs review the literature on multitasking, or trying to complete two or more tasks simultaneously, with a focus on the effects of multitasking in the college classroom. Following this, the adaptive control of thought-rational (ACT-R) cognitive architecture, the theory of threaded cognition, and multimedia learning theories will be used to explain the conditions under which multitasking is likely to inhibit learning. Hypotheses and research questions will be advanced for an experiment to test exam performance and document attitudes toward perceived abilities to multitask in six different multimedia, multitasking environments. Results and discussion will follow in terms of theory and recommendations for classroom management in collegiate learning environments.

* Corresponding author. E-mail address: [email protected] (E. Downs). http://dx.doi.org/10.1016/j.compedu.2015.08.008 0360-1315/© 2015 Elsevier Ltd. All rights reserved.

2. Literature review

When it comes to multitasking, strong evidence of generational differences has been observed. Survey data indicate that the Net Generation reports more multitasking behaviors than their Generation-X counterparts, who in turn report more multitasking behaviors than their Baby-Boomer predecessors (Carrier, Cheever, Rosen, Benitez, & Chang, 2009). Although the frequency of multitasking endeavors may serve as a general proxy for approval or acceptance of this behavior, it does not necessarily translate into generational differences in the ability to effectively or efficiently process information. Across generations, there are similar opinions on which combinations of multitasking behaviors are more or less difficult (Carrier et al., 2009). For example, each of the generations mentioned above would agree that texting and driving while concurrently trying to use a navigation device in the car is more difficult than watching TV and chewing gum.

The general acceptance of anytime, anywhere multitasking behaviors among N-Geners in today's media-saturated environment (Roberts et al., 2005) has infiltrated the college lecture hall. One study examining a large lecture course found that computer-using students were engaged in off-task activities 61% of the class time (Ragan, Jennings, Massey, & Doolittle, 2014).
Another study, using spyware on students' laptops (Kraushaar & Novak, 2010), concluded that students generated 65.8 active windows on their laptop screens per 75-minute lecture. Researchers were also able to distinguish productive (course-related) from distractive (non-course-related) screens. Results showed that, on average, 25.1 screens were productive, while 40.7 screens per class lecture were distractive. Not surprisingly, students with higher distraction ratios demonstrated lower levels of performance as measured by scores on homework, quizzes, projects, exams, and final class grades. Additional work (e.g., Burak, 2012; Junco, 2012; Paul, Baker, & Cochran, 2012; Ravizza, Hambrick, & Fenn, 2014) recognizes the inverse relationship between increased multitasking and compromised course performance.

Experimental research also verifies the relationship between multitasking and compromised cognitive outcomes (e.g., Sana, Weston, & Cepeda, 2013). Manipulating laptop use for a lecture (Hembrooke & Gay, 2003) resulted in significantly lower recall and recognition test scores for those using laptop computers during class. Others have found that those using Facebook or IM applications while in class performed significantly worse on multiple-choice exams than those taking notes using pencil and paper or word-processing technology (Wood et al., 2012). Still others have demonstrated a lack of task efficiency when engaged in multimedia, multitasking behaviors (Bowman, Levine, Waite, & Gendron, 2010).

That is not to say that all technologies in the classroom are bad. An experiment that manipulated structured (focused use of a laptop) and unstructured (ignoring how laptops were being used) environments during lectures found productive learning more often in the structured environment than in the unstructured classroom environment (Kay & Lauricella, 2011).
Unfortunately, while this study demonstrated that students can stay on track with coordinated, relevant opportunities to engage with technology, no performance outcome or cognitive measure of learning was reported.

To summarize, survey research indicates that multitasking does occur in the classroom and that, oftentimes, it is not productive. The experimental research indicates that cognitive processes are negatively impacted, but there is at least some evidence to suggest that, used properly, technology can facilitate learning. Given this information, how do educators make sense of it all? At the heart of the matter, two different processes disrupt cognition. The first is the distracted multitasking behavior itself; the second is an overloading of the senses during mediated message processing. A closer look at theories of multitasking, namely the ACT-R architecture and threaded cognition theory (Borst, Taatgen, & van Rijn, 2010; Salvucci, 2005; Salvucci, Taatgen, & Borst, 2009), and at multimedia learning theory (Mayer & Moreno, 1998; Mayer & Moreno, 2002; Moreno & Mayer, 1999) can explain how these findings fit together.

2.1. ACT-R and threaded cognition

Published research on human performance and limited capacity has been available for more than sixty years. Some scholars have used simultaneous auditory tasks to test human capabilities (Broadbent, 1958), while others have examined reaction times in multitasking environments (Hick, 1952). Others have theorized about the resource pool management necessary to complete more than one task at a time (Kahneman, 1973), while still others have critiqued the experimental work in this area, looking for alternative explanations for limited capacity (Duncan, 1980). The adaptive control of thought-rational (ACT-R) architecture and threaded cognition theory (Anderson, 2007; Salvucci & Taatgen, 2008) are the latest limited-capacity theories that combine to explain how people perform in various multitasking situations.
At its most basic level, the literature on multitasking distinguishes between concurrent and sequential multitasking (Salvucci & Taatgen, 2008). Sequential multitasking occurs when two or more tasks are accomplished in a given time frame but are never done at the exact same time, for example, microwaving a bowl of soup and then making a salad for lunch. Both are done in the same time frame, but attentional resources can be focused on one task at a time. Concurrent multitasking occurs when a person attempts to complete two or more tasks at the exact same time. An example of concurrent multitasking is texting while driving: a person is attempting to draw on the same resources to accomplish two different tasks at once.


ACT-R posits that in any given multitasking situation, people draw from three different resource pools: motor, perceptual, and cognitive (Borst, Taatgen, & van Rijn, 2010). Although these resource pools can operate in parallel to one another with little difficulty, demands within a single pool are serial in nature (Borst et al., 2010). In a concurrent-multitasking environment such as a classroom where students are IM-ing on Facebook while trying to pay attention to a course lecture, all three resource pools (cognitive, motor, and perceptual) are compromised. Threaded cognition states that when two tasks require the same resource, a "bottleneck" occurs (Borst et al., 2010). Central bottleneck theory says that when two tasks require the same resource, the tasks are placed into a queue and given the necessary attention in sequence. The example of a student using a laptop during class to both take notes and IM friends on a social network site provides the necessary context to understand this theory in action. First, the student is trying to listen to and watch the professor as he or she presents new course materials. At the same time, the student has to take notes, either by pen and paper or electronically on a laptop, and attempt to process the materials as more information is presented. In this example, all of the resource pools are being drawn from: perceptual resources are used when the student is listening to and watching the professor, motor resources are used to take notes, and cognitive resources are used to process the information. In the case of concurrent multitasking, some students are doing much more than taking notes with their laptops. They are also text messaging, checking their email, reading current events, and interacting with social media. This means that the social media task is competing for resources across the perceptual, motor, and cognitive resource pools. While students in this position may feel like they are navigating both tasks successfully, the research shows that concurrent multitasking comes with a price. As students allocate resources from one task to another, they inevitably must stop allocating resources to one task or the other. This process is not conducive to learning.
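The queueing idea behind the bottleneck can be made concrete with a toy sketch. This is our own illustration, not a simulation from the ACT-R literature, and every number (task durations, switch cost) is hypothetical: when all steps must pass through one serial resource, rapidly alternating between two tasks finishes neither task sooner and only adds switching overhead.

```python
# Toy illustration of a serial "cognitive resource" bottleneck.
# All durations and switch costs below are hypothetical.

def completion_time(task_durations, switches, switch_cost):
    """Total time when a single serial resource must handle every step,
    plus a fixed overhead for each switch between tasks."""
    return sum(task_durations) + switches * switch_cost

# Two tasks that each need 10 units of the shared cognitive resource.
tasks = [10.0, 10.0]

# Sequential multitasking: finish one task, then the other (one switch).
sequential = completion_time(tasks, switches=1, switch_cost=0.5)

# Concurrent multitasking: rapid alternation every time unit (19 switches).
concurrent = completion_time(tasks, switches=19, switch_cost=0.5)

print(sequential, concurrent)  # 20.5 29.5
```

The serial resource means the total work is identical in both cases; the only difference is how often the student pays the switch cost, which is why "concurrent" multitasking on one resource can only be slower.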

2.2. Multimedia learning theory

At the same time that students are attempting to multitask, there are also limitations on processing information through audio, video, and text-based media. Based in part on Paivio's (1990) dual-coding theory, multimedia learning theory explains how people process mediated messages and information. At the core of multimedia learning theory is the ability to process information through two sensory channels. The theory's modality principle states that learning potential is maximized when information is presented through audio and video. However, the redundancy principle (Mayer & Moreno, 1998; Mayer & Moreno, 2002) also recognizes that venturing beyond the dual-coding threshold creates a situation that is favorable to cognitive overload. In a multimedia, multitasking environment where multiple media sources are present, this means the ability to process information through both verbal and visual channels will be compromised (Mayer & Anderson, 1991; Moreno & Mayer, 2000). For example, if a student were watching a documentary film while also updating his or her social media pages, the student would need to process the film's video information, the text on the social media page, and any static images on that page. In this case, the visual system is split three ways before note-taking or the processing of auditory information is even taken into consideration. In short, media components by themselves are capable of overloading the senses and making it difficult to process information, especially when the topics are not related to each other (Moreno & Mayer, 1999; Moreno & Mayer, 2000).

2.3. Present study

The ACT-R cognitive architecture with threaded cognition and multimedia learning theory provide two mechanisms through which cognitive processing could be affected. The aim of the current study is to use these theories in conjunction with the previous literature to create six different classroom environments and to heed the call for future research (Fried, 2008; Weaver & Nilson, 2005) to see how cognitive processes, specifically exam performance, are helped or hindered. This was accomplished by having participants watch the documentary film Century of the Self in one of six classroom environments and then take a test on the material that was covered. The six classroom environments tested were as follows:

- An unstructured environment where all participants watched a video while responding to questions with their laptops on a Facebook chat group: the Facebook distracted condition.
- A structured environment where a video played while all participants took notes with pencil and paper: the notes on paper condition.
- A structured, non-distracted control group where all participants just sat and watched the video: the no media use control condition.
- An unstructured mix of conditions in which half of the participants were distracted by answering questions on a Facebook chat group while the other half just sat and watched the video (an equal mix of the first and third conditions): the mixed distraction condition.
- A structured environment where all participants used a laptop to type notes, with no other distractions permitted (no social media, IM, e-mail, etc.), while watching the video: the notes on laptop condition.
- An unstructured distracted condition where all participants took notes on their laptops while also being concurrently distracted by the Facebook chat group as the video played: the distracted combo condition.


For the purpose of this study, similar to Kay and Lauricella (2011), we conceptualized unstructured environments as those which ignore how students are using their laptops (i.e., they are distracted by Facebook). The structured environment in this study, however, is defined as an environment where technology use has been limited or restricted to allow focus on the presented materials. The environment described in the first condition was an unstructured, multimedia-multitasking environment, where students watched a video while concurrently having to respond to questions on a Facebook chat group. Given the potential for compromising the resource pools, and given that the concurrent social-media task was not related to the video, we predicted:

H1: Participants in the Facebook distracted condition (watching a video while responding to questions on a Facebook chat group) will receive the lowest scores (compared to the five other conditions) on a subsequent exam.

Recalling that at least one empirical work demonstrated that the structured use of technology in the classroom could facilitate learning (Kay & Lauricella, 2011), it was prudent to test how structure and technology use could be beneficial in the classroom. With this in mind, three structured conditions were created: the notes on paper condition, the notes on laptop condition, and a no media use control condition. Consistent with theory and with previous research, we predicted:

H2: Participants in the structured conditions (notes on paper, notes on laptop, and no media use control) will perform better than those in the distracted conditions (Facebook distracted, mixed distraction, and distracted combo).

Given that many classrooms tend to be mixed (some students use laptops and some do not), it is important to see how the classroom atmosphere of a mixed-distraction environment affects learning. Consistent with theories of multitasking and multimedia learning, we predicted:

H3: Participants who are not distracted in the mixed-distraction condition will score better on the exam than their distracted counterparts.

Finally, we were interested to see what would happen to participants who were in a condition that was structured (as in paying attention) but who were neither taking notes nor distracted with social media. Based on theory, we speculated that the lack of note-taking and the lack of a multitasking distraction would place this control condition somewhere in between the distracted conditions and the structured conditions in terms of learning. If this were true, it would mean that simply paying attention (with no note-taking and no distractions) would be more productive than multitasking while distracted. Thus, a research question asked:

RQ1: How will participants in the structured, no media use control group compare on the multiple-choice exam against the other conditions?

2.4. Experiential learning

If we can predict student performance outcomes in a variety of multimedia, multitasking environments, this information will help instructors and professors understand more about today's lecture halls and give them the tools to govern their classrooms accordingly. While this information will be of use at an instructional level, it could also very well be a learning experience at the individual student level. Research suggests that students favor the use of technology in the classroom (Downs, Boyson, Alley, & Bloom, 2011), and anecdotal evidence suggests they become frustrated when they cannot use it (Young, 2006). This may be because students overestimate their abilities to effectively and efficiently multitask. But what would happen if it were possible to create a multimedia, multitasking environment that was designed for students to fail on purpose? Would students realize how inefficient this type of unstructured multitasking is? Further, would it lead to changes in perceptions of their abilities to multitask? Tenets from situated cognition and experiential learning suggest that if students were to arrive at that conclusion on their own, it would be a more potent learning experience than if they were simply told what they can and cannot do with their technology. The theory of experiential learning (ELT; Kolb & Kolb, 2005) supports the notion that learning is a process of knowledge creation. ELT (Kolb, 1984) defines learning as "the process whereby knowledge is created through the transformation of experience" and holds that "knowledge results from the combination of grasping and transforming experience" (cited in Kolb & Kolb, 2005, p. 194). That is to say, when students in a multimedia, multitasking environment are allowed to experience first-hand the difficulties of trying to successfully multitask, their subsequent reflections on the experience may force them to confront how inefficient multitasking behaviors are in a learning environment. A pre-test, post-test measure of perceived abilities to multitask should indicate how students perceive their abilities in each experimental condition. Because participants in the Facebook distracted condition are hypothesized to have the most difficulty cognitively, we predict:


H4: Participants' attitudes toward their abilities to effectively multitask in the Facebook distracted environment (watching a video while responding to questions on a Facebook chat group) will be lower in the post-test than in the pre-test.

As the other conditions have a mix of both structured and unstructured multimedia multitasking, we ask the research question:

RQ2: How will participants' attitudes toward their ability to multitask vary for those in the other five conditions?

3. Method

3.1. Participants

Students (N = 204) were recruited to participate in one of the six outlined experimental conditions. The average age of the sample was 19.55 years (SD = 1.49), and 119 participants were female. Most of the participants were freshmen (n = 83), followed by sophomores (n = 50), juniors (n = 35), and seniors (n = 35). One person did not respond to the class-rank question. The average ACT score for the 188 participants who responded was 23.35 (SD = 2.88), and none of the participants reported seeing the stimulus material, the Century of the Self documentary, prior to the experimental session. Approximately one-fifth of the sample (n = 39) reported previously using their laptops in class to take notes.

3.2. Measures

To determine whether or not students' perceived abilities to multitask in the various conditions changed from pre-test to post-test, a scale was created to measure perceived multitasking proficiency. Seven items asked about perceived abilities to multitask: I believe that I am an efficient multi-tasker; I believe that I get more things done when I multitask; Multitasking is the only way I can get everything I need to do accomplished; I am good at multitasking; Being engaged in multiple tasks is easy for me; Multitasking feels natural to me; and, I get things done more quickly when I multitask. Each of the seven questions was measured on a 5-point Likert scale anchored at 1 = strongly disagree and 5 = strongly agree.
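Internal consistency for a multi-item scale like this is conventionally summarized with Cronbach's alpha. The sketch below implements the standard formula; the Likert responses in it are made up for illustration and are not the study's data.

```python
# Cronbach's alpha: alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
# The response data below are hypothetical, not the study's.

def cronbach_alpha(items):
    """items: list of per-item score lists, one score per respondent."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Seven hypothetical 5-point Likert items from five respondents.
items = [
    [5, 4, 2, 3, 1], [4, 4, 2, 3, 2], [5, 3, 1, 3, 1], [4, 5, 2, 2, 1],
    [5, 4, 2, 3, 2], [4, 4, 1, 3, 1], [5, 5, 2, 2, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.98 for these made-up data
```

Higher alpha means the items covary strongly across respondents; values above roughly 0.8 are typically treated as reliable for scale use.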
As this scale was developed for this study, an exploratory factor analysis with varimax rotation was conducted to test the seven-item structure of the perceived ability to multitask measure. The analysis indicated a single factor for all seven items, with an eigenvalue of 4.3 that accounted for 61.48% of the variance. The seven pre-test scale items measuring self-perceptions of multitasking ability were internally consistent and deemed reliable for use, with a Cronbach's alpha of 0.89. The identical 7-item scale used in the post-test was also internally consistent and deemed reliable for use, with a Cronbach's alpha of 0.92.

To assess learning, a 15-item, multiple-choice exam was created by the authors and pretested. The revised measurement instrument asked questions about historical facts that were mentioned in the 25-minute video clip of the Century of the Self documentary. Each multiple-choice question had four options to choose from: one correct answer and three foils. As the questions were taken directly from the documentary, the instrument had good face validity. To further test the efficacy of this measure of learning, the exam was administered to a separate group of students (n = 24) who were not part of the experimental conditions and who had never watched the Century of the Self documentary. The mean score of this comparison group was 4.12 (SD = 1.36). This group's average learning assessment score was the lowest compared to the six groups tested in the experiment, and it differed statistically from all six conditions in the experiment.
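The pairwise validity comparisons that follow are corrected with Holm's sequential Bonferroni procedure. A sketch of that standard step-down rule is below; the p-values are hypothetical, chosen only to mimic the reported pattern (five tests at p < 0.001 and one at p < 0.05), not taken from the paper.

```python
# Holm's sequential Bonferroni step-down procedure (standard method).
# The p-values below are hypothetical, not the paper's exact values.

def holm(p_values, alpha=0.05):
    """Return True (reject H0) or False for each p-value, in input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # The k-th smallest p is compared to alpha / (m - k); stop at the
        # first failure, and everything larger also fails.
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

# Five very small p-values and one near 0.025: all six survive the
# correction, matching "all tests remained significant."
print(holm([0.025, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001]))
# [True, True, True, True, True, True]
```

Because the thresholds relax as the smallest p-values are rejected (alpha/6, alpha/5, ..., alpha/1), the largest p-value here only has to clear alpha itself, which is why all six comparisons remain significant.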
Facebook distracted, t(55) = 2.310, p < 0.05 (M = 5.15, SD = 1.83, d = 0.62); no media use control, t(56.80) = 11.66, p < 0.001 (M = 9.80, SD = 2.4, d = 3.09); mixed distraction, t(45.04) = 4.94, p < 0.001 (M = 7.15, SD = 3.09, d = 1.4); notes on laptop, t(56.53) = 12.97, p < 0.001 (M = 10.14, SD = 2.19, d = 3.45); distracted combo, t(49.23) = 5.34, p < 0.001 (M = 6.97, SD = 2.57, d = 1.52); and notes on paper, t(53.73) = 11.82, p < 0.001 (M = 10.58, SD = 2.8, d = 3.22). Holm's sequential Bonferroni procedure was used to control family-wise alpha error, and all tests remained significant. Thus, the instrument was deemed a valid measure of learning. Sample items from the learning assessment included: What movement did Edward Bernays begin in early films? Which president hired Bernays to introduce the first political public-relations campaign to regain popularity? Why did Bernays arrange for Freud's work to be published in America? Which famous opera singer was mentioned as one of Edward Bernays' early clients?

3.3. Procedure

Students were recruited from communication courses at a university in the Midwest to participate in an experiment examining attitudes and exam performance in various multimedia, multitasking environments. During recruitment, participants were asked to sign up for a one-hour time slot on the date and time of their choice. They were also told that they would need to bring a laptop computer to the experiment. Data were collected in a traditional college classroom during fall and spring semesters. Upon arrival at their scheduled appointment times, informed consent was obtained and participants were asked to turn off phones or other electronic devices that could potentially interrupt the study.


Researchers then administered a basic demographic questionnaire which asked about biological sex, college grade-point average, standardized college testing scores (ACT or SAT), and so on. This questionnaire also asked about basic media preferences and participants' experience with laptops in the classroom. Pre-test questions about perceptions of their abilities to multitask were also included. Prior to arrival, each group of participants was randomly assigned to one of the six classroom conditions (1 = Facebook distracted; 2 = notes on paper; 3 = no media use control group; 4 = mixed distraction; 5 = notes on laptop; and 6 = distracted combo). In condition one, participants used their laptops to join a Facebook chat group created for this study titled "BBC Century of the Self". In condition two, participants were provided a sheet of notebook paper and instructed to take notes on the documentary as they would normally do during a lecture. In condition three, participants were told to watch the documentary and nothing more. In condition four, approximately half of the participants (every other seat) were asked to join the Century of the Self Facebook chat group, while the other half sat and watched without additional distraction. In condition five, participants used their laptops and took notes in a word-processing program. In condition six, participants followed the Facebook protocol from condition one while simultaneously taking notes on their laptops during the video. In the distracted environments, participants received questions at regular intervals from one of the research assistants monitoring the Century of the Self Facebook chat group. The interval chosen was every two minutes, as justified by previous research on the frequency of distracted behavior in classrooms (Kraushaar & Novak, 2010).
Participants were asked to watch the video and to reply, to the best of their abilities, to questions and statements posed by the research assistant on the Facebook group. Examples include: What is your favorite place to eat and what is your favorite dish there? List five things that you need to purchase from the store. What do you think of the video so far? (For the complete list of questions, see Appendix 1.)

Researchers dimmed the classroom lights and began playing part one of the Century of the Self documentary. This BBC documentary discussed the rise of consumerism in America in the 1920s and 1930s. It also discussed the role that one of the fathers of public relations in the U.S., Edward Bernays, had in changing American purchasing habits from needs-based to wants-based consumption. The clip also detailed the relationship between Bernays and his famous uncle, Sigmund Freud, whose psychoanalytic techniques Bernays used in his marketing. The documentary was played from source material on YouTube, through an overhead projector onto a standard-sized projection screen. The volume was set at a comfortable level and played through an integrated speaker system spaced throughout the room. The documentary was stopped approximately 25 minutes in, at which point researchers asked participants to put their laptops or note papers away. Researchers then administered the exam, consisting of 15 multiple-choice questions. After completion of the exam, one final questionnaire was given: the post-test measure of perceived abilities to multitask. When the final questionnaire was completed, participants were debriefed, allowed to ask questions, and thanked for their time.

4. Results

Prior to conducting the analyses, any demographic variables thought to affect the dependent measure of exam performance were examined. Three variables were positively correlated with exam performance: age, year in college, and ACT score. Zero-order correlations for these variables are reported in Table 1. As such, these variables were controlled for in any analysis that examined exam performance. Neither biological sex nor college GPA was significantly related to any of the dependent measures.

The first hypothesis predicted that participants in the Facebook distracted condition would receive the lowest scores (compared to the five other conditions) on a subsequent exam. When controlling for age, year in college, and ACT score, ANCOVA results indicated significant differences in performance on the multiple-choice exam between conditions, F(5, 178) = 26.4, p < 0.001, partial eta squared = 0.43. Mean scores indicated that the Facebook distracted condition scored significantly lower (M = 5.16, SE = 0.42) than the five other class environments tested: notes on paper (M = 10.64, SE = 0.42), notes on laptop (M = 10.21, SE = 0.41), the no media use control (M = 9.72, SE = 0.41), the distracted combo condition (M = 7.37, SE = 0.45), and the mixed distraction environment (M = 7.07, SE = 0.47). Post hoc analyses revealed three clusters of mean scores that differed statistically from each other. Cluster one, containing only the Facebook distracted

Table 1
Bivariate correlation matrix of variables thought to affect exam performance.

Variables              1        2        3        4        5        6
1. Biological Sex      –
2. Age                 0.15*    –
3. ACT Score           0.01     0.15     –
4. College GPA         0.23**   0.04     0.38**   –
5. Year in College     0.06     0.85**   0.05     0.04     –
6. Exam score          0.04     0.16*    0.25**   0.13     0.24**   –

Note: * correlation is sig. at p < 0.05; ** correlation is sig. at p < 0.01. Numbers (1–5) are the variables thought to affect the primary dependent measure, exam performance (6). Brief descriptions of the variables are as follows: 1. Biological sex of the participant, 2. Age of the participant, 3. Self-reported ACT standardized test score, 4. College grade point average, 5. Year in college (Fr., So., Jr., or Sr.), and 6. Exam performance on a 15-item, multiple-choice measure.
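For readers who want to reproduce a zero-order (Pearson) correlation matrix like Table 1, the core computation can be sketched in a few lines of Python. The sample values below are hypothetical illustration data, not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Zero-order (Pearson) correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mini-sample (NOT the study's data): ACT and exam scores
# for five students, to show the direction of the relationship.
act = [22, 25, 28, 30, 33]
exam = [6, 8, 10, 11, 14]
print(round(pearson_r(act, exam), 2))  # → 0.99
```

In practice one would compute r (and its p-value, e.g. via `scipy.stats.pearsonr`) for every pair of variables to fill in a matrix like Table 1.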

256

E. Downs et al. / Computers & Education 86 (2015) 250–259

condition, was statistically different from cluster two, which contained both the mixed-distraction condition and the distracted combo group. These two clusters in turn were statistically different from the third cluster, which contained the three structured conditions (notes on paper, notes on laptop, and no media use control) (see superscripts in Table 2 for details). Research question one asked how the other conditions would rank in terms of exam performance, specifically asking about the structured, non-distracted control group. The information gleaned from the post hoc analyses for hypothesis one provided the impetus to explore the cluster relationships further. A polynomial contrast test was conducted, assigning the values of −1, 0, and +1 to the three clusters, respectively. The polynomial contrast test (with ACT score as a covariate) indicated that a statistically significant linear trend existed, F(1, 184) = 121.27, p < 0.001, eta sq. = 0.38. The first cluster, containing the Facebook distracted condition (M = 5.10, SE = 0.33), scored lowest of the three and was statistically different from the second cluster; the second cluster, containing the mixed-distraction and distracted combo conditions (M = 7.19, SE = 0.38), fell between the first and third; and the third cluster (M = 10.23, SE = 0.25) was statistically different from the first two. The quadratic trend was not statistically significant. The results from these tests provide support for hypothesis one and answer research question one. Hypothesis two predicted that participants in the structured conditions would perform better than those in the unstructured conditions.
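The arithmetic behind the linear-trend contrast described above is straightforward: the contrast weights (−1, 0, +1) are applied to the three cluster means. This sketch reproduces only the contrast estimate; the reported F-test additionally requires the ANCOVA error term, which is not shown here:

```python
# Cluster means reported above: Facebook distracted; mixed/combo; structured.
cluster_means = [5.10, 7.19, 10.23]
weights = [-1, 0, 1]  # linear-trend contrast weights

# Contrast estimate L = sum(w_i * M_i); a positive L is consistent with
# the increasing linear trend detected by the F-test.
L = sum(w * m for w, m in zip(weights, cluster_means))
print(round(L, 2))  # → 5.13
```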
The original six conditions were collapsed into two categories: the structured category (comprising the notes on paper, no media use control, and notes on laptop conditions) and the unstructured category (comprising the Facebook distracted, mixed distraction, and distracted combo conditions). Controlling for age, year in class, and ACT scores, analysis of covariance revealed that the two categories differed from each other, F(1, 182) = 105.73, p < 0.001, partial eta sq. = 0.37. The average exam score was lower in the category containing the three unstructured conditions (M = 6.48, SE = 0.26) than in the category containing the three structured conditions (M = 10.20, SE = 0.25, p < 0.001). Thus, hypothesis two was supported. Hypothesis three predicted that participants who were not distracted in the mixed distraction condition would score better on the multiple-choice exam than their distracted counterparts. Analysis of covariance revealed significant differences between those who were distracted and those who were not, F(1, 21) = 46.08, p < 0.001, partial eta sq. = 0.69. The distracted half of the class scored lower (M = 4.81, SE = 0.51) on the multiple-choice exam than the half that was not distracted (M = 9.81, SE = 0.51, p < 0.001). Thus, hypothesis three also garnered empirical support. Hypothesis four predicted that participants' attitude scores toward multitasking in the Facebook distracted condition would be lower in the post-test than in the pre-test, and research question two inquired as to how attitudes toward multitasking ability would vary for those in the other five conditions. To determine whether participants' attitudes toward multitasking changed, each group was analyzed independently. Repeated-measures t-tests were conducted to determine whether pre-test scores on perceived ability to multitask differed from post-test scores of the same measure.
Holm's sequential Bonferroni procedure was applied to the six tests of significance to control the risk of family-wise alpha error. Each of the tests that originally achieved significance remained significant. For the Facebook distracted condition, pre-test and post-test scores differed significantly in the expected direction, t(24) = 10.99, p < 0.001. Average scores in the pre-test measure were higher (M = 2.98, SE = 0.13) than in the post-test assessment (M = 1.93, SE = 0.12), indicating that the experience exposed the perceived inefficiency of multitasking in a multimedia environment. For the notes on paper condition, differences were also statistically significant, t(31) = 2.74, p < 0.01. On average, perceptions of abilities to multitask were higher in the pre-test (M = 2.95, SE = 0.12) than in the post-test (M = 2.79, SE = 0.12). For the notes on laptop group, pre-test and post-test measures indicated the same significant pattern, t(34) = 3.30, p < 0.002, with pre-test and post-test scores of (M = 2.99, SE = 0.13) and (M = 2.82, SE = 0.14), respectively. The mixed-distraction condition also followed this pattern, t(31) = 5.38, p < 0.001, with higher pre-test scores (M = 2.98, SE = 0.13) than post-test scores (M = 2.26, SE = 0.16). The distracted combo condition also differed, t(31) = 6.87, p < 0.001, with higher pre-test scores (M = 2.85, SE = 0.12) than post-test scores (M = 2.15, SE = 0.14). The only group that did not follow this pattern was the no media use control condition; its pre-test scores did not differ from its post-test scores, t(24) = 1.08, p = 0.29, with pre-test averages of (M = 3.09, SE = 0.12) and post-test averages of (M = 3.03, SE = 0.12). To summarize the results of the pre-test/post-test measure of perceived ability to multitask, all of the conditions except the no media use control group differed significantly from pre- to post-test.
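Holm's sequential (step-down) Bonferroni procedure can be sketched as follows. The p-values below are hypothetical stand-ins for the six pre/post comparisons, chosen to mirror the reported pattern, not the exact values from this study:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down Bonferroni procedure.

    The smallest p-value is tested at alpha/m, the next smallest at
    alpha/(m-1), and so on. Testing stops at the first non-significant
    result, and every later hypothesis is retained.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: once one test fails, stop rejecting
    return reject

# Hypothetical p-values in the spirit of the six pre/post-test comparisons.
ps = [0.0001, 0.01, 0.002, 0.0001, 0.0001, 0.29]
print(holm_bonferroni(ps))  # five rejections; the 0.29 test is retained
```

This matches the paper's outcome: every test that was originally significant survives the correction, while the non-significant control-condition test does not.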
In each of these conditions, participants’ perceived

Table 2
Exam performance mean and standard error scores for conditions.

Conditions                 Mean score   Standard error
Notes on paper^a           10.64        0.42
Notes on laptop^a          10.21        0.41
No media use control^a     9.72         0.41
Distracted combo^b         7.37         0.45
Mixed distraction^b        7.07         0.47
Facebook distracted^c      5.16         0.42

Note. Conditions with different superscripts differ at p < 0.01. The ANCOVA test of significance, controlling for age, year in college, and ACT score, was F(5, 178) = 26.43, p < 0.001, partial eta sq. = 0.43. Although significantly correlated with exam performance in the bivariate correlation matrix, neither age nor year in college was significant when controlled for in the ANCOVA model.


abilities to effectively multitask decreased. This demonstrates support for H4 and reveals a pattern of results for RQ2 that will be elaborated upon in the next section.

5. Discussion

Numerous technologies have been introduced to the classroom setting with the idea that they will be used as learning tools. However, there is often little empirical evidence that these technological tools actually improve learning or exam performance (e.g., Downs et al., 2011). The body of research examining multitasking behaviors overwhelmingly suggests that, if used incorrectly, multimedia technologies can impede the learning process (Mayer & Moreno, 1998, 2002; Moreno & Mayer, 1999). The results from this study were consistent with what was predicted from applying ACT-R and threaded cognition, as well as multimedia learning theory. Consistent with the growing body of work examining social-media sites as classroom distractions (Burak, 2012; Junco, 2012; Paul et al., 2012), this study found that those in the unstructured (distracted) conditions performed worse on a multiple-choice exam than those in the structured (non-distracted) control, paper-and-pencil, and laptop note-taking groups. These results can be explained in part by ACT-R and threaded cognition, as well as multimedia learning theory. First, in terms of multitasking, watching the classroom video required both perceptual and cognitive resources in order to process the information being delivered. At the same time, the Facebook task required motor, cognitive, and perceptual resources. At the very least, two resource pools, the perceptual and the cognitive, were competing for resources and creating bottlenecks. Although participants could switch back and forth at will, the Facebook chat group distraction meant that they missed information presented in the video. The logical conclusion is that multitasking in this experimental, distracted environment did not help exam performance.
With respect to multimedia processing, multimedia learning theory's modality principle (Mayer & Moreno, 2002) states that the best way to present media for learning is through both video and audio channels. The redundancy principle holds that presenting information through more than two modes can overwhelm processing bandwidth and create conditions favorable to cognitive overload. Taken as a whole, if one considers the distracted experimental environments that students were a part of, it is easy to see how processing resources would be overwhelmed. The visual system was busy trying to process animations on the projection screen while simultaneously trying to process pictures and text on the laptop screen. In short, splitting the resources of the visual processing system at least three different ways led to cognitive overload. Although the two theories posit different mechanisms through which resources are split, the end result is clear: unstructured, multimedia-multitasking environments are not good if the end goal is to improve learning. That is not to say that all use of technology by students during class time is inherently bad. The findings presented here demonstrate that, under the right structured conditions, technology can be used as a tool to augment learning processes. Those in the structured notes on laptop condition did not differ significantly from the notes on paper condition. In fact, these two conditions achieved the highest scores in terms of learning. Even though taking notes and watching a video can be considered multitasking, the focus directed toward processing information and transcribing it in an environment free of additional technological distractions makes a difference.
When the results from the no media use control condition are taken into consideration, simply sitting and paying attention without taking notes is better for learning outcomes than an unstructured environment full of additional technological distractions. Even among the three top-scoring conditions (notes on paper, notes on laptop, and no media use control, respectively), there is the issue of statistical versus practical significance. Students will be interested to know that this study found notes on paper to be the most effective in relation to exam performance. These differences, although not statistically significant, are practically meaningful. Converted to percentage scores, those in the notes on paper condition earned a 71%, those in the notes on laptop condition earned a 68%, and those in the no media use control condition earned a 65%. By many grading rubrics, these percentages are the difference between passing a class and failing it and needing to retake it. Another finding from this experiment focuses on attitudes toward multitasking. In each of the six conditions, participants were given a pre-test asking them to assess their abilities to effectively multitask. In the pre-test, averages for each condition were within rounding error of the midpoint of the scale. However, in the post-test, participants in all of the conditions except the no media use control condition reported significantly less confidence in their ability to effectively multitask. What makes this finding interesting is that none of the participants ever found out their individual scores on the multiple-choice assessment. Even without this knowledge, they still felt less confident in their abilities to multitask efficiently. At first blush, social desirability may partly explain the decrease in self-assessment. However, this did not hold true for the no media use control group, ruling out social desirability as the sole explanatory mechanism.
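The grade-percentage conversion quoted above (71%, 68%, and 65%) is simple arithmetic over the 15-item exam; as a quick check:

```python
# Condition means from Table 2, converted to percentages of the
# 15 possible exam points.
means = {
    "notes on paper": 10.64,
    "notes on laptop": 10.21,
    "no media use control": 9.72,
}
for condition, m in means.items():
    print(f"{condition}: {round(m / 15 * 100)}%")
# notes on paper: 71%
# notes on laptop: 68%
# no media use control: 65%
```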
A more likely explanation comes from participants' experience in the multitasking environment. It is possible that those in the other five conditions all came to believe that multitasking would be negatively related to exam performance. The only condition that did not have this experience was the control group. In the absence of any multitasking behavior (structured or unstructured), participants' pre- and post-test assessments of their abilities to effectively multitask in the no media use control condition did not change. This finding makes two important contributions to the multitasking literature. First, it underscores the importance of experiential learning. When participants had the experience of being in a multitasking environment and then taking a test, their attitudes toward their abilities to efficiently multitask changed. Knowing this can be helpful for those who coordinate and plan student orientations and teach freshman seminar courses. If students can be placed in an environment


where they are likely to have difficulty (as in the Facebook distracted condition), then they can experience for themselves what effect multimedia multitasking has on learning. Creating an experience that resembles a traditional classroom puts the student in situ and allows students to understand from first-hand experience what happens when they try to multitask in an unstructured environment. We stress the importance of learning through experience based in part on research suggesting that experiential and active learning can be transformative. As other empirical work has demonstrated success when using technology to generate a desired attitude change (e.g., Downs, 2014), this type of experiential learning is likely to have a stronger impact than simply telling students that they "shouldn't multitask in class". Second, this finding also reveals that students do not appear to distinguish between productive and nonproductive multitasking. Even the participants in the top two scoring note-taking conditions thought significantly less of their abilities to efficiently multitask. Since media technology is often incorporated into course curricula, it may be wise to teach new students about the problems associated with extracurricular media usage. In such a course, students should be allowed to experience a simulated, unstructured multimedia-multitasking environment where they are required to engage in concurrent multitasking. Giving students this experience early in their college careers, through orientations and new-student seminars, could help them develop classroom technology habits that maximize their ability to succeed academically.

5.1. Limitations & future research

While this study provides some answers as to how these different environments affected learning outcomes, there are still areas that need to be explored. Although this work uses ACT-R and threaded cognition, in addition to multimedia learning theory, as plausible explanatory mechanisms for why learning is compromised in multimedia-multitasking environments, it cannot determine which theory is more or less responsible for this pattern. Future work will need to manipulate different environments with different tasks to determine which theory has more explanatory power. In addition, this study is limited in that it only examined how the use of laptop computers affected learning outcomes. Different technologies, with different form factors and input mechanisms, are also popular in today's classrooms. A recent study demonstrated that multitasking with various technologies was associated with lower grade point averages and inefficient study habits (Bellur, Nowak, & Hull, 2015). Future research would benefit from further examining how laptops, tablets, cell phones, MP3 players, and other technologies affect the learning process inside and outside the classroom. Further, this study used Facebook groups to create a distracted environment. Would other social media outlets have provided as strong a distraction? Anecdotally, there has been a trend of opposition to technology use, with many faculty banning technologies in their classrooms due to concerns about their negative impact on student learning (e.g., Young, 2006). Some believe that technology can facilitate learning, others believe that it is detrimental to the learning process, and still others ignore the issue altogether (Burak, 2012). As with most complicated issues, the answer is not likely to be as simple as technology being all good or all bad.
Instead, a thoughtful examination of both theory and practice should seek to identify the boundary conditions under which learning outcomes and objectives are likely to succeed or fail in the presence or absence of technological aids. Taking the findings of this study into consideration, the results demonstrate that unstructured, distracted, multimedia-multitasking environments are detrimental to learning. Structure, with or without technology, is important for creating an environment that maximizes learning capabilities. Knowing this can help instructors and professors establish guidelines for acceptable technology use and make classroom learning an easier act to follow.

Appendix 1

List of Questions Used by Research Assistant in Distracted Conditions

1. What is your favorite place to eat and what is your favorite dish there?
2. List (5) things that you need to purchase from the store.
3. Now look at everyone else's lists and pick (3) things that you want from their lists that are not already on your own.
4. What is your dream car, what color would it be, and why is it your dream car?
5. List three hobbies you have and how old you were when you started.
6. Who is your favorite band/singer/songwriter and why?
7. If you could travel anywhere in the world, where would you go and why?
8. What is your favorite movie quote, who said it, and what movie was it from?
9. Go to www.foodnetwork.com and find a dish that you would want to eat. Please share this with the group.
10. What do you think of the video so far?
11. Go to www.theonion.com and find a headline that interests you. Please share this article with the group and tell us why it's interesting.
12. What is your major and minor and what led you to choose it?


References

Anderson, J. R. (2007). How can the human mind occur in the physical universe? New York, NY: Oxford University Press.
Bellur, S., Nowak, K. L., & Hull, K. S. (2015). Make it our time: In class multitaskers have lower academic performance. Computers in Human Behavior, 53, 63–70. http://dx.doi.org/10.1016/j.chb.2015.06.027.
Berk, R. A. (2010). How do you leverage the latest technologies, including Web 2.0 tools, in your classroom? International Journal of Technology in Teaching and Learning, 6(1), 1–13.
Borst, J. P., Taatgen, N. A., & van Rijn, H. (2010). The problem state: A cognitive bottleneck in multitasking. Journal of Experimental Psychology, 36, 363–382. http://dx.doi.org/10.1037/a0018106.
Bowman, L. L., Levine, L. E., Waite, B. M., & Gendron, M. (2010). Can students really multitask? An experimental study of instant messaging while reading. Computers & Education, 54, 927–931. http://dx.doi.org/10.1016/j.compedu.2009.09.024.
Broadbent, D. E. (1958). Perception and communication. London: Pergamon.
Burak, L. (2012). Multitasking in the university classroom. International Journal for the Scholarship of Teaching and Learning, 6, 1–12.
Carrier, L. M., Cheever, N. A., Rosen, L. D., Benitez, S., & Chang, J. (2009). Multitasking across generations: Multitasking choices and difficulty ratings in three generations of Americans. Computers in Human Behavior, 25, 483–489. http://dx.doi.org/10.1016/j.chb.2008.10.012.
Downs, E. (2014). Driving home the message: Using a video game simulator to steer attitudes away from distracted driving. International Journal of Gaming and Computer-Mediated Simulations, 6, 50–63. http://dx.doi.org/10.4018/ijgcms.2014010104.
Downs, E., Boyson, A. R., Alley, H., & Bloom, N. R. (2011). iPedagogy: Using multimedia learning theory to iDentify best practices for MP3 player use in higher education. Journal of Applied Communication Research, 39, 184–200. http://dx.doi.org/10.1080/00909882.2011.556137.
Duncan, J. (1980). The demonstration of capacity limitation. Cognitive Psychology, 12, 75–96. http://dx.doi.org/10.1016/0010-0285(80)90004-3.
Frand, J. L. (2000). The information-age mindset: Changes in students and implications for higher education. Educause Review, 35(5), 15–24.
Fried, C. B. (2008). In-class laptop use and its effects on student learning. Computers & Education, 50, 906–914. http://dx.doi.org/10.1016/j.compedu.2006.09.006.
Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in learning environments. Journal of Computing in Higher Education, 15, 46–64. http://dx.doi.org/10.1007/BF02940852.
Hick, W. E. (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1), 11–26. http://dx.doi.org/10.1080/17470215208416600.
Junco, R. (2012). In-class multitasking and academic performance. Computers in Human Behavior, 28, 2236–2243. http://dx.doi.org/10.1016/j.chb.2012.06.031.
Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall.
Kay, R. H., & Lauricella, S. (2011). Unstructured vs. structured use of laptops in higher education. Journal of Information Technology Education, 10, 33–42.
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. New Jersey: Prentice-Hall.
Kolb, A. Y., & Kolb, D. A. (2005). Learning styles and learning spaces: Enhancing experiential learning in higher education. Academy of Management Learning and Education, 4, 193–212. http://dx.doi.org/10.5465/AMLE.2005.17268566.
Kraushaar, J. M., & Novak, D. C. (2010). Examining the affects of student multitasking with laptops during the lecture. Journal of Information Systems Education, 21, 241–251.
Mayer, R. E., & Anderson, R. B. (1991). Animations need narrations: An experimental test of a dual-coding hypothesis. Journal of Educational Psychology, 83, 484–490. http://dx.doi.org/10.1037//0022-0663.83.4.484.
Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. Journal of Educational Psychology, 90, 312–320. http://dx.doi.org/10.1037//0022-0663.90.2.312.
Mayer, R. E., & Moreno, R. (2002). Aids to computer-based multimedia learning. Learning and Instruction, 12, 107–119. http://dx.doi.org/10.1016/S0959-4752(01)00018-4.
Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91, 358–368. http://dx.doi.org/10.1037//0022-0663.91.2.358.
Moreno, R., & Mayer, R. E. (2000). A coherence effect in multimedia learning: The case for minimizing irrelevant sounds in the design of multimedia instructional messages. Journal of Educational Psychology, 92, 117–125. http://dx.doi.org/10.1037//0022-0663.92.1.117.
Paivio, A. (1990). Mental representations: A dual coding approach. New York: Oxford University Press.
Paul, J. A., Baker, H. M., & Cochran, J. D. (2012). Effect of online social networking on student academic performance. Computers in Human Behavior, 28, 2117–2127. http://dx.doi.org/10.1016/j.chb.2012.06.016.
Ragan, E. D., Jennings, S. R., Massey, J. D., & Doolittle, P. E. (2014). Unregulated use of laptops over time in large lecture classes. Computers & Education, 78, 78–86. http://dx.doi.org/10.1016/j.compedu.2014.05.002.
Ravizza, S. M., Hambrick, D. Z., & Fenn, K. M. (2014). Non-academic internet use in the classroom is negatively related to classroom learning regardless of intellectual ability. Computers & Education, 78, 109–114. http://dx.doi.org/10.1016/j.compedu.2014.05.007.
Roberts, D. F., Foehr, U. G., & Rideout, V. (2005). Generation M: Media in the lives of 8–18-year-olds. The Henry J. Kaiser Family Foundation. Retrieved on 12.14.14 from http://www.kff.org/entmedia/entmedia030905pkg.cfm.
Salvucci, D. D. (2005). A multitasking general executive for compound continuous tasks. Cognitive Science, 29, 457–492. http://dx.doi.org/10.1207/s15516709cog0000_19.
Salvucci, D. D., & Taatgen, N. A. (2008). Threaded cognition: An integrated theory of concurrent multitasking. Psychological Review, 115, 101–130. http://dx.doi.org/10.1037/0033-295X.115.1.101.
Salvucci, D. D., Taatgen, N. A., & Borst, J. P. (2009). Toward a unified theory of the multitasking continuum: From concurrent performance to task switching, interruption, and resumption. In S. Greenberg, S. E. Hudson, K. Hinckley, M. R. Morris, & D. R. Olsen, Jr. (Eds.), Human factors in computing systems: CHI 2009 conference proceedings (pp. 1819–1828). New York, NY: ACM Press.
Sana, F., Weston, T., & Cepeda, N. J. (2013). Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education, 62, 24–31. http://dx.doi.org/10.1016/j.compedu.2012.10.003.
Tapscott, D. (1998). Growing up digital: The rise of the net generation. New York, NY: McGraw-Hill.
Weaver, B. E., & Nilson, L. B. (2005). Laptops in class: What are they good for? What can you do with them? New Directions in Teaching and Learning, 101, 3–13. http://dx.doi.org/10.1002/tl.181.
Wood, E., Zivcakova, L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58, 365–374. http://dx.doi.org/10.1016/j.compedu.2011.08.029.
Young, J. R. (2006). The fight for classroom attention: Professor vs. laptop. Chronicle of Higher Education, 2, A27–A29.