Computers & Education 59 (2012) 449–465
Trainee teachers' mental effort in learning spreadsheet through self-instructional module based on Cognitive Load Theory

Zaidatun Tasir a,*, Ong Chiek Pin b

a Department of Educational Multimedia, Faculty of Education, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor, Malaysia
b Malaysian Teacher Education Institution, Batu Pahat, Malaysia
Article history: Received 31 May 2011; received in revised form 13 January 2012; accepted 13 January 2012.

Keywords: Adult learning; Applications in subject areas; Improving classroom teaching; Media in education; Teaching/learning strategies

Abstract

A printed module should consist of media elements, namely text and pictures, that are self-instructional and cater to the needs of the user. However, the typical platform of such visualization frequently overloads the limited working memory, causing split attention and redundancy effects. The purpose of this study is to design and develop a printed self-instructional module for learning based on Cognitive Load Theory, in which media elements are presented with minimal cognitive demands through an action- and task-oriented approach. Using a modified Solomon Group design with 113 trainee teachers selected by purposive sampling, the effectiveness of the developed module was compared with that of the conventional module. Independent sample t-tests comparing completion times between the Control Group, which worked on the conventional module, and Group 2, which worked on the developed module, showed statistically significant differences in the pre- and post-activities. Group 2 also reported lower cognitive load scores on the rating scale, and graphical plots based on the computational approach showed higher instructional efficiency. The results thus indicate that trainees working on the developed printed module performed faster and better while investing lower mental effort.

© 2012 Elsevier Ltd. All rights reserved.
1. Introduction

The introduction of Information and Communication Technology (ICT) as one of the main components in subjects related to educational technology for trainee teachers in the Malaysian Teacher Education Institution is seen as a move towards producing a technologically capable workforce. During their course, these trainees are exposed to concepts of ICT, computer systems, and networks and communications with the aim of equipping them with knowledge and skills. Consequently, they will be able to use and apply these concepts in schools so as to keep pace with the rapidly changing developments and innovations of ICT.

A case study conducted by Goh (2007) on the integration of ICT in teaching and learning noted that although the implementation of the Smart School Project in Malaysian schools was already in its sixth year, the take-up was low, evidenced by its mediocre density and low frequency of use. Two-thirds of the estimated 50 percent of users implemented ICT in teaching and learning only once or twice a semester, and fewer than 20 percent did so weekly. Most informants admitted they were weak in ICT skills.

A developing country like Malaysia is not the only country that faces problems with technology usage among teachers. Laffey (2004) states that current in-service teachers are not adequately prepared for teaching technology, implying an inadequacy in the technology training of teachers. Shuldman's (2004) study on the state of technology integration revealed that creating a better understanding of how technology can be applied in normal classrooms may be part of the solution to the problem of technology integration. Korthagen, Loughran, and Russell (2006) report that approaches to teacher education are critiqued for their limited relationship to student teachers' needs and for their meagre impact on practice. Of equal importance, teacher education programs have not adequately modelled the use of technology in methods courses (Adamy & Boulmetis, 2006) or incorporated effective approaches to technology integration into single technology courses (Brown & Warschauer, 2006). An effective pedagogy in which theory and practice are linked is needed. Appropriate teaching and learning approaches and strategies are important for the achievement of learning objectives, which should involve the active engagement of trainees in meaningful experiences.
Courses on ICT in the curriculum were often based on the premise that if computer labs, software and user support are provided, users will learn. The existing printed self-instructional module used in the institutions does provide procedural explanations and stresses the contents to be learned, but these conditions merely consist of following programmed exercises, where it is the dazzle of the technology rather than the quality of learning that earns the trainees the grade. Long sentence structures and the need to search for and match text with window elements cause difficulties in carrying out assigned activities. Rovai (2002) and Mohamed Ally (2005) stress that learning is influenced by the instructional strategy in learning materials, which must therefore be properly designed using proven learning theories and instructional design principles.

Cognitive Load Theory (Sweller, 1988) is both a theory of cognition and learning and an instructional design framework which describes how the architecture of cognition has specific implications for the design of instruction. According to Sweller (2008), this theory sees the user as active and inquiring, constructing meaning from experience. From the constructivist perspective, learning occurs most effectively when students are engaged in authentic tasks that relate to meaningful contexts. The idea is to minimize and control conditions that create unwanted cognitive load in learning materials. To meet these requirements in reorganizing information, it is feasible to design and develop a printed self-instructional module that would work better than the conventional one by making the best use of media elements, namely text and pictures, in presenting its contents in a way that minimizes cognitive load yet favours an action- and task-oriented approach. This research was therefore undertaken to make the software applications easier and faster for the trainees to learn and comprehend.

2. Theoretical background

2.1. Self-instructional module

The most commonly used media in printed modules are text and pictures. Text consists of alphanumeric characters that may be displayed in any format, whereas pictures include diagrams, drawings or graphics. As Bodemer and Ploetzner (2002) put it, 'multiple representations can complement each other, resulting in a more complete representation of an application domain than a single source of information does'. Gyselinck, Ehrlich, Cornoldi, de Beni, and Dubois (2005) mention that this approach encourages learning, in that users are able to compensate for any weaknesses associated with one particular strategy and representation by switching to another. Further, there may be considerable advantages in learning with complementary processes because, by exploiting combinations of representations, users are less likely to be limited by the strengths and weaknesses of any single one.

According to the cognitive theory of multimedia learning (Mayer, 1997), a user possesses a visual and a verbal information processing system which is capable of selecting incoming verbal and visual information. The information is then organized to create a verbally- and a visually-based model of the to-be-explained system, after which the user integrates the items from the two models. Based on the theory, Mayer and Moreno (2003) tested and derived principles of instructional design for fostering multimedia learning.
The multiple representation principle states that it is better to present an explanation in words and pictures than solely in words, as users are able to build connections between the corresponding items in the verbal- and visual-based models. This proved to be the case when students reading texts containing captioned illustrations placed near the corresponding words generated more useful solutions on a problem-solving transfer test than students who simply read the text alone (Mayer, 1989; Mayer & Gallini, 1990).

The contiguity principle holds that it is better to present corresponding words and pictures simultaneously rather than separately when giving a multimedia explanation, as both media must be in working memory at the same time in order to facilitate the construction of referential links. Studies revealed that students reading texts with captioned illustrations placed near the text generated more useful solutions on problem-solving transfer questions than students reading the same text with illustrations presented on separate pages (Mayer, 1989; Mayer, Steinhoff, Bower, & Mars, 1995).

The coherence principle posits that multimedia explanations are better understood when they include few rather than many extraneous words and pictures, as a shorter presentation primes the user to select relevant information and organize it productively. Students reading a passage explaining the steps of how lightning forms, along with corresponding illustrations, were found to generate more useful solutions in a problem-solving transfer test than did students reading the same information with additional details inserted in the materials (Harp & Mayer, 1997; Mayer, Bove, Bryman, Mars, & Tapangco, 1996). Nevertheless, Seufert and Brunken (2004) state that the effectiveness of coherence formation depends on the learner's expertise and that it is only helpful if it stays within the limits of working memory capacity.

Guiding a user's attention by signalling and highlighting relevant corresponding items relieves cognitive capacity of unnecessary processing and activates the user to deal with the learning content actively and elaborate on it. Studies by Kalyuga, Chandler, and Sweller (1998) indicated that guiding users' attention by colour coding resulted in better learning outcomes. Likewise, Plotzner, Bodemer, and Feuerlein (2001), who used dynamic linking in their computer-based statistics trainer VISUALSTAT, showed that users could observe changes in the graph after changing the values of the corresponding formula. These studies have presented clear and compelling guidelines for the instructional design of multimedia messages based upon a cognitive theory of how users process multimedia information.

The typical presentation platform of visualization in printed modules is the screen capture, a display of the computer screen, combined with text (van der Meij & Gellevij, 1998). Horton (1993) justifies that screen captures offer visual relief on pages full of text and states that when screen captures are used appropriately and placed wisely, they make procedures easier to learn and quicker to follow. 'When' a screen capture is to be used depends on the type of information it supports (Gellevij, 2002; van der Meij, 1998). For example, a target screen supports goal setting, the outcome of an action step signifies action information, and a specific button offers coordinative information. 'Which' screen captures to use often relates to the use of context. Full screen captures show the complete interface.
Partial captures show little context (for example, the active window) or no context (for example, a single button). And 'where' to use screen captures concerns the appropriate place of a screen capture on a page: screen captures can be placed to the left of the text, to the right, or in the flow of the text. Horton (1993) and Houghton-Alico (1985) found that 76% of the pages in computer instructional documentation showed one or more screen captures. Nearly all screen captures were pictures of the whole screen, or pictures of one or two windows of the program. Only 21% showed a single object such as a button or icon, and 23% of the pages had a picture in the form of a schema, flowchart, diagram or table. The reason claimed for this dominance is that screen captures can help the user acquire the necessary knowledge and skills because they can convey some things better than other illustrations (Gellevij, 2002).
The challenge is how instructional designers can effectively embed instructionally sound practices into the interface design of screen captures so that user needs are sufficiently addressed from the learner's perspective. When learning with multiple representations, the user has to understand the semantics of each representation, understand which parts of the domain are represented, relate the representations to each other by finding the corresponding variables, and translate between them through reasoning. According to van der Meij and De Jong (2003), research has revealed that this can be a demanding process if these environments are not carefully designed. A self-instructional printed module should enable users to self-learn (Lazonder, 2000) and has the potential to motivate them due to its flexibility; it should be adaptable to the majority of users as well as emphasizing individualized learning so that users achieve mastery of learning (Shaharom Noordin, 1997).

2.2. Cognitive Load Theory

Design principles derived from Cognitive Load Theory serve as guides for designing instruction that balances the media elements with the cognitive demands of users. The theory holds that instructional materials, in order to be effective, must take particular account of the limits of working memory and the interaction between working memory and the schemata held in long-term memory. It is the structure of that instructional information that must be carefully considered (Paas, van Gog, & Sweller, 2010), and the actual needs will only be met if both the learning and interface design dimensions are addressed with equal attention (Sweller, 2008). The theory identifies three sources of cognitive load (Sweller, 1994, 1999; Sweller, van Merrienboer, & Paas, 1998), namely intrinsic cognitive load, extraneous cognitive load, and germane cognitive load.

Intrinsic load is the cognitive load imposed by the inherent difficulty of the instructional materials. It is not associated with the quantity of the instructional materials but rather with their complexity, that is, the number of information elements that must be processed simultaneously in working memory and the extent of their interactivity (Sweller et al., 1998). Text information that can be processed sequentially without reference to other elements is said to be low in element interactivity and is therefore easily understood. However, if users are required to process many elements concurrently, then simultaneous processing of all essential elements must occur before understanding commences – a situation of high intrinsic cognitive load. As a consequence, high element interactivity material is difficult to understand. The speed of this process depends on the exposure time: the shorter the time, the more quickly the complexity of the instructional material is revealed.

Extraneous load is the cognitive load imposed by the format and the manner in which information is presented, and by the requirements of instructional activities that are not necessarily relevant to the learning goals (Sweller et al., 1998). The need to integrate text information with any pictorial portion referenced by the text, and/or the need to coordinate text presented in various modes (visually in text and aurally in narration), interferes with schema acquisition and automation and consequently wastes limited mental resources.
Poorly designed instructional materials impose extraneous cognitive load, as much cognitive capacity is needed to establish coherence, draining cognitive resources needed to achieve the learning objective. Numerous extraneous sources of cognitive load may result in longer time to learn, poorer learning outcomes, or both.

Germane load is the cognitive load imposed by instructional activities directed towards the instructional goal, as it contributes directly to the process of schema construction and automation (Paas, Tuovinen, Tabbers, & van Gerven, 2003; Sweller et al., 1998). The focus of this process is not the overall complexity of the instructional material but the complexity of the currently revealed material. Variability in instructional procedures, such as asking questions about examples, requires the thoughtful engagement of users, which can yield better schema construction and transfer of learning. Users may end up with a much broader repertoire of applicable knowledge and/or skills. Under such circumstances, it may be necessary to increase users' motivation and interest, and to encourage them to employ learning processes that yield germane cognitive load.

Intrinsic and extraneous cognitive load reflect objective design factors, while germane cognitive load reflects the subjective experience of the user. These cognitive loads can work in tandem. For example, if learning is improved by an instructional design that reduces extraneous cognitive load, the improvement may have occurred because the additional working memory capacity freed by the reduction in extraneous cognitive load has been allocated to germane cognitive load. As a consequence of learning through schema acquisition and automation, intrinsic cognitive load is reduced. Learning should be powered by a reinforcement process in that, as users acquire new schemata, the gap between the presented instructional material and the user's knowledge decreases. Therefore, when designing content and activities, instructors need to consider the cognitive load the learning material itself poses (intrinsic cognitive load), the working memory resources devoted to learning the material or performing the learning task (germane cognitive load) and the cognitive processing required for activities not related to learning (extraneous cognitive load).

When learning a computer program, a user must not only process the instructional material but must also attend to the keyboard, mouse and computer screen at the same time. Cognitive Load Theory indicates that this situation poses two potential risks: split attention and redundancy (Chandler & Sweller, 1991; Sweller, 1994; Sweller & Chandler, 1994). Split attention occurs when users are presented with multiple sources of information in isolation, such as the simultaneous use of text and pictures in many instructional materials, which have to be mentally integrated before they can be understood (Chandler & Sweller, 1991; Sweller, 2002). Conventionally a picture is presented with the associated text above, below or at the side, requiring the user to hold one or the other in working memory while switching back and forth between the text and the picture. In such cases, cognitive load is increased. But when the text is overlaid on the picture to produce a single source of instructional information, the user's attention is not split and cognitive load is reduced (Cooper, 1998).
Clark and Mayer (2003) also note that constructing a coherent structure in which text is placed next to corresponding pictures will not overtax working memory. The important point is that users should not have to use scarce working memory resources to search for information, as successive or spatially separated information increases extraneous cognitive load. Other experiments providing evidence for this effect were conducted in Euclidean geometry (Tarmizi & Sweller, 1988); coordinate geometry and numerical control programming (Sweller, Chandler, Tierney, & Cooper, 1990); kinematics problems (Ward & Sweller, 1990); the writing of scientific reports (Chandler & Sweller, 1992, 1996); a primary school paper folding task (Bobis, Sweller, & Cooper, 1993); and an arithmetic word problem-solving task (Mwangi & Sweller, 1998).
The redundancy effect occurs when there is an overlap or replication of meaning and information between what the textual and visual representations are trying to convey (Clark & Mayer, 2003). These materials become extraneous, as redundant information can actually increase cognitive load due to unnecessary processing in working memory. The general message of the redundancy effect is that less is often more when it comes to learning; printed modules that have minimal text and ample diagrams are a good example. The effect was demonstrated by Chandler and Sweller (1991, 1996), Sweller and Chandler (1994), Cerpa, Chandler, and Sweller (1996), Kalyuga, Chandler, and Sweller (1999), Mayer, Heiser, and Lonn (2001) and Craig, Gholson, and Driscoll (2002).

In teaching complex cognitive and psychomotor skills, Pociask and Morrison (2008) examined the effectiveness of instructional materials designed to control split attention by integrating text and pictures and to remove redundancy in the text and pictures of the original materials. They found that such strategies could minimize extraneous cognitive load, indicated by higher test scores in the modified instructional group. Contents containing split attention and redundancy features represent a discernible difference between groups in terms of the number of discrete elements that users are required to maintain and simultaneously manipulate in working memory. This reduction of extraneous cognitive load would have allowed for an increase in germane cognitive load and thus for the further development of appropriate schemata.

Cognitive Load Theory has shown that by reducing redundancy and integrating information sources, a user's ability to process a task is heightened, as all problem-based searching makes heavy demands on working memory. The fundamental tenet of the theory is that the quality of instructional design will be raised if greater consideration is given to the role and limitations of working memory (Watson, 2005). Learning must be situated in real, meaningful learning tasks, taking into account how to minimize cognitive load for the purposes of usability and learnability, an approach from the minimalist perspective (Carroll, 1990). This can be achieved through the interface which a user accesses and interacts with. According to Gellevij, Van der Meij, De Jong, and Pieters (2002), visual cues on screen captures in procedural sections offer better support for learning than the conventional ones. The layout is such that instructions are juxtaposed side-by-side with the screen captures to avoid split attention, and only one instance of every instruction is given to avoid redundancy. It is important that the user's attention is focused on elements in the environment that are relevant to learning and that irrelevant elements are filtered out. Instruction is made more efficient when the amount of reading is minimized, generally by breaking sentences into chunks. Cueing devices, such as arrows or the bolding of text (Clark & Harrelson, 2002), help support attention. Therefore the need to mentally integrate disparate sources of information may be eliminated, mental load reduced and learning enhanced.

3. Research objectives

Since the priority of instruction is to instruct and benefit users by giving specific guidance on how to manipulate information cognitively in ways that are consistent with a learning goal, the module should strive to facilitate the learning process, that is, to make learning easier and faster.
Directing cognitive resources to the most central or important aspects of the instructional materials is a matter of attention control. This research aims to achieve the following objectives:

(1) To design and develop a printed self-instructional module based on Cognitive Load Theory (Sweller, 1988).
(2) To determine the effectiveness of the developed printed self-instructional module with respect to:
   a. Trainees' performance during evaluation, based on time of completion of spreadsheet activities.
   b. The amount of mental effort that the trainees spent on the activities.
4. Method

4.1. Design

In this study we used a quasi-experimental pre–post test design. A quasi-experimental design was chosen due to the existence of intact groups from the subject option combination system mandated by the Teacher Education Division; randomly assigning trainees to groups would disrupt classroom learning (Creswell, 2005). To deal with a potential testing threat, where the act of taking an activity affects how learners perform on a post-activity, we used the Solomon Group Design as a base, in which the intact groups were randomly assigned to a Control Group and two experimental groups. The modified Solomon Group Design is shown in Fig. 1, where O refers to the measurement of the dependent variable, X represents exposure to treatment, and R represents random assignment of intact groups. In this study, the second experimental group, working on the developed printed self-instructional module, took the pre-activity, and its result was compared with the result of the pre-activity done by the Control Group working on the conventional module. A threat to internal validity does not arise, since the pre- and post-activities in this study were different activities differing in complexity and the effects of the pre-activity will not influence the results of the post-activities. Fig. 2 shows the research procedure of this study.

4.2. Sample

The sample of the present study consisted of 6 classes comprising 113 fourth-semester trainees from the Teacher Education Institution.
Fig. 1. Solomon group design (modified): the three intact groups (Control Group, Group 1 and Group 2) were each randomly assigned (R) and measured before and after the treatment period (O), with the treatment X applied to the two experimental groups.
Fig. 2. Research procedure.
They were aged from 20 to 23 years and had been selected to undergo the teacher training program based on their ability to fulfil the minimum requirements set during interviews by the Ministry of Education. They were placed in their respective classes based on the subject options relevant to their performance and achievements in curricular and co-curricular activities in schools. The intact classes were randomly assigned to three groups – a control group and two experimental groups. Thirty-nine trainees were in the Control Group, thirty-eight in Group 1 and thirty-six in Group 2. Participation in the experiment was considered a course requirement, as the spreadsheet application is a compulsory ICT component. In this study, we did not take into account age differences, gender, interest, levels of expertise or the learning styles of trainees that might affect the reported results, as learning techniques were set as the scope in determining their abilities and performances. Entry requirements and socio-cultural background were also not included in the study.

4.3. Instruments

Two modules, namely the conventional and the developed printed module, were compared. They had the same contents (Table 1) but differed in the way the contents were presented and in the layout of the instructional design in terms of split attention and redundancy. Fig. 3 shows an example of the design differences. A pre- and post-activity pilot study was carried out using the equivalent-forms method to ensure the reliability of the modules. The time taken during the activities was analyzed using Pearson correlation, and the result showed it to be highly reliable at 0.843, indicating that the two modules related positively in the consistency of the measured scores. For content validation, the developed module was also shown to and checked by the head of the Educational Technology department and the head of the ICT unit, both having more than 15 years of lecturing experience in ICT in the institute.

A time sheet in the form given in Appendix A was provided for trainees to record the training time they took to complete the lessons in the modules. Training time ran from the time they started working on the computer to the time they completed the activity; trainees recorded the time according to the computer clock. Cognitive load measurement was administered at the end of every activity through self-report of invested mental effort on the nine-point symmetrical scale ranging from 1 (very, very low mental effort) to 9 (very, very high mental effort) introduced by Paas (1992) (Appendix B). Its reliability was tested for the purposes of this study using Cronbach's alpha; the coefficient obtained from the cognitive load measurements showed strong evidence of reliability at 0.865. This scale is assumed to be the most practical way for trainees to inspect their own cognitive processes and report the experienced difficulty as well as the amount of mental effort invested. Being simple yet easily applicable in a natural setting, it increases the ecological validity of the results.
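As an illustration of the two reliability checks just described, the minimal Python sketch below computes a Pearson correlation between equivalent forms and Cronbach's alpha for a set of repeated ratings. The arrays are hypothetical stand-ins, not the study's data, and the sketch assumes the standard formulas rather than reproducing the authors' actual analysis.

```python
# Sketch of the two reliability checks described above, on hypothetical data.
import numpy as np
from scipy import stats

# Equivalent-forms reliability: completion times (minutes) of the same trainees
# on the two parallel pilot activities.
times_form_a = np.array([24.0, 27.5, 22.0, 30.0, 26.5, 21.0, 28.0, 25.5])
times_form_b = np.array([23.0, 28.0, 21.5, 31.0, 25.0, 22.5, 27.0, 26.0])
r, p = stats.pearsonr(times_form_a, times_form_b)
print(f"Pearson correlation between forms: r = {r:.3f} (p = {p:.3f})")

# Internal consistency of the 9-point mental-effort ratings collected over the
# five activities: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).
ratings = np.array([  # rows = trainees, columns = activities (hypothetical)
    [4, 5, 4, 3, 4],
    [6, 6, 5, 4, 5],
    [3, 4, 3, 3, 3],
    [5, 5, 4, 4, 4],
    [7, 6, 6, 5, 5],
])
k = ratings.shape[1]
item_vars = ratings.var(axis=0, ddof=1).sum()
total_var = ratings.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.3f}")
```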
5. Design and development of the self-instructional module

In this study, the design and development phases were combined to reflect an integrated approach to developing the printed self-instructional module. The ADDIE model (Branson et al., 1975) was used as the instructional design model for developing the module, as it has served as the source for other commonly used instructional design models, such as the Morrison, Ross and Kemp model, the Seels and Glasgow model, and the Dick and Carey model, evidenced by the presence of the five main ADDIE phases – Analyze, Design, Develop, Implement and Evaluate (Kruse, 2004). According to Molenda, Pershing, and Reigeluth (1996), ADDIE is systematic, systemic, reliable, iterative and empirical. Strickland (2006) mentions that through the iterative process the verification of the design documents saves time and money by catching problems while they are still easy to fix; weaknesses or defects can be identified early and rectified before the instructional material is completed. This enabled us to improve the product to ensure that it is educationally innovative and effective. The model's flexibility and dynamic nature thus allowed us to receive continual feedback while the self-instructional module was being developed, with the aim of helping trainees achieve the desired learning objectives.

In designing the computer module, the main theme was to keep the cognitive load low for usability and learnability purposes, with the intention of easier and faster comprehension. This can be achieved through the interface which a user accesses and becomes acquainted with.
Table 1. Contents in Excel lessons.

Lesson                              Contents
Excel 1 (pre-activity)              Create worksheets:
                                    a. Enter data
                                    b. Use sum, average, max & min functions
                                    c. Use currency format
                                    d. Create borders
                                    e. Save and print
Excel 2 (post-activity 1)           Create mark sheets:
                                    a. Adjust column width
                                    b. Align text
                                    c. Use countif, vlookup & ranking functions
                                    d. Create tables
Excel 3 (post-activity 2)           Create charts:
                                    a. Use chart type
                                    b. Insert data labels
End product 1 (post-activity 3)(a)  Create budget spreadsheet and charts
End product 2 (post-activity 4)(a)  Create mark sheet and charts

(a) Module on request only.
Fig. 3. A designed difference of the modules.
As the learning environment of the developed printed module contained the same contents as the conventional module, the information provided by the representations was the same for all conditions; only the way it was presented differed. The following sections justify the guidelines for the developed module from an instructional perspective based on Cognitive Load Theory, focusing on two effects of interest, namely split attention and redundancy, and on how the application of interface design guidelines enhances the learning process.

5.1. Split attention effects

The split attention effect indicates that if users have to integrate different sources of information, the learning process will be inhibited (Sweller, 1988). Previous work has indicated that when students are required to learn how to use a computer application from a module which is unintelligible without referring to the material on the screen and vice versa, the need to frequently interact with both the equipment and the instructional module can interfere with learning (Chandler & Sweller, 1991; Sweller & Chandler, 1994). This is because they must hold segments of information in working memory while searching for the matching entities on the computer screen, thus causing extraneous cognitive load. One alternative is to represent the equipment (computer screen and computer keyboard) diagrammatically in the form of screen captures so as to reduce the amount of switching (Chandler & Sweller, 1991; Sweller et al., 1990). Screen captures can contribute to a better exploitation of the user's visual channel (Sweller, 1988), combining written text with pictures. This is in line with the minimalist approach, whereby user support on task performance leads to deeper comprehension and retention of the educational material when working with a computer. It might be speculated that users could experience a splitting effect when working on a computer and a module at the same time, but if a module is self-contained then split attention will not occur, as users can understand it fully even without referring to the computer screen.

5.1.1. Positioning

When relevant contextual information is inserted at appropriate locations in the picture, users have all the required information available, obviating the need for mental integration. This precisely-needed information of text and picture is placed on the same sheet and in
a two-column layout in which the instructions and screen captures are presented side-by-side in a left-to-right reading order. The right-hand column presentation of the screen fits the users' reading direction (van Merrienboer, 2000). Users have direct access to the instructions, which show a flow of procedures, and can follow them step-by-step; they are not required to move between many different screens to see the instructions. Thus they can make a logical connection, avoid the split attention effect and retain information better. As Schnotz (1993) put it, users who read with minimal switches made more text-based inferences and recalled more facts, while those reading with frequent switches were found to have difficulty integrating and recalling text.

5.1.2. Cueing

The presence of signalling techniques, including the use of hairlines and circles, draws the users' attention to the task-relevant elements of the window. Cues such as these often serve as feedforward, marking the areas that become important in the actions that follow, thus avoiding split attention. In this case the hairlines indicate that the screen capture is also meant to support object location and, being coloured, they show contrast and draw attention. Having a greater portion of total cognitive load associated with the targeted contents thus makes instructional materials more effective, as learning proceeds by directing cognitive resources towards relevant activities (Cooper, 1990).

5.2. Redundancy effects

The redundancy effect indicates that if self-contained sources of information present the same information in multiple ways, unnecessary processing is imposed on working memory (Sweller, 1988). Learning should be a constructive process, meaning that the user should link new knowledge to existing schemata. Text which merely re-describes the visual – both being intelligible in isolation – requires, with the redundant text, working memory resources that do not alter long-term memory. Therefore, several measures can be taken to ensure that the instruction given to the user does not cause extraneous working memory load, while still resulting in germane effects on collaboration.

5.2.1. Text

The information presented is short and in a conversational style, acting as a restriction on the range of possible irrelevant actions. According to Mayer (2004), people learn better from a conversational style than from a formal style, as it provides them an immediate opportunity to act and get started quickly. Such support makes actions that are not relevant to the instructed activities unavailable. Conceptual information used in instructions must be operational, not technical or theoretical, and must tell users only what they need to know for the task at hand. If an instruction is supported too well, it may prevent users from developing a deeper understanding of the actions that are required and thus prevent transfer to other situations. Therefore, the text does not contain more than seven chunks or segments of instructions, with words or phrases compatible with the users' context so that they can comprehend immediately without much further effort. An immediate goal state is included for each step, acting as a sub-goal that lets users know what they are pursuing. Different font types and sizes, and the use of white space, may attract their attention, enabling them to distinguish important information.
These not only make sense to them, but also give them an overview of the tasks involved in using the spreadsheet application. Besides, presenting explanatory information may help trainees direct and interpret their experiences from prior knowledge. An example of the recommended text with an immediate goal state is shown in Fig. 4.
Fig. 4. Distinguish action steps with goal state.
5.2.2. Coverage

On the basis of Cognitive Load Theory, screen captures in the module must be identical to what appears on the computer screen (full coverage) in order to prevent redundancy effects (Sweller et al., 1998). Screen captures which are not identical impose extraneous cognitive load on users, as they are required to locate and coordinate information between the screen and the module, a situation which is actually redundant and causes split attention. The advantage of full screen captures lies in the depicted display of a complete user interface (van der Meij & Gellevij, 1998). Full screen images reduce visual search and speed up object location by showing objects in context. In addition, the series of visuals can convey a sense of continuity and may help users form a mental model of the program. These form the basis of how the content coverage is designed.

5.2.3. Size

The size of each screen capture takes about 50% of a page to ensure legibility. A legible screen capture clearly displays all the elements and objects of the window, for example toolbars, menus and icons. Clarity helps users verify screen states during task execution. Users may alleviate their fears by checking and comparing screen states to see if their inputs have been correctly processed. Comparisons are direct and involve no mental transformation of information, which might impose redundant retention. With this, users are able to detect errors early and speed up task execution, and these positive feedbacks serve as reinforcement in motivating them. Research has shown that legible screen captures help users reduce the time needed to verify information and increase the ability to correct errors (Gellevij, 2002).

6. Data analysis

To determine the effectiveness of the developed printed self-instructional module on trainees' performance during evaluation, based on time of completion of the application activities, we analyzed the data using inferential statistics. Independent sample t-tests were conducted to determine whether the developed module brought about positive changes in mean time scores when compared to the conventional module. The first t-test was done at the pre-activity stage, where Group 2 worked on the assigned developed module and the Control Group on the conventional module, followed by four post-activities over a period of four weeks.

To determine the effectiveness of the developed printed self-instructional module with regard to the amount of mental effort spent by the trainees on the activities, we analyzed the data using descriptive statistics. This was done by listing the rating values in rank order from 1 (very, very low) to 9 (very, very high), with the frequency indicating the number of trainees marking each value after the pre- and post-activities on the conventional and developed modules. The results were presented graphically using bar graphs to aid understanding and visual interpretation, with frequencies (the number of trainees) on the vertical axis and the amount of invested mental effort on the horizontal axis.

To visualize whether changes in the dependent variables were due to the design differences between the conventional and developed modules, the mean scores of time taken in minutes, the amount of mental effort spent and the performance scores in percentage for all the groups at the pre-activity and the four post-activity stages were tabulated and depicted as mean score plots for comparison. For each activity, the performance scores were based on the designed and developed rubrics.
Appendix C shows an example of the rubric for the pre-activity. To interpret cognitive load further, we used the computational approach of Paas and van Merriënboer (1993) to determine the mental efficiency of the two modules' instructional conditions. An instructional condition is considered more efficient if users' task performance is higher than would be expected from the invested mental effort, and vice versa. Generally, if users in an instructional condition show high performance with low invested mental effort during task execution, this indicates high instructional efficiency, whereas low performance with high invested mental effort indicates low instructional efficiency. Instructional efficiency was calculated by standardizing each trainee's scores for mental effort and performance into z scores and applying the following formula:
E = \frac{Z_{\text{Performance}} - Z_{\text{Mental Effort}}}{\sqrt{2}}
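The following minimal Python sketch illustrates, on hypothetical score arrays, how this standardization and efficiency computation could be carried out; it is an illustration of the Paas and van Merriënboer (1993) formula, not the authors' analysis script, and the exact standardization baseline (e.g. across both conditions) would follow the study's procedure.

```python
# Standardize mental effort and performance to z scores and apply
# E = (Z_performance - Z_mental_effort) / sqrt(2).
# The scores below are hypothetical placeholders, not the study's data.
import numpy as np

mental_effort = np.array([5, 4, 6, 3, 5, 4, 7, 3])          # 9-point ratings
performance   = np.array([85, 92, 78, 96, 88, 90, 75, 97])  # rubric scores (%)

def zscore(x):
    """Standardize against the sample mean and sample standard deviation."""
    return (x - x.mean()) / x.std(ddof=1)

efficiency = (zscore(performance) - zscore(mental_effort)) / np.sqrt(2)  # one E per trainee

print(np.round(efficiency, 2))
print("Mean instructional efficiency:", round(efficiency.mean(), 2))
```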
A graphical display of the E values was then created by plotting the two instructional groups' mean standardized mental effort and performance scores on a coordinate system, making it easier to visualize the relative positions of the mental efficiency of the two instructional conditions.

7. Results

7.1. Analysis of performance based on time

Independent sample t-tests were conducted on the Control Group and Group 2 at the pre-activity and the four post-activity stages to compare the mean time scores (minutes) and determine whether the developed module brought positive changes in trainees' performance compared to the conventional module. The results of these t-tests and the effect sizes (Cohen's d) are shown in Table 2. The t-test performed at the pre-activity stage showed a statistically significant difference in the mean scores of trainees' performance, t(73) = 5.748, p = .000, indicating that Group 2 working on the developed module took a shorter time to complete the activity (M = 18.1, SD = 5.0, n = 36) than the Control Group working on the conventional module (M = 26.8, SD = 7.8, n = 39). The effect size estimate was d = 1.4, indicating a very large effect. Although the pre-activity data was analyzed to check baseline equivalence, significant differences were also found in the post-activities, which differed in task complexity. Group 2 showed significantly lower mean time scores in Post-Activity 1, t(73) = 5.864, p = .000, d = 1.4; Post-Activity 2, t(73) = 5.012, p = .000, d = 1.2; Post-Activity 3, t(73) = 5.003, p = .000, d = 1.2; and Post-Activity 4, t(73) = 6.698, p = .000, d = 1.6.
Table 2. Independent sample t-tests at the different activity stages (mean completion time in minutes).

Activity          Group            Mean     t       Sig. (2-tailed)   Cohen's d
Pre-activity      Control group    26.77    5.748   0.000             1.35
                  Group 2          18.06
Post-activity 1   Control group    47.31    5.864   0.000             1.37
                  Group 2          34.22
Post-activity 2   Control group    29.54    5.012   0.000             1.17
                  Group 2          20.14
Post-activity 3   Control group    33.38    5.003   0.000             1.17
                  Group 2          24.64
Post-activity 4   Control group    40.77    6.698   0.000             1.57
                  Group 2          29.25

N control group = 39, N group 2 = 36; α = 0.05.
The effect sizes of the treatment in all activities were very large (d = 1.2–1.6), since Cohen (1988) considers any effect size of 0.8 or above to be large. Therefore, it can be concluded that the developed printed module has a significant positive effect and might be more effective than the conventional module in terms of trainees' performance based on time of completion of the activities.
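For readers who wish to reproduce this kind of comparison, the minimal Python sketch below runs an independent-samples t-test and computes Cohen's d from the pooled standard deviation, as reported in Table 2. The completion-time arrays are hypothetical placeholders rather than the study's recorded data.

```python
# Independent-samples t-test plus Cohen's d (pooled SD), as in Table 2.
# The completion times below are hypothetical stand-ins for the recorded data.
import numpy as np
from scipy import stats

control_times = np.array([25.0, 31.5, 22.0, 35.0, 27.5, 24.0, 29.0, 33.5])  # conventional module
group2_times  = np.array([17.0, 20.5, 15.5, 23.0, 18.0, 16.5, 21.0, 19.5])  # developed module

t_stat, p_value = stats.ttest_ind(control_times, group2_times)  # assumes equal variances

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation of the two samples."""
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    return (a.mean() - b.mean()) / pooled_sd

df = len(control_times) + len(group2_times) - 2
print(f"t({df}) = {t_stat:.3f}, p = {p_value:.3f}")
print(f"Cohen's d = {cohens_d(control_times, group2_times):.2f}")
```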
7.2. Analysis of mental effort

Descriptive statistics in the form of a frequency table (Table 3) quantify the number of trainees from the Control Group and Group 2, working on the printed conventional module (CM) and the developed module (DM) respectively, who reported each amount of invested mental effort at the pre-activity and post-activity stages. Low invested mental effort is indicated by a lower value on the rating scale and high invested mental effort by a higher value. Following Paas's original work, a nine-point scale is used most frequently, but differences are also seen in the number of scale units used (de Jong, 2010). Studies by Kablan and Erden (2008), Kalyuga et al. (1999), Moreno and Valdez (2005), Ngu, Mit, Shahbodin, and Tuovinen (2009), and Pollock, Chandler, and Sweller (2002) used a seven-point scale, while Camp, Paas, Rikers, and van Merrienboer (2001), Huk (2006) and Salden, Paas, Broers, and van Merrienboer (2004) used a five-point scale. In this study, the data was grouped into a five-point scale for easier interpretation, with the adverbs 'rather' and 'very', indicating 'to a greater extent' (Soanes & Stevenson, 2003), grouped under the preferred adjective. The data is depicted as bar graphs in Fig. 5.

The difference in proportions of invested mental effort showed that working on the developed module produced fewer cases at the higher values of the scale, a moderate number of 'neither low nor high' cases, and more cases at the lower values of the scale. These results indicate lower invested mental effort when working on the developed module compared with the conventional module, even though none of the trainees marked the very high values on the scale.

The mean scores of time taken (minutes), amount of mental effort spent and performance scores (percentage) for all the groups at the pre-activity and the four post-activity stages were tabulated in Table 4 and depicted in Fig. 6 to help visualize whether the changes in these dependent variables were due to the design differences between the modules. From the mean scores and plots, Group 2, which worked on the developed module, took the shortest time, spent the least mental effort to complete the activities and scored the highest when compared with the other two experimental groups. A comparison between the Control Group and Group 1 showed that both groups fared almost equally when working on the conventional module at the pre-activity stage; nevertheless, the latter took a shorter time and invested less effort while achieving higher performance scores after switching to the developed module in the subsequent post-activities. These effects show that working with the developed module required less effort, resulted in shorter completion times and produced better scores than working on the conventional module, and imply that the changes in these dependent variables (time, mental effort and performance scores) were due to the design differences between the modules.
Table 3. Mental effort frequency table (number of trainees marking each value).

Value                       Pre-activity    Post-activity 1   Post-activity 2   Post-activity 3(a)   Post-activity 4(a)
                            CM      DM      CM      DM        CM      DM        CM      DM           CM      DM
1  Very, very low           –       –       –       –         –       –         1       –            –       –
2  Very low                 –       5       –       1         1       3         7       7            3       5
3  Low                      –       8       1       6         4       10        4       10           8       12
4  Rather low               17      7       7       12        7       10        9       9            9       5
5  Neither low nor high     8       11      14      13        14      4         11      6            13      8
6  Rather high              12      5       15      3         10      6         7       4            6       6
7  High                     2       –       2       1         3       3         –       –            –       –
8  Very high                –       –       –       –         –       –         –       –            –       –
9  Very, very high          –       –       –       –         –       –         –       –            –       –

CM = conventional module (Control Group, N = 39); DM = developed module (Group 2, N = 36). For interpretation, values were grouped into five categories: very low (1–2), low (3–4), moderate (5), high (6–7) and very high (8–9).
(a) Post-activity 3 and post-activity 4 – without module (given upon request only).
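A minimal sketch of the grouping described above, assuming a hypothetical set of nine-point ratings, is given below; it simply collapses the ratings into the five categories used in Table 3 and tallies the frequencies for each instructional condition.

```python
# Collapse 9-point Paas ratings into five categories and tally frequencies
# per instructional condition. The ratings below are hypothetical.
from collections import Counter

GROUPS = {1: "very low", 2: "very low", 3: "low", 4: "low", 5: "moderate",
          6: "high", 7: "high", 8: "very high", 9: "very high"}

ratings = {
    "CM": [4, 5, 6, 4, 5, 6, 4, 7, 5, 6],   # conventional module condition
    "DM": [2, 3, 4, 3, 5, 2, 4, 3, 5, 4],   # developed module condition
}

for condition, values in ratings.items():
    counts = Counter(GROUPS[v] for v in values)
    ordered = {label: counts.get(label, 0)
               for label in ["very low", "low", "moderate", "high", "very high"]}
    print(condition, ordered)
```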
Fig. 5. Frequency of invested mental effort.
Table 4. Mean scores and standard deviations of time, mental effort and performance score for the experimental groups.

Dependent variable       Activity          Control group      Group 1            Group 2
                                           Mean     SD        Mean     SD        Mean     SD
Time (min)               Pre-activity      26.77    7.76      22.24    5.74      18.06    4.94
                         Post-activity 1   47.31    11.08     34.63    9.85      34.22    7.82
                         Post-activity 2   29.54    9.51      25.29    12.32     20.14    6.26
                         Post-activity 3   33.38    9.14      20.74    7.04      24.64    5.36
                         Post-activity 4   40.77    8.76      35.13    7.46      29.25    5.67
Mental effort            Pre-activity      4.97     0.99      5.00     1.27      4.08     1.30
                         Post-activity 1   5.26     0.91      4.37     1.56      4.39     1.05
                         Post-activity 2   4.95     1.20      3.58     1.39      4.25     1.46
                         Post-activity 3   4.10     1.45      3.16     1.22      3.72     1.28
                         Post-activity 4   4.28     1.19      3.76     1.34      3.94     1.35
Performance score (%)    Pre-activity      93.00    8.88      93.30    15.21     99.41    1.59
                         Post-activity 1   92.33    8.96      94.02    9.58      96.67    7.78
                         Post-activity 2   90.43    10.01     92.46    9.82      97.41    4.30
                         Post-activity 3   96.68    3.49      97.67    2.62      98.86    2.00
                         Post-activity 4   95.76    4.22      96.20    5.18      97.76    4.59

N control group = 39, N group 1 = 38, N group 2 = 36.
Fig. 6. Mean scores of the three groups.
Indeed, the shorter time spent accomplishing the tasks with lower mental effort yet higher performance scores by Group 1 showed that the effect was due to the independent variable, and one is likely to reason that the developed module was more effective than the conventional module. A further comparison of the power of the instructional conditions of the two modules was carried out via a computational approach using standardized z scores for mental effort (M) and task performance (P), in which the instructional conditions could be compared not only in terms of their effectiveness but also their efficiency. An instructional efficiency score, E, was computed for each trainee using the formula given below, with the condition means shown in Table 5.

Table 5. Mental effort, performance and instructional efficiency means for the two instructional conditions of the modules.
                                Mental effort (M)   Performance (P)   Instructional efficiency (E)
Pre-activity        CM                0.22               −0.20               −0.30
                    DM               −0.49                0.40                0.63
Post-activity 1     CM                0.46               −0.22               −0.48
                    DM               −0.23                0.27                0.35
Post-activity 2     CM                0.47               −0.33               −0.56
                    DM               −0.01                0.46                0.33
Post-activity 3     CM                0.32               −0.35               −0.48
                    DM                0.04                0.40                0.25
Post-activity 4     CM                0.22               −0.17               −0.27
                    DM               −0.04                0.26                0.21

CM = conventional module, DM = developed module.
Fig. 7. Graphic presentation of instructional efficiency.
E = \frac{P - M}{\sqrt{2}}
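A Fig. 7-style efficiency plot can be produced along the following lines. This is a sketch with illustrative placeholder coordinates (not the published group means), showing the squares/dots convention and the E = 0 reference line, which under the formula above is the line P = M.

```python
# Sketch of a Fig. 7-style efficiency plot: group means of standardized mental
# effort (x) and performance (y), with the E = 0 diagonal for reference.
# The coordinates below are illustrative placeholders, not the published means.
import matplotlib.pyplot as plt
import numpy as np

cm_points = [(0.2, -0.2), (0.5, -0.2), (0.5, -0.3)]   # conventional module (squares)
dm_points = [(-0.5, 0.4), (-0.2, 0.3), (0.0, 0.5)]    # developed module (dots)

fig, ax = plt.subplots()
ax.scatter(*zip(*cm_points), marker="s", label="CM")
ax.scatter(*zip(*dm_points), marker="o", label="DM")

# E = (P - M) / sqrt(2) equals zero along the line P = M.
line = np.linspace(-1, 1, 2)
ax.plot(line, line, linestyle="--", label="E = 0")

ax.axhline(0, linewidth=0.5)
ax.axvline(0, linewidth=0.5)
ax.set_xlabel("Mental effort (z score)")
ax.set_ylabel("Performance (z score)")
ax.legend()
plt.show()
```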
Subsequently, the group means shown in Table 5 were plotted on a coordinate system of performance–effort axes to visualize the combined effects of the two measures, as illustrated in Fig. 7. The efficiency scores for the DM instructional condition are represented by dots, and those for the CM condition by squares. Dots on the upper left of the coordinate system indicate high efficiency, resulting from higher performance with less invested mental effort when working on the developed printed module. Squares on the lower right indicate low efficiency, due to lower performance with more invested mental effort when working on the conventional printed module. Hence, working on the developed printed module proved to be more effective than working on the conventional printed module, in that it involved the expenditure of relatively low mental effort.

8. Discussion

The results indicate that Group 2, working on the developed module, showed significantly lower mean time scores than the Control Group for all activities, even though the activities differed in task complexity (Table 2). Since the materials for the two groups had similar contents, this suggests that decisions regarding the presentation of information are far from trivial and should be approached with as much care as decisions about the content itself. The use of highlighting and pointers, such as headings and signalling techniques, facilitates the more unconscious work of sensory memory. Formulating the right way to design instructional contents, by establishing clear learning objectives, using chunks of information and emphasizing task-oriented steps, will improve trainees' conscious attention and consequently the effectiveness of working memory. As trainees received a large amount of information and information processing capacities are limited, their cognitive effort needs to be managed effectively during content design.

The significant positive effect shown by Group 2 provides evidence that presenting information in a way that keeps cognitive load within the limitations of working memory improves performance in terms of completion time. Once we exceed those limits, our thinking and learning processes bog down (Sweller & Sweller, 2006). This is consistent with the predictions of Cognitive Load Theory, which call for ensuring that working memory is not overloaded with information (Paas et al., 2010). Methods used should reduce information that is not related to the content type. For example, embedding text in pictures reduces the effort required to retain information presented in the text and then locate it in the picture, avoiding redundant presentation and ensuring that the information is coherent. The brief and concise written materials presented in the developed printed module show the facilitative effects of the minimalist approach as defined by Carroll (1990), evidenced by the shorter time spent working on the activities. Cutting pre-doing activities down to the essentials has indeed led to less learning time and hence faster performance. Therefore, the developed printed module proved to be more effective than the conventional printed module in terms of trainees' performance based on time spent on the spreadsheet activities.
As for the findings on mental effort, it is interesting to note that at all stages of the activities, Group 2 displayed higher frequencies of low mental effort, indicated by lower values on the rating scale, and lower frequencies of high mental effort, indicated by higher values, compared with the Control Group. A lower scale value indicating lower mental effort shows that working on the developed printed module was perceived as less difficult than working on the conventional printed module. This could be due to the three measures taken to ensure that the design of the study was consistent with the instructional principles of Cognitive Load Theory and the minimalist approach.

First, in order to decrease intrinsic cognitive load, the load inherent to the learning material, text information was chunked into segments to minimize users' feelings of being overwhelmed by the content. This is in line with the coherence principle of Mayer's multimedia learning theory concerning the beneficial effects of artificially reducing element interactivity. Second, extraneous cognitive load, the load caused by the instructional features of the learning material, was minimized by redesigning the presentation of the software application so that texts were physically relocated in close proximity to the related screen entities, resulting in an integrated format, or in other words, no split attention and no redundancy. This is consistent with previous extensive research on cognitive load in multimedia presentations (Cerpa et al., 1996; Chandler & Sweller, 1991, 1996; Craig et al., 2002; Kalyuga et al., 1999; Mayer et al., 2001; Mayer & Moreno, 2003; Sweller, 1999; Sweller & Chandler, 1994) and in the teaching of complex orthopaedic physical therapy skills (Pociask & Morrison, 2008), where users reported differences in perceived cognitive load resulting from the manipulation of extraneous load variables.
Third, to increase germane cognitive load, the load relevant to learning and schema construction, learning tasks within the same task class were presented in various contexts as hands-on activities, most prominently as end products in Post-Activities 3 and 4. This variability of problem situations, as promoted by Paas and van Merriënboer (1994), encourages schema construction by increasing the ability to distinguish and identify the relevant elements. The interplay among these cognitive loads might have made the knowledge and skills easier for trainees in Group 2 to comprehend.

The effects of the developed printed module were examined further through the mean scores and standard deviations of time taken, mental effort spent and performance scores for all groups (Table 4). Fig. 6 helped visualize whether changes in these dependent variables were due to the effect of design differences. From the plots, Group 1 took a shorter time, spent less mental effort and scored higher than the Control Group in all the post-activities, showing that Group 1 responded strongly to the experimental treatment, that is, to the independent variable, the developed printed module. Moreover, Group 2, which worked solely on the developed printed module, also spent less effort, completed the tasks in a shorter time and produced better scores than the Control Group working on the conventional printed module. This indicates that the independent variable has an effect that is separate from its interaction with pretesting. It is therefore plausible that the changes in these dependent variables (time, mental effort and performance scores) are due to the design differences, and reasonable to conclude that the developed printed module helps trainees comprehend tasks more effectively, completing them faster yet with higher scores.

From the descriptive data, it was predicted that the developed printed module would show a higher level of instructional efficiency than the conventional printed module across the activities. This was confirmed when the computational approach, which uses standardized z scores for mental effort and task performance (Paas et al., 2003), was applied to each group's means for all activities and graphed on the performance–effort coordinate system in Fig. 7. The distances of the resulting points from the E = 0 line show that high efficiency occurred in the developed printed module condition, in which higher performance was obtained with relatively lower mental effort, whereas low efficiency occurred in the conventional printed module condition, indicated by lower performance with higher mental effort expenditure. This finding suggests that instruction using the developed printed module was more efficient for the acquisition of knowledge and skills than the conventional printed module, in that trainees working on the developed printed module performed significantly better on the activities with fewer errors and less mental effort. The success of Cognitive Load Theory in developing strategies and techniques that result in lower mental effort, reduced training time and enhanced performance is therefore of paramount importance to education.
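For readers who wish to reproduce this kind of computation, the sketch below applies the standard instructional efficiency formula associated with Paas and van Merriënboer (1993) and Paas et al. (2003), E = (zP - zE) / sqrt(2), where zP and zE are the standardized performance and mental effort scores; points with E above zero fall on the high-efficiency side of the E = 0 line. The group labels and numbers are hypothetical placeholders, not the values reported in this study.

```python
# Minimal sketch of the instructional efficiency computation described in the text:
# E = (zP - zE) / sqrt(2), using standardized performance (zP) and mental effort (zE).
# All values below are hypothetical placeholders, not the study's data.
import math
from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of values: z = (x - mean) / sd."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical group means: (performance score, mental effort rating)
groups = {
    "DM pre-activity":  (85.0, 3.2),
    "DM post-activity": (88.0, 3.0),
    "CM pre-activity":  (72.0, 5.8),
    "CM post-activity": (70.0, 6.1),
}

perf_z = z_scores([p for p, _ in groups.values()])
effort_z = z_scores([e for _, e in groups.values()])

for (name, _), zp, ze in zip(groups.items(), perf_z, effort_z):
    e_score = (zp - ze) / math.sqrt(2)
    label = "high efficiency" if e_score > 0 else "low efficiency"
    print(f"{name}: E = {e_score:+.2f} ({label})")
```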
Trainees taught using materials generated from Cognitive Load Theory are better able to deal with unusual or unforeseen situations, as attested by their superior performance.

9. Conclusion

The experiments have shown that modifying the design of the instructions according to Cognitive Load Theory and the minimalist approach improves the effectiveness of computer documentation, making learning faster and easier. When the developed and conventional modules were compared, independent sample t-tests showed that Group 2, working on the developed printed module, took a shorter time to complete all the activities than the Control Group working on the conventional printed module. Physically integrating text and pictures into a unitary entity eliminates the need to split attention between the two sources of instructional information. Such integration is appropriate when both the textual and the pictorial presentation are necessary to impart meaning, and it also avoids the redundancy effect. In this regard, presenting text and picture side by side in a two-column layout with a left-to-right reading order on the same spreadsheet, so as to produce a single source of instructional information (Cooper, 1998), removes the need to switch back and forth to obtain the full meaning, while minimal text with ample diagrams is a good example of addressing the detrimental effects of redundant information. As a result, presenting information so that cognitive load falls within the limits of working memory improved performance in terms of time to completion.

However, manipulating extraneous variables to avoid split attention and redundancy is not sufficient for instructional conditions to be effective. Reducing intrinsic cognitive load has also become an important goal, one that stresses authentic, real-life learning tasks as the driving force for learning. Managing element interactivity by chunking learning tasks frees up cognitive resources that can be allocated to processes that induce germane cognitive load, which results in higher scores at a lower expense of mental effort. The instructional condition might thus have made the tasks easier to comprehend and therefore faster to accomplish. In this case, the calculation of mental efficiency scores has enriched our knowledge of the effect that an instructional format based on Cognitive Load Theory and the minimalist approach might have on the learning process, thus contributing to the existing evaluation of screen designs.

The study has shown that effective instructional design depends on sensitivity to cognitive load which, in turn, depends on an understanding of how the human mind works. Methods that manipulate extraneous cognitive load, minimize the intrinsic complexity of learning tasks and encourage trainees to invest germane load in learning have demonstrated the practicability of Cognitive Load Theory for the design of courses and e-learning programs characterized by a high level of interactivity. In turn, these may also serve as guides and references for educational technologists and other relevant parties in designing and developing instructional modules.
In addition to this practical value, the study contributes to theoretical progress in cognitive science: a well-supported theory of how people learn from words and pictures, tested through a paradigm of controlled experimental designs, is capable of making a real difference to educational practice.

10. Limitations and future studies

The findings of this study suggest several directions for future research. First, further consideration may be necessary with regard to the measurement of cognitive load. In this study, trainees' perceived cognitive load was measured by a single item asking how much mental effort they invested in the tasks they received. This item does not distinguish the three sources of cognitive load (intrinsic, extraneous and germane). For example, if two trainees from different instructional conditions both indicated a rating of 8 (very high) on the scale, it would be impossible to determine with the instrument used in this study whether they perceived the same type of cognitive load.
Multiple items relating to each source of cognitive load may therefore be necessary in future research.

Second, age, gender, learning style and level of expertise were not accounted for in a pre-planned way, so the generalizability of the results with regard to these factors is limited. Comparisons examining age, gender, learning style and level of expertise should therefore be conducted with a larger sample, more equal group sizes and participants drawn from the other teacher education institutions in the country, perhaps by building levels of these factors into the experimental design, to obtain more reliable results.

Finally, Carroll (1990) laid a good foundation for the development of better computer documentation, and the developed printed module has blazed a new trail by taking the users' information needs, rather than the program's functions, as the starting point for design. Whether the module will yield similar results in terms of time of completion, performance scores and invested mental effort in online, computer-supported learning settings for teaching software skills seems worth examining.
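As an illustration of the multi-item measurement suggested above, the sketch below scores a hypothetical instrument with separate subscales for intrinsic, extraneous and germane load. The items, subscale structure and ratings are invented for illustration; they are not part of the instrument used in this study.

```python
# Hypothetical sketch of scoring a multi-item cognitive load instrument, with
# separate subscales for intrinsic, extraneous and germane load (invented items
# and ratings; not the single-item measure used in this study).
from statistics import mean

# One trainee's ratings on a 9-point scale, grouped by (hypothetical) subscale
ratings = {
    "intrinsic":  [6, 7, 6],   # e.g. "The topic itself was complex."
    "extraneous": [3, 2, 3],   # e.g. "The layout made information hard to find."
    "germane":    [7, 8, 7],   # e.g. "I invested effort in understanding the steps."
}

subscale_scores = {source: mean(items) for source, items in ratings.items()}
for source, score in subscale_scores.items():
    print(f"{source:<10} mean rating: {score:.1f}")
```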
Acknowledgement

The authors would like to thank Universiti Teknologi Malaysia (UTM) and the Ministry of Higher Education (MoHE) Malaysia for their support in making this project possible. This work was supported by the Research University Grant [Q.J130000.7131.00H17] initiated by UTM and MoHE.
Appendix A. Time sheet
Name: ..................... Group: .....................

You are going to prepare a mark sheet using Excel. Please open the Excel program on your computer and perform the following tasks. If you cannot perform some of the tasks below, please do not get frustrated. Try to complete only the tasks that you can. Please write down the time you start and the time you complete the tasks in the blanks provided.

Time Start: ____________________ Time End: ___________________

Appendix B. Cognitive load rating scale
Name: ..................... Group: .....................

Indicate the amount of mental effort that you spent on the practice task you have just finished. Circle the corresponding number below.

1  Very, very low mental effort
2  Very low
3  Low
4  Rather low
5  Neither low nor high
6  Rather high
7  High
8  Very high
9  Very, very high mental effort

Appendix C. Scoring rubrics

Name: ..................... Group: .....................
a. Activity 1: Create worksheets.

No.  Skills checklist                                        Marks  Score
     Insert title
 1     Type title                                              1
 2     Insert column titles (1 column @ 1 mark)                4
 3     Insert row titles (1 row @ 1 mark)                      4
     Enter numerical data
 4     Type data                                               3
     Use sum formula
 5     Select cell B6                                          1
 6     Perform autosum                                         1
 7     Copy formula                                            1
     Use average, max and min formulas
 8     Add row titles (1 row @ 1 mark)                         3
 9     Select cell B7                                          1
10     Use average formula                                     1
11     Copy formula                                            1
12     Select cell B8                                          1
13     Use max formula                                         1
14     Copy formula                                            1
15     Select cell B9                                          1
16     Use min function                                        1
17     Copy formula                                            1
     Use currency format
18     Select cells B3:E9                                      1
19     Set 2 decimal places                                    1
20     Type transmittal costs in Ringgit Malaysia only         1
     Create borders
21     Select A2:E9                                            1
22     Create all borders                                      1
     Save worksheet
23     Save as hotel transmittal costs                         1
     Total                                                    33
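To make the rubric's spreadsheet operations concrete, the sketch below reproduces the kinds of steps the checklist assesses (title and labels, SUM/AVERAGE/MAX/MIN formulas in rows 6–9, currency formatting for B3:E9, borders for A2:E9, and saving the workbook). It assumes the third-party openpyxl package, and the sample data and file name are invented; it illustrates the assessed operations rather than the study's actual materials.

```python
# Illustrative sketch (assumes the openpyxl package): the kinds of spreadsheet
# operations the Activity 1 rubric checks, applied to invented sample data.
from openpyxl import Workbook
from openpyxl.styles import Border, Side

wb = Workbook()
ws = wb.active

ws["A1"] = "Hotel Transmittal Costs"                              # insert and type the title
ws.append(["Item", "Hotel A", "Hotel B", "Hotel C", "Hotel D"])   # column titles (row 2)
ws.append(["Accommodation", 120.0, 150.0, 90.0, 110.0])           # row titles + data (rows 3-5)
ws.append(["Meals", 45.0, 60.0, 30.0, 40.0])
ws.append(["Transport", 25.0, 20.0, 35.0, 30.0])

# Sum, average, max and min formulas (rows 6-9), copied across columns B-E
for col in "BCDE":
    ws[f"{col}6"] = f"=SUM({col}3:{col}5)"
    ws[f"{col}7"] = f"=AVERAGE({col}3:{col}5)"
    ws[f"{col}8"] = f"=MAX({col}3:{col}5)"
    ws[f"{col}9"] = f"=MIN({col}3:{col}5)"
ws["A6"] = "Total"
ws["A7"] = "Average"
ws["A8"] = "Maximum"
ws["A9"] = "Minimum"

# Currency format (Ringgit Malaysia) with 2 decimal places for B3:E9
for row in ws["B3:E9"]:
    for cell in row:
        cell.number_format = '"RM" #,##0.00'

# All borders for A2:E9
thin = Side(style="thin")
all_borders = Border(left=thin, right=thin, top=thin, bottom=thin)
for row in ws["A2:E9"]:
    for cell in row:
        cell.border = all_borders

wb.save("hotel_transmittal_costs.xlsx")                           # save the worksheet
```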
References

Adamy, P., & Boulmetis, J. (2006). The impact of modeling technology integration on preservice teachers' technology confidence. Journal of Computing in Higher Education, 17(2), 100–120.
Bobis, J., Sweller, J., & Cooper, M. (1993). Cognitive load effects in a primary school geometry task. Learning and Instruction, 3(1), 1–21.
Bodemer, D., & Ploetzner, R. (2002). Encouraging the active integration of information during learning with multiple and interactive representations. Paper presented at the international workshop on dynamic visualizations and learning, Tubingen.
Branson, R. K., Rayner, G. T., Cox, J. L., Furman, J. P., King, F. J., & Hannum, W. H. (1975). Interservice procedures for instructional systems development (5 vols.) (TRADOC Pam 350-30 NAVEDTRA 106A). Ft. Monroe, VA: U.S. Army Training and Doctrine Command, August 1975. (NTIS No. ADA 019 486 through ADA 019 490).
Brown, D., & Warschauer, M. (2006). From the university to the elementary classroom: students' experiences in learning to integrate technology in instruction. Journal of Technology and Teacher Education, 14(3), 599–621.
Camp, G., Paas, F., Rikers, R. M. J. P., & van Merrienboer, J. J. G. (2001). Dynamic problem selection in air traffic control training: a comparison between performance, mental effort and mental efficiency. Computers in Human Behavior, 17, 575–595.
Carroll, J. M. (1990). The Nurnberg Funnel: Designing minimalist instruction for practical computer skill. Cambridge: MIT Press.
Cerpa, N., Chandler, P., & Sweller, J. (1996). Some conditions under which integrated computer-based training software can facilitate learning. Journal of Educational Computing Research, 15, 345–367.
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332.
Chandler, P., & Sweller, J. (1992). The split attention effect as a factor in the design of instruction. Journal of Educational Psychology, 62, 233–246.
Chandler, P., & Sweller, J. (1996). Cognitive load while learning to use a computer program. Applied Cognitive Psychology, 10(2), 151–170.
Clark, R., & Harrelson, G. L. (2002). Designing instruction that supports cognitive learning processes. Journal of Athletic Training, 37(4), 152–159.
Clark, R. C., & Mayer, R. E. (2003). e-Learning and the science of instruction. San Francisco: Pfeiffer.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cooper, G. (1990). Cognitive load theory as an aid for instructional design. Australian Journal of Educational Technology, 6(2), 108–113.
Cooper, G. (1998). Research into cognitive load theory and instructional design at UNSW. Retrieved 27.07.07, from http://education.arts.unsw.edu.au/CLT_Aug_97.html.
Craig, S., Gholson, B., & Driscoll, D. (2002). Animated pedagogical agents in multimedia educational environments: effects of agent properties, picture features and redundancy. Journal of Educational Psychology, 94, 428–434.
Creswell, J. W. (2005). Educational research: Planning, conducting and evaluating quantitative and qualitative research (2nd ed.). New Jersey: Pearson.
de Jong, T. (2010). Cognitive load theory, educational research, and instructional design: some food for thought. Instructional Science, 38, 105–134.
Gellevij, M. (2002). Visuals in instruction: Functions of screen captures in software manuals. Unpublished PhD thesis, University of Twente, The Netherlands.
Gellevij, M., Van der Meij, H., De Jong, T., & Pieters, J. (2002). Multimodal versus unimodal instruction in a complex learning context. The Journal of Experimental Education, 70(3), 215–239.
Goh, L. H. (2007). Integrating ICT in teaching and learning: a case study. Jurnal Penyelidikan Pendidikan Guru, 3, 45–60.
Gyselinck, V., Ehrlich, M.-F., Cornoldi, C., de Beni, R., & Dubois, V. (2005). Visuospatial working memory in learning from multimedia systems. Journal of Computer Assisted Learning, 16(2), 166–176.
Harp, S., & Mayer, R. E. (1997). Role of interest in learning from scientific text and illustrations: on the distinction between emotional interest and cognitive interest. Journal of Educational Psychology, 89, 92–102.
Horton, W. (1993). Visual literacy – dump the dumb screendump. Technical Communication, 40, 146–147.
Houghton-Alico, D. (1985). Creating computer software user guides. New York: McGraw-Hill.
Huk, T. (2006). Who benefits from learning with 3D models? The case of spatial ability. Journal of Computer Assisted Learning, 22, 392–404.
Kablan, Z., & Erden, M. (2008). Instructional efficiency of integrated and separated text with animated presentations in computer-based science instruction. Computers & Education, 51, 660–668.
Kalyuga, S., Chandler, P., & Sweller, J. (1998). Levels of expertise and instructional design. Human Factors, 40(1), 1–17.
Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13, 351–371.
Korthagen, F., Loughran, J., & Russell, T. (2006). Developing fundamental principles for teacher education programs and practices. Teaching and Teacher Education, 22(8), 1020–1041.
Kruse, K. (2004). Introduction to instructional design and the ADDIE model. Retrieved 22.03.11, from http://mizanis.net/edu3105/bacaan design_L/e-Learning and the ADDIE Model.htm.
Laffey, J. (2004). Appropriation, mastery, and resistance to technology in early childhood pre-service teacher education. Journal of Research on Technology in Education, 36(4), 361–382.
Lazonder, A. W. (2000). Exploring novice users' training needs in searching information on the world wide web. Journal of Computer Assisted Learning, 16, 326–335.
Mayer, R. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59, 14–19.
Mayer, R., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning: when presenting more material results in less understanding. Journal of Educational Psychology, 93, 187–198.
Mayer, R., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38, 43–52.
Mayer, R. E. (1989). Systematic thinking fostered by illustrations in scientific text. Journal of Educational Psychology, 81, 240–246.
Mayer, R. E. (1997). Multimedia learning: are we asking the right questions. Educational Psychologist, 32, 1–19.
Mayer, R. E., Bove, W., Bryman, A., Mars, R., & Tapangco, L. (1996). When less is more: meaningful learning from visual and verbal summaries of science textbook lessons. Journal of Educational Psychology, 88, 64–73.
Mayer, R. E., & Gallini, J. K. (1990). When is an illustration worth the thousand words? Journal of Educational Psychology, 82, 715–726.
Mayer, R. E., Steinhoff, K., Bower, G., & Mars, R. (1995). A generative theory of textbook design: using annotated illustrations to foster meaningful learning of science text. Educational Technology Research and Development, 43, 31–44.
Mohamed Ally. (2005). Designing learning materials for successful learning when using instructional technology. Konvensyen Teknologi Pendidikan Ke-18. Kuala Terengganu, Malaysia.
Molenda, M., Pershing, J. A., & Reigeluth, C. M. (1996). Designing instructional systems. In R. L. Craig (Ed.), The ASTD training and development handbook (4th ed.) (pp. 266–293). New York: McGraw-Hill.
Moreno, R., & Valdez, A. (2005). Cognitive load and learning effects of having students organize pictures and words in multimedia environments: the role of student interactivity and feedback. Educational Technology Research and Development, 53, 35–45.
Mwangi, W., & Sweller, J. (1998). Learning to solve compare word problems: the effect of example format and generating self-explanations. Cognition and Instruction, 16(2), 173–199.
Ngu, B., Mit, E., Shahbodin, F., & Tuovinen, J. (2009). Chemistry problem solving instruction: a comparison of three computer-based formats for learning from hierarchical network problem representations. Instructional Science, 37, 21–42.
Paas, F. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: a cognitive-load approach. Journal of Educational Psychology, 84, 429–434.
Paas, F., Tuovinen, J. E., Tabbers, H., & van Gerven, P. W. M. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1), 63–71.
Paas, F., van Gog, T., & Sweller, J. (2010). Cognitive load theory: new conceptualizations, specifications and integrated research perspectives. Educational Psychology Review, 22(2), 115–121.
Paas, F., & van Merriënboer, J. J. G. (1993). The efficiency of instructional conditions: an approach to combine mental effort and performance measures. Human Factors, 35, 737–743.
Paas, F., & van Merriënboer, J. J. G. (1994). Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review, 6, 51–71.
Plotzner, R., Bodemer, D., & Feuerlein, I. (2001). Facilitating the mental integration of multiple sources of information in multimedia learning environments. In C. Montgomerie, & J. Viteli (Eds.), Proceedings of ED-Media 2001: World conference on educational multimedia, hypermedia and telecommunications. Tampere, Norfolk.
Pociask, F. D., & Morrison, G. R. (2008). Controlling split attention and redundancy in physical therapy instruction. Educational Technology Research and Development, 64(4), 379–399.
Pollock, E., Chandler, P., & Sweller, J. (2002). Assimilating complex information. Learning and Instruction, 12(1), 61–86.
Rovai, A. (2002). Building sense of community at a distance. International Review of Research in Open and Distance Learning. Retrieved 03.05.06, from http://www.irrodl.org/content/v3.1/rovai.pdf.
Salden, R. J. C. M., Paas, F., Broers, N. J., & van Merrienboer, J. J. G. (2004). Mental effort and performance as determinants for the dynamic selection of learning tasks in air traffic control training. Instructional Science, 32, 153–172.
Schnotz, W. (1993). Adaptive construction of mental representations in understanding expository texts. Contemporary Educational Psychology, 18, 114–120.
Seufert, T., & Brunken, R. (2004). Supporting coherence formation in multimedia learning. Retrieved 12.07.11, from http://www.iwm-kmrc.de/workshops/dim2004/pdf_files/Seufert_et_al.pdf.
Shaharom Noordin. (1997). Kesan pengajaran bermodul ke atas pengkonsepan dan perubahan konsep pelajar tingkatan 4 dalam pembelajaran fizik. Science and mathematics education national seminar. Universiti Teknologi Malaysia.
Shuldman, M. (2004). Superintendent conceptions of institutional conditions that impact teacher technology integration. Journal of Research on Technology in Education, 36(4), 319–343.
Soanes, C., & Stevenson, A. (Eds.). (2003). Oxford dictionary of English (2nd ed.). Oxford: Oxford University Press.
Strickland, A. W. (2006). ADDIE. Idaho State University College of Education Science, Math & Technology Education. Retrieved 29.04.07, from http://ed.isu.edu/addie/index.html.
Sweller, J. (1988). Cognitive load during problem solving: effects on learning. Cognitive Science, 12, 257–285.
Sweller, J. (1994). Cognitive load theory, learning difficulty and instructional design. Learning and Instruction, 4, 295–312.
Sweller, J. (1999). Instructional design in technical areas. Melbourne: ACER Press.
Sweller, J. (2002). Visualisation and instructional design. In R. Ploetzner (Ed.), International workshop on dynamic visualizations and learning, Tubingen.
Sweller, J. (2008). Help! My brain is overloaded!. Retrieved 12.02.08, from http://www.unsw.edu.au.
Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12(3), 185–233.
Sweller, J., Chandler, P., Tierney, P., & Cooper, M. (1990). Cognitive load and selective attention as factors in the structuring of technical material. Journal of Experimental Psychology: General, 119(2), 176–192.
Sweller, J., & Sweller, S. (2006). Natural information processing systems. Evolutionary Psychology, 4, 434–458.
Sweller, J., van Merrienboer, J. J. G., & Paas, F. G. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296.
Tarmizi, R., & Sweller, J. (1988). Guidance during mathematical problem solving. Journal of Educational Psychology, 80(4), 424–436.
van der Meij, H. (1998). Optimizing the joint handling of manual and screen. In J. M. Carroll (Ed.), Minimalism beyond the Nurnberg Funnel (pp. 275–310). Cambridge, Massachusetts: MIT Press.
van der Meij, H., & De Jong, T. (2003). Learning with multiple representations. In EARLI conference 2003. Padua, Italy.
van der Meij, H., & Gellevij, M. (1998). Screen captures in software documentation. Technical Communication, 45(4), 529–543.
van Merrienboer, J. J. G. (2000). The end of software training? Journal of Computer Assisted Learning, 16, 366–375.
Ward, M., & Sweller, J. (1990). Structuring effective worked examples. Cognition and Instruction, 7(1), 1–39.
Watson, J. (2005). Cognitive load theory summary. Retrieved 15.11.06, from http://www.jacalyn.info/contract7_CLsum.htm.