Computers & Education 40 (2003) 1–15 www.elsevier.com/locate/compedu
The Jeliot 2000 program animation system§

Ronit Ben-Bassat Levy (a), Mordechai Ben-Ari (a,*), Pekka A. Uronen (b)

(a) Department of Science Teaching, Weizmann Institute of Science, Rehovot 76100, Israel
(b) Department of Computer Science, University of Helsinki, FIN-00014 Helsinki, Finland
Received 1 January 2002; accepted 4 June 2002
Abstract

Jeliot 2000 is a program animation system intended for teaching introductory computer science to high school students. A program animation system displays a dynamic graphical representation of the execution of a program. The goal is to help novices understand basic concepts of algorithms and programming, such as assignment, I/O and control flow, whose dynamic aspects are not easily grasped just by looking at the static representation of an algorithm in a programming language. The paper describes the design and implementation of Jeliot 2000 and an experiment in its use in a year-long course. The experiment showed that animation provides a vocabulary and a concrete model that can improve the learning of students who would otherwise have difficulty with abstract computer-science concepts.
© 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Interactive learning environments; Programming and programming languages
1. Introduction

The difficulties encountered by new students of computer science are well known. One explanation for these difficulties is that students do not have a viable mental model of how a computer works, so their ability to perform an elementary task like following the execution of a program is seriously flawed (Ben-Ari, 2001). Intuitively, it would seem that visualization and animation of algorithms and programs would significantly aid the building of viable mental models. Unfortunately, empirical studies do not unequivocally support this claim.
§ A preliminary version of this paper was presented at the Program Visualization Workshop, Porvoo, Finland, July 2000.
* Corresponding author. E-mail address: [email protected] (M. Ben-Ari).
Empirical studies are usually carried out using the techniques of cognitive psychology: carefully controlled short-term experiments that look for a quantitative improvement in a test result. While such studies can be reliable, it is not clear to what extent the results can be validly transferred to an actual classroom setting. This paper reports on research that investigated the efficacy of program animation when it is integrated into a year-long course. Ethical and logistical considerations make it practically impossible to create a well-controlled experiment in a real school; nevertheless, we believe that the results, in particular the qualitative results, provide novel and significant insight into the issue.

The experiment was carried out on 10th-grade high school students studying an introductory course on algorithms and programming. We re-implemented Jeliot I, an existing program animation system, to create a version that would be appropriate for these students. Two full-year classes were taught, one using Jeliot 2000 and one as a control group, and the results were evaluated both quantitatively and qualitatively. The extended use of Jeliot 2000 enabled the students to feel comfortable with the tool, so that the frustration of learning to use a new system had time to dissipate.
2. Previous work

Stasko, Badre, and Lewis (1993) used algorithm animation to teach a complicated algorithm to graduate students in computer science; they expected that the animation would help students understand the dynamic behavior of the algorithm. The results were disappointing: in a post-test, the group that used animation did not perform better than the group that did not. The authors attributed the results to the design of the animations, which was more appropriate for experts than for novices. Novices found it difficult to map the graphical elements of the animation to the algorithm. In a subsequent study, Byrne, Catrambone, and Stasko (1996) examined the effect of both predictions and animations on the performance of undergraduate students. The study showed that students in the animation groups got better grades on challenging questions for simple algorithms, but on difficult algorithms the differences were not significant.

The work by Kehoe, Stasko, and Taylor (1999) is closest to ours in that they evaluated the use of animations in classroom and homework sessions, rather than in examinations. Their conclusion is that the pedagogical value of algorithm animation is more apparent in open homework sessions than in closed examinations. Animation is not useful in isolation; students need human explanations to accompany the animations. Affective advantages of animation were also observed: the animation group exhibited more motivation and satisfaction.

A sequence of experiments on multimedia learning was performed by Mayer (1997), who consistently found that visualizations need to be accompanied by explanations to be effective. Multimedia plays two roles: it guides students' attention and helps them create connections between text and concepts. Furthermore, the explanations must be simultaneous with the visualizations to be effective. Mayer's explanation is that multimedia helps students create multiple representations of the problem. Petre and Green (1993) showed that a graphical display is not self-explanatory, but must be learned. From this we conclude that there is a need for long-term experimentation in the use of animation.
3. Jeliot

Since our students are beginning high school students, we needed an animation system that is appropriate for their use. The emphasis is on program animation that demonstrates the execution of input–output, assignment, selection and loop statements. Animation of data structures (except for arrays) and of algorithms (such as sorting) is not necessary for this course.

The Jeliot I system (Haajanen, Pesonius, Sutinen, Tarhio, Teräsvirta, & Vanninen, 1997) enables the student to see the source code together with a theater on which the program entities "perform". (We adopt the name Jeliot I from a recent survey article, Ben-Ari, Myller, Sutinen, & Tarhio, 2002, to avoid saying "the original Jeliot".) Jeliot I is different from other animation systems in that the animations are produced automatically and do not require the user to designate which aspects are to be animated. The system is quite sophisticated, employing multiple windows and an extensive menu of options for customizing the animation. Lattu, Tarhio, and Meisalo (2000) conducted a qualitative evaluation of Jeliot I in which the animation was used to support oral presentation in the classroom. They concluded that Jeliot I aids the formation of concepts such as variable manipulation. They suggested ways to improve Jeliot I, and their suggestions were incorporated in our design.

Based on Jeliot I, a new animation system called Jeliot 2000 was developed by the third author during a visit to the Weizmann Institute of Science. The basic concept (and hence the name Jeliot) was maintained: automatic animation of all programs rather than instructor-defined animation of selected programs. The design of Jeliot 2000 focused on simplifying the interface relative to Jeliot I (at the expense of the flexibility of the display), and on achieving a more complete animation (at the expense of often animating too much detail). This was done because Jeliot 2000 is intended for total novices, while Jeliot I is more appropriate for students having some familiarity with programming.

The main differences between Jeliot 2000 and Jeliot I are the following [see also Ben-Ari et al. (2002)]:

1. A single window is displayed with two panels, one for the source code and one for the animation.
2. All the variables are animated uniformly.
3. Expression evaluation and control decisions are animated.
4. The only controls are the familiar VCR-like ones.
5. Text input–output is animated.
6. Jeliot 2000 is a single Java application.
3.1. Principles of animation

Jeliot 2000 animates programs at the level of programming-language constructs: variables, values, expressions, statements and subprograms. The central design principle is that the animation should be complete and continuous. Completeness means that every feature of the program must be visualized; for example, a value such as a constant may not appear from nowhere. Continuity means that the animation must make the relations between actions in the program explicit. For example, Jeliot 2000 shows how the values of the subexpressions of an expression
contribute to its value. This means that the visual objects that represent the subexpressions must remain visible until all of them have been evaluated; then these objects are animated to form the expression.

3.2. The user interface

Jeliot 2000 is intended for total novices, not for experts, who have different requirements (Petre, 1995). The user interface must be kept as simple as possible, because a sophisticated user interface can be confusing for a novice, whose choices among its options tend to be shallow and of no pedagogical importance. The user interface consists of one window split into two panels (Fig. 1). The left-hand panel is the code area that displays the source code of the program being animated. The animation itself takes place in the theater area in the right-hand panel. The lower part of the theater panel includes an area for a constant store and an area for displaying the output generated by the program; Jeliot I did not animate these aspects because they are clear to anyone with a bit of programming experience, but to total novices they are quite mystifying. Menus are used to open files, and a button below the source code display causes the program to be compiled. The user controls the execution of the program through a panel containing VCR-like buttons.

In the figure, you can see that the While-condition is being animated. The animation has already moved the values 8.0 and 4 from the rectangles representing the variables real and input (respectively) to the stage where the evaluation will be animated. The evaluation of the Boolean condition results in the display of the result true, as well as a note indicating its effect on the subsequent control flow. The next step of the animation will erase the display of this evaluation and replace it with the animation of the evaluation of the following statement real=real*0.4. For input, a text box pops up, and for output, the value moves to the output panel.
Fig. 1. Screen snapshot of Jeliot 2000.
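The paper does not list the program shown in the screenshot, so the following is only an illustrative sketch of the kind of program being animated; the variable names (real, input) and values (8.0 and 4) are taken from the description above, while the class name, the comparison operator and the fixed initial value of input are assumptions made to keep the sketch self-contained.

    // Illustrative sketch only: a small Java program of the kind animated in Fig. 1.
    // The exact program in the screenshot is not given in the paper.
    public class Shrink {
        public static void main(String[] args) {
            double real = 8.0;      // its value is moved from the variable's rectangle onto the stage
            int input = 4;          // in the described session this value arrived via a pop-up text box
            while (real > input) {  // the While-condition whose evaluation (result: true) is animated
                real = real * 0.4;  // animated in the next step, replacing the condition display
            }
            System.out.println(real);   // output values move to the output panel
        }
    }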
3.3. Implementation principles

Both Jeliot I and Jeliot 2000 use Java both as the language of the source programs being animated and as the implementation language. Java was chosen for its portability, its extensive library and its widespread use in teaching. There are no plans to extend Jeliot 2000 to other languages.

Jeliot I was structured as a server application together with a client applet so that it could be run from a web browser. For Jeliot 2000, we decided to change the implementation to a single application for the following reasons: simplicity, reliability problems with the implementation of Java by browsers, and the availability in the school laboratory of PCs powerful enough to have Java installed on them.

To implement animation, the source program must be interpreted; the animation display is a side effect of the interpretation. Animating JVM byte code would lose a layer of abstraction, so Jeliot 2000 directly interprets the program source code. The source code is parsed to produce a tree that has the declarations, statements and expressions of the program as nodes. When the program is run, Jeliot 2000 traverses this tree, keeping track of the program's state and rendering the events of control flow and expression evaluation in the animation.

The most difficult implementation problem was to preserve the continuity and smoothness of the animation. In an ordinary interpreter, after an expression has been evaluated only its value needs to be kept. But for animation, the visual objects that represent the expression must remain displayed for the user to examine in the context of the source code; only then are they erased to avoid cluttering the display. Furthermore, values can come from constants, variables, parameters and I/O, and the animation must produce a pleasing display for all possibilities. It is possible to animate new constructs by creating new class animators, though this requires detailed knowledge of the implementation and is not something a user would do. Class animators were used to implement input–output in Jeliot 2000.

The direction taken in the implementation of Jeliot 2000 is very different from that of Jeliot I, which was based on self-animating datatypes. This approach is quite elegant, but it makes it difficult to implement visual relations between elements of a program that occur during the evaluation of expressions or parameter passing. Embedding the animation system in an interpreter gives Jeliot 2000 more power of expression at the cost of a more complicated implementation. Jeliot I is able to run almost any legal Java program; operations on non-animating datatypes are passed to the Java interpreter, which executes them with no visualization. Jeliot 2000 is complete in that it animates all the constructs of the programs it accepts, but the current implementation is limited in the language constructs that it supports.
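The paper does not include any of Jeliot 2000's source code, so the following is only a minimal sketch, with invented class and method names, of the tree-walking idea described in this section: the interpreter fires animation events as a side effect of evaluating each node, so that every value (completeness) and every combination of subexpression values (continuity) can be rendered.

    // Minimal sketch of a tree-walking interpreter that emits animation events as a
    // side effect of evaluation. This is NOT Jeliot 2000's actual code; all names are invented.
    interface Animator {
        void showValue(String origin, int value);                          // e.g. a constant entering the stage
        void showBinaryResult(String op, int left, int right, int result); // subexpressions combined into a value
    }

    abstract class Expr {
        abstract int eval(Animator animator);
    }

    class Literal extends Expr {
        private final int value;
        Literal(int value) { this.value = value; }
        int eval(Animator animator) {
            animator.showValue("constant", value);  // completeness: even a constant may not appear from nowhere
            return value;
        }
    }

    class BinaryOp extends Expr {
        private final char op;
        private final Expr left, right;
        BinaryOp(char op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
        int eval(Animator animator) {
            // Continuity: both subexpression values are produced (and, in a real renderer,
            // stay visible) before they are animated together to form the whole expression's value.
            int l = left.eval(animator);
            int r = right.eval(animator);
            int result = (op == '+') ? l + r : l * r;   // only + and * in this sketch
            animator.showBinaryResult(String.valueOf(op), l, r, result);
            return result;
        }
    }

    public class InterpreterSketch {
        public static void main(String[] args) {
            // Corresponds to the expression 1 + 2 * 3.
            Expr program = new BinaryOp('+', new Literal(1),
                                        new BinaryOp('*', new Literal(2), new Literal(3)));
            Animator console = new Animator() {    // a trivial "animator" that just prints the events
                public void showValue(String origin, int value) {
                    System.out.println(origin + " " + value + " enters the stage");
                }
                public void showBinaryResult(String op, int left, int right, int result) {
                    System.out.println(left + " " + op + " " + right + " -> " + result);
                }
            };
            System.out.println("final value: " + program.eval(console));
        }
    }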
4. Research design

Initially, we were apprehensive that using a Java tool would be inappropriate for teaching in a class where the programming language that must be used is Pascal. However, a preliminary study showed that for the study of basic concepts like assignment, the difference in syntax was not critical. The teacher was always available to help overcome any problems. If anything, this meant that improvements attributed to animation are probably greater than we claim.

The experiment was carried out by the first author, who taught two parallel classes in introductory computer science using the same materials; one class was treated with demonstrations and labs using Jeliot 2000 and the other class was a control group. The classes were composed by
the school in the normal manner without our intervention; we asked for and hoped for classes of equivalent average ability, but the control group was estimated to be better as measured by a pretest of middle-school mathematics, and in fact, their achievements were consistently higher than those of the class that used animation.

Every new computer science concept was taught in the following sequence:

1. The concept was studied for 2 weeks, where each week's study consisted of 2 h of frontal lecture and 1 h of lab work using Turbo Pascal.
2. An in-class paper-and-pencil assignment was given; this assignment was used as a pretest.
3. The next 2 weeks were devoted to further study of the topic. The control group received additional in-class and lab exercises; they used the trace facility of Turbo Pascal. The animation group was taught the syntax for expressing the concept in Java, and then used Jeliot 2000 to work on the same in-class and lab exercises. Both classes were asked to predict the outcome of the execution of a program fragment.
4. An in-class paper-and-pencil assignment was given as a post-test.

The in-class assignments were composed of questions designed to test levels of understanding:

- The ability to retrieve factual information.
- Implementation understanding, by writing partial code for a question similar to one done in class.
- Analysis, by asking for inputs that would produce a specified output.
- Synthesis, by writing a program.

This sequence was repeated for each new concept. The pre- and post-tests were designed to contain questions of similar levels of understanding and similar complexity. The attribution of levels to the questions was validated by comparison with independent attributions given by two colleagues. At the end of the year, an additional assignment was given in order to evaluate long-term learning. Finally, a follow-up assignment during the next school year was used to investigate whether any effects could be discovered that lasted beyond the use of animation in a single course.

Students of all achievement levels from both groups were interviewed to learn about their attitudes towards the material learned in class and the way it was presented. The students from the animation group were also asked about their attitudes towards the use of Jeliot 2000. Furthermore, during the interview session, the students were asked to solve a problem, verbalizing their thoughts as best they could.
5. Results

Because we could not control membership in each class, the control group was always stronger, as reflected in higher average grades than the animation group. In fact, there was very little room for (statistically significant) improvement in the grades of the control group; however, we were able to demonstrate significant improvement in the grades of the animation group. (Details of the statistical analysis are given in Appendix A.)
Table 1
Grade improvement for the If-statement (number of students in each range of Δ)

Difference Δ       Control   Animation
…                     5          1
…                    17         17
…                     9          2
…                     6          4
…                     2          8
…                     0          4
…                     0          2
(Total)              39         38
The interviews were more helpful in determining the effects of the use of animation. We now present the results for the topics that were taught.

5.1. Assignment and input–output

We found no significant improvement in overall performance, as both classes did well in the first two class assignments. The animation group used the Jeliot 2000 representation of memory cells to explain how a cell gets its value and drew paths just like Jeliot 2000, while the non-animation group had difficulties in explaining how a cell gets its value. In an interview, the students attempted to predict the outcome of an assignment statement of the form x:=x+2, which they had not seen previously. The animation group imitated the graphic objects of Jeliot 2000 and were able to explain what would happen, while the non-animation group simply said that the assignment statement is mathematically incorrect. (The sample transcript in Appendix B shows how the animation enabled a student to quickly recover from this misconception.) The animation group made fewer mistakes in explaining the execution of input statements, again explaining their answers in terms of the Jeliot 2000 animation. On questions concerning output, the animation group made more mistakes than did the non-animation group. Subsequent interviews led us to believe that this was an artifact caused by syntactic differences between Pascal and Java.

5.2. If-statement

We detected a significant improvement in the performance of the animation group, while there was no similar improvement in the performance of the control group. Table 1 shows the grade improvement Δ of the students from the first assignment on the If-statement to the second assignment. For each range of Δ in the first column, the next two columns list the number of students who improved by that Δ. The average grade of the animation group improved from 77 to 89, while the average grade of the control group improved from 95 to 96. (For clarity, averages in this paper are rounded to the nearest whole number; raw data and precise results are available in Ben-Bassat Levy, 2002.) Note the large number of students in the animation group whose grades improved by 10–15 points; we claim that these are students who initially found it difficult to learn the material, and who significantly improved with exposure to animation.
Table 2
Grade improvement for the While-statement (number of students in each range of Δ)

Difference Δ       Control   Animation
…                     8          3
…                    11          5
…                     4          2
…                     3          5
…                     2          4
…                     1          3
…                     8          6
(Total)              37         28
During the interviews, the students were asked about a program with nested If-statements, a construct which they had not seen before. We can divide the answers into four main groups:

1. In the non-animation group, only the stronger students could answer the questions, and only after many attempts. They were not sure of the correctness of their answers and had difficulty explaining them.
2. The stronger students of the animation group also had difficulties answering this question! They did not use Jeliot 2000 because they believed that they could understand the material without it.
3. The weaker students of the animation group refused to work on the problem, claiming that nested If-statements are not legal, or that they did not understand the question.
4. The mediocre students of the animation group gave correct answers! They drew a Jeliot 2000 display and used it to hand-simulate the execution of the program.
5.3. While-statement

The results for the While-statement are shown in Table 2. (The total number of students in each group is the number from whom we have both pre- and post-tests; because the experiment was carried out in a regular school environment, we had no control over absence from class.) The average of the control group increased only from 83 to 88, but the average of the animation group improved from 76 to 87. The results show that almost 50% of the students in the animation group who answered the assignments improved their grades by more than ten points. As in the case of the If-statement, the main improvement was in the group of mediocre students.

The interviews contained questions on the termination of While-statements, and on the execution of a While-statement followed by an If-statement. The results of the interviews were consistent with the results of the interviews on If-statements done with the same students. The stronger
students in both the control group and the animation group were eventually able to answer the questions, but only in the animation group could the mediocre students answer them. The students of the animation group used vocabulary such as "skip the loop" and "continue the loop", which appears on the Jeliot 2000 displays, while the students of the control group tried to use the tracking tables that we used in class. It must be stated that both groups studied tracking tables, but in the animation group, the animation displays superseded the tables as the model for discussing program execution.

5.4. Summary assignment

A two-question summary assignment was given at the end of the course. The students were presented with two program fragments:

    readln(X);
    X := X + 3;
    writeln(X);

    while Y >= X do Y := Y - X;
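Jeliot 2000 animates Java rather than Pascal (Section 4), so a student in the animation group exploring these fragments in the tool would first have to express them in Java. The paper does not give such Java versions; the following is a hypothetical rendering, with Scanner-based input chosen only to make the sketch self-contained. It assumes a positive value of X, and the final print is added only to expose the result of the second fragment.

    // Hypothetical Java rendering of the two Pascal fragments above, of the kind the
    // animation group could have animated in Jeliot 2000; not taken from the paper.
    import java.util.Scanner;

    public class SummaryFragments {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);

            // Fragment 1: read X, add 3, print it.
            int x = in.nextInt();
            x = x + 3;
            System.out.println(x);

            // Fragment 2: repeatedly subtract X from Y while Y >= X.
            // For positive X this leaves Y holding the remainder Y mod X.
            int y = in.nextInt();
            while (y >= x) {
                y = y - x;
            }
            System.out.println(y);  // printed here only so the fragment's effect is visible
        }
    }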
As we were interested in obtaining evidence for the effects of animation on the students' mental models, part of the assignment asked the students how they would explain these fragments to another student. The characteristics of the answers were encoded and are tabulated in Tables 3 and 4. There were 34 and 33 students in the control and animation groups, respectively; several students' answers contained more than one characteristic. The control group used generalized and verbose explanations, indicating that they were unable to articulate their reasoning. (The anomalous use of Jeliot terminology in the control group is easily explained: the student was the twin of a student in the animation group!) The animation group used clear step-by-step explanations, usually using Jeliot terminology.
Table 3
Responses for I/O and assignment

Characteristic                                 Control group   Animation group
Consistent use of graphical representation           3                7
Step-by-step description of execution                8               15
Generalized explanation                             32               25
Jeliot terminology                                   1                8
Verbose explanation                                  7                0
Table 4
Responses for While-loop

Characteristic                                 Control group   Animation group
Step-by-step description of execution                9               21
Generalized explanation                             21                3
Jeliot terminology                                  21                3
Table 5
Average point improvement by ability

Ability     N    I/O    If    While   For
Weak         9    12    12      6     15
Mediocre    17     7    15     15     16
Strong      12    11     6      6      9
As the ability to articulate reasoning can be considered an indicator of better understanding, this supports the claim that the use of animation has given the students improved cognitive tools for understanding programming.

5.5. Matriculation examination

At the end of the year, both groups sat for a first matriculation examination in computer science given by the Ministry of Education. The average grade of the control group was 98 and the average grade of the animation group was 97. While the high averages indicate that the exam was not difficult, we believe that the use of animation was responsible for the animation group, despite their lower starting point, achieving parity with the control group.

5.6. The mediocre-student effect

Anecdotal evidence as well as our qualitative results suggest that it is the mediocre students who profit the most from the use of animation. In order to test this hypothesis, at the conclusion of the course the average improvement in the scores between pre- and post-tests for each topic was tabulated for strong, mediocre and weak students. The assignment to these groups was made by the teacher prior to computing these data. This introduces the possibility of bias, since she is also one of the researchers, but we believe that after many hours in a classroom, a teacher can assess a student's learning abilities far more accurately than any psychometric test whose validity for a specific subject like computer science is questionable. The results are shown in Table 5. We interpret the strange results for the first assignment as resulting from the difficulty inherent in learning to use a new system. The subsequent results support the hypothesis that mediocre students profit more from animation than either strong or weak students.

6. Discussion

Since the control group had very little room for improvement, the most that can be concluded from the quantitative results is that the animation group showed improvement. Even in long-term use, animation does not improve the performance of all students: the better students do not really
need it, and the weakest students are overwhelmed by the tool. But for many, many students, the concrete model offered by the animation can make the difference between success and failure. Animation does not seem to harm the grades of either the stronger students, who enjoy playing with it but do not use it, or the weaker students, for whom the animation is a burden. Only after the third assignment was there significant improvement in the grades, leading us to conclude that animation must be a long-term part of a course, so that students can learn the tool itself. The consistent improvement in the average scores of the mediocre students confirms the folk wisdom that they are the ones who benefit most from visualization.

We find it particularly significant that the animation group used a different and better vocabulary of terms than did the non-animation group. (See the sample transcript in Appendix B, where student D explains an assignment statement in terms of the animation.) Especially within the socio-linguistic approach to learning, verbalization is the first and most important step towards understanding a concept, and for this reason alone, the use of animation can be justified. Of course, it can be claimed that improved teaching could achieve the same effect without animation, but we believe that seeing the computer explain itself has a much more powerful effect than mere words coming out of the teacher's mouth. Furthermore, a teacher's time is very limited, especially in lab situations, while the animation system is always available.

7. Follow-up

An assignment was given to the same population of students in the subsequent year; they studied the second year of the 2-year sequence on algorithms and programming under another teacher who was not familiar with Jeliot 2000. The purpose of this assignment was to look for knowledge transfer in the students who had used Jeliot 2000: how do these students explain a new concept (parameters) that they studied without using animation? Students from the first year's groups had been mixed together to form new classes in the second year, and furthermore, the teacher had never used Jeliot 2000, so it is safe to attribute differences to the use of Jeliot 2000 in the first year.

We hypothesized that there would be a difference in the style of explanations, and in fact, students from the animation group used a step-by-step method of explanation; some even used symbols from Jeliot 2000 in order to show the flow of the values from the actual parameters to the formal ones and conversely. Again, students from the control group expressed themselves in a generalized and verbose manner with a limited vocabulary, for example: "the main program sends a value to the procedure", a statement which does not even mention parameters. These results confirm that the use of animation in an introductory course can result in the students developing learning strategies that are valuable in the long term.

We are continuing to use Jeliot 2000 to teach a new group of introductory students. They show the same enthusiasm for animation, and demand that new concepts be explained using Jeliot 2000. As one of them said: "Jeliot is esthetic, friendly and good at explaining things."

8. Conclusion

Animation is an educational tool that must be integrated into the classroom and assignments, rather than used as a one-time teaching aid. Furthermore, the interpretation of the animation itself is non-trivial and must be explicitly taught. Our research shows that the main benefit of
animation seems to be that it offers a concrete model of execution, a model that all but the best students need in order to understand algorithms and programming.

Jeliot 2000 may be downloaded at http://stwww.weizmann.ac.il/g-cs/benari/vis.htm. It may be freely used, but currently no support can be given.
Acknowledgements

We would like to thank Erkki Sutinen for permission to re-implement Jeliot I and for his e-help during the design and implementation of Jeliot 2000. We would also like to thank Pessach Goldstein, the principal of the Gan Nahum High School in Rishon-le-Zion, Israel, for permitting us to conduct the experiment in the school.
Appendix A. Statistical results

Statistical analysis of the pre- and post-tests

This appendix contains the statistical analysis of the pre- and post-tests that were given to the students. The tests for the assignment statement are not included, as measures intended to eliminate copying were not successful. Tables A1 and A2 give the averages, ranges and medians for the two groups. The Student t test was used to compare the averages of the two tests with α = 5% (Tables A3 and A4). The reliability of a test measures the likelihood of similar results if the test is repeated under similar conditions: a score greater than 0.8 means very likely and a score greater than 0.7 means somewhat likely.

The improvements in the average grades of the animation group were significant, except for the first pair of tests. The average grades in the control group improved, but the improvements were not statistically significant. The reliability of the tests of the If- and While-statements was good. These tests were given near the end of the year, by which time the students were thoroughly familiar with the animation environment.
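The paper does not say how the t statistics in Tables A3 and A4 were computed. The degrees of freedom (one less than the number of students with both tests) suggest a paired test on each student's pre- and post-test grades; the following sketch, with invented data, shows that computation. It is not the authors' analysis code.

    // Sketch of a paired Student t statistic of the kind reported in Tables A3 and A4.
    public class PairedT {
        // Paired t statistic for n = pre.length students (degrees of freedom: n - 1).
        static double pairedT(double[] pre, double[] post) {
            int n = pre.length;
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                sum += post[i] - pre[i];
            }
            double meanDiff = sum / n;
            double ss = 0.0;                     // sum of squared deviations of the differences
            for (int i = 0; i < n; i++) {
                double d = (post[i] - pre[i]) - meanDiff;
                ss += d * d;
            }
            double sd = Math.sqrt(ss / (n - 1)); // sample standard deviation of the differences
            return meanDiff / (sd / Math.sqrt(n));
        }

        public static void main(String[] args) {
            double[] pre  = { 70, 80, 65, 90, 75 };  // invented pre-test grades
            double[] post = { 85, 82, 80, 95, 88 };  // invented post-test grades
            System.out.printf("t(%d) = %.2f%n", pre.length - 1, pairedT(pre, post));
        }
    }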
Table A1
Pre- and post-test results of the animation group

Test    Avg (pre)   Avg (post)   Range (pre)   Range (post)   Median (pre)   Median (post)
I/O        92           89         51–100         33–100           100             100
If         77           89         15–100         17–100            82              94
While      76           87          0–100         25–100            85              97
For        67           80         20–100         35–100            71              84
Table A2
Pre- and post-test results of the control group

Test    Avg (pre)   Avg (post)   Range (pre)   Range (post)   Median (pre)   Median (post)
I/O        92           96         44–100         56–100            99             100
If         95           96         75–100         89–100            97              97
While      83           88         40–100         10–100            88              95
For        79           87         26–100         60–100            80              89
Table A3
Pre- and post-test analysis of the animation group

Test    t              P        Significant?   Reliability (pre)   Reliability (post)
I/O     t30 = −0.85    0.4018        No               0.64                0.81
If      t37 = 5.16     0.0001        Yes              0.72                0.82
While   t27 = 3.53     0.0016        Yes              0.68                0.72
For     t28 = 3.52     0.0015        Yes              0.62                0.59
Table A4
Pre- and post-test analysis of the control group

Test    t             P        Significant?   Reliability (pre)   Reliability (post)
I/O     t32 = 1.12    0.2711        No               0.64                0.81
If      t38 = 1.51    0.1403        No               0.72                0.82
While   t36 = 1.17    0.2482        No               0.68                0.72
For     t26 = 1.99    0.0577        No               0.62                0.59
Normalized statistics

Since the control group achieved such high scores on the pretests, it is possible that their results are not significant because they had no room to improve. To rule out this possibility, we checked the normalized differences of the averages of the grades of the two groups, which can give us a measure of the actual improvement of each group relative to the possible improvement. The normalization formula is

    (avg_post − avg_pre) / (100 − avg_pre),

where avg_post and avg_pre are the averages of each group in each questionnaire. (Normalization was not applied to each individual grade because many pre-test grades were close to 100, causing numerical problems because of the denominator 100 − avg_pre. In addition, the computation requires that we leave out students who did not have both pre- and post-test grades. Thus, individual normalization would not have contributed to understanding the results.) Table A5 shows the normalized results. (Table A5 is taken from Ben-Bassat Levy, 2002, where the precise data were used, and does not correspond to the rounded data presented in Tables A1 and A2.) They are not very different from the absolute results, and show that the control group could have improved more than it did.
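As a worked example using the rounded averages in Table A1: for the animation group on the For test, the normalized improvement is (80 − 67) / (100 − 67) = 13/33 ≈ 0.39, which agrees with the corresponding entry in Table A5 (the table itself was computed from the precise, unrounded data).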
Table A5
Normalized results

Test     Control   Animation
I/O       0.51       −0.42
If        0.29        0.49
While     0.30        0.48
For       0.38        0.39
Appendix B. Transcript

This appendix contains a transcript (translated into English) of the discussion between two students, D and V (in the presence of the teacher T), who were presented with the problem of understanding the assignment statement x:=x+1, after they had studied the statements x:=3 and y:=x+3.

V: There is an error. It is a false statement.
D: You are wrong. This is not mathematics, it is Pascal.
V: So what; it is impossible that x will be equal to x+1.
D: But we saw in the lab that it takes what there is in the rectangle at the side and puts it on the marble [the area on which the animation is conducted is called the marble because of its graphical appearance], and then adds one and then puts the answer back into the rectangle. [At this point, D draws the Jeliot screen on the blackboard.]
V: Wow, did I ever make a mistake! I forgot about it. [Turns to the teacher] I forgot. What am I going to do?!
T: What we saw in the lab was the execution of y:=x+3. You explained here the execution of x:=x+1. Are the statements similar or different?
D: I think they are both similar and different.
T: What do you mean?
D: They are similar because of the right-hand side, where it takes what there is in the rectangle, puts it on the marble and then adds one. They are different because of the left-hand side: in the lab y was written and here x is written.
T: What do you mean by "take"?
D: It copies what there is in the rectangle x. I remember now, in the lab you said "copies". I don't care if x or y is written on the left-hand side, because it is a computer and it does the same thing all the time. So instead of putting it in y he will put it in x.

References

Ben-Ari, M. (2001). Constructivism in computer science education. Journal of Computers in Mathematics and Science Teaching, 20(1), 45–73.
Ben-Ari, M., Myller, N., Sutinen, E., & Tarhio, J. (2002). Perspectives on program animation with Jeliot. In Software visualization: International seminar, Dagstuhl Castle, Germany (Lecture Notes in Computer Science 2269, pp. 31–45).
Ben-Bassat Levy, R. (2002). The use of animation as an educational tool. MSc thesis, Weizmann Institute of Science (in Hebrew).
Byrne, M., Catrambone, R., & Stasko, J. (1996). Do algorithm animations aid learning? Technical report GIT-GVU-96-19, Georgia Institute of Technology.
Haajanen, J., Pesonius, M., Sutinen, E., Tarhio, J., Teräsvirta, T., & Vanninen, P. (1997). Animation of user algorithms on the web. In Proceedings of the 1997 IEEE Symposium on Visual Languages (pp. 360–367).
Kehoe, C., Stasko, J., & Taylor, A. (1999). Rethinking the evaluation of algorithm animations as learning aids: an observational study. Technical report GIT-GVU-99-10, Georgia Institute of Technology.
Lattu, M., Tarhio, J., & Meisalo, V. (2000). How a visualization tool can be used: evaluating a tool in a research and development project. In 12th Workshop of the Psychology of Programming Interest Group (pp. 19–32). Cosenza, Italy. Available: http://www.ppig.org/papers/12th-lattu.pdf.
Mayer, R. E. (1997). Multimedia learning: are we asking the right questions? Educational Psychologist, 32(1), 1–19.
Petre, M. (1995). Why looking isn't always seeing: readership skills and graphical programming. Communications of the ACM, 38(6), 33–44.
Petre, M., & Green, T. R. G. (1993). Learning to read graphics: some evidence that 'seeing' an information display is an acquired skill. Journal of Visual Languages and Computing, 4, 55–70.
Stasko, J., Badre, A., & Lewis, C. (1993). Do algorithm animations assist learning: an empirical study and analysis. In Proceedings of the INTERCHI '93 Conference on Human Factors in Computing Systems (pp. 61–66). Amsterdam, The Netherlands.