Student perceptions about computerized testing in introductory managerial accounting

J. of Acc. Ed. 27 (2009) 59–70

Barbara Apostolou a,*, Michael A. Blue b, Ronald J. Daigle c

a Division of Accounting, College of Business and Economics, West Virginia University, P.O. Box 6025, Morgantown, WV 26506-6025, United States
b Department of Finance, E.J. Ourso College of Business, Louisiana State University, Baton Rouge, LA 70803-6304, United States
c Department of Accounting, College of Business Administration, Sam Houston State University, Huntsville, TX 77341-2056, United States

* Corresponding author. E-mail addresses: [email protected] (B. Apostolou), [email protected] (M.A. Blue), [email protected] (R.J. Daigle).

doi:10.1016/j.jaccedu.2010.02.003

Abstract

This study reports on the implementation of computerized testing in an introductory managerial accounting course. Students were surveyed about their perceptions of computerized testing after taking two major computerized exams. Results show that students perceived both negative and positive aspects of computerized testing, and overall perceptions tended to be more negative than positive. Clear differences in student perceptions existed when analyzing results by instructor, indicating that individual instructors can manage student perceptions about computerized testing. Suggestions for addressing negative student perceptions are provided for accounting educators who are considering the use of computerized testing in introductory courses.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

The purpose of this study is to gain insight into student perceptions of computerized testing in an introductory managerial accounting course. Certain benefits of computerized testing have been noted in the literature, yet studies of how students perceive a computerized-testing environment are scarce. Those perceptions can be used to facilitate the transition from the traditional paper-and-pencil exam setting to computerized settings. The shift of major professional exams to computerized settings provides additional motivation for studying student perceptions (e.g., the CMA, CPA, and CIA exams were computerized in 1997, 2004, and 2008, respectively).


Research suggests that computerized tests can reduce testing time (Bunderson, Inouye, & Olsen, 1989; McBride, 1985; Sereci, 2003; Wise & Plake, 1989). Computerized tests can also provide more immediate grading and reporting of exam results (Bugbee, 1992; McBride, 1985; Sereci, 2003). Improved test security is another potential benefit of computerized testing (Grist, Rudner, & Wise, 1989; Sereci, 2003), as is flexible scheduling (Bugbee, 1992; Hambleton, Zaal, & Pieters, 1991; Sereci, 2003). Butler (2003) studied computerized testing in an introductory psychology course and identified other advantages of the format over traditional paper-and-pencil tests: (a) reduced paper and copying costs; (b) elimination of the need for proctors; and (c) out-of-class exams to accommodate large courses. Certain disadvantages were also identified: (a) testing at a computer screen without the ability to underline or make notations; (b) stress from looking at a computer screen; and (c) anxiety about changing from the traditional paper-and-pencil test setting. Despite the disadvantages, students had more positive perceptions of computerized tests than of paper-and-pencil tests.

Few studies have focused on computerized testing in accounting and business courses, as shown by the absence of references in the accounting education literature reviews covering published research since 1991 (Apostolou, Watson, Hassell, & Webber, 2001; Rebele et al., 1998a; Rebele et al., 1998b; Watson, Apostolou, Hassell, & Webber, 2003; Watson, Apostolou, Hassell, & Webber, 2007).3 One related study is deLange, Suwardy, and Mavondo (2003), who surveyed on-campus students in an introductory accounting course about the use of WebCT. The most relevant study to date is Peterson and Reider (2002), who surveyed successful and unsuccessful candidates of the Institute of Management Accountants' (IMA) computerized Certified Financial Manager (CFM) exam. Despite some negative perceptions (e.g., computer-screen fatigue, elimination of partial credit), CFM candidates in general (whether successful or unsuccessful) perceived the computerized exam setting as an overall positive experience: 64% of successful candidates and 62% of unsuccessful candidates perceived that a candidate's computerized exam score accurately reflects the candidate's knowledge.

Introductory managerial accounting courses tend to have large enrollments because they serve a majority of business majors and minors. Those teaching such courses may have an interest in computerized testing because of the perceived benefits to be gained. Accounting educators teaching online courses may have an interest because the online environment imposes computerized testing. Some accounting educators may be interested not only in gaining testing efficiencies, but also in giving students some insight into the environment of computerized professional exams. This potential interest calls for study of how computerized testing can affect accounting courses, including student perceptions. Understanding student perceptions is important because negative perceptions can be a detriment to a course.
As Peterson and Reider (2002) provide insights for the IMA and other professional organizations that use or may adopt computerized testing, this study seeks to provide insights for accounting educators interested in using computerized testing in introductory accounting courses. This study adapts Peterson and Reider's (2002) survey instrument to gather and report student perceptions about computerized testing in an introductory managerial accounting course taught by multiple instructors.4 Students in this study reported significant positive and negative perceptions about computerized testing, with overall perceptions tending to be more negative. Student perceptions differed by instructor, a factor not considered in prior research. This finding suggests that the instructor can play an important role in managing perceptions of the test environment. Accounting educators considering the use of computerized testing in introductory courses should be cognizant of these findings and of the corresponding suggestions provided for managing negative perceptions about computerized testing.

3 Boyce (1999) analyzes how computers can expand teaching and learning in accounting curricula.
4 This study does not focus on perceptions associated with smaller-scale technologies such as WebCT, online ancillary quizzes, or Blackboard.


The remainder of the paper describes the research method, presents the results, and provides suggestions for managing students' perceptions about computerized testing. Limitations of the current study and suggestions for future research are provided in the final section.

2. Research method

2.1. Description of computerized-testing environment under study

Introductory managerial accounting is a coordinated course taught in multiple sections by experienced instructors at Louisiana State University. Computerized tests are offered in a three-day "exam window." Students sign up for a two-hour time slot at a campus testing center; each center has 150 computer terminals dedicated to exams. Test center administration, scheduling, proctoring, and other test-processing functions are managed by the University's Office of Assessment and Evaluation.

Each student's exam is randomly generated from a test universe, which is created by the course coordinator from materials provided by the textbook publisher. The software that generates the questions is Questionmark™ Perception™ (2009). The software can generate multiple-choice questions, true/false questions, questions with drop-down menus, and problems or questions requiring fill-in-the-blank responses. Each exam is randomly generated so that the order of questions and the values in problems differ for each student. The order of choices in multiple-choice questions also may be randomly assigned (an illustrative sketch of such randomized assembly is given below). Results are available for viewing by the course coordinator and instructor as soon as the student submits the exam electronically.

2.2. Survey data collection

Student perceptions about computerized tests were collected using a survey instrument adapted from the one developed by Peterson and Reider (2002).5 The survey (see Appendix) was given after completion of two mid-term exams, and consisted of 18 Likert-scale questions (scale of 1–5), two yes/no questions, and one open-ended question that gave students the opportunity to elaborate on their perceptions. The survey was both voluntary and anonymous, with participants signing a separate consent form. Before giving the survey, each instructor read a brief, standard script to ensure consistency across all sections. The survey took approximately 20 min to complete during class, and included all students present for class on the designated day (n = 223).

3. Results

Table 1 summarizes key demographics of students initially enrolled in the course. The data were collected during the first class meeting of the semester.6 Students reported previous computerized test experience by a 3-to-1 margin. Biology, information systems, music, and physics were the most-cited courses in which students had previously experienced centralized computerized testing. The most common classification was junior, followed by senior and sophomore. Approximately 55% of the responding students were male, 40% were female, and the remainder gave no response. Approximately 92% reported English as their first language. Approximately half of the enrolled students were business majors, with the most common specialties being general business administration (21.84%), marketing (13.65%), and management (7.94%). The rest of the enrolled students were business minors from 23 non-business majors. Construction management (14.64%) and general studies (6.70%) were the most common non-business majors.

5 An exemption from Institutional Review Board oversight for the use of students as subjects was received for this study.
6 In a separate survey administered at the beginning of the semester, students provided demographic data. The students who participated in the study do not represent all of the students who completed the demographic survey; however, the demographic information is included to provide a general description of those initially enrolled in the managerial accounting course.
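The randomized exam assembly described in Section 2.1 can be pictured with a minimal Python sketch. This is not Questionmark Perception's actual implementation; the question pool, field names, and parameterization scheme below are hypothetical stand-ins that only illustrate how a unique exam (question selection, question order, numeric values, and choice order) could be generated per student from a common test universe.

```python
import random

# Hypothetical parameterized question: placeholders in the stem are filled with
# randomly drawn values, so each student sees different numbers in the problem.
QUESTION_POOL = [
    {
        "stem": "Fixed costs are ${fc} and the contribution margin per unit is "
                "${cm}. How many units must be sold to break even?",
        "params": {"fc": (10000, 50000), "cm": (10, 50)},
        "answer": lambda p: p["fc"] / p["cm"],                # break-even units
        "distractors": lambda p: [p["fc"] * p["cm"],          # wrong formula
                                  p["fc"] + p["cm"],          # wrong formula
                                  p["fc"] / (p["cm"] + 5)],   # near miss
    },
    # ... more questions in the test universe ...
]

def generate_exam(pool, n_questions, seed):
    """Draw questions, randomize values, and shuffle question and choice order."""
    rng = random.Random(seed)              # a per-student seed yields a unique exam
    questions = rng.sample(pool, min(n_questions, len(pool)))
    rng.shuffle(questions)                 # question order differs per student
    exam = []
    for q in questions:
        params = {k: rng.randint(lo, hi) for k, (lo, hi) in q["params"].items()}
        correct = q["answer"](params)
        choices = [correct] + q["distractors"](params)
        rng.shuffle(choices)               # choice order may also be randomized
        exam.append({"stem": q["stem"].format(**params),
                     "choices": choices,
                     "correct_index": choices.index(correct)})
    return exam

# Each student identifier produces a different, but reproducible, exam.
print(generate_exam(QUESTION_POOL, n_questions=1, seed=20091))
```

Run against this one-question pool, the sketch prints a single randomized break-even problem; in the course described here, the coordinator's test universe plays the role of QUESTION_POOL.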


Table 1. Self-reported demographic data of students surveyed (n = 396).

Computer testing experience: Yes 76.01%; No 23.48%; No response 0.51%.
Course(s) providing previous computer testing experience: Biology 31.28%; Information systems 18.94%; Music 6.17%; Physics 5.96%; Other 15.53%; No response 22.12%.
Major^a: Business 50.62%; Non-business 45.66%; No response 3.72%.
Current overall GPA: Mean 3.06; Standard deviation 0.45; No response 5.81%.
ACT score: Mean 24.93; Standard deviation 3.35; No response 27.53%.
Classification: Freshman 0.00%; Sophomore 18.94%; Junior 44.44%; Senior 30.81%; Graduate 2.02%; No response 3.79%.
Age: Mean 21.15; Standard deviation 2.96; No response 5.30%.
Gender: Male 55.30%; Female 39.65%; No response 5.05%.
First language is English: Yes 91.67%; No 2.27%; No response 6.06%.

a Seven students had dual majors. Those with at least one major in business are considered business majors.

3.1. Perceptions about computerized testing

Table 2 provides mean responses to the 18 Likert-scale and two yes/no questions, as well as the rescaled difference of each question's mean response from its respective neutral score.7 For discussion purposes, the questions are described to correspond to the numbering scheme of the survey instrument (i.e., Q1, Q2, ..., Q14). Rescaled differences are given to help emphasize how negative or positive a particular perception about computerized testing is.

7 The scale for Q13 is in the opposite direction of all other perception questions. For all perception questions but Q13, the lower the response, the more negative the perception. Therefore, responses to Q13 were reverse coded when performing the analysis described.
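The analysis reported in Table 2, a one-sample t-test of each question's mean against its neutral score, together with the reverse coding of Q13 noted in footnote 7, can be sketched in Python as follows. The sketch assumes SciPy is available; the response vectors are made-up stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

def perception_test(responses, neutral, flip=None):
    """One-sample t-test of a question's mean response against its neutral score.

    If flip is given, responses are reverse coded as flip - x (cf. footnote 7):
    flip=6 reverses a 1-5 Likert item; flip=3 reverses an item coded 1/2.
    """
    x = np.asarray(responses, dtype=float)
    if flip is not None:
        x = flip - x                  # reverse code so lower = more negative
    diff = x.mean() - neutral         # the "rescaled difference" shown in Table 2
    t, p = stats.ttest_1samp(x, popmean=neutral)
    return diff, t, p

# Hypothetical responses to a 1-5 Likert item (neutral score = 3):
print(perception_test([2, 3, 2, 1, 3, 2, 4, 2], neutral=3))

# Hypothetical yes/no item coded 1/2 (neutral = 1.5), reverse coded like Q13:
print(perception_test([1, 2, 2, 1, 1, 2, 2, 1], neutral=1.5, flip=3))
```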

Table 2. Mean student responses and differences of means from respective neutral score.

| Question | Brief description^a | Explanation for a rating of 5^b | Mean (standard deviation) (n = 223) | Difference of mean from neutral score^c | t-statistic for comparison of mean to neutral score |
|---|---|---|---|---|---|
| Q1 | Difficulty of computerized exams compared to paper-based exams | Much easier | 2.32 (1.02) | -0.68 | 9.82*** |
| Q2 | Scope of material that can be tested | Strongly expands | 2.60 (0.85) | -0.40 | 7.11*** |
| Q3 | Perceived quality of grade earned | Strongly improves | 2.33 (0.93) | -0.67 | 10.74*** |
| Q4a | Flexibility of scheduling and taking exams | Very positive | 4.74 (0.58) | +1.74 | 44.99*** |
| Q4b | Prompt feedback of results | Very positive | 3.66 (1.26) | +0.66 | 7.82*** |
| Q4c | Ability to make educated guesses of answers because of objective format | Very positive | 3.27 (1.00) | +0.27 | 4.02*** |
| Q4d | Elimination of essay questions or long-form problems | Very positive | 3.73 (1.20) | +0.73 | 9.00*** |
| Q4e | Elimination of judgment in grading | Very positive | 2.12 (1.15) | -0.88 | 11.37*** |
| Q4f | Elimination of partial credit in grading | Very positive | 1.45 (0.69) | -1.55 | 33.35*** |
| Q4g | Required knowledge about computers when taking a computerized exam | Very positive | 3.03 (0.77) | +0.03 | 0.52 |
| Q4h | Elimination of in-class return and review of exam | Very positive | 1.89 (1.02) | -1.11 | 16.24*** |
| Q5 | Impact on student's stress and anxiety | Strongly reduces | 2.42 (0.89) | -0.58 | 9.75*** |
| Q6 | Impact on opportunity to cheat | Much more difficult | 3.89 (1.01) | +0.89 | 13.08*** |
| Q7 | Advantage of taking exam later than earlier in test window provided | Have an advantage | 3.53 (0.84) | +0.53 | 9.44*** |
| Q8 | Impact of looking at computer screen on exam performance | Very positive effect | 2.25 (0.78) | -0.75 | 14.40*** |
| Q9 | Ability to make notes on exam on exam performance | Very positive effect | 1.93 (0.80) | -1.07 | 19.95*** |
| Q10 | Ability to preview exam and budget time on exam performance | Very positive effect | 2.56 (1.03) | -0.44 | 6.38*** |
| Q11 | Ability to scan through/review unanswered questions on exam performance | Very positive effect | 2.49 (0.83) | -0.51 | 9.24*** |
| Q12 | Whether other forms of tests should also be given | 1 = Yes; 2 = No | 1.73 (0.45) | +0.23 | 7.50*** |
| Q13 | Accuracy of computerized exams measuring student learning | 1 = Yes; 2 = No | 1.50 (0.50) | 0.00 | 0.13 |

a See Appendix for complete question descriptions.
b Q1–Q11 are answered on a Likert scale of 1–5, with a neutral score of 3. Q12 and Q13 are coded 1 for 'Yes' and 2 for 'No', with a neutral score of 1.5.
c The standard deviations of the differences in this column are the same as the respective means shown in the previous column.
* p-value < 0.05. ** p-value < 0.01. *** p-value < 0.0001.

Responses to 11 of the 20 questions reflect significantly negative perceptions, responses to seven reflect significantly positive perceptions, and responses to two are neutral. All significant mean differences except those for three questions (Q2, Q4c, and Q10) are more than half a point from the respective neutral score. All significant means are therefore meaningful to discuss for gaining insight into student perceptions about computerized testing in an introductory accounting course.

With respect to negative perceptions, students perceived that computerized tests: (a) are more difficult to complete (Q1); (b) limit the scope of test material (Q2); (c) weaken the perceived quality of the grade earned (Q3); and (d) create greater stress and anxiety (Q5). Students negatively perceived that computerized tests eliminate: (a) judgment in grading because of the objective test format (Q4e); (b) partial credit (Q4f); and (c) in-class return and review (Q4h). Students also negatively perceived: (a) looking at a computer screen for extended time periods (Q8); (b) the inability to make notes on the exam (Q9); (c) the inability to quickly preview the entire exam and budget one's time (Q10); and (d) the inability to scan and review "marked" or "unanswered" questions (Q11). Perceptions about the elimination of partial credit (Q4f), the inability to make notes (Q9), and the loss of in-class return of exams (Q4h) were the most negative, as shown by the rescaled differences in Table 2.

With respect to positive perceptions, students perceived: (a) greater scheduling and exam-taking flexibility (Q4a); (b) more prompt feedback than with paper-based tests (Q4b); (c) an ability to make educated guesses because of the objective test format (Q4c); and (d) elimination of essay questions or long-form problems (Q4d). Students also perceived that those taking an exam later in the test window have an advantage over those taking it earlier (Q7), which may be due to having more time to study, as well as to learning information about the exam from those who took it earlier. Even with this perception, students perceived more difficulty in finding opportunities to cheat (Q6), a finding that should please educators. Q12 required a yes/no response, coded 1 for 'yes' and 2 for 'no'; because the mean equals 1 plus the proportion answering 'no', the mean of 1.73 indicates that approximately 73% of students did not favor separate tests, which would include handwritten essay or long-form problems. This perception is consistent with the one favoring the elimination of essay questions or long-form problems (Q4d). Of all positive perceptions, students were most positive about scheduling and exam-taking flexibility (Q4a), as shown by the rescaled differences in Table 2.

Students reported two neutral perceptions: (1) the need to possess computer knowledge for taking computerized exams (Q4g), and (2) whether computerized exams accurately measure learning (Q13). While just one question, Q13 could be deemed among the most important in the survey because a perception that exams do not accurately measure learning could have an undermining effect on a course. This question required a yes/no response; the mean of 1.50 indicates that students were equally split on whether computerized exams accurately measure learning. While statistically neutral, the mean response can be viewed in essence as a negative perception because it is not positive.
Students also were asked to answer an open-ended question (Q14) to elaborate on their perceptions. A summary count by response type is shown in Table 3. Approximately 60% of students (135/223) answered this question, and 67% of those who answered gave a negative response. The most common negative responses were categorized as follows: (a) the desire for paper and pencil to make notes and hand computations; (b) concern about the loss of partial credit due to the objective question format; (c) stress; (d) having to look at a computer screen; and (e) instructor absence from the testing facility. Representative comments include "Can't show your work on the computer. What if I pick the wrong answer but have the work right???" and "What if I have a question during the exam? I can't ask my teacher." Roughly 22% of those who answered the question gave a positive response, primarily about exam-taking flexibility (e.g., "Can schedule exam when I want" or "We took them online in biology and it was okay.").

Table 3. Summary of open-ended responses elaborating on perceptions about computerized tests.^a

| Nature of response | Number (percentage) of students of Instructor #1 | Number (percentage) of students of Instructor #2 | Number (percentage) of students of Instructor #3 | Total number (percentage) of students |
|---|---|---|---|---|
| Positive | 17 (37.0) | 10 (18.5) | 3 (8.6) | 30 (22.2) |
| Negative | 28 (60.9) | 35 (64.8) | 28 (80.0) | 91 (67.4) |
| Neutral | 1 (2.1) | 9 (16.7) | 4 (11.4) | 14 (10.4) |
| Total responses | 46 (100.0) | 54 (100.0) | 35 (100.0) | 135 (100.0) |
| No response | 19 | 64 | 5 | 88 |
| Total students | 65 | 118 | 40 | 223 |

a See Q14 in the Appendix for complete question description.


3.2. Instructor effect

Further analyses were performed to determine whether student perceptions differed by instructor. No prior study has considered differences across instructors. There were no a priori expectations of an instructor effect in this study because of the experience and record of positive student evaluations of each of the three instructors teaching the managerial accounting course. The analyses, however, show a distinct pattern of differences in student perceptions about computerized testing by instructor.

Table 4 shows the rescaled difference of each mean from the respective neutral score by instructor, along with the sum of the differences of means for each instructor. Significant differences between two or more instructors are noted for 14 of the 20 questions (p-value < 0.05), as well as among the sums of the differences of means by instructor (p-value < 0.01). Comparison of each sum to a neutral benchmark of zero across all 20 questions shows that students of Instructor #1 had a neutral perception (-0.35, p-value of 0.743), students of Instructor #2 had a slightly negative perception (-3.82, p-value < 0.0001), and students of Instructor #3 had a more negative perception (-8.45, p-value < 0.0001). While the sum of differences for students of Instructor #1 is approximately zero and the sum for students of Instructor #2 is slightly less than zero, the sum for students of Instructor #3 is almost half a point below the neutral score per question (-8.45/20). A distinct difference exists among student perceptions by instructor, indicating that the instructor may influence student perceptions about computerized testing.

4. Managing student perceptions about computerized testing

In response to this study's findings, the three instructors sought to better manage student perceptions about computerized testing in the introductory managerial accounting course. Their adjusted behaviors are described in this section as suggestions to accounting educators for managing student perceptions about computerized testing in introductory courses.

The three instructors recognize that potential student perceptions should be evaluated before computerized testing is used, which helps anticipate issues that may arise. A proactive stance is especially important when a course is taught by multiple instructors. Formal discussion among instructors makes each uniformly aware of how student perceptions can affect the class environment, which should result in the adoption of a consistent approach to managing perceptions about computerized testing in all sections of the course. The instructors discuss the computerized-testing environment with students in class at the beginning of the semester, before the first exam, and as concerns arise during the semester. The benefits of computerized testing are emphasized at the beginning of the course. If students raise concerns, instructors focus on how those concerns are addressed.
Because some students perceived that the test window offered grade advantages to late exam takers, efforts are made to ensure equal opportunity to obtain preferred times within the window. In response to concerns about partial credit, two actions were taken. First, the instructors incorporate blended exams that are substantially computerized but include some traditional problems to be answered by hand; partial credit can be earned on these problems. Second, instructors use some computational multiple-choice questions whose incorrect choices are partially correct and carry some point value.

Elimination of the in-class return of exams is one negative perception that can be challenging to minimize, especially if the final exam is comprehensive. Exams cannot be returned or reviewed in class because each student's exam is unique. Students may review exams in their instructor's office. In addition, a formalized system was established in which a graduate student assistant holds blocks of office hours following each exam dedicated to individualized exam reviews.
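The partial-credit device just described, computational multiple-choice items whose distractors represent partially correct work, amounts to attaching a point value to each choice rather than scoring all-or-nothing. A minimal sketch follows; the choices and point values are invented for illustration, not taken from the course.

```python
# Hypothetical point values for one computational multiple-choice item whose
# distractors reflect partially correct work (all-or-nothing scoring would
# award only 10 or 0).
PARTIAL_CREDIT = {
    "A": 10.0,  # fully correct: right approach, right arithmetic
    "B": 6.0,   # right approach, one computational slip
    "C": 3.0,   # plausible intermediate value mistaken for the final answer
    "D": 0.0,   # unrelated error
}

def score_item(choice):
    """Points earned for the selected choice (0.0 if unanswered or unrecognized)."""
    return PARTIAL_CREDIT.get(choice, 0.0)

print(score_item("B"))  # 6.0
```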


Table 4. Differences of means from respective neutral score by instructor.

| Question | Brief description^a | Explanation for a rating of 5^b | Instructor #1 difference of mean from neutral score (standard deviation) (n = 65)^c | Instructor #2 difference of mean from neutral score (standard deviation) (n = 118)^c | Instructor #3 difference of mean from neutral score (standard deviation) (n = 40)^c |
|---|---|---|---|---|---|
| Q1 | Difficulty of computerized exams compared to paper-based exams | Much easier | -0.26 (0.94) | -0.78 (1.00) | -1.08 (0.97) |
| Q2 | Scope of material that can be tested | Strongly expands | -0.19 (0.88) | -0.50 (0.82) | -0.50 (0.82) |
| Q3 | Perceived quality of grade earned | Strongly improves | -0.43 (1.04) | -0.67 (0.82) | -1.10 (0.90) |
| Q4a | Flexibility of scheduling and taking exams | Very positive | +1.72 (0.48) | +1.83 (0.49) | +1.52 (0.85) |
| Q4b | Prompt feedback of results | Very positive | +0.92 (1.08) | +0.65 (1.28) | +0.24 (1.39) |
| Q4c | Ability to make educated guesses of answers because of objective format | Very positive | +0.40 (0.91) | +0.22 (1.05) | +0.20 (1.00) |
| Q4d | Elimination of essay questions or long-form problems | Very positive | +1.02 (1.10) | +0.69 (1.20) | +0.35 (1.29) |
| Q4e | Elimination of judgment in grading | Very positive | -0.65 (1.16) | -0.89 (1.12) | -1.25 (1.17) |
| Q4f | Elimination of partial credit in grading | Very positive | -1.28 (0.89) | -1.59 (0.60) | -1.90 (0.30) |
| Q4g | Required knowledge about computers when taking a computerized exam | Very positive | +0.04 (0.64) | +0.05 (0.89) | -0.05 (0.55) |
| Q4h | Elimination of in-class return and review of exam | Very positive | -0.78 (1.21) | -1.16 (0.91) | -1.50 (0.85) |
| Q5 | Impact on student's stress and anxiety | Strongly reduces | -0.38 (0.86) | -0.61 (0.85) | -0.80 (0.99) |
| Q6 | Impact on opportunity to cheat | Much more difficult | +1.00 (0.92) | +0.90 (1.01) | +0.68 (1.16) |
| Q7 | Advantage of taking exam later than earlier in test window provided | Have an advantage | +0.49 (0.73) | +0.53 (0.87) | +0.58 (0.90) |
| Q8 | Impact of looking at computer screen on exam performance | Very positive effect | -0.58 (0.79) | -0.79 (0.76) | -0.90 (0.78) |
| Q9 | Ability to make notes on exam on exam performance | Very positive effect | -0.97 (0.73) | -1.08 (0.80) | -1.20 (0.91) |
| Q10 | Ability to preview exam and budget time on exam performance | Very positive effect | -0.29 (0.98) | -0.39 (1.00) | -0.82 (1.11) |
| Q11 | Ability to scan through/review unanswered questions on exam performance | Very positive effect | -0.51 (0.81) | -0.44 (0.85) | -0.75 (0.74) |
| Q12 | Whether other forms of tests should also be given | 1 = Yes; 2 = No | +0.30 (0.40) | +0.20 (0.46) | +0.17 (0.48) |
| Q13 | Accuracy of computerized exams measuring student learning | 1 = Yes; 2 = No | +0.08 (0.50) | 0.00 (0.50) | -0.15 (0.48) |
| Sum (standard deviation) of the differences of means^d | | | -0.35 (8.01) | -3.82 (7.83) | -8.45 (7.22) |

a See the Appendix for complete question descriptions.
b Q1–Q11 are answered on a Likert scale of 1–5, with a neutral score of 3. Q12 and Q13 are coded 1 for 'Yes' and 2 for 'No', with a neutral score of 1.5.
c Bolded and italicized differences denote that one or more differences significantly differ from one or more other differences across instructors for a particular question at p < 0.05.
d Bolded and italicized sums of the differences denote that one or more sums significantly differ from one or more other sums across all instructors at p < 0.01.
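The instructor-level analysis summarized in Table 4 sums each student's 20 rescaled differences and compares each instructor's sum with a neutral benchmark of zero. The paper does not publish its per-student data or name its exact test procedures, so the sketch below uses random stand-in data (calibrated to Table 4's reported sums) and assumes one-sample t-tests against zero plus two-sample t-tests for the pairwise instructor comparisons:

```python
import numpy as np
from scipy import stats

# Random stand-ins for per-student rescaled differences: rows = students,
# columns = 20 questions (response minus neutral score, Q13 reverse coded).
rng = np.random.default_rng(0)
diffs_by_instructor = {
    "Instructor #1": rng.normal(-0.02, 0.4, size=(65, 20)),
    "Instructor #2": rng.normal(-0.19, 0.4, size=(118, 20)),
    "Instructor #3": rng.normal(-0.42, 0.4, size=(40, 20)),
}

for name, diffs in diffs_by_instructor.items():
    sums = diffs.sum(axis=1)                      # each student's 20-question sum
    t, p = stats.ttest_1samp(sums, popmean=0.0)   # neutral benchmark: sum of zero
    print(f"{name}: mean sum = {sums.mean():+.2f}, t = {t:.2f}, p = {p:.4f}")

# A pairwise comparison, e.g. Instructor #1 versus Instructor #3:
t, p = stats.ttest_ind(diffs_by_instructor["Instructor #1"].sum(axis=1),
                       diffs_by_instructor["Instructor #3"].sum(axis=1))
print(f"Instructor #1 vs. #3: t = {t:.2f}, p = {p:.4f}")
```

With the per-question means chosen above, the expected per-student sums are roughly -0.4, -3.8, and -8.4, echoing the reported sums of -0.35, -3.82, and -8.45.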

As another way to ease concerns about computerized testing, instructors use class time to review a sample exam that demonstrates the different formats that can appear on the exams.

5. Limitations and suggestions for future research

One limitation of this study is that the results may not be generalizable because the study was conducted at a single university. The results are also limited to measuring student perceptions after two computerized exams in one semester. Future research should compare perceptions before and after the exams to offer insight into how perceptions change with experience.

Students in this study were pursuing either a major or a minor in business. A study of computerized testing in a course taken exclusively by accounting majors may be of more interest to educators who want to give students insight into the environment of computerized professional exams. Those results could be directly compared with the findings of Peterson and Reider (2002) regarding the perceptions of candidates for professional accounting-related exams.

If computerized testing is implemented in multiple courses in an accounting curriculum, a longitudinal study of student perceptions may provide useful insights. Such a study could help determine whether particular perceptions, such as the perception that computerized testing accurately measures a student's learning, become more positive with increased experience in computerized test settings. Another longitudinal study could measure student perceptions within a specific course using computerized testing over multiple semesters. Insights could be gained as to whether student perceptions improve with instructor experience and with modifications to the computerized test environment.

Some analyses in this study combined data from 20 survey questions. These analyses assumed that responses can equally offset each other, yet some aspects of computerized testing may be more important to students than others. Future studies could determine whether students weigh or rank the importance of perceptions differently, which would help educators when designing and using computerized tests.

This study was exploratory in nature. Studies that test formal hypotheses based on theory are essential for explaining how and why computerized testing impacts accounting courses. The potential interest in computerized testing by educators calls for study of how computerized testing can affect accounting course delivery, including student perceptions of the test environment.


Appendix. Survey instrument

The Department of Accounting is implementing computerized testing this semester in Introductory Managerial Accounting. We are interested in your personal views related to computerized testing now that you have had some experience with it. Please circle the number that corresponds to the best answer to each question.


References

Apostolou, B. A., Watson, S. F., Hassell, J. M., & Webber, S. A. (2001). Accounting education literature review (1997–1999). Journal of Accounting Education, 19(1), 1–61.
Boyce, G. (1999). Computer-assisted teaching and learning in accounting: Pedagogy or product? Journal of Accounting Education, 17(2–3), 191–220.
Bugbee, A. C. (1992). Examination on demand: Findings in ten years of testing by computer 1982–1991. Edina, MN: TRO Learning.
Bunderson, C. V., Inouye, D. K., & Olsen, J. B. (1989). The four generations of computerized educational measurement. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 367–407). New York: The Macmillan Company.
Butler, D. L. (2003). The impact of computer-based testing on student attitudes and behavior. The Technology Source Archives at the University of North Carolina, January/February. Accessed 26.01.2010.
deLange, P., Suwardy, T., & Mavondo, F. (2003). Integrating a virtual learning environment into an introductory accounting course: Determinants of student motivation. Accounting Education, 12(1), 1–14.
Grist, S., Rudner, L., & Wise, L. (1989). Computer adaptive tests. ERIC Clearinghouse on Tests, Measurement, and Evaluation. Washington, DC: American Institute for Research.
Hambleton, R. K., Zaal, J. N., & Pieters, J. M. (1991). Computerized adaptive tests: Theory, applications, and standards. In R. K. Hambleton & J. N. Zaal (Eds.), Advances in educational and psychological testing (pp. 341–366). Boston: Kluwer.
McBride, J. R. (1985). Computerized adaptive testing. Educational Leadership, 43(2), 25–28.
Peterson, B. K., & Reider, B. P. (2002). Perceptions of computer-based testing: A focus on the CFM examination. Journal of Accounting Education, 20(4), 265–284.
Questionmark™ Perception™. (2009). Accessed 26.01.2010.
Rebele, J. E., Apostolou, B. A., Buckless, F. A., Hassell, J. M., Paquette, L. R., & Stout, D. E. (1998a). Accounting education literature review (1991–1997), part I: Curriculum and instructional approaches. Journal of Accounting Education, 16(1), 1–51.
Rebele, J. E., Apostolou, B. A., Buckless, F. A., Hassell, J. M., Paquette, L. R., & Stout, D. E. (1998b). Accounting education literature review (1991–1997), part II: Students, educational technology, assessment, and faculty issues. Journal of Accounting Education, 16(2), 179–245.
Sereci, S. G. (2003). Computerized adaptive testing: An introduction. In J. E. Wall & G. R. Walz (Eds.), Measuring up: Assessment issues for teachers, counselors, and administrators (pp. 685–694). Greensboro: CAPS Press.
Watson, S. F., Apostolou, B. A., Hassell, J. M., & Webber, S. A. (2003). Accounting education literature review (2000–2002). Journal of Accounting Education, 21(4), 267–327.
Watson, S. F., Apostolou, B. A., Hassell, J. M., & Webber, S. A. (2007). Accounting education literature review (2003–2005). Journal of Accounting Education, 25(1), 1–58.
Wise, S. L., & Plake, B. S. (1989). Research on the effects of administering tests via computers. Educational Measurement: Issues and Practice, 8, 5–10.