Pergamon
Journal of Accounting Education, Vol. 14, No. 1, pp. 1-16, 1996. Copyright © 1996 Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0748-5751/96 $15.00 + 0.00
WHAT CORPORATE AMERICA WANTS IN ENTRY-LEVEL ACCOUNTANTS: SOME METHODOLOGICAL CONCERNS

Bart P. Hartman, Louisiana State University
Jack M. Ruhl, Western Michigan University
Abstract: In September 1993, the Institute of Management Accountants (IMA) and the Financial Executives Institute (FEI) commissioned the Gary Siegel Organization (hereafter GSO) to carry out a study that was ultimately titled "What Corporate America Wants in Entry-level Accountants." The GSO employed a research approach that included focus group interviews, telephone interviews, and mail questionnaires. Central to the GSO study is an assessment of accounting knowledge and skills areas (AKSAs): 15 broad accounting knowledge and skills areas selected from the content specifications of professional accounting examinations and from major topic areas taught in college accounting courses. The research results were reported at the Annual Meeting of the American Accounting Association in August 1994, and published as a monograph as well as in Management Accounting. According to a joint IMA/FEI position statement, "The results show that Corporate America believes universities are doing a less than adequate job of preparing people for entry-level work in management accounting" (p. 25). Consistent with these conclusions, the IMA/FEI position statement continues, "University accounting programs must be restructured to respond to the needs of the corporate customer" (p. 25). As a way of measuring the service quality gap with respect to the AKSAs, the GSO apparently tried to adapt a perceptions-minus-expectations research methodology used to measure service quality, referred to as SERVQUAL. Unfortunately, the GSO study contains serious methodological flaws that call into question the conclusions drawn from the study. Accounting educators and administrators need to exercise caution in using the GSO study results as the basis for wholesale restructuring of the accounting curriculum. At the least, additional studies ought to be conducted to either confirm or disconfirm the results reported by the GSO. In this paper, we outline a more acceptable method for using the SERVQUAL perceptions-minus-expectations framework.
INTRODUCTION

The business environment of corporate America is changing radically and rapidly. Unquestionably, accounting education must also change to accommodate the needs of corporate clientele. Enhancement of opportunities for career success in the fast-moving corporate environment necessitates continuous curricular modification in accounting programs.

In recognition of the changing business environment and the need for curriculum modification to accommodate the new demands of the corporate workplace, the Institute of Management Accountants (IMA) and the Financial Executives Institute (FEI) sought to identify specific areas of concern in accounting education. Hence, in 1993, the IMA and FEI jointly commissioned the Gary Siegel Organization (GSO) to design and execute a study ultimately titled "What Corporate America Wants in Entry-level Accountants." The purpose of this commissioned study was to determine the expectations and needs of corporate employers. Such a determination clearly would serve corporate employers as well as students. Importantly, the results could have a significant impact in guiding accounting curriculum revision and foster a partnering of corporate needs and accounting education. This is noteworthy in view of anecdotal evidence related by the GSO study suggesting that some corporate employers are so dissatisfied with the preparation that new graduates receive from typical undergraduate accounting programs that they prefer to hire graduates of 2-year associate programs for management accounting positions. On the other hand, the GSO study suggests that other executives perceive a need for management accountants to have more, not less, accounting education because of the increasingly complex business environment as well as changes in management accounting.

Unfortunately, the GSO study itself does not achieve the objectives or purpose of its sponsors. Before investing resources in curriculum changes recommended by the study, educators should consider several methodological problems with the study and how these affect its conclusions and policy recommendations. These methodological concerns are discussed in the remainder of this paper.
THE GSO STUDY

The GSO study (1994) reports mail and telephone survey results obtained from high-level finance and accounting executives from a broad cross-section of industries polled over a 3-month period. Mail survey results are based on 794 responses from a random sample of members of the IMA, the FEI, and the AICPA. Telephone survey results reflect the responses of 61 randomly selected members of the IMA or FEI. Central to the study is an assessment of 15 accounting knowledge and skills areas (AKSAs), which are listed in Exhibit 1.

Exhibit 1
Listed below are a number of broad accounting skills and areas of accounting knowledge. These were drawn from the content specifications of professional accounting examinations and from major topic areas taught in college accounting courses. For each skill or knowledge area, please indicate:
(a) How important it is for an entry-level management accountant to possess that knowledge or skill.
(b) The depth of knowledge necessary for an entry-level management accountant.
For importance, please enter a number from 0-to-100, where zero is not important at all and 100 is extremely important. For depth of knowledge, please enter A, B, or C, where A indicates a basic level of knowledge, comprehension, and application; B is a moderate level; and C is the deepest level.

Skill/knowledge area (two response columns: Importance, 0-to-100; Depth of knowledge, A-B-C)
(1) Product and service costing
(2) Internal auditing
(3) External auditing
(4) Individual income taxation
(5) Corporate income taxation
(6) Accounting for government and not-for-profit organizations
(7) Information systems development and design
(8) Asset management and planning (including capital budgeting)
(9) Working capital management
(10) Long-term financing and capital structure
(11) Control and performance evaluation
(12) Consolidated financial statements
(13) Strategic cost management (including ABC)
(14) Budgeting
(15) Pronouncements of the FASB

In reference to the GSO study (1994), a joint IMA/FEI position statement says, "The results show that Corporate America believes universities are doing a less than adequate job of preparing people for entry-level work in management accounting" (Joint IMA/FEI Position Statement, p. 25). The same document further states, "University accounting programs must be restructured to respond to the needs of the corporate customer. . . . NOW is the time to begin the process of change. . . . We are confident that the members of IMA and FEI, working in partnership with Academic America, can develop solutions to the problems identified in the study and thereby improve the educational preparation of entry-level management accountants. . . ." (p. 25). The GSO study (1994) targets specific change facilitators; in the foreword to their monograph, Siegel and Sorenson write, "We urge the leaders of the
American Assembly of Collegiate Schools of Business (AACSB), the Administrators of Accounting Programs Group of the American Accounting Association (AAA), and special curricula change committees within the AAA to accelerate their curriculum redevelopment efforts and refocus them, in part, to meet Corporate America's needs" (Siegel & Sorenson, 1994b, p. iii).

The GSO study purports to identify specific areas of deficiency in management accounting education that should be used as a basis for directing curriculum revision. Our paper questions the methodology used in the GSO study, particularly the portion of the study which addresses accounting knowledge and skills areas (AKSAs). We also question the conclusions drawn and policy recommendations made by the GSO researchers.

METHODOLOGY OF CHOICE IS SERVQUAL

SERVQUAL and the service quality construct have been extensively discussed in the marketing literature (see, for example, Babakus & Boller, 1992; Carman, 1992; Cronin & Taylor, 1992; Parasuraman, Zeithaml, & Berry, 1988, 1991). Like some other researchers (e.g., Reidenbach & Sandifer-Smallwood, 1990; Babakus & Mangold, 1992), the GSO clearly adapted the SERVQUAL methodology, but did so in a way that renders their conclusions highly suspect. A comparison of SERVQUAL and the GSO methodology used in assessing the AKSAs will clarify our concerns.

The AKSA portion of the GSO study (reproduced in Exhibits 1 and 2) is essentially similar to a service quality study employing a perceptions-minus-expectations framework (Teas, 1993). Exhibit 1 shows how respondents indicated the importance of 15 accounting skill/knowledge areas using an importance scale of 0-to-100, followed by an indication of the depth of knowledge necessary for an entry-level management accountant. Exhibit 2 shows how respondents then compared the degree of preparation expected versus the preparation the entry-level person brings to the job.
The service quality literature and the SERVQUAL instrument can be used to frame a discussion of the GSO study. Service quality can be defined as "the extent of discrepancy between customers' expectations or desires and their perceptions" (Zeithaml, Parasuraman, & Berry, 1990, p. 19). Using this framework, a service quality gap exists when perceived service quality differs from expected service quality. In the services marketing literature, perceptions (P) are defined as consumers' beliefs concerning the service received (Parasuraman et al., 1985). Expectations (E) are defined by Parasuraman et al. (1988) as "desires or wants of consumers, i.e., what they feel a service provider should offer rather than would offer" (p. 17). The P-E gap represents a comparison with a norm, and P-E equals service quality. Exceeding the norm means high quality is received; falling short of the norm means low quality is received. The traditional method of operationalizing the P-E gap is to obtain perception and expectation scores on a variety of service attributes. The SERVQUAL instrument, based on work done by Parasuraman et al. (1988, 1991), can be used to assess this gap.

Exhibit 2
Listed below are a number of accounting skills and areas of accounting knowledge. For each, please indicate:
(a) The degree of preparation you would like an entry-level management accountant to have.
(b) The degree of preparation the typical entry-level management accountant brings to the job.
In each column, please enter a number from 0-to-100, where zero is no preparation at all and 100 is the highest degree of preparation.

Skill/knowledge area (two response columns: (A) You expect, 0-to-100; (B) Brings to job, 0-to-100)
(1) Product and service costing
(2) Internal auditing
(3) External auditing
(4) Individual income taxation
(5) Corporate income taxation
(6) Accounting for government and not-for-profit organizations
(7) Information systems development and design
(8) Asset management and planning (including capital budgeting)
(9) Working capital management
(10) Long-term financing and capital structure
(11) Control and performance evaluation
(12) Consolidated financial statements
(13) Strategic cost management (including ABC)
(14) Budgeting
(15) Pronouncements of the FASB

When the IMA and FEI commissioned the GSO study, the objectives and purpose of the study fit the SERVQUAL methodology exactly. Expectations of workplace preparedness for entering management accountants, compared with perceptions of their preparedness, would provide an indication of the "preparation gap". This preparation gap could then
be used as the basis for policy recommendations and curriculum revisions in colleges and universities. Indeed, this is what the GSO researchers tried to do, but without examining the proper means of using the SERVQUAL methodology. Instead, they seemed to design questionnaires without regard to construct validity or to scale validity.
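The perceptions-minus-expectations arithmetic behind a "preparation gap" is simple: each item's gap is the perception score minus the expectation score. A minimal sketch follows; the item names and 0-to-100 scores are hypothetical illustrations, not data from the GSO study.

```python
# Sketch of the perceptions-minus-expectations (P-E) calculation described
# above. Item names and 0-to-100 scores are hypothetical illustrations,
# not data from the GSO study.

def preparation_gaps(expectations, perceptions):
    """Return the P-E gap per item; a negative gap means preparation
    falls short of the desired norm."""
    return {item: perceptions[item] - expectations[item] for item in expectations}

# Desired preparation (E) and perceived preparation (P), both 0-to-100
expectations = {"Budgeting": 85, "Product and service costing": 80}
perceptions = {"Budgeting": 60, "Product and service costing": 70}

for item, gap in preparation_gaps(expectations, perceptions).items():
    print(f"{item}: {gap:+d}")
```

The methodological concerns that follow are not about this arithmetic, which is trivial, but about whether the numbers fed into it are valid and interpretable.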
Developing SERVQUAL

Zeithaml et al. (1990) write that the first step in developing an instrument to measure service quality is to determine the dimensions of the service quality construct. A construct is an abstract variable that is reflected in a domain of more concrete, observable variables. Examples of psychological constructs include intelligence and anxiety. Constructs are abstract in that they are not isolated, observable dimensions of behavior (Nunnally, 1978). For instance, there are a number of dimensions of the construct intelligence (a verbal dimension, a mathematical dimension, etc.) which must be assessed before making a global judgment of an individual's intelligence. A researcher assessing an individual's intelligence must assess all the dimensions of the intelligence construct before claiming to have made a global assessment of intelligence. Further, the intelligence tester may not test a dimension which is not related to intelligence (e.g., height) and include it as part of an overall assessment of intelligence.

Service quality is a construct because it is not directly observable; the level of service quality is inferred from individuals' responses to a number of concrete measures of dimensions related to the service quality construct (Parasuraman et al., 1988, 1991; Zeithaml et al., 1990; Cronin & Taylor, 1994). As with the intelligence construct, researchers must determine the dimensions of service quality before claiming that they have made a global assessment of the service quality of an individual firm or a group of firms. Importantly, they must not include dimensions which do not relate to the service quality construct in question.

In developing the SERVQUAL instrument, Parasuraman et al. (1988) began by conducting interviews with 12 focus groups (three groups from each of four service sectors) to determine the dimensions of the service quality construct.
The purpose of the focus groups was to determine the dimensions of the service quality construct as perceived by service customers. Parasuraman et al. (1988) screened each potential focus group member to ensure that each had a service transaction within the 12 months preceding the focus group session. It was important to ensure that focus group members had transactions with service providers, since Parasuraman wished to determine the dimensions of service quality from the perspective of the
user of services. As a result of these focus group interviews, Parasuraman et al. (1988) initially detected 10 dimensions of service quality and devised a 97-item instrument to measure these 10 dimensions. It is typical in the early stages of instrument development to have many items and many dimensions (Aiken, 1985; Cronbach, 1990). It is only after the application of multivariate techniques, such as exploratory and confirmatory factor analysis, that researchers can begin to better identify the dimensions of a construct and the items which load on each dimension.

Using such techniques, Parasuraman et al. (1988, 1991) and later Cronin and Taylor (1994) identified five dimensions of service quality: tangibles (e.g., physical appearance of facilities), reliability (e.g., ability to perform the service dependably and accurately), responsiveness (e.g., willingness to help customers), assurance (e.g., knowledge and courtesy of employees), and empathy (e.g., caring attention given to customers). The SERVQUAL instrument includes multiple items relating to each of the five dimensions; for instance, items 1-3 shown in Exhibit 3 relate to the tangibles dimension. It is important to note that it is generally advisable to have two or more items relating to each identified dimension, since single-item measures are notoriously unreliable (Nunnally, 1978).

The core of the SERVQUAL instrument, then, is two 22-item scales developed and then refined by Parasuraman et al. (1988, 1991). One scale measures perceptions and the other measures expectations. Exhibit 3 shows a sample of perception and expectation items; Panel A shows three of the four expectations items relating to the tangibles dimension.

Exhibit 3
Panel A (expectations; each item rated 1-7, strongly disagree to strongly agree)
(1) Excellent companies will have modern-looking equipment
(2) The physical facilities at excellent companies will be visually appealing
(3) Employees at excellent companies will be neat-appearing
Panel B (perceptions; each item rated 1-7, strongly disagree to strongly agree)
(1) XYZ Company has modern-looking equipment
(2) XYZ Company's physical facilities are visually appealing
(3) XYZ Company's employees are neat-appearing

One such expectation item is "Excellent companies will have modern-looking equipment." The corresponding perception item, shown in Panel B, is "XYZ Company has modern-looking equipment." A seven-point scale accompanies each item, so perception and expectation scores each range from 1 to 7. Just as in the GSO study, SERVQUAL studies report P-E gaps, which are the service quality scores.

A third section of SERVQUAL lists the five dimensions of service quality and asks respondents to allocate 100 points across the five dimensions, allocating the most points to the dimension considered most important. The five SERVQUAL dimensions are a concise representation of the core criteria that customers employ in evaluating service quality (Zeithaml et al., 1990). As part of this third section, respondents are asked to indicate specifically which of the five dimensions is the most important, followed by the second most important, and finally the least important. This subsection is useful in identifying respondents making thoughtless or careless responses.
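The pieces just described, per-dimension P-E gaps on the seven-point scale and the 100-point importance allocation, are often combined into an overall weighted score. The sketch below illustrates one common weighting scheme with hypothetical numbers; it is not presented as the only scoring rule used in the SERVQUAL literature.

```python
# Sketch of combining SERVQUAL's pieces into one overall score: each
# dimension's P-E gap (items are rated 1-7, so gaps run from -6 to +6)
# weighted by the respondent's 100-point importance allocation.
# All numbers are hypothetical.

def weighted_servqual(gaps, weights):
    """Weighted mean of dimension-level gaps; weights must total 100 points."""
    assert sum(weights.values()) == 100, "allocation must total 100 points"
    return sum(gaps[d] * weights[d] for d in gaps) / 100

gaps = {"tangibles": -1.0, "reliability": -2.0, "responsiveness": 0.5,
        "assurance": -0.5, "empathy": 1.0}
weights = {"tangibles": 10, "reliability": 30, "responsiveness": 25,
           "assurance": 20, "empathy": 15}

print(weighted_servqual(gaps, weights))  # negative: service falls short overall
```

Note how the importance weights only become interpretable because each dimension is rated relative to the others out of a fixed budget of 100 points, a point that matters in the critique of the GSO importance scale below.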
GSO'S APPROACH

As indicated above, the GSO study employs a perceptions-minus-expectations framework and calculates difference scores (Peter, Churchill, & Brown, 1993). The GSO study in essence attempts to assess the quality of management accounting education and is in certain ways similar to the SERVQUAL instrument. We suggest that the GSO would have been better served if they had followed the examples set by the service quality researchers with respect to construct validation, the development of the instrument, and the interpretation of the study's results.
Construct Validation and Instrument Development

The accepted procedures for construct validation and research instrument development are outlined by Nunnally (1978). Nunnally suggests that there are three aspects to construct validation:

(1) specifying the domain of observables related to the construct; (2) from empirical research and statistical analyses, determining the extent to which the observables tend to measure the same thing; and (3) subsequently performing studies of individual differences and/or controlled experiments to determine the extent to which supposed measures of the construct produce results which are predictable from highly accepted theoretical hypotheses concerning the construct. Aspect 3 consists of determining whether a supposed measure of a construct correlates in expected ways with measures of other constructs or is affected in expected ways by particular experimental treatments. (p. 98)
Apparently the GSO did not complete even the first step of validating the construct they call accounting knowledge and skills areas (AKSAs). As indicated in Exhibit 1, the domain of observables was drawn "from the content specifications of professional accounting examinations and from major topic areas taught in college accounting courses" (Siegel & Sorenson, 1994, p. 68). The GSO study simply assumes that the 15-item scale reflects the construct accounting knowledge and skills. However, as the literature cited above indicates, input from people who are experienced users of a particular service is critical in construct validation. The important factor is to understand the dimensions of the construct from the perspective of the user groups. Unfortunately, Siegel and Sorenson did not ask their focus group members whether the 15 items reflected the members' view of the AKSAs. Judging from the GSO's description of the input from focus groups appearing in their monograph (Siegel & Sorenson, 1994b, p. 61), the focus groups did not provide any input regarding the dimensions of accounting skills and knowledge. Thus, it is an open question whether or not the 15 AKSAs presented in the GSO study really do represent the domain of items comprising the AKSA construct.

With regard to the second aspect of construct validation, users of the study cannot determine whether the AKSA construct consists of a single dimension measured by 15 items, 15 separate dimensions of a single AKSA construct, or something between these extremes. An important reason for determining the number of dimensions associated with a construct is to enable the researcher to write at least two items measuring each dimension, thereby avoiding the unreliability of single-item measures (Nunnally, 1978). A researcher who has not identified the dimensions tested cannot possibly determine whether there are sufficient items measuring each dimension, or whether there are extraneous and redundant items.
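Nunnally's point about single-item unreliability can be made concrete with Cronbach's alpha, the standard internal-consistency statistic: it is defined only when a dimension has two or more items, so a single-item measure offers no internal-consistency evidence at all. The responses below are hypothetical, not GSO or SERVQUAL data.

```python
# Cronbach's alpha for a set of items assumed to measure one dimension.
# Alpha is undefined for a single item, which is one reason single-item
# measures are considered unreliable. Responses are hypothetical.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list of respondent answers per item."""
    k = len(item_scores)
    if k < 2:
        raise ValueError("alpha requires at least two items per dimension")
    totals = [sum(answers) for answers in zip(*item_scores)]
    item_variance = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_variance / pvariance(totals))

# Three items answered by five hypothetical respondents
items = [[4, 5, 3, 5, 4],
         [4, 4, 3, 5, 5],
         [5, 5, 2, 5, 4]]
print(round(cronbach_alpha(items), 3))
```

Because each AKSA is measured by one item, no such reliability evidence can be computed for the GSO scale.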
Assuming that the 15-item scale does measure accounting knowledge and skills, there should be some correlation between the items. If the GSO had performed correlation or factor analysis, there would have been some way of determining whether there was overlap or redundancy in any of the 15 items shown in Exhibits 1 and 2. We suspect such redundancy does exist. For example, item 1 in Exhibits 1 and 2 is "Product and Service Costing," while item 13 is "Strategic Cost Management (including ABC)". One would expect there to be some overlap in responses to items 1 and 13 since ABC principles can be applied to product and service costing. We are also surprised that the GSO chose to include both strategic cost management and ABC in a single item, since some would argue that they are reasonably distinct concepts. Further, it is possible that item 1, "Product and Service Costing" represents two separate dimensions, "Product Costing" and "Service Costing." If this is the case, then the two dimensions "Product Costing" and "Service Costing" should be assessed with at least two items each.
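The redundancy check suggested above, for example between items 1 and 13, amounts to computing inter-item correlations across respondents. A minimal sketch follows; the 0-to-100 importance ratings are hypothetical, not GSO data.

```python
# Pearson correlation between two AKSA items across respondents -- the
# kind of redundancy check suggested above for items 1 and 13. The
# 0-to-100 ratings below are hypothetical, not GSO data.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

product_costing = [80, 90, 70, 85, 60, 95]        # item 1 ratings
strategic_cost_mgmt = [75, 85, 65, 90, 55, 90]    # item 13 ratings

print(round(pearson_r(product_costing, strategic_cost_mgmt), 2))
```

A high correlation would flag overlapping content between the two items; a full analysis would examine the entire 15 x 15 inter-item correlation matrix and follow up with factor analysis.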
In short, scale development is a tedious, time-consuming process which requires the researcher first to provide a theoretical basis for the dimensions of the construct in question and the items which relate to that construct. This is only the beginning step in scale development. Researchers must test their instruments using different respondent groups, continually refining the scale over successive iterations. Successive iterations are necessary to refine item wording, to determine whether respondents understand the item wording, and to determine whether their understanding matches the researcher's. Only after this scale development process is complete can the researcher claim to have developed an instrument with the requisite construct validity. A researcher failing to do this may believe that he or she is measuring a particular construct while actually measuring some other construct. Unfortunately, the GSO did not psychometrically test their AKSA scale. It is unclear whether the GSO instrument is measuring one construct, several constructs, or no construct at all.

The GSO's Scale for Measuring Perceptions Minus Expectations

SERVQUAL is a two-part instrument in which one part measures perceptions and the second part measures expectations. Thus, SERVQUAL respondents first indicate 22 expectations, then 22 perceptions. Such is not the case in the GSO study. As indicated in Exhibit 2, the GSO respondents indicated perception-expectation gaps for each of the 15 AKSAs by completing two side-by-side 0-to-100 scales indicating preparation expectations and preparation perceptions. Zero represents no preparation at all, and 100 is the highest degree of preparation.

There are several problems with this scale. The first relates to ambiguity surrounding the definition of an entry-level management accountant. The instructions ask respondents to indicate "The degree of preparation you would like an entry-level management accountant to have."
But does this refer to an entry-level management accountant at the respondent's firm? Depending on the type of responsibilities assigned to entry-level accountants at different respondents' firms, responses may vary dramatically. On the other hand, the phrase "entry-level management accountant" may not refer to an entry-level person at a specific firm; it may instead refer to a hypothetical, ideally prepared entry-level management accountant. Either interpretation of the phrase "entry-level management accountant" seems reasonable. This ambiguity suggests the possible presence of significant error variance in responses.

A second critical ambiguity relates to the instructions shown in part A of Exhibit 2 versus the wording appearing in the response section of Exhibit 2. The instructions ask respondents to indicate "The degree of preparation you would LIKE an entry-level management accountant to have." However, the response column indicates "the degree of preparation you
EXPECT." Note that the words "like" and "expect" are not synonyms. For example, when I awaken tomorrow morning I would LIKE to be 10 years younger and be a starting pitcher for the Cleveland Indians; I do not EXPECT this to happen, however. We suggest that the GSO's use of the words "like" and "expect" as if they were synonyms may have led to confusion and error variance. Some respondents may have focused on the word like, others on expect, and some may have blended the meanings of the two words. Most respondents probably would LIKE 100% preparation along most of the AKSA dimensions, but would not EXPECT that level of preparation. This could be compared with free information: if information has no cost, most respondents request all that is available without regard to what actually will be used. Unfortunately, the GSO study then uses this confounded information to build the "Preparation Gap" table reported in their Figure 4-1, and that table is the basis for the study's conclusions and policy recommendations.

A third concern regarding the GSO's perceptions-minus-expectations scale relates to the side-by-side presentation of the expectation and perception scales on the GSO instrument. Side-by-side presentation makes the scale rather transparent; it should be obvious to most respondents that a perceptions-versus-expectations comparison is desired. SERVQUAL does not employ side-by-side presentation and therefore avoids the problem of transparency to some degree. SERVQUAL respondents first indicate 22 expectations, then 22 perceptions, which means that the SERVQUAL instrument provides 21 items separating a given expectation item and its corresponding perception item.
The GSO's Scale for Measuring the AKSAs' Importance

Exhibit 1 shows the GSO scale used to measure the importance of the AKSAs. Respondents entered a number from 0-to-100 indicating the importance of each of the 15 AKSAs, with zero indicating not important at all and 100 indicating extremely important. This GSO scale bears a slight similarity to the SERVQUAL scale described above, in which respondents allocated 100 points across the five dimensions of service quality identified by Parasuraman et al. (1988, 1991). Unfortunately, critical differences make it difficult to evaluate the AKSA importance ratings obtained from the GSO scale.

The GSO study fails to recognize a critical issue with respect to importance ratings: importance comparisons are only meaningful when one factor is compared with another. That is, importance can only be assessed with regard to some reference point, and the GSO study does not provide that reference point. The GSO study simply asks respondents to indicate the importance of knowledge for an entry-level management accountant. But compared to what?
To illustrate, the GSO study reports that the most important AKSA is Budgeting. Compared with knowledge of individual income taxation, respondents may deem knowledge of budgeting to be very important. But with regard to, say, knowledge of how to read and write, knowledge of budgeting is relatively unimportant. To make the importance ratings interpretable, the GSO should have provided some comparison reference point. Lacking such a reference point, interpretation of the importance ratings becomes very difficult or impossible. Some respondents may have been comparing a specific AKSA with the remaining 14 AKSAs shown in the instrument, while other respondents may have been comparing a specific AKSA with unspecified other skills typically performed by an entry-level accountant. The GSO have not provided enough guidance to allow respondents to make comparable, interpretable responses.

SERVQUAL does not exhibit this problem. Parasuraman et al. (1988, 1991) provide a reference point: each of the five service quality dimensions is evaluated vis-à-vis the other dimensions. This comparison is facilitated by the fact that researchers have identified a relatively small number of dimensions of service quality. Having identified these dimensions, respondents can, with relative ease, indicate the importance of each dimension compared to each other dimension.

In a later section of their experimental instrument, the GSO request respondents to make importance comparisons between selected pairs of AKSAs; for example, comparisons are requested between internal auditing knowledge and external auditing knowledge. However, with regard to the importance of the 15 AKSAs, the GSO place most emphasis on results obtained using the scale shown in Exhibit 1.

THE GSO'S SCALE FOR ASSESSING UNIVERSITY TEACHING

The GSO study includes a scale used to solicit respondents' input regarding how well universities are teaching students in five areas. This scale is shown in Exhibit 4.
Zero indicates a very poor job and 100 indicates an excellent job. Mean scores reported for the evaluation of teaching (Siegel & Sorenson, 1994b, p. 17) ranged from 47 to 68. A critical issue is how one should interpret these scores. Siegel and Sorenson seem to have simply applied an academic grading scale to their data. They write: "Our experience with 0-to-100 evaluation scales indicated that scores in the 90s represent an 'excellent' rating; the 80s, 'very good'; 70s, 'good'; 60s, 'not so good'; and below 60, 'poor'" (Siegel & Sorenson, 1994b, p. 16). Unfortunately, no other validation of these cutoff scores was provided. The GSO has no basis for interpreting the teaching scale in this manner; the GSO simply assumes that the respondents think that scores in the 90s represent "excellent", scores in the 80s represent "very good", and so forth. We argue that the results of the teaching scale are uninterpretable due to
What Corporate America Wants in Entry-level Accountants
13
Exhibit 4. Evaluation of Teaching (5) Based on what you observe in the job preparedness of entry-level management accounts in your company, how good a job do you think universities do at each of the following? Please respond with a number from 0 to 100, where zero indicates a very poor job and 100 indicates an excellent job. Your evaluation
(a) Preparing students for working in a business organization? (b) Preparing students for careers in management accounting? (c) Teaching:
(1) (2) (3) (4) (5)
Computer literacy Team building Quantitative skills Management accounting Communication skills
the lack of a common reference point. Commonly accepted procedures in instrument development (Cronbach, 1990; Fiske, 1978; Goldstein & Hersly, 1984; Kline, 1986), and measurement assessment (Anastasi, 1988; AERA, APA, & NCME, 1985; Nunnally, 1978) require a series of validating steps to justify use of cutoff criteria. Cutoff scores then would be established and tested for reliability and validity. Apparently none of this was done in the GSO study. Instead of using the tested and validated scales of the SERVQUAL method, the GSO study used a scale that had not been validated, and had no anchors other than the end points. DISCUSSION The rapidly changing American business environment certainly presents a challenge to accounting educators. To properly prepare our students for the challenges of the late 1990s and the turn of the century, adaptation and change of curriculum is necessary. The GSO study discussed in this paper has established a platform for dialogue regarding these issues. Curriculum reform in response to (or in partnership with) corporate clientele is highly desirable. Major changes in curriculum are not accomplished easily or frivolously at most educational institutions. Once revisions have been discussed, designed, approved, and implemented into programs, they tend to stay in place for some time. Therefore, discourse based on relevant and reliable data is not just healthy, but also necessary. Relevance and reliability of the data implies rigorous research studies grounded in sound methodological procedures.
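To make concrete the kind of validating step the instrument-development literature cited above calls for, the following sketch computes Cronbach's alpha, the most widely used internal-consistency statistic. The ratings are hypothetical, purely to show the mechanics; they are not data from the GSO study or any other source.

```python
# Illustrative sketch: internal-consistency reliability via Cronbach's alpha.
# All ratings below are hypothetical examples, not data from the GSO study.
from statistics import pvariance

# Six hypothetical respondents rating four scale items.
ratings = [
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])
    items = list(zip(*rows))                        # one tuple per item (column)
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])   # variance of respondents' totals
    return k / (k - 1) * (1 - item_var / total_var)

alpha = cronbach_alpha(ratings)
print(f"Cronbach's alpha = {alpha:.2f}")  # prints: Cronbach's alpha = 0.93
```

A check of this kind, together with validity evidence, is what would be needed before cutoff scores on a new scale could be interpreted with any confidence.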
The Joint IMA/FEI Position Statement calling for a restructuring of the accounting curriculum is based on a single study, one with serious methodological flaws. Measurement of expectations and perceptions is well documented in the marketing literature as a valid research technique using SERVQUAL. The GSO study is patterned after a SERVQUAL study, but the researchers deviated in important ways from the appropriate methodology. The GSO study did not properly develop the AKSA construct. Instead of developing the AKSA dimensions through interviews with focus groups that had recent contact with entry-level accountants, the GSO study identified dimensions from the content specifications of professional accounting examinations and from major topic areas taught in college accounting courses. The GSO study simply assumed that the 15-item scale properly reflected the construct of accounting skills and knowledge; the focus groups provided no input regarding the dimensions of that construct. Additionally, it is not possible to determine whether the AKSA construct consists of a single dimension measured by 19 items, of 19 separate dimensions, or of something in between these extremes. We suggest that redundancy exists among the items shown in Exhibits 1 and 2, yet the GSO study does nothing to identify or reduce that redundancy. The GSO scale for measuring expectations and perceptions also departs dramatically from the SERVQUAL scale. SERVQUAL measures expectations first and then perceptions in a separate section, using a 7-point Likert scale; the GSO study measured expectations and perceptions side-by-side in the same question, using a 0-to-100 scale. Further, the GSO researchers failed to clarify the term "entry-level accountant," the 0-to-100 scale was not validated, and the wording of the instructions was not consistent.
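The perceptions-minus-expectations logic that SERVQUAL applies can be sketched in a few lines. The dimension labels and ratings below are hypothetical illustrations, not data from either the SERVQUAL studies or the GSO study; the point is only the mechanics of the gap score Q = P - E.

```python
# Illustrative sketch of SERVQUAL-style gap scores (Q = P - E).
# Dimension labels and ratings are hypothetical, not data from any study.

# Expectations (E) and perceptions (P) on 7-point Likert scales,
# collected in separate sections as in Parasuraman et al. (1988).
expectations = {"reliability": 6.5, "responsiveness": 6.0, "assurance": 5.5}
perceptions  = {"reliability": 5.0, "responsiveness": 5.5, "assurance": 5.8}

# Gap score per dimension: perception minus expectation.
# A negative gap means perceived service falls short of what was expected.
gaps = {dim: perceptions[dim] - expectations[dim] for dim in expectations}

for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{dim:15s} gap = {gap:+.1f}")
```

Note that the interpretability of such gaps depends on the expectation and perception items being asked separately and on identical, validated scales, which is precisely where the GSO instrument departs from the method.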
In one instance, respondents were asked what level of preparation they would LIKE from entry-level accountants, and then what degree of preparation they EXPECTED. The terms "like" and "expect" are not synonyms and cannot be used interchangeably without causing confusion and error variance. Compounding this problem, the responses from the like-expect data were used to build a "preparation gap" table purporting to indicate the lack of preparation of entry-level accounting students. This preparation gap table serves as the basis for the study's conclusions and for its policy recommendations for curriculum change. Such a draconian policy recommendation should not rest on the results of a single study, even in the absence of methodological concerns. A much better understanding of the dimensions of accounting knowledge and skills for entry-level accountants could be a starting point for
future research. Additionally, refinement of the scale using both exploratory and confirmatory factor analysis would seem to be in order. Future research should also address the issue of criterion validity. In short, to measure expectations and perceptions, a proper application of the SERVQUAL or some other methodology is necessary. In the meantime, accounting education should not castigate itself for poor performance. While we agree that curriculum must be kept up to date and responsive to the needs of corporate America, we are not convinced that management accounting education is as substandard as implied (or stated) in the GSO study. Above all, accounting education should not rush into major curriculum overhaul based on this single study.
REFERENCES

Aiken, L. R. (1985). Psychological testing and assessment. Boston, MA: Allyn & Bacon.
A Joint IMA/FEI Position Statement on the Results of the Survey. (1994). What corporate America wants in entry-level accountants. Management Accounting, p. 25.
American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME]. (1985). Standards for educational and psychological testing. Washington, DC: American Psychological Association.
Anastasi, A. (1988). Psychological testing. New York: Macmillan.
Babakus, E., & Boller, G. W. (1992). An empirical assessment of the SERVQUAL scale. Journal of Business Research, 24, 253-268.
Babakus, E., & Mangold, W. G. (1992). Adapting the SERVQUAL scale to hospital services: An empirical investigation. Health Services Research, 26, 767-780.
Carman, J. M. (1990). Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions. Journal of Retailing, 66, 33-55.
Cronbach, L. J. (1990). Essentials of psychological testing. St. Louis, MO: Harper & Row.
Cronin, J. J., Jr., & Taylor, S. A. (1992). Measuring service quality: A reexamination and extension. Journal of Marketing, 56, 55-68.
Cronin, J. J., Jr., & Taylor, S. A. (1994). SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality. Journal of Marketing, 58, 125-131.
Fiske, D. W. (1978). Strategies for personality research. San Francisco, CA: Jossey-Bass.
Goldstein, G., & Hersey, M. (Eds.). (1984). Handbook of psychological assessment. New York: Pergamon.
Kline, P. (1986). A handbook of test construction. New York: Methuen.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49, 41-50.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64, 12-40.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1991). Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67, 420-450.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1994). Reassessment of expectations as a comparison standard in measuring service quality: Implications for further research. Journal of Marketing, 58, 111-124.
Peter, J. P., Churchill, G. A., Jr., & Brown, T. J. (1993). Caution in the use of difference scores in consumer research. Journal of Consumer Research, 19, 655-662.
Reidenbach, R. E., & Sandifer-Smallwood, B. (1990). Exploring perceptions of hospital operations by a modified SERVQUAL approach. Journal of Health Care Marketing, 10, 47-55.
Siegel, G., & Sorenson, J. E. (1994a). What corporate America wants in entry-level accountants. Management Accounting, 26-31.
Siegel, G., & Sorenson, J. E. (1994b). What corporate America wants in entry-level accountants. Montvale, NJ: Institute of Management Accountants.
Teas, R. K. (1993). Expectations, performance evaluation, and consumers' perceptions of quality. Journal of Marketing, 57, 18-34.
Zeithaml, V. A., Parasuraman, A., & Berry, L. L. (1990). Delivering quality service: Balancing customer perceptions and expectations. New York: The Free Press.