International Journal of Accounting Information Systems 9 (2008) 122–126
Discussion of “An examination of contextual factors and individual characteristics affecting technology implementation decisions in auditing” ☆
Carlin Dowling, University of Melbourne, Australia
☆ Based on the version of a paper by Curtis and Payne dated December 7, 2006. In-text references have been amended to the version of the paper appearing in this issue.
Article history: Received 15 June 2007; received in revised form 15 October 2007; accepted 31 October 2007.
In reading the title of the paper by Curtis and Payne (2008-this issue), the first question that came to my mind (and I'm sure will come to the mind of anyone familiar with the numerous technology acceptance studies that have been conducted in the information systems domain) is: do we really need another technology acceptance study? After reading the paper, my answer is “yes”. Although there have been numerous studies conducted within the information systems domain, Curtis and Payne (2008-this issue) do an excellent job of motivating the need for this (and future) research to provide guidance that helps audit firms and auditors increase the utilization of audit technologies. The current study achieves this by examining two potential intervention mechanisms (budget period and senior support) and two individual characteristics (risk preference and perceived budget pressure).

Prior to discussing the details of the study, there are two important scope limitations readers should be aware of. First, the study focuses on the voluntary adoption of an audit technology. Although the authors make it very clear that the study focuses on the voluntary adoption of a technology, the limitations of this choice are not explicit. Specifically, because use of the commonly deployed audit technologies (such as electronic work-papers, decision aids, etc.) is typically mandatory within most large audit firms (Dowling and Leech, 2007), the study's findings are unlikely to generalize to the way common audit technologies are deployed in audit firms.

The other important scope limitation that is not explicit is that, whilst many audit technologies (such as those that enable 100% sample testing) have the potential to improve audit quality and efficiency, the study controls for the impact on audit quality and focuses exclusively on a technology that will decrease audit efficiency in the first year of adoption and lead to an overall increase in efficiency over a three-year period.
I suspect that if a technology also impacts audit quality, this may alter the adoption decision. Thus the study's results are also unlikely to generalize to an audit technology that has the potential to impact both audit quality and efficiency. This should not be interpreted as a weakness of the study; controlling for the impact of the new technology on audit quality increases the interpretability of the study's findings.

1. Theory

The Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003) provides the theoretical background. The UTAUT was developed from a review and integration of a large body of literature investigating technology acceptance and has been validated in both mandatory and voluntary technology use settings (Venkatesh et al., 2003). As such, this theory provides a solid theoretical basis for investigating technology use. The UTAUT and the subset of the model used by Curtis and Payne (2008-this issue) are depicted in Fig. 1.

Fig. 1. Part of the Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003) investigated by Curtis and Payne (2008-this issue).

As depicted in Fig. 1, Curtis and Payne (2008-this issue) focus on the antecedents of intended use. Although intended use is a critical and important predictor of actual usage (Venkatesh et al., 2003), intention generally explains only about 35–40% of the variation in actual usage. Thus, whilst Curtis and Payne's (2008-this issue) focus on intended use, based on the “strong likelihood that behaviour will follow”, is an acceptable constraint of using a paper-based case, readers should recognize that not measuring actual use means that many of the other factors (and facilitating conditions) that influence technology adoption are not included. Of the three direct predictors of intended use highlighted in Fig. 1, Curtis and Payne (2008-this issue) investigate performance expectancy and social influence (discussed below). Effort expectancy is not manipulated. Effort expectancy is salient in the initial adoption of a technology, and numerous studies have shown that increasing ease of use increases technology acceptance. Although there is no reason to suspect that the results would differ in the audit context, the paper would have benefited from a brief discussion justifying why effort expectancy is not modeled while performance expectancy and social influence are.

1.1. Performance expectancy

In the UTAUT, performance expectancy is the “degree to which an individual believes that using the system will help him or her attain gains in job performance” (Venkatesh et al., 2003: 447). Although there are many ways in which an auditor's job performance can be measured and assessed, Curtis and Payne (2008-this issue) operationalize performance expectancy through the budget period (one or three years). Focusing on budget attainment is a unique contribution of this study, which increases the generalizability of the findings beyond the audit setting to other areas where budgets are used. The authors report that participants are skeptical about audit firms adopting multi-year budget periods, no doubt because budgets are used for many other purposes within the audit setting and high staff turnover decreases the potential for audit firms to move to multi-period budgets. Thus, finding that a longer budget period improves technology adoption is less likely to have feasible practice implications for audit firms than for other budget settings where technology adoption is voluntary. Furthermore, I am concerned that budget period (and thus the assumed drive to meet budget) is not a valid operationalization of performance expectancy beyond the artifact of the experimental setting. The experimental materials explicitly informed participants that their performance would be evaluated on attaining budget, and thus, for the purposes of the experiment, the operationalization is valid. However, I suspect that if the study were conducted on an actual audit, budget period, type, or tightness may in fact be a facilitating condition that directly impacts use rather than intention, particularly when considered over multiple client engagements. This, of course, is not something that can be resolved within the current study, but it is something researchers considering field or survey work in audit firms should consider.

1.2. Social influence

The study also manipulates social influence. Social influence is the “degree to which an individual perceives that important others believe he or she should use the new system” (Venkatesh et al., 2003: 451). There are many sources of social influence, including an auditor's peers, subordinates, and superiors within the audit team, the audit office, and the audit firm. Curtis and Payne (2008-this issue) claim to “explore the impact of more remote individuals on the acceptance decision” by focusing on the social influence of the practice office managing partner. Beyond stating that they investigate the influence of a remote superior because a prior study has investigated the influence of a direct superior, a solid argument is not provided for why understanding a remote superior's influence is important. Individuals react to perceived pressure because they feel other individuals think they should behave in a certain way. For this pressure to have an impact on an individual's behaviour, the individual must perceive that not behaving in the expected way will incur some cost. I am not convinced, theoretically or via the way social influence was manipulated, that a remote source of social influence has been captured. The manipulation of the two conditions as reported in the case material is reproduced below:
Treatment condition: No social influence
Information in case: There is no information regarding whether the engagement partner or the practice office managing partner support use of the software.

Treatment condition: Social influence
Information in case: The engagement partner told you that the practice office managing partner is not going to require use of the new software, but is encouraging implementation.
Based on the above extract from the experimental materials, I am not convinced that a remote source of social influence has been operationalized successfully. There is a strong possibility that having the engagement partner communicate the managing partner's view has confounded the source of the social influence, such that the engagement partner has implicitly communicated their support of the managing partner's view. Whilst this confound is a limitation of the current study, future research should ensure that the source of the social influence is clear. For instance, this could have been achieved in the current study by having the case materials inform the participant that they recalled a recent memo from the managing partner encouraging use of the software. This would have removed the engagement partner as a potential source of social influence. A further interesting avenue for future research is to consider what happens when the engagement partner and the practice office managing partner conflict in their support for a new technology: Whose influence dominates? When? Under what conditions?
In addition to including two contextual factors to represent performance expectancy and social influence, Curtis and Payne (2008-this issue) contextualize the UTAUT to include two unique individual-level characteristics: risk preference and perceived budget pressure.

1.3. Risk preference

The essence of the hypotheses predicting how risk preference will impact voluntary adoption is that risk-averse auditors are less likely to adopt a new technology because adoption increases uncertainty. However, I am concerned that the experimental materials have reduced the level of uncertainty associated with adopting a new technology. The case materials explicitly inform participants that adopting the technology will increase budget hours by 50 in year one and decrease budget hours by 50 in years two and three. Although explicitly specifying the budget-hour impact of the new technology controls for effort expectancy across the treatment conditions, it also decreases the uncertainty inherent in adopting a new technology. The consequence of this is that, although risk preference is found to be a significant predictor of intention, I am not entirely convinced it is significant for reasons consistent with the hypothesis development.

1.4. Perceived budget pressure

The hypotheses for budget pressure suggest that auditors who perceive higher levels of budget pressure are less likely to adopt a new technology because of concerns it will increase budget tightness. The implication is that individuals who perceive greater pressure on an engagement will react to this pressure. Because perceptions of pressure differ across individuals, Curtis and Payne (2008-this issue) measure budget pressure using a one-item scale: “on past audit engagements, I have felt significant pressure to control budget hours”. However, no information is provided that the firm's budgets across actual audit engagements are consistent in terms of pressure (e.g., that all budgets are equally tight). If the budgets set across clients and engagements are not consistent in terms of pressure, the item may be capturing both an individual's perceived budget pressure and differences across the engagements they have worked on (i.e., actual budget pressure). Furthermore, from the case materials it appears that budget pressure may have been manipulated: participants are either told that this audit engagement “has met budget in prior years” or has “been approximately 3% over budget in the past two years”. There is no mention of this manipulation in the description of the study or in the analyses conducted.

2. Sample

The hypotheses are tested using an excellent sample of in-charge auditors from one firm. Although using auditors from one (undisclosed) audit firm limits the generalizability of the results, the design also removes the potential confound of differences across audit firms driving the results. Since the type of technology deployed differs across audit firms (Dowling and Leech, 2007), restricting the experiment to auditors from one firm is a strength. Although confidentiality requirements no doubt restrict disclosure of the audit firm's identity, some general background information about the firm would have provided readers with additional insights.

3. Construct measurement

Two of the three non-manipulated variables were measured using scales with more than one indicator. The means provided in Table 1 of Curtis and Payne (2008-this issue) indicate that the items in each multi-item scale have been equally weighted and summed to obtain the composite measure. Although no information is provided regarding the factor weightings, it is questionable that all of the items in each of the multi-item scales contributed equally to the underlying construct, particularly for the risk preference items, which have a Cronbach's alpha of 0.62. It would be interesting to see if the results hold if weighted factor loadings are used to calculate the composite measures.
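To illustrate the distinction, the sketch below contrasts an equally weighted sum with a composite weighted by loadings on the first principal component, and computes Cronbach's alpha for a set of items. It is a minimal illustration only: the item responses are simulated, the number of items and the 1–7 response scale are assumptions, and principal-component weighting is just one of several schemes that could be used; it is not the procedure used by Curtis and Payne (2008-this issue).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def weighted_composite(items: np.ndarray) -> np.ndarray:
    """Composite score weighted by loadings on the first principal component,
    as an alternative to an equally weighted sum of the items."""
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    loadings = np.abs(eigvecs[:, -1] * np.sqrt(eigvals[-1]))  # dominant component
    weights = loadings / loadings.sum()
    return z @ weights

# Simulated 1-7 responses for four hypothetical risk-preference items
# (168 respondents, roughly the usable sample size reported in the paper).
rng = np.random.default_rng(0)
items = rng.integers(1, 8, size=(168, 4)).astype(float)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print("Equally weighted sums:", items.sum(axis=1)[:3])
print("Loading-weighted composites:", np.round(weighted_composite(items)[:3], 2))
```

If the items load unevenly on the underlying construct, the two composites can rank respondents differently, which is why re-estimating the models with weighted composites would be an informative robustness check.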
4. Reporting results

In total, 181 auditors participated and 13 responses were discarded. It is unclear whether these auditors were from one or from multiple treatment conditions. Furthermore, although Table 1 reports the means for the two treatment conditions, the cells are grouped, and thus the pattern of means for each of the four cells is not disclosed.
To aid in interpreting the results, I would like to have seen a panel that reported the means for each of the four treatment conditions, for example (a sketch of how such a panel could be computed follows the layout below):

Social influence: Yes
  1-year budget period: Intention = mean (SD); Budget pressure = mean (SD); Risk preference = mean (SD)
  3-year budget period: Intention = mean (SD); Budget pressure = mean (SD); Risk preference = mean (SD)

Social influence: No
  1-year budget period: Intention = mean (SD); Budget pressure = mean (SD); Risk preference = mean (SD)
  3-year budget period: Intention = mean (SD); Budget pressure = mean (SD); Risk preference = mean (SD)
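As a minimal sketch of how such a panel (and the interaction graphs suggested below) could be produced from raw responses, the following assumes a hypothetical data file and column names; it is illustrative only and is not drawn from Curtis and Payne's (2008-this issue) data or analysis code.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-participant data; the file name and column names are assumptions.
df = pd.read_csv("responses.csv")  # columns: social_influence, budget_period,
                                   #          intention, budget_pressure, risk_preference

# Mean and standard deviation for each of the four cells of the 2 x 2 design.
panel = (df.groupby(["social_influence", "budget_period"])
           [["intention", "budget_pressure", "risk_preference"]]
           .agg(["mean", "std"]))
print(panel.round(2))

# Simple visual check of the budget period x social influence interaction on intention.
cell_means = (df.groupby(["budget_period", "social_influence"])["intention"]
                .mean()
                .unstack("social_influence"))
cell_means.plot(marker="o")
plt.xlabel("Budget period")
plt.ylabel("Mean intention to adopt")
plt.title("Budget period x social influence")
plt.show()
```

Even without access to the underlying data, a panel of this form, together with the corresponding plots, would make the pattern of cell means and any crossover in the interaction immediately visible to readers.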
Interpretation of the findings would also have been assisted by the inclusion of a series of graphs visually depicting the two-way interactions. Overall, I enjoyed reading the paper and I believe the results provide some interesting insights for practice and potential directions for future research.

References

Curtis MB, Payne EA. An examination of contextual factors and individual characteristics affecting technology implementation decisions in auditing. International Journal of Accounting Information Systems 2008;9:104–21 (this issue).

Dowling C, Leech SA. Audit support systems and decision aids: current practice and opportunities for future research. International Journal of Accounting Information Systems 2007;8:92–116.

Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Quarterly 2003;27(3):425–78.