British Accounting Review (1992) 24, 119-137
MODELLING AUDIT RISK

L. C. L. SKERRATT
University of Manchester

A. WOODHEAD
University of Durham

A number of discrepancies have been found between the multiplicative joint risk model and the judgments of auditors in practice. The importance of this finding depends largely on the realism of the benchmark risk model used. Therefore, the objective of this paper is to extend the joint risk model to reflect more accurately the choices and circumstances faced by auditors. In addition, identifying the components of audit risk in a systematic manner is also important because it may be able to enhance audit decision processes. This paper critically reviews the joint risk model and also a number of recent contributions to the measurement of posterior audit risk. We show how each of these different insights should be incorporated into a comprehensive measure of posterior audit risk at the level of the individual audit objective (e.g. account balance). The differences between our proposed model and other risk measures are illustrated with some numerical examples and we identify the circumstances under which the different models will yield different estimates of audit risk. Interestingly, we find that our proposed model and the auditor risk judgments identified in recent studies exhibit similar characteristics when compared with the joint risk model.
We are grateful to members of the Manchester Financial Accounting & Auditing Workshop for most helpful comments on previous drafts; also to Robson Rhodes, for access to their library. Any errors are the sole responsibility of the authors. Address correspondence to Len Skerratt, Department of Accounting and Finance, Roscoe Building, Manchester M13 9PL.

INTRODUCTION

The familiar joint risk model, which formed the basis of professional guidelines both in the UK (APC, 1987) and the US (AICPA, 1981; 1983), has recently been subjected to a great deal of empirical scrutiny, by comparing it with the actual risk judgments of auditors. For example, one line of enquiry relates to the model's assumption that the individual risk elements (inherent risk, control risk and detection risk) are independent components in determining overall audit risk. Studies by Peters (1990) and Brown & Solomon (1990) indicate that this independence is not reflected in the practical judgments of auditors. Other field experiments, for example Strawser (1990; 1991), have questioned the implied equal weighting of the components in the model. Generally, the joint (planned)
risk model has been found to be quite a poor reflection of the risk judgments of auditors. One explanation for these discrepancies is that the decision processes of auditors are imperfect. For example, Ashton (1991) argues that, since errors are discovered infrequently, even an auditor with many years of service may be relatively inexperienced in interpreting and evaluating unusual audit evidence. Spires (1991) also suggests that procedural variations between different audit firms may give rise to different audit judgments. Another explanation is that the probability approach to measuring audit risk is fundamentally flawed. For example, the model is not couched in decision-theoretic terms, and therefore cannot incorporate auditor loss functions. This aspect of risk modelling may be important in explaining cross-sectional variation in audit judgments, since the auditor's financial exposure with respect to an individual client may well influence the tolerance level for overall risk. Furthermore, the auditor's exposure with respect to one particular client may affect risk judgments with other clients as well, since the audit firm may be concerned primarily with the financial exposure of the firm as a whole rather than with respect to any particular client.

The general attractions of the decision-theoretic framework are substantial; its contribution is to incorporate loss functions explicitly into the decision process. However, this is likely to be of most use at the level of determining the audit opinion on the financial statements as a whole, whereas the probability model focuses on subsidiary audit objectives, typically at the level of the account balance. Consequently, the decision-theoretic approach is unlikely to increase our understanding of differences between the joint risk probability model and audit practice. Furthermore, progress in modelling the audit in decision-theoretic terms has been slow and limited. For example, an early paper employing this approach is Kinney (1975); and yet recent work (Shibano, 1990) is not able to give insights much beyond the probabilistic joint risk model. Presumably, this is one1 of the reasons why recent empirical research on audit judgment, discussed above, employs the joint risk model as a benchmark.

In view of these factors, the established technology of probability risk modelling is likely to be relevant for some time yet. Consequently, it is important that, as far as possible, the probability analysis reflects the real world complexities faced by auditors. Recent papers by Kinney (1989), Aldersley (1989) and Sennetti (1990) have attempted to do just this. While these make substantial extensions to the joint risk model, they each emphasise different aspects of the decision process. Consequently, it is not at all clear how the contributions, in aggregate, have improved the realism of the probability approach to audit risk measurement.
The purpose of this paper is to analyse and evaluate these contributions in order to develop a single and realistic model for the estimation of audit risk. This is important on two counts. First, the evaluation of professional practice is only as good as the benchmark employed. The second and more substantial reason is that an important rationale for the risk modelling is to support (though not replace) the subjective decision processes involved in audit risk measurement. If such support is to be worthwhile, then the model must be sympathetic to the underlying decision processes which take place. In the interests of brevity, the exposition employs the familiar ladder tree diagram used in Leslie, Anderson & Teitlebaum (1980, Appendix C), rather than a formal mathematical approach. The ladder tree technique maps out the different paths which may apply to an assertion in the financial statements.

BACKGROUND: PLANNED AND POSTERIOR RISK

The familiar joint risk model specifies audit risk as the probability that material mis-statement in an assertion will arise and not be detected by the internal control structure, by analytical review or by detailed testing. The following definitions are adopted:

A = an assertion in the financial statements;
IR = inherent risk, the susceptibility of an assertion to material mis-statement, assuming there are no related internal control structure policies or procedures;
CR = control risk, the risk that material mis-statement that could occur in an assertion will not be prevented or detected on a timely basis by the entity's internal control structure policies or procedures;
AR = the risk that analytical review will fail to detect material mis-statement in an assertion;
TD = the risk that detailed testing will fail to detect material mis-statement in an assertion.

Using the above definitions, the joint risk model would identify audit failure as Event 1 in Figure 1, which has a probability of IR.CR.AR.TD. This joint risk model is also known as a planned risk because it does not take into account any of the events which have arisen during the course of the audit. While planning is an important component of the audit process, it is essential that the risk model should be able to contribute to the formation of the audit opinion after the conduct of the audit tests. Consequently, Leslie et al. (1980, Appendix C) propose a posterior risk measure, and their model is adopted in the Canadian Institute's research study, CICA (1980).
Figure 1. Planned and posterior risk. IA = incorrect acceptance; CA = correct acceptance.
The posterior risk is the probability that material mis-statement remains even though none of the audit tests have identified its presence. The reasoning of Leslie et al. (1980) in constructing their posterior measure can also be illustrated in Figure 1. If no material mis-statement in the assertion is found after the conduct of the audit, then this could be generated by Event 1; but the result is also consistent with Event 2, in which no material mis-statement has occurred. Therefore, audit risk is the probability of Event 1 (material mis-statement exists but is not detected by the audit tests) relative to the probability that any of the events (Event 1 and Event 2) generated this finding. This is equal to:

= Prob(Event 1)/(Prob(Event 1) + Prob(Event 2))
= IR.CR.AR.TD/(IR.CR.AR.TD + (1 - IR))     (1)
However, this measure of posterior risk in (1) (which is adopted in CICA, 1980) is incomplete, since it is also possible that the audit finding is generated by Event 3, in which material mis-statement occurs but is corrected by the internal controls. Although Leslie (1984, pp. 92-94) notes this possibility, it is excluded by construction. It is assumed that the auditor will know whether or not Event 3 has occurred, i.e. whether the internal controls have detected mis-statement in the assertion. If Event 3 has occurred, then there is no audit risk; mis-statement has occurred, but has been picked up by the internal controls prior to the audit. If Event 3 has not occurred, then only Events 1 and 2 could have given rise to the audit finding.
Although Leslie's reasoning is logically valid, it is likely to be cost ineffective for auditors to track down all the errors uncovered by internal controls during the entire accounting period. The auditor will undoubtedly wish to evaluate the ability ex ante of the internal controls to detect material mis-statement; however, this does not require specific knowledge of a mis-statement actually uncovered by the controls ex post. Taking Event 3 into account gives a posterior risk equal to:

= Prob(Event 1)/(Prob(Event 1) + Prob(Event 2) + Prob(Event 3))
= IR.CR.AR.TD/(IR.CR.AR.TD + (1 - IR) + IR.(1 - CR))
= IR.CR.AR.TD/(IR.CR.AR.TD + 1 - (IR.CR))     (2)

Table 1 illustrates the risk achieved under the different posterior risk models for combinations of high, medium and low inherent and control risks. If we adopt a 0.5 risk of analytical review procedures failing to detect material error, and plan for an overall tolerable risk of 0.05, then the consequent levels of detailed substantive testing which will be planned are those given in the column headed TD. Where TD = 1, the auditor has sufficient assurance from the low assessments of IR, CR and AR so that no tests of detail are planned.

TABLE 1
Differences between risk models: (a) planned risk; (b) CICA model (1); (c) CICA extended model (2)
Planned risk    IR     CR     AR     TD     CICA (1)   CICA (2)

Panel A:
0.05            0.9    0.9    0.5    0.12   0.33       0.21
0.05            0.9    0.6    0.5    0.19   0.33       0.10
0.05            0.9    0.3    0.5    0.37   0.33       0.06
0.05            0.6    0.9    0.5    0.19   0.11       0.10
0.05            0.6    0.6    0.5    0.28   0.11       0.07
0.05            0.6    0.3    0.5    0.56   0.11       0.06
0.05            0.3    0.9    0.5    0.37   0.07       0.06
0.05            0.3    0.6    0.5    0.56   0.07       0.06

Panel B:
0.05            0.8    0.8    0.5    0.16   0.20       0.12
0.05            0.8    0.5    0.5    0.25   0.20       0.08
0.05            0.8    0.2    0.5    0.63   0.20       0.06
0.05            0.5    0.8    0.5    0.25   0.09       0.08
0.05            0.5    0.5    0.5    0.40   0.09       0.06
0.05            0.5    0.2    0.5    1.00   0.09       0.05
0.05            0.2    0.8    0.5    0.63   0.06       0.06
0.05            0.2    0.5    0.5    1.00   0.06       0.05
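The arithmetic behind Table 1 follows directly from the planned risk model and from (1) and (2). As a check on the table, the following minimal sketch (a Python illustration added here, not part of the original exposition) solves for the planned level of TD given a tolerable overall risk of 0.05 and AR = 0.5, and then evaluates the two posterior measures for the Panel A combinations of IR and CR:

```python
# Minimal sketch (added illustration, not from the original paper): reproduce
# the Table 1 calculations for the Panel A combinations of IR and CR.

PLANNED = 0.05   # tolerable overall audit risk
AR = 0.5         # risk that analytical review fails to detect material error

def planned_td(ir, cr, ar):
    """Level of detailed testing implied by the planned (joint) risk model,
    capped at 1 where IR.CR.AR already provides sufficient assurance."""
    return min(1.0, PLANNED / (ir * cr * ar))

def cica_1(ir, cr, ar, td):
    """Posterior risk under the CICA model (1): Event 3 excluded."""
    joint = ir * cr * ar * td
    return joint / (joint + (1 - ir))

def cica_2(ir, cr, ar, td):
    """Posterior risk under the extended CICA model (2): Event 3 included."""
    joint = ir * cr * ar * td
    return joint / (joint + 1 - ir * cr)

for ir, cr in [(0.9, 0.9), (0.9, 0.6), (0.9, 0.3), (0.6, 0.9),
               (0.6, 0.6), (0.6, 0.3), (0.3, 0.9), (0.3, 0.6)]:
    td = planned_td(ir, cr, AR)
    print(f"IR={ir:.1f} CR={cr:.1f} TD={td:.2f} "
          f"CICA(1)={cica_1(ir, cr, AR, td):.2f} "
          f"CICA(2)={cica_2(ir, cr, AR, td):.2f}")
```

Panel B follows by substituting its IR and CR combinations; where the implied TD reaches 1, no further tests of detail are planned.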
Table 1 compares the impact on posterior risk assessment of using either the CICA model (1) or the CICA extended model (2). We assume a planned overall audit risk of 0.05 throughout and base the levels of detailed testing accordingly. After the results of detailed testing are known (and assuming no errors are found), the CICA model (1) suggests that the level of achieved risk is greater than planned risk at all levels of IR and CR. This increase over planned risk is most significant when IR is assessed as high. However, comparing CICA (1) with CICA (2) shows that the exclusion of Event 3 will tend to overestimate posterior audit risk, since the term IR is greater than IR.CR. The modifying effect of this for different levels of CR is illustrated in Table 1. The situation of high IR-low CR is of particular interest. In this case, the extended CICA model (2) generates a posterior risk of 0.06, which is very close to the planned risk of 0.05, and is considerably more reassuring than the CICA (1) posterior risk of 0.33 in Panel A and 0.20 in Panel B. Furthermore, this situation is not simply a mathematical curiosity; it may arise quite frequently in practice, since where the client perceives high inherent risk, tight internal controls are likely to be introduced to compensate. In this situation, an audit which is based on the level of detailed substantive testing indicated by the CICA model (1) will give rise to over-auditing.2

Table 1, (1) and (2) also give some insight into why recent empirical research has questioned the relevance of the joint risk model. If auditors employ decision processes similar to (1) and (2), then it is not surprising that the practical judgments might be interpreted as failing to give equal weight to the IR, CR, AR and TD components (Strawser, 1990, 1991), or as failing to treat them as independent components of risk (Peters, 1990; Brown & Solomon, 1990).

THE RISK OF INCORRECT REJECTION

The measures of audit risk in the joint probability model, and also in (1) and (2) above, are driven by the likelihood that tests will incorrectly accept an assertion (i.e. will fail to detect material error when it is present). In contrast to this, the significant advance made in Kinney (1989) is that audit risk should also include the probability that an assertion in the financial statements will be incorrectly rejected.3 This is illustrated in Figure 2. The setting in Figure 2 is one in which the outcome of analytical review determines whether detailed testing or extended detailed testing is undertaken.
Figure 2. The risk of incorrect rejection (Kinney, 1989). Panel A: the state in which material mis-statement exists; Panel B: the state in which material mis-statement does not exist. IA = incorrect acceptance; CRe = correct rejection; CA = correct acceptance; IR = incorrect rejection.
The exposition here, based on Figure 2, follows Kinney in that the inherent risk and control risk components are aggregated to form the term, assessed prior risk. The additional definitions, which are required, are as follows:

APR = assessed prior risk: the probability that material mis-statement will exist in the book values, whether the presence is due to any aspect of inherent risk or to control failure;
ETD = the risk that extended detailed testing will fail to detect material mis-statement when it exists;
AR* = the probability that analytical review will not signal material mis-statement when it is not present;
TD* = the probability that detailed testing will not signal material mis-statement when it is not present;
ETD* = the probability that extended testing will not signal material mis-statement when it is not present.

Kinney's conclusion is that the omission of the audit outcome space
elements which involve incorrect rejection of an assertion will understate the estimated posterior audit risk. However, in order to integrate Kinney's insight into our probability framework, two clarifications of the model are necessary.

Translation of the risk to an individual client

The first clarification arises from the fact that Kinney's analysis is conducted by examining the properties of a portfolio of 1,000 audit clients. The paper does not resolve how the results translate to the posterior risk of an individual audit. This issue is addressed by Aldersley (1989). However, as we show below, the translation is incorrect. Aldersley's approach (1989, figure 2, p. 87) is to take the ratio of the probability of incorrect acceptance to the probability of all acceptances (correct or incorrect). In terms of Figure 2 in this paper, this means that his estimate of posterior risk is measured as
= [Prob(IA1) + Prob(IA2)]/[Prob(IA1) + Prob(IA2) + Prob(CA1) + Prob(CA2)].

It is difficult to understand the logic of this procedure, which employs the audit paths of both detailed testing and extended detailed testing. The difficulty arises because the posterior measurement is made at the completion of the audit. But then the auditor will know whether analytical review indicated material mis-statement or not, and therefore whether extended detailed testing was actually employed or not. This means that, in assessing whether material mis-statement remains at the end of an individual audit, certain audit paths can be eliminated from the calculations. Consequently, the ratio of the probability of incorrect acceptance to the probability of all acceptances (incorrect or correct) (i.e. posterior audit risk) should relate to the known series of substantive tests undertaken by the auditors, as follows. If analytical review indicated material mis-statement, but this was not uncovered by extended detailed testing, then posterior audit risk would be

= [Prob(IA2)]/[Prob(IA2) + Prob(CA2)]
= [APR.(1 - AR).ETD]/[APR.(1 - AR).ETD + (1 - APR).(1 - AR*).ETD*].

The more typical outcome envisaged in audit risk measurement is one in which all audit tests fail to detect material mis-statement. In this case, analytical review does not indicate material mis-statement and extended detailed testing is not employed; then posterior audit risk is
= [Prob(IA1)]/[Prob(IA1) + Prob(CA1)]
= [APR.AR.TD]/[APR.AR.TD + (1 - APR).AR*.TD*]     (3)
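The role of the incorrect-rejection terms in (3) can be seen with a small numerical check. The sketch below is an added illustration with assumed parameter values (they are not taken from Kinney, 1989); setting AR* = TD* = 1 recovers a measure with the same structure as (2), with APR in place of IR.CR, while values below 1 raise the estimated posterior risk:

```python
# Sketch of posterior risk (3), including the incorrect-rejection terms AR* and TD*.
# All parameter values below are illustrative assumptions.

def posterior_risk_3(apr, ar, td, ar_star, td_star):
    """Risk that material mis-statement remains, given that neither analytical
    review nor detailed testing signalled mis-statement (equation (3))."""
    prob_ia1 = apr * ar * td                      # incorrect acceptance
    prob_ca1 = (1 - apr) * ar_star * td_star      # correct acceptance
    return prob_ia1 / (prob_ia1 + prob_ca1)

APR, AR, TD = 0.5, 0.5, 0.2   # assessed prior risk and test risks (assumed)
print(posterior_risk_3(APR, AR, TD, ar_star=1.0, td_star=1.0))    # ~0.091
print(posterior_risk_3(APR, AR, TD, ar_star=0.8, td_star=0.95))   # ~0.116
```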
The low probability of incorrect rejection

The second clarification relates to Aldersley's observation that, in practice, the likelihood of incorrect rejection of an assertion is remote (Aldersley, 1989, p. 88). He argues that this means that the different audit outcome paths in Panel B of Figure 2 can be ignored, and that there is a single path with a probability of (1 - APR), the probability of correct acceptance. However, this conclusion is incorrect, as we show below. The rationale behind Aldersley's observation is that when audit tests indicate a mis-statement, further tests will be employed until the client and auditor are satisfied that the assertion is, indeed, incorrect. Therefore, the probability of a false rejection is negligible. This may well be so, but correct acceptance can be achieved by two different routes. First, analytical review and detailed testing both (correctly) fail to find material mis-statement, the event CA1. Second, mis-statement is incorrectly signalled by analytical review, but when extended detailed testing is undertaken no mis-statement is found; the conclusion is then made that the analytical review signal was, in fact, wrong. This path leads to the event CA2 and is the condition identified by Aldersley in his intuitive discussion. This distinction between CA1 and CA2, however, is critical and is the error in Aldersley's analysis. Only one of them will be relevant to the estimation of audit risk, depending on whether analytical review indicates mis-statement or not. This is illustrated by (3) earlier, which gives the posterior risk when no material error is detected (by analytical review or by tests of detail). Even if the term TD* is constrained to be equal to 1, so that the audit will never falsely conclude that material error is present, the term AR* may be less than 1, and therefore clearly affects the measurement of posterior risk.4

ROLE OF CONTROL RISK

A simplification made by Kinney (1989) is the aggregation of inherent and control risk, because the analysis is not primarily concerned with risk assessments made prior to substantive testing. Sennetti (1990) addresses this issue and develops a model in which each of these components individually affects the outcome space. In this section, we present a critical review of Sennetti's proposal. The Sennetti approach to control risk is illustrated in Figure 3. The setting is one in which the auditor's assessment of the internal controls determines whether analytical or extended analytical procedures are employed.
Figure 3. Sennetti (1990) approach to control risk. Panel A: the state in which material mis-statement exists; Panel B: the state in which material mis-statement does not exist. IA = incorrect acceptance; CRe = correct rejection; CA = correct acceptance; IR = incorrect rejection.
The probability that the auditor will rely on the internal controls under various conditions is given by the IC (internal control) variable. As in the previous setting of Figure 2, the outcome of the analytical review determines whether detailed testing or extended detailed testing is undertaken. The additional definitions, which are required, are as follows:

C = consideration of the internal control structure;
IC1 = the risk of overreliance on internal controls, the probability that the auditor will fail to properly assess control risk at its maximum level;
IC2 = the probability of correct reliance, the probability that the auditor will find the controls effective in preventing mis-statement;
EAR = the risk that extended analytical review will fail to signal material mis-statement;
EAR* = the probability that extended analytical review will not signal material mis-statement when it has not occurred.

The innovative feature of Sennetti's analysis is the effect which the auditor's evaluation of the internal controls has on audit risk. This is
captured by the terms IC1 and IC2. These variables relate to the error which the auditor may make in assessing the power of the controls. However, they do not capture the power of the controls themselves; this is reflected in the term CR, which is excluded from Sennetti's risk assessment. In this model, the auditor chooses either to accept or reject the adequacy of the internal control structure, for a given level of materiality. No intermediate positions are allowed because the evaluation of the internal controls determines which of the two levels of analytical review will be employed. For example, if the internal controls are judged to be inadequate, then the auditor will want to place more emphasis on (obtain more assurance from) analytical review; consequently the path of extended analytical review will be chosen. An important objective of the Sennetti (1990) model is to focus on the error which the auditor may make in assessing the internal control structure. Through the terms IC1 and IC2, the model makes explicit the binary choice faced by the auditor during the course of the audit, to employ extended analytical review or not, in the light of the control structure which is in place.5 Sennetti's measure of risk in these circumstances (1990, p. 108) is illustrated by the top path of Figure 3, when no material mis-statement is detected. Here, the auditor assesses control risk as adequate, and the results from analytical review and detailed testing find no evidence of material mis-statement. This finding is consistent with outcomes IA1 or CA1 and audit risk is calculated as

= [Prob(IA1)]/[Prob(IA1) + Prob(CA1)]
= [IR.IC1.AR.TD]/[IR.IC1.AR.TD + (1 - IR).IC2.AR*.TD*].
Despite the novelty of this approach with respect to the estimation of audit risk, we show below that the variables IC1 and IC2 are irrelevant to posterior risk. An important key to understanding what is wrong with this model is to realise that IC1 = IC2. Although Sennetti defines them as being different, both variables measure the probability that the auditor will conduct analytical procedures (i.e. not extended) at the next stage of the audit. By definition, the auditor does not know whether material mis-statement exists or not; consequently, the auditor's decision about what level of analytical procedures to employ cannot be conditional upon the state of nature concerning material mis-statement. The decision will depend upon the auditor's assessment of both inherent risk and the internal control structure in place; that is, upon the assessment of the probability that a material error might arise and remain undetected by the internal controls (and not upon whether the event has happened).
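The point can also be made algebraically: if IC1 = IC2, the common factor appears in both the numerator and the denominator of Sennetti's ratio and cancels, leaving a measure that does not depend on it. The following short sketch (an added illustration with assumed parameter values) confirms this numerically:

```python
# Sketch: with IC1 = IC2 = ic, Sennetti's measure
#   IR.IC1.AR.TD / (IR.IC1.AR.TD + (1 - IR).IC2.AR*.TD*)
# takes the same value whatever ic is, so the IC terms drop out of posterior risk.

def sennetti_risk(ir, ic1, ic2, ar, td, ar_star, td_star):
    num = ir * ic1 * ar * td
    return num / (num + (1 - ir) * ic2 * ar_star * td_star)

IR, AR, TD, AR_STAR, TD_STAR = 0.6, 0.5, 0.2, 0.8, 0.95   # assumed values
for ic in (0.3, 0.7, 1.0):
    print(round(sennetti_risk(IR, ic, ic, AR, TD, AR_STAR, TD_STAR), 4))  # 0.1648 each time
```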
A related problem here is that it is the variables IC1 and IC2 which feature in the measurement of audit risk, and not the variable CR. As we argue above, IC1 and IC2 relate to the choice of the analytical procedures which will be employed on the audit. Since they try to anticipate how effectively the internal controls will be assessed, they may well have value during the planning of the audit. However, after the audit has been conducted, when the posterior risk is being estimated, it is not necessary to speculate about what audit path will be taken. This is known with certainty. Therefore, IC1 and IC2 should have no place in the estimation of posterior risk.

A COMBINED MODEL OF POSTERIOR RISK

In this section we bring together the novel features of the contributions evaluated above to provide an integrated model of posterior risk. Specifically, we combine
• the posterior framework of the CICA model, with
• the Kinney (1989) insight concerning the incorrect rejection of an assertion, together with
• the Sennetti (1990) concern to identify the precise role of internal control evaluation.

Following our evaluation of Sennetti (1990), the risk measure proposed here reverts to modelling control risk through the variable CR. Unlike the variables IC1 and IC2, the CR term does capture the property required for the estimation of posterior risk, namely the power of the controls to signal material mis-statement when it is present.6 However, IC1 and IC2 do capture an important characteristic of the audit process, namely the variation in the extent of analytical review based on the auditor's evaluation of the internal controls. We also incorporate this feature, together with the Kinney extension of the outcome space to include incorrect rejection. The model is illustrated in Figure 4. As in the Sennetti setting, the auditor's assessment of the internal controls determines whether analytical or extended analytical procedures are employed. Also, the outcome of the analytical review (or extended analytical review) determines whether detailed testing or extended detailed testing is undertaken. The additional definition, which is required for this, is

IC = the probability that the auditor, after assessing inherent and control risk, will employ ordinary (i.e. non-extended) analytical procedures at the next stage of the audit.

Figure 4 is divided into three panels. Panel A represents the state in which an assertion is susceptible to material mis-statement and is not prevented by the entity's internal control structures. Panel B represents the state in which material mis-statement is detected by the internal control structure. Panel C represents the state in which material mis-statement does not occur.7
Figure 4. A combined model for posterior audit risk. Panel A: the state in which material mis-statement exists; Panel B: the state in which material mis-statement is corrected by the internal controls; Panel C: the state in which material mis-statement does not occur. IA = incorrect acceptance; CRe = correct rejection; CA = correct acceptance; IR = incorrect rejection.
It should be noted that although the term IC is included in Figure 4, this is for completeness only; IC does not form part of the posterior risk measure. As discussed above, the rationale for this is that IC measures the probability that the auditor will choose non-extended analytical procedures. However, at the end of the audit when posterior risk is being
estimated, it is known with certainty what choice was made and consequently IC does not affect posterior audit risk. Posterior risk in this model is then determined by the tests undertaken by the auditor. For example, if control risk is assessed as adequate, and the results from analytical review and detailed testing find no evidence of material mis-statement, then this outcome is consistent with events IA1, CA1 and CA5. In this situation audit risk is

= [Prob(IA1)]/[Prob(IA1) + Prob(CA1) + Prob(CA5)], where
Prob(IA1) = IR.CR.AR.TD
Prob(CA1) = IR.(1 - CR).AR*.TD*
Prob(CA5) = (1 - IR).AR*.TD*
= [IR.CR.AR.TD]/[IR.CR.AR.TD + (1 - IR.CR).AR*.TD*]     (4)
On the other hand, if control risk is assessed as inadequate, then extended analytical review will be employed. But if no material mis-statement is found at the close of the audit, then this outcome is consistent with events IA3, CA3 and CA7. In this case, the posterior audit risk is

= [Prob(IA3)]/[Prob(IA3) + Prob(CA3) + Prob(CA7)], where
Prob(IA3) = IR.CR.EAR.TD
Prob(CA3) = IR.(1 - CR).EAR*.TD*
Prob(CA7) = (1 - IR).EAR*.TD*
= [IR.CR.EAR.TD]/[IR.CR.EAR.TD + (1 - IR.CR).EAR*.TD*]     (5)
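Since (4) and (5) share the same structure, they can be evaluated with a single function in which the analytical review terms are either AR, AR* or EAR, EAR*. The sketch below is an added Python illustration; with the Panel A assumptions used in Table 2 below (AR = 0.5, AR* = 0.80, TD* = 0.95) it reproduces, for example, the combined-model risk of 0.26 for IR = CR = 0.9:

```python
# Sketch of the combined posterior risk measure, equations (4) and (5).
# For (5), pass the extended analytical review risks EAR and EAR* in place of AR and AR*.

def combined_risk(ir, cr, ar, td, ar_star, td_star):
    """Posterior risk when the tests actually performed report no mis-statement."""
    num = ir * cr * ar * td
    return num / (num + (1 - ir * cr) * ar_star * td_star)

# Check against the first row of Table 2, Panel A:
ir, cr, ar, ar_star, td_star = 0.9, 0.9, 0.5, 0.80, 0.95
td = 0.05 / (ir * cr * ar)          # planned level of detailed testing
print(round(combined_risk(ir, cr, ar, td, ar_star, td_star), 2))   # 0.26
```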
The examples of audit risk in (4) and (5) relate to the typical situation envisaged in the audit risk literature, in which testing reveals no material mis-statement. They have the same structure as our interpretation of Kinney (1989) in (3), but the detailed components are different in two respects. First, assessed prior risk (APR = IR.CR) is separated into its elements of IR and CR. Secondly, the risk measure reflects the power of the tests actually employed during the course of the audit; for example, in (5), extended analytical review is employed and therefore the terms EAR and EAR* replace AR and AR*. Furthermore, (4) and (5) also have the same structure as our simple representation of audit risk in (2). This is that audit risk can be expressed as a function of three events: (i) the probability that material mis-statement remains undetected; (ii) the probability that mis-statement has arisen but has been corrected by the internal controls; and (iii) the probability that mis-statement has not occurred.

Table 2 illustrates the impact of the terms AR* and TD* by comparing the posterior risk measures from (1) (CICA), (2) (CICA extended) and (4) (combined model).
TABLE 2
Differences between risk models: (a) planned risk; (b) CICA model (1); (c) CICA extended model (2); (d) combined model (4)

(a)     IR     CR     AR     TD     (b)    (c)    AR*    TD*    (d)

Panel A:
0.05    0.9    0.9    0.5    0.12   0.33   0.21   0.80   0.95   0.26
0.05    0.9    0.6    0.5    0.19   0.33   0.10   0.80   0.95   0.13
0.05    0.9    0.3    0.5    0.37   0.33   0.06   0.80   0.95   0.08
0.05    0.6    0.9    0.5    0.19   0.11   0.10   0.80   0.95   0.13
0.05    0.6    0.6    0.5    0.28   0.11   0.07   0.80   0.95   0.09
0.05    0.6    0.3    0.5    0.56   0.11   0.06   0.80   0.95   0.07
0.05    0.3    0.9    0.5    0.37   0.07   0.06   0.80   0.95   0.08
0.05    0.3    0.6    0.5    0.56   0.07   0.06   0.80   0.95   0.07

Panel B:
0.05    0.9    0.9    0.5    0.12   0.33   0.21   0.60   0.95   0.32
0.05    0.9    0.6    0.5    0.19   0.33   0.10   0.60   0.95   0.16
0.05    0.9    0.3    0.5    0.37   0.33   0.06   0.60   0.95   0.11
0.05    0.6    0.9    0.5    0.19   0.11   0.10   0.60   0.95   0.16
0.05    0.6    0.6    0.5    0.28   0.11   0.07   0.60   0.95   0.12
0.05    0.6    0.3    0.5    0.56   0.11   0.06   0.60   0.95   0.10
0.05    0.3    0.9    0.5    0.37   0.07   0.06   0.60   0.95   0.11
0.05    0.3    0.6    0.5    0.56   0.07   0.06   0.60   0.95   0.10

Panel C:
0.05    0.9    0.9    0.5    0.12   0.33   0.21   0.80   0.99   0.25
0.05    0.9    0.6    0.5    0.19   0.33   0.10   0.80   0.99   0.12
0.05    0.9    0.3    0.5    0.37   0.33   0.06   0.80   0.99   0.08
0.05    0.6    0.9    0.5    0.19   0.11   0.10   0.80   0.99   0.12
0.05    0.6    0.6    0.5    0.28   0.11   0.07   0.80   0.99   0.09
0.05    0.6    0.3    0.5    0.56   0.11   0.06   0.80   0.99   0.07
0.05    0.3    0.9    0.5    0.37   0.07   0.06   0.80   0.99   0.08
0.05    0.3    0.6    0.5    0.56   0.07   0.06   0.80   0.99   0.07

Panel D:
0.05    0.8    0.8    0.5    0.16   0.20   0.12   0.80   0.95   0.15
0.05    0.8    0.5    0.5    0.25   0.20   0.08   0.80   0.95   0.10
0.05    0.8    0.2    0.5    0.63   0.20   0.06   0.80   0.95   0.07
0.05    0.5    0.8    0.5    0.25   0.09   0.08   0.80   0.95   0.10
0.05    0.5    0.5    0.5    0.40   0.09   0.06   0.80   0.95   0.08
0.05    0.5    0.2    0.5    1.00   0.09   0.05   0.80   0.95   0.07
0.05    0.2    0.8    0.5    0.63   0.06   0.06   0.80   0.95   0.07
0.05    0.2    0.5    0.5    1.00   0.06   0.05   0.80   0.95   0.07

Panel E:
0.05    0.8    0.8    0.5    0.16   0.20   0.12   0.60   0.95   0.20
0.05    0.8    0.5    0.5    0.25   0.20   0.08   0.60   0.95   0.13
0.05    0.8    0.2    0.5    0.63   0.20   0.06   0.60   0.95   0.09
0.05    0.5    0.8    0.5    0.25   0.09   0.08   0.60   0.95   0.13
0.05    0.5    0.5    0.5    0.40   0.09   0.06   0.60   0.95   0.10
0.05    0.5    0.2    0.5    1.00   0.09   0.05   0.60   0.95   0.09
0.05    0.2    0.8    0.5    0.63   0.06   0.06   0.60   0.95   0.09
0.05    0.2    0.5    0.5    1.00   0.06   0.05   0.60   0.95   0.09
Table 2 takes the initial range of risk assessments depicted in Table 1 and shows the effect of the possibility that analytical review and tests of detail may wrongly signal material mis-statement, with probabilities (1 - AR*) and (1 - TD*) respectively. The effect of wrong signalling by the audit tests on posterior risk is shown in column (d) of Table 2. Comparing this with the CICA extended model in column (c) shows that the posterior risks are higher throughout the ranges and that the effect increases with the probability of wrong signalling. As with Table 1, the results are consistent with the conclusions of recent field work which suggest that posterior risk is not a simple combination of the components of planned risk. In addition, comparing column (d) with the level of planned risk fixed at 5% lends support to the finding in Daniel (1988) that audit managers' assessments of audit risk tend to be significantly higher than the value given by the extant multiplicative models.

SUMMARY, DISCUSSION AND FURTHER RESEARCH

Identifying the components of audit risk in a systematic manner is important because it may be able to enhance audit decision processes. However, an increasing number of discrepancies have been found between the joint risk model and the judgments of auditors in practice. Therefore, the objective of this paper is to extend the model to reflect more accurately the choices and circumstances faced by auditors. The paper critically reviews the joint risk model and also a number of recent contributions to the measurement of posterior audit risk. We show how each of these different insights should be incorporated into a comprehensive measure of posterior audit risk at the level of the individual audit objective (e.g. account balance). The differences between our proposed model and other risk measures are illustrated with some numerical examples and we identify the circumstances under which the different models will yield different estimates of audit risk. Interestingly, we find that our proposal and the auditor risk judgments identified in the recent studies exhibit similar characteristics when compared with the joint risk model.

There are a number of areas which need further discussion and research.

1. One important premise of the research is that risk modelling is relevant for the evaluation of posterior risk, as well as for planning. However, it might be used only at the planning stage. This might appear a plausible strategy for auditors since the extant risk modelling focuses on a given audit objective (for example, the valuation of stock), and not the financial statements as a whole. Since the audit opinion relates to the aggregation of the financial statements, auditors may consider that the opinion is formed more effectively through judgmental processes.
However, there is no necessary conflict between the posterior risk and judgmental processes. Indeed, they should be viewed as complementary. Since the role of posterior risk is currently limited to a specific audit objective, it is important that the conclusions from the individual objectives are aggregated to form a professional opinion. Although the aggregation process is (given existing mathematical models) best left to professional judgment, the judgment is typically based on individual audit tests derived from an audit risk model at the planning stage. An important contribution of the posterior approach is to show that, even if the tests can find no signs of material mis-statement, the achieved risk is different from planned risk; that is, posterior risk assessment is a necessary part of using a risk model as a planning tool.

2. The numerical examples chosen to illustrate the various models are derived from the armchair. They are not constructed from discussions with auditors in practice, and future research should identify more carefully the likely range of values which parameters in the model may take. This will enable a more careful assessment of the benefits to be derived from a detailed specification of the audit process, in contrast to, for example, the joint (planned) risk model.

3. The audit procedures modelled here assume that auditors will compensate for poor controls by extended analytical review. Since analytical review is often constrained by the availability of data, the compensation may take the form of extended tests of detail. The more such substitutions can be reflected in the model, the greater its practical relevance for auditors. Future models of the audit process should incorporate this feature.

4. The comparisons made here between the various models assume (somewhat heroically) that auditors can measure the component risks (IR, CR, TD etc.) without error. However, this assumption is unlikely to be satisfied in practice. Consequently, any differences which are identified between the various audit risk models need to be assessed in relation to the measurement error of the component risks. It may be that differences which are observed in theory are of little practical importance in the light of these measurement errors. Future research should assess the likely impact of this aspect of risk modelling.
NOTES

1. Another reason for field tests to use the joint risk model as a benchmark is its recognised status in professional guidelines.
2. This conclusion appears somewhat paradoxical in that Table 1 and a comparison of (1) and (2) indicate that the CICA model gives rise to a larger audit risk than the CICA extended version in (2); this is despite the fact that the auditor has more knowledge in the former situation (that Event 3 has not occurred) than in the latter.
In fact, this result does make sense because the extra knowledge in the CICA model (1) shows that the internal controls have not identified the mis-statement being investigated; this makes the audit conclusion of no error more risky than otherwise. In contrast, the CICA extended version (2) attaches some positive probability to the event that the controls may have picked up the material error being investigated, and therefore the audit conclusion of no error attracts less risk.
3. Such an event may arise because an isolated error is found in a sample, and is then wrongly projected over the whole of the population. This event is discussed further in Sherer & Kent (1988, pp. 59-61).
4. More formally, Aldersley's observation defines the following equation: Prob(IR1) = (1 - APR).AR*.(1 - TD*) = Prob(IR2) = (1 - APR).(1 - AR*).(1 - ETD*) = 0. This equation is solved either by (i) AR* = 1 and TD* = 1, or by (ii) TD* = 1 and ETD* = 1. However, it is the first of these conditions which is required for the probability of correct acceptance to be (1 - APR).
5. In practice, when the controls are inadequate, the extra assurance required may also come from extended tests of detail as well as extended analytical review. However, for simplicity, this is omitted from the audit model.
6. We assume here that, unlike analytical review and tests of detail, the internal controls will not signal material error when it is not present.
7. As explained above, we assume here that the internal controls will not signal material error when it is not present. Therefore, aspects of control risk (CR) are omitted from Panel C. However, the model does allow the auditor to vary the analytical review procedures in response to an assessment of the perceived power of the controls in place.
REFERENCES

Aldersley, S. J. (1989). 'Discussion of achieved audit risk and the audit outcome space', Auditing: A Journal of Practice & Theory, Supplement, pp. 85-97.
Ashton, A. H. (1991). 'Experience and error frequency knowledge as potential determinants of audit expertise', Accounting Review, April, pp. 218-239.
Audit Practices Committee (1987). Audit Sampling, London: ICAEW.
American Institute of Certified Public Accountants (1981). Statement on Auditing Standards No. 39, 'Audit sampling', New York: AICPA.
American Institute of Certified Public Accountants (1983). Statement on Auditing Standards No. 47, 'Audit risk and materiality in conducting an audit', New York: AICPA.
American Institute of Certified Public Accountants (1988). Statement on Auditing Standards No. 55, 'Consideration of the internal control structure in a financial statement audit', New York: AICPA.
Brown, C. E. & Solomon, I. (1990). 'Auditor configural information processing in control risk assessment', Auditing: A Journal of Practice & Theory, Fall, pp. 17-38.
Canadian Institute of Chartered Accountants (1980). Extent of Audit Testing: A Research Study, Toronto: CICA.
Cushing, B. E. & Loebbecke, J. K. (1983). 'Analytical approaches to audit risk: A survey and analysis', Auditing: A Journal of Practice & Theory, Fall, pp. 23-41.
Daniel, S. J. (1988). 'Some empirical evidence about the assessment of audit risk in practice', Auditing: A Journal of Practice & Theory, Spring, pp. 174-180.
Houghton, C. W. & Fogarty, J. A. (1991). 'Inherent risk', Auditing: A Journal of Practice & Theory, Spring, pp. 1-21.
Kinney, W. R. (1975). 'Decision theory aspects of internal control system design/compliance and substantive tests', Journal of Accounting Research, Supplement, pp. 14-37.
Kinney, W. R. (1989). 'Achieved audit risk and the audit outcome space', Auditing: A Journal of Practice & Theory, Supplement, pp. 67-84.
Leslie, D. A., Anderson, A. D. & Teitlebaum, R. J. (1980). Dollar Unit Sampling: A Practical Guide for Auditors, Toronto: Pitman.
Leslie, D. A. (1984). 'An analysis of the audit framework focusing on inherent risk and the role of statistical sampling in compliance testing', Auditing Symposium VII (University of Kansas), pp. 89-125.
Peters, J. M. (1990). 'A cognitive computational model of risk hypothesis generation', Journal of Accounting Research, Supplement, pp. 83-103.
Sennetti, J. T. (1990). 'Toward a more consistent model for audit risk', Auditing: A Journal of Practice & Theory, Spring, pp. 103-112.
Sherer, M. & Kent, D. (1988). Auditing and Accountability, London: Chapman.
Shibano, T. (1990). 'Assessing audit risk from errors and irregularities', Journal of Accounting Research, Supplement, pp. 110-140.
Spires, E. E. (1991). 'Auditors' evaluation of test-of-control strength', Accounting Review, April, pp. 259-276.
Strawser, J. R. (1990). 'Human information processing and the consistency of audit risk judgments', Accounting and Business Research, Winter, pp. 67-75.
Strawser, J. R. (1991). 'Examination of the effect of risk model components on perceived audit risk', Auditing: A Journal of Practice & Theory, Spring, pp. 126-135.