Expert Systems With Applications, Vol. 7, No. 2, pp. 357-372, 1994. Copyright © 1994 Elsevier Science Ltd. Printed in the USA. All rights reserved. 0957-4174/94 $6.00 + .00
A Constructive Synthesis Approach to a Knowledge-Based Internal Control Evaluation System Design

JONG UK CHOI

Sangmyung Women's University, Anseodong, Cheonan-city, Chungnam, Korea
Abstract: In knowledge engineering interviews with experienced auditors, it was observed that the auditor's review process in evaluating a client's internal control system can be characterized by the use of selective attention, forward chaining with hypothesis testing as a back-up, and multiple knowledge sources. Based on these observations and on cognitive science theories, a computational model, called the red-flagging model, was developed and implemented in a microcomputer environment. In the red-flagging model, a problem solver pays attention to relevant information and uses a blackboard architecture that allows multiple knowledge sources to communicate with each other. The computerized internal control evaluation review system, ICE1, built on the red-flagging model, processes more than 400 checklist question items and incrementally constructs review conclusions from the relevant information. In a performance test against 29 real audit cases (corresponding to 232 test cases as counted in previous research), the review system showed more than 80% compatibility with the experienced auditors' conclusions.
1. INTRODUCTION

EXPRESSING an independent and expert opinion of the integrity of financial statements is an invaluable service rendered by the public accounting profession (Meigs, Whittington, & Meigs, 1985). Generally, the financial statements approved by auditors are used as major information sources for the financial decision making of various interest groups: managers, employees, customers, suppliers, owners, creditors, governmental agencies, or other firms that are considering a merger (Meigs, Whittington, & Meigs, 1985, p. 3). These groups rely on a financial analysis of the organization's past performance and current financial status, an analysis of the business future in which the organization operates, and, based on these and other considerations, a projection of the organization's likely performance over some future period of time (Hart, Barzilay, & Duda, 1986). However, despite the importance of verifying the client's financial statements, public accountants cannot go through all, or even a major portion, of the great number of business transactions and accompanying documents comprising a fiscal year's operations in order to express their own opinion of the fairness of the client's financial statements.

To be efficient within a reasonable time limit for ensuring the integrity of the client's financial statements, public auditors must rely on general information obtained from internal accounting control evaluations to decide the timing, areas, and scope of intensive audit examinations. In other words, auditors carry out intensive investigations in areas for which internal control evaluations indicate weakness, while less intensive auditing tests are needed in areas where internal control is indicated to be strong. In public accounting firms, the internal control evaluation review task is assigned to experienced auditors, generally managers who have at least 5 or 6 years of field experience. They review audit working papers, including the internal control evaluation checklists, which are divided into separate sections for each major transaction cycle, enabling the work of completing the checklist to be divided conveniently among several junior or senior auditors. Checklist evaluation is believed to be the most objective and rigorous technique for evaluating the internal control of the client; therefore, it is currently widely used by auditors (Hansen & Messier, 1982; Brown, 1962). Furthermore, checklists on computer systems are used for maintaining historical audit data about a specific client. Most checklists are designed so that a "NO" answer to a question indicates a weakness in internal control, while "YES" indicates the existence of a relatively strong internal control. When no judgment can be made of
Requests for reprints should be sent to Jong UK Choi, Department of Information Processing, Sangmyung Women's University, Anseodong, Cheonan-city, Chungnam, Korea 330-180.
the quality of the accounting control system, "NA" (not available) can be answered. Frequently, the checklists are accompanied by written narratives or flowcharts as complementary tools that compensate for the inflexibility of the standardized checklist. In the checklist review, the experienced auditor evaluates the effectiveness of the internal accounting control in preventing and detecting material errors and irregularities, based on information collected in previous audit activities and information revealed in the checklists. As a result, the auditors identify the accounts or transaction flows requiring extensive audit tests and also determine the degree of those tests. The review task of the experienced auditors, generally managers in a public accounting company, is a complex and difficult problem that requires years of experience and professional knowledge obtained through formal education and fieldwork. Because of its complexity and the professional judgment involved, the review process has not been completely computerized. However, with the advent of artificial intelligence concepts, a few expert system approaches to a computerized expert review system have experimented with a diagnostic problem-solving model called the hierarchical classification model (Clancey, 1985). Examples of the hierarchical classification model in building expert review systems are AUDITOR (Dungan, 1982; Dungan & Chandler, 1985), EDP-XPERT (Hansen & Messier, 1986; Messier & Hansen, 1984), INTERNAL CONTROL ANALYZER (Gal, 1985), ICES (Grundnitski, 1986), and AUDIT-PLANNER (Steinbart, 1987). EDP-XPERT was developed using AL/X. The system assists computer audit specialists in making judgments as to the reliability of internal controls in advanced computer environments. The goal of the evaluation is constructed by evaluating four subgoals: the reliabilities of supervisory, input, processing, and output controls. The evaluation of the effectiveness of the control system in each subgoal area is determined by checking primitive and detailed control objectives; these include, for example, whether an ID card is used for identifying an individual user, how often access codes are changed for database security, and whether the terminal is lockable with keys issued only to authorized users. Gal (1985) developed INTERNAL CONTROL ANALYZER, a computer program that assists auditors in evaluating the internal accounting control system in the revenue cycle. The conceptual knowledge organization of INTERNAL CONTROL ANALYZER, as depicted in Fig. 1, consists of nodes and edges in a hierarchical tree structure.
[Figure 1 shows a hierarchical tree: Revenue Cycle control at the top; Cash Receipt and Sales controls at the second level; Population, Separation of Duties, and Accuracy controls at the third level; and Completeness, Authorization, Comparison, and Mathematical controls at the bottom.]
FIGURE 1. Hierarchical structure of knowledge organization (source: INTERNAL CONTROL ANALYZER, Gal, 1985).
The most general concept, Revenue Cycle accounting control, is placed at the top of the tree; its subgoals, the Sales and Cash Receipt accounting controls, occupy the second level. Each control object at the second level has three subgoals at the third level: Population, Separation of Duties, and Accuracy controls. The effectiveness of each control at the third level is determined by further subgoals and questions at the lower level. When the Completeness and Authorization controls at the lowest level are identified, the expert's rule in the knowledge base generates an evaluation of the effectiveness of the Population control at the third level, which in turn partially contributes to the evaluation of the Cash Receipt control system at the second level. Finally, the overall evaluation of the revenue cycle is derived from evaluating two subgoals: Cash Receipt and Sales. Grundnitski (1986) reported the development of a computer-based consultation system that offers advice about the effectiveness of a client's internal accounting control system, with emphasis on the Sales and Accounts Receivable accounts. The knowledge base of the system, Internal Control Evaluation System (ICES), was built from entry-level auditor training manuals. Like other internal control systems such as EDP-XPERT and INTERNAL CONTROL ANALYZER, ICES's evaluation proceeds by propagating up a decision tree from the primitive control objectives at the bottom level to the final goal. That is, as in other systems, the system chains its way backward from detailed control objectives at the bottom level to the abstracted conclusion about the effectiveness of the control system at the top level. The judgment of materiality requires an experienced expert's decision that accounts for all considerations of quantitative and qualitative information. Eight factors affecting the materiality decision were identified by Steinbart (1987), and their relationships were coded in EMYCIN to build an expert audit decision support system, AUDIT-PLANNER. Just as in the other systems described previously, the eight factors at the bottom level are related to two subgoals: materiality ratio and materiality base. As these examples of accounting expert systems show, the qualitative factors affecting the public accountant's judgment are structured into a hierarchical multilevel tree or network. The goal at the highest level is reached through backward matching in a sequential manner from the bottom level to the top level. However, the accounting researchers involved in building accounting expert systems have failed to recognize that the "simple" hierarchical classification model has inherent limitations in solving complex and difficult problems, especially when it is combined with surface (heuristic) knowledge. The limitations attributable to the hierarchical structure in developing expert systems are combinatorial explosion in solving sizable problems and inflexibility in the problem-solving control structure.
Combinatorial explosion is a phenomenon in which the number of possible paths required to solve a problem increases exponentially as the number of problem-solving steps increases. Despite smart knowledge organization schemes and researchers' confidence in experts' heuristic knowledge (Clancey, 1985; Lenat, 1982), enumeration of possible solution paths remains one of the major problems in expert system research (Georgeff, 1983; White, 1986). Bailey et al. (1981b) and Hansen and Messier (1982) proved that internal control evaluation using a hierarchical classification model is computationally intractable. They asserted that the time complexity function of evaluating an internal control system using a checklist in a hierarchical classification model is f(n) = 3^n, where n is the number of questionnaire items. In fact, the time complexity (search time) of questionnaire evaluation is not exponential but linear: the time required to draw a final conclusion from the 3^n possible paths is less than 3n. However, the enumeration of the possible paths is evidently exponential, that is, f(n) = 3^n. Unlike human problem solvers, most expert systems, especially those built on the simple hierarchical classification model, have not achieved flexibility in reaching a conclusion. The strategy, and even the sequence, of problem solving in a hierarchical classification model is pre-enumerated by the programmers; therefore, solution development is not controlled by the system itself but by the programmer's predetermined knowledge. In fact, the importance or generality of the objects in a hierarchical tree or network in the model is determined, ordered, and fixed by the knowledge engineer. Thus, the problem-solving system cannot modify its behavior according to solution development. Additionally, the internal control evaluation expert systems that have been built on the primitive hierarchical classification model do not incorporate heterogeneous knowledge sources; a single knowledge source is mainly used for decision making. The absence of a communication mechanism between heterogeneous knowledge sources is attributable to the strict enforcement of subsumption relationships in the hierarchical classification model. To overcome the limitations of current expert system approaches to developing an internal control evaluation system, and to design a computerized evaluation system that would be of practical use to auditors, a computational model was developed based on the cognitive processes of human experts in a review process. The red-flagging model is suggested for solving combinatorial explosion problems, and an object-oriented blackboard model was employed to implement a flexible problem solver. This paper consists of five sections. Section 2 presents the red-flagging model, which can be applied to solving various business review problems.
Section 3 describes the red-flagging model in the context of internal control evaluation. Section 4 discusses the computer implementation of the model and the evaluation of system performance. Section 5 is reserved for concluding remarks.

2. RED-FLAGGING MODEL

2.1. Selective Attention
The red-flagging model is based on the selective attention theory of cognitive scientists, which describes the assimilation of sensory inputs at the micro level of human information processing. The red-flagging model is a computational model for developing knowledge-based business review systems, not for describing human perception mechanisms at the micro level. The reason for employing this seemingly far-fetched theory of human perception is that perceptional information processing is very similar to the reasoning process of human experts at a higher level in terms of memory handling and selectivity. In particular, the selectivity of human information processing is explained by limited information-processing resources: the magical number 7 ± 2 in short-term memory (Miller, 1956) and "bounded rationality" (Simon, 1957; Fox, 1981). Selectivity is very prominent in perception and in reasoning, even in memory discrimination. In other words, although the process of human decision making at the macro (decision-making) level seems much more complex than the micro-level (perceptional) mechanisms, the underlying principle might be the same (Simon, 1981; Smolensky, 1986). Selective attention in perception is the ability to focus mental effort on specific stimuli while excluding other stimuli from consideration (Best, 1986, p. 36). Johnston and Heinz (1978) defined attention as the systematic admission of perceptional data into consciousness as follows:

Since we cannot be fully conscious of all the inputs that continuously flood into our processing systems, some selection of perceptional information is needed prior to consciousness. Since random selection would provide consciousness with an uninterpretable collage of perceptional data, systematic selection is prerequisite to a coherent and intelligent picture of the world. This systematic admission of perceptional data into consciousness defines our usage of the term Attention. (p. 421)

According to Broadbent's bottleneck theory (Broadbent, 1958), information stored in sensory memory is subjected to a preattentive analysis, which determines characteristics of the stimuli such as pitch, intensity, and so on. As a result, the selective filtering process determines which stimuli will undergo further processing, and thus no further elaboration of
information processing takes place on stimuli that are not selected. An important aspect of the selective attention theory is the forgetting of "irrelevant data," which is classified as meaningless, and the shift of attention from selectivity in an early stage to analysis and interpretation in a later stage.

2.1.1. Selectivity in Experts' Decision Making. Newell and Simon (1976) postulated that the first task of intelligence is to avert the ever-present threat of the exponential explosion of search by generating only plausible moves (p. 123). That is, intelligent agents move in the right direction, toward moves that show "promise of being solutions or of being along the path toward solutions," by selectively guiding problem-solving activities. Therefore, they asserted that any intelligent system has to assist "the selectivity of its solution generator" (Newell & Simon, 1976, p. 123) to solve complex problems subject to combinatorial explosion. In experts' problem solving, as in human perception, memory storage, and learning, selectivity is prominent. When Patel and Groen (1986) conducted experimental research on the protocols of cardiologists diagnosing bacterial endocarditis, they found that "expert physicians have greater ability to isolate relevant from irrelevant material and they make more inferences from the relevant material" (p. 94). The selectiveness of human experts' judgmental decisions has also been observed by us in many knowledge engineering interviews with human experts in medical claim review and with experienced auditors. In building a computerized medical insurance claim review system, more than 1 year of intensive knowledge engineering interviews were conducted with the best experts from Blue Cross/Blue Shield (Kuo & Choi, 1985; Kuo, 1986). Also, to extract experts' knowledge in the auditing field, experienced manager-level auditors from six Big Eight accounting firms, independent accountants, and two accounting faculty members were interviewed for 1½ years, beginning in August 1986, to develop a knowledge-based internal control evaluation system (Choi, 1988). In the interviews with experts in medical claim review and auditing, the selective attention of human experts was observed and incorporated in building a computational model. For example, selectivity is frequently observed in the medical claim reviews of experienced nurses. Experienced nurses did not try to remember every piece of information listed in the claim documents (in fact, it might be impossible) as they went through their review process. In the middle of the review, when they find information that is out of normal bounds, they think over that information for a moment or start to collect related information. Frequently, they come back to information that they previously scanned but cannot exactly remember. The reviewers, from their experience, know
that the data has a close relationship with the data currently under consideration. That is, while the review is underway, not every item receives the reviewer's attention even though the information is being reviewed. The observations of human experts in medical claim review therefore indicated that the reviewer did not pay close attention to every piece of available information but paid closer attention only to relevant and important information. In the checklist review, the auditors showed the same decision-making process as in the medical review. The auditors cannot remember every item in the checklists, so they frequently come back to previously scanned items to collect further information. A possible interpretation of this phenomenon, from a cognitive scientist's view, is that the reviewer possesses expertise in distinguishing relevant from irrelevant items in the checklists. When an item is judged unimportant or irrelevant, it might be ignored, or frequently forgotten. Otherwise, as in the perception mechanism that integrates separate representations of visual scenes into a complex display using focal attention (Treisman & Gelade, 1980), unattended items will be "free-floated" until the item is recalled for checking interrelationships with other items. Even though this interpretation of materiality is not visible to other observers, and although the reviewer cannot exactly remember the contents of the information a few minutes later, the selection process is occurring within the reviewer. This interpretation of selectivity in the human expert's review process is consistent with the findings of cognitive scientists who have studied the performances of experts and novices (Chi, Glaser, & Rees, 1982; Chi, Feltovich, & Glaser, 1981). Also, in the accounting area,
a similar conclusion has been drawn from survey research. Rigsby (1986) investigated the relationship between task complexity and auditors' judgments. He stated:

The more experienced auditors tend to differentiate to a greater extent in making materiality judgments than less experienced auditors. . . . Partners and managers recognized the shades of complexity presented in the situations, while seniors and staff found everything material. (p. xii)

Based on these findings from cognitive theories and knowledge engineering interviews with auditors, the conclusion can be drawn that human experts in the review process use their expertise to distinguish important information from irrelevant information, and that these mental activities are directed by the contextual interpretation of multiple knowledge sources, domain knowledge, and domain-free knowledge.
2.2. Red-Flagging Model

The review process in the red-flagging model consists of two distinct and sequential phases: the selective attention phase and the evaluation phase. As shown in Fig. 2, the selective attention phase goes through a preattentive filtering step and an attentive analysis step to reduce the cognitive burden of a reviewer by selecting the relevant or important information from which final conclusions are then built up. The cognitive activities in the preattentive filtering step are dominated by focusing the reviewer's attention on determining which information will undergo further processing and which information will be erased from memory.
[Figure 2 depicts the red-flagging model: input passes through preattentive filtering and attentive analysis in the selective attention phase, then item evaluation, conflict resolution, and global evaluation in the evaluation (construction) phase, producing a report on the degree of reliability and the extent and areas of substantive audit tests.]
FIGURE 2. Red-flagging model.
Then, in the attentive analysis step, a more careful analysis of the information is conducted to check whether the information is redundant, missing, irrelevant, or insignificant with respect to the contextual interpretations. The attentive analysis step is an additional scanning process that requires the reviewer to understand domain contexts and to analyze the information in terms of domain-specific situations. Frequently, information transferred from the previous step (preattentive filtering) is discarded when additional analysis indicates it is not worth elaborate analysis or further processing in the later phase. Then, as shown in Fig. 2, the reviewer evaluates each item, resolves conflicts between items, and finally constructs a final conclusion from the remaining items. The reviewer in this phase collects relevant information, compares it with expectations developed from partial information, resolves conflicts between individual items, and then builds up a consistent, final conclusion. In item evaluation, an individual interpretation of a datum is conducted in the context of situational data. For example, a diagnosis in a medical insurance claim case should be interpreted with other information: sex, age, patient history, and so on. Frequently, patients have more than one disease; therefore, when multiple diseases are reported by medical doctors, the interrelationships between them should be identified in the conflict resolution step. Identifying interrelationships between individual items and interpreting them against situational data is the most important and difficult task of human experts. Then, in the global evaluation, higher-level factors are considered in reaching a final conclusion. The higher-level factors, in the case of medical insurance claim reviews, are considerations of the integrity of the medical service providers, the company's claim policy, and other data. For example, in the medical insurance claim review, the experienced nurses understand the policy of the insurance company and give positive and favorable consideration to the claim cases. Therefore, even though the medical judgment on a review case may be negative toward the request, insurance reviewers accept the request because of company policy. In fact, understanding high-level constraints makes a big difference, as shown in the previous section; expert reviewers have not only medical experience but also claim review experience. The important point is that the cognitive process of the reviewer is dominated in the selective attention phase by the materiality of individual data and in the evaluation phase by the interrelationships between data. The materiality judgment in the first phase can drastically reduce the cognitive burden of the review system; in other words, it reduces the search space to a manageable size. The selective attention phase characterizes the red-flagging model by collecting only appropriate information for further processing. In the second phase,
a contextual interpretation of the collected data is conducted based on "first principle" knowledge and heuristic knowledge. The most important tasks conducted in the second phase are to identify the interrelationships between data and to assign a weight to each datum in the combinations. Final conclusions need the integrated and consistent view of a human expert; a final conclusion is not simply an assembled view of individually interpreted partial conclusions. For this reason, the second phase of the model is characterized as a "construction" process.

3. A COMPUTERIZED INTERNAL CONTROL EVALUATION SYSTEM

The red-flagging model assumes that the review system goes through two stages in the review process: selective attention and incremental construction of final conclusions. In the first stage, the system eliminates irrelevant or unimportant questions in the checklist from further consideration, and thus the problem space for the final conclusion in the next stage is drastically reduced to a manageable size. In the second stage, the system starts with the remaining questions of the checklist, resolving conflicts between questions, evaluating each question, and building a global evaluation. The factors involved in the evaluation review process of experienced auditors, and their relationships with the judgments in each step of the red-flagging model, are shown in Fig. 3.
3.1. Selective Attention Stage

The answer to each evaluation question in a checklist should be one of "YES", "NO", or "NA" (not available). Because the reviewer is concerned with identifying weak control procedures in the accounting system of a client, the experienced auditor need not give further consideration to question items showing that a strong or proper internal control exists. That is, checklist questions answered "YES" by junior auditors are classified as "non-target" information and discarded in the early stage of the review process. In contrast, checklist questions marked "NO" or "NA" are selected at the preattentive filtering step as "target information" and then transferred to the next step, attentive analysis. The simple idea that human beings do not pay great attention to unimportant or irrelevant information is a prominent and important cognitive process in pattern recognition (vision, voice, and other patterns) and memory storage. For example, marketing theories of consumer behavior assume that only strong and significant stimuli catch the consumer's attention, an assumption exploited in commercial advertising.
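As a concrete illustration of this filtering, the following minimal sketch in C (the language in which ICE1's control knowledge was encoded) keeps only the "NO" and "NA" items; the ChecklistItem structure and the function names are illustrative assumptions, not the ICE1 source.

```c
/* Sketch of the preattentive filtering step: checklist items answered
 * "YES" are treated as non-target information and dropped, while items
 * answered "NO" or "NA" are carried forward to attentive analysis.
 * The item structure and names are illustrative, not the ICE1 source. */
#include <stdio.h>
#include <string.h>

typedef struct {
    int  id;            /* checklist question number              */
    char cycle[16];     /* transaction cycle, e.g., "revenue"     */
    char answer[4];     /* "YES", "NO", or "NA"                   */
} ChecklistItem;

/* Copy the target items (answer other than "YES") into out[]; return count. */
int preattentive_filter(const ChecklistItem *in, int n, ChecklistItem *out)
{
    int kept = 0;
    for (int i = 0; i < n; i++) {
        if (strcmp(in[i].answer, "YES") != 0)    /* keep only "NO" or "NA" */
            out[kept++] = in[i];
    }
    return kept;    /* typically 10-15% of n, according to the auditors interviewed */
}

int main(void)
{
    ChecklistItem items[] = {
        { 1, "revenue", "YES" },
        { 2, "revenue", "NO"  },
        { 3, "payroll", "NA"  },
    };
    ChecklistItem targets[3];
    int k = preattentive_filter(items, 3, targets);
    printf("%d of 3 items passed on to attentive analysis\n", k);
    return 0;
}
```

Only the retained items reach the later, more expensive analysis and evaluation steps.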
[Figure 3 sketches an example of the evaluation: decision factors such as business type, business size, and control objectives feed the materiality judgment for a checklist item.]
FIGURE 3. Example of internal control evaluation.
In auditing firms, auditors agreed that when the less experienced auditors finish checking the internal control system of the client, very few question items in the checklist, usually fewer than 10% or 15%, are marked "NO" or "NA". Otherwise, the auditors would discontinue the audit activity, because an accounting system worse than what normal criteria would indicate is an unauditable situation. The audit risk is too high, so all the audit firms' manuals recommend discontinuing further auditing activities in the case of an unsatisfactory evaluation of the internal control system. The professional institute (AICPA) suggests that "an auditor may conclude that further study and evaluation are unlikely to justify any restriction of substantive tests." Such a conclusion would cause an auditor to discontinue further study and evaluation of the internal accounting control system and to design substantive tests that do not contemplate reliance on such internal accounting control procedures (AICPA, para. 4202.04, 1980). From the computer implementation point of view, this simple idea alone can drastically reduce the computational load of computer review systems, which is otherwise combinatorially explosive. In the internal control evaluation review, not all checklist items marked "no control existing" or "information not available" need be transferred to the later stage of constructing evaluation conclusions. Also, the checklist items that are considered important and relevant to further evaluation in later stages should be assigned their importance in the substantive test design. Some of the items marked "relevant" in the first step of selective attention are discarded during the attentive analysis step and thus are not included in later consideration. In this step, the problem solver's attention focuses on determining the materiality of a specific checklist question: whether the absence of a specific control procedure is important to further consideration. Frequently, experienced auditors, based on their professional knowledge and the historical data of the client, know that the absence of a specific control procedure would not be serious in certain circumstances, so that it can be ignored in drawing evaluation conclusions. From the standpoint of audit risk or cost comparison, it is more beneficial to ignore such an item because it does not affect finding important and potential errors or irregularities. For this reason, the professional institute of CPAs (AICPA) warns public auditors not to evaluate the reliability of the internal accounting control system based solely on the checklist answers. The auditor should ask himself the question: "Should the absent procedure be dismissed as not applicable because it is not relevant in the client's circumstances?" (AICPA, para. 4200.16, 1980). The decision as to whether a specific checklist question answered "NO" or "NA" is important or relevant to further consideration is a very subjective judgment of senior auditors; it depends on the auditor's individual experience, the audit situation, and other factors. In the attentive analysis step, the materiality judgment is the most important consideration. The review system also goes on to ask additional questions, through supplementary control analysis, to ensure that a control procedure is indeed absent.

3.1.1. Materiality. Auditors indicated that several decision factors should be considered in materiality judgments: industry trend (industry knowledge); business type, size, and location (business knowledge); independence and quality of the internal audit department; the owner's awareness of accounting controls; the purpose of the financial statements' users; sophistication of the computer system (accounting system knowledge); dollar amount
effects (accounting knowledge); and so on. In this step, the review system retrieves an extensive range of data from external files and sends them to the domain knowledge processor. Several accounting research results support the view that the materiality judgment of a specific control procedure depends on environmental factors and that the judgment requires the senior auditor's professional knowledge obtained through audit experience and formal education (Joyce, 1976; Messier, 1983; Reekers & Taylor, 1979; Rigsby, 1986; Weber, 1980). These findings correspond to the experimental results of Chi, Glaser, and Rees (1982), Chi, Feltovich, and Glaser (1981), Kolodner (1983), and Larkin, McDermott, Simon, and Simon (1980).

3.1.2. Supplemental Control. When an experienced auditor finds the absence of a specific control procedure, he or she does not mechanically interpret it as an existing weak control. Instead, as auditors indicated in the knowledge engineering interviews, they ask themselves whether there is another control that can mitigate the absence of the preventive control. Just as "the supervision within the firm compensated for the lack of separation of duties" (Gal, 1985, p. 160), the auditors usually ask whether "there is possibility for the firm to compensate for the lack of complete separation among incompatible functions" (p. 152). Accordingly, Gal (1985) asserted that "the use of compensating controls also made the system less stringent because situations that had inadequate separation of duties were evaluated as having adequate controls" (p. 161). In the computer implementation of the red-flagging model, a question-asking pattern is established for seeking supplementary controls that compensate for the absence of specific control procedures.
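A minimal sketch of this question-asking pattern follows; the rule table and question wording are illustrative assumptions, with only the separation-of-duties example drawn from Gal (1985) as quoted above.

```c
/* Sketch of the supplemental-control check: when a control procedure is
 * found to be absent, the system asks whether a compensating control can
 * mitigate the absence before treating it as a weakness. The rule table
 * is illustrative only and is not taken from the ICE1 knowledge base. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *absent_control;        /* control missing per the checklist */
    const char *follow_up_question;    /* supplementary question to ask     */
} SupplementalRule;

static const SupplementalRule rules[] = {
    /* Example drawn from the text: supervision can compensate for an
     * incomplete separation of incompatible duties (Gal, 1985). */
    { "separation of incompatible duties",
      "Is there close supervision that compensates for the lack of separation?" },
    /* Hypothetical additional rule for illustration. */
    { "matching of shipping documents to invoices",
      "Is a periodic reconciliation of shipments and billings performed?" },
};

/* Return the follow-up question for an absent control, or NULL if none exists. */
const char *supplemental_question(const char *absent_control)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strcmp(rules[i].absent_control, absent_control) == 0)
            return rules[i].follow_up_question;
    return NULL;
}

int main(void)
{
    const char *q = supplemental_question("separation of incompatible duties");
    if (q != NULL)
        printf("Ask: %s\n", q);    /* a "YES" answer may mitigate the absence */
    return 0;
}
```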
3.2. Evaluation Stage

Through the evaluation process, final conclusions are incrementally constructed based on the identification and interpretation of interrelationships between control procedures. Unlike conventional approaches to knowledge-based review systems (Gal, 1985; Hansen & Messier, 1986), in the red-flagging model the review system assembles relevant data into a consistent interpretation rather than selectively searching down predetermined decision trees.

3.2.1. Item Evaluation. Accounting control procedures are designed to control accounting transaction data flows so as to prevent and detect accounting errors and irregularities. Therefore, the absence of a specific control procedure might significantly affect the validity of accounts and eventually lead to invalid financial statements. For this reason, in the item evaluation step, the review system must identify which
accounts will be affected by the absence of a specific control procedure and how seriously those accounts will be affected. The identification of accounts affected by the absence of a specific control procedure depends on the case-specific accounting control system, because each organization employs different procedures, which leads to a variety of account names. However, from the attributes of a control procedure, the review system can make approximate estimates of what can go wrong and which accounts can be affected (Ernst & Whinney, 1979). The attributes of a control procedure are defined as the description of the operational functions and control objectives to which the control procedure is related. For example, the absence of a control procedure to check that products shipped are billed could lead to material errors such as unbilled deliveries and to irregularities such as employees sending products without billing. Eventually, the absence could result in understated accounts: sales, accounts receivable, and inventory/cost of sales (Ernst & Whinney, 1979, p. 26; Touche Ross, 1978, p. 100). The inferencing process of generating the types of accounts affected by the absence of specific controls should be conducted in the review system based on a cause-effect network; such relationships are also provided in audit manuals, a public knowledge source. Propagating a chain of relationships between findings of compliance tests and accounts affected is an important task of the review system. However, simple propagation is not enough to design substantive tests, in which the extent of intensive audit tests should be determined from the audit evidence. To determine the extent of audit tests, the review system calculates the probability of how much impact the absence of a control procedure would have on each account. The evaluation of an accounting control system is nothing but an assessment of the quality of that system, and determining the extent of substantive tests is a task requiring professional knowledge of the accounting system and experience. Interviews with auditors indicated that most auditors employ top-down approaches to designing substantive tests. Understanding the industry, business, management, and accounting system should come before designing compliance tests. For example, an analytical review is an important process in audit planning that determines compliance tests and substantive tests in advance. Based on analytical reviews and discussions with managers in the client's organization, auditors assess audit risks for each accounting control area and for items in the financial statements. Conclusively, the degree of substantive testing is determined from a general understanding of the client's business environment and accounting system environment. Therefore, in the review system, a top-down approach to propagating constraints from business understanding down to control procedures at the bottom level is employed.
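A minimal sketch of this top-down item evaluation follows; the account list comes from the shipment/billing example above, and the manufacturing-versus-construction adjustment from the text below, while the probability values and the additive adjustment are illustrative assumptions rather than ICE1's actual calculation.

```c
/* Sketch of the item-evaluation step: from the attributes of an absent
 * control procedure, derive the accounts that could be affected and an
 * audit-risk probability, adjusted top-down by business-level data such
 * as business type. Structures and numbers are illustrative, not ICE1's. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *control;        /* absent control procedure                 */
    const char *accounts[4];    /* accounts that could be misstated         */
    double      base_risk;      /* illustrative initial audit-risk estimate */
} ItemEvaluation;

int main(void)
{
    /* Example from the text: shipments are not checked against billings,
     * which can understate sales, accounts receivable, and inventory/COS. */
    ItemEvaluation item = {
        "products shipped are checked against billings",
        { "sales", "accounts receivable", "inventory/cost of sales", NULL },
        0.50
    };

    /* Top-down constraint from business type: inventory-related controls
     * matter more for a manufacturing client than for a construction company. */
    const char *business_type = "manufacturing";
    double risk = item.base_risk;
    if (strcmp(business_type, "manufacturing") == 0)
        risk += 0.20;          /* assign a higher audit-risk probability */
    else
        risk -= 0.20;          /* assign a lower one                     */

    for (int i = 0; item.accounts[i] != NULL; i++)
        printf("%-25s audit risk %.2f\n", item.accounts[i], risk);
    return 0;
}
```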
Data at the top level, such as business type and the integrity of management, constrain the types of accounts and the degree of substantive tests through the assignment of probabilities. For example, control procedures related to inventory controls are important for manufacturing companies, and thus a higher probability is assigned; in contrast, if the client is a construction company, the absence of such a control procedure is assigned a lower probability. The operations and location of the client then increase or decrease the probability of audit risk. This procedure is very similar to the materiality judgment in the attentive analysis step, but it differs in that item evaluation eventually generates probabilities of audit risk and assigns a different probability to each account affected. In item evaluation, two judgments are made: the accounts affected and the degree of audit risk for each account.

3.2.2. Conflict Resolution. In real audit situations, control procedures are interrelated through business transaction cycles. For example, an increase in sales in the income statement is frequently accompanied by a decrease in inventory, an increase in accounts receivable, or an increase in cash. Therefore, accounts receivable, cash, inventory, and sales are interrelated, and an accounting error occurring at a control point in sales could lead to a chain of malfunctions in inventory, accounts receivable, or cash. For this reason, auditors closely investigate other control procedures when they find an accounting error or a nonexistent control procedure. In the red-flagging model implementation, the scheduler knowledge source in the control knowledge asks the data interface whether there are other control procedures that are related to the item under consideration and that are indicated as having "weak" or "nonexisting" controls. If a set of control procedures is found to be weak or nonexistent and they are related to the absence of a specific control procedure, this condition is called a critical combination (Deloitte Haskins, 1985). Deloitte and Haskins define critical combinations as indicating possibilities for the perpetration and concealment of errors and irregularities: "Perpetration refers to an action that results in a loss from unauthorized use or disposal of tangible assets," while "concealment refers to an action taken, if necessary, to prevent detection of the perpetration" (1983, p. 1). In particular, clustered sets of nonexistent control procedures in terms of separation of duties could result in serious accounting errors or irregularities. In red-flagging, when a critical combination is identified, a much higher probability of audit risk is assigned to those areas by the evaluator knowledge source in the control knowledge; when a critical combination is not identified, a single condition is announced, with a lower probability of audit risk assigned to the control procedure.
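A minimal sketch of this check is given below; the relatedness encoding and the two probability levels are illustrative assumptions, with only the rule that a cluster of related weaknesses receives a much higher audit risk than a single condition taken from the text.

```c
/* Sketch of the critical-combination check in conflict resolution: a weak
 * or missing control whose related controls are also weak gets a much
 * higher audit-risk probability than an isolated single condition.
 * Data and probability values are illustrative, not ICE1's. */
#include <stdio.h>

#define MAX_RELATED 4

typedef struct {
    const char *name;
    int         weak;                   /* 1 = weak or nonexistent control       */
    int         related[MAX_RELATED];   /* indices of related procedures, -1 end */
} ControlProcedure;

/* Count how many related procedures are also weak or nonexistent. */
static int weak_related(const ControlProcedure *p, const ControlProcedure *all)
{
    int count = 0;
    for (int i = 0; i < MAX_RELATED && p->related[i] >= 0; i++)
        if (all[p->related[i]].weak)
            count++;
    return count;
}

int main(void)
{
    /* Sales-cycle example from the text: sales, accounts receivable,
     * and inventory controls are interrelated through the transaction flow. */
    ControlProcedure procs[] = {
        { "sales billing control",       1, { 1, 2, -1, -1 } },
        { "accounts receivable control", 1, { 0, -1, -1, -1 } },
        { "inventory control",           0, { 0, -1, -1, -1 } },
    };
    for (int i = 0; i < 3; i++) {
        if (!procs[i].weak)
            continue;
        int critical = weak_related(&procs[i], procs) > 0;
        double risk = critical ? 0.90 : 0.40;   /* higher risk for critical combinations */
        printf("%-28s %s, audit risk %.2f\n", procs[i].name,
               critical ? "critical combination" : "single condition", risk);
    }
    return 0;
}
```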
When the checking of critical combination conditions is finished and conflicts have been identified and resolved, the accounts and the degree of substantive tests are transferred to the final step, global evaluation.

3.2.3. Global Evaluation. In the conflict resolution step, each control procedure and its related accounts are evaluated to identify critical combination conditions, and probabilities of audit risk are assigned to each account. Therefore, at the end of the conflict resolution step, the substantive test design might appear to be complete. Remember, however, that the computer system is not so comprehensively intelligent as to integrate all the diverse environmental factors that should be considered in the substantive test design. Even though some control procedures are identified as weak or nonexistent in compliance tests and checklist evaluations, not all of the accounts related to them need be tested in the substantive tests. Conversely, some accounts that are not evaluated as risky items should still be included in the substantive tests. For example, even though the control procedures relating to fixed assets in a manufacturing company are evaluated as strong in the compliance tests and the checklist evaluations, those accounts should be investigated in the substantive tests. Some auditors in the audit-planning stage rely on the income statement for designing compliance tests and on the balance sheet for designing substantive tests. Also, they perform compliance tests when the amounts of transactions related to a specific account are large, and substantive tests when small amounts of transactions are involved. Based on this, the review system generates the accounts that should be tested regardless of the checklist evaluation results; typically, fixed assets and properties in manufacturing are considered for substantive tests. In addition, audit risk analysis should be based on other decision factors: the integrity of management, historical audit data, and audit firm policy. These factors are incorporated using question-answering patterns. As the conclusion of the red-flagging model, the review system reports the overall quality of the internal accounting control system, the areas requiring intensive tests, and the degree of intensive testing.
4. SYSTEM EVALUATION

ICE1, a knowledge-based internal control evaluation system, was developed based on the red-flagging model, for solving combinatorial explosion problems, and on an object-oriented implementation of the blackboard model (Erman et al., 1980; Hayes-Roth, 1978; Nii, 1986a, b), for providing flexibility in problem-solving activities.
[Figure 4 shows the ICE1 architecture: a user interface with an explanation facility; control knowledge comprising a scheduler, a conflict resolver, and an evaluator; domain knowledge encoded in M.1 and C; and a data interface connecting the knowledge bases to database and ASCII data files.]
FIGURE 4. ICE1: A computerized evaluation system.
Conceptually, as shown in Fig. 4, the review system consists of three intercommunicating parts: a control knowledge part, a domain knowledge part, and data files. The control knowledge, like the central processing unit in a computer system, sits on top of the domain knowledge. It formulates goals and then develops problem-solving plans to achieve them. To accomplish the established problem-solving plans, the control knowledge triggers a chain of domain knowledge modules, sets up a priority list of subgoals, evaluates alternatives, resolves possible conflicts, and controls input/output operations. The control knowledge in the ICE1 system comprises three knowledge sources: the scheduler, the conflict resolver, and the evaluator. The scheduler executes input/output controls and directs problem-solving activities by evaluating, triggering, and manipulating other modules; it allocates time and computational resources to achieve high performance. The conflict resolver checks possible conflicts between items, evaluates the materiality of each knowledge source's opinion, and then resolves the conflicts to maintain the consistency of the system's conclusion. The evaluator generates heterogeneous probability functions, calculates and compares them, and draws a conclusion. Finally, the blackboard is a common data structure on which the various knowledge sources are permitted to interact and communicate by writing and erasing opinions. The problem-solving technique that uses a blackboard in a multiple-level structure is quite similar to a panel discussion in which several experts sit around a table and contribute specialized opinions; each expert in the panel discussion is assumed to understand the other experts' opinions and to send messages to the other experts by writing on the blackboard (Rychner, Banares-Alcantara, & Subrahmanian, 1984). The domain knowledge that is triggered, evaluated, and manipulated by the control knowledge is partitioned into multiple-layered knowledge sources: industry, business, accounting system, accounting, and so on. Each knowledge source propagates its own opinion and contributes to the decision making involved in each judgment. As shown in Fig. 5, for example, a specific control procedure has multiple attributes: control objectives, procedures, transaction cycles, and accounts affected. On the other hand, the materiality judgment on the absence of a specific control procedure should be made through the cooperation of multiple knowledge sources: industry, business, accounting system, and accounting.
[Figure 5 shows the composition of the checklist: the checklist questionnaires are grouped by transaction cycle.]
FIGURE 5. Composition of checklist.
The multiple knowledge sources make their own judgments from their own criteria and then post their opinions about materiality on the blackboard. Importantly, the relationship between a control procedure and a specific knowledge source embodies the expert's heuristic knowledge: heuristic knowledge determines what data should be included, and what weight each datum should have, in a specific knowledge source's decision making. In each judgment, different knowledge sources are triggered, evaluated, and manipulated. The control knowledge and domain knowledge were encoded in C and in an expert system shell, M.1. Major portions of the domain knowledge, which links a specific control procedure with multiple data, were explicitly coded in M.1, while major portions of the control knowledge were programmed in C. The data interface handles 13 external files to store, manipulate, update, and retrieve the data requested by the knowledge base. The explanation function is supported by the M.1 system shell.
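The posting-and-combining mechanism just described can be sketched as follows; the structures are assumptions, and a plain average stands in for ICE1's actual weighting of the posted opinions, which the paper does not specify at this level of detail.

```c
/* Sketch of the blackboard mechanism: each domain knowledge source
 * (industry, business, accounting system, accounting) posts its opinion
 * on the materiality of an item to a shared structure, which the control
 * knowledge reads and combines. Structures, the averaging rule, and the
 * numbers are illustrative; ICE1's actual weighting is not shown here. */
#include <stdio.h>
#include <string.h>

#define MAX_OPINIONS 16

typedef struct {
    char   source[32];     /* knowledge source that posted the opinion */
    double materiality;    /* that source's materiality estimate, 0..1 */
} Opinion;

typedef struct {
    Opinion entries[MAX_OPINIONS];
    int     count;
} Blackboard;

/* A knowledge source writes its opinion on the blackboard. */
void post(Blackboard *bb, const char *source, double materiality)
{
    if (bb->count < MAX_OPINIONS) {
        strcpy(bb->entries[bb->count].source, source);
        bb->entries[bb->count].materiality = materiality;
        bb->count++;
    }
}

/* The evaluator in the control knowledge combines the posted opinions;
 * a plain average stands in for the system's weighting scheme. */
double combine(const Blackboard *bb)
{
    double sum = 0.0;
    for (int i = 0; i < bb->count; i++)
        sum += bb->entries[i].materiality;
    return bb->count > 0 ? sum / bb->count : 0.0;
}

int main(void)
{
    Blackboard bb = { .count = 0 };
    post(&bb, "industry", 0.7);
    post(&bb, "business", 0.6);
    post(&bb, "accounting system", 0.8);
    post(&bb, "accounting", 0.5);
    printf("combined materiality: %.2f\n", combine(&bb));
    return 0;
}
```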
In the performance evaluation of an expert system, compatibility is generally calculated simply as the number of agreed conclusions divided by the total number of test cases (Buchanan, 1982; Dungan, 1982; Steinbart, 1987), and this popular method was used here. Using each conclusion in each transaction cycle as a test case, there were 232 test cases (29 cases with 8 transaction cycles each). Fifty-seven of these test cases were "not applicable," leaving 175 suitable cases. The review system and the auditors agreed on 151 of these cases, an agreement ratio of 83%. The data used in the performance evaluation are shown in Fig. 6, and the evaluation results are summarized in Fig. 7. When the distance between the auditor's conclusion and the system's conclusion is also measured, for example, treating "very good" and "good" as only 20% apart, the agreement ratio rises to 95%.
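One way to read this distance-weighted measure is sketched below; the linear penalty of 0.2 per evaluation level and the sample vectors are assumptions chosen to match the "one level apart counts as 20% away" example, not the paper's actual computation.

```c
/* Sketch of a distance-weighted agreement measure over the five evaluation
 * levels used in Figure 7 (1 = excellent ... 5 = very bad): exact agreement
 * earns full credit and each level of disagreement costs 20%. The linear
 * 20%-per-level penalty is an assumption consistent with the example in
 * the text, not a formula stated in the paper. */
#include <stdio.h>
#include <stdlib.h>

double weighted_agreement(const int *auditor, const int *sys, int n)
{
    double credit = 0.0;
    for (int i = 0; i < n; i++) {
        double c = 1.0 - 0.2 * abs(auditor[i] - sys[i]);
        credit += (c > 0.0) ? c : 0.0;     /* never give negative credit */
    }
    return credit / n;
}

int main(void)
{
    /* e.g., the system says "excellent" (1) where the auditor said "good" (2):
     * the case still earns 80% credit instead of counting as a miss. */
    int auditor[] = { 2, 4, 1, 3 };
    int sys[]     = { 1, 4, 1, 4 };
    printf("weighted agreement: %.0f%%\n",
           100.0 * weighted_agreement(auditor, sys, 4));
    return 0;
}
```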
5. CONCLUSIONS

5.1. Constructive Synthesis

Simon (1983) suggested that problem solving can be viewed as a process of search, reasoning, or constraint satisfaction, depending on the problem representation and solution development. In search, Simon (1983) defines problem solving as selectively "moving from one node to another along links that connect" (p. 7) until a solution is encountered. On the other hand, problem solving can be viewed as a reasoning activity in which the problem is represented as a set of axioms to be proved.
[Figure 6 is a table of the 29 audit cases used in the evaluation test. For each case it lists the business type (state government, software company, service company, tile distribution, construction company, machine tool manufacturer, textile machinery, school district, restaurant, church group, and so on), business size, owner's involvement, supervision, separation of duties, internal auditor, historical audit record, and the ICE1 evaluation result.]
FIGURE 6. Audit data collected in the evaluation test.
Problem solving is then a process in which more and more information is accumulated by deducing new axioms or propositions until an answer is found. Finally, in constraint propagation the problem is solved by hierarchically propagating constraints until a subset object is found that satisfies all the constraints. From the point of view of solution development, problem solving can be viewed as either a selective process or a constructive process, depending on whether solutions are pre-enumerated. Selective problem solving is a process of selectively searching a pre-enumerated problem space using a matching function (Clancey, 1984, 1985); the inferencing of the problem solver is then the finding of a match between the given data and known solutions. To the contrary, in constructive problem solving "the solutions aren't explicitly enumerated so there can be no pre-existing links for mapping problem descriptions to solution directly" (Clancey, 1985, p. 333); therefore, solutions are "constructed by operators for incrementally elaborating and aggregating solution components" (p. 333). The concepts of the "selective" and "constructive" processes correspond to solving Analysis and Synthesis (design) tasks (Simon, 1981, p. 131). The tasks that expert systems have to solve have been categorized as diagnosis and design (Chandrasekaran, 1984); classification, design, and decision support (Sowa, 1984, p. 289); and interpretation, prediction, diagnosis, planning, monitoring, debugging, repairing, instruction, and control (Hayes-Roth, Waterman, & Lenat, 1983). Traditionally, accounting researchers have believed that the computer implementation of internal control evaluation checklist reviews belongs to diagnostic problem solving (Messier & Hansen, 1984; Gal, 1985). In feasibility studies of expert system applications to accounting problem solving, they found close similarities between the medical diagnostic problems for which early expert systems were built and the ill-structured accounting problems for which auditing researchers attempt to develop computerized auditing systems (O'Leary, 1987). Thus, Messier and Hansen (1984) stated:

We believe that there is a direct analogy between the way a physician diagnoses a disease and the way an auditor "diagnoses" the state of a client's accounting system and financial statements. An examination of some recent research in medical decision making discloses a good deal of similarity between physician and auditor decision making. (p. 187)

Also, Gal (1985), in designing an internal control evaluation system, found "similarities between the task for which EMYCIN was originally designed and the judgment examined in this study" and that "a very general examination of the diagnosis process reveals a number of similarities between medical diagnosis and auditing judgments involved in the evaluation of internal control" (p. 96). For this reason, current expert
internal control evaluation systems have been built on the same problem-solving model and inexact reasoning model as MYCIN (Choi & Yoo, 1988). That is, accounting researchers, excited by the impressive performance of early expert systems, believed that the selective analysis problem-solving model employed in early diagnostic systems was directly applicable to internal control evaluation system design. The major limitations of selective analysis based on hierarchical classification models were discussed in Choi and Yoo (1988) and briefly reviewed in Section 1 of this paper. Instead of employing a selective analysis problem-solving model, our system employs a constructive synthesis approach to designing checklist review systems. Rather than establishing a pre-enumerated solution space, our approach attempts to incrementally generate a final conclusion from the collection of relevant information selected in the early phase of the problem-solving process. Constructing a conclusion without pre-enumerated solution paths is possible using cause-effect analysis. This approach is supported by cognitive scientists who have investigated medical doctors' decision-making processes. Patel and Groen (1986) conducted experimental research on the reasoning model employed in the diagnosis of acute bacterial endocarditis. In the protocol analysis, which was mainly propositional analysis, they found that all of the physicians with accurate diagnoses employed forward reasoning models, while the physicians with inaccurate diagnoses used backward reasoning and irrelevant rules. This result is consistent with other experiments by Larkin et al. (1980) and Kuipers and Kassirer (1984). Based on this result, Patel and Groen concluded that "our result indicated that it is not adequate" to employ backward chaining as in MYCIN (Shortliffe & Buchanan, 1975) and that "what they support is a model more like NEOMYCIN which makes extensive use of forward chaining but uses hypothesis testing as a backup" (1986, p. 108). In fact, it is more reasonable to assume that physicians collect medical evidence, even partially, and build up their conclusions than to assume that they first establish a specific hypothesis and then attempt to prove it. Likewise, in the business review process, human experts do not start with specific hypotheses; rather, they collect the relevant evidence (information) available and then attempt to establish consistent conclusions.
5.2. Contributions
Though the red-flagging model was mainly motivated by and developed for internal control evaluation system design, the model is applicable to designing a range of business review systems: credit card application review, loan application review, college admission application review, and so on.
[Figure 7 is a table of ICE1's performance test results. For each of the 29 test cases it lists the business type and, for each of the eight transaction cycles, the audit risk probability of the control objective together with the auditor's and the review system's evaluation levels, followed by a compatibility ratio for the case.]
FIGURE 7. Results of ICE1's performance test. Note: 1. The numbers in italics indicate the audit risk probability of the control objective in a specific transaction cycle of the client's internal control system; the probabilities were provided by experienced auditors. 2. The numbers in roman type indicate the auditor's evaluation level of the control objective (1 = excellent, 2 = good, 3 = moderate, 4 = bad, 5 = very bad); the numbers in roman type in parentheses indicate the review system's evaluation level.
view, loan application review, college admission application review, and so on. From the epistemological perspective, the proper implementation of the red-flagging model may extend to other problem solving which
requires professional knowledge but which are combinatorially explosive by repeating the principle: pay attention to only relevant items which are out of normal bounds.
[Figure 7 continues here.]
FIGURE 7. (Continued). Note: as on the preceding page, the numbers in italics indicate the audit risk probability of each control objective (provided by experienced auditors), and the numbers in roman type indicate the auditor's evaluation level (1 = excellent, 2 = good, 3 = moderate, 4 = bad, 5 = very bad), with the review system's evaluation level in parentheses.
REFERENCES

AICPA. (1980). AICPA audit and accounting manual. New York: American Institute of Certified Public Accountants.
AICPA. (1983). Statement on auditing standards no. 47: Audit risk and materiality in conducting an audit. New York: American Institute of Certified Public Accountants.
Ashton, R.H. (1974, Spring). An experimental study of internal control judgements. Journal of Accounting Research, 12, 143-157.
Ashton, R.H., & Kramer, S.S. (1980, Spring). Students as surrogates in behavioral accounting research: Some evidence. Journal of Accounting Research, 1-15.
Bailey, A.D., Jr., Gerlach, J., McAfee, R.P., & Whinston, A.B. (1981a, May). Internal accounting controls in the office of the future. IEEE Computer, 59-70.
Bailey, A.D., Jr., Gerlach, J., McAfee, R.P., & Whinston, A.B. (1981b, Summer). An application of complexity theory to the analysis of internal control systems. Auditing: A Journal of Practice & Theory, 38-52.
Bailey, A.D., Jr., Gerlach, J., McAfee, R.P., & Whinston, A.B. (1983, January). An OIS model for internal accounting control evaluation. ACM Transactions on Office Information Systems, 25-44.
Bailey, A.D., Duke, G.L., Gerlach, J., Ko, C., Meservy, R.D., & Whinston, A.B. (1985, April). TICOM and the analysis of internal controls. The Accounting Review, 186-201.
Best, J.B. (1986). Cognitive psychology. St. Paul, MN: West Publishing.
Broadbent, D.E. (1958). Perception and communication. London: Pergamon Press.
Brown, R.G. (1962, November). Objective internal control evaluation. The Journal of Accountancy, 50-56.
Buchanan, B.G. (1982). New research on expert systems. In J.E. Hayes, D. Michie, & Y.H. Pao (Eds.), Machine Intelligence, 10, 269-299.
Chi, M.T.H., Feltovich, P.J., & Glaser, R. (1981, April-June). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121-152.
Chi, M.T.H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. Advances in the Psychology of Human Intelligence, 1(1), 7-75.
Choi, J.U. (1988). A constructive approach to building a knowledge-based internal control evaluation system. Unpublished doctoral dissertation, Dept. of Management Science, College of Business Administration, University of South Carolina.
Choi, J.U., & Yoo, K.H. (1988, August). A survey and critical review of current accounting expert systems. Paper presented at the annual meeting of the American Accounting Association, Florida.
Clancey, W.J. (1984). Classification problem solving. Proceedings of AAAI, 49-55.
Clancey, W.J. (1985). Heuristic classification. Artificial Intelligence, 289-350.
Deloitte, Haskins & Sells. (1985). Reference manual of control plan.
Dungan, C.W. (1982). A model of an audit judgement in the form of an expert system. Doctoral dissertation, University of Illinois at Urbana-Champaign.
Dungan, C.W., & Chandler, J.S. (1985, October). AUDITOR: A microcomputer-based expert system to support auditors in the field. Expert Systems, 2(4), 210-224.
Erman, L.D., Hayes-Roth, F., Lesser, V.R., & Reddy, D.R. (1980). The Hearsay-II speech understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys, 12(2), 213-253.
Ernst & Whinney. (1979). Evaluating internal control: A guide for management and directors.
Fox, M.S. (1981, January). An organizational view of distributed systems. IEEE Transactions on Systems, Man, and Cybernetics, 11(1), 70-80.
Gal, G.F. (1985). Using auditor knowledge to formulate data model constraints: An expert system for internal control evaluation. Doctoral dissertation, Dept. of Accounting, Michigan State University.
Garey, M.R., & Johnson, D.S. (1979). Computers and intractability: A guide to the theory of NP-completeness. New York: W.H. Freeman.
Georgeff, M.P. (1983). Strategies in heuristic search. Artificial Intelligence, 20, 393-425.
Grudnitski, G. (1986, January). A prototype of an internal control expert system for the sales/accounts receivable application. University of Texas at Austin Working Paper.
Hamilton, R.E., & Wright, W.F. (1983, Autumn). Internal control judgments and effects of experience: Replications and extensions. Journal of Accounting Research, 756-765.
Hansen, J.V., & Messier, W.F. (1982, October). Expert systems for decision support in EDP auditing. International Journal of Information Science, 357-379.
Hansen, J.V., & Messier, W.F. (1986, Fall). A preliminary investigation of EDP-XPERT. Auditing: A Journal of Practice & Theory, 109-123.
Hart, P.E., Barzilay, A., & Duda, R.O. (1986). Qualitative reasoning for financial assessments: A perspective. AI Magazine, 7(1), 62-68.
Hayes-Roth, B. (1985). A blackboard architecture for control. Artificial Intelligence, 26, 251-321.
Hayes-Roth, F., Waterman, D.A., & Lenat, D.B. (1983). Building expert systems. Reading, MA: Addison-Wesley.
Johnston, W.A., & Heinz, S.P. (1978). Flexibility and capacity demands of attention. Journal of Experimental Psychology: General, 107(4), 420-435.
Joyce, E.J. (1976). Expert judgement in audit program planning. Supplement to Journal of Accounting Research, 29-60.
Kolodner, J.L. (1983). Toward an understanding of the role of experience in the evolution from novice to expert. International Journal of Man-Machine Studies, 19, 497-518.
Krogstad, J.L., Ettenson, R.E., & Shanteau, J. (1981). Materiality judgment: The development of auditor expertise. Working Paper, Creighton University; recited from Rigsby (1986).
Kuipers, B., & Kassirer, J.P. (1984). Causal reasoning in medicine: Analysis of a protocol. Cognitive Science, 8, 363-385.
Kuo, H., & Choi, J.U. (1985, December). Expert system shell evaluation for the MEDCLAIM project. Institute of Information Management, Technology, and Policy, University of South Carolina.
Kuo, H. (1986). MEDCLAIM: An expert support system for medical claims review. Master's thesis, Department of Computer Science, University of South Carolina, Columbia.
Larkin, J., McDermott, J., Simon, D.P., & Simon, H.A. (1980, June). Expert and novice performance in solving physics problems. Science, 208(4450), 1335-1342.
Lenat, D.B. (1982). The nature of heuristics. Artificial Intelligence, 189-249.
Meigs, W.B., Whittington, O.R., & Meigs, R.F. (1985). Principles of auditing (8th ed.). Richard D. Irwin.
Messier, W.F., Jr. (1983, Autumn). The effects of experience and firm type on materiality/disclosure judgments. Journal of Accounting Research, 611-618.
Messier, W.F., & Hansen, J.V. (1984). Expert systems in accounting and auditing: A framework and review. In S. Moriarity & E. Joyce (Eds.), Decision making and accounting: Current research, 182-213.
Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 81-97.
Newell, A., & Simon, H.A. (1976, March). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113-126.
Nii, H.P. (1986a, Summer). Blackboard systems: The blackboard model of problem solving and the evolution of blackboard architectures. AI Magazine, 38-53.
Nii, H.P. (1986b, August). Blackboard systems: Blackboard application systems, blackboard systems from a knowledge engineering perspective. AI Magazine, 82-106.
O'Leary, D.E. (1987). The use of artificial intelligence in accounting. In B.G. Silverman (Ed.), Expert systems for business, 82-98.
Patel, V.L., & Groen, G.J. (1986). Knowledge based solution strategies in medical reasoning. Cognitive Science, 10, 91-116.
Plumlee, R.D. (1985, Autumn). The standard of objectivity for internal auditors: Memory and bias effects. Journal of Accounting Research, 23(2), 683-699.
Reckers, P.M., & Taylor, M.E. (1979, Fall). Consistency in auditor evaluations of internal accounting controls. Journal of Accounting, Auditing and Finance, 42-44.
Rigsby, J.T. (1986, August). An analysis of the effects of complexity on the materiality decisions of auditors. Doctoral dissertation, Dept. of Accounting, Memphis State University.
Rychener, M.D., Alcantara, R.B., & Subrahmanian, E. (1986). A rule-based blackboard kernel system: Some principles in design. IEEE Workshop on Principles of Knowledge-Based Systems, 59-64.
SAS. (1973). Statements on auditing standards. New York: American Institute of Certified Public Accountants.
Shortliffe, E.H., & Buchanan, B.G. (1975). A model of inexact reasoning in medicine. Mathematical Biosciences, 351-379.
Simon, H.A. (1981). Models of man. New York: Wiley.
Simon, H.A. (1981). The sciences of the artificial (2nd ed.). Cambridge, MA: MIT Press.
Simon, H.A. (1983). Search and reasoning in problem solving. Artificial Intelligence, 7-29.
Smolensky, P. (1986). Information processing in dynamical systems: Foundations of harmony theory. In D.E. Rumelhart, J.L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing. Cambridge, MA: MIT Press, 194-281.
Sowa, J.F. (1984). Conceptual structures: Information processing in mind and machine. Addison-Wesley.
Steinbart, P.J. (1987, January). The construction of a rule-based expert system as a method for studying materiality judgments. Accounting Review, LXII(1), 97-116.
Touche Ross. (1978). The Touche Ross audit process manual.
Treisman, A.M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136.
Weber, R. (1980, Spring). Some characteristics of the free recall of computer controls by EDP auditors. Journal of Accounting Research, 214-241.
White, A.P. (1986). Probabilistic induction by dynamic path generation in virtual trees. In M.A. Bramer (Ed.), Research and development in expert systems III. Cambridge University Press, 35-46.