Identifying significant model inputs with neural networks: Tax court determination of reasonable compensation

Chris Bjornson, Douglas K. Barney*
Indiana University Southeast, 4201 Grant Line Road, New Albany, IN 47150, USA

Expert Systems with Applications 17 (1999) 13–19

Abstract

Neural networks have much to offer academic researchers and business practitioners. For example, recent research has shown that neural networks can classify and predict as well as traditional statistical methods such as ordinary least squares (OLS). Neural networks are limited, however, in that they do not provide measures of significance for individual inputs, as OLS and other methods do. Once neural networks overcome this limitation, their range of applications will increase dramatically and they will become more valuable to academe and practitioners. This study compares the abilities of OLS and neural networks, each used in conjunction with the Wilcoxon signed-ranks test, to identify significant model inputs. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Neural networks; Wilcoxon signed-ranks; Ordinary least squares model

1. Introduction

The use of artificial intelligence, especially neural networks, in accounting is a rapidly growing area. Academics and international public accounting firms alike cite the usefulness of neural networks. Neural networks offer some distinct advantages over more traditional statistical methods: they can cope with missing or ambiguous data, and they are readily adaptable to different settings or to changes in data. An increasing number of accounting researchers are using neural networks. Many research articles describe what a neural network is and provide examples of potential accounting applications (e.g. Foltin & Garceau, 1996; Zarowin, 1995; Management Accounting: Magazine for Chartered Management Accountants, 1995; Etheridge & Brooks, 1994). Some articles compare the performance of neural networks and traditional statistical packages (e.g. Zhang & Fuh, 1996; Lenard, Pervaiz & Madey, 1995; Bansal, Kauffman & Weitz, 1993). Other articles provide examples of how neural networks may be used in particular accounting fields: taxation (Barney & Bjornson, 1997a), managerial (Horridge, 1997; Zhang & Fuh, 1996), financial (Kryzanowski & Galler, 1995; Bansal et al., 1993), and auditing (Lenard et al., 1995).

* Corresponding author. Tel.: +1 812-941-2532; fax: +1 812-941-2672.
E-mail addresses: [email protected] (C. Bjornson), [email protected] (D.K. Barney)

This article advances this research by applying neural networks to a tax setting and identifying which factors influenced the decision of the tax court. The study uses ordinary least squares (OLS) as a benchmark against which to compare the neural network results. The subject of this study is whether the tax court determined compensation paid to a corporate officer was reasonable and therefore deductible by the corporation as salary. Unreasonable salary would be considered a dividend and nondeductible by the corporation. The ability to identify significant determinative factors and predict the court's decision would be of great benefit to corporations. While a list of factors is available that are determinative of the court's decision, the relative weighting of these factors is unknown. This paper attempts to determine whether neural networks can identify these significant inputs better than a traditional OLS model.

2. Neural networks

Neural networks are intended to work in a manner similar to the synaptic processes of the human brain. The network processes input data (the human brain may receive such data from the body's sensory organs) through layers of nodes and connectors, evaluates the data and develops an output, much as the human brain evaluates input data and reaches a conclusion. Like the human brain, a neural network weights the data and considers the interrelations among inputs. Neural networks have a layer of input data, at least one hidden layer, and a layer of output data (the conclusion or prediction).

0957-4174/99/$ - see front matter © 1999 Elsevier Science Ltd. All rights reserved. PII: S0957-4174(99)00017-2

Fig. 1. A typical neural network.

Fig. 1 provides a diagram of a typical neural network. Each layer of the neural network has nodes (neurons). Connectors (with corresponding weights or values) attach each node in a layer to all nodes in the layers above and below. The input layer is the top layer in Fig. 1, with each node representing an independent variable. Connectors attach each node in the middle (hidden) layer to each node in the input and output layers. (The hidden layer is so called because there are no observable values for its nodes.) Most neural networks have a single node in the output layer, which provides the model output. This output may be dichotomous (e.g. representing default/no default in a typical study of loan decisions) or a value such as a percentage (as in this study).

The neural network "learns" the relationships between the inputs and the outputs through repeated exposure to the data set. The first step in training the model is to randomly divide the data set into two sets of observations (in-sample and out-of-sample). The neural network trains using the input data and actual output of the in-sample data set. Initially, the model randomly assigns weights to all of the connectors. After the model runs through all of the observations once, it adjusts the weights and runs the observations again. As the model runs through iteration after iteration, the number of observations classified correctly increases and the mean squared error (MSE) drops as the model becomes better at fitting the in-sample results: the more iterations the model runs, the better the in-sample fit. This process continues until the researcher is satisfied with the results. The true test of the usefulness of the neural network model comes with testing the model on the out-of-sample data set.
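The training procedure just described — a random in-/out-of-sample split, random initial connector weights, repeated passes that lower the in-sample MSE, then an out-of-sample test — can be sketched in a few lines. This is an illustrative NumPy reimplementation with synthetic data, not the Brainmaker package or the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 264 observations of 12 input factors and a
# fractional outcome in [0, 1] (the study's court-decision fractile).
X = rng.normal(size=(264, 12))
y = 1.0 / (1.0 + np.exp(-(X @ rng.normal(size=12))))

# Randomly split 75% in-sample (198) and 25% out-of-sample (66).
idx = rng.permutation(len(X))
train, test = idx[:198], idx[198:]

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 12 nodes and a single output node gives
# 12 x 12 + 12 x 1 = 156 connector weights, randomly initialised.
W1 = rng.normal(scale=0.1, size=(12, 12))
W2 = rng.normal(scale=0.1, size=(12, 1))

def forward(X):
    h = logistic(X @ W1)
    return h, logistic(h @ W2).ravel()

# Iterate over the in-sample set, adjusting weights after each pass so the
# in-sample MSE falls (plain gradient descent standing in for Brainmaker's
# proprietary training rule).
lr = 0.5
for epoch in range(500):
    h, p = forward(X[train])
    err = p - y[train]
    d_out = (err * p * (1 - p))[:, None]      # gradient at the output node
    d_hid = (d_out @ W2.T) * h * (1 - h)      # backpropagated to hidden layer
    W2 -= lr * h.T @ d_out / len(train)
    W1 -= lr * X[train].T @ d_hid / len(train)

# The true test of usefulness: performance on the out-of-sample set.
_, p_test = forward(X[test])
mad = float(np.mean(np.abs(p_test - y[test])))
print("out-of-sample MAD:", round(mad, 4))
```

The 156-weight count mirrors the fully connected 12-12-1 topology the paper describes; the learning rate and epoch count are arbitrary illustrative choices.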
Currently there are no firm rules, only heuristics, to guide the decision of when to stop the learning process. One heuristic is to stop the program when the decline in the MSE levels off. The potential problem with continuing to run the program past this point is that the model may become over-specified for the in-sample observations and lose the ability to predict the out-of-sample results. Over-specification is a lesser problem for studies with adequate sample size, especially when the numbers of input and hidden layer nodes are limited. In this study the sample size is several multiples of the numbers of input and hidden layer nodes, so over-specification should not be a problem.

Fig. 1 provides an example of a neural network. (The neural network of this study differed only in that it used 12 hidden layer nodes, the default setting of the software package.) The input layer of this study's network is the set of 12 data points for each in-sample observation, and the output layer is the actual outcome or percentage prediction of the model. There are therefore 156 total connectors in the neural network of this study (12 × 12 input-to-hidden, plus 12 × 1 hidden-to-output).

2.1. Strengths of neural networks

With the development of user-friendly neural network software packages, individuals can quickly, easily, and fairly inexpensively develop neural network models. These models are readily adaptable to many different settings, and users can update them for changes in data. The research literature indicates that neural networks are superior to other classification and prediction models. Researchers have compared several traditional statistical models with neural networks, including logit (Salchenberger, Cinar & Lash, 1992), probit (Dasgupta, Dispensa & Ghose, 1994), OLS (Barney, 1993), and discriminant analysis (Coats & Fant, 1993). In most cases, the neural network showed at least a minor improvement over the other models. In accounting research, one might expect neural networks to provide consistent and perhaps significant improvement over other models for two reasons:

1. Liang, Chandler, Han and Roan (1992) show that neural networks perform significantly better than probit or Iterative Dichotomizer 3 when nominal (categorical) input variables dominate the model. Accounting criteria often include nominal variables, and these nominal variables frequently dominate the model.
2. Neural networks can evaluate non-linear or discontinuous data (Kastens, Featherstone & Biere, 1995). Traditional statistical models assume distributional data properties (e.g. logit assumes the underlying data follow a logistic distribution, and OLS assumes a normal distribution). Accounting data may violate these distributional assumptions. Neural networks make no data distribution assumptions and will function well with any data distribution (Hill, Marquez, O'Connor & Remus, 1994).

2.2. Limitations of neural networks

There are several potential limitations to neural networks


of which researchers and practitioners should be aware. With small data sets, neural network models may become over-specified: the network classifies in-sample observations well but predicts poorly out-of-sample. An adequate sample size will mitigate or eliminate the possibility of over-specification. In addition, limiting the number of iterations can reduce this possibility. While there are no firm rules, some heuristics help avoid over-specification; one is to stop training when the neural network MSE no longer declines significantly. Using fewer hidden layer nodes also reduces the possibility of over-specification.

While there is no theory for determining the number of nodes to use in the hidden layer, Kastens et al. (1995) suggest (using the Kolmogorov (1963) mathematical existence theorem) that a single hidden layer model can implement a continuous mapping function perfectly with 2n + 1 hidden layer nodes (where n is the number of inputs). This seems contrary to the heuristic above that fewer nodes will avoid over-specification. With the sample size of this study (several times more observations than input nodes) and no more hidden layer nodes than input nodes, over-specification should not be a problem.

The neural network literature likewise provides no theory for the appropriate number of hidden layers. As the number of hidden layers increases, so does model flexibility when dealing with non-linear or discontinuous data (Kastens et al., 1995). However, Lawrence (1993) concludes that it is not necessary to use more than two hidden layers even with the most non-linear and discontinuous data.

While neural networks make excellent predictive models, they lack a theory or model for determining the relative impact of input variables on outcomes. This limitation is the focus of the current research.
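The stopping heuristic above — halt when the decline in MSE levels off — can be made concrete. The sketch below is not from the original study; `train_one_pass`, the tolerance, and the patience window are illustrative assumptions:

```python
def train_until_mse_levels_off(train_one_pass, tol=1e-5, patience=3,
                               max_epochs=10_000):
    """Run training passes until the MSE improvement stays below `tol`
    for `patience` consecutive epochs (a common early-stopping heuristic)."""
    prev_mse = float("inf")
    stalled = 0
    for epoch in range(max_epochs):
        mse = train_one_pass()        # one full pass over the in-sample data
        if prev_mse - mse < tol:
            stalled += 1
            if stalled >= patience:   # MSE has levelled off: stop before the
                break                 # model over-specifies in-sample
        else:
            stalled = 0
        prev_mse = mse
    return epoch + 1, mse

# Toy usage: a stand-in MSE sequence that decays geometrically toward 0.02.
mse_iter = iter(0.5 * (0.9 ** k) + 0.02 for k in range(10_000))
epochs, final_mse = train_until_mse_levels_off(lambda: next(mse_iter))
print(epochs, round(final_mse, 4))
```

Training stops well before `max_epochs` once per-epoch improvement drops below the tolerance, which is the behaviour the heuristic aims for.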

3. Methodology

This study compares the relative abilities of neural network and OLS prediction models. It improves on previous work by correcting a methodological flaw, thereby providing a more accurate and reliable comparison of these methods. The only existing published research on identifying significant inputs is Barney and Bjornson (1997b). In that study the authors used the Wilcoxon technique to identify significant variables from the neural network analysis but used a single OLS equation to determine which variables OLS identified as significant, and performed no further comparisons. This study carries the analysis further by applying the Wilcoxon procedure to both the OLS and NN methodologies and then comparing models built from the variables each method initially identified as significant.


3.1. Data

This study uses reasonable compensation data collected by Boyd (1977) from tax court cases. In each case the judge listed 12 variables (Mason Mfg. Co., 1949) that should be considered when deciding a reasonable compensation case. Boyd lists taxpayers' claims, IRS allowances, and the courts' determinations of reasonable compensation, along with information about each of the 12 variables listed in the Mason Mfg. Co. (1949) case. Some variables are cardinal and some are nominal (favoring the IRS, favoring the taxpayer, or not mentioned by the judge).

This study defines an observation as one taxpayer in one year. One court case may therefore result in several observations, as the IRS may challenge several taxpayers or several years of compensation in one case.

The decimal output of this study was based on three numbers: the taxpayer's calculation of reasonable compensation, the IRS calculation of reasonable compensation (which is lower than what the taxpayer claimed on the tax return), and the Court's determination of reasonable compensation. The Court agreed with the IRS, agreed with the taxpayer, or determined a number somewhere between the two. The numerator of the fraction was the difference between the Court's determination and the IRS calculation; the denominator was the difference between the taxpayer's and the IRS calculations.
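The dependent variable described above is simply the court's position within the IRS–taxpayer range; a minimal sketch (the dollar amounts are hypothetical, not drawn from Boyd's data):

```python
def court_fractile(taxpayer, irs, court):
    """Fraction of the IRS-taxpayer range awarded by the court:
    0.0 means the court agreed with the IRS, 1.0 with the taxpayer."""
    return (court - irs) / (taxpayer - irs)

# Hypothetical observation: the taxpayer claimed $100,000 as reasonable
# compensation, the IRS allowed $40,000, and the court settled on $85,000.
print(court_fractile(100_000, 40_000, 85_000))  # → 0.75
```

A value strictly between 0 and 1 indicates the court split the difference, which is why the study's output node predicts a percentage rather than a dichotomous class.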

3.2. OLS methodology

This study analyzes the significance of inputs using correlation analysis, OLS regression, neural networks, and Wilcoxon signed-ranks tests. OLS is one of the most common methodologies researchers have traditionally used, although it has theoretical limitations. This paper does not explore OLS methodology in detail, but provides the following synopsis and an explanation of its use in this study.

Numerous studies note that a wide range of methodologies yield similar predictive abilities. Researchers have found that OLS models predict as well as logit (e.g. Gessner, Kamakura, Malhotra & Zmijewski, 1988; Barney, 1993) or probit models (Noreen, 1988). Because OLS can model data for which the observed outcome is not dichotomous (logit and probit require a dichotomous observed outcome), this study compares neural networks with OLS. According to the research of Gessner et al. (1988) and Noreen (1988), OLS should provide an adequate benchmark for comparison with the neural network model.

The neural network of this study used the default settings of the Brainmaker Neural Network package from California Scientific Software Corporation. These included one hidden layer with 12 nodes, the logistic squashing function, and default settings for training and testing.
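As a rough illustration of the OLS benchmark (the study itself ran OLS in SPSS), a least-squares fit with an intercept can be sketched as follows, on hypothetical stand-in data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the in-sample data: 198 observations of the
# 12 input factors and the fractional court outcome in [0, 1].
X = rng.normal(size=(198, 12))
y = rng.uniform(size=198)

# OLS with an intercept: minimise ||[X 1] b - y||^2 for the coefficients b.
design = np.column_stack([X, np.ones(len(X))])
b, *_ = np.linalg.lstsq(design, y, rcond=None)
coefs, intercept = b[:-1], b[-1]

# Goodness of fit, analogous to the (unadjusted) R-square reported by SPSS.
resid = y - design @ b
r2 = 1.0 - resid.var() / y.var()
print("R-square:", round(float(r2), 3))
```

The coefficient vector plays the role of the B column in Table 3; per-coefficient t-statistics (as SPSS reports) would additionally require the standard errors from the inverse of the design's normal matrix.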


3.3. Statistical method

Table 1
Model variables

EMQ    Employees' qualifications
TIM    Timeliness and scope of employees' duties
SAL    Current year sales
COX    Complexity of the business
SGI    Ratio of salaries to gross income
SNI    Ratio of salaries to net income
ECO    Economic conditions
DNI    Ratio of dividends to net income
SAF    Comparison of salaries to other firms
SAM    Comparison of salaries to other employees
AVC    Average compensation in prior years
FOR    Formality and timing of corporate action
COURT  Dependent variable (fractile of the IRS–taxpayer range)

Table 2
Correlation analysis

Variable   Correlation with Court (significance)^a   Significant correlations with other independent variables
EMQ         0.315 (0.000)                            6
TIM        −0.129 (0.036)                            3
SAL        −0.048 (0.434)                            5
COX        −0.032 (0.610)                            4
SGI        −0.201 (0.001)                            6
SNI        −0.070 (0.255)                            6
ECO        −0.040 (0.516)                            5
DNI        −0.030 (0.622)                            3
SAF         0.132 (0.031)                            2
SAM        −0.115 (0.061)                            1
AVC         0.093 (0.131)                            5
FOR        −0.055 (0.369)                            3

^a Significance tests are two-tailed.

The statistical procedures of this study included the following (see Table 1 for variable definitions):

1. Randomly separate the 264 observations into in-sample and out-of-sample data sets. The in-sample data set contained 75% (198) of the observations and the out-of-sample data set contained 25% (66).
2. Train the neural network using the in-sample data set and test the model out-of-sample to develop out-of-sample prediction errors (differences between model-predicted and actual outcomes).
3. Drop one input factor and train and test the neural network with the remaining 11 input factors (a reduced model). Use the reduced model out-of-sample to produce predictions and prediction errors.
4. Compare the reduced and full models with the Wilcoxon signed-ranks test. Data for the Wilcoxon test were based on mean absolute deviations (MADs) between the out-of-sample predictions and the actual court outcomes. The researchers calculated MADs for the reduced networks and compared these with the MADs of the full model using the Wilcoxon test. The Wilcoxon test is a non-parametric test of matched pairs (as suggested by Gorr, Nagin & Szczypula, 1994) and is appropriate for this data because the study uses the same observations (court cases) for the full and reduced networks. The test assigns ranks based on the absolute value of the difference between the MADs, then signs each rank according to the sign of the original difference: positive when the full network performed better than the reduced network, negative when the reduced network outperformed the full network. The expected value is subtracted from the sum of the positive ranks and the result is divided by the standard deviation of the distribution; the resulting statistic is evaluated against the standard normal distribution. If the prediction errors differ significantly between the two networks, then omitting the variable significantly impaired the predictive ability of the neural network, and the variable is therefore significant.
5. Repeat steps 3 and 4 for each of the remaining 11 reduced models.
6. Repeat steps 2–5 for the OLS full and reduced model comparisons.
7. Develop reduced NN and OLS models in-sample (based on the variables each method found significant in the Wilcoxon tests above) and test these models out-of-sample.
8. Run traditional OLS in-sample using all 12 input variables. Test the model out-of-sample using only the

Table 3
OLS input factor significance (adjusted R-square = 0.216; F = 5.534; significance of F = 0.001)

Variable   B           Significance of t
EMQ         0.375      0.001
TIM        −0.159      0.001
SAL        −6.660E-9   0.343
COX         0.07986    0.220
SGI        −0.210      0.259
SNI        −0.06083    0.302
ECO        −0.152      0.014
DNI        −0.120      0.655
SAF         0.118      0.015
SAM         0.07773    0.031
AVC        −8.231E-7   0.503
FOR         0.01379    0.715
Constant    0.593      0.001

variables listed as significant by the OLS in-sample model.
9. Compare the out-of-sample performance of each pair of models (three pairwise comparisons among the reduced OLS/Wilcoxon, reduced OLS/SPSS, and reduced NN/Wilcoxon models).

The objective of this analysis is to determine the comparative abilities of OLS and neural networks to identify significant model inputs. The comparison statistics include the Wilcoxon test statistic, MAD, and t-tests.
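Step 4's Wilcoxon comparison, as described, reduces to ranking the paired absolute-error differences and normalizing the positive-rank sum. A sketch with hypothetical error vectors (not the study's data; the original evaluated the statistic by hand against the same normal approximation):

```python
import numpy as np

def wilcoxon_signed_rank_z(err_full, err_reduced):
    """Normal-approximation Wilcoxon signed-ranks statistic: rank the
    absolute paired differences, sum the positive ranks, subtract the
    expected value, and divide by the standard deviation."""
    d = np.asarray(err_reduced) - np.asarray(err_full)
    d = d[d != 0]                                 # drop zero differences
    n = len(d)
    ranks = np.abs(d).argsort().argsort() + 1.0   # ranks 1..n of |d|
    w_pos = ranks[d > 0].sum()    # reduced model worse -> positive rank
    mean = n * (n + 1) / 4.0                      # E[W+] under H0
    sd = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return (w_pos - mean) / sd

# Hypothetical out-of-sample absolute errors for the 66 test observations:
# the reduced model is consistently slightly worse than the full model.
rng = np.random.default_rng(2)
full = rng.uniform(0.1, 0.5, size=66)
reduced = full + rng.normal(0.05, 0.02, size=66)
z = wilcoxon_signed_rank_z(full, reduced)
print(round(z, 2))  # large positive z: the dropped variable mattered
```

The ranking shortcut (double `argsort`) assumes no tied absolute differences, which holds for continuous error data; a production implementation would average tied ranks.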

4. Results

Table 2 provides the numbers of significant correlations among the independent variables and the significance of the correlation between each independent variable and the dependent variable. Five input factors (EMQ, TIM, SGI, SAF, SAM) have significant (p < 0.10) correlations with the dependent variable, although none of these correlations is strong. EMQ has the highest correlation at 0.315; no other variable has an absolute correlation higher than 0.201.

Table 3 shows the results of the OLS analysis using the in-sample data. Two factors (EMQ and TIM) are highly significant (p < 0.001) and three other factors (ECO, SAF, and SAM) are significant (p < 0.05). The overall model has an adjusted R-square of 0.216 and an F statistic of 5.534, which is highly significant (p < 0.001): the model accounts for just over 21% of the variance in the court decisions. While the individual input factors were not highly correlated with the court decision, the model as a whole is significant in accounting for variance in court decisions.

Table 4
Wilcoxon input factor significance (*p(0.10) ≥ 1307; **p(0.05) ≥ 1363; ***p(0.01) ≥ 1470)

Variable   OLS        Nnet
EMQ        1339*      1363**
TIM        1332*      1266
SAL        1630***    1124
SGI        1297       1379**
COX        1228       1845***
SNI        1529***    1462**
ECO        1076       1212
DNI        1448**     1094
SAF         938       1183
SAM        1282       1296
AVC        1293       1048
FOR        1032       1257

Table 4 shows the results of the Wilcoxon analysis comparing the OLS and NN full models with the reduced models (each having 11 input variables). Using the Wilcoxon test with OLS, five variables were significant (p < 0.10): SAL and SNI at 0.01, DNI at 0.05, and TIM and EMQ at 0.10. Only two of these variables (EMQ and TIM) are among the five identified using the traditional OLS technique in SPSS. There are many inter-correlations in this data set, and multicollinearity in the input data leads to mis-specification of the t-statistics for the independent variables; this may explain the difference in the variables identified as significant.

The second column of Table 4 gives the results of the neural network analysis using the Wilcoxon technique. Four variables are significant (p < 0.05): COX at 0.01, and EMQ, SGI, and SNI at 0.05. No additional variables were significant between the 0.05 and 0.10 levels.

Table 5 summarizes the results of the correlation analysis, the OLS analysis using SPSS, the OLS analysis using the Wilcoxon technique, and the neural network analysis using the Wilcoxon technique.
Different analyses lead to different conclusions regarding significant inputs. A logical question arises: which set of "significant" variables does the best job of predicting the court's decision? Answering this question required constructing models using only the variables each analysis identified as significant (Table 5). The OLS/SPSS and OLS/Wilcoxon models therefore had five variables each and the NN/Wilcoxon model had four. After developing the three models in-sample, the researchers tested the models out-of-sample. These out-of-sample tests included comparisons of overall MADs for the models, t-tests of MADs by pairs of models, and Wilcoxon comparisons of MADs by pairs of models.

Table 6 provides the results of these further model analyses. The reduced OLS model using the Wilcoxon test had a MAD of 0.331, while the OLS model using the variables identified by SPSS had a MAD of 0.336. This difference is not significant (p = 0.716) using a matched-pairs t-test. The MAD for the neural network was 0.302, but this result was not significantly better than either OLS model using a matched-pairs t-test (p = 0.234 and 0.225, respectively). Using the Wilcoxon test (a matched-pairs test) for the


Table 5
Summary of significant input factors (blanks indicate a lack of significance)

Variable   Correlation   OLS/SPSS   OLS/Wilcoxon   NN/Wilcoxon
EMQ        0.01          0.01       0.10           0.05
TIM        0.05          0.01       0.10
SAL                                 0.01
COX                                                0.01
SGI        0.01                                    0.05
SNI                                 0.01           0.05
ECO                      0.05
DNI                                 0.05
SAF        0.05          0.05
SAM        0.10          0.05
AVC
FOR

comparison between the two OLS models resulted in a value of 1062, which is not significant (p(0.10) ≥ 1307). The comparison between the SPSS OLS model and the neural network is also not significant at 1229. The comparison between the reduced OLS model using the Wilcoxon test and the reduced neural network was marginally significant at 1308 (p(0.10) ≥ 1307).
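The matched-pairs t-tests reported above compare per-case absolute deviations for two models on the same 66 test observations. A sketch, using a normal approximation for the two-tailed p-value and hypothetical error vectors (not the study's data):

```python
import numpy as np
from math import erfc, sqrt

def paired_t(a, b):
    """Matched-pairs t statistic and two-tailed p-value for per-case
    absolute deviations of two models tested on the same observations.
    The p-value uses the normal approximation, adequate for n = 66."""
    d = np.asarray(a) - np.asarray(b)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    p = erfc(abs(t) / sqrt(2))      # 2 * (1 - Phi(|t|))
    return float(t), float(p)

rng = np.random.default_rng(3)
# Hypothetical absolute deviations for two models on the 66 shared cases,
# constructed so the two models have similar accuracy on average.
ols_mads = rng.uniform(0.1, 0.6, size=66)
nn_mads = ols_mads + rng.normal(0.0, 0.1, size=66)
t, p = paired_t(ols_mads, nn_mads)
print("t =", round(t, 2), " p =", round(p, 3))
```

Pairing by case is what makes the test sensitive: it compares each model on the identical court decision rather than pooling the two error distributions.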

5. Summary

This study is part of a continuing effort to improve the capabilities of neural networks by identifying significant model inputs with neural networks. Using Wilcoxon signed-ranks tests with both OLS and neural networks, and using traditional OLS, resulted in three different sets of input factors identified as significant predictive variables. Correlation analysis produced a fourth set.

It is interesting to note that the neural network identified the fewest significant model inputs (four) and identified these at the lowest levels of significance of any statistical model (three at 0.05 and one at 0.01). This result may be due to the abilities of neural networks to "fill in gaps" of missing data and to take advantage of input factor correlations. Researchers and theorists have proclaimed one of the strengths of neural networks to be their ability to compensate for missing data, at least partly by capitalizing on relationships among the inputs. The reduced models (each missing one input variable) may therefore not perform significantly differently from the full model because the neural network compensates for the missing input; in this way, neural networks may find fewer inputs to be significant.

Table 6
Model comparison results (*p(0.10) ≥ 1307; **p(0.05) ≥ 1363; ***p(0.01) ≥ 1470)

Model summary mean absolute deviations
OLS/SPSS       0.336
OLS/Wilcoxon   0.331
NN/Wilcoxon    0.302

t-tests (levels of significance) on individual MADs
               Neural network   OLS/Wilcoxon
OLS/SPSS       0.234            0.716
OLS/Wilcoxon   0.225

Wilcoxon tests on individual MADs
               Neural network   OLS/Wilcoxon
OLS/SPSS       1229             1062
OLS/Wilcoxon   1308*

The last part of the study compared the three statistical models using the reduced sets of input factors (five for each OLS model and four for the neural network model). The reduced neural network model was only slightly superior. As discussed above, the ability of the neural network to perform as well as either OLS model using only four input variables (identified as having lower levels of significance) may be evidence that the neural network can compensate for missing data and incorporate interaction effects beyond the reach of OLS.

While this study did not prove that neural networks are superior at identifying significant model inputs, it does show that neural networks, in conjunction with Wilcoxon signed-ranks tests, can identify significant inputs at least as well as other statistical methods. In this way, this study is a significant step forward in the research literature, addressing further uses of neural networks in accounting.

References

Bansal, A., Kauffman, R., & Weitz, R. (1993). Comparing the modeling performance of regression and neural networks as data quality varies: a business value approach. Journal of Management Information Systems, 10(1), 11–33.
Barney, D. K. (1993). Modeling farm debt failure: the Farmers Home Administration. PhD dissertation, University of Mississippi, Oxford.
Barney, D. K., & Bjornson, C. (1997a). The neural network: a new tool for tax research. Journal of Accounting and Finance Research, 4(1), 59–67.
Barney, D. K., & Bjornson, C. (1997b). Identifying significant model inputs with neural networks: a comparative study. Journal of Accounting and Finance Research, 4(2), 50–59.
Boyd, J. L. (1977). An empirical investigation of reasonable compensation determination in closely-held corporations. PhD dissertation, University of South Carolina, Columbia.
Coats, P. K., & Fant, L. F. (1993). Recognizing financial distress patterns using a neural network tool. Financial Management, Autumn, 142–155.
Dasgupta, C. G., Dispensa, G. S., & Ghose, S. (1994). Comparing the predictive performance of a neural network model with some traditional market response models. International Journal of Forecasting, September, 235–244.
Etheridge, H., & Brooks, R. (1994). Neural networks: a new technology. CPA Journal, 64(3), 36–44.
Foltin, C., & Garceau, L. (1996). Beyond expert systems: neural networks in accounting. National Public Accountant, 41(6), 26–33.
Gessner, G., Kamakura, W. A., Malhotra, N. K., & Zmijewski, M. E. (1988). Estimating models with binary dependent variables: some theoretical and empirical observations. Journal of Business Research, 16(1), 49–65.
Gorr, W., Nagin, D., & Szczypula, J. (1994). Comparative study of artificial neural network and statistical models for predicting student grade point averages. International Journal of Forecasting, 10(1), 17–34.
Hill, T., Marquez, L., O'Connor, M., & Remus, W. (1994). Artificial neural network models for forecasting and decision making. International Journal of Forecasting, June, 5–15.
Horridge, A. (1997). Neural networks controlling the quality and cost of manufacture. Management Accounting: Magazine for Chartered Management Accountants, 75(4), 56.
Kastens, T. L., Featherstone, A. M., & Biere, A. W. (1995). A neural networks primer for agricultural economists. Agricultural Finance Review, 55, 54–73.
Kolmogorov, A. N. (1963). On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. Doklady Akademii Nauk SSSR, 144, 679–681; American Mathematical Society Translation, 28, 55–59.
Kryzanowski, L., & Galler, M. (1995). Analysis of small-business financial statements using neural nets. Journal of Accounting, Auditing, and Finance, 10(1), 147–171.
Lawrence, J. (1993). Introduction to neural networks. Nevada City, CA: California Scientific Software Press.
Lenard, M. J., Pervaiz, A., & Madey, G. (1995). The application of neural networks and a qualitative response model to the auditor's going concern uncertainty decision. Decision Sciences, 26(2), 209–228.
Liang, T., Chandler, J. S., Han, I., & Roan, J. (1992). An empirical investigation of some data effects on the classification accuracy of probit, ID3, and neural networks. Contemporary Accounting Research, Fall, 306–328.
Management Accounting: Magazine for Chartered Management Accountants (1995). 73(4), 30.
Mason Mfg. Co. v. Commissioner of Internal Revenue, 178 F.2d 115, 119 (6th Cir. 1949).
Noreen, E. (1988). An empirical comparison of probit and OLS regression hypothesis tests. Journal of Accounting Research, 63(1), 119–133.
Salchenberger, L. M., Cinar, E. M., & Lash, N. A. (1992). Neural networks: a new tool for predicting thrift failures. Decision Sciences, 23, 899–916.
Zarowin, S. (1995). Thinking computers. Journal of Accountancy, 180(5), 55–56.
Zhang, Y. F., & Fuh, J. Y. H. (1996). Feature-based cost estimation for packaging products using neural networks. Computers in Industry, 32(1), 95–114.