Constructing a reassigning credit scoring model

Chun-Ling Chuang a, Rong-Ho Lin b,*

Expert Systems with Applications 36 (2009) 1685–1694
www.elsevier.com/locate/eswa

a Department of Information Management, Kainan University, No. 1, Kainan Road, Luzhu, Taoyuan 33857, Taiwan, ROC
b Department of Industrial Engineering & Management, National Taipei University of Technology, No. 1, Section 3, Chung-Hsiao East Road, Taipei 106, Taiwan, ROC

Abstract

Credit scoring model development has become an important issue as the credit industry faces intense competition and growing bad-debt problems. Consequently, credit scoring models have been widely studied in the statistics literature over the past few years to improve their accuracy. To solve the classification problem and decrease the Type I error of credit scoring, this paper presents a reassigning credit scoring model (RCSM) involving two stages. The classification stage constructs an ANN-based credit scoring model that classifies applicants as having accepted (good) or rejected (bad) credit. The reassign stage reduces the Type I error by moving rejected applicants who actually have good credit into a conditionally accepted class, using a CBR-based classification technique. To demonstrate its effectiveness, RCSM is applied to a credit card dataset obtained from the UCI repository. The results indicate that the proposed model not only scores credit more accurately than four other commonly used approaches, but also contributes to increased business revenue by decreasing the Type I and Type II errors of the scoring system.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Credit scoring model; MARS; ANNs; CBR; Type I error

1. Introduction

The credit industry has expanded rapidly over the past few years. Owing to intense competition among card-issuing banks, more and more people can easily obtain a credit card without careful examination of their credit by the banks. This reckless expansion policy has increased the delinquency rate at the banks. According to a statement issued in 2006 by the Financial Supervisory Commission (FSC) of Taiwan, the delinquency rate rose from 1.36% in the second quarter of 2004 to 6.88% in the second quarter of 2006. The objective of credit scoring models is to help banks find good credit applicants, who are likely to meet their obligations, according to attributes such as age, credit limit, income and marital status. Many different credit scoring models have been developed by banks and researchers to solve such classification problems, such*

* Corresponding author. Tel.: +886 28811231x3414; fax: +886 27763964. E-mail address: [email protected] (R.-H. Lin).
0957-4174/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2007.11.067

as linear discriminant analysis (LDA), logistic regression (LR), multivariate adaptive regression splines (MARS), classification and regression trees (CART), case-based reasoning (CBR), and artificial neural networks (ANNs). LDA, LR and ANNs are the methods most commonly used to construct credit scoring models. LDA was the earliest technique used for credit scoring. However, its use has often been criticized because it assumes a linear relationship between input and output variables, which seldom holds, and because it is sensitive to deviations from the multivariate normality assumption (West, 2000). In addition to LDA, LR is another common alternative for credit scoring tasks. The LR model emerged as a technique for predicting dichotomous outcomes and does not require the multivariate normality assumption. However, both LDA and LR assume that the relationships between variables are linear, which makes them less accurate in credit scoring. In handling credit scoring tasks, ANNs became a new alternative and have proved more accurate than LDA and LR. However, ANNs are also criticized for their long training

process required to obtain the optimal network, the difficulty of identifying the relative importance of potential input variables, and certain interpretive difficulties. ANNs therefore have limited applicability in handling general classification and credit scoring problems (Piramuthu, 1999). In addition to the above-mentioned techniques, MARS, a commonly used classification technique, has proved to be a good supporting tool for neural networks, as its advantages can overcome the shortcomings of neural networks (Lee & Chen, 2005). Besides its use in classification models, CBR is also widely applied in credit scoring models. CBR can use past experience to find a solution and update its database, increasing its ability to solve complex and unstructured decision-making problems (Shin & Han, 1999; Wheeler & Aitken, 2000). Many studies have contributed to increasing the accuracy of classification models with various methods (Ong, Huang, & Tzeng, 2005; Lee & Chen, 2005; Huang, Tzeng, & Ong, 2006). However, most previous studies have concentrated only on building a more accurate credit scoring or behavioral scoring model. Even when such scoring models are accurate, misclassification patterns such as Type I or Type II errors can emerge. The purpose of this paper is to explore the performance of an ANN-based credit scoring model and to reassign rejected applicants with good credit to the preferable accepted class using a CBR-based classification technique. The additional accepted credit applicants produced by the reassign classification system can be a significant benefit to decision makers, contributing to increased business revenue and a reduced Type I error of the scoring system. The rest of the paper is organized as follows. Credit scoring literature is reviewed in Section 2. Section 3 gives a brief outline of commonly used techniques for building credit scoring models. The empirical results of the five built models and the methodology of RCSM are presented in Section 4. Finally, Section 5 addresses the conclusions and summarizes the study results.

2. Literature review

In this section, six common techniques for building credit scoring models are discussed. The first two, LDA and LR, are mostly used for classification problems in statistics. The other four, CART, MARS, ANNs and CBR, are known for their excellent performance in machine learning. LDA was first proposed by Fisher (1936) as a classification technique. It has been reported as the technique most commonly used for handling classification problems (Lee, Sung, & Chang, 1999). In the simplest type of LDA, two-group LDA, a linear discriminant function (LDF) that passes through the centroids (geometric centres) of the two groups can be used to discriminate between them. The LDF is represented by

LDF = a + b1x1 + b2x2 + ... + bpxp    (1)

where a is a constant, and b1 to bp are the regression coefficients for the p variables. LDA has been applied in a considerably wide range of areas, such as business investment, bankruptcy prediction, and market segmentation (Kim, Kim, Kim, Ye, & Lee, 2000; Lee, Jo, & Han, 1997; Trevino & Daniels, 1995). LDA has also been used by Bardos (1998) and Desai, Crook, and Overstreet (1996) in building credit scoring models. LR is a widely used statistical technique in which the probability of a dichotomous outcome is related to a set of potential independent variables. The LR model does not necessarily require the assumptions of LDA; moreover, Harrell and Lee (1985) found that LR is as efficient and accurate as LDA even when the assumptions of LDA are satisfied. If there are two groups (e.g. accept and reject), binary logistic regression (BLR) is used. The probability p1 of an object belonging to group 1, relative to the probability p2 of it belonging to group 2, is given by

ln(p1/p2) = b0 + b1x1 + b2x2 + ... + bnxn    (2)

where p1/p2 is called the odds ratio and ln(p1/p2) the logit transform of p1; xn is the nth predictor variable and bn its coefficient. In this equation, the logit transform relates the probabilities of group membership to a linear function of the predictor variables. LR models have been widely discussed in social research, bankruptcy prediction, and market segmentation (Kay, Warde, & Martens, 2000; Laitinen & Laitinen, 2000; Suh, Noh, & Suh, 1999). LR has also been explored by Laitinen (1999) in building credit scoring models.

CART, a statistical procedure introduced by Breiman, Friedman, Olshen, and Stone (1984), is primarily used as a classification tool, with the objective of classifying an object into two or more categories. A CART analysis generally consists of three steps. In the first step an overgrown tree is built that closely describes the training set. This tree, called the maximal tree, is grown using a binary split procedure. In the next step the overgrown tree, which shows overfitting, is pruned; during this procedure a series of less complex trees is derived from the maximal tree. In the final step, the tree with the optimal size is selected using a cross-validation (CV) procedure. The maximal tree is built using a binary split procedure that starts at the tree root, which consists of all objects of the training set. At each level, a mother group is split into two exclusive daughter groups, and in the next step every daughter group becomes a mother group. Every split is described by one value of one descriptor, chosen so that all objects in a daughter group have more similar response variable values. The split for continuous variables is defined by "Xi < Aj", where Xi is the selected explanatory variable and Aj its split value (shown in Fig. 1).

Fig. 1. General structure of a CART model.

CART has been widely discussed in science, weather forecasting, social research, and medical research (Davis, Elder, Howlett, & Bouzaglou, 1999; Fu, 2004; Kurt, Ture, & Kurum, 2008; Li, 2006). CART has also been used by Li (2006) and Lee, Chiu, Chou, and Lu (2006) in building credit scoring models.

MARS was first proposed by Friedman (1991) as a flexible procedure that models relationships which are nearly additive or involve interactions with fewer variables. The optimal MARS model is obtained by a two-stage process. In the first stage, MARS constructs a very large number of basis functions that initially overfit the data. In the second stage, basis functions are deleted in order of least contribution using the generalized cross-validation (GCV) criterion. MARS has been widely used in medical research, technique control and other related fields (Chou, Lee, Shao, & Chen, 2004; Xu, Massart, Liang, & Fang, 2003). MARS has also been explored by Lee and Chen (2005) in handling credit scoring problems.

ANNs have become useful for modeling non-stationary processes because of their associated memory characteristics and generalization capabilities. Hence, ANNs have been widely used in engineering, science, education, social and medical research, business, financial forecasting and other related fields (Stern, 1996; Vellido, Lisboa, & Vaughan, 1999; Zhang, Patuwo, & Hu, 1998). ANNs have also been explored by Arminger, Enache, and Bonne (1997), Barney, Graves, and Johnson (1999) and West (2000) in handling credit scoring problems. The majority of these references report that the credit scoring accuracy of ANNs is more reliable than that of LDA and LR. The typical framework of an ANN is shown in Fig. 2.

CBR systems are able to learn from sample patterns of credit card use and to classify new cases. This approach also promises to adapt to new patterns of credit applicants as they emerge. The five-step reasoning process of a CBR system is shown in Fig. 3. Numerous applications of CBR have been reported in accounting, portfolio management, decision support and related areas (Hansen, Meservy, & Wood, 1995; Mechitov, Moshkovich, Olson, & Killingsworth, 1995; O'Roarty, Patterson, McGrea, & Adair, 1997), including credit scoring (Wheeler & Aitken, 2000; Lee & Chen, 2005).

Fig. 3. Five-step reasoning process for CBR.

3. Research methodology

The purpose of this study is to present a reassigning credit scoring model (RCSM). The RCSM is divided into two phases. In the first phase, MARS is used to obtain significant input variables for the ANN model, in order to reduce the number of input nodes, simplify the network structure and shorten the model-building time. The ANN model then classifies credit applicants into either a good or a bad credit group, represented as "1" and "0", respectively. Applicants rejected by the ANN are re-evaluated in the reassigning stage, which applies CBR to compare similarities between rejected applicants and a CBR database containing both good and bad cases. If the value of SG (the similarity retrieved from the good-applicant database) is higher than that of SB (the similarity retrieved from the bad-applicant database), the rejected applicant is reassigned to the conditionally accepted class, reducing the Type I error and enlarging the banking portfolio; otherwise, the applicant is assigned to the rejected group. The process of the reassigning credit scoring model is shown in Fig. 4.
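The two-phase decision flow just described can be sketched as follows. The three model callables are hypothetical stand-ins, not the paper's fitted MARS/ANN/CBR components; an ANN score accepts or rejects an applicant, and a rejected applicant is reassigned when the good-case similarity SG exceeds the bad-case similarity SB.

```python
# Hedged sketch of the RCSM decision flow: `ann_score`,
# `similarity_to_good` and `similarity_to_bad` are placeholder
# callables standing in for the trained models.

def rcsm_decision(applicant, ann_score, similarity_to_good,
                  similarity_to_bad, threshold=0.5):
    """Return 'accept', 'conditional accept', or 'reject'."""
    if ann_score(applicant) >= threshold:   # classification stage (ANN)
        return "accept"
    sg = similarity_to_good(applicant)      # reassign stage (CBR)
    sb = similarity_to_bad(applicant)
    return "conditional accept" if sg > sb else "reject"

# Toy illustration with constant stand-ins for the three models:
print(rcsm_decision({}, lambda a: 0.3, lambda a: 91.4, lambda a: 90.0))
```

With these stand-in values the applicant is rejected by the ANN but reassigned by the CBR comparison, mirroring the SG > SB rule above.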

Fig. 2. A typical framework of a neural network.

3.1. The classification stage of RCSM

3.1.1. Multivariate adaptive regression splines (MARS)

MARS, a nonlinear and non-parametric regression methodology, was first proposed by Friedman (1991) as a flexible procedure that models relationships which are nearly additive or involve interactions with fewer variables. MARS excels at finding optimal variable transformations


Fig. 4. The process of the reassigning credit scoring model: MARS selects significant variables as the input variables of the ANN; the ANN assigns applicants to either the good or the bad credit group; rejected applicants are re-evaluated by CBR and moved to the conditionally accepted class when SG > SB, otherwise rejected.

and interactions, as well as the complex data structures that often hide in high-dimensional data. The optimal MARS model is obtained by a two-stage process. In the first stage, MARS constructs a very large number of basis functions that initially overfit the data, where variables are allowed to enter as continuous, categorical, or ordinal (the formal mechanism by which variable intervals are defined), and they can interact with each other or be restricted to enter only as additive components. In the second stage, basis functions are deleted in order of least contribution using the generalized cross-validation (GCV) criterion. A measure of variable importance can then be assessed by observing the decrease in the calculated GCV when a variable is removed from the model. This process continues until the remaining basis functions all satisfy the pre-determined requirements. The GCV function can be expressed as follows:

GCV(M) = [ (1/N) Σ_{i=1}^{N} (yi − fM(xi))² ] / [ 1 − C(M)/N ]²    (3)

where N is the number of observations, yi are the data response values, C(M) is the cost-penalty measure of a model containing M basis functions, and the numerator measures the lack of fit of the M-basis-function model fM(xi) (Friedman, 1991).

3.1.2. Artificial neural networks (ANNs)

An ANN model involves constructing computers with architectures and processing capabilities that mimic certain processing capabilities of the human brain. An ANN model is composed of neurons, each of which receives inputs, processes them and delivers a single output. For this paper, the input variables are attributes such as the applicant's income and debt. The output of the network is the solution to a problem; in this paper, the output is a good or bad credit rating, with "1" for good credit and "0" for bad credit. Vellido et al. (1999) pointed out that nearly 80% of business applications using ANNs adopt the BPN training algorithm, so this study also uses the popular BPN to build the credit scoring model. As recommended by Zhang et al. (1998), a single hidden layer is sufficient to model any complex system, so the designed network has only one hidden layer. From a user's viewpoint, as illustrated in Fig. 5, a simple BPN consists of three layers: the input layer, the hidden layer and the output layer. The input layer processes the input variables and provides the processed values to the hidden layer. The hidden layer further processes these intermediate values and transmits the results to the output layer, which corresponds to the output variables of the BPN. The BPN is trained with a training sample; training involves repeatedly presenting the input layer with the training sample until the network is able to remember the output of most of the sample items. The BPN develops memory by identifying the relationship between the input variables and the output variables. If the network commits a mistake, the BPN algorithm starts at the output layer and propagates the error backward through the hidden layers. Therefore, besides the input variables and output variables, an optimal BPN also includes the optimal number of hidden neurons. For example, to evaluate credit card applicants, a BPN with the following features was developed:
(a) Eight input variables: status of existing checking account, duration in month, credit history, purpose, credit amount, saving account, present employment since, and other debtors/guarantors.

Fig. 5. Simple backpropagation network illustration (the eight input variables listed above feed one hidden layer, whose output neuron classifies good/bad credit applicants).

(b) Hidden nodes set to 2n, 2n ± 1, or 2n ± 2 (n denotes the number of input nodes), varied by the trial-and-error approach of Section 4.4.
(c) One output variable that takes two values: 1 for good credit and 0 for bad credit.
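A single-hidden-layer BPN of the kind described above can be sketched in a few lines of NumPy. The hidden width, learning rate, initialization and toy data below are illustrative assumptions, not the tuned values of Section 4.

```python
import numpy as np

# Minimal single-hidden-layer backpropagation sketch (sigmoid units,
# squared-error loss). Sizes follow the paper's setup (8 inputs, one
# output); everything else is an illustrative assumption.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden = 8, 16            # e.g. 2n hidden nodes for n = 8 inputs
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))

def forward(x):
    h = sigmoid(x @ W1)           # hidden layer activations
    return h, sigmoid(h @ W2)     # output in (0, 1): the credit score

def train_step(x, y, lr=0.1):
    """One backpropagation update for a single (x, y) pair."""
    global W1, W2
    h, out = forward(x)
    err = out - y                          # output-layer error
    d_out = err * out * (1 - out)          # propagate error backward
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, d_out)
    W1 -= lr * np.outer(x, d_hid)
    return ((err ** 2).item())

x = rng.random(n_in); y = 1.0              # toy "good credit" pattern
losses = [train_step(x, y) for _ in range(200)]
print(losses[0] > losses[-1])              # error shrinks on this pattern
```

Repeatedly presenting the same pattern drives the squared error down, which is the "memory" development described in the text.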

3.2. The reassign stage of RCSM

To reduce the Type I error of the scoring system, case-based reasoning (CBR) is applied in this paper to reassign the rejected credit applicants. CBR is a machine reasoning approach that adapts previous similar cases to solve new problems; its five-step reasoning process is as follows (Bradley, 1994):

1. Presentation: a description of the current problem is input into the system.
2. Retrieval: the system retrieves the closest-matching cases stored in the case base.
3. Adaptation: the system uses the current problem and the closest-matching cases to generate a solution to the current problem.
4. Validation: the solution is validated through feedback from the user or the environment.
5. Update: if appropriate, the validated solution is added to the case base for use in future problem solving.

Case retrieval searches the case base to select existing cases sharing significant features with the new case. Through the retrieval step, similar cases that are potentially useful for the current problem are retrieved from the case base; previous experience can thus be used or adapted for solutions to the current problem, and mistakes can be avoided. Kolodner (1993) stated that the importance of a dimension in judging similarity and the degree of match should be considered when building matching functions. Matching and ranking is the process of comparing two cases with each

other and determining their degree of match, then ordering cases according to the goodness of match or their usefulness for the application. Since one of the most obvious measures of similarity between two cases is distance, the nearest-neighbour approach is employed in this paper to compute the degree of similarity between the input case and the target case. A matching function of the nearest-neighbour method is described as follows:

DISab = sqrt( Σ_{i=1}^{n} wi (fiI − fiR)² )    (4)

where DISab is the matching function using the Euclidean distance between cases a and b, n is the number of features, fiI denotes the input case, fiR denotes a case in the CBR database, and wi is the importance weighting of feature i.

4. Empirical study

An academic dataset obtained from the UCI Repository of Machine Learning Databases is adopted herein to evaluate the predictive accuracy of the proposed credit scoring model and the capability of the CBR-based reassign classification. The data consist of a set of loans given to a total of 1000 credit card applicants. Among them, 600 applicants (420 good and 180 bad, preserving the ratio of good credit (1) to bad credit (0)) were randomly selected as the training sample, another 200 (140 good and 60 bad) are used to test the model, and the remaining 200 (140 good and 60 bad) are retained for validation. There are 20 independent variables in the dataset, such as the applicant's age, credit amount, credit history, employment, and housing. The dependent variable is the credit status of the customer, that is, good or bad credit.

4.1. Credit scoring results of LDA

The LDA credit scoring model is implemented using SPSS version 10.0 for Windows. Table 1 lists the results


Table 1
Credit scoring results using discriminant analysis

Actual class        Classified class
                    0 (Bad credit)    1 (Good credit)
0 (Bad credit)      45 (75%)          15 (25%)
1 (Good credit)     33 (23.6%)        107 (76.4%)

Overall % correct: 76%.

Table 3
Credit scoring results using classification and regression tree

Actual class        Classified class
                    0 (Bad credit)    1 (Good credit)
0 (Bad credit)      44 (73.3%)        16 (26.7%)
1 (Good credit)     29 (20.7%)        111 (79.3%)

Overall % correct: 77.5%.

of the LDA model and the proportion of applications classified correctly by the LDA model into the good and bad categories. For the validation sample, the LDA model accurately classified 76% of the 200 applications. Groupwise, the model correctly classified 75% of the 60 bad credit card applications and 76.4% of the 140 good credit card applications.

4.2. Credit scoring results of LR

The LR credit scoring model is implemented using SPSS version 10.0 for Windows. Table 2 lists the results of the LR model and the proportion of applications classified correctly by the LR model into the good and bad categories. For the validation sample, the LR model accurately classified 76.5% of the 200 applications. Groupwise, the model correctly classified 48.3% of the 60 bad credit card applications and 88.6% of the 140 good credit card applications.

4.3. Credit scoring results of CART

The CART credit scoring model is implemented using CART version 6.0. The credit scoring results of the validation sample are summarized in Table 3. From the results in Table 3, it is observed that the CART model accurately classified 77.5% of the 200 applications. Groupwise, the model correctly classified 73.3% of the 60 bad credit card applications and 79.3% of the 140 good credit card applications.

4.4. Credit scoring results of BPN

The BPN credit scoring model is implemented using NeuroShell version 2.0. In this case, there are 20 input nodes in the input layer and only one output node, the good or bad credit status of the applicant. Some guidelines for specifying the learning parameters, including the learning rate, momentum and maximum number of epochs, can be found in the literature (Freeman & Skapura, 1992). Generally, the learning rate is set between 0.01 and 0.4, and the momentum between 0.8 and 0.99. The training of the network is implemented with various learning rates and training lengths ranging from 1000 to 10,000 iterations until the network converges. As determining the optimal number of hidden nodes is a crucial yet complicated issue, the most common approach is experimentation or trial and error, with the number of hidden nodes set to 2n, 2n ± 1, or 2n ± 2, where n denotes the number of input nodes (Hecht-Nielsen, 1990). When the learning rate, momentum and training epochs are set to 0.1, 0.9 and 3000, respectively, several NN architectures are evaluated, of which 20-39-1 is found to obtain the best results in Table 4. Accuracy rates for various learning parameters with the 20-39-1 architecture are summarized in Table 5. From the results in Table 6, it is observed that the

Table 4
Result of BPN with various network architectures

Architecture    Accuracy rate %
20-38-1         74.0
20-39-1         77.0
20-40-1         76.0
20-41-1         74.5
20-42-1         74.5

Table 5
Accuracy rate of various learning parameters for the 20-39-1 architecture

Learning rate    Momentum rate
                 0.8      0.9      0.99
0.01             79.0     75.5     74.5
0.03             75.5     76.0     78.5
0.05             75.5     76.5     77.5
0.07             75.5     77.0     76.5
0.09             76.5     74.5     77.5
0.1              77.0     77.0     79.0
0.2              75.0     78.5     76.5
0.3              78.0     79.0     78.0
0.4              79.5*    76.5     78.0

* Presents the best accuracy rate.
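The trial-and-error search over architectures and learning parameters behind Tables 4 and 5 amounts to a grid search. In the sketch below, `evaluate` is a hypothetical stand-in for "train a BPN with these settings and return its validation accuracy"; the stubbed accuracy table loosely mimics the reported optimum for the 20-input network.

```python
import itertools

# Grid search over hidden-node counts {2n, 2n±1, 2n±2}, learning rates
# and momenta. `evaluate(hidden, lr, momentum)` is a placeholder for a
# full BPN training run; here it is stubbed with a lookup table.

def grid_search(n_inputs, evaluate):
    hidden_options = [2 * n_inputs + d for d in (-2, -1, 0, 1, 2)]
    learning_rates = [0.01, 0.03, 0.05, 0.07, 0.09, 0.1, 0.2, 0.3, 0.4]
    momenta = [0.8, 0.9, 0.99]
    best = max(itertools.product(hidden_options, learning_rates, momenta),
               key=lambda cfg: evaluate(*cfg))
    return best

# Stub: pretend accuracy peaks at 39 hidden nodes, lr 0.4, momentum 0.8.
acc = {(39, 0.4, 0.8): 79.5}
print(grid_search(20, lambda h, lr, m: acc.get((h, lr, m), 75.0)))
# -> (39, 0.4, 0.8)
```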

Table 2
Credit scoring results using logistic regression

Actual class        Classified class
                    0 (Bad credit)    1 (Good credit)
0 (Bad credit)      29 (48.3%)        31 (51.7%)
1 (Good credit)     16 (11.4%)        124 (88.6%)

Overall % correct: 76.5%.

Table 6
Credit scoring results using BPN model

Actual class        Classified class
                    0 (Bad credit)    1 (Good credit)
0 (Bad credit)      33 (55%)          27 (45%)
1 (Good credit)     14 (10%)          126 (90%)

Overall % correct: 79.5%.


BPN model accurately classified 79.5% of the 200 applications. Groupwise, the model correctly classified 55% of the 60 bad credit card applications and 90% of the 140 good credit card applications.

4.5. Credit scoring results of the classification stage of RCSM

The MARS-BPN credit scoring model is implemented using MARS version 2.0 and NeuroShell version 2.0. The single-hidden-layer BPN model is again adopted in building the two-stage hybrid model. MARS obtains 8 significant variables from the 20 original variables, as shown in Table 7. The input layer of the proposed model contains the significant independent variables obtained by the MARS credit scoring model as its input nodes. The trial-and-error approach is again used to determine the appropriate number of hidden nodes for the desired networks (2n, 2n ± 1, 2n ± 2, where n denotes the number of input nodes). The training of the network is also implemented with various learning rates and training lengths ranging from 1000 to 10,000 iterations until the network converges. The network weights are also reset for each combination of the network parameters, such as learning rates (from 0.01 to 0.4) and momentum (from 0.8 to 0.99). Several NN architectures are evaluated, of which 8-16-1 is found to obtain the best results in Table 8, with the learning rate, momentum and training epochs set to 0.1, 0.9 and 3000, respectively. Accuracy rates for various learning parameters with the 8-16-1 architecture are summarized in Table 9. The credit scoring results of the validation sample are summarized in Table 10. From the results in Table 10, it is observed that the MARS-BPN model accurately classified 82.5% of the 200 applications. Groupwise, the model correctly classified 56.7% of the 60 bad credit card applications and 93.6% of the 140 good credit card applications.

Table 7
Variable selection results of MARS

Attribute                              Type           Importance %
Status of existing checking account    Qualitative    100
Credit history                         Qualitative    62.463
Duration in month                      Numerical      61.005
Purpose                                Qualitative    54.360
Savings account/bonds                  Qualitative    44.987
Other debtors/guarantors               Qualitative    31.021
Credit amount                          Numerical      28.989
Present employment since               Qualitative    25.939
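The importance percentages in Table 7 come from the GCV criterion of Eq. (3): a variable matters more when removing it raises the GCV more (Section 3.1.1). A sketch of the criterion itself follows; the paper does not spell out its cost-penalty measure C(M), so the form used below (one parameter per basis function plus a knot penalty) is an assumption for illustration only.

```python
import numpy as np

# Sketch of the GCV criterion of Eq. (3): lack of fit divided by a
# complexity penalty that grows with the number of basis functions M.
# The cost-penalty form C(M) = M + penalty * (M - 1) / 2 is an assumed
# illustrative choice, not taken from the paper.

def gcv(y, y_hat, m, n=None, penalty=3):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = len(y) if n is None else n
    c_m = m + penalty * (m - 1) / 2          # effective parameter count
    lack_of_fit = np.mean((y - y_hat) ** 2)
    return lack_of_fit / (1 - c_m / n) ** 2

# For the same fit, a model with more basis functions scores worse:
y = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]
print(gcv(y, [0.2, 0.8, 0.2, 0.8, 0.8, 0.2], m=2))
```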

Table 8
Result of BPN with various network architectures

Architecture    Accuracy rate %
8-14-1          78.0
8-15-1          76.0
8-16-1          81.5
8-17-1          76.0
8-18-1          78.5


Table 9
Accuracy rate of various learning parameters for the 8-16-1 architecture

Learning rate    Momentum rate
                 0.8      0.9      0.99
0.01             79.5     80.0     79.0
0.03             79.0     79.0     79.5
0.05             78.0     80.5     79.0
0.07             78.5     79.0     77.5
0.09             79.5     80.0     79.0
0.1              79.5     81.5     80.0
0.2              78.0     80.5     78.5
0.3              79.5     81.5     78.5
0.4              79.5     82.5*    76.5

* Presents the best accuracy rate.

Table 10
Credit scoring results using the proposed model

Actual class        Classified class
                    0 (Bad credit)    1 (Good credit)
0 (Bad credit)      34 (56.7%)        26 (43.3%)
1 (Good credit)     9 (6.4%)          131 (93.6%)

Overall % correct: 82.5%.

4.6. Credit scoring results of the reassign stage of RCSM

For credit scoring, the accepted class is the one preferable to applicants. If an application is rejected after credit evaluation, creditors can suggest conditional acceptance to make sure the result of the credit scoring system is correct and to increase business revenue. In this work, rejected applicants are reassigned to the conditionally accepted class through the CBR-based method provided that the SG value is higher than the SB value. The reassigning credit scoring model is implemented using CBR-Works version 4.0. Table 11 shows that, of the nine originally good applicants who were classified as rejected, all can be reassigned to the conditionally accepted class except applicant No. 287. Table 12 shows that only one originally bad applicant (No. 820) is reassigned to the conditionally accepted class. After CBR-based reassigning, only one originally bad credit applicant is reassigned, while eight originally good credit applicants are reassigned; as a result, the Type I error is almost eliminated (Table 13). Also, with nine reassigned applicants, the approval rate increases from 78.5% to 83%. Appropriate management strategies for these nine conditionally accepted applicants are approval with a lower credit limit and monitored usage on probation, for example, reminding them more frequently and monitoring their credit amounts to prevent them from overusing their credit. The creditor can consider increasing their credit limit according to their credit behavior after monitoring for a certain time.

4.7. Comparison between methods

In order to evaluate the classification capabilities of the five built credit scoring models, the credit scoring results of


Table 11
SG and SB of rejected bad credit applicants (original good credit applicants)

Applicants No.  SG     SB     Reassigned   Applicants No.  SG     SB     Reassigned
8               91.39  90.02  Yes          398             90.99  88.81  Yes
79              87.23  84.40  Yes          659             91.04  89.05  Yes
142             91.23  90.87  Yes          759             92.32  87.68  Yes
180             94.17  91.05  Yes          783             90.99  88.64  Yes
287             90.75  91.52  No
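The reassignment rule applied in Table 11 can be sketched directly; the following minimal Python illustration uses the SG/SB scores reported above (the variable and function names are ours, not the paper's):

```python
# Reassignment rule of the RCSM reassign stage: a rejected applicant is
# moved to the conditionally accepted class only when its similarity to
# the good-credit case base (SG) exceeds its similarity to the bad-credit
# case base (SB). Scores below are taken from Table 11.
rejected_good = {
    8: (91.39, 90.02), 79: (87.23, 84.40), 142: (91.23, 90.87),
    180: (94.17, 91.05), 287: (90.75, 91.52), 398: (90.99, 88.81),
    659: (91.04, 89.05), 759: (92.32, 87.68), 783: (90.99, 88.64),
}

def reassign(sg: float, sb: float) -> bool:
    """Conditionally accept a rejected applicant when SG > SB."""
    return sg > sb

reassigned = [no for no, (sg, sb) in rejected_good.items() if reassign(sg, sb)]
print(sorted(reassigned))   # → [8, 79, 142, 180, 398, 659, 759, 783]
print(287 in reassigned)    # → False (applicant 287 stays rejected)
```

Applicant No. 287 is the only one whose SG (90.75) does not exceed its SB (91.52), which matches the "No" entry in Table 11.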

Table 12
SG and SB of rejected bad credit applicants (original bad credit applicants)

Applicants No.  SG     SB     Reassigned   Applicants No.  SG     SB     Reassigned
5               87.91  91.20  No           596             88.18  89.98  No
16              92.44  93.51  No           707             90.59  91.98  No
19              89.18  91.34  No           732             90.12  90.61  No
36              91.05  91.92  No           740             87.70  89.00  No
132             87.78  89.46  No           756             90.56  92.88  No
185             88.91  89.45  No           767             91.26  91.81  No
192             85.17  89.01  No           772             88.12  89.77  No
237             86.90  86.98  No           776             91.64  94.10  No
316             90.47  92.30  No           806             89.95  91.71  No
360             92.88  94.22  No           815             89.58  92.03  No
379             88.61  91.07  No           820             92.51  92.22  Yes
476             92.27  92.51  No           829             89.50  90.58  No
492             86.06  88.13  No           886             91.78  93.56  No
523             85.65  90.88  No           920             90.61  90.63  No
529             91.93  93.17  No           925             86.22  87.31  No
539             87.61  90.42  No           974             85.24  89.36  No
557             89.45  90.63  No           982             85.90  87.23  No
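The paper computes the SG and SB similarity scores with the CBR-Works tool, whose internal measure is not reproduced here. Purely as an illustration of the kind of nearest-neighbour retrieval commonly used in CBR systems, one could score an applicant against a good-credit and a bad-credit case base as below; the feature weights, distance measure, and toy case bases are hypothetical, not those of the paper:

```python
import math

# Hypothetical sketch: SG (SB) is taken as the similarity of the query
# applicant to the nearest case in the good (bad) case base, scaled to
# [0, 100]. Features are assumed normalised to [0, 1]; weights are made up.

def similarity(case: list[float], query: list[float], weights: list[float]) -> float:
    """Weighted inverse-distance similarity in (0, 100]."""
    dist = math.sqrt(sum(w * (c - q) ** 2 for w, c, q in zip(weights, case, query)))
    return 100.0 / (1.0 + dist)

def score(query, case_base, weights):
    """Similarity to the nearest case in the base (1-NN retrieval)."""
    return max(similarity(c, query, weights) for c in case_base)

good_cases = [[0.9, 0.8, 0.1], [0.7, 0.9, 0.2]]   # toy good-credit cases
bad_cases = [[0.2, 0.3, 0.9], [0.1, 0.4, 0.8]]    # toy bad-credit cases
weights = [0.5, 0.3, 0.2]                          # hypothetical feature weights

query = [0.8, 0.7, 0.2]
sg, sb = score(query, good_cases, weights), score(query, bad_cases, weights)
print(sg > sb)   # → True: this applicant would be conditionally accepted
```

Under this toy measure, as in Tables 11 and 12, the reassignment decision reduces to comparing the two retrieval scores.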

Table 13
Credit scoring results of the second phase of RCSM

                     Classified class
Actual class         0 (Bad credit)   1 (Good credit)
0 (Bad credit)       33 (55%)         27 (45%)
1 (Good credit)      1 (0.71%)        139 (99.29%)

Overall % correct: 86%.

Table 15
Cost matrix of Type I and Type II errors

                     Classified class
Actual class         0 (Bad credit)   1 (Good credit)
0 (Bad credit)       0                +5 Cost units
1 (Good credit)      −8 Cost units    0
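The figures in Tables 13 and 15 follow directly from the confusion matrix; the short Python check below reproduces them (the counts are taken from Table 13, and the per-reassignment cost changes from the discussion in the conclusions):

```python
# Confusion matrix of the second phase of RCSM (Table 13):
# rows = actual class, columns = classified class (bad, good).
tn, fp = 33, 27    # actual bad:  classified bad / classified good
fn, tp = 1, 139    # actual good: classified bad / classified good
total = tn + fp + fn + tp

type1_error = fn / (fn + tp)    # good applicants wrongly rejected
type2_error = fp / (fp + tn)    # bad applicants wrongly accepted
accuracy = (tn + tp) / total

print(f"{type1_error:.2%}")   # → 0.71%
print(f"{type2_error:.0%}")   # → 45%
print(f"{accuracy:.0%}")      # → 86%

# Net cost change from the nine reassignments (Table 15 logic): one bad
# applicant accepted (+5 units) against eight good applicants recovered
# at one unit each (-8 units), i.e. a net saving of 3 cost units.
net_cost_change = 1 * 5 + 8 * (-1)
print(net_cost_change)        # → -3

# Approval rate rises from 78.5% to 83% with the nine reassignments.
approved_before = tp + fp - 9          # 157 of 200 before reassignment
print(approved_before / total, (approved_before + 9) / total)
```

The same arithmetic confirms the reported approval-rate increase: 157/200 = 78.5% before reassignment and 166/200 = 83% after.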

the validation samples are summarized in Table 14. The results show that the proposed model, RCSM, not only outperforms the commonly used linear discriminant analysis, logistic regression, CART, and neural network credit scoring models in the classification stage (MARS combined with ANNs), but also provides an efficient alternative for conducting credit scoring tasks in the reassign stage (MARS, ANNs, and CBR combined).

Table 14
Credit scoring results of the constructed models

                           Classified class    Type I    Type II   Accuracy
Methods                    0–0       1–1       error %   error %   rate %
LDA                        45        107       24        25        76.0
LR                         29        124       11        52        76.5
CART                       44        111       21        27        77.5
ANNs                       33        126       10        45        79.5
RCSM (MARS + ANNs)         34        131       6.4       43        82.5
RCSM (MARS + ANNs + CBR)   33        139       0.71      45        86.0

5. Conclusions

Credit scoring has become more and more important as competition between financial institutions has intensified. Credit scoring has therefore been one of the main application areas of classification problems during the past decade, and most companies are building better strategies with the help of credit scoring models. Hence, various modeling techniques have been developed for different credit evaluation processes to support better credit approval schemes.

This study compares five commonly used credit scoring approaches and demonstrates the advantages of MARS, ANNs, and CBR for credit analysis. The proposed RCSM is used to properly classify applications as either accepted or rejected, and thereby to minimize the creditors'


risk and translate considerably into future savings. The CBR-based classification technique reassigns rejected applications to the preferable accepted class, which almost eliminates the Type I error and increases the approval rate of credit card applications. With a proper management strategy, the conditionally accepted applications will contribute to increased revenue.

In general, the misclassification costs associated with Type II errors are higher than those associated with Type I errors (about 5 to 1): it is worse to classify a customer as good when they are bad than to classify a customer as bad when they are good. From the computational results of RCSM, only one original bad applicant is reassigned to the conditionally accepted class, so the Type II error cost increases by five units; however, with eight original good applicants reassigned to the conditionally accepted class, the Type I error cost decreases by eight units. Table 15 shows the cost matrix. These results show that the proposed approach has attractive features for a computer-aided credit analysis system.

Further research may aim at time-series credit scoring models that include the change of credit status in every period. By using the proposed methodology with a time-series credit scoring model, credit applicants can be segmented into more subgroups based on new variables, and more detailed management strategies for the customers in these subgroups can be considered.