Journal of Banking and Finance 1 (1977) 249-276. © North-Holland Publishing Company

EARLY WARNING OF BANK FAILURE: A logit regression approach

Daniel MARTIN*
Research Specialist, Federal Reserve Bank of New York, New York, NY 10045, U.S.A.

1. Bank failure and types of early warning models

The unsettled economic conditions of the past few years have given rise to much discussion, from both a theoretical and practical viewpoint, on how to measure the soundness of the commercial banking system. Large depositors and other uninsured creditors are interested in their risk of loss, and regulatory agencies in anticipating problem situations requiring their intervention. It is commonly thought that just as banks and other lending institutions examine the financial statements of prospective borrowers, the financial statements of banks themselves can be analyzed to assess the risk of bank failure. A number of firms offer credit reports on banks, for the benefit of large depositors.1

One of the responses to this increased need for risk measurement has been the application of statistical techniques to financial statement analysis. The objective is to construct an early warning model expressing the probability of future failure as a function of variables obtained from the current period's balance sheet and income statement. Typically, the model is estimated on past financial data; depending on the particular approach used, the past record of bank failures and/or bank examiners' ratings of banks may also be an input to the model.

There are various general approaches to the construction of an early warning model, and it is useful to compare them in the context of the question of just what constitutes a bank failure. Like any other enterprise, a bank fails if its net worth becomes negative, or if it is unable to continue its operations without incurring losses that would immediately result in negative net worth. However, most bank failure situations are resolved in ways that do not entail bankruptcy in the legal sense.

*The author gratefully acknowledges assistance provided by Delton L. Chesser of the University of Nebraska and Strother H. Walker of the University of Colorado, who supplied sample computer programs; and of Carole Chapman, Leo Abramowitz and Michael Fischer of the Data Services Function, Federal Reserve Bank of New York, who provided on-line access to nationwide bank financial statement data and helpful suggestions on how to work efficiently with such a large data base. The opinions in this paper are strictly those of the author and not of the Federal Reserve System.

1 'Now the Customers Check Out the Banks; So Do Other Banks', Wall Street Journal, Sept. 9, 1976.


Supervisory mergers, in which the failing bank is merged into a sound institution largely at the initiative of state or Federal regulatory agencies, are more frequent than actual insolvency and receivership, although, of course, even supervisory mergers are comparatively rare events. The Bank Merger Act of 1966 (12 U.S.C. 1828(c)) provides for emergency procedures by which a merger application can be approved without the usual delays and analysis of competitive factors, if the merger is necessary to prevent the failure of a bank. Even when emergency procedures are not involved, a merger may actually be supervisory in nature if the merged bank is seen as unable to survive on its own, although its demise was not so imminent as to call for emergency treatment. It is possible that in the absence of the merger, the bank might have worked out its problems; thus there is a gray area of mergers which may or may not have been equivalent to failures.2

The lack of a totally objective and exact criterion for identifying bank failures in the real world is not a serious problem for research in early warning. Nevertheless, every methodology used in early warning studies represents an attempt to solve or avoid this ambiguity, either by taking the real-world record of bank failures as given or by supplanting it with an abstract definition of failure.

The approach followed in the majority of studies, including the present one, may be termed ex post empirical: a group of actual failures is identified from individual case studies, and the characteristics of these banks one or more years prior to failure are compared with those of a group of banks which did not fail.3 A second methodology abstracts from the question of supervisory mergers by defining failure as the occurrence of negative net worth, and developing a priori probabilities of this event in terms of the observed probability distributions of earnings, loan losses, and other factors influencing the level of banks' capital accounts. This is an a priori, defined model: the measure of risk is independent of the historical record of actual failures, but nevertheless is defined as the probability of a specific event, based on an explicit theory of what causes that event to occur. The third approach, by contrast, is a priori, undefined: a concept of bank vulnerability is posited without reference to failure or any other specific event, and an arbitrary linear or quadratic function of the financial variables is assumed to be a measure of this undefined vulnerability. What this function actually measures, for the most part, is the individual bank's deviation from the average of a peer group of similar banks.

The ex post empirical approach includes a wide range of specific techniques for comparing the data of failed and non-failed banks.

2 Alternatively, a bank with very low or negative net worth occasionally is able to continue in business through extraordinary 'rescue' operations other than mergers, for instance acquisition by a bank holding company which supplies capital and other assistance. These rescue operations can be considered a form of reorganization and should be included with actual mergers or insolvencies.

3 One can, for the purposes of the study, define failures in the strict legal sense, thus avoiding the problem of correctly identifying supervisory mergers. But to include those banks which underwent supervisory mergers in the non-failure group is a misspecification of the model.


Perhaps the first study of this sort, that of Secrist (1938), used simple tabular and graphic comparison of one or two balance-sheet ratios at a time, with groups consisting of banks that failed at different time periods and banks that did not fail. Later researchers have used more sophisticated statistical techniques. Meyer and Pifer (1970) used ordinary least-squares linear regression, where the dependent variable is a 0-1 dummy variable with 1 representing failure; the fitted values of this dummy variable were interpreted as estimated probabilities of bank failure. Altman (1969) first used discriminant analysis for predicting bankruptcy of nonfinancial firms, later (1974) applying the same technique to savings and loan associations, while Stuhr and Van Wicklen (1974) and Sinkey (1975a) applied it to commercial banks. The present paper is the first application of logit analysis to the bank early warning problem, but Chesser (1974) applied a logit model to the similar problem of predicting noncompliance by commercial loan customers. What these statistical techniques have in common is that they essentially take the real-world classification into failure and nonfailure groups as a dependent variable, and attempt to explain the classification as a function of several independent variables. The independent variables are mostly, but not exclusively, ratios computed from the banks' regular financial statements. The same methodology has been used in the study of failure of firms in other industries.

A variant of the ex post empirical approach is to use supervisory agencies' judgment of bank condition, rather than actual failure, as the dependent variable. Each agency maintains a 'problem list' of banks within its jurisdiction which are considered substantially riskier than other banks and which will require corrective action by management. A majority of those banks which actually fail were considered problem banks for anywhere from several months to several years prior to failure. On the other hand, over 90 percent of the banks listed as problems do not fail, and most eventually return to non-problem status. The classification of banks as problems or non-problems is based on a combination of many objective and subjective factors, but is heavily influenced by bank examiners' evaluation of the loan portfolio and the quality of management. The statistical techniques used to compare failed and nonfailed banks are also applicable to the study of problem versus nonproblem banks; Stuhr and Van Wicklen (1974) and Sinkey (1975a) are examples of this type of study.

The empirical approach depends on the identification of those banks which failed, or became problem banks, in the past. This grouping of banks is then analyzed in the context of the independent variable data as of a different, earlier, historical period. The result is a function (regression equation, discriminant function, etc.) relating the probability of failure in period t to the independent variables as of period t-1. As with any other estimated model, its usefulness depends on the goodness of fit on the estimation data, and on the stability of the model over time. The methods used to test these properties of the model depend on the specific technique.


What are the implications of the choice between failure and problem status as a dependent variable? In spite of the ambiguity of some supervisory mergers, failure is an objective indication of a bank's inability to continue its operations, and is after all the event that banks and regulators are ultimately interested in preventing. A direct estimate of a bank's risk of failure seems more relevant to regulatory concerns than an estimate of a bank's risk of belonging to a group of banks whose risk of failure is in turn somewhat higher than other banks'. Yet the probability of becoming a 'problem bank' is such a once-removed estimate of risk, since the problem classification is much more inclusive than the group of actual failed banks. Another advantage of actual failure as a dependent variable is that the 'problem bank' classification depends heavily on subjective judgment; different agencies often differ in their opinions as to whether a bank is a 'problem' or not. If a statistical model estimated on problem and nonproblem banks indicates high risk for a bank which the examiners rate favorably, or vice versa, it is not always clear that the model is in error.

On the other hand, if 'problem' classification precedes actual failure, a model to predict this classification may provide a longer lead time for corrective action than would a model to predict failure. This argument must be used with care, since there is also a substantial lag between the time of a financial statement and the time that statement is used as an input in the bank rating. The existence of this lag is one of the principal motivations for developing a statistical early warning system in the first place.

It has been alleged that the use of actual failure as a dependent variable may lead to biased probability estimates, if the banks which actually fail are atypical of the banks having a relatively high probability of failure. Why this should be true is not clear, except that a substantial portion of failures are due to fraud rather than to the variables reported in the regular financial statements. But even fraud should not systematically bias an early warning model. If the incidence of such crimes is not correlated with any of the balance-sheet and income ratios, the overall predictive power of the regression is less than it would be if there were no bank failures due to defalcation, but the individual coefficients should not be biased. If, on the other hand, there is some correlation between bankers' criminal tendencies and one of the ratios, then inclusion of this ratio among the independent variables should affect the coefficient of that variable precisely because it is a useful proxy for the probability of fraud.

One might, however, expect that the 'closeness of fit' of a model whose dependent variable is actual failure would be poorer than that of a model based on problem classification. This is because supervisory rating of banks is in part based on the analysis of financial ratios similar or identical to those used as independent variables in the model. The regression, whatever the exact technique being used, may simply be estimating the subjective weights used by supervisory agency personnel to combine different characteristics of banks into a single


overall judgment of risk. The goodness of fit of a model based on supervisory judgments, therefore, reflects the internal consistency of these judgments as well as their relation to the possibility of bank failure.

The a priori, defined approach is typified by the work of Santomero and Vinso (1977) on estimating the probability of failure for commercial banks. Failure is defined as the event of the bank's capital account becoming zero or negative, and the probability of this event is estimated by considering period-to-period changes in the capital account as a random variable whose distribution remains stationary over time. The parameters of this distribution for each bank can be estimated by analyzing weekly time series data* reported by the bank, and translated into a probability of failure. Although the distribution of changes in the capital account is of course influenced by profitability, loan losses, liquidity and other factors, these elements are not included explicitly in the analysis; rather, their influence is reflected in the estimated distribution of changes in the capital account. A similar model developed by Nelson (1977) treats capital as a function of earnings and loan charge-offs, which in turn are assumed to be random variables. The probability of failure is a function of the current level of capital and the estimated mean and variance of earnings and charge-offs. The distribution of these variables can be estimated either from a cross-section of banks in a given year, or from a bank's time series of annual income data.

Using either form of the a priori model, one can generate forecasts from the estimated distribution of changes in the capital accounts, and simulate the effect on overall risk in the banking system of a change in capital levels (what would happen if every bank raised its capital by 1 percent of its assets?), or in the mean and variance of earnings and charge-offs. As with the ex post regression type models, there is a question of stability: do the underlying probability distributions change slowly enough that estimates based on a previous ten-year period can be used for forecasting? The Santomero and Vinso study uses data for 1965 through early 1974, and concludes that the overall risk of failure in the banking system is extremely low. But immediately following this period, the massive increase in problem loans, reduced earnings, and the failures of Franklin National Bank and Security National Bank caused many bank stockholders, CD holders, and depositors to revise upward their subjective estimates of risk. Was such an upward revision justified in the context of the model? That depends on how the increase in loan losses, and its impact on the distribution of changes in capital accounts, is interpreted. If the events of 1974-75 are merely an outlier in an unchanged distribution, no substantial change in the estimated probability of failure is called for; but the market appears to have chosen an alternative explanation, namely that the distribution of bank earnings has shifted toward greater variance.

*Santomero and Vinso used weekly data for 224 large banks filing weekly condition reports with the Federal Reserve from February 1965 to January 1974. Banks not reporting continuously in this period, or which changed their identification numbers due to changed head-office location, were excluded. It is interesting to note (although Santomero and Vinso do not) that these criteria imply the exclusion of United States National Bank, Franklin National Bank, and Security National Bank from the sample.
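To make the mechanics of the a priori, defined approach concrete, the following sketch estimates a failure probability from the distribution of capital changes. It is only a stylized illustration of the general idea, not Santomero and Vinso's or Nelson's actual procedure: the normality assumption, the sum-of-changes horizon, and all names and numbers are assumptions made here for the example.

```python
import numpy as np
from scipy.stats import norm

def failure_probability(capital_changes, current_capital, horizon):
    """Pr(capital account <= 0 within `horizon` periods), assuming the
    period-to-period changes are i.i.d. normal with a stationary
    distribution estimated from the bank's reported history."""
    mu = np.mean(capital_changes)
    sigma = np.std(capital_changes, ddof=1)
    # Cumulative change over the horizon is approximately normal with
    # mean horizon*mu and standard deviation sigma*sqrt(horizon).
    total_mu = horizon * mu
    total_sigma = sigma * np.sqrt(horizon)
    # Failure occurs if the cumulative change falls below -current_capital.
    return norm.cdf((-current_capital - total_mu) / total_sigma)

# Hypothetical example: 470 weeks of simulated capital changes,
# one-year (52-week) horizon.
rng = np.random.default_rng(0)
history = rng.normal(0.05, 1.0, size=470)
print(failure_probability(history, current_capital=30.0, horizon=52))
```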


It should be possible to test the a priori model estimated on data for a historical period in the same way as regression models are tested, by applying it to a later historical period and comparing the results to actual failures in the later period.

The a priori, undefined approach to early warning further abstracts from the definition of bank failure by replacing it with a more general concept of bank vulnerability. A bank is said to be vulnerable to the extent that it is likely to undergo financial difficulty of any sort, ranging from a short-term decline in earnings to complete failure. Also, a bank with a given level of vulnerability may have different probabilities of failure depending on the external economic environment. Thus, bank vulnerability cannot be expressed as a probability of any specific event, since a wide range of possible events are being considered. This concept is basic to the attempt of Korobow and Stuhr (1975) to develop a measure of bank vulnerability which, unlike discriminant functions and other empirical results, would be totally independent of actual classifications of banks as failures and nonfailures or problems and nonproblems. Their measure is a linear combination of twelve financial ratios. The weight of each variable is arbitrary: the standard deviation of the variable is computed for a cross-section sample of banks' data as of a given year, and the variable is weighted by either +1 or -1 (depending on the a priori assumption about whether the ratio is favorable or unfavorable) divided by that standard deviation. This weighted sum is termed the rank score of the bank, and it is assumed that positive scores indicate 'resistance' (the lack of vulnerability) and negative scores indicate vulnerability. Korobow and Stuhr demonstrate that there is a correlation between low rank scores as of one year and the incidence of problem banks in the following two- to four-year period.

A similar a priori, undefined model is contained in Sinkey's study (1975b) of Franklin National Bank, where Franklin National is compared to a peer group of fifty large banks in order to determine whether it was an atypical bank during the years prior to its failure. Both univariate and multivariate chi-square tests are used, and if a bank is an outlier with respect to some or all of the financial ratios being studied, it runs a higher risk of problem status or failure. However, Sinkey notes that outlier status may mean that the bank is significantly less risky than average for its peer group, or that it is an unfavorable outlier in some aspects and a favorable outlier in others.

Each of these models is subject to the criticism that what is being measured bears no relation to bank risk. The chi-square test is clearly ambiguous, since a bank which differs significantly from the peer group mean may differ in either direction. The rank score technique avoids this difficulty by entering each variable into the score according to its expected sign, but it contains the assumption that the standard deviations of the variables determine their relative


weights, i.e. the terms at which changes in the variables can be 'traded off' without changing the level of vulnerability. No theoretical argument is offered by Korobow and Stuhr to support this assumption. As long as one retains the generalized concept of vulnerability, without defining precisely what banks are expected to be vulnerable to, empirical testing of the chi-square or rank score methodologies is not possible.

In practice, however, Korobow and Stuhr abandon the abstract concept in favor of testing the rank score against future problem or nonproblem status. The variables included in the score were selected by testing various combinations and selecting that set which produced the best possible separation of future problem and nonproblem banks. This testing process is described in a later article by Korobow, Stuhr and Martin (1976). The score can also be used as a variable in discriminant or logit analysis, and thus translated into an estimated probability of problem status. The probability estimates in the Korobow, Stuhr and Martin article are based on an arctangent regression equation, a procedure differing only slightly from logit analysis. Once the a priori undefined model has been transformed in this way, it is merely another version of the ex post empirical model, with problem vs. nonproblem status as the dependent variable. In that case, the original goal of developing a model without reference to actual bank ratings, which was the motivation for preferring the arbitrary weighting scheme over the discriminant analysis model [Korobow and Stuhr (1975, p. 159)], has been discarded. There does not seem to be any reason why the rank score is preferable as an ex post empirical model. On the contrary, an arctangent or logit regression of the sort used by Korobow, Stuhr and Martin, with the rank score as the single independent variable, must necessarily have a poorer fit to the data than a similar regression including all of the original independent variables. This is because the rank score regression is equivalent to running the multivariate regression subject to a set of constraints on the relative size of the coefficients, and unless the constrained and unconstrained estimates happen to be identical, the constraints can only be satisfied at the cost of reduced accuracy.

A priori undefined models are nevertheless of value in an environment where the model is used not to develop a measure of risk, but simply to identify exceptional situations for further study. If a bank is significantly different from the group average, it may not be a problem, but the difference calls for some explanation. Differences may be measured with respect to individual variables, or by a composite measure such as the rank score or multivariate chi-square statistic. Whether a bank that is an outlier really constitutes a problem is not determined by the models, but by a detailed analysis of many factors including examiners' subjective evaluations. The National Bank Surveillance System (NBSS) implemented by the Office of the Comptroller of the Currency, and the Systemwide Minimum Surveillance Program of the Federal Reserve System, both rely on this combination of statistical and subjective approaches.


In such an environment, the statistical model is an extension of existing bank analysis activities and, as such, may be more readily accepted by supervisory agency personnel than a more rigorous model would be.
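For concreteness, a rank score of the kind described above might be computed as in the sketch below. The four ratios, their assumed favorable or unfavorable signs, and the centering at the peer-group mean are hypothetical choices made for this illustration; Korobow and Stuhr's actual twelve ratios are not reproduced here.

```python
import numpy as np

# Hypothetical ratios and their assumed a priori signs:
# +1 if a higher value is favorable, -1 if unfavorable.
SIGNS = {
    "capital_to_assets": +1,
    "net_income_to_assets": +1,
    "loans_to_assets": -1,
    "chargeoffs_to_income": -1,
}

def rank_scores(ratios):
    """Rank score: each ratio's deviation from the cross-section mean,
    weighted by (+1 or -1) divided by the ratio's standard deviation.
    Positive scores suggest 'resistance', negative scores vulnerability."""
    n_banks = len(next(iter(ratios.values())))
    scores = np.zeros(n_banks)
    for name, sign in SIGNS.items():
        x = np.asarray(ratios[name], dtype=float)
        scores += sign * (x - x.mean()) / x.std(ddof=1)
    return scores

# Hypothetical cross-section of five banks.
ratios = {
    "capital_to_assets":    [0.08, 0.06, 0.09, 0.05, 0.07],
    "net_income_to_assets": [0.010, 0.004, 0.012, -0.002, 0.008],
    "loans_to_assets":      [0.55, 0.70, 0.50, 0.75, 0.60],
    "chargeoffs_to_income": [0.05, 0.20, 0.03, 0.35, 0.10],
}
print(rank_scores(ratios))
```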

2. The logit model and its relation to discriminant analysis

Let N be the total number of observations (banks); M the number of explanatory variables; x_{ij} the value of the jth variable for the ith observation; and Y = Y_1, Y_2, ..., Y_N a dependent variable which represents the final outcome: Y_i = 1 for failed banks, Y_i = 0 for non-failures. Let N_1 be the number of observations with Y_i = 1. Although only two possible outcomes will be considered in this discussion, the logit model may be extended to three or more possibilities. Because of the nature of early warning models, it is assumed that the explanatory variables x_{ij} are drawn from data for a time period previous to that of the dependent variable Y. Using these conventions of notation, the ex post empirical early warning model can be written in its most general form:

    Pr(Y_i = 1) = F(x_{i1}, x_{i2}, ..., x_{iM}; b_1, b_2, ...),    (1)

that is, the probability that Y_i = 1 is a function of that observation's x_{ij}'s and of b_1, b_2, etc., which are constants estimated from sample data. The form of the probability function F is an assumption rather than an empirical result, although alternative specifications of F may be tested on the data in order to select the most appropriate form of F. The assumed functional form of (1) in the logit model is the logistic function:

    Pr(Y_i = 1) = P_i = 1/(1 + e^{-W_i}),    i = 1, ..., N,    (2)

where W_i = b_0 + \sum_{j=1}^{M} b_j x_{ij} is a linear combination of the independent variables and a set of coefficients B = (b_0, b_1, ..., b_M) which are to be estimated.

This specification can be derived from a simple model of bank failure, where the independent variables are financial ratios or other data believed to be relevant to the bank's risk of failing. We assume that there is some linear combination W of the independent variables that is positively related to the probability of failure. That is, the higher the value of W_i, the higher the probability of failure, conditional on the bank's values of x_{i1}, ..., x_{iM}. The coefficient vector B of this linear combination is not known a priori, but must be inferred from the known values of the x_{ij}'s and Y_i's. W, if it exists, is an index of a bank's propensity to fail.


To use the terminology of Korobow and Stuhr (1975), it measures 'vulnerability'. However, it is unrealistic to expect that any set of independent variables can provide such a complete picture of a bank's condition that we can predict with certainty whether the bank will fail or not. There are countless tangible and intangible influences on the outcome that are excluded from the model: the quality of the loan portfolio (as distinguished from estimates made by management or examiners); attitudes and prejudices of large depositors, regulators and other people of importance; purely random events, etc. These excluded variables (most of which cannot be directly observed) determine how vulnerable, in terms of the included variables, a bank would have to be in order to fail. Accordingly, it seems plausible that each bank has a 'tolerance for vulnerability' \tilde{W}_i, so that if W_i > \tilde{W}_i then the bank will fail, and otherwise not. \tilde{W}_i is a function of the excluded variables, and by definition it is unobservable; we only know ex post that each bank's value of W_i was greater or less than its \tilde{W}_i.

What does this model say about a bank's ex ante probability of failure as a function of W_i? An additional assumption is required: that \tilde{W} is a random variable with cumulative distribution function (c.d.f.) G(w). The probability can then be written as:

    Pr(Y_i = 1) = Pr(W_i > \tilde{W}_i) = Pr(\tilde{W}_i < W_i) = G(W_i).    (2a)

This assumption is fairly reasonable, since \tilde{W} is the combination of a great many variables, many of which are at least unknown if not actually random. Note that equation (2a) does not by itself determine the a priori probability that Y_i = 1, since that probability is equal to G(W*), where W* is the sample mean value of W, and W* can be set to any level by adjusting the constant term. Therefore, without loss of generality one can use the standard normal (or the standard form of any other probability distribution) in specifying G. In particular, a not-so-rigorous appeal to the central limit theorem would suggest that \tilde{W} is normally distributed. In that case, G(w) is the cumulative normal, and (2a) can be rewritten as

    Pr(Y_i = 1) = G(W_i) = (1/\sqrt{2\pi}) \int_{-\infty}^{W_i} e^{-t^2/2} dt.    (2b)

The estimation of the coefficients of W_i in eq. (2b) is known as probit analysis. Unfortunately, there are computational difficulties with estimating (2b) which in many practical applications make it desirable to use an approximation to the normal distribution rather than the normal itself. The logistic distribution is such an approximation, which is virtually indistinguishable from the normal except at the extreme 'tails'. If G(w) is the logistic distribution, then (2a) becomes


    Pr(Y_i = 1) = G(W_i) = 1/(1 + e^{-W_i}),    (2c)

which is the same as eq. (2), the basic equation of the logit model.5

Another motivation for the logit model is suggested by linear discriminant analysis. If the observations for which Y_i = 1 and those for which Y_i = 0 are multivariate normal populations with respect to the independent variables, having means M_1 and M_0 respectively, and equal covariance matrices S_1 = S_0 = S, then the probability of Y_i = 1 is given by eq. (2), with the following values for the coefficients:

    b_0 = -(1/2)(M_1 + M_0)' S^{-1}(M_1 - M_0) + ln(N_1/(N - N_1)),
                                                                    (3)
    (b_1, b_2, ..., b_M) = S^{-1}(M_1 - M_0).

The linear discriminant function is thus a special case of the logit model; if the assumptions of linear discriminant analysis are met, then the probability of group membership is given by eq. (2), but the reverse is not true, since the logit model does not imply that the independent variables are multivariate normally distributed. (See Cox (1970) and Jones (1975) for a more detailed discussion of this distinction.) In practice, the mean vectors and covariance matrices are not known but must be estimated in order to generate the linear discriminant function. In situations where it is believed that the distributional assumptions are met, but the estimates of M_0, M_1 and S are suspect (e.g. due to small sample size in one group), direct estimation of the coefficients b_0, b_1, ..., b_M may be a preferable alternative method.6

5 Another possibility for G(w), which does differ significantly from the normal distribution, is the Cauchy distribution:

    Pr(Y_i = 1) = G(W_i) = 1/2 + (1/\pi) arctan W_i.    (2d)

This distribution is somewhat peculiar in that its mean does not exist and its variance is infinite. In effect it implies that there exist significant numbers of 'outlying' observations, which in our context are banks whose financial variables indicate extreme vulnerability but do not fail, and vice versa. The fact that some banks, whose financial data would to most analysts imply extreme vulnerability, nevertheless survive for years or even decades suggests that such outliers exist, but it is not clear that they exist in significantly greater numbers than can be accounted for by a normal distribution for W. There does not appear to be any literature on the estimation of eq. (2d) for more than one independent variable, although a univariate version of the equation is used by Korobow, Stuhr and Martin to estimate banks' probabilities of problem status. A test of the logit versus arctangent regression models on the data used in that study (Second District member bank financial ratios) showed that the logit model produced slightly better probability estimates for any given set of independent variables.

6 If the normality assumption applies, but not equality of the covariance matrices, quadratic discriminant analysis is indicated as the appropriate model. However, it does not necessarily resolve the problems of a data set where linear discriminant analysis is inapplicable. First, the data may not be multivariate-normal at all; non-normal marginal distributions of the individual variables are strong evidence of such a situation. (Loan loss variables, for example, tend to follow extremely skewed distributions.) Second, the coefficients of the squares and cross-products of the variables may have no clear economic meaning in terms of the problem at hand. This point is not relevant if one is interested only in overall discrimination regardless of whether the coefficients make sense or not, analogous to the single-minded pursuit of high R^2 in some linear regression studies. But if the purpose of the model includes testing of hypotheses about the marginal contributions of individual variables to the probability of failure, it would seem reasonable to include only those nonlinear interactions suggested by these hypotheses, rather than have the all-or-none choice between linear and quadratic discriminant analysis.
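Eq. (3) is straightforward to compute from sample moments. The sketch below is a minimal illustration, using hypothetical simulated data, of how the linear-discriminant assumptions translate group means and a pooled covariance matrix into logit coefficients; it is not code from any of the studies cited.

```python
import numpy as np

def lda_logit_coefficients(X, y):
    """Logit coefficients implied by eq. (3) when both groups are
    multivariate normal with equal covariance matrices.
    X is an (N, M) data matrix; y is a 0/1 outcome vector."""
    X1, X0 = X[y == 1], X[y == 0]
    N, N1 = len(X), len(X1)
    M1, M0 = X1.mean(axis=0), X0.mean(axis=0)
    # Pooled within-group covariance matrix S.
    S = ((len(X1) - 1) * np.cov(X1, rowvar=False) +
         (len(X0) - 1) * np.cov(X0, rowvar=False)) / (N - 2)
    b = np.linalg.solve(S, M1 - M0)          # (b1,...,bM) = S^{-1}(M1 - M0)
    b0 = -0.5 * (M1 + M0) @ b + np.log(N1 / (N - N1))
    return b0, b

# Hypothetical data: 200 nonfailed and 15 failed banks, two ratios.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.08, 0.01], 0.02, size=(200, 2)),   # nonfailed
               rng.normal([0.05, -0.01], 0.02, size=(15, 2))])  # failed
y = np.concatenate([np.zeros(200, dtype=int), np.ones(15, dtype=int)])
print(lda_logit_coefficients(X, y))
```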


Essentially, what we are trying to achieve by estimating the coefficients of the logit model is to produce a set of probability estimates such that those observations where failure occurred were assigned high ex ante probabilities of failure, and those which did not fail were assigned low probabilities. A 'good fit' is a set of coefficients that comes as close as possible to this objective. The logit model, and the process by which it is estimated, can be shown in graphic form by plotting the estimated probability P_i as a function of W_i, the linear combination of the independent variables. In effect, the independent variables are represented as a single dimension, shown as the horizontal axis in fig. 1. The vertical axis represents both P_i and the actual outcome Y_i for each observation.

[Fig. 1. Graphic representation of the logit model. Horizontal axis: W_i = b_0 + \sum_{j=1}^{M} b_j x_{ij}; vertical axis: estimated probability of failure P_i, from 0 to 1, with each circle one observation; Banks A, B, C and D are labeled.]

As shown in the graph, P_i is a continuous function of W_i, approaching zero for very large negative values and approaching one for very large positive values of W_i. The curve slopes upward at all points, since an increase in W_i means a higher probability. However, the slope becomes small at the extreme upper and lower ends of the scale.



This is intuitively plausible, as one would expect that if a bank has an extremely high (or low) probability of failure, a marginal change in the independent variables will have little effect on its prospects. The same marginal change might have a very great effect if the bank's probability of failure were, say, 0.5.

In fig. 1, selected observations are shown according to their values of W_i for a certain set of coefficients, and their actual outcome Y_i. Evidently, the model based on these coefficients was an accurate predictor of what would happen to Bank A and Bank B. On the other hand, Bank C was assigned a low probability of failure and did fail, while Bank D was given a high probability and did not. If Bank C could be moved to the right and Bank D to the left on the graph, without materially affecting the positions of the other banks, the model would have greater predictive power. Fig. 2 shows the same observations, but with W_i based on a new set of coefficients, where the accuracy of the function is improved.

[Fig. 2. Graphic representation of the logit model, with a new set of coefficients b_0, b_1, ..., b_M. Same axes as fig. 1; Banks A and C are labeled.]
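The shape of the curves in figs. 1 and 2 is easy to verify numerically: for the logistic function of eq. (2), the slope dP_i/dW_i equals P_i(1 - P_i), which is largest at P_i = 0.5 and near zero in the tails. A quick check with illustrative values:

```python
import numpy as np

def logistic(w):
    """Eq. (2): P = 1 / (1 + exp(-W))."""
    return 1.0 / (1.0 + np.exp(-w))

for w in [-4.0, -2.0, 0.0, 2.0, 4.0]:
    p = logistic(w)
    slope = p * (1.0 - p)   # dP/dW for the logistic curve
    print(f"W = {w:+.0f}:  P = {p:.3f},  slope = {slope:.3f}")
```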

The technique of maximum likelihood estimation of eq. (2) bears a strong resemblance to the graphic procedure outlined above. Given the vector Y = Y_1, ..., Y_N of actual outcomes, and a vector of coefficients B = b_0, b_1, ..., b_M, the likelihood function of the sample of N observations is given by:

    L(Y, B) = \prod_{i=1}^{N} P_i^{Y_i} (1 - P_i)^{1 - Y_i}.    (4)

L(Y, B) is written as a function of the coefficients B as well as the actual outcomes Y, since the probabilities P_i are determined by the coefficients.


The estimation procedure consists of finding a set of coefficients which maximize L(Y, B). For computational purposes, and for convenience in applying tests of significance to the results, the problem is restated as the equivalent one of minimizing -2 log likelihood:

    -2 ln L(Y, B) = -2 \sum_{i=1}^{N} [Y_i ln P_i + (1 - Y_i) ln(1 - P_i)].    (5)

As -2 ln L(Y, B) approaches zero, the probability estimates approach prediction with certainty, the ideal situation where P_i = Y_i for all observations. The other extreme is given by -2 ln L_0, which is -2 log likelihood evaluated at the null hypothesis that P_i = N_1/N for all observations and the explanatory variables have no influence. A coefficient vector B resulting in probability estimates with -2 ln L(Y, B) > -2 ln L_0 will be rejected as producing less accurate probability estimates than the null hypothesis.

The maximum-likelihood solution (minimized -2 ln L(Y, B)) is obtained using an iterative technique. A variety of programs are available for nonlinear optimization problems such as this one, but all of them work in the same general fashion: an initial set of coefficients is revised at each iteration until the process converges to the maximum-likelihood solution. The move from fig. 1 to fig. 2 above corresponds to one iteration, and the better 'fit' of fig. 2 is reflected in a lower value of -2 ln L(Y, B). The statistical properties of the maximum-likelihood estimates, which are asymptotically unbiased and normally distributed, are discussed in detail by McFadden (1974); Monte Carlo simulations show that for large sample sizes (over 300 observations), the amount of bias is small for both the coefficients themselves and their estimated variance.

Comparing the maximum-likelihood estimates of eq. (2) with the linear discriminant function (3), several general observations can be made. First, the closeness of the two solutions varies considerably with the data. Brelsford and Jones (1967) give examples, based on simulated data, in which the linear discriminant function is virtually indistinguishable from the solutions obtained by various iterative methods. However, a later article by Jones (1975) shows that in some real-world applications, the linear discriminant estimate has a much higher value of -2 ln L(Y, B) than the maximum-likelihood estimate. Moreover, rejection of the discriminant estimate can occur; that is, its fit to the data as measured by the likelihood function is poorer than that of the null hypothesis. This phenomenon is especially common in situations where one group, e.g. failed banks, is very small relative to the total number of observations, though even then it occurs in some data sets but not others. Clearly, any representative sample of banks will contain only a small proportion of failures. Most studies have used non-representative samples, such as Meyer and Pifer (1970), who paired failed and nonfailed banks, or Altman (1974), where the proportion of problem S&Ls in the sample is far higher than in the population as a whole.


In such samples, the discriminant estimate will not be rejected,7 although it may still differ from the maximum-likelihood estimate. This would suggest that estimation of a linear discriminant function on a non-representative sample, even with an adjustment of the constant term to equate the a priori probability of failure with the proportion in the population as a whole, may introduce bias into the results.

Rejection of the discriminant estimate cannot be ascribed to inequality of the covariance matrices, since it affects quadratic as well as linear discriminant functions. The most plausible explanation, supported by study of the marginal distributions of particular variables, is that the data are not multivariate-normally distributed, so that neither linear nor quadratic discriminant analysis is strictly applicable. It should be emphasized that the likelihood measurement used to test probability estimates is a somewhat different and more stringent test than the usual tests of classification accuracy; and that assignment of probability estimates to each observation is a different objective than classification into groups.

7 Note that this result is not a reflection of neglecting to adjust the linear discriminant function to the correct a priori probability of failure. Eq. (3) contains such an adjustment in the constant term.
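As a modern illustration of the estimation procedure just described, the sketch below minimizes -2 ln L(Y, B) of eq. (5) numerically and reports the likelihood ratio index and information criterion used in the next section. The choice of optimizer, the starting values, the clipping guard, and the simulated data are implementation choices made for this sketch, not part of the original study.

```python
import numpy as np
from scipy.optimize import minimize

def neg2_log_lik(b, X, y):
    """-2 ln L(Y, B) of eq. (5); b[0] is the constant term b0."""
    w = b[0] + X @ b[1:]
    p = 1.0 / (1.0 + np.exp(-w))            # eq. (2)
    p = np.clip(p, 1e-12, 1.0 - 1e-12)      # guard against log(0)
    return -2.0 * np.sum(y * np.log(p) + (1 - y) * np.log(1.0 - p))

def fit_logit(X, y):
    """Minimize -2 ln L(Y, B) iteratively, starting from the null model
    in which every bank has probability N1/N."""
    start = np.zeros(X.shape[1] + 1)
    start[0] = np.log(y.mean() / (1.0 - y.mean()))
    result = minimize(neg2_log_lik, start, args=(X, y), method="BFGS")
    fitted = result.fun
    n1, n = y.sum(), len(y)
    null = -2.0 * (n1 * np.log(n1 / n) + (n - n1) * np.log(1.0 - n1 / n))
    lri = (null - fitted) / null            # likelihood ratio index
    ic = fitted + 2.0 * len(start)          # information criterion
    return result.x, fitted, null, lri, ic

# Hypothetical data: 1000 banks, 3 ratios, rare failures.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))
true_w = -4.0 + X @ np.array([1.0, -1.0, 0.5])
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-true_w))).astype(int)
coef, fitted, null, lri, ic = fit_logit(X, y)
print(coef.round(2), round(lri, 3), round(ic, 1))
```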

3. Empirical results

An example of the logit model is provided by a study of failure among U.S. commercial banks, in which the model was estimated on historical data for the entire population of banks which are members of the Federal Reserve System. The dependent variable is the occurrence of failure, supervisory merger or other emergency measure to resolve an imminent failure situation, within two years of the statement year to which the financial ratio data apply. (All of these events will hereafter be referred to as 'failure'.) Various periods were tested, and various combinations of financial ratios as independent variables.

Of approximately 5,700 Federal Reserve member banks, 58 banks were identified as failures at some time between 1970 and 1976. These banks were identified through examination of publicly available sources, including the merger decisions of Federal bank supervisory agencies, newspaper articles and a search of balance sheets for banks whose net worth declined drastically over a year or less. The occurrence of failure is probably understated by such a procedure, since the motivation for a merger is not always made public, and the report on what was actually a supervisory merger may not distinguish it from the vast majority of mergers where viable banks are involved. Similar questions are raised in cases where a bank is given operating assistance to allow it to remain in business, such as the Bank of the Commonwealth, Detroit in 1972 or Freedom National Bank, New York in 1975.


While these two banks were publicly described as banks which would have failed in the absence of such assistance, other banks apparently received similar assistance without any public statements as to their condition.8 Doubtful cases were assumed to be nonfailed banks.

The independent variables are drawn from a set of 25 financial ratios on the nationwide database maintained by the Federal Reserve Bank of New York for research on bank surveillance programs. These ratios are in turn a small subset of all possible combinations of items from the Report of Condition and Report of Income, and were chosen for their usefulness in previous early warning studies conducted by several different agencies. Most of these studies were confined to banks in a limited geographical area, and were based on the prediction of problem status rather than failed banks. A list of the 25 variables appears in the Appendix, and more detailed specifications for calculating the ratios may be obtained from the author.

The variables may be classified into four broad groups: (i) asset risk, (ii) liquidity, (iii) capital adequacy, and (iv) earnings. Asset risk can be measured exclusively in balance sheet terms, for example Loans/Total Assets (since loans are believed to be more risky than securities and cash assets), or through income measures such as Gross Charge-Offs/Net Operating Income. Since the data is based entirely on financial statements, examiners' evaluations of loans do not appear in the ratios. Liquidity is measured by such ratios as Net Liquid Assets/Total Assets, where net liquid assets include Federal Funds sold plus short-term Treasury securities plus loans to brokers and dealers, less Federal Funds purchased and other short-term borrowings. Capital adequacy is measured by various forms of the basic capital/assets ratio, with or without debt capital in the numerator. Earnings are expressed by Operating Expenses/Operating Revenues, Net Income/Total Assets, Net Interest Margin/Earning Assets, etc.

Each of these general characteristics could in theory be expected to have an influence on a bank's prospect of failure. Banks typically are threatened with failure because of losses on assets; the other three characteristics measure the ability of a bank to remain open in spite of these losses. Liquid assets and/or access to short-term borrowings allow the bank to maintain cash flow if expected loan repayments fail to materialize and depositors make larger than normal withdrawals. Capital adequacy and earnings allow losses to be offset by current or past income. This simple theory of the causes of bank failure is well-known, since it has been the basis of bank supervision for decades. Yet in empirical analysis, each general concept has to be represented by a specific measure drawn from accounting data, and ratios which supposedly measure the same characteristic may differ substantially in the extent to which they are correlated with subsequent failure.

8 The problem applies particularly to small, closely-held banks where the rescue operation may take the form of purchase of new stock by directors, officers or local businessmen with little or no intervention from the supervisory authorities. Banks which sell capital stock in spite of negative net income can reasonably be suspected of being involved in such rescue operations.


Table 1 contains logit regression results for selected combinations of independent variables as of 1974, estimating the probability of failure in 1975-76. The estimated coefficients of the logit regression are shown with significance levels derived from a test of each variable's marginal contribution to -2 ln L(Y, B). Since the population of 5598 member banks as of 1974 included 23 subsequently failed banks, the logit estimates are tested against the null hypothesis that all banks have an equal probability of failure of 23/5598 = 0.0041. Values of -2 ln L(Y, B) are given for the logit model and the null hypothesis, and also for the linear and quadratic discriminant models for purposes of comparison. The likelihood ratio index (L.R.I.) is a statistic analogous to R^2, approaching zero if the logit model does not differ from the null hypothesis, and approaching one as the model approximates the perfect-prediction case. The Information Criterion is used in selecting the combination of variables that best fits a given set of data; it consists of the log-likelihood adjusted for the number of coefficients estimated in the model. It is based on the work of Akaike (1972) as applied to logit analysis by Jones (1975).

The logit and discriminant models are compared in a discriminant-analysis context by computing classification accuracy for failed and nonfailed banks. For classification purposes, a bank is assigned to the failed-bank group if its estimated probability is higher than 0.0041, the sample proportion of failures for this period. That is, the cut-off point assumes a priori probabilities of group membership equal to the sample frequencies and equal misclassification costs.9 The percentages shown in the table represent the percent of actual failed and nonfailed banks classified correctly by the model; this is the standard measure of classification accuracy used in the discriminant studies. For example, in column (1) the logit function classified 20 of the 23 actual failed banks, and 4939 of the 5575 nonfailed banks, in the appropriate groups; hence the percentages of 87.0 and 88.6, respectively. This seemingly high accuracy is obtained even though a total of 656 banks were classified as failures by the model, of which only 20, or 3.0 percent, actually failed. The same considerations apply to the other logit and discriminant functions in the table; because the groups differ vastly in size, a high percentage of classification accuracy is consistent with a classification rule that exaggerates the size of the smaller group by many times. The classification rule may nevertheless be of practical use, even though a 'prediction' of 656 bank failures cannot be taken at face value, if the cost of misclassifying a nonfailed bank is low enough relative to the cost of classifying as safe a bank which subsequently fails.

9 Although discriminant studies have been criticized for assuming a priori probabilities equal to sample proportions, the assumption is legitimate here since the 'sample' consists of the entire population being studied. No precise estimates of the relative costs of misclassification are available, so the costs are arbitrarily assumed to be equal. The equal cost assumption is used by Altman (1974) and Sinkey (1975a), among others, in reporting classification results. However, as Sinkey notes in his study, a practical application of discriminant analysis requires taking misclassification costs into account; the same is true of the use of logit analysis for classifying observations.
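In code, the classification rule just described might look like the sketch below. The probability estimates here are simulated, not the paper's; only the cutoff convention (the sample failure proportion, 23/5598 = 0.0041) is taken from the text.

```python
import numpy as np

def classification_accuracy(p_hat, y, cutoff=None):
    """Assign a bank to the failed group when its estimated probability
    exceeds the cutoff; by default the cutoff is the sample failure
    proportion (a priori probabilities equal to sample frequencies,
    equal misclassification costs)."""
    if cutoff is None:
        cutoff = y.mean()
    predicted_fail = p_hat > cutoff
    pct_failed_correct = 100.0 * predicted_fail[y == 1].mean()
    pct_nonfailed_correct = 100.0 * (~predicted_fail[y == 0]).mean()
    return cutoff, pct_failed_correct, pct_nonfailed_correct

# Hypothetical probabilities for 5598 banks, 23 of which fail.
rng = np.random.default_rng(3)
y = np.zeros(5598, dtype=int)
y[:23] = 1
p_hat = np.where(y == 1, rng.beta(2, 40, 5598), rng.beta(1, 400, 5598))
print(classification_accuracy(p_hat, y))   # cutoff = 23/5598 = 0.0041
```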

Table 1
Selected regression results, 1974 data (independent variables: as of 1974, 5598 observations, 23 failures, 5575 nonfailures; dependent variable: failure, 1975-76).

[The table reports six logit equations, columns (1)-(6). For each equation it gives: coefficients (with significance levels in parentheses) for the Constant and for Net Income/Total Assets, Gross Charge-offs/Net Operating Income, Expenses/Operating Revenues, Loans/Total Assets, Commercial Loans/Total Loans, Loss Provision/Loans + Securities, Net Liquid Assets/Total Assets, and Gross Capital/Risk Assets; -2 log likelihood for (a) the logit model, (b) the null hypothesis (298.66 in every column), (c) the linear discriminant model, and (d) the quadratic discriminant model; (e) the Likelihood Ratio Index = ((b) - (a))/(b), ranging from 0.39 to 0.49 across columns; (f) the Information Criterion = (a) + 2 x (number of coefficients); and the percent of failed and nonfailed banks correctly classified by the logit, linear discriminant and quadratic discriminant models. In column (1), for example, -2 log likelihood is 174.75 for the logit model against 298.66 for the null hypothesis, and the logit model correctly classifies 87.0 percent of failures and 88.6 percent of nonfailures.]

a. Test of overall significance of regression (chi-square (b) - (a)) shows regression significant at the 0.99 level.


Turning now to the results, column (1) is the combination of six variables used by Korobow, Stuhr and Martin (1976) to estimate probabilities of problem status among Second F.R. District banks. When applied to probabilities of failure, only four of the variables are significant at the 0.95 level or better; of the others, one (Net Liquid Assets) has the wrong sign. Another variable, Loss Provision/Loans + Securities, is not significant even though in theory the prospects of future loan losses should be of critical importance to whether a bank is likely to fail. Omission of these two variables gives the equation shown in column (2), where all variables are significant and have the expected signs, but with hardly any change in -2 log likelihood. In column (3), two new variables are added, improving the fit of the equation; Gross Charge-offs/Net Operating Income is apparently a more relevant measure of asset quality than Loss Provision/Loans + Securities. Omitting Expenses/Operating Revenues, which has the wrong sign due to collinearity with Net Income/Total Assets, and Loans/Total Assets, which is not significant, has little effect on the summary statistics or the coefficients of the remaining variables (columns (4)-(5)). Net Income/Total Assets, however, appears to be essential, since its omission sharply increases -2 log likelihood (column (6)).

In all equations, the Gross Capital/Risk Assets ratio is significant at the 0.99 level, as is Commercial Loans/Total Loans. The conventional regulatory wisdom about capital adequacy is at least consistent with the first result, and the second may reflect the greater risk of commercial loans as compared with other types of loans. Alternatively, the percentage of commercial loans may be a proxy for illiquidity, since banks with relatively heavy commercial lending volume also tend to have low amounts of liquid assets, or for management 'aggressiveness' and propensity to take risks in other areas. Using the Information Criterion to select the 'best' model, and excluding column (3) because of the wrong sign, column (5) is the most satisfactory model, with four variables characterizing profitability, asset quality, and capital adequacy. No liquidity variable is explicitly included in the model; attempts to substitute other liquidity variables for Net Liquid Assets/Total Assets did not yield significant coefficients.

Comparing the logit and discriminant estimates, all of the latter had values of -2 log likelihood greater than that of the null hypothesis. Hence in each equation, both linear and quadratic discriminant functions are rejected as probability estimates, while the logit estimates had -2 log likelihood significantly lower than that of the null hypothesis. However, when the different models are compared in terms of classification rather than probability estimation, the classification accuracies of the logit and discriminant models are virtually the same.

The relative merits of logit vs. discriminant analysis, at least in this empirical example, appear to depend on the intended use of the results.


If a dichotomous classification into 'sound' and 'unsound' banks is the goal, then we may be indifferent between discriminant and logit models, since the classification accuracies are similar. In fact, the linear discriminant model may be preferred, since it minimizes the computations required, even if inequality of the covariance matrices (or nonnormality) makes the results theoretically suspect. On the other hand, the user may be capable of varying levels of response to risk of failure: a seller of Federal funds can vary the risk premium at which he is willing to supply funds to different banks, or a supervisory agency can choose between more and less urgent measures to deal with a problem situation. In that case, probability estimates can be of greater interest than a simple classification, and the quality of these estimates is an important issue, favoring the logit approach.10

While the results presented so far are for 1974 data, similar regressions were estimated for earlier periods. Table 2 contains coefficients and summary statistics of logit and discriminant models based on data as of 1970 and the incidence of failure in 1971-72. The quality of the results is far poorer than in table 1, and the small number of banks in the failed-bank group raises questions whether the sample is too small for reliable estimates.11 Column (1) again contains the Korobow, Stuhr and Martin set of six variables; here the liquidity variable is significant, while Loans/Total Assets and Commercial Loans/Total Loans have insignificant (and wrong-sign) coefficients. While elimination of Loss Provision/Loans + Securities and Net Liquid Assets/Total Assets had little effect on the probability estimates for 1974, in 1970 it reduces the fit of the equation to the point where it does not differ significantly from the null hypothesis (column (2)). This equation is also interesting in that while the linear discriminant function is rejected as having higher -2 log likelihood than the null hypothesis, the quadratic discriminant function is not; in fact, the quadratic discriminant probability estimates are better than the logit estimates. Substitution of Gross Charge-Offs/Net Operating Income for Loss Provision/Loans + Securities (columns (3)-(4)) improves the quality of the results slightly, especially if Net Liquid Assets/Total Assets is retained in the equation. Only in one of the equations is the capital ratio significant at the 0.95 level or better, again unlike the 1974 results, where it was an important factor whatever other variables were combined with it. In all cases, the fit of the regression to the data is considerably poorer in 1970, with L.R.I. values ranging from 0.05 to 0.16 as compared to 0.39 to 0.49 in 1974.

10 Chesser (1974) makes a similar point in connection with commercial loan customers: a loan officer is hardly ever faced with an unequivocal accept-or-reject proposition, but rather with the question of what terms are appropriate to a particular customer. Therefore, a discriminant model classifying borrowers as potentially noncompliant vs. safe credit risks is less relevant to the loan officer's decision than a model which estimates a borrower's probability of not complying with loan terms.

11 A sample size of about fifteen in the smallest group seems to be the minimum required; see Jones (1975). Only 12 banks out of 5617 failed in the 1971-72 period, including supervisory mergers, a fortunate fact from a real-world point of view, however inconvenient for statistical work!
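Footnote 10's point about graduated responses suggests treating the cutoff itself as a decision variable once misclassification costs are unequal. A sketch of one way to do this follows; the cost figures and the search grid are invented for illustration, whereas the paper itself assumes equal costs.

```python
import numpy as np

def expected_cost(p_hat, y, cutoff, cost_missed_failure=100.0,
                  cost_false_alarm=1.0):
    """Total cost of a cutoff rule, with a missed failure assumed far
    costlier than flagging a sound bank for further review."""
    flagged = p_hat > cutoff
    missed = np.sum((y == 1) & ~flagged)
    false_alarms = np.sum((y == 0) & flagged)
    return cost_missed_failure * missed + cost_false_alarm * false_alarms

def best_cutoff(p_hat, y, grid=np.linspace(1e-4, 0.5, 500)):
    """Grid search for the cutoff that minimizes the expected cost."""
    costs = [expected_cost(p_hat, y, c) for c in grid]
    return grid[int(np.argmin(costs))]
```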

Table 2
Selected regression results, 1970 data (independent variables: as of 1970, 5617 observations, 12 failures, 5605 nonfailures; dependent variable: failure 1971-72).

Columns (1)-(6) report coefficients, with significance levels in parentheses, for: Constant; Net Income/Total Assets; Gross Charge-offs/Net Operating Income; Expenses/Operating Revenues; Loans/Total Assets; Commercial Loans/Total Loans; Loss Provision/Loans+Securities; Net Liquid Assets/Total Assets; Gross Capital/Risk Assets. Variables excluded from a given equation are left blank.

Summary statistics reported for each column:
-2 log likelihood: (a) Logit Model, with values of 155.00, 162.71, 147.41, 148.53, 143.41 and 145.65 across the six equations; (b) Null Hypothesis, 171.54 in every column; (c) Linear Discriminant; (d) Quadratic Discriminant.
(e) Likelihood Ratio Index = ((b)-(a))/(b): 0.10a, 0.05, 0.14b, 0.13b, 0.16b and 0.15b for the corresponding equations.
(f) Information Criterion = (a) + 2 x (number of coefficients).

Classification results (percent correctly classified, failure and nonfailure groups) are reported for the logit, linear discriminant and quadratic discriminant models; for one of the equations, for example, the logit model correctly classifies 58.3% of failures and 67.4% of nonfailures, the linear discriminant 50.0% and 90.8%, and the quadratic discriminant 66.7% and 86.3%.

aTest of overall significance of regression (chi-square (b)-(a)) shows regression significant at 0.95 level.
bTest of overall significance of regression (chi-square (b)-(a)) shows regression significant at 0.99 level.
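The overall-significance tests in the table footnotes are likelihood-ratio chi-square tests: the statistic (b) - (a) is referred to a chi-square distribution with degrees of freedom equal to the number of estimated slope coefficients. A minimal check in Python, taking the recoverable pair (a) = 148.53 and (b) = 171.54; the four degrees of freedom are an illustrative assumption, since the column's exact specification is not recoverable.

    # Likelihood-ratio test of the regression against the null hypothesis.
    from scipy.stats import chi2

    a, b, df = 148.53, 171.54, 4      # -2 log likelihoods and assumed df
    stat = b - a                      # 23.01
    p_value = chi2.sf(stat, df)       # upper-tail probability
    print(f"chi-square({df}) = {stat:.2f}, p = {p_value:.2e}")

With these values the p-value is on the order of 1e-4, comfortably beyond the 0.99 level, consistent with the footnote b designation.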

Table 3
Regression results: Comparison of different time periods.

Year (independent variables data; dependent variable, failure vs. non-failure), with number of observations (total; failures; non-failures):
1969; 1970-71: 5714; 12; 5702
1970; 1971-72: 5617; 12; 5605
1971; 1972-73: 5587; 10; 5577
1972; 1973-74: 5565; 13; 5552
1973; 1974-75: 5546; 18; 5528
1974; 1975-76: 5598; 23; 5575

For each year the table reports coefficients (significance levels in parentheses) against the stub: Constant; Net Income/Total Assets; Gross Charge-offs/Net Operating Income; Expenses/Operating Revenues; Loans/Total Assets; Commercial Loans/Total Loans; Loss Provision/Loans+Securities; Net Liquid Assets/Total Assets; Gross Capital/Risk Assets, with blanks for the variables excluded from the four-variable equation.

Summary statistics by year, 1969 through 1974:
-2 log likelihood, (a) Logit Model: 151.23, 148.53, 115.61, 126.42, 129.69, 156.99.
(b) Null Hypothesis: 171.95, 171.54, 146.49, 183.51, 242.24, 298.66.
(c) Linear Discriminant: 370.02, 309.56, 320.96, 573.09, 453.56, 699.88.
(d) Quadratic Discriminant: 876.51, 630.31, 257.39, 697.08, 567.24, 698.38.
(e) Likelihood Ratio Index = ((b)-(a))/(b): 0.12*, 0.13*, 0.21*, 0.31*, 0.46*, 0.47*.

Classification results (percent correctly classified, failure and non-failure groups) are reported for the logit, linear discriminant and quadratic discriminant models; for one of the early periods, for example, the logit model correctly classifies 41.7% of failures and 82.2% of non-failures, the linear discriminant 41.7% and 88.7%, and the quadratic discriminant 66.7% and 78.9%.

*Test of overall significance of regression (chi-square (b)-(a)) shows regression significant at 0.99 level.
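The classification rows of tables 2 and 3 score the failed and non-failed groups separately: a bank is classified as a prospective failure when its estimated probability exceeds a cutoff, and each group's percentage correctly classified is reported. A minimal sketch with invented probabilities and an arbitrary cutoff grid; the function name and data are illustrative only.

    # Group-wise classification accuracy from estimated failure probabilities.
    import numpy as np

    def groupwise_accuracy(y, p, cutoff):
        pred = (p >= cutoff).astype(int)          # 1 = classified as failure
        failure_pct = (pred[y == 1] == 1).mean()  # failed banks correctly flagged
        nonfailure_pct = (pred[y == 0] == 0).mean()
        return failure_pct, nonfailure_pct

    y = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])            # two failures
    p = np.array([0.62, 0.18, 0.08, 0.41, 0.05, 0.11,
                  0.22, 0.09, 0.33, 0.07])                  # fitted probabilities
    for cutoff in (0.15, 0.25, 0.35):
        f_pct, n_pct = groupwise_accuracy(y, p, cutoff)
        print(f"cutoff {cutoff:.2f}: failure {f_pct:.1%}, nonfailure {n_pct:.1%}")

Raising the cutoff trades failed-bank accuracy against non-failed-bank accuracy, which is why the two percentages move in opposite directions across the models in the tables.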


Classification accuracies for both the logit and discriminant models are also lower.

The difference between time periods can be seen in another perspective in table 3, which compares estimates of the four-variable model which was 'best' in 1974, estimated in each year 1969 through 1974. In each period, the dependent variable is the occurrence of failure in the two subsequent years. The overall quality of the equation is higher in the later periods, as is the significance of the capital adequacy variable. Gross Charge-Offs/Net Operating Income has relatively little variation in its coefficient from year to year, but the other coefficients vary widely.

These results can be interpreted in light of banking developments since 1970. In the early part of the period, bank failures were relatively infrequent, often involving fraud or local economic trends in a bank's immediate service area. These factors are not as well reflected in bank financial statements as the recession-related asset problems which were important in later years; hence the early regressions show less relationship between ratios drawn from these statements and the risk of failure. Similarly, in the earlier years the overall low level of loan losses meant that earnings and capital adequacy were of little importance to a bank's ability to survive under the conditions of that time. But beginning in 1973, a sharp increase in loan problems affected the entire banking system, increasing the importance of earnings and capital, and of whatever factors (loan risk, illiquidity, or management strategy) for which the Commercial Loans/Total Loans variable is a proxy. However, the effect of actual loan losses at an individual bank remained constant over the whole period; the danger posed by large losses is the same in spite of changing economic conditions.

The policy implications of these findings are not surprising: the relevance of conventional bank soundness criteria will vary over the business cycle. In periods when bank failures are extremely rare, the empirical link between capital adequacy (and other measures of bank safety) and the actual occurrence of failure will be weak. Supervisory use of a statistical early warning function estimated from such a period will yield questionable results, whether the specific form of the model is discriminant analysis, logit analysis, or another technique. To the extent that supervisory standards on capital, etc., hamper bank profitability and growth, one might expect that the industry will use the lack of empirical connection between financial ratios and failure to argue for relaxation or abandonment of those standards. In periods of stress due to increased loan losses, when a larger number of banks fail, the conventional supervisory wisdom reasserts itself; 'weakness' as measured by earnings, capital and asset composition is an indicator of risk, although the link with failure is still far short of perfect prediction. If measurement of failure risk is based on equations estimated in the 1974-76 period, the model may provide more realistic 'worst case' estimates than the oft-criticized supervisory standards based on Great Depression experience. Such an early warning system will tend to overpredict failure in future periods of prosperity and lower loan loss experience, but the relative weights of different characteristics will be based on their objective contribution to risk rather than the subjective weights currently used in rating banks.

These conclusions are at variance with studies of bank failure in the 1930s, in which bank capital adequacy was found to have little or no relation to the incidence of failure even under conditions of extreme stress in the banking system. Those studies used univariate comparisons of failed and nonfailed banks; application of multivariate techniques such as discriminant or logit analysis to 1930s data might show that, ceteris paribus, higher capital/risk asset ratios made banks less likely to fail. On the other hand, ratio tests of bank soundness may break down completely in an extreme situation, when even the most conservatively managed institutions find themselves dependent on the central bank in its role as lender of last resort. If that is true, then statistical early warning models are of most interest in periods of moderate adversity, rather than in times which are either better or substantially worse.
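The estimation design behind table 3 is a sequence of two-year-ahead regressions: for each base year, that year's ratios are paired with an indicator of failure (including supervisory merger) in either of the two following years. The sketch below shows one way such samples could be assembled; the data structures and the fit_logit routine are hypothetical stand-ins, not the actual retrieval code used in the study.

    # Assemble one (X, y) sample per base year, as in table 3.
    import numpy as np

    def build_sample(ratios_by_year, failure_year, base_year):
        """ratios_by_year: {year: {bank_id: ratio vector}};
        failure_year: {bank_id: year the bank failed}, absent if it never failed."""
        X, y = [], []
        for bank_id, x in ratios_by_year[base_year].items():
            failed_in_window = failure_year.get(bank_id) in (base_year + 1, base_year + 2)
            y.append(1 if failed_in_window else 0)   # failure in two subsequent years
            X.append(x)
        return np.asarray(X), np.asarray(y)

    # One regression per base year:
    # for base_year in range(1969, 1975):
    #     X, y = build_sample(ratios_by_year, failure_year, base_year)
    #     coefs = fit_logit(X, y)   # any maximum-likelihood logit routine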

Appendix

Ratio data for the nationwide surveillance study: storage on the NWRAT data set.

Field 1: DT*
Field 2: BASIS-ID*
Field 3: DIST-FED-RES
Field 4: FDIC-ID
Field 5: TYPE-REPa* -- Historic data: TYPE-REP is identical to DIST-FED-RES. Current data: TYPE-REP has the form ddr, where 'dd' is the Federal Reserve District and 'r' is the report type (1 = year-to-date report, all banks; 2 = current-quarter report, large bank; 3 = current-quarter report, small bank; 4 = semiannual report, small bank).
Field 6, ratio 1: NWRAT005, LL.TS -- Loans and leases/Total Sources of Funds
Field 7, ratio 2: NWRAT006, LA.TS -- Liquid Assets/Total Sources of Funds
Field 8, ratio 3: NWRAT007, EQ.ARA -- Equity Capital/Adjusted Risk Assets
Field 9, ratio 4: NWRAT008, GC.ARA -- Gross Capital/Adjusted Risk Assets
Field 10, ratio 5: NWRAT009, NI.TA -- Net Income/(Total Assets - Cash Items in Process)
Field 11, ratio 6: NWRAT010, DIV.NI -- Dividends/Net Income
Field 12, ratio 7: NWRAT011, GCO.NI -- Gross Charge-offs/(Net Operating Income + Loss Provision)
Field 13, ratio 8: NWRAT012, NIE.OP -- Non-interest expenses/Operating Revenue
Field 14, ratio 9: NWRAT013, EXP.OP -- Total Operating Expense/Operating Revenue
Field 15, ratio 10: NWRAT014, LN.TA.I -- Loans/(Total Assets - Cash Items in Process)
Field 16, ratio 11: NWRAT015, CI.LN -- Commercial and Industrial Loans/Total Loans (domestic only)
Field 17, ratio 12: NWRAT016, PROV.TL.I -- Loss Provision/Total Loans and Securities
Field 18, ratio 13: NWRAT017, LIQ.TA.I -- Net Liquid Assets/(Total Assets - Cash Items in Process)
Field 19, ratio 14: NWRAT018, GC.RA.I -- Gross Capital/Risk Assets (alternative definition)
Field 20, ratio 15: NWRAT019, TA -- Total Assets (thousands of dollars)
Field 21, ratio 16: NWRAT020a, PROV (historic)/CI2.LN (current) -- Historic data: Loss Provision (thousands of dollars). Current data: (Commercial and Industrial Loans + Loans to REITs and Mortgage Bankers + Construction Loans + Commercial Real Estate Loans)/Total Loans
Field 22, ratio 17: NWRAT021, IN.EA -- Net Interest Margin/Earning Assets
Field 23, ratio 18: NWRAT022, INT.EA -- Net Interest Margin (taxable equivalent)/Earning Assets
Field 24, ratio 19: NWRAT023, LL2.TS -- LL.TS, with gross Fed. Funds purchased instead of net Fed. Funds
Field 25, ratio 20: NWRAT024, LA2.TS -- LA.TS, with gross Fed. Funds purchased instead of net Fed. Funds
Field 26, ratio 21: NWRAT025, NCO.NI -- Charge-offs (net of recoveries)/(Net Operating Income + Loss Provision)
Field 27, ratio 22: NWRAT026, NIE.NO -- Non-interest expense/(Operating Revenue - Interest Expense)
Field 28, ratio 23: NWRAT027, ECM -- Equity Capital Measure
Field 29, ratio 24: NWRAT028, DIV.ADJ -- Common Stock Dividends/(Net Income - Preferred Stock Dividends)
Field 30, ratio 25: NWRAT029a, OP (historic)/ISF.TS (current) -- Historic data: Operating Revenue (thousands of dollars). Current data: Interest Sensitive Funds/Total Sources of Funds

*Retrieval key.
aThis field has a different meaning for historical data (dates 691231 to 751231) than for current data (760331 to present).
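The definitions above translate directly into arithmetic on report fields. A minimal sketch computing three of them (ratios 1, 5 and 7); the report dictionary and its field names are invented for illustration, and only the formulas follow the appendix.

    # Compute LL.TS, NI.TA and GCO.NI from a single bank's report fields.
    report = {
        "loans": 61_000, "leases": 1_500, "total_sources_of_funds": 100_000,
        "net_income": 950, "total_assets": 110_000, "cash_items_in_process": 4_000,
        "gross_charge_offs": 420, "net_operating_income": 1_600, "loss_provision": 300,
    }  # thousands of dollars

    ll_ts = (report["loans"] + report["leases"]) / report["total_sources_of_funds"]
    ni_ta = report["net_income"] / (report["total_assets"] - report["cash_items_in_process"])
    gco_ni = report["gross_charge_offs"] / (report["net_operating_income"] + report["loss_provision"])
    print(f"LL.TS = {ll_ts:.3f}, NI.TA = {ni_ta:.4f}, GCO.NI = {gco_ni:.3f}")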


References

Akaike, H., Use of an information theoretic quantity for statistical model identification, Proceedings, Fifth Hawaii Conference on System Sciences, 1972, pp. 249-250.
Altman, E.I., Corporate bankruptcy, potential stockholder returns, and share valuation, Journal of Finance, December 1969.
Altman, E.I., Predicting performance in the savings and loan association industry, Salomon Brothers Center for the Study of Financial Institutions Working Paper, 1974.
Altman, E.I., A financial early warning system for over-the-counter broker dealers, Journal of Finance, September 1976.
Brelsford, W.M. and R.H. Jones, Estimating probabilities, Monthly Weather Review, May 1967, pp. 570-576.
Chesser, D.L., Predicting loan noncompliance, The Journal of Commercial Bank Lending, August 1974, pp. 28-38.
Cox, D.R., Some procedures connected with the logistic qualitative response curve, in: F.N. David, ed., Research Papers in Statistics, John Wiley & Sons, London, 1966.
Cox, D.R., Analysis of binary data, Methuen & Co., Ltd., London, 1970.
Finney, D.J., Probit analysis, The University Press, Cambridge, England, 1952.
Jones, R.H., Probability estimation using a multinomial logistic function, Journal of Statistical Computation and Simulation, 1975, Vol. 3, pp. 315-329.
Joy, O.M. and J.D. Tollefson, On the financial applications of discriminant analysis, Journal of Financial and Quantitative Analysis, December 1975, pp. 723-738.
Korobow, L. and D.P. Stuhr, Toward early warning of changes in banks' financial condition: A progress report, Federal Reserve Bank of New York Monthly Review, 1975, pp. 157-165.
Korobow, L., D.P. Stuhr and D. Martin, A probabilistic approach to early warning of changes in bank financial condition, Federal Reserve Bank of New York Monthly Review, July 1976, pp. 187-194.
McFadden, D., Conditional logit analysis of qualitative choice behavior, in: P. Zarembka, ed., Frontiers in Econometrics, Academic Press, New York, 1974.
McFadden, D., The revealed preferences of a government bureaucracy: Empirical evidence, Bell Journal of Economics and Management Science, Spring 1976, pp. 55-72.
Meyer, P.A. and H.W. Pifer, Prediction of bank failures, Journal of Finance, September 1970, pp. 853-868.
Nelson, R.W., Optimal capital policy of the commercial banking firm in relation to expectations concerning loan losses, Federal Reserve Bank of New York working paper, 1977.
Press, S.J., Applied multivariate analysis, Holt, Rinehart and Winston, Inc., New York, 1973.
Santomero, A.M. and J.D. Vinso, Estimating the probability of failure for commercial banks and the banking system, Journal of Banking and Finance, September 1977.
Secrist, H., National bank failures and non-failures, The Principia Press, Bloomington, Ind., 1938.
Sinkey, J.F., A multivariate statistical analysis of the characteristics of problem banks, Journal of Finance, March 1975, pp. 21-35.
Sinkey, J.F., Franklin National Bank: A portfolio and performance analysis of our largest bank failure, FDIC Working Paper No. 75-10, Washington, 1975.
Stuhr, D.P. and R. Van Wicklen, Rating the financial condition of banks: A statistical approach to aid bank supervision, Federal Reserve Bank of New York Monthly Review, September 1974, pp. 233-238.
Walker, S.H. and D.B. Duncan, Estimation of the probability of an event as a function of several independent variables, Biometrika 54 (1967), pp. 167-179.