The adjustment of credit ratings in advance of defaults

Journal of Banking & Finance 31 (2007) 751–767
www.elsevier.com/locate/jbf

André Güttler a,*, Mark Wahrenburg b,1

a European Business School, Finance Department, Schloss Reichartshausen, 65375 Oestrich-Winkel, Germany
b Goethe University Frankfurt, Finance Department, Mertonstr. 17, Postfach 111932, 60054 Frankfurt, Germany

Received 23 May 2005; accepted 24 May 2006; available online 24 July 2006

Abstract

This paper assesses biases in credit ratings and lead–lag relationships for near-to-default issuers with multiple ratings by Moody's and S&P. Based on defaults from 1997 to 2004, we find evidence that Moody's seems to adjust its ratings to increasing default risk in a timelier manner than S&P. Second, credit ratings by the two US-based agencies are not subject to any home preference. Third, given a downgrade (upgrade) by the first rating agency, subsequent downgrades (upgrades) by the second rating agency are of greater magnitude in the short term. Fourth, harsher rating changes by one agency are followed by harsher rating changes in the same direction by the second agency. Fifth, rating changes by the second rating agency are significantly more likely after downgrades than after upgrades by the first rating agency. Additionally, we find evidence for serial correlation in rating changes up to 90 days subsequent to the rating change of interest after controlling for rating changes by the second rating agency.
© 2006 Elsevier B.V. All rights reserved.

JEL classification: G15; G23; G33

Keywords: Credit rating agencies; Credit rating biases; Leader–follower analysis

* Corresponding author. Tel.: +49 6723 69285; fax: +49 6723 69208. E-mail addresses: [email protected] (A. Güttler), wahrenburg@finance.uni-frankfurt.de (M. Wahrenburg).
1 Tel.: +49 69 79822142.

doi:10.1016/j.jbankfin.2006.05.014


1. Introduction

External credit ratings play an important role in international capital markets. Since John Moody started in 1909 with a small rating book, the market has developed into a multi-billion dollar industry. Under Basel II, credit rating agencies will play an even more central role than they have so far. However, their failure to predict the crises at firms such as Enron, WorldCom or Parmalat has cast a cloud over the shining future of credit rating agencies in recent years. The economic relevance of defaults is tremendous: Moody's reported defaulted bonds with a face value of around USD 390 billion between 1997 and 2004 (Moody's, 2004). Therefore, investors and regulators are growing concerned about the quality of external credit ratings, given the rating agencies' business model, i.e. the fact that they are paid by the issuers, and given the oligopolistic structure of the market for external credit ratings.

For the purpose of investigating and comparing long-run ratings performance, historical default rates by rating category are the first thing to check. If historical default rates are higher for riskier rating grades, then a rating agency has done a good job in the period under investigation. In addition, validation measures, such as the area under the receiver operating characteristic (ROC) curve, wrap these historical default rates into a single number for the forecasting quality of default risk (Stein, 2005); see the sketch below. Moody's and S&P, as the leaders in the market for corporate ratings, provide these historical default rates and ROC statistics on an annual basis. However, given the vast economic relevance of bond defaults, it seems puzzling that almost nothing is known about rating adjustments in connection with near-to-default issuers.

The question whether there are biases in credit ratings of near-to-default issuers should be of particular interest for investors and regulators. For example, investors in non-US markets should know if there is any home preference on the part of US-based rating agencies, i.e. if non-US issuers are rated lower by the dominant US-based rating agencies Moody's and S&P. The second question – which agency is the rating leader for near-to-default issuers – matters because producing credit ratings is very expensive, given the vast importance of soft rating criteria,2 which must be collected through intensive contact with the management. It would therefore seem rational for credit rating agencies to treat rating changes by another important rating agency as a trigger prompting them to check and review their own ratings in a timely manner, especially in the case of risky issuers. Following a competitor's rating changes is less costly than doing one's own research, especially since credit rating agencies are not forced to do the latter, given that the market for credit ratings is neither regulated nor very competitive. Simply following the other agency's rating, which can be interpreted as a special form of herding, results in a loss of information (Kuhner, 2001).

We address these two research questions by analyzing a sample of 407 issuers – 172 of them with accounting data – with multiple ratings by Moody's and S&P, which defaulted in the years 1997–2004. We also add another sample of non-defaulted issuers for the years 2000–2004 (these years account for around 90% of all defaults) to control for potential biases due to our mapping of the rating agencies' different rating approaches.
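As a minimal illustration of the ROC-based validation mentioned above, the following sketch treats mapped numerical ratings (introduced in Section 2) as default-risk scores; the data are hypothetical and scikit-learn is assumed to be available:

```python
# A minimal sketch of rating validation via the ROC area, assuming
# scikit-learn. Ratings are mapped to numbers (1 = Aaa/AAA, ..., 21 = C/D),
# so higher values indicate higher assessed risk; defaults are labeled 1.
from sklearn.metrics import roc_auc_score

numeric_ratings = [3, 15, 18, 7, 20, 12, 16]  # hypothetical issuer ratings
defaulted       = [0, 1,  1,  0, 1,  0,  1]   # hypothetical default flags

# An area of 1.0 would mean every defaulter was rated riskier than every
# survivor; 0.5 would mean the ratings carry no rank information.
print(roc_auc_score(defaulted, numeric_ratings))
```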

2 Soft rating criteria are, for example, the management quality or the issuer's product policies in comparison to those of its competitors.


Though our contribution is limited by its data, namely the relatively short observation period and the small sub-sample with accounting data, several conclusions emerge: (i) Moody's seems to adjust its ratings to increasing default risk in a timelier manner than S&P, even after accounting for the differences in mean ratings due to their dissimilar rating approaches. (ii) Credit ratings by the two US-based agencies are not subject to any home preference. We even discover that Moody's and S&P assigned more conservative ratings to US issuers than to non-US issuers. This might be due to their better forecasting ability in the home market, to the high quality of accounting information in the US, or to different regional bankruptcy legislation. (iii) Given a downgrade (upgrade) by the first rating agency, subsequent downgrades (upgrades) by the second rating agency are of greater magnitude in the short term. This might be due to information flow between the two rating agencies or simply an independent reaction to new information. (iv) Given different classes of rating adjustments in the first agency's rating, i.e. upgrades, slight downgrades and harsh downgrades, the following adjustment by the second rating agency varies accordingly: harsher rating changes by one agency are followed by harsher rating changes in the same direction by the second agency. (v) Rating changes by the second rating agency are significantly more likely after downgrades than after upgrades by the first rating agency. (vi) We find evidence for serial correlation in rating changes up to 90 days subsequent to the rating change of interest, after controlling for rating changes by the second rating agency.

The paper is related to several strands of literature. One strand analyzes biases in external credit ratings, although rating agencies claim that their ratings are independent of issuer characteristics. Shin and Moore (2003) have analyzed the home preference hypothesis. They find that ratings assigned by Moody's and S&P to Japanese firms are systematically lower than those assigned by the Japanese rating agencies R&I and JCR. In addition, Nickell et al. (2000) observe that higher rated Japanese firms are more likely to be downgraded by credit rating agencies with headquarters in the US, and that Japanese firms with low ratings are less likely than US firms to be upgraded by those agencies. These results might be explained by the conservatism of US credit rating agencies in less familiar markets. However, Ammer and Packer (2000) find no evidence of different default rates between US and foreign firms for the period 1983 to 1998 after controlling for time and rating effects. There might also be a size bias, yielding incentives for moral hazard (White, 2002), since credit rating agencies are partly paid in basis points of the debt volume. Credit rating agencies might have an incentive to rate bigger companies better in order to avoid losing the client because of a pessimistic rating. However, it is well known that credit rating agencies rely heavily on their reputation for not allowing such considerations to influence their assessments (Covitz and Harrison, 2003). Given the staleness in ratings (Amato and Furfine, 2004), there might also be a fallen angel bias, i.e. a tendency to adjust credit ratings too slowly for formerly investment grade rated issuers.

In the area of lead–lag analysis of different rating agencies or rating systems, Johnson (2003) examines rating changes around the investment grade boundary. He finds that the credit rating agency Egan-Jones3 leads S&P in downgrading issuers from BBB to non-investment grade ratings. Delianedis and Geske (1999), among others, provide evidence that credit rating migrations can be predicted by market-based risk measures months in advance.

3 Egan-Jones Ratings is a small credit rating agency, which has issued ratings since 1995.


Another strand of the literature shows that rating changes are serially correlated. Altman and Kao (1992) detect positive serial autocorrelation in ratings by S&P when the initial rating change was a downgrade. Lando and Skødeberg (2002) find positive serial correlations for downgrades in a sample of debtors rated by S&P, whereas Christensen et al. (2004) provide this kind of evidence for a sample of issuers rated by Moody's.

The literature is, as far as we know, silent on the analysis of multiple rated near-to-default debtors. Our analysis of this special issuer group allows us to focus on a part of the investment universe which is very important for the reputation of credit rating agencies and for the regulation of these entities. Since rating leadership is an indicator of the forecasting quality of a rating agency, by analyzing ex post the rating adjustment of issuers that subsequently defaulted we enlarge upon existing comparisons of the performance of external rating agencies (e.g., Güttler, 2005). Besides, our research expands the existing literature insofar as we analyze lead–lag relationships in the rating of near-to-default issuers directly through a Granger-like approach instead of relying on (indirect) event study approaches (e.g., Norden and Weber, 2004). Further, by controlling for the impact of rating changes by a second important rating agency, we add to the literature further evidence of serial correlation of rating changes, which prior to this study was restricted to rating data for just one rating agency (e.g., Christensen et al., 2004).

The study is organized as follows. Section 2 gives a brief overview of the dataset. The next section presents the empirical results. Section 4 contains concluding remarks.

2. Dataset

Since we want to examine changes in the rating of companies that subsequently defaulted, a database of defaulted firms and their credit rating history before the default event is required. For this purpose, we use the default data contained in annual default reports by Moody's and S&P on publicly traded companies for the years 1997–2004. We define the default event as the earliest date reported by the two agencies, on a daily basis. Beginning in 1990, the rating history of long-term, senior unsecured ratings, i.e. issuer ratings, and the history of so-called Watchlist entries was obtained from Bloomberg. We observe 407 companies with default announcements and long-term, senior unsecured ratings by both rating agencies. We only include multiple rated issuers in order to be able to conduct a lead–lag analysis. For these 407 issuers the dataset contains 1495 issuer ratings by Moody's and 1962 by S&P.

For a reduced dataset of 172 defaulted issuers, we found accounting information in Datastream. We use four key accounting variables as indicators of creditworthiness: interest coverage (operating income plus interest expenses divided by interest expenses), operating income/sales, long-term debt/assets, and total debt/assets.4 To eliminate outliers we winsorize all variables at the 5th and 95th percentiles of their cross-sectional distributions, and missing values are set to the cross-sectional median (see the sketch below). Besides, we use Datastream as a source of macroeconomic variables (change in GDP and unemployment rate).

The annual default reports of the two rating agencies provide additional information. Missed interest payments are the main reason for default, followed by Chapter 11 filings. As in other international studies, US-based firms dominate our sample, accounting for around 80% of the total number of firms.

4 See Blume et al. (1998) for an example of the use of these ratios.
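A minimal sketch of this outlier treatment, assuming pandas; the column names are hypothetical:

```python
# Sketch of the outlier treatment described above, assuming pandas.
# The paper winsorizes each accounting variable at the 5th/95th
# percentiles of its cross-sectional distribution and sets missing
# values to the cross-sectional median.
import pandas as pd

def winsorize_and_fill(df: pd.DataFrame, cols) -> pd.DataFrame:
    out = df.copy()
    for col in cols:
        lo, hi = out[col].quantile(0.05), out[col].quantile(0.95)
        out[col] = out[col].clip(lower=lo, upper=hi)   # winsorize the tails
        out[col] = out[col].fillna(out[col].median())  # median-fill gaps
    return out

ratios = ["interest_coverage", "op_income_sales",
          "lt_debt_assets", "total_debt_assets"]   # hypothetical names
# firms = winsorize_and_fill(firms, ratios)
```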


Table 1
Distribution of defaults and the amount of outstanding debt over time

                             1997    1998    1999    2000    2001     2002    2003    2004
Panel I: Full sample
Number of defaults              4      10      34      83     117       91      45      23
Median outstanding debt    349.38  369.80  159.75  158.50  225.00   300.00  280.00  250.00
Mean outstanding debt      597.61  389.97  280.24  315.40  687.29  1054.66  487.62  382.45

Panel II: Reduced sample
Number of defaults              0       5      12      31      51       42      20      11
Median outstanding debt      0.00  585.00  156.10  287.50  227.33   380.00  375.00  330.00
Mean outstanding debt        0.00  544.22  355.85  471.81  786.71   919.47  708.90  532.54

This table shows the distribution of defaults which occurred in the years 1997–2004 involving issuers rated by Moody's and S&P, for the full sample and for the reduced sample with accounting data. We define the date of the default event as the earliest date reported by either of the two agencies. Debt amounts are reported values in millions of US dollars. If the two agencies report the same default date but differing debt amounts, we use the mean of the two amounts; if they report different default dates, we use the debt amount as of the earlier date.

This is due to the US origin of these two rating agencies, which did not begin to expand their activities to other regions until the 1980s. Aside from widely known companies like Enron and WorldCom, defaults by smaller issuers dominate our dataset. The telecommunications sector leads by a long way, reflecting the bursting of the asset bubble and the unwillingness of investors to support these (mostly) highly leveraged companies any longer. There are 16 (18) so-called fallen angels in our reduced sample for Moody's (S&P). These are companies which formerly had an investment grade rating (Baa3/BBB− or better) but were downgraded to non-investment grade (Ba1/BB+ or worse) during our observation period, which begins in 1990.5

5 Obviously, we are not able to identify as fallen angels those issuers whose rating history goes back further than 1990. Fallen angel status should therefore be interpreted as being applicable in the medium term only.

The eight-year period for which we compile default data, 1997–2004, includes years in which the economy was healthy and others in which economic conditions were unfavorable. As we can see from Table 1, whereas only 4 defaults of multiple rated issuers are observed in 1997 for the full sample, this number peaks in 2001 at 117 and declines to 23 in 2004. The mean (median) amount of outstanding debt peaks one year later, in 2002, with an average amount of USD 1.05 (0.30) billion.

To compare the timeliness of the rating adjustments by Moody's and S&P it is necessary to construct a master scale of credit ratings. We map the two rating scales onto a master scale with 21 notches by assigning numerical values to ratings (Aaa/AAA = 1, Aa1/AA+ = 2, ..., C/D = 21). This approach is common in the relevant literature; e.g., Cantor and Packer (1997) use 17 rating classes. Since, in contrast to these studies, our dataset is dominated by low ratings, we break the low segment down further, making a total of 21 classes. For the dependent variables of the ordered probit analysis in Section 3.1 we merge these rating classes into seven classes (Aaa to Aa3/AAA to AA− = 1; Ca to C/CC to D = 7); otherwise, with 21 rating classes, the number of observations in each rating class would be too low. A sketch of this mapping follows below.
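A minimal sketch of the master scale and the seven-class merge; the paper states only the endpoints of each class, so the intermediate notch assignments below are an assumption based on the standard agency scales:

```python
# Sketch of the 21-notch master scale (1 = Aaa/AAA, ..., 21 = C/D) and the
# merge into seven classes used in the ordered probit analysis. The grouping
# of intermediate notches into classes 2-6 is our assumption; the paper only
# names the endpoints (Aaa-Aa3 = 1, Ca-C = 7).
MOODYS = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3", "Baa1", "Baa2",
          "Baa3", "Ba1", "Ba2", "Ba3", "B1", "B2", "B3", "Caa1", "Caa2",
          "Caa3", "Ca", "C"]
NOTCH = {grade: i + 1 for i, grade in enumerate(MOODYS)}  # Aaa -> 1, C -> 21

def seven_class(notch: int) -> int:
    """Merge the 21 notches into 7 ordered classes."""
    if notch <= 4:                 # Aaa to Aa3 / AAA to AA-
        return 1
    if notch >= 20:                # Ca to C / CC to D
        return 7
    return (notch - 5) // 3 + 2    # three notches per intermediate class
```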


Mapping different rating scales is somewhat special in our case, since it is precisely at the lower end of the credit quality scale that comparisons between Moody's and S&P's ratings are particularly difficult. First, this is because the two rating agencies do not apply the same rating scale, even though their rating scales look quite similar at first sight. Furthermore, Moody's assesses both the probability of default (PD) and the loss given default (LGD), which together comprise the so-called 'expected loss' approach, whereas S&P only evaluates the PD (Estrella, 2000). As default risk increases, LGD considerations become much more relevant than for investment grade rated issuers. Thus, these two credit rating agencies may assign different ratings to a firm at the same point in time even if they share an identical view of the PD. However, this mapping procedure is not only used in academic practice (e.g., Cantor and Packer, 1997), but also by regulatory authorities; the SEC's important investment/non-investment grade boundary (BBB−/BB+ for S&P, Baa3/Ba1 for Moody's) is one example among others.

To check the matching patterns of the two rating scales, we also use a further dataset consisting of issuers that held a long-term rating by Moody's and S&P at the end of a given year and that were not mentioned in Moody's or S&P's default reports for the respective year. We only use multiple rated issuers, since it is not appropriate to compare portfolios of a different level of creditworthiness according to their mean rating. Doing so for all ratings and separately for junk ratings, we are able to check for rating tendencies on the part of the two rating agencies.

Besides rating changes, Bloomberg also delivers Watchlist entries. As part of the rating monitoring process, an issuer might be placed on this formal rating review. These entries signal to the market that a rating change in the near future is highly probable, but that the rating analysts need more time to assess the magnitude of the forthcoming rating change. As one example among several, Hamilton and Cantor (2004) find that the accuracy of default predictions is significantly better after the inclusion of Moody's Watchlist information. In this study, we make use of Watchlist information by adding 1 to the numerical rating on the master scale (n = 21) for a negative Watchlist entry and by subtracting 1 for a positive entry.

3. Empirical results

3.1. Biases of credit rating levels

We first examine whether the rating levels are biased according to the issuers' domicile, the debt amount at the time of default and the rating history. In this analysis, we control for key characteristics of creditworthiness and macroeconomic circumstances. To do so, we order the data of the reduced sample with accounting information according to the following periods before our defined default event: over 1440 days, between 1440 and 1081 days, between 1080 and 721 days, between 720 and 541 days, between 540 and 361 days, between 360 and 181 days, between 180 and 91 days, and between 90 and 31 days. A sketch of the Watchlist adjustment and this bucketing follows below.

Table 2 presents the results for the rating level before default. As expected, the closer the default date, the lower the mean ratings of both rating agencies are. Comparing the two rating agencies, we find that Moody's assigns lower ratings than S&P for all eight periods. The mean rating difference oscillates between 0.58 and 1.25 notches on the mapped, numerical rating scale from 1 (Aaa/AAA) to 21 (C/D) with adjustments for Watchlist additions.
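A minimal sketch of the Watchlist adjustment and the period bucketing, assuming pandas; clamping to the 1–21 scale and the column names are our assumptions:

```python
# Sketch of the Watchlist adjustment and the days-to-default bucketing,
# assuming pandas. Column names are hypothetical.
import pandas as pd

def watchlist_adjusted(notch, watch=None):
    """Add one notch for a negative Watchlist entry, subtract one for a
    positive entry. Clamping to the 1-21 scale is our assumption; the
    paper does not discuss boundary cases."""
    if watch == "negative":
        return min(notch + 1, 21)
    if watch == "positive":
        return max(notch - 1, 1)
    return notch

# The eight periods before the defined default event as right-closed bins:
# (30, 90] days maps to "90-31", (90, 180] to "180-91", and so on.
EDGES  = [30, 90, 180, 360, 540, 720, 1080, 1440, float("inf")]
LABELS = ["90-31", "180-91", "360-181", "540-361",
          "720-541", "1080-721", "1440-1081", ">1440"]

def bucket(days_to_default: pd.Series) -> pd.Series:
    return pd.cut(days_to_default, bins=EDGES, labels=LABELS)
```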


Table 2
Rating adjustment and risk perception before default for the reduced sample

Ratings, days to default      >1440    1440–1081  1080–721  720–541  540–361  360–181  180–91   90–31

Panel I: Moody's
Mean                         13.4630    13.8116   14.7182  15.0224  15.3933  16.1341  16.9286  17.6941
10%-Quantile                       9         10        11       11     12.9       14       15       15
90%-Quantile                      16         17        18       18       18       19       20       20
Freq. of investment grade     22.22%     17.39%     9.09%    7.46%    6.00%    4.27%    2.38%    1.18%
Observations                      54         69       110      134      150      164      168      170

Panel II: S&P
Mean                         12.8876    13.1429   13.5342  13.7875  14.1796  14.8824  15.8198  16.8779
10%-Quantile                       9        9.8        10       11       12       12       13       14
90%-Quantile                      15         16        16       16       17       17       19       20
Freq. of investment grade     17.98%     14.29%    12.33%    8.75%    6.59%    5.29%    4.65%    2.33%
Observations                      89        119       146      160      167      170      172      172

Panel III: Accounting data
Interest coverage             2.6606     2.4596    2.1208   1.8392   1.6743   1.3964   1.3371   1.2426
Operating income/sales        0.0807     0.0797    0.0593   0.0412   0.0400   0.0270   0.0257   0.0209
Long-term debt/assets         0.4429     0.4286    0.5034   0.5360   0.5071   0.5408   0.5430   0.5240
Total debt/assets             0.3561     0.4113    0.4202   0.4293   0.4671   0.4997   0.5309   0.5261

For the analysis of rating adjustments before default, we arrange the rating history of all defaulted issuers according to eight time periods before the defined default date. We use mapped, numerical ratings from 1 (Aaa/AAA) to 21 (C/D) with adjustments for Watchlist additions. The frequency of investment grade rated issuers corresponds to the number of issuers with a rating of 10 (i.e. Baa3/BBB−) or better in the respective period divided by the total number of observations in that period. Panel III presents four key accounting variables of creditworthiness for all 172 companies.

This result alone would yield no insight, since a very 'conservative' rating system that always assigned the lowest rating would always win this type of comparison. To get a perspective on the question of false positives, i.e. very low ratings for issuers that did not default, we also check the rating distribution for those issuers. Table 3 presents results for non-defaulting issuers that held a long-term rating by Moody's and S&P at the end of a given year and were not mentioned in Moody's or S&P's default reports for the respective year. For these non-defaults, we get almost comparable mean ratings for the overall sample with all ratings (cf. Panel I). The mean difference over the years 2000–2004 comes to only 0.11 notches, which is not significantly different from zero using a two-sided t-test. For the sub-group of junk ratings, the mean difference over these five years is 0.34 notches, which is significantly different from zero at the 1% significance level. Hence, our numerical mapping is not completely appropriate for junk ratings. The latter result, that Moody's assigns lower ratings on average under our kind of numerical mapping, is also in line with research into split ratings (e.g., Cantor and Packer, 1997). However, the differences for near-to-default issuers are far greater than one would expect after assessing our results for non-default issuers. Even after subtracting 0.34 notches for the differences in our matching procedure, we still observe substantial differences in mean ratings in favor of Moody's. Therefore, Moody's seems to have signaled increasing default risk in a timelier manner than S&P in our observation period. This result is in line with Güttler (2005), who finds – based on a sample of 11,428 issuer ratings and 350 defaults in several datasets from 1999 to 2003 – that Moody's rating accuracy seems to be higher as well.
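A minimal sketch of such a mean comparison, assuming SciPy; whether the paper's t-test is paired is not stated, so the paired variant for multiple rated issuers is an assumption, and the data below are simulated:

```python
# Sketch of a two-sided test on mean rating differences, assuming SciPy.
# The arrays stand in for the mapped numerical ratings of the same issuers
# by each agency; values here are simulated for illustration only.
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)
moodys = rng.normal(9.0, 3.0, 500)               # hypothetical ratings
sp     = moodys - 0.11 + rng.normal(0, 1, 500)   # shifted counterpart

# Paired version (ttest_rel); stats.ttest_ind would be the unpaired variant.
t_stat, p_value = stats.ttest_rel(moodys, sp)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```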


Table 3
Rating distribution of non-defaulted issuers

                                  2000      2001      2002      2003      2004

Panel Ia: All ratings, Moody's
Mean                            8.9653    9.2137    9.3764    9.7716   10.3809
10%-Quantile                         3         3         3         4         4
90%-Quantile                        17        17        17        17        17
Number of observations             808      1165      1740      2802      4707

Panel Ib: All ratings, S&P
Mean                            8.9233    9.1966    9.3741    9.6909   10.1884
10%-Quantile                         4         4         4         4         5
90%-Quantile                        16        16        16        16        15
Number of observations             808      1165      1740      2802      4707

Panel IIa: Junk ratings, Moody's
Mean                           15.4079   15.3537   15.2451   15.0762   14.8197
10%-Quantile                        12        12        12        12        12
90%-Quantile                        20        20        20        19        18
Number of observations             277       410       616      1089      2158

Panel IIb: Junk ratings, S&P
Mean                           15.1504   15.2677   15.0853   14.7585   14.3511
10%-Quantile                        12        12        12        12        11
90%-Quantile                        21        21        21        21        18
Number of observations             266       396       598      1064      2099

The table presents rating distributions of issuers that held a long-term rating by Moody's and S&P at the end of a given year and were not mentioned in Moody's or S&P's default reports for the respective year. In Panel II, junk ratings are ratings worse than 10 (which equals Baa3/BBB−).

Regarding the potential effects of the biases in credit ratings before default described above, we perform an ordered probit analysis of determinants X_k of the credit rating R of debtor i for the eight different time spans t before default, for Moody's and S&P:6

$$ R_{i,t} = \alpha_t + \sum_{k=1}^{11} \lambda_k X_{k,i,t} + \varepsilon_{i,t}, \qquad (1) $$

where X_{1,i,t} equals 1 if company i in period t has its headquarters in the US, and zero if not; X_{2,i,t} is the outstanding debt amount at the time of default of the corresponding company i in period t (measured as the natural log of the debt amount in billions of US dollars); X_{3,i,t} equals 1 if company i in period t is a fallen angel, and zero if not; X_{4,i,t} equals the initial rating (or the rating at the beginning of our observation period); X_{5,i,t} equals the difference from the credit rating one period before (according to the master rating scale with 21 notches); and ε_{i,t} is the random disturbance of issuer i in period t. The other variables are the accounting ratios of creditworthiness (interest coverage, operating income/sales, long-term debt/assets, total debt/assets) and the macroeconomic variables (change in GDP and unemployment rate).

6 Ordered probit models are more appropriate than linear models for analyzing the qualitative nature of ratings. However, OLS analysis provides qualitatively similar results.
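A minimal sketch of Eq. (1), assuming statsmodels' OrderedModel (available from statsmodels 0.12); variable names are hypothetical, and the robust quasi-maximum likelihood standard errors used in the paper are omitted for brevity:

```python
# Sketch of one of the eight per-period ordered probit regressions of
# Eq. (1), assuming statsmodels >= 0.12. Column names are hypothetical.
from statsmodels.miscmodels.ordinal_model import OrderedModel

REGRESSORS = ["us_dummy", "log_debt", "fallen_angel", "initial_rating",
              "d_rating_last_period", "interest_coverage",
              "op_income_sales", "lt_debt_assets", "total_debt_assets",
              "d_gdp", "unemployment"]

def fit_rating_level(period_df):
    """Dependent variable: the rating level merged into seven ordered
    classes, for one of the eight pre-default periods."""
    model = OrderedModel(period_df["rating_class7"],
                         period_df[REGRESSORS], distr="probit")
    return model.fit(method="bfgs", disp=False)

# res = fit_rating_level(period_df)  # one call per pre-default period
# print(res.summary())
```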

Table 4
Regression results of determinants of the rating level

Panel I: Moody's
Rating, days to default    >1440      1440–1081  1080–721   720–541    540–361    360–181    180–91     90–31
US                         2.9174**   0.9546**   1.0062***  0.9912***  0.7910***  0.7248***  0.5564***  0.2767
Debt amount                0.2503     0.1069     0.2119***  0.2604***  0.1882***  0.0733     0.0190     0.0063
Fallen angel               3.5448***  2.2216***  1.6334***  1.5257***  1.2518**   1.4776***  1.4750***  0.5587
Initial rating             1.4867***  0.5616***  0.6548***  0.6434***  0.6256***  0.4919***  0.3191**   0.2059
Δ Rating to last period    –          0.3693***  0.2797**   0.3373*    0.3853***  0.4524***  0.4524***  0.2087***
Interest coverage          0.4303**   0.1866**   0.1461**   0.2871***  0.2136***  0.1239**   0.1051*    0.1681**
Operating income/sales     0.3591     0.2850     0.1981     0.0664     0.2036**   0.0507     0.0232     0.0140
Long-term debt/assets      1.9483*    0.3107     0.1093     0.0775     0.2984     0.0033     0.0763     0.4835
Total debt/assets          4.6120*    0.2848     1.5313**   1.8372***  1.1260**   1.0258**   1.0479**   1.6621***
ΔGDP                       3.484      1.926      10.022*    16.751***  14.305***  14.046**   18.296***  11.653*
Unemployment rate          1.1596     1.7843     11.3785    11.1239*   4.5943     3.0557     2.6938     3.9761
Observations               54         69         110        134        150        164        168        170
Pseudo R2                  0.4022     0.3525     0.3579     0.3566     0.2844     0.2534     0.1922     0.1192

Panel II: S&P
Rating, days to default    >1440      1440–1081  1080–721   720–541    540–361    360–181    180–91     90–31
US                         0.9921     0.4920     0.5082     0.5425**   0.5524**   0.5361**   0.2980     0.0785
Debt amount                0.2504     0.5152**   0.1912**   0.1506*    0.1555*    0.1568**   0.1774**   0.0219
Fallen angel               4.3390***  4.8621***  1.6961***  1.2499***  1.0723***  1.3878***  1.1551**   0.5138
Initial rating             4.5084***  4.2139***  1.2834***  1.1689***  1.0160***  0.6491***  0.4226***  0.3349**
Δ Rating to last period    –          1.5212***  0.3897     0.4696*    0.5050***  0.5905***  0.5442***  0.2407***
Interest coverage          0.5778***  0.2964**   0.2965***  0.2862***  0.1816***  0.0491     0.0140     0.1109
Operating income/sales     1.5761*    2.1392**   0.1979     0.2773     0.1126     0.1024     0.0953     0.0794
Long-term debt/assets      1.3068     0.6455     0.0816     0.0218     0.1787     0.3795     0.4197     0.7471**
Total debt/assets          2.8142     0.3653     0.4844     0.4716     0.7912     0.7244     1.1318**   0.9500**
ΔGDP                       21.231     19.726     23.831***  14.827***  15.074**   10.000     11.032     17.215**
Unemployment rate          13.1158    25.6469*   13.0460    7.3535     5.0606     0.3427     4.8169     5.4986
Observations               89         119        146        160        167        170        172        172
Pseudo R2                  0.6042     0.6254     0.4352     0.4352     0.3165     0.2685     0.2540     0.1382

The table reports the results of eight ordered probit regressions for each rating agency. Panel I (II) shows results with the rating level assigned by Moody's (S&P) as the dependent variable; the dependent variables are the rating levels R_i for the eight periods before the default event. Independent variables include a dummy that takes the value 1 if the company has its headquarters in the US, the size of the issuer (proxied by the natural log of the debt amount at the time of default), and a second dummy that takes the value 1 if the company is classified as a fallen angel. An issuer is defined as a 'fallen angel' if it had an investment grade rating at some time during the observation period 1990 to 2004 but was subsequently downgraded to non-investment grade before the end of 2004. We apply robust quasi-maximum likelihood standard errors. Two-sided significance levels are given as ***, **, and *, representing 1%, 5%, and 10% respectively.


The dependent variable R and the fourth independent variable X_{4,i,t} are lumped together in seven rating classes from 1 (Aaa to Aa3/AAA to AA−) to 7 (Ca to C/CC to D) to obtain enough observations in each rating class for an ordered probit model. In Eq. (1), t signifies the eight periods before default. We therefore conduct eight regressions, i.e. one for each period, with the rating level of Moody's as the dependent variable (Panel I), and eight additional regressions for S&P (Panel II).

Table 4 gives the results of the ordered probit model. In contrast to Nickell et al. (2000) and Shin and Moore (2003), and more in concurrence with Ammer and Packer (2000), we find that credit ratings by the two US-based agencies are not subject to any home preference. We even discover that Moody's (S&P) assigned US issuers lower ratings than non-US issuers in seven (three) out of the eight periods. This might be due to their better forecasting ability in their home market. Among others, Coval and Moskowitz (2001) provide evidence for this line of argumentation: they show that fund managers can earn a substantial abnormal return on local investments. However, this result could also be due to the better quality of accounting information in this developed market. Liu and Ferri (2001) show that the closer relationship between firm and sovereign ratings in developing markets can partly be explained by the lower quality of the information disclosed by the rated firms. Results might also be affected by different regional bankruptcy legislation. In the US, it might be rational for a given firm to file for Chapter 11, which is a default by the rating agencies' definition, in order to reorganize its business, whereas Keiretsu7 partners might rescue a comparable company in Japan.

Besides, in three (six) periods, Moody's (S&P) ratings are higher for larger issuers. This might be evidence of a size preference, since rating agencies are (mainly) paid by the issuers in proportion to the nominal debt amount. However, it could also arise because we cannot include all relevant control variables, such as soft rating criteria, e.g., management quality or accounting quality. Another possible explanation might be that bigger firms need a bigger shock to drop into default. Since bigger shocks should be less probable, companies that are comparable in most respects apart from size are assigned different ratings; i.e. bigger firms get higher ratings.

Fallen angels are assigned higher ratings until the last period by both rating agencies. This presumably stems from the slowness of the agencies' rating adjustment. The other two variables of rating history, the initial rating and the rating change compared to the last period, are also significantly different from zero in almost every period. The control variables further explain large parts of the cross-sectional variation. Interest coverage seems to play an important role for both rating agencies. For Moody's, total debt over assets is also important. Among the macroeconomic variables, only the coefficient for the change in GDP is significantly different from zero.8

7 A Keiretsu is a set of companies with interlocking business relationships and shareholdings, a common organizational form in Japan.
8 The number of observations varies over the eight periods, and it is not clear whether this variation influences the results. We test this by analyzing a fixed sample of 110 (146) issuers rated by Moody's (S&P) throughout the whole period, from at least 721 days before default without interruption until the default event. Results remain qualitatively unchanged.


3.2. Lead–lag analysis of credit rating changes

Next, we conduct a lead–lag analysis using the full sample without accounting information. In contrast to the preceding analysis of rating level biases, we now concentrate on rating changes. Since the first rating in our dataset serves as an initial rating, we need at least one additional rating change to calculate actual rating changes. For 350 issuers with multiple rating changes there are 978 rating changes by Moody's and 1296 rating changes by S&P.

Table 5 provides the distribution of the magnitude of rating changes, reported according to mapped, numerical ratings from 1 (Aaa/AAA) to 21 (C/D). For Moody's (S&P) we found 131 (200) upgrades, 422 (614) downgrades by one notch, 258 (259) by two notches, and 167 (223) by more than two notches. We also show whether, given different classes of adjustment in the first rating agency's rating, i.e. upgrades (≤-1), slight downgrades (1 and 2) and harsh downgrades (≥3), the subsequent adjustment by the second rating agency varies. Hence, for each rating agency, we compute the average rating change by the second rating agency for three periods, i.e. 1–90, 91–180, and 181–360 days after the rating change by the first rating agency; a sketch of this computation follows below.
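A minimal sketch of this event-window computation, assuming pandas; the column names are hypothetical:

```python
# Sketch of the event-window averages behind Table 5, assuming pandas.
# For each rating change by the "first" agency, collect the other agency's
# rating changes 1-90, 91-180, and 181-360 days later and average them.
import pandas as pd

WINDOWS = [(1, 90), (91, 180), (181, 360)]

def follow_up_changes(first: pd.DataFrame, second: pd.DataFrame):
    """first/second: columns ['issuer', 'date', 'change'] (hypothetical)."""
    rows = []
    for _, ev in first.iterrows():
        later = second[second["issuer"] == ev["issuer"]]
        gap = (later["date"] - ev["date"]).dt.days
        for lo, hi in WINDOWS:
            hits = later.loc[(gap >= lo) & (gap <= hi), "change"]
            if not hits.empty:
                rows.append({"window": f"{lo}-{hi}",
                             "first_change": ev["change"],
                             "mean_follow_up": hits.mean()})
    return pd.DataFrame(rows)
```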

Table 5
Influence of the magnitude of the first agency's rating changes on subsequent rating changes by the second rating agency

                                         Upgrades   Downgrades
                                         ≤-1        1         2         ≥3
Panel I: Rating changes by Moody's
Number of rating changes by Moody's      131        422       258       167
Rating change by S&P 1–90 days later
  Mean rating change                     -0.6667    1.8251    1.8281    2.4674
  Likelihood of rating change            25.19%     43.36%    49.61%    55.09%
  Number of rating changes               33         183       128       92
Rating change by S&P 91–180 days later
  Mean rating change                     0.9000     1.7542    1.6119    2.3846
  Likelihood of rating change            15.27%     27.96%    25.97%    15.57%
  Number of rating changes               20         118       67        26
Rating change by S&P 181–360 days later
  Mean rating change                     0.9231     1.8087    2.0196    1.5556
  Likelihood of rating change            29.77%     27.25%    19.77%    10.78%
  Number of rating changes               39         115       51        18

Panel II: Rating changes by S&P
Number of rating changes by S&P          200        614       259       223
Rating change by Moody's 1–90 days later
  Mean rating change                     0.4186     1.6444    1.9618    2.4694
  Likelihood of rating change            21.50%     38.93%    50.58%    43.95%
  Number of rating changes               43         239       131       98
Rating change by Moody's 91–180 days later
  Mean rating change                     0.2432     1.7885    1.8049    2.0000
  Likelihood of rating change            18.50%     25.41%    15.83%    11.21%
  Number of rating changes               37         156       41        25
Rating change by Moody's 181–360 days later
  Mean rating change                     0.6724     1.5924    1.7333    0.6364
  Likelihood of rating change            29.00%     25.57%    11.58%    4.93%
  Number of rating changes               58         157       30        11

The table presents the distribution of the magnitude of rating changes for Moody's (S&P) in Panel I (Panel II). The first row of each panel provides the overall number of rating changes for the respective rating agency. The following nine rows of each panel provide the average rating change, the likelihood of a rating change (the number of rating changes by the second agency in the respective subsequent period divided by the overall number of rating changes in the column), and the number of observations for the second rating agency, for three periods after the rating change of interest.


For example, the average rating change by S&P in the period 1–90 days after a one-notch downgrade by Moody's equals a downgrade of 1.83 notches. The harsher the first agency's downgrade is, the harsher the downgrade by the second rating agency seems to be in the first half-year after the first agency's downgrade. We also provide the average occurrence of a rating change by the second rating agency following the rating change by the first rating agency. This measure can be interpreted as the likelihood of a 'rating reaction' by the second rating agency. For the first 90 days following the first agency's rating change, we observe increasing likelihoods from upgrades to harsh downgrades.

We conduct pairwise Wilcoxon rank-sum tests for all possible combinations of differences in average rating changes by the second rating agency following rating changes by the first rating agency (cf. Table 6). For example, the upper left value of 2.4918 in Panel (Ia) is the difference between the mean rating change by S&P in the first 90 days after a one-notch downgrade by Moody's (1.8251) and the mean rating change by S&P in the first 90 days after an upgrade by Moody's (-0.6667). Hence, positive differences signify that upgrades are followed by more moderate rating adjustments (first column), and that harsher downgrades are followed by harsher downgrades (remaining columns). For the first two periods, i.e. 1–90 and 91–180 days after the rating change, rating differences are mostly significantly positive.
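A minimal sketch of one such pairwise test, assuming SciPy; the two samples below are hypothetical follow-up rating changes:

```python
# Sketch of a pairwise Wilcoxon rank-sum comparison, assuming SciPy.
# a and b would hold the second agency's rating changes observed after
# two different classes of first-agency changes (hypothetical values).
from scipy.stats import ranksums

a = [2.0, 1.0, 3.0, 2.0, 1.0]   # e.g., follow-ups after 1-notch downgrades
b = [-1.0, 0.0, -1.0, 1.0]      # e.g., follow-ups after upgrades

stat, p_value = ranksums(a, b)  # two-sided by default
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```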

Table 6
Magnitude of one agency's rating changes with respect to the magnitude of a preceding rating change by a second rating agency

                                                       Upgrades   Downgrades
                                                       ≤-1        1           2

Panel I: Rating changes by Moody's
Panel (Ia): Rating change by S&P 1–90 days later
  1                                                    2.4918***
  2                                                    2.4948***  0.0030
  ≥3                                                   3.1341***  0.6423***   0.6393***
Panel (Ib): Rating change by S&P 91–180 days later
  1                                                    0.8542***
  2                                                    0.7119**   -0.1423
  ≥3                                                   1.4846***  1.4846*     0.7727
Panel (Ic): Rating change by S&P 181–360 days later
  1                                                    0.8856***
  2                                                    1.0965***  0.2109
  ≥3                                                   0.6325     -0.2531     -0.4641

Panel II: Rating changes by S&P
Panel (IIa): Rating change by Moody's 1–90 days later
  1                                                    1.2257***
  2                                                    1.5432***  0.3175**
  ≥3                                                   2.0508***  0.8250***   0.5076***
Panel (IIb): Rating change by Moody's 91–180 days later
  1                                                    1.5452***
  2                                                    1.5616***  0.0164
  ≥3                                                   1.7568***  0.2115      0.1951
Panel (IIc): Rating change by Moody's 181–360 days later
  1                                                    0.9199***
  2                                                    1.0609***  0.1410
  ≥3                                                   -0.0361    -0.9560*    -1.0970*

The table presents results for all possible combinations of differences in average rating changes by a second rating agency following rating changes by the first rating agency. Differences in average rating changes are calculated from Table 5. The null hypothesis states that the differences are equal to zero. We apply the Wilcoxon rank-sum test for two sub-samples. Two-sided significance levels are given as ***, **, and *, representing 1%, 5%, and 10% respectively.


Table 7
Likelihood of one agency's rating changes with respect to the magnitude of a preceding rating change by a second rating agency

                                                       Upgrades   Downgrades
                                                       ≤-1        1           2

Panel I: Rating changes by Moody's
Rating change by S&P 1–90 days later
  1                                                    0.1817***
  2                                                    0.2442***  0.0625
  ≥3                                                   0.2990***  0.1172**    0.0548

Panel II: Rating changes by S&P
Rating change by Moody's 1–90 days later
  1                                                    0.1743***
  2                                                    0.2908***  0.1165***
  ≥3                                                   0.2245***  0.0502      -0.0663

The table presents results for all possible combinations of differences in the likelihood of a rating change by a second rating agency following rating changes by the first rating agency up to 90 days later. Differences in the likelihood of rating changes are calculated from Table 5. The null hypothesis states that the differences are equal to zero. We apply the Wilcoxon rank-sum test for two sub-samples. Two-sided significance levels are given as ***, **, and *, representing 1%, 5%, and 10% respectively.

The significant negative results in Panel (IIc), which appear counterintuitive, should be disregarded, since at least one sub-sample is smaller than 30; given this small sample size, the results do not seem to be robust.

To test the significance of the likelihood of a rating change by the second rating agency according to the different classes of rating changes by the first rating agency, we test differences in these likelihoods for the period up to 90 days after the respective rating adjustment by the first agency (cf. Table 7). Positive differences indicate that the likelihood of a rating change by the second rating agency increases from upgrades to harsh downgrades. We find significant differences in the likelihood between upgrades and downgrades: after downgrades, rating changes by the second rating agency are significantly more likely than after upgrades by the first rating agency. We do not assess the other two periods, 91–180 and 181–360 days, since we find no intuitive results for them.

To assess a potential lead–lag relationship simultaneously for both rating agencies, we employ a Granger-like regression approach. Choosing the appropriate econometric approach was somewhat tricky. On the one hand, we have a panel structure with a cross-section of 350 issuers and a time series for these issuers. However, the panel is very unbalanced: for many issuers only one rating change is available, i.e. no time series data. Since we analyze multiple rating changes only, our dataset would shrink considerably if we were to use a panel approach. Nevertheless, intuition and the results presented in Table 2 give sufficient reason to assume the existence of a strong time trend, since rating downgrades are sharper the closer the default event comes. Therefore, even though we prefer not to use a panel approach due to data restrictions, we have to control for the time trend by using period dummies for the distance to default.9

9 Nevertheless, we have checked with a reduced panel dataset whether fixed effects are observable in the cross-section and over the periods. We find significant period fixed effects only. By using period dummies in the pooled ordered probit model, we are able to control for the period effect without losing any data.


We define the following ordered probit model with Moody's as potential rating follower and S&P as potential rating leader (and vice versa with S&P as potential rating follower and Moody's as potential rating leader):

$$ \Delta R^{M}_{i,t} = \alpha + \sum_{j=1}^{3} \beta_{1j}\,\Delta R^{S}_{i,t-j} + \sum_{j=1}^{3} \beta_{2j}\,\Delta R^{M}_{i,t-j} + \sum_{w=1}^{7} c_w\, I_{w,i,t} + \varepsilon_i, \qquad (2) $$

where ΔR^M_{i,t} indicates a rating change by Moody's for debtor i at time t. As for the first regression model, the ordered probit model should be more appropriate for the qualitative nature of ratings. We employ four different classes of rating changes (≤-1, 1, 2, ≥3) to obtain enough observations in each class. ΔR^S_{i,t-j} specifies the change in the rating of debtor i by S&P for three predefined periods t - j, with j = 1 for 1–90 days, j = 2 for 91–180 days, and j = 3 for 181–360 days before the rating change for debtor i at time t. The variable ΔR^M_{i,t-j} incorporates the lagged rating changes by Moody's for debtor i for the same three periods t - j. To control for the distance to default, the rating changes are attributed to eight periods before the default event by applying seven dummy variables in regression model II. I_{w,i,t} are indicator variables that take the value 1 if the rating change by Moody's took place more than 1440 days (w = 7), between 1440 and 1081 days (w = 6), between 1080 and 721 days (w = 5), between 720 and 541 days (w = 4), between 540 and 361 days (w = 3), between 360 and 181 days (w = 2), or between 180 and 91 days (w = 1) before default, and zero otherwise. The last period before default, i.e. in this analysis up to 90 days before default, serves as the reference (a sketch of the regressor construction follows below).

Results for the lead–lag analysis are given in Table 8. We find clear evidence that, given a downgrade (upgrade) by the first rating agency, downgrades (upgrades) by the second rating agency are of greater magnitude in the following periods of 1–90 and 91–180 days. Hence, there might be information flow between the two rating agencies.10 Given this information flow, the potential reactions of a rating agency following a second agency's rating change could span the whole bandwidth, from merely increasing the priority of a review, to checking the issuer's creditworthiness, all the way to a purely mechanical reaction to the other agency's rating change. Besides, other explanations for our results might be plausible as well: (i) Rating adjustments might follow important information releases by the issuers or observations in the bond or stock market. The rating adjustments by the two agencies then follow this news but (mostly) do not take place at the same point in time. In general, the direction of Moody's and S&P's rating changes should be the same, i.e. they should both be upgrades or downgrades. (ii) Bad (good) news might be autocorrelated, forcing rating agencies to adjust their ratings several times without being influenced by the other agency's rating adjustments. To some extent, rating changes might amplify such autocorrelation, since the rating level influences the issuer's refinancing conditions.

In the second regression model, the rating changes by the same rating agency in the first 90 days subsequent to its own rating changes are smaller. The degree of serial correlation is (almost) the same for both rating agencies. This adds further robustness to the results of the existing literature in this field (e.g., Christensen et al., 2004), with the difference that we additionally include, and therefore control for, the influence of rating changes by a second important rating agency.

10 Please note that this does not imply that one rating agency is more likely to downgrade (upgrade) an issuer's rating, given a downgrade (an upgrade) by a second agency. Hence, our lead–lag analysis is all about the size of a rating change, not about its probability.
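A minimal sketch of the regressor construction for Eq. (2), assuming pandas; whether multiple rating changes within a look-back window are summed or treated otherwise is not stated in the paper, so the summation below is an assumption:

```python
# Sketch of the lagged regressors for Eq. (2), assuming pandas.
# For each rating change, the three look-back windows collect the other
# agency's (and the same agency's own) earlier rating changes; summing
# multiple changes within a window is our assumption.
import pandas as pd

LAGS = [(1, 90), (91, 180), (181, 360)]

def lagged_changes(event, history: pd.DataFrame) -> dict:
    """history: one agency's changes for the issuer ('date', 'change')."""
    out = {}
    for j, (lo, hi) in enumerate(LAGS, start=1):
        gap = (event["date"] - history["date"]).dt.days
        out[f"lag{j}"] = history.loc[(gap >= lo) & (gap <= hi), "change"].sum()
    return out

# The seven distance-to-default dummies could then be added with
# pd.get_dummies on the period label, dropping the reference period
# (up to 90 days before default).
```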


Table 8
Regression results of the lead–lag analysis

                                          Regression I              Regression II
                                          Coefficient  Std. error   Coefficient  Std. error

Panel I: Moody's as rating follower
Δ Rating by Moody's 1–90 days before      0.0698       0.0518       -0.1288**    0.0550
Δ Rating by Moody's 91–180 days before    0.0320       0.0630       0.0275       0.0624
Δ Rating by Moody's 181–360 days before   0.0202       0.0553       0.0148       0.0564
Δ Rating by S&P 1–90 days before          0.3397***    0.0323       0.2697***    0.0332
Δ Rating by S&P 91–180 days before        0.2635***    0.0561       0.2022***    0.0555
Δ Rating by S&P 181–360 days before       0.0698       0.0515       0.0287       0.0492
91–180 days to default                                              0.0467       0.1130
181–360 days to default                                             -0.2353**    0.1094
361–540 days to default                                             -0.5621***   0.1314
541–720 days to default                                             -0.7784***   0.1710
721–1080 days to default                                            -0.6809***   0.1434
1081–1440 days to default                                           -1.0476***   0.1750
>1440 days to default                                               -1.0246***   0.1575
Observations                              978                       978
Pseudo R2                                 0.0720                    0.1086

Panel II: S&P as rating follower
Δ Rating by S&P 1–90 days before          0.0007       0.0427       -0.1382***   0.0436
Δ Rating by S&P 91–180 days before        0.0489       0.0585       0.0590       0.0559
Δ Rating by S&P 181–360 days before       0.0835       0.0519       0.0446       0.0465
Δ Rating by Moody's 1–90 days before      0.3066***    0.0333       0.2009***    0.0359
Δ Rating by Moody's 91–180 days before    0.2044***    0.0467       0.1244**     0.0487
Δ Rating by Moody's 181–360 days before   0.0928*      0.0557       0.0601       0.0508
91–180 days to default                                              -0.4609***   0.1012
181–360 days to default                                             -0.8343***   0.1033
361–540 days to default                                             -0.9930***   0.1293
541–720 days to default                                             -1.2462***   0.1377
721–1080 days to default                                            -1.3812***   0.1307
1081–1440 days to default                                           -1.5029***   0.1646
>1440 days to default                                               -1.6870***   0.1444
Observations                              1296                      1296
Pseudo R2                                 0.0536                    0.1294

This table shows the results of a lead–lag analysis applying ordered probit analysis. The dependent variables are rating changes ΔR_{i,t} by Moody's (S&P) in Panel I (Panel II) for 350 issuers. We use the four classes of rating changes of Table 5. The independent variables of regression model I are lagged rating changes by Moody's and S&P for three time periods (1–90, 91–180, and 181–360 days) before the respective rating change. In regression model II, the rating changes are additionally attributed to eight periods before the default event, to control for the distance to default, by applying seven dummy variables. The last period before default, i.e. up to 90 days before default, serves as the reference. We apply robust quasi-maximum likelihood standard errors. Two-sided significance levels are given as ***, **, and *, representing 1%, 5%, and 10% respectively.

Adding the period dummies in the second regression model increases the Pseudo R2 sharply: for Moody's as rating follower it increases from 7.2% to 10.9%, and for S&P as rating follower it more than doubles, from 5.4% to 12.9%.11 Except for the period 91–180 days before default for Moody's, the period dummies are always significantly negative. Since the period of less than 91 days before default serves as our reference, rating changes – which are mostly downgrades – in the periods further away from the default event are less severe.

11 We achieve qualitatively similar results with an OLS approach. The adjusted R2 for the second regression model is around 25% for both rating agencies.


4. Concluding remarks

We have examined whether there are biases and lead–lag relationships in the credit ratings of near-to-default issuers, using a dataset of defaulted issuers with multiple ratings by Moody's and S&P covering the years 1997–2004 and a control sample of non-defaulted issuers. We find evidence that Moody's seems to adjust its ratings to increasing default risk in a timelier manner than S&P. Second, there seems to be no home bias on the part of the US-based rating agencies. Third, given a downgrade (upgrade) by the first rating agency, subsequent downgrades (upgrades) by the second rating agency are of greater magnitude in the short term. Fourth, more severe downgrades (or downgrades instead of upgrades) by one agency are followed by more severe downgrades (or downgrades instead of upgrades) by the second agency. Fifth, rating changes by the second rating agency are significantly more likely after downgrades than after upgrades by the first rating agency. Sixth, we find evidence for serial correlation in rating changes 1–90 days after the rating change of interest, after controlling for rating changes by the second rating agency.

One possible criticism of the study is that the results are limited by its data, namely the relatively short observation period. However, even with a longer observation period, the record number of defaults in the years 2000–2002 would dominate a much larger sample too, due to the low number of defaults in 'normal' years. Despite its relative shortness, our chosen observation period provides a unique opportunity to conduct a default-based analysis, given the large number of defaults in those years.

Another possible criticism is that some might challenge our assumption that it is beneficial to downgrade defaulting issuers as early as possible. We argue, first, that it is good to be the first rating agency to generate the headline if the company under review defaults in the near future. Of course, the company does not appreciate the downgrade, since its borrowing costs increase, thereby shrinking its financial scope. Besides, adjusting ratings in a timelier manner could increase the probability of a rating reversal in the case of a wrong rating change, which increases trading costs for governance-ruled fund managers (e.g., Löffler, 2004). However, trading costs for corporate bonds have fallen sharply since the mid-1990s, and the lower transaction costs are, the less important is the stability of credit ratings. Second, the reputation of rating agencies in the eyes of bond investors is based on their power to predict forthcoming defaults. The earlier a default is signaled through a junk rating, the better it is for the rating agency's reputation. One example of this was the late downgrade of Enron by Moody's and S&P; other rating agencies like Egan-Jones Ratings downgraded Enron to junk status much earlier and thereby increased their credibility in the market.

Acknowledgements

We thank Gunter Löffler, Lars Norden, Daniel Roesch, Roger Stein, two anonymous referees and the editor of the JBF, seminar participants in Frankfurt, Mannheim, and Ulm, and participants in the meetings of the Swiss Finance Association, C.R.E.D.I.T., and the Southern Finance Association for their helpful comments. All errors and opinions expressed in this paper are of course our own.


References

Altman, E.I., Kao, D.L., 1992. The implications of corporate bond ratings drift. Financial Analysts Journal 48, 64–67.
Amato, J.D., Furfine, C.H., 2004. Are credit ratings procyclical? Journal of Banking and Finance 28, 2641–2677.
Ammer, J., Packer, F., 2000. How consistent are credit ratings? A geographic and sectoral analysis of default risk. Board of Governors of the Federal Reserve System, International Finance Discussion Papers 668.
Blume, M.E., Lim, F., MacKinlay, A.C., 1998. The declining credit quality of US corporate debt: Myth or reality? Journal of Finance 53, 1389–1414.
Cantor, R., Packer, F., 1997. Differences of opinion and selection bias in the credit rating industry. Journal of Banking and Finance 21, 1395–1417.
Christensen, J.H.E., Hansen, E., Lando, D., 2004. Confidence sets for continuous-time rating transition probabilities. Journal of Banking and Finance 28, 2575–2602.
Coval, J.D., Moskowitz, T.J., 2001. The geography of investment: Informed trading and asset prices. Journal of Political Economy 109, 811–841.
Covitz, D.M., Harrison, P., 2003. Testing conflicts of interest at bond rating agencies with market anticipation: Evidence that reputation incentives dominate. Board of Governors of the Federal Reserve System, Finance and Economics Discussion Series 2003-68.
Delianedis, G., Geske, R., 1999. Credit risk and risk neutral default probabilities: Information about rating migrations and defaults. UCLA working paper.
Estrella, A., 2000. Credit ratings and complementary sources of credit quality information. BIS working paper.
Güttler, A., 2005. Using a bootstrap approach to rate the raters. Financial Markets and Portfolio Management 19, 277–295.
Hamilton, D.T., Cantor, R., 2004. Rating transition and default rates conditioned on outlooks. Journal of Fixed Income 14, 54–71.
Johnson, R., 2003. An examination of rating agencies' actions around the investment grade boundary. Federal Reserve Bank of Kansas City, Research Working Paper 03-01.
Kuhner, C., 2001. Financial rating agencies: Are they credible? Insights into the reporting incentives of rating agencies in times of enhanced systemic risk. Schmalenbach Business Review 53, 2–26.
Lando, D., Skødeberg, T.M., 2002. Analyzing rating transitions and rating drift with continuous observations. Journal of Banking and Finance 26, 423–444.
Liu, L.-G., Ferri, G., 2001. How do global credit rating agencies rate firms from developing countries? ADB Institute Research Paper No. 26.
Löffler, G., 2004. Ratings versus equity-based measures of default risk in portfolio governance. Journal of Banking and Finance 28, 2715–2746.
Moody's, 2004. Default and recovery rates of corporate bond issuers, 1920–2004. Moody's Special Comment.
Nickell, P., Perraudin, W., Varotto, S., 2000. Stability of transition matrices. Journal of Banking and Finance 24, 203–227.
Norden, L., Weber, M., 2004. Informational efficiency of credit default swap and stock markets: The impact of credit rating announcements. Journal of Banking and Finance 28, 2813–2843.
Shin, Y.S., Moore, W.T., 2003. Explaining credit rating differences between Japanese and U.S. agencies. Review of Financial Economics 12, 327–344.
Stein, R., 2005. The relationship between default prediction and lending profits: Integrating ROC analysis and loan pricing. Journal of Banking and Finance 29, 1213–1236.
White, L.J., 2002. The credit rating industry: An industrial organization analysis. In: Levich, R.M., Reinhart, C., Majnoni, G. (Eds.), Ratings, Rating Agencies and the Global Financial System. Kluwer Academic Publishers, Boston, pp. 41–63.