Accuracy, usefulness and the evaluation of analysts’ forecasts




International Journal of Forecasting 19 (2003) 417–434
www.elsevier.com/locate/ijforecast

Haim A. Mozes*
Fordham University Graduate School of Business, Fordham University GBA, 113 West 60th Street, Faculty Center, New York, NY 10023, USA

Abstract

This paper provides evidence that forecast immediacy (FI), which is the speed with which analysts respond to a significant change in the publicly available information set, is an ex-ante determinant of analysts’ forecast properties. Specifically, the greater the FI, the less accurate analysts’ forecasts and the greater the dispersion in analysts’ forecasts. Conversely, FI is positively related to forecast usefulness, which is the extent to which the forecast improves upon the accuracy of existing forecasts. One implication of these results is that it may be appropriate to evaluate separately analysts whose forecasts tend to have high FI and analysts whose forecasts tend to have low FI. A second implication is that, to the extent that forecasts with high FI tend to be more accurate than prior forecasts, one might place less weight on forecasts that precede high FI forecasts. © 2002 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.

Keywords: Earnings forecasting; Evaluating forecasts

1. Introduction

There is a significant body of research that investigates the ex-ante determinants of analysts’ forecast properties. Among the factors examined are forecast timeliness (e.g., Crichfield, Dyckman, & Lakonishok, 1978; O’Brien, 1988), the information environment (e.g., Brown, Richardson, & Schwager, 1987; Kross, Ro, & Schroeder, 1990), analyst incentives (e.g., Conroy & Harris, 1995; McNichols & Lin, 1998), analyst quality/reputation (e.g., O’Brien, 1990; Stickel, 1992; Sinha, Brown, & Das, 1997), analyst experience (e.g., Mikhail, Walther, & Willis, 1997;

*Tel.: +1-212-636-6124; fax: +1-212-765-5573. E-mail address: [email protected] (H.A. Mozes).

Jacob, Lys, & Neale, 1999), and the size of the analyst’s brokerage firm (e.g., Clement, 1999; Mozes & Williams, 2000). The various properties tested include forecast accuracy, forecast dispersion, and forecast usefulness. Forecast immediacy is the speed with which analysts respond to a significant change in the publicly available information set. The fewer the number of forecast revisions subsequent to the release of significant new information but before a given analyst’s revision, the greater the immediacy of that analyst’s revision. This paper adds to the literature by providing evidence that forecast immediacy (FI) is another ex-ante determinant of analysts’ forecast accuracy, and that the greater the FI, the less accurate analysts’ forecasts and the greater the dispersion in analysts’ forecasts. Conversely, FI is positively related to forecast usefulness, which is defined here as the extent to which the forecast

0169-2070/02/$ – see front matter © 2002 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved. doi:10.1016/S0169-2070(02)00056-0



improves upon the accuracy of existing forecasts (Williams, 1996). The rationale behind the paper’s results is that when forecast revisions are made shortly after the public release of new information (e.g., the forecasts have high FI), those current forecasts, which will at least partially reflect the new information, will have a greater informational advantage over the previous forecasts, which will not reflect any of the new information. Therefore, FI is positively related to the increase in the accuracy of the current forecasts, relative to that of the previous forecasts. However, in cases of high FI, analysts may not have had sufficient time to consider fully the implications of the new information before revising their forecasts. In addition, in forecasts with high FI, analysts do not observe prior forecasts made in response to the new information. Therefore, FI is negatively related to forecast accuracy and positively related to forecast dispersion. While the relation between FI and forecast accuracy, dispersion, and usefulness is intuitive, this paper’s contribution is in actually testing and empirically demonstrating these links. Because analysts’ information set cannot be observed and existing databases do not provide the events that precipitated analysts’ forecast revisions, it is difficult to measure a forecast’s FI and to determine whether that forecast is a late forecast in response to old information or an early forecast in response to significant new information. This paper provides a methodology for measuring FI without directly observing analysts’ information set or the events that triggered analysts’ forecast revisions, using the forecast cluster developed by Mozes and Williams (1999). The paper’s results suggest that analysts’ forecast accuracy and usefulness need to be jointly evaluated. 
Analysts’ skill in forecasting when there is high FI may be related to their speed in information assimilation and interpretation, while analysts’ skill in forecasting when there is low FI may be related to their ability to uncover private information or to their thoroughness in evaluating existing information. If most of an analyst’s forecasts occur immediately following the public release of new information, his average forecast accuracy may be low, relative to that of analysts whose forecasts considerably lag the

information release, but it may be incorrect simply to conclude that he is a poor analyst. Rather, it may be appropriate separately to evaluate analysts whose forecasts tend to have high FI and analysts whose forecasts tend to have low FI. A second implication of the paper deals with the appropriate weights to place on analyst forecasts for the purposes of forming earnings expectations and making investment decisions. To the extent that forecasts with high FI tend to be more accurate than prior forecasts, one might place less weight on forecasts that precede high FI forecasts. The methodology employed in this paper can be used to determine whether a particular analyst’s forecasts tend to have high or low FI and whether a particular forecast or set of forecasts has high or low FI. The remainder of the paper proceeds as follows. The second section provides a literature review and the test hypotheses. The third section defines the cluster of contemporaneous forecasts, explains the sampling procedures, provides the data sources and summary descriptive statistics, and introduces the test designs. The fourth section provides the results and the fifth section provides a summary and conclusions.

2. Literature review

2.1. Determinants of analysts’ forecast properties

One determinant of analyst forecast properties is forecast timeliness. Crichfield et al. (1978) show that the shorter the forecast horizon, the greater the forecast accuracy. O’Brien (1988) shows that the most recent analyst forecast is more accurate than the mean of all outstanding analyst forecasts. Brown (1991) and Mozes and Williams (1999) develop timely forecast composites that combine forecast recency with forecast aggregation. A second factor influencing analysts’ forecasts is the information environment. Brown et al. (1987) find that firm size is positively related to analysts’ forecast superiority over time-series based forecasts. Their explanation is that more information is available to analysts following large firms than to analysts following small firms. (This result is consistent with


Atiase (1987) and Bhushan (1989), who find that the information content of earnings announcements is inversely related to firm size.) Kross et al. (1990) do not find a positive relation between firm size and analysts’ forecast superiority, but they do find that analysts’ forecast superiority relative to time-series models is positively related to the firm’s news coverage in the Wall Street Journal, and to the time-series variability of earnings. (The time-series variability of earnings measures how well a given model ‘fits’ the earnings time-series. It is simply the variance of the difference between the model’s predicted earnings and actual earnings. The model can be a simple random-walk or it can consist of auto-regressive and moving average parameters. The differences are usually standardized by either price or earnings.) Their explanation for the latter result is that a greater time-series variability of earnings triggers a greater information search on the part of analysts. Consistent with this result, Bhushan (1989a) and Brennan and Hughes (1991) find that the number of analysts following the firm is positively related to the variability of the firm’s security returns. Luttman and Silhan (1995) and Lys and Soo (1995) show that analysts’ forecast accuracy is a decreasing function of the time-series variability of earnings. Analysts’ incentives are a third determinant of analysts’ forecast properties. Conroy and Harris (1995) show that for Japanese stocks, sell-side analysts are more optimistic and less accurate than other analysts. Dugar and Nathan (1995) and Hunton and McEwen (1997) find that analysts tend to bias their earnings forecasts upwards when their brokerage firms have investment banking relationships with firms they follow. However, Dugar and Nathan also find that this bias does not come at the expense of forecast accuracy. 
Lin and McNichols (1997) find that analysts tend to bias their long-term growth forecasts upwards, but not their short-term earnings forecasts, when their brokerage firms have investment banking relationships with firms they follow. Das, Levine, and Sivaramakrishnan (1998) find that the higher the firm’s time-series variability of earnings, the more analysts bias their forecasts upwards. They argue that such behavior represents an attempt by analysts to gain access to management’s non-


public information. In summary, while there is some evidence of a link between forecast incentives and forecast bias, the evidence of a link between forecast incentives and forecast accuracy is inconclusive. Another determinant of analysts’ forecast behavior is the uncertainty in their information set. Abarbanell and Bernard (1992) find that analysts systematically under-adjust when they revise their forecasts in response to new information, generating a positive bias for bad news and a negative bias for good news. A possible explanation for the under-adjustment behavior is that it is a response to uncertainty over whether the effects arising from the new information are permanent or transitory (e.g., Ali, Klein, & Rosenfeld, 1992; Lewis, 1989). The under-adjustment behavior on the part of market professionals is not limited to financial analysts’ earnings forecasts. For example, Frankel and Froot (1987) find that economists systematically under-predicted the strength of the dollar in the 1980s. Lewis (1989) explains Frankel and Froot’s (1987) results with a model where market participants revise their beliefs using a systematic under-adjustment process. As evidence that new information can increase analysts’ uncertainty, Morse, Stephan, and Stice (1991) and Brown and Han (1992) find that the convergence of analysts’ beliefs, which measures the change in analysts’ uncertainty, is negatively related to the magnitude of the ‘earnings surprise’ or the change in analysts’ information set. Accordingly, one would expect that (a) analysts’ uncertainty is an increasing function of the change in analysts’ information set, and (b) the forecast under-adjustment is an increasing function of the change in analysts’ information set. A final determinant of analysts’ forecast properties is analyst quality/reputation.
Though O’Brien (1990) fails to identify individual analysts who are consistently more accurate than other analysts, Stickel (1992) provides evidence that Institutional Investor All-American analysts’ forecasts are more accurate than other analysts’ forecasts. Butler and Lang (1991) find that individual analysts’ forecast optimism/pessimism (i.e., bias) tends to persist, but they also find that accuracy differences do not persist. Sinha et al. (1997) develop an accuracy-ranking procedure that controls for forecast timeliness. They find that the accuracy rankings generated within their estimation



sample persist in hold-out samples. Mikhail et al. (1997) find that analysts’ forecast accuracy improves with their experience following specific firms, but Jacob et al. (1999) find no experience effect after controlling for analysts’ skill and brokerage affiliation.

2.2. Hypothesis development

Because competing analysts will quickly revise their forecasts in response to new public information, individual analysts may face pressure to revise their forecasts as quickly as possible when new information becomes public. As evidence, Stickel (1989) finds that the increase in forecast revision activity following an earnings announcement is positively related to the number of competing analysts. In cases with high FI, analysts may be unable to consider fully the implications of the new information, due to the short time period between the information release and the resulting forecast revisions. As a result, forecast accuracy will be lower in these cases. In contrast, analysts may have the luxury of refining their models, verifying their information, and fine-tuning their forecasts when their revisions result from the reassessment of previously available public information and their forecasts have low FI. This motivates the following hypothesis:

H1: The greater the forecast immediacy, the lower the accuracy of individual analysts’ forecasts.

Because forecasts with higher FI are defined as those forecasts made closely following the release of significant new information, the change in analysts’ information set and analysts’ uncertainty are both likely to be greater for forecasts with greater FI. Prior research suggests that analysts are more likely to react to the uncertainty arising from new information by under-adjusting their forecasts. Accordingly, one would expect FI to be positively related to forecast dispersion and to the under-adjustment in analysts’ forecasts. This motivates the following hypotheses:

H2: The greater the forecast immediacy, the greater the dispersion in analysts’ forecasts.

H3: The greater the forecast immediacy, the greater analysts’ under-adjustment in their forecasts.

If analysts’ forecasts closely follow the release of new information, the new forecasts will at least

partially reflect that information, whereas the previous forecasts will not reflect the new information. Thus, while FI is negatively related to the absolute level of forecast accuracy, it should be positively related to the increase in the accuracy of the current forecasts relative to that of the previous forecasts (i.e., forecast usefulness). The reason is that the greater the FI, the greater the informational advantage of the current forecasts relative to previous forecasts. This motivates the following hypothesis:

H4: The greater the forecast immediacy, the more useful analysts’ forecast revisions.

3. Data

3.1. The forecast cluster

On average, approximately 23% of analysts following a firm revise their year-ahead earnings estimates in a given month (Brown, Foster, & Noreen, 1985). These revisions are not evenly distributed over time. Rather, they occur in response to specific corporate events such as quarterly earnings announcements (Stickel, 1989). These characteristics result in an irregular pattern of forecast clusters, with different cluster sizes and varying time lengths between clusters. A forecast cluster is a set of contemporaneous forecasts for which no individual analyst’s forecast is separated by more than three calendar days from another analyst’s forecast. In this manner, forecasts made on Friday (Monday) and the following Monday (Thursday) are part of the same forecast cluster, but not forecasts made on Friday (Monday) and the following Tuesday (Friday). (The results of the paper are not sensitive to alternative definitions of the cluster. I also defined a cluster as a set of forecasts separated by no more than four or five calendar days, with substantially similar results to those reported in the paper. The reason why the results do not differ much is that the individual forecasts are generally grouped into the same clusters, regardless of the cluster definitions. This occurs because there are relatively few cases in which forecasts are separated from one another by exactly four or five calendar days.) Because a given cluster continues until three consecutive days pass with no


new forecasts, (a) there is a minimum of four calendar days from the beginning of one cluster until the beginning of the following cluster, and (b) there is no theoretical maximum for the length of time between the first and last forecasts in a given cluster. Each individual forecast is assigned to a unique forecast cluster, which may contain one or more different forecasts. Because a given forecast is only included in one forecast cluster, successive clusters for a given firm do not contain any overlapping forecasts. The cluster consensus forecast is defined as the mean of all forecasts within the cluster. Mozes and Williams (1999) show that at any point in time, the most recent cluster’s consensus forecast is superior to the mean of all outstanding analyst forecasts as a measure of market expectations. (Mozes and Williams’ results are based on comparisons of: (i) forecast accuracy, (ii) the association between forecast errors and excess returns, and (iii) the serial correlation in forecast revisions.) As a result, the absolute change in the cluster’s consensus forecast, from one cluster to the next, represents the change in market expectations between the two forecast clusters, due to the information that arrived in the market in that interval. Forecast clusters are used in this paper to form proxies for FI.
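The three-calendar-day clustering rule and the cluster consensus can be sketched as follows. This is a minimal illustration, not the paper's actual code; the tuple layout `(date, analyst, estimate)` and the function names are my own.

```python
from datetime import date

def form_clusters(forecasts, max_gap_days=3):
    """Group (date, analyst, estimate) forecasts for one firm into clusters.

    A forecast joins the current cluster if it falls within `max_gap_days`
    calendar days of the most recent forecast in that cluster; otherwise it
    starts a new cluster. A cluster therefore ends once three consecutive
    days pass with no new forecasts.
    """
    ordered = sorted(forecasts, key=lambda f: f[0])
    clusters = []
    for f in ordered:
        if clusters and (f[0] - clusters[-1][-1][0]).days <= max_gap_days:
            clusters[-1].append(f)
        else:
            clusters.append([f])
    return clusters

def cluster_consensus(cluster):
    """The cluster consensus forecast: the mean of all estimates in the cluster."""
    return sum(f[2] for f in cluster) / len(cluster)
```

For example, a Friday forecast and the following Monday's forecast (three calendar days apart) land in the same cluster, while a forecast on the following Friday starts a new one.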

3.2. Data sources

One-year-ahead annual forecasts are extracted from the First Call Corporation’s historical detailed database (RTEE) from 1990 to 1994, covering sell-side analysts’ earnings forecasts for approximately 5500 firms. An advantage of the RTEE database is that individual forecast dates are clearly identified. To be included in this study, a company must have been listed on the COMPUSTAT database since 1986 and be followed by a minimum of five analysts on the First Call RTEE database. (The first requirement is necessary for calculating the time-series variability of earnings. Because of this requirement, it must be acknowledged that the paper’s results are possibly affected by a survivorship bias. The observations lost due to the survivorship bias likely include a greater percentage of cases where actual results were substantially less than forecasted results. If forecast immediacy is, on average, lower in these cases, that


might bias the tests towards rejecting the various hypotheses. However, there is no reason to believe that forecast immediacy is, in fact, lower for these cases.) The latter requirement eliminates firms that are not widely followed. Forecasts for which the precise estimate date is uncertain (e.g., batch forecasts) are excluded from this study. In this manner, forecasts within a cluster likely correspond to the same information set. Sixty-two percent of the forecast clusters have a cluster size of one, the majority of clusters consist of forecasts occurring on a single day, and the first and last forecasts in a given cluster are separated by more than five calendar days in only a small fraction of the forecast clusters. (A total of 150 000 individual forecasts were initially extracted from the First Call database for the 5-year period, 1990–1994. After applying all sampling requirements, approximately 80 000 individual forecasts remained, belonging to approximately 53 000 forecast clusters.)
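The sampling screens described above amount to a simple filter. The sketch below is illustrative only; the field names (`compustat_since`, `num_analysts`, `is_batch`) are assumptions, not actual First Call or COMPUSTAT field names.

```python
def passes_firm_screens(firm, min_analysts=5, listing_cutoff=1986):
    """Firm-level screens: COMPUSTAT listing since 1986 (needed to compute
    the time-series variability of earnings) and at least five analysts on
    the First Call RTEE database."""
    return (firm["compustat_since"] <= listing_cutoff
            and firm["num_analysts"] >= min_analysts)

def passes_forecast_screens(forecast):
    """Forecast-level screen: drop forecasts whose precise estimate date is
    uncertain (e.g., batch forecasts), so forecasts within a cluster likely
    correspond to the same information set."""
    return not forecast["is_batch"]
```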

3.3. Test variables

The variables used to measure FI are defined as follows:

NUMFOL is the number of analysts following the firm, and it varies annually;
NUMANS is the number of forecasts in the cluster;
%FORECAST is the percentage of analysts following the firm who provide a forecast in the cluster (NUMANS/NUMFOL);
D%FORECAST is %FORECAST in the previous cluster minus %FORECAST in the current cluster;
AVG%FORECAST is the average value of %FORECAST for the clusters in which analyst j’s forecasts occur for firm k;
REVISION is the current cluster’s consensus forecast minus the previous cluster’s consensus forecast, divided by price;
AREVISION is the absolute value of REVISION;
DAREVISION is AREVISION in the previous cluster minus AREVISION in the current cluster;
AVGAREVISION is the average value of AREVISION in the clusters in which analyst j’s forecasts occur for firm k.

Forecast immediacy is indirectly measured, using



%FORECAST as a proxy for the number of forecasts that already followed the most recent information release, and using AREVISION as a proxy for the magnitude of the new information that precipitated the forecast revisions. The rationale for the %FORECAST variable is as follows. An internal First Call study (Levine, 1998) suggests that over half of the large clusters of contemporaneous forecast revisions can be immediately related to specific events, such as earnings and dividend announcements, conference calls, and management forecasts, that occur within several days of the forecast revisions. In addition, Stickel (1989) finds a greater than average number of forecast revisions in the period immediately following the firm’s earnings announcement, and Williams (1996) finds a greater than average number of forecast revisions in the period immediately following management forecasts. (Both Stickel and Williams define the ‘average number of forecasts revisions’ as the average number of revisions in a period of equal length as their measurement period.) These results imply that the public release of significant new information is likely to trigger a large number of analyst revisions. Therefore, the greater (lower) %FORECAST, the lower (greater) the number of forecasts that have already followed the most recent information release. (Of course, there will be cases where a cluster with greater %FORECAST will have lower FI than a cluster with a lower %FORECAST. For example, consider the following sequence: significant new information is released, a forecast cluster with one analyst occurs, and a cluster with many analysts occurs. In this case, the cluster with the fewer forecasts has the higher FI. Using %FORECAST to measure FI in effect assumes that the sequence where significant new information is released, a forecast cluster with many analysts occurs, and a cluster with one analyst occurs, is far more common than the previous sequence.) 
(I also defined %FORECAST as the number of forecasts in the cluster. The results using this definition are substantially the same as those reported in the paper.) The rationale for the AREVISION variable is that the greater AREVISION, the greater the amount of new information that arrived to the market between the previous and current forecast clusters. (There will be cases where AREVISION results from an outlier forecast or a data transcription error, rather than from

new information. However, the argument is that, in general, AREVISION results from new information rather than from data errors.) Hence, large forecast revisions (i.e., higher AREVISION), and forecast revisions made by many analysts (i.e., higher %FORECAST), are more likely to have followed closely the release of significant new information, while smaller forecast revisions or forecast revisions made by fewer analysts are more likely to have resulted from individual analysts’ private information search or the reassessment of previously available public data. Likewise, the greater AVGAREVISION and AVG%FORECAST, the greater the FI associated with analyst j’s forecasts for firm k. D%FORECAST and DAREVISION represent the change in FI, relative to the previous cluster. The greater the values of D%FORECAST and DAREVISION, the greater the decrease in FI from the previous cluster to the current cluster. Although %FORECAST and AREVISION both measure aspects of FI, the correlation between them, while statistically significant, is modest (correlation = 0.08, P-value < 0.01). Forecast errors are calculated as actual earnings minus the forecasted earnings. The variables used to measure forecast accuracy are defined as follows:

AIFORERR is the average absolute forecast error of the individual forecasts in the cluster, divided by price;
CFORERR is the error of the consensus forecast of the cluster, divided by price;
ACFORERR is the absolute value of CFORERR;
AVGFORERR is the average absolute forecast error, divided by price, for analyst j’s forecasts for firm k.
The variables used to measure the change in forecast accuracy are defined as follows:

CLUSTIMP is the absolute forecast error of the previous cluster’s consensus forecast, minus the absolute forecast error of the current cluster’s consensus forecast, divided by price;
INDIMP is the absolute forecast error of the previous cluster’s consensus forecast minus the average absolute forecast error of the individual


analysts who forecast in the current cluster, divided by price.

The greater the value of CLUSTIMP, the greater the improvement in the cluster’s consensus forecast accuracy relative to the previous cluster’s consensus forecast, and the more useful the consensus forecast of the cluster. The greater the value of INDIMP, the greater the improvement in individual analysts’ forecast accuracy relative to the previous cluster’s consensus forecast, and the more useful, on average, an individual forecast in the cluster. The variables related to dispersion are defined as follows:

DISP is the standard deviation of analyst forecasts within the cluster, divided by price;
DDISP is DISP in the previous cluster, minus DISP in the current cluster.

A positive value for DDISP represents a convergence in analysts’ beliefs. The remaining variables, used as controls in the various tests, are defined as follows:

HORIZON is the number of days until the end of the fiscal year;
AVGHORIZON is the average value of HORIZON for analyst j’s forecasts for firm k;
LAG is the number of days from the end of the previous cluster until the first day of the current cluster;
TSVAR is the time-series variability of the firm’s earnings process, calculated as the standard deviation of the first-differenced annual earnings series, divided by price;
DOWN is a 0–1 dummy variable, with a value of 1 if REVISION is negative.

For all variables, price is measured as the closing price on the first day of the current cluster. Because it is meaningless to measure the dispersion of analysts’ forecasts within a cluster containing only one forecast, DISP is only defined when there are two or more forecasts in the cluster and DDISP is only defined when successive clusters each have two or more forecasts.
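The cluster-pair variables defined above can be computed along the following lines. This is a rough sketch: the function and argument names are illustrative, and the use of the sample standard deviation for DISP is an assumption, since the paper does not specify sample versus population.

```python
from statistics import mean, stdev

def cluster_pair_variables(prev, cur, num_following, actual, price):
    """Compute FI, accuracy, usefulness, and dispersion variables for one
    pair of successive clusters. `prev` and `cur` hold the individual
    estimates in the previous and current clusters; `price` is the closing
    price on the first day of the current cluster."""
    prev_cons, cur_cons = mean(prev), mean(cur)
    v = {}
    v["%FORECAST"] = len(cur) / num_following
    v["REVISION"] = (cur_cons - prev_cons) / price
    v["AREVISION"] = abs(v["REVISION"])
    v["AIFORERR"] = mean(abs(actual - f) for f in cur) / price
    v["CFORERR"] = (actual - cur_cons) / price
    v["ACFORERR"] = abs(v["CFORERR"])
    # Usefulness: improvement over the previous cluster's consensus.
    v["CLUSTIMP"] = (abs(actual - prev_cons) - abs(actual - cur_cons)) / price
    v["INDIMP"] = (abs(actual - prev_cons)
                   - mean(abs(actual - f) for f in cur)) / price
    # DISP is defined only when the cluster has two or more forecasts.
    v["DISP"] = stdev(cur) / price if len(cur) >= 2 else None
    return v
```

For instance, with previous estimates [1.0, 1.2], current estimates [1.4, 1.6], actual earnings of 1.5, and a price of 10, the current consensus is exactly right, so CLUSTIMP (0.04) exceeds INDIMP (0.03): aggregation diversifies away part of the individual errors.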


3.4. Descriptive statistics

Panel A of Table 1 provides sample-wide descriptive statistics for the test variables. The mean number of analysts following a sample firm is 13.6; the average number of days from a given forecast until the end of the fiscal year is 242; the mean number of days between forecast clusters is 22.4; and the mean %FORECAST is 0.148. The small cluster sizes and short average time periods between forecast clusters are consistent with the majority of clusters consisting of either one or two forecasts. The dispersion in analysts’ forecasts is considerably lower than the time-series variability of earnings (0.014 vs. 0.052), consistent with the majority of analysts’ forecast errors being common to all analysts. The mean values for both INDIMP and CLUSTIMP are positive, the mean value for CLUSTIMP is greater than that for INDIMP, and the median value for INDIMP is zero. These results are consistent with later forecasts improving upon the accuracy of earlier forecasts, but primarily when the later forecasts are aggregated with other contemporaneous forecasts. While the mean value for DDISP is positive, consistent with analysts’ uncertainty declining in later forecasts, relative to that of earlier forecasts, the median value of DDISP is negative. Results in Panel B show that the improvement in forecast accuracy and the decrease in forecast dispersion both occur only later in the fiscal year, and both are only detected in larger clusters (i.e., NUMANS ≥ 3). However, at the same time, the average absolute forecast error tends to be greater in large forecast clusters than in small forecast clusters. These results are consistent with a greater FI being associated with lower forecast accuracy (H1) and greater forecast usefulness (H4). The average value for AREVISION is 0.008, but the average value for CLUSTIMP is only 0.00032.
This implies that the accuracy of the cluster consensus only improves by approximately 4% of a given change in the cluster consensus (0.00032 / 0.008 = 0.04).
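The back-of-envelope ratio in the last sentence can be checked directly from the Table 1 means:

```python
mean_arevision = 0.008   # average absolute consensus revision (Table 1)
mean_clustimp = 0.00032  # average consensus accuracy improvement (Table 1)

# Consensus accuracy improves by roughly 4% of the size of the revision.
improvement_ratio = mean_clustimp / mean_arevision  # approximately 0.04
```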

3.5. Test design

3.5.1. Tests of H1

Model (1) is used to test H1, where the variables are as defined in the previous section. The subscripts i and t refer to firm i and cluster t, respectively.



Table 1. Descriptive statistics of test variables

Panel A: all observations

Variable      Mean      Median
NUMFOL        13.6      13
HORIZON       242       241
%FORECAST     0.148     0.111
LAG           22.4      14
AIFORERR      0.019     0.008
ACFORERR      0.019     0.008
AREVISION     0.008     0.003
TSVAR         0.052     0.022
DISP          0.014     0.007
INDIMP        0.00008   0
CLUSTIMP      0.00032   0.0001
DDISP         0.0001    -0.00003

Panel B: mean values of test variables based on HORIZON and NUMANS

Variable      Horizon >182 days   Horizon ≤182 days   NUMANS ≥3   NUMANS <3
%FORECAST     0.143               0.152               0.318       0.11
LAG           19.4                24.8                18.6        23.18
AIFORERR      0.025               0.015               0.022       0.019
ACFORERR      0.023               0.015               0.021       0.019
AREVISION     0.008               0.008               0.008       0.008
DISP          0.023               0.006               0.015       0.013
INDIMP        -0.001              0.0012              0.0019      -0.0004
CLUSTIMP      -0.0008             0.0015              0.0027      -0.0003
DDISP         -0.0001             0.00003             0.0008      -0.0007

AIFORERR_it = b0 + b1 %FORECAST_it + b2 AREVISION_it + b3 NUMFOL_i + b4 HORIZON_it + b5 TSVAR_i + e_it   (Model 1)

If the average absolute error for an individual analyst’s forecast is an increasing function of FI, the estimated coefficients for the two test variables, %FORECAST and AREVISION, should be greater than zero. The estimated coefficients for the control variables HORIZON and TSVAR are expected to be greater than zero, as these variables are related to the difficulty in forecasting. Following Bhushan’s (1989) finding that a greater analyst following may result from increased forecast difficulty, the control variable NUMFOL is expected to be positive. (Though Lys and Soo (1995) find that forecast accuracy increases with analysts following, their test procedure does not control for FI, while Model (1) does.

The greater NUMFOL, the less one would expect, ex ante, large changes in expectations, and the lower the FI for the average forecast. Therefore, when FI is not controlled for, NUMFOL is negatively associated with forecast errors.) While Model (1) tests the effects of FI on forecast accuracy cross-sectionally, Model (1A) examines whether analysts’ relative forecast accuracy for a particular firm depends on whether their forecasts for that firm tend to have high or low FI. Model (1A) is separately estimated for every sample firm with 10 or more analysts each providing at least 10 forecasts over the test period.

AVGFORERR_j = b0 + b1 AVG%FORECAST_j + b2 AVGAREVISION_j + b3 AVGHORIZON_j + e_j   (Model 1A)

H. A. Mozes / International Journal of Forecasting 19 (2003) 417–434

The test variables in Model (1A) are AVG%FORECAST and AVGAREVISION. If analysts' forecast accuracy and FI for a given firm are inversely related, then the coefficients b1 and b2, across all firms, should be significantly greater than zero. Because forecast accuracy is negatively related to the forecast horizon, the coefficient b3 for the control variable AVGHORIZON, across all firms, is expected to be significantly positive. The test statistic is defined as the mean of the individual parameter estimates, divided by the standard deviation of those estimates.

Barron, Kim, Lim, and Stevens (1998) and Abarbanell, Lanen, and Verrecchia (1995) predict that an increase in the number of forecasts aggregated into the mean will better diversify the idiosyncratic component of analysts' forecast errors and improve forecast accuracy. While an increase in %FORECAST is expected to result in a decrease in the accuracy of the individual forecasts being aggregated, an increase in %FORECAST may also result in better diversification of the individual forecast errors, so that the resulting forecast aggregation may compensate for the increase in individual analysts' forecast errors. Therefore, Model (1B) is used to test the relation between the accuracy of the cluster mean and the number of forecasts in the cluster.

ACFORERR_it = b0 + b1 NUMANS_it + b2 NUMFOL_i + b3 AREVISION_it + b4 HORIZON_it + b5 TSVAR_i + e_it   (Model 1B)

The test variable in Model (1B) is NUMANS. If the diversification of the individual forecast errors resulting from higher NUMANS does not fully compensate for the increase in individual analysts' forecast errors, the cluster's average forecast error will be an increasing function of the number of forecasts in the cluster, and the estimated coefficient for NUMANS will be positive. The estimated coefficient for AREVISION is expected to be greater than zero, consistent with the expectation for Model (1). The estimated coefficients for the control variables HORIZON and TSVAR are expected to be greater than zero, as these variables are related to the difficulty in forecasting. Following Bhushan's (1989) finding that a greater analyst following may result from increased forecast difficulty, the estimated coefficient for the control variable NUMFOL is expected to be positive.

3.5.2. Tests of H2
Models (2) and (2A) are used to test H2.

DISP_it = b0 + b1 %FORECAST_it + b2 AREVISION_it + b3 NUMFOL_i + b4 HORIZON_it + b5 TSVAR_i + b6 AIFORERR_it + e_it   (Model 2)

DDISP_it = b0 + b1 D%FORECAST_it + b2 DAREVISION_it + b3 LAG_it + e_it   (Model 2A)

The first model links FI to the level of analysts' uncertainty (DISP) and the second model links changes in FI to changes in analysts' uncertainty (DDISP). If the dispersion in analysts' forecasts is an increasing function of FI, the estimated coefficients for the test variables %FORECAST and AREVISION (D%FORECAST and DAREVISION) should be greater than zero in Model (2) (Model (2A)). The estimated coefficient for the control variable TSVAR is expected to be greater than zero in Model (2), as TSVAR is related to the ex-ante difficulty in forecasting. Barron et al. (1998) show that the dispersion in a group of analysts' forecasts is s/(s + h)^2, where s is the precision of private information, and h is the precision of common information. To the extent that a greater analyst following is related to an increase in the amount of private information obtained by analysts (Bhushan, 1989), the precision of private information should be positively related to analyst following. Accordingly, the estimated coefficient for the control variable NUMFOL is expected to be positive in Model (2). Stickel (1989) finds that analysts appear more reluctant to forecast early in the fiscal year, and that the increase in forecast revision activity following an earnings announcement is significantly weaker in the first two quarters of the fiscal year. He argues that because private information is likely to be more precise later in the fiscal year, analysts may delay issuing forecasts until the precision of their private information increases and


they can distinguish their forecasts from other analysts’ forecasts. Accordingly, the estimated coefficient for the control variable HORIZON is expected to be negative in Model (2). AIFORERR is inserted into Model (2) to distinguish the effect of FI on the level of forecast dispersion from the effect of FI on the level of forecast accuracy. The estimated coefficient for the control variable AIFORERR is expected to be positive. The estimated coefficient for the control variable LAG is expected to be positive in Model (2A), because the decrease in analysts’ uncertainty, from the previous cluster until the current cluster, should be positively related to the length of time since the previous cluster. The sample size for tests of Model (2) will be considerably lower than for tests of Model (1), because the variable DISP requires that the forecast cluster contains at least two forecasts. The sample size for tests of Model (2A) will be considerably lower than for tests of Model (2), because the variable DDISP requires that consecutive forecast clusters each contain at least two forecasts.
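The Barron et al. (1998) dispersion expression discussed above can be evaluated directly; the precision values below are purely illustrative, chosen to show that dispersion shrinks as the precision of common information grows:

```python
# Illustration of the Barron et al. (1998) expression: expected forecast
# dispersion = s / (s + h)^2, where s is the precision of private information
# and h the precision of common information. The precisions are illustrative.
import numpy as np

s = 2.0                              # private-information precision
h = np.array([1.0, 5.0, 25.0])       # increasing common-information precision
dispersion = s / (s + h) ** 2
print(dispersion)                    # monotonically decreasing in h
```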

3.5.3. Tests of H3
H3 is tested in two ways. First, the correlations are separately provided for each cluster size. Second, Model (3) is estimated to relate the under/over adjustment behavior to FI.

CFORERR_it = b0 + b1 REVISION_it + b2 %FORECAST_it*REVISION_it + b3 AREVISION_it*REVISION_it + b4 NUMFOL_i*REVISION_it + b5 HORIZON_it*REVISION_it + b6 DOWN_it*REVISION_it + b7 TSVAR_i*REVISION_it + e_it   (Model 3)

In Model (3), there is a basic relation between the dependent and independent variables, CFORERR and REVISION, and there are six interaction variables which influence that relation. The interpretation of a positive (negative) coefficient for each of these six interaction variables is that an increase in the interaction variable (e.g., %FORECAST, AREVISION, etc.) strengthens (weakens) the relation between CFORERR and REVISION. Because of the presence of the interaction variables, the estimated coefficient b1 cannot be interpreted as the estimated relationship between CFORERR and REVISION. (The estimated relationship between CFORERR and REVISION would be the estimated coefficient b1, plus the term b2 x %FORECAST evaluated at the average value of %FORECAST, plus the term b3 x AREVISION evaluated at the average value of AREVISION, and so on.)

The way that Model (3) tests H3 is as follows. Recall that forecast errors are defined as actual earnings minus predicted earnings, so that a positive forecast error (CFORERR) implies that analysts' forecasts were too low, and a negative forecast error implies that analysts' forecasts were too high. It follows that if the cluster revision and the forecast error are both positive, analysts' prior forecasts were too low, and they correctly raised their forecasts, but by an insufficient amount. Likewise, if the cluster revision and the forecast error are both negative, analysts' prior forecasts were too high, and they correctly lowered their forecasts, but by an insufficient amount. On the other hand, if the cluster revision is positive and the forecast error is negative, analysts raised their forecasts by an excessive amount. Likewise, if the cluster revision is negative and the forecast error is positive, analysts lowered their forecasts by an excessive amount. Hence, a positive correlation between the cluster revision and the cluster's consensus forecast error implies that analysts under adjust in their revisions. Therefore, a positive coefficient for the test variable %FORECAST*REVISION would imply that for a given revision, the greater the FI, the more actual earnings will exceed positively revised forecasts and underperform negatively revised forecasts.
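The marginal relation between CFORERR and REVISION described above (b1 plus each interaction coefficient times the sample average of its interacting variable) can be computed as follows. All coefficient values and sample means below are hypothetical placeholders, not the paper's estimates:

```python
# Sketch of evaluating the marginal effect of REVISION in Model (3):
# b1 + sum over interactions of (coefficient x sample mean). All numbers
# are illustrative placeholders.
b1 = -1.0  # hypothetical coefficient on REVISION
interaction_coefs = {
    "%FORECAST": 0.5, "AREVISION": -0.2, "NUMFOL": 0.01,
    "HORIZON": 0.002, "DOWN": 0.3, "TSVAR": -0.05,
}
sample_means = {
    "%FORECAST": 0.15, "AREVISION": 0.008, "NUMFOL": 13.0,
    "HORIZON": 180.0, "DOWN": 0.5, "TSVAR": 0.02,
}

marginal = b1 + sum(interaction_coefs[k] * sample_means[k] for k in interaction_coefs)
print(marginal)
```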
If the forecast under adjustment is proportional to REVISION, the estimated coefficient for AREVISION*REVISION should not differ from zero; if it is not proportional, the coefficient should differ from zero. The variable AREVISION*REVISION is included in Model (3) to control for the possibility that the under adjustment is not proportional to REVISION, but no directional hypothesis is provided. If analysts' under adjustment behavior is related to forecast difficulty, then the estimated coefficients for the control variables HORIZON*REVISION, TSVAR*REVISION,


and NUMFOL*REVISION should be greater than zero. Because (i) analysts may have greater disincentives to sharply lower their forecasts than to sharply increase them and (ii) analysts tend to make optimistic forecasts when the firm faces adverse circumstances (Klein, 1990), analysts' under adjustment behavior may be greater with negative revisions. As a result, the estimated coefficient for the control variable DOWN*REVISION is expected to be positive.

3.5.4. Tests of H4
Models (4) and (4A) are used to test H4.

CLUSTIMP_it = b0 + b1 %FORECAST_it + b2 AREVISION_it + b3 LAG_it + e_it   (Model 4)

INDIMP_it = b0 + b1 %FORECAST_it + b2 AREVISION_it + b3 LAG_it + e_it   (Model 4A)

If the improvement in forecast accuracy is related to FI, the estimated coefficients for the test variables %FORECAST and AREVISION should be positive in Models (4) and (4A). The estimated coefficient for the control variable LAG is expected to be positive in both models, because the improvement in forecast accuracy relative to the previous cluster should be positively related to the length of time since the previous cluster.

3.6. Model specification issues
To control for the possible cross-sectional correlation in the models' dependent variables, the models using levels as the dependent variable (Models (1), (1A), (1B), (2), and (3)) include four time-specific fixed-effects parameters, corresponding to the five calendar years represented in the test sample. To control for possible heteroskedasticity in the regression errors, all t-statistics are computed using White's (1980) heteroskedasticity-consistent variance estimators. As previously discussed, successive clusters for a given firm do not contain any overlapping forecasts, possibly reducing the serial correlation in the disturbance terms in Models (1) and (2). In addition, by construction, the serial correlation issue should not arise in Models (1A) and (2A). To the extent that the results for Models (1A) and (2A) are consistent with those in Models (1) and (2), one can develop confidence that the results in Models (1) and (2) are not driven by serial correlation.

Though the literature suggests various individual analyst effects on forecast accuracy, the tests do not control for these possible effects. The First Call database used in this study associates forecasts with brokerage firms, but not with particular analysts. As a result, it is difficult to develop variables for analyst quality or experience. Ignoring analyst quality and experience will not bias the results if higher- and lower-quality analysts are equally distributed between low and high FI cases.

4. Results

4.1. The relation between forecast immediacy and forecast accuracy
The results of Model (1) are presented in Table 2. These results indicate that the estimated coefficients for %FORECAST and AREVISION are both significantly greater than zero, consistent with H1. The estimated coefficients for HORIZON, NUMFOL, and TSVAR are all significantly positive, as expected. Because Model (1) controls for the time until the end of the fiscal period with the HORIZON variable, I conclude that the effect of FI on forecast accuracy is not captured by the length of time until the end of the fiscal period. Finally, an F-test for the four time-specific fixed-effects parameters is significant (P value <0.001), implying that forecast accuracy varies yearly. The mean (median) values of the 177 parameter estimates for b1, b2, and b3 in Model (1A) are 0.003, 0.285, and 0.006 (0.004, 0.243, and 0.003), respectively, and the Z-statistics for the sets of b1, b2, and b3 parameter estimates are 0.68, 3.16, and 3.21, respectively. While the Z-statistic for b1 is insignificant, the nonparametric Wilcoxon signed-rank test for b1 is significant at the 0.03 level. (In addition, the set of parameter estimates for b1 is significant using an alternative Z-statistic. That statistic (see Mikhail et al., 1997; Barth, 1994) is defined as Z = (N - 1)^(1/2) * (avg(t)/SD(t)), where avg(t) is the average of the individual t-


Table 2
Association between the accuracy of the cluster mean and forecast immediacy (Tests of H1)

AIFORERR_it = b0 + b1 %FORECAST_it + b2 AREVISION_it + b3 NUMFOL_i + b4 HORIZON_it + b5 TSVAR_i + Sum_k(b_k Y_k) + e_it   (Model 1)

N = 53,401

                  b0       b1      b2       b3       b4       b5
Parameter est.   -0.005    0.008   0.552    0.0001   0.0001   0.044
t-Statistic      -5.0*     8.12*   27.62*   4.16*    42.7*    21.9*

Model F-statistic: 1352*   Adj. R^2: 0.186   F-test of fixed-effect time parameters: 208*

Significance levels: *0.01, **0.05, ***0.1. AIFORERR is the average absolute forecast error of the individual forecasts in the cluster, divided by price; %FORECAST is the percentage of analysts following the firm who provide a forecast in the cluster; NUMFOL is the number of analysts following the firm; HORIZON is the number of days until the end of the fiscal year; TSVAR is the standard deviation of the first-differenced annual earnings series, divided by price; and AREVISION is the absolute value of the current cluster's consensus forecast minus the previous cluster's consensus forecast, divided by price. Price is the closing price on the first day of the current cluster. Model (1) was estimated using data from First Call for the period 1990 to 1994, using OLS. Significance levels were calculated using White's (1980) correction for heteroskedasticity. To control for cross-sectional correlation in the disturbance terms, the model includes four fixed-effect time parameters, Sum_k(b_k Y_k); the results for these parameters are not reported.

statistics; SD(t) is the standard deviation of the set of t-statistics; and N is the number of t-statistics being aggregated.) Using this procedure, the Z-statistics for the set of b 1 , b 2 , and b 3 parameter estimates are 2.46, 4.79, and 4.83, respectively. Overall, the results of Model (1A) further support H1 and provide evidence that FI and forecast accuracy are negatively related, even within a particular firm. The results of Model (1B) are presented in Table 3. These results indicate that the estimated coefficient for NUMANS is significantly greater than

zero. The implication is that in clusters with high FI, the diversification of individual forecast errors which results from forecast aggregation does not fully compensate for the increase in an individual analyst’s forecast errors due to the higher FI. (However, if a group of 20 analysts makes contemporaneous revisions, the mean of those 20 analysts’ forecasts will be more accurate than the mean of any subset of those 20 analysts’ forecasts.) The estimated coefficients for the remaining variables other than NUMFOL are all significantly positive, as they are

Table 3
Association between the accuracy of the cluster mean and forecast immediacy

ACFORERR_it = b0 + b1 NUMANS_it + b2 NUMFOL_i + b3 AREVISION_it + b4 HORIZON_it + b5 TSVAR_i + Sum_k(b_k Y_k) + e_it   (Model 1B)

N = 53,401

                  b0       b1       b2       b3       b4       b5
Parameter est.   -0.003    0.0004   0.0001   0.534    0.0001   0.0453
t-Statistic      -6.34*    3.7*     0.34     26.72*   42.7*    21.63*

Model F-statistic: 1276*   Adj. R^2: 0.177   F-test of fixed-effect time parameters: 200*

Significance levels: *0.01, **0.05, ***0.1. ACFORERR is the absolute forecast error of the cluster's consensus forecast, divided by price; NUMANS is the number of forecasts in the cluster; NUMFOL is the number of analysts following the firm; HORIZON is the number of days until the end of the fiscal year; TSVAR is the standard deviation of the first-differenced annual earnings series, divided by price; and AREVISION is the absolute value of the current cluster's consensus forecast minus the previous cluster's consensus forecast, divided by price. Price is the closing price on the first day of the current cluster. Model (1B) was estimated using data from First Call for the period 1990–1994, using OLS. Significance levels were calculated using White's (1980) correction for heteroskedasticity. To control for cross-sectional correlation in the disturbance terms, the model includes four fixed-effect time parameters, Sum_k(b_k Y_k); the results for these parameters are not reported.


positively related to forecast difficulty. Again, an F-test for the four time-specific fixed-effects parameters is highly significant (P value <0.001).
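The diversification point made above, that averaging analysts' contemporaneous forecasts diversifies the idiosyncratic component of their errors, can be illustrated with a small simulation. The error-component magnitudes below are illustrative, not estimated from the sample:

```python
# Simulation: each forecast = truth + common error + idiosyncratic error.
# Averaging within a cluster diversifies the idiosyncratic component, so the
# cluster mean is more accurate on average than a typical individual forecast.
import numpy as np

rng = np.random.default_rng(2)
true_eps = 1.00                     # hypothetical actual EPS
n_clusters, n_analysts = 5000, 20
common = rng.normal(0, 0.05, size=(n_clusters, 1))         # shared error
idio = rng.normal(0, 0.10, size=(n_clusters, n_analysts))  # analyst-specific error
forecasts = true_eps + common + idio

indiv_abs_err = np.abs(forecasts - true_eps).mean()              # avg individual |error|
cluster_abs_err = np.abs(forecasts.mean(axis=1) - true_eps).mean()  # |error| of cluster mean
print(indiv_abs_err, cluster_abs_err)
```

The common error component survives averaging, which is why aggregation cannot fully offset a rise in individual forecast errors when FI is high.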

4.2. The relation between forecast immediacy and forecast dispersion
The results of Models (2) and (2A) are presented in Panels A and B of Table 4, with the results for the time-specific fixed-effects parameters suppressed in Panel A. The estimated coefficients for %FORECAST and AREVISION are both significantly positive in Model (2), and the estimated coefficient for %FORECAST is significantly negative in Model (2A). These results provide support for H2 and imply that the greater the FI, the greater analysts' uncertainty. Alternatively, the more information that arrives in the marketplace after the previous forecasts were made, the greater the increase in forecast dispersion. Consistent with expectations, the estimated coefficients for TSVAR and


NUMFOL are significantly greater than zero in Model (2), and the estimated coefficient for LAG is insignificantly different from zero in Model (2A). The estimated coefficient for HORIZON is negative in Model (2), though not significantly so. An F-test for the four time-specific fixed-effects parameters in Model (2) is highly significant (P value <0.001), implying that forecast dispersion varies annually.

4.3. The relation between forecast immediacy and forecast under adjustment Results presented in Panel A of Table 5 show that the correlation between forecast errors and forecast revisions is negative for small clusters and positive for large clusters. Moreover, the Spearman correlation between (i) cluster size and (ii) the correlation between the cluster revision and the cluster mean forecast error, is significant at the 0.05 level. These results support H3. Results for Model (3) are presented in Panel B of

Table 4
Association between forecast dispersion and forecast immediacy (Tests of H2)

Panel A
DISP_it = b0 + b1 %FORECAST_it + b2 AREVISION_it + b3 NUMFOL_i + b4 HORIZON_it + b5 TSVAR_i + b6 AIFORERR_it + Sum_k(b_k Y_k) + e_it   (Model 2)

N = 20,357

                  b0       b1        b2      b3       b4       b5      b6
Parameter est.   -0.001    0.0007    0.321   0.0001   0.000    0.006   0.10
t-Statistic      -1.24     1.97**    15.4*   5.42*   -0.34     2.7*    19.73*

Model F-statistic: 1035*   Adj. R^2: 0.337   F-test of fixed-effect time parameters: 16.6*

Panel B
DDISP_it = b0 + b1 D%FORECAST_it + b2 DAREVISION_it + b3 LAG_it + e_it   (Model 2A)

N = 10,545

                  b0      b1      b2       b3
Parameter est.    0.001   0.004   0.009   -0.0001
t-Statistic       0.92    7.76*   0.81    -0.31

Model F-statistic: 22.7*   Adj. R^2: 0.006

Significance levels: *0.01, **0.05, ***0.1. DISP is the dispersion in the cluster's forecasts, divided by price; DDISP is the dispersion in the current cluster's forecasts minus the dispersion in the prior cluster's forecasts; %FORECAST is the percentage of analysts following the firm who provide a forecast in the cluster; LAG is the number of days from the end of the previous cluster until the first day of the current cluster; AREVISION is the absolute value of the current cluster's consensus forecast minus the previous cluster's consensus forecast, divided by price; D%FORECAST is the previous %FORECAST minus the current %FORECAST; and DAREVISION is the previous AREVISION minus the current AREVISION. Price is the closing price on the first day of the current cluster. Models (2) and (2A) were estimated using data from First Call for the period 1990 to 1994, using OLS. Significance levels were calculated using White's (1980) correction for heteroskedasticity. To control for cross-sectional correlation in the disturbance terms, Model (2) includes four fixed-effect time parameters, Sum_k(b_k Y_k); the results for these parameters are not reported.


Table 5
The relation between forecast immediacy and the correlation between the cluster revision and cluster consensus forecast error (Tests of H3)

Panel A
Cluster size   Correlation between cluster revision and cluster consensus forecast error
1             -0.123  (P value = 0.0001)
2             -0.064  (P value = 0.0001)
3              0.0164 (P value = 0.2397)
4              0.0352 (P value = 0.0679)
5              0.0444 (P value = 0.0698)
6              0.1006 (P value = 0.0008)
7              0.0906 (P value = 0.0132)
8              0.0927 (P value = 0.0329)
9              0.1523 (P value = 0.0021)
10             0.1507 (P value = 0.0116)
11             0.3289 (P value = 0.0001)
12            -0.1035 (P value = 0.1939)
13             0.2665 (P value = 0.0030)
14             0.3588 (P value = 0.0001)

Panel B: association between forecast under adjustments and forecast immediacy
CFORERR_it = b0 + b1 REVISION_it + b2 %FORECAST_it*REVISION_it + b3 AREVISION_it*REVISION_it + b4 NUMFOL_i*REVISION_it + b5 HORIZON_it*REVISION_it + b6 DOWN_it*REVISION_it + b7 TSVAR_i*REVISION_it + Sum_k(b_k Y_k) + e_it   (Model 3)

N = 53,600

                  b0        b1        b2      b3       b4       b5      b6       b7
Parameter est.   -0.005    -1.81      0.98   -0.002    0.0001   0.23    0.52    -0.011
t-Statistic     -12.26*   -10.67*     7.51*  -0.18     0.20     7.75*   17.47*  -2.65*

Model F-statistic: 296*   Adj. R^2: 5.7%

Significance levels: *0.01, **0.05. Model (3) was estimated using data from First Call for the period 1990 to 1994, using OLS. Significance levels were calculated using White's (1980) correction for heteroskedasticity. To control for cross-sectional correlation in the disturbance terms, the model includes four fixed-effect time parameters, Sum_k(b_k Y_k); the results for these parameters are not reported.

Table 5. If one were to test whether, on average, the cluster consensus forecast under adjusts to new information, the appropriate test would be an F-test of the null that Sum(b_i) = 0, rather than a simple test of the null that b1 = 0. The F-test of the null that Sum(b_i) = 0 is insignificant, implying that, on average, analysts do not under adjust their forecasts. (This is consistent with the results in Panel A of Table 5, which show that smaller clusters tend to exhibit over adjustment behavior while larger clusters tend to exhibit under adjustment behavior.) However, the results also show that there are identifiable factors which lead to a greater under adjustment in analysts' forecasts. Specifically, the estimated coefficient for %FORECAST*REVISION is significantly greater than zero, providing support for H3. The implication is that the greater the FI, the more likely that the

information triggering the forecast revisions will be less than completely reflected in analysts' forecasts. As expected, the estimated coefficient for DOWN*REVISION is significantly positive. The results for the parameters linking forecast difficulty to forecast under adjustment are mixed. While the estimated coefficient for HORIZON*REVISION is significantly greater than zero, the estimated coefficient for NUMFOL*REVISION is insignificant, and contrary to expectations, the estimated coefficient for TSVAR*REVISION is significantly negative. An F-test for the four time-specific fixed-effects parameters (not reported) is highly significant (P value <0.001), implying that analysts' tendency to under adjust in their forecast revisions varies from year to year. The estimated coefficient for AREVISION*REVISION is insignificant, suggesting


that the forecast under adjustment is proportional to the forecast revision. In summary, the results provide evidence that FI is positively related to analysts' forecast under adjustment, but the results are mixed regarding whether forecast difficulty is related to analysts' forecast under adjustment.

4.4. The relation between forecast immediacy and forecast usefulness

Table 6
Association between forecast usefulness and forecast immediacy (Tests of H4)

Panel A
CLUSTIMP_it = b0 + b1 %FORECAST_it + b2 AREVISION_it + b3 LAG_it + e_it   (Model 4)

N = 53,401

                  b0      b1       b2      b3
Parameter est.   -0.02    0.011    0.195   0.0003
t-Statistic      -8.4*    19.32*   6.47*   5.86*

Model F-statistic: 477*   Adj. R^2: 5.9%

Panel B
INDIMP_it = b0 + b1 %FORECAST_it + b2 AREVISION_it + b3 LAG_it + e_it   (Model 4A)

N = 53,401

                  b0       b1       b2       b3
Parameter est.   -0.002    0.001    0.176    0.00003
t-Statistic      -7.92*    19.35*   44.71*   6.66*

Model F-statistic: 391   Adj. R^2: 4.9%

Significance levels: *0.01, **0.05, ***0.1. CLUSTIMP is the absolute forecast error of the previous cluster's consensus forecast minus the absolute forecast error of the current cluster's consensus forecast, divided by price; INDIMP is the absolute forecast error of the previous cluster's consensus forecast minus the average absolute forecast error of the individual forecasts in the current cluster, divided by price; %FORECAST is the percentage of analysts following the firm who provide a forecast in the cluster; LAG is the number of days from the end of the previous cluster until the first day of the current cluster; and AREVISION is the absolute value of the current cluster's consensus forecast minus the previous cluster's consensus forecast, divided by price. Price is the closing price on the first day of the current cluster. Models (4) and (4A) were estimated using data from First Call for the period 1990–1994, using OLS. Significance levels were calculated using White's (1980) correction for heteroskedasticity.

The results of Models (4) and (4A) are presented in Panels A and B of Table 6, respectively. The

significantly positive coefficients for %FORECAST and AREVISION in Models (4) and (4A) support H4, and imply that revisions made immediately following the release of significant new information will provide the greatest improvement in forecast accuracy, relative to prior expectations. The estimated coefficients for LAG are significantly positive in Models (4) and (4A), as expected.

4.5. Sensitivity analysis
The results for the %FORECAST and AREVISION variables hold whether there are more or fewer than 13 analysts following the firm (the sample median) and whether the revision is positive or negative. This implies that the FI results are not related to (a) analyst following, (b) firm size, or (c) whether the new information is good news or bad news. (The paper's results are substantially unchanged when all cases with a negative cluster consensus and/or a cluster consensus of less than 0.1 are removed from the sample.)

While the fixed-effect time parameters control for cross-sectional correlation in the disturbance terms, some cross-sectional correlation may remain. Likewise, while the test design using non-overlapping forecast clusters reduces the time-series correlation in the disturbance terms, time-series correlation does exist within a fiscal year for a given firm. Although there are, on average, only 15 clusters per firm per fiscal year, within a fiscal year for a given firm, the serial correlation in forecast errors is 0.88, and the serial correlation in forecast dispersion is 0.47. In one test of the extent to which the reported results may have been influenced by serial correlation, the models were estimated using only those forecast clusters that are separated by more than 30 calendar days from the previous cluster. The results were substantially unchanged from those reported for Models (1)–(3), except that the estimated coefficient for %FORECAST was not significant for Model (2). In another test using several subsamples, Models (1) and (1B) were estimated after including additional fixed-effect parameters for each firm and each combination of firm and firm-year. (This procedure could not be done with the full sample, because the resulting matrix of independent variables was too

large for SAS to process. The subsamples each contained roughly 10% of the total sample.) In these tests, the estimated coefficients for AREVISION were still highly significant, although the results for %FORECAST were not significant. These results suggest that the coefficient estimates for AREVISION are less likely to be affected by serial correlation than those for %FORECAST. Nevertheless, if the effects of the cross-sectional and time-series correlations are not adequately controlled for in the data, the significance levels of the test results and inferences drawn from the various test results may be exaggerated.
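The 30-day robustness filter described above, keeping only forecast clusters separated from a firm's previous cluster by more than 30 calendar days, can be sketched as follows. The column names and the handful of example clusters are illustrative:

```python
# Sketch of the robustness filter: drop any cluster within 30 calendar days
# of the firm's previous cluster. Column names and dates are illustrative.
import pandas as pd

clusters = pd.DataFrame({
    "firm": ["A", "A", "A", "B", "B"],
    "cluster_date": pd.to_datetime(
        ["1990-01-05", "1990-01-20", "1990-03-10", "1990-02-01", "1990-04-01"]),
})
clusters = clusters.sort_values(["firm", "cluster_date"])

# Days since the firm's previous cluster (NaN for each firm's first cluster).
gap = clusters.groupby("firm")["cluster_date"].diff().dt.days
filtered = clusters[gap > 30]   # NaN gaps and short gaps are dropped
print(filtered)
```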

5. Summary and conclusions

This paper provides evidence on the effects of forecast immediacy (FI) on analysts' forecasts. Forecast revisions made immediately after the public release of significant new information will at least partially reflect the new information, while the previous forecasts will not reflect any of the new information. Therefore, FI is positively related to the increase in the accuracy of the current forecasts, relative to that of the previous forecasts. However, because a greater FI reflects an immediate response by analysts to new information, analysts may not have had sufficient time to consider fully the implications of the new information when revising their forecasts. Therefore, FI is negatively related to forecast accuracy and positively related to forecast dispersion.

There is considerable discussion in the literature about whether there exist superior forecasters (e.g., O'Brien, 1990; Sinha et al., 1997). The results of this paper suggest that in addition to labeling analysts as 'stronger' and 'weaker', one might also classify analysts as emphasizing either accuracy or usefulness (i.e., immediacy) in the timing of their forecasts. While, on average, forecasts from analysts designated as 'usefulness-oriented' may be less accurate, their forecasts may be more useful. Conversely, while forecasts from analysts designated as 'accuracy-oriented' may be less useful, their forecasts may be more accurate. Therefore, it may be appropriate to evaluate 'usefulness-oriented' and 'accuracy-oriented' analysts by different criteria. The methodology introduced in this paper to measure FI can be used to determine whether a particular analyst's forecasts tend to be 'usefulness-oriented' or 'accuracy-oriented'.

A related implication pertains to the appropriate control for timing advantages when comparing analysts' forecast performance. The traditional approach (e.g., Sinha et al., 1997) controls for the length of time until the end of the fiscal period. However, my results show that the level of contemporaneous forecast activity and the magnitude of the forecast revisions, which proxy for forecast immediacy, are also related to forecast accuracy and usefulness. Hence, these variables should also be considered when evaluating analysts' performance.

The paper's results further suggest that to the extent that forecasts with high forecast immediacy tend to be less accurate, one might place less weight on those forecasts when aggregating individual forecasts into a composite forecast. Alternatively, to the extent that forecasts with high FI tend to be more accurate than prior forecasts, one might place less weight on forecasts that precede high FI forecasts.

A final implication pertains to trading strategies. In a number of strategies, 'better' forecasts than the simple consensus are constructed by exploiting systematic errors in analysts' forecasts. The finding that the correlation in analysts' forecast errors is related to FI suggests that strategies based on predictable forecast errors should consider those forecasts' FI.

Acknowledgements

The author would like to express his appreciation to the First Call Corporation for providing the data used in this study, and to Stanley Levine, Harry Newman, Patricia Williams, and seminar participants at the American Accounting Association Annual Meeting, the DAIS Group Conference on Quantitative Equity and Earnings Analysis, and the Institute of International Research Conference on Analyzing Corporate Earnings, for their helpful insights and suggestions.


References

Abarbanell, J. S., & Bernard, V. L. (1992). Tests of analysts' overreaction/underreaction to earnings information as an explanation for anomalous stock price behavior. Journal of Finance, 47, 1181–1207.
Abarbanell, J. S., Lanen, W., & Verrecchia, R. (1995). Analysts' forecasts as proxies for investor beliefs in empirical research. Journal of Accounting and Economics, 20, 31–60.
Ali, A., Klein, A., & Rosenfeld, J. (1992). Analysts' use of information about permanent and transitory earnings components in forecasting annual EPS. The Accounting Review, 67, 183–198.
Atiase, R. K. (1987). Market implications of predisclosure information: Size and exchange effects. Journal of Accounting Research, 25, 168–176.
Barron, O. E., Kim, O., Lim, S. C., & Stevens, D. E. (1998). Using analysts' forecasts to measure properties of analysts' informational environment. The Accounting Review, 73, 421–434.
Barth, M. (1994). Fair value accounting: Evidence from investment securities and the market valuation of banks. The Accounting Review, 69, 1–25.
Bhushan, R. (1989). Collection of information about publicly traded firms. Journal of Accounting and Economics, 11, 183–206.
Bhushan, R. (1989). Firm characteristics and analyst following. Journal of Accounting and Economics, 11, 255–274.
Brennan, M., & Hughes, P. (1991). Stock prices and the supply of information. Journal of Finance, December, 1665–1691.
Brown, L. D. (1991). Forecast selection when all forecasts are not equally recent. International Journal of Forecasting, 349–356.
Brown, L. D., & Han, J. (1992). The impact of annual earnings announcements on the convergence of beliefs. The Accounting Review, 67, 862–875.
Brown, L. D., Richardson, G. D., & Schwager, S. J. (1987). An information interpretation of financial analysts' superiority in forecasting earnings. Journal of Accounting Research, 25, 49–67.
Brown, S. J., Foster, G., & Noreen, E. (1985). Security analysts' multi-year earnings forecasts and the capital market. Studies in Accounting Research No. 21. American Accounting Association.
Butler, K. C., & Lang, L. (1991). The forecast accuracy of individual analysts: Evidence of systematic optimism and pessimism. Journal of Accounting Research, 29, 150–156.
Clement, M. B. (1999). Analyst forecast accuracy: Do ability, resources, and portfolio complexity matter? Journal of Accounting and Economics, 27, 285–303.
Conroy, R. M., & Harris, R. S. (1995). Analysts' earnings forecasts in Japan: Evidence of systematic optimism. Pacific-Basin Finance Journal, 3, 393–408.
Crichfield, T., Dyckman, T., & Lakonishok, J. (1978). An evaluation of security analysts' forecasts. The Accounting Review, 53, 651–668.


Das, S., Levine, C. B., & Sivaramakrishnan, K. (1998). Earnings predictability and bias in analysts' earnings forecasts. The Accounting Review, 73, 277–294.
Dugar, A., & Nathan, S. (1995). The effect of investment banking relationships on financial analysts' earnings forecasts and investment recommendations. Contemporary Accounting Research, 12, 131–160.
Frankel, J., & Froot, K. (1987). Using survey data to test standard propositions regarding exchange rate expectations. American Economic Review, 77, 374–481.
Hunton, J. E., & McEwen, R. A. (1997). An assessment of the relation between analysts' earnings forecast accuracy, motivational incentives and cognitive information search strategy. The Accounting Review, 72, 497–515.
Jacob, J., Lys, T., & Neale, M. (1999). Expertise in forecasting performance of security analysts. Journal of Accounting and Economics, 28, 51–82.
Klein, A. (1990). A direct test of the cognitive bias theory of share price reversals. Journal of Accounting and Economics, 13, 155–166.
Kross, W., Ro, B., & Schroeder, D. (1990). Earnings expectations: The analyst's information advantage. The Accounting Review, 65, 461–476.
Levine, S. (1998). What drives forecast clusters? First Call Corporation, New York, NY.
Lewis, K. (1989). Changing beliefs and systematic rational forecast errors with evidence from foreign exchange. American Economic Review, 79, 621–637.
Lin, H., & McNichols, M. F. (1997). Underwriting relationships, analysts' earnings forecasts and investment recommendations. Journal of Accounting and Economics, 25, 101–128.
Luttman, S., & Silhan, P. (1995). Identifying factors consistently related to Value Line earnings predictability. Financial Review, 30, 445–468.
Lys, T., & Soo, L. (1995). Analysts' forecast precision as a response to competition. Journal of Accounting, Auditing, and Finance, 10, 751–765.
Mikhail, M. B., Walther, B. R., & Willis, R. H. (1997). Do analysts improve their performance with experience? Journal of Accounting Research, 35 (Supplement), 131–166.
Morse, D., Stephan, J., & Stice, E. (1991). Earnings announcements and the convergence (divergence) of beliefs. The Accounting Review, 66, 376–388.
Mozes, H. A., & Williams, P. A. (1999). Modeling earnings expectations based on clusters of analyst forecasts. Journal of Investing, Summer, 25–38.
Mozes, H. A., & Williams, P. A. (2000). Brokerage firm analysts: How good are the forecasts? Journal of Investing, Fall, 5–13.
O'Brien, P. (1988). Analysts' forecasts as earnings expectations. Journal of Accounting and Economics, 10, 53–83.
O'Brien, P. (1990). Forecast accuracy of individual analysts in nine industries. Journal of Accounting Research, 28, 286–304.
Sinha, P., Brown, L. D., & Das, S. (1997). A re-examination of financial analysts' differential earnings forecast accuracy. Contemporary Accounting Research, 14, 1–42.



Stickel, S. E. (1989). The timing of and incentives for annual earnings forecasts near interim earnings announcements. Journal of Accounting and Economics, 11, 275–292.
Stickel, S. E. (1992). Reputation and performance among security analysts. The Journal of Finance, 47 (5), 1811–1836.
White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48, 817–838.
Williams, P. (1996). The relation between a prior earnings forecast by management and analysts' response to a current management forecast. The Accounting Review, 71, 103–115.

Biography: Haim MOZES received his Ph.D. in accounting and an M.S. in Statistics/Operations from New York University. His research focuses on the use of individual analyst forecast data to construct superior earnings forecasts and to identify superior financial analysts.