Journal of International Economics 88 (2012) 74–90
The carry trade and fundamentals: Nothing to fear but FEER itself ☆

Òscar Jordà a,b, Alan M. Taylor c,d,e,⁎

a Federal Reserve Bank of San Francisco, United States
b University of California, Davis, United States
c University of Virginia, Department of Economics, Monroe Hall, Charlottesville VA 22903, United States
d National Bureau of Economic Research, United States
e Center for Economic Policy Research, United Kingdom
Article history: Received 30 March 2010; received in revised form 21 February 2012; accepted 3 March 2012; available online 12 March 2012.

JEL classification: C44, F31, F37, G15, G17.
Abstract: Risky arbitraging based on interest rate differentials between two countries is typically referred to as a carry trade. Up until the recent global financial crisis, these trades generated years of persistent positive returns, which were hard to reconcile with standard pricing kernels. In 2008 these trades blew up, which seemed to weaken the case for a puzzle relating to predictable currency returns. But the rise and fall of this puzzle in the academic literature has only been concerned with naïve carry trades based on yield signals alone. We show, however, that some simple and more realistic fundamentals-augmented trading strategies would have generated strong and sustained positive profits that endured through the turmoil.

Keywords: Uncovered interest parity; Efficient markets; Exchange rates; Receiver operating characteristic curve; Correct classification frontier.
1. Introduction

Once a relatively obscure corner of international finance, the Naïve carry trade – a crude bilateral interest rate arbitrage – has drawn increasing attention in recent years, within and beyond academe, and especially as the volume of carry trade activity multiplied astronomically: daily turnover in the global foreign exchange (FX) market is US$4 trillion, which dwarfs that of the main stock exchanges combined. Inevitably, press attention has often focused on the personal angle. In 2007, The New York Times reported on the disastrous losses suffered
☆ Taylor has been supported by the Center for the Evolution of the Global Economy at UC Davis and Jordà by DGCYT Grant (SEJ2007-63098-econ); part of this work was completed while Taylor was a Houblon-Norman/George Fellow at the Bank of England; all of this research support is gratefully acknowledged. We thank Travis Berge and Yanping Chong for excellent research assistance. We acknowledge helpful comments from the editor and two anonymous referees, as well as from Vineer Bhansali, Richard Clarida, Robert Hodrick, Richard Meese, Michael Melvin, Stefan Nagel, Michael Sager, Mark Taylor, and seminar participants at the Bank of England, Barclays Global Investors, London Business School, London School of Economics, PIMCO, the 3rd annual JIMF-SCCIE conference, and the NBER IFM program meeting. All errors are ours.
⁎ Corresponding author. E-mail addresses: [email protected], [email protected] (Ò. Jordà), [email protected] (A.M. Taylor).
doi:10.1016/j.jinteco.2012.03.001
by the “FX Beauties” club and other Japanese retail investors during an episode of yen appreciation; one highly-leveraged housewife lost her family's entire life savings within a week.1 By the fall of 2008 attention was grabbed by an even more brutal squeeze on carry traders from bigger yen moves—e.g., up 60% against the AUD over 2 months, and up 30% against GBP (including 10% moves against both in 5 hours on the morning of October 24). Money managers on the wrong side saw their funds blowing up, supporting the June 2007 prediction of Jim O'Neill, chief global economist at Goldman Sachs, who had said of the carry trade that “there are going to be dead bodies around when this is over.”2 Economists have also been paying renewed attention to the carry trade, revisiting old questions: Are there returns to currency speculation? Are the returns predictable? Is there a failure of market efficiency? We are still far from a consensus, but the present state of the literature, how it got there, and how it relates to the interpretations of the current market turmoil, can be summarized in just a few moments, so to speak. 3
1 Martin Fackler, “Japanese Housewives Sweat in Secret as Markets Reel,” The New York Times, September 16, 2007. 2 Ambrose Evans-Pritchard, “Goldman Sachs Warns of ‘Dead Bodies’ after Market Turmoil,” The Daily Telegraph, June 3, 2007. 3 For a full survey of foreign exchange market efficiency see Chapter 2 in Sarno and Taylor (2002), on which we draw here.
The first point concerns the first moment of returns. There have been on average positive returns to Naïve carry trade strategies for long periods. 4 Put another way, the standard finding of a “forward discount bias” in the short run means that exchange rate losses will not immediately offset the interest differential gains of the Naïve carry strategy. 5 This finding is often misinterpreted as a failure of uncovered interest parity (UIP). But UIP is an ex-ante, not an ex-post, condition. And the evidence here, albeit limited, shows that average ex-ante exchange rate expectations (e.g., from surveys of traders) are not so far out of line with interest differentials. Rather, expectations themselves seem to be systematically wrong. 6 In this view, ex-post profits appeared to be both predictable and profitable, seemingly contradicting the risk-neutral efficient markets hypothesis. 7 However, over periods of a decade or two it is quite difficult to reject ex-post UIP, suggesting that interest arbitrage holds in the long run, and that the possible profit opportunities are a matter of timing. 8 The second point concerns the second moment of returns. If positive ex-ante returns are not arbitraged away, one way to rationalize them might be that they are simply too risky (volatile) to attract additional investors who might bid them away. This explanation has much in common with other finance puzzles, like the equity premium puzzle. The annual reward-to-variability ratio, or Sharpe ratio, of the S&P 500, roughly 0.4, can be taken as a benchmark level of risk-adjusted return. It is an open question whether carry trade strategies can reliably surmount that hurdle, even with diversification.9,10
4 We prefer to focus on the period of unfettered arbitrage in the current era of financial globalization—that is, from the mid-1980s on, for major currencies. Starting in the 1960s, the growth of the Eurodollar markets had permitted offshore currency arbitrage to develop. Given the increasingly porous nature of the Bretton Woods era capital controls, and the tidal wave of financial flows building up, the dams started to leak, setting the stage for the trilemma to bite in the crisis of the Bretton Woods regime in 1970–73. Floating would permit capital account liberalization, but the process was fitful, and not until the 1990s was the transition complete in Europe (Bakker and Chapple, 2002). Empirical evidence suggests significant barriers of 100 bps or more even to riskless arbitrage in the 1970s and even into the early 1980s (Frenkel and Levich, 1975; Clinton, 1998; Obstfeld and Taylor, 2004). Taking these frictions as evidence of imperfect capital mobility, we prefer to exclude the 1970s and the early 1980s from our empirical work. Other work, e.g., Burnside et al. (2008b,c), includes data back to the 1970s. 5 Seminal studies of the forward discount puzzle include Frankel (1980), Fama (1984), Froot and Thaler (1990), and Bekaert and Hodrick (1993). 6 Seminal works on survey expectations include Dominguez (1986), Frankel and Froot (1987), and Froot and Frankel (1989). For an update see Chinn and Frankel (2002). 7 A standard test of the predictability of returns (the “semi-strong” form of market efficiency) is to regress returns on an ex ante information set. The seminal work is Hansen and Hodrick (1980). Other researchers deploy simple profitable trading rules as evidence of market inefficiency (Dooley and Shafer, 1984; Levich and Thomas, 1993; Engel and Hamilton, 1990). 8 Support for long-run UIP was found using long bonds by Fujii and Chinn (2000) and Alexius (2001), and using short rates by Sinclair (2005).
9 It is also an open question whether expanding the currency set to include minor currencies, exotics, and emerging markets can add further diversification and so enhance the Sharpe ratio, or whether these benefits will be offset by illiquidity, trading costs, and high correlations/skew. Burnside et al. (2006) compute first, second, and third moments for individual currencies and portfolio-based strategies for major currencies and some minor currencies. In major currencies the transaction costs associated with bid-ask spreads are usually small, although 5–10 bps per trade can add up if an entire portfolio is churned every month (i.e., 60–120 bps annual). Burnside et al. (2007) explore emerging markets with adjustments for transaction costs. Both studies conclude that there are profit opportunities net of such costs, but the Sharpe ratios are low. Moreover, Sager and Taylor (2008) argue that the Burnside et al. returns and Sharpe ratios may be overstated. Pursuing a different strategy, however, using information in the term structure, Sager and Taylor are able to attain Sharpe ratios of 0.88 for a diversified basket of currencies in backtesting. 10 In this paper we proceed in a risk-neutral framework, a widely-used benchmark, in contrast to the more recent and contentious use of consumption-based asset pricing models in the carry trade context. Lustig and Verdelhan (2007) claimed that the relatively low Naïve carry trade returns may be explicable in a consumption-based model with reasonable parameters. Burnside (2011) challenged the usefulness of this approach. Returns to our systematic trading rules have very different moments, however, with mean returns twice as large as the typical Naïve strategy, leaving a considerable premium to be explained in either framework.
Finally, we turn to the third point of near consensus, on the third moment of returns. Naïve carry trade returns are negatively skewed. The lower-tail risk means that trades are subject to a risk of pronounced periodic crashes—sometimes referred to as a peso problem. Such a description applies to the aforementioned disaster events suffered by yen carry traders in 2007 and 2008 (and their predecessors in 1998). However, many competing investments, like the S&P 500, are also negatively skewed, so the question is once more a relative one. And while individual currency pairs may exhibit high skew, we know that diversification across currency pairs can be expected to drive down skewness in a portfolio. But for Naïve strategies some negative skew still remains under diversification, and it is then unclear whether excess returns are a puzzle, or required compensation for skew (and/or volatility). 11 In a carry trade, the only unknown is the behavior of exchange rates, and, at least since Meese and Rogoff (1983), the profession has mostly settled into a belief that fundamental explanations of the exchange rate have no predictive value in the short run. In this paper we show that misalignments in frictionless equilibrium conditions in financial and goods markets, like deviations from UIP or from the fundamental equilibrium exchange rate, or FEER, have significant directional classification ability: simple FX trading strategies that exploit this information largely avoid crashes, attenuate lower-tail risk, modulate volatility, boost returns, and hence improve the Sharpe and gain-loss (Bernardo and Ledoit, 2000) ratios. The contributions of our paper are as follows. We start from common ground and confirm all prior findings for our data (a panel of G-10 currencies): Naïve carry trades had been profitable over long stretches of time. However, a windowed out-of-sample analysis indicates that returns quickly deteriorated in 2007, and by 2008 four-year average returns turned negative.
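The windowed evaluation just described — fixed-length trailing windows of average monthly returns, so statistics stay comparable across samples — can be sketched as follows. This is a minimal illustration, not the authors' code; the window length and the simulated return series are placeholders:

```python
import numpy as np

# Fixed-length rolling windows of average monthly portfolio returns.
# A 48-month (four-year) window and a simulated return series are used
# purely for illustration.

rng = np.random.default_rng(1)
monthly_returns = rng.normal(0.004, 0.02, 276)   # e.g., 276 months of data
window = 48

# Trailing moving average: one entry per complete 48-month window.
kernel = np.ones(window) / window
rolling_mean = np.convolve(monthly_returns, kernel, mode="valid")

print(rolling_mean.shape)  # (229,)  i.e., 276 - 48 + 1 windows
```

Each entry of `rolling_mean` is the average return over one window, so a sign change in the final entries mirrors the "four-year average returns turned negative by 2008" pattern described in the text.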
The global financial crisis brought many investment strategies to their knees and the carry trade was no exception. From this perspective, it looked like the carry trade puzzle had been at least attenuated. But a Naïve carry trade is a crude investment strategy as it makes no attempt to hedge exchange rate risk. Marginally more sophisticated strategies extended by a FEER factor are a natural alternative. Moreover, by now simple nonlinear threshold models that weigh signals from departures of UIP and PPP equilibrium are commonplace (e.g., Taylor et al., 2001; Sarno et al., 2006). Might these simple extensions rescue the carry trade from the roiling waters of 2008? Indeed, this seems to be the case. FEER control not only improves statistical performance, it also enhances the financial performance of trading rules beyond that achieved by diversification alone. For simplicity and comparability, we apply these extensions to equal-weighted portfolios of G-10 currencies to serve as a backtesting laboratory. These improved carry trade strategies generate positive returns with moderate volatility and positive skew, and for the four-year average ending in 2008 eke out a 3.64% annual rate of return. We also compare our approach to rival crash protection strategies. One suggestion is that carry trade skew is driven by market stress and liquidity events (Brunnermeier et al., 2009); but we find that VIX signals provide no additional explanatory or predictive power in our preferred forecasting framework. Another approach suggests that options provide a way to hedge downside risk (Burnside et al., 2008b,c); but these strategies are very costly to implement compared to our trading rule. Interestingly, real-world trading strategies for the years up to and including the 2008 financial crisis offer another window into our results. The Naïve models and crude ETFs crashed horribly in 2008, wiping out years of gains. More sophisticated ETFs bearing some resemblance to
11 Brunnermeier et al. (2009) focus on the “rare event” of crash risk as an explanation for carry trade profits. Burnside et al. (2008b) argue that peso problems cannot explain carry trade profits, although in a different version of the same paper (Burnside et al., 2008c) they argue to the contrary.
our preferred model weathered the storm remarkably well with barely any drawdown. 12 Absent perfect arbitrage – the ability to make a gain without the possibility of loss in a zero-cost investment – one can always find a pricing kernel that explains the returns of any asset. We provide formal out-of-sample predictive ability testing results (Giacomini and White, 2006) using innovative returns-, Sharpe- and skew-based loss functions grounded in trading performance rather than the more conventional quadratic-loss approach. Still, it is hard to gain enough texture to tell whether or not the observed returns are commensurate compensation for risk. Our work is sympathetic to a well-established view that asset prices can deviate from their fundamental value for some time, before a “snap back” occurs (Poterba and Summers, 1986; Plantin and Shin, 2007). Recent theory has focused on what might permit the deviation, and what then triggers the snap. In FX markets, recent models include noise traders, heterogeneous beliefs, rational inattention, liquidity constraints, herding, habits, “behavioral” effects, and other factors that may limit arbitrage (e.g., Backus et al., 1993; Shleifer and Vishny, 1997; Jeanne and Rose, 2002; Bacchetta and van Wincoop, 2006; Fisher, 2006; Brunnermeier et al., 2009; Ilut, 2010). To begin a conversation on these topics, we combine notions of approximate arbitrage (Bernardo and Ledoit, 1999, 2000) with classification ability using tools popular in biostatistics, machine learning, and signal processing but rarely used in economics. 13 In addition to providing formal statistics on returns-weighted directional ability, the gain-loss ratios that we report form a basis of comparison with equity gain-loss ratios. Thus, our work also provides a starting point for describing the set of allowable pricing kernels that would explain the data.

2. Currency strategy design

Strict absence of arbitrage – defined as zero-cost investments that offer the possibility of gain with no possibility of loss – is a fundamental condition for equilibrium in asset markets. In this section we knit together basic no-arbitrage conditions in international economics to derive a predictive framework that nests commonly used carry trade strategies with alternatives that we propose. Ultimately the goal will be to assess which of these strategies generates approximate arbitrage opportunities (in the sense of Bernardo and Ledoit, 1999). Such opportunities are possibly incompatible with well-functioning capital markets. Since any strict no-arbitrage portfolio will be fairly priced for some set of investor preferences, it will be important for us to design a statistical metric that provides meaningful bounds for the set of pricing kernels that correctly price all portfolios. This we do in Section 4 below. Ex-post nominal excess returns to a carry trade are:

x_{t+1} = Δe_{t+1} + (i*_t − i_t),    (1)

where e_{t+1} is the log nominal exchange rate in U.S. dollars per foreign currency, and i_t and i*_t are nominal interest rates at home (U.S.) and abroad for a riskless deposit with a one-period maturity. In a frictionless world, the carry trade is a zero-cost investment; thus, in the absence of arbitrage the expected excess return to a risk-neutral investor should be zero, that is, E_t x_{t+1} = E_t Δe_{t+1} + (i*_t − i_t) = 0. This is the well-known uncovered interest rate parity (UIP) condition.
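As a concrete illustration, the excess return in Eq. (1) can be computed as follows — a minimal sketch with hypothetical numbers, not the authors' code:

```python
# One-period log excess return to a carry trade long the foreign currency,
# funded in USD, per Eq. (1): x_{t+1} = delta_e_{t+1} + (i*_t - i_t).
# All inputs are hypothetical monthly values in logs/decimals.

def carry_excess_return(e_t: float, e_t1: float,
                        i_star: float, i_home: float) -> float:
    """e_t, e_t1: log USD price of the foreign currency at t and t+1;
    i_star, i_home: one-period riskless rates abroad and at home."""
    return (e_t1 - e_t) + (i_star - i_home)

# Example: the foreign rate exceeds the USD rate by 50 bps for the month,
# while the foreign currency depreciates 0.2%; the carry still nets +0.3%.
x = carry_excess_return(e_t=0.0, e_t1=-0.002, i_star=0.006, i_home=0.001)
print(round(x, 4))  # 0.003
```

Under UIP the expected value of this quantity is zero: any expected interest differential is offset, ex ante, by expected depreciation of the high-yield currency.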
12 Pojarliev and Levich (2007, 2008) argue that active managers deliver less alpha than claimed, and that much of their returns can be described as quasi-beta with respect to style factors like carry, momentum, and value that are now captured in passive indices and funds. 13 The credit risk literature is an exception (see, e.g., Khandani et al., 2010).
Excess returns can also be expressed in real terms. Let p_{t+1} and p*_{t+1} denote the logarithm of the price level at home and abroad, so that π_{t+1} = Δp_{t+1} is the domestic inflation rate (and similarly for the inflation rate abroad, π*_{t+1}). Define the real exchange rate as q_{t+1} = q̄ + e_{t+1} + p*_{t+1} − p_{t+1}; then, since Δe_{t+1} = Δq_{t+1} − (π*_{t+1} − π_{t+1}), it is clear that Eq. (1) can be rewritten as:

x_{t+1} = Δq_{t+1} + (r*_t − r_t),    (2)
where E_t r_t = i_t − E_t π_{t+1} denotes the real interest rate (and similarly for E_t r*_t). Under the assumption of (weak) purchasing power parity, q̄ is the mean FEER to which q_t reverts in the long run, implying that q_t − q̄ is a stationary variable and hence a cointegrating vector. In a more general setting, q̄ could include time-varying elements (e.g., productivity differentials as in the Harrod–Balassa–Samuelson framework; see Chong et al., 2012), which we do not explore directly in this paper due to space considerations. Expressions (1) and (2) are naturally nested in the stochastic process of the stationary system:14

Δy_{t+1} = [Δe_{t+1}, π*_{t+1} − π_{t+1}, i*_t − i_t]′,    (3)
where deviations of the real exchange rate from FEER are a cointegrating vector. Further assuming linearity, a vector error correction model (VECM) is a natural benchmark representation for the stochastic process that describes the evolution of Δy_{t+1}. The last row of expression (3) may look peculiar for two reasons: (1) there is no natural integrand for i*_t − i_t, and (2) this variable is dated at time t. However, neither of these two characteristics poses any difficulty. Broadly speaking, underlying the cointegrating vector is the law of one price applied to goods trading across borders, where i*_t − i_t does not naturally appear. The timing convention is simply due to the fact that the returns from holding a riskless one-period bond are known with certainty when the transaction is entered. Absence of arbitrage under risk-neutrality is directly linked to the predictability of the first row of Δy_{t+1}. Thus a natural predictor of Δe_{t+1} is based on a linear combination of lagged information on Δe_t, (π*_t − π_t), i*_t − i_t, and q_t − q̄. Our analysis is based on a panel of nine countries: Australia, Canada, Germany, Japan, Norway, New Zealand, Sweden, Switzerland, and the United Kingdom, set against the U.S. over the period January 1986 to December 2008, observed at a monthly frequency (i.e., the “G10” sample).15 The dataset includes the end-of-month exchange rates, the one-month London interbank offered rates (LIBOR), and consumer price indices for the U.S. and each of the nine counterparty countries.16 A natural impulse is to determine the stationarity properties of the constituent elements of expression (3) so as to verify that q_{t+1} is a proper cointegrating vector.
Such steps are common when the objective of the analysis is to examine directly whether purchasing power parity holds in the data, or when one is interested in determining the speed of adjustment to long-run equilibrium, and they ensure that the estimators and inference are constructed with the appropriate non-standard asymptotic machinery. However interesting these issues are to investigate, they are of second-order importance for our analysis given our stated focus on classification ability and derivation
14 It would be straightforward to augment this model with an order-flow element, as in the VAR system of Froot and Ramadorai (2008). 15 Data for Germany after 1999 are constructed with the EUR/USD exchange rate and the fixed conversion rate of 1.95583 DEM/EUR. 16 Data on exchange rates and the consumer price index are obtained from the IFS database, whereas data on LIBOR are obtained from the British Bankers' Association (www.bba.org.uk) and Global Financial Data (globalfinancialdata.com).
of profitable investment strategies—the Beveridge–Nelson representation of the first equation in expression (3) is valid regardless of whether or not there is cointegration. For this reason, we provide a far less extensive analysis of these issues for our panel of data than is customary and refer the reader to the extensive literature on panel PPP testing (e.g., see Taylor and Taylor, 2004, and our related work, Chong et al., 2012). Instead, we investigate the properties of q_{t+1} directly with a battery of cointegration tests that pool the data to improve the power of the tests. These tests also account for different forms of heterogeneity in cross-section units and are based on the work of Pedroni (1999, 2004). The results of these tests are summarized in Table 1, with the particulars of each test explained therein. Broadly speaking, while not overwhelming, it seems reasonable to conclude that the data support our thinking of q_{t+1} as a valid cointegrating vector. While the results of Table 1 are informative regarding the relative strength of PPP, we will show momentarily that there is valuable predictive information contained in real exchange rate fundamentals for use in one-period-ahead directional predictions, which might form the basis of a trading strategy. Hence, we try various forecasting models, from Naïve models (e.g., simple carry trades) to more sophisticated models (e.g., threshold models of UIP and PPP dynamics). The models are then evaluated in two ways: first, from a statistical standpoint, by looking at the in-sample fit and out-of-sample predictive power; and second, from a trading standpoint, by examining how implementing these strategies would have affected a currency manager's returns, volatility, Sharpe ratios, gain-loss ratios, and skewness over the sample period.

3. A trading laboratory

The previous section sets the stage for the design of carry trade strategies.
This section calculates the equal-weighted currency portfolio returns of alternative designs out-of-sample. At first blush, it appears that a little sophistication can generate returns, and that these returns would have survived the recent global financial crash. Whether this performance holds up to statistical scrutiny and whether it threatens standard notions of equilibrium pricing are questions that are explored in the following sections. The most basic design consists in arbitraging bilateral interest-rate differentials and hence does not require a model per se: the popular Naïve carry trade assumes that exchange rates are an unpredictable random walk. But such a view rests solely on notions about UIP so it is natural to pit the Naïve carry against a model that also entertains a PPP
Table 1
Panel cointegration tests.

                        Constant                             Constant + trend
                 Raw              Demeaned            Raw              Demeaned
Test             stat    p-val    stat    p-val       stat    p-val    stat    p-val
Panel ν          2.50**  0.017    3.82**  0.000       0.14    0.395    2.04*   0.050
Panel ρ         −1.68*   0.097   −2.22**  0.034      −0.56    0.340   −1.78*   0.079
Panel PP        −1.59    0.113    1.68*   0.097      −1.09    0.220   −1.71*   0.093
Panel ADF        0.99    0.245   −0.54    0.345       1.27    0.179   −0.39    0.370
Group ρ         −0.51    0.350   −1.71*   0.092       0.48    0.356   −2.12**  0.042
Group PP        −0.98    0.246   −1.65    0.103      −0.40    0.368   −2.06**  0.048
Group ADF        7.31**  0.000    0.59    0.335       7.16**  0.0009   0.74    0.303

Notes: We use the same definitions of each test as in Pedroni (1999). The tests are residual-based. We consider tests with a constant term and no trend, and with constant and time trend. The first four rows describe tests that pool along the within-dimension, whereas the last three rows describe tests that pool along the between-dimension. The tests allow for heterogeneity in the cointegrating vectors, in the dynamics of the error process, and across cross-sectional units. ** (*) denotes rejection of the null of no cointegration at the 95% (90%) confidence level. The sample is January 1986 to December 2008.
factor. We call this the Naïve ECM strategy. A more sophisticated investor would then consider including lagged information, a strategy we call VECM if it includes a PPP factor, or simply VAR if it does not. Finally, Coakley and Fuertes (2001), Sarno et al. (2006), and others have found that exchange rates behave like a random walk as long as deviations from UIP and PPP are not large; when deviations are large, reversion to parity tends to happen rather quickly. Hence, the last strategy we entertain is a simple threshold model based on these precepts, which we call TECM. There are undoubtedly numerous other alternatives (especially once the nonlinear door is opened) but our objective is not to search for the “best” strategy. Rather, we want to show that a little sophistication can have an unexpectedly big payoff. Currency funds invest in a portfolio of currencies. This allows funds to take advantage of gains from diversification: standard statistical intuition (cf. the Central Limit Theorem) tells us that variance and skewness can be curtailed by taking an average of draws that are imperfectly correlated. 17 As a practical matter, for many trading strategies studied in the academic literature, and some used in practice, the portfolio choice is restricted—for reasons of simplicity, comparability, diversification, and transparency. The simplest restriction is to employ simple equal-weight portfolios where a uniform bet of size $1/N is placed on each of N FX-USD pairs, so that the only forecast-based decision is the binary long-short choice for each pair in each period. Granted, lifting this restriction to allow adjustable portfolio weights must, ipso facto, allow even better trading performance, at least on backtests. But from our standpoint of presenting a puzzle – showing a strategy with high returns, Sharpes, gain-loss ratios, and low skewness – meaningful forecasting success even when restricted to an equal-weight portfolio is evidence enough to support our story.
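The equal-weight restriction just described — a uniform $1/N bet on each pair, with only the long-short direction chosen by the forecast — can be sketched as follows. The forecast and return series below are simulated placeholders, standing in for any of the models above:

```python
import numpy as np

# Equal-weight long-short FX portfolio: a uniform $1/N bet on each of the
# N FX-USD pairs, going long (short) a pair when the model's return
# forecast for that pair is positive (negative). Data here are simulated.

rng = np.random.default_rng(0)
N, T = 9, 60                                   # 9 currencies, 60 months
forecasts = rng.normal(0.0, 0.01, (T, N))      # model return forecasts
realized = rng.normal(0.002, 0.02, (T, N))     # realized excess returns

positions = np.sign(forecasts)                 # +1 long, -1 short each pair
portfolio_returns = (positions * realized).mean(axis=1)   # $1/N per pair

print(portfolio_returns.shape)  # one net return per month: (60,)
```

The Naïve carry trade is the special case in which the forecast for each pair is simply its interest differential, so the position is always long the high-yield currency.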
A typical investor prefers to assess the performance of alternative strategies through out-of-sample analysis, since any in-sample results could be the result of over-fitting. However, the recent global financial crisis adds a new wrinkle to this assessment: How sensitive are the conclusions to the inclusion/exclusion of this once-in-a-lifetime event? To investigate this issue we consider three windows of fixed length (so as to make the statistics comparable across samples) over which out-of-sample returns are calculated: 2002–2006; 2003–2007; and 2004–2008. That is, one-step-ahead forecasts are generated, one at a time, with a rolling window that begins with the January 1986 observation and finishes with the December 2001 observation (say, for the first of the windows). The evolution of trading performance over time turns out to be quite revealing, reinforcing the puzzles we uncover. The basic format of the analysis below is as follows. We report in-sample estimates of the models along with basic statistics to get a sense of the contribution of each explanatory factor. Next we compute out-of-sample returns using the three windows just described. For each window we report the average returns of the strategy over the period, their standard deviation and hence their annualized Sharpe ratio, their skewness, and their annualized gain-loss ratio (Bernardo and Ledoit, 2000). The Sharpe ratio is a common measure of performance which has been found unreliable in some circumstances (see, e.g., Cochrane and Saá-Requejo, 2000; Černý, 2003). The gain-loss ratio of Bernardo and Ledoit (2000) overcomes

17 A numerical illustration is provided by Burnside et al. (2008a). 18 For example, we have experimented with linear-weight portfolios based on interest differential weights, or on forecast return weights, and these can sometimes attain higher in-sample returns and Sharpe ratios. These results are not shown. One could also go further and attempt to construct “optimized” portfolios based on mean-variance optimization. However, the value of this approach remains in question, as recent empirical work shows that in many contexts the supposed performance gains of mean-variance optimization may be minimal or illusory (DeMiguel et al., 2009). In addition, the covariance matrices so estimated can produce extreme weights. As with any unequal-weight approach this can lead, in practice, to recurring trades where portfolio managers bump into position limits for single currencies imposed either by management or by mandate. Thus, like other work in this literature, we prefer to stick to the cleaner and simpler equal-weight approach in this paper.
these shortcomings and, as we will see, has a natural connection to the new statistical metrics that we introduce below.

3.1. Two Naïve models

Table 2 presents the one-month-ahead forecasts of the change in the log nominal exchange rate based on two Naïve models. Model 1, denoted “Naïve,” implicitly imposes the assumption that the exchange rate follows a random walk. All regressors are omitted so there is, in fact, no estimated model. This is simply the benchmark or null model. However crude it may be, this is still a model that one might use as a basis for trading – long high yield, short low yield – so panel (b) explores the out-of-sample trading performance of an equal-weighted currency portfolio with all nine currencies against the U.S. dollar based on such a strategy. This is done over the three out-of-sample windows described above. In this table, and in what follows, we report the profits the investor would have actually made based on an equal-weighted portfolio of long-short trades in each currency, in each period, with direction given by the sign of the model's return forecast. Over the initial window ending in 2006, realized profits are a healthy 54 basis points (bps) per month, with low volatility and skew, thus generating an impressive 1.59 annualized Sharpe ratio and a remarkable 2.98 gain-loss ratio (expected gains are about three times expected losses). As a reference, Bernardo and Ledoit (2000) calculate the gain-loss ratio for the stock market index over the period 1959 to 1997 to be 2.6, assuming logarithmic utility. However, as the sample window rolls over 2007 and then 2008, when the financial crisis is in full swing, the picture changes quite dramatically. The same currency portfolio
Table 2 Two Naïve models. Model
(1)
(2)
Naïve
Naïve ECM
(a) Model parameters Dependent variable log(qit − qi)
Δeit + 1 —
R2, within F Fixed effects Periods Currencies (relative to US$) Observations
— — — 276 9 2484
Δeit + 1 − 0.0235** (0.0047) .0119 F(1,2447) = 25.39 Yes 276 9 2484
0.0054 0.0117 − 0.2181 1.5884 2.9819
0.0062 0.0145 0.5792 1.4870 3.3678
0.0030 0.0120 − 0.0630 0.8697 1.8511
0.0036 0.0131 0.8712 0.9466 2.2078
− 0.0024 0.0179 − 2.9748 − 0.4715 0.6438
− 0.0012 0.0143 − 1.6695 − 0.2868 0.7773
(b) Out-of-sample trading performance † Realized profits 2002–2006 Mean Std. dev. Skewness Sharpe ratio (annualized) Gain/loss ratio Realized profits 2003–2007 Mean Std. dev. Skewness Sharpe ratio (annualized) Gain/loss ratio Realized profits 2004–2008 Mean Std. dev. Skewness Sharpe ratio (annualized) Gain/loss ratio
† Statistics for trading profits are net, monthly, from a equally weighted portfolio. Notes: Standard errors in parentheses; ** (*) denotes significant at the 95% (90%) confidence level. Estimation is by ordinary least squares. In column (1), no model is estimated as model coefficients are assumed to be zero and e follows a random walk. Estimation in column (2) is by ordinary least squares. The sample is January 1986 to December 2008.
would now be losing money at a rate of 24 bps per month, with considerable increases in volatility and negative skew, resulting in a − 0.47 Sharpe ratio and a gain-loss ratio of 0.64, below the strict no-arbitrage value of 1. Model 2, denoted “Naïve ECM,” augments the Naïve model: we embrace the idea that real exchange rate fundamentals may have predictive power, in that currencies overvalued (undervalued) relative to FEER might be expected to face stronger depreciation (appreciation) pressure. Now in panel (a) the one-month-ahead forecast equation has just a single, time-varying regressor, the lagged real exchange rate deviation, plus currency-specific fixed effects which are not shown. The coefficient on the real exchange rate term of − 0.02 is statistically significant and indicates that reversion to FEER occurs at a convergence speed of about 2% per month on average, which implies that deviations have a half-life of 36 months—within the consensus range of the literature. In panel (b) the potential virtues of a FEER based trading approach start to emerge, but only weakly. Performance in the window ending in 2006 is even better than with the Naïve model at 62 bps per month, albeit with slightly higher volatility, explaining the lower Sharpe ratio, now at 1.49. However, the gain-loss ratio is instead 3.37, even more impressive. But just as before as the windows roll over to include the financial crisis, gains turn to losses (12 bps per month), positive skew to negative skew with a Sharpe ratio of −0.29 and a gain-loss ratio of 0.78. By focusing mainly on unconditional Naïve carry trade returns, some influential academic papers have concluded that crash risk and peso problems are important features of the FX market. Table 2 may be extremely Naïve, but it suffices to launch the start of a counterargument. 
It shows that an allowance for real exchange rate fundamentals can limit adverse skewness from the carry trade, and it is indeed well known that successful strategies do just that, as we shall see in a moment. However, we do not stop here, since we have the tools to explore superior but simple model specifications that deliver trading strategies with even higher ex-post actual returns, lower volatilities, and negligible (or positive) skew.

3.2. Two linear models

Table 3 shows one-month-ahead forecasts from the richer flexible-form exchange rate models discussed in the previous section.

Table 3
Two linear models.

(a) Model parameters
                              (3) VAR              (4) VECM
Dependent variable            Δe_{it+1}            Δe_{it+1}
Δe_it                         0.1218** (0.0267)    0.1333** (0.0268)
i*_it − i_it                  0.2889 (0.3529)      0.7555** (0.3506)
π*_it − π_it                  0.2325 (0.1603)      0.1690** (0.0047)
q_it − q̄_i                    –                    −0.0277** (0.0047)
R², within                    0.0164               0.0319
F                             F(3,2427) = 8.66     F(4,2426) = 14.36
Fixed effects                 Yes                  Yes
Periods                       276                  276
Currencies (relative to US$)  9                    9
Observations                  2484                 2484

(b) Trading performance†
Realized profits 2002–2006
  Mean                        0.0045               0.0057
  Std. dev.                   0.0145               0.0144
  Skewness                    0.1062               −0.1141
  Sharpe ratio (annualized)   1.0706               1.3641
  Gain/loss ratio             2.2303               2.7335
Realized profits 2003–2007
  Mean                        0.0022               0.0021
  Std. dev.                   0.0145               0.0142
  Skewness                    0.0706               −0.5490
  Sharpe ratio (annualized)   0.5239               0.5161
  Gain/loss ratio             1.4734               1.4547
Realized profits 2004–2008
  Mean                        0.0004               0.0027
  Std. dev.                   0.0153               0.0161
  Skewness                    0.0210               0.3148
  Sharpe ratio (annualized)   0.0834               0.5779
  Gain/loss ratio             1.0644               1.5646

† Statistics for trading profits are net, monthly, from an equally weighted portfolio.
Notes: Standard errors in parentheses; ** (*) denotes significance at the 95% (90%) confidence level. Estimation is by ordinary least squares. The sample is January 1986 to December 2008.

Model 3 is based on a regression of the change in the nominal exchange rate on the first lag of the change in the nominal exchange rate, inflation, and the interest differential (this model is labeled VAR as it corresponds to the first equation of a vector autoregressive model for Δy_{t+1} in expression (3)). Model 4 extends the specification with the real exchange rate level, and is labeled VECM by analogy with vector error correction. As with the Naïve ECM (Model 2), a plausible convergence speed close to 3% per month is estimated. Other coefficients conform to standard results and folk wisdom. The positive coefficient on the lagged change in the nominal exchange rate of 0.12 suggests a modest but statistically significant momentum effect. The positive coefficient on the lagged interest differential, closer to +1 than −1, conforms to the well-known forward discount puzzle: currencies with high interest rates tend to appreciate, all else equal. The positive coefficient on the inflation differential conforms to recent research findings that "bad news about inflation is good news for the exchange rate" (Clarida and Waldman, 2008): under Taylor rules, high inflation might be expected to lead to monetary tightening and future appreciation.

We can see that these models offer some improvement over the Naïve ECM (Model 2). The R² is higher and the explanatory variables are statistically significant. They also weather the financial crisis better than the Naïve models in Table 2. During the window ending in 2006, returns are 45 and 57 bps per month for the VAR and VECM models respectively, slightly lower than the 54 and 62 bps per month of the Naïve and Naïve ECM models, resulting in lower Sharpe and gain-loss ratios as well. This remains true as we move on to the early stages of the financial crisis with the window ending in 2007, but by the time the crisis erupts, the advantages of these more sophisticated strategies begin to emerge. The VAR model ekes out barely positive returns of 4 bps per month, which deliver a pyrrhic Sharpe ratio and a gain-loss ratio indistinguishable from the strict no-arbitrage norm of one. In contrast, adding the FEER term seems to make a big difference: returns over this period are 27 bps per month, skew is slightly positive, the Sharpe ratio is 0.58, and the gain-loss ratio a respectable 1.56. Statistically the VECM is preferred, with R² and F-statistic twice as large, and the coefficient on the lagged real exchange rate highly significant. However, the key finding is that when trades are conditional on deviations from FEER, actual returns have a skew that is zero or mildly positive.

3.3. A threshold model

Are these the best models we can find? We think not, due to the limitations of linear models. Two arguments force us to reckon with potential nonlinearities in exchange rate dynamics. The first possible reason for the still disappointing performance of the VECM (Model 4) is that the model places excessively high weight on the raw carry signal emanating from the interest differential in some risky "high carry" circumstances. This may lead to occasional large losses, even if it is not enough to cause negative skew. Recall that the Naïve ECM (Model 2) placed zero weight on this signal as part of an exchange rate forecast,
whereas here the weight is 0.76. We conjecture that the true relationship may be nonlinear. Here we follow an emerging literature which suggests that deviations from UIP are corrected in a nonlinear fashion: that is, when interest differentials are large the exchange rate is more likely to move against the Naïve carry trade, and in line with the unbiasedness hypothesis (e.g., Coakley and Fuertes, 2001; Sarno et al., 2006).

The second possible reason for disappointing performance might be related not to nonlinear UIP dynamics, but rather to nonlinear PPP dynamics. Here it is possible that the VECM places too little weight on the real exchange rate signal in some circumstances, e.g. when currencies are heavily over/undervalued and thus more prone to fall/rise in value. Here again, we build on an influential literature which points out that reversion to relative PPP, largely driven by nominal exchange rate adjustment, is likely to be more rapid when deviations from FEER are large (e.g., Obstfeld and Taylor, 1997; Michael et al., 1997; Taylor et al., 2001).

To sum up, past empirical findings suggest that the parameters of the Naïve ECM (Model 2) may differ between regimes with small and large carry trade incentives, as measured by the lagged interest rate differential; and also between regimes with small and large FEER deviations, as measured by the lagged real exchange rate. In addition to these arguments for a nonlinear model, the important role of the third moment in carry trade analyses also bolsters the case, given the deeper point made in the statistical literature that skewness cannot be adequately addressed outside of a nonlinear modeling framework (Silvennoinen et al., 2008). To explore these possibilities we extend the framework further to allow for nonlinear dynamics.
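A generic check for regime-dependent parameters of this kind is to augment the linear model with polynomial terms in a candidate threshold variable and test their joint significance with an F statistic. A minimal sketch on synthetic data follows; the data-generating process is an illustrative assumption, not the paper's panel VECM.

```python
# Minimal sketch (synthetic data): test linearity by adding polynomial
# terms in a candidate threshold variable and testing them jointly.
import numpy as np

rng = np.random.default_rng(1)
T = 500
s = rng.normal(size=T)                   # threshold variable (e.g., a lagged differential)
x = rng.normal(size=T)                   # linear regressor
y = 0.5 * x + 0.8 * x * s**2 + rng.normal(size=T)   # truth is nonlinear in s

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return float(e @ e)

ones = np.ones(T)
X0 = np.column_stack([ones, x])                                     # restricted: linear
X1 = np.column_stack([ones, x] + [x * s**k for k in (1, 2, 3, 4)])  # polynomial expansion

q = X1.shape[1] - X0.shape[1]
F = ((rss(X0, y) - rss(X1, y)) / q) / (rss(X1, y) / (T - X1.shape[1]))
print(F)  # a large F rejects linearity
```

The same logic scales to several threshold variables at once by stacking their polynomial terms, which is the spirit of the joint tests reported below.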
We first perform a test against nonlinearity of unknown form and find strong evidence of nonlinearity driven by the absolute magnitudes of interest differentials and real exchange rate deviations. We then implement a simple nonlinear threshold error-correction model (TECM). For illustration, we employ an arbitrarily partitioned dataset with thresholds based on median absolute magnitudes.

Table 4
Nonlinearity tests.

H0: Nonlinear in |i*_it − i_it| and |q_it − q̄_i| versus:             Test statistic
  (a) H1: Nonlinear in |q_it − q̄_i| and linear in |i*_it − i_it|     F(44,2337) = 4.28, p = 0.0000
  (b) H2: Nonlinear in |i*_it − i_it| and linear in |q_it − q̄_i|     F(52,2337) = 1.42, p = 0.0262
  (c) H3: Linear in |i*_it − i_it| and linear in |q_it − q̄_i|        F(96,2337) = 28.04, p = 0.0000

Notes: p refers to the p-value for the LM test of the null of linearity against the alternative of nonlinearity based on a polynomial expansion of the conditional mean with the selected threshold arguments. See Granger and Teräsvirta (1993).

Table 4 presents the nonlinearity tests using Model (4) in Table 3, the VECM model, where the candidate threshold variables are the absolute magnitudes of the lagged interest differential |i*_it − i_it| and the lagged real exchange rate deviation |q_it − q̄_i|. Although more specific nonlinearity tests tailored to particular alternatives would be more powerful, we are already able to reject the null forcefully against generic forms of neglected nonlinearity, simply by taking a polynomial expansion of the linear terms based on the threshold variables. This type of approach is often used in testing against linearity when a smooth transition regression (STAR) model is specified, but it is clearly not limited to this alternative hypothesis (see Granger and Teräsvirta, 1993). The test takes the form of an auxiliary regression involving polynomials of each variable, up to the fourth power. The results in Table 4 show that the hypothesis of linearity can be rejected at standard significance levels whether the threshold variables are considered individually or jointly. Thus our VECM Model (4) in Table 3 appears to be misspecified, and so
we are moved to develop a more sophisticated yet parsimonious threshold model. For illustration, we use a simple four-regime TECM, where the regimes are delineated by threshold values taken to be the median levels of the absolute values of the regressors, I_it = |i*_it − i_it| and Q_it = |q_it − q̄_i|, which for simplicity will be denoted θi and θq respectively.

Estimates of this model, denoted Model (5) in Table 5, show that from both an econometric and a trading standpoint, the performance of the nonlinear model is outstanding. It also accords with many widely recognized state-contingent FX market phenomena. We see from panel (a) in Table 5 that the estimated model coefficients differ sharply across the four regimes. The null of no differences across regimes is easily rejected. In column 1 of Table 5 the model performs rather poorly in the regime where both interest differentials
Table 5
Threshold model.

Model (5): TECM

(a) Model parameters (by regime)
Regime:                       Q_it < θq,          Q_it < θq,          Q_it > θq,          Q_it > θq,
                              I_it < θi           I_it > θi           I_it < θi           I_it > θi
Dependent variable            Δe_{it+1}           Δe_{it+1}           Δe_{it+1}           Δe_{it+1}
Δe_it                         0.0637 (0.0417)     0.1228** (0.0494)   0.1029* (0.0574)    0.2027** (0.0567)
i*_it − i_it                  2.5033** (1.1577)   1.7790** (0.6379)   0.8707 (1.3585)     −0.5734 (0.7811)
π*_it − π_it                  0.6731 (0.4101)     0.3836 (0.3880)     −0.5269 (0.3945)    0.1331 (0.2173)
q_it − q̄_i                    −0.0572** (0.0011)  −0.0564** (0.0265)  −0.0146** (0.0072)  −0.0364** (0.0077)
R², within                    0.0277              0.0419              0.0176              0.0749
F                             F(4,642) = 4.63     F(4,549) = 4.29     F(4,554) = 1.90     F(4,641) = 8.60
Fixed effects                 Yes                 Yes                 Yes                 Yes
Periods                       276                 276                 276                 276
Currencies (relative to US$)  9                   9                   9                   9
Observations (by regime)      655                 562                 567                 654

(b) Trading performance†
Realized profits 2002–2006
  Mean                        0.0058
  Std. dev.                   0.0128
  Skewness                    0.2242
  Sharpe ratio (annualized)   1.5776
  Gain/loss ratio             3.3385
Realized profits 2003–2007
  Mean                        0.0033
  Std. dev.                   0.0134
  Skewness                    −0.0290
  Sharpe ratio (annualized)   0.8524
  Gain/loss ratio             1.9007
Realized profits 2004–2008
  Mean                        0.0030
  Std. dev.                   0.0129
  Skewness                    0.0664
  Sharpe ratio (annualized)   0.8024
  Gain/loss ratio             1.8658

† Statistics for trading profits are net, monthly, from an equally weighted portfolio.
Notes: Standard errors in parentheses; ** (*) denotes significance at the 95% (90%) confidence level. Estimation is by ordinary least squares.

and real exchange rate deviations are small. Still, there is weak evidence for a self-exciting dynamic with a very strong forward bias: the coefficient on the interest differential is large (+2.5), consistent with traders attracted by carry incentives bidding up higher-yield currencies. However, for small interest differentials, we see from column 3 in Table 5 that this effect disappears once large real exchange rate deviations emerge; still, the fit of the model remains quite poor in column 3 as well. In column 2 of Table 5, where the interest differential is above its median, this self-exciting property is also clearly manifest, with a coefficient of +1.8 that is statistically significant at the 5% level. The results strongly suggest that wide interest differentials take exchange rates "up the stairs" on a gradual appreciation away from fundamentals. Of course, for given price levels or inflation rates, such dynamics quickly cumulate in ever-larger real exchange rate deviations, moving the model's regime to column 4 in Table 5, where the fit of the model is best, as shown by a high R² and F-statistic. And here the dynamics change dramatically, such that the exchange rate can rapidly descend "down the elevator": the lagged real exchange rate carries a highly significant convergence coefficient of 3.6% per month, working against previous appreciation of the target currency, and reverse exchange rate momentum then snowballs, with the lagged exchange rate change carrying a now significant and large coefficient of +0.2.

In addition to being the preferred model based on statistical tests, does the nonlinear TECM Model (5) in Table 5 deliver better trading strategies? Panel (b) in Table 5 suggests so. In the calmer period, where the sample ends in 2006, returns are second only to the Naïve ECM (Model 2 in Table 2) at 58 bps per month, with positive skew, a Sharpe ratio of 1.58, and an extraordinary gain-loss ratio of 3.34. The TECM remains a strong performer as the financial crisis creeps in, with returns of 33 bps per month, again second only to the Naïve ECM model, and a gain-loss ratio that still makes expected average gains almost twice expected average losses. However, during the financial crisis, the returns on the TECM strategy only fall from 33 to 30 bps per month, whereas Naïve ECM returns fall from 36 bps into negative territory, at −12 bps per month. All the while, the TECM strategy maintains a neutral skew, a Sharpe ratio of 0.80, and an only marginally lower gain-loss ratio of 1.87. Thus, a simple nonlinear extension of the VECM model generates considerable protection against crash risk while providing reasonably high returns.
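The four-regime partition behind the TECM can be sketched as follows on toy data, splitting observations at the median absolute interest differential and the median absolute real exchange rate deviation; the simulated series are illustrative assumptions.

```python
# Sketch (toy data): assign observations to four TECM-style regimes
# defined by median absolute interest differentials and median absolute
# real exchange rate deviations.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
I = np.abs(rng.normal(0.0, 0.02, n))    # I_it = |i*_it - i_it|
Q = np.abs(rng.normal(0.0, 0.10, n))    # Q_it = |q_it - q_bar_i|
theta_i, theta_q = np.median(I), np.median(Q)

# Regime coding: 0 = (Q < theta_q, I < theta_i), 1 = (Q < theta_q, I > theta_i),
#                2 = (Q > theta_q, I < theta_i), 3 = (Q > theta_q, I > theta_i)
regime = 2 * (Q > theta_q).astype(int) + (I > theta_i).astype(int)
counts = np.bincount(regime, minlength=4)
print(counts, counts.sum())
```

In a full implementation one would then estimate a separate error-correction regression on the observations in each regime, as in panel (a) of Table 5.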
3.4. Where do performance improvements come from?

What explains the trading performance of the TECM? Table 6 breaks down the fraction of profitable trading positions correctly called by each model, both in-sample and for all three out-of-sample windows. The basic conclusion is that there are no discernible differences across strategies, as is most easily seen in the pooled results. Even in the out-of-sample window ending in 2008, where the TECM appears to dominate the Naïve model strongly, the difference between correct classification rates is only 54% versus 51%.

So what is so great about our so-called refined models? The answer is given in Table 7 and explored more formally in the next section. It is not the ability to pick the right direction of trades more often; it is the ability to pick the right direction on trades that are likely to result in large gains or losses. For simplicity, this table compares just the Naïve and TECM models. In-sample (panel a) we find that, over 2454 pooled trades, the two models agreed on direction 1556 times and disagreed 898 times. When in agreement, returns were 66 bps per month, the annual Sharpe ratio was 0.81, and skew was −0.47; the worst trade lost 12% in a month, and the best gained about the same, 12%. But when the models took opposite sides, the differences were stark. The Naïve model collected negative returns of 41 bps, the TECM just the opposite. Annual Sharpes were −0.44 versus 0.44. Skew was −0.85 versus 0.85. Naïve suffered a worst trade losing 18%, but the TECM had that same trade on the plus side. The TECM's worst month was only a 10% loss, and that was Naïve's best.
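An agree/disagree decomposition of this kind can be sketched as follows; the two directional signals below are synthetic stand-ins for the fitted models, so the numbers are illustrative only.

```python
# Sketch (synthetic data): split trades by whether two directional models
# agree, then compare realized returns on the two subsets, Table 7 style.
import numpy as np

rng = np.random.default_rng(3)
T = 2000
x = rng.normal(0.0, 0.03, T)                        # realized trade returns
d_a = np.where(rng.normal(size=T) > 0, 1.0, -1.0)   # model A direction (uninformative)
d_b = np.where(x + rng.normal(0.0, 0.05, T) > 0, 1.0, -1.0)  # model B (weakly informative)

agree = d_a == d_b
ret_a, ret_b = d_a * x, d_b * x

print("agree:   ", int(agree.sum()), ret_a[agree].mean())
print("disagree:", int((~agree).sum()), ret_a[~agree].mean(), ret_b[~agree].mean())
```

By construction, returns coincide on agreement trades, while on disagreement trades one model's return is the exact negative of the other's, which is why the subset statistics in a table like Table 7 appear as mirror images.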
Table 6
Fraction of profitable trading positions correctly called.

(a) In sample
                Naïve   Naïve ECM   VAR    VECM   TECM
Australia       0.63    0.57        0.57   0.56   0.56
Canada          0.58    0.52        0.57   0.56   0.55
Germany         0.52    0.53        0.56   0.54   0.56
Japan           0.58    0.54        0.61   0.60   0.56
New Zealand     0.63    0.59        0.62   0.58   0.59
Norway          0.57    0.55        0.57   0.57   0.58
Sweden          0.63    0.51        0.66   0.66   0.62
Switzerland     0.54    0.55        0.58   0.55   0.58
UK              0.53    0.55        0.49   0.53   0.54
Pooled          0.58    0.55        0.58   0.57   0.57

(b) Out of sample 2002–2006
Australia       0.63    0.60        0.60   0.55   0.68
Canada          0.60    0.63        0.50   0.63   0.57
Germany         0.60    0.58        0.53   0.58   0.57
Japan           0.55    0.63        0.55   0.63   0.55
New Zealand     0.75    0.75        0.72   0.75   0.73
Norway          0.62    0.63        0.58   0.58   0.58
Sweden          0.67    0.58        0.67   0.67   0.67
Switzerland     0.52    0.57        0.55   0.52   0.53
UK              0.55    0.57        0.57   0.55   0.50
Pooled          0.61    0.62        0.59   0.61   0.60

(c) Out of sample 2003–2007
Australia       0.62    0.58        0.57   0.50   0.63
Canada          0.52    0.57        0.57   0.58   0.58
Germany         0.50    0.48        0.43   0.48   0.48
Japan           0.55    0.63        0.57   0.62   0.55
New Zealand     0.72    0.72        0.67   0.67   0.63
Norway          0.52    0.52        0.52   0.48   0.50
Sweden          0.60    0.55        0.62   0.63   0.62
Switzerland     0.52    0.53        0.53   0.50   0.53
UK              0.55    0.50        0.53   0.48   0.45
Pooled          0.56    0.56        0.56   0.55   0.55

(d) Out of sample 2004–2008
Australia       0.55    0.52        0.55   0.47   0.62
Canada          0.43    0.50        0.47   0.55   0.48
Germany         0.45    0.43        0.42   0.48   0.47
Japan           0.57    0.62        0.62   0.65   0.62
New Zealand     0.62    0.62        0.58   0.58   0.58
Norway          0.47    0.47        0.52   0.52   0.53
Sweden          0.53    0.50        0.57   0.60   0.60
Switzerland     0.53    0.52        0.57   0.57   0.48
UK              0.45    0.40        0.50   0.48   0.43
Pooled          0.51    0.51        0.53   0.54   0.54

Note: The in-sample period is 1986 to 2008.
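A correct-call fraction of the kind reported in Table 6 is simply a directional hit rate. A minimal sketch on synthetic forecasts follows; both series are illustrative assumptions.

```python
# Sketch (synthetic data): fraction of positions whose profitable direction
# a forecast calls correctly, i.e., how often sign(forecast) == sign(return).
import numpy as np

def hit_rate(x_hat, x):
    """Fraction of periods where the forecast calls the profitable direction."""
    return float(np.mean(np.sign(x_hat) == np.sign(x)))

rng = np.random.default_rng(5)
x = rng.normal(0.0, 0.03, 300)              # realized returns
good = x + rng.normal(0.0, 0.01, 300)       # forecast tracking returns closely
noise = rng.normal(0.0, 0.03, 300)          # uninformative forecast

print(hit_rate(good, x), hit_rate(noise, x))
```

An uninformative forecast hovers near 0.5, which is why the pooled entries in Table 6 cluster in the low-to-mid 0.5 range.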
The out-of-sample results in panels (b)–(d) reinforce this story, especially panel (d), which refers to the out-of-sample window ending in 2008, in the middle of the global financial crisis. When the models agreed, returns were virtually zero. The largest loss is about 12% in a month again, and the largest gain 10%. But when the models disagreed, mean returns are a 76 bps loss for Naïve, and a 76 bps gain for TECM. This is largely because the worst loss for Naïve was 18% in a month, which was the largest gain for TECM. On the other hand, the largest loss for TECM was only about 7%, and that was the largest gain for Naïve. With these results it is not hard to see that when the models disagree, skewness is highly negative for Naïve at −0.73 but of the opposite sign for TECM.

We conclude this section by asking two final questions: (1) How many of the most profitable trades took place in the extreme regimes of the TECM strategy, where it differs most from Naïve? And (2) how much churning is there in the portfolios? To answer the first question, we considered the 898 in-sample trades where the models disagree and tabulated what portion of the trades belonged to each of the four regimes of the TECM strategy: 28% of the trades fall inside both the UIP and PPP thresholds; 22.5% exceed the UIP threshold but not the PPP threshold; 29.2% exceed the PPP but not the UIP threshold; and 20.4% exceed both thresholds. Therefore, there does not seem to be a specific condition that is consistently plucking the best trades.
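A churning index of the kind used below, counting how often a directional position is reversed (0 if never reversed, 1 if reversed every period), can be sketched as:

```python
# Sketch: churning index = fraction of periods in which a directional
# position is reversed (0 = never reversed, 1 = reversed every period).
import numpy as np

def churning(positions):
    """positions: +1/-1 directional positions over time for one currency."""
    p = np.asarray(positions, dtype=float)
    return float(np.mean(p[1:] != p[:-1]))

always_long = np.ones(100)
alternating = np.where(np.arange(100) % 2 == 0, 1.0, -1.0)
print(churning(always_long), churning(alternating))  # → 0.0 1.0
```

Multiplying this index by a per-reversal transaction cost gives a quick upper-bound adjustment to the gross returns reported here.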
Each time a position is reversed the trader is likely to incur transaction fees that eat into returns. We abstract from this matter altogether but report here some basic figures to assess its possible impact, and thereby answer the second of the questions posed above. We constructed a churning index that records the fraction of periods in which a position is reversed. That is, if a strategy always goes long in one currency and short in another, churning would be 0; if, on the other hand, positions were reversed each period, churning would be 1. As a benchmark, we calculated the churning ratio for the perfect foresight trader: one who picks the correct direction of trade each time. We did this in-sample and for each of the out-of-sample windows, but the results are very similar throughout and are therefore not reported for the out-of-sample windows. In-sample, churning varies by country anywhere between 38% of the time for positions against Sweden and 52% for positions against the U.K. In contrast, the Naïve strategy is considerably cheaper to execute, with churning varying from 1% for trades against Australia to 6% for trades against the U.K. This is not surprising given the basic premise of the Naïve strategy. What about the TECM strategy? Churning ranges between a low of 18% for trades against Japan and a high of 31% for trades against the U.K., which is much more commensurate with the perfect foresight trader but presumably slightly more costly to implement in practice.

4. Evaluating carry trade returns and risk

Is this all too good to be true?
Recall that the perfect foresight returns to a carry trade are x_{t+1} = Δe_{t+1} + (i*_t − i_t). If we denote by x̂_{t+1} the one-period-ahead forecast, then, noticing that (i*_t − i_t) is known at time t, the only source of uncertainty in this forecast comes from Δê_{t+1}. It appears, then, that formal evaluation of carry trade returns boils down to a standard predictive evaluation of nominal exchange rates; but at least since Meese and Rogoff (1983) the consensus view has been that exchange rate forecasts are no better than a random walk when judged by fit (RMSE).

A moment's reflection reveals that an investor's objectives are not well aligned with this point of view. The reason is that the investor uses Δê_{t+1} to determine the direction of trade, but Δê_{t+1} does not directly determine the actual profits of the investment. To see this, define d_{t+1} ∈ {−1, 1} as an indicator of the ex-post profitable direction of the carry trade, i.e., d_{t+1} = sign(x_{t+1}). The returns realized by the investor can be defined as μ̂_{t+1} = d̂_{t+1} x_{t+1}, where d̂_{t+1} is the binary forecast for d_{t+1} implicit in x̂_{t+1}. In general, d̂_{t+1} = sign(x̂_{t+1} − c) for c ∈ (−∞, ∞). It may be tempting to restrict c = 0, as would be reasonable for a risk-neutral investor. In that case we could use standard techniques to assess direction (e.g., Pesaran and Timmermann, 1992). However, risk-averse investors facing possibly asymmetric return distributions may wish to guard against tail risk by choosing c ≠ 0. And in any case, we want to weight direction by the returns associated with each position, an issue to which we return below.

Setting the forecasting problem up as a binary choice or direction problem makes sense on a number of dimensions.
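The realized-return expression above can be sketched in code; the forecast series here is a synthetic, illustrative assumption.

```python
# Sketch: realized returns mu_hat = d_hat * x with d_hat = sign(x_hat - c).
# A risk-neutral investor might set c = 0; c != 0 shades trades toward caution.
import numpy as np

def realized_returns(x, x_hat, c=0.0):
    d_hat = np.where(x_hat > c, 1.0, -1.0)   # binary directional forecast
    return d_hat * x

rng = np.random.default_rng(4)
x = rng.normal(0.0, 0.03, 500)                  # ex-post carry-trade returns
x_hat = 0.6 * x + rng.normal(0.0, 0.03, 500)    # a noisy but informative forecast

mu = realized_returns(x, x_hat, c=0.0)
print(mu.mean())
```

Note that the rule only ever changes the sign of each period's return, never its magnitude, which is exactly why directional accuracy on the large-magnitude trades matters so much.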
First, as a statistical matter, making successful continuous exchange rate (and hence, return) forecasts is a harder challenge, as those working in the Meese–Rogoff tradition have shown; a directional forecast sets a lower bar, one that recent work suggests could be surmountable (e.g., Cheung et al., 2005). Second, it is all that is needed in the case of equal-weight currency portfolios. Third, directional forecasting is important for traders. Like other funds, currency strategies face the risk of redemptions or closure if
Table 7
When does the TECM model outperform the Naïve model?

(a) In sample
                              N      Mean      Std. dev.  Skewness   Sharpe    Min       Max
All trades pooled
  Naïve                       2454   0.0027    0.0299     −0.6752    0.3115    −0.1773   0.1180
  TECM                        2454   0.0057    0.0295     0.1068     0.6603    −0.1202   0.1773
Trades where models agree
  Naïve                       1556   0.0066    0.0282     −0.4709    0.8085    −0.1202   0.1180
  TECM                        1556   0.0066    0.0282     −0.4709    0.8085    −0.1202   0.1180
Trades where models disagree
  Naïve                       898    −0.0041   0.0315     −0.8539    −0.4449   −0.1773   0.1018
  TECM                        898    0.0040    0.0316     0.8481     0.4352    −0.1018   0.1773

(b) Out of sample 2002–2006
                              N      Mean      Std. dev.  Skewness   Sharpe    Min       Max
All trades pooled
  Naïve                       540    0.0054    0.0264     −0.1938    0.7037    −0.0832   0.0744
  TECM                        540    0.0058    0.0263     −0.1239    0.7694    −0.0832   0.0744
Trades where models agree
  Naïve                       442    0.0069    0.0267     −0.2128    0.8875    −0.0832   0.0744
  TECM                        442    0.0069    0.0267     −0.2128    0.8875    −0.0832   0.0744
Trades where models disagree
  Naïve                       98     −0.0013   0.0240     −0.2940    −0.1905   −0.0717   0.0520
  TECM                        98     0.0013    0.0240     0.2940     0.1905    −0.0520   0.0717

(c) Out of sample 2003–2007
                              N      Mean      Std. dev.  Skewness   Sharpe    Min       Max
All trades pooled
  Naïve                       540    0.0030    0.0265     −0.1867    0.3937    −0.0912   0.0744
  TECM                        540    0.0033    0.0265     −0.0962    0.4297    −0.0832   0.0912
Trades where models agree
  Naïve                       400    0.0043    0.0264     −0.1913    0.5580    −0.0832   0.0744
  TECM                        400    0.0043    0.0264     −0.1913    0.5580    −0.0832   0.0744
Trades where models disagree
  Naïve                       140    −0.0005   0.0267     −0.1730    −0.0680   −0.0912   0.0736
  TECM                        140    0.0005    0.0267     0.1730     0.0680    −0.0736   0.0912

(d) Out of sample 2004–2008
                              N      Mean      Std. dev.  Skewness   Sharpe    Min       Max
All trades pooled
  Naïve                       540    −0.0024   0.0315     −1.0608    −0.2683   −0.1773   0.1038
  TECM                        540    0.0030    0.0314     0.5984     0.3292    −0.1215   0.1773
Trades where models agree
  Naïve                       347    0.0004    0.0282     −0.4949    0.0525    −0.1215   0.1038
  TECM                        347    0.0004    0.0282     −0.4949    0.0525    −0.1215   0.1038
Trades where models disagree
  Naïve                       193    −0.0076   0.0362     −1.3709    −0.7264   −0.1773   0.0736
  TECM                        193    0.0076    0.0362     1.3709     0.7264    −0.0736   0.1773

Notes: The in-sample period is 1986 to 2008. The Sharpe ratio is annualized. Other statistics are monthly.
negative returns are frequent and/or large. It is nice to pick the bigger winners, rather than just small ones, but not if the strategy risks blowing up. The point is only amplified when funds are leveraged. Thus, from a trader's point of view, it is important to knit together returns and exposure in evaluating carry trade strategies, and not just direction.

4.1. Hits and misses: the correct classification frontier

Variables used to determine a binary outcome are generically called classifiers. In this paper we will denote such a variable by δ̂_{t+1}, to indicate that it could be a conditional mean prediction from a regression, a conditional probability prediction from a binary choice model, the single index from a dimension reduction technique, etc. A binary prediction can therefore be constructed as d̂_{t+1} = sign(δ̂_{t+1} − c) for c ∈ (−∞, ∞). Intuitively, d_{t+1} generates an empirical mixture distribution for δ̂_{t+1}, each constituent distribution associated with whether d_{t+1} = −1 or 1. A good classifier is one where these two distributions are easily separable (a concept to be made precise below), with little to no overlap between them. That way, observations of δ̂_{t+1} to the left of the threshold c have a higher chance of being properly sorted into the d_{t+1} = −1 bin, and vice versa. At the other extreme, strict no-arbitrage is equivalent to a perfect overlap of the constituent distributions in the mixture: any given δ̂_{t+1} is just as likely to come from the d_{t+1} = −1 bin as from the d_{t+1} = 1 bin. In general, the classifier's two empirical distributions need not be
symmetric, nor need they have the same shape, thus resulting in different profit opportunities and exposure. Proper evaluation therefore requires characterizing the returns and risk associated with each opportunity, and not just directional ability. Accounting for all these features requires new methods, and the foundation for the solutions we propose comes from techniques common in biostatistics and engineering. The specific device that we draw on here is the receiver operating characteristic (ROC) curve, which comes from signal detection theory.19 The methods that we present below combine this literature and results in Jordà and Taylor (2011), which the reader may wish to consult for a more detailed explanation of some concepts.

An investor considering a carry trade must entertain four possible outcomes and the probability of their occurrence. Specifically, let TP(c) = P[δ̂_{t+1} > c | d_{t+1} = 1] denote the true positive rate, sometimes also called sensitivity or recall, and let TN(c) = P[δ̂_{t+1} < c | d_{t+1} = −1] denote the true negative rate, sometimes also called specificity. The counterparts to
19 The origin of the ROC curve can be traced back to radar signal detection theory developed in the 1950s and 1960s (Peterson and Birdsall, 1953), and its potential for medical diagnostic testing was recognized as early as 1960 (Lusted, 1960). More recently, ROC analysis has been introduced in psychology (Swets, 1973), in atmospheric sciences (Mason, 1982), and in the machine learning literature (Spackman, 1989). See Pepe (2003) for a survey.
these rates are the false negative rate FN(c) = P[δ̂_{t+1} < c | d_{t+1} = 1] and the false positive rate FP(c) = P[δ̂_{t+1} > c | d_{t+1} = −1], with TP(c) + FN(c) = 1
and TN(c) + FP(c) = 1. Naturally, the investor is interested in maximizing the "true" rates and minimizing the "false" rates by choosing c, but the nature of the trade-offs is a little different. If the investor chooses a small c, then as c → −∞ he ensures that TP(c) = 1 but TN(c) = 0, and vice versa as c → ∞. A plot of {TP(c), TN(c)} for all values of c is a summary of all the correct classification combinations attainable with a given classifier δ̂_{t+1}: in economic parlance, a production possibilities frontier of correct classification. For this reason, we will call such a plot the correct classification frontier, or CCF. This curve can be displayed in the unit square.

For an uninformative classifier, the CCF is the line bisecting the unit square with slope −1: the marginal correct classification gain in one direction results in an exactly equal marginal correct classification loss in the other direction. This situation corresponds to strict no-arbitrage and, for a risk-neutral investor, is the natural null of efficient markets. The better a classifier is, the more the CCF bows away from the origin and above the simplex; in the limit, a perfect classifier has a CCF that hugs the top-right corner of the unit square. This corner represents perfect arbitrage: a classifier with this CCF would permit an investor to always take the profitable side of a trade. Most classifiers' CCFs lie between these two extremes, thus generating approximate arbitrage opportunities. Such opportunities may be consistent with the representative investor's attitude toward risk, and indeed consistent with kernels that price other assets correctly. This is an issue that we will revisit below. At this point, we make no mention of the returns associated with each {TP(c), TN(c)} pair, nor do we yet discuss how an investor's preferences interact with the CCF (just as, in a two-good economy, the production possibilities frontier interacts with the utility derived from each good).
But it is useful to introduce a summary measure of classification ability based on the area under the CCF, henceforth AUC. Under strict no-arbitrage the AUC = 1/2, whereas for perfect arbitrage AUC = 1, with typical CCFs attaining values in between. The AUC turns out to be closely related to the Wilcoxon–Mann–Whitney U-statistic, a non-parametric measure of how "separate" the two empirical distributions of the classifier δ̂t+1 are in the mixture determined by the true state dt+1 ∈ {−1, 1}. The AUC is also convenient because, under the null of no classification ability, Hsieh and Turnbull (1996) show that its large-sample distribution is Gaussian. More details on these results are provided in Jordà and Taylor (2011). Table 8 shows the pooled AUC in- and out-of-sample for each of the three windows 2002–2006, 2003–2007, and 2004–2008 that we have been considering. The results are particularly revealing and explain why the Naïve strategy had been so popular in recent times. The 2002–2006 window was a particularly good time for guessing the correct direction of a trade using the interest differential, with AUC values exceeding 0.60 in all cases and the null of strict no-arbitrage easily rejected. Naïve and VAR do well, but their performance improves even more when the FEER term is added. However, in the 2004–2008 window the value of these simple strategies comes crashing down along with the global financial system. Naïve and Naïve ECM have AUCs indistinguishable from the strict no-arbitrage value of 0.5, although VAR, VECM and TECM manage to escape this fate. But these results are old news: we knew from Section 3.4 that TECM really shines not in better directional prediction, but in spotting trades with high returns. Thus, the next section takes our analysis to the evaluation of classification ability weighted by returns.
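As a concrete illustration of the mechanics just described (a minimal Python sketch with our own function names and simulated signals, not the authors' code), the CCF can be traced by sweeping the threshold c, and the AUC computed directly from its Wilcoxon–Mann–Whitney form:

```python
import numpy as np

def ccf(signal, d, thresholds):
    """Trace the correct classification frontier: for each cutoff c,
    TP(c) = P(signal > c | d = +1) and TN(c) = P(signal <= c | d = -1)."""
    pos, neg = signal[d == 1], signal[d == -1]
    return [(np.mean(pos > c), np.mean(neg <= c)) for c in thresholds]

def auc(signal, d):
    """Area under the CCF as a Wilcoxon-Mann-Whitney U statistic: the
    probability that a randomly drawn 'long' signal exceeds a randomly
    drawn 'short' signal, counting ties as one half."""
    pos, neg = signal[d == 1], signal[d == -1]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
d = rng.choice([-1, 1], size=2000)              # true direction of each trade
uninformative = rng.normal(size=2000)           # strict no-arbitrage benchmark
informative = 0.5 * d + rng.normal(size=2000)   # signal correlated with d
```

With the simulated data, the uninformative signal produces an AUC close to 0.5 (the diagonal CCF of strict no-arbitrage), while the informative signal bows the frontier outward and pushes the AUC well above 0.5.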
Table 8
Area under the correct classification frontier (CCF).

Model            AUC      Standard error   z-score   p-value
(a) In sample
Naïve            0.5905   0.012            7.85      0.000
Naïve ECM        0.5604   0.012            5.19      0.000
VAR              0.5996   0.012            8.67      0.000
VECM             0.5935   0.012            8.11      0.000
TECM             0.6084   0.011            9.47      0.000
(b) Out of sample 2002–2006
Naïve            0.6239   0.025            4.99      0.000
Naïve ECM        0.6462   0.024            5.97      0.000
VAR              0.6012   0.025            4.04      0.000
VECM             0.6428   0.025            5.82      0.000
TECM             0.6258   0.025            5.08      0.000
(c) Out of sample 2003–2007
Naïve            0.5945   0.025            3.80      0.000
Naïve ECM        0.5989   0.025            3.98      0.000
VAR              0.5782   0.025            3.12      0.002
VECM             0.5962   0.025            3.87      0.000
TECM             0.5704   0.025            2.80      0.005
(d) Out of sample 2004–2008
Naïve            0.5170   0.025            0.68      0.495
Naïve ECM        0.5097   0.025            0.39      0.697
VAR              0.5529   0.025            2.14      0.033
VECM             0.5620   0.025            2.51      0.012
TECM             0.5546   0.025            2.21      0.027

Notes: In-sample period is 1987 to 2008; out-of-sample forecasts are estimated recursively. The AUC statistic is a Wilcoxon–Mann–Whitney U-statistic with an asymptotic normal distribution; its value ranges from 0.5 under the null to a maximum of 1 (an ideal classifier). See Jordà and Taylor (2011).
4.2. Gains and losses: The return-weighted CCF* curve

Tables 7 and 8 showed that although the TECM model does not pick winners any better than some simpler specifications, it is able to avoid the large losses responsible for negative skewness in the distribution of returns. Thus, a classifier that correctly pinpoints 99 penny trades but misses the dollar trade has almost perfect classification ability but is a money loser. For this reason, Jordà and Taylor (2011) introduce a novel refinement to the construction of the CCF that accounts for the relative profits and losses of the classification mechanism.20 We proceed in the spirit of gain–loss measures of investment performance (Bernardo and Ledoit, 2000) by attaching weights to a given classifier's upside and downside outcomes in proportion to observed gains and losses. This alternative performance measure will give little weight to right or wrong bets when the payoffs are small; but when payoffs are large it will penalize classifiers for picking trades that turn out to be big losers, and reward them for picking big winners.

To keep the new curve appropriately normalized, consider all the trades where "long FX" was the ex-post correct trade to have made. The maximum gain from classifying all these trades correctly with d̂t+1 = 1 would be

L = Σ{dt+1 = 1} xt+1.

Now consider all the trades where "short FX" was the ex-post correct trade to have made. Similarly, the maximum gain from these trades would be

S = Σ{dt+1 = −1} |xt+1|

since xt+1 < 0 when dt+1 = −1. We can now redefine, or rescale, the true positive and true negative rates TP(c) and TN(c) in terms of the long-side and short-side returns x actually achieved, relative to the maximally attainable gains thus defined. For any given threshold c, this leads to new statistics:

TP*(c) = (1/L) Σ{d̂t+1 = 1 | dt+1 = 1} xt+1,   and   TN*(c) = (1/S) Σ{d̂t+1 = −1 | dt+1 = −1} |xt+1|,

20 A similar improvement to ours is the instance-varying ROC curve described in Fawcett (2006).
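A hedged sketch of this return-weighted construction (our own naming and simulated data; the paper's formal derivations and inference procedures are in Jordà and Taylor, 2011) might look as follows:

```python
import numpy as np

def weighted_rates(signal, d, x, c):
    """Return-weighted TP*(c) and TN*(c): correct calls are credited in
    proportion to the realized return x, relative to the maximum gains
    L (long side) and S (short side) attainable with perfect calls."""
    L = x[d == 1].sum()                   # x > 0 whenever d = +1
    S = np.abs(x[d == -1]).sum()          # x < 0 whenever d = -1
    tp_star = x[(signal > c) & (d == 1)].sum() / L
    tn_star = np.abs(x[(signal <= c) & (d == -1)]).sum() / S
    return tp_star, tn_star
```

Sweeping c traces the CCF* curve; a classifier that calls every trade correctly attains the corner (1, 1), and because big trades carry more weight, missing "the dollar trade" now costs far more than missing a penny trade.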
where we use |xt+1| to calculate TN*(c) since xt+1 < 0 when dt+1 = −1. The plot of TP*(c) versus TN*(c) is then, by construction, limited to the unit square, and generates what we term a returns-weighted CCF, denoted CCF*. Associated with the CCF* curve is a corresponding returns-weighted AUC statistic, denoted AUC*. Intuitively, if AUC* exceeds 0.5, this is evidence that the classifier outperforms the strict no-arbitrage null, which from a return-weighted perspective corresponds to a gain–loss ratio of 1. A detailed derivation of these statistics and their properties, along with techniques for inference, is provided in Jordà and Taylor (2011).

Using these return-weighted statistics strongly reinforces our argument that classifiers based on the more refined models (VECM/TECM) are informative; in contrast, the performance of the more Naïve models is not statistically distinguishable from the null. Table 9 collects summary AUC* statistics for the same in- and out-of-sample periods discussed in Table 8. The AUC* statistics are generally higher than the corresponding AUC statistics reported in Table 8, and the z-scores attained are much higher too. The out-of-sample window 2002–2006 now contains AUC* values of almost 0.68 (VECM), which is quite remarkable. A stark pattern starts to emerge about the benefits of including a FEER factor: strategies that include this term seem to do best regardless of the period of evaluation.

What about the last window, 2004–2008, which includes the period of financial turbulence? Here too the results are very revealing. In fact, we report values of AUC* below 0.5 for the Naïve strategies, suggesting that doing exactly the opposite of what these strategies suggested would have been more profitable. As it stands, these strategies were consistent money losers, more so than flipping a coin, at conventional levels of significance. However, the VECM and especially the TECM strategies did reasonably well even during this period, easily beating the strict no-arbitrage null. We note that the TECM was not significantly different from the most successful strategy in any period, but it clearly weathered the financial storm best.

Table 9
Area under the returns-weighted correct classification frontier (CCF*).

Model            AUC*     Standard error   z-score   p-value
(a) In sample
Naïve            0.5690   0.0116            5.94     0.0000
Naïve ECM        0.6060   0.0114            9.26     0.0000
VAR              0.6291   0.0113           11.42     0.0000
VECM             0.6496   0.0112           13.41     0.0000
TECM             0.6769   0.0109           16.21     0.0000
(b) Out of sample 2002–2006
Naïve            0.6242   0.025             5.01     0.000
Naïve ECM        0.6627   0.024             6.72     0.000
VAR              0.6118   0.025             4.48     0.000
VECM             0.6754   0.024             7.32     0.000
TECM             0.6635   0.024             6.76     0.000
(c) Out of sample 2003–2007
Naïve            0.5835   0.025             3.34     0.001
Naïve ECM        0.6059   0.025             4.27     0.000
VAR              0.5625   0.025             2.49     0.013
VECM             0.5994   0.025             4.00     0.000
TECM             0.5862   0.025             3.45     0.001
(d) Out of sample 2004–2008
Naïve            0.4355   0.025            −2.62     0.009
Naïve ECM        0.4419   0.025            −2.36     0.018
VAR              0.5317   0.025             1.28     0.202
VECM             0.5577   0.025             2.33     0.020
TECM             0.5971   0.024             3.98     0.000

Notes: In-sample period is 1987 to 2008; out-of-sample forecasts are estimated recursively. The AUC statistic is a Wilcoxon–Mann–Whitney U-statistic with an asymptotic normal distribution; its value ranges from 0.5 under the null to a maximum of 1 (an ideal classifier). See Jordà and Taylor (2011).

4.3. Beyond RMSE: trading-based predictive ability tests

Do any of our models generate meaningful improvement from a trader's perspective? The AUC-based tests suggest that our preferred strategies pick the correct direction of the carry trade more often than not, even when weighting by returns. Gain–loss ratios, which provide bounds on the set of permissible pricing kernels, indicate that approximate arbitrage opportunities (in the sense of Bernardo and Ledoit, 1999, 2000) would have been available even during the global financial crisis. This section complements that analysis with a more conventional predictive ability testing framework based on loss functions that better reflect an investor's interests.

Extant tests designed to assess the marginal predictive ability of one model against another are based, generally speaking, on a comparison of the forecast loss functions of the two models. Such is the approach of the popular Diebold and Mariano (1995) and West (1996) frameworks and the subsequent literature, with the usual choice of loss function being the root mean squared error (RMSE). The approach that we pursue instead is to consider a variety of loss functions and use a framework that naturally extends our AUC-based analysis. For this purpose we generate a series of out-of-sample, one-period-ahead forecasts from a rolling sample of fixed length. It is clear that in this type of set-up estimation uncertainty never vanishes, and for this reason we adopt the conditional predictive ability testing methods introduced in Giacomini and White (2006). These tests have the advantage of permitting heterogeneity and dependence in the forecast errors, and their asymptotic distribution is based on the view that the evaluation sample grows to infinity even though the estimation sample remains of fixed length.

Specifically, let Li_t+1, for t = R, …, T − 1, denote the loss associated with the sequence of one-step-ahead forecasts, where R denotes the fixed size of the rolling estimation window, and let i = 0, 1 (with index 0 indicating forecasts generated with the unit-root null model and index 1 indicating the alternative model). Then the test statistic
GW1,0 = ΔL̄ / (σ̂L / √P) →d N(0, 1)    (4)

provides a simple test of predictive ability, where

ΔL̄ = (1/P) Σt=R,…,T−1 (L1_t+1 − L0_t+1),

and

σ̂L = [ (1/P) Σt=R,…,T−1 (L1_t+1 − L0_t+1)² ]^(1/2)

(since E(ΔL̄) = 0 under the null; see Theorem 1 in Giacomini and White, 2006), and where P = T − R + 1. The flexibility in defining the loss function permitted by the Giacomini and White (2006) framework is particularly useful for our application. The framework extends naturally to a panel context, where the forecasts for N different currencies over P periods are pooled together. In that case, assuming independence across units in the panel, we may implement the above test with the observation count P replaced by NP. Because one may suspect that the panel units are not independent, we implement a cluster-robust covariance
correction, however. Pooled results are reported in the appendix for completeness. The traditional summary statistic of predictive performance is the well-known root mean squared error, but even though RMSE is a major focus of the academic literature, it is of little interest to those in financial markets, since models with good (poor) fit measured by RMSE could still generate poor (respectively, good) returns. So what about investors' preferred performance criteria? Fortunately, the flexibility of the framework allows one to consider loss functions based on alternative metrics. Even before one adjusts for risk, an investor cares about average returns, an obvious first alternative. Next, it is natural to normalize returns by their volatility, and the Sharpe ratio is the second metric that we consider. But the Sharpe ratio relies heavily on symmetry and quadratic utility; it is a poor measure when the risk of a sudden crash is a consideration. For this reason our third metric is based on the skewness of returns, which allows us to assess the models' ability to avoid infrequent but particularly large negative returns. More details on the specific forms of these loss functions are reserved for the appendix. To ease comparison with Tables 2, 3 and 5, Table 10 reports the average return, Sharpe ratio and skewness for equal-weighted portfolios based on each of the strategies we have considered, using asterisks to indicate when the Giacomini and White (2006) statistics are statistically significant with respect to the null of strict no-arbitrage.21 Table 10 summarizes the results for the full in-sample window (panel (a)) and the three out-of-sample windows (panels (b)–(d)): 2002–2006, 2003–2007, and 2004–2008. In each case we report: RMSE (in percent); annualized return (in percent); annualized Sharpe ratio; and the monthly skewness coefficient. We do this for the Naïve, the Naïve ECM, the VAR, the VECM, and the TECM models.
A ** (*) indicates that the Giacomini and White (2006) statistic is significant at the conventional 95% (90%) confidence level. The results in column 1 show that the differences across models as judged by RMSE are minuscule. However, measured by the metrics that matter to traders, the differences are considerable. For example, in-sample the difference in annual mean returns is +3.19% for Naïve versus +6.96% for our preferred TECM model. Out-of-sample the differences quickly become more dramatic, especially in the last window, 2004–2008, where the comparison is −2.89% versus +3.64%. Annual Sharpe ratios are boosted from +0.58 to +1.35 in-sample; in the 2002–2006 and 2003–2007 out-of-sample windows the two models are virtually tied, at +1.59 versus +1.58 and +0.87 versus +0.85 respectively; and the most dramatic differences come in the global financial crisis window of 2004–2008, with −0.47 for Naïve and +0.80 for TECM. Skewness is also generally better with TECM, improving from −0.35 to +0.47 in-sample, and from −0.52 to +0.23 in the out-of-sample window 2004–2008. Even in the 2002–2006 window, where the differences between strategies are small, notice that although Naïve generates a Sharpe ratio of +1.59, slightly higher than the +1.58 for TECM, Naïve has a skewness of −0.29 versus +0.22 for TECM. There is little doubt which of these returns a currency hedge fund manager would have preferred. The GW tests also show that, with the exception of skewness, these performance improvements are statistically significant out-of-sample.
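The unconditional version of the test statistic in Eq. (4) is simple enough to sketch in Python (our own naming; the full Giacomini and White, 2006 procedure also allows conditioning variables and, in the panel, the cluster-robust correction described above):

```python
import math
import numpy as np

def gw_test(loss_alt, loss_null):
    """t-type test on the mean loss differential, with the variance
    computed about zero as implied by the null of equal predictive
    ability (E[dL] = 0); returns the statistic and a two-sided p-value."""
    dL = np.asarray(loss_alt) - np.asarray(loss_null)
    P = len(dL)
    sigma = math.sqrt(np.mean(dL ** 2))            # scale under the null
    stat = dL.mean() / (sigma / math.sqrt(P))
    pval = math.erfc(abs(stat) / math.sqrt(2.0))   # asymptotically N(0, 1)
    return stat, pval
```

The loss series can be anything the investor cares about: squared forecast errors, negative realized returns, or a penalty for negative skewness.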
4.4. Predictive ability tests in possibly unstable environments

One concern that the skeptical reader might harbor at this point is that we have been lucky in the choice of our out-of-sample period. We share that concern and have experimented with a variety of windows. Our analysis in Table 10 uses three different out-of-sample windows to
21 For completeness, the pooled results are provided in Appendix A.
Table 10
Forecast evaluation: loss-function statistics for equally-weighted portfolios.

Model            RMSE      RTN         Sharpe     Skew
                 (×1000)   (annual %)  (annual)   (monthly)
(a) In sample
Naïve            0.44*      3.19**      0.58**    −0.35**
Naïve ECM        0.44**     4.10**      0.72**     0.30
VAR              0.42**     5.54**      0.99**     0.10
VECM             0.42**     6.90**      1.14**     0.37**
TECM             0.40**     6.96**      1.35**     0.47**
(b) Out of sample, 2002–2006
Naïve            0.42**     6.64**      1.59**    −0.29
Naïve ECM        0.41**     7.74**      1.49**     0.26
VAR              0.40**     5.49**      1.07**     0.09
VECM             0.38**     7.03**      1.36**     0.14
TECM             0.38**     7.25**      1.58**     0.22
(c) Out of sample, 2003–2007
Naïve            0.39       3.68*       0.87*     −0.10
Naïve ECM        0.38*      4.39**      0.95**     0.29
VAR              0.38**     2.66        0.52      −0.09
VECM             0.38       2.57        0.52      −0.37
TECM             0.38       4.02*       0.85*      0.21
(d) Out of sample, 2004–2008
Naïve            0.55      −2.89       −0.47      −0.52
Naïve ECM        0.54      −1.41       −0.29      −0.61
VAR              0.53       0.44        0.08      −0.36
VECM             0.53       3.27        0.58      −0.11
TECM             0.52       3.64*       0.80*      0.23

** (*) denotes significance at the 95% (90%) confidence level. To correct for cross-sectional dependence, a cluster-robust covariance correction is used. See text.
evaluate our proposed carry trade strategies, and this suggests that something more fundamental than kismet is at work. In this section we try to insulate our results more formally against the fear that model and performance instability could yet undermine our claims. We can add one more layer of flexibility to our testing set-up by using Giacomini and Rossi's (2010) fluctuations test. Their test is designed to address the problem that predictors which work well in one period may not work well in another, and it is meant to establish which model predicts best when there may be such instabilities. Flexibility is achieved by continuously monitoring the entire path of relative model performance rather than averaging over fixed windows; as a result, the Giacomini and Rossi (2010) framework allows for possible instabilities. Conceptually, the Giacomini and Rossi (2010) approach borrows elements of the Giacomini and White (2006) approach used in the previous section, specifically the notion of testing local conditional predictive ability. The advantage is that, by conditioning on the forecasting model, the testing environment allows comparisons between nested and non-nested alternatives using flexible loss functions. We leave the specifics of how the fluctuations test is constructed to the original reference, and instead provide some intuition here. The fluctuations test considers average relative forecasting ability over small windows rolled over the entire out-of-sample interval, so as to allow for local variation. The test then accommodates the serial correlation naturally introduced by the rolling windows, and adjusts the asymptotic distribution for the fact that the test is not calculated as a fixed average over the entire out-of-sample interval. In addition, one must choose the size of the rolling window relative to the available out-of-sample interval, and Giacomini and Rossi (2010) recommend that this ratio be about 0.3.
Applied to our data, this ratio translates into rolling windows of 24 months in length over the six years of out-of-sample data available. Table 11 reports the results of the fluctuations test; we omit the RMSE-based loss, for which the null is rejected rather overwhelmingly for each proposed model, and instead concentrate on trading-based loss functions derived from measures of returns, the Sharpe ratio, and skewness, just as in Table 10.
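A deliberately simplified sketch of the rolling-window idea (our own implementation; the actual fluctuations test uses a HAC variance estimator and the critical values tabulated by Giacomini and Rossi, 2010, such as 3.15 at the 95% level):

```python
import numpy as np

def fluctuation_path(loss_alt, loss_null, m):
    """Standardized mean loss differential over rolling windows of
    length m: the path of local relative predictive ability across
    the out-of-sample interval, rather than one fixed average."""
    dL = np.asarray(loss_alt) - np.asarray(loss_null)
    sigma = np.sqrt(np.mean(dL ** 2))                     # scale under the null
    roll = np.convolve(dL, np.ones(m) / m, mode="valid")  # rolling means
    return np.sqrt(m) * roll / sigma                      # one value per window
```

With six years of monthly data and the recommended window ratio of roughly 0.3, m = 24, and the null of equal local predictive ability is rejected if the path ever escapes the critical band.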
Table 11
Forecast evaluation using fluctuations test for equally weighted portfolios.

Model        RTN (annual %)   Sharpe (annual)   Skew (monthly)
Naïve          3.03*            3.99**          −12.41**
Naïve ECM      2.77             2.56              0.89
VAR            9.02**          10.17**           −2.89
VECM           3.85**           3.97**            2.54
TECM           2.94*            2.93*             0.43

** (*) denotes significance at the 95% (90%) significance level. The critical value at 95% (90%) confidence is 3.15 (2.92). See text.

Table 11 lends support to our running story and encapsulates the findings in Table 10 rather well. By this metric, the VAR model would most strongly reject the null, albeit with positive skewness (as evinced by the negative value that the fluctuations test attains). Instead, although the VECM and TECM have somewhat less dramatic rejections, they do so with negative (albeit not statistically significant) skew. To illustrate differences in returns, Sharpe ratio and skewness over time, Fig. 1 displays the accumulated returns one would have obtained out-of-sample over the period January 2002 to December 2008 with the Naïve strategy (the focus of most of the literature) relative to the TECM strategy. We plot accumulated returns to smooth some of the month-to-month fluctuations, and because Section 6 displays a similar figure based on two tradable indices (now exchange-traded funds) from Deutsche Bank which loosely resemble these two strategies; the two figures are more easily comparable this way. Fig. 1 makes clear that, except for an interlude between the second half of 2007 and the first quarter of 2008, TECM-based returns would have been superior to Naïve-based returns, at times by as much as 10%. A good illustration of the differences in skewness between these two strategies is the dramatically different turn each takes in the latter part of 2008. In the space of only a few months, the Naïve strategy would have declined by about 20% whereas the opposite would have been true of the TECM strategy, so that by the end of 2008 the difference in cumulated returns would reach the 35%–40% mark, a considerable difference.

Fig. 1. Out-of-sample cumulated returns: Naïve versus TECM model, 2002–08. The chart shows the cumulated returns one would have obtained out-of-sample from January 2002 to December 2008 with the Naïve and TECM strategies. The initial value is 100 in January 2002.

5. Alternative crash protection strategies

In this section we compare the merits of our fundamentals-based model with the recent literature, where two alternative crash protection strategies have emerged. The first is a contract-based strategy: Burnside et al. (2008b,c) explore a crash-free trading strategy where FX options are used to hedge against a tail event that takes the form of a collapse in the value of the high-yield currency. The second is a model-based strategy: Brunnermeier et al. (2009) argue that an important cause of carry trade crashes is the arrival of a state of the world characterized by an increase in market volatility or illiquidity.
5.1. Options

In Burnside et al. (2008b,c) a Naïve carry trade strategy is the starting point: traders go long the high-yield currencies and short the low-yield currencies. The focus is also on major currencies, as here, although the sample window goes back to the 1970s era of capital controls and applies to a larger set of 20 currencies. Returns may be computed for individual currencies, for portfolios, or for subsets of currencies (e.g., groups of the 1, 2 or 3 highest-yield currencies against similar groups of the lowest-yield currencies). However, each Naïve strategy can be augmented by the purchase of at-the-money put options on the long currencies.22 This augmentation leads to positive skew by construction. It averts tail risk, but at a price. How large is that price? For the 1987/1 to 2008/1 period a Naïve unhedged equally-weighted carry trade had a return of 3.22% per annum with an annualized Sharpe ratio of 0.54 and a skewness of −0.67. This compares with the U.S. stock market's 6.59% return, 0.45 Sharpe ratio, and skewness of −1.16 over the same period. Normality of returns is rejected in both cases. Implementing the option-hedged carry trade boosts the Sharpe ratio a little and eliminates the negative skew, but at a significant cost. Added insurance costs of 0.71% per annum mean that the hedged strategy lowers returns to only 2.51% annually, with a Sharpe ratio of 0.71 and a significant positive skew of 0.75. In contrast, let us now compare our similar equal-weight Naïve carry trading strategy with our fundamentals-based strategy in Table 10, panel (a). Comparing the Naïve and TECM models, our strategy increases returns on an annualized basis from 3.19% to 6.96%, a gain of 377 bps, with a Sharpe ratio of 1.35, and replaces the negative skew of −0.35 with a positive skew of 0.47. Unlike the option-based strategy, our approach can increase returns and improve skew at the same time. We make some further observations.
To be fair, we recognize that the strategies implemented by Burnside et al. (2008b,c) are crude and mainly illustrative. The options they employ are at-the-money, and push skew into strong positive territory. In reality, investors might purchase option that is more out-of-the-money; at much lower costs, this could protect against only the large crashes, yet still eliminate the worst of the negative skew. This was argued by Jurek (2008), who found that only 30%–40% of Naïve carry trade profits need be spent on average to insure against crash risk, leaving the puzzle intact. Nonetheless, even if lower cost option strategies are found, our results suggest that there is a better no-cost way to hedge against crashes by using data on deviations from long-run fundamentals, and this information also serves to boost absolute returns and Sharpe ratios too. Moreover, option contracts have other drawbacks, many of which were shown to be salient issues after the financial crash of 2008. Option prices embed volatility, so in periods of turmoil the price of insurance can rise just when it is most needed, depending on the implied FX volatility (Bhansali, 2007). And as with all derivatives, options carry counterparty risk. In addition, option price and position data from exchanges may not be representative, since this activity represents only a tiny fraction of FX option trading, the vast majority being over-the-counter. Also, like forward and future derivatives, options carry bid-ask spreads that can explode such as to make markets dysfunctional in crisis periods. Finally, while Burnside et al. (2008b,c) show the viability of an options-based strategy for 20 widely-traded major currencies, FX derivative markets are thin to nonexistent for many emerging market currencies, and bid-ask 22
This strategy is also studied by Bhansali (2007).
Ò. Jordà, A.M. Taylor / Journal of International Economics 88 (2012) 74–90
6. Reality check: exchange traded funds Beyond out-of-sample forecasting, an alternative way to judge our preferred modeling and trading strategies against the Naïve benchmark
160 140
In Brunnermeier et al. (2009) it is argued that various forms of generalized asset market distress, by reducing risk appetite, may be significant contributors to carry trade crashes. The logic of the argument is that such conditions may prompt a pull back from all risky asset classes, leading to short-run losses via order flow effects, or the price impact of trades. To provide empirical support for this proposition the authors show that, on a weekly basis, changes in the VIX volatility index (a proxy for distress) correlate with the returns to Naïve carry trade strategies—that is, trading strategies which do not condition on any information except the interest differential. An increase in VIX was found to correlate with lower carry trade returns, all else equal, suggesting that liquidity risk might partially explain the excess returns. However, lagged changes in VIX had little or no predictive power for next periods returns. This prompts us to explore whether the same arguments hold true in our sample, and if we detect any difference when we use our preferred trading strategy. We found that in our data, using the same 1992–2006 sample period, at a monthly frequency, there was no evidence that the change in VIX was correlated with returns contemporaneously, nor that it helped predict next period's returns. Specifically, in the spirit of Brunnermeier et al. (2009), we computed loadings of the returns on the contemporaneous and lagged change in VIX. We first regressed the actual (signed) returns of trades based on the Naïve model on the contemporaneous change in VIX for all months and all currencies in our data over their sample period 1992–2006. We then did the same for the lagged change in VIX. We then re-ran both regressions using the actual (signed) returns of our preferred TECM model. 
None of the four regression was the slope coefficient significant, and in all cases it actually had the “wrong” positive sign, suggesting that rising VIX was slightly positively correlated with profits this month, or next month. We make some further observations. To be fair, we should note that Brunnermeier et al. (2009) present these results at a weekly frequency. In our dataset, we have to work at monthly frequency given the underlying price index data used to construct the real exchange rate fundamental signal. We conjecture that the VIX signal may have stronger correlations with current or future returns at weekly or even fortnightly frequency, even when placed in a model like ours augmented to account for real exchange rate fundamentals, although further research is warranted in this area. If the sample is extended using our full dataset, including through the volatile crisis period in 2007–2008, the coefficients are not stable and we do sometimes find a statistically significant role for VIX signals in predicting negative actual (signed) returns of the Naïve Model 1, although this episode is outside the window considered by Brunnermeier et al. (2009). But this result disappears, and the sign even reverses, when we include the real exchange rate term and use our preferred TECM model, suggesting that the real exchange rate deviation alone may serve as an adequate crash warning. In fact, in these models the sign of the coefficient on the change in VIX was often positive and significant, suggesting potentially the opposite interpretation: when VIX is rising, arbitrage capital may be being withdrawn, but this has a tendency to leave more profits on the table for the risk tolerant, or “patient capital,” and for traders pursuing profitable fundamental-based strategies with inherent skew protection of the sort developed here.
120
5.2. VIX signal
is to compare the recent performances of different currency funds in the real world. When comparing the merits of Naïve versus fundamentals based strategies, the relative performance of these funds in the financial crisis 2008 proves to be very revealing. First, we seek a fund representative of the Naïve long-short approach, the basic carry strategies of Burnside et al. (2008a), and the same authors with Kleshchelski (2008a,b), henceforth BER/K. For this purpose we focus on the Deutsche Bank G10 Currency Harvest USD Index (AMEX: DBCFHX, Bloomberg: DBHVG10U). This index has been constructed back to March 1993 using historical data. Since April 2006 this index has been made available to retail investors—in the United States, for example, it trades as the Powershares DB G10 Currency Harvest Fund (AMEX: DBV). These indices track a portfolio based on 2:1 leveraged currency positions selected from ten major “G10” currencies with the U.S. dollar as a base. For every $2 invested, the index goes long (+$1) in the three currencies with the highest interest rates, and short ($-1) in the three with the lowest interest rates, and with 0 invested in the other three, with $1 placed in Treasuries for margin. The portfolio is reweighted only every quarter and is implemented via futures contracts. If the U.S. dollar is one of the long or short currencies, that position makes no profit in the base currency and is dropped, so leverage falls to 1.66. This ETF's Naïve approach to forecasting any currency pair via the carry signal matches the BER/K approach. The main differences are that the index is not equal weight and tries to pick 6 winners out of 9, the fund's leverage is finite and so this fund has to pay the opportunity costs of satisfying the margin requirement, and the fund also charges a 0.75% per annum management fee. The performance of the index in the period 2002 to 2008 is shown in Fig. 2, where the starting value of the index is rebased to 100. 
The first five or so years, up to mid/late 2007 were a golden age for Naïve strategies: the index return was about 10% per year, and the index peaked over 150 in mid-2007. Even after some reversals, as late as the middle of 2008 the index stood at over 140, up 40% on January 2002 levels. It is much less obvious what particular peso problem can explain the high Sharpe ratio associated with the equally weighted carry trade. That strategy seems to involve “picking up pennies in front of an unknown truck that has never been seen.” Unfortunately, in the second half of 2008 the Naïve–BER/K type of carry trade strategies was hit by the financial equivalent of a runaway eighteen-wheeler. The index gave up about 10% in the third quarter and a further 20% in October in a major unwind. After all the index stood close to its January 2002 level of 100, and five and a half years of gains had been erased.
spreads can be wide, so in those markets currency strategists may have to seek alternative plain-vanilla approaches, like ours, that might help avert currency crash risk.
Fig. 2. Naïve carry ETF versus augmented carry ETF: real-world index cumulated returns for two Deutsche Bank USD Carry Indices, 2002–08. ETFs are based on the Deutsche Bank G10 Currency Harvest USD Index (DBCFHX) and Deutsche Bank Currency Returns USD Index (DBCRUSI). The indices are based on a portfolio of nine G10 currencies versus the U.S. dollar, with weights of ± 1 or 0 depending on signals. Indices predate the ETF inception dates. Index values are daily with January 1, 2002 equal to 100.
Could a more sophisticated carry trade strategy have avoided this carnage, while still preserving positive returns overall? Yes. We can look for a different index that matches our more sophisticated trading strategy more closely. We focus on the Deutsche Bank Currency Returns USD Index (Bloomberg: DBCRUSI). This investible index operates on the same set of "G10" currencies but amalgamates three different trading signals for each currency, where again the signals are +1 (long), 0 (no position), and −1 (short). The first signal is the same as before: the Naïve carry trade signal based purely on the interest rate, here the 3-month LIBOR rate (what DB calls the "Carry Index"), updated every three months. The second signal is based on the deviation of the currency from its FEER level, computed from common-currency price levels from the OECD (what DB calls the "Valuation Index"), also updated every three months. The third signal is based on momentum and takes into account the USD return to holding that currency in the previous period (what DB calls the "Momentum Index"), updated every month. Clearly this augmented carry strategy is close in spirit to our augmented models, given its mix of signals. In our model, "carry" corresponds to the interest differential; "momentum" corresponds mostly to the lagged exchange rate change (given persistence in the interest differential); and "valuation" corresponds to our real exchange rate deviation. The Deutsche Bank Currency Returns Index is not available as an ETF in the U.S., but it does trade as an ETF in Frankfurt and London. A test of our approach is to ask: did investors in this more sophisticated fund fare better in 2008 than those in the Naïve DBV fund? As Fig. 2 shows, they did much better. On the 2003–07 upswing this index was less volatile, but returns were still strong, up about 35% at the peak.
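The three-signal amalgamation just described can be sketched as follows. This is a hedged illustration, not DB's actual index rules: the function names and inputs are hypothetical, the signals are combined by a simple majority vote, and the differing rebalancing calendars (quarterly for carry and valuation, monthly for momentum) are ignored.

```python
import numpy as np

def carry_signal(i_foreign, i_usd):
    # long high-yielders, short low-yielders (the "Carry Index" idea)
    return np.sign(i_foreign - i_usd)

def valuation_signal(real_fx, feer):
    # short currencies above their long-run FEER level, long those below
    # (real_fx and feer in the same real-exchange-rate units)
    return np.sign(feer - real_fx)

def momentum_signal(last_usd_return):
    # follow last period's USD return to holding the currency
    return np.sign(last_usd_return)

def combined_position(i_foreign, i_usd, real_fx, feer, last_usd_return):
    # majority vote over the three +1/0/-1 signals: one crude way to
    # amalgamate them (the real index weights its sub-indices instead)
    s = (carry_signal(i_foreign, i_usd)
         + valuation_signal(real_fx, feer)
         + momentum_signal(last_usd_return))
    return np.sign(s)
```

For an overvalued high-yielder whose currency fell last period (the classic 2008 crash profile), the carry signal's +1 is outvoted by the valuation and momentum signals' −1, so the combined position sidesteps the crash.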
Momentum signals got the money out of crashing currencies quickly (and back in when they rose), and valuation signals hedged against dangerously overvalued currencies (and jumped in otherwise), tempering the Naïve carry signal. The real test was in 2008. The index did remarkably well: performance was absolutely flat, and 30% of the cumulative gains stayed locked in. Looking at the three components of this strategy, a 30%+ loss on the carry signal in 2008 was offset by a 10% gain on the PPP signal and a 20% gain on the momentum signal. The relative performance of the ETFs lines up with the results of our trading strategies shown back in Table 7. In the out-of-sample results, the Naïve BER/K-type model showed a loss of 2.84% per year from 2004 to 2008, whereas our more sophisticated VECM showed a gain of 4.15% per year. That differential of about 7% per year would cumulate to a roughly 30% divergence in index values over a four-year period. Despite some differences in methods between our strategies and the ETFs, this matches the cumulative divergence seen for the real-world ETFs in Fig. 2. Fundamentals matter. Comparing the ETFs makes clear the gains from using a more sophisticated currency strategy incorporating a FEER measure. Although academic research has focused on the Naïve strategies, it is no secret in the financial markets that more refined models, taking into account the signals discussed here and many others, are actively employed by serious fund managers and are now leaking out into traded products.

7. Conclusion

The global financial crisis of 2008 dealt a blow to many investments, including the carry trade. Naïve strategies based on arbitraging interest rate differentials across countries, under the belief that exchange rates were equally likely to appreciate or depreciate, had been consistently profitable for many years. But these returns were volatile and subject to occasional crashes.
Even averaging returns over four years, by the end of 2008 one could not reject the null of strict no-arbitrage. Was this the end of the carry trade puzzle? The elegant simplicity of the Naïve carry trade had made for a persuasive story. But it left on the table important clues about the secular movements of exchange rates toward fundamental long-run
equilibrium, clues that we used to construct slightly more refined carry trade strategies. These refinements are sufficient to generate persistent positive returns with moderate exposure and zero or even mildly positive skew. More importantly, they would have eked out positive returns even during the financial crisis, at a time when little else could. The implied gain–loss ratios from these strategies, in-sample and over different out-of-sample windows, appear to exceed those associated with equity investments. These approximate arbitrage opportunities (in the sense of Bernardo and Ledoit, 1999) impose new bounds on the set of admissible pricing kernels implied by investors' attitudes toward risk. Our exhaustive diagnostic analysis indicates that these are not in-sample flukes. We arrive at these conclusions using novel methods. Information that is insufficient to scientifically validate a model of economic behavior can be sufficient to determine the returns-weighted optimal direction of trade. This observation had two consequences for our analysis. On the one hand, we structured tests of predictive ability with loss functions that matter to the investor (rather than the scientist): returns, Sharpe ratio, and skewness. On the other hand, the binary choice of which currency to sell short and which to go long in falls neatly into a classification framework of the sort often discussed in biostatistics and signal processing. With such a framework, we were able to weave the opportunities provided by a given classifier (investment strategy) together with the investor's preferences over each choice. The statistics and the connection between opportunities and choices are explored in more detail in a companion paper (Jordà and Taylor, 2011). The global financial crisis may have made the Naïve carry trade a curious relic in international finance, but our paper reinvigorates the debate using only marginally more complicated strategies.
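The investor-relevant loss functions just mentioned (returns, Sharpe ratio, and skewness, defined formally in Appendix A) can be sketched numerically. This is an illustrative implementation under assumed conventions (population standard deviation, a hypothetical return series), not the paper's estimation code:

```python
import numpy as np

def loss_series(mu_hat):
    """Per-period losses behind the three trading-based
    Giacomini-White tests, following the appendix formulas.
    mu_hat is a strategy's out-of-sample return series."""
    mu_hat = np.asarray(mu_hat, dtype=float)
    # population standard deviation of the return series
    sd = np.sqrt(np.mean((mu_hat - mu_hat.mean()) ** 2))
    l_return = mu_hat                 # returns metric
    l_sharpe = mu_hat / sd            # Sharpe-ratio metric
    # Pearson's second skewness: averaging these per-period
    # losses gives 3 * (mean - median) / sd
    l_skew = 3.0 * (mu_hat - np.median(mu_hat)) / sd
    return l_return, l_sharpe, l_skew

# Sample means of the loss series are the statistics being compared
mu = [0.01, 0.02, 0.03, 0.04, 0.10]
l_ret, l_sh, l_sk = loss_series(mu)
```

Averaging each loss series over the evaluation window recovers the familiar statistic: mean return, Sharpe ratio, and Pearson's second skewness coefficient, which is what puts the three metrics on a comparable scale.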
Table 12
Forecast evaluation: loss-function statistics for pooled data.

                            Squared error   Return       Sharpe ratio   Skewness
                            (× 1000)        (annual %)   (annual)       (monthly)
(a) In sample
  Naïve                     0.90**          3.19**       0.30**         −0.17**
  Naïve ECM                 0.88**          4.10**       0.39**         0.03
  VAR                       0.88**          5.54**       0.53**         −0.07
  VECM                      0.86**          6.90**       0.66**         0.09
  TECM                      0.84**          6.96**       0.66**         0.05
(b) Out of sample 2002–2006
  Naïve                     0.70**          6.64**       0.70**         −0.12
  Naïve ECM                 0.69**          7.74**       0.82**         −0.08
  VAR                       0.69**          5.49**       0.58**         −0.09
  VECM                      0.67**          7.03**       0.75**         −0.20
  TECM                      0.66**          7.25**       0.77**         −0.07
(c) Out of sample 2003–2007
  Naïve                     0.70*           3.68**       0.39**         −0.07
  Naïve ECM                 0.69**          4.39**       0.47**         −0.03
  VAR                       0.70            2.66**       0.28**         −0.14
  VECM                      0.70            2.57*        0.28*          −0.16
  TECM                      0.70            4.02**       0.43**         −0.03
(d) Out of sample 2004–2008
  Naïve                     1.01**          −2.89**      −0.27**        −0.32**
  Naïve ECM                 1.01**          −1.41        −0.13          −0.18
  VAR                       1.00            0.44         0.04           −0.16*
  VECM                      0.99            3.27**       0.30**         −0.04
  TECM                      0.97            3.64**       0.33**         0.02

Notes: ** (*) denotes significant at the 95% (90%) confidence level. To correct for cross-sectional dependence, a cluster-robust covariance correction is used. See text.
Appendix A

The loss functions of the Giacomini and White (2006) statistics in expression (4) that correspond to the three trading-based metrics are, for the returns of a given strategy,

$$L_{t+1} = \hat{\mu}_{t+1};$$

for the Sharpe ratio,

$$L_{t+1} = \frac{\hat{\mu}_{t+1}}{\sqrt{\frac{1}{P}\sum_{t=R}^{T-1}\left(\hat{\mu}_{t+1}-\bar{\mu}\right)^{2}}};$$

and for skewness, using Pearson's second definition of skewness to provide a scale comparable with the returns- and Sharpe ratio-based loss functions,

$$L_{t+1} = \frac{3\left(\hat{\mu}_{t+1}-\mu(0.50)\right)}{\sqrt{\frac{1}{P}\sum_{t=R}^{T-1}\left(\hat{\mu}_{t+1}-\bar{\mu}\right)^{2}}},$$

where $\mu(0.50)$ refers to the median of $\hat{\mu}_{t+1}$ for t = R, …, T − 1.

In Table 12 we report results parallel to those in Table 10 by pooling instead of grouping by portfolios. To adjust for any heterogeneity, we use a cluster-robust correction for the panel.

References

Alexius, Annika, 2001. Uncovered interest parity revisited. Review of International Economics 9 (3), 505–517.
Bacchetta, Philip, van Wincoop, Eric, 2006. Can information heterogeneity explain the exchange rate determination puzzle? The American Economic Review 96 (3), 552–576.
Backus, David K., Gregory, Allan W., Telmer, Chris I., 1993. Accounting for forward rates in markets for foreign currency. Journal of Finance 48 (5), 1887–1908.
Bakker, Age, Chapple, Bryan, 2002. Advanced country experiences with capital account liberalization. Occasional Paper, 214. International Monetary Fund, Washington, D.C.
Bekaert, Geert, Hodrick, Robert J., 1993. On biases in the measurement of foreign exchange risk premiums. Journal of International Money and Finance 12 (2), 115–138.
Bernardo, Antonio E., Ledoit, Olivier, 1999. Approximate Arbitrage. Anderson School of Management, UCLA. Working paper 18–99.
Bernardo, Antonio E., Ledoit, Olivier, 2000. Gain, loss, and asset pricing.
Journal of Political Economy 108 (1), 144–172.
Bhansali, Vineer, 2007. Volatility and the carry trade. Journal of Fixed Income 17 (3), 72–84.
Brunnermeier, Markus K., Nagel, Stefan, Pedersen, Lasse H., 2009. Carry trades and currency crashes. In: Acemoglu, Daron, Rogoff, Kenneth, Woodford, Michael (Eds.), NBER Macroeconomics Annual 2008. University of Chicago Press, Chicago, pp. 313–347.
Burnside, A. Craig, 2011. The cross-section of foreign currency risk premia and consumption growth risk: comment. The American Economic Review 101 (7), 3456–3476.
Burnside, A. Craig, Eichenbaum, Martin, Kleshchelski, Isaac, Rebelo, Sergio, 2006. The returns to currency speculation. NBER Working Papers, 12489.
Burnside, A. Craig, Eichenbaum, Martin, Rebelo, Sergio, 2007. The returns to currency speculation in emerging markets. NBER Working Papers, 12916.
Burnside, A. Craig, Eichenbaum, Martin, Rebelo, Sergio, 2008a. Carry trade: the gains of diversification. Journal of the European Economic Association 6 (2–3), 581–588.
Burnside, A. Craig, Eichenbaum, Martin, Kleshchelski, Isaac, Rebelo, Sergio, 2008b. Do peso problems explain the returns to the carry trade? CEPR Discussion Papers, 6873. June.
Burnside, A. Craig, Eichenbaum, Martin, Kleshchelski, Isaac, Rebelo, Sergio, 2008c. Do peso problems explain the returns to the carry trade? NBER Working Papers, 14054.
Černý, Aleš, 2003. Generalised Sharpe ratios and asset pricing in incomplete markets. European Finance Review 7 (2), 191–233.
Cheung, Yin-Wong, Chinn, Menzie D., Garcia Pascual, Antonio, 2005. Empirical exchange rate models of the nineties: are any fit to survive? Journal of International Money and Finance 24 (7), 1150–1175.
Chinn, Menzie D., Frankel, Jeffrey A., 2002. Survey data on exchange rate expectations: more currencies, more horizons, more tests. In: Allen, W., Dickinson, D. (Eds.), Monetary Policy, Capital Flows and Financial Market Developments in the Era of Financial Globalisation: Essays in Honour of Max Fry.
Routledge, London, pp. 145–167.
Chong, Yanping, Jordà, Òscar, Taylor, Alan M., 2012. The Harrod–Balassa–Samuelson hypothesis: real exchange rates and their long-run equilibrium. International Economic Review 53 (2), 609–634.
Clarida, Richard, Waldman, Daniel, 2008. Is bad news about inflation good news for the exchange rate? And, if so, can that tell us anything about the conduct of monetary
policy? In: Campbell, John Y. (Ed.), Asset Prices and Monetary Policy. University of Chicago Press, Chicago, pp. 371–396.
Clinton, Kevin, 1988. Transactions costs and covered interest arbitrage: theory and evidence. Journal of Political Economy 96 (2), 358–370.
Coakley, Jerry, Fuertes, Ana María, 2001. A non-linear analysis of excess foreign exchange returns. The Manchester School 69 (6), 623–642.
Cochrane, John H., Saá-Requejo, Jesús, 2000. Beyond arbitrage: good-deal asset price bounds in incomplete markets. Journal of Political Economy 108 (1), 79–119.
DeMiguel, Victor, Garlappi, Lorenzo, Uppal, Raman, 2009. Optimal versus naive diversification: how inefficient is the 1/N portfolio strategy? Review of Financial Studies 22 (5), 1915–1953.
Diebold, Francis X., Mariano, Roberto S., 1995. Comparing predictive accuracy. Journal of Business and Economic Statistics 13 (3), 253–263.
Dominguez, Kathryn M., 1986. Are foreign exchange forecasts rational? New evidence from survey data. Economics Letters 21 (3), 277–281.
Dooley, Michael P., Shafer, Jeffrey R., 1984. Analysis of short-run exchange rate behavior: March 1973 to 1981. In: Bigman, David, Taya, Teizo (Eds.), Floating Exchange Rates and the State of World Trade and Payments, pp. 43–70.
Engel, Charles, Hamilton, James D., 1990. Long swings in the dollar: are they in the data and do markets know it? The American Economic Review 80 (4), 689–711.
Fama, Eugene F., 1984. Forward and spot exchange rates. Journal of Monetary Economics 14 (3), 319–338.
Fawcett, Tom, 2006. ROC graphs with instance-varying costs. Pattern Recognition Letters 27 (8), 882–891.
Fisher, Eric O'N., 2006. The forward premium in a model with heterogeneous prior beliefs. Journal of International Money and Finance 25 (1), 48–70.
Frankel, Jeffrey A., 1980. Tests of rational expectations in the forward exchange market. Southern Economic Journal 46 (4), 1083–1101.
Frankel, Jeffrey A., Froot, Kenneth A., 1987.
Using survey data to test standard propositions regarding exchange rate expectations. The American Economic Review 77 (1), 133–153.
Frenkel, Jacob A., Levich, Richard M., 1975. Covered interest rate arbitrage: unexploited profits? Journal of Political Economy 83 (2), 325–338.
Froot, Kenneth A., Frankel, Jeffrey A., 1989. Forward discount bias: is it an exchange risk premium? Quarterly Journal of Economics 104 (1), 139–161.
Froot, Kenneth A., Ramadorai, Tarun, 2008. Institutional portfolio flows and international investments. Review of Financial Studies 21 (2), 937–971.
Froot, Kenneth A., Thaler, Richard H., 1990. Foreign exchange. The Journal of Economic Perspectives 4 (3), 179–192.
Fujii, Eiji, Chinn, Menzie D., 2000. Fin de siècle real interest parity. NBER Working Papers, 7880.
Giacomini, Raffaella, Rossi, Barbara, 2010. Forecast comparisons in unstable environments. Journal of Applied Econometrics 25 (4), 595–620.
Giacomini, Raffaella, White, Halbert, 2006. Tests of conditional predictive ability. Econometrica 74 (6), 1545–1578.
Granger, Clive W.J., Teräsvirta, Timo, 1993. Modelling Nonlinear Economic Relationships. Oxford University Press, Oxford.
Hansen, Lars Peter, Hodrick, Robert J., 1980. Forward exchange rates as optimal predictors of future spot rates: an econometric analysis. Journal of Political Economy 88 (5), 829–853.
Hsieh, Fushing, Turnbull, Bruce W., 1996. Nonparametric and semiparametric estimation of the receiver operating characteristic curve. Annals of Statistics 24 (1), 25–40.
Ilut, Cosmin L., 2010. Ambiguity aversion: implications for the uncovered interest rate parity puzzle. Working Papers, 10–53. Department of Economics, Duke University.
Jeanne, Olivier, Rose, Andrew K., 2002. Noise trading and exchange rate regimes. Quarterly Journal of Economics 117 (2), 537–569.
Jordà, Òscar, Taylor, Alan M., 2011. Performance evaluation of zero net-investment strategies. NBER Working Paper, 17150.
Jurek, Jakub W., 2008.
Crash-Neutral Currency Carry Trades. Princeton University, Photocopy.
Khandani, Amir E., Kim, Adlar J., Lo, Andrew W., 2010. Consumer credit-risk models via machine-learning algorithms. Journal of Banking and Finance 34, 2767–2787.
Levich, Richard M., Thomas III, Lee R., 1993. The significance of technical trading-rule profits in the foreign exchange market: a bootstrap approach. Journal of International Money and Finance 12 (5), 451–474.
Lusted, Lee B., 1960. Logical analysis in roentgen diagnosis. Radiology 74 (2), 178–193.
Lustig, Hanno, Verdelhan, Adrien, 2007. The cross section of foreign currency risk premia and consumption growth risk. The American Economic Review 97 (1), 89–117.
Mason, Ian B., 1982. A model for the assessment of weather forecasts. Australian Meteorological Magazine 30 (4), 291–303.
Meese, Richard A., Rogoff, Kenneth, 1983. Empirical exchange rate models of the seventies. Journal of International Economics 14 (1–2), 3–24.
Michael, Panos, Nobay, A. Robert, Peel, David A., 1997. Transactions costs and nonlinear adjustment in real exchange rates: an empirical investigation. Journal of Political Economy 105 (4), 862–879.
Obstfeld, Maurice, Taylor, Alan M., 1997. Nonlinear aspects of goods-market arbitrage and adjustment: Heckscher's commodity points revisited. Journal of the Japanese and International Economies 11 (4), 441–479.
Obstfeld, Maurice, Taylor, Alan M., 2004. Global Capital Markets: Integration, Crisis, and Growth. Cambridge University Press, Cambridge.
Pedroni, Peter, 1999. Critical values for cointegration tests in heterogeneous panels with multiple regressors. Oxford Bulletin of Economics and Statistics 61 (S1), 653–670.
Pedroni, Peter, 2004. Panel cointegration: asymptotic and finite sample properties of pooled time series tests with an application to the PPP hypothesis. Econometric Theory 20 (3), 597–625.
Pepe, Margaret S., 2003. The Statistical Evaluation of Medical Tests for Classification and Prediction. Oxford University Press, Oxford.
Pesaran, M. Hashem, Timmermann, Allan, 1992. A simple nonparametric test of predictive performance. Journal of Business and Economic Statistics 10 (4), 461–465.
Peterson, Wesley W., Birdsall, Theodore G., 1953. The theory of signal detectability: part I. The general theory. Technical Report, 13. Electronic Defense Group, University of Michigan, Ann Arbor.
Plantin, Guillaume, Shin, Hyun Song, 2007. Carry Trades and Speculative Dynamics. Princeton University, Photocopy.
Pojarliev, Momtchil, Levich, Richard M., 2007. Do professional currency managers beat the benchmark? NBER Working Papers, 13714.
Pojarliev, Momtchil, Levich, Richard M., 2008. Trades of the living dead: style differences, style persistence and performance of currency fund managers. NBER Working Papers, 14355.
Poterba, James M., Summers, Lawrence H., 1986. The persistence of volatility and stock market fluctuations. The American Economic Review 76 (5), 1142–1151.
Sager, Michael, Taylor, Mark P., 2008. Generating Currency Trading Rules from the Term Structure of Forward Foreign Exchange Premia. University of Warwick. October, Photocopy.
Sarno, Lucio, Taylor, Mark P., 2002. The Economics of Exchange Rates. Cambridge University Press, Cambridge.
Sarno, Lucio, Valente, Giorgio, Leon, Hyginus, 2006. Nonlinearity in deviations from uncovered interest parity: an explanation of the forward bias puzzle. Review of Finance 10 (3), 443–482.
Shleifer, Andrei, Vishny, Robert W., 1997. A survey of corporate governance. Journal of Finance 52 (2), 737–783.
Silvennoinen, Annastiina, Teräsvirta, Timo, He, Changli, 2008. Unconditional Skewness from Asymmetry in the Conditional Mean and Variance. Department of Economic Statistics, Stockholm School of Economics. Photocopy.
Sinclair, Peter J.N., 2005. How policy rates affect output, prices, and labour, open economy issues, and inflation and disinflation. In: Mahadeva, Lavan, Sinclair, Peter (Eds.), How Monetary Policy Works. Routledge, London, pp. 53–81.
Spackman, Kent A., 1989. Signal detection theory: valuable tools for evaluating inductive learning. Proceedings of the Sixth International Workshop on Machine Learning. Morgan Kaufmann, San Mateo, Calif., pp. 160–163.
Swets, John A., 1973. The relative operating characteristic in psychology. Science 182 (4116), 990–1000.
Taylor, Alan M., Taylor, Mark P., 2004. The purchasing power parity debate. The Journal of Economic Perspectives 18 (4), 135–158.
Taylor, Mark P., Peel, David A., Sarno, Lucio, 2001. Nonlinear mean-reversion in real exchange rates: toward a solution to the purchasing power parity puzzles. International Economic Review 42 (4), 1015–1042.
West, Kenneth D., 1996. Asymptotic inference about predictive ability. Econometrica 64 (5), 1067–1084.