The Earth simulator

Packing the world of human interactions into a computer could help solve our planet's woes, says Philip Ball

NewScientist, 30 October 2010

When picking over the causes of the current economic crisis, we think we know where to point the finger: it was them greedy bankers wot dunnit. It was they who lent American house buyers more money than they could hope to repay, and who sold on the risk in ever more complex, opaque packages, rewarding themselves handsomely in the process. While there is truth in that picture, it ignores the complexities of a system in which the initially small perturbation of the "sub-prime" mortgage crisis morphed into systemic collapse. Moreover, concentrating on the specific causes of that one event fixates on the past at the expense of the future. When the next crisis comes, the disturbance will probably ripple out from another quarter of the economy, taking us completely unawares.

Can we prepare better? Perhaps – but only if we fundamentally change our approach to economics. Instead of making unrealistically simplistic assumptions about human behaviour and the properties of markets, we can harness the number-crunching power of modern computing, coupled with our emerging understanding of the physics of complex systems, to rebuild economic theory from the bottom up. Extending that approach to the social sciences more generally could help us develop forecasting tools for a whole range of problems threatening human society – not just the ravages of the markets, but wars, disease and demographic change.

Investigating the roots of our recent financial crisis using conventional economics rapidly hits a problem: the theories deny that such things ever happen. Economists employed by central banks and government treasuries favour statistical models fitted to past data, or "equilibrium" models that assume markets find stability if left to their own devices – a quality characterised by the 18th-century Scottish economist Adam Smith as the "invisible hand". Such models assume that markets never crash on their own.
Instead, speculative bubbles and catastrophic crashes are caused by unpredictable events in the wider world, such as political upheavals or technological innovations.

That's not the only convenient simplification. Individuals and institutions are also assumed to have access to all the information that exists on an object's price, and to decide whether to buy it in their own self-interest according to a rational cost-benefit analysis. That's like choosing to buy a chicken sandwich only after taking into account the current price of every other chicken sandwich in the world. "That is psychologically unrealistic," says Daniel Kahneman, an economics Nobel laureate at Princeton University. Not only do we act on the basis of incomplete information, but we often make economic decisions against our rational self-interest. We pay more for brand-name goods that are not necessarily superior to unbranded ones, for example, and we sometimes buy a sandwich regardless of price, just because we are hungry.
Messy mathematics

The central characteristic of agents in an economy – be they companies, banks, traders, or individual consumers like you and me – says Kahneman, "is not that they reason poorly, but that they often act intuitively". This is the downfall of many attempts to model economic and social interactions: the assumption that everyone behaves in the same rational way. We clearly don't.

What's more, the number-crunching power now at our fingertips means such assumptions are no longer necessary. Rather than solve mathematical models of an economy, we can use computer models to simulate how people and institutions interact, and how their actions combine to dictate the ups and downs of markets. By feeding whatever rules for human behaviour we like into a model and letting it roll, we can see the consequences without worrying about how messy it makes the mathematics.

This approach, called agent-based modelling, has its roots in computer studies of so-called cellular automata, popularised in the 1970s by the British mathematician John Conway's "Game of Life". Here, individual cells in a grid interact with one another according to simple rules – reproducing or dying, for example, depending on the number of neighbours they have. Such simple rules can rapidly give rise to complex yet organised collective behaviour.

Agent-based economic models have a similar starting point. Agents representing the various participants in an economy are first assigned rules for making economic transactions. They might try to maximise profit, for example, or avoid bankruptcy, achieve a balance between profit and risk,
or just copy the most successful strategy followed by others. Crucially, they can also learn from experience and respond to one another's decisions. For instance, an agent might buy a stock with a certain probability if its price falls below a threshold – a threshold that could differ between agents – but with an increased probability if other agents are also buying the same stock.

Just as in the Game of Life, complex behaviours soon emerge from such simple beginnings. In 1998, economist Thomas Lux, now at the University of Kiel in Germany, and physicist Michele Marchesi of the University of Cagliari in Italy showed that a basic economic model containing agents with various trading strategies could produce realistic patterns of price fluctuation. Not only that, but it could also produce phenomena alien to equilibrium models yet familiar from real markets, such as bursts of transactions separated by relatively calm periods (Nature, vol 397, p 498). Around the same time, economist Blake LeBaron, now at Brandeis University in Waltham, Massachusetts, and his colleagues showed how a similar model could produce business-as-usual fluctuations punctuated by occasional big variations and even crashes (Artificial Life and Robotics, vol 3, p 27).
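The threshold-plus-herding rule described above can be sketched in a few lines of Python. To be clear, this is a toy illustration, not the Lux-Marchesi or LeBaron model: the agent count, probabilities and price-impact rule are all invented for the example.

```python
import random

random.seed(1)

N_AGENTS = 200
BASE_PROB = 0.05   # chance of trading when an agent's own rule triggers
HERD_BOOST = 0.4   # extra buying probability per fraction of agents who bought last step
IMPACT = 0.01      # how strongly net demand moves the price

# Each agent has its own price threshold, as described in the text
thresholds = [random.uniform(90, 110) for _ in range(N_AGENTS)]

def simulate(steps=500):
    """Run the toy market and return the price series."""
    price = 100.0
    prices = [price]
    last_buy_fraction = 0.0
    for _ in range(steps):
        buys = sells = 0
        for thr in thresholds:
            # Buying probability rises when the price is below this agent's
            # threshold, and rises further when others were buying too
            p_buy = HERD_BOOST * last_buy_fraction
            if price < thr:
                p_buy += BASE_PROB
            if random.random() < p_buy:
                buys += 1
            elif random.random() < BASE_PROB:
                sells += 1
        last_buy_fraction = buys / N_AGENTS
        # Net demand pushes the price up or down
        price *= 1 + IMPACT * (buys - sells) / N_AGENTS
        prices.append(price)
    return prices

prices = simulate()
returns = [b / a - 1 for a, b in zip(prices, prices[1:])]
print(f"final price: {prices[-1]:.2f}, largest one-step move: {max(map(abs, returns)):.4f}")
```

Even in a caricature this small, the imitation term couples the agents together, so quiet stretches and bursts of correlated trading emerge without any external shock being fed in.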
Lessons learned

These models show that Smith's invisible hand is not so much invisible as non-existent. Herding behaviour, panic selling and market instability arise not just through uncontrollable external events, but also through the way we make decisions and the influences others have on our decision-making.

What can we learn from this? In June this year, scientists, economists, computer modellers and policy-makers gathered at a workshop in Warrenton, Virginia, sponsored by the US National Science Foundation, to see what lessons we might draw from agent-based models of the recent crisis (see "Picking over the debris", below). They also explored whether such models might help us prevent future slumps. One central question emerged: is it feasible to build an agent-based model on a scale capable of simulating a nation's, or indeed the world's, economy? Such a model would be a huge step towards understanding the consequences of policy decisions before we make them.

It would also be a huge undertaking. Just as modelling Earth's climate requires the expertise of everyone from marine biologists to atmospheric chemists, any whole-economy model would need to draw on the knowledge of experts in finance, labour markets, supply chains, marketing, retail and perhaps even psychology and law. Vast computing power would also be needed to run the model itself and follow the interactions of millions, or perhaps billions, of agents representing companies and institutions with diverse agendas and decision-making rules. And just as climate models need a stream of data from meteorological stations to provide a constant reality check, an agent-based economic model would need colossal amounts of information about market transactions, prices, consumer behaviour, employment and business growth, government fiscal policies and interest rates – in short, everything concerned with business, banking, finance and day-to-day economic behaviour.

We don't have that data yet, but we could soon. In the US, for example, the Office of Financial Research, whose creation was mandated by Congress in July 2010, will have unprecedented powers to demand information on transactions by financial institutions, including previously confidential details such as the terms of individual loans and who underwrites the risk for credit defaults.

Even with that data glut, we would probably need many models to generate useful predictions. "There will never be a correct and final model," says Silvano Cincotti, a complex-systems engineer at the University of Genoa, Italy, who attended the Virginia workshop. "Economics is a self-referential system – the model influences the system you're studying." In other words, if people get wind of predictions, they change their behaviour on the basis of those predictions, quite possibly invalidating them.

All this might suggest that real-world agent-based models won't be around any time soon. In fact, a proof of principle already exists. Called Eurace, it is the largest agent-based model of an economy developed so far, created by a European team including Cincotti between 2006 and 2009. Eurace simulates a fictitious economy with several million agents, whose markets for labour, goods, credit and finance act as a test bed for the effects of different economic policies.

How, for example, should we deal with the massive government debts incurred as a result of the financial crisis? Is the answer fiscal tightening – reducing the debt with high taxes or low public spending – as pursued by the current UK government? Or keeping taxes low and plugging the gap by borrowing, selling government bonds as the US has favoured? Eurace's simulations suggest that, in the long run, the American approach boosts economic growth and employment levels more.
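Policy experiments of this kind boil down to running the same simulated economy under different rule settings and comparing aggregate outcomes. The sketch below is a deliberately crude, aggregate caricature of that comparison logic – the tax rates, spending level and 80 per cent propensity to consume are invented for illustration and have nothing to do with Eurace's actual mechanics.

```python
def run_policy(tax_rate, gov_spending, steps=100):
    """Toy economy: household spending plus government spending drives output,
    and any shortfall between spending and tax receipts piles up as debt."""
    output = 100.0
    debt = 100.0
    history = []
    for _ in range(steps):
        income = output
        tax_take = tax_rate * income
        consumption = 0.8 * (income - tax_take)  # households spend 80% of post-tax income
        output = consumption + gov_spending      # next period's output follows demand
        debt += gov_spending - tax_take          # each deficit adds to the debt
        history.append(output)
    return history, debt

# "Fiscal tightening": high taxes; "borrowing": low taxes, same spending
tight, debt_tight = run_policy(tax_rate=0.35, gov_spending=30)
loose, debt_loose = run_policy(tax_rate=0.20, gov_spending=30)
print(f"output under tightening: {tight[-1]:.1f}, debt: {debt_tight:.1f}")
print(f"output under borrowing:  {loose[-1]:.1f}, debt: {debt_loose:.1f}")
```

Even this caricature reproduces the trade-off at stake: the low-tax regime settles at higher output while accumulating more debt. What a model like Eurace adds is the agent-level detail – millions of heterogeneous households, firms and banks – that determines which side of the trade-off dominates in practice.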
Crisis observatories

More work is needed to prove that these and other preliminary results from Eurace scale up to the size of a national or indeed the world economy, says Cincotti. But it looks set to be a far more promising vehicle for understanding the niceties of economics than the old one-size-fits-all approach.

Why stop there? If it is possible to simulate an entire economy using agent-based modelling, can we encapsulate more facets of human activities and impacts in a model? That is just what the researchers behind
Picking over the debris

What turned the modest drama in the US housing market in mid-2007 into a global catastrophe? A new way of modelling economies, based on the economic behaviour of individuals, or agents, is supplying fresh insights. What's certain is that rising house prices and low interest rates encouraged people to release the equity in their homes by refinancing, while banks were falling over themselves to offer such loans. But according to models developed by Andrew Lo at the Massachusetts Institute of Technology, one crucial factor in causing the situation to spiral out of control was the tendency of home-owners to increase their "leverage" – the ratio of their loan burden to their property value – in good times but not decrease it in bad times. When house prices fell sharply, everyone was caught short at once.

The problem also lay in the way the debt was handled. The tools banks used for calculating risk when selling
the debt on were faulty, and the "derivatives" they created had such byzantine complexity that it was almost impossible to keep track of which risk went where. As soon as the first bankruptcies were announced, the whole system froze: because no one knew where the debts lurked, no one could tell who was safe to lend to.

An advantage of agent-based models is that they can simulate the web of interactions between banks explicitly. Just as with power grids, computer viruses and disease epidemics, the topology of this web – what is linked to what else, and how many connections exist – is thought to determine how it behaves under stress. Researchers at the Bank of England, among others, are now trying to tease out the network properties of financial markets, in the hope of identifying generic vulnerabilities in the system. If they can find them, agent-based models might help us to avoid a repeat of past crises.
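The network effect described above can be shown with a minimal contagion sketch. The four banks, their capital buffers, exposures and the wipe-out rule below are all invented for the example; real interbank analysis of the kind done at the Bank of England rests on far richer balance-sheet data.

```python
# Each bank holds a capital buffer; losses at or above the buffer wipe it out.
capital = {"A": 10.0, "B": 4.0, "C": 4.0, "D": 12.0}

# exposures[x] = list of (borrower, amount): x loses `amount` if the borrower fails
exposures = {
    "A": [("B", 3.0)],
    "B": [("C", 5.0)],
    "C": [("D", 2.0)],
    "D": [("A", 1.0), ("B", 6.0)],
}

def cascade(first_failure):
    """Propagate losses through the web until no further bank is wiped out."""
    failed = {first_failure}
    changed = True
    while changed:
        changed = False
        for bank, claims in exposures.items():
            if bank in failed:
                continue
            loss = sum(amount for borrower, amount in claims if borrower in failed)
            if loss >= capital[bank]:
                failed.add(bank)
                changed = True
    return failed

print(sorted(cascade("C")))  # C's failure also topples B via B's large claim on C
print(sorted(cascade("B")))  # B's failure, on its own, topples no one else
```

In this toy web, failing C brings down B while failing B directly harms no one – the topology, not just the size of the initial shock, decides how far the damage spreads, which is exactly the property the network researchers are trying to map.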
the pan-European Future Information and Communication Technologies consortium (FuturICT) are planning. Coordinated by Dirk Helbing, a social scientist at the Swiss Federal Institute of Technology in Zurich, they have recently submitted a proposal to the European Commission to develop a model they dub the "Living Earth Simulator". The idea is to plug a whole-economy model into models of other "Earth systems", such as climate, transport, population and water resources, to create a predictive model of global social, environmental and ecological behaviour.

It is an acknowledgement that it is all but impossible to study complex phenomena in isolation. The spread of infectious disease, for example, depends heavily on transport networks, from international contagion by air travel to the small-scale details of local bus services. By bundling all the relevant systems into one model, the hope is to create a "crisis observatory" that could help us negotiate the challenges facing humanity: disease, war, demographic changes such as ageing populations and mass migration, the security of information systems, and the human effects of globalisation and the diminution of natural resources.

That potential justifies FuturICT's €1 billion price tag, says Helbing. "There's obviously
a gap in the knowledge landscape," he says. "We've invested billions each year in areas such as particle physics, nuclear fusion, space and medicine, but just a fraction of that in socioeconomic sciences. We have directed our curiosity outwards, not towards the living Earth."

Early next year, Helbing and his colleagues, who include Cincotti, will find out whether they have convinced the EU sufficiently to award them €1 million in development funding from its Flagship Programme, which seeks to fund large-scale, visionary initiatives. A decision on footing the full bill for just two Flagship proposals is due in mid-2012.

With the know-how and the technology to make large-scale agent-based models feasible now coming together, FuturICT's advocates argue that €1 billion is a small amount to gamble on our future security. "It is imperative that this admittedly bold step be taken," says Joshua Epstein of Johns Hopkins University in Baltimore, Maryland, a social scientist who pioneered agent-based models of society and is now using them to study epidemics. After all, the credit crunch alone cost the world economy an estimated $1.8 trillion. As Epstein puts it: "This is one experiment we can't afford not to do."

Philip Ball is a freelance writer based in London