August 25, 2011 8:16 pm
Economics: Rituals of rigour
By John Kay
The reputation of economists, never high, has been a casualty of the global crisis. Ever since the world’s financial system teetered on the abyss following the collapse of Lehman Brothers three years ago next month, critics from Queen Elizabeth II downwards have posed one uncomfortable yet highly pertinent question: are economists of any use at all?
Some of this criticism is misconceived. Specific predictions of economic growth or levels of the stock market – gross domestic product will rise by 1.8 per cent; the FTSE 100 index will stand at 6,500 by year-end – assert knowledge that those making such predictions cannot have. Economic systems are typically dynamic and non-linear. This means that outcomes are likely to be very sensitive to small changes in the parameters that determine their evolution. These systems are also reflexive, in the sense that beliefs about what will happen influence what does happen.
If you ask why economists persist in making predictions despite these difficulties, the answer is that few do. Yet that still leaves a vocal minority who have responded cynically to the insatiable public demand for forecasts. Mostly they are employed in the financial sector – for their entertainment value rather than their advice.
Economists often make unrealistic assumptions but so do physicists, and for good reasons. Physicists will describe motion on frictionless planes or gravity in a world without air resistance – not because anyone believes that the world is frictionless and airless, but because it is too difficult to study everything at once. A simplifying model eliminates confounding factors and focuses on a particular issue of interest. This is as legitimate a method in economics as in physics.
Since there are easy responses to these common criticisms of bad predictions and unrealistic assumptions, attacks on the profession are ignored by professional academic economists, who complain that the critics do not understand what economists really do. But if the critics did understand what economists really do, public criticism might be more severe yet.
Even if sharp predictions of individual economic outcomes are rarely possible, it should be possible to describe the general character of economic events, the ways in which these events are likely to develop, the broad nature of policy options and their consequences. It should be possible to call on a broad consensus on the interpretation of empirical data to support such analysis. This is very far from being the case.
The two branches of economics most relevant to the recent crisis are macroeconomics and financial economics. Macroeconomics deals with growth and business cycles. Its dominant paradigm is known as “dynamic stochastic general equilibrium” (thankfully abbreviated to DSGE) – a complex model structure that seeks to incorporate, in a single framework, time, risk and the need to take account of the behaviour of many different companies and households.
The study of financial markets revolves meanwhile around the “efficient market hypothesis” – that all available information is incorporated into market prices, so that these prices at all times reflect the best possible estimate of the underlying value of assets – and the “capital asset pricing model”. This latter notion asserts that what we see is the outcome of decisions made by a marketplace of rational players acting on the belief in efficient markets.
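The central relation of the capital asset pricing model fits in a line of arithmetic. The figures below are invented for illustration, not drawn from any market data:

```python
# The capital asset pricing model's core relation: an asset's expected
# return is the risk-free rate plus a premium proportional to its "beta",
# a measure of how the asset moves with the market as a whole.

def capm_expected_return(risk_free, beta, market_return):
    """Expected return under CAPM: R_f + beta * (E[R_m] - R_f)."""
    return risk_free + beta * (market_return - risk_free)

# Illustrative inputs: 2% risk-free rate, beta of 1.3, 8% market return.
print(capm_expected_return(risk_free=0.02, beta=1.3, market_return=0.08))
```

The point of the formula is that, in this world, only exposure to market-wide risk is rewarded; everything idiosyncratic is assumed to be diversified away.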
. . .
A close relationship exists between these three theories. But the account of recent events given by proponents of these models was comprehensively false. They proclaimed stability where there was impending crisis, and market efficiency where there was gross asset mispricing.
Regulators such as Alan Greenspan, former chairman of the US Federal Reserve, asserted that the growth of trade in complex financial investments represented new and more effective tools of risk management that made the economy more stable. As late as 2007, the International Monetary Fund would justify its optimism about the macroeconomic outlook with the claim that “developments in the global financial system have played an important role, including the ability of the United States to generate assets with attractive liquidity and risk management features”.
These mistaken claims found substantial professional support. In his presidential lecture to the American Economic Association in 2003, Robert Lucas of the University of Chicago, the Nobel prizewinning doyen of modern macroeconomics, claimed that “macroeconomics has succeeded: its central problem of depression prevention has been solved”. Prof Lucas based his assertion on the institutional innovations noted by Mr Greenspan and the IMF authors, and the deeper theoretical insights that he and his colleagues claimed to have derived from models based on DSGE and the capital asset pricing model.
The serious criticism of modern macroeconomics is not that its practitioners did not anticipate that Lehman would fall apart on September 15 2008, but that they failed to understand the mechanisms that had put the global economy at grave risk.
Subsequent policy decisions have been pragmatic and owe little to any economic theory. The recent economic policy debate strikingly replays that after 1929. The central issue is budgetary austerity versus fiscal stimulus, and – as in the 1930s – the positions of the protagonists are entirely predictable from their political allegiances.
Why did the theories put forward to deal with these issues prove so misleading? The academic debate on austerity versus stimulus centres around a property observed in models based on the DSGE programme. If government engages in fiscal stimulus by spending more or by reducing taxes, people will recognise that such a policy means higher taxes or lower spending in the future. Even if they seem to be better off today, they will later be poorer, and by a similar amount. Anticipating this, they will cut back and government spending will crowd out private spending. This property – sometimes called Ricardian equivalence – implies that fiscal policy is ineffective as a means of responding to economic dislocation.
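The logic of Ricardian equivalence can be made concrete with a stylised two-period calculation. The incomes, transfer and interest rate below are invented for illustration:

```python
# Stylised two-period illustration of Ricardian equivalence. A household
# earns income in both periods; the government hands out a debt-financed
# transfer today and must levy a tax tomorrow to repay it with interest.

def lifetime_wealth(income, transfer_today, interest_rate):
    """Present value of the household's resources over two periods."""
    y1, y2 = income
    future_tax = transfer_today * (1 + interest_rate)  # debt repaid with interest
    return y1 + transfer_today + (y2 - future_tax) / (1 + interest_rate)

r = 0.05
baseline = lifetime_wealth((100, 100), transfer_today=0, interest_rate=r)
with_stimulus = lifetime_wealth((100, 100), transfer_today=10, interest_rate=r)

# The transfer and the discounted future tax cancel exactly, so lifetime
# wealth -- and hence, in the model, consumption -- is unchanged.
print(baseline, with_stimulus)  # the two values are equal
```

In the model the household sees through the stimulus entirely, which is precisely why fiscal policy has no effect there.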
John Cochrane, Prof Lucas’s Chicago colleague, put forward this “policy ineffectiveness” thesis in a response to an attack by Paul Krugman, Nobel laureate economist, on the influence of the DSGE school. (In an essay in the New York Times Prof Krugman described comments from the Chicago economists as “the product of the Dark Age of macroeconomics in which hard-won knowledge has been forgotten”.) Prof Cochrane at once acknowledged that the assumptions that give rise to policy ineffectiveness “are, as usual, obviously not true”. For most, that might seem to be the end of the matter. But it is not. Prof Cochrane goes on to say that “if you want to understand the effects of government spending, you have to specify why the assumptions leading to Ricardian equivalence are false”.
That is a reasonable demand. But the underlying assumptions are plainly not true. No one, including Prof Cochrane himself, really believes that the whole population calibrates its long-term savings in line with forecasts of public debt and spending levels decades into the future.
. . .
But Prof Cochrane will not give up so easily. “Economists”, he goes on, “have spent a generation tossing and turning the Ricardian equivalence theory, and assessing the likely effects of fiscal stimulus in its light, generalising the ‘ifs’ and figuring out the likely ‘therefores’. This is exactly the right way to do things.” The programme he describes modifies the core model in ways that make it more complex, but not necessarily more realistic, by introducing parameters to represent failures of the model assumptions that are frequently described as frictions, or “transactions costs”.
Why is this procedure “exactly the right way to do things”? There are at least two alternatives. You could build a different analogue economy. For example, Joseph Stiglitz – another Nobel laureate – and his followers favour a model that retains many of the Lucas assumptions but attaches great importance to imperfections of information. After all, Ricardian equivalence requires that households have a great deal of information about future budgetary options, or at least behave as if they did.
Another possibility is to assume that households respond mechanically to events according to specific behavioural rules, rather like rats in a maze – an approach often called agent-based modelling. Such models can – to quote Prof Lucas – also “be put on a computer and run”. It is not obvious whether the assumptions or conclusions of these models are more, or less, plausible than those of the kind of model favoured by Profs Lucas and Cochrane.
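A toy version of such an agent-based model – invented here for illustration, not any published specification – fits in a few lines:

```python
import random

# Minimal agent-based sketch: each household follows a fixed behavioural
# rule -- spend a set fraction of current income -- and one household's
# spending becomes, at random, another household's income next period.

random.seed(0)

def simulate(n_agents=100, rounds=20, propensity=0.8, injection=50.0):
    incomes = [100.0] * n_agents
    incomes[0] += injection  # a one-off stimulus paid to a single agent
    total_spending = []
    for _ in range(rounds):
        spending = [propensity * y for y in incomes]
        total_spending.append(sum(spending))
        # each agent's spending lands with a randomly chosen agent
        incomes = [0.0] * n_agents
        for s in spending:
            incomes[random.randrange(n_agents)] += s
    return total_spending

path = simulate()
# Aggregate spending decays geometrically as the rule-following agents
# pass a shrinking flow of income around -- the model "runs" mechanically.
```

No agent here optimises anything; the aggregate behaviour emerges entirely from the fixed rule, which is the defining feature of the approach.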
Another line of attack would discard altogether the idea that the economic world can be described by any universal model in which all key relationships are predetermined. Economic behaviour is influenced by technologies and cultures, which evolve in ways that are certainly not random but that cannot be fully, or perhaps at all, described by the kinds of variables and equations with which economists are familiar. The future is radically uncertain and models, when employed, must be context specific.
In that eclectic world Ricardian equivalence is no more than a suggestive hypothesis. It is possible that some such effect exists. One might be sceptical about whether it is very large, and suspect its size depends on a range of confounding and contingent factors – the nature of the stimulus, the overall political situation, the nature of financial markets and welfare systems. The generation of economists who followed John Maynard Keynes engaged in this ad hoc estimation when they tried to quantify one of the central concepts of his General Theory – the consumption function, which related aggregate spending in a period to current national income. Thus they tried to measure how much of a fiscal stimulus was spent – and the “multiplier” that resulted.
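The multiplier arithmetic those post-Keynesian economists performed can be sketched as a geometric series. The 0.75 marginal propensity to consume below is an illustrative figure, not an estimate:

```python
# The spending multiplier implied by a simple consumption function:
# each pound of stimulus is partly spent, that spending becomes someone
# else's income and is partly spent again, and so on.

def multiplier(mpc, rounds=1000):
    """Sum the geometric series of successive rounds of spending."""
    return sum(mpc ** k for k in range(rounds))

mpc = 0.75
print(multiplier(mpc))   # approaches the closed form 1 / (1 - mpc) = 4.0
print(1 / (1 - mpc))     # closed-form equivalent
```

Estimating the propensity to consume from data, and hence the multiplier, is exactly the kind of "ad hoc" empirical exercise described in the next paragraph.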
But you would not nowadays be able to publish similar work in a good economics journal. You would be told that your model was theoretically inadequate – it lacked rigour, failed to demonstrate consistency. To be “ad hoc” is a cardinal sin. Rigour and consistency are the two most powerful words in economics today.
. . .
Consistency and rigour are features of a deductive approach, which draws conclusions from a group of axioms – and whose empirical relevance depends entirely on the universal validity of the axioms. The only descriptions that fully meet the requirements of consistency and rigour are completely artificial worlds, such as the “plug-and-play” environments of DSGE – or the Grand Theft Auto computer game.
For many people, deductive reasoning is the mark of science: induction – in which the argument is derived from the subject matter – is the characteristic method of history or literary criticism. But this is an artificial, exaggerated distinction. Scientific progress – not just in applied subjects such as engineering and medicine but also in more theoretical subjects including physics – is frequently the result of observation that something does work, which runs far ahead of any understanding of why it works.
Not within the economics profession. There, deductive reasoning based on logical inference from a specific set of a priori assumptions is “exactly the right way to do things”. What is absurd is not the use of the deductive method but the claim to exclusivity made for it. This debate is not simply about mathematics versus poetry. Deductive reasoning necessarily draws on mathematics and formal logic: inductive reasoning, based on experience and above all careful observation, will often make use of statistics and mathematics.
Economics is not a technique in search of problems but a set of problems in need of solution. Such problems are varied and the solutions will inevitably be eclectic. Such pragmatic thinking requires not just deductive logic but an understanding of the processes of belief formation, of anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses and governments do.
The belief that models are not just useful tools but are capable of yielding comprehensive and universal descriptions of the world blinded proponents to realities that had been staring them in the face. That blindness made a big contribution to our present crisis, and conditions our confused responses to it. Economists – in government agencies as well as universities – were obsessively playing Grand Theft Auto while the world around them was falling apart.
The writer, an FT columnist, is a visiting professor at the London School of Economics and a fellow of St John’s College, Oxford
Macroeconomic modelling: Ways to simplify that are very different from those of a physicist
Robert Lucas, aged 73, is John Dewey professor of economics at the University of Chicago. Prof Lucas and Chicago colleagues, along with others such as Edward Prescott and Thomas Sargent, are the founders of the programme known as “dynamic stochastic general equilibrium” (DSGE), which dominates teaching and research in macroeconomics.
The programme has been described as “freshwater economics”, because the leading proponents have been based at locations such as Chicago, Rochester and Minnesota, in contrast to those in the seaboard strongholds of “saltwater economics” at Harvard, MIT and Stanford, who followed a more Keynesian tradition.
In 1995 Prof Lucas was awarded the Nobel prize for economics, and in his prize lecture he provided a succinct summary of his central model, built on a number of assumptions. Individuals are rational, calculating welfare maximisers. They live through two periods: work in the first and retirement in the second. There is only one good, which cannot be stored, or invested in capital projects. There is only one kind of work, and older and younger generations do not support each other.
This simplification method is very different from the physicist’s simplification, which abstracts to focus on a single element of a problem. Prof Lucas has described his objective as “the construction of a mechanical artificial world populated by interacting robots”. An economic theory is something that “can be put on a computer and run”.
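A deliberately crude sketch suggests what "put on a computer and run" can mean in practice. The log-utility form and the exchange rule below are invented for illustration, and are far simpler than anything in Prof Lucas's own work:

```python
import math

# A crude "artificial economy": one good, two-period lives, and a
# welfare-maximising robot. The young robot chooses how much of its wage
# to hand over in exchange for a claim on consumption in retirement.

def best_saving_offer(wage=1.0, price_when_old=1.0, grid=100):
    """Search a grid for the saving level that maximises two-period
    log-utility: log(consumption young) + log(consumption old)."""
    best_u, best_s = -math.inf, 0.0
    for i in range(1, grid):
        s = wage * i / grid                  # amount given up when young
        consume_young = wage - s
        consume_old = s / price_when_old     # claim redeemed in retirement
        u = math.log(consume_young) + math.log(consume_old)
        if u > best_u:
            best_u, best_s = u, s
    return best_s

# With log utility the robot splits its wage evenly across the two periods.
print(best_saving_offer())  # 0.5
```

Everything interesting about the outcome is baked into the assumed utility function and exchange rule, which is the point of the criticism that follows.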
Structures such as these are “analogue economies”, complete systems that loosely resemble the world, but a world so pared down that everything is either known or can be made up.
Such models are akin to a computer game. If the game’s designers are good at their job, events and outcomes loosely resemble those observed in the real world – they can, in a phrase that Prof Lucas and colleagues popularised, be calibrated against observation.
But it obviously cannot be inferred that policies that work in a computer game are appropriate for governments and businesses. It is in the nature of these self-contained systems that successful strategies are the product of assumptions made by the authors.
Copyright The Financial Times Limited 2011.