The implications of the micro-foundations approach to macroeconomics can be assessed in the light of the introduction to Paul Krugman’s essay ‘How Did Economists Get It So Wrong?’.

It’s hard to believe now, but not long ago economists were congratulating themselves over the success of their field. Those successes — or so they believed — were both theoretical and practical, leading to a golden era for the profession. On the theoretical side, they thought that they had resolved their internal disputes. Thus, in a 2008 paper titled “The State of Macro” (that is, macroeconomics, the study of big-picture issues like recessions), Olivier Blanchard of M.I.T., now the chief economist at the International Monetary Fund, declared that “the state of macro is good.” The battles of yesteryear, he said, were over, and there had been a “broad convergence of vision.” And in the real world, economists believed they had things under control: the “central problem of depression-prevention has been solved,” declared Robert Lucas of the University of Chicago in his 2003 presidential address to the American Economic Association.

These conclusions did not emerge as specific implications of any particular model. Rather, they reflected the fact that the micro-foundations approach, at least in its current form, can work well only under specific assumptions and conditions. The crucial assumption is that of the standard microeconomic model, in which market outcomes are driven by the optimizing decisions of rational individuals (in typical macroeconomic models, those of a single representative rational individual).


Rationality everywhere

The incorporation of rational expectations into micro-based macroeconomic models went hand in hand with the acceptance of increasingly strong forms of the efficient markets hypothesis, and both fitted naturally with the rise of market liberalism. In competitive markets where participants are perfectly rational and display high levels of foresight, it is very hard to see any beneficial role for governments. Even if governments happen to be better informed than market participants, they should not, in a world of perfect rationality, act on that information. Rather, they should release the information to the public, allowing market participants to combine this public information with their own private information, and secure better outcomes than would be possible from government action.

Of course, many macroeconomists, and particularly those of the New Keynesian school, explicitly rejected the ultra-rational assumptions that produced such implausible conclusions as Barro’s Ricardian equivalence. One of the standard moves in the construction of Blanchard’s haikus was to allow the ‘representative individual’ to deviate in some small way from perfect rationality.

A common example is the assumption of ‘hyperbolic’ discounting. The idea is that in assessing a choice between getting some benefit immediately, or at some point in the relatively near future, say, in a month’s time, people display a lot of impatience. They are willing to accept a big discount to get the benefit now rather than wait to get something better. But, if they are asked to choose between two points in the future that are a month apart, they will require only a small discount to wait the extra month. Such preferences, if maintained over time, are not consistent with standard rationality. The choices people make now regarding options in the medium future are not the same as they would make if they waited until the opportunity for immediate consumption was actually available. A paper by Liam Graham and Dennis Snower showed that the combination of staggered nominal contracts with hyperbolic discounting leads to inflation having significant long-run effects on real variables, that is, to the existence of a Phillips curve relationship that might persist into the long term.
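The preference reversal described above can be sketched in a few lines, using the quasi-hyperbolic ‘beta-delta’ form commonly employed in such models (the parameter values here are illustrative assumptions, not estimates):

```python
# Quasi-hyperbolic ("beta-delta") discounting: a stylized sketch.
# beta < 1 captures present bias; delta is the ordinary monthly discount
# factor. Both values are illustrative assumptions.
BETA, DELTA = 0.7, 0.99

def present_value(reward, months_away, beta=BETA, delta=DELTA):
    """Value today of a reward received `months_away` months from now."""
    if months_away == 0:
        return reward
    return beta * delta ** months_away * reward

# Choice today: $100 now versus $110 in a month -- impatience wins.
assert present_value(100, 0) > present_value(110, 1)

# The same pair of options pushed a year into the future -- patience wins,
# even though the gap between the two dates is still one month.
assert present_value(100, 12) < present_value(110, 13)
```

The reversal arises because the present-bias factor `beta` applies to every delayed reward equally, so it penalizes waiting only when one of the options is available immediately.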

Papers in this tradition showed that small deviations from rationality can sometimes have big effects on economic outcomes. But they rarely have big implications for public policy. Rather, they point in the direction of the idea set out by Cass Sunstein and Richard Thaler in their recent book Nudge. Sunstein and Thaler argue that governments can sometimes exploit deviations from rationality by framing choices in ways that will ‘nudge’ people’s decisions in a socially desirable direction. George Lakoff in Don’t Think of an Elephant! makes the same argument in a political context, suggesting that the Republican Party has had more success than would be expected based on underlying support for its policies, because it has done a better job of ‘framing’ political issues. Rather than seeking a more rational debate, Lakoff argues, Democrats should respond in kind.


Fiscal and monetary policy

The theoretical complacency with which the DSGE school viewed the state of macroeconomic theory was matched by a similar complacency regarding macroeconomic policy. From the early 1990s to the panic of 2008, macroeconomic policy was, for all practical purposes, monetary policy, or, more precisely, interest rate policy. The standard approach involved what is called a Taylor rule, after economist John Taylor, later Under Secretary of the US Treasury for International Affairs in the George W. Bush Administration, who proposed it in 1993. Taylor presented his rule as a way of describing the actual behavior of central banks, but it soon came to be used as a normative guide to policy.

The idea of the Taylor rule was to set interest rates in such a way as to keep two variables, the inflation rate and the rate of growth of Gross Domestic Product, as close as possible to their target values. Typical targets might be an inflation rate of 2 to 3 per cent, and a real GDP growth rate in line with long-term growth in the labour force and labour productivity, say 3 per cent for a developed country like the US.
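The rule itself fits in one line. The sketch below uses the response coefficients and 2 per cent values from Taylor’s original 1993 formulation, stated in terms of the output gap rather than the growth rate:

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Nominal policy rate implied by Taylor's 1993 rule (all in per cent).

    r_star is the assumed equilibrium real rate, pi_star the inflation
    target; the 0.5 response coefficients are those Taylor proposed.
    """
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# With inflation on target and output at potential, the rule gives the
# neutral rate: 2% real + 2% inflation.
taylor_rate(2.0, 0.0)  # -> 4.0
```

Because the coefficient on inflation exceeds one in total (1 + 0.5), the rule raises the *real* interest rate when inflation rises, which is what makes it stabilizing.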

Within this framework, the essential functions of macroeconomic theory are relatively simple. Complex macroeconomic models can be reduced to simple relationships between one policy instrument (interest rates) and two targets (inflation and growth). Since there is only one instrument for two target variables, it is in general impossible to hit both targets exactly, so the models give rise to a trade-off. Using the single representative agent who typically inhabits a DSGE model, it’s possible to calculate the optimal trade-off, which can be expressed as the range of acceptable variation in inflation rates.
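The one-instrument, two-target problem can be illustrated with a toy linear economy and a quadratic loss function. Every coefficient below is an arbitrary assumption, chosen only to show that when one instrument moves both targets, the least-bad policy misses each of them:

```python
def outcomes(rate):
    """A toy linear economy (coefficients are illustrative assumptions):
    raising the policy rate lowers both inflation and growth."""
    inflation = 4.0 - 0.5 * rate
    growth = 5.0 - 0.4 * rate
    return inflation, growth

def loss(rate, pi_target=2.5, g_target=3.0, weight=1.0):
    """Quadratic loss in squared deviations from the two targets."""
    inflation, growth = outcomes(rate)
    return (inflation - pi_target) ** 2 + weight * (growth - g_target) ** 2

# Grid search over candidate policy rates for the least-bad setting.
best_loss, best_rate = min((loss(r / 100), r / 100) for r in range(0, 1001))

# A rate of 3.0 would hit the inflation target exactly, and 5.0 the growth
# target; the optimum lies strictly between the two and misses both.
```

Changing `weight` traces out the trade-off: a central bank that cares more about growth chooses a lower rate and tolerates higher inflation, which is one way of expressing the “range of acceptable variation” in the text.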

During the Great Moderation, all this seemed to work very well, to the extent that commentators spoke of a ‘Goldilocks economy’, neither too hot, nor too cold, but just right. Even with a tight target range for inflation, between 2 and 3 per cent per year, it seemed possible to stabilise growth and avoid all but the mildest recessions. In these circumstances, Robert Lucas’s claim that the “central problem of depression-prevention has been solved” seemed only reasonable.

