Dynamic Stochastic General Equilibrium: Birth

Macroeconomics began with Keynes.[1] Before Keynes wrote The General Theory of Employment, Interest and Money, economic theory consisted almost entirely of what is now called microeconomics. The difference between the two is commonly put by saying that microeconomics is concerned with individual markets and macroeconomics with the economy as a whole, but that formulation implicitly assumes a view of the world that is at least partly Keynesian.

Long before Keynes, neoclassical economists had both a theory of how prices are determined in individual markets so as to match supply and demand (‘partial equilibrium theory’) and a theory of how all the prices in the economy are jointly determined to produce a ‘general equilibrium’ in which there are no unsold goods or unemployed workers.

The strongest possible version of this claim was presented as Say’s Law, named, somewhat misleadingly, for the classical economist Jean-Baptiste Say. Say’s Law, as developed by later economists such as James Mill, states, in essence, that recessions are impossible since ‘supply creates its own demand’.

To spell this idea out, think of a new entrant to the labour force looking for a job, and therefore adding to the supply of labour. According to the classical view of Say’s Law, this new worker plans to spend the wages he or she earns on goods and services produced by others, so that demand is increased by an exactly equal amount. Similarly, any decision to forgo consumption and save money implies a plan to invest, so planned savings must equal planned investment, and the sum of consumption and savings must always equal total income and therefore can’t be changed by policy. Say’s argument allows the possibility that, if prices are slow to adjust, there might be excess supply in some markets, but implies that, if so, there must be excess demand in some other market. It is this idea that is at the core of general equilibrium theory.

The first formal ‘general equilibrium’ theory was produced by the great French economist Leon Walras in the 1870s. Walras, like many of the pioneers of neoclassical economics, was inclined towards socialist views, but his general equilibrium theory was used by advocates of laissez-faire to promote the view that, even if subject to severe shocks, the economy would always return to full employment unless it was prevented from doing so by government mismanagement or by the actions of unions that might hold wages above the market price of labour.

The point of Keynes’ title was that “general equilibrium” was not general enough. A fully general theory of employment must give an account of equilibrium states where unemployment remains high, with no tendency to return to full employment.

In the simplest version of the Keynesian model, equilibrium can be consistent with sustained unemployment because, unlike in the classical account of Say, the demand associated with workers’ willingness to supply labour is not effective and does not actually influence the decisions of firms. So unsold goods and unemployed labour can co-exist. Such failures of co-ordination can develop in various ways, but in a modern economy, they arise through the operation of the monetary system.

Keynes showed how the standard classical interpretation of Say’s law depended on the assumption that economic transactions could be analysed as if they were part of a barter system, in which goods were exchanged directly for other goods. In an economy where money serves both as the medium of exchange and as a store of value, the analysis works differently. In the standard classical analysis, expenditure, consisting of consumption and investment, must be equal to income for every household and for the economy as a whole, and so, by the arithmetic of accounting, savings (the difference between income and consumption) must equal investment. This equality always holds true, as you can check by looking at any good set of accounts, including the national accounts drawn up for the economy as a whole, originally by Keynes’ students such as the Australian economist Colin Clark (I work in a building named for him).[2]
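
In symbols, using a standard textbook rendering of the identity (not Keynes’ own notation), the argument runs as follows:

```latex
% The accounting identity behind the argument: a standard textbook rendering,
% not Keynes' own notation. Y is income, C consumption, I investment, S saving.
\[
Y \;=\; C + I, \qquad S \;\equiv\; Y - C \quad\Longrightarrow\quad S \;\equiv\; I .
\]
```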

But, as Keynes observed, savings initially take the form of money. If lots of people want to save, and few want to invest, total demand in the economy will fall below the level required for full employment. Actual savings will equal investment, as they must by the arithmetic of accounting, but people’s plans for consumption and investment may not be realised.

A simple and homely illustration is provided by Paul Krugman’s description of a babysitting co-operative in Washington DC, where babysitting credits worked as a kind of money. When members of the group tried to build up their savings by babysitting more and going out less, the result was a collapse of demand. The problem was eventually addressed by the equivalent of monetary expansion, when the co-operative simply issued more credits to everyone, resulting in more demand for babysitting, and a restoration of the original equilibrium.

Keynes’ analysis showed how monetary policy could work, thereby extending the earlier work of theorists such as Irving Fisher. However, the second part of Keynes’ analysis showed that the monetary mechanism by which equilibrium should be restored may not work in the extreme recession conditions referred to as a ‘liquidity trap’. This concept is illustrated by the experience of Japan in the 1990s and by most of the developed world in the recent crisis. Even with interest rates reduced to zero, banks were unwilling to lend, and businesses unwilling to invest.

Keynes’ General Theory provided a justification for policies such as public works programs that had long been advocated, and to a limited extent implemented, as a response to the unemployment created by recessions and depressions (Jean-Baptiste Say himself supported such measures in the early 19th century). More generally, Keynes’ analysis gave rise to a system of macroeconomic management based primarily on the use of fiscal policy to stabilise aggregate demand. During periods of recession, Keynes’ analysis suggested that governments should increase spending and reduce taxes, so as to stimulate demand (the first approach being seen as more reliable since the recipients of tax cuts might just save the money). On the other hand, during booms, governments should run budget surpluses, both to restrain excess demand and to balance the deficits incurred during recessions.

At first, it seemed, both to Keynes’ opponents and to some of his supporters, that Keynesian economics was fundamentally inconsistent with traditional neoclassical economics. But the work of John Hicks and others produced what came to be called the Keynesian-neoclassical synthesis, in which individual markets were analyzed using the traditional approach (now christened ‘microeconomics’) while the determination of aggregate output and employment was the domain of Keynesian macroeconomics.

The synthesis was not particularly satisfactory at a theoretical level, but it had the huge practical merit that it worked, or at least appeared to. In the postwar era, the mixed economy derived from the Keynesian-neoclassical synthesis provided an attractive alternative both to the failed system of laissez-faire reliance on free markets and to the alternative of comprehensive economic planning, represented by the (still rapidly growing) Soviet Union.

Modified to include a theory of market failure, neoclassical microeconomics allowed for some (but only some!) government intervention in particular markets to combat monopolies, finance the provision of public goods and so on. Meanwhile, the tools of Keynesian macroeconomic management could be used to maintain stable full employment without requiring centralised economic planning or controls over individual markets.

[1] That is not to say that no-one paid attention to the economic issues with which macroeconomics is concerned: the business cycle, inflation and unemployment. On the contrary, the early 20th century saw the beginnings of serious empirical research into the business cycle, most notably by the National Bureau of Economic Research, established in the US. And there were some important theoretical contributions, from economists such as Irving Fisher. Most notably, the great economists of the Austrian School, FA von Hayek and Ludwig von Mises, produced an analysis of the business cycle based on fluctuations in credit markets that remains highly relevant today. But neither Fisher nor the Austrians took the final steps needed to create a theory of macroeconomics, and the Austrians in particular recoiled from the implications of their own ideas.

The Phillips curve

Throughout the history of capitalism it has been observed that boom periods tended to be accompanied by inflation (an increase in the general price level), and depressions by deflation. This observation formed a central part of the Keynesian economic system. While Keynes is commonly remembered for his advocacy of budget deficits to stimulate the economy in periods of recession, he also grappled with the problem of how to avoid inflation in the postwar period. In his famous and influential pamphlet, How to Pay for the War, Keynes argued that inflation was the product of an excess of demand over supply, and that the appropriate policy response was for governments to increase taxes and run budget surpluses, to bring demand into line with supply.

In 1958, New Zealand economist A.W. (Bill) Phillips undertook a statistical study which formalised the relationship between unemployment and inflation in the now-famous Phillips curve. The curve related unemployment to the rate of change in money wages, showing that, at very low rates of unemployment, wages tended to grow rapidly. Since wages account for the majority of production costs, rapid wage inflation also implies rapid price inflation. The higher the rate of unemployment, the lower the rate of wage growth. However, because workers generally resist outright cuts in wages, the curve flattens out, with increases in unemployment beyond a certain rate (typically between 5 and 10 per cent) having little further deflationary effect.
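
Purely as an illustration (this hyperbolic form is an expository assumption, not Phillips’ own fitted equation), the relationship just described can be written as:

```latex
% An illustrative Phillips-curve relationship: wage inflation falls as
% unemployment rises and flattens towards a floor. The functional form and the
% floor at zero are expository assumptions, not Phillips' fitted equation.
\[
\frac{\Delta w}{w} \;=\; \max\!\Bigl(-a + \frac{b}{u},\; 0\Bigr), \qquad a, b > 0,
\]
% where w is the money wage and u the unemployment rate: wage growth is rapid
% when u is low, while increases in u beyond b/a have little further effect.
```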

Phillips was famous (or perhaps notorious) for having designed a hydraulic analog computer that could be used to represent the Keynesian economic model (the Faculty of Economics and Politics at Cambridge University still has a working version) and had, as a Japanese prisoner of war, built a miniature radio at great risk. Despite his engineering skills, and his general reputation as an exponent of ‘hydraulic’ Keynesianism, he did not endorse a mechanical interpretation of the curve. He is said to have remarked that “if I had known what they would do with the graph I would never have drawn it.”

The leading American Keynesian economists of the day, Paul Samuelson and Robert Solow, were less cautious. They estimated similar relationships for the US, and drew the conclusion that society faced a trade-off between unemployment and inflation. That is, society could choose between lower inflation and higher unemployment or lower unemployment and higher inflation. This point was spelt out in successive editions of Samuelson’s textbook, simply entitled Economics, which dominated the market from its initial publication in 1948 until the mid-1970s. Given a menu of choices involving different rates of unemployment and inflation, it seemed obvious enough that, since unemployment was the greater evil, a moderate increase in inflation could be socially beneficial.

The interpretation of the Phillips curve as a stable trade-off between unemployment and inflation led to an acceptance of higher rates of inflation as the necessary price of reducing unemployment still further below the historically low levels of the postwar boom. So, whereas previous episodes of inflation had been met with the orthodox Keynesian response of fiscal contraction aimed at reducing aggregate demand, there was no such response to the acceleration of inflation in the late 1960s. The Phillips curve idea appeared to justify expansionary fiscal policy except when unemployment was very low, and embedded the notion of Keynesian economics as a justification for budget deficits under any and all circumstances.

Friedman, Natural Rate and NAIRU…

The Keynesian adoption of the Phillips curve paved the way for Milton Friedman’s greatest intellectual victory, based on a penetrating analysis offered in the late 1960s at a time when inflation, while already problematic, was far below the double-digit rates that would be experienced in the 1970s.

In his famous 1968 Presidential address to the American Economic Association, Friedman argued that the supposed trade-off between unemployment and inflation was the product of illusion. As long as workers failed to recognise that the general rate of inflation was increasing, they would regard wage increases as real improvements in their standard of living and therefore would increase both their supply of labor and their demand for goods. But, Friedman argued, sooner or later expectations of inflation would catch up with reality. If the rate of inflation were held at, say, 5 per cent for several years, workers would build a 5 per cent allowance for inflation into their wage claims, and businesses would raise their own prices by 5 per cent to allow for the increase in anticipated costs.

Once expectations adjusted, Friedman argued, the beneficial effects of inflation would disappear. The rate of unemployment would return to the level consistent with price stability, but inflation would remain high. Interpreted graphically, this meant that the long-term Phillips curve was a vertical line.
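
Friedman’s argument is often summarised in what is now called the expectations-augmented Phillips curve (a standard textbook formulation rather than Friedman’s own notation):

```latex
% The expectations-augmented Phillips curve: a standard textbook summary of
% Friedman's argument, not his own notation.
\[
\pi_t \;=\; \pi_t^{e} \;-\; \gamma\,(u_t - u^{*}), \qquad \gamma > 0,
\]
% where pi_t is inflation, pi_t^e expected inflation, u_t unemployment and u*
% the 'natural rate'. Once expectations catch up (pi_t^e = pi_t), the equation
% forces u_t = u*: the long-run curve is vertical at u*.
```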

Friedman’s analysis gave no specific answer to the question of where unemployment would stabilise. Friedman argued that this could be determined as “the level that would be ground out by the Walrasian system of general equilibrium equations … including market imperfections … the cost of gathering information about job vacancies and labor availabilities, the costs of mobility and so on”. Friedman introduced the unfortunate description of this outcome as the ‘natural rate of unemployment’, although even on his own telling there was nothing natural about it. The same terminology was adopted by Edmund Phelps, who developed a more rigorous version of Friedman’s intuitive argument, for which he was awarded the Economics Nobel in 2006. These days, most economists prefer to use the euphemism ‘NAIRU’, which stands for Non-Accelerating Inflation Rate of Unemployment.

In summary, Friedman and Phelps suggested, the beneficial effects of inflation were the product of illusion on the part of workers and employers. And by implication, they suggested that their Keynesian colleagues were subject to a more sophisticated form of the same illusions.

Within a few years, Friedman’s judgement was vindicated. The Samuelson-Solow interpretation of the Phillips curve as a stable trade-off was soon proved wrong in practice, as inflation rates increased without any corresponding reduction in unemployment, a phenomenon that came to be referred to by the ugly portmanteau word, stagflation (stagnation + inflation). Inflation rates rose steadily, reaching double digits by the early 1970s.

The simplistic Keynesian interpretation of the Phillips curve was discredited forever. No one in the future would suggest that policymakers could exploit a stable trade-off between unemployment and inflation, except under special conditions. But this idea, dating only from the 1960s, was a late development in Keynesian thought, and its failure did not imply that Keynesian macroeconomics itself was unsound. To banish the idea that governments could and should act to stabilise the economy and preserve full employment (or even Friedman’s ‘natural rate’) the critique of Keynesianism had to be pushed further.

The New Classical school

Friedman argued that exploitation of the Phillips curve could not work for long, because expectations of inflation would eventually catch up with reality. Experience seems to support this argument, at least once inflation rates are high enough for people to take notice (anything above 5 per cent seems to do the trick).
But Friedman’s reasonable argument was neither logically watertight nor theoretically elegant enough for the younger generation of free-market economists, who wanted to restore the pre-Keynesian purity of classical macroeconomics, and became known as the New Classical school. Their key idea was to replace Friedman’s adaptive model of expectations with what they called ‘rational expectations’ (a term coined much earlier, and in a microeconomic context, by John F. Muth). Although Muth had been cautious about possible misinterpretation of the term, his successors showed no such caution. Having adopted Muth’s characterization of rational expectations as “those that agree with the predictions of the relevant economic model”, and defined the relevant economic model as their own, New Classical economists happily traded on the implicit assumption that any consumer whose expectations did not match those of the model must be irrational.
One of the first and most extreme applications of the rational expectations idea was put forward in 1974 by Robert Barro, then an up-and-coming young professor at the University of Chicago, who now makes regular appearances, not only in academic journals and lists of likely candidates for the Nobel Prize in Economics, but also in the Opinion pages of the Wall Street Journal.
Barro drew on the work of the first great formal theorist in economics, David Ricardo. Ricardo, a successful speculator, financier and member of the House of Commons, developed the ideas presented in Adam Smith’s Wealth of Nations into a rigorous body of analysis. He observed that, if governments borrow money, say to finance wartime expenditures, their citizens should anticipate that taxes will eventually have to be increased to repay the debt. If they were perfectly rational, Ricardo noted, they would increase their savings, by an amount equal to the additional government debt, in anticipation of the higher tax burden. So it should make no difference whether the war is financed by current taxation or by debt. Having observed this theoretical equivalence, Ricardo immediately returned to reality with the observation that “the people who paid the taxes never so estimate them, and therefore do not manage their private affairs accordingly”.
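
A minimal two-period sketch of the equivalence argument (textbook notation, not Ricardo’s or Barro’s own): suppose spending G is financed either by a tax levied now or by borrowing repaid with interest next period.

```latex
% A two-period sketch of Ricardian equivalence, in textbook notation.
\[
\text{Tax finance: } T_1 = G,\; T_2 = 0
\qquad\text{versus}\qquad
\text{Debt finance: } T_1 = 0,\; T_2 = (1+r)\,G .
\]
% In both cases the present value of taxes is G, so a household that faces the
% same interest rate r as the government has the same lifetime budget constraint
\[
c_1 + \frac{c_2}{1+r} \;=\; (y_1 - T_1) + \frac{y_2 - T_2}{1+r} \;=\; y_1 + \frac{y_2}{1+r} - G ,
\]
% and, in theory, its consumption plan should not depend on the financing choice.
```
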
Barro’s big contribution was to focus on theory rather than reality and suggest that what he called ‘Ricardian equivalence’ actually holds in practice. Barro’s claim was never widely accepted, even among opponents of Keynesianism.
Barro’s claim was made without regard to empirical evidence. Econometric testing strongly rejected the ‘Ricardian equivalence’ hypothesis, that current borrowing by governments would be fully offset by household saving. Some tests suggested that borrowing might moderately increase household saving, but others showed the exact opposite. Critics pointed out numerous theoretical deficiencies in addition to Barro’s reliance on ultra-rational expectations. For example, the argument assumes that households face the same interest rate as governments, which is obviously untrue.
Nevertheless, despite failing to gain significant acceptance, the Ricardian Equivalence hypothesis had a significant effect on the debate within the economics profession. Extreme assumptions about the rationality of consumer decisions, that would once have been dismissed out of hand, were now treated as the starting point for analysis and debate. In this way, Barro paved the way for what became known as the Rational Expectations revolution in macroeconomics.

Lucas critique and rational expectations

The central idea of rational expectations goes back to the early 1960s. Agricultural economists at the time often modelled price cycles in commodity markets as the outcome of lags in the production process. The idea was that a high price for, say, corn, would occur in some season because of a drought or a temporary increase in demand. Farmers in one season would observe the high price and plant a lot of corn for the next season. The result would be a large crop and a low price. Farmers would therefore plant less corn for the following season and the price would go up again. Eventually, this series of reactions and counter-reactions would bring the price back to the equilibrium level where supply (the amount of corn farmers would like to produce and sell at that price) equalled demand. As represented on the supply-and-demand diagrams economists like to draw, the path of adjustment resembled a cobweb, and so the model became known as the ‘cobweb model’.
The economist John Muth saw a problem. In the cobweb model, farmers expect a high price this season to be maintained next season, and so produce high output. But this is a self-defeating prophecy, since the high output means that the price next season will be low. Why, Muth asked, would farmers keep on making such a simple, and costly, mistake? If farmers based their expectations on their own experience, they would not expect high prices to be maintained. But what, then, would they expect? An expectation that high prices are followed by low prices, as occurs in the cobweb model, would be similarly self-defeating.
Muth’s answer was both simple and ingenious. The requirement that the price expected by farmers should equal the expected price generated by the model can be incorporated within the model itself, and this requirement closes the circle in which expectations generate prices and vice versa. Muth showed that, with this requirement, the cobweb model could not work. As long as the ‘shocks’ that raise or lower prices in one season are not correlated with the shocks in the next season, the only ‘rational’ expectation for farmers is that the price next season will be equal to the ‘average’ equilibrium price that the model generates in the absence of such shocks. If farmers expect this, they will produce, on average, the supply associated with that price, and, on average, that price will in fact arise.
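
To see the self-defeating character of naive expectations concretely, here is a minimal simulation sketch of a linear cobweb market. The demand and supply curves and the parameter values are illustrative assumptions, not anything taken from Muth’s paper: farmers with naive expectations keep generating the price oscillation, while farmers who expect the model’s own average equilibrium price do not.

```python
# A minimal sketch of Muth's point, using an illustrative linear cobweb model.
# The demand/supply curves and parameter values are expository assumptions.
import random

a, b = 10.0, 1.0    # inverse demand: price = a - b * quantity
c, d = 1.0, 0.8     # supply: quantity planted = c + d * expected price
p_star = (a - b * c) / (1 + b * d)   # equilibrium price (supply equals demand)

def simulate(rational, periods=8, seed=1):
    random.seed(seed)
    price = p_star + 3.0    # start from a season in which a shock raised the price
    path = []
    for _ in range(periods):
        # Naive (cobweb) farmers expect last season's price to persist;
        # 'rational' farmers expect the model's own average equilibrium price.
        expected = p_star if rational else price
        quantity = c + d * expected             # planting decision
        shock = random.gauss(0.0, 0.2)          # uncorrelated seasonal shock
        price = a - b * quantity + shock        # market-clearing price for the crop
        path.append(round(price, 2))
    return path

print("naive:   ", simulate(rational=False))   # oscillates above and below p_star
print("rational:", simulate(rational=True))    # stays near p_star, up to the shock
```
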
In the early 1970s, Robert E. Lucas took Muth’s idea and applied it to the macroeconomic debate about inflationary expectations. Friedman had convinced most economists that, if high rates of inflation are maintained long enough, companies and workers will come to expect it and build this expectation into price-setting decisions and wage demands. He suggested a simple adjustment process in which expectations gradually catch up with a change in the inflation rate. That process was sufficient to kill off the idea of a stable trade-off between unemployment and inflation, and to explain how continued high inflation, initially associated with low unemployment, could turn into the ‘stagflation’ of the 1970s.
In Friedman’s ‘adaptive expectations’ model, there was a lag between an increase in the rate of inflation and the adjustment of inflationary expectations. That lag left open the possibility that governments could manipulate the Phillips curve trade-off, at least in the short run. Lucas used the idea of rational expectations to close off that possibility. In a rational expectations model, workers and businesses (commonly referred to in this literature as ‘economic agents’) make the best possible estimate of future inflation rates, and therefore cannot be fooled by government policy. Lucas’ ideas were developed by Tom Sargent and Neil Wallace into the ‘Policy Ineffectiveness Proposition’.
Lucas developed a more general critique of economic policymaking, using the case of the Phillips curve as an example. His point was that there was no reason to suppose that an empirical relationship observed under one set of policies, like the Phillips curve relationship between unemployment and inflation, would be sustained in the event of a change in policies, which would, in general, imply a change in expectations. The Lucas critique works with a range of assumptions about expectations, including Friedman’s adaptive expectations, but it is most naturally associated with Lucas’ favored rational expectations model. Lucas argued that the only reliable empirical relationships were those derived from the ‘deep’ microeconomic structure of models, in which economic outcomes are the aggregate of decisions by rational agents, making decisions aimed at pursuing their own goals (maximising their utility, in the jargon of economists).
The solution, it seemed, was obvious, though not simple. The Keynesian separation between macroeconomic analysis, based on observed aggregate relationships, and microeconomic analysis must be abandoned. Instead, macroeconomics must be built up from scratch, on the microeconomic foundations of rational choice and market equilibrium.

More to come here

Real Business Cycle theory

Real Business Cycle theory emerged in the early 1980s as a variant of New Classical Economics. The big papers were by Long & Plosser and Kydland & Prescott. The RBC literature introduced two big innovations, one theoretical and one technical.
In theoretical terms, relative to the standard New Classical story that the economy naturally moves rapidly back towards full employment equilibrium in response to any shock, RBC advocates recognised the existence of fluctuations in aggregate output and employment but argued that such fluctuations represent a socially optimal equilibrium response to exogenous shocks such as changes in productivity, the terms of trade, or workers' preference for leisure.
In technical terms, RBC models were typically estimated using a calibration procedure in which the parameters of the model were adjusted to give the best possible approximation to the observed mean and variance of relevant economic variables and the correlations between them (sometimes referred to, in the jargon, as 'stylised facts'). This procedure, closely associated with a set of statistical techniques referred to as the Generalized Method of Moments, differs from the standard approach pioneered by the Cowles Commission in which the parameters of a model are estimated on the basis of a criterion such as minimisation of the sum of squared errors (differences between predicted and observed values in a given data set).
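
To make the distinction concrete, here is a minimal sketch of calibration by moment matching; the AR(1) 'model' and the target moments below are purely illustrative assumptions, not a real RBC model.

```python
# A minimal sketch of calibration by matching moments ('stylised facts'),
# in contrast to least-squares estimation. The AR(1) 'model' and the target
# moments are expository assumptions, not a real RBC model.
import itertools

# Hypothetical data moments the calibrated model should reproduce
target_variance = 2.5     # variance of the (detrended) series
target_autocorr = 0.8     # first-order autocorrelation

def model_moments(rho, sigma):
    """Analytical moments of the AR(1) process x_t = rho * x_{t-1} + sigma * eps_t."""
    variance = sigma ** 2 / (1.0 - rho ** 2)
    autocorr = rho
    return variance, autocorr

# Grid search for the parameter pair whose implied moments best match the targets
best = None
rhos = [i / 100 for i in range(1, 100)]
sigmas = [i / 100 for i in range(1, 200)]
for rho, sigma in itertools.product(rhos, sigmas):
    var, ac = model_moments(rho, sigma)
    loss = (var - target_variance) ** 2 + (ac - target_autocorr) ** 2
    if best is None or loss < best[0]:
        best = (loss, rho, sigma)

print("calibrated rho = %.2f, sigma = %.2f" % (best[1], best[2]))
# With these targets the search settles near rho = 0.80, sigma = 0.95
```
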
There's no necessary link between these two innovations and there gradually emerged two streams within the RBC literature. In one stream were those concerned to preserve the theoretical claim that the observed business cycle is an optimal outcome, even in the face of data that consistently suggested the opposite. In the other stream were those who adopted the modelling approach, but were willing to introduce non-classical tweaks to the model (imperfect information/competition and so on) to get a better fit to the stylised facts.
The big exception that was conceded by most RBC theorists at the outset was the Great Depression. The implied RBC analysis, that the state of scientific knowledge had suddenly gone backwards by 30 per cent, or that workers throughout the world had suddenly succumbed to an epidemic of laziness, was the subject of some well-deserved derision from Keynesians. A couple of quotes I've pinched from a survey by Luca Pensieroso:
<blockquote>“the Great Depression […] remains a formidable barrier to a completely unbending application of the view that business cycles are all alike.” (Lucas (1980), pg. 273.) “If the Depression continues, in some respects, to defy explanation by existing economic analysis (as I believe it does), perhaps it is gradually succumbing under the Law of Large Numbers.” (Lucas (1980), pg. 284)</blockquote>
But towards the end of the 1990s, at a time when RBC theory had in any case lost the battle for general acceptance, some of the more hardline RBC advocates tried to tackle the Depression, albeit at the cost of ignoring its most salient features. First, they ignored the fact that the Depression was a global event, adopting a single-country focus on the US. Then, they downplayed the huge downturn in output between 1929 and 1933, focusing instead on the slowness of the subsequent recovery, which they blamed, unsurprisingly, on FDR and the New Deal. The key paper here is by Cole and Ohanian, who put particular emphasis on the National Industrial Recovery Act.

There are plenty of difficulties with the critique of the New Deal, and these have been argued at length by <a href="http://edgeofthewest.wordpress.com/2009/02/02/the-pony-chokers/">Eric Rauchway</a> among others. But the real problem is that RBC can't possibly explain the Depression as most economists understand it, that is, the crisis and collapse of the global economic system in the years after 1929. Instead, Cole and Ohanian want to change the subject. The whole exercise is rather like an account of the causes of WWII that starts at Yalta.

The failure of RBC is brought into sharp relief by the current global crisis. Not even the most ardent RBC supporter has been game to suggest that the crisis is caused by technological shocks or changes in tastes, and the suggestion that it was all the fault of a minor piece of anti-redlining law (the Community Reinvestment Act) has been abandoned as the speculative excesses and outright corruption of the central institutions of Wall Street have come to light.

While some useful insights from New Keynesian macro will remain relevant to policy in future periods of relative stability, it's hard to see much being salvaged from the theoretical program of RBC. On the other hand, it has given us some potentially useful statistical techniques. The idea that parameters of macroeconomic models may be selected by calibration rather than by statistical estimation has an appeal that does not depend on accepting the theoretical commitments of the RBC school.

New Keynesian macro

In the wake of their intellectual and political defeats in the 1970s, mainstream Keynesian economists conceded both the long-run validity of Friedman’s critique of the Phillips curve, and the need, as argued by Lucas, for rigorous microeconomic foundations. “New Keynesian economics” was their response to the demand, from monetarist and new classical critics, for the provision of a microeconomic foundation for Keynesian macroeconomics.

The research task was seen as one of identifying minimal deviations from the standard microeconomic assumptions which yield Keynesian macroeconomic conclusions, such as the possibility of significant welfare benefits from macroeconomic stabilization. A classic example was the ‘menu costs’ argument produced by George Akerlof, another Nobel Prize winner. Akerlof sought to motivate the wage and price “stickiness” that characterised new Keynesian models by arguing that, under conditions of imperfect competition, firms might gain relatively little from adjusting their prices even though the economy as a whole would benefit substantially.

Olivier Blanchard summarises the standard New Keynesian approach with the following, literally poetic, metaphor:

<blockquote>A macroeconomic article today often follows strict, haiku-like, rules: It starts from a general equilibrium structure, in which individuals maximize the expected present value of utility, firms maximize their value, and markets clear. Then, it introduces a twist, be it an imperfection or the closing of a particular set of markets, and works out the general equilibrium implications. It then performs a numerical simulation, based on calibration, showing that the model performs well. It ends with a welfare assessment.</blockquote>

DSGE

Eventually, the New Keynesian and RBC streams of micro-based macroeconomics began to merge. The repeated empirical failures of standard RBC models led many users of the empirical techniques pioneered by Prescott and Lucas to incorporate non-classical features like monopoly and information asymmetries. These “RBC-lite” economists sought, like the purists, to produce calibrated dynamic models that matched the “stylised facts” of observed business cycles, but quietly abandoned the goal of explaining recessions and depressions as optimal adjustments to (largely hypothetical) technological shocks.
This stream of RBC literature <a href="http://www.econosseur.com/2009/05/leamer-and-the-state-of-macro.html">converged with New Keynesianism</a>, which also uses non-classical tweaks to standard general equilibrium assumptions with the aim of fitting the macro data.
The resulting merger produced a common approach with the unwieldy title of Dynamic Stochastic General Equilibrium (DSGE) Modelling. Although there are a variety of DSGE models, they share some family features. As the “General Equilibrium” part of the name indicates, they take as their starting point the general equilibrium models developed in the 1950s, by Kenneth Arrow and Gerard Debreu, which showed how an equilibrium set of prices could be derived from the interaction of households, rationally optimising their work, leisure and consumption choices, and firms, maximizing their profits in competitive markets. Commonly, though not invariably, it was assumed that everyone in the economy had the same preferences, and the same relative endowments of capital, labour skills and so on, with the implication that it was sufficient to model the decisions of a single ‘representative agent’.
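
As a stylised example of what such a representative-agent problem looks like (the log utility and Cobb-Douglas technology here are expository assumptions, not the specification of any particular published model):

```latex
% A stylised representative-agent problem of the kind underlying DSGE models.
% Log utility and Cobb-Douglas production are expository assumptions.
\[
\max_{\{c_t,\,n_t,\,k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}
\Bigl[ \ln c_t \;-\; \chi\,\frac{n_t^{1+\eta}}{1+\eta} \Bigr]
\quad\text{subject to}\quad
c_t + k_{t+1} \;=\; A_t\, k_t^{\alpha} n_t^{1-\alpha} + (1-\delta)\,k_t ,
\]
% where c_t is consumption, n_t hours worked, k_t capital, A_t a stochastic
% productivity shock, beta the discount factor and delta the depreciation rate.
```
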
The classic general equilibrium analysis of Arrow and Debreu dealt with the (admittedly unrealistic) case where there existed complete, perfectly competitive markets for every possible asset and commodity, including ‘state-contingent’ financial assets which allow agents to insure against, or bet on, every possible state of the aggregate economy. In such a model, as in the early RBC models, recessions are effectively impossible: any variation in aggregate output and employment is simply an optimal response to changes in technology, preferences or external world markets. DSGE models modified these assumptions by allowing for the possibility that wages and prices might be slow to adjust, by allowing for the possibility of imbalances between supply and demand and so on, thereby enabling them to reproduce obvious features of the real world, such as recessions.
But, given the requirements for rigorous microeconomic foundations, this process could only be taken a limited distance. It was intellectually challenging, but appropriate within the rules of the game, to model individuals who were not perfectly rational, and markets that were incomplete or imperfectly competitive. The equilibrium conditions derived from these modifications could be compared to those derived from the benchmark case of perfectly competitive general equilibrium.
But such approaches don’t allow us to consider a world where people display multiple and substantial violations of the rationality assumptions of microeconomic theory and where markets depend not only on prices, preferences and profits but on complicated and poorly understood phenomena like trust and perceived fairness. As Akerlof and Shiller observe

It was still possible to discern the intellectual origins of alternative DSGE models in the New Keynesian or RBC schools. Modellers with their roots in the RBC school typically incorporated just enough deviations from competitive optimality to match the characteristics of the macroeconomic data series they were modelling, and preferred to focus on deviations due to government intervention rather than to monopoly power or other forms of market failure. New Keynesian modellers focused more attention on imperfect competition and were keen to stress the potential for the macro-economy to deviate from the optimal level of employment in the short term, and the possibility that an active monetary policy could produce improved outcomes.

The saltwater-freshwater distinction continued to be used to distinguish the two schools. But such terminology suggests a deeper divide between competing schools of thought than actually prevailed during the false calm of the Great Moderation. The differences between the two groups were less prominent, in public at least, than their points of agreement. The freshwater school had backed away from extreme New Classical views after the failures of the early 1980s, while the distance from traditional Keynesian views to the New Keynesian position was summed up by Lawrence Summers’ observation that ‘We are now all Friedmanites’, quoted at the beginning of this chapter. And even these limited differences were tending to blur over time, with many macroeconomists, particularly those involved in formulating and implementing policy, shifting to an in-between position that might best be described as ‘brackish’.
