Streetwise Professor

July 21, 2009

Don’t Drink the (Salt) Water!

Filed under: Economics,Financial crisis,Politics — The Professor @ 10:00 pm

The Economist recently ran an article describing the soul-searching going on in academic economics, especially macroeconomics. For the most part, I don’t have a dog in this fight, because when in graduate school so many moons ago, like many of the students described in the Economist article, I decided that macro was too abstract and detached from real world institutions for a relatively literal-minded person like me (literal-minded for an academic, anyways). Taking Robert Lucas’s macro sequence was an amazing experience—an intellectual tour de force—but I knew that despite my original intentions to go into macro, it was not for me.

The gravamen of the article is that Lucas-style dynamic stochastic general equilibrium models have failed to predict, or even explain, the current recession. With regard to prediction, that's a red herring, because such models never aspired to that goal, and indeed were the basis for conclusions that such predictions were nigh on impossible.

What’s galling about the article is the (somewhat implicit) endorsement of the Krugman-DeLong view that we need to atone for the sins of the New Classical synthesis by embracing the Old Time Religion of Keynesianism. Give me a freaking break. Keynesianism fell out of favor the first time around because of ITS abysmal failure to explain economic developments in the 1970s, most notably stagflation. And its applicability to the current situation is incredibly dubious as well. The article criticizes DSGE models for failing to incorporate a financial sector and financial institutions. Like Keynesian models (new or old) do? And, as the article notes, the Keynesian theory goes like this:

In his scheme, investment was governed by the animal spirits of entrepreneurs, facing an imponderable future. The same uncertainty gave savers a reason to hoard their wealth in liquid assets, like money, rather than committing it to new capital projects. This liquidity-preference, as Keynes called it, governed the price of financial securities and hence the rate of interest. If animal spirits flagged or liquidity preference surged, the pace of investment would falter, with no obvious market force to restore it. Demand would fall short of supply, leaving willing workers on the shelf.

This bears no relationship whatsoever to what transpired 2007-present. There was no exogenous surge in liquidity preference; the race for liquidity was a response to another shock—the decline in real estate prices—that called into question the safety of financial institutions invested heavily (directly or indirectly) in real estate. The liquidity panic was an effect, which had knock-on effects: it was not the Prime Mover.

What's more, the increase in savings that has occurred in recent months undermines one of the rationales for a fiscal stimulus—namely, that there are large numbers of liquidity-constrained households that would consume more but for this constraint. The implication of the constraint is that people will draw savings down to zero to consume as much as possible; they would borrow to consume more, but cannot, and government spending can loosen that constraint, thereby permitting an increase in consumption. How one can square a liquidity-preference theory of economic slumps, in which consumers spontaneously decide to save more and consume less, with a policy recommendation based on the assumption that individuals would like to consume more but cannot due to liquidity constraints, escapes me at the moment. Krugman et al. blithely blab away without even recognizing the contradiction.

And, for the most part, even modern economists who consider themselves Keynesian adopt much of the Friedman-Lucas-Prescott et al. critique of the Old Time Keynesian Religion that Krugman and DeLong are preaching. Specifically, they discard the silly Keynesian consumption function demolished by Friedman and adopt aspects of the permanent income hypothesis. They assume individuals are forward looking, and make forecasts about policy in a pretty rational way. They assume that individuals are not subject to fiscal illusion, and thereby recognize that government spending increases must be financed by future taxes. They depart from the New Classical approach by assuming wage and price rigidities, but allow a more reasonable, expectations-based adjustment mechanism. The rigidities lead to some broadly Keynesian features, but even in the presence of these rigidities, many aspects of the Friedman et al. critique survive.

This is illustrated very well in a paper by Cogan, Cwik, Taylor, and Wieland, which uses a New Keynesian model to estimate the effects of the 2009 “stimulus” package. A crude Keynesian model predicts very strong multiplier effects; as Cogan et al. note, moreover, this conclusion depends on the insane assumption that the central bank can keep the nominal interest rate at zero forever without sparking a massive inflation. As if.

A New Keynesian model that incorporates wealth effects from taxation, forward looking households, nominal rigidities with a more realistic adjustment mechanism, and a monetary policy that does not result in a hyperinflation (fingers crossed on that one) generates very small multipliers. Indeed, more realistic characterizations of the nature of the fiscal stimulus (including characterizations based explicitly on the 2009 stimulus package) generate NEGATIVE multipliers fairly soon in the future—and can generate negative multipliers immediately. This means that the “stimulus” can actually reduce output. Great.
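The arithmetic behind the shrinking multiplier can be sketched in a few lines. To be clear, this is not the Cogan et al. model, and all parameters are hypothetical: it simply contrasts the textbook multiplier, 1/(1 − MPC), with a crude adjustment in which forward-looking households save a fraction of each round of induced income against the anticipated future tax bill, which lowers the effective MPC and collapses the multiplier toward one. A genuinely negative multiplier, as in the paper, requires the full model's crowding-out of investment and monetary-policy response, which this toy calculation omits.

```python
def crude_multiplier(mpc):
    """Textbook closed-economy spending multiplier: 1 / (1 - mpc)."""
    return 1.0 / (1.0 - mpc)

def forward_looking_multiplier(mpc, tax_offset):
    """Same geometric series, but each round of induced spending is
    reduced by the fraction `tax_offset` that households save against
    anticipated future taxes (a crude stand-in for the wealth effect)."""
    effective_mpc = mpc * (1.0 - tax_offset)
    return 1.0 / (1.0 - effective_mpc)

if __name__ == "__main__":
    mpc = 0.6  # hypothetical marginal propensity to consume
    print(round(crude_multiplier(mpc), 2))  # textbook value: 1/0.4 = 2.5
    # As households offset more of the spending, the multiplier shrinks
    # toward 1 (the direct government purchase with no induced spending).
    for tax_offset in (0.0, 0.5, 0.9):
        print(tax_offset, round(forward_looking_multiplier(mpc, tax_offset), 2))
```

The point of the sketch is only that the multiplier is extremely sensitive to what households are assumed to do with induced income; once forward-looking saving is admitted, the “free lunch” in the crude calculation largely disappears.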

Moreover, the increase in GDP in the analysis comes entirely from the increase in government expenditure. Indeed, the increase in government expenditure causes consumption and investment to fall because this expenditure crowds out private consumption and investment through the wealth effects; consumers anticipate the higher tax burden and cut back their expenditures accordingly, and businesses anticipate the lower consumption and cut back investment. Great again.

And as Cogan et al. note, this result, as awful as it is, is based on some assumptions very favorable to the stimulus. Most notably, it assumes that $1 of government expenditure provides $1 worth of goods. Not likely, given the nature of the spending. I’d rather have $1 in private consumption than $2 in government spending, given the stupid stuff the government actually buys with my money. But the Cogan et al. analysis assumes that the $1 of government stuff is worth $1 to you and me.

Moreover, and perhaps more importantly, the Cogan experiment assumes that the taxes used to finance the higher spending are not distorting. We know taxes distort and thereby reduce output. Thus, the already paltry positive effects of the stimulus implied by the model are likely to be smaller in practice.

All in all, even a Keynesian-influenced model makes predictions very similar to what I argued in my jeremiads against the stimulus. Wasteful government consumption crowds out beneficial private consumption, and when you take into account the effect of distorting taxes, the effect of the stimulus is likely to be baleful indeed.

The Bacchanal of future spending contemplated by Congress and the administration will, in my view, be even more disastrous. I would surmise (based on the above) that New Keynesian models would come to similar conclusions.

So, I stand by my conclusion that the “saltwater” economists like Krugman and DeLong are all wet. And remember, if you drink saltwater, you go crazy.

Does that leave the New Classical folks blameless? No. But I think the diagnosis of The Economist and the saltwater types misses the problem.

I remember distinctly Lucas’s lectures on business cycles. He started out his discussion by referring to the seminal empirical work on cycles done by Wesley Mitchell for the NBER. Lucas argued that virtually every cycle that Mitchell documented exhibited a very similar pattern. From this, he (and others) concluded that there must be a common cause to these cycles. This motivated a search for the cause. The Friedman and Schwartz work suggested that monetary fluctuations were a prime suspect. Real business cycle theory went in another direction. But, both were predicated on this view that the “business cycle” is a unitary thing, with a single, primary cause.

Particularly in retrospect, this reasoning is suspect. Perhaps it is the case that most fluctuations in output are traceable to a single cause, but it is possible that there can be unique circumstances that also lead to large output changes. To me, the recent experience, in which a housing-price runup and subsequent collapse triggered a financial crisis in large part because financial institutions were heavily invested in assets strongly tied to housing prices, seems a pretty unique event.

In other words, economists of all stripes—New Classical types and Keynesians New and Old—arguably vastly overstated the temporal homogeneity and regularity of the economy. They constructed models that were conceivably Theories of a Lot, but which weren’t Theories of Everything. Unfortunately, the decent empirical success of these models in characterizing the historical record, especially the post-War experience, provided undue confidence in the universality of the models. But with models, as with investments, the old caution that Past Performance is No Guarantee of Future Performance holds. Successful backcasting, and even somewhat successful forecasting, does not imply that unprecedented outside-the-model events will not occur in the future.

This represents a challenge to economics as a science, and one that is likely to be insurmountable. The evolution of vast, complex, unplanned, interconnected systems of (mostly) maximizing agents operating in conditions of sharply limited knowledge will inevitably produce unique historic contingencies. Sure, there are regularities. Sure, some features may repeat over time (e.g., technology shocks that induce business cycles, or monetary policy changes), and these will lead to many fluctuations that exhibit some similarities over time. But from time to time, something completely unique will happen; some unique confluence of contingencies will lead to economic changes that confound the models. The primary regularity is that irregularities periodically occur, and some of these irregularities can be quite extreme.

This does not mean that the entire modeling exercise was a waste. All it means is that we should have relatively humble expectations about economic models in general, and macro models in particular. Stuff happens, and no model will ever be capable of capturing all the stuff that could happen.

Methodologically, this means that although I am sympathetic to the logical rigor that formalization brings, a diversified portfolio of approaches is needed. Less formal approaches, and militantly anti-formal ones, like Austrian Economics, can produce important insights. In particular, Austrian-influenced approaches that emphasize the implications of radically constrained knowledge and innovation in complicated systems can provide a useful framework for coming to grips with the unexpected, and the unmodelable (to coin a word). And history—and the study thereof, and narrative history in particular—matters too. Not because the past provides all that reliable a guide to the future, but because studying it makes one more aware of the role of contingency, and the confluence of unique circumstances resulting in unique effects in systems where purposeful action by billions of people leads to outcomes intended by no one.

In other words, when casting back for great economists whose work years ago can better help us understand the present and future, I look not to Keynes, but to his rival, Hayek.

4 Comments

  1. Economic networks possess a lot of the properties of “random graphs”. I am talking about the Erdos-Renyi theory – it is an offshoot of Erdos’s attempts to use probabilistic methods in number theory. Not sure if you have seen this paper before:
    http://www.tufts.edu/~yioannid/IoannidesRandom_Graph_Soc_Net3_MIT.pdf

    I believe that parsimonious, yet more realistic economic models can be built using random network theory and industrial organization. When trying to teach myself economics, I faced a similar situation. I was more attracted to Tirole’s book than many of the macro books out there.

    On a related note, BCG seems to see some offshoots using their economic indicators.
    http://www.bcg.com/impact_expertise/publications/files/BCG_Collateral_Damage_Quick-o-nomics_Update_July_2009.pdf

    I found their analyses of companies which did well during the 1931 depression rather interesting. The industry leaders by and large did very well compared to others. The companies which bravely upped their R&D spending and brought out product lines in line with the depression grew their market shares considerably.

    Getting back to the Goldman story – how does one go about gathering VaR breach data? It would be interesting to compare the conditional VaR upon breach for the banks after the recession, as you had pointed out. It would be a good measure of the risk-taking tendencies of these firms. I believe the crisis is a good time for the risk management team to be taken more seriously by top management, and more importantly, efforts should be made by the upper-level guys to understand the models used and their limitations – instead of just demanding and obtaining numbers they don’t comprehend.

    Comment by Surya — July 22, 2009 @ 2:07 am

  2. I should have said “green shoots” in place of offshoots.

    Comment by Surya — July 22, 2009 @ 2:07 am

  3. Morgan Stanley, formerly Goldman’s chief rival, reported a $159M loss.
    http://blogs.wsj.com/deals/2009/07/22/morgan-stanley-v-goldman-breaking-down-the-earnings-reports/

    “Morgan Stanley admits it needs to step up its trading risk after seeing competitors reap such big rewards. Morgan Stanley’s value-at-risk–an estimate of the probability of losses on trading positions–was $173 million in the second quarter. Goldman’s VaR was $245 million, the highest since the firm went public.”

    So, is Morgan jumping on the gambling bandwagon as well?

    Comment by Surya — July 22, 2009 @ 10:57 am

  4. Yes, Surya–see my latest post. Groan.

    Comment by The Professor — July 22, 2009 @ 9:01 pm
