Sunday, January 15, 2017

What's a Macro Model Good For?

What's a macro model? It's a question, and an answer. If it's a good model, the question is well-defined, and the model gives a good answer. Olivier Blanchard has been pondering how we ask questions of models, and the answers we're getting, and he thinks it's useful to divide our models into two classes, each for answering different questions.

First, there are "theory models,"
...aimed at clarifying theoretical issues within a general equilibrium setting. Models in this class should build on a core analytical frame and have a tight theoretical structure. They should be used to think, for example, about the effects of higher required capital ratios for banks, or the effects of public debt management, or the effects of particular forms of unconventional monetary policy. The core frame should be one that is widely accepted as a starting point and that can accommodate additional distortions. In short, it should facilitate the debate among macro theorists.
At the extreme, "theory models" are purist exercises that, for example, Neil Wallace would approve of. Neil has spent his career working with tight, simple economic models. These are models that are amenable to pencil-and-paper methods. Results are easily replicable, and the models are many steps removed from actual data - though to be at all interesting, they are designed to capture real economic phenomena. Neil has worked with fundamental models of monetary exchange - Samuelson's overlapping generations model, and the Kiyotaki-Wright (JPE 1989) model. He also approves of the Diamond-Dybvig (1983) model of banking. These models give us some insight into why and how we use money, what banks do, and (perhaps) why we have financial crises, but no one is going to estimate the parameters in such models, use them in calibration exercises, or use them at an FOMC meeting to argue why a 25 basis point increase in the fed funds rate target is better than a 50 basis point increase.
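To give a flavor of what a pencil-and-paper theory model looks like, here is a bare-bones stationary version of Samuelson's overlapping generations model of money - my own stripped-down notation, not the full model, assuming no discounting and no population growth:

```latex
% Each date t, a new two-period-lived generation is born, endowed with y when
% young and nothing when old, with preferences u(c^y_t) + u(c^o_{t+1}).
% A fixed stock of fiat money M is initially held by the old.
\begin{align}
  &\text{budget constraints:} && c^y_t + \frac{m_t}{p_t} = y, \qquad
     c^o_{t+1} = \frac{m_t}{p_{t+1}}, \\
  &\text{first-order condition:} && \frac{u'(c^y_t)}{u'(c^o_{t+1})} = \frac{p_t}{p_{t+1}}, \\
  &\text{market clearing:} && m_t = M, \qquad c^y_t + c^o_t = y .
\end{align}
```

In a stationary monetary equilibrium the price level is constant, the young and old split the endowment, and intrinsically useless money is valued because it is the only way for the young to move resources to old age. That's the kind of insight such a model delivers - and also the kind of thing you couldn't estimate or take to an FOMC meeting.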

But Neil's tastes - as is well known - are extreme. In general, what I think Blanchard means by "theory model" is something we can write up and publish in a good, mainstream economics journal. In modern macro, that's a very broad class of work, including pure theory (no quantitative work), models with estimation (either classical or Bayesian), calibrated models, or some mix. These models are fit to increasingly sophisticated data.

Where I would depart from Blanchard is in asking that theory models have a "core frame...that is widely accepted..." It's of course useful that economists speak a common language that is easily translatable for lay people, but pathbreaking research is by definition not widely accepted. We want to make plenty of allowances for rule-breaking. That said, there are many people who break rules and write crap.

The second class of macro models, according to Blanchard, is the set of "policy models,"
...aimed at analyzing actual macroeconomic policy issues. Models in this class should fit the main characteristics of the data, including dynamics, and allow for policy analysis and counterfactuals. They should be used to think, for example, about the quantitative effects of a slowdown in China on the United States, or the effects of a US fiscal expansion on emerging markets.
This is the class of models that we would use to evaluate a particular policy option, write a memo, and present it at the FOMC meeting. Such models are not what PhD students in economics work on, and that was already the case 36 years ago, when Chris Sims wrote "Macroeconomics and Reality":
...though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models.
The "large models" Sims had in mind are the macroeconometric models constructed by Lawrence Klein and others, beginning primarily in the 1960s. The prime example of such models is the FRB/MIT/Penn model, which reflected in part the work of Klein, Ando, and Modigliani, among others, including (I'm sure) many PhD students. There was indeed a time when a satisfactory PhD dissertation in economics could be an estimation of the consumption sector of the FRB/MIT/Penn model.

Old-fashioned large-scale macroeconometric models borrowed their basic structure from static IS/LM models. There were equations for the consumption, investment, government, and foreign sectors. There was money demand and money supply. There were prices and wages. Typically, such models included hundreds of equations, so the job of estimating and running the model was subdivided into manageable tasks, by sector. There was a consumption person, an investment person, a wage person, etc., with further subdivision depending on the degree of disaggregation. My job in 1979-80 at the Bank of Canada was to look after residential investment in the RDXF model of the Canadian economy. No one seemed worried that I didn't spend much time talking to the price people or the mortgage people (who worked on another floor). I looked after 6 equations, and entered add factors when we had to make a forecast.
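Just to fix ideas about the mechanics (this is a toy illustration with invented data and variable names, not anything from the RDXF model), a single sector equation in such a system is essentially a regression, and an "add factor" is a judgmental shift to that equation when the model is used to forecast:

```python
import numpy as np

# Toy "sector equation": residential investment as a function of income and a
# mortgage rate. The data, variable names, and coefficients are all invented.
rng = np.random.default_rng(0)
T = 80
income = 100 + np.cumsum(rng.normal(0.5, 1.0, T))
mortgage_rate = 8 + rng.normal(0.0, 0.5, T)
resid_inv = 5 + 0.3 * income - 1.2 * mortgage_rate + rng.normal(0.0, 1.0, T)

# Estimate the equation by OLS.
X = np.column_stack([np.ones(T), income, mortgage_rate])
b, *_ = np.linalg.lstsq(X, resid_inv, rcond=None)

# Forecast next period, conditional on assumed paths for the right-hand-side
# variables (in practice those came from the other sectors' equations)...
x_next = np.array([1.0, income[-1] + 0.5, mortgage_rate[-1]])
model_forecast = x_next @ b

# ...and then enter an "add factor": a judgmental adjustment reflecting
# information the estimated equation doesn't capture.
add_factor = 2.0
published_forecast = model_forecast + add_factor
print(round(model_forecast, 2), round(published_forecast, 2))
```

Multiply that by a few hundred equations, split the work across a building full of sector specialists, and you have the division of labor I'm describing.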

What happened to such models? Well, they are alive and well, and one of them lives at the Board of Governors in Washington D.C. - the FRB/US model. FRB/US is used as an explicit input to policy, as we can see in this speech by Janet Yellen at the last Jackson Hole conference:
A recent paper takes a different approach to assessing the FOMC's ability to respond to future recessions by using simulations of the FRB/US model. This analysis begins by asking how the economy would respond to a set of highly adverse shocks if policymakers followed a fairly aggressive policy rule, hypothetically assuming that they can cut the federal funds rate without limit. It then imposes the zero lower bound and asks whether some combination of forward guidance and asset purchases would be sufficient to generate economic conditions at least as good as those that occur under the hypothetical unconstrained policy. In general, the study concludes that, even if the average level of the federal funds rate in the future is only 3 percent, these new tools should be sufficient unless the recession were to be unusually severe and persistent.
So, that's an exercise that looks like what Blanchard has in mind, though he discusses "unconventional monetary policy" as an application of the "theory models."
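To see the flavor of that kind of exercise - and only the flavor; this is a tiny backward-looking toy with made-up parameters, nothing like FRB/US - one can hit a simple model with an adverse shock and compare outcomes under a policy rule with and without the zero lower bound imposed:

```python
import numpy as np

def simulate(zlb, T=40, r_star=1.0, pi_target=2.0):
    """Toy backward-looking economy: output gap y, inflation pi, policy rate i.
    Parameter values are illustrative, not estimated from anything."""
    y, pi = 0.0, 2.0
    path = []
    for t in range(T):
        shock = -5.0 if t == 0 else 0.0               # one large adverse shock
        pi = pi + 0.1 * y                             # accelerationist Phillips curve
        # "Aggressive" Taylor-type rule responding to inflation and the lagged gap
        i = r_star + pi + 1.5 * (pi - pi_target) + 1.0 * y
        if zlb:
            i = max(i, 0.0)                           # impose the zero lower bound
        y = 0.8 * y - 0.5 * (i - pi - r_star) + shock # IS curve
        path.append((y, pi, i))
    return np.array(path)

unconstrained = simulate(zlb=False)
constrained = simulate(zlb=True)

# Cumulative output gap: the lower bound makes the cumulative downturn larger.
print(unconstrained[:, 0].sum(), constrained[:, 0].sum())
```

The FRB/US exercise Yellen describes is the grown-up version of this: impose the lower bound, then ask whether forward guidance and asset purchases can make up the difference.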

It's no secret what's in the FRB/US model. The documentation is posted on the Board's web site, so you can look at the equations, and even run it, if you want to. There's some lip service to "optimization" and "expectations" in the documentation for the model, but the basic equations would be recognizable to Lawrence Klein. It's basically a kind of expanded IS/LM/Phillips curve model. And Blanchard seems to have a problem with it. He mentions FRB/US explicitly:
For example, in the main model used by the Federal Reserve, the FRB/US model, the dynamic equations are constrained to be solutions to optimization problems under high order adjustment cost structures. This strikes me as wrongheaded. Actual dynamics probably reflect many factors other than costs of adjustment. And the constraints that are imposed (for example, on the way the past and the expected future enter the equations) have little justification, theoretical or empirical.
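To see concretely what "solutions to optimization problems under high order adjustment cost structures" means, here is the generic quadratic-cost version - my notation, a deliberate simplification; FRB/US-style specifications use higher-order polynomial adjustment costs:

```latex
% Choose a path for y_t to track a moving target y*_t, subject to a cost of
% changing y. Quadratic costs for simplicity.
\min_{\{y_{t+j}\}} \; E_t \sum_{j=0}^{\infty} \beta^j
  \left[ \left( y_{t+j} - y^*_{t+j} \right)^2
       + c \left( y_{t+j} - y_{t+j-1} \right)^2 \right]
%
% First-order condition at date t:
\left( y_t - y^*_t \right) + c \left( y_t - y_{t-1} \right)
  - \beta c \, E_t \left( y_{t+1} - y_t \right) = 0
```

Solving forward, y_t ends up depending on y_{t-1} and a discounted distributed lead of expected future targets - exactly the way "the past and the expected future enter the equations" that Blanchard finds hard to justify.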
Opinions seem to differ on how damning this is. The watershed in macroeconomists' views on large-scale macroeconometric models was of course Lucas's critique paper, which was aimed directly at the failures of such models. In "Macroeconomics and Reality," Sims acknowledges Lucas's point, but argues that large-scale models could still be useful, in spite of misidentification.

But it's not clear that large-scale macroeconometric models are taken that seriously these days, even in policy circles, Janet Yellen aside. Simulation results are presented in policy discussions, but it's hard to tell whether those results change any minds. Blanchard recognizes that we need different models to answer different questions, and one danger of the one-size-fits-all large-scale model is its use in applications for which it was not designed. Those who constructed FRB/US certainly did not envision the elements of modern unconventional monetary policy.

A modern macroeconometric approach is to scale down the models and incorporate more theory - more structure. The best-known such models, often called "DSGE" (dynamic stochastic general equilibrium) models, are the Smets-Wouters model and the Christiano/Eichenbaum/Evans model. Blanchard isn't so happy with these constructs either.
DSGE modelers, confronted with complex dynamics and the desire to fit the data, have extended the original structure to add, for example, external habit persistence (not just regular, old habit persistence), costs of changing investment (not just costs of changing capital), and indexing of prices (which we do not observe in reality), etc. These changes are entirely ad hoc, do not correspond to any micro evidence, and have made the theoretical structure of the models heavier and more opaque.
Indeed, in attempts to fit DSGE models to disaggregated data, the models tend to suffer increasingly from the same problems as the original large-scale macroeconometric models. Chari, Kehoe, and McGrattan, for example, make a convincing case that DSGE models in current use are misidentified and not structural, rendering them useless for policy analysis. This has nothing to do with one's views on intervention vs. non-intervention - it's a question of how best to do policy intervention, once we've decided we're going to do it.
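For reference, the bells and whistles Blanchard lists take roughly the following forms in the Christiano/Eichenbaum/Evans and Smets-Wouters tradition - standard textbook versions, not the exact published specifications:

```latex
% External habit persistence: utility depends on consumption relative to lagged
% *aggregate* consumption C_{t-1}, which the household takes as given (with
% internal habits it would be the household's own lagged consumption c_{t-1}):
u\!\left( c_t - h C_{t-1} \right)

% Investment adjustment costs: changing the *flow* of investment is costly,
% not just changing the capital stock, with S(1) = S'(1) = 0 and S''(1) > 0:
k_{t+1} = (1-\delta) k_t + \left[ 1 - S\!\left( \frac{i_t}{i_{t-1}} \right) \right] i_t

% Price indexation: firms that do not reoptimize this period mechanically
% index their prices to lagged inflation:
P_{j,t} = \pi_{t-1}^{\,\iota} \, P_{j,t-1}
```

Each of these adds a parameter or two that helps the model match the persistence in the data, which is Blanchard's complaint: the fit improves, but the extra structure has little independent evidence behind it.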

Are there other types of models on the horizon that might represent an improvement? One approach is the HANK model, constructed by Kaplan, Moll, and Violante. This is basically a heterogeneous-agent incomplete-markets model in the style of Aiyagari (1994), with sticky prices and monetary policy as in a Woodford model. That's interesting, but it's not doing much to help us understand how monetary policy works. The central bank is simply assumed to dictate interest rates, with no attention to the structure of central bank assets and liabilities, the intermediation done by the central bank, or the nature of central bank asset swaps. Like everyone, I'm a fan of my own work, which is more in the Blanchard "theory model" vein. For recent work on heterogeneous agent models of banking, secured credit, and monetary policy, see my web site.

Blanchard seems pessimistic about the future of policy modeling. In particular, he thinks the theory modelers and the policy modelers should go their own ways. I'd say that's bad advice. If quantitative models are to have any hope of being taken seriously by policymakers, that hope rests on integrating better theory into such models. Maybe the models should be small. Maybe they should be more specialized. But I don't think setting the policy modelers loose without guidance would be a good idea.
