Thursday, November 12, 2015

"Permazero," by Jim Bullard

Jim Bullard has given a talk on "Permazero." Jim frames the idea as follows:
We have, after all, been at the zero lower bound in the U.S. for seven years.  In addition, the FOMC has repeatedly stressed that any policy rate increase in coming quarters and years will likely be more gradual than either the 1994 cycle or the 2004‐2006 cycle.  In short, the FOMC is already committed to a very low nominal interest rate environment over the forecast horizon of two to three years.  Perhaps short‐term nominal rates will simply be low during this period, or perhaps the economy will encounter a negative shock that will propel policy back toward the zero lower bound.
So, liftoff (an increase in the Fed's policy rate) may or may not occur soon, but even if it does, it's quite possible that we could face a world of "permazero," i.e. low nominal interest rates for a very long time. Well, so what?
The thrust of this talk is to suppose, for the sake of argument, that the zero interest rate policy (ZIRP) or near‐ZIRP remains a persistent feature of the U.S. economy.  How should we think about monetary stabilization policy in such an environment?  What sorts of considerations should be paramount? Should we expect slow growth?  Will we continue to have low inflation, or will inflation rise?  Would we be at more risk of financial asset price volatility?  What types of concrete policy decisions could be made to cope with such an environment?  Would it require a rethinking of U.S. monetary policy?
I'll leave you to read the paper, which introduces some important policy ideas, I think.

Tuesday, October 13, 2015

What Do We Know About Long and Variable Lags?

Purveyors of standard monetary policy lore argue that the effects of monetary policy are subject to "long and variable" lags. The idea appears to originate with Milton Friedman. Quoting from "A Program for Monetary Stability:"
There is much evidence that monetary changes have their effect only after a considerable lag and over a long period and that the lag is rather variable. In the National Bureau study on which I have been collaborating with Mrs. Schwartz, we have found that, on the average of 18 cycles, peaks in the rate of change in the stock of money tend to precede peaks in general business by about 16 months and troughs in the rate of change in the stock of money to precede troughs in general business by about 12 months ... . For individual cycles, the recorded lead has varied between 6 and 29 months at peaks and between 4 and 22 months at troughs.
The "National Bureau study" he mentions, which was not yet published when he wrote "A Program for Monetary Stability," is Friedman and Schwartz's "A Monetary History of the United States, 1867-1960." The Monetary History is the key empirical work backing up Friedman's monetarist ideas. Roughly, this empirical work consisted of a compilation (and construction where necessary) of monetary measurements for the United States over a long period of time, followed by the use of relatively crude statistical methods (crude in the sense that Chris Sims wouldn't get excited by the methods) to uncover regularities in the relationship between money and real economic activity.

As you can see from the quote, turning points in time series were important for Friedman. In part, he wanted to infer causality from the time series - if turning points in the money supply tended to precede turning points in aggregate economic activity, then he thought this permitted him to argue that fluctuations in money were causing fluctuations in output. But, Friedman could not find any regularity in the timing of the effects of money on output, other than that these effects took a long time to manifest themselves. Thus, the notion that monetary policy lags were long and variable.

The Monetary History formed a foundation for Friedman's monetary policy prescriptions. According to Friedman, central banks had two choices. They could either take the car and drive by looking in the rear-view mirror, or take the train. That is, the central bank could exercise discretion, put itself at the mercy of long and variable lags, and perhaps make the economy less stable in the process, or it could simply adhere to a fixed policy rule. From Friedman's point of view, the best policy rule was one which caused some monetary aggregate to grow at a fixed rate forever. If the primary source of instability in real GDP is instability in the money supply, then surely removing that instability would be beneficial, according to Friedman.

The modern version of the Monetary History approach is VAR (vector autoregression) analysis. This preliminary version of Valerie Ramey's chapter for the second Handbook of Monetary Economics is a nice survey of how VAR practitioners do their work. The VAR approach has been used for a long time to study the role of monetary factors in economic activity. If we take the VAR people at their word, the approach can be used to identify a monetary policy shock and trace its dynamic effects on macroeconomic variables - letting the data speak for itself, as it were. Ramey's paper describes a range of results, but the gist of it is that the full effects of a monetary policy shock are not manifested until about 16 to 24 months have passed. This is certainly in the ballpark of Friedman's estimates, though the typical lag (depending on the VAR) is somewhat longer than what Friedman thought. Thus, modern time series analysis does not appear to be out of line with the work of Friedman and Schwartz from more than half a century ago.
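
To make the VAR mechanics concrete, here is a minimal sketch - not Ramey's specification, just an illustration with made-up parameters: simulate a two-variable system, estimate a VAR(1) by least squares, and trace out impulse responses to an orthogonalized "policy" shock under a recursive ordering.

```python
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process, a VAR(1): x(t) = A x(t-1) + u(t),
# with x = [output, policy rate].  A is made up for illustration.
A = np.array([[0.9, -0.2],
              [0.1,  0.8]])
T = 5000
x = np.zeros((T, 2))
u = rng.normal(size=(T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + u[t]

# Estimate the VAR(1) by least squares: regress x(t) on x(t-1)
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Impulse responses to a unit "policy" shock, identified by a
# recursive (Cholesky) ordering with the policy rate ordered last,
# so the shock has no impact effect on output within the period
resid = Y - X @ A_hat.T
P = np.linalg.cholesky(np.cov(resid.T))
irf = np.zeros((24, 2))
irf[0] = P[:, 1] / P[1, 1]      # unit policy shock on impact
for h in range(1, 24):
    irf[h] = A_hat @ irf[h - 1]
```

In practice everything of interest - which variables enter, the lag length, and especially the identifying assumptions behind the orthogonalization - is up for grabs, which is exactly where the discomfort with these exercises comes from.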

But, should we buy it? First, there are plenty of things to make us feel uncomfortable about VAR results with regard to monetary policy shocks. As is clear from Ramey's paper, and to anyone who has read the VAR literature closely, results (both qualitative and quantitative) are sensitive to what variables we include in the VAR, and to various other aspects of the setup. Basically, it's not clear we can believe the identifying assumptions. Second, even if we take VAR results at face value, the results will only capture the effect of an innovation in monetary policy. But, modern macroeconomics teaches us that this is not what we should actually be interested in. Instead, we should care about the operating characteristics of the economy under alternative well-specified policy rules. These are rules specifying the actions the central bank takes under all possible circumstances. For the Fed, actions would involve setting administered interest rates - principally the interest rate on reserves and the discount rate - and purchasing assets of particular types and maturities.

Once we think generally in terms of policy rules, the notion of long and variable lags goes out the window. In principle, the current state of the economy determines the likelihood of all potential future states of the economy. Then, if we know the central bank's policy rule, we know the likelihood of all future policy actions. But some of those future economic states of the world may not arise for many years, if ever. For example, if the policy rule is well-specified, it tells us what the central bank will do in the event of another financial crisis. Under what circumstances will the Fed lend to large and troubled financial institutions? How bad does it have to get before the central bank pushes overnight nominal interest rates to zero or lower? To what extent should the central bank engage in quantitative easing? And so on. This is basically what "forward guidance" is about. In a world with forward-looking people, promises about future actions matter for economic activity today - monetary policy actions need not precede effects. All of this raises doubts about what we can learn about monetary policy effects from a purely statistical analysis. Unfortunately, the data is not very good at speaking for itself.

But, we have models. In those models, we can think about any policy experiments we want (within the bounds of what the model can handle of course), and we can rig those experiments in ways that allow us to think about long and variable lags in a coherent fashion. Basic frictionless models commonly used in macro have essentially no internal propagation. For example, the standard representative agent neoclassical growth model with technology shocks (i.e. RBC) exhibits some propagation through the capital stock - a positive technology shock implies higher investment today, higher capital stock tomorrow, and higher output tomorrow. But that effect is very small, and the basic RBC model fits the persistence in output by applying persistence in the technology shock. Indeed, in that model the properties of the time series of aggregate output are determined primarily by the time series properties of the exogenous technology shock, so that's not much of a theory of propagation. Add monetary elements to basic RBC without other frictions and not much is going to happen. For example, in Cooley and Hansen's cash-in-advance model, monetary impulses don't matter much, and certainly don't produce Friedman's long and variable lags.
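
The lack of internal propagation is easy to illustrate with a stylized example (the numbers are mine, not from any calibrated model): if output simply tracks an AR(1) technology shock, the persistence of output is just the persistence we assumed for the shock.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.95              # assumed persistence of the technology shock
T = 200_000
z = np.zeros(T)
eps = rng.normal(size=T)
for t in range(1, T):
    z[t] = rho * z[t - 1] + eps[t]

# With no internal propagation, (log) output just tracks the shock,
# so output's autocorrelation is inherited from the shock process
y = z
autocorr = np.corrcoef(y[1:], y[:-1])[0, 1]   # close to rho, up to sampling error
```

Any persistence in output here is assumed, not explained - which is the sense in which this is not much of a theory of propagation.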

Of course, we have frictions. Sticky prices will certainly act to produce nonneutralities of money that will persist. But it's well-known that the quantitative effects are highly sensitive to assumptions about pricing. With Calvo pricing, monetary shocks are a big deal, but with state-dependent pricing, the effects are small. Other work by Francesco Lippi and Fernando Alvarez shows that small changes in the pricing protocol - for example setting two prices (a sale price and a regular price) from which to choose - can dramatically reduce the effect of a money shock. Another propagation mechanism with some claim to support from serious theory is labor search. The fact that successful matches in the labor market take time will act to propagate any shocks in general equilibrium, including monetary shocks. However, there seems to be some debate about how quantitatively important this is.

Probably the best known attempt to quantify the dynamic effects of monetary policy, in an expanded New Keynesian model, is Christiano/Eichenbaum/Evans (CEE). CEE start by filtering the data through VAR analysis, and then treating the impulse responses from the VAR as data that the model should explain. Thus, a key assumption in the analysis is that the monetary policy shock has been correctly identified in the preliminary VAR step. Given that heroic assumption, what CEE set out to explain are lags in monetary policy that, if not variable, are certainly long:
Output, consumption, and investment [in the VAR impulse responses] respond in a hump-shaped fashion, peaking after about one and a half years and returning to preshock levels after about three years.
So, the responses of real activity to monetary policy shocks estimated by CEE exhibit two key features. First, the effects take a long time to peak and to dissipate, in a manner that seems consistent with the Monetary History and other VAR evidence. Second, the response exhibits delay - that's what the "hump shapes" are about.

So, how do CEE go about fitting this data? Getting the persistence and delay in the effects of monetary policy will require frictions. These are the frictions in the model:

1. Sticky prices: There is Calvo pricing. Not only that, but if a firm gets to re-set its price, it must do that before knowing the current period's monetary shock.
2. Sticky wages: Households set their wages in a Calvo fashion.
3. Sticky utilization: It is costly to change the utilization rate of capital.
4. Cash-in-advance purchases of labor: This works as in Tim Fuerst's segmented markets model of monetary policy, and gives an added kick to employment from a monetary policy shock.
5. Costs of adjustment associated with investment.

What's going on here? For the most part, the five frictions above are not well-grounded in microeconomic theory, nor are they well-supported with microeconomic evidence. We of course know that firms do not make decisions continuously but at discrete points in time. But why should it be costly to adjust the capital stock, or to change capital utilization? It can be infinitely costly for a firm to change its price if the Calvo fairy does not allow it, but in the CEE model it is costless to index price decisions. Why? Because that helps in fitting the data. Ultimately, then, we have a model which does a good job of fitting VAR impulse responses, but seems to have thrown out a lot of economics along the way.

So, what do we know about long and variable lags associated with monetary policy? Not much, it seems. We don't have good theories of persistence and delay associated with monetary policy actions, and it's hard to trust the empirical evidence that is used to argue for long and variable lags. Further, the theory we have tells us that policy design is about evaluating the operating characteristics of economies under alternative policy rules. And, in that context, thinking in terms of actions and lagged responses is wrongheaded. Let's go with that.

Sunday, October 4, 2015

Some Unpleasant Labor Force Arithmetic

Words such as "grim" and "dismal" were used to describe Friday's employment report, which featured a payroll employment growth estimate for September of 142,000. Indeed, I think it would be typical, among people who watch the employment numbers, to think of performance in the neighborhood of 200,000 added jobs in a month as normal.

But what should we think is normal? As a matter of arithmetic, employment growth has to come from a reduction in the number of unemployed, an increase in the labor force, or some combination of the two. In turn, an increase in the labor force has to come from an increase in the labor force participation rate, an increase in the working-age population, or some combination of the two. So, if we want to think about where employment growth is coming from, labor force participation is an important piece of the puzzle. This chart shows the aggregate labor force participation rate, and participation rates for men and women:
As is well-known, the participation rate has been falling since about 2000, and at a higher rate since the beginning of the Great Recession. Further, participation rates have been falling for both men and women since the beginning of the Great Recession. It's useful to also slice this by age:
Thus, labor force participation has dropped among the young, and among prime-aged workers, but has held steady for those 55 and older. So, there are two effects which have reduced aggregate labor force participation since the beginning of the Great Recession: (i) participation rates have dropped among some age groups, and have not increased for any age group; (ii) the population is aging, and the old have a lower-than-average participation rate.

Next, we'll go back to the 1980s, as that period featured a major recession, but with a very different backdrop of labor force behavior.
The chart shows the population, aged 15-64 (just call this "population"), labor force, and employment (household survey) for the period from the beginning of the 1980 recession to the beginning of the 1990-91 recession, with each time series scaled to 100 at the first observation. This is a period over which the population grew at an average rate of 1.1%, while labor force and employment grew at average rates of 1.6% and 1.7%, respectively. Over this period, employment could grow at a higher rate, on average, than the population, because of an increase in labor force participation, driven primarily by the behavior of prime-age workers. It should be clear that, over the long run, population, labor force, and employment have to grow at the same average rates - again, as a matter of arithmetic. But, over the short run, employment can grow at a higher rate than the labor force if unemployment is falling, and the labor force can grow at a higher rate than the population if the participation rate is rising.
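
The long-run arithmetic can be checked with a hypothetical calculation (the starting levels are made up for illustration): if employment permanently grew 1.7% per year while population grew 1.1%, the employment-to-population ratio would eventually exceed one, which is impossible.

```python
# Hypothetical starting levels: population 100, employment 60
pop, emp = 100.0, 60.0
years = 0
while emp / pop < 1.0:
    pop *= 1.011        # 1.1% population growth
    emp *= 1.017        # 1.7% employment growth
    years += 1
print(years)            # prints 87: the ratio would pass 1.0 within a century
```

So a gap between employment growth and population growth can only be temporary, financed by falling unemployment or rising participation.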

Fast forward to the recent data.
Over the period since the beginning of the Great Recession, population has grown at an average rate of 0.5%, and labor force and employment at 0.3%. As you can see in the chart, employment has essentially caught up with the labor force, reflected of course in a drop in the unemployment rate to close to its pre-recession level. Year-over-year payroll employment growth looks like this:
For more than three years, employment growth has been sustained at close to or greater than 2%, year-over-year. And, with labor force participation falling, that growth in employment, in excess of the 0.5% growth rate in population, has come from falling unemployment.

What most people seem to view as "normal" payroll employment growth, 200,000 per month, amounts to a 1.7% growth rate per annum, given the current level of employment. To sustain that into the future, given 0.5% population growth, requires further sustained decreases in unemployment and/or an increase in the participation rate. Are there enough unemployed people out there to generate that level of employment growth? In this post, I showed unemployment rates by duration, indicating that unemployment rates for the short and medium-term unemployed have returned to pre-recession levels or lower. What remains elevated is the number of long-term unemployed - those unemployed 27 weeks or more. Here's an interesting chart:
This shows (with the two series scaled differently to highlight the correlation) the time series of long-term unemployed and the monthly flow from unemployment to not-in-the-labor-force. Clearly, the two time series track each other closely. This is related to a phenomenon labor economists call "duration dependence." During a spell of unemployment for a typical unemployed person, the job-finding rate falls. A person unemployed a few weeks is much more likely to find a job than a person unemployed for a year, for example. Thus, as we can see in the chart, it is likely that a long-term unemployed person does not find a job, and exits the labor force.

So, suppose that about 1 million long-term unemployed (roughly the net increase in long-term unemployment from the beginning of the recession until now) leave the labor force. This would imply an unemployment rate of about 4.6%. Can unemployment go much lower than 4.6%? Probably not. This means that there is little employment growth left to be squeezed out of the current unemployment pool. So, if payroll employment growth is to be sustained at 200,000 per month, this will require an increase in the labor force participation rate. Could that happen? This next chart shows the flows into the not-in-the-labor-force (NILF) state:
Here, note that the flows into NILF from both employment and unemployment are elevated relative to pre-recession levels. Further, about 70% of the flow currently comes directly from employment. From the previous chart, it seems clear that the flow from the unemployment state will fall to normal levels as the number of long-term unemployed falls, but that should not stem the reduction in the labor force participation rate, if the high flow continues from employment to NILF. Checking what is going on with respect to flows out of the NILF state:
These flows were high relative to pre-recession levels, but are close to, and moving back to, those levels.

These charts reinforce a view that the fall in labor force participation, post-recession, has been driven by long-run factors, and those factors show no sign of abating. Thus, we should not expect the labor force participation rate to stop falling, let alone reverse course, any time soon.

Conclusion? With the population aged 15-64 growing at 0.5% per year, if we're getting payroll employment growth of more than about 60,000 per month (that's 0.5% growth in payroll employment per year), this has to be coming from the pool of unemployed people, or from those not in the labor force. But further significant flows of workers from unemployment to employment are unlikely, and the net flows from the labor force to NILF are likely to continue. Thus, employment growth of 142,000 may seem grim and dismal, but labor market arithmetic tells us that employment growth is likely to go lower in the immediate future.
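
The arithmetic behind these benchmarks is straightforward. Assuming payroll employment of roughly 142 million (an approximation on my part), 200,000 jobs per month translates into about 1.7% annual growth, while 0.5% annual growth - matching population growth - corresponds to about 60,000 jobs per month:

```python
employment = 142e6                       # assumed payroll employment level

# 200,000 jobs per month, expressed as an annual growth rate
annual_growth = 200_000 * 12 / employment
print(round(100 * annual_growth, 2))     # prints 1.69, i.e. about 1.7% per year

# monthly job gains consistent with 0.5% annual population growth
monthly_jobs = 0.005 * employment / 12
print(round(monthly_jobs))               # prints 59167, i.e. about 60,000 per month
```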

Tuesday, September 22, 2015

Knut Was a Neo-Fisherian

In the midst of this Paul Krugman post, I found a description of Wicksellian dynamics:
As I’ve been trying to point out – and as others, notably Ben Bernanke, have also tried to point out – such monetary wisdom as we possess starts with Knut Wicksell’s concept of the natural interest rate. Try to keep rates too low, and inflation accelerates; try to keep them too high, and inflation decelerates and heads toward deflation.
So, I was thinking, what happens if we write that down and work it out?

To keep it simple, we'll just deal with a deterministic world. It's more or less New Keynesian, but a little different. To start, we have the standard Euler equation, which prices a one-period nominal bond - after taking logs and linearizing:

(1) R(t) = r* + ag(t+1) + i(t+1),

where R(t) is the nominal interest rate, r* is the subjective discount rate, a is the coefficient of relative risk aversion (assumed constant), g(t+1) is the growth rate in consumption between period t and period t+1, and i(t+1) is the inflation rate, between period t and period t+1. Similarly, the real interest rate is given by

(2) r(t) = r* + ag(t+1).

Assume there is no investment, and all output is consumed.

To capture Krugman's concept of Wicksellian inflation dynamics, first let r* + ag* denote the Wicksellian natural rate of interest, where g* is the economy's long-run growth rate. Krugman says that inflation goes up when the real interest rate is low relative to the natural rate, and inflation goes down when the opposite holds. So, write this as a linear relationship,

(3) i(t+1) - i(t) = -b[r(t) - r* - ag*],

where b > 0. Then, from (2) and (3),

(4) i(t) = ba[g(t+1)-g*] + i(t+1),

which is basically a Phillips curve - given anticipated inflation, inflation is high if the growth rate of output is high.

Then, substitute for g(t+1) in equation (1), using (4), and write

(5) i(t+1) = -[b/(1-b)][R(t) - r* - ag*] + [1/(1-b)]i(t).

So this is easy now: to determine an equilibrium, we just need to solve the difference equation (5) for the sequence of inflation rates, given some path for R(t), or some policy rule for R(t) determined by the central bank.

First, suppose that R(t) = R, a constant. Then, from (5), the unique steady state is

(6) i = R - r* - ag*.

That's just the long-run Fisher relation - the inflation rate is the nominal interest rate minus the natural real rate of interest. But what about other equilibria? If 0 < b < 1, or b > 2, then in fact the steady state given by (6) is the only equilibrium. If 1 < b < 2 then there are many equilibria which all converge to the steady state.

Next, suppose that R(t) = R1, for t = 0, 1, 2, ..., T-1, and R(t) = R2, for t = T, T+1, T+2,..., where R2 > R1. This is an experiment in which the nominal interest rate goes up, once and for all, at time T, and this change in monetary policy is perfectly anticipated. In the case where 0 < b < 1, there is a unique equilibrium that looks like this:

So, inflation rises prior to the nominal interest rate increase, reaching the Fisherian steady state in period T, while the growth rate in output and the real interest rate are low and falling before the nominal interest rate increase occurs.

We can look at the other cases, in which b > 1, and the dynamics will be more complicated. Indeed, we get multiple equilibria in the case 1 < b < 2. But, in all of these cases, a higher nominal interest rate implies convergence to the Fisherian steady state with a higher inflation rate. Increasing the nominal interest rate serves to increase the inflation rate. Keeping the nominal interest rate at zero serves only to keep the inflation rate low, in spite of the fact that this model has Wicksellian dynamics and a Phillips curve.
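
For the determinate case 0 < b < 1, the equilibrium path is easy to compute: rewrite (5) backward as i(t) = (1-b)i(t+1) + b[R(t) - r* - ag*], set inflation at the new Fisherian steady state from date T onward, and iterate back to date 0. A sketch, with parameter values chosen purely for illustration:

```python
# Inflation under a perfectly anticipated rate increase at date T,
# in the determinate case 0 < b < 1.  All parameter values are
# illustrative, not calibrated.
rstar_plus_ag = 0.02        # r* + ag*, the Wicksellian natural rate
b = 0.5
R1, R2 = 0.01, 0.02         # policy rate before and after date T
T, horizon = 20, 30

i = [0.0] * (horizon + 1)
# Unique bounded equilibrium: inflation sits at the new Fisherian
# steady state R2 - r* - ag* from date T onward...
for t in range(T, horizon + 1):
    i[t] = R2 - rstar_plus_ag
# ...and satisfies (5), solved backward, before date T:
# i(t) = (1 - b) * i(t+1) + b * (R(t) - r* - ag*)
for t in range(T - 1, -1, -1):
    i[t] = (1 - b) * i[t + 1] + b * (R1 - rstar_plus_ag)
```

The computed path has inflation rising monotonically toward R2 - r* - ag* before the policy rate ever moves, and approaching the old steady state R1 - r* - ag* as we go back in time - the anticipated rate increase raises inflation.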

I'm not endorsing this model - just showing you its implications. And those implications certainly don't conform to "try to keep rates too low, and inflation accelerates; try to keep them too high, and inflation decelerates and heads toward deflation," as Krugman says. The Wicksellian process is built into the model, just as Krugman describes it, but the model has neo-Fisherian properties.

Sunday, September 20, 2015

The ZIRP Blues

Here's the time series of the fed funds rate and inflation rate in the United States, from the time Paul Volcker became Fed Chair:
Suppose an alien with a high IQ lands in my back yard. I show her this picture, and explain that the central bank moves the fed funds rate up and down so as to control inflation. Ms. Alien points out that the fed funds rate and inflation were in the neighborhood of 10% in August 1979. Now, 36 years later, the fed funds rate and the inflation rate are close to zero. So, says Ms. Alien, it looks like the central bank spent 36 years fighting the inflation rate down to zero.

Ms. Alien would be surprised to learn that most people are not happy with the current state of affairs. There are always exceptions, of course - in this case, John Cochrane. But popular views on current U.S. monetary policy fall basically into two camps:

1. Phillips curve A: These people think inflation is too low. But eventually the Phillips curve will re-assert itself, and inflation will rise of its own accord. When that happens, we can worry about liftoff - an increase in interest rates to hold inflation down.
2. Phillips curve B: These people also think inflation is too low, that, eventually, the Phillips curve will re-assert itself, and that inflation will rise of its own accord. But a Phillips curve B type thinks that we need to get ahead of the game. Milton Friedman told us that there are "long and variable lags" associated with monetary policy. If we wait too long, then monetary policy will be scrambling to keep up with higher inflation, and interest rates will need to climb at a high rate, at the expense of real economic activity.

The Phillips curve A group includes Summers, Stiglitz, and Krugman, who states that we should "wait until you see the whites of inflation’s eyes." Members of the Phillips curve A and B camps have to somehow come to grips with the Phillips curve we see in the recent data, which looks like this:
The line joins the points in the scatter plot in temporal sequence, roughly from right to left. Krugman's point in his piece is that the natural rate of unemployment (NAIRU) has been receding as we get closer to it. In this view, we're supposed to have faith that the Phillips curve looks like this:

An alternative to Phillips A/B is the neo-Fisherian view. As John Cochrane says:
But if a 0% interest rate peg is stable, then so is a 1% interest rate peg. It follows that raising rates 1% will eventually raise inflation 1%. New Keynesian models echo this consequence of experience. And then the Fed will congratulate itself for foreseeing the inflation that, in fact, it caused.
Cochrane's saying that central bankers have to come to terms with the Fisher effect. If the short-term nominal interest rate is low for a long time, we should not be surprised that the inflation rate is low. And John is quite happy with low inflation. While the Phillips curve A and B camps fight it out over how to get inflation up, and sing the ZIRP (zero interest rate policy) blues, he's hoping they never figure it out.

There's a more subtle idea in the quote from Cochrane above, which is that a neo-Fisherian could find common cause with the Phillips curve B camp. They could all agree to liftoff, the inflation rate could rise due to the Fisher effect, and the central bank "will congratulate itself for foreseeing the inflation that, in fact, it caused."

If you're wondering what central bankers are thinking, a nice summary of conventional views is in a speech by Andy Haldane, Chief Economist at the Bank of England. It's a long speech, by U.S. central banker standards, but certainly thorough. Much of the speech focuses on the "problem" of the zero lower bound (ZLB). In most of the monetary models we write down, and in the traditional thinking of central bankers, zero is a lower bound on the central bank's policy interest rate. The ZLB is thought to be a problem as, once the central bank reaches it, its policy options are limited. If one takes this seriously, there are two responses: (i) stay away from the ZLB; (ii) get more creative about policy options at the ZLB.

How do we stay away from the ZLB? Haldane tells us why we're now seeing ZLB policies:
... by lowering steady-state levels of nominal interest rates, lower inflation targets ... increased the probability of the ZLB constraint binding.
He's saying that low inflation targets, i.e. average rates of inflation that are low, imply lower nominal interest rates. So,
... one option for loosening [the ZLB] constraint would simply be to revise upwards inflation targets. For example, raising inflation targets to 4% from 2% would provide 2 extra percentage points of interest rate wiggle room.
So this is entirely consistent with John Cochrane and the neo-Fisherians. If the central bank's inflation target is higher by two percentage points, then the nominal interest rate must on average be higher by two percentage points, and the chances that monetary policy will take us to the ZLB should be much smaller.

But, Haldane is certainly not a neo-Fisherian. He's more in the Phillips curve A camp, as this is his policy recommendation:
In my view, the balance of risks to UK growth, and to UK inflation at the two-year horizon, is skewed squarely and significantly to the downside.

Against that backdrop, the case for raising UK interest rates in the current environment is, for me, some way from being made. One reason not to do so is that, were the downside risks I have discussed to materialise, there could be a need to loosen rather than tighten the monetary reins as a next step to support UK growth and return inflation to target.
Haldane makes it clear that he thinks the way to "return inflation to target," i.e. 2%, is not to let the central bank's interest rate target go up. And, as I wrote here, it's not as if the UK data will make you a believer in the Phillips curve. Here's the policy problem the Bank of England faces:
The policy interest rate target is currently at 0.5% in the UK but, as in the U.S., the inflation target is at 2% and actual inflation is hovering around 0%.

Haldane discusses ways in which central banks can get creative when confronted with the ZLB. The options that have been discussed (and in some cases implemented by some central banks) are:

1. Quantitative Easing: The idea here is that, at the ZLB, purchases by the central bank of short-term government debt are essentially irrelevant, as there is no fundamental difference between short-term government debt and reserves at the ZLB. But, the central bank could purchase long-maturity government debt or other assets at the ZLB. Perhaps that does something? Post-Great Recession, the Fed of course acquired a large portfolio of long-maturity Treasury securities and mortgage-backed securities, and maintains the nominal value of that portfolio of assets through a reinvestment policy that is still in place. Whatever the effects of U.S. QE programs, it's an inescapable reality that inflation is close to zero. But, even larger asset purchases were carried out by the Swiss National Bank and the Bank of Japan. Here's what's happened in Switzerland:
In this case, both the policy rate and the inflation rate are well below zero. The Swiss National Bank has a goal of price stability, which it defines as less than 2% inflation. I'm not sure if they are OK with an inflation rate less than -1%.

The Bank of Japan began a program of "qualitative and quantitative monetary easing" in April of 2013. Here's the overnight interest rate and inflation rate time series for Japan:
I've included the whole 20-year period over which Japan's overnight interest rate was below 1%. Japan is, as you know, our stock example of what ZIRP produces. But what of the effects of the Bank of Japan's recent QE experiment? Don't be deceived by that burst of inflation in 2014. In April 2014, the consumption tax in Japan went up from 5% to 8%, and that feeds directly into the CPI - the prices in the index are measured after-tax. If we look at the CPI levels since the beginning of the QE program in April 2013, you can see that more clearly:
So, from April 2013 to July 2015, the CPI increased about 4%. If 3 percentage points of that is simply due to the consumption tax increase, then we're left with less than 1/2% per year in inflation since the QE program began. The Bank of Japan's inflation target is 2%, which it is missing by a wide margin on the low side, in spite of an increase in the monetary base in Japan that looks like this:
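That last step is simple arithmetic, and it can be checked quickly. Here's a minimal sketch using the rounded figures from the text (a roughly 4% CPI rise over about 2.25 years, with about 3 percentage points attributable to the consumption tax hike):

```python
# Back-of-the-envelope check of the Japan CPI arithmetic in the text.
# Inputs are the rounded figures from the post, not official statistics.
total_rise = 1.04        # gross CPI change, April 2013 to July 2015 (~4%)
tax_component = 1.03     # gross contribution of the consumption tax hike (~3 pp)
years = 2.25             # length of the window in years

# Strip out the tax component, then annualize what's left.
ex_tax_gross = total_rise / tax_component
annualized = ex_tax_gross ** (1 / years) - 1
print(f"ex-tax inflation: {annualized * 100:.2f}% per year")  # roughly 0.4%
```

Stripping out the tax component and annualizing leaves something like 0.4% per year, consistent with the "less than 1/2% per year" figure above.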
You can't blame John Cochrane for stating the following, with respect to the U.S.:
Even the strongest empirical research argues that QE bond buying announcements lowered rates on specific issues a few tenths of a percentage point for a few months. But that's not much effect for your $3 trillion. And it does not verify the much larger reach-for-yield, bubble-inducing, or other effects.

An acid test: If QE is indeed so powerful, why did the Fed not just announce, say, a 1% 10 year rate, and buy whatever it takes to get that price? A likely answer: they feared that they would have been steamrolled with demand. And then, the markets would have found out that the Fed can’t really control 10 year rates. Successful soothsayers stay in the shadows of doubt.

I've written down a model of QE, in which swaps of short-maturity assets for long-maturity assets by the central bank can have real effects. Basically, this increases the stock of effective collateral in the economy, relaxes collateral constraints, and increases the real interest rate. It's a good thing. But, if the nominal interest rate is pegged at zero, this will lower the inflation rate.
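The logic of that last sentence is just the Fisher relation: with the nominal rate pegged, anything that raises the real rate must show up as lower inflation. A minimal sketch with made-up numbers (these are illustrative, not output from the model):

```python
# Fisher relation (approximate): R = r + pi, so pi = R - r.
# With the nominal rate R pegged, a higher real rate r means lower inflation.
def inflation(nominal_rate, real_rate):
    """Inflation implied by the (approximate) Fisher relation."""
    return nominal_rate - real_rate

peg = 0.0                              # nominal rate pegged at zero
print(inflation(peg, 0.01))            # real rate 1% -> inflation -1%
print(inflation(peg, 0.02))            # QE raises the real rate -> inflation falls
```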

2. Lower the lower bound: If the ZLB is a problem, possibly we can make the problem go away by relaxing the bound. In models we write down, the zero lower bound arises because it is costless to hold currency, which, given current technological constraints, cannot bear interest. When the central bank has excess reserves outstanding in the financial system, if an attempt were made to charge financial institutions for the privilege of holding reserves with the central bank, these institutions would opt to hold currency instead - in some of our models. But, in the real world it is not costless to hold currency. Making interbank transactions using currency is impractical, as millions of dollars in currency takes up a lot of space, and because real resources would have to be expended in preventing theft. This implies that market nominal interest rates can be negative and, indeed, some jurisdictions have opted for negative interest rates on reserve balances held at the central bank. One of those, as you can see in the chart above, is Switzerland, where the inflation rate is now below -1%. Another is the Euro area:
European overnight interest rates have not gone as low as in Switzerland, nor is the inflation rate as low, but it's a similar picture - not much inflation.

Relaxing the lower bound meets with a difficulty similar to that for QE - in the long run, this just serves to make inflation lower. To see this, consider a very crude monetary model - cash-in-advance. There's a representative consumer who gets utility u(c) from consumption goods c, and suffers disutility v(n) from supplying n units of labor, which produces n units of consumption goods. Consumption goods must be purchased with cash. There are also one-period bonds, which sell at a price q at the beginning of the period, and pay off one unit of cash next period. Cash and bonds are held across periods, and fraction t of cash holdings held between periods is stolen. Suppose for simplicity that thieves steal money and burn it. To make things easy, look at an equilibrium in which the money growth rate (and hence the inflation rate) is a constant, i. Letting B denote the discount factor, in equilibrium the price of the bond is given by

(1) q = B/(1+i)

That's just the Fisher relation. There are no liquidity effects in this model, and in equilibrium the nominal interest rate is (roughly) given by

(2) R = p + i,

where p = 1/B -1 is the real interest rate. In equilibrium c = n, i.e. all output is consumed, and c is determined by

(3) v'(c) = [B(1-t)u'(c)]/(1+i)

What's the lower bound on the nominal interest rate? It's R* = - t; that is, it's determined by the cost of holding cash. And, if the nominal interest rate is at its lower bound, R*, then the inflation rate is

(4) i* = - p - t,

so lowering the lower bound only serves to decrease the inflation rate. You can add bells and whistles - reasons for the real interest rate to be low, endogenous theft of currency, short run non-neutralities of money, or whatever, and I think the basic idea will go through.
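To make equations (1)-(4) concrete, here's a numeric sketch. The functional forms are my own illustrative choices - u(c) = log(c) and v(n) = n - so that (3) reduces to c = B(1-t)/(1+i):

```python
# Numeric sketch of the cash-in-advance example, equations (1)-(4).
# Assumed functional forms: u(c) = log(c), v(n) = n, so v'(c) = 1, u'(c) = 1/c.
B = 0.96                 # discount factor
p = 1 / B - 1            # real interest rate, p = 1/B - 1

def equilibrium(i, t):
    """Given money growth i and theft rate t, return (q, R, c)."""
    q = B / (1 + i)                # (1) bond price
    R = p + i                      # (2) Fisher relation (approximate)
    c = B * (1 - t) / (1 + i)      # (3) with log utility and linear disutility
    return q, R, c

# Lowering the lower bound means raising t (making cash costlier to hold):
for t in (0.00, 0.02, 0.05):
    i_star = -p - t                # (4) inflation at the lower bound R* = -t
    print(f"t = {t:.2f}: lower bound R* = {-t:.2f}, inflation at bound = {i_star:.4f}")
```

Raising t pushes the lower bound R* = -t down, and (4) says inflation at the bound falls one-for-one with it - which is the point in the text.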

Another suggested approach to increasing the inflation rate, given ZIRP, is:

3. Helicopter Drops: The "helicopter drop" was a thought experiment in Milton Friedman's "Optimum Quantity of Money" essay. In the thought experiment, Friedman asks you to consider what would happen if the government sent out helicopters to spew money across the countryside. People would pick up the money, spend it, and prices would go up, etc. Surely, if inflation is perceived to be too low, and we're at a loss as to how to increase it, we should be thinking about this, the argument goes. Can't the government just send people checks and make inflation go up?

Paul Krugman has a suggestion along these lines, for Japan, though what he's suggesting is not Friedman's helicopter transfers (which increase the government budget deficit), but increases in spending on goods and services, financed by printing money:

What’s remarkable about this record of dubious achievement is that there actually is a surefire way to fight deflation: When you print money, don’t use it to buy assets; use it to buy stuff. That is, run budget deficits paid for with the printing press.
Actually, that's exactly what has been going on in Japan. The Japanese government has been running a deficit, the quantity of government debt outstanding is very large (in excess of 200% of GDP) and, as we can see in the chart above, the monetary base is growing at a very high rate. That's what printing money amounts to. But, the central bank can only control the total quantity of outside money in existence, not its composition. How outside money is split between currency and reserves is determined by the banks who hold the reserves and the private firms and consumers who hold the currency. The central bank can do all the money printing it wants, but if the new money sits as reserves, as appears to be happening, it's not going to have the effect that Krugman wants.
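The point about the composition of outside money can be put as a simple accounting identity - base money equals currency plus reserves - with purely illustrative numbers:

```python
# Outside (base) money identity: base = currency + reserves.
# The central bank sets the total; banks and the public determine the split.
# These numbers are purely illustrative, not data for any country.
base_before = {"currency": 90, "reserves": 10}    # total base = 100

# The central bank doubles the base, but the new money sits as reserves,
# so the currency in circulation is unchanged:
base_after = {"currency": 90, "reserves": 110}    # total base = 200

print(sum(base_after.values()), base_after["currency"])
```

The base has doubled, but if none of the new money circulates as currency, there's no reason to expect the effect on prices that "printing money" is supposed to deliver.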

Increasing interest rates is hard for central bankers. A decrease in rates rarely produces any flak, but central banks have few supporters when they talk about rate increases. Media pieces like this one in the NYT and this one in the Economist propagate the idea that interest rate increases are fraught with peril. One example people like to use is the tightening by the Swedish Riksbank in 2010-2011. Here's the relevant chart:
The tightening that occurred was an increase of 1.75 percentage points in the Riksbank's target interest rate, in quarter-point steps, from July 2010 to August 2011. In the realm of central bank tightening phases, this isn't a big deal. Compare it to the previous tightening phase in Sweden, or the 4.25 percentage point increase that occurred in the U.S. over the 2004-2006 period. But the Riksbank caught hell from Lars Svensson as a result. The Riksbank seems to have more or less followed Svensson's advice since but, as you can see, it is now keeping company with other central banks, with a negative policy rate, and inflation close to zero - two percentage points south of its target.

What are we to conclude? Central banks are not forced to adopt ZIRP, or NIRP (negative interest rate policy). ZIRP and NIRP are choices. And, after 20 years of Japanese experience with ZIRP, and/or familiarity with standard monetary models, we should not be surprised when ZIRP produces low inflation. We should also not be surprised that NIRP produces even lower inflation. Further, experience with QE should make us question whether large scale asset purchases, given ZIRP or NIRP, will produce higher inflation. The world's central bankers may eventually try all other possible options and be left with only two: (i) Embrace ZIRP, but recognize that this means a decrease in the inflation target - zero might be about right; (ii) Come to terms with the possibility that the Phillips curve will never re-assert itself, and there is no way to achieve a 2% inflation target other than having a nominal interest rate target well above zero, on average. To get there from here may require "tightening" in the face of low inflation.

Sunday, September 6, 2015

Bad Ideas?

Paul Krugman concludes that "hiking rates now is still a really bad idea." So, his opinion is clear. What's not so clear is his argument, which is this:
When the Fed funds rate was 5 percent, there was room to cut if a rate hike turned out to be premature — that is, the risks of moving too soon and moving too late were more or less symmetrical. Now they aren’t: if the Fed moves too late, it can always raise rates more, but if it moves too soon, it can push us into a trap that’s hard to escape.
So, suppose we're in the pre-financial crisis era, and the fed funds rate is 5%. As a thought experiment, suppose the FOMC decided at its regular meeting to hike the fed funds rate target to 5.25%. Then, at its next meeting it decided that the previous hike was a mistake, and undid it, reducing the fed funds rate target to 5%. I think Krugman is telling us that, in those circumstances, ex post we would prefer the policy that stayed at 5% to the one that went up a quarter point and then back down. I think he's also telling us that, once we discover the mistake, the best policy would be to reduce the fed funds rate below 5%. That's the basis for the asymmetry argument he's making - there's no problem if you're at 5%, but when you're at zero (essentially), you can't correct the mistake. So, fundamentally, this argument revolves around the assumption that there is an economically significant difference between going up to 5.25% this meeting, then down to 5% at the next meeting, vs. having stayed at 5%.

If that's the crux of it, Krugman needs to do a better job of making the case. In terms of modern macroeconomic theory, we don't think in terms of "too early" and "too late." Policy is state-dependent, i.e. data-dependent.
The policymaker takes an action based on what he or she sees, and what that indicates about where the economy is going. The question is: What is Krugman's desired policy rule, and where would that lead us? What exactly is the nature of the "hard to escape" trap that might befall us? As is, Krugman's not giving us much to go on.

Addendum: Here's another thought. Krugman seems to like the "normal" world of 5% fed funds rate better than the zero-lower-bound world - because, as he says, the normal world allows you more latitude to correct "mistakes." So why wouldn't he use that as an argument for liftoff?

Friday, September 4, 2015


Paul Romer is worried that the field of macroeconomics is too tribal - somehow our behavior is impeding scientific progress.

Romer starts his post with two statements:
1. The model in Lucas (1972), Expectations and the Neutrality of Money, made a path breaking contribution to economic theory. It is comparable in importance to the Solow model and the Dixit-Stiglitz formulation of monopolistic competition.

2. The model in Prescott and Kydland (1982), “Time to Build and Aggregate Fluctuations”, has no scientific validity.
As Romer points out, the first statement concerns a modeling contribution, while the second has to do with empirical usefulness. But Romer thinks that how we - that is, macroeconomists in particular - think about those two statements should be revealing.

Most of us can read those two statements and know how the extended arguments are likely to play out. Of course, it helps to have been around for a while - anyone under 33 wasn't yet born in 1982, and would see Kydland/Prescott as ancient history. And Lucas (1972), though of course highly influential, does not show up on many PhD reading lists these days. But, even if we know the typical arguments, we would like to know more. Has the author of the statement got anything new to say? How does he or she flesh out the argument? I might think, for example, that the author of the second statement isn't just commenting on how Kydland-Prescott fits the data. Maybe he or she has something to say about the whole methodological approach. In any case, I'm curious. I would like to know. I'm open to persuasion. Indeed, that's what economists do - we try to persuade others, using whatever means possible. And a lot of that persuasion involves words - written and spoken. My ex-colleague Deirdre McCloskey had a lot to say about this. Here's an excerpt:
I like that. Science is human persuasion, not mechanical demonstration. From reading Romer's stuff lately, I think he believes in mechanical demonstration. According to Romer, scientific progress should be obvious to some self-appointed group of elite scientists, and if we could just get rid of some of the clutter, we would be moving on much more quickly toward ultimate Romerian truth.

In spite of my reluctance, I'll play along with Romer. He says:
Think of some macroeconomist X that you know.
Fine. Some people would say I'm a macroeconomist, so I'll volunteer. Mr. X at your service. The next step is the following:
Consider these questions:

A. Would X agree that there is an objective sense in which statements 1 and 2 can be said to be either true or false?

B. Would X agree that a reasonable person could conclude that statements 1 and 2 are both true?

C. Would X be able to examine dispassionately the evidence for and against these two statements and evaluate them independently?
So, note that I'm going Romer one better. He's asking you to put words in someone else's mouth. That seems a little weird.

In answer to A: Stupid question. (i) Give me the rest of the argument, not just a blunt statement. I want you to try to persuade me. This is definitely not about true and false. What's true and false is something we'll never know - we're just scientists in the dark trying to figure things out. (ii) What you should be asking is: Are you persuaded? Maybe, after hearing the whole argument, I'm halfway-persuaded, but I have something I can add to the argument to make it more persuasive. Maybe I've got a clarifying question. Maybe I want the author to expand on the argument.

B: No idea. First I want to see if the authors of 1 and 2 are giving me what I think is a persuasive argument.

C: No. Dispassionate? Remember, we're talking about human persuasion here. Humans are passionate. If macroeconomists were not passionate about their work, working with them would be deathly dull. I would rather paint houses for a living. And why would we be thinking about 1 and 2 independently? Indeed, given the nature of the statements, we should be thinking about these things in the same context. How you argue one could have a lot to do with how you argue the other.

Where is Romer leading us? Well, he seems to want to make the case that we (macroeconomists) are "infected by tribalism." He also argues that physicists are not tribalists.

I've argued elsewhere that macroeconomics in particular is much less factional than some people would like to claim. Emphasis on factionalism sometimes makes an interesting story for undergraduate macro students. In the old days, there was a conflict between Monetarists and Keynesians - Chicago vs. the east coast. In the 1970s there was a conflict between "saltwaters and freshwaters" - CMU/Minnesota/Chicago/Rochester vs. the east coast. But, as the technology has changed, and people and ideas have moved around, it's much harder to identify warring camps, or a war. You'll note that statements 1 and 2 concern very old ideas. Romer didn't give us, say, post-2000 statements along these lines. Why? Because he would have a hard time finding such things, except perhaps on the blogosphere, where people seem to love rehashing old - and long-ago resolved - disputes.

But, researchers in macro - as with researchers in other fields in economics - will split off into groups that are internally relatively homogeneous. That's how we make progress. Persuasion is hard. If we try to work in heterogeneous groups in which we're constantly going back to first principles to justify what we're doing, we're not going to advance much. Sometimes we make the most progress in a group where we can agree on assumptions. I spend some of my time interacting with a group of monetary theorists who share a common view about research methods and direction, and we tend to share an evolving set of models. I've learned a lot from that, and from the continuing relationship with people in the group. And so what if two groups are having a dispute? That's just healthy competition.

So, within economics, is macro unusual? Of course not. Indeed, the whole emphasis of post-1970 macroeconomics is to do it like everyone else. Before 1970, no one would have been discussing macro and Dixit-Stiglitz in the same sentence. Should economics work like physics? Of course not. We're studying very different problems requiring very different methods. Why would you expect economists to behave like physicists?

What's my bottom line? Romer is just leading us through an unproductive conversation - one that's not going to persuade anyone of anything. Here's something that would be more fruitful. Romer's chief beef with the macro profession seems to be that we don't give him enough credit. The two characters who wrote the articles in statements 1 and 2 get plenty of credit. They are well-cited, and they have Nobel prizes. Romer also has plenty of citations, but seems to want something more. I'm not a close follower of research on economic growth, but I see growth papers sometimes, and my familiarity with this stuff is roughly that of your average macroeconomist. Romer made a couple of key contributions to the literature on economic growth early in his career, building on the seminal work of Solow and the optimal growth theorists - Cass and Koopmans for example. Romer's work, and Lucas's for example, was highly influential, and spawned a whole literature - endogenous growth theory.

The hope for this line of research was that we would gain an understanding of the forces behind technological change. This type of research, it was thought, could give us huge rewards. Some countries are extremely poor, while others are extremely rich. If we can figure out how to make the extremely poor extremely rich, this would be a huge payoff for macroeconomic research. My impression - and I could be entirely wrong - is that this line of research has been something of a bust. Most of the insight we have into economic growth and the sources of disparities in standards of living in the world comes mainly through the lens of the Solow growth model, and Solow's paper was published in 1956.

So, I think it is incumbent on Romer, if he wants more credit, and more recognition, to make the case for himself - for his older ideas - and to give us some new ideas. I'm willing to be persuaded, as I'm sure most macroeconomists are. But, arguments about "mathiness," "macro gone wrong," and unsubstantiated charges of dishonesty aren't persuading anyone, as far as I can tell.