I am still thinking about the best assessment of potential output and productivity growth that we have–that of the extremely-sharp John Fernald’s “Productivity and Potential Output Before, During, and After the Great Recession”. And I am–slowly, hesitantly, and unwillingly–coming to the conclusion that I have to mark my beliefs about the process of economic technological change to market, and revise them significantly.

Let’s start with what I wrote last July:

Brad DeLong: I Draw a Different Message from John Fernald’s Calculations than He Does…: John Fernald….

U.S. labor and total-factor productivity growth slowed prior to the Great Recession. The timing rules out explanations that focus on disruptions during or since the recession, and industry and state data rule out ‘bubble economy’ stories related to housing or finance. The slowdown is located in industries that produce information technology (IT) or that use IT intensively…

But when I look at this graph:

[Figure: Fernald’s potential-output series]

I see, from 2003:I to 2007:IV, a healthy growth rate of 3.2%/year according to Fernald’s potential-output series. Then after 2007:IV the growth rate of Fernald’s potential-output series slows to 1.45%/year. The slowdown from the late-1990s internet boom to the pace of potential output growth just before the Lesser Depression is small potatoes relative to the slowdown that has occurred since. This makes me doubt Fernald’s claim that the “timing rules out explanations that focus on disruptions during or since the recession”. As I see it, the timing is perfectly consistent with:

  • a small slowdown in potential output growth that starts in the mid-2000s as the tide of the infotech revolution starts to ebb, and
  • a much larger slowdown in potential output growth with the financial crisis, the Lesser Depression, and the jobless recovery that has followed since.

I say this with considerable hesitancy and some trepidation. After all, John Fernald knows and understands these data considerably better than I do. Perhaps it is simply that I spend too much time down in Silicon Valley and so cannot believe that the fervor of invention and innovation that I see there does not have large positive macroeconomic consequences.

Nevertheless, I have come to believe that macroeconomists’ assumption that the trend is separate from and independent of the cycle is playing them false. The assumption was introduced for analytical convenience and because it seemed true enough for a first cut. I see no reason to believe that it is still true.

That’s what I said last July. And now I have been trying to think about it some more…
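The growth-rate arithmetic quoted above is simple to check. A minimal sketch, using a hypothetical index level in place of Fernald’s actual series:

```python
import math

def annualized_growth(level_start, level_end, quarters):
    """Continuously-compounded annual growth rate between two quarterly levels."""
    years = quarters / 4
    return math.log(level_end / level_start) / years

# Hypothetical index: 2003:I = 100, growing at 3.2%/year for the 19 quarters
# to 2007:IV (these are stand-in levels, not Fernald's actual series).
p_2003q1 = 100.0
p_2007q4 = p_2003q1 * math.exp(0.032 * 19 / 4)  # ≈ 116.4

print(f"{annualized_growth(p_2003q1, p_2007q4, 19):.1%}/year")  # → 3.2%/year by construction
```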

There are actually two ways to read Fernald’s potential series. The first–the one I gravitated to–is this:

[Figure: Fernald’s potential-output series, first reading (Washington Center for Equitable Growth)]

More-or-less smooth growth from 2003 to 2009, with a serious slowdown in potential output growth starting in 2009, as hysteresis from the Lesser Depression hits the long-run growth potential of the American economy.

The second way to look at it is this:

[Figure: Fernald’s potential-output series, second reading (Washington Center for Equitable Growth)]

A sharp drop in the rate of potential output growth starting in 2005, with the Lesser Depression having had–so far–little negative hysteretic effect on potential output growth: basically, that another 1970s-magnitude productivity growth slowdown hit the U.S. economy in 2005, and that even without the financial crisis and the Lesser Depression we would today be more-or-less where we in fact are.

The second reading of the time series has to take the 2009 potential-output estimate as an anomaly: a measurement error. The first reading has to take 2005 and 2008 as (smaller) measurement-error anomalies. The first reading suggests enormous headroom for cyclical recovery, provided we can overcome the effects of hysteresis. The second reading suggests that there is no such headroom–that what we have is only a little bit less than the best we can reasonably expect.
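One way to make the choice between the two readings concrete is to fit log-potential with a trend break at 2005 and again at 2009, and ask which break leaves the smaller anomalies. A toy sketch on synthetic data (the series below is constructed for illustration and is not Fernald’s):

```python
import numpy as np

def trend_break_resid(log_y, years, break_year):
    """Fit a continuous piecewise-linear trend in log potential with a single
    slope break at break_year; return the residual sum of squares."""
    t = years - years[0]
    kink = np.maximum(years - break_year, 0.0)  # extra slope after the break
    X = np.column_stack([np.ones_like(t), t, kink])
    beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    resid = log_y - X @ beta
    return float(resid @ resid)

# Illustrative synthetic series (NOT Fernald's data): 3%/year growth through
# 2005, 1.5%/year after, plus one anomalous low observation in 2009.
years = np.arange(2003.0, 2015.0)
log_y = np.where(years <= 2005, 0.03 * (years - 2003),
                 0.03 * 2 + 0.015 * (years - 2005))
log_y[years == 2009] -= 0.01  # the "anomalous" 2009 reading

print(trend_break_resid(log_y, years, 2005.0))  # small: break matches the data
print(trend_break_resid(log_y, years, 2009.0))  # larger: break is misplaced
```

On this constructed series a 2005 break wins by construction; the point is only that the two readings are, in principle, distinguishable by which observations they force you to call measurement error.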

An alternative approach to the data is to use Okun’s Law–to assume cointegration between potential GDP, actual real GDP, and either the unemployment rate or the employment-to-population ratio, with an Okun’s Law coefficient of 1.25 for the employment-to-population ratio (a 1%-point fall in the employment-to-population ratio reduces real GDP below potential by an extra 1.25%) and of 2.0 for the unemployment rate (a 1%-point rise in the unemployment rate reduces real GDP below potential by an extra 2.0%). This assumption allows us to construct potential output series from the employment-to-population ratio (assuming that full employment-to-population is 63%) and from the unemployment rate (assuming that full employment is attained at a 5% unemployment rate):
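The construction just described can be sketched in a few lines; the GDP and labor-market numbers below are hypothetical stand-ins, not FRED data:

```python
def potential_from_unemployment(actual_gdp, unemployment_rate,
                                natural_rate=5.0, okun_coef=2.0):
    """Okun's Law potential: each 1%-point of unemployment above the natural
    rate puts actual GDP okun_coef percent below potential."""
    gap_pct = okun_coef * (unemployment_rate - natural_rate)  # % below potential
    return actual_gdp / (1 - gap_pct / 100)

def potential_from_epop(actual_gdp, epop, full_epop=63.0, okun_coef=1.25):
    """Same construction from the employment-to-population ratio: each 1%-point
    shortfall from full_epop puts actual GDP okun_coef percent below potential."""
    gap_pct = okun_coef * (full_epop - epop)
    return actual_gdp / (1 - gap_pct / 100)

# Hypothetical 2010-style numbers (illustrative only):
gdp = 14.5  # trillions
print(potential_from_unemployment(gdp, unemployment_rate=9.5))  # implies a 9% gap
print(potential_from_epop(gdp, epop=58.5))                      # implies a 5.625% gap
```

With a 9.5% unemployment rate the implied gap is 2.0 × 4.5 = 9% of potential; with a 58.5% employment-to-population ratio it is 1.25 × 4.5 = 5.625%–which is why the two constructed series can tell different stories.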

[Figure: Civilian employment-population ratio, FRED, St. Louis Fed]

Both the employment-to-population ratio-based and the unemployment rate-based Okun’s Law potential output series tell the same story about the output gap and the business cycle from 1990 to 2010, and it is a reasonable story: the recession of the early 1990s and the subsequent “jobless recovery” produced an output gap that was not especially large but was long-lasting; productivity growth accelerated in the late 1990s as the high-tech sector reached critical mass; the economy “overheated”, with output above potential, in 2000 and 2001; small but persistent output gaps opened in the first half of the 2000s; production ran at more-or-less potential from 2005 through 2007; and then came the collapse and the enormous output gaps of the Lesser Depression in 2008-2009. Starting in 2010, however, the stories the two measures tell diverge markedly. The unemployment rate-based Okun’s Law potential output series tells the same story as the first reading of Fernald’s potential series above: a collapse in potential output growth due to hysteresis starting after 2009. The employment-to-population ratio-based Okun’s Law potential output series tells us that potential has continued at its pre-2008 pace: that if only we could get employment back up to 63% of adults we would find no shift in potential growth at all (a story that, given the aging of the population, is surely too optimistic).

Fernald’s preferred story–the second reading above, of a growth slowdown starting in 2005–can also be seen in the Okun’s Law potential output estimates: you can draw a line from 2005 potential through 2007 potential and extend it to reach today’s unemployment rate-based Okun’s Law potential output estimate:

[Figure: Civilian employment-population ratio, FRED, St. Louis Fed]

But such a procedure magnifies the 2009-2010 anomaly noted in the discussion of the second reading above.

Once again, what you conclude depends very much on what priors you started out with.

The magnitude of the analytical puzzle we face is well expressed in Fernald’s Figure 1:

[Figure 1 from Fernald, FRBSF Working Paper 2014-15]

which shows well the collapse in total factor productivity growth and the reduction in capital deepening when one compares 1948-1973 with 1973-1995, and the subsequent return of total factor productivity growth and of (real) capital deepening over 1995-2003. And then the mystery of what happened afterwards. We have stories–quantitatively inadequate stories, I agree–of what turned 1948-1973 into 1973-1995. But at least we have stories, and we have a lot of them. We have a story–the Byrne-Oliner-Sichel story–of how the high-tech sector attained critical mass, both for total factor productivity growth and for real capital-deepening channels, in the mid-1990s, and so a pretty good explanation of what turned 1973-1995 into 1995-2003.

But after 2003?

We didn’t have a collapse in the high-tech sector. The engine of Silicon Valley continues to hum and purr much as it had before. We do have the standard problems of measurement and appropriability. If you want to get all techno-utopian (and I do), your estimates of real economic growth should take account of a distinction: the extra consumer surplus derived from the production of rival and excludible goods might be thought of as roughly equal to their GDP-account value, but the extra consumer surplus derived from the production of non-rival and not-very-excludible goods is a much larger multiple of their GDP-account value–five times? ten times? The reason is that the eyeballs and ancillary services that producers like Google sell to get their revenue are worth much less to those who pay for them than the free commodities Google gives away to create those eyeballs and ancillary services–which are what Google really makes–are worth to those who benefit from them. But did those problems suddenly become bigger after 2003? The appropriability gap had always been there: it did not emerge after 2003.
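The techno-utopian adjustment is just arithmetic; a sketch with hypothetical numbers (the 2% non-rival share and the 5x-10x multiples below are assumptions for illustration, not measured figures):

```python
def adjusted_surplus(gdp_value, nonrival_share, multiple):
    """Total consumer surplus if rival-good surplus ≈ its GDP-account value
    while non-rival-good surplus is `multiple` × its GDP-account value.
    All inputs are hypothetical, for illustration only."""
    rival = gdp_value * (1 - nonrival_share)
    nonrival = gdp_value * nonrival_share * multiple
    return rival + nonrival

# If 2% of measured GDP is non-rival output valued at 5x-10x its account value:
print(adjusted_surplus(100.0, 0.02, 5))   # → 108.0 on a GDP = 100 base
print(adjusted_surplus(100.0, 0.02, 10))  # → 118.0
```

Even at a ten-times multiple, a 2% share moves the level of welfare by double digits but the measured growth rate only to the extent that the share itself is growing–which is why the adjustment, while real, does not by itself resolve the post-2003 puzzle.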

Fernald has a very nice graph of what his total factor productivity growth estimates tell him by broad sector of IT intensity: the IT-producing industries, the IT-using industries, and the non-IT-intensive industries:

[Figure from Fernald, FRBSF Working Paper 2014-15: TFP growth by IT-intensity sector]

The picture painted is of (a) nothing happening in non-IT-intensive industries, (b) a large but transitory wave of productivity in the IT-producing industries, and (c) eight years later, an echoing wave of productivity in the IT-using industries, followed by a post-2006 return to growth lower than the previous 1973-1995 normal. But what actual, observable patterns of organization and narratives out here in the real world correspond to these striking moves in the numbers that underpin the GDP and productivity accounts? I have a hard time seeing any of them.

My problem is that I believe in the slow diffusion of technology, the importance of incremental improvements, the usefulness of the incentives provided by the fact that it is easy to make a lot of money by figuring out a cheaper way to produce and supply things that people are willing to pay a lot of money for, and the law of large numbers. These make me think that–modulo the business cycle and measurement error–total factor productivity should be smooth in the level and smooth in the growth rate as well: whatever processes were going on last year that led to invention, innovation, deployment, and thus higher productivity in a potential-output sense ought to be almost as strong, or only a little stronger, this year. An oil shock, the entrance into the labor force of a baby-boom generation, a redirection of investment toward controlling rather than increasing pollution, a decline in union power that makes it worthwhile to redirect investment from increasing the productivity of the workers you have to enabling the value chain to function even should you have to carry through the threat of a mass layoff–all these can produce relatively large shifts in total factor productivity growth relatively quickly. But we should be able to see them. And we do see them in the 1970s and 1980s. I do not see them in the 2000s–do you?
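The law-of-large-numbers intuition above can be illustrated with a purely stylized simulation, assuming many equal-weighted, independent sector-level innovation shocks (an assumption the aggregate data apparently reject):

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_tfp_growth_sd(n_sectors, n_years=1000, sector_sd=0.05):
    """Standard deviation of aggregate TFP growth when each of n_sectors
    contributes an independent shock of std. dev. sector_sd, equal weights."""
    shocks = rng.normal(0.0, sector_sd, size=(n_years, n_sectors))
    aggregate = shocks.mean(axis=1)
    return float(aggregate.std())

# With many independent sectors, aggregate volatility shrinks like 1/sqrt(N):
for n in (1, 25, 400):
    print(n, round(aggregate_tfp_growth_sd(n), 4))
```

Under independence the aggregate standard deviation falls like 1/√N, which is why smooth aggregate TFP growth seemed a safe prior; lumpy general-purpose-technology waves that hit many sectors at once break exactly this independence assumption.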

It is clear that the failure of the numbers that the–extremely smart–John Fernald calculates from the data we have to conform to my prior beliefs about the smoothness of aggregate total factor productivity growth is a problem. It is not clear to me whether it is a problem with me, a problem with whether our data accurately reflects the universe, or a problem with–or rather a fact that–the universe does not conform to my Visualization of the Cosmic All.

I asked John Fernald. And he emailed back his–well-informed–view that it was a problem with my expectations, and a fact about how the world works that I need to wrap my mind around and accept.

John Fernald: Yes, it’s a problem with the universe…

…Start micro. Firms, plants, and even shifts within plants seem to have output that varies in ways only loosely related to inputs. Some of it is measurement challenges, but attempts to control for those don’t make the variation in growth go away. Some of that, in turn, is lumpy adoption of new technology, so it shouldn’t be smooth—but it’s not clear that’s all of it. (E.g., time-varying learning by doing, or a tiny reorganization that makes things work better.)

Moving to macro, it’s reasonable that the law of large numbers would kick in. But no one ever finds that technological progress broadly, in the Solow sense of an aggregate production function, is smooth. Again, some of it is measurement error around the business cycle. But maybe not all of it.

But even if, despite the evidence, technical progress was smooth over the business cycle, it’s not far-fetched that it might differ by decade. All of the G[eneral ]P[urpose ]T[echnology] stories have that flavor.

My Macro Annual paper is mostly about the past decade…. Utilization by any empirical measure is where it was a decade ago. And my informal polling of firms doesn’t suggest a lot of slack within companies—they’ve adjusted to the weakness of demand by reducing headcounts and capacity. So you might be offended by the [sharp, sudden] changes in trend, but they’re in the data [and in the world out there].

