Lunchtime Must-Read: Paul Krugman: The Temporary-Equilibrium Method

Paul Krugman: The Temporary-Equilibrium Method: “David Glasner has some thoughts… that I mostly agree with, but not entirely. So, a bit more…

Glasner is right to say that the Hicksian IS-LM analysis comes most directly not out of Keynes but out of Hicks’s own Value and Capital, which introduced the concept of ‘temporary equilibrium’… using quasi-static methods to analyze a dynamic economy… simply as a tool…. So is IS-LM really Keynesian? I think yes–there is a lot of temporary equilibrium in The General Theory, even if there’s other stuff too. As I wrote in the last post, one key thing that distinguished TGT from earlier business cycle theorizing was precisely that it stopped trying to tell a dynamic story…. The real question is whether the method of temporary equilibrium is useful. What are the alternatives? One… is to do intertemporal equilibrium all the way… DSGE–and I think Glasner and I agree that this hasn’t worked out too well…. Economists who never learned temporary-equilibrium-style modeling have had a strong tendency to reinvent pre-Keynesian fallacies (cough-Say’s Law-cough), because they don’t know how to think out of the forever-equilibrium straitjacket…. Disequilibrium dynamics all the way?… I have never seen anyone pull this off…. Hicks… often seems to hit a sweet spot between rigorous irrelevance and would-be realism that ends up being just confused…. Glasner says that temporary equilibrium must involve disappointed expectations, and fails to take account of the dynamics that must result as expectations are revised…. I’m not sure that this is always true. Hicks did indeed assume static expectations… but in Keynes’s vision of an economy stuck in sustained depression, such static expectations will be more or less right. It’s true that you need some wage stickiness to explain what you see… but that isn’t necessarily about false expectations…. In the end, I wouldn’t say that temporary equilibrium is either right or wrong; what it is, is useful…

Lunchtime Must-Read: David M. Byrne, Stephen D. Oliner, and Daniel E. Sichel: Is the Information Technology Revolution Over?

David M. Byrne, Stephen D. Oliner, and Daniel E. Sichel: Is the Information Technology Revolution Over?: “Given the slowdown in labor productivity growth in the mid-2000s…

some have argued that the boost to labor productivity from IT may have run its course. This paper contributes three types of evidence to this debate. First, we show that since 2004, IT has continued to make a significant contribution to labor productivity growth in the United States, though it is no longer providing the boost it did during the productivity resurgence from 1995 to 2004. Second, we present evidence that semiconductor technology, a key ingredient of the IT revolution, has continued to advance at a rapid pace and that the BLS price index for microprocessors may have substantially understated the rate of decline in prices in recent years. Finally, we develop projections of growth in trend labor productivity in the nonfarm business sector. The baseline projection of about 1¾ percent a year is better than recent history but is still below the long-run average of 2¼ percent. However, we see a reasonable prospect — particularly given the ongoing advance in semiconductors — that the pace of labor productivity growth could rise back up to or exceed the long-run average. While the evidence is far from conclusive, we judge that ‘No, the IT revolution is not over’.

Financing the rise in income inequality

High salaries and other forms of compensation in the finance sector aren’t a surprise to anyone. Even before the 2008 financial crisis, the high pay of bankers on Wall Street and in “the City” was a source of contention. But the causes behind the large wage premium aren’t as apparent. A new paper documenting trends in pay in finance sheds light on why pay is so high.

The paper, by Joanne Lindley of King’s College London and Steven McIntosh of the University of Sheffield, is summarized by the authors in a column for VoxEU. The two economists look at the data for the finance industry for the United Kingdom and especially the City, the hub of financial activity in London.

First, the authors show just how large the wage premium, or the gain for a worker moving into the industry, is for the U.K. finance industry. They find that, on average, workers see their wages increase by 37 percent when moving to the finance industry from a non-finance industry. The work of the employee didn’t change. The pay premium is based solely on working in the finance industry. Importantly, this premium has been increasing over time, rising by 57 percent from 1997 to 2011.

The authors then test to see whether the wage premium is the result of the industry hiring employees from more highly paid occupations. They find that the premium exists up and down the occupational distribution: finance executives make more than non-finance executives, and customer service representatives for financial firms make more than representatives for non-financial firms.

Lindley and McIntosh offer three possible explanations for the high and rising finance pay premium. The first is skill intensity. Finance firms might just employ workers with more skills. The authors find that the industry employs more skilled workers and that this may contribute to the size of the premium. Yet the skill intensity doesn’t appear to have changed over time even though the premium has. So this explanation is incomplete.

The same goes for so-called skill-biased technological change, the idea that increasing wage inequality is due to increased demand for skilled labor. The finance sector might employ more workers with non-routine skills that can’t be replaced with technology. But that also hasn’t changed much over time, according to data analyzed by the authors.

The authors’ preferred explanation is the firm sharing “rents” with its workers—rents in this case meaning excessive profits made above what the industry would have made in a more competitive marketplace. Financial deregulation beginning in the late 1970s through the mid-2000s created opportunities to extract these rents from other industries in need of the finance industry’s services. And then those rents were shared with workers up and down the financial industry’s pay ladder, resulting in the large premium for finance-industry workers.

Lindley and McIntosh’s data are only for the United Kingdom, but the premium exists in the United States as well. A 2012 paper by economists Thomas Philippon of New York University and Ariell Reshef of the University of Virginia finds a large and rising finance-sector pay premium. The average premium is 50 percent over non-finance sectors, but 250 percent for executives in the finance industry over their counterparts in other sectors.

The large premium for finance workers, particularly executives, results in the industry’s overrepresentation at the very top of the income ladder. In 2005, about 14 percent of taxpayers in the top 1 percent and 18 percent of those in the top 0.1 percent were employed in the finance industry, compared to just under 8 percent of the top 1 percent and 11 percent of the top 0.1 percent in 1979.

In light of the 2008 financial crisis, the economic value of the U.S. finance industry as currently constituted has been called into question. The fact that new research shows that the high pay in the financial services sector doesn’t come from economic efficiency but rather from excessive rents is another point in the case for policies that might redress the imbalances in pay. Policymakers should consider reining in pay packages in the finance sector because this step could well increase economic stability and efficiency and help reduce inequality, too.

Potential Output and Total Factor Productivity since 2000: Marking My Beliefs to Market: The Honest Broker for the Week of September 26, 2014

I am still thinking about the best assessment of potential output and productivity growth that we have–that of the extremely-sharp John Fernald’s “Productivity and Potential Output Before, During, and After the Great Recession”. And I am–slowly, hesitantly, and unwillingly–coming to the conclusion that I have to mark my beliefs about the process of economic technological change to market, and revise them significantly.

Let’s start with what I wrote last July:

Brad DeLong: I Draw a Different Message from John Fernald’s Calculations than He Does…: John Fernald….

U.S. labor and total-factor productivity growth slowed prior to the Great Recession. The timing rules out explanations that focus on disruptions during or since the recession, and industry and state data rule out ‘bubble economy’ stories related to housing or finance. The slowdown is located in industries that produce information technology (IT) or that use IT intensively…

But when I look at this graph:

[Graph: Fernald’s potential-output series]

I see, from 2003:I to 2007:IV, a healthy growth rate of 3.2%/year according to Fernald’s potential-output series. Then after 2007:IV the growth rate of Fernald’s potential-output series slows to 1.45%/year. The slowdown from the late 1990s era of the internet boom to the pace of potential output growth prior to the Lesser Depression is small potatoes relative to the slowdown that has occurred since. Thus I question Fernald’s claim that the “timing rules out explanations that focus on disruptions during or since the recession”. As I see it, the timing is perfectly consistent with:

  • a small slowdown in potential output growth that starts in the mid-2000s as the tide of the infotech revolution starts to ebb, and
  • a much larger slowdown in potential output growth with the financial crisis, the Lesser Depression, and the jobless recovery that has followed since.
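
The growth-rate arithmetic behind those two numbers is simple to reproduce. A minimal sketch in Python, with invented index levels (Fernald’s actual series is not reproduced here; the levels are chosen only to show the calculation):

```python
import math

def annualized_growth(start_level, end_level, n_quarters):
    """Continuously compounded annual growth rate between two quarterly observations."""
    return math.log(end_level / start_level) * 4.0 / n_quarters

# 2003:I to 2007:IV spans 19 quarters. These index levels are made up,
# picked so the pre-2008 segment grows at roughly 3.2%/year.
level_2003q1 = 100.0
level_2007q4 = level_2003q1 * math.exp(0.032 * 19 / 4.0)

rate = annualized_growth(level_2003q1, level_2007q4, 19)  # recovers 3.2%/year
```

The same function applied to the post-2007:IV segment of the series would deliver the slower 1.45%/year pace.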

I say this with considerable hesitancy and some trepidation. After all, John Fernald knows and understands these data considerably better than I do. Perhaps it is simply that I spend too much time down in Silicon Valley and so cannot believe that the fervor of invention and innovation that I see there does not have large positive macroeconomic consequences.

Nevertheless, I have come to believe that macroeconomists’ assumption that the trend is separate from and independent of the cycle is playing them false. That assumption was introduced for analytical convenience and because it seemed true enough for a first cut. I see no reason to imagine that it is still true.

That’s what I said last July. And now I have been trying to think about it some more…

There are actually two ways to read Fernald’s potential series. The first–the one I gravitated to–is this:

[Graph: Fernald’s potential-output series, first reading (“I Draw a Different Message from John Fernald’s Calculations than He Does,” Washington Center for Equitable Growth, July 17, 2014)]

More-or-less smooth growth from 2003 to 2009, with a serious slowdown in potential output growth starting in 2009 as serious hysteresis from the Lesser Depression hits the long-run growth potential of the American economy.

The second way to look at it is this:

[Graph: Fernald’s potential-output series, second reading (“I Draw a Different Message from John Fernald’s Calculations than He Does,” Washington Center for Equitable Growth, July 17, 2014)]

A sharp drop in the rate of potential output growth starting in 2005, with the Lesser Depression having had–so far–little negative hysteretic effect on potential output growth: basically, that another 1970s-magnitude productivity growth slowdown hit the U.S. economy in 2005, and that even without the financial crisis and the Lesser Depression we would today be more-or-less where we in fact are.

The second reading of the time series has to take the 2009 potential output estimate as an anomaly: a measurement error. The first reading of the time series has to take 2005 and 2008 as (smaller) measurement-error anomalies. The first reading suggests enormous headroom for cyclical recovery provided we can overcome effects of hysteresis. The second reading suggests that there is no such headroom–that what we have is only a little bit less than the best we can reasonably expect.

An alternative approach to the data is to use Okun’s Law–to assume cointegration between potential GDP, actual real GDP, and either the unemployment rate or the employment-to-population ratio, with an Okun’s Law coefficient of 1.25 for the employment-to-population ratio (a 1-percentage-point fall in the employment-to-population ratio reduces real GDP below potential by an extra 1.25%) and of 2.0 for the unemployment rate (a 1-percentage-point rise in the unemployment rate reduces real GDP below potential by an extra 2.0%). This assumption allows us to construct potential output series for the employment-to-population ratio (assuming that full employment-to-population is 63%) and for the unemployment rate (assuming that full employment is attained at a 5% unemployment rate):
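
The construction just described is mechanical enough to sketch in a few lines of Python. The Okun coefficients (1.25 and 2.0) and the full-employment benchmarks (63% employment-to-population, 5% unemployment) come from the assumptions above; the GDP and labor-market readings below are invented purely for illustration:

```python
OKUN_EPOP = 1.25   # each 1-point shortfall in employment-to-population: GDP 1.25% below potential
OKUN_U = 2.0       # each 1-point excess in the unemployment rate: GDP 2.0% below potential
FULL_EPOP = 63.0   # assumed full-employment employment-to-population ratio (%)
FULL_U = 5.0       # assumed full-employment unemployment rate (%)

def potential_from_epop(actual_gdp, epop):
    """Potential output implied by the employment-to-population ratio."""
    gap_pct = OKUN_EPOP * (FULL_EPOP - epop)   # percent by which GDP sits below potential
    return actual_gdp / (1.0 - gap_pct / 100.0)

def potential_from_unemployment(actual_gdp, u):
    """Potential output implied by the unemployment rate."""
    gap_pct = OKUN_U * (u - FULL_U)
    return actual_gdp / (1.0 - gap_pct / 100.0)

# Invented example: actual GDP of 16.0 (trillions), EPOP of 59%, unemployment of 7.5%.
# Both measures happen to imply the same 5% gap here, so both give potential of about 16.84.
pot_epop = potential_from_epop(16.0, 59.0)
pot_u = potential_from_unemployment(16.0, 7.5)
```

Run over the actual FRED series quarter by quarter, the two functions generate the two Okun’s Law potential series discussed below; they diverge whenever the unemployment rate and the employment-to-population ratio tell different stories about slack.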

[Graph: Civilian Employment-Population Ratio (FRED, Federal Reserve Bank of St. Louis)]

Both the employment-to-population ratio-based and the unemployment rate-based Okun’s Law potential output series tell the same story about the output gap and the business cycle from 1990 to 2010, and it is a reasonable story: the recession of the early 1990s and the subsequent “jobless recovery” producing an output gap that was not especially large but that was long-lasting; productivity-growth acceleration in the late 1990s as the high-tech sector reaches critical mass; “overheating” with output above potential in 2000 and 2001; small but persistent output gaps in the first half of the 2000s; growth with production at more-or-less potential output from 2005 through 2007; the collapse and the emergence of the enormous output gaps of the Lesser Depression in 2008-2009. And then, starting in 2010, the stories the two measures tell diverge markedly. The unemployment rate-based Okun’s Law potential output series tells the same story as Fernald potential interpretation (1) above: of a collapse in potential output growth due to hysteresis starting after 2009. The employment-to-population ratio-based Okun’s Law potential output series tells us that potential has continued at its pre-2008 pace: that if only we could get employment back up to 63% of adults we would find no shift in potential growth at all (a story that, given the aging of the population, is surely too optimistic).

The preferred-Fernald (2) story of a growth slowdown starting in 2005 can be seen in the Okun’s Law potential output estimates: you can draw a line from 2005 potential through 2007 potential and extend it to get to today’s unemployment rate-based Okun’s Law potential output estimate:

[Graph: Civilian Employment-Population Ratio (FRED, Federal Reserve Bank of St. Louis)]

But such a procedure magnifies the 2009-2010 anomaly noted in the discussion of the preferred-Fernald (2) story above.

Once again, what you conclude depends very much on what priors you started out with.

The magnitude of the analytical puzzle we face is well expressed in Fernald’s Figure 1:

[Fernald, Figure 1 (FRBSF Working Paper 2014-15)]

which shows well the collapse in total factor productivity and the reduction in capital deepening comparing 1948-1973 with 1973-1995, and the subsequent return of total factor productivity growth and of (real) capital deepening over 1995-2003. And then the mystery of what happened afterwards. We have stories–quantitatively inadequate stories, I agree–of what happened to turn 1948-1973 to 1973-1995. But at least we have stories, and we have a lot of them. We have a story–the Byrne-Oliner-Sichel story–of how the high-tech sector attained critical mass both for total factor productivity growth and for real capital-deepening channels in the mid-1990s, and so a pretty good explanation of what turned 1973-1995 into 1995-2003.

But after 2003?

We didn’t have a collapse in the high-tech sector. The engine of Silicon Valley continues to hum and purr much as it had before. We do have the standard problems of measurement and appropriability. If you want to get all techno-utopian (and I do), your estimates of real economic growth should take account of the fact that while the extra consumer surplus derived from the production of rival and excludible goods might be thought of as roughly equal to the GDP-account value, the extra consumer surplus derived from the production of non-rival and not-very-excludible goods is a much larger multiple of the GDP-account value–five times? ten times? The reason: the eyeballs and the ancillary services that producers like Google sell to get their revenue are worth much less to those who pay for them than the free commodities Google gives away to create those eyeballs and ancillary services–the free commodities that are what Google really makes–are worth to those who benefit from them. But did those problems suddenly become bigger after 2003? The appropriability crisis had always been there: it did not emerge after 2003.

Fernald has a very nice graph of what his total factor productivity growth estimates tell him by broad tech-intensivity sector: the IT-producing industries, the IT-using industries, and the non-IT-intensive industries:

[Graph: Fernald’s TFP growth estimates by sector (FRBSF Working Paper 2014-15)]

The picture painted is of (a) nothing happening in non-IT-intensive industries, (b) a large but transitory wave of productivity growth in the IT-producing industries, and (c) eight years later an echoing wave of productivity growth in the IT-using industries, followed by a post-2006 return to growth below the previous 1973-1995 normal. But what actual, observable patterns and stories of organization out here in the real world correspond to these striking moves in the numbers that underpin the GDP and productivity accounts? I have a hard time seeing any of them.

My problem is that I believe in the slow diffusion of technology, the importance of incremental improvements, the usefulness of the incentives provided by the fact that it is easy to make a lot of money by figuring out a cheaper way to produce and supply things that people are willing to pay a lot of money for, and the law of large numbers. These make me think that–modulo the business cycle and measurement error–total factor productivity should be smooth in the level and smooth in the growth rate as well: whatever processes were going on last year that led to invention, innovation, deployment, and thus higher productivity in a potential-output sense ought to be almost as strong or only a little stronger this year. An oil shock, the entrance into the labor force of a baby-boom generation, a redirection of investment to controlling rather than increasing pollution, a decline in union power that makes it worthwhile to redirect investment from increasing the productivity of the workers you have to enabling the value chain to function even should one have to carry through the threat of a mass layoff–all these ought to be able to produce relatively large shifts in total factor productivity growth relatively quickly. But such shocks should be visible. And we do see them in the 1970s and 1980s. I do not see them in the 2000s. Do you?
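
The law-of-large-numbers intuition in that paragraph can be made concrete with a toy simulation: if aggregate TFP growth were just the average of many loosely related firm-level growth rates, it would be far smoother than any individual firm’s. Every parameter below is invented purely for illustration:

```python
import random

random.seed(0)

N_FIRMS = 10_000
N_YEARS = 20
TREND = 0.015    # assumed common TFP growth trend: 1.5%/year
FIRM_SD = 0.05   # assumed firm-level growth volatility: 5%/year

# Aggregate TFP growth each year = the average of independent firm-level growth rates.
aggregate_growth = []
for _ in range(N_YEARS):
    firm_growth = [random.gauss(TREND, FIRM_SD) for _ in range(N_FIRMS)]
    aggregate_growth.append(sum(firm_growth) / N_FIRMS)

# With independent shocks, the aggregate's standard deviation is FIRM_SD / sqrt(N_FIRMS),
# i.e. 0.0005 here: the aggregate hugs the 1.5% trend year after year.
```

Fernald’s point in the exchange that follows is, in effect, that the real world refuses to behave like this toy: firm-level shocks are correlated, technology adoption is lumpy, and aggregate TFP growth shifts by decade.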

It is clear that the failure of the numbers that the–extremely smart–John Fernald calculates from the data we have to conform to my prior beliefs about the smoothness of aggregate total factor productivity growth is a problem. It is not clear to me whether it is a problem with me, a problem with whether our data accurately reflects the universe, or a problem with–or rather a fact that–the universe does not conform to my Visualization of the Cosmic All.

I asked John Fernald. And he emailed back his–well-informed–view that it was a problem with my expectations, and a fact about how the world works that I need to wrap my mind around and accept.

John Fernald: Yes, it’s a problem with the universe…

…Start micro. Firms, plants, and even shifts within plants seem to have output that varies in ways only loosely related to inputs. Some of it is measurement challenges, but attempts to control for those don’t make the variation in growth go away. Some of that, in turn, is lumpy adoption of new technology, so it shouldn’t be smooth—but it’s not clear it’s all of it. (E.g., time-varying learning by doing, or a tiny reorganization that makes things work better.)

Moving to macro, it’s reasonable that the law of large numbers would kick in. But no one ever finds that technological progress broadly, in the Solow sense of an aggregate production function, is smooth. Again, some of it is measurement error around the business cycle. But maybe not all of it.

But even if, despite the evidence, technical progress was smooth over the business cycle, it’s not far-fetched that it might differ by decade. All of the G[eneral ]P[urpose ]T[echnology] stories have that flavor.

My Macro Annual paper is mostly about the past decade…. Utilization by any empirical measure is where it was a decade ago. And my informal polling of firms doesn’t suggest a lot of slack within companies—they’ve adjusted to the weakness of demand by reducing headcounts and capacity. So you might be offended by the [sharp, sudden] changes in trend, but they’re in the data [and in the world out there].



Morning Must-Read: Andy Jalil: A New History of Banking Panics in the United States, 1825–1929: Construction and Implications

Andy Jalil: A New History of Banking Panics in the United States, 1825–1929: Construction and Implications: “There are two major problems in identifying the output effects…

…of banking panics of the pre-Great Depression era. First, it is not clear when panics occurred because prior panic series differ in their identification of panic episodes. Second, establishing the direction of causality is tricky. This paper addresses these two problems (1) by deriving a new panic series for the 1825-1929 period and (2) by studying the output effects of major banking panics via vector autoregression (VAR) and narrative-based methods. The new series has important implications for the history of financial panics in the United States.

Things to Read on the Morning of September 22, 2014

Must- and Shall-Reads:

 

  1. Max Sawicky: Economic security and “the great disturbing factors of life”: “Steve Randy Waldman of the interfluidity blog pulls me back into Universal Basic Income (UBI) land…. I share his foreboding of a political future without a labor movement. It’s unpleasant to imagine how bad things could get, even aside from that whole destruction of the planet thing. In troubled times, there is a natural conflict between trying to preserve old, embattled forms of social protection and casting about for new, more viable ones. In general I have no problem with providing unconditional cash money to the poor rather than in-kind benefits. The problem of course is that we have in-kind benefits for food and housing because of the historic, political weakness of free-standing cash assistance. So we need a political environment that would be conducive to some kind of conversion. The Supplemental Nutrition Assistance Program (SNAP), formerly known as “Food Stamps,” got its political boost from agri-business interests, support which is waning…. I believe SRW’s characterization of the libertarian impulse is wrong. At its root I would say is not some desire for minimal bureaucracy and free choice, but a drive to drown a whittled-down welfare state in the bathtub. If you don’t like bureaucracy, try not to spend much time dealing with private health insurance companies. The Koch-fueled libertarians use UBI to trash existing programs and advocate a wholesale trade. Big government for all its flaws provides some measure of protection from predators that abound in the private sector…. Steve claims the UBI is a bridge from the U.S. to welfare states that are more effective in addressing poverty. By this criterion the U.S. certainly ranks comparatively low. The question is where such a bridge would lead. Existing, more effective welfare states are built on big social insurance, not UBIs…”

  2. Jeffrey Frankel: Piketty’s Fence: “One could just as easily find other a priori grounds for reasoning that countervailing forces might kick in if things get bad enough. Democracy is one such force. Progressive taxation arose in the 20th century, following the excesses of the Belle Époque. A political trend of that sort could recur in this century if the gap between rich and poor continues to grow. A few years ago, American voters and politicians were persuaded to reduce federal taxes on capital income and estates. They phased out the estate tax completely (effective in 2010), even though this would benefit only the upper 1 per cent. This is standardly viewed as an example of the rich manipulating the political economy for their own benefit. Indeed, we know that campaign contributions buy some very effective advertising. But imagine that in the future we lived in a Piketty world, a return to the golden age of Austen and Balzac, where inheritance and unearned income were the sources of stratospheric income inequality. Would a majority of the 99% still be persuaded to vote against their self-interest?”

  3. Rob Stavins: Climate Realities: “In theory, we can avoid the worst consequences of climate change with an intensive global effort over the next several decades. But given real-world economic and, in particular, political realities, that seems unlikely…. The world is now on track to more than double current greenhouse gas concentrations in the atmosphere by the end of the century. This would push up average global temperatures by three to eight degrees Celsius and could mean the disappearance of glaciers, droughts in the mid-to-low latitudes, decreased crop productivity, increased sea levels and flooding, vanishing islands and coastal wetlands, greater storm frequency and intensity, the risk of species extinction and a significant spread of infectious disease…. Two points are important to understand if we’re going to be serious about attacking this problem. One, it will be costly…. And two, things become more challenging when we move from the economics to the politics…. If the new technologies we hope will be available aren’t, like one that would enable the capture and storage of carbon emissions from power plants, the cost estimates more than double. Then there are the politics, which are driven by two fundamental facts. First, greenhouse gases mix globally…. Second, some of these heat-trapping gases–in particular, carbon dioxide–remain in the atmosphere for centuries…. Reducing greenhouse gas pollution will require the unalloyed cooperation of at least the 15 countries and one region (the European Union) that together account for about 80 percent of global carbon dioxide emissions…. Making matters more difficult, climate change is essentially unobservable by the public. On a daily basis, we observe the weather, not the climate. This makes it less likely that public opinion will force action the way it did 50 years ago when black smoke rose from industrial smokestacks…”

  4. Robert Skidelsky: Vanguard Scotland?: “Many are now convinced that the current way of organizing our affairs does not deserve such unquestioning allegiance; that the political system has closed down serious debate on economic and social alternatives; that banks and oligarchs rule; and that democracy is a sham. Nationalism promises an escape from the discipline of ‘sensible’ alternatives that turn out to offer no alternative. Nationalists can be divided into… those who genuinely believe that independence provides an exit from a blocked political system, and those who use the threat of it to force concessions from the political establishment. Either way, nationalist politicians enjoy the huge advantage of not requiring a practical program: all good things will flow from sovereignty…. Practically all of Europe’s existing nation-states contain geographically concentrated ethnic, religious, or linguistic minorities. Moreover, these states’ incorporation into the European Union–a kind of voluntary empire–challenges their citizens’ allegiance…. People are more willing to discount nationalism’s costs, because they have come to doubt the benefits of its liberal capitalist rival. Ordinary Russians, for example, refuse to face the costs of their government’s Ukraine policy, not just because they underestimate them, but because they somehow seem unimportant relative to the huge psychological boost the policy brings. Nationalism today is not nearly as virulent as it was in the 1930s, because economic distress is much less pronounced. But its revival is a portent of what happens when a form of politics claims to satisfy every human need except the coziness of communal belonging – and then lets the people down.”

  5. Heidi Moore et al.: Why is Thomas Piketty’s 700-page book a bestseller?: “There’s been a bizarre phenomenon this year… it’s a bestseller…. Why this book? The themes that Piketty brings up have been discussed among progressive economists for decades…. Now, over to the experts…. Stephanie Kelton: What explains the Piketty phenomenon?… The title… doesn’t exactly carry the titillating allure of a bestseller like, say, Fifty Shades of Grey…. The Occupy movement laid the groundwork for a great debate. What was happening to America? Were we witnessing the rise of a plutocracy or the emergence of a meritocracy? Chris Hayes and Joe Stiglitz made the case on the left, while Tyler Cowen and David Brooks provided a counter-narrative for the right… but it was Piketty whose meticulous examination of the evidence seemed to provide the impartial proof audiences were craving. The left was right…. Tyler Cowen: Thomas Piketty’s Capital in the Twenty-First Century has been a hit for several reasons, most notably the quality of the work. But I’d like to focus on a neglected reason why the book has found so much support, namely it appears to strengthen the case for redistribution…. As these issues get processed by the public there is a common attitude–whether justified or not–that many of the lower earners are partially or fully responsible for their own plight. The egalitarians don’t tend to win these policy debates…. If you are an activist who favors lots of redistribution, the Piketty story is a lot easier to tell yourself and to tell your audiences–and that is yet another reason for its popularity…. Emanuel Derman: Economists are the new nuclear physicists, turned to by governments for advice as though they are heirs to the power of the scientists who created Hiroshima…. Though I should, I can’t bring myself to read Thomas Piketty. I wish I could….
I am just spiritually weary of the ubiquitous cockiness of economists, though Piketty sounds as though he’s less guilty of this than most of the pundits in the daily papers…. My gripe with economists is not that their models don’t work well–they don’t, look at the role of central banks in the financial crisis–but that they seem so reluctant to acknowledge the riskiness of their advice. And yet, beware their fearsome unelected power…”

  6. Andy Jalil: A New History of Banking Panics in the United States, 1825–1929: Construction and Implications: “There are two major problems in identifying the output effects of banking panics of the pre-Great Depression era. First, it is not clear when panics occurred because prior panic series differ in their identification of panic episodes. Second, establishing the direction of causality is tricky. This paper addresses these two problems (1) by deriving a new panic series for the 1825-1929 period and (2) by studying the output effects of major banking panics via vector autoregression (VAR) and narrative-based methods. The new series has important implications for the history of financial panics in the United States.”

Should Be Aware of:

 

  1. Eric T. Swanson and John C. Williams: Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates: “The federal funds rate has been at the zero lower bound for over four years, since December 2008. According to standard macroeconomic models, this should have greatly reduced the effectiveness of monetary policy and increased the efficacy of fiscal policy. However, these models also imply that asset prices and private-sector decisions depend on the entire path of expected future short-term interest rates, not just the current level of the overnight rate. Thus, interest rates with a year or more to maturity are arguably more relevant for asset prices and the economy, and it is unclear to what extent those yields have been affected by the zero lower bound. In this paper, we measure the effects of the zero lower bound on interest rates of any maturity by comparing the sensitivity of those interest rates to macroeconomic news when short-term interest rates were very low to that during normal times. We find that yields on Treasury securities with a year or more to maturity were surprisingly responsive to news throughout 2008–10, suggesting that monetary and fiscal policy were likely to have been about as effective as usual during this period. Only beginning in late 2011 does the sensitivity of these yields to news fall closer to zero. We offer two explanations for our findings: First, until late 2011, market participants expected the funds rate to lift off from zero within about four quarters, minimizing the effects of the zero bound on medium- and longer-term yields. Second, the Fed’s unconventional policy actions seem to have helped offset the effects of the zero bound on medium- and longer-term rates.”

  2. Chris Dillow: Capitalism & the low-paid: “Is capitalism compatible with decent living standards for the worst off? This old Marxian question is outside the Overton window, but it’s the one raised by Ed Miliband’s promise to raise the minimum wage to £8 by 2020…. Miliband’s promise… amounts to little better than a pledge that the incomes of the low paid won’t fall in real terms…. This raises the issue: what would serious policies to help the low-paid entail? I suspect they would require: Macroeconomic policies to boost employment…. A serious jobs guarantee; this would help give work to the less skilled, who might not benefit so much from higher aggregate demand alone…. Policies to strengthen the bargaining power of low-paid workers…. Are these policies compatible with capitalism?… This is precisely the debate we should be having. It could be that, whilst the constraints imposed by capitalism are tight, Labour might be over-estimating their tightness, and so will under-deliver for the low-paid.”

  3. Roy Edroso: Rice, Peterson NFL Scandals Really About Liberals’ Plan to Pussify America, Say Rightbloggers: “Even non-fans know that Baltimore Raven Ray Rice was seen cold-cocking his fiancee and Minnesota Viking Adrian Peterson allegedly beat his kid hard enough with a switch to raise welts…. Rightbloggers used the controversies to promote their pet cultural theories: for example, that it’s really liberals, not football players, who beat up women, and that the NFL, which is liberal like all corporations, is being Rice-baited into paying off feminists and sissies who, like liberal sportswriters, just want to ruin America’s Game for conservatives. The most prominent… has come from Rush Limbaugh, who you may remember attacked the NFL as liberal-biased when they wouldn’t let him buy a share of the St. Louis Rams in 2009. Limbaugh has so much yak… we could fill this column with it, but let’s not: The lesser rightbloggers are much more fun. Many of the brethren were outraged that popular sportswriters did not immediately say “what’s the big deal?” but actually acted appalled…. ‘No man has ever hit a woman because she “throws like a girl”‘, point-missed [Ben] Shapiro. ‘But plenty of young men have hit women because they had no moral compass and did not believe in basic concepts of virtue — and plenty of young men lack such a moral compass and belief in virtue thanks to lack of male role models’…. At National Review, Andrew C. McCarthy criticized ‘tendentious “sports journalists”… decidedly left of center… much less guarded about their hostility to conservatives’…. He gave exactly one example of this: ESPN allowed its own correspondent, Kate Fagan, to speak on the issue…. Fagan… wanted the NFL to… [go] into schools… ‘talking to young men about dealing with anger about how they treat women: I think that’s where you’re going to see change… going into the school systems and the younger spaces and really reprogramming how we raise men’. 
This McCarthy took to mean that ‘boys would be instructed that differentiating men from women breeds domestic violence’, and that was ‘how radical ideas–like the Left’s war on boys–get mainstreamed’. He proposed instead that we focus on ‘the breakdown of the family, the scorn heaped on chivalry, the disappearance of manners, and the general coarsening of our society that result from relentless progressive attacks on traditional values and institutions’. If only boys opened doors for girls again, there’d be no need for this reprogramming! (Other key phrases in McCarthy’s column: ‘the Obama Left’s agenda’, ‘ACORN’, ‘Al Sharpton’s National Action Network’, and ‘Alinsky-style community organizing’)…”

  4. Melting Asphalt: Ads Don’t Work That Way: “A lot of ads work simply by raising awareness…. Liquid Draino, for example, is a product that thrives on simple awareness…. Occasionally an ad will attempt overt persuasion…. Perhaps the most important mechanism used by ads (across the ages) is making promises. These promises can be explicit, in the form of a guarantee or warranty, but are more often implicit, in the form of a brand image…. There’s one more honest ad mechanism to discuss… honest signaling…. ‘We’re willing to spend a lot of money on this product. We’re committed to it. We’re putting money where our mouths are.’… But… not every ad is so straightforward and above-board…. Corona wasn’t specifically designed for the beach, nor does ‘beach-worthiness’ emerge from any distinguishing features of Corona…. Cultural imprinting is the mechanism whereby an ad, rather than trying to change our minds individually, instead changes the landscape of cultural meanings — which in turn changes how we are perceived by others when we use a product. Whether you drink Corona or Heineken or Budweiser ‘says’ something about you. But you aren’t in control of that message; it just sits there, out in the world, having been imprinted on the broader culture by an ad campaign. It’s then up to you to decide whether you want to align yourself with it…. The class of products for which this is the case is surprisingly large. Beer, soft drinks, gum, every kind of food (think backyard barbecues). Restaurants, coffee shops, airlines. Cars, computers, clothing. Music, movies, and TV shows (think about the watercooler at work). Even household products send cultural signals, insofar as they’ll be noticed when you invite friends over to your home. Any product enjoyed or discussed in the presence of your peers is ripe for cultural imprinting. For each of these products, an ad campaign seeds everyone with a basic image or message. 
Then it simply steps back and waits–not for its emotional message to take root and grow within your brain, but rather for your social instincts to take over…”
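The Swanson–Williams exercise in item 1 above, comparing the sensitivity of yields to macroeconomic news in normal times versus the zero-lower-bound period, can be sketched with synthetic data. This is an illustrative toy, not the paper’s data or regression specification; the surprise series, coefficient values, and sample sizes are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250  # announcement days per subsample (synthetic)

def simulate_days(beta, n):
    """Standardized news surprises and same-day yield changes (basis points)."""
    surprise = rng.normal(0.0, 1.0, n)
    dyield = beta * surprise + rng.normal(0.0, 4.0, n)
    return surprise, dyield

def sensitivity(surprise, dyield):
    """OLS slope of yield changes on news surprises (constant included)."""
    X = np.column_stack([np.ones_like(surprise), surprise])
    coef, *_ = np.linalg.lstsq(X, dyield, rcond=None)
    return coef[1]

# Pretend medium-term yields respond 5 bp per unit surprise in normal times
# but only 0.5 bp once markets expect the funds rate to stay pinned at zero.
s_norm, dy_norm = simulate_days(5.0, n)
s_zlb, dy_zlb = simulate_days(0.5, n)

b_norm = sensitivity(s_norm, dy_norm)
b_zlb = sensitivity(s_zlb, dy_zlb)
ratio = b_zlb / b_norm  # near 1: unconstrained; near 0: the ZLB binds
```

In the paper’s findings, the analogous sensitivity stays roughly normal through 2008–10 and falls toward zero only from late 2011, which is what motivates their two explanations.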

Morning Must-Read: Rob Stavins: Climate Realities

Rob Stavins: Climate Realities: “In theory, we can avoid the worst consequences of climate change…

…with an intensive global effort over the next several decades. But given real-world economic and, in particular, political realities, that seems unlikely…. The world is now on track to more than double current greenhouse gas concentrations in the atmosphere by the end of the century. This would push up average global temperatures by three to eight degrees Celsius and could mean the disappearance of glaciers, droughts in the mid-to-low latitudes, decreased crop productivity, increased sea levels and flooding, vanishing islands and coastal wetlands, greater storm frequency and intensity, the risk of species extinction and a significant spread of infectious disease…. Two points are important to understand if we’re going to be serious about attacking this problem. One, it will be costly…. And two, things become more challenging when we move from the economics to the politics…. If the new technologies we hope will be available aren’t, like one that would enable the capture and storage of carbon emissions from power plants, the cost estimates more than double. Then there are the politics, which are driven by two fundamental facts. First, greenhouse gases mix globally…. Second, some of these heat-trapping gases–in particular, carbon dioxide–remain in the atmosphere for centuries…. Reducing greenhouse gas pollution will require the unalloyed cooperation of at least the 15 countries and one region (the European Union) that together account for about 80 percent of global carbon dioxide emissions…. Making matters more difficult, climate change is essentially unobservable by the public. On a daily basis, we observe the weather, not the climate. This makes it less likely that public opinion will force action the way it did 50 years ago when black smoke rose from industrial smokestacks…

Morning Must-Read: Jeffrey Frankel: Piketty’s Fence

Jeffrey Frankel: Piketty’s Fence: “one could just as easily find other a priori grounds for reasoning…

…that countervailing forces might kick in if things get bad enough. Democracy is one such force. Progressive taxation arose in the 20th century, following the excesses of the Belle Époque. A political trend of that sort could recur in this century if the gap between rich and poor continues to grow. A few years ago, American voters and politicians were persuaded to reduce federal taxes on capital income and estates. They phased out the estate tax completely (effective in 2010), even though this would benefit only the upper 1 per cent. This is standardly viewed as an example of the rich manipulating the political economy for their own benefit. Indeed, we know that campaign contributions buy some very effective advertising. But imagine that in the future we lived in a Piketty world, a return to the golden age of Austen and Balzac, where inheritance and unearned income were the sources of stratospheric income inequality. Would a majority of the 99 % still be persuaded to vote against their self-interest?

Morning Must-Read: Max Sawicky: Economic Security and “The Great Disturbing Factors of Life”

Max Sawicky: Economic security and “the great disturbing factors of life”: “Steve Randy Waldman of the interfluidity blog pulls me back into Universal Basic Income (UBI) land…

…I share his foreboding of a political future without a labor movement. It’s unpleasant to imagine how bad things could get, even aside from that whole destruction of the planet thing. In troubled times, there is a natural conflict between trying to preserve old, embattled forms of social protection and casting about for new, more viable ones. In general I have no problem with providing unconditional cash money to the poor rather than in-kind benefits. The problem of course is that we have in-kind benefits for food and housing because of the historic, political weakness of free-standing cash assistance. So we need a political environment that would be conducive to some kind of conversion. The Supplemental Nutrition Assistance Program (SNAP), formerly known as “Food Stamps,” got its political boost from agri-business interests, support which is waning…. I believe SRW’s characterization of the libertarian impulse is wrong. At its root I would say is not some desire for minimal bureaucracy and free choice, but a drive to drown a whittled-down welfare state in the bathtub. If you don’t like bureaucracy, try not to spend much time dealing with private health insurance companies. The Koch-fueled libertarians use UBI to trash existing programs and advocate a wholesale trade. Big government for all its flaws provides some measure of protection from predators that abound in the private sector…. Steve claims the UBI is a bridge from the U.S. to welfare states that are more effective in addressing poverty. By this criterion the U.S. certainly ranks comparatively low. The question is where such a bridge would lead. Existing, more effective welfare states are built on big social insurance, not UBIs…
