Must-read: Amos Tversky and Daniel Kahneman (1974): “Judgment under Uncertainty: Heuristics and Biases”

Must-Read: Amos Tversky and Daniel Kahneman (1974): Judgment under Uncertainty: Heuristics and Biases: “Many decisions are based on beliefs concerning the likelihood of future events…

…These beliefs are usually expressed in statements such as ‘I think that…’, ‘the chances are…’, ‘it is unlikely that…’, and so forth. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. What determines such beliefs? How do people assess the probability of an uncertain event or the value of an uncertain quantity?… People rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors…

Live at Project Syndicate: “Rescue Helicopters for Stranded Economies”

Live at Project Syndicate: Rescue Helicopters for Stranded Economies: BERKELEY – For countries where nominal interest rates are at or near zero, fiscal stimulus should be a no-brainer…. Some point to the risk that, once the economy recovers and interest rates rise, governments will fail to make the appropriate adjustments to fiscal policy. But… governments that wish to pursue bad policies will do so no matter what decisions are made today…. Aversion to fiscal expansion reflects raw ideology, not pragmatic considerations…. This debate is no longer an intellectual discussion–if it ever was. As a result, a flanking move might be required. It is time for central banks to assume responsibility and implement ‘helicopter money’… **Read MOAR at Project Syndicate**

Weekend reading: “Participating in the labor market” edition

This is a weekly post we publish on Fridays with links to articles that touch on economic inequality and growth. The first section is a round-up of what Equitable Growth has published this week and the second is work we’re highlighting from elsewhere. We won’t be the first to share these articles, but we hope by taking a look back at the whole week, we can put them in context.

Equitable Growth round-up

This week we published the third installment in our “Equitable Growth in Conversation” series. In this interview, Ben Zipperer talks to David Card and Alan Krueger about advances in empirical techniques in labor economics among other topics.

Increasing concern about a lack of competition in the U.S. economy has most people thinking about how to reduce consolidation among companies. But we shouldn’t ignore the importance of competition in the labor market.

U.S. labor force participation has changed quite a bit over the past 40 years. But as workers have entered and exited the labor force, where have they ended up? And how have these trends changed across the income distribution? Check out our interactive graph to find out.

After being in the wilderness for a few decades, the idea of a universal basic income is making a bit of a comeback. While there isn’t a lot of recent research on the idea, it’s worth thinking through some of the program’s potential effects.

In their interview, Card and Krueger talk about advances in techniques to draw out the causal effect of programs and economic shocks. These advances are important and quite interesting, but there is no gold standard when it comes to the techniques.

Links from around the web

Productivity growth has been quite weak in the wake of the Great Recession. Toby Nangle argues that the lack of strong productivity gains may mean the global economy faces a “new impossible trinity”—unless stronger labor bargaining power actually starts to boost productivity. [ft alphaville]

The decline of the middle class in the United States has a number of causes. But the decline of government jobs in recent years looks to be a significant contributor, especially for the black middle class. Annie Lowrey dives into the subject. [nyt magazine]

When it comes to U.S. government debt, the public and policymakers are usually concerned that there’s too much of it. But Narayana Kocherlakota makes a good argument that there should actually be more government debt to help the global economy. [bloomberg view]

The source of low measured productivity growth in the United States is far from clear. And there are a number of hypotheses for why it’s happening. Neil Irwin runs through three scenarios: the depressing, the neutral, and the happy. [the upshot]

Gross domestic product, probably the most cited economic statistic, has come under attack in recent years. A number of researchers, policymakers, and advocates have argued that the statistic is outdated, doesn’t accurately represent changes in living standards, and doesn’t measure economic output well. The Economist joins the skeptical choir. [the economist]

Friday figure

Figure from “What explains the rise in income inequality at the top of the income distribution?” by Matt Markezich

When it comes to causality, no one technique should have all that power

Economist Alan Krueger speaks at the 2014 Fiscal Summit in Washington.

In the most recent interview in our “Equitable Growth in Conversation” series, our own Ben Zipperer talks with David Card of the University of California, Berkeley and Alan Krueger of Princeton University. The interview covers a number of areas, but it centers on Card and Krueger’s role in advancing empirical methods that help show causality. As you’ve probably heard a few hundred times in your life, correlation doesn’t imply causation. But the two economists sparked thinking about how other researchers could show the actual causal impact of a new policy or a shock to the economy—and this thinking is now a key part of the profession.

When it comes to showing causation in a hard science like physics or chemistry, the path forward is relatively well trodden: Set up an experiment that controls for every factor except the one whose impact you want to understand. But that’s not exactly possible in a social science like economics. You might want to understand what happens to economic growth when you implement a new policy, but it’s really hard to sort out the impact of everything else going on in the economy.

But researchers, like Card and Krueger, have figured out ways to tease out causality—one of which is a natural experiment. Think of Card and Krueger’s famous study on the minimum wage, for example. They used the fact that the minimum wage was going up in New Jersey but not Pennsylvania, and then looked at what happened to employment at fast food restaurants along the border in both states. The intent of raising the minimum wage wasn’t to cause an experiment, but the two economists used it as such.
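The logic of that natural experiment can be sketched as a difference-in-differences calculation. The numbers below are invented for illustration; they are not Card and Krueger’s actual estimates:

```python
# Toy difference-in-differences sketch. "Treatment" = New Jersey,
# where the minimum wage rose; "control" = Pennsylvania, where it
# did not. All numbers are hypothetical.

# Average fast-food employment per store, before and after the increase.
nj_before, nj_after = 20.0, 21.0   # treated state
pa_before, pa_after = 23.0, 21.0   # control state

nj_change = nj_after - nj_before   # +1.0
pa_change = pa_after - pa_before   # -2.0

# The difference in those differences is the estimated causal effect,
# under the assumption that both states would otherwise have followed
# parallel employment trends.
did_estimate = nj_change - pa_change

print(f"Diff-in-diff estimate: {did_estimate:+.1f} jobs per store")  # +3.0
```

The key identifying assumption is parallel trends: absent the policy change, employment in both states would have moved together.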

Another way to get at a causal effect is to use an instrumental variable. Let’s say you’re trying to understand how the amount of schooling a person has affects the wages he or she will earn. You’ll run into the problem that some factors, such as innate talent, affect both how long a person attends school and how much they might earn. So how would you tease out just the effect of an extra year of school? You could find something that’s strongly correlated with the amount of time spent in school but not correlated at all with the other factors—say, talent—that might also affect wages, and then see how that instrument affects how much a person earns.
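Here is a minimal simulation of that logic. The data-generating process is entirely made up: unobserved “talent” confounds schooling and wages, while the instrument shifts schooling only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
talent = rng.normal(size=n)       # unobserved confounder
instrument = rng.normal(size=n)   # affects schooling, not wages directly

schooling = instrument + talent + rng.normal(size=n)
wages = 2.0 * schooling + 3.0 * talent + rng.normal(size=n)  # true effect: 2.0

# Naive OLS slope is biased upward because talent is omitted.
ols = np.cov(schooling, wages)[0, 1] / np.var(schooling, ddof=1)

# IV (Wald) estimator: scale the instrument's effect on wages by its
# effect on schooling.
iv = np.cov(instrument, wages)[0, 1] / np.cov(instrument, schooling)[0, 1]

print(f"OLS: {ols:.2f}, IV: {iv:.2f}")  # OLS lands near 3.0; IV recovers ~2.0
```

The instrument only works if its two defining conditions hold: it must move schooling (relevance) and must affect wages through schooling alone (exclusion).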

A third option is to set up something very close to a real-world experiment, in a technique known as a randomized controlled trial, or RCT. A good example is the well-known Oregon Health Insurance Experiment. In 2008, the state of Oregon had enough money to expand its Medicaid plan but not enough money to expand it to all the people who wanted it. So the state ended up using a lottery to decide who got Medicaid and who didn’t. As the lottery was random, researchers could compare very similar people with and without health insurance knowing that the difference was determined by nothing but chance. That certainty about randomness helps show the causal impact of health insurance.
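A stripped-down version of that comparison, with simulated people and an invented treatment effect, looks like this:

```python
import random

random.seed(42)

# Simulate 10,000 lottery entrants; winning coverage is a coin flip.
people = []
for _ in range(10_000):
    insured = random.random() < 0.5
    baseline = random.gauss(50, 10)
    # Suppose insurance truly improves the health outcome by 3 points.
    outcome = baseline + (3.0 if insured else 0.0) + random.gauss(0, 5)
    people.append((insured, outcome))

insured_outcomes = [o for i, o in people if i]
uninsured_outcomes = [o for i, o in people if not i]

# Because assignment was random, a simple difference in means is an
# unbiased estimate of the causal effect of insurance.
effect = (sum(insured_outcomes) / len(insured_outcomes)
          - sum(uninsured_outcomes) / len(uninsured_outcomes))
print(f"Estimated effect: {effect:.1f}")  # close to the true effect of 3
```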

But as Card says in the interview, all these methods have “some strengths and some weaknesses.” Natural experiments, for example, are great because they actually happen in the real world, but it’s tough to know how random the underlying policy changes really are. At the same time, a randomized controlled trial pins down the effect of the change in the trial with a great degree of confidence, but it’s hard to know how well those results apply to circumstances outside the trial. So while it might be nice to point to one technique that rules over all the others, giving any single technique that much power probably isn’t for the best.

Must-reads: April 27, 2016



Must-read: David Glasner: “What’s Wrong with Monetarism?”

Must-Read: An excellent read from the very sharp David Glasner. I, however, disagree with the conclusion: the standard reaction of most economists to empirical failure is to save the phenomena and add another epicycle. Why not do that in this case too? Why not, as someone claimed to me that John Taylor once said, stabilize nominal GDP by passing a law mandating the Federal Reserve keep velocity-adjusted money growing at a constant rate?

David Glasner: What’s Wrong with Monetarism?: “DeLong balanced his enthusiasm for Friedman with a bow toward Keynes…

…noting the influence of Keynes on both classic and political monetarism, arguing that, unlike earlier adherents of the quantity theory, Friedman believed that a passive monetary policy was not the appropriate policy stance during the Great Depression; Friedman famously held the Fed responsible for the depth and duration of what he called the Great Contraction… in sharp contrast to hard-core laissez-faire opponents of Fed policy, who regarded even the mild and largely ineffectual steps taken by the Fed… as illegitimate interventionism to obstruct the salutary liquidation of bad investments, thereby postponing the necessary reallocation of real resources to more valuable uses…. But both agreed that there was no structural reason why stimulus would necessarily be counterproductive; both rejected the idea that only if the increased output generated during the recovery was of a particular composition would recovery be sustainable. Indeed, that’s why Friedman has always been regarded with suspicion by laissez-faire dogmatists who correctly judged him to be soft in his criticism of Keynesian doctrines….

Friedman parried such attacks… [saying that] the point of a gold standard… was that it makes it costly to increase the quantity of money. That might once have been true, but advances in banking technology eventually made it easy for banks to increase the quantity of money without any increase in the quantity of gold… True, eventually the inflation would have to be reversed to maintain the gold standard, but that simply made alternating periods of boom and bust inevitable…. If the point of a gold standard is to prevent the quantity of money from growing excessively, then, why not just eliminate the middleman, and simply establish a monetary rule constraining the growth in the quantity of money? That was why Friedman believed that his k-percent rule… trumped the gold standard….

For at least a decade and a half after his refutation of the structural Phillips Curve, demonstrating its dangers as a guide to policy making, Friedman continued treating the money multiplier as if it were a deep structural variable, leading to the Monetarist forecasting debacle of the 1980s…. So once the k-percent rule collapsed under an avalanche of contradictory evidence, the Monetarist alternative to the gold standard that Friedman had persuasively, though fallaciously, argued was, on strictly libertarian grounds, preferable to the gold standard, the gold standard once again became the default position of laissez faire dogmatists…. So while I agree with DeLong and Krugman (and for that matter with his many laissez-faire dogmatist critics) that Friedman had Keynesian inclinations which, depending on his audience, he sometimes emphasized, and sometimes suppressed, the most important reason that he was unable to retain his hold on right-wing monetary-economics thinking is that his key monetary-policy proposal–the k-percent rule–was empirically demolished in a failure even more embarrassing than the stagflation failure of Keynesian economics. With the k-percent rule no longer available as an alternative, what’s a right-wing ideologue to do? Anyone for nominal gross domestic product level targeting (or NGDPLT for short)?

Must-see: Raj Chetty and Nathaniel Hendren: “The Impacts of Neighborhoods on Intergenerational Mobility”

Must-See: Raj Chetty and Nathaniel Hendren: The Impacts of Neighborhoods on Intergenerational Mobility: W@4PM, Wells-Fargo Room: Stream: http://bluejeans.com/617756972 : “We characterize the effects of neighborhoods on children’s earnings and other outcomes in adulthood…

…by studying more than five million families who move across counties in the U.S. Our analysis consists of two parts. In the first part, we present quasi-experimental evidence that neighborhoods affect intergenerational mobility through childhood exposure effects. In particular, the outcomes of children whose families move to a better neighborhood – as measured by the outcomes of children already living there – improve linearly in proportion to the time they spend growing up in that area. We distinguish the causal effects of neighborhoods from confounding factors by comparing the outcomes of siblings within families, studying moves triggered by displacement shocks, and exploiting sharp variation in predicted place effects across birth cohorts, genders, and quantiles. We also document analogous childhood exposure effects for college attendance, teenage birth rates, and marriage rates. In the second part of the paper, we identify the causal effect of growing up in every county in the U.S. by estimating a fixed effects model identified from families who move across counties with children of different ages. We use these estimates to decompose observed intergenerational mobility into a causal and sorting component in each county. For children growing up in families at the 25th percentile of the income distribution, each year of childhood exposure to a one standard deviation (SD) better county increases income in adulthood by 0.5%. Hence, growing up in a one SD better county from birth increases a child’s income by approximately 10%. Low-income children are most likely to succeed in counties that have less concentrated poverty, less income inequality, better schools, a larger share of two-parent families, and lower crime rates. Boys’ outcomes vary more across areas than girls’, and boys have especially poor outcomes in highly segregated areas. In urban areas, better areas have higher house prices, but our analysis uncovers significant variation in neighborhood quality even conditional on prices.
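The abstract’s headline numbers follow from simple linear exposure arithmetic; a short check (taking roughly 20 years of childhood, which is my assumption rather than a figure from the abstract) is:

```python
# Each year of childhood exposure to a one-SD-better county raises
# adult income at the 25th percentile by 0.5%; the paper finds the
# effect accumulates linearly with years of exposure.
per_year_gain = 0.005
childhood_years = 20          # approximate length of childhood exposure
total_gain = per_year_gain * childhood_years
print(f"{total_gain:.0%}")    # 10%
```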

An interactive history of U.S. labor force participation

If you want to know how the labor market has changed over time, you usually look at the unemployment rate or maybe the employment-to-population ratio. But while those summary statistics are important, they don’t tell us about what people outside the labor force are doing. Are they in school? Acting as a primary caregiver? Disabled? Retired from the workforce?

The chances that a person is in any of those roles at a given age have changed quite a bit over the years. Inspired by Matt Bruenig of Demos, we looked at the trends in labor force status by age since 1975, using data from the Current Population Survey.

The interactive graph below shows the share of Americans at different ages who are:

  • Employed part-time or full-time
  • Officially unemployed
  • Disabled
  • In-home caregivers
  • Students in school
  • Retired
[Interactive: History of Labor Force Participation, by age. Click an area on the chart to isolate that category; slide along the GDP growth graph beneath the chart to pick a different time period (recessions are shaded; red lines mark major changes to the CPS survey).]
Note: This chart is updated monthly. Data is from the Census Bureau's Current Population Survey. Basic monthly data are used and all months are averaged together for each year. The survey was revised in 1989 and 1994; changes to both question wording and survey weights result in discontinuities in these years that may not be attributable to real changes in the economy. GDP data from: US. Bureau of Economic Analysis, Gross Domestic Product [GDP], retrieved from FRED, Federal Reserve Bank of St. Louis https://research.stlouisfed.org/fred2/series/GDP. Recession data from: Federal Reserve Bank of St. Louis, NBER based Recession Indicators for the United States from the Period following the Peak through the Trough [USREC], retrieved from FRED, Federal Reserve Bank of St. Louis https://research.stlouisfed.org/fred2/series/USREC, March 1, 2016.


Methodology

The data assembled span three versions of the Current Population Survey, with new surveys being instituted in 1989 and 1994. All three surveys feature a labor force participation item that is generated based on responses to a series of yes/no questions on the survey. This variable is called ESR, LFSR, and PEMLR, respectively, on the three versions of the survey. A second variable—called major activity, or MAJACT, on the first two surveys and PENLFACT on the post-1994 survey—was used to distinguish between certain categories of non-labor force respondents. Finally, a question on total hours worked was used to distinguish full-time workers from part-time workers.
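As a rough illustration of the recoding this involves, here is a sketch for the post-1994 survey. The helper function is hypothetical (it is not the project’s actual code), and the PEMLR code values follow my reading of the CPS data dictionary; verify them against the documentation before relying on them.

```python
FULL_TIME_HOURS = 35  # the CPS's usual full-time threshold

def classify(pemlr, hours=None):
    """Map a post-1994 CPS labor force recode (PEMLR) to a chart category."""
    if pemlr in (1, 2):            # employed: at work, or absent from work
        if hours is not None and hours >= FULL_TIME_HOURS:
            return "employed full-time"
        return "employed part-time"
    if pemlr in (3, 4):            # unemployed: on layoff, or looking for work
        return "unemployed"
    if pemlr == 5:                 # not in labor force: retired
        return "retired"
    if pemlr == 6:                 # not in labor force: disabled
        return "disabled"
    return "other (school, caregiving, etc.)"

print(classify(1, hours=40))  # employed full-time
print(classify(4))            # unemployed
```

The second variable described above (MAJACT or PENLFACT) would then split the final “other” bucket into students, caregivers, and the rest.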

The results are fairly consistent across surveys for certain age groups but there are important discrepancies. Most notably, the pre-1989 survey did not allow respondents to specifically identify themselves as retired. Instead, the “other” category included retirees. The wording and question order of the 1989-1993 survey appear to bias respondents in favor of choosing “caregiver” over “retired,” so another break in the retired series is evident in 1994. Minor changes in the survey may also have contributed to the uptick in respondents identifying as “disabled” in the most recent version of the survey.

This project’s GitHub repository includes the Python code that was used to analyze the raw monthly CPS data, including our survey-weighting procedure and all coding decisions made.

The basic economics of a guaranteed income

In the 1975 book “Equality and Efficiency: The Big Tradeoff,” economist Arthur Okun coined the phrase “leaky bucket” to describe how efforts to redistribute income might reduce economic efficiency. Discussing the merits of his larger argument is a task for another time, but what’s interesting for right now is the contemporary policy example he uses to illustrate the idea.

In the wake of the 1972 presidential election, Okun discusses the competing proposals for guaranteed income from the candidates of the two major political parties. To someone reading 40 years later, the idea that a guaranteed income or a basic income was once in the political mainstream seems almost unbelievable. But the idea is slowly making its way back into policy debates.

At FiveThirtyEight, Andrew Flowers runs through the basics of such a plan, as well as the history of the idea, and points to attempts to implement it in the real world. With a basic or guaranteed income, the government would send a check to every citizen or resident to ensure that each individual has a certain level of income. And under a basic income, every person would receive a check of the same amount—whether that person is content with not working and can live on a small budget, or whether that person is a high-earning workaholic.

But there’s also the slightly different idea of a negative income tax, favored by economist Milton Friedman. Under a negative income tax, everyone would be guaranteed the same income, except the check from the government would decrease as the worker earns more. That description might sound familiar if you know about the workings of the Earned Income Tax Credit.
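The mechanics can be sketched with a stylized schedule; the guarantee and phase-out rate below are invented for illustration:

```python
GUARANTEE = 12_000     # hypothetical income floor, dollars per year
PHASE_OUT_RATE = 0.5   # the check shrinks $0.50 for each $1 earned

def nit_check(earnings):
    """Government transfer under the stylized negative income tax."""
    return max(0.0, GUARANTEE - PHASE_OUT_RATE * earnings)

def total_income(earnings):
    return earnings + nit_check(earnings)

# A non-worker gets the full guarantee; the check phases out until the
# break-even point at GUARANTEE / PHASE_OUT_RATE = $24,000 of earnings.
for e in (0, 10_000, 24_000, 40_000):
    print(f"earnings ${e:>6,} -> total income ${total_income(e):>9,.0f}")
```

Unlike the Earned Income Tax Credit, the transfer here does not require any earnings at all.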

There are some major differences, however, between that credit and a negative income tax. Importantly, the Earned Income Tax Credit requires that a worker actually has a job to receive the credit whereas a negative income tax does not. By increasing the post-tax returns of employment, the Earned Income Tax Credit increases the labor supply and reduces hourly wages. The result, as a paper by University of California, Berkeley economist Jesse Rothstein shows, is that low-wage workers don’t receive the full value of the Earned Income Tax Credit because some of that value is captured by employers through lower wages. For every $1 spent on the program, after-tax incomes only go up by $0.73. But with a negative income tax, every $1 spent actually increases incomes by $1.39.

How does that happen? Because a negative income tax doesn’t require someone to work in order to get it, the reduction in the amount of work people are willing to do (labor supply) ends up boosting wages. So programs like a negative income tax or a universal basic income would likely reduce the amount of labor supply and boost wages.
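The contrast in those pass-through numbers can be expressed as simple incidence arithmetic. The wage responses below are reverse-engineered to reproduce the figures quoted above, not taken from Rothstein’s paper:

```python
def net_income_gain(transfer, wage_response_per_dollar):
    """Recipient's income change per dollar of program spending: the
    transfer itself plus the induced change in labor earnings."""
    return transfer + wage_response_per_dollar * transfer

# EITC: extra labor supply pushes wages down; employers capture ~27 cents.
eitc_gain = net_income_gain(1.00, -0.27)
# Negative income tax: reduced labor supply pushes wages up by ~39 cents.
nit_gain = net_income_gain(1.00, 0.39)

print(f"EITC: ${eitc_gain:.2f} per $1, NIT: ${nit_gain:.2f} per $1")
```

The sign of the wage response, not its exact size, is what separates the two programs: one subsidizes work and bids wages down, the other does not.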

Will the reduction be because workers already employed work fewer hours (a reduction on the intensive margin) or because workers drop out of the labor force (a reduction on the extensive margin)? There isn’t a sufficient amount of good and recent research on basic income to know how large the effects are or which effect would be larger. There’s also the possibility that making the program not phase out and getting rid of many antipoverty programs with drop-offs in benefits at certain levels would produce some offsetting increases in labor supply. But we just don’t know enough yet.

Thankfully, as Flowers reports, there’s been an uptick in new research starting in the area. So while these questions continue to be open, we may find some closure in the near future.

Competition in the U.S. labor market

The Obama administration has raised concerns about the increasing prevalence of occupational licensing and non-compete agreements.

When the Obama administration announced a new emphasis on competition policy last week, many observers almost certainly heard “competition policy” and immediately thought “antitrust enforcement.” Yet while antitrust enforcement is a key part of competition policy, the Administration is clear that it’s just one aspect. And there are some important areas of competition policy that you may not think are related to competition at all.

The labor market, for example, has seen the rise of several trends that have reduced competition, to the detriment of many workers. In fact, the Obama administration has raised concerns about two little-known but increasingly prevalent labor market institutions: occupational licensing and non-compete agreements.

Occupational licensing refers to the requirement that workers in certain occupations must get a license before getting a job. According to a report from the Treasury Department’s Office of Economic Policy, the President’s Council of Economic Advisers, and the Department of Labor, about 25 percent of American workers need a license to do their jobs. That share has increased by roughly 400 percent since the 1950s.

These requirements, however, can lock workers out of occupations, reduce competition, and create economic rents for some professions. New data from the Department of Labor, as Ben Casselman of FiveThirtyEight reports, show that workers without licenses get locked out of occupations and move into lower-paying ones. This doesn’t mean all licenses are unnecessary, but policymakers should keep an eye on them.

Similarly, non-compete agreements (which we’ve detailed here) are another labor market institution that has become surprisingly common and deserves scrutiny. According to one estimate, about 18 percent of U.S. workers are currently under non-competes, and 37 percent have been under one at some point in their career.

Given that a large percentage of non-competes are not enforceable and employers still use them, there’s evidence that their increasing use is less about protecting intellectual property and more about shifting the balance of power toward firms. And as Noah Smith points out at Bloomberg View, this kind of policy is exactly the opposite of what we want in an era of low and declining firm and labor dynamism.

Outside of these specific policies, we should also point out that a lack of competition and consolidation among firms can have effects in the labor market. If firms increasingly have power in the market for their products, they may also have increasing power in the market for labor to create those products. Say that the hospital market becomes increasingly concentrated and the remaining hospitals get more market power. Then that market power may also extend to the market for, say, nurses. Monopoly power may beget monopsony power, which in turn depresses wages and employment.

It might be nice and neat to think about competition as something just to consider when one company tries to buy another. But it’s increasingly clear that it’s an area of concern that extends across large swathes of the economy.