Are U.S. government economic surveys reaching the right mix of respondents?
Surveys conducted by U.S. government statistical agencies are not an inherently exciting topic. Most people are fine knowing that the federal government calculates the unemployment rate. They might even care to know that the survey used to calculate that rate is known as the Current Population Survey, though that might be stretching it. Beyond that, the specifics of survey methodology aren’t going to grab headlines anytime soon.
But policymakers depend upon these statistical surveys to inform them about the U.S. economy, which means they should be concerned about potential bias in the data. Such bias isn't intentional; it stems from the reality that it's sometimes hard to get people to sit down for a survey interview. For a survey to say anything accurate about the entire population, its sample has to be representative. This is why polls and surveys are reweighted to account for the fact that the demographics of the people interviewed won't exactly match those of the country as a whole. Perhaps the survey interviewed a lower percentage of African Americans than their share of the national population, or relatively more women. The final results will account for those differences.
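To make the reweighting idea concrete, here is a minimal sketch with invented shares and responses (none of these figures are real survey numbers): each group's weight is its population share divided by its sample share, which pulls the sample average toward what a representative sample would show.

```python
# Hypothetical post-stratification example. Suppose the sample
# over-represents women relative to the population, and the two
# groups answer some yes/no question at different rates.
population_share = {"men": 0.49, "women": 0.51}
sample_share = {"men": 0.45, "women": 0.55}

# Weight = population share / sample share for each group.
weight = {g: population_share[g] / sample_share[g] for g in population_share}

# Invented group-level average answers (share answering "yes").
group_mean = {"men": 0.60, "women": 0.70}

# The unweighted mean reflects the sample's demographic mix...
unweighted = sum(sample_share[g] * group_mean[g] for g in group_mean)

# ...while the weighted mean reflects the population's mix, because
# sample_share * weight equals population_share for each group.
weighted = sum(sample_share[g] * weight[g] * group_mean[g] for g in group_mean)

print(round(unweighted, 3))  # 0.655
print(round(weighted, 3))    # 0.651
```

The point of the paper, discussed below, is that this kind of adjustment only works for characteristics the survey knows to reweight on.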
But what if a survey doesn't account for some difference among workers that could bias its statistics? According to a new research paper, such bias may exist in several important government surveys. The new National Bureau of Economic Research paper by economists Ori Heffetz and Daniel B. Reeves shows that there's a relationship between how easy it is to get a person to respond to a survey and that person's answers to the survey's questions. In other words, the willingness or ability to respond to a survey is a characteristic of respondents that survey designers should consider, because it can skew results.
The economists consider three surveys—the Current Population Survey, the Consumer Expenditure Survey, and the Behavioral Risk Factor Surveillance System—but let's look at the relatively familiar Current Population Survey for an example of the dynamics they examine. The CPS keeps track of how many attempts it takes to contact a respondent for an interview (one, two, or three or more). Heffetz and Reeves check whether there is any relationship between the difficulty of reaching respondents and their answers about whether they are in the labor force and whether they are unemployed. They find that there is: people who are harder to reach are more likely to be in the labor force and to have a job. In other words, the labor force participation rate for these respondents is higher and their unemployment rate is lower.
Now how does this produce a bias? Well, if we think of non-respondents as people who are extremely hard to contact, then they, too, are likely to have higher labor force participation rates and lower unemployment rates. Of course, without additional information on non-respondents, the authors can't directly verify this assumption. But if it is true, then as response rates for the Current Population Survey and other government surveys decline—as they have been in recent years—this bias will grow. For policymakers and scholars interested in unbiased economic data, that trend presents quite an unfortunate problem.
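The extrapolation logic above can be sketched with invented numbers (none of these figures come from the paper): if the measured unemployment rate falls with contact difficulty, and non-respondents are assumed to look like an even harder-to-reach group, then the gap between what the survey measures and the assumed true rate grows as the response rate falls.

```python
# Hypothetical unemployment rate by contact difficulty, falling as
# respondents get harder to reach (invented numbers).
rate_by_difficulty = {"1 attempt": 0.060, "2 attempts": 0.055, "3+ attempts": 0.050}

# Key assumption in the argument: non-respondents look like an even
# harder-to-reach group, with a still-lower unemployment rate.
nonrespondent_rate = 0.045

# What the survey measures: a mix over respondents only
# (invented shares of respondents by contact difficulty).
respondent_shares = {"1 attempt": 0.5, "2 attempts": 0.3, "3+ attempts": 0.2}
respondent_rate = sum(respondent_shares[g] * rate_by_difficulty[g]
                      for g in respondent_shares)

def true_rate(response_rate):
    """Blend respondents with the assumed non-respondent rate."""
    return (response_rate * respondent_rate
            + (1 - response_rate) * nonrespondent_rate)

# As the response rate falls, the bias (measured minus assumed true
# rate) grows, since non-respondents make up more of the population.
for rr in (0.9, 0.8, 0.7):
    print(rr, round(respondent_rate - true_rate(rr), 4))
```

This is only an illustration of the mechanism, not a calculation from the paper, but it shows why falling response rates make the potential bias larger rather than merely constant.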