The alarming deterioration of U.S. household surveys
A peek behind the curtain that shields the U.S. economic data-making process can be alarming at times. At least, that’s the upshot of a National Bureau of Economic Research paper published earlier this week. The paper, by Bruce D. Meyer of the University of Chicago, Wallace K.C. Mok of the Chinese University of Hong Kong, and James X. Sullivan of the University of Notre Dame, looks at the quality of the household surveys used to create government data sets. Their findings could have important ramifications for the measurement of poverty and inequality and, more broadly, for the production of economic data going forward.
A large share of the economic data the U.S. government produces through agencies such as the Bureau of Labor Statistics and the Census Bureau is based on surveys. Consider the unemployment rate: It is calculated from the results of a survey that asks questions of 60,000 households every month. The accuracy of the unemployment rate depends on the ability of that survey, known as the Current Population Survey, to draw an accurate sample of the population and to get accurate information from respondents. The same is true of other surveys, such as the Survey of Income and Program Participation, which provides information on participation in government transfer programs, and the Consumer Expenditure Survey, which provides data that help calculate inflation.
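For readers unfamiliar with how a survey becomes a statistic, here is a minimal sketch of that calculation. The respondent records and weights below are hypothetical stand-ins, not actual Current Population Survey microdata:

```python
# Minimal sketch of a survey-based unemployment rate calculation.
# The respondent records below are hypothetical; the real CPS samples
# roughly 60,000 households and uses survey weights so that each
# respondent represents many people in the broader population.

respondents = [
    # (labor-force status, survey weight)
    ("employed", 1500.0),
    ("unemployed", 1500.0),
    ("employed", 2200.0),
    ("not_in_labor_force", 1800.0),  # excluded from the rate
]

employed = sum(w for status, w in respondents if status == "employed")
unemployed = sum(w for status, w in respondents if status == "unemployed")

# Unemployment rate = unemployed / (employed + unemployed)
rate = unemployed / (employed + unemployed)
print(f"Unemployment rate: {rate:.1%}")
```

If respondents are unrepresentative, or their answers are wrong, every number downstream of this arithmetic inherits the error.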
Even before the new paper by Meyer, Mok, and Sullivan, there was concern about the quality of household surveys due to a decline in participation: Households, once contacted by surveyors, are less likely to respond than they were in the past. But Meyer, Mok, and Sullivan argue that this decline in the response rate is just one of three factors affecting the quality of surveys, and the least consequential for measuring the receipt of government transfers.
A second reason surveys are becoming less accurate, according to the three economists, is what they call “item nonresponse.” This happens when a household responds to a survey but fails to answer all of the questions. A person might agree to take the survey, for example, but skip the question about the amount received from the Supplemental Nutrition Assistance Program.
Yet “item nonresponse” is not as big an issue as measurement error. This third problem arises when respondents underreport the amount of assistance they receive from government programs: A respondent does answer the question about government nutrition assistance, but doesn’t report the correct amount. One probable cause is that respondents have trouble remembering exactly how much they received, as many of these programs pay out infrequently. According to Meyer, Mok, and Sullivan, this mismeasurement is the biggest threat to survey quality, larger than the other two problems combined.
To gauge the size of these effects on reported government transfers, the economists compare survey data from New York State covering 2000 to 2012 to administrative data that come directly from government records. The differences are significant. According to their results, at least 30 percent, and as much as 50 percent, of the dollar value of electronic payments to recipients of supplemental nutrition assistance is missed by the surveys. Similarly, the surveys miss 32 percent of the dollar amount of unemployment insurance and 54 percent of workers’ compensation for disabilities suffered on the job.
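The comparison itself is straightforward arithmetic: For each program, tally the dollars the survey says households received, tally the dollars the government records say it paid out, and look at the gap. A rough sketch of that calculation follows; the program names are real, but the dollar figures are made up for illustration and do not come from the paper:

```python
# Hypothetical sketch of the survey-vs-administrative comparison.
# Dollar totals below are invented for illustration only.

# Total dollars recorded in administrative (government) records, by program
admin_totals = {
    "SNAP": 4_800_000_000,
    "Unemployment Insurance": 3_100_000_000,
    "Workers' Compensation": 1_200_000_000,
}

# Total dollars the same population reports receiving in the household survey
survey_totals = {
    "SNAP": 3_100_000_000,
    "Unemployment Insurance": 2_100_000_000,
    "Workers' Compensation": 550_000_000,
}

for program, admin in admin_totals.items():
    reported = survey_totals[program]
    missed_share = 1 - reported / admin  # share of dollars absent from the survey
    print(f"{program}: {missed_share:.0%} of dollars missed")
```

The paper’s contribution lies in linking the records at the individual level, which lets the authors attribute the gap to nonresponse, item nonresponse, or misreporting rather than observing only the aggregate shortfall.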
If survey data are missing this large a share of transfer income (economics parlance for any government program that provides assistance of one sort or another to people in need), then other important economic surveys may contain similar biases. For example, a study by Massachusetts Institute of Technology economist David Autor, the London School of Economics’ Alan Manning, and Federal Reserve Board economist Christopher Smith finds a fair amount of measurement error in the wage variables in the Current Population Survey.
The release of this study by Meyer, Mok, and Sullivan could not be timelier. A bill recently passed by the House of Representatives would expand the use of administrative data, which, used in conjunction with survey data, could yield far more accurate statistics. Hopefully this will help researchers and policy makers alike get a clearer picture of how much bias actually exists in important data sets.