In Conversation with Erica Groshen

Overview

Equitable Growth in Conversation is a recurring series where we talk with economists and other academics to help us better understand whether and how economic inequality affects economic growth and stability. In this installment, Austin Clemens, the director of economic measurement policy at the Washington Center for Equitable Growth, speaks with Erica Groshen, senior economics advisor at Cornell University’s Industrial and Labor Relations School. She previously served as commissioner of the U.S. Bureau of Labor Statistics. Groshen is a labor economist who is also active in efforts to advance the infrastructure of the federal statistical system.

In a recent conversation, Clemens and Groshen discussed:

  • The promise of real-time economic measurement
  • The trade-offs to consider as the sources of data change
  • Policymakers’ demand for more timely and actionable data
  • The real-time measurement of Unemployment Insurance data
  • Improving real-time measurement of UI data and other economic data
  • The importance of job skills, not credentials, in the U.S. labor market
  • A new data-gathering initiative by the U.S. Chamber of Commerce Foundation

 
On October 27, Groshen will join Equitable Growth again for our 1-hour event, “Opportunities and challenges of real-time economic measurement.”

Austin Clemens: So, first of all, I would just like to say, this past recession was my first one as an economic observer.

Erica Groshen: Oh. Okay. You’re making me feel old.

The promise of real-time economic measurement

Clemens: I should say I was not an economic observer during the Great Recession of 2007–2009 because I was in graduate school studying political science, so I was not as keyed into this kind of thing. But it’s really striking to me how quickly economists responded to this one. We had working papers published on our website over the past 18 months where, in some cases, the paper was analyzing data that were 2 weeks old. For starters, is this a relatively new phenomenon in this recession? And if so, what’s contributing to the sudden demand for, and the sudden supply of, all this high-frequency research?

Groshen: I think there are at least three factors at play here. One is more availability of data and computational power than we ever had before, with some from the private sector and some from the public sector. A second unusual factor is that we knew exactly when this recession started. And we had a good sense that it was going to be deep and profound. We didn’t know exactly what it was going to look like because it was unprecedented, but usually it takes 6 to 8 months for the National Bureau of Economic Research [the arbiters for dating U.S. recessions] to declare the beginning of a recession. And this time we didn’t need NBER to declare it, but NBER did it very fast because, again, we all knew it happened and that’s not typical. So, everybody was primed and on the same page. No one was disputing whether it was happening at all. That whole part of the discussion was just skipped entirely.

The third part is that even though policymakers like to complain that economists are always reinventing the wheel and don’t really learn anything, the truth is, we have learned things about recessions and how to think about them and how to measure them. Our recent research and the policy experiments that have taken place have taught us a lot of lessons. So, economists were primed intellectually in a way that we hadn’t been before. All the studies on previous recessions, conducted with modern data and modern theory, fed directly into policymakers having the tools ready to address the coronavirus recession.

Clemens: And I would just add that because everyone immediately knew we were in a recession, there was so much policymaker demand for answers and solutions. Right?

Groshen: Yes. I was really impressed and heartened by how quickly the policy community reacted, even in this time of polarization, which seems to interfere with almost anything else getting done. Our leaders enacted some very strong and innovative policy responses, which worked remarkably well for how quickly they were implemented and how large they were. I found it really heartening that policymakers could and did do this. For me, it was like this wonderful moment of “yes, we can do much more than we think we can do.” It was awesome.

Clemens: And the policy responses were responsive to the data we were seeing, right?

Groshen: Yes.

Clemens: And responsive too, I think, to what economists were saying.

Groshen: Right.

Clemens: So, building on that, let’s just take it as a given that more timely estimates are better, and that we’d like more timely estimates going forward.

Groshen: Absolutely. More timely. More granular. More flexible.

Clemens: Right. That’s a significant demand to place on the federal statistical system. In a general sense, what do we need to get there? Is it primarily a question of using data we have in better ways? Or is it going to be primarily a question of finding new data, collecting new data? And what do you think are the big places in the federal data infrastructure that we need to work on?

Groshen: The statistical system really expanded its capacity to inform decision-making when survey methodology was developed [around the 1940s]—specifically, when the statistical and cognitive work was done so that agencies figured out how to choose samples, process the information, and ask questions that would elicit the information needed. And so that was a huge focus of the statistical system for many years.

Economists always used some administrative data because that was all they had to begin with. So, they started with administrative data, then they developed a huge survey capacity that has been really powerful and essential. Now, in some ways, we’re at a third iteration, where external administrative and private data have burgeoned. So, the statistical agencies have to get access to that information (and that’s not been trivial), and then find ways to use it to augment or substitute for survey data.

I think survey data will remain a very important part of what statistical agencies do because there are some things you cannot measure without asking people, such as why they are not working.

Yet this wealth of other new data out there can be tapped to improve our official statistics. You can divide those other data into two parts: government administrative data and private-sector data. For statistical agencies to make better use of government administrative data, some practices and legislation will have to change to allow more access and, in some cases, to improve the administrative data. Such changes are difficult, although inroads are being made.

Turning to private data, we’re still at the infancy of figuring out how to work with the private sector appropriately. Many holders of private data want to help improve official statistics, but they have important concerns—for example, about protecting confidentiality and intellectual property. Getting the right mechanisms in place is a real challenge because of the needs of the two parties. Even with some commonality of interests, there are some needs that seem to conflict, so the parties will have to work through them.

The trade-offs to consider as the sources of data change

Clemens: While we’re discussing surveys, let’s talk about the Household Pulse and Small Business Pulse surveys. They have a couple of unusual characteristics. They’re online, and the samples are massive with very low response rates. And they’re specifically crafted to capture COVID-related economic concerns. What do you think the future is for this kind of survey? Is this going to help us make the traditional surveys better? And are we going to see Census deploying more one-offs like that?

Groshen: It depends on how broadly you define it. But I’m a big advocate for the statistical agencies having mechanisms to ask one-off questions or address one-off situations, because “stuff happens.” We don’t know in advance what the next crisis will be, so the option value of these facilities is huge. Exactly what form these flexible programs will take, I don’t know. One thing you did not mention—and I think I know why—is that the Bureau of Labor Statistics also did a one-off survey of establishments and asked some key questions. It was done just once, not weekly, like the pulse surveys, but it had a large sample with a very high response rate, and thus much more statistical reliability. Despite the high quality of the BLS version, it has garnered less attention than the Census pulse surveys, perhaps because it was not as timely.

So, there can be this trade-off between quality and timeliness and responsiveness. It may be that the federal statistical agencies will need various instruments all along different spectrums that will be available as needed. Or some design may come to dominate the others.

We’ve seen a lot of experimentation by statistical agencies during the pandemic. The White House Office of Management and Budget has helped this. OMB, which is the gatekeeper for any kind of survey like this [OMB is required to review and approve any data collection effort that imposes a burden on the respondents], moved extremely rapidly in approving these surveys, and that’s why they happened. So, OMB staff deserve a lot of credit for their willingness to speed up their processes to allow these novel one-off surveys to hit the ground so fast.

Part of the reluctance of the statistical agencies to launch one-off surveys in the past has been funding. They don’t get funding for such an effort at the drop of a hat. Thus, the agencies have to divert funding from someplace else to do it. During COVID, many agencies could redirect money that had been budgeted for travel to new uses. Usually, though, starting something unexpected like this means shorting some other planned activity, so that’s hard.

Another source of reluctance is that the statistical agencies aim to produce gold-standard data. Thus, it can be uncomfortable for the agencies to issue products that they would not normally consider to be of publishable quality. Going forward, the statistical agencies will likely continue to put out one group of things that are gold-standard official statistics and then perhaps some other sets of information that are not gold standard, but rather bronze or copper.

Clemens: I think, notably, that the Census Bureau was saying throughout the year that one of the reasons they could field the pulse surveys was that they were saving money by spending less on facilities and travel and things like that. And then, also, that it’s a cheap survey to field. I think it was quite a bold move on their part, especially because government economists can be very cautious—for very valid reasons, to be clear. They want to put out good estimates, and they don’t want to be seen as serving partisan ends, and they want to be reliable and fair. So, there is this very reasonable concern that as you speed up data production, you introduce some error. The federal statistical system is used to this. Products get revised a lot. But it’s still a barrier.

Policymakers’ demand for more timely and actionable data

Clemens: So, putting your government statistics hat on, because it’s very different for academics and government statisticians, how do you think about that trade-off between quickly giving policymakers, and the public frankly, what they need, while also wanting to be reasonably accurate? How should agencies be approaching that?

Groshen: Here, transparency is the key. Policymakers must make decisions that are hard, often because information is incomplete. Government statisticians understand that an estimate with a large standard error can be a lot better than having no estimate at all. So, I think statistical agencies’ transparency practices about data quality are essential. The agencies must be upfront about data virtues and limitations, and guide policymakers on how to treat the data. Then, they can stand back.

An interesting feature of being an official statistician is that after you put numbers out there, you have no control over how they are used. You have to accept that people may use official statistics in all sorts of crazy ways. The goal of the statistical agency is to provide enough guidance and information that people won’t misuse data accidentally.

Clemens: I think the flip side of that, as we see amid this pandemic, is that academic economists using administrative datasets put out sometimes-conflicting numbers. Some of the consumption datasets from the private sector, for example, told us different things. What is the government role, do you think?

Groshen: I think of official statistics as information infrastructure: the basic building blocks that people need to understand economic conditions and other factors to help them make the best decisions possible. National statisticians are a group of technical professionals, whose job is to understand what needs to be measured and to go out and measure those concepts using state-of-the-art methods. But statistics are inherently imperfect. They are always approximations to some measurement objective. And just because it’s been the best way to do it in the past doesn’t mean it’s the best way to do it now. So, the statistical agencies are always adjusting how they do what they do. They tend to be more behind the curve rather than ahead of the curve because there’s a tremendous value to consistency as well.

Consider industrial, occupational, and geographic classifications, which evolve in our dynamic economy. Agencies know they must change classifications over time, but not every week. That would make it hard to analyze data and would create occupations or industries that, it turned out, didn’t matter. So, it’s easy to see why you don’t change those classifications constantly. But with consumption and inflation, given the entry of different kinds of outlets and different kinds of goods and services, more frequent adjustments are necessary.

It’s been a real challenge in the Consumer Price Index and the Producer Price Index, particularly as the economy has become more service-based. Goods are inherently easier to price than services. Most new goods build on previous models, and qualities are standard within the model. By contrast, when a new kind of service appears, statisticians usually must design a brand new pricing strategy that covers some part of those services and can reasonably represent trends for the whole. This is just so much more complicated.

A very recent challenge has been that consumption patterns changed dramatically and quickly as a consequence of the COVID pandemic. Normally, adjusting the market basket every 2 years [using the Consumer Expenditure Survey results] was adequate. During the pandemic, this embedded sluggishness became a more serious limitation for understanding what was happening to people’s cost of living. Private-sector estimates played a role in illuminating the extent of these limitations. But they could not have made their estimates without the groundwork provided by the official statistics.

Getting back to your question, the government role is to provide the baseline information from which the private sector can work to understand what they see in their part of the world. This is a good outcome. Sometimes, comparisons of official statistics with private sources reveal underlying heterogeneity hidden by the lack of granularity in official data. Other times, they reveal a difference in measurement concepts or sample selection biases. And often, the findings agree, validating both methods and increasing confidence in the inferences you can draw.

It is unfortunate that some observers see private-sector statistics as competition for official data. To the contrary, most government statisticians see new sources as a challenge in a positive sense. Their attitude toward findings that seem to conflict with released estimates is something like, “Okay, here’s a new validation exercise that somebody else did for us. It shows some issues. Now, we need to look at it further.” This approach reflects a realistic humility that comes from being attuned to all the assumptions embedded in their estimates. When those assumptions need reexamination, national statisticians are often the first ones to call for the change. In fact, they probably wrote a paper 5 years ago saying this could be a problem.

The real-time measurement of Unemployment Insurance data

Clemens: Let’s turn to something that I know you’ve been working on for a long time, which is the data on Unemployment Insurance. This is a real-time measurement question of considerable significance right now because enhanced pandemic-related Unemployment Insurance benefits ended first in certain states and then in all states. First, can you give us a brief primer? Where do UI data come from?

Groshen: The Unemployment Insurance system is a federal system. It is administered by the states, with the federal government reimbursing the states and overseeing the program in general. The states have a fair amount of leeway—but not total freedom—in how they run their programs. The states collect three kinds of important information. First, when people file a claim, the states collect information from the claimants, which the states then use to adjudicate the claims, to decide how much the claimants should be paid, and to track the payments.

The states also collect two other kinds of information, not about claims but rather about everybody covered by Unemployment Insurance, which is more than 99 percent of the workers in the country. The first kind is a set of information on employer UI accounts. If you are an employer, you start a UI account, and you give the UI system some information about your business, which industry you’re in, and what your location is. And you also tell them, on a quarterly basis, how many people you had on payroll in each month of that quarter and how much you paid out in wages during those months. So, that’s the employer record. In addition, employers submit employee wage records. That’s a report for every worker they employed each quarter, listing how much they earned in each month during the quarter.

So, in total, there are employer records, worker records, and claimant records collected by the states. The federal government gets some information from those records to run the UI system. Those records go to the Employment and Training Administration in the U.S. Department of Labor, which runs the UI system.

In addition, the U.S. Bureau of Labor Statistics pays the states to curate the employer records, but just the employer records, so that they will be consistent across all the states. The state staffs curate those records and send them to BLS, which compiles them into a register of all employers in the country. BLS uses that register as a sampling frame for employer surveys and also analyzes the data directly. This is the BLS program called the Quarterly Census of Employment and Wages, with its spin-off Business Employment Dynamics program.
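[To make these three record types concrete, here is a minimal illustrative sketch in Python. The field names are hypothetical, chosen only to mirror Groshen’s description; they are not the actual state or federal UI schemas.]

```python
from dataclasses import dataclass

# Illustrative sketch only: hypothetical fields mirroring the description
# above, not actual state or federal UI schemas.

@dataclass
class EmployerRecord:
    """Quarterly account-level report filed by each covered employer."""
    employer_id: str
    industry_code: str                        # e.g., a NAICS industry code
    location: str
    monthly_employment: tuple[int, int, int]  # payroll headcount, each month of the quarter
    quarterly_wages_paid: float

@dataclass
class WorkerWageRecord:
    """Per-worker wage report submitted by the employer each quarter."""
    worker_id: str
    employer_id: str
    monthly_earnings: tuple[float, float, float]  # earnings in each month of the quarter

@dataclass
class ClaimantRecord:
    """Information collected when an individual files a UI claim."""
    claimant_id: str
    filing_date: str
    weekly_benefit_amount: float
    status: str                               # e.g., "initial" or "continuing"
```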

Improving real-time measurement of UI data and other economic data

Clemens: What I love about this is that it’s a big administrative dataset that, it feels like, we’re not yet using to its full potential. And you have a great proposal to enhance those records. Can you talk a little bit about how that works?

Groshen: Sure. The Bureau of Labor Statistics has no access to the two other kinds of UI records collected by the states—the worker and claims records. So, right now, the claims records are used by the Employment and Training Administration for its administrative purposes and also to publish a weekly release on the volumes of initial and continuing UI claims. Those claims releases certainly have some relationship to economic trends that economists and policymakers look at. They are also very timely, but they are not constructed to be national economic indicators. In particular, many week-to-week fluctuations in the numbers reflect peculiarities of state processing steps and administrative changes, rather than underlying economic trends. Thus, the releases can be difficult to interpret for those who want to use them as an economic indicator.

One aspect of my proposal, in brief, would provide the Bureau of Labor Statistics with access to claims records and funding to create reliable economic indicators from this information. If implemented, this new data program would filter out the impact of administrative changes to tell us what’s going on in the U.S. labor market with more granularity [geographic, industrial, demographic, and occupational] and a close relationship to economic concepts. You can’t really expect the experts running the UI program to also be experts in creating national economic indicators. That’s the job of the Bureau of Labor Statistics. So, that would make sense.

The second part of the proposal concerns worker wage records. Up to this point, these worker records have been used by some states individually, by some states together, and also by the Census Bureau, which has a special deal with each of the states to create the impressive Longitudinal Employer-Household Dynamics program. The Bureau of Labor Statistics has many programs that would benefit from ongoing access to the comprehensive information in these worker records. Right now, those records are not available to BLS at all.

If you enhanced worker records and gave BLS access to them, BLS could replace all or parts of a number of its surveys and also improve the statistics coming out of those and other programs. Already, each record contains the worker’s identity, their employer, and the location and the industry of the employer. The states could also collect hours worked to allow calculation of hourly earnings and the ability to distinguish part-time, full-time, and overtime work. BLS also would want job titles, which can be converted into standard occupational codes by the statistical agencies. Job titles are important for tracking emerging occupations in order to inform the workforce development system about the jobs of the future, which jobs are going away, and what’s going on by geography and industry.

BLS will also need information on work locations because many employers have multiple work locations. If you’re trying to disaggregate job trends within a state, you want to know where in the state people actually work. Last, but not least, it’s just really important to have demographics to track distributional issues by age, race, and ethnicity. These things are important for understanding who’s in crisis and who’s recovering, so that you can direct resources properly. With that information, you can answer many of the distributional questions that the Washington Center for Equitable Growth is interested in and that policymakers are very concerned about. You can also better target programs to where they’re needed.

One challenge with surveys is that they’re expensive. Thus, surveys tend to either collect a lot of information about a lot of people infrequently, or they’re very timely but they don’t cover very many people or ask many questions, so they lack granularity. One of the ways that statistical programs based on surveys can be made more timely and granular is by modeling from administrative data that have broad coverage. These UI data we’re talking about are not super timely, like weekly. Yet they could be used to improve more timely measures. With these data, BLS could use results from more timely surveys to estimate real-time impacts by demographic groups, occupation, industry, and geography.
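[As a toy illustration of that modeling idea (a sketch, not an actual BLS method), a timely aggregate estimate can be allocated across groups in proportion to shares computed from the broad-coverage administrative data. All names and numbers below are hypothetical.]

```python
# Toy sketch (not an actual BLS method): disaggregate a timely aggregate
# estimate using group shares computed from broad-coverage admin data.

def disaggregate(aggregate_estimate: float,
                 admin_group_counts: dict[str, int]) -> dict[str, float]:
    """Allocate an aggregate across groups in proportion to admin-data counts.

    A real model would also adjust for differential trends, coverage gaps,
    and sampling error; this shows only the basic allocation idea.
    """
    total = sum(admin_group_counts.values())
    return {group: aggregate_estimate * count / total
            for group, count in admin_group_counts.items()}

# Hypothetical example: spread an estimated 200,000 job losses across
# industries using last quarter's UI employment counts.
print(disaggregate(200_000, {"leisure and hospitality": 1_500_000,
                             "retail": 2_500_000,
                             "manufacturing": 1_000_000}))
```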

Clemens: The other thing I love about this proposal is that I think we’re going to see more of this. People are looking at whether to add some measurement questions to our administrative data. We see that right now with taxes. People are asking, “could we add some demographic questions to the 1040 individual income tax return?”

The importance of job skills, not credentials, in the U.S. labor market

Clemens: We’ve mostly been talking about data, but I want to briefly dip into some recent work you did because it involves both Equitable Growth grantee Peter Blair at Harvard University and Equitable Growth Board member Byron Auguste, who is also the CEO and co-founder of the nonprofit organization Opportunity@Work, which seeks to expand employees’ access to more career opportunities. You did some work on credentialism, the prevalence of bachelor’s degree requirements in the workforce. Can you tell us a little bit about that work and especially how it applies to our current situation, where job matches seem to be happening rather slowly in the current job market?

Groshen: Sure. I think it’s really relevant to what we’re seeing today. The group for whom the recovery is lagging most—where unemployment rates are still most elevated—are people without college degrees. And that’s disproportionately people of color, so it feeds into other important problems that we see. Peter Blair and Papia Debroy at Opportunity@Work have been conducting research on a large group of people who have developed important skills on the job but who do not have the same upward mobility as those who hold college degrees.

Very importantly, when these skilled workers leave their jobs, they are much less likely to land new jobs that use the skills they developed, compared to people with college degrees. The sorting mechanisms that recruiters use may be blocking their access to a large talent pool. Indeed, BLS projects that over the next 10 years, three-quarters of the jobs created will be in occupations for which employers say they usually require a college degree. Yet less than half of our workforce has a college degree. That’s a problem for the economy as a whole, for employers, and for the workers being excluded.

The recent study that Blair and I and our co-authors at Opportunity@Work and Accenture Research did looks at career paths that should be open to people without college degrees by dint of their success working in occupations with similar skill needs. There are signs that many people have made transitions to these jobs, but that those transitions are less frequent for people without college degrees.

Fixing this requires a mindset shift for employers, but it also requires information for the workers so that they know what jobs they should be applying for and what training might be most beneficial for them. When an employer hires somebody with a college degree, often they say, “They don’t really need to know already what I want them to do. Because they have a college degree, I think they’re trainable.” So, they’re essentially going to train them all about the product that they make. Instead, the employer could consider hiring somebody who knows a lot about that product or about selling or manufacturing and just needs training about the specifics of that company’s environment. Such a shift in strategy could be win-win all around.

Clemens: I think it’s definitely time for employers to be rethinking certain tenets they may have held. So, I thought it was very timely, really interesting work.

Groshen: Well, it’s interesting that we wrote that paper before the pandemic, when the labor market was tight. Now, with all the pandemic-related job losses and labor market flux, it’s still relevant—perhaps even more so than before.

A new data-gathering initiative by the U.S. Chamber of Commerce Foundation

Clemens: I have to let you go in a second—I want to be conscious of your time. I’ll just give you an opportunity here at the end. Is there anything in the realm of real-time measurement that you’re excited about? Perhaps something you saw during the pandemic that you think people are doing really well? Or maybe things that we should be doing? Barriers or challenges that you think are ripe for research?

Groshen: Yes. It’s a bit different from what we’ve talked about already: getting access to data kept by employers. There’s an effort, interestingly enough, by the U.S. Chamber of Commerce Foundation to create interoperable worker, job, and learning records for companies. The idea is that right now, more and more things are being digitized, including human resource and payroll records. But these records aren’t necessarily the same between companies, and they don’t always accord as closely as they could with what the statistical agencies need.

This creates problems when companies merge, do due diligence, or want to apply uniform analytics. It makes it harder for them to fill out government survey forms. It makes it harder for them to switch software providers. So, there are lots of reasons why it would be good to have industry standards for training and employment records. It could be helpful to companies, statistical agencies, and indeed to workers, who could then request a kind of standard verifiable employment history that they could provide as part of job applications.
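[As a hypothetical illustration of what an interoperable worker, job, and learning record might look like, here is a sketch in Python using JSON; the standards the Chamber of Commerce Foundation effort ultimately produces may look quite different.]

```python
import json

# Hypothetical sketch of an interoperable worker/job/learning record.
# The actual standards under development may differ substantially.

record = {
    "schema_version": "0.1",
    "worker": {"id": "W-12345"},
    "employment": [{
        "employer": "Example Manufacturing Co.",
        "job_title": "Machine Operator II",
        "work_location": "Rochester, NY",
        "start_date": "2018-03-01",
        "end_date": None,
    }],
    "learning": [{
        "credential": "OSHA 10-Hour General Industry",
        "issuer": "Example Training Provider",
        "issued": "2019-06-15",
    }],
}

# A shared format would let any HR system, statistical agency, or worker
# read the same record without custom translation between vendors.
print(json.dumps(record, indent=2))
```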

I see this exciting project as a potential model for improving and sharing other corporate data going forward as well. The more we get this kind of public-private discussion going about how to manage our data resources, the closer we get to the good data that we need to make decisions that best improve lives. Some of this work is clearly the work of government, but some will need this kind of cooperation, and I think it could be hugely beneficial to the policymaking community and to the nonprofit organizations that want to deal with all of these issues as well.

Clemens: It’s a great effort, and it’s nice to have an area where everyone gets something out of it.

Groshen: Absolutely. It’s a great example that this is possible.

Clemens: Thanks for taking the time today with me and with Equitable Growth.

Groshen: Thanks for the opportunity.
