The misplaced debate about job loss and a $15 minimum wage

Overview

The leading criticism of the “Fight for $15” campaign to raise the federal minimum wage to $15 an hour is the presumed loss of jobs. Employers, the argument goes, would eliminate some workers or reduce their hours in the short term and, in the longer run, further automate their operations so that they need fewer low-wage workers in the future. For many leading minimum wage advocates, even a gradually phased-in $12 wage floor would take us into “uncharted waters” that would be “a risk not worth taking.”

On the other side is the long historical concern with making work “pay,” even if that means some job loss. In this view, the most important consideration is the overall employment impact on low-wage workers, after accounting for the additional job creation that comes with higher consumer spending from higher wages, which will almost certainly at least offset any direct initial job losses. More important still, in this view, are the likely huge overall net benefits of a large increase for minimum-wage workers and their families.

If we are serious about job opportunities for low-wage workers, then there are many effective ways to compensate those who lose their jobs, ranging from expansionary macroeconomic policy and increased public infrastructure spending to more generous unemployment benefits and, above all, public-sector job creation. A related issue is whether it makes moral, economic, and fiscal sense to maintain a low federal minimum wage and then ask taxpayers to subsidize the employers of low-wage workers by propping up the incomes of poor working families only via means-tested programs such as the Earned Income Tax Credit and supplemental nutrition assistance.

The debate has been, effectively, a stalemate, with the federal minimum wage set at extremely low levels ($7.25 since 2009) by both historical and international standards. Part of the explanation for our persistent failure to establish a minimally decent wage floor at the federal level has been the way the discourse has been framed—even by many of the strongest advocates for a substantially higher minimum wage.

In recent years, the best evidence shows that moderate increases from very low wage floors have no discernible effects on employment, which has helped make the case for substantial increases in the minimum wage. But the very strength of this new evidence—research designs that effectively identify employment effects at the level of individual establishments—has contributed to the adoption of a narrow standard for setting the “right” legal wage floor, defined as the wage that previous research demonstrates will pose little or no risk of future job loss, anywhere. For all sides, the central question has become: Whose estimate of the wage threshold at which there is no job loss whatsoever is the most credible?

Some economists, for example, point to existing evidence that the effects on employment when the minimum wage is increased within the $6-to-$10 range are minimal. Yet other researchers continue to argue, with credible statistical support, that sizable increases within this $6-to-$10 range do cause at least some job loss in some establishments in some regions, even if limited to high-turnover teenagers.

But there certainly is no evidence that can be relied upon to identify the no-job-loss threshold for a legal wage floor that would apply to the entire United States—the wage below which it is known that there is little or no risk of job loss anywhere, and above which there is known to be a risk of job loss that is high enough to be not worth taking. The only truly reliable way to do this would be to regularly increase the federal minimum wage while carefully monitoring the employment effects, much as the United Kingdom’s Low Pay Commission has done for the minimum wage that was instituted there in 1999.

There are different stakeholders in this debate. On one side are academic economists who care deeply about empirical confirmation of price-quantity tradeoffs and restaurant owners who care just as much about their profit margins. On the other side are workers and their advocates who seek the establishment of a minimum living wage. Given the many parties with a big stake in the outcome, relying on evidence-based criteria about job loss for setting the wage floor all but guarantees unresolvable controversy.

The methodological double bind in setting the minimum wage

Then there is the methodological problem—a classic case of “Catch-22.” Because the identification of the wage at which zero job loss can be expected must be evidence-based, there is no way to establish the higher nationwide wage floors necessary for empirical tests. Other places have enacted higher minimum wages—think Santa Monica, Seattle, New York state, France, Australia, or the United Kingdom—but they would face the same problem if they relied exclusively on zero job loss as the criterion for the proper wage floor. In practice, high minimum wage locations have relied on other criteria when making the political choice to set the legal wage, namely a wage that more closely approximates a minimum living wage than what the unregulated market generates.

In practical terms, local and state governments’ past reliance on statistical tests from other jurisdictions not only means that we must assume those tests are directly applicable (why would evidence from Seattle, New York state, or the United Kingdom be a reliable guide to the effects on the entire U.S. labor market?), but also requires that places imposing a no-job-loss standard always lag far behind the leaders, effectively condemning them to set the wage floor well below the actual wage that would start generating job loss. In short, the no-job-loss criterion cannot stand on its own as a coherent and meaningful standard for setting the legal wage floor, and by relying on old statistical results from other places, it ensures a wage that is too low on its own terms.

Ignoring the net benefits of raising the minimum wage

When the criterion for raising the minimum wage is concerned only with the cost side of an increase, the costs of some predicted job losses are all that matters. If the wage floor is set above the no-job-loss level, what kinds of jobs will be lost? Who will be the job losers? What alternatives were available to them? These are the kinds of questions that must be asked to determine the costs of minimum wage-related job losses. But there are obviously benefits to raising the legal wage floor. Shouldn’t they be counted and compared to the costs?

Those benefits are evident directly for the workers receiving wage increases as a result of a rise in the minimum wage, either because they are earning between the old minimum wage and the new one (say, between $7.25 and $15) or because they earn a bit above the new minimum wage and employers increase wages to maintain wage differentials among workers by skill or seniority. The benefits also are evident for taxpayers: With a much higher minimum wage, there would be less need to rely on means-tested redistribution to increase the after-tax-and-benefit incomes of working families.

Forgetting the ethical and efficiency arguments for raising the minimum wage

Relying on the no-job-loss criterion for setting an appropriate federal wage floor entirely ignores the main traditional justification for the minimum wage: the moral, social, economic, and political benefits of a much higher standard of living from work for tens of millions of workers. On both human rights and economic efficiency grounds, workers should be able to sustain at least themselves and ideally their families. And on the same grounds, it is preferable for them to do so from their own work rather than from either tax-based public spending or private charity.

It is hard to put this argument for a living wage better than Adam Smith did more than two centuries ago:

A man must always live by his work, and his wages must at least be sufficient to maintain him. They must even upon most occasions be somewhat more; otherwise it would be impossible for him to bring up a family…. No society can surely be flourishing and happy, of which the far greater part of the members are poor and miserable.

A public policy straitjacket

Determining a suitable federal minimum wage based solely on a zero-job-loss rule is a public policy straitjacket that would effectively rule out any significant increase in the wage floor above the one that already exists. From a historical perspective, strict adherence to such a policymaking criterion would have made it impossible to ban child labor (job losses!) or to adopt many critical environmental and occupational health and safety regulations. It would also foreclose any consideration of policies like paid family leave, which exists in every other affluent country.

Conclusion

Breaking out of this public policy straitjacket requires policymakers to rethink their criteria for raising the minimum wage. It also means that economists must shake off their fear of challenging the prevailing orthodoxy—a no-immediate-harm-to-anyone way of thinking—and see the longer-term benefits to millions of workers. It is estimated that the move to a $15 minimum wage by both California and New York state will directly raise pay for over one-third of all workers.

If we really care about maximizing employment opportunities then we should not hold a decent minimum wage hostage to the no-job-loss standard. Rather, we should put a much higher priority on full-employment fiscal and monetary macroeconomic policy, minor variations of which would have massively greater employment effects than even the highest statutory wage floors that have been proposed.

But it is also well within our capabilities to counter any job loss linked to the adoption of a minimum living wage with what the prominent economist John Bates Clark in 1913 called “emergency relief”: extended unemployment benefits, education and training subsidies, and public jobs programs. A minimum living wage combined with other policies common throughout the affluent world, such as meaningful child cash allowances, would put the United States back among other rich nations that promote work incentives while all but eliminating both in-work poverty and child poverty. It would put the country into waters that most other affluent nations have charted and are already navigating.

—David Howell is a professor of economics and public policy at The New School in New York City. This note reflects and builds on the material that appears in the working paper published by the Washington Center for Equitable Growth, “What’s the Right Minimum Wage? Reframing the Debate from ‘No Job Loss’ to a ‘Minimum Living Wage,’” co-authored with Kea Fiedler and Stephanie Luce.

Photo Uncredited, Associated Press

U.S. top one percent of income earners hit new high in 2015 amid strong economic growth

Incomes for the top 1 percent of earners in the United States hit a new high last year, according to the latest data from the U.S. Internal Revenue Service. The bottom 99 percent of income earners registered their best real income growth (after factoring in inflation) in 17 years, but the top 1 percent did even better. The latest IRS data show that incomes for the bottom 99 percent of families grew by 3.9 percent over 2014 levels, the best annual growth rate since 1998, but incomes for families in the top 1 percent of earners grew even faster, by 7.7 percent, over the same period. (See Figure 1.)

Figure 1

Overall, income growth for families in the bottom 99 percent was good again in 2015, as it had been in 2014, marking the second year of real recovery from the income losses sparked by the Great Recession of 2007-2009. After a large decline of 11.6 percent from 2007 to 2009, real incomes of the bottom 99 percent of families registered a negligible 1.1 percent gain from 2009 to 2013, and then grew by 6.0 percent from 2013 to 2015. Even so, a full recovery in income for the bottom 99 percent remains elusive. Six years after the end of the Great Recession, those families have recovered only about 60 percent of the income losses they suffered in that severe economic downturn.

In contrast, families at or near the top of the income ladder continued to power ahead, doing substantially better in 2015 than those below them. The share of income going to the top 10 percent of income earners—those making on average about $300,000 a year—increased to 50.5 percent in 2015 from 50.0 percent in 2014, the highest ever except for 2012. The share of income going to the top 1 percent of families—those earning on average about $1.4 million a year—increased to 22.0 percent in 2015 from 21.4 percent in 2014.

Income inequality in the United States persists at extremely high levels, particularly at the very top of the income ladder. Figure 1 shows that the incomes (adjusted for inflation) of the top 1 percent of families grew from $990,000 in 2009 to $1,360,000 in 2015, growth of 37 percent. In contrast, the incomes of the bottom 99 percent of families grew by only 7.6 percent, from $45,300 in 2009 to $48,800 in 2015. As a result, the top 1 percent of families captured 52 percent of total real income growth per family from 2009 to 2015, while the bottom 99 percent of families got only 48 percent. This uneven recovery unfortunately continues a long-term widening of inequality that dates to 1980, when the top 1 percent of families began to capture a disproportionate share of economic growth.
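
As a rough consistency check on that 52 percent figure (an illustrative back-of-the-envelope calculation, not part of the original analysis), weight each group’s average income gain by its share of families:

$$\frac{0.01\,(1{,}360{,}000 - 990{,}000)}{0.01\,(1{,}360{,}000 - 990{,}000) + 0.99\,(48{,}800 - 45{,}300)} = \frac{3{,}700}{3{,}700 + 3{,}465} \approx 0.52$$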

The 2015 income numbers were built using the new filing-season statistics by size of income published by the Statistics of Income division of the IRS. These statistics can be used to project the distribution of incomes for the full year. We have used these new statistics to update our top income share series for 2015, which are part of our World Top Incomes Database. These statistics measure pre-tax cash market income excluding government transfers such as the disbursal of the earned income tax credit to low-income workers.

Timely statistics on economic inequality are key to understanding whether and how inequality affects economic growth. Policymakers in particular need to grasp whether past efforts to raise taxes on the wealthy—specifically the higher tax rates for top U.S. income earners enacted as part of the 2013 federal budget deal struck by Congress and the Obama administration—are effective at slowing income inequality.

The latest data from the IRS suggest the effects of the 2013 reforms on income inequality proved fleeting. There was a dip in pre-tax income earned by the top 1 percent in 2013, yet by 2015 top incomes were once again on the rise—following a pattern of growing income inequality stretching back to the 1970s.

—Emmanuel Saez is a professor of economics at the University of California, Berkeley and a member of the Washington Center for Equitable Growth’s steering committee.

Photo by Steve Johnson, via flickr

Working mothers with infants and toddlers and the importance of family economic security

Anne Quirk and her 11-month-old son Kieran play in the front yard of their home in Providence, R.I.

Overview

For families in the United States with children ages five and under—whether in married- or single-parent homes—mothers have been essential to bolstering economic security. Mothers’ increased working hours have helped stabilize and boost family income. In the face of decreasing economic security, though, these large increases in hours worked by mothers—especially in households with young children, who require physical and emotional care and nurturing—come at a price: time.

As more mothers spend their days outside of the home trying to deliver much-needed financial stability, we need to understand the consequences of their work. As Heather Boushey documents in her book, “Finding Time: The Economics of Work-Life Conflict,” families now rely on those added hours and earnings of women. But for different types of families, the transformations in women’s roles at home and at work mean different things. Especially for families with an infant or preschool-aged child, the challenges of addressing work-life conflicts can be acute. Without sufficient social infrastructure to help while parents are at work—such as paid family leave, paid sick days, flextime, predictable schedules, childcare, or universal high-quality prekindergarten programs—families are increasingly struggling to balance economic security with caregiving at home.

This issue brief builds on the findings from “Finding Time” and explores how women’s increased hours of work and higher earnings have affected incomes for families with young children. We unpack the role that women have played in helping stabilize family incomes across married-parent and single-parent families with children age five or younger. Our findings are telling:

  • Across the board, married-parent families with young children have higher incomes than single-parent families, although between 1979 and 2013 both married- and single-parent families increased their incomes at similarly low rates.
  • While in both 1979 and 2013 women from married-parent families worked more hours than single mothers with young children, both groups of women saw similar and significant increases in their hours of work across all income groups.
  • The rise in women’s hours has been important for trends in family income. Between 1979 and 2013, in both married- and single-parent families, women’s earnings from higher wages and added hours were positive across all income groups. In fact, for families with young children, women’s earnings from more working hours in particular were substantial.

The changing role of women and the composition of families

Over the past four decades in the United States, the composition of families with children has changed markedly. Most importantly, there has been an increase in the diversity of family types. There is no longer a dominant “typical” family, especially not one with a breadwinning father, a care-taking mother, and their dependent children. Instead, there is a wide array of family types. Our definition of who comprises a family now—more than ever before—has expanded to include singles living alone, biologically unrelated individuals, or even a boarder who joins in on family dinner and helps out with homework.

Trends in marriage and fertility have both contributed to greater family complexity. Marriage (if it happens at all) happens later in life, and the median age of first marriage is now 29 for men and 27 for women—higher than at any point since the 1950s. And, of course, same-sex marriage is now legal across the nation. At the same time, many women are delaying childbearing, and the typical woman now has her first child at age 26. Further, children are increasingly being born into families with unmarried parents: In 2014, 40.3 percent of all births were to an unmarried mother. The result is greater complexity in family types.

A second set of changes concerns who works and what that looks like across the income spectrum. It used to be that most children were raised in married-couple families, whether at the top or the bottom of the income ladder. Now, however, while families at the top continue to raise children inside marriage—typically with both parents holding down a fairly high-paying job—children in families at the bottom of the income distribution—and now many in the middle—are living with a single, working parent, most often a mother. (See Figure 1.)

Figure 1

 

While families have become more complex, incomes have become more unequal. Faced with greater economic insecurity, families had to find ways to cope. One strategy was for women to increase their engagement with the economy. Initially, the “American wife” with school-age children migrated to the workforce, and soon after, those with even younger children joined in. As women became more integrated into the workforce, they eventually became their families’ breadwinners, with two-thirds of mothers now either the main breadwinner or sharing that responsibility with their husbands.

Using data from the Current Population Survey, we chronicle how family incomes changed between 1979 and 2013 for low-income, middle-class, and professional families by family type. Specifically, we decompose these changes over time into differences in male earnings, female earnings from higher wages, female earnings from more hours worked, and other sources of income, which include Social Security and pensions. (See Box.)

Defining income groups and family types

The analysis in this issue brief follows the same methodology presented in “Finding Time.” For ease of exposition, we use the term “family” throughout the brief even though the analysis is done at the household level. We split the households in our sample into three income groups:

  • Low-income households are those in the bottom third of the income distribution, earning less than $25,440 per year in 2015 dollars.
  • Professional households are those in the top fifth of the income distribution who have at least one household member with a college degree or higher; these households have an income of $71,158 or higher in 2015 dollars.
  • Everyone else falls in the middle-class category.

In this issue brief, we also refer to two different family types that have young children (age five or younger):

  • Married-parent families: The parents are married and at least one of their own young children is present in the household. (Unfortunately, at this time, the data limit our analysis to heterosexual couples only.) Within these households, others—older children or adults, related or not, including adult-age children—may be present and may contribute to the family’s income. Most families in this category, however, consist of two parents and their children alone.
  • Single-parent families: Either the mother alone or the father alone is present with at least one young child in the household. Within the household, there are no other adults, related or unrelated, who have earnings from a job or income from other sources.

Our sample only includes working-age families, where at least one person in the household is between the ages of 16 and 64.

In the United States, only 19.1 percent of families have a child under age six. Table 1 shows the distribution of married-parent and single-parent families across the income spectrum with and without one or more young children at home. Due to small sample sizes for certain groups, we were unable to conduct our analysis for a variety of family types, but we can break down the shares of different types of families by income group. For the purposes of this analysis, we select households where both parents are married from the “both parents only” and “both parents living with other adults” categories to get our “married-parent” families. For our “single-parent” families, we select households from the “single-parent” category. These are bolded in Table 1.

Table 1

 

Table 2 breaks down the sample for this analysis, showing the share of each of these family types across the three income groups for 2013. Notably, the share of single-parent families decreases as we move up the income ladder. We exclude single-parent families with young children from our analysis of professional families, as the sample size is too small.

Table 2

 

Setting some context

Before turning to the decomposition analysis, let’s first set some broad context for how family incomes and women’s hours changed between 1979 and 2013 for low-income, middle-class, and professional families with young children.

How did income change between 1979 and 2013 for families with young children?

Between 1979 and 2013, while married-parent families had higher family income than single-parent families with young children, both types of families saw similar rates of growth in their income. (See Figure 2.)

Figure 2

 

Low-income families

Figure 2 shows that between 1979 and 2013, low-income families with young children—both married and unmarried—saw a slight rise in their incomes. Married-parent families with young children earned substantially more than single-parent families. In 1979, low-income married-parent homes, on average, brought in $32,965 annually, compared to the $21,443 earned by single-parent families with children age five and under. By 2013, these disparities persisted, with married-parent families earning $36,606 and single-parent families earning $21,848 annually.

The gap in average annual income between married-parent and single-parent families can often—but not always—be explained by simple addition: Married couples, now more than ever before, often have two sources of income. Although low-income single-parent families had a smaller annual income, on average, than married-parent families, between 1979 and 2013 both groups’ incomes grew at relatively small rates (1.9 percent and 11.0 percent, respectively). These rates of income growth for families with young children indicate that incomes essentially stalled.

Middle-class families

Figure 2 also shows that for middle-class families with young children, income rose between 1979 and 2013. As was the case in the low-income group, middle-class married-parent families with young children earned more, on average, in both 1979 and 2013 than single-parent families. Yet despite earning more, married-parent families’ incomes grew at rates similar to single-parent families’: Between 1979 and 2013, married- and single-parent families with children age five and under grew their incomes by 26.4 percent and 29.0 percent, respectively.

Professional families

Across the board, Figure 2 also shows that between 1979 and 2013, professional families with young children saw their incomes soar, and married-parent families with young children, in particular, saw outstanding gains. In 1979, professional married-couple families with young children earned, on average, $143,099. By 2013, their average annual income had grown by 65.2 percent to $236,400. The gap between married-parent professional families’ incomes and those of low-income and middle-class families has widened markedly over the past four decades.

How did women’s working hours change between 1979 and 2013 for families with young children?

Taking a look at the hours that women from different families and income groups work gives us some insight into why families with young children increased their incomes between 1979 and 2013. Across the board, over the past four decades, women from married-parent and single-parent families grew their hours of work markedly. (See Figure 3.)

Figure 3

 

Low-income families

Figure 3 shows that in 1979, mothers of young children (and other working women) in low-income married-couple families worked 590 hours annually (about 11.4 hours per week) and single mothers worked 394 hours (7.6 hours per week). By 2013, women in low-income married- and single-parent families with young children had grown their annual hours of work by 58.8 percent and 89.8 percent, to 937 hours (18.0 hours per week) and 747 hours (14.4 hours per week), respectively.

Middle-class families

Just like women from low-income families, Figure 3 shows that women from middle-class families with young children greatly increased their hours at work between 1979 and 2013. In 1979, women from middle-class married-parent and single-parent families with children age five and under worked an annual average of 965 hours (or 18.6 hours per week) and 343 hours (6.6 hours per week), respectively. In 2013, women from married-parent families with young children had increased their hours of work by 58.1 percent to 1,525 hours annually (29.3 hours weekly), and mothers in middle-class single-parent families had more than doubled their hours (an increase of 114.5 percent) to 736 (14.2 hours weekly).

Professional families

As with the other income groups, women in professional families with children age five and under saw large increases in their hours of employment. Figure 3 shows that among professional families, women in married-couple families worked an annual average of 1,072 hours (20.6 hours per week) in 1979, growing their hours by 67.0 percent to 1,791 annually (34.4 hours weekly) by 2013.

Though mothers (and other women) in professional married-couple families work more hours than those in middle-class and low-income families, the rates of increase are relatively comparable across the board, corroborating the narrative that more and more women with young children have joined the workforce.

Decomposing the changes in income for families with young children

In Figure 2, we saw that between 1979 and 2013, across the board, married-parent families had higher incomes than single-parent families with young children but similar rates of income growth. Figure 3 highlights that women from families with young children have been working more hours and perhaps have seen some of the largest increases in their hours at work.

Given these broad trends, a natural question arises about how the large increases in women’s hours relate to the large increases in family income for these families with young children. To unpack this correlation, we decompose the changes in families’ average household income between 1979 and 2013 into male earnings, female earnings, and income from other non-employment-related sources, which include Social Security and pensions.

Specifically, we divide female earnings into two parts: the portion due to women working more hours per year and the portion due to women earning more per hour. To calculate the female earnings stemming directly from additional hours worked, we take the difference between 2013 female earnings and the hypothetical earnings of women if they had earned 2013 hourly wages but worked the same hours as women did in 1979. (For more on how we did this calculation, please see our Methodology.) We find that within families with young children across the income ladder, mothers’ added hours have almost single-handedly driven income growth for low-income and middle-class families, while women’s overall earnings gains have outweighed men’s positive earnings gains at the top. (See Figure 4.)
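
Stated as a formula (an illustrative restatement of the calculation just described, using notation of our own choosing), let $w_{2013}$ be the 2013 average hourly wage and $h_{1979}$, $h_{2013}$ the average annual hours in each year. Then

$$\Delta E_{\text{hours}} = w_{2013}(h_{2013} - h_{1979}), \qquad \Delta E_{\text{wages}} = (w_{2013} - w_{1979})\,h_{1979},$$

and the two components sum to the total change in average female earnings, $w_{2013}h_{2013} - w_{1979}h_{1979}$.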

Figure 4

 

Low-income families

Figure 4 shows that for low-income families with young children, both women’s earnings from more hours and from higher wages protected against falling family incomes between 1979 and 2013. For married- and single-parent families with children five and under, men’s earnings pulled income down to varying degrees. Men in married-parent families and fathers in single-parent families lost $1,748 and $1,938 in earnings, respectively, between 1979 and 2013.

In contrast, women’s added hours and higher pay boosted incomes in both low-income family groups. For low-income married-parent families with young children, women’s higher wages increased family incomes by an average of $1,013, while women’s added hours grew family income by an average of $3,541. Single-parent families saw similarly substantial gains from mothers’ added hours: Between 1979 and 2013, women’s higher wages contributed $224 to earnings, and added hours boosted incomes by $4,114.

The changes in “other income” are also of interest. For low-income married-parent families, other income grew by $860, but for single-parent families, other income decreased by $1,997. For single-parent families, this decrease in other sources of income—which could include federal transfers such as supplemental nutrition assistance and Temporary Assistance for Needy Families as well as Social Security benefits—indicates that these policies may not be adequately supportive of or sensitive to the needs of those parenting alone.

Middle-class families

Across the board, middle-class families, like low-income families, saw increases in their incomes largely due to the contributions of women and their increased labor force participation. Figure 4 shows that for both middle-class married- and single-parent families with young children, male earnings made relatively small, positive additions of $1,205 and $3,706, respectively.

Women’s earnings, in contrast, were positive and large. Women’s earnings from higher wages added $6,041 and $2,768 for married-parent and single-parent families, respectively. The additions due to women’s added hours at work were more impressive still, as women from married- and single-parent homes secured an additional $11,380 and $11,482, respectively.

Other income also helped increase incomes across the two middle-class family types.

Professional families

As we saw in Figures 2 and 3, mothers in professional married-parent families with young children not only work the longest hours but have also seen their family incomes grow considerably. These changes are well captured when we decompose family income, where we find that both women’s added earnings from higher wages and from more hours are important. At the same time, we see that men have made near-equal contributions to their families’ income growth as well.

Figure 4 shows that between 1979 and 2013, men in professional married-parent families with young children added $39,540 to family income. Despite the immense boost from male earnings, female earnings added the most to family income—a total of $52,738, which breaks down into $21,965 from higher wages and $30,773 from more hours worked.

Conclusion

Our findings tell us that working mothers with children ages five and under are indispensable to their families’ bottom line. So what does that mean for the other indispensable role played by mothers—as caregivers? Policymakers need to consider how a full panoply of policies—such as universal high-quality childcare and prekindergarten programs, paid family and medical leave, and flexible scheduling at work—can help these mothers balance their roles as productive members of our workforce and as caregivers.

It’s not enough just to have these policies in place, though. How we address the time-squeeze on U.S. families must be sensitive to the changing definitions of what it means to be a family in the United States and what that tangibly means for the way in which they give care.

—Heather Boushey is the Executive Director and Chief Economist at the Washington Center for Equitable Growth and the author of the book from Harvard University Press, “Finding Time: The Economics of Work-Life Conflict.” Kavya Vaghul is a Research Analyst at Equitable Growth.

Acknowledgements

The authors would like to thank John Schmitt, Ben Zipperer, Dave Evans, Ed Paisley, David Hudson, and Bridget Ansel. All errors are, of course, ours alone.

Methodology

The methodology used for this issue brief is identical to that detailed in the Appendix to Heather Boushey’s “Finding Time: The Economics of Work-Life Conflict.”

In this issue brief, we use the Center for Economic and Policy Research extracts of the Current Population Survey Annual Social and Economic Supplement for survey years 1980 and 2014 (calendar years 1979 and 2013). The CPS provides data on income, earnings from employment, hours, and educational attainment. All dollar values are reported in 2015 dollars, adjusted for inflation using the Consumer Price Index Research Series available from the U.S. Bureau of Labor Statistics. Because the Consumer Price Index Research Series only includes indices through 2014, we used the rate of increase between 2014 and 2015 in the Consumer Price Index for all urban consumers from the Bureau of Labor Statistics to scale up the Research Series’ 2014 index value to a reasonable 2015 index estimate. We then used this 2015 index value to adjust all results presented.
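
A minimal sketch of that index extrapolation in Python, using placeholder index values rather than the actual BLS figures:

```python
# Placeholder values for illustration only -- not the actual BLS index numbers.
cpi_u_2014 = 236.7       # CPI-U annual average, 2014 (hypothetical)
cpi_u_2015 = 237.0       # CPI-U annual average, 2015 (hypothetical)
cpi_u_rs_2014 = 347.6    # CPI-U-RS index value, 2014 (hypothetical)

# Scale the 2014 CPI-U-RS value by the 2014-to-2015 growth in the CPI-U.
cpi_u_rs_2015_est = cpi_u_rs_2014 * (cpi_u_2015 / cpi_u_2014)

def to_2015_dollars(amount, index_in_year):
    """Convert a dollar amount from its survey year into 2015 dollars."""
    return amount * (cpi_u_rs_2015_est / index_in_year)
```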

For ease of exposition, throughout this brief we use the term “family,” even though the analysis is done at the household level. According to the U.S. Census Bureau, in 2014 two-thirds of households were made up of families, defined as at least one person related to the head of household by birth, marriage, or adoption.

We divide our sample into three income groups—low-income, middle-class, and professional households—using the definitions outlined in “Finding Time.” For calendar year 2013, the last year for which we had data at the time of this analysis, we categorized the income groups as follows:

  • Low-income households are those in the bottom third of the size-adjusted household income distribution. These households had an income below $25,440 (as compared to $25,242 and below for 2012). In 1979, 28.3 percent of all households were low-income, increasing to 29.7 percent in 2013. These percentages are slightly lower than one-third because the cut-off for low-income households is based on household income data that include persons of all ages, while our analysis is limited to households with at least one person between the ages of 16 and 64. The working-age population (ages 16 to 64) typically has higher incomes than older households, and as a result, somewhat fewer working-age households fall into this low-income category.
  • Professionals are those households that are in the top quintile of the size-adjusted household income distribution and have at least one member who holds a college degree or higher. In 2013, professional households had an income of $71,158 or higher (as compared to $70,643 or higher in 2012). In 1979, 10.2 percent of households were considered professional, and by 2013, this share had grown to 16.8 percent.
  • Everyone else falls in the middle-class category. For this group, household income ranged from $25,440 to $71,158 in 2013 (as compared to $25,242 to $70,643 in 2012); the upper threshold, however, may be higher for households without a college graduate but with a member who has an extremely high-paying job. This explains why the share of households in the middle-income group exceeds 50 percent: It declined from 62 percent in 1979 to 53.4 percent in 2013.

Note that all cut-offs above are displayed in 2015 dollars, using the inflation adjustment method presented earlier.

In our analysis, we limit the universe to persons with non-missing, positive income of any type. This means that even if a person does not have earnings from some form of employment but does receive income from Social Security, pensions, or any other source recorded by the CPS, they are included in our analysis.

These data are decomposed into income changes between 1979 and 2013 for low-income, middle-class, and professional families. The actual household income decomposition uses a simple shift-share analysis to find the differences in earnings between 1979 and 2013 and calculate the extra earnings due to increased hours worked by women.

To do this, we first calculate the male, female, and other earnings by the three income categories. To calculate the sex-specific earnings per household, we sum the income from wages and income from self-employment for men and women, respectively. The amount for other earnings is derived by subtracting the male and female earnings from total household earnings. We average the household, male, female, and other earnings by each income group for 1979 and 2013, and take the differences between the two years to show the raw changes in earnings by each income group.

To find the change in hours, for each year by household, we sum the total hours worked by men and women. We average these per-household male and female hours, by year, for each of the three income groups.

Finally, we calculate the counterfactual earnings of women. We take the 2013 earnings per hour for women and multiply it by the 1979 hours worked by women. We then subtract this counterfactual from actual female earnings in 2013, arriving at the female earnings due to additional hours.
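
A minimal sketch of this shift-share calculation in Python (assuming a household-level table with hypothetical column names; the actual CPS extracts, sample restrictions, and survey weights are omitted):

```python
import pandas as pd

def decompose_female_earnings(df: pd.DataFrame) -> dict:
    """Shift-share sketch of the decomposition described above.

    Assumes hypothetical columns: 'year' (1979 or 2013),
    'female_earnings' (annual dollars), 'female_hours' (annual hours).
    """
    means = df.groupby("year")[["female_earnings", "female_hours"]].mean()
    e79, h79 = means.loc[1979]
    e13, h13 = means.loc[2013]

    wage_2013 = e13 / h13              # average 2013 earnings per hour
    counterfactual = wage_2013 * h79   # 2013 wages at 1979 hours
    from_hours = e13 - counterfactual  # earnings gain due to added hours
    from_wages = counterfactual - e79  # remainder: higher hourly pay

    return {
        "total_change": e13 - e79,
        "from_added_hours": from_hours,
        "from_higher_wages": from_wages,
    }
```

By construction, the two components sum to the total change in average female earnings, mirroring the decomposition reported in Figure 4.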

We repeated this analysis for different family types with children age five and below (young children). The first family type we analyzed was married-parent families—households that include a mother and father who are married and their own young child. These married-parent households may also include older children or adults, both related and unrelated, including adult children, some of whom may be earning and contributing to household income.

The second family type we observed was single-parent families—households where either a mother and her own young child or a father and his own young child is present. This family type excludes other adults who contribute personal income of any type to household income. Because of small sample sizes, single-parent families were excluded from the analysis of professional families. While these family-type categories do capture some of the nuance in family structures, we acknowledge that they are oversimplifications of complex family inter-relationships and that they do not capture the diversity of family types that exist today. Breaking the categories down further, however, would not give us a large enough sample for our analysis.

One important point to note is that because of the nature of this shift-share analysis, the averages do not exactly tally with the raw data. Therefore, when presenting average income, we use the sum of the decomposed parts of income. While economists typically show median income, for ease of exposition and because of the constraints of the decomposition analysis, we show averages so that the data are consistent across figures. Another important note is that we make no adjustments for changes over time in the topcoding of income, which likely has the effect of exaggerating the increase in professional families’ incomes relative to the other two income groups.

Equitable Growth in Conversation: An interview with Claudia Goldin

“Equitable Growth in Conversation” is a recurring series where we talk with economists and other social scientists to help us better understand whether and how economic inequality affects economic growth and stability.

In this installment, Equitable Growth’s Executive Director and Chief Economist Heather Boushey talks with economist Claudia Goldin about the gender wage gap and some of its implications. Read their conversation below.


Heather Boushey: I want to focus on your work on the gender wage gap. Lots of us have been thinking about this for a long time and noticed that you have gotten a lot of attention in the press for your recent research on this, so I wanted to ask you some questions teasing out both what it is and what some of the implications are.

In your paper, “A Grand Gender Convergence: Its Last Chapter”—and I love the title of that—you argue that the gender wage gap cannot be explained by differences in productivity between men and women. Instead, when we look at occupations, we see that there is a price paid for flexibility in the workplace. And given what people are thinking about in terms of policy, that seemed like a really good place to start our conversation today. Can you tell me a little bit more about this result?

Claudia Goldin: So the key finding is that there is a gender wage gap. But the question is why? We know from lots of people’s work that we used to be able to squeeze a lot of the gap away due to differences in education—differences in your college major, whether you went to college or not, whether you have a Ph.D., an M.D., whatever. We were also able to squeeze a lot away on the basis of whether you had continuous work experience or not.

Today, we are not able to squeeze much away. In fact, women on average have more education than men. The quantities [of women with college degrees] are higher, and even the qualities [of degrees] aren’t that much different anymore. And the extent of past labor force participation is pretty high. Lifecycle labor force participation for women is very, very high. So we can’t squeeze that much away anymore.

What’s also really striking is that, given lots of factors such as an individual’s education level, many occupations have very large gender gaps and some occupations have very small gender gaps. Looking at occupations at the higher part of the income spectrum, which is also the higher part of the education spectrum—occupations where about 50 or 60 percent of all college graduates are—we see that the biggest gaps are in occupations in the corporate and finance field, in law, and in health occupations that have high amounts of self-employment. And the smallest gaps are found in occupations in technology, in science, and in lots of the health occupations where there is a very low level of self-employment.

That’s sort of a striking finding.

Then when we dig deeper and look at particular occupations—in law, for example, and in the corporate and finance field—we see a couple of things. We see that differences in hours have very high penalties even on a per hour basis. Differences in short amounts of time off have very high penalties, unlike in other fields. And many of the differences occur at the event of or just after the event of first birth. So there is something that looks like women disproportionately, relative to men, are doing something different after they have kids.

When we look at men and women in the finance and corporate fields who haven’t taken any time off and among the women who don’t have kids, we find that the differences are really tiny. So those are the differences that are coming about, not surprisingly, from the fact that women are valuing predictability, and flexibility, and many other aspects of the job that many men are not valuing.

So, looking at data for the United States, we find that this change from being an employee, a worker, and a professional, to being an employee, a worker, a professional, and a parent has a disproportionate impact on women.

Now one might say, isn’t that because the United States has really lousy coverage in terms of parental leave policy and in terms of subsidized daycare? Well, there are two very interesting papers, one for Sweden and one for Denmark. Both countries have policies that are just about the best in the world, and these studies, using the extraordinary cradle-to-grave data that those countries have, can do an event study of when the widening between what men are getting versus women occurs—at the event of having a child.

And women are moving into occupations that have more flexibility, but they are working fewer hours and getting less per hour. And the same sorts of things are going on even in countries that have incredibly good parental leave policies, subsidized daycare, schools that appear to us to be better, and what we think of as social norms that are better.

Boushey: One of the things that you found in your research that you haven’t mentioned yet is this idea that some workers are more substitutable—this idea that the industries with a high level of self-employment play some role in the gender pay gap. Could you explain that a little bit?

Goldin: Well, it would be very nice for us to go to each one of these occupations, take part in them, and learn something about them. We can’t do that, so instead we use the O*NET database, which gives us a lot of information about what goes on in these occupations.

And in O*NET, there are certain characteristics of the occupations that seem to map very nicely into aspects that would appear to be important, such as how predictable the job is, what the time demands are, whether you have to deal with clients, or whether work relationships are important.

And much of that is related to the issue of whether, if an individual wants to leave work at 11 o’clock in the morning but do the same task at 11 o’clock at night, that’s severely penalized. It would be penalized if the individual can’t easily hand off work to someone else when it is needed at 11 a.m. That would be important if the fidelity of the information would be altered, if the client would feel that the individual wasn’t a very good substitute, and so on.

So using this information from O*NET, I find that the occupations that have the largest gender gaps are those that have the least predictability and the greatest time demands. And the occupations that have the smallest gender gaps are on the other side. It’s not necessarily causal, but it’s pretty good evidence that there is something going on.

And then I drill down deeper into particular occupations, such as the work that I have done on MBAs in the corporate and finance sector, and the longitudinal information that exists on lawyers. And finally, there’s a very interesting occupation that went through tremendous change during the 20th century and into the 21st century, and that’s pharmacy.

Pharmacists used to own their own businesses by and large, and they hired other pharmacists to work with them, often part-time. Many of these part-time workers were women, but there were few women who were owners. Well, ownership involves lots of responsibility, and as the owner, you are the residual claimant [the person with the last claim to the firm’s assets]. So in 1970 or so, women earned about 66 cents on the male dollar in pharmacy. Today, women working full-time, full-year get 92 cents on the male dollar, uncorrected for any other differences, and a lot more once other relevant factors are added.

There are three things going on here. One is that there is no longer a lot of self-employment. Pharmacists by and large are not working for independent pharmacies anymore. They are working for big chains, national chains, regional chains, world chains. So the residual claimant now is the owner of the stock. There is professional management, and then there are just people who work there who are pharmacists.

The second thing is that there is very good use of IT. Every pharmacist now knows all the prescriptions that you have under your health plan, not just the ones that were filled in that pharmacy. And the third thing is that the drugs themselves are highly standardized by and large, so it isn’t that you are very attached to a particular pharmacist because they fill your prescriptions better or because they know you better. Pharmacists are highly paid professionals, but they are very good substitutes for each other.

Boushey: I’m glad you brought that study up, because I was going to ask you about it. My great uncle was a pharmacist, so I also just find it personally a fascinating example.

If you look at O*NET and the kinds of things that you are measuring, it seems like there are some cases where it seems very logical—especially in the case of pharmacists—that the substitutability is related to the profitability of the firm. It seems like a real strong business case.

Have you found in your research examples where perhaps not the substitutability but the job requirements around predictability or schedules may be more about keeping some workers out than they are about what’s good for the firm?

Goldin: Well, I’m all ears. (Laughter.)

Boushey: Yeah, I don’t know that I have answers there. I just think it begs the question. And I don’t know if you have thought about how to discern that difference in terms of —

Goldin: It’s that firms are leaving very large amounts of money on the ground. And so, if they are able to do that, they are able to pay for their taste for discrimination, then they can [discriminate]. And so that’s what one would look for, whether there are invaders standing at the gates. And if there aren’t, then they can do that and get away with it.

But the question is, where are the invaders that should be standing at the gates?

Boushey: And if part of what you have found is that a lot of this happens right after a child, that’s an invader of a different kind, perhaps.

Goldin: What’s interesting in the case of the MBAs is that it’s not right after the kid. It’s like two years later.

Babies are easier to take care of than 2-year-olds, and so it’s not that the firm then says, “Aha, we have one of those that has kids. We’ll just make certain that she doesn’t get the clients.” And one hears a lot of those stories, and those are the ones that the HR people are always talking about and making certain that people in their firm don’t do that—don’t have sexist paternalism, as it’s called.

But that doesn’t seem to be what is going on. I’m not doubting that there isn’t some of that, but what seems to be going on is that the individual tries and tries—in our data at least, in the Chicago Booth [School of Business] data—and eventually it’s just too much. There are too many demands, so they decide to scale back somewhat.

Boushey: Then I guess there are two questions. It sounds like it is that scaling back that causes the gender pay gap, right? And what can we do about it?

Goldin: If a firm somehow believes, or it’s the case that right now, its production function is such that working 80 hours a week is worth a lot more than having two workers work 40 hours a week, then that produces non-linearities in pay and it leads to exactly what we are seeing. End of story.

Boushey: And on the policy side, it sounds like there isn’t a lot of incentive from the firm’s side to fix that.

Goldin: No, there’s a lot of incentive on the firm’s side. If I’m paying someone more than twice as much to work 80 hours a week as I’m paying two people to each work 40 hours a week, then I should think about ways of reducing my costs.

And if I am working people 80 hours a week and that leads people with skills, very expensive skills, to leave, then I should want to do something to keep them there and to figure out how to make certain that they aren’t working 80 hours a week.

I often hear how the CEO of a company has said, “We really want to keep our talent—women as well as men who don’t want to work 80 hours a week, who don’t want the pressure of being called up when they are at a soccer game with their kids, on a Sunday or a Saturday or an evening, or whatever.” The CEO will set down a policy to ensure that doesn’t happen, but then there are a lot of managers who don’t hear that or who claim they don’t hear that. So lots of firms hire HR people to go around and make certain that this is policed.

And these issues are present even in the military. Some time ago at a conference on workplace flexibility, Adm. Mike Mullen, former Chairman of the Joint Chiefs of Staff, essentially said “I’m having trouble doing it, and I’m the head of the entire military.”

So there are principal-agent problems that firms would like to rein in. So they are losing money.

Boushey: Yeah. Well, the federal government implemented a “right to request” policy in one of the agencies—I believe it was OPM, the Office of Personnel Management. I talked to them when they were starting to implement it, and the folks we were talking to were super excited. But then they told me, “Oh, yeah, we had some problems with middle management actually implementing it.” And then they stopped the experiment, and I never heard about it again.

Goldin: Yeah.

Boushey: And I think it’s a real challenge how firms are making that connection between that profit motive that the big guys are thinking about and what’s actually happening.

Goldin: Right. But there are lots of firms that have what they call work-life balance, or work-family balance; where, if you work at 11 at night versus 11 in the morning, that’s perfectly fine with them.

I was talking with a very senior partner at a well-known consulting firm once and I asked, “Well, what do you do when clients [call people up at 11 p.m.]?” And she said, “I call up the clients and I say, I have staff and they are not your slaves.” Well. (Laughter.)

Boushey: Good for her.

Goldin: Good for her, and right. But let’s just say that there are cases in which we don’t want someone to have a perfect substitute. I do not want my president, for example, to turn around and say, “oh, by the way, I really don’t like this unpredictability business. You know? That little red button on the phone—every now and again, I say, you know, I’m really not here right now.” (Laughter.)

Because there are cases in which that person better be on 24/7 and that’s it. And we know that in the world of work, those people get higher pay—or, in the case of our president, just get better ratings.

So there are going to be cases in which individuals who are willing to work long hours, work unpredictable hours, be on call, whatever we want to call it, are going to get more. And they are not going to be substitutable. And information is not going to flow perfectly, with total high fidelity.

The question is, what fraction of the occupations in the economy are like that? And I think you and I would agree that the fraction is probably a lot lower than appears to be the case right now.

Boushey: So what should folks who are thinking about policy do about this? Is there a role for us, or is this just a business case? Do they all have to learn this lesson on their own, or is there something policymakers can do?

Goldin: Yeah, we have a policy. It’s called public schools. We’ve had it for a very, very long time. We have public schools that get out nationwide at about 2:30 or 3:00, that end sometime in June, that begin school at 5 years old or 6 years old. None of that was ever discussed as being the optimal way to run schools.

It is suboptimal with respect to individuals who have kids, because kids are not one- or two-year capital goods. Family leave policy is not the only thing that’s going to help families with kids, because the kids live, I hope, for many, many years after they are 2 years old. That’s the policy.

Boushey: I love it. That’s a fantastic way to end this interview, and something I will take with me in my travels here in Washington. Thank you so much, Claudia.

Goldin: Thank you.

This interview has been edited for length and clarity.

Environmental regulation and technological development in the U.S. auto industry

Introduction

In 2013, the U.S. Chamber of Commerce commissioned a study aimed at undermining the U.S. Environmental Protection Agency's claims that its regulatory efforts, and those of other federal agencies, created thousands of jobs. Naturally, the study challenged EPA's findings, and it did so by questioning the agency's methods—particularly EPA's economic models—arguing that there was little evidence that the agency even tried to analyze the effects of its regulations on employment.1 This disagreement is hardly novel. Anti-regulatory efforts often highlight the job-stifling effects of regulation, while pro-regulatory positions often point to job growth and overall economic growth as benefits of regulation. These debates often hinge on profoundly different economic models of employment—and employment patterns are notoriously difficult to model in the first place.


Rather than add to this noise about economic modeling, in this paper I will discuss the role of regulation in technological development, looking specifically at the automobile industry. New technologies are clearly related to economic growth, but modeling that relationship is complicated and actually requires consideration of “counterfactuals,” or assessing whether certain technologies would have emerged without regulation. Considering the question of how new technologies developed in response to regulation and then how those technologies affect job growth is even more complicated and beyond the scope of this paper. That’s why this paper focuses on the question of how regulation, technological development, and economic growth are linked by examining a multi-part, historical case study of the automobile industry and environmental regulations.

This paper considers the role of initial efforts in California to address the problem of smog and the ways in which that relatively small-scale regulation subsequently modeled and then fed a national-scale regulatory effort leading to the creation of the U.S. Environmental Protection Agency and the Clean Air Act and its amendments around 1970. This story, however, only begins with the Clean Air Act and the EPA because things got more interesting as emissions-control technologies took on a life of their own in the ensuing decades. Engineers started with a variety of ways to meet the new environmental regulations of the 1970s, but by the 1980s, controlling the emissions of the passenger car became an interesting engineering problem unto itself and led to an increasingly important role for electronics in the automobile. This period is also explored in this report.

The paper finds that regulation played an important catalytic role—it forced automobile companies to start to address emissions and air pollution, a move unlikely without regulation—but also that the ensuing thread of technological innovation gained a life of its own as engineers figured out how to optimize the combustion of the automobile in the 1980s and ‘90s, specifically using computer technology. In short, what began as a begrudging effort to reduce harmful emissions evolved into a successful reshaping of the automobile into a computer with an engine and wheels.

Technology-forcing regulation, emerging technologies, and the entrepreneurial state


In this section, I will lay out three ideas that will guide the forthcoming case study. The first is that regulation aimed at reducing air pollution in the 1970s was designed to force the auto industry to develop new devices to control vehicle emissions and exhaust—it was not a specification of what these devices might be or how they should operate. This approach of forcing firms to develop new technologies that meet certain desired outcomes is now a common feature of debates about emerging technologies in the 21st century.

Second, an examination of the analyses aimed at unpacking the challenges faced by engineers who are obliged to develop emerging technologies is useful when considering the aims and effectiveness of technology-forcing regulation, as I do in this paper. And lastly, I will bring into the analysis the "entrepreneurial state" analytical framework of Mariana Mazzucato, a noted professor of the economics of innovation at the Science Policy Research Unit of the University of Sussex, as a way to consider the role of the state in encouraging technological development. Mazzucato writes about the various roles of the entrepreneurial state in building institutions to create long-term economic growth strategies, "de-risk" the private sector, and "welcome" or cushion short-term market failures.2 Rather than a laissez-faire state, an active, entrepreneurial state eases the route to economically important innovations produced by the private sector.

Technology-forcing regulation

Economists and others refer to legislation with a goal of forcing industry to develop new solutions to technological problems as “technology-forcing” regulation.3 The Clean Air Act of 1970, which targeted both stationary and mobile/non-point air pollution sources, is probably the most famous piece of technology-forcing legislation in U.S. history. The act specified that various industries meet standards for emissions for which no technologies existed when the act was passed. The Clean Air Act set standards that firms had to meet; importantly, the technology to be used to meet the standards was not specified, allowing firms to either develop or license a new device to add to the technological system in order to meet the stated criteria. Under this sort of technology-forcing regulation, both the electrical power utilities and the automobile industry developed a wide array of new exhaust-capturing and emission-modifying devices.

The legislative strategy of writing regulations that presented standards for which the current in-use technologies were insufficient was upheld by the U.S. Supreme Court in 1976 in the case of Union Electric Company v. Environmental Protection Agency. In this case, Union Electric challenged the Clean Air Act's provision that a state could shut down a stationary source of pollution in order to force it to comply with a standard for which a feasible, commercial remediation technology did not yet exist.4 The U.S. Supreme Court upheld the legislation: states could set otherwise-infeasible emission limits where necessary to achieve National Ambient Air Quality Standards, which set limits on certain pollutants in ambient, outdoor air.5 Not upholding the statute would have opened the door to firms moving to locations with better air quality in order to emit more pollution and still keep the measured pollutants under the limits. This was where performance-based standards met technology-forcing regulations.

Emerging technologies

When the Clean Air Act was amended in 1977, Part C of the statute elaborated the requirement that new sources in locations that met National Ambient Air Quality Standards use the "best available" control technology. Part D stipulated that new plants in locations that did not meet these standards had to comply with the lowest achievable emission rate. There would be no havens for polluters under the Clean Air Act. These regulations spurred the development, around 1980, of commercially viable "scrubbers," which remove sulfur from flue gases with efficiencies in the 90 percent range. The effort to use regulation to force power plants to develop technology to reduce smokestack pollution worked. There were all sorts of economic effects that require complicated cost-benefit analysis over the subsequent decade or so, but the regulation did what it intended: it forced technological development to meet the needs of a cleaner environment.

In-use and emerging technologies

Stepping back from the specifics of clean air regulations for a moment, let's examine the ways in which already-in-use technologies can be modified by regulations. It may be helpful to compare and contrast in-use or mature technologies with new or emerging technologies, though the contrast between the two isn't as strong as the vocabulary might suggest. Emerging technologies are simultaneously full of promise and danger, while mature technologies seem more predictable: the promises of a mature technology are manifest and the dangers seemingly under control. The label "emerging technology" signals that these technologies could still go either way—toward a utopian or a dystopian future.

When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming.
— University of Aston professor David Collingridge (1980)

Emerging technologies are often considered a particularly challenging regulatory problem. The "Collingridge dilemma" in the field of technology assessment—named after the late professor David Collingridge of the University of Aston's Technology Policy Unit—posits that it is difficult, perhaps impossible, to anticipate the impacts of a new technology until it is widely in use, but that once it is in wide use, it is difficult or perhaps impossible to control (regulate) the technology precisely because it is in wide use.6 Nowadays, it is common in economic and political rhetoric to hear of the speed of technological development as a further obstacle to effective regulation.

Others argue for the co-evolution of technologies with their regulatory structures. An example of this co-evolutionary model is "anticipatory governance," an approach particularly associated with researchers at Arizona State University (see Sean A. Hays and others, eds., Nanotechnology, the Brain, and the Future (Dordrecht: Springer, 2012), and David H. Guston, "Understanding Anticipatory Governance," Social Studies of Science 44, no. 2 (April 2014): 218-242). Yet others address the Collingridge dilemma by arguing that regulation stifles emerging technologies, which need to first emerge and achieve some market success before being regulated.7 Often the automobile is used as an example of the latter, and this portrayal is not wholly inaccurate.

Air quality regulation was reactive rather than prophylactic for pollution coming from both the internal combustion engine and stationary sources such as electrical power plants. But it was also a long process that was highly responsive both to changing understandings of the chemistry of air pollution and to the development of a long list of diverse new pollution-reduction technologies. In the 1970s, when many of these technologies were developed, demonstrated, and ultimately introduced into the marketplace, the political rhetoric about regulation was less contentious than it is today. Regulation was seen as a legitimate role for the state, and even companies that would be affected by regulations demanded that they be settled; uncertainty about regulation could be a greater threat than the regulations themselves.

Seeing in-use technologies as having common features with emerging technologies opens up a useful space to consider technological change. In what sense is today’s automobile the same technology as the carbureted, drum-brake-equipped, body-on-frame vehicle of the 1960s, let alone the hand-cranked Model T of 1908? There are many continuities, of course, in the evolution of the Model T to the present-day computer-controlled, fuel-injected, safety-system-equipped, unibody-constructed car. But looking at today versus 1908, one might argue that there’s actually little shared technology in the two devices. I would suggest that the category of mature technology isn’t an accurate one if “mature” implies any sense of stasis. As these technologies developed, there was definitely a sense of path dependency—that modifications to the technology were constrained by the in-use device.

This dynamic was especially true in systems the driver interacted with, such as the braking system. Engineers working on antilock braking systems in the 1960s took it as a rule that they could not change the brakes in any way that would ask drivers to change their habits.8 But these "mature" technologies were much more dynamic than the term implies, and certain sub-systems—among them microprocessors, sensors, and software—were, in fact, classic enabling emerging technologies.

So what happens if a mature technology like the automobile is treated like an emerging technology for regulatory or governance purposes? Perhaps the Collingridge dilemma should be taken seriously for any regulatory process, not treated as unique to emerging technologies. The impacts of regulation can be as hard to anticipate for technologies that have already "emerged" as for technologies that are new to the market. The case of the automobile demonstrates this dilemma: no one at the time anticipated the computerization of the car as an outcome of the emissions and safety regulations of the 1970s.

Looking at game-changing regulation of in-use technologies does show, though, that Collingridge overstated the (inherent) difficulty of changing technologies once they’re on the market. The automobile’s changes since the 1970s are significantly driven by the regulation of technologies emerging in the 1960s and 1970s; in hindsight, the technology was changeable in far more ways than the historical actors imagined. This is an optimistic point about technological regulation—technologies are never set in stone.9

One difference between emerging technologies and automobiles is that the economic actors and stakeholders are much clearer in the case of cars. This means that government regulators need to work in concert with known industrial players. Regulation constitutes a collective effort between firms and regulatory agencies. This doesn’t mean the process isn’t antagonistic; in fact, because the interests of the state and private firms don’t align, the process is typically very contentious. But the way forward is not to eliminate or curtail either the state or private industry’s role. The standard-setting process itself is a critical one.10 The role of government, though, should be thought of as more entrepreneurial than bureaucratic in the case of technology-forcing regulation.

The role of government regulation in technological development

Mariana Mazzucato’s concept of the “Entrepreneurial State” offers a framework for thinking about how governments in the 21st century are able to facilitate the knowledge economy by investing in risky and uncertain developments where the social returns are greater than the private returns.11 States typically have three sets of tactics for intervening in technological development:

  • Direct investment in research or tax incentives to corporations for research and development
  • Procurement standards and consumption
  • Technology-forcing regulation

Mazzucato’s book focuses on the first, but the other two played equally important roles in the second half of the 20th century.

Because there are multiple modes of state action, the state plays a central role in fostering and forcing the invention of new devices that offer social as well as, in the long run, private returns. My focus here is obviously on the role of regulation as a stick to force firms to develop in-use technologies in very particular ways. But it is important to recognize the way the federal government wrote the legislation that forced technological development. Lee Vinsel, assistant professor of science and technology studies at the Stevens Institute of Technology, calls these technology-forcing regulations "performance standards" because they were written to specify both a criterion and a testing protocol, so that firms had wide latitude to design devices that could be shown to meet the criterion through standardized tests.12 Armed with the ideas that technologies are never static and mature and that the state plays critical and diverse roles in fostering and forcing new technologies to market, let's now turn our attention to the phenomenon of emissions regulations for automobiles since the late 1960s.

Setting standards for automobile emissions


While I could provide a long timescale picture of the car’s development, my focus here is on the gradual computerization of the automobile in response to regulatory efforts to make it both safer and less polluting. Computer control of various functions of the automobile began first with electronic fuel injection and then antilock brakes in the late 1960s.13 The most important developments in electronic control occurred with the development of emissions-control technologies in the 1970s and 1980s, so that by about 1990 the automobile was being fundamentally re-engineered to be a functional computer on wheels. (See Figure 1.)

Figure 1

The implementation of these new systems was fairly gradual and followed a typical pattern of initial installation on expensive models that then trickled down to cheaper ones. Standardized parts and economies of scale are critical manufacturing techniques in the automobile industry, so there was and is often an incentive to use the same devices across different models. Therefore, when California led the creation of auto emissions regulations, the state was a large enough market that automobile manufacturers could almost treat the state standards as national ones. There was no way the auto industry would benefit from different emissions standards set state by state. Large markets such as California created an incentive for the automobile industry to want national standards instead of state-by-state regulation, even if some states might create much lower and easily achieved goals.14

The types and forms of clean air regulations were important to making them effectively force technological development. Regulations had to be predictable: dynamic standards would create higher levels of uncertainty, which makes firms nervous and unhappy. Yet technical personnel often advocated standards that could tighten as new technologies were developed and implemented—engineers, in particular, often exhibited an epistemic preference for the "best" (the highest achievable) standards. This stance was particularly common for air pollution because there was no clear public health threshold for how much nitrogen oxide or sulfur dioxide was dangerous or harmful. Many engineers thought the technologies should constantly improve and remove ever more of the chemicals known to be harmful to human lungs and ecosystems.

This debate about whether standards should either be static or instead increase with the development of ever more complicated technologies wasn’t just occurring inside government. In 1969, the director of emission control at Chrysler, Charles Heinen, presented a rather controversial paper titled “We’ve done the job—What’s Next?” at the Society of Automotive Engineers meeting that January. In the paper, Heinen lauded auto manufacturers for making exactly the needed improvements to clean up emissions and then pronounced the job done. Clearly speaking for the automotive industry, Heinen also questioned the health effects of automobile emissions, writing, “automobile exhaust is not the health problem it has been made out to be.”15 He argued that since life expectancies for Los Angeles residents were equal to or greater than national averages, the concentration of exhaust and the prevalence of smog could not be conclusively called dangerous to health. Furthermore, he argued that any further reductions in emissions would come only at a high cost—pegging the national cost to develop such technologies at $10 billion, and claiming that further reductions would likely come at the cost of decreased fuel economy for the user.

Heinen was positioning the industry against what he called catalytic afterburners to further burn off carbon monoxide, nitrogen oxides, and hydrocarbons. Yet some government regulators were specifically trying to force the development of catalytic afterburners or converters. The result was a victory for the regulators—by 1981, the three-way catalytic converter would become the favored technology to meet precisely the kind of escalating standards Heinen was worried about.

But it was a victory for quite a number of engineers, too. Phil Myers, a professor of mechanical engineering at the University of Wisconsin and president of the Society of Automotive Engineers in 1970-71, presented a paper in 1970 titled “Automobile Emissions—A Study in Environmental Benefits versus Technological Causes,” which argued that “we, as engineers concerned with technical and technological feasibilities and relative costs, have a special interest and role to play in the problem of air pollution.”16 He wrote that while analysis of the effects of various compounds wasn’t conclusive in either atmospheric chemistry or public health studies, engineers had an obligation to make evaluations and judgments.

To Professor Myers, this was the crucial professional duty of the engineer—to collect data where possible but to realize that data would never override the engineer’s responsibility to make judgments. He strengthened this position in a paper he presented the following year on “Technological Morality and the Automotive Engineer,” in a session he organized on Engineers and the Environment about bringing environmental ethics to engineers. Myers argued that engineers had a moral duty to design systems with the greatest benefits to society and not to be satisfied with systems that merely met requirements. At the end of his first paper in 1970, Myers also made his case for the role of engineers in both technological development and regulation, writing:

At this point, it would be simple for us as engineers to shrug our shoulders, argue that science is morally neutral and return to our computers. However, as stated by [the noted physician, technologist, and ethicist O. M.] Solandt, “even if science is morally neutral, it is then technology or the application of science that raises moral, social, and economic issues.” As stated in a different way by one of the top United States automotive engineers, “our industry recognizes that the wishes of the individual consumer are increasingly in conflict with the needs of society as a whole and that the design of our product is increasingly affected by this conflict.”17

Myers then cited atmospheric chemist E.J. Cassell’s proposal to control pollutants “to the greatest degree feasible employing the maximum technological capabilities.”18 For Myers, the beauty of Cassell’s proposal was its non-fixed nature—standards would change (become more stringent) as technologies were developed and their prices reduced through mass production, and when acute pollution episodes threatened human well-being, the notion of feasibility could also be ratcheted up. Uncertainty for Myers constituted a call to action whereas to Heinen it was a call to inaction. Heinen would have set fixed standards as both morally and economically superior—arguing “we’ve done the job.”

Myers also took a position with regard to the role of the engineer to spur consumer action. He wrote:

…our industry recognizes that the wishes of the individual consumer are increasingly in conflict with the needs of society as a whole and that the design of our product is increasingly affected by this conflict. This is clearly the case with pollution—an individual consumer will not voluntarily pay extra for a car with emissions control even though the needs of society as a whole may be for increasingly stringent emissions control. …there is universal agreement that at some time in the future the growth of the automobile population will exceed the effect of present and proposed controls and that if no further action is taken mass rates of addition of pollutants will rise again.19

Myers was arguing that federal emissions standards should be written such that they can become more stringent in the near future but also that regulation was a necessary piece of this system because it would serve to force the consumer’s hand (or to put it another way, to force the market). According to Myers, engineers were the ideal mediators of these processes.

In response to these debates, which occurred among government regulators, industry experts, and technical professionals, the Clean Air Act and its subsequent amendments set multiple standards.20 Fixed, performance-based standards, which changed more often than Myers would have predicted, became the norm, but only after a disagreement about whether to measure exhaust concentration or mass-per-vehicle-mile. The latter was written into the Clean Air Act and remained the standard, but not without attack from those who argued that the exhaust concentration was a better proxy for clean emissions.

The initial specified standards under the Clean Air Act covered carbon monoxide, volatile organic compounds (typically unburned hydrocarbons or gasoline fumes), and nitrogen oxides. Looking only at nitrogen oxides as an example, the standards became significantly tougher over 30 years. The initial standard for passenger cars was 3.1 grams per mile (gpm), which automobile manufacturers had to show they had achieved by 1975.21 With the 1977 amendments, the requirement dropped to 2 gpm by 1979 and to 1 gpm by 1981.

The 1990 Clean Air Act Amendment set a new standard called “Tier 1,” which specified a 40 percent reduction and moved the standard for nitrogen oxides emissions in passenger cars down to 0.6 gpm. Then, in 1998, the Voluntary Agreement for Cleaner Cars between the EPA, the automobile manufacturers, and several northeastern states again reduced the nitrogen oxides goal by another 50 percent to 0.3 gpm (just 10 percent of the original standard from 1975) by 2001. This agreement was unusual in that it was not mandated by the Clean Air Act but would nonetheless affect cars nationally.

At the same time as the Voluntary Agreement for Cleaner Cars was unveiled, the EPA issued the Tier 2 nitrogen oxides standards, which reduced the standard to 0.07 gpm by 2004. Tier 2 also specified a change in the formulation of gasoline so that these and other standards were more readily achievable. So in the 30 years between 1975 and 2005, nitrogen oxides emission standards were reduced by approximately 98 percent.
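To make the arithmetic behind that 98 percent figure concrete, the following short Python sketch computes each standard as a share of the 1975 baseline. It uses only the standards cited above; the labels are descriptive shorthand, not official designations:

```python
# Nitrogen oxides standards for passenger cars cited above, in grams
# per mile (gpm). The labels are descriptive, not official designations.
standards = [
    ("1975 initial standard", 3.1),
    ("1979, under the 1977 amendments", 2.0),
    ("1981, under the 1977 amendments", 1.0),
    ("Tier 1, 1990 amendments", 0.6),
    ("Voluntary Agreement, by 2001", 0.3),
    ("Tier 2, by 2004", 0.07),
]

baseline = standards[0][1]  # the 3.1 gpm standard of 1975
for label, gpm in standards:
    cut = (1 - gpm / baseline) * 100
    print(f"{label}: {gpm} gpm ({cut:.0f}% below the 1975 standard)")
```

The final line of output shows the Tier 2 standard sitting roughly 98 percent below the 1975 baseline—the cumulative reduction described above. What did auto manufacturers have to do to meet these seemingly draconian standards?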

Meeting emissions standards: The catalytic converter

The general strategy of automobile engineers for meeting California standards in the 1960s and federal standards in the first half of the 1970s was to add devices to the car to catch, and often reburn or chemically transform, unwanted exhaust and emission gases.22 The first several add-ons were to reduce unburned hydrocarbons (gasoline fumes), which were targeted in 1965, before the Clean Air Act, by the Motor Vehicle Air Pollution Control Act. Once the focus shifted to nitrogen oxides in the 1970s, the challenge for engineers was to invent technologies that didn't undo a decade's worth of hydrocarbon reduction. Nitrogen oxides are formed through an endothermic process in an engine and are produced only at high combustion temperatures, greater than 1,600 degrees centigrade. Therefore, one key to reducing them was reducing the temperature of combustion; but in the 1960s, combustion temperatures had often been increased to more completely burn hydrocarbons, so reducing combustion temperatures threatened to increase unburned hydrocarbons.

The solution was to invent a device that would only engage once the engine was hot and would at that point reduce combustion temperature. The first of these devices was the engine gas recirculation system—a technology that required a temperature sensor in the car’s engine to tell it when to engage. This wasn’t the first automotive technology reliant on sensors, but sensors of all kinds would become increasingly important in the technologies developed to meet the ever-rising nitrogen oxides standards. Sensors were also not as reliable as either automobile manufacturers or car owners wanted, and the engine gas recirculation system was the first of many technologies best known to drivers by lighting up the dashboard “check engine” indicator. The new technology was useful but it failed to offer enough change in the chemistry of combustion to meet the falling nitrogen oxides allowances. A more complex add-on device was in the near future.
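The control logic involved was simple threshold engagement: a temperature sensor reports whether the engine is warm, and the recirculation valve engages only above that threshold. Here is a minimal sketch of that logic in Python; the 70-degree threshold and all names are hypothetical, chosen for illustration rather than taken from any actual controller:

```python
# Illustrative threshold-engagement logic for exhaust gas recirculation.
# The 70 C threshold and all names are hypothetical, for illustration only.
ENGAGE_THRESHOLD_C = 70.0  # assumed warm-engine coolant temperature

def should_engage_recirculation(coolant_temp_c: float) -> bool:
    """Engage recirculation only once the engine is hot, so that lowering
    combustion temperature does not worsen cold-start hydrocarbon output."""
    return coolant_temp_c >= ENGAGE_THRESHOLD_C

# A relay (later, an engine-control computer) would poll the sensor:
for reading_c in [20.0, 55.0, 72.0, 90.0]:
    state = "engaged" if should_engage_recirculation(reading_c) else "off"
    print(f"coolant {reading_c:.0f} C -> recirculation {state}")
```

The same pattern of sensor input, simple logic, and a relay recurs in the technologies discussed below.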

Members of the Society of Automotive Engineers first started to hear about devices to transform vehicle exhaust gases using catalysts in 1973. European and Japanese engineers were very common in these sessions, as they were convinced that catalytic emissions technologies were going to be required by their governments (in contrast to the performance-standards approach already in place in the United States). It wasn't that U.S.-based engineers opposed catalytic devices, but they knew that the EPA was concerned with testable emissions standards rather than the use of any particular device.

The catalytic converters of the 1970s could transform several different exhaust gases. Engineers were particularly interested in the transformation of carbon monoxide into carbon dioxide, changing unburned hydrocarbons into carbon dioxide and water, and dealing with nitrogen oxides by breaking them apart into nitrogen and oxygen. The converters presented great promise for these reactions, and converters that performed all three reactions were called three-way catalytic converters.
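In simplified overall form, these are the three reactions, written as textbook idealizations (the actual chemistry proceeds through intermediate steps on the catalyst surface, and the generic hydrocarbon formula is included only to show the mass balance):

    2 CO + O2 -> 2 CO2                         (carbon monoxide oxidized to carbon dioxide)
    CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O   (unburned hydrocarbons oxidized to carbon dioxide and water)
    2 NO -> N2 + O2                            (nitrogen oxides broken apart into nitrogen and oxygen)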

But catalytic converters also presented challenges to auto manufacturers. The catalysts in them were easily fouled; that is, other materials bonded with the catalysts and prevented them from catalyzing the reactions they were supposed to generate. The most common fouling substance was lead, which in the 1970s was added to gasoline to prevent engines from knocking. Lead had to be removed from gasoline formulations if catalytic converters were to be used. This had to be negotiated between the federal government, the oil industry, and the automobile industry, none of which were satisfied with the solution.

Second, the catalytic converters depended on a reliable exhaust gas as an input. This meant that to use the converter on different cars, it had to receive the same chemical compounds, and therefore each model of vehicle had to have the same pre-converter emissions technologies.

Lastly, the catalytic converter depended on a consistent combustion temperature, which required more temperature and oxygen sensors in the engine. Along with electronic fuel injection, which was increasingly replacing carburetion as the technique for vaporizing the car's fuel (mixing fuel and air) as it entered the intake manifold or cylinder, exhaust gas recirculation systems and catalytic converters were the chief engine technologies driving the move to computer control in the car. All three of these technologies depended on data input from sensors, logic control, and relays to engage the system under the right conditions.

Reconceptualizing combustion

Once the catalytic converter was introduced and increasingly made standard in the late 1970s and 1980s, further reductions in nitrogen oxides emissions depended on either continuous improvements to the catalytic converter or the addition of new technologies. But many engineers, especially those working on more expensive models, who had more opportunity to design new systems rather than resort to the increasingly common "off the shelf" emission-reduction technologies, were dissatisfied with the Rube Goldberg-like contraption that the car's emissions control was becoming. There had to be a better design that would make the targets achievable. The promise lay in managing the whole combustion process—not engaging an exhaust gas recirculation system once in a while, but continuously optimizing combustion with respect to undesirable emission compounds. Developing the technologies to manage combustion electronically took another decade.

Combustion management started to appear with the gasoline direct injection approach, which began development in earnest in the 1970s and started to appear on the market in the 1990s.23 Gasoline direct injection required the car to have a central computer or processing unit to read input data from a series of sensors and produce electric signals to make combustion as clean as possible, with a particular focus on the production of nitrogen oxides. Gasoline direct injection changed the ratio of fuel to air continuously: a warm engine under no load could burn a very lean mixture of fuel, while a cold engine or an engine under heavy load needed a different mixture to keep nitrogen oxides at the lowest possible level.
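In control terms, this is a feedback loop: sensors report engine temperature and load, and the processing unit continuously adjusts the fuel-air mixture. The sketch below is a deliberately simplified illustration of that idea; the target ratios, thresholds, and names are assumptions made for this example, not values from any production system:

```python
# Simplified sketch of continuous mixture control: a warm, lightly loaded
# engine can run lean, while a cold or heavily loaded engine runs richer.
# All numeric values and names are illustrative assumptions.

STOICHIOMETRIC = 14.7  # air-to-fuel mass ratio for complete gasoline combustion

def target_air_fuel_ratio(coolant_temp_c: float, load_fraction: float) -> float:
    if coolant_temp_c < 60.0 or load_fraction > 0.8:
        return 12.5   # richer mixture for a cold engine or heavy load
    if load_fraction < 0.3:
        return 22.0   # lean burn on a warm engine under light load
    return STOICHIOMETRIC

def fuel_to_inject(air_mass_g: float, coolant_temp_c: float,
                   load_fraction: float) -> float:
    """Fuel mass (grams) to inject for the measured intake air mass."""
    return air_mass_g / target_air_fuel_ratio(coolant_temp_c, load_fraction)

# One pass through the loop: read the sensors, compute the injector command.
print(fuel_to_inject(air_mass_g=0.5, coolant_temp_c=90.0, load_fraction=0.2))
```

Multiply this loop across dozens of sensors and actuators and the result is the central engine-control computer that gasoline direct injection required.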

Yet early introductions of this new technology, such as Ford's PROCO system on the late-1970s Crown Victoria, failed to meet increasingly stringent standards, and the system was ultimately canceled. It took well over a decade to mature gasoline direct injection into a system that could help achieve the Tier 1 and Tier 2 nitrogen oxides standards.

Other technologies supported the approach of managing combustion to optimize output under widely different conditions. Variable valve timing and lift systems, often characterized generically as VTEC systems even though that is the trademarked name of Honda's system, custom control the timing and degree of an engine's valve openings. These mechanically and electronically complex systems, introduced in the late 1990s, can improve performance and reduce emissions, and they require data analysis by a computer processing unit.

By the 1990s, clean combustion was the goal of automotive engine designers: the car should produce as few exhaust gases as possible beyond carbon dioxide, water, and essentially air, composed primarily of oxygen and nitrogen. The carbon dioxide output is really the issue today, as it is now the primary pollutant produced by the automobile. It's not that earlier engineers characterized it as inert, but rather that combustion inevitably generates carbon dioxide in quantities far too large to either capture or catalyze. There is no catalytic path to eliminating carbon dioxide after the combustion cycle, and capturing it presents insurmountable problems of volume and storage. Already in 1970, Phil Myers, perhaps surprisingly, raised the question of carbon dioxide, noting "increasing concern about [CO2's] potential for modifying the energy balance of the earth, that is the greenhouse effect," and calling CO2 the automobile's "most widely distributed and abundant pollutant."24 The internal combustion engine is more than a carbon-dioxide-producing machine, but it is nothing less than that either.

The recent scandal over Volkswagen's computerized defeat device falls directly within this concern over computer control of combustion and its output of nitrogen oxides, although the VW engines under scrutiny are diesel-powered rather than gasoline engines, and are therefore subject to different Clean Air Act regulations and involve somewhat different systems and technologies than those described in this paper. Still, it is not surprising that nitrogen oxides are the pollutants the VW engines produce in excess of regulatory levels.

The technologies designed since the 1970s to produce such a striking reduction in nitrogen oxides all presented trade-offs with vehicles' performance and gas mileage. Gas mileage is also the subject of federal regulation, but the corporate average fuel economy standards, or CAFE standards, only specify an average for an entire class of vehicles. Volkswagen's choice to implement software that would override the nitrogen-oxides-controlling systems except when a vehicle was under the precise conditions of testing was a choice to privilege what it perceived as customers' preferences over meeting U.S. emissions requirements. There's much to say about VW, but little of it involves the subject of this paper—the role of regulation in generating socially desirable innovations.

Conclusion: Regulation, new technologies, and economic growth


The purpose of this paper has been to analyze the historical interactions between regulation and technological development. In the case of automotive innovations, it is clear that high emissions standards did force the development of new technologies by jumpstarting a quest to improve the car, to make it less environmentally taxing and harmful to human health. More importantly, the continually escalating emissions standards, exemplified here by increasingly stringent nitrogen oxides standards, led to fundamental changes in the car that made it not only less polluting but also more reliable as a (largely unanticipated) byproduct of computerization.

Cars have also become much safer through regulation, with motor vehicle deaths dropping from approximately 20.6 per 100,000 people in 1975 to about 10.3 per 100,000 in 2013—a drop of 50 percent.25 Few of the technologies used to improve the car existed prior to the enactment of legislation.26 Technology-forcing regulations can be effective, and the opposition of affected industries is usually temporary, a factor only until new technologies are available. Given the complexity of many of today's technological challenges, especially those in the energy sector and involving climate change, the U.S. government needs to consider its role as a driver of technological change.

The link between regulation and technological innovation in automobiles has had a variety of complicated effects on economic growth and the labor market—from creating new jobs to produce new automotive electronic technologies to changing the economic environment of the repair garage. This process of continuous innovation in response to regulation has continued since the 1990s, and now we face the challenges and opportunities of self-driving cars or autonomous vehicles. The adoption of these new technologies for a new kind of car over the next decades will require major changes in legislation, in driver behavior and attitudes, and of course in technologies themselves, including large technological systems. All of these will have important economic effects, some of which are fairly predictable, others of which are not, but all of which will certainly be politicized by interested parties.

About the author

Ann Johnson is an associate professor of science and technology studies at Cornell University. Her work explores how engineers design new technologies, particularly in the fields of automobiles and nanotechnology, and the role of engineering in nation-state building.

Acknowledgements

Special thanks to W. Patrick McCray, David C. Brock, Lee Jared Vinsel, James Rodger Fleming, Adelheid Voskuhl, and Cyrus Mody for conversations and comments, and the other members of the History of Technology workshop at the Washington Center for Equitable Growth, especially Jonathan Moreno and Ed Paisley.

Emerging technologies, education, and the income gap


Overview

Technologies and industries do not emerge in isolation—as science and technology professors Braden R. Allenby and Daniel R. Sarewitz observe in their book “The Techno-human Condition”—but instead are coupled with complex social and natural systems. “Coupled” means that a change in one situation, such as the emergence of a new technology, will cause changes in both the social and natural systems.27 “Complex” means that these changes are hard to predict because of the nature of complex systems. Ever-greater complexity defines today’s emerging technologies, and as they emerge, they could end up supporting a social system that emphasizes merit or instead one that makes it more difficult for low- and middle-income people to achieve success.

The question is how to create social systems that maximize the opportunity for those not at the very top of the economic spectrum to succeed. Income disparities interfere with economic growth, argues Nobel Prize-winning economist Joseph Stiglitz, in part because talented, motivated people who could add value to society are not given a fair shake at showing what they could do.28 The "History of Technology" series published by the Washington Center for Equitable Growth over the past few months delves into cases where the introduction of new technologies in the recent past resulted in more equitable economic growth and sustained job creation, opening avenues for many workers to contribute meaningfully to economic growth—even though the disruptions to social systems caused by these technologies were often jarring and widespread.

In my closing essay for this series, I will hazard some thoughts about how newly emerging technologies may play out in our social and natural systems, examining in particular how new educational systems could help ensure that the wide and widening income gap in the United States is reversed rather than exacerbated by the complexity of new technologies. The other papers in this series, together with my own reading of the emergence of the telephone nearly a century and a half ago, give me some comfort that the net result of today's emerging technologies will be positive. Yet I also see other, less benign paths that our society and nation could take.

The case of the telephone

Most inventors in the early 1870s thought that the “reverse salient” in communications technology was bandwidth, meaning that only two messages could be sent down the same telegraph wire at the same time.29 Alexander Graham Bell thought the transmission of speech would be transformative, and after a long series of relatively simple experiments, he obtained a patent for any device that would produce what he called an undulatory (sinusoidal) current.30 Speech was of no interest to Elisha Gray—Bell’s rival—and Gray’s backers at Western Union, nor to Thomas Edison, who developed a superior telephone transmitter but was blocked by a patent obtained by the new Bell Corporation. Western Union then turned down the opportunity to buy the fledgling Bell Corporation, and the latter’s telephone gained market share over the former’s telegraph system, especially after the invention of the switchboard.

Were telegraph operators deprived of jobs because of the switch to a new technology? Telegraph operating was a prized skill in the early 1870s, much like modern-day web design. Telegraph operators like the young Edison could show up in almost any city and get a job. The need for such operators declined as the telephone grew, but telegraphy had always been a technology that most people needed only occasionally—businesses and government excepted—whereas the telephone was used far more often because one could use it from one's home rather than a telegraph office.

One of the best outcomes of telephony was a growth in employment for women, who had trouble breaking into the business of operating telegraphs but were far more successful at taking over the operation of telephone switchboards. Women also became heavy users of the telephone.31 So the telegraph-to-telephone transition was not disruptive for employment even though it did lead to the rise of the Bell Telephone Company and the decline of Western Union and slowly changed the way people communicated with each other.32

Telephones were originally owned mostly by the wealthy, as illustrated in 1936 when the Literary Digest infamously predicted that Republican presidential candidate Alf Landon would defeat Franklin Delano Roosevelt in a virtual landslide, based on a poll of telephone users. That year, 60 percent of the highest wage earners in the United States had telephones but only 20 percent of the lowest earners—and the wealthy were more inclined toward Landon. By 1941, 100 percent of families making $10,000 or more had phones, as opposed to only 12 percent of families making less than $500. By 1970, only the poorest American households did not have phones.33 As a result, telephone polling became much more accurate.

Emerging technologies as of 2015

Carl Benedikt Frey and Michael A. Osborne, both of the University of Oxford, argue that computers have hollowed out middle-income jobs, with lower incomes going to services and higher incomes going to people who work with computers and with people, on functions such as management.34 This pattern will change as computers take on lower-end service jobs. Websites are already replacing travel agents, robots can automate manufacturing and potentially much of the fast food industry, and self-driving vehicle technology could eliminate the need for truck drivers. Cashiers and telemarketers might be replaced by software that can handle a wide range of verbal queries. Even infrastructure jobs are at risk: robot manufacturing plants could build bridges designed to be dropped into place, and automated trucks could perhaps deliver them, reducing the number of humans involved. According to Frey and Osborne, 47 percent of existing jobs may disappear as a result.

These changes will exacerbate the rich/poor divide because high-skill and high-wage employment is one of the classes of jobs that cannot be automated. These jobs depend on social intelligence such as the ability of chief executives to confer with board members, coordinate people, negotiate agreements, and generally run the affairs of companies.35 Before the stock market collapse in 2008, CEOs in the United States were paid on average 240 times the salary of workers in their industries—on the grounds that it took this kind of pay to attract and retain this kind of talented leadership.36 Is the social intelligence of a CEO worth this kind of premium?

CEOs benefit from what the 20th century American sociologist Robert K. Merton labeled the Matthew effect: “the rich get richer at a rate that makes the poor become relatively poorer.”37 Merton was thinking primarily about scientists; those awarded Nobel prizes get much more credit for their contributions than relatively unknown scientists who make similar contributions. As one of the laureates Merton interviewed said, “The world … tends to give the credit to [already] famous people.”38 “The world” in this case refers to the social world in which the scientists do their work, including their peers, the funding agencies, the press, policymakers, and politicians. A similar phenomenon may exist in the case of the CEO who went to the right business school, made the right connections, burnished her or his reputation at every opportunity, and finally arrived at the top.

Is what a CEO knows really 200 times more valuable than what the average worker knows? For the late Steve Jobs of Apple Inc., Bill Gates of Microsoft Corp., and the legendary investor Warren Buffett, it probably is. But then there are corporate executives like Ken Lay and Jeff Skilling, who took over a successful pipeline company they renamed Enron and ruined it because they encouraged unethical business practices that masked whether their company was really making any money.39 Similarly, Bernie Ebbers, the CEO of WorldCom, accumulated huge debts by growing his company through acquisitions until there was nothing else to acquire, and then his accountants started fudging results so the stock price would stay high.40

Indeed, in both of these cases and in many others, such as the collapse of Value America, the CEOs focused on keeping the stock price high instead of paying attention to whether their companies were actually making money.41 All were at one time or another praised as business geniuses. The social intelligence possessed by some CEOs may be the ability to create their own Matthew effects through relentless self-promotion.

Convergent technologies to enhance human performance

Nanotechnology in combination with advances in medicine, information technology, and robotics will be transformative.42 Machine learning algorithms are now trading on Wall Street at nanosecond speed, far beyond the capacity of the human nervous system, which suggests that human traders will soon be obsolete.43 Consider the possibility of reproducing organs on 3-D printers, of neural-device interfaces that allow the brain to control devices, of cochlear implants that enhance hearing, or of extending the human lifespan through a combination of technologies, including genetic modification of human embryos.44

Or just consider what IBM's Watson supercomputer has already accomplished, beating all but one former champion in the game of Jeopardy!—the lone exception being physicist and former Congressman Rush Holt (D-NJ). Jeopardy! requires contestants to guess the questions that produced answers visible to all players. Colloquial language and puns are often used in the answers. Speed is essential; the quickest to identify the question wins. Watson solves the problem by using its computing power to analyze many possible solutions in parallel, matching language patterns against a huge database of documents.

Watson therefore outperforms humans in the same way that its predecessor at IBM, Deep Blue, beat chess grandmasters—by exploring millions more options than a human can. Could computers someday begin to scaffold human social intelligence by, say, rapidly looking for socially appropriate responses to a query in a cross-cultural situation? Here, computational tools become expert assistants, not only in situations where mathematical or scientific problems need to be solved but also in social situations. Such enhancements could be so expensive that only the rich could afford them, turning the income gap into a capability gap so large that the rich become almost another species.

Technologies can create momentum in a direction, but they do not determine our futures. As Seth Finkelstein, programmer and winner of the Electronic Frontier Foundation Pioneer Award, notes:

The technodeterminist-negative view, that automation means jobs loss, end of story, versus the technodeterminist-positive view, that more and better jobs will result, both seem to me to make the error of confusing potential outcomes with inevitability. Thus, a technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole….this is not a technological consequence; rather it’s a political choice.45

In short, technology creates new opportunities for success and failure for a society dedicated to equal opportunity.

Bands that cannot sign with a record label can put up a website and a YouTube video and get a following. The website Kickstarter allows people to get funding for their newest ideas. Cell phones can be used to start businesses and organize revolutions—or instead become tools for oppression and misinformation. Will cognitive aids become as ubiquitous as cell phones, with universal translators, sensory enhancements, and neural-computational interfaces available to all? Or will these technologies be priced, marketed, and controlled so that only elites can obtain them?

The key is governance mechanisms that promote equal opportunity. One means is quality education for all, which would include access to these and other newly emerging technologies alongside instruction in how to take advantage of them.

Education and the income gap

“The path from poverty to the middle class has changed—now it runs through higher education,” argued journalist Jim Tankersley of The Washington Post in a series on education and economic growth and opportunity.46 Take the case of Darren Walker. He was born in a charity hospital. Head Start gave him an early jump into education and Pell grants paid much of his tuition at the University of Texas. He is now president of the Ford Foundation. “Even at 8 or 9 years old, I knew that America wanted me to succeed, [but] the mobility escalator has simply stopped for some Americans,” Walker told a reporter recently. “We fix it by recommitting ourselves to the idea of public education. We have the capacity. The question is, do we have the will?”47

The author Joshua Davis, in his 2014 book "Spare Parts: Four Undocumented Teenagers, One Ugly Robot, and the Battle for the American Dream," tells how four children of immigrants from Mexico went to Carl Hayden Community High School in West Phoenix, where they formed a robotics team with the help of two dedicated teachers and won a robot competition against college teams, including one from the Massachusetts Institute of Technology.48 Members of the second-place MIT team went on to prestigious positions. Only one of the Carl Hayden boys finished college, graduating from Arizona State University and then going back to Mexico to apply for U.S. citizenship, which he received after his case became famous. He served his adopted country by joining the U.S. Army.

Equal educational opportunity is one of the best ways to provide more options to those at the bottom of the economic ladder—if a society can guarantee access to a good education for all. According to the Pew Research Center:

On virtually every measure of economic well-being and career attainment—from personal earnings to job satisfaction to the share employed full time—young college graduates are outperforming their peers with less education. And when today’s young adults are compared with previous generations, the disparity in economic outcomes between college graduates and those with a high school diploma or less formal schooling has never been greater in the modern era.49

Poverty has a similar constraining effect. Harvard University economist Amartya Sen describes poverty as "unfreedom" because the poor have limited options. He tells the story of a Muslim laborer who found work on a house in a Hindu neighborhood and had to go there on a day of anti-Muslim riots because his family lived day to day off his income. The man was stabbed in the back and died.50

Less violently, the smallest disruption in their meager incomes can cause lower-income students in the United States to drop out of college to find work, especially if they have already amassed significant debt from loans. And this unfreedom extends to other levels of education. In Charlottesville, I sent my three sons through public school, and their education and opportunities were as good as those of anyone who went through one of the private schools I could not have afforded. But our schools were full of middle-class kids whose parents were committed to education and could make sure their children were well fed and dressed.

In contrast, Sonya Romero-Smith, an elementary school teacher at Lew Wallace Elementary in Albuquerque, begins the day with “an inventory of immediate needs: Did you eat? Are you clean? A big part of my job is making [the children] feel safe.”51 Fifty-one percent of public school children in the United States today live in poverty. “A child in a high-poverty school faces multiple handicaps in mastering foundational skills,” explain professors Frank Levy of the Massachusetts Institute of Technology and Richard Murnane of Harvard University in their book “Dancing with Robots: Human Skills for Computerized Work.” These handicaps include “a majority of classmates with weak, preschool preparation, students transferring in and out of class during the year, and a low chance of being taught by a stable set of skilled teachers who work together to improve instruction over an extended period of time.”52

Nevada just passed a law providing vouchers, equivalent to the amount the state spends to educate a public school student (about $5,700), that can be applied to private school tuition instead.53 But the vouchers would not cover the cost of most private schools, which means middle-class families could move out of the public schools, leaving the poor behind. The only fair voucher program would admit students to private schools need-blind, so the poor could attend for the price of the public school stipend. Even then, poorer students would be less prepared academically and might not meet the admission requirements. The rationale for voucher programs is that they increase choice. What proponents ignore is Amartya Sen’s maxim: Poverty reduces freedom. Vouchers therefore have the potential to widen the gap between rich and poor.

The same is true for state universities, which are subsidized to provide a merit-based path to higher education. But even a top public school student from Virginia who gets into the University of Virginia may have trouble paying for it because the state, like many other states, has drastically reduced its support for all of its universities and colleges.54

But education is not only important for the economy, it is also important for security. Back in 1947, the President’s Scientific Research Board advised that “the security and prosperity of the United States depend today, as never before, upon the rapid extension of scientific knowledge. So important, in fact, has this extension become to our country that it may reasonably be said to be a major factor in national survival.”55 The security issue in 1947 was the emerging Cold War. The launch of Sputnik by the Soviet Union furthered the emphasis on science education, but focused it more on graduate education: To beat the Soviets, America needed to have elite scientists and engineers.

The civil rights era brought a renewed emphasis on lower-income students.56 But in 1980, the Reagan administration cut back on funding for lower-income students to attend college. At present, federally guaranteed loans are the most widely available support for lower-income students, which encourages these students to begin college but drop out when the first financial emergency occurs. Between 2004 and 2012, there was a 70 percent increase in both the number of students taking out loans for college and the average outstanding balance per student.57 The Lumina Foundation has developed a Rule of 10: Families should pay for college with 10 percent of their discretionary income saved over 10 years, and students should work 10 hours a week while in college. But college costs have risen 45 percent over the past decade while household incomes have declined by about 7 percent.58

Student debt can affect decisions about when (and whether) to buy a home and get married, and can also delay putting aside money for retirement. If the education results in a better job, then the investment was worth it, but for those trying to rise out of poverty, loans impose a significant additional burden.

Is distance education a solution?

Distance education has the potential to lower the barriers to education by allowing a student from virtually any part of the world to take a course from a top-ranked university for free. The Massachusetts Institute of Technology, for example, has an Open Courseware website.59 This kind of education is reminiscent of the great Open University programs in the United Kingdom, which used to be broadcast free over the radio. Alas, obtaining an online degree from the Open University now costs roughly $8,000 a year.

At the other end of the education spectrum, Sesame Street in the United States has produced educational benefits similar to Head Start programs, though the former cannot replicate the latter’s family support and health benefits.60 Yet if Sesame Street can provide valuable, free education for preschoolers, could distance learning do the same for adults? Universities including Stanford University and the University of Virginia are providing free courses over the Internet, as are companies such as Udacity, Inc., Coursera Inc., and edX Inc. But no one has figured out a revenue model that will at least cover the costs of these courses, which include faculty and graduate teaching assistant hours and continuously evolving technologies.61

At universities, at least some classes or modules should be directed toward future workforce capabilities. The adaptations could vary with majors. For humanities majors, it might be sufficient to gain knowledge about emerging technologies and how to work with them. For engineers, it might mean enough knowledge to work with colleagues from business, law, and humanities backgrounds. This ability to work across disciplines is referred to as T-shaped expertise, where the vertical bar of the T is depth in a discipline like history or computer science and the horizontal bar is the set of additional disciplines whose language and practices the T-shaped expert can understand. Team-teaching across disciplines and interdisciplinary projects are two ways of enhancing both T-shaped expertise and the ability to acquire it.62 Unfortunately, universities have become increasingly bureaucratized and siloed in ways that make such collaboration more difficult.

I teach a seminar63 that includes students from parts of Virginia too far for them to commute to campus.64 They join the course online, and participate as fully as possible in the discussion. The courses are writing and speaking intensive, and one involves a role-playing simulation of the National Nanotechnology Initiative in which the students play roles that include:

  • Congress
  • Regulatory agencies like the Food and Drug Administration or the Environmental Protection Agency
  • Funding agencies such as the Defense Advanced Research Projects Agency, or DARPA, and the National Science Foundation and National Institutes of Health
  • Established companies and entrepreneurial startups
  • University laboratories
  • Non-governmental organizations such as the Project on Emerging Technologies at the Woodrow Wilson International Center for Scholars
  • A NanoPost newspaper

The simulation requires students to set short- and long-term strategic goals for the National Nanotechnology Initiative and then play roles involved in achieving those goals.

In these simulations, laboratories seek funding for technologies from the funding agencies and decide which nanotechnologies to create, in cooperation or competition with each other. Congress decides how much funding to give the National Science Foundation, the National Institutes of Health, DARPA, and the regulatory agencies, and meets regularly with its scientific and commercial constituents, considering their views. Non-governmental organizations use various strategies to encourage or block technologies. Companies try to get patents and make other groups pay to use the technologies they own.

Students build their own technology roadmap, continually negotiating about what goals ought to be included. These technologies are put on a tree similar to those used in games such as the Civilization series.65 The goal is to give the engineering students enough T-shaped expertise to work with policymakers, funding agencies, non-governmental organizations, and regulators by providing them with vicarious experience.

My distance-learning students can talk and send messages to students in the classroom, but the engagement is far from seamless, and remote and in-person students cannot see one another. Lowering the cost of online technologies could turn courses like this into full seminar experiences that could be taken anywhere.

There are other ways to facilitate distance learning. There will always be free media—think TED talks and YouTube videos. But education is important in learning how to tell which among the proliferation of web sources are based on sound research. Those in poverty also have less access to reliable, high-speed Internet.66 How will they gain the skills necessary to benefit from online learning opportunities—even if they can find time away from one or more low-paying jobs and get to a library with a public computer? How will they be able to participate in a learning community like my nanotechnology policy class?

Students need education that fosters the ability to learn and adapt to potentially disruptive changes in the nature of work, not education that is too focused on current employment. The National Nanotechnology Initiative, for example, is premised on providing social benefits as well as scientific and technological advances—especially programs to train the existing workforce for new jobs in a rapidly growing industry.

Community colleges could play a major role in this kind of training, though efforts to date have not been very promising. Consider the example of NanoInk, a company that was inspired by the nano visions of researchers at Northwestern University and manufactured apparatus for nano applications outside of Chicago for 10 years. The hope was to establish a nano corridor in an old industrial area. To prepare workers, Oakton Community College introduced a nano-focused curriculum. But after 10 years, NanoInk’s major backer pulled out, concluding that the financial returns were too slow and would continue to be too small.67 The students who had taken the nano curriculum at Oakton Community College did not graduate into a local nano economy where their skills would be at a premium.

Education needs to prepare students for a lifetime of learning, gaining new skills and knowledge to shape the opportunities created by the emergence of new sociotechnical systems. Support for this kind of adaptability should come from industry as well as government. IBM Corp., for example, is supporting conferences on education for T-shaped expertise.68 Chieh Huang, the CEO of Boxed, a mobile commerce startup, will pay college tuition for the children of his employees out of his stake in the company.69

But all of these efforts are no substitute for federal, state, and local funding of education: Support for universities and for secondary and primary schools in most states is lower than before the Great Recession of 2007–2009.70 These funding cuts have also hurt state economies because of widespread layoffs of teachers and staff members as schools shrink their budgets. Education cannot correct every unfreedom experienced by those in poverty;71 but as the co-evolution of technology and society accelerates, so does the value of education.

—Michael Gorman is a professor in the Department of Engineering and Society at the University of Virginia, where he teaches courses on ethics, invention, the psychology of science, and communications. He worked for two years as a program director in the Science, Technology & Society program at the National Science Foundation and is president of the International Society for the Psychology of Science and Technology.

Garnering economic security is complicated for young families

Sara Gustoff, right, reads to her children Abigail, from left, Nathanael, Benjamin, and Jonah while at the kitchen table in their home in Des Moines, Iowa.

Overview

Over the past 40 years, women in the United States have played an increasingly important role in family economic well-being. Women have increased their levels of educational attainment and their participation in the labor force and have seen increases in pay. This transformation in how women spend their days means that most families must figure out how to make do without a full-time, stay-at-home caregiver.

While conflicts between the demands of work and caregiving are now commonplace, families too often are left on their own to cope, without the support of sufficient social infrastructure—such as affordable child care and elder care, paid time off for medical and family leave, and flexible work hours—and without macroeconomic policies that would reduce unemployment, increase wages, and encourage full employment. These findings are detailed in Heather Boushey’s recently released book, “Finding Time: The Economics of Work-Life Conflict,” which explores how women’s increased hours of work over the past four decades helped American families maintain economic security.

In this issue brief, we unpack women’s role in helping stabilize family incomes for a specific subset of the U.S. population: young families, or families where at least one person is above the age of 16 and everyone is under the age of 35. Using data from the Current Population Survey, we chronicle how family incomes changed between 1979 and 2013 for young low-income, middle-class, and professional families. Specifically, we decompose the change in family income over this period into differences in male earnings, female earnings from higher pay, female earnings from more hours worked, and other sources of income.


We find that even though women in young families have increased their hours of work as much as women in the working-age families examined in our issue brief “Women have made the difference for family economic security,” these young families have seen less income growth.

Here are our key findings:

  • Even as women’s hours increased, declining male earnings pulled down family income in both low-income and middle-class young families. Between 1979 and 2013, income among young low-income families fell by 22.9 percent, while income among young middle-class families grew by only 3.4 percent.
  • Between 1979 and 2013, women’s added hours of work boosted young families’ income across income groups. For low-income families, women’s added hours were the only growth factor. In both middle-class and professional young families, women’s higher earnings from increased pay also bolstered family incomes.
  • While the work hours of women in young families increased at similar rates to the hours of women in working-age families across all three income groups, young families saw much smaller growth in women’s wages and larger losses or smaller gains in male earnings compared to working-age families.

The economic state of young workers

Despite recent improvements in the U.S. labor market, young workers continue to face tough conditions. This past March, for example, the unemployment rate for all people over the age of 16 was 5.0 percent, but it was 8.4 percent for young workers (ages 20 to 24). The challenges, however, go beyond relatively high unemployment and underemployment (compared to older workers) and include slow wage growth, limited opportunities to move up the job ladder, and, for those who have not been able to find a firm foothold in the job market, the long-term scarring of their earnings potential.

Higher unemployment among younger workers is due to a number of factors, not all of which are bad. We expect that young workers will change jobs more often—ideally, transitioning between jobs to find better offers—as they build their careers and grow their earnings. To the extent that young workers’ higher unemployment is due to spending more time seeking jobs or moving to a different city, this is not necessarily bad.

But there also are not-so-good reasons for higher unemployment. Young workers are often first-time job seekers with limited work experience, which makes them more likely to be passed over in hiring decisions. Even once they are hired, they are typically the most junior employees and thus most susceptible to being laid off or let go when their firms run into trouble.

The Great Recession of 2007–2009 created a host of challenges for young workers: Those who entered the labor market at that time couldn’t find their footing and were often overlooked in favor of “fresh” workers in later years. Research by economists Giuseppe Moscarini at Yale University and Fabien Postel-Vinay at University College London finds that during the Great Recession, many workers became trapped in low-wage jobs, which they describe as the job ladder “shutting down.” On top of this, for many young workers who earned a college degree, the added burden of rising student debt delayed steps in the traditional economic lifecycle, among them homeownership, car ownership, and even marriage.

Economic struggles are compounded for young families with children. In 2013, nearly half (43.7 percent) of young families had a child under age 18 present in the home. The higher up a young family is on the income ladder, the less likely it is to have a child at home: Only 22.6 percent of young professional families have a child at home, compared to 37.7 percent of middle-class families and 57.6 percent of low-income families.

Young families struggle with how to address work-life conflict within the context of this tough labor market. As Heather Boushey documents in her book, “Finding Time: The Economics of Work-Life Conflict,” families over the past four decades have relied on the added hours and earnings of women to boost income. Women’s increased participation in the labor force has been an effective coping mechanism amid the shifting fortunes for male workers in the U.S. economy over this period, directly contributing to family economic security. But this can be a tough strategy without policies to help address the day-in, day-out conflicts between work and family life. It can be even harder for young workers, who are least likely to have built up reserves of sick or vacation time and who may be more vulnerable to layoffs.

This issue brief extends the analysis in “Finding Time” and explores what this looks like specifically for young families up and down the income ladder. Using data from the Current Population Survey, we calculate how family income changed between 1979 and 2013 for low-income, middle-class, and professional families who are “young”—where at least one person is above the age of 16 and everyone in the household is under the age of 35. (See Box.) We decompose these changes over time into differences in male earnings, female earnings from higher pay, female earnings from more hours worked, and other sources of income, such as Social Security and pensions, which are minimal given the ages of the workers in these families.

Between 1979 and 2013, young families saw comparatively small gains—and in the case of low-income families, large losses—in family income. When we break down the changes in household income over those 34 years, we find that for low-income young families, the only positive contribution to income was the added earnings women received by working more hours. For young middle-class and professional families, female earnings (from both higher pay and more hours) were crucial: without them, middle-class incomes would have fallen outright and professional incomes would have grown far less.

Defining income groups and young families

The analysis in this issue brief follows the same methodology presented in “Finding Time.” For ease of composition, we use the term “family” throughout the brief, even though the analysis is done at the household level.

In this issue brief, we refer to what we call “young” families and compare their experiences to “working-age” families. A working-age family (the subject of an earlier issue brief) is one where at least one person in the household is between the ages of 16 and 64. Young families are a subset of these working-age families: Young families are those that have at least one person over the age of 16 and where everyone is under 35 years of age.

We split households in our sample into three income groups:

  • Low-income households are those in the bottom third of the income distribution, earning less than $25,440 per year in 2015 dollars.
  • Professional households are those in the top fifth of the income distribution who have at least one household member with a college degree or higher. These households have an income of $71,158 or higher in 2015 dollars.
  • Everyone else falls in the middle-class category.

Table 1 breaks down the share of young families (a subset of these working-age families) across the three income groups in 2013. Young families are more likely than working-age families to be low-income.

Table 1

Setting some context

Before focusing on the changes in family income, let’s first set some broad context for the changes in family economics between 1979 and 2013.

How did income change for young families?

Between 1979 and 2013, young low-income families lost income, middle-class ones experienced small gains, and professionals saw their income soar. These trends follow those more generally for working-age families. The key difference between young and working-age families, however, is that young families’ income levels are lower than those of working-age families more generally. (See Figure 1.)

Figure 1

Young families have seen the same rising inequality that has affected families overall. In 1979, low-income young families had an average annual household income of $24,845 in 2015 dollars. Between 1979 and 2013, these families saw their income drop by 22.9 percent, down to $19,154. This is a significant decrease; between 1979 and 2013, low-income working-age families’ income fell only by 2.0 percent on average. Over the same time period, young middle-class families’ income stalled. In 1979, young middle-class families had an average household income of $63,648, which had grown only slightly—by 3.4 percent—to $65,783 in 2013. Young professional families, however, saw their income rise 36.6 percent, going from $104,031 in 1979 to $142,075 in 2013.

Inter-group disparities in family income are not only an indication of widening inequality but also may indicate that “filtering down” is underway. Recently, because there have not been enough jobs to employ all the young workers who need one, those with a college degree (some of whom fall into our professional group) have been scooping up a disproportionate share of the jobs available—even jobs that do not require a college degree. This crowds out young workers without a college degree (most of whom fall into either our middle-class or low-income groups), making it much harder for less-educated workers to find suitable employment. Instead, they must either accept an even lower-paying job or exit the labor market completely. This might shed some light on why young low-income families have seen larger losses in income.

How did women’s working hours change in young families?

Between 1979 and 2013, across all three income groups, women in young families increased their working hours. In 1979, on average, women from young low-income households worked 662 hours annually (about 13 hours per week), and by 2013, their hours had grown by 23.0 percent to 814 (or 16 hours per week). Over this same time period, women from young middle-class families, on average, grew their annual hours by 19.8 percent, from 1,109 in 1979 to 1,328 in 2013 (or from 21 hours per week to 26 hours per week). Similarly, women in young professional families saw a 27.4 percent rise in their hours of work. (See Figure 2.)

Figure 2

The shift in hours is virtually identical across young and working-age women in professional families, but the trends differ by age within low-income and middle-class families. In middle-class families, women in working-age families put in more hours than those in young families. This could be due to more women in young middle-class families being in school rather than working. Yet within low-income families, women in young families are slightly more likely to have a paying job than are women in working-age families, and this was true in both 1979 and 2013.

Decomposing the changes in young families’ income

Figures 1 and 2 show that between 1979 and 2013, hours of work for women in young families increased across all income groups, yet family income has not increased commensurately across all three groups. To understand what’s going on, we decompose the changes in young families’ average household income between 1979 and 2013 into male earnings, female earnings, and income from other non-employment-related sources, which include Social Security and pensions. Specifically, we divide female earnings into the portion due to women earning more per hour and the portion due to women working more per year. To calculate female earnings stemming directly from the additional hours worked, we take the difference between 2013 female earnings and the hypothetical earnings of women if they earned 2013 hourly wages but worked the same hours as women did in 1979. (For more on how we did this calculation, please see our Methodology.)
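
Stated as a formula (the notation here is ours, for illustration only), the hours component of the change in women’s earnings for each income group is

  earnings from added hours = E_2013 − (w_2013 × H_1979),

where E_2013 is women’s average annual earnings in 2013, w_2013 is their average earnings per hour in 2013, and H_1979 is their average annual hours worked in 1979. The remainder of the change in women’s earnings, (w_2013 × H_1979) − E_1979, is the portion attributed to higher pay.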

We find that within young families across the income spectrum, women’s contributions, particularly from more working hours, have been the most important factor in boosting family incomes. Yet incomes have not risen in tandem, as men’s earnings in both low-income and middle-class families pulled down family income. Without women’s added hours and higher earnings, family income would have fallen, all else being equal. (See Figure 3.)

Figure 3

Between 1979 and 2013, young low-income families saw their income fall sharply. Most of this decline is due to the drop in men’s earnings—a loss of $6,305—although women’s earnings per hour also fell, reducing income by $365. Within young low-income families, the only positive contribution to household income was the added work hours of women, which boosted average annual income by $1,410.

Over this same period, in young middle-class families, average annual household income grew by more than $2,000, even though male earnings dragged family income down by $5,210. The only reason middle-class families saw any income gains was because of increases in women’s earnings, both in terms of higher pay per hour and more hours of work. Women’s earnings from more work hours accounted for the largest component of the gain, adding $3,729 to average annual income. The second-largest component was women’s earnings from higher pay, which added $2,210. Income from other (non-employment) sources also helped boost the incomes of young middle-class families.

Young professional families experienced significant growth in average income. Combined, women’s higher earnings from higher pay and additional hours of work boosted family income by $22,790, close to 60 percent of the total change. In stark contrast to men in young low-income or middle-class families, men in young professional families saw their earnings rise—adding $14,886 to family income. Young professional families had a relatively negligible positive change in other sources of income.

How does the experience of young families compare to working-age families?

Young families have not fared as well as working-age families more generally. When we look at the changes to family income between 1979 and 2013 for young and working-age families side by side, the challenges facing young families are put in sharp relief. Specifically, we compare the percent change in women’s hours and wages—both of which are components used in calculating women’s earnings due to more hours worked—and men’s earnings for young and working-age low-income, middle-class, and professional families. (See Figure 4.)

Figure 4

Across income groups, women from young and working-age families have seen similar rates of increase in their working hours, a fact that we saw earlier in Figure 2. But despite these similarities, the wages of women from young families did not grow nearly as much as the wages of women from working-age families. In fact, in low-income young families, women’s wages fell by 5.6 percent, compared to 8.1 percent growth in women’s wages for low-income working-age families. For the middle-class and professional income groups, young families saw much smaller gains—roughly half—in women’s wages than working-age families.

What is also striking is that across the board, men had worse earnings outcomes in young families in comparison to working-age families. At the bottom of the income ladder, men from young families saw their earnings fall by 43.8 percent, while the earnings of men from working-age families only fell by 20.4 percent. Middle-class men’s earnings for young and working-age families dropped by relatively similar percentages (11.7 percent and 9.0 percent, respectively). At the top, men from young professional families saw a 21.5 percent increase in their earnings compared to a 27.9 percent increase in men’s earnings in working-age professional families.

Conclusion

Across income groups, women’s increased work hours and—for all but low-income families—rising pay have helped young families secure their income. When we compare changes in family income between 1979 and 2013, young families fare much worse than working-age families, seeing greater losses and smaller gains in women’s wages and men’s earnings across the board.

So while women’s earnings from both higher pay and more hours have made a tangible positive difference for young families, they are simply not enough to strengthen their economic security. That’s why policies that would reduce unemployment, increase wages, and encourage full employment for young workers are essential. And when the labor market is weak, ensuring that safety net programs adequately support young workers and their families is an important way to give them an equitable chance to improve their futures.

Further, with women’s added hours of work so important to family economic well-being, the reality is that young families and working-age families alike need access to policies that help them address work-life conflicts. Nearly half of young families have a child at home and are balancing the needs of parenting young children with holding down a job. They need access to the same basket of policies other workers need, including paid sick days, paid family and medical leave, and access to safe, affordable, and enriching child care. Many young people are also trying to balance a work schedule with earning a degree, and policies such as those that promote predictable scheduling can help them invest in their future while holding down a job to make ends meet.

Heather Boushey is the Executive Director and Chief Economist at the Washington Center for Equitable Growth, and the author of the book “Finding Time: The Economics of Work-Life Conflict” from Harvard University Press. Kavya Vaghul is a Research Analyst at Equitable Growth.

Acknowledgements

The authors would like to thank John Schmitt, Ben Zipperer, Dave Evans, Ed Paisley, David Hudson, and Bridget Ansel. All errors are, of course, ours alone.

Methodology

The methodology used for this issue brief is identical to that detailed in the Appendix to Heather Boushey’s “Finding Time: The Economics of Work-Life Conflict.”

In this issue brief, we use the Center for Economic and Policy Research extracts of the Current Population Survey Annual Social and Economic Supplement for survey years 1980 and 2014 (calendar years 1979 and 2013). The CPS provides data on income, earnings from employment, hours, and educational attainment. All dollar values are reported in 2015 dollars, adjusted for inflation using the Consumer Price Index Research Series available from the U.S. Bureau of Labor Statistics. Because the Consumer Price Index Research Series only includes indices through 2014, we used the rate of increase between 2014 and 2015 in the Consumer Price Index for all urban consumers from the Bureau of Labor Statistics to scale up the Research Series’ 2014 index value to a reasonable 2015 index estimate. We then used this 2015 index value to adjust all results presented.
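
To make that last adjustment concrete, here is a minimal sketch of the index extension described above (not the authors’ code; all index values below are hypothetical placeholders, not the published series):

```python
# Sketch of the inflation adjustment described above. The CPI-U-RS ends in
# 2014, so its 2014 value is scaled by CPI-U growth from 2014 to 2015 to
# approximate a 2015 index value. All index values here are hypothetical.

cpi_u_rs = {1979: 104.3, 2013: 344.7, 2014: 349.3}  # hypothetical CPI-U-RS values
cpi_u_2014, cpi_u_2015 = 236.7, 237.0               # hypothetical CPI-U annual averages

# Extend the Research Series to 2015 using the CPI-U growth rate
cpi_u_rs[2015] = cpi_u_rs[2014] * (cpi_u_2015 / cpi_u_2014)

def to_2015_dollars(nominal_value, year):
    """Convert a nominal dollar amount from `year` into 2015 dollars."""
    return nominal_value * (cpi_u_rs[2015] / cpi_u_rs[year])

# Example: express a 1979 household income in 2015 dollars
print(round(to_2015_dollars(7500, 1979)))
```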

For ease of composition, throughout this brief we use the term “family,” even though the analysis is done at the household level. According to the U.S. Census Bureau, in 2014, two-thirds of households were made up of families, defined as at least one person related to the head of household by birth, marriage, or adoption.

We divide our sample into three income groups—low-income, middle-class, and professional households—using the definitions outlined in “Finding Time” and detailed in the box presented in the analysis above. For calendar year 2013, the last year for which we have data at the time of this analysis, we categorized the income groups as follows:

  • Low-income households are those in the bottom third of the size-adjusted household income distribution. These households had an income below $25,440 (as compared to $25,242 and below for 2012). In 1979, 28.3 percent of all households were low-income, increasing to 29.7 percent in 2013. These percentages are slightly lower than one-third because the cut-off for low-income households is based on household income data that includes people of all ages, while our analysis is limited to households with at least one person between the ages of 16 and 64. The working-age population (ages 16 to 64) typically has higher incomes than older households, and as a result, somewhat fewer working-age households fall into this low-income category.
  • Professionals are those households that are in the top quintile of the size-adjusted household income distribution and have at least one member who holds a college degree or higher. In 2013, professional households had an income of $71,158 or higher (as compared to $70,643 or higher in 2012). In 1979, 10.2 percent of households were considered professional, and by 2013, this share had grown to 16.8 percent.
  • Everyone else falls in the middle-class category. For this group, household income in 2013 ranged from $25,440 to $71,158 (as compared to $25,242 to $70,643 in 2012); the upper threshold, however, may be higher for those households without a college graduate but with a member who has an extremely high-paying job. This explains why the share of households in the middle-class group exceeds 50 percent: It declined from 62 percent in 1979 to 53.4 percent in 2013.

Note that all cut-offs above are displayed in 2015 dollars, using the inflation-adjustment method presented earlier.
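
As an illustration only, the group assignment just described can be written as a small function; this is a sketch under the assumption that size-adjusted household income and college attainment are already computed (the cutoffs are the 2013 values, in 2015 dollars, listed above):

```python
def income_group_2013(size_adjusted_income, has_college_grad):
    """Assign a household to an income group using the 2013 cutoffs
    (in 2015 dollars) described above."""
    if size_adjusted_income < 25440:
        return "low-income"       # bottom third of the distribution
    if size_adjusted_income >= 71158 and has_college_grad:
        return "professional"     # top quintile with a college graduate
    return "middle-class"         # everyone else
```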

In our analysis, we limit the universe to people with non-missing, positive income of any type. This means that even if a person does not have earnings from some form of employment but does receive income from Social Security, pensions, or any other source recorded by the CPS, they are included in our analysis. Additionally, we limit our sample to young families—households where at least one person is older than 16 and everyone is under the age of 35.

These data are decomposed into income changes between 1979 and 2013 for low-income, middle-class, and professional families. The actual household income decomposition uses a simple shift-share analysis to find the differences in earnings between 1979 and 2013 and calculate the extra earnings due to increased hours worked by women.

To do this, we first calculate the male, female, and other earnings by the three income categories. To calculate the sex-specific earnings per household, we sum the income from wages and income from self-employment for men and women, respectively. The amount for other earnings is derived by subtracting the male and female earnings from total household earnings. We average the household, male, female, and other earnings by each income group for 1979 and 2013 and take the differences between the two years to show the raw changes in earnings by each income group.

To find the change in hours, for each year, by household, we sum the total hours worked by men and women. We average these per-household male and female hours, by year, for each of the three income groups.

Finally, we calculate the counterfactual earnings of women: We take the 2013 earnings per hour for women and multiply it by the 1979 hours worked by women. We then subtract these counterfactual earnings from the female earnings in 2013, arriving at the female earnings due to additional hours.
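
Putting the steps above together, a compact sketch of the decomposition might look like the following (the DataFrame layout and column names are our own assumptions for illustration, not the authors’ actual code):

```python
import pandas as pd

# Assumed layout (ours, for illustration): one row per household-year with
# columns year (1979 or 2013), group ('low-income' | 'middle-class' |
# 'professional'), male_earn, female_earn, total_income, female_hours.

def decompose(df: pd.DataFrame) -> pd.DataFrame:
    # Other income = total household income minus male and female earnings
    df = df.assign(other=df["total_income"] - df["male_earn"] - df["female_earn"])
    avg = df.groupby(["group", "year"]).mean(numeric_only=True)

    parts = {}
    for g in avg.index.get_level_values("group").unique():
        a79, a13 = avg.loc[(g, 1979)], avg.loc[(g, 2013)]
        wage13 = a13["female_earn"] / a13["female_hours"]  # 2013 earnings per hour
        counterfactual = wage13 * a79["female_hours"]      # 2013 wage at 1979 hours
        parts[g] = {
            "male earnings": a13["male_earn"] - a79["male_earn"],
            "female earnings, more hours": a13["female_earn"] - counterfactual,
            "female earnings, higher pay": counterfactual - a79["female_earn"],
            "other income": a13["other"] - a79["other"],
        }
    # Rows are income groups; columns are the contribution components
    return pd.DataFrame(parts).T
```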

One important point to note is that because of the nature of this shift-share analysis, the averages don’t exactly tally up to the raw data. Therefore, when presenting average income, we use the sum of the decomposed parts of income. While economists typically show median income, for ease of composition and the constraints of the decomposition analysis, we show the averages so that the data are consistent across figures. Another important note is that we make no adjustments for changes over time in topcoding of income, which likely has the effect of exaggerating the increase in professional families’ income relative to the other two income groups.

Equitable Growth in Conversation: An interview with David Card and Alan Krueger

“Equitable Growth in Conversation” is a recurring series where we talk with economists and other social scientists to help us better understand whether and how economic inequality affects economic growth and stability.

In this installment, Equitable Growth Research Economist Ben Zipperer talks with economists David Card and Alan Krueger. Their discussion touches on the origins of empirical techniques they advanced, how the United States is falling behind when it comes to data, and two conflicting threads of contemporary economic theory.

Read their conversation below.


Ben Zipperer: A common theme in both of your work involves isolating specific interventions or plausibly exogenous changes in the phenomena you’re studying, say in the case of your famous study comparing restaurants in New Jersey and Pennsylvania after a minimum wage increase. What kind of challenges did you face early on in that research—in the days before words or phrases like “research design” and “natural experiment” were kind of ubiquitous terms in the field of economics?

And then also, can you talk a little bit about the influence of the quasi-experimental approach on labor economics today and maybe the field of economics as a whole?

David Card: There are several origin stories that meet sometime in the late ’80s, I would say, in Princeton. One part of the origin story would be Bob LaLonde’s paper on evaluating the evaluation methodologies. So, in the 1970s, if you were taking a class in labor economics, you would spend a huge amount of time going through the modeling section and the econometric method. And ordinarily, you wouldn’t even talk about the tables. No one would even really think of that as the important part of the paper. The important part of the paper was laying out exactly what the method was.

But there was an underlying current of how believable are these estimates, what exactly are we missing. And some of that came to the fore in LaLonde’s paper.

He was a grad student at Princeton in the very first cohort that I advised: He was actually a grad student when I was a grad student, but he was a couple years behind me. Then I was his co-adviser with Orley Ashenfelter. And in the course of doing that work, it became pretty obvious that these methods were very, very sensitive: If you played around with them, you got different answers.

The impetus for that paper was some work that Orley and I were asked to do evaluating the old CETA programs. There were a bunch of different methods around, and they would give very different answers. So Orley had the idea of setting Bob off in that direction, and it evolved from there.

So that was one part of the origin story. Another part was the move from macro-type evidence to micro evidence. There was growing appreciation of that. And the first person that I saw really use the phrase “natural experiment” was Richard Freeman.

Alan Krueger: That’s who I learned it from, too. Richard was an enormous fan of the work by LaLonde, and also of the paper Orley did in JASA [the Journal of the American Statistical Association] on the negative income tax experiment. He always had a soft spot for natural experiments. But I think he used the term differently than we would.

He applied it to big shocks. So to him, the passage of the Civil Rights Act was a natural experiment. The tight labor market in the 1960s was another natural experiment. I think the way he viewed it was a bit different from the way it started to get applied, which was that the world opened up and made a change for some group that could be viewed as random. When Josh Angrist and I looked at compulsory schooling, we looked at a small change. The natural experiment was just being born on one side or the other of the threshold for starting school, which affected how many years of education you ultimately got, because under different compulsory schooling laws students would reach the minimum schooling age in different grades.

But that’s where I first heard the term.

Card: Right. And you mentioned research design. I remember Alan was an assistant professor and I was a professor at Princeton and Alan sat next to me. And he, for some reason, got a subscription to the New England Journal of Medicine. (Laughter.) And —

Zipperer: Intentionally?

Krueger: Yeah. I loved reading the New England Journal of Medicine.

Card: Yeah. And the New England Journal would come in every week, so there was a lot of stuff to read. And the beginning of each article would have “research design.”

Krueger: And “methods.”

Card: Yes, and if you’ve never seen that before and you were educated as an economist in the 1970s or 1980s, that just didn’t make any sense. What is research design? And I remember one time I said, “I don’t think my papers have a research design.”

And so that whole set of terms entered economics as a result of those kinds of changes in orientation. But I would say that another thing that happened was that Bob LaLonde got a pretty good job and his paper got a lot of attention. And then Josh Angrist, again following up a suggestion from Orley to look at the Vietnam draft—that paper got a lot of attention. And it looked like there was a market, in a way, for this new style of work. It’s not like we were trying to sell something that no one wanted. There was actually a market out there generally, in the labor economics field, at least.

Krueger: There was, but there was also resistance. (Laughter.)

I agree with everything David said. The other thing—which I think helped to support this, although maybe it gets overrated—is that data became more available, and big datasets like the Census were easier to use.

Historically, when the 1960 Census microdata first became available, Jacob Mincer used it and had an enormous impact. And I think the fact that we were inventorying more data meant that if you wanted to look at a natural experiment (for example, a change in Social Security benefits that affected one cohort and not another), the data were out there to do it.

I think another thing — which was a bit new when we did it for our American Economic Review article on the minimum wage — was to go out and collect our own data when we saw the opportunity to study a natural experiment. But in other situations the fact that there were just data out there to begin with, I think, helped this movement.

Card: Yeah. That was the case with my Mariel Boatlift paper. It was written a little bit before we started working on minimum wages. And in that case, it just so happened that the Outgoing Rotation Group files were available starting in 1979. And so, with those files, it was fairly straightforward to do an analysis of the effects on even the Miami labor market.

And in retrospect there’s a new paper by George Borjas flailing around trying to overturn the results in my paper. But in truth, if somebody had been on the ground in Miami in 1980 and gotten their butts in gear, there would have been so much more interesting stuff to do.

For instance, when Hurricane Andrew happened, people actually convinced the CPS to do a survey or supplement, right?

Krueger: Yes.

Card: So, I think the whole, not just the profession, but even maybe the government, has become a little bit more aware of the importance of really strategically moving resources around and collecting data.

And now the administrative data is available for some things as well.

Zipperer: Speaking of data access, how important do you think it is now for work on the research frontier of labor economics to have access to administrative data or other restricted datasets? Is the United States positioned as a leader in this? Or are we paling in comparison to other countries?

Card: Well, we’ve got a lot of disadvantages. One problem is that we don’t have a centralized statistical agency. And so you’ll forever run into someone who wants to do a project and they’re not able to do it because there’s a bureaucratic obstacle to using this particular dataset or that particular dataset.

So for example, matching the LEHD [Longitudinal Employer-Household Dynamics] data to the Census of manufactures or the Census of firms. That would be a natural thing to do, but it’s not that easy to do. If it were one statistical agency, this would be much easier.

And then the laws of the United States—not just the federal but then the state laws—governing access to, say, the UI [unemployment insurance] files. Partially, those are available to the Feds when they’re constructing the LEHD data or other types of datasets, but they’re not available to individual researchers.

Although Alan and I have both used, for example, data from New Jersey. So individual researchers can, in some cases, contact the state and get some help. But that often requires some combination of a person on the other side who actually wants to answer the phone and talk to you, and maybe some resources.

Krueger: Yes, so I would say we’re behind other countries in terms of administrative datasets. We’ve long been behind Scandinavia, which has provided linked data for decades. And we’re now behind Germany, where a lot of interesting work is being done.

And it’s unfortunate because we did lead the world, I would say, in labor force surveys. The rest of the developed world copied our labor force survey and copied our practice of making the data available for researchers to use.

It’s much more cumbersome, bureaucratic, and idiosyncratic here to get access to the administrative data. And I don’t think that’s good for American economists or for studies of the economy.

And it’s going to make it much harder to replicate work going forward. And that’s unfortunate because I think a strength in economics has been the desire to replicate results.

Card: But I think it is absolutely critical for front-line research in the field to have access to some kind of data. Either you get access to administrative data through personal connections like a lot of people do. Or there are certain countries that make it available, like Germany, for instance—I’ve done a lot of work there—or Portugal. Or like Alan has done where he’s used some of the resources available at Princeton to do some specialized surveys and connect the responses with the administrative data. That’s probably the frontier at this point. But that’s not going to be a thing that a typical person can do very easily.

Krueger: And we haven’t caught up in terms of training students to collect original survey data. I’ve long thought we should have a course in economic methods—going back to the New England Journal of Medicine—and cover the topics that applied researchers really rely upon, but typically are forced to learn on their own. Index numbers, for example. Or ways of evaluating whether a questionnaire is measuring what you want it to measure. And survey design, sampling design and the effect of non-response bias on estimates.

These are topics that other social science fields often teach and we just take for granted that students know it. And there’s a lot of work that’s being done, especially in development economics, on implementing randomized experiments, which I think is a net positive. But there’s also a lot of noise being produced. And I think having more training in terms of data collection, survey design, experimental design, would be helpful for our field.

Zipperer: You mentioned randomized experiments. What are your views on the pluses and minuses of what seem to be a variety of different empirical approaches now common in economic research, such as randomized experiments, actually conducting an experiment? Or a quasi-experimental approach, compared to say, a more model-centric approach? Or even more recent kinds of data mining techniques that let the data tell us the research design?

Card: I would say, and I think Alan would probably agree with me, that at the end of the day, you probably want to have all those things if possible. And each of them has some strengths and some weaknesses.

The strength of a randomized controlled trial is the ability to say you’ve got this treatment and this control group and it’s random. So that means that you’re internally consistent. The weakness is that the set of questions you can ask and the context in which you can ask those questions is often very contrived.

So the one extreme is the lab experiment, where you’re getting a bunch of students and you’re asking them to pretend that they’re two sides of a bargaining table or something similar. And by changing the way you set the protocols for those experiments, as people who work in that field are aware, you can get somewhat different answers. To some extent, the criticisms of psychology that you’ve seen played out in the newspapers recently have a lot to do with those difficulties. It’s not just how you read the script but how you set up the lab and everything else that matters.

So the great advantage of a quasi-experiment or a natural experiment like the minimum wage is that it’s a real intervention. It’s real firms that are all affected. You get part of the general equilibrium effect. That’s pretty important for understanding the overall story. The disadvantage is that someone can always say, well, it isn’t truly random. And the number of units might be small. So you might only have two states. At some abstract level, there’s only two degrees of freedom there. And so that’s a problem.

And then there’s a third set of problems, which I’ve alluded to before, which is the types of questions that you can ask. And this is where my former colleague, Angus Deaton, is well-known for his vitriolic criticism of RCTs in development economics.

And I think one interpretation of his concern is the set of questions that can be asked are really so small, relative to the bigger questions in the field. Now that isn’t always the case but that is a concern.

Krueger: Yes, I would just add that no research design is going to be perfect. And you can poke holes in anything. And I think if you believe that existing research is great and we have answered so many questions and we were on the right track before, then one might be hostile towards the growth of randomized controlled trials. But that’s not how I view the earlier state of research.

In my mind, there are two great strengths of randomized experiments. One is that the treatment is exogenous by design. And the other is that it makes specification searching more constrained. It’s pretty clear what you’re going to do. You’re going to compare the treatment group and the control group.

I’ve seen cases where people muck around to generate a result from an experiment. For example, look at Paul Peterson’s work on school vouchers, where he finds no impact overall and kind of buries that, but looks at a restricted sample of African Americans in some cities and argues that we’ve got these great effects from school vouchers, which turn out not to hold up if you actually expand the sample. So I’m not saying that randomized experiments totally tie people’s hands. But I think they do so more than is the case with non-experimental methods applied to observational data.

I’ve become more eclectic over time regarding research methods, as I mentioned at the event earlier today. I mean, I was struck when I worked in the White House at the range of questions I would get from the President. And you’d want to do the best job answering them. That was your job.

And there were some cases where there was very little evidence available and there was some modeling which, if you buy the assumptions of the modeling, could answer a lot of questions.

And I think that was probably better than the alternative, which is having a department come in and plead its case based on no evidence or model whatsoever.

So I encourage economists to use a variety of different research styles. What I think on the margin is more informative for economics is the type of quasi-experimental design that David and I emphasize in the book.

But the other thing I would say, which I think is underappreciated, is the great value of just simple measurement. Pure measurement. And many of the great advances in science, as well as in the social sciences, have come about because we got better telescopes or better microscopes, simply better measurement techniques.

In economics, the national income and product accounts are a good example. Collecting data on time use is another good example. And I think we underinvest in learning methods for collecting data—survey data, administrative data, and data that you can collect naturally through sensors and other means.

Card: Yeah. For instance, take the American administrative data that’s collected by the Social Security Administration. If you wanted to do something very simple to that dataset that would make it possible to do a lot more, you could ask each employer who reports their employees’ Social Security earnings data to also report the spells that they worked: the start and end of each job.

That simple kind of information—which in many cases could be collected almost trivially, if with some burden—would expand the usefulness of that dataset for an amazing range of purposes.

It turns out, that’s what they do in other countries. So you can then take an administrative dataset like the Social Security Administration’s, and it suddenly becomes a spell-based dataset, because you’ve got every employment spell that somebody had during the year, automatically, for free.

It’s not perfect, but it’s just a quantum improvement. Unfortunately, though, we don’t have anybody saying, well, what could we do to make administrative datasets better and more useful for research?

There are people at the Census Bureau who are working on matching administrative datasets with survey datasets. But oftentimes that work happens way down in the subterranean levels, partially because of the concern that people would somehow be shocked if they knew you could actually take the Numident [Numerical Identification System] file and attach a Social Security number to every piece of paper going through. So we have quite a problem here.

Zipperer: So, to take another concrete case where measurement seems to be particularly important and related to work that you’ve done on minimum wages, what kind of wage spillover effects do minimum wages generate for people who are, say, earning above a new minimum wage after a minimum wage increase?

There’s a lot of work showing that there are spillover effects and there are questions about how big they are, perhaps due to a measurement error in wages and survey data. What are your views about why these spillover effects seem to exist?

Krueger: Let me make some initial comments. In our book, we discovered spillover effects. When I say we discovered them, I mean we asked in a very direct way: when the minimum wage went from $3.35 to $4.25 and you had a worker who was making $4.50, did that worker get a raise as a result?

And what we found was that a large share of fast food restaurants responded “yes.” We had these knock-on effects or spillover effects.

Interestingly, they tended to occur within firms that were paying below the new minimum wage. You had some restaurants that were already above the new minimum wage. And the increase in the minimum wage had very little effect on their wage scales, which suggests that internal hierarchies matter for worker morale and productivity.

Only to economists is that surprising. The rest of the world knows that how people are treated compared to others influences their behavior, how they view their job, how likely they are to stay in it, and so on.

The standard textbook model, by contrast, views workers as atomistic. They just look at their own situation, their own self-interest, so whether someone else gets paid more or less than them doesn’t matter. The real world actually has to take into account these social comparisons and social considerations. And the field of behavioral economics recognizes this feature of human behavior and tries to model it. That thrust was going on, kind of parallel to our work, I’d say.

Now, I also found it interesting that when the minimum wage was at a higher level compared to a lower level, the spillover effects were less common.

So to some extent, the spillover effects are voluntary, and the companies are willing to put up with somewhat lower morale when the minimum wage is at a relatively higher level. And I always found it curious that companies would complain, “It’s not the minimum wage itself, it’s that I’m going to have to pay everybody else more.” Well, that shows that you’re not behaving the way the model you just cited, to argue that you’re going to hire fewer workers, says you should behave. Because you’re voluntarily choosing to pay a higher wage to people who were already working at a lower one.

And it also gets you to think, well, maybe the wage from a societal perspective was too low to start with. And the fact that employers are taking into account these spillover effects when they set the starting wage means that from a societal perspective, we could get stuck in an equilibrium where the wage is too low.

Now, I always suspected that the spillover effects kind of petered out once you got 50 cents or a dollar an hour above the new minimum wage. But interestingly, work by David Lee, who was a student of David’s and mine at Princeton, suggests that the spillover effects are pretty pervasive throughout the distribution. And he used a different method, one that I think is quite compelling, to look at what happened around minimum wage increases in states where the increases had more of a binding effect.

And he found quite significant spillover effects. So one area where I think the literature has deviated from what we concluded in our book: we thought the spillover effects were there, but modest. And I would say, if anything, the literature points to a larger impact of the minimum wage because of the spillovers.
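
[Editor’s note: A stylized numerical sketch, in Python, of the spillover pattern Krueger describes. The functional form and the size of the raises are invented for illustration; the one-dollar band reflects Krueger’s suspicion that spillovers peter out 50 cents to a dollar above the new minimum, while Lee’s results suggest a much wider reach.]

# Invented illustration of spillovers from a minimum wage increase:
# wages below the new floor must rise to it, while wages just above it
# may get voluntary "knock-on" raises that fade with distance from the floor.
def new_wage(old_wage: float, new_min: float, band: float = 1.00) -> float:
    if old_wage < new_min:
        return new_min                # mandated: bring the wage up to the floor
    gap = old_wage - new_min
    if gap < band:
        # hypothetical spillover raise, shrinking to zero at the band edge
        return old_wage + 0.5 * (band - gap)
    return old_wage                   # far above the floor: unaffected

# The $3.35-to-$4.25 increase discussed above:
for w in (3.35, 4.00, 4.50, 4.75, 5.50):
    print(f"${w:.2f} -> ${new_wage(w, 4.25):.2f}")
# $3.35 -> $4.25  (required)
# $4.00 -> $4.25  (required)
# $4.50 -> $4.88  (spillover)
# $4.75 -> $5.00  (spillover)
# $5.50 -> $5.50  (unchanged)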

Card: Thinking about why these occur—Laura Giuliano, who attended the conference today, has a very interesting new paper studying a large retailer that has establishments all across the country, where wages were set at the company level.

And the paper shows that employees who were paid above the minimum wage, but who worked in stores where different fractions of the employees below them got bigger or smaller raises, have differential quit behavior. So it’s really strong direct evidence of this channel that everyone has always thought is probably operating.

I think that our understanding of exactly all the forces that determine the equilibrium wage distribution is pretty limited, to tell you the truth.

In the United States, for example, it’s very, very difficult to get an administrative dataset that would say: Here’s everybody that works together at the firm. And let’s treat that, as Alan was saying, as part of the social group. What things do they share? What features of their outcomes seem to be mediated through the fact that they all work for the same employer?

And in the Scandinavian countries, there’s quite a bit of work going in that direction. One really simple example: if a male boss at a firm takes leave when his wife has a baby, then the other employees do too. That’s the kind of work you could do if you had the ability to match these datasets together and show that the workers were all at the same firm.

I think outside of economics, in sociology for instance, they’ve always thought that a very important part of everyone’s identity is the firm they work for and who they work with.

And it has to be really influential in how you think about your life, how you organize your time, the people you hang out with, and so on. But in a standard economics model, that’s all thrown out the window. And for some questions, it might be second-order at best. But for other questions, it seems like it’s first-order.

Zipperer: Do you see that changing somewhat with, for example, your and others’ work on the nature of the firm influencing inequality?

Card: Well, I’m always hopeful. (Laughter.)

Krueger: Yes, I would say the success of behavioral economics is a major development in economics.

Card: And in labor economics especially, I’d say.

There is an interesting thing going on in economics. We see job market candidates come through every year. And there are sort of two sides of economics in their work simultaneously.

One side is uber-technical. More and more technical stuff every year. You cannot believe the complicated ideas that people pretend individuals are working with when choosing whether to do this or that.

And on the other side, behavioral economics is almost a reaction to that. It says, “Actually, those effects are all third-order. The first-order thing is the concern about how you rank relative to your peers.”

So the great advantage of behavioral economics is that it is saying, “OK. I’m going to try and simplify away from this incredibly complicated thing where your choice about whether to participate in a welfare program is influencing how you’re going to divide up the surplus between you and your husband and whether you’re going to be divorced next year.”

I saw a paper like this last week and I honestly thought, “If I could think this through myself, it would be a miracle.” (Laughter.) And I’ve spent my life thinking about that.

Krueger: And you oversimplified it: You’re considering each step along the way, assuming you will make optimal choices each year in the future, and then working backward to figure out what to do today.

Card: So there are these two strands of economics that are really fighting it out right now on the theory side. And in a way, behavioral economics is much more closely linked to what I think someone earlier today was calling institutional economics. It’s the idea that people are doing a set of things, maybe following rules of thumb and so on, that influence how they choose what they do, and that maybe we would gain a lot from understanding those things a little better.

Zipperer: At the beginning of this discussion, a lot of arrows seemed to point back to Orley Ashenfelter. Could you talk about his influence on your work and maybe the field generally?

Card: Well, for me it’s very strong, because he was my thesis adviser and really the reason why I went to Princeton as a grad student. And even as an undergraduate, the two professors I took courses from who had the most influence on me were students of Orley’s.

So my connection to him goes back a long time. And we wrote a bunch of papers together over the years and advised many students. But also many of the people of my generation of labor economists, like Joe Altonji, John Abowd, and other people like that, were strongly influenced by Orley.

Right from the get-go, he was a very, very strong proponent of “experiments if you can do them,” “collect your own data if you can,” and “spend the money if you can.” One time, he and Alan went to the Twinsburg Twins Festival and collected data on twins.

Krueger: One time? Four summers in a row we went to Twinsburg, Ohio, with a group of students. We brought a dozen students. (Laughter.)

And it was actually classic Orley because he spent a lot of time choosing the restaurant for dinner, a lot of time chatting with some people, and not too much time collecting data, as I recall.

I read Orley’s work when I was an undergraduate. And a big part of the attraction for me to come to Princeton was Orley, and then David was just really a bonus who I ended up working with so closely for a decade.

And I think Orley kind of set the tone for the Industrial Relations Section. He had done work on the minimum wage with Bob Smith at Cornell, on non-compliance and how much of it there was, which made us think that if you really want to look for the effects of the minimum wage, you need to look in places where it’s binding and companies are complying.

He had a healthy dose of skepticism about the research that had come from the National Minimum Wage Study Commission, which he sometimes called, as I recall, the National Minimum Study Commission.

Card: Minimum Study Wage Commission.

Krueger: The Minimum Study Wage Commission. (Laughter.)

Card: You can quote me on that.

Krueger: We’re just quoting him. (Laughter.) And he used to like to tell a story, which I remember vividly, where he met with some restaurant group when he worked, I think, at the Labor Department. And they said, “We’ve got a problem in our industry: The minimum wage is too low and we can’t get enough workers.”

And that’s inconsistent with the view that the market determines the wage, that you get all the workers you want at the going wage, and that you can raise the wage if you can’t get enough workers. And I think he was always sympathetic to the famous passage in “The Wealth of Nations” where Adam Smith said that employers rarely get together without the subject turning to how to keep wages low; that there’s a tacit and constant collusion by employers. So I think he set a tone where it was acceptable if you found results that went against the conventional wisdom.

And I came from an environment where even Richard Freeman, who was a somewhat heterodox economist at the time, had written that there’s a downward-sloping demand curve for low-wage workers and that a higher minimum wage reduces employment, not by all that much, but that you get the conventional effects. So that was my background coming in.

Zipperer: Well, thanks very much. This was a great discussion.

Krueger: Sure.

Card: Sure.

Zipperer: Thank you.