Must-read: John Stoehr: Thomas Frank and the Illusion of Presidential Omnipotence

Must-Read: John Stoehr: Thomas Frank and the Illusion of Presidential Omnipotence: “In Listen, Liberal, Frank describes President-elect Barack Obama, as the financial crisis is beginning to unfold, as…

…‘living, breathing evidence that our sclerotic system could still function, that we could rise to the challenge, that we could change course. It was the perfect opportunity for transformation.’ Yet, Frank says, that transformation didn’t happen. So Obama and the Democrats failed. But what could they have done differently? While he excels at calling the Democrats to account, Frank falls short in offering policy recommendations, even rough sketches of policy. There are none. Populists don’t take such questions seriously, because such questions assume that knowledge, method, and procedure are more important than believing in the righteousness of the cause. Frank is no exception….

Frank and Sanders are right in one very big way—inequities of wealth, income, and power threaten our lives, livelihoods, and republican democracy. All of us need big bold ideas and the political courage to see them realized. Being right in one very big way is the primary strength of populism. Progressives do the work, but populists are the voices of conscience, the moral scolds, the screaming Jeremiahs. But they are wrong too. The current president has done more with more resistance in the name of progress than any president since nobody knows. Along with flawed-but-good health care reform, financial regulation, and sustainable energy policy, Obama has achieved: gender-equity laws; minimum wage rules for government contractors; a labor relations board that serves labor; and a tax rule barring corporate ‘inversions.’ And he formally ended two wars…

Well, Obama could have pushed the paper on appointments–a head of the FHFA willing to use the GSEs as tools of macro policy should that become necessary, and a filled-up Federal Reserve Board to counteract the baneful influence of too many regional bank presidents who did not understand the situation and would not learn. He could have used the banks’ reliance on TARP money to take equity–and then shut down their lobbying efforts in opposition to Dodd-Frank. Given that Republican obstructionism was entirely predictable, the right move was to pass a very large Reconciliation package in January 2009: a carbon tax to deal with global warming, Medicare-for-all, a first Recovery Act, the path greased for a second Recovery Act to be passed by a bare majority should it become necessary, plus an infrastructure bank. Then you could bargain back to the cap-and-trade policies you really wanted, from a position in which failing to come to the negotiating table was much more painful for the Republicans than bargaining. There were lots of things that could have been done.

But in some ways what I have just written confirms Stoehr’s overall point: there were things that Obama could have done, or tried to do. But Sanders and Frank appear ignorant of them…

Memo to self: Monetary policy since 1985

[FRED graph, St. Louis Fed]

Major Federal Reserve Policy Moves since 1985:

The Federal Reserve overshoots and overtightens. But the effect on the economy is diminished because more-responsible fiscal policy leads to a fall in the term and risk premiums:

[FRED graph]

The Federal Reserve eases monetary policy to fight the recession and jobless recovery caused by its previous overshoot:

[FRED graph]

The Federal Reserve tightens to try–successfully–to keep inflation from rising; the first bond-market “conundrum,” as the endogenous duration of mortgage-backed securities produces a much tighter-than-expected gearing between the short-term safe nominal interest rate i and the long-term risky real interest rate r:

[FRED graph]
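A minimal sketch of that gearing mechanism, with made-up parameters rather than estimates: when rates rise, mortgage prepayments slow, the expected life of mortgage-backed securities extends, and the duration that bondholders must hedge grows–so a given move in the short rate i produces an outsized move in the long rate r. The 5% coupon, prepayment speeds, and refinancing sensitivity below are illustrative assumptions only:

```python
# Illustrative sketch (not calibrated): MBS duration extension.
# Prepayments slow as market rates rise above the pool's coupon,
# stretching cash flows out and raising effective duration.

def mbs_duration(rate, coupon=0.05, base_cpr=0.15, refi_sens=5.0, years=30):
    """Macaulay duration (years) of a level-payment mortgage pool whose
    annual prepayment rate (CPR) falls as rates rise above the coupon."""
    cpr = min(0.40, max(0.02, base_cpr - refi_sens * (rate - coupon)))
    balance, pv, t_weighted = 1.0, 0.0, 0.0
    for t in range(1, years + 1):
        n_left = years - t + 1
        payment = balance * coupon / (1 - (1 + coupon) ** -n_left)
        interest = coupon * balance
        scheduled = payment - interest          # scheduled principal
        prepay = cpr * (balance - scheduled)    # unscheduled principal
        cash = interest + scheduled + prepay
        balance -= scheduled + prepay
        disc = (1 + rate) ** -t
        pv += cash * disc
        t_weighted += t * cash * disc
    return t_weighted / pv

for r in (0.04, 0.05, 0.06):
    print(f"market rate {r:.0%}: duration ~ {mbs_duration(r):.1f} years")
```

As the printout shows, duration rises as rates rise; hedgers shedding that extra duration are what tightens the gearing between i and r.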

The Federal Reserve loosens during the international financial crisis of 1998:

[FRED graph]

The Federal Reserve tightens to try to prevent “overheating” in the late stages of the dot-com boom:

[FRED graph]

The Federal Reserve loosens to fight the recession brought on by the collapse of the dot-com boom:

[FRED graph]

The Federal Reserve keeps policy stimulative and delays its interest-rate tightening cycle given the weakness of the recovery; the bond market first does not and then does credit the Federal Reserve’s statements:

[FRED graph]

The Federal Reserve eases as the magnitude of the subprime-driven financial crisis becomes apparent; but the collapse in financial market trust and the financial crisis come anyway:

[FRED graph]

With the recovery inadequate, the Federal Reserve decides to extend the period of extraordinary emergency monetary stimulus–but the long-term risky real interest rate r sticks at 3% and does not go any lower:

[FRED graph]

With the unemployment rate now in the range associated with full employment, the Federal Reserve decides that it is time to “normalize” interest rates:

[FRED graph]

Inflation Control:

The Federal Reserve has overdone it on inflation control–successfully kept inflation from getting “too high”, and in fact pushed inflation “too low”:

[FRED graph: Consumer Price Index for All Urban Consumers: All Items Less Food and Energy, St. Louis Fed]

Full Employment:

Before 2008, macroeconomic stabilization performance on full employment was quite good. 2008-2010 was a disaster. How we evaluate what follows depends on whether we look at unemployment or employment:

[Two FRED graphs, St. Louis Fed]

Structural Adjustment: “Pounding Nails in Nevada…”

Was a recession in 2009 in any sense “needed” to move people out of construction employment as the housing boom collapsed? Was a rise in unemployment a necessary first step in rebalancing the late-2000s economy?

No: Look at the key components of aggregate demand:

[FRED graph, St. Louis Fed]

As of November 2008, when John Cochrane gave his “we should have a recession… people who spend their lives pounding nails in Nevada need something else to do…” keynote address to the 2008 CRSP Forum, residential investment had already fallen by 3.5 percentage points of GDP and was within 0.5 percentage points of its eventual nadir. The recession came after the movement of labor out of construction was all but completely finished:

[FRED graph, St. Louis Fed]
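For anyone who wants to redo that arithmetic, a sketch of how one might reproduce the residential-investment share from FRED. The series codes PRFI (nominal private residential fixed investment) and GDP (nominal GDP) are my assumptions about what lies behind the graph, not a record of what was actually plotted:

```python
# Sketch: residential investment as a share of GDP, pulled from FRED.
# Assumes the pandas_datareader package; PRFI and GDP are assumed
# series codes, both quarterly at seasonally adjusted annual rates.
import pandas_datareader.data as web

df = web.DataReader(["PRFI", "GDP"], "fred",
                    start="2000-01-01", end="2011-01-01")
share = 100 * df["PRFI"] / df["GDP"]   # percent of GDP
# the roughly 3.5-point fall from the mid-2000s peak to late 2008
# should show up in this window
print(share.loc["2005":"2009"].round(2))
```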

If you were going to say “we should have a recession” on the grounds that a recession was a necessary part of the structural adjustment required to climb down from overinvestment in housing, the moment to have said that was 2005. And those who said that then were wrong: we did not need a recession in order to move those “pounding nails in Nevada” into other sectors while keeping them employed…

Must-See: Omer Moav: Geography, Transparency, and Institutions

Must-See: Omer Moav: Geography, Transparency, and Institutions: “April 25 | 4:10-5:30 p.m. | 639 Evans Hall…

…We propose a theory by which geographic variations explain cross-regional institutional differences in: (1) the scale of the state, (2) the distribution of power in the state hierarchy, and (3) farmers’ property rights over land. The mechanism underlying our theory is based on the effect of geography on transparency of farming, which in turn determines the state’s extractive capacity. We apply our theory to explain differences in the institutions of Egypt, Southern Mesopotamia and Northern Mesopotamia in antiquity.

Equitable Growth in Conversation: An interview with David Card and Alan Krueger

“Equitable Growth in Conversation” is a recurring series where we talk with economists and other social scientists to help us better understand whether and how economic inequality affects economic growth and stability.

In this installment, Equitable Growth Research Economist Ben Zipperer talks with economists David Card and Alan Krueger. Their discussion touches on the origins of empirical techniques they advanced, how the United States is falling behind when it comes to data, and two conflicting threads of contemporary economic theory.

Read their conversation below.


Ben Zipperer: A common theme in both of your work involves isolating specific interventions or plausibly exogenous changes in the phenomena you’re studying, say in the case of your famous study comparing restaurants in New Jersey and Pennsylvania after a minimum wage increase. What kind of challenges did you face early on in that research—in the days before words or phrases like “research design” and “natural experiment” were kind of ubiquitous terms in the field of economics?

And then also, can you talk a little bit about the influence of the quasi-experimental approach on labor economics today and maybe the field of economics as a whole?
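[A minimal sketch of the difference-in-differences comparison the New Jersey–Pennsylvania study popularized may help frame the answers. The data below are fabricated for illustration and come from no actual survey:]

```python
# Minimal difference-in-differences sketch in the spirit of the NJ/PA
# minimum-wage design. All data are fabricated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # fabricated restaurants per state per period
df = pd.DataFrame({
    "nj":   np.repeat([0, 1, 0, 1], n),   # 1 = New Jersey (treated state)
    "post": np.repeat([0, 0, 1, 1], n),   # 1 = after NJ's increase
})
# fabricated full-time-equivalent employment: a state-level difference,
# a common time shock, and a true treatment effect of +0.5 by construction
df["fte"] = (20 + 1.0 * df["nj"] - 2.0 * df["post"]
             + 0.5 * df["nj"] * df["post"] + rng.normal(0, 4, 4 * n))

# the difference-in-differences estimate is the coefficient on nj:post
m = smf.ols("fte ~ nj * post", data=df).fit(cov_type="HC1")
print(m.params["nj:post"], m.bse["nj:post"])
```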

David Card: There are several origin stories that meet sometime in the late ’80s, I would say, in Princeton. One part of the origin story would be Bob LaLonde’s paper on evaluating the evaluation methodologies. So, in the 1970s, if you were taking a class in labor economics, you would spend a huge amount of time going through the modeling section and the econometric method. And ordinarily, you wouldn’t even talk about the tables. No one would even really think of that as the important part of the paper. The important part of the paper was laying out exactly what the method was.

But there was an underlying current of how believable are these estimates, what exactly are we missing. And some of that came to the fore in LaLonde’s paper.

He was a grad student at Princeton in the very first cohort that I advised: He was actually a grad student when I was a grad student, but he was a couple years behind me.  Then I was his co-adviser with Orley Ashenfelter. And in the course of doing that work, it became pretty obvious that these methods were very, very sensitive: If you played around with them, you got different answers.

The impetus of that paper was some work that Orley and I were asked to do evaluating the old CETA programs. There were a bunch of different methods that were around and they would give very different answers. So Orley had the idea of setting Bob on that direction and that really evolved that way.

So that was one part of the origin story. Another part was the move from macro-type evidence to micro evidence. There was growing appreciation of that. And the first person that I saw really use the phrase “natural experiment” was Richard Freeman.

Alan Krueger: That’s who I learned it from, too. Richard always had an interest in evidence-based natural experiments. He was an enormous fan of the work by LaLonde; also, the paper Orley did in JASA [the Journal of the American Statistical Association] on the negative income tax experiment. Richard always had a soft spot for natural experiments. But I think he used the term differently than we would.

He applied it to big shocks. So to him, the passage of the Civil Rights Act was a natural experiment. The tight labor market in the 1960s was another natural experiment. I think the way he viewed it was a bit different from the way it started to get applied, which was that the world opened up and made a change for some group that could be viewed as random. When Josh Angrist and I looked at compulsory schooling, we looked at a small change. The natural experiment was just being born on one side or the other of the threshold for starting school, which then affected how many years of education you ultimately got, because compulsory schooling laws differed and students reached the minimum schooling age in different grades.

But that’s where I first heard the term.

Card: Right. And you mentioned research design. I remember Alan was an assistant professor and I was a professor at Princeton and Alan sat next to me. And he, for some reason, got a subscription to the New England Journal of Medicine. (Laughter.) And —

Zipperer: Intentionally?

Krueger: Yeah. I loved reading the New England Journal of Medicine.

Card: Yeah. And the New England Journal would come in every week, so there was a lot of stuff to read. And the beginning of each article would have “research design.”

Krueger: And “methods.”

Card: Yes, and if you’ve never seen that before and you were educated as an economist in the 1970s or 1980s, that just didn’t make any sense. What is research design? And I remember one time I said, “I don’t think my papers have a research design.”

And so that whole set of terms entered economics as a result of those kinds of changes in orientation. But I would say that another thing that happened was that Bob LaLonde got a pretty good job and his paper got a lot of attention. And then Josh Angrist, again following up a suggestion from Orley to look at the Vietnam draft—that paper got a lot of attention. And it looked like there was a market, in a way, for this new style of work. It’s not like we were trying to sell something that no one wanted. There was actually a market out there generally, in the labor economics field, at least.

Krueger: There was, but there was also resistance. (Laughter.)

I agree with everything David said. The other thing—which I think helped to support this, although maybe it gets overrated—is that data became more available, and big datasets like the Census were easier to use.

Historically, when the 1960 Census microdata first became available, Jacob Mincer used it and had an enormous impact. And I think the fact that we were inventorying more data meant that if you wanted to look at a natural experiment – for example, a change in social security benefits which affected one cohort and not another —  the data were out there to do it.

I think another thing — which was a bit new when we did it for our American Economic Review article on the minimum wage — was to go out and collect our own data when we saw the opportunity to study a natural experiment. But in other situations the fact that there were just data out there to begin with, I think, helped this movement.

Card: Yeah. That was the case with my Mariel Boatlift paper. It was written a little bit before we started working on minimum wages. And in that case, it just so happened that the Outgoing Rotation Group files were available starting in 1979. And so, with those files, it was fairly straightforward to do an analysis of what happened in even the Miami labor market.

And in retrospect there’s a new paper by George Borjas flailing around trying to overturn the results in my paper. But in truth, if somebody had been on the ground in Miami in 1980 and gotten their butts in gear, there would have been so much more interesting stuff to do.

For instance, when Hurricane Andrew happened, people actually convinced the CPS to do a survey or supplement, right?

Krueger: Yes.

Card: So, I think not just the profession but maybe even the government has become a little bit more aware of the importance of really strategically moving resources around and collecting data.

And now the administrative data is available for some things as well.

Zipperer: Speaking of data access, how important do you think it is now for work on the research frontier of labor economics, say, to have access to administrative or often-restricted datasets? Is the United States positioned as a leader in this? Or are we paling in comparison to other countries?

Card: Well, we’ve got a lot of disadvantages. One problem is that we don’t have a centralized statistical agency. And so you’ll forever run into someone who wants to do a project and they’re not able to do it because there’s a bureaucratic obstacle to using this particular dataset or that particular dataset.

So for example, matching the LEHD [Longitudinal Employer-Household Dynamics] data to the Census of Manufactures or the census of firms. That would be a natural thing to do, but it’s not that easy to do. If it were all one statistical agency, it would be a lot easier.

And then the laws of the United States—not just the federal but then the state laws—governing access to, say, the UI [unemployment insurance] files. Partially, those are available to the Feds when they’re constructing the LEHD data or other types of datasets, but they’re not available to individual researchers.

Although Alan and I have both used, for example, data from New Jersey. So individual researchers can, in some cases, contact the state and get some help. But that often requires some combination of a person on the other side who actually wants to answer the phone and talk to you, and maybe some resources.

Krueger: Yes, so I would say we’re behind other countries in terms of administrative datasets. We’ve long been behind Scandinavia, which has provided linked data for decades. And we’re now behind Germany, where a lot of interesting work is being done.

And it’s unfortunate because we did lead the world, I would say, in labor force surveys. The rest of the developed world copied our labor force survey and copied our practice of making the data available for researchers to use.

It’s much more cumbersome, bureaucratic, and idiosyncratic here to get access to the administrative data. And I don’t think that’s good for American economists or for studies of the economy.

And it’s going to make it much harder to replicate work going forward. And that’s unfortunate because I think a strength in economics has been the desire to replicate results.

Card: But I think it is absolutely critical for front-line research in the field to have access to some kind of data. Either you get access to administrative data through personal connections like a lot of people do. Or there are certain countries that make it available, like Germany, for instance—I’ve done a lot of work there—or Portugal. Or like Alan has done where he’s used some of the resources available at Princeton to do some specialized surveys and connect the responses with the administrative data. That’s probably the frontier at this point. But that’s not going to be a thing that a typical person can do very easily.

Krueger: And we haven’t caught up in terms of training students to collect original survey data. I’ve long thought we should have a course in economic methods—going back to the New England Journal of Medicine—and cover the topics that applied researchers really rely upon, but typically are forced to learn on their own. Index numbers, for example. Or ways of evaluating whether a questionnaire is measuring what you want it to measure. And survey design, sampling design and the effect of non-response bias on estimates.

These are topics that other social science fields often teach and we just take for granted that students know it. And there’s a lot of work that’s being done, especially in development economics, on implementing randomized experiments, which I think is a net positive. But there’s also a lot of noise being produced. And I think having more training in terms of data collection, survey design, experimental design, would be helpful for our field.

Zipperer: You mentioned randomized experiments. What are your views on the pluses and minuses of what seem to be a variety of different empirical approaches now common in economic research, such as randomized experiments, actually conducting an experiment? Or a quasi-experimental approach, compared to say, a more model-centric approach? Or even more recent kinds of data mining techniques that let the data tell us the research design?

Card: I would say, and I think Alan would probably agree with me, that at the end of the day, you probably want to have all those things if possible. And each of them has some strengths and some weaknesses.

The strength of a randomized controlled trial is the ability to say you’ve got this treatment and this control group and it’s random. So that means that you’re internally consistent. The weakness is that the set of questions you can ask and the context in which you can ask those questions is often very contrived.

So the one extreme is the lab experiment, where you’re getting a bunch of students and you’re asking them to pretend that they’re two sides of a bargaining table or something similar. And by changing the way you set the protocols for those experiments, as people who work in that field are aware, you can get somewhat different answers. To some extent, the criticisms of psychology that you’ve seen played out in the newspapers recently have a lot to do with those difficulties. It’s not just how you read the script but how you set up the lab and everything else that kind of matters.

So the great advantage of a quasi-experiment or natural experiment like the minimum wage is that it’s a real intervention. It’s real firms that are all affected. You get part of the general equilibrium effect. That’s pretty important for understanding the overall story. The disadvantage is that someone can always say, well, it isn’t truly random. And the number of units might be small. So you might only have two states. At some abstract level, there’s only two degrees of freedom there. And so that’s a problem.
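[A sketch of why the small number of units bites, using the kind of placebo-permutation check that has become standard for state-level designs. The data are fabricated; the point is that with only two states there are only two possible assignments, so the smallest attainable p-value is 0.5, while with twenty states the placebo distribution becomes informative:]

```python
# Fabricated illustration: placebo-permutation inference for a
# state-level policy change.
import numpy as np

rng = np.random.default_rng(1)
n_states = 20
changes = rng.normal(0, 1, n_states)   # fabricated state outcome changes
treated = np.zeros(n_states, dtype=bool)
treated[:10] = True                    # first 10 states "adopt the policy"
observed = changes[treated].mean() - changes[~treated].mean()

placebo = []
for _ in range(5000):
    fake = rng.permutation(treated)    # reshuffle which states are "treated"
    placebo.append(changes[fake].mean() - changes[~fake].mean())
p_value = np.mean(np.abs(placebo) >= abs(observed))
print(f"permutation p-value: {p_value:.3f}")
```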

And then there’s a third set of problems, which I’ve alluded to before, which is the types of questions that you can ask. And this is where my former colleague, Angus Deaton, is well-known for his vitriolic criticism of RCTs in development economics.

And I think one interpretation of his concern is the set of questions that can be asked are really so small, relative to the bigger questions in the field. Now that isn’t always the case but that is a concern.

Krueger: Yes, I would just add that no research design is going to be perfect. And you can poke holes in anything. And I think if you believe that existing research is great and we have answered so many questions and we were on the right track before, then one might be hostile towards the growth of randomized controlled trials. But that’s not how I view the earlier state of research.

In my mind, there are two great strengths of randomized experiments. One is that the treatment is exogenous by design. And the other is that it makes specification searching more constrained. It’s pretty clear what you’re going to do. You’re going to compare the treatment group and the control group.

I’ve seen cases where people muck around to generate a result from an experiment. For example, look at Paul Peterson’s work on school vouchers, where he finds no impact overall and kind of buries that, but looks at a restricted sample of African Americans in some cities and argues that we’ve got these great effects from school vouchers, which turn out not to hold up if you actually expand the sample. So I’m not saying that randomized experiments totally tie people’s hands. But I think they do so more than is the case with non-experimental methods applied to observational data.

I’ve become more eclectic over time regarding research method, as I mentioned at the event earlier today. I mean, I was struck when I worked in the White House at the range of questions I would get from the President. And you’d want to do the best job answering them. That was your job.

And there were some cases where there was very little evidence available and there was some modeling which, if you buy the assumptions of the modeling, could answer a lot of questions.

And I think that was probably better than the alternative, which is having a department come in and plead its case based on no evidence or model whatsoever.

So I encourage economists to use a variety of different research styles. What I think on the margin is more informative for economics is the type of quasi-experimental design that David and I emphasize in the book.

But the other thing I would say, which I think is underappreciated, is the great value of just simple measurement. Pure measurement. And many of the great advances in science, as well as in the social sciences, have come about because we got better telescopes or better microscopes, simply better measurement techniques.

In economics, the national income and product accounts are a good example. Collecting data on time use is another good example. And I think we underinvest in learning methods for collecting data–survey data, administrative data, and data that you can collect naturally through sensors and other means.

Card: Yeah. For instance, take the American administrative data that’s collected by the Social Security Administration. If you wanted to do something very simple to that dataset that would make it possible to do a lot more, you could ask each employer who reports their employees’ Social Security earnings to also report the spells that they worked–the starting and ending dates of the job.

That simple kind of information–which could be collected, maybe with some burden, but in many cases almost trivially–would expand the usefulness of that dataset for an amazing range of purposes.

It turns out that’s what they do in other countries. So you can then take an administrative dataset like the Social Security Administration’s, and it suddenly becomes a spell-based dataset, because you’ve got every employment spell that somebody had during the year, automatically, for free.
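[A toy illustration of what Card is describing: once employer reports carry start and end dates, each row becomes an employment spell, and durations and pay rates that annual totals hide become computable. Records and field names here are hypothetical:]

```python
# Fabricated employer reports with hypothetical field names; adding
# start/end dates turns annual earnings totals into employment spells.
import pandas as pd

reports = pd.DataFrame({
    "worker_id":   [1, 1, 2],
    "employer_id": ["A", "B", "A"],
    "earnings":    [30_000, 12_000, 45_000],
    "start": pd.to_datetime(["2015-01-05", "2015-09-01", "2015-03-15"]),
    "end":   pd.to_datetime(["2015-08-21", "2015-12-31", "2015-12-31"]),
})
# each row is now a spell: duration and implied weekly pay become
# computable, which annual totals alone cannot reveal
reports["weeks"] = (reports["end"] - reports["start"]).dt.days / 7
reports["weekly_pay"] = (reports["earnings"] / reports["weeks"]).round(0)
print(reports)
```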

It’s not perfect, but it’s just a quantum improvement. Unfortunately, though, we don’t have anybody saying, well, what could we do to make administrative datasets better and more useful for research?

There are people at the Census Bureau who are kind of working on matching administrative and non-administrative survey-type datasets. But oftentimes that’s way down in the subterranean levels, partially because of the concern that if people knew you can actually take the Numident [Numerical Identification System] file and attach a Social Security number to every piece of paper going through, they would be shocked somehow. So we have quite a problem here.

Zipperer: So, to take another concrete case where measurement seems to be particularly important and related to work that you’ve done on minimum wages, what kind of wage spillover effects do minimum wages generate for people who are, say, earning above a new minimum wage after a minimum wage increase?

There’s a lot of work showing that there are spillover effects and there are questions about how big they are, perhaps due to a measurement error in wages and survey data. What are your views about why these spillover effects seem to exist?

Krueger: Let me make some initial comments. In our book, we discovered spillover effects. When I say we discovered them, I mean we asked in a very direct way: when the minimum wage went from $3.35 to $4.25, and you had a worker who was making $4.50, did that worker get a raise as a result?

And what we found was that a large share of fast food restaurants responded “yes.” We had these knock-on effects or spillover effects.

Interestingly, they tended to occur within firms that were paying below the new minimum wage. You had some restaurants that were already above the new minimum wage. And the increase in the minimum wage had very little effect on their wage scales, which suggests that internal hierarchies matter for worker morale and productivity.

Only to economists is that surprising. The rest of the world knows that the way that they’re treated compared to other people influences their behavior, and the way that they view their job and how likely they are to continue on their job, and so on.

The standard textbook model, by contrast, views workers as atomistic. They just look at their own situation, their own self-interest, so whether someone else gets paid more or less than them doesn’t matter. The real world actually has to take into account these social comparisons and social considerations. And the field of behavioral economics recognizes this feature of human behavior and tries to model it. That thrust was going on, kind of parallel to our work, I’d say.

Now, I also found it interesting that when the minimum wage was at a higher level compared to a lower level, the spillover effects were less common.

So to some extent, the spillover effects are voluntary, and the companies are willing to put up with somewhat lower morale when the minimum wage is at a relatively higher level. And I always found it curious that companies would complain, “It’s not the minimum wage itself, it’s that I’m going to have to pay more than everybody else.” Well, that shows that you’re not behaving the way the model you just cited–the one that says you’ll hire fewer workers–says you should behave. Because you’re voluntarily choosing to pay people, who were working before at a lower wage, a higher wage.

And it also gets you to think, well, maybe the wage from a societal perspective was too low to start with. And the fact that employers are taking into account these spillover effects when they set the starting wage means that from a societal perspective, we could get stuck in an equilibrium where the wage is too low.

Now, I always suspected that the spillover effects kind of petered out when you got 50 cents or a dollar an hour above the new minimum wage. But interestingly, work by David Lee, who was a student of David’s and mine at Princeton, suggests that the spillover effects are pretty pervasive throughout the distribution. And he used a different method, one that I think is quite compelling to look at: What happened around minimum wage increases in states where they really had more of a binding effect?

And he found quite significant spillover effects. So one area where I think the literature has deviated from what we concluded in our book was we thought the spillover effects were there but they were modest. And I would say, if anything, it points to a larger impact of the minimum wage because of the spillovers.

Card: Thinking about why these occur—Laura Giuliano, who attended the conference today, has a very interesting new paper studying a large retailer that has establishments all across the country, where wages were set at the company level.

And the paper shows that employees who were above the minimum wage, but in stores where different fractions of the employees below them got bigger and smaller raises, have differential quit behavior. So it’s really strong direct evidence of this channel that everyone has always thought is probably true.

I think that our understanding of exactly all the forces that determine the equilibrium wage distribution is pretty limited, to tell you the truth.

In the United States, for example, it’s very, very difficult to get an administrative dataset that would say: Here’s everybody that works together at the firm. And let’s treat that, as Alan was saying, as part of the social group. What things do they share? What features of their outcomes seem to be mediated through the fact that they all work for the same employer?

And in the Scandinavian countries, there’s quite a bit of work that’s going in that direction. One really simple example is if a male boss at a firm takes leave when his wife has a baby, then the other employees do too. So that’s just a really simple example of the kind of work that you could do if you had the ability to match these datasets together and show that the people were all at the same firm.

I think outside of economics, in sociology for instance, they’ve always thought that a very important part of everyone’s identity is the firm they work for and who they work with.

And it has to be really influential in how you think about your life and how you organize your time and people you hang out with and so on. But in a standard economics model, that’s all thrown out the window. And for some questions, it might be second-order at best. But for other questions, it seems like it’s first-order.

Zipperer: Do you see that changing somewhat with, for example, your and others’ work on the nature of the firm influencing inequality?

Card: Well, I’m always hopeful. (Laughter.)

Krueger: Yes, I would say the success of behavioral economics is a major development in economics.

Card: And in labor economics especially, I’d say.

There is an interesting thing going on in economics. So, we see job market candidates that come through every year. And there’s sort of two sides of economics in their work simultaneously.

One side is uber-technical. More and more technical stuff every year. You cannot believe the complicated ideas that people pretend individuals are working with when choosing whether to do this or that.

And on the other side, behavioral economics is almost a reaction to that. It says, “Actually, those effects are all third-order. The first-order thing is the concern about how you rank relative to your peers.”

So the great advantage of behavioral economics is that it is saying, “OK. I’m going to try and simplify away from this incredibly complicated thing where your choice about whether to participate in a welfare program is influencing how you’re going to divide up the surplus between you and your husband and whether you’re going to be divorced next year.”

I saw a paper like this last week and I honestly thought, “If I could think this through myself, it would be a miracle.” (Laughter.) I spent my life thinking about that.

Krueger: And you oversimplified it: You’re considering each step along the way, assuming you will make optimal choices each year in the future, and then integrating back to figure out what to do today.

Card: So there are these two strands of economics that are really fighting it out right now in the theory side. And in a way, behavioral economics is much more closely linked to what I think someone earlier today was calling institutional economics. So it’s the idea that people are doing a set of things, maybe rules of thumb and so on, that are influencing how they choose what they do. That maybe we would gain a lot from understanding those things a little bit better.

Zipperer: At the beginning of this discussion, a lot of arrows seemed to point back to Orley Ashenfelter. Could you talk about his influence on your work and maybe the field generally?

Card: Well, for me it’s very strong because he was my thesis adviser and really the reason why I went to Princeton as a grad student. And even as an undergraduate, the two professors who I took courses from that had the most influence on me were students of Orley’s.

So my connection to him goes back a long time. And we wrote a bunch of papers together over the years and advised many students. But also many of the people of my generation of labor economists, like Joe Altonji, John Abowd, or other people like that, were strongly influenced by Orley.

Right from the get-go, he was a very, very strong proponent of “experiments if you can do them” and “collect your own data if you can” and “spend the money if you can.” One time, he and Alan went to the Twinsburg Twins Festival and collected data on twins.

Krueger: One time? Four summers in a row we went to Twinsburg, Ohio, with a group of students. We brought a dozen students. (Laughter.)

And it was actually classic Orley because he spent a lot of time choosing the restaurant for dinner, a lot of time chatting with some people, and not too much time collecting data, as I recall.

I read Orley’s work when I was an undergraduate. And a big part of the attraction for me to come to Princeton was Orley, and then David was just really a bonus who I ended up working with so closely for a decade.

And I think Orley kind of set the tone for the Industrial Relations Section. He had done work on the minimum wage with Bob Smith at Cornell on non-compliance–how much of it there was–which made us think that, if you really want to look for the effects of the minimum wage, you need to look in places where it’s binding and companies are complying.

He had a healthy dose of skepticism about the research that had come from the National Minimum Wage Study Commission–which sometimes he called, as I recall, the National Minimum Study Commission.

Card: Minimum Study Wage Commission.

Krueger: The Minimum Study Wage Commission. (Laughter.)

Card: You can quote me on that.

Krueger: We’re just quoting him. (Laughter.) And he used to like to tell a story, which I remember vividly, where he met with some restaurant group when he worked, I think, at the Labor Department. And they said, we’ve got a problem in our industry: The minimum wage is too low and we can’t get enough workers.

And that’s inconsistent with the view that the market determines the wage, that you get all the workers you want at the going wage, and that you can raise the wage if you can’t get enough workers. And I think he was always sympathetic to the famous passage in The Wealth of Nations where Adam Smith said that employers rarely get together without the conversation turning to how to keep wages low–that there’s a tacit and constant collusion among employers. So I think he kind of set a tone where it was acceptable if you found results that went against the conventional wisdom.

And I came from an environment where even Richard Freeman, who was at the time a somewhat heterodox economist, had written that there’s a downward-sloping demand curve for low-wage workers and that a higher minimum wage reduces employment–not by all that much, but you get the conventional effects. So that was my background coming in.

Zipperer: Well, thanks very much. This was a great discussion.

Krueger: Sure.

Card: Sure.

Zipperer: Thank you.


Must-read: Nick Bunker: “Why is U.S. labor market fluidity drying up?”

Must-Read: Nick Bunker: Why is U.S. labor market fluidity drying up?: “The U.S. labor market is a far less dynamic place than it was 30 years ago…

…Workers today are less likely to get a job while unemployed, move into unemployment, switch jobs, or move across state lines. You’d think just the opposite would be true given some of the discussion about our rapidly changing digital economy, but the data show what the data show. Even still, the reason—or reasons—for the decline in fluidity aren’t known…. Molloy… Smith… Trezzi… and… Wozniak… find that overall fluidity in the U.S. labor market has fallen between 10 percent and 15 percent since the early 1980s. But for some of the individual flows, the decline has been as large as one-third…. The authors find no evidence… that the gain from switching jobs has declined…. While the authors do find some speculative evidence that declines in fluidity are related to declines in social trust, the results aren’t particularly strong…. After their analysis, it seems more likely than not that the decline in labor market fluidity is harmful…

Must-read: Mark Muro: “Adjusting to Economic Shocks Tougher”

Must-Read: I gotta go back and reread Blanchard and Katz on regional adjustment in the early 1990s. How much of it is that adjustment was faster back then? How much of it is that the shock they study–to LA-area aerospace employment–was different? How much of it is that back then aggregate-demand policy was supportive of adjustment?

Mark Muro: Adjusting to Economic Shocks Tougher: “In the last six months a burst of new empirical work…

…much of it focused on the region-by-region aftermath of the Great Recession—is shredding key aspects of the standard view and suggesting a much tougher path to adjustment for people and places…. Joe Parilla and Amy Liu, David Autor, David Dorn, and Gordon Hanson focus on the ‘stunningly slow’ adjustment of exposed local labor markets to the ‘China shock’ and argue that the story challenges ‘much of the received empirical wisdom about how labor markets adjust to trade shocks.’

Autor and his colleagues do not see much evidence at all of a frictionless labor market in which the rapid mobility of workers across firms, industries, and regions guarantees rapid adjustment to new realities. Instead they see a series of slow-moving crises in particular metro areas. ‘Switching costs’ and other frictions inhibit workers’ ability to shift quickly to new, less-threatened firms or industries.  Many workers never recoup lost earnings and depend more on transfer payments. Little offsetting growth is detected in industries not exposed to the shock…

Must-read: James Kwak: “Profits in Finance”

Must-Read: It used to be that we collectively paid Wall Street 1% per year of asset value–which was then some 3 years’ worth of GDP–to manage our investment and payments systems. Now we pay it more like 2% per year of asset value, which is now some 4 years’ worth of GDP: that is, finance’s annual take has risen from roughly 3% of GDP to roughly 8%. My guess is that, at a behavioral-finance level, people “see” commissions but do not see either fees or price-pressure effects.

Plus there is the cowboy-finance-creates-unmanageable-systemic-risk factor, plus the corporate-investment-banks-have-no-real-risk-managers factor. We are paying a very heavy price indeed for having disrupted our peculiarly regulated and oligopoly-ridden post-Great Depression New Deal financial system:

James Kwak: Profits in Finance: “Expense ratios on actively managed mutual funds have remained stubbornly high…

…Even though more people switch into index funds every year, their overall market share is still low—about $2 trillion out of a total of $18 trillion in U.S. mutual funds and ETFs. Actively managed stock mutual funds still have a weighted-average expense ratio of 86 basis points. Why do people pay 86 basis points for a product that is likely to trail the market, when they could pay 5 basis points for one that will track the market (with a $10,000 minimum investment)? It’s probably because they think the more expensive fund is better. This is a natural thing to believe. In most sectors of the economy, price does correlate with quality, albeit imperfectly…. And this is one area where I think marketing does have a major impact, both in the form of ordinary advertising and in the form of the propaganda you get with your 401(k) plan…. The persistence of high fees is partly due to the difficulty of convincing people that markets are nearly efficient and that most benchmark-beating returns are some product of (a) taking on more risk than the benchmark, (b) survivor bias, and (c) dumb luck.
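The 86-versus-5 basis-point gap compounds. A back-of-the-envelope sketch of the fee drag alone, assuming a $10,000 stake, 30 years, and an illustrative 6% gross annual return (the return assumption is mine, not Kwak’s):

```python
# Back-of-the-envelope fee drag: 86bp active vs. 5bp index expense
# ratio on $10,000 over 30 years at an assumed 6% gross annual return.
principal, gross, years = 10_000, 0.06, 30
for fee_bp in (86, 5):
    net = gross - fee_bp / 10_000          # expense ratio in decimal
    final = principal * (1 + net) ** years
    print(f"{fee_bp:>2} bp fund: ${final:,.0f}")
```

On those assumptions the high-fee fund ends up roughly a fifth smaller–and that is fee drag alone, before any benchmark-trailing performance.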

Must-read: Paul Krugman: “In Hamilton’s Debt”

Must-Read: Paul Krugman: In Hamilton’s Debt: “Hamilton’s pathbreaking economic policy manifestoes…

…his 1790 ‘First Report on the Public Credit’… remains amazingly relevant today…. Why did Hamilton want to take on those state debts? Partly to establish a national reputation as a reliable borrower… give wealthy, influential investors a stake in the new federal government…. Beyond that, however, Hamilton argued that the existence of a significant, indeed fairly large national debt would be good for business. Why? Because:

in countries in which the national debt is properly funded, and an object of established confidence, it answers most of the purposes of money.

That is, bonds issued by the U.S. government would provide a safe, easily traded asset that the private sector could use as a store of value, as collateral for deals, and in general as a lubricant for business activity. As a result, the debt would become a ‘national blessing’…. This argument anticipates, to a remarkable degree, one of the hottest ideas in modern macroeconomics: the notion that we are suffering from a global ‘safe asset shortage.’ The private sector, according to this argument, can’t function well without a sufficient pool of assets whose value isn’t in question–and for a variety of reasons, there just aren’t enough such assets these days. As a result, investors have been bidding up the prices of government debt, leading to incredibly low interest rates. But it would be better for almost everyone, the story goes, if governments were to issue more debt, investing the proceeds in much-needed infrastructure even while providing the private sector with the collateral it needs to function. And it’s a very persuasive story to just about everyone who has looked hard at the evidence.

Unfortunately, policy makers won’t do the right thing, largely because they keep listening to fiscal scolds…. Alexander Hamilton knew better. Unfortunately, Hamilton isn’t around to help counter foolish debt phobia. But maybe reminding policy makers of his wisdom is one way to chip away at the wall of folly that still constrains policy. And having his face out there every time someone pulls out a ten can’t hurt, either.
