Must-Read: Scott Lemieux: Why Did Obama Do so Well at the Supreme Court?

Must-Read: Scott Lemieux: Why Did Obama Do so Well at the Supreme Court?: “The last week of the Supreme Court’s last full term of the Obama era…

…was a microcosm of his administration’s relationship with the Roberts Court…. In a one-sentence opinion, the Supreme Court left in place a lower court ruling that the president’s DAPA immigration program… was illegal, meaning that it will almost certainly not be implemented before President Obama leaves office. Still, the news… was good. A surprising majority opinion upheld the University of Texas’s affirmative action program, and a somewhat less surprising majority opinion struck down Texas’s draconian abortion statute…. Looking at the Supreme Court’s major decisions during the Obama administration as a whole, the story is similar. The last time a Democratic president successfully passed an ambitious progressive agenda with a Republican-controlled Supreme Court, the result was a constitutional crisis…. [But] the Roberts Court left Obama’s domestic agenda mostly intact, while delivering the Democratic coalition some major victories it would not have been able to win any other way, most notably on abortion and LGBT rights.

One interpretation of the Court’s behavior is that it is isolated from the pressures that have caused the other institutions of American politics to become cripplingly polarized. This interpretation, however, is probably wrong. The relative moderation of the Roberts Court is likely the last gasp of the previous partisan order…. There have been plenty of… major conservative judicial victories during the Obama era, most notably the gutting of the most important civil rights statute since Reconstruction in the 2013 decision Shelby County…. Even worse than the result of the case was the shoddiness of Roberts’s opinion…. Since then, many Republican-controlled states have wasted little time passing discriminatory voting restrictions, undercutting the Court’s conclusion that the strong enforcement of the Voting Rights Act was no longer necessary. While the Roberts Court has permitted the states to engage in a wide array of vote suppression tactics on the one hand, it has prevented state and federal governments from passing campaign finance restrictions on the other. And in lower-profile cases, the Court has consistently ruled against the interests of consumers and the rights of employees when interpreting federal law….

With the admittedly crucial exception of Sebelius, the liberal victories of the Roberts Court were due to one man: Anthony Kennedy…. Since early in the Nixon administration, the median vote on the Court on the most politically salient issues has been a Republican, but a moderate, country-club Republican: Potter Stewart, Lewis Powell, Sandra Day O’Connor, and now Kennedy. The issue going forward is that this kind of Republican is rapidly going extinct…. Future Republican nominees are going to be in the mold of Samuel Alito and Roberts….

The Supreme Court has historically been a centrist institution… [because] elites—from whose ranks Supreme Court justices are generally chosen—tend to have less polarized views than ordinary members of the party…. A decade from now, the Supreme Court will almost certainly not be controlled by either a moderate Republican like Anthony Kennedy or a heterodox liberal like Byron White…. The median vote on the Court will almost certainly be a conservative in the mold of Alito or Roberts, or a liberal in the mold of Ruth Bader Ginsburg…. This polarization is not symmetrical…. Alito is further to the right than Ginsburg is to the left…. Could anything stop the Court from becoming as polarized as the rest of the political order? If current party polarization persists, probably not…. In the short term… whether the Court will be controlled by a liberal Democratic faction or a conservative Republican one… means that the presidential and Senate elections in November will be high-stakes contests indeed.

Have mutual funds boosted CEO pay in the United States?

A new working paper finds a relationship between common ownership of an industry and the kind of compensation contracts that chief executives receive

Understanding the rise in CEO pay is important for understanding the rise of top-end inequality in the United States. There are a number of possible explanations for why U.S. corporate executives make so much more relative to the rest of the workforce than they did back in the late 1970s. Firms are larger today, increasing the importance of good management at the top. Declining tax rates for those at the very top also induce executives to bargain for higher salaries. Or, as a new paper argues, perhaps increased common ownership of public firms by way of the mutual fund industry leads to higher executive pay.

The new paper is by economists Miguel Anton of the Universidad de Navarra, Florian Ederer of Yale University, Mireia Gine of the University of Pennsylvania, and Martin Schmalz of the University of Michigan. They build on an argument that increased common ownership of public firms is creating problems for competition in the U.S. economy, increasingly so because the ownership of public firms is becoming more and more concentrated in the hands of mutual funds. These funds buy stakes in a number of firms in order to diversify their risk. This common ownership means that a fund often owns shares in competing firms in the same industry. These passive owners have an incentive for these firms not to compete against one another: the lack of competition drives up total profits across the industry, which in turn increases the returns to these diversified mutual fund investors.

Previous work shows how increased common ownership in the airline industry led to higher prices. The new paper argues that increased common ownership also results in higher CEO pay that is less tied to competition. The four economists find that funds with large common ownership stakes want to see less competition in the markets in which they have invested, so industries with higher common ownership are more likely to have managers whose contracts include fewer incentives to compete and more unconditional pay (that is, pay not based on performance).
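The incentive logic here can be shown with back-of-the-envelope arithmetic. The sketch below uses made-up numbers, not figures from the paper, to show why an owner holding only one firm prefers aggressive competition while a fund holding both firms in a duopoly prefers softer competition:

```python
# Illustrative sketch (toy numbers, not from the paper): why a diversified
# owner may prefer that its portfolio firms compete less aggressively.

def portfolio_payoff(stakes, profits):
    """Owner's payoff = sum of ownership stake times each firm's profit."""
    return sum(s * p for s, p in zip(stakes, profits))

# Two hypothetical scenarios for a duopoly of firms A and B:
aggressive = [10.0, 2.0]   # A competes hard and wins share from B
soft = [8.0, 8.0]          # both compete softly; total industry profit is higher

# An owner holding only firm A prefers aggressive competition...
only_a = [1.0, 0.0]
assert portfolio_payoff(only_a, aggressive) > portfolio_payoff(only_a, soft)

# ...but a fund holding equal stakes in both firms prefers soft competition,
# even though firm A's own profit is lower in that scenario.
common = [0.5, 0.5]
assert portfolio_payoff(common, soft) > portfolio_payoff(common, aggressive)
```

A manager paid on own-firm performance would pick the aggressive scenario; a manager paid unconditionally, or on industry performance, would not, which is the contract the common owner prefers.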

Looking at the data, the co-authors show a relationship between common ownership of an industry and the kind of compensation contracts that chief executives receive. In more commonly owned industries, executive compensation is tied less to the performance of the specific firm and more to the performance of rivals, meaning it is more closely tied to the overall performance of the industry. More common ownership is also correlated with higher unconditional pay for executives, again showing less of a connection between executive pay and performance. The authors additionally use variation in ownership from a mutual fund scandal to argue that these correlations reflect a causal relationship between compensation and common ownership.

This paper draws another connection between declining competition in the U.S. economy and rising income inequality. Similar to other research connecting increased economic rents and higher inter-firm inequality—increasing market power for firms resulting in “superfirms” that pay relatively well—the findings in this new paper also suggest a link between competition and income inequality. These trends might also negatively affect economic growth, a clear and troubling possibility.

Must-Read: Pseudoerasmus: Did Inequality Cause the First World War? Contra Hobson-Lenin-Milanovic

Must-Read: Pseudoerasmus: Did Inequality Cause the First World War? Contra Hobson-Lenin-Milanovic: “In a small section in his new book, Branko Milanovic argues that the First World War…

…was ultimately caused by income & wealth inequality within the belligerent countries… John A. Hobson, Rosa Luxemburg, and Lenin…. High domestic inequality => ‘underconsumption’ by the masses & ‘surplus’ savings by the elites => capital exports, i.e., search for overseas outlets for investment => the ‘scramble for colonies’ & imperialism => (a major cause of the) WAR…. But… Ferguson’s The Pity of War has many problems, but one thing he’s very right about is the war that never broke out in the late 19th century between Britain and France, or between Britain and Russia…. Annoyingly, the Great Powers kept on resolving colonial disputes peacefully… too much European compromise and cooperation….

Furthermore, the ‘financier parasites’ of Hobson and Lenin had simply the wrong interest… feared rivalry… for the very good and rational reason that they had everything to lose from it…. The colonial disputes which Britain took most seriously and was willing to go to war over–Egypt (Fashoda), South Africa (German tensions over Transvaal), Afghanistan (Russian relations)–were all related in some way to monopolising maritime access, and eliminating all traces of threat, to India…. All else… was largely open to negotiation. Except, of course, for the naval rivalry in the North Sea. What actually soured Anglo-German relations was that Germany’s naval programme was perceived as an existential threat…. German dreadnoughts just a ‘few hours from the English coast’ were somewhat more important than Samoa or the Caprivi Strip….

Germany’s rulers believed the country’s political standing and national prestige was incommensurate with its sudden and dramatic rise as an economic superpower…. Imagine the chafing if Taiwan, and not the PRC, still represented China on the UN Security Council…. Who actually took the decision to go to war in Germany[?]… ‘Structural factors’ still require some kind of mechanism exerting pressure on the actual actors…. Mark Harrison…. “No country went to war for commercial advantage. Business interests favoured peace in all countries. Public opinion was considered mainly when the leading actors worried about the legitimacy of actions they had already decided on. If capital and labour had been represented in the Austrian, German, and Russian cabinets, there would have been no war.”

The capitalist bourgeoisie did not have the final power in Germany (let alone Austria or Russia). And the small and specific group of decision-makers is identifiable…. Fritz Fischer… [argued] that Germany had already taken the decision to go to war in 1912, based on a high-level meeting that year which seemed eerily to reflect much of German behaviour in July 1914…. In all three [of] Germany, Austria, and Russia, a feudal-agrarian-military elite governed over an increasingly bourgeois-industrial society (but especially in Germany). Those decision-makers held the unilateral power to go to war. And they took the decision unaccountably. When it came to matters of war, it’s not even clear that the East-Elbian Prussian Junker class really cared about the opinions of the country’s industrial and banking magnates.

I must confess I am considerably more sympathetic to Hobson (if not to Luxemburg and Lenin). As I read Hobson, his argument goes thus: (1) Income inequality leads to underconsumption–which means that investment and government purchases must be a high share of national income in order for anything like full employment to be maintained. (2) Governments that do not maintain near-full employment most of the time are likely to fall. (3) Governments that do maintain near-full employment most of the time are likely to persist in office. (4) Imperialist governments that spend public money on overseas wars for vent-for-surplus colonies are likely to have higher shares of exports, investment, and government purchases in national income. (5) Militaristic governments that seek military advantage over other European powers are likely to have even higher shares of government purchases in national income. (6) Thus the political-economic logic of underconsumption puts pressure on the political system to produce more high politicians in office who like to build, play with, and ultimately use their military toys.
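Step (1) rests on the national-income accounting identity Y = C + I + G + NX. A toy calculation, with hypothetical round numbers rather than any historical estimates, makes the mechanism concrete: if inequality depresses consumption's share of potential output, then investment, government purchases (including military spending), and net exports must expand to fill the gap if full employment is to hold:

```python
# Toy arithmetic (hypothetical numbers) for the underconsumption step of the
# Hobson argument. Full employment requires total spending C + I + G + NX to
# equal potential output; lower consumption forces the other components up.

potential_output = 100.0

def spending_gap(C, I, G, NX):
    """Shortfall of total spending below potential output."""
    return potential_output - (C + I + G + NX)

# Low inequality: consumption is 80% of potential; modest I + G + NX suffices.
assert spending_gap(C=80.0, I=12.0, G=6.0, NX=2.0) == 0.0

# High inequality: consumption falls to 70% of potential; full employment now
# requires ten more points of investment, government purchases, or net exports.
assert spending_gap(C=70.0, I=16.0, G=10.0, NX=4.0) == 0.0
```

In Hobson's telling, those extra points of G and NX are exactly the military spending and capital exports that imperialist and militarist governments supply.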

This seems to me to be not implausible, in contrast to the Lenin-Luxemburg version of the argument, which I agree is very implausible.

Must-Reads: July 6, 2016



Must-Read: Kevin Drum: NAFTA and China Aren’t Responsible for Our Steel Woes

Must-Read: Kevin Drum: NAFTA and China Aren’t Responsible for Our Steel Woes: “Donald Trump stood in front of a pile of scrap metal yesterday in Pittsburgh and blasted both NAFTA and the accession of China into the World Trade Organization…

…He was positively poetic about how his trade policies would affect the steel industry…. There’s no question that the American steel industry has suffered over the past three decades, thanks to cheap steel imports from other countries. But this began in the 1980s and had almost nothing to do with either NAFTA or China…. Do you see a sudden slump in US steel production after NAFTA passed? Or after China entered the WTO? Nope…. It started with Japan and South Korea in the ’80s and later migrated to other countries not because of trade agreements, but because Japan and South Korea got too expensive. And it’s not as if no one noticed this was happening. Ronald Reagan tried tariffs on steel and they didn’t work. George H.W. Bush tried tariffs again. They didn’t work. George W. Bush tried tariffs a third time. No dice.

For all his bluster, when it came time for Trump to lay out his plan to ‘bring back our jobs,’ it was surprisingly lame. It was seven points long but basically amounted to withdrawing from the TPP and getting tough on trade cheaters. This would accomplish next to nothing…. The bottom line is simple: If we want access to markets overseas, we have to give them access to our markets. Donald Trump… [could be] promising to build a huge tariff wall around the entire country. He’s not willing to do that because even he knows it would trash the US economy. So instead he blusters and proposes a toothless plan. Sad.

Must-Read: Narayana Kocherlakota: Three Antidotes to the Brexit Crisis

Must-Read: Correct, IMHO, from the very sharp Narayana Kocherlakota. Now perhaps his successor Neel Kashkari and the other Reserve Bank presidents not named Charlie Evans might give him some backup?

The one thing I do not like is Narayana’s “Granted, there is a risk that such steps will spook markets by signaling that the Fed is concerned about the state of the U.S. financial system.” That sentence seems to me to misread market psychology completely. As I see it–and as the people in markets I talk to say–right now markets are fairly completely spooked by their belief that the Federal Reserve is unconcerned, and they take that lack of concern as a sign of Federal Reserve detachment from reality. Narayana’s following sentences seem to me to be highly likely to be the right take: “I’d say the markets are already pretty spooked” and “By demonstrating that it is paying attention to these obvious signals, the Fed can help to bolster confidence in its economic management”.

Let me stress that, at least from where I sit, that confidence in Federal Reserve economic management is, right now, lacking.

The people I talk to in financial markets tend to say that they believe markets took Stan Fischer’s January 5 interview to be something of a wake-up call with respect to Fed groupthink:

Liesman: When I looked at where the market is priced, the market is priced below where the Fed median forecast is. Quite a bit. Two rate hikes really, if you count them in quarter points. Does that concern you that the market needs to catch up with where the Fed is or is it a matter of you think the Fed needs to recalibrate to where the market is?

Fischer: Well, we watch what the market thinks, but we can’t be led by what the market thinks. We’ve got to make our own analysis. We make our own analysis and our analysis says that the market is underestimating where we are going to be. You know, you can’t rule out that there is some probability they are right because there’s uncertainty. But we think that they are too low.

For eight straight years now the Federal Reserve has been more optimistic than the markets. And for eight straight years now the markets have been closer to being correct. And yet the Federal Reserve still believes that it “can’t be led by what the market thinks” and has “got to make our own analysis”? Why?

Narayana Kocherlakota: Three Antidotes to the Brexit Crisis: “The Fed should ensure that banks have enough loss-absorbing equity capital…

…not allow them to return equity to shareholders…. The measure should apply to all banks, so markets won’t read it as a signal about individual institutions’ relative strength. Second, there’s a risk that investors’ flight to safe assets could develop into a broader credit freeze. To mitigate this, the Fed should lower its short-term interest-rate target…. Finally, the Fed should consider reviving the Term Auction Facility, which allows banks to borrow funds from the central bank with less of the stigma…. Granted, there is a risk that such steps will spook markets by signaling that the Fed is concerned about the state of the U.S. financial system. That said, as an outsider who gets much of his information from Twitter, I’d say the markets are already pretty spooked. By demonstrating that it is paying attention to these obvious signals, the Fed can help to bolster confidence in its economic management. One important lesson of the last financial crisis is that the guarantors of stability must be proactive if they want to be effective. It’s time for the Fed to put that lesson into practice.

The misplaced debate about job loss and a $15 minimum wage

Overview

The leading criticism of the “Fight for $15” campaign to raise the federal minimum wage to $15 an hour is the presumed loss of jobs. Employers, the argument goes, would eliminate some workers or reduce their hours in the short-term, and in the longer run, further automate their operations in order to ensure that they will need fewer low-wage workers in the future. For many leading minimum wage advocates, even a gradually phased-in $12 wage floor would take us into “uncharted waters” that would be “a risk not worth taking.”

On the other side is the long historical concern with making work “pay,” even if that means some job loss. In this view, the most important consideration is the overall employment impact on low-wage workers, after accounting for the additional job creation that will come with higher consumer spending from higher wages, which will almost certainly at least offset any direct initial job losses. And even more importantly, what really matters in this view are the likely huge overall net benefits of a large increase for minimum-wage workers and their families.


If we are serious about job opportunities for low-wage workers, then there are many effective ways to compensate those who lose their jobs, ranging from expansionary economic policy to increased public infrastructure spending, more generous unemployment benefits, and, above all, public-sector job creation. A related issue is whether it makes moral, economic, and fiscal sense to maintain a low federal minimum wage and then ask taxpayers to subsidize the employers of low-wage workers by propping up the incomes of poor working families only via means-tested programs such as the Earned Income Tax Credit and supplemental nutrition assistance.

The debate has been, effectively, a stalemate, with the federal minimum wage set at extremely low levels ($7.25 since 2008) by both historical and international standards. Part of the explanation for our persistent failure to establish a minimally decent wage floor at the federal level has been the way the discourse has been framed—even by many of the strongest advocates for substantially higher minimum wage.

In recent years, the best evidence shows that moderate increases from very low wage floors have no discernible effects on employment, which has helped make the case for substantial increases in the minimum wage. But the very strength of this new evidence—research designs that effectively identify employment effects at the level of individual establishments—has contributed to the adoption of a narrow standard for setting the “right” legal wage floor, defined as the wage that previous research demonstrates will pose little or no risk of future job loss, anywhere. For all sides, the central question has become: Whose estimate of the wage threshold at which there are no job losses whatsoever is the most credible?

Some economists, for example, point to existing evidence that the effects on employment when the minimum wage is increased within the $6-to-$10 range are minimal. Yet other researchers continue to argue, with credible statistical support, that sizable increases within this $6-to-$10 range do cause at least some job loss in some establishments in some regions, even if limited to high-turnover teenagers.

But there certainly is no evidence that can be relied upon to identify the no-job-loss threshold for a legal wage floor that would apply to the entire United States—the wage below which it is known that there is little or no risk of job loss anywhere, and above which there is known to be a risk of job loss that is high enough to be not worth taking. The only truly reliable way to do this would be to regularly increase the federal minimum wage while carefully monitoring the employment effects, much as the United Kingdom’s Low Pay Commission has done for the minimum wage that was instituted there in 1999.

There are different stakeholders in this debate. On one side are academic economists who care deeply about empirical confirmation of price-quantity tradeoffs, along with restaurant owners who care just as much about their profit margins. On the other side are workers and their advocates who want the establishment of a minimum living wage. Given the many parties with a big stake in the outcome, relying on evidence-based criteria about job loss for setting the wage floor all but guarantees unresolvable controversy.

The methodological double bind in setting the minimum wage

Then there is the methodological problem, a classic case of “Catch-22.” Because the identification of the wage at which there is expected to be zero job loss must be evidence-based, and because that evidence can come only from wage floors that have actually been tried, there is no way to establish the higher nationwide wage floors necessary for empirical tests. Other places have enacted higher minimum wages—think Santa Monica, Seattle, New York state, France, Australia, or the United Kingdom—but they would face the same problem if they relied exclusively on zero job loss as the criterion for the proper wage floor. In practice, high minimum wage locations have relied on other criteria when making the political choice to set the legal wage, namely a wage that more closely approximates a minimum living wage than what the unregulated market generates.

In practical terms, local and state governments’ past reliance on statistical tests from other jurisdictions not only means that we must assume those results are directly applicable (why would evidence from Seattle, New York state, or the United Kingdom be a reliable guide to the effects at the level of the entire U.S. labor market?), but also requires that places imposing a no-job-loss standard always lag far behind the leaders, effectively condemning them to setting the wage floor well below the actual wage that would start generating job loss. In short, the no-job-loss criterion cannot stand on its own as a coherent and meaningful standard for setting the legal wage floor, and, by relying on old statistical results from other places, it ensures a wage that is too low on its own terms.

Ignoring the net benefits of raising the minimum wage

When the criterion for raising the minimum wage is concerned only with the cost side of an increase, the costs of some predicted job losses are all that matters. If the wage floor is set above the no-job-loss level, what kind of jobs will be lost? Who will be the job losers? What alternatives were available to them? These are the kinds of questions that must be asked to determine the costs of minimum wage related job losses. But there are obviously benefits to raising the legal wage floor. Shouldn’t they be counted and compared to the costs?

Those benefits are evident directly for the workers receiving wage increases as a result of a rise in the minimum wage, either because they are earning between the old minimum wage and the new one (say, between $7.25 and $15) or because they earn a bit above the new minimum wage—because employers increase wages to maintain wage differentials among workers by skill or seniority. The benefits also are evident for taxpayers–with a much higher minimum wage there would be less need to rely on means-tested redistribution to increase the after-tax and benefit incomes of working families.

Forgetting the ethical and efficiency arguments for raising the minimum wage

Relying on the no-job-loss criterion for setting an appropriate federal wage floor entirely ignores the main traditional justification for the minimum wage: the moral, social, economic, and political benefits of a much higher standard of living from work for tens of millions of workers. On both human rights and economic efficiency grounds, workers should be able to sustain at least themselves, and ideally their families. And on the same grounds, it is preferable for them to do so from their own work rather than from either tax-based public spending or private charity.

It is hard to put this argument for a living wage better than Adam Smith did several centuries ago:

A man must always live by his work, and his wages must at least be sufficient to maintain him. They must even upon most occasions be somewhat more; otherwise it would be impossible for him to bring up a family…. No society can surely be flourishing and happy, of which the far greater part of the members are poor and miserable.

A public policy straightjacket

Determining a suitable federal minimum wage based solely on a zero-job-loss rule is a public policy straightjacket that would effectively rule out any significant raise of the wage floor above the one that already exists. From a historical perspective, strict adherence to such a policymaking criterion would also have made it impossible to ban child labor (job losses!) or to adopt many critical environmental and occupational health and safety regulations. It would also foreclose any consideration of policies like paid family leave, which exists in every other affluent country.

Conclusion

Breaking out of this public policy straightjacket requires policymakers to rethink their criteria for raising the minimum wage. It also means that economists must shake off their fear of challenging the prevailing orthodoxy—a no-immediate-harm-to-anyone way of thinking—and see the longer-term benefits to millions of workers. It is estimated that the move to a $15 minimum wage by both California and New York state will directly raise the pay for over one-third of all workers.

If we really care about maximizing employment opportunities then we should not hold a decent minimum wage hostage to the no-job-loss standard. Rather, we should put a much higher priority on full-employment fiscal and monetary macroeconomic policy, minor variations of which would have massively greater employment effects than even the highest statutory wage floors that have been proposed.

But it is also well within our capabilities to counter any job loss that can be linked to the adoption of a minimum living wage with what the prominent economist J. B. Clark in 1913 called “emergency relief,” such as extended unemployment benefits, education and training subsidies, and public jobs programs. A minimum living wage combined with other policies common throughout the affluent world, such as meaningful child cash allowances, would put the United States back among the other rich nations that promote work incentives while all but eliminating both in-work poverty and child poverty. It would put the country into waters that most other affluent nations have already charted and are navigating.

—David Howell is a professor of economics and public policy at The New School in New York City. This note reflects and builds on the material that appears in the working paper published by the Washington Center for Equitable Growth, “What’s the Right Minimum Wage? Reframing the Debate from ‘No Job Loss’ to a ‘Minimum Living Wage,’” co-authored with Kea Fiedler and Stephanie Luce.


The importance of childhood education and the birth lottery for U.S. innovation

A new paper looks at the role of education in the innovation gap.

The U.S. economy could use more innovation. Despite the proliferation of apps and concerns about a robot takeover of the labor market, U.S. productivity growth just isn’t very strong these days. The exact reasons for weak productivity growth aren’t fully understood by economists, but a jump in innovation would seem likely to help increase it.

Getting such a boost, of course, is easier said than done. Most policies that seek to increase innovation in the United States are focused on getting as much out of current innovators as possible. But because the first seeds of new innovation sprout in many different ways, perhaps policymakers should instead be focused on creating more innovators. New research shows that the key to increasing the ranks of innovators may reside in the childhoods of potential innovators.

The new research is from a working paper by Alex Bell of Harvard University, Raj Chetty of Stanford University, Xavier Jaravel of Harvard, Neviana Petkova of the U.S. Department of the Treasury, and John Van Reenen of the Massachusetts Institute of Technology. The paper looks at the backgrounds of innovators, as measured by individuals who filed for a patent between 1996 and 2014. (An assumption here is that patents are a good indicator of innovation.) The authors have access to administrative data collected by the federal government on these patent holders that lets the economists examine the patent holders’ family backgrounds.

Unsurprisingly, children from low-income backgrounds are much less likely to end up getting a patent than children from high-income backgrounds. Patent holders also are more likely to be white and male. What explains these two gaps? The economists seek to answer this question by looking at the standardized test scores for all individuals who passed through the New York City public school system between 1989 and 2009. The test scores run from the 3rd grade through the 8th grade. By linking this data with the data on patent holders, the economists can see how much of this innovator gap is related to a test score gap.

The co-authors find that only 30 percent of the difference in patenting rates between low- and high-income children can be explained by the gap in math test scores in the 3rd grade. The amount of the gap for women and children of color explained by test scores is even smaller, at 3 percent and roughly 10 percent, respectively. So education, as measured by standardized test scores, clearly plays a role in patenting later in life, at least when it comes to explaining the income gap. But a significant portion of the income gap can’t be explained by educational differences.
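What it means for test scores to "explain" a share of the gap can be illustrated with a minimal sketch. The example below uses made-up data and a simple pooled linear fit, not the authors' actual method: it predicts patenting from test scores alone and asks how much of the raw patenting gap between income groups that prediction reproduces:

```python
# Toy illustration (hypothetical data, not the paper's): how much of the
# high-income/low-income patenting gap do test scores alone "explain"?

def mean(xs):
    return sum(xs) / len(xs)

def linear_fit(pairs):
    """Ordinary least squares slope and intercept for (x, y) pairs."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# (test_score, patented) pairs by family income group -- made-up numbers.
high_income = [(70, 1), (50, 1), (50, 1), (30, 0)]
low_income = [(60, 1), (40, 0), (40, 0), (20, 0)]

# Raw gap in patenting rates between the two groups.
raw_gap = mean([p for _, p in high_income]) - mean([p for _, p in low_income])

# Predict patenting from scores alone, using a fit pooled across both groups,
# then see how big a gap the groups' score difference alone would produce.
slope, _ = linear_fit(high_income + low_income)
score_gap = mean([s for s, _ in high_income]) - mean([s for s, _ in low_income])
predicted_gap = slope * score_gap

# In this toy data, scores account for only part of the patenting gap, just
# as 3rd-grade math scores account for about 30 percent in the actual study.
explained_share = predicted_gap / raw_gap
assert 0 < explained_share < 1
```

The remainder of the gap is whatever correlates with family income but not with measured test scores, which is the residual the authors attribute to childhood environment.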

So that leaves the advantages of growing up in rich households. Those advantages include wealthy family and community connections, better educational opportunities, and exposure to people working in innovative fields.

Current U.S. innovation policy is focused on getting more out of the innovators we already have through measures such as tax credits for research and development, which increase innovation on the intensive margin. But policymakers also should be focused on increasing innovation on the extensive margin: raising more children to become innovators. This would mean less focus on tax policies that reward innovation in the short term and more on policies that can help expand the pool of next-generation innovators to include more women, people of color, and those from low-income backgrounds. The upshot might be not only more economic growth but also more equitable growth.

Must-Read: John Holbo: Podcasts I Just Listened to

Must-Read: John Holbo: Podcasts I just listened to: “I just listened to a Federalist podcast interview with Randy Barnett…

…Not my cup of tea, usually, but I have an interest in Barnett’s stuff. The guy really has a bug in his ear about John Roberts. A couple months back he was blaming Roberts for Trump and I was like–fine, fine, you lost your Obamacare case. You are a bit bitter, venting steam. But he’s still banging on about how Roberts is the betrayer-in-chief of the Constitution, hence to blame for Trump. This is polemically unfair, in ways I could spell out, but won’t. (If you really want to ask, that’s what comments are for.)

But I’ve got to wonder whether this sort of thing isn’t really pissing off Roberts. It would piss me off, if I were Roberts. Barnett isn’t just some guy. He’s like the brain and soul of the Federalist Society, these days. A bit of on-again, off-again grousing about Roberts’ ‘bad’ decisions is one thing. But Roberts is shaping up to be this consistent, vile Judas in the conservative imaginary. Roberts is going to be Chief for a while, I expect. Dale Carnegie would suggest that the way to work the refs effectively is not this. If Roberts actually turns into some flaming Living Constitutionalist slave-to-the-democratic-mob in 20 years, maybe you can give Barnett half credit.

Corey Robin: “It might piss Roberts off to hear this kind of talk now from Barnett…

…But it might also make him think twice and wonder whether, in his drive to be the conservative Court’s steward and statesman, he’s not in fact betraying the values and vision he came on the Court to pursue.

John Holbo: “I think the chance that Roberts doesn’t realize that Barnett is really uncharitably caricaturing Roberts’ position is slight…

…I don’t really think Roberts is going to move left, but I fully expect him to stick by his guns, and to realize that his guns are actually firing at the Federalist Society now.

Must-Reads: July 5, 2016

