Competitive Edge: Congress needs to restore the Federal Trade Commission’s authority to seek monetary remedies when companies break the law

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Michael Kades has authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


Michael Kades

Market power and its abuse are far too prevalent in the U.S. economy, increasing the prices consumers pay, suppressing wage growth, limiting entrepreneurship, and exacerbating inequality. Equitable Growth’s 2020 antitrust transition report identifies a lack of deterrence as a key problem: “Antitrust enforcement faces a serious deterrence problem, if not a crisis.”

As the report explains, “Rather than deter anticompetitive behavior, current legal standards do the opposite: They encourage it because such conduct is likely to escape condemnation, and the benefits of violating the law far exceed the potential penalties.” In the face of such warnings, it is a particularly bad time for the Supreme Court to unanimously reject 40 years of lower court rulings and conclude that the Federal Trade Commission can neither force companies to give up the profits they earned by violating the law nor compensate the victims of those violations. (The first remedy is called disgorgement, and the second remedy is called restitution.)

Whether the Supreme Court in April correctly interpreted the statute at issue in the case, AMG Capital Management LLC v. Federal Trade Commission, is less important than its implications. Professor Andy Gavil discusses a potential silver lining in the Supreme Court’s decision—the glass-half-full approach. He argues that if the Supreme Court faithfully applies its approach to statutory interpretation, then it could open the door to broader application of the antitrust laws.

I look at the direct impact of the decision—the glass-half-empty approach. I argue that the decision deprives the antitrust agency of a critical, albeit imperfect, weapon that has deterred anticompetitive conduct particularly in the pharmaceutical industry. Although it has used disgorgement in competition cases sparingly, those awards have deterred the entire industry from engaging in the challenged conduct.

Before the recent Supreme Court decision, the impact of disgorgement awards in competition cases went far beyond any single case. The savings include the harms avoided because deterred conduct never occurred. If the commission cannot seek monetary remedies, then companies will keep the rewards of their illegal conduct. Perversely, the companies causing the greatest harm will benefit the most from April’s decision.

The impact reaches even further. Without the threat of a disgorgement award, companies are more likely to drag out litigation and tax the FTC’s limited resources. Because the commission will spend more resources on egregious cases only to reach weaker results, it will have fewer resources to challenge anticompetitive conduct in other areas, which could, for example, weaken enforcement in merger cases or in the high-tech industry.

On the bright side, Congress can easily restore the FTC’s ability to seek monetary remedies, and the idea has some bipartisan support. The remainder of this piece discusses how disgorgement has been a successful tool in antitrust cases and what we can expect if Congress does not restore the FTC’s ability to seek broader and more equitable remedies, including monetary relief.

Disgorgement as deterrence

The story of the FTC’s monetary relief has come full circle. In 1998, the agency sued Mylan Laboratories Inc. to prevent it from continuing to corner the supply of a critical input (the active pharmaceutical ingredient) for a common tranquilizer, lorazepam. (I was one of the FTC attorneys on the case.) Mylan’s conduct forced its competitors to temporarily exit the market, and Mylan raised wholesale prices by 2,500 percent. (See Figure 1.)

Figure 1: Price increase on lorazepam after Mylan cornered the market for the active pharmaceutical ingredient

Although Mylan’s competitors found new suppliers and reentered the generic market in a matter of months, Mylan had earned an additional $120 million in profit. The Federal Trade Commission and a group of state attorneys general sued, seeking to stop the conduct and to disgorge the profits Mylan earned. In settling the government actions, Mylan agreed to pay $100 million, which was distributed to consumers and state Medicaid plans that had paid the inflated prices.

Absent the monetary recovery, Mylan’s strategy would have been wildly successful, and others, seeing that success, could have repeated it in any market where there were few suppliers of an active pharmaceutical ingredient. Until recently, however, no pharmaceutical company appears to have tried Mylan’s strategy. By depriving Mylan of its illegal profits, the Federal Trade Commission sent the message to the industry that cornering supply was a game not worth the candle.

Some antitrust experts argue that the agency has no need for monetary remedies because private parties can obtain treble damages. Unfortunately, treble damages sound more effective than they are. A study by emeritus professor John M. Connor of Purdue University and Robert H. Lande, Venable professor of law at the University of Baltimore, found that, on average, private plaintiffs in cartel case settlements obtain just 19 percent of the actual, not trebled, damages.
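To make the 19 percent figure concrete, here is a minimal arithmetic sketch. The damages amount is hypothetical; only the 19 percent average settlement rate comes from the study cited above.

```python
# Hypothetical illustration of the 19 percent settlement finding above.
# The damages figure is made up; only the 19 percent rate comes from the study.

actual_damages = 100_000_000              # single (actual) damages, hypothetical
treble_exposure = 3 * actual_damages      # what the statute threatens on paper
typical_recovery = 0.19 * actual_damages  # the study's average settlement recovery

print(f"Treble-damages exposure: ${treble_exposure:,.0f}")       # $300,000,000
print(f"Typical settlement recovery: ${typical_recovery:,.0f}")  # $19,000,000
print(f"Recovered share of treble exposure: {typical_recovery / treble_exposure:.1%}")  # 6.3%
```

On these hypothetical numbers, the typical settlement recovers only about 6 percent of the treble exposure the statute promises.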

Indeed, several factors limit the effectiveness of the treble-damage remedy. One is forced arbitration clauses. Another is the specific procedural hurdles that private plaintiffs face but government enforcers do not. And a third is the limitation on who counts as a proper plaintiff. The FTC’s authority to seek monetary remedies was not duplicative of private actions but rather made antitrust enforcement more effective.

Disgorgement and the efficient resolution of litigation

Private actions, even if they were sufficient to deter anticompetitive conduct, would not address the other problem the Federal Trade Commission will face without a monetary remedy. When the agency is challenging an ongoing activity, the longer the defendants can delay resolution, the longer they can earn their ill-gotten profits. When the conduct yields hundreds of millions of dollars in profits, legal fees of millions or even tens of millions of dollars look like a good investment.

Now, in the wake of the recent Supreme Court ruling, imagine the Federal Trade Commission trying to stop the conduct when, even if the agency wins, the company gets to keep everything it earned. Under that scenario, defendants have every reason to string out litigation. Further, the commission will be in a weaker position to negotiate settlements.

This scenario is not imaginary. In June 2013, the Supreme Court ruled in FTC v. Actavis Inc. that patent settlements in which a branded pharmaceutical company paid a potential generic not to compete, known as pay-for-delay or reverse-payment agreements, could violate the antitrust laws. At the time, the agency had two active pay-for-delay cases, the Actavis case itself and Federal Trade Commission v. Cephalon Inc. (I worked on both.) In the Actavis case, the commission had relinquished its disgorgement claim, but it had not in the Cephalon case. In less than 2 years, it settled the Cephalon case, obtaining $1.2 billion in disgorgement and the company’s agreement not to enter future pay-for-delay agreements.

In contrast, in the case against Actavis, there was no threat of disgorgement, and so the case dragged on for more than 5 years before a settlement was reached—and it ended up being a weaker order with no monetary remedy. The result was worse: longer time to resolution, more resources expended, and a weaker remedy.

Without a disgorgement remedy in antitrust cases, particularly in pharmaceutical ones, the more profitable the conduct, the less incentive the defendants will have to settle—even when they are likely to lose on the merits. In turn, the Federal Trade Commission will have to use more resources on easy cases and have fewer resources for more complex matters.

If you are concerned about monopolization in digital platform markets, consolidation in hospital markets, or any other anticompetitive activity, the impact of the Supreme Court’s AMG decision should bother you. Without the ability to obtain disgorgement, anticompetitive conduct will be more likely, and the commission will face more demands on its already-insufficient budget.

Liability without consequences

If companies can keep the profits they earn by violating the law, then companies can engage in egregious behavior without fear of the consequences. Take the FTC’s recent case against AbbVie Inc. The commission proved that AbbVie had brought objectively baseless patent litigation, that the burden and length of the litigation (the litigation process, not its outcome) delayed generic competition, and that the company illegally increased its profits by $448 million.

The U.S. Court of Appeals for the Third Circuit upheld the liability but concluded that the Federal Trade Commission could not deprive AbbVie of its illegal profits. The AbbVie case was decided before the Supreme Court’s AMG decision and interpreted a different part of the Federal Trade Commission Act. Nonetheless, it exemplifies the limited impact of even successful FTC antitrust enforcement if the agency cannot seek monetary remedies.

Coming full circle?

Six years ago, Martin Shkreli, the so-called pharma bro, brought his hedge fund experience to prescription drugs. He acquired Daraprim, a drug used to treat a serious parasitic infection that can be deadly to babies and those with compromised immune systems. He then promptly raised the price from $13.50 per tablet to $750. The event triggered public outcry and unwanted attention on Shkreli, who is now in jail for securities fraud.

In addition to raising prices, Shkreli’s company made it more difficult, if not impossible, for new competitors to enter the market. It prevented generic companies from obtaining approval from the U.S. Food and Drug Administration through sample blocking, an anticompetitive tactic that the Washington Center for Equitable Growth has discussed often and which Congress addressed through the CREATES Act in 2019. And, like Mylan nearly 25 years earlier, Shkreli’s firm allegedly locked up the active pharmaceutical ingredient for Daraprim, creating a further hurdle to competition.

In 2020, the Federal Trade Commission sued, alleging both strategies were anticompetitive. Unlike in the Mylan case, after the AMG decision, the commission cannot seek monetary remedies such as restitution and disgorgement. Unless Congress acts, Shkreli and his co-defendants have no fear of losing the profits they earned through any anticompetitive and illegal activity. Regardless of the result in the Shkreli case, it is unlikely to deter anticompetitive conduct as strongly as the Mylan case did.

Status of a legislative solution

The fix is simple. The Supreme Court neither endorsed the fraudulent conduct at issue in the case nor suggested there was a constitutional objection to providing the Federal Trade Commission the authority to seek disgorgement or restitution. Congress can restore the authority the commission has been using for years by clarifying the scope of the FTC’s power.

On the surface, there appears to be strong bipartisan support for doing so. Within days of the Supreme Court’s AMG ruling, all four FTC commissioners called for legislative action. Both the then-acting chairwoman and the ranking member of the Senate Commerce Committee voiced support for a legislative fix. Recent hearings in both the Senate Commerce Committee and the House Commerce Committee, however, suggest areas of disagreement over the scope of the legislation. Last week, the House did pass legislation, with two Republicans voting in support.

Conclusion

After the AMG decision, much of the focus has been on how the decision limits the FTC’s ability to compensate victims of fraud and other consumer protection violations, which is understandable. The commission seeks monetary relief far more often in consumer protection cases (49 in 2019 alone) than in competition matters (14 total since 2000).  

The decision’s impact on antitrust enforcement, particularly in the pharmaceutical industry, however, should be equally troubling. The antitrust enforcement scheme can address the market power problem and the harms it causes only if it deters anticompetitive conduct in the first place. With companies no longer facing the threat of the Federal Trade Commission seeking restitution or disgorgement, the profits from violating the antitrust laws will far outweigh the risk of being prosecuted. Perversely, the biggest winners from these developments are the companies that cause the greatest harm.

By no means were the FTC’s monetary remedies sufficient to completely deter anticompetitive activity. There is a robust debate about other powers the commission may already have to hold companies accountable, and recently introduced bills would give the U.S. Department of Justice and the Federal Trade Commission the power to seek civil penalties for antitrust violations.

Civil penalties can be much larger than disgorgement, which is limited to the defendant’s illegal profits, or restitution, which is limited to harms consumers suffered. Policymakers should be discussing those issues and whether stronger remedies are needed rather than uncontroversial propositions such as whether companies that violate the antitrust laws should be allowed to retain the profits they earned through unlawful conduct and whether victims should be left uncompensated.

Before the AMG decision, a monetary remedy was the knife the Federal Trade Commission brought to a gunfight with pharmaceutical companies. Unless Congress acts, the commission will now arrive at the gunfight with only its bare knuckles.

Competitive Edge: The silver lining for antitrust enforcement in the Supreme Court’s embrace of “textualism”

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Andrew I. Gavil has authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


Andrew I. Gavil

At a 2015 lecture in honor of Supreme Court Justice Antonin Scalia at Harvard Law School, his colleague Justice Elena Kagan famously proclaimed: “We’re all textualists now.” Her proclamation appeared prescient this past Supreme Court term, when textualism came to antitrust law in AMG Capital Management LLC v. Federal Trade Commission.

The case questioned the FTC’s authority to seek disgorgement as a remedy for violations of Section 5 of the Federal Trade Commission Act under Section 13(b) of the act. In an opinion authored by Justice Stephen Breyer, a unanimous Supreme Court concluded that the act did not provide for that authority. Section 13(b)’s use of “injunction,” the Supreme Court reasoned, was not the equivalent of the broader “equitable” relief with which disgorgement is associated and which is used in other provisions of the act.

Importantly, a comparison of Section 13(b) with those other provisions confirmed for the Supreme Court that Congress well understood the difference between the limited “injunction” and the broader “equitable” relief. The choice of language was deliberate and warranted differing interpretations. Under a textualist approach, the words would have to be assigned their distinct meaning.

As the Washington Center for Equitable Growth’s Director of Markets and Competition Policy Michael Kades explains, the decision in the AMG case was a blow to the FTC’s remedial authority that prompted immediate criticism and calls for legislative reform. In limiting the commission to injunctive relief, the Supreme Court had significantly circumscribed the commission’s remedial powers and, with it, had unquestionably diminished the deterrent value of its law enforcement power.

But is there a silver lining for antitrust law in the Court’s commitment to textualism?

Before AMG, the Supreme Court had demonstrated little, if any, interest in textualism to interpret the principal antitrust statutes. To the contrary, it often alluded to the common law origins and inherent flexibility of the terms of the Sherman Antitrust Act of 1890, which, over time, spilled into its interpretation of the Clayton Antitrust Act of 1914. AMG’s textualism calls those decisions into question—and they are worth questioning.

As lively debates about the future of U.S. antitrust law rage on, one persistent question has been: Can the current tools available to the antitrust enforcement agencies be more fully and effectively utilized? Textualism may hold the promise of an affirmative answer to that question, especially when it comes to the Clayton Act.

A brief history of the Clayton Antitrust Act of 1914

The Clayton Act was, by design, intended to augment the Sherman Act and redress the courts’ constrained reading of the Sherman Act’s common-law-derived standards in its first quarter-century of enforcement. To achieve the desired result, Congress made two textual choices. First, in lieu of the general language of the Sherman Act, it opted for more highly specified prohibitions. Second, it used an “incipiency” standard of competitive harm that recurs in all the Clayton Act’s main prohibitions.

In place of the Sherman Act’s unreasonable “restraint of trade” and “monopolization” standards, the Clayton Act prohibits conduct when its effect “may be to substantially lessen competition, or to tend to create a monopoly.” This text is used in its prohibition of price discrimination (Section 2), exclusionary contracts (Section 3), and mergers and acquisitions (Section 7). The choice of “may be” and “tend” signaled a departure from the Sherman Act and reflects congressional intent that the burden of establishing competitive harm under the Clayton Act should be lower than that required under the Sherman Act. (In other ways, the Clayton Act is narrower than the Sherman Act. Sections 2 (price discrimination) and 3 (exclusive dealing and tying), for example, are limited to sales of goods and exclude services.)

True to Congress’ intentions, early Supreme Court interpretations of the Clayton Act assigned significance to the Clayton Act’s text. The Court held in 1922 that Congress viewed the incipiency standard as a means of expanding the Sherman Act as it had been interpreted at the time, providing that conduct within the scope of the Clayton Act could be challenged “before the harm to competition is effected.” Then, in 1962, it explicitly noted that Section 7 “was intended to reach incipient monopolies and trade restraints outside the scope of the Sherman Act.” Similarly, the Supreme Court in 1941 viewed the FTC Act, which had also become law in 1914, as intended “to reach not merely in their fruition but also in their incipiency combinations which could lead to these and other trade restraints and practices deemed undesirable.”

The Supreme Court differentiated, however, between the “mere possibility” that an agreement falling within its terms would “substantially lessen competition or tend to create a monopoly” and the probability that it would do so, noting in 1922 that Section 3 could reach the latter, though not the former. Still, the Supreme Court would later observe, in 1961, that it had not drawn the line “where ‘remote’ ended and ‘substantial’ began.”

The Supreme Court’s decades-long wavering on the degree of probability necessary to establish an anticompetitive effect left enough discretion for courts to progressively downplay the significance of the Clayton Act’s text or simply to ignore it. “Probability,” of course, can range from low to high. But increasingly, courts, including the Supreme Court in 2021, have suggested that Sherman Act offenses require proof of “actual” competitive harm—ignoring even well-established Sherman Act precedent that has long used the formulation “actual or probable.”

Over time, therefore, two trends dissipated the potential potency of the Clayton Act: The courts demanded ever greater degrees of certainty of competitive harm in Sherman Act cases and progressively downplayed the distinction between the Sherman Act and the Clayton Act. Alleged offenses under both acts became homogenized. Burdens of proof for plaintiffs became elevated.

At best, the Supreme Court only paid lip service to the textual distinctiveness of the Clayton Act. In Brooke Group Ltd v. Brown & Williamson Tobacco Corp., for example, the Court acknowledged in 1993 that, whereas proof of a violation of Section 2 of the Sherman Act requires “probability” of competitive harm, a violation of the price discrimination provisions of Section 2(a) of the Clayton Act only requires a “possibility” of harm. It nevertheless concluded that a unitary standard should apply for claims of primary line predatory price discrimination under the Clayton Act and predatory pricing by a monopolist under Section 2 of the Sherman Act.

In short, the Supreme Court acknowledged the textual difference but failed to assign any significance to it. Worse, it imported the unduly restrictive “dangerous probability” of successful monopolization requirement from the Section 2 offense of attempt to monopolize, without regard for its distorting effect on the Clayton Act. The decision has been criticized on the merits and is irreconcilable with AMG.

Reevaluating the Clayton Act through a textualist lens

The distinctive text of the Clayton Act could support a more expansive view of its reach, aided in part by both its more highly specified prohibitions and a less demanding burden of proof. Such an approach would also be supported, as in AMG, by crediting its text, contrasting it with the text of the Sherman Act, and assigning significance to it instead of ignoring and diluting it. Although there is nothing “ambiguous” in the Clayton Act’s text—a prerequisite that some strict textualists will often cite for consideration of legislative history—for the less strict, that history is rich and strongly supports the view that the Clayton Act was intended to have significance that it has been denied.

Three specific areas illustrate how such an approach could reinvigorate antitrust enforcement, especially by the Federal Trade Commission and the Antitrust Division of the U.S. Department of Justice. First, as noted, conduct such as that in Brooke Group should have been easier to challenge.

Second, Section 3 of the Clayton Act has lost its distinctiveness, and hence its vitality, as a prohibition of various types of exclusionary contracting practices involving goods, especially tying, exclusive dealing, and conditional pricing practices. Those practices have been subjected to demanding standards of proof by the courts, which have linked Section 3’s fate to the increasingly demanding standards of the Sherman Act’s rule of reason. Very few plaintiffs, public or private, have prevailed.

Third, the proof required in government antitrust challenges to horizontal mergers has become increasingly demanding. This is the case even when challenges are brought pre-consummation based on predictions of likely effects.

Restoring balance to the process might begin with the recently announced plans to revise the Horizontal and Vertical Merger Guidelines, which could be profitably—and justifiably—modified to reflect a stronger commitment to incipiency. The same statutory text of Section 7 of the Clayton Act has supported every set of these guidelines adopted since 1968, yet the guidelines have continually evolved in the direction of permitting ever higher levels of concentration.

Consider first the Herfindahl-Hirschman Index, or HHI, which has been used as a measure of market concentration in the guidelines since 1982 and by many courts since then. The two federal antitrust agencies could easily back away from the current HHI threshold of 2,500 that defines “highly concentrated” markets, for example, returning to the 1,800 level that was used before 2010.
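For readers unfamiliar with the index, here is a minimal sketch of how the HHI is computed and compared against the two thresholds mentioned above; the market shares are hypothetical.

```python
# Minimal sketch: the HHI is the sum of squared market shares (in percentage
# points). The 2,500 and 1,800 "highly concentrated" thresholds are the ones
# discussed above; the market shares below are hypothetical.

def hhi(shares_in_percent):
    """Return the Herfindahl-Hirschman Index for a list of market shares."""
    return sum(share ** 2 for share in shares_in_percent)

shares = [40, 30, 20, 10]  # a hypothetical four-firm market
index = hhi(shares)        # 1600 + 900 + 400 + 100 = 3000

print(f"HHI = {index}")
print("Highly concentrated under the current 2,500 threshold:", index > 2500)
print("Highly concentrated under the pre-2010 1,800 threshold:", index > 1800)
```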

Revised merger guidelines also could provide an even stronger message about the limited value of market definition when other evidence supports a prediction of anticompetitive harm, include more robust definitions of nascent and potential competition, and clarify the government’s approach to serial acquisitions. Anticompetitive presumptions could also be fortified for horizontal mergers and recognized for vertical mergers.

These kinds of revisions and others could provide a foundation for more aggressive enforcement relying on the text of Section 7 of the Clayton Act. They would also provide added guidance to the courts and greater transparency to the business community.

Conclusion

Revitalizing the incipiency standard of the Clayton Act is not a new idea. What is potentially new is AMG’s commitment to textualism. It provides an opportunity, if not an invitation, to test the Supreme Court’s commitment to consistent textualism: If it can unanimously support retrenchment of the FTC’s remedial power, then it also could support a renewal of its enforcement authority.

Andrew I. Gavil is a professor of law at Howard University School of Law.

New U.S. antitrust legislation before Congress must mandate an anticompetitive presumption for acquisitions of nascent potential competitors by dominant firms

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Steven Salop has authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


Steven C. Salop

The current U.S. economy is increasingly characterized by dominant firms controlling digital platforms. One way that dominant networks maintain or even increase their power is by acquiring nascent or potential competitors that otherwise might become significant competitors by themselves or by joining with other rivals. Now, there is legislation before Congress that would establish an anticompetitive presumption against such acquisitions by dominant firms. Reining in acquisitions by these burgeoning monopolies in the digital arena is important to U.S. economic competitiveness and innovation.

Most of the discussion about digital monopolies focuses on Alphabet Inc.’s Google unit, Facebook Inc., Apple Inc., and Amazon.com Inc., but there are other digital platforms with substantial market power. In air travel, Sabre Corp. is the dominant platform that connects airlines and travel agents. In auto repair, CCC Intelligent Solutions Holdings Inc. is the dominant platform that connects the auto industry’s original equipment manufacturers and other parts suppliers with repair shops. There are many credit-card-issuing banks, but only three major credit card networks (Visa, MasterCard, and American Express), and the networks’ rules make it impossible for merchants to negotiate lower fees with the card-issuing banks or each other.

Viewpoints on these anticompetitive market conditions largely support greater antitrust scrutiny. Various commentators—including the University of Chicago Booth School of Business’ Stigler Report, Carl Shapiro at the University of California, Berkeley’s Haas School of Business, and me—recommend increased enforcement in this area. Koren Wong-Ervin, an antitrust partner at Axinn, Veltrop & Harkrider LLP, takes a more skeptical approach.

Yet U.S. antitrust law makes it very difficult for the enforcement agencies to bring successful actions to prevent these kinds of acquisitions and, at least until recently, enforcement has been very lax. There have been a number of notable examples of missed enforcement opportunities. The Federal Trade Commission and the United Kingdom’s Competition and Markets Authority, for example, did not attempt to stop Facebook’s acquisition of Instagram in 2012, despite emails by Facebook’s CEO Mark Zuckerberg that clearly indicated a desire to acquire Instagram in order to prevent it from competing with Facebook.

Similarly, the Federal Trade Commission permitted Google to acquire AdMob in 2009 and DoubleClick in 2007. These two acquisitions helped set the stage for Google to obtain its monopoly over display advertising networks—a situation that resulted in a monopolization case brought by a group of state attorneys general led by the state of Texas.

Now, the two U.S. antitrust agencies are stepping up. The Federal Trade Commission has brought a monopolization case to unwind the Facebook mergers, and the U.S. Department of Justice’s Antitrust Division sued to block the Visa/Plaid merger. But the legal barriers are overly high. In order to correct this enforcement gap, the law should mandate a strong anticompetitive presumption for acquisitions of nascent or potential competitors by dominant platforms and other firms with substantial market power.

That presumption should be coupled with a high rebuttal burden placed on dominant firms. This rebuttal burden means that the merging firms will have to show that the transaction will not harm competition. The high burden would require them to show this result with clear evidence, rather than simply showing by a preponderance of the evidence that the merger is unlikely to cause anticompetitive effects.

The rationale for this strong presumption has two parts. First, reliable case-specific evidence is often necessarily limited, so there is a need to rely more on a presumption, whether it is procompetitive or anticompetitive. Second, underdeterrence and failing to stop anticompetitive transactions is far more worrisome than overdeterrence and erroneously stopping procompetitive ones. Thus, the choice of an anticompetitive presumption with a high rebuttal burden makes more sense.

This is where I disagree with conservatives. The conservative position might be paraphrased as “unless you are confident, do nothing.” My view is that we face a policy choice in digital markets between accepting “monopoly capitalism” and ensuring the potential for competition and deconcentration. The latter approach is more appropriate. 

I will now drill down on the issue of evidence. In evaluating acquisitions and other conduct involving potential or nascent competitors, probative case-specific information often is limited because the potential or nascent rival may have little or no track record, and the competitive harms may be in future markets. Thus, the antitrust agencies and courts have no choice but to rely more heavily on categorical predictions—that is, presumptions. This approach is not speculation. It is an application of rigorous decision theory: the famous Bayes’ Law.

Decision theory involves efficiently combining prior information about the category of conduct, or “presumptions,” and case-specific evidence. The relative weights placed on these two sources of information depend on the strength of the presumption and the reliability of the case-specific evidence. If the case-specific evidence is limited or less reliable in making a good prediction, then more weight should be placed on the presumption. If reliable evidence is limited, then the presumption often will decide the case.
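As a rough illustration of this decision-theoretic point, the sketch below applies Bayes’ rule to a hypothetical prior (the presumption) and two hypothetical evidence scenarios; none of the numbers come from any actual case.

```python
# Minimal Bayes' rule sketch of the point above: a strong prior (the
# presumption) dominates when case-specific evidence is weak, and recedes
# when the evidence is informative. All numbers are hypothetical.

def posterior_harm(prior, likelihood_if_harm, likelihood_if_benign):
    """Posterior probability of competitive harm given the case evidence."""
    numerator = prior * likelihood_if_harm
    denominator = numerator + (1 - prior) * likelihood_if_benign
    return numerator / denominator

prior = 0.7  # presumption: most such acquisitions by dominant firms are harmful

# Weak evidence: nearly as likely under harm as under no harm.
print(posterior_harm(prior, likelihood_if_harm=0.55, likelihood_if_benign=0.45))
# ~0.74 -- the presumption essentially decides the case.

# Strong exculpatory evidence: much more likely if the deal is benign.
print(posterior_harm(prior, likelihood_if_harm=0.1, likelihood_if_benign=0.9))
# ~0.21 -- reliable case-specific evidence can overcome the presumption.
```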

Of course, there is a choice between an anticompetitive presumption and a procompetitive presumption. In my view, the appropriate approach is an anticompetitive presumption. This is because underdeterrence is a more serious concern than overdeterrence when the acquiring firm is dominant or has substantial market power.

Market forces lead to large underdeterrence concerns

First, in markets with large network effects, scale economies, and switching costs—all of which are subject to tipping toward monopoly—competition in the market often comes from disruptive new entrants. When such competition does occur, the benefits are substantial. The danger of the acquisitions of potential competitors is that they will prevent that disruptive competition from occurring.

Second, for this reason, dominant firms have powerful incentives to eliminate or neutralize nascent or potential competition. The dominant firm also has the resources to do so by sharing the monopoly profits. There is a simple reason for this: Monopoly profits typically exceed more competitive, duopoly profits. This is because competition on price and quality typically delivers increased consumer welfare while reducing industry profits. Consumers gain at the expense of firms when there is competition. By outbidding rivals to buy out the entrant, the dominant firm can preserve its monopoly power and profits.

Third, to state the obvious, there can be no “market self-correction” away from a condition of dominance to competition if the dominant firm is permitted to acquire, destroy, or neutralize nascent competitors or entrants that would lead to more competition and market self-correction.

Fourth, the dominant firm has more market information than do the two U.S. antitrust agencies and their counterparts abroad. So, dominant firms can perceive competitive threats before they become apparent to the agencies. This might have been the case with Google’s acquisitions of DoubleClick and AdMob, or Facebook’s acquisitions of Instagram and WhatsApp.

Fifth, the fact that a dominant firm bids more to acquire a nascent competitor or potential entrant by no means implies that the acquisition is more efficient or will be more likely to benefit consumers than would an acquisition by another firm. It can simply reflect the fact that the dominant firm is willing and able to pay more to protect its monopoly profits.

Overdeterrence is a lesser concern

First, dominant firms typically can achieve most, if not all, of the legitimate benefits from these acquisitions or agreements on their own, albeit perhaps with some delay, or through nonexclusive agreements.

Second, if dominant firms are not permitted to acquire potential competitors, then the courts and antitrust agencies should not assume that the efficiencies will be lost. They typically will be obtained by alternative purchasers or partners.

Third, it is not the case that preventing dominant firms from making these acquisitions will lead to fewer start-ups or less innovation by nascent competitors. Even with constraints on dominant firms, the start-ups will still be able to pursue an “invest and exit” strategy of selling to larger firms. They just will not be able to sell to the dominant firm.

The evidence indicates greater competitive concerns from underdeterrence, not overdeterrence

Koren Wong-Ervin suggests that there is insufficient credible evidence of a competitive problem that needs to be fixed. I disagree.

First, there are some compelling anecdotes suggesting that the agencies believed they were unlikely to successfully enjoin some significant problematical acquisitions. These include Google’s acquisitions of AdMob and DoubleClick and Facebook’s acquisition of Instagram, as mentioned above. Sabre’s acquisition of Farelogix was a notable judicial error. 

Two recent studies also reinforce these concerns. One important study focused on pharmaceutical transactions. The study finds that drug projects acquired in mergers were less likely to be developed when they overlapped with the acquirer’s existing product portfolio, especially when the acquirer’s market power was large due to weak competition or distant patent expiration. The authors find that 5 percent to 7 percent of the deals qualified as what they termed “killer acquisitions.” They also find that the deals were disproportionately not reportable in the United States.

Another study of acquisitions by Google, Amazon, Facebook, and Apple finds that among the deals where they had sufficient data, 10 percent to 15 percent of the deals were competitively problematical. This amounted to one to two significant deals per year. Moreover, the study limited the identification of potentially problematical transactions in several ways. Their list does not include transactions where the acquired firm was in a vertically adjacent market. The study simply assumes that these firms would not become entrants into the dominant firm’s “core market.”

This second study also did not account for the possibility that these acquired firms may have had important assets that would have been valuable to rivals or where the dominant platform was able to use the acquisition to increase the spread of its monopoly. Google’s acquisition of Nest, for example, was not included because Nest was not an entrant into search. But Nest likely will be an important search access point as voice assistance becomes more common. If Nest is owned by Google, then it likely will not support interoperability among competing voice assistants. 

Finally, there were many other deals that might have raised concerns, but a combination of narrow “filters” that the second study used to identify concerns and the authors’ lack of access to sufficient public information reduced the number of transactions evaluated in the second study. One case in point: The authors of the second study identified 79 Facebook deals, of which 19 were in the messaging or social media segments. Yet apparently only eight of those 19 deals were evaluated.

Conclusion

For all these reasons, the law should mandate a strong anticompetitive presumption for acquisitions of nascent or potential competitors by firms that are dominant or have substantial market power, with a high rebuttal burden placed on the acquiring firm. This type of presumption can lead to increased enforcement and greater competition as the modern economy develops.

In this regard, three legislative proposals have been made. Recently introduced legislation offered by Sen. Amy Klobuchar (D-MN) offers two approaches. First, the bill creates an anticompetitive presumption based on market share (50 percent) that applies to acquisitions of companies “that have a reasonable probability of competing with the acquiring person in the same market.” In this situation, it would not be necessary for an antitrust agency to prove that a potential competitor is more likely to enter or that it is more likely to have a substantial impact. Second, a separate size-of-transaction presumption would require, at least for the largest transactions and companies, that the acquirer prove that the acquisition does not create even an appreciable risk of materially lessening competition.

More recently, the Platform Competition and Opportunity Act of 2021 sponsored by Rep. Hakeem Jeffries (D-NY) and co-sponsored by Rep. Ken Buck (R-CO) addresses mergers for large tech platforms. This bill presumes that any merger by a covered platform is illegal unless the defendant can show by clear and convincing evidence that the acquired firm is not a competitive threat and that the transaction would not “enhance or increase the covered platform’s market position.”

Sens. Charles Grassley (R-IA) and Mike Lee (R-UT) also have introduced a bill that includes provisions that create anticompetitive presumptions against acquisitions of potential or nascent competitors by firms with market shares exceeding 33 percent. For firms with market shares exceeding 66 percent, the presumption is made even stronger because there is only a very limited exception.  

In light of this bipartisan support, Congress may well follow through and legislate a strong anticompetitive presumption. It should.

—Steven C. Salop is professor of economics and law at the Georgetown University Law Center and a senior consultant at Charles River Associates. He regularly consults to private companies and government agencies. The opinions expressed here are his own and do not necessarily represent the views of colleagues or consulting clients.

Competitive Edge: Big Ag’s monopsony problem: How market dominance harms U.S. workers and consumers

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Hiba Hafiz and Nathan Miller authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


""
Hiba Hafiz
Nathan Miller

Agricultural markets are among the most highly concentrated in the United States. The markets for beef, pork, and poultry; grain; seeds; and pesticides are each dominated by four firms. Three firms dominate the biotechnology industry. One or at best two firms control large farm equipment manufacturing. And a small number of firms are increasingly dominating agricultural data and information markets.

Yet former Iowa Gov. Tom Vilsack (D)—President Joe Biden’s nominee for secretary of the U.S. Department of Agriculture, the same position Gov. Vilsack held during the Obama administration—has come out against breaking up Big Ag firms. “There are a substantial number of people hired and employed by those businesses,” he said last year. “You’re essentially saying to those folks, ‘You might be out of a job.’ That to me is not a winning message.”

Gov. Vilsack couldn’t be more wrong on the economics. It is precisely Big Ag’s buyer power in agricultural markets—these firms’ “monopsony” power—that destroys jobs and suppresses small farmer and worker pay.

Economic theory describes monopsony power as market power on the buy side of the market—it’s the analogue of monopoly power on the sell side of the market. Artificially acquiring or maintaining market power is unlawful under the U.S. antitrust laws, regardless of whether it derives from the buy or sell side. And that’s because buy-side power can be just as socially harmful as sell-side power.

Firms insulated from competition in input markets can profitably suppress the pay to suppliers of goods, services, and labor below the value that those suppliers provide. And lower pay has broader economic consequences. It means suppliers have weaker incentives to provide the same quantity of inputs or invest in capacity, innovation, and quality. So, monopsony power can decrease both input suppliers’ pay and the quantity of inputs buyers purchase.

Monopsony power can also harm downstream consumers. Less input means less output, and less output means more scarcity and higher prices to downstream consumers than would otherwise exist under competitive conditions.1
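To fix ideas, here is a stylized numerical sketch of the standard monopsony result described above, using hypothetical linear supply and value curves; the point is only that the sole buyer purchases less and pays less than the competitive benchmark.

```python
# Stylized monopsony sketch (hypothetical linear curves): a single buyer
# facing an upward-sloping supply curve buys less and pays less than the
# competitive benchmark -- the "lower pay, lower quantity" point above.

a, b = 2.0, 1.0   # inverse supply of the input: price = a + b * quantity
c, d = 20.0, 1.0  # buyer's marginal value of the input: value = c - d * quantity

# Competitive benchmark: buy until marginal value equals the supply price.
q_comp = (c - a) / (b + d)
w_comp = a + b * q_comp

# Monopsonist: buying one more unit raises the price on every unit, so the
# marginal cost of the input is a + 2*b*q; buy until marginal value equals it.
q_mono = (c - a) / (2 * b + d)
w_mono = a + b * q_mono

print(f"Competitive: quantity {q_comp:.1f}, price paid {w_comp:.1f}")  # 9.0, 11.0
print(f"Monopsony:   quantity {q_mono:.1f}, price paid {w_mono:.1f}")  # 6.0, 8.0
```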

Taking steps to mitigate buyer power through aggressive antitrust enforcement and appropriate regulation can be a win-win for input suppliers and downstream consumers. Competition among buyers helps ensure that suppliers are paid according to their value. And competition increases output incentives and ultimately lowers downstream prices as well.

The monopsony power of Big Ag poultry, pork, and meat companies

The buyer power of companies such as Tyson Foods Inc., Cargill Inc., and Smithfield Foods Inc. comes from high levels of market concentration in the agricultural industry. Simply put: Big Ag firms face too little competition when they hire workers or procure inputs such as chicken (Tyson), hogs (Smithfield), or cattle (Cargill) from smaller suppliers.

Top Big Ag firms have merged with and acquired smaller firms in the industry over the past five decades, increasing consolidation of livestock packers, beef processors, and poultry processors. According to a recent agricultural industry report by the Center for American Progress, between 1986 and 2008, “the four-firm share of animal slaughter nationwide increased from 55 percent to 79 percent for cattle, from 33 percent to 65 percent for hogs, and from 34 percent to 57 percent for poultry.” High market concentration increased Big Ag’s price- and wage-setting power over cattle producers, hog and poultry farmers, and meat processing plant workers, lowering their prices for hogs, beef, chickens, and labor.

Big Ag’s concentration numbers at the local level are even more stark than at the national level. Most buying and processing of poultry, hogs, and beef happens locally to avoid high transportation and storage costs. Big Ag dominates local markets as the only buyers in town. More than 20 percent of poultry growers have only one local upstream buyer for poultry and 30 percent have only two.

Most hog growers face packer monopsony at the local level, with just one or two packers offering them contracts. One telling case in point: After dominant meat processor JBS S.A. acquired Cargill’s pork processing operations in 2015, the American Antitrust Institute and a coalition of farmers’ unions projected that only two firms—Tyson and the merged JBS-Cargill pork processing unit—would buy and slaughter 82 percent of hogs in Illinois, Indiana, and surrounding states. Similarly, local cash markets for cattle typically feature no more than three or four packers.

Concentration at the local level means that Big Ag can artificially suppress pay to cattle producers, hog and poultry farmers, and processing plant workers below the value that their inputs provide to the industry.

Evidence of the effects of Big Ag poultry, pork, and beef companies’ monopsony power

Empirical evidence of the effects of Big Ag’s buyer power on rural communities and consumers nationally is mounting. Suppliers and processing workers suffer lower pay while downstream consumers are paying higher prices on essential food. Around “three-quarters of contract growers live below the poverty line,” and average-sized operators lose money 2 out of 3 years.

Then, there is the evidence of rising farm bankruptcy rates. Farm bankruptcies have steadily increased every year for the past decade, due, in part, to high U.S. farm debt. Small farmers are not the only ones being undercompensated—a 2000 U.S. Department of Labor survey found that 100 percent of poultry processing plants failed to comply with federal wage-and-hour laws.

Buyer power also enables processors to impose abominable working conditions without workers quitting. Even before the coronavirus pandemic, poultry processing workers suffered occupational illnesses at five times the rate of other U.S. workers. Their conditions deteriorated further during the pandemic, with immigrant workers and workers of color suffering the most. A November 2020 study estimated livestock processing plants suffered 236,000 to 310,000 cases of COVID-19, the disease caused by the new coronavirus, and 4,300 to 5,200 deaths—3 percent to 4 percent of all U.S. deaths—with the majority related to community spread. Consumers have also suffered nationally by having to pay higher prices for meat products while facing fewer choices and lower quality.

More evidence of Big Ag’s buyer power emerges from high-profile U.S. Department of Justice and private enforcement actions against dominant Big Ag buyers in the poultry and pork industries for colluding to fix prices, rig bids, and suppress pay to growers and processing workers. High concentration levels make it easier for Big Ag firms to collude, and in June 2020, the Department of Justice indicted leading chicken industry defendants for price-fixing and bid-rigging in the broiler chicken market. Civil suits were filed against Tyson, Pilgrim’s Pride Corp., and others for price-fixing, wage-fixing, and using no-poach agreements in the markets for broiler chicken products, contract farmer services (contract farmers are farmers who grow chickens from chicks to market weight in long-term contracts with processors), and chicken-processing labor services.

The Department of Justice is currently investigating price-fixing and bid-rigging among dominant beef processors, too, and private plaintiffs have sued pork and beef processors for allegedly colluding to lower prices paid to producers and raise prices for consumers. Current litigation against the poultry, pork, and meat cartels estimates that hundreds of thousands of workers suffer poverty wages from wage-fixing conspiracies.

Big Ag is able to exercise its buyer power through its industry-transforming supply chain restructuring that allows lead firms to extract rents at each layer of their supply chain for their profit, and most especially, from small farmers and workers at the production level.2 Starting in the 1960s, poultry firms such as Tyson vertically integrated to own or control hatcheries, feed mills, veterinary care, slaughterhouses, processors, and sales contracts with poultry growers. The pork industry followed Tyson’s lead in the early 1980s, extending top-down ownership or control of hog production, packing, and processing in large-scale farms and processing facilities.

The only level of the supply chain not directly owned or operated by Big Ag chicken and pork producers is the growing stage, where Big Ag processors rely on small farmers to raise to slaughter weight the broilers and hogs that Big Ag provides as chicks from its breeders and hatcheries and as farrows and weaners. Still, Big Ag firms in these two meat sectors can squeeze these growers’ margins from above and below: Their inputs are supplied by Big Ag, and their product is sold to Big Ag.

Big Ag does this through contractual controls, forcing growers into one-sided production and marketing contracts while using their significant control over spot or cash markets to limit sales outside those contracts. Around 97 percent of chicken broilers are raised by contract growers in “take it or leave it” contractual arrangements; 63 percent of hogs were contractually raised in 2017, nearly double that in 1997.

These arrangements are crippling. Chicken growers’ production contracts require significant sunk investments—around $1 million in mostly debt-financing—and growers are required to purchase nearly all inputs, veterinary care, and technical assistance from vertically integrated buyers. Buyers can change or terminate contracts for almost any reason. Farmers sell their chickens in a “tournament system,” where their chickens compete for rankings with others given the same feed amount, but the ranking process lacks transparency—buyers weigh chickens behind closed doors and provide no standards for knowing whether a farmer is “getting the same inputs as the other farmers against whom the company makes him compete,” according to Lina Khan, then-policy analyst at the New America Foundation and now an assistant professor of law at Columbia Law School.

Big Ag buyers can retaliate against resistant farmers by refusing to renew contracts or sending bad feed or unhealthy chicks in future seasons—a system likened to “indentured servitude” by former chicken farmers suing Big Ag poultry firms. Contracts for hog growers also require significant capital investments, place much of the liability and risk for raising hogs on growers, and subject growers to unilateral buyer compensation-setting with limited transparency, and similarly allow retaliation through threats of contract termination or future substandard livestock or feed supply.

The beef industry is less vertically integrated than the poultry and pork industries. Beef packers find vertical supply chain ownership less profitable, yet these packers have achieved a degree of de facto control over the thousands of independent feedlots that supply them. Since the 1980s, and following meatpacker consolidation into the Big Four—Cargill, JBS, National Beef Inc., and Tyson Foods—the number of packing plants nationwide dropped 81 percent, and nearly a third of all the feedlots that purchase cattle from ranchers for fattening and resale to meatpackers have left the industry.

Among the feedlots that remain, most sign long-term contracts with the Big Four. More than 75 percent of packer cattle purchases come from long-term contracts with feedlots, up from 34 percent in 2004. Because most contract prices are pegged to outcomes in a subsequent cash market, this weakens packers’ incentives to bid aggressively in that cash market—bidding aggressively would just increase their payments for cattle already under contract.
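A back-of-the-envelope sketch of that incentive, with hypothetical head counts and a hypothetical bid increase (the 75/25 split loosely mirrors the contract share noted above):

```python
# Hypothetical arithmetic for the incentive described above: when contract
# prices are pegged to the cash market, most of the cost of bidding higher
# falls on cattle the packer has already secured under contract.

cash_cattle = 25       # head the packer would buy in this week's cash market
contract_cattle = 75   # head already under formula contracts pegged to the cash price
bid_increase = 10.0    # hypothetical increase in the cash bid, dollars per head

extra_cost_cash = bid_increase * cash_cattle          # $250 on the cattle actually bid for
extra_cost_contract = bid_increase * contract_cattle  # $750 on cattle already under contract

print(f"Extra spend in the cash market itself: ${extra_cost_cash:,.0f}")
print(f"Extra spend on formula-priced contract cattle: ${extra_cost_contract:,.0f}")
```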

So, in addition to alleged collusion among the Big Four beef packers to lower feeder pay—estimated at an average of 7.9 percent below average pay since 2015—feeders are also squeezed by the Big Four’s network of contracts and bidding schemes that the packers can profitably impose upstream.

Conclusion: Anti-monopsony action is urgently needed to protect workers and consumers

The economic theory is clear, and mounting empirical evidence backs it up: Big Ag’s monopsony power leads to fewer jobs, lower wages, and worse working conditions across our nation’s food supply chain. And local farming communities are hurting. If U.S. Agriculture Secretary-nominee Vilsack wants to help rural communities—and reduce food prices for consumers in the bargain—he must get the economics right.

President Biden committed to strengthening antitrust enforcement to “help family farms and other small- and medium-sized farms thrive.” Before confirming Gov. Vilsack, Democratic senators must secure his public commitment to aggressively enforcing the Packers and Stockyards Act of 1921 and partnering with the U.S. Department of Justice to take on Big Ag’s buyer power. If keeping good jobs and sustainable business in rural America is the incoming administration’s priority, they can’t leave small farmers and workers to face Big Ag alone.

—Hiba Hafiz is an assistant professor of law at Boston College Law School. Nathan Miller is the Saleh Romeih associate professor at the Georgetown University McDonough School of Business.

Competitive Edge: Why noncompete clauses in employment contracts are by and large harmful to U.S. workers and the U.S. economy

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. David J. Balan has authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


""
David J. Balan

Researchers in recent years have compiled a substantial and impressive body of empirical evidence on the economic effects of noncompete clauses, which are often included in labor contracts between U.S. workers and firms. While somewhat mixed, this evidence mostly indicates that noncompetes are harmful to workers and to the U.S. economy overall.

This recent empirical evidence stands in tension with older theoretical arguments claiming that noncompetes are beneficial to both workers and firms. How that tension should be resolved depends on the strength of the arguments. If the arguments were extremely strong, then it might make sense to believe them, even in the face of substantial (but imperfect) empirical evidence to the contrary. But if the arguments in favor of noncompetes are weak, or if there are valid arguments against them, then the tension disappears and the natural conclusion is simply that noncompetes are harmful.

In this column, I argue for the latter position. Specifically, I describe and critique each of the three main theoretical arguments that are commonly made in favor of noncompetes, namely that:

  • The worker and the firm both voluntarily agree to the noncompete, which justifies a strong inference that it is mutually beneficial and economically efficient
  • Noncompetes facilitate efficient knowledge transfer from firms to workers
  • Noncompetes facilitate efficient firm-sponsored investment in worker training

Let us examine each of these arguments in turn.

Noncompetes only exist because they benefit both workers and firms

The first argument goes as follows. The fact that the noncompete was agreed to by both the worker and the firm strongly indicates that it is mutually beneficial. To be sure, all else being equal, the worker would prefer not to have a noncompete because it restricts their ability to leave the job or to use the threat of leaving to improve their bargaining position. But all else is not equal. A noncompete can only exist if the worker agrees to it, and the worker does not have to agree; they always have the option to refuse and take their next-best alternative instead.

In other words, the firm cannot impose a noncompete on the worker. Therefore, the firm can only induce the worker to accept a noncompete by offering some other contract terms that are sufficiently attractive to cause the worker to agree. That is, a noncompete will only exist if the worker has been sufficiently compensated by the firm.

The next step in this argument is that the firm will only be willing to pay that compensation to the worker if it derives an efficiency benefit from the noncompete that is at least as large as the payment. So, if a noncompete exists, the compensation must have been big enough that the worker was willing to accept it and small enough that the firm was willing to pay it—thus, it must be mutually beneficial. And if the noncompete benefits both parties, then it must also be economically efficient in the sense of increasing total economic surplus (as long as it does not harm any third parties).
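Stated with hypothetical numbers (none of which appear in the original argument), the logic runs as follows:

    # Hypothetical figures illustrating the "mutual benefit" inference.
    worker_cost_of_noncompete = 4_000    # value the worker places on keeping full mobility
    firm_benefit_of_noncompete = 10_000  # efficiency gain the firm claims from the clause
    compensation = 6_000                 # extra pay or other terms offered for signing

    worker_agrees = compensation >= worker_cost_of_noncompete    # True in this example
    firm_offers = firm_benefit_of_noncompete >= compensation     # True in this example

    # If both hold, the argument infers that total surplus rises by benefit minus cost.
    surplus_gain = firm_benefit_of_noncompete - worker_cost_of_noncompete   # 6,000
    # The whole inference rests on the premise that the worker's agreement was genuinely
    # compensated, a premise the column questions next.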

This argument depends crucially on the premise that imposing a noncompete on the worker without compensation is impossible. That is, the argument requires that the worker’s formal agreement to the noncompete provision can never be obtained unless the provision truly makes the worker better-off. This premise is rather obviously incorrect. There are, in fact, a number of ways that firms can impose noncompetes on workers without compensation. These include:

  • The firm can mislead the worker about the existence or the meaning of the noncompete.
  • The firm can wait until the worker starts the job before informing the worker of the existence of the noncompete, exploiting the worker’s reluctance to quit and restart the job search.
  • The firm can impose on the worker an interpretation of the noncompete that is more restrictive than what was originally agreed to by exploiting the power asymmetry between the worker and the firm in the ability to bear the costs of fighting in court.
  • The firm can simply refuse to deliver the promised compensation, knowing that the worker’s most powerful weapon to compel the firm to keep its promises—threatening to quit—is precisely what is deterred by the noncompete itself.

In response to the above points, it might be argued that even if it were possible for noncompetes to be imposed on workers without compensation, they would be dislodged by competition in the labor market because firms that do not require an (uncompensated) noncompete would attract workers away from ones that do. But this competitive pressure is likely to be weak, especially if noncompetes are already ubiquitous in an industry.

For a firm to succeed in attracting workers by not requiring a noncompete, it would likely have to make the absence of a noncompete a central element of its recruiting message to the exclusion of other, likely more effective messages. Moreover, if only one or a few firms did not require a noncompete, then they would tend to attract the workers who care the most about avoiding a noncompete. Those workers may be less desirable as they may be the workers most likely to quit. For these reasons, the ability of labor market competition to dislodge uncompensated noncompetes is likely to be limited.

For the above reasons, noncompetes likely can be imposed on workers without compensation. And if that is true, then the presumption that noncompetes must be mutually beneficial disappears—and with it the presumption that they are economically efficient. Rather, it becomes possible, even likely, that noncompetes are instead largely a means by which firms extract value from workers.

Commonly claimed positive effects of noncompetes

It is worth noting that the above argument does not depend on any specific claim regarding possible positive effects of noncompetes. Rather, according to that argument, the mere existence of a noncompete, and its voluntary nature, are taken to be sufficient to demonstrate that it must have large positive effects, otherwise the firm would not have been willing to pay the compensation necessary for the worker to agree to it. But, as discussed above, this argument is badly flawed, and noncompetes likely can, in fact, be imposed without compensation. This does not necessarily mean that noncompetes do not have positive effects (more on this below), but it does mean that those positive effects must be demonstrated and not merely inferred from the fact that the noncompete exists.

We now turn to the specific claims of positive effects that are commonly made in favor of noncompetes. There are two such claims. The first is that they facilitate efficient transfer of knowledge from firms to workers, and the second is that they facilitate efficient firm-funded worker training. We consider each claim in turn.

Noncompetes facilitate efficient knowledge transfer from firms to workers

The first claim is that noncompetes facilitate efficient information sharing, which, in turn, provides stronger incentives to produce valuable information. The claim is that a firm will have greater incentive to share knowledge with a worker, and even to generate new knowledge in the first place, if the knowledge is protected by a noncompete to prevent the worker from taking that information to a new firm. But there are a number of reasons to doubt this benefit is large, including:

  • Much information sharing will occur with or without a noncompete simply because it is impossible to operate the business any other way. The efficiency benefit is only the increment of information sharing that is induced by the noncompete (that would not have occurred otherwise), and that increment may not be large.
  • Noncompetes impede the efficient flow of information across firms. The experience of California, which does not enforce noncompetes but is a world-leading center of innovation, suggests that the benefits of this cross-fertilization of knowledge may be so large that impeding it with noncompetes is harmful to innovation, on balance. At a minimum, it strongly suggests that any innovation benefits from noncompetes are not very large.
  • By the same logic that the noncompete increases the firm’s incentive to generate new knowledge, it decreases the worker’s incentive to do so. The fact that a worker who creates new knowledge cannot use that knowledge to make themselves more attractive to outside employers reduces the incentive to create the knowledge in the first place.
  • Noncompetes also impede the efficient flow of people across firms. Some job matches are inefficient, and noncompetes impede them from being dissolved in favor of more efficient ones.
  • To the extent that noncompetes do facilitate efficient information sharing, those benefits can often be achieved through other, less restrictive means, including nondisclosure and nonsolicitation agreements.

Noncompetes encourage efficient firm-sponsored investment in worker training

The second claim of positive effects of noncompetes is that they facilitate efficient firm-funded worker training. The idea is that a firm will have a greater incentive to train the worker if a noncompete prevents the worker from taking that training to a new firm. But there are a number of reasons to doubt that this benefit is large, namely:

  • Some training will occur with or without the noncompete simply because it is impossible to operate the business any other way. Once again, it is only the increment of training that is induced by the noncompete that matters (that would not have occurred otherwise), and that increment may not be very large.
  • A noncompete does remove a barrier to firm-funded training because the firm no longer has to worry that the worker will use that training to get a better job offer. That is, with a noncompete, the firm can capture the benefit of the training and so is more willing to pay its cost. But standard economic theory indicates that, in a competitive labor market, training whose benefits exceed its costs will occur regardless. With a noncompete, the firm pays the cost and receives the benefit; without one, the worker pays the cost (through formal schooling and/or lower wages early in a career) and receives the benefit, as the simple sketch after this list illustrates.
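Here is a minimal sketch of that textbook point, using invented figures; it shows only that when the training’s benefit exceeds its cost, someone funds it with or without a noncompete:

    # Hypothetical training investment (figures are illustrative only).
    training_cost = 5_000       # cost of providing the training
    productivity_gain = 8_000   # added value of the trained worker over the relevant horizon

    if productivity_gain > training_cost:
        # With a noncompete: the firm pays and, because the worker cannot take the
        # skills to a rival, the firm captures the gain.
        firm_net_gain = productivity_gain - training_cost     # 3,000 accrues to the firm
        # Without a noncompete, in a competitive labor market: the worker effectively
        # pays, through tuition or lower early-career wages, and later earns the gain.
        worker_net_gain = productivity_gain - training_cost   # 3,000 accrues to the worker
        # Either way the efficient training happens; the clause mainly shifts who pays
        # and who collects, not whether the investment occurs.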

Appropriate policy response

If noncompetes are, in fact, largely a means for firms to extract value from workers, the question becomes what the appropriate policy response might be. In a companion article, I argue that noncompetes can reasonably be viewed as a problem appropriately dealt with in the context of the antitrust laws. In another article, FTC Commissioner Rohit Chopra and researcher Lina Khan argue that this problem can be addressed using the FTC’s rulemaking authority.

Conclusion

The empirical evidence, combined with the weakness of the arguments in favor of noncompete contracts and the existence of strong arguments against them, suggests that noncompetes are harmful, on balance. This harm may extend beyond the measures that economists normally consider, such as effects on job mobility, entrepreneurship, worker training, innovation, and wages. Rather, it is likely that noncompetes have other, even worse, effects. By making it more difficult to leave a job, noncompetes increase worker vulnerability to nonmonetary harms such as abuse and degradation. A predatory employer or manager who knows that the worker cannot leave will be more unrestrained in their predation.

Aside from all measurable harms, the ability of human beings to take their body and their labor where they choose is a fundamental human right. Perhaps some extremely strong economic efficiency benefits would be sufficient to outweigh this, but both the evidence and the theoretical arguments indicate that such benefits do not exist.

—David J. Balan is an employee of the Federal Trade Commission. The views expressed in this column are solely those of the author.

Competitive Edge: Remedying monopoly violation by social networks—the role of interoperability and rulemaking

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Michael Kades and Fiona Scott Morton have authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


All eyes are laser-focused on competition in digital technology platforms such as Amazon.com Inc.’s Marketplace, Apple Inc.’s App Store, Facebook Inc.’s eponymous social network, and the search engine operated by Alphabet Inc.’s Google unit. Congress, the Federal Trade Commission, the U.S. Department of Justice, and various state attorneys general are investigating their conduct, and, if press reports are to be believed, both Google and Facebook could soon find themselves as defendants in major monopolization cases. By way of comparison, the previous major monopolization case, United States v. Microsoft, was filed in 1998, when “You’ve got mail” and the static noise of a dial-up connection were common.

It is, however, past time to ask not only whether these technology giants are violating the antitrust laws but also how to address such violations if they have occurred. Even in the most successful monopoly prosecutions, such as the antitrust cases against AT&T Inc. in the 1980s and against Microsoft Corp. in the 1990s, the courts struggled to develop and implement effective remedies, with varying degrees of success. Discussing remedy before there is a case may seem like putting the cart before the horse—but think of it as designing the cart before deciding what horses to use.

Today, we have posted a working paper that proposes a remedy for one type of digital platform: a social network such as Facebook. Our remedy proposal relies on five principles, summarized here and discussed in more detail below:

  • Social networks, like most digital platforms, have large “network effects.” We discuss this concept in detail below, but the basic idea is that like the telephone system and email, the more people on the same network, the more useful it is to its users. Those network effects create entry barriers, which make it easier for anticompetitive conduct to successfully create and protect monopoly power.
  • Unless a remedy addresses the entry barriers created by these network effects, it will likely fail to fully restore competition or prevent future violations.
  • Interoperability refers to the way phones from Verizon Communications Inc., AT&T, and other companies can connect with each other, or users of Gmail and Hotmail can write to each other. In the case of a social network, interoperability would enable users on different social networks to seamlessly connect with each other, which is why interoperability is likely to be critical, although not sufficient, to address harms caused by an antitrust violation.
  • Implementing interoperability poses challenges for the litigation process. It requires the creation of a technical committee to address the technical details. That committee must not be subject to manipulation by the dominant players. Policing compliance with the remedy must be efficient. And substantial penalties are needed to deter violations of the remedy order.
  • The Federal Trade Commission could use its rulemaking authority, outside of any particular litigation, to develop a default interoperability order that could increase the workability and effectiveness of any future interoperability requirement.

Digital platforms are under scrutiny

On Capitol Hill, the Senate Judiciary Committee just held a hearing on Google and online advertising. The House Judiciary Committee will release its report on digital platforms shortly. Jason Furman, a professor of the practice of economic policy at the Harvard Kennedy School and a member of Equitable Growth’s Steering Committee, outlined the role of network effects in competition in digital markets in testimony before Congress (available as a Competitive Edge), and Equitable Growth has also summarized the broader research.

A network effect means a digital platform’s value to users increases as the number of users increases. Take Facebook as an example. As the number of users on Facebook increases overall, any individual will need to be on Facebook to communicate with her friends or family; conversely, no one wants to be on a social network if none of their friends or family use it. Similarly, advertising on Facebook becomes more valuable the bigger Facebook’s user base grows, the longer users are on Facebook, and the more Facebook can help target the ads to those who will most likely respond to them, which is a function of the first two benefits of size.

In turn, this network effect can lead to a winner-take-all (or winner-take-most) dynamic, also known as tipping. When one social network gains an edge in its number of users, either legitimately or through exclusionary conduct, that advantage attracts even more users. The social network may become dominant and earn monopoly returns. Ultimately, the network effect creates an entry barrier: few will join a new social network until their friends, families, and neighbors do.
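A toy simulation, with an assumed value function and made-up starting shares, illustrates how a modest initial edge can snowball into tipping:

    # Toy tipping dynamic for two hypothetical networks, A and B. Each period, new
    # users gravitate toward the network whose value (assumed to rise more than
    # proportionally with its user share) is higher. Parameters are invented.
    share_a, share_b = 0.55, 0.45   # network A starts with a modest edge
    for period in range(10):
        value_a = share_a ** 2      # assumed value function: more users, much more value
        value_b = share_b ** 2
        share_a = value_a / (value_a + value_b)
        share_b = 1 - share_a

    print(round(share_a, 3))        # approximately 1.0: the market has tipped to A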

Neither these entry barriers nor tipping is insurmountable for a new competitor, but both make it easier to monopolize a market. In a market subject to tipping (even if it is not permanent), the value of excluding a competitor is greater because the prize is bigger. If entry barriers are high, any potential competitor’s chance of success is low. As a result, a social network may be able to inexpensively acquire nascent or potential competitors before they pose a threat to the network’s dominance.

A successful remedy will reduce entry barriers created by network effects

If this type of digital platform has violated antitrust laws, it has engaged in anticompetitive conduct that relies on and exploits the network effect and the entry barriers it creates. Absent intervention, the dominant platform will continue to benefit from its conduct; entry remains unlikely and difficult. A divested network can compete using its existing installed base of users, and this will create choice for users—provided their friends move with them. So long as the network effect remains, however, the dominant firm continues to have the same incentives to adopt new and different exclusionary conduct to protect its monopoly. For a remedy to be fully effective, it needs to reduce the network effect and the entry barriers it creates.

Network effects manifest themselves across different types of digital platforms: social networks, online marketplaces, app stores, and online advertising. But they can operate differently in each setting. Network effects can be direct or indirect; platforms can have multiple sides. The effects may be asymmetric, and some may be strong and others weak. A remedy that addresses network effects present in a social network market may be meaningless in addressing network effects in an online marketplace. We use Facebook to explore addressing network effects as a remedy for a monopolization violation involving a social network.

Based on allegations currently being made, assume that Facebook acquired a series of nascent or potential competitors to eliminate them as competitive threats; that it cut off companies’ access to Facebook when they could pose a competitive threat; and that those actions violate the antitrust laws as illegal monopolization. How would one remedy the violation? (Our working paper and this column do not comment on the merits of these allegations.)

Certainly, a court could forbid Facebook from repeating the illegal act and similar acts. Facebook could face fines or have to give up its profits from violating the law. But we are doubtful that those remedies alone would recreate the lost competition and thereby give consumers the competition they were earlier denied. Conduct prohibitions are likely to create an expensive whack-a-mole game, with the government and the dominant firm arguing over both the impact of every new strategy and whether it counts as “similar” to what violated the law.

A more substantial remedy would break up a social network into separate parts and provide real benefits by setting the stage for robust competition. A remedy, for example, could require Facebook to divest its Instagram photo- and video-sharing unit and its messaging unit, WhatsApp. Divestiture would significantly benefit users post-break-up, as the divested components would compete with each other to attract users. Each network would innovate and provide better service to win an advantage in the number of users. The competition would likely be fierce. But without additional remedies, the market would likely tip again to one of the competitors, creating another monopoly. The winning social network would then have both the incentive and the ability to engage in exclusionary acts to prevent future threats to its newly established or re-established market dominance.

Interoperability has the potential to lower entry barriers

Requiring interoperability can neutralize or significantly reduce the network effect that the incumbent employed to create and protect its monopoly. By interoperability, we mean that users on other or new social networks should be able to friend Facebook users and vice versa. Posts should flow from a Facebook user to her friend on a new network in much the same way email can be sent and received regardless of whether both parties use Gmail, or phone calls connect people regardless of their carriers. 

Interoperability reduces the barriers to entry created by network effects. Let’s say, for example, that one of the divested Facebook companies begins to lose users. It radically changes its business model from advertising-supported to a subscription-based business model and promotes the resulting high-quality user interface. It hopes to attract users because it has no advertising and strong privacy protections. Without interoperability, a user who prefers the subscription model and leaves Facebook to join it will lose contact with all her friends on Facebook and perhaps institutions there, such as her child’s school. Such costs might deter her from joining her preferred network. With interoperability, by contrast, she receives school forms and news of family vacations and college reunions that are sent to her through her new network. In short, with interoperability, each person can choose the network they prefer while staying in touch with their social circles. The network effect ceases to be an entry barrier.

In this world, entering social networks could compete on features outside the standard, such as their user interface, policies concerning news or offensive content, and privacy policies. Consumers could change social networks like they change wireless carriers, without losing the ability to stay in touch with their contacts. The need to compete for consumers on the basis of service quality, such as the amount of advertising and how it is targeted, rather than relying on network effects to keep users, would intensify competition among social networks to the benefit of consumers.

Interoperability could be ordered in addition to other relief, such as a divestiture, and could be complementary to it or stand on its own. It could be an appropriate remedy in any situation in which the dominant social networking firm has exploited network effects by violating antitrust laws. In today’s internet-based network markets, interoperability carries no incremental costs such as the dedicated wires and machines that were required for telecom interoperability in past decades. It requires the establishment of an open standard to exchange commonly used functionalities, such as text, calendars, and images between and among competing social networks.
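To make the idea of an open standard concrete, here is one purely hypothetical sketch of the kind of cross-network message such a standard might define; the field names are invented for illustration and are not drawn from the working paper or any existing protocol, and a real standard would also have to specify identity, authorization, and privacy safeguards:

    # Hypothetical cross-network payload; structure and names are invented for this sketch.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class InteropPost:
        sender: str                # globally addressable user ID, e.g. "alice@network-a.example"
        recipients: List[str]      # friends who may be on competing networks
        content_type: str          # "text", "image", "calendar", and so on
        body: str                  # the shared content itself
        attachments: List[str] = field(default_factory=list)  # links to media hosted by the origin network

    # A post composed on one network could then be delivered to a friend on a rival
    # network, much as email crosses providers today.
    post = InteropPost(
        sender="alice@network-a.example",
        recipients=["bob@network-b.example"],
        content_type="text",
        body="School forms for next week are attached.",
    )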

The challenges of implementing interoperability as a remedy

Although interoperability as a concept is straightforward, effectively implementing it raises challenges. In our working paper, we look back at both the AT&T break-up order, where interoperability was effective, and the remedial order in United States v. Microsoft, where those provisions had little impact. From those cases, we suggest several operational principles.

Substantively, the remedy must establish the technical capability for users to communicate across platforms, balance the needs of multiple actors, promote entry, and enhance the user experience, including protecting privacy. Importantly, the remedy order must prevent the offending, dominant social network (or its divested parts) from manipulating the process. This requires that the remedy include provisions that will deter the defendant from violating the order, require standards that many entrants can meet, and not favor large incumbents.

The remedy also must establish a process for determining whether the defendant has violated the order. That process must be fast enough to provide relief to a harmed competitor before that firm fails, and the penalties must be significant enough that the dominant social network will be worse-off for having violated the remedy order.

From a process perspective, creating a technical committee overseen by an antitrust enforcer is the most promising option to solve these implementation challenges. Judge Harold Greene used a similar procedure in the AT&T break-up, and Judge Colleen Kollar-Kotelly relied on a technical committee in Microsoft. Such a committee would include representatives of all relevant industry segments, but the antitrust enforcer engaged in policing the remedy would control the decision-making process to prevent capture by the dominant social network (or its divested parts).

FTC rulemaking can improve the remedy process

The final element of our proposal is that the Federal Trade Commission should use its rulemaking authority to develop a default order for interoperability. Rulemaking provides a number of advantages in laying the groundwork for a successful remedy. A default order derived through rulemaking could identify basic principles to apply in monopolization cases involving strong network effects, and the commission could issue separate rules on remedies for different types of digital platforms.

In an administrative adjudication, where the Federal Trade Commissioners are the judges, the default order would be a mandatory starting point for a remedy. In cases brought in federal court by the Justice Department’s Antitrust Division, the states, or the Federal Trade Commission (the FTC can either bring cases internally, where it acts as a decisionmaker, or in federal court, where it is the plaintiff), courts would not be required to rely on the default order but would be free to do so.

In any individual case, the decision-maker could adjust the terms as necessary to fit the particular situation, but the default order would save time and effort. The default order would also help focus litigation on the issues in dispute. Parties could appeal any of the decisions we describe to the courts. Given the existence of a carefully crafted, robust order, however, those appeals would likely be less frequent and less burdensome than if a court had to resolve every issue from scratch.

Conclusion

The debate over whether any digital platform violates antitrust laws will continue in the press, in the halls of Congress, and, probably, in courtrooms across the country. Antitrust policymakers need not—and should not—wait for a liability determination before considering remedies they can apply today, using current law and existing institutions. Our working paper contributes to that remedy discussion and makes the case that addressing entry barriers is a necessary, though not necessarily sufficient, goal of a successful remedy.

—Fiona M. Scott Morton is the Theodore Nierenberg Professor of Economics at the Yale University School of Management. She consults on antitrust issues for a range of corporations, including Apple Inc. and Amazon.com Inc. Michael Kades is the director for markets and competition policy at the Washington Center for Equitable Growth. He does no outside consulting.

Competitive Edge: The future of vertical mergers and the thing called ‘EDM’

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Jonathan Sallet has authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


Jonathan Sallet

The previous time federal antitrust agencies issued formal merger guidelines dealing with vertical mergers was in 1984. They and the entire antitrust community have learned a lot in the past 36 years.

Economic analysis of vertical mergers has evolved and been refined, and the thinking at the U.S. Department of Justice’s Antitrust Division and the Federal Trade Commission has similarly advanced. During that time, in a series of consent orders, antitrust enforcers carefully explained how vertical mergers can harm competition in industries ranging from agriculture to aerospace to energy to the internet and telecommunications. In 2019, for example, the Federal Trade Commission confronted vertical merger issues in its decisions in two merger cases: office supply firm Staples Inc.’s acquisition of Essendant Inc. and UnitedHealth Group’s acquisition of DaVita Medical Group. The UnitedHealth Group/DaVita merger also made history when, for the first time, the Colorado attorney general entered into a separate consent decree in a vertical merger to protect competition in that state.

In January, the Federal Trade Commission and the Department of Justice’s Antitrust Division issued draft Vertical Merger Guidelines to capture their learnings and experience. Importantly, these guidelines paid no mind to the often-claimed and ill-supported notion that vertical mergers are inherently procompetitive. Instead, the two agencies explained carefully that Section 7 of the Clayton Act, which bars mergers “the effect of which may be substantially to lessen competition,” applies just as much to vertical as to other mergers.

Yet, an issue that is, even by antitrust standards, relatively arcane threatens to undo that progress and resurrect the notion that vertical mergers are presumptively procompetitive. The antitrust agencies need to reject that notion—in whatever form it appears.

The issue is called EDM. This is not an acronym for electronic dance music or even a garbled reference to a ‘90s rock band. EDM stands for “elimination of double marginalization,” which sounds harder than it is.

Consider a supplier of television programming and a cable system. The TV programmer makes a profit (which is to say, a margin) on its sales to the cable system, and the cable system makes a profit (another margin) by selling Pay TV packages to consumers. If the price to consumers of the Pay TV package were lowered, then the cable system might sell more and make more, and so, too, might the TV programmer. So, the TV programmer might agree to lower its price to the cable system, allowing the two firms to share the burden of thinner margins while also sharing the extra revenue that comes from selling cooperatively to more consumers at a lower price.

But let’s say, for some reason, the TV programmer and the cable system can’t reach a deal. Then, if the cable system acquires the TV programmer, the merged firm could implement the same strategy by eliminating the margin on the sale of television programming and selling the Pay TV package to consumers at a lower price. This, simply put, is elimination of double marginalization. If EDM actually occurs, then there’s no doubt it’s a competitive benefit that can be weighed against the risk of anticompetitive harm.
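A back-of-the-envelope example, with margins invented for illustration, shows the arithmetic:

    # Hypothetical double-marginalization arithmetic (all figures invented).
    programming_cost = 10.0    # TV programmer's cost per subscriber
    programmer_price = 16.0    # price charged to the cable system, so a margin of 6
    cable_markup = 8.0         # margin the cable system adds on top of its input cost

    # Separate firms: each adds its own margin.
    retail_price_separate = programmer_price + cable_markup   # 24 per subscriber

    # Merged firm: the internal transfer happens at cost, so only one margin remains.
    retail_price_merged = programming_cost + cable_markup     # 18 per subscriber

    # Consumers benefit only if the merged firm actually passes the eliminated margin
    # through to the retail price; whether and how much that happens is the
    # fact-specific question the rest of this column addresses.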

That’s what merger analysis does. It looks at the likelihood of competitive harm and then considers whether there are offsetting competitive benefits that negate the risk of harm. These kinds of competitive benefits typically include so-called efficiencies, such as the ability of the merged company to produce more with less.

But now there is doubt whether the agencies will treat EDM the same as other claimed benefits. That’s because the agencies organized the draft Vertical Merger Guidelines in a way that separates EDM from efficiencies. And some comments to the agencies argue that EDM should simply be assumed, meaning that some such margin reduction will be presumed to occur. Assuming there will be a reduction in margins is effectively a presumption in favor of EDM.

A presumption that EDM exists would vastly complicate the ability of federal and state antitrust enforcers to challenge vertical mergers and would lead to underenforcement. That’s because a judge might start by presuming that competitive benefits exist and then would assess whether there’s a risk of competitive harm. The government, in other words, could start off behind.

For five separate reasons, that’s wrong, and the antitrust agencies need to clarify their final guidelines to make plain that claims of EDM are to be presented and assessed the same as any other claimed procompetitive benefit. These five reasons are:

  • Vertical Merger Guidelines will be important to future litigation of vertical merger challenges.
  • The Department of Justice has spoken plainly that merging parties must support and quantify EDM as a defense.
  • The Department of Justice’s view is amply supported by antitrust statute and precedent.
  • Merging parties have the information and incentive to develop facts about their own internal operations, including EDM.
  • Economic models and analysis do not support a contrary conclusion.

Let’s consider each in turn.

Vertical Merger Guidelines will be important to future litigation of vertical merger challenges

As a technical matter, the Vertical Merger Guidelines will apply only to agency investigations, but the federal courts are sure to look to them for guidance. That has been the case with horizontal mergers, which have been litigated with some frequency, and the guidance supplied to courts will be even more important for vertical mergers, where litigated challenges have been exceedingly rare. That means any implication of differential treatment of EDM could adversely affect federal and state enforcers’ ability to successfully challenge vertical mergers. It also could push federal and state enforcers toward a more permissive stance toward the vertical mergers that come in the door.

The Department of Justice has spoken plainly that merging parties must support and quantify EDM as a defense

The implication that EDM is a different kind of antitrust animal is demonstrably wrong. The Department of Justice has made plain that if merging parties “want credit for EDM, then they have to do the work, and have the evidence necessary to support it.” Assistant Attorney General Makan Delrahim explained last year very forthrightly that “[o]ur approach at the Antitrust Division is this: as the law requires for the advancement of any affirmative defense, the burden is on the parties in a vertical merger to put forward evidence to support and quantify EDM as a defense.” The Antitrust Division at the Justice Department took the same position in the AT&T Inc./Time Warner Inc. litigation when it discussed “efficiencies—such as the elimination of double marginalization.”

The Department of Justice’s view is amply supported by antitrust statute and precedent

The U.S. Congress designed Section 7 of the Clayton Act to stop anticompetitive conduct before it harms consumers. As the D.C. Circuit Court said in its AT&T/Time Warner opinion, “Congress acted out of concern with ‘probabilities, not certainties’ and charged the courts with ‘halting incipient monopolies.’” That’s why the government wins if it establishes a “reasonable probability” of harm to competition because it doesn’t have to eliminate every possibility to meet its burden. As Assistant Attorney General Delrahim said, “the Antitrust Division is not required to present, in its case-in-chief, evidence rebutting or anticipating the defendants’ affirmative claim that EDM will cause a price decrease.”

Merging parties have the information and incentive to develop facts about their own internal operations, including EDM

The burden of demonstrating EDM belongs on the merging parties because it’s the merging parties that have the information and incentive to develop facts about their own internal operations. The comments of 28 state attorneys general on the draft Vertical Merger Guidelines explain in detail that the merging firms have superior knowledge on topics such as what margins have existed, past actions (such as contracts) that have been considered or tried, the incentives and opportunities to reduce margins in the future, and, of course, how their plans will change as they plan the operations of the merged company. All of this goes to merger specificity, according to the 2010 Horizontal Merger Guidelines issued jointly by the Department of Justice and the Federal Trade Commission: “The Agencies credit only those efficiencies likely to be accomplished with the proposed merger and unlikely to be accomplished in the absence of either the proposed merger or another means having comparable anticompetitive effects.”

For any inquiry into the elimination of double marginalization, facts are critical. As the Department of Justice has said, “it is impossible to tell at first blush whether a vertical merger will eliminate double marginalization and, if it does, how large a savings that would create for consumers.” EDM might be small or nonexistent; indeed, Steven Salop at Georgetown Law School presented eight separate reasons to support that conclusion at the Department of Justice’s Workshop on Vertical Mergers held on March 11, 2020.

So, for example, as the draft Vertical Merger Guidelines expressly note, the downstream affiliate may not be able to efficiently use the upstream affiliate’s product. Or the new firm may instruct its upstream and downstream affiliates to operate independently. Or the upstream affiliate, which would give up its margin for the good of the new firm, may have limited capacity, which means that it could not produce more units even if its downstream affiliate cuts its retail price. The upshot: Without the ability to produce more units, there would be no incentive to lower retail prices.

And what if the price reduction reduces input sales that the upstream affiliate would have made to other downstream competitors? In that case, the lost upstream profits cut against the gains from lowering the retail price, and the merged firm might prefer to keep prices high, or even raise them, rather than pass on the elimination of double margins.

Thus, “EDM must be shown to be merger specific to be credited,” which is “a factual question that must be assessed on a case-by-case basis.” For example, in the successful union of Comcast Corp. with NBC Universal, the Department of Justice procured a consent decree limiting the action of the merged firm after considering an EDM defense, but concluding that “[d]ocuments, data, and testimony obtained from Defendants and third parties demonstrate that much, if not all, of any potential double marginalization is reduced, if not completely eliminated, through the course of contract negotiations.”

These are all fact-specific questions that the merging parties are best positioned to answer. They have better access to facts and a healthy incentive to find them. By contrast, according to the Department of Justice, if antitrust enforcers “bore the burden on efficiencies, ‘the efficiencies defense might well swallow the whole of Section 7 of the Clayton Act because management would be able to present large efficiencies based on its own judgment and the Court would be hard pressed to find otherwise.’” Moreover, if the government had the burden of disproving EDM, companies might have the incentive to not cooperate.

Economic models and analysis do not support a contrary conclusion

I recognize the contention that the existence of EDM may be linked to the existence of an incentive to harm competition (specifically, by raising rivals’ costs). But in their Vertical Merger Guidelines comments, professors Jonathan Baker at American University Washington College of Law, Salop at Georgetown Law, Nancy Rose at the Massachusetts Institute of Technology, and Fiona Scott Morton at Yale University tell us that “the economics literature does not support the proposition that there is a reliable relationship between EDM and raising rivals’ costs.” Salop specifically has explained that not all models treat EDM and raising rivals’ costs as linked. And Scott Morton, with Marissa Beck at Charles River Associates, submitted comments in which they carefully reviewed past studies and concluded that vertical mergers are not generally procompetitive and that “the effects of a vertical merger will depend on the specifics of the transaction and markets at issue.”

Conclusion

In sum, as professor Martin Gaynor at Carnegie Mellon University has told the agencies, “EDM is not, in general, a necessary consequence of vertical (non-horizontal) integration.” All of this is why, as Carl Shapiro, the government’s testifying expert in the AT&T/Time Warner case, said in his comments, “EDM, like all other efficiencies, must be shown to be cognizable before it can be credited.”

That’s correct. EDM should be treated like any other claim of competitive benefits arising from a merger, through the analysis traditionally applicable to efficiencies. And that is what, in accordance with the Department of Justice’s repeated views, the final Vertical Merger Guidelines should say.

—Jonathan Sallet is a senior fellow at the Benton Institute for Broadband & Society. He also assists the antitrust work of the Colorado attorney general’s office as a special assistant attorney general. The views expressed here are his own.

Competitive Edge: The states’ view of vertical merger guidelines in U.S. antitrust enforcement

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Phil Weiser has authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


Phil Weiser

Colorado and 26 other states filed comments today with the Antitrust Division of the U.S. Department of Justice and the Federal Trade Commission on their proposed Vertical Merger Guidelines. These two federal antitrust enforcement agencies are commendably working to update an outdated set of guidelines from 1984 that downplay the risks posed by vertical mergers—mergers between companies at different levels of the chain of production, distribution, or marketing of products or services. In our comments, we highlight areas of competitive harm that warrant attention, areas for improvement in these guidelines, and a suggested added focus on remedies.

In so doing, as I have written about previously, we emphasize that vertical integration is not always benign and indeed has the potential to create significant anticompetitive harms. In this post, I summarize and highlight some of the points made in our comments.

First off, it is important to understand that vertical mergers of, say, a wholesale distributor and a retail outlet, have the potential to harm competition and hurt consumers just like horizontal mergers—mergers between two rivals—do. Indeed, in some cases, a vertical merger may remove the most likely potential rival to an incumbent firm.

Consider, for example, the case of Live Nation Entertainment Inc.’s merger with Ticketmaster in 2010. In that case, Live Nation’s concert promotion and venue business prepared Live Nation to enter the ticketing platform business, but the merger with Ticketmaster undermined that nascent competition. Indeed, Live Nation had already begun that entry before the merger. More generally, a vertically related firm in one market (say, wholesale distribution) might be the natural entity to sponsor entry against a dominant firm in a related market (say, retail sales), and that potential sponsorship can be undermined by a merger between the dominant firm and the vertically related one. That is particularly true in evolving or fast-growing sectors such as technology markets.

Second, it is important to recognize how vertical mergers, once completed, can be used to undermine existing rivals or raise entry barriers that make future entry materially more difficult. Colorado, like other states, has addressed such dangers. In June 2019, Colorado took action to prevent anticompetitive harms from occurring as a result of the merger of DaVita Inc. and UnitedHealth Group Inc. In that case, UnitedHealth, a health insurer, was facing an upstart rival, Humana Inc., which undermined UnitedHealth’s once-dominant position, causing its share of the Colorado Springs Medicare Advantage market to drop from around 75 percent to around 50 percent. Humana’s growth in this market reflected, by its own account, its strong relationship with DaVita’s physician clinics in the area.

The acquisition would give UnitedHealth control over DaVita and raised the threat of customer foreclosure by limiting rival insurers’ access to the relevant patient population. UnitedHealth already had an exclusive arrangement with Centura Health, another clinical network. As a result of this merger, UnitedHealth would have the incentive and ability to increase the cost of DaVita’s services to UnitedHealth’s rival insurers, or even to withhold those services altogether, leading Medicare Advantage rates to go up and/or quality to decrease. To prevent this harm, our state office required that the DaVita contract be extended and that UnitedHealth end its exclusive contractual arrangement with Centura Health. By doing so, the remedy protected Humana’s access to the doctors who can enable access to Medicare Advantage customers.

The DaVita/UnitedHealth case is one of many examples of how vertical mergers can be consummated for the purpose and effect of excluding rivals or raising their costs. In some cases, such as this one, the impact relates to access to customers. In other cases, the merger impacts access to critical inputs (termed by economists as “input foreclosure”). Consider, for example, the merger of Comcast Corp. with NBCUniversal, in which the Department of Justice in 2011 was rightly concerned that Comcast’s control of NBCU would lead Comcast to limit the ability of rival upstart online distribution platform companies to access NBC content, a critical input into their own offerings.

Similarly, the proposed 1998 merger between Ingram, a leading book distributor, and Barnes & Noble, then the dominant book retailer, threatened to entrench Barnes & Noble’s dominant position. At the time of the merger, Ingram was Amazon.com Inc.’s leading supplier, filling more than 58 percent of its orders. Amazon.com and independent booksellers raised a series of concerns as to how the merger could harm competition in retail book sales, including on issues such as “credit, speed of delivery, and access to popular titles.” In the face of opposition by the Federal Trade Commission, the merger was abandoned.

In addition to highlighting the prospect of competitive harms on account of vertical mergers and discussing the types of evidence that could be collected to challenge such mergers, the comments submitted today to the two federal antitrust enforcement agencies by the state attorneys general also discussed the use of merger remedies. Many vertical mergers offer the opportunity to impose a conduct remedy that can allow the merger to go forward while blocking anticompetitive outcomes.

In the UnitedHealth/DaVita merger, for example, Colorado’s remedy did just that, extending an existing contract and banning an exclusive contracting arrangement that bolstered UnitedHealth’s market position. In other cases, however, such remedies may well be impractical or difficult to administer in practice. Where monitoring and administration of a decree may be difficult, the better course is to simply prevent the merger from taking place—as happened in the blocked merger of Barnes & Noble/Ingram—rather than attempting to implement a conduct remedy that is prone to abuse and may well prove ineffective.

As captured in our comments, state attorneys general play an important role in protecting their citizens from anticompetitive consolidation. By sharing our experience, we are doing our part to help ensure that the federal merger guidelines are effective in protecting competition.

—Phil Weiser is Colorado Attorney General, sworn in as the State’s 39th Attorney General on January 8, 2019.

Competitive Edge: Underestimating the cost of underenforcing U.S. antitrust laws

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Michael Kades has authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


Michael Kades

For much of the past three-and-a-half decades, courts across the United States increasingly accepted that strict antitrust rules present far greater dangers than lenient rules. According to this theory, overly strict antitrust rules limit business conduct in two ways: Conduct that is beneficial is wrongly condemned (what is known as a false positive), and the rule deters companies from undertaking procompetitive actions. Further, once an overly strict legal rule is enshrined in precedent, it is difficult to change and has a long-lasting harmful effect. In contrast, overly lenient antitrust rules allow anticompetitive conduct to go unpunished (what is known as a false negative), but market forces, the theory goes, will correct those problems more quickly than it takes to overturn precedent.

Courts have relied on concern about false positives to limit rules regarding refusals to deal, predatory pricing, and proof of conspiracy, as well as to increase the procedural requirements placed on plaintiffs. This policy, however, has no basis in theory and little empirical support. The evidence that does exist suggests underenforcement is costly. A fuller discussion of this issue (called error-cost analysis) appears in section 2 of the Washington Center for Equitable Growth’s comments on the Federal Trade Commission’s Hearings on Competition and Consumer Protection in the 21st Century.

Recent experience with U.S. antitrust rules and pharmaceutical patent settlements provides further evidence that overly lenient rules are costly, and fears of overly stringent rules can be overstated. Between 2005 and 2013, federal courts adopted a lenient rule that allowed patent holders to pay alleged infringers to stay off the market until the patent expired, fearing the dangers of overenforcement and discounting the dangers of this type of settlement. In 2013, the Supreme Court rejected this approach and subjected settlements to antitrust scrutiny in its Federal Trade Commission v. Actavis Inc. decision. Post-Actavis, some scholars argue that the decision will lead to false negatives in some cases, but others conclude false negatives are very unlikely.

Based on the history of reverse-payment settlements, antitrust rules matter. After the courts adopted the lenient rule, the number of settlements with substantial payments increased dramatically, from zero in fiscal year 2004 to a high of 34 in fiscal year 2012. Those deals increased prescription drug costs by $63 billion. After 2013, the problematic deals virtually disappeared, and there is no evidence that this stricter rule prevented settlements, limited innovation, or suppressed patent challenges—the primary policy concerns relied upon by the courts that adopted the scope-of-the-patent test.

Background on pharmaceutical patent settlements

Competition from low-cost generics is one of the few proven ways to control prescription drug prices. Often, that competition depends on the outcome of patent litigation between the firm that owns the branded drug and the one planning to sell a generic alternative over whether the branded firm’s patents are valid and whether the generic product infringes on those patents. Beginning in the 1990s, branded and generic companies found a new way to settle patent litigation. A branded company would allege that its potential generic competitor’s product infringed on the branded company’s patent, yet the branded company would pay the generic to stay off the market for a period of time, which is known as a pay-for-delay, or reverse-payment, patent settlement.

The anticompetitive threat is straightforward (a fuller discussion can be found here). The generic company is a potential competitor (whether the branded company’s patent blocks competition is uncertain), and it receives a payment to accept the branded company’s proposed entry date. The agreement eliminates potential competition and protects the branded company’s monopoly. In turn, the payment compensates the generic company for accepting a later entry date, which delays competition. Consumers are worse off because, in an expected sense, they wait longer for competition and pay higher prices for the product.

The combination of the payment and the restriction on the entry of a new generic drug into the market creates the competition concern. A procompetitive settlement reflects the strength of the patent and occurs when the generic company and the branded company settle a patent suit by splitting the remaining time on the patent. If the patent expires in 10 years, for example, then the generic company might receive a license to the patent in 6 years, guaranteeing 4 years of competition. Without a payment, the settlement simply reflects the parties’ estimates of the strength of the patent.

In contrast, a firm that owns the branded drug will pay the generic competitor to accept an entry date only if the settlement delays generic competition beyond what the patent’s strength warrants, and the generic firm will accept that later entry date only if it is compensated. So, in the example above, the branded company would pay the generic firm only if the generic firm agreed to delay entry beyond 6 years, and the generic firm would agree to that longer delay only if it were paid.
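To make this logic concrete, here is a minimal sketch in Python of the expected-entry reasoning described above. The 60 percent patent strength and the year-8 paid settlement are hypothetical numbers chosen to match the 10-year example; they are illustrations, not figures from any actual case.

```python
# Illustrative sketch of the settlement logic described above. The 60 percent
# patent strength and the year-8 paid settlement are hypothetical numbers,
# chosen to fit the article's 10-year example; they are not from any case.

def expected_entry_under_litigation(years_to_expiration: float, p_patent_upheld: float) -> float:
    """If the patent is upheld, generic entry waits until expiration;
    if it is invalidated, entry occurs immediately (year 0)."""
    return p_patent_upheld * years_to_expiration

years_left = 10.0        # years remaining on the patent
patent_strength = 0.6    # assumed probability the patent survives litigation

neutral_entry = expected_entry_under_litigation(years_left, patent_strength)
print(f"Expected entry via litigation: year {neutral_entry:.1f}")  # year 6.0

# A settlement without a payment near year 6 simply mirrors patent strength.
# A settlement at, say, year 8 that is accompanied by a payment delays entry
# about 2 years beyond what the patent alone would warrant.
paid_settlement_entry = 8.0
print(f"Extra delay implied by the payment: {paid_settlement_entry - neutral_entry:.1f} years")
```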

The Federal Trade Commission, private plaintiffs, and state attorneys general brought a series of cases challenging reverse-payment settlements in the late 1990s. According to them, any payment that was more than de minimis raised significant antitrust concerns and was likely anticompetitive. In March 2005, however, the 11th Circuit Court of Appeals, in Schering-Plough Corp. v. Federal Trade Commission, reached the opposite conclusion. It held that, with limited exceptions, reverse-payment settlements were legal unless the generic company agreed to stay off the market beyond the patent’s expiration, an approach known as the “scope-of-the-patent” rule. Shortly thereafter, two other circuits adopted the same position.

In adopting this lenient rule, courts expressed concerns about the costs of an overly strict rule. Specifically, the courts variously argued that:

  • An overly strict rule would “discourage settlement of patent litigation” (see Federal Trade Commission v. Actavis, 570 U.S. 136, 170 (Roberts, C.J., dissenting)) because litigation can be costly and inefficient, and limiting settlements could, the courts reasoned, just increase costs to everyone.
  • Limiting patent holders’ settlement options could “decrease product innovation by amplifying the period of uncertainty around the drug manufacturer’s ability to research, develop, and market the patented product” (see Schering-Plough Corp. v. Federal Trade Commission, 402 F.3d 1056, 1075 (11th Cir. 2005)) because if patent holders have fewer settlement options, then the uncertainty of litigation might deter them from investing in research and development.
  • An overly strict rule would “reduce the incentive to challenge patents by reducing the challenger’s settlement options” (see Asahi Glass Co. v. Pentech Pharms., Inc., 289 F. Supp. 2d 986, 994 (N.D. Ill. 2003)) because if generic companies could not resolve patent litigation by getting paid, then they might not undertake their challenges in the first place.

At the same time, these courts were confident that even if settlements with payments were anticompetitive, the market would quickly remedy the situation. Once the branded company paid off one generic competitor, that payment would entice other generic companies to challenge the patent. As one court explained, “Although a patent holder may be able to escape the jaws of competition by sharing monopoly profits with the first one or two generic challengers, those profits will be eaten away as more and more generic companies enter the waters by filing their own paragraph IV certifications attacking the patent.” (See Federal Trade Commission v. Watson Pharms., Inc., 677 F.3d 1298 (11th Cir. 2012).) In other words, if the branded company paid off one generic firm to avoid competition, then it would face a host of challengers, each demanding a payment to drop their challenges.

In 2013, however, the U.S. Supreme Court put an end to the scope-of-the-patent era and subjected these types of agreements to traditional antitrust analysis. According to the Supreme Court, an agreement in which the branded and generic companies eliminate potential competition and share the resulting monopoly profits likely violates the antitrust laws, absent some justification. (See FTC v. Actavis, 570 U.S. 136, 158.) The result: first the adoption of a very lenient rule, the scope-of-the-patent test, and then the change to the stricter rule-of-reason approach together provide insight into the costs and consequences of the two regimes.

The cost of the lenient scope-of-the-patent rule

Information from the Federal Trade Commission allows an estimate of the cost to consumers of the scope-of-the-patent rule. The FTC reports the number of settlements with any compensation and the subset of those with compensation greater than $7 million. Adoption of the lenient scope-of-the-patent rule led to a dramatic increase in settlements with substantial compensation (more than $7 million), which peaked in FY2012 at 33. The Supreme Court’s rejection of that rule virtually eliminated settlements with substantial payments; in fiscal year 2016, only a single such agreement occurred. The cost of a lenient rule is not simply that anticompetitive conduct goes unpunished; lenient rules will also encourage more anticompetitive conduct. (See Figure 1.)

Figure 1

One can estimate the cost to consumers of allowing pay-for-delay settlements by multiplying the length of the delay, the yearly consumer savings lost because of delayed generic entry, and the volume of commerce affected. In 2010, the FTC issued a report analyzing pharmaceutical patent settlements. It found that settlements with payments and restrictions on entry delayed generic competition by an average of 17 months relative to settlements without payments. On a yearly basis, lost consumer savings equal 77 percent of the branded drug’s total revenue.

Finally, in information recently provided to Sen. Amy Klobuchar (D-MN), the FTC provided statistics on the total revenue of branded drugs subject to patent settlements with restrictions on generic entry and compensation of more than $7 million. The total cost to consumers equals 1.42 years of delay, multiplied by 77 percent (the share of a branded drug’s yearly revenue that consumers lose in foregone savings), multiplied by the yearly revenue of the branded drugs covered by settlements with compensation above $7 million. As it turns out, the consumer loss is equal to roughly a year of brand sales, or $63.3 billion. (See Table 1.)

Table 1

This estimate may be conservative. The 2010 FTC report counted even de minimis payments as a form of compensation in measuring the average delay, and such deals likely involved little or no delay. So, the average length of delay for deals with payments above $7 million is likely longer than 17 months. Of course, more detailed data would allow for a more precise estimate, but it is clear that reverse-payment settlements during the scope-of-the-patent era increased prescription drug costs substantially, on the order of tens of billions of dollars.
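For readers who want to check the arithmetic, the short sketch below reproduces the calculation described above. The annual revenue of the covered branded drugs is not reported in this piece, so it is backed out from the $63.3 billion total and should be treated as an assumption rather than an FTC figure.

```python
# Back-of-the-envelope reproduction of the consumer-cost arithmetic described
# above. The annual brand revenue is not reported in this piece; it is backed
# out from the $63.3 billion total and should be treated as an assumption.

avg_delay_years = 17 / 12        # average delay of 17 months, roughly 1.42 years
lost_savings_share = 0.77        # yearly lost savings as a share of brand revenue
reported_total_cost = 63.3e9     # consumer cost reported above, in dollars

# Annual revenue of the covered branded drugs implied by the reported total
implied_annual_brand_revenue = reported_total_cost / (avg_delay_years * lost_savings_share)
print(f"Implied annual brand revenue: ${implied_annual_brand_revenue / 1e9:.1f} billion")  # ~ $58 billion

# Recomputing the consumer cost from the three inputs recovers the headline figure
consumer_cost = avg_delay_years * lost_savings_share * implied_annual_brand_revenue
print(f"Estimated consumer cost: ${consumer_cost / 1e9:.1f} billion")  # ~ $63.3 billion

# Because 1.42 * 0.77 is roughly 1.09, the loss works out to roughly one year
# of brand sales, which is how the text summarizes it.
```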

It appears the courts that adopted the scope-of-the-patent rule underestimated its cost. As a practical matter, during the scope-of-the-patent period, the market response did not deter the practice. Reverse-payment settlements were profitable and successful either because the payments did not entice additional generic companies to challenge the patent or because the branded company could pay off all potential competitors. The Supreme Court, in its Actavis decision, explained that, given the competitive dynamics of the industry, paying off the first generic challenger removed the most motivated challenger. (See FTC v. Actavis at 155.) And, as one article explained, subsequent challengers’ incentives to litigate were so small that they would settle for little or nothing.

The costs of the stricter rule

Many cases are ongoing, and determining whether a given case is a false positive or a false negative requires a case-specific analysis. But what about concerns that a stricter antitrust rule would prevent settlements, deter generic companies from challenging patents, and lower incentives to innovate? Academics have questioned the relevance of those arguments. Whether or not those concerns are relevant in a legal sense, there is no empirical evidence that they are materializing.

First, stricter antitrust enforcement did not end pharmaceutical patent settlements. Record numbers of settlements occurred in each of the first 3 years after Actavis (FY2014 to FY2016). Although the Actavis decision deterred the use of payments to resolve patent litigation, parties found other ways to settle their disputes that did not harm competition. (See Figure 2.)

Figure 2

Second, there is no evidence that stricter antitrust enforcement deterred innovation. Although it is difficult to determine the impact on research and development, the pharmaceutical industry has not claimed that it is spending less on research and development because of the Actavis decision. To the contrary, PhRMA, the trade association for branded pharmaceuticals, highlights its increased research and development spending.

Third, the Supreme Court’s adoption of a stricter antitrust rule does not appear to have deterred generic companies from challenging patents, based on the 2016, 2017, and 2018 Lex Machina ANDA Patent Litigation Reports. In the 4 years preceding the Actavis decision, under the scope-of-the-patent rule, new patent challenges averaged 271 a year, with a low of 236 cases and a high of 293 cases. In the 4 years after the Actavis decision, the average increased to 413 new pharmaceutical patent challenges a year, with a low of 326 cases and a high of 476 cases.

Proving a negative—that the Actavis rule did not deter procompetitive settlements—is challenging. The statistics, although they do not establish causation, suggest that the Actavis rule had little, if any, negative impact. Although one can hypothesize that there would have been even more settlements, patent challenges, and innovation in the absence of Actavis, such theories are far less plausible given the descriptive data offered here and the lack of qualitative evidence to support them.

Conclusion

Adoption of the scope-of-the-patent test cost consumers an estimated $60 billion or more, with little to no evidence of corresponding benefits. Going forward, courts should be more concerned about overly lenient rules in this area than about overly strict ones. The drug industry responded both to the lenient scope-of-the-patent rule and, later, to the stricter Actavis rule. Together, these lessons suggest that an even stronger rule, one that presumes such payments are anticompetitive, may be more effective. It would eliminate the risk of a bad court decision that substantially increases prescription drug costs, and it would reduce the cost of enforcement.

More generally, and contrary to accepted antitrust principles, underenforcement can cause substantial harm, and there should be no presumption that markets by themselves will limit the harm of anticompetitive activity. At the same time, claims that stricter antitrust enforcement will suppress beneficial conduct are overstated. The before-and-after lessons of pay-for-delay should be a cautionary tale for courts as they apply antitrust law.

Competitive Edge: Jason Furman testifies on the role of data and privacy in online platforms’ market power

Antitrust and competition issues are receiving renewed interest, and for good reason. So far, the discussion has occurred at a high level of generality. To address important specific antitrust enforcement and competition issues, the Washington Center for Equitable Growth has launched this blog, which we call “Competitive Edge.” This series features leading experts in antitrust enforcement on a broad range of topics: potential areas for antitrust enforcement, concerns about existing doctrine, practical realities enforcers face, proposals for reform, and broader policies to promote competition. Jason Furman has authored this contribution.

The octopus image, above, updates an iconic editorial cartoon first published in 1904 in the magazine Puck to portray the Standard Oil monopoly. Please note the harpoon. Our goal for Competitive Edge is to promote the development of sharp and effective tools to increase competition in the United States economy.


Washington Center for Equitable Growth Steering Committee member Jason Furman, a professor of the practice of economic policy at the Harvard Kennedy School and a nonresident senior fellow at the Peterson Institute for International Economics, testifies today about “Online Platforms and Market Power: The Role of Data and Privacy in Competition” before the House Judiciary Subcommittee on Antitrust, Commercial and Administrative Law. Furman, who previously served as a top economic adviser to President Barack Obama, including as chair of the Council of Economic Advisers, was deeply involved in competition policy during his time in the White House, issuing reports on market concentration, on promoting inclusive growth, and on monopsony in the U.S. labor market.

In his testimony today, Furman makes four points to congressional members of the subcommittee:

  • Digital platforms are highly concentrated.
  • Competition will benefit consumers.
  • More robust merger enforcement is needed to protect competition.
  • Regulations can promote competition.

Furman’s testimony is informed most recently by his leadership of a digital competition expert panel for the government of the United Kingdom that produced a report titled “Unlocking Digital Competition.” Furman’s testimony also highlights the market power of the big online platforms and the importance of helping federal antitrust enforcers keep pace with algorithms, machine learning, and artificial intelligence. Please read his full testimony below.

—Michael Kades, director of Markets and Competition Policy at Equitable Growth



Prepared Testimony for the Hearing “Online Platforms and Market Power, Part 3: The Role of Data and Privacy in Competition”

Jason Furman
Professor of the Practice of Economic Policy, Harvard Kennedy School

U.S. House of Representatives
Committee on the Judiciary, Subcommittee on Antitrust, Commercial and Administrative Law

October 18, 2019


Chairman Cicilline, Ranking Member Sensenbrenner, and Members of the Committee:

Thank you for the opportunity to testify on the important topic of online platforms and market power. I am a professor of the practice of economic policy at the Harvard Kennedy School, where I focus on a wide range of economic policy issues. I recently chaired the Digital Competition Expert Panel for the U.K. government, which produced a report titled “Unlocking Digital Competition.” I am currently advising the U.K. government as it moves forward with a key set of recommendations from this report, including the establishment of a Digital Markets Unit to act as a procompetition regulator. Many of the recommendations in our report are applicable to the United States, and I appreciate the opportunity to share some of those ideas with you today.

In my testimony today, I will make four points:

  1. The major digital platforms are highly concentrated and, absent policy changes, this concentration will likely persist with detrimental consequences for consumers.
  2. More robust competition policy can benefit consumers by helping to lower prices, improve quality, expand choices, and accelerate innovation. These improvements would likely include greater privacy protections given that these are valued by consumers. However, it is not clear that competition will be sufficient to adequately address privacy and several other digital issues.
  3. More robust merger enforcement should be part of the solution to expanding competition, including better technical capacity on the part of regulators, more forward-looking merger enforcement that is focused on potential competition and innovation, and legal changes to clarify these processes for the courts.
  4. A regulatory approach that is oriented toward increasing competition by establishing and enforcing a code of conduct, promoting systems with open standards and data mobility, and supporting data openness is essential. This is because more robust merger enforcement is too late to prevent the harms from previous mergers, and antitrust enforcement can take too long in a fast-moving market.

I also want to commend to the Committee the recommendations in the recent report on the economy and market structure by the University of Chicago’s Stigler Center Committee on Digital Platforms, many of which dovetail with the suggestions in the report I chaired and with the recommendations in my testimony today.

I will now elaborate on each of my four points.

Point #1: The major digital platforms are highly concentrated and, absent policy changes, there is a high likelihood that this concentration will persist with detrimental consequences for consumers.

The major online platforms, including online search, mobile operating systems, digital advertising, and social media, are each dominated by two players. Moreover, the two players in each of these markets are generally drawn from the same five major companies. A number of economic features of digital markets have helped to greatly reduce what economists call “competition in the market” by leading to tipping that results in a winner-take-most situation. These economic features include the combination of economies of scale and scope, the network externalities associated with having many users on the same platform, behavioral biases on the part of consumers, the data advantages of incumbents, the importance of raising capital, and brands. While many of these individual features are found in a wide range of markets, their combination in digital markets is unique.

It is more difficult to provide a definitive answer to the question of whether there is “competition for the market” in the digital sector. This is the idea that even if at any given moment only one or two major platforms are viable, over time these incumbents can be toppled and replaced by newer and more innovative competitors. Many of the dominant technology companies of the past seemed unassailable but then faced unexpected competition due to technological changes that created new markets and new companies. For example, IBM’s dominance of hardware in the 1960s and early 1970s was rendered less important by the emergence of the PC and software. Microsoft’s dominance of operating systems and browsers gave way to a shift to the internet and an expansion of choice. But these changes were facilitated, in part, by government policy, in particular, antitrust cases against these companies, without which the changes may never have happened.

Similar changes have been seen in the platform space, including Google replacing Yahoo and Facebook replacing MySpace. However, these and other similar examples all took place in the early days of the World Wide Web. Moreover, to the degree that the next technological revolution centers around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools. New entry may still be possible in some markets, but to the degree that entrants are acquired by the largest companies with little or no scrutiny, anticompetitive behavior is tolerated, and open standards are limited, the channel of competition for the market is not fully operative.

Point #2: More robust competition policy can benefit consumers by helping to lower prices, improve quality, expand choices, and accelerate innovation.

This lack of competition is costly. Consumers may think they are receiving “free” products, but they are paying a price for these products in a number of ways. First, the competitive price for some of these products might have been negative, so the fact that consumers are not being paid for the use of their data may reflect a failure of competition. Second, to the degree that the highly concentrated advertising market results in higher ad prices than would otherwise be the case, these higher costs are passed along by sellers in the form of higher prices for consumers. Third, consumers pay in the form of quality reductions. Finally, consumers pay in the form of reduced innovation in a world in which the major platforms have reduced incentives to innovate and potential competitors have distorted incentives to make more incremental improvements that can be incorporated into the dominant platforms rather than more paradigmatic changes that could challenge these platforms.

Competition policy is very good at helping consumers get more of what they want. To the degree that public policy interests are aligned with those of consumers, that means that competition policy can be an effective tool in increasing social welfare. That is generally the case in the economy, and the digital sector is no exception. Many consumers want more privacy. Right now, with so few platform choices, they have limited options in this regard—a consumer can delete Facebook, for example, but will not have another place to go to connect with her friends. More choice would create more incentives for privacy protections.

There is an alternative perspective on privacy that is the basis for the European Union’s General Data Protection Regulation, or GDPR, which is that privacy is grounded in human rights and is generally applicable—it is not just something that should be provided to the degree that consumers want it in a competitive marketplace. This perspective would say that in addition to ensuring robust choices for consumers, regulators should also explicitly set minimal standards and rules for privacy, based on these human rights concerns or the worry that consumers will not be sufficiently attentive for competition to serve their needs. The United States already has such rules in areas like healthcare and banking, and understanding whether a generalized set of privacy rules is necessary—as a complement to competition policy and taking into account their impact on competition—is an important issue to resolve.

Beyond privacy, there are some issues that cannot be solved by competition. Some consumers value harmful content, like child pornography or instructions on assembling weapons of mass destruction. Competition, by itself, would deliver more of this content. While competition is an essential component of policy toward digital platforms, these other issues make clear that competition cannot be the only element of such a strategy.

Point #3: More robust merger enforcement should be part of the solution to expanding competition, including better technical capacity on the part of regulators, more forward-looking merger enforcement that is focused on potential competition and innovation, and legal changes to clarify these processes for the courts.

Competition policy generally recognizes a distinction between companies that grow organically, presumably reflecting efficiencies, and companies that grow through mergers, where regulators need to weigh the efficiencies against the harms from lessened competition.

In the past decade, Amazon, Apple, Facebook, Google, and Microsoft combined have made more than 400 acquisitions globally. Many, if not most, of the major features of these companies were not developed internally but acquired. Many of these acquisitions are small and almost certainly efficiency enhancing, but several have been quite big—the largest being Microsoft paying $26.2 billion for LinkedIn.

Merger control is subject to two types of errors: false positives, when a merger that should have been allowed to go through is blocked, and false negatives, when a merger that should have been blocked is allowed to go through. No enforcement can be perfect given all of the uncertainties inherent with forward-looking merger assessments, so some balancing of these types of errors is necessary.

To date, there have been no false positives in mergers involving the major digital platforms for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the United States and globally. Remedying this underenforcement is not just a matter of greater focus by enforcers; it will also require legislative change. Had such a change been in effect, it is likely that the vast majority of these mergers would still have gone through based on their minimal impact on competition and their potentially large benefits for consumers. But some would likely have been blocked, resulting in more competition today.

A better approach involves three elements. First, the Federal Trade Commission, or FTC, and the Department of Justice’s Antitrust Division need expanded resources to develop greater technical expertise in the digital space. Economics and law are essential, but so is computer science. Doing this will require more staff and an increased focus on digital expertise.

Second, merger analysis cannot simply be focused on short-run, static price effects, but must also consider the effects on innovation in the future. This can involve consideration of the role of data as a potential barrier to entry and the role of potential competition in the market. This is further complicated by the fluid definitions of digital markets, which continue to evolve over time. Economists have tools to assess some of these issues, but in many cases this can be very difficult and can lead to some ambiguity and uncertainty in any given case.

Third, in recent decades, courts have established an increasingly high bar for blocking mergers. This is likely inappropriate in the economy as a whole, but it is especially problematic in the digital sector, where a strong presumption in favor of mergers runs up against the necessity of considering what are inherently more speculative—but still very real and important—issues, like potential competition and innovation. As a result, the legal standards for merger review need to be clarified, either more generally or specifically for the digital space, including shifting some of the burdens of proof.

Point #4: A regulatory approach that is oriented toward increasing competition by establishing and enforcing a code of conduct, promoting systems with open standards and data mobility, and supporting data openness is essential.

Expanded merger enforcement would be helpful, but it is not sufficient since many of the horses have already left the barn. Antitrust scrutiny of the major platforms, like the valuable work being undertaken by this committee and the efforts by the FTC and Department of Justice, is important as well. But in a fast-moving technological landscape, none of these efforts is sufficient to ensure adequate competition—by the time enforcement happens, the competition may have been wiped out and the major platforms may have moved on. Moreover, the fines associated with enforcement may not be a sufficient deterrent, especially when they are levied for very specific conduct and do not set a clear precedent for other companies operating in the future.

That is why my panel recommended the establishment of a “Digital Markets Unit,” a step the U.K. government announced it is taking and that I am currently helping it to implement. I believe this recommendation is fully applicable in the United States. I will describe the three main functions that regulation should undertake, recognizing that this could be housed in an existing regulator like the FTC or in a new body like the “Digital Authority” floated by the Stigler Center committee.

The first function is a code of conduct that would apply to companies deemed to have “strategic market status,” a designation that would be applied based on transparent criteria, re-evaluated every 3 to 5 years, and focused not just on traditional criteria like market shares but also on the degree to which a platform acts as a “gateway” or a “bottleneck.” Companies with strategic market status should be subject to a code of conduct that would be developed through a multistakeholder process and should be enforceable. The elements of the code of conduct would be similar to existing antitrust law, including ensuring that business users are provided with access to designated platforms on a fair, consistent, and transparent basis; provided with prominence, rankings, and reviews on designated platforms on a fair, consistent, and transparent basis; and not unfairly restricted from, or penalized for, utilizing alternative platforms or routes to market. Importantly, smaller businesses and new entrants would not be subject to these rules—the goal is to establish a level playing field without inhibiting innovation and choice by emerging competitors.

The second function is promoting systems with open standards and data mobility. These steps would benefit consumers by allowing them to access and engage with a wider range of people in a simpler manner, fostering more competition and entry—including enabling consumers to multihome by using multiple systems simultaneously or to switch more easily to alternative platforms. This step is not self-executing; you cannot just order it and expect it to happen. It will require hard work to identify relevant areas, like messaging or social networks, collaboration with companies on necessary technical standards, and careful consideration to ensure that it is done in a manner that is compatible with other objectives like protecting privacy. Much of this is happening already, including through initiatives like the Data Transfer Project organized by many of the major tech companies. Companies do not, however, have fully aligned incentives to facilitate competition through open standards, so additional pressure can help, either by encouraging private efforts to become even more robust or by creating a more formal regulatory requirement.

The third function is data. Companies active in the digital economy generate and hold significant volumes of customers’ personal data. This data is an asset that enables companies to engage in data-driven innovation, helping them improve their understanding of customers’ demands, habits, and needs. Enabling personal data mobility may provide a consumer-led tool that increases the use of new digital services, giving companies an easier way to compete and grow in data-driven markets. However, in some markets, the key to effective competition may be to grant potential competitors access to privately held data. Such efforts need to be very carefully balanced against both commercial rights and concerns about privacy. Digital platforms are already making an increasing amount of data open. Continuing to encourage this is important, but so is understanding additional steps that could foster more open data.

Thank you very much for your work on these important issues and I look forward to your questions.