The Weak Argument Jeopardizing Tech Antitrust Legislation

Democrats are pumping the brakes on an ambitious Senate bill over long-shot concerns about content moderation.
Photograph: Senator Amy Klobuchar speaks with reporters after a news conference. (Kent Nishimura/Los Angeles Times/Getty Images)

Opponents of the antitrust push targeting Big Tech have lobbed all kinds of arguments to try to weaken support for new legislation. They may finally have found one that sticks.

This week, a group of four Democratic senators, led by Brian Schatz of Hawaii, sent a letter to Amy Klobuchar asking her to pump the brakes on the American Innovation and Choice Online Act. The bill, which Klobuchar cosponsored with bipartisan support, would prohibit the biggest tech companies from abusing their power to disadvantage businesses that operate on their platforms. But Schatz’s group argues that a terrible side effect is buried in the legislation. The bill, they claim, would prevent dominant platforms from enforcing their content policies, which in turn “would supercharge harmful content online and make it more difficult to combat.”

Here is what the bill says about content moderation: nothing. The relevant section says that a “covered platform”—the likes of Google, Amazon, Apple, Meta, or Microsoft—cannot “discriminate in the application or enforcement of the terms of service of the covered platform among similarly situated business users in a manner that would materially harm competition.” This does not appear to ban or limit content policies. It suggests, to the contrary, that platforms can continue to enforce their terms of service—just not in a discriminatory way. On its face, this means that a dominant platform can’t apply its rules unfairly against a company that relies on it to reach customers. If a new video-sharing app were eating into YouTube’s market share, for instance, this provision would prevent Google from selectively invoking some little-used policy to ban it from its app store.

If the bill doesn’t discuss content moderation, where did some people get the idea that it would nonetheless affect it? In part, it’s a talking point from an industry that isn’t shy about making creative arguments to defeat proposed regulation. But tech insiders aren’t the only ones making this claim. Last week, law professors Jane Bambauer and Anupam Chander published an op-ed in The Washington Post issuing much the same warning. On Wednesday, Chander, who teaches at Georgetown, walked me through the argument. Take what happened to Parler, the conservative-friendly “free speech” Twitter alternative. Last year, after the January 6 riot, Apple and Google banned Parler from their app stores, and Amazon Web Services canceled its hosting contract. Parler sued but had no legal leg to stand on. (It eventually implemented a content policy and was allowed back into the app stores.) Under the new bill, however, a conservative state attorney general, like Texas’ Ken Paxton, would be able to sue the platforms, claiming that they discriminated against Parler because of its conservative affiliation.

OK, but couldn’t the companies then simply say, “But this wasn’t discrimination. Here is the policy they violated, and here’s the evidence that they violated it”? Not so fast, Chander argues. It doesn’t really matter what Google or Amazon says; what matters is what a federal judge, and ultimately the Supreme Court, decides. And a lot of Republican-appointed federal judges might agree that tech companies are mistreating conservatives.

“Content moderation decisions are not clear up-and-down decisions,” Chander says. “It’s easy to cast those judgment calls as discriminatory, especially when you have judges who feel that their side is the one being discriminated against.” He adds, “Boy, are you handing the conservative judges on these courts a loaded weapon, knowing they’re going to be backed up by all the conservative Supreme Court justices.”

Chander clearly has a point. Republican officials have recently proven their willingness to use the law to punish corporations over ideological disagreement, a trend illustrated most vividly by Ron DeSantis’ feud with Disney in Florida. The judiciary is indeed politicized. Still, most federal judges don’t just pull indefensible rulings out of thin air—especially when those rulings could decimate a major industry. Recall that the bill says a platform can’t discriminate against “similarly situated” businesses. In other words, an enforcer like Paxton would have to prove that another company is getting away with the same thing that the conservative company was punished for. Even then, the bill puts up several more barriers. Paxton would have to show that the punishment “would materially harm competition,” which means showing harm to the competitive process itself, not just a single company. That is a high bar to clear in antitrust law. Further still, the bill includes a list of “affirmative defenses” that a platform can raise. So even if a court agreed that there was discrimination that materially harmed competition, the company could still escape liability by showing that the enforcement was necessary for safety or to “maintain or substantially enhance the core functionality of the covered platform.”

According to Chander, however, this somewhat misses the point. “The bill will not be litigated,” he says. The penalty for breaking the law is 10 percent of a company’s total US revenue for the period of the violation. Because of that, Chander argues, no company will be willing to take the risk, no matter how small, of being successfully sued. Instead of going to court and trusting that the facts are on their side, the likes of Facebook and Google will preemptively stop enforcing their content policies.

Will they, though? Let’s game this out. The antitrust bill dictates how a platform treats “business users,” which in this case mostly means advertisers. According to Chander’s logic, Facebook, Instagram, and YouTube would stop enforcing their content rules on advertisers on their platforms, lest anybody claim anti-conservative bias. But this would be a disaster—for the tech platforms. These companies have dubious track records of policing the ads that they run, but it’s hard to imagine them announcing that absolutely anything goes. Racism, graphic violence, medical misinformation—it simply isn’t in the companies’ interests to let pure garbage take over people’s feeds, especially given that advertisers have the power to micro-target users. At a certain point, the risk of losing users—and reputable advertisers—outweighs the risk of the Supreme Court going rogue. YouTube isn’t going to start welcoming Nazi ads. AWS won’t feel compelled to host Stormfront. And the judiciary is unlikely to make them.

What seems much more plausible is that the law would spur companies to finally make sure their content policies are clear and consistently applied. That would require investing more heavily in those systems and offering much more transparency into how they operate. Which sounds … pretty nice!

“Guaranteeing non-discrimination is good,” says Erin Simpson, director of tech policy at the Center for American Progress, who cowrote a detailed analysis of the bill. “There’s such a huge gap between what the rules say on paper and what they’re actually doing in the real world. The enforcement gap is huge. If this bill helps close that, that’s a good thing.”

The four Democrats who sent the letter don’t see it that way. (In addition to Schatz, they are Ron Wyden, of Oregon; Tammy Baldwin, of Wisconsin; and Ben Ray Luján, of New Mexico.) They suggest adding a section to the bill clarifying that it can’t be “construed to impose liability on a covered platform operator for moderating content.” In plain English, this would mean that no one could sue a platform for discriminatory enforcement of content policies—even if the discrimination was real. That seems like a strong position to take, so I asked Schatz’s office if it’s a fair description of the proposal. Does the senator really think dominant platforms should be allowed to discriminate against similarly situated businesses when they enforce content policies? His spokesperson pointed me back to the letter and noted that Schatz has introduced separate legislation related to content moderation policies, but didn’t answer the question directly.

(There’s also reason to think the law already does what the senators are requesting. In a response to their letter, David Cicilline, the top Democrat on the House Antitrust Subcommittee, pointed out that Section 230 of the Communications Decency Act already gives companies legal immunity for content moderation decisions. The new bill doesn’t change that law.)

Part of the story here is that the two parties have polarized dramatically around the issue of content moderation. Klobuchar and the other supporters of the antitrust bill know that if they explicitly exempt content moderation from its ambit, they risk losing the Republican votes necessary for it to become law. That’s because, for Republicans, content moderation is the single most important reason to have a non-discrimination law in the first place. On the political right, it is mainstream, even banal, to believe that these companies unfairly discriminate against conservative viewpoints. Never mind the fact that Facebook has been an incredible boon to right-wing publishers and political movements; for conservatives, episodes like the Hunter Biden laptop affair and Donald Trump’s deplatforming prove that the fix is in. Opinion on much of the left, meanwhile, has swung in the opposite direction. Because many claims of “censorship” or “shadow-banning” are overblown, even preposterous, many Democrats seem to have stopped worrying about the very real power that companies like Meta and Google have over online speech and information. They’re more likely to rally to defend these corporations’ constitutional right to remove content at will.

It’s true that any new law (or indeed any existing one) can be abused. No piece of legislation is risk-free. But to back away from regulating the tech giants on that basis is on some level to give up on the prospect of democratic governance. It amounts to trusting the platforms to do a better job regulating themselves than government could do. Which is how we got into this mess in the first place.