CASE PREVIEW

Social media content moderation laws come before Supreme Court

Once again, the relationship between the government and social media will headline arguments at the Supreme Court on Monday. Moody v. NetChoice and NetChoice v. Paxton, argued together, make up just the second of three social media disputes the court will hear this term. The justices on Monday will consider the constitutionality of controversial laws in Texas and Florida that would regulate how large social media companies like Facebook and X (formerly known as Twitter) control content posted on their sites.

Defending the laws, Texas and Florida characterize them as simply efforts to “address discrimination by social-media platforms.” But the tech groups challenging the laws counter that the laws are “an extraordinary assertion of governmental power over expression that violates the First Amendment in multiple ways.”

The legislatures in Texas and Florida passed the laws in 2021 in response to a belief that social media companies were censoring their users, especially those with conservative views. As they are drafted, the laws do not apply to conservative social media platforms like Parler, Gab, and Truth Social.

The Florida law originally created an exception for theme parks and entertainment so that the law did not apply to Disney and Universal Studios, which operate in the state. But the state’s legislature stripped that protection in 2022 after Disney officials criticized the state’s “Don’t Say Gay” law.

Although the two states’ laws are not identical, there are themes that are common to both. Both contain, for example, provisions that limit the choices that social media platforms can make about which user-generated content to present to the public and how. For example, the Florida law bars social media platforms from banning candidates for political office, as well as from limiting the exposure of those candidates’ posts. Both laws also contain provisions requiring social media platforms to provide individualized explanations to users about the platforms’ editorial decisions.

Two trade groups representing social media platforms – whose members include Google (which owns YouTube), X (formerly known as Twitter), and Meta (which owns Facebook) – went to federal court to challenge the laws.

A federal district judge in Tallahassee, Florida, barred the state from enforcing most of the law. The U.S. Court of Appeals for the 11th Circuit left that ruling in effect, agreeing that the main provisions of the Florida law likely violate the First Amendment. The state then came to the Supreme Court in 2022, asking the justices to weigh in.

A federal judge in Austin, Texas put that state’s law on hold before it could go into effect, but the U.S. Court of Appeals for the 5th Circuit disagreed. That prompted the tech groups to come to the Supreme Court, which in May 2022 temporarily blocked the law while the tech groups’ appeal continued.

After the 5th Circuit ultimately upheld the law, the tech groups returned to the Supreme Court, which agreed last fall to review both states’ laws.

Defending the laws, the states describe social media platforms as the new “digital public square,” with enormous control over news that members of the public see and communicate. States, they say, have historically had the power to protect their residents’ access to that information. And what social media platforms are ultimately seeking, the states contend, is to avoid any regulation whatsoever – an argument, Florida says, that “if accepted, threatens to neuter the authority of the people’s representatives to prevent the platforms from abusing their power over the channels of discourse.”

The states maintain that their laws do not implicate the First Amendment at all, because they simply require social media platforms to host speech, which is not itself speech but instead conduct that states can regulate to protect the public. The business model for these platforms, the states say, hinges on having billions of other people post their speech on the platforms – something very different from, say, a newspaper that creates its own content and publishes it.

To support this argument that they are merely regulating the platforms’ conduct, the states point to Supreme Court cases holding, for example, that shopping malls must allow high school students to solicit signatures for a political petition, and that a federal law requiring law schools to choose between providing military recruiters with access to their campuses and forfeiting federal funding does not violate the First Amendment.

The states also assert that the First Amendment does not apply to the laws because the states are just treating the platforms like “common carriers,” such as telephone and telegraph companies. The state laws simply impose a basic requirement that the platforms, as common carriers, not discriminate in providing their services, “which is how common-carrier regulation has functioned for centuries.”

But even if the laws do regulate speech, the states continue, they are subject to a less exacting standard of review because they do not target specific content on any platform, and they merely ensure that speakers continue to have access to the “modern public square.”

Finally, the states insist that provisions requiring the social media platforms to provide individual explanations about their content-moderation decisions are consistent with the Supreme Court’s 1985 decision holding that states can require companies to disclose “purely factual and uncontroversial information” about their services. Indeed, Texas suggests, the platforms can use an automated process to fulfill their obligations under these provisions.

The tech groups push back against the states’ suggestion that the Texas and Florida laws do not implicate the First Amendment at all. The First Amendment, the groups write, protects the right of private social-media platforms, rather than the government, to decide what messages they will or will not disseminate. “Just as Florida may not tell the New York Times what opinion pieces to publish or Fox News what interviews to air, it may not tell Facebook or YouTube what content to disseminate,” they emphasize.

The tech groups explain that there is a “cacophony of voices on the Internet engaged in everything from incitement and obscenity to political discourse and friendly banter.” As a result, they say, social media platforms must make billions of editorial decisions per day. These decisions take on two forms, they observe. First, there are judgments about what content they will remove. Facebook, for example, restricts hate speech, bullying, and harassment, while YouTube bars pornography and violent content. Second, they continue, there are judgments about how the remaining content appears on their sites for individual users.

The Texas and Florida laws interfere with platforms’ speech, the tech groups say, because they interfere with the platforms’ right to exercise their editorial discretion. In particular, the groups emphasize, the laws require large social media platforms to disseminate virtually all speech by the state’s preferred speakers, no matter how blatantly or repeatedly the speaker violates the website’s terms of use.

And while the states rely on the line of cases indicating that there is no First Amendment right not to host someone else’s speech, the tech groups point to a different line of cases, in which the Supreme Court has recognized that the First Amendment protects a right to editorial judgment – so that, for example, a state cannot require a newspaper to give a political candidate a right to respond to criticism, nor can it mandate that the private organizers of a parade allow a group to participate when the organizers do not approve of the group’s message.

Because “countermanding the editorial judgments of ‘Big Tech’ about what speech to allow on their websites” is the “raison d’être” of the state laws, the tech groups conclude, the laws are therefore subject to the most stringent form of review, known as strict scrutiny. And the laws fail this test, the groups contend, because even if states had an interest in having their residents have access to a wide range of views on social media, that still wouldn’t justify requiring private social media platforms to publish content with which they disagree.

The states also cannot justify regulating social media platforms on the theory that they are common carriers, the tech groups continue. There is no tradition of treating a private party, like a social media platform, that publishes speech as a common carrier, they say. But even if there were, the laws at issue in these cases are not traditional common-carrier regulations, because (among other things) they only regulate some social media platforms.

Finally, the tech groups tell the justices that the provisions requiring social media platforms to provide individualized explanations and disclosures when they exercise their editorial discretion are also unconstitutional because (among other things) they require the platforms to speak and, by imposing “massive burdens,” make it less likely that the platforms will exercise that discretion. They are, the tech groups suggest, “akin to requiring a newspaper to explain every decision not to publish any one of a million letters to the editor.”

The Biden administration filed a “friend of the court” brief supporting the tech groups. It stresses that although the First Amendment protects the social media platforms’ efforts to moderate the content on their sites, that does not mean that the platforms can never be regulated. But in these cases, it says, the states cannot show that their regulations survive under even a more lenient form of First Amendment scrutiny. And in particular, U.S. Solicitor General Elizabeth Prelogar wrote, the Supreme Court “has repeatedly rejected” the premise of the states’ argument – the idea that “the government has a valid interest in increasing the diversity of views presented by a particular private speaker — even if that speaker controls a powerful or dominant platform.”

The Biden administration will be back before the court in March in another case involving its own relationship with social media. In Murthy v. Missouri, slated for argument on March 18, the justices will consider whether and to what extent government officials can communicate with social media companies about their content-moderation policies.

This article was originally published at Howe on the Court

Recommended Citation: Amy Howe, Social media content moderation laws come before Supreme Court, SCOTUSblog (Feb. 23, 2024, 4:14 PM), https://www.scotusblog.com/2024/02/social-media-content-moderation-laws-come-before-supreme-court/