Social media platforms, at least in their most common form, have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited, the U.S. Supreme Court stated in its landmark decision in Moody v. NetChoice and NetChoice v. Paxton, which were decided together. 

The cases dealt with Florida and Texas laws that each limited the ability of online services to block, deamplify, or otherwise negatively moderate certain user speech.  

Yet the Supreme Court did not strike down either law. Instead, it sent both cases back to the lower courts to determine whether each law could be invalidated in its entirety, rather than challenged only as applied to specific functions of specific services. 

The Supreme Court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged. 

This is an important ruling and one that EFF has been arguing for in courts since 2018. We’ve already published our high-level reaction to the decision and written about how it bears on pending social media regulations. This post is a more thorough, and much longer, analysis of the opinion and its implications for future lawsuits. 

A First Amendment Right to Moderate Social Media Content 

The most important question before the Supreme Court, and the one with the strongest ramifications beyond the specific laws challenged here, was whether social media platforms have their own First Amendment rights, independent of their users’ rights, to decide what third-party content to present in their users’ feeds, recommend, amplify, deamplify, label, or block. The lower courts in the NetChoice cases reached opposite conclusions: the 11th Circuit, considering the Florida law, found a First Amendment right to curate, while the 5th Circuit, considering the Texas law, refused to recognize one. 

The Supreme Court appropriately resolved that conflict between the two appellate courts, answering the question yes and treating social media platforms the same as other entities that compile, edit, and curate the speech of others, such as bookstores, newsstands, art galleries, parade organizers, and newspapers. As Justice Kagan explained, writing for the court’s majority, “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.” 

As the Supreme Court explained,  

Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product. When the government interferes with such editorial choices—say, by ordering the excluded to be included—it alters the content of the compilation. (It creates a different opinion page or parade, bearing a different message.) And in so doing—in overriding a private party’s expressive choices—the government confronts the First Amendment. 

The court thus chose to apply the line of precedent from Miami Herald Co. v. Tornillo, in which the Supreme Court in 1974 struck down a law requiring newspapers that endorsed a candidate for office to give that candidate’s opponents space to reply. It rejected the line of precedent from PruneYard Shopping Center v. Robins, a 1980 case in which the Supreme Court ruled that the First Amendment was not violated by a state court decision requiring a particular shopping center, under the California Constitution, to let a group set up a table and collect signatures when it allowed other groups to do so. 

In Moody, the Supreme Court explained that the latter rule applies only to situations in which the host itself is not engaged in an inherently expressive activity. That is, a social media platform deciding what user-generated content to select and recommend to its users is inherently expressive, but a shopping center deciding who gets to table on its private property is not. 

So, the Supreme Court said, the 11th Circuit got it right and the 5th Circuit did not. Indeed, the 5th Circuit got it very wrong. In the Supreme Court’s words, the 5th Circuit’s opinion “rests on a serious misunderstanding of First Amendment precedent and principle.” 

This is also the position EFF has been advancing in courts since at least 2018. As we wrote then, “The law is clear that private entities that operate online platforms for speech and that open those platforms for others to speak enjoy a First Amendment right to edit and curate the content. The Supreme Court has long held that private publishers have a First Amendment right to control the content of their publications. Miami Herald Co. v. Tornillo, 418 U.S. 241, 254-58 (1974).” 

This is an important rule in several contexts in addition to the state must-carry laws at issue in these cases. The same rule will apply to laws that restrict the publication and recommendation of lawful speech by social media platforms, or otherwise interfere with content moderation. And it will apply to civil lawsuits brought by those whose content has been removed, demoted, or demonetized. 

Applying this rule, the Supreme Court concluded that Texas’s law could not be constitutionally applied against Facebook’s Newsfeed and YouTube’s homepage. (The Court did not specifically address Florida’s law since it was writing in the context of identifying the 5th Circuit’s errors.)

Which Services Have This First Amendment Right? 

But the Supreme Court’s ruling doesn’t make clear which other functions of which services enjoy this First Amendment right to curate. The Supreme Court specifically analyzed only Facebook’s Newsfeed and YouTube’s homepage. It did not analyze any services offered by other platforms or other functions offered through Facebook, like messaging or event management. 

The opinion does, however, identify some factors that will be helpful in assessing which online services have the right to curate. 

  • Targeting and customizing the publication of user-generated content is protected, whether by algorithm or otherwise, pursuant to the company’s own content rules, guidelines, or standards. The Supreme Court specified that it was not assessing whether the same right would apply to personalized curation decisions made algorithmically solely based on user behavior online without any reference to a site’s own standards or guidelines. 
  • Content moderation such as labeling user posts with warnings, disclaimers, or endorsements for all users, or deletion of posts, again pursuant to a site’s own rules, guidelines, or standards, is protected. 
  • The combination of multifarious voices “to create a distinctive expressive offering” or have a “particular expressive quality” based on a set of beliefs about which voices are appropriate or inappropriate, a process that is often “the product of a wealth of choices,” is protected. 
  • There is no threshold of selectivity a service must surpass to have curatorial freedom, a point we argued in our amicus brief. “That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference,” the Supreme Court said. Courts should not focus on the ratio of rejected to accepted posts in deciding whether the right to curate exists: “It is as much an editorial choice to convey all speech except in select categories as to convey only speech within them.” 
  • Curatorial freedom exists even when no one is likely to view a platform’s editorial decisions as its endorsement of the ideas in the posts it chooses to publish. As the Supreme Court said, “this Court has never hinged a compiler’s First Amendment protection on the risk of misattribution.” 

Considering these factors, the First Amendment right will apply to a wide range of social media services, what the Supreme Court called “Facebook Newsfeed and its ilk” or “its near equivalents.” But its application to messaging, e-commerce, event management, and infrastructure services is less clear.

The Court, Finally, Seems to Understand Content Moderation 

Also noteworthy is that in concluding that content moderation is protected First Amendment activity, the Supreme Court showed that it finally understands how content moderation works. It accurately described how social media platforms decide what any given user sees in their feed. For example, it wrote:

In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. 

and 

In the face of that deluge, the major platforms cull and organize uploaded posts in a variety of ways. A user does not see everything—even everything from the people she follows—in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels. Of particular relevance here, Facebook and YouTube make some of those decisions in conformity with content-moderation policies they call Community Standards and Community Guidelines. Those rules list the subjects or messages the platform prohibits or discourages—say, pornography, hate speech, or misinformation on select topics. The rules thus lead Facebook and YouTube to remove, disfavor, or label various posts based on their content. 

This comes only a year after Justice Kagan, who wrote this opinion, remarked of the Supreme Court during another oral argument, “These are not, like, the nine greatest experts on the internet.” In hindsight, that statement seems more of a comment on her colleagues’ understanding than her own. 

Importantly, the Court has now moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Court used that language to describe the process in last term’s case, Twitter v. Taamneh. It is now clear that in Taamneh, the Court was referring to Twitter’s passive relationship with ISIS, which Twitter treated like any other account holder, a relationship that did not support the terrorism aiding-and-abetting claims made in that case. 

Supreme Court Suggests Competition Law to Address Undue Market Influences 

Another important element of the Supreme Court’s analysis is its treatment of the posited rationale for both states’ speech restrictions: the need to improve or better balance the marketplace of ideas. Both laws were passed in response to perceived censorship of conservative voices, and the states sought to eliminate this perceived political bias from the platforms’ editorial practices. 

The Supreme Court found that this was not a sufficiently important reason to limit speech, as is required under First Amendment scrutiny: 

However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others. . . . The government may not, in supposed pursuit of better expressive balance, alter a private speaker’s own editorial choices about the mix of speech it wants to convey. 

But, as EFF has consistently urged in its amicus briefs, in these cases and others, that ruling does not leave states without any way of addressing harms caused by the market dominance of certain services.   

So, it is very heartening to see the Supreme Court point specifically to competition law as an alternative. In the Supreme Court’s words, “Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access.” 

Though the Supreme Court did not mention them, we think this same reasoning supports many data privacy laws as well. 

Nevertheless, the Court Did Not Strike Down Either Law

Despite this analysis, the Supreme Court did not strike down either law. Rather, it sent the cases back to the lower courts to decide whether the lawsuits were proper facial challenges to the laws. 

A facial challenge is a lawsuit that argues that a law is unconstitutional in every one of its applications. Outside of the First Amendment, facial challenges are permissible only if there is no possible constitutional application of the law or, as the courts say, the law “lacks a plainly legitimate sweep.” However, in First Amendment cases, a special rule applies: a law may be struck down as overbroad if there are a substantial number of unconstitutional applications relative to the law’s permissible scope. 

To assess whether a facial challenge is proper, a court must do a three-step analysis. First, it must identify the law’s “sweep,” that is, to whom and to what actions it applies. Second, it must identify which of those possible applications are unconstitutional. Third, it must compare the constitutional and unconstitutional applications, both quantitatively and qualitatively; principal applications of the law, that is, the ones that appear to be its primary targets, may be given greater weight in that balancing. The court will strike down the law only if the unconstitutional applications substantially outweigh the constitutional ones. 

The Supreme Court found that neither lower court conducted this analysis with respect to the Florida or Texas law, so it sent both cases back down for the courts to do so. Its First Amendment analysis, set forth above, was meant to guide them in determining which applications of the laws would be unconstitutional. The Supreme Court found that the Texas law cannot be constitutionally applied to Facebook’s Newsfeed or YouTube’s homepage, but the lower court now needs to complete the analysis. 

While these limitations on facial challenges have been well established for some time, the Supreme Court’s focus on them here was surprising because blatantly unconstitutional laws are challenged facially all the time.  

Here, however, the Supreme Court was reluctant to apply its First Amendment analysis beyond large social media platforms like Facebook’s Newsfeed and its close equivalents. The Court was also unsure whether and how either law would be applied to scores of other online services, such as email, direct messaging, e-commerce, payment apps, ride-hailing apps, and others. It wants the lower courts to look at those possible applications first. 

This decision thus creates a perverse incentive for states to pass laws that by their language broadly cover a wide range of activities, and in doing so make a facial challenge more difficult.

For example, the Florida law defines covered social media platforms as “any information service, system, Internet search engine, or access software provider that does business in this state and provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site” which has either gross annual revenues of at least $100 million or at least 100 million monthly individual platform participants globally.

Texas HB20, by contrast, defines “social media platform” as “an Internet website or application that is open to the public, allows a user to create an account, and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images.” It specifically excludes ISPs, email providers, and online services that are not primarily composed of user-generated content and whose social aspects are incidental to their primary purpose. 

Does This Make the First Amendment Analysis “Dicta”? 

Typically, language in a higher court’s opinion that is necessary to its ultimate ruling is binding on lower courts, while language that is not necessary is merely persuasive “dicta.” Here, the Supreme Court’s ruling was based on the uncertainty about the propriety of the facial challenge, and not the First Amendment issues directly. So, there is some argument that the First Amendment analysis is persuasive but not binding precedent. 

However, the Supreme Court could not responsibly remand the case back to the lower courts to consider the facial challenge question without resolving the split in the circuits, that is, the vastly different ways in which the 5th and 11th Circuits analyzed whether social media content curation is protected by the First Amendment. Without that guidance, neither court would know how to assess whether a particular potential application of the law was constitutional or not. The Supreme Court’s First Amendment analysis thus seems quite necessary and is arguably not dicta. 

And even if the analysis is merely persuasive, six of the justices found that the editorial and curatorial freedom cases like Miami Herald Co. v. Tornillo applied. At a minimum, this signals how they will rule on the issue when it reaches them again. It would be unwise for a lower court to rule otherwise, at least while those six justices remain on the Supreme Court. 

What About the Transparency Mandates?

Each law also contains several requirements that the covered services publish information about their content moderation practices. Only one type of these provisions was part of the cases before the Supreme Court: a provision in each law requiring covered platforms to provide users with notice and an explanation of certain content moderation decisions.

Heading into the Supreme Court, it was unclear what legal standard applied to these speech mandates. Was it the undue burden standard, from a case called Zauderer v. Office of Disciplinary Counsel, that applies to mandated noncontroversial and factual disclosures in advertisements and other forms of commercial speech, or the strict scrutiny standard that applies to other mandated disclosures?

The Court remanded this question with the rest of the case. But it did imply, without elaboration, that the Zauderer “undue burden” standard each of the lower courts applied was the correct one.

Tidbits From the Concurring Opinions 

All nine justices on the Supreme Court questioned the propriety of the facial challenges to the laws and favored remanding the cases back to the lower courts. So, officially the case was a unanimous 9-0 decision. But there were four separate concurring opinions that revealed some differences in reasoning, with the most significant difference being that Justices Alito, Thomas, and Gorsuch disagreed with the majority’s First Amendment analysis.

Because a majority of the Supreme Court, five justices, fully supported the First Amendment analysis discussed above, the concurrences have no legal effect. There are, however, some interesting tidbits in them that give hints as to how the justices might rule in future cases.

  • Justice Barrett fully joined the majority opinion. She wrote a separate concurrence to emphasize that the First Amendment issues may play out much differently for services other than Facebook’s Newsfeed and YouTube’s homepage. She expressed a special concern for algorithmic decision-making that does not carry out the platform’s editorial policies. She also noted that a platform’s foreign ownership might affect whether the platform has First Amendment rights, a statement that pretty much everyone assumes is directed at TikTok. 
  • Justice Jackson agreed with the majority that the Miami Herald line of cases was the correct precedent and that the 11th Circuit’s interpretation of that precedent was correct, whereas the 5th Circuit’s was not. But she did not agree with the majority’s decision to itself apply that precedent to Facebook’s Newsfeed and YouTube’s homepage; rather, the lower courts should do that. She emphasized that the law might be applied differently to different functions of a single service.
  • Justice Alito, joined by Thomas and Gorsuch, emphasized his view that the majority’s First Amendment analysis is nonbinding dicta. He criticized the majority for undertaking the analysis on the record before it. But since the majority did so, he expressed his disagreement with it. He disputed that the Miami Herald line of cases was controlling and raised the possibility that the common carrier doctrine, whereby social media would be treated more like telephone companies, was the more appropriate path. He also questioned whether algorithmic moderation reflects any human’s decision-making and whether community moderation models reflect a platform’s editorial decisions or viewpoints, as opposed to the views of its users.
  • Justice Thomas fully agreed with Justice Alito but wrote separately to make several points. First, he repeated his long-standing belief that the Zauderer “undue burden” standard, and indeed the entire commercial speech doctrine, should be abandoned. Second, he endorsed the common carrier doctrine as the correct law. He also expounded on the dangers of facial challenges. Lastly, Justice Thomas seems to have moved away, at least a little, from his previous position that social media platforms were largely neutral pipes that only insubstantially engaged with user speech.

How the NetChoice opinion will be viewed by lower courts and what influence it will have on state legislatures and Congress, which continue to seek to interfere with content moderation processes, remains to be seen. 

But the Supreme Court has helpfully resolved a central question and provided a First Amendment framework for analyzing the legality of government efforts to dictate what content social media platforms should or should not publish. 
