By: Associates Stephanie Yee and Stephanie Roque-Hurtado
The U.S. Supreme Court recently issued a decision in Twitter, et al. v. Taamneh, et al. that offers some insight into how it might handle a Communications Decency Act § 230 analysis.
The Court has appeared reluctant in recent years to address § 230. In 2020, for example, it denied certiorari in an appeal on whether § 230 protected a software company against claims of anticompetitive conduct, leaving many to wonder how the Supreme Court would treat § 230 in Twitter. (See Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 592 U.S. ___ (2020).)

Although the Court similarly did not perform a § 230 analysis in this case, Twitter may still shed some light on its view of § 230 protections. In Twitter, the Supreme Court held that Twitter, Facebook, and Google (which owns YouTube) were not liable under an antiterrorism statute (18 U.S.C. § 2333) for hosting terrorism-related content uploaded by third-party users. Although § 230 was a defense that could potentially have immunized these content platforms from liability, the Supreme Court did not reach a § 230 analysis on appeal and instead based its decision on § 2333 alone. In that § 2333 analysis, however, the Court appeared open to the notion that content platforms should not be left wide open to liability for failing to police third-party content, hinting at how it might approach a § 230 analysis in a future case.
Background of the Communications Decency Act
The Communications Decency Act was enacted in 1996 in an effort to regulate obscenity and indecency online. Section 230 was introduced as an amendment to the Act to ensure that “providers of an interactive computer service” would not be treated as publishers of third-party content, in contrast to newspapers, which may face liability for the articles they publish. The Act, however, does not give providers unqualified immunity: providers remain legally responsible for information they have developed themselves and for activities unrelated to third-party content. In 1997, in Reno v. American Civil Liberties Union, the Supreme Court struck down the Act’s indecency provisions as unconstitutional under the First Amendment but left § 230 in place. The purposes of § 230 were to promote free speech and to prevent excessive liability and regulation in a then-rapidly emerging online landscape. As our understanding and use of the internet has expanded, critics have called for § 230 to be overturned because of its wide safe harbor, which companies use as a shield against any and all liability. Proponents of § 230, on the other hand, argue that eviscerating the Act’s protections would expose all online businesses and websites to potential runaway liability, significantly and detrimentally curtailing online speech and activity.
Twitter v. Taamneh
Twitter, et al. v. Taamneh, et al. was a case that some legal commentators speculated could threaten the protections of § 230 and give the Supreme Court an opportunity to eviscerate the safe harbor that content platforms currently enjoy. The case arose from a 2017 attack on a nightclub in Istanbul, Turkey, carried out by Abdulkadir Masharipov, a terrorist claiming to be associated with the so-called Islamic State of Iraq and Syria (ISIS). Masharipov killed 39 people, including Nawras Alassaf.
Alassaf’s family subsequently brought suit against Twitter, Facebook, and Google under § 2333, which allows U.S. nationals injured by an act of international terrorism to file a civil suit for damages. The family sought to hold Twitter, Facebook, and Google (via YouTube) liable under § 2333 for hosting terrorism-related content uploaded by third-party users. They alleged that the three companies aided and abetted ISIS by knowingly allowing ISIS to use “their platforms and ‘recommendation’ algorithms as tools for recruiting, fundraising, and spreading propaganda.” The crux of the plaintiffs’ argument was that the companies should be liable under § 2333 because they allegedly knew about terrorism-related content and promoted it to certain users. They also alleged that the companies profited from the content by placing advertisements alongside tweets, posts, and videos.
The Supreme Court unanimously agreed that hosting such content and failing to take action against either the content or the users who uploaded it (e.g., by banning those users) did not give rise to liability for aiding and abetting terrorism. The Court held that the plaintiffs failed to state a claim under § 2333 that Twitter, Facebook, or Google aided and abetted ISIS in the terrorist attack on the nightclub.
Importantly, the Court, without reaching a § 230 analysis, appeared to support the idea of a safe harbor for content providers hosting material uploaded by third-party users, even when such content is elevated via algorithm. It opined that holding Twitter, Facebook, and Google liable in this instance “would effectively hold any sort of communication provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them.” (Twitter, et al. v. Taamneh, et al., 598 U.S. ___, 27 (2023).)
The Court noted that the platforms were not used to plan or coordinate any attacks, nor did Twitter, Facebook, or Google give ISIS any “special treatment or words of encouragement,” perhaps leaving open the possibility that it could decide differently on different facts. The Court suggested that a different outcome might be possible if a platform were to selectively promote terrorist content, potentially supporting a finding that the platform culpably assisted a terrorist group. If, for example, a platform manually promoted such content through human employee intervention, a court might find that liability attaches. While it remains to be seen whether the Supreme Court would keep § 230 protections intact in a direct analysis, the Twitter decision at least suggests that, in a § 230 analysis, the Court does not believe platforms should bear liability for hosting third-party content, using promotional algorithms, and profiting from the same.