
SECTION 230 AND GENERATIVE AI

By Manya Oberoi


The United States Supreme Court recently rejected claims in two cases, one against Google and one against Twitter, holding that the platforms could not be held liable for failing to do enough to remove terrorist-related content. While the Court declined to specifically address whether the claims were barred by Section 230 of the Communications Decency Act (“CDA”), these cases and the arguments raised in them carry significant implications for the future of such platforms and their use of generative artificial intelligence (“generative AI”). This article explores those implications and asks whether operators of generative AI tools could be shielded from liability under Section 230 of the CDA.


Section 230 of the CDA

Section 230 is a federal law that shields website platforms from liability for third-party content. It has two main parts:

  1. Section 230(c)(1) provides immunity for website platforms from liability (including for libel, slander, or other torts) for content posted by third parties like website users.
  2. Section 230(c)(2) provides immunity for website platforms that remove or restrict content that they deem to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Notably, platforms are under no obligation to regulate such content; rather, they are granted immunity from liability when they choose to do so.

Section 230 was in part a response to the New York Supreme Court’s 1995 decision in Stratton Oakmont, Inc. v. Prodigy Services Co., which held that the early message-board platform Prodigy could be liable as the publisher of harmful user-generated content because it had tried, but failed, to screen all such content from its site. To ensure that internet platforms would not be penalized for attempting to engage in content moderation, Congress enacted Section 230. Since its passage, however, Section 230 has been heavily criticized, in part because the law was written when the internet was still emerging and cannot reasonably apply “as is” to newer technologies like generative AI. (See, https://www.internetsociety.org/blog/2023/02/what-is-section-230-and-why-should-i-care-about-it/)


Recent Cases

So far, courts generally have found that Section 230 shields platform operators from liability for the posts, photos, and videos that third parties share on their services. Recently, the United States Supreme Court had a chance to opine on the liability of platform operators for failing to adequately regulate or moderate user content on their platforms.

In Twitter, Inc. v. Taamneh, decided in May, the plaintiffs alleged that Twitter did not take adequate measures against terrorist content and thereby aided and abetted certain attacks carried out by ISIS in 2017. The plaintiffs’ claims were based on the Anti-Terrorism Act (“ATA”), which allows victims to seek recovery from “any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed an act of international terrorism.” (18 U.S.C. § 2333(d)(2)). The Court held that the ATA requires that the defendant consciously, voluntarily, and culpably participate in the terrorist act in such a way as to help make it succeed. Accordingly, Twitter’s failure to moderate its platform more effectively did not meet this standard merely because the company could have taken more “meaningful” or “aggressive” action to prevent such use.


In Gonzalez v. Google LLC, also decided in May, the family of a victim of the 2015 ISIS attacks across Paris alleged that Google, through its targeted recommendation algorithms, boosted and amplified ISIS content on YouTube (which Google owns) and thereby aided and abetted the terror acts of ISIS. The Ninth Circuit had previously held that these claims were not barred by Section 230 but that the plaintiffs’ allegations failed to state a viable claim. The Supreme Court, relying on the Ninth Circuit’s decision and on its own decision in Twitter v. Taamneh, analyzed above, stated that “plaintiffs’ complaint, independent of §230, states little if any claim for relief,” and accordingly “decline[d] to address the application of §230 to [the plaintiffs].”


Importantly, however, while Gonzalez v. Google focused primarily on social media recommendations, Justice Neil M. Gorsuch briefly discussed platform liability for the use of generative AI during oral argument. In his questioning, Gorsuch used generative AI as a hypothetical example of when tech platforms would not be eligible for Section 230 protections. “Artificial intelligence generates poetry,” he said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.” (See, https://www.washingtonpost.com/technology/2023/02/21/gonzalez-v-google-section-230-supreme-court/)


Generative AI and Section 230

Whether an interactive computer service enjoys immunity under Section 230 turns on two questions:

  • Is the platform or interactive computer service responsible, in whole or in part, for the creation or development of the content at issue? If so, it is an “information content provider” and falls outside the immunity. (47 U.S.C. § 230(f)(3))
  • Does the claim seek to treat the platform “as the publisher or speaker” of that content? (Brief of Senator Ron Wyden and Former Representative Christopher Cox as Amici Curiae in Support of Respondent, available at https://www.supremecourt.gov/DocketPDF/21/21-1333/252645/20230119135536095_21-1333%20bsac%20Wyden%20Cox.pdf)

While the courts have not yet addressed Section 230’s application to generative AI, Justice Gorsuch’s line of questioning suggests that content created by generative AI would not be protected by Section 230, for two reasons. First, the output of a generative AI tool may be considered content developed, at least in part, by the platform itself, making the platform an “information content provider” outside the immunity. Second, the Federal Trade Commission (“FTC”) has warned businesses that it will attribute responsibility for users’ use of generative AI tools to the operators of those tools rather than to the users themselves.

Conclusion

Technology platforms currently bear little responsibility for the amplification of harmful or illegal content on their services. Because the Supreme Court recently declined to examine the immunities these platforms enjoy, the extent to which liability could attach to them under existing law remains untested. If the courts later limit the applicability of Section 230, a wave of litigation against companies over their use of algorithmic decision-making and content generation is likely to follow. Regardless, technology platforms should tread with caution when it comes to the use of generative AI, given the recent influx of regulations.
