The San Juan Daily Star

Supreme Court frustrated and wary over legal shield for tech companies

The Supreme Court building in Washington, April 12, 2022. In a case with the potential to alter the very structure of the internet, the Supreme Court on Tuesday, Feb. 21, 2023, explored the limits of a federal law that shields social media platforms from legal responsibility for what users post on their sites.

By Adam Liptak

In a case with the potential to alter the very structure of the internet, the Supreme Court did not appear ready Tuesday to limit a law that protects social media platforms from lawsuits over their users’ posts.

In the course of a sprawling argument lasting almost three hours, the justices seemed to view the positions taken by the two sides as too extreme, giving them a choice between exposing search engines and shared Twitter posts to liability on the one hand and protecting algorithms that promote pro-Islamic State group content on the other.

At the same time, they expressed doubts about their own competence to find a middle ground.

“You know, these are not like the nine greatest experts on the internet,” Justice Elena Kagan said of the Supreme Court, to laughter.

Others had practical concerns. Justice Brett Kavanaugh, echoing comments made in briefs, worried that a decision imposing limits on the shield “would really crash the digital economy with all sorts of effects on workers and consumers, retirement plans and what have you.”

Drawing lines in this area, he said, was a job for Congress. “We are not equipped to account for that,” he said.

The federal law at issue in the case, Section 230 of the Communications Decency Act, shields online platforms from lawsuits over what their users post and the platforms’ decisions to take content down. Limiting the sweep of the law could expose the platforms to lawsuits claiming they had steered people to posts and videos that promote extremism, advocate violence, harm reputations and cause emotional distress.

The case comes as developments in cutting-edge artificial intelligence products raise profound new questions about whether old laws — Section 230 was enacted in 1996 — can keep up with rapidly changing technology.

“This was a pre-algorithm statute,” Kagan said, adding that it provided scant guidance “in a post-algorithm world.” Justice Neil Gorsuch, meanwhile, marveled at advances in AI. “Artificial intelligence generates poetry,” he said. “It generates polemics.”

The case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a restaurant in Paris during the terrorist attacks in November 2015, which also targeted the Bataclan concert hall. Eric Schnapper, a lawyer for the family, argued that YouTube, a subsidiary of Google, bore responsibility because it had used algorithms to push Islamic State group videos to interested viewers, using information that the company had collected about them.

“We’re focusing on the recommendation function,” Schnapper said.

But Justice Clarence Thomas said that recommendations were vital to making internet platforms useful. “If you’re interested in cooking,” he said, “you don’t want thumbnails on light jazz.” He later added, “I see these as suggestions and not really recommendations because they don’t really comment on them.”

Schnapper said YouTube should be liable for its algorithm, which he said systematically recommended videos inciting violence and supporting terrorism. The algorithm, he said, was YouTube’s speech and distinct from what users had posted.

Kagan pressed Schnapper on the limits of his argument. Did he also take issue with the algorithms Facebook and Twitter use to generate people’s feeds? Or with search engines?

Schnapper said all of those could lose protection under some circumstances, a response that seemed to surprise Kagan.

Justice Samuel Alito said he was lost. “I don’t know where you’re drawing the line,” he told Schnapper. “That’s the problem.”

Schnapper tried to clarify his position and in doing so revealed its breadth. “What we’re saying is that insofar as they were encouraging people to go look at things,” he said, “that’s what’s outside the protection of the statute.”

Section 230 was enacted in the infancy of the internet. It was a reaction to a decision holding an online message board liable for what a user had posted because the service had engaged in some content moderation.

The provision said, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The provision helped enable the rise of social networks like Facebook and Twitter by ensuring that the sites did not assume legal liability for every post.

Malcolm L. Stewart, a lawyer for the Biden administration, largely argued in support of the family’s position in the case, Gonzalez v. Google, No. 21-1333. He said that successful lawsuits based on recommendations would be rare but that the immunity provided by Section 230 was generally unavailable.

Kagan acknowledged that many suits would fail for reasons unrelated to Section 230. “But still, I mean, you are creating a world of lawsuits,” she said. Kavanaugh echoed the point.

Lisa S. Blatt, a lawyer for Google, said the provision gave the company complete protection from suits like the one brought by Gonzalez’s family. YouTube’s algorithms are a form of editorial curation, she said. Without the ability to provide content of interest to users, she said, the internet would be a useless jumble.

“All publishing requires organization,” she said.

A ruling against Google, she said, would either force sites to take down any content that was remotely problematic or to allow all content no matter how vile. “You have ‘The Truman Show’ versus a horror show,” she said.

Kagan asked Blatt if Section 230 would protect “a pro-ISIS” algorithm or one that promoted defamatory speech. Blatt said yes.

Section 230 has faced criticism across the political spectrum. Many liberals say it has shielded tech platforms from responsibility for disinformation, hate speech and violent content. Some conservatives say the provision has allowed the platforms to grow so powerful that they can effectively exclude voices on the right from the national conversation.
