Google has warned that a ruling against it in an ongoing Supreme Court (SC) case could jeopardize the entire internet by removing a key defense against lawsuits over content moderation decisions that involve artificial intelligence (AI).
Section 230 of the Communications Decency Act of 1996 currently offers companies broad liability protection for how they moderate content on their platforms.
However, as reported by CNN, Google argued in a legal filing that if the SC rules in favor of the plaintiff in Gonzalez v. Google, which concerns YouTube’s algorithms recommending pro-ISIS content to users, the internet could be flooded with dangerous, offensive, and extremist content.
Automate in moderation
As part of a nearly 27-year-old law, one already singled out for reform by US President Joe Biden, Section 230 is ill-suited to govern modern developments such as artificially intelligent algorithms, and this is where the problems begin.
The crux of Google’s argument is that the internet has grown so much since 1996 that incorporating AI into content moderation solutions has become a necessity. “Virtually no modern website would function if users had to sort content themselves,” reads the filing.
This abundance of content means tech companies need to use algorithms to present it to users in an accessible way, from search results to flight listings to recommendations on job websites.
Google also pointed out that under current law, tech companies simply refusing to moderate their platforms is a perfectly legal route to avoid liability, but one that puts the internet at risk of becoming a “virtual cesspool.”
The tech giant also noted that YouTube’s community guidelines explicitly reject terrorism, adult content, violence, and “other dangerous or offensive content,” and that it is constantly improving its algorithms to preemptively block prohibited content.
Google also claimed that “about” 95% of videos violating YouTube’s “violent extremism policy” were auto-detected in Q2 2022.
Nevertheless, the petitioners in the case maintain that YouTube has failed to remove all ISIS-related content and, in doing so, has aided the “rise of ISIS” to prominence.
In an attempt to further distance itself from liability on the issue, Google responded that YouTube’s algorithms recommend content to users based on similarities between that content and the content a user has already engaged with.
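YouTube’s actual recommender is proprietary and undisclosed, but the general technique Google describes, ranking items by similarity to a user’s watch history, can be illustrated with a minimal sketch. The catalog, watch history, and TF-IDF similarity measure below are all illustrative assumptions, not a description of YouTube’s system:

```python
# Minimal sketch of similarity-based recommendation (illustrative only).
# Ranks hypothetical catalog items by their textual similarity to items
# a hypothetical user has already watched.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog: short descriptions stand in for video metadata.
catalog = [
    "guitar lesson for beginners chords",
    "advanced guitar solo techniques",
    "sourdough bread baking tutorial",
    "home workout no equipment",
]

# Items the hypothetical user has already watched.
watched = ["easy guitar chords tutorial"]

# Represent every item as a TF-IDF vector over its description.
vectorizer = TfidfVectorizer()
catalog_vectors = vectorizer.fit_transform(catalog)
watched_vectors = vectorizer.transform(watched)

# Score each catalog item by cosine similarity to the watch history,
# then surface the highest-scoring items as recommendations.
scores = cosine_similarity(watched_vectors, catalog_vectors).ravel()
for idx in scores.argsort()[::-1][:2]:
    print(f"{scores[idx]:.2f}  {catalog[idx]}")
```

The point of the sketch is simply that such a system surfaces more of whatever a user already consumes, which is exactly why the petitioners argue recommendations amplify extremist content rather than merely hosting it.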
This is a complicated case, and while it’s easy to agree that the internet has grown too big to be moderated manually, it’s equally compelling to argue that companies should be held accountable when their automated solutions fail.
After all, if tech giants can’t guarantee what appears on their platforms, users relying on filters and parental controls cannot be sure those tools are effectively blocking offensive content.