TRIGGER WARNING: PLEASE TAKE CARE WHEN READING THIS BLOG. IT CONTAINS CONTENT ABOUT SELF-HARM INVOLVING CHILDREN.
Children in Danger.
Challenges. Perhaps once associated with noble and reckless Herculean endeavors, now reduced to children harming themselves behind locked doors to gain the social approval of their peers on TikTok. There are all sorts of challenges on the internet, from ingesting things that make you sick to applying freezing or burning materials to the skin until suffering severe burns or frostbite. The more you read up on these challenges, the more concerning they become. The saddest part is that the main target audience for these challenges tends to be vulnerable individuals eager to gain the approval of their peers: children and young adults. Sadly, the most extreme of these challenges can result in death.
We won’t be providing too many examples of these challenges for fear of popularizing the trend, but we will focus on one challenge that caused the deaths of 20 children between 2021 and 2022, 15 of them under the age of 12. The danger of this particular challenge is evident from its very nature: it requires participants to self-asphyxiate until they lose consciousness.
It’s shocking that we have had to reach the point where children are accidentally hanging themselves before we begin to see any legal progress in holding these social media giants to account. Up until now, they have been comfortably protected by S.230 of the Communications Decency Act of 1996 (‘CDA’).
Social media giants can thank S.230.
S.230 of the CDA was born from an attempt to resolve the problems created by two incompatible court cases. Cubby, Inc. v. CompuServe Inc. held that an online messaging board was a passive distributor, but Stratton Oakmont, Inc. v. Prodigy Services Co. found that, because an online bulletin board ran a screening program, it exercised ‘editorial control’ in practice and should therefore be held liable as a publisher. This in turn put companies in a hopeless position: moderate content and risk being treated as a publisher, or do not moderate at all and let harmful material spread unchecked.
S.230 came to the rescue. Passed by Congress, it aimed to resolve this problem by ensuring that Internet Service Providers (ISPs) were not liable for information posted by third parties on their servers (aside, of course, from illegal content such as copyright infringement (DMCA) and sex trafficking (FOSTA-SESTA)). Furthermore, the provision also empowered ISPs by ensuring they were not liable for moderating content in good faith. One can certainly see the positive aspects of this provision: social media platforms flourish; millions of unheard voices are heard; and free speech is strengthened.
However, this has also meant that social media giants can profit from allowing dangerous and illicit content on their servers, with zero responsibility for the consequences of who might view and act on it. Historically, S.230 has been interpreted very broadly and has offered ample protection to these companies, but have AI algorithms changed everything?
AI algorithmic amplification: immunity lost?
The widespread use of AI algorithms that recommend curated content has raised the question of whether this form of ‘speech’ counts as publishing. If we compare what an AI recommendation algorithm does with what a newspaper editor does, there is very little difference: both make editorial decisions about which content to display and which to withdraw. If AI algorithmic amplification counts as publishing, then have social media giants lost their S.230 immunity?
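To make that analogy concrete, here is a minimal, purely hypothetical sketch of a “For You”-style feed builder. It is not TikTok’s actual system; the `Video` fields, the `engagement_score` weights, and the function names are illustrative assumptions. The point is simply that the platform’s own code, not the third-party poster, decides which videos are surfaced, which is the ‘editorial’ choice at issue.

```python
from dataclasses import dataclass

@dataclass
class Video:
    # Third-party content: the platform created none of these fields' values.
    video_id: str
    watch_time: float      # average seconds watched
    likes: int
    shares: int
    flagged_harmful: bool  # output of a hypothetical safety classifier

def engagement_score(v: Video) -> float:
    # The weighting is the platform's own judgment about what to promote:
    # it rewards whatever keeps users watching, regardless of what the content is.
    return 0.6 * v.watch_time + 0.3 * v.likes + 0.1 * v.shares

def build_for_you_feed(candidates: list[Video], size: int = 3) -> list[Video]:
    # An "editorial" decision expressed in code: filter, rank, and select
    # what each user is shown, unsolicited.
    eligible = [v for v in candidates if not v.flagged_harmful]
    return sorted(eligible, key=engagement_score, reverse=True)[:size]

if __name__ == "__main__":
    feed = build_for_you_feed([
        Video("dance_tutorial", watch_time=40.0, likes=120, shares=10, flagged_harmful=False),
        Video("dangerous_challenge", watch_time=55.0, likes=300, shares=90, flagged_harmful=False),
        Video("cooking_clip", watch_time=25.0, likes=80, shares=5, flagged_harmful=False),
    ])
    print([v.video_id for v in feed])
```

In this toy example the “dangerous_challenge” clip takes the top slot purely because the chosen weights favor engagement, and nothing else about the content is considered. That unsolicited promotion of whatever scores highest is exactly the kind of algorithmic choice the cases below grapple with.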
The cases of Twitter, Inc. v. Taamneh (2023) and Gonzalez v. Google (2023) argued before the Supreme Court that Twitter and Google had aided and abetted terrorists who posted content to their platforms. Both cases turned on the recommendation algorithms used by these companies, with the Google case explicitly questioning whether the S.230(c)(1) immunity of the CDA applies where an AI algorithm performs traditional editorial functions. In both cases, however, the Court resolved the matter on other grounds, so it did not address the effect of AI algorithmic amplification on S.230 immunity.
In an Amicus Curiae Brief (a brief in which a ‘friend of the court’ sets out legal recommendations in a given case) filed in Gonzalez v. Google, it was suggested that S.230 should be interpreted narrowly to preserve the states’ authority to allocate loss among parties. In that brief, the Tennessee Attorney General Jonathan Skrmetti asserted that Section 230’s far-reaching scope of immunity, as interpreted by the Ninth Circuit and other courts, prevents states from allocating “losses for internet-related wrongs.” Skrmetti would have to wait until 2024 for Anderson v. TikTok, Inc.
The case that may change everything: Anderson v. TikTok, Inc.
It all began when TikTok’s AI algorithm recommended a self-asphyxiation challenge video (of the kind discussed previously) to ten-year-old Nylah Anderson’s ‘For You Page’. Tragically, after watching this video, Nylah decided to attempt the challenge and accidentally hanged herself. Her mother brought a claim against TikTok, alleging that TikTok:
(1) was aware of the challenge;
(2) allowed users to post videos of themselves participating in the challenge;
(3) recommended and promoted the challenge to minors through the ‘For You Page’, which resulted in Nylah’s death.
Anderson’s case relied on the authoritative Supreme Court judgment in Moody v. NetChoice, which held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment. Anderson argued that if the AI algorithmic expressions of social media sites are considered a form of first-party speech, then they should be subject to both the same protections and the same liabilities as first-party speech. They would thus lose the protection offered by S.230 and be liable for speech surfaced by algorithmic recommendation. TikTok should, therefore, be held liable for its “expressive activity” through its AI algorithm and for the consequences this had for Nylah.
The case was dismissed by the district court on the grounds that TikTok was immune under S.230 of the CDA. Yet, on appeal, the Third Circuit held that while TikTok could not be held liable for hosting the challenge or for serving it in response to a search, it could be held liable for promoting the challenge unsolicited through its own algorithmic expression.
This landmark judgment could entirely change the liability systems of social media companies. If social media companies begin to be held accountable for their AI algorithmic expressions, then they would be forced to take a more human-centered approach, and ensure that their AI algorithms promote positive (or at least not harmful) behaviors. Could we be seeing the start of social media companies being held to account for the effect of their AI algorithms?
Or, more specifically, is this the end of TikTok?
Goodbye TikTok?
In the latest news, more than a dozen states have sued TikTok for helping to drive a mental health crisis amongst teenagers. The lawsuit alleges that many of TikTok’s features are highly addictive and problematic in long-term use. Specifically, the lawsuit mentions: “alerts that disrupt sleep; videos that vanish, driving users to check the platform frequently; and beauty filters that allow users to augment their appearance”. That the TikTok app was designed to be addictive is not a huge surprise when you consider it was originally conceived as a tool for Chinese Communist propaganda.
This isn’t the first time that TikTok has been accused of falling short when it comes to protecting children. TikTok has been described as a ‘repeat offender’ for its violations of children’s privacy and has faced millions of dollars in fines across Europe, the UK, and now the US. Just as we, as a society, have begun to acknowledge the immense impact that these platforms have on the health and safety of our children, and the US is consequently extending children’s online privacy protections through the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) to cover minors up to the age of 17, TikTok is barely meeting the standards we already have in place.
Following concerns that TikTok could hand over user data to the Chinese government, the U.S. has passed legislation requiring TikTok to be sold by January 2025 to a government-approved buyer or face a ban in the US. The legislation also gives the President broad powers to restrict apps tied to Russia, China, Iran, and North Korea. While TikTok’s owner, ByteDance, is challenging the US action in court, it is likely to be years before a final decision is reached. Beijing has vowed to oppose any forced sale of TikTok to an American company. So, for now, it looks as though TikTok may be on its way out (at least in the USA).
Frankly, I say good riddance. TikTok clearly has a terrible track record when it comes to protecting children, and while I can’t say that other social media platforms are all that much safer, it’s hard to find a company with a worse record. The Anderson case offers some hope that we might soon see these giants clean up their acts.
Written by Celene Sandiford, smartR AI