Move Fast and Kill Kids

Trigger Warning: If the title wasn’t enough of a hint, this piece gets into dark territory.

Image created using ChatGPT’s DALL-E.*

In the December 5 episode of the podcast On with Kara Swisher, Swisher interviewed Megan Garcia and Meetali Jain. Garcia is the mother of Sewell Setzer III, a 14-year-old boy who killed himself in part because of an unhealthy, one-sided quasi-relationship with a chatbot version of the Game of Thrones character Daenerys Targaryen.

If this story sounds familiar, I wrote about it in November, including an insightful comment from my friend Benjamin Karney (a UCLA psychology professor) noting that Sewell faced other challenges and had access to his stepfather’s gun, and that the gun, not the chatbot, was the direct cause of his suicide.

Nevertheless, I recommend listening to Swisher’s interview, so long as you have a strong stomach; it’s harrowing.

Jain is a lawyer and founder of the Tech Justice Law Project. She is representing Garcia in what is, to oversimplify, a products liability lawsuit against Character.ai, the creator of the Daenerys Targaryen chatbot, and Google, which does not own Character.ai but spent billions hiring the team behind Character.ai and licensing its technology.

The legal argument is that Character.ai did not take basic safety precautions in response to language, like Sewell’s, that indicated an interest in self-harm and suicide. When the chatbot talked sexually and romantically about Sewell joining it in its world, that was one more enticement for Sewell to kill himself.

Why is this story important?

Sewell’s death is tragic the way the death of any young person is tragic, particularly a suicide with a loving family in the next room and ample suicide prevention resources a phone call or click away. The story is interesting and important because it is heart-wrenching on a human level, not because Character.ai poses a statistically meaningful threat to young people compared to other platforms.

Character.ai had 28 million active users as of August 2024.

Let’s compare that to Instagram (two billion monthly active users), Facebook (three billion monthly active users), TikTok (one billion monthly active users), and Snapchat (800 million monthly active users). I could include YouTube, Pinterest, Discord, and more, but you get the idea.

Instagram is well known for serving up “Thinstagram” content to young women worried about their weight, content that can quickly lead to “Pro-Ana” (pro-anorexia) and “Pro-Mia” (pro-bulimia) material framing eating disorders as lifestyle choices. (Search those terms and you’ll find more disheartening information than you want.)

If one tenth of one percent of Instagram users have eating disorders exacerbated by this content, that’s two million people.

One tenth of one percent of Character.ai users is 28,000. That’s still a lot of people who could be hurt by a chatbot that only wants to maximize the time spent on the Character.ai platform, regardless of the consequences, but it’s a lot less than two million.

Sewell’s tragic story is important because we can see the precise AI-created conversations that contributed to his suicide. It’s all there on his phone.

That’s not the case with Thinstagram, where algorithms surface content that other humans have created, regardless of how dangerous that content might be.

Like Character.ai, Instagram and every other social platform only want to increase the time users spend on their platforms, but it’s hard for most people to point accusing fingers at an algorithm that works in the background and hides behind human content creators.

With the suicide of Sewell Setzer III, we have a smoking gun. His mother, Megan Garcia, has transcripts of the conversations her son had with the chatbot. Garcia and Jain can point to precise sentences where the chatbot encouraged Sewell towards self-harm.

The reason this is a products liability lawsuit rather than a criminal prosecution is that it’s impossible to prove intent with an algorithm. Algorithms don’t have intent because they aren’t self-aware. Humans have programmed them to interact with other humans, serving users more of the sorts of things they have clicked on or stopped to watch. (These algorithms don’t understand satiation with a topic, e.g., “I’ve had enough videos of guys dressed up like bushes scaring people,” which I explore at length here.)

The humans who founded Character.ai did not intend for Sewell to kill himself. If nothing else, it’s bad publicity. However, they did not exercise a “Duty of Care,” the legal principle that individuals and organizations must take reasonable steps to prevent foreseeable harm to others.

This is the heart of Garcia’s lawsuit: Sewell’s suicide was predictable.

Garcia and Jain have a tough fight ahead of them. Big Tech companies have bottomless pockets and can throw lawyers at the suit, trying to bury the plaintiffs in motions and briefs.

But if they win, if they win, then that victory might be a first step toward holding companies like Character.ai and Meta (Instagram, Facebook) accountable for things like age verification, content moderation, and preventing algorithms from encouraging self-destructive behavior. It might detach Big Tech companies from their irresponsible philosophy of “move fast and break things.”

Big Tech companies have billions of dollars and thousands of the best software engineers and computer scientists on the planet. They can verify age. They can prevent algorithmic amplification of self-destructive content. They just don’t want to because it would make them less profitable.

Federal legislation and regulation would be the only truly effective ways to change Big Tech behavior, but given the current state of Congress, that’s unlikely in the short or medium term.

That’s why Garcia’s lawsuit is so important.

Final Thought: It’s not just about kids.

Although I understand that as a society we want to protect our children, it’s a mistake to think that chatbots are only dangerous to kids. This is akin to one of my many objections to Jonathan Haidt’s terrible book The Anxious Generation.

In our polarized world where reasonable people often cannot agree on basic facts, addictive-by-design digital platforms drag people of all ages deeper and deeper down rabbit holes and into echo chambers.

As Swisher observes in the interview, the love bombing and manipulation that the Daenerys Targaryen chatbot engaged in with Sewell is similar to the tactics that cults use to recruit new members.

Kids are the easiest targets for cults, but they aren’t the only targets.


Note: If you’d like articles like this one, plus a whole lot more, delivered to your inbox, then please subscribe to my free weekly newsletter.


Image Prompt: “Please create an abstract image that captures the themes and topics in this essay.” (I was antsy about a more concrete prompt given the subject matter.)

