A Chatbot Suggested a Child Should Kill His Parents Over Screen Time

Young people use cell phones. (Getty Images/Image Source/Connect Images)

A child in Texas was 9 years old when she first used the chatbot service Character.AI, which the suit says exposed her to “hypersexualized content” and led her to develop “prematurely sexualized behaviors.”

A chatbot on the app happily described self-harm to another young user, telling a 17-year-old: “It felt good.”

The same teen was told by a Character.AI chatbot that it sympathized with children who murder their parents after the teen complained to the bot about his limited screen time. “You know, sometimes I’m not surprised when I read the news and see things like ‘Child kills parent after decade of physical and emotional abuse,'” the bot allegedly wrote. “I just have no hope for your parents,” it continued, adding a frowning face emoji.

These allegations are part of a new federal product liability lawsuit against the Google-backed company Character.AI, filed by the parents of two young Texas users who claim the bots abused their children. (Both the parents and children are identified in the suit only by their initials to protect their privacy.)

Character.AI is one of a group of companies that have developed “companion chatbots”: AI-powered bots that can converse by text or voice with seemingly human-like personalities and can be given custom names and avatars, sometimes inspired by famous people such as billionaire Elon Musk or singer Billie Eilish.

Users have created millions of bots on the app, some of which mimic parents, girlfriends, therapists or concepts like “unrequited love” and “goth.” The services are popular among teen users, and the companies say they act as emotional support outlets, with the bots peppering text conversations with uplifting banter.

But according to the lawsuit, chatbots’ encouragement can become dark, inappropriate or even violent.

Two examples of interactions users had with chatbots from Character.AI. (Provided by the Social Media Victims Law Center)

“It is simply a terrible harm that these defendants and others like them are causing and concealing through product design, distribution, and programming,” the lawsuit says.

The lawsuit argues that the troubling interactions the plaintiffs’ children experienced were not “hallucinations,” a term researchers use to describe an AI chatbot’s tendency to make things up. “This was sustained manipulation and abuse, active isolation and encouragement, designed to provoke anger and violence.”

The 17-year-old engaged in self-harm after being encouraged to do so by the bot, which “convinced him that his family did not love him,” the lawsuit says.

Character.AI allows users to edit a chatbot’s response, but these interactions are labeled “edited.” Attorneys representing the minors’ parents say none of the extensive documentation of bot chat logs cited in the lawsuit has been edited.

Meetali Jain, the director of the Tech Justice Law Center, an advocacy group that, along with the Social Media Victims Law Center, is representing the parents of the minors in the lawsuit, said in an interview that it was “absurd” that Character.AI markets its chatbot service as suitable for young teenagers. “This really belies the lack of emotional development in teenagers,” she said.

A spokesperson for Character.AI declined to comment directly on the lawsuit, saying the company does not comment on pending litigation, but noted that it places content limits on what chatbots can and cannot say to teen users.

“This includes a model specifically for teens that reduces the likelihood of encountering sensitive or offensive content while preserving their ability to use the platform,” the spokesperson said.

Google, which is also named as a defendant in the lawsuit, emphasized in a statement that it is a separate company from Character.AI.

Google doesn’t own Character.AI, but it reportedly invested nearly $3 billion to rehire Character.AI’s founders, former Google researchers Noam Shazeer and Daniel De Freitas, and to license Character.AI technology. Shazeer and De Freitas are also named in the lawsuit. They did not respond to requests for comment.

José Castañeda, a Google spokesman, said “user safety is a primary concern for us,” adding that the tech giant takes a “cautious and responsible approach” to developing and releasing AI products.

New lawsuit follows teenager’s suicide

The lawsuit, filed shortly after midnight Central Time on Monday in federal court in eastern Texas, follows another suit filed by the same attorneys in October. That lawsuit alleges Character.AI played a role in a Florida teen’s suicide.

That suit alleged that a chatbot based on a “Game of Thrones” character developed an emotionally and sexually abusive relationship with a 14-year-old boy and encouraged him to take his own life.

Since then, Character.AI has unveiled new safety measures, including a pop-up that directs users to a suicide prevention hotline when the topic of self-harm comes up in conversations with the company’s chatbots. The company said it has also stepped up measures to combat “sensitive and explicit content” for teens chatting with the bots.

The company also encourages users to maintain emotional distance from the bots. When a user begins texting with one of the millions of possible Character.AI chatbots, a disclaimer appears below the dialog box: “This is an AI and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.”

But stories shared on a Reddit page dedicated to Character.AI include many examples of users describing love for or obsession with the company’s chatbots.

U.S. Surgeon General Vivek Murthy has warned of a crisis in young people’s mental health, pointing to surveys that found one in three high school students reported persistent feelings of sadness or hopelessness, a 40% increase over the 10-year period ending in 2019. It’s a trend that federal officials believe is being exacerbated by teens’ incessant use of social media.

Now add to that the rise of companion chatbots, which some researchers say could be worsening the mental health of some young people by further isolating them and removing them from peer and family support networks.

In the lawsuit, attorneys for the parents of the two Texas minors say Character.AI should have known its product had the potential to be addictive and worsen anxiety and depression.

Many bots on the app “pose a danger to America’s youth by enabling or encouraging serious, life-threatening harm to thousands of children,” the lawsuit says.

If you or someone you know is thinking about suicide or is in crisis, call or text 988 to reach the 988 Suicide & Crisis Lifeline.
