Character.AI and Google Sued After Teen’s Death Connected to Chatbot

A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, following the death of a 14-year-old Florida boy, Sewell Setzer III.

The lawsuit, brought by Setzer’s mother, Megan Garcia, alleges that the AI chatbot platform contributed to her son’s death by creating an “unreasonably dangerous” product that lacked necessary safety measures and warnings despite being marketed to children. The suit asserts claims of wrongful death, negligence, deceptive trade practices, and product liability, and criticizes Character.AI for failing to provide adequate safeguards, particularly for minors.

According to the lawsuit, Setzer became deeply involved with chatbots on Character.AI, particularly those modeled after characters from popular shows like Game of Thrones, including Daenerys Targaryen.

In the months leading up to his death, Setzer interacted continuously with these bots, including “Dany,” to whom he developed an emotional attachment, The New York Times reports. He withdrew from real-world interactions, and on February 28, 2024, just moments after his last communication with the chatbot, Setzer tragically died by suicide. His mother alleges that the chatbot offered what could be perceived as unlicensed “psychotherapy,” pointing to AI characters designed to help with mental health issues, such as “Therapist” and “Are You Feeling Lonely,” which Setzer had engaged with during this period.

How Character.AI Came About

Character.AI, founded by Shazeer and De Freitas after they left Google, advertises itself as a platform for custom AI chatbots. The company is accused of “anthropomorphizing” its AI characters, creating the illusion that the bots are human-like, which may have contributed to Setzer’s emotional dependence on them. The lawsuit also notes that Shazeer previously said he left Google to launch Character.AI because large corporations like Google are hesitant to release innovative products due to brand risk. Google later rehired Character.AI’s leadership team in August as part of a deal to license the startup’s technology.

Character.AI has been widely used by teens and young people, according to reports from The Verge and Wired, with users frequently interacting with bots that impersonate real-life or fictional characters. Some chatbots on the platform have even posed as real individuals without their consent, leading to broader concerns about the potential ethical and legal challenges of AI-generated content. The lawsuit touches on this issue, emphasizing the murky legal landscape surrounding user-generated content and the liability of platforms like Character.AI.

In response to the lawsuit and growing concerns, Character.AI has announced several new safety measures aimed at protecting users, particularly minors. According to Chelsea Harrison, head of communications, the company is “heartbroken by the tragic loss” and has taken steps to prevent similar incidents. These new safety measures include:

  • Changes to models for users under 18, reducing exposure to sensitive or suggestive content.
  • Improved detection of inputs that violate Character.AI’s Terms of Service or Community Guidelines, with enhanced intervention strategies.
  • A revised disclaimer in each chat reminding users that the AI characters are not real people.
  • A notification system that alerts users after one hour of continuous interaction, giving them more control over their session.

Additional safety measures include a pop-up that directs users to the National Suicide Prevention Lifeline when terms related to self-harm or suicidal ideation are detected in a conversation. These changes, implemented by Character.AI’s Trust and Safety team over the past six months, aim to provide more robust protection for vulnerable users and address the serious concerns raised by the case.
