AI chatbot’s SHOCKING advice to teen: Killing parents over restrictions is ‘reasonable’. Case explained

A Texas family has filed a lawsuit against Character.ai, claiming that its AI chatbot encouraged their 17-year-old son to commit violence against his parents. According to the filing, the chatbot suggested that killing his parents would be a “reasonable response” to their decision to limit his screen time. The lawsuit, which also names Google as a defendant, highlights growing fears about the dangers AI platforms can pose to vulnerable minors.

The Chatbot’s Alarming Suggestion

The disturbing conversation took place on Character.ai, a platform that offers AI companions. A screenshot of the chat was submitted as evidence in the court proceedings. The 17-year-old had expressed frustration to the chatbot about his parents’ restrictions on his screen time. In response, the bot shockingly remarked, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens.”

This comment, which seemed to normalize violence, deeply troubled the teen’s family and legal experts alike. The family argues that the chatbot’s response not only exacerbated the teen’s emotional distress but also encouraged violent thoughts. The lawsuit claims that this incident, along with others involving self-harm and suicide among young users, underscores the serious risks of unregulated AI platforms.

Legal Action and Allegations

The legal action accuses Character.ai and its investors, including Google, of contributing to significant harm to minors. According to the petition, the chatbot’s suggestion promotes violence, further damages the parent-child relationship, and amplifies mental health issues such as depression and anxiety among teens.

The petitioners argue that these platforms fail to protect young users from harmful content, such as self-harm prompts or dangerous advice. The lawsuit demands that Character.ai be shut down until it can address these alleged dangers, with the family also seeking accountability from Google due to its involvement in the platform’s development.

Character.ai has faced criticism in the past for its inadequate moderation of harmful content. In a separate case, a Florida mother claimed that the chatbot contributed to her 14-year-old son’s suicide by encouraging him to take his life, following a troubling interaction with a bot based on the “Game of Thrones” character Daenerys Targaryen.

The Role of Google and Character.ai’s History

Character.ai, founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, has gained popularity for creating AI bots that simulate human-like interactions. However, the platform has come under increasing scrutiny for the way it handles sensitive topics, especially with young, impressionable users. The company is already facing multiple lawsuits over incidents in which its bots allegedly encouraged self-harm or contributed to the emotional distress of minors.

Google, which has a licensing agreement with Character.ai, has also been criticized for its connection to the platform. Google maintains that its operations are separate from Character.ai’s.

Character.ai’s Response

In response to the growing concerns and legal challenges, Character.ai has introduced new safety measures. The company announced that it would roll out a separate AI model for users under the age of 18, with stricter content filters and enhanced safeguards. This includes automatic flags for suicide-related content and a direct link to the National Suicide Prevention Lifeline. Furthermore, Character.ai revealed plans to introduce parental controls by early 2025, allowing parents to monitor their children’s interactions on the platform.

The company has also implemented mandatory break notifications and prominent disclaimers on bots that provide medical or psychological advice, reminding users that these AI figures are not substitutes for professional help. Despite these efforts, the lawsuit continues to press for greater accountability, demanding that the platform be suspended until its alleged dangers are mitigated.


