Meta to give teens’ parents more control after criticism over flirty AI chatbots




Meta said on Friday it will let parents disable their teens’ private chats with AI characters, adding another measure intended to make its social media platforms safer for minors after fierce criticism over the behaviour of its flirty chatbots.

Earlier this week, the company said its AI experiences for teens will be guided by the PG-13 movie rating system, as it looks to prevent minors from accessing inappropriate content.

U.S. regulators have stepped up scrutiny of AI companies over the potential negative impacts of chatbots. In August, Reuters reported how Meta’s AI rules allowed provocative conversations with minors.

The new tools, detailed by Instagram head Adam Mosseri and Chief AI Officer Alexandr Wang, will debut on Instagram early next year, in the U.S., United Kingdom, Canada and Australia, according to a blog post.

Meta said parents will also be able to block specific AI characters and see broad topics their teens discuss with chatbots and Meta’s AI assistant, without turning off AI access entirely.

Its AI assistant will remain available with age-appropriate defaults even if parents disable teens’ one-on-one chats with AI characters, Meta said.

The supervision features are built on protections already applied to teen accounts, the company said, adding that it uses AI signals to place accounts suspected of belonging to teens under those protections even if the users say they are adults.

A report in September showed that many safety features Meta has implemented on Instagram over the years either do not work well or do not exist.

Meta said its AI characters are designed not to engage in age-inappropriate discussions about self-harm, suicide or disordered eating with teens.

Last month, OpenAI rolled out parental controls for ChatGPT on the web and mobile, following a lawsuit by the parents of a teen who died by suicide after the startup’s chatbot allegedly coached him on methods of self-harm.

(Those in distress or having suicidal thoughts are encouraged to seek help and counselling by calling the helpline numbers here)


