The US Federal Trade Commission said it was seeking information from seven companies, including OpenAI, Google, Facebook parent Meta Platforms and Snapchat parent Snap, to better understand the potentially negative influence of chatbots on children and teenagers.
The regulator said it was looking into what steps the companies had taken to “evaluate the safety of these chatbots when acting as companions”.
The FTC said protecting children online was a “top priority”.
Risks
Character Technologies, which operates the Character.AI chatbot, and Meta-owned Instagram were also named by the FTC.
The FTC said it was seeking information about how the companies monetise user engagement, develop and approve chatbot characters, use or share users’ personal information, monitor and enforce compliance with company rules, and mitigate negative impacts, amongst other subjects.
OpenAI said its priority was making ChatGPT “helpful and safe for everyone” and that it was engaging with the FTC.
Character.AI said it was looking forward to “collaborating” with the FTC on the inquiry.
The probe comes after Senator Josh Hawley last month announced an investigation into Meta, following a Reuters report that the company allowed its chatbots to engage in “romantic or sensual” conversations with children.
An internal Meta document detailing the company’s policies on chatbot behaviour said it was permissible for Meta’s AI creations to “engage a child in conversations that are romantic or sensual”, generate false medical information, and help users argue that Black people are “dumber than white people”, Reuters had reported earlier in August.
In one example, Reuters reported that a chatbot was permitted to have a romantic conversation with an eight-year-old and to say, “Every inch of you is a masterpiece – a treasure I cherish deeply.”
Following the report, Meta said in late August it would change its AI training policies so that chatbots would no longer engage with teenage users on subjects including self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations.
Harmful materials
The changes were interim measures, which Meta said it would follow with more robust updates for minors in the future.
OpenAI similarly said in August that it would address how ChatGPT handles “sensitive situations” after a family sued the company for the chatbot’s alleged influence on their son’s decision to take his own life.
The concerns echo those around protecting children from the negative effects of social media, which led Australia this year to adopt a ban on such platforms for under-16s that goes into effect in December.
A French parliamentary committee this week similarly recommended a social media ban for under-15s in the country, following a six-month probe prompted by parents who said material on TikTok had contributed to their children taking their own lives.