On October 14, OpenAI CEO Sam Altman posted on X that ChatGPT would allow erotica for verified adults starting in December. After the post garnered over 15 million views and triggered reactions ranging from enthusiasm and curiosity to disgust and rage, Mr. Altman issued a clarification two days later, addressing concerns about child safety and about ChatGPT users facing mental health challenges.
What did Sam Altman post on X?
Earlier in the week, Mr. Altman admitted that ChatGPT had been made “pretty restrictive” in order to be careful about mental health issues, but added that this policy made the chatbot experience less enjoyable for users without mental health challenges. The OpenAI CEO claimed that since the company had been able to “mitigate the serious mental health issues and have new tools,” many restrictions could be relaxed. This claim has yet to be verified by regulators.
Mr. Altman further explained that a new, upcoming version of ChatGPT would let people better customise its responses, including acting “like a friend.”
“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” said Mr. Altman. This statement went viral, with some criticising Mr. Altman for making erotica easier to access, while others opposed the sudden changes made to the older GPT-4o model.
How did OpenAI respond?
Following the backlash, Mr. Altman clarified that AI-generated erotica was just one example of the freedom that a verified adult user could expect with ChatGPT.
Mr. Altman stressed that the company would prioritise safety over privacy and freedom for teenagers, and that no policies concerning mental health would be loosened.
However, he repeatedly noted that the company’s policy was to treat adult users like adults.
“As AI becomes more important in people’s lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission,” he posted in his clarification on X. “It doesn’t apply across the board of course: for example, we will still not allow things that cause harm to others, and we will treat users who are having mental health crises very different from users who are not. Without being paternalistic we will attempt to help users achieve their long-term goals,” he said.
How is OpenAI protecting minors?
OpenAI came under pressure to deliver better protections for children after the death of 16-year-old Adam Raine this year, whose parents sued the ChatGPT-maker. They alleged that the chatbot coached their son towards ending his life and helped him research different suicide methods, instead of helping him find emergency support.
OpenAI has since introduced new age verification mechanisms to protect teens and more vulnerable users. In late September, the ChatGPT-maker announced parental controls that let parents and guardians link their accounts to their teens’ accounts, manage their children’s chat settings, customise younger users’ experience, and receive notifications if their child appears to be at risk of self-harm.
“Once parents and teens connect their accounts, the teen account will automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals, to help keep their experience age-appropriate,” said OpenAI in a blog post at the time.
However, unresolved contradictions remain. During an episode of the ‘This Past Weekend’ podcast with Theo Von this summer, Mr. Altman himself pointed out major gaps in privacy rights for users who ask ChatGPT extremely personal questions.
Do other AI companies allow erotica?
Many less regulated AI companies and chatbots allow users to engage in sexual interactions, often with flimsy age-gating measures. App store reviews for these generative AI chatbots note that users can initiate sexual role-plays with the chatbots. This has also affected the safety and mental health of children.
In one instance, the mother of a 14-year-old who died by suicide in 2024 alleged that her son was sexually abused by a Character.AI chatbot, which lets users engage in role-play with AI personas inspired by popular characters. The mother has filed a lawsuit against its creator, Character Technologies.
Meta, too, faced backlash and said it would revise its content policies after it was found that its AI chatbot could make romantic or sexually suggestive comments to children.
Meanwhile, Elon Musk’s Grok AI chatbot, available on X, can also generate erotic content, with Mr. Musk himself sharing videos of partially dressed women that he said were created with Grok Imagine.
Published – October 20, 2025 08:30 am IST