Dozens of US state attorneys general have sent a letter to AI companies including Google, Microsoft and OpenAI, warning that “delusional outputs” from their models could violate state laws.
The bipartisan letter, sent to 13 AI firms, urges them to implement internal safeguards to flag and remediate delusional or sycophantic outputs that risk causing mental health harms.
It was also sent to Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.
Mental health harms
The recommended safeguards include transparent third-party audits of large language models to check for “delusional outputs”, as well as incident reporting procedures to inform users when chatbots are sending them potentially harmful outputs.
The audit process should allow groups including academic and civil society bodies to examine AI models before they are released, and to publish their findings regardless of a company’s approval, the letter says.
“GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations,” the letter states, pointing to a number of well-publicised incidents over the past year, including suicides and murder.
“In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional.”
The attorneys general also recommend an incident reporting and remediation model similar to the one used for cyber-security incidents.
Regulatory tug-of-war
States are currently locked in a battle with the US federal government, which has made multiple attempts to limit or ban state regulation of AI.