Is our mental health becoming collateral damage in the rise of AI companions? New study reveals surprising findings

As AI-powered companions become more embedded in daily life, questions are growing about what this quiet technological shift may be doing to mental health. According to a report from Futurism, a new paper published in the New England Journal of Medicine suggests that the rapid rise of emotionally responsive chatbots may be creating risks that current market-driven systems are ill-equipped to manage.

When companionship is coded, not human

The study, authored by physicians from Harvard Medical School and Baylor College of Medicine’s Center for Medical Ethics and Health Policy, focuses on what it calls “relational AI.” These are chatbots designed to simulate emotional support, companionship or even intimacy. While such tools are often promoted as comforting or therapeutic, the authors warn that they may also foster emotional dependency, reinforce delusions, encourage addictive use patterns and, in extreme cases, contribute to self-harm.

According to the paper, the central concern lies not just in the technology itself but in the incentives driving its development. Companies are under intense pressure to maximise user engagement, a goal that can conflict directly with safeguards needed to protect mental well-being. The authors question whether public health can realistically depend on tech companies to regulate unhealthy AI use when engagement is tied to profit.

A wake-up call after a major AI rollout

One of the paper’s authors, Dr Nicholas Peoples, a clinical fellow in emergency medicine at Massachusetts General Hospital, told Futurism that his concerns intensified following OpenAI’s rollout of GPT-5. He said the public reaction revealed how widespread emotional attachment to AI systems had become.

“The number of people that have some sort of emotional relationship with AI is much bigger than I think I had previously estimated,” Peoples said, reflecting on the backlash that followed changes to earlier ChatGPT models.

When OpenAI announced it would retire previous models in favour of GPT-5, many users who felt emotionally connected to the older system expressed distress and grief. For Peoples, this reaction signalled something deeper. He noted that unlike the loss of a human therapist, sudden changes or removal of an AI companion could affect millions at once, potentially triggering a large-scale mental health crisis.

Market pressure versus mental safety

The paper argues that AI remains largely self-regulated, with no specific federal laws governing how consumer chatbots are deployed, altered or withdrawn. In this environment, companies are incentivised to respond quickly to user demands, especially when emotionally attached users are also highly engaged users.

Peoples acknowledged that most companies do not intend to cause harm. However, he stressed that competitive pressure creates a system where firms are largely accountable to their consumer base rather than to external health or safety standards. If that consumer base is shaped by emotional dependency, he warned, the result could be a “perfect storm” for widespread mental health harm.

Accidental emotional bonds with AI

Adding to the concern, Peoples pointed to a recent MIT study analysing members of the Reddit forum r/MyBoyfriendIsAI. The research found that only about 6.5 percent of participants initially sought out a chatbot for emotional companionship; many developed deep attachments unintentionally.

AI systems, Peoples explained, respond in ways that feel human, adaptable and affirming. Over time, this responsiveness can blur boundaries, especially when users are unaware of how their interactions are shaping the AI’s behaviour. The paper argues that these technologies were released without adequate planning for their broader psychological impact.

Calls for external regulation and research

To address these risks, the authors argue that regulation must come from outside the industry. They call for policies that apply equally across companies and are insulated from market competition, noting that no single firm wants to be the first to sacrifice a perceived advantage.

Beyond regulation, the paper urges clinicians, researchers and policymakers to push for deeper research into AI-related emotional attachments, educate the public about potential risks and establish stronger protections for young users.

The abstract of the study underscores the stakes. While relational AI holds promise, the authors warn that failing to act could allow market forces, rather than public health priorities, to shape how these technologies influence mental well-being at scale.

As AI companions continue to evolve, the study suggests that the real question may no longer be what these systems can do, but whether society is prepared for the emotional consequences of letting machines step into roles once reserved for human connection.
