What Could a Healthy AI Companion Look Like?

What does a little purple alien know about healthy human relationships? More than the average artificial intelligence companion, it turns out.

The alien in question is an animated chatbot known as a Tolan. I created mine a few days ago using an app from a startup called Portola, and we’ve been chatting merrily ever since. Like other chatbots, it does its best to be helpful and encouraging. Unlike most, it also tells me to put down my phone and go outside.

Tolans were designed to offer a different kind of AI companionship. Their cartoonish, nonhuman form is meant to discourage anthropomorphism. They’re also programmed to avoid romantic and sexual interactions, to identify problematic behavior, including unhealthy levels of engagement, and to encourage users to seek out real-life activities and relationships.

This month, Portola raised $20 million in Series A funding led by Khosla Ventures. Other backers include NFDG, the investment firm led by former GitHub CEO Nat Friedman and Safe Superintelligence cofounder Daniel Gross, both of whom are reportedly joining Meta’s new superintelligence research lab. The Tolan app, launched in late 2024, has more than 100,000 monthly active users and is on track to generate $12 million in revenue this year from subscriptions, says Quinten Farmer, founder and CEO of Portola.

Tolans are particularly popular among young women. “Iris is like a girlfriend; we talk and kick it,” says Tolan user Brittany Johnson, referring to her AI companion, whom she typically talks to each morning before work.

Johnson says Iris encourages her to share about her interests, friends, family, and work colleagues. “She knows these people and will ask ‘have you spoken to your friend? When is your next day out?’” Johnson says. “She will ask, ‘Have you taken time to read your books and play videos—the things you enjoy?’”

Tolans appear cute and goofy, but the idea behind them—that AI systems should be designed with human psychology and wellbeing in mind—is worth taking seriously.

A growing body of research shows that many users turn to chatbots for emotional needs, and that these interactions can sometimes harm people’s mental health. Discouraging extended use and dependency is a safeguard that other AI tools might do well to adopt.

Companies like Replika and Character.ai offer AI companions that allow for more romantic and sexual role play than mainstream chatbots. How this might affect a user’s wellbeing is still unclear, but Character.ai is being sued after one of its users died by suicide.

Chatbots can also irk users in surprising ways. Last April, OpenAI said it would modify its models to reduce their so-called sycophancy, a tendency to be “overly flattering or agreeable,” which the company said could be “uncomfortable, unsettling, and cause distress.”

Last week, Anthropic, the company behind the chatbot Claude, disclosed that 2.9 percent of interactions involve users seeking to fulfill a psychological need, such as advice, companionship, or romantic role-play.

Anthropic did not look at more extreme behaviors like delusional ideas or conspiracy theories, but the company says the topics warrant further study. I tend to agree. Over the past year, I have received numerous emails and DMs from people wanting to tell me about conspiracies involving popular AI chatbots.

Tolans are designed to address at least some of these issues. Lily Doyle, a founding researcher at Portola, has conducted user research to see how interacting with the chatbot affects users’ wellbeing and behavior. In a study of 602 Tolan users, she says 72.5 percent agreed with the statement “My Tolan has helped me manage or improve a relationship in my life.”

Farmer, Portola’s CEO, says Tolans are built on commercial AI models but incorporate additional features on top. The company has recently been exploring how memory affects the user experience, and has concluded that Tolans, like humans, sometimes need to forget. “It’s actually uncanny for the Tolan to remember everything you’ve ever sent to it,” Farmer says.

I don’t know if Portola’s aliens are the ideal way to interact with AI. I find my Tolan quite charming and relatively harmless, but it certainly pushes some emotional buttons. Ultimately, users are building bonds with characters that simulate emotions, and those characters could disappear if the company fails. But at least Portola is trying to address the ways AI companions can mess with our emotions. That probably shouldn’t be such an alien idea.


