Telling secrets to ChatGPT? Using it as a therapist? Your AI chats aren’t legally private, warns Sam Altman

Many users may treat ChatGPT like a trusted confidant—asking for relationship advice, sharing emotional struggles, or even seeking guidance during personal crises. But OpenAI CEO Sam Altman has warned that unlike conversations with a therapist, doctor, or lawyer, chats with the AI tool carry no legal confidentiality.

During a recent appearance on This Past Weekend, a podcast hosted by comedian Theo Von, Altman said that users, particularly younger ones, often treat ChatGPT like a therapist or life coach. However, he cautioned that the same legal safeguards that protect personal conversations in professional settings do not extend to AI.

Altman explained that legal privileges—such as doctor-patient or attorney-client confidentiality—do not apply when using ChatGPT. If there’s a lawsuit, OpenAI could be compelled to turn over user chats, including the most sensitive ones. “That’s very screwed up,” Altman admitted, adding that the lack of legal protection is a major gap that needs urgent attention.

Altman Urges New Privacy Standards for AI

Altman believes that conversations with AI should eventually be treated with the same privacy standards as those with human professionals. He pointed out that the rapid adoption of generative AI has raised legal and ethical questions that didn’t even exist a year ago. Von, who expressed hesitation about using ChatGPT due to privacy concerns, found Altman’s warning validating.

The OpenAI chief acknowledged that the absence of clear regulations could be a barrier for users who might otherwise benefit from the chatbot’s assistance. “It makes sense to want privacy clarity before you use it a lot,” Altman said, agreeing with Von’s skepticism.

Chats Can Be Accessed and Stored

According to OpenAI’s own policies, conversations from users on the free tier can be retained for up to 30 days for safety and system improvement, though they may sometimes be kept longer for legal reasons. Unlike messages on end-to-end encrypted platforms such as WhatsApp or Signal, ChatGPT conversations are readable by the company: OpenAI staff may access user inputs to improve the model or to monitor for misuse.

The privacy issue is not just theoretical. OpenAI is currently involved in a lawsuit with The New York Times, which has brought the company’s data storage practices under scrutiny. A court order related to the case has reportedly required OpenAI to retain and potentially produce user conversations, excluding those from its ChatGPT Enterprise customers. OpenAI is appealing the order, calling it an overreach.

Debate Around AI and Data Rights

Altman also highlighted that tech companies are increasingly facing demands to produce user data in legal or criminal cases. He drew a parallel to how people shifted to encrypted health-tracking apps after the U.S. Supreme Court overturned Roe v. Wade, a decision that raised fears about the digital privacy of personal choices.

While AI chatbots like ChatGPT have become a popular tool for emotional support, the legal framework surrounding their use hasn’t caught up. Until it does, Altman’s message is clear: users should be cautious about what they choose to share.



