Altman Warns ChatGPT Users: No Legal Confidentiality for Personal Conversations

News Desk

New York: OpenAI CEO Sam Altman has issued a stark warning about the limitations of privacy when using ChatGPT, cautioning users not to treat the AI chatbot as a therapist or source of emotional support due to the absence of legal confidentiality protections.

Speaking on the podcast This Past Weekend with Theo Von, Altman highlighted that while professionals such as doctors, therapists, and lawyers are bound by legal privilege to protect client information, AI platforms like ChatGPT are not.

“People talk about the most personal stuff in their lives to ChatGPT,” Altman said. “Young people, especially, use it as a therapist or life coach, asking things like ‘what should I do’ in their relationships. But unlike human professionals, there’s no legal protection around that information.”

Altman acknowledged that the lack of privacy safeguards for AI interactions could pose serious risks. In legal proceedings, companies like OpenAI could potentially be compelled to provide user chat logs, exposing deeply sensitive personal data.

“I think that’s very screwed up,” he said. “We should have the same concept of privacy for your conversations with AI that we do with a therapist or a lawyer.”

His comments come amid increasing scrutiny of how AI firms handle user data, particularly as tools like ChatGPT become more integrated into everyday life. Although OpenAI offers enhanced privacy settings for enterprise users, the company is currently facing a legal challenge from The New York Times.

A recent court order in the case, which OpenAI is appealing, requires the company to preserve and potentially produce user conversations as part of legal discovery. OpenAI has called the move “an overreach” and warned that it could set a precedent for broader demands on user data.

Altman acknowledged growing public concern around privacy and trust in AI systems, particularly when users engage with these tools on intimate or emotional issues. “I think it makes sense to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity,” he said.

The debate over AI and privacy comes in the context of wider societal conversations on digital rights and data security. Recent shifts in U.S. legal precedent, including the Supreme Court’s decision to overturn Roe v. Wade, have driven users toward encrypted platforms for handling personal health and lifestyle information.

As AI technologies continue to evolve and become more pervasive, experts say establishing clear legal standards for data privacy will be critical in maintaining public trust and safeguarding user rights.
