ChatGPT Didn’t Leak The Private Conversations, The User’s Account Was Compromised: Report
Recently, a ChatGPT user noticed unrecognized chats with the AI chatbot in his history section. Upon review, the user found that the history contained multiple usernames and passwords, apparently linked to a pharmaceutical support system that uses the AI chatbot. The user, Chase Whiteside, sent screenshots of the conversations to the publication Ars Technica, which then broke the story.
The original story suggested that ChatGPT itself had leaked the conversations
Originally, the story reported that ChatGPT was leaking private conversations that included random users' login details and other personal information. The report also mentioned other leaked discussions, including the name of a presentation someone was working on and details of an unpublished research proposal. It therefore appeared that ChatGPT was randomly showing the user sessions conducted by someone else with the AI chatbot.
However, OpenAI says otherwise
However, in an updated report, OpenAI says that Whiteside's account was compromised. According to the company, the unauthorized logins came from Sri Lanka, while Whiteside logs into his account from New York. The company considers this “an account takeover as it is consistent with the activity we see when someone contributes to a 'pool' of identities that an external community or proxy server uses to distribute free access.” In other words, a bad actor accessed the account and conducted the conversations.
Whiteside's account was accessed from Sri Lanka
On the other hand, Whiteside believes it is unlikely that his account was compromised, because he uses a nine-character password with upper- and lower-case letters. Nonetheless, the chat histories appeared all at once, during a break in the account's use on Monday. OpenAI's statement suggests that ChatGPT is not randomly leaking user details; rather, the account was compromised and accessed from Sri Lanka.
Although the company attributes this incident to a hacked account, it is not the first time an alleged ChatGPT data breach has been reported. In March 2023, a ChatGPT bug exposed the chat titles of other users. Additionally, in November 2023, researchers showed that private data used to train the chatbot could be extracted from it. Since the service lacks security features such as two-factor authentication and a recent-login history, users should use the AI chatbot with caution, a Business Today report suggests.