ChatGPT, the artificial intelligence chatbot developed by OpenAI, has confirmed a data breach that exposed personal information of some of its users, raising questions about the privacy and security of the popular service.
The data breach occurred on March 20, when a bug in an open-source library used by ChatGPT caused some users to see chat data belonging to others. The bug also exposed payment-related information belonging to about 1.2% of ChatGPT Plus subscribers, including names, email addresses, payment addresses, card expiration dates, and the last four digits of their card numbers.
OpenAI said it took the chatbot offline as soon as it discovered the issue and worked with the maintainers of the redis-py open-source library, where the bug originated, to patch the flaw. The company also notified the affected users and assured them that there was no ongoing risk to their data.
However, the data breach has raised concerns about the privacy and security of ChatGPT, which has become one of the fastest-growing consumer apps in history since its launch in late 2022. The chatbot uses a powerful language model that can generate realistic and engaging text based on user input. It can also integrate with various plugins that expand its capabilities, such as writing code, generating images, or composing music.
While many users have praised ChatGPT for its creativity and versatility, some have also warned about the potential misuse and abuse of the technology. For instance, ChatGPT could be used to create fake news, phishing emails, or malicious code. It could also expose sensitive or personal information from its users or from its training data.
According to a report by GreyNoise, a threat intelligence company, OpenAI introduced plugin support for ChatGPT on March 23 and published code examples to help developers build plugins for the chatbot. However, those examples included a Docker image for the MinIO distributed object storage system that is affected by a critical information-disclosure vulnerability (CVE-2023-28432), which can be exploited to obtain secret keys and root passwords. GreyNoise said it has seen attempts to exploit the vulnerability in the wild.
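The MinIO flaw GreyNoise flagged, CVE-2023-28432, is an information-disclosure bug: a vulnerable cluster deployment can return its environment variables, including the `MINIO_SECRET_KEY` and `MINIO_ROOT_PASSWORD` credentials, in an unauthenticated response. As an illustration only, assuming a defender has already captured a suspect response body and wants to check it for leaked variable names (the helper and sample data below are hypothetical, not part of MinIO or GreyNoise tooling), a minimal sketch:

```python
import json

# Variable names that CVE-2023-28432 can disclose from a vulnerable
# MinIO cluster deployment. (Assumption: the defender has a captured
# HTTP response body to inspect; no server is contacted here.)
SENSITIVE_KEYS = ("MINIO_SECRET_KEY", "MINIO_ROOT_PASSWORD")

def leaked_credentials(body: str) -> set:
    """Return the names of sensitive MinIO variables present in a response body."""
    return {key for key in SENSITIVE_KEYS if key in body}

# Offline demo with a fabricated response body.
sample = json.dumps({"MinioEnv": {"MINIO_ROOT_PASSWORD": "hunter2",
                                  "MINIO_BROWSER": "on"}})
print(leaked_credentials(sample))  # prints {'MINIO_ROOT_PASSWORD'}
```

Ad-hoc checks like this are no substitute for remediation: operators should consult MinIO's official advisory and upgrade to a patched release.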
Moreover, ChatGPT has also faced legal challenges from regulators who claim that it violates data protection laws. In early April, Italy's data protection authority, the Garante, temporarily blocked ChatGPT from operating in the country until OpenAI complies with the General Data Protection Regulation (GDPR), and the European Data Protection Board (EDPB) has since launched a task force to coordinate scrutiny of the chatbot across the EU.
According to the Garante, ChatGPT has likely processed the personal data of large numbers of people (including children) unlawfully, has produced inaccurate information about individuals, raises security concerns, and is in breach of various other GDPR requirements.
“ChatGPT poses serious risks to the privacy and security of individuals and society as a whole,” said Andrea Jelinek, chair of the EDPB. “We urge OpenAI to take immediate action to ensure that ChatGPT complies with the GDPR and respects the rights and freedoms of data subjects.”
OpenAI has not publicly commented on the regulators’ actions, but said it is committed to improving ChatGPT and addressing any issues that may arise.
“We are constantly working to make ChatGPT better and safer for our users,” said Sam Altman, CEO of OpenAI. “We appreciate the feedback and support we receive from our community and we hope to continue providing a valuable and enjoyable service.”