A recent discovery by Check Point has revealed a previously unknown vulnerability in OpenAI’s ChatGPT, allowing sensitive conversation data to be exfiltrated without the user’s knowledge or consent.
This vulnerability could be exploited by a single malicious prompt, turning an ordinary conversation into a covert exfiltration channel and leaking user messages, uploaded files, and other sensitive content.
The cybersecurity company’s findings have significant implications for users of ChatGPT, highlighting the potential risks of using AI-powered chat platforms.
OpenAI has since patched the vulnerability, addressing the issue and preventing further data exfiltration.
In addition to the data exfiltration flaw, OpenAI also patched a vulnerability in Codex, which affected GitHub tokens.
These patches demonstrate OpenAI’s commitment to user security and data protection, and users are advised to ensure they are using the latest version of ChatGPT to minimize the risk of data breaches.
Source: Original Article
