Hacker creates fake memories in ChatGPT to steal victims' data, but it might not be as bad as it seems

Security researchers have exposed a vulnerability that could allow threat actors to store malicious instructions in a user's memory settings in the MacOS ChatGPT application.

A report by Johann Rehberger in Embrace the red It was observed how an attacker could trigger a quick injection to take control of ChatGPT and then insert a memory into its long-term storage and persistence mechanism. This leads to the exfiltration of the conversation on both sides directly to the attacker's server.

© 2024 Telegraph247. All rights reserved.
Designed and developed by Telegraph247