More security flaws found in popular AI chatbots – and they could let hackers learn all your secrets

If a hacker can monitor the internet traffic between a target and the target's cloud-based AI assistant, they could easily capture the conversation. And if that conversation contained sensitive information, it would end up in the attacker's hands.

This is according to a new analysis by researchers at Ben-Gurion University's Offensive AI Research Laboratory in Israel, who found a way to mount side-channel attacks on targets using all the major Large Language Model (LLM) assistants except Google Gemini.
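The core idea behind this class of side channel is that streaming chatbots often send each response token in its own encrypted packet, and encryption hides content but not length, so a network observer can read off the length of every token. A minimal sketch of that leak is below; the packet-overhead value and helper names are illustrative assumptions, not the researchers' actual tooling.

```python
# Hedged illustration of a token-length side channel (not the researchers'
# actual code): if a streaming LLM service sends one token per encrypted
# packet, and the cipher preserves plaintext length, then observed packet
# sizes reveal per-token text lengths even though the content is encrypted.

def token_lengths_from_packets(packet_sizes, overhead):
    """Recover per-token plaintext lengths from sniffed packet sizes,
    assuming a fixed per-packet protocol/encryption overhead (in bytes)."""
    return [size - overhead for size in packet_sizes]

# Simulate what a passive eavesdropper would see for a streamed reply.
# OVERHEAD is a made-up framing/encryption constant for illustration.
tokens = ["The", " patient", " has", " cancer"]
OVERHEAD = 29
observed = [len(t.encode()) + OVERHEAD for t in tokens]  # sniffed sizes

print(token_lengths_from_packets(observed, OVERHEAD))  # → [3, 8, 4, 7]
```

The recovered sequence of lengths (3, 8, 4, 7 here) says nothing by itself, but the researchers showed that a model trained on typical assistant responses can often reconstruct the underlying text from exactly this kind of length fingerprint.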
