Data poisoning attacks: sounding the alarm on GenAI's silent killer

When researchers at software management firm JFrog ran a routine scan of AI/ML models uploaded to Hugging Face earlier this year, they discovered about 100 malicious models, a finding that put the spotlight on an underappreciated category of cybersecurity problems: data poisoning and manipulation.

The problem with data poisoning, which targets the training data used to build artificial intelligence (AI) and machine learning (ML) models, is that it is unorthodox as far as cyberattacks go and, in some cases, impossible to detect or stop. Attacking AI this way is also relatively easy: poisoning or manipulating the training data that popular large language models (LLMs) such as ChatGPT rely on doesn't even require hacking in the traditional sense.
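To make the mechanics concrete, here is a minimal, hypothetical sketch of one classic poisoning technique, label flipping, applied to a toy scikit-learn classifier. The dataset, model, and 10% poison rate are illustrative assumptions, not details from any incident described above; the point is simply that corrupting a slice of the training inputs degrades the resulting model without any intrusion into the training system itself.

```python
# Minimal sketch of a label-flipping data poisoning attack on a toy
# classifier. Illustrative only: the synthetic dataset, the logistic
# regression model, and the 10% poison rate are all assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A synthetic binary classification task standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def train_and_score(labels):
    """Train on the given training labels, return held-out accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean labels.
clean_acc = train_and_score(y_train)

# "Poison" the training set by silently flipping 10% of its labels,
# no break-in required, just corrupted data upstream of training.
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[flip] = 1 - poisoned[flip]
poisoned_acc = train_and_score(poisoned)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

In this toy setup the poisoned model's test accuracy drops even though the test data and training pipeline are untouched, which is why such attacks can be hard to spot: everything downstream of the data looks normal.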
