The future of security flaw finding could be multiple LLMs working together

Most likely, the future of penetration testing and vulnerability hunting will not depend on a single AI, but on multiple AIs working together, security experts have suggested.

Researchers at the University of Illinois Urbana-Champaign (UIUC) found that a team of large language models (LLMs) working together outperformed a single AI model and significantly outperformed tools such as ZAP and Metasploit.
