48% of security professionals believe AI is risky
A recent survey of 500 security professionals by HackerOne, a security research platform, found that 48% believe AI represents the most significant security risk to their organization. Their biggest concerns related to AI include:

  • Leaked training data (35%).
  • Unauthorized use (33%).
  • Hacking of AI models by third parties (32%).

These fears highlight the urgent need for companies to reassess their AI security strategies before vulnerabilities become real threats.

AI tends to generate false positives for security teams

While the full Hacker Powered Security report won’t be available until later this fall, a SANS Institute report sponsored by HackerOne found that 58% of security professionals believe security teams and threat actors could find themselves in an “arms race” to leverage generative AI tactics and techniques in their work.

Security professionals surveyed by SANS said they have had success using AI to automate tedious tasks (71%). However, the same respondents acknowledged that threat actors could exploit AI to make their operations more efficient. In particular, respondents “were most concerned about AI-driven phishing campaigns (79%) and automated vulnerability exploitation (74%).”

WATCH: Security leaders are getting frustrated with AI-generated code.

“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations, or they risk creating more work for themselves,” said Matt Bromiley, an analyst at the SANS Institute, in a press release.

The solution? AI implementations should undergo external review. More than two-thirds of respondents (68%) chose “external review” as the most effective way to identify AI safety and security issues.

“Teams are now more realistic about the current limitations of AI” than they were last year, HackerOne senior solutions architect Dane Sherrets said in an email to TechRepublic. “Humans bring a lot of important context to defensive and offensive security that AI can’t yet replicate. Issues like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great at increasing productivity and performing tasks that don’t require deep context.”

Other findings from the SANS 2024 AI Survey, released this month, include:

  • 38% plan to adopt AI within their security strategy in the future.
  • 38.6% of respondents said they had faced shortcomings when using AI to detect or respond to cyber threats.
  • 40% cite legal and ethical implications as a challenge to AI adoption.
  • 41.8% of companies have faced pushback from employees who do not trust AI decisions, something SANS speculates is due to “a lack of transparency.”
  • 43% of organizations currently use AI within their security strategy.
  • AI technology within security operations is most frequently used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48.9%).
  • 58% of respondents said AI systems struggle to detect new threats or respond to atypical indicators, which SANS attributes to a lack of training data.
  • Of the respondents who reported shortcomings when using AI to detect or respond to cyber threats, 71% said AI generated false positives.

Anthropic seeks input from security researchers on AI security measures

Generative AI maker Anthropic expanded its bug bounty program on HackerOne in August.

Specifically, Anthropic wants the hacker community to test the mitigations it uses to prevent misuse of its models, including attempting to break through the security barriers meant to stop the AI from providing recipes for explosives or cyberattacks. Anthropic says it will reward up to $15,000 to those who successfully identify new jailbreaking attacks, and it will give security researchers on HackerOne early access to its upcoming security mitigation system.
