- Microsoft's December 2024 complaint concerns 10 anonymous defendants
- The “hacking-as-a-service” operation stole API keys from legitimate users and bypassed content safeguards
- The complaint, filed in the Eastern District of Virginia, has led to the takedown of a GitHub repository and a website
Microsoft has accused an anonymous group of developing tools that intentionally bypass the security guardrails in its Azure OpenAI Service, which provides access to the technology behind the ChatGPT artificial intelligence tool.
In December 2024, the tech giant filed a complaint in the US District Court for the Eastern District of Virginia against 10 unnamed defendants, whom it accuses of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering law.
Microsoft claims its servers were accessed to help create “offensive” and “harmful and unlawful” content. Although the company did not elaborate on the nature of that content, it evidently warranted quick action: Microsoft had a GitHub repository taken offline and said in a blog post that the court allowed it to seize a website connected to the operation.
ChatGPT API Keys
In the complaint, Microsoft stated that it first discovered the abuse in July 2024, when it found users misusing Azure OpenAI Service API keys, the credentials that authenticate customers, to produce illicit content. An internal investigation found that the API keys at issue had been stolen from legitimate customers.
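To see why stolen keys are so damaging, it helps to know that the Azure OpenAI Service authenticates REST calls with a single `api-key` header: possession of the key is the whole proof of identity. Below is a minimal sketch of how such a request is assembled; the resource name, deployment name, and key value are hypothetical placeholders, and the request is deliberately never sent.

```python
import json
import urllib.request

# Hypothetical placeholder values for illustration only.
AZURE_RESOURCE = "example-resource"  # the customer's Azure resource name
DEPLOYMENT = "dall-e-3"              # the model deployment being called
API_KEY = "hypothetical-key-value"   # in reality, a secret credential

# Azure OpenAI authenticates each call via the "api-key" request header;
# anyone holding the key can bill usage to the legitimate customer.
url = (
    f"https://{AZURE_RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/generations?api-version=2024-02-01"
)
body = json.dumps({"prompt": "a watercolor fox"}).encode()
request = urllib.request.Request(
    url,
    data=body,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
)

# Built but intentionally not sent (urllib.request.urlopen(request) would
# transmit the key); we only inspect where the credential travels.
print(request.full_url)
```

Because nothing beyond the header ties a request to its sender, a key copied from a customer's code or configuration grants an attacker exactly the same access the customer has.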
“The precise manner in which Defendants obtained all of the API keys used to carry out the misconduct described in this lawsuit is unknown, but it appears that Defendants have engaged in a pattern of systematic API key theft that allowed them to steal Microsoft API keys from multiple Microsoft customers,” the complaint reads.
Microsoft claims that, with the ultimate goal of launching a hacking-as-a-service product, the defendants used these stolen API keys to create de3u, a client-side tool, as well as additional software that allowed de3u to communicate with Microsoft's servers.
De3u also worked to bypass the Azure OpenAI Service's built-in content filters and its subsequent review of user prompts, allowing DALL-E, for example, to generate images that OpenAI would not normally allow.
“These features, combined with Defendants' unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft's content and abuse measures,” the company wrote in the lawsuit.
Via TechCrunch