- Google is entering the military/government market
- New Pentagon contract allows use of Gemini for 'any lawful purpose'
- Google employees are not happy with the new contract
Google recently expanded its contract with the US Department of Defense (DoD) to provide Gemini for use in classified operations, or for “any lawful purpose,” and also backed out of a $100 million Pentagon challenge to build swarms of voice-controlled autonomous drones.
At the same time, Google is facing internal dissatisfaction with its decision to provide Gemini to the Pentagon for classified projects; the company responded by telling staff it is “proud” of the Pentagon AI contract.
So how have Google's ethics and policies evolved over time? And are they changing to allow the company to grab a highly lucrative, if ethically dubious, slice of the government pie?
Drone grounding
Google's move away from its once widely recognized motto of “Don't be evil” may look complete in the eyes of some Google employees, but this isn't the first time the company has changed its policy. Google's AI principles once stated that the company would not deploy its AI tools where they “could cause harm” and would not “design or implement” AI tools for surveillance or weapons.
Google said its withdrawal from the Pentagon's competition to create technology capable of converting spoken instructions into commands for a swarm of autonomous drones was down to a lack of resources; however, the real cause is reported to be an internal ethics review, according to Bloomberg.
This suggests, at least, that the internal ethics board is still functioning and not entirely ineffective.
On the other hand, with the company expanding Gemini's availability to classified networks, the Pentagon is free to use Gemini for “any lawful purpose.” That clause is more bark than bite.
Before the turn of the century, communications providers were under no obligation to build interception backdoors for law enforcement, but CALEA and the Patriot Act changed all that. Federal authorities were likewise once unable to legally seize data stored on servers in foreign countries, but the CLOUD Act changed that, too.
Things are only illegal until they are made legal, and vice versa, which effectively gives the Pentagon a future-proof loophole should an intended use case suddenly become legal.
The “any lawful purpose” clause therefore offers no meaningful protection against the use of AI for autonomous weapons systems or mass domestic surveillance, as Anthropic protested. It is further weakened by a clause in the contract between Google and the Department of Defense stating that the company has “no right to… veto lawful government operational decision-making,” language that OpenAI also found in its agreement with the Pentagon.
This gives the Pentagon near-total freedom over the direction it takes with Gemini in its classified projects. Mass surveillance has been going on for decades; the role of AI in all of this is simply to make it smarter, more targeted, and more efficient.
A piece of Pentagon cake
The appeal of working as a government and military contractor is simple: there's a lot of money involved. Before the ink was even dry on Anthropic's stand against certain government uses, OpenAI had expanded a shiny contract to play exactly the role Anthropic sought to avoid.
Similarly, Microsoft and Amazon have already won numerous contracts involving cloud tools, artificial intelligence, and cybersecurity, and it appears Google is playing catch-up.
Google employees have long been a sticking point when it comes to the ethics of working with the government. In 2018, employee protests led the company to abandon Project Maven, which used Google technology to analyze drone strike imagery. Those protests also gave rise to Google's now-defunct “do no harm” AI principles.
Google also faced a similar disagreement when employees objected to the company's possible involvement in providing technology to Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).
As is tradition, Google employees are once again forming digital picket lines: more than 600 have signed a letter to CEO Sundar Pichai asking him to reject any use of Google's artificial intelligence technology for military purposes.
In response, Kent Walker, Google's president of global affairs, wrote in an internal memo Tuesday, seen by The Information: “We have proudly worked with defense departments since the early days of Google and continue to believe it is important to support national security in a thoughtful and responsible manner.”