In sum, what to know:
- Google signs with Dept. of Defense (Dept. of War) – 600 employees, including 20+ principals, directors, and VPs, sign an open letter objecting to the “any lawful government purpose” language;
- Ethics and national security revenue – As with the 2018 Project Maven backlash, debate over “classified workloads” and the dual use of technology is flaring across tech companies;
- Amicus brief from Google – In Anthropic’s ongoing legal battle with the Pentagon, Google joined Microsoft, Amazon, and Apple in arguing that companies asserting AI safety ethics should not be punished.
Similar to a recent deal struck by OpenAI’s Sam Altman, Google signed a classified agreement with the Pentagon for use of its AI models for “any lawful government purpose.” This is the same language at the center of the Anthropic legal battle with the Pentagon over its designation as a “supply chain risk.” In that high-profile case, Anthropic pushed back on the use of its Claude AI model for autonomous weapons and mass surveillance.
Immediately before Google signed with the Pentagon, approximately 600 employees – including 20+ DeepMind and Cloud principals, directors, and VPs – demanded that CEO Sundar Pichai honor the company’s principle of “avoiding the business of war,” which emerged from the 2018 Project Maven backlash and led Google to establish its “Ethical AI Principles.”
In an open letter, employees contended that current “classified workloads” make it impossible for Google to track how AI tools like Gemini are ultimately used by military and intelligence agencies. With this contract, they believe, Google is prioritizing national security revenue over ethics that ensure AI “serves humanity,” and it is possible Google’s AI could be used in “inhumane or extremely harmful ways.” The letter warned of irreparable damage to Google’s reputation and business.
In a statement to RCRTech, a Google spokesperson said, “We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security. We support government agencies across both classified and non-classified projects, applying our expertise to areas like logistics, cybersecurity, diplomatic translation, fleet maintenance, and the defense of critical infrastructure. We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security. We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”
Whether the current contract with the Pentagon runs counter to these commitments is hard to discern, since the particulars of the deal are not yet public. Based on the xAI, Anthropic, and OpenAI deals, it is reasonable to assume that the Pentagon asserts the power to use Google’s AI as it sees fit, with provisions for adjusting safety filters and integrating the AI in “classified settings” over which Google has no veto power regarding operational use.
Financial allure, but at what cost?
Industry analysts such as Alex Kantrowitz, founder of Big Technology, see the contract as a major opportunity to secure billions in revenue and establish a long-term presence in government agencies. As of publishing, Google stock had risen roughly 2% to around $351.42, a 52-week high. Speaking on CNBC, Kantrowitz noted that the government is generally “slow at implementing technology,” and therefore its AI suppliers “can stay entrenched for a very long time.” He believes that regardless of the outcome of the Anthropic lawsuit against the Pentagon and Trump administration, “Anthropic will have to claw its way back into each of the 17 agencies currently in the process of ripping [Claude] out right now.”
That opens the door for OpenAI, Google, xAI, and other tech leaders willing to accept the Pentagon’s “any lawful government purpose” language. Though they stand to benefit financially, the question remains: at what risk to reputation? Will the negative effects outweigh the revenue gained from government contracts? Case in point: Anthropic reportedly lost $150 million to $200 million in the blacklisting fallout. Standing its ground, however, inspired unprecedented consumer, enterprise, and partner loyalty – to the tune of more than $30 billion in annual revenue (surpassing OpenAI’s $25 billion) and an astounding growth rate.
Standing its ground also had the unintended consequence of unifying the tech sector, as Google, Amazon, Apple, and Microsoft publicly supported Anthropic’s legal challenge against the Trump Administration’s “supply chain risk” designation, even filing amicus briefs and expressing concern about “capricious retaliation” for asserting AI safety ethics. Microsoft, for example, warned that such government behavior could have “broad negative ramifications for the entire technology sector” by creating a “culture of coercion” where the government punishes companies that disagree with its policies around domestic mass surveillance or autonomous weaponry.
How Google will balance the pursuit of financial gain against employee and customer demands for ethical boundaries remains to be seen. It is not clear how Google – or any other company – can truly ensure its AI technology is used only for non-lethal, “lawful” applications if the government reserves the right to use it however it deems necessary.
The legal frameworks and classified agreements are still evolving, with providers like Google retaining some limited technical control while under immense pressure to comply with defense demands.