In sum, what to know:
- Google signs with the Dept. of Defense (Dept. of War) – roughly 600 employees, including 20+ principals, directors, and VPs, sign an open letter objecting to the “any lawful government purpose” language;
- Ethics vs. national security revenue – Echoing the 2018 Project Maven backlash, debate over “classified workloads” and the dual use of technology is flaring across tech companies;
- Amicus brief from Google – In Anthropic’s ongoing legal battle with the Pentagon, Google joined Microsoft, Amazon, and Apple in arguing that companies asserting AI safety ethics should not be punished.
Similar to a recent deal struck by OpenAI’s Sam Altman, Google signed a classified agreement with the Pentagon for use of its AI models for “any lawful government purpose.” This is the same language at the center of the Anthropic legal battle with the Pentagon over its designation as a “supply chain risk.” In that high-profile case, Anthropic pushed back on the use of its Claude AI model for autonomous weapons and mass surveillance.
Immediately before Google signed with the Pentagon, approximately 600 employees – including 20+ DeepMind and Cloud principals, directors, and VPs – demanded that CEO Sundar Pichai stick to the principle of “avoiding the business of war,” which emerged from the 2018 Project Maven controversy, after which Google established its “Ethical AI Principles.”
In an open letter, employees contended that current “classified workloads” make it impossible for Google to track how AI tools like Gemini are ultimately used by military and intelligence agencies. With this contract, they believe Google is prioritizing national security revenue over ethics that ensure AI “serves humanity,” and that it’s possible Google’s AI could be used in “inhumane or extremely harmful ways.” The letter warned of irreparable damage to Google’s reputation and business.
According to industry analysts such as Alex Kantrowitz, founder of Big Technology, the contract is a “major opportunity” to secure billions in revenue and establish a long-term presence in government agencies. As of publishing, Google stock had risen roughly 2% to around $351.42, a 52-week high.
Speaking on CNBC, Kantrowitz noted that the government is generally “slow at implementing technology,” and therefore, its AI suppliers “can stay entrenched for a very long time.” He believes that regardless of the outcome of the Anthropic lawsuit against the Pentagon and Trump administration, “Anthropic will have to claw its way back into each of the 17 agencies currently in the process of ripping [Claude] out right now.”
That opens the door for OpenAI, Google, xAI, and other tech leaders willing to accept the Pentagon’s “any lawful government purpose” language. Though they stand to benefit financially, the question remains: at what cost to reputation? Will the negative effects outweigh the revenue gained from government contracts? Case in point: Anthropic reportedly lost $150 million to $200 million in the blacklisting fallout; however, standing its ground inspired unprecedented consumer, enterprise, and partner loyalty – to the tune of more than $30 billion in annual revenue (surpassing OpenAI’s $25 billion), with an astounding growth rate.
Additionally, Google, Amazon, Apple, and Microsoft publicly supported Anthropic’s legal challenge against the Trump Administration’s “supply chain risk” designation, filing amicus briefs and expressing concern about “capricious retaliation” for asserting AI safety ethics. Microsoft, for example, warned that such government behavior could have “broad negative ramifications for the entire technology sector” by creating a “culture of coercion” where the government punishes companies that disagree with its policies around domestic mass surveillance or autonomous weaponry.
Whether the current contract with the Pentagon runs counter to these positions is hard to discern, since the particulars of the deal are not yet known. Based on the xAI, Anthropic, and OpenAI deals, it is presumed that the Pentagon asserts the power to use Google’s AI as it sees fit, with provisions for adjusting safety filters and integrating the AI in “classified settings” where Google lacks veto power over operational use. Again, that is an assumption Google has neither confirmed nor denied. RCRTech, as of publishing, has asked Google for a statement. This story will be updated accordingly.