Google has told its employees it “proudly” works with the US military and will continue to do so, as the tech giant faces down opposition from hundreds of staff over a deal for its AI to be used in classified operations.
Kent Walker, Alphabet’s president of global affairs, said in a memo to staff on Tuesday: “We have proudly worked with defence departments since Google’s earliest days and continue to believe that it is important to support national security in a thoughtful and responsible way.”
“Staying engaged with governments, including on national security, will help democracies benefit from responsible technologies,” he added.
Google on Monday signed a deal with the defence department that will allow its AI technology to be used in classified operations — an extension of an existing $200mn contract to provide the Pentagon with AI tools.
The decision comes amid a clash between Anthropic and the Pentagon. Dario Amodei, chief executive of the AI start-up, has said he refused to sign a deal with the defence department unless the government guaranteed that Anthropic’s tools would not be used for mass domestic surveillance and lethal autonomous weapons.
The government has said the start-up has no right to dictate national policy and moved to cancel Anthropic’s government contracts.
Walker acknowledged in the memo that AI tools are “not appropriate for domestic mass surveillance or use in connection with autonomous weapons without appropriate human oversight”. However, he said Google would support military uses of AI “in line with the approaches of other major AI labs”.
OpenAI and Elon Musk’s xAI have struck deals similar to Google’s.
The search group’s version of the deal was signed the same day that more than 560 employees sent an open letter to chief executive Sundar Pichai, urging him to walk away from talks because of concerns that its technology could be used in “inhumane or extremely harmful ways”.
In February, employees petitioned DeepMind’s chief scientist Jeff Dean, asking him to “do everything in your power to stop any deal which crosses these basic red lines”. Dean at the time posted on X: “Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression.”
The speed of Monday’s decision has shocked researchers at Google’s DeepMind AI lab, some of whom are concerned about the lack of oversight and regulation of cutting-edge AI models that they helped to build.
One researcher told the FT that technical experts within Google were especially concerned because they are keenly aware of AI models’ limitations and feel they can no longer guarantee their technology will not be applied to dangerous use cases.
Staff are concerned that the language of the contract permits AI to be used for “any lawful governmental purpose”, people familiar with the terms said. While the terms say Google’s AI is “not intended for” domestic mass surveillance or autonomous weapons without human control, the tech company does not get to “veto” government decision-making, the people said.
Walker justified the decision by saying Google had worked on classified initiatives for government agencies in the past, including on cyber security, translation for diplomatic activities and veterans’ healthcare.
He also noted all governments “already have access to AI technology on an open source . . . basis (including for national security purposes), and are already using widely available open-source software on their own systems”.
Following the agreement, employees who oppose the deal are now regrouping around demands for more transparency and better oversight of Google’s AI products that will be used by the military, two people close to the efforts said.
Google’s response to recent employee activism is a significant change from the past. In 2018, several staff quit and thousands signed a petition against Project Maven, which used AI to improve drone strikes. Google did not renew the contract and pledged not to work on AI for weapons or surveillance.
Google said: “We are proud to be part of a broad consortium . . . providing AI services and infrastructure in support of national security.”
The company said it was committed to a “consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight”.
