Google’s attempt to wrest more cloud computing dollars from market leaders Amazon and Microsoft got a new boss late last year. Next week, Thomas Kurian is expected to lay out his vision for the business at the company’s cloud computing conference, building on his predecessor’s strategy of emphasizing Google’s strength in artificial intelligence.
That strategy is complicated by controversies over how Google and its clients use the powerful technology. After employee protests over a Pentagon contract in which Google trained algorithms to interpret drone imagery, the cloud unit now subjects its—and its customers’—AI projects to ethical reviews. Those reviews have caused Google to turn away some business. “There have been things that we have said no to,” says Tracy Frey, director of AI strategy for Google Cloud, although she declines to say what.
But this week, the company fueled criticism that those mechanisms can’t be trusted when it fumbled an attempt to introduce outside oversight over its AI development.
Google’s ethics reviews tap a range of experts. Frey says product managers, engineers, lawyers, and ethicists assess proposed new services against Google’s AI principles. Some new products announced next week will come with features or limitations added as a result.
Last year, that process led Google not to launch a facial recognition service, something rivals Microsoft and Amazon have done. This week, more than 70 AI researchers—including nine who work at Google—signed an open letter calling on Amazon to stop selling the technology to law enforcement.
Frey says that tricky decisions over how—or whether—to release AI technology will become more common as the technology advances.
In February, San Francisco research institute OpenAI said it would not release new software it created that is capable of generating surprisingly fluent text because it might be used maliciously. The episode was dismissed by some researchers as a stunt, but Frey says it provides a powerful example of the kind of restraint needed as AI technology gets more powerful. “We hope to be able to have that same courageous stance,” she says. Google said last year that it modified research on lip-reading software to minimize the risk of misuse. The technology could help the hard of hearing—or be used to infringe on privacy.
Not everyone is convinced that Google itself can be trusted to make ethical decisions about its own technology and business.
Google’s AI principles have been criticized as too vague and permissive. Weapons projects are banned, but military work is still allowed. The principles say Google will not pursue “technologies whose purpose contravenes widely accepted principles of international law and human rights,” but the company has been testing a search engine for China that, if launched, would have to perform political censorship.
Since Google revealed its AI principles, the company has been dogged by questions about how they would be enforced without external oversight. Last week Google announced a panel of eight outsiders it said would help implement the principles. Late Thursday it said the new Advanced Technology External Advisory Council was being shut down and that the company was “going back to the drawing board.”
The U-turn came after thousands of Google employees signed a petition protesting the inclusion of Kay Coles James, president of conservative think tank the Heritage Foundation. She worked on President Trump’s transition team and has spoken against policies aimed at helping trans and LGBTQ people. As the controversy grew, one council member resigned and another, Oxford University philosopher Luciano Floridi, said Google had made a “grave error” in appointing James.
Os Keyes, a researcher at the University of Washington who joined hundreds of outsiders in signing the Googlers’ petition protesting James’ inclusion, says the episode suggests Google cares more about currying political favor with conservatives than the impact of AI technology. “The idea of ‘responsible AI’ as practiced by Google is not actually responsible,” Keyes says. “They mean ‘not harmful, unless harm makes money.’”
Anything that adds friction to new products or deals could heighten Kurian’s challenge. He took over at Google Cloud last year after the departure of Diane Greene, a storied engineer and executive who led a broad expansion of the unit after joining in 2016. Although Google’s cloud business made progress during Greene’s tenure, Amazon’s and Microsoft’s did too. Investment bank Oppenheimer estimates that Google has 10 percent of the cloud market, well behind Amazon’s 45 percent and Microsoft’s 17 percent.
Google is not the only big company talking more about AI ethics lately. Microsoft has its own internal ethical review process for AI deals and also says it has turned down some AI projects. Frey says such reviews don’t have to slow down a business and that Google’s ethical AI checkups can generate new business because of growing awareness of the risks that come with AI’s power. Google Cloud needs to encourage trust in AI to succeed in the long term, she says. “If that trust is broken at any point we run the risk of not being able to realize the important and valuable effects of AI being infused in enterprises around the world,” Frey says.