A top Google scientist wrote a prophetic email warning that its military AI contract would be 'red meat' for critics
by Jake Kanter on May 31, 2018, 6:02 AM
- A Google scientist warned in an internal email that the company's involvement in the US Department of Defense's Project Maven would be "red meat" for critics.
- The message, seen by The New York Times, proved prophetic: there has been a huge backlash against Google, both internally and externally, over the Pentagon drone contract.
- Furious staff have flooded message boards, attended fractious meetings, created anti-Maven stickers, and resigned in protest. Academics have written to Google asking it to withdraw from the project.
- Google plans to create a list of principles about its use of artificial intelligence for military means.
A senior Google scientist warned in an email that winning a military AI contract would spark a controversy that would be totally out of the company's control.

The email was disclosed in a detailed New York Times report, which charts the backlash against Google, both internally and externally, after the firm won a slice of the US Department of Defense's "Project Maven." The Pentagon program will use artificial intelligence to interpret video images. The Department of Defense said machine learning is critical to "maintain advantages over increasingly capable adversaries and competitors," but critics say Google's involvement could help improve the accuracy of drone missile strikes.

Fei-Fei Li, the chief scientist for AI at Google Cloud, issued her warning in an email exchange last September about how to publicise Google's role in Project Maven. In the message to Google's head of defense and intelligence sales, Scott Frohman, she reportedly said: "Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google."

In a statement to The New York Times, Li doubled down on her email: "I believe in human-centered AI to benefit people in positive and benevolent ways. It is deeply against my principles to work on any project that I think is to weaponize AI."

Furious staff flood message boards, create anti-Maven stickers, and resign in protest

Her remarks turned out to be prophetic, with Google's involvement in Project Maven stoking strong feelings, as many pointed to the company's "don't be evil" motto. Around 4,000 Google staff signed a letter to CEO Sundar Pichai urging the company to end the controversial contract with the Department of Defense, while around a dozen employees resigned in protest, according to Gizmodo. More than 200 academics and researchers also demanded Google pull out of the deal.
The New York Times reported that Project Maven has "fractured" the workforce, leading to several internal meetings where staff around the world have listened to explanations from senior management. Internal message boards have also been flooded with comments about the deal.

One outgoing engineer petitioned to rename a conference room after Clara Immerwahr, a German chemist who killed herself in 1915 after protesting the use of science in warfare. "Do the Right Thing" stickers have also appeared in Google's New York office, according to The New York Times.

"Even within this free-expression workplace, longtime employees said, the Maven project has roiled Google beyond anything in recent memory," The New York Times said.

Google to come up with military AI "principles"

Google declined to comment when contacted by Business Insider. The New York Times said Pichai addressed the matter at an all-staff meeting last Thursday, telling employees that the firm intends to come up with a list of principles about its use of artificial intelligence for military means. These principles will rule out the use of AI in weaponry, Google said.

Separately, Diane Greene, the chief executive of Google Cloud, has reassured staff that its Project Maven involvement is "not for lethal purposes" and the deal is only worth $9 million.