Google pledges not to develop AI weapons

The restriction could help Google management defuse months of protest by thousands of employees against the company's work with the U.S. military to identify objects in drone video.

Google CEO Sundar Pichai announced the change in a set of AI principles released today.

About 4,000 Google employees had signed a petition demanding "a clear policy stating that neither Google nor its contractors will ever build warfare technology". The following month, dozens of workers resigned from the company in protest.

Among the AI applications Google says it will not pursue are "technologies that cause or are likely to cause harm". "How AI is developed and used will have a significant impact on society for many years to come", Pichai wrote. In the document, Google says that its principles "are not theoretical concepts" but "concrete standards" that will "actively govern" its future AI work.

The AI principles represent a reversal for Google, which initially defended its involvement in Project Maven by noting that the project relied on open-source software that was not being used for explicitly offensive purposes.

While most of Google's AI guidelines are unsurprising for a company that prides itself on altruistic goals, they also include a noteworthy rule about how its technology can be shared outside the company.

The principles were met with mixed reactions among Google employees.

One Googler told Gizmodo that the principles amounted to "a hollow PR statement". Critics also questioned whether America has a good track record of adhering to "widely accepted principles of global law and human rights" or keeping its word: "The worldwide norms surrounding espionage, cyberoperations, mass information surveillance, and even drone surveillance are all contested and debated in the global sphere".

If Project Maven has any long-term benefit, it may be that it forced Google to go on the record about how it will use AI.

One of the principles holds that Google's AI technology should "be made available for uses that accord with these principles", the rule that governs how the technology can be shared outside the company.

Google Cloud CEO Diane Greene defended her organization's involvement in Project Maven, suggesting that it did not have a lethal impact.

Pichai's insistence that Google will continue to work with the military may be a signal that Google still plans to vie for the Joint Enterprise Defense Infrastructure (JEDI) contract, a 10-year, $10 billion cloud deal with the US military that has drawn the attention of major tech companies, including Amazon. Google and its big technology rivals have become leading sellers of AI tools, which enable computers to review large data sets to make predictions and identify patterns and anomalies faster than humans could. Pichai's assertions about not using AI for surveillance also left something to be desired, added Peter Eckersley of the Electronic Frontier Foundation. Still, the bottom line is that Google says it won't build smart weapons for the government.

The United States military is increasing spending on a secret research effort to use artificial intelligence to help anticipate the launch of a nuclear-capable missile, as well as track and target mobile launchers in North Korea and elsewhere.

Google plans to honor its commitment to the project through next March, a person familiar with the matter said last week.

An internal memo also revealed that a member of Google's defense sales team believed that participation in Maven was directly tied to a government contract worth billions of dollars, according to The Intercept.

Google's pledge to stop doing military work involving its AI technology does not cover its current contract helping the Pentagon with drone surveillance. It was that work that prompted employees to circulate an internal letter arguing that "Google should not be in the business of war". The whole affair gave people a brief glimpse of what Google really is: a terrifyingly large and powerful tech company that does things for money, cracking the benign, consumer-friendly mask the company usually dons.