Google has made significant modifications to its AI principles, removing sections prohibiting the use of technology for surveillance or weapon applications.
In the original policy statement released in 2018, the company pledged not to pursue AI development likely to cause harm and not to design or deploy AI tools for weapons or surveillance. That language has now been removed.
Instead, Google now says its responsibility lies in "responsible development and deployment," which it claims will proceed only under "adequate human oversight, due diligence, and feedback mechanisms" to align with user objectives, social responsibilities, and widely accepted principles of international law and human rights.
Google has not denied adjusting its AI philosophy. In a blog post published ahead of the company's annual financial report, James Manyika, Senior Vice President at Google, and Demis Hassabis, head of the AI lab Google DeepMind, wrote that governments now need to work together to support "national security."
The post notes that technology has "evolved" since 2018, and that the principles therefore needed some fine-tuning. "Billions of people are using AI in their daily lives," it says. "AI has become a general-purpose technology, a platform numerous organizations and individuals use to build applications. It has transformed from a niche research topic in laboratories into a widespread technology like mobile phones and the internet."
The post adds that global competition around AI is intensifying in an increasingly complex geopolitical environment. The authors express the belief that "democratic countries should lead in AI development, guiding core values such as freedom, equality, and respect for human rights."
The change echoes the evolution of Google's motto. Founders Sergey Brin and Larry Page introduced the motto "Don't be evil," which was updated in 2015 to "Do the right thing."
Since then, the company has been cautious at the intersection of ethics and technology: in 2018, it dropped a contract with the U.S. Department of Defense involving AI surveillance technology after strong opposition from employees and the public. At the time, Google introduced new guidelines governing the use of its AI in defense and intelligence contracts.