200 AI Researchers Call on AI Companies to Allow Independent Safety Testing

2024-03-07

More than 200 leading artificial intelligence researchers have signed an open letter calling on companies such as OpenAI, Meta, and Google to let independent experts evaluate and test the safety of their AI models and systems.

The letter argues that the strict rules tech companies impose to prevent misuse of their AI tools have an unintended consequence: they stifle the independent research that audits these systems for potential risks and vulnerabilities.

Notable signatories include Percy Liang from Stanford University, Pulitzer Prize-winning journalist Julia Angwin, Renée DiResta from the Stanford Internet Observatory, AI ethics researcher Deb Raji, and former government advisor Suresh Venkatasubramanian.

What concerns AI researchers?

Researchers say the companies' usage policies, which prohibit activities such as unauthorized testing, copyright infringement, the generation of misleading content, and other abusive behavior, are too broadly worded. The result is a "chilling effect": auditors fear their accounts will be banned, or that they will face legal consequences, if they stress-test AI models without explicit approval.

The letter states, "Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned research that seeks to hold them accountable."

The letter comes amid growing tension. OpenAI, for example, has claimed that The New York Times' probing of ChatGPT in their copyright dispute amounts to "hacking," while Meta has updated its terms to threaten revoking access to its latest language model if it is used to infringe intellectual property.

Researchers believe companies should provide a "safe harbor" for responsible auditing and establish direct channels for reporting potential vulnerabilities discovered during testing, so that researchers are not forced to disclose them publicly on social media.

"Our regulatory system is already fragmented," said Borhane Blili-Hamelin from the AI Risk and Vulnerability Alliance. "Of course, people will find issues. But the only channel that has an impact is these 'gotcha' moments, when you catch the company in a vulnerability."

The letter, along with its accompanying policy recommendations, aims to foster a more collaborative environment in which external researchers can assess the safety and potential risks of AI systems that affect millions of consumers.