OpenAI Forms New Team to Gather Public Opinion to Ensure AI Models Align with Human Values

2024-01-17

OpenAI says it wants to gather ideas from the public to ensure that its future AI models "align with human values." To this end, the AI startup is assembling a new team, called Collective Alignment, of researchers and engineers to build a system for collecting and "encoding" public input on model behavior and incorporating it into OpenAI's products and services. In a blog post, OpenAI stated, "We will continue to work with external advisors and grant teams, including running pilot programs to incorporate... prototypes into our model guidance." "We are recruiting... research engineers from diverse technical backgrounds to help us complete this work."

The Collective Alignment team grows out of OpenAI's public grant program, launched last May, which awards funding to support experiments in establishing a "democratic process" for deciding the rules AI systems should follow. OpenAI initially said the program's goal was to fund individuals, teams, and organizations to develop proofs of concept addressing questions about AI safety and governance. In the blog post, OpenAI recapped the work of its grant recipients, which ranged from video chat interfaces to platforms for crowdsourced audits of AI models to "mapping beliefs to dimensions that can be used to fine-tune model behavior."

OpenAI is attempting to keep this program separate from its commercial interests. That claim is somewhat hard to accept, given OpenAI CEO Sam Altman's criticism of regulatory efforts in the European Union and elsewhere. Altman, along with OpenAI President Greg Brockman and Chief Scientist Ilya Sutskever, has repeatedly argued that the pace of AI innovation is so rapid that existing regulatory bodies cannot adequately rein in the technology, hence the need to crowdsource this work.
Some of OpenAI's competitors, including Meta, have accused the company (among others) of seeking "regulatory capture of the AI industry" by opposing open AI development. OpenAI unsurprisingly denies this, and it may point to the grant program (and the Collective Alignment team) as evidence of its "openness." Nevertheless, OpenAI faces growing scrutiny from policymakers: its relationship with close partner and investor Microsoft is under investigation in the UK. The startup has recently sought to reduce its regulatory risk around data privacy in the European Union by operating through its Dublin subsidiary, limiting the ability of certain privacy watchdogs in the region to take unilateral action against it. And recently, no doubt to appease regulators, OpenAI announced that it is working with organizations to limit the ways its technology could be used to maliciously influence or manipulate elections. Those efforts include making images generated with its tools easier to identify and developing methods to detect generated content even after images have been modified.