OpenAI is establishing an "independent" safety committee.
According to OpenAI's blog post, the company is transforming its Safety and Security Committee into an independent "board oversight committee" with the authority to delay model launches over safety concerns. The committee recommended creating the independent body after a 90-day review of OpenAI's "safety and security-related processes and safeguards."
The committee is chaired by Zico Kolter; its other members are Adam D'Angelo, Paul Nakasone, and Nicole Seligman. According to OpenAI, the committee will be briefed by company leadership on safety evaluations for major model releases and, together with the full board, will oversee model launches, including having the authority to delay a release until safety concerns are addressed. OpenAI's full board will also receive "regular briefings" on "safety and security matters."
Notably, the committee's members also sit on OpenAI's broader board of directors, so how independent the committee actually is, and how that independence is structured, remains unclear. (CEO Sam Altman previously sat on the committee but no longer does.)
By establishing an independent safety committee, OpenAI appears to be taking an approach similar to Meta's Oversight Board, which reviews some of Meta's content policy decisions and can make rulings the company is required to follow. None of the Oversight Board's members sit on Meta's board of directors.
The safety committee's review also helped identify "more opportunities for industry collaboration and information sharing to advance AI industry safety," OpenAI says. The company adds that it will seek "more ways to share and explain our safety work" and explore "opportunities for more independent testing of our systems."