OpenAI may soon require organizations to complete an identity verification process to access certain upcoming AI models, according to a support page published on the company's official website last week.
The procedure, referred to as "Verify Organization," is described as a "new method for developers to unlock the most advanced models and features on the OpenAI platform," according to the page. Verification requires a government-issued ID from one of the countries supported by OpenAI's API. Each ID can only verify one organization every 90 days, and not all organizations will be eligible for verification, OpenAI stated.
"At OpenAI, we take seriously the responsibility of ensuring that AI remains widely accessible while being used safely," the page reads. "Unfortunately, a small number of developers intentionally misuse our services in violation of our policies. We are introducing this verification process to reduce unsafe use of AI, while continuing to offer advanced models to a broader developer community."
The new verification process appears aimed at strengthening the security of OpenAI's products as they become more sophisticated and powerful. The company has released several reports detailing its efforts to detect and mitigate malicious uses of its models, including alleged activities by groups based in North Korea.
This move could also be intended to prevent intellectual property theft. According to a Bloomberg report earlier this year, OpenAI is investigating whether a group associated with China's DeepSeek AI Lab extracted large amounts of data through its API in late 2024, potentially to train its own models, which would be a clear violation of OpenAI's terms.
OpenAI blocked access to its services in China last summer.