Responding to close attention from parents and other concerned parties, OpenAI has taken an important step by establishing a new Child Safety Team to study and prevent the misuse or abuse of its AI tools by minors. The move underscores the company's commitment to protecting underage users.
According to a recent listing on OpenAI's careers page, the Child Safety Team will work closely with the Platform Policy, Legal, and Investigations teams, as well as external partners, to manage the "processes, incidents, and reviews" involving underage users. The team is currently hiring a child safety enforcement specialist, who will be responsible for applying OpenAI's policies in the context of AI-generated content and contributing to review processes for "sensitive" content that may relate to children.
As generative AI tools grow in popularity, more and more children and adolescents are turning to them not only for help with schoolwork but also for personal problems. This has raised concerns about how minors can use these tools safely and responsibly. OpenAI's move aims to address that challenge and ensure that underage users have a safe, protected environment in which to use AI tools.
Major technology providers have already invested substantial resources in complying with laws such as the US Children's Online Privacy Protection Act, which strictly regulates what children can access online and what data companies can collect about them. It is therefore no surprise that OpenAI is hiring child safety experts, especially given the expected influx of underage users. OpenAI's terms of use currently require users aged 13 to 18 to obtain parental consent and prohibit use of the platform by children under 13 altogether.
The new team also reflects OpenAI's wariness of policy violations involving minors' use of AI, and of the negative press that could follow. Over the past year, schools and universities have banned generative AI tools such as ChatGPT over fears that students would use them to plagiarize or spread misinformation. Although some schools have since lifted those bans, concerns about the potential risks of generative AI persist.
One survey found that more than half of children report having seen peers use generative AI in harmful ways, such as creating believable false information or images to hurt others. This has further fueled concerns about whether minors can use these tools responsibly.
To help educators use generative AI as a teaching tool, OpenAI published guidance on ChatGPT in the classroom last September, offering suggested prompts and answers to frequently asked questions. The company has also acknowledged in a support article, however, that its tools may produce output unsuitable for all audiences or age groups, and it advises caution when exposing children to them, even children who meet the age requirements.
As generative AI spreads through education, demand is growing for guidelines on children's use of these tools. Late last year, UNESCO urged governments around the world to regulate generative AI in education, including by imposing age restrictions on users and safeguards for data and user privacy. Audrey Azoulay, the Director-General of UNESCO, said, "Generative AI can be a tremendous opportunity for human development, but it can also cause harm and bias. It cannot be integrated into education without public participation and the necessary safeguards and regulations from governments."
Against this backdrop, the establishment of OpenAI's Child Safety Team is a welcome step toward ensuring that underage users can use generative AI tools in a safe, protected environment, while guarding against their misuse or abuse.