⚡ Tech Giants Join Forces to Shield Children from AI's Reach
posted 23 Apr 2024
Artificial intelligence developers including OpenAI, Anthropic, Stability AI, Google, Meta, and Microsoft have pledged to strengthen safeguards for children in the development and deployment of their AI models. The initiative is led by Thorn and All Tech Is Human, two nonprofits dedicated to child safety.
Under the commitments, the companies pledge to proactively address child-safety risks and to vet their training data more rigorously for child sexual abuse material and other harmful content. The developers have also vowed to harden their models against exploitation.
OpenAI representatives pointed to their existing efforts in this area, including limiting model capabilities to prevent the generation of unwanted content, setting age restrictions on applications, and collaborating with international child-protection organizations. The company says it is now prepared to implement further changes.
As a result of the agreement, developers are likely to fine-tune their language and other models rather than simply filter out prohibited answers, which may affect the quality of responses to queries of all kinds.