Roblox, Discord, OpenAI and Google form alliance to detect AI-generated child abuse content

Tech giants Google and OpenAI have joined forces with gaming and social platforms Roblox and Discord to launch a new non-profit initiative focused on child safety online. The Robust Open Online Safety Tools (ROOST) initiative, backed by $27 million in philanthropic funding, aims to develop and distribute free, open-source AI tools for identifying and addressing child sexual abuse material.

The core mission: ROOST will develop accessible AI-powered safety technologies that any company can implement to protect young users from harmful online content and abuse.

  • The initiative will unify existing safety tools and technologies from member organizations into a comprehensive, open-source solution
  • Large language models will be leveraged to create more effective content moderation systems
  • The tools will be made freely available to companies of all sizes, democratizing access to advanced safety capabilities

Key partnerships and funding: Multiple organizations have committed resources and expertise to ensure ROOST’s success over its initial four-year period.

  • Discord is providing both funding and technical expertise from its safety teams
  • OpenAI and other AI foundation model developers will help build safeguards and create vetted training datasets
  • Various philanthropic organizations have contributed to the $27 million funding pool
  • Child safety experts, AI researchers, and extremism prevention specialists will guide the initiative

Regulatory context: The formation of ROOST comes at a time of increased scrutiny and regulatory pressure on social media platforms regarding child safety.

  • Platforms like Roblox and Discord have faced criticism over their handling of child safety issues
  • The initiative represents a proactive industry response to growing concerns about online child protection
  • Member companies are simultaneously developing their own safety features, such as Discord’s new “Ignore” function

Technical implementation: The initiative will focus on developing practical, implementable solutions while addressing complex technical challenges.

  • Existing detection and reporting technologies will be combined into a unified framework
  • AI tools will be designed to identify, review, and report harmful content
  • The exact scope and integration methods with current moderation systems are still being determined

Future implications: While ROOST represents a significant step forward in industry collaboration on child safety, several important considerations remain about its practical impact.

  • The success of the initiative will largely depend on widespread adoption by smaller platforms and companies
  • The effectiveness of AI-powered moderation tools in identifying nuanced content remains to be proven
  • Cross-platform coordination and data sharing protocols will need careful development to ensure both safety and privacy