June 10, 2025

UK campaigners urge regulators to restrict Meta’s use of AI in potentially unsafe applications

Campaigners are urging Ofcom to limit Meta's AI-driven risk assessments, warning that, without human oversight and accountability, they may weaken child safety standards and undermine the intent of the UK Online Safety Act.

Internet safety campaigners are urging Ofcom, the UK’s communications regulator, to scrutinize Meta’s use of AI for risk assessments under the Online Safety Act, particularly regarding child safety and illegal content. Concerns center on whether AI-led evaluations can meet the rigorous standards required by the Act. Campaigners warn that over-reliance on automated systems may lead to inadequate content moderation, insufficient protection for minors, and failure to identify harmful material. They are calling for greater transparency, human oversight, and clear accountability to ensure AI technologies used by major platforms like Meta do not undermine the intent of the legislation.
