June 10, 2025

UK campaigners urge regulators to restrict Meta’s use of AI in potentially unsafe applications

Campaigners are urging Ofcom to limit Meta’s AI-driven risk assessments, warning that without human oversight and accountability they could weaken child safety standards and undermine the intent of the UK Online Safety Act.

Internet safety campaigners are urging Ofcom, the UK’s communications regulator, to scrutinize Meta’s use of AI for risk assessments under the Online Safety Act, particularly with respect to child safety and illegal content. Their concerns center on whether AI-led evaluations can meet the rigorous standards the Act requires. Campaigners warn that over-reliance on automated systems may lead to inadequate content moderation, insufficient protection for minors, and failure to identify harmful material. They are calling for greater transparency, human oversight, and clear accountability to ensure that AI technologies used by major platforms such as Meta do not undermine the legislation’s intent.
