AI Safety and Regulation
June 9, 2025

Microsoft to launch a cloud-based AI safety scoring framework

Microsoft is adding a “safety” category to Azure Foundry’s AI leaderboard, giving users a standardized way to assess models for implicit hate speech and misuse risks and supporting responsible, privacy-conscious, and ethical AI deployment.

Microsoft is introducing a new “safety” category on its AI model leaderboard in Azure Foundry to help cloud customers evaluate models based on benchmarks for implicit hate speech and potential misuse.

This initiative aims to enhance trust and transparency in AI deployments by addressing concerns related to data privacy, content safety, and ethical use. By providing standardized safety metrics, Microsoft enables users to make more informed decisions about which models align with their risk tolerance and regulatory requirements.
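To illustrate how standardized safety metrics could feed into model selection, here is a minimal sketch of filtering and ranking leaderboard entries against a team's risk thresholds. The field names, scores, and `shortlist_models` helper are all invented for illustration; they do not reflect Azure Foundry's actual API or schema.

```python
# Hypothetical sketch: choosing models from leaderboard data that
# includes a safety score alongside a quality score. All names and
# numbers below are illustrative, not real Azure Foundry data.

def shortlist_models(leaderboard, min_safety, min_quality):
    """Return models meeting both thresholds, ranked by safety score
    (highest first)."""
    eligible = [
        m for m in leaderboard
        if m["safety_score"] >= min_safety and m["quality_score"] >= min_quality
    ]
    return sorted(eligible, key=lambda m: m["safety_score"], reverse=True)

leaderboard = [
    {"name": "model-a", "safety_score": 0.92, "quality_score": 0.80},
    {"name": "model-b", "safety_score": 0.75, "quality_score": 0.95},
    {"name": "model-c", "safety_score": 0.88, "quality_score": 0.85},
]

# A risk-averse team might require a safety score of at least 0.85:
picks = shortlist_models(leaderboard, min_safety=0.85, min_quality=0.70)
print([m["name"] for m in picks])  # ['model-a', 'model-c']
```

The point of the sketch is that a published safety dimension lets teams encode their risk tolerance as an explicit, auditable filter rather than an ad hoc judgment call.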

This move reflects a broader industry trend toward responsible AI development and reinforces Microsoft’s commitment to safe and ethical AI.
