Models
November 11, 2025

Meta Platforms releases open-source “Omnilingual ASR” for 1,600+ languages

Meta open-sourced ASR models that natively support 1,600+ languages, with zero-shot extension to 5,400+ languages, greatly expanding voice-to-text accessibility for low-resource languages.

Meta released the Omnilingual ASR model suite, a family of automatic speech recognition models supporting over 1,600 languages out-of-the-box, and designed to generalize to more than 5,400 languages via zero-shot in-context learning. The models are fully open-source under Apache 2.0, enabling commercial reuse.

The architecture pairs self-supervised speech encoders with LLM-based decoders, enabling transcription of under-represented languages that major ASR systems previously did not support.

This release marks a significant step toward inclusivity in voice AI and signals Meta’s renewed emphasis on foundational AI infrastructure.
