October 7, 2025

Google introduced the Gemini 2.5 Computer Use model

Google DeepMind’s blog post introduces the Gemini 2.5 Computer Use model, a specialized model built on Gemini 2.5 Pro that powers agents able to interact with graphical user interfaces. Rather than relying on structured APIs, agents built on the model complete tasks the way people do: by clicking, typing, and scrolling in a browser.

The model is exposed through a computer-use tool in the Gemini API and operates in an iterative loop. It receives the user’s request, a screenshot of the environment, and a history of recent actions, then emits the next UI action (such as a click or keystroke). The client executes that action, captures a fresh screenshot, and sends it back to the model, repeating until the task is complete.

The post reports strong performance on web and browser control benchmarks at lower latency than alternatives, and notes that while the model is optimized primarily for browsers, it also performs well on mobile UI control tasks.

Google says safety was designed in from the start: the model is trained to recognize high-stakes actions, each proposed action can be reviewed before execution, and developers can require end-user confirmation for sensitive steps. The model is available in preview via the Gemini API in Google AI Studio and Vertex AI.
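The observe–act loop behind such a computer-use agent can be sketched in a few lines. The following is a minimal illustration only, not the Gemini API: `model_step`, `execute_action`, and the `Action` type are hypothetical stand-ins (the stub model simply scripts a fixed click-then-type task) to show the shape of the client-side loop.

```python
from dataclasses import dataclass, field

# Hypothetical action type; the real Gemini API defines its own action schema.
@dataclass
class Action:
    kind: str                      # e.g. "click", "type", "done"
    payload: dict = field(default_factory=dict)

def model_step(goal, screenshot, history):
    """Stand-in for a call to the computer-use model.

    The real model receives the user's goal, a screenshot, and recent
    action history, and returns the next UI action. This stub scripts a
    fixed two-step task purely for illustration.
    """
    if not history:
        return Action("click", {"x": 120, "y": 80})
    if history[-1].kind == "click":
        return Action("type", {"text": "hello"})
    return Action("done")

def execute_action(action):
    """Stand-in for driving a real browser (e.g. an automation tool)."""
    return f"<screenshot after {action.kind}>"

def agent_loop(goal, max_steps=10):
    screenshot = "<initial screenshot>"
    history = []
    for _ in range(max_steps):
        action = model_step(goal, screenshot, history)
        if action.kind == "done":
            break
        screenshot = execute_action(action)  # act, then observe again
        history.append(action)
    return [a.kind for a in history]

print(agent_loop("fill in the search box"))  # → ['click', 'type']
```

A production loop would replace `model_step` with a real API call and `execute_action` with a browser driver, and would insert a safety check (e.g. user confirmation) between receiving an action and executing it.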
