Ecosystem
May 22, 2025

Amazon Bedrock prompt caching becomes generally available to reduce cost and latency

Prompt caching in Amazon Bedrock improves generative AI app performance by reducing latency and costs through reuse of frequently repeated prompt prefixes across requests, ideal for high-volume, production-grade Gen AI use cases.

Amazon Bedrock has introduced prompt caching, now generally available, to improve the performance and efficiency of generative AI applications.

With prompt caching, the processed portion of commonly repeated prompts (such as long system instructions or shared document context) is stored, so the model can skip recomputing that prefix on subsequent requests. This significantly accelerates response times, lowers costs, and boosts throughput for production-grade AI workflows.

Developers can toggle caching settings with simple API parameters, offering control and flexibility for inference tasks. This feature is particularly beneficial for high-volume use cases like chatbots, knowledge assistants, and content generation platforms, ensuring smoother, more responsive user experiences with minimized infrastructure overhead.
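As a rough sketch of what those API parameters look like: with the Converse API, a cache checkpoint is marked by inserting a `cachePoint` content block after the reusable portion of the prompt. The model ID, system prompt, and helper function below are illustrative assumptions, not code from this announcement.

```python
# Sketch: enabling Amazon Bedrock prompt caching via the Converse API.
# The helper and its inputs are hypothetical; the cachePoint block follows
# the Converse API's content-block format.

def build_converse_request(model_id, system_prompt, user_message):
    """Assemble a Converse request that marks the long, reusable system
    prompt as a cache checkpoint, so later calls can reuse its processed
    prefix instead of recomputing it."""
    return {
        "modelId": model_id,
        "system": [
            {"text": system_prompt},
            # Content before this checkpoint is eligible for caching.
            {"cachePoint": {"type": "default"}},
        ],
        "messages": [
            {"role": "user", "content": [{"text": user_message}]},
        ],
    }

request = build_converse_request(
    "anthropic.claude-3-5-sonnet-20241022-v2:0",   # example model ID
    "You are a support assistant for ExampleCo.",  # large shared prefix
    "How do I reset my password?",
)
# With boto3 this would be sent as:
#   bedrock_runtime.converse(**request)
```

Because only the checkpointed prefix is cached, the per-user message can vary freely while the expensive shared context is reused, which is where the latency and cost savings come from.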
