Ecosystem
May 22, 2025

Amazon Bedrock prompt caching becomes generally available to reduce cost and latency

Prompt caching in Amazon Bedrock improves generative AI application performance by caching frequently repeated prompt prefixes, cutting both latency and cost. It is ideal for high-volume, production-grade generative AI use cases.

Amazon Bedrock has introduced prompt caching, now generally available, to improve the performance and efficiency of generative AI applications.

With prompt caching, frequently reused portions of a prompt, such as long system instructions or reference documents, are cached at inference time so the model skips reprocessing them on subsequent requests. This significantly accelerates response times, lowers costs, and boosts throughput for production-grade AI workflows.

Developers can enable caching through simple API parameters, retaining control and flexibility over which parts of a prompt are reused across inference requests. The feature is particularly beneficial for high-volume use cases like chatbots, knowledge assistants, and content generation platforms, where it delivers smoother, more responsive user experiences with lower infrastructure overhead.
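As a rough illustration of those API parameters, the sketch below builds a Bedrock Converse API request body in which a cache checkpoint marks a long system prompt as reusable; everything before the checkpoint can be cached across calls. The model ID and prompt text are placeholder assumptions, and the payload is only constructed here, not sent, so no AWS credentials are needed.

```python
import json

def build_converse_request(system_prompt: str, user_message: str) -> dict:
    """Build a Converse API payload that marks the system prompt as cacheable.

    The cachePoint content block tells Bedrock that everything preceding it
    is a reusable prefix eligible for prompt caching. The model ID below is
    a placeholder; substitute any caching-capable model available to you.
    """
    return {
        "modelId": "anthropic.claude-3-5-sonnet-20241022-v2:0",  # placeholder
        "system": [
            {"text": system_prompt},
            # Cache checkpoint: the prefix above is stored for reuse.
            {"cachePoint": {"type": "default"}},
        ],
        "messages": [
            {"role": "user", "content": [{"text": user_message}]},
        ],
    }

request = build_converse_request(
    "Long, reusable instructions shared by every request...",
    "Summarize today's support tickets.",
)
print(json.dumps(request, indent=2))
```

In practice this dictionary would be unpacked into a `boto3` `bedrock-runtime` client's `converse(**request)` call; repeated requests sharing the same cached prefix are then billed and processed at the reduced cached-token rate.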
