Ecosystem
May 22, 2025

Amazon Bedrock prompt caching becomes generally available to reduce cost and latency

Prompt caching in Amazon Bedrock improves generative AI application performance by reducing latency and cost through reuse of frequently repeated prompt context across requests, making it ideal for high-volume, production-grade generative AI use cases.

Amazon Bedrock has introduced prompt caching, now generally available, to improve the performance and efficiency of generative AI applications.

With prompt caching, frequently repeated portions of a prompt, such as long system instructions or reference documents, are cached at inference time so the model can skip reprocessing them on subsequent requests. This significantly accelerates response times, lowers costs, and boosts throughput for production-grade AI workflows.

Developers enable caching by marking cache checkpoints in their requests with simple API parameters, offering control and flexibility over inference. The feature is particularly beneficial for high-volume use cases such as chatbots, knowledge assistants, and content generation platforms, delivering smoother, more responsive user experiences with minimal infrastructure overhead.
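As a rough sketch of what such a request might look like with the Bedrock Converse API: the caching mechanism below assumes the `cachePoint` content block, and the model ID, prompt text, and helper function are illustrative assumptions; checkpoint placement rules and minimum cacheable token counts vary by model.

```python
# Sketch: building an Amazon Bedrock Converse API request with a prompt-cache
# checkpoint. The long, static system prompt before the checkpoint becomes
# eligible for caching, so repeated requests can reuse it.

def build_cached_request(model_id, system_prompt, user_text):
    """Return kwargs for bedrock_runtime.converse() with the static
    system prompt marked as cacheable (hypothetical helper)."""
    return {
        "modelId": model_id,
        "system": [
            {"text": system_prompt},
            # Content before this checkpoint is eligible for caching
            {"cachePoint": {"type": "default"}},
        ],
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
    }

request = build_cached_request(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",   # example model ID
    "You are a support assistant for ExampleCo.",  # long, reused prefix
    "How do I reset my password?",
)

# To invoke (requires AWS credentials and boto3):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
# The response's usage section may then report cache read/write token counts.
```

Because the checkpoint sits after the system prompt but before the per-user message, only the stable prefix is cached while each user's question is still processed fresh.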

#AWS
