How Ojje used AI for story generation to cut publishing time by 98%

Deveshi Dabbawala

August 2, 2025
Ojje creates interactive storybooks to help children build reading skills. While AI had already reduced story creation from 70 days to 2, the final publishing process still required manual selection and took up to a full day. Ojje aimed to compress this to just a few hours, but its legacy infrastructure on GCP posed performance and scalability limitations.

To accelerate operations, reduce costs, and meet growth goals, Ojje partnered with GoML to implement an end-to-end AI for story generation pipeline using AWS-native services.

The problem: slow storybook creation, infrastructure limitations

Ojje’s platform combined AI-generated narratives with high-quality illustrations to create personalized interactive books. Although generative models had reduced the initial creation time, the full publishing process still took 1–2 days due to manual refinement and infrastructure bottlenecks.

The existing GCP setup could not support the scalability and low-latency needs of mass content production. Delays in image rendering, story generation, and task coordination were hindering Ojje’s goal of global scale. Ojje needed a robust, scalable AI for story generation solution that could streamline the pipeline, reduce publishing time, and handle increasing user demand, all while maintaining high-quality storytelling and visuals.

The solution: end-to-end storybook creation pipeline with AI for story generation

GoML delivered a complete AWS-native solution that automated the full journey from story prompt to published book using AI for story generation and supporting services.

Story generation with Amazon Bedrock

GoML integrated a large language model on Amazon Bedrock to generate compelling children’s stories. Stories were automatically structured and adapted to different age groups, drastically reducing human input.
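A step like this can be sketched with the Bedrock Converse API. This is a minimal illustration, not Ojje's actual implementation: the model ID and prompt template below are assumptions, and the age-group adaptation is shown as a simple prompt parameter.

```python
# Hypothetical model ID -- swap in whichever Bedrock model the pipeline actually uses.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_story_prompt(theme: str, age_group: str) -> str:
    """Compose a structured prompt asking for an age-appropriate, paged story."""
    return (
        f"Write a children's story about {theme} for readers aged {age_group}. "
        "Return it as short numbered pages, one or two sentences per page."
    )

def generate_story(theme: str, age_group: str) -> str:
    """Call Amazon Bedrock's Converse API and return the generated story text."""
    import boto3  # imported here so the prompt helper works without AWS credentials
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(
        modelId=MODEL_ID,
        messages=[
            {"role": "user", "content": [{"text": build_story_prompt(theme, age_group)}]}
        ],
    )
    return response["output"]["message"]["content"][0]["text"]
```

Generating one story per age band from the same theme is then a loop over `build_story_prompt`, which is how a single title can fan out into many differentiated versions.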

Illustration generation with Midjourney

To accompany the stories, GoML used Midjourney to create unique, stylized illustrations aligned with each story's theme. This made every storybook visually engaging and cut illustration wait times.

Event-driven architecture with Lambda and SQS

GoML set up an Amazon SQS queue to manage task workflows for story and image creation. AWS Lambda functions processed queue events serverlessly, ensuring low-latency responses and seamless scaling.
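The SQS-to-Lambda pattern looks roughly like the handler below. The message schema (`task`, `book_id`, `payload`) is a hypothetical shape for illustration; the real queue messages would carry whatever fields the pipeline needs.

```python
import json

def process_story(task: dict) -> None:
    """Placeholder: in the real pipeline this step would invoke the story model."""
    print(f"generating story for book {task['book_id']}")

def process_image(task: dict) -> None:
    """Placeholder: in the real pipeline this step would kick off illustration."""
    print(f"generating images for book {task['book_id']}")

def handler(event: dict, context) -> dict:
    """AWS Lambda entry point for messages delivered from the SQS work queue.

    Each SQS record body is assumed to be JSON like:
    {"task": "story" | "image", "book_id": "...", "payload": {...}}
    """
    for record in event["Records"]:
        task = json.loads(record["body"])
        if task["task"] == "story":
            process_story(task)
        elif task["task"] == "image":
            process_image(task)
    return {"processed": len(event["Records"])}
```

Because SQS decouples producers from consumers, story and image tasks can be enqueued together and Lambda scales out to drain the queue, which is what keeps latency low under bursts of publishing activity.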

Scalable compute with EC2

The core application was containerized and deployed on EC2, allowing Ojje to manage variable workloads and launch updates quickly across environments.

Metadata management with MongoDB

Each story’s prompts, metadata, and decision paths were stored in MongoDB for flexible access, editing, and auditing.
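A record like that might be shaped as below. The schema is an assumption for illustration (field names, database, and collection are not from the source), with the MongoDB write kept behind a lazy import so the document helper stands alone.

```python
from datetime import datetime, timezone

def story_document(book_id: str, prompt: str, age_group: str, decisions: list) -> dict:
    """Hypothetical per-story metadata record: prompt, age band, decision paths."""
    return {
        "_id": book_id,
        "prompt": prompt,
        "age_group": age_group,
        "decision_paths": decisions,
        "created_at": datetime.now(timezone.utc),
    }

def save_story(doc: dict, uri: str = "mongodb://localhost:27017") -> None:
    """Upsert the story record into MongoDB for later editing and auditing."""
    from pymongo import MongoClient  # imported lazily so the schema helper works standalone
    client = MongoClient(uri)
    client.ojje.stories.replace_one({"_id": doc["_id"]}, doc, upsert=True)
```

Storing decision paths as a plain list in a document store is what makes auditing and editing flexible here: the schema can grow with new story formats without migrations.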

Durable image storage with S3

Illustrations were stored in Amazon S3 for high durability and fast retrieval during publishing.
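Uploading a finished illustration is a straightforward `put_object` call. The key layout below (one PNG per page, grouped by book) is an assumed convention, not Ojje's documented structure.

```python
def illustration_key(book_id: str, page: int) -> str:
    """Hypothetical S3 key layout: one PNG per page, grouped by book ID."""
    return f"illustrations/{book_id}/page-{page:03d}.png"

def upload_illustration(bucket: str, book_id: str, page: int, image_bytes: bytes) -> None:
    """Store a rendered page illustration in S3 for retrieval at publish time."""
    import boto3  # imported lazily so the key helper works without AWS credentials
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=bucket,
        Key=illustration_key(book_id, page),
        Body=image_bytes,
        ContentType="image/png",
    )
```

A predictable key scheme like this lets the publishing step fetch all pages for a book with a single prefix listing.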

Monitoring with CloudWatch

Real-time visibility into function performance, errors, and usage was enabled via CloudWatch, ensuring operational reliability.
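Beyond the built-in Lambda metrics, a pipeline like this can publish its own numbers, for example end-to-end publishing time. The metric namespace and name below are hypothetical, chosen only to illustrate the `put_metric_data` call.

```python
def publish_duration(start_ts: float, end_ts: float) -> float:
    """Seconds elapsed between pipeline start and final publish (never negative)."""
    return max(0.0, end_ts - start_ts)

def record_publish_time(seconds: float) -> None:
    """Emit a custom CloudWatch metric tracking end-to-end publishing time."""
    import boto3  # imported lazily so the duration helper works without AWS credentials
    cw = boto3.client("cloudwatch")
    cw.put_metric_data(
        Namespace="Ojje/Publishing",  # hypothetical namespace
        MetricData=[
            {"MetricName": "PublishSeconds", "Value": seconds, "Unit": "Seconds"}
        ],
    )
```

Tracking that one number over time is how a team verifies claims like "2 days to under 1 hour" instead of estimating them.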

The impact: from days to under an hour with Gen AI

Ojje achieved massive improvements in both content creation speed and operational efficiency:

  • 98% reduction in interactive storybook creation time, from 2 days to under 1 hour
  • 40% gain in efficiency through automated prompt engineering
  • 50% scalability boost with AWS-native infrastructure supporting growing demand

"With GoML and AWS, we deliver 36 differentiated versions of every story in minutes, enabling teachers to reach every learner and help students build skills," says Adrian Chernoof, Founder, Ojje.

Lessons for other publishers

Common pitfalls to avoid

  • Relying on manual review even when LLMs are integrated
  • Delaying infrastructure transition despite scaling needs
  • Treating illustrations as a static design task instead of a pipeline element

Advice for teams building AI-powered content solutions

  • Automate story and image generation with structured workflows
  • Use prompt engineering as a first-class optimization layer
  • Adopt AWS-native services to reduce latency and maximize scale

Want to reduce story creation time by 98%?

Let GoML help you implement AI for story generation, just as Ojje did.

Outcomes

98%
Reduction in interactive content creation time after moving from GCP to AWS, from 2 days to less than 1 hour
40%
Improved efficiency with automated prompt engineering
50%
Scalability boost from AWS-native infrastructure handling increased demand