Enhancing Content Creation and Personalization for Curly Tales with Generative AI

Curly Tales, a leading food, travel, and lifestyle storytelling platform, engages its audience with curated experiences and insights across cities. With a vast repository of user interaction data and content, Curly Tales sought to revolutionize its operations through GenAI-driven automation and personalization.

Business Problem

  • Manual content generation was time-intensive, limiting the platform’s ability to scale and personalize content.
  • Curating high-quality, contextually relevant images for articles posed significant challenges.
  • Delivering tailored travel itineraries and recommendations required substantial manual intervention, limiting user engagement potential.

Solution

GoML collaborated with Curly Tales to deliver a Proof of Concept (PoC) built on Generative AI. The solution addressed these challenges through the following components, supported by a modern tech stack:

Architecture

  • Data Sources and Knowledge Base Creation
    – AWS S3 Buckets: Stores raw data for processing.
    – Data Mart: Aggregates and organizes data for further use.
    – Knowledge Base Creation: Curates the data to form a centralized repository for vectorization.
  • Data Preprocessing
    – Python Native Scripts: Executes preprocessing logic on raw data.
    – Data Preprocessing Layer: Cleans, structures, and prepares data for vectorization.
  • Vectorization Process
    – Business Intelligence Heuristics: Applies analytical rules for data transformation.
    – Chunking and Rejoining Algorithms: Optimizes data chunk sizes for embedding.
    – Metadata Processing: Annotates data with relevant metadata.
    – Embedding Model: Converts processed data into vector representations.
    – Storage: Uses OpenSearch Vector DB for storing vectorized data.
  • Query Processing
    – User Input: Accepts conversational queries from users.
    – Query Processing Module: Analyzes and processes user queries to extract intent.
  • Advanced Retrieval-Augmented Generation (RAG) Pipeline
    – AWS Bedrock: Powers foundation model services and manages embedding generation.
    – Data Pipelines: Facilitates smooth data flow and transformation.
    – Parameter Extraction and Generation Layer: Extracts parameters from queries for relevant response generation.
    – Historical Chat Storage: Stores conversation history for contextual replies.
  • Focused AI Agents
    – Multiple AI Agents: Specialized agents handle distinct functionalities based on query parameters.
  • Deployment and Infrastructure
    – Code Repository: Centralized version control for all components.
    – Docker: Containerizes applications for consistent deployment across environments.
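The chunking-and-rejoining step in the vectorization process can be sketched as follows. This is a minimal illustration only: the `max_chars` and `overlap` parameters are assumptions for demonstration, not Curly Tales' actual business-intelligence heuristics.

```python
def chunk_text(text: str, max_chars: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks sized for an embedding model.

    Overlapping windows preserve context across chunk boundaries so that a
    sentence split mid-chunk still appears whole in at least one chunk.
    Parameter values here are illustrative assumptions.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so consecutive chunks share a context window.
        start = end - overlap
    return chunks


def rejoin_chunks(chunks: list[str], overlap: int = 50) -> str:
    """Reconstruct the original text by dropping each chunk's leading overlap."""
    if not chunks:
        return ""
    return chunks[0] + "".join(c[overlap:] for c in chunks[1:])
```

Each resulting chunk would then be passed through the embedding model and stored in the OpenSearch Vector DB alongside its metadata.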
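The RAG pipeline's combination of retrieved context and historical chat storage might assemble a model prompt along these lines. The function name, prompt layout, and limits are hypothetical sketches, not the production implementation.

```python
def build_rag_prompt(
    query: str,
    retrieved_chunks: list[str],
    chat_history: list[tuple[str, str]],
    max_context: int = 3,
    history_turns: int = 4,
) -> str:
    """Assemble a prompt from retrieved chunks and recent chat history.

    `retrieved_chunks` would come from a vector similarity search (e.g. against
    OpenSearch); `chat_history` is a list of (role, message) pairs. All names
    and limits here are illustrative assumptions.
    """
    # Keep only the top-ranked chunks to stay within the model's context window.
    context = "\n---\n".join(retrieved_chunks[:max_context])
    # Include only the most recent conversation turns for contextual replies.
    history = "\n".join(f"{role}: {msg}" for role, msg in chat_history[-history_turns:])
    return (
        f"Context:\n{context}\n\n"
        f"Conversation so far:\n{history}\n\n"
        f"User: {query}\nAssistant:"
    )
```

The assembled prompt would then be sent to a foundation model via AWS Bedrock, with the exchange appended back into historical chat storage.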
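Routing between the focused AI agents could look like this minimal dispatch sketch, assuming the query-processing module has already extracted parameters such as an intent and a city. The intent names and agent behaviors here are hypothetical.

```python
# Hypothetical registry mapping an extracted intent to a specialized agent.
# Real agents would invoke their own RAG pipelines; these stubs just echo.
AGENTS = {
    "itinerary": lambda p: f"Building an itinerary for {p['city']}",
    "food": lambda p: f"Recommending restaurants in {p['city']}",
}


def route_query(params: dict) -> str:
    """Dispatch extracted query parameters to the matching focused agent."""
    agent = AGENTS.get(params.get("intent"))
    if agent is None:
        raise ValueError(f"No agent registered for intent {params.get('intent')!r}")
    return agent(params)
```

Keeping agents behind a single dispatch table makes it straightforward to add new specializations without touching the query-processing layer.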

Outcomes

Time Savings: Automation of content and image generation significantly reduced manual effort.

Higher Engagement: Personalized content and itineraries resulted in improved user interaction metrics.

Cost Reduction: Streamlined operations lowered costs associated with manual content creation and personalization.