Business Problem
- Manual content generation was time-intensive, limiting the platform's ability to scale and personalize content.
- Curating high-quality, contextually relevant images for articles posed significant challenges.
- Delivering tailored travel itineraries and recommendations required substantial manual intervention, limiting user engagement potential.
Solution
GoML collaborated with Curly Tales to deliver a Proof of Concept (PoC) built on Generative AI. The solution addressed these challenges through the following components and supporting tech stack:
Article Outline Generation
- Technology Used: Claude 3.5 (LLM) with OpenSearch Vector Database
- Developed a Retrieval-Augmented Generation (RAG) pipeline to generate structured, detailed article outlines based on user input.
- Added source references to each generated outline.
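For illustration, the sketch below shows how such an outline call might look: Claude 3.5 on Amazon Bedrock drafts the outline from a topic plus a set of already-retrieved reference snippets. The model ID, reference fields, and helper name are assumptions made for this sketch, not the PoC's actual code.

```python
import json

import boto3

# Assumed Bedrock model ID for Claude 3.5 Sonnet; the PoC may use a different version.
CLAUDE_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def generate_outline(topic: str, references: list[dict]) -> str:
    """Draft a structured article outline that cites the retrieved source snippets."""
    # Each reference is assumed to look like {"title": ..., "url": ..., "snippet": ...}.
    context = "\n\n".join(
        f"[{i + 1}] {r['title']} ({r['url']})\n{r['snippet']}"
        for i, r in enumerate(references)
    )
    prompt = (
        f"Using only the sources below, write a detailed, sectioned article outline about: {topic}\n"
        f"Cite sources as [n] next to each section.\n\nSources:\n{context}"
    )
    response = bedrock.invoke_model(
        modelId=CLAUDE_MODEL_ID,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```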
AI-Powered Itinerary Planner
- Technology Used: Claude 3.5 (LLM) with OpenSearch Vector Database
- Built an API endpoint to generate personalized travel itineraries using Curly Tales' extensive content database.
- Provided tailored recommendations to improve user satisfaction and engagement.
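A minimal sketch of what such an endpoint could look like, using FastAPI; the route, request fields, and the plan_itinerary stub are illustrative assumptions rather than the actual API contract.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ItineraryRequest(BaseModel):
    destination: str
    days: int
    interests: list[str] = []


def plan_itinerary(req: ItineraryRequest) -> str:
    # Placeholder: in the PoC this step would retrieve matching Curly Tales content
    # from the vector database and ask the LLM to assemble a day-by-day plan.
    return f"{req.days}-day itinerary for {req.destination} covering {', '.join(req.interests)}"


@app.post("/itinerary")
def create_itinerary(req: ItineraryRequest) -> dict:
    """Return a personalized, day-by-day travel itinerary."""
    return {"itinerary": plan_itinerary(req)}
```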
Infrastructure
- Technology Used: AWS
- Ensured scalability, reliability, and seamless integration with Curly Tales’ existing systems.
Image Generation
- Technology Used: Stable Diffusion 3 (Image Model)
- Leveraged AI to create high-quality, contextually aligned images that enhance the visual appeal and engagement of articles.
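A minimal sketch of the image step, assuming the openly released Stable Diffusion 3 Medium weights loaded through the diffusers library; the PoC may instead call a hosted image-generation endpoint.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Assumes local SD3 Medium weights and a CUDA GPU; both are assumptions for this sketch.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")


def illustrate(article_title: str, scene_brief: str):
    """Generate a contextually aligned header image (returns a PIL image)."""
    prompt = f"Editorial travel photo for an article titled '{article_title}'. {scene_brief}"
    return pipe(prompt=prompt, num_inference_steps=28, guidance_scale=7.0).images[0]


illustrate("48 Hours In Jaipur", "Golden-hour view of Hawa Mahal with a busy bazaar street").save("header.png")
```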
Simple User Interface (UI)
- Technology Used: React
- Designed a lightweight and intuitive UI, enabling users to input topics and keywords and receive AI-generated article outlines and visuals effortlessly.
Architecture
Data Sources and Knowledge Base Creation
- AWS S3 Buckets: Stores raw data for processing.
- Data Mart: Aggregates and organizes data for further use.
- Knowledge Base Creation: Curates the data to form a centralized repository for vectorization.
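To make the ingestion step concrete, here is a minimal sketch that pulls raw article exports from S3 into a local staging area for the knowledge base; the bucket and prefix names are placeholders, not the project's actual resources.

```python
import os

import boto3

s3 = boto3.client("s3")

# Placeholder bucket and prefix; the project's actual S3 resources are not shown here.
RAW_BUCKET = "curlytales-raw-content"
PREFIX = "articles/"


def pull_raw_documents(local_dir: str = "staging") -> list[str]:
    """Copy raw article exports from S3 into a local staging area."""
    os.makedirs(local_dir, exist_ok=True)
    downloaded = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=RAW_BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            if obj["Key"].endswith("/"):  # skip folder markers
                continue
            target = os.path.join(local_dir, os.path.basename(obj["Key"]))
            s3.download_file(RAW_BUCKET, obj["Key"], target)
            downloaded.append(target)
    return downloaded
```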
Data Preprocessing
- Python Native Scripts: Executes preprocessing logic on raw data.
- Data Preprocessing Layer: Cleans, structures, and prepares data for vectorization.
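A rough sketch of the preprocessing idea: strip markup, normalize whitespace, and emit a structured record ready for vectorization. The cleaning heuristics shown are illustrative, not the PoC's actual rules.

```python
import html
import json
import re


def preprocess(raw_html: str, source_url: str) -> dict:
    """Clean a raw article export and structure it for vectorization (illustrative heuristics)."""
    text = html.unescape(re.sub(r"<[^>]+>", " ", raw_html))  # strip markup
    text = re.sub(r"\s+", " ", text).strip()                  # normalize whitespace
    title = text.split(".")[0][:120]                          # crude title heuristic
    return {"title": title, "url": source_url, "body": text}


if __name__ == "__main__":
    record = preprocess(
        "<h1>Hidden Cafes In Goa</h1><p>Start your day at a quiet shack in Assagao ...</p>",
        "https://example.com/hidden-cafes-in-goa",
    )
    print(json.dumps(record, indent=2))
```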
Vectorization Process
- Business Intelligence Heuristics: Applies analytical rules for data transformation.
- Chunking and Rejoining Algorithms: Optimizes data chunks for embedding.
- Metadata Processing: Annotates data with relevant metadata.
- Embedding Model: Converts processed data into vector representations.
- Storage: Uses OpenSearch Vector DB for storing vectorized data.
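The sketch below illustrates the chunk-embed-store flow, assuming Amazon Titan Text Embeddings on Bedrock and an OpenSearch index whose embedding field is mapped as knn_vector; the host, credentials, and index name are placeholders.

```python
import json

import boto3
from opensearchpy import OpenSearch
from opensearchpy.helpers import bulk

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
# Placeholder connection details; the index is assumed to map "embedding" as a
# knn_vector field of the embedding model's dimension.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}],
                    http_auth=("admin", "admin"), use_ssl=False)
INDEX = "curlytales-kb"


def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split a document into overlapping chunks sized for the embedding model."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]


def embed(text: str) -> list[float]:
    """Embed a chunk with Amazon Titan Text Embeddings on Bedrock (assumed model)."""
    resp = bedrock.invoke_model(modelId="amazon.titan-embed-text-v2:0",
                                body=json.dumps({"inputText": text}))
    return json.loads(resp["body"].read())["embedding"]


def index_document(doc: dict) -> None:
    """Chunk, embed, and bulk-index one preprocessed article."""
    actions = [
        {"_index": INDEX,
         "_source": {"text": c, "embedding": embed(c),
                     "title": doc["title"], "url": doc["url"]}}
        for c in chunk(doc["body"])
    ]
    bulk(client, actions)
```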
Query Processing
- User Input: Accepts conversational queries from users.
- Query Processing Module: Analyzes and processes user queries to extract intent.
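As an illustration of intent extraction, the sketch below asks the LLM to reduce a conversational query to a small JSON object of intent and parameters; the schema and model ID are assumptions made for this example.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
# Assumed model ID; the parameter schema below is illustrative only.
CLAUDE_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"


def extract_intent(user_query: str) -> dict:
    """Reduce a conversational query to intent plus parameters as JSON."""
    prompt = (
        "Return only JSON with keys intent (one of: outline, itinerary, image), "
        f"destination, duration_days, interests for this request:\n{user_query}"
    )
    resp = bedrock.invoke_model(
        modelId=CLAUDE_MODEL_ID,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(json.loads(resp["body"].read())["content"][0]["text"])


# e.g. extract_intent("Plan 3 laid-back days in Pondicherry around cafes and beaches")
# -> {"intent": "itinerary", "destination": "Pondicherry", "duration_days": 3, "interests": [...]}
```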
Advanced Retrieval-Augmented Generation (RAG) Pipeline
- AWS Bedrock: Powers foundational AI services and manages embeddings.
- Data Pipelines: Facilitates smooth data flow and transformation.
- Parameter Extraction and Generation Layer: Extracts parameters from queries for relevant response generation.
- Historical Chat Storage: Stores conversation history for contextual replies.
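A compact sketch of how retrieved context and stored chat history could be combined into a contextual reply; the in-memory history store stands in for whatever backend the PoC actually uses, and the model ID is assumed.

```python
import json
from collections import defaultdict

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
CLAUDE_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # assumed model ID

# Stand-in for the historical chat store; the actual storage backend is not specified here.
chat_history: dict[str, list[dict]] = defaultdict(list)


def answer(session_id: str, user_query: str, retrieved_context: str) -> str:
    """Generate a contextual reply from retrieved content plus prior turns, then persist the turn."""
    messages = chat_history[session_id] + [{
        "role": "user",
        "content": f"Context from the knowledge base:\n{retrieved_context}\n\nQuestion: {user_query}",
    }]
    resp = bedrock.invoke_model(
        modelId=CLAUDE_MODEL_ID,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": messages,
        }),
    )
    reply = json.loads(resp["body"].read())["content"][0]["text"]
    chat_history[session_id] += [
        {"role": "user", "content": user_query},
        {"role": "assistant", "content": reply},
    ]
    return reply
```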
Focused AI Agents
- Multiple AI Agents: Specialized agents handle distinct functionalities based on query parameters.
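A simple routing sketch: the extracted intent selects a specialized agent. The agent names and stubs are illustrative; the PoC's actual agents wrap the full pipelines described above.

```python
from typing import Callable


# Stubbed agents mirroring the PoC's main capabilities; real agents call the RAG,
# itinerary, and image pipelines shown earlier.
def outline_agent(params: dict) -> str:
    return f"Outline for '{params.get('topic', '')}'"


def itinerary_agent(params: dict) -> str:
    return f"{params.get('duration_days', 2)}-day plan for {params.get('destination', '')}"


def image_agent(params: dict) -> str:
    return f"Image brief: {params.get('topic', params.get('destination', ''))}"


AGENTS: dict[str, Callable[[dict], str]] = {
    "outline": outline_agent,
    "itinerary": itinerary_agent,
    "image": image_agent,
}


def route(parsed_query: dict) -> str:
    """Dispatch the parsed query (intent + parameters) to the matching specialized agent."""
    agent = AGENTS.get(parsed_query.get("intent", ""), outline_agent)
    return agent(parsed_query)
```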
Deployment and Infrastructure
- Code Repository: Centralized version control for all components.
- Docker: Containerizes applications for consistent deployment across environments.