Business Problem
- Inefficient Data Retrieval: Underwriters had to manually search for relevant guidelines, wasting time and slowing decision-making.
- Delayed Underwriting Decisions: The lack of a centralized, conversational interface led to inefficiencies in handling queries.
- Scattered Document Access: Important documents were stored across multiple systems, making retrieval cumbersome.
- Scaling Challenge: As the number of underwriting requests increased, manual processes became unsustainable.
About Ledgebrook
Ledgebrook’s underwriters needed a way to quickly access underwriting guidelines and compare them with submitted documents. goML developed an AI-powered chatbot to streamline the underwriting process.
Solution
goML developed an intelligent chatbot that combines generative AI with search capabilities:
- Conversational Interface for Underwriters: A chatbot was built on Amazon Bedrock, enabling underwriters to retrieve policy and document information via natural language queries.
- Seamless API Integration: The chatbot communicated with Ledgebrook’s existing document processing systems via AWS Lambda.
- Automated Document Lookups: Integrated with OpenSearch Serverless, the chatbot fetched relevant documents instantly.
- Secure & Scalable Deployment: The solution was hosted within a VPC-based AWS infrastructure, ensuring data security and high availability.
- Policy & Document Comparison: Underwriters could compare guidelines with submitted documents stored in Amazon S3 and categorized using Amazon RDS for PostgreSQL.
- Search & Retrieval Optimization: Document vectorization in OpenSearch improved response accuracy for underwriting queries.
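The retrieval optimization above hinges on comparing a query embedding against document embeddings. A minimal sketch of that ranking logic, using toy 3-dimensional vectors and cosine similarity (all names and values here are illustrative, not Ledgebrook’s production code):

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_documents(query_vec: list[float],
                   doc_vecs: dict[str, list[float]],
                   top_k: int = 2) -> list[str]:
    """Return the top_k document ids ranked by similarity to the query."""
    scored = sorted(doc_vecs.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
docs = {
    "guideline-flood": [0.9, 0.1, 0.0],
    "guideline-fire":  [0.1, 0.9, 0.0],
    "submission-123":  [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]  # e.g. the embedding of "flood coverage limits"
print(rank_documents(query, docs))  # → ['guideline-flood', 'submission-123']
```

In production, OpenSearch performs this nearest-neighbor ranking at scale over an index of pre-computed embeddings rather than in application code.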
Architecture
- User: The initiator of the process, providing an input payload containing a token and a query.
- DB (Database): Stores user data, including the aiDocumentSessionToken, potentially used for authentication or session management.
- Bedrock: Serves as the central component for orchestration and processing, handling:
  - Prompt Engineering: Formulating prompts for the RAG pipeline.
  - RAG (Retrieval-Augmented Generation): The core logic for retrieving relevant information and generating responses. It interacts with:
    - S3 (Simple Storage Service): Stores the document corpus (underwriting data, document information).
    - OpenSearch: Provides indexing and search capabilities over the document corpus for efficient retrieval.
- Webhook Endpoint: An external service triggered by Bedrock to deliver the final response.
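The prompt-engineering component can be illustrated with a small sketch that assembles retrieved passages into a grounded prompt for the model. The function name and prompt template are assumptions for illustration, not the actual prompts used in production:

```python
def build_rag_prompt(query: str, passages: list[str]) -> str:
    """Assemble a grounded RAG prompt: retrieved context first, then the question."""
    context = "\n\n".join(f"[Document {i + 1}]\n{p}" for i, p in enumerate(passages))
    return (
        "You are an underwriting assistant. Answer using only the context below.\n\n"
        f"{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

# Hypothetical passages standing in for OpenSearch retrieval results.
prompt = build_rag_prompt(
    "What is the maximum insurable value for coastal properties?",
    ["Coastal properties are capped at $5M total insurable value.",
     "Submissions above the cap require referral to a senior underwriter."],
)
print(prompt)
```

Anchoring the prompt in retrieved passages this way is what lets the model answer from Ledgebrook’s own guidelines rather than from its general training data.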
Request flow:
- Input Payload: The user sends an input payload (token, query) to the DB.
- Token Retrieval: The DB verifies the token and provides the aiDocumentSessionToken to Bedrock.
- RAG Pipeline Execution: Bedrock uses the token and query to initiate the RAG pipeline. This involves:
  - Prompt Engineering: Creating a suitable prompt for querying the document index.
  - Retrieval: Querying OpenSearch to fetch relevant documents from S3.
  - Information Extraction: Processing the retrieved documents to extract the information needed to answer the query.
- Response Generation: Bedrock generates a final response based on the extracted information.
- Webhook Trigger: Bedrock triggers the webhook endpoint, passing the final response.
- Final Response Delivery: The webhook endpoint delivers the final response to the user.
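The flow above can be sketched end to end as a small, self-contained stub. The session store, token values, and function names are hypothetical stand-ins for the DB lookup, the Bedrock RAG pipeline, and the webhook payload:

```python
# Stand-in for the DB's session table (token → aiDocumentSessionToken).
SESSIONS = {"token-abc": "aiDocSession-42"}

def handle_query(token: str, query: str) -> dict:
    """Verify the session token, run a stubbed RAG step, and build the
    payload that would be delivered to the webhook endpoint."""
    session = SESSIONS.get(token)
    if session is None:
        # Token not found in the DB: reject before any retrieval happens.
        return {"status": "error", "message": "invalid token"}
    # Stub for the Bedrock RAG pipeline: retrieve documents, then generate.
    answer = f"Answer to '{query}' grounded in retrieved guidelines."
    return {"status": "ok", "session": session, "response": answer}

print(handle_query("token-abc", "wind deductible rules"))  # status: ok
print(handle_query("bad-token", "anything"))               # status: error
```

Validating the token before invoking the pipeline keeps unauthenticated requests from consuming retrieval and generation resources.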