
How GoML built a conversational data insights engine for MyIntelliSource

Deveshi Dabbawala

October 23, 2025

MyIntelliSource is a cloud software company focused on simplifying business operations through intuitive, mobile-ready applications. From accounting and scheduling to project management and time tracking, its solutions make business data instantly accessible anytime, anywhere. The company also champions social impact, supporting autism-related causes through its initiatives.

The problem: Business data locked behind technical barriers

As MyIntelliSource’s platform ecosystem grew, so did the volume of structured business data across modules like accounting, scheduling, and task management. However, accessing that data required manual SQL queries and technical expertise.

Business users struggled to get real-time answers to questions like:

“What was our Q3 revenue growth?”
“Which service category had the highest engagement this month?”

Every query had to pass through analysts, creating bottlenecks, delays, and risks of inconsistent logic or human error. The team needed a solution that would make data access instant, intuitive, and secure, empowering every user, not just developers or analysts.

The solution: GoML’s LLM-powered Text-to-SQL Agent

GoML partnered with MyIntelliSource to develop a Text-to-SQL AI Agent Proof of Concept (PoC) that converts plain English queries into valid SQL statements, eliminating the need for manual query writing.

The system is built using LangGraph orchestration and AWS Bedrock’s Claude models, providing a secure, scalable, and enterprise-grade architecture. It integrates directly with MyIntelliSource’s internal databases to deliver conversational, on-demand insights.

Natural language understanding

Interprets user questions, identifies entities and intent, and automatically maps them to relevant MySQL tables.
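
As a rough illustration of this step, the sketch below asks Claude 3 Haiku on Bedrock to classify a question and pick the tables it needs. The table names, prompt wording, and use of Haiku for routing are assumptions for the example, not MyIntelliSource’s actual schema or configuration.

```python
import json

import boto3

# Hypothetical table list -- MyIntelliSource's real schema is not public.
CANDIDATE_TABLES = ["invoices", "payments", "appointments", "tasks", "time_entries"]

ROUTING_PROMPT = """You are a data assistant for a business operations platform.
Given the user question below, return JSON with two keys:
  "intent" - a short label such as "revenue_report" or "engagement_summary"
  "tables" - the subset of these MySQL tables needed to answer it: {tables}

Question: {question}
Respond with JSON only."""

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def route_question(question: str) -> dict:
    """Ask Claude to classify the question and pick the relevant tables."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{
            "role": "user",
            "content": ROUTING_PROMPT.format(tables=CANDIDATE_TABLES, question=question),
        }],
    })
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # fast, cheap model for routing
        body=body,
    )
    payload = json.loads(resp["body"].read())
    return json.loads(payload["content"][0]["text"])

# Example: route_question("Which service category had the highest engagement this month?")
```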

SQL query generation

Powered by AWS Bedrock’s Claude 3 Sonnet for high-accuracy natural language-to-SQL translation with schema-aware validation.
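
A minimal sketch of that generation step, assuming the live schema is passed in as DDL text; the prompt and the regex-based table check are illustrative, not GoML’s production validation logic.

```python
import json
import re

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

SQL_PROMPT = """You translate questions about business data into MySQL 8.0 SELECT statements.
Use only the tables and columns in this schema:

{schema_ddl}

Question: {question}
Return a single SELECT statement and nothing else."""

def generate_sql(question: str, schema_ddl: str) -> str:
    """Generate a SELECT statement with Claude 3 Sonnet, then sanity-check it against the schema."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 500,
        "messages": [{"role": "user",
                      "content": SQL_PROMPT.format(schema_ddl=schema_ddl, question=question)}],
    })
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0", body=body
    )
    sql = json.loads(resp["body"].read())["content"][0]["text"].strip()

    # Schema-aware check: every table the query references must appear in the DDL.
    known_tables = set(re.findall(r"CREATE TABLE `?(\w+)`?", schema_ddl, re.IGNORECASE))
    referenced = set(re.findall(r"(?:FROM|JOIN)\s+`?(\w+)`?", sql, re.IGNORECASE))
    unknown = referenced - known_tables
    if unknown:
        raise ValueError(f"Query references unknown tables: {unknown}")
    return sql
```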

Dynamic schema retrieval

Fetches live table structures from MySQL to prevent invalid or broken queries.
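
For example, a schema-retrieval step along these lines could pull live CREATE TABLE statements directly from MySQL (pymysql is assumed as the driver here):

```python
import pymysql

def fetch_schema_ddl(conn_params: dict, tables: list[str]) -> str:
    """Pull live CREATE TABLE statements so the prompt always matches the real schema."""
    ddl_parts = []
    conn = pymysql.connect(**conn_params)
    try:
        with conn.cursor() as cur:
            for table in tables:
                cur.execute(f"SHOW CREATE TABLE `{table}`")
                _, create_stmt = cur.fetchone()  # (table_name, create_statement)
                ddl_parts.append(create_stmt + ";")
    finally:
        conn.close()
    return "\n\n".join(ddl_parts)
```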

Secure query execution

Executes only read-only SQL queries through secure RDS connections tunneled via SSH.
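
A hedged sketch of that execution path using the sshtunnel and pymysql libraries; the read-only checks, row limit, and timeout shown are illustrative defaults rather than the production configuration.

```python
import pymysql
from sshtunnel import SSHTunnelForwarder

READ_ONLY_PREFIXES = ("select", "show", "describe", "explain")

def run_read_only_query(sql: str, ssh_cfg: dict, db_cfg: dict, row_limit: int = 500):
    """Execute a validated read-only query against RDS through an SSH tunnel."""
    statement = sql.strip().rstrip(";")
    if not statement.lower().startswith(READ_ONLY_PREFIXES):
        raise PermissionError("Only read-only statements are allowed.")
    if ";" in statement:
        raise PermissionError("Multiple statements are not allowed.")

    with SSHTunnelForwarder(
        (ssh_cfg["host"], 22),
        ssh_username=ssh_cfg["user"],
        ssh_pkey=ssh_cfg["key_path"],
        remote_bind_address=(db_cfg["rds_endpoint"], 3306),
    ) as tunnel:
        conn = pymysql.connect(
            host="127.0.0.1",
            port=tunnel.local_bind_port,
            user=db_cfg["user"],
            password=db_cfg["password"],
            database=db_cfg["database"],
            read_timeout=30,  # time limit on query execution
        )
        try:
            with conn.cursor() as cur:
                # Enforce a row limit unless the query already sets one.
                if "limit" not in statement.lower():
                    statement = f"{statement} LIMIT {row_limit}"
                cur.execute(statement)
                return cur.fetchall()
        finally:
            conn.close()
```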

Conversational interface

Built using Streamlit, allowing users to interact in real time, view past queries, and refine insights through natural dialogue.
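
A minimal Streamlit chat loop covering this interaction pattern might look like the following, where answer_question is a hypothetical entry point standing in for the real agent backend:

```python
import streamlit as st

# Hypothetical agent entry point; in the real system this would call the
# LangGraph workflow that routes, generates, validates, and runs the SQL.
from agent import answer_question

st.title("Ask your business data")

if "history" not in st.session_state:
    st.session_state.history = []

# Replay past queries so users can revisit and refine earlier questions.
for role, text in st.session_state.history:
    with st.chat_message(role):
        st.write(text)

question = st.chat_input("e.g. What was our Q3 revenue growth?")
if question:
    st.session_state.history.append(("user", question))
    with st.chat_message("user"):
        st.write(question)
    answer = answer_question(question, history=st.session_state.history)
    st.session_state.history.append(("assistant", answer))
    with st.chat_message("assistant"):
        st.write(answer)
```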

Scalable cloud-native architecture

  • Backend: FastAPI microservices containerized with Docker
  • AI Orchestration: LangGraph multi-node workflow
  • LLMs: AWS Bedrock (Claude-v2, Claude-3-Haiku, Claude-3-Sonnet)
  • Database: Amazon RDS (MySQL 8.0)
  • Deployment: AWS ECS (Fargate) with CloudWatch monitoring
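
To show how these pieces could hang together, the sketch below wires a LangGraph StateGraph with routing, schema-retrieval, generation, execution, and summarization nodes; the node names and stubbed bodies are assumptions, not GoML’s actual graph.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END

class AgentState(TypedDict, total=False):
    question: str
    tables: list[str]
    schema_ddl: str
    sql: str
    rows: list
    answer: str

# Stub nodes: in the real agent each one wraps a step sketched earlier
# (routing, schema retrieval, SQL generation, read-only execution).
def route(state: AgentState) -> dict:
    return {"tables": ["invoices", "payments"]}            # illustrative tables

def fetch_schema(state: AgentState) -> dict:
    return {"schema_ddl": "CREATE TABLE invoices (...);"}  # placeholder DDL

def generate(state: AgentState) -> dict:
    return {"sql": "SELECT SUM(total) FROM invoices;"}     # placeholder SQL

def execute(state: AgentState) -> dict:
    return {"rows": []}                                    # placeholder result

def summarize(state: AgentState) -> dict:
    return {"answer": f"Ran: {state['sql']}"}

graph = StateGraph(AgentState)
for name, fn in [("route", route), ("fetch_schema", fetch_schema),
                 ("generate", generate), ("execute", execute),
                 ("summarize", summarize)]:
    graph.add_node(name, fn)

graph.set_entry_point("route")
graph.add_edge("route", "fetch_schema")
graph.add_edge("fetch_schema", "generate")
graph.add_edge("generate", "execute")
graph.add_edge("execute", "summarize")
graph.add_edge("summarize", END)

app = graph.compile()
# app.invoke({"question": "What was our Q3 revenue growth?"})
```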

Built-in guardrails and observability

  • SQL injection prevention and input sanitization
  • Row and time limits for controlled execution
  • Role-based access with OAuth readiness
  • Audit logs for every query and response
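
One way guardrails like these might be implemented: a validator that rejects anything other than a single SELECT statement, plus a structured audit record for each query (on ECS, container logs of this kind typically flow to CloudWatch). The specific checks and log fields below are assumptions for the sketch.

```python
import hashlib
import json
import logging
import re
import time

audit_logger = logging.getLogger("texttosql.audit")

FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant|revoke|create)\b", re.IGNORECASE
)

def validate_sql(sql: str) -> str:
    """Reject anything that is not a single, read-only SELECT statement."""
    statement = sql.strip().rstrip(";")
    if ";" in statement or "--" in statement or "/*" in statement:
        raise PermissionError("Multiple statements and comments are not allowed.")
    if not statement.lower().startswith("select"):
        raise PermissionError("Only SELECT statements may be executed.")
    if FORBIDDEN.search(statement):
        raise PermissionError("Write or DDL keywords are not allowed.")
    return statement

def audit(user_id: str, question: str, sql: str, row_count: int, duration_s: float) -> None:
    """Write a structured audit record for every query and response."""
    audit_logger.info(json.dumps({
        "ts": time.time(),
        "user": hashlib.sha256(user_id.encode()).hexdigest(),  # avoid logging raw identifiers
        "question": question,
        "sql": sql,
        "rows": row_count,
        "duration_s": round(duration_s, 2),
    }))
```
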
[Image: Text to SQL for data analytics]

The impact: Making MyIntelliSource data conversational

With GoML’s Text-to-SQL AI Agent, MyIntelliSource’s teams can now query live business data in plain English, with no coding or technical knowledge required.

Results achieved

  • 90%+ query accuracy across diverse natural language inputs
  • 12–16 second average response time under load
  • 10+ concurrent users supported with zero query failures

Lessons for other organizations

Common pitfalls to avoid:

  • Building LLM-to-SQL models without schema validation
  • Ignoring query safety and role-based access
  • Overlooking conversational context and query history

Best practices from GoML’s approach:

  • Start with a small PoC schema to validate accuracy and latency
  • Use LangGraph for modular agent workflows
  • Leverage Bedrock’s Claude models for robust language understanding
  • Implement guardrails early for compliance and data safety

Ready to make your business data conversational?

Let GoML build your AI-powered Text-to-SQL Agent and empower your teams with a natural language data analytics engine.

Outcomes

  • 90%+ query accuracy during tests
  • 12–16 sec average response time per query