Case Study

Brainfish: Scalable AI Chatbot for Domain-Specific, High-Speed Customer Support



Brainfish – Domain-Specific AI Chatbot for Fast Self-Service

Brainfish needed a scalable AI chatbot solution to serve platforms like Airtasker and Mad Paws—and we delivered. We built an intelligent assistant trained on complex FAQs, help articles, and live chat data that can instantly answer customer queries across domains. It uses semantic search and context retention to surface precise answers, even across different products or topics. The bot handles over 85% of common queries autonomously, reducing support costs and response times. With multilingual capabilities and self-learning feedback loops, Brainfish delivers lightning-fast support with minimal human intervention.

Project Overview

  • Client: Brainfish (Customer self-service AI platform used by companies like Airtasker & Mad Paws)
  • Challenge: Manual query handling couldn’t scale across different knowledge domains
  • Goal: Build a unified AI chatbot to:
    • Instantly answer high-volume FAQs across multiple domains
    • Use semantic understanding to serve accurate responses, even to complex questions
    • Reduce support tickets while improving user experience and deflection rates
  • Team: 7 (2 NLP Engineers, 2 Backend Devs, 1 UX Lead, 1 QA, 1 PM)
  • Timeline: 4 months (Training Phase → Beta Rollout → Full Integration)

“GenX built us a support engine that thinks, adapts, and scales. Brainfish now delivers expert answers—faster than any human ever could.”

Co-Founder & CTO, Brainfish

The Challenge

Critical Pain Points:
  • User questions spanned multiple product areas, requiring intelligent context switching
  • Legacy bots relied on keyword triggers, often surfacing irrelevant or incomplete answers
  • High support team load due to repeat queries and lack of real-time automation

Technical Hurdles:
  • Creating domain-specific embeddings that could generalize across help content
  • Handling layered questions that required memory of prior turns in conversation
  • Ensuring quality answers in different languages with minimal retraining

Tech Stack

  • Search & Retrieval Models: FAISS, OpenAI Embeddings, BM25, Pinecone
  • Dialogue & Response Models: GPT-4, LangChain, Rasa, Custom Prompt Templates
  • Data Infrastructure: Firebase, PostgreSQL, Redis
  • Frontend SDKs & Chat UI: React, Web Components, Flutter SDK
  • Monitoring & Analytics: Mixpanel, Sentry, Real-time Feedback Tracker
  • Language Support: Google Translate API, DeepL, spaCy

Key Innovations

The chatbot learned from help docs and live chat logs to resolve queries instantly. Semantic search provided accurate answers across products. With multilingual and self-learning features, Brainfish scaled support with minimal staff.

Cross-Domain Semantic Matching

  • Delivered accurate answers even when user phrasing varied significantly

Result: 87% query resolution rate without human intervention

Memory-Driven Conversational Intelligence

  • Tracked intent across multi-turn dialogues and adjusted answers dynamically

Result: 3.1x improvement in user satisfaction during prolonged sessions

Self-Improving Knowledge Graph

  • Bot trained on every user interaction, tuning answer accuracy over time

Result: 41% reduction in fallback cases over the first 60 days

Our AI/ML Architecture

Core Models

  • Semantic Retrieval Engine:
    • Hybrid BM25 + dense vector search (OpenAI Embeddings + FAISS)
    • Surfaces precise answers using semantic proximity rather than keyword matching (see the retrieval sketch after this list)
  • Memory-Aware Dialogue Layer:
    • GPT-4 with memory state + rule-based topic shift detection
    • Handles multi-turn conversations with domain-switching capability (see the dialogue sketch after this list)
  • Continuous Learning & Feedback Loop:
    • Learns from thumbs up/down, agent escalations, and unresolved queries
    • Self-updates response ranking through reinforcement tuning
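
To make the retrieval design concrete, here is a minimal sketch of the hybrid BM25 + dense vector search, assuming a small in-memory corpus, the rank_bm25 and faiss-cpu packages, and an OpenAI embedding model. The corpus, fusion weights, and model name are illustrative assumptions, not Brainfish's production configuration.

```python
# Hypothetical hybrid retriever: BM25 keyword scores fused with dense
# vector similarity from a FAISS index.
import numpy as np
import faiss
from rank_bm25 import BM25Okapi
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

docs = [
    "How do I reschedule a task on Airtasker?",
    "Cancelling a Mad Paws booking and refund policy",
    "Updating your payment method",
]

def embed(texts):
    """Embed a batch of texts with an OpenAI embedding model (assumed choice)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Dense index: cosine similarity via normalized vectors + inner product.
doc_vecs = embed(docs)
faiss.normalize_L2(doc_vecs)
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

# Sparse index: plain BM25 over whitespace-tokenized docs.
bm25 = BM25Okapi([d.lower().split() for d in docs])

def hybrid_search(query, k=3, dense_weight=0.7):
    """Fuse normalized BM25 and dense scores; the weighting is illustrative."""
    q_vec = embed([query])
    faiss.normalize_L2(q_vec)
    dense_scores, ids = index.search(q_vec, len(docs))
    dense = np.zeros(len(docs))
    dense[ids[0]] = dense_scores[0]

    sparse = np.array(bm25.get_scores(query.lower().split()))
    if sparse.max() > 0:
        sparse = sparse / sparse.max()  # scale BM25 scores to [0, 1] before fusing

    fused = dense_weight * dense + (1 - dense_weight) * sparse
    top = np.argsort(fused)[::-1][:k]
    return [(docs[i], float(fused[i])) for i in top]

print(hybrid_search("how can I change the date of my task?"))
```

A simple weighted sum fuses the two score sets here; in production a cross-encoder reranker or reciprocal-rank fusion could take its place.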

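A similarly simplified view of the memory-aware dialogue layer follows, reusing the hypothetical hybrid_search helper above; the topic keywords, rule-based shift check, and prompt wording are illustrative stand-ins rather than the production logic.

```python
# Hypothetical memory-aware dialogue loop: keeps a rolling message history,
# applies a simple rule-based topic-shift check, and grounds GPT-4 answers
# in passages returned by the (assumed) hybrid_search helper above.
from openai import OpenAI

client = OpenAI()

class SupportSession:
    def __init__(self, max_turns=10):
        self.history = []          # rolling chat memory
        self.current_topic = None  # coarse topic label for shift detection
        self.max_turns = max_turns

    def detect_topic(self, query):
        """Toy rule-based topic tagging; a real system might use a classifier."""
        for topic, words in {"billing": ["refund", "payment", "invoice"],
                             "booking": ["booking", "task", "reschedule"]}.items():
            if any(w in query.lower() for w in words):
                return topic
        return self.current_topic or "general"

    def answer(self, query):
        topic = self.detect_topic(query)
        if topic != self.current_topic:
            # Topic shift: keep memory but tell the model the domain changed.
            self.history.append({"role": "system",
                                 "content": f"The user has switched to the '{topic}' domain."})
            self.current_topic = topic

        context = "\n".join(doc for doc, _ in hybrid_search(query, k=3))
        self.history.append({"role": "user",
                             "content": f"Context:\n{context}\n\nQuestion: {query}"})

        resp = client.chat.completions.create(
            model="gpt-4",  # per the stack described above
            messages=[{"role": "system",
                       "content": "Answer using only the provided context. "
                                  "Escalate to a human if the context is insufficient."}]
                     + self.history[-2 * self.max_turns:],
        )
        reply = resp.choices[0].message.content
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

In production, the rule-based shift check could be swapped for an intent classifier without changing the surrounding memory handling.
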
Data Pipeline

  • Sources
    • Help center documents, live chat logs, support emails, and support macros
    • Customer feedback forms and unresolved query logs
  • Processing: Vectorized embeddings + knowledge base sync (every 24 hours)
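
One way the 24-hour knowledge-base sync could be structured, assuming a Pinecone index and the embed helper from the retrieval sketch; the index name, content-hash change detection, and metadata fields are illustrative.

```python
# Hypothetical nightly sync: re-embed only help-center docs whose content
# changed since the last run, then upsert them into a Pinecone index.
import hashlib
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_KEY")
index = pc.Index("brainfish-helpdocs")  # assumed index name

seen_hashes = {}  # doc_id -> content hash from the previous run (persist in Redis/Postgres)

def sync_knowledge_base(articles):
    """articles: iterable of (doc_id, text) pairs from the help-center export."""
    changed = []
    for doc_id, text in articles:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if seen_hashes.get(doc_id) != digest:
            changed.append((doc_id, text))
            seen_hashes[doc_id] = digest

    if not changed:
        return 0

    vectors = embed([text for _, text in changed])  # embed() from the retrieval sketch
    index.upsert(vectors=[
        {"id": doc_id, "values": vec.tolist(), "metadata": {"source": "help_center"}}
        for (doc_id, _), vec in zip(changed, vectors)
    ])
    return len(changed)
```

Run as a nightly cron job or scheduled cloud function, a sync like this keeps the vector store aligned with the help center without re-embedding unchanged articles.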

Integration Layer

  • Web and mobile SDKs for plug-and-play deployment
  • Integration with Intercom, Zendesk, HelpScout, and custom CRMs
  • Live agent fallback bridge with session context carryover
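
A rough sketch of the live-agent fallback bridge with session context carryover, reusing the hypothetical SupportSession from the dialogue sketch; the escalation rule and payload fields are assumptions, and the actual handoff would be posted to whichever help desk (Intercom, Zendesk, HelpScout, or a custom CRM) the customer runs.

```python
# Hypothetical fallback bridge: when the bot cannot resolve a query, package
# the session transcript and detected topic so a human agent starts with full
# context. Field names and the escalation rule are illustrative assumptions.
from datetime import datetime, timezone

def should_escalate(user_query, low_confidence=False):
    """Toy escalation rule: explicit request for a person, or a low-confidence answer."""
    return low_confidence or "talk to a human" in user_query.lower()

def build_handoff_payload(session, user_id):
    """Bundle the rolling chat memory into a help-desk-ready handoff record."""
    transcript = [
        {"role": m["role"], "content": m["content"]}
        for m in session.history
        if m["role"] in ("user", "assistant")
    ]
    return {
        "user_id": user_id,
        "topic": session.current_topic,
        "escalated_at": datetime.now(timezone.utc).isoformat(),
        "transcript": transcript,  # full conversational context carried to the agent
    }

# The resulting payload would then be sent to the connected help desk
# (e.g. as a Zendesk ticket or Intercom conversation) through its REST API.
```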

Quantified Impact

Before AI → After AI:

  • First Response Time: 2.4 min → <3 sec
  • Human Support Deflection Rate: 21% → 85%
  • Avg. CSAT for Bot-Resolved Queries: 68/100 → 91/100
  • Ticket Volume Handled Per Agent: 230/month → 95/month
  • Multilingual Query Success Rate: – → 89.4%
