Last quarter, one of our support tickets fell through the cracks for eleven days. Eleven days. A paying customer had asked a straightforward question about their integration setup — and nobody saw it because it got buried under forty-seven other emails in the shared inbox. By the time we found it, they'd already posted about the experience on LinkedIn. That stung. But here's the ugly truth: that wasn't the first time. It wasn't even the second.
We were a small team — five people handling support between feature builds and client calls. Our setup was a shared email account, a half-hearted labeling system, and a lot of hope. Tickets got lost. SLA promises were theoretical. We'd tell customers "we guarantee a 4-hour response" and then deliver 36-hour ones without even knowing it. And the worst part? We kept typing the same answers over and over again. "Have you tried clearing your cache?" "Let me escalate that to engineering." "We're looking into it and will get back to you."
Something had to change. So we built a support ticketing system powered by AI agents using INFORMAT. This is the story of how we did it — what worked, what didn't, and the exact prompts we used.
The Real Cost of the Shared Inbox Circus
Before we talk about the solution, let's be honest about how bad things actually were. We tracked our support metrics for two weeks to get a baseline, and the numbers were embarrassing:
- Average first response time: 14 hours (our stated SLA: 4 hours)
- Tickets that got reassigned more than twice: 23%
- Questions we'd answered before in a previous thread: Roughly 40%
- Tickets we discovered purely because the customer followed up with "hey, any update on this?": Too many to count
The problem wasn't our team's effort — it was the system. Shared inboxes are built for receiving email, not for managing workflows. There's no automatic assignment, no priority queue, no way to know if something's been sitting unanswered for three days unless you manually scroll through everything. And when you're scrambling to respond to the squeakiest wheel, the quiet tickets just sink.
What We Tried Before AI
We didn't jump straight to building an AI system. We tried the "reasonable" things first.
Spreadsheets. We set up a Google Sheet where everyone was supposed to log every incoming request. It lasted about a week. The issue was obvious: asking busy people to manually copy-paste information from emails into a spreadsheet is a fantasy. It adds friction to every action, so people just stop doing it.
Labels and filters. Gmail filters that auto-labeled support emails, color-coded by domain. This worked slightly better, but labels don't assign work, don't track time, and don't tell you when you're about to blow an SLA. We were essentially using colored tags to organize a pile of work that kept growing.
Off-the-shelf help desk tools. We evaluated Zendesk, Freshdesk, and Help Scout. They're genuinely good products — but for a five-person team, the per-agent pricing adds up fast. More importantly, even those tools still require manual triage. A human has to read each ticket, decide what category it belongs to, assign priority, route it to the right person. The software helps you do those things faster, but it doesn't do them for you. And it certainly doesn't answer repetitive questions on its own.
We needed something that could handle the intake, triage, and first-response layers automatically — not just organize the chaos, but actually reduce it.
Building the AI Ticketing System on INFORMAT
We'd used INFORMAT before for other internal tools, so we knew it could handle data modeling and workflow logic. What we hadn't tried was the AI agent layer — the ability to create autonomous agents that read, classify, respond, and escalate without human intervention at every step.
Here's how we built it, step by step.
Step 1: Defining the Agent's Role
Every good support system starts with clear boundaries. What should the AI handle on its own? What needs a human? We created an AI agent with a system prompt that defined its role precisely. This matters more than you'd think — an AI agent without clear constraints will either try to do everything (and make mistakes) or default to "I don't know" too often.
Here's the prompt we used:
You are a first-line customer support agent for INFORMAT. Your primary role is to triage incoming support tickets and respond to common questions autonomously.
Rules:
1. Classify every ticket as one of: bug, billing, feature request, account issue, general question.
2. For general questions covered by the knowledge base: respond directly with a solution and ask if it resolved the issue.
3. For billing questions: gather the account email and invoice number if missing, then route to the billing queue.
4. For bugs: ask for reproduction steps, browser/OS version, and severity (blocking, major, minor), then route to engineering.
5. For feature requests: acknowledge receipt, explain that the product team reviews requests monthly, and tag with the relevant product area.
6. For account issues (password reset, 2FA, access): guide the user through self-service steps first. Escalate only if self-service fails.
7. If you cannot confidently classify a ticket, flag it as "needs review" and assign to a human. Do not guess.
8. Never make up information about product features, pricing, or release dates. If you don't know the answer, say so and route to a human.
This prompt alone handled about 30% of our incoming tickets on day one. The agent could answer common "how do I..." questions, handle password reset requests, and walk users through basic troubleshooting without any human touch. The key was rule #8 — we'd rather have false escalations than false answers. Nothing destroys trust faster than an AI confidently describing a feature that doesn't exist.
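INFORMAT wires this role prompt to the agent for us, but the pattern is easy to reproduce on any model provider. Here's a minimal sketch of the first-line response step, using the OpenAI Python SDK purely as a stand-in — the model name, the `draft_first_response` helper, and the crude escalation check are all illustrative, not INFORMAT's actual API:

```python
# Minimal sketch: first-line response with an explicit "do not guess" escape hatch.
# The OpenAI SDK is a stand-in for whatever provider you use; names are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = open("support_agent_prompt.txt").read()  # the role prompt shown above

def draft_first_response(ticket_body: str, kb_snippets: list[str]) -> str | None:
    """Return a drafted reply, or None if the agent chose to escalate."""
    context = "\n\n".join(kb_snippets) or "No relevant knowledge base articles found."
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Knowledge base context:\n{context}\n\nTicket:\n{ticket_body}"},
        ],
    )
    reply = completion.choices[0].message.content
    # Rules 7 and 8 in practice: anything the model flags for review goes to a human.
    if "needs review" in reply.lower():
        return None
    return reply
```

The escalation check here is deliberately crude; the structured classifier in Step 3 is where the real confidence gating happens.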
Step 2: Feeding It the Knowledge Base
An AI agent is only as good as the information it can reference. We pointed the agent at three sources:
- Our public documentation site (all help articles and API docs)
- A CSV export of the last 200 resolved tickets (anonymized)
- Our FAQ document with the top 50 questions we got every week
INFORMAT indexes these sources and makes them searchable by the agent at query time. We didn't need to train a model or write embedding pipelines — just upload the files and define which sources the agent should consult for each ticket category.
This is where we hit our first friction point. The CSV of past tickets was messy. Different team members had used different terminology for the same issues, resolution notes were inconsistent, and some tickets just said "fixed" with no explanation. We spent about two hours cleaning that data before loading it in, and honestly, we should've spent four. Garbage in, garbage out — even with AI.
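If you're doing the same cleanup, most of it is unglamorous normalization. A rough sketch of the kind of pass we ran — the column names and alias map are examples from our own export, not a required schema:

```python
# Rough sketch of the ticket-CSV cleanup pass; column names are examples, not a schema.
import pandas as pd

# Map the different labels people had used for the same thing onto one canonical category.
CATEGORY_ALIASES = {
    "bug report": "bug", "defect": "bug",
    "invoice": "billing", "payment": "billing",
    "feature": "feature_request", "idea": "feature_request",
    "login": "account_issue", "password": "account_issue",
}

df = pd.read_csv("resolved_tickets.csv")

df["category"] = df["category"].str.strip().str.lower().replace(CATEGORY_ALIASES)

# Drop rows whose resolution notes carry no information ("fixed", "done", empty).
useless = (
    df["resolution"].fillna("").str.strip().str.lower().isin({"", "fixed", "done", "resolved"})
)
df = df[~useless]

df.to_csv("resolved_tickets_clean.csv", index=False)
```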
Step 3: Building the Triage Pipeline
The real magic isn't just answering questions — it's making sure the right questions reach the right humans at the right time. We built a triage pipeline that works like this (a simplified code sketch follows the list):
- Incoming email → INFORMAT webhook. Every support email gets forwarded to an INFORMAT endpoint that creates a ticket record.
- AI classification. The agent reads the email body and assigns a category, priority (urgent, high, normal, low), and suggested assignee.
- Auto-response or routing. If the agent can answer it, it drafts a response for human review (or sends it directly for low-risk questions). If it can't, the ticket goes to the appropriate queue.
- SLA timer starts. The clock starts ticking based on priority — 1 hour for urgent, 4 for high, 24 for normal, 72 for low.
- Escalation triggers. If a ticket approaches its SLA limit without a response, the agent sends a Slack alert to the assigned team member. If it breaches, it alerts the whole team.
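INFORMAT stitches these stages together with its webhook and workflow primitives, so we never wrote this loop by hand. But the control flow is simple enough to show in plain Python — everything below (the helper names, the payload shape, the stubbed classifier) is an assumption for illustration, not INFORMAT's API:

```python
# Illustrative end-to-end triage flow; helper names and payload shapes are assumptions,
# shown as stubs so the control flow is visible.
from datetime import datetime, timedelta, timezone

# Priority -> SLA window, matching the rules described above.
SLA_HOURS = {"urgent": 1, "high": 4, "normal": 24, "low": 72}

def create_ticket(email: dict) -> dict:
    return {"id": email["message_id"], "body": email["body"], "status": "open"}

def classify_ticket(body: str) -> dict:
    # In our system this is the Step 3 classifier prompt; stubbed here.
    return {"category": "general_question", "priority": "normal",
            "confidence": 0.9, "suggested_assignee": "support"}

def handle_incoming_email(email: dict) -> dict:
    ticket = create_ticket(email)                          # 1. ticket record from the webhook
    result = classify_ticket(ticket["body"])               # 2. AI classification

    if result["confidence"] >= 0.7 and result["category"] == "general_question":
        ticket["status"] = "awaiting_review"               # 3a. drafted answer, human review
    else:
        ticket["assignee"] = result["suggested_assignee"]  # 3b. routed to the right queue

    # 4. SLA timer starts the moment the ticket exists, based on priority.
    ticket["sla_deadline"] = datetime.now(timezone.utc) + timedelta(
        hours=SLA_HOURS[result["priority"]]
    )
    # 5. Escalation triggers fire as the deadline approaches (see Step 4).
    return ticket
```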
The triage classifier prompt was a separate, more focused prompt:
Classify this support ticket. Return a JSON object with these fields:
- category: "bug" | "billing" | "feature_request" | "account_issue" | "general_question"
- priority: "urgent" | "high" | "normal" | "low"
- confidence: 0.0 to 1.0
- suggested_assignee: "engineering" | "billing" | "product" | "support"
Priority rules:
- urgent: production down, security issue, data loss
- high: feature broken for all users, billing double-charge
- normal: single-user bug, how-to question, feature request
- low: cosmetic issue, documentation suggestion
If confidence is below 0.7, set category to "needs_review" and route to support.
This structured output approach was critical. By forcing the agent to return JSON with a confidence score, we could set up automated rules: confident classifications get acted on immediately, low-confidence ones go to a human review queue. It's a simple safeguard that caught a lot of edge cases.
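A sketch of what we do with that JSON once it comes back — the 0.7 threshold and the queue names match the prompt above; the parsing and fallback behavior are our own conventions, not something INFORMAT dictates:

```python
# Sketch: acting on the classifier's structured output with a confidence gate.
import json

CONFIDENCE_THRESHOLD = 0.7
VALID_CATEGORIES = {"bug", "billing", "feature_request", "account_issue", "general_question"}

def apply_classification(raw_model_output: str) -> dict:
    try:
        result = json.loads(raw_model_output)
    except json.JSONDecodeError:
        # Malformed output is treated exactly like a low-confidence call: a human looks at it.
        return {"category": "needs_review", "suggested_assignee": "support", "confidence": 0.0}

    if (result.get("confidence", 0.0) < CONFIDENCE_THRESHOLD
            or result.get("category") not in VALID_CATEGORIES):
        result["category"] = "needs_review"
        result["suggested_assignee"] = "support"
    return result
```

Treating malformed JSON the same as a low-confidence classification kept one class of failure from silently becoming another.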
Step 4: SLA Monitoring That Actually Works
The SLA monitoring was surprisingly easy to set up. INFORMAT lets you define time-based triggers on any record. We configured:
- At 75% of SLA time: Agent sends a Slack DM to the assignee: "Ticket #1423 is approaching its SLA deadline (45 min remaining)."
- At 100% (breach): Agent posts in the #support-critical Slack channel with the ticket details.
- At 150%: Agent emails the support lead directly.
We went from not knowing about SLA breaches until a customer complained, to knowing about potential breaches before they happened. That shift alone — proactive instead of reactive — changed how our team operated.
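INFORMAT's time-based triggers handle the scheduling for us, but the threshold math is simple enough to reproduce anywhere. A minimal sketch, with the Slack and email calls left as placeholders:

```python
# Minimal sketch of the 75% / 100% / 150% escalation thresholds; notifications are placeholders.
from datetime import datetime, timedelta, timezone

def check_sla(ticket_id: str, created_at: datetime, sla: timedelta, responded: bool) -> None:
    if responded:
        return
    elapsed = datetime.now(timezone.utc) - created_at
    fraction = elapsed / sla  # dividing two timedeltas gives a plain float

    if fraction >= 1.5:
        notify("email:support-lead", f"Ticket {ticket_id} is 50% past its SLA.")
    elif fraction >= 1.0:
        notify("slack:#support-critical", f"Ticket {ticket_id} has breached its SLA.")
    elif fraction >= 0.75:
        remaining = sla - elapsed
        notify("slack:dm-assignee",
               f"Ticket {ticket_id} is approaching its SLA deadline "
               f"({int(remaining.total_seconds() // 60)} min remaining).")

def notify(channel: str, message: str) -> None:
    # Placeholder: in our setup these are Slack DMs, channel posts, and direct email.
    print(f"[{channel}] {message}")
```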
Comparing the Approaches
Here's how our INFORMAT-based ticketing system stacks up against the alternatives we tried:
| | INFORMAT AI System | Traditional Help Desk SaaS | Shared Inbox + Spreadsheets |
|---|---|---|---|
| Setup time | 2 days (including data cleanup) | 1-2 days (basic config) | Zero (already using it) |
| Monthly cost (5-person team) | ~$200 (INFORMAT platform + email) | $250-$500 (per-agent pricing) | Free (email only) |
| Auto-response capability | AI handles ~40% of tickets fully autonomously | None (macros/templates require manual selection) | None |
| Smart triage | Automatic classification, priority, routing | Manual rules-based routing only | Manual only |
| SLA monitoring | Automated with proactive alerts | Basic timers, manual alert setup | Doesn't exist |
| Customization | Full control via prompts and data model | Limited to vendor's template/field options | Whatever you can hack in Sheets |
| Learning curve | Moderate (prompt design + workflow config) | Low (UI-based, familiar paradigm) | None |
| Avg first response time | Under 2 minutes (AI) / 2 hours (human) | Depends on team — usually 2-6 hours | 14 hours (our measured baseline) |
A few caveats: traditional help desk tools have polish that our system doesn't. Their reporting dashboards are better, their mobile apps are more mature, and they have decades' worth of UX refinements. If you need a battle-tested customer-facing portal with SLAs that auditors will look at, a dedicated tool might be the safer choice. But for a team that wants intelligent automation — not just a better inbox — the AI approach wins on almost every metric that matters day-to-day.
What Surprised Us (The Honest Part)
Not everything went smoothly. Here's what caught us off guard:
The AI was too helpful at first.
In the first week, our agent started answering tickets that it should have escalated. Someone reported a complex API bug with partial reproduction steps, and the agent went ahead and tried to debug it — asking overly specific technical questions that assumed a certain setup. The customer got frustrated because they felt like they were being interrogated by a script. We had to tighten the system prompt to be more conservative: if a bug report mentions anything beyond surface-level configuration, escalate immediately. The agent's job is to triage and answer known questions, not to be a junior engineer.
Prompt engineering is iterative, not one-and-done.
We revised the system prompt seven times in the first two weeks. Seven times. Each revision fixed one edge case and occasionally introduced another. The triage prompt went through a similar number of iterations. Anyone who tells you prompt engineering is "just writing instructions" has never tried to get an AI agent to reliably distinguish between a billing complaint and a feature request when the customer is angry and writing stream-of-consciousness.
Data quality matters more than prompt quality.
This was the biggest lesson. A mediocre prompt with clean, well-organized knowledge base data outperformed a perfect prompt with messy data every single time. When our past ticket CSV had inconsistent categories and sparse resolution notes, the agent would hallucinate answers or route things to the wrong queue. After we cleaned the data, everything got better — classification accuracy, response quality, everything. We should have prioritized data cleanup from day one instead of jumping straight to prompt tinkering.
"At first I was skeptical about getting answers from an AI instead of a person. But honestly, the AI response was faster and more detailed than what I usually got from the team. When it did need to hand me off to a human, the context transfer was seamless — I didn't have to repeat myself."
The Results After 30 Days
We've been running the system for a month now. Here are the numbers:
- First response time dropped from 14 hours to under 2 minutes for AI-handled tickets, and under 2 hours for human-handled tickets.
- 40% of tickets are resolved entirely by the AI agent without human involvement.
- SLA breaches went to zero in the third and fourth weeks (we had two in week one while tuning the alerts).
- Team support time dropped by roughly 60% — we went from the equivalent of 2.5 full-time people doing support to about 1.
- Customer satisfaction scores actually went up by 12% compared to the previous quarter. We were worried automated responses would feel impersonal, but faster answers more than made up for it.
The team reclaimed about 30 hours per week that had been going to repetitive questions and inbox triage. Those hours went back to product work, documentation improvements, and — this was the best outcome — deeper support for the complex tickets that actually needed human expertise.
Where We'd Do It Differently
If we were starting over, here's what we'd change:
Spend more time on knowledge base structure. We loaded everything we had into the system, but we should have organized it into clear tiers: beginner FAQs, troubleshooting guides, and advanced configuration docs. The agent performs noticeably better when it can match the depth of its answer to the complexity of the question.
Build the escalation flow before launch. In the first few days, tickets that the agent couldn't handle sometimes ended up in a gray zone — flagged "needs review" but not clearly assigned to anyone. We fixed this by adding an explicit default assignee for every category, but it caused a few delays early on.
Add a feedback loop from the start. We now track every case where a customer or team member corrected the agent's response. Those corrections feed back into the knowledge base and prompt refinements. We should've had this feedback cycle in place before going live.
The Takeaway
Building a customer support ticketing system with AI agents isn't about replacing humans. It's about making sure humans spend their time on the tickets that actually need human judgment — and letting the AI handle the rest. The shared inbox problem isn't going to solve itself with better folder structures or more disciplined tagging. The volume of repeat questions, the SLA blind spots, the triage overhead — those are structural problems that need a structural solution.
Our eleven-day ticket was a wake-up call. But the system we built after it turned support from a constant source of stress into something that actually runs smoothly. We still have bad days. The agent still misfires on edge cases. But we're not losing tickets anymore. We're not discovering SLA breaches from angry LinkedIn posts. And we're definitely not typing "have you tried clearing your cache" for the thousandth time.
If you're running support on a shared inbox right now, you already know exactly what we're talking about. The fix doesn't require a massive SaaS budget or a team of engineers. It requires a clear prompt, clean data, and the willingness to iterate.