When I joined Actoserba Active Wholesale Ltd.'s customer success pod, the brief was clear: reduce support call volume. The obvious answer, which multiple people had already proposed, was a chatbot. A bot could handle the top-10 repeat queries automatically, deflect calls, save cost. Simple.
The first chatbot we deployed didn't work. Containment rate was quite low: most conversations ended with the user asking for a human agent anyway. We'd built a chatbot that users didn't trust to solve their problems.
The mistake wasn't in the technology. It was in the thinking that preceded the technology. Here's what we got wrong, and how we fixed it.
The Fundamental Error: Solving for Tickets, Not for Anxiety
When you look at a support queue and see "Where is my order?" as 35% of your contacts, the instinct is to build a chatbot that answers "Where is my order?" Technically, this is correct. In practice, it misses the point entirely.
A customer who contacts support about their order isn't just asking for a tracking number. They're anxious. They ordered something, they haven't received it, and they don't know why. The chatbot response "Your order is in transit, expected delivery 3-5 days" doesn't resolve anxiety; it just restates the information they already had access to. That's why they escalated to a human: the existing information wasn't reassuring enough.
The insight that changed everything: We ran a batch of support call recordings and tagged every call not by query type but by the underlying anxiety. "Is my order actually going to arrive?" was 45% of our contact volume, but only a fraction of those contacts were categorised as "delivery query" in our ticket taxonomy. The taxonomy was hiding the real problem.
The Research That Revealed the Real Problem
Before redesigning the chatbot, I ran three research activities:
Call recording analysis
We listened to 200 support calls and tagged each one not by the stated query but by the moment the customer's anxiety peaked: the thing they actually needed reassurance about. Three themes emerged: delivery uncertainty, return process confusion, and product quality concerns post-delivery. These weren't the same as our existing ticket categories.
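The core of this exercise is comparing two views of the same calls. A minimal sketch, using hypothetical tag data (the categories and themes here are illustrative, not our actual taxonomy):

```python
from collections import Counter

# Hypothetical tagged calls: (ticket_category, anxiety_theme) pairs,
# one per reviewed recording. The same anxiety theme can hide behind
# several different ticket categories.
tagged_calls = [
    ("payment query", "delivery uncertainty"),
    ("delivery query", "delivery uncertainty"),
    ("account issue", "delivery uncertainty"),
    ("return query", "return process confusion"),
    ("delivery query", "product quality concern"),
]

by_taxonomy = Counter(cat for cat, _ in tagged_calls)
by_anxiety = Counter(theme for _, theme in tagged_calls)

# The gap between the two tallies is what the taxonomy hides: here,
# "delivery uncertainty" dominates but is spread across three categories.
top_anxiety = by_anxiety.most_common(1)[0]
```

The point of the double tally is that neither view alone reveals the problem; only the mismatch between them does.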
Dropout point analysis
Using our analytics stack, we mapped where customers went in the app before they contacted support. A majority of delivery anxiety contacts had checked the order status page at least twice in the preceding hour. They weren't missing information; they were unconvinced by the information they had.
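The query behind this finding is simple to express. A sketch with hypothetical event data (the customer IDs, timestamps, and one-hour window are assumptions for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical events: order-status page views and support contacts,
# keyed by customer id.
status_views = {
    "c1": [datetime(2023, 5, 1, 10, 5), datetime(2023, 5, 1, 10, 40)],
    "c2": [datetime(2023, 5, 1, 9, 0)],
}
contacts = {
    "c1": datetime(2023, 5, 1, 11, 0),
    "c2": datetime(2023, 5, 1, 11, 0),
}

def checked_twice_before_contact(customer, window=timedelta(hours=1)):
    """True if the customer viewed order status at least twice
    in the hour before contacting support."""
    t = contacts[customer]
    recent = [v for v in status_views.get(customer, [])
              if t - window <= v <= t]
    return len(recent) >= 2

share = sum(checked_twice_before_contact(c) for c in contacts) / len(contacts)
```

A high `share` for a given contact reason is the signal that customers already saw the information and contacted support anyway.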
Chatbot failure point analysis
For every chatbot conversation that ended in agent escalation, we tagged the point at which the user escalated. A large share of escalations happened on the first or second chatbot turn: users gave up almost immediately. This wasn't a content problem. It was a trust problem. The chatbot didn't feel like it could actually help.
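Once every escalated conversation carries a turn tag, the distribution is one pass over the logs. A sketch with made-up turn data:

```python
# Hypothetical conversation logs: the turn at which the user asked
# for an agent, or None if the bot contained the conversation.
escalation_turns = [1, 1, 2, 1, 4, 2, None, 1, None, 2]

escalated = [t for t in escalation_turns if t is not None]

# Share of escalations that happened on turn 1 or 2 - the
# "gave up almost immediately" group.
early = sum(1 for t in escalated if t <= 2)
early_share = early / len(escalated)
```

When `early_share` dominates, improving answer content downstream won't help, because users never get far enough to see it.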
What We Built Instead
Armed with this research, we rebuilt the chatbot experience from the ground up with three principles:
Principle 1: Lead with empathy, not information
The original chatbot's first response to "Where is my order?" was a tracking status. The new chatbot's first response acknowledged the situation: "I can see your order was placed 4 days ago and you're waiting for it, so let me check exactly what's happening with it right now." Same information, different framing. Users felt heard before they felt informed. Escalation rates at turn one dropped considerably.
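The reframing comes down to interpolating order context into an acknowledgment before any status line. A minimal sketch; the template function and order fields are hypothetical, not our production bot:

```python
# Hypothetical first-turn templates. The empathetic version uses the
# order context to acknowledge the wait before stating any status.
def first_reply(order, empathetic=True):
    if empathetic:
        return (f"I can see your order was placed {order['days_ago']} days "
                f"ago and you're waiting for it. Let me check exactly "
                f"what's happening with it right now.")
    # Original behaviour: restate the status the user already saw.
    return (f"Your order is {order['status']}, "
            f"expected delivery {order['eta']}.")

order = {"days_ago": 4, "status": "in transit", "eta": "3-5 days"}
```

Both branches draw on the same order record; only the framing differs, which is the whole point of the principle.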
Principle 2: Proactive outreach at anxiety trigger points
Rather than waiting for customers to contact us, we built trigger-based proactive messages. When an order moved to "out for delivery" status, we sent a WhatsApp message with a live tracking link and a direct chatbot entry point pre-loaded with the order context. When a return was initiated but the pickup hadn't been confirmed within 48 hours, we proactively sent a status update.
This single change, proactive messages at the moment of peak anxiety, meaningfully reduced inbound contacts related to delivery status.
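The two triggers described above can be sketched as simple rules over order events. Everything here is an assumption for illustration: the `send_whatsapp` placeholder stands in for a real messaging integration, and the return check is modelled as a periodic sweep over pending returns:

```python
from datetime import datetime, timedelta

sent = []  # queued messages; placeholder for a real WhatsApp API call

def send_whatsapp(customer_id, message):
    sent.append((customer_id, message))

def on_status_change(customer_id, new_status):
    """Trigger 1: fire a proactive message with a live tracking link
    the moment an order goes out for delivery."""
    if new_status == "out_for_delivery":
        send_whatsapp(customer_id,
                      "Your order is out for delivery. Track it live: <link>")

def check_stalled_returns(returns, now):
    """Trigger 2: nudge customers whose return pickup is still
    unconfirmed 48 hours after the return was initiated."""
    for r in returns:
        if (not r["pickup_confirmed"]
                and now - r["initiated_at"] > timedelta(hours=48)):
            send_whatsapp(r["customer_id"],
                          "An update on your return pickup: <link>")
```

The design choice that matters is that both messages carry the customer's own order context, so the chatbot entry point they open starts pre-loaded rather than from a blank slate.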
Principle 3: Visible, frictionless escalation path
The original chatbot made reaching a human agent three taps deep. We moved the "Talk to an agent" option to turn one, always visible. This seems counterintuitive: why make escalation easier if you're trying to contain contacts? Because users who know they can escalate easily are more willing to try the bot first. Hidden escalation paths create distrust. Visible escalation paths create trust.
Containment rate improved significantly within three months of the redesign.
The Metrics That Matter for Chatbot Products
Most chatbot dashboards track the wrong things. Here's what actually matters:
- First-turn escalation rate: What percentage of users ask for a human on the very first chatbot response? This is your trust score. Below 15% is good. Above 30% means something fundamental is broken.
- Resolution confidence rate: After a chatbot resolves a query, do users contact support again within 24 hours about the same issue? If yes, the resolution wasn't real.
- Anxiety resolution rate: This requires a post-interaction survey. Did the interaction reduce the user's concern? Not "was your query answered?" but "do you feel confident your issue is resolved?"
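The three metrics above can all be computed from the same interaction log. A sketch over hypothetical records (the field names and sample values are assumptions):

```python
# Hypothetical interaction log: one record per chatbot conversation.
# escalation_turn: turn the user asked for an agent, or None if contained.
# recontact_24h:   re-contacted about the same issue within 24 hours.
# confident:       post-interaction survey answer to "do you feel
#                  confident your issue is resolved?"
conversations = [
    {"escalation_turn": 1,    "recontact_24h": False, "confident": False},
    {"escalation_turn": None, "recontact_24h": True,  "confident": False},
    {"escalation_turn": None, "recontact_24h": False, "confident": True},
    {"escalation_turn": 3,    "recontact_24h": False, "confident": True},
]

n = len(conversations)

# Trust score: share of users who asked for a human on turn one.
first_turn_escalation = sum(c["escalation_turn"] == 1
                            for c in conversations) / n

# Of the conversations the bot "resolved", how many stayed resolved?
resolved = [c for c in conversations if c["escalation_turn"] is None]
resolution_confidence = sum(not c["recontact_24h"]
                            for c in resolved) / len(resolved)

# Survey-based: did the interaction actually reduce the concern?
anxiety_resolution = sum(c["confident"] for c in conversations) / n
```

Note that the second metric deliberately ignores escalated conversations: it asks whether a claimed bot resolution held, not how often the bot claimed one.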
The Bigger Lesson: Support Is a Product Problem
Every support contact is a product failure signal. A customer who calls about their order status is telling you that your post-purchase communication isn't good enough. A customer who calls about a return is telling you the return process isn't clear enough in the app. The chatbot is a band-aid, not a cure.
The right response to high support volume isn't a better chatbot; it's understanding the underlying product problems that are generating that volume, and fixing them at the source. The chatbot buys you time and cost reduction while you do the harder work of fixing the root causes.
Build chatbots that acknowledge human anxiety before serving information. Make escalation visible. And use every support contact as a product signal, not just a cost to be managed.
