Clind.ai
clind.ai: AI customer service running across 7 brands, handling around 200 thousand people a month. Most without anyone stepping in.
What happened here
Clind.ai was born inside Gocase, a multi-brand e-commerce group where I led 3 engineering teams. We started small: an n8n bot answering price questions on Instagram. We learned the flow, kept the parts that worked, and rebuilt the rest on top of the OpenAI Agents SDK to launch on WhatsApp, where customers don't ask, they complain. Today the agents handle around 200 thousand people a month across 7 brands, and most conversations are resolved end-to-end without a human.
Seven brands. Same questions, repeating all day.
Gocase is a multi-brand e-commerce group: phone cases, accessories, lifestyle. Each brand has its own voice and channels: Instagram DMs, WhatsApp, web chat. But 80% of customer questions overlap: order status, exchange policy, where's my package, is this in stock, can I return it.
The first-level support team was burning hours on questions whose answers already lived in some system: Shopify, the ERP, the logistics provider. We didn't want to replace the team. We wanted to free them from the work that didn't require a person.
Volume + repetition + zero patience
Two things made this hard. First: scale. Thousands of conversations a day, peaks during launches and Black Friday. Second: stakes. A late phone case is annoying. A wrong refund triggers a public complaint. The system had to be fast and right at the same time.
We didn't want to ship a chatbot with scripts. We wanted an agent that actually consults the systems, makes decisions, and knows when to step out of the way and call a human. But we also didn't want to commit to a giant architecture before knowing what worked.
So we started small.
From an n8n bot on Instagram to autonomous agents on WhatsApp
First: validate the concept where it's easiest
We built the first version in n8n and put it on Instagram DMs. Why Instagram? Because the audience there was mostly looking for prices, sizes, simple questions before buying. Lower stakes. We could test the concept without breaking anything.
From that bot, we mapped the user flow visually: who asks what, in what order, where confusion shows up. We built subflows for each step, and each subflow turned into a tool the agent could call.
Quick aside: what is a “tool”?
A tool is a function the agent decides when to call. “Check stock.” “Get order status.” “Look up the return policy.” The agent reads the customer message, decides which tools it needs, calls them, reads the result, and writes the answer. It's the difference between a chatbot guessing and an agent doing real work against real systems.
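In code, the idea is small. Here is a minimal sketch in plain Python, not the production code: the tool names are illustrative, and a keyword router stands in for the model, which in a real agent picks the tool itself.

```python
# Minimal illustration of the tool idea: named functions the agent can call.
# Tool names and returns are made up; a keyword match stands in for the
# model's own tool choice.

def check_stock(sku: str) -> str:
    # Illustrative stub; a production tool would hit Shopify or the ERP.
    return f"{sku}: 12 units in stock"

def get_order_status(order_id: str) -> str:
    # Illustrative stub; a production tool would query the logistics provider.
    return f"Order {order_id}: shipped, arriving Thursday"

TOOLS = {
    "stock": check_stock,
    "order": get_order_status,
}

def answer(message: str, arg: str) -> str:
    # Stand-in for the model deciding which tool the message needs.
    for keyword, tool in TOOLS.items():
        if keyword in message.lower():
            return tool(arg)
    return "Let me connect you with a human."
```

The shape is the whole point: the agent's reply is grounded in a function result, not in whatever the model happens to remember about your return policy.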
Second: WhatsApp was a different beast
Going to WhatsApp meant another audience. The customer on Instagram is shopping. The customer on WhatsApp already bought, and is upset. Late delivery, wrong product, refund pending. Higher stakes, more nuance, less patience.
n8n had taken us far, but the flexibility we needed for that conversation wasn't there. We needed total control over the agent: what it sees, what it can do, what it must not say, when it gives up and calls a human.
Third: rebuild on the OpenAI Agents SDK
We rebuilt everything on the OpenAI Agents SDK. It gave us full control with the right level of abstraction. We weren't writing every loop by hand, but nothing was hidden either. From there we wired in 15+ tools (stock, order status, logistics, products, return policy, internal FAQ), dynamic guardrails on inputs and outputs, and specialized handoff agents that decide case by case whether a human should step in.
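The overall shape of that pipeline can be sketched in a few lines. This is an illustration of the flow, not the Agents SDK's actual API; the blocked phrases, the known-policy set, and the canned agent reply are all placeholders.

```python
# Sketch of the pipeline shape: input guardrail -> agent + tools -> output
# guardrail, with a handoff whenever a check fails. All data here is
# illustrative, and run_agent() stands in for the model + tools loop.

BLOCKED_INPUT = ("ignore previous instructions",)    # illustrative
KNOWN_POLICIES = {"30-day returns on unused items"}  # illustrative

def input_guardrail(message: str) -> bool:
    # Reject obvious prompt injection before it reaches the main agent.
    return not any(p in message.lower() for p in BLOCKED_INPUT)

def output_guardrail(reply: str) -> bool:
    # Reject replies that cite a policy the brand never published.
    if "policy:" in reply.lower():
        return any(p in reply for p in KNOWN_POLICIES)
    return True

def run_agent(message: str) -> str:
    # Stand-in for the model running its tools.
    return "Policy: 30-day returns on unused items"

def handle(message: str) -> str:
    if not input_guardrail(message):
        return "[handoff] flagged input, routing to a human"
    reply = run_agent(message)
    if not output_guardrail(reply):
        return "[handoff] reply failed validation, routing to a human"
    return reply
```

Every customer message runs that gauntlet; nothing the model writes reaches the customer without passing the output check first.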
The agent stack: what's actually running in production
A lot of what we built sits under the hood: retries, observability, model routing, conversation memory, evals on every release. Below is what actually shapes the customer's experience.
15+ tools
Stock, order status, logistics, products, policies, internal FAQ. The agent reads the message, picks the right tool, runs it.
Input guardrails
Filter prompt injection, off-topic threads, abusive content before they reach the main agent.
Output guardrails
Validate every reply: no invented prices, no policies the agent doesn't know, no promises the brand can't keep.
Handoff agents
Specialized agents whose only job is to decide: keep going, or call a human now? With full context handed over, the customer never has to repeat themselves.
Multimodal
Text, audio, and images. A customer can send a photo of a damaged product and the agent identifies the issue, opens the right flow, and resolves it.
Built on the OpenAI Agents SDK
Total flexibility with the right abstraction. Nothing hidden, nothing magic. Every decision the agent makes is observable and auditable.
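The handoff decision described in the cards above can be sketched like this. It is not the SDK's handoff API; the thresholds and trigger words are invented for illustration. The part that matters is the payload: the full conversation travels with the customer.

```python
from dataclasses import dataclass, field

# Sketch of a handoff decision. Trigger words and the failure threshold are
# illustrative; in production this is its own specialized agent.

@dataclass
class Conversation:
    messages: list = field(default_factory=list)
    failed_tool_calls: int = 0

ESCALATION_TRIGGERS = ("refund", "lawyer", "human")  # illustrative

def should_handoff(conv: Conversation) -> bool:
    # Escalate on repeated tool failures or on sensitive topics.
    last = conv.messages[-1].lower() if conv.messages else ""
    return (conv.failed_tool_calls >= 2
            or any(t in last for t in ESCALATION_TRIGGERS))

def handoff_payload(conv: Conversation) -> dict:
    # Everything the human needs to pick up mid-conversation,
    # so the customer never repeats themselves.
    return {"history": conv.messages, "failed_tools": conv.failed_tool_calls}
```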
At 200K+ people a month, you can't trust a static prompt. The system has to defend itself in real time. Every input filtered, every output validated, every handoff decision auditable.
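One concrete example of "every output validated": a price check. This is a simplified sketch with made-up catalog values, but it shows the principle: any price the agent quotes must already exist in the catalog, or the reply is blocked.

```python
import re

# Illustrative output guardrail: a reply may only quote prices that exist
# in the catalog. Catalog values and the BRL price format are assumptions.

CATALOG_PRICES = {"R$ 89,90", "R$ 129,90"}

def reply_is_safe(reply: str) -> bool:
    quoted = re.findall(r"R\$ \d+,\d{2}", reply)
    return all(price in CATALOG_PRICES for price in quoted)
```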
Around 200 thousand people a month. Most without anyone stepping in.
The first-level support team that used to firefight repetitive tickets now handles the 23% that actually require a human: refunds with edge cases, escalations, complex returns. The work that needs judgment, not lookup.
Everything is observable: every tool call, every guardrail trigger, every handoff. With that visibility, we keep tightening the system. The 77% isn't a finish line. It's where it sits today, and it's still climbing.
Does your customer service look like this?
If your team is answering the same questions all day with the answers sitting in a system somewhere, this kind of agent fits. We start with one channel, one flow, real volume. Then grow from there.