Your AI Team is Missing the Most Important Person (And It's Not an Engineer)
AgentBuild Circles is NOT just for engineers. In Circles we work on the outcome, not just the tech, so the "domain expert" role becomes a core part of the circle. Read more to find out why.
I've watched this same disaster play out at multiple companies now. Smart engineers, cutting-edge models, millions in funding - and then the whole thing crashes because nobody thought to ask Maria from claims what "good" actually looks like.
We're so obsessed with getting the smartest engineers and the researchers that we completely miss the obvious: if your AI is supposed to help doctors triage patients, you need someone who actually knows how doctors think. Not someone who read about it. Someone who's been doing it for fifteen years and can spot the red flags that never make it into any textbook.
The problem that keeps happening
Here's what I see over and over again. The data science team ships something with 92% accuracy and everyone's high-fiving. Meanwhile, the actual users are quietly going back to their old spreadsheets because the AI keeps missing things that seem obvious to them.
I remember one project - insurance claims, won't name names - where this veteran adjuster kept saying "your AI doesn't get fraud" while pointing at claims that looked perfectly legitimate to the engineers but were textbook fraud to anyone who had been handling auto claims for a decade.
The problem isn't the model. It's that nobody bothered to figure out what "accurate" actually means in the real world.
You get these committee meetings where everyone has opinions. Marketing wants it user-friendly. Engineering wants fewer false positives. Legal wants audit trails. The actual expert who knows what good looks like? They're usually not even in the room, or if they are, their voice gets drowned out by all the louder opinions.
And then you're stuck in this loop where the acceptance criteria keep shifting every week, QA becomes this weird theater performance, and all the field knowledge, the stuff that really matters, stays locked in people's heads instead of getting built into the system.
Why you need the Domain Expert
Look, I'm not saying you throw out everything else. You still need great engineers, researchers, UI folks, legal, etc. But if you're building something that's supposed to change how claims adjusters work, or how teachers plan lessons, or how radiologists read scans, then the person who already makes those decisions every day needs to be your north star.
Not a consultant. Not a proxy. The actual person.
Here's what changes when you get this right:
The domain expert knows what "good" actually means. Not what sounds good in a meeting, but what works when there's real money and real people on the line. They can tell you the difference between "technically correct" and "actually useful."
The domain expert surfaces the unspoken rules. There's so much stuff that experts just know: the patterns you can't articulate upfront, the edge cases that matter, the risk tolerances that vary by situation. You only get this by watching them work and asking "why did you flag that one?"
The domain expert resolves disputes. Instead of endless debates about whether something is good enough, you have one person who can say "yes" or "no" and everyone trusts that judgment because it's based on real expertise, not politics.
The domain expert helps the team reach the right outcomes faster. When someone spots a problem, they can explain exactly what's wrong and why it matters. No more guessing games.
Think of domain experts as your living, breathing quality standard. They define what good looks like, help you test for it, and decide when you've hit the mark.
Finding the right Domain Expert
You're not looking for the most senior person or the loudest voice in meetings. You're looking for the person whose judgment already determines outcomes in the real world.
Start by writing down the core decision your AI will support in one sentence. Something like "Decide whether to approve a high-value insurance claim based on incomplete documentation." That tells you what kind of expertise you actually need.
Then look for people who:
Actually make that decision today (not just advise on it)
Deal with real cases regularly (not just strategy meetings)
Get called in when others can't agree
Can spare a couple hours a week consistently
Can explain their reasoning to people outside their field
I would shortlist three or four candidates and score them on those criteria. The winner might surprise you - it's often someone two levels down from where you'd expect, someone who's still in the trenches dealing with the messy reality of the problem you're trying to solve.
One warning though: don't pick someone just because they're available or eager to help. Pick them because other people already trust their judgment.
Make it work for the Domain Expert
The biggest mistake I see is trying to turn this into some massive time commitment. These are busy people with day jobs. You need to make their involvement efficient and valuable for them too.
Start small. Get them to help you build a set of maybe 30-50 test cases that really matter - the tricky ones, the expensive failures, the edge cases that break everything. Real cases work better than made-up ones.
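To make that concrete, here's a minimal sketch in Python of what one entry in that test set could look like. The structure and field names are my own illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class GoldenCase:
    """One expert-curated test case - a real case, anonymized."""
    case_id: str
    input_summary: str        # what the system sees
    expected_outcome: str     # what the expert says "good" looks like
    why_it_matters: str       # the expert's reasoning, in their own words
    tags: list[str] = field(default_factory=list)

# A hypothetical entry, in the spirit of the claims example above:
golden_set = [
    GoldenCase(
        case_id="claims-017",
        input_summary="High-value auto claim, paperwork complete, filed two days after a policy upgrade",
        expected_outcome="flag-for-review",
        why_it_matters="Timing pattern a veteran adjuster reads as a classic fraud signal",
        tags=["fraud", "edge-case"],
    ),
]
```

The `why_it_matters` field is the point: it's where the unspoken rules from the expert's head get written down.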
Set up a weekly session, maybe 45 minutes, where the engineers bring the most contentious cases and your expert makes the call. Live conversation. When they change their mind about something, capture why and update your tests immediately.
Keep it structured but not bureaucratic. Simple pass/fail decisions with a quick explanation. If they say something fails, the engineers need to understand why so they can fix the real problem, not just that specific case.
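One lightweight way to capture those rulings - again just a sketch with hypothetical names - is a small log where every failed case feeds straight back into the test set:

```python
from dataclasses import dataclass

@dataclass
class Ruling:
    """One expert decision from the weekly session."""
    case_id: str
    verdict: str     # "pass" or "fail" - keep it binary
    rationale: str   # the expert's one-line reason, captured live

def record_ruling(rulings: list[Ruling], golden_ids: set[str], ruling: Ruling) -> None:
    """Log the ruling; a failed case joins the test set so it gets retested."""
    rulings.append(ruling)
    if ruling.verdict == "fail":
        golden_ids.add(ruling.case_id)

rulings: list[Ruling] = []
golden_ids: set[str] = {"claims-017"}
record_ruling(rulings, golden_ids,
              Ruling("claims-031", "fail", "Repair estimate predates the accident date"))
```

The rationale line matters more than the verdict: it tells the engineers what root cause to fix.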
Here's what I tell teams: if your expert is spending more than three hours a week on this, you're doing it wrong. Their job is to set the standard and adjudicate the hard cases, not to become a full-time AI trainer.
What the Domain Expert changes
With the domain expert participating, the conversations shift from "does this look right?" to "does this meet our standard?" Suddenly everyone's talking about the same thing.
I worked on one project where the engineering team was stuck for weeks arguing about accuracy metrics. We brought in this security analyst and within an hour she had completely shredded our beautiful 94% accuracy score. "You're flagging every employee who logs in from a coffee shop," she said, scrolling through our threat detection alerts. "But you missed three actual data exfiltration attempts last week because the attackers were using legitimate credentials and moving slowly. Your system is drowning us in false positives while the real threats walk right past."
She was right. We were optimizing for catching obvious anomalies, but the 6% we got wrong included every sophisticated attack that actually mattered, the ones that could cost the company millions.
Once we recalibrated around her definition of what mattered, everything clicked. The model got better because the engineers understood what they were building toward. The business side trusted it because they could see their expert's judgment baked into the system.
AgentBuild Circles is not just for engineers (and domain experts belong in them)
In AgentBuild, we don’t start with models - we start with an idea and with defining the roles in the circle. Our Circles build evaluation-led AI Agents: we co-write a simple acceptance bar, create a small Golden Set (10-15 high-signal test cases), and try to break each other’s agents. This keeps everyone building toward the same standard, not chasing shifting opinions.
And this means Domain Experts like Maria from claims (or the teacher, underwriter, or radiologist in your world) can form a core part of an AgentBuild Circle.
As a domain expert:
You set the bar; developers encode it.
You judge edge cases; builders fix root causes.
You spend hours, not days (target: ≤3 hrs/week).
The team (circle) ships with a signature Evaluation Card showing the system meets the standard - a standard defined by the domain expert.
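For illustration only (AgentBuild doesn't mandate a specific format), an Evaluation Card could be as simple as a pass rate checked against the co-written acceptance bar:

```python
from dataclasses import dataclass

@dataclass
class EvaluationCard:
    """Summary of an agent's run against the expert-defined standard."""
    agent_name: str
    golden_set_size: int
    cases_passed: int
    acceptance_bar: float    # e.g. 0.9 means 90% of golden cases must pass
    signed_off_by: str       # the domain expert who owns the standard

    def meets_standard(self) -> bool:
        return self.cases_passed / self.golden_set_size >= self.acceptance_bar

card = EvaluationCard("claims-triage-agent", golden_set_size=15, cases_passed=14,
                      acceptance_bar=0.9, signed_off_by="Maria, senior claims adjuster")
print(card.meets_standard())  # True - 14/15 ≈ 0.93 clears the 0.9 bar
```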
Don’t underestimate the power of a Domain Expert
A mediocre model with a strong domain expert will beat a brilliant model without one every single time.
I've seen data scientists build incredibly sophisticated models that nobody actually uses because they solved the wrong problem. And I've seen relatively simple systems become indispensable because someone who really understood the domain helped shape what "good" meant from day one.
The domain expert isn't just another stakeholder to manage. They're the person who turns your demo into something that actually works in the real world. Find them early, give them real authority over quality decisions, and build everything else around that foundation.
That’s why AgentBuild Circles welcome the domain expert - a role no successful AI implementation can afford to ignore.
The next cohort will be announced in a few weeks. Stay tuned!