Artificial Intelligence ➛ Profitable Intelligence
I spoke with someone who watched enterprises burn millions on AI pilots that never shipped. This weekend, discover the $10K threshold that separates real opportunities from expensive experiments.
Hello everyone,
I just wrapped a conversation with Tobias Zwingmann, and he said something that stopped me cold:
“Purchasing AI has never been easier. But actually getting money back from it? That’s incredibly hard.”
Tobias runs an AI consulting firm and has spent five years watching enterprises navigate - and often fail at - AI implementation. Before that, he was hands-on building machine learning models as a data scientist. He’s seen both sides: the technical reality and the business expectation gap.
And he just wrote a book called “The Profitable AI Advantage” because apparently making a profit from AI isn’t as natural as everyone assumed.
We talked about the following topics:
Why copying Big Tech’s AI playbook backfires
The $10K value threshold that forces focus
The discovery-to-delivery gap that kills pilots
Why data maturity predicts AI success
How to prepare for 2026’s AI market consolidation
The Problem
Here’s what struck me most about our conversation: Tobias isn’t selling complexity. He’s actually trying to simplify a market that’s become needlessly chaotic.
The issue isn’t that AI doesn’t work. It’s that organizations are approaching it backwards.
“There are two big tracks happening,” Tobias explained. “The first is personal productivity - people using AI tools to work faster or produce higher quality work. I haven’t seen a single case where someone made themselves obsolete by getting more productive.”
That’s the track everyone’s familiar with. ChatGPT for writing. Copilot for coding. Individual gains that compound but don’t fundamentally restructure anything.
The second track is different. That’s where you redesign workflows assuming AI exists. Not replacing humans one-to-one, but rethinking entire processes.
“These use cases don’t just happen by giving people access to tools,” he said. “They need to be designed and thought through, often top-down.”
This is where most organizations are stuck. They’ve figured out track one. Track two requires something harder: intentional strategy.
The $10K Threshold Framework
Tobias introduced something I immediately recognized as useful: the value threshold concept.
Here’s the problem it solves: Without a clear threshold, different stakeholders evaluate AI opportunities at completely different scales. Someone gets excited about a use case that saves 50 hours per year. Meanwhile, opportunities with 10,000-hour impacts get ignored because they seem complicated.
His solution? Set a minimum threshold - he uses $10K per year (roughly the cost of one month of a full-time equivalent).
“Whatever you do with AI, every use case has to at least pass that threshold,” he explained. “This gives you alignment between business stakeholders. Everyone is clear on what threshold we’re crossing.”
Why does this work? Because AI projects aren’t plug-and-play. They require iteration, prototyping, and human resources. If a use case can’t clear your minimum threshold, the organizational effort isn’t worth it.
This framework forces you to focus on high-leverage opportunities, not just interesting technical experiments.
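To make the filtering concrete, here is a minimal sketch of how a threshold gate might look in practice. The use-case names, hours saved, and hourly rate are hypothetical illustrations, not figures from Tobias; the point is simply that a single dollar number puts a 50-hour idea and a 10,000-hour idea on the same scale.

```python
# Minimal sketch of a $10K value-threshold gate (illustrative only).
# Use-case names, hours saved, and the hourly rate are hypothetical.

HOURLY_RATE = 60        # assumed fully loaded cost per hour, in USD
THRESHOLD = 10_000      # minimum annual value a use case must clear

use_cases = [
    {"name": "Email triage assistant", "hours_saved_per_year": 50},
    {"name": "Invoice matching automation", "hours_saved_per_year": 10_000},
    {"name": "Meeting-notes summarizer", "hours_saved_per_year": 120},
]

def annual_value(case: dict) -> float:
    """Translate hours saved into a dollar figure for comparison."""
    return case["hours_saved_per_year"] * HOURLY_RATE

# Only use cases that clear the threshold earn organizational effort.
qualified = [c for c in use_cases if annual_value(c) >= THRESHOLD]

for case in qualified:
    print(f"{case['name']}: ${annual_value(case):,.0f}/year")
```

Notice what the gate does: the 50-hour idea (worth $3,000 here) drops out immediately, so stakeholders stop debating it and attention moves to the 10,000-hour opportunity.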
The Discovery vs. Delivery Gap
We spent time on why pilots succeed but production implementations fail. This resonated deeply with my own experience.
“Discovery and delivery are completely different organizations,” Tobias said. “Discovery works with small teams, light governance, fast iteration. But when you move to delivery, you need permanent ownership, sustained budgets, engineering resources.”
Most companies excel at discovery. They can spin up innovation labs, run hackathons, build prototypes. The breakdown happens at the transition point.
The typical solution - hiring external contractors to build it - only delays the problem. “You don’t learn anything as an organization,” Tobias pointed out. “There’s no feedback loop to your discovery pipeline.”
What actually works? Building internal capability to deliver, even if you start small. You need teams that can take learnings from discovery and turn them into production systems.
This is where his call center example was illuminating. Instead of jumping straight to an AI voice bot (high impact, low feasibility), they built incrementally:
A knowledge base for new agents to get onboarded faster
That same system embedded in live calls for quick information lookup
Real-time transcription with proactive AI suggestions
A website chatbot without voice
Finally, the voice component
“Even if we don’t get to the final stage, we’ve built value along the way,” Tobias said. “Each increment is valuable on its own.”
That’s the rock climbing metaphor he used - placing safety anchors as you climb. If you fall, you only drop to your last anchor. You don’t lose everything.
Data Maturity Isn’t What You Think
I asked Tobias what separates organizations stuck in pilot purgatory from those actually shipping to production.
His answer surprised me: “The ones that are successful are the ones that had high data maturity before. But not because their data is so good.”
The real advantage? Organizations with data maturity learned to deliver projects under uncertainty. They know how to scope work when you can’t predict exact outcomes. They’re comfortable iterating rather than trying to plan everything upfront.
“That’s very similar to AI,” he explained. “You have to approach it more like: here’s the potential we see, let’s iterate toward it. Give us three months for the first sprint, not more than 20K investment, and we’ll deliver a learning or an increment.”
This is fundamentally different from waterfall thinking: “These are the requirements, one year and we’re done.”
If your organization hasn’t built that muscle, AI projects will feel impossibly risky. Because they are risky - you’re just not equipped to manage that type of risk yet.
The 2026 Reality Check
Toward the end of our conversation, Tobias offered advice for organizations planning their 2026 AI strategy. It wasn’t what I expected.
“Prepare for failure,” he said. “Not your failure - prepare for failure of the AI market.”
His point: 2025 was hype. 2026 will bring consolidation. What happens when a major player stumbles? When API prices spike unexpectedly? When models suddenly require 10x or 100x more tokens?
“Prepare to own those workflows,” he advised. “Have certain AI workloads that you run locally or have long-term contracts for. You don’t want to be having conversations about whether you should still use OpenAI models because of whatever drama is happening.”
This is practical resilience, not fear-mongering. The goal isn’t to avoid AI - it’s to build AI systems that can survive market turbulence.
Focus on making today’s technology solve today’s problems. Don’t plan for the agentic future where robots do everything. Build on what’s proven, what you can control, what actually generates returns.
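One practical way to “own those workflows” is to keep the workflow code decoupled from any single vendor. The sketch below shows the idea with a thin interface; the class names and the `complete()` signature are hypothetical stand-ins, not a real SDK - real integrations would wrap actual provider calls behind the same interface.

```python
# Illustrative sketch of decoupling an AI workflow from any single vendor.
# Provider classes and complete() are hypothetical stand-ins, not real SDKs.

from typing import Protocol

class CompletionProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedProvider:
    """Stand-in for a commercial API covered by a long-term contract."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] response to: {prompt}"

class LocalProvider:
    """Stand-in for a model you run on your own hardware."""
    def complete(self, prompt: str) -> str:
        return f"[local] response to: {prompt}"

def summarize_ticket(text: str, provider: CompletionProvider) -> str:
    # The workflow depends only on the interface, not on a vendor,
    # so a price spike or outage means swapping one object, not a rewrite.
    return provider.complete(f"Summarize this support ticket: {text}")

result = summarize_ticket("Login fails after update", LocalProvider())
```

If a provider stumbles in 2026, the conversation becomes “which object do we construct?” rather than “how do we rewrite every workflow that hardcoded that API?”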
What Actually Matters
Near the end, I asked Tobias: If you could rename artificial intelligence, what would you call it?
“Profitable intelligence.”
That’s the frame that matters. Not intelligence for its own sake. Not innovation theater. Not keeping up with competitors.
Intelligence that generates measurable business value.
Everything else is just expensive experimentation.
The framework Tobias lays out in his book gives you a systematic way to find high-leverage AI opportunities and actually ship them to production. If you’re tired of POCs that go nowhere, this is worth your time.
The Profitable AI Advantage
Found this useful? Ask your friends to join.
We have so much planned for the community - can’t wait to share more soon.

