This post is not theoretical. It arose from a real problem I ran into while building The Startup Mentor — and it led to one of the most genuinely surprising conversations about expertise and artificial intelligence I have had.

Some context. I am building an expert mentoring system specifically for startups. It is aimed at investors, accelerators, and founders. I have mentored hundreds of startups, so I took a deliberate decision early on: let the system mentor me — I'm a founder building a startup, after all — and I would mentor it on its mentoring skills. As the system's capabilities increased, our roles shifted from teacher and student toward something closer to sparring partners. We work on problems together that neither of us could solve alone.

What follows is what we discovered about the one thing that still separates an expert human from even the most capable AI system.

The problem

I had defined the product architecture as a three-tier pyramid — assessment at the base, guidance in the middle, monitoring at the top. It looked clean. It made logical sense. But something felt off, and I couldn't articulate what.

In particular, the pyramid didn't naturally accommodate two very different target audiences — investors looking for a selection and due diligence tool, and universities embedding it into entrepreneurship courses. The architecture seemed to force a hierarchy that didn't reflect how either audience would actually use it. But I couldn't explain why, and I couldn't see what to replace it with.

This is a specific kind of problem. It is not a problem of missing information. It is not a problem of insufficient analysis. It is a problem of a frame that appears to work but is somehow wrong — and the wrongness registers as a feeling before it surfaces as an argument.

What reframing actually is

Working through the problem with the system, we ended up dissecting reframing itself — what it is, where it comes from, and why it is so difficult to automate. We broke it into three distinct steps.

1. Pattern anti-matching. Rejecting a frame that appears to work. The expert detects that something is wrong before they can say what. This operates below the level of conscious reasoning.

2. Association. The freed pieces — no longer forced into the rejected frame — attach to a known structure from experience. A drawer opens. A shape fits.

3. Conscious validation. The new frame is tested against the problem explicitly. Only now does the reframe become articulable — and therefore shareable.

The resolution in my case was straightforward once it arrived: the pyramid was wrong because it implied a sequence. But assessment, guidance, and monitoring are not sequential — they are simultaneous. Every good mentoring session involves all three at once. The architecture is a loop, not a pyramid. The moment that frame was available, both investor and university use cases fit naturally — different readers extracting different outputs from the same session, not different tiers of a hierarchy they needed to climb.

Simple in retrospect. Invisible until the frame broke.
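To make the reframe concrete, here is a minimal sketch in Python of what a loop, rather than a pyramid, implies for the data model. Everything in it (Session, investor_view, university_view) is invented for this post, not the product's actual code; the point is only that both audiences read projections of one simultaneous session instead of climbing tiers.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """One mentoring session produces all three outputs at once, not in sequence."""
    assessment: dict = field(default_factory=dict)  # claims checked against evidence
    guidance: dict = field(default_factory=dict)    # recommendations to the founder
    monitoring: dict = field(default_factory=dict)  # progress signals over time

def investor_view(s: Session) -> dict:
    # Selection and due diligence: risk signals plus trajectory.
    return {"risks": s.assessment, "trajectory": s.monitoring}

def university_view(s: Session) -> dict:
    # Coursework: teaching material plus the evidence behind it.
    return {"teaching_points": s.guidance, "evidence": s.assessment}
```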

The Gary Klein connection

What emerged from the conversation was a structure that turned out to map closely onto Gary Klein's Recognition-Primed Decision model — a framework Klein developed from studying expert decision-making in high-stakes environments: firefighters, military commanders, intensive care nurses.

Klein's central finding: experts do not compare options. They do not run through a list of alternatives and score them. They recognise situations from a pattern library built through years of experience. And crucially — they detect anomalies. When something doesn't fit, they know it before they can explain why.

What I called "shapes in a drawer," Klein calls the pattern library. What I called "pattern anti-matching," he calls anomaly detection. We arrived at the same model from completely different starting points — one building an AI mentoring system, the other studying firefighters. That convergence suggests the model is describing something real.

Tacit knowledge — knowledge that cannot be explicitly articulated — is not a vague concept. It is the compressed residue of experience that has been pattern-matched, tested, corrected, and stored below the level of conscious access. It is what allows an expert to feel that something is wrong before they can say what. It is what the drawer metaphor is pointing at: patterns stored in the subconscious, retrieved not by search but by resonance.

What this means for AI

The implications are concrete. Here is how the division of capability currently maps out.

AI excels at
  • Pattern matching at speed and scale
  • Tracing implications of a reframe once introduced
  • Consistency across large volumes of cases
  • Retrieving and synthesising structured knowledge
  • Operating without fatigue or mood effects
Human experts excel at
  • Pattern anti-matching — rejecting frames that appear to work
  • Anomaly detection from compressed tacit experience
  • Originating reframes, not just tracing their implications
  • Feeling that something is wrong before articulating why
  • Drawing on subconscious association across lived experience

The system is genuinely excellent at pattern matching. Given a frame, it can trace its implications faster and more thoroughly than any human. But it cannot do what I did when I noticed the pyramid was off — reject a frame that was logically coherent but phenomenologically wrong. That detection happens below the level of language, which means it is not reachable through the kind of token-by-token generation that current AI systems perform.

The system can execute a reframe instantly once it exists. It cannot originate one. That origination comes from experience — from having seen enough situations that the subconscious has built a library large enough to detect when something doesn't belong.

This may not be a permanent limitation: it could be a property of current architectures rather than a fundamental constraint. But right now, in the systems that exist today, pattern anti-matching remains human.

Why this matters for the product

This distinction is not just theoretically interesting. It directly shapes the design of The Startup Mentor.

The system handles the structured assessment — validating claims against evidence, identifying gaps, generating analysis across sixteen dimensions simultaneously. It executes. What the human mentor brings — what I bring when I review an output and something registers as subtly wrong before I can explain why — is the anomaly detection layer. The pre-rational filter that catches what is technically coherent but actually off.

That is not a bottleneck. It is the product. The human in the loop is not slowing things down. She is catching what the system cannot catch, in the same way a nurse in an intensive care unit catches something on a monitor that the algorithm missed — not because she ran a faster calculation, but because something in her pattern library fired.

Expert mentoring at scale becomes possible when AI handles what it is genuinely better at, and human expertise is preserved for what only humans can currently do. The loop between them — each making the other better — is where the value lives.
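As a deliberately minimal sketch of that division of labour (the function names are invented for this post, not The Startup Mentor's real interfaces):

```python
def ai_assess(startup: dict, frame: str) -> dict:
    """AI layer: given a frame, trace its implications fast and thoroughly.
    Stubbed here; in the product this is the structured multi-dimension analysis."""
    return {"frame": frame, "analysis": f"assessment of {startup['name']} under '{frame}'"}

def mentor_review(report: dict) -> str | None:
    """Human layer: pattern anti-matching. Returns a replacement frame when
    something registers as off, or None to accept. Left as a stub on purpose:
    this is the step the post argues current systems cannot perform."""
    return None

def mentoring_loop(startup: dict, frame: str = "pyramid") -> dict:
    report = ai_assess(startup, frame)
    # The loop: the human originates reframes, the AI executes them instantly.
    while (new_frame := mentor_review(report)) is not None:
        report = ai_assess(startup, new_frame)
    return report
```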

The full seven-page conversation that produced this analysis is available — link in the comments. For those who want to go deeper into the Gary Klein connection, his book Sources of Power is the right starting point. It is one of the most important books written about expertise, and it has aged remarkably well.