How it works
From value gaps to focused action
Every session assesses the startup across sixteen value growth pillars, validates every claim on a five-level scale, and identifies what is blocking value growth. Because the system includes a sophisticated valuation model, it calculates the economic value of each gap — so the founder stops working on what feels productive and starts working on what creates the most value.
The output is not advice. It is a structured diagnostic: which pillars are strong, which are constrained, what specific action would unlock the most value, and how confident the system is in each claim. The validation level tells you how much to trust the score. A pillar at 90% and E1 is an optimistic guess. The same pillar at 60% and E3 is validated reality — and more useful.
The single largest value driver in the model is the jump from E1 (assumption) to E3 (validated). One structured conversation with five target customers creates more enterprise value than six months of building in isolation.
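The pairing of a pillar score with an evidence level, as described above, can be sketched as a simple data structure. This is a hypothetical illustration only: the evidence weights, the `trust_adjusted` discount, and the five-level E1–E5 scale mapping are assumptions for the sketch, not the product's actual valuation model.

```python
from dataclasses import dataclass

# Assumed weights: how much to trust a score at each evidence level
# (E1 = assumption ... E5 = fully validated). Illustrative values only.
EVIDENCE_WEIGHT = {"E1": 0.2, "E2": 0.4, "E3": 0.7, "E4": 0.9, "E5": 1.0}

@dataclass
class PillarAssessment:
    pillar: str    # one of the sixteen value growth pillars
    score: int     # 0-100: how strong the pillar looks
    evidence: str  # E1-E5: how validated that score is

    def trust_adjusted(self) -> float:
        """Discount the raw score by how well it is evidenced."""
        return self.score * EVIDENCE_WEIGHT[self.evidence]

# A pillar at 90% and E1 is worth less than the same pillar at 60% and E3:
optimistic = PillarAssessment("go-to-market", 90, "E1")
validated = PillarAssessment("go-to-market", 60, "E3")
print(optimistic.trust_adjusted())  # 18.0
print(validated.trust_adjusted())   # 42.0
```

The point of the sketch is the ordering, not the numbers: under any reasonable weighting, validated reality outranks an optimistic guess.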
Scale without dilution
Parallel depth
Thirty teams in a cohort get thirty expert sessions running in parallel. Fifty pipeline companies get structured due diligence before capital is committed. Sixty student teams get individual mentoring no teaching budget could otherwise afford.
Same depth, same rigour, every time. The thirtieth session is as sharp as the first.
Consistent methodology
Every founder is assessed on the same sixteen value growth pillars, the same five-level validation scale, the same five readiness gates. The assessment of a fintech in Lagos is directly comparable to a healthtech in Seoul.
Consistent methodology means consistent data — which means portfolio-level analytics that actually work.
Real-time visibility
Managers see each startup's value growth pillars, blockers, and planned actions in real time. Systemic patterns become visible: if fifteen of thirty startups are struggling with go-to-market, that's one workshop — not thirty conversations.
The dashboard aggregates individual assessments into cohort-level and portfolio-level views. Evidence quality distributions, gate progression rates, tarpit concentrations, red flag clusters — all visible before demo day, not after it.
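The cohort-level pattern detection described above could be sketched as follows. This is a hypothetical illustration: the blocked-score threshold, the share cutoff, and the data shapes are assumed parameters, not the dashboard's actual logic.

```python
from collections import Counter

def systemic_blockers(assessments, threshold=50, min_share=0.5):
    """Return pillars blocked in at least min_share of startups.

    Each assessment maps pillar name -> score (0-100); a pillar
    scoring below the threshold counts as blocked. Both cutoffs
    are assumed parameters for this sketch.
    """
    blocked = Counter()
    for pillars in assessments:
        for name, score in pillars.items():
            if score < threshold:
                blocked[name] += 1
    cutoff = len(assessments) * min_share
    return [name for name, n in blocked.items() if n >= cutoff]

# Two of three startups are stuck on go-to-market: that is one
# workshop, not three separate conversations.
cohort = [
    {"go-to-market": 35, "team": 80},
    {"go-to-market": 40, "team": 75},
    {"go-to-market": 72, "team": 30},
]
print(systemic_blockers(cohort))  # ['go-to-market']
```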
Mentoring before selection
If mentoring produces the deepest due diligence, why not mentor before you select? Every applicant gets a session. The selection committee reads mentoring assessments, not application forms. Even rejected candidates leave with a value growth roadmap.
This inverts the traditional sequence. Instead of selecting founders and then hoping mentoring helps them, you mentor first and select based on what the mentoring reveals. The selection decision is now evidence-based — grounded in observed coachability, evidence quality, and value growth potential rather than pitch performance and pattern matching.
The result: expert mentoring that scales to any programme size, produces structured investment-grade data at the individual and portfolio level, and gives every founder — whether selected or not — a clear, evidence-based understanding of where their enterprise value stands and what specific action would move it most.
How the selection process works
Three phases. Each builds on the last. Nothing is lost between them.
Phase 1
Assess
Founders have a mentoring session in their own environment — no institutional system, no onboarding. Each founder receives a personal takeaway with specific next steps. Each submits their assessment report as part of the application. You receive structured, evidence-graded data on every applicant: sixteen value growth pillars, five readiness gates, observed coachability, red flags, green flags, and evidence quality — not pitch decks.
Phase 2
Select
Your selection committee reads mentoring assessments instead of application forms. They compare founders on the same validation scale — not on who presented best. Coachability is observed behaviour, not a self-reported claim. Evidence quality distinguishes founders who have validated their assumptions from those who are still guessing. The selection decision is grounded in diagnostic data, not pattern matching.
Phase 3
Onboard
When you select a founder, their pre-selection data transfers into your system: the complete assessment, the founder's behavioural profile, the coaching history, the evidence trail, the assigned evidence discovery tasks, and the session transcript. Nothing starts from scratch. The first session inside your programme is Session 2, not Session 1 — and your dashboard already has baseline data to track progress from.
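The pre-selection record enumerated above could be modelled like this. The field names and the seeded workspace are assumptions based on the description, not the product's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PreSelectionRecord:
    assessment: dict           # complete pillar-by-pillar assessment
    behavioural_profile: dict  # founder's observed behaviour
    coaching_history: list     # summaries of prior sessions
    evidence_trail: list       # claims and their validation status
    discovery_tasks: list      # assigned evidence discovery tasks
    transcript: str            # full session transcript

def onboard(record: PreSelectionRecord) -> dict:
    """Seed the programme workspace from pre-selection data, so the
    first in-programme session starts from a baseline rather than
    from scratch (hypothetical sketch)."""
    return {
        "baseline": record.assessment,
        "prior_sessions": len(record.coaching_history),
        "open_tasks": list(record.discovery_tasks),
    }
```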
Rejected candidates aren't left empty-handed either. They keep their assessment, their takeaway, and a clear understanding of what specific evidence would strengthen their position. If they reapply next cohort, they can show what changed.
Here's how to start within a day
Before selection
Founder-driven · Individual assessments
You get: Individual assessments per startup. Immediate value. No infrastructure required.
Before a founder is part of your programme, they can't be in your system — you haven't selected them yet. But they can still have a mentoring session on their own and share the assessment with you directly. No integration, no onboarding, no infrastructure on your side.
You tell applicants to have one or more sessions and submit the assessment as part of their application. You get consistent, structured, evidence-graded data on every applicant: observed coachability, validated evidence, and diagnostic depth. Since The Startup Mentor can also generate a pitch, business case, or investor memo directly from the data, you can ask for those as well. The same approach works for investors evaluating a deal: tell the founder to have a session, read the assessment, make a better-informed decision.
After selection
Organisation-driven · Portfolio management
You get: Everything above, plus dashboards, aggregate analytics, cross-portfolio comparison, and trend detection.
Once founders are in your programme, they use the organisation's system — and something additional happens: the data connects. Cohort comparisons become possible. Portfolio-level risk distribution appears. Evidence health across your entire programme becomes visible in a single dashboard. Session-over-session trajectory shows which startups are actually making progress and which are stalling.
This is where institutional-level insight emerges — the kind of visibility that no amount of individual reports can provide. It's the difference between assessing startups and managing a portfolio.
These aren't two versions of the same product — they're two phases of your relationship with founders. The first helps you decide who to select. The second helps you manage and grow the ones you've selected.
See a sample assessment → · Sample investment assessment → · See the dashboard → · All 14 document types →
© 2026 Monroe B.V. · The Startup Mentor™