The maths of manual assessment
A typical early-stage fund receives 500–1,500 applications per year. First-pass screening — deck scan, pattern match, gut feel — filters out 75–90%. The survivors get a second pass: an analyst reviews the deck, researches the market, checks the team, writes a memo. That takes 8–12 hours minimum. At €150–200/hour loaded, each second-pass assessment costs €1,200–2,400.
For the top 25 candidates that make it to partner meetings, a proper deep-dive, comparable to what The Startup Mentor produces, requires 33–43 hours of analyst work. That is roughly €5,000–8,500 per assessment. Twenty-five of those is €125,000–212,500 per year in analyst time alone.
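The arithmetic is easy to check. A minimal sketch, using only the hour counts and hourly rates quoted above; the exact endpoints differ slightly from the rounded figures in the text:

```python
# Back-of-the-envelope cost of manual assessment, using the ranges above.

HOURLY_RATE = (150, 200)        # loaded analyst cost, EUR/hour
SECOND_PASS_HOURS = (8, 12)     # deck review, market research, memo
DEEP_DIVE_HOURS = (33, 43)      # partner-meeting deep dive
DEEP_DIVES_PER_YEAR = 25

def cost_range(hours, rate):
    """Low/high EUR cost for an hours range at a rate range."""
    return hours[0] * rate[0], hours[1] * rate[1]

lo, hi = cost_range(SECOND_PASS_HOURS, HOURLY_RATE)
print(f"Second pass: EUR {lo:,}-{hi:,}")    # EUR 1,200-2,400

lo, hi = cost_range(DEEP_DIVE_HOURS, HOURLY_RATE)
print(f"Deep dive:   EUR {lo:,}-{hi:,}")    # EUR 4,950-8,600 (text rounds to 5,000-8,500)

lo, hi = lo * DEEP_DIVES_PER_YEAR, hi * DEEP_DIVES_PER_YEAR
print(f"Per year:    EUR {lo:,}-{hi:,}")    # EUR 123,750-215,000 exact;
                                            # 125,000-212,500 on the rounded range
```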
And the output? An unstructured memo that varies with the analyst. No consistent scoring. No evidence grading. No cross-pipeline comparability. Company A's memo was written by a sharp analyst on Monday morning. Company B's was written by an exhausted one on Friday afternoon. Both memos look the same. The quality is not.
The 90% you reject after a deck scan includes the founder who can't pitch but has five validated pricing conversations. The technical founder who built something extraordinary but doesn't have the warm intro. The founder building in a language you don't speak, in a market you don't recognise, solving a problem you've never encountered. A polished pitch deck proves someone can make a polished pitch deck. It doesn't prove they can build a company, and a rough deck doesn't prove they can't.
Nearly every great bet was passed over by multiple investors. The question is not whether hidden gems exist in your rejected pile. They do. The question is whether you have a way to find them that doesn't require 40 hours per company.
What assessment at scale looks like
Every startup in your pipeline submits their materials — pitch deck, website, business case, whatever they have. The system produces a full in-depth assessment on each one. Sixteen dimensions. Five-level validation scale. Evidence graded, constraints identified, hidden value mapped, valuation estimated. The same depth on company number 1 and company number 1,000.
The output is structured and comparable. You can sort your entire pipeline by evidence quality, by validation level, by specific dimension scores. The founder who pitched badly but has E4 evidence on customer validation becomes visible. The founder who pitched brilliantly but has E1 evidence on everything becomes visible too. The filter stops rewarding presentation and starts rewarding substance.
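A minimal sketch of what that sorting looks like in practice. The field names and numeric encodings are illustrative assumptions, not The Startup Mentor's actual data model; evidence grades E1–E4 are mapped to the integers 1–4:

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    company: str
    validation_level: int    # 1-5 on the five-level validation scale
    evidence: dict[str, int] = field(default_factory=dict)  # dimension -> grade, E1=1 ... E4=4

pipeline = [
    Assessment("A", validation_level=2, evidence={"customer_validation": 4, "team": 2}),
    Assessment("B", validation_level=4, evidence={"customer_validation": 1, "team": 1}),
]

# Sort by evidence on one dimension: the founder who pitched badly but has
# E4 customer-validation evidence surfaces at the top of the pipeline.
ranked = sorted(pipeline, key=lambda a: a.evidence.get("customer_validation", 0), reverse=True)
for a in ranked:
    print(a.company, "customer_validation grade:", a.evidence.get("customer_validation", 0))
```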
The same structured data that improves your pipeline filter also transforms your portfolio monitoring. Every startup assessed on the same framework means you can track value trajectories across your entire portfolio — not through narrative quarterly updates, but through comparable data. Evidence velocity becomes a leading indicator of company health. A founder whose validation levels are rising is executing. One whose levels are flat for two quarters is stalling. You see it in the data before you hear it in the board meeting.
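One way evidence velocity might be computed, as a sketch; the quarterly level histories here are hypothetical:

```python
# Evidence velocity: change in validation level per quarter.
# Hypothetical histories; levels are on the five-level validation scale.
history = {
    "Founder A": [2, 3, 4],   # rising -> executing
    "Founder B": [3, 3, 3],   # flat for two quarters -> stalling
}

def velocity(levels, window=2):
    """Average change in validation level over the last `window` quarters."""
    recent = levels[-(window + 1):]
    return (recent[-1] - recent[0]) / window

for founder, levels in history.items():
    v = velocity(levels)
    print(f"{founder}: {v:+.1f} levels/quarter -> {'executing' if v > 0 else 'stalling'}")
```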
At the ecosystem level — spanning multiple funds, accelerators, and geographies — the patterns become even more powerful. Which idea categories are overcrowded? Which founder profiles are systematically underserved? Where do regional patterns diverge from global benchmarks? None of this is possible when assessments are produced individually by different people with different frameworks.
Programmes and cohorts
The same logic applies to accelerators and university programmes. A programme with thirty startups needs thirty deep assessments — not summaries, not pitch reviews. Thirty sessions where each founder is pushed hard enough to separate what they know from what they believe. With human mentors, session 28 is never as sharp as session 3. Cognitive load accumulates. Standards drift. The data across thirty sessions is incomparable because thirty different mentors asked thirty different questions.
When every startup in a cohort is assessed on the same framework, a new category of insight becomes available: the cohort view. If fifteen of thirty startups are stuck at the same point, that is not fifteen individual problems — it is one programme-level pattern. It points to a curriculum gap, a selection bias, or a systematic misalignment between what the programme expects and what founders arrive ready to do. That insight is invisible with individual mentor reports. It is immediate with structured data.
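A sketch of how that pattern falls out of structured data. The cohort contents and the 30% flag threshold are hypothetical:

```python
from collections import Counter

# Hypothetical cohort: each startup mapped to the dimension where its
# validation level has stopped rising.
stuck_at = ["pricing"] * 15 + ["customer_validation"] * 8 + ["team"] * 7

cohort_size = len(stuck_at)   # 30
for dimension, n in Counter(stuck_at).most_common():
    if n / cohort_size >= 0.3:    # flag threshold; tune per programme
        print(f"Programme-level pattern: {n}/{cohort_size} stuck at '{dimension}'")
```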
Each assessment produces structured output automatically: an in-depth assessment (all sixteen value growth dimensions, evidence grades, red and green flags, valuation range), a takeaway for the founder (plain language, specific next actions), and structured data that feeds the portfolio dashboard. The same assessment data serves investors, programme managers, and founders — each sees what they need from the same underlying analysis.
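One way to picture that single underlying record serving three audiences; the layout is an assumption for illustration, not the product's schema:

```python
from dataclasses import dataclass

@dataclass
class AssessmentRecord:
    """One assessment; each audience reads a different projection of it."""
    dimension_scores: dict[str, int]   # sixteen value growth dimensions
    evidence_grades: dict[str, int]    # dimension -> E1..E4 as 1..4
    red_flags: list[str]
    green_flags: list[str]
    valuation_range: tuple[int, int]   # EUR low/high
    next_actions: list[str]            # plain-language steps for the founder

def investor_view(r: AssessmentRecord):
    return r.dimension_scores, r.evidence_grades, r.valuation_range

def founder_view(r: AssessmentRecord):
    return r.next_actions

def dashboard_row(r: AssessmentRecord):
    return r.dimension_scores   # feeds the portfolio dashboard
```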
The depth is the scale. The two were always the same problem. A system that can assess one startup at 40 hours of analyst depth can assess a thousand — because the methodology is the same, the framework is the same, and the output quality doesn't degrade with volume. Why this approach works →