It is Tuesday evening. You are rehearsing Thursday’s board presentation. You have the revenue slide. The pipeline slide. The customer logo slide everyone likes but nobody acts on. What you do not have is the answer to the question the lead director asked last quarter — the one about probability-weighted pipeline coverage by origination source.
That question was not hostile. It was not a gotcha. It was sophisticated. It came from a director with 15 years of operating experience who has seen the metrics that distinguish companies that hit plan from those that explain why they missed it. And the question exposed a gap that most $5M–$50M technology CEOs cannot close — not because they lack the will, but because the revenue system underneath the business was never built to produce the metrics that sophisticated boards now require.
These are the five numbers. Each one is reasonable. Each one is increasingly expected by boards with operating experience or institutional investor representation. And each one requires a measurement architecture that most $5M–$50M companies simply do not possess.
1. Probability-Weighted Pipeline Coverage by Origination Source
Not raw pipeline coverage. Not blended coverage. The board wants coverage weighted by stage probability, then decomposed by the source that originated each opportunity.
The question underneath the number: which channels produce pipeline that actually converts to revenue — and which channels produce volume that inflates the coverage ratio without contributing to the close? Inbound might show 2.8x raw coverage but only 1.4x weighted. Outbound might show 1.6x raw but 1.1x weighted. Referrals might show 1.2x raw but 1.0x weighted because referral deals stage accurately and close at higher rates.
The total coverage number is meaningless without the source decomposition, because it obscures where the pipeline risk actually sits. A board that sees ‘3.2x coverage’ cannot assess whether the forecast is credible. A board that sees ‘1.4x weighted from inbound, 1.1x from outbound, 1.0x from referrals — total weighted coverage 2.1x with 60% from Stage 2 or later’ can make a governance judgment about forecast credibility.
Why your system cannot produce it: the CRM tracks leads by source and opportunities by stage, but does not natively combine weighted pipeline value by origination source into a single diagnostic view. Producing this number requires a custom report linking lead source attribution to opportunity records, weighting each deal by its stage probability, and aggregating by source. It takes a revenue operations function with analytical capability — or an external assessment that builds the view from the existing data.
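The arithmetic itself is trivial once source attribution is joined to opportunity records; the work is in the joining. A minimal sketch of the calculation, assuming a hypothetical deal schema and illustrative stage probabilities (every team calibrates its own):

```python
from collections import defaultdict

# Illustrative stage probabilities -- an assumption for the sketch,
# not a standard; calibrate against your own historical win rates.
STAGE_PROBABILITY = {"Stage 1": 0.10, "Stage 2": 0.25, "Stage 3": 0.50, "Stage 4": 0.75}

def weighted_coverage_by_source(opportunities, quarterly_target):
    """Probability-weighted pipeline coverage, decomposed by origination source.

    Each opportunity is a dict with 'source', 'stage', and 'value' keys
    (a hypothetical schema standing in for the CRM-to-lead-source join).
    """
    weighted = defaultdict(float)
    for opp in opportunities:
        weighted[opp["source"]] += opp["value"] * STAGE_PROBABILITY[opp["stage"]]
    coverage = {src: total / quarterly_target for src, total in weighted.items()}
    coverage["total"] = sum(weighted.values()) / quarterly_target
    return coverage

pipeline = [
    {"source": "inbound",  "stage": "Stage 3", "value": 400_000},
    {"source": "inbound",  "stage": "Stage 1", "value": 600_000},
    {"source": "outbound", "stage": "Stage 2", "value": 800_000},
    {"source": "referral", "stage": "Stage 4", "value": 200_000},
]
print(weighted_coverage_by_source(pipeline, quarterly_target=500_000))
```

Note how the raw inbound pipeline of $1M collapses to $260K once stage probabilities are applied: that gap between raw and weighted is exactly what the board is probing for.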
2. Full-Funnel Lead-to-Revenue Conversion Rate by ICP Segment
Not lead-to-MQL. Not MQL-to-SQL. Not SQL-to-opportunity. The full journey: from first signal captured to closed revenue recognised, split by ICP segment.
The question underneath: how efficiently does the complete revenue system convert initial interest into revenue — and does that efficiency differ materially by customer type? A 4.2% lead-to-revenue rate in mid-market and a 1.8% rate in enterprise reveal a structural conversion efficiency gap that blended metrics hide entirely. If the company is investing 60% of its sales resources in enterprise at half the conversion efficiency, the board needs to know that.
Why your system cannot produce it: most $5M–$50M companies track the funnel in stages across multiple disconnected systems — marketing automation for leads, CRM for opportunities, billing for revenue. The full-funnel conversion rate requires connecting these systems, matching records end-to-end across different identifiers, and segmenting by ICP. Without dedicated revenue operations, this analysis does not happen — and the board gets stage-to-stage conversion metrics that tell them nothing about end-to-end system efficiency.
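Once the record matching is done, the metric itself is a simple ratio per segment. A sketch of the final step, assuming the hard part (matching lead IDs end-to-end across marketing automation, CRM, and billing) has already produced a set of lead IDs that trace to recognised revenue; the field shapes are illustrative:

```python
def lead_to_revenue_rate(leads, closed_won):
    """End-to-end lead-to-revenue conversion rate by ICP segment.

    `leads`: list of (lead_id, segment) pairs from the top of the funnel.
    `closed_won`: set of lead_ids that trace through to recognised revenue,
    produced by cross-system record matching (the genuinely hard part).
    """
    totals, wins = {}, {}
    for lead_id, segment in leads:
        totals[segment] = totals.get(segment, 0) + 1
        if lead_id in closed_won:
            wins[segment] = wins.get(segment, 0) + 1
    return {seg: wins.get(seg, 0) / n for seg, n in totals.items()}

# Illustrative data: 100 leads per segment, 4 mid-market wins, 1 enterprise win.
leads = [(i, "mid-market") for i in range(100)] + \
        [(i + 100, "enterprise") for i in range(100)]
closed = {0, 1, 2, 3, 104}
rates = lead_to_revenue_rate(leads, closed)
print(rates)
```

The code is deliberately short to make the structural point: the calculation is trivial, and everything difficult lives upstream in identifier matching across disconnected systems.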
3. Net Revenue Retention Decomposed by Cohort
Not aggregate NRR. The board wants to know: are enterprise customers expanding? Are mid-market customers contracting? What is the NRR trajectory for the cohort acquired in the last 12 months versus the cohort acquired 24 months ago?
Aggregate NRR at 108% can mask a company where enterprise NRR is 125% and mid-market NRR is 88%. The enterprise segment is carrying the metric singlehandedly. The mid-market segment is eroding quietly underneath. The aggregate looks healthy. The structural picture is bifurcated — and without the decomposition, the board cannot see it.
Why your system cannot produce it: cohort-level NRR requires tagging every customer by acquisition date, segment classification, and opening ACV — then tracking revenue changes (expansion, contraction, churn) by cohort over time. Most CRMs do not automate this. Most finance teams produce it quarterly via manual spreadsheet work, with significant effort and a 30–45 day lag that makes the data stale by the time it reaches the board. The board wants it monthly. The system produces it late, laboriously, and with error rates that undermine confidence in the number itself.
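The decomposition itself reduces to summing opening and current recurring revenue per cohort tag. A minimal sketch, assuming a hypothetical customer record that already carries the cohort tag, opening ACV, and current ACV (zero if churned) — precisely the tagging most CRMs do not automate:

```python
from collections import defaultdict

def nrr_by_cohort(customers):
    """Net revenue retention per acquisition cohort.

    Each customer dict carries 'cohort', 'opening_acv', and 'current_acv'
    (0 if churned). Hypothetical schema; in practice this data spans
    CRM, finance, and billing systems.
    """
    opening = defaultdict(float)
    current = defaultdict(float)
    for c in customers:
        opening[c["cohort"]] += c["opening_acv"]
        current[c["cohort"]] += c["current_acv"]
    return {cohort: current[cohort] / opening[cohort] for cohort in opening}

book = [
    {"cohort": "2023-enterprise", "opening_acv": 100_000, "current_acv": 125_000},
    {"cohort": "2023-mid-market", "opening_acv": 50_000,  "current_acv": 44_000},
    {"cohort": "2023-mid-market", "opening_acv": 50_000,  "current_acv": 44_000},
]
print(nrr_by_cohort(book))
```

With these illustrative figures the enterprise cohort retains at 125% while the mid-market cohort sits at 88% — the bifurcation an aggregate NRR would average away.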
4. Sales Cycle Length by Deal Size and Segment
Average sales cycle is meaningless. A $15K mid-market deal closing in 28 days and a $75K enterprise deal taking 142 days produce an 85-day ‘average’ that describes neither deal type accurately. The board does not need the mean. They need the distribution.
The question underneath: are enterprise deals taking longer because the buying process is inherently complex (expected and budgetable) or because the sales process is structurally inadequate for the enterprise motion (diagnosable and fixable)? Cycle length exceeding 150% of the benchmark for that deal size and segment signals structural drag in the conversion architecture.
Why your system cannot produce it: calculating cycle length by segment requires accurate opportunity creation dates, close dates, deal sizes, and segment tags — all consistently captured. In most CRMs, the opportunity creation date is unreliable because deals are created retroactively or the date reflects CRM entry rather than first meaningful buyer engagement. The data exists in theory. In practice, it requires cleanup, standardisation, and a reporting layer that most $5M–$50M companies have never built.
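Given clean dates and segment tags, producing the distribution rather than the mean is straightforward. A sketch under the (optimistic) assumption that the creation date reflects first meaningful buyer engagement; the deal schema is illustrative:

```python
from datetime import date
from statistics import median

def cycle_days_by_segment(deals):
    """Distribution of sales-cycle length in days, per segment.

    Assumes 'created' reflects first meaningful buyer engagement --
    exactly the field that needs cleanup in most CRMs before this
    report can be trusted.
    """
    by_segment = {}
    for d in deals:
        days = (d["closed"] - d["created"]).days
        by_segment.setdefault(d["segment"], []).append(days)
    return {seg: {"median": median(v), "max": max(v), "n": len(v)}
            for seg, v in by_segment.items()}

deals = [
    {"segment": "mid-market", "created": date(2024, 1, 1), "closed": date(2024, 1, 29)},
    {"segment": "mid-market", "created": date(2024, 2, 1), "closed": date(2024, 3, 4)},
    {"segment": "enterprise", "created": date(2024, 1, 1), "closed": date(2024, 5, 22)},
]
print(cycle_days_by_segment(deals))
```

Reporting the median and the tail per segment, rather than one blended mean, is what lets a board distinguish inherent enterprise complexity from structural drag.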
5. CAC Payback Period by Acquisition Channel
Not blended CAC. Channel-specific acquisition cost — including fully-loaded marketing spend, SDR cost allocation, and sales cost attribution — divided by channel-specific gross margin contribution. The board wants to know which channels pay back in 12 months and which take 24.
Blended CAC payback of 14 months might include organic search at 6 months, paid digital at 22 months, and outbound at 18 months. The blended number masks a channel that is highly capital-efficient and two that are not — and the company’s budget allocation does not reflect this because the decomposition has never been produced.
Why your system cannot produce it: channel-specific CAC payback requires attributing fully-loaded acquisition cost to each channel’s closed revenue, then dividing by gross margin. This crosses marketing data, finance data, and sales data — typically stored in three separate systems with no standard cost attribution methodology connecting them. Most $5M–$50M companies produce blended CAC quarterly. Channel-specific CAC payback is a metric they aspire to rather than report.
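The formula, once cost attribution exists, is payback months = fully-loaded channel cost divided by the monthly gross-margin contribution of the revenue that channel closed. A sketch with illustrative figures and an assumed 80% gross margin:

```python
def cac_payback_months(channels, gross_margin=0.80):
    """CAC payback in months, per acquisition channel.

    payback = fully-loaded acquisition cost / (new ARR x gross margin / 12).
    The 80% margin and all figures below are illustrative assumptions,
    not benchmarks.
    """
    payback = {}
    for name, c in channels.items():
        monthly_gm = c["new_arr"] * gross_margin / 12
        payback[name] = c["fully_loaded_cost"] / monthly_gm
    return payback

channels = {
    "organic":  {"fully_loaded_cost": 120_000, "new_arr": 300_000},
    "paid":     {"fully_loaded_cost": 440_000, "new_arr": 300_000},
    "outbound": {"fully_loaded_cost": 360_000, "new_arr": 300_000},
}
print(cac_payback_months(channels))
```

With these figures the channels pay back in 6, 22, and 18 months respectively — and the blended number would land in the mid-teens, telling the board nothing about which channel deserves the next dollar.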
These five metrics share a structural theme: each one is reasonable for a board to expect, genuinely useful for governance, and structurally impossible to produce without measurement architecture that most $5M–$50M companies have not built. The gap is not the board’s expectations being unreasonable. It is the system underneath the revenue team being inadequate for the level of governance the company has reached.
Lead-to-Order Structural Assessment
This article gave you the five numbers your board expects. It cannot give you your numbers — because producing them requires a structural diagnosis of the measurement architecture underneath your revenue system. Source-level pipeline quality. End-to-end conversion by segment. Cohort-level NRR. Cycle distributions. Channel-level economics.
The Lead-to-Order Structural Assessment produces these numbers as part of a six-dimension scored evaluation. The sample assessment — prepared for a $7M Cloud ERP CEO — includes the exact outputs a board needs: scored dimensions, structural cost quantification, and the specific metrics listed above. See what board-grade revenue intelligence looks like. No form. No gate.
Before You Commit Capital, Credibility, or Momentum
Technology CEOs are increasingly using decision-grade GTM due diligence before high-stakes commercial bets — not to outsource judgement, but to ensure the decision stands up before it's made.
When a GTM decision is hard to unwind — a senior hire, a pricing change, a market entry — the cost of being wrong compounds quietly. Two quarters slip away before you know it failed.
Commercial Bet Due Diligence (CBDD) is a short, independent review used before commitment. It evaluates a single GTM bet across product, pricing, positioning, sales, and customer growth — and concludes with a clear verdict.
- Review a sample CBDD board memo — the artefact CEOs and boards use to govern these decisions
- Learn how the CBDD process works — and when it's applied


