The median $5M–$50M B2B technology company that consistently hits plan carries 2.1x probability-weighted pipeline coverage. Not 3x. Not 4x. 2.1x weighted. 

That number probably contradicts whatever benchmark your VP Sales cited at last quarter’s board meeting. It contradicts it because their benchmark was almost certainly drawn from an industry report aggregating companies from $1M to $500M in ARR, making the median meaningless for any specific company within the range. A pipeline coverage benchmark calculated across a 500x revenue span tells you nothing actionable about your $12M business. The denominator is too diverse. The median is a fiction that sounds like a fact. 

These five benchmarks are drawn from Lead-to-Order assessments of B2B technology companies specifically in the $5M–$50M band. They are specific to this revenue range, specific to B2B technology, and ordered by predictive power — the degree to which each benchmark correlates with consistent quarterly plan attainment. 

1. Probability-Weighted Pipeline Coverage: 2.1x

Median (companies that hit plan): 2.1x | Top quartile: 2.8x | Bottom quartile: 1.3x 

Not raw coverage. Weighted by stage probability: Stage 1 at 8%, Stage 2 at 22%, Stage 3 at 45%, Stage 4 at 72%, Stage 5 at 88%. The median company reporting 3.4x raw coverage sits at 1.8x when weighted. The gap — 1.6x — is phantom pipeline: deals counted at full value with less than a 20% close probability. They are in the CRM. They inflate the dashboard. They will not become revenue. 

The gap between raw and weighted exists because most companies measure coverage raw: total pipeline value divided by target. It is a simple calculation that produces a reassuring number. Probability weighting adjusts each deal by its stage-specific close rate, producing a number that reflects what the pipeline can actually deliver rather than what it theoretically contains. The gap between these two numbers is the gap between what the board sees and what reality will deliver. 

Self-test 

Open your pipeline. Multiply each deal’s value by the stage probability above. Sum the results. Divide by your quarterly target. If the number drops below 2.0x, plan attainment probability falls below 55%. Below 1.5x, you need to win 70%+ of everything in the pipeline — a structural impossibility in any competitive market with normal competitive loss rates. 
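That self-test can be sketched in a few lines. The deal values and the $150K quarterly target below are hypothetical; the stage probabilities are the ones quoted in this article.

```python
# Stage-specific close probabilities quoted above.
STAGE_PROB = {1: 0.08, 2: 0.22, 3: 0.45, 4: 0.72, 5: 0.88}

def weighted_coverage(deals, quarterly_target):
    """Probability-weighted pipeline coverage.

    deals: iterable of (deal_value, stage) pairs.
    Returns weighted pipeline value divided by the quarterly target.
    """
    weighted = sum(value * STAGE_PROB[stage] for value, stage in deals)
    return weighted / quarterly_target

# Hypothetical pipeline against a $150K quarterly target.
deals = [(120_000, 1), (80_000, 3), (60_000, 4), (40_000, 5)]
raw = sum(value for value, _ in deals) / 150_000   # 2.0x raw coverage
weighted = weighted_coverage(deals, 150_000)       # ~0.83x weighted
```

The raw-versus-weighted gap in this toy pipeline (2.0x versus roughly 0.83x) is the phantom-pipeline effect described above: most of the dollar value sits in Stage 1, where it counts at 8 cents on the dollar.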

2. Full-Funnel Lead-to-Revenue Conversion Rate: 3.8%

Median: 3.8% | Top quartile: 5.4% | Bottom quartile: 2.1% 

Full-funnel means the share of leads that ultimately became closed-won revenue: closed-won deals divided by total leads originated in the same period. Not lead-to-MQL. Not MQL-to-SQL. The complete journey from first signal captured to cash recognised. This number is unforgiving because it includes every lead that went nowhere — the false positives from paid campaigns, the unqualified inquiries from content downloads, the ‘just browsing’ contacts who entered the system and consumed sales capacity without ever having genuine buying intent.

The reason this benchmark is predictive: it measures the entire revenue system’s efficiency as a single number. A company with strong lead generation but weak conversion sits at 2%. A company with moderate lead generation but excellent conversion sits at 5%. The blended rate reveals the system’s structural efficiency regardless of where the weakness lives — and the assessment locates where the leak is occurring.

Self-test 

Closed-won deals last quarter divided by total new leads entered in the same quarter. If below 3%, the revenue system has a structural conversion problem somewhere between signal capture and close. The question is not whether the problem exists — it is where it sits. Signal quality? Pipeline qualification? Conversion mechanics? Pricing realisation? Each produces the same symptom (a low full-funnel rate) from a different structural cause.
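A sketch of that arithmetic with made-up quarterly figures, reading the rate as the count of leads that ended in closed-won revenue over all leads (a percentage needs like-for-like units):

```python
def full_funnel_rate(closed_won_deals, total_leads):
    """Full-funnel lead-to-revenue conversion: leads that became
    closed-won revenue divided by all leads originated in the period."""
    return closed_won_deals / total_leads

# Hypothetical quarter: 31 closed-won deals from 1,000 new leads.
rate = full_funnel_rate(31, 1_000)   # 0.031, i.e. 3.1% -- below the 3.8% median
```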

3. Sales Cycle Length: Segmented by Deal Size

$15K–$25K ACV: 42 days | $50K–$100K: 128 days | >$100K: 187 days | Top quartile: 25–30% faster

Average cycle length is meaningless without segmentation. A company reporting a 72-day average cycle with $20K mid-market deals closing in 35 days and $80K enterprise deals closing in 155 days has no ‘average’ deal. The distribution is bimodal. The mean describes neither reality. The median for each segment is the only number that tells you whether your cycle is structurally healthy or structurally dragging for that deal type.

The structural signal: if your enterprise cycle exceeds 150% of the benchmark for that deal size, the conversion architecture is creating drag. The most common causes are deals entering pipeline before qualification is complete (adding weeks of unproductive early-stage time), proposals sent before the economic case is established (triggering discount negotiations that extend the cycle by 3–6 weeks), and single-threaded deals that stall when the champion is unavailable (adding weeks of dead time waiting for one person to re-engage). 

Self-test 

Segment your closed deals from the last four quarters by ACV band. Calculate the median cycle for each. Compare to the benchmarks above. If enterprise cycles exceed 175 days, the drag is structural. If mid-market cycles exceed 55 days, the qualification gate is too loose. Both are diagnosable with the right data. 
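One way to run that segmentation. The band edges and the deal data are hypothetical; the benchmark medians are the ones from this section.

```python
from statistics import median

# Benchmark median cycle (days) per ACV band, from this section.
BENCHMARKS = {"$15K-$25K": 42, "$50K-$100K": 128, ">$100K": 187}

def acv_band(acv):
    # Hypothetical band edges chosen to match the benchmark labels.
    if acv > 100_000:
        return ">$100K"
    if acv >= 50_000:
        return "$50K-$100K"
    return "$15K-$25K"

def median_cycle_by_band(closed_deals):
    """closed_deals: iterable of (acv, cycle_days) from the last four quarters."""
    groups = {}
    for acv, days in closed_deals:
        groups.setdefault(acv_band(acv), []).append(days)
    return {band: median(days) for band, days in groups.items()}

deals = [(20_000, 38), (22_000, 44), (80_000, 131),
         (90_000, 119), (150_000, 201)]
medians = median_cycle_by_band(deals)
# medians[">$100K"] = 201 vs a 187-day benchmark: structural drag territory
```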

4. Net Revenue Retention: 108%

Median: 108% | Top quartile: 118% | Bottom quartile: 94% 

Below 100%: the installed base is shrinking in revenue terms even if logos are retained. The company is losing more revenue to contraction and churn than it captures through expansion. Between 100% and 105%: expansion is accidental — happening in isolated accounts through individual CSM initiative, not through a structured commercial motion. Between 105% and 115%: expansion is real but ad hoc: there is motion behind it, but no dedicated pipeline, triggers, or targets. Above 115%: expansion is a structured revenue engine with its own pipeline, its own triggers, and its own commercial accountability.

The gap between 108% (median) and 118% (top quartile) at $15M ARR is $1.5M in annual revenue. That revenue has no acquisition cost, no new sales cycle, and no competitive displacement risk. It comes from customers who have already been won, already been onboarded, and already demonstrated satisfaction. The revenue potential exists in the installed base. The capture mechanism does not. 

Self-test 

NRR below 108% with logo retention above 85% is a clear diagnosis: the product retains but the commercial motion does not expand. The constraint is not churn. It is the absence of a structured expansion architecture — usage triggers, expansion playbooks, commercial CSM targets, and QBR-to-expansion conversion processes. 
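The NRR arithmetic itself is simple. A sketch with illustrative numbers, using the standard formula rather than anything specific to this assessment:

```python
def net_revenue_retention(starting_arr, expansion, contraction, churn):
    """NRR over a period: revenue retained from the starting cohort,
    including expansion, net of contraction and churned accounts."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# Hypothetical $15M ARR company sitting at the 108% median:
nrr = net_revenue_retention(15_000_000, 1_800_000, 350_000, 250_000)  # 1.08
```

At a $15M base, each NRR point is worth $150K a year, which is how the gap between the 108% median and the 118% top quartile comes to $1.5M.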

5. Forecast Accuracy: 82%

Median: 82% (within ±10% of commit) | Top quartile: 91% | Bottom quartile: 68% 

Below 75%: the forecasting methodology is judgment-based. The committed number is assembled by reviewing individual deals, applying personal probability estimates, and producing an opinion that occasionally approximates reality. The forecast is a person’s judgment, not a system’s output. 

Above 85%: the pipeline architecture provides structural inputs — weighted coverage by stage, historical stage-to-stage conversion rates, aging adjustments, and pipeline creation velocity — that produce the forecast as a system output. The VP Sales reviews the number for strategic context. They do not build it from scratch each quarter.

The difference between 68% accuracy and 91% accuracy is not the VP Sales’s talent. It is the architecture underneath the forecast. A talented VP applying judgment to unstructured pipeline data will produce 70–75% accuracy. The same VP with governed pipeline stages, weighted coverage calculations, and historical calibration will produce 85–90%. The architecture is the variable, not the person. 

Self-test 

Compare the last four quarters: committed forecast versus actual close. If accuracy averages below 80%, the pipeline architecture cannot support a reliable forecast. More scrutiny applied to structurally unreliable data will not produce a reliable number. It will produce a more confident version of the same unreliable number. 
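One reasonable reading of that self-test, with invented quarterly numbers: score each quarter as 1 minus the relative miss against commit, average the scores, and separately count how many quarters land inside the ±10% band the benchmark uses.

```python
def quarter_accuracy(committed, actual):
    """Per-quarter forecast accuracy: 1 minus the relative miss vs commit."""
    return 1 - abs(actual - committed) / committed

# Hypothetical last four quarters: (committed, actual).
quarters = [(2_000_000, 1_850_000), (2_200_000, 2_310_000),
            (2_400_000, 2_040_000), (2_500_000, 2_450_000)]

avg_accuracy = sum(quarter_accuracy(c, a) for c, a in quarters) / len(quarters)
within_band = sum(abs(a - c) / c <= 0.10 for c, a in quarters)
# avg_accuracy ~ 0.926; 3 of 4 quarters land within 10% of commit
```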

Lead-to-Order Structural Assessment

These five benchmarks tell you where you stand relative to the $5M–$50M B2B technology band. They do not tell you what is constraining your performance or what the structural gap costs per quarter. Knowing you sit at 1.4x weighted coverage is diagnostic. Knowing why — and what it costs — requires the six-dimension structural analysis underneath the number. 

The Lead-to-Order Structural Assessment converts these benchmarks into a scored diagnosis with structural cost quantification. See the sample — a $7M Cloud ERP company benchmarked across all six dimensions. Every score. Every cost estimate. Every annotation. No form. No gate. 

If This Decision Is Live For You

Before You Commit Capital, Credibility, or Momentum

Technology CEOs are increasingly using decision-grade GTM due diligence before high-stakes commercial bets — not to outsource judgement, but to ensure the decision stands up before it’s made.

When a GTM decision is hard to unwind — a senior hire, a pricing change, a market entry — the cost of being wrong compounds quietly. Two quarters slip away before you know it failed.

Commercial Bet Due Diligence (CBDD) is a short, independent review used before commitment. It evaluates a single GTM bet across product, pricing, positioning, sales, and customer growth — and concludes with a clear verdict:

GO / HOLD / STOP
See How Commercial Bet Due Diligence Works