I can diagnose the state of a company’s revenue architecture in approximately 20 minutes with the right questions. Not by looking at the dashboard. Not by reviewing the pipeline report. By asking what is underneath the data — the design decisions, or the absence of them, that produced the numbers you are looking at.
What follows is the condensed version of that diagnostic: 12 questions across four sections, each with a simple scoring rubric. By the time you have finished, you will have a clear picture of which components of your Lead-to-Order architecture are designed and documented, which are partially in place, and which are missing entirely.
Most recurring-revenue businesses between $10M and $50M ARR score between 6 and 12 on this audit. The gap between your score and 24 is the architecture work that would move your forecast accuracy, win rate, NRR, and board meeting quality. Let us find out where you are.
How to Use This Audit
Score each question using this rubric:

0 — Not in place. This component does not exist in your revenue system in any formal way.
1 — Partially in place. This component exists informally, is inconsistently applied, or is understood by some but not documented or enforced.
2 — Fully designed and documented. This component exists in written form, is accessible to every commercial team member, and is reflected in your CRM configuration.
Be honest. The value of this audit is in the accuracy of your self-assessment — not in achieving a higher score. A score of 8 that reflects reality is more useful than a score of 18 that does not.
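For readers who want to tally answers programmatically, the scoring above reduces to a few lines. This is an illustrative sketch only: the function name, the example answers, and the band labels applied here are taken from the bands described later in this article, not from any tool the author provides.

```python
# Hypothetical helper: tally the 12 audit answers (each scored 0, 1, or 2)
# and map the total to the score bands described at the end of the article.
def audit_band(scores):
    if len(scores) != 12 or any(s not in (0, 1, 2) for s in scores):
        raise ValueError("expected twelve answers, each scored 0, 1, or 2")
    total = sum(scores)
    if total <= 8:
        band = "Founder-Led System"
    elif total <= 16:
        band = "Emerging Architecture"
    else:
        band = "Designed System"
    return total, band

# Example: a profile of mostly partial components, typical at this revenue range
total, band = audit_band([1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0])
print(total, band)  # 8 Founder-Led System
```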
Section 1 — Pipeline and Qualification Architecture (Questions 1–4)
Question 1 — ICP Documentation
Does your company have a documented Ideal Customer Profile with specific, measurable criteria — firmographic, technographic, and situational — that every member of the sales and marketing team would describe consistently if asked independently?
Score 0 if: The ICP is understood by the founder or senior leaders but has never been formally documented.
Score 1 if: There is a written ICP but it uses broad descriptions (e.g., ‘mid-market technology companies’) rather than specific, measurable criteria that can be applied consistently at the qualification stage.
Score 2 if: There is a documented ICP with specific firmographic, technographic, and situational criteria that every rep can access and apply independently, and that is reflected in the CRM qualification fields.
Question 2 — Pipeline Stage Exit Criteria
Does every stage in your pipeline have a written exit criterion — a specific condition about the buyer’s situation, authority, need, and timeline that must be true for a deal to advance to the next stage?
Score 0 if: Pipeline stages are defined by what activity happened (demo booked, proposal sent) rather than what the buyer’s status is.
Score 1 if: Some stages have informal exit criteria that managers apply inconsistently during pipeline review, but they are not written down or reflected in the CRM.
Score 2 if: Every pipeline stage has a written exit criterion accessible to every rep, and the CRM requires the exit criteria to be confirmed before a deal can advance.
Question 4 — Lead Signal Definitions
Does your company have a written, agreed definition of what constitutes an MQL, an SAL, and an SQO — with specific evidence criteria for each — that both the marketing and sales teams would describe consistently if asked?
Score 0 if: The definition of a qualified lead is informal and contested — marketing and sales regularly disagree about lead quality.
Score 1 if: There is a general understanding of what makes a good lead but it is not formally written, and the handoff criteria vary depending on who is managing the process at any given time.
Score 2 if: Written definitions of MQL, SAL, and SQO exist with specific evidence criteria for each transition, both teams have agreed to them, and they are enforced in the CRM and marketing automation platform.
Section 2 — Commercial Governance (Questions 5–7)
Question 5 — Proposal Architecture
Does your company have a documented proposal structure that every rep follows — specifying what the proposal must contain, how the customer’s success criteria are addressed, how pricing is presented, and what the approval process is for non-standard terms?
Score 0 if: Proposals are created individually by each rep without a consistent structure or template, and quality varies significantly.
Score 1 if: There is a proposal template in use but it is not consistently followed, non-standard terms are approved informally, and the proposal process is not designed around how enterprise buyers evaluate and justify purchases.
Score 2 if: There is a documented proposal architecture that every rep follows, including structure, value framing, success criteria alignment, pricing presentation, and a clear approval process for non-standard terms.
Question 6 — Pricing Governance
Does your company have a written pricing governance document that specifies who can approve what level of discount, at what deal size, for what documented reason — with an escalation path for exceptions?
Score 0 if: Discount decisions are made deal-by-deal through informal conversations, with no documented approval structure or governance record.
Score 1 if: There is an informal understanding of discount authority but it is not written down, is inconsistently applied, and exceptions are routine rather than documented.
Score 2 if: A written pricing governance document exists specifying discount authority by deal size and rep level, an escalation path for exceptions, and a reporting requirement for all discounted transactions.
Question 7 — Forecast Process
Is your quarterly revenue forecast produced by a systematic process based on pipeline stage criteria — rather than assembled from the CRO or CEO’s personal assessment of the top deals?
Score 0 if: The forecast is the CRO or CEO’s personal estimate, assembled from conversations with reps rather than from structured pipeline data.
Score 1 if: The forecast uses pipeline data but applies a rough percentage weighting by stage without consistent exit criteria underlying the stage placement.
Score 2 if: The forecast is produced systematically from pipeline data where stage placement reflects verified buyer exit criteria, producing a structural forecast that the system generates rather than the leadership team assembles.
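The stage-weighted approach described under Score 1 can be sketched in a few lines. This is an illustrative example only: the stage names and probability weights are assumed values for demonstration, not figures from this article, and the weakness it embodies is exactly the one the question names — the weights are only as good as the exit criteria behind each stage placement.

```python
# Illustrative sketch of a stage-weighted pipeline forecast (the "Score 1"
# approach). Stage probabilities below are assumptions, not recommendations.
STAGE_WEIGHTS = {"discovery": 0.10, "proposal": 0.40, "negotiation": 0.70}

def weighted_forecast(deals):
    # deals: list of (stage, amount) pairs drawn from the CRM pipeline
    return sum(amount * STAGE_WEIGHTS[stage] for stage, amount in deals)

pipeline = [("discovery", 100_000), ("proposal", 50_000), ("negotiation", 20_000)]
print(weighted_forecast(pipeline))  # 44000.0
```

The arithmetic is trivial; the forecast quality depends entirely on whether stage placement reflects verified buyer exit criteria rather than rep optimism.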
Section 3 — Post-Sale Architecture (Questions 8–10)
Question 8 — Sales-to-CS Handoff Protocol
Does your company have a written handoff protocol specifying exactly what information transfers from the sales team to Customer Success at the point of close — including the customer’s stated success criteria, commercial commitments, product configuration, implementation timeline, and key relationships?
Score 0 if: The handoff is an informal introduction — a calendar invitation and a brief conversation, with no structured information transfer requirement.
Score 1 if: There is a handoff template but it is inconsistently used, and the CS team regularly starts customer relationships without the full context of what was promised during the sales process.
Score 2 if: A written handoff protocol exists with a required information set, it is completed for every deal, and it is accessible to the CS team in the CRM or CS platform immediately upon close.
Question 9 — Expansion Motion
Does your company have a documented expansion process — with specific trigger criteria, a qualification framework for expansion opportunities, and a defined commercial ownership structure between Customer Success and Sales?
Score 0 if: Expansion opportunities are identified by individual CSMs when they notice them, with no systematic process for identifying triggers, qualifying opportunities, or managing the commercial motion.
Score 1 if: Some expansion conversations happen but they are relationship-dependent rather than system-generated, and there is no consistent definition of when and how expansion opportunities enter the commercial pipeline.
Score 2 if: There is a documented expansion motion with specific trigger criteria (usage thresholds, product adoption milestones, contract anniversary proximity), a qualification framework for expansion deals, and a defined handoff between CS and Sales for commercial ownership.
Question 10 — Renewal Architecture
Does your company have a documented renewal process that begins 90 days before contract anniversary, with written criteria for at-risk designation, defined escalation paths, and clear ownership for every renewal at every risk level?
Score 0 if: Renewals are managed reactively — the CS team contacts customers near the renewal date and the conversation either goes well or surfaces a problem that is now difficult to address.
Score 1 if: There is awareness of upcoming renewals but the process begins too late (30 days or fewer before expiry), at-risk criteria are informal, and escalation paths are undefined.
Score 2 if: There is a documented renewal architecture beginning 90 days prior, with written at-risk criteria applied consistently, defined escalation to the commercial team for at-risk accounts, and a clear ownership structure for every renewal scenario.
Section 4 — System Instrumentation (Questions 11–12)
Question 11 — RevOps Metrics Design
Has your company formally defined the set of metrics that measure the health of your revenue architecture — distinguishing between real-time leading indicators (stage conversion rates, deal velocity, pipeline quality) and lagging outcome indicators (win rate, NRR, forecast accuracy) — with assigned ownership and update frequency for each?
Score 0 if: Metrics are produced reactively — dashboards are built when someone asks for them, and there is no designed metric set that the revenue team consistently monitors and acts on.
Score 1 if: A set of metrics exists but is not formally designed — some are produced automatically, others require manual assembly, and the distinction between leading and lagging indicators has never been made explicit.
Score 2 if: The metric set is formally designed, every metric has an assigned owner and update frequency, leading indicators are monitored weekly and lagging indicators quarterly, and the full set is produced automatically by the CRM and CS systems rather than assembled manually.
Question 12 — Board Metrics Production
Are the five primary board-level revenue metrics — forecast accuracy, stage conversion rates, win rate by ICP tier, NRR by cohort, and CAC payback by channel — produced automatically by your revenue system, or do they require manual assembly for each board meeting?
Score 0 if: Board metrics are assembled manually over multiple days each quarter from multiple data sources, with significant reconciliation required before they can be presented.
Score 1 if: Some board metrics are produced automatically, others require manual work, and the data sometimes requires explanation or reconciliation in the board meeting itself.
Score 2 if: All five primary board metrics are produced automatically as continuous outputs of the revenue system, with no material manual assembly required, and the data is consistent and audit-ready.
Your Score — What It Means
Score 0–8 — Founder-Led System
Your revenue architecture is operating primarily on founder intuition and informal practice. The system works because the people who built it understand it — but it does not transfer consistently to hired commercial leaders, and it does not produce reliable metrics for board reporting or performance management. Every growth plateau you hit is an architecture problem waiting to be recognised as one.
The priority at this score: pipeline stage design and ICP documentation. These two components have the highest leverage on every other metric and create the foundation that all other architecture work builds on. The shift from a founder-led system to an emerging architecture typically takes four to eight weeks of focused design work.
Score 9–16 — Emerging Architecture
You have the bones of a revenue architecture in place. Some components are designed and working. Others are partially built or informally applied. The gaps are visible in your forecast variance, your CRM adoption rate, the recurrence of the sales-marketing lead quality argument, and the time your RevOps team spends on manual data work.
The priority at this score: identifying which specific components are at 0 or 1 and sequencing the design work by impact. Pipeline stage exit criteria (Question 2) and the expansion motion (Question 9) typically have the highest leverage at this stage. The move from emerging to designed architecture takes six to twelve weeks for most companies at this revenue level.
Score 17–24 — Designed System
You have a designed revenue architecture. The foundation is in place. Your forecast is defensible. Your board metrics are produced structurally. The next moves are optimisation — making the components more precise and more connected — and augmentation — deploying AI tools on top of a designed process that can actually use them to produce reliable outputs.
The priority at this score: identifying which components are at 1 rather than 2, and completing them. Then evaluating which AI augmentation investments are now viable given the quality of your underlying data and process structure.
The Next Step
This audit gives you a score. The Lead-to-Order Architecture Assessment gives you the full picture: a component-by-component map of your revenue architecture, a precise diagnosis of which components are designed, partially built, or missing, and a prioritised roadmap of what to build next and in what sequence.
Companies between $10M and $50M ARR typically identify four to six specific architecture gaps in the first assessment. Addressing two of those gaps — pipeline stage design and either the expansion motion or the renewal architecture, depending on where your revenue mix sits — typically moves forecast accuracy by 15 to 20 percentage points and NRR by 5 to 10 points within two full quarters of implementation.
The assessment takes 15 minutes. It does not require a discovery call, a sales process, or a commitment. It produces a clear, structured diagnosis of your revenue architecture based on your specific situation.
Is your revenue architecture built to scale — or built by accident?
Most recurring-revenue companies between $10M and $50M ARR have never formally designed their Lead-to-Order architecture. They have a CRM, a pipeline, a process of sorts — but not a system with deliberate structure, stage exit criteria, qualification frameworks, handoff protocols, and an expansion motion that runs without founder involvement.
The Lead-to-Order Architecture Assessment shows you exactly where your system is designed, where it is accidental, and where it is missing — component by component, with a prioritised fix list.

