The economics of a bad AE hire are brutal and well understood. A mid-market account executive in a B2B SaaS company typically costs $120,000–180,000 in total compensation. Add recruiting fees, onboarding time, manager attention, and the opportunity cost of the territory running at zero productivity for four to six months, and the fully loaded cost of a hire that doesn't work out approaches $300,000–400,000 by the time the decision is made. And the decision itself — to let someone go or to move them out of the role — typically happens at the end of a second quarter of underperformance, which means the organisation has already absorbed most of that cost before the conversation begins.

The standard response to this problem is to invest in better hiring: more rigorous assessment, structured interviews, work samples, reference checks that go beyond the names the candidate provides. These investments are worthwhile and genuinely reduce the failure rate. But they address the wrong end of the problem. No hiring process eliminates failure — and more importantly, many AE failures are not caused by the person being the wrong hire. They are caused by inadequate visibility during the ramp period, which means that problems that were correctable at week six become irreversible by week twenty.

This article is about the visibility side of the problem.

Why quota attainment is the wrong primary signal

Most B2B sales organisations use quota attainment as the primary and often sole formal measure of AE performance. The logic is straightforward: the job is to close deals, the measure of success is closed deals, therefore the measure of performance is whether the person is closing deals.

The problem is timing. In a typical B2B SaaS sale with an average deal cycle of 60–90 days, a new AE hired in January will not close their first deal until March at the earliest, and that assumes everything goes perfectly — that they had a pipeline to inherit, that they ramped on the product and process quickly, and that deals they sourced themselves progressed without delays. In practice, most AEs do not hit meaningful quota attainment until month five or six at the earliest. Using quota attainment as the primary signal therefore delivers its verdict six months after the hire — at which point you have already consumed most of the onboarding investment and the better part of two quarters.

4.7 mo
average AE ramp time to first quota attainment in B2B SaaS — meaning most performance decisions based on quota attainment come at 5–6 months, well after the majority of the onboarding cost has been absorbed
Bridge Group SaaS AE Metrics Report, 2024

The other problem with quota attainment as a signal is that it conflates performance with territory. An AE in a rich territory with strong inbound leads and an inherited pipeline can hit quota in month three without demonstrating any of the skills that will matter in month thirteen. An AE in a greenfield territory with no inherited pipeline may be executing exceptionally well against all the behavioural indicators and still show no quota attainment at month five. Using the outcome measure alone tells you nothing about whether the underlying capability is there.

The early signals that actually predict ramp success

Research on AE ramp performance consistently identifies a cluster of behavioural indicators in the first six to eight weeks that are more predictive of eventual success than any outcome measure at the same stage. These are not personality traits or interview performance — they are observable behaviours that show up in how the AE engages with their pipeline and their process.

Process adherence in early deals. The single most reliable early predictor of AE ramp success is whether the new hire follows the defined sales process in their first five to ten deals. This sounds obvious but is routinely overlooked, because managers often give new hires latitude to "find their own style" during ramp. The problem is that process adherence in early deals is a proxy for learning agility and coachability — not because the process is necessarily optimal, but because a rep who consistently skips discovery steps or advances stages without meeting their exit criteria in their first month is showing you something about how they will behave when the stakes are higher. Reps who follow the process early tend to be good at it six months later. Reps who improvise around it early tend to continue doing so.

Contact cadence and two-way engagement rates. How many prospect-initiated responses is the AE generating in their first four weeks of active selling? Not dials made or emails sent — those are input metrics that measure effort without measuring effectiveness. The signal that matters is whether prospects are responding and engaging. An AE who sends 50 emails in their first month and receives three replies is showing you a different signal than one who sends 30 emails and receives fifteen replies. This is visible in email data. It does not require the AE to self-report, and it does not require waiting for outcomes.

Pipeline progression rates by stage. For deals that have been in the AE's pipeline for 30 days, what percentage have progressed at least one stage? What percentage have been sitting at the same stage since the AE first touched them? A new AE whose early deals are not progressing may be encountering territory problems, product-market fit issues, or process execution problems — but the pattern shows up in pipeline data long before it shows up in quota attainment.
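The 30-day progression check described above is simple to compute from CRM data. A minimal sketch, assuming a hypothetical export where each deal records the stage at the AE's first touch and the stage today (field names are illustrative, not from any specific CRM):

```python
from datetime import date

# Hypothetical CRM export: one record per deal the new AE has touched.
deals = [
    {"id": "D1", "first_touch": date(2024, 1, 8),  "stage_at_touch": 1, "stage_now": 2},
    {"id": "D2", "first_touch": date(2024, 1, 10), "stage_at_touch": 1, "stage_now": 1},
    {"id": "D3", "first_touch": date(2024, 1, 15), "stage_at_touch": 2, "stage_now": 2},
    {"id": "D4", "first_touch": date(2024, 1, 5),  "stage_at_touch": 1, "stage_now": 3},
]

def progression_rate(deals, today, min_age_days=30):
    """Share of deals at least min_age_days old that advanced >= 1 stage."""
    aged = [d for d in deals if (today - d["first_touch"]).days >= min_age_days]
    if not aged:
        return None  # nothing old enough to judge yet
    progressed = sum(1 for d in aged if d["stage_now"] > d["stage_at_touch"])
    return progressed / len(aged)

print(progression_rate(deals, today=date(2024, 2, 20)))  # 0.5: two of four aged deals moved
```

A rate of 0.5 on four deals is not a verdict on its own; the point is that the number is derivable weekly, without asking the rep anything.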

Playbook execution completeness. At each stage of the defined process, are the required steps being taken? Is the relevant email being sent, the relevant call being scheduled, the relevant document being attached? This is the most granular of the early signals and the hardest to read without the right tooling — but it is also the most actionable, because it points directly to where coaching needs to focus.

Building a ramp scorecard that works

The practical implication of this is a ramp scorecard — a weekly view of the leading behavioural indicators — reviewed by the manager in the first 12 weeks of a new AE's tenure. This is distinct from and in addition to the pipeline review: it is specifically focused on the early execution signals, not the outcome signals.

A useful ramp scorecard has roughly five to seven indicators, reviewed weekly, with explicit thresholds for what looks healthy and what warrants a coaching conversation. The indicators should be mostly observable (derivable from email, calendar, and CRM data rather than self-reported) and should track behaviour in the current period rather than cumulative outcomes.

A reasonable starting set looks something like this: weekly two-way email engagement rate (responses received / outreach sent); number of stage progressions in the period; process adherence rate (percentage of required playbook steps completed at each stage); number of new qualified opportunities created; and average response latency from prospects (the time between the AE's outreach and the prospect's reply — a measure of message quality and targeting).
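The "explicit thresholds" point is worth making concrete. A minimal sketch of a weekly scorecard check, where the indicator names and threshold values are assumptions to be tuned against your own process, not prescriptions:

```python
# Assumed healthy thresholds -- illustrative values, tune to your own data.
HEALTHY = {
    "engagement_rate": 0.15,       # replies received / outreach sent
    "stage_progressions": 3,       # stage advances this week
    "process_adherence": 0.80,     # required playbook steps completed
    "new_qualified_opps": 2,       # new qualified opportunities created
    "avg_response_latency_h": 72,  # hours to prospect reply; lower is better
}

def score_week(week):
    """Return the indicators that fall outside the healthy threshold."""
    flags = []
    for key, threshold in HEALTHY.items():
        value = week[key]
        # Latency is the one indicator where lower is healthier.
        unhealthy = value > threshold if key == "avg_response_latency_h" else value < threshold
        if unhealthy:
            flags.append(key)
    return flags

week = {
    "engagement_rate": 10 / 45,    # 10 replies on 45 outreach emails
    "stage_progressions": 1,
    "process_adherence": 0.60,
    "new_qualified_opps": 2,
    "avg_response_latency_h": 50,
}
print(score_week(week))  # ['stage_progressions', 'process_adherence']
```

The output is a coaching agenda, not a grade: this hypothetical week says the conversation should be about pipeline movement and playbook execution, not about outreach quality.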

None of these individually is a verdict. An AE can have a bad week on any one of them for entirely legitimate reasons. The value of the scorecard is the pattern across weeks — and specifically, the early identification of a rep who is showing consistently weak signals across multiple dimensions before the problem becomes irreversible.
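The pattern-across-weeks idea can be sketched as a simple rule over the weekly flag lists: alert only when a rep is weak on multiple dimensions for several consecutive weeks. The window sizes here are assumptions, not recommendations:

```python
def coaching_alert(flags_by_week, weeks=3, min_dims=2):
    """True if the rep showed >= min_dims weak indicators in each of the
    last `weeks` consecutive weeks -- a pattern, not a single bad week."""
    recent = flags_by_week[-weeks:]
    return len(recent) == weeks and all(len(f) >= min_dims for f in recent)

# Hypothetical history: flags_by_week[i] lists the unhealthy indicators in week i.
history = [
    [],                                         # week 1: clean
    ["engagement_rate"],                        # week 2: one soft spot
    ["engagement_rate", "process_adherence"],   # week 3
    ["engagement_rate", "stage_progressions"],  # week 4
    ["engagement_rate", "process_adherence"],   # week 5
]
print(coaching_alert(history))       # True: three straight weeks weak on 2+ dimensions
print(coaching_alert(history[:3]))   # False: no sustained multi-dimension pattern yet
```

Note that week 2 alone would not trigger anything — the rule deliberately ignores isolated bad weeks, which matches the point that no single indicator is a verdict.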

The coaching implication — and the decision implication

It is important to be explicit about the purpose of this kind of early visibility: it is primarily a coaching tool, not a termination tool. The goal is not to identify reps to let go faster. The goal is to give managers the information they need to intervene at a point when intervention can actually change the outcome.

An AE who is showing low process adherence in week four can be coached on it before the habit is set. An AE who is generating low engagement rates in week five can be worked with on their outreach approach before they have built six months of territory without traction. These are fixable problems at week five. They are much harder to fix at month five, by which point the rep has developed patterns that are resistant to coaching and the manager has spent political capital defending them in quarterly reviews.

That said, early visibility also compresses the time to a difficult decision when one is warranted. A rep who is showing weak signals across all five dimensions by week eight, who has received coaching on the specific gaps, and who has not responded — that is a different situation than a rep who is weak in one area and strong in others. Having the data early means the decision, if it needs to be made, is made at week twelve rather than week twenty-four. That is not cruelty — it is better for the rep, who gets clarity sooner and can move to a role that suits them, and better for the organisation, which reduces the cost of the situation and recovers the territory faster.

What tooling helps here — and what it cannot do

Getting visibility into these early signals requires two things. The first is a CRM and process setup where stage progression and playbook step completion are visible — which means the process needs to be defined at the granular level (what specifically is done at each stage) rather than just the stage level (what the deal is called at each milestone).

The second is email and calendar signal reading — specifically, the ability to see prospect response rates and engagement patterns without relying on rep self-reporting. This is available through most modern sales execution platforms and some CRM integrations, and it is the data layer that makes the ramp scorecard credible rather than dependent on what reps say about their own performance.

What tooling cannot do is substitute for the coaching conversations. The scorecard surfaces the signal; the manager still needs to act on it. Organisations that invest in the visibility layer but not in the management discipline to run weekly ramp reviews will find that the data exists but the outcomes do not improve. The data is necessary but not sufficient. What it does is give managers something concrete to work with rather than the vague feeling that something might not be going well.

◆ Ramp visibility audit — four questions for this quarter

Question 1: For your last three AEs who were let go after a ramp failure, go back and identify the earliest point at which a consistent pattern of weak behavioural signals was visible. If that point was before month three but the decision was made at month five or six, your detection lag is costing you roughly a quarter's worth of avoidable expense per hire.

Question 2: Do your managers currently review any metric other than pipeline size and stage progression during the ramp period? If not, they are flying with one instrument in a situation that requires several.

Question 3: Can you currently see — without asking the rep — how many prospect responses they generated last week? If the answer is no, you are dependent on self-reporting for the signal that most matters in early ramp assessment.

Question 4: What is your current average time from hire to first performance conversation (not a pipeline review — a conversation specifically about execution behaviour)? If it is longer than six weeks, you are systematically coaching after the patterns have set.

The ramp period is the highest-leverage management moment in the AE lifecycle. The cost of low visibility during those twelve weeks is not just a bad hire — it is a missed coaching opportunity on a hire that might have worked, and a delayed decision on one that wouldn't.