by David Taber

How to fix your sales forecasting process

Opinion
Oct 11, 2018
CRM Systems | Enterprise Applications

A misleading sales forecast impacts all kinds of end-of-quarter decisions, and resources can be wildly misallocated. Running some reports in your CRM system can identify areas of the forecast that are at risk.

Sales forecasting is a surprisingly risky thing, which is ironic since its whole purpose is to identify risk areas in time to apply corrective action.  While everyone agrees that an accurate forecast would be a good thing, there are a surprising number of incentives – financial and political – to forecast with too much optimism. Want to know the definition of impossible?  A sales forecast that is too conservative.

So how do you spot areas of the forecast that are at risk?  There are reports you can run in your CRM that will give you some strong clues:

  • Deals that are past their projected “close date”
  • Deals that have been in the pipeline longer than 150% of your average sales cycle
  • Big deals that haven’t been updated in the last 30 days (unless there’s a specific “on-hold” flag that indicates a good reason for it)
  • Deals with a ridiculously low “amount” value, or a large-ish value that is a nice, neat round number
  • Deals that are supposed to be far along in your pipeline process but there isn’t much detail (e.g., no “opportunity products” or “order items” defined)
  • Deals that are supposed to be in or near the negotiation stage, but there’s no quote, no draft contract, or similar artifacts of a live deal
  • Deals that have little in the way of emails, meeting notes, or phone calls
  • A deal that represents >50% of the rep’s target for the year
  • Reps who have a total pipeline that is within a percent or two of their target or the latest “top down” mandated number for their region
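
As a concrete illustration, here is a minimal sketch (in Python with pandas, over a hypothetical CSV export of open opportunities) of how a few of these red-flag checks could be scripted. The file name, column names (close_date, created_date, last_modified, amount, rep_annual_target, deal_id, rep), and thresholds are assumptions to adapt to your own CRM; most CRMs can produce equivalent reports with no code at all.

    # Minimal sketch: flag at-risk deals from a CSV export of open opportunities.
    # All file and column names below are illustrative assumptions, not a standard.
    import pandas as pd

    AVG_SALES_CYCLE_DAYS = 90                      # replace with your measured average
    today = pd.Timestamp.today().normalize()

    deals = pd.read_csv("open_opportunities.csv",
                        parse_dates=["close_date", "created_date", "last_modified"])

    flags = pd.DataFrame(index=deals.index)
    flags["past_close_date"]   = deals["close_date"] < today
    flags["stale_pipeline"]    = (today - deals["created_date"]).dt.days > 1.5 * AVG_SALES_CYCLE_DAYS
    flags["untouched_30_days"] = (today - deals["last_modified"]).dt.days > 30
    flags["odd_amount"]        = (deals["amount"] < 1_000) | \
                                 ((deals["amount"] >= 50_000) & (deals["amount"] % 10_000 == 0))
    flags["whale_deal"]        = deals["amount"] > 0.5 * deals["rep_annual_target"]

    deals["red_flags"] = flags.sum(axis=1)
    at_risk = deals[deals["red_flags"] > 0].sort_values("red_flags", ascending=False)
    print(at_risk[["deal_id", "rep", "amount", "red_flags"]])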

Now, many of these red flags will appear for reps who just hate keeping the CRM system up to date.  There’s an easy answer for this: tell them they can avoid a lot of scrutiny from their manager if they just populate the CRM with halfway-realistic data. Sure, they’ll complain about how long data entry takes…but nobody wants to sit in more meetings with their manager.

The reports above should be run every month (every week during the last month of the quarter).  This goes double if you sell physical goods where supply chain and inventory effects can severely limit your ability to respond quickly to customer demand.

The list above is really effective with direct sales reps. Things get trickier with channel sales because – unless your company is the 800-pound gorilla – the partners simply won’t give you much in the way of detail about their deals.  If you’re lucky, you get a monthly spreadsheet.  There’s not much you can really do about this except change the rules of engagement with the partner:  if they chronically forecast too high, they’ll be allocated less product; if they chronically forecast too low, they’ll lose priority for expedited and special orders.

Going deeper

Spotting fake forecasts will give your management team more confidence, but truly fixing things will require spotting bad forecasting processes.

While there is no universal best practice in forecasting – really, there can’t be one – there are plenty of worst practices to watch out for.

Running forecasts outside of the CRM

Probably the granddaddy of worst practices is having the forecast cycle run outside of the CRM, with spreadsheets emailed around the sales organization.  While sales folks (and sometimes the finance guys) are totally comfortable with spreadsheets, they invite problems, including:

  • Limited visibility – Because the spreadsheet is separated from the data about the account, people, and deal actions that help validate the rep’s claims about deal status.
  • Data corruption – Unless you put in a ton of macros and security features, Excel is an engraved invitation to data corruption and fakery (e.g., a sales manager quietly raises a number he needs to be higher).
  • Inability to audit – Unless you do some very clever things in Excel, there’s no audit trail showing how the forecast evolved as it was sent around the organization.
  • No history – Unless you archive spreadsheets, you won’t be able to see how the pipeline has evolved during the quarter. In particular, it’ll be harder to see “plug numbers” that management adds to make things look good.
  • Inviting errors – Unless you lock lots of cells, innocent but sloppy users of the spreadsheet can introduce errors in formulas.
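
One hedged way to recover the history and audit trail that emailed spreadsheets can’t give you is to archive a dated snapshot of the open pipeline every night and compare snapshots over time; plug numbers show up as sudden jumps between snapshots. A minimal sketch, again in Python/pandas and again assuming an illustrative CSV export with stage and amount columns:

    # Minimal sketch: keep one dated snapshot of the open pipeline per day,
    # then line the snapshots up to see how the forecast evolved.
    import pandas as pd
    from pathlib import Path

    snapshot_dir = Path("pipeline_snapshots")
    snapshot_dir.mkdir(exist_ok=True)

    deals = pd.read_csv("open_opportunities.csv")          # same assumed export as above
    deals["snapshot_date"] = pd.Timestamp.today().normalize()
    deals.to_csv(snapshot_dir / f"pipeline_{deals['snapshot_date'].iloc[0]:%Y%m%d}.csv",
                 index=False)

    # Later: concatenate the snapshots and watch total pipeline by stage over time.
    history = pd.concat(pd.read_csv(p, parse_dates=["snapshot_date"])
                        for p in sorted(snapshot_dir.glob("pipeline_*.csv")))
    print(history.groupby(["snapshot_date", "stage"])["amount"].sum().unstack(fill_value=0))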

Having more than one forecasting process

The next worst practice is to have more than one forecasting process.  For example, Europe might have its own private forecast that is done alongside the global forecasting cycle.  Or the factory might call around to key sales offices to find out the “real number” to drive the ERP cycle. The inherent complexity of multiple forecasts typically limits automation, requiring several manual steps, adjustments, and compensating transactions during each forecasting cycle—all making the process more error-prone and difficult to reliably repeat.

This is particularly the case for the revenue forecast, which is what will drive what you tell Wall Street. While good reasons are always given to rationalize these shenanigans, the existence of multiple forecasts will degrade trust across the organization and limit the credibility of the CRM.  Oh yeah, and you’re opening yourself up to shareholder lawsuits.

Having mandated forecasts

The mandated forecast occurs when a sales manager confuses what “the plan says we need to make” with “what the prospects look like they are actually planning to buy.” This also confuses “what we are doing because we want their money” with “what the customer needs to do for their business” – obviously, very different things. 

Forecasts that are done without customer input (e.g., email responses, meetings attended, quotes responded to) are little more than wishful thinking. This becomes a particular problem when upper management doesn’t really trust its sales organization and believes the reps are sandbagging to keep their quotas low.  Essentially, this is where everyone is gaming everyone, but they realize it only at a gut level.  No CRM consultant can fix this: it’s purely a management problem that is exposed in the CRM data.

The acid test

Metrics of forecast accuracy are the ultimate tell, but it’s easy for people to dream up metrics that can be gamed.  So have someone with a background in statistics look at your metrics and make sure they meaningfully reflect your business realities. If your business is 80% B2C impulse buys through a web store, your standards and metrics have got to be much more forgiving than those of a direct-sale business in the heavy machinery industry.

Whatever metrics you use, look for “contribution of error” percentages to narrow down where your forecasting process isn’t up to snuff. If you have chronic under-forecasting in one step that is compensated by chronic over-forecasting in another, you’ve got some work to do.
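
One hedged way to compute that: for each rep (or stage, or region), compare the forecast recorded at the start of the quarter with what actually closed, and express each unit’s absolute error as a share of the total. A minimal sketch in Python/pandas, assuming an illustrative CSV with rep, forecast, and actual columns:

    # Minimal sketch: "contribution of error" by rep for one quarter.
    # Column names are illustrative; use whatever grain (rep, stage, region) fits.
    import pandas as pd

    fc = pd.read_csv("quarter_forecast_vs_actual.csv")    # columns: rep, forecast, actual
    fc["error"]     = fc["forecast"] - fc["actual"]       # signed: positive = over-forecast
    fc["abs_error"] = fc["error"].abs()
    fc["contribution_pct"] = 100 * fc["abs_error"] / fc["abs_error"].sum()

    # Chronic over-forecasting in one place offset by under-forecasting in another
    # nets out at the top line, but it is obvious in this breakdown.
    print(fc.sort_values("contribution_pct", ascending=False)
            [["rep", "forecast", "actual", "error", "contribution_pct"]])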

The acid test of a good forecasting process is one that is repeatable, auditable, and can be self-correcting over time.  It’s hard, but the core of what you need is already on your desktop, sitting in your CRM system.