Forecast Error Myth #3: Sales and Marketing Have Their Forecast Error Measured
Executive Summary
- Companies typically have Sales and Marketing departments that never have to worry about having their forecast error measured.
- We cover the problems that arise when Sales and Marketing have no forecast accountability.
Video Introduction on Forecast Error Myth #3: Sales and Marketing Have Their Forecast Error Measured
Text Introduction (Skip if You Watched the Video)
Very few people will come straight out and admit that they don't care about forecast error or don't want to be held accountable for inaccurate forecasts. However, most people, if given the opportunity, will change their forecasts after the fact. This is why forecasts have to be measured: so that contributors cannot escape acknowledging that they were incorrect. In the company setting, the two departments that provide the highest-error inputs to the forecast are Marketing and Sales. You will learn why Marketing and Sales don't care much about forecast accuracy.
Our References for This Article
If you want to see our references for this article and related Brightwork articles, see this link.
Forecast Maintenance (Option #1)
| Forecast Origin | January Forecast | February Forecast | March Forecast |
|---|---|---|---|
| Initial Statistical Forecast | 50 | 50 | 50 |
| Sales Forecast | 60 | 65 | 70 |
| Marketing Forecast | 90 | 100 | 100 |
| Final Forecast | 70 | 100 | 70 |
Pros and Cons of Option #1
Under Option #1, each forecast input is kept as a separate row, and some mechanism determines the final forecast, which may be one of the three inputs (statistical, Sales, or Marketing) or a blend of them. In the table above, the January final forecast is a blend of the three inputs, the February forecast is taken directly from Marketing, and the March forecast is taken directly from Sales. The advantage of this design is that every input is preserved, so each can later be measured against actuals separately, as the sketch below illustrates.
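Here is a minimal sketch in Python of the measurement that Option #1 enables. The actuals are hypothetical illustration values, since the table above does not include them.

```python
# Option #1 sketch: every input is stored as its own row, so each
# stream can be scored against actuals independently.

forecasts = {
    "statistical": [50, 50, 50],   # Jan, Feb, Mar
    "sales":       [60, 65, 70],
    "marketing":   [90, 100, 100],
    "final":       [70, 100, 70],
}
actuals = [55, 70, 80]  # hypothetical actual demand for Jan, Feb, Mar

def mape(forecast, actual):
    """Mean absolute percentage error across the periods."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

for origin, stream in forecasts.items():
    print(f"{origin:>11} forecast: MAPE = {mape(stream, actuals):.1%}")
```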
Forecast Maintenance (Option #2)
| Forecast Origin | January Forecast | February Forecast | March Forecast |
|---|---|---|---|
| Single Forecast Row (After Overwriting) | 50 | 65 | 100 |
Pros and Cons of Option #2
Under Option #2, there is only one forecast row, and it is continually overwritten. This is the simplest design to maintain, but the history of each input is destroyed: once Sales and Marketing have adjusted the row, no one can go back and measure the error of the individual inputs, as the sketch below illustrates.
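A short sketch, again with hypothetical values, of why the overwriting matters: once Sales and Marketing have written over the row, the original statistical values are unrecoverable.

```python
# Option #2 sketch: a single forecast row is overwritten in place.
forecast = [50, 50, 50]   # the initial statistical forecast (Jan, Feb, Mar)
forecast[1] = 65          # Sales overwrites February
forecast[2] = 100         # Marketing overwrites March

print(forecast)           # [50, 65, 100] -- the original inputs are gone,
                          # so no per-input error can be measured afterward
```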
Forecast Maintenance (Option #3)
| Forecast Origin | January Forecast | February Forecast | March Forecast |
|---|---|---|---|
| Initial Statistical Forecast | 50 | 50 | 50 |
| Final Forecast | 50 | 65 | 100 |
Pros and Cons of Option #3
Option #3 can be viewed as a mixture of Option #1 and Option #2. It allows measurement of the error difference between the statistical forecast and the final adjusted forecast, but not of the individual Sales and Marketing inputs.
I have seen all three methods deployed at companies, and I believe Option #3 is the most common. However, even though most companies store both the statistical and final forecasts, most still do not measure the forecast error differential between the two; they typically measure only the error of the final forecast.
This robs the company of the ability to identify which inputs improve the forecast and which degrade it.
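Measuring this differential is essentially what is known as forecast value added analysis: score the statistical and final forecasts separately, then compare. A minimal sketch, using the Option #3 table above and hypothetical actuals:

```python
# Option #3 differential sketch. Actuals are hypothetical illustration values.
statistical = [50, 50, 50]   # Jan, Feb, Mar
final       = [50, 65, 100]
actuals     = [55, 70, 80]   # hypothetical actual demand

def mape(forecast, actual):
    """Mean absolute percentage error across the periods."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

stat_error  = mape(statistical, actuals)
final_error = mape(final, actuals)

print(f"statistical MAPE: {stat_error:.1%}")
print(f"final MAPE:       {final_error:.1%}")
# Positive means the Sales/Marketing adjustments improved on the baseline.
print(f"value added by adjustments: {stat_error - final_error:+.1%}")
```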
Forecast Error Measurement of the Multiple Inputs
When companies talk about their forecast error, they almost invariably quote the error of the final forecast. Even the idea of tracking separate forecast errors for each input is an alien concept. It may be that Sales should get zero input into the final forecast, or that Sales should be the primary input, but only for three product categories. Without multiple-stream forecast error measurement, no one knows, as the sketch below illustrates.
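A hypothetical sketch of the kind of per-category comparison that would answer the question. Every category name and number below is invented for illustration.

```python
from collections import defaultdict

# (category, statistical forecast, sales forecast, actual) -- hypothetical
history = [
    ("shampoo",    50,  60,  58),
    ("shampoo",    55,  70,  68),
    ("detergent",  90, 120,  85),
    ("detergent", 100, 140,  95),
]

totals = defaultdict(lambda: {"stat": 0.0, "sales": 0.0, "n": 0})
for category, stat, sales, actual in history:
    totals[category]["stat"]  += abs(stat - actual) / actual
    totals[category]["sales"] += abs(sales - actual) / actual
    totals[category]["n"]     += 1

# Keep Sales input only where it beats the statistical baseline.
for category, t in totals.items():
    stat_mape  = t["stat"] / t["n"]
    sales_mape = t["sales"] / t["n"]
    verdict = "keep" if sales_mape < stat_mape else "drop"
    print(f"{category}: statistical {stat_mape:.1%} vs sales {sales_mape:.1%}"
          f" -> {verdict} Sales input")
```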
The Problem of Forecast Error Measurement Within Forecasting Applications
Now that we have gone through what can be measured given the availability of forecast data, we return to the subject we passed over when discussing Option #3: how to perform forecast error measurement on multiple forecast inputs.
A logical approach is to begin measuring these forecasts. However, if this requires creating custom reports, it typically will not occur. Two factors hamstring the process.
- The standard forecast error calculation methods (that everyone uses) make the overall process difficult.
- Individual, and hence comparative, forecast error measurements require a report that exports the forecasts from the forecasting system. These reports are generally quite inflexible, and they rarely show comparative forecast error among the different forecast inputs. Instead, they usually provide an average forecast error across many product-location combinations using the standard error calculation methods.
A Better Approach
After observing the same problem at so many companies, we developed the Brightwork Explorer, in part, as a purpose-built application that can measure any forecast. This includes, of course, the Sales and Marketing forecasts. The application has a very straightforward file format into which your company's data can be loaded, and the forecast error calculation is exceptionally straightforward. The Sales and Marketing forecasts can be measured against the statistical forecast, and the product-location combinations can then be sorted to show which of them lost or gained forecast accuracy from the Sales or Marketing input.
This is the fastest and most accurate way of measuring multiple forecasts that we have seen.
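To illustrate the sorting idea (this is a sketch of the comparison described, not the Brightwork Explorer itself; all values are hypothetical):

```python
# (product, location, statistical forecast, sales forecast, actual) -- hypothetical
rows = [
    ("A", "DC1", 50, 58, 55),
    ("A", "DC2", 80, 70, 82),
    ("B", "DC1", 30, 45, 44),
]

def ape(forecast, actual):
    """Absolute percentage error for a single period."""
    return abs(forecast - actual) / actual

results = []
for product, location, stat, sales, actual in rows:
    gain = ape(stat, actual) - ape(sales, actual)  # > 0: Sales input helped
    results.append((gain, product, location))

# Sort so the product locations that gained the most accuracy come first.
for gain, product, location in sorted(results, reverse=True):
    print(f"{product}/{location}: accuracy gained from Sales input {gain:+.1%}")
```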
Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?
It is important to understand forecast error, but the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell companies that forecast how to make improvements. If they did, forecast error measurement would be far more straightforward, and companies would have a far easier time improving their forecasts.
What the Forecast Error Calculation and System Should Be Able to Do
The forecast error calculation and system should, for example, allow one to:
- Measure forecast error
- Compare forecast error across all of the forecasts at the company
- Sort product-location combinations based on which lost or gained forecast accuracy relative to other forecasts
- Measure any forecast against the baseline statistical forecast
- Weight the forecast error, so that progress across the overall product database can be tracked (see the sketch after this list)
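The weighting requirement can be illustrated with a short sketch. Weighting by actual volume (one common choice, often called weighted MAPE or WAPE) keeps low-volume items from dominating the company-wide number; the values below are hypothetical.

```python
# (product-location, forecast, actual) -- hypothetical values
items = [
    ("A/DC1",  60,   55),
    ("B/DC1",  45,   31),
    ("C/DC2", 900, 1000),
]

total_abs_error = sum(abs(forecast - actual) for _, forecast, actual in items)
total_volume    = sum(actual for _, _, actual in items)

# Weighted MAPE (WAPE): total absolute error over total actual volume.
print(f"volume-weighted error: {total_abs_error / total_volume:.1%}")
```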
Getting to a Better Forecast Error Measurement Capability
A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated. Furthermore, the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product-location combination at a time. After observing ineffective and non-comparative forecast error measurement at so many companies, we developed the Brightwork Explorer, in part, as a purpose-built forecast error application to meet these requirements.
Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from the requirements we developed for forecast error measurement are important for anyone who wants to improve forecast accuracy.