How to Understand Error Checking Variable Forecast Planning Buckets
Executive Summary
- Selecting the planning bucket should be a tested setting in forecasting systems, but it usually is not.
- We cover how to set the forecasting planning bucket.
Video Introduction: How to Understand Error Checking Variable Forecast Planning Buckets
Text Introduction (Skip if You Watched the Video)
Some companies forecast using a weekly planning bucket, others monthly. However, companies rarely test how their forecast accuracy changes with the planning bucket. The concept of “responsiveness” has led many companies to use a planning bucket that reduces the effectiveness of their statistical forecast methods. You will learn the importance of testing before selecting a forecast planning bucket.
Our References for This Article
If you want to see our references for this article and related Brightwork articles, see this link.
Why the Lack of Questioning Related to the Planning Bucket?
We find it peculiar that companies typically do not question their planning bucket, even in situations where questioning it would drive better forecast accuracy.
Demand sensing, which is not a forecasting method but a way of adjusting replenishment, has prompted companies to think that using more frequent information inputs, combined with smaller planning buckets, improves forecast accuracy.
The Importance of Testing the Planning Bucket Aggregation
When we perform forecast testing, we usually find the opposite. But either way, the planning bucket interval should not be assumed to be any one particular value without testing.
Notice the forecasting models that are assigned when the planning bucket is weekly.
Second, it should not be assumed that all product location combinations should use the same planning bucket. Naturally, product locations with lower-volume and more intermittent demand histories will benefit from larger forecast planning buckets, and vice versa.
Notice the forecasting models that are assigned when the planning bucket is monthly.
With intermittent product histories, longer planning buckets can allow for improved forecast method selection. Usually, the smaller the planning bucket, the more the pattern is “cut-up,” making any best fit procedure miss patterns that can be quickly picked up with a larger planning bucket.
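The effect described above can be sketched with a toy example. The code below is purely illustrative (the demand numbers, the 4-week "month," and the moving-average forecast are all assumptions, not from the article): it aggregates an intermittent weekly history into larger buckets and compares a scaled forecast error at each granularity.

```python
# Illustrative sketch: the same intermittent demand history forecast at a
# weekly vs. a 4-week ("monthly") planning bucket. All numbers and function
# names are assumptions for demonstration, not from any specific system.

def aggregate(history, bucket_size):
    """Sum a weekly history into larger buckets of `bucket_size` weeks."""
    return [sum(history[i:i + bucket_size])
            for i in range(0, len(history), bucket_size)]

def moving_average_error(history, window=3):
    """Mean absolute error of a trailing moving-average forecast,
    scaled by mean demand so different bucket sizes are comparable."""
    errors = []
    for t in range(window, len(history)):
        forecast = sum(history[t - window:t]) / window
        errors.append(abs(history[t] - forecast))
    mean_demand = sum(history) / len(history)
    return (sum(errors) / len(errors)) / mean_demand

# Intermittent weekly demand: many zero weeks with occasional spikes.
weekly = [0, 0, 9, 0, 0, 0, 8, 0, 0, 10, 0, 0,
          0, 0, 11, 0, 0, 0, 9, 0, 0, 10, 0, 0]
monthly = aggregate(weekly, 4)  # -> [9, 8, 10, 11, 9, 10]

print("weekly scaled error :", round(moving_average_error(weekly), 2))
print("monthly scaled error:", round(moving_average_error(monthly), 2))
```

At the weekly bucket, the history is mostly zeros and the moving average misses every spike; summed into 4-week buckets, the same demand becomes a nearly level series that a simple method fits well, which is the "cut-up pattern" effect described above.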
On one project, I performed this analysis but could not show it to the client, who said they could not handle the idea of moving to a quarterly planning bucket.
Indicators of Too Small of a Forecasting Planning Bucket
This can be determined by examining which forecasting methods are assigned to the product locations. If a large percentage of the product location database is assigned to lower-quality forecast methods, this is a good indicator that the planning bucket selected for many of the product location combinations is too small.
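This indicator is straightforward to compute from a best-fit run's output. The sketch below assumes a simple data shape and illustrative method names ("naive", "simple_average" as the low-quality fallbacks, and a 50% threshold); none of these specifics come from the article.

```python
# Hypothetical sketch of the indicator described above: if a large share of
# product locations is assigned low-quality "fallback" methods, the planning
# bucket may be too small. Method names and threshold are assumptions.

from collections import Counter

LOW_QUALITY_METHODS = {"naive", "simple_average"}

def share_low_quality(assignments):
    """Fraction of product-location combinations assigned a fallback method."""
    counts = Counter(assignments.values())
    low = sum(counts[m] for m in LOW_QUALITY_METHODS)
    return low / len(assignments)

# Assignments produced by a best-fit run at a weekly bucket (illustrative).
weekly_assignments = {
    ("SKU1", "DC1"): "naive",
    ("SKU2", "DC1"): "simple_average",
    ("SKU3", "DC1"): "naive",
    ("SKU4", "DC2"): "exponential_smoothing",
    ("SKU5", "DC2"): "naive",
}

ratio = share_low_quality(weekly_assignments)
print(f"{ratio:.0%} of product locations use fallback methods")
if ratio > 0.5:
    print("Indicator: consider testing a larger planning bucket")
```

Running the same tally after re-fitting at a larger bucket, and comparing the two ratios, turns the indicator into a concrete before/after test.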
Conclusion
Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?
It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In part, this is because they do not tell companies that forecast how to make improvements. If the standard forecast error measurements did, forecast improvement would be far more straightforward, and companies would have a far easier time performing forecast error measurement.
What the Forecast Error Calculation and System Should Be Able to Do
With a better forecast error measurement capability, one would, for example, be able to:
- Measure forecast error
- Compare forecast error across all of the forecasts at the company
- Sort the product location combinations based on which product locations lost or gained forecast accuracy versus other forecasts
- Measure any forecast against the baseline statistical forecast
- Weight the forecast error so that progress for the overall product database can be tracked
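The last two capabilities in the list can be sketched with simple data structures. Everything below is an illustrative assumption (the error metric, the dictionary layout, and the "consensus" forecast name): each product location's error is scaled so items of different volume are comparable, any forecast can be measured against the baseline statistical forecast, and the rollup is weighted by demand volume so overall progress can be tracked.

```python
# Minimal sketch, assuming a MAD-over-mean error metric and a volume-weighted
# rollup. Names and numbers are illustrative, not from any specific system.

def mad_over_mean(actuals, forecast):
    """Mean absolute deviation scaled by mean demand (comparable across items)."""
    mad = sum(abs(a - f) for a, f in zip(actuals, forecast)) / len(actuals)
    return mad / (sum(actuals) / len(actuals))

def weighted_error(product_locations, forecast_key):
    """Volume-weighted error over the whole product location database."""
    total_volume = sum(sum(pl["actuals"]) for pl in product_locations)
    return sum(
        mad_over_mean(pl["actuals"], pl[forecast_key]) * sum(pl["actuals"])
        for pl in product_locations
    ) / total_volume

product_locations = [
    {"actuals": [100, 110, 90, 105],
     "baseline": [100, 100, 100, 100],   # baseline statistical forecast
     "consensus": [120, 125, 80, 130]},  # e.g., after manual overrides
    {"actuals": [10, 0, 12, 8],
     "baseline": [8, 8, 8, 8],
     "consensus": [15, 15, 15, 15]},
]

for key in ("baseline", "consensus"):
    print(key, round(weighted_error(product_locations, key), 3))
```

Because every forecast is scored the same way against the same actuals, sorting product locations by the difference between the consensus and baseline error immediately shows where overrides added or subtracted accuracy.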
Getting to a Better Forecast Error Measurement Capability
A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparative forecast error measurement at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.
Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from the approach followed in requirements development for forecast error measurement are important for anyone who wants to improve forecast accuracy.