“Forecastability” is a frequent topic of discussion on The BFD, and an essential consideration when evaluating the effectiveness of any forecasting process. A major critique of forecasting benchmarks is that they fail to take forecastability into consideration: an organization with “best in class” forecast accuracy may achieve it only because it has the easiest-to-forecast demand, not because its forecasting methods are particularly admirable.
Thus, the underlying forecastability has to be considered in any kind of comparison of forecasting performance.
Along with the general forecastability discussion comes the question, “What is the best my forecasts can be?” Can we achieve 100% forecast accuracy (0% error), or is there some theoretical or practical limit?
It is generally acknowledged that, at the other extreme, the worst your forecasts should be is the error of the naive forecast (i.e., a random walk, in which each period’s forecast is simply the prior period’s actual). You can achieve the error of the naive forecast with no investment in big computers or fancy software, or any forecasting staff or process at all. So the fundamental objective of any forecasting process is simply “Do no worse than the naive model.”
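To make that baseline concrete, here is a minimal sketch (in Python, with made-up numbers; not from Steve’s article) of the naive forecast error that any forecasting process should aim to beat:

```python
import numpy as np

def naive_forecast_mae(demand):
    """MAE of the naive (random walk) forecast: each period's forecast
    is simply the prior period's actual demand."""
    demand = np.asarray(demand, dtype=float)
    forecasts = demand[:-1]   # forecast for period t is the actual at t-1
    actuals = demand[1:]
    return np.mean(np.abs(actuals - forecasts))

# Toy demand history (illustrative only): this is the baseline error
# any forecasting process should do no worse than.
history = [120, 135, 128, 140, 150, 142]
print(naive_forecast_mae(history))
```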
“What is the best my forecasts can be?” is difficult, and perhaps impossible, to answer. But a compelling new approach to the “avoidability” of forecast error is presented by Steve Morlidge in the Summer 2013 issue of Foresight: The International Journal of Applied Forecasting.
How Good Is a “Good” Forecast?
Steve Morlidge is co-author (with Steve Player) of the excellent book Future Ready: How to Master Business Forecasting (Wiley, 2010). After many years designing and running performance management systems at Unilever, Steve founded Satori Partners in the UK.
In his article, Steve examines the current state of thought on forecastability. He considers approaches using volatility (the Coefficient of Variation), Theil’s U statistic, Relative Absolute Error, Mean Absolute Scaled Error, Forecast Value Added (FVA), and “product DNA” (an approach suggested by Sean Schubert in the Summer 2012 issue of Foresight).
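For readers who want the mechanics, here is a rough sketch of how two of these measures, the Coefficient of Variation and the Mean Absolute Scaled Error, are typically computed. The series and numbers are illustrative assumptions, not figures from the article:

```python
import numpy as np

def coefficient_of_variation(demand):
    """Volatility measure: standard deviation relative to the mean."""
    demand = np.asarray(demand, dtype=float)
    return np.std(demand) / np.mean(demand)

def mase(actuals, forecasts, training):
    """Mean Absolute Scaled Error: forecast MAE scaled by the in-sample
    MAE of the one-step naive forecast on the training history."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    training = np.asarray(training, dtype=float)
    naive_mae = np.mean(np.abs(np.diff(training)))
    return np.mean(np.abs(actuals - forecasts)) / naive_mae

# Illustrative numbers only
training = [100, 110, 105, 120, 115]
actuals = [118, 125]
forecasts = [116, 128]
print(coefficient_of_variation(training))
print(mase(actuals, forecasts, training))
```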
Errors from the naive forecast are one way of measuring the amount of noise in the data. From this, Steve makes the conjecture that “there is a mathematical relationship between these naive forecast errors and the lowest possible errors from a forecast.”
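One common way to express a forecast’s performance against this noise level is the Relative Absolute Error: the ratio of the forecast’s error to the naive forecast’s error over the same periods, where anything below 1 beats the naive benchmark. The sketch below is a simple illustration with made-up numbers, not Steve’s analysis:

```python
import numpy as np

def relative_absolute_error(actuals, forecasts):
    """Ratio of a forecast's total absolute error to the naive forecast's
    total absolute error over the same periods; values below 1.0 beat
    the naive (random walk) benchmark."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    forecast_error = np.sum(np.abs(actuals[1:] - forecasts[1:]))
    naive_error = np.sum(np.abs(actuals[1:] - actuals[:-1]))
    return forecast_error / naive_error

# Made-up series: an RAE below 1 means the forecast adds value over naive.
actuals = [100, 108, 112, 104, 118]
forecasts = [98, 105, 110, 107, 115]
print(relative_absolute_error(actuals, forecasts))
```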