Twenty-five years of forecasting
The past 25 years have seen phenomenal growth of interest in judgemental approaches to forecasting and a significant change in researchers' attitudes to the role of judgement. While judgement was previously thought to be the enemy of accuracy, today it is recognised as an indispensable component of forecasting, and much research attention has been directed at understanding and improving its use. Human judgement has been shown to provide a significant benefit to forecasting accuracy, but it is also subject to many biases; much of the research has been directed at understanding and managing these strengths and weaknesses. An indication of the explosion of research interest in this area is that over 200 studies are referenced in this review.
While judgement has always played an important role in forecasting, academic attitudes to the role and place of judgement have undergone a significant transformation in the last 25 years. It used to be commonplace for researchers to warn against judgement (e.g. Hogarth & Makridakis, 1981), but there is now an acceptance of its role and a desire to learn how to blend judgement with statistical methods to produce the most accurate forecasts. The forecasting practitioner has never shared the researcher's scepticism towards judgement, and it is generally recognized that without management judgement in forecasting, serious problems can result. Worthen (2003) describes Nike's $400 million experiment with forecasting software, which went disastrously wrong, leading to massive inventory write-offs due to the system's inaccuracy and lack of management input. Worthen claims that "corporate America is littered with companies that invested heavily in demand software but have little or nothing to show from it". Good forecasting requires that management judgement play its role and, equally important, that the forecasting systems be effectively implemented (Fildes & Hastings, 1994). It is also important that the goal of the forecasting be clearly defined. The costs of lost sales and excess inventory are rarely equal, and they fall on different organisational units; this often leads to different units having differing forecasting goals (Lawrence, O'Connor, & Edmundson, 2000).
The poor corporate experience of forecasting software claimed by Worthen may partly explain the finding of Sanders and Manrodt (2003), from a large survey of 240 US corporations, that only 11% reported using forecasting software. Of those that did use forecasting software, 60% indicated they routinely adjusted its forecasts on the basis of their judgement. Thus, understanding the proper use of judgement is more important than ever for researchers and practitioners.
It may be expected that judgement would play an important role in company sales forecasting, where the impacts of promotions and competitor activity, generally known or anticipated by marketing staff, can be built into the forecasts. But judgement also plays an important role in macro-economic forecasting (Batchelor & Dua, 1990; Clements, 1995; Fildes & Stekler, 2002; McNees, 1990; Turner, 1990). Fildes and Stekler (2002), in their review of macroeconomic forecasting, summarise their findings by stating that "the evidence unequivocally favours (judgmental) interventions".
In Fig. 1, we show the steps in forecasting, say, the sales of a product. We propose viewing the total set of data useful for forecasting as made up of two classes: the history data and the domain or contextual data. The history data are the history of the sales of the product. The domain data are, in effect, all the other data which may be called on to help understand the past and to project the future; they include past and future promotional plans, competitor data, manufacturing data and macroeconomic forecast data. The data usually input to a forecasting decision support system are the history data and occasionally promotion data. The adjustment review process is informed by both the history data and the domain data.
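The two-class view of the data can be illustrated with a minimal sketch, not taken from the paper: a statistical forecast is computed from the history data alone, and an adjustment review step then modifies it in light of domain data. The function names, the smoothing constant and the adjustment factors are all hypothetical choices for illustration.

```python
# Illustrative sketch of Fig. 1's data classes (all names and numbers are
# hypothetical, not from the paper). The history data drive a statistical
# forecast; the domain data inform a judgmental-style adjustment review.

def smoothed_forecast(history, alpha=0.3):
    """One-step-ahead simple exponential smoothing over the sales history."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def adjusted_forecast(history, domain):
    """Adjust the statistical forecast using domain (contextual) data."""
    base = smoothed_forecast(history)
    if domain.get("promotion_planned"):
        base *= 1.10  # assumed uplift a reviewer might apply for a promotion
    if domain.get("competitor_entry"):
        base *= 0.95  # assumed downward adjustment for competitor activity
    return base

history = [100, 104, 98, 107, 110]   # history data: past sales of the product
domain = {"promotion_planned": True, "competitor_entry": False}
print(round(adjusted_forecast(history, domain), 2))
```

The point of the sketch is only the division of labour: the decision support system sees the history data, while the review step draws on the wider domain data to adjust the output.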
In this review of the past 25 years of research into judgmental forecasting, we have divided the field along the lines of Fig. 1. We first consider judgmental forecasting of a time series with no domain or contextual knowledge. Under this restriction, if we compare a judgmental forecaster with a quantitative model, as both are limited to the same data set, we gain a fair comparison of the strengths of each mode of forecasting. We then move to examine the influence of domain knowledge on the judgmental forecaster. Here we are specifically looking to see how the judgmental forecaster may use non-time series information to improve the forecast. Up to this stage in the review we have restricted ourselves to examining point forecasts. In the following section, we examine the research contribution aimed at investigating probabilistic or interval forecasts. Finally, we consider what research has revealed over the last 25 years about how the role of judgement in forecasting can be improved.
2. Judgmental (point) forecasting without domain knowledge
This section reviews judgmental point forecasting under the restriction that the judgmental forecaster has no domain knowledge. This is, in practice, a most unlikely situation, as a judgmental forecaster will almost always have some information about the value to be forecast in addition to the time series values. However, it does form a useful basis for comparison with statistical methods, as the two methods are restricted to the same data set and thus are on a "level playing field".
Hogarth and Makridakis (1981), in a major review, analysed over 175 papers concerned with forecasting and planning and concluded without any hesitation that "quantitative models outperform judgmental forecasts" (p. 126). Furthermore, judgement was characterised as being associated with systematic biases and large errors, a tendency to see patterns where none exist, an illusion of control even when the underlying process is purely random, and excessive, unfounded confidence in its own correctness.
Volume 22, Issue 3, 2006, Pages 493–518