
Analytics Magazine

Value Added Analysis: Business forecasting effectiveness

July/August 2011


Should accuracy be an organization’s biggest concern?

By Michael Gilliland

Forecasting is a significant consumer of management time, yet it’s also a significant waste of management time if those efforts are not making the forecast any better. A focus on accuracy alone ignores other important considerations such as forecast bias (is the forecast chronically too high or too low?) and forecast uncertainty (the range of likely outcomes). It also ignores the efficiency and effectiveness of the overall forecasting process.

Accuracy is largely determined by the nature of what is being forecast — its “forecastability.” Even costly and heroic efforts may not yield the level of accuracy management desires. Instead of squandering resources in pursuit of the perfect forecast, organizations should seek forecasts as accurate as can reasonably be expected (given the nature of what is being forecast) and to do this as efficiently as possible. Forecast value added (FVA) analysis, a tool in the lean approach to forecasting, can identify waste in the forecasting process, allowing organizations to achieve these objectives.

The Limits of Forecast Accuracy

Few would argue that perfect forecasts can be generated consistently. There is tacit recognition of the uncertainty of future outcomes, even when recurring patterns have held for long periods of time. Yet organizations do have expectations for forecast accuracy, and even set accuracy requirements in the job performance objectives of their forecasters. So what level of forecast accuracy can be consistently achieved?

Anything we attempt to forecast (product demand, interest rates, insurance claims, customer churn, calls to a service center, etc.) can be thought to have two components: 1. structured or systematic behavior, governed by a rule (which in theory we can understand and predict), and 2. random behavior (which we cannot predict).

Therefore, the best possible forecasts can be achieved when:

  • We understand the rule governing the systematic behavior.
  • We have a model that correctly expresses the rule.
  • The rule does not change over time.

Under these best of circumstances, we understand the mechanisms driving the systematic component of the behavior and can predict it perfectly. The only limit on forecast accuracy is the degree of randomness in the behavior. This can be illustrated by the tossing of fair coins.

Consider three processes to be forecast:

  • P10: The percentage of heads in the tossing of 10 fair coins;
  • P100: The percentage of heads in the tossing of 100 fair coins; and
  • P1000: The percentage of heads in the tossing of 1,000 fair coins.

Every day, the three processes will be executed. The coins will be tossed, and we have to predict the percentage of heads. What is our forecasted percentage of heads each day for each process? Can we forecast one process better than the others? What accuracy will we achieve? Are there any investments we can make (better software, bigger computer, more elaborate forecasting process, more skilled statistical analyst) to improve our accuracy?

This isn’t a trick question; the only rational forecast each day for each process is 50 percent heads. Figure 1 illustrates 100 daily trials of each of these processes. Since we are dealing with the independent tossing of fair coins, each process behaves according to the same underlying mechanism. Over a large number of trials, each process will average about 50 percent heads. We fully understand the nature of each process, and we realize it makes no sense to forecast anything other than 50 percent heads each day for each process. However, the variation in the percentage of heads in each process is vastly different, as is the accuracy of our forecasts.

Figure 1: One hundred trials of P10, P100 and P1000 [1].
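
To make the illustration concrete, the short Python sketch below (an illustrative addition, not part of the original article) simulates 100 daily trials of each process, forecasts 50 percent heads every day, and reports how far the outcomes stray from that forecast. The variation shrinks as the number of coins grows, exactly as Figure 1 suggests.

```python
import numpy as np

# Illustrative simulation of P10, P100 and P1000 (hypothetical code, not from the article).
rng = np.random.default_rng(2011)   # fixed seed so the run is reproducible
n_days = 100
forecast = 50.0                     # percent heads: the only rational forecast

for n_coins in (10, 100, 1000):
    # Toss n_coins fair coins each day and record the percentage of heads.
    heads_pct = rng.binomial(n_coins, 0.5, size=n_days) / n_coins * 100
    mae = np.mean(np.abs(heads_pct - forecast))   # mean absolute error, in percentage points
    print(f"P{n_coins}: average = {heads_pct.mean():.1f}%, "
          f"std dev = {heads_pct.std():.1f} pts, MAE vs. the 50% forecast = {mae:.1f} pts")
```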

When a lot of randomness exists in the behavior, we cannot expect to forecast it very accurately. Even when we know everything there is to know about the mechanism guiding the behavior, as we do here, the amount of randomness limits how accurate we can ever be. Also, in situations like these, any additional investment in the forecasting process would be a waste. There is nothing we could ever do to forecast P10 more accurately than P100, or P100 more accurately than P1000. The nature of each process, its systematic structure along with its random variability, determined the level of accuracy we were able to achieve.

What makes real life so difficult to forecast is that the underlying mechanisms guiding behavior may not be so apparent or may not even exist. Even if there is some structure to the historical pattern, it may not be obvious and can require good software or a skilled analyst to uncover it. But even then, even if we can discover and model the underlying rule guiding the behavior, there is no guarantee the rule won’t change over time. The fundamental difficulty of the forecaster’s job becomes apparent.

The coin-tossing contest illustrates that there are limits to the forecast accuracy we can achieve. We can’t assume that by applying more data, bigger computers and more sophisticated software, or by exhorting our forecasters to work harder, we can always achieve the level of accuracy desired. It is important to understand the limits of forecast accuracy, and to understand what level of accuracy is reasonable to expect for a given behavior.

The danger is that if you do not know what accuracy is reasonable to expect, you can reward inferior performance, or you can waste resources pursuing unrealistic or impossible accuracy objectives. You can also miss opportunities for alternative (non-forecasting) solutions to your business problems.

When Forecasting is a Waste of Time

Because forecast accuracy is limited by the nature of the behavior we are attempting to forecast, our expectations and pursuits must be more modest. Instead of pursuing a desired (and perhaps unachievable) level of accuracy, we must be content with the accuracy the behavior is so gracious to allow us, and do this efficiently.

A large amount of anecdotal evidence, as well as emerging academic research [2], leads to the conclusion that our forecasting efforts can be surprisingly ineffective. Our elaborate processes, with collaborative steps and executive touch points, may not result in the most accurate forecast or even an appropriate forecast. But this disheartening reality doesn’t mean that forecasting is pointless and irrelevant. It doesn’t mean that forecasting isn’t useful or necessary to run our organizations. And it doesn’t mean that managers should not care about their forecasting issues or seek ways to improve them. It simply means that:

The amount of time, money and human effort spent on forecasting is not commensurate with the amount of benefit achieved (the improvement in accuracy).

We spend far too many organizational resources creating our forecasts while almost invariably failing to achieve the level of accuracy desired. We should be focusing much less on modeling and forecast accuracy and much more on process efficiency and effectiveness. We must also consider alternative solutions to the business problems that we, out of habit, rely on forecasting alone to address.

Business forecasting is plagued by wrong perceptions and bad practices. One such bad perception is that the desired level of forecast accuracy is always possible. The reality is that smooth, stable repeating patterns can be forecast accurately with simple techniques and little effort. Wild, volatile and erratic patterns, however, may never be forecast accurately — no matter how elaborate the process and statistical sophistication we throw at the problem. In short:

We may never be able to control the accuracy achieved or achieve the level of accuracy desired. But we can control the forecasting process we use, and the resources we invest.

A second wrong perception is that the accuracy of our forecasts is proportional to the extent of our forecasting efforts. “If only,” management bemoans. “If only we had more data, a bigger computer, a more elaborate process and better forecasters (or made the ones we have work harder!), we could get better forecasts.” But this is a false belief, as was demonstrated in the coin tossing example.

Curiously, there is often an inverse relationship between the amount of management attention given to forecasting and the accuracy of the results. The more a forecast is touched, the more it may go awry. Each process step, each opportunity to adjust a forecast, is just one more chance for wishes and politics and personal agendas to contaminate what should be an unbiased best guess at what is really going to happen.

A third wrong perception is that improving forecast accuracy is the ultimate goal — that improved accuracy is the best way, and perhaps even the only way, to improve organizational performance. But this belief can focus management’s attention on the wrong problem:

The goal of your organization is not accurate forecasts — it is to make a profit and stay in business.

Forecast improvements are only a means to this end. Unfortunately, improvements may be impossible to deliver (when your demand is unforecastable), they may be too costly to implement (not worth the benefits), or they may even go unused, if management is unwilling to accept the reality of what a more accurate forecast is telling them. Focusing only on forecast accuracy improvements ignores other, non-forecasting approaches that may more effectively solve the underlying business problem.

Forecast Value-Added Analysis

Among the analytical techniques for addressing the business forecasting problem is forecast value added (FVA) analysis [3]. FVA utilizes simple statistical methods to determine how forecasting process steps and participants impact forecast accuracy. By identifying and eliminating those activities that are failing to improve the forecast (or even making it worse!), organizations can use FVA to reduce the time and resources spent on forecasting, streamlining the process, while simultaneously getting better forecasts.

FVA is a metric for evaluating the performance of each step and each participant in the forecasting process. By definition:

FVA = The change in a forecasting performance metric that can be attributed to a particular step or participant in the forecasting process.

FVA analysis begins with the assumption (null hypothesis) that the step or participant is having no effect on forecasting performance. By gathering data on accuracy (or bias or other performance metric) of the forecast before and after the process step, we determine whether the null hypothesis can be rejected, i.e., that the step does have an impact (either positive or negative) on forecasting performance.

When FVA is positive, the step or participant is adding value by making the forecast better. When FVA is negative, the step or participant is just making the forecast worse. By eliminating those steps and participants that are failing to improve the forecast, the process can be streamlined (using fewer organization resources), and better forecasts can be achieved (by eliminating those steps that were just making it worse).

The fundamental test in FVA analysis is to compare process performance versus using a naïve forecast. A naïve forecast is something simple and inexpensive to compute, such as a random walk (using the latest observed value) or a seasonal random walk (e.g., using the observed value from a year ago as the forecast for this year). If the organization’s forecasting process is failing to outperform a naïve forecast, this raises serious questions about the value that process is providing the organization.
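
As a concrete illustration, the minimal Python sketch below (an addition here, not from the article) computes the two naïve benchmarks just described from a simple demand history; the function names and demand numbers are hypothetical.

```python
import pandas as pd

def random_walk_forecast(history: pd.Series) -> float:
    """Naive forecast: next period equals the latest observed value."""
    return float(history.iloc[-1])

def seasonal_random_walk_forecast(history: pd.Series, season_length: int = 12) -> float:
    """Seasonal naive forecast: next period equals the value one season (e.g., a year) ago."""
    return float(history.iloc[-season_length])

# Hypothetical 24 months of demand history (made-up numbers)
demand = pd.Series([100 + m for m in range(24)])
print(random_walk_forecast(demand))            # 123.0 -- last month's value
print(seasonal_random_walk_forecast(demand))   # 112.0 -- same month one year ago
```

Either benchmark costs essentially nothing to produce, which is what makes it a fair baseline for judging the rest of the process.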

To see how FVA works, consider a very simple forecasting process (Figure 2). This process is to read demand history into a statistical forecasting model that generates a forecast, and then have an analyst review and (if necessary) override the statistical forecast.

Figure 2. Simple forecasting process.

FVA analysis compares the analyst override to the statistically generated forecast to determine if the override makes the forecast better. FVA analysis also compares both the statistical forecast and the analyst override to a naïve forecast, to determine whether the overall process is doing any good.

Figure 3 shows a sample FVA report. Here, the statistical model achieved accuracy of 75 percent, but after the analyst made manual adjustments to the statistical forecast, accuracy was only 70 percent. A naïve model achieved 65 percent accuracy. Given these results, we would conclude that the statistical model is adding value by improving accuracy 10 percentage points over the naïve model. Analyst overrides to the statistical forecast, however, are just making it worse (FVA of -5 percentage points). Although these numbers were made up for purposes of illustration, this is not an uncommon finding.

Figure 3. Sample FVA report.
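
A minimal sketch of how such an FVA comparison could be computed is shown below; the demand figures and the MAPE-based accuracy metric are assumptions chosen for illustration, since the article does not prescribe a particular accuracy metric.

```python
import numpy as np

def accuracy(actual, forecast):
    """Forecast accuracy measured here as 100% minus MAPE (floored at zero)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    mape = np.mean(np.abs(actual - forecast) / actual) * 100
    return max(0.0, 100.0 - mape)

# Hypothetical actuals and the three competing forecasts over four periods
actual   = np.array([100, 120,  90, 110])
naive    = np.array([ 80, 100, 120,  90])   # random walk: prior period's actual
stat     = np.array([ 95, 115, 100, 105])   # statistical model forecast
override = np.array([110, 130,  95, 120])   # analyst-adjusted forecast

acc_naive = accuracy(actual, naive)
acc_stat = accuracy(actual, stat)
acc_override = accuracy(actual, override)

print(f"Naive accuracy:       {acc_naive:.0f}%")
print(f"Statistical accuracy: {acc_stat:.0f}%  FVA vs. naive: {acc_stat - acc_naive:+.0f} pts")
print(f"Override accuracy:    {acc_override:.0f}%  FVA vs. statistical: {acc_override - acc_stat:+.0f} pts")
```

Any step whose FVA is consistently negative is a candidate for elimination.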

Supply chain consultant and thought leader Tom Wallace has stated that FVA is the lean manufacturing mind-set applied to forecasting. Many companies are now using FVA analysis, and several (including Cisco, Intel, AstraZeneca, Tempur-Pedic, Yokohama Tire Canada, Amway and Newell Rubbermaid) have spoken publicly of their efforts.

To help organizations run more effectively, it’s important that management consider new ideas and creative approaches, but never assume an approach works until its effectiveness has been demonstrated. Forecasting professionals (and those who rely on them) must be realistic about what forecasting can be expected to achieve and not squander precious resources in pursuit of the “perfect forecast.” FVA analysis is a way to determine the effectiveness of an organization’s forecasting efforts and streamline its forecasting process.

Michael Gilliland (mike.gilliland@sas.com) is product marketing manager for SAS forecasting software and author of “The Business Forecasting Deal: Exposing Myths, Eliminating Bad Practices, Providing Practical Solutions.” Gilliland has more than 20 years of forecasting experience in the food, apparel and consumer electronics industries. He holds master’s degrees in philosophy and mathematical sciences from Johns Hopkins University. You can follow his blog “The Business Forecasting Deal” at blogs.sas.com/forecasting.

References

  1. Portions of this article are adapted from Michael Gilliland, “The Business Forecasting Deal,” Wiley (2010).
  2. R. Fildes, P. Goodwin, M. Lawrence and K. Nikolopoulos, 2009, “Effective forecasting and judgmental adjustments: An empirical evaluation and strategies for improvement in supply-chain planning,” International Journal of Forecasting, Vol. 25, pp. 3-23.
  3. More information on FVA analysis is available in Gilliland, op. cit., pp. 81-109, or the SAS whitepaper “Forecast Value Added Analysis: Step-by-Step” (www.sas.com/reg/wp/corp/6216).

