Analytics Magazine

Focus on Forecasting: Worst Practices in Business Forecasting

July/August 2010

By Michael Gilliland and Udo Sglavo

Our forecasts never seem to be as accurate as we would like them to be – or need them to be. But why are forecasts wrong, and sometimes so terribly wrong? Forecasts fail to deliver the level of accuracy desired for at least four reasons:

  • Unsound software that lacks necessary capabilities, contains mathematical errors or facilitates inappropriate methods.
  • Untrained, unskilled, inexperienced or unmotivated forecasters exhibiting behaviors that fail to improve the forecast or even make it worse.
  • Political contamination of the forecasting process driven by the whims and personal agendas of process participants who use the forecast to represent what they want to occur, rather than as an “unbiased best guess” of what is really going to occur.
  • Unforecastable behavior – the nature of what is being forecast (e.g., customer demand) is such that it cannot be forecast to the degree of accuracy desired.

There may be a strong temptation for management to just throw money at the forecasting problem in hopes of making it go away. But how many organizations are you aware of – perhaps even your own – that have thrown thousands or even millions of dollars at the forecasting problem, only to end up with the same old lousy forecasts?

No software, no matter how powerful, and no analyst, no matter how talented, can guarantee perfect (or even highly accurate) forecasts. The objective should be to deliver forecasts as accurate as can reasonably be expected given the nature of what is being forecast. For example, if we are asked to forecast heads or tails in the toss of a fair coin, we will be correct about 50 percent of the time over a large number of trials. It doesn’t matter that our boss may want us to achieve 60 percent accuracy or that our efforts are funded with millions of dollars in new computers and software. Forecast accuracy is ultimately limited by the nature of the behavior being forecast. Investment in forecasting process, systems and people will pay off up to this limit of accuracy but can take you no further. Unfortunately, most organizations fail to achieve this limit because of “worst practices” that confound their forecasting efforts. This article identifies several common worst practices in business forecasting.

1. Overly complex and politicized forecasting process

The forecasting process can be degraded in various places by the biases and personal agendas of participants. The more elaborate the process, with more human touch points, the more opportunity exists for these biases to contaminate what should be an objective and scientific process. Can you believe anyone when it comes to forecasting? Perhaps not.

Those who have some input or authority in the forecasting process can use this influence to their own benefit. If you are a sales rep and it is quota-setting time, aren’t you going to try to lower expectations and get easier-to-beat quotas? If you are a product manager with a new product idea, aren’t you going to forecast at least high enough to meet the minimum hurdles for new product development approval? (No one is going to forecast their new product idea to fail in the marketplace, even though that is the most likely outcome.) Just about every participant has a special interest of some sort, and these must be accounted for.

Elaborate and overly complex forecasting processes may also be a result of poor use of organizational resources. Does each step and each participant in your process actually make the forecast better? Could these participants be reassigned to more worthwhile activities in the organization? Consider using a method called Forecast Value Added (FVA) analysis [1] that streamlines the process by identifying process waste and inefficiencies (activities that are not making the forecast any better).
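
To make the idea concrete, here is a minimal sketch of an FVA-style comparison in Python, assuming a simple three-step process (statistical forecast, analyst override, consensus forecast) and made-up numbers; it illustrates the concept only, not the full method described in the white paper [1].

```python
# Minimal FVA-style sketch: compare the error of each process step with the
# step before it. Process steps and all numbers are hypothetical.
import numpy as np

actuals = np.array([100, 120, 90, 110, 105, 130])

process_steps = {
    "statistical forecast": np.array([ 98, 115, 95, 108, 110, 125]),
    "analyst override":     np.array([105, 125, 85, 115, 115, 140]),
    "consensus forecast":   np.array([110, 130, 80, 120, 120, 150]),
}

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs(actual - forecast) / actual)

previous_error = None
for step, forecast in process_steps.items():
    error = mape(actuals, forecast)
    if previous_error is None:
        print(f"{step}: MAPE = {error:.1f}%")
    else:
        fva = previous_error - error   # positive = the step added value
        print(f"{step}: MAPE = {error:.1f}% (value added vs. prior step: {fva:+.1f} pts)")
    previous_error = error
```

A step whose error is no lower than that of the step before it is a candidate for elimination.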

2. Selecting model solely on “fit to history”

One practice common in unsound software, or in the misuse of good software, is choosing the forecasting model based solely on the model’s “fit to history.” The software provides (or the forecaster builds) several models, and these are each evaluated compared to recent history. The model that most closely matches the recent history is then chosen and used for creating forecasts of the future.

Remember that our objective isn’t just to fit a model to history – it is to find an appropriate model for forecasting future behavior. Fitting a model to history is easy; anyone can do it, and it is always possible to find a model with a perfect fit. However, a perfect fit to history is no guarantee that the model will generate good forecasts or is at all appropriate for forecasting. “Over-fitting” models to randomness in the behavior, rather than to its systematic structure, is a common consequence of focusing attention solely on fit to history.
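
The over-fitting trap is easy to reproduce. The sketch below, using made-up data, forces a ninth-degree polynomial through twelve periods of noisy history: the fit is nearly perfect, but the extrapolated “forecasts” are useless.

```python
# Illustrative only: a high-order polynomial "fits" a short noisy history
# almost perfectly, yet extrapolates (forecasts) very poorly.
import numpy as np

rng = np.random.default_rng(42)
t_hist = np.arange(12)                                # 12 periods of history
history = 100 + 2 * t_hist + rng.normal(0, 5, 12)     # mild trend plus noise

coeffs = np.polyfit(t_hist, history, deg=9)           # deliberately over-fit
model = np.poly1d(coeffs)

fit_error = np.mean(np.abs(history - model(t_hist)))
print(f"mean absolute fit error on history: {fit_error:.2f}")   # tiny

t_future = np.arange(12, 18)                          # next 6 periods
print("extrapolated 'forecasts':", np.round(model(t_future), 1))  # blows up
```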

3. Assuming model fit = forecast accuracy

How accurate can we expect a forecasting model to be? Inexperienced forecasters, and those outside of forecasting, may assume that a model’s fit to history indicates how accurately the model will forecast the future – that if the error of the historical fit is 20 percent, the error of the future forecasts will also be about 20 percent. This is a very bad assumption. One of the dirty tricks of software vendors is to show you only how well they can fit their models to your history, but never how well they really forecast.

For a number of reasons, forecast accuracy will almost always be worse, and often much worse, than the fit of the model to history. For example, we may have chosen an inappropriate model – one that happens to fit the history but does not capture the underlying mechanisms that drive the behavior. Or we may have specified a model that does correctly express the behavior, but the behavior then changes in the future. Whenever you are reviewing software to purchase, or even just reviewing the performance of the software you already have, make sure to focus on the accuracy of the future forecasts, not on the accuracy of the fit to history.
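
A simple safeguard is a holdout test: withhold the most recent periods, fit the model only on what came before, and compare the error of the fit with the error of the true out-of-sample forecasts. The sketch below illustrates the pattern with a hypothetical series whose behavior shifts after the fitting period, the situation described above.

```python
# Holdout test sketch: fit error on history vs. forecast error on withheld
# periods. Data are hypothetical; the "behavior change" in the future periods
# mimics the situation described in the text.
import numpy as np

history = np.array([100, 103, 107, 109, 113, 116, 118, 122, 125, 127], float)
future  = np.array([120, 116, 113, 110], float)   # demand pattern changes

def mape(actual, forecast):
    return 100 * np.mean(np.abs(actual - forecast) / actual)

t_hist = np.arange(len(history))
slope, intercept = np.polyfit(t_hist, history, deg=1)   # fit a linear trend

fit = intercept + slope * t_hist
t_future = np.arange(len(history), len(history) + len(future))
forecast = intercept + slope * t_future

print(f"fit to history : MAPE {mape(history, fit):.1f}%")      # small
print(f"future forecast: MAPE {mape(future, forecast):.1f}%")  # much larger
```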

4. Inappropriate accuracy expectations

Forecast accuracy is ultimately limited by the nature of the behavior we are trying to forecast. If the behavior exhibits smooth, stable, repeating patterns, then we should be able to forecast it quite accurately with simple methods. If the behavior is wild and erratic with no structure or stability, then we have no hope of forecasting it well, no matter how much time and money and resources we invest trying to do so. The most sophisticated methods in the world aren’t going to let us forecast unforecastable behavior, and we have to learn to live with that reality.

The worst practice is having inappropriate expectations for forecast accuracy and wasting resources trying to pursue unachievable levels of accuracy.

A better practice is to utilize what is called a naïve forecasting model. A naïve model is something simple and easy to compute, like a random walk (using the last known value as your future forecast), a seasonal random walk (such as using the known value from a year ago as your forecast for the same period this year), or a moving average of observations with a small sliding window (such as the average value of the last three periods).
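
A minimal sketch of these three naïve models, assuming monthly data with a 12-period season and a purely illustrative demand series, might look like this:

```python
# The three naive models described above, on an illustrative monthly series.
import numpy as np

demand = np.array([120, 115, 130, 125, 140, 150, 145, 135, 128, 132, 138, 160,  # year 1
                   125, 118, 134, 129, 143, 155, 149, 138, 131, 136, 141, 165]) # year 2

season = 12      # periods per seasonal cycle
window = 3       # moving-average window

random_walk = demand[-1]                    # last known value
seasonal_random_walk = demand[-season]      # same period, one year ago
moving_average = demand[-window:].mean()    # average of the last 3 periods

print(f"random walk forecast          : {random_walk}")
print(f"seasonal random walk forecast : {seasonal_random_walk}")
print(f"moving average forecast       : {moving_average:.1f}")
```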

You can think of the naïve model as being something free – you don’t need expensive systems or an elaborate forecasting process – you don’t need anything at all. A naïve model will achieve some level of forecast accuracy, say 60 percent. This 60 percent accuracy then becomes the baseline against which all your forecasting efforts are evaluated. If your process cannot do any better than a naïve model, why bother?

5. Inappropriate performance objectives

Failing to understand what forecast accuracy is reasonable to achieve with your demand patterns can lead to setting inappropriate performance objectives. As mentioned above, you cannot call the toss of a fair coin correctly more than about 50 percent of the time over the long run, so it makes no sense to give you a goal of achieving 60 percent accuracy. The same applies to forecasting demand. While management may want to achieve 90 percent accuracy, the nature of the demand patterns may be such that 90 percent is not achievable.

Goals are often assigned based on what accuracy management feels it “needs.” For example, it isn’t uncommon to have blanket goals such as “accuracy > 80 percent” across all products, with no consideration for whether this is reasonable.

Goals are sometimes based on industry benchmarks that purport to identify “best in class” forecasting performance. However, industry benchmarks are subject to a number of perils and should not be used to set forecasting objectives for your organization.

6. Perils of industry benchmarks

Benchmarks of forecasting performance are available from several sources, including professional organizations and journals, academic research, and private consulting/benchmarking organizations. But there are many reasons why industry benchmarks are irrelevant in setting your own forecasting performance objectives.

1) Can you trust the data? Are the numbers based on rigorous audits of company data, or on responses to a survey? If they are un-audited survey responses, do the respondents actually know the answers, or are they just making them up?

2) Is measurement consistent across the respondents? Are all organizations forecasting at the same level, such as by product, customer or region? Are they forecasting in the same time bucket, such as week or month? Are they forecasting at the same lead-time offset, such as three weeks in advance, or three months? And are they using the same metric? Even metrics as similar sounding as MAPE (mean absolute percentage error), weighted MAPE and symmetric MAPE can deliver very different values for the same data.
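
To see how far apart these similar-sounding metrics can land, the sketch below computes plain MAPE, volume-weighted MAPE and symmetric MAPE on the same four made-up forecast/actual pairs (the sMAPE shown uses the common (|A| + |F|)/2 denominator; other variants exist).

```python
# Same forecasts and actuals, three similar-sounding metrics, three different answers.
# Values are hypothetical; sMAPE here uses the (|A| + |F|)/2 denominator variant.
import numpy as np

actual   = np.array([100.0, 20.0, 500.0, 50.0])
forecast = np.array([ 80.0, 35.0, 450.0, 70.0])

abs_err = np.abs(actual - forecast)

mape  = 100 * np.mean(abs_err / actual)                 # simple average of % errors
wmape = 100 * abs_err.sum() / actual.sum()              # weighted by volume
smape = 100 * np.mean(abs_err / ((np.abs(actual) + np.abs(forecast)) / 2))

print(f"MAPE  = {mape:.1f}%")    # ~36%
print(f"wMAPE = {wmape:.1f}%")   # ~16%
print(f"sMAPE = {smape:.1f}%")   # ~30%
```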

3) Finally and most important, is the comparison even relevant? Does the benchmark company have equally forecastable data?

Let’s consider a worst-case example:

Suppose a benchmark study shows that Company X has the lowest forecast error. Consultants and academics then converge on Company X to study its forecasting process and publish reports touting Company X’s “best practices.” Other companies read these reports and begin to copy Company X’s “best practices.” However, upon further review, FVA analysis is applied, and we discover that Company X had very easy-to-forecast demand, and that they would have had even lower forecast error had they just used a moving average. In other words, Company X’s so-called “best practices” just made the forecast worse!

This example is not far-fetched, and many organizational practices, even purported best practices, may only be making the forecast worse.

Industry benchmarks for forecasting performance should be ignored. Benchmarks tell us what accuracy the so-called “best in class” companies are able to achieve, but they do not tell us how forecastable their demand is. Companies at the top of the benchmark lists may be there simply because they have the easiest-to-forecast demand – not because their forecasting processes are worthy of admiration. Without information on forecastability, industry benchmarks are irrelevant and should not be used to set performance objectives.

Also, objectives should not be set arbitrarily, based on management’s wants or needs. It makes no sense to set arbitrary, blanket objectives such as “forecast accuracy must be > 80 percent” without any consideration of the forecastability of the demand. If the objective is set too high it will demoralize the forecasters and encourage them to cheat. If the objective is set too low, such that a naïve forecast could beat it, then the forecasters can simply idle at their desks all year and still make the goal.

The better practice is to tie the forecasting performance objective to the underlying forecastability of the demand patterns, and the way to do this is to use a naïve forecast as the baseline. Perhaps the only reasonable objective is to beat the naïve model (or at least do no worse!) and to continuously improve the forecasting process. You improve the process not only by making the forecasts more accurate and less biased, but by making the process more efficient – using fewer and fewer resources and automating as much as possible. This is where good automated forecasting software can be very effective.

7. Adding variation to demand

The forecastability of demand is largely dependent on the volatility of that demand. When demand is smooth and stable, it can be forecast accurately with simple methods. When demand is erratic and random, it is unreasonable to expect accurate forecasts.

The scatter plot in Figure 1 compares forecast accuracy (from 0 percent to 100 percent on the vertical axis) to the volatility of the sales pattern (as measured by the coefficient of variation) along the horizontal axis. It is based on one year of weekly forecasts for 5,000 stock-keeping units (SKUs) at a consumer goods company. For SKUs with greater volatility (moving to the right in the plot), forecast accuracy tended to decrease.
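
The volatility measure on the horizontal axis, the coefficient of variation, is simply the standard deviation of historical demand divided by its mean. A quick sketch with two illustrative SKUs:

```python
# Coefficient of variation (CV) = standard deviation / mean of historical demand.
# The two weekly demand series below are purely illustrative.
import numpy as np

smooth_sku  = np.array([100, 104, 98, 102, 101, 99, 103, 100])   # stable demand
erratic_sku = np.array([ 20, 180,  5, 240,  60, 10, 300,  15])   # lumpy demand

def coefficient_of_variation(series):
    return series.std() / series.mean()

print(f"smooth SKU : CV = {coefficient_of_variation(smooth_sku):.2f}")   # ~0.02
print(f"erratic SKU: CV = {coefficient_of_variation(erratic_sku):.2f}")  # > 1
```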

Figure 1. Forecast accuracy versus volatility.

Volatility analysis suggests that the more we can reduce the volatility of demand for our products, the easier they should be to forecast. Unfortunately, the worst practice here is that most organizational policies and practices are designed to add volatility to demand rather than to make it more stable.

Everyone is familiar with the quarter end push, or “hockey stick” – where companies do everything possible at the end of the quarter to make the short-term sales target. Figure 2 shows shipments from a consumer goods manufacturer to its customers, the retail stores. Shipments are shown by the thin line, and you can see the big spikes at quarter end and the big drop off at the start of every new quarter.

Figure 2: Shipments versus consumption.

The thicker line shows consumer purchases from the retail store. Consumption is quite stable, and just using the mean would have provided a fairly accurate forecast.

The variation of the shipment pattern is three times the variation of the retail sales pattern. These highly erratic and hockey stick patterns are encouraged by our financial practices, such as hitting the quarter end revenue targets, and by our sales and promotional practices, such as cutting prices or offering other incentives that spike the demand. In many industries, customers have been trained to wait for quarter end to get the best deals.

Instead of policies that encourage volatile demand from your customers, a better practice (at least to improve forecasting) is to remove those kinds of incentives, or create incentives that encourage smooth and stable demand. In addition to being able to forecast smooth demand more accurately, smooth demand should be easier and cheaper to service, so you can reduce costs. Organizations can apply these same sorts of analyses with their own data and obtain visibility into the issues that may be created by their existing policies and practices [2].

8. New product forecasting

New product forecasting (NPF) is inherently difficult, and usually inaccurate.

A worst practice is making business decisions based on the assumption that new product forecasts are going to be accurate – because they probably won’t be!

Since there is no historical demand data for the new product, the forecast is largely based on judgment. Often the forecast is provided by the product manager or general manager who is advocating introduction of the product, and almost assuredly the forecast will be high enough to exceed internal hurdles to get new products approved for development. When justification for the forecast is required, a common method is to refer to past products, sometimes called “like items,” that are similar to the new product. This is known as forecasting by analogy. While this approach is legitimate, it is subject to selection bias – of only utilizing prior products that were successful. Since most new products fail in the marketplace, basing a forecast only on successful product introductions creates an unjustifiably optimistic perception.

While there are dozens of methods available purporting to improve new product forecasting accuracy, the most important thing is being aware of the uncertainties and likely range of outcomes. Too much confidence in the accuracy of your new product forecast can lead to dangerously risky business decisions, and that is what we want to avoid. A structured analogy approach can be useful in many NPF situations [3]. It augments human judgment by automating historical data handling and extraction, incorporating statistical analysis and providing visualization of the range of historical outcomes. The software makes it possible to quickly extract candidate products based on the user-specified attribute criteria. It aligns, scales and clusters the historical patterns automatically, making it easier to visualize the behavior of past new products. This visualization helps the forecaster realize the risks, uncertainties and variability in new product behavior.
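
As a rough sketch of the underlying idea (not of the SAS offering itself), the code below aligns a few hypothetical analog launch histories to a common months-since-launch axis, indexes each to its launch-month volume and summarizes the spread of outcomes with percentiles.

```python
# Structured-analogy sketch: align analog launch histories, scale them to a
# common index and show the range of outcomes. All launch curves are hypothetical.
import numpy as np

# Monthly unit sales for four past "like items", months 1-6 after launch
analogs = np.array([
    [500, 650, 700, 620, 560, 500],
    [300, 280, 220, 180, 150, 120],   # a launch that faded quickly
    [450, 600, 800, 900, 950, 980],
    [400, 380, 360, 350, 340, 330],
])

scaled = analogs / analogs[:, [0]]    # index each curve to launch month = 1.0

low, mid, high = np.percentile(scaled, [10, 50, 90], axis=0)
print("month :", list(range(1, 7)))
print("p10   :", np.round(low, 2))    # pessimistic path
print("median:", np.round(mid, 2))
print("p90   :", np.round(high, 2))   # optimistic path
```

The spread between the low and high paths, rather than any single curve, is what the forecaster and the organization should be looking at.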

Expectations for the accuracy of new product forecasts should be modest, and acknowledgement of this uncertainty should be at the forefront. The structured analogy approach allows the organization to both statistically and visually assess the likely range of new product demand so that it can manage accordingly. Rather than lock in elaborate sales and supply plans based on a point forecast that is likely to be wrong, the organization can use the structured analogy process to assess alternative demand scenarios and mitigate risk.

Judgment is always going to be a big part of new product forecasting – as of today a computer will not be able to tell us what is going to be the hot new fashion color. But judgment needs assistance to keep it on track and as objective as possible. While the structured analogy approach can be used to generate new product forecasts, it is also of great value in assessing the reasonableness of forecasts that are provided from elsewhere in the organization. The role of structured analogy software is to do the heavy computational work and provide guidance – making the NPF process as automated, efficient and objective as possible.

Summary

This article has identified several common worst practices in business forecasting. By identifying and eliminating these practices through methods such as FVA analysis, organizations can stop making the forecast worse and can achieve the level of forecast accuracy that is reasonable to expect given the nature of their demand patterns.

Michael Gilliland (mike.gilliland@sas.com) is a product marketing manager and Udo Sglavo (udo.sglavo@sas.com) is a global analytic solutions manager at SAS Institute Inc., a leading business analytics software and services company based in Cary, N.C.

Notes

1. See the SAS white paper “Forecast Value Added Analysis: Step-by-Step.”
2. For a more thorough discussion of volatility analysis, worst practices and FVA analysis, see Michael Gilliland, “The Business Forecasting Deal,” John Wiley & Sons, 2010.
3. See the SAS white paper “New Product Forecasting Using Structured Analogies” for more information on the service offering.
