

Analytics Magazine

Focus on Forecasting: Overcoming Obstacles

July/August 2010


Elevating organizational acceptance is key to achieving forecasting’s full potential.

By E. Andrew Boyd

Forecasting is one of the most powerful and widely used tools available to analytics professionals, and rightly so. It rests on a firm mathematical foundation. It’s taught in universities the world over. And it’s extremely useful. Whether forecasting the number of people expected to buy newspapers or the demand for computers, good forecasts lead to better decisions.

At the same time, forecasting is examined and impugned more than any other area of analytics – so much so that forecasts are routinely ignored or thrown out. In this article, we discuss some of the barriers to forecast acceptance within an organization and ways to overcome them.

What is a Good Forecast?

Forecasts receive scrutiny because they’re easy to understand. For purposes of discussion, consider the number of cars rented on a Monday morning by a car company at a major airport. The company has reservations information, but some people who have booked won’t show up. Others will walk up to the counter looking to rent at the last moment. Forecasts of the actual number of people who rent are invaluable for both profitability and customer satisfaction.

A basic forecast of next week’s demand can be generated by looking at Mondays in the past. Figure 1 shows both observed demand and forecasts made by averaging the observed demand from all prior weeks.
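This averaging method can be sketched in a few lines of code. The demand numbers below are hypothetical, chosen only to illustrate the calculation, and are not the article’s data:

```python
# A minimal sketch of the averaging method described above; the demand
# numbers are hypothetical, not the article's data.
def average_forecasts(demand):
    """Return one-week-ahead forecasts: forecasts[i] predicts demand[i+1],
    computed as the average of all observations through week i."""
    forecasts = []
    total = 0.0
    for i, observed in enumerate(demand):
        if i > 0:
            forecasts.append(total / i)  # average of all prior weeks
        total += observed
    return forecasts

# Hypothetical Monday rental counts
demand = [112, 98, 105, 121, 101, 95, 118]
print(average_forecasts(demand))
```

Because each forecast averages over everything seen so far, the resulting series changes slowly from week to week, which is exactly why such forecasts often draw the “too flat” objection discussed below.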

Figure 1: Observations and forecasts for the number of car rentals at an airport location on Mondays. Forecasts are made one week in advance and represent the average of observed demand in all prior weeks.


Are these forecasts good or bad? Some common responses might include:

  • They look too flat. Something is being missed.
  • They aren’t close enough to what actually occurred.
  • They look pretty good.
  • The forecasting methodology is too simple. Certainly something more sophisticated can be employed to generate better forecasts.

All are valid responses. Individuals have their own ideas about “good” that affect their evaluation of forecasts. Differing perspectives are natural and can be a healthy part of the forecasting process, but they must be treated as a fundamental issue rather than dismissed as a nuisance. Individuals who create forecasts must be willing to devote as much effort to two-way communication about their forecasts as they do to creating them. A consensus must be reached that forecasts are good – or at least good enough – if they are to be used.

Requirements for Good Forecasts

While “good” is largely subjective, some forecasting methods are clearly better than others. An average of past observations is better than a forecast based on phases of the moon, but is an average better than exponential smoothing? When predicting the sale of umbrellas on a street corner, a forecast that incorporates the likelihood of rain is better than one that doesn’t. But are there other predictors that can further improve forecasts? Good forecasts must meet certain basic criteria, most notably, the choice of reasonable predictors and a reasonable forecasting method. But these basic criteria aren’t by themselves enough to ensure forecasts will be perceived as good.
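Exponential smoothing, mentioned above as an alternative to a plain average, can itself be sketched briefly. The smoothing parameter alpha and the initialization below are illustrative choices, not prescriptions:

```python
# A minimal sketch of simple exponential smoothing, one alternative to a
# plain average; alpha and the initialization here are illustrative choices.
def exponential_smoothing(demand, alpha=0.3):
    """Return one-step-ahead forecasts: forecasts[i] predicts demand[i+1]."""
    level = float(demand[0])      # initialize the level with the first observation
    forecasts = []
    for observed in demand[1:]:
        forecasts.append(level)   # forecast made before seeing this week's demand
        level = alpha * observed + (1 - alpha) * level
    return forecasts
```

Larger values of alpha weight recent observations more heavily, so the forecasts react faster but also pass more noise through; alpha near zero behaves much like a long-run average.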

Measuring forecast error is also vital to establishing whether a forecasting method is good – at least in comparison to alternatives. Error measurement is especially useful for eliminating poor forecasts that may on the surface appear good. Consider, for example, the forecasts shown in Figure 2. They clearly overcome the objection of looking too flat, and they visually appear to track the actual observations quite well, but their mean absolute deviation is 40 percent higher than that of the forecasts in Figure 1. The method behind Figure 2 simply uses this week’s observation as next week’s forecast.

Figure 2: Observations and forecasts for the number of car rentals at an airport location on Mondays. Forecasts are made one week in advance, using the present week’s demand as the following week’s forecast.


Useful as error measurement is when comparing forecasting methods, it can’t resolve the fundamental question of whether forecasts are perceived as good. The forecasts in Figure 1 show less error than those in Figure 2, but are they good?
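A comparison of the two methods by mean absolute deviation can be sketched as follows. The demand series is made up for illustration, so the resulting numbers differ from the article’s, but the pattern is the same: the naive method visually tracks the observations yet scores worse.

```python
# An illustrative MAD comparison of the two methods; the demand series is
# made up, so the numbers differ from the article's 40 percent figure.
def mad(forecasts, actuals):
    """Mean absolute deviation between forecasts and realized demand."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

demand = [112, 98, 105, 121, 101, 95, 118, 104]

# Figure 1 style: average of all prior weeks
averaging = [sum(demand[:i]) / i for i in range(1, len(demand))]
# Figure 2 style: this week's observation as next week's forecast
naive = demand[:-1]

actuals = demand[1:]
print(mad(averaging, actuals), mad(naive, actuals))
```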

Barriers to Forecast Acceptance

Disagreements about a best forecasting method can create barriers to success, though many of these barriers can be overcome by recognizing there’s no absolute right in forecasting. If multiple forecasting methods are considered good, and the question is which is best, then a forecasting initiative is well on its way to success.

A more significant obstacle is when the best forecasting efforts fail to generate any forecasts that are perceived as good. There are times when forecasts are truly inadequate. However, frequently the forecasts are quite adequate for the task at hand, but decision-makers won’t accept them because of preconceptions about what good forecasts should look like. Through extensive data collection and experimentation, a forecaster may determine that a simple average provides the lowest overall forecasting error. On the other hand, if a decision-maker thinks the forecasts are too flat to be indicative of actual demand, he may refuse to use them.

Further complicating the situation is that no matter how much time a forecaster may spend experimenting, there are always more predictors and more forecasting methods to be evaluated. This is often coupled with an underlying belief that if the right predictors can just be found and woven together with the appropriate forecasting methodology, better (good) forecasts will emerge. Thus, extensive time, effort and money may be devoted to a task with little hope of achieving significant improvement. The number of people who show up to rent cars at an airport is, in the end, a random process. If variability is high enough, even the best efforts can reduce forecast error only so much.

Another common barrier to deeming forecasts good is that as forecasting grows more sophisticated, it also becomes more difficult to determine exactly why a particular forecast is what it is. Consider, for example, the forecasts shown in Figure 3. Looking at this figure, most individuals would agree that the forecast in week 15 is bad – that something must be wrong with the forecasting method. In actuality, the forecast includes the fact that a large convention is scheduled to be in town.

Figure 3: Forecasts for the number of car rentals at an airport location on Mondays.


A convention is straightforward to understand, and most decision-makers would react by saying that the forecast wasn’t as bad as they initially thought. But what if the forecast was the result of the interaction of a number of predictors through a complex mathematical formula? One of the simplest ways to evaluate forecasts is to view them in a format similar to Figure 3. Because it’s so simple, it’s frequently used by decision-makers to evaluate forecasts even though it can be quite misleading. When the formulas grow complicated enough, there’s rarely a picture that can adequately explain exactly what’s going on.

The anecdotal “failure” of a handful of poor forecasts also has the potential to call an entire forecasting methodology into question. If millions of forecasts are being generated, as is often the case in operational forecasting systems, it’s likely that some forecasts will look bad due to uncommon data conditions – quite possibly errors in the input data. A forecast of one rental for a Monday at La Guardia would rightly raise eyebrows. While such forecasts require investigation, they shouldn’t necessarily cast doubt on the remaining forecasts, though they often do.

Overcoming Barriers to Forecast Acceptance

With so many potential barriers, getting an organization to accept forecasts as good is challenging. However, there are a number of guidelines all forecasting initiatives can benefit from.

• Always be aware of the goal. Forecasting is undertaken to improve some aspect of running a business. If no forecasting methodology is in place (someone is pulling numbers out of the air), then virtually any systematic effort based on numbers will represent an improvement. If a forecasting methodology is already being used, then a baseline for a forecasting initiative is to improve upon what exists. In either case, these are very different goals than generating forecasts that achieve some Platonic ideal of good. Is a flat forecast not good enough when the alternative is no forecast at all? Forecasting initiatives far too often lose sight of the goal of improvement.

• Don’t get lost in the search for improved forecasts that don’t exist. Improving forecasts is a laudable goal and worth pursuing whenever there’s evidence that additional effort will indeed yield better forecasts, but it’s counterproductive to seek improvement when it doesn’t appear improvement is possible. Switching to an entirely different paradigm (from regression to neural nets, for example) rarely if ever works. More often than not, the underlying random process simply won’t allow for the desired improvement. Accept the state of affairs, deal with the business problem and move on.

• Recognize that forecasting isn’t always the answer. Forecasting is sometimes used as a band-aid for trying to solve more fundamental business problems. Consider the monthly demand for container space experienced by a container shipping company at one particular port, as shown in Figure 4. If the company could forecast demand with sufficient accuracy, it could position and schedule its ships profitably. But the demand is lumpy and erratic, and forecasting can’t solve the problem. Trying to forecast a way out of the predicament would lead to failure.

Figure 4: Observed monthly demand by a container shipping company over a two-year period.


Alternatively, the company could examine its business processes. For example, if the company booked space by simply waiting for its customers to call a few days in advance, it could consider working with its largest customers to improve the flow of communication so that large shipments weren’t a surprise. Or the company could consider selling blocks of space well in advance for a reduced rate. Similar approaches address the fundamental business problem, which is not a forecasting problem and can’t be solved with forecasting.

In addition to general guidelines, there are techniques experienced forecasters can use to improve the chances of a successful forecasting initiative.

• Educate. Preconceptions change with experience. Introducing individuals without forecasting experience to an array of examples tailored for educational purposes helps them better understand what can and can’t be achieved with forecasting.

• Don’t set unrealistic expectations. It’s good to set out with fervor in an effort to generate the best possible forecasts. Exuberance is a good thing. But it can raise expectations among others to a point that those expectations can’t be met. It’s best to convey a sense of benign diligence when a forecasting initiative begins and to share exciting results if and when they become available. Frequently, in an effort to gain support for an initiative, sponsors may set very high, even unrealistic, expectations. In such cases, it’s necessary to go back and carefully reset those expectations once actual work begins.

• Consider simpler forecasting methods. Sophisticated forecasting methods offer potential forecast improvements but are often difficult to understand without an advanced degree in statistics. Yet, many good methods exist that are comprehensible to a much broader audience and often yield forecasts of comparable quality to their sophisticated cousins. Forecasts are more likely to gain acceptance when there’s some understanding of where they’ve come from. Also, sophisticated methods are often employed with the expectation of big improvements over simpler methods. If actual improvements don’t meet expectations, a new barrier to success is constructed.

Forecasting is a powerful tool for improving business operations. To take advantage of this tool, forecasts must be embraced by those who use them. By recognizing that individuals have their own ideas about what makes forecasts good, and elevating the process of organizational acceptance to its rightful place, forecasting can fully realize its potential.

Andrew Boyd served as executive and chief scientist at an analytics firm for many years. He can be reached at


