
Analytics Magazine

Executive Edge: The interactive nature of analytics

March/April 2011


By Colin Kessinger

A few months ago, on a professional networking discussion group, someone asked about the best tools for predictive analytics. There were the predictable responses from vendors pretending not to be vendors. There were a few thoughtful but long-winded responses (like my own) suggesting that you should only buy an analytical tool to improve the cost, speed or reliability of solving a problem you already know how to solve, but never to solve a problem you do not yet know how to solve. And there was one response that absolutely nailed it. The “winning” response? “Your brain.”

Despite the billions of dollars spent on analytical tools for problems such as forecasting, inventory optimization and supply chain network design, there is still no “killer app” for supply chain analytics. Why not? Often the issue is not with the product itself but with the expectations set for the product. These products are often sold on the premise that a user with limited understanding of analytics can be trained to use an analytical solution in a short period of time. The problem, of course, is that the solution is being sold as a way around the best tool of all: the human brain.

If indeed your brain is the best available analytical tool, then solving analytical business problems must be an interactive exercise. And in many situations, it is not only an interactive exercise, but one that requires a lot of creativity and artistry. The artistry may lie in creating the right abstraction of reality (such as discretizing a continuous state space), in dealing with inadequate sample sizes (e.g., insufficient history to estimate seasonal parameters on a part-by-part basis), or simply in presenting and explaining the output (e.g., why a small change in one input produces a dramatically different result) so that the decision-maker will actually trust the model. Basic time series forecasting models are perhaps the best example of this issue. The formulas are trivial, but without the artistry of an expert they perform fairly poorly. And without the proper metrics, validation and explanation, decision-makers rarely trust them.
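To make the point concrete, here is a minimal sketch (in Python, with invented data) of simple exponential smoothing, one of the most basic time series methods. The formula really is a one-liner; the artistry the article describes is everything the formula does not tell you: choosing the smoothing parameter, initializing the level, and judging whether the method fits the data at all.

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecasts: f[t+1] = alpha * y[t] + (1 - alpha) * f[t].

    `alpha` in (0, 1] controls how quickly old observations are forgotten;
    picking it well (and initializing the level sensibly) is the hard part.
    """
    level = series[0]          # naive initialization: first observation
    forecasts = [level]        # forecast for period 1
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    return forecasts

# Illustrative only: with alpha = 1 the "model" collapses to naive
# last-value forecasting; with small alpha it barely reacts to change.
reactive = simple_exponential_smoothing([5, 7, 9], alpha=1.0)
sluggish = simple_exponential_smoothing([5, 7, 9], alpha=0.1)
```

The two calls give very different forecasts from identical data, which is exactly why a trained-in-a-week user armed with trivial formulas still needs an expert's judgment.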

For example, during one of our forecasting projects, the project sponsor mentioned that he had a team of former nuclear physicists working on a complex forecasting model, but that they had run into difficulties. The problem, apparently, was not that the algorithm was performing poorly. Rather, the problem was that no one could understand how the forecast was created. A senior executive, having just spoken with one of the company’s largest customers, asked the forecasting team whether the new model already accounted for a major promotion that the large customer was about to run. The model was so complex that no one knew the answer. This, and a few other questions, raised enough doubts that the executives always deferred to their intuition when the model suggested something different. In summary, the decision-maker could not interact with the model, and therefore it was not adopted.

So what’s the point? The point is that most analytical efforts would benefit mightily from a better understanding of the necessary interaction model. Too many solutions are built on the assumption that the user is either a moron or an engineer, but in most cases they are neither. The moron’s solution is overly simplified and never provides enough access to the mechanics to enable real interaction (why give a monkey a sharp knife?); the poor user just needs an answer. The engineer’s solution assumes that all of the inputs are correct, that the outputs are self-explanatory and that the entire problem is in fact described by the model. Thus, the only logical course of action is to accept the output; any management interaction with the model would result in a suboptimal solution. Hence, most solutions and projects focus on the answer and not on how the decision-maker will interact with it.

We frequently see this issue when companies take on supply chain network design projects using off-the-shelf network analysis tools. There are two typical failure modes. The first is that, while a novice user can be trained to create and load input data, using the software and solving the problem are not synonymous. In many cases, the user does not understand the consequences of their modeling decisions. When we intercept a project mid-stream, there are usually several faults in the problem formulation and a number of gaps where the user just assumes that the “model takes care of that.” (As a side note, “the model takes care of that” is second only to “to be on the safe side” when defending half-baked assumptions.) The second failure mode is that the modeler is often perplexed by the results, especially when seemingly small changes to inputs produce wildly different solutions. The natural reaction is to add constraints to make the solution more “realistic” and, ultimately, to force the model to the result they expected in the first place. This is an example of a very poor interaction model.

But what really handicaps these efforts is the lack of adequate interaction. Most network design packages essentially report the “optimal” network design, perhaps accompanied by a rudimentary sensitivity analysis. Unfortunately, merely reporting the optimal answer is rarely sufficient. For these more strategic decisions, the supporting evidence is as important as the answer. The decision-maker usually wants some insight into why the optimal solution is optimal, particularly if it runs at all counter to their intuition. If asked to defend the solution, the decision-maker cannot merely reply that the model said so.

We see the same issues with sensitivity analysis. Most traditional sensitivity analyses quantify the change in the final measure as key inputs change or identify thresholds across which the optimal solution changes. Done well, this is definitely useful, but much more so if the user can explain why the solution changes. However, decision-makers usually really want insight into the robustness of the solution. Specifically, if they make the decision, how good or bad will it be if the future unfolds differently? The best solution may be one that is consistently better than most across a range of scenarios but is never the optimal solution. Left to their own devices, the newly trained user will struggle mightily to find alternative “suboptimal” solutions. After all, the tools they have been given are there to find optimal solutions.
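The distinction between “optimal” and “robust” can be illustrated with a toy comparison. The design names, scenarios and cost figures below are entirely invented; the pattern, however, is common: one design wins in the base scenario, while a regret comparison across scenarios favors another that is never optimal but is rarely far from it.

```python
# Hypothetical total-cost table: rows are candidate network designs,
# columns are demand scenarios. All numbers are illustrative.
costs = {
    "design_A": {"base": 100, "demand_up": 160, "demand_down": 95},
    "design_B": {"base": 105, "demand_up": 120, "demand_down": 100},
}

def best_in_scenario(costs, scenario):
    """The design a point-estimate optimizer would report for one scenario."""
    return min(costs, key=lambda d: costs[d][scenario])

def min_max_regret(costs):
    """Pick the design whose worst-case regret across scenarios is smallest.

    Regret = this design's cost minus the best achievable cost
    in that scenario.
    """
    scenarios = next(iter(costs.values())).keys()
    best = {s: min(c[s] for c in costs.values()) for s in scenarios}
    regret = {d: max(costs[d][s] - best[s] for s in scenarios) for d in costs}
    return min(regret, key=regret.get), regret
```

Here `best_in_scenario(costs, "base")` returns design_A, the answer a typical tool reports, while `min_max_regret(costs)` selects design_B: never the cheapest in any single scenario, but never badly exposed either. Surfacing that trade-off, rather than a single “optimal” answer, is the kind of interaction the decision-maker actually needs.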

The right level of interaction between the decision-maker, the analyst and the model can deliver superior results. This starts with the acknowledgement that in many cases there is no one correct model or abstraction of reality. The analyst has to extract (usually iteratively throughout the design process) the requirements from the decision-maker to formulate the problem and to understand how they will engage with the final solution. This could include creating a dashboard of metrics that highlight conditions where the decision-maker typically intervenes or where the model is not working particularly well. It may require the capability within the model to deconstruct solutions to understand which assumptions or constraints are driving the counter-intuitive result. The point is that in all of these situations, the ability and access to interact with the model is critical to the success of the model.

Colin Kessinger is the managing director of End-to-End Analytics, a Palo Alto, Calif.-based supply chain and operations consulting services firm. Over his career, Kessinger’s work has focused on applying quantitative techniques to risk management, forecasting, supply chain flexibility, capacity planning and strategic contract design. He was a professor at the University of Michigan and has been a lecturer at Stanford University and the Berkeley Haas School of Business. He holds a Ph.D. in industrial engineering and engineering management from Stanford University.

