Executive Edge: The interactive nature of analytics
By Colin Kessinger
A few months ago, someone in a professional networking discussion group asked about the best tools for predictive analytics. There were the predictable responses from vendors pretending not to be vendors. There were a few thoughtful but long-winded responses (like my own) suggesting that you should only buy an analytical tool to improve the cost, speed, reliability, etc., of solving a problem you already know how to solve — never to solve a problem that you do not yet know how to solve. And there was one response that absolutely nailed it. The “winning” response? “Your brain.”
Despite the billions of dollars spent on analytical tools for problems such as forecasting, inventory optimization and supply chain network design, there is still no “killer app” for supply chain analytics. Why not? Often the issue is not with the product itself but with the expectations set for it. These products are often sold on the premise that a user with limited understanding of analytics can be trained to use an analytical solution in a short period of time. The problem, of course, is that the solution is being sold as a way around the best tool of all: the human brain.
If indeed your brain is the best available analytical tool, then solving analytical business problems must be an interactive exercise. And in many situations, it is not only an interactive exercise, but one that requires a lot of creativity and artistry. The artistry may lie in creating the right abstraction of reality (such as how to discretize a continuous state-space), in dealing with inadequate sample sizes (e.g., insufficient history to estimate seasonal parameters on a part-by-part basis), or simply in presenting and explaining the output (e.g., a small change in one input producing a dramatically different result) so that the decision-maker will actually trust the model. Basic time series forecasting models are perhaps the best example of this issue. The formulas are trivial, but without the artistry of an expert they perform fairly poorly. And without the proper metrics, validation and explanation, decision-makers rarely trust them.
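To see how trivial the formulas really are, consider a minimal sketch of simple exponential smoothing (the function name, data and smoothing constant below are illustrative, not from the article). The formula is one line; the artistry the article describes lies in everything the code cannot decide for you — choosing alpha, judging whether the history is adequate, and explaining the forecast to a skeptical decision-maker:

```python
def exponential_smoothing(history, alpha=0.3):
    """Simple exponential smoothing: the formula itself is one line.

    The hard part is everything around it -- picking alpha, deciding
    whether the history is long enough to trust, and explaining the
    resulting forecast to the decision-maker.
    """
    if not history:
        raise ValueError("need at least one observation")
    level = history[0]                  # initialize with the first observation
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level                        # one-step-ahead forecast

# Hypothetical demand history; alpha = 0.3 is an arbitrary choice.
forecast = exponential_smoothing([100, 120, 110, 130], alpha=0.3)
```

The point is exactly the article's: nothing in those few lines tells you whether alpha is right, whether four observations are enough, or why the answer should be believed.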
For example, during one of our forecasting projects, the project sponsor mentioned that he had a team of former nuclear physicists working on a complex forecasting model, but that they had run into difficulties. The problem, apparently, was not that the algorithm was performing poorly. Rather, the problem was that no one could understand how the forecast was created. A senior executive, having just spoken with one of the company’s largest customers, asked the forecasting team whether or not the new model already accounted for a major promotion that the large customer was about to run. The model was so complex that no one knew the answer. This, and a few other questions, raised enough doubts that the executives always deferred to their intuition when the model suggested something different. In summary, the decision-maker could not interact with the model and therefore it was not adopted.
So what’s the point? The point is that most analytical efforts would benefit mightily from a better understanding of the necessary interaction model. Too many solutions are built on the assumption that the user is either a moron or an engineer, but in most cases they are neither. The moron’s solution is overly simplified and never provides enough access to the mechanics to enable real interaction (why give a monkey a sharp knife?). The poor user just needs an answer. The engineer’s solution assumes that all of the inputs are correct, that the outputs are self-explanatory and that the entire problem is in fact described by the model. Thus, the only logical course of action is to accept the output; any management interaction with the model would result in a suboptimal solution. Hence, most solutions and projects focus on the answer and not on how the decision-maker will interact with it.
We frequently see this issue when companies take on supply chain network design projects using off-the-shelf network analysis tools. There are two typical failure modes. The first is that, while a novice user can be trained to create and load input data, using the software and solving the problem are not synonymous. In many cases, the user does not understand the consequences of their modeling decisions. When we intercept a project mid-stream, there are usually several faults in the problem formulation and a number of gaps where the user just assumes that the “model takes care of that.” (As a side note, “the model takes care of that” is second only to “to be on the safe side” when defending half-baked assumptions.) The second failure mode is that the modeler is often perplexed by the results, especially when seemingly small changes to inputs produce wildly different outcomes. Their natural reaction is to add more constraints to make the solution more “realistic” and ultimately to force the model to the result they expected in the first place. This is an example of a very poor interaction model.
But what really handicaps these efforts is the lack of adequate interaction. Most network design packages essentially report the “optimal” network design, perhaps accompanied by a rudimentary sensitivity analysis. Unfortunately, merely reporting the optimal answer is rarely sufficient. For these more strategic decisions, the supporting evidence is as important as the answer. The decision-maker usually wants some insight into why the optimal solution is optimal, particularly if it runs at all counter to their intuition. If asked to defend the solution, the decision-maker cannot merely reply that the model said so.
We see the same issues with sensitivity analysis. Most traditional sensitivity analyses quantify the change in the final measure as key inputs change, or identify thresholds across which the optimal solution changes. Done well, this is definitely useful, but much more so if the user can explain why the solution changes. What decision-makers really want, however, is insight into the robustness of the solution. Specifically, if they make the decision, how good or bad will it be if the future unfolds differently? The best solution may be one that is consistently better than most across a range of scenarios but is never the optimal solution. Left to their own devices, the newly trained user will struggle mightily to find alternative “suboptimal” solutions. After all, the tools they have been given are there to find optimal solutions.
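The robustness idea above — a solution that is never optimal in any single scenario yet best across all of them — can be sketched in a few lines using a minimax-regret rule. The candidate designs, demand scenarios and cost numbers below are hypothetical placeholders, not anything from the article:

```python
def most_robust(candidates, scenarios, cost):
    """Pick the candidate with the smallest worst-case regret.

    For each scenario, regret = candidate's cost minus the best cost
    achievable in that scenario. A candidate that is never the
    scenario-wise optimum can still win by being consistently close.
    """
    # Best achievable cost in each scenario (the scenario-wise optimum).
    best = {s: min(cost(c, s) for c in candidates) for s in scenarios}
    # Candidate whose worst regret across all scenarios is smallest.
    return min(candidates,
               key=lambda c: max(cost(c, s) - best[s] for s in scenarios))

# Hypothetical: three network designs under three demand scenarios.
costs = {
    ("A", "low"): 10, ("A", "base"): 12, ("A", "high"): 30,
    ("B", "low"): 14, ("B", "base"): 11, ("B", "high"): 14,
    ("C", "low"): 12, ("C", "base"): 13, ("C", "high"): 16,
}
choice = most_robust(["A", "B", "C"], ["low", "base", "high"],
                     lambda c, s: costs[(c, s)])
```

In this toy data, design C is never the cheapest in any single scenario, yet it wins because its regret is small everywhere — exactly the kind of “suboptimal” answer an optimization-only tool will never surface.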
The right level of interaction between the decision-maker, the analyst and the model can deliver superior results. This starts with the acknowledgement that in many cases there is no one correct model or abstraction of reality. The analyst has to extract the requirements from the decision-maker — usually iteratively throughout the design process — both to formulate the problem and to understand how the decision-maker will engage with the final solution. This could include creating a dashboard of metrics that highlight conditions where the decision-maker typically intervenes or where the model is not working particularly well. It may require the capability within the model to deconstruct solutions to understand which assumptions or constraints are driving a counter-intuitive result. The point is that in all of these situations, access to the model and the ability to interact with it are critical to its success.
Colin Kessinger (firstname.lastname@example.org) is the managing director of End-to-End Analytics, a Palo Alto, Calif.-based supply chain and operations consulting services firm. Over his career, Kessinger’s work has focused on applying quantitative techniques to risk management, forecasting, supply chain flexibility, capacity planning and strategic contract design. He was a professor at the University of Michigan and has been a lecturer at Stanford University and the Berkeley Haas School of Business. He holds a Ph.D. in industrial engineering and engineering management from Stanford University.