Analytics Magazine

Supply Chain and Manufacturing: Why optimization models fail

How to avoid chaos in the field by combining simulation and optimization.

By Patricia Randall

Many optimization models fail because they do not sufficiently account for real-world randomness or suboptimal decisions by employees or vendors, or their solutions are not robust enough to handle the disruptions that occur throughout the day. In one representative example, a transportation company implemented an optimization model for a real-time work assignment system at a large facility without adequately testing the model in a realistic environment. When the model went live, it caused chaos in the field. Management realized the interdependencies between different tasks were not properly accounted for, so the model was promptly turned off.

Shortcomings of Traditional Testing

In supply chain and manufacturing, building a real-time optimization model with complex interdependencies requires rigorous testing without disrupting the organization’s operations. The challenge is to test solution quality in a dynamic environment over an extended period, with agents that interact and adjust their behavior based on the actions of others.

Traditional testing is often insufficient because modelers restrict themselves to overly specific solutions or test cases, asking nontechnical business users to review spreadsheets or outputs that look like an endless stream of numbers and letters. When testing cannot demonstrate, before deployment, that the model can adapt to an ever-changing environment, the modeler has failed to convince business users that the new solution will make things better, not worse.

At the transportation company previously referenced, executives rebooted the project and sought a different operational optimization model with similar requirements. The new model would require a new testing approach.

Leveraging Simulation

By joining simulation with optimization, we can test the solution quality and robustness of complex, high-impact models. Because the simulation incorporates real-life randomness, the results differ with each run, providing a more realistic and varied testing environment with evolving model inputs and a variety of test scenarios.

Many optimization models fail because they do not sufficiently account for real-world randomness or suboptimal decisions by employees or vendors.
Source: ThinkStock

Running sensitivity analyses, we can see which parts of the problem are more apt to “break” the solution and identify the best techniques to reduce these disruptions. Simulation provides an interface that is much easier for business people to use to validate models – simulation “speaks their language.” This leads to better user buy-in and a greater comfort at launch time because the users have reviewed and validated the results and helped the modeling team refine the solution. Testing with the simulation also provides the ability to create what-if scenarios to test the model under different operating conditions and sources of disruption.

Steps to Take

The following steps will help you join optimization and simulation to build and successfully deploy a high-value solution. For reference, we’ll consider two very different problems: assigning parking at a freight transportation terminal and scheduling at a small-batch manufacturing facility where several processes utilize the same limited resources.

Think “data” first. Many practitioners spend much more time with the data – reading it in, organizing and cleaning it – than building the model. Make use of current powerful tools, such as pandas, that accelerate and simplify data organization, cleaning and validation. Our firm developed a comprehensive framework that incorporates advanced data cleaning in the development and management of optimization models.
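As a sketch of what this data-first step can look like, a few lines of pandas can normalize and validate a raw inventory feed before it reaches the optimization model. The field names and defects below are hypothetical illustrations, not the terminal’s actual data:

```python
import pandas as pd

# Hypothetical raw parking-inventory feed with the kinds of defects a real
# source system produces: stray whitespace, mixed-case IDs, missing values.
raw = pd.DataFrame({
    "unit_id": ["U100", "U101 ", None, "U103"],
    "lot": ["a1", "A1", "B2", " b2"],
    "capacity": ["10", "10", "8", "8"],
})

def clean_inventory(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize, validate, and drop unusable rows before optimization."""
    df = df.copy()
    df["unit_id"] = df["unit_id"].str.strip()
    df["lot"] = df["lot"].str.strip().str.upper()
    # Coerce capacity to numbers; bad values become NaN and are dropped.
    df["capacity"] = pd.to_numeric(df["capacity"], errors="coerce")
    # Rows without a unit ID or capacity cannot be assigned; drop them.
    df = df.dropna(subset=["unit_id", "capacity"])
    return df.reset_index(drop=True)

clean = clean_inventory(raw)
print(clean)
```

Centralizing these checks in one function keeps the optimization model itself free of data-repair logic and makes the validation rules easy to review with business users.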

Select the right simulation software. There are many excellent simulation software packages – base your choice on criteria that fit your project. In our firm’s practice, we prioritize extensive customization to ensure the flexibility needed to evaluate the models and produce the right feedback for tuning them. Our recent simulations needed to communicate with the models through a variety of mechanisms, such as RESTful interfaces. We also required strong graphics that business users could easily consume to gain confidence in the results of both the simulation and the model.
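To illustrate the RESTful mechanism, the following sketch (the endpoint URL and payload fields are hypothetical, not the firm’s actual interface) shows how a simulation might package an inventory update into an HTTP POST for the model, using only the Python standard library:

```python
import json
from urllib import request

# Hypothetical endpoint; in a real deployment the optimization model would
# expose a RESTful interface that the simulation calls as events occur.
MODEL_ENDPOINT = "http://localhost:8080/inventory-update"

def build_update(terminal: str, lot: str, occupied: int, capacity: int) -> bytes:
    """Serialize one simulation event into a JSON request body."""
    payload = {
        "terminal": terminal,
        "lot": lot,
        "occupied": occupied,
        "capacity": capacity,
    }
    return json.dumps(payload).encode("utf-8")

def post_update(body: bytes) -> request.Request:
    """Prepare (not send) an HTTP POST carrying the event to the model."""
    return request.Request(
        MODEL_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = post_update(build_update("ATL", "A1", occupied=9, capacity=10))
print(req.get_method(), req.full_url)
```

Keeping the serialization separate from the transport makes it easy to replay the same event stream against new model versions during testing.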

Create a feedback loop. The feedback loop from the simulation to the optimization model is another important consideration. As the simulation progresses, critical model inputs will change, and those changes need to be communicated to the model. For the transportation terminal problem, inventory updates are sent to the optimization model from the simulation just as inventory would be updated in the production environment. In the small-batch manufacturing facility, actual process durations will be shorter or longer than planned and resources may occasionally be down due to maintenance or failure. The simulation communicates these events to the scheduling model, so it can reschedule processes as needed and adjust start times of future processes accordingly.

Allow for error handling. The simulation needs a way to alert the model or modeler to pieces of the solution that are infeasible in the simulation or when a solution cannot be found at all. For instance, the terminal model might provide a parking assignment to a lot that is already completely full. The simulation feedback is output consumed by the modeler and is not automatically incorporated into the model, but this would make a great use case for machine learning and a more integrated feedback loop between the simulation and optimization model.
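A minimal sketch of this kind of feasibility check follows; the lot names and the dictionary-based interface are hypothetical stand-ins for whatever data structures the simulation actually maintains:

```python
def check_assignment(assignment, occupancy, capacity):
    """Flag assignments to lots the simulation knows are already full."""
    alerts = []
    for unit, lot in assignment.items():
        if occupancy.get(lot, 0) >= capacity.get(lot, 0):
            # Infeasible in the simulated world: surface it to the modeler.
            alerts.append(f"unit {unit}: lot {lot} is full")
        else:
            occupancy[lot] = occupancy.get(lot, 0) + 1
    return alerts

occupancy = {"A1": 10, "B2": 3}
capacity = {"A1": 10, "B2": 8}
alerts = check_assignment({"U100": "A1", "U101": "B2"}, occupancy, capacity)
print(alerts)
```

Collecting alerts rather than raising on the first failure lets the modeler review a whole run’s infeasibilities at once.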

Build the simulation and model to work together. For a transportation terminal, testing with the simulation may identify issues with input data, like missing parking preferences in the source system and problems with the interface that would be used by client systems. Through the simulation, the modelers could test a variety of initial inventory levels and parking directives to see how the model handled these different conditions. The simulation could also park units outside the assigned location to see how the model handled this deviation.
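As an illustration of such what-if testing, a run can be parameterized by initial fill level and deviation rate, and a grid of outcomes compared. The harness below is a stub with an invented failure rule, not the terminal model itself:

```python
import random

def run_scenario(initial_fill, deviation_rate, seed=0):
    """Stub of one simulation run; returns the fraction of feasible parkings."""
    rng = random.Random(seed)  # fixed seed so scenarios are reproducible
    feasible, trials = 0, 100
    for _ in range(trials):
        # A unit deviates from its assigned lot with probability deviation_rate.
        deviated = rng.random() < deviation_rate
        # Invented rule: the stand-in "model" fails only when the terminal is
        # near full and the unit deviates from its assignment.
        if not (deviated and initial_fill > 0.9):
            feasible += 1
    return feasible / trials

results = {
    (fill, dev): run_scenario(fill, dev)
    for fill in (0.5, 0.95)
    for dev in (0.0, 0.2)
}
print(results)
```

Even this crude grid shows the pattern the article describes: the model looks robust until high inventory and behavioral deviation combine.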

For a small-batch manufacturer, a simulation of the plant would test the efficacy of the scheduling optimization model. The simulation would identify resource bottlenecks and enable the business to evaluate the impact of changes in process, resources and sequence on throughput, timeline reliability, turnaround time, product quality and cost. The scheduling optimization model would assign process start times to maximize throughput, minimize cycle time, minimize operator overtime and balance resource utilization. The model would re-solve throughout the day based on the current status of the in-progress processes, sometimes adjusting start times as processes run longer or shorter than planned, which would be easily tested with the simulation.

Test the solution’s subjective aspects. A simulation’s UI will let business users judge more subjective aspects of the solution. Updates can then be made to the model, for example, to better balance asset utilization, speed up parking and provide less motivation to deviate from the assigned location.

A key point of testing can be robustness to disruptions. In small-batch manufacturing, testing might reveal that scheduling a process to start near the end of an operational window results in a riskier schedule, since any delay in the preceding processing could push the start out of the window and into the next day. In such a case, the product could expire before the next step – an unrecoverable failure. Testing may also identify areas within the process that need more lead time to prepare resources for upcoming processes. This lead time can be incorporated into the optimization model to better utilize scarce resources and prevent additional schedule disruptions.

Right Team, Right Opportunity

Simulations can be costly to build, depending on the complexity of the system or environment being simulated and the level of operational detail that needs to be incorporated to ensure a sufficiently realistic environment for testing. Work with business users to determine what key features must be included and which pieces of functionality are less vital to the optimization and can be ignored or modeled at a lower level of detail. There is often more art than science in determining what can be left out without compromising the integrity of the simulation.

Testing an optimization model with a simulation requires a close relationship between the teams developing the model and the simulation to make sure they are in sync and the simulation’s design and metrics align with the feedback that will provide the most value to the optimization model team. The simulation cannot analyze only the “happy path.” Make sure to simulate key situations that allow for edge case and robustness testing to see what happens to the solution when things start to go wrong in the field. The simulation should be flexible enough to accommodate new requirements that arise during the optimization model testing process. Testing will lead to new questions and new scenarios to investigate, especially once the business users help with model tuning.

Embedding optimization models in simulations can dramatically improve and accelerate testing and implementation. Large facilities and small-batch manufacturing processes are just two areas where this approach has succeeded. Assess your projects to see which merit exploring and assemble the appropriate team of optimization and simulation practitioners, business people and external experts.

As the pace of business speeds up, executives will increasingly demand greater efficiency and innovation that optimization can deliver. Make sure you can sufficiently test your solutions to make long-term solution implementation a reality, not a pipe dream.

Patricia Randall is a director at Princeton Consultants, where she leads the design, development and implementation of large-scale, high-impact systems that help businesses optimize their decision-making at strategic, tactical and operational levels. Randall holds a Ph.D. in industrial engineering from Clemson University.
