Analytics Magazine

For Practitioners Only: Seven Sins of Simulation Practice

Summer 2009


Practitioners, be advised: Avoid these mistakes or risk damage to your career.

By Jerry Banks and Randall R. Gibson

Seven is a number that appears in many places in nature, music and religion. There are the seven seas, seven continents, seven notes on the musical scale, seven days of the week and many other “sevens.” In this article, we discuss seven major mistakes to which many simulation practitioners fall prey; we call them the “seven sins of simulation.” Certainly there could be more, and others may identify different ones than those we cite here. We consider these seven mistakes to be like the vital few causes in a Pareto analysis: roughly 80 percent of the problems can be attributed to 20 percent of the causes.

Mistake 1: Focusing more on the model than on the problem.

As it is said, “it’s possible not to see the forest for the trees.” Some simulation practitioners get so involved in model building that they miss the real objective, which is to solve the client’s problem. They build an elegant model that has too many features and too much detail. This can occur in several ways. One is modeling aspects of the system that contribute little, if anything, to the solution of the problem. Another is adding interface features that are neither needed nor requested but are cosmetically or operationally attractive. An example would be a Visual Basic interface that insulates the model user from the model and makes it easy to alter and run various scenarios. If the client requests such an interface, then build it. Otherwise, it can distract from the focus of the project.

Mistake 2: Providing point estimates.

Simulation models generally have random arrivals and random service times. Additional sources of randomness are possible, such as breakdowns and path choices. Thus, the output is a random variable, and running a simulation model is the same as conducting a statistical experiment in the presence of randomness. If you were trying to learn the effects of water and fertilizer on the growth of corn, you wouldn’t simply conduct one trial and report your result. The same is true of simulation. Don’t conduct one trial and report the result, because it is highly unlikely that the real system will react exactly like the single simulation run. The output of a single run is just one observation from a distribution, often assumed to be approximately normal across replications. You should conduct many trials and perform a statistical analysis that includes a confidence interval or a precision interval.
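To make this concrete, here is a minimal sketch in Python of the replicate-then-summarize approach. The single-server queue, its parameters and the function names are illustrative assumptions rather than anything prescribed in this article; the point is simply that many independent replications feed a confidence interval instead of a single number.

```python
import random, statistics
from math import sqrt

def run_replication(seed, n_customers=1000, mean_arrival=50.0, mean_service=45.0):
    """One replication of an illustrative single-server queue; returns the average wait."""
    rng = random.Random(seed)
    clock = 0.0            # arrival time of the current customer
    server_free_at = 0.0   # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(1.0 / mean_arrival)   # next arrival
        start = max(clock, server_free_at)             # wait if the server is busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_customers

# Many independent replications, not a single trial
results = [run_replication(seed) for seed in range(30)]
mean = statistics.mean(results)
half_width = 2.045 * statistics.stdev(results) / sqrt(len(results))  # t(0.975, 29) ≈ 2.045
print(f"Average wait: {mean:.1f} ± {half_width:.1f} (95% confidence interval)")
```

The deliverable is the interval, not the single mean; with 30 replications the half-width uses the Student t critical value for 29 degrees of freedom.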

Mistake 3: Not knowing when to stop.

“A simulation model should be small, but not too small. A simulation model should be big, but not too big.” Somewhere between those extremes is a happy medium. In this “mistake,” however, we are looking at the second part of the statement. The simulation model should be big enough, with sufficient scope and detail, to answer the questions being asked. But there is no need to continue refining the model after that point. Some simulation practitioners have great difficulty stopping the model refinement. The model becomes larger and larger, and its verification and validation become more and more difficult.

Think of the modeling effort as follows: Say that 80 percent of the accuracy can be obtained with 50 percent of the effort. Another 10 percent can be gained with 25 percent more of the effort, and another 5 percent with the remaining 25 percent of the effort. To go from 95 percent to 99 percent accuracy may require the same amount of funds that have already been expended, and to go from 99 percent to 99.9 percent requires an equivalent input again. To go from 99.9 percent to 100 percent accuracy requires a vast amount of effort, perhaps the replication of reality itself!

On the other hand, many simulation practitioners stop the verification and validation effort far too soon. Since it is never possible to prove absolutely that a simulation model is verified and validated, many tests should be applied to gain assurance that the model can be used as a representation of reality.

At some point, it is time to stop the modeling effort and the verification and validation process. If the future of the nation depended on the model, a great deal of effort might be justified. But that is not usually required. Ultimate truth is not affordable.

 

Mistake 4: Reporting what the client wants to hear rather than what the model results say.

This can occur in numerous ways, all of them inappropriate. The simulation practitioner should act as a scientist, gathering information, analyzing it and reporting it factually. But in order to please the client, to help the client prove a point, or for whatever other reason, there are those who will distort the model results.

It can occur somewhat innocently, such as when the decision-maker says, “Show that we can achieve the desired throughput,” and the simulation practitioner does whatever is needed to the model to make that dream come true. Or it can occur through the use of clearly false input, such as a piece of equipment reported to have 99.99 percent uptime when the simulation practitioner has observed that same equipment in a down state numerous times and knows that the claimed uptime is just an advertising ploy. Or it can be a political issue, where two managers are arguing about a process and the simulation practitioner works for one of them. In his or her zeal to curry favor, the simulation practitioner reports whatever outputs support that manager’s position. Whatever the cause, deliberate or innocent, it’s a mistake!

Mistake 5: Lack of understanding of statistics.

This mistake has many roots and many consequences. Virtually all simulation software has output analysis capability, but many simulation practitioners don’t take advantage of it. The reason is that they don’t understand it! And they might get a question from their manager like the following: “What does a Type I error of 0.05 really mean?” They might have been able to answer that question when they studied statistical methods, but they can’t recall the answer now. If they do answer, it is likely that they will get it wrong.

Then there are more subtle points of statistics, such as multiple comparisons of confidence intervals: how do you know when there is no significant difference between two alternatives? Here is a concrete example. A machine can accomplish a task at three speeds, which we call Speed A, Speed B and Speed C; the processing time at each speed is normally distributed with its own mean and standard deviation. Which speed will result in the lowest expected waiting time if arrivals are exponentially distributed with a mean of 50 seconds? You ask for confidence intervals for the differences in mean waiting time between Speeds A and B and between Speeds C and B. How do you know whether any statistically significant difference is present? The answer: there is no statistically significant difference if zero lies within the interval. Think about that. Ask your local statistician if it doesn’t make sense.
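Here is a minimal sketch, in Python, of that zero-in-the-interval check. The waiting-time numbers are purely illustrative assumptions, standing in for the per-replication averages you would collect from the model at Speeds A and B.

```python
import statistics
from math import sqrt

# Hypothetical per-replication average waiting times (seconds) at Speed A and Speed B,
# obtained from ten paired replications (e.g., run with common random numbers).
waits_a = [41.2, 38.7, 44.1, 40.3, 39.8, 42.6, 37.9, 43.4, 40.9, 41.7]
waits_b = [39.5, 40.1, 42.8, 38.9, 41.2, 40.4, 39.1, 42.0, 40.6, 39.7]

# Confidence interval for the mean of the paired differences A - B
diffs = [a - b for a, b in zip(waits_a, waits_b)]
mean_diff = statistics.mean(diffs)
half_width = 2.262 * statistics.stdev(diffs) / sqrt(len(diffs))   # t(0.975, 9) ≈ 2.262
low, high = mean_diff - half_width, mean_diff + half_width

print(f"95% CI for mean wait, Speed A minus Speed B: [{low:.2f}, {high:.2f}]")
if low <= 0.0 <= high:
    print("Zero is inside the interval: no statistically significant difference.")
else:
    print("Zero is outside the interval: the difference is statistically significant.")
```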

There are many questions like this in statistics. When do we use a t-test rather than the normal (z) test? What’s the difference between a chi-square test and an F-test? When do I use a two-tailed test and when do I use a one-tailed test? If it’s a one-tailed test, is it upper or lower? Because questions like these exist and the answers are unclear to them, many simulation practitioners simply avoid statistical methods altogether.
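As one small illustration of the two-tailed versus one-tailed question, the sketch below compares per-replication throughputs from a current and a proposed design. The data and the use of SciPy’s Welch t-test are assumptions for illustration, not something the article prescribes.

```python
from scipy import stats

# Hypothetical per-replication throughputs (loads/hour) for two designs
current  = [118.2, 121.5, 119.8, 120.4, 117.9, 122.1, 119.3, 120.8]
proposed = [123.4, 121.9, 124.8, 122.6, 125.1, 123.0, 124.2, 122.8]

# Two-tailed test: are the mean throughputs different in either direction?
t_two, p_two = stats.ttest_ind(proposed, current, equal_var=False)

# One-tailed (upper) test: is the proposed design's mean throughput greater?
t_one, p_one = stats.ttest_ind(proposed, current, equal_var=False, alternative="greater")

print(f"two-tailed p = {p_two:.4f}, one-tailed (greater) p = {p_one:.4f}")
```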

Mistake 6: Confusing cause and effect.

A study reported on in the Wall Street Journal (Sept. 13, 2003) indicated that teenage girls who use tanning booths are more likely to smoke and drink. But do the tanning booths cause the behavior, or do teenage girls who are risk takers (they smoke and drink) also accept the risk of the tanning booth?

Another study concluded that being overweight isn’t unhealthy. This was finally straightened out in Newsweek (Dec. 11, 2006), among other news media. In the Newsweek article, the question was whether being thin caused poor health or whether poor health caused thinness. The latter view prevailed.

In their clever book “Freakonomics” (2005), Levitt and Dubner discuss many of these cause-and-effect relationships. For example, they show that the presence of books in a household is a good indicator of how children will perform in school. But don’t rush out and buy a bookcase and lots of books if your child is performing poorly; it won’t help.

Similarly, cause and effect get confused in simulation. A client will say something like, “At around 3 p.m. every afternoon, the entire production system comes to a standstill. Can you simulate that?” (We are discussing a production system here, but we could be discussing a service system.) The answer is, “No, that’s an effect; we simulate causes.” But that’s not entirely true: we simulate breakdowns, and those are effects. It would be far too difficult to simulate the cause of a breakdown; we would have to model the erosion of a shaft in a machine, the erosion of the gears and so on. So we take the easy way out. We obtain a distribution of time between failures and a distribution of time to repair, and we simulate the effect. The trick is to know when it is acceptable to simulate effects and when we must simulate causes.
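A minimal sketch of what “simulating the effect” can look like: sampling downtime windows from assumed time-between-failures and time-to-repair distributions rather than modeling the physics of the failure. The exponential distributions and parameter values below are illustrative assumptions.

```python
import random

def downtime_schedule(horizon, mean_tbf, mean_ttr, seed=0):
    """Sample (start, end) downtime windows over a horizon by drawing
    time between failures and time to repair; the breakdown itself is
    treated as an effect, not traced back to its physical cause."""
    rng = random.Random(seed)
    windows = []
    clock = 0.0
    while True:
        clock += rng.expovariate(1.0 / mean_tbf)   # time until the next failure
        if clock >= horizon:
            break
        repair = rng.expovariate(1.0 / mean_ttr)   # time to repair
        windows.append((clock, clock + repair))
        clock += repair
    return windows

# One eight-hour shift (in minutes), with assumed mean TBF and TTR
print(downtime_schedule(horizon=480, mean_tbf=120.0, mean_ttr=15.0))
```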

Mistake 7: Failure to replicate reality.

We will explain this mistake using breakdowns as an example (again, a similar discussion could be carried on for a service system). There are many ways to simulate breakdowns, and the choice is crucial to how faithfully the model reflects reality. Let’s say that a machine is up for an average of 48 minutes and down for an average of 12 minutes. Now we add some complications. What happens to the load being processed when the breakdown occurs? The load could be destroyed; it could be sent to another machine for processing; processing might need to be restarted when the machine comes back up; or the load might require a different amount of processing time. If the load is sent to another machine, it could arrive with higher priority, lower priority or the same priority. The objective is to replicate reality, so choose wisely.
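The sketch below is one hypothetical way of making those choices explicit in the model; the policy names and the helper function are assumptions for illustration, not the authors’ prescription.

```python
from enum import Enum, auto

class LoadOnBreakdown(Enum):
    """Possible fates of the load in process when the machine goes down."""
    SCRAP = auto()        # load is destroyed
    REROUTE = auto()      # load goes to another machine for full processing
    RESTART = auto()      # processing starts over when the machine comes back up
    RESUME = auto()       # processing continues with the remaining time
    RESIZE = auto()       # processing continues but takes a different amount of time

def remaining_process_time(policy, elapsed, total, resize_factor=1.5):
    """Processing time still owed to the load under a given policy.
    Returns None if the load leaves the system. Queue priority on rerouting
    (higher, lower or the same) would be set separately."""
    if policy is LoadOnBreakdown.SCRAP:
        return None
    if policy in (LoadOnBreakdown.REROUTE, LoadOnBreakdown.RESTART):
        return total
    if policy is LoadOnBreakdown.RESIZE:
        return (total - elapsed) * resize_factor
    return total - elapsed    # RESUME

# Example: machine fails 3.5 minutes into a 9-minute operation
print(remaining_process_time(LoadOnBreakdown.RESUME, elapsed=3.5, total=9.0))
```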

Conclusion

Avoiding these seven mistakes of simulation practice does not, of course, guarantee the practitioner a path to valid and useful models. However, we submit that if the practitioner continues to make mistakes such as those discussed in this article, he or she puts the simulation project’s goals at risk, and that could be costly, dangerous and a threat to his or her career.

Jerry Banks (atljerry@earthlink.net) holds the title Academic Leader at Tecnológico de Monterrey in Monterrey, México. He retired from the faculty of the School of Industrial and Systems Engineering at Georgia Tech in Atlanta and later served as Senior Simulation Technology Advisor at Brooks Automation (now Applied Materials).

Randall Gibson (randall.gibson@me.com) is an independent consultant based in Solana Beach, Calif. Previously, he was Principal and Senior Vice President in the Management and Supply Chain Consulting group at TranSystems Corporation, where he managed the Simulation and Analytical Modeling Practice Area. He was founder and president of Automation Associates, which was acquired by TranSystems in 2005. He has more than 20 years’ experience in simulation modeling and consulting.
