

Analytics Magazine

Modeling experience yields key insights

January/February 2013

By Bruce W. Patty

After 20 years of consulting in the freight transportation arena, I joined Pacer Stacktrain as AVP of Equipment in 2003. One of the key responsibilities of our group was to determine how many chassis of each size (20 feet, 40 feet, 48 feet and 53 feet) needed to be positioned at each location across North America where Pacer containers would arrive on trains. At the time, Pacer had the largest domestic container fleet in North America with more than 27,000 containers. It also had contracts with its rail partners that allowed Pacer to provide its own chassis at rail terminals across North America. [Note: In the domestic intermodal marketplace, containers are designed to move around North America on trains, then be mounted on chassis at rail terminals in order to be transported from the rail terminal to the destination by trucks.]

In the years preceding my arrival, Pacer had developed a spreadsheet model to estimate the number of chassis of each size that would be needed at each equipment supply point (EQSP). This analytic model used traditional inventory planning inputs such as turn-time (estimated number of days that an arriving container would use a chassis), forecasted number of containers arriving on a train each day and the number of days each week that trains arrived or departed. In general, this model did a good job at estimating the number of chassis that would be needed in “steady state” conditions. And yet, more often than was desirable, the number of chassis actually needed far exceeded the projection. We needed to identify what was causing the model to be so far off.
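The steady-state calculation behind such a spreadsheet model can be sketched roughly as follows. This is a minimal illustration using Little's-law-style inventory arithmetic; the function name and the figures are hypothetical, not the actual Pacer model:

```python
# Illustrative sketch of a steady-state turn-time chassis model.
# All names and numbers are hypothetical examples, not Pacer's spreadsheet.

import math

def chassis_needed(daily_arrivals, turn_time_days, train_days_per_week=7):
    """Estimate chassis required at one equipment supply point (EQSP).

    daily_arrivals: forecast containers arriving per train day
    turn_time_days: average days an arriving container occupies a chassis
    train_days_per_week: days per week that trains arrive
    """
    # Spread arrivals over the calendar week to get an average daily rate.
    arrivals_per_day = daily_arrivals * train_days_per_week / 7.0
    # In steady state, units in use ~= arrival rate x time in use.
    return math.ceil(arrivals_per_day * turn_time_days)

print(chassis_needed(daily_arrivals=40, turn_time_days=3, train_days_per_week=5))
# → 86
```

As the article notes, a formula like this behaves well as long as its inputs (turn-time, volume, train frequency) actually capture all the time a container spends on a chassis.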

Problem Approach

Since the model was developing accurate projections at about 90 percent of the EQSPs, we believed the fundamentals of the model must be working properly. Given that, our initial guess was that one or more of the inputs to the model were off. The most likely possibilities were that inbound freight had surged, turn-times had significantly increased or the number of trains operated each week had dramatically dropped. However, when we analyzed updated measurements for these values, we found that actual numbers were quite close to those used in the model! With our first hypothesis proven wrong, we needed to consider other possibilities.

We decided to step back from the problem and see if we could identify any business conditions that were consistently present at EQSPs where the actual number of chassis needed exceeded the projections. We set up conference calls with both the Equipment team and the Operations team to discuss what was happening at the terminals that were “in trouble.” After several calls it became evident that we needed to conduct some historical analyses prior to the calls; otherwise, we would get bogged down in anecdotal discussions about what happened on one particular day when some unusual situation took place, which made it virtually impossible to move the discussion to the underlying fundamentals.

After using these analyses to discredit some theories that were driven by these one-time occurrences, we realized that EQSPs where we were running short of chassis tended to be locations where empty containers would build up until they were repositioned out on trains. That is, inbound loaded container volume exceeded outbound loads and empties were building up at the terminal.

We then went back to the model to see how it handled this situation. We found that turn-times were being measured from when the container and chassis left the terminal after arriving on an inbound train to when the container and chassis “ingated” the terminal after being released by the customer. The time between when the container ingated the terminal and when it was taken off the chassis and placed on an outbound train was not included in this measurement, often because those events were not transmitted to Pacer by the rail carrier. However, this on-terminal time was omitted for loaded and empty containers alike. Why was its omission causing problems only at terminals where empties accumulated?

To answer this question, we arranged another round of conference calls with the Operations team. A key difference emerged in the way the railroads handled loaded and empty containers: when space on a train was limited, loaded containers got priority, and empty containers were left behind. While this worked fine for meeting delivery promises on the loaded containers, it meant that empty containers could stay mounted on chassis for days. And since those days were not captured in our measurement of turn-time, the model was not accounting for them in the chassis projection. In short, we discovered that under certain, occasionally occurring conditions, our modeling assumptions did not reflect operational practice.
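The dynamic the Operations team described can be illustrated with a toy simulation. All numbers here are hypothetical; this is a sketch of the mechanism, not of Pacer's actual operation:

```python
# Toy simulation of the load-priority effect: loaded containers take train
# space first, so surplus empties sit on chassis and the backlog grows.
# All figures are made-up illustrations.

def empties_stranded(days, loads_in, loads_out, train_capacity):
    """Return the end-of-day count of empties stuck on chassis, day by day."""
    empties = 0
    history = []
    for _ in range(days):
        empties += loads_in - loads_out              # inbound surplus becomes empties
        space = max(0, train_capacity - loads_out)   # loaded boxes load first
        empties -= min(empties, space)               # leftover space moves empties out
        history.append(empties)
    return history

# Inbound loads exceed outbound and train space is tight, so the backlog climbs.
print(empties_stranded(5, 50, 35, 45))  # → [5, 10, 15, 20, 25]
```

Every unit in that backlog is an empty container occupying a chassis for days, which is exactly the time a turn-time measurement ending at the ingate never sees.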

We ended up modifying the model to estimate chassis requirements from historical chassis-usage trends that did include chassis sitting on terminal, then looking at averages, maximums and variances from the norm to develop demand projections. With this change, we dramatically improved the accuracy of the model. The change in our modeling approach was one of the key reasons Pacer was able to meet chassis needs with an industry-low chassis-to-container ratio of 85 percent, but I’ll save that story for another article.
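One plausible way to implement such a usage-based projection is sketched below. The exact combination of average, variance and observed maximum is my assumption for illustration; the article does not specify the formula:

```python
# Hypothetical sketch of the revised approach: project demand from historical
# daily chassis-usage counts (including units sitting on terminal) rather than
# from turn-time inputs. The mean + k*sigma rule capped at the observed maximum
# is an illustrative choice, not the documented formula.

from statistics import mean, pstdev

def project_chassis(daily_usage, k=2.0):
    """Project demand as mean plus k standard deviations, capped at the observed max."""
    avg = mean(daily_usage)
    sd = pstdev(daily_usage)
    return min(round(avg + k * sd), max(daily_usage))

usage = [80, 85, 90, 110, 95, 140, 100, 88, 92, 130]  # made-up daily counts
print(project_chassis(usage))  # → 139
```

Because the inputs are observed chassis counts, any on-terminal dwell time is automatically baked into the projection instead of depending on events the rail carriers may not transmit.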

Best Practice Insights

What can be gleaned from the process described above that can be applied to many business problems? Below are just three key insights:

  1. Confirm the assumptions behind a model. Analytic models are just that: an attempt to model a real-world phenomenon. These models are based on fundamental assumptions such as the probability distribution of arrivals, linearity of a cost function or limitations on supply. Often when models are developed and subsequently used, assumptions are glossed over and attention is paid to getting the inputs as accurate as possible, or ensuring that all of the constraints are accurately represented. But, in situations where the results from the model are not accurately reflecting the real-world phenomenon, it is often best to start by confirming that the model assumptions are truly valid for the situations where the model is failing. In our case, the assumption that chassis requirements were driven by inbound loaded container volumes did not hold for locations where empties could build up, requiring significant quantities of chassis. That said, the original modeling assumptions were reasonable for 90 percent of the actual situations!
  2. Diagnose causes of problems by identifying similarities or commonalities. Often, there will be situations where models are working well for a majority of cases and not working for only a few. In these situations, one of the quickest ways to diagnose the problem is to identify what the few “problem” cases have in common and then determine how the model behaves or handles those similarities. In our process, by identifying that the locations where the model was not performing well were locations where empties built up, we were able to focus our attention on how the model handled empties.
  3. Understand how measures are being calculated. In school, we’re often presented problem descriptions where the values (costs, supplies, demands, times, etc.) are provided to us and we are then responsible for building a model or solving a system of equations. We don’t spend much time questioning how the values were calculated or derived. In practice, determining how to come up with these parameters is often the most challenging aspect.

I’ve never encountered a situation where my manager or my client came to me with a table of numbers and asked me to solve for the correct answer. Often, we are limited in our ability to come up with the most accurate set of values by the data that is captured in our systems. To develop accurate and useful models, we must understand how these limitations will impact our solutions and make allowances for these impacts. In our situation, the fact that turn-times did not include the on-terminal time after a container came back into the terminal on a chassis until the container was loaded onto the train became a serious shortcoming, especially at EQSPs where empties could build up and this time became significant.

Bruce W. Patty is vice president of transportation analytics at Veritec Solutions. Patty is a member of INFORMS. A version of this article appeared in Analytics Lens.

Business Analytics News and Articles


Using machine learning and optimization to improve refugee integration

Andrew C. Trapp, a professor at the Foisie Business School at Worcester Polytechnic Institute (WPI), received a $320,000 National Science Foundation (NSF) grant to develop a computational tool to help humanitarian aid organizations significantly improve refugees’ chances of successfully resettling and integrating into a new country. Built upon ongoing work with an international team of computer scientists and economists, the tool integrates machine learning and optimization algorithms, along with complex computation of data, to match refugees to communities where they will find appropriate resources, including employment opportunities. Read more →

Gartner releases Healthcare Supply Chain Top 25 rankings

Gartner, Inc. has released its 10th annual Healthcare Supply Chain Top 25 ranking. The rankings recognize organizations across the healthcare value chain that demonstrate leadership in improving human life at sustainable costs. “Healthcare supply chains today face a multitude of challenges: increasing cost pressures and patient expectations, as well as the need to keep up with rapid technology advancement, to name just a few,” says Stephen Meyer, senior director at Gartner. Read more →

Meet CIMON, the first AI-powered astronaut assistant

CIMON, the world’s first artificial intelligence-enabled astronaut assistant, made its debut aboard the International Space Station. The ISS’s newest crew member, developed and built in Germany, was called into action on Nov. 15 with the command, “Wake up, CIMON!,” by German ESA astronaut Alexander Gerst, who has been living and working on the ISS since June 8. Read more →



INFORMS Computing Society Conference
Jan. 6-8, 2019; Knoxville, Tenn.

INFORMS Conference on Business Analytics & Operations Research
April 14-16, 2019; Austin, Texas

INFORMS International Conference
June 9-12, 2019; Cancun, Mexico

INFORMS Marketing Science Conference
June 20-22, 2019; Rome, Italy

INFORMS Applied Probability Conference
July 2-4, 2019; Brisbane, Australia

INFORMS Healthcare Conference
July 27-29, 2019; Boston, Mass.

2019 INFORMS Annual Meeting
Oct. 20-23, 2019; Seattle, Wash.

Winter Simulation Conference
Dec. 8-11, 2019; National Harbor, Md.


Advancing the Analytics-Driven Organization
Jan. 28–31, 2019, 1 p.m.– 5 p.m. (live online)


CAP® Exam computer-based testing sites are available in 700 locations worldwide. Take the exam close to home and on your schedule:

For more information, go to