
Analytics Magazine

Modeling experience yields key insights

January/February 2013

By Bruce W. Patty

After 20 years of consulting in the freight transportation arena, I joined Pacer Stacktrain as AVP of Equipment in 2003. One of the key responsibilities of our group was to determine how many chassis of each size (20 feet, 40 feet, 48 feet and 53 feet) needed to be positioned at each location across North America where Pacer containers would arrive on trains. At the time, Pacer had the largest domestic container fleet in North America with more than 27,000 containers. It also had contracts with its rail partners that allowed Pacer to provide its own chassis at rail terminals across North America. [Note: In the domestic intermodal marketplace, containers are designed to move around North America on trains, then be mounted on chassis at rail terminals in order to be transported from the rail terminal to the destination by trucks.]

In the years preceding my arrival, Pacer had developed a spreadsheet model to estimate the number of chassis of each size that would be needed at each equipment supply point (EQSP). This analytic model used traditional inventory planning inputs such as turn-time (the estimated number of days that an arriving container would use a chassis), the forecast number of containers arriving on a train each day and the number of days each week that trains arrived or departed. In general, this model did a good job of estimating the number of chassis that would be needed in “steady state” conditions. And yet, more often than was desirable, the number of chassis actually needed far exceeded the projection. We needed to identify what was causing the model to be so far off.
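
To make the structure of such a model concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not Pacer's actual spreadsheet: the formula, the parameter names and the safety buffer are invented for this example.

```python
# A minimal sketch of a steady-state chassis projection, in the spirit of
# the spreadsheet model described above. The formula structure, names and
# safety buffer are assumptions for illustration, not Pacer's actual model.
import math

def steady_state_chassis(daily_inbound: float,
                         turn_time_days: float,
                         train_days_per_week: int,
                         buffer: float = 1.1) -> int:
    """Estimate chassis needed at one equipment supply point (EQSP).

    daily_inbound       -- forecast containers arriving per train day
    turn_time_days      -- average days an arriving container occupies a chassis
    train_days_per_week -- days per week on which trains arrive
    buffer              -- hypothetical safety factor for day-to-day variation
    """
    # Average chassis in use ~ arrival rate x time on chassis (Little's law),
    # scaled by the fraction of the week on which trains actually run.
    avg_in_use = daily_inbound * turn_time_days * (train_days_per_week / 7)
    return math.ceil(avg_in_use * buffer)

print(steady_state_chassis(daily_inbound=40, turn_time_days=4,
                           train_days_per_week=5))  # -> 126
```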

Problem Approach

Since the model was developing accurate projections at about 90 percent of the EQSPs, we believed the fundamentals of the model must be working properly. Given that, our initial guess was that one or more of the inputs to the model were off. The most likely possibilities were that inbound freight had surged, turn-times had significantly increased or the number of trains operated each week had dramatically dropped. However, when we analyzed updated measurements for these values, we found that actual numbers were quite close to those used in the model! With our first hypothesis proven wrong, we needed to consider other possibilities.
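An input audit of this kind can be sketched as below; the field names, figures and 10 percent tolerance are hypothetical, chosen only to mirror the check we performed.

```python
# Hypothetical input audit: compare the values the model used against
# freshly measured actuals and flag any input that deviates by more than
# a tolerance. Field names, figures and the tolerance are assumptions.

def audit_inputs(model_inputs, actuals, tol=0.10):
    """Yield (name, model value, actual value, relative gap) for inputs
    that deviate from the model's value by more than tol."""
    for name, used in model_inputs.items():
        gap = abs(actuals[name] - used) / used
        if gap > tol:
            yield name, used, actuals[name], gap

model = {"daily_inbound": 40, "turn_time_days": 4.0, "train_days_per_week": 5}
measured = {"daily_inbound": 41, "turn_time_days": 4.1, "train_days_per_week": 5}

print(list(audit_inputs(model, measured)))  # [] -- actuals matched, as we found
```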

We decided to step back from the problem and see if we could identify any business conditions that were consistently present at EQSPs where the actual number of chassis needed exceeded the projections. We set up conference calls with both the Equipment team and the Operations team to discuss what was happening at the terminals that were “in trouble.” After several calls it became evident that we needed to conduct some historical analyses prior to the calls, or we would get bogged down in anecdotal discussions about what happened on one particular day when some unusual situation took place, making it virtually impossible to move the discussion to the underlying fundamentals.

After using these analyses to discredit some theories that were driven by these one-time occurrences, we realized that EQSPs where we were running short of chassis tended to be locations where empty containers would build up until they were repositioned out on trains. That is, inbound loaded container volume exceeded outbound loads and empties were building up at the terminal.

We then went back and looked at the model to see how it handled this situation. We found that turn-times were being measured from when the container and chassis left the terminal after arriving on an inbound train to when the container and chassis “ingated” the terminal after being released by the customer. The time between when the container ingated the terminal and when the container was taken off the chassis and placed on an outbound train was not included in this measurement, often because those events were not transmitted to Pacer by the rail carrier. This time was excluded for loaded and empty containers alike, however, so why was its omission causing problems only at terminals where empties accumulated?
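
The sketch below illustrates the measurement gap using hypothetical event timestamps. The event names mirror the description above; none of this is Pacer's actual data feed.

```python
# Sketch of the turn-time measurement gap, with invented timestamps.
from datetime import datetime

def measured_turn_time(outgate, ingate):
    """Turn-time as the model measured it: terminal departure to return."""
    return ingate - outgate

def actual_chassis_time(outgate, loaded_to_train):
    """Full span the chassis is occupied, including the on-terminal dwell
    between ingate and the container being lifted onto an outbound train."""
    return loaded_to_train - outgate

outgate = datetime(2003, 6, 2, 8, 0)   # container and chassis leave terminal
ingate = datetime(2003, 6, 5, 14, 0)   # customer releases container; it returns
loaded = datetime(2003, 6, 9, 10, 0)   # empty finally lifted onto a train

print(measured_turn_time(outgate, ingate))   # 3 days, 6:00:00 -- what the model saw
print(actual_chassis_time(outgate, loaded))  # 7 days, 2:00:00 -- actual chassis time
```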

To answer this question, we arranged another round of conference calls with the Operations team. We found out that a key difference in the way that loaded containers and empty containers were handled by the railroads was that, if there was limited space on the trains, the loaded containers would get priority. So, empty containers would be left behind. While this worked fine in terms of meeting delivery promises for the loaded containers, it caused situations where empty containers would stay mounted on chassis for days. And since these days were not being captured in our measurement of turn-time, the model was not accounting for this in the chassis projection. In short, we discovered that under certain and occasional conditions, our modeling assumptions did not reflect operational practice.

We ended up modifying the model that estimated chassis requirements by using historic chassis usage trends that did include chassis on terminal, and then looking at averages, maximums and variances from the norm to develop demand projections. With this change, we were able to dramatically improve the accuracy of the model. The change in our modeling approach was one of the key reasons that Pacer was able to meet chassis needs with an industry low chassis-to-container ratio of 85 percent, but I’ll save that story for another article.
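
A minimal sketch of a usage-based projection in this spirit follows. The demand history and the particular statistic (mean plus two standard deviations, capped at the observed maximum) are illustrative assumptions, not the exact rule we used.

```python
# Sketch of a usage-based projection: derive the chassis requirement from
# observed total daily usage (including chassis on terminal) rather than
# from turn-time inputs. Data and the mean + k*sigma rule are assumptions.
import statistics

def usage_based_requirement(daily_usage: list[int], k: float = 2.0) -> int:
    """Project chassis demand from a history of total daily chassis in use."""
    avg = statistics.mean(daily_usage)
    spread = statistics.stdev(daily_usage)
    # Cover normal variation (mean + k sigma) without exceeding the worst
    # day actually observed in the history.
    return min(round(avg + k * spread), max(daily_usage))

history = [102, 98, 110, 125, 140, 118, 105, 99, 131, 122]
print(usage_based_requirement(history))  # -> 140
```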

Best Practice Insights

What can be gleaned from the process described above that can be applied to many business problems? Below are just three key insights:

  1. Confirm the assumptions behind a model. Analytic models are just that: attempts to model a real-world phenomenon. These models are based on fundamental assumptions such as the probability distribution of arrivals, linearity of a cost function or limitations on supply. Often when models are developed and subsequently used, assumptions are glossed over and attention is paid to getting the inputs as accurate as possible, or to ensuring that all of the constraints are accurately represented. But in situations where the results from the model do not accurately reflect the real-world phenomenon, it is often best to start by confirming that the model assumptions are truly valid for the situations where the model is failing. In our case, the assumption that chassis requirements were driven by inbound loaded container volumes did not hold for locations where empties could build up, requiring significant quantities of chassis. That said, the original modeling assumptions were reasonable for 90 percent of the actual situations!
  2. Diagnose causes of problems by identifying similarities or commonalities. Often, there will be situations where models are working well for a majority of cases and not working for only a few. In these situations, one of the quickest ways to diagnose the problem is to identify what the few “problem” cases have in common and then determine how the model handles those similarities. In our process, by identifying that the locations where the model was not performing well were locations where empties built up, we were able to focus our attention on how the model handled empties (see the sketch after this list).
  3. Understand how measures are being calculated. In school, we’re often presented with problem descriptions where the values (costs, supplies, demands, times, etc.) are provided to us, and we are then responsible for building a model or solving a system of equations. We don’t spend much time questioning how the values were calculated or derived. In practice, determining how to come up with these parameters is often the most challenging aspect.
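
To make insight 2 concrete, here is a small, entirely hypothetical sketch of diagnosing by commonality: tag each location with the business conditions present there and count which conditions all of the failing locations share. The location codes and attributes are invented for illustration.

```python
# Diagnose-by-commonality sketch; locations and conditions are invented.
from collections import Counter

eqsps = {
    # name: (business conditions present, did the projection hold up?)
    "EQSP_A": ({"empties_build_up"},               False),
    "EQSP_B": ({"empties_build_up", "short_haul"}, False),
    "EQSP_C": ({"short_haul"},                     True),
    "EQSP_D": (set(),                              True),
}

failing = [conds for conds, model_ok in eqsps.values() if not model_ok]
common = Counter(c for conds in failing for c in conds)

# Conditions shared by every failing location are the prime suspects.
suspects = [c for c, n in common.items() if n == len(failing)]
print(suspects)  # ['empties_build_up']
```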

I’ve never encountered a situation where my manager or my client came to me with a table of numbers and asked me to solve for the correct answer. Often, the data captured in our systems limits our ability to come up with the most accurate set of values. To develop accurate and useful models, we must understand how these limitations will affect our solutions and make allowances for them. In our situation, the fact that turn-times excluded the on-terminal time between a container’s return to the terminal on a chassis and its loading onto an outbound train became a serious shortcoming, especially at EQSPs where empties could build up and this time became significant.


Bruce W. Patty (bpatty@veritecsolutions.com) is vice president of transportation analytics at Veritec Solutions (www.veritecsolutions.com). Patty is a member of INFORMS. A version of this article appeared in Analytics Lens.
