Analytics Magazine

Analytics applications in consumer credit and retail marketing

November/December 2011

Four areas that present a significant opportunity to impact the bottom line of companies in different business verticals through the use of advanced analytics and sophisticated data modeling.

By Ming Zhang, Clay Duan and Arun Muthupalaniappan

The application of analytics in financial services is advanced and pervasive; credit risk scoring techniques permeate customer lifecycle management – from credit granting to account management to loss reduction and recovery. Without automated credit scoring techniques, the large-scale, modern-day consumer credit industry as we know it would not exist. By comparison, analytics in consumer retail marketing is still relatively undeveloped. Except in direct marketing, where consumer response-based targeting models are widely used and well accepted, analytics applications in other areas of the business are less systematic and vary widely across companies, depending in part on the sophistication of the executives in charge of marketing. This article briefly reviews the successful evolution of analytics in the consumer credit business, while exploring four areas in consumer retail marketing that are not as well developed but hold great potential for substantial business impact.

Applications of analytics in consumer financial services

The primary application of analytics in consumer financial services is credit risk scoring. Credit scoring techniques assess the credit risk of lending to a specific consumer. Credit scores are used in deciding whether to grant credit to an applicant, how much credit to grant and at what interest rate. They are also used in managing existing customer relationships. For example, credit scoring helps determine whether to increase or decrease an existing customer's credit limit or to allow a customer to charge certain amounts above that limit. It is common practice to periodically review an existing customer's credit quality through behavior scoring and to adjust the credit limit and the interest rate charged to cover any change in credit risk. In managing the collection of delinquent accounts and the recovery of charged-off loans, credit scoring also plays a crucial role in prioritizing collection accounts to maximize collectors' productivity.
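As a minimal sketch of how such a score might be built, the example below fits a toy behavior-style scorecard with a logistic regression on simulated account data; the features, their simulated relationship to default and the score scaling are illustrative assumptions, not any particular lender's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavior-score training data: one row per account.
# The features mirror the predictors discussed in this article:
# on-time payment rate, past delinquencies, length of credit history
# and utilization of available credit.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0.5, 1.0, n),   # share of payments made on time
    rng.poisson(1.0, n),        # number of historical delinquencies
    rng.uniform(6, 240, n),     # length of credit history (months)
    rng.uniform(0.0, 1.0, n),   # utilization of available credit
])
# Simulated "bad within 12 months" flag driven by those features.
logit = (-2 - 3 * (X[:, 0] - 0.75) + 0.6 * X[:, 1]
         - 0.004 * X[:, 2] + 2 * (X[:, 3] - 0.5))
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Scale the predicted log-odds to a familiar score range (higher = lower risk).
log_odds = model.decision_function(X)
score = 600 - 50 * log_odds
print(score[:5].round(0))
```

In practice a scorecard is built on bureau and account-level performance data and validated extensively; the point here is only that the score is a statistical summary of past credit behavior.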

Overall, despite the recent catastrophic failures in risk management related to consumer mortgage loans in the United States, the advancement of analytical applications in consumer credit businesses is no doubt one of the most significant developments in modern finance. It greatly increased the productivity of the financial services industry, which in turn helped raise the overall standard of living by making consumer credit available and affordable to large sections of the population. In that sense, the widespread application of analytics helped transform consumer credit from a privilege available only to the elite into a mass product.

Several reasons explain why analytics, as exemplified by credit scoring, achieved such tremendous success in consumer financial services and became so well accepted over the past 50 years:

  1. The development of a large-scale consumer credit market in the post-World War II years in the United States demanded a low-cost and efficient way to assess a consumer's credit risk. The emergence of small, evolving credit products meant the industry needed to replace expensive, labor-intensive manual evaluations by credit reviewers. Advances in statistical modeling techniques and computing made this feasible.
  2. Market competition also played a large part in popularizing the applications of automated credit evaluation based on statistically derived credit scoring models. Once the success of such an approach was established by a consumer credit operator, the competitive edge it created quickly forced other industry players to adopt similar practices, thus making credit scoring quickly accepted and pervasive in consumer financial services. As one innovative application led to another, credit scoring practices quickly spread from granting credit applications to management of the whole credit lifecycle.
  3. The credit performance of consumers is fairly predictable in a statistical sense. After all, in the absence of intimate knowledge of a person's character and circumstances, the best predictor of a person's future fulfillment of a contractual obligation to pay back a loan is his or her past credit behavior, especially the history of payments and personal finance management. This is why metrics such as the percentage of past loans paid back satisfactorily, the number and severity of historical delinquencies, the length of credit history and the utilization rate of available credit are important and effective predictors of future credit performance.
  4. Credit bureaus made credit information widely available to consumer financial service providers. This factor cannot be emphasized enough. The availability of consumer credit data (contributed and shared by all major consumer credit industry players) made statistical evaluations of a consumer’s credit quality possible based on a complete history of the person’s financial record. It is no coincidence that countries such as the United States and United Kingdom – that have the most balanced legislation on consumers’ need for information privacy and the credit industry’s need for shared consumer credit history – also have the most advanced application of consumer credit analytics.

Today, the application of analytics in consumer financial services extends well beyond assessing consumer credit quality at the time of acquisition to maximizing overall business profitability by optimizing a series of business decisions through the customer lifecycle.

Application of analytics in consumer retail marketing

Analytics is certainly no stranger to consumer retail marketing, and there are pockets of successful application. For example, in direct mail or telemarketing, a standard practice is to apply statistical models to select which consumers to target in order to maximize response rate or profit. Market research is another area that relies extensively on analytics for complex sample design and hypothesis testing. The practice of consumer segmentation has also advanced significantly with the increased availability of sophisticated statistical analysis techniques.

Other applications in marketing that have seen considerable thought leadership and forward thinking in recent years include:

  • sales forecasts and driver decomposition models,
  • micro market planning leveraging geo-spatial analysis,
  • pricing optimization based on consumer elasticity estimates, and
  • experimental designs accelerating organizational test and learn.

The rest of this article briefly discusses analytic frameworks in these emerging areas, their potential for significant business impact, and the challenges to their wide adoption as a standard practice for ongoing business decision-making. This is not an exhaustive overview of all such marketing applications or the implementation issues involved; rather, it reflects the authors' personal experiences from many years of field practice across multiple business verticals.

1. Sales forecasts and driver decomposition models. The most commonly used approaches to predicting future sales and revenue in marketing rely on time series forecasting or various smoothing methods. The time series model most often employed is the ARIMA (autoregressive integrated moving average) model. Smoothing methods include the moving average, exponential smoothing and Holt-Winters triple-parameter smoothing. The major advantage of time series and smoothing techniques is that they use only past sales or revenue data to forecast, and they are relatively simple to implement, with many built-in procedures and routines available in specialized software. However, these advantages have to be balanced against their shortcomings. First, they do not provide very reliable forecasts far into the future. Second, and more seriously, they do not allow the incorporation of known factors that impact sales and revenue, such as marketing campaigns or price promotions.
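As a minimal illustration of these baseline techniques, the sketch below fits an ARIMA model and a Holt-Winters triple exponential smoother to a simulated monthly sales series using the statsmodels package; the data, model orders and forecast horizon are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly sales with an upward trend and yearly seasonality.
rng = np.random.default_rng(1)
idx = pd.date_range("2008-01-01", periods=48, freq="MS")
sales = pd.Series(
    1000 + 5 * np.arange(48) + 80 * np.sin(2 * np.pi * np.arange(48) / 12)
    + rng.normal(0, 20, 48),
    index=idx,
)

# ARIMA: forecasts purely from past values of the series itself.
arima_fc = ARIMA(sales, order=(1, 1, 1)).fit().forecast(steps=12)

# Holt-Winters triple (level / trend / seasonal) exponential smoothing.
hw_fc = (
    ExponentialSmoothing(sales, trend="add", seasonal="add", seasonal_periods=12)
    .fit()
    .forecast(12)
)

print(arima_fc.head(3))
print(hw_fc.head(3))
```

Neither approach can say how much of next quarter's sales comes from a planned campaign or price promotion, which is the gap the driver decomposition models discussed next are meant to fill.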

Often marketing executives are challenged to explain why sales or revenue fell short of (or exceeded) the plan. What factors, internal or external, contributed to the difference? By how much? And, therefore, how do we plan better and allocate resources more intelligently going forward? Time series and smoothing methods cannot answer these sorts of questions. Business driver decomposition based on sophisticated marketing mix models is increasingly becoming a strategic tool for expert marketers to advance organizational thinking about short-term versus long-term growth accelerators in the business. Mix models allow controllable internal factors such as marketing, distribution, price, promotion and loyalty to be modeled concurrently with uncontrollable external factors such as seasonality, macro-economic trends and competitive activity.
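A simple way to see the decomposition idea is an ordinary least squares mix model in which each driver's contribution to fitted sales is its coefficient times its value, as in the sketch below. The drivers, data and coefficients are hypothetical, and a production mix model would add adstock, saturation and many more terms.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical weekly data for a simple mix model: sales explained by
# controllable drivers (media spend, discount depth) and external ones
# (a seasonality index and a macro-economic index).
rng = np.random.default_rng(2)
n = 156
df = pd.DataFrame({
    "media": rng.uniform(0, 100, n),
    "discount": rng.uniform(0, 0.3, n),
    "season": np.sin(2 * np.pi * np.arange(n) / 52),
    "macro": np.linspace(95, 105, n),
})
df["sales"] = (500 + 1.8 * df["media"] + 900 * df["discount"]
               + 60 * df["season"] + 4 * df["macro"] + rng.normal(0, 25, n))

X = sm.add_constant(df[["media", "discount", "season", "macro"]])
fit = sm.OLS(df["sales"], X).fit()

# Driver decomposition: each column's average contribution to fitted sales.
contrib = (X * fit.params).mean()
print(contrib.round(1))
```

The decomposed contributions are exactly what a chart like Figure 1 summarizes: how much of the volume came from base demand, from each controllable lever and from external conditions.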

Figure 1: Volume driver decomposition.

Incorporating a Bayesian component within this framework allows the model to be “evolutionary” in nature, i.e., it can easily incorporate new information as new data or new business learning is captured. Optimization processes can further ensure that the most stable model result is generated, incorporating the prior knowledge available at any given point in time. A robust implementation of such an approach enables business decision-makers to proactively run what-if scenario analyses (simulations) of possible outcomes under different combinations of business actions and external business environments.
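The sketch below illustrates the evolutionary idea in its simplest form: a Normal prior on a media-lift coefficient is updated with hypothetical new estimates each quarter, and the resulting posterior drives a what-if draw on incremental sales. The prior, the quarterly estimates and the conjugate Normal-Normal update are all simplifying assumptions, not the full mix-model machinery.

```python
import numpy as np

# Prior belief about lift in sales per $1k of media, and two quarters of
# new (hypothetical) regression evidence as (estimate, variance) pairs.
prior_mean, prior_var = 1.5, 0.5 ** 2
quarterly_estimates = [(1.9, 0.4 ** 2), (1.6, 0.3 ** 2)]

mean, var = prior_mean, prior_var
for est, est_var in quarterly_estimates:
    # Normal-Normal conjugate update (observation variance treated as known).
    post_var = 1 / (1 / var + 1 / est_var)
    mean = post_var * (mean / var + est / est_var)
    var = post_var

# What-if scenario: distribution of incremental sales from an extra $100k of media.
rng = np.random.default_rng(3)
lift_draws = rng.normal(mean, np.sqrt(var), 10_000)
incremental = 100 * lift_draws
print(f"posterior lift: {mean:.2f} +/- {np.sqrt(var):.2f}")
print(f"expected incremental sales: {incremental.mean():.0f} "
      f"(5th percentile {np.percentile(incremental, 5):.0f})")
```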

2. Micro market planning and geo-spatial analysis. Borrowing fundamental scoring principles from consumer credit risk applications, micro market planning is a methodology for forecasting demand potential based on consumer geo-demographics, but at an aggregated unit of geography such as the census block group or ZIP+4. In developed markets, census bureaus typically make detailed socio-demographic profiling data available at the micro-geographic unit level. Other commercially available economic data in these markets, such as aggregated credit profiles and automobile ownership data, can be used in combination to predict local consumer demand and market potential.

Micro market planning models can be used to improve brand performance (share capture) at a local market level by optimizing distribution, pricing and marketing tactics. The framework involves:

  • Understanding population characteristics that fundamentally drive demand for a brand,
  • Objectively scoring the entire universe of micro-geographies in a country for potential revenue opportunity (demand volumetrics),
  • Establishing how effectively the brand is currently capturing local market potential and if there is opportunity to improve resource allocation,
  • Mapping the competitive landscape in each micro-market and assessing the strength of the brand’s relative value proposition,
  • Identifying pockets to open new stores, activate localized pricing, improve grass-roots marketing actions, manage local retail outlet performance, etc.

In developed economies such as the United States, the United Kingdom and France, the availability of rich, detailed demographic and socio-economic information at very refined geographic levels makes the rollout of micro market planning analytics relatively easy and very successful. In emerging markets such as India, however, adoption faces many technical obstacles, as well as some cultural challenges. The availability of actionable external demographic data is often limited, and acquiring the data cost-effectively in a user-friendly form is difficult. Even more challenging is accurately geo-coding (rooftop address-based latitude/longitude) distribution locations and customer addresses given the underdeveloped postal systems in many parts of the world. Cultural bias ranges from a simple lack of trust in statistical models to outright rejection of any decision-science-based approach in favor of pure gut feeling. Nevertheless, this type of analytics has great potential to become more broadly used in the near future.
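As a minimal sketch of the scoring step, the example below fits a demand model on block groups where the brand already has observed revenue and then scores every geography for potential. The demographic features, the linear model and all the numbers are illustrative assumptions rather than a recommended specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical block-group data: demographics for every geography, plus
# observed revenue only where the brand already trades.
rng = np.random.default_rng(4)
n = 2000
geo = pd.DataFrame({
    "households": rng.integers(200, 1500, n),
    "median_income": rng.normal(60_000, 15_000, n),
    "pct_age_25_44": rng.uniform(0.1, 0.5, n),
})
observed = rng.random(n) < 0.3          # block groups with stores today
geo["revenue"] = np.nan
geo.loc[observed, "revenue"] = (
    0.04 * geo.loc[observed, "households"] * geo.loc[observed, "pct_age_25_44"]
    + 0.0005 * geo.loc[observed, "median_income"]
    + rng.normal(0, 5, observed.sum())
)

# Fit on traded geographies, then score the entire universe for potential.
features = ["households", "median_income", "pct_age_25_44"]
model = LinearRegression().fit(geo.loc[observed, features],
                               geo.loc[observed, "revenue"])
geo["potential"] = model.predict(geo[features])

# Highest-potential micro-markets, traded or not.
print(geo.nlargest(5, "potential")[features + ["potential"]])
```

Comparing the potential score with actual local share is what identifies pockets for new stores, localized pricing or grass-roots marketing.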

Figure 2: Micro market planning.

Our own experience introducing this approach in the retail marketing arena shows that once you get past initial resistance in the field (often simply due to unfamiliarity with this type of scoring approach) and ensure that senior executives understand the robustness of the underlying methodology, micro market planning can become a strategic tool for optimizing a series of local market decisions, creating significant top-line growth. Great examples of such applications in retail include improving capital-intensive decisions around store openings and closures and optimizing the targeting of ads and flyers in field marketing.

3. Pricing optimization. The concept of price elasticity is certainly not new to marketers. However, beyond its conceptual application in pricing discussions, the development of analytic consumer elasticity estimates is not widely practiced in the retail marketing arena. The main obstacle is that developing such a model requires many price point changes within a carefully designed experiment. A reasonably robust price elasticity model also needs data on competition and customer disposable income, both of which are hard to come by. However, it is still possible to develop price elasticity models over limited price ranges and narrow customer segments to optimize decisions.

Companies can expect to increase the ROI on pricing decisions and substantially improve margins, provided they are willing to invest time and resources in building the foundational analytic infrastructure needed to develop and continuously refine consumer elasticity models. A good starting point would be creating a historical database with detailed information on all past pricing actions. All relevant data pertaining to a price change such as duration (temporary vs. permanent), geography (national vs. regional), product (core vs. substitute), discount (promotion vs. base price), extent of price change (percent drop), etc. would be key inputs to the elasticity model.
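Given such a database, a common starting point is a log-log demand regression in which the coefficient on log price is the own-price elasticity. The sketch below illustrates this on simulated data; the variables, functional form and competitor index are assumptions, and a real model would also control for the duration, geography, product and discount factors listed above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical history of past pricing actions: weekly own price, a
# competitor price index and units sold.
rng = np.random.default_rng(5)
n = 200
price = rng.uniform(8, 12, n)
comp_price = rng.uniform(8, 12, n)
units = np.exp(6 - 1.8 * np.log(price) + 0.7 * np.log(comp_price)
               + rng.normal(0, 0.1, n))

# Log-log regression: the log-price coefficient is the own-price elasticity.
X = sm.add_constant(pd.DataFrame({
    "log_price": np.log(price),
    "log_comp_price": np.log(comp_price),
}))
fit = sm.OLS(np.log(units), X).fit()

elasticity = fit.params["log_price"]   # close to -1.8 in this toy data
print(f"estimated own-price elasticity: {elasticity:.2f}")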

Figure 3: Simulation of possible outcomes from pricing.

Information on market conditions, such as the overall trajectory of the economy and the trend in relative share, would need to be well understood and accounted for in the price elasticity model. Competitive intelligence related to competitor prices, marketing activities, new entrants and emerging alternatives will also affect consumer response and therefore needs to be introduced into the model. Armed with the right set of data, specialized software packages can test a series of econometric models that best capture the time and cross-sectional effects of a price change while controlling for other intervening factors. Deploying a simulation tool that leverages these complex elasticity estimates in the background helps company executives and line managers perform simple what-if scenario analyses to determine the best course of action for a pricing decision. In many instances, a range of possible outcomes exists: we could grow volume, increase revenue or try to do both. An optimized price recommendation can be derived from the simulation based on observed or expected consumer elasticity and real-world business constraints.
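A bare-bones version of such a what-if simulation is sketched below: it takes an assumed constant elasticity and projects volume and revenue across candidate price changes. The elasticity value, base price and base volume are illustrative assumptions, and a production tool would layer in costs, competitive response and uncertainty ranges.

```python
# What-if pricing scenarios using an assumed own-price elasticity of -1.8.
elasticity = -1.8
base_price, base_volume = 10.0, 100_000

for pct_change in (-0.10, -0.05, 0.00, 0.05, 0.10):
    new_price = base_price * (1 + pct_change)
    # Constant-elasticity demand response to the price change.
    new_volume = base_volume * (1 + pct_change) ** elasticity
    new_revenue = new_price * new_volume
    print(f"price {pct_change:+.0%}: volume {new_volume:,.0f}, "
          f"revenue {new_revenue:,.0f}")
```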

In our experience, getting pricing optimization right can easily yield a 50-100 basis point improvement in margins, especially in verticals such as retail that use high-low promotional pricing models.

4. Experimental designs. Without an effective learning and feedback mechanism, the basis for decision-making regresses to intuition. Reliable small-scale, in-market testing processes can help companies avoid costly mistakes and prevent strategic blunders in full-scale rollouts. With the help of robust experimental designs, company executives can confidently measure the net impact of changing different business levers concurrently. As data grows exponentially with technological advances and competition gets fiercer with globalization, a sophisticated “test & learn” capability is a must-have for companies seeking to gain competitive advantage from analytics in the long run.

No single software package currently covers all types of tests, and in most cases broader knowledge of statistics is still required for effective analysis and accurate interpretation. This is particularly true for tests run at a geo level rather than at the customer or account level. Geo-level tests use geographical areas or retail stores as the unit of analysis instead of individual accounts or customers, which means their sample sizes are generally much smaller than those of account- or customer-level tests. In addition, geographical differences and dynamic localized competition further complicate impact measurement in geo-level tests.

Figure 4: In-market test results.

In many instances, business leaders focused on the need for speed test new ideas in selected geographies without setting up a robust test design (holdout group) upfront, and statisticians are then asked to measure the impact after the fact. The challenge in all these situations is: How do you separate treatment effects (the impact of the specific business initiative) from other uncontrollable effects? How do you confidently separate signal from noise and make an informed judgment?

Getting a stable and comparable baseline so that true treatment effects can be measured accurately and reliably is certainly not a trivial job. Theoretically, if we can find a baseline for each test unit that is highly positively correlated with it, we can use a paired-sample t-test instead of a two-independent-sample t-test, because the t-statistic from paired samples is greater than the t-statistic from two independent samples. Simulation results show that a paired-sample t-test with a well-constructed baseline has higher power and smaller Type I and Type II errors. Of course, a tremendous amount of empirical research and simulation is required to find the baselining rules that best fit your industry and brand situation.
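The sketch below reproduces the flavor of that simulation argument: on synthetic geo-level data where each test unit has a highly correlated baseline, a paired t-test detects the same lift far more often than an independent two-sample t-test. The lift, noise levels, sample size and significance threshold are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Each test geography has a well-matched baseline series; the treatment adds
# a fixed lift plus noise that is small relative to cross-geo variation.
rng = np.random.default_rng(6)
lift, n_units, n_sims = 2.0, 30, 2000
paired_hits = indep_hits = 0

for _ in range(n_sims):
    baseline = rng.normal(100, 10, n_units)              # matched control geos
    test = baseline + lift + rng.normal(0, 3, n_units)   # correlated test geos
    paired_hits += stats.ttest_rel(test, baseline).pvalue < 0.05
    indep_hits += stats.ttest_ind(test, baseline).pvalue < 0.05

print(f"power, paired t-test:      {paired_hits / n_sims:.2f}")
print(f"power, independent t-test: {indep_hits / n_sims:.2f}")
```

With the baselines removed from the comparison, the paired test isolates the treatment effect from geography-to-geography differences, which is exactly why baselining rules matter so much for geo-level tests.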

Not surprisingly, companies that have institutionalized test & learn processes generally outperform their peers.

Conclusions

The sophistication of analytics and its deeper integration with business operations in consumer financial services are attributable to the successful application of credit-scoring techniques in managing customer credit lifecycles and to the availability of robust behavioral data through credit bureaus. The consumer credit industry also benefited from the fact that credit risk management dominates the profitability of the industry and that most decisions are structured at the customer level, and therefore lend themselves very well to analytical approaches. In contrast, business problems in consumer retail marketing are less structured, often diverse and involve units of analysis much larger than individual customers. These problems require innovative approaches and out-of-the-box creativity to solve. A lack of actionable data for many retail marketing problems has been a deterrent in the past, and the scarcity of sophisticated analytic talent in the marketing function has also contributed to the relatively slow progress over the years.

But all of that is beginning to change. Senior executives in retail marketing are beginning to embrace advanced decision sciences as a potential competitive advantage and differentiator in the marketplace. As seasoned practitioners of analytics, we are also beginning to see, and expect, a fundamental shift in the availability of and access to consumer marketing data with the growth of the web and e-commerce globally.

Ming Zhang, Ph.D. (Ming.Zhang@westernunion.com), is director of Customer Analytics at Western Union, specializing in customer behavior analytics and scorecard development. Clay Duan is director of Customer Analytics at Western Union, overseeing modeling analytics in the areas of network optimization, marketing, pricing and business DoE. Arun Muthupalaniappan is senior vice president at Western Union, responsible for its Global Business Intelligence, Customer Strategy & Analytics function.
