
Analytics Magazine

Corporate Profile: Continuous improvement based on valid measurements

July/August 2011


Analytics helps education organization Kaplan go to the head of the class.

By Bror Saxberg and David Niemi

“Continuous improvement” is a commitment to evidence-based improvement, but it is at least inefficient, if not risky, to make decisions based on inaccurate evidence. For important decisions, it is worth some investment in understanding the validity of the evidence you are using: how accurate and reliable that evidence, and particularly its metrics, is for the decision you are making.

To illustrate, we draw on examples from Kaplan, Inc., a worldwide educational organization serving more than a million students per year at all stages of life and at many levels of learning complexity, from middle-school math tutoring to law degrees, MBAs and more.

Complex, sophisticated analyses are readily available through the Web, through open-source tools like R, or through packaged software. Analyses drive decisions, but too often the quality of the data that go into those analyses goes unchecked.

In our experience, many users assume quantitative metrics or data are valid simply because they are quantitative: Objective-looking numbers inspire confidence in many managers, and the more numbers the better; indeed, the more complex the analysis, the more credible the conclusion appears. This is a mistake. The mere existence of data, even an immense amount of data, says nothing about whether it can be used with confidence for important decisions. The essence of validation is collecting additional evidence to show that inferences, decisions and actions based on the data or metrics reflect reality.

The good news is that there are long-standing, extensively tested practices and principles of measurement validation to draw on. Many were developed in complex scientific domains, where validating evidence guards against faddish, capricious, popular or simply wrong decisions based on deficient information.

Learning Innovations

To accelerate evidence-based continuous improvement efforts worldwide, Kaplan has set up a Learning Innovations group (KLI) with expertise in evaluation, measurement, instructional design, training and analysis of expert performance. KLI began by identifying a set of key quality indicators and the performance metrics linked to them. For some of those metrics, such as enrollment numbers, withdrawal rates and graduation rates, the initial validation effort was simply to check that the numbers are right. Limited validation resources could then be concentrated on key performance metrics that are more challenging to validate, including: learning; the effectiveness of instruction, curricula and other program components; student satisfaction; employer satisfaction; and career success. We have started with learning metrics, which are difficult to design and validate, but we expect the validation strategies we developed to apply to many important decision domains and the data that drive them.
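To make that first kind of check concrete, here is a minimal sketch of what verifying that “the numbers are right” might involve for enrollment-style metrics. The function, field names and counts are our illustration, not KLI’s actual pipeline.

# A minimal sketch of sanity checks on enrollment-style metrics; all
# names and values here are hypothetical illustrations.
def check_program_metrics(enrolled, withdrawn, graduated):
    """Return (withdrawal_rate, graduation_rate), failing loudly on impossible counts."""
    assert min(enrolled, withdrawn, graduated) >= 0, "counts cannot be negative"
    assert withdrawn + graduated <= enrolled, "outcomes exceed enrollment"
    withdrawal_rate = withdrawn / enrolled if enrolled else 0.0
    graduation_rate = graduated / enrolled if enrolled else 0.0
    return withdrawal_rate, graduation_rate

# Example: a hypothetical cohort of 250 enrollments, 30 withdrawals,
# 180 graduates so far.
print(check_program_metrics(enrolled=250, withdrawn=30, graduated=180))

Checks this simple catch a surprising share of upstream data errors: rates outside [0, 1] or outcomes exceeding enrollments signal a pipeline problem, not a real phenomenon.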


About Kaplan, Inc.
Kaplan, Inc. is a leading international provider of educational and career services for individuals, schools and businesses. Kaplan serves students of all ages through a wide array of offerings including higher education, test preparation, professional training and programs for kids in grades K through 12. Kaplan is a subsidiary of The Washington Post Company and its largest division. For more information, visit http://www.kaplan.com.

A prime example of this initial work is the validation of learning metrics undertaken by Kaplan University (KU), one of Kaplan’s major business units. Unlike many higher education institutions, KU has been working for some time on measures of learning that focus systematically on mastery of learning objectives across its 1,000 or so course offerings. KU’s group of assessment professionals, the Office of Instructional Effectiveness, determined early on that typical grades are noisy and incomplete measures of mastery: It is not clear which objectives different students with a “B” grade have mastered, nor whether those students were graded in comparison to each other rather than on actual mastery of a certain number of objectives. The goal then became to have a measure of learning that was as objective as possible, and to gather evidence about whether the measure had sufficient validity to justify its use in evaluating and improving teaching, the design and implementation of curriculum and instruction, and student learning itself. A more valid learning metric could also be a great help in communicating with stakeholders: learners, faculty, regulators and prospective employers.

To address this need, Kaplan University staff developed and administered a system of Course Level Assessments (CLAs) across 1,000 courses. Typically, there are four to six CLAs in a course, each focused on an important course learning objective, such as “Discuss typical neurobiological and behavioral responses to stress and their implications for physical and mental functioning.” Some CLAs are performance assessments, involving, for example, student analysis of a complex problem or realistic situation; scoring rubrics were developed to assist teachers in evaluating these performances. In other cases, CLAs consist of easily scored multiple-choice quizzes or tests.

In all cases, however, there has been a concerted and sustained effort to validate the information obtained from the CLAs. Assessment experts reviewed the quality of the CLA tasks and the rubrics, as well as the alignment of the rubrics with the course learning objectives they were intended to assess. The results of these expert reviews are currently driving the refinement of the assessment tasks and rubrics.

Empirical Analysis

Empirical analysis is another critical part of any validation effort. We have investigated, for instance, frequency distributions of CLA scores and the patterns of correlation between CLA scores and other important metrics such as grades, teaching evaluations, and student retention, failure and withdrawal rates. The results have prompted further investigations and efforts to revise tasks or rubrics. As a side benefit, we have also found that fluctuations in grades and CLA scores may enable us to better predict which students are likely to fail or leave programs, and to intervene to help them.
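As a rough illustration of these empirical checks (the records, column names and values below are invented, not Kaplan’s data), score distributions and correlations can be examined with a few lines of Python:

# Hypothetical student-course records: cla_mean is a mean rubric score
# on a 0-4 scale, final_grade is on a 4-point scale, and retained is 1
# if the student stayed in the program.
import pandas as pd

records = pd.DataFrame({
    "cla_mean":    [3.2, 2.1, 3.8, 1.5, 2.9, 3.5],
    "final_grade": [3.0, 2.3, 4.0, 1.0, 2.7, 3.3],
    "retained":    [1, 1, 1, 0, 1, 1],
})

# Frequency distribution: an early check is simply whether CLA scores
# spread out sensibly rather than piling up at one value.
print(records["cla_mean"].describe())

# Correlations between CLA scores and other metrics; weak or negative
# correlations flag tasks or rubrics worth revisiting.
print(records.corr(numeric_only=True)["cla_mean"])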

Finally, since teacher scoring of CLAs is crucial, we have designed a series of studies of the reliability of that scoring. Discovering issues here can lead to improvements in scoring rubrics and how we train teachers. Another bonus will be that students get more consistent and valid feedback on their learning, which should improve their rate of learning (as extensive research has demonstrated).
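As one sketch of such a reliability study (the scores are invented, and Cohen’s kappa is a standard agreement statistic rather than necessarily the one Kaplan used), two teachers’ rubric scores for the same set of CLA submissions can be compared:

# Two teachers score the same eight CLA submissions on a 0-4 rubric;
# the scores are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = [4, 3, 3, 2, 4, 1, 3, 2]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3]

# Quadratic weighting credits near-misses (a 3 vs. a 2) more than distant
# disagreements, which suits ordinal rubric scales. Low values would point
# to rubric revisions or additional scorer training.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Quadratic-weighted kappa: {kappa:.2f}")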

Valid measures of learning are critical to Kaplan’s students and the whole enterprise, even though learning is not easy to measure well. Indeed, reliable learning data gives us a tool to make decisions about new approaches to learning, including multimedia, new hardware platforms and other online and offline techniques.

We expect in the near future to tackle validations of other critical metrics, determining in each case what the right levels and types of evidence are. We look forward to benefits (both expected and unexpected) as this work allows us to make better decisions to bring value to learners and our business.

Bror Saxberg (http://www.kaplan.com/brorsblog) is the chief learning officer and David Niemi is vice president, measurement & evaluation, at Kaplan, Inc.

