
Analytics Magazine

Explaining ‘How to Measure Anything: Finding the Value of ‘Intangibles’ in Business’

May/June 2014

Latest edition of book takes another look at seven arguments, new and old.


By Douglas W. Hubbard and Douglas A. Samuelson

Analytics professionals and decision-makers are often stymied by the lack of good metrics on which to base decisions. But everything that matters has observable consequences and, with a bit of (often trivial) math, these observations provide the grounds for reducing uncertainty. Even imperfect information has a computable value for decisions.

These ideas were summarized in the book “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business” [Hubbard, 2007, 2010, 2014], written by one of the authors of this article. With 65,000 copies of the book sold in five languages, the message seems to strike a chord. The client list of the author’s firm, Hubbard Decision Research (HDR), and the thousands of individuals who have registered on the book’s website indicate a diverse audience. They include engineers, human resources professionals, software developers, information-security specialists, scientists from many fields, managers in many industries, actuaries and teachers. It appears that the challenge of measuring things that – at first – appear to be “intangible” is common among analysts and managers in organizations of all types.

A third edition has just been released, with an accompanying workbook to facilitate classroom teaching and self-study. The third edition also allowed the author to include cases from new clients and to respond to the most common challenges sent in by readers in the seven years since the first edition. As in the two earlier editions, readers learn how to frame the measurement problem and how to avoid measuring the wrong things, and they see the value of relying on their quantitative models over pure intuition.

However, even for the most fervent advocates of quantitative methods among our clients and readers, we find that they can easily be bogged down by some of the same obstacles as the skeptics of quantitative methods. Even though we make what seems to us to be a strong argument for the correct way to approach these issues and even though clients say they “conceptually” agree with the argument, they sometimes still seem to repeat, unknowingly, some of the same errors. What follows are seven areas where we build on the message of the previous edition by adding new cases, new research and new responses to the challenges we continue to observe among our readers and clients.

1. It’s still true, anything can be measured.
We haven’t found a real “immeasurable” yet, although many things initially appear to be. In the past several years, HDR has developed measures of the risk of a mine flooding, drought resilience in the Horn of Africa, the market for new laboratory devices, the risks of cyberattacks and the value of industry standards, to name a few. The other author of this article (Samuelson) measured the asset value of information technology [Samuelson, 2001] and the value of deterrence in security situations [unpublished]. In each of these cases something was perceived to be virtually impossible to measure, and yet the authors were able to show that we can use informative observations and simple, established mathematical approaches to reduce uncertainty enough to make decisions. As in earlier editions, the book explains the three reasons anything is ever perceived to be immeasurable, and why all three are mistaken.

2. Do the math.
A key point in every edition of the book is that we measure to feed quantitative decision models, and that even naïve quantitative models easily outperform human experts in a variety of estimation and decision problems. In a meta-study of 150 studies comparing expert judgment to statistical models, the models clearly outperformed the experts in 144 of the cases [Meehl, 1986]. More and more research confirms this. The third edition adds the findings of Philip Tetlock’s massive undertaking to track more than 82,000 forecasts from 284 experts over a 20-year period. From this, Tetlock could confidently state, “It is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones” [Tetlock, 2006]. The book reviews additional research to show that, unless we do the math, most people, even statistically trained experts, are susceptible to common inference errors.
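
To make that kind of comparison concrete, here is a minimal sketch (not the methodology of the cited studies) of how probability forecasts can be scored against outcomes with a Brier score, where lower is better. The events, the expert’s forecasts and the “crude” base-rate model below are all hypothetical, chosen purely for illustration.

```python
# Minimal sketch: scoring hypothetical expert forecasts against a crude
# base-rate model with Brier scores (lower is better). All data are made up.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Hypothetical binary events (1 = event happened) and an expert's probabilities.
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
expert   = [0.9, 0.7, 0.6, 0.4, 0.8, 0.2, 0.5, 0.3, 0.6, 0.7]

# "Crude" model: always forecast an assumed historical base rate.
base_rate = 0.3
crude     = [base_rate] * len(outcomes)

print("Expert Brier score:     ", round(brier_score(expert, outcomes), 3))
print("Crude-model Brier score:", round(brier_score(crude, outcomes), 3))
```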

3. Just about everyone can be trained to assess odds like a pro.
Almost everyone can be trained to be an expert “probability estimator.” Building on the work of others in decision psychology [Lichtenstein and Fischhoff, 1980], HDR started providing “calibrated probability assessment” training in the mid-1990s. The third edition includes data from more than 900 people calibrated by HDR. The data consistently shows that virtually everyone is extremely overconfident before the training (e.g., over a large number of trials, when they say they are 90 percent certain, they may have less than a 60 percent chance of being correct). But HDR also found that about 80 percent of individuals can be trained to be nearly perfectly calibrated (they are right just as often as they expect to be). In other words, they can be trained in about half a day to be as good as a bookie at putting odds on uncertain events. This skill becomes critical in the process of quantifying someone’s current uncertainty about a decision.
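
For readers curious about what a calibration check involves, the sketch below (using made-up trial data rather than HDR’s actual training records) groups a person’s answers by stated confidence and compares each group’s stated probability with the fraction of answers that turned out to be correct.

```python
from collections import defaultdict

def calibration_report(stated_probs, correct):
    """Group trials by stated probability and compare with observed hit rates.

    stated_probs: probabilities a person assigned to their answers being right.
    correct: 1 if the answer was actually right, else 0.
    """
    buckets = defaultdict(list)
    for p, c in zip(stated_probs, correct):
        buckets[round(p, 1)].append(c)
    for p in sorted(buckets):
        hits = buckets[p]
        observed = sum(hits) / len(hits)
        print(f"stated {p:.0%}: observed {observed:.0%} correct over {len(hits)} trials")

# Hypothetical pre-training trials: "90 percent sure" answers right only ~60% of the time.
stated  = [0.9] * 10 + [0.7] * 10
correct = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1] + [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]

calibration_report(stated, correct)
```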

4. Calculating information values avoids “the measurement inversion.”
A defined decision should always be the objective of measurement. Uncertain variables in such a decision have a computable expected value of information (EVI); that is, what would it be worth to have less uncertainty about a given variable? When HDR compared the EVI to clients’ past measurement habits, what got measured and what most needed to be measured were, virtually always, very different things. As of the third edition, HDR has conducted more than 80 major decision analyses, and the results are consistent with earlier findings: This phenomenon appears to pervade every industry and profession from software development to pharmaceuticals, real estate to military logistics, and environmental policy to technology startups. It appears that the intuition managers follow to determine what to measure routinely leads them astray; they tend not to measure the very things for which they have the poorest information and would therefore benefit most from more data. Hubbard calls this practice “the measurement inversion,” and it appears that the best safeguard against this problem is simply to know the information values of uncertainties relevant to a decision.
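
As a rough illustration of the computation (a simplified sketch, not HDR’s actual procedure): for a single go/no-go decision, the expected value of perfect information (EVPI), an upper bound on the EVI of any measurement, is the expected opportunity loss of the best decision made under current uncertainty. The payoff distribution below is an arbitrary assumption chosen for the example.

```python
import random

random.seed(1)

# Hypothetical decision: launch a project whose net benefit is uncertain.
# Current uncertainty (assumed): net benefit ~ Normal(mean=$0.5M, sd=$2M);
# "don't launch" pays 0.
def simulate_net_benefit():
    return random.gauss(0.5e6, 2e6)

N = 100_000
draws = [simulate_net_benefit() for _ in range(N)]

# Best decision under current uncertainty: launch if expected value > 0.
ev_launch = sum(draws) / N
decide_launch = ev_launch > 0

# Opportunity loss of that decision in each simulated outcome:
# if we launch, we lose whenever the benefit turns out negative;
# if we don't, we forgo the benefit whenever it turns out positive.
if decide_launch:
    losses = [max(0.0, -b) for b in draws]
else:
    losses = [max(0.0, b) for b in draws]

evpi = sum(losses) / N  # expected value of perfect information
print(f"EV of launching: ${ev_launch:,.0f}")
print(f"EVPI (most any measurement could be worth): ${evpi:,.0f}")
```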

5. A philosophical dilemma: Does probability describe the object of observation or the observer?
When someone says, “but how do I know what the exact probability is?” they are implicitly adopting a particular definition of the word “probability.” Because the author observed that some readers struggled with this issue, the newest edition of “How to Measure Anything” expands on it. We generally take a Bayesian position on the interpretation of probability – that is, probability is used to quantify the uncertainty of an observer, not a state of the thing being observed. This stands in contrast to the “frequentist” point of view, which treats a probability as a kind of idealized frequency of occurrence in some objective system. Somewhat ironically, the validity of applying subjective probabilities to uncertain outcomes has been tested with frequentist methods. That is, extremely large trials have been conducted where individuals’ probabilities were compared to observed outcomes. As mentioned earlier, the authors and other researchers have verified that people who are trained as “calibrated probability assessors” can repeatedly assign probabilities that, after sufficient trials, align with the observed frequencies. Since probability is your state of uncertainty, and since you can be calibrated, you can always state a probability – in the Bayesian sense.
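
To make the Bayesian reading concrete, the following minimal sketch (an illustration we added, not an example from the book) shows an observer’s probability for an uncertain event rate being revised as evidence accumulates, using a standard Beta-Binomial update. The prior and the observations are assumptions for illustration only.

```python
# Minimal Bayesian-update sketch: the probability describes the observer's
# uncertainty and changes as the observer sees more data.

# Beta(a, b) prior on an unknown event rate; start fairly uninformed.
a, b = 1.0, 1.0

observations = [1, 0, 1, 1, 0, 1, 1, 1]  # hypothetical hits (1) and misses (0)

for i, x in enumerate(observations, start=1):
    a += x          # count of observed hits
    b += 1 - x      # count of observed misses
    posterior_mean = a / (a + b)
    print(f"after {i} observations, the observer's probability is {posterior_mean:.2f}")
```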

6. Statistical significance doesn’t mean what you think, and what it does mean you probably don’t need.
Another obstacle to useful measurement the author observed was a widely held, but very vague, understanding of the concept of “statistical significance.” The new edition addresses pervasive misunderstandings of this idea and then makes the case that, even when it is understood, it isn’t directly relevant to most real decision-making problems. The book contains examples in which just five sample observations, or in some cases even just one observation, substantially reduce uncertainty. We have had clients who looked at a small sample and – without attempting any math – questioned the statistical significance of the sample and the results. But sample size alone is not sufficient to determine statistical significance. Nor does statistical significance measure the chance that a claim is true, and falling short of it does not mean we have learned nothing. The newest edition argues that the entire concept is not necessary when even small reductions in uncertainty can have significant economic value.
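
One piece of arithmetic shows how far five observations can go: for any continuous population, the chance that all five randomly drawn values land on the same side of the population median is only 2 × 0.5^5 = 6.25 percent, so the range from the smallest to the largest of the five contains the median with about 93.75 percent probability. The short sketch below verifies that result by simulation; the skewed lognormal population is an arbitrary assumption on our part.

```python
import random

random.seed(2)

# Analytic result: P(population median lies between the min and max of 5 samples)
p_analytic = 1 - 2 * 0.5 ** 5
print(f"analytic:  {p_analytic:.4f}")          # 0.9375

# Simulation check against an arbitrary skewed population (assumption).
population_median = 1.0                        # median of lognormal(0, 1) is e^0 = 1
hits = 0
trials = 100_000
for _ in range(trials):
    sample = [random.lognormvariate(0, 1) for _ in range(5)]
    if min(sample) <= population_median <= max(sample):
        hits += 1
print(f"simulated: {hits / trials:.4f}")
```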

7. You need less data than you think, and you have more data than you think.
A client or reader who says, “I would like to measure this but we just don’t have enough data” is very likely making a series of erroneous assumptions. As in the previous point on statistical significance, managers may seriously underestimate how much uncertainty reduction they get from a small amount of data. In fact, we have never seen anyone who made this claim actually calculate the uncertainty reduction from a given set of data and compute its value to the decision in order to establish that the reduction was worthless. Managers also underestimate how much data they really have. One example of this, discussed in the third edition of the book, is the “uniqueness fallacy.” This is the tendency to believe that only highly similar, if not identical, examples are informative.

The latest edition includes cases where experts insisted that since each situation is unique, they could not extrapolate from historical data. Then – and without a hint of irony – they claimed that they must therefore rely on their experience. Of course, as the book argues, expertise and science are both based on past observations – one of them with much more selective recall and a greater tendency toward flawed inference than the other. Managers make just such a mistake whenever they say that they can’t make estimates about implementing a new technology because it is so unique – even though they have a long history of implementing new technologies. Using that same logic, your insurance company couldn’t possibly compute a life insurance premium for you because you’re unique and because you haven’t died yet. In fact, insurance actuaries know how to extrapolate from larger, more heterogeneous populations.

The third edition also expands on developments in how big data, social media, mobile phones and personal measurement devices are making the “we don’t have enough data” excuse much harder to justify.

Summary

You can, in fact, measure anything, in our view, but doing so is sometimes a challenge even for those who are convinced the claim is true. We simply need to recognize that the perceived challenge results from some of the same old, entrenched misconceptions. Your problem is most likely not as unusual as you think; there are sources of information you can use, if you think creatively about how to apply them; calibrated experts can make good estimates of their uncertainty about the data points they provide; and calculating the expected value of information can focus you on collecting the most useful additional data, not wasting effort and resources on data that won’t help much.


Douglas W. Hubbard (dwhubbard@hubbardresearch.com) is president of Hubbard Decision Research in Glen Ellyn, Ill., and an internationally recognized expert in measurement and decision analysis. Douglas A. Samuelson (samuelsondoug@yahoo.com), D.Sc., is president and chief scientist of InfoLogix, Inc., a consulting and R&D company in Annandale, Va., and a contributing editor of OR/MS Today and Analytics magazines. He is a longtime member of INFORMS.

Notes and References

  1. Douglas W. Hubbard, 2007, “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business,” Wiley; second edition, 2010; third edition, 2014.
  2. Sarah Lichtenstein and Baruch Fischhoff, 1980, “Training for Calibration,” Organizational Behavior and Human Performance, Vol. 26, No. 2, pp. 149-171.
  3. Paul Meehl, 1986, “Causes and Effects of My Disturbing Little Book,” Journal of Personality Assessment, Vol. 50, pp. 370-375.
  4. Douglas A. Samuelson, 2001, “Information Technology Benefits Assessment,” Encyclopedia of Operations Research and the Management Sciences, Second Edition, Springer. (A revised version also appears in the third edition, 2013.)
  5. Philip E. Tetlock, 2006, “Expert Political Judgment: How Good Is It? How Can We Know?” Princeton, N.J.: Princeton University Press.
