Making better, just-in-time decisions by combining human and computer intelligence.
By Russ Vane and Doug Samuelson
Decision-makers do not lack information, although they often complain that they lack the information they need. We are awash in data and have increasing trouble making sense of it. Artificial intelligence (AI) has been promising to help us for four decades. Computers are fast at processing large amounts of relevant data following human-developed rules, but they struggle to turn data into information and information into evidence that makes sense of our situation. What we need is intelligent augmentation (IA), combining humans and computers to make sense of what data we have by using the strengths of each.
As Karla Hoffman of George Mason University said in 1999, “Successful O.R. processes are being translated into software.” In this succinct statement she summarized how hard it is to sell better thinking alone or even better processes. This statement is even truer today with the proliferation of small, personal computing devices.
To forage effectively for data, one benefits by knowing in advance what one seeks to find. Thus, sense-making must precede information collection; otherwise the searcher tends to fail both by collecting too much and by searching too long. For this reason, simply gathering vast quantities of data is unpromising: exabyte repositories look like a dead-end technology. Instead, we recommend trading knowledge for search by setting up a filter that watches information pipes (or streams, as IBM calls them). This way, the computer’s filtering and scanning keeps the decision-maker up to date, in real time, on which of numerous proposed courses of action is most promising.
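The filtering idea can be sketched in a few lines of Python. This is a minimal illustration of trading knowledge for search, not IBM’s streams technology; the courses of action, keywords and evidence weights are all invented for the example.

```python
# Minimal sketch of a knowledge-driven filter: it watches a stream of
# incoming text items and keeps a running evidence score for each
# proposed course of action, so the best-supported one is always current.
def make_filter(relevance):
    """relevance: course-of-action -> {keyword: evidence weight}."""
    scores = {coa: 0.0 for coa in relevance}

    def ingest(item):
        text = item.lower()
        for coa, keywords in relevance.items():
            for word, weight in keywords.items():
                if word in text:
                    scores[coa] += weight
        # Courses of action, ranked by accumulated evidence so far.
        return sorted(scores, key=scores.get, reverse=True)

    return ingest

# Hypothetical courses of action and keyword weights.
ingest = make_filter({
    "expand_east": {"demand": 2.0, "tariff": -1.0},
    "hold_steady": {"recession": 1.5},
})
ranking = ingest("Analysts report rising demand despite a new tariff.")
```

A real filter would score on more than keyword matches, but the shape is the point: the human supplies the knowledge (what counts as evidence for which option) and the machine does the tireless watching.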
Furthermore, this is exactly the kind of technology that can reward startup companies in significant financial ways. Smart and wise people will be able to deliver niche applications for the financial, military and medical communities that pay for themselves in months, not years. Even venture capitalists would benefit from better models for assessing investments – models that extend the humans’ expertise rather than trying to replace it.
We present a simple model of sense-making, foraging and stewardship that will help us make rapid advances and engender substantial improvements in business decision-making.
Why IA Means Better Business
Some companies fire customers. Why? The short story is that such companies assess that the relationship is Win-Lose and that they are losing. A more comprehensive way to ask the question is: What higher-level patterns are operative, or is this decision actually a mistake wrapped in self-justification? Done right, such a decision rests on much more than that. The situation must indicate that other customers are available, that the company believes it has a quality advantage, and that this option, however negative, is actually better than the alternatives. Such assessments involve a lot of thinking and some pertinent information.
We must assess the quality of the relationship. In order to specify any improvement over things as they are, we must be able to envision something better, which is based on knowledge. From that we understand normal, exceptional or poor performance. We frequently even know explicit contexts where we suspend our standard assessments and invoke extraordinary ones.
The key factor in effective IA will be the decision-maker’s (your) genius. No one else is likely to have the same ideas or beliefs. So IAs are designed to make you better and to do the accounting cleanup. Your experience and relationships are precisely what you and your colleagues will use to provide the edge to your next business venture.
Furthermore, being able to wait to execute a decision is often an advantage: it denies competitors knowledge of your decisions and puts them under time pressure to respond.
Quantified Requirements for IA Entrepreneurs
A decent metaphor for great IA is that of a well-connected and wise confidante. Computers and software are not wise, but they may be programmed to emulate wisdom well enough, and that is what people will choose. To help specify and guide such projects, we start with two essential elements: quality and quantity.
A helpful first step is to identify the benefits desired from a product and then identify the qualities needed to engineer it well. Performance, trustworthiness, understandability and capacity-to-improve are all qualities of good employees and of effective intelligent augmentation.
Lord Kelvin said, “I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind…” Tom Gilb has written numerous articles and some books [1988, 2005] on practical ways to quantify quality, introducing and explaining the concept of quantified requirements (QRs). We believe this is an important first step to improving anything, but particularly IA.
We will use four qualities as an example of QRs; the reader may think of some that are better. We welcome comments.
First, as specified by Gilb, QRs have two levels of performance of interest: the “must” and “target” criteria. A “must” criterion is the least benefit a solution must provide to justify continued spending on product development. For example, an IA must not decrease the performance and value of its user’s decisions. A “target” criterion is the most benefit for which the product’s client will pay. For example, the client will not pay extra for voice recognition software that ingests more than 500 words per minute. We can therefore think of the relevant range for this requirement as a 100-point scale, with “must” at 0 and “target” at 100. Together, these criteria prevent waste and guide designs by encouraging each version of the product to deliver as much value as possible.
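As a worked illustration of the 100-point convention (our own sketch, not Gilb’s notation), a measured value can be placed on the must-to-target scale by linear interpolation:

```python
# Map a raw measurement onto the 0-100 QR scale, with the "must"
# criterion at 0 and the "target" criterion at 100.
def qr_score(measured, must, target):
    return 100.0 * (measured - must) / (target - must)

# Hypothetical ingest-rate requirement: must = 100 words per minute,
# target = 500 words per minute. A measured rate of 300 wpm lands
# midway between must and target on the scale.
halfway = qr_score(300, must=100, target=500)
```

A score below 0 signals a solution that fails the must criterion; a score above 100 represents benefit the client will not pay for.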
QRs can also reveal how much knowledge the design staff has. IA developers are encouraged to consider multiple solutions, rate them against the project’s QRs and make a rough cost estimate for each. An impact estimation table (see Figure 1) allows the CEO/CIO to select the preferred approach during the next in-progress review, based on his or her judgment about the future. Sometimes that choice may be financially driven, but often it is based on quality gaps in the current version of an IA project.
Figure 1: Impact estimation table.
*The system is very fast, but will slow down as it gains capabilities.
The impact estimation table in Figure 1 shows four notional QRs: Capacity-to-Learn, Trustworthiness, Understandability and Performance. These are listed in the left-most column; the next column always holds the test results of what has been delivered to date. Any other columns are designs that may guide future development. Design A is the project as specified; designs B and C are less costly or simply easier to build.
Capacity-to-Learn (sense-making from knowledge) – The flexibility of the informational models used to record knowledge “taught” by the user. Must is that we are able to model 20 percent of our beliefs about normal. Target is that we are able to model 60 percent of our beliefs about normal and extraordinary.
Trustworthiness (foraging for trusted information) – The degree to which people will agree to: “You are rarely surprised by sloppy information vetting.” Must is that three of five reviewers (using a five-point Likert scale) rate the model from 4 to 5 (Agree to Strongly Agree). Target is that four of five reviewers rate the model “strongly agree” on the same question.
Understandability (stewardship of IA users) – The amount of time that [a fraction] of users require to understand the information presented. Must is that 80 percent of users understand in less than 2 minutes. Target is that 98 percent of users understand in less than 5 seconds.
Performance – The inference lag associated with data ingest of 54 Mbps in seconds. Must is that 4 of 5 users are likely to be current to within ten minutes. Target is that 98 percent of users are likely to be current to within three minutes.
These QRs can be used to drive sense-making and information foraging, and to support stewardship of IA users. More qualities can be used to specify an exceptional product, but these serve as examples.
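To make the example concrete, the four QRs above can be encoded as must/target pairs and used to compare candidate designs, in the spirit of Figure 1. This sketch is ours, and every design value in it is invented for illustration.

```python
# The four example QRs as must/target pairs (units simplified: each
# is treated here as a single percentage or count, with the time
# thresholds from the text omitted for brevity).
QRS = {
    "capacity_to_learn": {"must": 20, "target": 60},  # % of beliefs modeled
    "trustworthiness":   {"must": 3,  "target": 4},   # reviewers of 5 rating 4-5
    "understandability": {"must": 80, "target": 98},  # % of users who understand
    "performance":       {"must": 80, "target": 98},  # % of users kept current
}

def impact(design):
    """Average 0-100 score of a design's estimated QR values."""
    total = 0.0
    for name, qr in QRS.items():
        must, target = qr["must"], qr["target"]
        total += 100.0 * (design[name] - must) / (target - must)
    return total / len(QRS)

# Two hypothetical candidate designs with estimated QR values.
designs = {
    "A": {"capacity_to_learn": 50, "trustworthiness": 4,
          "understandability": 95, "performance": 90},
    "B": {"capacity_to_learn": 30, "trustworthiness": 3,
          "understandability": 85, "performance": 85},
}
best = max(designs, key=lambda d: impact(designs[d]))
```

A real impact estimation table would also carry cost estimates and uncertainty per cell; the point here is only that once QRs are quantified, comparing designs becomes routine arithmetic rather than argument.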
How to Use Your IA
We now turn to some ideas on the natural flow of user guidance to the IA to perform tailored sense-making, its sensing/foraging mode, and its presentation of information to (or stewardship of) the user.
Arthur Farrington, a Northern Virginia entrepreneur and ex-Air Force fighter pilot, says, “Nothing is anything until you make it something.” As information scientists have explored the boundaries between information and knowledge, “knowledge” captured in a machine has remained elusive. The authors’ premise is that humans are essential for any system to have knowledge. For example, U.S. intelligence community research in recent years has stressed sense-making as a preparatory step.
Psychologists have documented the difficulty humans have in sensing almost anything they don’t understand. Frequently, such sensings (a house sound, a dog’s bark, etc.) are attributed to random causes, while to the alert or paranoid human the same sounds may be cause for immediate action. This is even truer of computers – just think of the great usability divide between structured and unstructured data. Consider how we address these issues.
Information is not knowledge, and sense-making requires exploring the information’s valid and erroneous contexts and protecting the user from misuse. Even more so, we need to be able to snap into place inhibitory circuits to interrupt processing when the reasoning is overcome by events.
To bootstrap early IA devices we will probably need to specify knowledge emulation modes – mathematical, typological, linguistic – in ways that have not yet been invented. Knowledge elicitation, however, has advanced significantly during the last 10 years through the application of wargaming techniques [Herman 2009] and hypergame theory [Vane 2001].
Or you may just want to adopt the favorite IA applications of a colleague, mentor or boss.
Foraging for Information
We do not recommend looking for data first, which is why this aspect appears second. Each person’s perception of our shared universe is different, so even the data we will accept as evidence differs. This suggests that most of us will need tailored foraging approaches.
The factory model of information collection isn’t quite dead yet, but it will not serve the consumer of the future. In fact, what is delivered is unlikely to be data at all, but information moving up the context ladder toward immediate, contextually sensitive, relevant information about the aspects of life that we value.
And we’ll need a way to compare sources to address conflicts among them. Deductive “accounting” for data sources, assessment of veracity and source verification is absolutely boring; it is not mindless, however, and requires effort. Arguments that devolve away from good-faith negotiation are observed too frequently.
Stewardship of IA Users
IA raises new issues as it promises to resolve current problems. As processors decrease in power consumption and increase in speed, are we headed toward a technically supported elite segment of society that establishes its preeminence by the information that it has? Are we already there? What is the effect of cloud computing; does it make vast computing power available to many at the cost of making most people’s information increasingly vulnerable to the actions of a few?
People communicate using information – symbols – not mind-to-mind transfers. And computing networks increase the speed of transfer of information. But we need better. Speed is not the only consideration.
Examples abound, usually under the guise of visualization. But visualization is about communicating to the eyes – not the mind. We need processing oriented to creating and maintaining a theory of mind – just as humans do – to best serve IA users. For example, analysts often present information to military people in geo-spatial form, because those decision-makers’ learned senses of power, space and timing allow them to grasp much more, such as probability of success, supply rates and the morale of combatants. What counts is not volume but verisimilitude. We can show less, but keep it “real” and communicate more.
For biologists, chemists and physicists, it may be virtual experiments, such as those described by Dr. John Bodnar of Stevenson University, who points out that modeling far beyond the thought experiments of the 1910s will help the scientific community shed broken, too-simple models of biology (and even space-time). It is too hard to hold more than a few ideas in play at the same time, but IAs can support such reasoning and require that the models obey entropy, enthalpy, photon energy transfer and other physical laws that we frequently rediscover only by conducting actual experiments at more than a thousand times the cost.
For instance, any of the electronic book systems can support many features that make using a Bible better: search, multiple versions, embedded notes and so on. But a theologian friend is even better. She remembers Scripture as you discuss it and advances (or challenges) your theses with relevant, helpful information that serves to construct your conclusions. So should IA.
In 2050, the question won’t be, “Are you using intelligent augmentation?” but whether you are using the latest model from the three companies that will be vying for supremacy in the consolidating sector. As we foresee it, there will be 30 such companies in 2020 and at least 100 in 2030. Analysts will do well to advance IA and become facile with it, and decision-makers will do well to learn how to use it and benefit from it.
Russ Vane (email@example.com) is a managing consultant in IBM’s Advanced Analytics and Optimization practice. He has spent four years designing and facilitating wargames for the U.S. government to counter explosives aimed at U.S. service members and coalition partners worldwide. His research passions are hypergame theory, specifying systems and advancing intelligent augmentation. Douglas A. Samuelson (firstname.lastname@example.org) is president and chief scientist of InfoLogix, Inc., a consulting and R&D firm in Annandale, Va. He is a frequent contributor to Analytics and OR/MS Today magazines.
- Bodnar, J.W., Vane, R., Rogers, B.S., 2009, “Hypothesis-Testing in a Biological Cartographic Virtual World,” in “Working Through Synthetic Worlds,” edited by C.A.P. Smith, Jeffrey G. Morrison and Kenneth W. Kisiel, (December 2009), ISBN: 978-0-7546-7712-3.
- Bodnar, J.W., 2003-2010, personal communication.
- Farrington, A., 2006, personal communication.
- Gilb, T., 1988, “Principles of Software Engineering Management,” Addison Wesley.
- Gilb, T., 2005, “Competitive Engineering: A Handbook For Systems Engineering, Requirements Engineering and Software Engineering Using Planguage,” Elsevier.
- Herman, M.L., Frost, M.D., Kurz, R., 2009, “Wargaming for Leaders: Strategic Decision Making from the Battlefield to the Boardroom,” McGraw Hill.
- Kelvin, Lord, 1883, “Electrical Units of Measurement,” delivered 1883-05-03, Popular Lectures and Addresses (1891-1894, three volumes) Vol. 1.
- Vane, R.R., 2001, “Using Hypergames to Select Plans in Competitive Environments,” George Mason University doctoral dissertation.