

Analytics Magazine

Forum: Anxiety over artificial intelligence

Gartner predicts that by 2022, one in five workers engaged in mostly non-routine tasks will rely on AI to do a job. Photo courtesy of © abidal


Now is the time to address misunderstandings, before it’s too late.

By Joseph Byrum

Operations research (O.R.) has had it easy. From its earliest days, the discipline harnessed the incredible power of mathematical algorithms to drive efficiency without drawing undue public attention. Militaries have relied on O.R. ever since World War I to move supplies more efficiently or to ensure strike missions reach their targets. In business, the Franz Edelman Award finalist projects alone have brought home a tidy sum of $250 billion in cost savings.

Despite these notable achievements, good luck finding a random person on the street able to explain what O.R. stands for. The same can’t be said for artificial intelligence. “AI” is among the most recognizable set of initials today.

One reason for the notoriety is widespread media coverage. AI is a novelty with a lot of promise, and that makes it the subject of intense speculation. The McKinsey Global Institute called AI the next digital frontier. With up to $30 billion invested annually in AI, companies are placing a significant bet on AI’s ability to improve R&D, create more accurate forecasts, drive productivity and customize the whole experience for the end user.

So far, so good. But not all of the attention is quite so positive.

Hollywood has cast AI in central roles for decades – usually as the bad guy. In 1968, theater audiences watched in amazement as the HAL 9000 supercomputer disobeyed, and ultimately tried to kill, all of its human masters. The 1983 film “WarGames” kicked it up a notch with an autonomous military computer system itching to trigger World War III with no more remorse than it would take to make the opening move in a game of chess. In the following year, “The Terminator” kicked off a massively successful movie franchise featuring Skynet, an all-powerful AI system so committed to wiping out humanity that it sent robots through time to ensure anyone who happened to get in its way would never have a chance to exist.

All of this is sheer fantasy, but sometimes that fantasy ends up shaping reality.

AI in its Infancy

Today’s AI systems are very much in their infancy. We’re nowhere near the point where machine learning algorithms could become self-aware, much less develop an unrelenting grudge against mankind. Even the most sophisticated military AI projects only scratch the surface of what’s theoretically possible.

Because AI is so new, this is our one and only opportunity to better explain what it means to a skeptical public before science fiction screenwriters and celebrities completely drown out the conversation with compelling, but not exactly accurate, stories.

Analytics experts can play an important role in providing needed clarity. The use of mathematical algorithms to solve business problems and make better decisions through optimization creates a kinship between AI and O.R.

AI is a system in which algorithms, or computer programs, make choices. AI differs from traditional analytics in that its choices are made in response to some form of learning – a process by which the program alters itself in response to a changing environment. This “learning” capability is one distinctive trait of AI, made possible by techniques such as modeling artificial neural networks after the human brain itself [1]. AI involves a certain amount of autonomy.

Traditional O.R. applications are created by data scientists and mathematicians who craft custom algorithms to optimize the task at hand. AI goes to the next level by optimizing the algorithm creation process itself, greatly reducing the need for human intervention. It’s O.R. on steroids.
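The contrast between a hand-crafted rule and a program that learns can be sketched in a few lines of Python. This is a toy illustration, not anything from the article: the action names, reward probabilities and epsilon value are all invented. The fixed rule behaves the same forever, while the epsilon-greedy agent alters its own value estimates in response to feedback from a simulated environment – the “learning” trait described above.

```python
import random

def fixed_rule(demand):
    """Hand-crafted O.R.-style heuristic: its behavior never changes."""
    return "expedite" if demand > 100 else "standard"

class EpsilonGreedyAgent:
    """Minimal learning agent: estimates each action's value from rewards."""
    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.values = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.values))   # explore
        return max(self.values, key=self.values.get)  # exploit best estimate

    def learn(self, action, reward):
        # Incremental mean update: the program alters its own parameters.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

agent = EpsilonGreedyAgent(["expedite", "standard"])
random.seed(0)
for _ in range(1000):
    a = agent.choose()
    # Simulated environment (invented numbers): "expedite" pays off more often.
    p = 0.7 if a == "expedite" else 0.3
    reward = 1.0 if random.random() < p else 0.0
    agent.learn(a, reward)

print(max(agent.values, key=agent.values.get))
```

After a thousand interactions the agent has discovered on its own which action pays off, with no human ever encoding that rule – which is the sense in which AI automates part of the algorithm-creation work that an O.R. practitioner would otherwise do by hand.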

Analytics professionals are as familiar as anyone can be with the inherent limitations of computer algorithms. Perhaps one day these learning algorithms will become so effective that the machines that employ them could become a threat, but worrying about that now would be like the Wright brothers worrying about the implications of space travel a few minutes after landing their first prototype airplane.

Narrow and General AI

Business-focused AI systems today are narrow, focused on solving discrete problems. It is possible that an artificial general intelligence might one day be simulated by combining multiple narrow AI systems [2], but we’re not there yet.

General AI is what’s responsible for all the drama, which is why it’s important to draw the distinction between narrow and general AI. If we don’t take the opportunity right now to spell out why a smart algorithm will always be bounded by its programming, we will, as a community, face problems in the future. History is full of examples of what happens when pioneers fail to educate the public about new technologies.

Two centuries ago, British textile workers rioted. The angry mobs smashed machines and set fire to newly automated factories to protest the onset of the first Industrial Revolution. In hindsight, it’s clear the machinery that the Luddites fought so hard against brought more jobs, not fewer, to a textile industry that grew to clothe half the world. Yet the rioters didn’t see it that way. The Luddites weren’t so much afraid of the machines as the social upheaval they represented.

This isn’t just a historical curiosity. Similar doubts about biotechnology spurred several European nations to band together to outlaw crops genetically modified to be higher-yielding, more nutritious or more resistant to dangerous pests. Despite overwhelming evidence of their safety (there has never been a single documented instance of GMO poisoning), these nations banned the technology based on a speculative possibility of harm. Better safe than sorry. Even in the United States, where the technology is widespread, a large segment of the public remains skeptical of biotech.

This happens because doubters move in and fill the void whenever the pioneers of a new technology fail to address its risks and benefits themselves. Regulators, who will never fully understand complex adaptive systems, are attuned to the general sentiment. They won’t hesitate to act on public doubts about AI, and that would be a shame. We can’t afford to lose one of the most promising technological advances of our lifetime.

AI Not Without Risk

None of this is to say that we need to pretend that AI is without risk. Quite the opposite is the case – it’s just a risk of a much different sort. Mechanical looms and steam engines did indeed cause disruption in the 18th century, but the problems hardly qualified as apocalyptic.

So it is with the AI of the present. The downsides are real, but they’re manageable. It’s more important to be honest about what it means and to prepare for what lies ahead by thinking through the likely scenarios.

We must, as the Centre for the Study of Existential Risk at the University of Cambridge suggests, weigh the potential benefits against the potential for harm, and plan accordingly. The most obvious issue is the impact of AI on employment, a topic that has been explored in extensive detail.

One of the less examined possibilities is that relieving people of repetitive tasks and driving efficiencies in critical functions could make life really dull. As Bill Gates put it, “What if people run out of things to do?” The Microsoft co-founder used those words as the title of his review of a book asking whether people become unhappier the more society is perfected.

This raises the important point that, no matter how good AI becomes, it cannot solve all of our problems. It will never be a cure-all for human error. Artificial intelligence is best seen as a supplement, an augmentation, of human abilities, not a replacement.

The first official crash investigation involving a “self-driving” car illustrates how this is so. A federal panel looking into the deadly incident concluded that the driver of a Tesla Model S with autopilot was killed because of “inattention due to overreliance on vehicle automation, which resulted in the car driver’s lack of reaction” to the truck that was turning left into his path at an intersection.

Like any tool, AI can be dangerous when misused. And we must be honest about that, and explore all of the possibilities. It’s the unknown that’s scariest of all, and fear is the biggest threat to technological advances.

Expanding knowledge – eliminating the unknown – is the best way to alleviate anxiety and reduce the natural impulse of politicians to ban what they don’t understand. If we wait until the point where we need to plead the case for AI and O.R. to lawmakers, the battle will already have been lost.

Joseph Byrum is the chief data scientist at Principal Financial Group. Connect with him on Twitter @ByrumJoseph.


  1. Core technologies typically associated with AI include deep and/or machine learning, natural language processing platforms, predictive application programming interfaces and speech or image recognition. Source: “Artificial Intelligence Industry: An Overview by Segment,” July 25, 2016,
  2. U.S. Deputy Secretary of Defense Bob Work told attendees at a conference at the Johns Hopkins University Applied Physics Laboratory: “We’ve never gotten to the point where we’ve had enough narrow AI systems working together throughout a network for us to be able to see what type of interactions we might have.” Source: “War Without Fear: DepSecDef Work on How AI Changes Conflict.” May 31, 2017,


