Analytics Magazine

Forum: Anxiety over artificial intelligence

Gartner predicts that by 2022, one in five workers engaged in mostly non-routine tasks will rely on AI to do a job. Photo courtesy of © abidal


Now is the time to address misunderstandings, before it’s too late.

By Joseph Byrum

Operations research (O.R.) has had it easy. From its earliest days, the discipline harnessed the incredible power of mathematical algorithms to drive efficiency without drawing undue public attention. Militaries have relied on O.R. since World War I to move supplies more efficiently and to ensure strike missions reach their targets. In business, the Franz Edelman Award finalist projects alone have delivered a tidy $250 billion in cost savings.

Despite these notable achievements, good luck finding a random person on the street who can explain what O.R. stands for. The same can’t be said for artificial intelligence: “AI” is among the most recognizable sets of initials today.

One reason for this name recognition is widespread media coverage. AI is a novelty with a lot of promise, and that makes it the subject of intense speculation. The McKinsey Global Institute called AI the next digital frontier. With up to $30 billion invested annually, companies are placing a significant bet on AI’s ability to improve R&D, create more accurate forecasts, drive productivity and customize the whole experience for the end user.

So far, so good. But not all of the attention is quite so positive.

Hollywood has cast AI in central roles for decades – usually as the bad guy. In 1968, theater audiences watched in amazement as the HAL 9000 supercomputer disobeyed, and ultimately tried to kill, all of its human masters. The 1983 film “WarGames” kicked it up a notch with an autonomous military computer system itching to trigger World War III with no more remorse than it would take to make the opening move in a game of chess. In the following year, “The Terminator” kicked off a massively successful movie franchise featuring Skynet, an all-powerful AI system so committed to wiping out humanity that it sent robots through time to ensure anyone who happened to get in its way would never have a chance to exist.

All of this is sheer fantasy, but sometimes that fantasy ends up shaping reality.

AI in its Infancy

Today’s AI systems are very much in their infancy. We’re nowhere near the point where machine learning algorithms could become self-aware, much less develop an unrelenting grudge against mankind. Even the most sophisticated military AI projects only scratch the surface of what’s theoretically possible.

Because AI is so new, this is our one and only opportunity to better explain what it means to a skeptical public before science fiction screenwriters and celebrities completely drown out the conversation with compelling, but not exactly accurate, stories.

Analytics experts can play an important role in providing needed clarity. The use of mathematical algorithms to solve business problems and make better decisions through optimization creates a kinship between AI and O.R.

AI is a system in which algorithms, or computer programs, make choices. AI differs from traditional analytics in that its choices are made in response to some form of learning – a process by which the program alters itself in response to a changing environment. This “learning” capability is one distinctive trait of AI, made possible by techniques such as modeling artificial neural networks after the human brain itself [1]. AI involves a certain amount of autonomy.
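A toy example makes this “learning” idea concrete. The sketch below (my own illustration, not drawn from any production system) trains a single artificial neuron – a perceptron, the simplest building block of a neural network – to learn the logical AND function. The program starts with no knowledge and alters its own parameters in response to the examples it sees, which is exactly the self-adjustment the definition above describes.

```python
# A single artificial neuron that learns the logical AND function by
# repeatedly adjusting its own weights in response to its mistakes.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            # Predict, compare with the true label, and nudge the weights
            # in the direction that reduces the error.
            predicted = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = label - predicted
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

def predict(weights, x1, x2):
    w1, w2, bias = weights
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

# The program is never told the rule for AND; it infers it from examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_perceptron(and_samples)
```

No human wrote the decision rule here – the loop discovered a set of weights that reproduces AND. Modern deep learning systems apply the same principle with millions of neurons and far more sophisticated update rules.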

Traditional O.R. applications are created by data scientists and mathematicians who craft custom algorithms to optimize the task at hand. AI goes to the next level by optimizing the algorithm creation process itself, greatly reducing the need for human intervention. It’s O.R. on steroids.
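The contrast can be illustrated with a deliberately simple, invented scenario. In a traditional workflow, an analyst might hand-pick the cutoff score above which an order gets flagged as risky. An AI-style workflow automates that design choice: the program searches candidate rules and keeps whichever performs best against the data, replacing one piece of human intervention with a search.

```python
# Toy illustration: instead of an analyst choosing the flagging threshold,
# the program selects it automatically by evaluating candidates on data.

def accuracy(threshold, scored_orders):
    """Fraction of orders the rule 'flag if score >= threshold' gets right."""
    correct = sum(
        1 for score, is_risky in scored_orders
        if (score >= threshold) == is_risky
    )
    return correct / len(scored_orders)

def auto_select_threshold(scored_orders, candidates):
    # The human step of picking the rule is replaced by an automated search.
    return max(candidates, key=lambda t: accuracy(t, scored_orders))

# (score, actually_risky) pairs -- invented toy data
orders = [(0.2, False), (0.4, False), (0.55, True), (0.7, True), (0.9, True)]
best = auto_select_threshold(orders, [0.3, 0.5, 0.6, 0.8])
```

Real systems search over vastly larger spaces – model architectures, features, hyperparameters – but the principle is the same: the machine optimizes the design of its own decision rule.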

Analytics professionals are as familiar as anyone can be with the inherent limitations of computer algorithms. Perhaps one day these learning algorithms will become so effective that the machines that employ them could become a threat, but worrying about that now would be like the Wright brothers worrying about the implications of space travel a few minutes after landing their first prototype airplane.

Narrow and General AI

Business-focused AI systems today are narrow, focused on solving discrete problems. An artificial general intelligence might one day be simulated by combining multiple narrow AI systems [2], but we’re not there yet.

General AI is what’s responsible for all the drama, which is why it’s important to draw the distinction between narrow and general AI. If we don’t take the opportunity right now to spell out why a smart algorithm will always be bounded by its programming, we will, as a community, face problems in the future. History is full of examples of what happens when pioneers fail to educate the public about new technologies.

Two centuries ago, British textile workers rioted. Angry mobs smashed machines and set fire to newly automated factories to protest the onset of the first Industrial Revolution. In hindsight, it’s clear the machinery that the Luddites fought so hard against brought more jobs, not fewer, to a textile industry that grew to clothe half the world. Yet the rioters didn’t see it that way. The Luddites weren’t so much afraid of the machines as of the social upheaval they represented.

This isn’t just a historical curiosity. Similar doubts about biotechnology spurred several European nations to band together to outlaw crops genetically modified to be higher-yielding, more nutritious or more resistant to dangerous pests. Despite overwhelming evidence of the technology’s safety – there has never been a documented instance of GMO poisoning – these nations banned it based on a speculative possibility of harm. Better safe than sorry, the reasoning went. Even in the United States, where the technology is widespread, a large segment of the public remains skeptical of biotech.

Doubters move in and fill the void whenever the pioneers of a new technology fail to address its risks and benefits themselves. Regulators, who will never fully understand complex adaptive systems, are attuned to the general sentiment. They won’t hesitate to act on public doubts about AI, and that would be a shame. We can’t afford to lose one of the most promising technological advances of our lifetime.

AI Not Without Risk

None of this is to say that we need to pretend AI is without risk. Quite the opposite – it’s just risk of a much different sort. Mechanical looms and steam engines did indeed cause disruption in the 18th and early 19th centuries, but the problems hardly qualified as apocalyptic.

So it is with the AI of the present. The downsides are real, but they’re manageable. It’s more important to be honest about what it means and to prepare for what lies ahead by thinking through the likely scenarios.

We must, as the Centre for the Study of Existential Risk at the University of Cambridge suggests, weigh the potential benefits against the potential for harm, and plan accordingly. The most obvious issue is the impact of AI on employment, a topic that has been explored in extensive detail.

One of the less examined possibilities is that, by providing relief from repetitive tasks and driving efficiencies in critical functions, AI could make life really dull. As Bill Gates put it, “What if people run out of things to do?” The Microsoft founder used those words as the title of his review of a book asking whether people become unhappier the more society is perfected.

This raises the important point that, no matter how good AI becomes, it cannot solve all of our problems. It will never be a cure-all for human error. Artificial intelligence is best seen as a supplement, an augmentation, of human abilities, not a replacement.

The first official crash investigation involving a “self-driving” car illustrates how this is so. A federal panel looking into the deadly incident concluded that the driver of a Tesla Model S with autopilot was killed because of “inattention due to overreliance on vehicle automation, which resulted in the car driver’s lack of reaction” to the truck that was turning left into his path at an intersection.

Like any tool, AI can be dangerous when misused. And we must be honest about that, and explore all of the possibilities. It’s the unknown that’s scariest of all, and fear is the biggest threat to technological advances.

Expanding knowledge – eliminating the unknown – is the best way to alleviate anxiety and reduce the natural impulse of politicians to ban what they don’t understand. If we wait until the point where we need to plead the case for AI and O.R. to lawmakers, the battle will already have been lost.

Joseph Byrum is the chief data scientist at Principal Financial Group. Connect with him on Twitter @ByrumJoseph.


  1. Core technologies typically associated with AI include deep and/or machine learning, natural language processing platforms, predictive application programming interfaces and speech or image recognition. Source: “Artificial Intelligence Industry: An Overview by Segment,” July 25, 2016.
  2. U.S. Deputy Secretary of Defense Bob Work told attendees at a conference at the Johns Hopkins University Applied Physics Laboratory: “We’ve never gotten to the point where we’ve had enough narrow AI systems working together throughout a network for us to be able to see what type of interactions we might have.” Source: “War Without Fear: DepSecDef Work on How AI Changes Conflict,” May 31, 2017.
