

Analytics Magazine

Artificial Intelligence: The values that should guide the AI revolution

The technology has made tremendous leaps forward, yet it remains nowhere near its full potential.

By Joseph Byrum

Advanced artificial intelligence algorithms have the ability to take over tasks traditionally reserved for skilled human operators, such as driving a truck or performing a medical diagnosis. What once was the stuff of science fiction is now reality. This technology has made tremendous leaps in the last decade, yet it remains nowhere near its full potential.

It is still early, and we have the opportunity to guide AI’s development in a rational way, following a set of clear principles. Thinking through those principles provides insight into what a fully developed, ethical AI system ought to look like.

A number of organizations, including the Association for Computing Machinery (ACM) [1], the Future of Life Institute (FLI) [2], the Institute of Electrical and Electronics Engineers (IEEE) [3] and Google [4], have asked their experts to think through every possible scenario, from avoiding Hollywood’s overused vision of megalomaniacal AI to preventing a programmer’s implicit biases from infecting algorithms that ought to be free from prejudice.

Reviewing these concepts is important in understanding how to best take advantage of AI’s potential. Fortunately, a consensus is emerging on the main principles that should be respected by technology developers.

Human Control

The first and most important principle is maintaining human control of AI systems. Source: ThinkStock


The first and most important principle is that we must maintain human control of AI systems. One example would be to have an easily accessible “off” switch to ensure humans can intervene and stop problems from rapidly growing into crises when AI steps out of line. In Stanley Kubrick’s film “2001: A Space Odyssey,” it was so hard for the human crew to unplug the uppity HAL 9000 system that the lip-reading AI realized what was happening and decided that, “This mission is too important for me to allow you to jeopardize it.”

But it’s more than just an off switch. The current state of self-driving cars illustrates the risk of making human involvement an afterthought. At the current level of technology, human intervention is essential for AI systems that have yet to master every aspect of the chaotic road environment. At the same time, however, humans don’t really do much while the car drives itself; their barely engaged attention lapses.

This scenario played out in a tragic accident in March 2018 in Tempe, Ariz. [5]. A self-driving Volvo operated by Uber struck and killed a darkly clad pedestrian crossing a dark road outside a crosswalk – something the system’s designers had not anticipated. The forward- and side-mounted cameras, radar and lidar did detect an object in the vehicle’s path, but by the time the human safety driver hit the brakes, it was too late.

Pushing too many functions off on the AI can create a dangerous complacency, which is why it’s important to maintain human control.
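The off-switch idea can be sketched in a few lines of code. The sketch below is purely illustrative, not any vendor’s actual control interface: an autonomy loop that checks a human-held kill switch before every action, so an operator can always halt the system before a problem compounds.

```python
class KillSwitch:
    """A human-held override. Engaging it halts the autonomous loop."""

    def __init__(self):
        self.engaged = False

    def engage(self):
        self.engaged = True


def autonomous_loop(planned_actions, switch):
    """Execute planned actions, yielding control the moment a human intervenes."""
    executed = []
    for action in planned_actions:
        if switch.engaged:  # human override always wins
            break
        executed.append(action)
    return executed


switch = KillSwitch()
done = autonomous_loop(["steer", "accelerate"], switch)    # runs normally
switch.engage()
halted = autonomous_loop(["steer", "accelerate"], switch)  # halts immediately
```

The design point is that the check happens inside the loop, on every iteration: the machine never gets a window during which it is uninterruptible.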

Human Safety

Ensuring human safety is another key principle. Isaac Asimov developed his three laws of robotics, designed to protect humanity, as a defense against a cheap plot device: robots turning on their creators, which had already become a cliché by the 1950s. While the laws initially served a fictional purpose, they remain an enduring statement of programming safeguards that still make sense today:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The basic concept is simple, but real-world issues tend to become more complicated at the edges of implementation. Self-driving cars once again pose the classic dilemma. If a robotic car rounds a corner and suddenly comes across a group of school children crossing the road, does the car continue straight, endangering the children, or swerve into an obstacle that risks the life of the owner? The right answer is fodder for endless discussion and debate, but what matters most is that AI systems are developed to handle realistic scenarios and the consequences of the choices made. AI should be better than humans at making the right choice in terms of human safety.
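Asimov’s laws lend themselves to a simple encoding as a strict priority filter, which illustrates why the edge cases are hard. The sketch below is a toy: the boolean flags (`harms_human`, `disobeys_order`, and so on) are hypothetical labels, and real systems cannot reduce harm judgments to booleans, but it captures the ordering, with each law able to veto an action only if no higher law already applies.

```python
def permitted(action):
    """Check a candidate action against Asimov's three laws, in priority order.
    `action` is a dict of hypothetical boolean flags describing its effects."""
    # First Law: never injure a human being.
    if action.get("harms_human", False):
        return False
    # Second Law: obey human orders (orders that harm humans were vetoed above).
    if action.get("disobeys_order", False):
        return False
    # Third Law: avoid self-destruction, unless a human order requires it.
    if action.get("destroys_self", False) and not action.get("ordered_by_human", False):
        return False
    return True
```

The school-children dilemma shows where such a filter breaks down: when every available action sets `harms_human`, a strict veto scheme returns no permissible action at all, which is exactly why realistic systems need richer trade-off reasoning than the laws provide.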

Human Well-Being

“Human well-being” means the AI should work on humanity’s behalf, not the other way around. Machines are supposed to be labor-saving devices, but often we find ourselves spending endless hours supplying data to algorithms (which is the primary purpose of social media, at least from the perspective of Facebook and Twitter), or spending long hours working to be able to afford the latest expensive gadget.

AI systems must also be fed a never-ending stream of data to function effectively, and this is an area that should require minimal human interaction. AI is best at maintaining human well-being when such systems take over repetitive tasks with a level of accuracy and precision only available to systems that are not susceptible to boredom and fatigue. This is an area where AI can easily fill in for a human weakness.

Human Freedom

The principle of “human freedom” means we must be free to make our own choices. The temptation to let AI take over and make every decision while humans relax must be avoided. Having AI take over our decision-making would also conflict with the previous principle of human well-being, because it would essentially enslave mankind.

The primary problem with human judgment is that there are so many variables involved in any particular choice that we often rely on intuition or luck in making a selection in the face of nearly infinite options. We might decide what car to buy because we like the color or make a stock pick based on a small slice of information such as the price-to-earnings ratio or a recent earnings report.

AI’s strength is its ability to process all available data, sorting through what’s relevant so that the algorithm can present options to the human based on analysis rather than superstition. Operating under this principle, AI doesn’t substitute its judgment for human decisions. Rather, it augments the power of human decisions by focusing human effort.
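As a toy illustration of this augmentation pattern (the option fields and weights below are invented for the example), an assistant might score each option across many variables and hand the human a short list, leaving the final choice to human judgment:

```python
def shortlist(options, weights, k=3):
    """Score each option across many weighted variables and return the top k,
    so the AI narrows the field and the human makes the decision."""
    def score(option):
        return sum(weights[field] * option.get(field, 0.0) for field in weights)
    return sorted(options, key=score, reverse=True)[:k]


# Hypothetical stock picks, scored on more than a single ratio.
stocks = [
    {"name": "A", "pe_score": 0.9, "growth": 0.2, "stability": 0.5},
    {"name": "B", "pe_score": 0.4, "growth": 0.9, "stability": 0.6},
    {"name": "C", "pe_score": 0.1, "growth": 0.1, "stability": 0.9},
]
weights = {"pe_score": 0.5, "growth": 0.3, "stability": 0.2}

top = shortlist(stocks, weights, k=2)  # two best candidates, human picks one
```

The human still chooses, but from two candidates weighed across every variable rather than from the whole market weighed by intuition.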


Transparency

The term AI covers a wide variety of technologies. It can refer to deep-learning algorithms that draw patterns out of data and use those patterns to adjust their own behavior. This often leads to situations where human programmers have no idea why the AI did what it did.

Amazon used neural network algorithms of this sort in its Rekognition facial recognition system [6], which was designed to enable “immediate response for public safety and security.” When a public interest group put Rekognition to the test, it falsely matched 28 members of Congress to mugshots of people who had been arrested [7].

Such goofs are inevitable with deep-learning algorithms that alter themselves in ways that cannot readily be explained. Developing systems with transparency as a primary principle can help reduce embarrassing mistakes caused when strange inputs yield even stranger output.
In some cases, government regulatory agencies expect and demand accountability, making transparency even more important.
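One way to honor the transparency principle is to prefer models whose decisions can be traced back to their inputs. In the sketch below (the feature names and weights are invented for illustration), a linear scorer returns not just a score but the per-feature contributions behind it, giving a human reviewer, or a regulator, an audit trail:

```python
def explain_score(features, weights):
    """Return a score plus the per-feature contributions behind it,
    so a human can audit exactly why the model scored an input as it did."""
    contributions = {f: weights[f] * features.get(f, 0.0) for f in weights}
    return sum(contributions.values()), contributions


# Hypothetical features for one face-match decision.
score, why = explain_score(
    {"matches_face_db": 0.2, "image_quality": 0.9},
    {"matches_face_db": 2.0, "image_quality": 0.5},
)
```

Here a reviewer can see that a weak database match contributed most of the score, which is precisely the kind of accounting a self-modifying deep network cannot offer on its own.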

Putting it All Together

AI is not a magic solution to all of life’s problems. It is best seen as a tool that, when developed in accordance with the principles above, can enhance human-led projects. Augmented intelligence systems take advantage of AI’s strengths to fill in for human weakness in accordance with the principles of good practice for AI.

Think of the Iron Man suit from the movies of the same name. The suit’s AI system feeds the most relevant information to Tony Stark so that he can make the ultimate choice about the best course of action.

Combining the AI’s data-processing abilities with human judgment gives the whole system the ability to perform better than either AI or humans alone. Such mutual dependence also ensures the AI will never judge human beings to be obsolete, greatly reducing the risk to human safety from a robot-triggered nuclear holocaust. This approach leaves humans in control, respects human freedom, and ensures someone can explain the reasoning behind the decisions made at each critical step.

Following these principles in AI development will promote AI systems likely to enhance our lives, but at the cost of making movies more boring.

Joseph Byrum is the chief data scientist at Principal Financial Group. Connect with him on Twitter @ByrumJoseph.

