

Artificial Intelligence: The values that should guide the AI revolution

The technology has made tremendous leaps forward, yet it remains nowhere near its full potential.

By Joseph Byrum

Advanced artificial intelligence algorithms have the ability to take over tasks traditionally reserved for skilled human operators, such as driving a truck or performing a medical diagnosis. What once was the stuff of science fiction is now reality. This technology has made tremendous leaps in the last decade, yet it remains nowhere near its full potential.

It is still early, and we have the opportunity to guide AI’s development in a rational way, following a set of clear principles. Thinking through those principles provides insight into what a fully developed, ethical AI system ought to look like.

A number of organizations, including the Association for Computing Machinery (ACM) [1], the Future of Life Institute (FLI) [2], the Institute of Electrical and Electronics Engineers (IEEE) [3] and Google [4], have asked their experts to think through every possible scenario, from avoiding Hollywood’s overused vision of megalomaniacal AI to preventing a programmer’s implicit biases from infecting algorithms that ought to be free from prejudice.

Reviewing these concepts is important in understanding how to best take advantage of AI’s potential. Fortunately, a consensus is emerging on the main principles that should be respected by technology developers.

Human Control


The first and most important principle is that we must maintain human control of AI systems. One example would be an easily accessible “off” switch to ensure humans can intervene and stop problems from rapidly growing into crises when AI steps out of line. In Stanley Kubrick’s film “2001: A Space Odyssey,” it was so hard for the human crew to unplug the uppity HAL 9000 system that the lip-reading AI realized what was happening and decided that “This mission is too important for me to allow you to jeopardize it.”
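
To make the idea concrete, here is a minimal sketch of such an override in Python, with the human-controlled stop signal kept entirely outside the agent’s own logic so the agent cannot veto it. The KillSwitch and run_agent names, and the polling approach, are illustrative assumptions rather than any particular system’s design:

    import threading
    import time

    class KillSwitch:
        """A human-controlled stop signal that lives outside the agent's logic."""
        def __init__(self):
            self._stop = threading.Event()

        def engage(self):            # called by the human operator, never the agent
            self._stop.set()

        @property
        def engaged(self):
            return self._stop.is_set()

    def run_agent(kill_switch, max_steps=1_000_000):
        for step in range(max_steps):
            if kill_switch.engaged:  # checked before every single action
                print(f"Halted by operator at step {step}")
                return
            time.sleep(0.001)        # stand-in for one bounded, reversible action

    switch = KillSwitch()
    worker = threading.Thread(target=run_agent, args=(switch,))
    worker.start()
    time.sleep(0.05)                 # the agent runs for a moment...
    switch.engage()                  # ...then the human intervenes
    worker.join()                    # the agent stops at its next check

The design point is that the stop check happens between actions and the switch state is owned by the human side, so there is nothing for the system to “realize” or argue about.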

But it’s more than just an off switch. The current state of self-driving cars illustrates the risk of making human involvement an afterthought. At the current level of technology, human intervention remains essential because AI systems have yet to master every aspect of the chaotic road environment. At the same time, however, humans have little to do while the car drives itself, and their barely engaged attention lapses.

This scenario played out in a tragic accident in March 2018 in Tempe, Ariz. [5]. A self-driving Volvo operated by Uber struck and killed a darkly clad pedestrian crossing in the middle of a dark road – something the system’s designers had not anticipated. The forward- and side-mounted cameras, radar and lidar sensors did detect an unknown object, but by the time the human safety driver responded by hitting the brakes, it was too late.

Pushing too many functions off on the AI can create a dangerous complacency, which is why it’s important to maintain human control.
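
One hedged illustration of guarding against that complacency is a hand-off policy that treats automation as conditional on demonstrated human engagement. The timeout value, the AttentionMonitor name and the escalation behavior below are assumptions made for the sketch, not any real vehicle’s logic:

    import time

    ATTENTION_TIMEOUT_S = 5.0  # assumed threshold; a real system would tune this

    class AttentionMonitor:
        """Tracks when the human last demonstrated engagement."""
        def __init__(self):
            self.last_seen = time.monotonic()

        def human_input_detected(self):  # e.g., steering torque or eye tracking
            self.last_seen = time.monotonic()

        def human_is_engaged(self):
            return (time.monotonic() - self.last_seen) < ATTENTION_TIMEOUT_S

    def autopilot_step(monitor):
        if not monitor.human_is_engaged():
            return "ESCALATE: alert the driver, then execute a minimal-risk stop"
        return "continue automated driving"

    monitor = AttentionMonitor()
    print(autopilot_step(monitor))            # recent engagement -> continue
    monitor.last_seen -= ATTENTION_TIMEOUT_S  # simulate a lapse in attention
    print(autopilot_step(monitor))            # -> escalate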

Human Safety

Ensuring human safety is another key principle. Isaac Asimov developed his three laws of robotics, designed to protect humanity, as a defense against a cheap plot device: robots turning on their creators, which had already become a cliché by the 1950s. While the laws initially served a fictional purpose, they remain an enduring statement of programming safeguards that still make sense today:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The basic concept is simple, but real-world issues become more complicated at the edges of implementation. Self-driving cars once again pose the classic dilemma. If a robotic car rounds a corner and suddenly comes across a group of school children crossing the road, does the car continue straight, endangering the children, or swerve into an obstacle that risks the life of the owner? The right answer is fodder for endless discussion and debate, but what matters most is that AI systems are developed to handle realistic scenarios and to weigh the consequences of the choices they make. AI should be better than humans at making the right choice in terms of human safety.
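
As a thought experiment, Asimov-style rules can be encoded as an ordered list of violation checks, with an action judged by the highest-priority rule it breaks. The predicates and the toy actions below are invented purely for illustration:

    # Lower index = higher priority; each law is a violation test on an action.
    def harms_human(action):      return action.get("human_harm", 0) > 0
    def disobeys_order(action):   return not action.get("obeys_order", True)
    def self_destructive(action): return action.get("robot_harm", 0) > 0

    LAWS = [harms_human, disobeys_order, self_destructive]

    def first_violation(action):
        """Index of the highest-priority law the action violates, or None."""
        for i, violates in enumerate(LAWS):
            if violates(action):
                return i
        return None

    def choose(actions):
        """Prefer the action whose worst violation sits lowest in the priority order."""
        def rank(action):  # larger rank = more acceptable
            v = first_violation(action)
            return len(LAWS) if v is None else v
        return max(actions, key=rank)

    swerve = {"human_harm": 1, "obeys_order": True,  "robot_harm": 1}
    brake  = {"human_harm": 0, "obeys_order": False, "robot_harm": 0}
    print(choose([swerve, brake]))  # braking wins: harming a human is the worse violation

The sketch also shows why the edge cases are hard: everything depends on how reliably “harm” can be estimated before the action is taken.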

Human Well-Being

“Human well-being” means the AI should work on humanity’s behalf, not the other way around. Machines are supposed to be labor-saving devices, but often we find ourselves spending endless hours supplying data to algorithms (which is the primary purpose of social media, at least from the perspective of Facebook and Twitter), or spending long hours working to be able to afford the latest expensive gadget.

AI systems must also be fed a never-ending stream of data to function effectively, yet supplying that data should require minimal human effort. AI best serves human well-being when such systems take over repetitive tasks with a level of accuracy and precision available only to systems that are not susceptible to boredom and fatigue. This is an area where AI can easily fill in for a human weakness.

Human Freedom

The principle of “human freedom” means we must remain free to make our own choices. The temptation to let AI take over and make every decision while humans relax must be avoided. Having AI take over our decision-making would conflict with the previous principle of maintaining human well-being because it would essentially enslave mankind.

The primary problem with human judgment is that there are so many variables involved in any particular choice that we often rely on intuition or luck in making a selection in the face of nearly infinite options. We might decide what car to buy because we like the color or make a stock pick based on a small slice of information such as the price-to-earnings ratio or a recent earnings report.

AI’s strength is its ability to process all available data, sorting through what’s relevant so that the algorithm can present options to the human based on analysis rather than superstition. Operating under this principle, AI doesn’t substitute its judgment for human decisions. Rather, it augments the power of human decisions by focusing human effort.
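
A simple sketch of that augmentation pattern: score every option on all available features, then surface only a short list for the human to make the final call. The weights and the car data here are made up for the example:

    # Hypothetical feature weights; the negative weight penalizes higher price.
    WEIGHTS = {"price": -0.4, "safety": 0.35, "efficiency": 0.25}

    cars = [
        {"name": "Sedan A", "price": 0.6, "safety": 0.90, "efficiency": 0.70},
        {"name": "SUV B",   "price": 0.8, "safety": 0.80, "efficiency": 0.40},
        {"name": "Coupe C", "price": 0.5, "safety": 0.60, "efficiency": 0.80},
        {"name": "Wagon D", "price": 0.4, "safety": 0.85, "efficiency": 0.75},
    ]

    def score(option):
        return sum(w * option[feat] for feat, w in WEIGHTS.items())

    def shortlist(options, k=3):
        """The algorithm narrows the field; the human makes the final choice."""
        return sorted(options, key=score, reverse=True)[:k]

    for car in shortlist(cars):
        print(f"{car['name']}: {score(car):+.2f}")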

Transparency

The term AI covers a wide variety of technologies. It can refer to deep-learning algorithms that draw patterns out of data, which the system then uses to adjust its own internal parameters. This can often lead to situations where human programmers have no idea why the AI did what it did.

Amazon used neural network algorithms of this sort in its Rekognition facial recognition system [6], which was designed to enable “immediate response for public safety and security.” When a public interest group put Rekognition to the test, it falsely matched 28 members of Congress with mugshots of people who had been arrested [7].

Such goofs are inevitable with deep-learning algorithms that alter themselves in ways that can’t readily be explained. Developing systems with transparency as a primary principle can help reduce the embarrassing mistakes that occur when strange inputs yield even stranger outputs. In some cases, government regulatory agencies expect and demand accountability, making transparency even more important.
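
One common transparency technique, sketched here with invented numbers, is occlusion-style attribution: perturb one input at a time and measure how much the model’s output moves, so a human can ask which inputs drove a given decision. The black_box function stands in for an opaque model such as a trained neural network:

    def black_box(features):
        # stand-in for an opaque model; the weights are not known to the caller
        return (0.8 * features["pose_match"]
                + 0.5 * features["lighting"]
                - 0.6 * features["occlusion"])

    def explain(model, features, baseline=0.0):
        """Attribute the score to each input by zeroing inputs one at a time."""
        base_score = model(features)
        attributions = {}
        for name in features:
            perturbed = dict(features, **{name: baseline})
            attributions[name] = base_score - model(perturbed)
        return base_score, attributions

    score, attrib = explain(black_box, {"pose_match": 0.9, "lighting": 0.3,
                                        "occlusion": 0.7})
    print(f"match score: {score:.2f}")
    for name, contribution in sorted(attrib.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {contribution:+.2f}")

An explanation of this sort would not fix a mismatch, but it would let an auditor see when, say, a weak pose match was being outweighed by incidental features.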

Putting it All Together

AI is not a magic solution to all of life’s problems. It is best seen as a tool that, when developed in accordance with the principles above, can enhance human-led projects. Augmented intelligence systems take advantage of AI’s strengths to fill in for human weakness in accordance with the principles of good practice for AI.

Think of the Iron Man suit from the movies of the same name. The suit’s AI system feeds the most relevant information to Tony Stark so that he can make the ultimate choice about the best course of action.

Combining the AI’s data processing abilities with human judgment is what gives the whole system the ability to perform better than either AI alone or humans on their own. Such mutual dependence also ensures the AI will never judge human beings to be obsolete, significantly reducing the chance of a robot-triggered nuclear holocaust. This approach leaves humans in control, respects human freedom and ensures there is someone who can explain the reasoning behind the decisions made at each critical step.

Following these principles in AI development will promote AI systems likely to enhance our lives, but at the cost of making movies more boring.

Joseph Byrum is the chief data scientist at Principal Financial Group. Connect with him on Twitter @ByrumJoseph.

  1. https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf
  2. https://futureoflife.org/ai-principles/
  3. http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf
  4. https://www.blog.google/technology/ai/ai-principles/
  5. https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx
  6. https://aws.amazon.com/rekognition/
  7. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28
