You Call This Artificial “Intelligence”? AI Meets the Real World

AI systems can perform specific, defined tasks so well that their capability can appear superhuman. For instance, AI can recognize common images and objects better than human beings, AI can sift through large amounts of data faster than human beings, and AI can master more languages than human beings.1 However, it is important to remember that an AI’s success is task specific, and its ability to complete a task, such as recognizing images, is contingent on the data it receives and the environment it operates in. Because of this, AI applications are sometimes fooled in ways that humans never would be, particularly when these systems encounter situations beyond their abilities. The examples below describe situations where environmental factors exceeded AI’s “superhuman” capabilities and invalidated any contingency planning that developers or deployers had introduced.

 

 
Explore the Three Fails in This Category:

Sensing is Believing

When sensors are faulty, or the code that interprets their data is flawed, the results can be extremely damaging.

 

Examples

When the battery died on early versions of “smart” thermostats, houses got really, really cold.2 Later versions had appropriate protections built into the code to ensure this wouldn’t happen.
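
Such a protection can be as simple as refusing to trust a missing or implausible reading. The Python sketch below illustrates the idea; the device interface, thresholds, and fallback setpoint are hypothetical and are not the vendor’s actual firmware.

    # Hypothetical fail-safe for a connected thermostat: if the battery or the
    # temperature sensor cannot be trusted, fall back to a safe heating setpoint
    # instead of silently doing nothing.

    SAFE_SETPOINT_C = 16.0        # assumed anti-freeze fallback temperature
    MIN_BATTERY_PCT = 10          # assumed "low battery" threshold
    VALID_RANGE_C = (-30.0, 50.0) # assumed plausible indoor readings

    def choose_setpoint(scheduled_setpoint_c, battery_pct, sensor_reading_c):
        """Return the setpoint to act on, degrading to a safe default on faults."""
        battery_ok = battery_pct is not None and battery_pct >= MIN_BATTERY_PCT
        reading_ok = (sensor_reading_c is not None
                      and VALID_RANGE_C[0] <= sensor_reading_c <= VALID_RANGE_C[1])
        if not (battery_ok and reading_ok):
            # Fail toward "keep the house warm enough" rather than "do nothing."
            return SAFE_SETPOINT_C
        return scheduled_setpoint_c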

A preliminary analysis of the Boeing 737 MAX crashes found that a faulty sensor “erroneously reported that the airplane was stalling… which triggered an automated system… to point the aircraft’s nose down,” when the aircraft was not actually stalling.3 Boeing subsequently added to all models the safety features that would have alerted pilots to the disagreement between the working sensor and the failed one.
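
The missing safeguard amounts to a cross-check between redundant sensors: if their readings disagree, alert the crew rather than acting on a single input. A minimal Python sketch of that idea follows; the sensor interface and threshold are illustrative assumptions, not Boeing’s implementation.

    # Illustrative cross-check of two redundant angle-of-attack sensors: act only
    # when the readings agree; otherwise raise a disagreement alert and leave
    # control with the crew.

    DISAGREE_THRESHOLD_DEG = 5.5   # assumed threshold for this sketch

    def check_angle_of_attack(sensor_a_deg, sensor_b_deg):
        """Return (value_to_use, alert); never trust a single sensor silently."""
        if abs(sensor_a_deg - sensor_b_deg) > DISAGREE_THRESHOLD_DEG:
            # Do not feed automated pitch commands from a single sensor.
            return None, "AOA DISAGREE"
        return (sensor_a_deg + sensor_b_deg) / 2.0, None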

A woman discovered that any person’s fingerprint could unlock her phone’s “vault-like security” after she fitted the phone with a $3 screen protector. Customers were told to avoid fingerprint login until the vendor could fix the code.4

Why is this a fail?

Humans use sight, smell, hearing, taste, and touch to perceive and make sense of the world. These senses work in tandem and can serve as backups for one another; for instance, you might smell smoke before you see it. But human senses and processing aren’t perfect; they can be influenced, be confused, or degrade.

 

What happens when things fail?

Similar to humans, some automated systems rely on sensors to get data about their operating environments and rely on code to process and act on that data. And like human senses, these sensors and the interpretation of their readings are imperfect: they can be influenced by the composition or labeling of the training dataset, can get confused by erroneous or unexpected inputs, and can degrade as parts get older. AI applications tend to break if we haven’t included redundancy, guardrails to control behavior, or code to deal gracefully with errors.

We can learn from a long history of research on sensor failure in, for example, the automobile, power production, manufacturing, and aviation industries. In the latter case, research findings have led to certification requirements such as triple redundancy for any aircraft part necessary for flight.5
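
As a toy illustration of the triple-redundancy idea, a median vote over three readings lets one failed sensor be outvoted. The Python sketch below shows the concept only; real avionics voters are far more involved.

    # Toy triple-modular-redundancy voter: take the median of three readings so
    # that one stuck or wildly wrong sensor cannot drive the output by itself.

    def vote(reading_a, reading_b, reading_c):
        """Median of three redundant sensor readings."""
        return sorted([reading_a, reading_b, reading_c])[1]

    print(vote(2.0, 2.1, 250.0))   # -> 2.1; the failed sensor is outvoted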

 

 


Insecure AI

When AI’s software and information technology (IT) architecture are not hardened against cybersecurity threats, users and systems are vulnerable to accidental or malicious interference.

Examples

Responding to background noise that it mistook for explicit commands, an Amazon Echo recorded a family’s private conversation and sent it to a random user.2 This is one way in which users can unknowingly cause data spills.

HIS Group, a Japanese hotel chain, installed in-room cameras with facial recognition and speech recognition to cater to guests’ needs. Hackers were able to remotely view the video streams.3 This is one way purchasers can unintentionally create situations that attract malicious behavior.

Why is this a fail?

AI’s software and IT architecture are as vulnerable to cybersecurity threats as other connected technologies – and potentially vulnerable in new ways as well (see more in the “AI Pwned” Fail). Just deploying an AI into the world introduces it as a new attack surface (i.e., something to attack).4 Even the most secure AI can face continuous attacks that aim to expose, alter, disable, destroy, or gain unauthorized access to it. Therefore, we must design all software systems so that cyber protections and privacy considerations are inherent from the beginning.5


What happens when things fail?

Smart devices – like internet-connected speakers, wireless door locks, and wireless implants – have increasingly been introduced into people’s homes and even into their bodies, which makes the consequences of their being hacked especially terrifying.6,7,8 Such systems are often networked because they rely on cloud resources to do some of the processing, or they communicate with other networked sensors. The market growth of these kinds of products will make such devices more common.

Another cybersecurity threat arises because AI systems often have access to potentially sensitive user information. For example, smart home devices have an unusual level of access, including contact lists, conversations, voice signatures, and times when someone is home. Any system that collects GPS data can recreate a detailed picture of someone’s location and movement patterns.9 Physical AI systems that provide critical capabilities, such as autonomous vehicles, could become targets for attacks that would put a person’s safety at risk.10 Finally, existing methods of de-identifying individuals from their personal data have been shown to be ineffective (although researchers are working on this challenge).11 As organizations seek to collect more data for their algorithms, the rewards for stealing this information grow as well.

 


AI Pwned

Malicious actors can fool an AI or get it to reveal protected information.

Note: “Pwned” is a computer-slang term that means “to own” or to completely get the better of an opponent or rival.2

 

Examples

Researchers created eyeglasses whose frames bear a special pattern that defeats facial recognition algorithms through either targeted attacks (impersonating another person) or untargeted attacks (avoiding identification).3 A human being would easily be able to identify the wearer correctly.

Researchers explored a commercial facial recognition system that took a picture of a face as input, searched its database, and returned the name of the person with the closest matching face, along with a confidence score for the match. Over time, the researchers discovered information about the individual faces the system had been trained on – information they should not have had access to. They then built their own AI system that, when supplied with a person’s name, returned an imperfect image of that person, revealing data that had never been made public and should not have been.4 This kind of attack illustrates that the sensitive information used to train an AI may not be as well protected as desired.
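
This class of attack is known as model inversion. The Python sketch below shows the core idea, assuming white-box gradient access to a hypothetical PyTorch face classifier (`model`); the published attack worked with only the confidence scores the service returned.

    # Sketch of model inversion: start from a blank image and follow gradients
    # that increase the classifier's confidence for a chosen identity, gradually
    # recovering an approximate likeness of a face from the training data.
    # `model` and the 64x64 grayscale input shape are assumptions for this sketch.

    import torch

    def invert(model, target_class, steps=500, lr=0.1):
        x = torch.zeros(1, 1, 64, 64, requires_grad=True)   # assumed input shape
        optimizer = torch.optim.SGD([x], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            confidence = torch.softmax(model(x), dim=1)[0, target_class]
            loss = 1.0 - confidence        # maximize confidence in the target name
            loss.backward()
            optimizer.step()
            x.data.clamp_(0.0, 1.0)        # keep pixel values in a valid range
        return x.detach()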

Why is this a fail?

Cyber-attacks that target AI systems are called “adversarial AI.” An AI may not have the defenses to prevent malicious actors from fooling the algorithm into doing what they want, or from interfering with the data on which the model trains – all without making any changes to the algorithm or gaining access to the code. At the most basic level, adversaries present many inputs to the AI and monitor its responses so that they can learn how the model makes very specific decisions. Adversaries can then alter an input so slightly that a human cannot tell the difference, yet the AI reaches a wrong conclusion with great confidence.5 Adversaries can also extract sensitive information about individual elements of the training set,6 or they can make assumptions about which data sources are used and insert data that biases the learning process.7
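
One widely studied recipe for such a barely perceptible alteration is the fast gradient sign method (FGSM), sketched below in Python with PyTorch as an illustration only; the attacks cited in this section used their own, more elaborate techniques.

    # Fast gradient sign method (FGSM): nudge every pixel a tiny step in the
    # direction that increases the model's loss, producing an input that looks
    # unchanged to a person but can flip the model's prediction.

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, true_label, epsilon=0.01):
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()   # imperceptible shift
        return perturbed.clamp(0.0, 1.0).detach()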

What happens when things fail?

The results can have serious real-world consequences. Researchers have demonstrated examples of a self-driving car not “seeing” a stop sign8 and Google Home interpreting a greeting as a command to unlock the front door.9 Researchers have also documented a hacker’s ability to identify an individual and decipher their healthcare records from a published database of de-identified records.10

Pwning an AI is particularly powerful because 1) it is invisible to humans, so it is hard to detect; 2) it scales, so that a method to fool one AI can often trick other AIs; and 3) it works.11

 

 

