Failure to Launch: How People Can React to AI

People often hold multiple, contradictory views at the same time. There are plenty of examples when it comes to human interaction with technology: people can be excited that Amazon or Netflix recommendations really reflect their tastes, yet worry about what that means for their privacy; they can use Siri or Google’s voice assistant to help them remember things, yet lament losing their short-term memory; they can rely on various newsfeeds for information, even if they know (or suspect) that the primary goal of the algorithms behind those newsfeeds is to keep their attention, not to deliver the broadest news coverage. These seeming dichotomies all revolve around trust, which involves belief and understanding, dependency and choice, perception and evidence, emotion and context. All of these elements of trust are critical to having someone accept and adopt an AI. When we as AI developers and deployers include technical, cultural, organizational, sociological, interpersonal, psychological, and neurological perspectives, we can more accurately align people’s trust in an AI with the AI’s actual trustworthiness, and thereby make it easier for people to adopt it.

 

 
Explore the Three Fails in This Category:

In AI We Overtrust

When people aren’t familiar with AI, cognitive biases and external factors can prompt them to trust the AI more than they should. Even professionals can overtrust AIs deployed in their own fields. Worse, people can change their perceptions and beliefs to be more in line with an algorithm’s, rather than the other way around.

 

Examples

A research team put 42 test participants into a fire emergency scenario featuring a robot responsible for escorting them to an emergency exit. Even though the robot passed obvious exits and got lost, 37 participants continued to follow it.1,2

Consumers who received a digital ad said they were more interested in a product that was specifically targeted for them, and even adjusted their own preferences to align with what the ad suggested about them.3

In a research experiment, students were told that a robot would determine who had pushed a button and “buzzed in” first, thus winning a game. In reality, the robot tried to maximize participant engagement by evenly distributing who won. Even as the robot made noticeably inaccurate choices, the participants did not attribute the discrepancy to the robot having ulterior motives.4

Why is this a fail?

When an AI is helping people do things better than they would on their own, it is easy to assume that the platform’s goals mirror the user’s goals. However, there is no such thing as a “neutral” AI.5 During the design process we make conscious and unconscious assumptions about what the AI’s goals and priorities should be and what data streams the AI should learn from. Often, our incentives and our users’ incentives align, so this works out wonderfully: users drive to their destinations, or they enjoy the AI-recommended movie. But when goals don’t align, most users don’t realize that they’re potentially acting against their own interests. They are convinced that they’re making rational and objective decisions, because they are listening to a rational and objective AI.6
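To make this concrete, here is a minimal, purely illustrative sketch; the items, scores, and recommend function are invented for this example rather than drawn from any real platform. It shows how the same “rational and objective” algorithm returns different answers depending on which goal its designers chose to optimize:

```python
# Toy illustration (hypothetical data): what an AI recommends depends entirely
# on the objective its designers chose during the design process.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float   # how long the platform expects you to stay
    predicted_usefulness: float   # how well the item matches your stated goal

CATALOG = [
    Item("Outrage-bait thread", predicted_engagement=0.9, predicted_usefulness=0.2),
    Item("Balanced explainer",  predicted_engagement=0.5, predicted_usefulness=0.9),
    Item("Novelty cat videos",  predicted_engagement=0.8, predicted_usefulness=0.4),
]

def recommend(items, objective):
    """Return the item that maximizes whatever objective the designers picked."""
    return max(items, key=objective)

# Same data, same "objective" algorithm -- different winners.
platform_pick = recommend(CATALOG, objective=lambda i: i.predicted_engagement)
user_pick = recommend(CATALOG, objective=lambda i: i.predicted_usefulness)

print("Optimizing engagement recommends:", platform_pick.title)
print("Optimizing the user's goal recommends:", user_pick.title)
```

The algorithm never misbehaves; the divergence comes entirely from the objective selected during design.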

Furthermore, how users actually act and how they think they’ll act often differ. For example, a journalist documented eight drivers in 2013 who overrode their own intuition and blindly followed their GPS: drivers who turned onto the stairs at the entrance to a park, a driver who drove into a body of water, and another who ran straight into a house, all because of how they interpreted the GPS instructions.7


Numerous biases can contribute to overtrusting technology. Research highlights three prevalent ones:

1. Humans tend to assume that automation is perfect, and therefore start with a high level of initial trust.8 This “automation bias” leads users to trust automated systems and decision-support systems even when that trust is unwarranted.

2. Similarly, people generally believe something is true if it comes from an authority or expert, even if no supporting evidence is supplied.9 In this case, the AI is perceived as the expert.

3. Lastly, humans use mental shortcuts to make sense of complex information, which can lead to overtrusting an AI if it behaves in a way that conforms to our expectations, or if we have an unclear understanding of how the AI works. Cathy O’Neil, mathematician and author, writes that our relationship to data resembles an ultimate belief in God: “I think it has a few hallmarks of worship – we turn off parts of our brain, we somehow feel like it’s not our duty, not our right to question this.”10

Therefore, the more an AI is associated with a supposedly flawless, data-driven authority, the more likely that humans will overtrust the AI. In these conditions, even professionals in a given field can cede their authority despite their specialized knowledge.11,12

Another outcome of overtrust is that people tend to shift toward the model’s solution rather than stick with their own, which can make AI predictions self-fulfilling.13 These outcomes also show that simply having a human supervise an AI will not necessarily work as a failsafe.14
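As a rough illustration of how predictions can become self-fulfilling, consider the hypothetical simulation below; the deferral rate, scores, and update rule are invented for the sketch. When reviewers mostly defer to the model, the labels used to update it are largely the model’s own guesses, so its estimate drifts away from reality, and the occasional human check does little to correct it:

```python
# Hypothetical simulation of a self-fulfilling prediction loop: when human
# reviewers mostly defer to the model, the "outcomes" used to update it are
# largely the model's own calls, so its estimate drifts away from reality.

import random

random.seed(0)

model_estimate = 0.6   # model's current belief that a case is "risky"
true_rate = 0.3        # the (unobserved) real rate of risky cases

for round_number in range(1, 6):
    labels = []
    for _ in range(100):
        if random.random() < 0.9:
            # 90% of the time the human supervisor defers to the model,
            # so the recorded label is simply the model's own call.
            labels.append(model_estimate > 0.5)
        else:
            # Only occasionally does someone independently check reality.
            labels.append(random.random() < true_rate)
    # "Retrain": nudge the model toward labels it mostly wrote itself.
    observed_rate = sum(labels) / len(labels)
    model_estimate = 0.5 * model_estimate + 0.5 * observed_rate
    print(f"round {round_number}: model estimates a risk rate of "
          f"{model_estimate:.2f} (true rate is {true_rate})")
```

Even with a human nominally in the loop, the model’s estimate climbs toward its own predictions rather than toward the true rate.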

 

What happens when things fail?

The phenomenon of overtrust in AI has contributed to two powerful and potentially frightening outcomes. First, since AIs often have a single objective and reinforce increasingly specialized ends (see more in the “Feeding the Feedback Loop” Fail), users aren’t presented with alternative perspectives and are directed toward more individualistic, non-inclusive ways of thinking.

Second, the pseudo-authority of AI has allowed pseudosciences to re-emerge with a veneer of validity. Demonstrably invalid AI systems have been used to look at a person’s face and assess that person’s supposed tendencies toward criminality or violence,15,16 current feelings,17 sexual orientation,18 and IQ or personality traits.19 These modern phrenology and physiognomy products and claims are unethical, irresponsible, and dangerous.

Although these outcomes may seem extreme, overtrust has a wide range of consequences, from causing people to act against self-interest to promulgating discriminatory practices.


Lost in Translation: Automation Surprise

End-users can be surprised by how an AI acts, or that it failed to act when expected.

 

Examples

When drivers take their hands off the wheel in modern cars, they can make dangerous assumptions about the car’s automated capabilities and who or what is in control of what part of the vehicle.1 This example illustrates the importance of providing training and time for the general population to familiarize themselves with a new automated technology.2

An investigation of a 2012 airplane near-crash (Tel Aviv – Airbus A320) revealed “significant issues with crew understanding of automation… and highlighted the inadequate provision by the aircraft operator of both procedures and pilot training for this type of approach.”3 This example shows how even professionals in a field need training when a new, automated system is introduced.

Facebook trained AI chatbots to learn how to negotiate by practicing against each other. Left to interact without a requirement to stick to human language, the “Bob” and “Alice” chatbots started talking to each other in their own made-up shorthand, which was unintelligible to humans.4 This example shows that even AI experts can be completely surprised by an AI’s behavior.

Why is this a fail?

When automated system behaviors cause users to ask, “What’s it doing now?” or “What’s it going to do next?”, the literature calls this automation surprise.5 These behaviors leave users unable to predict how an automated system will act, even if it is working properly. Surprise can occur when the system is too complicated to understand, when we make erroneous assumptions about the environment in which the system will be used, or when people simply expect automated systems to act the same way a human would.6 AI can exacerbate automation surprise because its decisions evolve and change over time.
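One contributor is easy to sketch in code: an AI that keeps learning from new data can answer the same question differently next week, even though nothing about the user’s situation changed. The assistant, thresholds, and feedback values below are hypothetical rather than taken from any real product:

```python
# Hypothetical sketch of automation surprise: an adaptive system gives a
# different answer to the *same* input after quietly updating itself.

class AdaptiveMailFilter:
    """Toy mail filter that re-estimates its spam threshold from user feedback."""

    def __init__(self):
        self.spam_threshold = 0.8   # initially, only very spammy mail is hidden

    def update_from_feedback(self, reported_spam_scores):
        # Online update: re-estimate the threshold from the scores of mail
        # that users recently reported as spam.
        if reported_spam_scores:
            self.spam_threshold = sum(reported_spam_scores) / len(reported_spam_scores)

    def handle(self, message_score):
        return "hidden" if message_score >= self.spam_threshold else "shown"

mail_filter = AdaptiveMailFilter()
newsletter_score = 0.55

print("Week 1:", mail_filter.handle(newsletter_score))   # shown -- matches expectations

# During the week, other users flag mildly promotional mail as spam.
mail_filter.update_from_feedback([0.50, 0.45, 0.60])

print("Week 2:", mail_filter.handle(newsletter_score))   # hidden -- "what's it doing now?"
```

To the user, the newsletter simply vanished; from the system’s point of view, it was just keeping its model up to date.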

 

What happens when things fail?

The more transparent we are about what the AI can and cannot do (which isn’t always possible because sometimes even we don’t know), the better we can educate users of that system about how it will or will not act. Human-machine teaming (HMT) principles help us understand the importance of good communication. When an AI is designed to help the human partner understand what the automation will do next, the human partner can anticipate those actions and act in concert with them, or override or tweak the automation if needed.7,8,9

Without this context and awareness, the human partner may become frustrated and stop using the AI. Alternatively, the human partner may be unprepared for the AI action and be unable to recover from a bad decision.
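One way to provide that context, sketched here under assumed names and thresholds (the Intent structure, confidence cutoff, and approval hook are illustrative, not a standard API), is to have the automation announce what it intends to do, why, and how confident it is, and to route low-confidence intents through its human partner before acting:

```python
# Hypothetical human-machine teaming pattern: the automation announces its
# intended action and rationale, and low-confidence intents are routed to the
# human partner, who can approve or override them before anything happens.

from dataclasses import dataclass

@dataclass
class Intent:
    action: str        # what the automation is about to do
    reason: str        # why, in terms the human partner can evaluate
    confidence: float  # how sure the model is about this choice

def propose(congestion_estimate: float) -> Intent:
    # Stand-in for the AI's actual decision logic.
    if congestion_estimate > 0.7:
        return Intent("slow down", "congestion predicted ahead", confidence=0.62)
    return Intent("maintain speed", "road ahead looks clear", confidence=0.95)

def execute_with_oversight(intent: Intent, human_approves) -> str:
    # Low-confidence intents are surfaced to the human instead of silently applied.
    if intent.confidence < 0.75 and not human_approves(intent):
        return "human override: keeping current behavior"
    return f"executing: {intent.action} ({intent.reason})"

# Example: the human partner sees the announced intent and declines it.
intent = propose(congestion_estimate=0.8)
print(f"AI intends to {intent.action} because {intent.reason} "
      f"(confidence {intent.confidence:.2f})")
print(execute_with_oversight(intent, human_approves=lambda proposed: False))
```

Announcing the intent before acting is what lets the human partner anticipate the automation’s next move instead of being surprised by it.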


The AI Resistance

Not everyone wants AI or believes that its benefits outweigh the costs. If we dismiss the cautious as Luddites, we risk letting the technology genuinely victimize the people who use it.

Note: “Luddite” is a term describing the 19th-century English workmen who vandalized the labor-saving machinery that took their jobs. The term has since been extended to refer to anyone who is opposed to technological change.1

 

Examples

When Waymo decided to test self-driving cars in an Arizona town without first seeking the residents’ approval, residents feared losing their jobs and their lives. Feeling they had no other options open to them, some threw rocks at the automated cars and slashed their tires as a means of protest.2

Cambridge Analytica used AI to surreptitiously influence voters through false information that was individually targeted. Public officials, privacy specialists, and investigative journalists channeled feelings of outrage, betrayal, confusion, and distrust into increased pressure to strengthen legislative protection.3

Why is this a fail?

The reluctance to adopt AI without reservation is warranted. Just a few years ago, the AI developer community saw the increase in AI capabilities as unadulterated progress and good. More recently, we’re learning that sometimes this holds true, and sometimes progress means progress only for some: AI can have harmful impacts on users, communities, and the employees of our own AI companies.4,5


What happens when things fail?

Even those who are “early adopters” or an “early majority” in the technology adoption lifecycle6 may still have reservations about fully integrating the new technology into their lives. The people who reject AI entirely may have concerns that cannot be addressed by time, education, and training. For instance, some people find the automated email replies that mimic individual personalities creepy,7 some people are worried about the national security implications caused by deepfakes,8 some decry the mishandling of the private data that drives AI platforms,9 some fear losing their jobs to AI,10 some protest the disproportionate impact of mass surveillance on minority groups,11,12,13 and some fear losing their lives to an AI-driven vehicle.14

Anger, frustration, and resistance to AI are natural reactions when society seems to assume that technology adoption is inevitable, even when that technology is disruptive to people’s safety or way of life. The idea that the believers should just wait out the laggards and Luddites, or worse, treat them as the problem, is flawed. Instead, we should listen to the resisters’ concerns and bring them in to guide the solution.

Explore the Strategies for These Fails:

Hold AI to a Higher Standard
Involve the Communities Affected by the AI
Make Our Assumptions Explicit
Monitor the AI’s Impact and Establish Layers of Accountability
It’s OK to Say No to Automation
Plan to Fail
Try Human-AI Couples Counseling
Envision Safeguards for AI Advocates
AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team
Ask for Help: Hire a Villain
Offer the User Choices
Require Objective, Third-party Verification and Validation
Incorporate Privacy, Civil Liberties, and Security from the Beginning
Use Math to Reduce Bad Outcomes Caused by Math
Promote Better Adoption through Gameplay
Entrust Sector-specific Agencies to Establish AI Standards for Their Domains

Add Your Experience! This site should be a community resource and would benefit from your examples and voices; please write to us and share them.