Failure to Launch: How People Can React to AI

People often hold multiple, contradictory views at the same time. There are plenty of examples when it comes to human interaction with technology: people can be excited that Amazon or Netflix recommendations really reflect their tastes, yet worry about what that means for their privacy; they can use Siri or Google's voice assistant to help them remember things, yet lament losing their short-term memory; they can rely on various newsfeeds for information, even if they know (or suspect) that the primary goal of the algorithms behind those newsfeeds is to hold their attention, not to deliver the broadest news coverage.

These seeming contradictions all revolve around trust, which involves belief and understanding, dependency and choice, perception and evidence, emotion and context. All of these elements of trust are critical to having someone accept and adopt an AI. When we as AI developers and deployers include technical, cultural, organizational, sociological, interpersonal, psychological, and neurological perspectives, we can more accurately align people’s trust in an AI with the AI’s actual trustworthiness, and thereby facilitate adoption of the AI.

Explore the Three Fails in This Category:

In AI We Overtrust

When people aren’t familiar with AI, cognitive biases and external factors can prompt them to trust the AI more than they should. Even professionals can overtrust AIs deployed in their own fields. Worse, people can change their perceptions and beliefs to be more in line with an algorithm’s, rather than the other way around.

Examples

A research team put 42 test participants into a fire emergency scenario featuring a robot responsible for escorting them to an emergency exit. Even though the robot passed obvious exits and got lost, 37 participants continued to follow it.1,2

Consumers reported more interest in a product when the digital ad for it was targeted specifically to them, and they even adjusted their own preferences to align with what the ad implied about them.3

In a research experiment, students were told that a robot would determine who had pushed a button and “buzzed in” first, thus winning a game. In reality, the robot tried to maximize participant engagement by evenly distributing who won. Even as the robot made noticeably inaccurate choices, the participants did not attribute the discrepancy to the robot having ulterior motives.4

Why is this a fail?

When an AI is helping people do things better than they would on their own, it is easy to assume that the platform’s goals mirror the user’s goals. However, there is no such thing as a “neutral” AI.5 During the design process we make conscious and unconscious assumptions about what the AI’s goals and priorities should be and what data streams the AI should learn from. Often, our incentives and users’ incentives align, and this works out wonderfully: users drive to their destinations, or they enjoy the AI-recommended movie. But when goals don’t align, most users don’t realize that they’re potentially acting against their own interests. They are convinced that they’re making rational and objective decisions, because they are listening to a rational and objective AI.6
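To make this concrete, here is a deliberately simplified sketch in Python. The items, fields, weights, and scoring function are all hypothetical, invented for illustration rather than drawn from any real platform; the point is only to show how a single design-time parameter can quietly decide whether a recommender serves the user’s stated interests or the platform’s engagement goal.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    user_relevance: float    # how well the item matches the user's stated interests (0-1)
    engagement_pull: float   # how likely the item is to keep the user on the platform (0-1)

def score(item: Item, platform_weight: float) -> float:
    """Blend user relevance with the platform's engagement objective.

    platform_weight is a design decision the user never sees: at 0.0 the
    ranking serves only the user, at 1.0 it serves only the platform.
    """
    return (1 - platform_weight) * item.user_relevance + platform_weight * item.engagement_pull

items = [
    Item("In-depth documentary", user_relevance=0.9, engagement_pull=0.3),
    Item("Outrage-bait clip", user_relevance=0.2, engagement_pull=0.95),
]

for w in (0.0, 0.7):
    top = max(items, key=lambda i: score(i, platform_weight=w))
    print(f"platform_weight={w}: top recommendation -> {top.title}")

With the weight at 0.0 the documentary is recommended; at 0.7 the outrage-bait clip wins. The user only ever sees the resulting recommendation, never the weighting that produced it, which is why a misaligned goal is so easy to mistake for an objective one.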

Furthermore, how users actually act and how they think they’ll act often differ. For example, a journalist documented eight drivers in 2013 who overrode their own intuition and blindly followed their GPS: one turned onto the stairs of a park entrance, one drove into a body of water, and another ran straight into a house, all because of how they interpreted the GPS instructions.7

Numerous biases can contribute to overtrusting technology. Research highlights three prevalent ones:

1. Humans can be biased to assume that automation is perfect, and therefore start with high initial trust.8 This “automation bias” leads users to trust automated and decision-support systems even when that trust is unwarranted.

2. Similarly, people generally believe something is true if it comes from an authority or expert, even if no supporting evidence is supplied.9 In this case, the AI is perceived as the expert.

3. Lastly, humans use mental shortcuts to make sense of complex information, which can lead to overtrusting an AI if it behaves in a way that conforms to our expectations, or if we have only an unclear understanding of how the AI works. Cathy O’Neil, mathematician and author, writes that our relationship to data is similar to an ultimate belief in God: “I think it has a few hallmarks of worship – we turn off parts of our brain, we somehow feel like it’s not our duty, not our right to question this.”10

Therefore, the more an AI is associated with a supposedly flawless, data-driven authority, the more likely humans are to overtrust it. Under these conditions, even professionals in a given field can cede their authority to the AI despite their specialized knowledge.11,12

Another outcome of overtrust is that people tend to align with the model’s solution rather than with their own, which can make AI predictions self-fulfilling.13 These outcomes also show that having a human supervise an AI will not necessarily work as a failsafe.14

What happens when things fail?

The phenomenon of overtrust in AI has contributed to two powerful and potentially frightening outcomes. First, since AIs often have a single objective and reinforce increasingly specialized ends (see more in the “Feeding the Feedback Loop” Fail), users aren’t presented with alternative perspectives and are directed toward more individualistic, non-inclusive ways of thinking.

Second, the pseudo-authority of AI has allowed pseudosciences to re-emerge with a veneer of validity. Demonstrably invalid AI systems have been used to look at a person’s face and claim to assess that person’s tendencies toward criminality or violence,15,16 current feelings,17 sexual orientation,18 and IQ or personality traits.19 These modern-day phrenology and physiognomy products and claims are unethical, irresponsible, and dangerous.

Although these outcomes may seem extreme, overtrust has a wide range of consequences, from leading people to act against their own interests to propagating discriminatory practices.

Explore the Recommendations:

Hold AI to a Higher Standard
Involve the Communities Affected by the AI
Make Our Assumptions Explicit
Monitor the AI’s Impact and Establish Layers of Accountability
It’s OK to Say No to Automation
Plan to Fail
Try Human-AI Couples Counseling
Envision Safeguards for AI Advocates
AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team
Ask for Help: Hire a Villain
Offer the User Choices
Require Objective, Third-party Verification and Validation
Incorporate Privacy, Civil Liberties, and Security from the Beginning
Use Math to Reduce Bad Outcomes Caused by Math
Promote Better Adoption through Gameplay
Entrust Sector-specific Agencies to Establish AI Standards for Their Domains
