The Cult of AI: Perceiving AI to Be More Mature Than It Is

AI is all about boundaries: an AI works well when we, as developers and deployers, clearly define its task and genuinely understand the environment in which it will be used. New AI applications are exciting in part because they push past previous technical boundaries, as when AI won at chess, then Jeopardy, then Go, then StarCraft. But what happens when we assume that AI is ready to break those barriers before the technology or the environment is truly ready? This section presents examples where AI was pushed past its technical or environmental limits, whether because the AI was placed in roles it wasn't suited for, because user expectations didn't align with its abilities, or because the world was assumed to be simpler than it really is.

Explore the Three Fails in This Category:

No Human Needed: The AI’s Got This

We often intend to design AIs that assist their human partners, but what we create can end up replacing those partners instead. When the AI isn't ready to perform the task completely without human help, that can lead to significant problems.

Examples

Microsoft released Tay, an AI chatbot designed “to engage and entertain” and learn from the communication patterns of the 18-to-24-year-olds with whom it interacted. Within hours, Tay started repeating some users’ sexist, anti-Semitic, racist, and other inflammatory statements. Although the chatbot met its learning objective, the way it did so required individuals within Microsoft to modify the AI and address the public fallout from the experiment.1

Because Amazon employs so many warehouse workers, the company has used a heavily automated process that tracks employee productivity and is authorized to fire people without the intervention of a human supervisor. As a result, some employees have said they avoid using the bathroom for fear of being fired on the spot. Implementing this system has led to legal and public relations challenges, even if it did reduce the workload for the company’s human resources employees or remaining supervisors.2

Why is this a fail?

Perceptions of what AI is suited for do not always align with the research. Deciding which tasks are better suited to humans and which to machines can be traced back to Fitts's "men are better at, machines are better at" (MABA-MABA) list from 1951.3 A modern-day interpretation of that list might allocate tasks that involve judgment, creativity, and intuition to humans, and tasks that involve responding quickly or storing and sifting through large amounts of data to the AI.4,5 More advanced AI applications can be designed to blur those lines, but even then the AI will likely need to interact with humans in some capacity.

Like any technology, AI may not work as intended or may have undesirable consequences. If the AI is designed to work entirely on its own, design considerations meant to foster partnership are likely to be overlooked, which imposes additional burdens on human partners when they are inevitably called upon.6,7

What happens when things fail?

Semi-autonomous cars provide a great example of how the same burdens that have been studied and addressed over decades in the aviation industry are re-emerging in a new technology and marketplace.

Lost context – As more inputs and decisions are automated, human partners risk losing the context they rely on to make informed decisions. Further, they can be surprised by decisions their AI partner makes because they do not fully understand how those decisions were reached,8 since the information they would normally rely on is often obscured from them by the AI's processes. For example, when a semi-autonomous car passes control back to the human driver, the driver may have to decide quickly what to do without knowing why the AI handed over control, which increases the likelihood of error.

Cognitive drain – As AIs get better at tasks that humans find dull and routine, humans can be left with only the hardest and most cognitively demanding tasks. For example, traveling in a semi-autonomous car might require the human driver to monitor both the vehicle, to see whether it is behaving reliably, and the road, to see whether conditions require human intervention. Because the humans are then engaged primarily in cognitively demanding work, they are at higher risk of the negative effects of cognitive overload, such as decreased vigilance or an increased likelihood of error.

Human error traded for new kinds of error – Human-AI coordination introduces new challenges and learning curves. For example, researchers have documented that drivers believe they can respond to rare events more quickly and effectively than they actually can.9 If this mistaken belief is unintentionally built into the AI's design, it can create a dangerously false sense of security for both developers and drivers.

Reduced human skills or abilities – If the AI becomes responsible for everything, humans have less opportunity to practice the skills that built their knowledge and expertise in the first place (i.e., the experiences that enable them to perform more complex or nuanced activities). Driving studies have indicated that human attentiveness and monitoring of traffic and road conditions decrease as automation increases. Thus, at the moments when experience and attention are needed most, they may already have atrophied because of humans' reliance on AI.


Add Your Experience! This site should be a community resource and would benefit from your examples and voices. You can write to us by clicking here.