You Call This Artificial “Intelligence”? AI Meets the Real World

AI systems can perform specific, well-defined tasks so well that their capabilities can appear superhuman. For instance, AI can recognize common images and objects better than human beings, sift through large amounts of data faster, and master more languages.[1] However, it is important to remember that an AI’s success is task specific: its ability to complete a task, such as recognizing images, is contingent on the data it receives and the environment it operates in. Because of this, AI applications are sometimes fooled in ways that humans never would be, particularly when they encounter situations beyond their abilities. The examples below describe situations where environmental factors exceeded AI’s “superhuman” capabilities and invalidated whatever contingency planning developers or deployers had introduced.

Explore the Three Fails in This Category:

Sensing is Believing

When sensors are faulty, or the code that interprets their data is faulty, the results can be extremely damaging.

Examples

When the battery died on early versions of “smart” thermostats, houses got really, really cold.[2] Later versions had appropriate protections built into the code to ensure this wouldn’t happen.

A preliminary analysis of the Boeing 737 MAX airliner crashes found that a faulty sensor “erroneously reported that the airplane was stalling… which triggered an automated system… to point the aircraft’s nose down,” when the aircraft was not actually stalling.[3] Boeing subsequently added to all models the safety features that would have alerted pilots to the disagreement between the working sensor and the failed one (a minimal sketch of this kind of cross-check follows these examples).

A woman discovered that any person’s fingerprint could unlock her phone’s “vault-like security” after she had fitted the phone with a $3 screen protector. Customers were told to avoid fingerprint login until the vendor could fix the code.[4]
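To make the cross-check idea concrete, here is a minimal sketch in Python of gating an automated action on agreement between redundant sensors. Everything here is an illustrative assumption (the names SensorDisagreement, DISAGREE_TOLERANCE, and the reader callables are all hypothetical), not any vendor’s actual code.

    class SensorDisagreement(Exception):
        """Raised when redundant sensors diverge beyond tolerance."""

    DISAGREE_TOLERANCE = 5.0  # max allowed difference between readings (illustrative units)

    def cross_checked_reading(read_left, read_right):
        """Return a trusted reading, or raise so a human can take over.

        read_left and read_right are callables that each return a float
        from an independent sensor. If they disagree, we surface an alert
        instead of silently acting on either value.
        """
        left, right = read_left(), read_right()
        if abs(left - right) > DISAGREE_TOLERANCE:
            raise SensorDisagreement(f"sensor disagreement: {left:.1f} vs {right:.1f}")
        return (left + right) / 2.0  # readings agree; use their average

The shape of the logic is the point: automation acts only when independent readings agree, and disagreement becomes an alert for a human rather than an input to be averaged away.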

Why is this a fail?

Humans use sight, smell, hearing, taste, and touch to perceive and make sense of the world. These senses work in tandem and can serve as backups for one another; for instance, you might smell smoke before you see it. But human senses and processing aren’t perfect: they can be influenced, they can be confused, and they can degrade.

What happens when things fail?

Similar to humans, some automated systems rely on sensors to gather data about their operating environments and on code to process and act on that data. And like human senses, these sensors and the interpretation of their readings are imperfect: they can be influenced by the composition or labeling of the training dataset, confused by erroneous or unexpected inputs, and degraded as parts age. AI applications tend to break if we haven’t included redundancy, guardrails to control behavior, or code to deal gracefully with errors, as the sketch below illustrates.
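As one concrete illustration, here is a minimal sketch assuming a hypothetical smart thermostat like the one in the examples above: if the temperature reading is stale or implausible (say, because the battery died), the controller falls back to a conservative heating setpoint instead of doing nothing. Every name and threshold here is an assumption chosen for illustration.

    import time

    SAFE_SETPOINT_C = 18.0              # conservative fallback target
    MAX_READING_AGE_S = 300             # readings older than 5 minutes count as stale
    PLAUSIBLE_RANGE_C = (-30.0, 60.0)   # outside this range, assume a sensor fault

    def choose_setpoint(reading_c, reading_time_s, desired_c):
        """Pick the setpoint to act on, degrading gracefully on bad input."""
        stale = (time.time() - reading_time_s) > MAX_READING_AGE_S
        implausible = (
            reading_c is None
            or not (PLAUSIBLE_RANGE_C[0] <= reading_c <= PLAUSIBLE_RANGE_C[1])
        )
        if stale or implausible:
            # Guardrail: distrust the data but keep the house livable,
            # rather than shutting the heat off entirely.
            return SAFE_SETPOINT_C
        return desired_c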

We can learn from a long history of research on sensor failure in, for example, the automobile, power production, manufacturing, and aviation industries. In aviation, research findings have led to certification requirements such as triple redundancy for any part of an aircraft necessary for flight.[5]
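To show what triple redundancy buys, here is a minimal sketch of median voting across three sensor readings, one common way redundant readings are fused; the names and the tolerance are illustrative assumptions, not taken from any certification standard.

    from statistics import median

    OUTLIER_TOLERANCE = 1.0  # how far a reading may sit from the median (illustrative)

    def triple_vote(a, b, c):
        """Fuse three redundant readings by median voting.

        The median tolerates one faulty sensor: a single wild value
        cannot pull the result away from the two healthy readings.
        """
        readings = [a, b, c]
        agreed = median(readings)
        suspects = [r for r in readings if abs(r - agreed) > OUTLIER_TOLERANCE]
        if suspects:
            print(f"warning: possible sensor fault(s): {suspects}")  # alert, keep operating
        return agreed

    # One failed sensor reports nonsense; the vote still returns 20.2.
    print(triple_vote(20.0, 20.2, 93.7))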

Add Your Experience! This site is meant to be a community resource and would benefit from your examples and voices.