
Benjamin Franklin once said, “If you fail to plan, you are planning to fail.”1 Yet planning for failure can make people uncomfortable, pushing them to avoid talking about fails rather than seeing failure as an opportunity. This website makes the case that understanding and sharing information about artificial intelligence (AI) failures can provide lessons for better preventing, anticipating, or mitigating future fails.

These lessons derive from a more holistic view of automated technologies. Such technologies are more than independent widgets; they are part of a complex ecosystem that interacts with and influences human behavior, decision making, preferences, strategies, and ways of life in beneficial, and sometimes less beneficial, ways.

“AI Fails” proposes a shift in perspective: we should measure the success of an AI system by its impact on human beings, rather than prioritizing its mathematical or economic properties (e.g., accuracy, false alarm rate, or efficiency). Such a shift has the potential to empower the development and deployment of amazing as well as responsible AI.

So please dive in and explore. The 6 categories below each contain several types of fails, along with lessons learned for those fails. All of the lessons learned can also be found by clicking on the 7th icon. For more context, including how this website defines “AI” and “we” and a synopsis of the key lessons, scroll down past the icons.

Turning Lemons into Lemonade

The Cult of AI:
Perceiving AI to Be More Mature Than It Is

You Call This “Intelligence”?:
AI Meets the Real World

Turning Lemons into Reflux:
When AI Makes Things Worse

We’re Not Done Yet:
After Developing the AI

Failure to Launch:
How People Can React to AI

AI Registry:
The Things We’ll Need That Support AI

Lessons Learned


AI’s Balancing Act: Amazing Possibilities and Potential Harm

The most advanced of these automated technologies, AI, is not just emerging everywhere; it is being rapidly integrated into people’s lives. The 2018 Department of Defense AI Strategy provides a great way to think about AI: simply as “the ability of machines to perform tasks that normally require human intelligence.”2

AI has tremendously valuable applications, for instance when it promises to translate a person’s conversation into another language in real time,3 more accurately diagnose patients and propose treatments,4 or take care of the elderly.5 In these cases, everyone can enthusiastically accept AI.

However, when it is reported that individuals can be microtargeted with falsified information to sway their election choices,6 that mass surveillance leads to imprisonment and suppression of populations,7,8,9 or that self-driving cars have caused deaths,10 people realize that AI can lead to real harm. In these cases, the belief in AI’s inevitability can elicit terror. As AI developers and deployers, we experience and observe both extremes of this continuum, and everything in between.


Embracing and Learning from AI’s Deep History

This website draws heavily on decades of research and expertise, particularly in domains where the cost of failure is high enough (e.g., the military or aviation) that human factors and human-machine teaming have been thoroughly analyzed and the findings well integrated into system development. Though many of these fails and lessons apply to more than AI, collectively they represent the systemic challenges faced by AI developers and practitioners.

In addition, AI is fundamentally different from other technologies in several ways, notably that 1) decisions aren’t static, since data and model versions are updated all the time, and 2) models don’t always come with explanations, which means that even designers may not know what factors affect or even drive decisions.

AI is also fundamentally different in the way it interacts with humans, since 1) the technology is new enough to most people that they can be (and have been) influenced to trust an AI system more than they should, and 2) its reach is vast enough that a single AI with a single programmed objective can scale to affect human decisions at a global level.

On this website, the term “AI” encompasses a range of capabilities, from earlier and often simpler automated technologies, whose lessons still apply, to more sophisticated AI approaches, some of whose lessons are relatively new and unresolved.


Key Lessons from a Human-centric Mindset Regarding AI 

1. Developing AI is a multidisciplinary problem. AI challenges, and the products that address them, can be technical or rooted in human behavior, and are often a blend of the two. By including multidisciplinary perspectives, we can more clearly articulate design tradeoffs between different priorities and outcomes. Then the broader team can work towards having the human and technical sides of AI reinforce, rather than interfere with, each other.

2. An AI application affects more than just end-users. Input from stakeholders is essential to structuring the AI’s objectives in ways that increase adoption and reduce potential undesired consequences. We need to involve end-users, domain experts, and the communities affected by the AI early and repeatedly. These stakeholders can also provide the societal and political contexts of the domain where the AI will operate, and can share how previous attempts to address their issues fared. Adopting the mindset that all stakeholders are our customers will help us design with all their goals in mind and create resources that give them the context and tools they need to work with the AI successfully.

3. Our assumptions shape AI. There is no such thing as a neutral, impartial, or unbiased AI. Our underlying assumptions about the data, model, user behaviors, and environment affect the AI’s objectives and outcomes. We should remember that those assumptions stem from our own, often subconscious, social values, and that a deployed AI system can unintentionally replicate those values and put them into practice. Given the current composition of the AI development workforce, all too often those values represent how young, white, technically oriented, Western men interact with the world. No homogeneous group, regardless of its characteristics, can reflect the full spectrum of priorities and considerations of all possible system users. To address this concern, we should strive for diversity in teammates’ experiences and backgrounds, be responsive when teammates or stakeholders raise issues, and provide documentation of the assumptions that went into the AI system.

4. Documentation can be a key tool in reducing future failures. When we make a good product, end-users and consumers will want to use it, and other AI developers may want to repurpose it for their own domains. To do so appropriately and safely, they will need to know which uses of the AI we did and did not intend, the design tradeoffs we considered and acted on, and the risks we identified and the mitigations we put in place. Therefore, the original developers need to capture their assumptions and tradeoff decisions, and organizations have to develop processes that facilitate proactive and ongoing outreach. (A sketch of what such documentation might capture appears after these lessons.)

5. Accountability must be tied to an AI’s impact. When using the data or the AI could cause financial, psychological, physical, or other harm, we must consider whether AI offers the best solution to the problem at hand. Beyond our good intentions and commitment to ethical values, it is the oversight, accountability, and enforcement mechanisms in place that will lead to ethical outcomes. These mechanisms shouldn’t equate to excessive standardization or policies that stymie technological development; instead, they should encourage proactive approaches to implementing the previous lessons. The more an AI application could influence people’s behavior and livelihoods, the more careful consideration and governance it needs.
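
As one illustration of the documentation described in lesson 4, here is a minimal, hypothetical sketch of a model-card-style record in Python. The field names and example values are assumptions made for this sketch, not part of the lessons themselves.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    """Hypothetical model-card-style record capturing the documentation
    described in lesson 4: intended and unintended uses, assumptions,
    design tradeoffs, known risks, and the mitigations put in place."""
    name: str
    version: str
    intended_uses: List[str] = field(default_factory=list)
    out_of_scope_uses: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)       # about data, users, environment
    design_tradeoffs: List[str] = field(default_factory=list)  # e.g., accuracy vs. explainability
    known_risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

# Illustrative example only; the values below are invented for this sketch.
card = ModelCard(
    name="triage-assistant",
    version="2.3.1",
    intended_uses=["flag cases for clinician review"],
    out_of_scope_uses=["making final treatment decisions without a clinician"],
    assumptions=["training data reflects the deploying hospital's patient population"],
    design_tradeoffs=["favored recall over precision to reduce missed cases"],
    known_risks=["a higher false-alarm rate may cause alert fatigue"],
    mitigations=["periodic threshold reviews with clinical staff"],
)
```

A record like this, kept alongside each released version of a model, gives downstream developers and stakeholders a concrete place to see what was, and was not, considered.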

Add Your Experience! This site should be a community resource and would benefit from your examples and voices. You can write to us by clicking here.