Turning Lemons into Lemon…Reflux: When AI Makes Things Worse

Sometimes the biggest challenges emerge when AI does exactly what it is programmed to do! An AI doesn’t recognize social contexts or constructs, and this section examines some of the unwanted impacts that can result from the divergence between technical and social outcomes. The first three fails each examine a different component of an AI system: the training data fed into the model, the objective of the AI and the metrics chosen to measure its success, and the AI’s interactions with its environment. A fourth, special case looks at the AI arms race.

Explore the Four Fails in This Category:

Irrelevant Data, Irresponsible Outcomes

A lack of understanding about the training data, its properties, or the conditions under which the data was collected can result in flawed outcomes for the AI application.

Examples

In 2008, early webcam facial tracking algorithms could not identify the faces of darker-skinned individuals because the training data consisted entirely of light-skinned faces (and most of the developers were white).1 One particularly illuminating demonstration of this fail occurred in 2018, when Amazon’s facial recognition system confused pictures of 28 members of Congress (the majority of them dark-skinned) with mugshots.2 The ten-year persistence of these fails highlights the systemic and cultural barriers to fixing the problem, despite it being well acknowledged.

Some 40,000 Michigan residents were wrongly accused of fraud by a state-operated computer system with an error rate as high as 93%. Why? The system could not convert some data from legacy sources, and documentation and records were missing, so it often issued a fraud determination without access to all the information it needed. A lack of human supervision meant the problem went unaddressed for over a year, but even with supervision the underlying problem would have remained: the data may simply not have been usable for this application.3

An AI for allocating healthcare services offered more care to white patients than to equally sick black patients. Why? The AI was trained on real data patterns, where unequal access to care means less money is traditionally spent on black patients than on white patients with the same level of need. Since the AI’s goal was to drive down costs, it focused on the more expensive group and therefore offered more care to white patients.4,5 This example shows the danger of relying on existing data that carries a history of systemic injustice, as well as the importance of choosing between a mathematical and a human-centric measure when trying to promote the desired outcome.

Why is this a fail?

Many AI approaches reflect the patterns in the data they are fed. Unfortunately, data can be inaccurate, incomplete, unavailable, outdated, irrelevant, or systematically problematic. Even relevant and accurate data may be unrepresentative and unsuitable for the new AI task. Since data is highly contextual, the original purposes for collecting the data may be unknown or not appropriate to the new task, and/or the data may reflect historical and societal imbalances and prejudices that are now deemed illegal or harmful to segments of society.6
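To see how quickly unrepresentative data turns into unequal outcomes, here is a minimal sketch in Python using entirely synthetic data. The groups, numbers, and one-threshold “model” are invented for illustration; the point is only that a model tuned on data dominated by one group can look accurate overall while failing the underrepresented group.

```python
# A minimal, synthetic sketch: a single model tuned on data dominated by one
# group can look accurate overall while failing the underrepresented group.
# All numbers, group names, and the one-threshold "model" are invented.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, positive_mean):
    """Toy data: one feature; positives are centered at positive_mean."""
    labels = rng.integers(0, 2, size=n)
    features = rng.normal(np.where(labels == 1, positive_mean, 0.0), 1.0)
    return features, labels

# Group A dominates the training set; Group B's positives look different.
xA, yA = make_group(900, positive_mean=2.0)
xB, yB = make_group(100, positive_mean=-2.0)   # opposite direction
x_train = np.concatenate([xA, xB])
y_train = np.concatenate([yA, yB])

# "Training": pick the single threshold that minimizes overall error.
candidates = np.linspace(-4, 4, 401)
errors = [np.mean((x_train > t).astype(int) != y_train) for t in candidates]
threshold = candidates[int(np.argmin(errors))]

def error_rate(x, y):
    return np.mean((x > threshold).astype(int) != y)

print(f"chosen threshold: {threshold:+.2f}")
print(f"overall error:    {error_rate(x_train, y_train):.2f}")
print(f"Group A error:    {error_rate(xA, yA):.2f}")   # low
print(f"Group B error:    {error_rate(xB, yB):.2f}")   # roughly coin-flip or worse
```

On this synthetic data, the minority group’s error rate ends up several times higher than the majority group’s, even though the overall number looks acceptable.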

What happens when things fail?

When an AI system is trained on data with flawed patterns, the system doesn’t just replicate them, it can encode and amplify them.7 Without qualitative and quantitative scientific methods to understand the data and how it was collected, the quality of data and its impacts are difficult to appreciate. Even when we apply these methods, data introduces unknown nuances and patterns (which are sometimes incorrectly grouped together with human influences and jointly categorized as ‘biases’) that are really hard to detect, let alone fix.8,9

Statistics can help us address some of these pitfalls, but we have to be careful to collect enough, and appropriate, statistical data. The larger issue is that statistics don’t capture social and political contexts and histories. We must remember that these contexts and histories have too often resulted in comparatively greater harm to minority groups (gender, sexuality, race, ethnicity, religion, etc.).10

Documentation about the data, including why the data was collected, the method of collection, and how it was analyzed, goes a long way toward helping us understand the data’s impact.
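One lightweight way to keep that documentation attached to the data is a structured record, in the spirit of the “datasheets for datasets” idea. The sketch below is only an illustration; the fields and example values are hypothetical rather than an established schema.

```python
# A hypothetical, minimal "datasheet" kept alongside a dataset. The fields and
# example values are illustrative, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    name: str
    purpose: str                 # why the data was originally collected
    collection_method: str       # how it was gathered (instruments, sampling)
    collection_period: str
    known_gaps: list = field(default_factory=list)      # underrepresented groups/conditions
    preprocessing: list = field(default_factory=list)   # analyses/transformations applied
    appropriate_uses: list = field(default_factory=list)
    inappropriate_uses: list = field(default_factory=list)

sheet = DatasetDatasheet(
    name="webcam_faces_v1",      # hypothetical dataset
    purpose="Tune face tracking for a consumer webcam",
    collection_method="Volunteer employees recorded in office lighting",
    collection_period="2007-2008",
    known_gaps=["few darker-skinned participants", "single lighting condition"],
    preprocessing=["cropped to face bounding box", "converted to grayscale"],
    appropriate_uses=["prototyping face tracking under similar conditions"],
    inappropriate_uses=["deployment to a general consumer population"],
)
print(sheet.known_gaps)
```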


You Told Me to Do This

An AI will do what we program it to do. But how it does so may differ from what users want, especially if we don’t consider social and contextual factors when developing the application.

Examples

An AI trained to identify cancerous skin lesions in images was successful, but not because it learned to distinguish the shapes and colors of cancerous lesions from those of non-cancerous features. It succeeded because only the images of cancerous lesions contained rulers, and the AI based its decision on the presence or absence of a ruler in the photo.1 This example shows the importance of understanding the key parameters an AI uses to make a decision, and illustrates how we may incorrectly assume that an AI makes decisions just as a human would.

An algorithm designed to win at Tetris chose to pause the game indefinitely right before the next piece would cause it to lose.2 This example shows how an AI will mathematically satisfy its objective but fail to achieve the intended goals, and that the “spirit” of the rules is a human constraint that may not apply to the AI.

OpenAI created a text-generating AI (i.e., an application that can write text all on its own) whose output was indistinguishable from text written by humans. The organization decided to withhold full details of the original model because it was so convincing that malicious actors could direct it to generate propaganda and hate speech.3,4 This example shows how a well-performing algorithm does not inherently incorporate moral restrictions; adding that awareness would be the responsibility of the original developers or deployers.

Why is this a fail?

Even if an AI has perfectly relevant and representative data to learn from, the way the AI tries to perform its job can lead to actions we didn’t want or anticipate. We give the AI a specific task and a mathematical way to measure progress (sometimes called the “objective function” and “error function,” respectively). Being human, we make assumptions about how the algorithm will perform its task, but all the algorithm does is find a mathematically valid solution, even if that solution goes against the spirit of what we intended (the literature calls this “reward hacking”). Unexpected results are more common in complicated systems, in applications that operate over longer periods of time, and in systems that have less human oversight.5
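To make “mathematically valid but not what we meant” concrete, here is a deliberately tiny sketch inspired by the Tetris example above. The environment, the 5% risk of losing, and the two candidate policies are all invented; the only point is that the stated objective (one point per step survived) is maximized by pausing forever.

```python
# A toy illustration of "reward hacking": the objective says "survive as long
# as possible," and the best-scoring policy is to pause the game forever.
# The environment and policies here are invented for illustration.
import random

def play(policy, max_steps=200):
    """Return total reward: +1 per step survived, nothing after the game ends."""
    random.seed(0)
    alive, reward = True, 0
    for _ in range(max_steps):
        if not alive:
            break
        reward += 1                      # the objective: one point per step survived
        if policy() == "pause":
            continue                     # nothing changes; the game never ends
        if random.random() < 0.05:       # playing carries some risk of losing
            alive = False
    return reward

policies = {
    "always_play": lambda: "play",
    "always_pause": lambda: "pause",     # satisfies the objective, defeats its spirit
}

for name, policy in policies.items():
    print(f"{name:>12}: reward = {play(policy)}")
```

Run it and “always_pause” collects the maximum possible reward, exactly as the objective demands and exactly against its spirit.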

What happens when things fail?

The AI doesn’t recognize social context or constructs; it doesn’t appreciate that some solutions go against the spirit of the rules. Therefore, the data and the algorithms aren’t ‘biased,’ but the way the data interacts with our programmed goals can lead to biased outcomes. As designers, we set those objectives and ways of measuring success, which effectively incorporate what we value and why (consciously or unconsciously) into the AI.6

Take the AI out of it for a moment, and just think about agreeing on a definition for a word. How would you define “fair”? (Arvind Narayanan, an associate professor of computer science at Princeton, defined “fairness” 21 different ways.)7 For example, for college admissions that make use of SAT scores, a reasonable expectation of fairness would be that two candidates with the same score should have an equal chance of being admitted – this approach relies on “individual fairness.” Yet, for a variety of socio-cultural reasons, students with more access to resources perform better on the test (in fact, the organization that creates the SAT recognized this in 2019 and began providing contextual information about the test taker’s “neighborhood” and high school).8 Therefore, another reasonable expectation of fairness would be that it takes into account demographic differences – this approach relies on “group fairness.” Thus, a potential tension exists between two laudable goals: individual fairness and group fairness.
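The tension shows up even in a toy admissions table. In the sketch below (all scores, groups, and the cutoff are invented), a single score cutoff applied to everyone satisfies individual fairness by construction, yet the two groups end up admitted at very different rates because their score distributions differ.

```python
# Toy illustration (all numbers invented): one admission rule can satisfy
# "individual fairness" (identical scores get identical decisions) while
# violating "group fairness" (similar admission rates across groups), because
# the score distributions differ between the groups.
applicants = [
    # (group, SAT-style score)
    ("A", 1450), ("A", 1380), ("A", 1300), ("A", 1250), ("A", 1200),
    ("B", 1350), ("B", 1240), ("B", 1180), ("B", 1120), ("B", 1050),
]

CUTOFF = 1300   # the same rule for everyone: individual fairness by construction

def admitted(score):
    return score >= CUTOFF

def admission_rate(group):
    scores = [s for g, s in applicants if g == group]
    return sum(admitted(s) for s in scores) / len(scores)

print(f"group A admission rate: {admission_rate('A'):.0%}")   # 60%
print(f"group B admission rate: {admission_rate('B'):.0%}")   # 20%
```

Equalizing those rates would require group-specific cutoffs, which in turn means two applicants with identical scores could receive different decisions; the two definitions pull in opposite directions.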

If we want algorithms to be ‘fair’ or ‘accurate,’ we have to agree on how to best scope these terms mathematically and socially. This means being careful not to encode one interpretation of the problem, or one preferred outcome, at the expense of the considerations of others. Therefore, we need to create frameworks and guidelines for when to apply specific AI applications, and to weigh whether the potential negative impacts of an AI outweigh the benefits of implementing it.


Feeding the Feedback Loop

When an AI’s prediction is geared towards assisting humans, how a user responds can influence the AI’s next prediction. Those new outputs can, in turn, impact user behavior, creating a cycle that pushes towards a single end. The scale of AI magnifies the impact of this feedback loop: if an AI provides thousands of users with predictions, then all those people can be pushed toward increasingly specialized or extreme behaviors.

Examples

If you’re driving in Leonia, NJ, and you don’t have a yellow tag hanging from your mirror, expect a $200 fine. Why? Navigation apps have redirected cars into quiet residential neighborhoods, where the infrastructure is not set up to support that traffic. Because the town could not change the algorithm, it tried to fight the outcomes, one car at a time.1

Predictive policing AI directs officers to concentrate on certain locations. This increased scrutiny leads to more crime reports for that area. Since the AI uses the number of crime reports as a factor in its decision making, this process reinforces the AI’s decisions to send more and more resources to a single location and overlook the rest.2 This feedback loop becomes increasingly hard to break.
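A deliberately crude simulation (all numbers invented) shows how this loop runs away. In the sketch below, the two districts have identical true incident rates, patrols go wherever the most incidents have been recorded, and incidents are only recorded where a patrol is present; a tiny initial difference hardens into a permanent one.

```python
# A crude, invented simulation of the loop described above: patrols go where
# the most incidents have been recorded, and incidents are only recorded where
# patrols are present. The two districts are identical in reality, yet a tiny
# initial difference hardens into a permanent one.
import random

random.seed(1)
TRUE_RATE = {"north": 0.5, "south": 0.5}   # actual incident rates are identical
counts = {"north": 6, "south": 5}          # a small difference in historical reports

for day in range(1, 1001):
    target = max(counts, key=counts.get)   # send today's patrol to the "hot" district
    if random.random() < TRUE_RATE[target]:
        counts[target] += 1                # only patrolled incidents get recorded
    if day % 250 == 0:
        print(f"day {day}: recorded incidents = {counts}")
```

After a thousand simulated days, nearly all recorded incidents, and therefore all patrols, belong to the district that happened to start one report ahead.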

YouTube’s algorithms are designed to engage an audience for as long as possible. Consequently, the recommendation engine pushes videos with more and more extreme content, since that’s what keeps most people’s attention. Widespread use of recommendation engines with similar objectives can bring fringe content – like conspiracy theories and extreme violence – into the mainstream.3

Why is this a fail?

The scale of AI deployment can result in substantial disruption and rewiring of everyday lives. Worse, people sometimes change their perceptions and beliefs to be more in line with an algorithm, rather than the other way around.4,5

The enormous extent of the problem makes fixing it much harder. Even recognizing problems is harder, since the patterns are revealed through collective harms and are challenging to discover by connecting individual cases.6

What happens when things fail?

Decisions that seem harmless and unimportant individually can, when scaled collectively, end up at odds with public policies, financial outcomes, and even public health. Recommender systems for social media sites choose incendiary or fake articles for newsfeeds,7 health insurance companies decide which normal behaviors are deemed risky based on recommendations from AI,8 and governments allocate social services according to AIs that consider only one set of factors.9

Concerns over the extent of the feedback loops AI can cause have increased. One government organization has warned that this behavior has the potential to contradict the very principles of pluralism and diversity of ideas that are foundational to Western democracy and capitalism.10


A Special Case: AI Arms Race

Even in the 1950s, Hollywood imagined that computers might launch a war. While today the general population is (mostly) confident that AI won’t be directly tied to the nuclear launch button, just the potential of AI in military-capable applications is escalating global tensions, without a counteracting, cautionary force.1 The RAND Corporation, a nonprofit institution that analyzes US policy and decision making, describes the race to develop AI as sowing distrust among nuclear powers. Information about adversaries’ capabilities is imperfect, and the speed at which AI-based attacks could happen means that humans have less contextual information for response and may fear losing the ability to retaliate. Since there is such an advantage to a first strike, humans, not AIs, may be more likely to launch preemptively.2 Finally, the perception of a race may prompt the deployment of less-than-fully tested AI systems.3


Add Your Experience! This site should be a community resource and would benefit from your examples and voices. You can write to us by clicking here.