{"id":26,"date":"2020-03-17T04:17:46","date_gmt":"2020-03-17T04:17:46","guid":{"rendered":"https:\/\/sites.mitre.org\/aifails\/?page_id=26"},"modified":"2020-08-26T12:19:36","modified_gmt":"2020-08-26T16:19:36","slug":"turning-lemons-into-lemon","status":"publish","type":"page","link":"https:\/\/sites.mitre.org\/aifails\/turning-lemons-into-lemon\/","title":{"rendered":"Turning Lemons into Reflux: When AI Makes Things Worse"},"content":{"rendered":"\n<p>[et_pb_section fb_built=&#8221;1&#8243; specialty=&#8221;on&#8221; _builder_version=&#8221;4.0.3&#8243;][et_pb_column type=&#8221;1_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_sidebar area=&#8221;et_pb_widget_area_47&#8243; admin_label=&#8221;Turning Lemons into Reflux menu widget&#8221; _builder_version=&#8221;4.4.1&#8243; min_height=&#8221;200px&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221;][\/et_pb_sidebar][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; specialty_columns=&#8221;3&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_row_inner _builder_version=&#8221;4.0.3&#8243; custom_padding=&#8221;||0px||false|false&#8221;][et_pb_column_inner saved_specialty_column_type=&#8221;3_4&#8243; _builder_version=&#8221;4.0.3&#8243;][et_pb_text admin_label=&#8221;Turning Lemons into Reflux: When AI Makes Things Worse content&#8221; _builder_version=&#8221;4.4.8&#8243; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;0px||0px||false|false&#8221;]<\/p>\n<h1><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-3835 size-full alignleft\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/02\/icon_lemons-002.png\" alt=\"Turning Lemons into Lemon\u2026Reflux: When AI Makes Things Worse\" width=\"135\" height=\"100\" \/>Turning Lemons into Reflux: When AI Makes 
Things Worse<\/h1>\n<p>Sometimes the biggest challenges emerge when AI does exactly what it is programmed to do! An AI doesn\u2019t recognize social contexts or constructs, and this section examines some of the unwanted impacts that can result from the divergence between technical and social outcomes. The first three fails explore three components of the AI: the training data fed into the model, the objective of the AI and the metrics chosen to measure its success, and the AI\u2019s interactions with its environment. The fourth examines a special case: an AI arms race.<\/p>\n<p>&nbsp;<\/p>\n<p>[\/et_pb_text][et_pb_text admin_label=&#8221;Fail Title&#8221; _builder_version=&#8221;4.4.8&#8243; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;5px||0px||false|false&#8221; hover_enabled=&#8221;0&#8243;]<\/p>\n<div id=\"lemons\">\n\u00a0\n<\/div>\n<h5>Explore the Four Fails in This Category:<\/h5>\n<p>[\/et_pb_text][et_pb_tabs active_tab_background_color=&#8221;#a0ddf3&#8243; inactive_tab_background_color=&#8221;#e5e7e8&#8243; active_tab_text_color=&#8221;#000000&#8243; admin_label=&#8221;Fails and Lessons Learned content&#8221; module_class=&#8221;icon-tabs&#8221; _builder_version=&#8221;4.5.8&#8243; tab_text_color=&#8221;#000000&#8243; body_text_color=&#8221;#000000&#8243; tab_font_size=&#8221;15px&#8221; tab_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;|||0px|false|false&#8221; custom_padding=&#8221;|||0px|false|false&#8221; hover_enabled=&#8221;0&#8243;][et_pb_tab title=&#8221;Irrelevant Data, Irresponsible Outcomes &#8221; _builder_version=&#8221;4.5.8&#8243; hover_enabled=&#8221;0&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>Irrelevant Data, Irresponsible Outcomes<\/h3>\n<p>A lack of understanding about the training data, its properties, or the conditions under which the data was collected can result in flawed outcomes for the AI application.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" 
class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>In 2008, early webcam facial tracking algorithms could not identify the faces of darker-skinned individuals because the training data contained only light-skinned faces (and most of the developers were themselves light-skinned).<a class=\"reference\" href=\".\/references\/#7.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a> One particularly illuminating demonstration of this fail occurred in 2018, when Amazon\u2019s facial recognition system confused pictures of 28 members of Congress (the majority of them dark-skinned) with mugshots.<a class=\"reference\" href=\".\/references\/#7.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a> The ten-year persistence of these fails highlights the systemic and cultural barriers to fixing the problem, despite it being well acknowledged.<\/p>\n<p>Roughly 40,000 Michigan residents were wrongly accused of fraud by a state-operated computer system that had an error rate as high as 93%. Why? The system could not convert some data from legacy sources, and documentation and records were missing, so it often issued a fraud determination without access to all the information it needed. A lack of human supervision meant the problem went unaddressed for over a year; even with oversight, the underlying problem would remain: the data may simply not be usable for this application.<a class=\"reference\" href=\".\/references\/#7.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a><\/p>\n<p>An AI for allocating healthcare services offered more care to white patients than to equally sick black patients. Why? The AI was trained on real data patterns, where unequal access to care means less money is traditionally spent on black patients than on white patients with the same level of need. 
Since the AI\u2019s goal was to drive down costs, it focused on the more expensive group, and therefore offered more care to white patients.<a class=\"reference\" href=\".\/references\/#7.4\" target=\"_blank\" rel=\"noopener noreferrer\">4,<\/a><a class=\"reference\" href=\".\/references\/#7.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a> This example shows the danger of relying on existing data with a history of systemic injustice, as well as the importance of selecting between a mathematical and a human-centric measure to promote the desired outcome.<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\" id=\"_edn1\"><\/a><\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>Many AI approaches reflect the patterns in the data they are fed. Unfortunately, data can be inaccurate, incomplete, unavailable, outdated, irrelevant, or systematically problematic. Even relevant and accurate data may be unrepresentative and unsuitable for the new AI task. Since data is highly contextual, the original purposes for collecting the data may be unknown or not appropriate to the new task, and\/or the data may reflect historical and societal imbalances and prejudices that are now deemed illegal or harmful to segments of society.<a class=\"reference\" href=\".\/references\/#7.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>When an AI system is trained on data with flawed patterns, the system doesn\u2019t just replicate them, it can encode and amplify them.<a class=\"reference\" href=\".\/references\/#7.7\" target=\"_blank\" rel=\"noopener noreferrer\">7<\/a> Without qualitative and quantitative scientific methods to understand the data and how it was collected, the quality of data and its impacts are difficult to appreciate. 
Even when we apply these methods, data introduces unknown nuances and patterns (which are sometimes incorrectly grouped together with human influences and jointly categorized as \u2018biases\u2019) that are really hard to detect, let alone fix.<a class=\"reference\" href=\".\/references\/#7.8\" target=\"_blank\" rel=\"noopener noreferrer\">8,<\/a><a class=\"reference\" href=\".\/references\/#7.9\" target=\"_blank\" rel=\"noopener noreferrer\">9<\/a><\/p>\n<p>Statistics can help us address some of these pitfalls, but we have to be careful to collect enough, and appropriate, statistical data. The larger issue is that statistics don\u2019t capture social and political contexts and histories. We must remember that these contexts and histories have too often resulted in comparatively greater harm to minority groups (gender, sexuality, race, ethnicity, religion, etc.).<a class=\"reference\" href=\".\/references\/#7.a\" target=\"_blank\" rel=\"noopener noreferrer\">10<\/a><\/p>\n<blockquote>\n<p>The ten-year persistence of these fails highlights the systemic and cultural barriers to fixing the problem, despite it being well acknowledged<\/p>\n<\/blockquote>\n<p>Documentation about the data, including why the data was collected, the method of collection, and how it was analyzed, goes a long way toward helping us understand the data\u2019s impact.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\">\u00a0<\/th>\n  <th class=\"categoryhdr2\">\u00a0<\/th>\n  <th class=\"categoryhdr3\">\u00a0<\/th>\n  <th class=\"categoryhdr4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI 
<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"inactive\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;You Told Me to Do This&#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>You 
Told Me to Do This<\/h3>\n<p>An AI will do what we program it to do. But how it does so may differ from what users want, especially if we don\u2019t consider social and contextual factors when developing the application.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>An AI trained to identify cancerous skin lesions in images appeared successful, but not because it learned to distinguish the shapes and colors of cancerous lesions from those of non-cancerous features. Only the images of cancerous lesions contained rulers, and the AI based its decision on the presence or absence of rulers in the photos.<a class=\"reference\" href=\".\/references\/#8.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a> This example shows the importance of understanding the key parameters an AI uses to make a decision, and illustrates how we may incorrectly assume that an AI makes decisions just as a human would.<\/p>\n<p>An algorithm designed to win at Tetris chose to pause the game indefinitely right before the next piece would cause it to lose.<a class=\"reference\" href=\".\/references\/#8.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a> This example shows how an AI will mathematically satisfy its objective but fail to achieve the intended goals, and that the \u201cspirit\u201d of the rules is a human constraint that may not apply to the AI.<\/p>\n<p>OpenAI created a text-generating AI (i.e., an application that can write text all on its own) whose output was indistinguishable from text written by humans. 
The organization decided to withhold full details of the original model since it was so convincing that malicious actors could direct it to generate propaganda and hate speech.<a class=\"reference\" href=\".\/references\/#8.3\" target=\"_blank\" rel=\"noopener noreferrer\">3,<\/a><a class=\"reference\" href=\".\/references\/#8.4\" target=\"_blank\" rel=\"noopener noreferrer\">4<\/a> This example shows how a well-performing algorithm does not inherently incorporate moral restrictions; adding that awareness would be the responsibility of the original developers or deployers.<\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>Even if an AI has perfectly relevant and representative data to learn from, the way the AI tries to perform its job can lead to actions we didn\u2019t want or anticipate. We give the AI a specific task and a mathematical way to measure progress (sometimes called the \u201cobjective function\u201d and \u201cerror function,\u201d respectively). Being human, we make assumptions about how the algorithm will perform its task, but all the algorithm does is find a mathematically valid solution, even if that solution goes against the spirit of what we intended (the literature calls this \u201creward hacking\u201d). Unexpected results are more common in complicated systems, in applications that operate over longer periods of time, and in systems with less human oversight.<a class=\"reference\" href=\".\/references\/#8.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>The AI doesn\u2019t recognize social context or constructs; it doesn\u2019t appreciate that some solutions go against the spirit of the rules. Therefore, the data and the algorithms aren\u2019t \u2018biased,\u2019 but the way the data interacts with our programmed goals can lead to biased outcomes. 
As designers, we set those objectives and ways of measuring success, which effectively incorporate what we value and why (consciously or unconsciously) into the AI.<a class=\"reference\" href=\".\/references\/#8.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a><\/p>\n<p>Take the AI out of it for a moment, and just think about agreeing on a definition for a word. How would you define \u201cfair\u201d? (Arvind Narayanan, an associate professor of computer science at Princeton, defined \u201cfairness\u201d 21 different ways.)<a class=\"reference\" href=\".\/references\/#8.7\" target=\"_blank\" rel=\"noopener noreferrer\">7<\/a> For example, for college admissions that make use of SAT scores, a reasonable expectation of fairness would be that two candidates with the same score should have an equal chance of being admitted \u2013 this approach relies on \u201cindividual fairness.\u201d Yet, for a variety of socio-cultural reasons, students with more access to resources perform better on the test (in fact, the organization that creates the SAT recognized this in 2019 and began providing contextual information about the test taker\u2019s \u201cneighborhood\u201d and high school).<a class=\"reference\" href=\".\/references\/#8.8\" target=\"_blank\" rel=\"noopener noreferrer\">8<\/a> Therefore, another reasonable expectation would be one that takes demographic differences into account \u2013 this approach relies on \u201cgroup fairness.\u201d Thus, a potential tension exists between two laudable goals: individual fairness and group fairness.<\/p>\n<p>If we want algorithms to be \u2018fair\u2019 or \u2018accurate,\u2019 we have to agree on how to best scope these terms mathematically and socially. This means being wary of encoding one interpretation of the problem or preference for an outcome at the expense of the considerations of others. 
Therefore, we need to create frameworks and guidelines for when to apply specific AI applications, and weigh when the potential negative impacts of an AI outweigh the benefits of implementing it.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr 
style=\"border-bottom: 5px solid #336699\">\n  <td class=\"inactive\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;Feeding the Feedback Loop&#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>Feeding the Feedback Loop<\/h3>\n<p>When an AI\u2019s prediction is geared towards assisting humans, how a user responds can influence the AI\u2019s next prediction. Those new outputs can, in turn, impact user behavior, creating a cycle that pushes towards a single end. The scale of AI magnifies the impact of this feedback loop: if an AI provides thousands of users with predictions, then all those people can be pushed toward increasingly specialized or extreme behaviors.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>If you\u2019re driving in Leonia, NJ, and you don\u2019t have a yellow tag hanging from your mirror, expect a $200 fine. Why? Navigation apps have redirected cars through quiet residential neighborhoods, where the infrastructure is not set up to support that traffic. 
Because the town could not change the algorithm, it tried to fight the outcomes, one car at a time.<a class=\"reference\" href=\".\/references\/#9.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a><\/p>\n<p>Predictive policing AI directs officers to concentrate on certain locations. This increased scrutiny leads to more crime reports for that area. Since the AI uses the number of crime reports as a factor in its decision making, this process reinforces the AI\u2019s decisions to send more and more resources to a single location and overlook the rest.<a class=\"reference\" href=\".\/references\/#9.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a> This feedback loop becomes increasingly hard to break.<\/p>\n<p>YouTube\u2019s algorithms are designed to engage an audience for as long as possible. Consequently, the recommendation engine pushes videos with more and more extreme content, since that\u2019s what keeps most people\u2019s attention. Widespread use of recommendation engines with similar objectives can bring fringe content \u2013 like conspiracy theories and extreme violence \u2013 into the mainstream.<a class=\"reference\" href=\".\/references\/#9.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a><\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>The scale of AI deployment can result in substantial disruption and rewiring of everyday lives. Worse, people sometimes <a href=\".\/failure-to-launch\">change their perceptions and beliefs to be more in line with an algorithm<\/a>, rather than the other way around.<a class=\"reference\" href=\".\/references\/#9.4\" target=\"_blank\" rel=\"noopener noreferrer\">4,<\/a><a class=\"reference\" href=\".\/references\/#9.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a><\/p>\n<p>The enormous extent of the problem makes fixing it much harder. 
Even recognizing problems is harder, since the patterns are revealed through collective harms and are challenging to discover by connecting individual cases.<a class=\"reference\" href=\".\/references\/#9.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>Decisions that seem harmless and unimportant individually can, when collectively scaled, accumulate until they are at odds with public policies, financial outcomes, and even public health. Recommender systems for social media sites choose incendiary or fake articles for newsfeeds,<a class=\"reference\" href=\".\/references\/#9.7\" target=\"_blank\" rel=\"noopener noreferrer\">7<\/a> health insurance companies decide which normal behaviors are deemed risky based on recommendations from AI,<a class=\"reference\" href=\".\/references\/#9.8\" target=\"_blank\" rel=\"noopener noreferrer\">8<\/a> and governments allocate social services according to AIs that consider only one set of factors.<a class=\"reference\" href=\".\/references\/#9.9\" target=\"_blank\" rel=\"noopener noreferrer\">9<\/a><\/p>\n<p>Concerns over the extent of the feedback loops AI can cause have increased. 
One government organization has warned that this behavior has the potential to contradict the very principles of pluralism and diversity of ideas that are foundational to Western democracy and capitalism.<a class=\"reference\" href=\".\/references\/#9.a\" target=\"_blank\" rel=\"noopener noreferrer\">10<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\"><\/a><\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User 
Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"active\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;A Special Case: AI Arms Race&#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px\">\n<h3>A Special Case: AI Arms Race<\/h3>\n<p>Even in the 1950s, Hollywood imagined that computers might launch a war. While today the general population is (mostly) confident that AI won\u2019t be directly tied to the nuclear launch button, just the potential of AI in military-capable applications is escalating global tensions, without a counteracting, cautionary force.<a class=\"reference\" href=\".\/references\/#10.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a> The RAND Corporation, a nonprofit institution that analyzes US policy and decision making, describes the race to develop AI as sowing distrust among nuclear powers. Information about adversaries\u2019 capabilities is imperfect, and the speed at which AI-based attacks could happen means that humans have less contextual information for response and may fear losing the ability to retaliate. 
Since there is such an advantage to a first strike, humans, not AIs, may be more likely to launch preemptively.<a class=\"reference\" href=\".\/references\/#10.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a> Finally, the perception of a race may prompt the deployment of less-than-fully tested AI systems.<a class=\"reference\" href=\".\/references\/#10.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a><\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\"><\/a><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\">\u00a0<\/th>\n  <th class=\"categoryhdr2\">\u00a0<\/th>\n  <th class=\"categoryhdr3\">\u00a0<\/th>\n  <th class=\"categoryhdr4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" 
href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"active\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][\/et_pb_tabs][\/et_pb_column_inner][\/et_pb_row_inner][\/et_pb_column][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; fullwidth=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.4.8&#8243; module_alignment=&#8221;center&#8221; global_module=&#8221;3880&#8243; saved_tabs=&#8221;all&#8221;][et_pb_fullwidth_code disabled_on=&#8221;off|off|off&#8221; admin_label=&#8221;Footer menu&#8221; _builder_version=&#8221;4.5.0&#8243; background_color=&#8221;#d5dde0&#8243; text_orientation=&#8221;center&#8221; module_alignment=&#8221;center&#8221; custom_padding=&#8221;10px||10px||false|false&#8221;]Add Your Experience! This site should be a community resource and would benefit from your examples and voices. 
You can write to us by clicking <a href=\"mailto:jrotner@mitre.org;rhodge@mitre.org;ldanley@mitre.org?subject=AI Fails website\">here<\/a>.[\/et_pb_fullwidth_code][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Turning Lemons into Reflux: When AI Makes Things Worse Sometimes the biggest challenges emerge when AI does exactly what it is programmed to do! An AI doesn\u2019t recognize social contexts or constructs, and this section examines some of the unwanted impacts that can result from the divergence between technical and social outcomes. The three fails [&hellip;]<\/p>\n","protected":false},"author":142,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"class_list":["post-26","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/26","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/users\/142"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/comments?post=26"}],"version-history":[{"count":1,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/26\/revisions"}],"predecessor-version":[{"id":8775,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/26\/revisions\/8775"}],"wp:attachment":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/media?parent=26"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}