{"id":174,"date":"2020-03-18T03:53:36","date_gmt":"2020-03-18T03:53:36","guid":{"rendered":"https:\/\/sites.mitre.org\/aifails\/?page_id=174"},"modified":"2020-07-13T08:44:04","modified_gmt":"2020-07-13T12:44:04","slug":"failure-to-launch","status":"publish","type":"page","link":"https:\/\/sites.mitre.org\/aifails\/failure-to-launch\/","title":{"rendered":"Failure to Launch: How People Can React to AI"},"content":{"rendered":"\n<p>[et_pb_section fb_built=&#8221;1&#8243; specialty=&#8221;on&#8221; _builder_version=&#8221;4.0.3&#8243;][et_pb_column type=&#8221;1_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_sidebar area=&#8221;et_pb_widget_area_20&#8243; admin_label=&#8221;Failure to Launch menu widget&#8221; _builder_version=&#8221;4.0.9&#8243; min_height=&#8221;200px&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221;][\/et_pb_sidebar][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; specialty_columns=&#8221;3&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_row_inner _builder_version=&#8221;4.0.3&#8243; custom_padding=&#8221;||0px||false|false&#8221;][et_pb_column_inner saved_specialty_column_type=&#8221;3_4&#8243; _builder_version=&#8221;4.0.3&#8243;][et_pb_text admin_label=&#8221;Failure to Launch content&#8221; _builder_version=&#8221;4.4.8&#8243; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;5px||0px||false|false&#8221;]<\/p>\n<h1><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4846 size-full alignleft\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_launch-003.jpg\" alt=\"\" width=\"135\" height=\"106\" \/>\u200bFailure to Launch: How People Can React to AI<\/h1>\n<p>People often hold multiple, contradictory views at the same time. 
There are plenty of examples when it comes to human interaction with technology: people can be excited that Amazon or Netflix recommendations really reflect their tastes, yet worry about what that means for their privacy; they can use Siri and Google voice to help them remember things, yet lament losing their short-term memory; they can rely on various newsfeeds to give them information, even if they know (or suspect) that the primary goal of the algorithms behind those newsfeeds is to keep their attention, not to deliver the broadest news coverage. These seeming dichotomies all revolve around trust, which involves belief and understanding, dependency and choice, perception and evidence, emotion and context. All of these elements of trust are critical to having someone accept and adopt an AI. When we as AI developers and deployers include technical, cultural, organizational, sociological, interpersonal, psychological, and neurological perspectives, we can more accurately align people\u2019s trust in the AI to the actual trustworthiness of the AI, and thereby facilitate adoption of the AI.<\/p>\n<p>&nbsp;<\/p>\n<p>[\/et_pb_text][et_pb_text admin_label=&#8221;Fail Title&#8221; _builder_version=&#8221;4.4.8&#8243; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;5px||0px||false|false&#8221;]<\/p>\n<div id=\"launch\">\n\u00a0\n<\/div>\n<h5>Explore the Three Fails in This Category:<\/h5>\n<p>[\/et_pb_text][et_pb_tabs active_tab_background_color=&#8221;#a0ddf3&#8243; inactive_tab_background_color=&#8221;#e5e7e8&#8243; active_tab_text_color=&#8221;#000000&#8243; admin_label=&#8221;Fails and Lessons Learned content&#8221; module_class=&#8221;icon-tabs&#8221; _builder_version=&#8221;4.4.8&#8243; tab_text_color=&#8221;#000000&#8243; body_text_color=&#8221;#000000&#8243; tab_font_size=&#8221;15px&#8221; tab_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;|||0px|false|false&#8221; custom_padding=&#8221;|||0px|false|false&#8221;][et_pb_tab 
title=&#8221;In AI We Overtrust &#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>In AI We Overtrust<\/h3>\n<p>When people aren\u2019t familiar with AI, cognitive biases and external factors can prompt them to trust the AI more than they should. Even professionals can overtrust AIs deployed in their own fields. Worse, people can change their perceptions and beliefs to be more in line with an algorithm\u2019s, rather than the other way around.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>A research team put 42 test participants into a fire emergency scenario featuring a robot responsible for escorting them to an emergency exit. Even though the robot passed obvious exits and got lost, 37 participants continued to follow it.<a class=\"reference\" href=\".\/references\/#14.1\" target=\"_blank\" rel=\"noopener noreferrer\">1,<\/a><a class=\"reference\" href=\".\/references\/#14.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a><\/p>\n<p>Consumers who received a digital ad said they were more interested in a product that was specifically targeted for them, and even adjusted their own preferences to align with what the ad suggested about them.<a class=\"reference\" href=\".\/references\/#14.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a><\/p>\n<p>In a research experiment, students were told that a robot would determine who had pushed a button and \u201cbuzzed in\u201d first, thus winning a game. In reality, the robot tried to maximize participant engagement by evenly distributing who won. 
Even as the robot made noticeably inaccurate choices, the participants did not attribute the discrepancy to the robot having ulterior motives.<a class=\"reference\" href=\".\/references\/#14.4\" target=\"_blank\" rel=\"noopener noreferrer\">4<\/a><\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>When an AI is helping people do things better than they would on their own, it is easy to assume that the platform\u2019s goals mirror the user\u2019s goals. However, there is no such thing as a \u201cneutral\u201d AI.<a class=\"reference\" href=\".\/references\/#14.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a> During the design process we make conscious and unconscious assumptions about what the AI\u2019s goals and priorities should be and what data streams the AI should learn from. Often, our incentives and user incentives align, so this works out wonderfully: users drive to their destinations, or they enjoy the AI-recommended movie. But when goals don\u2019t align, most users don\u2019t realize that they\u2019re potentially acting against their interests. They are convinced that they\u2019re making rational and objective decisions, because they are listening to a rational and objective AI.<a class=\"reference\" href=\".\/references\/#14.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a><\/p>\n<p>Furthermore, how users actually act and how they think they\u2019ll act often differ. 
For example, a journalist documented eight drivers in 2013 who overrode their own intuition and blindly followed their GPS, including drivers who turned onto the stairs of the entrance to a park, a driver who drove into a body of water, and another driver who ran straight into a house, all because of their interpretation of the GPS instructions.<a class=\"reference\" href=\".\/references\/#14.7\" target=\"_blank\" rel=\"noopener noreferrer\">7<\/a><\/p>\n<blockquote>\n<p>Users are convinced that they\u2019re making rational and objective decisions, because they are listening to a rational and objective AI<\/p>\n<\/blockquote>\n<p>Numerous biases can contribute to overtrusting technology. Research highlights three prevalent ones:<\/p>\n<p><em>1. Humans can have a bias to assume automation is perfect; therefore, they have high initial trust.<\/em><a class=\"reference\" href=\".\/references\/#14.8\" target=\"_blank\" rel=\"noopener noreferrer\">8<\/a> This \u201cautomation bias\u201d leads users to trust automated and decision support systems even when it is unwarranted.<\/p>\n<p><em>2. Similarly, people generally believe something is true if it comes from an authority or expert, even if no supporting evidence is supplied.<\/em><a class=\"reference\" href=\".\/references\/#14.9\" target=\"_blank\" rel=\"noopener noreferrer\">9<\/a> In this case, the AI is perceived as the expert.<\/p>\n<p><em>3. 
Lastly, humans use mental short-cuts to make sense of complex information, which can lead to overtrusting an AI if it behaves in a way that conforms to our expectations, or if we have an unclear understanding of how the AI works.<\/em> Cathy O\u2019Neil, mathematician and author, writes that our relationship to data is similar to an ultimate belief in God: \u201cI think it has a few hallmarks of worship \u2013 we turn off parts of our brain, we somehow feel like it\u2019s not our duty, not our right to question this.\u201d<a class=\"reference\" href=\".\/references\/#14.a\" target=\"_blank\" rel=\"noopener noreferrer\">10<\/a><\/p>\n<p>Therefore, the more an AI is associated with a supposedly flawless, data-driven authority, the more likely that humans will overtrust the AI. In these conditions, even professionals in a given field can cede their authority despite their specialized knowledge.<a class=\"reference\" href=\".\/references\/#14.b\" target=\"_blank\" rel=\"noopener noreferrer\">11,<\/a><a class=\"reference\" href=\".\/references\/#14.c\" target=\"_blank\" rel=\"noopener noreferrer\">12<\/a><\/p>\n<p>Another outcome of overtrust is that the AI reinforces a tendency to align with the model\u2019s solution rather than the individual\u2019s own, pushing AI predictions to become self-fulfilling.<a class=\"reference\" href=\".\/references\/#14.d\" target=\"_blank\" rel=\"noopener noreferrer\">13<\/a> These outcomes also show that having a human supervise an AI will not necessarily work as a failsafe.<a class=\"reference\" href=\".\/references\/#14.e\" target=\"_blank\" rel=\"noopener noreferrer\">14<\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>The phenomenon of overtrust in AI has contributed to two powerful and potentially frightening outcomes. 
First, since AIs often have a single objective and reinforce increasingly specialized ends (see more in the &#8220;Feeding the Feedback Loop&#8221; <a href=\".\/turning-lemons-into-lemon\">Fail<\/a>), users aren\u2019t presented with alternative perspectives and are directed toward more individualistic, non-inclusive ways of thinking.<\/p>\n<p>Second, the pseudo-authority of AI has allowed pseudosciences to re-emerge with a veneer of validity. Demonstrably invalid examples of AI have been used to look at a person\u2019s face and assess that person\u2019s tendencies toward criminality or violence,<a class=\"reference\" href=\".\/references\/#14.f\" target=\"_blank\" rel=\"noopener noreferrer\">15,<\/a><a class=\"reference\" href=\".\/references\/#14.g\" target=\"_blank\" rel=\"noopener noreferrer\">16<\/a> current feelings,<a class=\"reference\" href=\".\/references\/#14.h\" target=\"_blank\" rel=\"noopener noreferrer\">17<\/a> sexual orientation,<a class=\"reference\" href=\".\/references\/#14.i\" target=\"_blank\" rel=\"noopener noreferrer\">18<\/a> and IQ or personality traits.<a class=\"reference\" href=\".\/references\/#14.j\" target=\"_blank\" rel=\"noopener noreferrer\">19<\/a> These phrenology and physiognomy products and claims are unethical, irresponsible, and dangerous.<\/p>\n<p>Although these outcomes may seem extreme, overtrust has a wide range of consequences, from causing people to act against self-interest to promulgating discriminatory practices.<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\" id=\"_edn1\"><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" 
href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"active\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab 
title=&#8221;Lost in Translation: Automation Surprise&#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>Lost in Translation: Automation Surprise<\/h3>\n<p>End-users can be surprised by how an AI acts, or that it failed to act when expected.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>When drivers take their hands off the wheel in modern cars, they can make dangerous assumptions about the car\u2019s automated capabilities and who or what is in control of what part of the vehicle.<a class=\"reference\" href=\".\/references\/#15.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a> This example illustrates the importance of providing training and time for the general population to familiarize themselves with a new automated technology.<a class=\"reference\" href=\".\/references\/#15.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a><\/p>\n<p>An investigation of a 2012 airplane near-crash (Tel Aviv \u2013 Airbus A320) revealed \u201csignificant issues with crew understanding of automation\u2026 and highlighted the inadequate provision by the aircraft operator of both procedures and pilot training for this type of approach.\u201d<a class=\"reference\" href=\".\/references\/#15.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a> This example shows how even professionals in a field need training when a new, automated system is introduced.<\/p>\n<p>Facebook trained AIs through unsupervised learning (without human supervision) to learn how to negotiate. 
The \u201cBob\u201d and \u201cAlice\u201d chatbots started talking to each other in their own, made-up language, which was unintelligible to humans.<a class=\"reference\" href=\".\/references\/#15.4\" target=\"_blank\" rel=\"noopener noreferrer\">4<\/a> This example shows that even AI experts can be completely surprised by an AI\u2019s outcome.<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\"><\/a><\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>When automated system behaviors cause users to ask, \u201cWhat\u2019s it doing now?\u201d or \u201cWhat\u2019s it going to do next?\u201d the literature calls this <em>automation surprise<\/em>.<a class=\"reference\" href=\".\/references\/#15.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a> These behaviors leave users unable to predict how an automated system will act, even if it is working properly. Surprise can occur when the system is too complicated to understand, when we make erroneous assumptions about the environment in which the system will be used, or when people simply expect automated systems to act the same way they do.<a class=\"reference\" href=\".\/references\/#15.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a> AI can exacerbate automation surprise because its decisions evolve and change over time.<\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>The more transparent we are about what the AI can and cannot do (which isn\u2019t always possible because sometimes even we don\u2019t know), the better we can educate users of that system about how it will or will not act. Human-machine teaming (HMT) principles help us understand the importance of good communication. 
When an AI is designed to help the human partner understand what the automation will do next, the human partner can anticipate those actions and act in concert with them, or override or tweak the automation if needed.<a class=\"reference\" href=\".\/references\/#15.7\" target=\"_blank\" rel=\"noopener noreferrer\">7,<\/a><a class=\"reference\" href=\".\/references\/#15.8\" target=\"_blank\" rel=\"noopener noreferrer\">8,<\/a><a class=\"reference\" href=\".\/references\/#15.9\" target=\"_blank\" rel=\"noopener noreferrer\">9<\/a><\/p>\n<p>Without this context and awareness, the human partner may become frustrated and stop using the AI. Alternatively, the human partner may be unprepared for the AI action and be unable to recover from a bad decision.<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\"><\/a><\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td 
class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"inactive\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;The AI Resistance&#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>The AI Resistance<\/h3>\n<p>Not everyone wants AI or believes that its benefits outweigh the costs. If we dismiss the cautious as Luddites, the technology can genuinely victimize the people who use it.<\/p>\n<p><em>Note: <\/em>\u201cLuddite\u201d is a term describing the 19th century English workmen who vandalized the labor-saving machinery that took their jobs. 
The term has since been extended to refer to one who is opposed to technological change.<a class=\"reference\" href=\".\/references\/#16.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>When Waymo decided to test self-driving cars in a town in Arizona without first seeking the residents\u2019 approval, residents feared losing their jobs and their lives. Feeling they had no other options open to them, they threw rocks at the automated cars and slashed their tires as means of protest.<a class=\"reference\" href=\".\/references\/#16.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a><\/p>\n<p>Cambridge Analytica used AI to surreptitiously influence voters through false information that was individually targeted. Public officials, privacy specialists, and investigative journalists channeled feelings of outrage, betrayal, confusion, and distrust into increased pressure to strengthen legislative protection.<a class=\"reference\" href=\".\/references\/#16.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a><\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>The reluctance to adopt AI without reservation is warranted. Just a few years ago, the AI developer community saw the increase in AI capabilities as unadulterated progress and good. 
Recently, we\u2019re learning that sometimes this holds true, and sometimes progress means progress only for some \u2013 that AI can have harmful impacts on users, communities, and employees of our AI companies.<a class=\"reference\" href=\".\/references\/#16.4\" target=\"_blank\" rel=\"noopener noreferrer\">4,<\/a><a class=\"reference\" href=\".\/references\/#16.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a><\/p>\n<blockquote>\n<p>Sometimes progress means progress only for some \u2013 that AI can have harmful impacts on users, communities, and employees of our AI companies<\/p>\n<\/blockquote>\n<h3><\/h3>\n<h3>What happens when things fail?<\/h3>\n<p>Even those who are \u201cearly adopters\u201d or an \u201cearly majority\u201d in the technology adoption lifecycle<a class=\"reference\" href=\".\/references\/#16.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a> may still have reservations about fully integrating the new technology into their lives. The people who reject AI entirely may have concerns that cannot be addressed by time, education, and training. 
For instance, some people find the automated email replies that mimic individual personalities creepy,<a class=\"reference\" href=\".\/references\/#16.7\" target=\"_blank\" rel=\"noopener noreferrer\">7<\/a> some people are worried about the national security implications caused by deepfakes,<a class=\"reference\" href=\".\/references\/#16.8\" target=\"_blank\" rel=\"noopener noreferrer\">8<\/a> some decry the mishandling of the private data that drives AI platforms,<a class=\"reference\" href=\".\/references\/#16.9\" target=\"_blank\" rel=\"noopener noreferrer\">9<\/a> some fear losing their jobs to AI,<a class=\"reference\" href=\".\/references\/#16.a\" target=\"_blank\" rel=\"noopener noreferrer\">10<\/a> some protest the disproportionate impact of mass surveillance on minority groups,<a class=\"reference\" href=\".\/references\/#16.b\" target=\"_blank\" rel=\"noopener noreferrer\">11,<\/a><a class=\"reference\" href=\".\/references\/#16.c\" target=\"_blank\" rel=\"noopener noreferrer\">12,<\/a><a class=\"reference\" href=\".\/references\/#16.d\" target=\"_blank\" rel=\"noopener noreferrer\">13<\/a> and some fear losing their lives to an AI-driven vehicle.<a class=\"reference\" href=\".\/references\/#16.e\" target=\"_blank\" rel=\"noopener noreferrer\">14<\/a><\/p>\n<p>Anger, frustration, and resistance to AI are natural reactions from people in a society that seems to assume that technology adoption is inevitable, even when it is disruptive to their safety or way of life. The idea that the believers should just wait out the laggards and Luddites \u2013 or worse, treat them as the problem \u2013 is flawed. 
Therefore, we should listen to their concerns and bring in the resisters to guide the solution.<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\"><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td 
class=\"active\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][\/et_pb_tabs][\/et_pb_column_inner][\/et_pb_row_inner][\/et_pb_column][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; fullwidth=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.4.8&#8243; module_alignment=&#8221;center&#8221; global_module=&#8221;3880&#8243; saved_tabs=&#8221;all&#8221;][et_pb_fullwidth_code disabled_on=&#8221;off|off|off&#8221; admin_label=&#8221;Footer menu&#8221; _builder_version=&#8221;4.4.8&#8243; background_color=&#8221;#d5dde0&#8243; text_orientation=&#8221;center&#8221; module_alignment=&#8221;center&#8221; custom_padding=&#8221;10px||10px||false|false&#8221;]Add Your Experience! This site should be a community resource and would benefit from the addition of other examples and voices. You can write to us by clicking <a href=\"mailto:jrotner@mitre.org;rhodge@mitre.org;ldanley@mitre.org?subject=AI Fails website\">here<\/a>.[\/et_pb_fullwidth_code][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u200bFailure to Launch: How People Can React to AI People often hold multiple, contradictory views at the same time. 
There are plenty of examples when it comes to human interaction with technology: people can be excited that Amazon or Netflix recommendations really reflect their tastes, yet worry about what that means for their privacy; they [&hellip;]<\/p>\n","protected":false},"author":142,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"class_list":["post-174","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/174","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/users\/142"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/comments?post=174"}],"version-history":[{"count":0,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/174\/revisions"}],"wp:attachment":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/media?parent=174"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}