{"id":118,"date":"2019-09-23T12:21:32","date_gmt":"2019-09-23T12:21:32","guid":{"rendered":"https:\/\/sites.mitre.org\/aifails\/?page_id=118"},"modified":"2020-07-13T08:04:30","modified_gmt":"2020-07-13T12:04:30","slug":"the-cult-of-ai","status":"publish","type":"page","link":"https:\/\/sites.mitre.org\/aifails\/the-cult-of-ai\/","title":{"rendered":"The Cult of AI: Perceiving AI to Be More Mature Than It Is"},"content":{"rendered":"\n<p>[et_pb_section fb_built=&#8221;1&#8243; specialty=&#8221;on&#8221; _builder_version=&#8221;4.0.5&#8243;][et_pb_column type=&#8221;1_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_sidebar area=&#8221;et_pb_widget_area_39&#8243; admin_label=&#8221;The Cult of AI&#8221; _builder_version=&#8221;4.4.1&#8243; min_height=&#8221;200px&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221;][\/et_pb_sidebar][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; specialty_columns=&#8221;3&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_row_inner _builder_version=&#8221;4.4.1&#8243; custom_padding=&#8221;||0px||false|false&#8221;][et_pb_column_inner saved_specialty_column_type=&#8221;3_4&#8243; _builder_version=&#8221;4.0.3&#8243;][et_pb_text admin_label=&#8221;The Cult of AI: Misperceptions about AI\u2019s Maturity&#8221; _builder_version=&#8221;4.4.8&#8243; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;0px||0px||false|false&#8221;]<\/p>\n<h1><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-3863 size-medium alignleft\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_registry002.png\" alt=\"The Cult of AI\" width=\"106\" height=\"86\" \/><br \/>\nThe Cult of AI: Perceiving AI to Be More Mature Than It Is<\/h1>\n<p>AI is all about boundaries: the AI works well if we as 
developers and deployers define the task and genuinely understand the environment in which the AI will be used. New AI applications are exciting in part because they exceed previous technical boundaries \u2013 like AI winning at chess, then <em>Jeopardy<\/em>, then Go, then StarCraft. But what happens when we assume that AI is ready to break those barriers before the technology or the environment is truly ready? This section presents examples where AIs were pushed past their technical or environmental limits \u2013 whether because AI was put in roles it wasn\u2019t suited for, because user expectations didn\u2019t align with its abilities, or because the world was assumed to be simpler than it really is.<\/p>\n<p>&nbsp;[\/et_pb_text][et_pb_text admin_label=&#8221;Fails&#8221; _builder_version=&#8221;4.4.8&#8243; custom_margin=&#8221;||0px|0px|false|false&#8221; custom_padding=&#8221;5px||0px|0px|false|false&#8221;]<\/p>\n<div id=\"cult\">\n\u00a0\n<\/div>\n<h5>Explore the Three Fails in This Category:<\/h5>\n<p>[\/et_pb_text][et_pb_tabs active_tab_background_color=&#8221;#a0ddf3&#8243; inactive_tab_background_color=&#8221;#e5e7e8&#8243; active_tab_text_color=&#8221;#000000&#8243; admin_label=&#8221;Fails and Lessons Learned&#8221; module_class=&#8221;icon-tabs&#8221; _builder_version=&#8221;4.4.8&#8243; tab_text_color=&#8221;#000000&#8243; body_text_color=&#8221;#000000&#8243; tab_font_size=&#8221;15px&#8221; tab_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;|||0px|false|false&#8221; custom_padding=&#8221;|||0px|false|false&#8221;][et_pb_tab title=&#8221;No Human Needed: The AI&#8217;s Got This&#8221; _builder_version=&#8221;4.4.8&#8243; tab_line_height=&#8221;1em&#8221;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>No Human Needed: The AI&#8217;s Got This<\/h3>\n<p>We often intend to design AIs to assist their human partners, but what we create can end up <em>replacing<\/em> some human partners. 
When the AI isn\u2019t ready to perform the task completely without human help, significant problems can follow.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>Microsoft released Tay, an AI chatbot designed \u201cto engage and entertain\u201d and learn from the communication patterns of the 18-to-24-year-olds with whom it interacted. Within hours, Tay started repeating some users\u2019 sexist, anti-Semitic, racist, and other inflammatory statements. Although the chatbot met its learning objective, the way it did so required people at Microsoft to modify the AI and address the public fallout from the experiment.<a class=\"reference\" href=\".\/references\/#1.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a><\/p>\n<p>Because Amazon employs so many warehouse workers, the company has used a heavily automated process that tracks employee productivity and is authorized to fire people without the intervention of a human supervisor. As a result, some employees have said they avoid using the bathroom for fear of being fired on the spot. Implementing this system has led to legal and public relations challenges, even if it did reduce the workload for the company\u2019s human resources employees or remaining supervisors.<a class=\"reference\" href=\".\/references\/#1.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a><\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>Perception about what AI is suited for may not always align with the research. 
Debates over which tasks are better suited to humans and which to machines trace back to Fitts\u2019s 1951 \u2018men are better at \u2013 machines are better at\u2019 (MABA-MABA) list.<a class=\"reference\" href=\".\/references\/#1.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a> A modern-day interpretation of that list might allocate tasks that involve judgment, creativity, and intuition to humans, and tasks that involve responding quickly or storing and sifting through large amounts of data to the AI.<a class=\"reference\" href=\".\/references\/#1.4\" target=\"_blank\" rel=\"noopener noreferrer\">4,<\/a><a class=\"reference\" href=\".\/references\/#1.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a> More advanced AI applications can be designed to blur those lines, but even in those cases the AI will likely need to interact with humans in some capacity.<\/p>\n<p>Like any technology, AI may not work as intended or may have undesirable consequences. Consequently, if the AI is intended to work by itself, any design considerations meant to foster partnership will be overlooked, which will impose additional burdens on the human partners when they are called upon.<a class=\"reference\" href=\".\/references\/#1.6\" target=\"_blank\" rel=\"noopener noreferrer\">6,<\/a><a class=\"reference\" href=\".\/references\/#1.7\" target=\"_blank\" rel=\"noopener noreferrer\">7<\/a><\/p>\n<p class=\"cate-hover\"><a class=\"popmake-4140\" href=\"#\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-5638 size-full alignnone\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_did_you_know_guy-002-1.png\" alt=\"\" width=\"234\" height=\"62\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>Semi-autonomous cars provide a great example of how the same burdens that have been studied and addressed over decades in the aviation industry are re-emerging in a new technology and marketplace.<\/p>\n<p><em>Lost context<\/em> \u2013 As 
more inputs and decisions are automated, human partners risk losing the context they often rely on to make informed decisions. Further, they can be surprised by decisions their AI partner makes because they don\u2019t fully understand how those decisions were reached,<a class=\"reference\" href=\".\/references\/#1.8\" target=\"_blank\" rel=\"noopener noreferrer\">8<\/a> since the information they would usually rely on is often obscured from them by the AI\u2019s processes. For example, when a semi-autonomous car passes control back to the human driver, the driver may have to make quick decisions about what to do without knowing why the AI transferred control of the car to him or her, which increases the likelihood of errors.<\/p>\n<p><em>Cognitive drain<\/em> \u2013 As AIs get better at conducting tasks that humans find dull and routine, humans can be left with only the hardest and most cognitively demanding tasks. For example, traveling in a semi-autonomous car might require human drivers to monitor both the vehicle, to see if it\u2019s acting reliably, and the road, to see if conditions require human intervention. Because humans are then engaged in more cognitively demanding work, they are at higher risk of the negative effects of cognitive overload, such as decreased vigilance and increased error rates.<\/p>\n<p><em>Human error traded for new kinds of error<\/em> \u2013 Human-AI coordination can lead to new sets of challenges and learning curves. 
For example, researchers have documented that drivers believe they will be able to respond to rare events more quickly and effectively than they actually can.<a class=\"reference\" href=\".\/references\/#1.9\" target=\"_blank\" rel=\"noopener noreferrer\">9<\/a> If developers unintentionally build this mistaken belief into the AI\u2019s design, it could create a dangerously false sense of security for both developers and drivers.<\/p>\n<p><em>Reduced human skills or abilities<\/em> \u2013 If the AI becomes responsible for doing everything, humans will have less opportunity to practice the skills that were often important in the development of their knowledge and expertise on the topic (i.e., experiences that enable them to perform more complex or nuanced activities). Driving studies have indicated that human attentiveness and monitoring of traffic and road conditions decrease as automation increases. Thus, at the moments when experience and attention are needed most, they may already have atrophied due to humans\u2019 reliance on AI.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\">\u00a0<\/th>\n  <th class=\"categoryhdr2\">\u00a0<\/th>\n  <th class=\"categoryhdr3\">\u00a0<\/th>\n  <th class=\"categoryhdr4\">\u00a0<\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a 
class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"inactive\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;Perfectionists and \u201cPixie Dusters\u201d&#8221; _builder_version=&#8221;4.4.8&#8243; tab_line_height=&#8221;1em&#8221;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>Perfectionists and \u201cPixie Dusters\u201d<\/h3>\n<p>There is a temptation to overestimate the range and scale of problems that can be solved by technology. 
This can contribute to two mindsets: \u201cperfectionists\u201d who expect performance beyond what the AI can achieve, and \u201cpixie dusters\u201d who believe AI to be more broadly applicable than it is. Both groups could then reject current or future technical solutions (AI or not) that are more appropriate to a particular task.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>In 2015, Amazon used an AI to find the top talent from stacks of resumes. One person involved with the trial run said, \u201cEveryone wanted this holy grail&#8230; give[n] 100 resumes, it will spit out the top five, and we\u2019ll hire those.\u201d But because the AI was trained on data from previous hires, its selections reflected those existing patterns and strongly preferred male candidates to female ones.<a class=\"reference\" href=\".\/references\/#2.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a> Even after adjusting the AI and its hiring process, Amazon abandoned the project in 2017. 
The original holy grail expectation may have diverted the firm from designing a more balanced hiring process.<\/p>\n<p>The 2012 Defense Science Board Study titled \u201cThe Role of Autonomy in DoD Systems\u201d concluded that &#8220;Most [Defense Department] deployments of unmanned systems were motivated by the pressing needs of conflict, so systems were rushed to theater with inadequate support, resources, training and concepts of operation.&#8221; This push to deploy first and understand later likely had an impact on warfighters\u2019 general opinions and future adoption of autonomous systems.<a class=\"reference\" href=\".\/references\/#2.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a><\/p>\n<\/div>\n<p><span style=\"color: #333333;font-size: 22px\">Why is this a fail?<\/span><br \/>\nNon-AI experts can have inflated expectations of AI\u2019s abilities. When AI is presented as having superhuman abilities based on proven mathematical principles, it is tremendously compelling to want to try it out.<\/p>\n<p>Turn on the radio, ride the bus, watch a TV ad, and someone is talking about AI. AI hype has never been higher,<a class=\"reference\" href=\".\/references\/#2.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a> which means more people and organizations are asking, \u2018How can I have AI solve my problems?\u2019<\/p>\n<p>AI becomes even more appealing because of the belief that algorithms are \u201cobjective and true and scientific,\u201d since they are based on math. 
In reality, as mathematician and author Cathy O&#8217;Neil puts it, &#8220;algorithms are opinions embedded in code,&#8221; and some vendors ask buyers to \u201cput blind faith in big data.\u201d<a class=\"reference\" href=\".\/references\/#2.4\" target=\"_blank\" rel=\"noopener noreferrer\">4<\/a> Even AI experts can fall victim to this mentality, convinced that complex problems can be solved by purely technical solutions if the algorithm and its developer are brilliant enough.<a class=\"reference\" href=\".\/references\/#2.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a><\/p>\n<blockquote>\n<p>In the end, it\u2019s about balance.\u00a0AI has its limits and intended and appropriate uses. We have to identify the individual applications and environments for which AI is well suited, and better align non-experts\u2019 expectations to the way the AI will actually perform.<\/p>\n<\/blockquote>\n<p>What can result is a false hope in a seemingly magical technology. As a result, people can want to apply it to everything, regardless of whether it\u2019s appropriate.<\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>Misaligned expectations can contribute to the rejection of relevant technical solutions. Two mentalities that emerge \u2013 \u201cperfectionists\u201d and \u201cpixie dusters\u201d (as in \u201cAI is a magical bit of pixie dust that can be used to solve anything\u201d) \u2013 can both lead to disappointment and skepticism once expectations must confront reality.<\/p>\n<p>Perfectionist deployers and users may expect perfect autonomy and a perfect understanding of autonomy, which could (rightly or wrongly) delay the adoption of AI until it meets those impossible standards. 
Perfectionists may prevent technologies from being explored and tested even in carefully monitored target environments, because they set too high a bar for acceptability.<\/p>\n<p>In contrast, pixie dusters may want to employ AI as soon and as widely as possible, even if an AI solution isn\u2019t appropriate to the problem. One common manifestation of this belief occurs when people want to take an excellent AI model and replicate it for a different problem. This technique is referred to as \u201ctransfer learning,\u201d where \u201ca model developed for one task is reused as the starting point for a model on a second task.\u201d<a class=\"reference\" href=\".\/references\/#2.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a> While this approach can expedite the operationalization of a second AI model, problems arise when people are overly eager to attempt it. The new application must have the right data, equipment, environment, governance structures, and training in place for transfer learning to be successful.<\/p>\n<p>Perhaps counterintuitively, an eagerness to adopt autonomy too early can backfire if the immature system behaves in unexpected, unpredictable, or dangerous ways. When pixie dusters have overinflated expectations of AI outcomes and the AI fails to meet those expectations, they can be dissuaded from trying other AI applications, even appropriate and helpful ones (as happened in the \u201cAI Winter\u201d in the 1980s<a class=\"reference\" href=\".\/references\/#2.7\" target=\"_blank\" rel=\"noopener noreferrer\">7<\/a>).<a class=\"reference\" href=\".\/references\/#2.8\" target=\"_blank\" rel=\"noopener noreferrer\">8<\/a><\/p>\n<p>In the end, it\u2019s about balance. AI has its limits and intended and appropriate uses. 
We have to identify the individual applications and environments for which AI is well suited, and better align non-experts\u2019 expectations to the way the AI will actually perform.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr 
style=\"border-bottom: 5px solid #336699\">\n  <td class=\"active\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;Developers Are Wizards and Operators Are Muggles&#8221; _builder_version=&#8221;4.4.8&#8243; tab_line_height=&#8221;1em&#8221;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>Developers Are Wizards and Operators Are Muggles<\/h3>\n<p>When AI developers think we know how to solve a problem, we may overlook including input from the users of that AI, or the communities the AI will affect. 
Without consulting these groups, we may develop something that doesn\u2019t match, or even conflicts with, what they want.<\/p>\n<p><em>Note:<\/em> \u201cMuggle\u201d is a term used in the <em>Harry Potter<\/em> books to refer derogatorily to an individual who has no magical abilities, yet lives in a magical world.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>After one of the Boeing 737 MAX aircraft crashes, pilots were furious that they had not been told that the aircraft had new software, that the software would override pilot commands in some rare but dangerous situations, and that the pilot manual did not even mention the software.<a class=\"reference\" href=\".\/references\/#3.1\" target=\"_blank\" rel=\"noopener noreferrer\">1,<\/a><a class=\"reference\" href=\".\/references\/#3.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a><\/p>\n<p>Uber\u2019s self-driving car was not programmed to recognize jaywalking, only pedestrians crossing in or near a crosswalk,<a class=\"reference\" href=\".\/references\/#3.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a> which may work in some areas of the country but runs counter to the norms in others, putting those pedestrians in danger.<\/p>\n<\/div>\n<p><span style=\"color: #333333;font-size: 22px\">Why is this a fail?<\/span><br \/>\nIt\u2019s a natural inclination to assume that end-users will act the same way we do or will want the same results we want. 
Unless we include the individuals who will use the AI, and the communities affected by it, in the design and testing process, we\u2019re unintentionally limiting the AI\u2019s success and its adoption, as well as diminishing the value of other perspectives that would improve the AI\u2019s effectiveness.<\/p>\n<p>Despite our long-standing recognition of how important it is to include those affected by what we\u2019re designing, we don\u2019t always follow through. Even if we do consult users, a single interview is not enough to discover how user behaviors and goals change in different environments or in response to different levels of pressure or emotional states, or how those goals and behaviors might shift over time.<\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>At best, working in a vacuum results in irritating system behavior \u2013 like a driver\u2019s seat that vibrates every time it wants to get the driver\u2019s attention.<a class=\"reference\" href=\".\/references\/#3.4\" target=\"_blank\" rel=\"noopener noreferrer\">4<\/a> Sometimes users may respond to misaligned goals by working around the AI, turning it off, or not adopting it at all. At worst, the objectives of the solution don\u2019t match users\u2019 goals, or the AI does the opposite of what users want. But with AI\u2019s scope and scale, the stakes can get higher.<\/p>\n<blockquote>\n<p>If we start thinking about the \u2018customer\u2019 not only as the purchaser or user of the technology, but also as the community the deployed technology will affect, our perspective changes.<\/p>\n<\/blockquote>\n<p>Let\u2019s look at a relevant yet controversial AI topic to see how a different design perspective can result in drastically different outcomes. All over the country, federal, state, and local law enforcement agencies want to use facial recognition AI systems to identify criminals. 
As AI developers, we may want to make the technology as accurate as possible, with as few false positives as possible, in order to correctly identify criminals. However, the communities that have been heavily policed understand the deep historical patterns of abuse and profiling that result, regardless of technology. As Betty Medsger, investigative reporter, writes, \u201cbeing Black was enough [to justify surveillance].\u201d<a class=\"reference\" href=\".\/references\/#3.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a> So if accuracy and false positives are the only considerations, we create an adoption challenge if communities push back against the technology, maybe leading to its not being deployed at all, even if it would be beneficial in certain situations. If we bridge this gap by involving these communities, we may learn about their tolerances for the technology and identify appropriate use cases for it.<\/p>\n<p>If we start thinking about the \u2018customer\u2019 not only as the purchaser or user of the technology, but also as the community the deployed technology will affect, our perspective changes.<a class=\"reference\" href=\".\/references\/#3.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a><\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"inactive\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td 
class=\"inactive\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"inactive\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][\/et_pb_tabs][\/et_pb_column_inner][\/et_pb_row_inner][\/et_pb_column][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; fullwidth=&#8221;on&#8221; disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.4.8&#8243; module_alignment=&#8221;center&#8221; 
global_module=&#8221;3880&#8243; saved_tabs=&#8221;all&#8221;][et_pb_fullwidth_code admin_label=&#8221;Footer menu&#8221; _builder_version=&#8221;4.4.8&#8243; background_color=&#8221;#d5dde0&#8243; text_orientation=&#8221;center&#8221; module_alignment=&#8221;center&#8221; custom_padding=&#8221;10px||10px||false|false&#8221; disabled_on=&#8221;off|off|off&#8221;]Add Your Experience! This site should be a community resource and would benefit from the addition of other examples and voices. You can write to us by clicking <a href=\"mailto:jrotner@mitre.org,rhodge@mitre.org,ldanley@mitre.org?subject=AI%20Fails%20website\">here<\/a>.[\/et_pb_fullwidth_code][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Cult of AI: Perceiving AI to Be More Mature Than It Is AI is all about boundaries: the AI works well if we as developers and deployers define the task and genuinely understand the environment in which the AI will be used. New AI applications are exciting in part because they exceed previous technical 
[&hellip;]<\/p>\n","protected":false},"author":142,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"class_list":["post-118","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/118","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/users\/142"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/comments?post=118"}],"version-history":[{"count":0,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/118\/revisions"}],"wp:attachment":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/media?parent=118"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}