{"id":5156,"date":"2020-03-18T21:38:01","date_gmt":"2020-03-18T21:38:01","guid":{"rendered":"https:\/\/sites.mitre.org\/aifails\/?page_id=5156"},"modified":"2020-07-13T08:46:01","modified_gmt":"2020-07-13T12:46:01","slug":"ai-registry-the-things-well-need-that-support-ai","status":"publish","type":"page","link":"https:\/\/sites.mitre.org\/aifails\/ai-registry-the-things-well-need-that-support-ai\/","title":{"rendered":"AI Registry: The Things We&#8217;ll Need That Support AI"},"content":{"rendered":"\n<p>[et_pb_section fb_built=&#8221;1&#8243; specialty=&#8221;on&#8221; _builder_version=&#8221;4.2.2&#8243;][et_pb_column type=&#8221;1_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_sidebar area=&#8221;et_pb_widget_area_45&#8243; admin_label=&#8221;AI Registry menu&#8221; _builder_version=&#8221;4.4.1&#8243; min_height=&#8221;200px&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221;][\/et_pb_sidebar][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; specialty_columns=&#8221;3&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_row_inner _builder_version=&#8221;4.0.3&#8243; custom_padding=&#8221;||0px||false|false&#8221;][et_pb_column_inner saved_specialty_column_type=&#8221;3_4&#8243; _builder_version=&#8221;4.0.3&#8243;][et_pb_text admin_label=&#8221;AI Registry: The Things We\u2019ll Need That Support AI&#8221; _builder_version=&#8221;4.4.8&#8243; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;5px||0px||false|false&#8221;]<\/p>\n<h1><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4775 size-full alignleft\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_CultOfAi002.png\" alt=\"\" width=\"135\" height=\"100\" \/>AI Registry: The Things We\u2019ll Need That Support AI<\/h1>\n<p>AI isn\u2019t just 
about the data and algorithms. To be successful, we as developers and deployers depend on a whole line of supporting elements. This section addresses some, but not all, of those elements, including the right governing policies, the right people, the right data, and the right equipment.<\/p>\n<p>&nbsp;[\/et_pb_text][et_pb_text admin_label=&#8221;Fails&#8221; _builder_version=&#8221;4.4.8&#8243; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;5px||0px||false|false&#8221;]<\/p>\n<div id=\"registry\">\n\u00a0\n<\/div>\n<h5>Explore the Four Fails in This Category:<\/h5>\n<p>[\/et_pb_text][et_pb_tabs active_tab_background_color=&#8221;#a0ddf3&#8243; inactive_tab_background_color=&#8221;#e5e7e8&#8243; active_tab_text_color=&#8221;#000000&#8243; admin_label=&#8221;Fails and Lessons Learned&#8221; module_class=&#8221;icon-tabs&#8221; _builder_version=&#8221;4.4.8&#8243; tab_text_color=&#8221;#000000&#8243; body_text_color=&#8221;#000000&#8243; tab_font_size=&#8221;15px&#8221; tab_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;|||0px|false|false&#8221; custom_padding=&#8221;|||0px|false|false&#8221;][et_pb_tab title=&#8221;Good (Grief!) Governance &#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>Good (Grief!) Governance<\/h3>\n<p>We sometimes implement AI without a detailed strategy for how it will be governed, and there aren\u2019t any laws that ensure oversight and accountability. In that vacuum, the technology itself is redefining cultural and societal norms.<\/p>\n<\/div>\n<blockquote>\n<p>Without proper governance, and legal accountability and oversight, the technology becomes the de-facto norm. 
Therefore, we must recognize that because we control the code, we may unintentionally become de-facto decision makers.<\/p>\n<\/blockquote>\n<div class=\"example-callout\">\n<img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/>\n<h3>Examples<\/h3>\n<p>Police departments can purchase crime prediction products that estimate where crimes will occur or who will be involved. Many of the products are \u201cblack boxes,\u201d meaning it is not clear how decisions are made, and many police departments deploy them in the absence of clear or publicly available policies to guide how they should be applied.<a class=\"reference\" href=\".\/references\/#17.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a> Often a new technology is acquired and used first, while policy and governance for its use are developed later.<\/p>\n<p>Employees of a contractor working for Google paid dark-skinned, homeless people $5 for letting the contractor take a picture of their faces in order to make its training dataset more diverse.<a class=\"reference\" href=\".\/references\/#17.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a> In addition, these workers may have misled the homeless about the purpose of their participation. Without comprehensive legislation about data collection and privacy infringement, ending such questionable practices becomes the responsibility of the governance policies of each company.<\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>AI has reached a state of maturity where governance is a necessary, yet difficult, element. AI systems continue to be increasingly integrated into daily life, but this occurs without adequate governance, oversight, or accountability. This happens in part because:<\/p>\n<p>1. 
AI is a probabilistic and dynamic process, meaning AI outcomes will not be fully replicable, consistent, or predictable. Therefore, new governance mechanisms must be developed.<\/p>\n<p>2. Organizations allocate money to buy products, but often do not add funds for creating and testing internal governance policies. Therefore, those policies may not be introduced until the effects of the technology\u2019s use have had an impact on people\u2019s lives.<\/p>\n<p>3. Government and private organizations sometimes keep policies that govern AI use and development hidden from the public in order to protect national security interests or trade secrets.<a class=\"reference\" href=\".\/references\/#17.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a><\/p>\n<p>4. There are no mature AI laws, standards, or norms that apply across multiple domains, and laws within a specific domain are only now emerging. Therefore, standardizing policies or sharing best practices faces additional obstacles.<\/p>\n<p>The result is that in the United States there are few clear governance models for industry or government to replicate, and there are limited legal authorities that specify whom to hold accountable when things go wrong.<a class=\"reference\" href=\".\/references\/#17.4\" target=\"_blank\" rel=\"noopener noreferrer\">4,<\/a><a class=\"reference\" href=\".\/references\/#17.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a><\/p>\n<p><a class=\"popmake-5712\" href=\"#\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-5638 alignnone size-full\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_did_you_know_guy-002-1.png\" alt=\"Did You Know?\" width=\"234\" height=\"62\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>In response to unclear legal accountabilities, organizations have embraced declarations of ethical principles and frameworks that promote responsible AI development.<a class=\"reference\" 
href=\".\/references\/#17.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a> These statements vary in detail and specificity, but almost all declare principles of transparency, non-discrimination, accountability, and safety. These current approaches represent important steps, but evidence shows that they are not enough. They are almost universally voluntary commitments, and few of the declarations include recommendations, specifics, or use cases for how to make the principles actionable and implementable (though in the largest AI companies, these are being developed).<a class=\"reference\" href=\".\/references\/#17.7\" target=\"_blank\" rel=\"noopener noreferrer\">7<\/a> Finally, researchers have shown that pledges to uphold ethical principles do not guarantee ethical behavior.<a class=\"reference\" href=\".\/references\/#17.8\" target=\"_blank\" rel=\"noopener noreferrer\">8<\/a><\/p>\n<p>In parallel with private efforts, the US government is beginning to define guidance, but it is still in early stages. In January 2020, the White House published draft principles for guiding federal regulatory and non-regulatory approaches to AI,<a class=\"reference\" href=\".\/references\/#17.9\" target=\"_blank\" rel=\"noopener noreferrer\">9<\/a> and state governments are also getting more involved in regulation.<a class=\"reference\" href=\".\/references\/#17.a\" target=\"_blank\" rel=\"noopener noreferrer\">10<\/a> However, often state laws are contradictory or lag the technology. 
As of January 2020, several cities in California and Massachusetts have banned the use of facial recognition technology by public entities,<a class=\"reference\" href=\".\/references\/#17.b\" target=\"_blank\" rel=\"noopener noreferrer\">11<\/a> but at the same time other US cities, as well as airports and private entities, are increasing their adoption of the same technology.<a class=\"reference\" href=\".\/references\/#17.c\" target=\"_blank\" rel=\"noopener noreferrer\">12,<\/a><a class=\"reference\" href=\".\/references\/#17.d\" target=\"_blank\" rel=\"noopener noreferrer\">13<\/a> Because this field of law is so new, there are limited precedents.<\/p>\n<p>Absent precedent, AI applications \u2013 or more accurately we, the developers \u2013 unintentionally create new norms. The danger we must keep in mind is that AI can undermine traditional figures of authority and reshape the rule of law. Without proper governance, and legal accountability and oversight, the technology becomes the de-facto norm. 
Therefore, we must recognize that because we control the code, we may unintentionally become de-facto decision makers.<a class=\"reference\" href=\".\/references\/#17.3\" target=\"_blank\" rel=\"noopener noreferrer\">14<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and 
Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"active\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;Just Add (Technical) People &#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>Just Add (Technical) People<\/h3>\n<p>AI skills are in ever-higher demand, but employers erroneously believe that they only need to hire technical people (with backgrounds in computer science, engineering, mathematics, or related fields), even though developing successful and beneficial AI is not purely a technical challenge.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/><\/p>\n<h3>Examples<\/h3>\n<p>IBM Watson produced \u201cunsafe and incorrect\u201d cancer treatment recommendations, including \u201crecommendations that conflicted with national treatment guidelines and that physicians did not find useful for treating patients.\u201d Internal IBM documents reveal that training was based on only a few hypothetical cases and a few specialists\u2019 opinions. 
This finding suggests that including more doctors, hospital administrators, nurses, and patients early in the development process could have led to the use of proper diagnostic guidelines and training data.<a class=\"reference\" href=\".\/references\/#18.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a><\/p>\n<p>A crash between a US Navy destroyer and an oil tanker resulted from a navigation system interface that was poorly designed, overly complicated, and provided limited feedback.<a class=\"reference\" href=\".\/references\/#18.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a> Engineers and scientists who study how poor interfaces lead to mishaps can and have helped shape better interface design and safety processes.<\/p>\n<p>In 2015, Google\u2019s automated photo-tagging software mislabeled images of dark-skinned people as \u201cgorillas.\u201d Through 2018, Google\u2019s solution was to remove \u201cgorilla\u201d and the names of other, similar animals from the application\u2019s list of labels.<a class=\"reference\" href=\".\/references\/#18.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a> Hiring employees and managers trained in diverse disciplines, and not merely technical ones, could have resulted in alternative, more inclusive, outcomes.<\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>The small size of the AI workforce is often cited as the greatest barrier to AI adoption.<a class=\"reference\" href=\".\/references\/#18.4\" target=\"_blank\" rel=\"noopener noreferrer\">4<\/a> This same problem applies in other fields; for example, healthcare and cybersecurity have similar shortages of skilled technical workers. When responding to the immediate need for AI talent, companies rightly focus on hiring and training data scientists with expertise in AI algorithms, or other specialists in the fields of computer science, engineering, mathematics, and related technical areas. 
While these employees are absolutely necessary to develop and implement AI at a technical level, <em>just as necessary<\/em> are specialists from other fields who can balance and contextualize how AI is applied in that domain.<\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>The healthcare and cyber fields are a couple of years ahead of AI when it comes to articulating the skills and abilities necessary for a fully representative workforce. Leaders in both fields recognize that the shortage of technical skills is one challenge, while creating multidisciplinary teams is another. For example, the US government developed a National Initiative for Cybersecurity Education (NICE) framework that \u201cdescribes the interdisciplinary nature of the cybersecurity workforce [and]&#8230; describes cybersecurity work and workers irrespective of where or for whom the work is performed.\u201d<a class=\"reference\" href=\".\/references\/#18.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a> Healthcare organizations have long realized that meeting workforce needs involves more than just hiring doctors and have acted on evidence that interdisciplinary collaboration leads to better patient outcomes.<a class=\"reference\" href=\".\/references\/#18.6\" target=\"_blank\" rel=\"noopener noreferrer\">6,<\/a><a class=\"reference\" href=\".\/references\/#18.7\" target=\"_blank\" rel=\"noopener noreferrer\">7,<\/a><a class=\"reference\" href=\".\/references\/#18.8\" target=\"_blank\" rel=\"noopener noreferrer\">8<\/a><\/p>\n<p>In contrast, the companies and organizations that develop and deploy AI have not yet designed or agreed on similar AI workforce guidelines, though the US government does recognize the importance of interdisciplinary and inclusive teams in several AI strategy publications.<a class=\"reference\" href=\".\/references\/#18.9\" target=\"_blank\" rel=\"noopener noreferrer\">9,<\/a><a class=\"reference\" href=\".\/references\/#18.a\" target=\"_blank\" 
rel=\"noopener noreferrer\">10<\/a> The next step is to move from recognition to implementation.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"inactive\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"inactive\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"active\"><a class=\"popmake-4415\" 
href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;Square Data, Round Problem &#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>Square Data, Round Problem<\/h3>\n<p>Having data doesn\u2019t mean we have a solution: the right data for the problem is not always easily collectable, or in formats that are ingestible or comparable. What\u2019s more, we may not be able to collect data on all the factors that a given AI application must take into account for adequately understanding the problem space.<\/p>\n<blockquote>\n <p>Care must be taken to ensure that the obsession for [sic] effectiveness and predictability behind the use of algorithms does not lead to us designing legal rules and categories no longer on the grounds of our ideal of justice, but so that they are more readily \u2018codable&#8217;<\/p>\n<\/blockquote>\n<\/div>\n<div class=\"example-callout\">\n<img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/>\n<h3>Examples<\/h3>\n<p>United Airlines lost $1B in revenue in 2016 from relying on a system that drew on inaccurate and limited data. 
United had built a software system to forecast demand for passenger seating, but the assumptions behind the data were so flawed and out of date that two-thirds of the system\u2019s outputs were not good enough for accurate projections.<a class=\"reference\" href=\".\/references\/#19.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a><\/p>\n<p>The Navy, Air Force, and Army all collect different information when they investigate why an aircraft crashes or has a problem, making it difficult for the Department of Defense (DoD) to compare trends or share lessons learned.<a class=\"reference\" href=\".\/references\/#19.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a><\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>Some AI applications require large amounts of data to be effective. Fortuitously for the AI community, we are experiencing an explosion of data being generated (2.5 quintillion bytes a day, and growing<a class=\"reference\" href=\".\/references\/#19.3\" target=\"_blank\" rel=\"noopener noreferrer\">3<\/a>). But much of this data is not ready for exploitation. The data can be full of errors, leave gaps, or not be standardized, making its practical use challenging (as seen in the United Airlines example). As a result, a surprisingly high share of businesses (79%) are basing critical decisions on data that hasn&#8217;t been properly verified.<a class=\"reference\" href=\".\/references\/#19.4\" target=\"_blank\" rel=\"noopener noreferrer\">4<\/a> On the other hand, valid and useful data can be incompatible across multiple similar applications, preventing an organization from creating a fuller picture (as seen in the DoD example).<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>The challenge for some of us, then, is to understand that more data isn\u2019t a solution to every problem. 
Aside from concerns over accuracy, completeness, and historical patterns, <a href=\".\/turning-lemons-into-lemon\">not all factors can be captured by data<\/a>. Some of the problem-spaces involved have complex, interrelated factors: for example, one study on community policing found that easy-to-collect data, like the number of crime reports and citations, was used for determining how to combat crime; yet this approach overlooks factors vital to correctly addressing the issues, such as identifying community problems, housing issues, and public health patterns.<a class=\"reference\" href=\".\/references\/#19.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a><\/p>\n<p>The French Data Protection Authority (the government agency responsible for the protection of personal data) warns against ignoring a complex reality for the sake of results: \u201ccare must be taken to ensure that the obsession for [sic] effectiveness and predictability behind the use of algorithms does not lead to us designing legal rules and categories no longer on the grounds of our ideal of justice, but so that they are more readily \u2018codable.\u2019\u201d<a class=\"reference\" href=\".\/references\/#19.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a 
class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"inactive\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid #336699\">\n  <td class=\"active\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][et_pb_tab title=&#8221;My 8-Track Still Works, So What&#8217;s the Issue?&#8221; _builder_version=&#8221;4.4.8&#8243;]<\/p>\n<div style=\"padding: 0px 0px 15px 0px;margin: 0px 0px 0px 0px\">\n<h3>My 8-Track Still Works, So What&#8217;s the Issue?<\/h3>\n<p>Organizations often attempt to deploy AI without 
considering what hardware, computational resources, and information technology (IT) systems users actually have.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"example-callout\">\n<img loading=\"lazy\" decoding=\"async\" class=\"wp-image-4030 size-full alignleft z-index40\" src=\"https:\/\/sites.mitre.org\/aifails\/wp-content\/uploads\/sites\/15\/2020\/03\/icon_example_robot-002.png\" alt=\"\" width=\"100\" height=\"83\" \/>\n<h3>Examples<\/h3>\n<p>The Department of Defense still uses 8-inch floppy disks in a system that \u201ccoordinates the operational functions of the nation&#8217;s nuclear forces.\u201d<a class=\"reference\" href=\".\/references\/#20.1\" target=\"_blank\" rel=\"noopener noreferrer\">1<\/a> Implementing advanced algorithms would be impossible on this hardware.<\/p>\n<p>95% of ATM transactions still use COBOL, a 58-year-old programming language (numbers as of 2017), which raises concerns about maintaining critical software over the next generation of ATMs.<a class=\"reference\" href=\".\/references\/#20.2\" target=\"_blank\" rel=\"noopener noreferrer\">2<\/a><\/p>\n<\/div>\n<h3>Why is this a fail?<\/h3>\n<p>The latest processors have amazing computational power, and most AI companies can pay for virtual access to the fastest and most powerful machines in the cloud. 
Government agencies are often an exception: short-term budget priorities, long and costly acquisition cycles, and security requirements to host their own infrastructure in-house<a class=\"reference\" href=\".\/references\/#20.3\" target=\"_blank\" rel=\"noopener noreferrer\">3,<\/a><a class=\"reference\" href=\".\/references\/#20.4\" target=\"_blank\" rel=\"noopener noreferrer\">4<\/a> have pushed the government towards maintaining and sustaining existing IT, rather than modernizing the technology.<a class=\"reference\" href=\".\/references\/#20.5\" target=\"_blank\" rel=\"noopener noreferrer\">5<\/a> Another exception is established commercial institutions with vital legacy infrastructure (for instance, 92 of the top 100 banks still use mainframe computers), which have such entrenched dependencies that updating IT can have costly and potentially disruptive effects on the business.<a class=\"reference\" href=\".\/references\/#20.6\" target=\"_blank\" rel=\"noopener noreferrer\">6<\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>What happens when things fail?<\/h3>\n<p>Any group that depends on legacy systems finds it hard to make use of the latest AI offerings, and the technology gap continues to increase over time. While an organization\u2019s current IT may not be as obsolete as the examples here, any older infrastructure has more limited libraries and software packages, and less computational power and memory, than modern systems, and therefore may not meet the requirements of heavy AI processing. 
So, algorithms developed elsewhere may not be compatible with existing solutions and can\u2019t simply be ported to an older generation of technology.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<table style=\"border: 5px solid #336699\">\n<tbody>\n <tr>\n  <th class=\"lessonshdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"lessons2hdr\" colspan=\"4\"><\/th>\n <\/tr>\n <tr>\n  <th class=\"categoryhdr1\"><\/th>\n  <th class=\"categoryhdr2\"><\/th>\n  <th class=\"categoryhdr3\"><\/th>\n  <th class=\"categoryhdr4\"><\/th>\n <\/tr>\n <tr>\n  <td class=\"inactive\"><a class=\"popmake-3347\" href=\"#\">Hold AI to a Higher Standard<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3405\" href=\"#\">Involve the Communities Affected by the AI <\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3421\" href=\"#\">Make Our Assumptions Explicit<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-2906\" href=\"#\">Monitor the AI&#8217;s Impact and Establish Layers of Accountability<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"active\"><a class=\"popmake-3096\" href=\"#\">It&#8217;s OK to Say No to Automation<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3409\" href=\"#\">Plan to Fail<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3450\" href=\"#\">Try Human-AI Couples Counseling<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3552\" href=\"#\">Envision Safeguards for AI Advocates<\/a><\/td>\n <\/tr>\n <tr>\n  <td class=\"inactive\"><a class=\"popmake-3393\" href=\"#\">AI Challenges are Multidisciplinary, so They Require a Multidisciplinary Team<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3414\" href=\"#\">Ask for Help: Hire a Villain<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-4413\" href=\"#\">Offer the User Choices<\/a><\/td>\n  <td class=\"active\"><a class=\"popmake-3146\" href=\"#\">Require Objective, Third-party Verification and Validation\u00a0<\/a><\/td>\n <\/tr>\n <tr style=\"border-bottom: 5px solid 
#336699\">\n  <td class=\"inactive\"><a class=\"popmake-4415\" href=\"#\">Incorporate Privacy, Civil Liberties, and Security from the Beginning<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3417\" href=\"#\">Use Math to Reduce Bad Outcomes Caused by Math<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3103\" href=\"#\">Promote Better Adoption through Gameplay<\/a><\/td>\n  <td class=\"inactive\"><a class=\"popmake-3199\" href=\"#\">Entrust Sector-specific Agencies to Establish AI Standards for Their Domains\u00a0<\/a><\/td>\n <\/tr>\n<\/tbody>\n<\/table>\n<p>[\/et_pb_tab][\/et_pb_tabs][\/et_pb_column_inner][\/et_pb_row_inner][\/et_pb_column][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; fullwidth=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.4.8&#8243; module_alignment=&#8221;center&#8221; global_module=&#8221;3880&#8243; saved_tabs=&#8221;all&#8221;][et_pb_fullwidth_code disabled_on=&#8221;off|off|off&#8221; admin_label=&#8221;Footer menu&#8221; _builder_version=&#8221;4.4.8&#8243; background_color=&#8221;#d5dde0&#8243; text_orientation=&#8221;center&#8221; module_alignment=&#8221;center&#8221; custom_padding=&#8221;10px||10px||false|false&#8221;]Add Your Experience! This site should be a community resource and would benefit from the addition of other examples and voices. You can write to us by clicking <a href=\"mailto:jrotner@mitre.org;rhodge@mitre.org;ldanley@mitre.org?subject=AI Fails website\">here<\/a>.[\/et_pb_fullwidth_code][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI Registry: The Things We\u2019ll Need That Support AI AI isn\u2019t just about the data and algorithms. To be successful, we as developers and deployers depend on a whole line of supporting elements. 
This section addresses some, but not all, of those elements, including the right governing policies, the right people, the right data, and [&hellip;]<\/p>\n","protected":false},"author":142,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"class_list":["post-5156","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/5156","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/users\/142"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/comments?post=5156"}],"version-history":[{"count":0,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/5156\/revisions"}],"wp:attachment":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/media?parent=5156"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}