{"id":65,"date":"2019-09-12T14:01:53","date_gmt":"2019-09-12T14:01:53","guid":{"rendered":"https:\/\/sites.mitre.org\/aifails\/?page_id=65"},"modified":"2020-07-13T20:12:27","modified_gmt":"2020-07-14T00:12:27","slug":"references","status":"publish","type":"page","link":"https:\/\/sites.mitre.org\/aifails\/references\/","title":{"rendered":"References"},"content":{"rendered":"\n<p>[et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;3.29.3&#8243; background_color=&#8221;rgba(0,0,0,0)&#8221;][et_pb_row column_structure=&#8221;1_4,3_4&#8243; admin_label=&#8221;row&#8221; module_class=&#8221;.footnote_plugin_tooltip_text&#8221; _builder_version=&#8221;4.4.1&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; border_color_right=&#8221;#000000&#8243;][et_pb_column type=&#8221;1_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_sidebar area=&#8221;et_pb_widget_area_10&#8243; admin_label=&#8221;References&#8221; _builder_version=&#8221;4.4.8&#8243;][\/et_pb_sidebar][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.5.0&#8243; text_line_height=&#8221;1.2em&#8221; content__hover_enabled=&#8221;off|desktop&#8221;]<\/p>\n<h1>References<\/h1>\n<h3>\u00a0<\/h3>\n<h3><em><strong>Home Page<\/strong><\/em><\/h3>\n<p><a class=\"anchor\" name=\"0.1\"><\/a><br \/>\n1. \u201cBenjamin Franklin quotable quote,\u201d Goodreads. Accessed March 16, 2020. [Online]. Available: <a href=\"https:\/\/www.goodreads.com\/quotes\/460142-if-you-fail-to-plan-you-are-planning-to-fail\">https:\/\/www.goodreads.com\/quotes\/460142-if-you-fail-to-plan-you-are-planning-to-fail<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.2\"><\/a><br \/>\n2. 
Department of Defense, \u201cSummary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity,\u201d <em>defense.gov,<\/em> February 12, 2019. [Online]. Available: <a href=\"https:\/\/media.defense.gov\/2019\/Feb\/12\/2002088963\/-1\/-1\/1\/SUMMARY-OF-DOD-AI-STRATEGY.PDF\">https:\/\/media.defense.gov\/2019\/Feb\/12\/2002088963\/-1\/-1\/1\/SUMMARY-OF-DOD-AI-STRATEGY.PDF<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.3\"><\/a><br \/>\n3. ichristianization, \u201cMicrosoft build 2017 translator demo,\u201d YouTube, June 13, 2017. [Online]. Available: <a href=\"https:\/\/www.youtube.com\/watch?v=u4cJoX-DoiY\">https:\/\/www.youtube.com\/watch?v=u4cJoX-DoiY<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.4\"><\/a><br \/>\n4. N. Martin, \u201cArtificial intelligence is being used to diagnose disease and design new drugs,\u201d <em>Forbes<\/em>, Sept. 30, 2019. [Online]. Available: <a href=\"https:\/\/www.forbes.com\/sites\/nicolemartin1\/2019\/09\/30\/artificial-intelligence-is-being-used-to-diagnose-disease-and-design-new-drugs\/#8874c44db51f\">https:\/\/www.forbes.com\/sites\/nicolemartin1\/2019\/09\/30\/artificial-intelligence-is-being-used-to-diagnose-disease-and-design-new-drugs\/#8874c44db51f<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.5\"><\/a><br \/>\n5. \u201cMeet the AI robots helping take care of elderly patients,\u201d <em>Time Magazine<\/em>, Aug. 23, 2019. [Online]. Available: <a href=\"https:\/\/time.com\/5660046\/robots-elderly-care\/\">https:\/\/time.com\/5660046\/robots-elderly-care\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.6\"><\/a><br \/>\n6. A. Chang, \u201cThe Facebook and Cambridge Analytica scandal, explained with a simple diagram,\u201d <em>Vox<\/em>, May 2, 2018. [Online]. 
Available: <a href=\"https:\/\/www.vox.com\/policy-and-politics\/2018\/3\/23\/17151916\/facebook-cambridge-analytica-trump-diagram\">https:\/\/www.vox.com\/policy-and-politics\/2018\/3\/23\/17151916\/facebook-cambridge-analytica-trump-diagram<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.7\"><\/a><br \/>\n7. P. Taddonio, \u201cHow China\u2019s government is using AI on its Uighur Muslim population,\u201d <em>Frontline<\/em>, Nov. 21, 2019. [Online]. Available: <a href=\"https:\/\/www.pbs.org\/wgbh\/frontline\/article\/how-chinas-government-is-using-ai-on-its-uighur-muslim-population\/\">https:\/\/www.pbs.org\/wgbh\/frontline\/article\/how-chinas-government-is-using-ai-on-its-uighur-muslim-population\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.8\"><\/a><br \/>\n8. D. Z. Morris, \u201cChina will block travel for those with bad \u2018social credit,\u2019\u201d <em>Fortune<\/em>, March 18, 2018. [Online]. Available: <a href=\"https:\/\/fortune.com\/2018\/03\/18\/china-travel-ban-social-credit\/\">https:\/\/fortune.com\/2018\/03\/18\/china-travel-ban-social-credit\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.9\"><\/a><br \/>\n9. R. Adams, \u201cHong Kong protesters are worried about facial recognition technology. But there are many other ways they&#8217;re being watched,\u201d <em>BuzzFeed News<\/em>, Aug. 17, 2019. [Online]. Available: <a href=\"https:\/\/www.buzzfeednews.com\/article\/rosalindadams\/hong-kong-protests-paranoia-facial-recognition-lasers\">https:\/\/www.buzzfeednews.com\/article\/rosalindadams\/hong-kong-protests-paranoia-facial-recognition-lasers<\/a><\/p>\n<p><a class=\"anchor\" name=\"0.a\"><\/a><br \/>\n10. S. Gibbs, \u201cTesla Model S cleared by auto safety regulator after fatal Autopilot crash,\u201d <em>Guardian<\/em>, Jan. 20, 2017. [Online]. 
Available: <a href=\"https:\/\/www.theguardian.com\/technology\/2017\/jan\/20\/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash\">https:\/\/www.theguardian.com\/technology\/2017\/jan\/20\/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3><strong><em>Fails<\/em><\/strong><\/h3>\n<p><strong>No Human Needed: the AI\u2019s Got This<\/strong><\/p>\n<p><a class=\"anchor\" name=\"1.1\"><\/a><br \/>\n1. E. Hunt, \u201cTay, Microsoft&#8217;s AI chatbot, gets a crash course in racism from Twitter,\u201d <em>Guardian<\/em>, March 24, 2016. [Online]. Available: <a href=\"https:\/\/www.theguardian.com\/technology\/2016\/mar\/24\/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter\">https:\/\/www.theguardian.com\/technology\/2016\/mar\/24\/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter<\/a><\/p>\n<p><a class=\"anchor\" name=\"1.2\"><\/a><br \/>\n2. C. Lecher, \u201cHow Amazon automatically tracks and fires warehouse workers for \u2018productivity,\u2019\u201d <em>The Verge<\/em>, Apr. 25, 2019. [Online]. Available: <a href=\"https:\/\/www.theverge.com\/2019\/4\/25\/18516004\/amazon-warehouse-fulfillment-centers-productivity-firing-terminations\">https:\/\/www.theverge.com\/2019\/4\/25\/18516004\/amazon-warehouse-fulfillment-centers-productivity-firing-terminations<\/a><\/p>\n<p><a class=\"anchor\" name=\"1.3\"><\/a><br \/>\n3. C. F. de Winter and D. Dodou, \u201cWhy the Fitts list has persisted throughout the history of function allocation,\u201d <em>SpringerLink<\/em>, August 25, 2011. [Online]. Available: <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10111-011-0188-1\">https:\/\/link.springer.com\/article\/10.1007\/s10111-011-0188-1<\/a><\/p>\n<p><a class=\"anchor\" name=\"1.4\"><\/a><br \/>\n4. E. Brynjolfsson and A. McAfee, <em>The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies<\/em>.
New York, NY, USA: W. W. Norton, 2016.<\/p>\n<p><a class=\"anchor\" name=\"1.5\"><\/a><br \/>\n5. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, \u201cSeven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,\u201d <em>IEEE Intelligent Systems<\/em>, Nov.\/Dec. 2014. [Online]. Available: <a href=\"http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf\">http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"1.6\"><\/a><br \/>\n6. N. D. Sarter, D. D. Woods, and C. E. Billings, \u201cAutomation surprises,\u201d in G. Salvendy (Ed.), <em>Handbook of Human Factors &amp; Ergonomics<\/em> (2nd ed., pp. 1926-1943). New York, NY, USA: John Wiley, 1997.<\/p>\n<p><a class=\"anchor\" name=\"1.7\"><\/a><br \/>\n7. S. M. Casner and E. L. Hutchins, \u201cWhat do we tell the drivers? Toward minimum driver training standards for partially automated cars,\u201d <em>Journal of Cognitive Engineering and Decision Making<\/em>, March 8, 2019. [Online]. Available: <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901\">https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901<\/a><\/p>\n<p><a class=\"anchor\" name=\"1.8\"><\/a><br \/>\n8. N. D. Sarter, D. D. Woods, and C. E. Billings, \u201cAutomation surprises,\u201d in G. Salvendy (Ed.), <em>Handbook of Human Factors &amp; Ergonomics<\/em> (2nd ed., pp. 1926-1943). New York, NY, USA: John Wiley, 1997.<\/p>\n<p><a class=\"anchor\" name=\"1.9\"><\/a><br \/>\n9. S. M. Casner and E. L. Hutchins, \u201cWhat do we tell the drivers? Toward minimum driver training standards for partially automated cars,\u201d <em>Journal of Cognitive Engineering and Decision Making<\/em>, March 8, 2019. [Online]. 
Available: <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901\">https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901<\/a><\/p>\n<p><a class=\"anchor\" name=\"1.a\"><\/a><br \/>\n10. T. Lewis, \u201cA brief history of artificial intelligence,\u201d <em>Live Science<\/em>, Dec. 4, 2014. [Online]. Available: <a href=\"https:\/\/www.livescience.com\/49007-history-of-artificial-intelligence.html\">https:\/\/www.livescience.com\/49007-history-of-artificial-intelligence.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"1.b\"><\/a><br \/>\n11. Data &amp; Society, \u201cAlgorithmic accountability: A primer,\u201d Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality, prepared for the Congressional Progressive Caucus, April 18, 2018. [Online]. Available: <a href=\"https:\/\/datasociety.net\/wp-content\/uploads\/2018\/04\/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf\">https:\/\/datasociety.net\/wp-content\/uploads\/2018\/04\/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"1.c\"><\/a><br \/>\n12. T. D. Jajal, \u201cDistinguishing between narrow AI, general AI and super AI,\u201d Medium, May 21, 2018. [Online]. Available: <a href=\"https:\/\/medium.com\/@tjajal\/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22\">https:\/\/medium.com\/@tjajal\/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<h3>\u00a0<\/h3>\n<p><strong>AI Perfectionists and AI \u201cPixie Dusters\u201d<\/strong><\/p>\n<p><a class=\"anchor\" name=\"2.1\"><\/a><br \/>\n1. J. Dastin, \u201cAmazon scraps secret AI recruiting tool that showed bias against women,\u201d <em>Reuters<\/em>, Oct. 9, 2018. [Online]. 
Available: <a href=\"https:\/\/www.reuters.com\/article\/us-amazon-com-jobs-automation-insight\/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G\">https:\/\/www.reuters.com\/article\/us-amazon-com-jobs-automation-insight\/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G<\/a><\/p>\n<p><a class=\"anchor\" name=\"2.2\"><\/a><br \/>\n2. Defense Science Board, <em>Task Force Report: The Role of Autonomy in DoD Systems<\/em>, Washington, D.C., June 2016. [Online]. Available: <a href=\"https:\/\/www.hsdl.org\/?abstract&amp;did=722318\">https:\/\/www.hsdl.org\/?abstract&amp;did=722318<\/a><\/p>\n<p><a class=\"anchor\" name=\"2.3\"><\/a><br \/>\n3. M. McDonough, \u201cBusiness-focus on artificial intelligence rising,\u201d Twitter, Feb. 28, 2017. [Online]. Available: <a href=\"https:\/\/twitter.com\/M_McDonough\/status\/836580294484451328\">https:\/\/twitter.com\/M_McDonough\/status\/836580294484451328<\/a><\/p>\n<p><a class=\"anchor\" name=\"2.4\"><\/a><br \/>\n4. C. O\u2019Neil, \u201cThe era of blind faith in big data must end,\u201d TED, April 2017. [Online]. Available: <a href=\"https:\/\/www.ted.com\/talks\/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end\">https:\/\/www.ted.com\/talks\/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end<\/a><\/p>\n<p><a class=\"anchor\" name=\"2.5\"><\/a><br \/>\n5. \u201cHere to help,\u201d <em>xkcd<\/em>. Accessed March 18, 2020. [Online]. Available: <a href=\"https:\/\/www.xkcd.com\/1831\/\">https:\/\/www.xkcd.com\/1831\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"2.6\"><\/a><br \/>\n6. J. Brownlee, \u201cA gentle introduction to transfer learning for deep learning,\u201d Machine Learning Mastery, Sept. 16, 2019. [Online]. 
Available: <a href=\"https:\/\/machinelearningmastery.com\/transfer-learning-for-deep-learning\/\">https:\/\/machinelearningmastery.com\/transfer-learning-for-deep-learning\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"2.7\"><\/a><br \/>\n7. S. Schuchmann, \u201cHistory of the second AI winter,\u201d towards data science, May 12, 2019. [Online]. Available: <a href=\"https:\/\/towardsdatascience.com\/history-of-the-second-ai-winter-406f18789d45\">https:\/\/towardsdatascience.com\/history-of-the-second-ai-winter-406f18789d45<\/a><\/p>\n<p><a class=\"anchor\" name=\"2.8\"><\/a><br \/>\n8. Defense Science Board, <em>Task Force Report: The Role of Autonomy in DoD Systems<\/em>, Washington, D.C., June 2016. [Online]. Available: <a href=\"https:\/\/www.hsdl.org\/?abstract&amp;did=722318\">https:\/\/www.hsdl.org\/?abstract&amp;did=722318<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Developers Are Wizards and Operators Are Muggles<\/strong><\/p>\n<p><a class=\"anchor\" name=\"3.1\"><\/a><br \/>\n1. A. Gregg, J. O&#8217;Connell, A. Ba Tran, and F. Siddiqui. \u201cAt tense meeting with Boeing executives, pilots fumed about being left in dark on plane software,\u201d <em>Washington Post<\/em>, March 13, 2019. [Online]. Available: <a href=\"https:\/\/www.washingtonpost.com\/business\/economy\/new-software-in-boeing-737-max-planes-under-scrutinty-after-second-crash\/2019\/03\/13\/06716fda-45c7-11e9-90f0-0ccfeec87a61_story.html\">https:\/\/www.washingtonpost.com\/business\/economy\/new-software-in-boeing-737-max-planes-under-scrutinty-after-second-crash\/2019\/03\/13\/06716fda-45c7-11e9-90f0-0ccfeec87a61_story.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"3.2\"><\/a><br \/>\n2. A. MacGillis, \u201cThe case against Boeing,\u201d <em>New Yorker<\/em>, Nov. 11, 2019. [Online]. 
Available: <a href=\"https:\/\/www.newyorker.com\/magazine\/2019\/11\/18\/the-case-against-boeing\">https:\/\/www.newyorker.com\/magazine\/2019\/11\/18\/the-case-against-boeing<\/a><\/p>\n<p><a class=\"anchor\" name=\"3.3\"><\/a><br \/>\n3. P. McCausland, \u201cSelf-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk,\u201d <em>NBC News<\/em>, Nov. 9, 2019. [Online]. Available: <a href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281\">https:\/\/www.nbcnews.com\/tech\/tech-news\/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281<\/a><\/p>\n<p><a class=\"anchor\" name=\"3.4\"><\/a><br \/>\n4. M. McFarland, \u201cMy seat keeps vibrating. Will it make me a better driver before driving me insane?\u201d <em>Washington Post<\/em>, Jan. 12, 2015. [Online]. Available: <a href=\"https:\/\/www.washingtonpost.com\/news\/innovations\/wp\/2015\/01\/12\/my-seat-keeps-vibrating-will-it-make-me-a-better-driver-before-driving-me-insane\/?noredirect=on&amp;utm_term=.31792eb87c03\">https:\/\/www.washingtonpost.com\/news\/innovations\/wp\/2015\/01\/12\/my-seat-keeps-vibrating-will-it-make-me-a-better-driver-before-driving-me-insane\/?noredirect=on&amp;utm_term=.31792eb87c03<\/a><\/p>\n<p><a class=\"anchor\" name=\"3.5\"><\/a><br \/>\n5. M. Cyril, \u201cWatching the Black body,\u201d Electronic Frontier Foundation, Feb. 28, 2019. [Online]. Available: <a href=\"https:\/\/www.eff.org\/deeplinks\/2019\/02\/watching-black-body\">https:\/\/www.eff.org\/deeplinks\/2019\/02\/watching-black-body<\/a><\/p>\n<p><a class=\"anchor\" name=\"3.6\"><\/a><br \/>\n6. \u201cBarry Friedman: Is technology making police better\u2014or\u2026\u201d <em>Recode Decode podcast<\/em>, Nov. 24, 2019. [Online]. 
Available: <a href=\"https:\/\/www.stitcher.com\/podcast\/vox\/recode-decode\/e\/65519494?curator=MediaREDEF\">https:\/\/www.stitcher.com\/podcast\/vox\/recode-decode\/e\/65519494?curator=MediaREDEF<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>You Call This Artificial \u201cIntelligence\u201d? AI Meets the Real World<\/strong><\/p>\n<p><a class=\"anchor\" name=\"f.1\"><\/a><br \/>\n1. R. Steinberg, \u201c6 areas where artificial neural networks outperform humans,\u201d <em>Venture Beat<\/em>, Dec. 8, 2017. [Online]. Available: <a href=\"https:\/\/venturebeat.com\/2017\/12\/08\/6-areas-where-artificial-neural-networks-outperform-humans\/\">https:\/\/venturebeat.com\/2017\/12\/08\/6-areas-where-artificial-neural-networks-outperform-humans\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Sensing is Believing<\/strong><\/p>\n<p>1. Reference 1 in this section comes from the category header: &#8220;You Call This Artificial \u201cIntelligence\u201d? AI Meets the Real World&#8221;<\/p>\n<p><a class=\"anchor\" name=\"4.2\"><\/a><br \/>\n2. N. Bilton, \u201cNest thermostat glitch leaves users in the cold,\u201d <em>New York Times<\/em>, Jan. 13, 2016. [Online]. Available: <a href=\"https:\/\/www.nytimes.com\/2016\/01\/14\/fashion\/nest-thermostat-glitch-battery-dies-software-freeze.html\">https:\/\/www.nytimes.com\/2016\/01\/14\/fashion\/nest-thermostat-glitch-battery-dies-software-freeze.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"4.3\"><\/a><br \/>\n3. A. J. Hawkins, \u201cEverything you need to know about the Boeing 737 Max airplane crashes,\u201d <em>The Verge<\/em>, March 22, 2019. [Online]. Available: <a href=\"https:\/\/www.theverge.com\/2019\/3\/22\/18275736\/boeing-737-max-plane-crashes-grounded-problems-info-details-explained-reasons\">https:\/\/www.theverge.com\/2019\/3\/22\/18275736\/boeing-737-max-plane-crashes-grounded-problems-info-details-explained-reasons<\/a><\/p>\n<p><a class=\"anchor\" name=\"4.4\"><\/a><br \/>\n4. E. 
Ongweso, \u201cSamsung Galaxy S10 \u2018vault-like security\u2019 beaten by a $3 screen protector,\u201d <em>Vice<\/em>, Oct. 17, 2019. [Online]. Available: <a href=\"https:\/\/www.vice.com\/en_us\/article\/59nqdb\/samsung-galaxy-s10-vault-like-security-beaten-by-a-dollar3-screen-protector\">https:\/\/www.vice.com\/en_us\/article\/59nqdb\/samsung-galaxy-s10-vault-like-security-beaten-by-a-dollar3-screen-protector<\/a><\/p>\n<p><a class=\"anchor\" name=\"4.5\"><\/a><br \/>\n5. \u201cAirplane redundancy systems,\u201d <em>Poente Technical<\/em>. Accessed April 3, 2020. [Online]. Available: <a href=\"https:\/\/www.poentetechnical.com\/aircraft-engineer\/airplane-redundancy-systems\/\">https:\/\/www.poentetechnical.com\/aircraft-engineer\/airplane-redundancy-systems\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Insecure AI<\/strong><\/p>\n<p>1. Reference 1 in this section comes from the category header: &#8220;You Call This Artificial \u201cIntelligence\u201d? AI Meets the Real World&#8221;<\/p>\n<p><a class=\"anchor\" name=\"5.2\"><\/a><br \/>\n2. AJ Vicens, \u201cAn Amazon Echo recorded a family\u2019s private conversation and sent it to some random person,\u201d <em>Mother Jones<\/em>, May 24, 2018. [Online]. Available: <a href=\"https:\/\/www.motherjones.com\/politics\/2018\/05\/an-amazon-echo-recorded-a-familys-private-conversation-and-sent-it-to-some-random-person\/\">https:\/\/www.motherjones.com\/politics\/2018\/05\/an-amazon-echo-recorded-a-familys-private-conversation-and-sent-it-to-some-random-person\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.3\"><\/a><br \/>\n3. J. Oates, \u201cJapanese hotel chain sorry that hackers may have watched guests through bedside robots,\u201d <em>Register<\/em>, Oct. 22, 2019. [Online]. 
Available: <a href=\"https:\/\/www.theregister.co.uk\/2019\/10\/22\/japanese_hotel_chain_sorry_that_bedside_robots_may_have_watched_guests\">https:\/\/www.theregister.co.uk\/2019\/10\/22\/japanese_hotel_chain_sorry_that_bedside_robots_may_have_watched_guests<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.4\"><\/a><br \/>\n4. T. G. Dietterich and E. J. Horvitz, \u201cRise of Concerns about AI: Reflections and Directions,\u201d <em>Communications of the ACM<\/em>, vol. 58, no. 10, pp. 38-40, October 2015. [Online]. Available: <a href=\"http:\/\/erichorvitz.com\/CACM_Oct_2015-VP.pdf\">http:\/\/erichorvitz.com\/CACM_Oct_2015-VP.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.5\"><\/a><br \/>\n5. J. S. McEwen and S. S. Shapiro, \u201cMITRE\u2019S Privacy Engineering Tools and Their Use in a Privacy Assessment Framework,\u201d The MITRE Corporation, McLean, VA, Nov. 2019. [Online]. Available: <a href=\"https:\/\/www.mitre.org\/publications\/technical-papers\/mitre%E2%80%99s-privacy-engineering-tools-and-their-use-in-a-privacy\">https:\/\/www.mitre.org\/publications\/technical-papers\/mitre%E2%80%99s-privacy-engineering-tools-and-their-use-in-a-privacy<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.6\"><\/a><br \/>\n6. University of Michigan Engineering, \u201cWatch engineers hack a \u2018smart home\u2019 door lock,\u201d YouTube, May 2, 2016. [Online]. Available: <a href=\"https:\/\/www.youtube.com\/watch?v=Iwm6nvC9Xhc\">https:\/\/www.youtube.com\/watch?v=Iwm6nvC9Xhc<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.7\"><\/a><br \/>\n7. M. Hanrahan, \u201cRing security camera hacks see homeowners subjected to racial abuse, ransom demands,\u201d <em>ABC News<\/em>, Dec. 12, 2019. [Online]. 
Available: <a href=\"https:\/\/abcnews.go.com\/US\/ring-security-camera-hacks-homeowners-subjected-racial-abuse\/story?id=67679790\">https:\/\/abcnews.go.com\/US\/ring-security-camera-hacks-homeowners-subjected-racial-abuse\/story?id=67679790<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.8\"><\/a><br \/>\n8. &#8220;Cybersecurity Vulnerabilities Affecting Medtronic Implantable Cardiac Devices, Programmers, and Home Monitors: FDA Safety Communication.\u201d US Food &amp; Drug Administration, March 2019. [Online]. Available: <a href=\"https:\/\/www.fda.gov\/medical-devices\/safety-communications\/cybersecurity-vulnerabilities-affecting-medtronic-implantable-cardiac-devices-programmers-and-home\">https:\/\/www.fda.gov\/medical-devices\/safety-communications\/cybersecurity-vulnerabilities-affecting-medtronic-implantable-cardiac-devices-programmers-and-home<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.9\"><\/a><br \/>\n9. J. Herrman, \u201cGoogle knows where you\u2019ve been but does it know who you are,\u201d <em>New York Times Magazine<\/em>, Sept. 12, 2018. [Online]. Available: <a href=\"https:\/\/www.nytimes.com\/2018\/09\/12\/magazine\/google-maps-location-data-privacy.html\">https:\/\/www.nytimes.com\/2018\/09\/12\/magazine\/google-maps-location-data-privacy.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.a\"><\/a><br \/>\n10. A. Greenberg, \u201cHackers remotely kill a Jeep on the highway\u2014with me in it,\u201d <em>Wired<\/em>, July 21, 2015. [Online]. Available: <a href=\"https:\/\/www.wired.com\/2015\/07\/hackers-remotely-kill-jeep-highway\/\">https:\/\/www.wired.com\/2015\/07\/hackers-remotely-kill-jeep-highway\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"5.b\"><\/a><br \/>\n11. L. Rocher, J. M. Hendrickx, and Y.-A. de Montjoye, \u201cEstimating the success of re-identifications in incomplete datasets using generative models,\u201d <em>Nature Communications<\/em>, July 23, 2019. [Online]. 
Available: <a href=\"https:\/\/www.nature.com\/articles\/s41467-019-10933-3\">https:\/\/www.nature.com\/articles\/s41467-019-10933-3<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>AI Pwned<\/strong><\/p>\n<p>1. Reference 1 in this section comes from the category header: &#8220;You Call This Artificial \u201cIntelligence\u201d? AI Meets the Real World&#8221;<\/p>\n<p><a class=\"anchor\" name=\"6.2\"><\/a><br \/>\n2. \u201cpwned,\u201d <em>Urban Dictionary<\/em>. Accessed March 11, 2020. [Online]. Available: <a href=\"https:\/\/www.urbandictionary.com\/define.php?term=pwned\">https:\/\/www.urbandictionary.com\/define.php?term=pwned<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.3\"><\/a><br \/>\n3. M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, \u201cA general framework for adversarial examples with objectives,\u201d <em>arXiv.org<\/em>, April 4, 2019. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1801.00349\">https:\/\/arxiv.org\/abs\/1801.00349<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.4\"><\/a><br \/>\n4. M. Fredrikson, S. Jha, and T. Ristenpart, \u201cModel Inversion Attacks that Exploit Confidence Information and Basic Countermeasures,\u201d <em>CCS &#8217;15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security<\/em>, October 2015, pp. 1322\u20131333. [Online]. Available: <a href=\"https:\/\/www.cs.cmu.edu\/~mfredrik\/papers\/fjr2015ccs.pdf\">https:\/\/www.cs.cmu.edu\/~mfredrik\/papers\/fjr2015ccs.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.5\"><\/a><br \/>\n5. M. James, \u201cAdversarial attacks on voice input,\u201d <em>I Programmer<\/em>, Jan. 31, 2018. [Online]. Available: <a href=\"https:\/\/www.i-programmer.info\/news\/105-artificial-intelligence\/11515-adversarial-attacks-on-voice-input.html\">https:\/\/www.i-programmer.info\/news\/105-artificial-intelligence\/11515-adversarial-attacks-on-voice-input.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.6\"><\/a><br \/>\n6. G. 
Ateniese et al., \u201cHacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers,\u201d <em>arXiv.org<\/em>, June 19, 2013. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1306.4447\">https:\/\/arxiv.org\/abs\/1306.4447<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.7\"><\/a><br \/>\n7. A. Polyakov, \u201cHow to attack Machine Learning (Evasion, Poisoning, Inference, Trojans, Backdoors),\u201d towards data science, August 6, 2019. [Online]. Available: <a href=\"https:\/\/towardsdatascience.com\/how-to-attack-machine-learning-evasion-poisoning-inference-trojans-backdoors-a7cb5832595c\">https:\/\/towardsdatascience.com\/how-to-attack-machine-learning-evasion-poisoning-inference-trojans-backdoors-a7cb5832595c<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.8\"><\/a><br \/>\n8. K. Eykholt et al., \u201cRobust physical-world attacks on deep learning models,\u201d <em>arXiv.org<\/em>, April 10, 2018. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1707.08945\">https:\/\/arxiv.org\/abs\/1707.08945<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.9\"><\/a><br \/>\n9. M. James, \u201cAdversarial attacks on voice input,\u201d <em>I Programmer<\/em>, Jan. 31, 2018. [Online]. Available: <a href=\"https:\/\/www.i-programmer.info\/news\/105-artificial-intelligence\/11515-adversarial-attacks-on-voice-input.html\">https:\/\/www.i-programmer.info\/news\/105-artificial-intelligence\/11515-adversarial-attacks-on-voice-input.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.a\"><\/a><br \/>\n10. A. Dorschel, \u201cRethinking data privacy: The impact of machine learning,\u201d <em>Medium<\/em>, April 24, 2019. [Online]. Available: <a href=\"https:\/\/medium.com\/luminovo\/data-privacy-in-machine-learning-a-technical-deep-dive-f7f0365b1d60\">https:\/\/medium.com\/luminovo\/data-privacy-in-machine-learning-a-technical-deep-dive-f7f0365b1d60<\/a><\/p>\n<p><a class=\"anchor\" name=\"6.b\"><\/a><br \/>\n11. M. Sharif, S. Bhagavatula, L. 
Bauer, and M. K. Reiter, \u201cA general framework for adversarial examples with objectives,\u201d <em>arXiv.org<\/em>, April 4, 2019. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1801.00349\">https:\/\/arxiv.org\/abs\/1801.00349<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Irrelevant Data, Irresponsible Outcomes<\/strong><\/p>\n<p><a class=\"anchor\" name=\"7.1\"><\/a><br \/>\n1. M. Simon, \u201cHP looking into claim webcams can\u2019t see black people,\u201d <em>CNN.com<\/em>, Dec. 23, 2009. [Online]. Available: <a href=\"http:\/\/www.cnn.com\/2009\/TECH\/12\/22\/hp.webcams\/index.html\">http:\/\/www.cnn.com\/2009\/TECH\/12\/22\/hp.webcams\/index.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.2\"><\/a><br \/>\n2. B. Barrett, \u201cLawmakers can\u2019t ignore facial recognition\u2019s bias anymore,\u201d <em>Wired<\/em>, July 26, 2018. [Online]. Available: <a href=\"https:\/\/www.wired.com\/story\/amazon-facial-recognition-congress-bias-law-enforcement\/\">https:\/\/www.wired.com\/story\/amazon-facial-recognition-congress-bias-law-enforcement\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.3\"><\/a><br \/>\n3. P. Egan, \u201cData glitch was apparent factor in false fraud charges against jobless claimants,\u201d <em>Detroit Free Press<\/em>, July 30, 2017. [Online]. Available: <a href=\"https:\/\/www.freep.com\/story\/news\/local\/michigan\/2017\/07\/30\/fraud-charges-unemployment-jobless-claimants\/516332001\/\">https:\/\/www.freep.com\/story\/news\/local\/michigan\/2017\/07\/30\/fraud-charges-unemployment-jobless-claimants\/516332001\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.4\"><\/a><br \/>\n4. S. Mullainathan, \u201cBiased algorithms are easier to fix than biased people,\u201d <em>New York Times<\/em>, Dec. 6, 2019. [Online]. 
Available: <a href=\"https:\/\/www.nytimes.com\/2019\/12\/06\/business\/algorithm-bias-fix.html?searchResultPosition=1\">https:\/\/www.nytimes.com\/2019\/12\/06\/business\/algorithm-bias-fix.html?searchResultPosition=1<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.5\"><\/a><br \/>\n5. Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, \u201cDissecting racial bias in an algorithm used to manage the health of populations,\u201d <em>Science,<\/em> vol. 366, no. 6464, pp. 447-453, Oct. 25, 2019. [Online]. Available: <a href=\"https:\/\/science.sciencemag.org\/content\/366\/6464\/447\">https:\/\/science.sciencemag.org\/content\/366\/6464\/447<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.6\"><\/a><br \/>\n6. K. Hao, \u201cThis is how AI bias really happens\u2014and why it\u2019s so hard to fix,\u201d <em>MIT Technology Review<\/em>, Feb. 4, 2020. [Online]. Available: <a href=\"https:\/\/www.technologyreview.com\/s\/612876\/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix\/\">https:\/\/www.technologyreview.com\/s\/612876\/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.7\"><\/a><br \/>\n7. Data &amp; Society, \u201cAlgorithmic accountability: A primer,\u201d Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality, Prepared for the Congressional Progressive Caucus, April 18, 2018. [Online]. Available: <a href=\"https:\/\/datasociety.net\/wp-content\/uploads\/2018\/04\/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf\">https:\/\/datasociety.net\/wp-content\/uploads\/2018\/04\/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.8\"><\/a><br \/>\n8. N. Barrowman, \u201cWhy data is never raw,\u201d <em>New Atlantis<\/em>, Summer\/Fall 2018. [Online]. 
Available: <a href=\"https:\/\/www.thenewatlantis.com\/publications\/why-data-is-never-raw\">https:\/\/www.thenewatlantis.com\/publications\/why-data-is-never-raw<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.9\"><\/a><br \/>\n9. K. Hao, \u201cThis is how AI bias really happens\u2014and why it\u2019s so hard to fix,\u201d <em>MIT Technology Review<\/em>, Feb. 4, 2020. [Online]. Available: <a href=\"https:\/\/www.technologyreview.com\/s\/612876\/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix\/\">https:\/\/www.technologyreview.com\/s\/612876\/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"7.a\"><\/a><br \/>\n10. K. Crawford and R. Calo, \u201cThere is a blind spot in AI research,\u201d <em>Nature<\/em>, Oct. 13, 2016. [Online]. Available: <a href=\"https:\/\/www.nature.com\/articles\/538311a\">https:\/\/www.nature.com\/articles\/538311a<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>You Told Me to Do This<\/strong><\/p>\n<p><a class=\"anchor\" name=\"8.1\"><\/a><br \/>\n1. N. V. Patel, \u201cWhy doctors aren\u2019t afraid of better, more efficient AI diagnosing cancer,\u201d <em>Daily Beast<\/em>, Dec. 22, 2017. [Online]. Available: <a href=\"https:\/\/www.thedailybeast.com\/why-doctors-arent-afraid-of-better-more-efficient-ai-diagnosing-cancer\">https:\/\/www.thedailybeast.com\/why-doctors-arent-afraid-of-better-more-efficient-ai-diagnosing-cancer<\/a><\/p>\n<p><a class=\"anchor\" name=\"8.2\"><\/a><br \/>\n2. T. Murphy VII, \u201cThe first level of Super Mario Bros. is easy with lexicographic orderings and time travel&#8230; after that it gets a little tricky,\u201d April 1, 2013. [Online]. Available: <a href=\"http:\/\/www.cs.cmu.edu\/~tom7\/mario\/mario.pdf\">http:\/\/www.cs.cmu.edu\/~tom7\/mario\/mario.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"8.3\"><\/a><br \/>\n3. J. 
Vincent, \u201cOpenAI has published the text-generating AI it said was too dangerous to share,\u201d <em>The Verge,<\/em> Nov. 7, 2019. [Online]. Available: <a href=\"https:\/\/www.theverge.com\/2019\/11\/7\/20953040\/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters\">https:\/\/www.theverge.com\/2019\/11\/7\/20953040\/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters<\/a><\/p>\n<p><a class=\"anchor\" name=\"8.4\"><\/a><br \/>\n4. \u201cGPT-2: 1.5B Release,\u201d <em>OpenAI<\/em>, Nov. 5, 2019. [Online]. Available: <a href=\"https:\/\/openai.com\/blog\/gpt-2-1-5b-release\/\">https:\/\/openai.com\/blog\/gpt-2-1-5b-release\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"8.5\"><\/a><br \/>\n5. D. Amodei et al., \u201cConcrete problems in AI safety,\u201d <em>arXiv.org<\/em>, July 25, 2016. [Online]. Available: <a href=\"https:\/\/arxiv.org\/pdf\/1606.06565.pdf\">https:\/\/arxiv.org\/pdf\/1606.06565.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"8.6\"><\/a><br \/>\n6. Data &amp; Society, \u201cAlgorithmic accountability: A primer,\u201d Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality, Prepared for the Congressional Progressive Caucus, April 18, 2018. [Online]. Available: <a href=\"https:\/\/datasociety.net\/wp-content\/uploads\/2018\/04\/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf\">https:\/\/datasociety.net\/wp-content\/uploads\/2018\/04\/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"8.7\"><\/a><br \/>\n7. A. Narayanan, \u201c21 fairness definitions and their politics,\u201d presented at Conference on Fairness, Accountability, and Transparency, Feb. 23, 2018. [Online]. Available: <a href=\"https:\/\/fairmlbook.org\/tutorial2.html\">https:\/\/fairmlbook.org\/tutorial2.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"8.8\"><\/a><br \/>\n8. \u201cCollege Board Announces Improved Admissions Resource,\u201d <em>College Board, <\/em>Aug. 27, 2019. 
[Online]. Available: <a href=\"https:\/\/www.collegeboard.org\/releases\/2019\/college-board-announces-improved-admissions-resource\">https:\/\/www.collegeboard.org\/releases\/2019\/college-board-announces-improved-admissions-resource<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Feeding the Feedback Loop<\/strong><\/p>\n<p><a class=\"anchor\" name=\"9.1\"><\/a><br \/>\n1. A. Jenkins, \u201cThis town is fining drivers to fight &#8216;horrific&#8217; traffic from Google Maps and Waze,\u201d <em>Travel + Leisure<\/em>, Dec. 26, 2017. [Online]. Available: <a href=\"https:\/\/www.travelandleisure.com\/travel-news\/leonia-waze-google-maps-fines\">https:\/\/www.travelandleisure.com\/travel-news\/leonia-waze-google-maps-fines<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.2\"><\/a><br \/>\n2. A. Feng and S. Wu, \u201cThe myth of the impartial machine,\u201d <em>Parametric Press<\/em>, no. 01 (Science + Society), May 1, 2019. [Online]. Available: <a href=\"https:\/\/parametric.press\/issue-01\/the-myth-of-the-impartial-machine\/\">https:\/\/parametric.press\/issue-01\/the-myth-of-the-impartial-machine\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.3\"><\/a><br \/>\n3. E. Lacey, \u201cThe toxic potential of YouTube\u2019s feedback loop,\u201d <em>Wired<\/em>, July 13, 2019. [Online]. Available: <a href=\"https:\/\/www.wired.com\/story\/the-toxic-potential-of-youtubes-feedback-loop\/\">https:\/\/www.wired.com\/story\/the-toxic-potential-of-youtubes-feedback-loop\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.4\"><\/a><br \/>\n4. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. 
Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.5\"><\/a><br \/>\n5. M. Heid, \u201cThe unsettling ways tech is changing your personal reality,\u201d <em>Elemental<\/em>, Oct. 3, 2019. [Online]. Available: <a href=\"https:\/\/elemental.medium.com\/technology-is-fundamentally-changing-the-ways-you-think-and-feel-b4bbfdefc2ee\">https:\/\/elemental.medium.com\/technology-is-fundamentally-changing-the-ways-you-think-and-feel-b4bbfdefc2ee<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.6\"><\/a><br \/>\n6. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.7\"><\/a><br \/>\n7. W. Oremus, \u201cWho controls your Facebook feed,\u201d <em>Slate<\/em>, Jan. 3, 2016. [Online]. Available: <a href=\"http:\/\/www.slate.com\/articles\/technology\/cover_story\/2016\/01\/how_facebook_s_news_feed_algorithm_works.html\">http:\/\/www.slate.com\/articles\/technology\/cover_story\/2016\/01\/how_facebook_s_news_feed_algorithm_works.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.8\"><\/a><br \/>\n8. \u201cTech experts: What you post online could be directly impacting your insurance coverage,\u201d <em>CBS New York<\/em>, March 21, 2019. [Online]. Available: <a href=\"https:\/\/newyork.cbslocal.com\/2019\/03\/21\/online-posting-dangerous-selfies-insurance-coverage\/\">https:\/\/newyork.cbslocal.com\/2019\/03\/21\/online-posting-dangerous-selfies-insurance-coverage\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.9\"><\/a><br \/>\n9. R. 
Deller, \u201cBook review: Automating inequality: How high-tech tools profile, police and punish the poor by Virginia Eubanks,\u201d <em>LSE Review of Books blog<\/em>, July 2, 2018. [Online]. Available: <a href=\"https:\/\/blogs.lse.ac.uk\/lsereviewofbooks\/2018\/07\/02\/book-review-automating-inequality-how-high-tech-tools-profile-police-and-punish-the-poor-by-virginia-eubanks\/\">https:\/\/blogs.lse.ac.uk\/lsereviewofbooks\/2018\/07\/02\/book-review-automating-inequality-how-high-tech-tools-profile-police-and-punish-the-poor-by-virginia-eubanks\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"9.a\"><\/a><br \/>\n10. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>A Special Case: AI Arms Race<\/strong><\/p>\n<p><a class=\"anchor\" name=\"10.1\"><\/a><br \/>\n1. \u201cHow artificial intelligence could increase the risk of nuclear war,\u201d <em>The RAND blog<\/em>, April 23, 2018. [Online]. Available: <a href=\"https:\/\/www.rand.org\/blog\/articles\/2018\/04\/how-artificial-intelligence-could-increase-the-risk.html\">https:\/\/www.rand.org\/blog\/articles\/2018\/04\/how-artificial-intelligence-could-increase-the-risk.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"10.2\"><\/a><br \/>\n2. \u201cHow artificial intelligence could increase the risk of nuclear war,\u201d <em>The RAND blog<\/em>, April 23, 2018. [Online]. 
Available: <a href=\"https:\/\/www.rand.org\/blog\/articles\/2018\/04\/how-artificial-intelligence-could-increase-the-risk.html\">https:\/\/www.rand.org\/blog\/articles\/2018\/04\/how-artificial-intelligence-could-increase-the-risk.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"10.3\"><\/a><br \/>\n3. P. Scharre, \u201cKiller apps: The real dangers of an AI arms race,\u201d <em>Foreign Affairs<\/em>, March\/April 2019. [Online]. Available: <a href=\"https:\/\/www.foreignaffairs.com\/articles\/2019-04-16\/killer-apps\">https:\/\/www.foreignaffairs.com\/articles\/2019-04-16\/killer-apps<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Testing in the Wild<\/strong><\/p>\n<p><a name=\"11.1\"><\/a><br \/>\n1. A. MacGillis, \u201cThe case against Boeing,\u201d <em>New Yorker<\/em>, Nov. 11, 2019. [Online]. Available: <a href=\"https:\/\/www.newyorker.com\/magazine\/2019\/11\/18\/the-case-against-boeing\">https:\/\/www.newyorker.com\/magazine\/2019\/11\/18\/the-case-against-boeing<\/a><\/p>\n<p><a name=\"11.2\"><\/a><br \/>\n2. N. Sonnad, \u201cA flawed algorithm led the UK to deport thousands of students,\u201d <em>Quartz<\/em>, May 3, 2018. [Online]. Available: <a href=\"https:\/\/qz.com\/1268231\/a-toeic-test-led-the-uk-to-deport-thousands-of-students\/\">https:\/\/qz.com\/1268231\/a-toeic-test-led-the-uk-to-deport-thousands-of-students\/<\/a><\/p>\n<p><a name=\"11.3\"><\/a><br \/>\n3. \u201cAhsan v The Secretary of State for the Home Department (Rev 1) [2017] EWCA Civ 2009 (05 December 2017).\u201d British and Irish Legal Information Institute, December 5, 2017. [Online]. Available: <a href=\"http:\/\/www.bailii.org\/ew\/cases\/EWCA\/Civ\/2017\/2009.html\">http:\/\/www.bailii.org\/ew\/cases\/EWCA\/Civ\/2017\/2009.html<\/a><\/p>\n<p><a name=\"11.4\"><\/a><br \/>\n4. P. Wu, \u201cTest your machine learning algorithm with metamorphic testing,\u201d <em>Medium<\/em>, Nov. 13, 2017. [Online]. 
Available: <a href=\"https:\/\/medium.com\/trustableai\/testing-ai-with-metamorphic-testing-61d690001f5c\">https:\/\/medium.com\/trustableai\/testing-ai-with-metamorphic-testing-61d690001f5c<\/a><\/p>\n<p><a name=\"11.5\"><\/a><br \/>\n5. I. Goodfellow and N. Papernot, \u201cThe challenge of verification and testing of machine learning,\u201d <em>Cleverhans blog<\/em>, June 14, 2017. [Online]. Available: <a href=\"http:\/\/www.cleverhans.io\/security\/privacy\/ml\/2017\/06\/14\/verification.html\">http:\/\/www.cleverhans.io\/security\/privacy\/ml\/2017\/06\/14\/verification.html<\/a><\/p>\n<p><a name=\"11.6\"><\/a><br \/>\n6. R. Meudec, \u201cIntroducing tf-explain, interpretability for TensorFlow 2.0,\u201d <em>Sicara blog<\/em>, July 30, 2019. [Online]. Available: <a href=\"https:\/\/blog.sicara.com\/tf-explain-interpretability-tensorflow-2-9438b5846e35\">https:\/\/blog.sicara.com\/tf-explain-interpretability-tensorflow-2-9438b5846e35<\/a><\/p>\n<p><a name=\"11.7\"><\/a><br \/>\n7. \u201cFit interpretable machine learning models. Explain blackbox machine learning,\u201d GitHub. Accessed March 13, 2020. [Online]. Available: <a href=\"https:\/\/github.com\/Microsoft\/interpret\">https:\/\/github.com\/Microsoft\/interpret<\/a><\/p>\n<p><a name=\"11.8\"><\/a><br \/>\n8. Y. Sun et al., \u201cStructural test coverage criteria for deep neural networks,\u201d in <em>2019 IEEE\/ACM 41st International Conference on Software Engineering: Companion Proceedings<\/em>, 2019. [Online]. Available: <a href=\"https:\/\/www.kroening.com\/papers\/emsoft2019.pdf\">https:\/\/www.kroening.com\/papers\/emsoft2019.pdf<\/a><\/p>\n<p><a name=\"11.9\"><\/a><br \/>\n9. L. M. Strickhart and H. N. J. Lee, \u201cShow your work: Machine learning explainer tools and their use in artificial intelligence assurance,\u201d The MITRE Corporation, McLean, VA, June 2019, unpublished.<\/p>\n<p><a name=\"11.a\"><\/a><br \/>\n10. D. 
Sculley et al., \u201cMachine learning: The high interest credit card of technical debt,\u201d in <em>SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop)<\/em>. Accessed March 16, 2020. [Online]. Available: <a href=\"https:\/\/ai.google\/research\/pubs\/pub43146\">https:\/\/ai.google\/research\/pubs\/pub43146<\/a><\/p>\n<p><a name=\"11.b\"><\/a><br \/>\n11. A. Madan, \u201c3 practical ways to future-proof your IoT devices,\u201d <em>IoT Times<\/em>, July 2, 2019. [Online]. Available: <a href=\"https:\/\/iot.eetimes.com\/3-practical-ways-to-future-proof-your-iot-devices\/\">https:\/\/iot.eetimes.com\/3-practical-ways-to-future-proof-your-iot-devices\/<\/a><\/p>\n<p><a name=\"11.c\"><\/a><br \/>\n12. A. Gonfalonieri, \u201cWhy machine learning models degrade in production,\u201d <em>Towards Data Science<\/em>, July 25, 2019. [Online]. Available: <a href=\"https:\/\/towardsdatascience.com\/why-machine-learning-models-degrade-in-production-d0f2108e9214\">https:\/\/towardsdatascience.com\/why-machine-learning-models-degrade-in-production-d0f2108e9214<\/a><\/p>\n<p><a name=\"11.d\"><\/a><br \/>\n13. D. Sculley et al., \u201cMachine learning: The high interest credit card of technical debt,\u201d in <em>SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop)<\/em>. Accessed March 16, 2020. [Online]. Available: <a href=\"https:\/\/ai.google\/research\/pubs\/pub43146\">https:\/\/ai.google\/research\/pubs\/pub43146<\/a><\/p>\n<p><a name=\"11.e\"><\/a><br \/>\n14. D. Sculley et al., \u201cMachine learning: The high interest credit card of technical debt,\u201d in <em>SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop)<\/em>. Accessed March 16, 2020. [Online]. Available: <a href=\"https:\/\/ai.google\/research\/pubs\/pub43146\">https:\/\/ai.google\/research\/pubs\/pub43146<\/a><\/p>\n<p><a name=\"11.f\"><\/a><br \/>\n15. R. 
Potember, \u201cPerspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD,\u201d Defense Technical Information Center, Jan. 1, 2017. [Online]. Available: <a href=\"https:\/\/apps.dtic.mil\/docs\/citations\/AD1024432\">https:\/\/apps.dtic.mil\/docs\/citations\/AD1024432<\/a><\/p>\n<p><a name=\"11.g\"><\/a><br \/>\n16. J. Zittrain, \u201cThe hidden costs of automated thinking,\u201d <em>The New Yorker<\/em>, July 23, 2019. [Online]. Available: <a href=\"https:\/\/www.newyorker.com\/tech\/annals-of-technology\/the-hidden-costs-of-automated-thinking\">https:\/\/www.newyorker.com\/tech\/annals-of-technology\/the-hidden-costs-of-automated-thinking<\/a><\/p>\n<p><a name=\"11.h\"><\/a><br \/>\n17. N. Carne, \u201cBlaming the driver in a \u2018driverless\u2019 car,\u201d <em>Cosmos<\/em>, Oct. 29, 2019. [Online]. Available: <a href=\"https:\/\/cosmosmagazine.com\/technology\/blaming-the-driver-in-a-driverless-car\">https:\/\/cosmosmagazine.com\/technology\/blaming-the-driver-in-a-driverless-car<\/a><\/p>\n<p><a name=\"11.i\"><\/a><br \/>\n18. S. Captain, \u201cHumans were to blame in Google self-driving car crash, police say,\u201d <em>Fast Company<\/em>, May 4, 2018. [Online]. Available: <a href=\"https:\/\/www.fastcompany.com\/40568609\/humans-were-to-blame-in-google-self-driving-car-crash-police-say\">https:\/\/www.fastcompany.com\/40568609\/humans-were-to-blame-in-google-self-driving-car-crash-police-say<\/a><\/p>\n<p><a name=\"11.j\"><\/a><br \/>\n19. J. Stewart, \u201cTesla&#8217;s autopilot was involved in another deadly car crash,\u201d <em>Wired<\/em>, March 30, 2018. [Online]. Available: <a href=\"https:\/\/www.wired.com\/story\/tesla-autopilot-self-driving-crash-california\/\">https:\/\/www.wired.com\/story\/tesla-autopilot-self-driving-crash-california\/<\/a><\/p>\n<p><a name=\"11.k\"><\/a><br \/>\n20. J. Stewart, \u201cWhy Tesla\u2019s autopilot can\u2019t see a stopped firetruck,\u201d <em>Wired<\/em>, Aug. 27, 2018. 
[Online]. Available: <a href=\"https:\/\/www.wired.com\/story\/tesla-autopilot-why-crash-radar\/\">https:\/\/www.wired.com\/story\/tesla-autopilot-why-crash-radar\/<\/a><\/p>\n<p><a name=\"11.l\"><\/a><br \/>\n21. M. McFarland, \u201cUber self-driving car kills pedestrian in first fatal autonomous crash,\u201d <em>CNN Business<\/em>, March 19, 2018. [Online]. Available: <a href=\"https:\/\/money.cnn.com\/2018\/03\/19\/technology\/uber-autonomous-car-fatal-crash\/index.html\">https:\/\/money.cnn.com\/2018\/03\/19\/technology\/uber-autonomous-car-fatal-crash\/index.html<\/a><\/p>\n<p><a name=\"11.m\"><\/a><br \/>\n22. A. MacGillis, \u201cThe case against Boeing,\u201d <em>New Yorker<\/em>, Nov. 11, 2019. [Online]. Available: <a href=\"https:\/\/www.newyorker.com\/magazine\/2019\/11\/18\/the-case-against-boeing\">https:\/\/www.newyorker.com\/magazine\/2019\/11\/18\/the-case-against-boeing<\/a><\/p>\n<p><a name=\"11.n\"><\/a><br \/>\n23. S. M. Casner and E. L. Hutchins, \u201cWhat do we tell the drivers? Toward minimum driver training standards for partially automated cars,\u201d <em>Journal of Cognitive Engineering and Decision Making<\/em>, March 8, 2019. [Online]. Available: <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901\">https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901<\/a><\/p>\n<p><a name=\"11.o\"><\/a><br \/>\n24. W. Langewiesche, \u201cThe human factor,\u201d <em>Vanity Fair<\/em>, Oct. 2014. [Online]. Available: <a href=\"https:\/\/www.vanityfair.com\/news\/business\/2014\/10\/air-france-flight-447-crash\">https:\/\/www.vanityfair.com\/news\/business\/2014\/10\/air-france-flight-447-crash<\/a><\/p>\n<p><a name=\"11.p\"><\/a><br \/>\n25. \u201cA320, vicinity Tel Aviv Israel, 2012,\u201d <em>SKYbrary<\/em>. Accessed on: March 11, 2020. [Online]. 
Available: <a href=\"https:\/\/www.skybrary.aero\/index.php\/A320,_vicinity_Tel_Aviv_Israel,_2012\">https:\/\/www.skybrary.aero\/index.php\/A320,_vicinity_Tel_Aviv_Israel,_2012<\/a><\/p>\n<p><a name=\"11.q\"><\/a><br \/>\n26. S. Gibbs, \u201cTesla Model S cleared by auto safety regulator after fatal Autopilot crash,\u201d <em>Guardian<\/em>, Jan. 20, 2017. [Online]. Available: <a href=\"https:\/\/www.theguardian.com\/technology\/2017\/jan\/20\/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash\">https:\/\/www.theguardian.com\/technology\/2017\/jan\/20\/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash<\/a><\/p>\n<p><a name=\"11.r\"><\/a><br \/>\n27. C. Ross and I. Swetlitz, \u201cIBM\u2019s Watson supercomputer recommended \u2018unsafe and incorrect\u2019 cancer treatments, internal documents show,\u201d <em>STATnews, <\/em>July 25, 2018. [Online]. Available: <a href=\"https:\/\/www.statnews.com\/wp-content\/uploads\/2018\/09\/IBMs-Watson-recommended-unsafe-and-incorrect-cancer-treatments-STAT.pdf\">https:\/\/www.statnews.com\/wp-content\/uploads\/2018\/09\/IBMs-Watson-recommended-unsafe-and-incorrect-cancer-treatments-STAT.pdf<\/a><\/p>\n<p><a name=\"11.s\"><\/a><br \/>\n28. S. Fussell, \u201cPearson Embedded a &#8216;Social-Psychological&#8217; Experiment in Students&#8217; Educational Software [Updated],\u201d <em>Gizmodo<\/em>, April 18, 2018. [Online]. Available: <a href=\"https:\/\/gizmodo.com\/pearson-embedded-a-social-psychological-experiment-in-s-1825367784\">https:\/\/gizmodo.com\/pearson-embedded-a-social-psychological-experiment-in-s-1825367784<\/a><\/p>\n<p><a name=\"11.t\"><\/a><br \/>\n29. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. 
Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Government Dependence on Black Box Vendors <\/strong><\/p>\n<p><a class=\"anchor\" name=\"12.1\"><\/a><br \/>\n1. S. Corbett-Davies, E. Pierson, A. Feller, and S. Goel, \u201cA computer program used for bail and sentencing decisions was labeled biased against blacks. It&#8217;s actually not that clear,\u201d <em>Washington Post<\/em>, Oct. 17, 2016. [Online]. Available: <a href=\"https:\/\/www.washingtonpost.com\/news\/monkey-cage\/wp\/2016\/10\/17\/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas\/?noredirect=on&amp;utm_term=.a9cfb19a549d\">https:\/\/www.washingtonpost.com\/news\/monkey-cage\/wp\/2016\/10\/17\/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas\/?noredirect=on&amp;utm_term=.a9cfb19a549d<\/a><\/p>\n<p><a class=\"anchor\" name=\"12.2\"><\/a><br \/>\n2. R. Wexler, \u201cWhen a computer program keeps you in jail,\u201d <em>New York Times<\/em>, June 13, 2017. [Online]. Available: <a href=\"https:\/\/www.nytimes.com\/2017\/06\/13\/opinion\/how-computers-are-harming-criminal-justice.html\">https:\/\/www.nytimes.com\/2017\/06\/13\/opinion\/how-computers-are-harming-criminal-justice.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"12.3\"><\/a><br \/>\n3. C. Langford, \u201cHouston Schools Must Face Teacher Evaluation Lawsuit,\u201d <em>Courthouse News Service, <\/em>May 8, 2017. [Online]. Available: <a href=\"https:\/\/www.courthousenews.com\/houston-schools-must-face-teacher-evaluation-lawsuit\/\">https:\/\/www.courthousenews.com\/houston-schools-must-face-teacher-evaluation-lawsuit\/<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Clear as Mud<\/strong><br \/>\n<a class=\"anchor\" name=\"13.1\"><\/a><br \/>\n1. A. 
Yoo, \u201cUPS: Driving performance by optimizing driver behavior,\u201d Harvard Business School Digital Initiative, April 5, 2017. [Online]. Available: <a href=\"https:\/\/digital.hbs.edu\/platform-digit\/submission\/ups-driving-performance-by-optimizing-driver-behavior\/\">https:\/\/digital.hbs.edu\/platform-digit\/submission\/ups-driving-performance-by-optimizing-driver-behavior\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.2\"><\/a><br \/>\n2. K. Hill, \u201cFacebook recommended that this psychiatrist&#8217;s patients friend each other,\u201d <em>Splinternews<\/em>, Aug. 29, 2016. [Online]. Available: <a href=\"https:\/\/splinternews.com\/facebook-recommended-that-this-psychiatrists-patients-f-1793861472\">https:\/\/splinternews.com\/facebook-recommended-that-this-psychiatrists-patients-f-1793861472<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.3\"><\/a><br \/>\n3. B. Khaleghi, \u201cThe what of explainable AI,\u201d Element AI, Sept. 3, 2019. [Online]. Available: <a href=\"https:\/\/www.elementai.com\/news\/2019\/the-what-of-explainable-ai\">https:\/\/www.elementai.com\/news\/2019\/the-what-of-explainable-ai<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.4\"><\/a><br \/>\n4. C. Rudin, \u201cStop explaining black box machine learning models for high stakes decisions and use interpretable models instead,\u201d <em>arXiv.org<\/em>, Sep. 22, 2019. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1811.10154\">https:\/\/arxiv.org\/abs\/1811.10154<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.5\"><\/a><br \/>\n5. P. L. McDermott, \u201cHuman-machine teaming systems engineering guide,\u201d The MITRE Corporation, Dec. 2018. [Online]. Available: <a href=\"https:\/\/www.mitre.org\/publications\/technical-papers\/human-machine-teaming-systems-engineering-guide\">https:\/\/www.mitre.org\/publications\/technical-papers\/human-machine-teaming-systems-engineering-guide<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.6\"><\/a><br \/>\n6. D. 
Gunning, \u201cExplainable artificial intelligence (XAI),\u201d Defense Advanced Research Projects Agency, Nov. 2017. [Online]. Available: <a href=\"https:\/\/www.darpa.mil\/attachments\/XAIProgramUpdate.pdf\">https:\/\/www.darpa.mil\/attachments\/XAIProgramUpdate.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.7\"><\/a><br \/>\n7. Z. C. Lipton, \u201cThe mythos of model interpretability,\u201d <em>arXiv.org<\/em>, March 6, 2017. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1606.03490\">https:\/\/arxiv.org\/abs\/1606.03490<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.8\"><\/a><br \/>\n8. C. Rudin, \u201cStop explaining black box machine learning models for high stakes decisions and use interpretable models instead,\u201d <em>arXiv.org<\/em>, Sep. 22, 2019. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1811.10154\">https:\/\/arxiv.org\/abs\/1811.10154<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.9\"><\/a><br \/>\n9. Z. C. Lipton, \u201cThe mythos of model interpretability,\u201d <em>arXiv.org<\/em>, March 6, 2017. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1606.03490\">https:\/\/arxiv.org\/abs\/1606.03490<\/a><\/p>\n<p><a class=\"anchor\" name=\"13.a\"><\/a><br \/>\n10. C. Rudin, \u201cStop explaining black box machine learning models for high stakes decisions and use interpretable models instead,\u201d <em>arXiv.org<\/em>, Sep. 22, 2019. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1811.10154\">https:\/\/arxiv.org\/abs\/1811.10154<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>In AI We Overtrust<\/strong><\/p>\n<p><a name=\"14.1\"><\/a><br \/>\n1. P. Robinette, W. Li, R. Allen, A. M. Howard, and A. R. Wagner, \u201cOvertrust of robots in emergency evacuation scenarios,\u201d presented at 2016 11th ACM\/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, 2016, pp. 
101-108. [Online]. Available: <a href=\"https:\/\/www.cc.gatech.edu\/~alanwags\/pubs\/Robinette-HRI-2016.pdf\">https:\/\/www.cc.gatech.edu\/~alanwags\/pubs\/Robinette-HRI-2016.pdf<\/a><\/p>\n<p><a name=\"14.2\"><\/a><br \/>\n2. Georgia Tech, \u201cIn emergencies, should you trust a robot?\u201d YouTube. Accessed March 13, 2020. [Online]. Available: <a href=\"https:\/\/www.youtube.com\/watch?v=frr6cVBQPXQ\">https:\/\/www.youtube.com\/watch?v=frr6cVBQPXQ<\/a><\/p>\n<p><a name=\"14.3\"><\/a><br \/>\n3. M. Heid, \u201cThe unsettling ways tech is changing your personal reality,\u201d <em>Elemental<\/em>, Oct. 3, 2019. [Online]. Available: <a href=\"https:\/\/elemental.medium.com\/technology-is-fundamentally-changing-the-ways-you-think-and-feel-b4bbfdefc2ee\">https:\/\/elemental.medium.com\/technology-is-fundamentally-changing-the-ways-you-think-and-feel-b4bbfdefc2ee<\/a><\/p>\n<p><a name=\"14.4\"><\/a><br \/>\n4. M. Vazquez, A. May, A. Steinfeld, and W.-H. Chen, \u201cA deceptive robot referee in a multiplayer gaming environment,\u201d Conference Paper, <em>Proceedings of 2011 International Conference on Collaboration Technologies and Systems (CTS)<\/em>, pp. 204-211, May 2011. [Online]. Available: <a href=\"https:\/\/www.ri.cmu.edu\/publications\/a-deceptive-robot-referee-in-a-multiplayer-gaming-environment\/\">https:\/\/www.ri.cmu.edu\/publications\/a-deceptive-robot-referee-in-a-multiplayer-gaming-environment\/<\/a><\/p>\n<p><a name=\"14.5\"><\/a><br \/>\n5. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a name=\"14.6\"><\/a><br \/>\n6. 
\u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a name=\"14.7\"><\/a><br \/>\n7. L. Hansen, \u201c8 drivers who blindly followed their GPS into disaster,\u201d <em>The Week<\/em>, May 7, 2013. [Online]. Available: <a href=\"https:\/\/theweek.com\/articles\/464674\/8-drivers-who-blindly-followed-gps-into-disaster\">https:\/\/theweek.com\/articles\/464674\/8-drivers-who-blindly-followed-gps-into-disaster<\/a><\/p>\n<p><a name=\"14.8\"><\/a><br \/>\n8. P. Madhavan and D. A. Wiegmann, \u201cSimilarities and differences between human-human and human-automation trust: An integrative review,\u201d <em>Theoretical Issues in Ergonomics Science<\/em>, vol. 8, no. 4, pp. 277-301, 2007.<\/p>\n<p><a name=\"14.9\"><\/a><br \/>\n9. \u201cAppeal to authority,\u201d Logically Fallacious. Accessed March 25, 2020. [Online]. Available: <a href=\"https:\/\/www.logicallyfallacious.com\/logicalfallacies\/Appeal-to-Authority\">https:\/\/www.logicallyfallacious.com\/logicalfallacies\/Appeal-to-Authority<\/a><\/p>\n<p><a name=\"14.a\"><\/a><br \/>\n10. M. Chalabi, \u201cWeapons of math destruction: Cathy O\u2019Neil adds up the damage of algorithms,\u201d <em>Guardian<\/em>, Oct. 27, 2016. [Online]. Available: <a href=\"https:\/\/www.theguardian.com\/books\/2016\/oct\/27\/cathy-oneil-weapons-of-math-destruction-algorithms-big-data\">https:\/\/www.theguardian.com\/books\/2016\/oct\/27\/cathy-oneil-weapons-of-math-destruction-algorithms-big-data<\/a><\/p>\n<p><a name=\"14.b\"><\/a><br \/>\n11. S. M. Casner and E. L. Hutchins, \u201cWhat do we tell the drivers? 
Toward minimum driver training standards for partially automated cars,\u201d <em>Journal of Cognitive Engineering and Decision Making<\/em>, March 8, 2019. [Online]. Available: <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901\">https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901<\/a><\/p>\n<p><a name=\"14.c\"><\/a><br \/>\n12. Data &amp; Society, \u201cAlgorithmic accountability: A primer,\u201d Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality, Prepared for the Congressional Progressive Caucus, April 18, 2018. [Online]. Available: <a href=\"https:\/\/datasociety.net\/wp-content\/uploads\/2018\/04\/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf\">https:\/\/datasociety.net\/wp-content\/uploads\/2018\/04\/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf<\/a><\/p>\n<p><a name=\"14.d\"><\/a><br \/>\n13. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a name=\"14.e\"><\/a><br \/>\n14. B. Aguera y Arcas, \u201cPhysiognomy\u2019s new clothes,\u201d <em>Medium<\/em>, May 6, 2017. [Online]. Available: <a href=\"https:\/\/medium.com\/@blaisea\/physiognomys-new-clothes-f2d4b59fdd6a\">https:\/\/medium.com\/@blaisea\/physiognomys-new-clothes-f2d4b59fdd6a<\/a><\/p>\n<p><a name=\"14.f\"><\/a><br \/>\n15. Synced, \u201c2018 in review: 10 AI failures,\u201d <em>Medium<\/em>, Dec. 10, 2018. [Online]. 
Available: <a href=\"https:\/\/medium.com\/syncedreview\/2018-in-review-10-ai-failures-c18faadf5983\">https:\/\/medium.com\/syncedreview\/2018-in-review-10-ai-failures-c18faadf5983<\/a><\/p>\n<p><a name=\"14.g\"><\/a><br \/>\n16. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a name=\"14.h\"><\/a><br \/>\n17. S. Levin, \u201cNew AI can guess whether you&#8217;re gay or straight from a photograph,\u201d <em>Guardian<\/em>, Sept. 7, 2017. [Online]. Available: <a href=\"https:\/\/www.theguardian.com\/technology\/2017\/sep\/07\/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph\">https:\/\/www.theguardian.com\/technology\/2017\/sep\/07\/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph<\/a><\/p>\n<p><a name=\"14.i\"><\/a><br \/>\n18. Synced, \u201c2018 in review: 10 AI failures,\u201d <em>Medium<\/em>, Dec. 10, 2018. [Online]. Available: <a href=\"https:\/\/medium.com\/syncedreview\/2018-in-review-10-ai-failures-c18faadf5983\">https:\/\/medium.com\/syncedreview\/2018-in-review-10-ai-failures-c18faadf5983<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Lost in Translation: Automation Surprise<\/strong><\/p>\n<p><a name=\"15.1\"><\/a><br \/>\n1. S. M. Casner and E. L. Hutchins, \u201cWhat do we tell the drivers? Toward minimum driver training standards for partially automated cars,\u201d <em>Journal of Cognitive Engineering and Decision Making<\/em>, March 8, 2019. [Online]. Available: <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901\">https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901<\/a><\/p>\n<p><a name=\"15.2\"><\/a><br \/>\n2. \u201cA320, vicinity Tel Aviv Israel, 2012,\u201d <em>SKYbrary<\/em>. Accessed on: March 11, 2020. [Online]. 
Available: <a href=\"https:\/\/www.skybrary.aero\/index.php\/A320,_vicinity_Tel_Aviv_Israel,_2012\">https:\/\/www.skybrary.aero\/index.php\/A320,_vicinity_Tel_Aviv_Israel,_2012<\/a><\/p>\n<p><a name=\"15.3\"><\/a><br \/>\n3. R. Nieva, \u201cFacebook put cork in chatbots that created a secret language,\u201d <em>CNET<\/em>, July 31, 2017. [Online]. Available: <a href=\"https:\/\/www.cnet.com\/news\/what-happens-when-ai-bots-invent-their-own-language\/\">https:\/\/www.cnet.com\/news\/what-happens-when-ai-bots-invent-their-own-language\/<\/a><\/p>\n<p><a name=\"15.4\"><\/a><br \/>\n4. N. B. Sarter, D. D. Woods, and C. E. Billings, \u201cAutomation surprises,\u201d in G. Salvendy (Ed.), <em>Handbook of Human Factors &amp; Ergonomics<\/em> (2nd ed., pp. 1926-1943). New York, NY, USA: John Wiley, 1997.<\/p>\n<p><a name=\"15.5\"><\/a><br \/>\n5. S. M. Casner and E. L. Hutchins, \u201cWhat do we tell the drivers? Toward minimum driver training standards for partially automated cars,\u201d <em>Journal of Cognitive Engineering and Decision Making<\/em>, March 8, 2019. [Online]. Available: <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901\">https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901<\/a><\/p>\n<p><a name=\"15.6\"><\/a><br \/>\n6. G. Klein et al., \u201cTen challenges for making automation a \u2018team player\u2019 in joint human-agent activity,\u201d <em>IEEE Intelligent Systems<\/em>, vol. 19, no. 6, pp. 91-95, Nov.\/Dec. 2004. [Online]. Available: <a href=\"http:\/\/jeffreymbradshaw.net\/publications\/17._Team_Players.pdf_1.pdf\">http:\/\/jeffreymbradshaw.net\/publications\/17._Team_Players.pdf_1.pdf<\/a><\/p>\n<p><a name=\"15.7\"><\/a><br \/>\n7. J. B. Lyons, \u201cBeing transparent about transparency: A model for human-robot interaction,\u201d in <em>2013 AAAI Spring Symposium Series,<\/em> 2013. [Online]. 
Available: <a href=\"https:\/\/www.semanticscholar.org\/paper\/Being-Transparent-about-Transparency%3A-A-Model-for-Lyons\/840080df8a02de6aab098e7eabef84831ac95428\">https:\/\/www.semanticscholar.org\/paper\/Being-Transparent-about-Transparency%3A-A-Model-for-Lyons\/840080df8a02de6aab098e7eabef84831ac95428<\/a><\/p>\n<p><a name=\"15.8\"><\/a><br \/>\n8. D. Woods, \u201cGeneric support requirements for cognitive work: laws that govern cognitive work in action,\u201d <em>Proceedings of the Human Factors and Ergonomics Society Annual Meeting,<\/em> vol. 49, pp. 317-321, Sept. 1, 2005. [Online]. Available: <a href=\"https:\/\/journals.sagepub.com\/doi\/10.1177\/154193120504900322\">https:\/\/journals.sagepub.com\/doi\/10.1177\/154193120504900322<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>The AI Resistance\u00a0 <\/strong><\/p>\n<p><a name=\"16.1\"><\/a><br \/>\n1. \u201cLuddite,\u201d <em>Merriam-Webster<\/em>. Accessed April 11, 2020. [Online]. Available: <a href=\"https:\/\/www.merriam-webster.com\/dictionary\/Luddite\">https:\/\/www.merriam-webster.com\/dictionary\/Luddite<\/a><\/p>\n<p><a name=\"16.2\"><\/a><br \/>\n2. D. Wray, \u201cThe companies cleaning the deepest, darkest parts of social media,\u201d <em>Vice<\/em>, June 26, 2018. [Online]. Available: <a href=\"https:\/\/www.vice.com\/en_us\/article\/ywe7gb\/the-companies-cleaning-the-deepest-darkest-parts-of-social-media\">https:\/\/www.vice.com\/en_us\/article\/ywe7gb\/the-companies-cleaning-the-deepest-darkest-parts-of-social-media<\/a><\/p>\n<p><a name=\"16.3\"><\/a><br \/>\n3. \u201cWhy a #Google walkout organizer left Google,\u201d <em>Medium<\/em>, June 7, 2019. [Online]. Available: <a href=\"https:\/\/medium.com\/@GoogleWalkout\/why-a-googlewalkout-organizer-left-google-26d1e3fbe317\">https:\/\/medium.com\/@GoogleWalkout\/why-a-googlewalkout-organizer-left-google-26d1e3fbe317<\/a><\/p>\n<p><a name=\"16.4\"><\/a><br \/>\n4. S. 
Romero, \u201cWielding rocks and knives, Arizonans attack self-driving cars,\u201d <em>New York Times<\/em>, Dec. 31, 2018. [Online]. Available: <a href=\"https:\/\/www.nytimes.com\/2018\/12\/31\/us\/waymo-self-driving-cars-arizona-attacks.html\">https:\/\/www.nytimes.com\/2018\/12\/31\/us\/waymo-self-driving-cars-arizona-attacks.html<\/a><\/p>\n<p><a name=\"16.5\"><\/a><br \/>\n5. D. Simberkoff, \u201cHow Facebook&#8217;s Cambridge Analytica scandal impacted the intersection of privacy and regulation,\u201d <em>CMS Wire<\/em>, Aug. 30, 2018. [Online]. Available: <a href=\"https:\/\/www.cmswire.com\/information-management\/how-facebooks-cambridge-analytica-scandal-impacted-the-intersection-of-privacy-and-regulation\/\">https:\/\/www.cmswire.com\/information-management\/how-facebooks-cambridge-analytica-scandal-impacted-the-intersection-of-privacy-and-regulation\/<\/a><\/p>\n<p><a name=\"16.6\"><\/a><br \/>\n6. \u201cTechnology adoption life cycle,\u201d Wikipedia. Accessed March 17, 2020. [Online]. Available: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Technology_adoption_life_cycle\">https:\/\/en.wikipedia.org\/wiki\/Technology_adoption_life_cycle<\/a><\/p>\n<p><a name=\"16.7\"><\/a><br \/>\n7. M. Anderson, \u201cUseful or creepy? Machines suggest Gmail replies,\u201d <em>AP News<\/em>, Aug. 30, 2018. [Online]. Available: <a href=\"https:\/\/apnews.com\/bcc384298fe944e89367e42e20d43f05\">https:\/\/apnews.com\/bcc384298fe944e89367e42e20d43f05<\/a><\/p>\n<p><a name=\"16.8\"><\/a><br \/>\n8. \u201cHouse Intelligence Committee hearing on \u2018Deepfake\u2019 videos,\u201d <em>C-SPAN<\/em>, June 13, 2019. [Online]. Available: <a href=\"https:\/\/www.c-span.org\/video\/?461679-1\/house-intelligence-committee-hearing-deepfake-videos\">https:\/\/www.c-span.org\/video\/?461679-1\/house-intelligence-committee-hearing-deepfake-videos<\/a><\/p>\n<p><a name=\"16.9\"><\/a><br \/>\n9. C. F. Kerry, \u201cProtecting privacy in an AI-driven world,\u201d Brookings, Feb. 10, 2020. 
[Online]. Available: <a href=\"https:\/\/www.brookings.edu\/research\/protecting-privacy-in-an-ai-driven-world\/\">https:\/\/www.brookings.edu\/research\/protecting-privacy-in-an-ai-driven-world\/<\/a><\/p>\n<p><a name=\"16.a\"><\/a><br \/>\n10. C. Forrest, \u201cFear of losing job to AI is the no. 1 cause of stress at work,\u201d <em>TechRepublic<\/em>, June 6, 2017. [Online]. Available: <a href=\"https:\/\/www.techrepublic.com\/article\/report-fear-of-losing-job-to-ai-is-the-no-1-cause-of-stress-at-work\/\">https:\/\/www.techrepublic.com\/article\/report-fear-of-losing-job-to-ai-is-the-no-1-cause-of-stress-at-work\/<\/a><\/p>\n<p><a name=\"16.b\"><\/a><br \/>\n11. S. Browne, <em>Dark Matters: On the Surveillance of Blackness<\/em>, Durham, NC, USA: Duke University Press Books, 2015. [Online]. Available: <a href=\"https:\/\/www.dukeupress.edu\/dark-matters\">https:\/\/www.dukeupress.edu\/dark-matters<\/a><\/p>\n<p><a name=\"16.c\"><\/a><br \/>\n12. A. M. Bedoya, \u201cThe color of surveillance: What an infamous abuse of power teaches us about the modern spy era,\u201d <em>Slate<\/em>, Jan. 18, 2016. [Online]. Available: <a href=\"https:\/\/slate.com\/technology\/2016\/01\/what-the-fbis-surveillance-of-martin-luther-king-says-about-modern-spying.html\">https:\/\/slate.com\/technology\/2016\/01\/what-the-fbis-surveillance-of-martin-luther-king-says-about-modern-spying.html<\/a><\/p>\n<p><a name=\"16.d\"><\/a><br \/>\n13. M. Cyril, \u201cWatching the Black body,\u201d Electronic Frontier Foundation, Feb. 28, 2019. [Online]. Available: <a href=\"https:\/\/www.eff.org\/deeplinks\/2019\/02\/watching-black-body\">https:\/\/www.eff.org\/deeplinks\/2019\/02\/watching-black-body<\/a><\/p>\n<p><a name=\"16.e\"><\/a><br \/>\n14. P. McCausland, \u201cSelf-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk,\u201d <em>NBC News<\/em>, Nov. 9, 2019. [Online]. 
Available: <a href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281\">https:\/\/www.nbcnews.com\/tech\/tech-news\/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Good (Grief!) Governance<\/strong><\/p>\n<p><a name=\"17.1\"><\/a><br \/>\n1. A. M. Barry-Jester, B. Casselman, and D. Goldstein, \u201cShould prison sentences be based on crimes that haven\u2019t been committed yet?\u201d <em>FiveThirtyEight<\/em>, Aug. 4, 2015. [Online]. Available: <a href=\"https:\/\/fivethirtyeight.com\/features\/prison-reform-risk-assessment\/\">https:\/\/fivethirtyeight.com\/features\/prison-reform-risk-assessment\/<\/a><\/p>\n<p><a name=\"17.2\"><\/a><br \/>\n2. E. Ongweso, \u201cGoogle is investigating why it trained facial recognition on \u2018dark skinned\u2019 homeless people,\u201d <em>Vice<\/em>, Oct. 4, 2019. [Online]. Available: <a href=\"https:\/\/www.vice.com\/en_us\/article\/43k7yd\/google-is-investigating-why-it-trained-facial-recognition-on-dark-skinned-homeless-people\">https:\/\/www.vice.com\/en_us\/article\/43k7yd\/google-is-investigating-why-it-trained-facial-recognition-on-dark-skinned-homeless-people<\/a><\/p>\n<p><a name=\"17.3\"><\/a><br \/>\n3. J. Stanley, \u201cSecret Service announces test of face recognition system around White House,\u201d <em>ACLU blog<\/em>, Dec. 4, 2018. [Online]. Available: <a href=\"https:\/\/www.aclu.org\/blog\/privacy-technology\/surveillance-technologies\/secret-service-announces-test-face-recognition\">https:\/\/www.aclu.org\/blog\/privacy-technology\/surveillance-technologies\/secret-service-announces-test-face-recognition<\/a><\/p>\n<p><a name=\"17.4\"><\/a><br \/>\n4. R. Courtland, \u201cBias detectives: The researchers striving to make algorithms fair,\u201d <em>Nature<\/em>, June 20, 2018. [Online]. 
Available: <a href=\"https:\/\/www.nature.com\/articles\/d41586-018-05469-3\">https:\/\/www.nature.com\/articles\/d41586-018-05469-3<\/a><\/p>\n<p><a name=\"17.5\"><\/a><br \/>\n5. D. Robinson and L. Koepke, \u201cStuck in a pattern: Early evidence on \u2018predictive policing\u2019 and civil rights,\u201d Upturn, Aug. 2016. [Online]. Available: <a href=\"https:\/\/www.upturn.org\/reports\/2016\/stuck-in-a-pattern\/\">https:\/\/www.upturn.org\/reports\/2016\/stuck-in-a-pattern\/<\/a><\/p>\n<p><a name=\"17.6\"><\/a><br \/>\n6. \u201cAn ethics guidelines global inventory,\u201d Algorithm Watch. Accessed on: Jan. 17, 2020. [Online]. Available: <a href=\"https:\/\/algorithmwatch.org\/en\/project\/ai-ethics-guidelines-global-inventory\/\">https:\/\/algorithmwatch.org\/en\/project\/ai-ethics-guidelines-global-inventory\/<\/a><\/p>\n<p><a name=\"17.7\"><\/a><br \/>\n7. \u201cAn ethics guidelines global inventory,\u201d Algorithm Watch. Accessed on: Jan. 17, 2020. [Online]. Available: <a href=\"https:\/\/algorithmwatch.org\/en\/project\/ai-ethics-guidelines-global-inventory\/\">https:\/\/algorithmwatch.org\/en\/project\/ai-ethics-guidelines-global-inventory\/<\/a><\/p>\n<p><a name=\"17.8\"><\/a><br \/>\n8. T. Hagendorff, \u201cThe ethics of AI ethics: An evaluation of guidelines,\u201d <em>arXiv.org<\/em>, Oct. 11, 2019. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1903.03425\">https:\/\/arxiv.org\/abs\/1903.03425<\/a><\/p>\n<p><a name=\"17.9\"><\/a><br \/>\n9. R. Vought, \u201cGuidance for regulation of artificial intelligence applications,\u201d Draft memorandum, <em>WhiteHouse.gov<\/em>. Accessed on: Jan. 21, 2020. [Online]. Available: <a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2020\/01\/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf\">https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2020\/01\/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf<\/a><\/p>\n<p><a name=\"17.a\"><\/a><br \/>\n10. 
\u201cWrestling with AI governance around the world,\u201d <em>Forbes<\/em>, March 27, 2019. [Online]. Available: <a href=\"https:\/\/www.forbes.com\/sites\/insights-intelai\/2019\/03\/27\/wrestling-with-ai-governance-around-the-world\/#7d3f84ed1766\">https:\/\/www.forbes.com\/sites\/insights-intelai\/2019\/03\/27\/wrestling-with-ai-governance-around-the-world\/#7d3f84ed1766<\/a><\/p>\n<p><a name=\"17.b\"><\/a><br \/>\n11. G. Vyse, \u201cThree American cities have now banned the use of facial recognition technology in local government amid concerns it&#8217;s inaccurate and biased,\u201d <em>Governing<\/em>, July 24, 2019. [Online]. Available: <a href=\"https:\/\/www.governing.com\/topics\/public-justice-safety\/gov-cities-ban-government-use-facial-recognition.html\">https:\/\/www.governing.com\/topics\/public-justice-safety\/gov-cities-ban-government-use-facial-recognition.html<\/a><\/p>\n<p><a name=\"17.c\"><\/a><br \/>\n12. P. Martineau, \u201cCities examine proper\u2014and improper\u2014uses of facial recognition,\u201d <em>Wired<\/em>, Nov. 10, 2019. [Online]. Available: <a href=\"https:\/\/www.wired.com\/story\/cities-examine-proper-improper-facial-recognition\/\">https:\/\/www.wired.com\/story\/cities-examine-proper-improper-facial-recognition\/<\/a><\/p>\n<p><a name=\"17.d\"><\/a><br \/>\n13. \u201cBan facial recognition.\u201d Accessed March 17, 2020. [Online]. Available: <a href=\"https:\/\/www.banfacialrecognition.com\/map\/\">https:\/\/www.banfacialrecognition.com\/map\/<\/a><\/p>\n<p><a name=\"17.e\"><\/a><br \/>\n14. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. 
Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a name=\"17.f\"><\/a><br \/>\n15. R. Courtland, \u201cBias detectives: The researchers striving to make algorithms fair,\u201d <em>Nature<\/em>, June 20, 2018. [Online]. Available: <a href=\"https:\/\/www.nature.com\/articles\/d41586-018-05469-3\">https:\/\/www.nature.com\/articles\/d41586-018-05469-3<\/a><\/p>\n<p><a name=\"17.g\"><\/a><br \/>\n16. D. Robinson and L. Koepke, \u201cStuck in a pattern: Early evidence on \u2018predictive policing\u2019 and civil rights,\u201d Upturn, Aug. 2016. [Online]. Available: <a href=\"https:\/\/www.upturn.org\/reports\/2016\/stuck-in-a-pattern\/\">https:\/\/www.upturn.org\/reports\/2016\/stuck-in-a-pattern\/<\/a><\/p>\n<p><a name=\"17.h\"><\/a><br \/>\n17. D. Robinson and L. Koepke, \u201cStuck in a pattern: Early evidence on \u2018predictive policing\u2019 and civil rights,\u201d Upturn, Aug. 2016. [Online]. Available: <a href=\"https:\/\/www.upturn.org\/reports\/2016\/stuck-in-a-pattern\/\">https:\/\/www.upturn.org\/reports\/2016\/stuck-in-a-pattern\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Just Add (Technical) People<\/strong><\/p>\n<p><a name=\"18.1\"><\/a><br \/>\n1. J. Spitzer, \u201cIBM&#8217;s Watson recommended &#8216;unsafe and incorrect&#8217; cancer treatments, STAT report finds,\u201d <em>Becker\u2019s Health IT<\/em>, July 25, 2018. [Online]. Available: <a href=\"https:\/\/www.beckershospitalreview.com\/artificial-intelligence\/ibm-s-watson-recommended-unsafe-and-incorrect-cancer-treatments-stat-report-finds.html\">https:\/\/www.beckershospitalreview.com\/artificial-intelligence\/ibm-s-watson-recommended-unsafe-and-incorrect-cancer-treatments-stat-report-finds.html<\/a><\/p>\n<p><a name=\"18.2\"><\/a><br \/>\n2. A. 
Liptak, \u201cThe US Navy will replace its touchscreen controls with mechanical ones on its destroyers,\u201d <em>The Verge<\/em>, Aug. 11, 2019. [Online]. Available: <a href=\"https:\/\/www.theverge.com\/2019\/8\/11\/20800111\/us-navy-uss-john-s-mccain-crash-ntsb-report-touchscreen-mechanical-controls\">https:\/\/www.theverge.com\/2019\/8\/11\/20800111\/us-navy-uss-john-s-mccain-crash-ntsb-report-touchscreen-mechanical-controls<\/a><\/p>\n<p><a name=\"18.3\"><\/a><br \/>\n3. T. Simonite, \u201cWhen it comes to gorillas, Google Photos remains blind,\u201d <em>Wired<\/em>, Jan. 11, 2018. [Online]. Available: <a href=\"https:\/\/www.wired.com\/story\/when-it-comes-to-gorillas-google-photos-remains-blind\/\">https:\/\/www.wired.com\/story\/when-it-comes-to-gorillas-google-photos-remains-blind\/<\/a><\/p>\n<p><a name=\"18.4\"><\/a><br \/>\n4. B. Marr, \u201cThe AI skills crisis and how to close the gap,\u201d <em>Forbes<\/em>, June 25, 2018. [Online]. Available: <a href=\"https:\/\/www.forbes.com\/sites\/bernardmarr\/2018\/06\/25\/the-ai-skills-crisis-and-how-to-close-the-gap\/#6525b57b31f3\">https:\/\/www.forbes.com\/sites\/bernardmarr\/2018\/06\/25\/the-ai-skills-crisis-and-how-to-close-the-gap\/#6525b57b31f3<\/a><\/p>\n<p><a name=\"18.5\"><\/a><br \/>\n5. \u201cNICE cybersecurity workforce framework resource center,\u201d National Institute of Standards and Technology. Accessed March 17, 2020. [Online]. Available: <a href=\"https:\/\/www.nist.gov\/itl\/applied-cybersecurity\/nice\/nice-cybersecurity-workforce-framework-resource-center\">https:\/\/www.nist.gov\/itl\/applied-cybersecurity\/nice\/nice-cybersecurity-workforce-framework-resource-center<\/a><\/p>\n<p><a name=\"18.6\"><\/a><br \/>\n6. S. Anand and T. B\u00e4rnighausen, \u201cHealth workers at the core of the health system: Framework and research issues,\u201d Global Health Workforce Alliance, 2011. [Online]. 
Available: <a href=\"https:\/\/www.who.int\/workforcealliance\/knowledge\/resources\/frameworkandresearch_dec2011\/en\/\">https:\/\/www.who.int\/workforcealliance\/knowledge\/resources\/frameworkandresearch_dec2011\/en\/<\/a><\/p>\n<p><a name=\"18.7\"><\/a><br \/>\n7. Lippincott Solutions, \u201cInterdisciplinary care plans: Teamwork makes the dream work,\u201d <em>Calling the Shots blog<\/em>, Sept. 6, 2018. [Online]. Available:\u00a0 <a href=\"http:\/\/lippincottsolutions.lww.com\/blog.entry.html\/2018\/09\/06\/interdisciplinaryca-z601.html\">http:\/\/lippincottsolutions.lww.com\/blog.entry.html\/2018\/09\/06\/interdisciplinaryca-z601.html<\/a><\/p>\n<p><a name=\"18.8\"><\/a><br \/>\n8. M. Mahdizadeh, A. Heydari, and H. K. Moonaghi, \u201cClinical interdisciplinary collaboration models and frameworks from similarities to differences: A systematic review,\u201d <em>Global Journal of Health Science<\/em>, vol. 7, no. 6, pp. 170-180, Nov. 2015. [Online]. Available: <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC4803863\/\">https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC4803863\/<\/a><\/p>\n<p><a name=\"18.9\"><\/a><br \/>\n9. C. Hagel, \u201cReagan national defense forum keynote,\u201d Secretary of Defense Speech, Ronald Reagan Presidential Library, Simi Valley, CA, Nov. 15, 2014. [Online]. Available: <a href=\"https:\/\/www.defense.gov\/Newsroom\/Speeches\/Speech\/Article\/606635\/\">https:\/\/www.defense.gov\/Newsroom\/Speeches\/Speech\/Article\/606635\/<\/a><\/p>\n<p><a name=\"18.a\"><\/a><br \/>\n10. \u201cReports,\u201d National Security Commission on Artificial Intelligence. Accessed March 18, 2020. [Online]. Available: <a href=\"https:\/\/www.nscai.gov\/reports\">https:\/\/www.nscai.gov\/reports<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Square Data, Round Problem<\/strong><\/p>\n<p><a class=\"anchor\" name=\"19.1\"><\/a><br \/>\n1. \u201cBad data costs United Airlines $1B annually,\u201d Travel Data Daily. Accessed March 16, 2020. 
[Online]. Available: <a href=\"https:\/\/www.traveldatadaily.com\/bad-data-costs-united-airlines-1b-annually\/\">https:\/\/www.traveldatadaily.com\/bad-data-costs-united-airlines-1b-annually\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"19.2\"><\/a><br \/>\n2. B. Vergakis, \u201cThe Navy, Air Force and Army collect different data on aircraft crashes. That&#8217;s a big problem,\u201d <em>Task &amp; Purpose<\/em>, Aug. 16, 2018. [Online]. Available: <a href=\"https:\/\/taskandpurpose.com\/aviation-mishaps-data-collection\">https:\/\/taskandpurpose.com\/aviation-mishaps-data-collection<\/a><\/p>\n<p><a class=\"anchor\" name=\"19.3\"><\/a><br \/>\n3. B. Marr, \u201cHow much data do we create every day? The mind-blowing stats everyone should read,\u201d <em>Forbes<\/em>, May 21, 2018. [Online]. Available: <a href=\"https:\/\/www.forbes.com\/sites\/bernardmarr\/2018\/05\/21\/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read\/\">https:\/\/www.forbes.com\/sites\/bernardmarr\/2018\/05\/21\/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"19.4\"><\/a><br \/>\n4. \u201cNo AI until the data is fixed,\u201d <em>Wired<\/em>, Feb. 22, 2019. [Online]. Available: <a href=\"https:\/\/www.wired.co.uk\/article\/no-ai-until-the-data-is-fixed\">https:\/\/www.wired.co.uk\/article\/no-ai-until-the-data-is-fixed<\/a><\/p>\n<p><a class=\"anchor\" name=\"19.5\"><\/a><br \/>\n5. D. Robinson and L. Koepke, \u201cStuck in a pattern: Early evidence on \u2018predictive policing\u2019 and civil rights,\u201d Upturn, Aug. 2016. [Online]. Available: <a href=\"https:\/\/www.upturn.org\/reports\/2016\/stuck-in-a-pattern\/\">https:\/\/www.upturn.org\/reports\/2016\/stuck-in-a-pattern\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"19.6\"><\/a><br \/>\n6. 
\u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p><strong>My 8-Track Still Works So What\u2019s the Issue? <\/strong><\/p>\n<p><a class=\"anchor\" name=\"20.1\"><\/a><br \/>\n1. U.S. Government Accountability Office, \u201cInformation technology: Federal agencies need to address aging legacy systems,\u201d GAO-16-696T, May 25, 2016. [Online]. Available: <a href=\"https:\/\/www.gao.gov\/products\/GAO-16-696T\">https:\/\/www.gao.gov\/products\/GAO-16-696T<\/a><\/p>\n<p><a class=\"anchor\" name=\"20.2\"><\/a><br \/>\n2. D. Cassel, \u201cCOBOL is everywhere. Who will maintain it?\u201d <em>The New Stack<\/em>, May 6, 2017. [Online]. Available: <a href=\"https:\/\/thenewstack.io\/cobol-everywhere-will-maintain\/\">https:\/\/thenewstack.io\/cobol-everywhere-will-maintain\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"20.3\"><\/a><br \/>\n3. J. Uchill, \u201cHow did the government\u2019s technology get so bad?\u201d <em>The Hill<\/em>, Dec. 13, 2016. [Online]. Available: <a href=\"https:\/\/thehill.com\/policy\/technology\/310271-how-did-the-governments-technology-get-so-bad\">https:\/\/thehill.com\/policy\/technology\/310271-how-did-the-governments-technology-get-so-bad<\/a><\/p>\n<p><a class=\"anchor\" name=\"20.4\"><\/a><br \/>\n4. B. Balter, \u201c19 reasons why technologists don\u2019t want to work at your government agency,\u201d April 21, 2015. [Online]. 
Available: <a href=\"https:\/\/ben.balter.com\/2015\/04\/21\/why-technologists-dont-want-to-work-at-your-agency\/\">https:\/\/ben.balter.com\/2015\/04\/21\/why-technologists-dont-want-to-work-at-your-agency\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"20.5\"><\/a><br \/>\n5. U.S. Government Accountability Office, \u201cInformation technology: Federal agencies need to address aging legacy systems,\u201d GAO-16-696T, May 25, 2016. [Online]. Available: <a href=\"https:\/\/www.gao.gov\/products\/GAO-16-696T\">https:\/\/www.gao.gov\/products\/GAO-16-696T<\/a><\/p>\n<p><a class=\"anchor\" name=\"20.6\"><\/a><br \/>\n6. D. Cassel, \u201cCOBOL is everywhere. Who will maintain it?\u201d <em>The New Stack<\/em>, May 6, 2017. [Online]. Available: <a href=\"https:\/\/thenewstack.io\/cobol-everywhere-will-maintain\/\">https:\/\/thenewstack.io\/cobol-everywhere-will-maintain\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<h3><strong><em>Lessons Learned<\/em><\/strong><\/h3>\n<p><strong>Hold AI to a Higher Standard<\/strong><br \/>\n<a name=\"L1.1\"><\/a><br \/>\n1. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a name=\"L1.2\"><\/a><br \/>\n2. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a name=\"L1.3\"><\/a><br \/>\n3. S. Gibbs, \u201cTesla Model S cleared by auto safety regulator after fatal Autopilot crash,\u201d <em>Guardian<\/em>, Jan. 20, 2017. [Online]. 
Available: <a href=\"https:\/\/www.theguardian.com\/technology\/2017\/jan\/20\/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash\">https:\/\/www.theguardian.com\/technology\/2017\/jan\/20\/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash<\/a><\/p>\n<p><a name=\"L1.4\"><\/a><br \/>\n4. D. Tomchek and S. Krawlzik, \u201cLooking beyond the technical to fill America&#8217;s cyber workforce gap,\u201d <em>Nextgov<\/em>, Sept. 27, 2019. [Online]. Available: <a href=\"https:\/\/www.nextgov.com\/ideas\/2019\/09\/looking-beyond-technical-fill-americas-cyber-workforce-gap\/160222\/\">https:\/\/www.nextgov.com\/ideas\/2019\/09\/looking-beyond-technical-fill-americas-cyber-workforce-gap\/160222\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>It\u2019s OK to Say No to Automation<\/strong><br \/>\n<a name=\"L2.1\"><\/a><br \/>\n1. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, \u201cSeven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,\u201d <em>IEEE Intelligent Systems<\/em>, Nov.\/Dec. 2014. [Online]. Available: <a href=\"http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf\">http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf<\/a><\/p>\n<p><a name=\"L2.2\"><\/a><br \/>\n2. \u201cEthics &amp; algorithms toolkit.\u201d Accessed March 13, 2020. [Online]. Available: <a href=\"http:\/\/ethicstoolkit.ai\/\">http:\/\/ethicstoolkit.ai\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>AI Challenges Are Multidisciplinary, so They Require a Multidisciplinary Team<\/strong><\/p>\n<p><a name=\"L3.1\"><\/a><br \/>\n1. S. Ferro, \u201cHere\u2019s why facial recognition tech can\u2019t figure out black people,\u201d <em>HuffPost<\/em>, March 2, 2016. [Online]. 
Available: <a href=\"https:\/\/www.huffpost.com\/entry\/heres-why-facial-recognition-tech-cant-figure-out-black-people_n_56d5c2b1e4b0bf0dab3371eb\">https:\/\/www.huffpost.com\/entry\/heres-why-facial-recognition-tech-cant-figure-out-black-people_n_56d5c2b1e4b0bf0dab3371eb<\/a><\/p>\n<p><a name=\"L3.2\"><\/a><br \/>\n2. S. J. Freedberg, \u201c\u2019Guess what, there\u2019s a cost for that\u2019: Getting cloud &amp; AI right,\u201d Breaking Defense, Nov. 26, 2019. [Online]. Available: <a href=\"https:\/\/breakingdefense.com\/2019\/11\/guess-what-theres-a-cost-for-that-getting-cloud-ai-right\/\">https:\/\/breakingdefense.com\/2019\/11\/guess-what-theres-a-cost-for-that-getting-cloud-ai-right\/<\/a><\/p>\n<p><a name=\"L3.3\"><\/a><br \/>\n3. A. Campolo et al., <em>AI Now Report 2017<\/em>. New York, NY, USA: AI Now Institute, 2017. [Online]. Available:\u00a0 <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Incorporate Privacy, Civil Liberties, and Security from the Beginning <\/strong><\/p>\n<p><a name=\"L4.1\"><\/a><br \/>\n1. R. V. Yampolskiy and M. S. Spellchecker, \u201cArtificial intelligence safety and cybersecurity: A timeline of AI failures,\u201d <em>arXiv.org<\/em>. Accessed March 25, 2020. [Online]. Available: <a href=\"https:\/\/arxiv.org\/ftp\/arxiv\/papers\/1610\/1610.07997.pdf\">https:\/\/arxiv.org\/ftp\/arxiv\/papers\/1610\/1610.07997.pdf<\/a><\/p>\n<p><a name=\"L4.2\"><\/a><br \/>\n2. J. Rotner, \u201cThe person at the other end of the data,\u201d <em>Knowledge-Driven Enterprise blog<\/em>, The MITRE Corporation, Oct. 1, 2019. [Online]. Available: <a href=\"https:\/\/kde.mitre.org\/blog\/2019\/10\/01\/the-person-at-the-other-end-of-the-data\/\">https:\/\/kde.mitre.org\/blog\/2019\/10\/01\/the-person-at-the-other-end-of-the-data\/<\/a><\/p>\n<p><a name=\"L4.3\"><\/a><br \/>\n3. J. Whittlestone, A. Alexandrova, R. 
Nyrup, and S. Cave, \u201cThe role and limits of principles in AI ethics: Towards a focus on tensions,\u201d presented at AIES \u201919, Jan. 27\u201328, 2019, Honolulu, HI, USA. [Online]. Available: <a href=\"https:\/\/www.researchgate.net\/publication\/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions\/link\/5d269de0a6fdcc2462d41592\/download\">https:\/\/www.researchgate.net\/publication\/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions\/link\/5d269de0a6fdcc2462d41592\/download<\/a><\/p>\n<p><a name=\"L4.4\"><\/a><br \/>\n4. I. Goodfellow, P. McDaniel, and N. Papernot, \u201cMaking machine learning robust against adversarial inputs,\u201d <em>Communications of the ACM<\/em>, vol. 61, no. 7, pp. 56-66, July 2018. [Online]. Available: <a href=\"https:\/\/cacm.acm.org\/magazines\/2018\/7\/229030-making-machine-learning-robust-against-adversarial-inputs\/fulltext\">https:\/\/cacm.acm.org\/magazines\/2018\/7\/229030-making-machine-learning-robust-against-adversarial-inputs\/fulltext<\/a><\/p>\n<p><a name=\"L4.5\"><\/a><br \/>\n5. R. V. Yampolskiy and M. S. Spellchecker, \u201cArtificial intelligence safety and cybersecurity: A timeline of AI failures,\u201d <em>arXiv.org<\/em>. Accessed March 25, 2020. [Online]. Available: <a href=\"https:\/\/arxiv.org\/ftp\/arxiv\/papers\/1610\/1610.07997.pdf\">https:\/\/arxiv.org\/ftp\/arxiv\/papers\/1610\/1610.07997.pdf<\/a><\/p>\n<p><a name=\"L4.6\"><\/a><br \/>\n6. \u201cGeneral data protection regulation,\u201d European Union. Accessed March 25, 2020. [Online]. Available: <a href=\"https:\/\/eugdpr.com\/\">https:\/\/eugdpr.com\/<\/a><\/p>\n<p><a name=\"L4.7\"><\/a><br \/>\n7. D. Miralis and P. Gibson, \u201cAustralia: Data protection 2019,\u201d <em>ICLG.com<\/em>, March 7, 2019. [Online]. 
Available: <a href=\"https:\/\/iclg.com\/practice-areas\/data-protection-laws-and-regulations\/australia\">https:\/\/iclg.com\/practice-areas\/data-protection-laws-and-regulations\/australia<\/a><\/p>\n<p><a name=\"L4.8\"><\/a><br \/>\n8. \u201cData protection laws of the world: New Zealand,\u201d DLA Piper. Accessed March 16, 2020. [Online]. Available: <a href=\"https:\/\/www.dlapiperdataprotection.com\/index.html?t=law&amp;c=NZ\">https:\/\/www.dlapiperdataprotection.com\/index.html?t=law&amp;c=NZ<\/a><\/p>\n<p><a name=\"L4.9\"><\/a><br \/>\n9. G. Vyse, \u201cThree American cities have now banned the use of facial recognition technology in local government amid concerns it&#8217;s inaccurate and biased,\u201d <em>Governing.com<\/em>, July 24, 2019. [Online]. Available: <a href=\"https:\/\/www.governing.com\/topics\/public-justice-safety\/gov-cities-ban-government-use-facial-recognition.html\">https:\/\/www.governing.com\/topics\/public-justice-safety\/gov-cities-ban-government-use-facial-recognition.html<\/a><\/p>\n<p><a name=\"L4.a\"><\/a><br \/>\n10. L. Hautala, \u201cCalifornia\u2019s new data privacy law the toughest in the US,\u201d <em>CNET.com<\/em>, June 29, 2018. [Online]. Available: <a href=\"https:\/\/www.cnet.com\/news\/californias-new-data-privacy-law-the-toughest-in-the-us\/\">https:\/\/www.cnet.com\/news\/californias-new-data-privacy-law-the-toughest-in-the-us\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p><strong>Involve the Communities Affected by the AI<\/strong><\/p>\n<p><a name=\"L5.1\"><\/a><br \/>\n1. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a name=\"L5.2\"><\/a><br \/>\n2. \u201cDiverse Voices: A How-To Guide for Facilitating Inclusiveness in Tech Policy.\u201d Accessed April 8, 2020. [Online]. 
Available: <a href=\"https:\/\/techpolicylab.uw.edu\/project\/diverse-voices\/\">https:\/\/techpolicylab.uw.edu\/project\/diverse-voices\/<\/a><\/p>\n<p><a name=\"L5.3\"><\/a><br \/>\n3. A. Campolo et al., <em>AI Now Report 2017<\/em>. New York, NY, USA: AI Now Institute, 2017. [Online]. Available:\u00a0 <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf<\/a><\/p>\n<p><a name=\"L5.4\"><\/a><br \/>\n4. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a name=\"L5.5\"><\/a><br \/>\n5. A. Campolo et al., <em>AI Now Report 2017<\/em>. New York, NY, USA: AI Now Institute, 2017. [Online]. Available:\u00a0 <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf<\/a><\/p>\n<p><a name=\"L5.6\"><\/a><br \/>\n6. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Plan to Fail<\/strong><\/p>\n<p><a name=\"L6.1\"><\/a><br \/>\n1. \u201cBenjamin Franklin quotable quote,\u201d Goodreads. Accessed March 16, 2020. [Online]. Available: <a href=\"https:\/\/www.goodreads.com\/quotes\/460142-if-you-fail-to-plan-you-are-planning-to-fail\">https:\/\/www.goodreads.com\/quotes\/460142-if-you-fail-to-plan-you-are-planning-to-fail<\/a><\/p>\n<p><a name=\"L6.2\"><\/a><br \/>\n2. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, \u201cSeven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,\u201d <em>IEEE Intelligent Systems<\/em>, Nov.\/Dec. 2014. [Online]. 
Available: <a href=\"http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf\">http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf<\/a><\/p>\n<p><a name=\"L6.3\"><\/a><br \/>\n3. M. Baker and D. Gates, \u201cLack of redundancies on Boeing 737 MAX system baffles some involved in developing the jet,\u201d <em>Seattle Times<\/em>, March 27, 2019. [Online]. Available: <a href=\"https:\/\/www.seattletimes.com\/business\/boeing-aerospace\/a-lack-of-redundancies-on-737-max-system-has-baffled-even-those-who-worked-on-the-jet\/\">https:\/\/www.seattletimes.com\/business\/boeing-aerospace\/a-lack-of-redundancies-on-737-max-system-has-baffled-even-those-who-worked-on-the-jet\/<\/a><\/p>\n<p><a name=\"L6.4\"><\/a><br \/>\n4. E. Lacey, \u201cThe toxic potential of YouTube\u2019s feedback loop,\u201d <em>Wired<\/em>, July 13, 2019. [Online]. Available: <a href=\"https:\/\/www.wired.com\/story\/the-toxic-potential-of-youtubes-feedback-loop\/\">https:\/\/www.wired.com\/story\/the-toxic-potential-of-youtubes-feedback-loop\/<\/a><\/p>\n<p><a name=\"L6.5\"><\/a><br \/>\n5. D. Amodei et al., \u201cConcrete problems in AI safety,\u201d <em>arXiv.org<\/em>, July 25, 2016. [Online]. Available: <a href=\"https:\/\/arxiv.org\/pdf\/1606.06565.pdf\">https:\/\/arxiv.org\/pdf\/1606.06565.pdf<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Ask for Help: Hire a Villain<\/strong><\/p>\n<p><a name=\"L7.1\"><\/a><br \/>\n1. \u201cThe Netflix Simian Army,\u201d <em>The Netflix Tech Blog,<\/em> July 19, 2011. [Online]. Available: <a href=\"https:\/\/netflixtechblog.com\/the-netflix-simian-army-16e57fbab116\">https:\/\/netflixtechblog.com\/the-netflix-simian-army-16e57fbab116<\/a><\/p>\n<p><a name=\"L7.2\"><\/a><br \/>\n2. C. A. Cois, \u201cDevOps case study: Netflix and the chaos monkey,\u201d <em>DevOps blog<\/em>, Software Engineering Institute, April 30, 2015. [Online]. 
Available: <a href=\"https:\/\/insights.sei.cmu.edu\/devops\/2015\/04\/devops-case-study-netflix-and-the-chaos-monkey.html\">https:\/\/insights.sei.cmu.edu\/devops\/2015\/04\/devops-case-study-netflix-and-the-chaos-monkey.html<\/a><\/p>\n<p><a name=\"L7.3\"><\/a><br \/>\n3. \u201cWhite-hat,\u201d <em>Your Dictionary<\/em>. Accessed March 13, 2020. [Online]. Available: <a href=\"https:\/\/www.yourdictionary.com\/white-hat\">https:\/\/www.yourdictionary.com\/white-hat<\/a><\/p>\n<p><a name=\"L7.4\"><\/a><br \/>\n4. E. Tittel and E. Follis, \u201cHow to become a white hat hacker,\u201d <em>Business News Daily<\/em>, June 17, 2019. [Online]. Available: <a href=\"https:\/\/www.businessnewsdaily.com\/10713-white-hat-hacker-career.html\">https:\/\/www.businessnewsdaily.com\/10713-white-hat-hacker-career.html<\/a><\/p>\n<p><a name=\"L7.5\"><\/a><br \/>\n5. K. Leswing, \u201cApple hired the hackers who created the first Mac firmware virus,\u201d <em>Business Insider<\/em>, Feb. 3, 2016. [Online]. Available: <a href=\"https:\/\/www.businessinsider.com\/apple-hired-the-hackers-who-created-the-first-mac-firmware-virus-2016-2\">https:\/\/www.businessinsider.com\/apple-hired-the-hackers-who-created-the-first-mac-firmware-virus-2016-2<\/a><\/p>\n<p><a name=\"L7.6\"><\/a><br \/>\n6. HackerOne, \u201cWhat was it like to hack the Pentagon?\u201d <em>h1 blog<\/em>, June 17, 2016. [Online]. Available: <a href=\"https:\/\/www.hackerone.com\/blog\/hack-the-pentagon-results\">https:\/\/www.hackerone.com\/blog\/hack-the-pentagon-results<\/a><\/p>\n<p><a name=\"L7.7\"><\/a><br \/>\n7. J. Talamantes, &#8220;What is red teaming and why do I need it?\u201d <em>RedTeam blog<\/em>. Accessed March 16, 2020. [Online]. 
Available: <a href=\"https:\/\/www.redteamsecure.com\/what-is-red-teaming-and-why-do-i-need-it-2\/\">https:\/\/www.redteamsecure.com\/what-is-red-teaming-and-why-do-i-need-it-2\/<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Use Math to Reduce Bad Outcomes Caused by Math<\/strong><\/p>\n<p><a name=\"L8.1\"><\/a><br \/>\n1. K. Hao, \u201cThis is how AI bias really happens\u2014and why it\u2019s so hard to fix,\u201d <em>MIT Technology Review<\/em>, Feb. 4, 2020. [Online]. Available: <a href=\"https:\/\/www.technologyreview.com\/s\/612876\/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix\/\">https:\/\/www.technologyreview.com\/s\/612876\/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix\/<\/a><\/p>\n<p><a name=\"L8.2\"><\/a><br \/>\n2. A. Feng and S. Wu, \u201cThe myth of the impartial machine,\u201d <em>Parametric Press<\/em>, no. 01 (Science + Society), May 1, 2019. [Online]. Available: <a href=\"https:\/\/parametric.press\/issue-01\/the-myth-of-the-impartial-machine\/\">https:\/\/parametric.press\/issue-01\/the-myth-of-the-impartial-machine\/<\/a><\/p>\n<p><a name=\"L8.3\"><\/a><br \/>\n3. \u201cAI fairness 360 open source toolkit,\u201d IBM Research Trusted AI. Accessed March 13, 2020. [Online]. Available: <a href=\"http:\/\/aif360.mybluemix.net\/\">http:\/\/aif360.mybluemix.net\/<\/a><\/p>\n<p><a name=\"L8.4\"><\/a><br \/>\n4. &#8220;Bias and fairness audit toolkit,\u201d GitHub. Accessed March 13, 2020. [Online]. Available: <a href=\"https:\/\/github.com\/dssg\/aequitas\">https:\/\/github.com\/dssg\/aequitas<\/a><\/p>\n<p><a name=\"L8.5\"><\/a><br \/>\n5. \u201cA Python package that implements a variety of algorithms that mitigate unfairness in supervised machine learning,\u201d GitHub. Accessed March 13, 2020. [Online]. Available: <a href=\"https:\/\/github.com\/Microsoft\/fairlearn\">https:\/\/github.com\/Microsoft\/fairlearn<\/a><\/p>\n<p><a name=\"L8.6\"><\/a><br \/>\n6. \u201cWhat-if tool,\u201d GitHub. 
Accessed March 13, 2020. [Online]. Available: <a href=\"https:\/\/pair-code.github.io\/what-if-tool\/\">https:\/\/pair-code.github.io\/what-if-tool\/<\/a><\/p>\n<p><a name=\"L8.7\"><\/a><br \/>\n7. \u201cFacets,\u201d GitHub. Accessed March 13, 2020. [Online]. Available: <a href=\"https:\/\/pair-code.github.io\/facets\/\">https:\/\/pair-code.github.io\/facets\/<\/a><\/p>\n<p><a name=\"L8.8\"><\/a><br \/>\n8. T. Bolukbasi, K. Chang, J. Zou, V. Saligrama, and A. Kalai, \u201cMan is to computer programmer as woman is to homemaker? Debiasing word embeddings,\u201d <em>arXiv.org<\/em>, July 21, 2016. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1607.06520\">https:\/\/arxiv.org\/abs\/1607.06520<\/a><\/p>\n<p><a name=\"L8.9\"><\/a><br \/>\n9. J. Zhao, T. Wang, M. Yatskar, V. Ordonez, and K. Chang, \u201cMen also like shopping: Reducing gender bias amplification using Corpus-level constraints,\u201d <em>arXiv.org<\/em>, July 29, 2017. [Online]. Available: <a href=\"https:\/\/arxiv.org\/pdf\/1707.09457.pdf\">https:\/\/arxiv.org\/pdf\/1707.09457.pdf<\/a><\/p>\n<p><a name=\"L8.a\"><\/a><br \/>\n10. A. Feng and S. Wu, \u201cThe myth of the impartial machine,\u201d <em>Parametric Press<\/em>, no. 01 (Science + Society), May 1, 2019. [Online]. Available: <a href=\"https:\/\/parametric.press\/issue-01\/the-myth-of-the-impartial-machine\/\">https:\/\/parametric.press\/issue-01\/the-myth-of-the-impartial-machine\/<\/a><\/p>\n<p><a name=\"L8.b\"><\/a><br \/>\n11. D. Sculley et al., \u201cHidden technical debt in machine learning systems,\u201d in <em>Advances in Neural Information Processing Systems 28 (NIPS 2015)<\/em>. Accessed March 16, 2020. [Online]. Available: <a href=\"https:\/\/papers.nips.cc\/paper\/5656-hidden-technical-debt-in-machine-learning-systems.pdf\">https:\/\/papers.nips.cc\/paper\/5656-hidden-technical-debt-in-machine-learning-systems.pdf<\/a><\/p>\n<p><a name=\"L8.c\"><\/a><br \/>\n12. A. Feng and S. 
Wu, \u201cThe myth of the impartial machine,\u201d <em>Parametric Press<\/em>, no. 01 (Science + Society), May 1, 2019. [Online]. Available: <a href=\"https:\/\/parametric.press\/issue-01\/the-myth-of-the-impartial-machine\/\">https:\/\/parametric.press\/issue-01\/the-myth-of-the-impartial-machine\/<\/a><\/p>\n<p><a name=\"L8.d\"><\/a><br \/>\n13. D. Sculley et al., \u201cHidden technical debt in machine learning systems,\u201d in <em>Advances in Neural Information Processing Systems 28 (NIPS 2015)<\/em>. Accessed March 16, 2020. [Online]. Available: <a href=\"https:\/\/papers.nips.cc\/paper\/5656-hidden-technical-debt-in-machine-learning-systems.pdf\">https:\/\/papers.nips.cc\/paper\/5656-hidden-technical-debt-in-machine-learning-systems.pdf<\/a><\/p>\n<p><a name=\"L8.e\"><\/a><br \/>\n14. Z. Rogers, \u201cHave strategists drunk the \u2018AI race\u2019 Kool-Aid?\u201d <em>War on the Rocks<\/em>, June 4, 2019. [Online]. Available: <a href=\"https:\/\/warontherocks.com\/2019\/06\/have-strategists-drunk-the-ai-race-kool-aid\/\">https:\/\/warontherocks.com\/2019\/06\/have-strategists-drunk-the-ai-race-kool-aid\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Make Our Assumptions Explicit<\/strong><\/p>\n<p><a class=\"anchor\" name=\"L9.1\"><\/a><br \/>\n1. J. Stoyanovich and B. Howe, \u201cFollow the data! Algorithmic transparency starts with data transparency,\u201d Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, Nov. 27, 2018. [Online]. Available: <a href=\"https:\/\/ai.shorensteincenter.org\/ideas\/2018\/11\/26\/follow-the-data-algorithmic-transparency-starts-with-data-transparency\">https:\/\/ai.shorensteincenter.org\/ideas\/2018\/11\/26\/follow-the-data-algorithmic-transparency-starts-with-data-transparency<\/a><\/p>\n<p><a class=\"anchor\" name=\"L9.2\"><\/a><br \/>\n2. T. Gebru et al., \u201cDatasheets for datasets,\u201d <em>arXiv.org<\/em>, Jan. 14, 2020. [Online]. 
Available: <a href=\"https:\/\/arxiv.org\/abs\/1803.09010\">https:\/\/arxiv.org\/abs\/1803.09010<\/a><\/p>\n<p><a class=\"anchor\" name=\"L9.3\"><\/a><br \/>\n3. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a class=\"anchor\" name=\"L9.4\"><\/a><br \/>\n4. M. Mitchell et al., \u201cModel cards for model reporting,\u201d <em>arXiv.org<\/em>, Jan. 14, 2019. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1810.03993\">https:\/\/arxiv.org\/abs\/1810.03993<\/a><\/p>\n<p><a class=\"anchor\" name=\"L9.5\"><\/a><br \/>\n5. \u201cAbout Us,\u201d Partnership On AI<em>. <\/em>Accessed May 27, 2020. [Online]. Available: <a href=\"https:\/\/www.partnershiponai.org\/about\/\">https:\/\/www.partnershiponai.org\/about\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L9.6\"><\/a><br \/>\n6. \u201cDeployed Examples,\u201d Partnership On AI<em>. <\/em>Accessed May 27, 2020. [Online]. Available: <a href=\"https:\/\/www.partnershiponai.org\/about-ml\/#examples\">https:\/\/www.partnershiponai.org\/about-ml\/#examples<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Try Human-AI Couples Counseling<\/strong><\/p>\n<p><a class=\"anchor\" name=\"L10.1\"><\/a><br \/>\n1. J. M. Bradshaw, R. Hoffman, M. Johnson, and D. D. Woods, \u201cThe seven deadly myths of \u2018autonomous systems,\u2019\u201d <em>IEEE Intelligent Systems<\/em>, vol. 28, no. 3, pp. 54-61, May 2013. [Online]. Available: <a href=\"https:\/\/ieeexplore.ieee.org\/document\/6588858\">https:\/\/ieeexplore.ieee.org\/document\/6588858<\/a><\/p>\n<p><a class=\"anchor\" name=\"L10.2\"><\/a><br \/>\n2. J. M. Bradshaw, R. Hoffman, M. 
Johnson, and D. D. Woods, \u201cThe seven deadly myths of \u2018autonomous systems,\u2019\u201d <em>IEEE Intelligent Systems<\/em>, vol. 28, no. 3, pp. 54-61, May 2013. [Online]. Available: <a href=\"https:\/\/ieeexplore.ieee.org\/document\/6588858\">https:\/\/ieeexplore.ieee.org\/document\/6588858<\/a><\/p>\n<p><a class=\"anchor\" name=\"L10.3\"><\/a><br \/>\n3. S. M. Casner and E. L. Hutchins, \u201cWhat do we tell the drivers? Toward minimum driver training standards for partially automated cars,\u201d <em>Journal of Cognitive Engineering and Decision Making<\/em>, March 8, 2019. [Online]. Available: <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901\">https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1555343419830901<\/a><\/p>\n<p><a class=\"anchor\" name=\"L10.4\"><\/a><br \/>\n4. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, \u201cSeven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,\u201d <em>IEEE Intelligent Systems<\/em>, Nov.\/Dec. 2014. [Online]. Available: <a href=\"http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf\">http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L10.5\"><\/a><br \/>\n5. J. M. Bradshaw, R. Hoffman, M. Johnson, and D. D. Woods, \u201cThe seven deadly myths of \u2018autonomous systems,\u2019\u201d <em>IEEE Intelligent Systems<\/em>, vol. 28, no. 3, pp. 54-61, May 2013. [Online]. Available: <a href=\"https:\/\/ieeexplore.ieee.org\/document\/6588858\">https:\/\/ieeexplore.ieee.org\/document\/6588858<\/a><\/p>\n<p><a class=\"anchor\" name=\"L10.6\"><\/a><br \/>\n6. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, \u201cSeven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,\u201d <em>IEEE Intelligent Systems<\/em>, Nov.\/Dec. 2014. [Online]. 
Available: <a href=\"http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf\">http:\/\/www.jeffreymbradshaw.net\/publications\/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L10.7\"><\/a><br \/>\n7. G. Klein et al., \u201cTen challenges for making automation a \u2018team player\u2019 in joint human-agent activity,\u201d <em>IEEE Intelligent Systems<\/em>, vol. 19, no. 6, pp. 91-95, Nov.\/Dec. 2004. [Online]. Available: <a href=\"http:\/\/jeffreymbradshaw.net\/publications\/17._Team_Players.pdf_1.pdf\">http:\/\/jeffreymbradshaw.net\/publications\/17._Team_Players.pdf_1.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L10.8\"><\/a><br \/>\n8. W. Lawless, R. Mittu, D. Sofge, and L. Hiatt, \u201cArtificial intelligence, autonomy, and human-machine teams\u2014Interdependence, context, and explainable AI,\u201d <em>AI Magazine<\/em>, vol. 40, no. 3, pp. 5-13, 2019.<\/p>\n<p><a class=\"anchor\" name=\"L10.9\"><\/a><br \/>\n9. \u201cA framework for discussing trust in increasingly autonomous systems,\u201d The MITRE Corporation, June 2017. [Online]. Available: <a href=\"https:\/\/www.mitre.org\/sites\/default\/files\/publications\/17-2432-framework-discussing-trust-increasingly-autonomous-systems.pdf\">https:\/\/www.mitre.org\/sites\/default\/files\/publications\/17-2432-framework-discussing-trust-increasingly-autonomous-systems.pdf<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p><strong>Offer the User Choices<\/strong><\/p>\n<p><a class=\"anchor\" name=\"L11.1\"><\/a><br \/>\n1. M. Kearns, \u201cThe ethical algorithm,\u201d Carnegie Council for Ethics in International Affairs, Nov. 6, 2019. [Online]. 
Available: <a href=\"https:\/\/www.carnegiecouncil.org\/studio\/multimedia\/20191106-the-ethical-algorithm-michael-kearns\">https:\/\/www.carnegiecouncil.org\/studio\/multimedia\/20191106-the-ethical-algorithm-michael-kearns<\/a><\/p>\n<p><a class=\"anchor\" name=\"L11.2\"><\/a><br \/>\n2. J. Stoyanovich and B. Howe, \u201cFollow the data! Algorithmic transparency starts with data transparency,\u201d Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, Nov. 27, 2018. [Online]. Available: <a href=\"https:\/\/ai.shorensteincenter.org\/ideas\/2018\/11\/26\/follow-the-data-algorithmic-transparency-starts-with-data-transparency\">https:\/\/ai.shorensteincenter.org\/ideas\/2018\/11\/26\/follow-the-data-algorithmic-transparency-starts-with-data-transparency<\/a><\/p>\n<p><a class=\"anchor\" name=\"L11.3\"><\/a><br \/>\n3. L. M. Strickhart and H.N.J. Lee, \u201cShow your work: Machine learning explainer tools and their use in artificial intelligence assurance,\u201d The MITRE Corporation, McLean, VA, June 2019, unpublished.<\/p>\n<p><a class=\"anchor\" name=\"L11.4\"><\/a><br \/>\n4. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Promote Better Adoption through Gameplay<\/strong><br \/>\n<a class=\"anchor\" name=\"L12.1\"><\/a><br \/>\n1. N. D. Sarter and D. D. Woods, \u201cHow in the world did I ever get into that mode? Mode error and awareness in supervisory control,\u201d <em>Human Factors<\/em>, vol. 37, pp. 5-19, 1995.<\/p>\n<p><a class=\"anchor\" name=\"L12.2\"><\/a><br \/>\n2. 
\u201cVirtuous cycle of AI: Build good product, get more users, collect more data, build better product, get more users, collect more data, etc.,\u201d in A. Ng, <em>AI Transformation Playbook: How to Lead Your Company into the AI Era<\/em>, Landing AI, Dec. 13, 2018. [Online]. Available: <a href=\"https:\/\/landing.ai\/ai-transformation-playbook\/\">https:\/\/landing.ai\/ai-transformation-playbook\/<\/a>.<\/p>\n<p><a class=\"anchor\" name=\"L12.3\"><\/a><br \/>\n3. J. Whittlestone, A. Alexandrova, R. Nyrup, and S. Cave, \u201cThe role and limits of principles in AI ethics: Towards a focus on tensions,\u201d presented at AIES \u201919, Jan. 27\u201328, 2019, Honolulu, HI, USA. [Online]. Available: <a href=\"https:\/\/www.researchgate.net\/publication\/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions\/link\/5d269de0a6fdcc2462d41592\/download\">https:\/\/www.researchgate.net\/publication\/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions\/link\/5d269de0a6fdcc2462d41592\/download<\/a><\/p>\n<p><a class=\"anchor\" name=\"L12.4\"><\/a><br \/>\n4. \u201cProject ExplAIn interim report,\u201d U.K. Information Commissioner\u2019s Office, 2019. [Online]. Available: <a href=\"https:\/\/ico.org.uk\/about-the-ico\/research-and-reports\/project-explain-interim-report\/\">https:\/\/ico.org.uk\/about-the-ico\/research-and-reports\/project-explain-interim-report\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L12.5\"><\/a><br \/>\n5. \u201cSquad X improves situational awareness, coordination for dismounted units,\u201d Defense Advanced Research Projects Agency, Nov. 30, 2018. [Online]. Available: <a href=\"https:\/\/www.darpa.mil\/news-events\/2018-11-30a\">https:\/\/www.darpa.mil\/news-events\/2018-11-30a<\/a><\/p>\n<p><a class=\"anchor\" name=\"L12.6\"><\/a><br \/>\n6. DARPAtv, \u201cSquad X experimentation exercise,\u201d YouTube, July 12, 2019. [Online]. 
Available: <a href=\"https:\/\/www.youtube.com\/watch?v=DgM7hbCNMmU\">https:\/\/www.youtube.com\/watch?v=DgM7hbCNMmU<\/a><\/p>\n<p><a class=\"anchor\" name=\"L12.7\"><\/a><br \/>\n7. S. J. Freedberg, \u201cSimulating a super brain: Artificial intelligence in wargames,\u201d <em>Breaking Defense<\/em>, April 26, 2019. [Online]. Available: <a href=\"https:\/\/breakingdefense.com\/2019\/04\/simulating-a-super-brain-artificial-intelligence-in-wargames\/\">https:\/\/breakingdefense.com\/2019\/04\/simulating-a-super-brain-artificial-intelligence-in-wargames\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L12.8\"><\/a><br \/>\n8. B. Jensen, S. Cuomo, and C. Whyte, \u201cWargaming with Athena: How to make militaries smarter, faster, and more efficient with artificial intelligence,\u201d <em>War on the Rocks<\/em>, June 5, 2018. [Online]. Available: <a href=\"https:\/\/warontherocks.com\/2018\/06\/wargaming-with-athena-how-to-make-militaries-smarter-faster-and-more-efficient-with-artificial-intelligence\/\">https:\/\/warontherocks.com\/2018\/06\/wargaming-with-athena-how-to-make-militaries-smarter-faster-and-more-efficient-with-artificial-intelligence\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L12.9\"><\/a><br \/>\n9. J. Whittlestone, A. Alexandrova, R. Nyrup, and S. Cave, \u201cThe role and limits of principles in AI ethics: Towards a focus on tensions,\u201d presented at AIES \u201919, Jan. 27\u201328, 2019, Honolulu, HI, USA. [Online]. 
Available: <a href=\"https:\/\/www.researchgate.net\/publication\/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions\/link\/5d269de0a6fdcc2462d41592\/download\">https:\/\/www.researchgate.net\/publication\/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions\/link\/5d269de0a6fdcc2462d41592\/download<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Monitor the AI\u2019s Impact and Establish Chains of Accountability<\/strong><\/p>\n<p><a name=\"L13.1\"><\/a><br \/>\n1. A. Gonfalonieri, \u201cWhy machine learning models degrade in production,\u201d Towards Data Science, July 25, 2019. [Online]. Available: <a href=\"https:\/\/towardsdatascience.com\/why-machine-learning-models-degrade-in-production-d0f2108e9214\">https:\/\/towardsdatascience.com\/why-machine-learning-models-degrade-in-production-d0f2108e9214<\/a><\/p>\n<p><a name=\"L13.2\"><\/a><br \/>\n2. A. Gonfalonieri, \u201cWhy machine learning models degrade in production,\u201d Towards Data Science, July 25, 2019. [Online]. Available: <a href=\"https:\/\/towardsdatascience.com\/why-machine-learning-models-degrade-in-production-d0f2108e9214\">https:\/\/towardsdatascience.com\/why-machine-learning-models-degrade-in-production-d0f2108e9214<\/a><\/p>\n<p><a name=\"L13.3\"><\/a><br \/>\n3. A. Gonfalonieri, \u201cWhy machine learning models degrade in production,\u201d Towards Data Science, July 25, 2019. [Online]. Available: <a href=\"https:\/\/towardsdatascience.com\/why-machine-learning-models-degrade-in-production-d0f2108e9214\">https:\/\/towardsdatascience.com\/why-machine-learning-models-degrade-in-production-d0f2108e9214<\/a><\/p>\n<p><a name=\"L13.4\"><\/a><br \/>\n4. A. Campolo et al., <em>AI Now Report 2017<\/em>. New York, NY, USA: AI Now Institute, 2017. [Online]. 
Available:\u00a0 <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf<\/a><\/p>\n<p><a name=\"L13.5\"><\/a><br \/>\n5. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a name=\"L13.6\"><\/a><br \/>\n6. J. C. Newman, \u201cDecision Points in AI Governance,\u201d <em>UC Berkeley Center for Long-Term Cybersecurity, <\/em>May 5, 2020. [Online]. Available: <a href=\"https:\/\/cltc.berkeley.edu\/2020\/05\/05\/decision-points-in-ai-governance\/\">https:\/\/cltc.berkeley.edu\/2020\/05\/05\/decision-points-in-ai-governance\/<\/a><\/p>\n<p><a name=\"L13.7\"><\/a><br \/>\n7. T. Hagendorff, \u201cThe ethics of AI ethics: An evaluation of guidelines,\u201d <em>arXiv.org<\/em>, Oct. 11, 2019. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1903.03425\">https:\/\/arxiv.org\/abs\/1903.03425<\/a><\/p>\n<p><a name=\"L13.8\"><\/a><br \/>\n8. J. C. Newman, \u201cDecision Points in AI Governance,\u201d <em>UC Berkeley Center for Long-Term Cybersecurity, <\/em>May 5, 2020. [Online]. Available: <a href=\"https:\/\/cltc.berkeley.edu\/2020\/05\/05\/decision-points-in-ai-governance\/\">https:\/\/cltc.berkeley.edu\/2020\/05\/05\/decision-points-in-ai-governance\/<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Envision Safeguards for AI Advocates<\/strong><\/p>\n<p><a name=\"L14.1\"><\/a><br \/>\n1. R. Sandler, \u201cAmazon, Microsoft, Wayfair: Employees stage internal protests against working with ICE,\u201d <em>Forbes<\/em>, July 19, 2019. [Online]. 
Available: <a href=\"https:\/\/www.forbes.com\/sites\/rachelsandler\/2019\/07\/19\/amazon-salesforce-wayfair-employees-stage-internal-protests-for-working-with-ice\/\">https:\/\/www.forbes.com\/sites\/rachelsandler\/2019\/07\/19\/amazon-salesforce-wayfair-employees-stage-internal-protests-for-working-with-ice\/<\/a><\/p>\n<p><a name=\"L14.2\"><\/a><br \/>\n2. J. Bhuiyan, \u201cHow the Google walkout transformed tech workers into activists,\u201d <em>Los Angeles Times<\/em>, Nov. 6, 2019. [Online]. Available: <a href=\"https:\/\/www.latimes.com\/business\/technology\/story\/2019-11-06\/google-employee-walkout-tech-industry-activism\">https:\/\/www.latimes.com\/business\/technology\/story\/2019-11-06\/google-employee-walkout-tech-industry-activism<\/a><\/p>\n<p><a name=\"L14.3\"><\/a><br \/>\n3. J. McLaughlin, Z. Dorfman, and S. D. Naylor, \u201cPentagon intelligence employees raise concerns about supporting domestic surveillance amid protests,\u201d <em>Yahoo News,<\/em> June 4, 2020. [Online]. Available: <a href=\"https:\/\/news.yahoo.com\/pentagon-intelligence-employees-raise-concerns-about-supporting-domestic-surveillance-amid-protests-194906537.html\">https:\/\/news.yahoo.com\/pentagon-intelligence-employees-raise-concerns-about-supporting-domestic-surveillance-amid-protests-194906537.html<\/a><\/p>\n<p><a name=\"L14.4\"><\/a><br \/>\n4. J. Menn, \u201cGoogle fires fifth activist employee in three weeks; complaint filed,\u201d <em>Reuters, <\/em>Dec. 17, 2019. [Online]. Available: <a href=\"https:\/\/www.reuters.com\/article\/google-unions\/google-fires-fifth-activist-employee-in-three-weeks-complaint-filed-idUSL1N28R02L\">https:\/\/www.reuters.com\/article\/google-unions\/google-fires-fifth-activist-employee-in-three-weeks-complaint-filed-idUSL1N28R02L<\/a><\/p>\n<p><a name=\"L14.5\"><\/a><br \/>\n5. A. Palmer, \u201cAmazon employees plan \u2018online walkout\u2019 to protest firings and treatment of warehouse workers,\u201d <em>CNBC, <\/em>April 16, 2020. 
[Online]. Available: <a href=\"https:\/\/www.cnbc.com\/2020\/04\/16\/amazon-employees-plan-online-walkout-over-firings-work-conditions.html\">https:\/\/www.cnbc.com\/2020\/04\/16\/amazon-employees-plan-online-walkout-over-firings-work-conditions.html<\/a><\/p>\n<p><a name=\"L14.6\"><\/a><br \/>\n6. J. Eidelson and H. Kanu, \u201cSoftware Startup Accused of Union-Busting Will Pay Ex-Employees,\u201d <em>Bloomberg,<\/em> Nov. 10, 2018. [Online]. Available: <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2018-11-10\/software-startup-accused-of-union-busting-will-pay-ex-employees\">https:\/\/www.bloomberg.com\/news\/articles\/2018-11-10\/software-startup-accused-of-union-busting-will-pay-ex-employees<\/a><\/p>\n<p><a name=\"L14.7\"><\/a><br \/>\n7. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a name=\"L14.8\"><\/a><br \/>\n8. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Require Objective, Third-party Verification and Validation<\/strong><br \/>\n<a class=\"anchor\" name=\"L15.1\"><\/a><br \/>\n1. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. 
Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.2\"><\/a><br \/>\n2. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.3\"><\/a><br \/>\n3. ENERGY STAR homepage. Accessed on: Jan. 21, 2020. [Online]. Available: <a href=\"https:\/\/www.energystar.gov\/\">https:\/\/www.energystar.gov\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.4\"><\/a><br \/>\n4. C. Martin and M. Dent, \u201cHow Nestle, Google and other businesses make money by going green,\u201d <em>Los Angeles Times<\/em>, Sep. 20, 2019. [Online]. Available: <a href=\"https:\/\/www.latimes.com\/business\/story\/2019-09-20\/how-businesses-profit-from-environmentalism\">https:\/\/www.latimes.com\/business\/story\/2019-09-20\/how-businesses-profit-from-environmentalism<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.5\"><\/a><br \/>\n5. \u201cSafeAI.\u201d Accessed April 2, 2020. [Online]. Available:\u00a0\u00a0 <a href=\"https:\/\/www.forhumanity.center\/safeai\/\">https:\/\/www.forhumanity.center\/safeai\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.6\"><\/a><br \/>\n6. A. Campolo et al., <em>AI Now Report 2017<\/em>. New York, NY, USA: AI Now Institute, 2017. [Online]. Available:\u00a0 <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2017_Report.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.7\"><\/a><br \/>\n7. J. Stoyanovich and B. Howe, \u201cFollow the data! Algorithmic transparency starts with data transparency,\u201d Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, Nov. 27, 2018. [Online]. 
Available: <a href=\"https:\/\/ai.shorensteincenter.org\/ideas\/2018\/11\/26\/follow-the-data-algorithmic-transparency-starts-with-data-transparency\">https:\/\/ai.shorensteincenter.org\/ideas\/2018\/11\/26\/follow-the-data-algorithmic-transparency-starts-with-data-transparency<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.8\"><\/a><br \/>\n8. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.9\"><\/a><br \/>\n9. Z. C. Lipton, \u201cThe doctor just won\u2019t accept that,\u201d <em>arXiv.org<\/em>, Nov. 24, 2017. [Online]. Available: <a href=\"https:\/\/arxiv.org\/abs\/1711.08037\">https:\/\/arxiv.org\/abs\/1711.08037<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.a\"><\/a><br \/>\n10. \u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.b\"><\/a><br \/>\n11. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L15.c\"><\/a><br \/>\n12. Occupational Safety and Health Administration, \u201cOSHA\u2019s Nationally Recognized Testing Laboratory (NRTL) program,\u201d <em>OSHA.gov<\/em>. Accessed on: Jan. 30, 2020. [Online]. 
Available: <a href=\"https:\/\/www.osha.gov\/dts\/otpca\/nrtl\/\">https:\/\/www.osha.gov\/dts\/otpca\/nrtl\/<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Entrust Sector-specific Agencies to Establish AI Standards for Their Domains<\/strong><br \/>\n<a class=\"anchor\" name=\"L16.1\"><\/a><br \/>\n1. M. Whittaker et al., <em>AI Now Report 2018<\/em>. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: <a href=\"https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf\">https:\/\/ainowinstitute.org\/AI_Now_2018_Report.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.2\"><\/a><br \/>\n2. F. Balamuth et al., \u201cImproving recognition of pediatric severe sepsis in the emergency department: Contributions of a vital sign\u2013based electronic alert and bedside clinician identification,\u201d <em>Annals of Emergency Medicine<\/em>, vol. 70, no. 6, pp. 759-768.e2, Dec. 2017. [Online]. Available: <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0196064417303153\">https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0196064417303153<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.3\"><\/a><br \/>\n3. G. Siddiqui, \u201cWhy doctors reject tools that make their jobs easier,\u201d <em>Scientific American<\/em>, Oct. 15, 2018. [Online]. Available: <a href=\"https:\/\/blogs.scientificamerican.com\/observations\/why-doctors-reject-tools-that-make-their-jobs-easier\/\">https:\/\/blogs.scientificamerican.com\/observations\/why-doctors-reject-tools-that-make-their-jobs-easier\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.4\"><\/a><br \/>\n4. A. M. Barry-Jester, B. Casselman, and D. Goldstein, \u201cShould prison sentences be based on crimes that haven\u2019t been committed yet?\u201d <em>FiveThirtyEight<\/em>, Aug. 4, 2015. [Online]. 
Available: <a href=\"https:\/\/fivethirtyeight.com\/features\/prison-reform-risk-assessment\/\">https:\/\/fivethirtyeight.com\/features\/prison-reform-risk-assessment\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.5\"><\/a><br \/>\n5. J. Angwin, J. Larson, S. Mattu, and L. Kirchner, \u201cMachine bias,\u201d <em>ProPublica<\/em>, May 23, 2016. [Online]. Available: <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.6\"><\/a><br \/>\n6. S. Corbett-Davies, E. Pierson, A. Feller, and S. Goel, \u201cA computer program used for bail and sentencing decisions was labeled biased against blacks. It\u2019s actually not that clear,\u201d <em>Washington Post<\/em>, Oct. 17, 2016. [Online]. Available: <a href=\"https:\/\/www.washingtonpost.com\/news\/monkey-cage\/wp\/2016\/10\/17\/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas\/?noredirect=on&amp;utm_term=.a9cfb19a549d\">https:\/\/www.washingtonpost.com\/news\/monkey-cage\/wp\/2016\/10\/17\/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas\/?noredirect=on&amp;utm_term=.a9cfb19a549d<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.7\"><\/a><br \/>\n7. \u201cCase of first impression,\u201d <em>Legal Dictionary<\/em>, March 21, 2017. [Online]. Available: <a href=\"https:\/\/legaldictionary.net\/case-first-impression\/\">https:\/\/legaldictionary.net\/case-first-impression\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.8\"><\/a><br \/>\n8. \u201cFair cross section requirement,\u201d Stephen G. Rodriguez &amp; Partners. Accessed on: Jan. 21, 2020. [Online]. 
Available: <a href=\"https:\/\/www.lacriminaldefenseattorney.com\/legal-dictionary\/f\/fair-cross-section-requirement\/\">https:\/\/www.lacriminaldefenseattorney.com\/legal-dictionary\/f\/fair-cross-section-requirement\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.9\"><\/a><br \/>\n9. I. Masic, M. Miokovic, and B. Muhamedagic, \u201cEvidence based medicine\u2014new approaches and challenges,\u201d <em>Acta Informatica Medica<\/em>, vol. 16, no. 4, pp. 219-225, 2008. [Online]. Available: <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC3789163\/\">https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC3789163\/<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.a\"><\/a><br \/>\n10. \u201cHippocratic Oath,\u201d <em>Encyclopaedia Britannica<\/em>, Dec. 4, 2019. [Online]. Available: <a href=\"https:\/\/www.britannica.com\/topic\/Hippocratic-oath\">https:\/\/www.britannica.com\/topic\/Hippocratic-oath<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.b\"><\/a><br \/>\n11. R. Vought, \u201cGuidance for regulation of artificial intelligence applications,\u201d Draft memorandum, <em>WhiteHouse.gov<\/em>. Accessed on: Jan. 21, 2020. [Online]. Available: <a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2020\/01\/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf\">https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2020\/01\/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.c\"><\/a><br \/>\n12. G. Vyse, \u201cThree American cities have now banned the use of facial recognition technology in local government amid concerns it&#8217;s inaccurate and biased,\u201d <em>Governing<\/em>, July 24, 2019. [Online]. Available: <a href=\"https:\/\/www.governing.com\/topics\/public-justice-safety\/gov-cities-ban-government-use-facial-recognition.html\">https:\/\/www.governing.com\/topics\/public-justice-safety\/gov-cities-ban-government-use-facial-recognition.html<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.d\"><\/a><br \/>\n13. 
\u201cAlgorithms and artificial intelligence: CNIL\u2019s report on the ethical issues,\u201d CNIL [Commission Nationale de l&#8217;Informatique et des Libert\u00e9s], May 25, 2018. [Online]. Available: <a href=\"https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues\">https:\/\/www.cnil.fr\/en\/algorithms-and-artificial-intelligence-cnils-report-ethical-issues<\/a><\/p>\n<p><a class=\"anchor\" name=\"L16.e\"><\/a><br \/>\n14. A. Dafoe, \u201cAI governance: A research agenda,\u201d Future of Humanity Institute, University of Oxford, Oxford, UK, Aug. 27, 2018.\u00a0 [Online]. Available: <a href=\"https:\/\/www.fhi.ox.ac.uk\/wp-content\/uploads\/GovAIAgenda.pdf\">https:\/\/www.fhi.ox.ac.uk\/wp-content\/uploads\/GovAIAgenda.pdf<\/a><\/p>\n<p>[\/et_pb_text][et_pb_sidebar area=&#8221;sidebar-1&#8243; _builder_version=&#8221;4.4.8&#8243;][\/et_pb_sidebar][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; fullwidth=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.4.8&#8243; module_alignment=&#8221;center&#8221; global_module=&#8221;3880&#8243; saved_tabs=&#8221;all&#8221;][et_pb_fullwidth_code disabled_on=&#8221;off|off|off&#8221; admin_label=&#8221;Footer menu&#8221; _builder_version=&#8221;4.5.0&#8243; background_color=&#8221;#d5dde0&#8243; text_orientation=&#8221;center&#8221; module_alignment=&#8221;center&#8221; custom_padding=&#8221;10px||10px||false|false&#8221;]Add Your Experience! This site should be a community resource and would benefit from your examples and voices. You can write to us by clicking <a href=\"mailto:jrotner@mitre.org;rhodge@mitre.org;ldanley@mitre.org?subject=AI Fails website\">here<\/a>.[\/et_pb_fullwidth_code][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>References \u00a0 Home Page 1. \u201cBenjamin Franklin quotable quote,\u201d Goodreads. Accessed March 16, 2020. [Online]. 
Available: https:\/\/www.goodreads.com\/quotes\/460142-if-you-fail-to-plan-you-are-planning-to-fail 2. Department of Defense, \u201cSummary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity,\u201d defense.gov, February 12, 2019. [Online]. Available: https:\/\/media.defense.gov\/2019\/Feb\/12\/2002088963\/-1\/-1\/1\/SUMMARY-OF-DOD-AI-STRATEGY.PDF 3. ichristianization, \u201cMicrosoft build 2017 translator demo,\u201d YouTube, June [&hellip;]<\/p>\n","protected":false},"author":142,"featured_media":0,"parent":0,"menu_order":2,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"class_list":["post-65","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/65","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/users\/142"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/comments?post=65"}],"version-history":[{"count":0,"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/pages\/65\/revisions"}],"wp:attachment":[{"href":"https:\/\/sites.mitre.org\/aifails\/wp-json\/wp\/v2\/media?parent=65"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}