References

 

Home Page


1. “Benjamin Franklin quotable quote,” Goodreads. Accessed March 16, 2020. [Online]. Available: https://www.goodreads.com/quotes/460142-if-you-fail-to-plan-you-are-planning-to-fail


2. Department of Defense, “Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity,” defense.gov, February 12, 2019. [Online]. Available: https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF


3. ichristianization, “Microsoft build 2017 translator demo,” YouTube, June 13, 2017. [Online]. Available: https://www.youtube.com/watch?v=u4cJoX-DoiY


4. N. Martin, “Artificial intelligence is being used to diagnose disease and design new drugs,” Forbes, Sept. 30, 2019. [Online]. Available: https://www.forbes.com/sites/nicolemartin1/2019/09/30/artificial-intelligence-is-being-used-to-diagnose-disease-and-design-new-drugs/#8874c44db51f


5. “Meet the AI robots helping take care of elderly patients,” Time Magazine, Aug. 23, 2019. [Online]. Available: https://time.com/5660046/robots-elderly-care/


6. A. Chang, “The Facebook and Cambridge Analytica scandal, explained with a simple diagram,” Vox, May 2, 2018. [Online]. Available: https://www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram


7. P. Taddonio, “How China’s government is using AI on its Uighur Muslim population,” Frontline, Nov. 21, 2019. [Online]. Available: https://www.pbs.org/wgbh/frontline/article/how-chinas-government-is-using-ai-on-its-uighur-muslim-population/


8. D. Z. Morris, “China will block travel for those with bad ‘social credit,’” Fortune, March 18, 2018. [Online]. Available: https://fortune.com/2018/03/18/china-travel-ban-social-credit/


9. R. Adams, “Hong Kong protesters are worried about facial recognition technology. But there are many other ways they’re being watched,” BuzzFeed News, Aug. 17, 2019. [Online]. Available: https://www.buzzfeednews.com/article/rosalindadams/hong-kong-protests-paranoia-facial-recognition-lasers


10. S. Gibbs, “Tesla Model S cleared by auto safety regulator after fatal Autopilot crash,” Guardian, Jan. 20, 2017. [Online]. Available: https://www.theguardian.com/technology/2017/jan/20/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash

 

 

Fails

No Human Needed: The AI’s Got This


1. E. Hunt, “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter,” Guardian, March 24, 2016. [Online]. Available: https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter


2. C. Lecher, “How Amazon automatically tracks and fires warehouse workers for ‘productivity,’” The Verge, Apr. 25, 2019. [Online]. Available: https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations


3. J. C. F. de Winter and D. Dodou, “Why the Fitts list has persisted throughout the history of function allocation,” Cognition, Technology & Work, August 25, 2011. [Online]. Available: https://link.springer.com/article/10.1007/s10111-011-0188-1


4. E. Brynjolfsson and A. McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, NY, USA: W. W. Norton, 2016.


5. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, “Seven cardinal virtues of human-machine teamwork: Examples from the DARPA Robotics Challenge,” IEEE Intelligent Systems, Nov./Dec. 2014. [Online]. Available: http://www.jeffreymbradshaw.net/publications/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf


6. N. B. Sarter, D. D. Woods, and C. E. Billings, “Automation surprises,” in G. Salvendy (Ed.), Handbook of Human Factors & Ergonomics (2nd ed., pp. 1926-1943). New York, NY, USA: John Wiley, 1997.


7. S. M. Casner and E. L. Hutchins, “What do we tell the drivers? Toward minimum driver training standards for partially automated cars,” Journal of Cognitive Engineering and Decision Making, March 8, 2019. [Online]. Available: https://journals.sagepub.com/doi/full/10.1177/1555343419830901


8. N. B. Sarter, D. D. Woods, and C. E. Billings, “Automation surprises,” in G. Salvendy (Ed.), Handbook of Human Factors & Ergonomics (2nd ed., pp. 1926-1943). New York, NY, USA: John Wiley, 1997.


9. S. M. Casner and E. L. Hutchins, “What do we tell the drivers? Toward minimum driver training standards for partially automated cars,” Journal of Cognitive Engineering and Decision Making, March 8, 2019. [Online]. Available: https://journals.sagepub.com/doi/full/10.1177/1555343419830901


10. T. Lewis, “A brief history of artificial intelligence,” Live Science, Dec. 4, 2014. [Online]. Available: https://www.livescience.com/49007-history-of-artificial-intelligence.html


11. Data & Society, “Algorithmic accountability: A primer,” Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality, prepared for the Congressional Progressive Caucus, April 18, 2018. [Online]. Available: https://datasociety.net/wp-content/uploads/2018/04/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf


12. T. D. Jajal, “Distinguishing between narrow AI, general AI and super AI,” Medium, May 21, 2018. [Online]. Available: https://medium.com/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22

 

 

AI Perfectionists and AI “Pixie Dusters”


1. J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, Oct. 9, 2018. [Online]. Available: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G


2. Defense Science Board, Task Force Report: The Role of Autonomy in DoD Systems, Washington, D.C., June 2016. [Online]. Available: https://www.hsdl.org/?abstract&did=722318


3. M. McDonough, “Business-focus on artificial intelligence rising,” Twitter, Feb. 28, 2017. [Online]. Available: https://twitter.com/M_McDonough/status/836580294484451328


4. C. O’Neil, “The era of blind faith in big data must end,” TED, April 2017. [Online]. Available: https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end


5. “Here to help,” xkcd. Accessed March 18, 2020. [Online]. Available: https://www.xkcd.com/1831/


6. J. Brownlee, “A gentle introduction to transfer learning for deep learning,” Machine Learning Mastery, Sept. 16, 2019. [Online]. Available: https://machinelearningmastery.com/transfer-learning-for-deep-learning/


7. S. Schuchmann, “History of the second AI winter,” Towards Data Science, May 12, 2019. [Online]. Available: https://towardsdatascience.com/history-of-the-second-ai-winter-406f18789d45


8. Defense Science Board, Task Force Report: The Role of Autonomy in DoD Systems, Washington, D.C., June 2016. [Online]. Available: https://www.hsdl.org/?abstract&did=722318

 

 

Developers Are Wizards and Operators Are Muggles


1. A. Gregg, J. O’Connell, A. Ba Tran, and F. Siddiqui. “At tense meeting with Boeing executives, pilots fumed about being left in dark on plane software,” Washington Post, March 13, 2019. [Online]. Available: https://www.washingtonpost.com/business/economy/new-software-in-boeing-737-max-planes-under-scrutinty-after-second-crash/2019/03/13/06716fda-45c7-11e9-90f0-0ccfeec87a61_story.html


2. A. MacGillis, “The case against Boeing,” New Yorker, Nov. 11, 2019. [Online]. Available: https://www.newyorker.com/magazine/2019/11/18/the-case-against-boeing


3. P. McCausland, “Self-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk,” NBC News, Nov. 9, 2019. [Online]. Available: https://www.nbcnews.com/tech/tech-news/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281


4. M. McFarland, “My seat keeps vibrating. Will it make me a better driver before driving me insane?” Washington Post, Jan. 12, 2015. [Online]. Available: https://www.washingtonpost.com/news/innovations/wp/2015/01/12/my-seat-keeps-vibrating-will-it-make-me-a-better-driver-before-driving-me-insane/?noredirect=on&utm_term=.31792eb87c03


5. M. Cyril, “Watching the Black body,” Electronic Frontier Foundation, Feb. 28, 2019. [Online]. Available: https://www.eff.org/deeplinks/2019/02/watching-black-body


6. “Barry Friedman: Is technology making police better—or…,” Recode Decode podcast, Nov. 24, 2019. [Online]. Available: https://www.stitcher.com/podcast/vox/recode-decode/e/65519494?curator=MediaREDEF

 

 

You Call This Artificial “Intelligence”? AI Meets the Real World


1. R. Steinberg, “6 areas where artificial neural networks outperform humans,” VentureBeat, Dec. 8, 2017. [Online]. Available: https://venturebeat.com/2017/12/08/6-areas-where-artificial-neural-networks-outperform-humans/

 

 

Sensing is Believing

1. Reference 1 in this section comes from the category header: “You Call This Artificial ‘Intelligence’? AI Meets the Real World”


2. N. Bilton, “Nest thermostat glitch leaves users in the cold,” New York Times, Jan. 13, 2016. [Online]. Available: https://www.nytimes.com/2016/01/14/fashion/nest-thermostat-glitch-battery-dies-software-freeze.html


3. A. J. Hawkins, “Everything you need to know about the Boeing 737 Max airplane crashes,” The Verge, March 22, 2019. [Online]. Available: https://www.theverge.com/2019/3/22/18275736/boeing-737-max-plane-crashes-grounded-problems-info-details-explained-reasons


4. E. Ongweso, “Samsung Galaxy S10 ‘vault-like security’ beaten by a $3 screen protector,” Vice, Oct. 17, 2019. [Online]. Available: https://www.vice.com/en_us/article/59nqdb/samsung-galaxy-s10-vault-like-security-beaten-by-a-dollar3-screen-protector


5. “Airplane redundancy systems,” Poente Technical. Accessed April 3, 2020. [Online]. Available: https://www.poentetechnical.com/aircraft-engineer/airplane-redundancy-systems/

 

 

Insecure AI

1. Reference 1 in this section comes from the category header: “You Call This Artificial ‘Intelligence’? AI Meets the Real World”


2. A. J. Vicens, “An Amazon Echo recorded a family’s private conversation and sent it to some random person,” Mother Jones, May 24, 2018. [Online]. Available: https://www.motherjones.com/politics/2018/05/an-amazon-echo-recorded-a-familys-private-conversation-and-sent-it-to-some-random-person/


3. J. Oates, “Japanese hotel chain sorry that hackers may have watched guests through bedside robots,” Register, Oct. 22, 2019. [Online]. Available: https://www.theregister.co.uk/2019/10/22/japanese_hotel_chain_sorry_that_bedside_robots_may_have_watched_guests


4. T. G. Dietterich and E. J. Horvitz, “Rise of Concerns about AI: Reflections and Directions,” Communications of the ACM, vol. 58, no. 10, pp. 38-40, October 2015. [Online]. Available: http://erichorvitz.com/CACM_Oct_2015-VP.pdf


5. J. S. McEwen and S. S. Shapiro, “MITRE’S Privacy Engineering Tools and Their Use in a Privacy Assessment Framework,” The MITRE Corporation, McLean, VA, Nov. 2019. [Online]. Available: https://www.mitre.org/publications/technical-papers/mitre%E2%80%99s-privacy-engineering-tools-and-their-use-in-a-privacy


6. University of Michigan Engineering, “Watch engineers hack a ‘smart home’ door lock,” YouTube, May 2, 2016. [Online]. Available: https://www.youtube.com/watch?v=Iwm6nvC9Xhc


7. M. Hanrahan, “Ring security camera hacks see homeowners subjected to racial abuse, ransom demands,” ABC News, Dec. 12, 2019. [Online]. Available: https://abcnews.go.com/US/ring-security-camera-hacks-homeowners-subjected-racial-abuse/story?id=67679790


8. “Cybersecurity Vulnerabilities Affecting Medtronic Implantable Cardiac Devices, Programmers, and Home Monitors: FDA Safety Communication,” US Food & Drug Administration, March 2019. [Online]. Available: https://www.fda.gov/medical-devices/safety-communications/cybersecurity-vulnerabilities-affecting-medtronic-implantable-cardiac-devices-programmers-and-home


9. J. Herrman, “Google knows where you’ve been but does it know who you are,” New York Times Magazine, Sept. 12, 2018. [Online]. Available: https://www.nytimes.com/2018/09/12/magazine/google-maps-location-data-privacy.html


10. A. Greenberg, “Hackers remotely kill a Jeep on the highway—with me in it,” Wired, July 21, 2015. [Online]. Available: https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/


11. L. Rocher, J. M. Hendrickx, and Y.-A. de Montjoye, “Estimating the success of re-identifications in incomplete datasets using generative models,” Nature Communications, July 23, 2019. [Online]. Available: https://www.nature.com/articles/s41467-019-10933-3

 

 

AI Pwned

1. Reference 1 in this section comes from the category header: “You Call This Artificial ‘Intelligence’? AI Meets the Real World”


2. “pwned,” Urban Dictionary. Accessed March 11, 2020. [Online]. Available: https://www.urbandictionary.com/define.php?term=pwned


3. M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “A general framework for adversarial examples with objectives,” arXiv.org, April 4, 2019. [Online]. Available: https://arxiv.org/abs/1801.00349


4. M. Fredrikson, S. Jha, and T. Ristenpart, “Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures,” CCS ’15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, pp. 1322–1333. [Online]. Available: https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf


5. M. James, “Adversarial attacks on voice input,” I Programmer, Jan. 31, 2018. [Online]. Available: https://www.i-programmer.info/news/105-artificial-intelligence/11515-adversarial-attacks-on-voice-input.html


6. G. Ateniese et al., “Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers,” arXiv.org, June 19, 2013. [Online]. Available: https://arxiv.org/abs/1306.4447


7. A. Polyakov, “How to attack Machine Learning (Evasion, Poisoning, Inference, Trojans, Backdoors),” Towards Data Science, August 6, 2019. [Online]. Available: https://towardsdatascience.com/how-to-attack-machine-learning-evasion-poisoning-inference-trojans-backdoors-a7cb5832595c


8. K. Eykholt et al., “Robust physical-world attacks on deep learning models,” arXiv.org, April 10, 2018. [Online]. Available: https://arxiv.org/abs/1707.08945


9. M. James, “Adversarial attacks on voice input,” I Programmer, Jan. 31, 2018. [Online]. Available: https://www.i-programmer.info/news/105-artificial-intelligence/11515-adversarial-attacks-on-voice-input.html


10. A. Dorschel, “Rethinking data privacy: The impact of machine learning,” Medium, April 24, 2019. [Online]. Available: https://medium.com/luminovo/data-privacy-in-machine-learning-a-technical-deep-dive-f7f0365b1d60


11. M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “A general framework for adversarial examples with objectives,” arXiv.org, April 4, 2019. [Online]. Available: https://arxiv.org/abs/1801.00349

 

 

Irrelevant Data, Irresponsible Outcomes


1. M. Simon, “HP looking into claim webcams can’t see black people,” CNN.com, Dec. 23, 2009. [Online]. Available: http://www.cnn.com/2009/TECH/12/22/hp.webcams/index.html


2. B. Barrett, “Lawmakers can’t ignore facial recognition’s bias anymore,” Wired, July 26, 2018. [Online]. Available: https://www.wired.com/story/amazon-facial-recognition-congress-bias-law-enforcement/


3. P. Egan, “Data glitch was apparent factor in false fraud charges against jobless claimants,” Detroit Free Press, July 30, 2017. [Online]. Available: https://www.freep.com/story/news/local/michigan/2017/07/30/fraud-charges-unemployment-jobless-claimants/516332001/


4. S. Mullainathan, “Biased algorithms are easier to fix than biased people,” New York Times, Dec. 6, 2019. [Online]. Available: https://www.nytimes.com/2019/12/06/business/algorithm-bias-fix.html?searchResultPosition=1


5. Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, “Dissecting racial bias in an algorithm used to manage the health of populations,” Science, vol. 366, no. 6464, pp. 447-453, Oct. 25, 2019. [Online]. Available: https://science.sciencemag.org/content/366/6464/447


6. K. Hao, “This is how AI bias really happens—and why it’s so hard to fix,” MIT Technology Review, Feb. 4, 2020. [Online]. Available: https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/


7. Data & Society, “Algorithmic accountability: A primer,” Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality, Prepared for the Congressional Progressive Caucus, April 18, 2018. [Online]. Available: https://datasociety.net/wp-content/uploads/2018/04/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf


8. N. Barrowman, “Why data is never raw,” New Atlantis, Summer/Fall 2018. [Online]. Available: https://www.thenewatlantis.com/publications/why-data-is-never-raw


9. K. Hao, “This is how AI bias really happens—and why it’s so hard to fix,” MIT Technology Review, Feb. 4, 2020. [Online]. Available: https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/


10. K. Crawford and R. Calo, “There is a blind spot in AI research,” Nature, Oct. 13, 2016. [Online]. Available: https://www.nature.com/articles/538311a

 

 

You Told Me to Do This


1. N. V. Patel, “Why doctors aren’t afraid of better, more efficient AI diagnosing cancer,” Daily Beast, Dec. 22, 2017. [Online]. Available: https://www.thedailybeast.com/why-doctors-arent-afraid-of-better-more-efficient-ai-diagnosing-cancer


2. T. Murphy VII, “The first level of Super Mario Bros. is easy with lexicographic orderings and time travel… after that it gets a little tricky,” April 1, 2013. [Online]. Available: http://www.cs.cmu.edu/~tom7/mario/mario.pdf


3. J. Vincent, “OpenAI has published the text-generating AI it said was too dangerous to share,” The Verge, November 7, 2019. [Online]. Available: https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters


4. “GPT-2: 1.5B Release,” OpenAI, November 5, 2019. [Online]. Available: https://openai.com/blog/gpt-2-1-5b-release/


5. D. Amodei et al., “Concrete problems in AI safety,” arXiv.org, July 25, 2016. [Online]. Available: https://arxiv.org/pdf/1606.06565.pdf


6. Data & Society, “Algorithmic accountability: A primer,” Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality, Prepared for the Congressional Progressive Caucus, April 18, 2018. [Online]. Available: https://datasociety.net/wp-content/uploads/2018/04/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf


7. A. Narayanan, “21 fairness definitions and their politics,” presented at the Conference on Fairness, Accountability, and Transparency, Feb. 23, 2018. [Online]. Available: https://fairmlbook.org/tutorial2.html


8. “College Board Announces Improved Admissions Resource,” College Board, August 27, 2019. [Online]. Available: https://www.collegeboard.org/releases/2019/college-board-announces-improved-admissions-resource

 

 

Feeding the Feedback Loop


1. A. Jenkins, “This town is fining drivers to fight ‘horrific’ traffic from Google Maps and Waze,” Travel + Leisure, Dec. 26, 2017. [Online]. Available: https://www.travelandleisure.com/travel-news/leonia-waze-google-maps-fines


2. A. Feng and S. Wu, “The myth of the impartial machine,” Parametric Press, no. 01 (Science + Society), May 1, 2019. [Online]. Available: https://parametric.press/issue-01/the-myth-of-the-impartial-machine/


3. E. Lacey, “The toxic potential of YouTube’s feedback loop,” Wired, July 13, 2019. [Online]. Available: https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/


4. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


5. M. Heid, “The unsettling ways tech is changing your personal reality,” Elemental, Oct. 3, 2019. [Online]. Available: https://elemental.medium.com/technology-is-fundamentally-changing-the-ways-you-think-and-feel-b4bbfdefc2ee


6. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


7. W. Oremus, “Who controls your Facebook feed,” Slate, Jan. 3, 2016. [Online]. Available: http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html


8. “Tech experts: What you post online could be directly impacting your insurance coverage,” CBS New York, March 21, 2019. [Online]. Available: https://newyork.cbslocal.com/2019/03/21/online-posting-dangerous-selfies-insurance-coverage/


9. R. Deller, “Book review: Automating inequality: How high-tech tools profile, police and punish the poor by Virginia Eubanks,” LSE Review of Books blog, July 2, 2018. [Online]. Available: https://blogs.lse.ac.uk/lsereviewofbooks/2018/07/02/book-review-automating-inequality-how-high-tech-tools-profile-police-and-punish-the-poor-by-virginia-eubanks/


10. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues

 

 

A Special Case: AI Arms Race


1. “How artificial intelligence could increase the risk of nuclear war,” The RAND blog, April 23, 2018. [Online]. Available: https://www.rand.org/blog/articles/2018/04/how-artificial-intelligence-could-increase-the-risk.html


2. “How artificial intelligence could increase the risk of nuclear war,” The RAND blog, April 23, 2018. [Online]. Available: https://www.rand.org/blog/articles/2018/04/how-artificial-intelligence-could-increase-the-risk.html


3. P. Scharre, “Killer apps: The real dangers of an AI arms race,” Foreign Affairs, March/April 2019. [Online]. Available: https://www.foreignaffairs.com/articles/2019-04-16/killer-apps

 

 

Testing in the Wild


1. A. MacGillis, “The case against Boeing,” New Yorker, Nov. 11, 2019. [Online]. Available: https://www.newyorker.com/magazine/2019/11/18/the-case-against-boeing


2. N. Sonnad, “A flawed algorithm led the UK to deport thousands of students,” Quartz, May 3, 2018. [Online]. Available: https://qz.com/1268231/a-toeic-test-led-the-uk-to-deport-thousands-of-students/


3. “Ahsan v The Secretary of State for the Home Department (Rev 1) [2017] EWCA Civ 2009 (05 December 2017),” British and Irish Legal Information Institute, December 5, 2017. [Online]. Available: http://www.bailii.org/ew/cases/EWCA/Civ/2017/2009.html


4. P. Wu, “Test your machine learning algorithm with metamorphic testing,” Medium, Nov. 13, 2017. [Online]. Available: https://medium.com/trustableai/testing-ai-with-metamorphic-testing-61d690001f5c


5. I. Goodfellow and N. Papernot, “The challenge of verification and testing of machine learning,” Cleverhans blog, June 14, 2017. [Online]. Available: http://www.cleverhans.io/security/privacy/ml/2017/06/14/verification.html


6. R. Meudec, “Introducing tf-explain, interpretability for TensorFlow 2.0,” Sicara blog, July 30, 2019. [Online]. Available: https://blog.sicara.com/tf-explain-interpretability-tensorflow-2-9438b5846e35


7. “Fit interpretable machine learning models. Explain blackbox machine learning,” GitHub. Accessed March 13, 2020. [Online]. Available: https://github.com/Microsoft/interpret


8. Y. Sun et al., “Structural test coverage criteria for deep neural networks,” in 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings, 2019. [Online]. Available: https://www.kroening.com/papers/emsoft2019.pdf


9. L. M. Strickhart and H. N. J. Lee, “Show your work: Machine learning explainer tools and their use in artificial intelligence assurance,” The MITRE Corporation, McLean, VA, June 2019, unpublished.


10. D. Sculley et al., “Machine learning: The high interest credit card of technical debt,” in SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop). Accessed March 16, 2020. [Online]. Available: https://ai.google/research/pubs/pub43146


11. A. Madan, “3 practical ways to future-proof your IoT devices,” IoT Times, July 2, 2019. [Online]. Available: https://iot.eetimes.com/3-practical-ways-to-future-proof-your-iot-devices/


12. A. Gonfalonieri, “Why machine learning models degrade in production,” Towards Data Science, July 25, 2019. [Online]. Available: https://towardsdatascience.com/why-machine-learning-models-degrade-in-production-d0f2108e9214


13. D. Sculley et al., “Machine learning: The high interest credit card of technical debt,” in SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop). Accessed March 16, 2020. [Online]. Available: https://ai.google/research/pubs/pub43146


14. D. Sculley et al., “Machine learning: The high interest credit card of technical debt,” in SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop). Accessed March 16, 2020. [Online]. Available: https://ai.google/research/pubs/pub43146


15. R. Potember, “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD,” Defense Technical Information Center, Jan. 1, 2017. [Online]. Available: https://apps.dtic.mil/docs/citations/AD1024432


16. J. Zittrain, “The hidden costs of automated thinking,” New Yorker, July 23, 2019. [Online]. Available: https://www.newyorker.com/tech/annals-of-technology/the-hidden-costs-of-automated-thinking


17. N. Carne, “Blaming the driver in a ‘driverless’ car,” Cosmos, Oct. 29, 2019. [Online]. Available: https://cosmosmagazine.com/technology/blaming-the-driver-in-a-driverless-car


18. S. Captain, “Humans were to blame in Google self-driving car crash, police say,” Fast Company, May 4, 2018. [Online]. Available: https://www.fastcompany.com/40568609/humans-were-to-blame-in-google-self-driving-car-crash-police-say


19. J. Stewart, “Tesla’s autopilot was involved in another deadly car crash,” Wired, March 30, 2018. [Online]. Available: https://www.wired.com/story/tesla-autopilot-self-driving-crash-california/


20. J. Stewart, “Why Tesla’s autopilot can’t see a stopped firetruck,” Wired, Aug. 27, 2018. [Online]. Available: https://www.wired.com/story/tesla-autopilot-why-crash-radar/


21. M. McFarland, “Uber self-driving car kills pedestrian in first fatal autonomous crash,” CNN Business, March 19, 2018. [Online]. Available: https://money.cnn.com/2018/03/19/technology/uber-autonomous-car-fatal-crash/index.html


22. A. MacGillis, “The case against Boeing,” New Yorker, Nov. 11, 2019. [Online]. Available: https://www.newyorker.com/magazine/2019/11/18/the-case-against-boeing


23. S. M. Casner and E. L. Hutchins, “What do we tell the drivers? Toward minimum driver training standards for partially automated cars,” Journal of Cognitive Engineering and Decision Making, March 8, 2019. [Online]. Available: https://journals.sagepub.com/doi/full/10.1177/1555343419830901


24. W. Langewiesche, “The human factor,” Vanity Fair, Oct. 2014. [Online]. Available: https://www.vanityfair.com/news/business/2014/10/air-france-flight-447-crash


25. “A320, vicinity Tel Aviv Israel, 2012,” SKYbrary. Accessed March 11, 2020. [Online]. Available: https://www.skybrary.aero/index.php/A320,_vicinity_Tel_Aviv_Israel,_2012


26. S. Gibbs, “Tesla Model S cleared by auto safety regulator after fatal Autopilot crash,” Guardian, Jan. 20, 2017. [Online]. Available: https://www.theguardian.com/technology/2017/jan/20/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash


27. C. Ross and I. Swetlitz, “IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show,” STATnews, July 25, 2018. [Online]. Available: https://www.statnews.com/wp-content/uploads/2018/09/IBMs-Watson-recommended-unsafe-and-incorrect-cancer-treatments-STAT.pdf


28. S. Fussell, “Pearson Embedded a ‘Social-Psychological’ Experiment in Students’ Educational Software [Updated],” Gizmodo, April 18, 2018. [Online]. Available: https://gizmodo.com/pearson-embedded-a-social-psychological-experiment-in-s-1825367784


29. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf

 

 

Government Dependence on Black Box Vendors


1. S. Corbett-Davies, E. Pierson, A. Feller, and S. Goel, “A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear,” Washington Post, Oct. 17, 2016. [Online]. Available: https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/?noredirect=on&utm_term=.a9cfb19a549d


2. R. Wexler, “When a computer program keeps you in jail,” New York Times, June 13, 2017. [Online]. Available: https://www.nytimes.com/2017/06/13/opinion/how-computers-are-harming-criminal-justice.html


3. C. Langford, “Houston Schools Must Face Teacher Evaluation Lawsuit,” Courthouse News Service, May 8, 2017. [Online]. Available: https://www.courthousenews.com/houston-schools-must-face-teacher-evaluation-lawsuit/

 

 

Clear as Mud

1. A. Yoo, “UPS: Driving performance by optimizing driver behavior,” Harvard Business School Digital Initiative, April 5, 2017. [Online]. Available: https://digital.hbs.edu/platform-digit/submission/ups-driving-performance-by-optimizing-driver-behavior/


2. K. Hill, “Facebook recommended that this psychiatrist’s patients friend each other,” Splinternews, Aug. 29, 2016. [Online]. Available: https://splinternews.com/facebook-recommended-that-this-psychiatrists-patients-f-1793861472


3. B. Khaleghi, “The what of explainable AI,” Element AI, Sept. 3, 2019. [Online]. Available: https://www.elementai.com/news/2019/the-what-of-explainable-ai


4. C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” arXiv.org, Sept. 22, 2019. [Online]. Available: https://arxiv.org/abs/1811.10154


5. P. L. McDermott, “Human-machine teaming systems engineering guide,” The MITRE Corporation, Dec. 2018. [Online]. Available: https://www.mitre.org/publications/technical-papers/human-machine-teaming-systems-engineering-guide


6. D. Gunning, “Explainable artificial intelligence (XAI),” Defense Advanced Research Projects Agency, Nov. 2017. [Online]. Available: https://www.darpa.mil/attachments/XAIProgramUpdate.pdf


7. Z. C. Lipton, “The mythos of model interpretability,” arXiv.org, March 6, 2017. [Online]. Available: https://arxiv.org/abs/1606.03490


8. C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” arXiv.org, Sept. 22, 2019. [Online]. Available: https://arxiv.org/abs/1811.10154


9. Z. C. Lipton, “The mythos of model interpretability,” arXiv.org, March 6, 2017. [Online]. Available: https://arxiv.org/abs/1606.03490


10. C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” arXiv.org, Sept. 22, 2019. [Online]. Available: https://arxiv.org/abs/1811.10154

 

 

In AI We Overtrust


1. P. Robinette, W. Li, R. Allen, A. M. Howard, and A. R. Wagner, “Overtrust of robots in emergency evacuation scenarios,” presented at 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, 2016, pp. 101-108. [Online]. Available: https://www.cc.gatech.edu/~alanwags/pubs/Robinette-HRI-2016.pdf


2. Georgia Tech, “In emergencies, should you trust a robot?” YouTube. Accessed March 13, 2020. [Online]. Available: https://www.youtube.com/watch?v=frr6cVBQPXQ


3. M. Heid, “The unsettling ways tech is changing your personal reality,” Elemental, Oct. 3, 2019. [Online]. Available: https://elemental.medium.com/technology-is-fundamentally-changing-the-ways-you-think-and-feel-b4bbfdefc2ee


4. M. Vazquez, A. May, A. Steinfeld, and W.-H. Chen, “A deceptive robot referee in a multiplayer gaming environment,” in Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS), May 2011, pp. 204-211. [Online]. Available: https://www.ri.cmu.edu/publications/a-deceptive-robot-referee-in-a-multiplayer-gaming-environment/


5. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


6. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


7. L. Hansen, “8 drivers who blindly followed their GPS into disaster,” The Week, May 7, 2013. [Online]. Available: https://theweek.com/articles/464674/8-drivers-who-blindly-followed-gps-into-disaster


8. P. Madhavan and D. A. Wiegmann, “Similarities and differences between human-human and human-automation trust: An integrative review,” Theoretical Issues in Ergonomics Science, vol. 8, no. 4, pp. 277-301, 2007.


9. “Appeal to authority,” Logically Fallacious. Accessed March 25, 2020. [Online]. Available: https://www.logicallyfallacious.com/logicalfallacies/Appeal-to-Authority


10. M. Chalabi, “Weapons of math destruction: Cathy O’Neil adds up the damage of algorithms,” Guardian, Oct. 27, 2016. [Online]. Available: https://www.theguardian.com/books/2016/oct/27/cathy-oneil-weapons-of-math-destruction-algorithms-big-data


11. S. M. Casner and E. L. Hutchins, “What do we tell the drivers? Toward minimum driver training standards for partially automated cars,” Journal of Cognitive Engineering and Decision Making, March 8, 2019. [Online]. Available: https://journals.sagepub.com/doi/full/10.1177/1555343419830901


12. Data & Society, “Algorithmic accountability: A primer,” Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality, Prepared for the Congressional Progressive Caucus, April 18, 2018. [Online]. Available: https://datasociety.net/wp-content/uploads/2018/04/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf


13. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


14. B. Aguera y Arcas, “Physiognomy’s new clothes,” Medium, May 6, 2017. [Online]. Available: https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a


15. Synced, “2018 in review: 10 AI failures,” Medium, Dec. 10, 2018. [Online]. Available: https://medium.com/syncedreview/2018-in-review-10-ai-failures-c18faadf5983


16. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


17. S. Levin, “New AI can guess whether you’re gay or straight from a photograph,” Guardian, Sept. 7, 2017. [Online]. Available: https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph


18. Synced, “2018 in review: 10 AI failures,” Medium, Dec. 10, 2018. [Online]. Available: https://medium.com/syncedreview/2018-in-review-10-ai-failures-c18faadf5983

 

 

Lost in Translation: Automation Surprise


1. S. M. Casner and E. L. Hutchins, “What do we tell the drivers? Toward minimum driver training standards for partially automated cars,” Journal of Cognitive Engineering and Decision Making, March 8, 2019. [Online]. Available: https://journals.sagepub.com/doi/full/10.1177/1555343419830901


2. “A320, vicinity Tel Aviv Israel, 2012,” SKYbrary. Accessed on: March 11, 2020. [Online]. Available: https://www.skybrary.aero/index.php/A320,_vicinity_Tel_Aviv_Israel,_2012


3. R. Nieva, “Facebook put cork in chatbots that created a secret language,” CNET, July 31, 2017. [Online]. Available: https://www.cnet.com/news/what-happens-when-ai-bots-invent-their-own-language/


4. N. D. Sarter, D. D. Woods, and C. E. Billings, “Automation surprises,” in G. Salvendy (Ed.), Handbook of Human Factors & Ergonomics (2nd ed., pp. 1926-1943). New York, NY, USA: John Wiley, 1997.


5. S. M. Casner and E. L. Hutchins, “What do we tell the drivers? Toward minimum driver training standards for partially automated cars,” Journal of Cognitive Engineering and Decision Making, March 8, 2019. [Online]. Available: https://journals.sagepub.com/doi/full/10.1177/1555343419830901


6. G. Klein et al., “Ten challenges for making automation a ‘team player’ in joint human-agent activity,” IEEE Intelligent Systems, vol. 19, no. 6, pp. 91-95, Nov./Dec. 2004. [Online]. Available: http://jeffreymbradshaw.net/publications/17._Team_Players.pdf_1.pdf


7. J. B. Lyons, “Being transparent about transparency: A model for human-robot interaction,” in 2013 AAAI Spring Symposium Series, 2013. [Online]. Available: https://www.semanticscholar.org/paper/Being-Transparent-about-Transparency%3A-A-Model-for-Lyons/840080df8a02de6aab098e7eabef84831ac95428


8. D. Woods, “Generic support requirements for cognitive work: laws that govern cognitive work in action,” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 49, pp. 317-321, Sept. 1, 2005. [Online]. Available: https://journals.sagepub.com/doi/10.1177/154193120504900322

 

 

The AI Resistance 


1. “Luddite,” Merriam-Webster. Accessed April 11, 2020. [Online]. Available: https://www.merriam-webster.com/dictionary/Luddite


2. D. Wray, “The companies cleaning the deepest, darkest parts of social media,” Vice, June 26, 2018. [Online]. Available: https://www.vice.com/en_us/article/ywe7gb/the-companies-cleaning-the-deepest-darkest-parts-of-social-media


3. “Why a #Google walkout organizer left Google,” Medium, June 7, 2019. [Online]. Available: https://medium.com/@GoogleWalkout/why-a-googlewalkout-organizer-left-google-26d1e3fbe317


4. S. Romero, “Wielding rocks and knives, Arizonans attack self-driving cars,” New York Times, Dec. 31, 2018. [Online]. Available: https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html


5. D. Simberkoff, “How Facebook’s Cambridge Analytica scandal impacted the intersection of privacy and regulation,” CMS Wire, Aug. 30, 2018. [Online]. Available: https://www.cmswire.com/information-management/how-facebooks-cambridge-analytica-scandal-impacted-the-intersection-of-privacy-and-regulation/


6. “Technology adoption life cycle,” Wikipedia. Accessed March 17, 2020. [Online]. Available: https://en.wikipedia.org/wiki/Technology_adoption_life_cycle


7. M. Anderson, “Useful or creepy? Machines suggest Gmail replies,” AP News, Aug. 30, 2018. [Online]. Available: https://apnews.com/bcc384298fe944e89367e42e20d43f05


8. “House Intelligence Committee hearing on ‘Deepfake’ videos,” C-SPAN, June 13, 2019. [Online]. Available: https://www.c-span.org/video/?461679-1/house-intelligence-committee-hearing-deepfake-videos


9. C. F. Kerry, “Protecting privacy in an AI-driven world,” Brookings, Feb. 10, 2020. [Online]. Available: https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/


10. C. Forrest, “Fear of losing job to AI is the no. 1 cause of stress at work,” TechRepublic, June 6, 2017.  [Online]. Available: https://www.techrepublic.com/article/report-fear-of-losing-job-to-ai-is-the-no-1-cause-of-stress-at-work/


11. S. Browne, Dark Matters: On the Surveillance of Blackness, Durham, NC, USA: Duke University Press Books, 2015. [Online]. Available: https://www.dukeupress.edu/dark-matters


12. A. M. Bedoya, “The color of surveillance: What an infamous abuse of power teaches us about the modern spy era,” Slate, Jan. 18, 2016. [Online]. Available: https://slate.com/technology/2016/01/what-the-fbis-surveillance-of-martin-luther-king-says-about-modern-spying.html


13. M. Cyril, “Watching the Black body,” Electronic Frontier Foundation, Feb. 28, 2019. [Online]. Available: https://www.eff.org/deeplinks/2019/02/watching-black-body


14. P. McCausland, “Self-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk,” NBC News, Nov. 9, 2019. [Online]. Available: https://www.nbcnews.com/tech/tech-news/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281

 

 

Good (Grief!) Governance


1. A. M. Barry-Jester, B. Casselman, and D. Goldstein, “Should prison sentences be based on crimes that haven’t been committed yet?” FiveThirtyEight, Aug. 4, 2015. [Online]. Available: https://fivethirtyeight.com/features/prison-reform-risk-assessment/


2. E. Ongweso, “Google is investigating why it trained facial recognition on ‘dark skinned’ homeless people,” Vice, Oct. 4, 2019. [Online]. Available: https://www.vice.com/en_us/article/43k7yd/google-is-investigating-why-it-trained-facial-recognition-on-dark-skinned-homeless-people


3. J. Stanley, “Secret Service announces test of face recognition system around White House,” ACLU blog, Dec. 4, 2018. [Online]. Available: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/secret-service-announces-test-face-recognition


4. R. Courtland, “Bias detectives: The researchers striving to make algorithms fair,” Nature, June 20, 2018. [Online]. Available: https://www.nature.com/articles/d41586-018-05469-3


5. D. Robinson and L. Koepke, “Stuck in a pattern: Early evidence on ‘predictive policing’ and civil rights,” Upturn, Aug. 2016. [Online]. Available: https://www.upturn.org/reports/2016/stuck-in-a-pattern/


6. “An ethics guidelines global inventory,” Algorithm Watch. Accessed on: Jan. 17, 2020. [Online]. Available: https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/


7. “An ethics guidelines global inventory,” Algorithm Watch. Accessed on: Jan. 17, 2020. [Online]. Available: https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/


8. T. Hagendorff, “The ethics of AI ethics: An evaluation of guidelines,” arXiv.org, Oct. 11, 2019. [Online]. Available: https://arxiv.org/abs/1903.03425


9. R. Vought, “Guidance for regulation of artificial intelligence applications,” Draft memorandum, WhiteHouse.gov. Accessed on: Jan. 21, 2020. [Online]. Available: https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf


10. “Wrestling with AI governance around the world,” Forbes, March 27, 2019. [Online]. Available: https://www.forbes.com/sites/insights-intelai/2019/03/27/wrestling-with-ai-governance-around-the-world/#7d3f84ed1766


11. G. Vyse, “Three American cities have now banned the use of facial recognition technology in local government amid concerns it’s inaccurate and biased,” Governing, July 24, 2019. [Online]. Available: https://www.governing.com/topics/public-justice-safety/gov-cities-ban-government-use-facial-recognition.html


12. P. Martineau, “Cities examine proper—and improper—uses of facial recognition,” Wired, Nov. 10, 2019. [Online]. Available: https://www.wired.com/story/cities-examine-proper-improper-facial-recognition/


13. “Ban facial recognition.” Accessed March 17, 2020. [Online]. Available: https://www.banfacialrecognition.com/map/


14. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


15. R. Courtland, “Bias detectives: The researchers striving to make algorithms fair,” Nature, June 20, 2018. [Online]. Available: https://www.nature.com/articles/d41586-018-05469-3


16. D. Robinson and L. Koepke, “Stuck in a pattern: Early evidence on ‘predictive policing’ and civil rights,” Upturn, Aug. 2016. [Online]. Available: https://www.upturn.org/reports/2016/stuck-in-a-pattern/


17. D. Robinson and L. Koepke, “Stuck in a pattern: Early evidence on ‘predictive policing’ and civil rights,” Upturn, Aug. 2016. [Online]. Available: https://www.upturn.org/reports/2016/stuck-in-a-pattern/

 

 

Just Add (Technical) People


1. J. Spitzer, “IBM’s Watson recommended ‘unsafe and incorrect’ cancer treatments, STAT report finds,” Becker’s Health IT, July 25, 2018. [Online]. Available: https://www.beckershospitalreview.com/artificial-intelligence/ibm-s-watson-recommended-unsafe-and-incorrect-cancer-treatments-stat-report-finds.html


2. A. Liptak, “The US Navy will replace its touchscreen controls with mechanical ones on its destroyers,” The Verge, Aug. 11, 2019. [Online]. Available: https://www.theverge.com/2019/8/11/20800111/us-navy-uss-john-s-mccain-crash-ntsb-report-touchscreen-mechanical-controls


3. T. Simonite, “When it comes to gorillas, Google Photos remains blind,” Wired, Jan. 11, 2018. [Online]. Available: https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/


4. B. Marr, “The AI skills crisis and how to close the gap,” Forbes, June 25, 2018. [Online]. Available: https://www.forbes.com/sites/bernardmarr/2018/06/25/the-ai-skills-crisis-and-how-to-close-the-gap/#6525b57b31f3


5. “NICE cybersecurity workforce framework resource center,” National Institute of Standards and Technology. Accessed March 17, 2020. [Online]. Available: https://www.nist.gov/itl/applied-cybersecurity/nice/nice-cybersecurity-workforce-framework-resource-center


6. S. Anand and T. Bärnighausen, “Health workers at the core of the health system: Framework and research issues,” Global Health Workforce Alliance, 2011. [Online]. Available: https://www.who.int/workforcealliance/knowledge/resources/frameworkandresearch_dec2011/en/


7. Lippincott Solutions, “Interdisciplinary care plans: Teamwork makes the dream work,” Calling the Shots blog, Sept. 6, 2018. [Online]. Available:  http://lippincottsolutions.lww.com/blog.entry.html/2018/09/06/interdisciplinaryca-z601.html


8. M. Mahdizadeh, A. Heydari, and H. K. Moonaghi, “Clinical interdisciplinary collaboration models and frameworks from similarities to differences: A systematic review,” Global Journal of Health Science, vol. 7, no. 6, pp. 170-180, Nov. 2015. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4803863/


9. C. Hagel, “Reagan national defense forum keynote,” Secretary of Defense Speech, Ronald Reagan Presidential Library, Simi Valley, CA, Nov. 15, 2014. [Online]. Available: https://www.defense.gov/Newsroom/Speeches/Speech/Article/606635/


10. “Reports,” National Security Commission on Artificial Intelligence. Accessed March 18, 2020. [Online]. Available: https://www.nscai.gov/reports

 

 

Square Data, Round Problem


1. “Bad data costs United Airlines $1B annually,” Travel Data Daily. Accessed March 16, 2020. [Online]. Available: https://www.traveldatadaily.com/bad-data-costs-united-airlines-1b-annually/


2. B. Vergakis, “The Navy, Air Force and Army collect different data on aircraft crashes. That’s a big problem,” Task & Purpose, Aug. 16, 2018. [Online]. Available: https://taskandpurpose.com/aviation-mishaps-data-collection


3. B. Marr, “How much data do we create every day? The mind-blowing stats everyone should read,” Forbes, May 21, 2018. [Online]. Available: https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/


4. “No AI until the data is fixed,” Wired, Feb. 22, 2019. [Online]. Available: https://www.wired.co.uk/article/no-ai-until-the-data-is-fixed


5. D. Robinson and L. Koepke, “Stuck in a pattern: Early evidence on ‘predictive policing’ and civil rights,” Upturn, Aug. 2016. [Online]. Available: https://www.upturn.org/reports/2016/stuck-in-a-pattern/


6. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues

 

 

My 8-Track Still Works So What’s the Issue?


1. U.S. Government Accountability Office, “Information technology: Federal agencies need to address aging legacy systems,” GAO-16-696T, May 25, 2016. [Online]. Available: https://www.gao.gov/products/GAO-16-696T


2. D. Cassel, “COBOL is everywhere. Who will maintain it?” The New Stack, May 6, 2017. [Online]. Available: https://thenewstack.io/cobol-everywhere-will-maintain/


3. J. Uchill, “How did the government’s technology get so bad?” The Hill, Dec. 13, 2016. [Online]. Available: https://thehill.com/policy/technology/310271-how-did-the-governments-technology-get-so-bad


4. B. Balter, “19 reasons why technologists don’t want to work at your government agency,” April 21, 2015. [Online]. Available: https://ben.balter.com/2015/04/21/why-technologists-dont-want-to-work-at-your-agency/


5. U.S. Government Accountability Office, “Information technology: Federal agencies need to address aging legacy systems,” GAO-16-696T, May 25, 2016. [Online]. Available: https://www.gao.gov/products/GAO-16-696T


6. D. Cassel, “COBOL is everywhere. Who will maintain it?” The New Stack, May 6, 2017. [Online]. Available: https://thenewstack.io/cobol-everywhere-will-maintain/

 

 

Lessons Learned

Hold AI to a Higher Standard

1. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


2. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


3. S. Gibbs, “Tesla Model S cleared by auto safety regulator after fatal Autopilot crash,” Guardian, Jan. 20, 2017. [Online]. Available: https://www.theguardian.com/technology/2017/jan/20/tesla-model-s-cleared-auto-safety-regulator-after-fatal-autopilot-crash


4. D. Tomchek and S. Krawlzik, “Looking beyond the technical to fill America’s cyber workforce gap,” Nextgov, Sept. 27, 2019. [Online]. Available: https://www.nextgov.com/ideas/2019/09/looking-beyond-technical-fill-americas-cyber-workforce-gap/160222/

 

 

It’s OK to Say No to Automation

1. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, “Seven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,” IEEE Intelligent Systems, Nov./Dec. 2014. [Online]. Available: http://www.jeffreymbradshaw.net/publications/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf


2. “Ethics & algorithms toolkit.” Accessed March 13, 2020. [Online]. Available: http://ethicstoolkit.ai/

 

 

AI Challenges Are Multidisciplinary, so They Require a Multidisciplinary Team


1. S. Ferro, “Here’s why facial recognition tech can’t figure out black people,” HuffPost, March 2, 2016. [Online]. Available: https://www.huffpost.com/entry/heres-why-facial-recognition-tech-cant-figure-out-black-people_n_56d5c2b1e4b0bf0dab3371eb


2. S. J. Freedberg, “’Guess what, there’s a cost for that’: Getting cloud & AI right,” Breaking Defense, Nov. 26, 2019. [Online]. Available: https://breakingdefense.com/2019/11/guess-what-theres-a-cost-for-that-getting-cloud-ai-right/


3. A. Campolo et al., AI Now Report 2017. New York, NY, USA: AI Now Institute, 2017. [Online]. Available:  https://ainowinstitute.org/AI_Now_2017_Report.pdf

 

 

Incorporate Privacy, Civil Liberties, and Security from the Beginning


1. R. V. Yampolskiy and M. S. Spellchecker, “Artificial intelligence safety and cybersecurity: A timeline of AI failures,” arXiv.org. Accessed March 25, 2020. [Online]. Available: https://arxiv.org/ftp/arxiv/papers/1610/1610.07997.pdf


2. J. Rotner, “The person at the other end of the data,” Knowledge-Driven Enterprise blog, The MITRE Corporation, Oct. 1, 2019. [Online]. Available: https://kde.mitre.org/blog/2019/10/01/the-person-at-the-other-end-of-the-data/


3. J. Whittlestone, A. Alexandrova, R. Nyrup, and S. Cave, “The role and limits of principles in AI ethics: Towards a focus on tensions,” presented at AIES ’19, Jan. 27–28, 2019, Honolulu, HI, USA. [Online]. Available: https://www.researchgate.net/publication/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions/link/5d269de0a6fdcc2462d41592/download


4. I. Goodfellow, P. McDaniel, and N. Papernot, “Making machine learning robust against adversarial inputs,” Communications of the ACM, vol. 61, no. 7, pp. 56-66, July 2018. [Online]. Available: https://cacm.acm.org/magazines/2018/7/229030-making-machine-learning-robust-against-adversarial-inputs/fulltext


5. R. V. Yampolskiy and M. S. Spellchecker, “Artificial intelligence safety and cybersecurity: A timeline of AI failures,” arXiv.org. Accessed March 25, 2020. [Online]. Available: https://arxiv.org/ftp/arxiv/papers/1610/1610.07997.pdf


6. “General data protection regulation,” European Union. Accessed March 25, 2020. [Online]. Available: https://eugdpr.com/


7. D. Miralis and P. Gibson, “Australia: Data protection 2019,” ICLG.com, March 7, 2019. [Online]. Available: https://iclg.com/practice-areas/data-protection-laws-and-regulations/australia


8. “Data protection laws of the world: New Zealand,” DLA Piper. Accessed March 16, 2020. [Online]. Available: https://www.dlapiperdataprotection.com/index.html?t=law&c=NZ


9. G. Vyse, “Three American cities have now banned the use of facial recognition technology in local government amid concerns it’s inaccurate and biased,” Governing.com, July 24, 2019. [Online]. Available: https://www.governing.com/topics/public-justice-safety/gov-cities-ban-government-use-facial-recognition.html


10. L. Hautala, “California’s new data privacy law the toughest in the US,” CNET.com, June 29, 2018. [Online]. Available: https://www.cnet.com/news/californias-new-data-privacy-law-the-toughest-in-the-us/

 

 

Involve the Communities Affected by the AI


1. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


2. “Diverse Voices: A How-To Guide for Facilitating Inclusiveness in Tech Policy.” Accessed April 8, 2020. [Online]. Available: https://techpolicylab.uw.edu/project/diverse-voices/


3. A. Campolo et al., AI Now Report 2017. New York, NY, USA: AI Now Institute, 2017. [Online]. Available:  https://ainowinstitute.org/AI_Now_2017_Report.pdf


4. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


5. A. Campolo et al., AI Now Report 2017. New York, NY, USA: AI Now Institute, 2017. [Online]. Available:  https://ainowinstitute.org/AI_Now_2017_Report.pdf


6. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf

 

 

Plan to Fail


1. “Benjamin Franklin quotable quote,” Goodreads. Accessed March 16, 2020. [Online]. Available: https://www.goodreads.com/quotes/460142-if-you-fail-to-plan-you-are-planning-to-fail


2. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, “Seven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,” IEEE Intelligent Systems, Nov./Dec. 2014. [Online]. Available: http://www.jeffreymbradshaw.net/publications/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf


3. M. Baker and D. Gates, “Lack of redundancies on Boeing 737 MAX system baffles some involved in developing the jet,” Seattle Times, March 27, 2019. [Online]. Available: https://www.seattletimes.com/business/boeing-aerospace/a-lack-of-redundancies-on-737-max-system-has-baffled-even-those-who-worked-on-the-jet/


4. E. Lacey, “The toxic potential of YouTube’s feedback loop,” Wired, July 13, 2019. [Online]. Available: https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/


5. D. Amodei et al., “Concrete problems in AI safety,” arXiv.org, July 25, 2016. [Online]. Available: https://arxiv.org/pdf/1606.06565.pdf

 

 

Ask for Help: Hire a Villain


1. “The Netflix Simian Army,” The Netflix Tech Blog, July 19, 2011. [Online]. Available: https://netflixtechblog.com/the-netflix-simian-army-16e57fbab116


2. C. A. Cois, “DevOps case study: Netflix and the chaos monkey,” DevOps blog, Software Engineering Institute, April 30, 2015. [Online]. Available: https://insights.sei.cmu.edu/devops/2015/04/devops-case-study-netflix-and-the-chaos-monkey.html


3. “White-hat,” Your Dictionary. Accessed March 13, 2020. [Online]. Available: https://www.yourdictionary.com/white-hat


4. E. Tittel and E. Follis, “How to become a white hat hacker,” Business News Daily, June 17, 2019. [Online]. Available: https://www.businessnewsdaily.com/10713-white-hat-hacker-career.html


5. K. Leswing, “Apple hired the hackers who created the first Mac firmware virus,” Business Insider, Feb. 3, 2016. [Online]. Available: https://www.businessinsider.com/apple-hired-the-hackers-who-created-the-first-mac-firmware-virus-2016-2


6. HackerOne, “What was it like to hack the Pentagon?” h1 blog, June 17, 2016. [Online]. Available: https://www.hackerone.com/blog/hack-the-pentagon-results


7. J. Talamantes, “What is red teaming and why do I need it?” RedTeam blog. Accessed March 16, 2020. [Online]. Available: https://www.redteamsecure.com/what-is-red-teaming-and-why-do-i-need-it-2/

 

 

Use Math to Reduce Bad Outcomes Caused by Math


1. K. Hao, “This is how AI bias really happens—and why it’s so hard to fix,” MIT Technology Review, Feb. 4, 2020. [Online]. Available: https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/


2. A. Feng and S. Wu, “The myth of the impartial machine,” Parametric Press, no. 01 (Science + Society), May 1, 2019. [Online]. Available: https://parametric.press/issue-01/the-myth-of-the-impartial-machine/


3. “AI fairness 360 open source toolkit,” IBM Research Trusted AI. Accessed March 13, 2020. [Online]. Available: http://aif360.mybluemix.net/


4. “Bias and fairness audit toolkit,” GitHub. Accessed March 13, 2020. [Online]. Available: https://github.com/dssg/aequitas


5. “A Python package that implements a variety of algorithms that mitigate unfairness in supervised machine learning,” GitHub. Accessed March 13, 2020. [Online]. Available: https://github.com/Microsoft/fairlearn


6. “What-if tool,” GitHub. Accessed March 13, 2020. [Online]. Available: https://pair-code.github.io/what-if-tool/


7. “Facets,” GitHub. Accessed March 13, 2020. [Online]. Available: https://pair-code.github.io/facets/


8. T. Bolukbasi, K. Chang, J. Zou, V. Saligrama, and A. Kalai, “Man is to computer programmer as woman is to homemaker? Debiasing word embeddings,” arXiv.org, July 21, 2016. [Online]. Available: https://arxiv.org/abs/1607.06520


9. J. Zhao, T. Wang, M. Yatskar, V. Ordonez, and K. Chang, “Men also like shopping: Reducing gender bias amplification using corpus-level constraints,” arXiv.org, July 29, 2017. [Online]. Available: https://arxiv.org/pdf/1707.09457.pdf


10. A. Feng and S. Wu, “The myth of the impartial machine,” Parametric Press, no. 01 (Science + Society), May 1, 2019. [Online]. Available: https://parametric.press/issue-01/the-myth-of-the-impartial-machine/


11. D. Sculley et al., “Hidden technical debt in machine learning systems,” in Advances in Neural Information Processing Systems 28 (NIPS 2015). Accessed March 16, 2020. [Online]. Available: https://papers.nips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf


12. A. Feng and S. Wu, “The myth of the impartial machine,” Parametric Press, no. 01 (Science + Society), May 1, 2019. [Online]. Available: https://parametric.press/issue-01/the-myth-of-the-impartial-machine/


13. D. Sculley et al., “Hidden technical debt in machine learning systems,” in Advances in Neural Information Processing Systems 28 (NIPS 2015). Accessed March 16, 2020. [Online]. Available: https://papers.nips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf


14. Z. Rogers, “Have strategists drunk the ‘AI race’ Kool-Aid?” War on the Rocks, June 4, 2019. [Online]. Available: https://warontherocks.com/2019/06/have-strategists-drunk-the-ai-race-kool-aid/

 

 

Make Our Assumptions Explicit


1. J. Stoyanovich and B. Howe, “Follow the data! Algorithmic transparency starts with data transparency,” Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, Nov. 27, 2018. [Online]. Available: https://ai.shorensteincenter.org/ideas/2018/11/26/follow-the-data-algorithmic-transparency-starts-with-data-transparency


2. T. Gebru et al., “Datasheets for datasets,” arXiv.org, Jan. 14, 2020. [Online]. Available: https://arxiv.org/abs/1803.09010


3. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


4. M. Mitchell et al., “Model cards for model reporting,” arXiv.org, Jan. 14, 2019. [Online]. Available: https://arxiv.org/abs/1810.03993


5. “About Us,” Partnership On AI. Accessed May 27, 2020. [Online]. Available: https://www.partnershiponai.org/about/


6. “Deployed Examples,” Partnership On AI. Accessed May 27, 2020. [Online]. Available: https://www.partnershiponai.org/about-ml/#examples

 

 

Try Human-AI Couples Counseling


1. J. M. Bradshaw, R. Hoffman, M. Johnson, and D. D. Woods, “The seven deadly myths of ‘autonomous systems,’” IEEE Intelligent Systems, vol. 28, no. 3, pp. 54-61, May 2013. [Online]. Available: https://ieeexplore.ieee.org/document/6588858


2. J. M. Bradshaw, R. Hoffman, M. Johnson, and D. D. Woods, “The seven deadly myths of ‘autonomous systems,’” IEEE Intelligent Systems, vol. 28, no. 3, pp. 54-61, May 2013. [Online]. Available: https://ieeexplore.ieee.org/document/6588858


3. S. M. Casner and E. L. Hutchins, “What do we tell the drivers? Toward minimum driver training standards for partially automated cars,” Journal of Cognitive Engineering and Decision Making, March 8, 2019. [Online]. Available: https://journals.sagepub.com/doi/full/10.1177/1555343419830901


4. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, “Seven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,” IEEE Intelligent Systems, Nov./Dec. 2014. [Online]. Available: http://www.jeffreymbradshaw.net/publications/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf


5. J. M. Bradshaw, R. Hoffman, M. Johnson, and D. D. Woods, “The seven deadly myths of ‘autonomous systems,’” IEEE Intelligent Systems, vol. 28, no. 3, pp. 54-61, May 2013. [Online]. Available: https://ieeexplore.ieee.org/document/6588858


6. M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich, and D. D. Woods, “Seven cardinal virtues of human-machine teamwork: Examples from the DARPA robotic challenge,” IEEE Intelligent Systems, Nov./Dec. 2014. [Online]. Available: http://www.jeffreymbradshaw.net/publications/56.%20Human-Robot%20Teamwork_IEEE%20IS-2014.pdf


7. G. Klein et al., “Ten challenges for making automation a ‘team player’ in joint human-agent activity,” IEEE Intelligent Systems, vol. 19, no. 6, pp. 91-95, Nov./Dec. 2004. [Online]. Available: http://jeffreymbradshaw.net/publications/17._Team_Players.pdf_1.pdf


8. W. Lawless, R. Mittu, D. Sofge, and L. Hiatt, “Artificial intelligence, autonomy, and human-machine teams—Interdependence, context, and explainable AI,” AI Magazine, vol. 40, no. 3, pp. 5-13, 2019.


9. “A framework for discussing trust in increasingly autonomous systems,” The MITRE Corporation, June 2017. [Online]. Available: https://www.mitre.org/sites/default/files/publications/17-2432-framework-discussing-trust-increasingly-autonomous-systems.pdf

 

 

Offer the User Choices


1. M. Kearns, “The ethical algorithm,” Carnegie Council for Ethics in International Affairs, Nov. 6, 2019. [Online]. Available: https://www.carnegiecouncil.org/studio/multimedia/20191106-the-ethical-algorithm-michael-kearns


2. J. Stoyanovich and B. Howe, “Follow the data! Algorithmic transparency starts with data transparency,” Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, Nov. 27, 2018. [Online]. Available: https://ai.shorensteincenter.org/ideas/2018/11/26/follow-the-data-algorithmic-transparency-starts-with-data-transparency


3. L. M. Strickhart and H. N. J. Lee, “Show your work: Machine learning explainer tools and their use in artificial intelligence assurance,” The MITRE Corporation, McLean, VA, June 2019, unpublished.


4. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues

 

 

Promote Better Adoption through Gameplay

1. N. B. Sarter and D. D. Woods, “How in the world did I ever get into that mode? Mode error and awareness in supervisory control,” Human Factors, vol. 37, no. 1, pp. 5-19, 1995.


2. “Virtuous cycle of AI: Build good product, get more users, collect more data, build better product, get more users, collect more data, etc.,” in A. Ng, AI Transformation Playbook: How to Lead Your Company into the AI Era, Landing AI, Dec. 13, 2018. [Online]. Available: https://landing.ai/ai-transformation-playbook/


3. J. Whittlestone, A. Alexandrova, R. Nyrup, and S. Cave, “The role and limits of principles in AI ethics: Towards a focus on tensions,” presented at AIES ’19, Jan. 27–28, 2019, Honolulu, HI, USA. [Online]. Available: https://www.researchgate.net/publication/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions/link/5d269de0a6fdcc2462d41592/download


4. “Project ExplAIn interim report,” U.K. Information Commissioner’s Office, 2019. [Online]. Available: https://ico.org.uk/about-the-ico/research-and-reports/project-explain-interim-report/


5. “Squad X improves situational awareness, coordination for dismounted units,” Defense Advanced Research Projects Agency, Nov. 30, 2018. [Online]. Available: https://www.darpa.mil/news-events/2018-11-30a


6. DARPAtv, “Squad X experimentation exercise,” YouTube, July 12, 2019. [Online]. Available: https://www.youtube.com/watch?v=DgM7hbCNMmU


7. S. J. Freedberg, “Simulating a super brain: Artificial intelligence in wargames,” Breaking Defense, April 26, 2019. [Online]. Available: https://breakingdefense.com/2019/04/simulating-a-super-brain-artificial-intelligence-in-wargames/


8. B. Jensen, S. Cuomo, and C. Whyte, “Wargaming with Athena: How to make militaries smarter, faster, and more efficient with artificial intelligence,” War on the Rocks, June 5, 2018. [Online]. Available: https://warontherocks.com/2018/06/wargaming-with-athena-how-to-make-militaries-smarter-faster-and-more-efficient-with-artificial-intelligence/


9. J. Whittlestone, A. Alexandrova, R. Nyrup, and S. Cave, “The role and limits of principles in AI ethics: Towards a focus on tensions,” presented at AIES ’19, Jan. 27–28, 2019, Honolulu, HI, USA. [Online]. Available: https://www.researchgate.net/publication/334378492_The_Role_and_Limits_of_Principles_in_AI_Ethics_Towards_a_Focus_on_Tensions/link/5d269de0a6fdcc2462d41592/download

 

 

Monitor the AI’s Impact and Establish Chains of Accountability


1. A. Gonfalonieri, “Why machine learning models degrade in production,” Towards Data Science, July 25, 2019. [Online]. Available: https://towardsdatascience.com/why-machine-learning-models-degrade-in-production-d0f2108e9214


2. A. Gonfalonieri, “Why machine learning models degrade in production,” Towards Data Science, July 25, 2019. [Online]. Available: https://towardsdatascience.com/why-machine-learning-models-degrade-in-production-d0f2108e9214


3. A. Gonfalonieri, “Why machine learning models degrade in production,” Towards Data Science, July 25, 2019. [Online]. Available: https://towardsdatascience.com/why-machine-learning-models-degrade-in-production-d0f2108e9214


4. A. Campolo et al., AI Now Report 2017. New York, NY, USA: AI Now Institute, 2017. [Online]. Available: https://ainowinstitute.org/AI_Now_2017_Report.pdf


5. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


6. J. C. Newman, “Decision points in AI governance,” UC Berkeley Center for Long-Term Cybersecurity, May 5, 2020. [Online]. Available: https://cltc.berkeley.edu/2020/05/05/decision-points-in-ai-governance/


7. T. Hagendorff, “The ethics of AI ethics: An evaluation of guidelines,” arXiv.org, Oct. 11, 2019. [Online]. Available: https://arxiv.org/abs/1903.03425


8. J. C. Newman, “Decision points in AI governance,” UC Berkeley Center for Long-Term Cybersecurity, May 5, 2020. [Online]. Available: https://cltc.berkeley.edu/2020/05/05/decision-points-in-ai-governance/

 

 

Envision Safeguards for AI Advocates


1. R. Sandler, “Amazon, Microsoft, Wayfair: Employees stage internal protests against working with ICE,” Forbes, July 19, 2019. [Online]. Available: https://www.forbes.com/sites/rachelsandler/2019/07/19/amazon-salesforce-wayfair-employees-stage-internal-protests-for-working-with-ice/


2. J. Bhuiyan, “How the Google walkout transformed tech workers into activists,” Los Angeles Times, Nov. 6, 2019. [Online]. Available: https://www.latimes.com/business/technology/story/2019-11-06/google-employee-walkout-tech-industry-activism


3. J. McLaughlin, Z. Dorfman, and S. D. Naylor, “Pentagon intelligence employees raise concerns about supporting domestic surveillance amid protests,” Yahoo News, June 4, 2020. [Online]. Available: https://news.yahoo.com/pentagon-intelligence-employees-raise-concerns-about-supporting-domestic-surveillance-amid-protests-194906537.html


4. J. Menn, “Google fires fifth activist employee in three weeks; complaint filed,” Reuters, Dec. 17, 2019. [Online]. Available: https://www.reuters.com/article/google-unions/google-fires-fifth-activist-employee-in-three-weeks-complaint-filed-idUSL1N28R02L


5. A. Palmer, “Amazon employees plan ‘online walkout’ to protest firings and treatment of warehouse workers,” CNBC, April 16, 2020. [Online]. Available: https://www.cnbc.com/2020/04/16/amazon-employees-plan-online-walkout-over-firings-work-conditions.html


6. J. Eidelson and H. Kanu, “Software startup accused of union-busting will pay ex-employees,” Bloomberg, Nov. 10, 2018. [Online]. Available: https://www.bloomberg.com/news/articles/2018-11-10/software-startup-accused-of-union-busting-will-pay-ex-employees


7. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


8. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues

 

 

Require Objective, Third-party Verification and Validation

1. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


2. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


3. ENERGY STAR homepage. Accessed Jan. 21, 2020. [Online]. Available: https://www.energystar.gov/


4. C. Martin and M. Dent, “How Nestle, Google and other businesses make money by going green,” Los Angeles Times, Sep. 20, 2019. [Online]. Available: https://www.latimes.com/business/story/2019-09-20/how-businesses-profit-from-environmentalism


5. “SafeAI.” Accessed April 2, 2020. [Online]. Available: https://www.forhumanity.center/safeai/


6. A. Campolo et al., AI Now Report 2017. New York, NY, USA: AI Now Institute, 2017. [Online]. Available: https://ainowinstitute.org/AI_Now_2017_Report.pdf


7. J. Stoyanovich and B. Howe, “Follow the data! Algorithmic transparency starts with data transparency,” Shorenstein Center on Media, Politics and Public Policy, Harvard Kennedy School, Nov. 27, 2018. [Online]. Available: https://ai.shorensteincenter.org/ideas/2018/11/26/follow-the-data-algorithmic-transparency-starts-with-data-transparency


8. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


9. Z. C. Lipton, “The doctor just won’t accept that,” arXiv.org, Nov. 24, 2017. [Online]. Available: https://arxiv.org/abs/1711.08037


10. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


11. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


12. Occupational Safety and Health Administration, “OSHA’s Nationally Recognized Testing Laboratory (NRTL) program,” OSHA.gov. Accessed Jan. 30, 2020. [Online]. Available: https://www.osha.gov/dts/otpca/nrtl/

 

 

Entrust Sector-specific Agencies to Establish AI Standards for Their Domains

1. M. Whittaker et al., AI Now Report 2018. New York, NY, USA: AI Now Institute, 2018. [Online]. Available: https://ainowinstitute.org/AI_Now_2018_Report.pdf


2. F. Balamuth et al., “Improving recognition of pediatric severe sepsis in the emergency department: Contributions of a vital sign–based electronic alert and bedside clinician identification,” Annals of Emergency Medicine, vol. 70, no. 6, pp. 759-768.e2, Dec. 2017. [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S0196064417303153


3. G. Siddiqui, “Why doctors reject tools that make their jobs easier,” Scientific American, Oct. 15, 2018. [Online]. Available: https://blogs.scientificamerican.com/observations/why-doctors-reject-tools-that-make-their-jobs-easier/


4. A. M. Barry-Jester, B. Casselman, and D. Goldstein, “Should prison sentences be based on crimes that haven’t been committed yet?” FiveThirtyEight, Aug. 4, 2015. [Online]. Available: https://fivethirtyeight.com/features/prison-reform-risk-assessment/


5. J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine bias,” ProPublica, May 23, 2016. [Online]. Available: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing


6. S. Corbett-Davies, E. Pierson, A. Feller, and S. Goel, “A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear,” Washington Post, Oct. 17, 2016. [Online]. Available: https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/?noredirect=on&utm_term=.a9cfb19a549d


7. “Case of first impression,” Legal Dictionary, March 21, 2017. [Online]. Available: https://legaldictionary.net/case-first-impression/


8. “Fair cross section requirement,” Stephen G. Rodriguez & Partners. Accessed Jan. 21, 2020. [Online]. Available: https://www.lacriminaldefenseattorney.com/legal-dictionary/f/fair-cross-section-requirement/


9. I. Masic, M. Miokovic, and B. Muhamedagic, “Evidence based medicine—new approaches and challenges,” Acta Informatica Medica, vol. 16, no. 4, pp. 219-225, 2008. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3789163/


10. “Hippocratic Oath,” Encyclopaedia Britannica, Dec. 4, 2019. [Online]. Available: https://www.britannica.com/topic/Hippocratic-oath


11. R. Vought, “Guidance for regulation of artificial intelligence applications,” draft memorandum, WhiteHouse.gov. Accessed Jan. 21, 2020. [Online]. Available: https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf


12. G. Vyse, “Three American cities have now banned the use of facial recognition technology in local government amid concerns it’s inaccurate and biased,” Governing, July 24, 2019. [Online]. Available: https://www.governing.com/topics/public-justice-safety/gov-cities-ban-government-use-facial-recognition.html


13. “Algorithms and artificial intelligence: CNIL’s report on the ethical issues,” CNIL [Commission Nationale de l’Informatique et des Libertés], May 25, 2018. [Online]. Available: https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues


14. A. Dafoe, “AI governance: A research agenda,” Future of Humanity Institute, University of Oxford, Oxford, UK, Aug. 27, 2018. [Online]. Available: https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf

Add Your Experience! This site should be a community resource and would benefit from your examples and voices. You can write to us by clicking here.