Smith, Therese Mary
Telliel, Yunus Dogan
Artificial intelligence is being deployed in increasingly autonomous systems that will have to make moral decisions. However, the rapid growth of artificial intelligence is outpacing research on building explainable systems. In this paper, a number of problems surrounding one facet of explainable artificial intelligence, training data, are explored, and possible solutions to these problems are presented. Additionally, the human decision-making process in unavoidable accident scenarios is examined through qualitative analysis of survey results.
Worcester Polytechnic Institute
Interactive Qualifying Project
All authors have granted to WPI a nonexclusive royalty-free license to distribute copies of the work, subject to other agreements. Copyright is held by the author or authors, with all rights reserved, unless otherwise noted.