Faculty Advisors

Smith, Therese Mary

Telliel, Yunus Dogan

Abstract

Artificial intelligence is being deployed in increasingly autonomous systems that will have to make moral decisions. However, the rapid growth of artificial intelligence is outpacing research on building explainable systems. This paper explores several problems surrounding one facet of explainable artificial intelligence: training data. Possible solutions to these problems are presented. Additionally, the human decision-making process in unavoidable accident scenarios is explored through qualitative analysis of survey results.

Publisher

Worcester Polytechnic Institute

Date Accepted

April 2019

Project Type

Interactive Qualifying Project

Accessibility

Unrestricted
