Dissertations and Theses

Date of Award


Document Type


First Advisor

Jie Wei


Abstract

AI diagnostic systems that perform beyond human intelligence have yet to be implemented into society at large in substantive ways. Examples in research of AI that can diagnose cancer, narcolepsy, renal diseases, and cardiac damage abound, yet these systems have yet to be implemented in clinical settings. Although predictively powerful, these diagnostic systems are powered by deep convolutional neural networks that operate as virtual black boxes, and the medical community is unwilling to accept predictions that lack explanation. To this end, Explainable Artificial Intelligence is developing a host of methods that seek to pry open the inner workings of DNNs in order to establish trust and transparency in industries where causality is critical. A new model of machine learning has emerged that pairs predictively powerful AI with explanation interfaces. Explanation interfaces using feature extraction that relies on domain-specific knowledge given a priori have been presented as a popular option to the medical community for image diagnostics. We present a system that uses multiple saliency techniques as an explanation interface. This system has a bi-directional flow of information, allowing expert users to learn from its conclusions while leaving the DNNs free from bias imposed by a priori conditions. Such a system is more advantageous in industries such as HADR and medical diagnostics, where causal modes of understanding are needed for actionable intelligence, and adheres more closely to the goals of explainable artificial intelligence systems and their interfaces.
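To make the idea of a saliency-based explanation interface concrete, here is a minimal sketch of gradient-based saliency under stated assumptions: the model is a toy differentiable scoring function (not the thesis's DNN), and the gradient is estimated by central finite differences rather than backpropagation. The saliency of each input pixel is the magnitude of the score's sensitivity to that pixel, which is what saliency-map techniques visualize to explain a prediction.

```python
import math

def saliency(f, x, eps=1e-5):
    """Saliency of each input to scalar function f at point x:
    the absolute value of the (finite-difference) gradient df/dx_i.
    A real DNN would compute this gradient via backpropagation."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps   # perturb pixel i upward
        xm = list(x); xm[i] -= eps   # perturb pixel i downward
        grad.append(abs((f(xp) - f(xm)) / (2 * eps)))
    return grad

# Hypothetical toy "classifier" score: tanh of a weighted sum of pixels.
w = [0.5, -2.0, 1.0, 0.1]
def score(img):
    return math.tanh(sum(p * wi for p, wi in zip(img, w)))

img = [0.2, 0.8, 0.5, 0.1]           # a flattened 2x2 "image"
sal = saliency(score, img)
# The pixel with the largest saliency most influences the prediction;
# a saliency map displays these values over the image for the expert user.
most_influential = max(range(len(sal)), key=sal.__getitem__)
```

Because the saliency comes directly from the model's own gradients rather than from hand-engineered features, no domain knowledge is imposed a priori, which is the property the abstract highlights.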


