Dissertations and Theses
The Use Of Multiple Saliency Techniques As an Explanation Interface In General Image Recognition
Date of Award
2022
AI diagnostic machines that perform beyond human-level accuracy have yet to be implemented into society at large in substantive ways. Examples abound in research of AI that can diagnose cancer, narcolepsy, renal diseases, and cardiac damage, yet these machines have yet to be deployed in clinical settings. Although predictively powerful, these diagnostic machines are powered by deep convolutional neural networks (DNNs) that operate as virtual black boxes, and the medical community is unwilling to accept predictions that lack explanation. To this end, Explainable Artificial Intelligence is developing a host of methods that seek to pry open the inner workings of DNNs in order to establish trust and transparency in industries where causality is critical. A new model of machine learning has emerged that pairs predictively powerful AI with explanation interfaces. Explanation interfaces that use feature extraction based on domain-specific knowledge given a priori have been presented as a popular option to the medical community for image diagnostics. We present a system that uses multiple saliency techniques as an explanation interface. This system has a bi-directional flow of information, allowing expert users to learn from its conclusions while leaving the DNNs free from bias imposed by a priori conditions. Such a system is more advantageous in industries such as HADR (humanitarian assistance and disaster relief) and medical diagnostics, where causal modes of understanding are needed for actionable intelligence, and it adheres more closely to the goals of explainable artificial intelligence systems and their interfaces.
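The abstract does not specify which saliency techniques the system combines. As one minimal, hypothetical illustration of the family of methods involved, the sketch below implements occlusion-based saliency: each patch of an input image is masked in turn, and the drop in the model's score is recorded as that region's importance. The `toy_model` stand-in and all names here are assumptions for illustration, not the thesis's actual models.

```python
def occlusion_saliency(model, image, patch=2, baseline=0.0):
    """Occlusion-based saliency: hide each patch of the image in turn and
    record how much the model's score drops. Larger drops mean the model
    relied more heavily on that region for its prediction."""
    h, w = len(image), len(image[0])
    base_score = model(image)
    sal = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            # Copy the image and zero out (occlude) one patch.
            occluded = [row[:] for row in image]
            for r in range(i, min(i + patch, h)):
                for c in range(j, min(j + patch, w)):
                    occluded[r][c] = baseline
            drop = base_score - model(occluded)
            # Assign the score drop to every pixel in the occluded patch.
            for r in range(i, min(i + patch, h)):
                for c in range(j, min(j + patch, w)):
                    sal[r][c] = drop
    return sal

# Hypothetical stand-in for a trained classifier: scores an image by the
# total brightness of its top-left 4x4 quadrant.
def toy_model(img):
    return sum(img[r][c] for r in range(4) for c in range(4))

# 8x8 image whose top-left quadrant is bright; the saliency map should
# highlight exactly that quadrant, since it is all the toy model uses.
img = [[1.0 if r < 4 and c < 4 else 0.0 for c in range(8)] for r in range(8)]
saliency = occlusion_saliency(toy_model, img, patch=4)
```

Model-agnostic methods like this query the network only through its outputs, which is one way an explanation interface can avoid imposing a priori domain knowledge on the underlying DNN.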
Moss, Robert N., "The Use Of Multiple Saliency Techniques As an Explanation Interface In General Image Recognition" (2022). CUNY Academic Works.