Non-algorithms for Explainable Artificial Intelligence
Shane Mueller, Michigan Technological University
Robert Hoffman, Florida Institute for Human and Machine Cognition
Gary Klein, Macrocognition, LLC
Tauseef Mamun, Michigan Technological University
Mohammadreza Jalaeian, Macrocognition, LLC

Abstract

The field of Explainable AI (XAI) has focused primarily on algorithms that can help explain decisions and classifications and help users understand whether a particular action of an AI system is justified. These "XAI algorithms" provide a variety of means for answering questions human users might have about an AI. However, explanation is also supported by "non-algorithms": methods, tools, interfaces, and evaluations that can help develop or provide explanations for users, either on their own or in combination with algorithmic explanations. In this article, we introduce and describe a small number of non-algorithms we have developed. These include several sets of guidelines offering methodological guidance on evaluating systems, covering both formative and summative evaluation (such as the self-explanation scorecard and the stakeholder playbook), and several concepts for generating explanations that can augment or replace algorithmic XAI (such as the Discovery platform, Collaborative XAI, and the Cognitive Tutorial). We introduce and review several of these example systems and discuss how they might be useful for developing or improving algorithmic explanations, or even for providing complete and useful non-algorithmic explanations of AI and ML systems.

Peer review status: UNDER REVIEW

02 Jun 2021: Submitted to Applied AI Letters
03 Jun 2021: Assigned to Editor
03 Jun 2021: Submission Checks Completed
08 Jun 2021: Reviewer(s) Assigned