An Interpretable Deep Learning Model for EEG Signals
  • Amirhessam Tahmassebi, Florida State University
  • Anke Meyer-Baese, Florida State University
  • Amir Gandomi, University of Technology Sydney

Peer review status: UNDER REVIEW

14 Jun 2020: Submitted to Applied AI Letters
15 Jun 2020: Assigned to Editor
15 Jun 2020: Submission Checks Completed
19 Jun 2020: Reviewer(s) Assigned


Cutting-edge methods in artificial intelligence (AI) can significantly improve outcomes, but the difficulty of interpreting these black-box models poses a serious problem for industry adoption. When selecting a model, practitioners often face a trade-off between accuracy and interpretability. In this paper, we present a case study on eye-state detection from electroencephalogram (EEG) signals to investigate how a deep neural network (DNN) makes a prediction, and how that prediction can be interpreted.
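The abstract does not specify the interpretation technique used in the paper, so as a purely illustrative sketch, one common approach to interpreting a DNN's prediction is input-gradient saliency: the gradient of the output with respect to each input channel indicates how much that channel influences the prediction. The tiny randomly initialized MLP and the 14-channel input below are assumptions for illustration (14 matching the channel count of EEG headsets often used in eye-state datasets), not the authors' model.

```python
import numpy as np

# Illustrative sketch only (not the paper's method): input-gradient
# saliency for a tiny one-hidden-layer MLP with random weights.
rng = np.random.default_rng(0)

n_in, n_hid = 14, 8                    # 14 stands in for EEG channels (assumption)
W1 = rng.normal(size=(n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(size=(n_hid, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_with_saliency(x):
    """Forward pass plus the gradient of the output w.r.t. each input."""
    h = np.tanh(x @ W1 + b1)           # hidden layer
    y = sigmoid(h @ W2 + b2)           # e.g. P(eyes closed)
    # Backprop by hand: sigmoid -> linear -> tanh -> linear.
    dy_dhpre = (y * (1 - y)) * W2[:, 0] * (1 - h ** 2)
    dy_dx = W1 @ dy_dhpre              # one saliency value per input channel
    return y.item(), dy_dx

x = rng.normal(size=n_in)              # one synthetic "EEG" sample
y, grad = predict_with_saliency(x)
ranking = np.argsort(-np.abs(grad))    # most influential channels first
print(f"prediction={y:.3f}, top channels={ranking[:3]}")
```

Channels with the largest absolute gradient are the ones the model is most sensitive to for this sample; real interpretability pipelines typically aggregate such attributions over many samples.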