Jordan Vice and 4 more

Existing models of the seven discrete facial expressions rely on macro-level facial muscle variations to separate distinct affective states. We propose a step-wise Hierarchical Separation and Classification Network (HSCN) that discovers dynamic and continuous macro- and micro-level variations in facial expressions. The HSCN first applies an unsupervised, cosine similarity-based separation method to continuous facial expression data, extracting twenty-one dynamic expression classes from the seven common discrete affective states. Separation between the resulting clusters is then optimised to discover macro-level changes in facial muscle activations. The face is subsequently split into upper and lower regions so that changes in upper and lower facial muscle activations can be realised and modelled separately. A linear discriminant space is developed for clustering the upper and lower facial images on the basis of similar muscular activation patterns. The actual dynamic data and the linear discriminant features are then mapped to develop a rule-based expert system that facilitates classification of twenty-one upper and twenty-one lower facial micro-expressions. Using the random forest algorithm, a classification accuracy of 76.11% was observed for dynamic macro-level facial expression classification; a support vector machine achieved accuracies of 73.63% and 87.68% for upper and lower facial micro-expressions, respectively. This work provides a novel framework for the dynamic assessment of affective states, and the reported methods and results offer new insight into the dynamic analysis of facial expressions of affective states.
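For a concrete picture of the pipeline the abstract describes, the sketch below strings together its three named stages: cosine similarity-based unsupervised separation into twenty-one classes, a linear discriminant space over the resulting labels, and random forest/SVM classification. It is a minimal sketch using scikit-learn on placeholder random features; the feature extraction, cluster optimisation, upper/lower face splitting and rule-based expert system of the actual HSCN are not reproduced, and all parameter choices here are illustrative assumptions rather than the authors' settings.

```python
# Minimal HSCN-style sketch: cosine-similarity separation -> LDA space -> RF/SVM.
# Placeholder random features stand in for facial-expression data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 64))  # assumed per-frame facial feature vectors

# Stage 1: unsupervised cosine-similarity separation into 21 dynamic classes.
clusterer = AgglomerativeClustering(n_clusters=21, metric="cosine",
                                    linkage="average")
dynamic_labels = clusterer.fit_predict(X)

# Stage 2: linear discriminant space built from the discovered classes
# (at most n_classes - 1 = 20 discriminant components).
lda = LinearDiscriminantAnalysis(n_components=20)
X_lda = lda.fit_transform(X, dynamic_labels)

# Stage 3: supervised classification of the dynamic expression classes.
X_tr, X_te, y_tr, y_te = train_test_split(X_lda, dynamic_labels,
                                          test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("RF accuracy: ", rf.score(X_te, y_te))
print("SVM accuracy:", svm.score(X_te, y_te))
```

In the paper's design the random forest handles the macro-level dynamic classes while separate SVMs classify upper and lower facial micro-expressions; the sketch collapses this into one label set purely for brevity.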

Masood Khan and 1 more

After reviewing the current state of explainable Artificial Intelligence (XAI) capabilities in Artificial Intelligence (AI) systems developed for critical domains such as criminology, engineering, governance, health, law and psychology, this paper proposes a domain-independent Accountable explainable Artificial Intelligence (AXAI) capability framework. The proposed AXAI framework extends XAI capability to let AI systems share their decisions and adequately explain the underlying reasoning processes, the aim being to help AI system developers overcome algorithmic biases and system limitations by incorporating domain-independent AXAI capabilities. Existing XAI methods neither separate nor quantify measures of comprehensibility, accuracy and accountability, so incorporating and assessing XAI capabilities remains difficult. An assessment of the AXAI capabilities of two AI systems in this paper demonstrates that the proposed framework facilitates the separation and measurement of comprehensibility, predictive accuracy and accountability, and allows AI systems to be delineated in a three-dimensional AXAI space. The framework measures comprehensibility as the readiness of a human to apply the acquired knowledge. System accuracy is measured in terms of the ratio of test to training data, the training data size and the observed number of false-positive inferences. Finally, the AXAI framework measures accountability in terms of the inspectability of the input cues, the processed data and the output information, for addressing any legal and ethical issues.
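As a rough illustration of delineating a system in the three-dimensional AXAI space, the sketch below maps the quantities the abstract mentions (the test-to-training data ratio, the observed false positives, and the inspectability of the input, processing and output stages) onto the three axes. The scoring rules and the `assess_system` helper are placeholder assumptions for illustration, not the paper's metric definitions.

```python
# Illustrative only: place an AI system at a point in the three-dimensional
# AXAI space (comprehensibility, accuracy, accountability). The scoring
# formulas below are placeholder assumptions, not the paper's metrics.
from dataclasses import dataclass

@dataclass
class AXAIPoint:
    comprehensibility: float  # readiness of a human to apply acquired knowledge, 0..1
    accuracy: float           # derived from data ratio and false positives, 0..1
    accountability: float     # inspectability of inputs, processing, outputs, 0..1

def assess_system(n_train: int, n_test: int, false_positives: int,
                  human_readiness: float, inspectable_stages: int) -> AXAIPoint:
    test_train_ratio = n_test / n_train        # the abstract's test/training ratio
    fp_rate = false_positives / n_test         # observed false-positive share
    # Arbitrary placeholder weighting combining the two accuracy inputs.
    accuracy = (1.0 - fp_rate) * min(1.0, test_train_ratio / 0.25)
    # Three inspectable stages: input cues, processed data, output information.
    accountability = inspectable_stages / 3.0
    return AXAIPoint(human_readiness, accuracy, accountability)

# Example: a system with inspectable inputs and outputs but opaque processing.
print(assess_system(n_train=1000, n_test=250, false_positives=10,
                    human_readiness=0.8, inspectable_stages=2))
```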