Human-agent Explainability: An Experimental Case Study on the Filtering of Explanations
Abstract
12th International Conference on Agents and Artificial Intelligence (ICAART 2020), Valletta, Malta, 2020.
Communication between robots/agents and humans is a challenge, since humans are typically unable to understand a robot's state of mind. To overcome this challenge, this paper relies on recent advances in the domain of eXplainable Artificial Intelligence (XAI) to trace the decisions of the agents and increase the human's understanding of the agents' behaviour, and hence to improve efficiency and user satisfaction. In particular, we propose a Human-Agent EXplainability Architecture (HAEXA) to model human-agent explainability. HAEXA filters the explanations provided by the agents to the human user in order to reduce the user's cognitive load. To evaluate HAEXA, a human-computer interaction experiment is conducted in which participants watch an agent-based simulation of aerial package delivery and fill in a questionnaire that collects their responses. The questionnaire is built according to XAI metrics established in the literature. The significance of the results is verified using one-tailed Mann-Whitney U tests. The results show that explainability increases the understandability of the simulation for human users. However, too many details in the explanations overwhelm them; hence, in many scenarios it is preferable to filter the explanations.
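As a minimal sketch of the kind of statistical check the abstract mentions, the following runs a one-tailed Mann-Whitney U test with SciPy on two hypothetical groups of questionnaire ratings; the data and group names are illustrative assumptions, not the study's actual responses.

```python
# Hedged sketch: a one-tailed Mann-Whitney U test comparing two
# independent groups of questionnaire scores, as could be done for the
# "with explanations" vs. "without explanations" conditions.
# The ratings below are made-up illustrative Likert-scale values.
from scipy.stats import mannwhitneyu

with_explanations = [5, 4, 5, 4, 3, 5, 4]      # hypothetical ratings
without_explanations = [3, 2, 4, 3, 2, 3, 3]   # hypothetical ratings

# alternative="greater": H1 is that the first group tends to score higher.
stat, p_value = mannwhitneyu(with_explanations, without_explanations,
                             alternative="greater")
print(f"U = {stat}, p = {p_value:.4f}")
```

A one-tailed alternative is appropriate here because the hypothesis is directional: explanations are expected to increase, not merely change, the understandability ratings.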