Explaining the Behaviour of Reinforcement Learning Agents in a Multi-Agent Cooperative Environment Using Policy Graphs

ELECTRONICS (2024)

Abstract
The adoption of algorithms based on Artificial Intelligence (AI) has increased rapidly in recent years. However, some aspects of AI techniques remain under heavy scrutiny. For instance, in many use cases it is not clear whether an algorithm's decisions are well informed and conform to human understanding. Having ways to address these concerns is crucial in many domains, especially whenever humans and intelligent (physical or virtual) agents must cooperate in a shared environment. In this paper, we apply an explainability method that builds a Policy Graph (PG) over discrete predicates to represent and explain a trained agent's behaviour in a multi-agent cooperative environment. We show that, from these policy graphs, policies for surrogate interpretable agents can be generated automatically. These policies can be used to measure the reliability of the explanations enabled by the PGs through a fair behavioural comparison between the original opaque agent and the surrogate one. The contributions of this paper represent the first use case of policy graphs for explaining agent behaviour in cooperative multi-agent scenarios, and the experimental results set this kind of scenario apart from previous single-agent implementations: when cooperative behaviour is required, predicates that represent observations about the other agents are crucial for replicating the opaque agent's behaviour and increasing the reliability of the explanations.
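As a rough illustration of the pipeline the abstract describes, the following Python sketch builds an empirical policy graph by discretizing observations into predicate states while running a trained agent, then extracts a surrogate policy that takes the most frequent action in each predicate state. The `discretize` mapper, the `agent.act` interface, and the Gymnasium-style environment loop are hypothetical placeholders for illustration, not the authors' actual implementation.

```python
from collections import defaultdict

def build_policy_graph(env, agent, discretize, episodes=100):
    """Count (predicate-state, action, next-state) transitions observed
    while running the opaque agent, yielding an empirical policy graph.
    Assumes a Gymnasium-style env and a hypothetical `discretize` mapper."""
    counts = defaultdict(int)  # (state, action, next_state) -> frequency
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            s = discretize(obs)            # map raw observation to discrete predicates
            action = agent.act(obs)        # opaque agent's decision (assumed interface)
            obs, _, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            counts[(s, action, discretize(obs))] += 1
    return counts

def surrogate_policy(counts):
    """Derive an interpretable policy: in each predicate state, pick the
    action the opaque agent chose most often."""
    freq = defaultdict(int)
    for (s, a, _), n in counts.items():
        freq[(s, a)] += n
    best = {}  # state -> (count, action)
    for (s, a), n in freq.items():
        if s not in best or n > best[s][0]:
            best[s] = (n, a)
    return {s: a for s, (_, a) in best.items()}
```

A surrogate agent would then act by looking up `policy[discretize(obs)]` (with some fallback for unseen states), which makes the behavioural comparison against the original opaque agent straightforward.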
Keywords
explainable AI, reinforcement learning, policy graphs, multi-agent reinforcement learning, cooperative environments