Exploration with Unreliable Intrinsic Reward in Multi-Agent Reinforcement Learning

Böhmer, Wendelin, Tabish Rashid, and Shimon Whiteson. “Exploration with unreliable intrinsic reward in multi-agent reinforcement learning.” arXiv preprint arXiv:1906.02138 (2019).

This paper investigates the use of intrinsic reward to guide exploration in multi-agent reinforcement learning. We discuss the challenges in applying intrinsic reward to multiple collaborative agents and demonstrate how unreliable reward can prevent decentralized agents from learning the optimal policy. We address this problem with a novel framework, Independent Centrally-assisted Q-learning (ICQL), in which decentralized agents share control and an experience replay buffer with a centralized agent. Only the centralized agent is intrinsically rewarded, but the decentralized agents still benefit from improved exploration, without the distraction of unreliable incentives.
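
The mechanism in that last sentence is easy to illustrate. Below is a minimal single-agent analogue of the idea, under assumptions that are not taken from the paper: tabular Q-learning on a toy chain environment, a count-based intrinsic bonus, and 50/50 shared control between the two learners. The paper's actual agents are deep multi-agent Q-learners, so every concrete name here (q_central, q_local, the bonus schedule) is hypothetical.

```python
import random
from collections import defaultdict

# Single-agent sketch of the ICQL idea from the abstract. Assumptions not
# from the paper: tabular Q-learning, a toy chain MDP, a count-based
# intrinsic bonus, and 50/50 shared control between the two learners.

N_STATES = 10                     # chain; extrinsic reward only at the far end
ACTIONS = (0, 1)                  # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def greedy(q, s):
    return max(ACTIONS, key=lambda a: q[(s, a)])

q_central = defaultdict(float)    # updated with extrinsic + intrinsic reward
q_local = defaultdict(float)      # updated with extrinsic reward only
counts = defaultdict(int)         # state visitation counts for the bonus
replay = []                       # one experience buffer shared by both

for episode in range(500):
    s, done, t = 0, False, 0
    while not done and t < 50:
        # Shared control: the intrinsically motivated central learner and
        # the bonus-free local learner take turns generating behaviour.
        q_act = q_central if random.random() < 0.5 else q_local
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(q_act, s)
        s2, r, done = step(s, a)
        counts[s2] += 1
        replay.append((s, a, r, s2, done))
        s, t = s2, t + 1

    # Both learners train on the same replayed transitions, but only the
    # central learner sees the (decaying, count-based) intrinsic bonus.
    for s0, a0, r0, s1, d in random.sample(replay, min(64, len(replay))):
        bonus = counts[s1] ** -0.5
        for q, reward in ((q_central, r0 + bonus), (q_local, r0)):
            target = reward if d else reward + GAMMA * max(q[(s1, b)] for b in ACTIONS)
            q[(s0, a0)] += ALPHA * (target - q[(s0, a0)])

print("decentralized greedy policy:", [greedy(q_local, s) for s in range(N_STATES)])
```

The point of the construction: q_central chases the decaying bonus and keeps behaviour exploratory, while q_local trains on the very same replayed transitions but sees only the extrinsic reward, so its greedy policy is never distracted by the intrinsic incentive once the task reward has been found.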
