Multi-agent Hierarchical Reinforcement Learning with Dynamic Termination

Han, Dongge, Wendelin Boehmer, Michael Wooldridge, and Alex Rogers. "Multi-agent hierarchical reinforcement learning with dynamic termination." In Pacific Rim International Conference on Artificial Intelligence, pp. 80-92. Springer, Cham, 2019.

In a multi-agent system, an agent's optimal policy will typically depend on the policies chosen by others. Therefore, a key issue in multi-agent systems research is that of predicting the behaviours of others, and responding promptly to changes in such behaviours. One obvious possibility is for each agent to broadcast its current intention, for example, the currently executed option in a hierarchical reinforcement learning framework. However, this approach can leave agents inflexible when options have an extended duration and the environment is dynamic. While adjusting the executed option at each step improves flexibility from a single-agent perspective, frequent changes in options can induce inconsistency between an agent's actual behaviour and its broadcast intention. In order to balance flexibility and predictability, we propose a dynamic termination Bellman equation that allows agents to flexibly terminate their options. We evaluate our models empirically on a set of multi-agent pursuit and taxi tasks, and show that our agents learn to adapt flexibly across scenarios that require different termination behaviours.
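For intuition, one plausible form of such a dynamic termination Bellman equation (an illustrative sketch, not necessarily the exact operator from the paper) lets the agent re-select its option at every step, but charges an assumed switching penalty delta whenever the newly selected option o' differs from the current option o:

    Q(s, o) = \mathbb{E}\!\left[\, r + \gamma \max_{o'} \big( Q(s', o') - \delta \cdot \mathbb{1}[o' \neq o] \big) \right]

Under this reading, delta controls the flexibility/predictability trade-off the abstract describes: a larger delta makes agents commit to their broadcast option for longer, while delta = 0 reduces to re-selecting an option at every step.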
