Abstract
In this paper we report on using a relational state space in multi-agent
reinforcement learning. There is growing evidence in the Reinforcement
Learning research community that a relational representation of the
state space has many benefits over a propositional one. Complex tasks
such as planning or information retrieval on the web can be represented
more naturally in relational form. Yet, this relational structure
has not been exploited for multi-agent reinforcement learning tasks
and has only been studied in a single-agent context so far. In this
paper we explore the powerful possibilities of using Relational Reinforcement
Learning (RRL) in complex multi-agent coordination tasks. More precisely,
we consider an abstract multi-state coordination problem, which can
be considered a variation and extension of repeated stateless
Dispersion Games. Our approach shows that RRL allows a complex state
space in a multi-agent environment to be represented more compactly
and enables fast convergence of the learning agents. Moreover, with
this technique, agents are able to build complex interactive models
(in the sense of learning from an expert), predict what other agents
will do, and generalize over these models. This makes it possible to
solve complex multi-agent planning tasks, in which agents need to be
adaptive and learn, with more powerful tools.
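To make the contrast between a propositional and a relational state representation concrete, here is a minimal sketch. It is not taken from the paper: the `at` predicate, the `occupied` query, the agent and location names, and the avoidance rule are illustrative assumptions. It shows how one relational rule with variables covers every agent-location instantiation that a propositional encoding must enumerate feature by feature.

```python
# Illustrative sketch (assumed names, not from the paper): a multi-agent
# state encoded propositionally versus relationally.
from itertools import product

AGENTS = ["a1", "a2", "a3"]
LOCATIONS = ["l1", "l2", "l3"]

def propositional_features(state):
    # One boolean feature per (agent, location) pair: the table grows
    # with |AGENTS| * |LOCATIONS| and cannot generalize across agents
    # or locations.
    return {f"at_{a}_{l}": state.get(a) == l
            for a, l in product(AGENTS, LOCATIONS)}

def relational_facts(state):
    # The same state as a set of ground facts over an `at` predicate.
    return {("at", a, l) for a, l in state.items()}

def occupied(facts, location):
    # Relational query with a variable ranging over agents:
    # is any agent at `location`?
    return any(pred == "at" and l == location for pred, _, l in facts)

state = {"a1": "l1", "a2": "l1", "a3": "l3"}
facts = relational_facts(state)

# A single rule -- "avoid any location another agent already occupies" --
# applies uniformly to all agents and locations; this is the kind of
# compactness and generalization the abstract attributes to RRL.
print(len(propositional_features(state)))                # 9 enumerated features
print(occupied(facts, "l1"))                             # True
print([l for l in LOCATIONS if not occupied(facts, l)])  # ['l2']
```

In a dispersion-style coordination task like the one the abstract describes, the relational rule needs no re-learning when agents or locations are added or renamed, whereas the propositional feature table must be rebuilt and retrained.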