Learning to Share and Hide Intentions using Information Regularization. (arXiv:1808.02093v1 [cs.AI])

Learning to cooperate with friends and compete with foes is a key component
of multi-agent reinforcement learning. To do so, one typically requires access
to either a model of, or interaction with, the other agent(s). Here we show how
to learn effective strategies for cooperation and competition in an asymmetric
information game with no such model or interaction. Our approach is to
encourage an agent to reveal or hide its intentions using an
information-theoretic regularizer. We consider both the mutual information
between goal and action given state and the mutual information between goal
and state. We show how to stochastically optimize these regularizers in a
way that is easy to integrate with policy gradient reinforcement learning.
Finally, we demonstrate that cooperative (competitive) policies learned with
our approach lead to more (less) reward for a second agent in two simple
asymmetric information games.
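
A minimal sketch (not the authors' code) of how an action-based regularizer on I(action; goal | state) might be folded into a policy gradient update. It assumes a discrete action space, a variational approximation q(a|s) to the goal-marginalized policy, and illustrative names (GoalPolicy, MarginalPolicy, beta); the pointwise bonus log pi(a|s,g) - log q(a|s) is a standard variational estimate of that mutual information, added to the return with a sign that either shares (beta > 0) or hides (beta < 0) intentions.

```python
# Illustrative sketch only: class names, network sizes, and the hyperparameter
# beta are assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class GoalPolicy(nn.Module):
    """pi(a | s, g): policy conditioned on state and a (one-hot) goal."""
    def __init__(self, state_dim, goal_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions))

    def forward(self, state, goal):
        return Categorical(logits=self.net(torch.cat([state, goal], dim=-1)))

class MarginalPolicy(nn.Module):
    """q(a | s): variational approximation to the goal-marginalized policy."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions))

    def forward(self, state):
        return Categorical(logits=self.net(state))

def regularized_pg_loss(pi, q, states, goals, actions, returns, beta):
    """REINFORCE loss with an information bonus added to the return.

    beta > 0 rewards actions that reveal the goal (cooperation);
    beta < 0 penalizes them (competition / hiding intentions).
    """
    dist_pi = pi(states, goals)
    dist_q = q(states)
    logp_pi = dist_pi.log_prob(actions)
    logp_q = dist_q.log_prob(actions)
    # Pointwise variational estimate of I(A; G | S), treated as extra reward.
    info_bonus = (logp_pi - logp_q).detach()
    policy_loss = -((returns + beta * info_bonus) * logp_pi).mean()
    # Fit q(a|s) to the goal-marginalized policy by maximum likelihood
    # on the actions actually sampled from pi(a|s,g).
    marginal_loss = -logp_q.mean()
    return policy_loss + marginal_loss
```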
