19/08/2021

Learning with Generated Teammates to Achieve Type-Free Ad-Hoc Teamwork

Dong Xing, Qianhui Liu, Qian Zheng, Gang Pan

Keywords: Agent-based and Multi-agent Systems, Cooperative Games, Coordination and Cooperation, Sequential Decision Making

Abstract: In ad-hoc teamwork, an agent is required to cooperate with unknown teammates without prior coordination. To swiftly adapt to an unknown teammate, most works adopt a type-based approach, which pre-trains the agent with a set of pre-prepared teammate types and then associates the unknown teammate with a particular type. Typically, these types are collected manually, which limits previous works in both the availability and the diversity of the types they manage to obtain. To eliminate these limitations, this work aims to achieve ad-hoc teamwork via a type-free approach. Specifically, we propose the Entropy-regularized Deep Recurrent Q-Network (EDRQN) to generate teammates automatically and simultaneously utilize them to pre-train our agent. These teammates are obtained from scratch and are designed to perform the task with various behaviors, so both their availability and their diversity are ensured. We evaluate our model on several benchmark domains of ad-hoc teamwork. The results show that even though our model has no access to any pre-prepared teammate types, it still achieves strong performance.
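The abstract does not spell out EDRQN's training objective, but the core idea of entropy regularization in Q-learning can be illustrated with a small sketch. The snippet below derives a soft (Boltzmann) policy from Q-values, where a larger temperature `alpha` yields a higher-entropy, more varied action distribution; this is only a minimal illustration of the general technique, and all names (`soft_policy`, `alpha`) are hypothetical, not taken from the paper.

```python
import numpy as np

def soft_policy(q_values, alpha):
    """Boltzmann policy over actions from Q-values.

    Larger alpha (stronger entropy regularization) pushes the
    policy toward uniform, producing more diverse behavior.
    """
    z = q_values / alpha
    z = z - z.max()            # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    """Shannon entropy of a probability vector."""
    return -np.sum(p * np.log(p + 1e-12))

# Example: the same Q-values, two regularization strengths.
q = np.array([1.0, 0.5, 0.2])
greedy_like = soft_policy(q, alpha=0.1)   # nearly deterministic
diverse = soft_policy(q, alpha=10.0)      # close to uniform
```

Intuitively, sampling several such policies at different entropy levels is one way automatically generated teammates could cover a range of behaviors without any manually collected teammate types.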

The talk and the respective paper are published at the IJCAI 2021 virtual conference.
