
Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning #10

Zzoay commented Jun 16, 2021

This paper introduces entity information into language-guided reinforcement learning. The core question it explores: on top of the standard RL setup, how can guidance text be used to make the agent take more sensible actions? The authors propose EMMA (Entity Mapper with Multi-modal Attention), which uses a multi-modal attention mechanism (the task is a 2D game, so the observation can be viewed as an h×w image): entities in the observation serve as queries, and the guidance text serves as keys and values, aligning entities with text. The result is then fed through a convolutional layer and an FFN to output the probabilities of the agent's next action. The authors report that this approach generalizes well to unseen games.
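To make the attention flow concrete, below is a minimal PyTorch sketch of an EMMA-style entity-to-text attention layer. This is my own reconstruction from the description above, not the authors' code: names like `EMMASketch`, the embedding size, and the conv/FFN head sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EMMASketch(nn.Module):
    """Sketch of EMMA-style entity-to-text attention.

    Each grid cell's entity embedding is a query; the guidance text's
    token representations serve as keys and values. The attended text
    vector is placed back at the entity's grid position, then a small
    conv + FFN head produces action logits. Dimensions and layer sizes
    are illustrative, not the paper's.
    """
    def __init__(self, n_entities, vocab_size, emb_dim=64, n_actions=5):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, emb_dim)
        self.token_emb = nn.Embedding(vocab_size, emb_dim)
        self.key_proj = nn.Linear(emb_dim, emb_dim)
        self.val_proj = nn.Linear(emb_dim, emb_dim)
        self.conv = nn.Conv2d(emb_dim, 32, kernel_size=3, padding=1)
        self.ffn = nn.Linear(32, n_actions)

    def forward(self, grid, text):
        # grid: (B, H, W) entity IDs; text: (B, T) token IDs
        q = self.entity_emb(grid)                  # (B, H, W, D) queries
        tok = self.token_emb(text)                 # (B, T, D)
        k = self.key_proj(tok)                     # (B, T, D) keys
        v = self.val_proj(tok)                     # (B, T, D) values
        # Every grid cell attends over the text tokens.
        scores = torch.einsum('bhwd,btd->bhwt', q, k) / q.shape[-1] ** 0.5
        attn = F.softmax(scores, dim=-1)           # (B, H, W, T)
        grounded = torch.einsum('bhwt,btd->bhwd', attn, v)
        # Conv + FFN head over the text-grounded spatial map.
        x = F.relu(self.conv(grounded.permute(0, 3, 1, 2)))  # (B, 32, H, W)
        x = x.mean(dim=(2, 3))                     # global average pool
        return self.ffn(x)                         # (B, n_actions) logits

# Example forward pass with random inputs (shapes are arbitrary):
# model = EMMASketch(n_entities=20, vocab_size=1000)
# logits = model(torch.randint(20, (1, 10, 10)), torch.randint(1000, (1, 36)))
```

The key design choice is that attention runs per grid cell: every entity queries the text independently, so an entity seen in a new game can still retrieve the text span that describes it, which is the mechanism the paper credits for generalization to unseen games.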

Info

  • Main authors: H. J. Austin Wang, Karthik Narasimhan
  • Affiliation: Princeton University
  • Paper link

1 New things learned:

Natural-language-guided RL is a relatively new task, so there should be plenty of room to work in. The paper's novelty lies in representing the entities in the observation and aligning them with the guidance text via attention; the authors claim this learns latent knowledge about the entities and dynamics described in the text, yielding good generalization in unseen game scenarios. Because the alignment between text and in-game entities is learned during training, the guidance text lets the agent "recognize" a new game to some extent.
A basic methodology of this paper is a soft mapping (that is, an alignment) between the text and the entities in the observation, so that the textual information guides the agent more precisely. Many methods could plug into this framework, but how to do more novel and meaningful research here remains open. It feels worth pursuing in depth, though I don't have a good idea yet.

2 Knowledge gained from the Related Work

Two core points: 1) how grounding is used to guide RL; 2) how grounding knowledge is learned from the guidance text. To be explored in more depth later.

3 Experimental tasks, with a brief description if unfamiliar

The paper runs experiments on 1,300 games with crowd-sourced annotations. I am not yet familiar with RL experiments and am still learning; I will dig into the experimental setup later.

4 Other tasks that could be tried, within my knowledge

  1. Build on the task proposed in this paper, e.g., align text and entities with more advanced methods, or use better representations;
  2. Still thinking

5 Good sentences

  1. In this paper, we consider the problem of leveraging textual descriptions to improve the generalization of control policies to new scenarios.

  2. In this paper, we propose a model to learn an effective grounding for entities and dynamics without requiring any prior mapping between text and state observations, using only scalar reward signals from the environment. To achieve this, there are two key inferences for an agent to make — ... To this end, we develop a new model...

  3. Our objective is to demonstrate the grounding of environment dynamics and entities in a multi-task set-up in order to drive generalization to unseen environments.

  4. The key objective in these learning setups is for the agent to utilize feedback from the environment to acquire linguistic representations (e.g. word vectors) that are optimized for the task.

  5. ...
