Paper Walkthrough: Gated Recurrent Unit

2017-05-15 · 调参写代码

The GRU architecture was introduced in the paper "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation". This post walks through the paper's main contributions.

RNN Encoder–Decoder

The paper first proposes an RNN encoder-decoder structure. Compared with predicting the sequence one word at a time, this structure learns the latent information in a sequence more effectively, which amounts to giving the model a richer supervision signal.
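To make the structure concrete, here is a minimal sketch of an encoder-decoder pair built from GRU layers. The class name, sizes, and teacher-forced decoding are illustrative assumptions, not the paper's exact training setup:

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Minimal RNN encoder-decoder sketch (hypothetical names/sizes).

    The encoder compresses the source sequence into a fixed-length
    vector (its final hidden state); the decoder is initialized with
    that vector and predicts the target sequence token by token.
    """
    def __init__(self, vocab_size=10_000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode: keep only the final hidden state as the summary vector.
        _, c = self.encoder(self.embed(src_ids))
        # Decode with teacher forcing, starting from the summary vector.
        dec_out, _ = self.decoder(self.embed(tgt_ids), c)
        return self.out(dec_out)  # logits over the target vocabulary

model = EncoderDecoder()
src = torch.randint(0, 10_000, (2, 7))  # batch of 2 source sentences
tgt = torch.randint(0, 10_000, (2, 5))  # shifted target sentences
logits = model(src, tgt)                # shape (2, 5, 10000)
```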


Hidden Unit that Adaptively Remembers and Forgets

Next, the paper proposes a hidden unit that adaptively remembers and forgets. The core idea is to give each unit its own remembering and forgetting mechanisms, so that different units learn features over different time scales: in units that capture short-term dependencies the reset gate is frequently active, while in units that capture long-term dependencies the update gate is mostly active.

In the paper's own words:

As each hidden unit has separate reset and update gates, each hidden unit will learn to capture dependencies over different time scales. Those units that learn to capture short-term dependencies will tend to have reset gates that are frequently active, but those that capture longer-term dependencies will have update gates that are mostly active.
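The gating behavior described above can be written out in a few lines. Below is a from-scratch sketch of one step of the gated unit, following the update equations in the paper; the weight matrix names and sizes are assumptions, and bias terms are omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, W, U):
    """One step of the gated hidden unit (biases omitted for brevity).

    z: update gate -- how much of the previous state survives.
    r: reset gate  -- how much of the previous state is exposed
       when forming the candidate activation.
    """
    z = sigmoid(Wz @ x + Uz @ h_prev)            # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)            # reset gate
    h_tilde = np.tanh(W @ x + U @ (r * h_prev))  # candidate state
    return z * h_prev + (1.0 - z) * h_tilde      # interpolate old/new

# Tiny demo with random weights (hypothetical sizes).
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
mats = [rng.normal(size=(d_h, d_in)) if i % 2 == 0 else
        rng.normal(size=(d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for t in range(5):
    h = gru_step(rng.normal(size=d_in), h, *mats)
print(h)
```

With this formulation, a unit whose update gate z stays near 1 simply copies its previous state forward (long-term memory), while a unit whose reset gate r stays near 0 drops the previous state when forming the candidate (short-term memory), which is exactly the division of labor the quote describes.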

Comparison with LSTM

Compared with LSTM, GRU has the following advantages: it uses only two gates (reset and update) rather than LSTM's three, it merges the memory cell and hidden state into a single vector, and as a result it has fewer parameters and is cheaper to compute, while performing comparably on many sequence tasks.
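The parameter savings are easy to verify directly. A quick sketch using PyTorch's built-in layers (the sizes here are arbitrary assumptions) shows the roughly 4:3 ratio that comes from LSTM's four weight blocks versus GRU's three:

```python
import torch.nn as nn

lstm = nn.LSTM(input_size=128, hidden_size=256)
gru = nn.GRU(input_size=128, hidden_size=256)

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# LSTM stacks 4 weight blocks (input/forget/output gates + candidate),
# GRU stacks 3 (reset/update gates + candidate), hence the ~4:3 ratio.
print(n_params(lstm), n_params(gru))  # 395264 296448
```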
