ACT for RNN

2016-04-01  by 朱小虎XiaohuZhu

Abstract:
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output.

ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients.
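For a concrete picture of the mechanism the abstract describes, the sketch below implements a variable-ponder recurrent step on top of PyTorch's nn.GRUCell. It is a minimal illustration under stated assumptions, not the paper's code: the names ACTCell, halt, eps, and max_ponder are my own, and only the weighted mean over hidden states is shown (the paper takes the same weighted mean over the outputs as well).

```python
import torch
import torch.nn as nn

class ACTCell(nn.Module):
    """One ACT-style recurrent step: ponders a variable number of times per input."""

    def __init__(self, input_size, hidden_size, eps=0.01, max_ponder=10):
        super().__init__()
        # +1 input dimension for the binary "first ponder step" flag.
        self.cell = nn.GRUCell(input_size + 1, hidden_size)
        self.halt = nn.Linear(hidden_size, 1)    # sigmoidal halting unit
        self.eps = eps                           # halting threshold is 1 - eps
        self.max_ponder = max_ponder             # hard cap so the loop always terminates

    def forward(self, x, state):
        batch = x.size(0)
        halt_sum = x.new_zeros(batch)             # cumulative halting probability
        weighted_state = torch.zeros_like(state)  # weighted mean of intermediate states
        remainder = x.new_zeros(batch)            # R(t), weight of the final ponder step
        n_steps = x.new_zeros(batch)              # N(t), number of ponder steps taken
        active = x.new_ones(batch)                # 1.0 while an element is still pondering

        for n in range(self.max_ponder):
            flag = x.new_full((batch, 1), 1.0 if n == 0 else 0.0)
            state = self.cell(torch.cat([x, flag], dim=1), state)
            p = torch.sigmoid(self.halt(state)).squeeze(1)

            # An element halts on the step where the cumulative halting probability
            # reaches 1 - eps, or on the last step the budget allows.
            last = 1.0 if n == self.max_ponder - 1 else 0.0
            halting = active * ((halt_sum + p >= 1.0 - self.eps).float() + last).clamp(max=1.0)
            running = active * (1.0 - halting)

            # Running steps are weighted by p; the halting step uses the remainder,
            # so the weights over all ponder steps sum to one.
            r = 1.0 - halt_sum
            weight = running * p + halting * r
            weighted_state = weighted_state + weight.unsqueeze(1) * state

            halt_sum = halt_sum + running * p
            remainder = remainder + halting * r
            n_steps = n_steps + active
            active = running
            if active.sum() == 0:
                break

        # Ponder cost N(t) + R(t); only the remainder carries gradient, and a small
        # multiple of this cost is added to the loss to discourage excess computation.
        ponder_cost = n_steps + remainder
        return weighted_state, ponder_cost
```

Because every quantity above comes from sigmoids and weighted sums, the number of steps is chosen without any sampling: the procedure stays deterministic and differentiable, which is what the abstract means by adding no noise to the parameter gradients.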

Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying sequences of binary logic operations, adding sequences of integers, and sorting sequences of real numbers. Overall performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem.

When applied to character-level language modeling on the Hutter prize Wikipedia dataset, ACT yields intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could be used to infer segment boundaries in sequence data.
