An Actor-Critic Algorithm for Sequence Prediction

2016-08-03  hzyido

Recurrent neural networks


RNNs for sequence prediction

In our models, the sequence of vectors is produced by either a bidirectional RNN (Schuster and Paliwal, 1997) or a convolutional encoder (Rush et al., 2015).
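As a concrete illustration of the bidirectional encoder, here is a minimal sketch in numpy: one vanilla RNN reads the input left-to-right, a second reads it right-to-left, and each position's two hidden states are concatenated. All function names, weight shapes, and dimensions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rnn_step(W_x, W_h, x_t, h_prev):
    """One vanilla tanh RNN step (illustrative; real encoders use gated units)."""
    return np.tanh(W_x @ x_t + W_h @ h_prev)

def birnn_encode(xs, W_x_f, W_h_f, W_x_b, W_h_b, hidden):
    """Encode a sequence with a forward and a backward RNN pass."""
    T = len(xs)
    h_f, h_b = np.zeros(hidden), np.zeros(hidden)
    fwd, bwd = [], [None] * T
    for t in range(T):                      # left-to-right pass
        h_f = rnn_step(W_x_f, W_h_f, xs[t], h_f)
        fwd.append(h_f)
    for t in reversed(range(T)):            # right-to-left pass
        h_b = rnn_step(W_x_b, W_h_b, xs[t], h_b)
        bwd[t] = h_b
    # each position gets a vector summarizing both its left and right context
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d_in, d_h, T = 3, 4, 5
xs = [rng.standard_normal(d_in) for _ in range(T)]
W_x_f, W_x_b = rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_in))
W_h_f, W_h_b = rng.standard_normal((d_h, d_h)), rng.standard_normal((d_h, d_h))
hs = birnn_encode(xs, W_x_f, W_h_f, W_x_b, W_h_b, hidden=d_h)
print(len(hs), hs[0].shape)  # one vector of size 2 * hidden per input position
```

The attention mechanism of the decoder can then condition on these per-position context vectors.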


3 Actor-Critic for Sequence Prediction

We note that this way of re-writing the gradient of the expected reward is known in RL under the names policy gradient theorem (Sutton et al., 1999) and stochastic actor-critic (Sutton, 1984).
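In this policy-gradient form, the gradient of the expected reward with respect to the actor parameters is an expectation over sampled sequences of per-step action values weighted by the gradient of the action probabilities. A paraphrased version of that identity (notation is a reconstruction, not copied verbatim from the paper):

```latex
\frac{dV}{d\theta}
  = \mathbb{E}_{\hat{Y} \sim p(\hat{Y})}
    \sum_{t=1}^{T} \sum_{a \in \mathcal{A}}
    \frac{dp\!\left(a \mid \hat{Y}_{1\dots t-1}\right)}{d\theta}\,
    Q\!\left(a;\, \hat{Y}_{1\dots t-1}\right)
```

Here $\hat{Y}$ is a sequence sampled from the actor, $\mathcal{A}$ is the output vocabulary, and $Q$ is the action-value function that the critic approximates.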




Training the critic
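A hedged sketch of the critic's training objective: the critic's prediction is regressed toward a bootstrapped, temporal-difference-style target built from the step reward plus the value of the successor step, where the successor value comes from a slowly-updated target critic. The array shapes and names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def critic_targets(rewards, next_values):
    """q_t = r_t + (target-critic value at step t+1); the last step has no successor."""
    T = len(rewards)
    q = np.empty(T)
    for t in range(T):
        q[t] = rewards[t] + (next_values[t + 1] if t + 1 < T else 0.0)
    return q

def critic_loss(q_hat, q_targets):
    """Mean squared error between critic outputs and the bootstrapped targets."""
    return float(np.mean((q_hat - q_targets) ** 2))

rewards = np.array([0.0, 0.0, 1.0])       # e.g. a reward arriving late in the sequence
target_vals = np.array([0.4, 0.9, 0.0])   # values produced by a *delayed* target critic
q = critic_targets(rewards, target_vals)
print(q)                                  # bootstrapped targets for the critic
print(critic_loss(np.array([0.8, 0.1, 0.9]), q))
```

The target critic's role is to keep these regression targets stable while the main critic is being updated.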


Applying deep RL techniques

Attempts to remove the target network by propagating the gradient through q_t resulted in a lower squared error (Q̂(ŷ_t; Ŷ_{1...T}) − q_t)², but the resulting Q̂ values proved very unreliable as training signals for the actor.

Sampling (page 5)
To compensate for this, we sample predictions from a delayed actor, whose weights are slowly updated to follow the actor that is actually trained. This is inspired by Lillicrap et al. (2015), where a delayed actor is used for a similar purpose.
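The delayed (target) network trick from Lillicrap et al. (2015) can be sketched as a slow exponential moving average of the trained weights, θ′ ← τθ + (1 − τ)θ′ with a small τ. The dictionary-of-arrays representation below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def soft_update(delayed, trained, tau=0.001):
    """Move every delayed weight a small step toward its trained counterpart."""
    return {k: tau * trained[k] + (1.0 - tau) * delayed[k] for k in trained}

trained = {"W": np.ones((2, 2))}   # weights being optimized
delayed = {"W": np.zeros((2, 2))}  # slowly-tracking copy used for sampling/targets
for _ in range(1000):              # after many small updates ...
    delayed = soft_update(delayed, trained, tau=0.01)
print(delayed["W"][0, 0])          # ... the delayed copy approaches the trained value
```

Because the delayed copy changes slowly, the samples (and critic targets) it produces drift smoothly instead of jumping with every actor update.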

On the explanation of the target critic network, see:

Continuous Control with Deep Reinforcement Learning (Lillicrap et al., 2015), arXiv:1509.02971