Extending Q-learning to the DDPG Algorithm Formulas

2020-07-08  天使的白骨_何清龙

The original Q-learning loss function is defined as:

\mathbf L(\theta^Q)=\mathbb E_{s_t\sim \rho^\beta,\, a_t \sim \beta,\, r_t \sim E} \bigl[ \bigl( Q(s_t, a_t \vert \theta^Q) - y_t \bigr)^2 \bigr]
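Here y_t denotes the bootstrapped target; in the DDPG paper's formulation it is the one-step return under the greedy (deterministic) policy:

y_t = r(s_t, a_t) + \gamma\, Q(s_{t+1}, \mu(s_{t+1}) \vert \theta^Q)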

The Bellman equation for Q:

Q^\pi(s_t, a_t) = \mathbb E_{r_t, s_{t+1} \sim E} \Bigl[ r(s_t, a_t) + \gamma\, \mathbb E_{a_{t+1} \sim \pi} \bigl[ Q^\pi(s_{t+1}, a_{t+1}) \bigr] \Bigr]

The Q function under a deterministic policy:

Q^\mu(s_t, a_t)=\mathbb E_{r_t, s_{t+1} \sim E} \bigl[ r(s_t, a_t) + \gamma\, Q^\mu(s_{t+1}, \mu(s_{t+1})) \bigr]
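A minimal sketch of how the loss L(θ^Q) and the deterministic-policy target above combine into a critic update, assuming PyTorch; the `critic`/`actor` networks and the minibatch below are hypothetical stand-ins for a replay-buffer sample, not something from the original post:

```python
import torch
import torch.nn as nn

# Hypothetical critic Q(s, a | theta^Q) and actor mu(s | theta^mu); plain MLPs for illustration.
state_dim, action_dim = 3, 1
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim), nn.Tanh())
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

# Fake minibatch (s_t, a_t, r_t, s_{t+1}) standing in for samples from a replay buffer.
s, a = torch.randn(32, state_dim), torch.randn(32, action_dim)
r, s_next = torch.randn(32, 1), torch.randn(32, state_dim)

# Target y_t = r(s_t, a_t) + gamma * Q(s_{t+1}, mu(s_{t+1})); no gradient flows through it.
with torch.no_grad():
    y = r + gamma * critic(torch.cat([s_next, actor(s_next)], dim=-1))

# L(theta^Q) = E[(Q(s_t, a_t | theta^Q) - y_t)^2], minimized by gradient descent on the critic.
q = critic(torch.cat([s, a], dim=-1))
critic_loss = nn.functional.mse_loss(q, y)
critic_opt.zero_grad()
critic_loss.backward()
critic_opt.step()
```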

The deterministic policy gradient (DPG), i.e. the gradient of the objective J with states drawn from the behavior distribution ρ^β:

\nabla_{\theta^\mu}J \approx \mathbb E_{s_t \sim \rho^\beta} \bigl[ \nabla_{\theta^\mu} Q(s, a \vert \theta^Q) \big\vert_{s=s_t,\, a=\mu(s_t \vert \theta^\mu)} \bigr]
\qquad\quad = \mathbb E_{s_t \sim \rho^\beta} \bigl[ \nabla_{a} Q(s, a \vert \theta^Q) \big\vert_{s=s_t,\, a=\mu(s_t)} \, \nabla_{\theta^\mu} \mu(s \vert \theta^\mu) \big\vert_{s=s_t} \bigr]
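In an autodiff framework this chain rule does not have to be written out by hand: maximizing Q(s, μ(s|θ^μ)) with respect to the actor parameters alone yields exactly ∇_a Q · ∇_{θ^μ}μ. A sketch under the same hypothetical setup as the critic snippet above:

```python
import torch
import torch.nn as nn

# Same hypothetical networks as in the critic sketch above.
state_dim, action_dim = 3, 1
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim), nn.Tanh())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

s = torch.randn(32, state_dim)  # minibatch of states s_t ~ rho^beta (fake data)

# actor_loss = -E[Q(s, mu(s | theta^mu))]; backpropagation applies the chain rule
#   grad_a Q(s, a)|_{a = mu(s)} * grad_{theta^mu} mu(s | theta^mu)
# automatically, which is exactly the DPG gradient above.
actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()

actor_opt.zero_grad()
actor_loss.backward()   # gradients also reach the critic, but only the actor optimizer steps
actor_opt.step()
```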

DDPG improvements:
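As described in the DDPG paper, the main additions on top of the DPG update above are an experience replay buffer, slowly tracking target networks θ^{Q'} and θ^{μ'}, batch normalization, and exploration noise added to the actor's output. The target networks are soft-updated as:

\theta' \leftarrow \tau\theta + (1 - \tau)\theta', \qquad \tau \ll 1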
