[Paper Reading Notes] The Limitations of Deep Learning in Adversarial Settings

2020-03-28  wangxiaoguang

Paper title: The Limitations of Deep Learning in Adversarial Settings
Paper link: https://arxiv.org/abs/1511.07528

JSMA


The algorithm consists of three main steps:

  1. Compute the forward derivative J_F(X^*)
  2. Construct the adversarial saliency map S from the forward derivative
  3. Modify the input feature i_{max} by \theta

Step 1. Compute the forward derivative J_F(X^*)

The j-th output component of the network is

\mathbf{F}_{j}(\mathbf{X})=f_{n+1, j}\left(\mathbf{W}_{n+1, j} \cdot \mathbf{H}_{n}+b_{n+1, j}\right)

Applying the chain rule once more gives:

\frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial x_{i}}=\left(\mathbf{W}_{n+1, j} \cdot \frac{\partial \mathbf{H}_{n}}{\partial x_{i}}\right) \times \frac{\partial f_{n+1, j}}{\partial x_{i}}\left(\mathbf{W}_{n+1, j} \cdot \mathbf{H}_{n}+b_{n+1, j}\right)
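The forward derivative above can be sketched numerically. The tiny two-layer sigmoid network below is an illustrative assumption (its weights and sizes are not from the paper); the sketch computes the Jacobian J_F(X) both by central finite differences and by the chain rule of the equation above, so the two can be compared:

```python
import numpy as np

# Hypothetical 2-layer sigmoid MLP for illustration only;
# W1, W2 and the layer sizes are assumptions, not the paper's model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def F(x):
    """Network outputs F_j(X) = f(W2 . H + b2), one value per class."""
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

def forward_derivative(x, eps=1e-6):
    """J_F(X)[j, i] = dF_j / dx_i via central finite differences."""
    J = np.zeros((2, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        J[:, i] = (F(x + e) - F(x - e)) / (2 * eps)
    return J

def forward_derivative_chain(x):
    """Same Jacobian via the chain rule:
    dF_j/dx_i = f'(W2 . H + b2)_j * (W2 @ dH/dx)[j, i]."""
    h = sigmoid(W1 @ x + b1)
    out = sigmoid(W2 @ h + b2)
    dH = (h * (1 - h))[:, None] * W1           # dH/dx, shape (4, 3)
    return (out * (1 - out))[:, None] * (W2 @ dH)

x = np.array([0.2, 0.5, 0.8])
J = forward_derivative(x)    # shape: (num_classes, num_features)
```

The two routes agree to numerical precision, which is a quick sanity check that the chain-rule expression is implemented correctly.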

Step 2. Construct the adversarial saliency map S

Increasing input features:

S(\mathbf{X}, t)[i]=\left\{\begin{array}{l} 0 \text { if } \frac{\partial \mathbf{F}_{t}(\mathbf{X})}{\partial \mathbf{X}_{i}}<0 \text { or } \sum_{j \neq t} \frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial \mathbf{X}_{i}}>0 \\ \left(\frac{\partial \mathbf{F}_{t}(\mathbf{X})}{\partial \mathbf{X}_{i}}\right)\left|\sum_{j \neq t} \frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial \mathbf{X}_{i}}\right| \text { otherwise } \end{array}\right.
Decreasing input features:

S(\mathbf{X}, t)[i]=\left\{\begin{array}{l} 0 \text { if } \frac{\partial \mathbf{F}_{t}(\mathbf{X})}{\partial \mathbf{X}_{i}}>0 \text { or } \sum_{j \neq t} \frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial \mathbf{X}_{i}}<0 \\ \left|\frac{\partial \mathbf{F}_{t}(\mathbf{X})}{\partial \mathbf{X}_{i}}\right|\left(\sum_{j \neq t} \frac{\partial \mathbf{F}_{j}(\mathbf{X})}{\partial \mathbf{X}_{i}}\right) \text { otherwise } \end{array}\right.
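The feature-increase saliency map can be written compactly given a precomputed Jacobian. A minimal sketch (the example Jacobian values are made up for illustration):

```python
import numpy as np

def saliency_map_increase(J, t):
    """S(X, t)[i] for increasing features, from a Jacobian J of
    shape (num_classes, num_features) and target class t."""
    target = J[t]                   # dF_t / dX_i for every feature i
    others = J.sum(axis=0) - J[t]   # sum over j != t of dF_j / dX_i
    S = target * np.abs(others)
    # Zero out features that hurt the target class or help the others.
    S[(target < 0) | (others > 0)] = 0.0
    return S

# Toy Jacobian: 2 classes, 3 features (values are illustrative only).
J = np.array([[ 0.3, -0.2,  0.5],
              [-0.4,  0.1, -0.1]])
S = saliency_map_increase(J, t=0)
# Feature 1 is zeroed out because dF_t/dX_1 < 0 and the other-class
# derivatives are positive there.
```

The decrease variant mirrors this: flip both zeroing conditions and move the absolute value onto the target-class term.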

Step 3. Modify the input feature i_{max} by \theta

Pick the feature i_{max} with the largest saliency value S(\mathbf{X}, t)[i], perturb it by \theta, and repeat the three steps until the network classifies the input as the target class t or the maximum allowed distortion is reached.
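The three steps above can be combined into a greedy loop. The sketch below is a hedged illustration, not the paper's exact Algorithm 1: the linear softmax "model", the feature-difference Jacobian, and the budget `max_changes` are all stand-in assumptions.

```python
import numpy as np

# Stand-in linear softmax model (an assumption, not the paper's DNN).
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))   # 3 classes, 5 input features

def F(x):
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()        # softmax score per class

def jacobian(x, eps=1e-6):
    """dF_j / dx_i by central finite differences."""
    J = np.zeros((3, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        J[:, i] = (F(x + e) - F(x - e)) / (2 * eps)
    return J

def jsma(x, t, theta=0.1, max_changes=20):
    """Greedy JSMA-style loop: bump the most salient feature by theta
    until class t wins or the change budget runs out."""
    x = x.copy()
    for _ in range(max_changes):
        if F(x).argmax() == t:
            break                         # adversarial goal reached
        J = jacobian(x)
        target, others = J[t], J.sum(axis=0) - J[t]
        S = target * np.abs(others)
        S[(target < 0) | (others > 0)] = 0.0
        if S.max() <= 0:
            break                         # no feature helps any more
        i_max = S.argmax()                # Step 3: pick i_max, add theta
        x[i_max] = np.clip(x[i_max] + theta, 0.0, 1.0)
    return x

x0 = np.full(5, 0.5)
x_adv = jsma(x0, t=1)
```

Clipping to [0, 1] keeps each perturbed feature in the valid input range; the loop exits early if the saliency map offers no useful feature, mirroring the algorithm's stopping conditions.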

