EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES

2020-03-04  馒头and花卷

Goodfellow I, Shlens J, Szegedy C. Explaining and Harnessing Adversarial Examples[J]. arXiv: Machine Learning, 2014.

@article{goodfellow2014explaining,
  title={Explaining and Harnessing Adversarial Examples},
  author={Goodfellow, Ian and Shlens, Jonathon and Szegedy, Christian},
  journal={arXiv: Machine Learning},
  year={2014}
}

This paper is the origin of the FGSM (fast gradient sign method) for generating adversarial examples:
\tilde{x}=x+ \epsilon \: \mathrm{sign} (\nabla_x J(\theta, x, y)).

Main content

For images, the pixel precision is 1/255, so if the perturbation we add to an image is smaller than this precision, the stored image does not actually change at all. The authors first show that even for a linear model, a tiny perturbation of the input can cause a large change in the output once the dimensionality is high enough.
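
A tiny numpy sketch of this point (assuming 8-bit images stored as uint8; the image and the size of the perturbation are arbitrary, chosen only for illustration):

import numpy as np

x = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)   # an 8-bit image
x_float = x.astype(np.float64) / 255.0                             # the same image in [0, 1]

eta = 0.3 / 255.0                                   # a perturbation below the 1/255 precision
x_perturbed = np.clip(x_float + eta, 0.0, 1.0)

# Quantizing back to 8-bit storage erases a perturbation this small.
x_requantized = np.round(x_perturbed * 255.0).astype(np.uint8)
print(np.array_equal(x, x_requantized))             # True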

Starting from the linear case

Let \tilde{x} = x+\eta. The perturbed linear activation is
w^T\tilde{x} = w^Tx+w^T\eta,

so the change in the output is w^T\eta. Let n be the dimension of x and let m be the average magnitude of the elements of w. Under the constraint \|\eta\|_{\infty}<\epsilon, the change is maximized by \eta=\epsilon \: \mathrm{sign}(w) (this is also what inspires FGSM), and the maximum change is \epsilon \|w\|_1 \approx \epsilon mn. So for fixed \epsilon and m, the change in the output becomes very large once n is large enough, even though every coordinate of the input is perturbed by less than \epsilon.
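
A small numpy sketch of this argument (the dimensions and the weight distribution are arbitrary, chosen only for illustration): with \eta = \epsilon \: \mathrm{sign}(w), the change w^T\eta equals \epsilon\|w\|_1 = \epsilon mn, so it grows linearly in n while \|\eta\|_\infty stays at \epsilon.

import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.01                                      # max-norm budget for the perturbation

for n in [10, 100, 1000, 10000]:
    w = rng.normal(size=n)                          # weight vector of a linear model
    eta = epsilon * np.sign(w)                      # worst-case perturbation with ||eta||_inf = epsilon
    m = np.mean(np.abs(w))                          # average magnitude of the weights
    print(n, w @ eta, epsilon * m * n)              # the last two numbers coincide and grow with n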

The nonlinear case

Carrying the linear intuition over to the nonlinear case (the authors argue that many deep networks behave largely linearly), we get
\tilde{x}=x+ \epsilon \: \mathrm{sign} (\nabla_x J(\theta, x, y)).
Experiments show that even a network like GoogLeNet can be fooled by adversarial examples generated this way.
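
A minimal PyTorch sketch of FGSM as written above (the model, the loss, and the value of epsilon are placeholders, not the paper's exact setup; the paper reports using, e.g., \epsilon = 0.25 on MNIST):

import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    # x_adv = x + epsilon * sign(grad_x J(theta, x, y))
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = x + epsilon * grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()           # keep the result a valid image in [0, 1]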

One question that comes up when reading this paper is why \eta \not = \epsilon \: \nabla_x J(\theta, x, y): after all, the gradient direction is the direction in which the loss increases fastest. See the derivation below.
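
A short derivation of the answer (my own note, just restating the linear argument above): to first order, J(\theta, x+\eta, y) \approx J(\theta, x, y) + \eta^T \nabla_x J(\theta, x, y), and FGSM constrains the max-norm of \eta rather than its Euclidean norm, so

\max_{\|\eta\|_\infty \le \epsilon} \eta^T \nabla_x J(\theta, x, y) = \epsilon \|\nabla_x J(\theta, x, y)\|_1, \quad \text{attained at} \quad \eta = \epsilon \: \mathrm{sign}(\nabla_x J(\theta, x, y)).

The scaled gradient \epsilon \nabla_x J / \|\nabla_x J\|_2 would instead be the maximizer under the constraint \|\eta\|_2 \le \epsilon.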

There is a passage in the paper that I do not fully understand:

Because the derivative of the sign function is zero or undefined everywhere, gradient descent on the adversarial objective function based on the fast gradient sign method does not allow the model to anticipate how the adversary will react to changes in the parameters. If we instead use adversarial examples based on small rotations or addition of the scaled gradient, then the perturbation process is itself differentiable and the learning can take the reaction of the adversary into account. However, we did not find nearly as powerful of a regularizing result from this process, perhaps because these kinds of adversarial examples are not as difficult to solve.
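
My reading of this passage, with a hedged sketch: the paper trains on the adversarial objective \tilde{J}(\theta, x, y) = \alpha J(\theta, x, y) + (1-\alpha) J(\theta, x+\epsilon \: \mathrm{sign}(\nabla_x J(\theta, x, y)), y) with \alpha = 0.5. Because the derivative of sign is zero or undefined, the perturbation contributes no gradient with respect to \theta, so during the parameter update it behaves like a constant and the model cannot anticipate how the adversary will change when \theta changes. In code this corresponds to detaching the perturbation (a PyTorch-style sketch reusing the fgsm function above; the training-loop names are placeholders):

def adversarial_training_step(model, optimizer, x, y, epsilon, alpha=0.5):
    # One step on tilde_J = alpha * J(x) + (1 - alpha) * J(x_adv).
    x_adv = fgsm(model, x, y, epsilon)              # detached: the adversary is a constant w.r.t. theta
    loss = alpha * F.cross_entropy(model(x), y) \
        + (1 - alpha) * F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()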

By the way, the paper's own concluding summary is also worth noting.
