SaGAN: Generative Adversarial Network with Spatial Attention

2018-11-06  yestinl

Problem

  1. Traditional GAN methods operate directly on the whole image and thus inevitably change the attribute-irrelevant regions
  2. The performance of traditional regression methods depends heavily on paired training data, which are quite difficult to acquire

Related work

Method

  1. SaGAN: alter only the attribute-specific region and keep the rest unchanged
  2. The generator contains an attribute manipulation network (AMN) to edit the face image, and a spatial attention network (SAN) to localize the attribute-specific region, which restricts the alteration of AMN within this region (a code sketch of this blending step follows the list).
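
A minimal PyTorch sketch of that blending step is below; the framework choice and the module handles `amn`/`san` are illustrative assumptions, not the authors' code.

```python
import torch

def sagan_generate(amn, san, image, attr):
    """Blend the AMN edit into the input image only where the
    SAN attention mask is active (SaGAN's core idea).

    image: (N, 3, H, W) input faces I
    attr:  (N, 1) target attribute values c
    """
    # AMN takes a 4-channel input: the image plus the attribute
    # broadcast to an extra channel.
    attr_map = attr.view(-1, 1, 1, 1).expand(-1, 1, image.size(2), image.size(3))
    edited = amn(torch.cat([image, attr_map], dim=1))  # I_a: 3-channel, Tanh
    mask = san(image)                                  # b: 1-channel, Sigmoid
    # \hat{I} = I_a * b + I * (1 - b): edits stay inside the attended region.
    return mask * edited + (1 - mask) * image
```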

Contribution

  1. Spatial attention is introduced into the GAN framework, forming an end-to-end generative model for face attribute editing (referred to as SaGAN), which alters only the attribute-specific region and keeps the rest of the irrelevant regions unchanged.
  2. The proposed SaGAN adopts a single generator with the attribute as a conditional signal, rather than two dual generators for the two inverse directions of attribute editing.
  3. The proposed SaGAN achieves quite promising results, especially for local attributes, with the attribute-irrelevant details well preserved. Besides, the approach also benefits face recognition through data augmentation.

Generative Adversarial Network with Spatial Attention

SaGAN

| Notation | Meaning |
| --- | --- |
| I | input image |
| \hat{I} | output image |
| I_a | edited face image output by AMN |
| c | attribute value |
| c^g | ground-truth attribute label of the real image I |
| D_{src}(I) | probability of an image I being real |
| D_{cls}(c\vert I) | probability of an image I having the attribute c |
| F_m | attribute manipulation network (AMN) |
| F_a | spatial attention network (SAN) |
| b | spatial attention mask, restricting the alteration of AMN to the attended region |
| \lambda_1, \lambda_2 | balance parameters |
| \lambda_{gp} | hyper-parameter controlling the gradient penalty (default 10) |

Discriminator

The discriminator mirrors the generator's first two objectives: a real/fake adversarial loss over real images I and edited images \hat{I}, and an attribute classification loss on real images with their ground-truth labels c^g
    \mathcal{L}^D_{src} = \mathbb{E}_I[-\log D_{src}(I)]+\mathbb{E}_{\hat{I}}[-\log (1-D_{src}(\hat{I}))]\tag{1}
    \mathcal{L}^D_{cls} = \mathbb{E}_{I,c^g}[-\log D_{cls}(c^g|I)]
    \min \limits_{D_{src},D_{cls}} \mathcal{L}_D = \mathcal{L}_{src}^D+\mathcal{L}_{cls}^D
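
In PyTorch-like code these two terms could be computed as below (a hedged sketch assuming binary attributes; the Optimization section later swaps the real/fake term for its WGAN-GP form):

```python
import torch
import torch.nn.functional as F

def d_loss(D_src, D_cls, real, edited, attr_real):
    """Discriminator objective: real/fake term (Eq. 1) plus attribute
    classification on real images with ground-truth labels c^g."""
    eps = 1e-8  # numerical safety inside the logs
    loss_src = (-torch.log(D_src(real) + eps)
                - torch.log(1 - D_src(edited) + eps)).mean()
    # Binary attributes assumed, so D_cls outputs per-attribute probabilities.
    loss_cls = F.binary_cross_entropy(D_cls(real), attr_real)
    return loss_src + loss_cls
```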

Generator

Given the AMN output I_a = F_m(I, c) and the attention mask b = F_a(I), the edited image is composed as \hat{I} = G(I,c) = I_a \cdot b + I \cdot (1-b), so only the attended region is altered. G is trained with three losses, combined into the overall objective below:

  1. To make the edited face image \hat{I} photo-realistic, an adversarial loss is designed to confuse the real/fake classifier:
    \mathcal{L}^G_{src} = \mathbb{E}_{\hat{I}}[-\log D_{src}(\hat{I})]\tag{2}
  2. To make \hat{I} correctly carry the target attribute c, an attribute classification loss enforces that the attribute prediction of \hat{I} from the attribute classifier approximates the target value c:
    \mathcal{L}_{cls}^G = \mathbb{E}_{\hat{I}}[-\log D_{cls}(c|\hat{I})]
  3. To keep the attribute-irrelevant region unchanged, a reconstruction loss is employed, similar to CycleGAN and StarGAN:
    \mathcal{L}_{rec}^G = \lambda_1\mathbb{E}_{I,c,c^g}[||I-G(G(I,c),c^g)||_1]+\lambda_2\mathbb{E}_{I,c^g}[||I-G(I,c^g)||_1]
  4. The overall objective for the generator G:
    \min \limits_{F_m,F_a} \mathcal{L}_G = \mathcal{L}_{src}^G+\mathcal{L}_{cls}^G+\mathcal{L}_{rec}^G
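
A hedged PyTorch sketch of the combined generator update (function names, the `1e-8` stabilizer, and the binary-attribute assumption are mine; G is the composed generator from above):

```python
import torch
import torch.nn.functional as F

def g_loss(G, D_src, D_cls, image, c, c_g, lambda1=1.0, lambda2=1.0):
    """Generator objective: adversarial (Eq. 2) + attribute classification
    + dual reconstruction. lambda1/lambda2 here are placeholder values."""
    edited = G(image, c)                                 # \hat{I}
    loss_src = -torch.log(D_src(edited) + 1e-8).mean()   # Eq. (2)
    loss_cls = F.binary_cross_entropy(D_cls(edited), c)  # push prediction to c
    # Cycle term: editing back with c^g should recover the input image.
    cyc = (image - G(edited, c_g)).abs().mean()
    # Identity term: editing with the original attribute should be a no-op.
    idt = (image - G(image, c_g)).abs().mean()
    return loss_src + loss_cls + lambda1 * cyc + lambda2 * idt
```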

Implementation

Optimization

To optimize the adversarial real/fake classification more stably, in all experiments the objectives in Eq.(1) and Eq.(2) are optimized using WGAN-GP:
\mathcal{L}_{src}^D = -\mathbb{E}_I[D_{src}(I)]+\mathbb{E}_{\hat{I}}[D_{src}(\hat{I})]+\lambda_{gp}\mathbb{E}_{\tilde{I}}[(||\nabla_{\tilde{I}}D_{src}(\tilde{I})||_2-1)^2]

\tilde{I} is sampled uniformly along a straight line between the edited images \hat{I} and the real images I
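
This is the standard WGAN-GP recipe; a generic PyTorch sketch (not the authors' code) is:

```python
import torch

def d_loss_wgan_gp(D_src, real, fake, lambda_gp=10.0):
    """WGAN-GP form of the real/fake term: critic scores plus a gradient
    penalty evaluated at images interpolated between real and edited ones."""
    # \tilde{I}: uniform interpolation between real I and edited \hat{I}.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake.detach()).requires_grad_(True)
    grad = torch.autograd.grad(D_src(interp).sum(), interp, create_graph=True)[0]
    grad_norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    penalty = ((grad_norm - 1) ** 2).mean()
    return -D_src(real).mean() + D_src(fake).mean() + lambda_gp * penalty
```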

Network Architecture

| Network | Input | Output | Activation function |
| --- | --- | --- | --- |
| AMN | 4-channel: an input image and an attribute | 3-channel RGB image | Tanh |
| SAN | 3-channel: an input image | 1-channel attention mask | Sigmoid |
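
As a rough illustration of these conventions; the layer depths and channel widths below are invented placeholders rather than the paper's architecture:

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    # Generic conv + norm + activation unit used by both sketches below.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.InstanceNorm2d(c_out),
                         nn.ReLU(inplace=True))

# AMN: 4-channel input (RGB image + attribute channel) -> 3-channel edit, Tanh.
amn = nn.Sequential(conv_block(4, 64), conv_block(64, 64),
                    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

# SAN: 3-channel input (RGB image) -> 1-channel attention mask, Sigmoid.
san = nn.Sequential(conv_block(3, 64), conv_block(64, 64),
                    nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid())
```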