StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

2018-11-03  yestinl

Problem

New method

Star Generative Adversarial Networks

1. Multi-Domain Image-to-Image Translation

| Notation | Meaning |
| --- | --- |
| x | input image |
| y | output image |
| c | target domain label |
| c' | original domain label |
| D_{src}(x) | a probability distribution over sources given by D |
| D_{cls}(c'\|x) | a probability distribution over domain labels computed by D |
| λ_{cls} | hyper-parameter controlling the relative importance of the domain classification loss |
| λ_{rec} | hyper-parameter controlling the relative importance of the reconstruction loss |
| m | a mask vector |
| [\cdot] | concatenation |
| c_i | a vector for the labels of the i-th dataset |
| \hat{x} | a point sampled uniformly along a straight line between a pair of real and generated images |
| λ_{gp} | hyper-parameter controlling the gradient penalty |

Adversarial Loss

\mathcal{L}_{adv} = \mathbb{E}_x [\log D_{src}(x)] + \mathbb{E}_{x,c}[\log (1- D_{src}(G(x, c)))]\tag{1}

D_{src}(x) denotes a probability distribution over sources given by D. The generator G tries to minimize this objective, while the discriminator D tries to maximize it.
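As a minimal numerical sketch of Eq. (1) (the `d_src` and `g` functions below are toy stand-ins, not the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def d_src(x):
    """Toy discriminator: probability that x is a real image (sigmoid of mean)."""
    return 1.0 / (1.0 + np.exp(-x.mean()))

def g(x, c):
    """Toy generator: shifts the input toward the target label c."""
    return x + 0.1 * c.mean()

x = rng.normal(size=(8, 8))      # a "real" image
c = np.array([0.0, 1.0, 0.0])    # target domain label (one-hot)

# L_adv = E[log D_src(x)] + E[log(1 - D_src(G(x, c)))]
l_adv = np.log(d_src(x)) + np.log(1.0 - d_src(g(x, c)))
```

D is updated to push `l_adv` up (real images scored as real, generated ones as fake), while G is updated to push the second term down.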

Domain Classification Loss

\mathcal{L}_{cls}^r = \mathbb{E}_{x,c'}[-\log D_{cls}(c'|x)]\tag{2}
\mathcal{L}_{cls}^f = \mathbb{E}_{x,c}[-\log D_{cls}(c|G(x,c))]\tag{3}

The real-image term \mathcal{L}_{cls}^r is used to optimize D, so that it learns to classify a real image x to its original domain c'. The fake-image term \mathcal{L}_{cls}^f is used to optimize G, so that generated images are classified as the target domain c.

Reconstruction Loss

\mathcal{L}_{rec} = \mathbb{E}_{x,c,c'}[||x - G(G(x,c),c')||_1]\tag{4}

This is a cycle-consistency loss: translating the generated image G(x, c) back to the original domain c' should reconstruct the input x, which preserves the content of the input while changing only the domain-related part.
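A toy sketch of the cycle reconstruction term ||x − G(G(x, c), c')||_1 (the additive generator and scalar "labels" below are illustrative assumptions chosen so the cycle cancels exactly):

```python
import numpy as np

def g(x, c):
    """Toy generator: shifts the input by the (scalar) target label."""
    return x + c

x = np.arange(4, dtype=float)
c, c_orig = 1.0, -1.0  # toy target / original "labels" that undo each other

# L_rec = || x - G(G(x, c), c') ||_1  (translate to c, then back to c')
l_rec = np.abs(x - g(g(x, c), c_orig)).sum()
```

With this toy generator the round trip is exact and the loss is zero; a real generator only approximates this, and the L1 penalty drives it toward cycle consistency.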

Full Objective

\mathcal{L}_D = -\mathcal{L}_{adv} + \lambda_{cls}\mathcal{L}_{cls}^r
\mathcal{L}_G = \mathcal{L}_{adv}+\lambda_{cls}\mathcal{L}_{cls}^f+\lambda_{rec}\mathcal{L}_{rec}

We use λ_{cls} = 1 and λ_{rec} = 10 in all experiments.
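The two objectives combine the terms above linearly; a sketch with placeholder loss values (the numbers are illustrative, only the λ weights come from the paper):

```python
# Toy scalar loss values standing in for the terms of Eqs. (1)-(4);
# in training these would be computed from the current minibatch.
l_adv, l_cls_r, l_cls_f, l_rec = -1.2, 0.7, 0.9, 0.3
lambda_cls, lambda_rec = 1.0, 10.0  # weights used in the paper

# L_D = -L_adv + λ_cls * L_cls^r   (discriminator objective)
l_d = -l_adv + lambda_cls * l_cls_r
# L_G =  L_adv + λ_cls * L_cls^f + λ_rec * L_rec   (generator objective)
l_g = l_adv + lambda_cls * l_cls_f + lambda_rec * l_rec
```

Note the sign flip on L_adv: D descends on −L_adv (i.e. maximizes L_adv), while G descends on L_adv directly.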

2. Training with Multiple Datasets

Mask Vector

\tilde{c} = [c_1, c_2, \ldots, c_n, m]

For the remaining n−1 unknown label vectors we simply assign zero values.
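A sketch of building the unified label \tilde{c} (the helper name and the example label dimensions are assumptions for illustration):

```python
import numpy as np

def build_unified_label(labels, active):
    """Concatenate per-dataset label vectors with a one-hot mask m.

    `labels` holds one label vector per dataset; only the vector at index
    `active` is known, the remaining ones are zeroed out.
    """
    n = len(labels)
    mask = np.zeros(n)
    mask[active] = 1.0
    parts = [l if i == active else np.zeros_like(l) for i, l in enumerate(labels)]
    return np.concatenate(parts + [mask])

# e.g. 5 binary attributes for one dataset, 8 expression classes for another
attrs = np.array([1, 0, 0, 1, 1], dtype=float)
exprs = np.zeros(8)
c_tilde = build_unified_label([attrs, exprs], active=0)  # sample from dataset 0
```

The mask m lets G ignore the zeroed, unknown label vectors and focus on the labels of the dataset the current training sample comes from.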

Training Strategy

Implementation

Improved GAN Training

\mathcal{L}_{adv} = \mathbb{E}_x[D_{src}(x)]-\mathbb{E}_{x,c}[D_{src}(G(x,c))]-\lambda_{gp}\mathbb{E}_{\hat{x}}[(||\nabla_{\hat{x}}D_{src}(\hat{x})||_2-1)^2]

where \hat{x} is sampled uniformly along a straight line between a pair of real and generated images. We use λ_{gp} = 10 for all experiments.
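A sketch of the interpolation and penalty term; to keep it self-contained, the critic here is a toy linear map whose gradient is analytic (in practice the gradient comes from automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic D_src(x) = w . x, whose gradient w.r.t. x is just w.
w = rng.normal(size=16)
def d_grad(x_hat):
    """∇_x̂ D_src(x̂) for the linear toy critic (constant in x̂)."""
    return w

x_real = rng.normal(size=16)
x_fake = rng.normal(size=16)

# x̂ sampled uniformly on the straight line between a real and a fake image
alpha = rng.uniform()
x_hat = alpha * x_real + (1.0 - alpha) * x_fake

# penalty term: λ_gp * (||∇_x̂ D_src(x̂)||_2 - 1)^2
lambda_gp = 10.0
gp = lambda_gp * (np.linalg.norm(d_grad(x_hat)) - 1.0) ** 2
```

The penalty pushes the critic's gradient norm toward 1 along these interpolates, the 1-Lipschitz constraint of WGAN-GP.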

Network Architecture
