Derivation of Generative Adversarial Networks
2017-06-08 21:06
Proposition 1: For $G$ fixed, the optimal discriminator $D$ is
$$D^*_G(x) = \frac{q(x)}{q(x) + p(x)},$$
where $q(x)$ denotes the data distribution and $p(x)$ the distribution induced by the generator $G$.
Proof:
$$V(D,G) = \mathbb{E}_{q(x)}[\log D(x)] + \mathbb{E}_{p(z)}[\log(1 - D(G(z)))] = \int q(x)\log D(x)\,dx + \iint p(z)\,p(x\mid z)\log(1 - D(x))\,dx\,dz$$
Fixing $G$, the goal is to maximize $V(D,G)$. Since $p(x) = \int p(z)\,p(x\mid z)\,dz$,
$$V(D,G) = \int q(x)\log D(x)\,dx + \int p(x)\log(1 - D(x))\,dx = \int \big[q(x)\log D(x) + p(x)\log(1 - D(x))\big]\,dx.$$
Consider the function $f(y) = a\log y + b\log(1 - y)$ for $(a,b) \in \mathbb{R}^2_{\ge 0} \setminus \{(0,0)\}$. Setting $f'(y) = \frac{a}{y} - \frac{b}{1-y} = 0$ shows that its maximum on $[0,1]$ is attained at $y = \frac{a}{a+b}$. Applying this pointwise with $a = q(x)$ and $b = p(x)$ gives
$$D^*_G(x) = \frac{q(x)}{q(x) + p(x)}.$$
∎
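The claim about $f(y) = a\log y + b\log(1-y)$ can be checked numerically. Below is a minimal sketch; the values of $a$ and $b$ are arbitrary illustrative choices standing in for $q(x)$ and $p(x)$ at a fixed point $x$:

```python
import numpy as np

# Illustrative coefficients; they play the roles of q(x) and p(x) at a
# fixed point x, so both are taken nonnegative.
a, b = 0.7, 0.3

# Evaluate f(y) = a*log(y) + b*log(1 - y) on a dense grid of (0, 1).
y = np.linspace(1e-6, 1 - 1e-6, 1_000_000)
f = a * np.log(y) + b * np.log(1 - y)

# The grid maximizer should sit at a / (a + b) = 0.7, matching D*_G(x).
y_star = y[np.argmax(f)]
print(y_star)
```

With a grid spacing of about $10^{-6}$, the numerical maximizer lands within grid resolution of $\frac{a}{a+b}$.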
1. For distributions $P, Q$ of a continuous random variable, the KL divergence is defined by the integral
$$D_{KL}(P \,\|\, Q) = \int_{-\infty}^{\infty} p(x)\log\frac{p(x)}{q(x)}\,dx,$$
where $p, q$ denote the densities of $P, Q$.
2. The Jensen–Shannon divergence (JSD) is defined as
$$\mathrm{JSD}(P \,\|\, Q) = \frac{1}{2} D_{KL}(P \,\|\, M) + \frac{1}{2} D_{KL}(Q \,\|\, M),$$
where $M = \frac{1}{2}(P + Q)$.
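These two definitions translate directly into code. A small sketch for discrete distributions (the example probability vectors are hypothetical), verifying that KL is asymmetric while JSD is symmetric:

```python
import numpy as np

def kl(p, q):
    # D_KL(P || Q) = sum_x p(x) * log(p(x) / q(x)), natural log.
    return float(np.sum(p * np.log(p / q)))

def jsd(p, q):
    # JSD(P || Q) = (1/2) KL(P || M) + (1/2) KL(Q || M), with M = (P + Q) / 2.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two hypothetical distributions over three outcomes.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

print(kl(p, q), kl(q, p))    # differ: KL is not symmetric
print(jsd(p, q), jsd(q, p))  # equal: JSD is symmetric
```

JSD is also bounded above by $\log 2$ (in nats) and vanishes only when the two distributions coincide, which is what makes the minimum in Theorem 1 below tight.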
Theorem 1: The global minimum of the virtual training criterion $C(G)$ is achieved if and only if $p(x) = q(x)$. At that point, $C(G)$ attains the value $-\log 4$.
Proof:
As shown above,
$$V(D,G) = \int \big[q(x)\log D(x) + p(x)\log(1 - D(x))\big]\,dx,$$
and when $D$ is optimal, $D^*_G(x) = \frac{q(x)}{q(x) + p(x)}$. Substituting this in gives
$$C(G) = V(D^*_G, G) = \int \Big[q(x)\log\frac{q(x)}{q(x) + p(x)} + p(x)\log\frac{p(x)}{q(x) + p(x)}\Big]\,dx.$$
Adding and subtracting $\log 4$ on the right-hand side, and using $\int q(x)\,dx = \int p(x)\,dx = 1$:
$$\begin{aligned}
V(D^*_G, G) &= \int \Big[q(x)\log\frac{q(x)}{q(x)+p(x)} + p(x)\log\frac{p(x)}{q(x)+p(x)}\Big]\,dx - \log 4 + \log 2\int q(x)\,dx + \log 2\int p(x)\,dx \\
&= \int \Big[q(x)\log\frac{2\,q(x)}{q(x)+p(x)} + p(x)\log\frac{2\,p(x)}{q(x)+p(x)}\Big]\,dx - \log 4 \\
&= \int q(x)\log\frac{q(x)}{\frac{q(x)+p(x)}{2}}\,dx + \int p(x)\log\frac{p(x)}{\frac{q(x)+p(x)}{2}}\,dx - \log 4 \\
&= D_{KL}\Big(q \,\Big\|\, \frac{q+p}{2}\Big) + D_{KL}\Big(p \,\Big\|\, \frac{q+p}{2}\Big) - \log 4 \\
&= 2\cdot\mathrm{JSD}(q \,\|\, p) - \log 4.
\end{aligned}$$
Since $\mathrm{JSD}(q \,\|\, p) \ge 0$, with equality if and only if $q = p$, the global minimum $-\log 4$ is attained exactly when $p(x) = q(x)$. ∎
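The identity $V(D^*_G, G) = 2\cdot\mathrm{JSD}(q\,\|\,p) - \log 4$ holds for any pair of distributions, so it can be sanity-checked on small discrete stand-ins for $q$ and $p$ (the probability vectors below are hypothetical; sums over outcomes replace the integrals):

```python
import numpy as np

# Hypothetical discrete stand-ins for the data distribution q and the
# generator distribution p.
q = np.array([0.5, 0.3, 0.2])
p = np.array([0.2, 0.5, 0.3])

# Optimal discriminator D*_G(x) = q(x) / (q(x) + p(x)).
d_star = q / (q + p)

# V(D*_G, G) = sum_x [ q(x) log D*(x) + p(x) log(1 - D*(x)) ].
v = np.sum(q * np.log(d_star) + p * np.log(1 - d_star))

# 2 * JSD(q || p) - log 4, computed from the mixture m = (q + p) / 2.
m = 0.5 * (q + p)
rhs = np.sum(q * np.log(q / m)) + np.sum(p * np.log(p / m)) - np.log(4)

print(v, rhs)  # the two sides agree

# When p == q, D* = 1/2 everywhere and V collapses to -log 4.
v_min = np.sum(q * np.log(0.5) + q * np.log(0.5))
print(v_min, -np.log(4))
```

Since $q \ne p$ in this example, the JSD term is strictly positive and $V(D^*_G, G)$ sits strictly above the minimum $-\log 4$.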