I have been trying to implement a simple version of normalizing flows with Keras, as described in this paper: https://arxiv.org/pdf/1505.05770.pdf
My problem is that the loss is always infinite, and I can't figure out what I am doing wrong. Can anyone help me?
The procedure is as follows:
The encoder produces vectors of size latent_dim = 100: z_mean, z_log_var, u, b and w. From z_mean and z_log_var, using the reparameterization trick, I can sample z_0 ~ N(z_mean, exp(z_log_var)). Then I can compute log(abs(1 + u.T.dot(psi(z_0)))), and then I can compute z_1.
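For reference, these are the planar-flow formulas from the paper that the code below tries to implement (my transcription; the softplus construction for the invertibility constraint follows the paper's appendix):

f(z) = z + \hat{u}\,\tanh(w^\top z + b)
\psi(z) = \big(1 - \tanh^2(w^\top z + b)\big)\,w
\log\left|\det \frac{\partial f}{\partial z}\right| = \log\left|1 + \hat{u}^\top \psi(z)\right|
\hat{u} = u + \big(\operatorname{softplus}(w^\top u) - 1 - w^\top u\big)\,\frac{w}{\lVert w \rVert^2}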
Here is the code for these four steps:
import tensorflow as tf
from keras import backend as K
from keras import objectives  # 'losses' in newer Keras versions

def sampling(args):
    z_mean, z_log_var = args
    # sample epsilon according to N(0, I)
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.,
                              stddev=epsilon_std)  # the argument is 'std' in Keras 1
    # generate z0 according to N(z_mean, exp(z_log_var))
    z0 = z_mean + K.exp(z_log_var / 2) * epsilon
    print('z0', z0)
    return z0
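In case it helps, this is roughly how I wire the sampling function into the model with a Lambda layer (a sketch; original_dim, intermediate_dim and the encoder layers are placeholders for my actual architecture):

from keras.layers import Input, Dense, Lambda

x = Input(shape=(original_dim,))                 # original_dim: input size (placeholder)
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
# draw z0 with the reparameterization trick defined above
z0 = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])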
def logdet_loss(args):
    z0, w, u, b = args
    b2 = K.squeeze(b, 1)
    beta = K.sum(tf.multiply(w, z0), 1)   # <w|z0>
    linear_trans = beta + b2              # <w|z0> + b
    # replace u by u2 so that the transformation z0 -> z1 stays invertible
    alpha = K.sum(tf.multiply(w, u), 1)   # <w|u>
    diag1 = tf.diag(K.softplus(alpha) - 1 - alpha)
    # note: the norm must be taken per sample (axis 1), not over the whole batch
    u2 = u + K.dot(diag1, w) / (K.sum(K.square(w), 1, keepdims=True) + 1e-7)
    gamma = K.sum(tf.multiply(w, u2), 1)  # <w|u2>
    # log|det df/dz| = log|1 + psi(z0).u2| with psi(z) = (1 - tanh^2(<w|z> + b)) w
    logdet = K.log(K.abs(1 + (1 - K.square(K.tanh(linear_trans))) * gamma) + 1e-6)
    return logdet
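To convince myself that this formula is right, I check it on a single sample in pure NumPy (a sanity-check sketch, separate from the Keras graph): by the matrix determinant lemma, det(I + u2 psi^T) = 1 + psi.u2, so the analytic log-det should match a finite-difference Jacobian.

import numpy as np

rng = np.random.RandomState(0)
z0, w, u = rng.randn(3, 5)   # three random vectors of size 5
b = 0.1
alpha = w @ u
u_hat = u + (np.log1p(np.exp(alpha)) - 1 - alpha) * w / (w @ w)  # softplus(a) - 1 - a
psi = (1 - np.tanh(w @ z0 + b) ** 2) * w
analytic = np.log(np.abs(1 + u_hat @ psi))

f = lambda z: z + u_hat * np.tanh(w @ z + b)                     # planar flow
eps = 1e-6
J = np.array([(f(z0 + eps * e) - f(z0 - eps * e)) / (2 * eps) for e in np.eye(5)]).T
numeric = np.log(np.abs(np.linalg.det(J)))
print(analytic, numeric)   # the two values should agree to ~1e-6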
def transform_z0(args):
    z0, w, u, b = args
    b2 = K.squeeze(b, 1)
    beta = K.sum(tf.multiply(w, z0), 1)   # <w|z0>
    # replace u by u2 so that the transformation z0 -> z1 stays invertible
    alpha = K.sum(tf.multiply(w, u), 1)
    diag1 = tf.diag(K.softplus(alpha) - 1 - alpha)
    u2 = u + K.dot(diag1, w) / (K.sum(K.square(w), 1, keepdims=True) + 1e-7)
    diag2 = tf.diag(K.tanh(beta + b2))
    # generate z1 = z0 + u2 * tanh(<w|z0> + b)
    z1 = z0 + K.dot(diag2, u2)
    return z1
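For context, this is roughly how I connect both functions to the encoder outputs (again a sketch; w, u and b are extra Dense heads on the same hidden layer h as above):

# extra encoder heads producing the flow parameters per sample (sketch)
w = Dense(latent_dim)(h)
u = Dense(latent_dim)(h)
b = Dense(1)(h)   # squeezed to shape (batch,) inside the functions

z1 = Lambda(transform_z0, output_shape=(latent_dim,))([z0, w, u, b])
logdet = Lambda(logdet_loss)([z0, w, u, b])   # one scalar per sample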
Then here is the loss (where logdet is defined above):
def vae_loss(x, x_decoded_mean):
    # z0, z1, z_mean, z_log_var and logdet are the graph tensors defined above
    xent_loss = K.mean(objectives.categorical_crossentropy(x, x_decoded_mean), -1)
    ln_q0z0 = K.sum(log_normal2(z0, z_mean, z_log_var, eps=1e-6), -1)
    ln_pz1 = K.sum(log_stdnormal(z1), -1)
    result = K.mean(logdet + ln_pz1 + xent_loss - ln_q0z0)
    return result
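log_normal2 and log_stdnormal are not defined in the snippet above; I use Keras-backend ports of the parmesan helpers of the same names (written as I understand their definitions):

import math
from keras import backend as K

c = -0.5 * math.log(2 * math.pi)

def log_stdnormal(x):
    # elementwise log N(x; 0, 1)
    return c - K.square(x) / 2

def log_normal2(x, mean, log_var, eps=0.0):
    # elementwise log N(x; mean, exp(log_var)); eps guards against a zero variance
    return c - log_var / 2 - K.square(x - mean) / (2 * K.exp(log_var) + eps)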
Best answer
I have adapted the Keras tutorial on VAEs for this, here: https://github.com/sbaurdlp/keras-iaf-mnist
In case anyone is interested in looking... Strangely, adding more flow layers does not improve performance, and I can't see what is wrong in the code.
(Original question on Stack Overflow: https://stackoverflow.com/questions/42620065/)