python - Custom Keras callback and changing the weight (beta) of the regularization term in a variational autoencoder loss function

Tags: python, tensorflow, keras, deep-learning, autoencoder

The variational autoencoder loss function has the form Loss = Loss_reconstruction + Beta * Loss_kld. I am trying to efficiently implement Kullback-Leibler divergence cyclic annealing, that is, dynamically changing the weight beta during training. I subclassed tf.keras.callbacks.Callback as a starting point, but I don't know how to update a tf.keras.Model variable from a custom Keras callback. In addition, I would like to track how beta changes at the end of each training step (on_train_batch_end). Right now I keep a Python list in the callback class, but I know Python lists don't play well with TensorFlow: when I fit the model, I get a warning that my on_train_batch_end function is slower than the processing of the batch itself. I think I should use a tf.TensorArray instead of a Python list, but the tf.TensorArray method write cannot take a tf.Variable as the index (i.e., as the step count changes, the index at which that step's new beta should be written into the tf.TensorArray changes as well)... Is there a better way to store the changing values? This GitHub example seems to show a solution that does not involve a custom tf.keras.Model and uses a different kind of KL annealing. Below are the callback and a dummy VAE.
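For reference, the cyclic schedule implemented in compute_tau and beta_tau_cyclic_annealing below is, at global step t, with T total training steps, M cycles, and proportion R:

    tau = ((t - 1) mod floor(T/M)) / floor(T/M)
    beta(tau) = f(tau) if tau <= R, else 1

where f is a monotonically increasing function such as the sigmoid.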

class CyclicAnnealing(tf.keras.callbacks.Callback):
  """Cyclic annealing from https://arxiv.org/abs/1903.10145
  
  Requires that the model track its training iterations and
  the total number of training iterations. It also requires
  that the model expose the hyperparameters `M` and `R`.
  """

  def __init__(self, schedule_fxn='sigmoid', **kwargs):
    super().__init__(**kwargs)

    # INEFFICIENT WAY OF LOGGING `betas` AND THE TRAIN STEPS...
    # The `train_iterations` list could be removed because in principle
    # if I have a list of betas, I know that the list of betas is of length
    # (number of samples//batch size) * number of epochs.
    # This is because (number of samples//batch size) * number of epochs is the total number of steps for the model.
    self.betas = []
    self.train_iterations = []

    if schedule_fxn == 'sigmoid':
      self.schedule_fxn = self.sigmoid

    elif schedule_fxn == 'linear':
      self.schedule_fxn = self.linear

    else:
      raise ValueError('Invalid arg: `schedule_fxn`')

  def on_epoch_end(self, epoch, logs=None):
    print('\nCurrent anneal weight B =', self.beta)

  def on_train_batch_end(self, batch, logs=None):
    """Computes betas and updates list"""

    # Compute beta
    self.beta = self.beta_tau_cyclic_annealing(self.compute_tau())

    ###################################
    # HOW TO UPDATE BETA IN THE MODEL???
    ###################################

    # Update the lists for logging; cast to Python scalars so the
    # lists hold values rather than references to live TF objects
    self.betas.append(float(self.beta))
    self.train_iterations.append(int(self.model._train_counter))

  def get_annealing_data(self):
    return {'betas': self.betas, 'training_iterations': self.train_iterations}

  def sigmoid(self, x):
    """Monotonic increasing function
    
    :return: tf.constant float32
    """

    return (1/(1+tf.keras.backend.exp(-x)))

  def linear(self, x):
    return x/self.model._R

  def compute_tau(self):
    """Used to determine kld_beta.
    
    :return: tf.constant float32
    """

    t = tf.identity(self.model._train_counter)
    T = self.model._total_training_iterations
    M = self.model._M
    numerator = tf.cast(tf.math.floormod(tf.subtract(t, 1), tf.math.floordiv(T, M)), dtype=tf.float32)
    denominator = tf.cast(tf.math.floordiv(T, M), dtype=tf.float32)
    return tf.math.divide(numerator, denominator)

  def beta_tau_cyclic_annealing(self, tau):
    """Compute change for kld_beta.
    
    :param tau: Increases beta_tau
    :param R: Proportion used to increase Beta w/i cycle.

    :return: tf.constant float32
    """

    R = self.model._R
    if tau <= R:
        return self.schedule_fxn(tau)
    else:
      return tf.constant(1.0)

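One way to fill in the "HOW TO UPDATE BETA IN THE MODEL???" gap — sketched here, not taken from the original post — is to make kld_beta a non-trainable tf.Variable on the model (see the dummy VAE below) and let the callback mutate it in place:

  def on_train_batch_end(self, batch, logs=None):
    self.beta = self.beta_tau_cyclic_annealing(self.compute_tau())

    # `assign` updates the variable in place, so a compiled train
    # function picks up the new value without retracing.
    self.model.kld_beta.assign(self.beta)

    # Plain Python scalars are enough for logging purposes.
    self.betas.append(float(self.beta))
    self.train_iterations.append(int(self.model._train_counter))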
The dummy VAE:

class VAE(tf.keras.Model):
    def __init__(self, num_samples, batch_size, epochs, features, units, latent_size, kld_beta, M, R, **kwargs):
        """Defines state for model.

        :param num_samples: <class 'int'>
        :param batch_size: <class 'int'>
        :param epochs: <class 'int'>
        :param features: <class 'int'> if input is (n, m), then `features` is the `m` dimension. This param is used with the decoder.
        :param units: <class 'int'> Number of hidden units.
        :param latent_size: <class 'int'> Dimension of latent space z.
        :param kld_beta: <tf.Variable??> for dynamic weight.
        :param M: <class 'int'> Hyperparameter for cyclic annealing.
        :param R: <class 'float'> Hyperparameter for cyclic annealing.
        """
        super().__init__(**kwargs)

        # NEED TO UPDATE THIS SOMEHOW -- I think it should be a tf.Variable?
        self.kld_beta = kld_beta

        # Hyperparameters for CyclicAnnealing
        self._M = M
        self._R = R
        self._total_training_iterations = (num_samples//batch_size) * epochs    

        # Encoder and Decoder not defined, but typically
        # encoder = inputs -> dense -> dense mu and dense log var -> z
        # while decoder = z -> dense -> reconstructions
        self.encoder = Encoder(units, latent_size)
        self.decoder = Decoder(features)

    def call(self, inputs):
        z, mus, log_vars = self.encoder(inputs)
        reconstructions = self.decoder(z)

        kl_loss = self.compute_kl_loss(mus, log_vars)

        # THE BETA WEIGHT NEEDS TO BE DYNAMIC
        weighted_kl_loss = self.kld_beta * kl_loss
      
        self.add_loss(weighted_kl_loss)

        return reconstructions
        
    def compute_kl_loss(self, mus, log_vars):
         return -0.5 * tf.reduce_mean(1. + log_vars - tf.exp(log_vars) - tf.pow(mus, 2))
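Although the question leaves the type of kld_beta open, the callback sketch above only needs it to be a non-trainable tf.Variable. A hypothetical wiring of the pieces (Encoder, Decoder, and x_train are placeholders from the question; the numbers are illustrative):

    # Hypothetical wiring; all sizes here are illustrative only.
    vae = VAE(num_samples=60000, batch_size=32, epochs=10,
              features=784, units=256, latent_size=2,
              kld_beta=tf.Variable(0.0, trainable=False, dtype=tf.float32),
              M=4, R=0.5)
    vae.compile(optimizer='adam', loss='mse')

    annealer = CyclicAnnealing(schedule_fxn='sigmoid')
    vae.fit(x_train, x_train, batch_size=32, epochs=10, callbacks=[annealer])

After training, annealer.get_annealing_data() returns the logged betas and step counts.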

Best Answer

Regarding your first question: it depends on how you plan to apply the gradient updates with your optimizer (e.g. ADAM). When training a VAE with TensorFlow/Keras, I typically compute the model's loss and update the model's parameters from that loss inside a function decorated with @tf.function:

@tf.function
def train_step(self, model, batch, gamma, capacity):
    with tf.GradientTape() as tape:
        x, c = batch
        loss = compute_loss(model, x, c, gamma, capacity)
        tf.print('Total loss: ', loss)

    gradients = tape.gradient(loss, model.trainable_variables)
    self.optimizer.apply_gradients(zip(gradients, model.trainable_variables))

Note the variables gamma and capacity. They are defined as terms that influence the loss function. I update them after x epochs as follows:

new_weight = min(tf.keras.backend.get_value(capacity) + (20. / capacity_annealtime), 20.)
tf.keras.backend.set_value(capacity, new_weight)
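If capacity is held in a tf.Variable rather than a Keras backend variable, the same update can be written with assign, which also works inside a tf.function. A sketch under that assumption, with capacity_annealtime defined as in the answer's setup:

    # Equivalent update, assuming `capacity` is a tf.Variable.
    capacity.assign(tf.minimum(capacity + 20.0 / capacity_annealtime, 20.0))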

At this point you can easily save new_weight for logging purposes, or you can define a custom TensorFlow logger that writes to a file. If you really want to use an array, you can simply define a TF TensorArray as:

this_array = tf.TensorArray(tf.float32, size=0, dynamic_size=True)

and update it after x steps:

this_array = this_array.write(this_array.size(), new_beta_weight)

You can also use a second array, updated at the same time, to record the epoch or batch at which new_beta_weight was updated.
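Putting those pieces together, a self-contained sketch of TensorArray-based logging; note that this_array.size() serves as the write index, which sidesteps the question's problem of keeping the index in a tf.Variable (all names here are illustrative):

    import tensorflow as tf

    betas = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    steps = tf.TensorArray(tf.int64, size=0, dynamic_size=True)

    for step in range(5):  # stand-in for the real training loop
        new_beta_weight = tf.constant(0.1 * step, dtype=tf.float32)
        # write() returns a new TensorArray, so reassign the result
        betas = betas.write(betas.size(), new_beta_weight)
        steps = steps.write(steps.size(), tf.constant(step, dtype=tf.int64))

    print(betas.stack().numpy())  # [0.  0.1 0.2 0.3 0.4]
    print(steps.stack().numpy())  # [0 1 2 3 4]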

Finally, the loss function itself looks like this:

def compute_loss(model, x, c, gamma_weight, capacity_weight):

  mean, logvar = model.encode(x, c)

  z = model.reparameterize(mean, logvar)
  reconstruction = model.decode(z, c)

  total_reconstruction_loss = tf.nn.sigmoid_cross_entropy_with_logits(
      labels=x, logits=reconstruction)

  total_reconstruction_loss = tf.reduce_sum(total_reconstruction_loss, 1)

  kl_loss = 1 + logvar - tf.square(mean) - tf.exp(logvar)
  kl_loss = tf.reduce_mean(kl_loss)
  kl_loss *= -0.5

  total_loss = tf.reduce_mean(total_reconstruction_loss * 3 + (
        gamma_weight * tf.abs(kl_loss - capacity_weight)))
  return total_loss
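Note that the gamma_weight * tf.abs(kl_loss - capacity_weight) term means this answer anneals a KL capacity rather than beta itself; schematically the objective is roughly

    total_loss = mean(3 * L_reconstruction + gamma * |KL - C|)

so the same variable-plus-assign machinery applies whether the annealed quantity is beta, gamma, or the capacity C.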

Note that the model is of type tf.keras.Model. Hopefully this gives you some different insights into this particular topic.

Original question on Stack Overflow: https://stackoverflow.com/questions/68636987/
