The usage syntax is straightforward:
decay = tf.constant(0.001, dtype=tf.float32)
w = tf.get_variable(name='weight', shape=[512, 512],
                    regularizer=tf.contrib.layers.l2_regularizer(decay))
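For reference, `tf.contrib.layers.l2_regularizer(decay)` returns a function that maps a weight tensor to `decay * tf.nn.l2_loss(w)`, i.e. `decay * sum(w**2) / 2` (note the factor of 1/2 from `tf.nn.l2_loss`). A minimal NumPy sketch of that computation, with illustrative variable names:

```python
import numpy as np

def l2_regularizer(decay):
    """Return a function computing decay * sum(w**2) / 2,
    mirroring tf.contrib.layers.l2_regularizer (which uses tf.nn.l2_loss)."""
    def regularizer(w):
        return decay * np.sum(np.square(w)) / 2.0
    return regularizer

w = np.array([[1.0, 2.0], [3.0, 4.0]])
reg = l2_regularizer(0.001)
print(reg(w))  # 0.001 * (1 + 4 + 9 + 16) / 2 = 0.015
```

In TensorFlow, this value is what gets appended to the `tf.GraphKeys.REGULARIZATION_LOSSES` collection when the variable is created.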
However, the documentation only states the following:
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
The above does not mean the regularization loss is minimized automatically. So do we need to manually get the variable from the collection tf.GraphKeys.REGULARIZATION_LOSSES and add it to our main loss in order for it to be applied?
Best Answer
So do we need to manually get the variable from the collection tf.GraphKeys.REGULARIZATION_LOSSES and add it to our main loss in order for it to be applied?
Yes and no: you do need to retrieve the regularization loss manually, via tf.losses.get_regularization_loss() (which already sums every regularization loss registered in the collection, so there is no need to search the collection for variables yourself). Then simply add it to your model's loss and use the sum as the loss your optimizer minimizes:
logits = model_fn(inputs)
model_loss = your_chosen_loss_function(logits)
regularization_loss = tf.losses.get_regularization_loss()
your_chosen_optimizer.minimize(model_loss + regularization_loss)
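To make the effect concrete, here is a small NumPy sketch (not TensorFlow; the model and names are illustrative) of one gradient-descent step on `model_loss + regularization_loss`. The L2 term contributes `decay * w` to the gradient, which on its own pulls the weights toward zero:

```python
import numpy as np

decay = 0.001  # regularization strength, as in the question
lr = 0.1       # learning rate

w = np.array([2.0, -3.0])   # weights
x = np.array([1.0, 1.0])    # a single input
target = 0.0

# model_loss: squared error of a linear model
pred = w @ x
model_loss = (pred - target) ** 2

# regularization_loss: decay * sum(w**2) / 2, as tf.nn.l2_loss would compute
reg_loss = decay * np.sum(w ** 2) / 2.0
total_loss = model_loss + reg_loss

# gradient of the combined loss: model gradient plus decay * w
grad = 2 * (pred - target) * x + decay * w
w = w - lr * grad

print(total_loss)  # 1.0 + 0.0065 = 1.0065
```

Optimizing only `model_loss` would drop the `decay * w` term entirely, which is exactly why the regularization loss must be added explicitly as shown in the answer.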
On "tensorflow - How to use the regularizer parameter in tf.get_variable?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56899105/