python - Is the parameter update in the following Theano method incorrect?

Tags: python theano deep-learning

I'm working through an online tutorial on momentum-based learning and came across this method in Theano:

import theano
import theano.tensor as T

def gradient_updates_momentum(cost, params, learning_rate, momentum):
    '''
    Compute updates for gradient descent with momentum

    :parameters:
        - cost : theano.tensor.var.TensorVariable
            Theano cost function to minimize
        - params : list of theano.tensor.var.TensorVariable
            Parameters to compute gradient against
        - learning_rate : float
            Gradient descent learning rate
        - momentum : float
            Momentum parameter, should be at least 0 (standard gradient descent) and less than 1

    :returns:
        updates : list
            List of updates, one for each parameter
    '''
    # Make sure momentum is a sane value
    assert momentum < 1 and momentum >= 0
    # List of update steps for each parameter
    updates = []
    # Just gradient descent on cost
    for param in params:
        # For each parameter, we'll create a param_update shared variable.
        # This variable will keep track of the parameter's update step across iterations.
        # We initialize it to 0
        param_update = theano.shared(param.get_value()*0., broadcastable=param.broadcastable)
        # Each parameter is updated by taking a step in the direction of the gradient.
        # However, we also "mix in" the previous step according to the given momentum value.
        # Note that when updating param_update, we are using its old value and also the new gradient step.
        updates.append((param, param - learning_rate*param_update))
        # Note that we don't need to derive backpropagation to compute updates - just use T.grad!
        updates.append((param_update, momentum*param_update + (1. - momentum)*T.grad(cost, param)))
    return updates
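
For context, here is a minimal sketch of how such an update list might be wired into a training step. The scalar parameter w, the toy cost, and the hyperparameters are illustrative assumptions of mine, not part of the tutorial:

# Hypothetical usage: minimize (w - 3)^2 for a single scalar parameter w
w = theano.shared(0., name='w')
cost = (w - 3.) ** 2
train = theano.function([], cost,
                        updates=gradient_updates_momentum(cost, [w], 0.1, 0.9))
for _ in range(200):
    train()  # each call evaluates the cost, then applies all the updates
print(w.get_value())  # should end up close to 3.0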

Shouldn't the order of the following two lines be swapped?

updates.append((param, param - learning_rate*param_update))

updates.append((param_update, momentum*param_update + (1. - momentum)*T.grad(cost, param)))

As far as I understand it, the updates only run after the training function has been executed and the cost has been computed, right?

Doesn't that mean we should take the current cost together with the existing param_update value (from the previous iteration), compute the new param_update first, and only then use it to update the current param value?

Why is it the other way around, and why is that correct?

Best Answer

The order of the updates in the list passed to theano.function is ignored. Every update expression is evaluated using the old (pre-update) values of the shared variables, and all updates are then applied simultaneously.

This code demonstrates that the update order is ignored:

import theano

p = 0.5
param = theano.shared(1.)
param_update = theano.shared(2.)
cost = 3 * param * param
update_a = (param, param - param_update)
update_b = (param_update, p * param_update + (1 - p) * theano.grad(cost, param))
updates1 = [update_a, update_b]
updates2 = [update_b, update_a]
f1 = theano.function([], outputs=[param, param_update], updates=updates1)
f2 = theano.function([], outputs=[param, param_update], updates=updates2)
print(f1(), f1())
# Reset the shared variables, then run with the update order reversed
param.set_value(1)
param_update.set_value(2)
print(f2(), f2())
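
With the initial values above, both runs should print the same results ([1.0, 2.0] from the first call and [-1.0, 4.0] from the second), because outputs and update expressions alike are evaluated from the pre-update values of the shared variables, regardless of the order of the update pairs.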

If, logically, what you want is

new_a = old_a + a_update
new_b = new_a + b_update

then you need to provide the updates like this:

new_a = old_a + a_update
new_b = old_a + a_update + b_update
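
A minimal Theano sketch of this substitution trick, assuming two hypothetical shared variables a and b with constant step values (all names here are illustrative, not from the answer):

import theano

a = theano.shared(0.)
b = theano.shared(0.)
a_step = theano.shared(1.)   # stand-in for whatever expression computes a's step
b_step = theano.shared(10.)  # stand-in for b's step

# Reuse the symbolic expression for new_a wherever the "updated" a is needed,
# since every update is evaluated from the old shared-variable values.
new_a = a + a_step
updates = [(a, new_a), (b, new_a + b_step)]
f = theano.function([], [], updates=updates)
f()
print(a.get_value(), b.get_value())  # 1.0 11.0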

Source question on Stack Overflow: https://stackoverflow.com/questions/33103111/
