machine-learning - TensorFlow learning rate decay - how to properly supply the step number for the decay?

Tags: machine-learning neural-network tensorflow deep-learning

I am training my deep network in TensorFlow and I am trying to use learning rate decay with it. As far as I understand, I should use the train.exponential_decay function for that: it computes the proper learning rate value for the current training step from various parameters. I just need to supply it with the step that is currently being performed. I suspected that, as usual when I need to feed something into the network, I should use tf.placeholder(tf.int32), but it seems I was wrong. When I do that, I get the following error:

TypeError: Input 'ref' of 'AssignAdd' Op requires l-value input

What am I doing wrong? Unfortunately, I have not been able to find a good example of training a network with decay. My whole code is below. The network has 2 hidden ReLU layers, an L2 penalty on the weights, and dropout on both hidden layers.

#We try the following - 2 ReLU layers
#Dropout on both of them
#Also L2 regularization on them
#and learning rate decay on top


#batch size for SGD
batch_size = 128
#beta parameter for L2 loss
beta = 0.001

#that's how many hidden neurons we want
num_hidden_neurons = 1024

#learning rate decay
#starting value, number of steps per decay period,
#and the decay factor
start_learning_rate = 0.05
decay_steps = 1000
decay_size = 0.95

#building tensorflow graph
graph = tf.Graph()
with graph.as_default():
  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  #now let's build our first hidden layer
  #its weights
  hidden_weights_1 = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_hidden_neurons]))
  hidden_biases_1 = tf.Variable(tf.zeros([num_hidden_neurons]))

  #now the layer 1 itself. It multiplies data by weights, adds biases
  #and takes ReLU over result
  hidden_layer_1 = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights_1) + hidden_biases_1)

  #add dropout on hidden layer 1
  #we pick the probability of keeping each activation (keep_prob)
  #and randomly switch off the rest
  keep_prob = tf.placeholder("float")
  hidden_layer_drop_1 = tf.nn.dropout(hidden_layer_1, keep_prob)  

  #now let's build our second hidden layer
  #its weights
  hidden_weights_2 = tf.Variable(
    tf.truncated_normal([num_hidden_neurons, num_hidden_neurons]))
  hidden_biases_2 = tf.Variable(tf.zeros([num_hidden_neurons]))

  #now the layer 2 itself. It multiplies data by weights, adds biases
  #and takes ReLU over result
  hidden_layer_2 = tf.nn.relu(tf.matmul(hidden_layer_drop_1, hidden_weights_2) + hidden_biases_2)

  #add dropout on hidden layer 2
  #we reuse the same keep probability as for hidden layer 1
  hidden_layer_drop_2 = tf.nn.dropout(hidden_layer_2, keep_prob)  

  #time for the output linear layer
  #its weights connect the hidden neurons to the output labels
  #and its biases are added to the output labels
  out_weights = tf.Variable(
    tf.truncated_normal([num_hidden_neurons, num_labels]))  

  out_biases = tf.Variable(tf.zeros([num_labels]))  

  #compute the output
  #notice that for training we use the dropped-out activations,
  #i.e. the variant of the hidden layers with dropout active
  out_layer = tf.matmul(hidden_layer_drop_2, out_weights) + out_biases
  #our real output is a softmax of prior result
  #and we also compute its cross-entropy to get our loss
  #Notice - we introduce our L2 here
  loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    out_layer, tf_train_labels) +
    beta*tf.nn.l2_loss(hidden_weights_1) +
    beta*tf.nn.l2_loss(hidden_biases_1) +
    beta*tf.nn.l2_loss(hidden_weights_2) +
    beta*tf.nn.l2_loss(hidden_biases_2) +
    beta*tf.nn.l2_loss(out_weights) +
    beta*tf.nn.l2_loss(out_biases)))

  #variable to count number of steps taken
  global_step = tf.placeholder(tf.int32)

  #compute current learning rate
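  #for reference, exponential_decay computes:
  #  learning_rate = start_learning_rate * decay_size ** (global_step / decay_steps)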
  learning_rate = tf.train.exponential_decay(start_learning_rate, global_step, decay_steps, decay_size)
  #use it in optimizer
  optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

  #nice, now let's calculate the predictions on each dataset for evaluating the
  #performance so far
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(out_layer)
  valid_relu_1 = tf.nn.relu(tf.matmul(tf_valid_dataset, hidden_weights_1) + hidden_biases_1)
  valid_relu_2 = tf.nn.relu(tf.matmul(valid_relu_1, hidden_weights_2) + hidden_biases_2)
  valid_prediction = tf.nn.softmax(tf.matmul(valid_relu_2, out_weights) + out_biases)

  test_relu_1 = tf.nn.relu(tf.matmul(tf_test_dataset, hidden_weights_1) + hidden_biases_1)
  test_relu_2 = tf.nn.relu(tf.matmul(test_relu_1, hidden_weights_2) + hidden_biases_2)
  test_prediction = tf.nn.softmax(tf.matmul(test_relu_2, out_weights) + out_biases)



#now comes the actual training of the ANN we built
#we will run it for a number of steps and evaluate the progress
#every 500 steps

#number of steps we will train our ANN
num_steps = 3001

#actual training
with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob : 0.5, global_step: step}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
      print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

Best Answer

Instead of using a placeholder for global_step, try using a Variable:

global_step = tf.Variable(0)

You have to remove global_step from the feed_dict. Note that you do not have to increment global_step manually; TensorFlow does that for you automatically.
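The error occurs because minimize(loss, global_step=global_step) internally creates an AssignAdd op that increments the step counter after every update, and AssignAdd needs a mutable Variable (an l-value), which a placeholder is not. A minimal sketch of the corrected graph fragment, reusing the question's hyperparameter names (trainable=False is an extra assumption that simply keeps the counter out of the trained variables):

#step counter: a non-trainable Variable that the optimizer increments itself
global_step = tf.Variable(0, trainable=False)

#the decay schedule now reads the counter directly from the graph
learning_rate = tf.train.exponential_decay(
    start_learning_rate, global_step, decay_steps, decay_size)

#passing global_step here makes minimize() bump the counter on every run
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)

The feed_dict in the training loop then shrinks to the three remaining placeholders:

feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob : 0.5}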

This answer is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/38297193/
