python - Low GPU usage when training a CNN

Tags: python tensorflow tensorflow-datasets

I just installed tensorflow-gpu and started training my convolutional neural network. The problem is that my GPU usage percentage is constantly at 0%, sometimes rising to 20%. The CPU sits at around 20% and the disk above 60%. To test whether I had installed it correctly, I ran some matrix multiplications, and in that case everything worked fine, with GPU usage above 90%.

with tf.device("/gpu:0"):
    #here I set up the computational graph

When I run the graph I use the following, so that the runtime can decide whether an operation has a GPU implementation:

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:

I have an NVIDIA GeForce GTX 950M graphics card, and I get no errors when running. What am I doing wrong?
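
(For reference, the installation check mentioned above was a plain matrix multiplication, roughly along the lines below; the matrix sizes are illustrative, and the optional log_device_placement flag prints which device each op actually runs on.)

import tensorflow as tf

# Illustrative GPU sanity check (TF 1.x): multiply two large random matrices
# and log which device each op is placed on.
with tf.device("/gpu:0"):
    a = tf.random_normal([4000, 4000])
    b = tf.random_normal([4000, 4000])
    c = tf.matmul(a, b)

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(c)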

Later edit: my computational graph

with tf.device("/gpu:0"):
    X = tf.placeholder(tf.float32, shape=[None, height, width, channels], name="X")
    dropout_rate= 0.3


    training = tf.placeholder_with_default(False, shape=(), name="training")
    X_drop = tf.layers.dropout(X, dropout_rate, training = training)

    y = tf.placeholder(tf.int32, shape = [None], name="y")


    conv1 = tf.layers.conv2d(X_drop, filters=32, kernel_size=3,
                            strides=1, padding="SAME",
                            activation=tf.nn.relu, name="conv1")

    conv2 = tf.layers.conv2d(conv1, filters=64, kernel_size=3,
                            strides=2, padding="SAME",
                            activation=tf.nn.relu, name="conv2")

    pool3 = tf.nn.max_pool(conv2,
                            ksize=[1, 2, 2, 1],
                            strides=[1, 2, 2, 1],
                            padding="VALID")

    conv4 = tf.layers.conv2d(pool3, filters=128, kernel_size=4,
                            strides=3, padding="SAME",
                            activation=tf.nn.relu, name="conv4")

    pool5 = tf.nn.max_pool(conv4,
                            ksize=[1, 2, 2, 1],
                            strides=[1, 1, 1, 1],
                            padding="VALID")


    pool5_flat = tf.reshape(pool5, shape = [-1, 128*2*2])

    fullyconn1 = tf.layers.dense(pool5_flat, 128, activation=tf.nn.relu, name = "fc1")
    fullyconn2 = tf.layers.dense(fullyconn1, 64, activation=tf.nn.relu, name = "fc2")

    logits = tf.layers.dense(fullyconn2, 2, name="output")

    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)

    loss = tf.reduce_mean(xentropy)
    optimizer = tf.train.AdamOptimizer()
    training_op = optimizer.minimize(loss)

    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

    init = tf.global_variables_initializer()
saver = tf.train.Saver()

hm_epochs = 100
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True

The batch size is 128.

with tf.Session(config=config) as sess:
        tbWriter = tf.summary.FileWriter(logPath, sess.graph)
        dataset = tf.data.Dataset.from_tensor_slices((training_images, training_labels))
        dataset = dataset.map(rd.decodeAndResize)
        dataset = dataset.batch(batch_size)

        testset = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
        testset = testset.map(rd.decodeAndResize)
        testset = testset.batch(len(test_images))

        iterator = dataset.make_initializable_iterator()
        test_iterator = testset.make_initializable_iterator()
        next_element = iterator.get_next()
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            sess.run(iterator.initializer)
            while True:
                try:
                    epoch_x, epoch_y = sess.run(next_element)
                    # _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                    # epoch_loss += c
                    sess.run(training_op, feed_dict={X:epoch_x, y:epoch_y, training:True})
                except tf.errors.OutOfRangeError:
                    break


            sess.run(test_iterator.initializer)
            # acc_train = accuracy.eval(feed_dict={X:epoch_x, y:epoch_y})
            try:
                next_test = test_iterator.get_next()
                test_images, test_labels = sess.run(next_test)
                acc_test = accuracy.eval(feed_dict={X:test_images, y:test_labels})
                print("Epoch {0}: Train accuracy {1}".format(epoch, acc_test))
            except tf.errors.OutOfRangeError:
                break
            # print("Epoch {0}: Train accuracy {1}, Test accuracy: {2}".format(epoch, acc_train, acc_test))
        save_path = saver.save(sess, "./my_first_model")

I have 9k training images and 3k test images.

Best Answer

There are a couple of issues in your code that may be causing the low GPU usage.

1) Add a prefetch instruction at the end of your Dataset pipeline so the CPU can maintain a buffer of input data batches, ready to be moved to the GPU.

# this should be the last thing in your pipeline
dataset = dataset.prefetch(1)
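
Applied to the pipeline from the question, the ordering would roughly be map, then batch, then prefetch as the final step:

dataset = tf.data.Dataset.from_tensor_slices((training_images, training_labels))
dataset = dataset.map(rd.decodeAndResize)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(1)  # keep a batch buffered and ready to hand to the GPU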

2) You are using feed_dict together with a Dataset iterator to feed your model. This is not the intended way! feed_dict is the slowest method of inputting data to your model and is not recommended. You should define your model in terms of the iterator's next_element output.

Example:

next_x, next_y = iterator.get_next()
with tf.device('/GPU:0'):
    conv1 = tf.layers.conv2d(next_x, filters=32, kernel_size=3,
                        strides=1, padding="SAME",
                        activation=tf.nn.relu, name="conv1")
    # rest of model here...
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, 
                 labels=next_y)

Then you can run your training op (feeding only the training flag through feed_dict), and the iterator will handle feeding data to your model behind the scenes. Here is another related Q&A. Your new training loop would look like this:

while True:
    try:
        sess.run(training_op, feed_dict={training:True})
    except tf.errors.OutOfRangeError:
        break

You should only pass data through feed_dict that the iterator does not supply, and it should typically be very lightweight.
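
Putting both points together for this graph, a rough sketch (TF 1.x, layer details elided) would build the prefetched dataset as above, define the layers on the iterator output, and re-initialize the iterator each epoch, so that only the lightweight training flag goes through feed_dict:

iterator = dataset.make_initializable_iterator()  # dataset already ends with prefetch(1)
next_x, next_y = iterator.get_next()

training = tf.placeholder_with_default(False, shape=(), name="training")
# ... conv/pool/dense layers built on next_x; loss and training_op built against next_y ...

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(hm_epochs):
        sess.run(iterator.initializer)  # restart the dataset for each epoch
        while True:
            try:
                sess.run(training_op, feed_dict={training: True})
            except tf.errors.OutOfRangeError:
                break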

For more performance tips, you can refer to this guide on the TF website.

About python - Low GPU usage when training a CNN: we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49339062/
