tensorflow - TensorFlow 1.0 MNIST code error

Tags: tensorflow, mnist

I am currently learning TensorFlow 1.0 with Python 3.5.2. I tried the following code, which I found on GitHub, but I get the error No module named 'tensorflowvisu'. If I remove the import of tensorflowvisu, I instead get an error on the line I = tensorflowvisu.tf_format_mnist_images(X, Ypred, Y_)  # assembles 10x10 images by default
NameError: name 'tensorflowvisu' is not defined
What should I do to make this code work? Does anyone have working MNIST code for TensorFlow 1.0 and Python 3.5 that I could learn from? Any response is appreciated.
https://github.com/martin-gorner/tensorflow-mnist-tutorial/blob/master/mnist_1.0_softmax.py

import tensorflow as tf
import tensorflowvisu
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
tf.set_random_seed(0)

# neural network with 1 layer of 10 softmax neurons
#
# · · · · · · · · · ·       (input data, flattened pixels)       X [batch, 784]        # 784 = 28 * 28
# \x/x\x/x\x/x\x/x\x/    -- fully connected layer (softmax)      W [784, 10]     b[10]
#   · · · · · · · ·                                              Y [batch, 10]

# The model is:
#
# Y = softmax( X * W + b)
#              X: matrix for 100 grayscale images of 28x28 pixels, flattened (there are 100 images in a mini-batch)
#              W: weight matrix with 784 lines and 10 columns
#              b: bias vector with 10 dimensions
#              +: add with broadcasting: adds the vector to each line of the matrix (numpy)
#              softmax(matrix) applies softmax on each line
#              softmax(line) applies an exp to each value then divides by the norm of the resulting line
#              Y: output matrix with 100 lines and 10 columns

# Download images and labels into mnist.test (10K images+labels) and mnist.train (60K images+labels)
mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)

# input X: 28x28 grayscale images, the first dimension (None) will index the images in the mini-batch
X = tf.placeholder(tf.float32, [None, 28, 28, 1])
# correct answers will go here
Y_ = tf.placeholder(tf.float32, [None, 10])
# weights W[784, 10]   784=28*28
W = tf.Variable(tf.zeros([784, 10]))
# biases b[10]
b = tf.Variable(tf.zeros([10]))

# flatten the images into a single line of pixels
# -1 in the shape definition means "the only possible dimension that will preserve the number of elements"
XX = tf.reshape(X, [-1, 784])

# The model
Y = tf.nn.softmax(tf.matmul(XX, W) + b)

# loss function: cross-entropy = - sum( Y_i * log(Yi) )
#                           Y: the computed output vector
#                           Y_: the desired output vector

# cross-entropy
# log takes the log of each element, * multiplies the tensors element by element
# reduce_mean will add all the components in the tensor
# so here we end up with the total cross-entropy for all images in the batch
cross_entropy = -tf.reduce_mean(Y_ * tf.log(Y)) * 1000.0  # normalized for batches of 100 images,
                                                          # *10 because  "mean" included an unwanted division by 10

# accuracy of the trained model, between 0 (worst) and 1 (best)
correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# training, learning rate = 0.005
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)

# matplotlib visualisation
allweights = tf.reshape(W, [-1])
allbiases = tf.reshape(b, [-1])
I = tensorflowvisu.tf_format_mnist_images(X, Y, Y_)  # assembles 10x10 images by default
It = tensorflowvisu.tf_format_mnist_images(X, Y, Y_, 1000, lines=25)  # 1000 images on 25 lines
datavis = tensorflowvisu.MnistDataVis()

# init
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)


# You can call this function in a loop to train the model, 100 images at a time
def training_step(i, update_test_data, update_train_data):

    # training on batches of 100 images with 100 labels
    batch_X, batch_Y = mnist.train.next_batch(100)

    # compute training values for visualisation
    if update_train_data:
        a, c, im, w, b = sess.run([accuracy, cross_entropy, I, allweights, allbiases], feed_dict={X: batch_X, Y_: batch_Y})
        datavis.append_training_curves_data(i, a, c)
        datavis.append_data_histograms(i, w, b)
        datavis.update_image1(im)
        print(str(i) + ": accuracy:" + str(a) + " loss: " + str(c))

    # compute test values for visualisation
    if update_test_data:
        a, c, im = sess.run([accuracy, cross_entropy, It], feed_dict={X: mnist.test.images, Y_: mnist.test.labels})
        datavis.append_test_curves_data(i, a, c)
        datavis.update_image2(im)
        print(str(i) + ": ********* epoch " + str(i*100//mnist.train.images.shape[0]+1) + " ********* test accuracy:" + str(a) + " test loss: " + str(c))

    # the backpropagation training step
    sess.run(train_step, feed_dict={X: batch_X, Y_: batch_Y})


datavis.animate(training_step, iterations=2000+1, train_data_update_freq=10, test_data_update_freq=50, more_tests_at_start=True)

# to save the animation as a movie, add save_movie=True as an argument to datavis.animate
# to disable the visualisation use the following line instead of the datavis.animate line
# for i in range(2000+1): training_step(i, i % 50 == 0, i % 10 == 0)

print("max test accuracy: " + str(datavis.get_max_test_accuracy()))

# final max test accuracy = 0.9268 (10K iterations). Accuracy should peak above 0.92 in the first 2000 iterations.

Best answer

I ran into the same problem. The solution is to run the code from the folder that contains all of the code. Do not just copy the mnist_1.0_softmax.py code into your IDE and run it there. Download or clone the whole repo from the link below:

https://github.com/martin-gorner/tensorflow-mnist-tutorial.git

After cloning, you will see that the folder contains a file called tensorflowvisu.py. So it is not a module you install from conda or pip; in this particular case it is simply a file that the author uses as a module. From the command line, go to the directory where all of this code lives and run it from there:
python mnist_1.0_softmax.py
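Put together, the steps look roughly like this (a sketch of the shell commands, assuming git is installed and you clone into your current working directory):

git clone https://github.com/martin-gorner/tensorflow-mnist-tutorial.git
cd tensorflow-mnist-tutorial
python mnist_1.0_softmax.py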

Now it should work. You should see a popup window with six live-updating charts.

If you want to run it from your IDE instead, open your IDE (Atom in my case), go to File > Open Folder > click OK > select the file mnist_1.0_softmax.py and press Ctrl+Shift+B.
The same popup window should appear.
The important thing is to open the file from the directory provided by the author.
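
If you only want a working TensorFlow 1.0 / Python 3.5 MNIST script without the visualisation, the comment at the bottom of the original file already suggests replacing the datavis.animate line with a plain loop. A minimal sketch along those lines, with the tensorflowvisu parts (the I, It and datavis lines) stripped out, would look roughly like this:

import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets

tf.set_random_seed(0)
mnist = read_data_sets("data", one_hot=True, reshape=False, validation_size=0)

# same single-layer softmax model as in the file above
X = tf.placeholder(tf.float32, [None, 28, 28, 1])
Y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
XX = tf.reshape(X, [-1, 784])                      # flatten 28x28x1 images to 784 pixels
Y = tf.nn.softmax(tf.matmul(XX, W) + b)

cross_entropy = -tf.reduce_mean(Y_ * tf.log(Y)) * 1000.0
correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(2000 + 1):
    batch_X, batch_Y = mnist.train.next_batch(100)   # mini-batches of 100 images
    sess.run(train_step, feed_dict={X: batch_X, Y_: batch_Y})
    if i % 100 == 0:                                 # report test accuracy every 100 steps
        a, c = sess.run([accuracy, cross_entropy],
                        feed_dict={X: mnist.test.images, Y_: mnist.test.labels})
        print(str(i) + ": test accuracy " + str(a) + " test loss " + str(c))

Accuracy should still peak a little above 0.92 within the first 2000 iterations, as noted at the bottom of the original file.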

Regarding "tensorflow - TensorFlow 1.0 MNIST code error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42707081/
