python-3.x - Tensorflow, making predictions with a trained network

Tags: python-3.x tensorflow deep-learning

So I am training a network to classify images in tensorflow. After training the network, I began trying to use it to classify other images. The goal is to import an image, feed it to the classifier, and have it print the result. I am having some trouble getting this part off the ground, though. Here is what I have so far. I found that tf.argmax(y, 1) gave an error, and changing it to 0 fixed that error, but I am not convinced it is actually working. I ran two images through the classifier, and even though they are vastly different, they both got the same class. Just need some perspective here. Is this valid? Or is there something wrong here that will always give me the same class (in this case, both images I tried got class 0)?

Is this the correct way to make predictions in tensorflow? This is just the culmination of my debugging, and I am not sure whether this is how it is supposed to be done.

from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle

X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=20, random_state=0)
X_train, y_train = shuffle(X_train, y_train)




from tensorflow.contrib.layers import flatten

def LeNet(x):
    # Arguments used for tf.truncated_normal, randomly defines variables
    # for the weights and biases for each layer
    mu = 0
    sigma = 0.1

    # SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean=mu, stddev=sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # SOLUTION: Activation.
    conv1 = tf.nn.relu(conv1)

    # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=mu, stddev=sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b

    # SOLUTION: Activation.
    conv2 = tf.nn.relu(conv2)

    # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Flatten. Input = 5x5x16. Output = 400.
    fc0 = flatten(conv2)

    # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean=mu, stddev=sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b

    # SOLUTION: Activation.
    fc1 = tf.nn.relu(fc1)

    # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean=mu, stddev=sigma))
    fc2_b = tf.Variable(tf.zeros(84))
    fc2   = tf.matmul(fc1, fc2_W) + fc2_b

    # SOLUTION: Activation.
    fc2 = tf.nn.relu(fc2)

    # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(84, 43), mean=mu, stddev=sigma))
    fc3_b  = tf.Variable(tf.zeros(43))
    logits = tf.matmul(fc2, fc3_W) + fc3_b

    return logits



import tensorflow as tf

x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
EPOCHS=10
BATCH_SIZE=128

rate = 0.001

logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)

    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})

        validation_accuracy = evaluate(X_validation, y_validation)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()

    saver.save(sess, './lenet')
    print("Model saved")


import cv2
import numpy as np

image = cv2.imread('File path')
image = cv2.resize(image, (32, 32))  # classifier takes 32x32 images
image = np.array(image)


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver3 = tf.train.import_meta_graph('./lenet.meta')
    saver3.restore(sess, "./lenet")
    pred = tf.nn.softmax(logits)
    predictions = sess.run(tf.argmax(y,0), feed_dict={x: image})
    print (predictions)

Best answer

So what had to happen here first was clearing the kernel and the outputs. Somewhere along the way my placeholders got muddled up, and clearing the kernel took care of that. Then I had to realize what actually needed to be done here: I had to call the softmax function on my new data.

Like so:

pred = tf.nn.softmax(logits)
classification = sess.run(pred, feed_dict={x: image_array}) 
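
To make the whole prediction step concrete, here is a minimal sketch of the restore-and-classify flow, assuming the graph built above (x, logits, saver) is still in scope and that the checkpoint prefix ./lenet matches the earlier saver.save call; the 'File path' placeholder is kept from the question. Two details matter: the single image needs a batch dimension before it is fed to x, and the class index comes from argmax over the softmax of logits (the class axis), not from argmax over the label placeholder y.

import cv2
import numpy as np

pred = tf.nn.softmax(logits)                 # class probabilities, shape (batch, 43)

image = cv2.imread('File path')              # placeholder path, kept from the question
image = cv2.resize(image, (32, 32))          # network expects 32x32x3 input
image_array = np.expand_dims(image, axis=0)  # add batch dimension -> (1, 32, 32, 3)

with tf.Session() as sess:
    saver.restore(sess, './lenet')           # restore initializes the variables; no re-init needed
    classification = sess.run(pred, feed_dict={x: image_array})
    print(np.argmax(classification, axis=1)) # predicted class index per image

Note that the question's snippet both re-initializes all variables and imports a second copy of the graph via import_meta_graph; restoring directly into the graph already in scope, as in this sketch, avoids both pitfalls.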

Regarding python-3.x - Tensorflow, making predictions with a trained network, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42081917/
