tensorflow - Loading a model in TFLearn - predicts the same value every time

Tags: tensorflow machine-learning neural-network conv-neural-network tflearn

I trained a model with tflearn to perform binary classification on some data. The model trained to 97% accuracy.

I want to use model.load() in another program to predict the classes of some test input data.

However, model.load() only works when I include the argument weights_only=True. When I omit that argument from model.load(), it throws an error:

NotFoundError (see above for traceback): Key is_training not found in checkpoint

When I load the model and run some predictions on my small test set, the classifications look strange: the model predicts a perfect 1 at index 1 every time. That shouldn't happen if the model really trained to such high accuracy. Here is what the predictions look like (expected output on the right):

[[  5.59889193e-22   1.00000000e+00]    [0, 1]
 [  4.25160435e-22   1.00000000e+00]    [0, 1]
 [  6.65333618e-23   1.00000000e+00]    [0, 1]
 [  2.07748895e-21   1.00000000e+00]    [0, 1]
 [  1.77639440e-21   1.00000000e+00]    [0, 1]
 [  5.77486922e-18   1.00000000e+00]    [1, 0]
 [  2.70562403e-19   1.00000000e+00]    [1, 0]
 [  2.78288828e-18   1.00000000e+00]    [1, 0]
 [  6.10306495e-17   1.00000000e+00]    [1, 0]
 [  2.35787162e-19   1.00000000e+00]]   [1, 0]

Note: this test data was taken from the data used to train the model, so it should be classified correctly with high accuracy.
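One pattern worth noting (an illustration, not taken from the question): saturated softmax outputs like the ones above usually mean the logits are huge, which is what happens when prediction-time inputs are on a different scale than the inputs the network saw during training. A minimal numpy sketch:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Logits in a normal range give a soft, informative distribution.
print(softmax(np.array([1.0, 2.0])))      # roughly [0.27, 0.73]

# Scale the logits up ~255x (as unnormalized uint8 pixel inputs would)
# and softmax saturates to a perfect one-hot, regardless of the sample.
print(softmax(np.array([255.0, 510.0])))  # effectively [0., 1.]
```

This is why every prediction can come out as `[~0, 1.0]` even for a well-trained network: the inputs, not the weights, are the problem.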

Code used to train the model:

import numpy as np
import pandas as pd
import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

tf.reset_default_graph()

train = pd.read_csv("/Users/darrentaggart/Library/Mobile Documents/com~apple~CloudDocs/Uni Documents/MEE4040 - Project 4/Coding Related Stuff/Neural Networks/modeltraindata_1280.csv")
test = pd.read_csv("/Users/darrentaggart/Library/Mobile Documents/com~apple~CloudDocs/Uni Documents/MEE4040 - Project 4/Coding Related Stuff/Neural Networks/modeltestdata_320.csv")

X = train.iloc[:,1:].values.astype(np.float32)
Y = np.array([np.array([int(i == l) for i in range(2)]) for l in train.iloc[:,:1].values])
test_x = test.iloc[:,1:].values.astype(np.float32)
test_y = np.array([np.array([int(i == l) for i in range(2)]) for l in test.iloc[:,:1].values])

X = X.reshape([-1, 16, 16, 1])
test_x = test_x.reshape([-1, 16, 16, 1])

convnet = input_data(shape=[None, 16, 16, 1], name='input')

initialization = tf.contrib.layers.variance_scaling_initializer(factor=1.0, mode='FAN_IN', uniform=False)

convnet = conv_2d(convnet, 32, 2, activation='elu', weights_init=initialization)
convnet = max_pool_2d(convnet, 2)

convnet = tflearn.layers.normalization.batch_normalization(convnet, beta=0.0, gamma=1.0, epsilon=1e-05, decay=0.9, stddev=0.002, trainable=True, restore=True, reuse=False, scope=None, name='BatchNormalization')

convnet = conv_2d(convnet, 64, 2, activation='elu', weights_init=initialization)
convnet = max_pool_2d(convnet, 2)

convnet = tflearn.layers.normalization.batch_normalization(convnet, beta=0.0, gamma=1.0, epsilon=1e-05, decay=0.9, stddev=0.002, trainable=True, restore=True, reuse=False, scope=None, name='BatchNormalization')

convnet = fully_connected(convnet, 254, activation='elu', weights_init=initialization)
convnet = dropout(convnet, 0.8)

convnet = tflearn.layers.normalization.batch_normalization(convnet, beta=0.0, gamma=1.0, epsilon=1e-05, decay=0.9, stddev=0.002, trainable=True, restore=True, reuse=False, scope=None, name='BatchNormalization')

convnet = fully_connected(convnet, 2, activation='softmax')
adam = tflearn.optimizers.Adam(learning_rate=0.00065, beta1=0.9, beta2=0.999, epsilon=1e-08)
convnet = regression(convnet, optimizer=adam, loss='categorical_crossentropy', name='targets')

model = tflearn.DNN(convnet, tensorboard_dir='/Users/darrentaggart/Library/Mobile Documents/com~apple~CloudDocs/Uni Documents/MEE4040 - Project 4/Coding Related Stuff/Neural Networks/latest logs',
tensorboard_verbose=3)

model.fit({'input': X}, {'targets': Y}, n_epoch=100, batch_size=16, 
validation_set=({'input': test_x}, {'targets': test_y}), snapshot_step=10, show_metric=True, run_id='1600 - ConvConvFC254 LR0.00065decay BN VSinit 16batchsize 100epochs')

model.save('tflearncnn.model')
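As an aside, the one-hot label construction above can be written more compactly. A sketch, assuming the first CSV column holds integer class labels 0/1 (the array below is a stand-in for `train.iloc[:, :1].values.ravel()`):

```python
import numpy as np

labels = np.array([0, 1, 1, 0])  # stand-in for the label column

# np.eye(2) is the 2x2 identity matrix; indexing its rows by label
# yields the corresponding one-hot vector for each sample.
Y = np.eye(2, dtype=np.int64)[labels]
print(Y)
# [[1 0]
#  [0 1]
#  [0 1]
#  [1 0]]
```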

Code used to load the model and generate predictions:

import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

test = pd.read_csv("/Users/darrentaggart/Library/Mobile Documents/com~apple~CloudDocs/Uni Documents/MEE4040 - Project 4/Coding Related Stuff/Neural Networks/modelpredictiondata.csv")

X = test.iloc[:,1:].values.astype(np.float32)

sess = tf.InteractiveSession()

tflearn.is_training(False)

convnet = input_data(shape=[None, 16, 16, 1], name='input')

initialization = tf.contrib.layers.variance_scaling_initializer(factor=1.0, mode='FAN_IN', uniform=False)

convnet = conv_2d(convnet, 32, 2, activation='elu', weights_init=initialization)
convnet = max_pool_2d(convnet, 2)

convnet = tflearn.layers.normalization.batch_normalization(convnet, beta=0.0, gamma=1.0, epsilon=1e-05, decay=0.9, stddev=0.002, trainable=True, restore=True, reuse=False, scope=None, name='BatchNormalization')

convnet = conv_2d(convnet, 64, 2, activation='elu', weights_init=initialization)
convnet = max_pool_2d(convnet, 2)

convnet = tflearn.layers.normalization.batch_normalization(convnet, beta=0.0, gamma=1.0, epsilon=1e-05, decay=0.9, stddev=0.002, trainable=True, restore=True, reuse=False, scope=None, name='BatchNormalization')

convnet = fully_connected(convnet, 254, activation='elu', weights_init=initialization)

convnet = tflearn.layers.normalization.batch_normalization(convnet, beta=0.0, gamma=1.0, epsilon=1e-05, decay=0.9, stddev=0.002, trainable=True, restore=True, reuse=False, scope=None, name='BatchNormalization')

convnet = fully_connected(convnet, 2, activation='softmax')
adam = tflearn.optimizers.Adam(learning_rate=0.00065, beta1=0.9, beta2=0.999, epsilon=1e-08)
convnet = regression(convnet, optimizer=adam, loss='categorical_crossentropy', name='targets')

model = tflearn.DNN(convnet)

if os.path.exists('{}.meta'.format('tflearncnn.model')):
    model.load('tflearncnn.model', weights_only=False)
    print('model loaded!')

X = X.reshape([-1, 16, 16, 1])
model_out = model.predict(X)  # predict the whole batch in one call

for sample_out in model_out:
    # argmax over the two softmax outputs picks the predicted class
    str_label = 'Boss' if np.argmax(sample_out) == 1 else 'Slot'
    print(sample_out, str_label)
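To sanity-check a loaded model, it can help to compare the argmax of each prediction row against the expected one-hot labels and compute an accuracy. An illustrative sketch with made-up numbers in the same shape as the question's output:

```python
import numpy as np

# Stand-ins for model.predict(X) output and the expected one-hot labels
model_out = np.array([[5.6e-22, 1.0],
                      [1.0, 2.7e-19],
                      [6.1e-17, 1.0]])
expected = np.array([[0, 1],
                     [1, 0],
                     [1, 0]])

pred_classes = model_out.argmax(axis=1)   # predicted class per sample
true_classes = expected.argmax(axis=1)    # expected class per sample
accuracy = (pred_classes == true_classes).mean()
print(accuracy)
```

If this number is far below the training accuracy on data the model has already seen, the problem is almost always in how the inputs are prepared at prediction time, not in the weights.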

I know it's a long shot, but I thought someone might be able to shed some light on this. Thanks.

Best Answer

It has been a year and a half since this question was asked, but sharing is caring after all. This uses tflearn and AlexNet for binary classification of images.

The trick is to normalize the image after converting it to an nparray. Don't forget to change the directory paths.
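The normalization the answer refers to is just scaling pixel values from [0, 255] down to [0, 1] after the PIL-to-array conversion, so that prediction-time inputs match what the network saw during training. A minimal sketch of that one step (the array below is a stand-in for `pil_to_nparray(img)`):

```python
import numpy as np

raw = np.array([[0, 128, 255]], dtype=np.uint8)  # stand-in for pil_to_nparray(img)

img = raw.astype(np.float32) / 255.0             # << the crucial step
print(img.min(), img.max())  # 0.0 1.0
```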

from __future__ import division, print_function, absolute_import
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression
from tflearn.data_utils import *  # load_image, resize_image, convert_color, pil_to_nparray
import os
from PIL import Image
from numpy import array

def res_image(f, image_shape=[224,224], grayscale=False, normalize=True):
    img = load_image(f)
    width, height = img.size
    if width != image_shape[0] or height != image_shape[1]:
        img = resize_image(img, image_shape[0], image_shape[1])
    if grayscale:
        img = convert_color(img, 'L')
    elif img.mode == 'L':
        img = convert_color(img, 'RGB')

    img = pil_to_nparray(img)
    if normalize: # << this here is what you need
        img /= 255.
    img = array(img).reshape(1, image_shape[0], image_shape[1], 3)
    return img

# Building the network
network = input_data(shape=[None, 227, 227, 3])
network = conv_2d(network, 96, 11, strides=4, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 256, 5, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 384, 3, activation='relu')
network = conv_2d(network, 384, 3, activation='relu')
network = conv_2d(network, 256, 3, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = fully_connected(network, 4096, activation='tanh')
network = dropout(network, 0.5)
network = fully_connected(network, 4096, activation='tanh')
network = dropout(network, 0.5)
network = fully_connected(network, 2, activation='softmax') # output is the number of outcomes
network = regression(network, optimizer='momentum',
                     loss='categorical_crossentropy',
                     learning_rate=0.001)

# Training
model = tflearn.DNN(network, 
                    tensorboard_dir=R'C:\Users\b0588718\Source\Repos\AlexNet\AlexNet')

model.load('model.tfl')

f = r'C:\Users\b0588718\Source\Repos\AlexNet\AlexNet\rawdata\jpg\0\P1170047.jpg'
img = res_image(f, [227,227], grayscale=False, normalize=True)

pred = model.predict(img)
print(" %s" % pred[0])

Regarding "tensorflow - Loading a model in TFLearn - predicts the same value every time", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48797618/
