I am implementing a CNN with Keras for image classification. I train the model with the .fit_generator() method until an early-stopping condition on the validation set is met, using the following code:
history_3conv = cnn3.fit_generator(train_data, steps_per_epoch=train_data.n // 98,
                                   callbacks=[es, ckpt_3Conv],
                                   validation_data=valid_data,
                                   validation_steps=valid_data.n // 98, epochs=50)
The last two epochs before training stopped were the following:
As shown, the final training accuracy is 0.91. However, when I evaluate the training, test and validation sets with the model.evaluate() method, I get the following results:
So my question is: why do I get two different values? Should I use evaluate_generator() instead? Or should I fix the seed in flow_from_directory()?
Note that for data augmentation I use the following code:
trdata = ImageDataGenerator(rotation_range=90,horizontal_flip=True)
vldata = ImageDataGenerator()
train_data = trdata.flow(x_train,y_train,batch_size=98)
valid_data = vldata.flow(x_valid,y_valid,batch_size=98)
Also, I know that setting use_multiprocessing=False in fit_generator slows training down significantly. So what do you think is the best solution?
Best answer
Use model.fit() and model.evaluate(): model.fit_generator and model.evaluate_generator are deprecated.
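Since TensorFlow 2.1, model.fit() and model.evaluate() accept Python generators and keras.utils.Sequence objects directly, so the *_generator variants are no longer needed. A minimal sketch of the equivalent calls, assuming cnn3, es, ckpt_3Conv, train_data and valid_data are the model, callbacks and generators from the question:
# Sketch only: same training loop as in the question, but using fit()/evaluate()
# instead of the deprecated fit_generator()/evaluate_generator() (TF >= 2.1).
history_3conv = cnn3.fit(
    train_data,
    steps_per_epoch=train_data.n // 98,
    validation_data=valid_data,
    validation_steps=valid_data.n // 98,
    callbacks=[es, ckpt_3Conv],
    epochs=50)
val_loss, val_acc = cnn3.evaluate(valid_data, steps=valid_data.n // 98)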
The training and validation data are augmented data produced by the generators, so there will be some variation in your accuracy. If you pass non-augmented validation or test data to validation_data in fit_generator and to model.evaluate() / model.evaluate_generator, the accuracy will not vary.
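So one option is to evaluate against a separate, non-augmented generator. A minimal sketch, assuming x_valid, y_valid and cnn3 are the arrays and model from the question (the names eval_datagen and eval_data are introduced only for illustration; shuffle=False keeps the sample order fixed):
from tensorflow.keras.preprocessing.image import ImageDataGenerator

eval_datagen = ImageDataGenerator()  # no augmentation, no rescaling changes
eval_data = eval_datagen.flow(x_valid, y_valid, batch_size=98, shuffle=False)
loss, acc = cnn3.evaluate(eval_data)  # TF >= 2.1 accepts generators directly
print("non-augmented validation accuracy: %.4f" % acc)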
Below is a simple cat-vs-dog classification program that I ran for one epoch:
- The validation data generator applies only a rescale transform and no other augmentation.
- The validation accuracy is displayed at the end of the epoch.
- The validation data generator is reset with val_data_gen.reset(). This should not be necessary, since no augmentation is applied to it.
- The accuracy on the validation data is evaluated with both model.evaluate and model.evaluate_generator.
The validation accuracy computed at the end of the epoch matches the accuracy computed with model.evaluate and model.evaluate_generator.
Code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
import os
import numpy as np
import matplotlib.pyplot as plt
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
batch_size = 1
epochs = 1
IMG_HEIGHT = 150
IMG_WIDTH = 150
train_image_generator = ImageDataGenerator(rescale=1./255,brightness_range=[0.5,1.5]) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                            directory=train_dir,
                                                            shuffle=True,
                                                            target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                            class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                               directory=validation_dir,
                                                               target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                               class_mode='binary')
model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])
optimizer = 'SGD'
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size)
from sklearn.metrics import confusion_matrix
# Reset
val_data_gen.reset()
# Evaluate on Validation data
scores = model.evaluate(val_data_gen)
print("%s%s: %.2f%%" % ("evaluate ",model.metrics_names[1], scores[1]*100))
scores = model.evaluate_generator(val_data_gen)
print("%s%s: %.2f%%" % ("evaluate_generator ",model.metrics_names[1], scores[1]*100))
Output:
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
2000/2000 [==============================] - 74s 37ms/step - loss: 0.6932 - accuracy: 0.5025 - val_loss: 0.6815 - val_accuracy: 0.5000
1000/1000 [==============================] - 11s 11ms/step - loss: 0.6815 - accuracy: 0.5000
evaluate accuracy: 50.00%
evaluate_generator accuracy: 50.00%
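As an aside, confusion_matrix is imported in the snippet above but never called. A hedged sketch of how it could be applied to the same validation data (eval_gen is a new generator introduced only for this illustration; shuffle must be off so the yielded order matches eval_gen.classes, and the model outputs raw logits, so predictions are thresholded at 0):
# Sketch only: non-shuffled generator so predictions align with eval_gen.classes.
eval_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                          directory=validation_dir,
                                                          shuffle=False,
                                                          target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                          class_mode='binary')
logits = model.predict(eval_gen, steps=total_val // batch_size)
y_pred = (logits.ravel() > 0).astype(int)  # Dense(1) with from_logits=True: threshold logits at 0
print(confusion_matrix(eval_gen.classes[:len(y_pred)], y_pred))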
Regarding "python - Should I use evaluate_generator or evaluate to evaluate my CNN model", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/63684459/