train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.1,
    zoom_range=0.1,
    rotation_range=5.,
    width_shift_range=0.1,
    height_shift_range=0.1)
val_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=20,
    shuffle=True,
    classes=TYPES,
    class_mode='categorical')
validation_generator = val_datagen.flow_from_directory(
    val_data_dir,
    target_size=(img_width, img_height),
    batch_size=20,
    shuffle=True,
    classes=TYPES,
    class_mode='categorical')
model.fit_generator(
    train_generator,
    samples_per_epoch=2000,
    nb_epoch=20)
Epoch 13/50
2021/2000 [==============================] - 171s - loss: 0.7973 - acc: 0.7041
Epoch 14/50
480/2000 [======>.......................] - ETA: 128s - loss: 0.8708
My ImageDataGenerators read 2261 training images and 567 test images from their folders. I am trying to train my model with samples_per_epoch = 2000 and batch_size = 20. samples_per_epoch is evenly divisible by batch_size, but somehow the epoch picks up extra samples and Keras shows this warning:
UserWarning: Epoch comprised more than samples_per_epoch samples, which might affect learning results. Set samples_per_epoch correctly to avoid this warning.
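A minimal simulation (plain Python, not Keras itself) of why the epoch overshoots: fit_generator in Keras 1 keeps drawing whole batches until the running sample count reaches samples_per_epoch, and flow_from_directory on 2261 images with batch_size = 20 yields 113 full batches plus a final batch of 1. That short batch shifts the epoch boundary off the batch grid, so later epochs cannot land on exactly 2000 samples:

```python
from itertools import cycle

# 2261 images with batch_size=20: 113 full batches plus one final batch of 1
batch_sizes = [20] * 113 + [1]
gen = cycle(batch_sizes)

samples_per_epoch = 2000
epoch_totals = []
for _ in range(3):
    seen = 0
    # fit_generator keeps pulling whole batches until the quota is reached
    while seen < samples_per_epoch:
        seen += next(gen)
    epoch_totals.append(seen)

print(epoch_totals)  # the first epoch lands on 2000 exactly; later epochs overshoot
```

Once an epoch comprises more samples than samples_per_epoch, Keras emits exactly the warning quoted above.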
This works on a single GPU, but when I try to train on multiple GPUs I get this error:
InvalidArgumentError (see above for traceback): Incompatible shapes: [21] vs. [20] [[Node: Equal = Equal[T=DT_INT64, _device="/job:localhost/replica:0/task:0/gpu:0"](ArgMax, ArgMax_1)]] [[Node: gradients/concat_25_grad/Slice_1/_10811 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:1", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_101540_gradients/concat_25_grad/Slice_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:1"]]
I am using this code for model parallelization:
Thanks for your help...
Best answer
The number of training samples must equal steps_per_epoch × batch_size. Drop one image so that the training set holds 2260 samples: steps_per_epoch = 113, batch_size = 20.
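A short sketch of that arithmetic, assuming the counts from the question (2261 images, batch size 20): take the largest whole number of batches that fits in the data, so every batch has the same size and the leftover image is dropped.

```python
batch_size = 20
n_train = 2261  # images found by flow_from_directory

# Largest whole number of full batches; the 1 leftover image is discarded
steps_per_epoch = n_train // batch_size           # 113
samples_per_epoch = steps_per_epoch * batch_size  # 2260, value for fit_generator
print(steps_per_epoch, samples_per_epoch)
```

Keeping every batch the same size is also what avoids the multi-GPU shape mismatch above: the `[21] vs. [20]` error appears when a short final batch reaches a replica that expects exactly batch_size samples.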
Regarding "python - Keras fit_generator trains on extra samples", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43542226/