python - What does "validation_data will override validation_split." mean in the Keras model.fit documentation?

Tags: python keras

I'm new to Python and machine learning. I'm confused by the sentence "validation_data will override validation_split" in the Keras model.fit documentation. Does this mean that if I supply validation data like this

history = model.fit(X_train, [train_labels_hotEncode, train_labels_hotEncode, train_labels_hotEncode], validation_data=(y_train, [test_labels_hotEncode, test_labels_hotEncode, test_labels_hotEncode]), validation_split=0.3, epochs=epochs, batch_size=64, callbacks=[lr_sc])

the validation split will not be accepted? And the function will only use validation_data instead of performing the split?

Also, I'm trying to evaluate my model on 30% of the training data.

However, if I call model.fit with only validation_split=0.3, the validation accuracy turns out very poor. I'm using the Inception (GoogLeNet) architecture for this:

loss: 1.8204 - output_loss: 1.1435 - auxilliary_output_1_loss: 1.1292 - auxilliary_output_2_loss: 1.1272 - output_acc: 0.3845 - auxilliary_output_1_acc: 0.3797 - auxilliary_output_2_acc: 0.3824 - val_loss: 9.7972 - val_output_loss: 6.6655 - val_auxilliary_output_1_loss: 5.0973 - val_auxilliary_output_2_loss: 5.3417 - val_output_acc: 0.0000e+00 - val_auxilliary_output_1_acc: 0.0000e+00 - val_auxilliary_output_2_acc: 0.0000e+00

GoogLeNet code

# Assumed imports for the snippet below (not shown in the original post);
# kernel_init, bias_init and inception_module are defined elsewhere.
import math
import keras
from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPool2D, AveragePooling2D,
                          GlobalAveragePooling2D, Flatten, Dense, Dropout)
from keras.callbacks import LearningRateScheduler

input_layer = Input(shape=(224,224,3))

image = Conv2D(64,(7,7),padding='same', strides=(2,2), activation='relu', name='conv_1_7x7/2', kernel_initializer=kernel_init, bias_initializer=bias_init)(input_layer)

image = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_1_3x3/2')(image)
image = Conv2D(64, (1,1), padding='same', strides=(1,1), activation='relu', name='conv_2a_3x3/1' )(image)
image = Conv2D(192, (3,3), padding='same', strides=(1,1), activation='relu', name='conv_2b_3x3/1')(image)
image = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_2_3x3/2')(image)

image = inception_module(image,
                    filters_1x1= 64,
                    filters_3x3_reduce= 96,
                    filter_3x3 = 128,
                    filters_5x5_reduce=16,
                    filters_5x5= 32,
                    filters_pool_proj=32,
                    name='inception_3a')

image = inception_module(image,
                            filters_1x1=128,
                            filters_3x3_reduce=128,
                            filter_3x3=192,
                            filters_5x5_reduce=32,
                            filters_5x5=96,
                            filters_pool_proj=64,
                            name='inception_3b')

image = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_3_3x3/2')(image)

image = inception_module(image, 
                            filters_1x1=192,
                            filters_3x3_reduce=96,
                            filter_3x3=208,
                            filters_5x5_reduce=16,
                            filters_5x5=48,
                            filters_pool_proj=64,
                            name='inception_4a')

image1 = AveragePooling2D((5,5), strides=3)(image)
image1 = Conv2D(128, (1,1), padding='same', activation='relu')(image1)
image1 = Flatten()(image1)
image1 = Dense(1024, activation='relu')(image1)
image1 = Dropout(0.4)(image1)
image1 = Dense(5, activation='softmax', name='auxilliary_output_1')(image1)

image = inception_module(image,
                            filters_1x1 = 160,
                            filters_3x3_reduce= 112,
                            filter_3x3= 224,
                            filters_5x5_reduce= 24,
                            filters_5x5= 64,
                            filters_pool_proj=64,
                            name='inception_4b')

image = inception_module(image,
                           filters_1x1= 128,
                           filters_3x3_reduce = 128,
                           filter_3x3= 256,
                           filters_5x5_reduce= 24,
                           filters_5x5=64,
                           filters_pool_proj=64,
                           name='inception_4c')

image = inception_module(image,
                           filters_1x1=112,
                           filters_3x3_reduce=144,
                           filter_3x3= 288,
                           filters_5x5_reduce= 32,
                           filters_5x5=64,
                           filters_pool_proj=64,
                           name='inception_4d')

image2 = AveragePooling2D((5,5), strides=3)(image)
image2 = Conv2D(128, (1,1), padding='same', activation='relu')(image2)
image2 = Flatten()(image2)
image2 = Dense(1024, activation='relu')(image2)
image2 = Dropout(0.4)(image2) #Changed from 0.7
image2 = Dense(5, activation='softmax', name='auxilliary_output_2')(image2)

image = inception_module(image,
                            filters_1x1=256,
                            filters_3x3_reduce=160,
                            filter_3x3=320,
                            filters_5x5_reduce=32,
                            filters_5x5=128,
                            filters_pool_proj=128,
                            name= 'inception_4e')

image = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_4_3x3/2')(image)

image = inception_module(image,
                           filters_1x1=256,
                           filters_3x3_reduce=160,
                           filter_3x3= 320,
                           filters_5x5_reduce=32,
                           filters_5x5= 128,
                           filters_pool_proj=128,
                           name='inception_5a')

image = inception_module(image, 
                           filters_1x1=384,
                           filters_3x3_reduce=192,
                           filter_3x3=384,
                           filters_5x5_reduce=48,
                           filters_5x5=128,
                           filters_pool_proj=128,
                           name='inception_5b')

image = GlobalAveragePooling2D(name='avg_pool_5_3x3/1')(image)

image = Dropout(0.4)(image)
image = Dense(5, activation='softmax', name='output')(image)

model = Model(input_layer, [image,image1,image2], name='inception_v1')

model.summary()


epochs = 2
initial_lrate = 0.01 # Changed From 0.01

def decay(epoch, steps=100):
  initial_lrate = 0.01
  drop = 0.96
  epochs_drop = 8
  lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
  return lrate

sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# nadam = keras.optimizers.Nadam(lr= 0.002, beta_1=0.9, beta_2=0.999, epsilon=None)
# keras
lr_sc = LearningRateScheduler(decay)
# rms = keras.optimizers.RMSprop(lr = initial_lrate, rho=0.9, epsilon=1e-08, decay=0.0)
# ad = keras.optimizers.adam(lr=initial_lrate)
# Pass the configured optimizer object; the string 'sgd' would create a fresh
# default SGD and ignore the momentum/decay settings above.
model.compile(loss=['categorical_crossentropy', 'categorical_crossentropy', 'categorical_crossentropy'], loss_weights=[1, 0.3, 0.3], optimizer=sgd, metrics=['accuracy'])

# loss = 'categorical_crossentropy', 'categorical_crossentropy','categorical_crossentropy'

history = model.fit(X_train, [train_labels_hotEncode,train_labels_hotEncode,train_labels_hotEncode], validation_split=0.3 ,epochs=epochs, batch_size= 32, callbacks=[lr_sc])
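As a quick sanity check on the step-decay schedule used above, the same formula can be evaluated for a few epochs outside of Keras (a sketch restating the decay function, nothing Keras-specific):

```python
import math

def decay(epoch, steps=100):
    # Same step-decay formula as in the training script above.
    initial_lrate = 0.01
    drop = 0.96
    epochs_drop = 8
    return initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

# The rate stays at 0.01 for epochs 0-6, then is multiplied by 0.96
# once every 8 epochs.
for epoch in (0, 7, 15):
    print(epoch, decay(epoch))
```

With only epochs = 2, the schedule never leaves its initial value of 0.01, so the LearningRateScheduler callback has no effect on such a short run.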

Thanks,

Best Answer

validation_split is a number that determines how the data you pass in should be divided into a training set and a validation set. For example, if validation_split = 0.1, then 10% of the data is used for the validation set and the remaining 90% for the training set.

validation_data is the argument through which you explicitly pass in a validation set. If you pass in validation data, Keras uses what you passed explicitly instead of computing a validation set via validation_split. That is what "override" means here: whatever is passed for validation_data takes precedence over anything passed for validation_split.

In your case, since you want to use 30% of your data for validation, just pass validation_split=0.3 and don't pass anything for validation_data.
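To make the split concrete, here is a minimal pure-Python sketch of how Keras carves a validation set out of the training data when only validation_split is given: it takes the last fraction of the samples, without shuffling first (the function name below is hypothetical, for illustration only):

```python
def split_for_validation(x, y, validation_split):
    # Mirrors Keras' behaviour: the *last* `validation_split` fraction of the
    # samples is held out for validation; the data is not shuffled first.
    split_at = int(len(x) * (1.0 - validation_split))
    return (x[:split_at], y[:split_at]), (x[split_at:], y[split_at:])

samples = list(range(10))          # 10 toy "samples"
labels = [s % 2 for s in samples]  # toy labels
(train_x, train_y), (val_x, val_y) = split_for_validation(samples, labels, 0.3)
# train_x -> [0, 1, 2, 3, 4, 5, 6]; val_x -> [7, 8, 9]
```

Because the split takes the tail of the arrays, a dataset that is sorted by class can leave the validation set with only one class, which is one common cause of near-zero validation accuracy like that shown in the question; shuffling the data before calling fit avoids this.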

Regarding python - What does "validation_data will override validation_split." mean in the Keras model.fit documentation, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54526575/
