python - Training loss higher than validation loss

Tags: python tensorflow keras

I am trying to fit a regression model to a dummy function of 3 variables with a fully connected neural network in Keras, and I always get a training loss that is much higher than the validation loss.

I split the dataset into 2/3 for training and 1/3 for validation. I have tried lots of different things:

  • changing the architecture
  • adding more neurons
  • using regularization
  • using different batch sizes

The training error remains an order of magnitude higher than the validation error:

Epoch 5995/6000
4020/4020 [==============================] - 0s 78us/step - loss: 1.2446e-04 - mean_squared_error: 1.2446e-04 - val_loss: 1.3953e-05 - val_mean_squared_error: 1.3953e-05
Epoch 5996/6000
4020/4020 [==============================] - 0s 98us/step - loss: 1.2549e-04 - mean_squared_error: 1.2549e-04 - val_loss: 1.5730e-05 - val_mean_squared_error: 1.5730e-05
Epoch 5997/6000
4020/4020 [==============================] - 0s 105us/step - loss: 1.2500e-04 - mean_squared_error: 1.2500e-04 - val_loss: 1.4372e-05 - val_mean_squared_error: 1.4372e-05
Epoch 5998/6000
4020/4020 [==============================] - 0s 96us/step - loss: 1.2500e-04 - mean_squared_error: 1.2500e-04 - val_loss: 1.4151e-05 - val_mean_squared_error: 1.4151e-05
Epoch 5999/6000
4020/4020 [==============================] - 0s 80us/step - loss: 1.2487e-04 - mean_squared_error: 1.2487e-04 - val_loss: 1.4342e-05 - val_mean_squared_error: 1.4342e-05
Epoch 6000/6000
4020/4020 [==============================] - 0s 79us/step - loss: 1.2494e-04 - mean_squared_error: 1.2494e-04 - val_loss: 1.4769e-05 - val_mean_squared_error: 1.4769e-05

This makes no sense to me. Please help!

EDIT: Here is the full code.

I have 6000 training examples.

# -*- coding: utf-8 -*-
"""
Created on Mon Feb 26 13:40:03 2018

@author: Michele
"""
#from keras.datasets import reuters
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM
from keras import optimizers
import matplotlib.pyplot as plt
import os 
import pylab 
from keras.constraints import maxnorm
from sklearn.model_selection import train_test_split
from keras import regularizers
from sklearn.preprocessing import MinMaxScaler
import math
from sklearn.metrics import mean_squared_error
import keras

# fix random seed for reproducibility
seed=7
np.random.seed(seed)

dataset = np.loadtxt("BabbaX.csv", delimiter=",")
 #split into input (X) and output (Y) variables
#x = dataset.transpose()[:,10:15] #only use power
x = dataset
del(dataset) # delete container
dataset = np.loadtxt("BabbaY.csv", delimiter=",")
 #split into input (X) and output (Y) variables
y = dataset.transpose()
del(dataset) # delete container

 #scale labels from 0 to 1
scaler = MinMaxScaler(feature_range=(0, 1))
y = np.reshape(y, (y.shape[0],1))
y = scaler.fit_transform(y)

lenData=x.shape[0]
x=np.transpose(x)

xtrain=x[:,0:round(lenData*0.67)]
ytrain=y[0:round(lenData*0.67),]
xtest=x[:,round(lenData*0.67):round(lenData*1.0)]
ytest=y[round(lenData*0.67):round(lenData*1.0)]

xtrain=np.transpose(xtrain)
xtest=np.transpose(xtest)    

l2_lambda = 0.1 #reg factor

#sequential type of model
model = Sequential() 
#stacking layers with .add
units=300
#model.add(Dense(units, input_dim=xtest.shape[1], activation='relu', kernel_initializer='normal', kernel_regularizer=regularizers.l2(l2_lambda), kernel_constraint=maxnorm(3)))
model.add(Dense(units, activation='relu', input_dim=xtest.shape[1]))
#model.add(Dropout(0.1))
model.add(Dense(units, activation='relu'))
#model.add(Dropout(0.1))
model.add(Dense(1)) #no activation function should be used for the output layer

#It is recommended to leave the parameters of this optimizer at their default
#values (except the learning rate, which can be freely tuned).
rms = optimizers.RMSprop(lr=0.00001, rho=0.9, epsilon=None, decay=0)
adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=1e-6, amsgrad=False)
#adam = keras.optimizers.Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

#configure learning process with .compile
model.compile(optimizer=adam, loss='mean_squared_error', metrics=['mse'])

# fit the model (iterate on the training data in batches)
history = model.fit(xtrain, ytrain, nb_epoch=1000, batch_size=round(xtest.shape[0]/100),
              validation_data=(xtest, ytest), shuffle=True, verbose=2)

#extract weights for each layer
weights = [layer.get_weights() for layer in model.layers]

#evaluate on training data set
valuesTrain=model.predict(xtrain)

#evaluate on test data set
valuesTest=model.predict(xtest)

 #invert predictions
valuesTrain = scaler.inverse_transform(valuesTrain)
ytrain = scaler.inverse_transform(ytrain)
valuesTest = scaler.inverse_transform(valuesTest)
ytest = scaler.inverse_transform(ytest)

Best answer

TL;DR: When a model is learning well and quickly, the validation loss can be lower than the training loss, because validation is run on the updated model, while the training loss is computed before any (without batching) or with only some (with batching) of that epoch's updates have been applied.
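
A toy way to picture that averaging effect (the per-batch loss values below are made up for illustration, not taken from the run above):

import numpy as np

# Hypothetical per-batch training losses within one epoch of a model that is
# improving quickly. Keras logs the running average of these values as the
# epoch's "loss", while val_loss is computed only once, with the weights as
# they are at the end of the epoch.
per_batch_loss = np.array([1e-3, 4e-4, 2e-4, 8e-5, 4e-5, 2e-5])

reported_train_loss = per_batch_loss.mean()   # what shows up as "loss"
end_of_epoch_loss = per_batch_loss[-1]        # roughly what val_loss reflects

print(reported_train_loss)   # ~2.9e-04, an order of magnitude higher...
print(end_of_epoch_loss)     # ...than 2.0e-05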


OK, I think I know what is going on here. I used the following code to test this.

import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt

np.random.seed(7)

N_DATA = 6000

x = np.random.uniform(-10, 10, (3, N_DATA))
y = x[0] + x[1]**2 + x[2]**3

xtrain = x[:, 0:round(N_DATA*0.67)]
ytrain = y[0:round(N_DATA*0.67)]

xtest = x[:, round(N_DATA*0.67):N_DATA]
ytest = y[round(N_DATA*0.67):N_DATA]

xtrain = np.transpose(xtrain)
xtest = np.transpose(xtest)

model = Sequential()
model.add(Dense(10, activation='relu', input_dim=3))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))

adam = keras.optimizers.Adam()

# configure learning process with .compile
model.compile(optimizer=adam, loss='mean_squared_error', metrics=['mse'])

# fit the model (iterate on the training data in batches)
history = model.fit(xtrain, ytrain, nb_epoch=50,
                    batch_size=round(N_DATA/100),
                    validation_data=(xtest, ytest), shuffle=False, verbose=2)

plt.plot(history.history['mean_squared_error'])
plt.plot(history.history['val_loss'])
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

This is essentially the same as your code and reproduces the problem, which is actually not a problem at all. Simply change

history = model.fit(xtrain, ytrain, nb_epoch=50,
                    batch_size=round(N_DATA/100),
                    validation_data=(xtest, ytest), shuffle=False, verbose=2)
to
history = model.fit(xtrain, ytrain, nb_epoch=50,
                    batch_size=round(N_DATA/100),
                    validation_data=(xtrain, ytrain), shuffle=False, verbose=2)

so that instead of validating with your validation data you validate with the training data again, and you get exactly the same behaviour. Weird, isn't it? No, actually not. What I think is happening is:

The mean_squared_error Keras reports for each epoch is the loss before the gradient updates have been applied, while validation happens after the gradients have been applied, which makes sense.

You don't usually see this on the highly stochastic problems neural networks are normally used for, because the data varies so much that the updated weights simply aren't good enough to describe the validation data; the slight overfitting effect on the training data is still so much stronger that, even after updating the weights, the validation loss stays higher than the training loss from before. That is only what I think, though, and I might be completely wrong.
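
If you want to confirm this on the model itself, one option is to re-evaluate the training loss at the end of every epoch, on the same fully updated weights that the validation pass uses. The snippet below is only a sketch and assumes the model, xtrain, ytrain, xtest, ytest and N_DATA from the test code above:

from keras.callbacks import Callback

class TrainLossAfterEpoch(Callback):
    def on_epoch_end(self, epoch, logs=None):
        # evaluate() runs on the weights after all of this epoch's updates,
        # exactly like the validation pass, so this number is directly
        # comparable to val_loss. evaluate() returns [loss, mse] here
        # because the model was compiled with metrics=['mse'].
        train_loss = self.model.evaluate(xtrain, ytrain, verbose=0)[0]
        print(' - train loss on end-of-epoch weights: {:.6e}'.format(train_loss))

history = model.fit(xtrain, ytrain, epochs=50,
                    batch_size=round(N_DATA/100),
                    validation_data=(xtest, ytest),
                    callbacks=[TrainLossAfterEpoch()],
                    shuffle=False, verbose=2)

With that in place, the training loss printed by the callback should land in the same range as val_loss, while the loss Keras logs during the epoch stays higher.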

Regarding python - training loss higher than validation loss, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50387749/
