python - Keras EarlyStopping: Which min_delta and patience to use?

Tags: python python-3.x tensorflow keras lstm

I am new to deep learning and Keras, and one of the improvements I am trying to make to my model's training process is to use Keras's keras.callbacks.EarlyStopping callback.

Based on the output from training my model, does it seem reasonable to use the following parameters for EarlyStopping?

EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=5, verbose=0, mode='auto')

Also, does it wait for 5 consecutive epochs where the change in val_loss is smaller than a min_delta of 0.0001?

Output from training the LSTM model (without EarlyStopping)

Runs all 100 epochs

Epoch 1/100
10200/10200 [==============================] - 133s 12ms/step - loss: 1.1236 - val_loss: 0.6431
Epoch 2/100
10200/10200 [==============================] - 141s 13ms/step - loss: 0.2783 - val_loss: 0.0301
Epoch 3/100
10200/10200 [==============================] - 143s 13ms/step - loss: 0.1131 - val_loss: 0.1716
Epoch 4/100
10200/10200 [==============================] - 145s 13ms/step - loss: 0.0586 - val_loss: 0.3671
Epoch 5/100
10200/10200 [==============================] - 146s 13ms/step - loss: 0.0785 - val_loss: 0.0038
Epoch 6/100
10200/10200 [==============================] - 146s 13ms/step - loss: 0.0549 - val_loss: 0.0041
Epoch 7/100
10200/10200 [==============================] - 147s 13ms/step - loss: 4.7482e-04 - val_loss: 8.9437e-05
Epoch 8/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.5181e-05 - val_loss: 4.7367e-06
Epoch 9/100
10200/10200 [==============================] - 149s 14ms/step - loss: 9.1632e-07 - val_loss: 3.6576e-07
Epoch 10/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.4117e-07 - val_loss: 1.6058e-07
Epoch 11/100
10200/10200 [==============================] - 152s 14ms/step - loss: 1.2024e-07 - val_loss: 1.2804e-07
Epoch 12/100
10200/10200 [==============================] - 150s 14ms/step - loss: 0.0151 - val_loss: 0.4181
Epoch 13/100
10200/10200 [==============================] - 148s 14ms/step - loss: 0.0701 - val_loss: 0.0057
Epoch 14/100
10200/10200 [==============================] - 148s 14ms/step - loss: 0.0332 - val_loss: 5.0014e-04
Epoch 15/100
10200/10200 [==============================] - 147s 14ms/step - loss: 0.0367 - val_loss: 0.0020
Epoch 16/100
10200/10200 [==============================] - 151s 14ms/step - loss: 0.0040 - val_loss: 0.0739
Epoch 17/100
10200/10200 [==============================] - 148s 14ms/step - loss: 0.0282 - val_loss: 6.4996e-05
Epoch 18/100
10200/10200 [==============================] - 147s 13ms/step - loss: 0.0346 - val_loss: 1.6545e-04
Epoch 19/100
10200/10200 [==============================] - 147s 14ms/step - loss: 4.6678e-05 - val_loss: 6.8101e-06
Epoch 20/100
10200/10200 [==============================] - 148s 14ms/step - loss: 1.7270e-06 - val_loss: 6.7108e-07
Epoch 21/100
10200/10200 [==============================] - 147s 14ms/step - loss: 2.4334e-07 - val_loss: 1.5736e-07
Epoch 22/100
10200/10200 [==============================] - 147s 14ms/step - loss: 0.0416 - val_loss: 0.0547
Epoch 23/100
10200/10200 [==============================] - 148s 14ms/step - loss: 0.0413 - val_loss: 0.0145
Epoch 24/100
10200/10200 [==============================] - 148s 14ms/step - loss: 0.0045 - val_loss: 1.1096e-04
Epoch 25/100
10200/10200 [==============================] - 149s 14ms/step - loss: 0.0218 - val_loss: 0.0083
Epoch 26/100
10200/10200 [==============================] - 148s 14ms/step - loss: 0.0029 - val_loss: 5.0954e-05
Epoch 27/100
10200/10200 [==============================] - 148s 14ms/step - loss: 0.0316 - val_loss: 0.0035
Epoch 28/100
10200/10200 [==============================] - 148s 14ms/step - loss: 0.0032 - val_loss: 0.2343
Epoch 29/100
10200/10200 [==============================] - 149s 14ms/step - loss: 0.0299 - val_loss: 0.0021
Epoch 30/100
10200/10200 [==============================] - 150s 14ms/step - loss: 0.0171 - val_loss: 9.3622e-04
Epoch 31/100
10200/10200 [==============================] - 149s 14ms/step - loss: 0.0167 - val_loss: 0.0023
Epoch 32/100
10200/10200 [==============================] - 148s 14ms/step - loss: 7.3654e-04 - val_loss: 4.1998e-05
Epoch 33/100
10200/10200 [==============================] - 149s 14ms/step - loss: 7.3300e-06 - val_loss: 1.9043e-06
Epoch 34/100
10200/10200 [==============================] - 148s 14ms/step - loss: 6.6648e-07 - val_loss: 2.3814e-07
Epoch 35/100
10200/10200 [==============================] - 147s 14ms/step - loss: 1.5611e-07 - val_loss: 1.3155e-07
Epoch 36/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.2159e-07 - val_loss: 1.2398e-07
Epoch 37/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.1940e-07 - val_loss: 1.1977e-07
Epoch 38/100
10200/10200 [==============================] - 150s 14ms/step - loss: 1.1939e-07 - val_loss: 1.1935e-07
Epoch 39/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1935e-07
Epoch 40/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1935e-07
Epoch 41/100
10200/10200 [==============================] - 150s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1921e-07
Epoch 42/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1921e-07
Epoch 43/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1921e-07
Epoch 44/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1921e-07
Epoch 45/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1921e-07
Epoch 46/100
10200/10200 [==============================] - 151s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1921e-07
Epoch 47/100
10200/10200 [==============================] - 151s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1921e-07
Epoch 48/100
10200/10200 [==============================] - 151s 14ms/step - loss: 1.1921e-07 - val_loss: 1.1921e-07

Output with EarlyStopping

Stops after 11 epochs (too early?)

Epoch 1/100
10200/10200 [==============================] - 134s 12ms/step - loss: 1.2733 - val_loss: 0.9022
Epoch 2/100
10200/10200 [==============================] - 144s 13ms/step - loss: 0.5429 - val_loss: 0.4093
Epoch 3/100
10200/10200 [==============================] - 144s 13ms/step - loss: 0.1644 - val_loss: 0.0552
Epoch 4/100
10200/10200 [==============================] - 144s 13ms/step - loss: 0.0263 - val_loss: 0.9872
Epoch 5/100
10200/10200 [==============================] - 145s 13ms/step - loss: 0.1297 - val_loss: 0.1175
Epoch 6/100
10200/10200 [==============================] - 146s 13ms/step - loss: 0.0287 - val_loss: 0.0136
Epoch 7/100
10200/10200 [==============================] - 145s 13ms/step - loss: 0.0718 - val_loss: 0.0270
Epoch 8/100
10200/10200 [==============================] - 145s 13ms/step - loss: 0.0272 - val_loss: 0.0530
Epoch 9/100
10200/10200 [==============================] - 150s 14ms/step - loss: 3.3879e-04 - val_loss: 0.0575
Epoch 10/100
10200/10200 [==============================] - 146s 13ms/step - loss: 1.6789e-05 - val_loss: 0.0766
Epoch 11/100
10200/10200 [==============================] - 149s 14ms/step - loss: 1.4124e-06 - val_loss: 0.0981

Training stops early here.

 EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='min')

Tried with a min_delta of 0. Why did training stop even though val_loss went from 0.0011 up to 0.1045?

Epoch 1/100
10200/10200 [==============================] - 140s 13ms/step - loss: 1.1938 - val_loss: 0.5941
Epoch 2/100
10200/10200 [==============================] - 150s 14ms/step - loss: 0.3307 - val_loss: 0.0989
Epoch 3/100
10200/10200 [==============================] - 151s 14ms/step - loss: 0.0946 - val_loss: 0.0213
Epoch 4/100
10200/10200 [==============================] - 149s 14ms/step - loss: 0.0521 - val_loss: 0.0011
Epoch 5/100
10200/10200 [==============================] - 150s 14ms/step - loss: 0.0793 - val_loss: 0.0313
Epoch 6/100
10200/10200 [==============================] - 154s 14ms/step - loss: 0.0367 - val_loss: 0.0369
Epoch 7/100
10200/10200 [==============================] - 154s 14ms/step - loss: 0.0323 - val_loss: 0.0014
Epoch 8/100
10200/10200 [==============================] - 153s 14ms/step - loss: 0.0408 - val_loss: 0.0011
Epoch 9/100
10200/10200 [==============================] - 154s 14ms/step - loss: 0.0379 - val_loss: 0.1045

Training stops early here.

Best Answer

The effect of both parameters is clear from the keras documentation:

min_delta : minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta, will count as no improvement.

patience : number of epochs with no improvement after which training will be stopped.
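
To make these definitions concrete, the decision Keras makes at the end of each epoch can be sketched roughly as below. This is a simplified rendering of the Keras 2.x callback logic for mode='min', not the exact source:

# Simplified sketch of the per-epoch check inside EarlyStopping (mode='min').
# `best` starts at float('inf') and `wait` at 0.
def on_epoch_end(current, best, wait, min_delta, patience):
    if current < best - min_delta:        # improved by more than min_delta
        return current, 0, False          # new best; patience counter resets
    wait += 1                             # otherwise this epoch counts as "no improvement"
    return best, wait, wait >= patience   # stop once patience is exhausted

Note that a tie does not count as an improvement: val_loss must drop below the current best by more than min_delta for the counter to reset.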

There are really no standard values for these parameters. You need to analyze the participants of your training process (the dataset, environment, and model type) to decide on their values.

(1) patience

  • Dataset - If the dataset does not have very good variation between classes (for example, faces of people aged 25-30 vs. 30-35), the change in loss will be slow and random. In such cases it is good to have a higher value for patience, and vice versa for a good and clean dataset.
  • Model-Type - When training a GAN model, the change in accuracy will be low (in most cases) and an epoch run consumes a lot of GPU. In this case it is better to save checkpoint files after a specific number of epochs, with a low patience value, and then use the checkpoints to improve further as needed. Analyze other model types similarly. A sketch that combines EarlyStopping with checkpointing follows this list.
  • Runtime environment - When training on a CPU, an epoch run is time-consuming, so we prefer a smaller value for patience. With a GPU, larger values can be tried.
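
As mentioned in the Model-Type point above, pairing EarlyStopping with ModelCheckpoint keeps the weights of the best epoch even if training later diverges (as it did at epoch 12 in the first log). A minimal sketch, assuming model, x_train, y_train, x_val, and y_val are the LSTM model and data from the question, and best_model.h5 is just an example filepath:

from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # Stop once val_loss has not improved by at least min_delta for 5 epochs in a row.
    EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=5, mode='min'),
    # Save the weights of the best epoch seen so far.
    ModelCheckpoint('best_model.h5', monitor='val_loss',
                    save_best_only=True, mode='min'),
]

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=callbacks)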

(2) min_delta

  • To decide min_delta, run a few epochs and observe the change in error and validation accuracy; it should be defined according to that rate of change. In many cases the default value of 0 works quite well.
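
This also explains the last question in the post (why the third run stopped after val_loss jumped from 0.0011 to 0.1045). Replaying that run's val_loss values through the improvement rule sketched above, which mirrors the Keras 2.x behaviour, shows that patience ran out rather than the jump itself triggering the stop:

# val_loss per epoch from the third run (min_delta=0, patience=5)
val_losses = [0.5941, 0.0989, 0.0213, 0.0011, 0.0313,
              0.0369, 0.0014, 0.0011, 0.1045]

best, wait, patience, min_delta = float('inf'), 0, 5, 0.0
for epoch, current in enumerate(val_losses, start=1):
    if current < best - min_delta:    # strict improvement required
        best, wait = current, 0
    else:
        wait += 1                     # epoch 8 only ties the best (0.0011), so no reset
    if wait >= patience:
        print('stopping after epoch', epoch)   # -> stopping after epoch 9
        break

The counter never resets after epoch 4: epochs 5 through 9 all fail to beat the best val_loss of 0.0011 (epoch 8 ties it, and a tie is not an improvement), so patience=5 is exhausted exactly at epoch 9, which happens to be the epoch where val_loss jumped to 0.1045.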

Original question on Stack Overflow: https://stackoverflow.com/questions/50284898/
