I'm training a simple MLP with Keras to classify MNIST digits. I've run into a problem: no matter which optimizer or learning rate I use, the model doesn't learn or descend at all, and my accuracy is barely better than random guessing.
Here is the code:
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adagrad

model2 = Sequential()
model2.add(Dense(units=512, input_dim=784, activation='relu', name='dense1', kernel_initializer='random_uniform'))
model2.add(Dropout(0.2, name='dropout1'))
model2.add(Dense(units=512, activation='relu', name='dense2', kernel_initializer='random_uniform'))
model2.add(Dropout(0.2, name='dropout2'))
model2.add(Dense(units=10, activation='softmax', name='dense3', kernel_initializer='random_uniform'))
model2.compile(optimizer=Adagrad(), loss='categorical_crossentropy', metrics=['accuracy'])
model2.summary()
model2.fit(image_train.values, img_keras_lb, batch_size=128, epochs=100)
And the output:
Epoch 1/100
33600/33600 [==============================] - 5s - loss: 14.6704 - acc: 0.0894
Epoch 2/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
...
(Epochs 3/100 through 21/100 all report the identical line: loss: 14.6809 - acc: 0.0892)
...
Epoch 22/100
33600/33600 [==============================] - 4s - loss: 14.6809 - acc: 0.0892
As you can see, the model isn't learning anything. I've also tried SGD, Adam, and RMSprop, as well as reducing the batch size to 32, 16, and so on.
Any pointers as to why this is happening would be greatly appreciated!
Best answer
You are using the ReLU activation, which essentially cuts off any activation below 0, combined with the random_uniform initializer, whose defaults are keras.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None). As you can see, the initial values are all very close to 0, and about half of them (-0.05 to 0) are not activated at all. Those that are activated (0 to 0.05) propagate gradients very, very slowly.
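To make the answer's point concrete, here is a small NumPy sketch (mine, not from the question) checking the default-style uniform initializer: roughly half of the draws come out negative, and every draw sits within 0.05 of zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Keras' default RandomUniform(minval=-0.05, maxval=0.05) initializer
# for a 784 -> 512 Dense layer
weights = rng.uniform(-0.05, 0.05, size=(784, 512))

neg_fraction = (weights < 0).mean()    # ~0.5: about half the weights start negative
max_magnitude = np.abs(weights).max()  # all weights lie within 0.05 of zero
print(f"negative fraction: {neg_fraction:.3f}, max |w|: {max_magnitude:.4f}")
```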
My guess is that changing the initialization range to between 0 and n (which is ReLU's operating range), your model should converge quickly.
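As a rough illustration of the suggested fix, the NumPy sketch below compares the fraction of zero ("dead") ReLU outputs after one layer under the default-style symmetric initializer versus a non-negative one. The upper bound 0.05 is an arbitrary stand-in for the answer's unspecified n, and the random inputs stand in for (non-negative) MNIST pixels:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=(32, 784))  # a batch of non-negative inputs

# Default-style init: uniform in [-0.05, 0.05] -> pre-activations are
# symmetric around zero, so ReLU zeroes out about half of them
w_default = rng.uniform(-0.05, 0.05, size=(784, 512))
dead_default = (relu(x @ w_default) == 0.0).mean()

# Suggested init: uniform in [0, 0.05] (0.05 stands in for the answer's "n")
# -> with non-negative inputs, every pre-activation is positive
w_positive = rng.uniform(0.0, 0.05, size=(784, 512))
dead_positive = (relu(x @ w_positive) == 0.0).mean()

print(f"dead activations, default init:  {dead_default:.2f}")
print(f"dead activations, positive init: {dead_positive:.2f}")
```

In Keras itself this would correspond to passing something like `kernel_initializer=keras.initializers.RandomUniform(minval=0.0, maxval=0.05)` (or a ReLU-aware scheme such as `'he_uniform'`) to each `Dense` layer.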
Regarding "python - Keras MNIST gradient descent stalling / learning very slowly", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/46639430/