python - Why am I getting negative false_negative counts (e.g. -10) in Keras?

Tags: python tensorflow machine-learning keras dataset

I am trying to debug a Keras model for binary text classification that performs very poorly.

I turned off all the fancy features and tried to fit it against two variants of my dataset (same X data, different Y labels):

  • Y0: all Y = 0
  • Y1: all Y = 1

Each dataset has about 1K samples.

I then tried fitting the model several times while varying some parameters (e.g. learning rate, layer sizes, switching between one-hot and integer-encoded representations of the words).

Surprisingly, this test showed that some metrics give me wrong results:

[Image: stats of the model when fitted with the Y0 and Y1 datasets]

Why is the FN count negative?

I did some checking. It seems that a negative FalseNegative count (e.g. -87) also corrupts other metrics, such as recall (which even exceeds 1), MAE and accuracy.

Here is the (simplified) code I am running:

import keras
import keras_metrics

DEFAULT_INNER_ACTIVATION = 'relu'
DEFAULT_OUTPUT_ACTIVATION = 'softplus'

class MyNN:

    def __init__(self, sentence_max_lenght, ctx_max_len, dense_features_dim, vocab_size):

        lstm_input_phrase = keras.layers.Input(shape=(sentence_max_lenght,), name='L0_STC_MyApp')

        # The original snippet fed an undefined variable into the LSTM;
        # presumably an Embedding layer builds lstm_emb_phrase from the input:
        lstm_emb_phrase = keras.layers.Embedding(vocab_size, DEFAULT_MODEL_L1_STC_DIM)(lstm_input_phrase)
        lstm_emb_phrase = keras.layers.LSTM(DEFAULT_MODEL_L1_STC_DIM, name='L1_STC_MyApp')(lstm_emb_phrase)
        lstm_emb_phrase = keras.layers.Dense(DEFAULT_MODEL_L2_STC_DIM, name='L2_STC_MyApp', activation=DEFAULT_INNER_ACTIVATION)(lstm_emb_phrase)

        x = keras.layers.Dense(DEFAULT_MODEL_L3_DIM, activation=DEFAULT_INNER_ACTIVATION)(lstm_emb_phrase)
        x = keras.layers.Dense(DEFAULT_MODEL_L4_DIM, activation=DEFAULT_INNER_ACTIVATION)(x)

        main_output = keras.layers.Dense(2, activation=DEFAULT_OUTPUT_ACTIVATION)(x)

        self.model = keras.models.Model(inputs=lstm_input_phrase,
                                        outputs=main_output)

        optimizer = keras.optimizers.Adam(lr=self.LEARNING_RATE)

        self.model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['binary_accuracy',
                                                                                     'mae',
                                                                                     keras_metrics.precision(),
                                                                                     keras_metrics.recall(),
                                                                                     keras_metrics.binary_precision(),
                                                                                     keras_metrics.binary_recall(),
                                                                                     keras_metrics.binary_true_positive(),
                                                                                     keras_metrics.binary_true_negative(),
                                                                                     keras_metrics.binary_false_positive(),
                                                                                     keras_metrics.binary_false_negative()])


    def fit(self, x_lstm_phrase, x_lstm_context, x_lstm_pos, x_dense, y):

        x_arr = keras.preprocessing.sequence.pad_sequences(x_lstm_phrase)

        y_onehot = MyNN.onehot_transform(y)

        return self.model.fit(x_arr,
                       y_onehot,
                       batch_size=self.batch_size,
                       epochs=self.max_epochs,
                       validation_split=self.validation_split,
                       callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss',
                                                                min_delta=0.0001,
                                                                patience=self.patience,
                                                                restore_best_weights=True
                                                                )])
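The helper MyNN.onehot_transform is not shown in the snippet; for a binary label vector it is presumably equivalent to something like this numpy sketch (name and behaviour assumed, matching the two-unit output layer):

```python
import numpy as np

def onehot_transform(y):
    """Turn binary labels [0, 1, 1, ...] into two-column one-hot rows,
    matching the Dense(2, ...) output layer."""
    y = np.asarray(y, dtype=int)
    onehot = np.zeros((len(y), 2), dtype='float32')
    onehot[np.arange(len(y)), y] = 1.0
    return onehot

onehot_transform([0, 1, 1])
# -> [[1., 0.], [0., 1.], [0., 1.]]
```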



Here is a snippet of the first part of the output I get in the terminal:

Note: there are two warnings here. I don't think they affect the problem.

Using TensorFlow backend.
2019-04-01 23:26:59.479064: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:From [path_to_myApp]\venv\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From [path_to_myApp]\venv\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.

 16/618 [..............................] - ETA: 38s - loss: 0.7756 - binary_accuracy: 0.5000 - mean_absolute_error: 0.5007 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 16.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
 32/618 [>.............................] - ETA: 23s - loss: 0.7740 - binary_accuracy: 0.5000 - mean_absolute_error: 0.5000 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 32.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
 48/618 [=>............................] - ETA: 17s - loss: 0.7725 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4994 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 48.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
 64/618 [==>...........................] - ETA: 15s - loss: 0.7711 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4988 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 64.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
 80/618 [==>...........................] - ETA: 13s - loss: 0.7697 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4982 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 80.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
 96/618 [===>..........................] - ETA: 12s - loss: 0.7682 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4976 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 96.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
112/618 [====>.........................] - ETA: 11s - loss: 0.7666 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4970 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 112.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
128/618 [=====>........................] - ETA: 10s - loss: 0.7650 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4963 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 128.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
144/618 [=====>........................] - ETA: 9s - loss: 0.7634 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4956 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 144.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00 
160/618 [======>.......................] - ETA: 9s - loss: 0.7617 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4949 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 160.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
176/618 [=======>......................] - ETA: 8s - loss: 0.7600 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4941 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 176.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
192/618 [========>.....................] - ETA: 8s - loss: 0.7582 - binary_accuracy: 0.5000 - mean_absolute_error: 0.4934 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 192.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00

And here is where I start getting negative FN counts:


256/618 [===========>..................] - ETA: 5s - loss: 0.3052 - binary_accuracy: 0.8750 - mean_absolute_error: 0.2778 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 256.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
272/618 [============>.................] - ETA: 5s - loss: 0.2965 - binary_accuracy: 0.8824 - mean_absolute_error: 0.2791 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 272.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
288/618 [============>.................] - ETA: 5s - loss: 0.2882 - binary_accuracy: 0.8889 - mean_absolute_error: 0.2807 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 288.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
304/618 [=============>................] - ETA: 4s - loss: 0.2804 - binary_accuracy: 0.8947 - mean_absolute_error: 0.2828 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 304.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
320/618 [==============>...............] - ETA: 4s - loss: 0.2730 - binary_accuracy: 0.9000 - mean_absolute_error: 0.2853 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 320.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
336/618 [===============>..............] - ETA: 4s - loss: 0.2659 - binary_accuracy: 0.9048 - mean_absolute_error: 0.2882 - precision: 1.0000 - recall: 1.0000 - precision_1: 1.0000 - recall_1: 1.0000 - true_positive: 336.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: 0.0000e+00
352/618 [================>.............] - ETA: 4s - loss: 0.2591 - binary_accuracy: 0.8864 - mean_absolute_error: 0.2914 - precision: 1.0000 - recall: 1.0455 - precision_1: 1.0000 - recall_1: 1.0455 - true_positive: 368.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: -16.0000  
368/618 [================>.............] - ETA: 3s - loss: 0.2526 - binary_accuracy: 0.8696 - mean_absolute_error: 0.2950 - precision: 1.0000 - recall: 1.0870 - precision_1: 1.0000 - recall_1: 1.0870 - true_positive: 400.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: -32.0000
384/618 [=================>............] - ETA: 3s - loss: 0.2464 - binary_accuracy: 0.8542 - mean_absolute_error: 0.2989 - precision: 1.0000 - recall: 1.1250 - precision_1: 1.0000 - recall_1: 1.1250 - true_positive: 432.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: -48.0000
400/618 [==================>...........] - ETA: 3s - loss: 0.2404 - binary_accuracy: 0.8400 - mean_absolute_error: 0.3031 - precision: 1.0000 - recall: 1.1600 - precision_1: 1.0000 - recall_1: 1.1600 - true_positive: 464.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: -64.0000
416/618 [===================>..........] - ETA: 3s - loss: 0.2346 - binary_accuracy: 0.8269 - mean_absolute_error: 0.3076 - precision: 1.0000 - recall: 1.1923 - precision_1: 1.0000 - recall_1: 1.1923 - true_positive: 496.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: -80.0000
432/618 [===================>..........] - ETA: 2s - loss: 0.2291 - binary_accuracy: 0.8148 - mean_absolute_error: 0.3124 - precision: 1.0000 - recall: 1.2222 - precision_1: 1.0000 - recall_1: 1.2222 - true_positive: 528.0000 - true_negative: 0.0000e+00 - false_positive: 0.0000e+00 - false_negative: -96.0000

Do you know how to fix this?

Edit:

I tried removing all the keras_metrics metrics, leaving only binary_accuracy.

I still have a problem: loss and val_loss drop almost to zero, while accuracy stays stuck around 0.5.

Given the special nature of the datasets, this would mean #TP = #FN (for Y1) and #TN = #FP (for Y0).

How is it possible to get such an accuracy measure together with such a loss measure?

Could this have something to do with the fact that I am using a

Dense(2, activation='softplus') 

layer as the output?

Do you have any ideas?
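One plausible mechanism for "loss falls, accuracy pinned at 0.5" can be checked with a small numpy sketch. Keras 2.x computes binary_accuracy as the element-wise mean of round(y_pred) == y_true; the outputs below are hypothetical, assuming softplus pushes both units above 0.5 for every sample:

```python
import numpy as np

# Hypothetical softplus outputs: both units land in (0.5, 1.5) on every row.
y_pred = np.array([[0.9, 1.3],
                   [1.1, 0.7],
                   [0.6, 0.8]])

# One-hot targets for the all-Y=1 dataset: class 1 everywhere.
y_true = np.array([[0., 1.],
                   [0., 1.],
                   [0., 1.]])

# Keras' binary_accuracy: element-wise mean of round(pred) == target.
# round(y_pred) is all ones, so the second column is always "right"
# and the first column is always "wrong".
binary_accuracy = np.mean(np.round(y_pred) == y_true)
print(binary_accuracy)  # -> 0.5
```

So an unnormalized two-unit output can keep binary_accuracy at exactly 0.5 regardless of how the loss moves.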

Best answer

After some testing, I changed the activation function from softplus to softmax.

Even though the classifier still performs poorly, all the metrics are now within their correct ranges.
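A sketch of why softmax fixes the counts: I haven't verified keras_metrics' source, but the log strongly suggests it rounds y_pred and accumulates products like y_true * round(y_pred). Softplus is unbounded above, so an output ≥ 1.5 rounds to 2, and under that assumption the counters break exactly as observed (true_positive climbing by 32 per 16-sample batch while false_negative drops by 16):

```python
import numpy as np

# Hypothetical saturated softplus outputs for 4 positive samples:
y_true = np.array([1., 1., 1., 1.])
y_pred = np.array([2.1, 1.8, 2.4, 1.9])   # softplus is unbounded above
y_pred_rounded = np.round(y_pred)          # -> [2., 2., 2., 2.]

# Assumed keras_metrics-style accumulation from rounded predictions:
tp = np.sum(y_true * y_pred_rounded)         # 8.0 "true positives" from 4 samples
fn = np.sum(y_true * (1. - y_pred_rounded))  # -4.0: a negative FN count
```

With softmax the outputs lie in (0, 1) and sum to 1, so round() can only produce 0 or 1 and all the counters stay in range.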

/H

Regarding "python - Why am I getting negative false_negative counts (e.g. -10) in Keras?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55463959/
