python - Difference between "BinaryCrossentropy" and "binary_crossentropy" in tf.keras.losses?

Tags: python, tensorflow, tf.keras

I am training a model with TensorFlow 2.0 using tf.GradientTape(), and I find that it reaches about 95% accuracy if I use tf.keras.losses.BinaryCrossentropy, but drops to about 75% if I use tf.keras.losses.binary_crossentropy. I am confused about why the same metric differs between the two.

import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

from sklearn.model_selection import train_test_split

def read_data():
    red_wine = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv", sep=";")
    white_wine = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv", sep=";")
    red_wine["type"] = 1
    white_wine["type"] = 0
    wines = pd.concat([red_wine, white_wine])  # DataFrame.append was removed in pandas 2.x
    return wines

def get_x_y(df):
    x = df.iloc[:, :-1].values.astype(np.float32)
    y = df.iloc[:, -1].values.astype(np.int32)
    return x, y

def build_model():
    inputs = layers.Input(shape=(12,))
    dense1 = layers.Dense(12, activation="relu", name="dense1")(inputs)
    dense2 = layers.Dense(9, activation="relu", name="dense2")(dense1)
    outputs = layers.Dense(1, activation = "sigmoid", name="outputs")(dense2)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    return model

def generate_dataset(df, batch_size=32, shuffle=True, train_or_test = "train"):
    x, y = get_x_y(df)
    ds = tf.data.Dataset.from_tensor_slices((x, y))
    if shuffle:
        ds = ds.shuffle(10000)
    if train_or_test == "train":
        ds = ds.batch(batch_size)
    else:
        ds = ds.batch(len(df))
    return ds

# loss_object = tf.keras.losses.binary_crossentropy
loss_object = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

def train_step(model, optimizer, x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_object(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))


def train_model(model, train_ds, epochs=10):
    for epoch in range(epochs):
        print(epoch)
        for x, y in train_ds:
            train_step(model, optimizer, x, y)

def main():
    data = read_data()
    train, test = train_test_split(data, test_size=0.2, random_state=23)
    train_ds = generate_dataset(train, 32, True, "train")
    test_ds = generate_dataset(test, 32, False, "test")
    model = build_model()
    train_model(model, train_ds, 10)
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy']
                  )
    model.evaluate(test_ds)

main()

Best answer

They should indeed behave the same; BinaryCrossentropy uses binary_crossentropy under the hood, and the obvious difference lies in the docstring descriptions: the former is intended for two class labels, whereas the latter supports an arbitrary class count. However, if the targets are passed in the expected format, both apply the same preprocessing before calling the backend's binary_crossentropy, which does the actual computation.
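A minimal sketch (not part of the original answer, using dummy labels and predictions) that checks this numerically: with targets and predictions in matching shapes, the class and the function reduce to the same backend computation, the only difference being that the class also averages over the batch.

import tensorflow as tf

# dummy labels/predictions, shaped (batch, 1) like the sigmoid output of the model above
y_true = tf.constant([[0.], [1.], [1.], [0.]])
y_pred = tf.constant([[0.1], [0.8], [0.6], [0.3]])

loss_class = tf.keras.losses.BinaryCrossentropy()  # reduces to a scalar
loss_fn = tf.keras.losses.binary_crossentropy      # returns per-sample losses

print(float(loss_class(y_true, y_pred)))               # scalar mean loss
print(float(tf.reduce_mean(loss_fn(y_true, y_pred))))  # same value after averaging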

The difference you are seeing is most likely a reproducibility issue; make sure you set the random seeds, as in the function below. For a more complete answer on reproducibility, see here.


The function:

import random
import numpy as np
import tensorflow as tf

def reset_seeds(reset_graph_with_backend=None):
    if reset_graph_with_backend is not None:
        K = reset_graph_with_backend
        K.clear_session()
        tf.compat.v1.reset_default_graph()
        print("KERAS AND TENSORFLOW GRAPHS RESET")  # optional

    np.random.seed(1)
    random.seed(2)
    tf.compat.v1.set_random_seed(3)
    print("RANDOM SEEDS RESET")  # optional

Usage:

import tensorflow as tf
import tensorflow.keras.backend as K

reset_seeds(K)
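As a hedged follow-up sketch (assuming the build_model, train_model, train_ds and module-level loss_object names from the question's script are in scope, with train_ds hoisted out of main()), resetting the seeds before each run makes both losses start from identical initial weights and the same shuffle order, so their accuracies can be compared fairly:

reset_seeds(K)
loss_object = tf.keras.losses.BinaryCrossentropy()  # class-based loss, read by train_step
model_a = build_model()
train_model(model_a, train_ds, 10)

reset_seeds(K)
loss_object = tf.keras.losses.binary_crossentropy   # functional loss, read by train_step
model_b = build_model()
train_model(model_b, train_ds, 10)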

On python - Difference between "BinaryCrossentropy" and "binary_crossentropy" in tf.keras.losses?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59612914/
