python - Pixel-wise multi-class classification using Keras sparse categorical cross-entropy

Tags: python tensorflow machine-learning keras loss-function

First, I'll admit that I'm new to machine learning and Keras, and I don't know much beyond general CNN binary classifiers. I'm trying to perform pixel-wise multi-class classification on a set of 256x256 images using a U-Net architecture (TF backend). In other words, I feed in a 256x256 image and I want the network to output a 256x256 "mask" (or label image) whose values are integers from 0-30, where each integer represents a unique class. I'm training on two NVIDIA 1080Ti GPUs.

I get an OOM error when I try to do one-hot encoding, which is why I'm using sparse categorical cross-entropy as my loss function instead of regular categorical cross-entropy. However, while training my U-Net, my loss value is "nan" from start to finish (it initializes as nan and never changes). The same thing happens when I divide all the label values by 30 (so they go from 0-30 to 0-1, i.e. end up mostly zeros).
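As a rough illustration of why one-hot encoding runs out of memory here, a back-of-the-envelope sketch (the 2885 sample count is the 2308 training + 577 validation samples shown further below; the 31 class values 0-30 come from the description above; the dtypes are assumptions):

import numpy as np

# Integer label maps (what sparse_categorical_crossentropy consumes) vs. one-hot masks.
n_images, h, w, n_classes = 2885, 256, 256, 31   # 2308 train + 577 val; classes 0-30

int_label_bytes = n_images * h * w * 1            # one uint8 class id per pixel
one_hot_bytes = n_images * h * w * n_classes * 4  # one float32 one-hot vector per pixel

print(int_label_bytes / 1e9)  # ~0.19 GB
print(one_hot_bytes / 1e9)    # ~23.4 GB -- easily OOM, e.g. on an 11 GB 1080Ti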

Here is the U-Net I'm using:

import keras
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, SpatialDropout2D, concatenate, Reshape
from keras.optimizers import Adam

def unet(pretrained_weights = None, input_size = (256,256,1)):
    inputs = keras.engine.input_layer.Input(input_size)

    # Contracting path (encoder)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
    #drop4 = Dropout(0.5)(conv4)
    drop4 = SpatialDropout2D(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)

    # Bottleneck
    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
    #drop5 = Dropout(0.5)(conv5)
    drop5 = SpatialDropout2D(0.5)(conv5)

    # Expanding path (decoder) with skip connections
    up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
    merge6 = concatenate([drop4,up6], axis = 3)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)

    up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
    merge7 = concatenate([conv3,up7], axis = 3)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)

    up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
    merge8 = concatenate([conv2,up8], axis = 3)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)

    up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
    merge9 = concatenate([conv1,up9], axis = 3)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv9 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)

    # Output head: single-channel softmax, then flattened (see the note below)
    conv10 = Conv2D(1, 1, activation = 'softmax')(conv9)
    #conv10 = Flatten()(conv10)
    #conv10 = Dense(65536, activation = 'softmax')(conv10)
    flat10 = Reshape((65536,1))(conv10)
    #conv10 = Conv1D(1, 1, activation='linear')(conv10)

    model = Model(inputs = inputs, outputs = flat10)

    opt = Adam(lr=1e-6, clipvalue=0.01)
    model.compile(optimizer = opt, loss = 'sparse_categorical_crossentropy', metrics = ['sparse_categorical_accuracy'])
    #model.compile(optimizer = Adam(lr = 1e-6), loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
    #model.compile(optimizer = Adam(lr = 1e-4),

    #model.summary()

    if(pretrained_weights):
        model.load_weights(pretrained_weights)

    return model

Note that I had to flatten the output just to get sparse categorical cross-entropy to run at all (for some reason it doesn't like my 2D matrices).
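For context, a minimal sketch of how the targets have to be shaped to match that flattened output (raw_masks and the random stand-in data are illustrative assumptions, not from the question):

import numpy as np

# Stand-in label images; the real masks come from the dataset described above.
raw_masks = np.random.randint(0, 31, size=(4, 256, 256))

# sparse_categorical_crossentropy expects integer class ids whose shape matches the
# model output without the class axis; with an output of (65536, 1) per image, the
# targets are flattened the same way.
y_train = raw_masks.reshape((-1, 256 * 256, 1)).astype('int32')
print(y_train.shape)  # (4, 65536, 1)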

Here is an example training run (just one epoch, because the result is the same no matter how many I run):

model = unet()
model.fit(x=x_train, y=y_train, batch_size=1, epochs=1, verbose=1, validation_split=0.2, shuffle=True)

Train on 2308 samples, validate on 577 samples
Epoch 1/1
2308/2308 [==============================] - 191s 83ms/step - loss: nan - sparse_categorical_accuracy: 0.9672 - val_loss: nan - val_sparse_categorical_accuracy: 0.9667
Out[18]:

Please let me know if more information is needed to diagnose the problem. Thanks in advance!

Best Answer

The problem is that, for multi-class classification, you need to output a vector with one dimension per category, representing the confidence in that category. If you want to identify 30 different classes, your final layer should be a 3D tensor of shape (256, 256, 30).

conv10 = Conv2D(30, 1, activation = 'softmax')(conv9)
flat10 = Reshape((256*256, 30))(conv10)

opt = Adam(lr=1e-6,clipvalue=0.01)
model.compile(optimizer = opt, loss = 'sparse_categorical_crossentropy', metrics = ['sparse_categorical_accuracy'])

I'm assuming that your input is a (256, 256, 1) float tensor with values between 0 and 1, and that your target is a (256*256,) tensor of ints.
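For example, under those assumptions, data preparation and inference could look roughly like this (a sketch only: the random stand-in arrays are illustrative, and unet() is assumed to have been updated with the two lines above):

import numpy as np

# Illustrative stand-ins for the real dataset.
x_train = np.random.rand(4, 256, 256, 1).astype('float32')               # inputs scaled to 0-1
y_train = np.random.randint(0, 30, size=(4, 256 * 256)).astype('int32')  # integer class id per pixel

model = unet()  # assumed to use Conv2D(30, 1, ...) + Reshape((256*256, 30)) as above
model.fit(x_train, y_train, batch_size=1, epochs=1)

# Turn the per-pixel class probabilities back into a 256x256 label image.
pred = model.predict(x_train[:1])                         # shape (1, 65536, 30)
pred_mask = np.argmax(pred, axis=-1).reshape((256, 256))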

Does this help?

Regarding "python - Pixel-wise multi-class classification using Keras sparse categorical cross-entropy", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/54136325/
