python - How is the total loss for multiple classes calculated in Keras?

Tags: python tensorflow machine-learning keras deep-learning

Suppose I have a network with the following parameters:

  1. a fully convolutional network for semantic segmentation
  2. loss = weighted binary cross entropy (but it could be any loss function, it doesn't matter)
  3. 5 classes - the inputs are images and the ground truths are binary masks
  4. batch size = 16

Now, I know that the loss is calculated as follows: binary cross entropy is applied to each pixel in the image, for each class. So essentially, each pixel will have 5 loss values.
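For concreteness, here is a minimal NumPy sketch (shapes taken from my setup above, random data) of what "5 loss values per pixel" means:

import numpy as np

batch_size, img_dim, num_classes = 16, 256, 5
y_true = np.random.randint(0, 2, (batch_size, img_dim, img_dim, num_classes)).astype(np.float32)
y_pred = np.random.uniform(1e-7, 1 - 1e-7, (batch_size, img_dim, img_dim, num_classes))

# element-wise binary cross entropy: one loss value per pixel per class
bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(bce.shape)  # (16, 256, 256, 5)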

What happens after this step?

When I train my network, it only prints a single loss value per epoch. Many levels of loss accumulation must be happening to produce that single value, and it is not at all clear from the docs/code how this happens.

  1. What gets combined first - (1) the loss values of the classes (e.g. 5 values, one per class, combined for each pixel) and then all the pixels in the image, or (2) all the pixels in the image for each individual class, and then the losses of all the classes?
  2. How exactly do these different pixel combinations happen - where is it summed / where is it averaged?
  3. Keras's binary_crossentropy averages over axis=-1. So is that an average over all the pixels of each class, or an average over all the classes, or both?

To put it differently: how are the losses for the different classes combined to produce a single loss value for an image?

This is not explained in the documentation at all, and would be very helpful for people doing any kind of multi-class prediction with Keras, regardless of the type of network. Here is the link to the start of the keras code where the loss function is first passed in.

The closest thing to an explanation I could find is:

loss: String (name of objective function) or objective function. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses

from the Keras docs. So does that mean the losses for each class in the image are simply summed?

Sample code is here for others to try out. It's a basic implementation borrowed from Kaggle and modified for multi-label prediction:

# Imports assumed by this snippet (my assumption: Keras 2.x with the TensorFlow backend)
import tensorflow as tf
from keras.models import Model
from keras.layers import Input, Lambda, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate
from keras import backend as K

# Build U-Net model
num_classes = 5
IMG_DIM = 256
IMG_CHAN = 3
weights = {0: 1, 1: 1, 2: 1, 3: 1, 4: 1000} #chose an extreme value just to check for any reaction
inputs = Input((IMG_DIM, IMG_DIM, IMG_CHAN))
s = Lambda(lambda x: x / 255) (inputs)

c1 = Conv2D(8, (3, 3), activation='relu', padding='same') (s)
c1 = Conv2D(8, (3, 3), activation='relu', padding='same') (c1)
p1 = MaxPooling2D((2, 2)) (c1)

c2 = Conv2D(16, (3, 3), activation='relu', padding='same') (p1)
c2 = Conv2D(16, (3, 3), activation='relu', padding='same') (c2)
p2 = MaxPooling2D((2, 2)) (c2)

c3 = Conv2D(32, (3, 3), activation='relu', padding='same') (p2)
c3 = Conv2D(32, (3, 3), activation='relu', padding='same') (c3)
p3 = MaxPooling2D((2, 2)) (c3)

c4 = Conv2D(64, (3, 3), activation='relu', padding='same') (p3)
c4 = Conv2D(64, (3, 3), activation='relu', padding='same') (c4)
p4 = MaxPooling2D(pool_size=(2, 2)) (c4)

c5 = Conv2D(128, (3, 3), activation='relu', padding='same') (p4)
c5 = Conv2D(128, (3, 3), activation='relu', padding='same') (c5)

u6 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(64, (3, 3), activation='relu', padding='same') (u6)
c6 = Conv2D(64, (3, 3), activation='relu', padding='same') (c6)

u7 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (c6)
u7 = concatenate([u7, c3])
c7 = Conv2D(32, (3, 3), activation='relu', padding='same') (u7)
c7 = Conv2D(32, (3, 3), activation='relu', padding='same') (c7)

u8 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (c7)
u8 = concatenate([u8, c2])
c8 = Conv2D(16, (3, 3), activation='relu', padding='same') (u8)
c8 = Conv2D(16, (3, 3), activation='relu', padding='same') (c8)

u9 = Conv2DTranspose(8, (2, 2), strides=(2, 2), padding='same') (c8)
u9 = concatenate([u9, c1], axis=3)
c9 = Conv2D(8, (3, 3), activation='relu', padding='same') (u9)
c9 = Conv2D(8, (3, 3), activation='relu', padding='same') (c9)

outputs = Conv2D(num_classes, (1, 1), activation='sigmoid') (c9)

model = Model(inputs=[inputs], outputs=[outputs])

def weighted_loss(weightsList):
    def lossFunc(true, pred):

        axis = -1  # if channels last
        # axis = 1  # if channels first
        # pick the argmax class per pixel, then build a per-pixel weight map
        classSelectors = K.argmax(true, axis=axis)
        classSelectors = [K.equal(tf.cast(i, tf.int64), tf.cast(classSelectors, tf.int64)) for i in range(len(weightsList))]
        classSelectors = [K.cast(x, K.floatx()) for x in classSelectors]
        weights = [sel * w for sel, w in zip(classSelectors, weightsList)]

        # sum the per-class selector maps into one weight multiplier per pixel
        weightMultiplier = weights[0]
        for i in range(1, len(weights)):
            weightMultiplier = weightMultiplier + weights[i]

        # BCE_loss and dice_coef are defined elsewhere (see the BCE-DICE link below)
        loss = BCE_loss(true, pred) - (1 + dice_coef(true, pred))
        loss = loss * weightMultiplier
        return loss
    return lossFunc

# note: weighted_loss must be defined before compile() uses it;
# mean_iou is a custom metric defined elsewhere in the original notebook
model.compile(optimizer='adam', loss=weighted_loss(weights), metrics=[mean_iou])
model.summary()

The actual BCE-DICE loss function can be found here.

Motivation for the question: based on the above code, the network's total validation loss after 20 epochs is ~1%; however, the mean intersection-over-union scores for the first 4 classes are all above 95%, while for the last class it is 23%. This clearly indicates that class 5 isn't doing well at all. However, this loss in accuracy is not reflected in the loss at all. That means the individual losses for the samples are being combined in a way that completely cancels out the huge loss we see for the 5th class, and so, when the per-sample losses are combined over the batch, the total is still really low. I'm not sure how to reconcile this information.
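As a back-of-the-envelope check (made-up numbers), here is how a global average can dilute a single badly-performing class:

import numpy as np

# hypothetical per-pixel, per-class BCE values for one image
losses = np.full((256, 256, 5), 0.01)  # the four easy classes, plus easy background pixels
losses[:2, :250, 4] = 3.0              # ~500 "hard" foreground pixels for class 5

print(losses.mean())  # ~0.0146 -- the global mean barely moves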

Best Answer

Although I have mentioned part of this answer in a related answer, let's inspect the source code step by step with more details to find the answer concretely.

First, let's feedforward(!): there is a call to the weighted_loss function which takes y_true, y_pred, sample_weight and mask as its inputs:

weighted_loss = weighted_losses[i]
# ...
output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)

weighted_loss is actually an element of a list which contains all the (augmented) loss functions passed to the compile method:

weighted_losses = [
    weighted_masked_objective(fn) for fn in loss_functions]

The word "augmented" I mentioned is important here. That's because, as you can see above, the actual loss function is wrapped by another function called weighted_masked_objective, which is defined as follows:

def weighted_masked_objective(fn):
    """Adds support for masking and sample-weighting to an objective function.
    It transforms an objective function `fn(y_true, y_pred)`
    into a sample-weighted, cost-masked objective function
    `fn(y_true, y_pred, weights, mask)`.
    # Arguments
        fn: The objective function to wrap,
            with signature `fn(y_true, y_pred)`.
    # Returns
        A function with signature `fn(y_true, y_pred, weights, mask)`.
    """
    if fn is None:
        return None

    def weighted(y_true, y_pred, weights, mask=None):
        """Wrapper function.
        # Arguments
            y_true: `y_true` argument of `fn`.
            y_pred: `y_pred` argument of `fn`.
            weights: Weights tensor.
            mask: Mask tensor.
        # Returns
            Scalar tensor.
        """
        # score_array has ndim >= 2
        score_array = fn(y_true, y_pred)
        if mask is not None:
            # Cast the mask to floatX to avoid float64 upcasting in Theano
            mask = K.cast(mask, K.floatx())
            # mask should have the same shape as score_array
            score_array *= mask
            #  the loss per batch should be proportional
            #  to the number of unmasked samples.
            score_array /= K.mean(mask)

        # apply sample weighting
        if weights is not None:
            # reduce score_array to same ndim as weight array
            ndim = K.ndim(score_array)
            weight_ndim = K.ndim(weights)
            score_array = K.mean(score_array,
                                 axis=list(range(weight_ndim, ndim)))
            score_array *= weights
            score_array /= K.mean(K.cast(K.not_equal(weights, 0), K.floatx()))
        return K.mean(score_array)
    return weighted
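As a side note, here is a small NumPy sketch (made-up numbers, not Keras code) of why score_array /= K.mean(mask) makes the loss proportional to the number of unmasked samples: after that division, the final global mean equals the mean over the unmasked entries only:

import numpy as np

score_array = np.array([1.0, 2.0, 3.0, 4.0])  # per-sample loss values
mask = np.array([1.0, 1.0, 0.0, 0.0])         # the last two samples are masked out

score_array *= mask          # [1., 2., 0., 0.]
score_array /= mask.mean()   # divide by 0.5 -> [2., 4., 0., 0.]
print(score_array.mean())    # 1.5 == mean over the two unmasked losses, (1 + 2) / 2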

So, there is a nested function, weighted, which actually calls the real loss function fn in the line score_array = fn(y_true, y_pred). Now, to be concrete, in the example the OP provided, fn (i.e. the loss function) is binary_crossentropy. Therefore we need to take a look at the definition of binary_crossentropy() in Keras:

def binary_crossentropy(y_true, y_pred):
    return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)

which, in turn, calls the backend function K.binary_crossentropy(). In case of using TensorFlow as the backend, K.binary_crossentropy() is defined as follows:

def binary_crossentropy(target, output, from_logits=False):
    """Binary crossentropy between an output tensor and a target tensor.
    # Arguments
        target: A tensor with the same shape as `output`.
        output: A tensor.
        from_logits: Whether `output` is expected to be a logits tensor.
            By default, we consider that `output`
            encodes a probability distribution.
    # Returns
        A tensor.
    """
    # Note: tf.nn.sigmoid_cross_entropy_with_logits
    # expects logits, Keras expects probabilities.
    if not from_logits:
        # transform back to logits
        _epsilon = _to_tensor(epsilon(), output.dtype.base_dtype)
        output = tf.clip_by_value(output, _epsilon, 1 - _epsilon)
        output = tf.log(output / (1 - output))

    return tf.nn.sigmoid_cross_entropy_with_logits(labels=target,
                                                   logits=output)

tf.nn.sigmoid_cross_entropy_with_logits returns:

A Tensor of the same shape as logits with the componentwise logistic losses.
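As a sanity check on "componentwise", the numerically stable formula that TensorFlow documents for this op can be reproduced in NumPy; note that the output keeps the shape of the inputs:

import numpy as np

def sigmoid_xent(labels, logits):
    # the stable form documented for tf.nn.sigmoid_cross_entropy_with_logits:
    # max(x, 0) - x * z + log(1 + exp(-abs(x)))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

labels = np.array([0.0, 1.0, 1.0])
logits = np.array([-2.3, 0.5, 4.0])
print(sigmoid_xent(labels, logits).shape)  # (3,) -- same shape as the inputs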

Now, let's backpropagate(!): considering the above note, the output shape of K.binary_crossentropy would be the same as y_pred (or y_true). As the OP mentioned, y_true has a shape of (batch_size, img_dim, img_dim, num_classes). Therefore, the K.mean(..., axis=-1) is applied over a tensor of shape (batch_size, img_dim, img_dim, num_classes), which results in an output tensor of shape (batch_size, img_dim, img_dim). So, the loss values of all the classes are averaged for each pixel in the image. Hence, the shape of score_array in the weighted function mentioned above would be (batch_size, img_dim, img_dim). There is one more step: the return statement in the weighted function takes the mean again, i.e. return K.mean(score_array). So how does it compute the mean? If you take a look at the definition of the mean backend function, you'll find that the axis argument is None by default:

def mean(x, axis=None, keepdims=False):
    """Mean of a tensor, alongside the specified axis.
    # Arguments
        x: A tensor or variable.
        axis: A list of integer. Axes to compute the mean.
        keepdims: A boolean, whether to keep the dimensions or not.
            If `keepdims` is `False`, the rank of the tensor is reduced
            by 1 for each entry in `axis`. If `keepdims` is `True`,
            the reduced dimensions are retained with length 1.
    # Returns
        A tensor with the mean of elements of `x`.
    """
    if x.dtype.base_dtype == tf.bool:
        x = tf.cast(x, floatx())
    return tf.reduce_mean(x, axis, keepdims)

It calls tf.reduce_mean(), which, given an axis=None argument, takes the mean over all the axes of the input tensor and returns one single value. Therefore, the mean of the whole tensor of shape (batch_size, img_dim, img_dim) is computed, which translates to taking the average over all the labels in the batch and over all their pixels, and is returned as one single scalar value which represents the loss value. Then, this loss value is reported back by Keras and is used for optimization.
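Putting the whole reduction chain together, here is a minimal NumPy sketch of the shape flow (shapes taken from the question, random data standing in for the element-wise loss values):

import numpy as np

batch_size, img_dim, num_classes = 16, 256, 5
elementwise = np.random.rand(batch_size, img_dim, img_dim, num_classes)  # like K.binary_crossentropy output

per_pixel = elementwise.mean(axis=-1)  # binary_crossentropy: average over classes -> (16, 256, 256)
scalar = per_pixel.mean()              # weighted(): K.mean with axis=None -> one single value
print(per_pixel.shape, float(scalar))  # (16, 256, 256) and a scalar loss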


Bonus: what if our model has multiple output layers, and therefore multiple loss functions are used?

Remember the first piece of code I mentioned in this answer:

weighted_loss = weighted_losses[i]
# ...
output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)

As you can see, there is an i variable which is used for indexing the array. You may have guessed correctly: it is actually part of a loop which computes the loss value for each output layer using its designated loss function, and then takes the (weighted) sum of all these loss values to compute the total loss:

# Compute total loss.
total_loss = None
with K.name_scope('loss'):
    for i in range(len(self.outputs)):
        if i in skip_target_indices:
            continue
        y_true = self.targets[i]
        y_pred = self.outputs[i]
        weighted_loss = weighted_losses[i]
        sample_weight = sample_weights[i]
        mask = masks[i]
        loss_weight = loss_weights_list[i]
        with K.name_scope(self.output_names[i] + '_loss'):
            output_loss = weighted_loss(y_true, y_pred,
                                        sample_weight, mask)
        if len(self.outputs) > 1:
            self.metrics_tensors.append(output_loss)
            self.metrics_names.append(self.output_names[i] + '_loss')
        if total_loss is None:
            total_loss = loss_weight * output_loss
        else:
            total_loss += loss_weight * output_loss
    if total_loss is None:
        if not self.losses:
            raise ValueError('The model cannot be compiled '
                                'because it has no loss to optimize.')
        else:
            total_loss = 0.

    # Add regularization penalties
    # and other layer-specific losses.
    for loss_tensor in self.losses:
        total_loss += loss_tensor  
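To see this from the user-facing side, here is a minimal sketch of a two-output model (the layer names out_a and out_b are made up); the reported total loss is the weighted sum of the per-output losses, exactly as in the loop above:

from keras.models import Model
from keras.layers import Input, Dense

# hypothetical two-output model
inp = Input((10,))
out_a = Dense(1, activation='sigmoid', name='out_a')(inp)
out_b = Dense(1, name='out_b')(inp)
model = Model(inputs=inp, outputs=[out_a, out_b])

model.compile(optimizer='adam',
              loss={'out_a': 'binary_crossentropy', 'out_b': 'mse'},
              loss_weights={'out_a': 1.0, 'out_b': 0.5})
# reported total loss = 1.0 * out_a_loss + 0.5 * out_b_loss (plus any regularization terms)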

Regarding "python - How is the total loss for multiple classes calculated in Keras?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52034983/
