I have read the docs of both functions, and as far as I can tell, for the function tf.nn.softmax_cross_entropy_with_logits(logits, labels, dim=-1, name=None), the result is the cross-entropy loss, where logits and labels have the same shape.
However, for the function tf.nn.sparse_softmax_cross_entropy_with_logits, do logits and labels have different shapes?
Could you give a more detailed example of tf.nn.sparse_softmax_cross_entropy_with_logits?
Best Answer
The difference is that tf.nn.softmax_cross_entropy_with_logits does not assume that the classes are mutually exclusive:
"Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time."
Compare this with the sparse_* variant:
"Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both."
Therefore, for the sparse function, the shapes of logits and labels are not the same: labels contains a single class index per example, while logits contains one value per class per example, i.e. the unnormalized scores from which the class probabilities are computed.
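To answer the question directly, here is a sketch of tf.nn.sparse_softmax_cross_entropy_with_logits (TF 2.x eager style; the example tensors are invented). It shows the shape difference and that the sparse call gives the same result as the dense call on the equivalent one-hot labels:

```python
import tensorflow as tf

# logits has shape (2, 3): 2 examples, 3 classes.
logits = tf.constant([[2.0, 0.5, -1.0],
                      [0.1, 1.5,  2.2]])

# labels has shape (2,): one integer class index per example.
labels = tf.constant([0, 2])

sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

# Equivalent dense call: expand the indices to one-hot vectors,
# so labels now has the same shape as logits.
dense_loss = tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.one_hot(labels, depth=3), logits=logits)

print(sparse_loss)  # same per-example values as dense_loss
print(dense_loss)
```

In other words, the sparse variant is a convenience (and an optimization) for the common case where each example belongs to exactly one class, sparing you the tf.one_hot conversion.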
Regarding the difference between tf.nn.softmax_cross_entropy_with_logits and tf.nn.sparse_softmax_cross_entropy_with_logits in TensorFlow, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/41283115/