machine-learning - Caffe loss layer, mean, and accuracy

Tags: machine-learning neural-network deep-learning caffe conv-neural-network

I have a fully convolutional network for depth estimation, shown below (only the top and bottom layers, for simplicity):

# input: image and depth_image
layer {
  name: "train-data"
  type: "Data"
  top: "data"
  top: "silence_1"
  include {
    phase: TRAIN
  }
  transform_param {
    #mean_file: "mean_train.binaryproto"
    scale: 0.00390625
  }
  data_param {
        source: "/train_lmdb"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "train-depth"
  type: "Data"
  top: "depth"
  top: "silence_2"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "train_depth_lmdb"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "val-data"
  type: "Data"
  top: "data"
  top: "silence_1"
  include {
    phase: TEST
  }
  transform_param {
    #mean_file: "mean_val.binaryproto"
    scale: 0.00390625
  }
  data_param {
    source: "val_lmdb"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "val-depth"
  type: "Data"
  top: "depth"
  top: "silence_2"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "val_depth_lmdb"
    batch_size: 4
    backend: LMDB
  }
}
################## Silence unused labels ##################
layer {
    name: "silence_layer_1"
    type: "Silence"
    bottom: "silence_1"
}

layer {
    name: "silence_layer_2"
    type: "Silence"
    bottom: "silence_2"
}
....
layer {
    name: "conv"
    type: "Convolution"
    bottom: "concat"
    top: "conv"
    convolution_param {
        num_output: 1
        kernel_size: 5
        pad: 2
        stride: 1
        engine: CUDNN
        weight_filler {
            type: "gaussian"
            std: 0.01
        }
        bias_filler {
            type: "constant"
            value: 0
        }
    }
}

layer {
    name: "relu"
    type: "ReLU"
    bottom: "conv"
    top: "result"
    relu_param{
    negative_slope: 0.01
        engine: CUDNN
    }
}

# Error
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "result"
  bottom: "depth"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "result"
  bottom: "depth"
  top: "loss"
}

Now I have three questions:

1. When I train the network, the accuracy layer is always 1. I don't understand why.

2. Is "EuclideanLoss" the right layer for this purpose?

3. Is a mean needed in this case, or can I ignore it?

import caffe

# Define the image transformer (net, mean_array, IMAGE_WIDTH and
# IMAGE_HEIGHT are assumed to be defined earlier).
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_mean('data', mean_array)
transformer.set_transpose('data', (2, 0, 1))

# Load the test image as grayscale (color=False) and resize it.
img = caffe.io.load_image("test.png", False)
img = caffe.io.resize_image(img, (IMAGE_WIDTH, IMAGE_HEIGHT))

# Run a forward pass and fetch the predicted depth map.
net.blobs['data'].data[...] = transformer.preprocess('data', img)
pred = net.forward()
output_blob = pred['result']
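Since "Accuracy" is a classification metric, a regression metric such as RMSE is a more meaningful way to evaluate a predicted depth map. A minimal sketch with hypothetical toy arrays (in practice `pred` would come from `output_blob` and `gt` from the ground-truth depth LMDB):

```python
import numpy as np

# Hypothetical predicted and ground-truth depth maps (H x W).
pred = np.array([[0.5, 0.6],
                 [0.7, 0.8]])
gt   = np.array([[0.5, 0.5],
                 [0.9, 0.8]])

# Root-mean-square error, a standard depth-estimation metric.
rmse = np.sqrt(np.mean((pred - gt) ** 2))
print(rmse)  # ~0.1118
```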

Accepted answer

  1. Accuracy is always 1 - see this answer.
  2. The "EuclideanLoss" layer is a good fit for regression.
  3. Subtracting the mean should help the network converge better. Keep using it. You can read more about the importance of data normalization, and what can be done in that respect, here.
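Since "Accuracy" only makes sense for classification, one option for monitoring a meaningful number during TEST is to replace the "Accuracy" layer with a TEST-phase Euclidean loss. A sketch (the layer name is arbitrary; "result" and "depth" are the blobs from the question's net):

```protobuf
layer {
  name: "test_loss"
  type: "EuclideanLoss"
  bottom: "result"
  bottom: "depth"
  top: "test_loss"
  include {
    phase: TEST
  }
}
```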

This answer is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/40462524/
