I keep seeing "Restarting data prefetching from start" in my log output. Apparently this means there is not enough data and the data is being prefetched from the beginning. However, my dataset has 10,000 data samples and my batch size is 4. How can it need to re-prefetch data, when a batch size of 4 means each iteration only consumes 4 samples? Can anyone clarify my understanding?
Log:
I0409 20:33:35.053406 20072 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:35.053447 20074 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:40.320605 20074 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:40.320598 20072 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:45.591019 20072 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:45.591047 20074 data_layer.cpp:73] Restarting data prefetching from start.
I0409 20:33:49.392580 20034 solver.cpp:398] Test net output #0: loss = nan (* 1 = nan loss)
I0409 20:33:49.780678 20034 solver.cpp:219] Iteration 0 (-4.2039e-45 iter/s, 20.1106s/100 iters), loss = 54.0694
I0409 20:33:49.780731 20034 solver.cpp:238] Train net output #0: loss = 54.0694 (* 1 = 54.0694 loss)
I0409 20:33:49.780750 20034 sgd_solver.cpp:105] Iteration 0, lr = 0.0001
I0409 20:34:18.812854 20034 solver.cpp:219] Iteration 100 (3.44442 iter/s, 29.0325s/100 iters), loss = 21.996
I0409 20:34:18.813213 20034 solver.cpp:238] Train net output #0: loss = 21.996 (* 1 = 21.996 loss)
Best answer
If you have 10,000 samples and you process them in batches of size 4, then after 10,000/4 = 2,500 iterations you will have processed all the data, and Caffe will start reading it again from the beginning.
By the way, one complete pass over all the samples is also called an "epoch".
After each epoch, Caffe prints to the log:
Restarting data prefetching from start
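The arithmetic above can be sketched as a small calculation (plain Python, not Caffe code; `num_samples` and `batch_size` are taken from the question):

```python
# How often "Restarting data prefetching from start" appears:
# once per epoch, i.e. once per full pass over the dataset.
num_samples = 10_000  # dataset size from the question
batch_size = 4        # batch size from the question

iters_per_epoch = num_samples // batch_size
print(iters_per_epoch)  # 2500 iterations per epoch
```

So with these settings the message should reappear roughly every 2,500 training iterations per data layer.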
A similar question about machine-learning - what does *Restarting data prefetching from start* mean in Caffe - can be found on Stack Overflow: https://stackoverflow.com/questions/43306352/