I am working on a deep learning project in which I have written some tests to evaluate the weights of a neural network. The code for evaluate_net_weight looks like this:
/*! Compute the loss of the net as a function of the weight at index (i,j) in
 *  layer l. dx is added as an offset to the current value of the weight. */
//______________________________________________________________________________
template <typename Architecture>
auto evaluate_net_weight(TDeepNet<Architecture> &net, std::vector<typename Architecture::Matrix_t> &X,
                         const typename Architecture::Matrix_t &Y, const typename Architecture::Matrix_t &W,
                         size_t l, size_t k, size_t i, size_t j, typename Architecture::Scalar_t xvalue)
   -> typename Architecture::Scalar_t
{
   using Scalar_t = typename Architecture::Scalar_t;

   // Temporarily overwrite the weight, evaluate the loss, then restore the old value.
   Scalar_t prev_value = net.GetLayerAt(l)->GetWeightsAt(k)(i, j);
   net.GetLayerAt(l)->GetWeightsAt(k)(i, j) = xvalue;
   Scalar_t res = net.Loss(X, Y, W, false, false);
   net.GetLayerAt(l)->GetWeightsAt(k)(i, j) = prev_value;
   //std::cout << "compute loss for weight " << xvalue << " " << prev_value << " result " << res << std::endl;
   return res;
}
The function is called as follows:
// Testing input gate: input weights k = 0
auto &Wi  = layer->GetWeightsAt(0);
auto &dWi = layer->GetWeightGradientsAt(0);
for (size_t i = 0; i < (size_t) Wi.GetNrows(); ++i) {
   for (size_t j = 0; j < (size_t) Wi.GetNcols(); ++j) {
      auto f = [&lstm, &XArch, &Y, &weights, i, j](Scalar_t x) {
         return evaluate_net_weight(lstm, XArch, Y, weights, 0, 0, i, j, x);
      };
      ROOT::Math::Functor1D func(f);
      double dy = deriv.Derivative1(func, Wi(i, j), 1.E-5);
      Double_t dy_ref = dWi(i, j);
      // Compute relative error if dy != 0
      Double_t error;
      std::string errorType;
      if (std::fabs(dy_ref) > 1e-15) {
         error = std::fabs((dy - dy_ref) / dy_ref);
         errorType = "relative";
      } else {
         error = std::fabs(dy - dy_ref);
         errorType = "absolute";
      }
      if (debug) std::cout << "Input Gate: input weight gradients (" << i << "," << j
                           << ") : (comp, ref) " << dy << ", " << dy_ref << std::endl;
      if (error >= maximum_error) {
         maximum_error = error;
         maxErrorType = errorType;
      }
   }
}
Here XArch is my input, Y is the prediction, and lstm refers to the network type. These are all already defined.
When I try to build the program with cmake, I keep getting this error:
/Users/harshitprasad/Desktop/gsoc-rnn/root/tmva/tmva/test/DNN/RNN/TestLSTMBackpropagation.h:385:24: error:
no matching function for call to 'evaluate_net_weight'
return evaluate_net_weight(lstm, XArch, Y, weights, 0, 2, i, j, x);
^~~~~~~~~~~~~~~~~~~
/Users/harshitprasad/Desktop/gsoc-rnn/root/tmva/tmva/test/DNN/RNN/TestLSTMBackpropagation.h:67:6: note:
candidate function [with Architecture = TMVA::DNN::TReference<double>] not viable: no known
conversion from 'Scalar_t' (aka 'TMatrixT<double>') to 'typename TReference<double>::Scalar_t'
(aka 'double') for 9th argument
auto evaluate_net_weight(TDeepNet<Architecture> &net, std::vector<typename Architecture::Matr...
I cannot figure out why this error occurs. If anyone could help me solve this, that would be great. Thank you!
Best answer
You probably have different, conflicting definitions of Scalar_t in different scopes.
From the error message, we can see that the function expects a typename TReference<double>::Scalar_t (equivalent to double), but you are actually passing an argument of type Scalar_t (probably defined somewhere at global scope) that is equivalent to TMatrixT<double>, which causes the error, as was noted in the comments.
Regarding "c++ - no matching function for call to 'evaluate_net_weight' in C++11", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51220544/