python - Debugging a neural network: dropout probability not inside [0, 1]

Tags: python debugging neural-network pytorch torch

I tried to add a dropout rate to my neural network (NN) with torch and ended up with a strange error. How can I fix it?

My idea is to write the neural network inside a function so that it is easier to call. The function is below (I personally think the problem lies inside the NN class, but I have included everything so there is a working example).

def train_neural_network(data_train_X, data_train_Y, batch_size, learning_rate, graph = True, dropout = 0.0 ):
  input_size = len(data_test_X.columns)
  hidden_size = 200
  num_classes = 4
  num_epochs = 120
  batch_size = batch_size
  learning_rate = learning_rate

  # The class of NN
  class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p = dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)

    def forward(self, x, p = dropout):
          out = F.relu(self.fc1(x))
          out = F.relu(self.fc2(out))
          out = nn.Dropout(out, p) #drop
          out = self.fc3(out)
          return out

  # Prepare data
  X_train = torch.from_numpy(data_train_X.values).float()
  Y_train = torch.from_numpy(data_train_Y.values).float()

  # Loading data
  train = torch.utils.data.TensorDataset(X_train, Y_train)
  train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size)

  net = NeuralNet(input_size, hidden_size, num_classes)

  # Loss
  criterion = nn.CrossEntropyLoss()

  # Optimiser
  optimiser = torch.optim.SGD(net.parameters(), lr=learning_rate)

  # Proper training
  total_step = len(train_loader)
  loss_values = []

  for epoch in range(num_epochs+1):
    net.train()

    train_loss = 0.0

    for i, (predictors, results) in enumerate(train_loader, 0):
      # Forward pass
      outputs = net(predictors)
      results = results.long()
      results = results.squeeze_()
      loss = criterion(outputs, results)

      # Backward and optimise
      optimiser.zero_grad()
      loss.backward()
      optimiser.step()

      # Update loss
      train_loss += loss.item()

    loss_values.append(train_loss / batch_size )
  print('Finished Training')

  return net

When I call the function:

net = train_neural_network(data_train_X = data_train_X, data_train_Y = data_train_Y, batch_size = batch_size, learning_rate = learning_rate, dropout = 0.1)

I get this error:

net = train_neural_network(data_train_X = data_train_X, data_train_Y = data_train_Y, batch_size = batch_size, learning_rate = learning_rate, dropout = 0.1)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/dropout.py in __init__(self, p, inplace)
      8     def __init__(self, p=0.5, inplace=False):
      9         super(_DropoutNd, self).__init__()
---> 10         if p < 0 or p > 1:
     11             raise ValueError("dropout probability has to be between 0 and 1, "
     12                              "but got {}".format(p))

RuntimeError: bool value of Tensor with more than one value is ambiguous

Why do you think this error appears?

Before setting the dropout rate, everything was working. Bonus points if you also know how to implement a bias in my network, for example on the hidden layer! I could not find any example online.

Best Answer

Change your architecture to this:

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        # Build the dropout layer once, as a module, instead of inside forward()
        self.dropout = nn.Dropout(p=p)

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        # Apply dropout to the hidden activations, then project to the classes
        out = self.fc3(self.dropout(out))
        return out
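
The reason for the original error: as the traceback shows, nn.Dropout.__init__ has the signature (p=0.5, inplace=False), so nn.Dropout(out, p) passes the activation tensor out as the probability p, and the check p < 0 or p > 1 is then evaluated on a whole tensor, which is exactly the "bool value of Tensor with more than one value is ambiguous" error. If you would rather keep dropout inside forward() instead of registering a module, a minimal sketch of the equivalent functional form (same fc1/fc2/fc3 layers as above; the class name is mine) would be:

import torch.nn as nn
import torch.nn.functional as F

class NeuralNetFunctionalDropout(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=0.0):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        self.p = p  # keep the dropout probability as a plain float

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        # F.dropout takes the probability as a float; tying it to self.training
        # lets net.eval() switch dropout off, just like the nn.Dropout module.
        out = F.dropout(out, p=self.p, training=self.training)
        return self.fc3(out)

Either way, remember to call net.train() before training (your loop already does) and net.eval() before validation, otherwise dropout stays active at inference time.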

Let me know if it works.
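
On the bonus question about biases: nn.Linear already includes a learnable bias term by default (bias=True), so fc1, fc2 and fc3 above each have one; you only pass bias=False if you want to remove it. A small illustration (the layer sizes here are just placeholders):

import torch.nn as nn

hidden = nn.Linear(200, 200)               # bias=True is the default
no_bias = nn.Linear(200, 200, bias=False)  # explicitly disable the bias

print(hidden.bias.shape)   # torch.Size([200]) - one bias per output unit
print(no_bias.bias)        # None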

A similar question about debugging a neural network whose dropout probability is not inside [0, 1] can be found on Stack Overflow: https://stackoverflow.com/questions/59008098/
