python - PyTorch | Getting "RuntimeError: Found dtype Long but expected Float" with the Omniglot dataset

Tags: python neural-network pytorch

I'm completely new to PyTorch and neural networks. I started studying these topics this week, and my mentor gave me some code along with a few tasks to work through with it. But the code he gave me doesn't run. I've spent the whole day trying to fix it, with no result. Since I have no background in NNs or PyTorch, it's hard for me to understand the problem. I need your help. Thanks!

import torch
import numpy as np
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from torchsummary import summary
#DEFINE YOUR DEVICE
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device) #if cpu, go Runtime-> Change runtime type-> Hardware accelerator GPU -> Save -> Redo previous steps
#DOWNLOAD DATASET
train_data = datasets.Omniglot('./data', background=True, download = True, transform = transforms.ToTensor())
test_data = datasets.Omniglot('./data',background = False, download = True, transform = transforms.ToTensor())
#DEFINE DATA GENERATOR
batch_size = 50
train_generator = torch.utils.data.DataLoader(train_data, batch_size = batch_size, shuffle = True)
test_generator = torch.utils.data.DataLoader(test_data, batch_size = batch_size, shuffle = False)
#DEFINE NEURAL NETWORK MODEL
class CNN(torch.nn.Module):
  def __init__(self):
    super(CNN, self).__init__()
    self.conv1 = torch.nn.Conv2d(1, 8, kernel_size = 4, stride = 1)
    self.conv2 = torch.nn.Conv2d(8, 16, kernel_size = 4, stride = 1)
    self.mpool = torch.nn.MaxPool2d(2)
    self.fc1 = torch.nn.Linear(18432, 256)
    self.fc2 = torch.nn.Linear(256, 64)
    self.fc3 = torch.nn.Linear(64, 50)
    self.relu = torch.nn.ReLU()
    self.sigmoid = torch.nn.Sigmoid()
  def forward(self, x):
    hidden = self.mpool(self.relu(self.conv1(x)))
    hidden = self.mpool(self.relu(self.conv2(hidden)))
    hidden = hidden.view(-1,18432)
    hidden = self.relu(self.fc1(hidden))
    hidden = self.relu(self.fc2(hidden))
    output = self.fc3(hidden)
    return output
# CREATE MODEL
model = CNN()
model.to(device)
summary(model, (1, 105, 105))
# DEFINE LOSS FUNCTION AND OPTIMIZER
learning_rate = 0.001
loss_fun = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# TRAIN THE MODEL
model.train()
epoch = 10
num_of_batch = np.int(len(train_generator.dataset) / batch_size)
loss_values = np.zeros(epoch * num_of_batch)
for i in range(epoch):
      for batch_idx, (x_train, y_train) in enumerate(train_generator):
          x_train, y_train = x_train.to(device), y_train.to(device)
          optimizer.zero_grad()
          y_pred = model(x_train)
          loss = loss_fun(y_pred, y_train)
          loss_values[num_of_batch * i + batch_idx] = loss.item()
          loss.backward()
          optimizer.step()
          if (batch_idx + 1) % batch_size == 0:
              print('Epoch: {}/{} [Batch: {}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                  i + 1, epoch, (batch_idx + 1) * len(x_train), len(train_generator.dataset),
                  100. * (batch_idx + 1) / len(train_generator), loss.item()))
#PLOT THE LEARNING CURVE
iterations = np.linspace(0,epoch,num_of_batch*epoch)
plt.plot(iterations, loss_values)
plt.title('Learning Curve')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.grid('on')
#TEST THE MODEL
model.eval()
correct=0
total=0
for x_val, y_val in test_generator:
  x_val = x_val.to(device)
  y_val = y_val.to(device)
  output = model(x_val)
  y_pred = output.argmax(dim=1)
  for i in range(y_pred.shape[0]):
    if y_val[i]==y_pred[i]:
      correct += 1
    total +=1
print('Validation accuracy: %.2f%%' %((100*correct)//(total)))

Here is the error I get:

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([50])) that is different to the input size (torch.Size([25, 50])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.mse_loss(input, target, reduction=self.reduction)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-11-bffd863688df> in <module>()
     13     loss = loss_fun(y_pred, y_train)
     14     loss_values[num_of_batch*i+batch_idx] = loss.item()
---> 15     loss.backward()
     16     optimizer.step()
     17     if (batch_idx+1) % batch_size == 0:

1 frames
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    147     Variable._execution_engine.run_backward(
    148         tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149         allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    150 
    151 

RuntimeError: Found dtype Long but expected Float

Best Answer

Your dataset is returning integers for the labels; you should cast them to floating point. One way to solve this:

loss = loss_fun(y_pred, y_train.float())
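As a side note (the one-line cast above is enough to clear the RuntimeError itself): the UserWarning printed just before the traceback, a target size of (50) versus an input size of (25, 50), points to a second issue. With 105x105 inputs, the tensor entering the fully connected layers is 16 * 24 * 24 = 9216 values per sample, so hidden.view(-1, 18432) folds every two samples into one row. For a classification task like this, the more conventional setup is to flatten while preserving the batch dimension and to use CrossEntropyLoss, which accepts the Long labels directly. A minimal sketch under those assumptions (it reuses model, optimizer, device, and train_generator from the question, and assumes the model's output width matches the number of classes in the labels):

# Sketch only, not the answer's fix. Assumes the CNN is adjusted so that
# self.fc1 = torch.nn.Linear(9216, 256) and forward() flattens with
# hidden = hidden.view(x.size(0), -1), keeping the batch dimension intact.
loss_fun = torch.nn.CrossEntropyLoss()  # takes raw logits (N, C) and Long targets (N,)

for batch_idx, (x_train, y_train) in enumerate(train_generator):
    x_train, y_train = x_train.to(device), y_train.to(device)
    optimizer.zero_grad()
    y_pred = model(x_train)           # logits, shape (batch, num_classes)
    loss = loss_fun(y_pred, y_train)  # no .float() cast needed with CrossEntropyLoss
    loss.backward()
    optimizer.step()

With MSELoss and the cast above, the model regresses directly onto the integer label values, which is why the size-mismatch warning still matters; CrossEntropyLoss sidesteps both the dtype and the broadcasting problem, at the cost of restructuring the flatten step as noted in the comments.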

Regarding python - PyTorch | Getting "RuntimeError: Found dtype Long but expected Float" with the Omniglot dataset, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/69124057/
