python - Is there a bug in large-batch PyTorch training, or in this script?

Tags: python, image-processing, gpu, gpgpu, pytorch

I've been following this PyTorch tutorial by Joshua L. Mitchell. The grand finale of the tutorial is the following PyTorch training script. I parameterized one element of the script (the batch size) in its first line and ran the script in a freshly started Jupyter notebook. The key parameter in question is BIGGER_BATCH, initially set to 4:

BIGGER_BATCH=4

import numpy as np
import torch # Tensor Package (for use on GPU)
import torch.nn as nn ## Neural Network package
import torch.optim as optim # Optimization package
import torchvision # for dealing with vision data
import torchvision.transforms as transforms # for modifying vision data to run it through models
from torch.autograd import Variable # for computational graph
import torch.nn.functional as F # Non-linearities package
import matplotlib.pyplot as plt # for plotting

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

transform = transforms.Compose( # we're going to use this to transform our data to make each sample more uniform
   [
    transforms.ToTensor(), # converts each sample from a (0-255, 0-255, 0-255) PIL Image format to a (0-1, 0-1, 0-1) FloatTensor format
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) # for each of the 3 channels of the image, subtract mean 0.5 and divide by stdev 0.5
   ]) # the normalization makes each SGD iteration more stable and overall makes convergence easier

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform) # this is all we need to get/wrangle the dataset!

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)

trainloader = torch.utils.data.DataLoader(trainset, batch_size=BIGGER_BATCH,
                                          shuffle=False)
testloader = torch.utils.data.DataLoader(testset, batch_size=BIGGER_BATCH,
                                         shuffle=False)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') # each image can have 1 of 10 labels

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 10, 5) # Let's add more feature maps - that might help
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(10, 20, 5) # And another conv layer with even more feature maps
        self.fc1 = nn.Linear(20 * 5 * 5, 120) # and finally, adjusting our first linear layer's input to our previous output
        self.fc2 = nn.Linear(120, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x) # we're changing our nonlinearity / activation function from sigmoid to ReLU for a slight speedup
        x = self.pool(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.pool(x) # after this pooling layer, we're down to a torch.Size([4, 20, 5, 5]) tensor.
        x = x.view(-1, 20 * 5 * 5) # so let's adjust our tensor again.
        x = self.fc1(x)             
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        return x

net = Net().cuda()

NUMBER_OF_EPOCHS = 25
LEARNING_RATE = 1e-2
loss_function = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE)

for epoch in range(NUMBER_OF_EPOCHS):
    train_loader_iter = iter(trainloader)
    for batch_idx, (inputs, labels) in enumerate(train_loader_iter):
        net.zero_grad()
        inputs, labels = Variable(inputs.float().cuda()), Variable(labels.cuda())
        output = net(inputs)
        loss = loss_function(output, labels)
        loss.backward()
        optimizer.step()
    if epoch % 5 == 0: # use ==, not "is", for integer comparison
        print("Iteration: " + str(epoch + 1))

dataiter = iter(testloader)
images, labels = next(dataiter) # next(dataiter) rather than dataiter.next(), which newer PyTorch removed

imshow(torchvision.utils.make_grid(images[0:4]))

outputs = net(Variable(images.cuda()))
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))

correct = 0
total = 0
for data in testloader:
    images, labels = data
    labels = labels.cuda()
    outputs = net(Variable(images.cuda()))
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum().item() # .item() keeps correct a plain Python int
print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))

This gives the good, expected result of 58% accuracy:

Predicted:    cat  ship  ship plane
GroundTruth:    cat  ship  ship plane
Accuracy of the network on the 10000 test images: 58 %

Now, if I change the first line of the script above to

BIGGER_BATCH=4096

and then restart the kernel and run the script, the accuracy is consistently 19%:

Predicted:    car  ship  ship  ship
GroundTruth:    cat  ship  ship plane
Accuracy of the network on the 10000 test images: 19 %

Note that I'm not shuffling the inputs, so I can't attribute the change to the ordering of the inputs during training:

trainloader = torch.utils.data.DataLoader(trainset, batch_size=BIGGER_BATCH,
                                          shuffle=False)
testloader = torch.utils.data.DataLoader(testset, batch_size=BIGGER_BATCH,
                                         shuffle=False)

What's causing the large drop in accuracy when I increase the batch size? Is there something wrong with the script, or with PyTorch, or something else I'm not thinking of?

Accepted answer

Sorry, I just realized this was a silly question. I was making far fewer weight updates: 1,024 times fewer. At batch size 4, one epoch over the 50,000 training images makes 50,000 / 4 = 12,500 gradient updates, while at batch size 4096 it makes only about 12. That's why the accuracy was so much lower. I could tune the learning rate to compensate, but apparently there's a tradeoff between batch size and learning rate that I'm only now learning about.
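For reference, here is a minimal sketch of the linear learning-rate scaling heuristic, reusing the variable names from the script above; REFERENCE_BATCH and the scaling rule itself are illustrative assumptions, not part of the original tutorial, and for a factor this large the rule typically needs warmup or a smaller factor to avoid divergence:

REFERENCE_BATCH = 4 # the batch size that LEARNING_RATE = 1e-2 was originally tuned for
scale = BIGGER_BATCH / REFERENCE_BATCH # 1024 when BIGGER_BATCH = 4096

# Scale the step size by the same factor as the batch size, so each epoch's
# (far fewer) updates move the weights a comparable total distance.
# Treat this as a starting point for tuning, not a guarantee:
LEARNING_RATE = 1e-2 * scale
optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE)

The alternative is to keep the learning rate fixed and raise NUMBER_OF_EPOCHS by the same factor, so the total number of gradient updates matches the small-batch run.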

Regarding "python - Is there a bug in large-batch PyTorch training, or in this script?", see the original question on Stack Overflow: https://stackoverflow.com/questions/51180423/
