python - pytorch - using the device inside a 'with statement'

Tags: python gpu pytorch

Is there a way to run PyTorch within the context of a specific (GPU) device, without having to specify the device for every new tensor (e.g. with the .to option)?

Something like TensorFlow's with tf.device('/device:GPU:0'): ...

It seems that the default device is the CPU (unless I'm doing something wrong):

with torch.cuda.device('0'):
   a = torch.zeros(1)
   print(a.device)

>>> cpu

Best Answer

Unfortunately, in the current implementation the with-device statement does not work that way; it can only be used to switch between CUDA devices.
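
To see what the context manager actually does, here is a minimal check (a sketch, assuming a machine with at least two GPUs): it only changes which CUDA device is "current"; a tensor created without a device argument still lands on the CPU.

import torch

print(torch.cuda.current_device())       # e.g. 0 (the default CUDA device)

with torch.cuda.device(1):
    print(torch.cuda.current_device())   # 1 - the current CUDA device has changed
    t = torch.zeros(1)
    print(t.device)                      # cpu - still a CPU tensor

print(torch.cuda.current_device())       # back to 0 after leaving the context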


You still have to use the device argument to specify which device is used (or .cuda() to move the tensor to a specific GPU), with a statement like this:

# allocates a tensor on GPU 1
a = torch.tensor([1., 2.], device=cuda)

So to access cuda:1:

cuda = torch.device('cuda')

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)

And to access cuda:2:

cuda = torch.device('cuda')

with torch.cuda.device(2):
    # allocates a tensor on GPU 2
    a = torch.tensor([1., 2.], device=cuda)

However, a tensor created without the device argument will still be a CPU tensor:

cuda = torch.device('cuda')

with torch.cuda.device(1):
    # allocates a tensor on CPU
    a = torch.tensor([1., 2.])
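
If you want a tensor to end up on the device selected by the context without naming that device explicitly, one option (also shown in the documentation examples below) is to call .cuda() with no index, which uses the current CUDA device:

cuda = torch.device('cuda')

with torch.cuda.device(1):
    # .cuda() with no argument moves the tensor to the current CUDA device (GPU 1 here)
    a = torch.tensor([1., 2.]).cuda()
    # a.device is device(type='cuda', index=1)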

To sum it up:

No - unfortunately, with the current implementation of the with-device statement, it is not possible to use it in the way described in your question.


Here are some more examples from the documentation:

cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)

x = torch.tensor([1., 2.], device=cuda0)
# x.device is device(type='cuda', index=0)
y = torch.tensor([1., 2.]).cuda()
# y.device is device(type='cuda', index=0)

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)

    # transfers a tensor from CPU to GPU 1
    b = torch.tensor([1., 2.]).cuda()
    # a.device and b.device are device(type='cuda', index=1)

    # You can also use ``Tensor.to`` to transfer a tensor:
    b2 = torch.tensor([1., 2.]).to(device=cuda)
    # b.device and b2.device are device(type='cuda', index=1)

    c = a + b
    # c.device is device(type='cuda', index=1)

    z = x + y
    # z.device is device(type='cuda', index=0)

    # even within a context, you can specify the device
    # (or give a GPU index to the .cuda call)
    d = torch.randn(2, device=cuda2)
    e = torch.randn(2).to(cuda2)
    f = torch.randn(2).cuda(cuda2)
    # d.device, e.device, and f.device are all device(type='cuda', index=2)
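
As a practical pattern (a sketch, not part of the original answer), you can pick the target device once and pass it explicitly wherever tensors or models are created; this comes closest to the TensorFlow-style behaviour asked about:

import torch

# Choose the device once and reuse it everywhere.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

x = torch.zeros(1, device=device)    # created directly on the chosen device
y = torch.randn(2, 3).to(device)     # or moved there explicitly
# model = MyModel().to(device)       # same pattern for modules (MyModel is hypothetical)

print(x.device, y.device)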

Regarding python - pytorch - using the device inside a 'with statement', we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52076815/
