python - PyTorch: how to print the output blob size of every layer in a network?

Tags: python deep-learning pytorch

I can print the network structure like this (and also, how do I print the positional index of every "simple" layer? In this example, index 3 refers to a whole Fire module, and its contents (the "simple" layers) have no indices of their own):

from torchvision import models

net = models.squeezenet1_1(pretrained=True)
print(net)



SqueezeNet(
  (features): Sequential(
    (0): Conv2d (3, 64, kernel_size=(3, 3), stride=(2, 2))
    (1): ReLU(inplace)
    (2): MaxPool2d(kernel_size=(3, 3), stride=(2, 2), dilation=(1, 1))
    (3): Fire(
      (squeeze): Conv2d (64, 16, kernel_size=(1, 1), stride=(1, 1))
      (squeeze_activation): ReLU(inplace)
      (expand1x1): Conv2d (16, 64, kernel_size=(1, 1), stride=(1, 1))
      (expand1x1_activation): ReLU(inplace)
      (expand3x3): Conv2d (16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (expand3x3_activation): ReLU(inplace)
    )
    (4): Fire(
      (squeeze): Conv2d (128, 16, kernel_size=(1, 1), stride=(1, 1))
      (squeeze_activation): ReLU(inplace)
      (expand1x1): Conv2d (16, 64, kernel_size=(1, 1), stride=(1, 1))
      (expand1x1_activation): ReLU(inplace)
      (expand3x3): Conv2d (16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (expand3x3_activation): ReLU(inplace)
    )
    (5): MaxPool2d(kernel_size=(3, 3), stride=(2, 2), dilation=(1, 1))
    (6): Fire(
      (squeeze): Conv2d (128, 32, kernel_size=(1, 1), stride=(1, 1))
      (squeeze_activation): ReLU(inplace)
      (expand1x1): Conv2d (32, 128, kernel_size=(1, 1), stride=(1, 1))
      (expand1x1_activation): ReLU(inplace)
      (expand3x3): Conv2d (32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (expand3x3_activation): ReLU(inplace)
    )
    (7): Fire(
      (squeeze): Conv2d (256, 32, kernel_size=(1, 1), stride=(1, 1))
      (squeeze_activation): ReLU(inplace)
      (expand1x1): Conv2d (32, 128, kernel_size=(1, 1), stride=(1, 1))
      (expand1x1_activation): ReLU(inplace)
      (expand3x3): Conv2d (32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (expand3x3_activation): ReLU(inplace)
    )
    (8): MaxPool2d(kernel_size=(3, 3), stride=(2, 2), dilation=(1, 1))
    (9): Fire(
      (squeeze): Conv2d (256, 48, kernel_size=(1, 1), stride=(1, 1))
      (squeeze_activation): ReLU(inplace)
      (expand1x1): Conv2d (48, 192, kernel_size=(1, 1), stride=(1, 1))
      (expand1x1_activation): ReLU(inplace)
      (expand3x3): Conv2d (48, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (expand3x3_activation): ReLU(inplace)
    )
    (10): Fire(
      (squeeze): Conv2d (384, 48, kernel_size=(1, 1), stride=(1, 1))
      (squeeze_activation): ReLU(inplace)
      (expand1x1): Conv2d (48, 192, kernel_size=(1, 1), stride=(1, 1))
      (expand1x1_activation): ReLU(inplace)
      (expand3x3): Conv2d (48, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (expand3x3_activation): ReLU(inplace)
    )
    (11): Fire(
      (squeeze): Conv2d (384, 64, kernel_size=(1, 1), stride=(1, 1))
      (squeeze_activation): ReLU(inplace)
      (expand1x1): Conv2d (64, 256, kernel_size=(1, 1), stride=(1, 1))
      (expand1x1_activation): ReLU(inplace)
      (expand3x3): Conv2d (64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (expand3x3_activation): ReLU(inplace)
    )
    (12): Fire(
      (squeeze): Conv2d (512, 64, kernel_size=(1, 1), stride=(1, 1))
      (squeeze_activation): ReLU(inplace)
      (expand1x1): Conv2d (64, 256, kernel_size=(1, 1), stride=(1, 1))
      (expand1x1_activation): ReLU(inplace)
      (expand3x3): Conv2d (64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (expand3x3_activation): ReLU(inplace)
    )
  )
  (classifier): Sequential(
    (0): Dropout(p=0.5)
    (1): Conv2d (512, 1000, kernel_size=(1, 1), stride=(1, 1))
    (2): ReLU(inplace)
    (3): AvgPool2d(kernel_size=13, stride=1, padding=0, ceil_mode=False, count_include_pad=True)
  )
)

I can print the weight sizes like this:

for i, weights in enumerate(list(net.parameters())):
    print('i:',i,'weights:',weights.size())



i: 0 weights: torch.Size([64, 3, 3, 3])
i: 1 weights: torch.Size([64])
i: 2 weights: torch.Size([16, 64, 1, 1])
i: 3 weights: torch.Size([16])
i: 4 weights: torch.Size([64, 16, 1, 1])
i: 5 weights: torch.Size([64])
i: 6 weights: torch.Size([64, 16, 3, 3])
i: 7 weights: torch.Size([64])
i: 8 weights: torch.Size([16, 128, 1, 1])
i: 9 weights: torch.Size([16])
i: 10 weights: torch.Size([64, 16, 1, 1])
i: 11 weights: torch.Size([64])
i: 12 weights: torch.Size([64, 16, 3, 3])
i: 13 weights: torch.Size([64])
i: 14 weights: torch.Size([32, 128, 1, 1])
i: 15 weights: torch.Size([32])
i: 16 weights: torch.Size([128, 32, 1, 1])
i: 17 weights: torch.Size([128])
i: 18 weights: torch.Size([128, 32, 3, 3])
i: 19 weights: torch.Size([128])
i: 20 weights: torch.Size([32, 256, 1, 1])
i: 21 weights: torch.Size([32])
i: 22 weights: torch.Size([128, 32, 1, 1])
i: 23 weights: torch.Size([128])
i: 24 weights: torch.Size([128, 32, 3, 3])
i: 25 weights: torch.Size([128])
i: 26 weights: torch.Size([48, 256, 1, 1])
i: 27 weights: torch.Size([48])
i: 28 weights: torch.Size([192, 48, 1, 1])
i: 29 weights: torch.Size([192])
i: 30 weights: torch.Size([192, 48, 3, 3])
i: 31 weights: torch.Size([192])
i: 32 weights: torch.Size([48, 384, 1, 1])
i: 33 weights: torch.Size([48])
i: 34 weights: torch.Size([192, 48, 1, 1])
i: 35 weights: torch.Size([192])
i: 36 weights: torch.Size([192, 48, 3, 3])
i: 37 weights: torch.Size([192])
i: 38 weights: torch.Size([64, 384, 1, 1])
i: 39 weights: torch.Size([64])
i: 40 weights: torch.Size([256, 64, 1, 1])
i: 41 weights: torch.Size([256])
i: 42 weights: torch.Size([256, 64, 3, 3])
i: 43 weights: torch.Size([256])
i: 44 weights: torch.Size([64, 512, 1, 1])
i: 45 weights: torch.Size([64])
i: 46 weights: torch.Size([256, 64, 1, 1])
i: 47 weights: torch.Size([256])
i: 48 weights: torch.Size([256, 64, 3, 3])
i: 49 weights: torch.Size([256])
i: 50 weights: torch.Size([1000, 512, 1, 1])
i: 51 weights: torch.Size([1000])
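
As a side note on the indexing question above, here is a minimal variant of that loop (not from the original post) using net.named_parameters(), which also reports the dotted module path of each parameter, e.g. features.3.squeeze.weight, so the "simple" layers inside each Fire module can be identified by name:

for name, weights in net.named_parameters():
    # name is the dotted path of the owning module plus the parameter name
    print('name:', name, 'weights:', weights.size())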

How can I print the output blob size of every layer in the network?

Best answer

You can register a hook (a callback function) that prints the shapes of the input and output tensors, as described in the manual: Forward and Backward Function Hooks.
Example:

net.register_forward_hook(your_print_blobs_function)

After that, you need to run one forward pass with some input tensor.

import torch

expected_image_shape = (3, 224, 224)
input_tensor = torch.autograd.Variable(torch.rand(1, *expected_image_shape))
# this call will invoke all registered forward hooks
output_tensor = net(input_tensor)
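
For completeness, here is a minimal sketch of what such a printing hook could look like; the function name print_blobs_hook and the registration on every leaf module via net.named_modules() are illustrative choices, not part of the original answer:

def print_blobs_hook(module, input, output):
    # forward hooks receive (module, input, output); input is a tuple of tensors
    print(module.__class__.__name__,
          'input:', [tuple(t.size()) for t in input],
          'output:', tuple(output.size()))

# register the hook on every leaf module so each "simple" layer reports its blob size
for name, module in net.named_modules():
    if len(list(module.children())) == 0:
        module.register_forward_hook(print_blobs_hook)

output_tensor = net(input_tensor)  # this forward pass triggers all hooks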

The original question, python - PyTorch: how to print the output blob size of every layer in a network?, is on Stack Overflow: https://stackoverflow.com/questions/48675114/
