Here is a simple explanation of my goal.
Given a network O with the structure:

| Input | -> (x) -> | Blob A | -> (xa) -> | Blob B | -> (xb) -> | Blob C | -> (xc) -> | Output |
I want to create a subnetwork that computes a noise loss for Blob C. The operation takes the input xb and the original output xc, passes xb + noise through Blob C a second time to get xc', and then computes mse_loss between xc and xc'.
I have tried creating an nn.Sequential from the original model, but I am not sure whether that creates a new deep copy or just a reference.
If I have missed anything, feel free to comment.
Thanks.
Best Answer
After some testing, I found that if a layer reference is kept (say, in a variable) and a new model is then built from that layer with nn.Sequential, the new model shares the same layer reference. So when the original network is updated, the new model is updated as well.
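This sharing behavior can be checked in isolation with a minimal sketch (the layer and variable names below are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

# A single layer, then a Sequential built from the same reference.
layer = nn.Linear(4, 4)
seq = nn.Sequential(layer)

# The Sequential holds the same module object, not a copy,
# so their parameters share the same underlying storage.
assert seq[0] is layer
assert seq[0].weight.data_ptr() == layer.weight.data_ptr()

# An update made through one handle is visible through the other.
with torch.no_grad():
    layer.weight.add_(1.0)
assert torch.equal(seq[0].weight, layer.weight)
```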
The code I used to test my hypothesis is below:
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

class TestNN(nn.Module):
    def __init__(self):
        super(TestNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.relu2 = nn.ReLU()
        self.conv3 = nn.Conv2d(64, 3, kernel_size=3, padding=1)
        self.relu3 = nn.ReLU()

    def forward(self, x):
        h = self.relu1(self.conv1(x))
        h = self.relu2(self.conv2(h))
        h = self.relu3(self.conv3(h))
        return h
net = TestNN()
testInput = torch.from_numpy(np.random.rand(1, 3, 3, 3)).float()
target = torch.from_numpy(np.random.rand(1, 3, 3, 3)).float()
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)
def subnetwork(model, start_layer_idx, end_layer_idx):
    subnetwork = nn.Sequential()
    for idx, layer in enumerate(list(model)[start_layer_idx:end_layer_idx + 1]):
        subnetwork.add_module("layer_{}".format(idx), layer)
    return subnetwork
start = subnetwork(net.children(), 0, 1)
middle = subnetwork(net.children(), 2, 3)
end = subnetwork(net.children(), 4, 5)
print(end(middle(start(testInput))))
print(net(testInput))
for idx in range(5):
    net.zero_grad()
    out = net(testInput)
    loss = criterion(out, target)
    print("[{}] {:4f}".format(idx, loss))
    loss.backward()
    optimizer.step()
print(end(middle(start(testInput))))
print(net(testInput))
The two outputs match both before and after training, so I conclude that my hypothesis is correct: the subnetworks share the original network's layers rather than copying them.
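Conversely, when an independent subnetwork is actually wanted (the "deep copy" case the question asks about), copy.deepcopy breaks the sharing. A minimal sketch, with hypothetical names:

```python
import copy

import torch
import torch.nn as nn

layer = nn.Linear(4, 4)
shared = nn.Sequential(layer)        # holds a reference to `layer`
independent = copy.deepcopy(shared)  # holds its own copy of the parameters

# The shared version points at the very same tensor storage...
assert shared[0].weight.data_ptr() == layer.weight.data_ptr()
# ...while the deep copy does not, so training one leaves the other untouched.
assert independent[0].weight.data_ptr() != layer.weight.data_ptr()
```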
To achieve my goal, I created a "transparent" loss, following this tutorial.
import random as rng

import torch.nn.functional as F

class NoiseLoss(nn.Module):
    def __init__(self, subnet, noise_count=20, noise_range=0.3):
        super(NoiseLoss, self).__init__()
        self.net = subnet
        self.noise_count = noise_count
        self.noise_range = noise_range

    def add_noise(self, x):
        b, c, h, w = x.size()
        noise = torch.zeros(c, h, w)
        for i in range(self.noise_count):
            row, col = rng.randint(0, h - 1), rng.randint(0, w - 1)
            for j in range(c):
                noise[j, row, col] = (2 * (rng.random() % self.noise_range)) - self.noise_range
        noise = noise.float()
        xp = x.clone()
        for b_idx in range(b):
            xp[b_idx, :, :, :] = xp[b_idx, :, :, :] + noise
        return xp

    def forward(self, x):
        self.loss = F.mse_loss(x, self.add_noise(x))
        print(self.loss)
        return x
noise_losses = []
testLoss = NoiseLoss(subnetwork(net.children(), 2, 3))
middle.add_module('noise_loss_test', testLoss)
noise_losses.append(testLoss)
and modified my loop to:
...
    print("[{}] {:4f}".format(idx, loss))
    for nl in noise_losses:
        loss += nl.loss
    loss.backward(retain_graph=True)
...
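For reference, the question's stated goal (pass xb + noise through Blob C a second time and compare the two outputs) can also be sketched end to end without the transparent-loss machinery. All names below are illustrative stand-ins, not the poster's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Hypothetical stand-ins: `front` plays Blob A + Blob B, `blob_c` plays Blob C.
front = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
blob_c = nn.Sequential(nn.Linear(8, 8))

optimizer = optim.SGD(list(front.parameters()) + list(blob_c.parameters()), lr=0.01)
criterion = nn.MSELoss()

x = torch.randn(4, 8)
target = torch.randn(4, 8)

for step in range(3):
    optimizer.zero_grad()
    xb = front(x)
    xc = blob_c(xb)                                      # original output xc
    xc_noisy = blob_c(xb + 0.1 * torch.randn_like(xb))   # xb + noise through Blob C again -> xc'
    # Task loss plus the noise loss between xc and xc'.
    loss = criterion(xc, target) + F.mse_loss(xc, xc_noisy)
    loss.backward()
    optimizer.step()
```

Because `blob_c` is called twice on tensors derived from the same graph, gradients from both terms flow back through Blob C and the front layers in a single backward pass.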
If I have missed anything, feel free to comment.
Regarding "python - How to create a subnetwork reference in PyTorch?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/51511074/