Is there a way to use a custom torch.autograd.Function in an nn.Sequential object, or should I explicitly use an nn.Module object with a forward function? Specifically, I am trying to implement a sparse autoencoder, and I need to add the L1 distance of the code (hidden representation) to the loss.
I defined the custom torch.autograd.Function L1Penalty below and then tried to use it inside an nn.Sequential object, as shown below. However, when I run it I get the error TypeError: __main__.L1Penalty is not a Module subclass.
How can I fix this?
class L1Penalty(torch.autograd.Function):

    @staticmethod
    def forward(ctx, input, l1weight = 0.1):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input, None

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_variables
        grad_input = input.clone().sign().mul(ctx.l1weight)
        grad_input += grad_output
        return grad_input

model = nn.Sequential(
    nn.Linear(10, 10),
    nn.ReLU(),
    nn.Linear(10, 6),
    nn.ReLU(),
    # sparsity
    L1Penalty(),
    nn.Linear(6, 10),
    nn.ReLU(),
    nn.Linear(10, 10),
    nn.ReLU()
).to(device)
Best Answer
The right way to do it is this:
import torch, torch.nn as nn

class L1Penalty(torch.autograd.Function):

    @staticmethod
    def forward(ctx, input, l1weight = 0.1):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        # identity in the forward pass; the penalty only acts on gradients
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors  # saved_variables is deprecated/removed
        # add l1weight * sign(input) to the incoming gradient
        grad_input = input.clone().sign().mul(ctx.l1weight)
        grad_input += grad_output
        return grad_input
Create a Lambda class that acts as a wrapper:
class Lambda(nn.Module):
    """
    Input: A Function
    Returns : A Module that can be used
        inside nn.Sequential
    """
    def __init__(self, func):
        super().__init__()
        self.func = func

    def forward(self, x):
        return self.func(x)
TA-DA!
model = nn.Sequential(
    nn.Linear(10, 10),
    nn.ReLU(),
    nn.Linear(10, 6),
    nn.ReLU(),
    # sparsity
    Lambda(L1Penalty.apply),
    nn.Linear(6, 10),
    nn.ReLU(),
    nn.Linear(10, 10),
    nn.ReLU())

a = torch.rand(50, 10)
b = model(a)
print(b.shape)
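To confirm that the penalty actually shapes training, here is a hedged end-to-end sketch. The optimizer choice, learning rate, and MSE reconstruction loss are my own assumptions, not part of the answer; the definitions above are repeated so the sketch runs standalone:

```python
import torch
import torch.nn as nn

# Definitions repeated from the answer so this sketch is self-contained
class L1Penalty(torch.autograd.Function):

    @staticmethod
    def forward(ctx, input, l1weight=0.1):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = input.clone().sign().mul(ctx.l1weight)
        grad_input += grad_output
        return grad_input

class Lambda(nn.Module):
    def __init__(self, func):
        super().__init__()
        self.func = func

    def forward(self, x):
        return self.func(x)

model = nn.Sequential(
    nn.Linear(10, 6),
    nn.ReLU(),
    Lambda(L1Penalty.apply),  # sparsity pressure on the code layer
    nn.Linear(6, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.rand(50, 10)
out = model(x)
loss = nn.functional.mse_loss(out, x)  # plain reconstruction loss
opt.zero_grad()
loss.backward()  # the L1 term enters via L1Penalty.backward, not the loss value
opt.step()
print(model[0].weight.grad is not None)  # True
```

Note that the loss value itself never includes the L1 term; the penalty is injected directly into the gradients during backward, which is why no extra term is added to the loss in the question's original formulation.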
For the question "python - How to use a custom torch.autograd.Function in an nn.Sequential model", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/61117361/