I am looking for a good way to override the backward operation of an nn.Module, for example:
class LayerWithCustomGrad(nn.Module):
    def __init__(self):
        super(LayerWithCustomGrad, self).__init__()
        self.weights = nn.Parameter(torch.randn(200))

    def forward(self, x):
        return x * self.weights

    def backward(self, grad_of_c):  # This gets called during loss.backward()
        # grad_of_c comes from the gradient of b*23
        grad_of_a = some_operation(grad_of_c)

        # perform extra computation
        # and more computation

        self.weights.grad = another_operation(grad_of_a, grad_of_c)
        return grad_of_a  # and the grad of parameter "a" will receive this
layer = LayerWithCustomGrad()
a = nn.Parameter(torch.randn(200),requires_grad=True)
b = layer(a)
c = b*23
Some of the projects I work on contain layers with non-differentiable functions, so I would love a way to connect two otherwise disconnected graphs and/or modify the gradients of a graph that already exists.
It would also be great if there is a way to do this in TensorFlow.
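For the narrower case of modifying gradients in a graph that already exists, PyTorch's Tensor.register_hook can replace the gradient flowing through an intermediate tensor during backward. A minimal sketch (the clamp is only a placeholder for whatever transformation is actually needed):

import torch

a = torch.randn(200, requires_grad=True)
b = a * 2

# The hook receives the gradient w.r.t. b during backward; its return value
# replaces that gradient before it continues flowing back towards a.
b.register_hook(lambda grad: grad.clamp(min=0))  # placeholder transformation

c = b * 23
c.sum().backward()
print(a.grad)  # reflects the modified gradient of b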
Best answer
The way PyTorch is built, you should first implement a custom torch.autograd.Function that contains the forward and backward passes for your layer. Then you can create an nn.Module that wraps this function with the necessary parameters.
On this tutorial page you can see ReLU being implemented that way. What I will show here is how to build a torch.autograd.Function and its nn.Module wrapper.
class F(torch.autograd.Function):
    """Both forward and backward are static methods."""

    @staticmethod
    def forward(ctx, input, weights):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input, weights)
        return input * weights

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the inputs: here input and weights.
        """
        input, weights = ctx.saved_tensors
        grad_input = weights.clone() * grad_output
        grad_weights = input.clone() * grad_output
        return grad_input, grad_weights
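It is worth checking that the hand-written backward is consistent with the forward. torch.autograd.gradcheck compares the analytic gradients from F.backward against finite differences; a quick sanity-check sketch (not part of the original answer; gradcheck expects double-precision inputs):

x = torch.randn(10, dtype=torch.double, requires_grad=True)
w = torch.randn(10, dtype=torch.double, requires_grad=True)
# Raises an error (or returns False) if F.backward disagrees with the
# numerically estimated gradients.
print(torch.autograd.gradcheck(F.apply, (x, w)))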
The nn.Module will initialize the parameters and call F, which handles the actual operator computation for the forward/backward passes.
class LayerWithCustomGrad(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.rand(10))
        self.fn = F.apply

    def forward(self, x):
        return self.fn(x, self.weights)
Now we can try running inference and backpropagation:
>>> layer = LayerWithCustomGrad()
>>> x = torch.randn(10, requires_grad=True)
>>> y = layer(x)
>>> y
tensor([ 0.2023,  0.7176,  0.3577, -1.3573,  1.5185,  0.0632,  0.1210,  0.1566,
         0.0709, -0.4324], grad_fn=<FBackward>)
Notice the <FBackward> as grad_fn: this is the backward function of F, bound to the inference we just made with x.
>>> y.mean().backward()
>>> x.grad # i.e. grad_input in F.backward
tensor([0.0141, 0.0852, 0.0450, 0.0922, 0.0400, 0.0988, 0.0762, 0.0227, 0.0569,
0.0309])
>>> layer.weights.grad # i.e. grad_weights in F.backward
tensor([-1.4584, -2.1187, 1.5991, 0.9764, 1.8956, -1.0993, -3.7835, -0.4926,
0.9477, -1.2219])
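The same torch.autograd.Function mechanism also covers the non-differentiable layers mentioned in the question: backward can return any surrogate gradient you choose, which effectively reconnects a graph broken by a non-differentiable op. A hedged sketch using a straight-through estimator around torch.round (the class name and the choice of surrogate are illustrative, not from the original answer):

class RoundSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # Non-differentiable forward.
        return torch.round(input)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pretend the forward was the identity,
        # so gradients flow through unchanged.
        return grad_output

x = torch.randn(10, requires_grad=True)
y = RoundSTE.apply(x) * 23
y.sum().backward()
print(x.grad)  # all 23s, as if round() had slope 1 everywhere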
Original question: "python - Is there a way to override the backward operation on nn.Module" on Stack Overflow: https://stackoverflow.com/questions/69500995/