I want to add an L1 regularizer to the activation output of a ReLU. More generally, how do I add a regularizer only to a particular layer in the network?
Related material:

This similar post refers to adding L2 regularization, but it appears to apply the regularization penalty to all layers of the network.

nn.modules.loss.L1Loss() seems relevant, but I don't yet understand how to use it. The legacy module L1Penalty also seems relevant, but why has it been deprecated?
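If I understand the docs, L1Loss computes the (mean or summed) absolute error between input and target, so comparing activations against an all-zeros target would act as an L1 penalty. A rough sketch of my guess (the names here are just placeholders):

import torch

# Guess at usage: L1Loss against an all-zeros target reduces to the
# summed absolute value of the activations, i.e. an L1 penalty.
activations = torch.rand(4, 32)  # stand-in for a layer's ReLU output
penalty_fn = torch.nn.L1Loss(reduction='sum')
l1_penalty = penalty_fn(activations, torch.zeros_like(activations))
# same value as activations.abs().sum()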
Best Answer
Here is how you can do it:
- In your module's forward(), return both the final output and the outputs of the layers to which you want to apply L1 regularization.
- The loss variable will be the sum of the cross-entropy loss of the output w.r.t. the targets and the L1 penalties.
Example code:
import torch
from torch.autograd import Variable
from torch.nn import functional as F


class MLP(torch.nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.linear1 = torch.nn.Linear(128, 32)
        self.linear2 = torch.nn.Linear(32, 16)
        self.linear3 = torch.nn.Linear(16, 2)

    def forward(self, x):
        # Return the intermediate activations along with the final output
        # so the training loop can build regularization terms from them.
        layer1_out = F.relu(self.linear1(x))
        layer2_out = F.relu(self.linear2(layer1_out))
        out = self.linear3(layer2_out)
        return out, layer1_out, layer2_out


batchsize = 4
lambda1, lambda2 = 0.5, 0.01

model = MLP()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

# Usually the following code is looped over all batches,
# but let's just do a dummy batch for brevity.
# (Variable is a no-op wrapper since PyTorch 0.4; plain tensors work too.)
inputs = Variable(torch.rand(batchsize, 128))
targets = Variable(torch.ones(batchsize).long())

optimizer.zero_grad()
outputs, layer1_out, layer2_out = model(inputs)
cross_entropy_loss = F.cross_entropy(outputs, targets)

# Flatten each layer's parameters (weights and biases) into one vector and
# penalize them: L1 norm for linear1, L2 norm for linear2.
all_linear1_params = torch.cat([x.view(-1) for x in model.linear1.parameters()])
all_linear2_params = torch.cat([x.view(-1) for x in model.linear2.parameters()])
l1_regularization = lambda1 * torch.norm(all_linear1_params, 1)
l2_regularization = lambda2 * torch.norm(all_linear2_params, 2)

loss = cross_entropy_loss + l1_regularization + l2_regularization
loss.backward()
optimizer.step()
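Note that the penalties above are computed on the layer weights, even though the question asks about the ReLU activation outputs. To penalize the activations themselves, the layer1_out and layer2_out values returned by forward() can be used directly. A minimal sketch along the same lines (this loss expression is my own addition, not part of the original answer):

# Sketch: L1 penalty on the activations returned by forward(),
# instead of on the weights. Reuses the names from the example above.
l1_activation_penalty = lambda1 * (layer1_out.abs().sum() + layer2_out.abs().sum())
loss = cross_entropy_loss + l1_activation_penalty
loss.backward()
optimizer.step()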
Regarding "python - Pytorch: how to add L1 regularizer to activations?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44641976/