I'm new to Keras and am building a model. I want to freeze the weights of the last few layers of the model while training the earlier layers. I tried setting the sub-model's trainable attribute to False, but it doesn't seem to work. Here is the code and the model summary:
opt = optimizers.Adam(1e-3)
domain_layers = self._build_domain_regressor()
domain_layers.trainble = False
feature_extrator = self._build_common()
img_inputs = Input(shape=(160, 160, 3))
conv_out = feature_extrator(img_inputs)
domain_label = domain_layers(conv_out)
self.domain_regressor = Model(img_inputs, domain_label)
self.domain_regressor.compile(optimizer = opt, loss='binary_crossentropy', metrics=['accuracy'])
self.domain_regressor.summary()
Model summary: [model summary image]
As you can see, model_1 is listed as trainable. But according to the code, it was set to non-trainable.
Best Answer
You can simply assign a boolean value to a layer's trainable attribute:
model.layers[n].trainable = False
You can check which layers are trainable:
for l in model.layers:
print(l.name, l.trainable)
You can also pass it in the layer constructor:
frozen_layer = Dense(32, trainable=False)
From the Keras documentation:
To "freeze" a layer means to exclude it from training, i.e. its weights will never be updated. This is useful in the context of fine-tuning a model, or using fixed embeddings for a text input.
You can pass a trainable argument (boolean) to a layer constructor to set a layer to be non-trainable. Additionally, you can set the trainable property of a layer to True or False after instantiation. For this to take effect, you will need to call compile() on your model after modifying the trainable property.
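Putting the pieces together, here is a minimal, self-contained sketch (with hypothetical layer sizes) of freezing a sub-model before compiling the outer model, mirroring the structure of the question's code. Note that the question's line `domain_layers.trainble = False` misspells the attribute name, so Python silently creates a new attribute called `trainble` and the model remains trainable; the spelling must be exactly `trainable`.

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

# A small sub-model standing in for domain_layers (sizes are illustrative).
sub_in = Input(shape=(8,))
sub_out = Dense(4)(sub_in)
sub_model = Model(sub_in, sub_out)

# Freeze the whole sub-model; the attribute must be spelled 'trainable'.
sub_model.trainable = False

# Wire the frozen sub-model into an outer model, then compile.
outer_in = Input(shape=(8,))
outer_out = Dense(1, activation='sigmoid')(sub_model(outer_in))
outer = Model(outer_in, outer_out)
outer.compile(optimizer='adam', loss='binary_crossentropy')

# Only the outer Dense layer's kernel and bias remain trainable;
# the sub-model's weights are excluded from training.
print(len(outer.trainable_weights))
print(len(outer.non_trainable_weights))
```

Checking `trainable_weights` against `non_trainable_weights` (or reading the "Trainable params" line in `summary()`) after `compile()` is a quick way to confirm the freeze actually took effect.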
Regarding "python - How to set parameters in Keras as non-trainable?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53503389/