python - Transformer model cannot be saved

Tags: python tensorflow keras neural-network transformer-model

I am trying to follow this tutorial: https://colab.research.google.com/github/tensorflow/examples/blob/master/community/en/transformer_chatbot.ipynb, but when I try to save the model so I can load it again later without retraining, I get the error mentioned here: NotImplementedError: Layers with arguments in `__init__` must override `get_config`. From the answers there I understood that I need to turn the encoder and decoder into classes and customize them (instead of leaving them as functions like in the Colab tutorial), so I went back to the TensorFlow documentation for this model, https://www.tensorflow.org/tutorials/text/transformer#encoder_layer, and tried to edit it there. I made the encoder layer:

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads,  rate=0.1,**kwargs,):
    #super(EncoderLayer, self).__init__()
    super().__init__(**kwargs)
    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
  def get_config(self):

        config = super().get_config().copy()
        config.update({
            #'vocab_size': self.vocab_size,
            #'num_layers': self.num_layers,
            #'units': self.units,
            'd_model': self.d_model,
            'num_heads': self.num_heads,
            'dropout': self.dropout,
        })
        return config

  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)

    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)

    return out2

The same applies to the decoder layer class. Then the Encoder itself, taken from the same TF documentation:

class Encoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers

    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, 
                                            self.d_model)


    self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]

    self.dropout = tf.keras.layers.Dropout(rate)

  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]

    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)

    return x  # (batch_size, input_seq_len, d_model)

The model is built by this function:

def transformer(vocab_size,
                num_layers,
                units,
                d_model,
                num_heads,
                dropout,
                name="transformer"):
  inputs = tf.keras.Input(shape=(None,), name="inputs")
  dec_inputs = tf.keras.Input(shape=(None,), name="dec_inputs")

  enc_padding_mask = tf.keras.layers.Lambda(
      create_padding_mask, output_shape=(1, 1, None),
      name='enc_padding_mask')(inputs)
  # mask the future tokens for decoder inputs at the 1st attention block
  look_ahead_mask = tf.keras.layers.Lambda(
      create_look_ahead_mask,
      output_shape=(1, None, None),
      name='look_ahead_mask')(dec_inputs)
  # mask the encoder outputs for the 2nd attention block
  dec_padding_mask = tf.keras.layers.Lambda(
      create_padding_mask, output_shape=(1, 1, None),
      name='dec_padding_mask')(inputs)

  enc_outputs = Encoder(
      num_layers=num_layers, d_model=d_model, num_heads=num_heads, 
                         input_vocab_size=vocab_size,


  )(inputs=[inputs, enc_padding_mask])

  dec_outputs = Decoder(
      num_layers=num_layers, d_model=d_model, num_heads=num_heads, 
                          target_vocab_size=vocab_size,


  )(inputs=[dec_inputs, enc_outputs, look_ahead_mask, dec_padding_mask])

  outputs = tf.keras.layers.Dense(units=vocab_size, name="outputs")(dec_outputs)

  return tf.keras.Model(inputs=[inputs, dec_inputs], outputs=outputs, name=name)

And the model is created with:

# the model itself with its parameters:
# Hyper-parameters
NUM_LAYERS = 3
D_MODEL = 256
#D_MODEL=tf.cast(D_MODEL, tf.float32)

NUM_HEADS = 8
UNITS = 512
DROPOUT = 0.1
model = transformer(
    vocab_size=VOCAB_SIZE,
    num_layers=NUM_LAYERS,
    units=UNITS,
    d_model=D_MODEL,
    num_heads=NUM_HEADS,
    dropout=DROPOUT)

However, I get this error: TypeError: __init__() missing 2 required positional arguments: 'dff' and 'maximum_position_encoding'. I am really confused: I do not understand what dff and maximum position encoding mean in the documentation, and when I remove them from the Encoder and Decoder classes I get another error, because the positional_encoding function takes the maximum position as input and dff is passed inside the class. I am not sure what I should do, because I am not sure I am following the right steps.

Best Answer

If you are getting this error when you call transformer, then your problem is with creating the model, not with saving it.
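For example, here is a minimal sketch of how the Encoder could be constructed inside transformer() with all of its required arguments. The choices dff=units and maximum_position_encoding=vocab_size are only illustrative placeholders, not recommendations; use whatever values fit your data:

encoder = Encoder(
    num_layers=num_layers,
    d_model=d_model,
    num_heads=num_heads,
    dff=units,                             # width of the point-wise feed-forward network
    input_vocab_size=vocab_size,
    maximum_position_encoding=vocab_size,  # any bound >= your longest input sequence
    rate=dropout)
# The call signature follows Encoder.call(x, training, mask) from the tutorial:
enc_outputs = encoder(inputs, True, enc_padding_mask)

The Decoder needs the same treatment, i.e. it must also receive dff, target_vocab_size and maximum_position_encoding when it is constructed.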

Apart from that, I see several problems with your get_config (a corrected sketch follows the list):

  1. You define dropout instead of rate.
  2. The attributes you access there (self.d_model, etc.) are never defined or assigned in __init__.
  3. The same attributes are missing from your Encoder class as well.
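
Concretely, here is a sketch of an EncoderLayer that stores its constructor arguments and returns them from get_config. The attribute names are one possible choice; the important part is that get_config mirrors __init__:

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1, **kwargs):
    super().__init__(**kwargs)
    # Keep every constructor argument so get_config can serialize it later.
    self.d_model = d_model
    self.num_heads = num_heads
    self.dff = dff
    self.rate = rate

    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)
    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)

  def get_config(self):
    config = super().get_config()
    config.update({
        'd_model': self.d_model,
        'num_heads': self.num_heads,
        'dff': self.dff,
        'rate': self.rate,  # key matches the __init__ argument name, not 'dropout'
    })
    return config

The Encoder (and Decoder) classes need the same pattern: assign num_layers, d_model, num_heads, dff, input_vocab_size, maximum_position_encoding and rate to self in __init__ and return exactly those keys from their own get_config.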

Regarding python - Transformer model cannot be saved, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59796343/
