python - How to convert a model in eager execution to a static graph and save it in a .pb file?

Tags: python, tensorflow, eager-execution

Suppose I have a model (tf.keras.Model):

import tensorflow as tf
from tensorflow.keras import layers


class ContextExtractor(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.model = self.__get_model()

    def call(self, x, training=False, **kwargs):
        features = self.model(x, training=training)
        return features

    def __get_model(self):
        return self.__get_small_conv()

    def __get_small_conv(self):
        model = tf.keras.Sequential()
        model.add(layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(32, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(64, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))

        model.add(layers.Conv2D(256, (3, 3), strides=(2, 2), padding='same'))
        model.add(layers.LeakyReLU(alpha=0.2))


        model.add(layers.GlobalAveragePooling2D())

        return model

I train it and save it with:

checkpoint = tf.train.Checkpoint(
    model=self.model,
    global_step=tf.train.get_or_create_global_step())
checkpoint.save(weights_path / f'epoch_{epoch}')

This means I end up with two saved files: epoch_10-2.index and epoch_10-2.data-00000-of-00001.

Now I want to deploy my model, i.e. I want to get a .pb file. How can I obtain it? I suppose I need to open the model in graph mode, load the weights, and save the result to a .pb file. How do I actually do that?

Best Answer

You should get the session:

tf.keras.backend.get_session()

Then freeze the model, for example as done here: https://www.dlology.com/blog/how-to-convert-trained-keras-model-to-tensorflow-and-make-prediction/

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph


# K is tf.keras.backend, i.e. K.get_session() is the session obtained above;
# `model` is the trained Keras model instance
frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])

Then save the frozen graph as a .pb file (also shown in the link):

tf.train.write_graph(frozen_graph, "model", "tf_model.pb", as_text=False)
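
For completeness, here is a minimal sketch of how the resulting tf_model.pb could be loaded back for inference. The tensor names and the input shape below are assumptions; inspect graph.get_operations() on your own graph to find the real ones.

import numpy as np
import tensorflow as tf

# Read the frozen GraphDef written by tf.train.write_graph above.
with tf.gfile.GFile("model/tf_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Hypothetical tensor names -- check graph.get_operations() for yours.
input_name = "input_1:0"
output_name = "global_average_pooling2d/Mean:0"

with tf.Session(graph=graph) as sess:
    batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # dummy input batch
    features = sess.run(graph.get_tensor_by_name(output_name),
                        feed_dict={graph.get_tensor_by_name(input_name): batch})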

If this is too cumbersome, try saving the Keras model as an .h5 (HDF5) file and then follow the instructions in the linked article.
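
A rough sketch of that route, still in eager mode. Note that a subclassed model such as ContextExtractor cannot be serialized to a single HDF5 file directly, so this sketch only saves the weights of the inner Sequential sub-model (the file name is made up); the architecture has to be rebuilt in code before loading them again.

import tensorflow as tf

tf.enable_eager_execution()

context_extractor = ContextExtractor()
# Run one dummy batch so that all variables are created before saving
# (the 224x224x3 input shape is an assumption).
context_extractor(tf.zeros((1, 224, 224, 3)), training=False)

# Save only the weights of the inner Sequential model to HDF5.
context_extractor.model.save_weights("context_extractor_weights.h5")

# In a graph-mode session you can then rebuild the same model, call it once
# on a placeholder, call load_weights("context_extractor_weights.h5"), and
# freeze the session as shown above.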

From the TensorFlow documentation:

Write compatible code: The same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled.
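
Putting this together for the checkpoint from the question, the export could look roughly like the following. This is only a sketch, assuming TF 1.x, a 224x224x3 input, and the same weights_path directory used during training; in graph mode the checkpoint restore ops have to be run explicitly in the session.

import tensorflow as tf

# Plain graph mode: do NOT call tf.enable_eager_execution() in this session.
tf.reset_default_graph()

model = ContextExtractor()
x = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))  # assumed input shape
features = model(x, training=False)  # running the same code now builds a graph

sess = tf.keras.backend.get_session()

# Restore the eagerly trained checkpoint into the graph-mode variables.
checkpoint = tf.train.Checkpoint(
    model=model,
    global_step=tf.train.get_or_create_global_step())
status = checkpoint.restore(tf.train.latest_checkpoint(str(weights_path)))
status.initialize_or_restore(sess)

# Freeze and export, reusing freeze_session() from above.
frozen_graph = freeze_session(sess, output_names=[features.op.name])
tf.train.write_graph(frozen_graph, "model", "tf_model.pb", as_text=False)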

Also from the same page:

To save and load models, tf.train.Checkpoint stores the internal state of objects, without requiring hidden variables. To record the state of a model, an optimizer, and a global step, pass them to a tf.train.Checkpoint:

checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
                           model=model,
                           optimizer_step=tf.train.get_or_create_global_step())

root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))

I also recommend the last section of this page: https://www.tensorflow.org/guide/eager

Hope this helps.

The original question, "How to convert a model in eager execution to a static graph and save it in a .pb file?", is from Stack Overflow: https://stackoverflow.com/questions/55529755/
