machine-learning - Retrained inception_v3 model deployed in Cloud ML Engine always outputs the same predictions

Tags: machine-learning tensorflow computer-vision google-cloud-platform google-cloud-ml

I followed the TensorFlow For Poets codelab to do transfer learning with inception_v3. It produces a retrained_graph.pb and a retrained_labels.txt file, which can be used to make predictions locally (by running label_image.py).
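For reference, a minimal local sanity check along the lines of label_image.py could look like the sketch below. The tensor names ('DecodeJpeg/contents:0' and 'final_result:0') and the test image path are assumptions based on the Poets codelab and should be adjusted to match your retrained graph.

import tensorflow as tf

graph_path = '../tf_files/retrained_graph.pb'
labels_path = '../tf_files/retrained_labels.txt'
image_path = 'test_image.jpg'  # hypothetical test image

# Load the frozen, retrained graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile(graph_path, 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

with open(labels_path) as f:
    labels = [line.strip() for line in f]

with tf.Session(graph=graph) as sess:
    jpeg_bytes = tf.gfile.GFile(image_path, 'rb').read()
    # Feed raw JPEG bytes into the graph's decode node and read the retrained head.
    scores = sess.run('final_result:0', feed_dict={'DecodeJpeg/contents:0': jpeg_bytes})
    for label, score in sorted(zip(labels, scores[0]), key=lambda p: p[1], reverse=True):
        print(label, score)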

I then wanted to deploy this model to Cloud ML Engine so that I could make online predictions. To do so, I had to export retrained_graph.pb to the SavedModel format. I managed to do it by following the instructions in this answer from Google's @rhaertel80 and this python file from the Flowers Cloud ML Engine Tutorial. Here is my code:

import tensorflow as tf
from tensorflow.contrib import layers

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils as saved_model_utils


export_dir = '../tf_files/saved7'
retrained_graph = '../tf_files/retrained_graph2.pb'
label_count = 5

def build_signature(inputs, outputs):
    signature_inputs = { key: saved_model_utils.build_tensor_info(tensor) for key, tensor in inputs.items() }
    signature_outputs = { key: saved_model_utils.build_tensor_info(tensor) for key, tensor in outputs.items() }

    signature_def = signature_def_utils.build_signature_def(
        signature_inputs,
        signature_outputs,
        signature_constants.PREDICT_METHOD_NAME
    )

    return signature_def

class GraphReferences(object):
  def __init__(self):
    self.examples = None
    self.train = None
    self.global_step = None
    self.metric_updates = []
    self.metric_values = []
    self.keys = None
    self.predictions = []
    self.input_jpeg = None

class Model(object):
    def __init__(self, label_count):
        self.label_count = label_count

    def build_image_str_tensor(self):
        image_str_tensor = tf.placeholder(tf.string, shape=[None])

        def decode_and_resize(image_str_tensor):
            return image_str_tensor

        image = tf.map_fn(
            decode_and_resize,
            image_str_tensor,
            back_prop=False,
            dtype=tf.string
        )

        return image_str_tensor

    def build_prediction_graph(self, g):
        tensors = GraphReferences()
        tensors.examples = tf.placeholder(tf.string, name='input', shape=(None,))
        tensors.input_jpeg = self.build_image_str_tensor()

        keys_placeholder = tf.placeholder(tf.string, shape=[None])
        inputs = {
            'key': keys_placeholder,
            'image_bytes': tensors.input_jpeg
        }

        keys = tf.identity(keys_placeholder)
        outputs = {
            'key': keys,
            'prediction': g.get_tensor_by_name('final_result:0')
        }

        return inputs, outputs

    def export(self, output_dir):
        with tf.Session(graph=tf.Graph()) as sess:
            with tf.gfile.GFile(retrained_graph, "rb") as f:
                graph_def = tf.GraphDef()
                graph_def.ParseFromString(f.read())
                tf.import_graph_def(graph_def, name="")

            g = tf.get_default_graph()
            inputs, outputs = self.build_prediction_graph(g)

            signature_def = build_signature(inputs=inputs, outputs=outputs)
            signature_def_map = {
                signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
            }

            builder = saved_model_builder.SavedModelBuilder(output_dir)
            builder.add_meta_graph_and_variables(
                sess,
                tags=[tag_constants.SERVING],
                signature_def_map=signature_def_map
            )
            builder.save()

model = Model(label_count)
model.export(export_dir)

This code produces a saved_model.pb file, which I then use to create the Cloud ML Engine model. I can get predictions from this model with gcloud ml-engine predict --model my_model_name --json-instances request.json, where the content of request.json is:

{ "key": "0", "image_bytes": { "b64": "jpeg_image_base64_encoded" } }

However, no matter which jpeg I encode in the request, I always get exactly the same wrong prediction:

(screenshot of the prediction output)

My guess is that the problem lies in the way the CloudML Prediction API passes the base64-encoded image bytes to the input tensor 'DecodeJpeg/contents:0' of inception_v3 (the build_image_str_tensor() method in the code above). Any clue on how I can solve this issue and have my locally retrained model serve correct predictions on Cloud ML Engine?

(Just to make it clear, the problem is not in retrained_graph.pb, since it makes correct predictions when I run it locally; nor is it in request.json, since the same request file worked without issues when following the Flowers Cloud ML Engine tutorial mentioned above.)

Best Answer

First, a general warning. The TensorFlow for Poets codelab was not written in a way that is very amenable to production serving (partly manifested by the workarounds you had to implement). You would normally export a prediction-specific graph that doesn't contain all of the extra training operations. So while we can try to hack something together that works, extra work may be needed to productionize this graph.

The approach in your code appears to be to import one graph, add some placeholders, and then export the result. This is generally fine. However, in the code shown in the question, you add input placeholders without actually connecting them to anything in the imported graph. You end up with a graph containing multiple disconnected subgraphs, something like (excuse the crude diagram):

image_str_tensor [input=image_bytes] -> <nothing>
keys_placeholder [input=key]  -> identity [output=key]
inception_subgraph -> final_graph [output=prediction]

By inception_subgraph I mean all of the ops that you are importing.

So image_bytes is effectively a no-op and is ignored; key gets passed through; and prediction contains the result of running inception_subgraph; since it isn't using the input you are passing, it returns the same result every time (though I admit I actually expected an error here).

To address this problem, we need to connect the placeholder you've created to the one that already exists in inception_subgraph, so as to produce a graph that looks more or less like this:

image_str_tensor [input=image_bytes] -> inception_subgraph -> final_graph [output=prediction]
keys_placeholder [input=key]  -> identity [output=key]   

Note that, per the requirements of the prediction service, image_str_tensor will be a batch of images, whereas the input of the inception graph is actually a single image. In the interest of simplicity, we're going to address this in a hacky way: we'll assume images are sent one at a time. If we ever send more than one image per request, we'll get an error. Also, batch prediction will never work.
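As an aside, if batching were needed, the usual approach (used by the Flowers tutorial, and presumably what the decode_and_resize stub in the question's build_image_str_tensor was meant to become) is to decode and resize each image with tf.map_fn and map the resulting batch into the graph's decoded-image tensor rather than into its JPEG-string input. A rough sketch, where the tensor name to map into and any normalization it expects are assumptions that must be verified against your particular graph:

import tensorflow as tf

# A batch of raw JPEG strings, one per instance in the request.
image_bytes = tf.placeholder(tf.string, shape=[None], name='input')

def decode_and_resize(jpeg_str):
    image = tf.image.decode_jpeg(jpeg_str, channels=3)
    # inception_v3 expects 299x299 inputs.
    return tf.image.resize_images(image, [299, 299])

images = tf.map_fn(decode_and_resize, image_bytes, dtype=tf.float32, back_prop=False)
# Then, when importing, map the batch into the decoded-image tensor, e.g.
# (the tensor name here is an assumption; inspect your graph first):
# tf.import_graph_def(graph_def, input_map={'ResizeBilinear:0': images}, name='')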

The main change you need is in the import statement, which connects the placeholder we've added to the existing input in the graph; you'll also see the code that changes the shape of the input.
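If you're not sure what the existing input is called in your particular retrained graph (e.g. DecodeJPGInput vs. DecodeJpeg/contents, depending on the retrain.py version that produced it), one way to check is to import the graph and list its Placeholder ops and the inputs of any DecodeJpeg op, along the lines of this sketch:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('../tf_files/retrained_graph2.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    tf.import_graph_def(graph_def, name='')
    for op in g.get_operations():
        if op.type == 'Placeholder':
            print('placeholder:', op.name, op.outputs[0].dtype, op.outputs[0].get_shape())
        if op.type == 'DecodeJpeg':
            print('decode op:', op.name, 'reads from:', [t.name for t in op.inputs])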

Putting it all together, we get something like this:

import tensorflow as tf
from tensorflow.contrib import layers

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils as saved_model_utils


export_dir = '../tf_files/saved7'
retrained_graph = '../tf_files/retrained_graph2.pb'
label_count = 5

class Model(object):
    def __init__(self, label_count):
        self.label_count = label_count

    def export(self, output_dir):
        with tf.Session(graph=tf.Graph()) as sess:
            # This will be our input that accepts a batch of inputs
            image_bytes = tf.placeholder(tf.string, name='input', shape=(None,))
            # Force it to be a single input; will raise an error if we send a batch.
            coerced = tf.squeeze(image_bytes)
            # When we import the graph, we'll connect `coerced` to `DecodeJPGInput:0`
            input_map = {'DecodeJPGInput:0': coerced}

            with tf.gfile.GFile(retrained_graph, "rb") as f:
                graph_def = tf.GraphDef()
                graph_def.ParseFromString(f.read())
                tf.import_graph_def(graph_def, input_map=input_map, name="")

            keys_placeholder = tf.placeholder(tf.string, shape=[None])

            inputs = {'image_bytes': image_bytes, 'key': keys_placeholder}

            keys = tf.identity(keys_placeholder)
            outputs = {
                'key': keys,
                'prediction': tf.get_default_graph().get_tensor_by_name('final_result:0')
            }

            tf.saved_model.simple_save(sess, output_dir, inputs, outputs)

model = Model(label_count)
model.export(export_dir)
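Before creating the Cloud ML Engine model, it may be worth sanity-checking the exported SavedModel locally by loading it back and feeding it a single JPEG, along the lines of the following sketch (the test image path is a placeholder):

import tensorflow as tf
from tensorflow.python.saved_model import signature_constants, tag_constants

export_dir = '../tf_files/saved7'

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(sess, [tag_constants.SERVING], export_dir)
    signature = meta_graph.signature_def[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

    with tf.gfile.GFile('test_image.jpg', 'rb') as f:  # hypothetical test image
        jpeg_bytes = f.read()

    # Resolve tensor names from the signature instead of hard-coding them.
    prediction = sess.run(
        signature.outputs['prediction'].name,
        feed_dict={
            signature.inputs['image_bytes'].name: [jpeg_bytes],
            signature.inputs['key'].name: ['0'],
        })
    print(prediction)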

This question and answer were originally posted on Stack Overflow: https://stackoverflow.com/questions/47558050/
