I am testing my exported model with gcloud local prediction. The model is a TensorFlow object-detection model trained on a custom dataset. I am using the following gcloud command:
gcloud ml-engine local predict --model-dir=/path/to/saved_model/ --json-instances=input.json --signature-name="serving_default" --verbosity debug
Without the verbosity flag, the command produces no output at all. With verbosity set to debug, I get the following traceback:
DEBUG: [Errno 32] Broken pipe
Traceback (most recent call last):
  File "/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 984, in Execute
    resources = calliope_command.Run(cli=self, args=args)
  File "/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 784, in Run
    resources = command_instance.Run(args)
  File "/google-cloud-sdk/lib/surface/ai_platform/local/predict.py", line 83, in Run
    signature_name=args.signature_name)
  File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine/local_utils.py", line 103, in RunPredict
    proc.stdin.write((json.dumps(instance) + '\n').encode('utf-8'))
IOError: [Errno 32] Broken pipe
The details of my exported model:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: encoded_image_string_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 300, 4)
        name: detection_boxes:0
    outputs['detection_classes'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 300)
        name: detection_classes:0
    outputs['detection_features'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, -1, -1, -1)
        name: detection_features:0
    outputs['detection_multiclass_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 300, 2)
        name: detection_multiclass_scores:0
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 300)
        name: detection_scores:0
    outputs['num_detections'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: num_detections:0
    outputs['raw_detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 300, 4)
        name: raw_detection_boxes:0
    outputs['raw_detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 300, 2)
        name: raw_detection_scores:0
  Method name is: tensorflow/serving/predict
I generate the input.json used for prediction with the following code:
import base64
import io
import json

from PIL import Image

with open('input.json', 'wb') as f:
    img = Image.open("image.jpg")
    # width and height are the model's expected input size (defined elsewhere)
    img = img.resize((width, height), Image.ANTIALIAS)
    output_str = io.BytesIO()
    img.save(output_str, "JPEG")  # re-encode the resized image as JPEG bytes
    image_byte_array = output_str.getvalue()
    image_base64 = base64.b64encode(image_byte_array)
    json_entry = {"b64": image_base64.decode()}
    request = json.dumps({'inputs': json_entry})
    f.write(request.encode('utf-8'))
    # no explicit f.close() needed: the with-block closes the file
{"inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/......}}
I am testing prediction with a single image.
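For reference, the {"inputs": {"b64": ...}} wrapper above is the JSON envelope AI Platform uses for binary inputs: the base64-encoded bytes sit under a "b64" key. A minimal stdlib-only round-trip check of that envelope, using dummy bytes in place of a real JPEG:

```python
import base64
import json

# Dummy bytes standing in for real JPEG data (starts with the JPEG SOI marker).
image_bytes = b"\xff\xd8\xff\xe0" + b"not-a-real-jpeg"

# Build the same envelope the snippet above writes to input.json.
request = json.dumps({"inputs": {"b64": base64.b64encode(image_bytes).decode()}})

# Parse it back and confirm the bytes survive the round trip.
decoded = base64.b64decode(json.loads(request)["inputs"]["b64"])
assert decoded == image_bytes
```

If the file you feed to --json-instances does not decode cleanly like this, local predict can fail before the model is ever invoked.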
Best Answer
I ran into the same problem and found that ml_engine/local_utils.py uses python to run ml_engine/local_predict.pyc, which is built for Python 2.7. My python is Python 3, so when ml_engine/local_utils.py tries to run ml_engine/local_predict.pyc with python (actually Python 3), it fails with this error:

RuntimeError: Bad magic number in .pyc file
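Background: every .pyc file starts with a 4-byte "magic number" identifying the bytecode format of the interpreter that compiled it; running a .pyc under an interpreter with a different magic fails exactly this way. A small stdlib sketch of that check (pyc_matches_interpreter is a hypothetical helper, not part of the SDK):

```python
import importlib.util
import os
import py_compile
import tempfile

def pyc_matches_interpreter(path):
    """True if the .pyc at `path` was compiled for this interpreter."""
    with open(path, "rb") as f:
        return f.read(4) == importlib.util.MAGIC_NUMBER

# Demo: a freshly compiled .pyc always matches the compiling interpreter.
src = os.path.join(tempfile.mkdtemp(), "m.py")
with open(src, "w") as f:
    f.write("x = 1\n")
pyc_path = py_compile.compile(src)
print(pyc_matches_interpreter(pyc_path))  # True
```

Running this check against the SDK's local_predict.pyc under your python would reveal the same mismatch the traceback hides.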
Solution 1:
Make python2 the default python on your system.

Solution 2:
I changed ml_engine/local_utils.py with a patch like this:

83c83
< python_executables = files.SearchForExecutableOnPath("python")
---
> python_executables = files.SearchForExecutableOnPath("python2")
114a115
> log.debug(args)
124,126c125,130
<   for instance in instances:
<     proc.stdin.write((json.dumps(instance) + "\n").encode("utf-8"))
<     proc.stdin.flush()
---
>   try:
>     for instance in instances:
>       proc.stdin.write((json.dumps(instance) + "\n").encode("utf-8"))
>       proc.stdin.flush()
>   except:
>     pass
The try/except is needed so that the script can still read and print the errors that occur while running ml_engine/local_predict.pyc.
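The effect of that try/except can be sketched with a stand-alone repro (assumption: the inline child command below is a stand-in for "python local_predict.pyc", not the real script). When the child dies before reading stdin, the parent's write raises BrokenPipeError; swallowing it lets the parent go on to read the child's stderr, which holds the actual failure:

```python
import json
import subprocess
import sys

# Stand-in for the prediction subprocess: a child that crashes
# before ever reading stdin (e.g. due to a bad .pyc magic number).
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stderr.write('RuntimeError: Bad magic number in .pyc file\\n'); sys.exit(1)"],
    stdin=subprocess.PIPE, stderr=subprocess.PIPE)
child.wait()  # ensure the read end of the stdin pipe is closed

instances = [{"inputs": {"b64": "..."}}]
try:
    for instance in instances:
        child.stdin.write((json.dumps(instance) + "\n").encode("utf-8"))
        child.stdin.flush()  # raises BrokenPipeError: the child is gone
except BrokenPipeError:
    pass  # swallow the pipe error so the real failure stays visible
try:
    child.stdin.close()  # best-effort close; may also hit EPIPE
except OSError:
    pass

err = child.stderr.read().decode()
print(err)  # RuntimeError: Bad magic number in .pyc file
```

Without the except, the loop dies with the same uninformative IOError: [Errno 32] Broken pipe as in the question, and the child's stderr is never shown.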
Regarding "tensorflow - gcloud problems with local prediction", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58581540/