c++ - TensorFlow Lite C++ API example for inference

Tags: c++ tensorflow tensorflow-lite inference

I am trying to get a TensorFlow Lite example to run on a machine with an ARM Cortex-A72 processor. Unfortunately, I was not able to deploy a test model due to the lack of examples on how to use the C++ API. I will try to explain what I have achieved so far.

Creating the tflite model

I created a simple linear regression model and converted it; it should approximate the function f(x) = 2x - 1. I got this code snippet from some tutorial, but I cannot find it anymore.

import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.contrib import lite

model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

xs = np.array([ -1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([ -3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model.fit(xs, ys, epochs=500)

print(model.predict([10.0]))

keras_file = 'linear.h5'
keras.models.save_model(model, keras_file)

converter = lite.TocoConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open('linear.tflite', 'wb').write(tflite_model)

This creates a binary file called linear.tflite, which I should then be able to load.

Compiling TensorFlow Lite for my machine

TensorFlow Lite comes with a script for compilation on machines with the aarch64 architecture. I followed the guide here to do this, even though I had to modify the Makefile slightly. Note that I compiled it natively on my target system. This created a static library called libtensorflow-lite.a.

The problem: Inference

I tried to follow the tutorial on the website here, and simply pasted together the code snippets for loading and running a model, e.g.:

class FlatBufferModel {
  // Build a model based on a file. Return a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromFile(
      const char* filename,
      ErrorReporter* error_reporter);

  // Build a model based on a pre-loaded flatbuffer. The caller retains
  // ownership of the buffer and should keep it alive until the returned object
  // is destroyed. Return a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
      const char* buffer,
      size_t buffer_size,
      ErrorReporter* error_reporter);
};

tflite::FlatBufferModel model("./linear.tflite");

tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);

// Resize input tensors, if desired.
interpreter->AllocateTensors();

float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.

interpreter->Invoke();

float* output = interpreter->typed_output_tensor<float>(0);

When trying to compile this with

g++ demo.cpp libtensorflow-lite.a

I get a whole bunch of errors. Log:

root@localhost:/inference# g++ demo.cpp libtensorflow-lite.a 
demo.cpp:3:15: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
   static std::unique_ptr<FlatBufferModel> BuildFromFile(
               ^~~~~~~~~~
demo.cpp:10:15: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
   static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
               ^~~~~~~~~~
demo.cpp:16:1: error: ‘tflite’ does not name a type
 tflite::FlatBufferModel model("./linear.tflite");
 ^~~~~~
demo.cpp:18:1: error: ‘tflite’ does not name a type
 tflite::ops::builtin::BuiltinOpResolver resolver;
 ^~~~~~
demo.cpp:19:6: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
 std::unique_ptr<tflite::Interpreter> interpreter;
      ^~~~~~~~~~
demo.cpp:20:1: error: ‘tflite’ does not name a type
 tflite::InterpreterBuilder(*model, resolver)(&interpreter);
 ^~~~~~
demo.cpp:23:1: error: ‘interpreter’ does not name a type
 interpreter->AllocateTensors();
 ^~~~~~~~~~~
demo.cpp:25:16: error: ‘interpreter’ was not declared in this scope
 float* input = interpreter->typed_input_tensor<float>(0);
                ^~~~~~~~~~~
demo.cpp:25:48: error: expected primary-expression before ‘float’
 float* input = interpreter->typed_input_tensor<float>(0);
                                                ^~~~~
demo.cpp:28:1: error: ‘interpreter’ does not name a type
 interpreter->Invoke();
 ^~~~~~~~~~~
demo.cpp:30:17: error: ‘interpreter’ was not declared in this scope
 float* output = interpreter->typed_output_tensor<float>(0);
                 ^~~~~~~~~~~
demo.cpp:30:50: error: expected primary-expression before ‘float’
 float* output = interpreter->typed_output_tensor<float>(0);

I am relatively new to C++, so I may be missing something obvious here. It seems, however, that other people have trouble with the C++ API as well (see this GitHub issue). Has anybody also stumbled across this and gotten it to run?

The most important aspects for me are:

1.) Where and how do I define the "signature", so that the model knows what to treat as inputs and outputs?

2.) Which headers do I have to include?

Thanks!

EDIT

Thanks to @Alex Cohn, the compiler is now able to find the correct headers. I also realized that I probably do not need to redefine the flatbuffers class, so I ended up with this code (the minor change is marked):

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"

auto model = tflite::FlatBufferModel::BuildFromFile("linear.tflite");   //CHANGED

tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);

// Resize input tensors, if desired.
interpreter->AllocateTensors();

float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.

interpreter->Invoke();

float* output = interpreter->typed_output_tensor<float>(0);

This reduces the number of errors drastically, but I am not sure how to resolve the rest:

root@localhost:/inference# g++ demo.cpp -I/tensorflow
demo.cpp:10:34: error: expected ‘)’ before ‘,’ token
 tflite::InterpreterBuilder(*model, resolver)(&interpreter);
                                  ^
demo.cpp:10:44: error: expected initializer before ‘)’ token
 tflite::InterpreterBuilder(*model, resolver)(&interpreter);
                                            ^
demo.cpp:13:1: error: ‘interpreter’ does not name a type
 interpreter->AllocateTensors();
 ^~~~~~~~~~~
demo.cpp:18:1: error: ‘interpreter’ does not name a type
 interpreter->Invoke();
 ^~~~~~~~~~~

How do I resolve these? It seems that I have to define my own resolver, but I have no clue how to do that.

Best Answer

I finally got it to run. Considering that my directory structure looks like this:

/(root)
    /tensorflow
        # whole tf repo
    /demo
        demo.cpp
        linear.tflite
        libtensorflow-lite.a

I moved the code into a main() function (plain statements such as interpreter->AllocateTensors(); cannot appear at file scope, which is what caused the remaining errors) and changed demo.cpp to:

#include <stdio.h>
#include <stdlib.h>   // for exit()
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"

int main(){

    std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("linear.tflite");

    if(!model){
        printf("Failed to mmap model\n");
        exit(0);
    }

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model.get(), resolver)(&interpreter);

    // Resize input tensors, if desired.
    interpreter->AllocateTensors();

    float* input = interpreter->typed_input_tensor<float>(0);
    // Dummy input for testing
    *input = 2.0;

    interpreter->Invoke();

    float* output = interpreter->typed_output_tensor<float>(0);

    printf("Result is: %f\n", *output);

    return 0;
}

In addition, I had to adapt my compile command (I had to install flatbuffers manually to make it work). What worked for me was:

g++ demo.cpp -I/tensorflow -L/demo -ltensorflow-lite -lrt -ldl -pthread -lflatbuffers -o demo
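Regarding question 1 from above: the input/output "signature" is fixed when the model is converted to .tflite; at inference time it can only be read back from the interpreter. The following is a minimal, untested sketch, assuming only the standard tflite::Interpreter accessors inputs(), outputs() and tensor(); called after AllocateTensors(), it prints the name, type and shape of each input and output tensor:

// Sketch (an assumption, not part of the verified demo above):
// dump the metadata of all input and output tensors.
#include <cstdio>
#include "tensorflow/lite/interpreter.h"

void PrintTensorInfo(tflite::Interpreter* interpreter) {
    auto dump = [&](const char* kind, int index) {
        const TfLiteTensor* t = interpreter->tensor(index);
        printf("%s tensor %d: name=%s type=%d dims=[", kind, index,
               t->name ? t->name : "?", static_cast<int>(t->type));
        for (int d = 0; d < t->dims->size; ++d)
            printf("%s%d", d ? "," : "", t->dims->data[d]);
        printf("]\n");
    };
    for (int i : interpreter->inputs())  dump("input", i);
    for (int i : interpreter->outputs()) dump("output", i);
}

For this linear model it should report a single input and a single output, each of shape [1, 1] (the type is printed as the raw TfLiteType enum value).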

Thanks to @AlexCohn for getting me on the right track!

Original question on Stack Overflow: https://stackoverflow.com/questions/56837288/
