tensorflow - Failed to run tflite model on the interpreter due to internal error

Tags: tensorflow nlp tensorflow2.0 tensorflow-lite

I am trying to build an offline translator for Android. My model is heavily inspired by this guide: https://www.tensorflow.org/tutorials/text/nmt_with_attention. I only made a few modifications to make sure the model is serializable. (You can find the code for the model at the end.)
The model works perfectly in my Jupyter notebook. I am using TensorFlow version 2.3.0-dev20200617, and I was also able to generate the tflite file with the following snippet:

converter = tf.lite.TFLiteConverter.from_keras_model(partial_model)
tflite_model = converter.convert()

with tf.io.gfile.GFile('goog_nmt_v2.tflite', 'wb') as f:
  f.write(tflite_model)
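
Note that the snippet above does not show any converter flags. Since the Gradle configuration later in the question pulls in tensorflow-lite-select-tf-ops, the conversion presumably also enables the Select TF ops fallback; a minimal sketch of that assumption (not shown in the original question):

import tensorflow as tf

# Hypothetical converter setup (not shown in the question): allow the converter
# to fall back to full TensorFlow ops, matching the select-tf-ops AAR used below.
converter = tf.lite.TFLiteConverter.from_keras_model(partial_model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # Select TF ops fallback
]
tflite_model = converter.convert()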
However, when I use the generated tflite model to make predictions on Android, it throws the error java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: tensorflow/lite/kernels/concatenation.cc:73 t->dims->data[d] != t0->dims->data[d] (8 != 1) Node number 84 (CONCATENATION) failed to prepare. This is strange because I am providing exactly the same input dimensions as I did in the Jupyter notebook. Here is the Java code used to test (with dummy inputs) whether the model runs on Android:
HashMap<Integer, Object> outputVal = new HashMap<>();
for (int i = 0; i < 2; i++) outputVal.put(i, new float[1][5]);
float[][] inp_test = new float[1][8];
float[][] enc_hidden = new float[1][1024];
float[][] dec_input = new float[1][1];
float[][] dec_test = new float[1][8];

tfLite.runForMultipleInputsOutputs(new Object[] {inp_test, enc_hidden, dec_input, dec_test}, outputVal);
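
Before digging further, it can help to confirm that these dummy shapes match what actually ended up in the .tflite file. A minimal sketch (using the file name from the converter snippet above) that prints the expected input and output shapes:

import tensorflow as tf

# Print the input/output shapes stored in the converted model so they can be
# compared against the float[][] arrays allocated on the Android side.
interpreter = tf.lite.Interpreter(model_path='goog_nmt_v2.tflite')
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    print('input ', detail['name'], detail['shape'])
for detail in interpreter.get_output_details():
    print('output', detail['name'], detail['shape'])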
Here are my Gradle dependencies:
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])

    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly'
    // This dependency adds the necessary TF op support.
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
}
As the error points out, something is wrong with the dimensions at node 84. So I went ahead and visualized the tflite file with Netron. I zoomed in on the concatenation node; you can find a picture of the node along with its input and output dimensions here. You can find the whole generated graph here.
It turns out that the concatenation node at position 84 isn't actually concatenating, as you can see from its input and output dimensions. It just spits out a 1x1x1 matrix after processing a 1x1x1 and a 1x1x256 matrix. I know the tflite graph is not the same as the original model graph, since many operations are replaced or even removed for optimization, but this seems a bit odd.
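
For reference, the tensor shapes Netron displays can also be listed programmatically. A small sketch, assuming the same goog_nmt_v2.tflite file as above:

import tensorflow as tf

# Dump every tensor in the converted graph with its shape; the inputs and
# output of the CONCATENATION node at position 84 should appear here too.
interpreter = tf.lite.Interpreter(model_path='goog_nmt_v2.tflite')
interpreter.allocate_tensors()
for t in interpreter.get_tensor_details():
    print(t['index'], t['name'], t['shape'])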
I can't relate this to the error. If it runs perfectly in Jupyter, is this a framework issue or am I missing something? Also, can anyone explain what the error t->dims->data[d] != t0->dims->data[d] means? What is d?
If you have an answer to either question, please write it down. If you need any extra details, please let me know.
Here is the code for the model:

import tensorflow as tf

Tx = 8
def Partial_model():
    outputs = []
    X = tf.keras.layers.Input(shape=(Tx,))
    partial = tf.keras.layers.Input(shape=(Tx,))
    enc_hidden = tf.keras.layers.Input(shape=(units,))
    dec_input = tf.keras.layers.Input(shape=(1,))
    
    d_i = dec_input
    e_h = enc_hidden
    X_i = X
    
    enc_output, e_h = encoder(X, enc_hidden)
    
    
    dec_hidden = enc_hidden
    print(dec_input.shape, 'inp', dec_hidden.shape, 'dec_hidd')
    for t in range(1, Tx):
        print(t, 'tt')
      # passing enc_output to the decoder
        predictions, dec_hidden, _ = decoder(d_i, dec_hidden, enc_output)
#         outputs.append(predictions)
        print(predictions.shape, 'pred')
        d_i = tf.reshape(partial[:, t], (-1, 1))
        print(dec_input.shape, 'dec_input')
    
    predictions, dec_hidden, _ = decoder(d_i, dec_hidden, enc_output)
    d_i = tf.squeeze(d_i)
    
    outputs.append(tf.math.top_k(predictions, 5))
    
    return tf.keras.Model(inputs = [X, enc_hidden, dec_input, partial], outputs = [outputs[0][0], outputs[0][1]])




class Encoder():
  def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
    self.batch_sz = batch_sz
    self.enc_units = enc_units
    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
    self.gru = tf.keras.layers.GRU(self.enc_units,
                                   return_sequences=True,
                                   return_state=True,
                                   recurrent_initializer='glorot_uniform')

  def __call__(self, x, hidden):
    x = self.embedding(x)
    output, state = self.gru(x, initial_state = hidden)
    print(output.shape, hidden.shape, "out", "hid")
    return output, state


  def initialize_hidden_state(self):
    return tf.zeros((self.batch_sz, self.enc_units))



class BahdanauAttention():
  def __init__(self, units):
    self.W1 = tf.keras.layers.Dense(units)
    self.W2 = tf.keras.layers.Dense(units)
    self.V = tf.keras.layers.Dense(1)

  def __call__(self, query, values):
    # query hidden state shape == (batch_size, hidden size)
    # query_with_time_axis shape == (batch_size, 1, hidden size)
    # values shape == (batch_size, max_len, hidden size)
    # we are doing this to broadcast addition along the time axis to calculate the score
    print(query.shape, 'shape')
    query_with_time_axis = tf.expand_dims(query, 1)
    # score shape == (batch_size, max_length, 1)
    # we get 1 at the last axis because we are applying score to self.V
    # the shape of the tensor before applying self.V is (batch_size, max_length, units)
    print("2")
    score = self.V(tf.nn.tanh(
        self.W1(query_with_time_axis) + self.W2(values)))
    print("3")

    # attention_weights shape == (batch_size, max_length, 1)
    attention_weights = tf.nn.softmax(score, axis=1)

    # context_vector shape after sum == (batch_size, hidden_size)
    context_vector = attention_weights * values
    context_vector = tf.reduce_sum(context_vector, axis=1)
    
    return context_vector, attention_weights


class Decoder():
  def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
    self.dec_units = dec_units
    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
    self.gru = tf.keras.layers.GRU(self.dec_units,
                                   return_sequences=True,
                                   return_state=True,
                                   recurrent_initializer='glorot_uniform')
    self.fc = tf.keras.layers.Dense(vocab_size)

    # used for attention
    self.attention = BahdanauAttention(self.dec_units)

  def __call__(self, x, hidden, enc_output):
    # enc_output shape == (batch_size, max_length, hidden_size)
    context_vector, attention_weights = self.attention(hidden, enc_output)
    
    print(context_vector.shape, 'c_v', attention_weights.shape, "attention_w")

    # x shape after passing through embedding == (batch_size, 1, embedding_dim)
    x = self.embedding(x)

    # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
    print(x.shape, 'xshape', context_vector.shape, 'context')
    expanded_dims = tf.expand_dims(context_vector, 1)
    x = tf.concat([expanded_dims, x], axis=-1)

    # passing the concatenated vector to the GRU
    output, state = self.gru(x)

    # output shape == (batch_size * 1, hidden_size)
    output = tf.reshape(output, (-1, output.shape[2]))

    # output shape == (batch_size, vocab)
    x = self.fc(output)

    return x, state, attention_weights
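
For context, a hypothetical sketch of how the pieces above fit together before conversion. units = 1024 is inferred from the enc_hidden shape in the Java test; the vocabulary sizes, embedding dimension and batch size are placeholders, not values taken from the question:

# Hypothetical glue code: Partial_model() relies on the globals `encoder`,
# `decoder` and `units` being defined, e.g. roughly like this.
units = 1024                 # matches float[1][1024] enc_hidden in the Java test
embedding_dim = 256          # placeholder
vocab_inp_size = 10000       # placeholder
vocab_tar_size = 10000       # placeholder
BATCH_SIZE = 1               # placeholder

encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)

partial_model = Partial_model()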




Accepted answer

You can load the generated .tflite file in a Python notebook and pass it the same inputs you give the Keras model. You have to check the exact outputs, because there should be no loss of accuracy during model conversion. If there is a problem there... there will be problems during the Android run. If not... everything will work fine. Use the following code from the TensorFlow guide to run inference in Python:

import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
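
The comparison described above could look roughly like this for the four-input model from the question. This is a hedged sketch: the interpreter may order the inputs differently than the Keras model, so they may need to be matched by name or shape rather than position.

import numpy as np
import tensorflow as tf

# Hedged sketch: run the same random inputs through the TFLite interpreter and
# the original Keras model, then compare the outputs.
interpreter = tf.lite.Interpreter(model_path='goog_nmt_v2.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One dummy array per input, built from the shapes stored in the .tflite file.
dummy_inputs = [np.array(np.random.random_sample(d['shape']), dtype=np.float32)
                for d in input_details]
for detail, data in zip(input_details, dummy_inputs):
    interpreter.set_tensor(detail['index'], data)
interpreter.invoke()

tflite_outputs = [interpreter.get_tensor(d['index']) for d in output_details]

# The Keras model expects [X, enc_hidden, dec_input, partial]; reorder
# dummy_inputs here if the interpreter lists them in a different order.
keras_outputs = partial_model([tf.constant(x) for x in dummy_inputs])

for t_out, k_out in zip(tflite_outputs, keras_outputs):
    print(np.allclose(t_out, np.asarray(k_out), atol=1e-5))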
Happy coding!

Regarding "tensorflow - Failed to run tflite model on the interpreter due to internal error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62627157/
