python - Unexpected failure when preparing tensor allocations: tensorflow/lite/kernels/reshape.cc:85 num_input_elements != num_output_elements (1200 != 0)

Tags: python android numpy tensorflow machine-learning

I have a TFLite model whose input signature is 'shape_signature': array([-1, 12000, 1]). I have tested it with random data of shape [1, 1200, 1], and the model runs without any errors.

The prediction shape is also (1, 1200, 1).

https://colab.research.google.com/gist/pjpratik/bd48804cc8d40239812079b5a249aac3/60367.ipynb#scrollTo=9qc4EpLTUw0v

Now I want to do the same thing on Android.

I tried the following on Android, but I get this error:

```kotlin
private fun applyModel() {
    // Input shape [1, 1200, 1]
    val inputFloatArray = Array(1) { Array(inputAudioData.size) { FloatArray(1) } }

    // Only one of these output buffers was declared per attempt:
    val outputFloatArray = inputFloatArray                                              // Attempt 1
    // val outputFloatArray = FloatArray(1200)                                          // Attempt 2
    // val outputFloatArray = FloatArray(1)                                             // Attempt 3
    // val outputFloatArray = Array(1) { Array(inputAudioData.size) { FloatArray(1) } } // Attempt 4

    Log.d("tflite", "Model input data: ${inputFloatArray.contentDeepToString()}")

    tflite!!.run(inputFloatArray, outputFloatArray)

    Log.d("tflite", "Model output data: ${outputFloatArray.contentDeepToString()}")
}
```
```
java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations: tensorflow/lite/kernels/reshape.cc:85 num_input_elements != num_output_elements (1200 != 0)
Node number 6 (RESHAPE) failed to prepare.
    at org.tensorflow.lite.NativeInterpreterWrapper.allocateTensors(Native Method)
    at org.tensorflow.lite.NativeInterpreterWrapper.allocateTensorsIfNeeded(NativeInterpreterWrapper.java:308)
    at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:248)
    at org.tensorflow.lite.InterpreterImpl.runForMultipleInputsOutputs(InterpreterImpl.java:101)
    at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:77)
    at org.tensorflow.lite.InterpreterImpl.run(InterpreterImpl.java:94)
```

Best Answer

This is really a debugging exercise: you need to check what input the model expects and what kind of data you are actually feeding it.
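As a starting point, the interpreter can report the shapes it actually expects. A minimal sketch (assuming `tflite` is an initialized `org.tensorflow.lite.Interpreter`; this snippet is illustrative, not from the original answer):

```kotlin
// Ask the model what it expects before allocating any buffers.
val inputShape = tflite!!.getInputTensor(0).shape()   // e.g. [1, 1200, 1] or [-1, 12000, 1]
val outputShape = tflite!!.getOutputTensor(0).shape()
Log.d("tflite", "input shape:  ${inputShape.contentToString()}")
Log.d("tflite", "output shape: ${outputShape.contentToString()}")

// An output buffer whose element count does not match outputShape is exactly
// what produces "num_input_elements != num_output_elements" at allocation time.
val output = Array(outputShape[0]) { Array(outputShape[1]) { FloatArray(outputShape[2]) } }
```

If the signature contains a dynamic dimension (-1), you may also need `tflite!!.resizeInput(0, intArrayOf(1, 1200, 1))` before running, so the RESHAPE node sees a concrete size.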

You can try chunking the data and processing it piece by piece. Below is how I applied this to the DTLN AEC model dtln_aec_128_1.tflite.

You can follow the implementation below and try it with your own data; the pretrained models are available here:

https://github.com/breizhn/DTLN-aec/tree/main/pretrained_models

```kotlin
private fun applyModel(data: ShortArray): ShortArray {
    val chunkSize = 257 // depends on your model; in your case it could be 1200
    val outputData = ShortArray(chunkSize)

    // Model input shape here is [1, 1, chunkSize]
    val inputArray = Array(1) { Array(1) { FloatArray(chunkSize) } }
    for (i in 0 until chunkSize)
        inputArray[0][0][i] = data[i].toFloat()

    val outputArray = Array(1) { Array(1) { FloatArray(chunkSize) } }

    tflite!!.run(inputArray, outputArray)

    Log.d("tflite output", "Model direct output ${outputArray[0][0].joinToString(" ")}")

    // Convert the float output back to 16-bit PCM samples
    for (i in 0 until chunkSize)
        outputData[i] = outputArray[0][0][i].toInt().toShort()

    return outputData
}
```
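The function above handles a single chunk. A sketch of driving it across a whole recording, zero-padding the final chunk so it still matches the model's fixed input size (the `processChunk` callback is a hypothetical stand-in for the interpreter call):

```kotlin
// Process a full ShortArray in fixed-size chunks, padding the last one with zeros.
fun processAll(data: ShortArray, chunkSize: Int, processChunk: (FloatArray) -> FloatArray): ShortArray {
    val out = ShortArray(data.size)
    var offset = 0
    while (offset < data.size) {
        val len = minOf(chunkSize, data.size - offset)
        // Zero-initialized buffer: the trailing (chunkSize - len) samples stay 0.
        val chunk = FloatArray(chunkSize)
        for (i in 0 until len) chunk[i] = data[offset + i].toFloat()
        val processed = processChunk(chunk)
        // Copy back only the samples that came from real input.
        for (i in 0 until len) out[offset + i] = processed[i].toInt().toShort()
        offset += len
    }
    return out
}
```

With the model in place you would pass a lambda that fills `inputArray`, calls `tflite!!.run(...)`, and returns `outputArray[0][0]`.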

To load the model, you can do this:

```kotlin
@Throws(IOException::class)
private fun loadModelFile(activity: Activity): MappedByteBuffer {
    val fileDescriptor: AssetFileDescriptor = activity.assets.openFd("dtln_aec_128_1.tflite")
    val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
    val fileChannel = inputStream.channel
    val startOffset = fileDescriptor.startOffset
    val declaredLength = fileDescriptor.declaredLength
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
}
```
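The returned buffer is then handed to the `Interpreter` constructor. A minimal sketch (thread count and call site are assumptions, not from the original answer):

```kotlin
// Create the interpreter once (e.g. in onCreate) and reuse it for every inference.
val options = Interpreter.Options().apply {
    setNumThreads(4) // optional: multi-threaded CPU inference
}
tflite = Interpreter(loadModelFile(this), options)
```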

In app/build.gradle:

```groovy
implementation 'org.tensorflow:tensorflow-lite:2.12.0'
```

Right below buildTypes {}:

```groovy
aaptOptions {
    noCompress "dtln_aec_128_1.tflite"
}
```
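Putting the two Gradle pieces together, the relevant part of app/build.gradle would look roughly like this (surrounding block contents elided; this layout is an assumption about your project):

```groovy
android {
    // ...
    buildTypes {
        // ...
    }
    // Keep the model uncompressed so it can be memory-mapped from assets.
    aaptOptions {
        noCompress "dtln_aec_128_1.tflite"
    }
}

dependencies {
    implementation 'org.tensorflow:tensorflow-lite:2.12.0'
}
```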

And copy the model file into the assets folder.

Regarding "python - Unexpected failure when preparing tensor allocations: tensorflow/lite/kernels/reshape.cc:85 num_input_elements != num_output_elements (1200 != 0)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/76052705/
