python - Input to the Flatten layer must be a tensor

Tags: python tensorflow keras deep-learning keras-layer

I have the following Keras model, which runs fine:

model = Sequential()
model.add(Flatten(input_shape=(1,1,68)))
model.add(Dense(35,activation='linear'))
model.add(LeakyReLU(alpha=.001))
model.add(Dense(nb_actions))
model.add(Activation('linear'))

Then I tried something more elaborate, as follows:

model = Sequential()
input1 = keras.layers.Flatten(input_shape=(1,1,68))
x1 = keras.layers.Dense(68, activation='linear')(input1)
x2 = keras.layers.Dense(68, activation='relu')(input1)
x3 = keras.layers.Dense(68, activation='sigmoid')(input1)
add1 = keras.layers.Add()([x1, x2, x3])
activ1 = keras.layers.advanced_activations.LeakyReLU(add1)

x4 = keras.layers.Dense(34, activation='linear')(activ1)
x5 = keras.layers.Dense(34, activation='relu')(activ1)
x6 = keras.layers.Dense(34, activation='sigmoid')(activ1)
add2 = keras.layers.Add()([x4, x5, x6])
activ2 = keras.layers.advanced_activations.LeakyReLU(add2)

x7 = keras.layers.Dense(17, activation='linear')(activ2)
x8 = keras.layers.Dense(17, activation='relu')(activ2)
x9 = keras.layers.Dense(17, activation='sigmoid')(activ2)
add2 = keras.layers.Add()([x4, x5, x6])
activ3 = keras.layers.advanced_activations.LeakyReLU(add3)

final_layer=keras.layers.Dense(nb_actions, activation='linear')(activ3)
model = keras.models.Model(inputs=input1, outputs=final_layer)

As you can see in the code above, I keep the same input coming from the Flatten layer and simply add together layers that have the same number of neurons but different activations. My problem is that when I try to run this code, I always get the following error:

Using TensorFlow backend.
Traceback (most recent call last):
  File "/home/anselmo/virtualenvironment/virtualenvironment_anselmo2/lib/python3.5/site-packages/keras/engine/base_layer.py", line 279, in assert_input_compatibility
    K.is_keras_tensor(x)
  File "/home/anselmo/virtualenvironment/virtualenvironment_anselmo2/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 474, in is_keras_tensor
    str(type(x)) + '`. '
ValueError: Unexpectedly found an instance of type `<class 'keras.layers.core.Flatten'>`. Expected a symbolic tensor instance.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 64, in <module>
    x1 = keras.layers.Dense(68, activation='linear')(input1)
  File "/home/anselmo/virtualenvironment/virtualenvironment_anselmo2/lib/python3.5/site-packages/keras/engine/base_layer.py", line 414, in __call__
    self.assert_input_compatibility(inputs)
  File "/home/anselmo/virtualenvironment/virtualenvironment_anselmo2/lib/python3.5/site-packages/keras/engine/base_layer.py", line 285, in assert_input_compatibility
    str(inputs) + '. All inputs to the layer '
ValueError: Layer dense_1 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.core.Flatten'>. Full input: [<keras.layers.core.Flatten object at 0x7f0a145d6438>]. All inputs to the layer should be tensors.

This error does not occur when I run the first snippet. So why does changing the network design cause it? How can I fix it, and where is my mistake?

Best Answer

What you are building in the second snippet is a Keras functional model, not a Sequential one. keras.layers.Flatten(input_shape=(1,1,68)) only creates a layer object, not a tensor, which is exactly what the traceback reports when that object is passed to Dense. Drop the model = Sequential() line, start from an input tensor with input1 = Input(shape=(1, 1, 68)), and apply the Flatten layer to that tensor before feeding the result into the Dense branches.

More details can be found in the official documentation .
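For illustration, here is a minimal sketch of how the corrected functional version of the model might look. It assumes nb_actions is defined elsewhere in the original script (a placeholder value is used here); it also instantiates LeakyReLU before applying it and builds add3 from x7, x8 and x9, both of which the posted snippet would additionally need:

from keras.layers import Input, Flatten, Dense, Add, LeakyReLU
from keras.models import Model

nb_actions = 17  # placeholder: replace with the actual number of actions

# Start from a symbolic input tensor and apply the Flatten layer to it
input1 = Input(shape=(1, 1, 68))
flat = Flatten()(input1)

# First block: three parallel Dense branches with different activations, summed
x1 = Dense(68, activation='linear')(flat)
x2 = Dense(68, activation='relu')(flat)
x3 = Dense(68, activation='sigmoid')(flat)
add1 = Add()([x1, x2, x3])
activ1 = LeakyReLU(alpha=0.001)(add1)

# Second block
x4 = Dense(34, activation='linear')(activ1)
x5 = Dense(34, activation='relu')(activ1)
x6 = Dense(34, activation='sigmoid')(activ1)
add2 = Add()([x4, x5, x6])
activ2 = LeakyReLU(alpha=0.001)(add2)

# Third block
x7 = Dense(17, activation='linear')(activ2)
x8 = Dense(17, activation='relu')(activ2)
x9 = Dense(17, activation='sigmoid')(activ2)
add3 = Add()([x7, x8, x9])
activ3 = LeakyReLU(alpha=0.001)(add3)

final_layer = Dense(nb_actions, activation='linear')(activ3)
model = Model(inputs=input1, outputs=final_layer)

Because the Flatten layer is applied to the Input tensor, every Dense branch now receives a symbolic tensor rather than a layer object, which is what the error message was asking for.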

Regarding python - Input to the Flatten layer must be a tensor, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/56337100/
