I am trying to learn how to implement a Seq2Seq model with RNNs in TensorFlow. I got as far as the final step, calling dynamic decode, and hit this error:
"ValueError: Input 0 of layer gru_cell_3 is incompatible with the layer: expected ndim=2, found ndim=1. Full shape received: [None]"
import tensorflow as tf

data_inputs = tf.placeholder(tf.float32, [None, 102, 300])
batch_lengths = tf.cast(tf.reduce_sum(tf.reduce_max(tf.sign(data_inputs), 2), 1), tf.int32)

encoder_cell_forward = tf.nn.rnn_cell.GRUCell(num_units=150)
encoder_cell_backward = tf.nn.rnn_cell.GRUCell(num_units=150)

_, state = tf.nn.bidirectional_dynamic_rnn(
    encoder_cell_forward, encoder_cell_backward,
    data_inputs, sequence_length=batch_lengths,
    dtype=tf.float32)

state = tf.concat(state, 1)

decoder_cell = tf.nn.rnn_cell.GRUCell(num_units=300)
helper = tf.contrib.seq2seq.TrainingHelper(state, batch_lengths)
projection_layer = tf.layers.Dense(units=300, activation=None, trainable=True)

decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, helper, state,
    output_layer=projection_layer)

final_outputs, final_state, final_sequence_lengths = tf.contrib.seq2seq.dynamic_decode(
    decoder, maximum_iterations=102, impute_finished=False)
What am I doing wrong here?
Best Answer
I figured it out. The problem is with the training helper:

tf.contrib.seq2seq.TrainingHelper(state, batch_lengths)

Its first argument must be the batch of sequences you want to decode (a 3-D [batch, time, dims] tensor); passing the 2-D encoder state instead raises the error.
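The shapes explain the error message: TrainingHelper unstacks its inputs along the time axis, so a [batch, time, dims] tensor yields a [batch, dims] slice per step (ndim=2, which is what GRUCell expects), while the concatenated 2-D encoder state yields a [batch]-shaped slice per step (ndim=1, full shape [None]). A minimal NumPy sketch of that slicing, with illustrative sizes:

```python
import numpy as np

batch, time, dims = 4, 102, 300

# Correct TrainingHelper input: a batch of decoder target sequences.
decoder_inputs = np.zeros((batch, time, dims), dtype=np.float32)
# Slicing out one time step gives a [batch, dims] array -- the 2-D
# input a GRUCell expects at each step.
step_input = decoder_inputs[:, 0, :]
print(step_input.ndim)  # 2

# The bug: the concatenated encoder state is only [batch, 2 * 150].
state = np.zeros((batch, 300), dtype=np.float32)
# Sliced along what the helper treats as the time axis, each step is a
# 1-D [batch] vector -- hence "expected ndim=2, found ndim=1".
bad_step_input = state[:, 0]
print(bad_step_input.ndim)  # 1
```

In the original graph the fix would be to feed TrainingHelper a separate 3-D placeholder for the decoder's input sequences (e.g. a hypothetical decoder_inputs = tf.placeholder(tf.float32, [None, 102, 300])), while keeping state as the initial_state argument of BasicDecoder.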
Regarding python - Seq2Seq in Tensorflow, but I get ValueError: Input 0 of layer gru_cell_3 is incompatible with the layer:, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49619628/