I have a word embedding matrix containing a vector for each word. I am trying to use TensorFlow to get the bidirectional LSTM encoding of each word given its embedding vector. Unfortunately, I get the following error message:
ValueError: Shapes (1, 125) and () must have the same rank
Exception TypeError: TypeError("'NoneType' object is not callable",) in ignored
Here is the code I am using:
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

# Declare max number of words in a sentence
self.max_len = 100
# Declare number of dimensions for word embedding vectors
self.wdims = 100
# Indices of words in the sentence
self.wrd_holder = tf.placeholder(tf.int32, [self.max_len])
# Embedding Matrix
wrd_lookup = tf.Variable(tf.truncated_normal([len(vocab)+3, self.wdims], stddev=1.0 / np.sqrt(self.wdims)))
# Declare forward and backward cells
forward = rnn_cell.LSTMCell(125, (self.wdims))
backward = rnn_cell.LSTMCell(125, (self.wdims))
# Perform lookup
wrd_embd = tf.nn.embedding_lookup(wrd_lookup, self.wrd_holder)
embd = tf.split(0, self.max_len, wrd_embd)
# run bidirectional LSTM
boutput = rnn.bidirectional_rnn(forward, backward, embd, dtype=tf.float32, sequence_length=self.max_len)
Best Answer
The sequence length passed to the RNN must be a vector of length batch_size, not a scalar. Here `sequence_length=self.max_len` passes a plain Python int (rank 0), which is why TensorFlow complains that shapes (1, 125) and () must have the same rank.
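A minimal sketch of the fix, assuming a batch size of 1 (the placeholder in the question holds a single sentence; `batch_size` and `seq_len` are illustrative names, not from the original code): build the sequence length as a length-`batch_size` vector, e.g. with NumPy, and pass that vector instead of the bare int.

```python
import numpy as np

batch_size = 1   # assumption: one sentence per batch, matching the placeholder above
max_len = 100    # self.max_len from the question

# sequence_length must be a vector with one entry per batch element,
# not a bare Python int (a rank-0 value), which triggers the rank error.
seq_len = np.full([batch_size], max_len, dtype=np.int64)

print(seq_len.shape)  # (1,)
```

With this, the call would become `rnn.bidirectional_rnn(forward, backward, embd, dtype=tf.float32, sequence_length=seq_len)`.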
Regarding "python - TensorFlow bidirectional LSTM encoding of word embeddings", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36515648/