tensorflow - CuDNNLSTM: Failed to call ThenRnnForward

Tags: tensorflow, keras, google-cloud-platform, gpu, lstm

I ran into a problem when trying to use CuDNNLSTM instead of keras.layers.LSTM.
This is the error I get:

Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, seq_length, batch_size]: [1, 300, 512, 1, 5521, 128] [[{{node bidirectional_1/CudnnRNN_1}} = CudnnRNN[T=DT_FLOAT, _class=["loc:@train...NNBackprop"], direction="unidirectional", dropout=0, input_mode="linear_input", is_training=true, rnn_mode="lstm", seed=87654321, seed2=0, _device="/job:localhost/replica:0/task:0/device:GPU:0"](bidirectional_1/transpose_1, bidirectional_1/ExpandDims_1, bidirectional_1/ExpandDims_1, bidirectional_1/concat_1)]] [[{{node loss/mul/_75}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1209_loss/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]


Also, on one of the runs I got this error:

InternalError: GPU sync failed


The kernel dies after every run. I only started getting this error when I tried to run it with CuDNNLSTM on a VM instance on Google Cloud.
My code is:
from keras.layers import Input, Embedding, Bidirectional, LSTM, Dense
from keras.models import Model

MAX_LEN = max(len(article) for article in X_train_tokens)
EMBEDDING_DIM = 300
vocab_size = len(word_to_id)
classes = 2

# Text input
text_input = Input(shape=(MAX_LEN,))
embedding = Embedding(vocab_size, EMBEDDING_DIM, input_length=MAX_LEN)(text_input)
x = Bidirectional(LSTM(512, return_sequences=False))(embedding)
pred = Dense(2, activation='softmax')(x)
model = Model(inputs=[text_input], outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer='RMSprop', metrics=['accuracy'])

batch_size = 128
generator = text_training_generator(batch_size)
steps = len(X_train) // batch_size  # steps_per_epoch must be an integer

model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=10)
Model summary:
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 5521)              0         
_________________________________________________________________
embedding_1 (Embedding)      (None, 5521, 300)         8099100   
_________________________________________________________________
bidirectional_1 (Bidirection (None, 1024)              3330048   
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 2050      
=================================================================
Total params: 11,431,198
Trainable params: 11,431,198
Non-trainable params: 0
_________________________________________________________________
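As a sanity check, the parameter counts in this summary can be reproduced by hand (a quick sketch, assuming the standard Keras LSTM parameterization; the vocabulary size of 26,997 is inferred here from the embedding layer's parameter count, since it is not stated in the question):

```python
EMBEDDING_DIM = 300
LSTM_UNITS = 512
NUM_CLASSES = 2

# Vocabulary size inferred from the embedding parameter count in the summary
vocab_size = 8099100 // EMBEDDING_DIM          # 26,997 words

# Embedding: one EMBEDDING_DIM-dimensional vector per vocabulary entry
embedding_params = vocab_size * EMBEDDING_DIM  # 8,099,100

# LSTM: 4 gates, each with input weights, recurrent weights and a bias;
# Bidirectional doubles this (forward + backward directions)
lstm_params = 2 * 4 * ((EMBEDDING_DIM + LSTM_UNITS) * LSTM_UNITS + LSTM_UNITS)  # 3,330,048

# Dense: the bidirectional layer outputs 2 * 512 = 1024 features
dense_params = (2 * LSTM_UNITS) * NUM_CLASSES + NUM_CLASSES  # 2,050

total = embedding_params + lstm_params + dense_params
print(total)  # 11431198 -- matches "Total params" above
```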

Best answer

Probably you are running out of GPU memory. Your network is very large, with 11 million trainable parameters. Do you really need the 512*2 output of your recurrent layer?

Furthermore, your embedding_dim of 300 is also quite large (note that the 5,521 in the summary is the sequence length, not the vocabulary size; the embedding layer's parameter count implies a vocabulary of about 27k words). My guess is that your network is too complex for your problem. I would suggest first trying an embedding size of 32 and an LSTM size of 32. If your accuracy is still bad, you can increase the complexity:

EMBEDDING_DIM = 32
Bidirectional(LSTM(32, return_sequences=False))(embedding)
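A rough back-of-the-envelope estimate illustrates why shrinking the layer helps (a sketch only, assuming float32 activations and that backprop keeps the per-timestep hidden states; the actual cuDNN workspace layout differs):

```python
BYTES_PER_FLOAT32 = 4
batch_size, seq_len = 128, 5521  # values from the question

def lstm_state_bytes(units, bidirectional=True):
    """Rough size of the per-timestep hidden states kept for backprop."""
    directions = 2 if bidirectional else 1
    return batch_size * seq_len * directions * units * BYTES_PER_FLOAT32

original = lstm_state_bytes(512)  # ~2.7 GiB for the 512-unit bidirectional LSTM
reduced = lstm_state_bytes(32)    # ~0.17 GiB with the suggested 32 units

print(f"original: {original / 2**30:.2f} GiB, reduced: {reduced / 2**30:.2f} GiB")
# original: 2.70 GiB, reduced: 0.17 GiB
```

With very long sequences (5,521 timesteps) and batch size 128, the hidden-state tensors alone approach the memory of a small GPU, before counting the embedding output, gradients and optimizer state.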

For tensorflow - CuDNNLSTM: Failed to call ThenRnnForward, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53972814/
