Suppose I have an LSTM network for classifying time series of length 10. The standard way to feed a time series to the LSTM is to form a [batch_size x 10 x vector_size] array and feed it to the LSTM:
self.rnn_t, self.new_state = tf.nn.dynamic_rnn(
    inputs=self.X, cell=self.lstm_cell, dtype=tf.float32, initial_state=self.state_in)
When using the sequence_length parameter, I can specify the length of each time series.
My question is: for the scenario defined above, if I instead call dynamic_rnn 10 times, each time with an input of size [batch_size x 1 x vector_size] taken at the matching index of the time series, and pass the returned state as the initial state of the next call, will I end up with the same outputs and states, or not?
Best answer
You should get the same outputs in both cases. I will illustrate this with a toy example below:
> 1. Set up the inputs and parameters of the network:
import tensorflow as tf
import numpy.testing as npt

# Set RNN params
batch_size = 2
time_steps = 10
vector_size = 5

# Create a random input
dataset = tf.random_normal((batch_size, time_steps, vector_size), dtype=tf.float32, seed=42)
# Input tensor to the RNN
X = tf.Variable(dataset, dtype=tf.float32)
> 2. The full-sequence RNN (a BasicRNNCell here for simplicity), fed an input of shape [batch_size, time_steps, vector_size]:
# Initializers cannot be set to a random value, so use a fixed one.
with tf.variable_scope('rnn_full', initializer=tf.initializers.ones()):
    basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=10)
    output_f, state_f = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
> 3. The same cell called in a loop of time_steps iterations, each call fed a single-step input of shape [batch_size, vector_size] (expanded to [batch_size, 1, vector_size]), with the returned state passed as the initial state of the next call:
# Unstack the inputs across time_steps
unstack_X = tf.unstack(X, axis=1)
outputs = []
with tf.variable_scope('rnn_unstacked', initializer=tf.initializers.ones()):
    basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=10)
    # init_state has to be set to zero
    init_state = basic_cell.zero_state(batch_size, dtype=tf.float32)
    # Call the cell once per time step, N = time_steps
    for i in range(len(unstack_X)):
        output, state = tf.nn.dynamic_rnn(basic_cell, tf.expand_dims(unstack_X[i], 1),
                                          dtype=tf.float32, initial_state=init_state)
        # Carry the returned state into the next call
        init_state = state
        outputs.append(output)
# Transform the output to [batch_size, time_steps, num_units]
output_r = tf.transpose(tf.squeeze(tf.stack(outputs)), [1, 0, 2])
> 4. Check the outputs:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_f, st_f = sess.run([output_f, state_f])
    out_r, st_r = sess.run([output_r, state])
    npt.assert_almost_equal(out_f, out_r)
    npt.assert_almost_equal(st_f, st_r)
Both the states and the outputs match.
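The same equivalence can be checked without TensorFlow at all. Below is a minimal NumPy sketch (a vanilla tanh RNN with hypothetical fixed weights, not the LSTM or BasicRNNCell above): running the recurrence over the whole sequence in one call produces exactly the same outputs and final state as calling it once per time step while threading the returned state through as the next initial state.

```python
import numpy as np

rng = np.random.default_rng(42)
batch_size, time_steps, vector_size, num_units = 2, 10, 5, 10

# Hypothetical fixed weights for a vanilla RNN: h_t = tanh(x_t @ Wx + h_{t-1} @ Wh + b)
Wx = rng.normal(size=(vector_size, num_units))
Wh = rng.normal(size=(num_units, num_units))
b = rng.normal(size=(num_units,))
X = rng.normal(size=(batch_size, time_steps, vector_size))

def run_rnn(X_seq, h0):
    """Mimics a dynamic_rnn call: consumes all time steps of X_seq,
    returns (outputs [batch, steps, units], final_state)."""
    h = h0
    outs = []
    for t in range(X_seq.shape[1]):
        h = np.tanh(X_seq[:, t, :] @ Wx + h @ Wh + b)
        outs.append(h)
    return np.stack(outs, axis=1), h

# (a) One call over the full sequence
h0 = np.zeros((batch_size, num_units))
out_full, state_full = run_rnn(X, h0)

# (b) One call per time step, passing the returned state as the next initial state
state = h0
step_outs = []
for t in range(time_steps):
    out, state = run_rnn(X[:, t:t + 1, :], state)  # single-step slice, state carried over
    step_outs.append(out)
out_step = np.concatenate(step_outs, axis=1)

np.testing.assert_allclose(out_full, out_step)
np.testing.assert_allclose(state_full, state)
```

Because both variants execute the identical floating-point operations in the same order, the results agree exactly, not just approximately.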
Regarding python - Effect of setting sequence_length on the returned state in dynamic_rnn, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50262174/