machine-learning - How to write a multi-dimensional regression predictor with an RNN in TensorFlow 0.11

Tags: machine-learning tensorflow time-series prediction

This is a toy version of what I'm actually trying to do. I have very high-dimensional input data (2e05 to 5e06 dimensions) over a large number of time steps (150,000 steps). I know I will probably end up needing some embedding/compression of the state (see this question), but let's set that aside for now.

Take this 11-dimensional toy input data as an example:

t  Pattern
0  0,0,0,0,0,0,0,0,0,2,1 
1  0,0,0,0,0,0,0,0,2,1,0 
2  0,0,0,0,0,0,0,2,1,0,0 
n  ...

I want the RNN to learn to associate the current time step with the next one, so that if the input (x) is t0, the desired output (y) is t1.

The idea behind using an RNN is that I can then feed the network only one time step at a time (because of the large dimensionality of my real data). Since the number of inputs and outputs is the same, I'm not sure a basic RNN is appropriate. I looked a bit at the seq2seq tutorial, but I'm not sure this application needs an encoder/decoder, and I couldn't get anywhere with my toy data.

Here is everything I've been able to come up with, but it doesn't converge at all. What am I missing?

import numpy as np
import tensorflow as tf

# Imports for loading CSV file
from tensorflow.python.platform import gfile 
import csv

# Input sequence
wholeSequence = [[0,0,0,0,0,0,0,0,0,2,1],
                 [0,0,0,0,0,0,0,0,2,1,0],
                 [0,0,0,0,0,0,0,2,1,0,0],
                 [0,0,0,0,0,0,2,1,0,0,0],
                 [0,0,0,0,0,2,1,0,0,0,0],
                 [0,0,0,0,2,1,0,0,0,0,0],
                 [0,0,0,2,1,0,0,0,0,0,0],
                 [0,0,2,1,0,0,0,0,0,0,0],
                 [0,2,1,0,0,0,0,0,0,0,0],
                 [2,1,0,0,0,0,0,0,0,0,0]]

data = np.array(wholeSequence[:-1], dtype=int) # all but last
target = np.array(wholeSequence[1:], dtype=int) # all but first
trainingSet = tf.contrib.learn.datasets.base.Dataset(data=data, target=target)
trainingSetDims = trainingSet.data.shape[1]

EPOCHS = 10000
PRINT_STEP = 1000

x_ = tf.placeholder(tf.float32, [None, trainingSetDims])
y_ = tf.placeholder(tf.float32, [None, trainingSetDims])

cell = tf.nn.rnn_cell.BasicRNNCell(num_units=trainingSetDims)

outputs, states = tf.nn.rnn(cell, [x_], dtype=tf.float32)
outputs = outputs[-1]

W = tf.Variable(tf.random_normal([trainingSetDims, 1]))     
b = tf.Variable(tf.random_normal([trainingSetDims]))

y = tf.matmul(outputs, W) + b

cost = tf.reduce_mean(tf.square(y - y_))
train_op = tf.train.RMSPropOptimizer(0.005, 0.2).minimize(cost)

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    for i in range(EPOCHS):
        sess.run(train_op, feed_dict={x_:trainingSet.data, y_:trainingSet.target})
        if i % PRINT_STEP == 0:
            c = sess.run(cost, feed_dict={x_:trainingSet.data, y_:trainingSet.target})
            print('training cost:', c)

    response = sess.run(y, feed_dict={x_:trainingSet.data})
    print(response)

The approach came from this thread.

Ultimately I want to use an LSTM, and the goal is to model the sequence so that an approximation of the whole sequence can be reconstructed by seeding the network with t0 and then feeding each prediction back in as the next input.

Edit 1

I now see the cost decreasing, since I added the following code to rescale the histogram input data into a probability distribution before training:

# Convert hist to probability distribution
wholeSequence = np.array(wholeSequence, dtype=float) # Convert to NP array.
pdfSequence = wholeSequence*(1./np.sum(wholeSequence)) # Normalize to PD.

data = pdfSequence[:-1] # all but last
target = pdfSequence[1:] # all but first

The output still doesn't look anything like the input, so I'm definitely missing something:

('training cost:', 0.49993864)
('training cost:', 0.0012213766)
('training cost:', 0.0010471855)
('training cost:', 0.00094231067)
('training cost:', 0.0008385859)
('training cost:', 0.00077578216)
('training cost:', 0.00071381911)
('training cost:', 0.00063783216)
('training cost:', 0.00061271922)
('training cost:', 0.00059178629)
[[ 0.02012676  0.02383044  0.02383044  0.02383044  0.02383044  0.02383044
   0.02383044  0.02383044  0.02383044  0.01642305  0.01271933]
 [ 0.02024871  0.02395239  0.02395239  0.02395239  0.02395239  0.02395239
   0.02395239  0.02395239  0.02395239  0.016545    0.01284128]
 [ 0.02013803  0.02384171  0.02384171  0.02384171  0.02384171  0.02384171
   0.02384171  0.02384171  0.02384171  0.01643431  0.0127306 ]
 [ 0.020188    0.02389169  0.02389169  0.02389169  0.02389169  0.02389169
   0.02389169  0.02389169  0.02389169  0.01648429  0.01278058]
 [ 0.02020025  0.02390394  0.02390394  0.02390394  0.02390394  0.02390394
   0.02390394  0.02390394  0.02390394  0.01649654  0.01279283]
 [ 0.02005926  0.02376294  0.02376294  0.02376294  0.02376294  0.02376294
   0.02376294  0.02376294  0.02376294  0.01635554  0.01265183]
 [ 0.02034193  0.02404562  0.02404562  0.02404562  0.02404562  0.02404562
   0.02404562  0.02404562  0.02404562  0.01663822  0.01293451]
 [ 0.02057907  0.02428275  0.02428275  0.02428275  0.02428275  0.02428275
   0.02428275  0.02428275  0.02428275  0.01687536  0.01317164]
 [ 0.02042386  0.02412754  0.02412754  0.02412754  0.02412754  0.02412754
   0.02412754  0.02412754  0.02412754  0.01672015  0.01301643]]
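
As a quick sanity check on that claim, one can compare where the peak lands in each predicted row against the corresponding target row (a minimal sketch, assuming the `response` and `target` arrays from the script above are still in scope):

predicted_peaks = np.argmax(response, axis=1)  # column index of the largest value in each predicted row
true_peaks = np.argmax(target, axis=1)         # column index of the peak in each target row

print('predicted peaks:', predicted_peaks)
print('true peaks:     ', true_peaks)
print('matching rows:  ', np.sum(predicted_peaks == true_peaks), 'of', len(true_peaks))

Since every predicted row is close to uniform, the predicted peaks do not track the moving 2/1 pattern in the targets.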

Best Answer

I gave up on using TensorFlow directly and ended up using Keras. Here is the code that learns the toy sequence above using a single-layer LSTM with a second Dense layer:

import numpy as np

from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

# Input sequence
wholeSequence = [[0,0,0,0,0,0,0,0,0,2,1],
                 [0,0,0,0,0,0,0,0,2,1,0],
                 [0,0,0,0,0,0,0,2,1,0,0],
                 [0,0,0,0,0,0,2,1,0,0,0],
                 [0,0,0,0,0,2,1,0,0,0,0],
                 [0,0,0,0,2,1,0,0,0,0,0],
                 [0,0,0,2,1,0,0,0,0,0,0],
                 [0,0,2,1,0,0,0,0,0,0,0],
                 [0,2,1,0,0,0,0,0,0,0,0],
                 [2,1,0,0,0,0,0,0,0,0,0]]

# Preprocess Data: (This does not work)
wholeSequence = np.array(wholeSequence, dtype=float) # Convert to NP array.
data = wholeSequence[:-1] # all but last
target = wholeSequence[1:] # all but first

# Reshape training data for Keras LSTM model
# The training data needs to be (batchIndex, timeStepIndex, dimensionIndex)
# Single batch, 9 time steps, 11 dimensions
data = data.reshape((1, 9, 11))
target = target.reshape((1, 9, 11))

# Build Model
model = Sequential()  
model.add(LSTM(11, input_shape=(9, 11), unroll=True, return_sequences=True))
model.add(Dense(11))
model.compile(loss='mean_absolute_error', optimizer='adam')
model.fit(data, target, nb_epoch=2000, batch_size=1, verbose=2)
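
As a follow-up to the closed-loop reconstruction idea from the question, here is a minimal, untested sketch of how the trained model above could be used to rebuild the sequence from t0 alone. Because the LSTM is causal (output step i depends only on input steps 0..i), each prediction can be written back into the input one step at a time; `model` and `wholeSequence` are assumed to be the objects defined above, with the model already fit:

# Closed-loop reconstruction sketch: seed with t0, then feed each
# prediction back in as the next input step.
seed = np.zeros((1, 9, 11))
seed[0, 0, :] = wholeSequence[0]              # seed the first time step with t0

for i in range(8):
    prediction = model.predict(seed)          # shape (1, 9, 11), one output per time step
    seed[0, i + 1, :] = prediction[0, i, :]   # use step i's prediction as step i+1's input

print(seed[0, 1:])           # predicted t1..t8
print(wholeSequence[1:9])    # ground truth t1..t8 for comparison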

The source question for "machine-learning - How to write a multi-dimensional regression predictor with an RNN in TensorFlow 0.11" can be found on Stack Overflow: https://stackoverflow.com/questions/41191105/
