I have been working on this neural network, with the aim of predicting the TBA (time-based availability) of a simulated wind farm from certain attributes. The network runs fine and gives me predictions, but I am not really satisfied with the results: it fails to pick up on some very obvious correlations that I can spot myself. Here is my current code:
```python
# Imports
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler

maxi = 0.96
mini = 0.7

# Load the data as a np.array
data = pd.read_csv('datafile_ML_no_avg.csv')
data = data.values

# Shuffle the data
shuffle_indices = np.random.permutation(np.arange(len(data)))
data = data[shuffle_indices]

# Training and test data
data_train = data[0:int(len(data) * 0.8), :]
data_test = data[int(len(data) * 0.8):int(len(data)), :]

# Scale data (fit on the training split only, to avoid leakage)
scaler = MinMaxScaler(feature_range=(mini, maxi))
scaler.fit(data_train)
data_train = scaler.transform(data_train)
data_test = scaler.transform(data_test)

# Build X and y (note: column 5 is skipped here)
X_train = data_train[:, 0:5]
y_train = data_train[:, 6:7]
X_test = data_test[:, 0:5]
y_test = data_test[:, 6:7]

# Number of input features
n_args = X_train.shape[1]
multi = 8

# Neurons per hidden layer
n_neurons_1 = 8 * multi
n_neurons_2 = 4 * multi
n_neurons_3 = 2 * multi
n_neurons_4 = 1 * multi

# Session
net = tf.InteractiveSession()

# Placeholders
X = tf.placeholder(dtype=tf.float32, shape=[None, n_args])
Y = tf.placeholder(dtype=tf.float32, shape=[None, 1])

# Initializers
sigma = 1
weight_initializer = tf.variance_scaling_initializer(mode="fan_avg",
                                                     distribution="uniform",
                                                     scale=sigma)
bias_initializer = tf.zeros_initializer()

# Hidden weights
W_hidden_1 = tf.Variable(weight_initializer([n_args, n_neurons_1]))
bias_hidden_1 = tf.Variable(bias_initializer([n_neurons_1]))
W_hidden_2 = tf.Variable(weight_initializer([n_neurons_1, n_neurons_2]))
bias_hidden_2 = tf.Variable(bias_initializer([n_neurons_2]))
W_hidden_3 = tf.Variable(weight_initializer([n_neurons_2, n_neurons_3]))
bias_hidden_3 = tf.Variable(bias_initializer([n_neurons_3]))
W_hidden_4 = tf.Variable(weight_initializer([n_neurons_3, n_neurons_4]))
bias_hidden_4 = tf.Variable(bias_initializer([n_neurons_4]))

# Output weights
W_out = tf.Variable(weight_initializer([n_neurons_4, 1]))
bias_out = tf.Variable(bias_initializer([1]))

# Hidden layers
hidden_1 = tf.nn.relu(tf.add(tf.matmul(X, W_hidden_1), bias_hidden_1))
hidden_2 = tf.nn.relu(tf.add(tf.matmul(hidden_1, W_hidden_2), bias_hidden_2))
hidden_3 = tf.nn.relu(tf.add(tf.matmul(hidden_2, W_hidden_3), bias_hidden_3))
hidden_4 = tf.nn.relu(tf.add(tf.matmul(hidden_3, W_hidden_4), bias_hidden_4))

# Output layer; keep shape [None, 1] so it matches Y
# (transposing here would make squared_difference broadcast the
# [1, None] output against the [None, 1] target into a full matrix)
out = tf.add(tf.matmul(hidden_4, W_out), bias_out)

# Cost function
mse = tf.reduce_mean(tf.squared_difference(out, Y))

# Optimizer
opt = tf.train.AdamOptimizer().minimize(mse)

# Init
net.run(tf.global_variables_initializer())

# Fit neural net
batch_size = 10
mse_train = []
mse_test = []

# Run
epochs = 10
for e in range(epochs):
    # Shuffle training data
    shuffle_indices = np.random.permutation(np.arange(len(y_train)))
    X_train = X_train[shuffle_indices]
    y_train = y_train[shuffle_indices]
    # Minibatch training
    for i in range(0, len(y_train) // batch_size):
        start = i * batch_size
        batch_x = X_train[start:start + batch_size]
        batch_y = y_train[start:start + batch_size]
        # Run optimizer with batch
        net.run(opt, feed_dict={X: batch_x, Y: batch_y})
        # Show progress
        if np.mod(i, 50) == 0:
            mse_train.append(net.run(mse, feed_dict={X: X_train, Y: y_train}))
            mse_test.append(net.run(mse, feed_dict={X: X_test, Y: y_test}))

pred = net.run(out, feed_dict={X: X_test})
print(pred)
```
I have tried varying the number of hidden layers, the number of nodes per layer, and the number of epochs, and I have tried different activation functions and optimizers. However, I am still new to neural networks, so I may be missing something very obvious.
Thanks in advance to everyone who read all the way through.
Best Answer
It would be easier to help if you shared a small dataset that illustrates the problem. That said, I will outline some common issues with non-standard datasets and how to overcome them.
Possible solutions
Regularization and validation-based optimization - these are always worth trying when you are after extra accuracy. See the dropout method here (original paper), along with some overviews here.
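In TF1, dropout is applied with `tf.nn.dropout` on a hidden activation. As an illustration of the mechanics only, here is a minimal numpy sketch of inverted dropout (the `dropout` helper and the keep probability of 0.5 are hypothetical choices, not part of the original code):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, keep_prob):
    # Inverted dropout: randomly zero out units, then scale the
    # survivors by 1/keep_prob so the expected activation is
    # unchanged and no rescaling is needed at test time.
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

h = np.ones((4, 8))                   # pretend hidden-layer activations
h_train = dropout(h, keep_prob=0.5)   # apply only during training
```

Dropout is only active during training; at prediction time the layer is passed through unchanged.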
Imbalanced data - sometimes time-series classes/events behave like anomalies, or are simply represented in an imbalanced way. If you read a book, words like "the" or "it" appear far more often than words like "warehouse". If your main task is to detect the word "warehouse" and you train your network (even LSTMs) in the traditional way, this becomes a problem. One way to handle it is to balance the samples (create a balanced dataset) or to give low-frequency classes more weight.
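For a regression target like TBA, the "more weight" idea can be realized by weighting the squared errors per sample. A minimal sketch, assuming hypothetical names (`weighted_mse` and the weight vector are illustrative, not from the original code):

```python
import numpy as np

def weighted_mse(pred, target, weights):
    # Each squared error is scaled by its sample weight, so rare
    # operating regimes can be up-weighted instead of being
    # averaged away by the dominant regime.
    p = np.asarray(pred, dtype=float)
    t = np.asarray(target, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * (p - t) ** 2) / np.sum(w))

# With equal weights this reduces to the plain MSE the network
# already uses; larger weights pull the loss toward those samples.
loss = weighted_mse([1.0, 2.0], [0.0, 0.0], [1.0, 3.0])
```

The same effect can be had in the graph by multiplying `tf.squared_difference(out, Y)` with a per-sample weight placeholder before averaging.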
Model structure - sometimes fully connected layers are not enough. For example, look at computer vision problems, where we train with convolutional layers. Convolution and pooling layers impose structure on the model that suits images, and they also act as a kind of regularization, since these layers have fewer parameters. Convolutions are possible in time-series problems too, and they turn out to work well. See the examples in Conditional Time Series Forecasting with Convolution Neural Networks.
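The core operation behind such layers can be sketched in a few lines of numpy; the `conv1d` helper below is a hypothetical illustration of parameter sharing over a sliding window, not code from the paper or the question:

```python
import numpy as np

def conv1d(series, kernel):
    # Valid 1-D convolution: slide the kernel along the series and
    # take a dot product at each position, so every output position
    # reuses the same small set of weights (parameter sharing).
    k = len(kernel)
    return np.array([np.dot(series[i:i + k], kernel)
                     for i in range(len(series) - k + 1)])

# A difference kernel highlights local changes in the series - the
# kind of temporal pattern a learned convolutional filter can pick
# up but a plain fully connected layer has to rediscover per input.
x = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
diffs = conv1d(x, np.array([-1.0, 1.0]))  # first differences
```

In TF1 this corresponds to stacking `tf.layers.conv1d` layers, with the kernels learned rather than fixed.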
The suggestions above are listed in the order in which I would try them.
Good luck!
Regarding python - fine-tuning a neural network in tensorflow, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49731937/