python - Cannot optimize multivariate linear regression in Tensorflow

Tags: python machine-learning tensorflow

I worked through the single-variable example from the tensorflow tutorial, but I'm having trouble optimizing a multivariate linear regression problem in Tensorflow.

I'm using the dataset of Portland housing prices found here.

I'm new to Tensorflow, and I'm sure some of this looks horrible.

The optimization doesn't seem to work at all: it blows up to infinity almost immediately. Any help is appreciated.

import tensorflow as tf
import numpy as np

X = np.array( [[  2.10400000e+03,   3.00000000e+00],
   [  1.60000000e+03,   3.00000000e+00],
   [  2.40000000e+03,   3.00000000e+00],
   [  1.41600000e+03,   2.00000000e+00],
   [  3.00000000e+03,   4.00000000e+00],
   [  1.98500000e+03,   4.00000000e+00],
   [  1.53400000e+03,   3.00000000e+00],
   [  1.42700000e+03,   3.00000000e+00],
   [  1.38000000e+03,   3.00000000e+00],
   [  1.49400000e+03,   3.00000000e+00],
   [  1.94000000e+03,   4.00000000e+00],
   [  2.00000000e+03,   3.00000000e+00],
   [  1.89000000e+03,   3.00000000e+00],
   [  4.47800000e+03,   5.00000000e+00],
   [  1.26800000e+03,   3.00000000e+00],
   [  2.30000000e+03,   4.00000000e+00],
   [  1.32000000e+03,   2.00000000e+00],
   [  1.23600000e+03,   3.00000000e+00],
   [  2.60900000e+03,   4.00000000e+00],
   [  3.03100000e+03,   4.00000000e+00],
   [  1.76700000e+03,   3.00000000e+00],
   [  1.88800000e+03,   2.00000000e+00],
   [  1.60400000e+03,   3.00000000e+00],
   [  1.96200000e+03,   4.00000000e+00],
   [  3.89000000e+03,   3.00000000e+00],
   [  1.10000000e+03,   3.00000000e+00],
   [  1.45800000e+03,   3.00000000e+00],
   [  2.52600000e+03,   3.00000000e+00],
   [  2.20000000e+03,   3.00000000e+00],
   [  2.63700000e+03,   3.00000000e+00],
   [  1.83900000e+03,   2.00000000e+00],
   [  1.00000000e+03,   1.00000000e+00],
   [  2.04000000e+03,   4.00000000e+00],
   [  3.13700000e+03,   3.00000000e+00],
   [  1.81100000e+03,   4.00000000e+00],
   [  1.43700000e+03,   3.00000000e+00],
   [  1.23900000e+03,   3.00000000e+00],
   [  2.13200000e+03,   4.00000000e+00],
   [  4.21500000e+03,   4.00000000e+00],
   [  2.16200000e+03,   4.00000000e+00],
   [  1.66400000e+03,   2.00000000e+00],
   [  2.23800000e+03,   3.00000000e+00],
   [  2.56700000e+03,   4.00000000e+00],
   [  1.20000000e+03,   3.00000000e+00],
   [  8.52000000e+02,   2.00000000e+00],
   [  1.85200000e+03,   4.00000000e+00],
   [  1.20300000e+03,   3.00000000e+00]]
).astype('float32')

y_data = np.array([[ 399900.],
   [ 329900.],
   [ 369000.],
   [ 232000.],
   [ 539900.],
   [ 299900.],
   [ 314900.],
   [ 198999.],
   [ 212000.],
   [ 242500.],
   [ 239999.],
   [ 347000.],
   [ 329999.],
   [ 699900.],
   [ 259900.],
   [ 449900.],
   [ 299900.],
   [ 199900.],
   [ 499998.],
   [ 599000.],
   [ 252900.],
   [ 255000.],
   [ 242900.],
   [ 259900.],
   [ 573900.],
   [ 249900.],
   [ 464500.],
   [ 469000.],
   [ 475000.],
   [ 299900.],
   [ 349900.],
   [ 169900.],
   [ 314900.],
   [ 579900.],
   [ 285900.],
   [ 249900.],
   [ 229900.],
   [ 345000.],
   [ 549000.],
   [ 287000.],
   [ 368500.],
   [ 329900.],
   [ 314000.],
   [ 299000.],
   [ 179900.],
   [ 299900.],
   [ 239500.]]
).astype('float32')

m = 47  # number of training examples

W = tf.Variable(tf.zeros([2,1]))  # one weight per feature
b = tf.Variable(tf.zeros([1]))    # bias term

# tf.Print is an identity op that logs its input each time it is evaluated.
b = tf.Print(b, [b], "Bias: ")
W = tf.Print(W, [W], "Weights: ")

y = tf.add(tf.matmul(X, W), b)    # linear model: y = X*W + b
y = tf.Print(y, [y], "y: ")

# Squared-error cost averaged over the m examples (and halved)
loss = tf.reduce_sum(tf.square(y - y_data)) / (2 * m)
loss = tf.Print(loss, [loss], "loss: ")
optimizer = tf.train.GradientDescentOptimizer(.01)

train = optimizer.minimize(loss)

init = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init)

for i in range(10):
  sess.run(train)
  #if i % 20 == 0:
  #  print(sess.run(W), sess.run(b))

sess.close()

For the output, I get the following:

I tensorflow/core/kernels/logging_ops.cc:79] Weights: [0 0]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [0]
I tensorflow/core/kernels/logging_ops.cc:79] y: [0 0 0...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [6.5591554e+10]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [38210460 56018.387]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [17020.631]
I tensorflow/core/kernels/logging_ops.cc:79] y: [8.0394994e+10 6.1136921e+10 9.1705295e+10...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [3.373289e+21]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-3.8223224e+09]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-8.8281616e+12 -1.2750791e+10]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-1.8574494e+16 -1.4125102e+16 -2.118763e+16...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [1.8006666e+32]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [2.0396713e+18 2.9459613e+15]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [8.8311514e+14]
I tensorflow/core/kernels/logging_ops.cc:79] y: [4.2914781e+21 3.2634836e+21 4.8952214e+21...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-2.040362e+20]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-4.7124867e+23 -6.8063922e+20]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-9.9150947e+26 -7.5400019e+26 -1.1309991e+27...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [4.7140825e+25]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [1.0887797e+29 1.5725587e+26]
I tensorflow/core/kernels/logging_ops.cc:79] y: [2.2907974e+32 1.7420524e+32 2.6130761e+32...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-1.0891484e+31]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-2.515532e+34 -3.6332629e+31]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-5.2926912e+37 -4.0248632e+37 -6.0372884e+37...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [2.5163837e+36]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [inf 8.3943417e+36]
I tensorflow/core/kernels/logging_ops.cc:79] y: [inf inf inf...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-inf]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-nan -inf]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-nan -nan -nan...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [-nan]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-nan]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-nan -nan]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-nan -nan -nan...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [-nan]

I built from source and I'm using python3, which became possible after the relevant fix went in. I doubt that has anything to do with it, but I wanted to be sure. I'm certain this is just a lack of knowledge on my part.

Best answer

Your learning rate is too high, so the solution bounces back and forth, overshooting farther on every step.
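
A minimal numerical illustration of that behaviour (my addition, not part of the original answer): plain gradient descent on the one-dimensional quadratic f(w) = w**2, where any step size above 1.0 makes the iterate overshoot the minimum at 0, flip sign, and grow on every update, which is exactly the pattern in the Weights/loss log above.

# Hypothetical toy example, independent of the question's code.
w, lr = 1.0, 1.5
for step in range(6):
    w -= lr * 2 * w      # gradient of w**2 is 2*w
    print(step, w)       # prints -2.0, 4.0, -8.0, 16.0, ... : diverging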

In general, for this kind of problem it is good practice to normalize the ranges of your inputs, e.g. rescale them to have mean 0 and variance 1.
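
As a sketch of how both suggestions could be applied to the code in the question (my addition, not part of the accepted answer): standardize the feature columns and the target with NumPy, then reuse the same graph with a modest learning rate. It assumes the X, y_data and m values defined above are still in scope; the learning rate of 0.1 is just an assumed value that behaves well on standardized data, not something the answer prescribes.

import numpy as np
import tensorflow as tf

# Rescale each feature column (and the target) to mean 0, variance 1,
# using the X, y_data and m defined in the question.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
y_norm = (y_data - y_data.mean()) / y_data.std()

W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1]))
y_pred = tf.matmul(X_norm, W) + b

loss = tf.reduce_sum(tf.square(y_pred - y_norm)) / (2 * m)
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for i in range(200):
    _, current_loss = sess.run([train, loss])
    if i % 50 == 0:
        print(i, current_loss)    # loss decreases instead of blowing up
sess.close()

With the inputs standardized, even the original learning rate of 0.01 converges (just more slowly), and predictions can be mapped back to dollars by undoing the standardization of y.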

Regarding python - Cannot optimize multivariate linear regression in Tensorflow, the original question is on Stack Overflow: https://stackoverflow.com/questions/34167537/
