python - Why doesn't my custom linear regression model match sklearn?

Tags: python numpy machine-learning scikit-learn gradient-descent

I'm trying to create a simple linear model in Python, without using any libraries (other than numpy). Here's what I have:

import numpy as np
import pandas

np.random.seed(1)

alpha = 0.1

def h(x, w):
  return np.dot(w.T, x)

def cost(X, W, Y):
  totalCost = 0
  for i in range(47):
    diff = h(X[i], W) - Y[i]
    squared = diff * diff
    totalCost += squared

  return totalCost / 2

housing_data = np.loadtxt('Housing.csv', delimiter=',')

x1 = housing_data[:,0]
x2 = housing_data[:,1]
y = housing_data[:,2]

avgX1 = np.mean(x1)
stdX1 = np.std(x1)
normX1 = (x1 - avgX1) / stdX1
print('avgX1', avgX1)
print('stdX1', stdX1)

avgX2 = np.mean(x2)
stdX2 = np.std(x2)
normX2 = (x2 - avgX2) / stdX2

print('avgX2', avgX2)
print('stdX2', stdX2)

normalizedX = np.ones((47, 3))

normalizedX[:,1] = normX1
normalizedX[:,2] = normX2

np.savetxt('normalizedX.csv', normalizedX)

weights = np.ones((3,))

for boom in range(100):
  currentCost = cost(normalizedX, weights, y)
  if boom % 1 == 0:
    print(boom, 'iteration', weights[0], weights[1], weights[2])
    print('Cost', currentCost)

  for i in range(47):
    errorDiff = h(normalizedX[i], weights) - y[i]
    weights[0] = weights[0] - alpha * (errorDiff) * normalizedX[i][0]
    weights[1] = weights[1] - alpha * (errorDiff) * normalizedX[i][1]
    weights[2] = weights[2] - alpha * (errorDiff) * normalizedX[i][2]

print(weights)

predictedX = [1, (2100 - avgX1) / stdX1, (3 - avgX2) / stdX2]
firstPrediction = np.array(predictedX)
print('firstPrediction', firstPrediction)
firstPrediction = h(firstPrediction, weights)
print(firstPrediction)

First, it converges very quickly, after only 14 iterations. Second, it gives me different results than linear regression with sklearn. For reference, my sklearn code is:

import numpy
import matplotlib.pyplot as plot
import pandas
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

dataset = pandas.read_csv('Housing.csv', header=None)

x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 2].values

linearRegressor = LinearRegression()

xnorm = sklearn.preprocessing.scale(x)
scaleCoef = sklearn.preprocessing.StandardScaler().fit(x)
mean = scaleCoef.mean_
std = numpy.sqrt(scaleCoef.var_)
print('std')
print(std)

stuff = linearRegressor.fit(xnorm, y)

predictedX = [[(2100 - mean[0]) / std[0], (3 - mean[1]) / std[1]]]
yPrediction = linearRegressor.predict(predictedX)
print('predictedX', predictedX)
print('predict', yPrediction)


print(stuff.coef_, stuff.intercept_)

My custom model predicts a value of 337,000 for y, while sklearn predicts 355,000. My data is 47 rows and looks like:

2104,3,3.999e+05
1600,3,3.299e+05
2400,3,3.69e+05
1416,2,2.32e+05
3000,4,5.399e+05
1985,4,2.999e+05
1534,3,3.149e+05

The full data is available at https://github.com/shamoons/linear-logistic-regression/blob/master/Housing.csv

I assume either (a) my gradient descent regression is somehow wrong, or (b) I'm not using sklearn correctly.

Any other reason why the two wouldn't predict the same output for a given input?

Best Answer

I think you are missing the 1/m term (where m is the size of y) in your gradient descent. After including the 1/m term, I seem to get predicted values similar to your sklearn code.
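For reference, here is where the 1/m comes from, assuming the usual mean-squared-error convention (your cost function divides by 2 but not by m):

J(w) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_w(x^{(i)}) - y^{(i)} \right)^2

\frac{\partial J}{\partial w_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_w(x^{(i)}) - y^{(i)} \right) x_j^{(i)}

Each update w_j = w_j - alpha * dJ/dw_j therefore carries the 1/m factor. Dropping it multiplies the effective step size by m, which would explain both the very fast apparent convergence and the different final prediction.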

See below:

....
weights = np.ones((3,))

m = y.size
for boom in range(100):
  currentCost = cost(normalizedX, weights, y)
  if boom % 1 == 0:
    print(boom, 'iteration', weights[0], weights[1], weights[2])
    print('Cost', currentCost)

  for i in range(47):
    errorDiff = h(normalizedX[i], weights) - y[i]
    weights[0] = weights[0] - alpha * (1/m) * errorDiff * normalizedX[i][0]
    weights[1] = weights[1] - alpha * (1/m) * errorDiff * normalizedX[i][1]
    weights[2] = weights[2] - alpha * (1/m) * errorDiff * normalizedX[i][2]

...

This gives a first prediction of 355242.

This agrees closely with the LinearRegression model, even though LinearRegression does not do gradient descent.
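If you want to double-check that without any iteration, the ordinary least-squares solution can be computed in closed form with numpy. A minimal sketch, assuming normalizedX, y, avgX1, stdX1, avgX2, and stdX2 from the question's script are in scope:

import numpy as np

# Closed-form ordinary least squares: minimizes ||X w - y||^2 directly.
# lstsq is preferred over explicitly inverting X^T X for numerical stability.
w_closed, residuals, rank, sv = np.linalg.lstsq(normalizedX, y, rcond=None)
print('closed-form weights', w_closed)

# Same query point as in the question (2100 sq ft, 3 bedrooms).
query = np.array([1, (2100 - avgX1) / stdX1, (3 - avgX2) / stdX2])
print('closed-form prediction', query @ w_closed)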

I also tried SGDRegressor (which uses stochastic gradient descent) in sklearn, and it also seems to get a value close to the LinearRegression model and to your model. See the code below:

import numpy
import matplotlib.pyplot as plot
import pandas
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, SGDRegressor

dataset = pandas.read_csv('Housing.csv', header=None)

x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 2].values

sgdRegressor = SGDRegressor(penalty='none', learning_rate='constant', eta0=0.1, max_iter=1000, tol=1E-6)

xnorm = sklearn.preprocessing.scale(x)
scaleCoef = sklearn.preprocessing.StandardScaler().fit(x)
mean = scaleCoef.mean_
std = numpy.sqrt(scaleCoef.var_)
print('std')
print(std)

yPrediction = []
predictedX = [[(2100 - mean[0]) / std[0], (3 - mean[1]) / std[1]]]
print('predictedX', predictedX)
for trials in range(10):
    stuff = sgdRegressor.fit(xnorm, y)
    yPrediction.extend(sgdRegressor.predict(predictedX))
print('predict', numpy.mean(yPrediction))

Result:

predict 355533.10119985335
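As a side note, the per-sample loop in the corrected code above can also be written as one vectorized batch step per pass over the data. This is a sketch of a batch-gradient variant of the same idea (assuming normalizedX, y, and alpha from the question's script), not exactly what either code block above does:

import numpy as np

# Vectorized batch gradient descent (sketch; assumes normalizedX is 47x3,
# y has 47 entries, and alpha = 0.1 as in the question).
weights = np.ones(3)
m = y.size
for _ in range(100):
    errors = normalizedX @ weights - y   # residuals, shape (m,)
    grad = normalizedX.T @ errors / m    # gradient of J(w) = 1/(2m) * sum of squared errors
    weights = weights - alpha * grad

print(weights)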

The original question, "python - Why doesn't my custom linear regression model match sklearn?", is on Stack Overflow: https://stackoverflow.com/questions/54585105/
