I just noticed that scikit-learn's linear regression gives different results when the data is loaded into a pandas DataFrame rather than used in its raw NumPy form.
I don't understand why this happens.
Consider the following linear regression example:
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
boston = load_boston()
X1 = pd.DataFrame(boston.data)
X1.columns = boston.feature_names
X2 = boston.data
y2 = boston.target
y1 = boston.target
lreg = LinearRegression()
X1 = (X1 - X1.mean()) / X1.std()
X2 = (X2 - X2.mean()) / X2.std()
The resulting models give the same R^2 values and predictions, but the coefficients and intercepts are wildly different.
Demonstration:
intcpt1 = lreg.fit(X1, y1).intercept_
intcpt2 = lreg.fit(X2, y2).intercept_
f"Intercept for model with dataframe: {intcpt1}, model with numpy array: {intcpt2}"
gives:
'Intercept for model with dataframe: 22.53280632411069, model with numpy array: -941.8009906279219'
Likewise, the coefficients are very different:
coef1 = lreg.fit(X1, y1).coef_[:3]
coef2 = lreg.fit(X2, y2).coef_[:3]
f"First the coeffs for model with dataframe: {coef1}, model with numpy array: {coef2}"
gives:
'First the coeffs for model with dataframe: [-0.92906457 1.08263896 0.14103943], model with numpy array: [-15.67844685 6.73818665 2.98419849]'
But the scores and predictions are identical:
score1 = lreg.fit(X1, y1).score(X1, y1)
score2 = lreg.fit(X2, y2).score(X2, y2)
f"Score for model with dataframe: {score1}, model with numpy array: {score2}"
yields:
'Score for model with dataframe: 0.7406426641094094, model with numpy array: 0.7406426641094073'
And the same holds for the predictions:
pred1 = lreg.fit(X1, y1).predict(X1)[:3]
pred2 = lreg.fit(X2, y2).predict(X2)[:3]
f"First 3 predictions with dataframe: {pred1}, with numpy array: {pred2}"
gives:
'First 3 predictions with dataframe: [30.00384338 25.02556238 30.56759672], with numpy array: [30.00384338 25.02556238 30.56759672]'
boston.data looks like this:
array([[6.3200e-03, 1.8000e+01, 2.3100e+00, ..., 1.5300e+01, 3.9690e+02,
4.9800e+00],
[2.7310e-02, 0.0000e+00, 7.0700e+00, ..., 1.7800e+01, 3.9690e+02,
9.1400e+00],
[2.7290e-02, 0.0000e+00, 7.0700e+00, ..., 1.7800e+01, 3.9283e+02,
4.0300e+00],
...,
[6.0760e-02, 0.0000e+00, 1.1930e+01, ..., 2.1000e+01, 3.9690e+02,
5.6400e+00],
[1.0959e-01, 0.0000e+00, 1.1930e+01, ..., 2.1000e+01, 3.9345e+02,
6.4800e+00],
[4.7410e-02, 0.0000e+00, 1.1930e+01, ..., 2.1000e+01, 3.9690e+02,
7.8800e+00]])
while the standardized DataFrame looks like this:
CRIM ZN INDUS CHAS NOX RM AGE \
0 -0.419367 0.284548 -1.286636 -0.272329 -0.144075 0.413263 -0.119895
1 -0.416927 -0.487240 -0.592794 -0.272329 -0.739530 0.194082 0.366803
2 -0.416929 -0.487240 -0.592794 -0.272329 -0.739530 1.281446 -0.265549
3 -0.416338 -0.487240 -1.305586 -0.272329 -0.834458 1.015298 -0.809088
4 -0.412074 -0.487240 -1.305586 -0.272329 -0.834458 1.227362 -0.510674
5 -0.416631 -0.487240 -1.305586 -0.272329 -0.834458 0.206892 -0.350810
It's not clear to me why the LinearRegression algorithm interprets the data differently in each case.
Accepted answer
This is caused by your transformation:
X1 = (X1 - X1.mean()) / X1.std()
X2 = (X2 - X2.mean()) / X2.std()
pandas computes the mean and standard deviation column-wise by default, whereas NumPy's mean and std reduce over the entire flattened array and return a single scalar. Subtracting one scalar and dividing by another is the same affine rescaling applied uniformly to every feature, so the fitted predictions and R^2 are unchanged; only the coefficients and intercept come out on a different scale. To standardize column-wise with NumPy, pass the axis argument to mean and std:
X2 = (X2 - X2.mean(axis=0)) / X2.std(axis=0)
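A minimal sketch of the difference, using small synthetic data rather than the Boston dataset (which has been removed from recent scikit-learn releases); the column names are made up for illustration. Note one further subtlety assumed here: pandas' std uses ddof=1 by default while NumPy's uses ddof=0, so an exact match also requires passing ddof=1 on the NumPy side.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Three columns on very different scales, to make the effect visible.
X2 = rng.normal(size=(6, 3)) * [1, 10, 100]
X1 = pd.DataFrame(X2, columns=["a", "b", "c"])  # hypothetical column names

# pandas reduces column-wise by default: one mean/std per column.
df_scaled = (X1 - X1.mean()) / X1.std()

# NumPy reduces over the flattened array by default: a single scalar.
print(X2.mean())        # one scalar for the whole array
print(X2.mean(axis=0))  # per-column means, matching X1.mean()

# With axis=0 (and ddof=1 to match pandas) the two standardizations agree.
np_scaled = (X2 - X2.mean(axis=0)) / X2.std(axis=0, ddof=1)
assert np.allclose(df_scaled.to_numpy(), np_scaled)
```

Without axis=0, every column is shifted and scaled by the same two scalars, which is why the question's two models disagree only in their coefficients and intercept, not in their predictions.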
Regarding python - Putting data into a DataFrame gives different results in a SciKit Learn algorithm, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/54947988/