I am trying to write two scripts that demonstrate locally weighted linear regression. In the first script I use NumPy to solve the matrix problem, as shown below:
import numpy as np
import matplotlib.pyplot as plt
from numpy import *

trX = np.linspace(0, 1, 100)
trY = trX + np.random.normal(0, 1, 100)
xArr = []
yArr = []
for i in range(len(trX)):
    xArr.append([1.0, float(trX[i])])
    yArr.append(float(trY[i]))
xMat = mat(xArr)
yMat = mat(yArr).T
m = shape(xMat)[0]
weights = mat(eye((m)))
k = 0.01
yHat = zeros(m)
for i in range(m):
    for j in range(m):
        diffMat = xMat[i] - xMat[j, :]
        weights[j, j] = exp(diffMat * diffMat.T / (-2.0 * k**2))
    xTx = xMat.T * (weights * xMat)
    if linalg.det(xTx) == 0.0:
        print("This matrix is singular, cannot do inverse")
        continue
    ws = xTx.I * (xMat.T * (weights * yMat))
    yHat[i] = xArr[i] * ws
plt.scatter(trX, trY)
plt.plot(trX, yHat, 'r')
plt.show()
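For comparison, the same per-point fit can be sketched with plain NumPy arrays and np.linalg.solve instead of an explicit inverse (a minimal sketch under the same data setup; the seed is added only to make the run reproducible, and is not part of the original script):

```python
import numpy as np

np.random.seed(0)  # assumption for reproducibility only
trX = np.linspace(0, 1, 100)
trY = trX + np.random.normal(0, 1, 100)

X = np.column_stack([np.ones_like(trX), trX])  # design matrix rows [1, x]
k = 0.01
yHat = np.zeros(len(trX))

for i in range(len(trX)):
    # Gaussian kernel weights centered on the query point trX[i]
    d2 = np.sum((X - X[i]) ** 2, axis=1)
    w = np.exp(d2 / (-2.0 * k**2))
    W = np.diag(w)
    # Weighted normal equations: (X^T W X) ws = X^T W y
    xTx = X.T @ W @ X
    ws = np.linalg.solve(xTx, X.T @ W @ trY)
    yHat[i] = X[i] @ ws  # prediction at the query point
```

Using solve rather than forming the inverse avoids an explicit matrix inversion, which is the usual recommendation when the inverse itself is not needed.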
In the second script, I use TensorFlow to solve the matrix problem. That script looks like this:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from numpy import *

trX = np.linspace(0, 1, 100)
trY = trX + np.random.normal(0, 1, 100)
sess = tf.Session()
xArr = []
yArr = []
for i in range(len(trX)):
    xArr.append([1.0, float(trX[i])])
    yArr.append(float(trY[i]))
xMat = mat(xArr)
yMat = mat(yArr).T
A_tensor = tf.constant(xMat)
b_tensor = tf.constant(yMat)
m = shape(xMat)[0]
weights = mat(eye((m)))
k = 0.01
yHat = zeros(m)
for i in range(m):
    for j in range(m):
        diffMat = xMat[i] - xMat[j, :]
        weights[j, j] = exp(diffMat * diffMat.T / (-2.0 * k**2))
    weights_tensor = tf.constant(weights)
    # Matrix inverse solution
    wA = tf.matmul(weights_tensor, A_tensor)
    tA_A = tf.matmul(tf.transpose(A_tensor), wA)
    tA_A_inv = tf.matrix_inverse(tA_A)
    wb = tf.matmul(weights_tensor, b_tensor)
    tA_wb = tf.matmul(tf.transpose(A_tensor), wb)
    solution = tf.matmul(tA_A_inv, tA_wb)
    sol_val = sess.run(solution)
    yHat[i] = sol_val[0][0] * xArr[i][1] + sol_val[1][0]
plt.scatter(trX, trY)
plt.plot(trX, yHat, 'r')
plt.show()
When I run them, the results differ. What is the difference between the two results? Or is there perhaps something wrong in my scripts? Please help me.
Best Answer
The problem is this line:

yHat[i] = sol_val[0][0] * xArr[i][1] + sol_val[1][0]

The combination of the coefficients is wrong. Each row xArr[i] is [1.0, trX[i]], and sol_val holds the coefficients for those columns in order, so sol_val[0][0] belongs to the constant 1.0 and sol_val[1][0] belongs to x. The line above multiplies the intercept by x instead. Replace it with

yHat[i] = sol_val[0][0] * xArr[i][0] + sol_val[1][0] * xArr[i][1]

and the script works correctly.
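The corrected line is simply the dot product of the feature row [1, x] with the 2x1 solution vector, written out term by term. A quick check with hypothetical values (the intercept 0.5, slope 2.0, and x = 0.3 below are made up purely for illustration):

```python
import numpy as np

sol_val = np.array([[0.5], [2.0]])  # hypothetical [intercept; slope]
x_row = [1.0, 0.3]                  # xArr[i] = [1.0, trX[i]]

# Wrong: multiplies the intercept by x and uses the slope as a constant
wrong = sol_val[0][0] * x_row[1] + sol_val[1][0]
# Right: intercept * 1.0 + slope * x, i.e. the dot product
right = sol_val[0][0] * x_row[0] + sol_val[1][0] * x_row[1]

# The corrected form matches the matrix product used in the NumPy script
assert abs(right - float(np.array(x_row) @ sol_val)) < 1e-12
```

With these values the wrong form gives 0.5 * 0.3 + 2.0 = 2.15, while the correct form gives 0.5 + 2.0 * 0.3 = 1.1, which is why the two scripts produced different curves.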
The complete working code is as follows:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from numpy import *

trX = np.linspace(0, 1, 100)
trY = trX + np.random.normal(0, 1, 100)
sess = tf.Session()
xArr = []
yArr = []
for i in range(len(trX)):
    xArr.append([1.0, float(trX[i])])
    yArr.append(float(trY[i]))
xMat = mat(xArr)
yMat = mat(yArr).T
A_tensor = tf.constant(xMat)
b_tensor = tf.constant(yMat)
m = shape(xMat)[0]
weights = mat(eye((m)))
k = 0.01
yHat = zeros(m)
for i in range(m):
    for j in range(m):
        diffMat = xMat[i] - xMat[j, :]
        weights[j, j] = exp(diffMat * diffMat.T / (-2.0 * k**2))
    weights_tensor = tf.constant(weights)
    # Matrix inverse solution
    wA = tf.matmul(weights_tensor, A_tensor)
    tA_A = tf.matmul(tf.transpose(A_tensor), wA)
    tA_A_inv = tf.matrix_inverse(tA_A)
    wb = tf.matmul(weights_tensor, b_tensor)
    tA_wb = tf.matmul(tf.transpose(A_tensor), wb)
    solution = tf.matmul(tA_A_inv, tA_wb)
    sol_val = sess.run(solution)
    #yHat[i] = sol_val[0][0] * xArr[i][1] + sol_val[1][0]   # original, wrong
    yHat[i] = sol_val[0][0] * xArr[i][0] + sol_val[1][0] * xArr[i][1]
plt.scatter(trX, trY)
plt.plot(trX, yHat, 'r')
plt.show()
The resulting plot is shown in the figure below:
Regarding "python - Difference between Numpy and TensorFlow", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/56014361/