python - How to ensure the solution is the global minimum when using python scipy.optimize.minimize

Tags: python numpy machine-learning scipy logistic-regression

I am implementing logistic regression in Python. To find theta, I have been trying to determine which algorithm always guarantees the global optimum, regardless of the initial value of theta.

import numpy as np
import scipy.optimize as op

def Sigmoid(z):
    return 1 / (1 + np.exp(-z))

def Gradient(theta, x, y):
    # Gradient of the cross-entropy cost: x^T (sigmoid(x theta) - y) / m
    m, n = x.shape
    theta = theta.reshape((n, 1))
    y = y.reshape((m, 1))
    sigmoid_x_theta = Sigmoid(x.dot(theta))
    grad = (x.T).dot(sigmoid_x_theta - y) / m
    return grad.flatten()

def CostFunc(theta, x, y):
    # Average cross-entropy cost for logistic regression
    m, n = x.shape
    theta = theta.reshape((n, 1))
    y = y.reshape((m, 1))
    term1 = np.log(Sigmoid(x.dot(theta)))
    term2 = np.log(1 - Sigmoid(x.dot(theta)))
    term = y * term1 + (1 - y) * term2
    return -np.sum(term) / m

data = np.loadtxt('ex2data1.txt', delimiter=',')

# m training samples; the last column is the label, the rest are features
m, n = data.shape
X = data[:, 0:n - 1]
y = data[:, n - 1:]
X = np.concatenate((np.ones((m, 1)), X), axis=1)  # prepend intercept column
m, n = X.shape
initial_theta = np.zeros((n, 1))

Result = op.minimize(fun=CostFunc,
                     x0=initial_theta,
                     args=(X, y),
                     method='TNC',
                     jac=Gradient)
theta = Result.x

where the contents of ex2data1.txt are:

34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
60.18259938620976,86.30855209546826,1
79.0327360507101,75.3443764369103,1
45.08327747668339,56.3163717815305,0
61.10666453684766,96.51142588489624,1
75.02474556738889,46.55401354116538,1
76.09878670226257,87.42056971926803,1
84.43281996120035,43.53339331072109,1
95.86155507093572,38.22527805795094,0
75.01365838958247,30.60326323428011,0
82.30705337399482,76.48196330235604,1
69.36458875970939,97.71869196188608,1
39.53833914367223,76.03681085115882,0
53.9710521485623,89.20735013750205,1
69.07014406283025,52.74046973016765,1
67.94685547711617,46.67857410673128,0
70.66150955499435,92.92713789364831,1
76.97878372747498,47.57596364975532,1
67.37202754570876,42.83843832029179,0
89.67677575072079,65.79936592745237,1
50.534788289883,48.85581152764205,0
34.21206097786789,44.20952859866288,0
77.9240914545704,68.9723599933059,1
62.27101367004632,69.95445795447587,1
80.1901807509566,44.82162893218353,1
93.114388797442,38.80067033713209,0
61.83020602312595,50.25610789244621,0
38.78580379679423,64.99568095539578,0
61.379289447425,72.80788731317097,1
85.40451939411645,57.05198397627122,1
52.10797973193984,63.12762376881715,0
52.04540476831827,69.43286012045222,1
40.23689373545111,71.16774802184875,0
54.63510555424817,52.21388588061123,0
33.91550010906887,98.86943574220611,0
64.17698887494485,80.90806058670817,1
74.78925295941542,41.57341522824434,0
34.1836400264419,75.2377203360134,0
83.90239366249155,56.30804621605327,1
51.54772026906181,46.85629026349976,0
94.44336776917852,65.56892160559052,1
82.36875375713919,40.61825515970618,0
51.04775177128865,45.82270145776001,0
62.22267576120188,52.06099194836679,0
77.19303492601364,70.45820000180959,1
97.77159928000232,86.7278223300282,1
62.07306379667647,96.76882412413983,1
91.56497449807442,88.69629254546599,1
79.94481794066932,74.16311935043758,1
99.2725269292572,60.99903099844988,1
90.54671411399852,43.39060180650027,1
34.52451385320009,60.39634245837173,0
50.2864961189907,49.80453881323059,0
49.58667721632031,59.80895099453265,0
97.64563396007767,68.86157272420604,1
32.57720016809309,95.59854761387875,0
74.24869136721598,69.82457122657193,1
71.79646205863379,78.45356224515052,1
75.3956114656803,85.75993667331619,1
35.28611281526193,47.02051394723416,0
56.25381749711624,39.26147251058019,0
30.05882244669796,49.59297386723685,0
44.66826172480893,66.45008614558913,0
66.56089447242954,41.09209807936973,0
40.45755098375164,97.53518548909936,1
49.07256321908844,51.88321182073966,0
80.27957401466998,92.11606081344084,1
66.74671856944039,60.99139402740988,1
32.72283304060323,43.30717306430063,0
64.0393204150601,78.03168802018232,1
72.34649422579923,96.22759296761404,1
60.45788573918959,73.09499809758037,1
58.84095621726802,75.85844831279042,1
99.82785779692128,72.36925193383885,1
47.26426910848174,88.47586499559782,1
50.45815980285988,75.80985952982456,1
60.45555629271532,42.50840943572217,0
82.22666157785568,42.71987853716458,0
88.9138964166533,69.80378889835472,1
94.83450672430196,45.69430680250754,1
67.31925746917527,66.58935317747915,1
57.23870631569862,59.51428198012956,1
80.36675600171273,90.96014789746954,1
68.46852178591112,85.59430710452014,1
42.0754545384731,78.84478600148043,0
75.47770200533905,90.42453899753964,1
78.63542434898018,96.64742716885644,1
52.34800398794107,60.76950525602592,0
94.09433112516793,77.15910509073893,1
90.44855097096364,87.50879176484702,1
55.48216114069585,35.57070347228866,0
74.49269241843041,84.84513684930135,1
89.84580670720979,45.35828361091658,1
83.48916274498238,48.38028579728175,1
42.2617008099817,87.10385094025457,1
99.31500880510394,68.77540947206617,1
55.34001756003703,64.9319380069486,1
74.77589300092767,89.52981289513276,1

The above code gives theta = Result.x as [-25.87282405 0.21193078 0.20722013]. This is the global minimum when initial_theta = np.zeros((n,1)), but with initial_theta = np.ones((n,1)) it goes wrong. So the result depends on the initial value of theta. Is there any way to automate the choice of initial value so that this problem is avoided?

I also tried the 'BFGS' method instead of 'TNC' in the minimize call, as shown below, and got a RuntimeWarning.

initial_theta = np.zeros((n, 1))
result = op.minimize(fun=CostFunc,
                     x0=initial_theta,
                     args=(X, y),
                     method='BFGS',
                     jac=Gradient)
optimal_theta = result.x
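
One likely source of the RuntimeWarning is overflow in np.exp and log(0) inside CostFunc when the iterates make x.dot(theta) large in magnitude. A minimal sketch of a numerically stable variant (a hypothetical replacement, not part of the original code), using the identity log(Sigmoid(z)) = -logaddexp(0, -z):

import numpy as np

def CostFuncStable(theta, x, y):
    # Stable cross-entropy: log(sigmoid(z)) = -logaddexp(0, -z) and
    # log(1 - sigmoid(z)) = -logaddexp(0, z), so exp never overflows
    # and log never receives an exact zero.
    m, n = x.shape
    theta = theta.reshape((n, 1))
    y = y.reshape((m, 1))
    z = x.dot(theta)
    term = y * (-np.logaddexp(0, -z)) + (1 - y) * (-np.logaddexp(0, z))
    return -np.sum(term) / m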

I have called the above function many times with different values of initial_theta, and I found that BFGS converges to a local minimum most of the time. When I called BFGS with

initial_theta = np.array([-25, 0.2, 0.2])

which is close to the global minimum, it did converge. So it appears that TNC is better than BFGS, since with the same initial_theta in both cases, TNC converged to the global minimum while BFGS converged to a local one. So:

  1. Is this true in all cases, or does it depend on the particular problem?
  2. Which is better, BFGS or TNC?
  3. Is there any difference between scipy.optimize.fmin_bfgs and scipy.optimize.minimize with method='BFGS', or are they the same?

Any help or insight would be appreciated. Thanks.

Best Answer

There is no practical algorithm that is guaranteed to find a global optimum. However, there are heuristics such as DIRECT (see e.g. here) that work very well in practice within given bounds. These can be used to find a good initialization for an algorithm that then locates the local optimum near that initialization, and therefore works more efficiently.
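
A sketch of that two-stage idea applied to the code in the question (assuming SciPy >= 1.8, which provides scipy.optimize.direct; the bounds are a hypothetical search box for theta, and the CostFuncStable sketch above is used so the corners of the box do not overflow):

import numpy as np
from scipy.optimize import direct, minimize

# Hypothetical finite bounds on theta; DIRECT requires a bounded box.
bounds = [(-50.0, 50.0), (-5.0, 5.0), (-5.0, 5.0)]

# Coarse global search with DIRECT, then local refinement with TNC
# starting from the point DIRECT found.
rough = direct(CostFuncStable, bounds, args=(X, y))
refined = minimize(CostFuncStable, rough.x, args=(X, y),
                   method='TNC', jac=Gradient)
print(refined.x, refined.fun)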

  1. However, logistic regression is a convex optimization problem. That means the objective (error) function has only one minimum, i.e. every local minimum is also the global minimum. You can therefore use any local optimizer (gradient descent, L-BFGS, conjugate gradient, ...). The only catch is that, because of the nonlinear logistic function, the minimum cannot be computed in closed form. The related problem of linear regression has no logistic function; there, the global minimum of the error function can be computed directly, without any sophisticated optimization algorithm. (A short multi-start sketch illustrating the convexity point follows after this list.)

  2. A comparison of optimizers for logistic regression can be found on Fabian Pedregosa's blog. My first guess is that there is an error in your gradient computation. Perhaps you should compare it against a numerical approximation of the gradient with scipy.optimize.check_grad (see the sketch after this list).

  3. scipy.optimize.minimize with method='BFGS' and scipy.optimize.fmin_bfgs share the same underlying BFGS implementation; minimize is just the newer unified interface.
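
To illustrate the convexity point in item 1, a small multi-start sketch, with starting points kept near zero so the unmodified CostFunc does not overflow. Every run should end at (approximately) the same theta and cost:

import numpy as np
import scipy.optimize as op

# Convex objective: different random starts should all converge
# to the same minimum.
rng = np.random.default_rng(0)
for _ in range(3):
    theta0 = rng.normal(scale=0.1, size=X.shape[1])
    res = op.minimize(CostFunc, theta0, args=(X, y),
                      method='L-BFGS-B', jac=Gradient)
    print(res.x, res.fun)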
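
And a sketch of the gradient check suggested in item 2, comparing the analytic Gradient against a finite-difference approximation with scipy.optimize.check_grad:

import numpy as np
from scipy.optimize import check_grad

# check_grad returns the 2-norm of the difference between the analytic
# gradient and a finite-difference estimate; it should be near zero.
rng = np.random.default_rng(1)
for _ in range(5):
    theta0 = rng.normal(scale=0.1, size=X.shape[1])
    print(check_grad(CostFunc, Gradient, theta0, X, y))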

The original question, python - How to ensure the solution is the global minimum when using python scipy.optimize.minimize, can be found on Stack Overflow: https://stackoverflow.com/questions/21953366/
