I'd like to know what the difference is between the two methods scipy.optimize.leastsq and scipy.optimize.least_squares.
When I run both, they produce a small difference in chi^2:
>>> solution0 = (p0.fun).reshape(100, 100)
>>> # p0.fun holds the residuals of my fit function, np.ravel'ed, as returned by least_squares
>>> print(np.sum(np.square(solution0)))
0.542899505806
>>> solution1 = np.square(median - fit1)
>>> # fit1 is the solution found by leastsq; it does not return residuals, so I have to subtract it from the median to get them (my special case)
>>> print(np.sum(solution1))
0.54402852325
Can anyone expand on this, or point me to where I can find alternative documentation? The scipy docs are a bit cryptic.
Best answer
Judging from the documentation of least_squares, it appears that leastsq is an older wrapper:
See also
leastsq A legacy wrapper for the MINPACK implementation of the Levenberg-Marquardt algorithm.
So you should just use least_squares, which also seems to offer extra functionality. Most importantly, the default "method" (i.e. algorithm) used differs between the two:

trf : Trust Region Reflective algorithm, particularly suitable for large sparse problems with bounds. Generally robust method.
dogbox : dogleg algorithm with rectangular trust regions, typical use case is small problems with bounds. Not recommended for problems with rank-deficient Jacobian.
lm : Levenberg-Marquardt algorithm as implemented in MINPACK. Doesn't handle bounds and sparse Jacobians. Usually the most efficient method for small unconstrained problems.

Default is trf. See Notes for more information.
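The practical consequence of the method differences above can be sketched with a hypothetical one-parameter problem: the default trf method honors bound constraints, while lm rejects them outright.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical problem: minimize (p - 3)^2 via the residual p - 3,
# but constrain p to the box (-inf, 2].
def residual(p):
    return p - 3.0

# 'trf' (the default) honors bounds: the solution ends up at the upper bound.
res_trf = least_squares(residual, x0=[0.0], bounds=(-np.inf, 2.0))
print(res_trf.x)  # close to 2.0

# 'lm' cannot handle bounds; scipy raises a ValueError.
try:
    least_squares(residual, x0=[0.0], bounds=(-np.inf, 2.0), method='lm')
except ValueError as e:
    print('lm rejected bounds:', e)
```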
The old leastsq algorithm was just a wrapper around the lm method, which, as the docs say, is only suitable for small unconstrained problems.
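Since both go through the same MINPACK core, leastsq and least_squares with method='lm' should return essentially identical results. A small sketch on a hypothetical linear fit:

```python
import numpy as np
from scipy.optimize import leastsq, least_squares

# Hypothetical data for a linear fit y = m*x + c.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])

def residuals(p):
    m, c = p
    return m * x + c - y

# Legacy interface vs. the new interface forced onto the same algorithm.
p_leastsq, _ = leastsq(residuals, [1.0, 0.0])
res_lm = least_squares(residuals, [1.0, 0.0], method='lm')

print(p_leastsq, res_lm.x)  # essentially identical parameter estimates
```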
The difference you see in your results is probably due to the different algorithms employed.
This Q&A on "python - Difference between scipy.leastsq and scipy.least_squares" is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/40784315/