So I read the scikit-learn package webpage:
I can use logistic regression to fit the data, and after obtaining a LogisticRegression instance I can use it to classify new data points. So far, so good.
Is there a way to set the coefficients of a LogisticRegression() instance? Once I have the trained coefficients, I would like to use the same API to classify new data points.
Or can anyone recommend another Python machine-learning package with a better API?
Thanks
Best Answer
The coefficients are attributes of the estimator object (the one you create when you instantiate the LogisticRegression class), so you can access them in the normal Python way:
>>> import numpy as NP
>>> from sklearn import datasets
>>> from sklearn.linear_model import LogisticRegression as LR
>>> digits = datasets.load_digits()
>>> D = digits.data
>>> T = digits.target
>>> # instantiate an estimator instance (classifier) of the LogisticRegression class
>>> clf = LR()
>>> # train the classifier
>>> clf.fit( D[:-1], T[:-1] )
LogisticRegression(C=1.0, dual=False, fit_intercept=True,
intercept_scaling=1, penalty='l2', tol=0.0001)
>>> # attributes are accessed in the normal python way
>>> dx = clf.__dict__
>>> dx.keys()
['loss', 'C', 'dual', 'fit_intercept', 'class_weight_label', 'label_',
'penalty', 'multi_class', 'raw_coef_', 'tol', 'class_weight',
'intercept_scaling']
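Since the question asks about setting coefficients: `predict` only relies on the fitted attributes `coef_`, `intercept_`, and `classes_`, so you can install previously trained coefficients on a fresh instance by hand. A minimal sketch, where the "source" model below is just a stand-in for wherever your trained coefficients actually came from:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Train a model once, only to obtain some coefficients to transfer.
digits = load_digits()
src = LogisticRegression(max_iter=5000).fit(digits.data, digits.target)

# Build a fresh, unfitted estimator and set the trained attributes by hand.
clf = LogisticRegression()
clf.coef_ = src.coef_.copy()
clf.intercept_ = src.intercept_.copy()
clf.classes_ = src.classes_.copy()

# The manually configured estimator predicts through the same API.
same = np.array_equal(clf.predict(digits.data), src.predict(digits.data))
print(same)  # True
```

This works because scikit-learn's fitted-model check only looks for attributes ending in an underscore; persisting the whole estimator with `joblib` or `pickle` is the more conventional route.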
That is how to get the coefficients, but if you just want to use those coefficients to make predictions, the more direct route is the estimator's predict method:
>>> # instantiate the L/R classifier, passing in the norm used for the penalty term
>>> # and the regularization strength (l1 requires a compatible solver, e.g. liblinear)
>>> clf = LR(C=.2, penalty='l1', solver='liblinear')
>>> # train the classifier
>>> clf.fit(D[:-1], T[:-1])
LogisticRegression(C=0.2, penalty='l1', solver='liblinear')
>>> # select some "test" instances from the original data
>>> # [of course the model should not have been trained on these instances]
>>> test = NP.random.randint(0, 151, 5)
>>> d = D[test, :]  # randomly selected data points, without class labels
>>> t = T[test]     # the class labels that correspond to the points in d
>>> # generate model predictions for these 5 data points
>>> v = clf.predict(d)
>>> v
array([0, 0, 2, 0, 2], dtype=int32)
>>> # how well did the model do?
>>> percent_correct = 100 * NP.sum(t == v) / t.shape[0]
>>> percent_correct
100.0
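For larger test sets, scikit-learn's `accuracy_score` helper computes the same fraction directly; the label arrays below are made up purely for illustration, not the `t` and `v` above:

```python
import numpy as np
from sklearn.metrics import accuracy_score

t = np.array([0, 0, 2, 0, 2])  # true class labels (illustrative)
v = np.array([0, 0, 2, 0, 1])  # predicted class labels (illustrative)

# Fraction of matching labels, scaled to a percentage: 4 of 5 correct.
pct = 100 * accuracy_score(t, v)
print(pct)  # 80.0
```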
Regarding python - sklearn (scikit-learn) logistic regression package - setting trained coefficients for classification, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/8539141/