I am trying to implement my own Imputer. Under certain conditions, I want to filter out some of the training samples (which I consider to be of low quality).
However, I can't seem to find a way to do this, because: the transform method returns only X and not y; y itself is a numpy array (which, as far as I know, I cannot filter in place); and, when I use GridSearchCV, the y my transform method receives is None.
To clarify: I know perfectly well how to filter arrays. What I can't find is a way to fit sample filtering of the y vector into the current API.
I would really like to do this via a BaseEstimator implementation so that I can use it with GridSearchCV (it has several parameters). Am I missing a different way to achieve sample filtering (not via BaseEstimator, but still GridSearchCV-compatible)? Is there any workaround for the current API?
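To make the limitation concrete, here is a minimal sketch (not the asker's code; `NaiveSampleFilter` is a hypothetical name): a transformer whose transform drops rows of X has no standard hook for dropping the matching rows of y, so a downstream estimator would see mismatched lengths.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class NaiveSampleFilter(BaseEstimator, TransformerMixin):
    """Hypothetical transformer that drops 'bad' samples from X only."""
    def fit(self, X, y=None):
        # Keep only rows with a positive sum (stand-in quality criterion)
        self.mask_ = X.sum(axis=1) > 0
        return self

    def transform(self, X):
        # X shrinks, but there is no way to shrink y alongside it
        return X[self.mask_]

X = np.array([[1.0], [0.0], [2.0]])
y = np.array([0, 1, 0])
Xt = NaiveSampleFilter().fit(X, y).transform(X)
# Xt now has 2 rows while y still has 3, so a downstream fit(Xt, y) breaks
```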
Best answer
I found a solution, which has three parts:
- Have an `if idx == id(self.X):` line in transform. This ensures samples are filtered only on the training set.
- Override fit_transform so that the transform method receives y rather than None.
- Override Pipeline so that transform is allowed to return that y.
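The need for the second point can be checked with a tiny probe (a sketch; `Probe` is a made-up name): sklearn's default TransformerMixin.fit_transform calls `self.transform(X)` without forwarding y, so a transformer that wants y during training must override it.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class Probe(BaseEstimator, TransformerMixin):
    """Hypothetical transformer that records what y its transform receives."""
    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        self.seen_y = y  # record the y actually passed in
        return X

p = Probe()
p.fit_transform(np.zeros((3, 2)), np.array([0, 1, 0]))
# p.seen_y is None: the default fit_transform never forwards y to transform
```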
Below is sample code demonstrating it. It probably doesn't cover every small detail, but I think it addresses the main API issue.
from sklearn.base import BaseEstimator, TransformerMixin
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import GaussianNB
from sklearn import cross_validation
from sklearn.grid_search import GridSearchCV
from sklearn.externals import six

class SampleAndFeatureFilter(BaseEstimator, TransformerMixin):
    def __init__(self, perc=None):
        self.perc = perc

    def fit(self, X, y=None):
        # Remember the training array so transform can tell train from test
        self.X = X
        sum_per_feature = X.sum(0)
        sum_per_sample = X.sum(1)
        self.featurefilter = sum_per_feature >= np.percentile(sum_per_feature, self.perc)
        self.samplefilter = sum_per_sample >= np.percentile(sum_per_sample, self.perc)
        return self

    def transform(self, X, y=None, copy=None):
        idx = id(X)
        X = X[:, self.featurefilter]
        if idx == id(self.X):
            # Only the exact object seen in fit (the training set) gets its
            # samples filtered; y, if given, is filtered alongside X
            X = X[self.samplefilter, :]
            if y is not None:
                y = y[self.samplefilter]
            return X, y
        return X

    def fit_transform(self, X, y=None, **fit_params):
        # Override the default so transform receives y rather than None
        if y is None:
            return self.fit(X, **fit_params).transform(X)
        return self.fit(X, y, **fit_params).transform(X, y)

class PipelineWithSampleFiltering(Pipeline):
    def fit_transform(self, X, y=None, **fit_params):
        Xt, yt, fit_params = self._pre_transform(X, y, **fit_params)
        if hasattr(self.steps[-1][-1], 'fit_transform'):
            return self.steps[-1][-1].fit_transform(Xt, yt, **fit_params)
        else:
            return self.steps[-1][-1].fit(Xt, yt, **fit_params).transform(Xt)

    def fit(self, X, y=None, **fit_params):
        Xt, yt, fit_params = self._pre_transform(X, y, **fit_params)
        self.steps[-1][-1].fit(Xt, yt, **fit_params)
        return self

    def _pre_transform(self, X, y=None, **fit_params):
        # Split 'step__param' fit parameters by step name
        fit_params_steps = dict((step, {}) for step, _ in self.steps)
        for pname, pval in six.iteritems(fit_params):
            step, param = pname.split('__', 1)
            fit_params_steps[step][param] = pval
        Xt = X
        yt = y
        for name, transform in self.steps[:-1]:
            if hasattr(transform, "fit_transform"):
                res = transform.fit_transform(Xt, yt, **fit_params_steps[name])
                if isinstance(res, tuple) and len(res) == 2:
                    # The transformer filtered samples and returned y as well
                    Xt, yt = res
                else:
                    Xt = res
            else:
                Xt = transform.fit(Xt, yt, **fit_params_steps[name]) \
                              .transform(Xt)
        return Xt, yt, fit_params_steps[self.steps[-1][0]]

if __name__ == '__main__':
    X = np.random.random((100, 30))
    y = np.random.randint(0, 2, 100)
    pipe = PipelineWithSampleFiltering([('flt', SampleAndFeatureFilter()), ('cls', GaussianNB())])
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3, random_state=42)
    kfold = cross_validation.KFold(len(y_train), 10)
    clf = GridSearchCV(pipe, cv=kfold, param_grid={'flt__perc': [10, 20, 30, 40, 50, 60, 70, 80]}, n_jobs=1)
    clf.fit(X_train, y_train)
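Note that the `id()`-based train/test distinction in the first point relies on the training array being passed around by reference. A quick standalone check of that assumption (hypothetical variable names):

```python
import numpy as np

X_train = np.random.random((5, 3))
X_test = np.random.random((2, 3))

stored = X_train  # what fit() keeps in self.X
assert id(stored) == id(X_train)         # same object -> sample filtering applies
assert id(stored) != id(X_test)          # different object -> features only
assert id(stored) != id(X_train.copy())  # a copy would defeat the check
```

So the trick works as long as no step between fit and transform copies the training array; any copy (or a fresh slice) would silently disable sample filtering.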
Regarding python - sklearn: Have an estimator that filters samples, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/24896178/