I am tokenizing text with CountVectorizer and I want to add my own stop words. Why doesn't this work? The word "de" should not appear in the printed output.
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 1), stop_words=frozenset([u'de']))
word_tokenizer = vectorizer.build_tokenizer()
print(word_tokenizer(u'Isto é um teste de qualquer coisa.'))
# [u'Isto', u'um', u'teste', u'de', u'qualquer', u'coisa']
Best answer
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 1), stop_words=frozenset([u'de']))
vectorizer.fit([u'Isto é um teste de qualquer coisa.'])  # vocabulary_ only exists after fitting

In [7]: vectorizer.vocabulary_
Out[7]: {u'coisa': 0, u'isto': 1, u'qualquer': 2, u'teste': 3, u'um': 4}
You can see that u'de' is not in the computed vocabulary. The build_tokenizer method only splits your string into tokens; stop-word removal happens afterwards, later in the pipeline. From the source code of CountVectorizer:
def build_tokenizer(self):
    """Return a function that splits a string into a sequence of tokens"""
    if self.tokenizer is not None:
        return self.tokenizer
    token_pattern = re.compile(self.token_pattern)
    return lambda doc: token_pattern.findall(doc)
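The contrast is easiest to see side by side: CountVectorizer also exposes a build_analyzer method that runs the full pipeline (preprocessing, tokenization, stop-word removal, n-gram generation), whereas build_tokenizer only splits the string. A minimal sketch using the same sentence:

```python
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 1), stop_words=frozenset(['de']))

# The tokenizer only applies the token pattern -- 'de' survives:
tokenize = vectorizer.build_tokenizer()
print(tokenize('Isto é um teste de qualquer coisa.'))
# ['Isto', 'um', 'teste', 'de', 'qualquer', 'coisa']

# The analyzer lowercases, tokenizes, and removes stop words -- 'de' is gone:
analyze = vectorizer.build_analyzer()
print(analyze('Isto é um teste de qualquer coisa.'))
# ['isto', 'um', 'teste', 'qualquer', 'coisa']
```

So if you only want the filtered tokens without fitting a vocabulary, build_analyzer is the method to call.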
A solution to your problem could be:
vectorizer = CountVectorizer(ngram_range=(1, 1), stop_words=frozenset([u'de']))
sentence = [u'Isto é um teste de qualquer coisa.']
tokenized = vectorizer.fit_transform(sentence)
result = vectorizer.inverse_transform(tokenized)

In [12]: result
Out[12]:
[array([u'isto', u'um', u'teste', u'qualquer', u'coisa'],
       dtype='<U8')]
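As a quick sanity check on the same data, the fitted vocabulary and the document-term matrix both confirm that 'de' was dropped (a minimal sketch):

```python
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 1), stop_words=frozenset(['de']))
X = vectorizer.fit_transform(['Isto é um teste de qualquer coisa.'])

# The learned vocabulary -- the stop word 'de' is absent:
print(sorted(vectorizer.vocabulary_))  # ['coisa', 'isto', 'qualquer', 'teste', 'um']

# X is a sparse 1 x 5 document-term count matrix; each remaining word occurs once:
print(X.toarray())  # [[1 1 1 1 1]]
```

Note that inverse_transform only tells you which features occur in each document; it does not in general preserve the original word order of the sentence.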
About "python - Why doesn't this work? Stop words in CountVectorizer": a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41701870/