I can get the code below to output each word and its frequency. However, I want to remove stop words using scikit-learn only; nltk does not work at my workplace. Does anyone have a suggestion for how to remove the stop words?
>>> import pandas as pd
>>> df = pd.DataFrame(['my big dog', 'my lazy cat'])
>>> df
             0
0   my big dog
1  my lazy cat
>>> value_list = [row[0] for row in df.itertuples(index=False, name=None)]
>>> value_list
['my big dog', 'my lazy cat']
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> cv = CountVectorizer()
>>> x_train = cv.fit_transform(value_list)
>>> x_train
<2x5 sparse matrix of type '<class 'numpy.int64'>'
    with 6 stored elements in Compressed Sparse Row format>
>>> x_train.toarray()
array([[1, 0, 1, 0, 1],
       [0, 1, 0, 1, 1]], dtype=int64)
>>> cv.vocabulary_
{'my': 4, 'big': 0, 'dog': 2, 'lazy': 3, 'cat': 1}
>>> x_train_sum = x_train.sum(axis=0)
>>> x_train_sum
matrix([[1, 1, 1, 1, 2]], dtype=int64)
>>> for word, col in cv.vocabulary_.items():
...     print('word:{:10s} | count:{:2d}'.format(word, x_train_sum[0, col]))
word:my         | count: 2
word:big        | count: 1
word:dog        | count: 1
word:lazy       | count: 1
word:cat        | count: 1
>>> with open('my-file.csv', 'w') as f:
...     for word, col in cv.vocabulary_.items():
...         f.write('{};{}\n'.format(word, x_train_sum[0, col]))
Best Answer
You can initialize CountVectorizer with a custom stop_words list. For example, adding my and big to stop_words leaves only cat, dog, and lazy in the vocabulary:
>>> stop_words = ['my', 'big']
>>> cv = CountVectorizer(stop_words=stop_words)
>>> x_train = cv.fit_transform(value_list)
>>> x_train.toarray()
array([[0, 1, 0],
       [1, 0, 1]], dtype=int64)
>>> cv.vocabulary_
{'cat': 0, 'dog': 1, 'lazy': 2}
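Alternatively, scikit-learn ships a built-in English stop-word list: passing `stop_words='english'` drops common words such as my without you maintaining a list, and the exported frozenset `ENGLISH_STOP_WORDS` can be extended with your own terms. A minimal sketch (not from the original answer):

```python
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS

value_list = ['my big dog', 'my lazy cat']

# Built-in list: 'my' is a standard English stop word and gets dropped.
cv = CountVectorizer(stop_words='english')
cv.fit_transform(value_list)
print(sorted(cv.vocabulary_))

# ENGLISH_STOP_WORDS is a frozenset, so it can be unioned with custom terms.
custom = list(ENGLISH_STOP_WORDS.union(['big']))
cv = CountVectorizer(stop_words=custom)
cv.fit_transform(value_list)
print(sorted(cv.vocabulary_))
```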
Regarding "python - How to eliminate stop words using only scikit-learn?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52712254/