This is the code I have at the moment. The CSV file I'm using has two columns: one contains the text, the other the number of the conversation it belongs to. I've managed to extract the distinct ngrams from the text, but I'd also like to get the conversations linked to each ngram. So if an ngram occurs x times, I want to see which conversations it appears in. How can I do that?
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

df = pd.read_csv("F:/textclustering/data/filteredtext1.csv", encoding="iso-8859-1", low_memory=False)
document = df['Data']

# Count all bigrams across the documents
vectorizer = CountVectorizer(ngram_range=(2, 2))
X = vectorizer.fit_transform(document)

# Total frequency of each bigram over the whole corpus
terms = vectorizer.get_feature_names()
freqs = X.sum(axis=0).A1
dictionary = dict(zip(terms, freqs))

df = pd.DataFrame(dictionary, index=[0]).T.reindex()
df.to_csv("F:/textclustering/data/terms2.csv", sep=',', na_rep="none")
Input CSV
text, id
example text is great, 1
this is great, 2
example text is great, 3
Desired output (or something close to it)
ngram, count, id
example text, 2, [1,3]
text is, 2, [1,3]
is great, 3, [1,2,3]
this is, 1, [1]
Best Answer
First, we transform the documents into a CSR sparse matrix and then convert it to a COO matrix. The COO format gives access to the row and column positions of the nonzero elements.
import numpy as np
from itertools import groupby
from sklearn.feature_extraction.text import CountVectorizer

ls = [['example text is great', 1],
      ['this is great', 2],
      ['example text is great', 3]]
document = [l[0] for l in ls]
vectorizer = CountVectorizer(ngram_range=(2, 2))
X = vectorizer.fit_transform(document)
X = X.tocoo()  # COO format exposes the .row and .col index arrays
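To see why the conversion matters, here is a tiny standalone illustration (the matrix below is made up for demonstration, not the answer's data): a COO matrix stores every nonzero as a (row, col, value) triple, which is exactly what lets us recover the document index for each bigram occurrence.

```python
from scipy.sparse import coo_matrix

m = coo_matrix([[1, 0, 2],
                [0, 3, 0]])
print(m.row)   # row indices of the nonzeros  -> [0 0 1]
print(m.col)   # column indices               -> [0 2 1]
print(m.data)  # the nonzero values           -> [1 2 3]
```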
Then you can group by column (one group per bigram you have). There is a small trick here: you must sort the tuples by column first. Then, for each group, you can replace the column index with the corresponding bigram; I use a dictionary called id2vocab for that.
output = []
id2vocab = dict((v, k) for k, v in vectorizer.vocabulary_.items())
zip_rc = sorted(zip(X.col, X.row), key=lambda x: x[0])  # sort so we can group by column (vocab index)
count = np.ravel(X.sum(axis=0))  # column sums give the total count of each bigram
for g in groupby(zip_rc, key=lambda x: x[0]):
    index = g[0]                      # vocabulary index of this bigram
    bigram = id2vocab[index]
    loc = [g_[1] for g_ in g[1]]      # row (document) indices where it occurs
    c = count[index]
    output.append([index, bigram, c, loc])
The output will look like this:
[[0, 'example text', 2, [0, 2]],
[1, 'is great', 3, [0, 1, 2]],
[2, 'text is', 2, [0, 2]],
[3, 'this is', 1, [1]]]
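Note that the location lists above hold row positions (0, 1, 2), not the conversation ids from the CSV. A minimal sketch of the remaining step, assuming the ids sit alongside the text as in the question's sample data (the column names `ngram`, `count`, `id` follow the desired output):

```python
import pandas as pd

# Sample data matching the question's input CSV.
ls = [['example text is great', 1],
      ['this is great', 2],
      ['example text is great', 3]]
ids = [l[1] for l in ls]

# `output` is the list built in the answer: [index, bigram, count, row_locs].
output = [[0, 'example text', 2, [0, 2]],
          [1, 'is great', 3, [0, 1, 2]],
          [2, 'text is', 2, [0, 2]],
          [3, 'this is', 1, [1]]]

# Map each row position back to its conversation id, then tidy into a frame.
rows = [[bigram, count, [ids[i] for i in locs]]
        for _, bigram, count, locs in output]
result = pd.DataFrame(rows, columns=['ngram', 'count', 'id'])
print(result)
```

This yields the desired shape from the question, e.g. `example text, 2, [1, 3]`.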
For the original question, "python - How to add an extra column in a dataframe after ngram count", see this similar question on Stack Overflow: https://stackoverflow.com/questions/42813305/