python - How to get the mean pairwise cosine similarity per group in Pandas

Tags: python pandas nlp

I have a sample dataframe as follows:

df=pd.DataFrame(np.array([['facebook', "women tennis"], ['facebook', "men basketball"], ['facebook', 'club'],['apple', "vice president"], ['apple', 'swimming contest']]),columns=['firm','text'])

       firm              text
0  facebook      women tennis
1  facebook    men basketball
2  facebook              club
3     apple    vice president
4     apple  swimming contest

Now I want to use word embeddings to compute the text similarity within each firm. For example, the mean cosine similarity for facebook would be the cosine similarity between rows 0, 1 and 2. The final dataframe should have a column ['mean_cos_between_items'] next to every row, one value per firm. The value is the same for all rows of a firm, because it is a pairwise comparison within the firm.

I wrote the code below:

import numpy as np
import pandas as pd
from itertools import combinations

import gensim
from gensim import utils
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
from sklearn.metrics.pairwise import cosine_similarity

# map each word to vector space
def represent(sentence):
    vectors = []
    for word in sentence:
        try:
            vector = model.wv[word]
            vectors.append(vector)
        except KeyError:
            pass
    return np.array(vectors).mean(axis=0)

# get the average if more than one word is included in the "text" column
def document_vector(items):
    # remove out-of-vocabulary words
    doc = [word for word in items if word in model_glove.vocab]
    if doc:
        doc_vector = model_glove[doc]
        mean_vec = np.mean(doc_vector, axis=0)
    else:
        mean_vec = None
    return mean_vec
    
# get the average pairwise cosine distance score
def mean_cos_sim(grp):
    output = []
    for i, j in combinations(grp.index.tolist(), 2):
        doc_vec = document_vector(grp.iloc[i]['text'])
        if doc_vec is not None and len(doc_vec) > 0:
            sim = cosine_similarity(document_vector(grp.iloc[i]['text']).reshape(1, -1),
                                    document_vector(grp.iloc[j]['text']).reshape(1, -1))
            output.append([i, j, sim])
        return np.mean(np.array(output), axis=0)

# save the result to a new column
df['mean_cos_between_items'] = df.groupby(['firm']).apply(mean_cos_sim)

However, I get the following error:

(screenshot of the error traceback)

Could you help? Thank you!

Best Answer

Note that sklearn.metrics.pairwise.cosine_similarity, when passed a single matrix X, automatically returns the pairwise similarities between all samples in X. That is, there is no need to construct the pairs manually.
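A quick way to see this (a tiny standalone sketch with made-up vectors, not the asker's data):

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# a single 3x2 matrix in, a 3x3 matrix out: the similarity of every row
# with every other row (and with itself on the diagonal)
sims = cosine_similarity(X)
print(sims.shape)  # (3, 3)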

Assuming you build the mean embeddings with something like the following (I am using glove-twitter-25 here),

def mean_embeddings(s):
    """Transfer a list of words into mean embedding"""
    return np.mean([model_glove.get_vector(x) for x in s], axis=0)

df["embeddings"] = df.text.str.split().apply(mean_embeddings)

The resulting df.embeddings is:

>>> df.embeddings
0    [-0.2597, -0.153495, -0.5106895, -1.070115, 0....
1    [0.0600965, 0.39806002, -0.45810497, -1.375365...
2    [-0.43819, 0.66232, 0.04611, -0.91103, 0.32231...
3    [0.1912625, 0.0066999793, -0.500785, -0.529915...
4    [-0.82556, 0.24555385, 0.38557374, -0.78941, 0...
Name: embeddings, dtype: object

You can then get the mean pairwise cosine similarity like this; the key point is that cosine_similarity can be applied directly to the readily prepared matrix of each group:

(
 df.groupby("firm").embeddings # extract 'embeddings' for each group
 .apply(np.stack) # turns sequence of arrays into proper matrix
 .apply(cosine_similarity) # the magic: compute pairwise similarity matrix
 .apply(np.mean) # get the mean
)

For the model I used, the result is:

firm
apple       0.765953
facebook    0.893262
Name: embeddings, dtype: float32
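If you also want this written back to every row as the mean_cos_between_items column the question asks for, the per-firm Series can be broadcast with a map on the firm column (a small sketch building on the snippet above; per_firm is just a name introduced here):

# per-firm mean similarity, exactly as computed above
per_firm = (
    df.groupby("firm").embeddings
      .apply(np.stack)
      .apply(cosine_similarity)
      .apply(np.mean)
)

# broadcast each firm's value back onto its rows
df["mean_cos_between_items"] = df["firm"].map(per_firm)

Note that np.mean here averages the full similarity matrix, including the diagonal of self-similarities (all 1.0); if you want the mean over distinct pairs only, mask out the diagonal before averaging.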

Regarding "python - How to get the mean pairwise cosine similarity per group in Pandas", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/71666450/
