python - Latent Semantic Analysis results

Tags: python, scikit-learn, svd, sklearn-pandas, lsa

I am following an LSA tutorial and swapped the example for a different list of strings, and I am not sure the code is working as intended.

When I use the example input given in the tutorial it produces sensible answers. However, when I use my own input I get very strange results.

For comparison, here are the results for the example input:

[screenshot: LSA results for the tutorial's example input]

Here is what I get when I use my own example. It is also worth noting that I don't seem to get consistent results:

[screenshots: LSA results for my own input, two separate runs]

Any help in figuring out why I am getting these results would be much appreciated :)

The code is as follows:

import sklearn
# Import all of the scikit learn stuff
from __future__ import print_function
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import Normalizer
from sklearn import metrics
from sklearn.cluster import KMeans, MiniBatchKMeans
import pandas as pd
import warnings
# Suppress warnings from pandas library
warnings.filterwarnings("ignore", category=DeprecationWarning,
module="pandas", lineno=570)
import numpy as np


example = ["Coffee brewed by expressing or forcing a small amount of 
nearly boiling water under pressure through finely ground coffee 
beans.", 
"An espresso-based coffee drink consisting of espresso with 
microfoam (steamed milk with small, fine bubbles with a glossy or 
velvety consistency)", 
"American fast-food dish, consisting of french fries covered in 
cheese with the possible addition of various other toppings", 
"Pounded and breaded chicken is topped with sweet honey, salty 
dill pickles, and vinegar-y iceberg slaw, then served upon crispy 
challah toast.", 
"A layered, flaky texture, similar to a puff pastry."]

'''
example = ["Machine learning is super fun",
"Python is super, super cool",
"Statistics is cool, too",
"Data science is fun",
"Python is great for machine learning",
"I like football",
"Football is great to watch"]
'''

vectorizer = CountVectorizer(min_df = 1, stop_words = 'english')
dtm = vectorizer.fit_transform(example)
pd.DataFrame(dtm.toarray(),index=example,columns=vectorizer.get_feature_names()).head(10)

# Get words that correspond to each column
vectorizer.get_feature_names()

# Fit LSA. Use algorithm = 'randomized' for large datasets
lsa = TruncatedSVD(2, algorithm = 'arpack')
dtm_lsa = lsa.fit_transform(dtm.astype(float))
dtm_lsa = Normalizer(copy=False).fit_transform(dtm_lsa)

pd.DataFrame(lsa.components_,index = ["component_1","component_2"],columns = vectorizer.get_feature_names())

pd.DataFrame(dtm_lsa, index = example, columns = ["component_1","component_2"])

xs = [w[0] for w in dtm_lsa]
ys = [w[1] for w in dtm_lsa]
xs, ys

# Plot scatter plot of points
%pylab inline
import matplotlib.pyplot as plt
figure()
plt.scatter(xs,ys)
xlabel('First principal component')
ylabel('Second principal component')
title('Plot of points against LSA principal components')
show()

#Plot scatter plot of points with vectors
%pylab inline
import matplotlib.pyplot as plt
plt.figure()
ax = plt.gca()
ax.quiver(0,0,xs,ys,angles='xy',scale_units='xy',scale=1, linewidth = .01)
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
xlabel('First principal component')
ylabel('Second principal component')
title('Plot of points against LSA principal components')
plt.draw()
plt.show()

# Compute document similarity using LSA components
similarity = np.asarray(np.asmatrix(dtm_lsa) *
                        np.asmatrix(dtm_lsa).T)
pd.DataFrame(similarity,index=example, columns=example).head(10)

Best Answer

The problem looks like it is due to the small number of examples you are using combined with the normalisation step. Because TruncatedSVD maps your count vectors onto a lot of very small numbers and one relatively large number, you get some strange behaviour when you normalise them. You can see this both in the raw numbers and in a scatter plot of the data.
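
For the raw numbers, a minimal sketch (reusing lsa and dtm from the code in the question) is to print the unnormalised two-component output directly:

import numpy as np

# Raw 2-D LSA coordinates, before the Normalizer step
lsa_raw = lsa.fit_transform(dtm.astype(float))
print(np.round(lsa_raw, 3))              # many small values and a few much larger ones, per the explanation above
print(lsa.explained_variance_ratio_)     # share of variance captured by each of the two components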

dtm_lsa = lsa.fit_transform(dtm.astype(float))
fig, ax = plt.subplots()
for i in range(dtm_lsa.shape[0]):
    ax.scatter(dtm_lsa[i, 0], dtm_lsa[i, 1], label=f'{i+1}')
ax.legend()

[scatter plot: LSA output, not normalised]

I would say that this plot is representative of your data, with the two coffee examples sitting out to the right (it is hard to say much more with such a small number of examples). However, when you normalise the data

dtm_lsa = lsa.fit_transform(dtm.astype(float))
dtm_lsa = Normalizer(copy=False).fit_transform(dtm_lsa)
fig, ax = plt.subplots()
for i in range(dtm_lsa.shape[0]):
    ax.scatter(dtm_lsa[i, 0], dtm_lsa[i, 1], label=f'{i+1}')
ax.legend()

[scatter plot: LSA output, normalised]

it pushes some of the points on top of each other, which gives you similarities of 1. The problem will almost certainly go away with more variety, i.e. as more new samples are added.
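
To make explicit what the Normalizer-plus-dot-product similarity computes, here is a minimal sketch (reusing lsa and dtm from the snippets above, with sklearn's cosine_similarity): normalising the rows and taking dot products is just the cosine of the angle between the 2-D document vectors, so documents whose vectors point in nearly the same direction end up with a similarity of ~1 no matter how different their lengths are.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import Normalizer

lsa_raw = lsa.fit_transform(dtm.astype(float))   # unnormalised 2-D coordinates
sim_cosine = cosine_similarity(lsa_raw)          # cosine of the angle between document vectors

norm_rows = Normalizer().fit_transform(lsa_raw)  # unit-length rows (copy=True by default)
sim_dot = norm_rows @ norm_rows.T                # the similarity matrix used in the question

print(np.allclose(sim_cosine, sim_dot))          # True: the two matrices are identical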

On the topic of python - Latent Semantic Analysis results, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/52198701/
