I'm looking at the documentation for Statistics.corr in PySpark: https://spark.apache.org/docs/1.1.0/api/python/pyspark.mllib.stat.Statistics-class.html#corr.
Why does the correlation here result in NaN?
>>> from pyspark.mllib.linalg import Vectors
>>> from pyspark.mllib.stat import Statistics
>>> rdd = sc.parallelize([Vectors.dense([1, 0, 0, -2]), Vectors.dense([4, 5, 0, 3]),
...                       Vectors.dense([6, 7, 0, 8]), Vectors.dense([9, 0, 0, 1])])
>>> pearsonCorr = Statistics.corr(rdd)
>>> print str(pearsonCorr).replace('nan', 'NaN')
[[ 1.          0.05564149         NaN  0.40047142]
 [ 0.05564149  1.                 NaN  0.91359586]
 [        NaN         NaN  1.                 NaN]
 [ 0.40047142  0.91359586         NaN  1.        ]]
Best answer

Regarding "hadoop - Why does this example result in NaN?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36728771/
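The short reason, which follows directly from the input data: the third component of every vector is 0, so the third column has zero variance. Pearson correlation is r = cov(X, Y) / (σ_X · σ_Y); when σ_Y = 0 the quotient is 0/0, which evaluates to NaN, so the entire third row and column of the matrix are NaN. A minimal NumPy sketch (assuming NumPy only, no Spark required) that reproduces this with the same data:

>>> import numpy as np
>>> # Same four vectors as rows; note the third column is all zeros.
>>> data = np.array([[1, 0, 0, -2],
...                  [4, 5, 0,  3],
...                  [6, 7, 0,  8],
...                  [9, 0, 0,  1]], dtype=float)
>>> col2 = data[:, 2]                 # constant column: [0, 0, 0, 0]
>>> col2.std(ddof=1)                  # zero std -> Pearson denominator is zero
0.0
>>> x = data[:, 0]
>>> with np.errstate(invalid='ignore'):
...     # Pearson r = cov(x, y) / (std(x) * std(y)); 0/0 yields NaN.
...     r = np.cov(x, col2)[0, 1] / (x.std(ddof=1) * col2.std(ddof=1))
>>> r
nan

NumPy's own np.corrcoef(data, rowvar=False) produces the same matrix with NaN in the third row and column, so this is standard Pearson behavior for a constant variable, not a Spark bug.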