I am getting the following error
found : org.apache.spark.sql.Dataset[(Double, Double)]
required: org.apache.spark.rdd.RDD[(Double, Double)]
val testMetrics = new BinaryClassificationMetrics(testScoreAndLabel)
on this code:
val testScoreAndLabel = testResults.
select("Label","ModelProbability").
map{ case Row(l:Double,p:Vector) => (p(1),l) }
val testMetrics = new BinaryClassificationMetrics(testScoreAndLabel)
From the error it looks like testScoreAndLabel has type sql.Dataset, but BinaryClassificationMetrics expects an RDD.
How can I convert a sql.Dataset into an RDD?
Best answer
I would do something like this:
val testScoreAndLabel = testResults.
select("Label","ModelProbability").
map{ case Row(l:Double,p:Vector) => (p(1),l) }
Now to convert testScoreAndLabel to an RDD, simply call testScoreAndLabel.rdd:
val testMetrics = new BinaryClassificationMetrics(testScoreAndLabel.rdd)
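For completeness, below is a minimal, self-contained sketch of the same fix. It assumes testResults is a DataFrame with a Double "Label" column and a "ModelProbability" column holding an org.apache.spark.ml.linalg.Vector (the column names come from the question; the evaluate function name and the usage lines are illustrative, and the Vector import may need adjusting if you are on the older mllib vector type). Calling .rdd before the map also sidesteps the Encoder that a Dataset.map would require, and BinaryClassificationMetrics belongs to the RDD-based spark.mllib API, which is why it wants an RDD[(Double, Double)].

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.sql.{DataFrame, Row}

// Assumes testResults has a Double "Label" column and an ml Vector
// "ModelProbability" column, as in the question.
def evaluate(testResults: DataFrame): BinaryClassificationMetrics = {
  // Drop down to an RDD[Row] first, then build (score, label) pairs.
  // Going through .rdd here also avoids needing an Encoder for the tuple.
  val testScoreAndLabel = testResults
    .select("Label", "ModelProbability")
    .rdd
    .map { case Row(l: Double, p: Vector) => (p(1), l) }

  // BinaryClassificationMetrics is part of the RDD-based spark.mllib API,
  // so it takes an RDD of (score, label) pairs.
  new BinaryClassificationMetrics(testScoreAndLabel)
}

// Example usage (illustrative):
// val metrics = evaluate(testResults)
// println(metrics.areaUnderROC())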
Original Stack Overflow question for "scala - found: org.apache.spark.sql.Dataset[(Double, Double)] required: org.apache.spark.rdd.RDD[(Double, Double)]": https://stackoverflow.com/questions/40577904/