After mapping my RDD to
((_id_1, section_id_1), (_id_1, section_id_2), (_id_2, section_3), (_id_2, section_4))
I want to reduceByKey it down to
((_id_1, Set(section_id_1, section_id_2)), (_id_2, Set(section_3, section_4)))
val collectionReduce = collection_filtered.map(item => {
  val extras = item._2.get("extras")
  var section_id = ""
  var extras_id = ""
  if (extras != null) {
    val extras_parse = extras.asInstanceOf[BSONObject]
    section_id = extras_parse.get("guid").toString
    extras_id = extras_parse.get("id").toString
  }
  (extras_id, Set(section_id))
}).groupByKey().collect()
But my output is
((_id_1, (Set(section_1), Set(section_2))), (_id_2, (Set(section_3), Set(section_4))))
How can I fix this?
Best Answer
You can use reduceByKey and simply merge the sets with ++.
val rdd = sc.parallelize((1, Set("A")) :: (2, Set("B")) :: (2, Set("C")) :: Nil)
val reducedRdd = rdd.reduceByKey(_ ++ _)
reducedRdd.collect()
// Array((1,Set(A)), (2,Set(B, C)))
In your case:
collection_filtered.map(item => {
  // ...
  (extras_id, Set(section_id))
}).reduceByKey(_ ++ _).collect()
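To see why this works, here is a plain-Scala sketch (no Spark required) of the fold that reduceByKey(_ ++ _) performs for each key: the per-record singleton sets are combined pairwise with ++, so each key ends up with a single flat Set rather than a collection of sets. The keys and values below are made-up placeholders:

```scala
// Placeholder pairs mimicking the output of the map step: one singleton Set per record.
val pairs = List(("id1", Set("s1")), ("id1", Set("s2")), ("id2", Set("s3")))

// Group by key, then fold the sets together with ++ (what reduceByKey does per key).
val merged = pairs.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).reduce(_ ++ _)) }
// merged == Map("id1" -> Set("s1", "s2"), "id2" -> Set("s3"))
```

This also shows the contrast with groupByKey, which stops after the grouping step and leaves you with an Iterable of Sets per key instead of one merged Set.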
Regarding "scala - How to add values to a Set in Scala Spark using reduceByKey?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31557260/