I'd like to know whether there is some way to apply a custom aggregation function over multiple columns of a Spark DataFrame.
I have a table of this shape (name, item, price):
john | tomato | 1.99
john | carrot | 0.45
bill | apple | 0.99
john | banana | 1.29
bill | taco | 2.59
and I want to roll up each person's items and costs into a list like this:
john | (tomato, 1.99), (carrot, 0.45), (banana, 1.29)
bill | (apple, 0.99), (taco, 2.59)
Is this possible with DataFrames? I recently learned about collect_list, but it appears to work on only a single column.
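For context, a minimal sketch of the single-column behavior the question describes (assuming the same df built in the answer below):

import org.apache.spark.sql.functions.collect_list

// collect_list gathers one column's values per group...
df.groupBy("name").agg(collect_list("food") as "food").show(false)
// => john -> [tomato, carrot, banana]
// ...but the pairing with each row's price is lost.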
Best Answer
The easiest way to do this with DataFrames is to first collect the two lists, and then use a UDF to zip them together. Something like:
import org.apache.spark.sql.functions.{col, collect_list, udf}
import sqlContext.implicits._
// UDF that zips the food and price sequences into a single Seq of (food, price) tuples
val zipper = udf[Seq[(String, Double)], Seq[String], Seq[Double]](_.zip(_))
val df = Seq(
  ("john", "tomato", 1.99),
  ("john", "carrot", 0.45),
  ("bill", "apple", 0.99),
  ("john", "banana", 1.29),
  ("bill", "taco", 2.59)
).toDF("name", "food", "price")
// Collect each column into a per-name list, zip the lists, and drop the now-redundant price column
val df2 = df.groupBy("name").agg(
  collect_list(col("food")) as "food",
  collect_list(col("price")) as "price"
).withColumn("food", zipper(col("food"), col("price"))).drop("price")
df2.show(false)
# +----+---------------------------------------------+
# |name|food |
# +----+---------------------------------------------+
# |john|[[tomato,1.99], [carrot,0.45], [banana,1.29]]|
# |bill|[[apple,0.99], [taco,2.59]] |
# +----+---------------------------------------------+
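Note: on Spark 2.x and later you can skip the UDF entirely by collecting a struct of both columns in one pass. A minimal sketch, assuming the same df as above:

import org.apache.spark.sql.functions.{col, collect_list, struct}

// collect_list over a struct keeps food and price paired within each group
val df3 = df.groupBy("name")
  .agg(collect_list(struct(col("food"), col("price"))) as "food")
df3.show(false)

This avoids the cost of serializing the rows through a UDF, at the price of producing struct elements rather than Scala tuples.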
Regarding "scala - Aggregating multiple columns with a custom function in Spark", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37737843/