How can I use collect_set or collect_list on a DataFrame after a groupby? For example: df.groupby('key').collect_set('values'). I get the error: AttributeError: 'GroupedData' object has no attribute 'collect_set'
Best Answer
You need to use agg. Example:
from pyspark import SparkContext
from pyspark.sql import HiveContext
from pyspark.sql import functions as F

sc = SparkContext("local")
# collect_set/collect_list require Hive support in older Spark versions
sqlContext = HiveContext(sc)

df = sqlContext.createDataFrame([
    ("a", None, None),
    ("a", "code1", None),
    ("a", "code2", "name2"),
], ["id", "code", "name"])
df.show()
+---+-----+-----+
| id| code| name|
+---+-----+-----+
| a| null| null|
| a|code1| null|
| a|code2|name2|
+---+-----+-----+
Note that in the above you must create a HiveContext. See https://stackoverflow.com/a/35529093/690430 for dealing with different Spark versions.
(df
.groupby("id")
.agg(F.collect_set("code"),
F.collect_list("name"))
.show())
+---+-----------------+------------------+
| id|collect_set(code)|collect_list(name)|
+---+-----------------+------------------+
| a| [code1, code2]| [name2]|
+---+-----------------+------------------+
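As the output shows, collect_set drops nulls and duplicates, while collect_list drops nulls but keeps duplicates and arrival order. A minimal plain-Python sketch of those aggregation semantics (the rows mirror the example DataFrame above; this is an illustration only and does not use Spark itself):

```python
# Rows mirror the example DataFrame; None plays the role of SQL null.
rows = [
    ("a", None, None),
    ("a", "code1", None),
    ("a", "code2", "name2"),
]

def collect_list(values):
    # collect_list keeps duplicates and arrival order, but skips nulls
    return [v for v in values if v is not None]

def collect_set(values):
    # collect_set also deduplicates; note Spark does not guarantee order
    seen = []
    for v in values:
        if v is not None and v not in seen:
            seen.append(v)
    return seen

# Group the rows by id, then aggregate each column, as groupby("id").agg(...) would
grouped = {}
for id_, code, name in rows:
    grouped.setdefault(id_, {"code": [], "name": []})
    grouped[id_]["code"].append(code)
    grouped[id_]["name"].append(name)

result = {
    id_: (collect_set(cols["code"]), collect_list(cols["name"]))
    for id_, cols in grouped.items()
}
print(result)  # {'a': (['code1', 'code2'], ['name2'])}
```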
Regarding list - pyspark collect_set or collect_list with groupby, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37580782/