apache-spark - Setting row_number to start from 0

Tags: apache-spark apache-spark-sql window-functions row-number

Hi, I am experimenting with Spark window functions. I need row_number to start from "0". Here is my code:

val target2 = target1.select("id", "name", "mark1", "mark2", "version")
  .withColumn("rank", row_number().over(
    Window.partitionBy("name", "mark1", "mark2").orderBy("id")))

The row number starts from "1". I tried these:

val target2 = target1.select("id", "name", "mark1", "mark2", "version")
  .withColumn("rank", row_number().over(
    Window.partitionBy("name", "mark1", "mark2").orderBy("id") - 1))

val target2 = target1.select("id", "name", "mark1", "mark2", "version")
  .withColumn("rank", row_number().over(
    Window.partitionBy("name", "mark1", "mark2").orderBy("id"))) - 1

Neither works for me. I need my row_number to start from zero. Any help would be greatly appreciated.

Best Answer

Try this:

from pyspark.sql import Window
from pyspark.sql.functions import row_number

w = Window.partitionBy("name", "mark1", "mark2").orderBy("id")
# row_number() starts at 1, so subtract 1 from the resulting column
target2 = target1.select("id", "name", "mark1", "mark2", "version") \
                 .withColumn("rank", row_number().over(w) - 1)

This works in PySpark. The key is that the subtraction is applied to the Column returned by row_number().over(w); the attempts in the question fail because neither a WindowSpec nor a DataFrame supports subtraction.
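Since the question's code is in Scala, the same fix carries over directly. Below is a minimal, self-contained Scala sketch; the sample rows and the local SparkSession setup are illustrative assumptions, not part of the original post.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

object RowNumberFromZero {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("row-number-from-zero")
      .master("local[*]") // assumption: local run, for illustration only
      .getOrCreate()
    import spark.implicits._

    // Hypothetical sample data shaped like the question's columns.
    val target1 = Seq(
      (1, "a", 10, 20, "v1"),
      (2, "a", 10, 20, "v1"),
      (3, "b", 30, 40, "v2")
    ).toDF("id", "name", "mark1", "mark2", "version")

    val w = Window.partitionBy("name", "mark1", "mark2").orderBy("id")

    // Subtract 1 from the Column produced by row_number().over(w);
    // the first row of each partition then gets rank 0.
    val target2 = target1
      .select("id", "name", "mark1", "mark2", "version")
      .withColumn("rank", row_number().over(w) - 1)

    target2.show()
    spark.stop()
  }
}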

For apache-spark - setting row_number to start from 0, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49998448/
