I have a CSV file that I've imported as a DataFrame with the following code:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("name of file.csv", inferSchema = True, header = True)
df.show()
Output:
+----+----+----+
|col1|col2|col3|
+----+----+----+
|   A|   2|   4|
|   A|   4|   5|
|   A|   7|   7|
|   A|   3|   8|
|   A|   7|   3|
|   B|   8|   9|
|   B|  10|  10|
|   B|   8|   9|
|   B|  20|  15|
+----+----+----+
I want to create another column, col4, computed separately within each group in col1, containing col2[n+3] / col2[n] - 1.
The output should be:
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|   A|   2|   4| 0.5|   # (3/2 - 1)
|   A|   4|   5|0.75|   # (7/4 - 1)
|   A|   7|   7|  NA|
|   A|   3|   8|  NA|
|   A|   7|   3|  NA|
|   B|   8|   9| 1.5|   # (20/8 - 1)
|   B|  10|  10|  NA|
|   B|   8|   9|  NA|
|   B|  20|  15|  NA|
+----+----+----+----+
I know how to do this in pandas, but I'm not sure how to perform this kind of calculation over grouped columns in PySpark.
My PySpark version is 2.4.
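For comparison, the pandas approach the question alludes to can be sketched with a grouped `shift` (an illustrative sketch, not code from the question; the sample data is reconstructed from the table above):

```python
import pandas as pd

# Sample data matching the question's DataFrame.
df = pd.DataFrame({
    'col1': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'],
    'col2': [2, 4, 7, 3, 7, 8, 10, 8, 20],
    'col3': [4, 5, 7, 8, 3, 9, 10, 9, 15],
})

# Within each col1 group, divide the col2 value 3 rows ahead by the
# current row's col2; shift(-3) yields NaN where no such row exists.
df['col4'] = df.groupby('col1')['col2'].shift(-3) / df['col2'] - 1
print(df)
```

The first row of group A gives 3/2 - 1 = 0.5, and the first row of group B gives 20/8 - 1 = 1.5, matching the desired output.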
Best Answer
My Spark version is 2.2. lead() and Window() are used here; see the lead() documentation for reference.
from pyspark.sql.window import Window
from pyspark.sql.functions import lead, col

# Note: ordering the window by the partition column itself gives an
# arbitrary row order within each group; in practice, order by a column
# that defines the intended row sequence (e.g. a timestamp or index).
my_window = Window.partitionBy('col1').orderBy('col1')

# Fetch col2 from 3 rows ahead within the group, compute the ratio,
# then drop the intermediate column.
df = df.withColumn('col2_lead_3', lead(col('col2'), 3).over(my_window)) \
       .withColumn('col4', (col('col2_lead_3') / col('col2')) - 1) \
       .drop('col2_lead_3')
df.show()
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
| B| 8| 9| 1.5|
| B| 10| 10|null|
| B| 8| 9|null|
| B| 20| 15|null|
| A| 2| 4| 0.5|
| A| 4| 5|0.75|
| A| 7| 7|null|
| A| 3| 8|null|
| A| 7| 3|null|
+----+----+----+----+
Regarding "python - Difference between a row and the 3 rows ahead of it in a PySpark DataFrame", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/54176518/