I have the following 2 dataframes:
dataframe_a
+----------------+---------------+
| user_id| domain|
+----------------+---------------+
| josh| wanadoo.fr|
| samantha| randomn.fr|
| bob| eidsiva.net|
| dylan| vodafone.it|
+----------------+---------------+
dataframe_b
+----------------+---------------+
| user_id| domain|
+----------------+---------------+
| josh| oldwebsite.fr|
| samantha| randomn.fr|
| dylan| oldweb.it|
| ryan| chicks.it|
+----------------+---------------+
I want to do a full outer join, but keep the values from dataframe_a's domain column whenever I get 2 different domains for a single user_id. So my desired dataframe would look like:
desired_df
+----------------+---------------+
| user_id| domain|
+----------------+---------------+
| josh| wanadoo.fr|
| samantha| randomn.fr|
| bob| eidsiva.net|
| dylan| vodafone.it|
| ryan| chicks.it|
+----------------+---------------+
I'm thinking I can do something like:
desired_df = dataframe_a.join(dataframe_b, ["user_id"], how="full_outer").drop(dataframe_b.domain)
But I'm worried about whether this will give me ryan in my desired dataframe. Is this the right way to do it?
Best Answer
You'll want to use coalesce. In your current solution, ryan will be in the resulting dataframe, but with a null value for the remaining dataframe_a.domain column.
joined_df = dataframe_a.join(dataframe_b, ["user_id"], how="full_outer")
+----------------+---------------+---------------+
| user_id| domain| domain|
+----------------+---------------+---------------+
| josh| wanadoo.fr| oldwebsite.fr|
| samantha| randomn.fr| randomn.fr|
| bob| eidsiva.net| |
| dylan| vodafone.it| oldweb.it|
| ryan| | chicks.it|
+----------------+---------------+---------------+
coalesce lets you specify a preference order, but it skips null values.
import pyspark.sql.functions as F

joined_df = joined_df.withColumn(
    "preferred_domain",
    F.coalesce(dataframe_a.domain, dataframe_b.domain)
)
joined_df = joined_df.drop(dataframe_a.domain).drop(dataframe_b.domain)
giving
+----------------+----------------+
| user_id|preferred_domain|
+----------------+----------------+
| josh| wanadoo.fr|
| samantha| randomn.fr|
| bob| eidsiva.net|
| dylan| vodafone.it|
| ryan| chicks.it|
+----------------+----------------+
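The join-then-coalesce logic above can be sketched in plain Python (no Spark needed), using dicts keyed by user_id as a stand-in for the two DataFrames. This is illustrative only; the dict names mirror the DataFrames in the question.

```python
# Plain-Python sketch of a full outer join on user_id followed by a
# coalesce that prefers dataframe_a's domain over dataframe_b's.
dataframe_a = {"josh": "wanadoo.fr", "samantha": "randomn.fr",
               "bob": "eidsiva.net", "dylan": "vodafone.it"}
dataframe_b = {"josh": "oldwebsite.fr", "samantha": "randomn.fr",
               "dylan": "oldweb.it", "ryan": "chicks.it"}

desired = {}
# Union of keys = full outer join; missing keys yield None (like null).
for user_id in dataframe_a.keys() | dataframe_b.keys():
    # coalesce: take dataframe_a's value unless it is "null" (None here),
    # otherwise fall back to dataframe_b's value.
    desired[user_id] = dataframe_a.get(user_id) or dataframe_b.get(user_id)

print(desired["josh"])  # wanadoo.fr (dataframe_a wins on conflict)
print(desired["ryan"])  # chicks.it  (only present in dataframe_b)
```

Note that ryan survives the "join" because the key union covers both sides, and the fallback picks up his domain from dataframe_b, matching what F.coalesce does on the joined Spark DataFrame.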
A similar question about "python - PySpark DataFrame: Full Outer Join with a condition" can be found on Stack Overflow: https://stackoverflow.com/questions/58968564/