java - How to join two DataFrames on a combination of columns in Spark?

Tags: java scala apache-spark apache-spark-sql

I can't figure out how to join these two DataFrames.

The first DataFrame stores the times at which users made requests to the service center.

Let's call this DataFrame df1:

+-----------+---------------------+
| USER_NAME | REQUEST_DATE        |
+-----------+---------------------+
| Alex      | 2018-03-01 00:00:00 |
| Alex      | 2018-09-01 00:00:00 |
| Bob       | 2018-03-01 00:00:00 |
| Mark      | 2018-02-01 00:00:00 |
| Mark      | 2018-07-01 00:00:00 |
| Kate      | 2018-02-01 00:00:00 |
+-----------+---------------------+

The second DataFrame stores information about the periods during which a user is allowed to use the service center's services (license periods).

Let's call it df2:

+-----------+---------------------+---------------------+------------+
| USER_NAME | START_SERVICE       | END_SERVICE         | STATUS     |
+-----------+---------------------+---------------------+------------+
| Alex      | 2018-01-01 00:00:00 | 2018-06-01 00:00:00 | Active     |
| Bob       | 2018-01-01 00:00:00 | 2018-02-01 00:00:00 | Not Active |
| Mark      | 2018-01-01 00:00:00 | 2018-05-01 23:59:59 | Active     |
| Mark      | 2018-05-01 00:00:00 | 2018-08-01 23:59:59 | VIP        |
+-----------+---------------------+---------------------+------------+

How can I join these two DataFrames to get a result like the one below, i.e. each user's license status at the time of the request?

+-----------+---------------------+----------------+
| USER_NAME | REQUEST_DATE        | STATUS         |
+-----------+---------------------+----------------+
| Alex      | 2018-03-01 00:00:00 | Active         |
| Alex      | 2018-09-01 00:00:00 | No information |
| Bob       | 2018-03-01 00:00:00 | Not Active     |
| Mark      | 2018-02-01 00:00:00 | Active         |
| Mark      | 2018-07-01 00:00:00 | VIP            |
| Kate      | 2018-02-01 00:00:00 | No information |
+-----------+---------------------+----------------+

Code:

import org.apache.spark.sql.DataFrame
import spark.implicits._ // needed for Seq(...).toDF; pre-imported when running in spark-shell

val df1: DataFrame = Seq(
    ("Alex", "2018-03-01 00:00:00"),
    ("Alex", "2018-09-01 00:00:00"),
    ("Bob", "2018-03-01 00:00:00"),
    ("Mark", "2018-02-01 00:00:00"),
    ("Mark", "2018-07-01 00:00:00"),
    ("Kate", "2018-07-01 00:00:00")
).toDF("USER_NAME", "REQUEST_DATE")

df1.show()

val df2: DataFrame = Seq(
    ("Alex", "2018-01-01 00:00:00", "2018-06-01 00:00:00", "Active"),
    ("Bob", "2018-01-01 00:00:00", "2018-02-01 00:00:00", "Not Active"),
    ("Mark", "2018-01-01 00:00:00", "2018-05-01 23:59:59", "Active"),
    ("Mark", "2018-05-01 00:00:00", "2018-08-01 23:59:59", "Active")
).toDF("USER_NAME", "START_SERVICE", "END_SERVICE", "STATUS")

df2.show()

val total = df1
  .join(df2, df1("USER_NAME") === df2("USER_NAME"), "left")
  .filter(df1("REQUEST_DATE") >= df2("START_SERVICE") && df1("REQUEST_DATE") <= df2("END_SERVICE"))
  .select(df1("*"), df2("STATUS"))

total.show()

Error:

org.apache.spark.SparkException: Exception thrown in awaitResult:
  at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
  at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136)
  at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:367)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:144)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:140)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:140)
  at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:135)
  at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenInner(BroadcastHashJoinExec.scala:232)
  at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:102)
  at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181)
  at org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:35)
  at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:65)
  at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181)
  at org.apache.spark.sql.execution.FilterExec.consume(basicPhysicalOperators.scala:85)
  at org.apache.spark.sql.execution.FilterExec.doConsume(basicPhysicalOperators.scala:206)
  at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181)
  at org.apache.spark.sql.execution.InputAdapter.consume(WholeStageCodegenExec.scala:354)
  at org.apache.spark.sql.execution.InputAdapter.doProduce(WholeStageCodegenExec.scala:383)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
  at org.apache.spark.sql.execution.InputAdapter.produce(WholeStageCodegenExec.scala:354)
  at org.apache.spark.sql.execution.FilterExec.doProduce(basicPhysicalOperators.scala:125)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
  at org.apache.spark.sql.execution.FilterExec.produce(basicPhysicalOperators.scala:85)
  at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:45)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
  at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:35)
  at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:97)
  at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)

Best answer

How can I join these two DataFrames and return this result?

val df_joined = df1.join(df2, Seq("USER_NAME"), "left")

How do I get a list of all users whose license is still valid?

import org.apache.spark.sql.functions.{col, when}

val df_relevant = df_joined
  .withColumn("STATUS",
    when(col("REQUEST_DATE") > col("START_SERVICE") && col("REQUEST_DATE") < col("END_SERVICE"), col("STATUS"))
      .otherwise("No information"))
  .select("USER_NAME", "REQUEST_DATE", "STATUS")
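
Not part of the original answer, but for completeness: a minimal end-to-end sketch (the names resolved and MATCHED_STATUS are made up for illustration) that also collapses the duplicate rows you get when a user has several license periods, assuming the df1/df2 definitions from the question and a spark-shell session:

import org.apache.spark.sql.functions._

// Left-join every request to all of that user's license periods, keep the status
// only for the period that actually covers the request (null otherwise), then
// reduce back to one row per (USER_NAME, REQUEST_DATE).
val resolved = df1
  .join(df2, Seq("USER_NAME"), "left")
  .withColumn("MATCHED_STATUS",
    when(col("REQUEST_DATE") >= col("START_SERVICE") &&
         col("REQUEST_DATE") <= col("END_SERVICE"), col("STATUS")))
  .groupBy("USER_NAME", "REQUEST_DATE")
  .agg(coalesce(first("MATCHED_STATUS", ignoreNulls = true), lit("No information")).as("STATUS"))

resolved.show(false)

The dates here are plain strings in yyyy-MM-dd HH:mm:ss form, so lexicographic comparison happens to order them correctly; casting with to_timestamp would be the safer choice outside of this toy example.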

A similar question about "java - How to join two DataFrames on a combination of columns in Spark?" can be found on Stack Overflow: https://stackoverflow.com/questions/54250242/
