I'm using Spark 1.3.1, where joining two DataFrames duplicates the columns they were joined on. I left outer join two DataFrames and want to pass the resulting DataFrame to the na().fill() method to convert nulls to known values based on each column's data type. I built a Map of "table.column" -> "value" and passed it to the fill method, but I get an exception instead of success :(. What are my options? I see there is a dataFrame.withColumnRenamed method, but it can only rename one column at a time, and I have joins involving multiple columns. Do I just need to make sure the DataFrame I apply na().fill() to has a unique set of column names, regardless of table aliases?
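As an editorial aside, not part of the original question: since the fill values are meant to depend on each column's data type, one option is to build the fill map from the schema instead of writing it by hand. The helper below is a sketch in plain Scala; the type names ("string", "bigint") mirror the simpleString form of Spark's DataTypes, and the fillMap name is hypothetical.

```scala
// Build a na.fill map from (columnName, typeName) pairs: string columns
// get "unknown", bigint columns get 0L. Columns with other types are
// left out of the map and therefore untouched by fill.
def fillMap(schema: Seq[(String, String)]): Map[String, Any] =
  schema.collect {
    case (name, "string") => name -> "unknown"
    case (name, "bigint") => name -> 0L
  }.toMap
```

With a DataFrame this could be driven from the real schema, e.g. `fillMap(df.schema.fields.map(f => (f.name, f.dataType.simpleString)))`, and the result passed to `df.na.fill(...)`.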
Given:
scala> val df1 = sqlContext.jsonFile("people.json").as("df1")
df1: org.apache.spark.sql.DataFrame = [first: string, last: string]
scala> val df2 = sqlContext.jsonFile("people.json").as("df2")
df2: org.apache.spark.sql.DataFrame = [first: string, last: string]
I can join them together:
val df3 = df1.join(df2, df1("first") === df2("first"), "left_outer")
And I have a map of column names to fill values:
scala> val map = Map("df1.first"->"unknown", "df1.last" -> "unknown",
"df2.first" -> "unknown", "df2.last" -> "unknown")
But executing fill(map) results in an exception:
scala> df3.na.fill(map)
org.apache.spark.sql.AnalysisException: Reference 'first' is ambiguous,
could be: first#6, first#8.;
Best Answer
Here's what I came up with. In my original example, there was nothing interesting left in df2 after the join, so I changed it to the classic department/employee example.
department.json
{"department": 2, "name":"accounting"}
{"department": 1, "name":"engineering"}
people.json
{"department": 1, "first":"Bruce", "last": "szalwinski"}
Now I can join the DataFrames, build the map, and replace the nulls with "unknown".
scala> val df1 = sqlContext.jsonFile("department.json").as("df1")
df1: org.apache.spark.sql.DataFrame = [department: bigint, name: string]
scala> val df2 = sqlContext.jsonFile("people.json").as("df2")
df2: org.apache.spark.sql.DataFrame = [department: bigint, first: string, last: string]
scala> val df3 = df1.join(df2, df1("department") === df2("department"), "left_outer")
df3: org.apache.spark.sql.DataFrame = [department: bigint, name: string, department: bigint, first: string, last: string]
scala> val map = Map("first" -> "unknown", "last" -> "unknown")
map: scala.collection.immutable.Map[String,String] = Map(first -> unknown, last -> unknown)
scala> val df4 = df3.select("df1.department", "df2.first", "df2.last").na.fill(map)
df4: org.apache.spark.sql.DataFrame = [department: bigint, first: string, last: string]
scala> df4.show()
+----------+-------+----------+
|department| first| last|
+----------+-------+----------+
| 2|unknown| unknown|
| 1| Bruce|szalwinski|
+----------+-------+----------+
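On the question's concern that withColumnRenamed handles only one column at a time: you can fold it over df.columns to rename every column in one pass, giving one side of the join a unique prefix so na.fill resolves without ambiguity. This is an editorial sketch, not part of the original answer; the df2_ prefix and the prefixColumns helper name are arbitrary. The name computation below is plain Scala, and the corresponding Spark call is shown in the comment.

```scala
// Compute prefixed names for a set of columns. The same fold pattern,
// applied with withColumnRenamed, renames every column of a DataFrame:
//   val df2p = df2.columns.foldLeft(df2) { (df, c) =>
//     df.withColumnRenamed(c, "df2_" + c)
//   }
// After joining df1 with df2p, every column name is unique, so
// na.fill(Map("df2_first" -> "unknown", ...)) no longer hits the
// "Reference 'first' is ambiguous" error.
def prefixColumns(cols: Seq[String], prefix: String): Seq[String] =
  cols.map(prefix + _)
```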
Source: "apache-spark - Problems with DataFrame na() fill method and ambiguous references" on Stack Overflow: https://stackoverflow.com/questions/35679295/