When I use the posexplode() function in Spark SQL, the following statement generates "pos" and "col" as the default column names:
scala> spark.sql(""" with t1(select to_date('2019-01-01') first_day) select first_day,date_sub(add_months(first_day,1),1) last_day, posexplode(array(5,6,7)) from t1 """).show(false)
+----------+----------+---+---+
|first_day |last_day |pos|col|
+----------+----------+---+---+
|2019-01-01|2019-01-31|0 |5 |
|2019-01-01|2019-01-31|1 |6 |
|2019-01-01|2019-01-31|2 |7 |
+----------+----------+---+---+
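For reference, posexplode emits one output row per array element together with its zero-based position. A minimal plain-Python sketch of that semantics (this is an illustration, not Spark itself):

```python
def posexplode(arr):
    """Yield (pos, col) pairs, mirroring the semantics of Spark's posexplode."""
    return list(enumerate(arr))

rows = posexplode([5, 6, 7])
# → [(0, 5), (1, 6), (2, 7)]
```

Each tuple corresponds to one row in the output above, with pos = 0, 1, 2 and col = 5, 6, 7.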
What is the syntax to override these default names in spark.sql?
In the DataFrame API, this can be done by providing a sequence of aliases to .as, e.g.
df.select(posexplode('arr).as(Seq("arr_val","arr_pos")))
For example:
scala> val arr = Array(5,6,7)
arr: Array[Int] = Array(5, 6, 7)
scala> Seq(("dummy")).toDF("x").select(posexplode(lit(arr)).as(Seq("arr_val","arr_pos"))).show(false)
+-------+-------+
|arr_val|arr_pos|
+-------+-------+
|0 |5 |
|1 |6 |
|2 |7 |
+-------+-------+
How can I get the same in SQL? I tried
spark.sql(""" with t1(select to_date('2011-01-01') first_day) select first_day,date_sub(add_months(first_day,1),1) last_day, posexplode(array(5,6,7)) as(Seq('p','c')) from t1 """).show(false)
and
spark.sql(""" with t1(select to_date('2011-01-01') first_day) select first_day,date_sub(add_months(first_day,1),1) last_day, posexplode(array(5,6,7)) as(('p','c')) from t1 """).show(false)
but both throw errors.
Best Answer
You can use LATERAL VIEW:
spark.sql("""
WITH t1 AS (SELECT to_date('2011-01-01') first_day)
SELECT first_day, date_sub(add_months(first_day,1),1) last_day, p, c
FROM t1
LATERAL VIEW posexplode(array(5,6,7)) AS p, c
""").show
+----------+----------+---+---+
| first_day| last_day| p| c|
+----------+----------+---+---+
|2011-01-01|2011-01-31| 0| 5|
|2011-01-01|2011-01-31| 1| 6|
|2011-01-01|2011-01-31| 2| 7|
+----------+----------+---+---+
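Conceptually, LATERAL VIEW cross-joins each source row with the rows the generator produces. A hedged plain-Python emulation of the query above (the names t1, first_day, p, c come from the query; this is a sketch, not Spark):

```python
# One source row from the CTE t1, plus the array fed to posexplode.
t1 = [{"first_day": "2011-01-01", "last_day": "2011-01-31"}]
array_vals = [5, 6, 7]

# LATERAL VIEW: every source row is paired with every generated (p, c) row.
result = [
    {**row, "p": p, "c": c}
    for row in t1
    for p, c in enumerate(array_vals)
]
```

With one source row and a three-element array this yields three rows, matching the output table above.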
Or alias the tuple:
spark.sql("""
WITH t1 AS (SELECT to_date('2011-01-01') first_day)
SELECT first_day, date_sub(add_months(first_day,1),1) last_day,
posexplode(array(5,6,7)) AS (p, c)
FROM t1
""").show
+----------+----------+---+---+
| first_day| last_day| p| c|
+----------+----------+---+---+
|2011-01-01|2011-01-31| 0| 5|
|2011-01-01|2011-01-31| 1| 6|
|2011-01-01|2011-01-31| 2| 7|
+----------+----------+---+---+
Tested with Spark 2.4.0.
Note that the aliases are not strings and should not be quoted with ' or ". If you have to use non-standard identifiers, use backticks instead, i.e.
WITH t1 AS (SELECT to_date('2011-01-01') first_day)
SELECT first_day, date_sub(add_months(first_day,1),1) last_day,
posexplode(array(5,6,7)) AS (`arr pos`, `arr_value`)
FROM t1
Regarding "sql - How to specify aliases for posexplode columns in Spark SQL?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/54309042/