How can I convert the following query so it works with Spark 1.6, which does not support subqueries:
SELECT ne.device_id, sp.device_hostname
FROM `table1` ne INNER JOIN `table2` sp
ON sp.device_hostname =
(SELECT device_hostname FROM `table2`
WHERE device_hostname LIKE
CONCAT(ne.device_id,'%') ORDER BY device_hostname DESC LIMIT 1)
I read that subqueries are supported in FROM but not in WHERE, yet the following does not work either:
SELECT * FROM (SELECT ne.device_id, sp.device_hostname
FROM `table1` ne INNER JOIN `table2` sp
ON sp.device_hostname =
(SELECT device_hostname FROM `table2`
WHERE device_hostname LIKE
CONCAT(ne.device_id,'%') ORDER BY device_hostname DESC LIMIT 1)) AS TA
My overall goal is to join the two tables but take only the last record from table 2. The SQL statements themselves are valid, but when I run them through Spark's HiveContext.sql I get an AnalysisException.
Best Answer
You can use HiveContext with window functions (see How to select the first row of each group?):
scala> Seq((1L, "foo")).toDF("id", "device_id").registerTempTable("table1")
scala> Seq((1L, "foobar"), (2L, "foobaz")).toDF("id", "device_hostname").registerTempTable("table2")
scala> sqlContext.sql("""
| WITH tmp AS (
| SELECT ne.device_id, sp.device_hostname, row_number() OVER (PARTITION BY device_id ORDER BY device_hostname) AS rn
| FROM table1 ne INNER JOIN table2 sp
| ON sp.device_hostname LIKE CONCAT(ne.device_id, '%'))
| SELECT device_id, device_hostname FROM tmp WHERE rn = 1
| """).show
+---------+---------------+
|device_id|device_hostname|
+---------+---------------+
| foo| foobar|
+---------+---------------+
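The window-function query keeps, for each device_id, the single row ranked first by the chosen ordering. The same greatest-/least-per-group logic can be sketched in plain Scala collections (hypothetical in-memory data standing in for the tables, not Spark itself):

```scala
// Plain-Scala sketch of the row_number()-per-group logic above.
// Data and object names are illustrative, not from the original tables.
object PerGroupPick {
  val table1 = Seq("foo")              // device_id values
  val table2 = Seq("foobar", "foobaz") // device_hostname values

  // Inner join on "hostname starts with device_id",
  // i.e. the LIKE CONCAT(ne.device_id, '%') condition.
  val joined: Seq[(String, String)] =
    for (id <- table1; host <- table2 if host.startsWith(id)) yield (id, host)

  // rn = 1 with ORDER BY device_hostname ascending keeps the minimum
  // hostname per group; switch min to max to mirror the original
  // ORDER BY ... DESC LIMIT 1 (which keeps the last record).
  val firstPerGroup: Map[String, String] =
    joined.groupBy(_._1).map { case (id, rows) => id -> rows.map(_._2).min }
}
```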
But since you only need those two columns, you can simply aggregate instead (swap min for max if you want to mirror the original ORDER BY ... DESC LIMIT 1):
scala> sqlContext.sql("""
| WITH tmp AS (
| SELECT ne.device_id, sp.device_hostname
| FROM table1 ne INNER JOIN table2 sp
| ON sp.device_hostname LIKE CONCAT(ne.device_id, '%'))
| SELECT device_id, min(device_hostname) AS device_hostname
| FROM tmp GROUP BY device_id
|""").show
+---------+---------------+
|device_id|device_hostname|
+---------+---------------+
| foo| foobar|
+---------+---------------+
For better performance, you should try to replace the LIKE with an equality condition (see How can we JOIN two Spark SQL dataframes using a SQL-esque "LIKE" criterion?).
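One way to get an equality condition is to derive the join key from the hostname itself. This only works if device_id values share a fixed length (an assumption for illustration; in Spark SQL the equivalent would be something like `ON ne.device_id = substring(sp.device_hostname, 1, 3)`). A minimal plain-Scala sketch of the idea:

```scala
// Sketch: turn the prefix LIKE into an equality join by deriving the key.
// Assumes every device_id is exactly 3 characters long (illustrative only).
object EquiJoinSketch {
  val idLen = 3
  val table1 = Seq("foo")                       // device_id values
  val table2 = Seq("foobar", "foobaz", "barqux") // device_hostname values

  // Equality on the derived prefix instead of a LIKE scan; in Spark this
  // lets the optimizer use a hash join rather than a Cartesian product.
  val joined: Seq[(String, String)] =
    for {
      id   <- table1
      host <- table2
      if host.length >= idLen && host.take(idLen) == id
    } yield (id, host)
}
```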
Regarding "mysql - How to use SQL subqueries in Spark 1.6", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48380067/