apache-spark - How to use a dynamic value in an INTERVAL in a Spark SQL query

Tags: apache-spark pyspark apache-spark-sql

A working Spark SQL query:

SELECT current_timestamp() - INTERVAL 10 DAYS as diff from sample_table
The Spark SQL I tried (which does not work):
SELECT current_timestamp() - INTERVAL col1 DAYS as diff from sample_table
The error produced by the query above:
Traceback (most recent call last):
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 767, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 73, in deco
    raise ParseException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.ParseException: "
mismatched input 'DAYS' expecting <EOF>

== SQL ==
SELECT current_timestamp() - INTERVAL col1 DAYS as diff from sample_table
------------------------------------------^^^
"
I want to use col1 as a dynamic interval value. How can I achieve this?

Best Answer

The Spark SQL function make_interval achieves this:

SELECT current_timestamp() - make_interval(0, 0, 0, col1, 0, 0, 0) as diff from sample_table
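
An INTERVAL literal is parsed as a constant, which is why a column name in that position is rejected, whereas make_interval(years, months, weeks, days, hours, mins, secs) is an ordinary expression evaluated per row, so it can take column arguments (it is available since Spark 3.0). A minimal PySpark sketch of the fix, assuming a hypothetical sample_table with an integer col1:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dynamic-interval").getOrCreate()

# Hypothetical sample data: col1 holds the number of days to subtract.
spark.createDataFrame([(10,), (30,)], "col1 int").createOrReplaceTempView("sample_table")

# Only the days slot is driven by col1 here; the other six components
# (years, months, weeks, hours, mins, secs) are pinned to 0.
spark.sql("""
    SELECT current_timestamp() - make_interval(0, 0, 0, col1, 0, 0, 0) AS diff
    FROM sample_table
""").show(truncate=False)

Any of the seven interval components can be supplied from a column in the same way.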

Regarding "apache-spark - How to use a dynamic value in an INTERVAL in a Spark SQL query", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58074912/
