I loaded some log files into SQL tables via Spark, and my schema looks like this:
|-- timestamp: timestamp (nullable = true)
|-- c_ip: string (nullable = true)
|-- cs_username: string (nullable = true)
|-- s_ip: string (nullable = true)
|-- s_port: string (nullable = true)
|-- cs_method: string (nullable = true)
|-- cs_uri_stem: string (nullable = true)
|-- cs_query: string (nullable = true)
|-- sc_status: integer (nullable = false)
|-- sc_bytes: integer (nullable = false)
|-- cs_bytes: integer (nullable = false)
|-- time_taken: integer (nullable = false)
|-- User_Agent: string (nullable = true)
|-- Referrer: string (nullable = true)
As you can see, I created a timestamp field, which Spark supports reading (dates would not work, as far as I understood). I would love to use it for queries like "where timestamp > (2012-10-08 16:10:36.0)", but when I run them I keep getting errors.
I tried the following two syntax forms:
For the second one I parse a string, so I am sure I am actually passing it in a timestamp format.
I use two functions: parse and date2timestamp.
Any hints on how I should handle timestamp values?
Thanks!
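The names `parse` and `date2timestamp` above are the asker's own helpers; a plausible reconstruction (the `SimpleDateFormat` pattern and the implementation are my assumptions, only the names come from the question) might be:

```scala
import java.sql.Timestamp
import java.text.SimpleDateFormat
import java.util.Date

// Hypothetical reconstruction of the helpers mentioned in the question.
// formatTime3 parses strings like "2012-10-08 16:10:36.0".
val formatTime3 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.S")

// date2timestamp converts a java.util.Date into a java.sql.Timestamp.
def date2timestamp(d: Date): Timestamp = new Timestamp(d.getTime)

val ts = date2timestamp(formatTime3.parse("2012-10-08 16:10:36.0"))
```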
1)
scala> sqlContext.sql("SELECT * FROM Logs as l where l.timestamp=(2012-10-08 16:10:36.0)").collect
java.lang.RuntimeException: [1.55] failure: ``)'' expected but 16 found
SELECT * FROM Logs as l where l.timestamp=(2012-10-08 16:10:36.0)
^
2)
sqlContext.sql("SELECT * FROM Logs as l where l.timestamp=" + date2timestamp(formatTime3.parse("2012-10-08 16:10:36.0"))).collect
java.lang.RuntimeException: [1.54] failure: ``UNION'' expected but 16 found
SELECT * FROM Logs as l where l.timestamp=2012-10-08 16:10:36.0
^
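Both failures point at the same thing: the SQL parser reads the unquoted value `2012-10-08` as an arithmetic expression and then stops at the stray `16`. Wrapping the value in single quotes turns it into a string literal the parser accepts. A minimal sketch of building such a query (the table and column names are taken from the question; passing it to `sqlContext.sql` is assumed to work as in the snippets above):

```scala
// The timestamp value must appear as a quoted SQL string literal;
// unquoted, "2012-10-08 16:10:36.0" is parsed as subtraction plus a stray token.
val tsString = "2012-10-08 16:10:36.0"
val query = s"SELECT * FROM Logs as l where l.timestamp = '$tsString'"
// sqlContext.sql(query).collect  // assuming a SQLContext named sqlContext, as above
```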
Best answer
I think the problem was, first, the precision of the timestamp, and also that the string I pass to represent the timestamp has to be cast to a String.
So this query now works fine:
sqlContext.sql("SELECT * FROM Logs as l where cast(l.timestampLog as String) <= '2012-10-08 16:10:36'")
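A note on why the cast-to-String comparison works: timestamps rendered as `yyyy-MM-dd HH:mm:ss` sort lexicographically in the same order as they do chronologically, so comparing the casted strings is safe for this format. A tiny standalone check:

```scala
// Lexicographic order of "yyyy-MM-dd HH:mm:ss" strings matches chronological order,
// because the fields are fixed-width and ordered from most to least significant.
val earlier = "2012-10-08 16:10:35"
val later   = "2012-10-08 16:10:36"
val inRange = earlier <= "2012-10-08 16:10:36" && later <= "2012-10-08 16:10:36"
```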
About "scala - SparkSQL timestamp query fails", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27069537/