json - Error when running Hive queries over JSON data?

Tags: json, hadoop, amazon-s3, hive, emr

My data contains the following:

{"field1":{"data1": 1},"field2":100,"field3":"more data1","field4":123.001}
{"field1":{"data2": 1},"field2":200,"field3":"more data2","field4":123.002}
{"field1":{"data3": 1},"field2":300,"field3":"more data3","field4":123.003}
{"field1":{"data4": 1},"field2":400,"field3":"more data4","field4":123.004}

I uploaded it to S3 and turned it into a Hive table with the following commands in the Hive console:

ADD JAR s3://elasticmapreduce/samples/hive-ads/libs/jsonserde.jar;
CREATE EXTERNAL TABLE impressions (json STRING)
  ROW FORMAT DELIMITED
  LINES TERMINATED BY '\n'
  LOCATION 's3://my-bucket/';

The query:

SELECT * FROM impressions;

returns the expected output, but as soon as I try to use the get_json_object UDF and run the query:

SELECT get_json_object(impressions.json, '$.field2') FROM impressions;

I get the following error:

> SELECT get_json_object(impressions.json, '$.field2') FROM impressions;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
java.io.IOException: cannot find dir = s3://my-bucket/snapshot.csv in pathToPartitionInfo: [s3://my-bucket/]
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:291)
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:258)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat$CombineHiveInputSplit.<init>(CombineHiveInputFormat.java:108)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:423)
    at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1036)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1028)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:172)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:944)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:897)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:871)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:479)
    at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:261)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:218)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:567)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Job Submission failed with exception 'java.io.IOException(cannot find dir = s3://my-bucket/snapshot.csv in pathToPartitionInfo: [s3://my-bucket/])'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask

Does anyone know what is going wrong?

Best Answer

Is there any reason you are declaring:

ADD JAR s3://elasticmapreduce/samples/hive-ads/libs/jsonserde.jar;

but not using the SerDe in your table definition? See the snippet below for how to use it. With the SerDe mapping each JSON key to a column, I see no reason to use get_json_object here.

CREATE EXTERNAL TABLE impressions (
    field1 string, field2 string, field3 string, field4 string
  )
  ROW FORMAT
    SERDE 'com.amazon.elasticmapreduce.JsonSerde'
    WITH SERDEPROPERTIES ( 'paths'='field1, field2, field3, field4' )
  LOCATION 's3://mybucket' ;
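With the SerDe in place, each top-level JSON key maps directly to a column, so fields can be selected by name. A minimal usage sketch (assuming the table above has been created and the SerDe JAR added in the same session; note that field1 holds a nested object, so it is declared as a string here and would need further parsing to reach its inner keys):

    -- The SerDe JAR must be on the classpath for queries too, not just DDL
    ADD JAR s3://elasticmapreduce/samples/hive-ads/libs/jsonserde.jar;

    -- Columns come straight from the JSON keys; no get_json_object needed
    SELECT field2, field4
    FROM impressions
    WHERE field3 = 'more data1';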

Regarding "json - Error when running Hive queries over JSON data?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/11514827/
