hadoop - Unable to start Hive query (MapReduce)

Tags: hadoop, cloudera, hadoop-yarn

I am having trouble with Hive queries. When I try to run a count(*) query from the Hue interface, I get an exception like this:

15/01/23 15:06:42 ERROR operation.Operation: Error running hive query: 
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:147)
    at org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:69)
    at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
    at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:213)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Running the same query in the Hive CLI, I get:

hive> select count(*) from tweets; 
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
    at java.lang.StringCoding.decode(StringCoding.java:193)
    at java.lang.String.<init>(String.java:416)
    at com.google.protobuf.LiteralByteString.toString(LiteralByteString.java:148)
    at com.google.protobuf.ByteString.toStringUtf8(ByteString.java:572)
    at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$ExtendedBlockProto.getPoolId(HdfsProtos.java:743)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:525)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:751)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1188)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1324)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1432)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1441)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:549)
    at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy17.getListing(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1906)
    at org.apache.hadoop.hdfs.DistributedFileSystem$15.<init>(DistributedFileSystem.java:742)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:731)
    at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1664)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:300)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
    at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:336)
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:302)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:435)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:525)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:517)
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. GC overhead limit exceeded
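
The second stack trace is actually informative: the OutOfMemoryError is thrown inside CombineHiveInputFormat.getSplits(), i.e. on the client side while the Hive CLI is still listing input files and computing splits, before the job is ever submitted to YARN. A common first workaround (only a sketch, assuming the default client heap is simply too small for the number of files under the tweets table; the heap size below is an example value) is to raise the heap of the client JVM:

    # give the Hive CLI / Hadoop client JVM more heap (example value, adjust as needed)
    export HADOOP_CLIENT_OPTS="-Xmx2048m"
    hive -e "select count(*) from tweets;"

If the table consists of very many small files, consolidating them also reduces the memory needed to compute splits.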

I tried to check the logs in the JobTracker, but I got the following error (screenshot not shown).

How can I fix all of these errors?

  • Cluster OS: CentOS 6.6
  • Hadoop distribution: Cloudera CDH 5.2
  • MapReduce: YARN

Best Answer

I have figured out the problem with

select count(*) from tweets;

The problem was that I had placed serde.jar in the wrong directory on some of the node hosts, which is why the query failed from the Hive CLI/Hue. CDH 4.* throws a "class not found" exception, while CDH 5.* returns error code 2.
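
For reference, instead of copying the SerDe JAR into a fixed directory on every node host, the JAR can also be registered per session with ADD JAR, or exposed through the auxiliary JARs path. A minimal sketch (the paths below are placeholders, not the actual locations used here):

    hive> ADD JAR /path/to/my-serde.jar;        -- register for the current session only
    hive> select count(*) from tweets;

    # or make it available to every Hive session via the aux path (placeholder path)
    export HIVE_AUX_JARS_PATH=/path/to/serde/dir

ADD JAR ships the JAR with each submitted job, so it avoids the "class not found" / return code 2 failures caused by a JAR missing on individual nodes.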

However, the JobTracker (YARN) issue still remains.
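
As a side note, on CDH 5 with YARN (MRv2) there is no classic JobTracker any more; completed jobs show up in the ResourceManager web UI and the JobHistory Server, and container logs can usually be pulled from the command line (a sketch, assuming log aggregation is enabled; the application id below is a placeholder):

    yarn application -list -appStates ALL                    # find the application id
    yarn logs -applicationId application_1421999999999_0001  # placeholder id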

Regarding hadoop - Unable to start Hive query (MapReduce), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28112069/
