hadoop - Hive fails with java.io.IOException (Max block location exceeded for split.. splitsize: 45 maxsize: 10)

Tags: hadoop hive

Hive needs to process 45 files, each roughly 1 GB in size. After the mappers reach 100% completion, Hive fails with the error message below.

Driver returned: 1.  Errors: OK
Hive history file=/tmp/hue/hive_job_log_hue_201308221004_1738621649.txt
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1376898282169_0441, Tracking URL = http://SH02SVR2882.hadoop.sh2.ctripcorp.com:8088/proxy/application_1376898282169_0441/
Kill Command = //usr/lib/hadoop/bin/hadoop job  -kill job_1376898282169_0441
Hadoop job information for Stage-1: number of mappers: 236; number of reducers: 0
2013-08-22 10:04:40,205 Stage-1 map = 0%,  reduce = 0%
2013-08-22 10:05:07,486 Stage-1 map = 1%,  reduce = 0%, Cumulative CPU 121.28 sec
.......................
2013-08-22 10:09:18,625 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 7707.18 sec
MapReduce Total cumulative CPU time: 0 days 2 hours 8 minutes 27 seconds 180 msec
Ended Job = job_1376898282169_0441
Ended Job = -541447549, job is filtered out (removed at runtime).
Ended Job = -1652692814, job is filtered out (removed at runtime).
Launching Job 3 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Job Submission failed with exception 
'java.io.IOException(Max block location exceeded for split: Paths:/tmp/hive-beeswax-logging/hive_2013-08-22_10-04-32_755_6427103839442439579/-ext-10001/000009_0:0+28909,....,/tmp/hive-beeswax-logging/hive_2013-08-22_10-04-32_755_6427103839442439579/-ext-10001/000218_0:0+45856 
Locations:10.8.75.17:...:10.8.75.20:; InputFormatClass: org.apache.hadoop.mapred.TextInputFormat
 splitsize: 45 maxsize: 10)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched: 
Job 0: Map: 236   Cumulative CPU: 7707.18 sec   HDFS Read: 63319449229 HDFS Write: 8603165 SUCCESS
Total MapReduce CPU Time Spent: 0 days 2 hours 8 minutes 27 seconds 180 msec

However, I never set maxsize. I ran the job many times and hit the same error every time. I also tried adding the mapreduce.jobtracker.split.metainfo.maxsize property for Hive, but in that case Hive failed without launching any map work at all.

Best Answer

Set mapreduce.job.max.split.locations to a value greater than 45.

In our case this solved the problem.
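A minimal sketch of applying the fix, assuming it is set per-session before the failing query (the value 100 is an arbitrary choice above the 45 locations reported in the error, not a value from the original answer):

```sql
-- Raise the per-split block-location limit for this Hive session only.
-- The default in Hadoop 2.x is 10, which is what the error's "maxsize: 10" refers to.
SET mapreduce.job.max.split.locations=100;

-- ...then re-run the original query.
```

The same property can also be set cluster-wide in mapred-site.xml, but a session-level SET is usually enough to confirm it resolves this particular failure.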

Regarding hadoop - Hive fails with java.io.IOException (Max block location exceeded for split.. splitsize: 45 maxsize: 10), a similar question was found on Stack Overflow: https://stackoverflow.com/questions/18370647/
