hadoop - Hive query with join: space available is below the configured reserved amount

Tags: hadoop hive hdfs hql

I am running a SQL query with Hive on a single-node cluster, and I get this error:

 MapReduce Jobs Launched: 
Stage-Stage-20:  HDFS Read: 4456448 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

In the log http://localhost:50070/logs/hadoop-hadoop-namenode-hadoop.log, the available space appears to be below the configured reserved amount:

org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: 
Space available on volume '/dev/mapper/vg_hadoop-lv_root' is 40734720,
which is below the configured reserved amount 104857600

Do you know why this error occurs?

Also, in the disk analyzer I had 12.6 GB of free space before executing the query; when execution stopped with the error, the disk analyzer showed only 2 GB of free space left. I also gave the VirtualBox machine an extra 30 GB, and the same thing happened.

The full error:

    Warning: Map Join MAPJOIN[110][bigTable=?] in task 'Stage-20:MAPRED' is a cross product
    Warning: Shuffle Join JOIN[8][tables = [part, supplier]] in Stage 'Stage-1:MAPRED' is a cross product
    Query ID = hadoopadmin_20160324175146_7ab8931d-eeac-4e03-b833-3592ed96521f
    Total jobs = 9
    Stage-27 is selected by condition resolver.
    Stage-1 is filtered out by condition resolver.
    16/03/24 17:51:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Execution log at: /tmp/hadoopadmin/hadoopadmin_20160324175146_7ab8931d-eeac-4e03-b833-3592ed96521f.log
    2016-03-24 17:52:01 Starting to launch local task to process map join;  maximum memory = 518979584
    2016-03-24 17:52:05 Dump the side-table for tag: 1 with group count: 1 into file: file:/tmp/hadoopadmin/614990eb-e755-4bca-bccf-be19bd5c6882/hive_2016-03-24_17-51-46_111_5082675810708688029-1/-local-10017/HashTable-Stage-20/MapJoin-mapfile61--.hashtable
    2016-03-24 17:52:06 Uploaded 1 File to: file:/tmp/hadoopadmin/614990eb-e755-4bca-bccf-be19bd5c6882/hive_2016-03-24_17-51-46_111_5082675810708688029-1/-local-10017/HashTable-Stage-20/MapJoin-mapfile61--.hashtable (938915 bytes)
    2016-03-24 17:52:06 End of local task; Time Taken: 4.412 sec.
    Execution completed successfully
    MapredLocal task succeeded
    Launching Job 2 out of 9
    Number of reduce tasks is set to 0 since there's no reduce operator
    Job running in-process (local Hadoop)
    2016-03-24 17:52:10,043 Stage-20 map = 0%,  reduce = 0%
    2016-03-24 17:53:10,214 Stage-20 map = 0%,  reduce = 0%
    2016-03-24 17:54:10,272 Stage-20 map = 0%,  reduce = 0%
    2016-03-24 17:55:10,336 Stage-20 map = 0%,  reduce = 0%
    2016-03-24 17:56:10,386 Stage-20 map = 0%,  reduce = 0%
    2016-03-24 17:57:10,435 Stage-20 map = 0%,  reduce = 0%
    log4j:ERROR Failed to flush writer,
    java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:326)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:369)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.fatal(Log4JLogger.java:239)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:171)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Ended Job = job_local60483225_0001 with errors
    Error during job, obtaining debugging information...
    FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
    MapReduce Jobs Launched: 
    Stage-Stage-20:  HDFS Read: 4472832 HDFS Write: 0 FAIL
    Total MapReduce CPU Time Spent: 0 msec
    hive> 

The query:

select 
    nation, 
    o_year, 
    sum(amount) as sum_profit 
from
    (select 
        n_name as nation, 
        year(o_orderdate) as o_year, 
        l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount 
    from part, 
        supplier, 
        lineitem, 
        partsupp, 
        orders, 
        nation 
    where
        s_suppkey = l_suppkey and 
        ps_suppkey = l_suppkey and 
        ps_partkey = l_partkey and 
        p_partkey = l_partkey and 
        o_orderkey = l_orderkey and 
        s_nationkey = n_nationkey and 
        p_name like '%plum%' ) as profit 
group by nation, o_year 
order by nation, o_year desc;

Best Answer

This is probably your problem:

Warning: Map Join MAPJOIN[110][bigTable=?] in task 'Stage-20:MAPRED' is a cross product
Warning: Shuffle Join JOIN[8][tables = [part, supplier]] in Stage 'Stage-1:MAPRED' is a cross product

Cross products tend to turn tables of a few GB into tables on the order of terabytes when there are many keys... re-evaluate your query and make sure it is doing what you think it is doing.
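
One way to catch this early, instead of letting the job fill the local disk, is Hive's strict mode, which refuses to run queries that contain a Cartesian product. A minimal sketch, assuming a Hive 1.x-era CLI where hive.mapred.mode is honored (this is not part of the original answer):

-- In strict mode Hive rejects Cartesian-product joins (along with ORDER BY
-- without LIMIT and unfiltered scans of partitioned tables), so an accidental
-- cross product fails immediately rather than running until the disk is full.
set hive.mapred.mode=strict;

-- Hypothetical illustration: with no join condition linking the two tables,
-- strict mode would abort this query with a cross-product error.
-- select * from part join supplier;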

Edit: Now that you have added the query, I can say more. This part:

from part, 
    supplier, 
    lineitem, 
    partsupp, 
    orders, 
    nation

is where you can optimize things. It creates a Cartesian product, and that is your problem. What happens is that all the tables are first joined into one big cross product, and only afterwards are records kept according to your where clause, rather than the tables being joined together selectively with on clauses (a minimal sketch of the difference is given after the rewritten query below). Try this (admittedly uglier) optimized version of the query:

select 
  nation, 
  o_year, 
  sum(amount) as sum_profit
from 
  (select 
    n_name as nation, 
    year(o_orderdate) as o_year, 
    l_extendedprice * (1 - l_discount) -  ps_supplycost * l_quantity as amount
   from
      orders o join
      (select 
        l_extendedprice, 
        l_discount, 
        l_quantity, 
        l_orderkey, 
        n_name, 
        ps_supplycost 
       from part p join
         (select 
            l_extendedprice, 
            l_discount, 
            l_quantity, 
            l_partkey, 
            l_orderkey, 
            n_name, 
            ps_supplycost 
          from partsupp ps join
            (select 
                l_suppkey, 
                l_extendedprice, 
                l_discount, 
                l_quantity, 
                l_partkey, 
                l_orderkey, 
                n_name 
             from
               (select s_suppkey, n_name 
                from nation n join supplier s on n.n_nationkey = s.s_nationkey
               ) s1 join lineitem l on s1.s_suppkey = l.l_suppkey
            ) l1 on ps.ps_suppkey = l1.l_suppkey and ps.ps_partkey = l1.l_partkey
         ) l2 on p.p_name like '%plum%' and p.p_partkey = l2.l_partkey
     ) l3 on o.o_orderkey = l3.l_orderkey
  ) profit
group by nation, o_year
order by nation, o_year desc;

Based on this benchmarking script.
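
To make the difference between the two join styles concrete, here is a minimal sketch with hypothetical toy tables t1 and t2 (not from the original answer); depending on the Hive version and optimizer, the first form can be planned as a cross product that is filtered afterwards, while the second applies the join key while joining:

-- Implicit comma join: the tables may first be combined into a full
-- cross product, which is only then filtered by the where clause.
select *
from t1, t2
where t1.k = t2.k;

-- Explicit join with an on clause: the key is applied while joining,
-- so the huge intermediate cross product is never materialized.
select *
from t1 join t2 on t1.k = t2.k;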

Regarding hadoop - Hive query with join: space available is below the configured reserved amount, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36206345/

Related articles:

hadoop - Importing flat data files from edge devices into HDFS and processing them

hadoop - Error when running Apache Mahout K-Means

sql - How to search for text in all rows without specifying each column individually

java - Spark with Hive: Table or view not found

sql - Hadoop operation only writes one row?

hadoop - TEZ equivalent of the "mapreduce.map.failures.maxpercent" parameter

hadoop - Trying to get Hadoop working in pseudo-distributed mode: connection refused and other errors

hadoop - Unable to format the namenode during Hadoop installation on Windows with Cygwin

hadoop - How to create a Spark DataFrame from a sequenceFile

hadoop - HDFS ACL | Unable to automatically define ACLs for subfolders