java - "Java Heap space Out Of Memory Error" when running a MapReduce program

Tags: java hadoop mapreduce

I am getting an out-of-memory error when running my MapReduce program. If I keep 260 files in one folder and give that folder as input to the MapReduce program, it throws a Java heap space out-of-memory error. If I give only 100 files as input, it runs fine. So how can I limit the MapReduce program to process only 100 files (~50 MB) at a time? Can anyone advise on this?

No. of files: 318, No. of blocks: 1 (block size: 128 MB), Hadoop running on a 32-bit system
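For the actual question of capping how much data one map task consumes: since the log below shows CombineFileInputFormat in use, the maximum combined split size can be capped in the job driver so each mapper gets at most ~50 MB, instead of manually splitting the input across folders. A sketch, assuming the Hadoop 2.x MapReduce API; the job name and the "set mapper/paths as usual" part are placeholders for the asker's own driver code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "pcap-analysis");
        // Cap each (combined) input split at ~50 MB, so a single map
        // task never has to process more than that much input at once.
        // Sets mapreduce.input.fileinputformat.split.maxsize, which
        // CombineFileInputFormat honors when building splits.
        FileInputFormat.setMaxInputSplitSize(job, 50L * 1024 * 1024);
        // ... set input format, mapper, input/output paths as usual ...
    }
}
```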

My StackTrace:
==============
    15/05/05 11:52:47 INFO input.FileInputFormat: Total input paths to process : 318
    15/05/05 11:52:47 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 52027734
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: number of splits:1
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local634564612_0001
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.
end-notification.max.attempts;  Ignoring.
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
    15/05/05 11:52:48 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
    15/05/05 11:52:48 INFO mapreduce.Job: Running job: job_local634564612_0001
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter set in config null
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Waiting for map tasks
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Starting task: attempt_local634564612_0001_m_000000_0
    15/05/05 11:52:48 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
    15/05/05 11:52:48 INFO mapred.MapTask: Processing split: Paths:/user/usr/local/upload/20120713T07-45-42.682358000Z_79.150.138.86-1412.c2s_ndttrace:0+78550,/user/usr/local/upload/20120713T07-45-43.356723000Z_151.40.240.66-53426.c2s_ndttrace:0+32768,/user/usr/local/upload/20120713T07-45-43.718556000Z_85.26.235.102-25300.c2s_ndttrace:0+10130,/user/usr/local/upload
         .....
         .....
         .....
/20120713T08-33-41.259331000Z_84.122.129.103-61321.c2s_ndttrace:0+19148,/user/usr/local/upload/20120713T08-33-54.972649000Z_86.69.144.214-49599.c2s_ndttrace:0+63014,/user/usr/local/upload/20120713T08-33-56.162340000Z_41.143.91.156-50785.c2s_ndttrace:0+13658,/user/usr/local/upload/20120713T08-33-59.768261000Z_31.187.12.141-50274.c2s_ndttrace:0+126542,/user/usr/local/upload/20120713T08-34-03.950055000Z_78.119.172.109-51495.c2s_ndttrace:0+92676,/user/usr/local/upload/20120713T08-34-08.378534000Z_87.7.113.115-62238.c2s_ndttrace:0+49410,/user/usr/local/upload/20120713T08-34-26.258570000Z_151.13.227.66-33198.c2s_ndttrace:0+2666092
    15/05/05 11:52:49 INFO mapreduce.Job: Job job_local634564612_0001 running in uber mode : false
    15/05/05 11:52:49 INFO mapreduce.Job:  map 0% reduce 0%
    15/05/05 11:52:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    15/05/05 11:52:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 78643196(314572784)
    15/05/05 11:52:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 300
    15/05/05 11:52:53 INFO mapred.MapTask: soft limit at 251658240
    15/05/05 11:52:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 314572800
    15/05/05 11:52:53 INFO mapred.MapTask: kvstart = 78643196; length = 19660800
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:55 INFO mapred.MapTask: Starting flush of map output
    15/05/05 11:52:55 INFO mapred.MapTask: Spilling map output
    15/05/05 11:52:55 INFO mapred.MapTask: bufstart = 0; bufend = 105296; bufvoid = 314572800
    15/05/05 11:52:55 INFO mapred.MapTask: kvstart = 78643196(314572784); kvend = 78637988(314551952); length = 5209/19660800
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map > map
    15/05/05 11:52:55 INFO mapred.MapTask: Finished spill 0
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map task executor complete.
    15/05/05 11:52:55 WARN mapred.LocalJobRunner: job_local634564612_0001
    java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
    Caused by: java.lang.OutOfMemoryError: Java heap space
        at net.ripe.hadoop.pcap.PcapReader.nextPacket(PcapReader.java:208)
        at net.ripe.hadoop.pcap.PcapReader.access$0(PcapReader.java:173)
        at net.ripe.hadoop.pcap.PcapReader$PacketIterator.fetchNext(PcapReader.java:554)
        at net.ripe.hadoop.pcap.PcapReader$PacketIterator.hasNext(PcapReader.java:559)
        at net.ripe.hadoop.pcap.io.reader.PcapRecordReader.nextKeyValue(PcapRecordReader.java:57)
        at net.ripe.hadoop.pcap.io.reader.CombineBinaryRecordReader.nextKeyValue(CombineBinaryRecordReader.java:42)
        at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    15/05/05 11:52:56 INFO mapreduce.Job: Job job_local634564612_0001 failed with state FAILED due to: NA
    15/05/05 11:52:56 INFO mapreduce.Job: Counters: 25
        File System Counters
            FILE: Number of bytes read=29002348
            FILE: Number of bytes written=29450636
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=103142
            HDFS: Number of bytes written=0
            HDFS: Number of read operations=6
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=1
        Map-Reduce Framework
            Map input records=1303
            Map output records=1303
            Map output bytes=105296
            Map output materialized bytes=0
            Input split bytes=38078
            Combine input records=0
            Spilled Records=0
            Failed Shuffles=0
            Merged Map outputs=0
            GC time elapsed (ms)=593
            CPU time spent (ms)=0
            Physical memory (bytes) snapshot=0
            Virtual memory (bytes) snapshot=0
            Total committed heap usage (bytes)=1745092608
        File Input Format Counters 
            Bytes Read=0

Accepted answer

Step 1:

Add this line to the .bashrc file found in your hadoop home directory:

export JVM_ARGS="-Xms1024m -Xmx1024m"

This changes the Java heap size to 1024 MB; the default is 128 MB. If you are running hadoop jobs from the terminal, do this as the hadoop user:

source ~/.bashrc
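To confirm the new heap limit actually reached the JVM, a minimal standalone check (not part of Hadoop; it just prints what the running JVM received) can help:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the -Xmx limit the JVM was started with.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```

Run it with `java -Xmx1024m HeapCheck`; it should report a value close to 1024 MB (the exact number may be slightly lower depending on the JVM).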

If you still get the error, try Step 2.

Step 2:

Add this line to the hadoop-env.sh file:

export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"

If that still doesn't help, try Step 3.

Step 3:

Add this property to the mapred-site.xml file:

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
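The same per-task JVM options can also be set programmatically in the job driver instead of in mapred-site.xml. A sketch, assuming Hadoop 2.x, where `mapreduce.map.java.opts` and `mapreduce.reduce.java.opts` are the newer per-task equivalents of `mapred.child.java.opts`:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class HeapConfigDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Give each map and reduce task JVM a 1024 MB heap.
        conf.set("mapreduce.map.java.opts", "-Xmx1024m");
        conf.set("mapreduce.reduce.java.opts", "-Xmx1024m");
        Job job = Job.getInstance(conf, "pcap-analysis");
        // ... set mapper/reducer/input/output as usual ...
    }
}
```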

All of these steps increase the default Java heap size. (Note that in Hadoop 2.x, `mapred.child.java.opts` is deprecated in favor of `mapreduce.map.java.opts` and `mapreduce.reduce.java.opts`, though the old name is still honored.)

Regarding java - "Java Heap space Out Of Memory Error" when running a MapReduce program, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30295606/
