vector - java.lang.OutOfMemoryError: Java heap space error while running seq2sparse in Mahout

Tags: vector, hadoop, mahout

I am trying to cluster some hand-made data with k-means in Mahout. I created 6 files, each containing barely 1 or 2 words of text, and built a sequence file from them with ./mahout seqdirectory. When I try to convert that sequence file into vectors with the ./mahout seq2sparse command, I get a java.lang.OutOfMemoryError: Java heap space error. The sequence file is only 0.215 KB.

Command: ./mahout seq2sparse -i mokha/output -o mokha/vector -ow
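
For context, the full workflow is roughly the two commands below. The mokha/input directory name is a hypothetical placeholder (the question only shows the seq2sparse step), and -c UTF-8 is the usual charset flag for seqdirectory:

# Step 1: pack the directory of small text files into a SequenceFile
#   (mokha/input is a hypothetical path for the 6 hand-made files)
./mahout seqdirectory -i mokha/input -o mokha/output -c UTF-8

# Step 2: convert the SequenceFile into sparse term vectors
./mahout seq2sparse -i mokha/output -o mokha/vector -ow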

Error log:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/bitnami/mahout/mahout-distribution-0.5/mahout-examples-0.5-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/bitnami/mahout/mahout-distribution-0.5/lib/slf4j-jcl-1.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Apr 24, 2013 2:25:11 AM org.slf4j.impl.JCLLoggerAdapter warn
WARNING: No seq2sparse.props found on classpath, will use command-line arguments only
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Maximum n-gram size is: 1
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting mokha/vector
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Minimum LLR value: 1.0
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Number of reduce tasks: 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0001
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate
INFO:
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0001_m_000000_0' to mokha/vector/tokenized-documents
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate
INFO:
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0001_m_000000_0' done.
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 100% reduce 0%
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0001
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:   FileSystemCounters
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_READ=1471400
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_WRITTEN=1496783
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:   Map-Reduce Framework
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Map input records=6
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Spilled Records=0
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Map output records=6
Apr 24, 2013 2:25:13 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0002
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: io.sort.mb = 100
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0002
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:524)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0002
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:14 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0003
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: io.sort.mb = 100
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0003
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:524)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0003
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0004
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0004
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.RangeCheck(ArrayList.java:547)
        at java.util.ArrayList.get(ArrayList.java:322)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:124)
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0004
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:17 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting mokha/vector/partial-vectors-0
Apr 24, 2013 2:25:17 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/bitnami/mahout/mahout-distribution-0.5/bin/mokha/vector/tf-vectors
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
        at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:55)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
        at org.apache.mahout.vectorizer.tfidf.TFIDFConverter.startDFCounting(TFIDFConverter.java:350)
        at org.apache.mahout.vectorizer.tfidf.TFIDFConverter.processTfIdf(TFIDFConverter.java:151)
        at org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.run(SparseVectorsFromSequenceFiles.java:262)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.main(SparseVectorsFromSequenceFiles.java:52)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:187)

Best answer

The bin/mahout script reads the environment variable MAHOUT_HEAPSIZE (in megabytes) and, if it is set, uses it to set the JAVA_HEAP_MAX variable. The Mahout version I am using (0.8) otherwise sets JAVA_HEAP_MAX to 3G. Executing

export MAHOUT_HEAPSIZE=10000m

before the canopy clustering run seemed to keep my run alive longer on a single machine. However, I suspect the best solution is to switch to running on a cluster.
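
For reference, the heap handling inside bin/mahout looks roughly like the sketch below. This is a paraphrase of the 0.8 script from memory, not an exact copy, so treat the details as an assumption and check your own bin/mahout:

# default maximum heap for the Mahout driver JVM when nothing is set
JAVA_HEAP_MAX=-Xmx3g

# MAHOUT_HEAPSIZE is interpreted as a size in megabytes and, if set,
# overrides the default (e.g. MAHOUT_HEAPSIZE=4000 yields -Xmx4000m)
if [ "$MAHOUT_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx${MAHOUT_HEAPSIZE}m"
fi

If your copy of the script appends the "m" suffix itself, as in this sketch, then exporting a plain number of megabytes (without a trailing "m") is the safer form of the export shown above.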

For reference, there is another related post: Mahout runs out of heap space

A similar question on Stack Overflow: https://stackoverflow.com/questions/16182495/
