java - Hadoop copyFromLocal out-of-memory problem

Tags: java hadoop copy out-of-memory heap-memory

I am trying to copy a directory containing 1,048,578 files into the HDFS file system (a sketch of the command is given after the stack trace), but I get the following error:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2367)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
    at java.lang.StringBuffer.append(StringBuffer.java:237)
    at java.net.URI.appendSchemeSpecificPart(URI.java:1892)
    at java.net.URI.toString(URI.java:1922)
    at java.net.URI.<init>(URI.java:749)
    at org.apache.hadoop.fs.shell.PathData.stringToUri(PathData.java:565)
    at org.apache.hadoop.fs.shell.PathData.<init>(PathData.java:151)
    at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:273)
    at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
    at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:291)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
    at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
    at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
    at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
    at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
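
For context, the copy is a plain recursive FsShell put of the whole directory, along these lines (the local and HDFS paths here are placeholders, not taken from the original post):

# copy ~1M local files into HDFS in one shot (placeholder paths)
hadoop fs -copyFromLocal /data/million-files /user/me/million-files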

Best answer

The problem is essentially on the Hadoop client side: as the stack trace shows, the FsShell client recursively lists the directory and builds a PathData/URI object for every entry, so with roughly a million files it exhausts the default client heap before any data is copied. It was solved by disabling the JVM's GC overhead limit check and raising the client heap to 4 GB. The following command fixed my problem:

export HADOOP_CLIENT_OPTS="-XX:-UseGCOverheadLimit -Xmx4096m"
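
Here -XX:-UseGCOverheadLimit disables the check that aborts the JVM when almost all of its time goes to garbage collection, and -Xmx4096m raises the client JVM heap to 4 GB. Since the variable only needs to be visible to the hadoop command itself, an equivalent one-shot form is (same placeholder paths as above):

# apply the larger heap to this single client invocation only (placeholder paths)
HADOOP_CLIENT_OPTS="-XX:-UseGCOverheadLimit -Xmx4096m" hadoop fs -copyFromLocal /data/million-files /user/me/million-files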

Regarding java - Hadoop copyFromLocal out-of-memory problem, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35405690/
