hadoop - Datanode restarts during Hadoop fs -put of a large amount of data (30 GB)

Tags: hadoop hdfs

I have a Hadoop cluster with 3 nodes: 1 master and 2 slaves, each with 24 GB of memory. When I run

hadoop fs -put 

to transfer data from the local file system to HDFS, some of the data is transferred and then I get this exception:

12/11/06 19:01:39 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception      for block blk_-2646313249080465541_1002java.net.SocketTimeoutException: 603000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/172.30.30.210:51735 remote=/172.30.30.211:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readLong(DataInputStream.java:399)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:125)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:3284)

12/11/06 19:01:39 WARN hdfs.DFSClient: Error Recovery for block blk_-2646313249080465541_1002 bad datanode[0] 172.30.30.211:50010
put: All datanodes 172.30.30.211:50010 are bad. Aborting...
12/11/06 19:01:39 ERROR hdfs.DFSClient: Exception closing file /user/root/input/wiki.xml-p000185003p000189874 : java.io.IOException: All datanodes 172.30.30.211:50010 are bad. Aborting...
java.io.IOException: All datanodes 172.30.30.211:50010 are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3414)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2906)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3110)

I was transferring 30 GB of data, but only 22 GB made it across before I hit this exception, and both datanodes restarted. Could this be a buffer problem? I mean, the datanode is receiving the data over a socket; maybe the datanode's buffer is not large enough to hold this much data, and that is what causes the exception.
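For reference, the 603000 ms timeout in the client trace is derived from HDFS's socket timeout settings, not from a fixed-size buffer. As an experiment only (the values below are illustrative, and raising timeouts does not fix a datanode that is actually failing), the Hadoop 1.x read/write socket timeouts can be raised in hdfs-site.xml:

```xml
<!-- hdfs-site.xml (Hadoop 1.x property names); the values are illustrative -->
<property>
  <!-- client/datanode read timeout in ms (default 60000) -->
  <name>dfs.socket.timeout</name>
  <value>600000</value>
</property>
<property>
  <!-- datanode write timeout in ms (default 480000) -->
  <name>dfs.datanode.socket.write.timeout</name>
  <value>600000</value>
</property>
```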

These are the log files created by HDFS (the datanode):

2012-11-06 18:54:10,074 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-11-06 18:54:10,239 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
2012-11-06 18:54:10,349 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-11-06 18:54:10,350 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-11-06 18:54:10,350 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2012-11-06 18:54:10,644 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-11-06 18:54:11,387 WARN org.apache.hadoop.hdfs.server.common.Storage: Ignoring storage directory /data/hadoop/data due to exception: java.io.FileNotFoundException: /data/hadoop/data/in_use.lock (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:703)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:684)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:542)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:112)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:408)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:306)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1623)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1562)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1580)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1707)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1724)

2012-11-06 18:54:11,551 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: All specified directories are not accessible or do not exist.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:143)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:408)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:306)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1623)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1562)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1580)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1707)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1724)

2012-11-06 18:54:11,552 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:     

And these were created by Mapred (the TaskTracker):

2012-11-06 18:54:29,395 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-11-06 18:54:29,416 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
2012-11-06 18:54:29,449 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-11-06 18:54:29,450 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-11-06 18:54:29,450 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started
2012-11-06 18:54:29,792 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-11-06 18:54:30,002 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-11-06 18:54:30,056 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2012-11-06 18:54:30,103 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2012-11-06 18:54:30,107 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as mapred
2012-11-06 18:54:30,108 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /data/hadoop/mapred
2012-11-06 18:54:30,145 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Cannot rename /data/hadoop/mapred/ttprivate to /data/hadoop/mapred/toBeDeleted/2012-11-06_18-54-30.117_0
at org.apache.hadoop.util.MRAsyncDiskService.moveAndDeleteRelativePath(MRAsyncDiskService.java:260)
at org.apache.hadoop.util.MRAsyncDiskService.cleanupAllVolumes(MRAsyncDiskService.java:315)
at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:736)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1515)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3814)
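Both log excerpts point at a local-filesystem permission problem rather than a buffer issue: the datanode gets Permission denied on /data/hadoop/data/in_use.lock, and the TaskTracker cannot rename directories under /data/hadoop/mapred. A minimal check/repair sketch, assuming the daemons run as the `hdfs` and `mapred` users in group `hadoop` (all three names are assumptions, not from the logs; match them to your installation):

```shell
# Inspect the current ownership of both local directories
ls -ld /data/hadoop/data /data/hadoop/mapred

# Hand each directory back to the user its daemon runs as
# (hdfs, mapred, and hadoop are assumed names)
sudo chown -R hdfs:hadoop /data/hadoop/data
sudo chown -R mapred:hadoop /data/hadoop/mapred
```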

Best Answer

Your problem is related to transferring a large amount of data within the cluster. Apache Hadoop has a specific tool for this purpose, called distcp. It lets all the nodes in the cluster participate equally in moving the data into HDFS.

For more information, see http://hadoop.apache.org/docs/r0.20.2/distcp.html
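As a sketch of what that can look like (the namenode URIs and paths are assumptions; distcp runs as a MapReduce job, so the datanode and TaskTracker failures in the logs above must be fixed first):

```shell
# Copy between filesystems as a distributed MapReduce job
hadoop distcp hdfs://source-nn:9000/data/wiki hdfs://master:9000/user/root/input

# A file:// source also works, but only if the path is visible
# from every node in the cluster (e.g. an NFS mount)
hadoop distcp file:///mnt/shared/wiki hdfs://master:9000/user/root/input
```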

Regarding "hadoop - Datanode restarts during Hadoop fs -put of a large amount of data (30 GB)", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/13252325/
