hadoop - Error when copying a file to HDFS

Tags: hadoop hdfs

The Hadoop cluster starts up normally, and jps shows the DataNode and TaskTracker processes running fine.
This is the error message I get when I copy a file to HDFS:

hduser@nn:~$ hadoop fs -put gettysburg.txt /user/hduser/getty/gettysburg.txt

Warning: $HADOOP_HOME is deprecated.
14/08/24 21:12:50 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:51 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:52 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:53 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:54 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:55 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:56 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:57 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:58 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/24 21:12:59 INFO ipc.Client: Retrying connect to server: nn/10.10.1.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Bad connection to FS. command aborted. exception: Call to nn/10.10.1.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
hduser@nn:~$ 

I am able to ssh from the NN to the DNs and vice versa, as well as between the DNs.

I have changed /etc/hosts on the NN and all DNs as follows:
#127.0.0.1      localhost loghost localhost.project1.ch-geni-net.emulab.net
#10.10.1.1      NN-Lan NN-0 NN
#10.10.1.2      DN1-Lan DN1-0 DN1
#10.10.1.3      DN2-Lan DN2-0 DN2
#10.10.1.5      DN4-Lan DN4-0 DN4
#10.10.1.4      DN3-Lan DN3-0 DN3
10.10.1.1       nn
10.10.1.2       dn1
10.10.1.3       dn2
10.10.1.4       dn3
10.10.1.5       dn4

My mapred-site.xml looks like this:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://nn:54310</value>
<description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHE$
</property>
</configuration>

The masters file in /usr/local/hadoop/conf is configured as follows:
hduser@nn:/usr/local/hadoop/conf$ vi masters 

#localhost
nn
hduser@dn1:~$ jps
9975 DataNode
10186 Jps
10070 TaskTracker
hduser@dn1:~$ 

hduser@nn:~$ jps
5979 JobTracker
5891 SecondaryNameNode
6159 Jps
hduser@nn:~$ 

What is the problem?

Best Answer

Check the fs.default.name property in your core-site.xml file. The value should be hdfs://NN:port.
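For reference, a minimal core-site.xml matching the values already shown in the question might look like the sketch below. This is only an illustrative sketch, not a verified fix; the hostname nn, port 54310, and the hadoop.tmp.dir path are all taken from the question above.

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://nn:54310</value>
    <description>The name of the default file system.</description>
  </property>
</configuration>

After adjusting the file, one way to verify would be to restart HDFS and confirm that a NameNode process appears and is listening on the configured port, for example:

hduser@nn:~$ stop-dfs.sh && start-dfs.sh
hduser@nn:~$ jps                           # a NameNode entry should now be listed
hduser@nn:~$ netstat -tlnp | grep 54310    # the NameNode should be listening on this port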

Regarding "hadoop - Error when copying a file to HDFS", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/25478683/
