hadoop - DataNode not shown when running the jps command

Tags: hadoop

I am new to Hadoop. I have set up a multi-node cluster, but when I run the jps command on the master node it shows only the NameNode and not the DataNode, and when I open the URL 'Master:50070' it shows no live nodes. Because of this I cannot copy data from my local system into HDFS; it throws this error:

hduser@oodles-Latitude-3540:~$ hadoop fs -copyFromLocal /home/oodles/input/test /tmp
15/06/28 16:27:56 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/test._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
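
This "could only be replicated to 0 nodes" error means the NameNode has no registered DataNodes to write to. A quick way to confirm that from the master (a sketch, assuming the Hadoop binaries are on the PATH as in the session above):

# Ask the NameNode how many DataNodes have registered;
# "Live datanodes: 0" would match the error above.
hdfs dfsadmin -report

# List the local Java daemons; on a healthy master you would
# expect at least NameNode and SecondaryNameNode.
jps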

After starting the Hadoop cluster with start-dfs.sh, my NameNode starts successfully, but the DataNode does not. When I check the DataNode log, it shows this:
…retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-06-28 04:01:53,496 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master/192.168.0.126:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-06-28 04:01:54,498 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master/192.168.0.126:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-06-28 04:01:55,499 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master/192.168.0.126:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-06-28 04:01:56,500 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master/192.168.0.126:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
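
The retry loop above says the DataNode cannot reach the NameNode's RPC endpoint at Master:9000 at all. A minimal connectivity check from the slave (hostname and port taken from the log; nc/netcat is assumed to be installed):

# Does the slave resolve the master's hostname to the right address?
ping -c 1 Master

# Is anything listening on the NameNode RPC port?
nc -zv Master 9000

Both nodes also need core-site.xml to point at the same NameNode address, i.e. fs.defaultFS (fs.default.name on older releases) set to hdfs://Master:9000.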

I googled around but could not find a solution.

When I run the jps command on the slave node, it shows only the DataNode.

Another thing: when I paste 'Master:50070' into the browser and try to browse the filesystem, it gives me this error:
HTTP ERROR 500

Problem accessing /nn_browsedfscontent.jsp. Reason:

    Can't browse the DFS since there are no live nodes available to redirect to.
Caused by:

java.io.IOException: Can't browse the DFS since there are no live nodes available to redirect to.
    at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:666)
    at org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)

My Hadoop cluster is configured like this:

1) The /etc/hosts file on the master

2) The /etc/hosts file on the slave

I have edited the masters and slaves files in the Hadoop configuration folder: in the masters file I added Master, and in the slaves file I added Slave1 (see the sketch below).
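
For reference, this is roughly what those files should contain, based on the description above (a sketch of what is described, not copied from the screenshots; Slave1's IP address is an assumption):

# $HADOOP_HOME/etc/hadoop/masters  -- host of the secondary NameNode
Master

# $HADOOP_HOME/etc/hadoop/slaves   -- one DataNode host per line
Slave1

# /etc/hosts on both machines, mapping the same names to the same IPs
# (Master's IP is taken from the DataNode log; Slave1's is hypothetical):
192.168.0.126  Master
192.168.0.127  Slave1

One common pitfall on Ubuntu: if /etc/hosts maps the master's own hostname to 127.0.1.1, the NameNode can end up bound to loopback and be unreachable from the slave, which produces exactly the retry loop shown above.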

Can anyone help me solve these problems?

(Two screenshots of the DataNode log were attached here.)

Best Answer

Have you configured SSH? Try logging in to the other node with ssh to check the SSH connection.
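
A minimal way to verify that from the master (the hduser account and the Slave1 hostname are taken from the question; key locations are the OpenSSH defaults):

# Generate a key pair if one does not exist yet (accept the defaults)
ssh-keygen -t rsa

# Install the public key on the slave so start-dfs.sh can log in unattended
ssh-copy-id hduser@Slave1

# This should log in without a password prompt; jps then shows
# which Hadoop daemons are actually running on the slave
ssh hduser@Slave1 jps

If the DataNode process is running on the slave but never shows up as a live node on Master:50070, the cause is usually the network configuration (the hosts file or core-site.xml) rather than SSH itself.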

Regarding "hadoop - DataNode not shown when running the jps command", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/31099133/
