azure - Cannot start namenode: Cannot assign requested address

Tags: azure ubuntu hadoop2

I am trying to set up a multi-node Hadoop cluster on Azure servers. When I run start-dfs.sh, the secondary namenode and the datanode start, but the namenode fails to start with the following error:

2016-02-10 10:52:59,321 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2016-02-10 10:52:59,322 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2016-02-10 10:52:59,324 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-02-10 10:52:59,336 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-02-10 10:52:59,336 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-02-10 10:52:59,336 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-02-10 10:52:59,346 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [testhadooptest:9000] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
    at org.apache.hadoop.ipc.Server.bind(Server.java:425)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:938)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:783)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:344)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:673)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.ipc.Server.bind(Server.java:408)
    ... 13 more
2016-02-10 10:52:59,349 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-02-10 10:52:59,356 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at testhadooptest/public_ip
************************************************************/

/etc/hosts
127.0.0.1 localhost

public_ip testhadooptest
public_ip tesdatanode

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

However, when I comment out the line `public_ip testhadooptest` in /etc/hosts, the namenode starts. Why this strange behavior?
I looked at the linked BindException page but did not find a solution. Please help.
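The "Cannot assign requested address" in the stack trace is the kernel's EADDRNOTAVAIL: a process can only bind an IP address that is actually assigned to one of the machine's own network interfaces. On classic Azure VMs the public VIP is held by Azure's front end, not by the VM itself, so a bind to the public IP fails even though inbound traffic to that IP reaches the VM. A minimal Python sketch of the same behavior (203.0.113.1 is a TEST-NET-3 documentation address, assumed here not to be configured on any local interface):

```python
import errno
import socket

def try_bind(ip, port=0):
    """Attempt to bind a TCP socket to ip:port.

    Returns None on success, or the errno of the OSError on failure.
    Port 0 asks the kernel for any free port, so only the IP matters.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Binding an address the kernel owns (loopback) succeeds.
print(try_bind("127.0.0.1"))

# Binding an address not assigned to any local interface fails with
# EADDRNOTAVAIL ("Cannot assign requested address") -- the same error
# the NameNode hits when /etc/hosts maps testhadooptest to the public VIP.
print(try_bind("203.0.113.1") == errno.EADDRNOTAVAIL)
```

This is why commenting out the `public_ip testhadooptest` line "fixes" the startup: the hostname then falls back to a resolvable local address that the NameNode is allowed to bind.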

Hadoop configuration files:

core-site.xml
<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://testhadooptest:9000</value>
   </property>
</configuration>

hdfs-site.xml
<configuration>
    <property>
       <name>dfs.replication</name>
       <value>1</value>
    </property>
    <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:/home/hduser/mydata/hdfs/namenode</value>
    </property>
    <property>
       <name>dfs.datanode.data.dir</name>
       <value>file:/home/hduser/mydata/hdfs/datanode</value>
    </property>
</configuration>

Hadoop version: 2.7.1

Thanks in advance :)

Best Answer

The problem was with the Azure servers: they do not allow binding to the public virtual IP (VIP) address, only to the internal IP address. So I created a virtual network and added all 3 nodes to it.

I also replaced public_ip with internal_ip in /etc/hosts on every machine.

New /etc/hosts:

127.0.0.1 localhost

internal_ip testhadooptest
internal_ip tesdatanode

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts 
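After editing /etc/hosts this way, a quick sanity check is to confirm that the NameNode hostname now resolves to an address the machine can actually bind, since that is exactly the condition the RPC server needs at startup. A small sketch (the hostname `testhadooptest` is the one from the question; `localhost` is used in the demo call so the snippet runs anywhere):

```python
import socket

def can_bind_hostname(hostname, port=0):
    """Resolve hostname and report whether this machine can bind the result.

    Returns (ok, ip): ok is True if a TCP socket bound to ip successfully,
    mirroring what the NameNode RPC server does with fs.default.name.
    """
    ip = socket.gethostbyname(hostname)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return True, ip
    except OSError:
        return False, ip
    finally:
        s.close()

# On the fixed cluster you would call can_bind_hostname("testhadooptest")
# and expect True. "localhost" is always bindable, so it works as a demo;
# a hostname still mapped to the public VIP would return False.
ok, ip = can_bind_hostname("localhost")
print(ok, ip)
```

Running this on the namenode host before start-dfs.sh catches a stale /etc/hosts entry without having to read through the full BindException stack trace.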

Regarding azure - Cannot start namenode: Cannot assign requested address, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35313781/
