Hadoop NameNode not starting

Tags: hadoop bigdata

I am trying to configure Hadoop in fully distributed mode, with 1 master and 1 slave as separate nodes. I have attached a screenshot showing the status of my master and slave nodes.

On the master:

ubuntu@hadoop-master:/usr/local/hadoop/etc/hadoop$ $HADOOP_HOME/bin/hdfs dfsadmin -refreshNodes

refreshNodes: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "hadoop-master/127.0.0.1"; destination host is: "hadoop-master":8020;

This is the error I get when I try to run the refreshNodes command. Can anyone tell me what I am missing or what mistake I have made?

Master & Slave Screenshot
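A protobuf "invalid wire type" error from an HDFS client usually means the client reached a port that is not speaking the NameNode RPC protocol (for example the HTTP port 50070 instead of the RPC port 8020), or that client and server versions differ. A quick way to check where the client is pointed, assuming the /usr/local/hadoop layout shown in the prompt above:

    # Print the client-side NameNode address; it should name the RPC port (8020),
    # not the web UI port (50070)
    $HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS
    # Inspect the raw setting as well (prints the property name and its value line)
    grep -A1 'fs.defaultFS' /usr/local/hadoop/etc/hadoop/core-site.xml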

2016-04-26 03:29:17,090 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2016-04-26 03:29:17,095 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-04-26 03:29:17,095 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-04-26 03:29:17,095 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-04-26 03:29:17,096 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-04-26 03:29:17,097 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [hadoop-master:8020] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
    at org.apache.hadoop.ipc.Server.bind(Server.java:425)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:938)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:783)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:344)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:673)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:463)
    at sun.nio.ch.Net.bind(Net.java:455)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.ipc.Server.bind(Server.java:408)
    ... 13 more
2016-04-26 03:29:17,103 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-04-26 03:29:17,109 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-master/127.0.0.1
************************************************************/
ubuntu@hadoop-master:/usr/local/hadoop$
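The decisive line in this log is the BindException: something is already listening on hadoop-master:8020, so the new NameNode process cannot start. A sketch for tracking down and clearing the conflict (the PID placeholder is hypothetical; substitute whatever the commands report):

    # List running Hadoop JVMs; a leftover NameNode process would explain the bind failure
    jps
    # Identify which process is holding port 8020
    sudo netstat -tlnp | grep 8020
    # Stop HDFS cleanly, then kill any surviving process by its PID
    $HADOOP_HOME/sbin/stop-dfs.sh
    kill <pid>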

Best Answer

HDFS needs to be formatted. Just issue one of the following commands:

hadoop namenode -format

or

hdfs namenode -format
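Note that formatting erases all existing HDFS metadata, so it is only appropriate on a fresh or disposable cluster. A minimal sketch of the full sequence, assuming the stock sbin scripts that ship with Hadoop:

    # Stop HDFS first so nothing is holding the old metadata or port 8020
    $HADOOP_HOME/sbin/stop-dfs.sh
    # Format the NameNode (destroys existing HDFS data)
    $HADOOP_HOME/bin/hdfs namenode -format
    # Bring HDFS back up and confirm the NameNode is running
    $HADOOP_HOME/sbin/start-dfs.sh
    jps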

Regarding "Hadoop NameNode not starting", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/36850500/
