hadoop - Hadoop cluster node count

Tags: hadoop

I am trying to set up a Hadoop multi-node cluster.

When I start the cluster, this is the output I get in the console:

hduser@hadoop-master:/usr/local/hadoop$ /usr/local/hadoop/sbin/start-dfs.sh
Starting namenodes on [hadoop-master]
hadoop-master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-hadoop-master.out
hadoop-master: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-hadoop-master.out
hadoop-child: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-hadoop-child.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-hadoop-master.out
hduser@hadoop-master:/usr/local/hadoop$ jps
21079 NameNode
21258 DataNode
21479 SecondaryNameNode
21600 Jps
hduser@hadoop-master:/usr/local/hadoop$ /usr/local/hadoop/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-hadoop-master.out
hadoop-child: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-hadoop-child.out
hadoop-master: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-hadoop-master.out
hduser@hadoop-master:/usr/local/hadoop$ jps
21079 NameNode
21258 DataNode
22117 Jps
21815 NodeManager
21479 SecondaryNameNode
21658 ResourceManager

You can see that a datanode is being started on the hadoop-child machine as well.

Now, when I try to get the information for all the nodes, I don't see all of them listed:

hduser@hadoop-master:/usr/local/hadoop$ bin/hdfs dfsadmin -report
Configured Capacity: 21103243264 (19.65 GB)
Present Capacity: 17825124352 (16.60 GB)
DFS Remaining: 17821085696 (16.60 GB)
DFS Used: 4038656 (3.85 MB)
DFS Used%: 0.02%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:50010 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 21103243264 (19.65 GB)
DFS Used: 4038656 (3.85 MB)
Non DFS Used: 3278118912 (3.05 GB)
DFS Remaining: 17821085696 (16.60 GB)
DFS Used%: 0.02%
DFS Remaining%: 84.45%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Feb 26 17:13:04 UTC 2017

I need to see both the master and the child here. I have 1 master node and 1 child node.
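A quick cross-check from the master, using the standard Hadoop CLI and the same install path as above (no other assumptions), shows what each layer thinks is registered:

# from the Hadoop install directory on the master
cd /usr/local/hadoop
# datanodes currently registered with the namenode (the same report as above)
bin/hdfs dfsadmin -report
# nodemanagers currently registered with the resourcemanager
bin/yarn node -list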

******************** FIX ********************

As per Frank's answer, this is how it was fixed:

  1. Edit the /etc/hosts file and specify the IP addresses of the master and the child.

Changed the following 2 lines

127.0.0.1 localhost hadoop-master
961.118.98.183 hadoop-child

to (on both nodes)

127.0.0.1 localhost 
961.118.99.251 hadoop-master
961.118.98.183 hadoop-child

2. Restart the cluster in the order below; the namenode has to be reformatted and the datanode data directories removed.

The format will remove only the metadata; the datanode's data directories will still be using the old namenode's identity, which will cause the datanode to fail (so delete those directories as well).

In this order:
1) stop the cluster
2) rm -rf /path/to/datanode/data/dir (on both nodes)
3) hadoop namenode -format
4) start the cluster
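Put together as actual commands, the sequence is roughly the sketch below. It assumes the /usr/local/hadoop layout used above, and /path/to/datanode/data/dir is only a placeholder for whatever dfs.datanode.data.dir points to in hdfs-site.xml on each node:

# 1) stop the cluster (run on the master)
/usr/local/hadoop/sbin/stop-yarn.sh
/usr/local/hadoop/sbin/stop-dfs.sh

# 2) remove the old datanode data directories (run on BOTH nodes);
#    the path below is a placeholder for the dfs.datanode.data.dir value
rm -rf /path/to/datanode/data/dir

# 3) reformat the namenode (master only; same as 'hadoop namenode -format')
/usr/local/hadoop/bin/hdfs namenode -format

# 4) start the cluster again (run on the master)
/usr/local/hadoop/sbin/start-dfs.sh
/usr/local/hadoop/sbin/start-yarn.sh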

Best Answer

When setting up a multi-node environment, the namenode and resourcemanager addresses must be defined explicitly.

Add this property to core-site.xml on both nodes:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop-master:8020</value>
</property>
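For clarity, the property sits inside the <configuration> root element; a minimal core-site.xml would look like the sketch below (only fs.defaultFS is shown, any properties you already have stay as they are):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:8020</value>
  </property>
</configuration>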

And in yarn-site.xml:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop-master</value>
</property>

Make sure the IP-address-to-hostname mappings are in place in the /etc/hosts file on all nodes.

Note: If the services are already running, stop them, add these properties, and then restart them.
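Two sanity checks after the restart can confirm the change took effect; these are standard commands and assume only the hostnames and install path already used above:

# on each node: both names should resolve to the real IPs, not 127.0.0.1
getent hosts hadoop-master hadoop-child

# on each node: should now print hdfs://hadoop-master:8020
/usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS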

Regarding hadoop - Hadoop cluster node count, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/42471625/
