java - Hadoop multi-node cluster setup

Tags: java apache hadoop

I am trying to set up a multi-node cluster in Hadoop, but I am getting 0 datanodes listed as active datanodes, and my HDFS shows 0 bytes allocated.

However, the NodeManager daemon is running on the datanode.

Masters: masterhost1 172.31.100.3 # namenode (also acts as secondary namenode)

datahost1 172.31.100.4 #datanode
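For the DataNode to register with the NameNode by hostname, both machines typically need consistent name resolution; a minimal `/etc/hosts` sketch using the IPs above (an assumption about this setup, not something shown in the question):

```
172.31.100.3    masterhost1
172.31.100.4    datahost1
```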

The datanode log is as follows:

```
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc865b490b9a6260e9611a5b8633cab885b3d247; compiled by 'jenkins' on 2015-12-18T01:19Z
STARTUP_MSG:   java = 1.8.0_71
************************************************************/
2016-01-24 03:53:28,368 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-01-24 03:53:28,862 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:36,454 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2016-01-24 03:53:37,132 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is datahost1
2016-01-24 03:53:37,142 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2016-01-24 03:53:37,195 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2016-01-24 03:53:47,331 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-01-24 03:53:47,375 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2016-01-24 03:53:47,395 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-24 03:53:47,400 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2016-01-24 03:53:47,404 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-01-24 03:53:47,405 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-01-24 03:53:47,559 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-01-24 03:53:47,566 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2016-01-24 03:53:47,566 INFO org.mortbay.log: jetty-6.1.26
2016-01-24 03:53:48,565 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50075
2016-01-24 03:53:49,200 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop
2016-01-24 03:53:49,201 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = sudo
2016-01-24 03:53:59,319 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-24 03:53:59,354 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2016-01-24 03:53:59,401 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2016-01-24 03:53:59,450 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2016-01-24 03:53:59,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2016-01-24 03:53:59,491 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:59,499 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (Datanode Uuid unassigned) service to masterhost1/172.31.100.3:9000 starting to offer service
2016-01-24 03:53:59,503 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-24 03:53:59,504 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-01-24 03:54:00,805 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:01,808 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:02,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:03,826 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:04,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
```
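The tail of the log is the real clue: the DataNode starts all of its own services fine, then `org.apache.hadoop.ipc.Client` repeatedly retries `masterhost1/172.31.100.3:9000` (up to `maxRetries=10`). The DataNode can resolve the master's hostname but cannot reach the NameNode RPC port, which is exactly why the NameNode reports 0 active datanodes. A minimal sketch for confirming the symptom from a log excerpt (the `/tmp/datanode.log` path and excerpt are illustrative, not from a real install):

```shell
# Write a small excerpt of the retry lines to a scratch file,
# then count them; a steadily growing count across restarts means
# the NameNode RPC port (9000 here) is unreachable from the DataNode.
cat <<'EOF' > /tmp/datanode.log
2016-01-24 03:54:00,805 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 0 time(s)
2016-01-24 03:54:01,808 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 1 time(s)
EOF
grep -c 'Retrying connect to server' /tmp/datanode.log   # prints 2
```

On a live cluster the same `grep` against the real DataNode log under `$HADOOP_HOME/logs` tells you quickly whether registration is failing at the network level or further along.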

Best Answer

The problem is with incoming connections: the namenode is not receiving anything from the datanode. This is caused by an IPv6 issue. Simply disable IPv6 on the master node, check the listening ports with netstat, and the problem above should be resolved.
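One way to apply that fix (a sketch, assuming a Linux master node; paths and values are the common convention, not taken from the question) is to disable IPv6 via `/etc/sysctl.conf` on masterhost1, reload, restart HDFS, and then confirm that port 9000 is bound to an IPv4 address rather than an IPv6 `:::9000` socket:

```
# /etc/sysctl.conf additions on the master (requires root; apply with `sudo sysctl -p`)
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

# After restarting HDFS, verify the NameNode RPC port:
#   netstat -tlnp | grep 9000
# Expect a tcp (IPv4) listener such as 172.31.100.3:9000 or 0.0.0.0:9000,
# not a tcp6 :::9000 one. Then confirm registration with:
#   hdfs dfsadmin -report
```

An alternative to disabling IPv6 system-wide is to make only the Hadoop JVMs prefer IPv4 by adding `-Djava.net.preferIPv4Stack=true` to `HADOOP_OPTS` in `hadoop-env.sh`.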

Regarding "java - Hadoop multi-node cluster setup", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34975865/
