java - Just upgraded my Hadoop cluster to 2.4.1, and everything seemed fine

Tags: java apache web-services hadoop

After configuring the nodes and running start-all.sh, all the nodes report that they have started, but when I look at the logs on the slave nodes I see the following:

2014-08-05 06:41:05,790 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-08-05 06:41:05,791 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8010: starting
2014-08-05 06:41:14,604 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
2014-08-05 06:41:14,711 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /hadoop/hdfs/namenode/in_use.lock acquired by nodename 4796@hadoop03
2014-08-05 06:41:14,997 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-633751026-127.0.1.1-1407152865456
2014-08-05 06:41:14,997 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2014-08-05 06:41:15,025 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
2014-08-05 06:41:15,211 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=298887827;bpid=BP-633751026-127.0.1.1-1407152865456;lv=-55;nsInfo=lv=-56;cid=CID-a343ba30-a7b$
2014-08-05 06:41:15,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2014-08-05 06:41:15,233 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /hadoop/hdfs/namenode/current, StorageType: DISK
2014-08-05 06:41:15,293 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2014-08-05 06:41:15,296 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1407257140296 with interval 21600000
2014-08-05 06:41:15,296 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-633751026-127.0.1.1-1407152865456
2014-08-05 06:41:15,297 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-633751026-127.0.1.1-1407152865456 on volume /hadoop/hdfs/namenode/current...
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-633751026-127.0.1.1-1407152865456 on /hadoop/hdfs/namenode/curren$
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-633751026-127.0.1.1-1407152865456: 188ms
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-633751026-127.0.1.1-1407152865456 on volume /hadoop/hdfs/$
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-633751026-127.0.1.1-1407152865456 on volume /hadoop/$
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 0ms
2014-08-05 06:41:15,486 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-633751026-127.0.1.1-1407152865456 (Datanode Uuid null) service to /192.168.0.5:8020 beginning handshake $
2014-08-05 06:41:30,664 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-633751026-127.0.1.1-1407152865456 (Datanode Uuid null) service to /192.168.0.$
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:806)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4240)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:992)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28057)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

Can anyone offer any insight into what is happening in my cluster?
I can provide the full configuration files and more information if needed.

Best answer

After updating Hadoop to 2.4.1 on each node, did you also update the configuration files?
If you did, please share your log and configuration files.
I think the problem lies in how the Hadoop data-directory properties are initialized in core-site.xml.
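The WARN line in the log ("Path /hadoop/hdfs/namenode should be specified as a URI in configuration files") also suggests the storage directories are still declared as bare paths. A minimal sketch of what the relevant properties might look like on Hadoop 2.4.x follows; the directory paths and the namenode address are assumptions read off the log above, not confirmed values from this cluster:

```xml
<!-- core-site.xml: the default filesystem; 192.168.0.5:8020 appears in the log -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.0.5:8020</value>
</property>

<!-- hdfs-site.xml: declare storage directories as file:// URIs to silence
     the WARN above. Note the datanode log is scanning /hadoop/hdfs/namenode,
     which looks like a namenode directory; the separate datanode directory
     shown here (/hadoop/hdfs/datanode) is an assumed, typical layout. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///hadoop/hdfs/datanode</value>
</property>
```

After changing these properties the datanodes need to be restarted, and an existing datanode storage directory must carry the same clusterID in its current/VERSION file as the namenode, or registration with the namenode will fail.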

For "java - Just upgraded my Hadoop cluster to 2.4.1", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/25138394/
