hadoop - HBase master stops with "Connection refused" error

Tags: hadoop hbase cloudera

This happens in both pseudo-distributed and distributed mode. When I try to start HBase, all three services - master, region server and quorumpeer - initially come up. Within a minute or so, however, the master stops. This is the trace in the logs -

2013-05-06 20:10:25,525 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 0 time(s).
2013-05-06 20:10:26,528 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 1 time(s).
2013-05-06 20:10:27,530 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 2 time(s).
2013-05-06 20:10:28,533 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 3 time(s).
2013-05-06 20:10:29,535 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 4 time(s).
2013-05-06 20:10:30,538 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 5 time(s).
2013-05-06 20:10:31,540 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 6 time(s).
2013-05-06 20:10:32,543 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 7 time(s).
2013-05-06 20:10:33,544 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 8 time(s).
2013-05-06 20:10:34,547 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 9 time(s).
2013-05-06 20:10:34,550 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to <master/master_ip>:9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1179)
        at org.apache.hadoop.ipc.Client.call(Client.java:1155)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy9.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:132)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:259)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:220)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1611)
        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:68)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1645)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1627)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:183)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:86)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:368)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:301)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:519)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:484)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:468)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:575)
        at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:212)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1292)
        at org.apache.hadoop.ipc.Client.call(Client.java:1121)
        ... 18 more

These are the steps I have already taken to resolve this, with no success:

- Switched from distributed mode down to pseudo-distributed mode. Same issue.
- Tried standalone mode - no luck.
- Used the same user (hadoop) for both hadoop and hbase, and set up passwordless ssh for hadoop. Same issue.
- Edited the /etc/hosts file and changed localhost/servername as well as 127.0.0.1 to the actual IP address, following SO and various other sources. Still the same issue.
- Restarted the servers.
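For context, since the stack trace shows the DFS client being refused on <master>:9000, one quick sanity check is whether the NameNode is actually up and listening on that port before HBase is started. A rough sketch, assuming shell access on the master node (commands are illustrative, not output from my setup):

    # Is the NameNode process running at all?
    jps

    # Is anything listening on port 9000, and on which address (127.0.0.1 vs. the real IP)?
    netstat -tlnp | grep 9000

    # Can HDFS be reached with the same URI that hbase.rootdir uses?
    hadoop fs -ls hdfs://<master>:9000/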

Here are the conf files.

hbase-site.xml

<configuration>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://<master>:9000/hbase</value>
        <description>The directory shared by regionservers.</description>
</property>

<property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
</property>

<property>
        <name>hbase.zookeeper.quorum</name>
        <value><master></value>
</property>

<property>
        <name>hbase.master</name>
        <value><master>:60000</value>
        <description>The host and port that the HBase master runs at.</description>
</property>

<property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>The replication count for HLog and HFile storage. Should not be greater than HDFS datanode count.</description>
</property>

</configuration>

/etc/hosts file

127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

What am I doing wrong here?

Hadoop version - Hadoop 0.20.2-cdh3u5
HBase version - Version 0.90.6-cdh3u5

Best Answer

Going by your configuration files, I assume you are using the actual hostname in them. If that is the case, add the hostname along with the machine's IP address to the /etc/hosts file. Also make sure it matches the hostname in Hadoop's core-site.xml. Proper name resolution is vital for HBase to function correctly.
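As an illustration only (using the same <master>/<master_ip> placeholders as above, not values from your setup), the /etc/hosts entry would look something like:

    # map the real IP to the hostname used in the configs - placeholders, replace with your own
    <master_ip>    <master>

and fs.default.name in core-site.xml should use the same hostname and port that hbase.rootdir points at:

    <property>
      <name>fs.default.name</name>
      <value>hdfs://<master>:9000</value>
    </property>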

If you are still facing any issue, follow the steps mentioned here accordingly. I have tried to explain the process in detail, and hopefully you will be able to get it running if you follow all the steps carefully.

HTH

Regarding hadoop - HBase master stops with "Connection refused" error, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16401610/
