hadoop - HBase connection issue and unable to create a table

Tags: hadoop configuration hbase apache-zookeeper nosql

I am running a multi-node cluster with hadoop-1.0.3 (on both nodes), HBase-0.94.2 (on both nodes), and zookeeper-3.4.6 (on the master only).

Master: 192.168.0.1
Slave:  192.168.0.2

HBase is not working properly: I get errors whenever I try to create a table, and of course I cannot reach the HBase status UI at http://master:60010 either. Please help!

Here are all my configuration files:

(hadoop conf) core-site.xml (identical on master and slave):

<configuration>
 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
 </property>
</configuration>

(hbase conf) hbase-site.xml:
<configuration>

<property>
      <name>hbase.rootdir</name>
      <value>hdfs://master:54310/hbase</value>
</property>

<property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
</property>

<property>
      <name>hbase.zookeeper.quorum</name>
      <value>master,slave</value>
</property>

<property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2222</value>
</property>

<property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/usr/local/hadoop/zookeeper</value>
</property>

</configuration>
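A side note on the ZooKeeper settings above: with hbase.cluster.distributed set to true, HBase 0.94 still manages its own ZooKeeper quorum by default (HBASE_MANAGES_ZK defaults to true in hbase-env.sh). If the standalone zookeeper-3.4.6 mentioned in the question is meant to be used instead, hbase-env.sh needs `export HBASE_MANAGES_ZK=false`, and that server's own zoo.cfg must use the same clientPort as hbase.zookeeper.property.clientPort (2222 here). Also note the quorum lists both hosts while the question says ZooKeeper runs on the master only; for a single external server the quorum value would be just `master`. A hypothetical zoo.cfg sketch (only clientPort and dataDir are taken from the config above; the rest are typical defaults, not values from the question):

```ini
# zoo.cfg on the master (sketch; must agree with hbase-site.xml above)
tickTime=2000
dataDir=/usr/local/hadoop/zookeeper
clientPort=2222
```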

/etc/hosts (on both nodes):
192.168.0.1 master
192.168.0.2 slave
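On a live node you would check the real resolution with something like `getent hosts master`; the sketch below just encodes the two entries from the question and flags the classic failure mode, where a hostname resolves to a loopback address such as 127.0.1.1 (a common Ubuntu default that is known to break HBase and should be commented out on both nodes):

```shell
# Minimal sketch: verify the /etc/hosts mapping above maps each hostname
# to a non-loopback address (hosts_entries copied from the question).
hosts_entries='192.168.0.1 master
192.168.0.2 slave'

resolve() {  # print the IP mapped to a hostname in the entries above
  printf '%s\n' "$hosts_entries" | awk -v n="$1" '$2 == n { print $1 }'
}

for h in master slave; do
  case "$(resolve "$h")" in
    127.*) echo "WARNING: $h resolves to loopback" ;;
    *)     echo "$h -> $(resolve "$h")" ;;
  esac
done
```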

regionservers file:
master
slave

Here is the log file, hbase-hduser-regionserver-master.log:
2014-12-24 02:12:13,190 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
2014-12-24 02:12:14,002 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server master/192.168.0.1:2181
2014-12-24 02:12:14,003 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2014-12-24 02:12:14,004 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to master/192.168.0.1:2181, initiating session
2014-12-24 02:12:14,005 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2014-12-24 02:12:14,675 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
2014-12-24 02:12:14,676 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server master,60020,1419415915643: Initialization of RS failed.  Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
    at org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
    at java.lang.Thread.run(Thread.java:745)
2014-12-24 02:12:14,676 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2014-12-24 02:12:14,676 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization of RS failed.  Hence aborting RS.
2014-12-24 02:12:14,683 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer MXBean
2014-12-24 02:12:14,689 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
2014-12-24 02:12:14,689 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
2014-12-24 02:12:14,690 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
2014-12-24 02:12:14,691 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
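Two things stand out in this log. First, the NoRouteToHostException is typically a firewall (iptables) or hostname-resolution problem between the nodes rather than an HBase bug. Second, the client is dialing master:2181 while hbase-site.xml sets hbase.zookeeper.property.clientPort to 2222, and those two must agree. A short sketch extracting the dialed port from the log line above (the `nc` probe in the comment is the standard ZooKeeper four-letter-word liveness check, to be run on a live node):

```shell
# Log line copied from the regionserver log above:
line='Opening socket connection to server master/192.168.0.1:2181'
port="${line##*:}"   # everything after the last colon
echo "client dialed port $port"
# Compare with hbase.zookeeper.property.clientPort (2222 in hbase-site.xml).
# On a live node, probe the running ZooKeeper directly:
#   echo ruok | nc master 2181   # a healthy server answers "imok"
```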

Best answer

I think you should use master instead of localhost in core-site.xml.

Also add the slave node's hostname to the slaves file in the hadoop conf directory.
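For reference, with the topology in the question the hadoop conf/slaves file would plausibly look like this (a sketch; listing master here too is an assumption that the master should also run a DataNode/TaskTracker, which matches the two-node setup described):

```text
# conf/slaves (hadoop conf directory on the master node)
master
slave
```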

The core-site.xml on both master and slave should then look like this:

<configuration>
 <property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
 </property>
</configuration>

Also make sure that both master and slave are listed in the regionservers file on both nodes, since both hosts are supposed to run a region server.
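After changing core-site.xml on both nodes, the stack needs a restart for the new fs.default.name to take effect. A hedged sketch of the restart order, using the stock hadoop-1.0.3 and hbase-0.94 scripts (the /usr/local paths are an assumption; each script is only invoked if it actually exists, so the loop is safe to paste on a node with a different layout):

```shell
# Restart order: stop HBase first, then Hadoop; start in the reverse order.
steps='/usr/local/hbase/bin/stop-hbase.sh
/usr/local/hadoop/bin/stop-all.sh
/usr/local/hadoop/bin/start-all.sh
/usr/local/hbase/bin/start-hbase.sh'

printf '%s\n' "$steps" | while read -r s; do
  echo "step: $s"
  [ -x "$s" ] && "$s" || true   # skip silently if the path differs
done
```

Afterwards, `jps` should roughly show NameNode, JobTracker, HMaster, and HRegionServer on the master and DataNode, TaskTracker, and HRegionServer on the slave, and the UI at http://master:60010 should become reachable again.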

Regarding "hadoop - HBase connection issue and unable to create a table", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/27635845/
