I am simply trying to install Hypertable on Hadoop by following the official documentation.
First I deployed CDH4 in pseudo-distributed mode on a CentOS 6.5 (32-bit) node,
then installed Hypertable on that Hadoop setup per the official Hypertable docs.
When I run
cap start -f Capfile.cluster
I get a "DfsBroker did not come up" error:
* executing `start'
** transaction: start
* executing `start_servers'
* executing `start_hyperspace'
* executing "/opt/hypertable/current/bin/start-hyperspace.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg"
servers: ["master"]
[master] executing command
** [out :: master] Started Hyperspace
command finished in 6543ms
* executing `start_master'
* executing "/opt/hypertable/current/bin/start-dfsbroker.sh hadoop --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-monitoring.sh"
servers: ["master"]
[master] executing command
** [out :: master] DFS broker: available file descriptors: 65536
** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
** [out :: master] ERROR: DFS Broker (hadoop) did not come up
command finished in 129114ms
failed: "sh -c '/opt/hypertable/current/bin/start-dfsbroker.sh hadoop --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-monitoring.sh'" on master
I checked DfsBroker.hadoop.log under /opt/hypertable/0.9.7.16 and it contains:
/opt/hypertable/current/bin/jrun: line 113: exec: java: not found
But my JAVA_HOME is set, and java runs fine when I test it with
java -version
I also tried running jrun by hand, and it did not report "exec: java: not found".
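One plausible explanation for this mismatch, worth ruling out: Capistrano runs its commands over a non-interactive SSH shell, which usually does not source ~/.bashrc, so a PATH exported there is visible when you run jrun by hand but not when cap starts it. A minimal diagnostic sketch (not from the original post; the JAVA_HOME path in the comment is an assumption):

```shell
#!/bin/sh
# Compare where `java` resolves in the current (interactive) shell
# versus a fresh non-interactive shell, which is roughly the
# environment a Capistrano-launched script sees.

interactive_java=$(command -v java || echo "NOT FOUND")
echo "interactive shell:     java = $interactive_java"

noninteractive_java=$(sh -c 'command -v java || echo "NOT FOUND"')
echo "non-interactive shell: java = $noninteractive_java"

# If java is NOT FOUND only in the second case, put the export in a
# file every shell reads, e.g. /etc/profile.d/java.sh
# (the JAVA_HOME value below is an assumption -- use your install path):
#   export JAVA_HOME=/usr/java/default
#   export PATH="$JAVA_HOME/bin:$PATH"
```

If the second line reports NOT FOUND, that matches the `exec: java: not found` message jrun prints when started under cap.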
After googling I found similar questions,
but I have already tried every solution I could find, for example:
/opt/hypertable/current/bin/set-hadoop-distro.sh cdh4
which only reports:
Hypertable successfully configured for Hadoop cdh4
So I would be grateful for any hints on this problem.
Best answer
Before starting the cluster, you must run:
cap fhsize -f Capfile.cluster
Then you can check that everything is set up properly:
ls -laF /opt/hypertable/current/lib/java/*.jar
and the Java version check should work as well:
/opt/hypertable/current/bin/jrun -version
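The checks in this answer can be bundled into one pre-flight script to run before `cap start` (a sketch; the /opt/hypertable/current layout and broker port 38030 are taken from the question's logs, not from the Hypertable docs):

```shell
#!/bin/sh
# Pre-flight checks before `cap start -f Capfile.cluster`.
# Paths follow the layout seen in the question; adjust if yours differ.
HT=/opt/hypertable/current

# 1. `cap fhsize` should have populated lib/java with jar files.
jars=$(ls "$HT"/lib/java/*.jar 2>/dev/null | wc -l | tr -d ' ')
echo "jars in lib/java: $jars"

# 2. jrun must be able to find java (the failure seen in the log).
jrun_out=$("$HT/bin/jrun" -version 2>&1 | head -n 1)
echo "jrun says: $jrun_out"

# 3. Check whether the DFS broker port from the log is reachable.
netstat -lnt 2>/dev/null | grep -q ':38030 ' \
  && echo "port 38030: something is listening" \
  || echo "port 38030: nothing listening yet"
```

If step 1 reports zero jars, rerun `cap fhsize -f Capfile.cluster`; if step 2 does not print a Java version string, revisit the PATH/JAVA_HOME setup before retrying `cap start`.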
More information can be found in the quick start guide.
Regarding "hadoop - DfsBroker fails to start when configuring Hypertable to run on Hadoop", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/23167947/