hadoop - All nodes failing to start

Tags: hadoop hdfs hadoop-yarn

I have set up my configuration files and formatted my filesystem, but whenever I try to run the startup shell scripts I get the error below.

The alias behind hstart is included further down.

Error:

computer:~ seanplowman$ hstart
18/04/14 23:34:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans
localhost: Error: Could not find or load main class Mac.log

localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-datanode-Seans
localhost: Error: Could not find or load main class Mac.log

Starting secondary namenodes [0.0.0.0]
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-secondarynamenode-Seans
0.0.0.0: Error: Could not find or load main class Mac.log

18/04/14 23:35:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
/usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-resourcemanager-Seans
Error: Could not find or load main class Mac.log

localhost: /usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-nodemanager-Seans
localhost: Error: Could not find or load main class Mac.log

jps also shows that none of the nodes came up after running the start scripts. From my research it seemed my hostnames might be the problem, but changing them did not fix anything.
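For comparison, a successful single-node start should leave all five daemons visible to jps, something like this (the PIDs are just examples):

$ jps
21298 NameNode
21376 DataNode
21459 SecondaryNameNode
21553 ResourceManager
21642 NodeManager
21734 Jps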

I will include the other configuration files below to show how they are set up, for context.

/usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
  <description>The name of the default file system.  A URI whose scheme and 
  authority determine the FileSystem implementation.  The uri's scheme determines 
  the config property (fs.SCHEME.impl) naming the FileSystem implementation
  class.  The uri's authority is used to determine the host, port, etc. for a filesystem.
  </description>
 </property>
</configuration>

/usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>

/usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:9010</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>

I also made some changes to hadoop-env.sh; they are shown below.

/usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="

.bashrc

#Hadoop variables
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
###end of paste

.bash_profile

alias hstart="/usr/local/hadoop/sbin/start-dfs.sh;/usr/local/hadoop/sbin/start-yarn.sh"
alias hstop="/usr/local/hadoop/sbin/stop-yarn.sh;/usr/local/hadoop/sbin/stop-dfs.sh"

I am not sure what steps to take next, having looked at just about every file involved.

Best Answer

I think the space in your Mac's hostname is the problem, e.g. Seans Mac.

The default log files are named using it:

HDFS: log=$HADOOP_LOG_DIR/hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.out
YARN: log=$YARN_LOG_DIR/yarn-$YARN_IDENT_STRING-$command-$HOSTNAME.out

where $HOSTNAME is the problem; a space there is unexpected.

If you look at the output, you'll notice hadoop-seanplowman-namenode-Seans, so matching the full log name hadoop-seanplowman-namenode-Seans Mac.out against the pattern above, I suspect

HADOOP_IDENT_STRING = the user running the scripts = seanplowman
command = namenode
HOSTNAME = Seans Mac
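To see why a space in $HOSTNAME produces exactly these two errors, here is a minimal sketch of the failure mode (simplified from the daemon scripts; exact line numbers vary by release, and the paths are illustrative):

# The daemon script builds the log name from $HOSTNAME...
log="/usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans Mac.out"

# ...and passes it to its log-rotation helper unquoted, so the space
# splits it into two arguments:
hadoop_rotate_log ()
{
    log=$1                     # receives only "...-Seans"
    num=5
    if [ -n "$2" ]; then
        num=$2                 # receives "Mac.out"
    fi
    while [ $num -gt 1 ]; do   # [: Mac.out: integer expression expected
        num=$(expr $num - 1)
    done
}
hadoop_rotate_log $log

# The same word splitting happens when the log name is embedded in the
# java options, so java takes the stray "Mac.log" token as its first
# non-option argument and reports:
#   Error: Could not find or load main class Mac.log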

See if fixing the hostname so that it has no spaces changes anything.
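On a Mac the hostname can be checked and changed with scutil (a sketch; "Seans-Mac" is only an example space-free name):

hostname                          # a space in the output confirms the diagnosis
sudo scutil --set HostName Seans-Mac
sudo scutil --set LocalHostName Seans-Mac
sudo scutil --set ComputerName Seans-Mac

After changing it, re-run hstart and check whether the generated .out/.log names are now intact.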

If not, edit both the yarn-daemon.sh and hadoop-daemon.sh scripts so that they start with

#!/usr/bin/env bash
set -xv

Then edit the question with the output.
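With tracing enabled, capturing the re-run makes the offending expansion easy to spot (the tee path is only an example):

/usr/local/hadoop/sbin/start-dfs.sh 2>&1 | tee /tmp/start-dfs-trace.txt

Every command is echoed as it is expanded, so search the trace for the first line where the log name splits at the space.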

Regarding hadoop - all nodes failing to start, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49838686/
