While learning Hadoop and Spark on my own, I found the environment setup stage very tedious: you have to stand up a small cluster and repeat the installation and configuration of every dependency (Java, Hadoop, Spark) on every node. Is there a way to automate this with a bash script?
Best Answer
I took a first stab at it, and bash can definitely do the job. Once passwordless SSH is set up correctly across the cluster, the bash script below is a good start. It still needs some polish, though... The sample script only covers Hadoop/YARN for now and is a work in progress. The same approach can be used to install Java and Spark on all the nodes (see the sketch after the script). I will update this answer once it's done ;)
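For completeness, here is a minimal sketch of the one-time passwordless SSH setup the script relies on; it assumes the hduser account and the slave hostnames that appear in the script below.

# Run once on the master as hduser: generate a key and push it to each slave
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for x in hadoop-slave1 hadoop-slave2 hadoop-slave3
do
    ssh-copy-id hduser@$x
done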
#!/bin/bash
# Run from the master node; assumes passwordless SSH to every slave.
for x in hadoop-slave1 hadoop-slave2 hadoop-slave3
do
ssh $x 'bash -s' <<'ENDSSH'
cd ~
wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar -xzvf hadoop-2.7.3.tar.gz
ln -s hadoop-2.7.3 hadoop
# Append the Hadoop environment to ~/.bashrc. The single quotes keep
# $HADOOP_PREFIX and $PATH literal, so they are expanded on the slave at
# login rather than on the master while this script runs.
echo '# HADOOP' >> ~/.bashrc
echo 'export HADOOP_PREFIX=/home/hduser/hadoop' >> ~/.bashrc
echo 'export HADOOP_HOME=$HADOOP_PREFIX' >> ~/.bashrc
echo 'export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin' >> ~/.bashrc
echo 'export HADOOP_COMMON_HOME=$HADOOP_PREFIX' >> ~/.bashrc
echo 'export HADOOP_MAPRED_HOME=$HADOOP_PREFIX' >> ~/.bashrc
echo 'export HADOOP_HDFS_HOME=$HADOOP_PREFIX' >> ~/.bashrc
echo 'export YARN_HOME=$HADOOP_PREFIX' >> ~/.bashrc
mkdir -p ~/tmp # create the tmp dir used and configured in hadoop
ENDSSH
# work around an issue I hit with the JAVA_HOME env var being lost on the
# slaves: bake the master's JAVA_HOME into each slave's hadoop-env.sh
ssh $x "echo export JAVA_HOME=$JAVA_HOME >> ~/hadoop/etc/hadoop/hadoop-env.sh"
# copy config files to the slaves; assumes the master is already set up - to be improved
scp ~/hadoop/etc/hadoop/core-site.xml hduser@$x:~/hadoop/etc/hadoop
scp ~/hadoop/etc/hadoop/hdfs-site.xml hduser@$x:~/hadoop/etc/hadoop
scp ~/hadoop/etc/hadoop/yarn-site.xml hduser@$x:~/hadoop/etc/hadoop
done
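The same pattern extends to Spark. Here is a minimal, untested sketch along those lines; the Spark 2.0.1 download URL and install paths are my assumptions, not part of the original answer:

#!/bin/bash
# Sketch: install Spark on every slave, mirroring the Hadoop loop above
for x in hadoop-slave1 hadoop-slave2 hadoop-slave3
do
ssh $x 'bash -s' <<'ENDSSH'
cd ~
wget https://archive.apache.org/dist/spark/spark-2.0.1/spark-2.0.1-bin-hadoop2.7.tgz
tar -xzf spark-2.0.1-bin-hadoop2.7.tgz
ln -s spark-2.0.1-bin-hadoop2.7 spark
# single quotes keep $PATH literal so it expands at login on the slave
echo 'export SPARK_HOME=/home/hduser/spark' >> ~/.bashrc
echo 'export PATH=$PATH:$SPARK_HOME/bin' >> ~/.bashrc
ENDSSH
done

After that, spark-shell --master yarn should pick up the cluster configuration on each node, provided HADOOP_CONF_DIR points at ~/hadoop/etc/hadoop.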
Regarding "bash - How to quickly setup Spark on YARN in a cluster with a bash script?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39807076/