hadoop - Starting an Apache Spark cluster

Tags: hadoop apache-spark configuration installation

I have installed Hadoop on my four-node cluster, and I have also installed Apache Spark on each node. I can ssh from the master to every slave without a password, and I can start my master node just fine. However, when I try to start Spark with /opt/spark/sbin/start-all.sh, I get the following errors:

starting org.apache.spark.deploy.master.Master, logging to /opt/spark/logs/spark-hduser-org.apache.spark.deploy.master.Master-1-lebron.out
doublet: chown: changing ownership of ‘/opt/spark/logs’: Operation not permitted
doublet: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-doublet.out
doublet: /opt/spark/sbin/spark-daemon.sh: line 149: /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-doublet.out: Permission denied
kyrie: chown: changing ownership of ‘/opt/spark/logs’: Operation not permitted
kyrie: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-kyrie.out
kyrie: /opt/spark/sbin/spark-daemon.sh: line 149: /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-kyrie.out: Permission denied
lebron: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-lebron.out
jr: chown: changing ownership of ‘/opt/spark/logs’: Operation not permitted
jr: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-jr.out
jr: /opt/spark/sbin/spark-daemon.sh: line 149: /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-jr.out: Permission denied
doublet: failed to launch org.apache.spark.deploy.worker.Worker:
doublet: tail: cannot open ‘/opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-doublet.out’ for reading: No such file or directory
doublet: full log in /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-doublet.out
kyrie: failed to launch org.apache.spark.deploy.worker.Worker:
kyrie: tail: cannot open ‘/opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-kyrie.out’ for reading: No such file or directory
jr: failed to launch org.apache.spark.deploy.worker.Worker:
kyrie: full log in /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-kyrie.out
jr: tail: cannot open ‘/opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-jr.out’ for reading: No such file or directory
jr: full log in /opt/spark/logs/spark-hduser-org.apache.spark.deploy.worker.Worker-1-jr.out

(My machines are named lebron (master), and kyrie, jr, and doublet (workers).)

Please help!

Best Answer

Starting the servers:

To start a standalone master server, run:

$ ./sbin/start-master.sh

Start one or more workers and connect them to the master with:

$ ./sbin/start-slave.sh <master-spark-URL>
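
For example, a minimal sketch using the host names from the question (this assumes the master runs on lebron and listens on Spark's default standalone port 7077; the actual URL to use is shown on the master's web UI at port 8080):

# On the master (lebron):
$ /opt/spark/sbin/start-master.sh

# On each worker (kyrie, jr, doublet):
$ /opt/spark/sbin/start-slave.sh spark://lebron:7077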

Have a look at this article: Apache Spark Cluster Installation and Configuration Guide
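
As a side note, the "chown: ... Operation not permitted" and "Permission denied" lines in the output above suggest that /opt/spark/logs on the workers is not writable by the user running the scripts (hduser). A minimal sketch of a fix, assuming hduser has sudo rights on each worker (the path and user name are taken from the log output; the appropriate group may differ on your setup):

# Run on each worker (kyrie, jr, doublet):
$ sudo chown -R hduser /opt/spark/logs    # give hduser ownership of the log directory
$ ls -ld /opt/spark/logs                  # verify the owner and write permission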

Regarding "hadoop - Starting an Apache Spark cluster", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40120385/
