apache-spark - Spark submit application master host

Tags: apache-spark spark-streaming hosts-file

I am new to Spark and I am having trouble submitting an application. I have set up a master node and two slave nodes running Spark, a single node running ZooKeeper, and a single node running Kafka. I want to launch a modified version of the Kafka wordcount example in Python using Spark Streaming.

To submit the application, I ssh into the Spark master node and run <path to spark home>/bin/spark-submit. If I specify the master by its IP, everything works fine: the application correctly consumes messages from Kafka, and in the Spark UI I can see it running on both slaves:

./bin/spark-submit --master spark://<spark master ip>:7077 --jars ./external/spark-streaming-kafka-assembly_2.10-1.3.1.jar ./examples/src/main/python/streaming/kafka_wordcount.py <zookeeper ip>:2181 test

But if I specify the master by its hostname:
./bin/spark-submit --master spark://spark-master01:7077 --jars ./external/spark-streaming-kafka-assembly_2.10-1.3.1.jar ./examples/src/main/python/streaming/kafka_wordcount.py zookeeper01:2181 test

then it hangs with these logs:
15/05/27 02:01:58 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:18 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:38 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:58 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/05/27 02:02:58 ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
15/05/27 02:02:58 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.

My /etc/hosts file looks like this:
<spark master ip> spark-master01
127.0.0.1 localhost

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
<spark slave-01 ip> spark-slave01
<spark slave-02 ip> spark-slave02
<kafka01 ip> kafka01
<zookeeper ip> zookeeper01

UPDATE

Here is the first part of the netstat -n -a output:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address           State
tcp        0      0 0.0.0.0:22              0.0.0.0:*                 LISTEN
tcp        0      0 <spark master ip>:22    <my laptop ip>:60113      ESTABLISHED
tcp        0    260 <spark master ip>:22    <my laptop ip>:60617      ESTABLISHED
tcp6       0      0 :::22                   :::*                      LISTEN
tcp6       0      0 <spark master ip>:7077  :::*                      LISTEN
tcp6       0      0 :::8080                 :::*                      LISTEN
tcp6       0      0 <spark master ip>:6066  :::*                      LISTEN
tcp6       0      0 127.0.0.1:60105         127.0.0.1:44436           TIME_WAIT
tcp6       0      0 <spark master ip>:43874 <spark master ip>:7077    TIME_WAIT
tcp6       0      0 127.0.0.1:51220         127.0.0.1:55029           TIME_WAIT
tcp6       0      0 <spark master ip>:7077  <spark slave 01 ip>:37061 ESTABLISHED
tcp6       0      0 <spark master ip>:7077  <spark slave 02 ip>:47516 ESTABLISHED
tcp6       0      0 127.0.0.1:51220         127.0.0.1:55026           TIME_WAIT

Best Answer

You are connecting by hostname rather than by IP address, so that hostname has to be listed in the /etc/hosts file of every node, not just the master. Add the hostname entries to /etc/hosts on each node and it will work.
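For reference, a minimal sketch of the /etc/hosts entries that every node would need (the master, both slaves, kafka01, zookeeper01, and the machine where spark-submit runs), keeping the question's placeholders in place of the real IP addresses:

# /etc/hosts — the same name entries on every node; replace the placeholders with real IPs
127.0.0.1 localhost
<spark master ip> spark-master01
<spark slave-01 ip> spark-slave01
<spark slave-02 ip> spark-slave02
<kafka01 ip> kafka01
<zookeeper ip> zookeeper01

Once the entries are in place, a quick check such as getent hosts spark-master01 (or ping spark-master01) on each machine confirms that the name resolves, and the spark-submit command with --master spark://spark-master01:7077 should then be able to reach the master.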

Regarding "apache-spark - Spark submit application master host", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30474508/
