docker - Worker containers fail to connect back to the Spark driver

Tags: docker, apache-spark

I set up my Dockerfile as follows:

# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License
ARG BASE_CONTAINER=jupyter/scipy-notebook
FROM $BASE_CONTAINER

LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"

USER root

# Spark dependencies
ENV SPARK_VERSION 2.3.2
ENV SPARK_HADOOP_PROFILE 2.7
ENV SPARK_SRC_URL https://www.apache.org/dist/spark/spark-$SPARK_VERSION/spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz
ENV SPARK_HOME=/opt/spark
ENV PATH $PATH:$SPARK_HOME/bin

RUN apt-get update && \
     apt-get install -y openjdk-8-jdk-headless \
     postgresql && \
    rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME  /usr/lib/jvm/java-8-openjdk-amd64/

ENV PATH $PATH:$JAVA_HOME/bin


RUN wget ${SPARK_SRC_URL}

RUN tar -xzf spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz   

RUN mv spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE} /opt/spark 

RUN rm -f spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz





USER $NB_UID
ENV POST_URL https://jdbc.postgresql.org/download/postgresql-42.2.5.jar
RUN wget ${POST_URL}
RUN mv postgresql-42.2.5.jar $SPARK_HOME/jars
# Install pyarrow
RUN conda install --quiet -y 'pyarrow' && \
    conda install pyspark==2.3.2 && \
    conda clean -tipsy && \
    fix-permissions $CONDA_DIR && \
    fix-permissions /home/$NB_USER

WORKDIR $SPARK_HOME

Then I run the following command to build the my_notebook image:
docker build -t my_notebook .

Then I create three containers from this image: a Master, a Worker, and a Notebook.

Master, using a docker-compose file:
master:
      image: my_notebook
      command: bin/spark-class org.apache.spark.deploy.master.Master -h master
      hostname: master
      environment:
        MASTER: spark://master:7077
        SPARK_CONF_DIR: /conf
        SPARK_PUBLIC_DNS: 192.168.XXX.XXX
      expose:
        - 7001
        - 7002
        - 7003
        - 7004
        - 7005
        - 7077
        - 6066
      ports:
        - 4040:4040
        - 6066:6066
        - 7077:7077
        - 8080:8080
      volumes:
        - ./conf/master:/conf
        - ./data:/tmp/data

Worker, using a docker-compose file:
worker:
      image: my_notebook
      command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.XXX.XXX:7077
      hostname: worker
      environment:
        SPARK_CONF_DIR: /conf
        SPARK_WORKER_CORES: 4
        SPARK_WORKER_MEMORY: 4g
        SPARK_WORKER_PORT: 8881
        SPARK_WORKER_WEBUI_PORT: 8081
        SPARK_BLOCKMGR_PORT: 5003
        SPARK_PUBLIC_DNS: localhost
      expose:
        - 7012
        - 7013
        - 7014
        - 7015
        - 8881
        - 5001
        - 5003
      ports:
        - 8081:8081
      volumes:
        - ./conf/worker:/conf
        - ./data:/tmp/data

Notebook, using a docker-compose file:
notebook:
  image: my_notebook
  command: jupyter notebook
  hostname: notebook
  environment:
    SPARK_PUBLIC_DNS: 192.168.XXX.XXX
  expose:
    - 7012
    - 7013
    - 7014
    - 7015
    - 8881
    - 8888
  ports:
    - 8888:8888

First, I start the Master container on one machine with "docker-compose up".
Then I start the Worker on another machine with "docker-compose up".
Then I start the Notebook on a third machine with "docker-compose up".

The Spark cluster comes up: the Spark UI is reachable, the worker is registered with the cluster, and the Jupyter Notebook also starts successfully. The problem is that when I run a PySpark application from the Jupyter notebook, the executors launched on the worker cannot connect back to the Spark driver.
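For context, the application is nothing fancier than a notebook cell that creates a SparkSession against the standalone master. The snippet below is a minimal illustrative sketch, not my exact notebook code; the master URL reuses the masked IP from the compose files:

from pyspark.sql import SparkSession

# Point the notebook (driver) at the standalone master started above.
spark = (
    SparkSession.builder
    .master("spark://192.168.XXX.XXX:7077")
    .appName("notebook-test")
    .getOrCreate()
)

# Any simple action is enough to make the worker launch an executor,
# which then has to connect back to this driver.
print(spark.range(100).count())

spark.stop()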
The error log is:
Spark Executor Command: "/usr/lib/jvm/java-8-openjdk-amd64//bin/java" "-cp" "/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=35147" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@notebook:35147" "--executor-id" "31" "--hostname" "172.17.0.3" "--cores" "2" "--app-id" "app-20190101134023-0001" "--worker-url" "spark://Worker@172.17.0.3:8881"
========================================

Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:63)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:293)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:201)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:64)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:63)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    ... 4 more
Caused by: java.io.IOException: Failed to connect to notebook:35147
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: notebook
    at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
    at java.net.InetAddress.getAllByName(InetAddress.java:1193)
    at java.net.InetAddress.getAllByName(InetAddress.java:1127)
    at java.net.InetAddress.getByName(InetAddress.java:1077)
    at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146)
    at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143)
    at java.security.AccessController.doPrivileged(Native Method)
    at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143)
    at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
    at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
    at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
    at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
    at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
    at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
    at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:208)
    at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:49)
    at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:188)
    at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:174)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
    at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
    at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:978)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:512)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:423)
    at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:482)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more

Can someone help me with this?

Best Answer

For a URL like notebook:35147 to work, the containers must be on the same network. In your case you are starting the containers on different machines, so that network has to be an overlay network.

The best solution would be to use docker swarm with docker stack instead of docker-compose, but so as not to throw too many new things at you at once, let's stick with compose for now.

So first, create such a network.

We need a little help from swarm mode for this:

On one machine (the manager), run docker swarm init, then docker network create --driver=overlay --attachable my-network.
On the other machines, run docker swarm join with the token you got from the manager (concrete commands are sketched below).
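A rough sketch of those commands; the advertise address reuses the masked IP from the question and the join token is a placeholder you copy from your own swarm init output:

# On the manager machine
docker swarm init --advertise-addr 192.168.XXX.XXX
docker network create --driver=overlay --attachable my-network

# On every other machine, paste the join command printed by swarm init
# (the token below is a placeholder, not a real value)
docker swarm join --token <WORKER-JOIN-TOKEN> 192.168.XXX.XXX:2377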

Then modify all of the compose files so that they end with

networks:
  my-network:
    external: true

and add this to each of the services in question:
networks:
  - my-network
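
Put together, a compose file would then look roughly like the sketch below. Note that the top-level networks key requires the version 2/3 compose file format (with version and services keys), so the files above need to be wrapped accordingly; the service details are abbreviated here:

version: "3"
services:
  worker:
    image: my_notebook
    # ... command, environment, expose, ports and volumes as in the original file ...
    networks:
      - my-network

networks:
  my-network:
    external: true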

Regarding "docker - Worker containers fail to connect back to the Spark driver", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/54303060/
