apache-spark - Submitting a Spark application on Kubernetes in cluster mode: Configured service account doesn't have access

Tags: apache-spark kubernetes

I am trying to submit a Spark application to a Kubernetes cluster (Minikube).
When I run my spark-submit in client mode, everything works fine: 3 executors are created in 3 pods and the code is executed. Here is my submit command:

[MY_PATH]/bin/spark-submit \
   --master k8s://https://[API_SERVER_IP]:8443 \
   --deploy-mode client \
   --name [Name] \
   --class [MyClass] \
   --conf spark.kubernetes.container.image=spark:2.4.0 \
   --conf spark.executor.instances=3 \
   [PATH/TO/MY/JAR].jar

Now, I adapted it to run in cluster mode:
[MY_PATH]/bin/spark-submit \
   --master k8s://https://[API_SERVER_IP]:8443 \
   --deploy-mode cluster \
   --name [Name] \
   --class [MyClass] \
   --conf spark.kubernetes.container.image=spark:2.4.0 \
   --conf spark.executor.instances=3 \
   local://[PATH/TO/MY/JAR].jar

This time a driver pod and a driver service are created, and then the driver pod fails. On Kubernetes I can see the following error:
MountVolume.SetUp failed for volume "spark-conf-volume" : configmap "sparkpi-1555314081444-driver-conf-map" not found

In the pod logs I get the error:
Forbidden!Configured service account doesn't have access. 
Service account may have been revoked. 
pods "sparkpi-1555314081444-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".

Here is the full stack trace:
org.apache.spark.SparkException: External scheduler cannot be instantiated
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2794)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:493)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
    at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
    at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) 
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/sparkpi-1555314081444-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "sparkpi-1555314081444-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:470)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:407)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
    at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator$$anonfun$1.apply(ExecutorPodsAllocator.scala:57)
    at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator$$anonfun$1.apply(ExecutorPodsAllocator.scala:55)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.<init>(ExecutorPodsAllocator.scala:55)
    at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:89)
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2788)
    ... 20 more 

What should I do to make it work?

Best Answer

You have to create an authorized service account: https://spark.apache.org/docs/latest/running-on-kubernetes.html#rbac

kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
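
Before re-submitting, you can verify that the binding is in place (a quick sanity check, assuming the default namespace and the spark service account created above):

# both should print "yes" once the clusterrolebinding exists
kubectl auth can-i get pods --namespace=default --as=system:serviceaccount:default:spark
kubectl auth can-i create pods --namespace=default --as=system:serviceaccount:default:spark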

Then pass it as a parameter to the submit:
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark
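
Putting it together, the cluster-mode submit from the question would then look like this (a sketch reusing the placeholders from the question; only the serviceAccountName line is new):

[MY_PATH]/bin/spark-submit \
   --master k8s://https://[API_SERVER_IP]:8443 \
   --deploy-mode cluster \
   --name [Name] \
   --class [MyClass] \
   --conf spark.kubernetes.container.image=spark:2.4.0 \
   --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
   --conf spark.executor.instances=3 \
   local://[PATH/TO/MY/JAR].jar

This matters in cluster mode because the driver runs inside a pod and talks to the Kubernetes API with the pod's mounted service account token (by default default:default, which cannot manage pods), whereas in client mode the driver uses your local kubeconfig credentials, which is why the client-mode submit worked.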

Regarding apache-spark - Submitting a Spark application on Kubernetes in cluster mode: Configured service account doesn't have access, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55684693/
