hadoop - spark-submit with --proxy-user, --keytab and --principal arguments in hadoop kerberos

Tags: hadoop apache-spark kerberos spark-submit

Just looking for clarification: can the spark-submit --keytab, --principal and --proxy-user arguments coexist?

We are required to submit jobs as the real business user, but that user has no principal in the Hadoop KDC.

Whenever I use proxy-user together with a Kerberos principal, I hit the exception below.
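For reference, the failing invocation looked roughly like this (the keytab path, proxy user, and jar name are placeholders, not taken from the original post; the "atlas" principal and org.sandbox.Main class come from the trace):

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --keytab /etc/security/keytabs/atlas.keytab \
      --principal atlas@EXAMPLE.COM \
      --proxy-user business_user \
      --class org.sandbox.Main \
      my-app.jar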

17/02/09 13:51:43 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 379 for atlas on 10.12.118.92:8020
Exception in thread "main" java.io.IOException: java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:888)
        at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:8
        at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2243)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
        at org.apache.spark.rdd.RDD.take(RDD.scala:1288)
        at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1328)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
        at org.apache.spark.rdd.RDD.first(RDD.scala:1327)
        at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:269)
        at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:265)
        at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:242)
        at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:74)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:171)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:44)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
        at org.sandbox.Main$.main(Main.scala:39)
        at org.sandbox.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:161)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
        at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:870)
        ... 57 more
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 403, message: Forbidden
        at org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:274)
        at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
        at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128
        at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:214)

  1. If the proxy-user and principal parameters cannot coexist, is there any documentation that says so?
  2. What is the actual use case for the proxy-user parameter in a kerberized Hadoop environment?

Best Answer

When invoking spark-submit, the --keytab, --principal and --proxy-user arguments cannot be used at the same time.

If they are used together, the submission fails with the following error:

    Error: Only one of --proxy-user or --principal can be provided.
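The usual pattern in a kerberized cluster is therefore to pick one mechanism. A minimal sketch, assuming a trusted service account "atlas" (names and paths are illustrative): authenticate once with kinit, then submit with --proxy-user alone:

    # obtain a Kerberos ticket as the trusted superuser
    kinit -kt /etc/security/keytabs/atlas.keytab atlas@EXAMPLE.COM

    # submit on behalf of the business user; no --keytab/--principal here
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --proxy-user business_user \
      --class org.sandbox.Main \
      my-app.jar

Here Spark picks up the superuser's credentials from the ticket cache and obtains delegation tokens on behalf of business_user.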

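As for question 2, the typical use case for --proxy-user is a trusted service account (a scheduler such as Oozie, or a gateway such as Livy) running jobs on behalf of end users who have no principal of their own. Impersonation must be explicitly allowed in core-site.xml; for clusters with HDFS encryption zones (the KMS shows up in the trace above), the KMS needs matching hadoop.kms.proxyuser.* settings as well. A sketch, with "atlas" as the hypothetical superuser and illustrative host/group values:

    <!-- core-site.xml: allow the atlas service account to impersonate -->
    <property>
      <name>hadoop.proxyuser.atlas.hosts</name>
      <value>edge-node.example.com</value>
    </property>
    <property>
      <name>hadoop.proxyuser.atlas.groups</name>
      <value>business_users</value>
    </property>

A 403 Forbidden from KMSClientProvider, as in the trace above, is commonly a sign that this impersonation chain is not authorized on the KMS side.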

Regarding hadoop - spark-submit with --proxy-user, --keytab and --principal arguments in hadoop kerberos, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42129275/
