scala - Unable to connect to a Kerberized HDFS cluster locally from IntelliJ

Tags: scala security apache-spark hadoop kerberos

I am trying to connect to HDFS locally from IntelliJ installed on my laptop. The cluster I am connecting to is Kerberized and has an edge node. I generated a keytab for the edge node and configured it in the code below, and I can now log in to the edge node. But when I then try to access HDFS data on the name node, it throws an error. Below is the Scala code that tries to connect to HDFS:

import org.apache.spark.sql.SparkSession
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.security.UserGroupInformation
import java.security.PrivilegedExceptionAction
import java.io.PrintWriter

object DataframeEx {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .master("local")
      .appName("Spark SQL basic example")
      .config("spark.some.config.option", "some-value")
      .getOrCreate()

    runHdfsConnect(spark)

    spark.stop()
  }

  def runHdfsConnect(spark: SparkSession): Unit = {

    System.setProperty("HADOOP_USER_NAME", "m12345")
    val path = new Path("/data/interim/modeled/abcdef")
    val conf = new Configuration()
    conf.set("fs.defaultFS", "hdfs://namenodename.hugh.com:8020")
    conf.set("hadoop.security.authentication", "kerberos")
    conf.set("dfs.namenode.kerberos.principal.pattern","hdfs/_HOST@HUGH.COM")

    UserGroupInformation.setConfiguration(conf)
    val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("m12345@HUGH.COM", "C:\\Users\\m12345\\Downloads\\m12345.keytab")

    println(UserGroupInformation.isSecurityEnabled())
     ugi.doAs(new PrivilegedExceptionAction[String] {
       override def run(): String = {
         val fs= FileSystem.get(conf)
         val output = fs.create(path)
         val writer = new PrintWriter(output)
         try {
           writer.write("this is a test")
           writer.write("\n")
         }
         finally {
           writer.close()
           println("Closed!")
         }
          "done"
       }
     })
  }
}

I am able to log in to the edge node. But when I try to write to HDFS (inside the doAs block), it throws the following error:

WARN Client: Exception encountered while connecting to the server : java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/namenodename.hugh.com@HUGH.COM
18/06/11 12:12:01 ERROR UserGroupInformation: PriviledgedActionException m12345@HUGH.COM (auth:KERBEROS) cause:java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/namenodename.hugh.com@HUGH.COM
18/06/11 12:12:01 ERROR UserGroupInformation: PriviledgedActionException as:m12345@HUGH.COM (auth:KERBEROS) cause:java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/namenodename.hugh.com@HUGH.COM; Host Details : local host is: "INMBP-m12345/172.29.155.52"; destination host is: "namenodename.hugh.com":8020; 
Exception in thread "main" java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/namenodename.hugh.com@HUGH.COM; Host Details : local host is: "INMBP-m12345/172.29.155.52"; destination host is: "namenodename.hugh.com":8020

If I log in to the edge node and do a kinit, I can then access HDFS with no problem. So why can I log in to the edge node from my laptop, yet not access the HDFS name node?

Let me know if you need any more details.

Best Answer

The conf object was set up incorrectly. Below is what worked for me:

val conf = new Configuration()
conf.set("fs.defaultFS", "hdfs://namenodename.hugh.com:8020")
conf.set("hadoop.security.authentication", "kerberos")
conf.set("hadoop.rpc.protection", "privacy")                       // was missing this parameter
conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@HUGH.COM") // was initially set on the wrong key, dfs.namenode.kerberos.principal.pattern
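The key name matters because the two settings are used differently: `dfs.namenode.kerberos.principal` names the principal the client expects, with `_HOST` replaced by the NameNode's hostname, while `dfs.namenode.kerberos.principal.pattern` is matched as a shell-style glob against the principal the server presents, with no `_HOST` substitution. So a literal `hdfs/_HOST@HUGH.COM` in the pattern key can never match `hdfs/namenodename.hugh.com@HUGH.COM`, which is consistent with the "Server has invalid Kerberos principal" error above. The following is a simplified pure-Scala sketch of that behavior (an illustration, not Hadoop's actual implementation):

```scala
// Sketch of the two config keys' behavior. KerberosPrincipalSketch,
// expandHost and matchesPattern are illustrative names, not Hadoop APIs.
object KerberosPrincipalSketch {
  // dfs.namenode.kerberos.principal: _HOST is replaced with the server's
  // hostname before the principal is used.
  def expandHost(principal: String, host: String): String =
    principal.replace("_HOST", host.toLowerCase)

  // dfs.namenode.kerberos.principal.pattern: treated as a glob and matched
  // literally against the server's principal; _HOST is NOT substituted.
  def matchesPattern(pattern: String, serverPrincipal: String): Boolean = {
    // Convert the glob to a regex: quote everything, then let '*' match freely.
    val regex = java.util.regex.Pattern.quote(pattern).replace("*", "\\E.*\\Q")
    serverPrincipal.matches(regex)
  }

  def main(args: Array[String]): Unit = {
    val server = "hdfs/namenodename.hugh.com@HUGH.COM"
    // Correct key: _HOST expands to the NameNode's hostname.
    assert(expandHost("hdfs/_HOST@HUGH.COM", "namenodename.hugh.com") == server)
    // Original mistake: _HOST in the pattern key stays literal, so no match.
    assert(!matchesPattern("hdfs/_HOST@HUGH.COM", server))
    // A real glob in the pattern key would have matched.
    assert(matchesPattern("hdfs/*@HUGH.COM", server))
    println("sketch checks passed")
  }
}
```

In short: put the `_HOST` form under `dfs.namenode.kerberos.principal`, and only use the `.pattern` key if you actually want glob matching (e.g. `hdfs/*@HUGH.COM`).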

This Q&A on "scala - Unable to connect to a Kerberized HDFS cluster locally from IntelliJ" is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/50951656/
