hadoop - Unable to access the Hadoop CLI after enabling Kerberos

Tags: hadoop kerberos mit-kerberos

I followed this tutorial: CDH Hadoop Kerberos. The NameNode and DataNodes start up normally, and I can see all of the DataNodes listed in the WebUI (0.0.0.0:50070). However, I am unable to access the Hadoop CLI. I also followed this tutorial: Certain Java versions cannot read credentials cache, but I still cannot use the Hadoop CLI.

[root@local9 hduser]# hadoop fs -ls /
20/11/03 12:24:32 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
20/11/03 12:24:32 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
20/11/03 12:24:32 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "local9/192.168.2.9"; destination host is: "local9":8020;
[root@local9 hduser]# klist
Ticket cache: KEYRING:persistent:0:krb_ccache_hVEAjWz
Default principal: hdfs/local9@FBSPL.COM

Valid starting       Expires              Service principal
11/03/2020 12:22:42  11/04/2020 12:22:42  krbtgt/FBSPL.COM@FBSPL.COM
        renew until 11/10/2020 12:22:12
[root@local9 hduser]# kinit -R
[root@local9 hduser]# klist
Ticket cache: KEYRING:persistent:0:krb_ccache_hVEAjWz
Default principal: hdfs/local9@FBSPL.COM

Valid starting       Expires              Service principal
11/03/2020 12:24:50  11/04/2020 12:24:50  krbtgt/FBSPL.COM@FBSPL.COM
        renew until 11/10/2020 12:22:12
[root@local9 hduser]# hadoop fs -ls /
20/11/03 12:25:04 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
20/11/03 12:25:04 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
20/11/03 12:25:04 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "local9/192.168.2.9"; destination host is: "local9":8020;
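
For anyone debugging the same "GSS initiate failed" error, one way to see what the JVM's Kerberos layer is actually doing is to turn on its debug output before re-running the command. This is a minimal sketch, assuming a standard Hadoop client that honors the HADOOP_OPTS environment variable; -Dsun.security.krb5.debug=true is a stock JDK flag, not something taken from the original post:

# Enable JDK Kerberos debug output for the Hadoop client
export HADOOP_OPTS="-Dsun.security.krb5.debug=true"

# Re-run the failing command; the trace shows which credential cache the
# JVM tries to open (older JDKs cannot read a KEYRING: cache at all)
hadoop fs -ls /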

Any help would be greatly appreciated.

Best Answer

I figured out the problem. It is a cached-credentials bug in Red Hat: Red Hat Bugzilla – Bug 1029110. I then found this Cloudera document on Kerberos: Manage krb5.conf.

The final solution was to comment out the following line in /etc/krb5.conf:

default_ccache_name = KEYRING:persistent:%{uid}

After commenting out this line, I was able to access the Hadoop CLI.
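
For completeness, a minimal sketch of what the relevant part of /etc/krb5.conf looks like after the change, followed by re-creating the ticket cache. The principal is the one from the klist output above; the FILE:/tmp/krb5cc_<uid> fallback is MIT Kerberos' usual built-in default and is assumed here rather than taken from the original post:

# /etc/krb5.conf, [libdefaults] section after the change
[libdefaults]
#    default_ccache_name = KEYRING:persistent:%{uid}

# With the keyring default commented out, libkrb5 typically falls back to a
# file-based cache (FILE:/tmp/krb5cc_<uid>), which the JVM can read.
# Re-initialize the ticket and retry:
kinit hdfs/local9@FBSPL.COM
klist
hadoop fs -ls /

Alternatively, the same effect can usually be had for a single shell session, without editing krb5.conf, by pointing KRB5CCNAME at a file-based cache (for example, export KRB5CCNAME=FILE:/tmp/krb5cc_0) before running kinit.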

Regarding "hadoop - Unable to access the Hadoop CLI after enabling Kerberos", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64659063/
