apache-spark - Spark S3 null uri host

Tags: apache-spark, amazon-s3

val spark = SparkSession.builder
          .appName(appName)
          .config("spark.delta.logStore.class", "org.apache.spark.sql.delta.storage.S3SingleDriverLogStore")
          .config("hive.exec.dynamic.partition", "true")
          .config("hive.exec.dynamic.partition.mode", "nonstrict")
          .config("hive.exec.max.dynamic.partitions", 5000)
          .config("hive.exec.max.dynamic.partitions.pernode", 5000)
          .enableHiveSupport()
          .master("local[2]")
          .getOrCreate()
spark
    .sparkContext
    .hadoopConfiguration
    .set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
spark.read.json("s3a:///bucketname/foldername/").inputFiles
throws the following exception:
Exception in thread "main" java.lang.NullPointerException: null uri host.
    at java.util.Objects.requireNonNull(Objects.java:228)
    at org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(S3xLoginHelper.java:73)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.setUri(S3AFileSystem.java:470)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:235)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:547)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:355)
    at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:391)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:325)
I have verified that I am able to read from the bucket and have the correct permissions.

Best Answer

It turned out the bucket name was missing from the URI host: I had written s3a:/// (three slashes) instead of s3a://. With s3a:///bucketname/foldername/ the authority component of the URI is empty, so bucketname is parsed as the first path segment rather than the host, and that null host is exactly what S3xLoginHelper.buildFSURI rejects in the stack trace above. Changing the path to s3a://bucketname/foldername/ fixed it.
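
A minimal sketch of the difference (bucketname and foldername are placeholders from the question; the java.net.URI check mirrors the host lookup that S3xLoginHelper.buildFSURI performs, per the requireNonNull frame in the stack trace):

import java.net.URI

// Three slashes: the authority is empty, so getHost returns null --
// the same null host that S3xLoginHelper.buildFSURI rejects.
val broken = new URI("s3a:///bucketname/foldername/")
println(broken.getHost) // null

// Two slashes: the bucket name becomes the URI host, as s3a expects.
val fixed = new URI("s3a://bucketname/foldername/")
println(fixed.getHost) // bucketname

With that change, the read call from the question becomes spark.read.json("s3a://bucketname/foldername/").inputFiles and the bucket resolves normally.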

Regarding apache-spark - Spark S3 null uri host, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/66129014/
