apache-spark - How to access the Azure Blob File System (abfss) from a standalone Spark cluster

Tags: apache-spark hadoop pyspark azure-blob-storage azure-data-lake-gen2

I need to use a standalone Spark cluster (2.4.7) with Hadoop 3.2, and I'm trying to access ADLS Gen2 storage through pyspark.
I've added a shared key to my core-site.xml, and I can access the storage account like this:

hadoop fs -ls abfss://<container>@<storage_account>.dfs.core.windows.net/
But when I try to read a JSON file in pyspark (using the shell), like this:
spark.conf.set("fs.azure.account.key.<<storageaccount>>.dfs.core.windows.net", "<<key>>")

spark.read.option("multiLine", True).option("mode", "PERMISSIVE").json("abfss://<container>@<storageaccount>.dfs.core.windows.net/example.json").show()
I get the following error:
WARN streaming.FileStreamSink: Error while looking for metadata directory.
  File "/opt/spark/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/opt/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o103.json.
: java.io.IOException: No FileSystem for scheme: abfss
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:561)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:559)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.immutable.List.flatMap(List.scala:355)
        at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:559)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
        at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:242)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:230)
        at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:411)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
I have also configured SPARK_DIST_CLASSPATH with the Hadoop library path, pointing it at $(hadoop classpath), and copied the hadoop-azure jar into the hadoop/common folder, but I still can't access abfss through pyspark.
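One quick way to narrow this down from the same pyspark shell is to ask the JVM behind Spark whether it can see the ABFS driver at all. A minimal sketch, assuming the hadoop-azure 3.2 class names (SecureAzureBlobFileSystem backs the abfss scheme):

from py4j.protocol import Py4JJavaError

try:
    # Ask the JVM that pyspark launched whether the abfss driver class is
    # visible; it lives in the hadoop-azure jar.
    spark._jvm.java.lang.Class.forName(
        "org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem")
    print("hadoop-azure is on Spark's classpath")
except Py4JJavaError as err:
    # A ClassNotFoundException here means SPARK_DIST_CLASSPATH / the copied
    # jar never reached the JVM that pyspark is talking to.
    print("ABFS driver not visible to Spark:", err)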
What am I missing here?
I also tried the answer given here.

Best Answer

The No FileSystem for scheme: abfss error means the JVM Spark is running in has no filesystem implementation registered for that scheme, i.e. the hadoop-azure jar is not actually reaching Spark's classpath. This post will walk you through it: DIY: Apache Spark and ADLS Gen 2 support. Make sure you have followed all the necessary steps to configure ADLS Gen2 successfully on Apache Spark.
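For reference, a minimal sketch of what a working session can look like. The hadoop-azure version below is an assumption and must match the Hadoop 3.2.x build your cluster actually runs, and <container>, <storageaccount>, <key> are placeholders as in the question:

# Sketch: pull in the ABFS driver at launch time and read JSON from ADLS Gen2
# with shared-key auth. Run this as a fresh pyspark application:
# spark.jars.packages must be set before the JVM starts, so it has no effect
# in an already-running shell.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("adls-gen2-read")
    # Resolve the ABFS driver (and its transitive deps) at startup; the
    # version here is illustrative and must match your Hadoop 3.2.x build.
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-azure:3.2.1")
    # The spark.hadoop. prefix forwards this key into the Hadoop Configuration
    # used by the executors, equivalent to the core-site.xml entry.
    .config("spark.hadoop.fs.azure.account.key."
            "<storageaccount>.dfs.core.windows.net", "<key>")
    .getOrCreate()
)

df = (
    spark.read
    .option("multiLine", True)
    .option("mode", "PERMISSIVE")
    .json("abfss://<container>@<storageaccount>.dfs.core.windows.net/example.json")
)
df.show()

Passing --packages org.apache.hadoop:hadoop-azure:3.2.1 on the pyspark command line does the same job as spark.jars.packages. Either way, the point is that the driver jar must be visible to the Spark JVM and match your Hadoop version: if Spark is picking up its own bundled Hadoop jars instead of SPARK_DIST_CLASSPATH, abfss stays unknown no matter what you copy into hadoop/common.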

Regarding apache-spark - How to access the Azure Blob File System (abfss) from a standalone Spark cluster, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64581014/
