json - Loading a file from the Linux FS with spark-submit

Tags: json linux scala apache-spark

I'm having a hard time figuring out how to load a JSON file from the Linux file system in a Spark environment. For what it's worth, I'm using Spark 1.6.

The file lives at /home/wymeka/fields.json, and I'm trying this command line:

spark-submit --master yarn transform.jar --schema-file "file:///home/wymeka/fields.json" --cache

The line in the Main class responsible for loading this file is:

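// Reads the schema file as JSON; in Spark 1.6 this triggers a distributed
// read for schema inference, so every executor must be able to open the path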
val df_schema = sqlContext.read.json(pathToSchemaFile) 

All of this leads me to the following exception:

Caused by: java.io.FileNotFoundException: File file:/home/wymeka/fields.json does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:542)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:755)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:532)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:778)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Or, when I try this command line instead:

spark-submit --master yarn transform.jar --schema-file "file:\/\/\/home\/imachraoui\/fields.json" --cache

I get a different exception, because the backslashes are passed through literally and percent-encoded as %5C, which makes the URI invalid:

 java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:%5C/%5C/%5C/home%5C/wymeka%5C/fields.json
    at org.apache.hadoop.fs.Path.initialize(Path.java:206)
    at org.apache.hadoop.fs.Path.<init>(Path.java:172)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$11.apply(ResolvedDataSource.scala:170)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$11.apply(ResolvedDataSource.scala:169)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
    at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:108)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:169)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:244)
    at com.nexys.spark.transform.Main$.main(Main.scala:80)
    at com.nexys.spark.transform.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:%5C/%5C/%5C/home%5C/wymeka%5C/fields.json
    at java.net.URI.checkPath(URI.java:1804)
    at java.net.URI.<init>(URI.java:752)
    at org.apache.hadoop.fs.Path.initialize(Path.java:203)
    ... 24 more

Any help would be greatly appreciated.


Edit

I then tried this command line:

spark-submit --files /home/wymeka/fields.json --master yarn transform.jar --schema-file "fields.json" --cache

and changed my Spark code accordingly:

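// SparkFiles.getRootDirectory() points to the local directory where files
// distributed via --files are placed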
val df_schema = sqlContext.read.json(SparkFiles.getRootDirectory()+"/"+pathToSchemaFile)

But still no luck!

Best answer

The file must be available at the same path on every worker node; otherwise, use an HDFS path instead.
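To make that concrete, here is a minimal sketch of the two usual ways out; the HDFS target path is illustrative. Either copy the file to HDFS so every executor can reach it, or read it only on the driver and ship the lines to the executors yourself.

// Option 1: copy the file to HDFS first, e.g.
//   hdfs dfs -put /home/wymeka/fields.json /user/wymeka/fields.json
// then read it from there; every executor can open an HDFS path
val df_schema = sqlContext.read.json("hdfs:///user/wymeka/fields.json")

// Option 2: read the file on the driver only and parallelize its lines;
// Spark 1.6 expects one JSON record per line, and read.json accepts an RDD[String]
import scala.io.Source
val lines = Source.fromFile("/home/wymeka/fields.json").getLines().toSeq
val df_schema = sqlContext.read.json(sc.parallelize(lines))

With either option, no worker node ever needs local access to /home/wymeka, which is what the FileNotFoundException above is complaining about.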

A similar question was found on Stack Overflow: https://stackoverflow.com/questions/40738522/
