hadoop - IllegalStateException when running the Spark wordcount example

Tags: hadoop apache-spark

Spark version: 1.6.2
Hadoop version: 2.7.3

I am running Spark in standalone cluster mode.

Command used to run the word-count example:

spark-submit --class org.apache.spark.examples.JavaWordCount --master spark://IP:7077 spark-examples-1.6.2.2.5.0.0-1245-hadoop2.7.3.2.5.0.0-1245.jar file.txt output

I get the following error:
INFO cluster.SparkDeploySchedulerBackend: Executor app-20161125052710-0012/10 removed: java.io.IOException: Failed to create directory /usr/hdp/2.5.0.0-1245/spark/work/app-20161125052710-0012/10    
ERROR spark.SparkContext: Error initializing SparkContext.
    java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
    This stopped SparkContext was created at:

    org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
    org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:44)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    java.lang.reflect.Method.invoke(Method.java:606)
    org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

    The currently active SparkContext was created at:

    (No active SparkContext.)

        at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:106)
        at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1602)
        at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2203)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:579)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
        at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:44)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    16/11/25 04:24:48 INFO spark.SparkContext: SparkContext already stopped.

In the Spark master web UI, I can see two workers in the ALIVE state.

Best Answer

The root cause appears to be Failed to create directory /usr/hdp/2.5.0.0-1245/spark/work: the SparkContext is stopped only because the executors cannot create their work directories. After granting write permission on the /usr/hdp/2.5.0.0-1245/spark/work path on the worker nodes, the job runs fine.
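A minimal sketch of the fix, using a scratch directory so it can run anywhere. On the real cluster, WORK_DIR would be /usr/hdp/2.5.0.0-1245/spark/work (the path from the error log), the commands would run as root on every worker node, and you would additionally chown the directory to whichever user the worker processes run as (e.g. spark:spark — an assumption; check your deployment):

```shell
# Sketch of the permission fix. On a real cluster, substitute the path
# from the error log and run on every worker node; also chown the
# directory to the worker user (hypothetically spark:spark).
WORK_DIR="$(mktemp -d)/spark-work"

mkdir -p "$WORK_DIR"
chmod -R 775 "$WORK_DIR"

# Each worker must be able to create per-application executor
# directories like work/app-20161125052710-0012/10; verify that
# such a subdirectory can now be created:
mkdir -p "$WORK_DIR/app-20161125052710-0012/10" && echo "work dir writable"
```

If the directory must live elsewhere, setting SPARK_WORKER_DIR in conf/spark-env.sh (or the spark.worker.dir property) to a path the worker user owns avoids the permission problem entirely.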

Regarding "hadoop - IllegalStateException when running the Spark wordcount example", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/40801748/
