python - SparkContext Error - File not found: /tmp/spark-events does not exist

Tags: python amazon-web-services apache-spark amazon-ec2 pyspark

I am running a Python Spark application via an API call. On submitting the application the response is Failed, so I SSH into the worker.

My Python application is located at

/root/spark/work/driver-id/wordcount.py

The error can be found in

/root/spark/work/driver-id/stderr

It shows the following error:

Traceback (most recent call last):
  File "/root/wordcount.py", line 34, in <module>
    main()
  File "/root/wordcount.py", line 18, in main
    sc = SparkContext(conf=conf)
  File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 115, in __init__
  File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 172, in _do_init
  File "/root/spark/python/lib/pyspark.zip/pyspark/context.py", line 235, in _initialize_context
  File "/root/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 1064, in __call__
  File "/root/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.io.FileNotFoundException: File file:/tmp/spark-events does not exist.
  at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:402)
  at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:255)
  at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:549)
  at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
  at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
  at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
  at py4j.Gateway.invoke(Gateway.java:214)
  at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
  at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
  at py4j.GatewayConnection.run(GatewayConnection.java:209)
  at java.lang.Thread.run(Thread.java:745)

It says /tmp/spark-events does not exist, which is true. However, in wordcount.py:

from pyspark import SparkContext, SparkConf

... few more lines ...

def main():
    conf = SparkConf().setAppName("MyApp").setMaster("spark://ec2-54-209-108-127.compute-1.amazonaws.com:7077")
    sc = SparkContext(conf=conf)
    sc.stop()

if __name__ == "__main__":
    main()
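
Nothing in this script enables event logging, so the listener is presumably being switched on by the cluster's spark-defaults.conf (spark.eventLog.enabled) and falling back to the default directory file:/tmp/spark-events. A minimal sketch, assuming you only want the job to start without touching the cluster configuration, is to override that setting from the application itself; explicit SparkConf settings take precedence over spark-defaults.conf:

from pyspark import SparkContext, SparkConf

def main():
    conf = (SparkConf()
            .setAppName("MyApp")
            .setMaster("spark://ec2-54-209-108-127.compute-1.amazonaws.com:7077")
            # Turn off the EventLoggingListener that requires /tmp/spark-events.
            .set("spark.eventLog.enabled", "false"))
    sc = SparkContext(conf=conf)
    sc.stop()

if __name__ == "__main__":
    main()

The trade-off is that the job no longer appears in the Spark history server, so this only works around the startup error rather than fixing the logging setup.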

Best Answer

/tmp/spark-events is the location where Spark stores event logs. Just create this directory on the master machine and sync it to the slaves:

$ mkdir /tmp/spark-events
$ sudo /root/spark-ec2/copy-dir /tmp/spark-events/
RSYNC'ing /tmp/spark-events to slaves...
ec2-54-175-163-32.compute-1.amazonaws.com
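
copy-dir is the spark-ec2 helper that rsyncs a path to every slave, which matters here because in cluster deploy mode the driver, and therefore the event-logging listener, can start on any worker, and the directory has to exist locally on whichever machine that is. A sketch of an alternative, if you would rather keep event logging without maintaining per-node local directories, is to point spark.eventLog.dir at a shared filesystem; the hdfs:// path below is an assumption, not part of the original setup:

from pyspark import SparkContext, SparkConf

conf = (SparkConf()
        .setAppName("MyApp")
        .setMaster("spark://ec2-54-209-108-127.compute-1.amazonaws.com:7077")
        # Illustrative only: a shared event-log location visible to every node.
        .set("spark.eventLog.enabled", "true")
        .set("spark.eventLog.dir", "hdfs:///spark-events"))
sc = SparkContext(conf=conf)
sc.stop()

The target directory still has to be created up front (for example with hdfs dfs -mkdir), since Spark does not create it for you.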

Regarding "python - SparkContext Error - File not found: /tmp/spark-events does not exist", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/38350249/
