python - pyspark: ImportError: No module named numpy

Tags: python python-2.7 numpy pyspark rdd

I am working with pyspark and get a result rdd from the following code:

from pyspark.mllib.fpm import PrefixSpan
import numpy

model = PrefixSpan.train(input_rdd, minSupport=0.1)
result = model.freqSequences().filter(lambda x: x.freq >= 50).filter(lambda x: len(x.sequence) >= 2).cache()

When I check input_rdd.take(5), input_rdd looks fine. The code above creates an rdd named 'result', which looks like this:

PythonRDD[103] at RDD at PythonRDD.scala:48

I did install numpy, but when I try to run result.take(5) or result.count(), I keep getting the following error.

Py4JJavaErrorTraceback (most recent call last)
<ipython-input-32-7e589dce550c> in <module>()
----> 1 result.take(5)

/usr/local/spark-latest/python/pyspark/rdd.py in take(self, num)
   1308 
   1309             p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1310             res = self.context.runJob(self, takeUpToNumLeft, p)
   1311 
   1312             items += res

/usr/local/spark-latest/python/pyspark/context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
    939         # SparkContext#runJob.
    940         mappedRDD = rdd.mapPartitions(partitionFunc)
--> 941         port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
    942         return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
    943 

/usr/local/spark-latest/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    931         answer = self.gateway_client.send_command(command)
    932         return_value = get_return_value(
--> 933             answer, self.gateway_client, self.target_id, self.name)
    934 
    935         for temp_arg in temp_args:

/usr/local/spark-latest/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/local/spark-latest/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    310                 raise Py4JJavaError(
    311                     "An error occurred while calling {0}{1}{2}.\n".
--> 312                     format(target_id, ".", name), value)
    313             else:
    314                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 42.0 failed 4 times, most recent failure: Lost task 0.3 in stage 42.0 (TID 85, ph-hdp-abc-dn07): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/data/0/yarn/nm/usercache/abc-test/appcache/application_1482412711394_0011/container_e16_1482412711394_0011_01_000002/pyspark.zip/pyspark/worker.py", line 161, in main
    func, profiler, deserializer, serializer = read_command(pickleSer, infile)
  File "/data/0/yarn/nm/usercache/abc-test/appcache/application_1482412711394_0011/container_e16_1482412711394_0011_01_000002/pyspark.zip/pyspark/worker.py", line 54, in read_command
    command = serializer._read_with_length(file)
  File "/data/0/yarn/nm/usercache/abc-test/appcache/application_1482412711394_0011/container_e16_1482412711394_0011_01_000002/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/data/0/yarn/nm/usercache/abc-test/appcache/application_1482412711394_0011/container_e16_1482412711394_0011_01_000002/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
  File "/data/0/yarn/nm/usercache/abc-test/appcache/application_1482412711394_0011/container_e16_1482412711394_0011_01_000002/pyspark.zip/pyspark/mllib/__init__.py", line 28, in <module>
ImportError: No module named numpy

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:332)
    at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:330)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:935)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:910)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:910)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:668)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:330)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:281)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Does anyone know what I am missing? Thanks!

Best answer

If the import does not fail on the driver side, it means numpy is not accessible to the executor interpreters. That can happen in a few cases (a quick diagnostic sketch follows this list):

  • You installed numpy only on the driver node (it is missing on the worker nodes).
  • You installed numpy on the worker nodes, but the workers are misconfigured:

    • numpy is installed but not on the interpreter's path.
    • numpy is installed, but the workers use a different environment/interpreter than the one where numpy was installed.
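A quick way to tell which case applies is to ask the executors themselves which Python they run and whether they can import numpy. This is only a diagnostic sketch, not part of the original answer; it assumes an existing SparkContext named sc (as you would have in a pyspark shell or notebook).

import sys

def probe(_):
    # Runs on an executor: report its interpreter and whether numpy is importable.
    try:
        import numpy
        status = numpy.__version__
    except ImportError:
        status = "missing"
    return (sys.executable, status)

# One element per default partition is usually enough to reach every executor.
print(sc.parallelize(range(sc.defaultParallelism), sc.defaultParallelism)
        .map(probe)
        .distinct()
        .collect())

If the executors report an interpreter without numpy, either install numpy into that interpreter on every worker node (e.g. pip install numpy), or point the workers at an interpreter that has it, for example by exporting PYSPARK_PYTHON=/path/to/python (the path here is a placeholder) in spark-env.sh or in the shell environment before submitting the application.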

Regarding python - pyspark: ImportError: No module named numpy, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41306654/
