python - Why does Apache PySpark top() fail when the RDD contains a user-defined class?

Tags: python serialization apache-spark pickle pyspark

I'm prototyping some code using Apache Spark's PySpark on my local machine, through an IPython Notebook. I've written some code that seems to work fine, but when I make a simple change to it, it breaks.

The first code block below works. The second block fails with the error shown. Any help would be much appreciated. I suspect the error has to do with serializing Python objects: it says it can't pickle TestClass, and I can't find information on how to make my class picklable. The documentation says: "Generally you can pickle any object if you can pickle every attribute of that object. Classes, functions, and methods cannot be pickled -- if you pickle an object, the object's class is not pickled, just a string that identifies what class it belongs to. This works fine for most pickles (but note the discussion about long-term storage of pickles)." I don't understand this, because I've tried replacing my TestClass with a datetime object and everything seemed to work fine.
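To see what that documentation quote means in practice, here is a minimal sketch of my own (independent of Spark, not from the original post) that disassembles a pickled instance. The output shows the stream contains only a reference to the class by module and name ('__main__ TestClass') plus the instance's attributes; the class body itself is never serialized. That is the likely failure mode here: the Spark worker process has no importable __main__.TestClass, so looking the class up by name fails when the instances are pickled there.

import pickle
import pickletools

class TestClass(object):
    def __init__(self):
        self.teststr = 'Hello'

#pickle an instance: the stream records a reference to the class by
#module and name ('__main__ TestClass') plus the instance __dict__,
#but never the class definition itself
data = pickle.dumps(TestClass(), 2)
pickletools.dis(data)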

Anyway, the code:

# ----------- This code works -----------------------------
class TestClass(object):
    def __init__(self):
        self.teststr = 'Hello'
    def __str__(self):
        return self.teststr
    def __repr__(self):
        return self.teststr
    def test(self):
        return 'test: {0}'.format(self.teststr)

#load multiple text files into list of RDDs, concatenate them, then remove headers
trip_rdds = [sc.textFile(f) for f in trip_files]
trip_rdd  = trip_rdds[0]
for rdd in trip_rdds[1:]:
    trip_rdd = trip_rdd.union(rdd)

#filter out header rows from result
trip_rdd = trip_rdd.filter(lambda r: r != header)

#split the line, then convert each element to a dictionary
trip_rdd = trip_rdd.map(lambda r: r.split(','))
trip_rdd = trip_rdd.map(lambda r, k = header_keys: dict(zip(k, r)))
trip_rdd = trip_rdd.map(convert_trip_dict)
#trip_rdd = trip_rdd.map(lambda d, ps = g_nyproj_str: Trip(d, ps))

#originally I map the given dictionaries to a 'Trip' class I defined with various bells and whistles. 
#I've simplified to using TestClass above and still seem to get the same error

trip_rdd = trip_rdd.map(lambda t: TestClass())
trip_rdd = trip_rdd.map(lambda t: t.test()) #(1) Watch this row

print trip_rdd.count()
temp = trip_rdd.top(3)
print temp
print '...done'

The above code returns the following:

347098

['test: Hello', 'test: Hello', 'test: Hello']

...done

However, when I remove the line marked "(1) Watch this row" (the last map) and re-run, I get the error below. It's long, so I'll end my question here, before posting the output. Once again, I would really appreciate any help.

Thanks in advance!

# ----------- This code FAILS -----------------------------
class TestClass(object):
    def __init__(self):
        self.teststr = 'Hello'
    def __str__(self):
        return self.teststr
    def __repr__(self):
        return self.teststr
    def test(self):
        return 'test: {0}'.format(self.teststr)

#load multiple text files into list of RDDs, concatenate them, then remove headers
trip_rdds = [sc.textFile(f) for f in trip_files]
trip_rdd  = trip_rdds[0]
for rdd in trip_rdds[1:]:
    trip_rdd = trip_rdd.union(rdd)

#filter out header rows from result
trip_rdd = trip_rdd.filter(lambda r: r != header)

#split the line, then convert each element to a dictionary
trip_rdd = trip_rdd.map(lambda r: r.split(','))
trip_rdd = trip_rdd.map(lambda r, k = header_keys: dict(zip(k, r)))
trip_rdd = trip_rdd.map(convert_trip_dict)
#trip_rdd = trip_rdd.map(lambda d, ps = g_nyproj_str: Trip(d, ps))

#originally I map the given dictionaries to a 'Trip' class I defined with various bells and whistles. 
#I've simplified to using TestClass above and still seem to get the same error

trip_rdd = trip_rdd.map(lambda t: TestClass())
#trip_rdd = trip_rdd.map(lambda t: t.test()) #(1) this row removed -- the RDD now holds TestClass instances

print trip_rdd.count()
temp = trip_rdd.top(3)
print temp
print '...done'

Output: 347098

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-76-6550318a5d5b> in <module>()
     29 #count them
     30 print trip_rdd.count()
---> 31 temp = trip_rdd.top(3)
     32 print temp
     33 print '...done'

C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\rdd.pyc in top(self, num, key)
   1043             return heapq.nlargest(num, a + b, key=key)
   1044 
-> 1045         return self.mapPartitions(topIterator).reduce(merge)
   1046 
   1047     def takeOrdered(self, num, key=None):

C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\rdd.pyc in reduce(self, f)
    713             yield reduce(f, iterator, initial)
    714 
--> 715         vals = self.mapPartitions(func).collect()
    716         if vals:
    717             return reduce(f, vals)

C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\rdd.pyc in collect(self)
    674         """
    675         with SCCallSiteSync(self.context) as css:
--> 676             bytesInJava = self._jrdd.collect().iterator()
    677         return list(self._collect_iterator_through_file(bytesInJava))
    678 

C:\Programs\Coding\Languages\Python\Anaconda_32bit\Conda\lib\site-packages\py4j-0.8.2.1-py2.7.egg\py4j\java_gateway.pyc in __call__(self, *args)
    536         answer = self.gateway_client.send_command(command)
    537         return_value = get_return_value(answer, self.gateway_client,
--> 538                 self.target_id, self.name)
    539 
    540         for temp_arg in temp_args:

C:\Programs\Coding\Languages\Python\Anaconda_32bit\Conda\lib\site-packages\py4j-0.8.2.1-py2.7.egg\py4j\protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
    298                 raise Py4JJavaError(
    299                     'An error occurred while calling {0}{1}{2}.\n'.
--> 300                     format(target_id, '.', name), value)
    301             else:
    302                 raise Py4JError(

Py4JJavaError: An error occurred while calling o463.collect.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 49.0 failed 1 times, most recent failure: Lost task 1.0 in stage 49.0 (TID 99, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\worker.py", line 107, in main
    process()
  File "C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\worker.py", line 98, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\serializers.py", line 231, in dump_stream
    bytes = self.serializer.dumps(vs)
  File "C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\serializers.py", line 393, in dumps
    return cPickle.dumps(obj, 2)
PicklingError: Can't pickle <class '__main__.TestClass'>: attribute lookup __main__.TestClass failed

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:137)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:174)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:96)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Best Answer

It turns out you have to define your class in its own module, not in the main body of your code. If you do that and then import the module, pickle is able to pickle and unpickle instances of the class successfully, and the class then works with Spark as expected.
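The likely reason count() succeeds while top() fails is that count() only returns per-partition counts, whereas top() (which goes through collect()) must pickle each TestClass instance on the worker, where the interactively defined __main__.TestClass cannot be looked up by name. Below is a minimal sketch of the fix; the module name testclass.py and the sc.addPyFile call are my own additions, not from the original answer.

#testclass.py -- the class now lives in its own importable module
class TestClass(object):
    def __init__(self):
        self.teststr = 'Hello'
    def __str__(self):
        return self.teststr
    def __repr__(self):
        return self.teststr
    def test(self):
        return 'test: {0}'.format(self.teststr)

#driver code, e.g. in the notebook
from testclass import TestClass

#ship the module to the workers so they can import it by name;
#needed if testclass.py is not already on every worker's python path
sc.addPyFile('testclass.py')

trip_rdd = trip_rdd.map(lambda t: TestClass())
print trip_rdd.top(3)  #no PicklingError: the class resolves as testclass.TestClass

With the class importable as testclass.TestClass, the standard pickler on the worker can resolve the name, so both pickling on the workers and unpickling on the driver succeed.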

This is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/29130870/
