python - Pyspark py4j PickleException : "expected zero arguments for construction of ClassDict"

Tags: python, apache-spark, pyspark, py4j

This question is aimed at people familiar with py4j who can help resolve a pickling error. I am trying to add a method to the pyspark PythonMLLibAPI that accepts an RDD of namedtuples, does some work, and returns the result as an RDD.

The method is modeled after the PythonMLLibAPI.trainALSModel() method, whose analogous existing relevant portion is:

  def trainALSModel(
    ratingsJRDD: JavaRDD[Rating],
    .. )

The existing Python Rating class used as the model for the new code is:

class Rating(namedtuple("Rating", ["user", "product", "rating"])):
    def __reduce__(self):
        return Rating, (int(self.user), int(self.product), float(self.rating))
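On the Python side, `__reduce__` tells pickle to rebuild the object by calling the class with those three (cast) arguments. A minimal round trip, runnable outside Spark (Python 3 syntax), shows the effect:

```python
import pickle
from collections import namedtuple

class Rating(namedtuple("Rating", ["user", "product", "rating"])):
    def __reduce__(self):
        # pickle will rebuild the object as Rating(int(...), int(...), float(...))
        return Rating, (int(self.user), int(self.product), float(self.rating))

r = Rating("1", "2", "3.5")                 # string fields on purpose
restored = pickle.loads(pickle.dumps(r))
print(restored)                             # Rating(user=1, product=2, rating=3.5)
```

Note that the reconstructed fields come back as plain builtin `int`/`float`, which is exactly what the JVM-side deserializer expects.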

Here is the attempt; these are the relevant classes:

New Python class pyspark.mllib.clustering.MatrixEntry:

from collections import namedtuple
class MatrixEntry(namedtuple("MatrixEntry", ["x","y","weight"])):
    def __reduce__(self):
        return MatrixEntry, (long(self.x), long(self.y), float(self.weight))
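Note that `long` only exists in Python 2; under Python 3 the same class would use `int` (which is arbitrary precision). A Python 3-compatible sketch of the same class, verified with a local pickle round trip:

```python
import pickle
from collections import namedtuple

class MatrixEntry(namedtuple("MatrixEntry", ["x", "y", "weight"])):
    def __reduce__(self):
        # int replaces Python 2's long; both are arbitrary precision here
        return MatrixEntry, (int(self.x), int(self.y), float(self.weight))

e = pickle.loads(pickle.dumps(MatrixEntry(1, 2, 3)))
print(e)  # MatrixEntry(x=1, y=2, weight=3.0)
```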

New method foobarRdd in PythonMLLibAPI:

  def foobarRdd(
    data: JavaRDD[MatrixEntry]): RDD[FooBarResult] = {
    val rdd = data.rdd.map { d => FooBarResult(d.i, d.j, d.value, d.i * 100 + d.j * 10 + d.value)}
    rdd
  }

Now let's try it out:

from pyspark.mllib.clustering import MatrixEntry

def convert_to_MatrixEntry(tup):  # avoid shadowing the builtin `tuple`
  return MatrixEntry(*tup)

from pyspark.mllib.clustering import *
pic = PowerIterationClusteringModel(2)
tups = [(1,2,3),(4,5,6),(12,13,14),(15,7,8),(16,17,16.5)]
trdd = sc.parallelize(map(convert_to_MatrixEntry,tups))

# print out the RDD on python side just for validation
print "%s" %(repr(trdd.collect()))

from pyspark.mllib.common import callMLlibFunc
pic = callMLlibFunc("foobar", trdd)

The relevant portion of the result:

[(1,2)=3.0, (4,5)=6.0, (12,13)=14.0, (15,7)=8.0, (16,17)=16.5]

This shows the input RDD is "whole". The pickling, however, is unhappy:

5/04/27 21:15:44 ERROR Executor: Exception in task 6.0 in stage 1.0 (TID 14)
net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict
(for pyspark.mllib.clustering.MatrixEntry)
    at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
    at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:617)
    at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:170)
    at net.razorvine.pickle.Unpickler.load(Unpickler.java:84)
    at net.razorvine.pickle.Unpickler.loads(Unpickler.java:97)
    at org.apache.spark.mllib.api.python.SerDe$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(PythonMLLibAPI.scala:1167)
    at org.apache.spark.mllib.api.python.SerDe$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(PythonMLLibAPI.scala:1166)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$17.apply(RDD.scala:819)
    at org.apache.spark.rdd.RDD$$anonfun$17.apply(RDD.scala:819)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1523)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1523)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:212)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
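The trace shows the failure inside the JVM-side Pyrolite Unpickler, called from SerDe.pythonToJava. For a Python class it does not recognize, Pyrolite falls back to a generic ClassDictConstructor, which only supports zero-argument construction; the `__reduce__` above instead asks for a three-argument call to MatrixEntry, hence the exception. The payload itself can be reproduced locally (a sketch outside Spark; PySpark of this era pickled batches of rows with protocol 2):

```python
import pickle
from collections import namedtuple

class MatrixEntry(namedtuple("MatrixEntry", ["x", "y", "weight"])):
    def __reduce__(self):
        return MatrixEntry, (int(self.x), int(self.y), float(self.weight))

tups = [(1, 2, 3), (4, 5, 6), (12, 13, 14), (15, 7, 8), (16, 17, 16.5)]
batch = pickle.dumps([MatrixEntry(*t) for t in tups], protocol=2)

# CPython rebuilds the rows without complaint; it is the JVM-side
# unpickler, which has no constructor registered for MatrixEntry, that chokes.
rows = pickle.loads(batch)
print(rows[0])   # MatrixEntry(x=1, y=2, weight=3.0)
```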

Below is a visualization of the Python-side call stack (the original question attached a screenshot, not reproduced here).

Best Answer

I ran into the same error using MLlib, and it turned out that I was returning the wrong data type in one of my functions. It now works after a simple cast of the return value. This may not be the answer you are looking for, but it is at least a hint about which direction to look in.
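The "simple cast" pattern from this answer, forcing every field to a plain builtin before it crosses the Py4J/Pyrolite boundary, can be sketched as follows (the helper name is hypothetical):

```python
from collections import namedtuple

MatrixEntry = namedtuple("MatrixEntry", ["x", "y", "weight"])

def as_builtin(e):
    # Hypothetical helper: cast fields that may arrive as strings or
    # NumPy scalars into the plain int/float the JVM side expects.
    return MatrixEntry(int(e.x), int(e.y), float(e.weight))

raw = MatrixEntry("12", "13", "14")   # e.g. fields parsed from a text file
print(as_builtin(raw))                # MatrixEntry(x=12, y=13, weight=14.0)
```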

Regarding python - Pyspark py4j PickleException: "expected zero arguments for construction of ClassDict", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/29910708/
