I get a NullPointerException when performing a simple map over a joda DateTime field in Spark.
Code snippet:
val me1 = (accountId, DateTime.now())
val me2 = (accountId, DateTime.now())
val me3 = (accountId, DateTime.now())
val rdd = spark.parallelize(List(me1, me2, me3))
val result = rdd.map{case (a,d) => (a,d.dayOfMonth().roundFloorCopy())}.collect.toList
Stack trace:
java.lang.NullPointerException
at org.joda.time.DateTime$Property.roundFloorCopy(DateTime.java:2280)
at x.y.z.jobs.info.AggJobTest$$anonfun$1$$anonfun$2.apply(AggJobTest.scala:47)
at x.y.z.jobs.info.AggJobTest$$anonfun$1$$anonfun$2.apply(AggJobTest.scala:47)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$16.apply(RDD.scala:780)
at org.apache.spark.rdd.RDD$$anonfun$16.apply(RDD.scala:780)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any suggestions on how to fix this?
Update: to reproduce the issue you need to use the KryoSerializer:
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
Best answer
As you noted, you are using the KryoSerializer with Joda DateTime objects. Kryo's default serialization appears to drop some information the DateTime needs after deserialization, which is why the NPE surfaces only on the executors. You may want to look at one of the projects that add Kryo support for Joda DateTime objects. For example, https://github.com/magro/kryo-serializers provides a serializer called JodaDateTimeSerializer, which you can register with `kryo.register(DateTime.class, new JodaDateTimeSerializer());`
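A minimal sketch of wiring this up in Spark, assuming the `de.javakaffee:kryo-serializers` artifact is on the classpath (the registrator class name and package below are placeholders you would adapt to your project):

```scala
import com.esotericsoftware.kryo.Kryo
import de.javakaffee.kryoserializers.jodatime.JodaDateTimeSerializer
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator
import org.joda.time.DateTime

// Registrator telling Kryo how to (de)serialize Joda DateTime, so the
// deserialized copies on the executors keep their internal state intact.
class JodaKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[DateTime], new JodaDateTimeSerializer())
  }
}

// Point Spark at the registrator alongside the Kryo serializer setting.
val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "your.pkg.JodaKryoRegistrator") // adjust package
```

With this in place the original `rdd.map { case (a, d) => (a, d.dayOfMonth().roundFloorCopy()) }` should no longer throw, since the DateTime objects survive the round trip through Kryo.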
Regarding apache-spark - NPE with Joda DateTime in Spark, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/32256318/