java - hadoop writables NotSerializableException with the Apache Spark API

Tags: java, apache-spark

My Spark Java application throws a NotSerializableException on the Hadoop writables:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public final class myAPP {
  public static void main(String[] args) throws Exception {
    if (args.length < 1) {
      System.err.println("Usage: myAPP <file>");
      System.exit(1);
    }
    SparkConf sparkConf = new SparkConf().setAppName("myAPP").setMaster("local");
    JavaSparkContext ctx = new JavaSparkContext(sparkConf);
    Configuration conf = new Configuration();
    // Read the file with the new Hadoop API; keys and values are Hadoop writables.
    JavaPairRDD<LongWritable, Text> lines =
        ctx.newAPIHadoopFile(args[0], TextInputFormat.class, LongWritable.class, Text.class, conf);
    // collect() ships the writables to the driver, which triggers serialization.
    System.out.println(lines.collect().toString());
    ctx.stop();
  }
}


java.io.NotSerializableException: org.apache.hadoop.io.LongWritable
Serialization stack:
    - object not serializable (class: org.apache.hadoop.io.LongWritable, value: 15227295)
    - field (class: scala.Tuple2, name: _1, type: class java.lang.Object)
    - object (class scala.Tuple2, (15227295,))
    - element of array (index: 0)
    - array (class [Lscala.Tuple2;, size 1153163)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:38)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
15/04/26 16:05:05 ERROR TaskSetManager: Task 0.0 in stage 0.0 (TID 0) had a not serializable result: org.apache.hadoop.io.LongWritable
Serialization stack:
    - object not serializable (class: org.apache.hadoop.io.LongWritable, value: 15227295)
    - field (class: scala.Tuple2, name: _1, type: class java.lang.Object)
    - object (class scala.Tuple2, (15227295,))
    - element of array (index: 0)
    - array (class [Lscala.Tuple2;, size 1153163); not retrying
15/04/26 16:05:05 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
15/04/26 16:05:05 INFO TaskSchedulerImpl: Cancelling stage 0
15/04/26 16:05:05 INFO DAGScheduler: Job 0 failed: collect at Parser2.java:60, took 0.460181 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0 in stage 0.0 (TID 0) had a not serializable result: org.apache.hadoop.io.LongWritable

In a Spark Scala program I registered the Hadoop writables as follows, and it worked fine:

sparkConf.registerKryoClasses(Array(classOf[org.apache.hadoop.io.LongWritable], classOf[org.apache.hadoop.io.Text]))

Apparently this approach does not work with the Apache Spark Java API:

sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
sparkConf.set("spark.kryo.registrator", LongWritable.class.getName());


Exception in thread "main" org.apache.spark.SparkException: Failed to register classes with Kryo
    at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:101)
    at org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:153)
    at org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:115)
    at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:200)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:101)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:84)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
    at org.apache.spark.rdd.NewHadoopRDD.<init>(NewHadoopRDD.scala:77)
    at org.apache.spark.SparkContext.newAPIHadoopFile(SparkContext.scala:848)
    at org.apache.spark.api.java.JavaSparkContext.newAPIHadoopFile(JavaSparkContext.scala:488)
    at com.nsn.PMParser.Parser2.main(Parser2.java:56)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.spark.serializer.KryoRegistrator
    at org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$3.apply(KryoSerializer.scala:97)
    at org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$3.apply(KryoSerializer.scala:97)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:97)
    ... 13 more

How do I fix the Hadoop writables NotSerializableException with the Apache Spark Java API?

Best Answer

As of Spark v1.4.0, you can use this Java API to register classes to be serialized with Kryo: https://spark.apache.org/docs/latest/api/java/org/apache/spark/SparkConf.html#registerKryoClasses(java.lang.Class[]) by passing in an array of Class objects, each of which can be obtained with http://docs.oracle.com/javase/7/docs/api/java/lang/Class.html#forName(java.lang.String)

For example:

new SparkConf().registerKryoClasses(new Class<?>[]{
    Class.forName("org.apache.hadoop.io.LongWritable"),
    Class.forName("org.apache.hadoop.io.Text")
});
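As a side note, the ClassCastException in the question happens because spark.kryo.registrator expects the fully qualified name of a class that implements org.apache.spark.serializer.KryoRegistrator, not the name of a class you want registered. A minimal sketch of such a registrator (the class name WritableRegistrator is made up for illustration):

import com.esotericsoftware.kryo.Kryo;
import org.apache.spark.serializer.KryoRegistrator;

// Hypothetical registrator; register each Hadoop writable the job produces.
public class WritableRegistrator implements KryoRegistrator {
    @Override
    public void registerClasses(Kryo kryo) {
        kryo.register(org.apache.hadoop.io.LongWritable.class);
        kryo.register(org.apache.hadoop.io.Text.class);
    }
}

It would then be wired up with sparkConf.set("spark.kryo.registrator", WritableRegistrator.class.getName()). Either way, spark.serializer still has to be set to org.apache.spark.serializer.KryoSerializer, as in the question, for Kryo to be used at all.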

Hope this helps.
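One more note: even with Kryo registration, collecting raw writables can give surprising results, because Hadoop's RecordReader re-uses the same Writable instance for every record. A common workaround, sketched here against the lines RDD from the question, is to copy the writables into plain, Serializable Java types before collect(), which also makes the Kryo registration unnecessary:

import scala.Tuple2;

// Copy each (LongWritable, Text) pair into plain Java types so the
// results are both serializable and safe to hold onto after collect().
JavaPairRDD<Long, String> plain = lines.mapToPair(
    pair -> new Tuple2<>(pair._1().get(), pair._2().toString()));
System.out.println(plain.collect());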

Regarding "java - hadoop writables NotSerializableException with the Apache Spark API", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/29876681/
