java - Spark UDF - ClassCastException with JSON

Tags: java json apache-spark user-defined-functions classcastexception

I am running Java Spark code that reads some JSON data and uses a UDF to convert one of its fields to uppercase. The code works fine in local mode, but when it runs on a cluster (on Kubernetes) I get a ClassCastException. The code and the stack trace follow:

import java.util.List;
import com.google.common.collect.ImmutableList;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.col;

// Register the UDF under the name used by callUDF below
UDF1<String, String> uppercase = new UdfUppercase();
session.udf().register("uppercasefunction", uppercase, DataTypes.StringType);

// Explicit schema for the JSON input
StructField[] structFields = new StructField[]{
        new StructField("intColumn", DataTypes.IntegerType, true, Metadata.empty()),
        new StructField("stringColumn", DataTypes.StringType, true, Metadata.empty())
};
StructType structType = new StructType(structFields);

List<String> jsonData = ImmutableList.of(
        "{\"intColumn\":1,\"stringColumn\":\"Miami\"}");

Dataset<String> anotherPeopleDataset = session.createDataset(jsonData, Encoders.STRING());
Dataset<Row> anotherPeople = session.read().schema(structType).json(anotherPeopleDataset);
anotherPeople.show(false);

// Apply the UDF to the string column
Dataset<Row> dfupercase = anotherPeople.select(callUDF("uppercasefunction", col("stringColumn")));
dfupercase.show(false);

Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
    at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2287)
    at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1417)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2293)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:490)
    at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
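
(The UdfUppercase class is not included in the question. A minimal sketch of what it presumably looks like, assuming it implements Spark's UDF1 interface:)

import org.apache.spark.sql.api.java.UDF1;

// Hypothetical reconstruction of the UDF referenced above; the original
// class is not shown in the question. It uppercases the incoming value.
public class UdfUppercase implements UDF1<String, String> {
    @Override
    public String call(String value) {
        return value == null ? null : value.toUpperCase();
    }
}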

Any help would be appreciated.

Best Answer

The problem is now solved; it turned out to be a jar conflict with Spring Boot. The ClassCastException was misleading, because it gave the impression that the DataFrame was not being serialized properly (a difference between local and cluster mode), when in fact it had nothing to do with the code itself; it was a jar conflict. This particular exception, where scala.collection.immutable.List$SerializationProxy cannot be assigned to the RDD dependencies field during task deserialization, typically indicates mismatched or duplicated Spark/Scala classes on the driver and executor classpaths rather than a bug in user code.
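
One way to confirm such a conflict is to check which artifact a suspect class is actually loaded from, on both the driver and the executors. The snippet below is an illustrative diagnostic, not part of the original answer; the classes it inspects are just examples:

// Illustrative diagnostic (not from the original answer): print the jar a
// suspect class was loaded from. Run it on the driver and again inside an
// executor task; duplicated or differing locations point to a dependency conflict.
System.out.println(org.apache.spark.rdd.RDD.class
        .getProtectionDomain().getCodeSource().getLocation());
System.out.println(scala.collection.Seq.class
        .getProtectionDomain().getCodeSource().getLocation());

In a Spring Boot build, a typical fix is to align the transitively pulled-in Scala/Spark versions or exclude the conflicting artifacts so that only one version ends up on the classpath.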

Regarding "java - Spark UDF - ClassCastException with JSON", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55907052/
