apache-spark - Nested Java beans used in Spark SQL

Tags: apache-spark

I am using Spark 2.1 and want to write a list of Person objects out as a DataFrame. The Person class contains a nested Java bean class, Address.

Person:

public class Person {
    private String name;
    private Address address;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Address getAddress() {
        return address;
    }

    public void setAddress(Address address) {
        this.address = address;
    }
}

Address:

public class Address {
    private String city;
    private String street;

    public String getCity() {
        return city;
    }

    public void setCity(String city) {
        this.city = city;
    }

    public String getStreet() {
        return street;
    }

    public void setStreet(String street) {
        this.street = street;
    }
}

I am creating the DataFrame from the List&lt;Person&gt; with the following code:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import java.util.ArrayList;
import java.util.List;

public class PersonTest {
    public static void main(String[] args) {
        Person p = new Person();
        p.setName("Tom");
        Address address = new Address();
        address.setCity("C");
        address.setStreet("001");
        p.setAddress(address);
        List<Person> persons = new ArrayList<Person>();
        persons.add(p);

        SparkSession session = SparkSession.builder().master("local").appName("abc").enableHiveSupport().getOrCreate();

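        // fails at runtime: createDataFrame cannot convert the nested Address bean (see the exception below)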
        Dataset<Row> df = session.createDataFrame(persons, Person.class);
        df.printSchema();

        df.write().json("file:///D:/applog/spark/" + System.currentTimeMillis());

    }
}

But the following exception is thrown. How can I fix this?

Exception in thread "main" scala.MatchError: com.Address@1e5eb20a (of class com..Address)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:236)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:231)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:103)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:383)
    at org.apache.spark.sql.SQLContext$$anonfun$beansToRows$1$$anonfun$apply$1.apply(SQLContext.scala:1113)
    at org.apache.spark.sql.SQLContext$$anonfun$beansToRows$1$$anonfun$apply$1.apply(SQLContext.scala:1113)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
    at org.apache.spark.sql.SQLContext$$anonfun$beansToRows$1.apply(SQLContext.scala:1113)
    at org.apache.spark.sql.SQLContext$$anonfun$beansToRows$1.apply(SQLContext.scala:1111)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$class.toStream(Iterator.scala:1322)
    at scala.collection.AbstractIterator.toStream(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toSeq(TraversableOnce.scala:298)
    at scala.collection.AbstractIterator.toSeq(Iterator.scala:1336)
    at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:380)

Best Answer

You can create a typed Dataset instead and convert it to a DataFrame when needed. In Spark 2.1, createDataFrame(List, Class) only converts the top-level bean properties, so Catalyst's converter receives the raw nested Address object where it expects a struct row and fails with scala.MatchError; Encoders.bean, by contrast, maps nested bean properties recursively into struct fields:

import org.apache.spark.sql.Encoders;

Dataset<Person> ds = session.createDataset(persons, Encoders.bean(Person.class));
Dataset<Row> df = ds.toDF();
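
For reference, here is a minimal end-to-end sketch of the fix, reusing the Person and Address classes from the question (enableHiveSupport() from the original program is not needed for this example, and the output path is kept from the question):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import java.util.ArrayList;
import java.util.List;

public class PersonTest {
    public static void main(String[] args) {
        Person p = new Person();
        p.setName("Tom");
        Address address = new Address();
        address.setCity("C");
        address.setStreet("001");
        p.setAddress(address);
        List<Person> persons = new ArrayList<>();
        persons.add(p);

        SparkSession session = SparkSession.builder()
                .master("local")
                .appName("abc")
                .getOrCreate();

        // Encoders.bean builds an encoder that maps the nested Address bean
        // to a struct column, so this does not hit the MatchError.
        Dataset<Person> ds = session.createDataset(persons, Encoders.bean(Person.class));
        Dataset<Row> df = ds.toDF();
        df.printSchema();

        df.write().json("file:///D:/applog/spark/" + System.currentTimeMillis());
    }
}

printSchema() should then report address as a nested struct (bean encoders list properties alphabetically), roughly:

root
 |-- address: struct (nullable = true)
 |    |-- city: string (nullable = true)
 |    |-- street: string (nullable = true)
 |-- name: string (nullable = true)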

Regarding apache-spark - nested Java beans used in Spark SQL, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/56129574/
