java - Spark - Converting complex data types

Tags: java apache-spark apache-spark-sql user-defined-functions

Goal

What I want to achieve:

  • Read a CSV file (works)
  • Encode it into a Dataset<Person>, where each Person object has a nested Address[] (throws an exception)

The person CSV file

In a file named person.csv, the following data describes some persons:

name,age,address
"name1",10,"streetA~cityA||streetB~cityB"
"name2",20,"streetA~cityA||streetB~cityB"

The first line is the schema, and the address column is a nested structure.

Data classes

The data classes are:

@Data
public class Address implements Serializable {
    public String street;
    public String city;
}

@Data
public class Person implements Serializable {
    public String name;
    public Integer age;
    public Address[] address;
}
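
Both the UDF below and the map() in the accepted answer call Address::fromArgs, which is never shown. Judging from the call sites, it takes the String[] produced by split("~") and returns an Optional<Address>; a hypothetical sketch (the real implementation may differ) would be a static factory on Address:

// Hypothetical sketch only - the question does not show this method.
// Assumes java.util.Optional is imported and args comes from "street~city".split("~").
public static Optional<Address> fromArgs(String[] args) {
    if (args == null || args.length < 2) {
        return Optional.empty();
    }
    Address address = new Address();
    address.street = args[0];
    address.city = args[1];
    return Optional.of(address);
}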

Reading the untyped data

I first try to read the data from the CSV into a Dataset<Row>, which works as expected:

    Dataset<Row> ds = spark.read() //
                           .format("csv") //
                           .option("header", "true") // first line has headers
                           .load("src/test/resources/outer/person.csv");

    LOG.info("=============== Print schema =============");
    ds.printSchema();

root
|-- name: string (nullable = true)
|-- age: string (nullable = true)
|-- address: string (nullable = true)

    LOG.info("================ Print data ==============");
    ds.show();

+-----+---+--------------------+
| name|age|             address|
+-----+---+--------------------+
|name1| 10|streetA~cityA||st...|
|name2| 20|streetA~cityA||st...|
+-----+---+--------------------+

    LOG.info("================ Print name ==============");
    ds.select("name").show();

+-----+
| name|
+-----+
|name1|
|name2|
+-----+

    assertThat(ds.isEmpty(), is(false)); //OK
    assertThat(ds.count(), is(2L)); //OK
    final List<String> names = ds.select("name").as(Encoders.STRING()).collectAsList();
    assertThat(names, hasItems("name1", "name2")); //OK

Encoding via a UserDefinedFunction

My udf takes a String and returns an Address[]:

private static void registerAsAddress(SparkSession spark) {
    spark.udf().register("asAddress", new UDF1<String, Address[]>() {

                             @Override
                             public Address[] call(String rowValue) {
                                 return Arrays.stream(rowValue.split(Pattern.quote("||"), -1)) //
                                              .map(object -> object.split("~")) //
                                              .map(Address::fromArgs) //
                                              .map(a -> a.orElse(null)) //
                                              .toArray(Address[]::new);
                             }
                         },  //
                         DataTypes.createArrayType(DataTypes.createStructType(
                                 new StructField[]{new StructField("street", DataTypes.StringType, true, Metadata.empty()), //
                                                   new StructField("city", DataTypes.StringType, true, Metadata.empty()) //
                                 })));
}

The caller:

    @Test
    void asAddressTest() throws URISyntaxException {
        registerAsAddress(spark);

        // given, when
        Dataset<Row> ds = spark.read() //
                               .format("csv") //
                               .option("header", "true") // first line has headers
                               .load("src/test/resources/outer/person.csv");

        ds.show();
        // create a typed dataset
        Encoder<Person> personEncoder = Encoders.bean(Person.class);
        Dataset<Person> typed = ds.withColumn("address2", //
                                                callUDF("asAddress", ds.col("address")))
                .drop("address").withColumnRenamed("address2", "address")
                .as(personEncoder);
        LOG.info("Typed Address");
        typed.show();
        typed.printSchema();
    }

This results in the following exception:

Caused by: java.lang.IllegalArgumentException: The value (Address(street=streetA, city=cityA)) of the type (ch.project.data.Address) cannot be converted to struct

Why can't it convert from Address to Struct?

Best answer

After trying a lot of different approaches and spending some time researching on the internet, I ended up with the following conclusion:

A UserDefinedFunction works, but it comes from the old, untyped world; it can be replaced with a plain map() function in which we convert the object from one type to another. The simplest approach is shown below.
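
The code below refers to a FileFormat bean and a SerializableFunction interface that the answer does not show, and it also appears to use a Person variant with an all-args constructor and a List<Address> addresses field (rather than the question's Address[] array). Hypothetical sketches of the missing types, with field names inferred from how they are used, might look like this:

// Hypothetical flat bean mirroring one raw CSV row (field names inferred from the mapper below).
@Data
public class FileFormat implements Serializable {
    public String name;
    public Integer age;
    public String address; // still the raw "streetA~cityA||streetB~cityB" string
}

// Hypothetical serializable function type, so the lambda can be shipped to executors.
@FunctionalInterface
public interface SerializableFunction<T, R> extends java.util.function.Function<T, R>, java.io.Serializable {
}

With those in place, the full flow is: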

    SparkSession spark = SparkSession.builder().appName("CSV to Dataset").master("local").getOrCreate();
    Encoder<FileFormat> fileFormatEncoder = Encoders.bean(FileFormat.class);
    Dataset<FileFormat> rawFile = spark.read() //
                                       .format("csv") //
                                       .option("inferSchema", "true") //
                                       .option("header", "true") // first line has headers
                                       .load("src/test/resources/encoding-tests/persons.csv") //
                                       .as(fileFormatEncoder);

    LOG.info("=============== Print schema =============");
    rawFile.printSchema();
    LOG.info("================ Print data ==============");
    rawFile.show();
    LOG.info("================ Print name ==============");
    rawFile.select("name").show();

    // when
    final SerializableFunction<String, List<Address>> asAddress = (String text) -> Arrays
            .stream(text.split(Pattern.quote("||"), -1)) //
            .map(object -> object.split("~")) //
            .map(Address::fromArgs) //
            .map(a -> a.orElse(null)).collect(Collectors.toList());

    final MapFunction<FileFormat, Person> personMapper =
            (MapFunction<FileFormat, Person>) row -> new Person(row.name, row.age, asAddress.apply(row.address));
    final Encoder<Person> personEncoder = Encoders.bean(Person.class);
    Dataset<Person> persons = rawFile.map(personMapper, personEncoder);
    persons.show();

    // then
    assertThat(persons.isEmpty(), is(false));
    assertThat(persons.count(), is(2L));
    final List<String> names = persons.select("name").as(Encoders.STRING()).collectAsList();
    assertThat(names, hasItems("name1", "name2"));
    final List<Integer> ages = persons.select("age").as(Encoders.INT()).collectAsList();
    assertThat(ages, hasItems(10, 20));
    final Encoder<Address> addressEncoder = Encoders.bean(Address.class);
    final MapFunction<Person, Address> firstAddressMapper = (MapFunction<Person, Address>) person -> person.addresses.get(0);
    final List<Address> addresses = persons.map(firstAddressMapper, addressEncoder).collectAsList();
    assertThat(addresses, hasItems(new Address("streetA", "cityA"), new Address("streetC", "cityC")));
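
As to the original question of why the Address beans cannot be converted: when a UDF declares its return type as an array of structs, Spark expects the function to return Row objects (for example built with RowFactory.create), not arbitrary Java beans, so Catalyst refuses to convert the Address instances into the declared struct. If you wanted to keep the UDF approach instead of map(), a sketch along these lines (not part of the original answer, and assuming well-formed "street~city" input) should avoid the conversion error:

// Sketch: same UDF, but building Row objects to match the declared array<struct<street,city>>.
// Needs org.apache.spark.sql.Row, org.apache.spark.sql.RowFactory,
// org.apache.spark.sql.api.java.UDF1 and org.apache.spark.sql.types.*.
spark.udf().register("asAddress", (UDF1<String, Row[]>) value ->
                Arrays.stream(value.split(Pattern.quote("||"), -1))
                      .map(part -> part.split("~"))
                      .map(args -> RowFactory.create(args[0], args[1])) // field order must match the struct
                      .toArray(Row[]::new),
        DataTypes.createArrayType(DataTypes.createStructType(new StructField[]{
                new StructField("street", DataTypes.StringType, true, Metadata.empty()),
                new StructField("city", DataTypes.StringType, true, Metadata.empty())
        })));

Even so, the map()-based version above stays in typed objects end to end, which is why the answer prefers it.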

Regarding java - Spark - Converting complex data types, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58232599/
