java - How to convert a struct field in a Row to an Avro record in Spark Java

Tags: java apache-spark avro spark-avro

I have a use case where I want to convert a struct field into an Avro record. The struct field originally maps to an Avro type: the input data is Avro files, and the struct field corresponds to one field of the input Avro records.
Below is what I want to achieve, in pseudocode.

Dataset<Row> data = loadInput(); // data has the form (foo, bar, myStruct), read from Avro data

// do some joins to add more data
data = doJoins(data); // now data is of form (a, b, myStruct)

// transform Dataset<Row> to Dataset<MyType>
Dataset<MyType> myData = data.map(row -> myUDF(row), encoderOfMyType);

// method `myUDF` definition
MyType myUDF(Row row) {
  String a = row.getAs("a");
  String b = row.getAs("b");

  // MyStruct is the generated Avro class that corresponds to the field myStruct
  MyStruct myStruct = convertToAvro(row.getAs("myStruct"));

  return generateMyType(a, b, myStruct);
}
My question is: how do I implement the convertToAvro method from the pseudocode above?

Best Answer

From the documentation:

The Avro package provides function to_avro to encode a column as binary in Avro format, and from_avro() to decode Avro binary data into a column. Both functions transform one column to another column, and the input/output SQL data type can be a complex type or a primitive type.


The function to_avro can serve as a replacement for the convertToAvro method:
import static org.apache.spark.sql.avro.functions.*;

// put the Avro schema of the struct column into a string;
// in this example the struct consists of two fields:
// a long field (s1) and a string field (s2)
String schema = "{\"type\":\"record\",\"name\":\"mystruct\"," +
        "\"namespace\":\"topLevelRecord\",\"fields\":[{\"name\":\"s1\"," +
        "\"type\":[\"long\",\"null\"]},{\"name\":\"s2\",\"type\":" +
        "[\"string\",\"null\"]}]}";

data = ...

// add an additional binary column containing the struct encoded as Avro
Dataset<Row> data2 = data.withColumn("to_avro", to_avro(data.col("myStruct"), schema));
data2.printSchema();
data2.show(false);
This prints:
root
 |-- a: string (nullable = true)
 |-- b: string (nullable = true)
 |-- mystruct: struct (nullable = true)
 |    |-- s1: long (nullable = true)
 |    |-- s2: string (nullable = true)
 |-- to_avro: binary (nullable = true)

+----+----+----------+----------------------------+
|a   |b   |mystruct  |to_avro                     |
+----+----+----------+----------------------------+
|foo1|bar1|[1, one]  |[00 02 00 06 6F 6E 65]      |
|foo2|bar2|[3, three]|[00 06 00 0A 74 68 72 65 65]|
+----+----+----------+----------------------------+
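As a side note (my own illustration, not part of the answer), the hex bytes above can be decoded by hand: Avro writes each nullable union as a branch index followed by the value, and encodes longs (and string lengths) as zigzag varints. That is why s1 = 1 appears as `00 02` (branch 0, zigzag(1) = 2) and the length-3 string "one" as `00 06 6F 6E 65`. A minimal stdlib-only sketch of that varint step:

```java
import java.io.ByteArrayOutputStream;

public class AvroVarint {
    // Avro encodes a long by zigzag-mapping it to an unsigned value,
    // then writing it as a little-endian base-128 varint.
    static byte[] encodeLong(long n) {
        long z = (n << 1) ^ (n >> 63); // zigzag: 0->0, -1->1, 1->2, 3->6, ...
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((z & ~0x7FL) != 0) {    // more than 7 bits left: emit with continuation bit
            out.write((int) ((z & 0x7F) | 0x80));
            z >>>= 7;
        }
        out.write((int) z);            // final byte, continuation bit clear
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // s1 = 1 encodes to 0x02 and s1 = 3 to 0x06, matching the to_avro output above
        System.out.printf("%02X%n", encodeLong(1)[0]); // 02
        System.out.printf("%02X%n", encodeLong(3)[0]); // 06
    }
}
```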
To convert the Avro column back, the function from_avro can be used:
Dataset<Row> data3 = data2.withColumn("from_avro", from_avro(data2.col("to_avro"), schema));
data3.printSchema();
data3.show();
Output:
root
 |-- a: string (nullable = true)
 |-- b: string (nullable = true)
 |-- mystruct: struct (nullable = true)
 |    |-- s1: long (nullable = true)
 |    |-- s2: string (nullable = true)
 |-- to_avro: binary (nullable = true)
 |-- from_avro: struct (nullable = true)
 |    |-- s1: long (nullable = true)
 |    |-- s2: string (nullable = true)

+----+----+----------+--------------------+----------+
|   a|   b|  mystruct|             to_avro| from_avro|
+----+----+----------+--------------------+----------+
|foo1|bar1|  [1, one]|[00 02 00 06 6F 6...|  [1, one]|
|foo2|bar2|[3, three]|[00 06 00 0A 74 6...|[3, three]|
+----+----+----------+--------------------+----------+
A word about the UDF: in the question, you perform the conversion to Avro format inside the UDF. I would prefer to keep only the actual business logic inside the UDF and do the format conversion outside of it; this separates the logic from the format handling. If needed, the original column mystruct can be dropped after the Avro column has been created.
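Put together, that restructured pipeline might look like the following fragment (a sketch reusing the data, schema, and encoderOfMyType names from above; myBusinessUDF is a hypothetical UDF containing only the business logic):

```java
// convert the struct to Avro outside the UDF, then drop the original column
Dataset<Row> withAvro = data
        .withColumn("myStructAvro", to_avro(data.col("myStruct"), schema))
        .drop("myStruct");

// the UDF now only deals with business logic on (a, b, myStructAvro)
Dataset<MyType> myData = withAvro.map(row -> myBusinessUDF(row), encoderOfMyType);
```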

Regarding "java - How to convert a struct field in a Row to an Avro record in Spark Java", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63925187/
