I am trying to write a Delta table from HDInsight Spark 2.4.
I configured my job following https://docs.delta.io/latest/delta-storage.html#configure-for-azure-blob-storage
I have the following code:
myrdd.write().format("delta").mode(SaveMode.Append).partitionBy("col1","col2")
    .save("wasbs://container@account.blob.core.windows.net/delta/table1");
The write succeeds and I can see the Parquet files in the storage location, but when I inspect the _delta_log files I see no partition information; note that partitionBy below is an empty array:
{"commitInfo":{"timestamp":1586157735069,"operation":"WRITE","operationParameters":{"mode":"Append","partitionBy":"[]"},"isBlindAppend":true}}
The partition information is also missing from the individual Parquet file entries:
{"add":{"path":"part-00000-10341955-1490-4fc4-a66c-e7fdd6765fb2-c000.snappy.parquet","partitionValues":{},"size":10473576,"modificationTime":1586157604000,"dataChange":true}}
{"add":{"path":"part-00001-13651729-a04c-400e-ba42-242df2d0afd4-c000.snappy.parquet","partitionValues":{},"size":3884853,"modificationTime":1586157734000,"dataChange":true}}
{"add":{"path":"part-00002-dc29cc35-ef55-4f71-8195-927d76867195-c000.snappy.parquet","partitionValues":{},"size":2449481,"modificationTime":1586157371000,"dataChange":true}}
{"add":{"path":"part-00003-0a8028fa-e910-420b-aa82-b85f4ee1ce4a-c000.snappy.parquet","partitionValues":{},"size":2680111,"modificationTime":1586157441000,"dataChange":true}}
{"add":{"path":"part-00004-414dc827-2860-44f2-82ff-67e7c6f53e50-c000.snappy.parquet","partitionValues":{},"size":3321879,"modificationTime":1586157381000,"dataChange":true}}
{"add":{"path":"part-00005-b7bb3b28-a78a-4733-be54-e30d88b8d360-c000.snappy.parquet","partitionValues":{},"size":4634113,"modificationTime":1586157618000,"dataChange":true}}
I am passing the following packages to my spark-submit:
io.delta:delta-core_2.11:0.5.0,org.apache.hadoop:hadoop-azure:3.2.0
Please let me know if I am missing something or have explained anything incorrectly.
Best Answer
According to the Delta Lake documentation, Delta Lake is supported starting with Spark 2.4.2.
HDInsight released a new version in July 2020 that includes Spark 2.4.4.
With the newer HDInsight release that ships Spark 2.4.4, I see the data written with the proper partitions.
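Since the failure mode here is silent, it may be worth checking the runtime version against Delta Lake's documented minimum before writing. A small sketch (plain Python; the helper name is hypothetical, and the version string would come from `spark.version` in a real job):

```python
# Hypothetical helper: does a Spark version string meet Delta Lake's
# documented minimum of 2.4.2?
def meets_delta_minimum(spark_version, minimum="2.4.2"):
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(spark_version) >= parse(minimum)

print(meets_delta_minimum("2.4.0"))  # False: the older HDInsight Spark 2.4 build
print(meets_delta_minimum("2.4.4"))  # True: the July 2020 HDInsight release
```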
Source: apache-spark - Delta table on Azure HDInsight with Azure Blob Storage, a similar question on Stack Overflow: https://stackoverflow.com/questions/61055012/