- Using an external table
- The process has no write permission on /home/user/.Trash
Calling "insert OVERWRITE" produces the following warning:
2018-08-29 13:52:00 WARN TrashPolicyDefault:141 - Can't create trash directory: hdfs://nameservice1/user/XXXXX/.Trash/Current/data/table_1/key1=2 org.apache.hadoop.security.AccessControlException: Permission denied: user=XXXXX, access=EXECUTE, inode="/user/XXXXX/.Trash/Current/data/table_1/key1=2":hdfs:hdfs:drwx
Questions:
- Can we avoid the move to .Trash? Setting TBLPROPERTIES ('auto.purge'='true') on the external table does not work (see the sketch after this list).
- "insert OVERWRITE" should rewrite the partition's data rather than append the new data to the partition.
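For reference, a minimal sketch of how that property would be attached to an existing table; this is exactly what the question reports as having no effect on an external table (table_1 is the table created in the code example below):

spark.sql("ALTER TABLE table_1 SET TBLPROPERTIES ('auto.purge'='true')")
# Verify that the property was recorded in the metastore
spark.sql("SHOW TBLPROPERTIES table_1").show(truncate=False)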
Code example
Create the table:
spark.sql("CREATE EXTERNAL TABLE table_1 (id string, name string) PARTITIONED BY (key1 int) stored as parquet location 'hdfs://nameservice1/data/table_1'")
spark.sql("insert into table_1 values('a','a1', 1)").collect()
spark.sql("insert into table_1 values ('b','b2', 2)").collect()
spark.sql("select * from table_1").collect()
Overwrite the partition:
spark.sql("insert OVERWRITE table table_1 values ('b','b3', 2)").collect()
Result:
[Row(id=u'a', name=u'a1', key1=1),
Row(id=u'b', name=u'b2', key1=2),
Row(id=u'b', name=u'b3', key1=2)]
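The duplicate row for key1=2 shows that the overwrite left the old parquet file in place and wrote a new one next to it. One way to confirm this at the file level is the built-in input_file_name() function; after the failed overwrite, the two key1=2 rows should point at two different files:

from pyspark.sql.functions import input_file_name

# Attach the physical source file of each row to the output
spark.table("table_1").withColumn("file", input_file_name()).show(truncate=False)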
Best answer
Add PARTITION(column) to your INSERT OVERWRITE, and enable dynamic partitioning in the session configuration:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("test")
  .config("hive.exec.dynamic.partition", "true")
  .config("hive.exec.dynamic.partition.mode", "nonstrict")
  .enableHiveSupport()
  .getOrCreate()
spark.sql("drop table table_1")
spark.sql("CREATE EXTERNAL TABLE table_1 (id string, name string) PARTITIONED BY (key1 int) stored as parquet location '/directory/your location/'")
spark.sql("insert into table_1 values('a','a1', 1)")
spark.sql("insert into table_1 values ('b','b2', 2)")
spark.sql("select * from table_1").show()
spark.sql("insert OVERWRITE table table_1 PARTITION(key1) values ('b','b3', 2)")
spark.sql("select * from table_1").show()
Regarding apache-spark-sql - "Spark-sql Insert OVERWRITE appends data rather than overwrite", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52085524/