hadoop - Hive 2.1.0: Unable to move source

Tags: hadoop apache-spark hive hiveql hadoop2

I upgraded Hive from 1.2.1 to 2.1.0, and I am now hitting a problem when running an INSERT OVERWRITE DIRECTORY statement:

INSERT OVERWRITE DIRECTORY '/xx/xx' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' SELECT x.x from xxx;

    2016-10-28T12:08:49,997 ERROR [main]: exec.Task (:()) - Failed with exception Unable to move source hdfs://mycluster/xx/.hive-staging_hive_2016-10-28_12-07-58_576_4894031662568749258-1/-ext-10000 to destination /DIM/ASSET
    org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://mycluster/xx/.hive-staging_hive_2016-10-28_12-07-58_576_4894031662568749258-1/-ext-10000 to destination /DIM/ASSET
    at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:103)
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:254)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1858)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1562)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1313)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)

    Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename for src path: hdfs://mycluster/xx/.hive-staging_hive_2016-10-28_12-07-58_576_4894031662568749258-1/-ext-10000/000000_0 to dest path:/x/xx/000000_0 returned false
    at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2942)
    at org.apache.hadoop.hive.ql.exec.MoveTask.moveFileInDfs(MoveTask.java:118)
    at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:96)
    ... 20 more
    Caused by: java.io.IOException: rename for src path: hdfs://mycluster/xx/.hive-staging_hive_2016-10-28_12-07-58_576_4894031662568749258-1/-ext-10000/000000_0 to dest path:/x/xx/000000_0 returned false
    at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2922)
    at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2911)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

Any suggestions would be highly appreciated.

Best Answer

I ran into the same problem when trying to execute a similar query without write access to the source/destination files.

Try checking whether you have write permission on the source/destination files and their parent directories.
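As a sketch, assuming the paths from the stack trace above and that Hive runs as the `hive` user with group `hadoop` (adjust both to your cluster), you could inspect and fix the HDFS permissions like this:

```shell
# Show owner/group/mode of the destination directory and its parent;
# the rename in MoveTask needs write access on the destination's parent.
hdfs dfs -ls -d /DIM/ASSET
hdfs dfs -ls -d /DIM

# Show the staging directory Hive is trying to move from (path from the trace).
hdfs dfs -ls -d 'hdfs://mycluster/xx/.hive-staging_hive_*'

# If the user running Hive lacks access, hand ownership over
# ('hive:hadoop' is an assumption; use your actual service user/group) ...
hdfs dfs -chown -R hive:hadoop /DIM/ASSET

# ... or widen permissions on the parent so the rename can complete.
hdfs dfs -chmod 775 /DIM
```

These commands require a running HDFS client pointed at the cluster, so run them as an HDFS superuser (or the directory owner) for the `chown`/`chmod` steps.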

For "hadoop - Hive 2.1.0: Unable to move source", see the similar question on Stack Overflow: https://stackoverflow.com/questions/40299613/
