hadoop - Hive dynamic partitioning, correct partitions not created

Tags: hadoop dynamic hive hdfs hiveql

I am trying to insert data into a partitioned table, but not all of the expected partitions are being created (only a NULL partition and a zero partition appear). See below.

hive >

select state_code,district_code,count(*) from marital_status group by state_code,district_code;
Total MapReduce jobs = 1

MapReduce Jobs Launched:

...
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 3.49 sec   HDFS Read: 193305 HDFS Write: 240 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 490 msec
OK
28  000 60
28  532 60
28  533 60
28  534 60
28  535 60
28  536 60
28  537 60
28  538 60
28  539 60
28  540 60
28  541 60
28  542 60
28  543 60
28  544 60
28  545 60
28  546 60
28  547 60
28  548 60
28  549 60
28  550 60
28  551 60
28  552 60
28  553 60
28  554 60
Time taken: 39.442 seconds, Fetched: 24 row(s)

I am now inserting this table's data into another table that is partitioned by district_code.

hive >

insert overwrite table marital_status_part partition(DISTRICT_CODE) SELECT * FROM MARITAL_STATUS WHERE DISTRICT_CODE IN ('532','533','534');
Total MapReduce jobs = 3
Launching Job 1 out of 3

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_201507071409_0020, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201507071409_0020
Kill Command = /home/chaitanya/hadoop-1.2.1/libexec/../bin/hadoop job  -kill job_201507071409_0020

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-07-07 16:35:38,180 Stage-1 map = 0%,  reduce = 0%
2015-07-07 16:35:48,214 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.01 sec
2015-07-07 16:35:49,217 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.01 sec
2015-07-07 16:35:50,220 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.01 sec
2015-07-07 16:35:51,222 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.01 sec
2015-07-07 16:35:52,226 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.01 sec
2015-07-07 16:35:53,234 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.01 sec
2015-07-07 16:35:54,237 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.01 sec
MapReduce Total cumulative CPU time: 2 seconds 10 msec
Ended Job = job_201507071409_0020
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://localhost:9000/tmp/hive-chaitanya/hive_2015-07-07_16-35-29_099_2560746659196071718-1/-ext-10000
Loading data to table default.marital_status_part partition (district_code=null)
    Loading partition {district_code=0}
Partition default.marital_status_part{district_code=0} stats: [num_files: 1, num_rows: 0, total_size: 22882, raw_data_size: 0]
Table default.marital_status_part stats: [num_partitions: 1, num_files: 1, num_rows: 0, total_size: 22882, raw_data_size: 0]
MapReduce Jobs Launched: 
Job 0: Map: 1   Cumulative CPU: 2.01 sec   HDFS Read: 193305 HDFS Write: 22882 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 10 msec
OK
Time taken: 26.254 seconds

What should actually happen is that three partition folders (532, 533, and 534) are created, but only two folders (NULL and zero) were created. Can you help me resolve this?

Best Answer

A Hive partition can be thought of as a "virtual" column. On HDFS, partitions are separated into different directories. The dynamic partition value is taken from the last column in your SELECT list. Without knowing more about your table's columns, the following query should work with minor modification.

INSERT OVERWRITE TABLE marital_status_part partition(DISTRICT_CODE) SELECT column1, column2, ..., columnN, DISTRICT_CODE FROM MARITAL_STATUS WHERE DISTRICT_CODE IN ('532','533','534');

In this insert, note that DISTRICT_CODE is the last column of the SELECT clause. That last column is what gets used as the DISTRICT_CODE in partition(DISTRICT_CODE). You need to make sure the number of columns you select matches the number of columns in the target table, with the partition column included at the end. Your original SELECT * put whatever column happens to be last in the source table into the partition slot, which is why Hive created the NULL and 0 partitions instead of 532, 533, and 534.
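As a sketch of the full sequence (the actual non-partition column names of MARITAL_STATUS are not shown in the question, so column1 and column2 below are placeholders), including the dynamic-partition settings that must be enabled before such an insert:

```sql
-- Enable dynamic partitioning; nonstrict mode allows every
-- partition column to be resolved dynamically from the data.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- The partition column must be the LAST column in the SELECT list.
-- column1, column2 are placeholders for the table's real columns.
INSERT OVERWRITE TABLE marital_status_part PARTITION (district_code)
SELECT column1, column2, district_code
FROM marital_status
WHERE district_code IN ('532', '533', '534');
```

If the mode is left at the default strict, Hive rejects an insert in which all partition columns are dynamic, so setting nonstrict (or supplying at least one static partition value) is required here.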

See https://cwiki.apache.org/confluence/display/Hive/Tutorial#Tutorial-Dynamic-PartitionInsert for details.
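After rerunning the corrected insert, you can check which partition directories were actually created (assuming the table name from the question):

```sql
-- Lists one line per partition, e.g. district_code=532
SHOW PARTITIONS marital_status_part;
```

If the fix worked, you should see district_code=532, district_code=533, and district_code=534 rather than the NULL and 0 partitions.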

Regarding "hadoop - Hive dynamic partitioning, correct partitions not created", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31267010/
