hadoop - Reading a Hive table with non-string partitions in Pig

Tags: hadoop hive apache-pig cloudera-cdh hcatalog

I am trying to read data from a Hive table using Pig. Details:

  • Hive version 1.1
  • Pig 0.12
  • Hadoop 2.6.0
  • Cloudera Distribution 5.4.4

Hive table schema:

map <string, string>
yyyy int
mm int
dd int

Partitions are yyyy(int), mm(int), dd(int)
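
For reference, a hypothetical DDL matching this layout (the schema listing above omits the map column's name, so kv below is an assumed placeholder):

CREATE TABLE dbname.tablename (
    kv map<string,string>   -- assumed column name; the question omits it
)
PARTITIONED BY (yyyy INT, mm INT, dd INT);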

Pig code:

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY yyyy == 2016 AND
                                      mm == 7 AND
                                      dd == 19
                                      ;

rmf input_data_dump;
STORE input_data_f INTO 'input_data_dump';

Command used to run it: pig -useHCatalog -f ./read_input.pig

I get the following error.

Error:
Pig Stack Trace
---------------
ERROR 2017: Internal error creating job configuration.

org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:873)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:298)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
        at org.apache.pig.PigServer.launchPlan(PigServer.java:1334)
        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1319)
        at org.apache.pig.PigServer.execute(PigServer.java:1309)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:387)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:365)
        at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
        at org.apache.pig.Main.run(Main.java:478)
        at org.apache.pig.Main.main(Main.java:156)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: MetaException(message:Filtering is supported only on partition keys of type string)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:61)
        at org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:125)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:498)
        ... 19 more
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result$get_partitions_by_filter_resultStandardScheme.read(ThriftHiveMetastore.java)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result$get_partitions_by_filter_resultStandardScheme.read(ThriftHiveMetastore.java)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_by_filter_result.read(ThriftHiveMetastore.java)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions_by_filter(ThriftHiveMetastore.java:2132)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions_by_filter(ThriftHiveMetastore.java:2116)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1047)
        at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:113)
        at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
        at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
        ... 22 more

Searching online led me to https://issues.apache.org/jira/browse/HIVE-7164

Is setting hive.metastore.integral.jdo.pushdown to true in hive-site.xml the only solution? This is a corporate setup, so I am not sure I can change hive-site.xml, and if I ask the admins to make the change, will there be any side effects?
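
A minimal sketch of what that change would look like in hive-site.xml, assuming the admins agree to it (the property comes from HIVE-7164 and has to be visible to the Hive metastore service):

<property>
  <name>hive.metastore.integral.jdo.pushdown</name>
  <value>true</value>
</property>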

I tried the following approaches:

Attempt 1

set hive.metastore.integral.jdo.pushdown true;

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY yyyy == 2016 AND
                                      mm == 7 AND
                                      dd == 19
                                      ;

STORE input_data_f INTO 'input_data_dump';

I see this in the logs:

org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, PartitionFilterOptimizer]}

Attempt 2

set hive.metastore.integral.jdo.pushdown true;
set pig.exec.useOldPartitionFilterOptimizer true;

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY yyyy == 2016;
input_data_f1 = FILTER input_data_f BY mm == 7;
input_data_f2 = FILTER input_data_f1 BY dd == 19;

STORE input_data_f2 INTO 'input_data_dump';

I see this in the logs:

org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, NewPartitionFilterOptimizer]}

Attempt 3

set pig.exec.useOldPartitionFilterOptimizer true;

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY yyyy == 2016;
input_data_f1 = FILTER input_data_f BY mm == 7;
input_data_f2 = FILTER input_data_f1 BY dd == 19;

STORE input_data_f2 INTO 'input_data_dump';

I see this in the logs:

org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, NewPartitionFilterOptimizer]}

With all of the above attempts, I still get the same error.

Thanks for any help.

Best Answer

Update:
The partition filter is not pushed down to the loader in some cases:
In Pig 0.12.0, Pig pushes only the first filter to the loader. You get the same results, but with a performance penalty. To work around this, use a single FILTER statement covering all partition keys (see the sketch below), or set pig.exec.useOldPartitionFilterOptimizer=true; this is a known issue in Pig 0.12.
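
For example, a sketch against the table from the question, keeping every partition key in one FILTER so that the whole predicate is eligible for pushdown:

input_data = LOAD 'dbname.tablename'
             USING org.apache.hive.hcatalog.pig.HCatLoader();

-- a single FILTER over all partition keys; splitting this into three
-- chained FILTER statements could leave only the first one pushed down
input_data_f = FILTER input_data BY yyyy == 2016 AND mm == 7 AND dd == 19;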

For properties specific to a Pig script, you can use one of these options:

- the pig.properties file (add the directory containing pig.properties to the classpath)
- the -D command-line option with a Pig property (pig -Dpig.tmpfilecompression=true)
- the -P command-line option with a properties file (pig -P mypig.properties)
- the set command (set pig.exec.nocombiner true) directly in the Pig script

More details on properties can be found in the Pig documentation; a combined invocation is sketched below.
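
For instance, a sketch of running the script from the question with the optimizer property supplied on the command line (-D properties generally need to precede the other Pig options):

pig -Dpig.exec.useOldPartitionFilterOptimizer=true -useHCatalog -f ./read_input.pig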

Test: casting to the chararray type

$ hadoop version
Hadoop 2.6.0-cdh5.7.0

$ pig -version
Apache Pig version 0.12.0-cdh5.7.0 (rexported) 

$ cat pig_test1
-- set hive.metastore.integral.jdo.pushdown true;
input_data = LOAD 'cards.props'
             USING org.apache.hive.hcatalog.pig.HCatLoader()
             ;

input_data_f = FILTER input_data BY (chararray)yyyy == '2106' AND
                                     (chararray)mm == '8' AND
                                      (chararray)dd == '4'
                                      ;
dump input_data_f;

2016-08-04 17:15:54,541 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
([1#test1],2106,8,4)
([2#test2],2106,8,4)
([3#test3],2106,8,4)

hive> select * from props;
OK
{"1":"test1"}   2106    8   4
{"2":"test2"}   2106    8   4
{"3":"test3"}   2106    8   4

Original question on Stack Overflow: https://stackoverflow.com/questions/38778619/
