hadoop - Pig 0.7.0 ERROR 2118: Unable to create input splits on Hadoop 1.2.1

Tags: hadoop apache-pig

I have the output file of a MapReduce job, stored on HDFS. Now I am trying to load that file with Pig 0.7.0.

I get the error below. If I copy the file to my local machine and run Pig in local mode, it works fine, but I want to skip that step and make this work in mapreduce mode.
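For reference, the two launch modes look like this (a minimal sketch, assuming the stock pig launcher script is on the PATH and picks up the cluster configuration):

# local mode: paths resolve against the local filesystem
pig -x local

# mapreduce mode (the default): paths resolve against HDFS per fs.default.name
pig -x mapreduce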

Options I have tried:

LOAD 'file://log/part-00000', 
LOAD '/log/part-00000', 
LOAD 'hdfs:/log/part-00000', 
LOAD 'hdfs://localhost:50070/log/part-00000', 
hadoop dfs -ls /log/
Warning: $HADOOP_HOME is deprecated.

Found 3 items
-rw-r--r--   3  supergroup          0 2014-02-07 07:56 /log/_SUCCESS
drwxr-xr-x   -  supergroup          0 2014-02-07 07:55 /log/_logs
-rw-r--r--   3  supergroup      10021 2014-02-07 07:56 /log/part-00000

Pig (running in mapreduce mode):
grunt> REC = LOAD 'file://log/part-00000' as (CREATE_TMSTP:chararray, MESSAGE_TYPE:chararray, MESSAGE_FROM:chararray, MESSAGE_TEXT:chararray);
grunt> DUMP REC;

Backend error message during job submission
-------------------------------------------
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input splits for: file:///log/part-00000
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:269)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
    at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
    at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
    at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279)
    at java.lang.Thread.run(Thread.java:695)
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/log/part-00000
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextInputFormat.listStatus(PigTextInputFormat.java:36)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:258)
    ... 7 more

Pig Stack Trace
ERROR 2997: Unable to recreate exception from backend error:org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input splits for: file:///log/part-00000

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias REC
    at org.apache.pig.PigServer.openIterator(PigServer.java:521)
    at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:544)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:241)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:162)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:138)
    at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:75)
    at org.apache.pig.Main.main(Main.java:357)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2997: Unable to recreate exception from backend error: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input splits for: file:///log/part-00000
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getStats(Launcher.java:169)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:268)
    at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.execute(HExecutionEngine.java:308)
    at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:835)
    at org.apache.pig.PigServer.store(PigServer.java:569)
    at org.apache.pig.PigServer.openIterator(PigServer.java:504)
    ... 6 more

Best Answer

You should try upgrading to a newer release of Pig. 0.7.0 is several years old; the current stable release is 0.12.0.
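Separately, note that the backend error reports "Input path does not exist: file:/log/part-00000", i.e. the file:// form is resolved against the local filesystem rather than HDFS. Here is a minimal sketch of loading the same file over HDFS in mapreduce mode; the hdfs://localhost:9000 URI is an assumption (it must match fs.default.name in core-site.xml; 50070 is the NameNode web UI port, not the filesystem port):

-- plain HDFS path; in mapreduce mode it resolves against the default filesystem
REC = LOAD '/log/part-00000' AS (CREATE_TMSTP:chararray, MESSAGE_TYPE:chararray, MESSAGE_FROM:chararray, MESSAGE_TEXT:chararray);

-- or with an explicit URI (port 9000 is assumed; use the fs.default.name port, not 50070)
-- REC = LOAD 'hdfs://localhost:9000/log/part-00000' AS (CREATE_TMSTP:chararray, MESSAGE_TYPE:chararray, MESSAGE_FROM:chararray, MESSAGE_TEXT:chararray);

DUMP REC;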

Regarding hadoop - Pig 0.7.0 ERROR 2118: Unable to create input splits on Hadoop 1.2.1, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/21632476/
