java - Error when chaining Map Reduce jobs

Tags: java hadoop mapreduce

My Map Reduce structure:

public class ChainingMapReduce {

     public static class ChainingMapReduceMapper 
     extends Mapper<Object, Text, Text, IntWritable>{

         public void map(Object key, Text value, Context context
                 ) throws IOException, InterruptedException {

                          // code

         }
     }


     public static class ChainingMapReduceReducer 
     extends Reducer<Text,IntWritable,Text,IntWritable> {


         public void reduce(Text key, Iterable<IntWritable> values, 
                     Context context
                     ) throws IOException, InterruptedException {

             //code

                    }
     }


     public static class ChainingMapReduceMapper1 
     extends Mapper<Object, Text, Text, IntWritable>{

         public void map(Object key, Text value, Context context
                 ) throws IOException, InterruptedException {

            //code
         }
     }

     public static class ChainingMapReduceReducer1 
     extends Reducer<Text,IntWritable,Text,IntWritable> {


         public void reduce(Text key, Iterable<IntWritable> values, 
                     Context context
                     ) throws IOException, InterruptedException {

             //code
         }
     }

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {

        Configuration conf = new Configuration();

        Job job = new Job(conf, "First");
        job.setJarByClass(ChainingMapReduce.class);
        job.setMapperClass(ChainingMapReduceMapper.class);
        job.setCombinerClass(ChainingMapReduceReducer.class);
        job.setReducerClass(ChainingMapReduceReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);


     FileInputFormat.addInputPath(job, new Path("/home/Desktop/log"));
        FileOutputFormat.setOutputPath(job, new Path("/home/Desktop/temp/output"));        
        job.waitForCompletion( true );


        System.out.println("First Job Completed.....Starting Second Job");
        System.out.println(job.isSuccessful());


      /*  FileSystem hdfs = FileSystem.get(conf);

        Path fromPath = new Path("/home/Desktop/temp/output/part-r-00000");
        Path toPath = new Path("/home/Desktop/temp/output1");
        hdfs.rename(fromPath, toPath);
        conf.clear();

        */
        if(job.isSuccessful()){
            Configuration conf1 = new Configuration();
            Job job1 = new Job(conf1,"Second");
            job1.setJarByClass(ChainingMapReduce.class);
            job1.setMapperClass(ChainingMapReduceMapper1.class);
            job1.setCombinerClass(ChainingMapReduceReducer1.class);
            job1.setReducerClass(ChainingMapReduceReducer1.class);
            job1.setOutputKeyClass(Text.class);
            job1.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path("/home/Desktop/temp/output/part-r-00000)");
            FileOutputFormat.setOutputPath(job, new Path("/home/Desktop/temp/output1"));   
            System.exit(job1.waitForCompletion(true) ? 0 : 1);
        }
        System.exit(job.waitForCompletion(true) ? 0 : 1);

    }

    }

When I run this program, the first job executes perfectly, and then I get the following error:

First Job Completed.....Starting Second Job true

12/01/27 15:24:21 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
12/01/27 15:24:21 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/01/27 15:24:21 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/01/27 15:24:21 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop/mapred/staging/4991311720439552/.staging/job_local_0002
Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set.
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:123)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:872)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:476)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:506)
    at ChainingMapReduce.main(ChainingMapReduce.java:129)

I have tried using "conf" for both jobs, and also using "conf" and "conf1" for the respective jobs.

Accepted answer

Change

 FileInputFormat.addInputPath(job, new Path("/home/Desktop/temp/output/part-r-00000)");
 FileOutputFormat.setOutputPath(job, new Path("/home/Desktop/temp/output1"));

to

 FileInputFormat.addInputPath(job1, new Path("/home/Desktop/temp/output/part-r-00000"));
 FileOutputFormat.setOutputPath(job1, new Path("/home/Desktop/temp/output1"));

for the second job: the paths must be set on job1, not on the already-finished job. Note also that the closing parenthesis in the input path was misplaced inside the string literal; it belongs outside the quotes.
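With that fix applied, the second-job setup in main would look like the following sketch (same class names and paths as in the question; this is a fragment, not a complete program, and assumes the first job finished successfully):

```java
// Sketch of the corrected second-job setup. Every call is made on job1,
// not on the finished first job, so FileOutputFormat sees an output path.
Configuration conf1 = new Configuration();
Job job1 = new Job(conf1, "Second");
job1.setJarByClass(ChainingMapReduce.class);
job1.setMapperClass(ChainingMapReduceMapper1.class);
job1.setCombinerClass(ChainingMapReduceReducer1.class);
job1.setReducerClass(ChainingMapReduceReducer1.class);
job1.setOutputKeyClass(Text.class);
job1.setOutputValueClass(IntWritable.class);
// Input is the first job's output file; output is a fresh directory.
FileInputFormat.addInputPath(job1, new Path("/home/Desktop/temp/output/part-r-00000"));
FileOutputFormat.setOutputPath(job1, new Path("/home/Desktop/temp/output1"));
System.exit(job1.waitForCompletion(true) ? 0 : 1);
```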

Also consider using o.a.h.mapred.jobcontrol.Job or Apache Oozie to manage the dependency between the two jobs.
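As a rough sketch of the JobControl approach (using the old org.apache.hadoop.mapred.jobcontrol API; jc1 and jc2 are assumed to be JobConf objects already configured for the two stages), the dependency could be expressed like this:

```java
// Sketch: chaining two stages with org.apache.hadoop.mapred.jobcontrol.
// jc1 and jc2 are assumed to be fully configured JobConf objects for
// the first and second stage, respectively.
Job first = new Job(jc1);
Job second = new Job(jc2);
second.addDependingJob(first);   // second starts only after first succeeds

JobControl control = new JobControl("chain");
control.addJob(first);
control.addJob(second);

// JobControl implements Runnable; run it on its own thread and poll.
Thread runner = new Thread(control);
runner.start();
while (!control.allFinished()) {
    Thread.sleep(1000);
}
control.stop();
```

This removes the need to check job.isSuccessful() by hand, since JobControl only launches a job once the jobs it depends on have completed successfully.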

Regarding "java - Error when chaining Map Reduce jobs", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/9029015/
