hadoop - Oozie job fails when run from Hue with "not org.apache.hadoop.mapred.Mapper"

Tags: hadoop mapreduce oozie hue oozie-coordinator

I am trying to run a wordcount program through an Oozie job.
When I run the wordcount jar manually, with something like hadoop jar wordcount.jar /data.txt /out, it works fine and gives me the output.
Here is the mapper code of my wordcount program:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MapperWordcount extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

When I execute it through an Oozie job, the error is as follows:

 2015-07-31 00:39:23,357 FATAL [IPC Server handler 29 on 40854] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1438294006985_0011_m_000000_3 - exited : java.lang.RuntimeException: Error in configuring object
            at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
            at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:446)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
            at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
            at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
    Caused by: java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
            ... 9 more
    Caused by: java.lang.RuntimeException: java.lang.RuntimeException: class com.mr.wc.MapperWordcount not org.apache.hadoop.mapred.Mapper
            at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2108)
            at org.apache.hadoop.mapred.JobConf.getMapperClass(JobConf.java:1109)
            at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
            ... 14 more
    Caused by: java.lang.RuntimeException: class com.mr.wc.MapperWordcount not org.apache.hadoop.mapred.Mapper
            at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2102)
            ... 16 more   

My pom.xml looks like this:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>2.6.0</version>
    </dependency>
Best Answer

I ran into the same problem here. The actual issue is a mismatch between the two MapReduce APIs: the job runs against the old API (org.apache.hadoop.mapred, as the runOldMapper frame in the stack trace shows), while at runtime it finds a Mapper written against the new API (org.apache.hadoop.mapreduce), so the class cannot be used as an org.apache.hadoop.mapred.Mapper.

In Gradle:

compile("org.apache.hadoop:hadoop-core:2.4.0")

In your pom.xml:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>2.4.0</version>
    </dependency>

Then change all references in your Mapper and Reducer from org.apache.hadoop.mapred.Mapper to org.apache.hadoop.mapreduce.Mapper, so the whole job consistently uses one API.
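Alternatively, if you want to keep the new-API Mapper from the question, the Oozie map-reduce action (which submits jobs through the old API by default) can be told to use the new API via configuration properties. A sketch of the relevant part of a workflow.xml action, assuming an otherwise standard Oozie workflow; the property names below are the stock Hadoop/Oozie ones, but verify them against your Hadoop version:

    <map-reduce>
        <configuration>
            <!-- switch the framework to the new (org.apache.hadoop.mapreduce) API -->
            <property>
                <name>mapred.mapper.new-api</name>
                <value>true</value>
            </property>
            <property>
                <name>mapred.reducer.new-api</name>
                <value>true</value>
            </property>
            <!-- new-API mapper class from the question -->
            <property>
                <name>mapreduce.map.class</name>
                <value>com.mr.wc.MapperWordcount</value>
            </property>
        </configuration>
    </map-reduce>

With these flags set, the error above should disappear without rewriting the Mapper against the old API.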

Regarding hadoop - Oozie job fails when run from Hue with "not org.apache.hadoop.mapred.Mapper", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31739899/
