java - How to fix NoSuchMethodError: org.apache.hadoop.mapred.InputSplit.write

Tags: java hadoop mapreduce nosuchmethoderror

I'm writing a project on Hadoop. I have a one-dimensional String array named "words".

I want to send it to the reducer, but I get this error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.InputSplit.write(Ljava/io/DataOutput;)V

What should I do? Can anyone help me?

Here is my mapper:
public abstract class Mapn implements Mapper<LongWritable, Text, Text, Text> {

    @SuppressWarnings("unchecked")
    public void map(LongWritable key, Text value, Context con)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] words = line.split(",");
        for (String word : words) {
            Text outputKey = new Text(word.toUpperCase().trim());
            con.write(outputKey, words);
        }
    }
}
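
As a side note on the error itself: the snippet above implements the old org.apache.hadoop.mapred.Mapper interface while using a Context parameter from the new org.apache.hadoop.mapreduce API, and mixing the two APIs (or their jars) is a common source of this kind of NoSuchMethodError. Below is a minimal sketch of what the mapper might look like written purely against the new API; the class and variable names mirror the question, and since Context.write expects the declared value type (Text) rather than a String[], the whole comma-separated line is sent as the value here (one possible way to get all the words to the reducer):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Mapn extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    public void map(LongWritable key, Text value, Context con)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] words = line.split(",");
        for (String word : words) {
            Text outputKey = new Text(word.toUpperCase().trim());
            // Context.write expects the declared value type (Text), not String[],
            // so the whole comma-separated line is sent as the value
            con.write(outputKey, new Text(line));
        }
    }
}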

Best Answer

When I was learning the Hadoop MapReduce framework, besides writing the traditional WordCount program I also wrote a program of my own and then exported a jar for it. I'm sharing that program here; it was written against the hadoop-1.2.1 jar dependencies. It converts numbers into their word form, and it processed 400,000 numbers without any errors.

So here is the program:

package com.whodesire.count;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import com.whodesire.numstats.AmtInWords;

public class CountInWords {

    public static class NumberTokenizerMapper 
                    extends Mapper <Object, Text, LongWritable, Text> {

        private static final Text theOne = new Text("1");
        private LongWritable longWord = new LongWritable();

        public void map(Object key, Text value, Context context) {

            try{
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    longWord.set(Long.parseLong(itr.nextToken()));
                    context.write(longWord, theOne);
                }
            }catch(ClassCastException cce){
                System.out.println("ClassCastException raiseddd...");
                System.exit(0);
            }catch(IOException | InterruptedException ioe){
                ioe.printStackTrace();
                System.out.println("IOException | InterruptedException raiseddd...");
                System.exit(0);
            }
        }
    }

    public static class ModeReducerCumInWordsCounter 
            extends Reducer <LongWritable, Text, LongWritable, Text>{
        private Text result = new Text();

        //The user-defined reduce function, invoked once for each unique key
        public void reduce(LongWritable key, Iterable<Text> values, 
                Context context) throws IOException, InterruptedException {

            /*** The key, a LongWritable, is passed to the
                        AmtInWords constructor as a String ***/
            AmtInWords aiw = new AmtInWords(key.toString());
            result.set(aiw.getInWords());

            //Finally the number and its word form are written to the job output
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        /****
         *** all random numbers inside the input files were
         *** generated using https://andrew.hedges.name/experiments/random/
         ****/

        //Load the configuration files and add them to the conf object
        Configuration conf = new Configuration();       

        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();

        Job job = new Job(conf, "CountInWords");

        //Specify the jar which contains the required classes for the job to run.
        job.setJarByClass(CountInWords.class);

        job.setMapperClass(NumberTokenizerMapper.class);
        job.setCombinerClass(ModeReducerCumInWordsCounter.class);
        job.setReducerClass(ModeReducerCumInWordsCounter.class);

        //Set the output key and the value class for the entire job
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);

        //Set the input (format and location) and similarly the output
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        //Write all results to a single output file
        job.setNumReduceTasks(1);

        //Submit the job and wait for it to complete
        System.exit(job.waitForCompletion(true) ? 0 : 1);       
    }
}
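
For reference, a job like this is normally launched with the standard hadoop jar command; the jar name and HDFS paths below are illustrative, not taken from the original post:

hadoop jar CountInWords.jar com.whodesire.count.CountInWords /user/hadoop/numbers-in /user/hadoop/words-out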

I suggest you review the Hadoop jars you have added, especially hadoop-core-x.x.x.jar, because looking at the error it seems you haven't added some of the MapReduce jars to your project.
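
If your project happens to be built with Maven (an assumption; the program above was exported as a plain jar), the hadoop-1.2.1 dependency mentioned earlier would be declared roughly like this in pom.xml:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
</dependency>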

Regarding java - How to fix NoSuchMethodError: org.apache.hadoop.mapred.InputSplit.write, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48043029/
