java - The method setPartitionerClass(Class<? extends Partitioner>) in the type Job is not applicable for the arguments (Class<WordCountPartitioner>)

Tags: java hadoop

My driver code:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver extends Configured {

    public static void main(String[] args) throws Exception {
        Job job = new Job();
        job.setJarByClass(WordCountDriver.class);
        job.setJobName("wordcountdriver");

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        job.setPartitionerClass(WordCountPartitioner.class);
        job.setNumReduceTasks(4);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : -1);
    }
}

My mapper code:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}

Reducer code:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

Partitioner code:

import org.apache.hadoop.io.IntWritable; 
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class WordCountPartitioner implements Partitioner<Text, IntWritable> {

    @Override
    public void configure(JobConf arg0) {
        // TODO Auto-generated method stub
    }

    @Override
    public int getPartition(Text key, IntWritable value, int setNumRedTasks) {
        String line = value.toString();

        if (line.length() == 1) {
            return 0;
        }
        if (line.length() == 2) {
            return 1;
        }
        if (line.length() == 3) {
            return 2;
        } else {
            return 3;
        }
    }
}

Why am I getting this error?

Best Answer

You are mixing the old (org.apache.hadoop.mapred) and new (org.apache.hadoop.mapreduce) APIs. Job.setPartitionerClass expects a Class<? extends org.apache.hadoop.mapreduce.Partitioner>, and the old org.apache.hadoop.mapred.Partitioner interface is an unrelated type, so the compiler rejects your class. Your WordCountPartitioner should extend the org.apache.hadoop.mapreduce.Partitioner class instead.
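A minimal sketch of the corrected class, assuming the intent was to bucket words by their length (which means inspecting the key rather than calling toString() on the IntWritable value, as the original code does):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class WordCountPartitioner extends Partitioner<Text, IntWritable> {

    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Bucket words by length: 1-letter words to reducer 0, 2-letter to 1,
        // 3-letter to 2, everything else to 3. (Assumed intent; the original
        // read value.toString(), i.e. the count, not the word.)
        String word = key.toString();
        if (word.length() == 1) {
            return 0;
        }
        if (word.length() == 2) {
            return 1;
        }
        if (word.length() == 3) {
            return 2;
        }
        return 3;
    }
}

Because the new-API Partitioner is an abstract class rather than the old interface, there is no configure(JobConf) method to implement, and job.setPartitionerClass(WordCountPartitioner.class) then compiles as expected.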

Regarding java - The method setPartitionerClass(Class<? extends Partitioner>) in the type Job is not applicable for the arguments (Class<WordCountPartitioner>), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32928301/
