hadoop - Using a custom Combiner... is it possibly being ignored?

Tags: hadoop

In my main I have...

    job.setMapperClass(AverageIntMapper.class);
    job.setCombinerClass(AverageIntCombiner.class);
    job.setReducerClass(AverageIntReducer.class);

The Combiner has different code, but it is being ignored entirely: the output the Reducer receives is the Mapper's output.

I understand that a Combiner may not always be used, but I thought that was only the case when the Combiner is identical to the Reducer. I don't really see the point of being able to write a custom Combiner if the system can still skip using it.

If this is not supposed to happen, what could be the reason the Combiner isn't being used?

Code...

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;


public class AverageInt {

public static class AverageIntMapper extends Mapper<LongWritable, Text, Text, Text> {

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

        String n_string = value.toString();
        context.write(new Text("Value"), new Text(n_string));
    }
}

public static class AverageIntCombiner extends Reducer<Text, Text, Text, Text> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

        int sum = 0;
        int count = 0;

        for(IntWritable value : values) {
            int temp = Integer.parseInt(value.toString());
            sum += value.get();
            count += 1;
        }

        String sum_count = Integer.toString(sum) + "," + Integer.toString(count);

        context.write(key, new Text(sum_count));
    }
}

public static class AverageIntReducer extends Reducer<Text, Text, Text, Text> {

    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {

        int total = 0;
        int count = 0;

        for(Text value : values) {
            String temp = value.toString();
            String[] split = temp.split(",");
            total += Integer.parseInt(split[0]);
            count += Integer.parseInt(split[1]);
        }

        Double average = (double)total/count;

        context.write(key, new Text(average.toString()));
    }
}

public static void main(String[] args) throws Exception {

    if(args.length != 2) {
        System.err.println("Usage: AverageInt <input path> <output path>");
        System.exit(-1);
    }

    Job job = new Job();
    job.setJarByClass(AverageInt.class);
    job.setJobName("Average");

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(AverageIntMapper.class);
    job.setCombinerClass(AverageIntCombiner.class);
    job.setReducerClass(AverageIntReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}

Best Answer

If you look at what your mapper is emitting:

public void map(LongWritable key, Text value, Context context)

it sends two Text objects. However, while you have declared the combiner class itself correctly, its reduce method has:

public void reduce(Text key, Iterable<IntWritable> values, Context context)

when it should be:

public void reduce(Text key, Iterable<Text> values, Context context)
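
Because the parameter types don't match, your reduce method never overrides Reducer.reduce; it is just an unrelated overload. When the framework runs the combiner it therefore invokes the inherited identity reduce, which forwards the mapper output unchanged, and that is why the combiner appears to be ignored. Adding @Override would have turned this mistake into a compile-time error. A minimal sketch of the corrected combiner (assuming the mapper keeps emitting the raw integer as a Text value, as in the code above, and using the same imports as the rest of the file) could look like:

    public static class AverageIntCombiner extends Reducer<Text, Text, Text, Text> {

        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {

            int sum = 0;
            int count = 0;

            // The mapper emits each raw integer as Text, so parse it here.
            for (Text value : values) {
                sum += Integer.parseInt(value.toString().trim());
                count += 1;
            }

            // Emit a partial "sum,count" pair in the format the reducer expects.
            context.write(key, new Text(sum + "," + count));
        }
    }

Also keep in mind that Hadoop may apply a combiner zero, one, or several times. With the "sum,count" format above, the reducer's split(",") parsing only works when the combiner has actually run, so a more robust variant would have the mapper itself emit "value,1" and let the combiner and reducer share the same parsing logic.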

Regarding "hadoop - Using a custom Combiner... is it possibly being ignored?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46386364/
