I have a simple MapReduce job borrowed from the Avro website, with one small modification (I removed the reducer). Basically, it takes a simple Avro file as input. Here is the schema of the Avro file:
Avro schema:
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "favorite_number", "type": "int"},
    {"name": "favorite_color", "type": "string"}
  ]
}
Here is my MapReduce job (the mapper and the main function):
public class ColorCountMapper extends Mapper<AvroKey<User>, NullWritable, Text, IntWritable> {
    @Override
    public void map(AvroKey<User> key, NullWritable value, Context context)
            throws IOException, InterruptedException {
        CharSequence color = key.datum().getFavoriteColor();
        if (color == null) {
            color = "none";
        }
        context.write(new Text(color.toString()), new IntWritable(1));
    }
}
and
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "TestColor");
    job.setJarByClass(runClass.class);
    job.setJobName("Color Count");
    FileInputFormat.setInputPaths(job, new Path("in"));
    FileOutputFormat.setOutputPath(job, new Path("out"));
    job.setInputFormatClass(AvroKeyInputFormat.class);
    job.setMapperClass(ColorCountMapper.class);
    AvroJob.setInputKeySchema(job, User.getClassSchema());
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    boolean r = job.waitForCompletion(true);
    System.out.println(r);
}
When I run the program, it returns false and does not complete successfully. I can't figure out what the problem is. Can anyone help?
Accepted answer
You have set the mapper's output value type to NullWritable, but in the main/driver you set the map-output value class to IntWritable. The value type declared in the mapper and the one declared in the main/driver must be the same. Modify your program accordingly. Please accept my answer if this solves your problem.
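The type-consistency rule the answer is pointing at can be sketched as follows (a hedged illustration, not the asker's exact code; IntWritable is the choice that matches the posted driver):

```java
// The mapper's 3rd and 4th type parameters declare its output key/value types:
public class ColorCountMapper
        extends Mapper<AvroKey<User>, NullWritable,   // input:  key, value
                       Text, IntWritable> {           // output: key, value
    // ... map() emits (Text, IntWritable) pairs ...
}

// The driver must declare exactly the same map-output classes;
// a mismatch (e.g. NullWritable in the mapper vs. IntWritable here)
// makes the job fail at runtime with a type-mismatch error:
job.setMapOutputKeyClass(Text.class);          // matches the 3rd type parameter
job.setMapOutputValueClass(IntWritable.class); // matches the 4th type parameter

// Optional: since the reducer was removed, the reduce phase can also be
// disabled explicitly so mapper output is written directly:
// job.setNumReduceTasks(0);
```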
Regarding "hadoop - mapreduce job not executing", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/28820447/