I am compiling a Java file for a word-count Hadoop job, but compilation throws an error:

CountBook.java:33: error: &lt;identifier&gt; expected
public void reduce(Text_key,Iterator&lt;IntWritable&gt;values,OutputCollector&lt;text,intWritable&gt;output,Reporter reporter)throws IOException

Here is my code:
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class CountBook
{
public static class EMapper extends MapReduceBase implements
Mapper<LongWritable,Text,Text,IntWritable>
{
private final static IntWritable one = new IntWritable(1);
public void map(LongWritable key,Text value,OutputCollector<Text,IntWritable> output,Reporter reporter)throws IOException
{
String line = value.toString();
String[] data = line.split("\";\"");
output.collect(new Text(data[0]),one);
}
}
public static class EReduce extends MapReduceBase implements
Reducer<Text,IntWritable,Text,IntWritable>
{
public void reduce(Text_key,Iterator<IntWritable>values,OutputCollector<text,intWritable>output,Reporter reporter)throws IOException
{
Text key=_key;
int authid=0;
while(values.hasNext())
{
IntWritable value = (IntWritable)values.next();
authid+=value.get();
}
output.collect(key,new IntWritable(authid));
}
}
public static void main(String[] args)throws Exception
{
JobConf conf = new JobConf(CountBook.class);
conf.setJobName("CountBookByAuthor");
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
conf.setMapperClass(EMapper.class);
conf.setCombinerClass(EReduce.class);
conf.setReducerClass(EReduce.class);
conf.setOutputFormat(TextOutputFormat.class);
FileInputFormat.setInputPaths(conf,new Path(args[0]));
FileOutputFormat.setOutputPath(conf,new Path(args[1]));
JobClient.runJob(conf);
}
}
I am using hadoop-core-1.2.1.jar as the classpath library and running on CentOS 7.
Best answer
You currently have:
reduce(Text_key,
Iterator<IntWritable>values,
OutputCollector<text,intWritable>output,
Reporter reporter)
It should be:
reduce(Text key,
Iterator<IntWritable> values,
OutputCollector<Text,IntWritable> output,
Reporter reporter)
The main differences are that `key` needs a space between it and `Text` (otherwise `Text_key` is parsed as a single identifier), and the type parameters inside `OutputCollector<>` need to be capitalized.
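The summing logic inside the corrected reduce can be checked outside Hadoop. Below is a minimal sketch (the class and method names are illustrative, not part of the Hadoop API) that declares each parameter in the required `Type name` form and mirrors the while-loop that totals the values for one key:

```java
import java.util.Arrays;
import java.util.Iterator;

// Minimal sketch with no Hadoop dependency. A Java parameter must be
// written "Type name" with whitespace between the two tokens; "Text_key"
// is read as one identifier with no type, which is exactly what produces
// the "<identifier> expected" compile error.
public class ReduceSignatureDemo
{
    // Mirrors the corrected reduce body: sum all counts for a single key.
    static int sumValues(Iterator<Integer> values)
    {
        int sum = 0;
        while (values.hasNext())
        {
            sum += values.next();
        }
        return sum;
    }

    public static void main(String[] args)
    {
        // Three books by the same author, each counted as 1.
        System.out.println(sumValues(Arrays.asList(1, 1, 1).iterator()));
    }
}
```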
On the topic of "java - error: &lt;identifier&gt; expected in java hadoop", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43382629/