java - MapReduce job: weird output?

Tags: java, hadoop, hdfs

I am writing my first MapReduce job. It should be simple: just count the alphanumeric characters in a file. I have built my jar file and run it, but apart from the debug output I cannot find the output of the MR job anywhere. Can you help me?

My application class:

package no.hib.mod250.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class CharacterCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {

        // Create a Job using the configuration processed by ToolRunner
        Job job = Job.getInstance(getConf());

        // Process custom command-line options
        Path in = new Path("/tmp/filein");
        Path out = new Path("/tmp/fileout");

        // Specify various job-specific parameters     
        job.setJobName("Character-Count");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(CharacterCountMapper.class);
        job.setReducerClass(CharacterCountReducer.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);

        job.setJarByClass(CharacterCountDriver.class);

        job.submit();
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // Let ToolRunner handle generic command-line options 
        int res = ToolRunner.run(new Configuration(), new CharacterCountDriver(), args);

        System.exit(res);
      }
}

Then my mapper class:

package no.hib.mod250.hadoop;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CharacterCountMapper extends
        Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);

    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        String strValue = value.toString();
        StringTokenizer chars = new StringTokenizer(strValue.replaceAll("[^a-zA-Z0-9]", ""));
        while (chars.hasMoreTokens()) {
            context.write(new Text(chars.nextToken()), one);
        }
    }
}

And the reducer:

package no.hib.mod250.hadoop;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class CharacterCountReducer extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int charCount = 0;
        for (IntWritable val: values) {
            charCount += val.get();
        }
        context.write(key, new IntWritable(charCount));
    }
}

This all looks fine to me. I generate a runnable jar file from my IDE and execute it as follows:

$ ./hadoop jar ~/Desktop/example_MapReduce.jar no.hib.mod250.hadoop.CharacterCountDriver
14/11/27 19:36:42 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/11/27 19:36:42 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/11/27 19:36:42 INFO input.FileInputFormat: Total input paths to process : 1
14/11/27 19:36:42 INFO mapreduce.JobSubmitter: number of splits:1
14/11/27 19:36:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local316715466_0001
14/11/27 19:36:43 WARN conf.Configuration: file:/tmp/hadoop-roberto/mapred/staging/roberto316715466/.staging/job_local316715466_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/11/27 19:36:43 WARN conf.Configuration: file:/tmp/hadoop-roberto/mapred/staging/roberto316715466/.staging/job_local316715466_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/11/27 19:36:43 WARN conf.Configuration: file:/tmp/hadoop-roberto/mapred/local/localRunner/roberto/job_local316715466_0001/job_local316715466_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/11/27 19:36:43 WARN conf.Configuration: file:/tmp/hadoop-roberto/mapred/local/localRunner/roberto/job_local316715466_0001/job_local316715466_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/11/27 19:36:43 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/11/27 19:36:43 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/11/27 19:36:43 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/11/27 19:36:43 INFO mapred.LocalJobRunner: Waiting for map tasks
14/11/27 19:36:43 INFO mapred.LocalJobRunner: Starting task: attempt_local316715466_0001_m_000000_0
14/11/27 19:36:43 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/11/27 19:36:43 INFO mapred.MapTask: Processing split: file:/tmp/filein:0+434
14/11/27 19:36:43 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer

I then expected my output to be in /tmp/fileout. Instead, it appears to be empty:

$ tree /tmp/fileout/
/tmp/fileout/
└── _temporary
    └── 0

2 directories, 0 files

Is there something I am missing? Can anyone help me?

Regards :-)

EDIT:

I almost found the solution on this other post.

In CharacterCountDriver, I replaced job.submit() with job.waitForCompletion(true). This gave me more verbose output, and the output directory is no longer empty:

/tmp/fileout/
├── part-r-00000
└── _SUCCESS

0 directories, 2 files
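For reference, the end of run() now reads like this:

// Block until the job finishes; passing 'true' prints the job's
// progress and counters to the console as it runs.
job.waitForCompletion(true);
return 0;

(waitForCompletion() returns true on success, so one could also use that to return a non-zero exit code on failure.)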

But I still don't know what to make of these files: _SUCCESS is empty, and part-r-00000 is not what I expected:

Absorbantandyellowandporousishe 1
AreyoureadykidsAyeAyeCaptain    1
ICanthearyouAYEAYECAPTAIN       1
Ifnauticalnonsensebesomethingyouwish    1
Ohh     1
READY   1
SPONGEBOBSQUAREPANTS    1
SpongebobSquarepants    3
Spongebobsquarepants    4
Thendroponthedeckandfloplikeafish       1
Wholivesinapineappleunderthesea 1

Any suggestions? Is there perhaps a bug in my code? Thanks.

Best answer

If I understand you correctly, you want your program to count the alphanumeric characters in the input file. However, that is not what your code does. You can change your mapper so that it counts the alphanumeric characters in each line:

String strValue = value.toString();
// replaceAll() returns a new String; it does not modify strValue in place
strValue = strValue.replaceAll("[^a-zA-Z0-9]", "");
context.write(new Text("alphanumeric"), new IntWritable(strValue.length()));

This should fix your program. Basically, your current mapper emits the alphanumeric characters of each line as the key, and the reducer accumulates a count per key. With my change you only ever use a single key, "alphanumeric". The key could be anything else and it would still work.
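Put together, a minimal sketch of the modified mapper could look like this (class and key names are kept from above; any constant key works):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CharacterCountMapper extends
        Mapper<Object, Text, Text, IntWritable> {

    // One shared key so the reducer sums the per-line counts into a
    // single total; any other constant key would work just as well.
    private static final Text KEY = new Text("alphanumeric");

    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Strip everything that is not a letter or a digit and emit the
        // length of what remains: the alphanumeric count of this line.
        String alphanumeric = value.toString().replaceAll("[^a-zA-Z0-9]", "");
        context.write(KEY, new IntWritable(alphanumeric.length()));
    }
}

With this mapper and the unchanged reducer, part-r-00000 should contain a single line of the form "alphanumeric <total>".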

Regarding "java - MapReduce job: weird output?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27177157/
