eclipse - Hadoop MapReduce on Eclipse: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/myname183880112/.staging/job_local183880112_0001

Tags: eclipse hadoop nullpointerexception mapreduce

2014-04-04 16:02:31.633 java[44631:1903] Unable to load realm info from SCDynamicStore
14/04/04 16:02:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/04/04 16:02:32 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/04/04 16:02:32 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/04/04 16:02:32 WARN snappy.LoadSnappy: Snappy native library not loaded
14/04/04 16:02:32 INFO mapred.FileInputFormat: Total input paths to process : 1
14/04/04 16:02:32 INFO mapred.JobClient: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/myname183880112/.staging/job_local183880112_0001
java.lang.NullPointerException
    at org.apache.hadoop.conf.Configuration.getLocalPath(Configuration.java:950)
    at org.apache.hadoop.mapred.JobConf.getLocalPath(JobConf.java:476)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:121)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:592)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1013)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
    at LineIndex.main(LineIndex.java:92)

I am trying to run a MapReduce program for line indexing from Eclipse, and it fails with the error above. My code is:
 import java.io.IOException;
 import java.util.Iterator;
 import java.util.StringTokenizer;

 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapred.FileInputFormat;
 import org.apache.hadoop.mapred.FileOutputFormat;
 import org.apache.hadoop.mapred.FileSplit;
 import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.MapReduceBase;
 import org.apache.hadoop.mapred.Mapper;
 import org.apache.hadoop.mapred.OutputCollector;
 import org.apache.hadoop.mapred.Reducer;
 import org.apache.hadoop.mapred.Reporter;

 public class LineIndex {

  /** Emits a (word, source-file-name) pair for every token in the input line. */
  public static class LineIndexMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {

    // Reused across map() calls; instance fields rather than statics so that
    // mapper instances do not share mutable state.
    private final Text word = new Text();
    private final Text location = new Text();

    public void map(LongWritable key, Text val,
        OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {

      // The input split identifies which file this line came from.
      FileSplit fileSplit = (FileSplit) reporter.getInputSplit();
      String fileName = fileSplit.getPath().getName();
      location.set(fileName);

      String line = val.toString();
      StringTokenizer itr = new StringTokenizer(line.toLowerCase());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        output.collect(word, location);
      }
    }
  }

  /** Joins all file names seen for a word into one comma-separated list. */
  public static class LineIndexReducer extends MapReduceBase
      implements Reducer<Text, Text, Text, Text> {

    public void reduce(Text key, Iterator<Text> values,
        OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {

      // Build a "file1, file2, ..." string from all values for this word.
      boolean first = true;
      StringBuilder toReturn = new StringBuilder();
      while (values.hasNext()){
        if (!first)
          toReturn.append(", ");
        first=false;
        toReturn.append(values.next().toString());
      }

      output.collect(key, new Text(toReturn.toString()));
    }
  }


  /**
   * The actual main() method for our program; this is the
   * "driver" for the MapReduce job.
   */
  public static void main(String[] args) {
    JobClient client = new JobClient();
    JobConf conf = new JobConf(LineIndex.class);

    conf.setJobName("LineIndexer");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);

    FileInputFormat.addInputPath(conf, new Path("input"));
    FileOutputFormat.setOutputPath(conf, new Path("output"));

    conf.setMapperClass(LineIndexMapper.class);
    conf.setReducerClass(LineIndexReducer.class);
    conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));

    client.setConf(conf);

    try {
      JobClient.runJob(conf);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

I cannot understand or fix this NullPointerException. Can anyone help?

Best Answer

Could you add the mapred-site.xml file to your Configuration object and try again? You may also need to set the property mapred.local.dir in that file. The stack trace supports this: the NullPointerException comes out of Configuration.getLocalPath, which LocalJobRunner uses to resolve mapred.local.dir, so an undefined mapred.local.dir is the most likely cause.
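A minimal sketch of that fix in the driver, assuming the same /usr/local/hadoop/etc/hadoop layout the question already uses (the mapred-site.xml path and the fallback directory are assumptions; adjust them to your install):

    // Load the cluster configuration, including the missing mapred-site.xml.
    conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/mapred-site.xml")); // assumed path

    // Fallback: if mapred-site.xml still does not define it, set a local
    // scratch directory directly so LocalJobRunner can build its staging path.
    if (conf.get("mapred.local.dir") == null) {
      conf.set("mapred.local.dir", "/app/hadoop/tmp/mapred/local"); // illustrative path
    }

If you would rather keep the setting in the file, the corresponding mapred-site.xml entry looks like this (the directory value is illustrative):

    <configuration>
      <property>
        <name>mapred.local.dir</name>
        <value>/app/hadoop/tmp/mapred/local</value>
      </property>
    </configuration>

Make sure the addResource calls run before the job is submitted, and that the directory exists and is writable by the user running Eclipse.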

Regarding "eclipse - Hadoop MapReduce on Eclipse: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/myname183880112/.staging/job_local183880112_0001", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/22874889/
