java - Hadoop MapReduce error NativeMethodAccessor

Tags: java hadoop

Team,

I am new to Java and am trying to run a Hadoop MapReduce program. I am getting an error that I cannot debug.

Program:

import java.util.*; 

import java.io.IOException; 

import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.conf.*; 
import org.apache.hadoop.io.*; 
import org.apache.hadoop.mapred.*; 
import org.apache.hadoop.util.*; 

public class ProcessUnits 
{ 
   //Mapper class 
   public static class E_EMapper extends MapReduceBase implements 
   Mapper<LongWritable ,/*Input key Type */ 
   Text,                /*Input value Type*/ 
   Text,                /*Output key Type*/ 
   IntWritable>        /*Output value Type*/ 
   { 

      //Map function 
      public void map(LongWritable key, Text value, 
      OutputCollector<Text, IntWritable> output,   
      Reporter reporter) throws IOException 
      { 
         String line = value.toString(); 
         String lasttoken = null; 
         StringTokenizer s = new StringTokenizer(line,"\t"); 
         String year = s.nextToken(); 

         while(s.hasMoreTokens())
            {
               lasttoken=s.nextToken();
            } 

         int avgprice = Integer.parseInt(lasttoken); 
         output.collect(new Text(year), new IntWritable(avgprice)); 
      } 
   } 


   //Reducer class 
   public static class E_EReduce extends MapReduceBase implements 
   Reducer< Text, IntWritable, Text, IntWritable > 
   {  

      //Reduce function 
      public void reduce( Text key, Iterator <IntWritable> values, 
         OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException 
         { 
            int maxavg=30; 
            int val=Integer.MIN_VALUE; 

            while (values.hasNext()) 
            { 
               if((val=values.next().get())>maxavg) 
               { 
                  output.collect(key, new IntWritable(val)); 
               } 
            } 

         } 
   }  


   //Main function 
   public static void main(String args[])throws Exception 
   { 
      JobConf conf = new JobConf(ProcessUnits.class); 

      conf.setJobName("max_eletricityunits"); 
      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(IntWritable.class); 
      conf.setMapperClass(E_EMapper.class); 
      conf.setCombinerClass(E_EReduce.class); 
      conf.setReducerClass(E_EReduce.class); 
      conf.setInputFormat(TextInputFormat.class); 
      conf.setOutputFormat(TextOutputFormat.class); 

      FileInputFormat.setInputPaths(conf, new Path(args[0])); 
      FileOutputFormat.setOutputPath(conf, new Path(args[1])); 

      JobClient.runJob(conf); 
   } 
} 

Error:

   Diagnostics: null
   Failing this attempt. Failing the application.
   16/11/08 19:03:03 INFO mapreduce.Job: Counters: 0
   Exception in thread "main" java.io.IOException: Job failed!
      at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
      at ProcessUnits.main(ProcessUnits.java:84)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
      at org.apache.hadoop.util.RunJar.main(RunJar.java:136)



Note: I am running this from the Windows command line.

Best Answer

1) In your code you have StringTokenizer s = new StringTokenizer(line,"\t");
Check that in your input file at /home/input the integers really are separated by tabs.

2) In your code, replace int avgprice = Integer.parseInt(lasttoken); with int avgprice = Integer.parseInt(lasttoken.trim());

Hope this works.
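To see why the trim() matters, here is a small standalone sketch (class and method names are illustrative, not from the original program). Since the job is launched from Windows, each line read from the input file may carry a trailing '\r' from CRLF line endings, which makes Integer.parseInt throw a NumberFormatException inside the mapper and fail the job:

```java
// Hypothetical illustration of the suggested fix: parse the last
// tab-separated token of a record, trimming stray whitespace first.
public class TrimParseDemo {

    // Mirrors the mapper's tokenizing loop: keep the last "\t"-separated token.
    static int parseLastToken(String line) {
        java.util.StringTokenizer s = new java.util.StringTokenizer(line, "\t");
        String lasttoken = null;
        while (s.hasMoreTokens()) {
            lasttoken = s.nextToken();
        }
        // trim() removes the '\r' left over from Windows CRLF line endings;
        // without it, Integer.parseInt("40\r") throws NumberFormatException.
        return Integer.parseInt(lasttoken.trim());
    }

    public static void main(String[] args) {
        // A record as the mapper would see it from a CRLF-terminated file.
        System.out.println(parseLastToken("1980\t40\r")); // prints 40
    }
}
```

With plain Integer.parseInt(lasttoken) the same input would throw, which surfaces in the job log only as the generic "Job failed!" IOException seen above.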

Regarding java - Hadoop MapReduce error NativeMethodAccessor, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40488537/
