java - Hadoop MapReduce code with parseDouble throws an exception on one system but runs fine on another?

Tags: java hadoop mapreduce

I am running Hadoop MapReduce code written in Java. It works fine on my system, but when I try to run the same program on someone else's system (the target system where it ultimately needs to run), it gives the error below. The error appears to come from the line containing the Double.parseDouble() call. It runs perfectly on my machine. What could be the problem?

13/06/25 12:07:05 INFO input.FileInputFormat: Total input paths to process : 2
13/06/25 12:07:05 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/06/25 12:07:05 WARN snappy.LoadSnappy: Snappy native library not loaded
13/06/25 12:07:06 INFO mapred.JobClient: Running job: job_201306101543_0158
13/06/25 12:07:07 INFO mapred.JobClient:  map 0% reduce 0%
13/06/25 12:07:11 INFO mapred.JobClient:  map 100% reduce 0%
13/06/25 12:07:18 INFO mapred.JobClient:  map 100% reduce 33%
13/06/25 12:07:20 INFO mapred.JobClient: Task Id : attempt_201306101543_0158_r_000000_0, Status : FAILED
java.lang.NullPointerException
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1008)
    at java.lang.Double.parseDouble(Double.java:540)
    at Transpose$Reduce.reduce(Transpose.java:89)
    at Transpose$Reduce.reduce(Transpose.java:61)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:650)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

attempt_201306101543_0158_r_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201306101543_0158_r_000000_0: log4j:WARN Please initialize the log4j system properly.
13/06/25 12:07:21 INFO mapred.JobClient:  map 100% reduce 0%
13/06/25 12:07:29 INFO mapred.JobClient:  map 100% reduce 33%
13/06/25 12:07:31 INFO mapred.JobClient: Task Id : attempt_201306101543_0158_r_000000_1, Status : FAILED
java.lang.NullPointerException
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1008)
    at java.lang.Double.parseDouble(Double.java:540)
    at Transpose$Reduce.reduce(Transpose.java:89)
    at Transpose$Reduce.reduce(Transpose.java:61)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:650)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

Relevant code (the complete Reduce class):

 public static class Reduce extends Reducer<Text, Text, Text, Text> {
     private Text mult = new Text();

     public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {

         HashMap<Integer, String> mMap = new HashMap<Integer, String>();
         HashMap<Integer, String> mMap1 = new HashMap<Integer, String>();

         for (Text value : values) {
             String[] line = value.toString().split(",", 3);

             Integer row = Integer.valueOf(line[1]);

             if (line[0].equals("M")) {
                 mMap.put(row, line[2]);
             } else if (line[0].equals("Mt")) {
                 mMap1.put(row, line[2]);
             }
         }

         double sum = 0.0;

         for (Integer i = 1; i <= mMap.size(); i++) {
             String val1 = mMap.get(i);
             String val2 = mMap1.get(i);

             double mij = Double.parseDouble(val1);
             double mjk = Double.parseDouble(val2);

             sum += mij * mjk;
         }
         String str = Double.toString(sum);
         mult.set(str);
         context.write(key, mult);
     }
 }

Best answer

You are getting a NullPointerException thrown from Double.parseDouble(String s), which happens when s is null. In your reducer, that means mMap.get(i) or mMap1.get(i) returned null, i.e. some row index present in one map is missing from the other. So it sounds like the data on your "production" machine contains some null (missing) entries that are not present on your "development" machine.
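A minimal sketch of the failure mode and a defensive fix. The helper name parseOrZero and the fallback value 0.0 are illustrative assumptions, not part of the original code; the point is simply that HashMap.get returns null for a missing key, and Double.parseDouble(null) throws NullPointerException, so missing rows must be guarded before parsing:

```java
import java.util.HashMap;

public class ParseGuard {
    // Illustrative helper: fall back to 0.0 when the map had no entry,
    // instead of letting Double.parseDouble(null) throw a NullPointerException.
    static double parseOrZero(String s) {
        return (s == null) ? 0.0 : Double.parseDouble(s);
    }

    public static void main(String[] args) {
        HashMap<Integer, String> mMap1 = new HashMap<Integer, String>();
        mMap1.put(1, "2.5");               // row 2 is deliberately missing

        String val2 = mMap1.get(2);        // null: key 2 was never put
        System.out.println(parseOrZero(val2));       // prints 0.0
        System.out.println(parseOrZero(mMap1.get(1))); // prints 2.5
    }
}
```

Whether skipping a missing row (treating its contribution to the sum as 0.0) is correct depends on what the matrix data should contain; if a missing entry indicates corrupt input, logging and skipping the record, or using a Hadoop counter to flag it, may be more appropriate than silently substituting zero.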

Regarding "java - Hadoop MapReduce code with parseDouble throws an exception on one system but runs fine on another?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17302621/
