java - Row and column values in TwoDArrayWritable

Tags: java, hadoop, mapreduce, reduce

I emit a two-dimensional array of doubles as the value from my mapper and try to access it in the reducer, converting back to double so that I can sum all of the 2D arrays.

public static class DoubleTwoDArrayWritable extends TwoDArrayWritable {
    public DoubleTwoDArrayWritable() {
        super(DoubleWritable.class);
    }
}
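
The question does not show the mapper side. As a rough, hypothetical sketch (the input types, the key, and the matrix contents below are assumptions, and DoubleTwoDArrayWritable is the wrapper class defined above), the mapper could fill and emit the value like this:

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: emits a 3x1 matrix of doubles per input record.
public class MapperSvm extends Mapper<LongWritable, Text, Text, DoubleTwoDArrayWritable> {
    private final DoubleTwoDArrayWritable outValue = new DoubleTwoDArrayWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        DoubleWritable[][] matrix = new DoubleWritable[3][1];   // placeholder contents
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 1; j++) {
                matrix[i][j] = new DoubleWritable(i + j);
            }
        }
        outValue.set(matrix);                      // TwoDArrayWritable.set(Writable[][])
        context.write(new Text("someKey"), outValue);
    }
}

The driver would also need job.setMapOutputValueClass(DoubleTwoDArrayWritable.class) so that the shuffled values can be deserialized.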

The reducer:

public class ReducerSvm extends Reducer<Text, DoubleTwoDArrayWritable, Text, Text>{
    public void reduce(Text key,Iterable<DoubleTwoDArrayWritable> values,Context context){
        System.out.println("key------"+key.toString());
        Writable [][] getArray = null;
        double C[][] = new double[3][1];
        for (DoubleTwoDArrayWritable value : values)
        {
            getArray = value.get();
            for (int i=0; i<3 ; i++ )
            {
                for (int j=0 ; j<1 ; j++ ){
                    System.out.println("v--> "+((DoubleWritable)getArray[i][j]).get());
                    C[i][j] = ((DoubleWritable)getArray[i][j]).get();
                }
            }

            System.out.println("C array");
            for (int i=0; i<3 ; i++ ){
                for (int j=0 ; j<1 ; j++ ){
                    System.out.println(C[i][j]+" ");
                }
                System.out.println("");
            }
        }
    }
}

I am able to get my double array in the reducer, but I have hard-coded the row and column counts. How do I get the row and column count in the reducer when using TwoDArrayWritable?

Edit:

Following Balduz's suggestion, I edited the code:

public void reduce(Text key,Iterable<DoubleTwoDArrayWritable> values,Context context){

        for (DoubleTwoDArrayWritable value : values) {
            Writable[][] currentArray = value.get();
            int rowSize = currentArray.length;
            int columnSize = currentArray[0].length;
            System.out.println("row size: "+rowSize);
            double[][] myArray = new double[rowSize][columnSize];

            for (int i = 0; i < currentArray.length; i++) {
                for (int j = 0; i < currentArray[i].length; j++) {   // note: the loop condition tests i instead of j
                     myArray[i][j] = ((DoubleWritable)currentArray[i][j]).get();
                }
            }
            System.out.println("myArray array");
            for (int i=0; i<myArray.length ; i++ ){
                for (int j=0 ; j<myArray[0].length ; j++ ){
                    System.out.println(myArray[i][j]+" ");
                }
                System.out.println("");
            }

        }
}

I am able to get the row size correctly.

But it shows:

java.lang.ArrayIndexOutOfBoundsException: 1
    at edu.am.bigdata.svmmodel.ReducerTrail.reduce(ReducerTrail.java:26)
    at edu.am.bigdata.svmmodel.ReducerTrail.reduce(ReducerTrail.java:1)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:164)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:610)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:444)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:449)

Best answer

First, please don't name a variable getArray, because it looks like a method name and causes confusion. To iterate over each matrix, you need to do the following:

for (DoubleTwoDArrayWritable value : values) {
    Writable[][] currentArray = value.get();
    for (int i = 0; i < currentArray.length; i++) {
        for (int j = 0; j < currentArray[i].length; j++) {
             DoubleWritable valueYouWant = (DoubleWritable)currentArray[i][j];
        }
    }
}

Edit: In order to store the whole matrix in a single variable, I will assume that every row has the same number of columns. In that case, you can initialize it like this:

for (DoubleTwoDArrayWritable value : values) {
    Writable[][] currentArray = value.get();
    int rowSize = currentArray.length;
    int columnSize = currentArray[0].length;
    double[][] myArray = new double[rowSize][columnSize];

    for (int i = 0; i < currentArray.length; i++) {
        for (int j = 0; j < currentArray[i].length; j++) {
             myArray[i][j] = ((DoubleWritable)currentArray[i][j]).get();
        }
    }
}
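
Since the original goal was to sum all of the 2D arrays, the loop above could be extended into a running element-wise total. This is only a sketch under the same assumption that every emitted matrix has identical dimensions; writing the result out as Text at the end is just one option:

double[][] sum = null;
for (DoubleTwoDArrayWritable value : values) {
    Writable[][] currentArray = value.get();
    if (sum == null) {
        // size the accumulator from the first matrix
        sum = new double[currentArray.length][currentArray[0].length];
    }
    for (int i = 0; i < currentArray.length; i++) {
        for (int j = 0; j < currentArray[i].length; j++) {
            sum[i][j] += ((DoubleWritable) currentArray[i][j]).get();
        }
    }
}
// The reducer declares Text as its output value type, so one option is:
// context.write(key, new Text(java.util.Arrays.deepToString(sum)));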

Regarding "java - Row and column values in TwoDArrayWritable", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/25200531/
