hadoop - NullPointerException when using a coprocessor in HBase?

Tags: hadoop nullpointerexception hbase

I am using HBase 0.94.8 on HDFS. I have implemented a coprocessor to sum values. The table has only two rows:

hbase(main):043:0> scan 'demo'

ROW COLUMN+CELL

row1 column=info:category, timestamp=1375438808010, value=web
row1 column=info:hits, timestamp=1375438797824, value=123
row2 column=info:category, timestamp=1375438834518,value=mail
row2 column=info:hits, timestamp=1375438822093, value=1321



hbase(main):043:0> describe 'demo'

'demo', {METHOD => 'table_att', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.AggregateImplementation||'}, true
{NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3',
COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536',
IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}
1 row(s) in 0.0670 seconds



My code is as follows:
import org.apache.hadoop.conf.Configuration; 
import org.apache.hadoop.hbase.HBaseConfiguration; 
import org.apache.hadoop.hbase.HColumnDescriptor; 
import org.apache.hadoop.hbase.HTableDescriptor; 
import org.apache.hadoop.hbase.KeyValue; 
import org.apache.hadoop.hbase.client.HBaseAdmin; 
import org.apache.hadoop.hbase.client.HTable; 
import org.apache.hadoop.hbase.client.Result; 
import org.apache.hadoop.hbase.client.ResultScanner; 
import org.apache.hadoop.hbase.client.Scan; 
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient; 
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes; 
import org.apache.hadoop.hbase.coprocessor.ColumnInterpreter; 
import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

public class webAggregator {

   // private static final byte[] EDRP_FAMILY = Bytes.toBytes("EDRP");
   // private static final byte[] EDRP_QUALIFIER = Bytes.toBytes("advanceKWh");
   public static void testSumWithValidRange(Configuration conf,
                 String[] otherArgs) throws Throwable {
          byte[] EDRP_TABLE = Bytes.toBytes(otherArgs[0]);
          byte[] EDRP_FAMILY = Bytes.toBytes(otherArgs[1]);
          byte[] EDRP_QUALIFIER = Bytes.toBytes(otherArgs[2]);

          conf.set("hbase.zookeeper.quorum", "master");
          conf.set("hbase.zookeeper.property.clientPort", "2222");

          conf.setLong("hbase.rpc.timeout", 600000);

          conf.setLong("hbase.client.scanner.caching", 1000);
          conf.set(CoprocessorHost.REGION_COPROCESSOR_CONF_KEY,
                       "org.apache.hadoop.hbase.coprocessor.AggregateImplementation");

          // Utility.CreateHBaseTable(conf, otherArgs[1], otherArgs[2], true);
          /*HBaseAdmin admin = new HBaseAdmin(conf);
          HTableDescriptor desc = new HTableDescriptor(EDRP_TABLE);
          desc.addFamily(new HColumnDescriptor(EDRP_FAMILY));
          admin.createTable(desc);*/

          AggregationClient aClient = new AggregationClient(conf);
          Scan scan = new Scan();
          scan.addColumn(EDRP_FAMILY, EDRP_QUALIFIER);


          HTable table = new HTable(conf, "demo");
          Scan s = new Scan();
          ResultScanner ss = table.getScanner(s);
          for(Result r:ss){
              for(KeyValue kv : r.raw()){
                 System.out.print(new String(kv.getRow()) + " ");
                 System.out.print(new String(kv.getFamily()) + ":");
                 System.out.print(new String(kv.getQualifier()) + " ");
                 System.out.print(kv.getTimestamp() + " ");
                 System.out.println(new String(kv.getValue()));
              }
          }

          final ColumnInterpreter<Long, Long> ci = new LongColumnInterpreter();
          long sum = aClient.sum(Bytes.toBytes(otherArgs[0]), ci, scan);
          System.out.println(sum);
   }

   /**
    * Main entry point.
    *
    * @param args The command line parameters.
    * @throws Exception When running the job fails.
    */
   public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      String[] otherArgs = {"demo", "info", "hits"};
      try {
         testSumWithValidRange(conf, otherArgs);
      } catch (Throwable e) {
         e.printStackTrace();
      }
   }
}


My stack trace is as follows:

java.lang.NullPointerException at webAggregator.testSumWithValidRange(webAggregator.java:62) at webAggregator.main(webAggregator.java:79)



Please help.

Best Answer

I ran into the same error. After some investigation, I found that the problem was that my column's type was integer, so the LongColumnInterpreter.getValue method returned null.

From your code and the scan output, I can tell that your 'info:hits' column is a string column, not a long column.

Consider changing hits to a genuine long column; when scanned from the hbase shell, its value should then look like:

11Ak8Z4Mswtk00:MXf1NZ                        column=f1:dp, timestamp=1400144073173, value=\x00\x00\x00\x00\x00\x00\x00b 
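The mismatch can be seen with plain JDK code. The following sketch (stdlib only, no HBase dependency; `longToBytes` is a stand-in that mirrors what `org.apache.hadoop.hbase.util.Bytes.toBytes(long)` produces, an 8-byte big-endian value) contrasts what the shell stored for `hits` with what LongColumnInterpreter expects:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class LongEncodingDemo {

    // Stand-in for Bytes.toBytes(long): 8-byte big-endian encoding.
    static byte[] longToBytes(long v) {
        return ByteBuffer.allocate(8).putLong(v).array();
    }

    public static void main(String[] args) {
        // What the hbase shell stored for 'hits': the ASCII text "123" (3 bytes).
        byte[] asString = "123".getBytes(StandardCharsets.US_ASCII);

        // What LongColumnInterpreter can read: 8 bytes,
        // e.g. 98 encodes as \x00\x00\x00\x00\x00\x00\x00b ('b' is 0x62 = 98).
        byte[] asLong = longToBytes(98L);

        System.out.println(asString.length); // 3 -> getValue() yields null -> NPE in sum()
        System.out.println(asLong.length);   // 8 -> a valid long cell
        System.out.println(Arrays.toString(asLong));
    }
}
```

A 3-byte ASCII cell is not a valid 8-byte long, which is why the aggregation client ends up dereferencing a null partial result.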

Alternatively, you can write your own ColumnInterpreter to sum string values.
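A full ColumnInterpreter in HBase 0.94 must implement several methods (add, compare, getMinValue, and so on), so the sketch below shows only the decoding step such an interpreter's getValue would need: parse the cell bytes as decimal text instead of an 8-byte long. `parseStringValue` is a hypothetical helper name, not part of the HBase API:

```java
import java.nio.charset.StandardCharsets;

public class StringLongParse {

    // Hypothetical core of a custom interpreter's getValue():
    // decode the cell's bytes as decimal text rather than a binary long.
    static Long parseStringValue(byte[] value) {
        if (value == null || value.length == 0) {
            return null; // mirror LongColumnInterpreter: null for unusable cells
        }
        try {
            return Long.parseLong(new String(value, StandardCharsets.US_ASCII));
        } catch (NumberFormatException e) {
            return null; // non-numeric cells (e.g. "web") contribute nothing
        }
    }

    public static void main(String[] args) {
        System.out.println(parseStringValue("123".getBytes(StandardCharsets.US_ASCII)));
        System.out.println(parseStringValue("1321".getBytes(StandardCharsets.US_ASCII)));
    }
}
```

Returning null for non-numeric cells (instead of throwing) matches how the aggregation framework skips cells the interpreter cannot read.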

Regarding "hadoop - NullPointerException when using a coprocessor in HBase?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/18015423/
