hadoop - ClassNotFound exception when running an HBase MapReduce program

Tags: hadoop mapreduce hbase

I am new to HBase and I am trying out some examples that load data from a file into an HTable. I created a jar from Eclipse, stored it along with the input file in an HDFS directory, and then ran the jar:

hadoop jar $HADOOP_HOME/CSV2HBASE.jar /home/hbaseuser/input.csv csv_table

Then I got this exception:
Warning: $HADOOP_HOME is deprecated.

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
        at com.stratapps.hbase.CSV2HBase.main(CSV2HBase.java:67)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
        ... 6 more

My input file is:
row1    f1  cl1 0.12
row2    f1  cl1 0.35
row3    f1  cl1 1.58
row4    f1  cl1 2.73
row5    f1  cl1 0.93

And the class:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class CSV2HBase {

  private static final String NAME = "CSV2HBase";

  static class Uploader extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

    @Override
    public void map(LongWritable key, Text line, Context context) throws IOException
    {

      String [] values = line.toString().split(",");
      if(values.length != 4) {
        System.out.println("err values.length!=4 len:"+values.length);
        System.out.println("input string is:"+line);
        return;
      }
      // Extract each value
      byte [] row = Bytes.toBytes(values[0]);
      byte [] family = Bytes.toBytes(values[1]);
      byte [] qualifier = Bytes.toBytes(values[2]);
      byte [] value = Bytes.toBytes(values[3]);
      Put put = new Put(row);
      put.add(family, qualifier, value);

      try {
        context.write(new ImmutableBytesWritable(row), put);
      } catch (InterruptedException e) {
        e.printStackTrace();
      }


    }
  }
  public static Job configureJob(Configuration conf, String [] args)
  throws IOException {
    Path inputPath = new Path(args[0]);
    String tableName = args[1];
    Job job = new Job(conf, NAME + "_" + tableName);
    job.setJarByClass(Uploader.class);
    FileInputFormat.setInputPaths(job, inputPath);
    job.setInputFormatClass(TextInputFormat.class);
    job.setMapperClass(Uploader.class);

    TableMapReduceUtil.initTableReducerJob(tableName, null, job);
    job.setNumReduceTasks(0);
    return job;
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum","172.16.17.55");
    conf.set("hbase.zookeeper.property.clientPort","2181");
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if(otherArgs.length != 2) {
      System.err.println("Wrong number of arguments: " + otherArgs.length);
      System.err.println("Usage: " + NAME + " <input> <tablename>");
      System.exit(-1);
    }
    Job job = configureJob(conf, otherArgs);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Could someone please help me resolve this issue? I even placed the jar file in ${HADOOP_HOME}, but I still get the ClassNotFoundException.

Best Answer

You have to add the HBase classes to the CLASSPATH, or package them into the JAR when you export it from Eclipse. Note that the NoClassDefFoundError is thrown on the client side (in main, launched by RunJar), so the HBase jars must be visible to the hadoop command itself, not just to the map tasks.
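A minimal sketch of the classpath fix, assuming HBase is installed on the machine where you run the job and its bin/hbase script is on the PATH (the jar name and paths below are taken from the question):

# Put the HBase jars and their dependencies on the classpath used by the
# 'hadoop jar' launcher, then run the job exactly as before.
export HADOOP_CLASSPATH=$(hbase classpath)
hadoop jar $HADOOP_HOME/CSV2HBASE.jar /home/hbaseuser/input.csv csv_table

Alternatively, when exporting the runnable JAR from Eclipse, choose the option that packages the required libraries into the generated JAR, so the HBase classes travel inside CSV2HBASE.jar. For the map/reduce tasks themselves, TableMapReduceUtil also provides addDependencyJars(job), which ships the HBase jars to the cluster via the distributed cache (depending on the HBase version, initTableReducerJob may already do this).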

Regarding "hadoop - ClassNotFound exception when running an HBase MapReduce program", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/12300814/
