java - TApplicationException when running a MapReduce job on an Accumulo table

Tags: java hadoop mapreduce accumulo

I'm running a MapReduce job that takes data from a table in Accumulo as input and stores the results in another Accumulo table. To do this, I'm using the AccumuloInputFormat and AccumuloOutputFormat classes. Here is the code:

public int run(String[] args) throws Exception {

        Opts opts = new Opts();
        opts.parseArgs(PivotTable.class.getName(), args);

        Configuration conf = getConf();

        conf.set("formula", opts.formula);

        Job job = Job.getInstance(conf);

        job.setJobName("Pivot Table Generation");
        job.setJarByClass(PivotTable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        job.setMapperClass(PivotTableMapper.class);
        job.setCombinerClass(PivotTableCombiber.class);
        job.setReducerClass(PivotTableReducer.class);

        // Read input from an Accumulo table.
        job.setInputFormatClass(AccumuloInputFormat.class);

        ClientConfiguration zkConfig = new ClientConfiguration()
                .withInstance(opts.getInstance().getInstanceName())
                .withZkHosts(opts.getInstance().getZooKeepers());

        AccumuloInputFormat.setInputTableName(job, opts.dataTable);
        AccumuloInputFormat.setZooKeeperInstance(job, zkConfig);
        AccumuloInputFormat.setConnectorInfo(job, opts.getPrincipal(),
                new PasswordToken(opts.getPassword().value));

        // Write output to another Accumulo table.
        job.setOutputFormatClass(AccumuloOutputFormat.class);

        BatchWriterConfig bwConfig = new BatchWriterConfig();

        AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig);
        AccumuloOutputFormat.setZooKeeperInstance(job, zkConfig);
        AccumuloOutputFormat.setConnectorInfo(job, opts.getPrincipal(),
                new PasswordToken(opts.getPassword().value));
        AccumuloOutputFormat.setDefaultTableName(job, opts.pivotTable);
        AccumuloOutputFormat.setCreateTables(job, true);

        return job.waitForCompletion(true) ? 0 : 1;
    }
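(For context: a mapper consuming AccumuloInputFormat receives Accumulo Key/Value pairs. The PivotTableMapper itself is not shown in the question, so the following is only a minimal sketch of the shape such a mapper takes; the body and the emitted values are assumptions, not the asker's actual code.)

    import java.io.IOException;

    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Value;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical sketch; the real PivotTableMapper is not shown in the question.
    public class PivotTableMapper extends Mapper<Key, Value, Text, Text> {
        @Override
        protected void map(Key key, Value value, Context context)
                throws IOException, InterruptedException {
            // AccumuloInputFormat presents each table entry as an Accumulo Key/Value pair.
            // Emit Text/Text to match the job's setOutputKeyClass/setOutputValueClass.
            context.write(key.getRow(), new Text(value.get()));
        }
    }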

PivotTable is the name of the class that contains the main method (and this run method). I've also written the mapper, combiner, and reducer classes. But when I try to run the job, I get this error:

Exception in thread "main" java.io.IOException: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:707)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:397)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
        at com.latize.ulysses.accumulo.postprocess.PivotTable.run(PivotTable.java:247)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at com.latize.ulysses.accumulo.postprocess.PivotTable.main(PivotTable.java:251)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:87)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.hasTablePermission(SecurityOperationsImpl.java:220)
        at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:692)
        ... 21 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
        at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_hasTablePermission(ClientService.java:641)
        at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.hasTablePermission(ClientService.java:624)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:223)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:220)
        at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:79)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:73)

Can anyone tell me what I'm doing wrong here? Any help would be appreciated.

EDIT: I'm running Accumulo 1.7.0

Best Answer

A TApplicationException indicates that the error occurred on the Accumulo tablet server, not in your client (MapReduce) code. You'll need to examine your tablet server logs, wherever you saw the TApplicationException, to get more information about the specific error.

Table permissions are usually retrieved from ZooKeeper, so this might indicate a problem with the tserver connecting to ZooKeeper.
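As a diagnostic, the exact call that fails in the stack trace (SecurityOperations.hasTablePermission(), invoked by InputConfigurator.validatePermissions()) can be reproduced directly with the client API, outside of MapReduce. A minimal sketch, where the instance name, ZooKeeper host, credentials, and table name are placeholders rather than values from the question:

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
    import org.apache.accumulo.core.security.TablePermission;

    public class PermissionCheck {
        public static void main(String[] args) throws Exception {
            // Placeholders: substitute your instance name, ZooKeeper quorum, and credentials.
            Connector conn = new ZooKeeperInstance("myInstance", "zkhost:2181")
                    .getConnector("myUser", new PasswordToken("myPassword"));

            // This is the same RPC the stack trace shows failing; if the fault is on the
            // tserver side, this standalone call should fail with the same exception.
            boolean canRead = conn.securityOperations()
                    .hasTablePermission("myUser", "dataTable", TablePermission.READ);
            System.out.println("READ permission on dataTable: " + canRead);
        }
    }

If this standalone check throws the same TApplicationException, that points to the tserver/ZooKeeper layer rather than the job configuration.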

Unfortunately, I don't see a hostname or IP in the stack trace, so you may have to check all of your tserver logs to find it.

Regarding "java - TApplicationException when running a MapReduce job on an Accumulo table", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34846357/
