java - Creating a custom generator Hadoop InputFormat with no input data

Tags: java hadoop testing

I'm trying to create an InputFormat that only generates data rather than reading it from an external source. It reads from the job configuration how much data to generate before shutting down. The goal is to help profile an OutputFormat outside of a test environment. Unfortunately, I can't find any references on using a generator-style InputFormat.

The InputFormat I have so far is:

  public static class GeneratorInputFormat extends InputFormat<LongWritable, LongWritable> {

    @Override
    public RecordReader<LongWritable, LongWritable> createRecordReader(
        InputSplit arg0, TaskAttemptContext arg1) throws IOException, InterruptedException {
      return new GeneratorRecordReader();
    }

    @Override
    public List<InputSplit> getSplits(JobContext job) throws IOException, InterruptedException {
      long splitCount = job.getConfiguration().getLong(SPLITS_COUNT_KEY, 0);
      long splitSize = job.getConfiguration().getLong(SPLITS_SIZE_KEY, 0);
      List<InputSplit> splits = new ArrayList<InputSplit>();
      for (int i = 0; i < splitCount; i++) {
        splits.add(new TestInputSplit(splitSize));
      }
      return splits;
    }
  }

  public static class TestInputSplit extends InputSplit {

    private final long size;

    public TestInputSplit(long size) {
      this.size = size;
    }

    @Override
    public long getLength() throws IOException, InterruptedException {
      return size;
    }

    @Override
    public String[] getLocations() throws IOException, InterruptedException {
      return new String[0];
    }
  }

The record reader simply generates the numbers from 0 up to the input length.
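For reference, a minimal sketch of what such a record reader might look like (the class and field names here are illustrative, not the exact code from the job):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Emits the numbers 0..length-1 as both key and value, where length
// is taken from the split. No external data is read.
public class GeneratorRecordReader extends RecordReader<LongWritable, LongWritable> {

  private long length;
  private long current = -1;
  private final LongWritable key = new LongWritable();
  private final LongWritable value = new LongWritable();

  @Override
  public void initialize(InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    // The only thing we need from the split is how many records to emit.
    length = split.getLength();
  }

  @Override
  public boolean nextKeyValue() {
    current++;
    if (current >= length) {
      return false;
    }
    key.set(current);
    value.set(current);
    return true;
  }

  @Override
  public LongWritable getCurrentKey() {
    return key;
  }

  @Override
  public LongWritable getCurrentValue() {
    return value;
  }

  @Override
  public float getProgress() {
    return length == 0 ? 1.0f : Math.min(1.0f, (float) current / length);
  }

  @Override
  public void close() {
    // Nothing to close: no underlying stream.
  }
}
```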

The error I'm getting is a missing-file exception:
16/11/18 03:28:54 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1479265882561_0037
Exception in thread "main" java.lang.NullPointerException
        at org.apache.hadoop.mapreduce.split.JobSplitWriter.writeNewSplits(JobSplitWriter.java:132)
        at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:79)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:307)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
        at com.gmail.mooman219.cloud.hadoop.WordCountBench.main(WordCountBench.java:208)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
        at com.google.cloud.hadoop.services.agent.job.shim.HadoopRunJarShim.main(HadoopRunJarShim.java:12)
16/11/18 03:28:54 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/hadoop-yarn/staging/root/.staging/job_1479265882561_0037/job.split (inode 34186): File does not exist. Holder DFSClient_NONMAPREDUCE_232487306_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3430)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3233)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3071)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

I find this strange because I never reference any file on the input side.

Best Answer

The missing-file message is a secondary symptom. The job actually fails first with the NullPointerException in JobSplitWriter.writeNewSplits: when the client serializes the splits to the job.split file, it asks the SerializationFactory for a serializer for your split class. Because TestInputSplit does not implement Writable (and no other serialization is configured for it), no serializer is found and the write throws an NPE. The staging directory is then cleaned up, which is why the DataStreamer afterwards complains that job.split no longer exists.

The fix is to make the custom split serializable: have TestInputSplit implement org.apache.hadoop.io.Writable, which requires a no-argument constructor plus write(DataOutput) and readFields(DataInput) methods.
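A sketch of the corrected split, assuming the rest of the code from the question stays the same (shown here as a top-level class for brevity; in the question it is a nested static class):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputSplit;

// Implementing Writable lets the framework serialize the split into
// job.split during job submission and deserialize it on the task side.
public class TestInputSplit extends InputSplit implements Writable {

  private long size; // no longer final: readFields() must be able to set it

  // Required: Hadoop instantiates the split reflectively with the
  // no-argument constructor, then calls readFields() to populate it.
  public TestInputSplit() {}

  public TestInputSplit(long size) {
    this.size = size;
  }

  @Override
  public long getLength() throws IOException, InterruptedException {
    return size;
  }

  @Override
  public String[] getLocations() throws IOException, InterruptedException {
    return new String[0];
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeLong(size);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    size = in.readLong();
  }
}
```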

For this question about creating a custom generator Hadoop InputFormat with no input data, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/40669051/
