java - Unable to get Hadoop job information via a Java client

Tags: java hadoop

I am using Hadoop 1.2.1 and trying to print job details through a Java client, but it does not print anything. Here is my Java code:

    import java.net.InetSocketAddress;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.JobStatus;

    Configuration configuration = new Configuration();
    configuration.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
    configuration.addResource(new Path("/usr/local/hadoop/conf/hdfs-site.xml"));
    configuration.addResource(new Path("/usr/local/hadoop/conf/mapred-site.xml"));
    InetSocketAddress jobtracker = new InetSocketAddress("localhost", 54311);
    JobClient jobClient = new JobClient(jobtracker, configuration);
    jobClient.setConf(configuration);
    JobStatus[] jobs = jobClient.getAllJobs();
    System.out.println(jobs.length); // it prints 0
    for (int i = 0; i < jobs.length; i++) {
        JobStatus js = jobs[i];
        JobID jobId = js.getJobID();
        System.out.println(jobId);
    }

But from the JobTracker history I can see three jobs (screenshot of the JobTracker history page). Can anyone tell me where I am going wrong? I just want to print the details of all the jobs.

Here are my configuration files:

core-site.xml

    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/tmp</value>
        <description>A base for other temporary directories.</description>
      </property>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
        <description>The name of the default file system.  A URI whose
        scheme and authority determine the FileSystem implementation.  The
        uri's scheme determines the config property (fs.SCHEME.impl) naming
        the FileSystem implementation class.  The uri's authority is used to
        determine the host, port, etc. for a filesystem.</description>
      </property>
    </configuration>

hdfs-site.xml

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Default block replication.  The actual number of
        replications can be specified when the file is created.  The default
        is used if replication is not specified in create time.</description>
      </property>
    </configuration>

mapred-site.xml

    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
        <description>The host and port that the MapReduce job tracker runs
        at.  If "local", then jobs are run in-process as a single map and
        reduce task.</description>
      </property>
    </configuration>
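
For reference, the same endpoints can also be set programmatically on the Configuration object instead of loading the XML files from disk. This is only a minimal sketch, reusing the host/port values from the files above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    // Build a Configuration with the same values as the XML files above,
    // so the client does not depend on finding the files on disk.
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://localhost:54310"); // from core-site.xml
    conf.set("mapred.job.tracker", "localhost:54311");     // from mapred-site.xml

    // JobClient(JobConf) connects to whatever mapred.job.tracker points at.
    JobClient jobClient = new JobClient(new JobConf(conf));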

Best Answer

Try something like this:

    jobClient.displayTasks(jobID, "map", "completed");

where jobID is

    JobID jobID = new JobID(jobIdentifier, jobNumber);

    TaskReport[] taskReportList = jobClient.getMapTaskReports(jobID);

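Putting those pieces together, here is a minimal self-contained sketch. The jtIdentifier and job number passed to the JobID constructor are placeholders you would read off the JobTracker history page: a job id such as job_201403140915_0001 breaks down into identifier "201403140915" and number 1.

    import java.net.InetSocketAddress;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.TaskReport;

    public class JobInfo {
        public static void main(String[] args) throws Exception {
            Configuration configuration = new Configuration();
            JobClient jobClient =
                    new JobClient(new InetSocketAddress("localhost", 54311), configuration);

            // Placeholder values: take them from a job id shown in the
            // JobTracker history, e.g. job_201403140915_0001.
            JobID jobID = new JobID("201403140915", 1);

            // Print the completed map tasks of that job.
            jobClient.displayTasks(jobID, "map", "completed");

            // Or inspect the map task reports directly.
            TaskReport[] taskReportList = jobClient.getMapTaskReports(jobID);
            for (TaskReport report : taskReportList) {
                System.out.println(report.getTaskID() + " " + report.getCurrentStatus());
            }
        }
    }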

Regarding "java - Unable to get Hadoop job information via a Java client", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/22412423/
