hadoop - Unable to start Hadoop (3.1.0) in pseudo-distributed mode on Ubuntu (16.04)

Tags: hadoop hdfs namenode

I'm trying to follow the getting-started guide on the Apache Hadoop website, specifically the pseudo-distributed configuration:
Getting started guide from Apache Hadoop 3.1.0

but I can't get the Hadoop NameNode or DataNode to start. Can anyone suggest what to look at, or even something I could run to debug/investigate further?

Towards the end of the logs I see the error message below (I'm not sure whether it's significant or a red herring).

    2018-04-18 14:15:40,003 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes

    2018-04-18 14:15:40,006 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks

    2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks            = 0

    2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks          = 0

    2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0

    2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of  over-replicated blocks = 0

    2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written    = 0

    2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 11 msec

    2018-04-18 14:15:40,028 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting

    2018-04-18 14:15:40,028 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting

    2018-04-18 14:15:40,029 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000

    2018-04-18 14:15:40,031 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state

    2018-04-18 14:15:40,031 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Initializing quota with 4 thread(s)

    2018-04-18 14:15:40,033 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Quota initialization completed in 2 milliseconds name space=1 storage space=0 storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0

    2018-04-18 14:15:40,037 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds

> 2018-04-18 14:15:40,232 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
> 
> 2018-04-18 14:15:40,236 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 1: SIGHUP
> 
> 2018-04-18 14:15:40,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at c0315/127.0.1.1

I've confirmed that I can ssh to localhost without being prompted for a password. I've also run the following steps from the Apache getting-started guide mentioned above:
  • $ bin/hdfs namenode -format
  • $ sbin/start-dfs.sh

  • But I can't complete step 3, browsing to http://localhost:9870/. When I run jps from a terminal prompt, all I get back is:

    14900 Jps



    I was expecting to see my nodes listed.
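    (For reference: on a working pseudo-distributed setup, once sbin/start-dfs.sh has succeeded, jps normally lists the three HDFS daemons alongside Jps, roughly as in the sketch below. The PIDs are just illustrative.)

    $ jps
    14900 Jps
    15321 NameNode
    15487 DataNode
    15690 SecondaryNameNode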

    I've attached the full log.

    Can anyone suggest a way to debug this?
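    (For reference, one generic way to dig further: the per-daemon logs land under $HADOOP_HOME/logs by default, assuming HADOOP_HOME points at the unpacked hadoop-3.1.0 directory; the file names include the user and host, hence the wildcards.)

    $ tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log
    $ tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log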

    Java version,
    $ java --version
    java 9.0.4 
    Java(TM) SE Runtime Environment (build 9.0.4+11) 
    Java HotSpot(TM) 64-Bit Server VM (build 9.0.4+11, mixed mode)
    

    EDIT1: I also repeated these steps with Java 8 and got the same error message.

    EDIT2: As suggested in the comments below, I've checked that I'm now definitely pointing at Java 8, and I've also commented out the 127.0.0.0 localhost entry in my /etc/hosts file.

    (Screenshots attached: commented-out-localhosts, java-version, hadoop-env)
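    (The hadoop-env screenshot presumably shows JAVA_HOME; for reference, pinning Hadoop to Java 8 is normally done in etc/hadoop/hadoop-env.sh with a line like the sketch below. The path is only an example for an OpenJDK 8 install on Ubuntu and will differ per machine.)

    # etc/hadoop/hadoop-env.sh -- point Hadoop at a Java 8 JVM (example path, adjust to your install)
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64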

    Ubuntu version,

    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: neon
    Description: KDE neon User Edition 5.12
    Release: 16.04
    Codename: xenial
    

    I've tried running the command bin/hdfs version,
    Hadoop 3.1.0 
    Source code repository https://github.com/apache/hadoop -r 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d 
    Compiled by centos on 2018-03-30T00:00Z 
    Compiled with protoc 2.5.0 
    From source with checksum 14182d20c972b3e2105580a1ad6990 
    This command was run using /home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/common/hadoop-common-3.1.0.jar
    

    When I try bin/hdfs groups it doesn't return, but gives me,
    2018-04-18 15:33:34,590 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
    

    When I try $ bin/hdfs lsSnapshottableDir,
    lsSnapshottableDir: Call From c0315/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    
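    (Both failures above are consistent with nothing listening on localhost:9000, i.e. the NameNode is no longer running. A quick, generic way to confirm:)

    # check whether anything is listening on the NameNode RPC port
    $ ss -tlnp | grep 9000
    # or
    $ sudo lsof -i :9000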

    When I try $ bin/hdfs classpath,
    /home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/etc/hadoop:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/common/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/common/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/mapreduce/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn/*
    

    core-site.xml
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>
    

    hdfs-site.xml
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>
    

    mapred-site.xml
    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>
    
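    (Not part of the original question, but for completeness: when mapreduce.framework.name is set to yarn, the 3.1.0 single-node guide pairs this with a yarn-site.xml along the lines of the sketch below; the pseudo-distributed HDFS steps themselves work without it.)

    <configuration>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
    </configuration>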

    Best answer

    I never did figure this out (I just tried again, because I really miss Neon), but in my case the OS was sending a SIGTERM even though :9000 wasn't in use.
    Sadly, the only workaround I found was to go back to stock Ubuntu.

    Regarding "hadoop - Unable to start Hadoop (3.1.0) in pseudo-distributed mode on Ubuntu (16.04)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49918920/
