linux - How to get disk info used by hadoop fs via unix utils or nmon?

Tags: linux unix hadoop mapr

I have installed MapR with MFS (its Hadoop FS implementation), along with some scripts that gather filesystem information from df, fdisk, and nmon log files.


    root@spbswgvml10:/opt/nmon# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       8.8G  4.4G  4.0G  53% /
    none            4.0K     0  4.0K   0% /sys/fs/cgroup
    udev            2.0G  4.0K  2.0G   1% /dev
    tmpfs           396M  464K  395M   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none            2.0G     0  2.0G   0% /run/shm
    none            100M     0  100M   0% /run/user
    root@spbswgvml10:/opt/nmon# fdisk -l

    Disk /dev/sda: 10.7 GB, 10737418240 bytes
    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00038d7f

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048    18874367     9436160   83  Linux
    /dev/sda2        18876414    20969471     1046529    5  Extended
    /dev/sda5        18876416    20969471     1046528   82  Linux swap / Solaris

    Disk /dev/sdb: 32.2 GB, 32212254720 bytes
    64 heads, 51 sectors/track, 19275 cylinders, total 62914560 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x434da72d

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048    62914559    31456256   83  Linux
    root@spbswgvml10:/opt/nmon# mount
    /dev/sda1 on / type ext4 (rw,errors=remount-ro)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
    none on /sys/fs/cgroup type tmpfs (rw)
    none on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    udev on /dev type devtmpfs (rw,mode=0755)
    devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
    none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
    none on /run/shm type tmpfs (rw,nosuid,nodev)
    none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
    none on /sys/fs/pstore type pstore (rw)
    cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
    cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
    cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
    cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
    systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
    rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)

Now I want to get information about the device /dev/sdb1, which is used by MapR as the Hadoop FS. I know that I can use something like

    hadoop fs -df -h

but I would like another way to get the usage, total size, etc., from standard unix utilities.

I cannot mount /dev/sdb1 because it is already in use by some process, and I cannot find any path where the partition might already be mounted.
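As a side note, the fdisk listing above already contains the raw size of /dev/sdb1 even though the partition cannot be mounted. A minimal awk sketch (using the sample partition line copied from the listing, where the fourth field is the size in 1 KiB blocks) converts it to GiB:

```shell
# Parse an "fdisk -l" partition line to report the partition size
# without mounting it. The sample line is taken from the listing above;
# field 4 ("Blocks") is the size in 1 KiB blocks, so dividing by
# 1048576 (1024 * 1024) yields GiB.
line='/dev/sdb1            2048    62914559    31456256   83  Linux'
echo "$line" | awk '{ printf "%s: %.1f GiB\n", $1, $4/1048576 }'
# → /dev/sdb1: 30.0 GiB
```

This only reports the raw partition size, not how much of it MFS has actually used.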

Best Answer

Use the command below:

    maprcli disk list -host `hostname`

Disks used by MFS will not show up in the regular mount output.
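If you want to feed the result into a script, the maprcli output is tabular and can be summed with awk. The sample below is a hypothetical sketch of that output, not a captured transcript; the real column names and order may differ between MapR versions, so check your own `maprcli disk list` output first:

```shell
# Sum total space across MFS disks from maprcli-style tabular output.
# HYPOTHETICAL sample output: real "maprcli disk list" columns may
# differ on your MapR version; adjust the field index accordingly.
sample='disk       totalspace(MB)
/dev/sdb1  30720'
# Skip the header row, accumulate column 2, print the grand total.
echo "$sample" | awk 'NR > 1 { total += $2 } END { printf "total: %d MB\n", total }'
# → total: 30720 MB
```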

Regarding "linux - How to get disk info used by hadoop fs via unix utils or nmon?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29074900/
