I have set up a 3-node Kubernetes cluster on 3 VPSes and installed Rook/Ceph.
When I run
kubectl exec -it rook-ceph-tools-78cdfd976c-6fdct -n rook-ceph -- bash
ceph status
I get the following result:
osd: 0 osds: 0 up, 0 in
I tried
ceph device ls
and the result is
DEVICE HOST:DEV DAEMONS LIFE EXPECTANCY
while
ceph osd status
gives me no output at all.
This is the yaml file I used:
https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml
When I run the command below
sudo kubectl -n rook-ceph logs rook-ceph-osd-prepare-node1-4xddh provision
the output is
2021-05-10 05:45:09.440650 I | cephosd: skipping device "sda1" because it contains a filesystem "ext4"
2021-05-10 05:45:09.440653 I | cephosd: skipping device "sda2" because it contains a filesystem "ext4"
2021-05-10 05:45:09.475841 I | cephosd: configuring osd devices: {"Entries":{}}
2021-05-10 05:45:09.475875 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2021-05-10 05:45:09.476221 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list --format json
2021-05-10 05:45:10.057411 D | cephosd: {}
2021-05-10 05:45:10.057469 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2021-05-10 05:45:10.057501 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2021-05-10 05:45:10.541968 D | cephosd: {}
2021-05-10 05:45:10.551033 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2021-05-10 05:45:10.551274 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "node1"
My disk partitions:
root@node1: lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 400G 0 disk
├─sda1 8:1 0 953M 0 part /boot
└─sda2 8:2 0 399.1G 0 part /
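The lsblk output above matches the prepare log: both partitions of sda carry ext4 filesystems and are mounted, so Rook has no empty device to claim as an OSD. A quick way to confirm whether a node exposes any unused raw device (a sketch; the device names shown are only examples) is:

```shell
# List block devices together with their filesystem signatures and
# mountpoints. Rook/Ceph will only consume a device or partition whose
# FSTYPE and MOUNTPOINT columns are empty -- for example an unpartitioned
# second disk such as /dev/sdb. Here every part of sda is in use.
lsblk -f

# ceph-volume can also report which devices it considers usable; this
# would be run inside an osd-prepare pod or on a node where it is installed:
# ceph-volume inventory
```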
What am I doing wrong here?
Best answer
I had a similar problem: after I had installed and torn down the cluster several times during testing, no OSDs showed up in ceph status.
I solved it by running
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
to completely erase any leftover information on such a raw block disk.
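Zeroing the entire disk with dd works but can take hours on a large drive. A lighter cleanup that removes only partition tables and filesystem/LVM signatures is usually enough for ceph-volume to see the device as clean; here is a guarded sketch (the device name /dev/sdX is a placeholder -- this is destructive, so adjust it deliberately):

```shell
DISK="/dev/sdX"   # placeholder -- set to the real raw device before running!

if [ -b "$DISK" ]; then
  # Remove any GPT/MBR partition tables left over from earlier installs.
  sgdisk --zap-all "$DISK"
  # Wipe filesystem and LVM signatures so ceph-volume treats it as empty.
  wipefs --all "$DISK"
  # If a previous OSD created LVM state on this disk, ceph-volume can
  # remove that too (where the tool is available):
  # ceph-volume lvm zap "$DISK" --destroy
else
  echo "refusing to wipe: $DISK is not a block device (edit DISK first)"
fi
```

After wiping, restarting the rook-ceph-operator (or deleting the osd-prepare pod) makes Rook re-scan the node and pick up the now-empty device.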
About kubernetes - 0 OSDs shown after installing rook-ceph in a kubernetes cluster, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67465958/