kubernetes - kubeadm init error: timed out waiting for the condition

Tags: kubernetes kubeadm

I am trying to run kubeadm init on a brand-new VM set up for a Kubernetes installation.

I am following some class notes, so everything should just work, but instead I am getting:

vagrant@c1-master1:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [c1-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [c1-master1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [c1-master1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

The kubelet itself seems to be fine:
vagrant@c1-master1:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2019-11-22 15:15:52 UTC; 20min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 11188 (kubelet)
    Tasks: 15 (limit: 547)
   CGroup: /system.slice/kubelet.service
           └─11188 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf -

Any suggestions as to what the problem might be, or where to start debugging?

Best Answer

I would suggest the following steps:
1) Try running kubeadm reset.
2) If that does not help, try running kubeadm init again with the latest (or a specific) version by adding the --kubernetes-version=X.Y.Z flag.
3) Try restarting the kubelet.
4) Check the node's firewall rules and open only the relevant ports.
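Concretely, the steps above might look something like this on the control-plane node. This is only a sketch: the pinned version (v1.16.3, matching the log above), the pod CIDR, and the use of ufw for the firewall are assumptions; adjust them for your environment. The port numbers are the defaults for control-plane components in that Kubernetes release.

```shell
# 1) Wipe the partial state left behind by the failed init
sudo kubeadm reset -f

# 2) Re-run init, pinning an explicit Kubernetes version
sudo kubeadm init --kubernetes-version=v1.16.3 --pod-network-cidr=192.168.0.0/16

# 3) Restart the kubelet and check its recent logs for errors
sudo systemctl restart kubelet
sudo journalctl -u kubelet --no-pager | tail -n 20

# 4) Open the ports a control-plane node needs (ufw shown as an example)
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10251/tcp       # kube-scheduler
sudo ufw allow 10252/tcp       # kube-controller-manager
```

If the init still times out after this, the `journalctl -u kubelet` output from step 3 is usually the most informative place to look, since the error message above points at the kubelet failing to bring up the static control-plane Pods.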

Regarding "kubernetes - kubeadm init error: timed out waiting for the condition", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58997458/
