kubernetes - Kubernetes 1.17 with containerd 1.2.0 and Calico CNI: node not joining master

Tags: kubernetes calico containerd

I am setting up a Kubernetes cluster on CentOS 8 with containerd as the container runtime and Calico as the CNI. The master node was set up with the kubeadm command and is in the Ready state.
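For reference, the setup was roughly along these lines (the pod CIDR, Calico manifest URL, master address, token and hash below are placeholders, not the exact commands used):

# On the master (placeholder values):
kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket=/run/containerd/containerd.sock
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# On the worker node (placeholder token/hash):
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket=/run/containerd/containerd.sock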

When I join a worker node to the master, the node does not become Ready. I see the messages below in the log file.

Jan 14 20:17:29 node02 containerd[1417]: time="2020-01-14T20:17:29.416373526-05:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fbst8,Uid:9c7f6334-d106-48e1-af12-1bcdebc7c2c2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to start sandbox container: failed to create containerd task: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:279: applying cgroup configuration for process caused \"Invalid unit name 'pod9c7f6334-d106-48e1-af12-1bcdebc7c2c2'\"": unknown"
Jan 14 20:17:29 node02 kubelet[30113]: E0114 20:17:29.416668   30113 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:279: applying cgroup configuration for process caused \"Invalid unit name 'pod9c7f6334-d106-48e1-af12-1bcdebc7c2c2'\"": unknown
Jan 14 20:17:29 node02 kubelet[30113]: E0114 20:17:29.416742   30113 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "calico-node-fbst8_kube-system(9c7f6334-d106-48e1-af12-1bcdebc7c2c2)" failed: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:279: applying cgroup configuration for process caused \"Invalid unit name 'pod9c7f6334-d106-48e1-af12-1bcdebc7c2c2'\"": unknown
Jan 14 20:17:29 node02 kubelet[30113]: E0114 20:17:29.416761   30113 kuberuntime_manager.go:729] createPodSandbox for pod "calico-node-fbst8_kube-system(9c7f6334-d106-48e1-af12-1bcdebc7c2c2)" failed: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:279: applying cgroup configuration for process caused \"Invalid unit name 'pod9c7f6334-d106-48e1-af12-1bcdebc7c2c2'\"": unknown
Jan 14 20:17:29 node02 kubelet[30113]: E0114 20:17:29.416819   30113 pod_workers.go:191] Error syncing pod 9c7f6334-d106-48e1-af12-1bcdebc7c2c2 ("calico-node-fbst8_kube-system(9c7f6334-d106-48e1-af12-1bcdebc7c2c2)"), skipping: failed to "CreatePodSandbox" for "calico-node-fbst8_kube-system(9c7f6334-d106-48e1-af12-1bcdebc7c2c2)" with CreatePodSandboxError: "CreatePodSandbox for pod \"calico-node-fbst8_kube-system(9c7f6334-d106-48e1-af12-1bcdebc7c2c2)\" failed: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:279: applying cgroup configuration for process caused \\\"Invalid unit name 'pod9c7f6334-d106-48e1-af12-1bcdebc7c2c2'\\\"\": unknown"
Jan 14 20:17:30 node02 containerd[1417]: time="2020-01-14T20:17:30.541254039-05:00" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 20:17:30 node02 kubelet[30113]: E0114 20:17:30.541394   30113 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jan 14 20:17:35 node02 containerd[1417]: time="2020-01-14T20:17:35.541792325-05:00" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 20:17:35 node02 kubelet[30113]: E0114 20:17:35.541929   30113 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

Any hints on how to resolve this error?

Best Answer

Since you are not using Docker, you need to explicitly set up the cgroup driver.

To use the systemd cgroup driver, set plugins.cri.systemd_cgroup = true in /etc/containerd/config.toml and then run systemctl restart containerd.
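A minimal sketch of the relevant section of /etc/containerd/config.toml for containerd 1.2.x (the section layout differs in newer containerd releases):

# /etc/containerd/config.toml
[plugins]
  [plugins.cri]
    # switch the CRI plugin from the cgroupfs cgroup driver to the systemd driver
    systemd_cgroup = true

systemctl restart containerd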
You must also modify the file /var/lib/kubelet/kubeadm-flags.env and set the cgroup driver there:

KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
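Once that change is in place, restart the kubelet so the flag is picked up, for example:

systemctl daemon-reload
systemctl restart kubelet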

Make sure that /etc/systemd/system/kubelet.service.d/10-kubeadm.conf points to the file above:
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
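For reference, the stock kubeadm drop-in usually looks roughly like this (exact contents vary by kubeadm version and distribution):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# this line lets the kubelet read KUBELET_EXTRA_ARGS from kubeadm-flags.env
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

After restarting containerd and the kubelet, the node should move to Ready once the calico-node pod starts successfully.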

Regarding kubernetes - Kubernetes 1.17 with containerd 1.2.0 and Calico CNI: node not joining master, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59744000/
