Kubernetes DNS no longer resolves names

Tags: kubernetes, dns, project-calico, kubespray

I have a cluster of 6 servers: 3 master nodes and 3 worker nodes.
Everything was working fine until this morning, when I removed two workers from the cluster.

Now internal DNS no longer works:
I cannot resolve internal names.
google.com, obviously, does resolve, and I can ping it.

My cluster runs Kubernetes v1.18.2 (with Calico for networking), installed with Kubespray.
I can reach my services from outside, but when they need to connect to each other they fail (for example, when the UI tries to connect to the database).

Below is some output from the commands listed here: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/

kubectl exec -ti busybox-6899b748d7-pbdk4 -- cat /etc/resolv.conf


nameserver 10.233.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ovh.net
options ndots:5
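
For reference, with options ndots:5 any name containing fewer than five dots is first tried against each search domain, so kubernetes.default should expand to kubernetes.default.svc.cluster.local. A quick way to rule out search-list handling is to query the fully qualified name directly (a sketch against the same busybox pod):

# Query the FQDN so no search-domain expansion is involved
kubectl exec -ti busybox-6899b748d7-pbdk4 -- nslookup kubernetes.default.svc.cluster.local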

kubectl exec -ti busybox-6899b748d7-pbdk4 -- nslookup kubernetes.default
Server:         10.233.0.10
Address:        10.233.0.10:53

** server can't find kubernetes.default: NXDOMAIN

*** Can't find kubernetes.default: No answer

command terminated with exit code 1

kubectl exec -ti busybox-6899b748d7-pbdk4 -- nslookup google.com
Server:         10.233.0.10
Address:        10.233.0.10:53

Non-authoritative answer:
Name:   google.com
Address: 172.217.22.142

*** Can't find google.com: No answer

kubectl exec -ti busybox-6899b748d7-pbdk4 -- ping google.com
PING google.com (172.217.22.142): 56 data bytes
64 bytes from 172.217.22.142: seq=0 ttl=52 time=4.409 ms
64 bytes from 172.217.22.142: seq=1 ttl=52 time=4.359 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 4.359/4.384/4.409 ms

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-74b594f4c6-5k6kq   1/1     Running   2          6d7h
coredns-74b594f4c6-9ct8x   1/1     Running   0          16m


When I fetch the logs of the DNS pods:
for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
they are full of errors like these:
E0522 11:56:22.613704       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.233.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: net/http: TLS handshake timeout
E0522 11:56:33.678487       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Service: Get https://10.233.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1667490&timeout=8m12s&timeoutSeconds=492&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
E0522 12:19:42.356157       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Namespace: Get https://10.233.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1667490&timeout=6m39s&timeoutSeconds=399&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
E0522 12:19:42.356327       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Service: Get https://10.233.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1667490&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
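
Those errors point at the apiserver service VIP (10.233.0.1:443) rather than at name resolution itself. A couple of checks that seem relevant here (a sketch, assuming the default kubernetes service and the coredns deployment name from this install):

# Check which apiservers back the kubernetes service VIP
kubectl get endpoints kubernetes -n default

# Bounce CoreDNS so it re-establishes its watches against the apiserver
kubectl rollout restart deployment/coredns -n kube-system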

The coredns service is up:
kubectl get svc --namespace=kube-system

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
coredns                     ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP   7d4h
dashboard-metrics-scraper   ClusterIP   10.233.52.242   <none>        8000/TCP                 7d4h
kubernetes-dashboard        ClusterIP   10.233.63.42    <none>        443/TCP                  7d4h
voyager-operator            ClusterIP   10.233.31.206   <none>        443/TCP,56791/TCP        6d5h

The endpoints are exposed:
kubectl get ep coredns --namespace=kube-system
NAME      ENDPOINTS                                                    AGE
coredns   10.233.68.9:53,10.233.79.7:53,10.233.68.9:9153 + 3 more...   7d4h
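
Since the service and its endpoints look healthy, one way to narrow this down is to query a CoreDNS pod IP directly instead of the 10.233.0.10 VIP, which separates a CoreDNS problem from a kube-proxy/service-routing problem (a sketch using one of the endpoint IPs above):

# busybox nslookup accepts an explicit server as its second argument
kubectl exec -ti busybox-6899b748d7-pbdk4 -- nslookup kubernetes.default.svc.cluster.local 10.233.68.9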

What did I break, and how can I fix it?

Edit:
More information requested in the comments:
kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5d9cfb4bfd-8h7jd      1/1     Running   0          3d14h
calico-node-6w8g6                             1/1     Running   13         4d15h
calico-node-78thq                             1/1     Running   6          7d19h
calico-node-cr4jl                             1/1     Running   23         4d16h
calico-node-g5q99                             1/1     Running   1          3d15h
calico-node-pmss2                             1/1     Running   0          3d15h
calico-node-zw9fk                             1/1     Running   18         4d19h
coredns-74b594f4c6-5k6kq                      1/1     Running   2          6d22h
coredns-74b594f4c6-9ct8x                      1/1     Running   0          15h
dns-autoscaler-7594b8c675-j5jfv               1/1     Running   0          15h
kube-apiserver-kub1                           1/1     Running   42         7d20h
kube-apiserver-kub2                           1/1     Running   1          7d19h
kube-apiserver-kub3                           1/1     Running   33         7d19h
kube-controller-manager-kub1                  1/1     Running   37         7d20h
kube-controller-manager-kub2                  1/1     Running   4          3d15h
kube-controller-manager-kub3                  1/1     Running   55         7d19h
kube-proxy-4dlf8                              1/1     Running   4          4d15h
kube-proxy-4nlhf                              1/1     Running   2          4d15h
kube-proxy-82kkz                              1/1     Running   3          4d15h
kube-proxy-lvsfz                              1/1     Running   0          3d15h
kube-proxy-pmhnx                              1/1     Running   4          4d15h
kube-proxy-wpfnn                              1/1     Running   10         4d15h
kube-scheduler-kub1                           1/1     Running   34         7d20h
kube-scheduler-kub2                           1/1     Running   3          7d19h
kube-scheduler-kub3                           1/1     Running   51         7d19h
kubernetes-dashboard-7dbcd59666-79gxv         1/1     Running   0          3d14h
kubernetes-metrics-scraper-6858b8c44d-g9m9w   1/1     Running   1          5d22h
nginx-proxy-galaxy                            1/1     Running   2          4d15h
nginx-proxy-kub4                              1/1     Running   7          4d19h
nginx-proxy-kub5                              1/1     Running   6          4d16h
nodelocaldns-2dv59                            1/1     Running   0          3d15h
nodelocaldns-9skxm                            1/1     Running   5          4d16h
nodelocaldns-dwg4z                            1/1     Running   4          4d15h
nodelocaldns-nmwwz                            1/1     Running   12         7d19h
nodelocaldns-qkq8n                            1/1     Running   4          4d19h
nodelocaldns-v84jj                            1/1     Running   8          7d19h
voyager-operator-5677998d47-psskf             1/1     Running   10         4d15h
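
One detail worth noting in this list: kubespray fronts the apiservers on non-master nodes with local nginx-proxy static pods (nginx-proxy-kub4, nginx-proxy-kub5, nginx-proxy-galaxy above), and the high apiserver restart counts line up with the connection-refused errors in the CoreDNS logs. A sketch of how to inspect one of those proxies (paths per kubespray defaults; adjust if customized):

# Logs of the local apiserver proxy on a worker node
kubectl logs -n kube-system nginx-proxy-kub4

# On the node itself, the upstream apiserver list lives in the static config:
# cat /etc/nginx/nginx.conf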

Best answer

I was able to reproduce this scenario.

$ kubectl exec -it busybox -n dev -- nslookup kubernetes.default    
Server:         10.96.0.10
Address:        10.96.0.10:53

** server can't find kubernetes.default: NXDOMAIN

*** Can't find kubernetes.default: No answer

command terminated with exit code 1
$ kubectl exec -it busybox -n dev -- nslookup google.com        
Server:         10.96.0.10
Address:        10.96.0.10:53

Non-authoritative answer:
Name:   google.com
Address: 172.217.168.238

*** Can't find google.com: No answer

$ kubectl exec -it busybox -n dev -- ping google.com    
PING google.com (172.217.168.238): 56 data bytes
64 bytes from 172.217.168.238: seq=0 ttl=52 time=18.425 ms
64 bytes from 172.217.168.238: seq=1 ttl=52 time=27.176 ms
64 bytes from 172.217.168.238: seq=2 ttl=52 time=18.603 ms
64 bytes from 172.217.168.238: seq=3 ttl=52 time=15.445 ms
64 bytes from 172.217.168.238: seq=4 ttl=52 time=16.492 ms
64 bytes from 172.217.168.238: seq=5 ttl=52 time=19.294 ms
^C
--- google.com ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 15.445/19.239/27.176 ms

However, when I followed the same steps with the dnsutils image mentioned in the Kubernetes doc, it gave a positive response.
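
For completeness, the dnsutils pod from that doc can be created roughly like this (a minimal sketch; the image tag is the one the doc referenced at the time and may have moved since):

kubectl apply -n dev -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command: ["sleep", "3600"]
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF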
$ kubectl exec -ti dnsutils -n dev -- nslookup kubernetes.default   
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1


$ kubectl exec -ti dnsutils -n dev -- nslookup google.com        
Server:         10.96.0.10
Address:        10.96.0.10#53

Non-authoritative answer:
Name:   google.com
Address: 172.217.168.238
Name:   google.com
Address: 2a00:1450:400e:80c::200e

From my understanding, the problem here lies in the DNS utilities shipped in the busybox image; that is why we get this DNS resolution error.
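
If you want to keep using busybox for these checks, a commonly cited workaround (assuming the known nslookup regression in newer busybox releases is what you are hitting) is to pin an older tag whose nslookup still behaves:

# busybox 1.28 predates the broken nslookup applet
kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec -ti busybox -- nslookup kubernetes.default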
