Kubernetes pods are not spread across different nodes

Tags: kubernetes, google-kubernetes-engine

I have a Kubernetes cluster on GKE. I know Kubernetes spreads pods that share the same labels across nodes, but that is not happening for me. Here are my node descriptions.

Name:                   gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob
Conditions:
  Type          Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----          ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk     False   Fri, 27 May 2016 21:11:17 -0400         Thu, 26 May 2016 22:16:27 -0400         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  Ready         True    Fri, 27 May 2016 21:11:17 -0400         Thu, 26 May 2016 22:17:02 -0400         KubeletReady                    kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Capacity:
 cpu:           2
 memory:        1848660Ki
 pods:          110
System Info:
 Machine ID:
 Kernel Version:                3.16.0-4-amd64
 OS Image:                      Debian GNU/Linux 7 (wheezy)
 Container Runtime Version:     docker://1.9.1
 Kubelet Version:               v1.2.4
 Kube-Proxy Version:            v1.2.4
Non-terminated Pods:            (2 in total)
  Namespace                     Name                                                                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------                     ----                                                                                    ------------    ----------  --------------- -------------
  kube-system                   fluentd-cloud-logging-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob            80m (4%)        0 (0%)              200Mi (11%)     200Mi (11%)
  kube-system                   kube-proxy-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob                       20m (1%)        0 (0%)              0 (0%)          0 (0%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  100m (5%)     0 (0%)          200Mi (11%)     200Mi (11%)
No events.

Name:                   gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2
Conditions:
  Type          Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----          ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk     False   Fri, 27 May 2016 21:11:17 -0400         Fri, 27 May 2016 18:16:38 -0400         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  Ready         True    Fri, 27 May 2016 21:11:17 -0400         Fri, 27 May 2016 18:17:12 -0400         KubeletReady                    kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Capacity:
 pods:          110
 cpu:           2
 memory:        1848660Ki
System Info:
 Machine ID:
 Kernel Version:                3.16.0-4-amd64
 OS Image:                      Debian GNU/Linux 7 (wheezy)
 Container Runtime Version:     docker://1.9.1
 Kubelet Version:               v1.2.4
 Kube-Proxy Version:            v1.2.4
Non-terminated Pods:            (10 in total)
  Namespace                     Name                                                                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------                     ----                                                                                    ------------    ----------  --------------- -------------
  default                       pn-minions-deployment-prod-3923308490-axucq                                             100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-prod-3923308490-mvn54                                             100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-staging-2522417973-8cq5p                                          100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-staging-2522417973-9yatt                                          100m (5%)       0 (0%)              0 (0%)          0 (0%)
  kube-system                   fluentd-cloud-logging-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2            80m (4%)        0 (0%)              200Mi (11%)     200Mi (11%)
  kube-system                   heapster-v1.0.2-1246684275-a8eab                                                        150m (7%)       150m (7%)   308Mi (17%)     308Mi (17%)
  kube-system                   kube-dns-v11-uzl1h                                                                      310m (15%)      310m (15%)  170Mi (9%)      920Mi (50%)
  kube-system                   kube-proxy-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2                       20m (1%)        0 (0%)              0 (0%)          0 (0%)
  kube-system                   kubernetes-dashboard-v1.0.1-3co2b                                                       100m (5%)       100m (5%)   50Mi (2%)       50Mi (2%)
  kube-system                   l7-lb-controller-v0.6.0-o5ojv                                                           110m (5%)       110m (5%)   70Mi (3%)       120Mi (6%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  1170m (58%)   670m (33%)      798Mi (44%)     1598Mi (88%)
No events.

Here are the descriptions of the deployments:

Name:                   pn-minions-deployment-prod
Namespace:              default
Labels:                 app=pn-minions,environment=production
Selector:               app=pn-minions,environment=production
Replicas:               2 updated | 2 total | 2 available | 0 unavailable
OldReplicaSets:         <none>
NewReplicaSet:          pn-minions-deployment-prod-3923308490 (2/2 replicas created)

Name:                   pn-minions-deployment-staging
Namespace:              default
Labels:                 app=pn-minions,environment=staging
Selector:               app=pn-minions,environment=staging
Replicas:               2 updated | 2 total | 2 available | 0 unavailable
OldReplicaSets:         <none>
NewReplicaSet:          pn-minions-deployment-staging-2522417973 (2/2 replicas created)

As you can see, all four pods are on the same node. Do I need to do something extra to make this work?
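(For reference, pod-to-node placement can also be listed directly; the label selector below is taken from the deployment's own labels, and the command is a sketch rather than output from the original question.)

# list the pn-minions pods together with the node each one is scheduled on
kubectl get pods -l app=pn-minions -o wide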

Best answer

By default, pods run with unbounded CPU and memory limits. This means any pod in the system can consume as much CPU and memory as is available on the node where it runs. See http://kubernetes.io/docs/admin/limitrange/
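As a minimal sketch of what that page describes, a LimitRange can give every container in a namespace a default CPU request and limit; the name cpu-defaults and the values below are assumptions for illustration, not part of the original answer:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults        # hypothetical name
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: "100m"         # assumed default request for containers that omit one
      default:
        cpu: "400m"         # assumed default limit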

When you do not specify CPU limits, Kubernetes has no information about how much CPU a pod needs, so the scheduler may place all of the pods on a single node.

Here is an example deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: quay.io/naveensrinivasan/jenkins:0.4
          ports:
            - containerPort: 8080
          resources:
            limits:
                cpu: "400m"
#          volumeMounts:
#            - mountPath: /var/jenkins_home
#              name: jenkins-volume
#      volumes:
#         - name: jenkins-volume
#           awsElasticBlockStore:
#            volumeID: vol-29c4b99f
#            fsType: ext4
      imagePullSecrets:
         - name: registrypullsecret
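
Assuming the manifest above is saved as jenkins.yaml (a hypothetical filename), it can be created with:

kubectl create -f jenkins.yaml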

Here is the output of kubectl describe po | grep Node after the deployment was created.

~ aws_kubernetes  naveen@GuessWho  ~/revature/devops/jenkins   jenkins ● k describe po | grep Node
Node:       ip-172-20-0-26.us-west-2.compute.internal/172.20.0.26
Node:       ip-172-20-0-29.us-west-2.compute.internal/172.20.0.29
Node:       ip-172-20-0-27.us-west-2.compute.internal/172.20.0.27
Node:       ip-172-20-0-29.us-west-2.compute.internal/172.20.0.29

The pods are now created on 4 different nodes. The placement is based on the CPU limits relative to the cluster's capacity. You can increase or decrease the replicas to watch pods land on different nodes.
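For example, scaling the deployment and re-checking placement (a sketch, not output from the original answer):

# scale the example deployment and see where the new pods are scheduled
kubectl scale deployment jenkins --replicas=6
kubectl describe po | grep Node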

This is not specific to GKE or AWS.

Regarding Kubernetes pods not being spread across different nodes, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/37494314/
