Kubernetes on GCP - no minimum availability / MinimumReplicasUnavailable error

Tags: kubernetes, deployment, containers

I am deploying a stateless application workload to a Kubernetes cluster on GCP. Its purpose is to run a series of batch jobs, so it needs I/O to Google Storage plus temporary disk space for the computed output.

When the container is deployed, it fails with a "MinimumReplicasUnavailable" error (see the last part of the logs below).

I have varied the pods' CPU, disk, and memory sizes, changed the number of pods, and tried enabling autoscaling - so far none of it has helped. It is not a quota issue.

Am I missing a setting?

Which specific logs or settings should I share to help diagnose the problem?
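For reference, these are a few commands I could run to pull pod-level detail if that would help (the re1 namespace and the app=risk-engine label come from the deployment output further down; <pod-name> is a placeholder for whatever the first command returns):

$ kubectl get pods -n re1 -l app=risk-engine
$ kubectl describe pod <pod-name> -n re1
$ kubectl logs <pod-name> -n re1 --previous
$ kubectl get events -n re1 --sort-by=.lastTimestamp

Everything I have captured so far is below.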

$ kubectl get events


LAST SEEN   TYPE      REASON                    OBJECT                                                    MESSAGE
30m         Normal    Starting                  node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Starting kubelet.
30m         Normal    NodeHasSufficientMemory   node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx status is now: NodeHasSufficientMemory
30m         Normal    NodeHasNoDiskPressure     node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx status is now: NodeHasNoDiskPressure
30m         Normal    NodeHasSufficientPID      node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx status is now: NodeHasSufficientPID
30m         Normal    NodeAllocatableEnforced   node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Updated Node Allocatable limit across pods
30m         Normal    NodeReady                 node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx status is now: NodeReady
30m         Normal    RegisteredNode            node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx event: Registered Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx in Controller
30m         Normal    Starting                  node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Starting kube-proxy.
30m         Warning   ContainerdStart           node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Starting containerd container runtime...
30m         Warning   DockerStart               node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Starting Docker Application Container Engine...
30m         Warning   KubeletStart              node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Started Kubernetes kubelet.
30m         Normal    Starting                  node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Starting kubelet.
30m         Normal    NodeHasSufficientMemory   node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 status is now: NodeHasSufficientMemory
30m         Normal    NodeHasNoDiskPressure     node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 status is now: NodeHasNoDiskPressure
30m         Normal    NodeHasSufficientPID      node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 status is now: NodeHasSufficientPID
30m         Normal    NodeAllocatableEnforced   node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Updated Node Allocatable limit across pods
30m         Normal    NodeReady                 node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 status is now: NodeReady
30m         Normal    Starting                  node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Starting kube-proxy.
30m         Normal    RegisteredNode            node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 event: Registered Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 in Controller
30m         Warning   ContainerdStart           node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Starting containerd container runtime...
30m         Warning   DockerStart               node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Starting Docker Application Container Engine...

$ kubectl describe deployments -A

Name:                   event-exporter-gke
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=event-exporter
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=event-exporter
                    version=v0.3.1
  Annotations:      components.gke.io/component-name: event-exporter
                    components.gke.io/component-version: 1.0.7
  Service Account:  event-exporter-sa
  Containers:
   event-exporter:
    Image:      gke.gcr.io/event-exporter:v0.3.3-gke.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /event-exporter
      -sink-opts=-stackdriver-resource-model=new -endpoint=https://logging.googleapis.com
    Environment:  <none>
    Mounts:       <none>
   prometheus-to-sd-exporter:
    Image:      gke.gcr.io/prometheus-to-sd:v0.10.0-gke.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /monitor
      --stackdriver-prefix=container.googleapis.com/internal/addons
      --api-override=https://monitoring.googleapis.com/
      --source=event_exporter:http://localhost:80?whitelisted=stackdriver_sink_received_entry_count,stackdriver_sink_request_count,stackdriver_sink_successfully_sent_entry_count
      --pod-id=$(POD_NAME)
      --namespace-id=$(POD_NAMESPACE)
      --node-name=$(NODE_NAME)
    Environment:
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
      NODE_NAME:       (v1:spec.nodeName)
    Mounts:           <none>
  Volumes:
   ssl-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   event-exporter-gke-8489df9489 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set event-exporter-gke-8489df9489 to 1
Name:                   fluentd-gke-scaler
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:37 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=fluentd-gke-scaler
Annotations:            components.gke.io/component-name: fluentd-scaler
                        components.gke.io/component-version: 1.0.1
                        deployment.kubernetes.io/revision: 1
Selector:               k8s-app=fluentd-gke-scaler
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=fluentd-gke-scaler
  Annotations:      components.gke.io/component-name: fluentd-scaler
                    components.gke.io/component-version: 1.0.1
  Service Account:  fluentd-gke-scaler
  Containers:
   fluentd-gke-scaler:
    Image:      k8s.gcr.io/fluentd-gcp-scaler:0.5.2
    Port:       <none>
    Host Port:  <none>
    Command:
      /scaler.sh
      --ds-name=fluentd-gke
      --scaling-policy=fluentd-gcp-scaling-policy
    Environment:
      CPU_REQUEST:     100m
      MEMORY_REQUEST:  200Mi
      CPU_LIMIT:       1
      MEMORY_LIMIT:    500Mi
    Mounts:            <none>
  Volumes:             <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   fluentd-gke-scaler-cd4d654d7 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set fluentd-gke-scaler-cd4d654d7 to 1
Name:                   kube-dns
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=kube-dns
                        kubernetes.io/cluster-service=true
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kube-dns
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 10% max surge
Pod Template:
  Labels:           k8s-app=kube-dns
  Annotations:      components.gke.io/component-name: kubedns
                    components.gke.io/component-version: 1.0.3
                    scheduler.alpha.kubernetes.io/critical-pod:
                    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  Service Account:  kube-dns
  Containers:
   kubedns:
    Image:       gke.gcr.io/k8s-dns-kube-dns-amd64:1.15.13
    Ports:       10053/UDP, 10053/TCP, 10055/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    Limits:
      memory:  210Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
   dnsmasq:
    Image:       gke.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.15.13
    Ports:       53/UDP, 53/TCP
    Host Ports:  0/UDP, 0/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --dns-forward-max=1500
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
   sidecar:
    Image:      gke.gcr.io/k8s-dns-sidecar-amd64:1.15.13
    Port:       10054/TCP
    Host Port:  0/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:       <none>
   prometheus-to-sd:
    Image:      gke.gcr.io/prometheus-to-sd:v0.4.2
    Port:       <none>
    Host Port:  <none>
    Command:
      /monitor
      --source=kubedns:http://localhost:10054?whitelisted=probe_kubedns_latency_ms,probe_kubedns_errors,dnsmasq_misses,dnsmasq_hits
      --stackdriver-prefix=container.googleapis.com/internal/addons
      --api-override=https://monitoring.googleapis.com/
      --pod-id=$(POD_NAME)
      --namespace-id=$(POD_NAMESPACE)
      --v=2
    Environment:
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:           <none>
  Volumes:
   kube-dns-config:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               kube-dns
    Optional:           true
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kube-dns-7c976ddbdb (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set kube-dns-7c976ddbdb to 1
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set kube-dns-7c976ddbdb to 2
Name:                   kube-dns-autoscaler
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=kube-dns-autoscaler
                        kubernetes.io/cluster-service=true
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kube-dns-autoscaler
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kube-dns-autoscaler
  Annotations:      seccomp.security.alpha.kubernetes.io/pod: docker/default
  Service Account:  kube-dns-autoscaler
  Containers:
   autoscaler:
    Image:      gke.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1-gke.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /cluster-proportional-autoscaler
      --namespace=kube-system
      --configmap=kube-dns-autoscaler
      --target=Deployment/kube-dns
      --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}}
      --logtostderr=true
      --v=2
    Requests:
      cpu:              20m
      memory:           10Mi
    Environment:        <none>
    Mounts:             <none>
  Volumes:              <none>
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kube-dns-autoscaler-645f7d66cf (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set kube-dns-autoscaler-645f7d66cf to 1
Name:                   l7-default-backend
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=glbc
                        kubernetes.io/cluster-service=true
                        kubernetes.io/name=GLBC
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=glbc
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:       k8s-app=glbc
                name=glbc
  Annotations:  seccomp.security.alpha.kubernetes.io/pod: docker/default
  Containers:
   default-http-backend:
    Image:      k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0
    Port:       8080/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     10m
      memory:  20Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:8080/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   l7-default-backend-678889f899 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set l7-default-backend-678889f899 to 1
Name:                   metrics-server-v0.3.6
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:35 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=metrics-server
                        kubernetes.io/cluster-service=true
                        version=v0.3.6
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               k8s-app=metrics-server,version=v0.3.6
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=metrics-server
                    version=v0.3.6
  Annotations:      seccomp.security.alpha.kubernetes.io/pod: docker/default
  Service Account:  metrics-server
  Containers:
   metrics-server:
    Image:      k8s.gcr.io/metrics-server-amd64:v0.3.6
    Port:       443/TCP
    Host Port:  0/TCP
    Command:
      /metrics-server
      --metric-resolution=30s
      --kubelet-port=10255
      --deprecated-kubelet-completely-insecure=true
      --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
    Limits:
      cpu:     43m
      memory:  55Mi
    Requests:
      cpu:        43m
      memory:     55Mi
    Environment:  <none>
    Mounts:       <none>
   metrics-server-nanny:
    Image:      gke.gcr.io/addon-resizer:1.8.8-gke.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /pod_nanny
      --config-dir=/etc/config
      --cpu=40m
      --extra-cpu=0.5m
      --memory=35Mi
      --extra-memory=4Mi
      --threshold=5
      --deployment=metrics-server-v0.3.6
      --container=metrics-server
      --poll-period=300000
      --estimator=exponential
      --scale-down-delay=24h
      --minClusterSize=5
    Limits:
      cpu:     100m
      memory:  300Mi
    Requests:
      cpu:     5m
      memory:  50Mi
    Environment:
      MY_POD_NAME:        (v1:metadata.name)
      MY_POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:
      /etc/config from metrics-server-config-volume (rw)
  Volumes:
   metrics-server-config-volume:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               metrics-server-config
    Optional:           false
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   metrics-server-v0.3.6-64655c969 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set metrics-server-v0.3.6-69fbfcd8b9 to 1
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set metrics-server-v0.3.6-64655c969 to 1
  Normal  ScalingReplicaSet  40m   deployment-controller  Scaled down replica set metrics-server-v0.3.6-69fbfcd8b9 to 0
Name:                   stackdriver-metadata-agent-cluster-level
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        app=stackdriver-metadata-agent
                        kubernetes.io/cluster-service=true
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=stackdriver-metadata-agent,cluster-level=true
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 25% max surge
Pod Template:
  Labels:           app=stackdriver-metadata-agent
                    cluster-level=true
  Annotations:      components.gke.io/component-name: stackdriver-metadata-agent
                    components.gke.io/component-version: 1.1.3
  Service Account:  metadata-agent
  Containers:
   metadata-agent:
    Image:      gcr.io/stackdriver-agents/metadata-agent-go:1.2.0
    Port:       <none>
    Host Port:  <none>
    Args:
      -logtostderr
      -v=1
    Limits:
      cpu:     48m
      memory:  112Mi
    Requests:
      cpu:     48m
      memory:  112Mi
    Environment:
      CLUSTER_NAME:       risk-engine-cluster
      CLUSTER_LOCATION:   us-central1-a
      IGNORED_RESOURCES:  replicasets.v1.apps,replicasets.v1beta1.extensions
    Mounts:
      /etc/ssl/certs from ssl-certs (rw)
   metadata-agent-nanny:
    Image:      gke.gcr.io/addon-resizer:1.8.11-gke.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /pod_nanny
      --config-dir=/etc/config
      --cpu=40m
      --extra-cpu=0.5m
      --memory=80Mi
      --extra-memory=2Mi
      --threshold=5
      --deployment=stackdriver-metadata-agent-cluster-level
      --container=metadata-agent
      --poll-period=300000
      --estimator=exponential
      --minClusterSize=16
      --use-metrics=true
    Limits:
      memory:  90Mi
    Requests:
      cpu:     50m
      memory:  90Mi
    Environment:
      MY_POD_NAME:        (v1:metadata.name)
      MY_POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:
      /etc/config from metadata-agent-config-volume (rw)
  Volumes:
   ssl-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  Directory
   metadata-agent-config-volume:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               metadata-agent-config
    Optional:           false
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   stackdriver-metadata-agent-cluster-level-5d547598f (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  42m   deployment-controller  Scaled up replica set stackdriver-metadata-agent-cluster-level-77cc557f5c to 1
  Normal  ScalingReplicaSet  37m   deployment-controller  Scaled up replica set stackdriver-metadata-agent-cluster-level-5d547598f to 1
  Normal  ScalingReplicaSet  37m   deployment-controller  Scaled down replica set stackdriver-metadata-agent-cluster-level-77cc557f5c to 0

Name:                   risk-engine
Namespace:              re1
CreationTimestamp:      Sun, 01 Nov 2020 00:03:56 +0000
Labels:                 app=risk-engine
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=risk-engine
Replicas:               3 desired | 3 updated | 3 total | 0 available | 3 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=risk-engine
  Containers:
   risk-engine-1:
    Image:        fordesmi/risk-engine:latest
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   risk-engine-5b6cb4fb9d (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set risk-engine-5b6cb4fb9d to 3

Best answer

I solved the problem. I rebuilt the container, added the tty and stdin parameters to the YAML deployment file, and specified that it should only be restarted on failure.
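For completeness, here is a minimal sketch of what those pod spec changes can look like, assuming the fordesmi/risk-engine image and re1 namespace from the deployment output above. Note that restartPolicy: OnFailure is only accepted on a bare Pod or a Job, not in a Deployment's pod template (Deployments require Always), so the sketch is written as a standalone Pod; the metadata name is hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: risk-engine-batch    # hypothetical name, for illustration only
  namespace: re1
spec:
  restartPolicy: OnFailure   # restart the pod only if the batch work fails
  containers:
  - name: risk-engine-1
    image: fordesmi/risk-engine:latest
    stdin: true              # keep stdin open so the container does not exit right away
    tty: true                # allocate a pseudo-TTY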

A similar question about this "Kubernetes on GCP - no minimum availability / MinimumReplicasUnavailable error" can be found on Stack Overflow: https://stackoverflow.com/questions/64628290/
