Kubernetes - Can you build a cluster with the master node on x86 and worker nodes on ARM?

Tags: kubernetes

I have an Intel NUC (i5) and a Raspberry Pi Model B. I tried to create a Kubernetes cluster with the Intel NUC as the master node and the Raspberry Pi as a worker node. With this setup, the worker node keeps crashing; the output is below. It only happens with this particular combination. If I build a cluster from two Raspberry Pis (one master, one worker), it works fine.

What am I doing wrong?

sudo kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-ubuntu                             1/1       Running            0          13h
kube-system   kube-apiserver-ubuntu                   1/1       Running            0          13h
kube-system   kube-controller-manager-ubuntu          1/1       Running            0          13h
kube-system   kube-dns-6f4fd4bdf-fqmmt                3/3       Running            0          13h
kube-system   kube-proxy-46ddk                        0/1       CrashLoopBackOff   5          3m
kube-system   kube-proxy-j48fc                        1/1       Running            0          13h
kube-system   kube-scheduler-ubuntu                   1/1       Running            0          13h
kube-system   kubernetes-dashboard-5bd6f767c7-nh6hz   1/1       Running            0          13h
kube-system   weave-net-2bnzq                         2/2       Running            0          13h
kube-system   weave-net-7hr54                         1/2       CrashLoopBackOff   3          3m

I checked the kube-proxy logs and found the following entry:

standard_init_linux.go:178: exec user process caused "exec format error"

This seems to be caused by the image that was pulled being built for the ARM architecture rather than for x86. Here is the pod definition:

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-proxy-5xc9c",
    "generateName": "kube-proxy-",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/pods/kube-proxy-5xc9c",
    "uid": "a227b43b-27ef-11e8-8cf2-b827eb03776e",
    "resourceVersion": "22798",
    "creationTimestamp": "2018-03-15T01:24:40Z",
    "labels": {
      "controller-revision-hash": "3203044440",
      "k8s-app": "kube-proxy",
      "pod-template-generation": "1"
    },
    "ownerReferences": [
      {
        "apiVersion": "extensions/v1beta1",
        "kind": "DaemonSet",
        "name": "kube-proxy",
        "uid": "361aca09-27c9-11e8-a102-b827eb03776e",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "kube-proxy",
        "configMap": {
          "name": "kube-proxy",
          "defaultMode": 420
        }
      },
      {
        "name": "xtables-lock",
        "hostPath": {
          "path": "/run/xtables.lock",
          "type": "FileOrCreate"
        }
      },
      {
        "name": "lib-modules",
        "hostPath": {
          "path": "/lib/modules",
          "type": ""
        }
      },
      {
        "name": "kube-proxy-token-kzt5h",
        "secret": {
          "secretName": "kube-proxy-token-kzt5h",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "kube-proxy",
        "image": "gcr.io/google_containers/kube-proxy-arm:v1.9.4",
        "command": [
          "/usr/local/bin/kube-proxy",
          "--config=/var/lib/kube-proxy/config.conf"
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "kube-proxy",
            "mountPath": "/var/lib/kube-proxy"
          },
          {
            "name": "xtables-lock",
            "mountPath": "/run/xtables.lock"
          },
          {
            "name": "lib-modules",
            "readOnly": true,
            "mountPath": "/lib/modules"
          },
          {
            "name": "kube-proxy-token-kzt5h",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "IfNotPresent",
        "securityContext": {
          "privileged": true
        }
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "kube-proxy",
    "serviceAccount": "kube-proxy",
    "nodeName": "udubuntu",
    "hostNetwork": true,
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node-role.kubernetes.io/master",
        "effect": "NoSchedule"
      },
      {
        "key": "node.cloudprovider.kubernetes.io/uninitialized",
        "value": "true",
        "effect": "NoSchedule"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute"
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute"
      },
      {
        "key": "node.kubernetes.io/disk-pressure",
        "operator": "Exists",
        "effect": "NoSchedule"
      },
      {
        "key": "node.kubernetes.io/memory-pressure",
        "operator": "Exists",
        "effect": "NoSchedule"
      }
    ]
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-03-15T01:24:45Z"
      },
      {
        "type": "Ready",
        "status": "False",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-03-15T01:35:41Z",
        "reason": "ContainersNotReady",
        "message": "containers with unready status: [kube-proxy]"
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-03-15T01:24:46Z"
      }
    ],
    "hostIP": "192.168.178.24",
    "podIP": "192.168.178.24",
    "startTime": "2018-03-15T01:24:45Z",
    "containerStatuses": [
      {
        "name": "kube-proxy",
        "state": {
          "waiting": {
            "reason": "CrashLoopBackOff",
            "message": "Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-5xc9c_kube-system(a227b43b-27ef-11e8-8cf2-b827eb03776e)"
          }
        },
        "lastState": {
          "terminated": {
            "exitCode": 1,
            "reason": "Error",
            "startedAt": "2018-03-15T01:40:51Z",
            "finishedAt": "2018-03-15T01:40:51Z",
            "containerID": "docker://866dd8e7175bd71557b9dcfc84716a0f3abd634d5d78c94441f971b8bf24cd0d"
          }
        },
        "ready": false,
        "restartCount": 8,
        "image": "gcr.io/google_containers/kube-proxy-arm:v1.9.4",
        "imageID": "docker-pullable://gcr.io/google_containers/kube-proxy-arm@sha256:c6fa0de67fb6dbbb0009b2e6562860d1f6da96574d23617726e862f35f9344e7",
        "containerID": "docker://866dd8e7175bd71557b9dcfc84716a0f3abd634d5d78c94441f971b8bf24cd0d"
      }
    ],
    "qosClass": "BestEffort"
  }
}
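
For reference, a quick way to confirm this kind of architecture mismatch is to compare the architecture each node reports with the image the failing pod is running; a sketch, using the pod name from the definition above:

# Architecture reported by each node (e.g. amd64 vs. arm):
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture

# Logs of the crashing kube-proxy pod (name taken from the pod definition above):
kubectl -n kube-system logs kube-proxy-5xc9c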

Best answer

Yes, it is possible; I just did exactly this for one of my clients.

Basically, the problem is that the kube-proxy DaemonSet deployed automatically on the master is compiled for x64, since you want the master to be x64 and the worker nodes to be ARM.

When you add ARM nodes to the cluster, that DaemonSet tries to deploy the x64 image on them, which fails.
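
To see which image the DaemonSet template currently ships, you can query it directly (a quick check, assuming a kubeadm-provisioned cluster where the DaemonSet is named kube-proxy in the kube-system namespace):

kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'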

After installation, you need to edit the default DaemonSet so that it selects only x64 nodes, and deploy a second DaemonSet for the ARM nodes. This gist walks you through it: Multiplatform (amd64 and arm) Kubernetes cluster setup
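
A minimal sketch of that approach, assuming a kubeadm-provisioned cluster whose nodes carry the Kubernetes 1.9-era beta.kubernetes.io/arch label (the DaemonSet name kube-proxy and the label key are assumptions; verify them on your cluster):

# 1. Pin the default kube-proxy DaemonSet to x64 nodes only:
kubectl -n kube-system patch daemonset kube-proxy --type merge -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"beta.kubernetes.io/arch":"amd64"}}}}}'

# 2. Create a second DaemonSet (e.g. kube-proxy-arm) whose pod template uses the ARM
#    image (gcr.io/google_containers/kube-proxy-arm:v1.9.4, as in the pod above) and a
#    nodeSelector of beta.kubernetes.io/arch: arm; the linked gist has the full manifest.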

Hope this helps, Ofir.

Regarding "Kubernetes - Can you build a cluster with the master node on x86 and worker nodes on ARM?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49290088/
