kubernetes - Unable to access the Kubernetes Dashboard

Tags: kubernetes

I created a Kubernetes v1.3.3 cluster on CoreOS based on the contrib repo. My cluster appears healthy, and I would like to use the Dashboard, but I cannot access the UI even with all authentication disabled. Below are the details of the kubernetes-dashboard components, along with some API server configuration and output. What am I missing here?

Dashboard components

core@ip-10-178-153-240 ~ $ kubectl get ep kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "345970"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kubernetes-dashboard
  uid: bb49360f-551c-11e6-be8c-02b43b6aa639
subsets:
- addresses:
  - ip: 172.16.100.9
    targetRef:
      kind: Pod
      name: kubernetes-dashboard-v1.1.0-nog8g
      namespace: kube-system
      resourceVersion: "345969"
      uid: d4791722-5908-11e6-9697-02b43b6aa639
  ports:
  - port: 9090
    protocol: TCP

core@ip-10-178-153-240 ~ $ kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "109199"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: bb4804bd-551c-11e6-be8c-02b43b6aa639
spec:
  clusterIP: 172.20.164.194
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
core@ip-10-178-153-240 ~ $ kubectl describe svc/kubernetes-dashboard --namespace=kube-system
Name:           kubernetes-dashboard
Namespace:      kube-system
Labels:         k8s-app=kubernetes-dashboard
            kubernetes.io/cluster-service=true
Selector:       k8s-app=kubernetes-dashboard
Type:           ClusterIP
IP:         172.20.164.194
Port:           <unset> 80/TCP
Endpoints:      172.16.100.9:9090
Session Affinity:   None
No events.

core@ip-10-178-153-240 ~ $ kubectl get po  kubernetes-dashboard-v1.1.0-nog8g --namespace=kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kubernetes-dashboard-v1.1.0","uid":"3a282a06-58c9-11e6-9ce6-02b43b6aa639","apiVersion":"v1","resourceVersion":"338823"}}
  creationTimestamp: 2016-08-02T23:28:34Z
  generateName: kubernetes-dashboard-v1.1.0-
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    version: v1.1.0
  name: kubernetes-dashboard-v1.1.0-nog8g
  namespace: kube-system
  resourceVersion: "345969"
  selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-v1.1.0-nog8g
  uid: d4791722-5908-11e6-9697-02b43b6aa639
spec:
  containers:
  - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: kubernetes-dashboard
    ports:
    - containerPort: 9090
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 100m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-lvmnw
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: ip-10-178-153-57.us-west-2.compute.internal
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-lvmnw
    secret:
      secretName: default-token-lvmnw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:35Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1bf65bbec830e32e85e1cd9e22a5db7a2b623c6d9d7da17c747d256a9838676f
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imageID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
    lastState: {}
    name: kubernetes-dashboard
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-02T23:28:34Z
  hostIP: 10.178.153.57
  phase: Running
  podIP: 172.16.100.9
  startTime: 2016-08-02T23:28:34Z

API server configuration

/opt/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 --insecure-bind-address=0.0.0.0 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=172.20.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota --bind-address=0.0.0.0 --cloud-provider=aws

The API server is reachable from a remote host (my laptop)

$ curl http://10.178.153.240:8080/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1alpha1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

The UI is not reachable remotely

$ curl -L http://10.178.153.240:8080/ui
Error: 'dial tcp 172.16.100.9:9090: i/o timeout'
Trying to reach: 'http://172.16.100.9:9090/'

The UI is reachable from the minion node

core@ip-10-178-153-57 ~$ curl -L 172.16.100.9:9090
 <!doctype html> <html ng-app="kubernetesDashboard">...

API server route table

core@ip-10-178-153-240 ~ $ ip route show
default via 10.178.153.1 dev eth0  proto dhcp  src 10.178.153.240  metric 1024
10.178.153.0/24 dev eth0  proto kernel  scope link  src 10.178.153.240
10.178.153.1 dev eth0  proto dhcp  scope link  src 10.178.153.240  metric 1024
172.16.0.0/12 dev flannel.1  proto kernel  scope link  src 172.16.6.0
172.16.6.0/24 dev docker0  proto kernel  scope link  src 172.16.6.1

Minion (where the pod runs) route table

core@ip-10-178-153-57 ~ $ ip route show
default via 10.178.153.1 dev eth0  proto dhcp  src 10.178.153.57  metric 1024
10.178.153.0/24 dev eth0  proto kernel  scope link  src 10.178.153.57
10.178.153.1 dev eth0  proto dhcp  scope link  src 10.178.153.57  metric 1024
172.16.0.0/12 dev flannel.1
172.16.100.0/24 dev docker0  proto kernel  scope link  src 172.16.100.1

Flannel logs

It looks like this route is where Flannel is misbehaving. I am getting these errors in its logs, and restarting the daemon does not seem to resolve the problem:

...Watch subnets: client: etcd cluster is unavailable or misconfigured

... L3 miss: 172.16.100.9

... calling NeighSet: 172.16.100.9
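The route tables themselves are consistent with these errors: on the master, the longest-prefix match for the pod IP 172.16.100.9 is the 172.16.0.0/12 route via flannel.1, so the packet is handed to Flannel, which then logs the "L3 miss" because it cannot resolve the destination subnet from etcd. The kernel's longest-prefix route selection can be sketched as follows (an illustration only, not part of the cluster):

```python
import ipaddress

# The master's routes from `ip route show`, mapped to their devices.
routes = {
    "172.16.0.0/12": "flannel.1",
    "172.16.6.0/24": "docker0",
    "10.178.153.0/24": "eth0",
}

def pick_route(ip, routes):
    """Return the device of the most specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(ip)
    matches = [(ipaddress.ip_network(net), dev)
               for net, dev in routes.items()
               if addr in ipaddress.ip_network(net)]
    # Longest prefix wins, as in the kernel's routing decision.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_route("172.16.100.9", routes))  # flannel.1 -> the "L3 miss" happens here
print(pick_route("172.16.6.5", routes))    # docker0   -> local bridge, Flannel not involved
```

So the master is routing correctly; the failure is inside Flannel's VXLAN neighbor resolution, which matches the "etcd cluster is unavailable or misconfigured" error above.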

Best answer

You have to expose your service outside the cluster, either with a service of type NodePort as mentioned in a previous answer, or, if you have basic authentication enabled on your API server, you can reach your service with the following URL:

http://kubernetes_master_address/api/v1/proxy/namespaces/namespace_name/services/service_name
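Filling in that template for this cluster (master 10.178.153.240:8080, namespace kube-system, service kubernetes-dashboard) can be sketched as a small helper; the host and names below come from the question, so adjust them for your own cluster:

```python
def service_proxy_url(master, namespace, service):
    """Build the apiserver proxy URL for a ClusterIP service,
    following the pattern from the answer above."""
    return "http://{}/api/v1/proxy/namespaces/{}/services/{}".format(
        master, namespace, service)

url = service_proxy_url("10.178.153.240:8080", "kube-system", "kubernetes-dashboard")
print(url)
# http://10.178.153.240:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
```

Note that this proxy path still requires the API server itself to be able to reach the pod IP, so it will keep timing out until the Flannel/etcd problem above is fixed.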

See: http://kubernetes.io/docs/user-guide/accessing-the-cluster/#manually-constructing-apiserver-proxy-urls
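For the NodePort option, a minimal service manifest might look like the following sketch. The service name and the nodePort value 30090 are illustrative (any free port in the default 30000-32767 node-port range works); the selector matches the dashboard pod's labels shown in the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport   # illustrative name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090   # illustrative; any free port in 30000-32767
```

After `kubectl create -f` on this manifest, the dashboard should be reachable at http://<any-node-ip>:30090, since kube-proxy on each node forwards the node port to the pod without going through the master's overlay routes.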

This answer to "kubernetes - Unable to access the Kubernetes Dashboard" is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/38733801/
