networking - How do I network one Pod with another Pod in Kubernetes? (simple)

Tags: networking deployment kubernetes containers web-deployment

I've been banging my head against this for a while now. There is a ton of information on the web about Kubernetes, but all of it assumes so much knowledge that n00bs like me don't really have much to go on.

So, can anyone share a simple example of the following (as a yaml file)? All I want is

  • Two pods
  • Let's say one pod has a backend (I don't know - node.js), and one has a frontend (say React).
  • A way to network between them.

  • And then an example of making an API call from the back to the front.

I started looking into this sort of thing, and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't need or want advanced network policies, nor do I have the time to wade through the several different service layers mapped on top of kubernetes. I just want to figure out a trivial example of a network request.

Hopefully, if this example exists on stackoverflow, it will serve other people as well.

Any help would be appreciated. Thanks.

EDIT: It looks like the easiest example might be to use the Ingress controller.


    EDIT EDIT;

I'm working on trying to deploy a minimal example - I'll walk through some of my steps here and point out my issues.

So here is my yaml file:
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: frontend
      labels:
        app: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: nginx
            image: patientplatypus/frontend_example
            ports:
            - containerPort: 3000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      type: LoadBalancer
      selector:
        app: frontend
      ports:
        - protocol: TCP
          port: 80
          targetPort: 3000
    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: backend
      labels:
        app: backend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: backend
      template:
        metadata:
          labels:
            app: backend
        spec:
          containers:
          - name: nginx
            image: patientplatypus/backend_example
            ports:
            - containerPort: 5000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: backend
    spec:
      type: LoadBalancer
      selector:
        app: backend
      ports:
        - protocol: TCP
          port: 80
          targetPort: 5000
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: frontend
    spec:      
      rules:
      - host: www.kubeplaytime.example
        http:
          paths:
          - path: /
            backend:
              serviceName: frontend
              servicePort: 80
          - path: /api
            backend:
              serviceName: backend
              servicePort: 80
    

What I believe this is doing:
  • Deploying a frontend and backend app - I pushed patientplatypus/frontend_example and patientplatypus/backend_example to dockerhub and then the images get pulled down. One open question I have is: what if I don't want to pull the images from Docker Hub but would rather just load them from localhost - is that possible? In that case I would push my code to the production server, build the docker images on the server, and then upload to kubernetes. The benefit is that I don't have to rely on dockerhub if I want my images to be private (see the sketch after this list).
  • It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type loadBalancer because they are balancing the traffic among the (in this case 3) replica sets that I have in the deployments.
  • Finally, I have an ingress controller which is supposed to allow my services to route to each other through www.kubeplaytime.example and www.kubeplaytime.example/api. However, this is not working.
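
(On the open question about skipping Docker Hub above: one possible approach, sketched below, assumes something like Minikube, where you can build images straight into the cluster's own Docker daemon; the build path and the :local tag are made up for illustration.)

    # Sketch (assumes Minikube): build the image inside the cluster's Docker daemon
    # so no registry pull is needed. The path and tag here are illustrative only.
    eval $(minikube docker-env)
    docker build -t patientplatypus/frontend_example:local ./frontend

    # Then reference that tag in the Deployment and tell Kubernetes not to pull:
    #   image: patientplatypus/frontend_example:local
    #   imagePullPolicy: Never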

  • What happens when I run this?
    patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
    deployment.apps "frontend" created
    service "frontend" created
    deployment.apps "backend" created
    service "backend" created
    ingress.extensions "frontend" created
    
  • So first off, it appears to correctly create all of the parts that I need.
    patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
    NAME         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
    backend      LoadBalancer   10.0.18.174   <pending>        80:31649/TCP   1m
    frontend     LoadBalancer   10.0.100.65   <pending>        80:32635/TCP   1m
    kubernetes   ClusterIP      10.0.0.1      <none>           443/TCP        10d
    frontend     LoadBalancer   10.0.100.65   138.91.126.178   80:32635/TCP   2m
    backend      LoadBalancer   10.0.18.174   138.91.121.182   80:31649/TCP   2m
  • Second, if I watch the services, I eventually get IP addresses that I can use to navigate to these sites in my browser. Each of the IP addresses above routes me to the frontend and the backend respectively.

  • HOWEVER

I run into an issue when trying to use the Ingress controller - it seems to have been deployed, but I don't know how to get to it.
    patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
    NAME       HOSTS                      ADDRESS   PORTS     AGE
    frontend   www.kubeplaytime.example             80        16m
    
  • So I don't have an address I can use, and www.kubeplaytime.example does not appear to work.

  • What it appears I have to do in order to route to the ingress extension I just created is to use a service and deployment on it in order to get an IP address, but this starts to look insanely complicated very quickly.

For example, take a look at this medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e

It appears that the code necessary just to add the service routing for the Ingress (i.e. what he calls the Ingress Controller) is the following:
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: ingress-nginx
    spec:
      type: LoadBalancer
      selector:
        app: ingress-nginx
      ports:
      - name: http
        port: 80
        targetPort: http
      - name: https
        port: 443
        targetPort: https
    ---
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: ingress-nginx
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: ingress-nginx
        spec:
          terminationGracePeriodSeconds: 60
          containers:
          - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
            name: ingress-nginx
            imagePullPolicy: Always
            ports:
              - name: http
                containerPort: 80
                protocol: TCP
              - name: https
                containerPort: 443
                protocol: TCP
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 30
              timeoutSeconds: 5
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
            args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: nginx-default-backend
    spec:
      ports:
      - port: 80
        targetPort: http
      selector:
        app: nginx-default-backend
    ---
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: nginx-default-backend
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx-default-backend
        spec:
          terminationGracePeriodSeconds: 60
          containers:
          - name: default-http-backend
            image: gcr.io/google_containers/defaultbackend:1.0
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 30
              timeoutSeconds: 5
            resources:
              limits:
                cpu: 10m
                memory: 20Mi
              requests:
                cpu: 10m
                memory: 20Mi
            ports:
            - name: http
              containerPort: 8080
              protocol: TCP
    

In order to get a service entry point for my ingress routing, it seems this has to be applied alongside the other yaml code above, and it does indeed provide an ip:
    patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
    NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
    backend                 LoadBalancer   10.0.31.209   <pending>     80:32428/TCP                 4m
    frontend                LoadBalancer   10.0.222.47   <pending>     80:32482/TCP                 4m
    ingress-nginx           LoadBalancer   10.0.28.157   <pending>     80:30573/TCP,443:30802/TCP   4m
    kubernetes              ClusterIP      10.0.0.1      <none>        443/TCP                      10d
    nginx-default-backend   ClusterIP      10.0.71.121   <none>        80/TCP                       4m
    frontend   LoadBalancer   10.0.222.47   40.121.7.66   80:32482/TCP   5m
    ingress-nginx   LoadBalancer   10.0.28.157   40.121.6.179   80:30573/TCP,443:30802/TCP   6m
    backend   LoadBalancer   10.0.31.209   40.117.248.73   80:32428/TCP   7m
    

So ingress-nginx appears to be the site I want to reach. Navigating to 40.121.6.179 returns the default 404 message (default backend - 404) - it does not go to frontend the way the / route should. /api returns the same. Navigating to my host www.kubeplaytime.example from the browser also returns a 404 - with no error handling.
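
One way to check whether the host-based Ingress rules themselves are working - without setting up DNS for www.kubeplaytime.example - is to send requests to the ingress-nginx external IP with an explicit Host header, and to look at the Ingress object's events. A rough sketch, using the external IP from the output above:

    # Ask the ingress controller directly, pretending to be www.kubeplaytime.example.
    curl -H 'Host: www.kubeplaytime.example' http://40.121.6.179/
    curl -H 'Host: www.kubeplaytime.example' http://40.121.6.179/api

    # Show which backends and events the controller associated with the Ingress resource.
    kubectl describe ingress frontend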

QUESTIONS
  • Is an Ingress controller strictly required? If so, is there a less complicated version of this?
  • I feel like I'm close, so what am I doing wrong?

  • FULL YAML

Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938

Thank you for the help!

EDIT EDIT EDIT

I attempted to use HELM. On the surface it appears to be a simple interface, so I tried spinning it up:
    patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
    NAME:   erstwhile-beetle
    LAST DEPLOYED: Sun May  6 12:13:30 2018
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ConfigMap
    NAME                                       DATA  AGE
    erstwhile-beetle-nginx-ingress-controller  1     1s
    
    ==> v1/Service
    NAME                                            TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)                     AGE
    erstwhile-beetle-nginx-ingress-controller       LoadBalancer  10.0.216.38  <pending>    80:31494/TCP,443:32118/TCP  1s
    erstwhile-beetle-nginx-ingress-default-backend  ClusterIP     10.0.55.224  <none>       80/TCP                      1s
    
    ==> v1beta1/Deployment
    NAME                                            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    erstwhile-beetle-nginx-ingress-controller       1        1        1           0          1s
    erstwhile-beetle-nginx-ingress-default-backend  1        1        1           0          1s
    
    ==> v1beta1/PodDisruptionBudget
    NAME                                            MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
    erstwhile-beetle-nginx-ingress-controller       1              N/A              0                    1s
    erstwhile-beetle-nginx-ingress-default-backend  1              N/A              0                    1s
    
    ==> v1/Pod(related)
    NAME                                                             READY  STATUS             RESTARTS  AGE
    erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz       0/1    ContainerCreating  0         1s
    erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w  0/1    ContainerCreating  0         1s
    
    
    NOTES:
    The nginx-ingress controller has been installed.
    It may take a few minutes for the LoadBalancer IP to be available.
    You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'
    
    An example Ingress that makes use of the controller:
    
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        annotations:
          kubernetes.io/ingress.class: nginx
        name: example
        namespace: foo
      spec:
        rules:
          - host: www.example.com
            http:
              paths:
                - backend:
                    serviceName: exampleService
                    servicePort: 80
                  path: /
        # This section is only required if TLS is to be enabled for the Ingress
        tls:
            - hosts:
                - www.example.com
              secretName: example-tls
    
    If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
    
      apiVersion: v1
      kind: Secret
      metadata:
        name: example-tls
        namespace: foo
      data:
        tls.crt: <base64 encoded cert>
        tls.key: <base64 encoded key>
      type: kubernetes.io/tls
    

This seems really great - it spins everything up and gives an example of how to add an Ingress. Since I spun up helm against a blank kubectl, I used the following yaml file to add in what I thought would be required.

The file:
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: frontend
      labels:
        app: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: nginx
            image: patientplatypus/frontend_example
            ports:
            - containerPort: 3000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      type: LoadBalancer
      selector:
        app: frontend
      ports:
        - protocol: TCP
          port: 80
          targetPort: 3000
    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: backend
      labels:
        app: backend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: backend
      template:
        metadata:
          labels:
            app: backend
        spec:
          containers:
          - name: nginx
            image: patientplatypus/backend_example
            ports:
            - containerPort: 5000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: backend
    spec:
      type: LoadBalancer
      selector:
        app: backend
      ports:
        - protocol: TCP
          port: 80
          targetPort: 5000
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
        - host: www.example.com
          http:
            paths:
              - path: /api
                backend:
                  serviceName: backend
                  servicePort: 80
              - path: /
                frontend:
                  serviceName: frontend
                  servicePort: 80
    

However, deploying this to the cluster hits this error:
    patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
    deployment.apps "frontend" created
    service "frontend" created
    deployment.apps "backend" created
    service "backend" created
    error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
    

So the question then becomes, well, how do I debug this?
If you spit out the code that helm produces, it's basically unreadable by a person - there's no way to go in there and figure out what's going on.

Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1000 lines!

If anyone has a better way of debugging a helm deploy, please add it to the list of open questions.
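
For what it's worth, the validation error above already points at the problem: paths[1] uses a frontend: key, but an HTTPIngressPath only accepts backend: (every path routes to a "backend", even when that backend happens to be the frontend Service). As for inspecting what Helm actually deployed and validating manifests before applying them, a rough sketch using Helm 2-era commands:

    # Print the rendered manifests Helm deployed for a given release.
    helm get manifest erstwhile-beetle

    # Show which fields an Ingress path actually accepts, straight from the API schema.
    kubectl explain ingress.spec.rules.http.paths

    # Validate a manifest against the API server without creating anything.
    kubectl apply --dry-run -f kube-deploy.yaml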

EDIT EDIT EDIT EDIT

In an attempt to simplify, I tried to just call one pod from the other using only the namespace.

So here is my React code where I make the HTTP request:
    axios.get('http://backend/test')
    .then(response=>{
      console.log('return from backend and response: ', response);
    })
    .catch(error=>{
      console.log('return from backend and error: ', error);
    })
    

I also tried using http://backend.exampledeploy.svc.cluster.local/test without luck.

Here is my node code handling the GET:
    router.get('/test', function(req, res, next) {
      res.json({"test":"test"})
    });
    

Here is the yaml file that I'm uploading to the cluster with kubectl:
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: frontend
      namespace: exampledeploy
      labels:
        app: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: nginx
            image: patientplatypus/frontend_example
            ports:
            - containerPort: 3000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
      namespace: exampledeploy
    spec:
      type: LoadBalancer
      selector:
        app: frontend
      ports:
        - protocol: TCP
          port: 80
          targetPort: 3000
    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: backend
      namespace: exampledeploy
      labels:
        app: backend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: backend
      template:
        metadata:
          labels:
            app: backend
        spec:
          containers:
          - name: nginx
            image: patientplatypus/backend_example
            ports:
            - containerPort: 5000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: backend
      namespace: exampledeploy
    spec:
      type: LoadBalancer
      selector:
        app: backend
      ports:
        - protocol: TCP
          port: 80
          targetPort: 5000
    

Uploading to the cluster appears to work, as I can see in my terminal:
    patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy 
    NAME                            READY     STATUS    RESTARTS   AGE
    pod/backend-584c5c59bc-5wkb4    1/1       Running   0          15m
    pod/backend-584c5c59bc-jsr4m    1/1       Running   0          15m
    pod/backend-584c5c59bc-txgw5    1/1       Running   0          15m
    pod/frontend-647c99cdcf-2mmvn   1/1       Running   0          15m
    pod/frontend-647c99cdcf-79sq5   1/1       Running   0          15m
    pod/frontend-647c99cdcf-r5bvg   1/1       Running   0          15m
    
    NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
    service/backend    LoadBalancer   10.0.112.160   168.62.175.155   80:31498/TCP   15m
    service/frontend   LoadBalancer   10.0.246.212   168.62.37.100    80:31139/TCP   15m
    
    NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.extensions/backend    3         3         3            3           15m
    deployment.extensions/frontend   3         3         3            3           15m
    
    NAME                                        DESIRED   CURRENT   READY     AGE
    replicaset.extensions/backend-584c5c59bc    3         3         3         15m
    replicaset.extensions/frontend-647c99cdcf   3         3         3         15m
    
    NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/backend    3         3         3            3           15m
    deployment.apps/frontend   3         3         3            3           15m
    
    NAME                                  DESIRED   CURRENT   READY     AGE
    replicaset.apps/backend-584c5c59bc    3         3         3         15m
    replicaset.apps/frontend-647c99cdcf   3         3         3         15m
    

However, when I attempt to make the request, I get the following error:
    return from backend and error:  
    Error: Network Error
    Stack trace:
    createError@http://168.62.37.100/static/js/bundle.js:1555:15
    handleError@http://168.62.37.100/static/js/bundle.js:1091:14
    App.js:14
    

Since the axios call is being made from the browser, I'm wondering whether it is simply not possible to use this method to call the backend, even though the backend and frontend are in different pods. I'm a little lost, since I thought this was the simplest possible way to network pods together.

EDIT X5

I have determined that I can curl the backend from the command line by exec'ing into the pod like this:
    patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
    * Hostname was NOT found in DNS cache
    *   Trying 10.0.249.147...
    * Connected to backend (10.0.249.147) port 80 (#0)
    > GET /test HTTP/1.1
    > User-Agent: curl/7.38.0
    > Host: backend
    > Accept: */*
    > 
    < HTTP/1.1 200 OK
    < X-Powered-By: Express
    < Content-Type: application/json; charset=utf-8
    < Content-Length: 15
    < ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
    < Date: Sun, 06 May 2018 20:25:49 GMT
    < Connection: keep-alive
    < 
    * Connection #0 to host backend left intact
    {"test":"test"}
    

What this means, without a doubt, is that because the front-end code is executed in the browser, it needs Ingress in order to gain entry into the pod, since http requests from the front end are not something that simple pod networking can resolve. I wasn't certain about this, but it means that Ingress is necessary.

Best Answer

First of all, let's clear up a few apparent misconceptions. You mention that your front end is a React application, which will presumably run in the user's browser. For this to work, your actual problem is not your back-end and front-end pods communicating with each other, but the browser needing to be able to connect to both of these pods (to the front-end pod in order to load the React application, and to the back-end pod so that the React application can make API calls).

To visualize:

                                                     +---------+
                                                 +---| Browser |---+                                                 
                                                 |   +---------+   |
                                                 V                 V
    +-----------+     +----------+         +-----------+     +----------+
    | Front-end |---->| Back-end |         | Front-end |     | Back-end |
    +-----------+     +----------+         +-----------+     +----------+
          (what you asked for)                     (what you need)
    

As already stated, the easiest solution for this would be to use an Ingress controller. I won't go into detail here on how to set up an Ingress controller; in some cloud environments (like GKE) you will be able to use an Ingress controller provided for you by the cloud provider. Otherwise, you can set up the NGINX Ingress controller yourself. Have a look at the NGINX Ingress controller deployment guide for more information.
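
If you go the NGINX route and already have a Helm 2 client set up (as in the question), a minimal sketch might look like the following; the release name is arbitrary and the resulting service name can differ between chart versions, so list the services if in doubt:

    # Install the community NGINX ingress controller from the (Helm 2-era) stable repo.
    helm install stable/nginx-ingress --name nginx-ingress

    # Wait for the cloud load balancer in front of the controller to get an external IP.
    kubectl get services --watch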

Defining services

Start by defining Service resources for both your front-end and back-end application (these are also what will let your Pods communicate with each other). A service definition might look like this:
    apiVersion: v1
    kind: Service
    metadata:
      name: backend
    spec:
      selector:
        app: backend
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
    

Make sure that your Pods have labels that can be selected by the Service resource (in this example, I'm using app=backend and app=frontend as labels).

If you want to establish Pod-to-Pod communication, you're done now. Within each Pod, you can now use backend.<namespace>.svc.cluster.local (or backend as shorthand) and frontend as host names to connect to the respective Pod.
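
A quick way to sanity-check that part - a sketch assuming the default namespace and a frontend Pod name copied from kubectl get pods:

    # The Service's selector should have picked up the backend Pods as endpoints.
    kubectl get endpoints backend
    kubectl get pods -l app=backend

    # From inside any Pod, the Service is reachable by its DNS name.
    kubectl exec -ti <some-frontend-pod> -- curl http://backend.default.svc.cluster.local/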

Defining Ingresses

Next, you can define the Ingress resources; since both services need connectivity from outside the cluster (the user's browser), you will need Ingress definitions for both services.
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: frontend
    spec:      
      rules:
      - host: www.your-application.example
        http:
          paths:
          - path: /
            backend:
              serviceName: frontend
              servicePort: 80
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: backend
    spec:      
      rules:
      - host: api.your-application.example
        http:
          paths:
          - path: /
            backend:
              serviceName: backend
              servicePort: 80
    

Alternatively, you can also aggregate the frontend and backend with a single Ingress resource (there is no "right" answer here, it's purely a matter of preference):
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: frontend
    spec:      
      rules:
      - host: www.your-application.example
        http:
          paths:
          - path: /
            backend:
              serviceName: frontend
              servicePort: 80
          - path: /api
            backend:
              serviceName: backend
              servicePort: 80
    

After that, make sure that both www.your-application.example and api.your-application.example point to your Ingress controller's external IP address, and you should be done.
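
For testing before real DNS records exist, one option is to map those host names to the Ingress controller's external IP yourself, for example via /etc/hosts (the IP below is a placeholder):

    # Point both Ingress host names at the controller's external IP (placeholder IP).
    echo '203.0.113.10 www.your-application.example api.your-application.example' | sudo tee -a /etc/hosts

    # The Ingress rules should now route by host name:
    curl http://www.your-application.example/        # handled by the frontend Service
    curl http://api.your-application.example/test    # handled by the backend Service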

This question and its accepted answer originally appeared on Stack Overflow: https://stackoverflow.com/questions/50195896/
